\section*{Abstract}
{\bf
\input{text_abstract.tex}
}
\vspace{10pt}
\noindent\rule{\textwidth}{1pt}
\tableofcontents\thispagestyle{fancy}
\newcommand{\diagram}[1]{\quad\vcenter{\hbox{\includegraphics[scale=0.4,page=#1]{./diagrams.pdf}}}\quad}
\allowdisplaybreaks[4]
\input{text_main}
\section{Explicit expressions}
Here we recapitulate the expressions for the optimal first- and second-order MPOs. Starting from a Hamiltonian in MPO form
\begin{equation*}
\begin{tabular}{ |c|c|c| }
\hline
$\ensuremath{\mathbb{I}}$ & $C$ & $D$ \\
\hline
& $A$ & $B$ \\
\hline
& & $\ensuremath{\mathbb{I}}$ \\
\hline
\end{tabular} \;,
\end{equation*}
the optimal first-order MPO is given by
\begin{equation*}
\begin{tabular}{ |c|c| }
\hline
$\ensuremath{\mathbb{I}} + \tau D + \frac{\tau^2}{2!} D^2$ & $C + \frac{\tau}{2} \{CD\}$ \\
\hline
$\tau B + \frac{\tau^2}{2!} \{BD\}$ & $A + \frac{\tau}{2} (\{AD\} + \{BC\})$ \\
\hline
\end{tabular} \;,
\end{equation*}
and the optimal second-order MPO is given by
\begin{equation*}
\begin{tabular}{ |c|c|c| }
\hline
$\ensuremath{\mathbb{I}} + \tau D + \frac{\tau^2}{2} D^2 + \frac{\tau^3}{6} D^3$ & $C + \frac{\tau}{2} \{CD\} + \frac{\tau^2}{6}\{CDD\}$ & $CC + \frac{\tau}{3} \{CCD\}$ \\
\hline
\begin{tabular}{l} $\tau B + \frac{\tau^2}{2} \{BD\}$ \\ \hspace{1cm} + $\frac{\tau^3}{6} \{BDD\}$ \end{tabular} & \begin{tabular}{l} $A + \frac{\tau}{2}(\{BC\} + \{AD\})$ \\ \hspace{1cm} + $\frac{\tau^2}{6} (\{CBD\}+\{ADD\})$ \end{tabular} & $\{AC\} + \frac{\tau}{3}(\{ACD\}+\{CCB\})$ \\
\hline
$\frac{\tau^2}{2} BB + \frac{\tau^3}{6} \{BBD\}$ & $\frac{\tau}{2} \{AB\} + \frac{\tau^2}{6} (\{ABD\} + \{BBC\})$ & $AA + \frac{\tau}{3} (\{ABC\}+\{AAD\})$ \\
\hline
\end{tabular} \;.
\end{equation*}
The expressions for the higher-order MPOs are too large to display on this page; we recommend implementing the generic algorithms from the main text instead.
\section{Introduction}
Some years following the discovery of the density matrix renormalization group (DMRG) \cite{White1992} algorithm, it was reformulated as a variational method in the language of matrix product states (MPS). This proved to be a fruitful endeavor, as it not only explained the astounding accuracy of DMRG in approximating ground state properties of strongly interacting one-dimensional quantum systems, but it also opened the door to a zoo of algorithms which greatly extend the range of applicability beyond mere ground state properties \cite{Schollwoeck2011}.
In particular, it was realized that MPS can also be used to simulate the time evolution of an interacting system. Although the entanglement in a state generically increases under unitary time evolution and the MPS bond dimension would have to grow exponentially, in practice MPS simulations can reach surprisingly long times with high accuracy. Initial algorithms were limited to short-range interacting systems by using the Trotter-Suzuki decomposition of the time-evolution operator \cite{Vidal2004, White2004, Daley2004}. This restriction has by now been lifted using more involved algorithms \cite{Stoudenmire2010, Zaletel2015, Haegeman2011}, allowing one to target even quasi two-dimensional and long-range interacting systems. Still, these methods all rely on evolving states by taking small time steps, to the effect that some non-equilibrium properties remain difficult to calculate up to the desired precision without investing a tremendous amount of CPU hours. Recently, a new approach \cite{Vanhecke2021} based on cluster expansions was introduced to find tensor network approximations of the time evolution operator that are accurate for much larger time steps, but again this approach is limited to short-range interactions.
In this work, we introduce an approach based on matrix product operators (MPO) \cite{Pirvu2010} that allows us to approximate the full time-evolution operator up to arbitrary order, even for long-range interactions. Our construction can be seen as a higher-order generalization of the $W_I$/$W_{II}$ operators of Ref.~\cite{Zaletel2015} or as an extension of the cluster-expansion approach of Ref.\cite{Vanhecke2021} to generic Hamiltonians; the form of the MPO reduces to the one of Ref.~\cite{Zaletel2015} when considering the first-order case. We demonstrate the utility of such a higher-order scheme in practice, as it is shown to drastically outperform state-of-the-art algorithms for simulating time evolution with MPS.
The resulting algorithm is both simple to implement and highly flexible, applicable to both finite and infinite systems with arbitrary unit cells and non-abelian symmetries, as long as the Hamiltonian can be represented as an MPO \cite{Parker2020}. We provide an example implementation and include the analytical MPO expressions that can be implemented and combined with pre-existing tensor network toolboxes.
\section{Matrix product states and matrix product operators}
\label{sec:mps}
\renewcommand{\diagram}[1]{\quad\vcenter{\hbox{\includegraphics[scale=0.4,page=#1]{./Diagrams/diagrams_intro.pdf}}}\quad}
In this first section we recapitulate all the essentials on MPS and MPO representations, in order to fix notation and set the stage for the next sections.
\subsection{General notation}
A matrix product state (MPS) is represented as
\begin{equation}
\ket{\Psi}_{\text{finite}} = \diagram{1} ,
\end{equation}
where the variational parameters are contained within the local complex-valued three-leg tensors $M^i$. The dimension of the virtual bonds of the MPS tensors is called the bond dimension. Similarly to states, a matrix product operator (MPO) can be constructed as the contraction of local four-leg tensors
\begin{equation}
O_{\text{finite}} = \diagram{2}.
\end{equation}
This representation of a quantum state is size-extensive, in the sense that the state is built up from local objects. The construction can therefore be extended to an infinite system, where the state is built up as an infinite repetition of an $n$-site unit cell of tensors $M^i$:
\begin{equation}
\ket{\Psi}_{\text{infinite}} = \diagram{3}.
\end{equation}
The norm of such an infinite-system state is given by
\begin{equation}
\diagram{4}
\end{equation}
and is well-defined if the unit-cell transfer matrix
\begin{equation}
\diagram{5}
\end{equation}
has a unique leading eigenvalue -- this is called an injective MPS. In that case the leading eigenvalue is necessarily real-positive, and we can naturally normalize by rescaling the MPS tensors such that the leading eigenvalue of the transfer matrix is set to one.\footnote{For more details on uniform MPS, we refer the reader to Ref. \cite{Vanderstraeten2019}.}
\par An MPO can be similarly considered directly in the thermodynamic limit, and the expectation value of this MPO with respect to an MPS is characterized by the leading eigenvalue of the triple-layer transfer matrix
\begin{equation} \label{eq:triple}
\lambda = \rho_{\max} \left( \diagram{6} \right),
\end{equation}
such that we can evaluate
\begin{equation}
\log \lambda = \lim_{N\to\infty} \frac{1}{N} \log \left( \bra{\Psi} O \ket{\Psi} \right)
\end{equation}
(where $N$ denotes the diverging system size). If this triple-layer transfer matrix is diagonalizable and has a unique leading eigenvalue, the MPO is called a zero-degree MPO.\footnote{Here we take a simple definition of an $n$'th degree MPO, which is related to the scale of the norm of the MPO in the thermodynamic limit. Since the choice of norm for an operator is not fixed naturally as it is for states, we do not go into detail here on this definition in terms of operator norms -- we refer to Ref.~\cite{Parker2020} for more details. Here, it suffices to refer to the scaling of the expectation value of the MPO with respect to an injective MPS.}
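To make the role of the triple-layer transfer matrix concrete, the following is a rough Julia sketch (not the implementation of Ref.~\cite{mpskit}; the leg conventions and the function name are our own assumptions). It builds the transfer matrix densely for a uniform MPS tensor and an MPO tensor and returns its spectral radius; practical codes would use an iterative eigensolver instead of forming the matrix explicitly.
\begin{verbatim}
using LinearAlgebra, TensorOperations

# Assumed conventions: M has legs (left-virtual, physical, right-virtual),
# W has legs (left-virtual, right-virtual, physical-out, physical-in).
function triple_layer_rho_max(M::AbstractArray{<:Number,3},
                              W::AbstractArray{<:Number,4})
    chi = size(M, 1)
    D = size(W, 1)
    # ket layer, MPO layer and bra layer contracted over the physical legs
    @tensor T[a, b, c, ap, bp, cp] := M[a, s, ap] * W[b, bp, t, s] * conj(M[c, t, cp])
    Tmat = reshape(T, chi * D * chi, chi * D * chi)
    return maximum(abs, eigvals(Tmat))   # leading eigenvalue magnitude
end
\end{verbatim}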
\subsection{Applying an MPO to an MPS}
One of the most basic steps in MPS-based algorithms is the application of an MPO to an MPS. The bond dimension of the resulting MPS is the product of the original MPS and MPO bond dimensions, which becomes intractable after a few consecutive MPO applications. Therefore, we want to approximate the result again as an MPS with a smaller bond dimension:
\begin{equation}
\diagram{7} \approx \diagram{8} \;.
\label{eq:approx}
\end{equation}
A natural way to find the tensors $M_i'$ is to naively apply the MPO of bond dimension $D$ to the MPS of bond dimension $\chi$, yielding an MPS with bond dimension $\chi'=\chi D$. In a second step we can then truncate this bond dimension down using the Schmidt decomposition, giving an algorithm scaling as $O(D^3 \chi^3)$.
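As an illustration of this naive scheme, the following is a hedged sketch (assumed leg conventions and hypothetical function names, not the implementation used for the benchmarks below): one tensor contraction enlarges the bond dimension to $\chi D$, after which a two-site SVD truncates it back down.
\begin{verbatim}
using LinearAlgebra, TensorOperations

# M has legs (left-virtual, physical, right-virtual);
# W has legs (left-virtual, right-virtual, physical-out, physical-in).
function apply_mpo_tensor(M::AbstractArray{<:Number,3},
                          W::AbstractArray{<:Number,4})
    chil, d, chir = size(M)
    Dl, Dr = size(W, 1), size(W, 2)
    # contract the physical legs and fuse the virtual legs: new bond dimension chi*D
    @tensor M2[al, bl, s, ar, br] := W[bl, br, s, t] * M[al, t, ar]
    return reshape(M2, chil * Dl, d, chir * Dr)
end

# Truncate the bond between two site tensors A and B by an SVD, keeping at most
# chimax singular values (for a true Schmidt truncation the state must first be
# brought into canonical form).
function truncate_bond(A::AbstractArray{<:Number,3},
                       B::AbstractArray{<:Number,3}, chimax::Int)
    chil, d1, chim = size(A)
    _, d2, chir = size(B)
    theta = reshape(A, chil * d1, chim) * reshape(B, chim, d2 * chir)
    F = svd(theta)
    k = min(chimax, length(F.S))
    Anew = reshape(F.U[:, 1:k], chil, d1, k)
    Bnew = reshape(Diagonal(F.S[1:k]) * F.Vt[1:k, :], k, d2, chir)
    return Anew, Bnew
end
\end{verbatim}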
There are more performant schemes available, for example by directly minimizing the $2$-norm difference between the left- and right-hand side of Eq.~\ref{eq:approx}. For finite systems this can be done by a DMRG-like sweeping scheme \cite{verstraete2004_1, verstraete2004_2} or with a global non-linear optimization scheme \cite{Hauru2021} -- the latter can be extended to infinite systems by using variational schemes over uniform MPS \cite{Vanderstraeten2019, bram2021}. Alternatively, for finite systems there is the zip-up method \cite{paeckel_2019, Stoudenmire2010} that performs singular-value decompositions without first bringing the state into canonical form. This reduces the computational cost considerably and only introduces small errors. Finally, there is a method based on consecutive truncations of the reduced density matrix \cite{itensors_manual}, also yielding a smaller computational cost. In Table~\ref{tab:mpo} we summarize these different methods with their scope and computational costs. The benchmarks in Sec.~\ref{sec:bench} were always performed using variational schemes.
\begin{table} \centering
\begin{tabular}{| c c c c c|}
\hline
algorithm & scaling & finite & infinite & iterative\\
\hline
naive \cite{Schollwoeck2011} & $O(D^3 \chi^3)$ & \checkmark & \checkmark & \\
zip-up \cite{paeckel_2019,Stoudenmire2010} & $O(D \chi^3)$ & \checkmark & & \\
density matrix algorithm \cite{itensors_manual} & $O(D^2 \chi^3)$ & \checkmark & \checkmark & \\
(i)DMRG \cite{verstraete2004_1,verstraete2004_2} & $O(D \chi^3)$ & \checkmark & \checkmark & \checkmark\\
non-linear optimization & $O(D \chi^3)$ & \checkmark & \checkmark & \checkmark\\
variational uniform MPS \cite{bram2021} & $O(D \chi^3)$ & & \checkmark & \checkmark\\
\hline
\end{tabular}
\caption{Different methods for applying an MPO to an MPS.} \label{tab:mpo}
\end{table}
\subsection{MPO representation of extensive Hamiltonians}
A generic spin-chain Hamiltonian $H$ can be represented as an MPO, with the local MPO tensor having the following substructure \cite{Michel2010, Schollwoeck2011, Hubig2017}:
\begin{equation} \label{eq:jordan_mpo}
H \sim \begin{pmatrix} \ensuremath{\mathbb{I}} & C & D \\ & A & B \\ & & \ensuremath{\mathbb{I}} \end{pmatrix}.
\end{equation}
The blocks $A$, $B$, $C$ and $D$ are all four-leg tensors and $\ensuremath{\mathbb{I}}$ is the identity operator acting on the local Hilbert space:
\begin{equation} \begin{split}
\ensuremath{\mathbb{I}} = \diagram{17}, \qquad & A = \diagram{21}, \qquad B = \diagram{19}, \\ & C = \diagram{20}, \qquad D = \diagram{18}
\end{split} \end{equation}
The dimensions of the first and last virtual levels are always one (denoted by the dashed line above), but the dimension of the middle level can be larger; this dimension is henceforth called the MPO's bond dimension $\chi$. We always require that the spectral radius\footnote{Here `spectral radius' is again interpreted in terms of the triple-layer transfer matrix with respect to an injective MPS, now restricted to the diagonal $A$ block; for more details on the conditions on the MPO, we again refer to Ref.~\cite{Parker2020}.} of the middle block $A$ is smaller than one.
\par This operator is a first-degree MPO \cite{Parker2020}, in the sense that the expectation value with respect to an injective MPS scales linearly with system size -- as it should for a local Hamiltonian. This is reflected in the structure of the triple-layer transfer matrix [Eq.~\ref{eq:triple}], which has a unique dominant eigenvalue with value $1$ (provided the MPS is properly normalized), to which is associated a two-dimensional generalised eigenspace, i.e.\ a two-dimensional Jordan block. Upon taking the $N$th power, this gives rise to terms scaling as $1^N$, i.e.\ constant, as well as terms scaling as $N \cdot 1^N$, i.e.\ linearly in $N$. The prefactor of this last term corresponds exactly to the bulk energy density.
\par A particularly insightful way of representing a first-degree MPO is by a finite-state machine \cite{Crosswhite2008}:
\begin{equation}
\diagram{9},
\end{equation}
which makes the meaning of the different blocks immediately clear: When going from left to right through the MPO, the virtual level `1' denotes that the Hamiltonian has not yet acted, the virtual level `2' denotes that the Hamiltonian is acting non-trivially and the virtual level `3' denotes that the Hamiltonian has acted completely. Transitions between the levels are performed in the MPO by the non-trivial blocks. Contracting the MPO from left to right, one can never go down a level.
\par Written out in full, the Hamiltonian is given by
\begin{equation}
H = \sum_{i} \left( D_i + C_i B_{i+1} + C_iA_{i+1}B_{i+2} + C_iA_{i+1}A_{i+2}B_{i+3} + \dots \right).
\end{equation}
This shows that any Hamiltonian with exponentially decaying interactions can be efficiently represented by an MPO of this form. Moreover, other decay profiles can often be very well approximated by this type of MPO \cite{Pirvu2010, Parker2020}.
\subsection{Examples}
It is instructive to give a few examples of Hamiltonians written in this form, partly because we will use these examples as benchmark cases in Sec.~\ref{sec:bench}. The nearest-neighbour transverse-field Ising model is defined by the Hamiltonian
\begin{equation}
H_{\text{ising},\text{nn}} = - \sum_i Z_iZ_{i+1} + h \sum_i X_i \sim \begin{pmatrix} \ensuremath{\mathbb{I}} & -Z & h X \\ & 0 & Z \\ & & \ensuremath{\mathbb{I}} \end{pmatrix}.
\end{equation}
In this case, the diagonal $A$ block is zero and the dimension of the middle level is $\chi=1$. This Hamiltonian can be extended with long-range exponentially-decaying interactions by including an entry on the diagonal
\begin{equation}
H_{\text{ising},\text{lr}} = - \sum_{i<j} \lambda^{j-i-1} Z_iZ_{j} + h \sum_i X_i \sim \begin{pmatrix} \ensuremath{\mathbb{I}} & -Z & h X \\ & \lambda \ensuremath{\mathbb{I}} & Z \\ & & \ensuremath{\mathbb{I}} \end{pmatrix}, \qquad \lambda<1.
\end{equation}
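As a concrete illustration, the long-range Ising MPO tensor above can be stored as a dense array; the sketch below uses an assumed leg ordering and a hypothetical function name, and setting $\lambda=0$ reproduces the nearest-neighbour model.
\begin{verbatim}
X  = [0.0 1.0; 1.0 0.0]
Z  = [1.0 0.0; 0.0 -1.0]
Id = [1.0 0.0; 0.0 1.0]

# Leg ordering: (left-virtual, right-virtual, physical-out, physical-in).
function ising_mpo(h::Real, lam::Real = 0.0)
    d = 2
    W = zeros(3, 3, d, d)
    W[1, 1, :, :] = Id           # identity block
    W[1, 2, :, :] = -Z           # C block
    W[1, 3, :, :] = h * X        # D block
    W[2, 2, :, :] = lam * Id     # A block (zero for the nearest-neighbour model)
    W[2, 3, :, :] = Z            # B block
    W[3, 3, :, :] = Id           # identity block
    return W
end
\end{verbatim}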
Another paradigmatic example is the Heisenberg spin-1/2 chain, represented as
\begin{equation}
H_{\text{heisenberg},\text{nn}} = \sum_i S^\alpha_i S^\alpha_{i+1} \sim \begin{pmatrix} \ensuremath{\mathbb{I}} & S^\alpha & 0 \\ & 0 & S^\alpha \\ & & \ensuremath{\mathbb{I}} \end{pmatrix}.
\end{equation}
Here, the spin operators are $S^\alpha=(S^x,S^y,S^z)$, such that the blocks have dimension $\chi=3$.\footnote{Without encoding $\ensuremath{\mathrm{SU}}(2)$ symmetry explicitly in the MPO, it can be simply rewritten in the form
\begin{equation*}
\begin{pmatrix} \ensuremath{\mathbb{I}} & S^x & S^y & S^z & 0 \\ & 0 & 0 & 0 & S^x \\ & 0 & 0 & 0 & S^y \\ & 0 & 0 & 0 & S^z \\ & & & & \ensuremath{\mathbb{I}} \end{pmatrix}.
\end{equation*}
When encoding $\ensuremath{\mathrm{SU}}(2)$ symmetry, however, we can not split up the MPO in the spin components (which break $\ensuremath{\mathrm{SU}}(2)$ invariance) and we have to keep the above form with the $S^\alpha$ tensor defined as
\begin{equation}
\diagram{10} = \diagram{11}, \qquad S^\alpha = \diagram{12}.
\end{equation}
Here, the leg denoted by $\alpha$ transforms under the spin-1 representation of $\ensuremath{\mathrm{SU}}(2)$.}
A next-nearest-neighbour $J_1$-$J_2$ spin-1/2 chain is given by
\begin{equation}
H_{\text{heisenberg},\text{nnn}} = J_1 \sum_i S^\alpha_i S^\alpha_{i+1} + J_2 \sum_i S^\alpha_i S^\alpha_{i+2} \sim \begin{pmatrix} \ensuremath{\mathbb{I}} & S^\alpha & 0 & 0 \\ & 0 & \ensuremath{\mathbb{I}} & J_1 S^\alpha \\ & 0 & 0 & J_2 S^\alpha \\ & & & \ensuremath{\mathbb{I}} \end{pmatrix},
\end{equation}
where the tensor $\ensuremath{\mathbb{I}}$ in the $A$ block again represents the direct product of two unit matrices,
\begin{equation}
\ensuremath{\mathbb{I}} = \diagram{13} \;.
\end{equation}
Finally, we give an example of a two-dimensional system, where we have wrapped the system onto a cylinder and reformulated the model as a one-dimensional system. The transverse-field Ising model on a square lattice formulated on a cylinder of circumference $L_y$ with spiral boundary conditions is given by
\begin{align} \label{eq:ising_cylinder}
H_{\text{ising},\text{cylinder}} &= - \sum_{i} Z_i Z_{i+1} - \sum_i Z_i Z_{i+L_y} + h \sum_i X_i \\
&\sim \begin{pmatrix} \ensuremath{\mathbb{I}} & -Z & 0 & 0 & \dots & 0 & h X \\ & 0 & \ensuremath{\mathbb{I}} & 0 & \dots & 0 & Z \\ & 0 & 0 & \ensuremath{\mathbb{I}} & \dots & 0 & 0 \\ & \vdots & & & \ddots & & \vdots\\ & 0 & & & \dots & \ensuremath{\mathbb{I}} & 0 \\ & 0 & & & \dots & 0 & Z \\ & & & & & & \ensuremath{\mathbb{I}} \end{pmatrix}.
\end{align}
\subsection{Powers of MPOs}
This MPO representation of Hamiltonians is convenient for expressing powers of the Hamiltonian, and evaluating e.g. the variance or higher-order cumulants of the Hamiltonian with respect to a given MPS.
\par We start by rewriting the Hamiltonian in table form:
\begin{equation} \label{eq:H_mpo}
H \sim
\;\begin{tabular}{ |c|c|c|c| }
\hline
& (1) & (2) & (3) \\
\hline
(1) & $\mathbb{I}$ & $C$ & $D$\\
\hline
(2) & & $A$ & $B$ \\
\hline
(3) & & & $\mathbb{I}$ \\
\hline
\end{tabular} \;.
\end{equation}
We can now represent $H^2$, the product of this Hamiltonian with itself, as a sparse MPO of the form
\begin{equation} \label{eq:Hsq}
\begin{tabular}{ |c|c|c|c|c|c|c|c|c|c| }
\hline
& (1,1) & (1,2) & (1,3) & (2,1) & (2,2) & (2,3) & (3,1) & (3,2) & (3,3) \\
\hline
(1,1) & $\mathbb{I}$ & $C$ & $D$ & $C$ & $CC$ & $CD$ & $D$ & $DC$ & $DD$\\
\hline
(1,2) & & $A$ & $B$ & & $CA$ & $CB$ & & $DA$ & $DB$ \\
\hline
(1,3) & & & $\mathbb{I}$ & & & $C$ & & & $D$ \\
\hline
(2,1) & & & & $A$ & $AC$ & $AD$ & $B$ & $BC$ & $BD$\\
\hline
(2,2) & & & & & $AA$ & $AB$ & & $BA$ & $BB$\\
\hline
(2,3) & & & & & & $A$ & & & $B$\\
\hline
(3,1) & & & & & & & $\mathbb{I}$ & $C$ & $D$\\
\hline
(3,2) & & & & & & & & $A$ & $B$\\
\hline
(3,3) & & & & & & & & & $\mathbb{I}$\\
\hline
\end{tabular}\;.
\end{equation}
Here we have used a particular notation for combining the blocks: we take the operator product on the physical legs and a direct product on the virtual legs. For example:
\begin{equation}
\diagram{14} = \diagram{15}.
\end{equation}
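In code, this combination rule can be sketched as follows (assumed block storage convention, hypothetical function name): the physical legs are multiplied as operators, while the virtual legs are combined by a direct product.
\begin{verbatim}
using TensorOperations

# P and Q are MPO blocks stored as (left-virtual, right-virtual, phys-out, phys-in).
function block_product(P::AbstractArray{<:Number,4}, Q::AbstractArray{<:Number,4})
    pl, pr, d = size(P, 1), size(P, 2), size(P, 3)
    ql, qr = size(Q, 1), size(Q, 2)
    # operator product on the physical legs, direct product on the virtual legs
    @tensor R[al, bl, ar, br, s, u] := P[al, ar, s, t] * Q[bl, br, t, u]
    return reshape(R, pl * ql, pr * qr, d, d)
end
\end{verbatim}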
Upon computing the triple-layer transfer matrix associated to taking the expectation value of $H^2$ with respect to an injective MPS, the diagonal blocks $\mathbb{I}$ in the above form will give rise to an eigenvalue $1$, which will have an (algebraic) multiplicity of $4$. For reasons to be explained in Section~\ref{sec:exactcompression}, this will decompose into a one-dimensional eigenspace that does not couple to the boundary conditions, and a three-dimensional generalised eigenspace, i.e.\ a three-dimensional Jordan block, giving rise to terms scaling as a second-order polynomial in $N$ upon taking the $N$th power. It therefore represents a second-degree MPO and its expectation value can be evaluated using the methods of Refs.~\cite{Michel2010, Pillay2019}. Again, we can understand this MPO as a finite-state machine
\begin{equation*}
\diagram{16},
\end{equation*}
where we have omitted the operators denoting the different transitions in the graph (they can be read off from the table). The structure of this MPO is best understood by decomposing it into two parts, i.e. the disconnected terms and the connected terms. The former are the terms that are the direct product of single actions of the Hamiltonians that do not overlap, and in the diagram they are obtained by passing through levels (1,3) or (3,1). Indeed, the meaning of these levels is that one of the Hamiltonians has already acted, whereas the second one has not. The connected terms are the ones where the two Hamiltonian operators overlap. For example, jumping from (1,1) or (1,2) immediately to (2,3) means that the two Hamiltonian operators overlap on one site, and similarly for the jump from (1,1) or (2,1) to (3,2). All the other connected terms pass through level (2,2), which denotes that both Hamiltonian operators are acting simultaneously, and therefore this level has a bond dimension $\chi^2$.
\section{From powers of the Hamiltonian to extensive MPOs}
\label{sec:dalai}
\renewcommand{\diagram}[1]{\quad\vcenter{\hbox{\includegraphics[scale=0.4,page=#1]{./Diagrams/diagrams_mpo.pdf}}}\quad}
Let us now investigate how to approximate the exponential of the Hamiltonian in terms of MPOs. We take a generic spin-chain Hamiltonian $H = \sum_i h_i$, with $h_i$ the (quasi-)local Hamiltonian operator acting on sites $i,i+1,\dots$, which can be represented as an MPO of the form in Eq.~\eqref{eq:H_mpo}. We wish to approximate
\begin{equation}
\ensuremath{\mathrm{e}}^{\tau H} = \ensuremath{\mathbb{I}} + \tau H + \frac{\tau^2}{2} H^2 + \frac{\tau^3}{6} H^3 + \dots,
\end{equation}
where we assume that $\tau$ is a small parameter. Naively, one could try to use the above representation of $H^n$ to approximate the exponential. Adding different powers of $H$ is, however, an ill-defined operation in the thermodynamic limit because the norms of these different terms scale with different powers of system size. Therefore, applying a sum of different powers of $H$ to a given state $\ket{\Psi}$ would yield a state
\begin{equation}
\ensuremath{\mathrm{e}}^{\tau H} \ket{\Psi} \approx \sum_{i=0}^N \ket{\Psi_i}, \qquad \braket{\Psi_i|\Psi_i} \propto N^i,
\end{equation}
which cannot be normalized in the thermodynamic limit.\footnote{This problem suggests that MPS methods that rely on taking powers of the Hamiltonian do not scale well for large system sizes and cannot be formulated directly in the thermodynamic limit.}
Instead, an appropriate MPO representation of $\ensuremath{\mathrm{e}}^{\tau H}$ requires a size-extensive approach. Therefore, we introduce a transformation that maps a given power of $H$ to a size-extensive operator, yielding an $n$'th order approximation for $\ensuremath{\mathrm{e}}^{\tau H}$. We start at first order. Given the finite-state machine representation of $H$, the transformation can be visualized as
\begin{multline}
\diagram{1} \\ \to \diagram{3}.
\end{multline}
That is, instead of falling onto level `3' in the MPO for the Hamiltonian, we go back to level `1' and omit level `3' from the MPO. In addition, we multiply with the appropriate factor $\tau$. In table form, this gives rise to
\begin{equation} \label{eq:W1}
\begin{tabular}{ |c|c|c| }
\hline
& (1) & (2) \\
\hline
(1) & $\mathbb{I}$ + $\tau$ $D$ & $C$ \\
\hline
(2) & $\tau$ $B$ & $A$ \\
\hline
\end{tabular}\;,
\end{equation}
which serves as a first-order approximation of the time evolution operator $\ensuremath{\mathrm{e}}^{\tau H}$, as introduced in Ref.~\cite{Zaletel2015}. In the absence of any Jordan blocks, this operator is size-extensive: upon applying this MPO to a normalizable state, it returns a normalizable state. It is also size-extensive in another sense: it contains all disconnected higher-order terms in the expansion (with correct prefactor), i.e.\ higher order terms in which different actions of the Hamiltonian do not overlap. Indeed, if we write out the MPO from Eq.~\eqref{eq:W1} in orders of $\tau$ we obtain
\begin{equation}
\ensuremath{\mathbb{I}} + \tau \sum_i h_i + \tau^2 \sum_{i<j,\text{disc}} h_i h_j + \tau^3 \sum_{i<j<k,\text{disc}} h_i h_j h_k + \dots,
\end{equation}
where the second and third sums run over all terms for which the $h_i$ do not overlap.
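As a sketch (assumed block storage, hypothetical function name, not the authors' code), the first-order MPO of Eq.~\eqref{eq:W1} can be assembled directly from the Hamiltonian blocks:
\begin{verbatim}
using LinearAlgebra

# A is (chi,chi,d,d), B is (chi,1,d,d), C is (1,chi,d,d), D is (1,1,d,d),
# all stored as (left-virtual, right-virtual, phys-out, phys-in).
function first_order_mpo(A, B, C, D, tau)
    chi, d = size(A, 1), size(A, 3)
    T = promote_type(eltype(A), typeof(tau))
    Id = reshape(Matrix{T}(I, d, d), 1, 1, d, d)
    W = zeros(T, 1 + chi, 1 + chi, d, d)
    W[1:1, 1:1, :, :]     = Id .+ tau .* D    # identity + tau*D
    W[1:1, 2:end, :, :]   = C                 # C
    W[2:end, 1:1, :, :]   = tau .* B          # tau*B
    W[2:end, 2:end, :, :] = A                 # A
    return W
end
\end{verbatim}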
This transformation can be extended to second order, where we have to include the terms where two actions of the Hamiltonian overlap. These are contained within the MPO representation of $H^2$ [Eq.~\eqref{eq:Hsq}], so this is the starting point. The level (1,3) encodes the situation where one action of the Hamiltonian has been applied, while the other Hamiltonian can be recognized in the subblock
\begin{equation}
\begin{tabular}{ |c|c|c|c| }
\hline
& (1,3) & (2,3) & (3,3) \\
\hline
(1,3) & $\ensuremath{\mathbb{I}}$ & $C$ & $D$ \\
\hline
(2,3) & & $A$ & $B$ \\
\hline
(3,3) & & & $\ensuremath{\mathbb{I}}$ \\
\hline
\end{tabular}\;.
\end{equation}
This level (1,3) therefore encodes a disconnected term in $H^2$ and should be immediately mapped back to the starting state (1,1). The (3,1) level is completely equivalent to the (1,3) level, and should also be mapped back to the starting state (1,1). In practice this can be done by taking the columns (1,3) and (3,1) in $H^2$, multiplying by $\frac{\tau}{2}$, and adding them to the first column. Afterwards, the corresponding rows and columns are removed, and we end up with the MPO:
\begin{equation}
\begin{tabular}{ |c|c|c|c|c|c|c|c| }
\hline
& (1,1) & (1,2) & (2,1) & (2,2) & (2,3) & (3,2) & (3,3) \\
\hline
(1,1) & $\mathbb{I} + \tau D$ & $C$ & $C$ & $CC$ & $CD$ & $DC$ & $DD$\\
\hline
(1,2) & $\frac{\tau}{2} B$ & $A$ & & $CA$ & $CB$ & $DA$ & $DB$ \\
\hline
(2,1) & $\frac{\tau}{2} B$ & & $A$ & $AC$ & $AD$ & $BC$ & $BD$\\
\hline
(2,2) & & & & $AA$ & $AB$ & $BA$ & $BB$\\
\hline
(2,3) & & & & & $A$ & & $B$\\
\hline
(3,2) & & & & & & $A$ & $B$\\
\hline
(3,3) & & & & & & & $\mathbb{I}$\\
\hline
\end{tabular}\;.
\end{equation}
In terms of the finite-state machine, one can think of this operation as follows
\begin{multline}
\diagram{2} \\ \to \diagram{4}
\end{multline}
The (3,3) level represents the state where both Hamiltonians were applied. Because we have already filtered out the disconnected contributions in the previous step, this state now only contains the connected second-order cluster contributions! Similar to the (1,3) case, we can take the (3,3) column, this time multiply by $\frac{\tau^2}{2}$, and add it to the first column. Then remove the (3,3) row and column:
\begin{equation} \label{eq:W2}
\begin{tabular}{ |c|c|c|c|c|c|c| }
\hline
& (1,1) & (1,2) & (2,1) & (2,2) & (2,3) & (3,2) \\
\hline
(1,1) & $\ensuremath{\mathbb{I}} + \tau D + \frac{\tau^2}{2} DD $ & $C$ & $C$ & $CC$ & $CD$ & $DC$ \\
\hline
(1,2) & $\frac{\tau}{2} B + \frac{\tau^2}{2} DB $ & $A$ & & $CA$ & $CB$ & $DA$\\
\hline
(2,1) & $\frac{\tau}{2} B + \frac{\tau^2}{2} BD$ & & $A$ & $AC$ & $AD$ & $BC$\\
\hline
(2,2) & $\frac{\tau^2}{2} BB$ & & & $AA$ & $AB$ & $BA$\\
\hline
(2,3) & $\frac{\tau^2}{2} B$ & & & & $A$ & \\
\hline
(3,2) & $\frac{\tau^2}{2} B$ & & & & & $A$ \\
\hline
\end{tabular}\;,
\end{equation}
or in terms of a finite-state machine, we take the transformation
\begin{multline}
\diagram{4} \\ \to \diagram{5}.
\end{multline}
The above MPO now gives an approximation of $\ensuremath{\mathrm{e}}^{\tau H}$ that captures all second-order terms exactly. Moreover, just as before, due to its size extensivity, it contains all higher-order terms that consist of disconnected first- and second-order parts.
\par This construction can be generalized to any order by the same idea, and the algorithm can be found in Alg. \ref{alg:dalai}.
\begin{figure}[H]
\begin{algorithm}[H]
\begin{algorithmic}[1]
\State Inputs $\hat H, N, \tau$
\State $O \gets \hat H^N$ \Comment{multiply the hamiltonian $N$ times with itself}
\For{$a \in [1,N]$}
\State $P \gets $ permutations of $(1,1,...,1,3,3,...,3)$ ($3$ occurs $a$ times)
\For{$b \in P$}
\State $O[:,1] = O[:,1] + \tau^a \frac{(N-a)!}{N!} O[:,b]$
\State Remove row and column $b$
\EndFor
\EndFor
\end{algorithmic}
\caption{Pseudocode for constructing the $N$'th order time evolution MPO}
\label{alg:dalai}
\end{algorithm}
\end{figure}
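As a hedged illustration of the bookkeeping in Alg.~\ref{alg:dalai} (the operator arithmetic itself is omitted and the function name is hypothetical): the virtual levels of $H^N$ are labelled by tuples in $\{1,2,3\}^N$, and the sketch below lists which columns are folded back onto the first column, with which prefactor.
\begin{verbatim}
using Combinatorics

function fold_instructions(N::Int, tau::Number)
    folds = Vector{Tuple{Vector{Int},typeof(float(tau))}}()
    for a in 1:N
        pattern = vcat(fill(1, N - a), fill(3, a))   # (1,...,1,3,...,3), a threes
        for b in unique(collect(permutations(pattern)))
            push!(folds, (b, tau^a * factorial(N - a) / factorial(N)))
        end
    end
    return folds
end

# Example: for N = 2 the levels (1,3) and (3,1) are folded back with factor tau/2,
# and (3,3) with factor tau^2/2, as in the construction above.
fold_instructions(2, 0.1)
\end{verbatim}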
In this section, we have explained our construction in terms of a single MPO tensor, but the construction is easily extended to systems with a non-trivial unit cell. For finite systems, one should impose the correct left and right boundary conditions:
\begin{equation}
\label{eq:boundary}
L = \begin{tabular}{ |c|c|c|c| }
\hline
$1$ & $0$ & ... & $0$ \\
\hline
\end{tabular}\;,
\quad R = \begin{tabular}{ |c| }
\hline
$1$ \\
\hline
$0$ \\
\hline
... \\
\hline
$0$ \\
\hline
\end{tabular}\; .
\end{equation}
\section{Exact compression steps}
\label{sec:exactcompression}
The operator we arrived at in the previous section is essentially an operator-valued block matrix, i.e.\ a matrix whose entries are operators. It is possible to multiply these by scalar-valued block matrices, and in particular we can left- and right-multiply with the matrix
\begin{equation} \label{eq:basistransform}
\begin{tabular}{ |c|c|c|c|c|c|c|c|c|c| }
\hline
& (1,1) & (1,2) & (2,1) & (2,2) & (2,3) & (3,2) \\
\hline
(1,1) & $1$ & & & & &\\
\hline
(1,2) & & $1/\sqrt{2}$ & $1/\sqrt{2}$ & & & \\
\hline
(2,1) & &$1/\sqrt{2}$ & $-1/\sqrt{2}$ & & & \\
\hline
(2,2) & & & &$1$ & & \\
\hline
(2,3) & & & & & $1$ & \\
\hline
(3,2) & & & & & & $1$ \\
\hline
\end{tabular}\;,
\end{equation}
to obtain the MPO
\begin{equation}
\begin{tabular}{ |c|c|c|c|c|c|c|c|c|c| }
\hline
& (1,1) & (1,2) & (2,1) & (2,2) & (2,3) & (3,2) \\
\hline
(1,1) & $\mathbb{I}$ + $\tau$ $D$ + $\frac{\tau^2}{2}$ $DD$ & $C$ & & $CC$ & $CD$ & $DC$\\
\hline
(1,2) & $\tau$ $B$ + $\frac{\tau^2}{2}(DB+BD)$ & $A$ & & $CA+AC$ & $CB+AD$ & $DA+BC$ \\
\hline
(2,1) & $\tau$ $B$ + $\frac{\tau^2}{2}(DB-BD)$ & & $A$ & $CA-AC$ & $CB-AD$ & $DA-BC$\\
\hline
(2,2) & $\frac{\tau^2}{2}BB$ & & & $AA$ & $AB$ & $BA$\\
\hline
(2,3) & $\frac{\tau^2}{2}B$ & & & & $A$ &\\
\hline
(3,2) & $\frac{\tau^2}{2}B$ & & & & & $A$\\
\hline
\end{tabular}\;.
\end{equation}
Given the boundary conditions at the left boundary [Eq.~\ref{eq:boundary}], there is no way to reach level (2,1). The corresponding entry in the left environments will always be zero and the corresponding row/column can therefore be safely removed.
\par Another way to see this compression is to look at the graphical representation of the original MPO, and noting the symmetry:
\begin{multline}
\diagram{5}\\ \to \diagram{6}\;.
\end{multline}
The transitions from (1,1) to (1,2) and from (1,1) to (2,1) are completely equivalent, so we can deform the diagram without changing the MPO. Simply add all arrows that leave the (2,1) node to the (1,2) node, and remove the (2,1) node.
\par A similar observation holds for the (2,3) and (3,2) nodes: all operators that follow these nodes before arriving at (1,1) are the same! We can redirect all arrows that point to (3,2) towards (2,3) and remove the node (3,2). Equivalently, a basis transformation similar to Eq.~\ref{eq:basistransform} will eliminate the transition from one of the (2,3) and (3,2) levels back to (1,1). Given the right boundary condition [Eq.~\ref{eq:boundary}], the right environment will be zero for that level, and the corresponding row/column can be removed.
\begin{multline}
\diagram{6} \\ \to \diagram{7}\;.
\end{multline}
We eventually end up with the following operator:
\begin{equation}
\begin{tabular}{ |c|c|c|c|c| }
\hline
& (1,1) & (1,2) & (2,2) & (2,3) \\
\hline
(1,1) & $\mathbb{I}$ + $\tau$ $D$ + $\frac{\tau^2}{2}$ $DD$ & $C$ & $CC$ & $CD+DC$\\
\hline
(1,2) & $\tau$ $B$ + $\frac{\tau^2}{2}$ $(DB+BD)$ & $A$ & $CA+AC$ & $CB+AD+DA+BC$ \\
\hline
(2,2) & $\frac{\tau^2}{2}$ $BB$ & & $AA$ & $AB + BA$\\
\hline
(2,3) & $\frac{\tau^2}{2} B$ & & & $A$\\
\hline
\end{tabular}\;,
\end{equation}
which represents a compressed version of the original second-order MPO in Eq.~\eqref{eq:W2}.
\par This compression step can be generalized to the general $n$'th order MPOs, see Alg.~\ref{alg:exact_compression}.
\begin{figure}[H]
\begin{algorithm}[H]
\begin{algorithmic}[1]
\State O $\gets$ Alg. \ref{alg:dalai}
\For{$c \in $ possible levels in $O$}
\State $s_c \gets$ Sort the $1$'s in $c$ to the front
\State $s_r \gets$ Sort the $3$'s in $c$ to the front
\State $n_1 \gets$ the number of $1$'s in $c$
\State $n_3 \gets$ the number of $3$'s in $c$
\If{$n_3 \le n_1 \And s_c \ne c$} \Comment{Equivalent column}
\State $O[s_c,:] = O[s_c,:] + O[c,:]$ \Comment{Add row $c$ to row $s_c$}
\State Remove row and column $c$
\EndIf
\If{$n_3 > n_1 \And s_r \ne c$} \Comment{Equivalent row}
\State $O[:,s_r] = O[:,s_r] + O[:,c]$ \Comment{Add column $c$ to column $s_r$}
\State Remove row and column $c$
\EndIf
\EndFor
\end{algorithmic}
\caption{Pseudocode incorporating exact compression}
\label{alg:exact_compression}
\end{algorithm}
\end{figure}
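A sketch of the equivalence test used in Alg.~\ref{alg:exact_compression} (bookkeeping only; the function name is hypothetical): a level is redundant when moving its 1's (respectively its 3's) to the front yields a different level, in which case its row (respectively column) is merged into the sorted level and the level is removed.
\begin{verbatim}
# c is a level of the N-th order MPO, represented as a vector with entries 1, 2 or 3.
function merge_target(c::Vector{Int})
    n1 = count(==(1), c)
    n3 = count(==(3), c)
    s_c = vcat(fill(1, n1), filter(!=(1), c))   # 1's moved to the front
    s_r = vcat(fill(3, n3), filter(!=(3), c))   # 3's moved to the front
    if n3 <= n1 && s_c != c
        return (:row, s_c)        # add row c to row s_c, then drop level c
    elseif n3 > n1 && s_r != c
        return (:column, s_r)     # add column c to column s_r, then drop level c
    end
    return nothing                # level c survives the exact compression
end

merge_target([2, 1])   # -> (:row, [1, 2]), as for the (2,1) level above
\end{verbatim}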
\section{Incorporating higher-order terms}
\label{sec:aproxextend}
At this point, we have found an MPO expression for $\ensuremath{\mathrm{e}^{\tau H}}$ that is correct up to a given order $n$, but which also contains all disconnected higher-order terms that can be decomposed into smaller-order factors. Yet we can still incorporate more higher-order terms in the MPO without changing the bond dimension. Starting from the first-order MPO in Eq.~\ref{eq:W1}, it was indeed noticed in Ref.~\cite{Zaletel2015} that the second-order term with Hamiltonians only overlapping on a single site can be readily included in the MPO. In this section, we show that our construction of the $N$'th order MPO can be similarly extended to contain all terms of order $N+1$ that only overlap at most $N$ times!
\par Let us return to the expression we have obtained for the second-order uncompressed MPO
\begin{equation}
\begin{tabular}{ |c|c|c|c|c|c|c| }
\hline
& (1,1) & (1,2) & (2,1) & (2,2) & (2,3) & (3,2) \\
\hline
(1,1) & $\ensuremath{\mathbb{I}} + \tau D + \frac{\tau^2}{2} DD$ & $C$ & $C$ & $CC$ & $CD$ & $DC$ \\
\hline
(1,2) & $\frac{\tau}{2} B + \frac{\tau^2}{2} DB $ & $A$ & & $CA$ & $CB$ & $DA$ \\
\hline
(2,1) & $\frac{\tau}{2} B + \frac{\tau^2}{2} BD$ & & $A$ & $AC$ & $AD$ & $BC$ \\
\hline
(2,2) & $\frac{\tau^2}{2} BB$ & & & $AA$ & $AB$ & $BA$\\
\hline
(2,3) & $\frac{\tau^2}{2} B$ & & & & $A$ & \\
\hline
(3,2) & $\frac{\tau^2}{2} B$ & & & & & $A$ \\
\hline
\end{tabular}\;.
\end{equation}
The third-order MPO contains terms connecting (1,2,2) $\rightarrow$ (2,3,3) $\rightarrow$ (1,1,1) with prefactor $\frac{\tau^3}{6}$. The (1,2,2) level in the third-order MPO is rather similar to the (2,2) level in our second-order MPO, because the finite-state machine that brings (1,1,1) to (1,2,2) is identical to the one that brings (1,1) to (2,2):
\begin{equation}
\diagram{8} = \diagram{9}\;.
\end{equation}
The (2,3,3) level is also similar to the (2,3), (3,2) and (1,2) levels, in the sense that in both cases only a single connected cluster will follow (in lowest order in $\tau$):
\begin{equation}
\diagram{10} = \diagram{11}\;.
\end{equation}
We can therefore mimic this transition by adding a term connecting (2,2) to (1,2) with a factor $\tau^2/6$. This correctly gives all third-order clusters containing [(1,2,2) $\rightarrow$ (2,3,3)], at the cost of introducing new fourth-order terms generated by (2,2) $\xrightarrow{\tau^2/6}$ (1,2) $\rightarrow$ (2,2) $\xrightarrow{\tau^2/2}$ (1,1). We can continue this procedure and systematically incorporate all transitions that occur in the third-order MPO within the second-order MPO with the correct prefactors, except those involving the level (2,2,2)!
\par The combination of this extension step and the compression step described in the previous section gives the following second order MPO\footnote{Here we have introduced a shorthand notation for denoting the sum of all permutations of a given set of operators; for example $\{BD\} = BD+DB$, $\{ABB\} = ABB + BAB + BBA$ or $\{ABC\} = ABC + ACB + BAC + BCA + CAB + CBA$.}
\begin{multline}
\begin{tabular}{ |c|c|c|c| }
\hline
& (1,1) & (1,2) & (2,2)\\
\hline
(1,1) & $\mathbb{I} + \tau D + \frac{\tau^2}{2} DD + \frac{\tau^3}{6} DDD$ & $C$ & $CC + \frac{\tau}{3} \{CCD\} $ \\
\hline
(1,2) & $\tau B + \frac{\tau^2}{2} \{DB\} + \frac{\tau^3}{6} \{BDD\}$ & $A$ & $\{AC\} + \frac{\tau}{3} (\{ACD\} + \{BCC\})$ \\
\hline
(2,2) & $\frac{\tau^2}{2} BB + \frac{\tau^3}{6} \{BBD\}$ & & $AA + \frac{\tau}{3} (\{ABC\} + \{AAD\})$ \\
\hline
(2,3) & $\frac{\tau^2}{2} B + \frac{\tau^3}{6} (\{BD\} + BD)$ & & $\frac{\tau}{3} (\{AC\} + AC)$ \\
\hline
\end{tabular} \quad \cdots
\\
\cdots \quad \begin{tabular}{ |c|c| }
\hline
& (2,3) \\
\hline
(1,1) & $\{CD\} + \frac{\tau}{3} \{CDD\}$\\
\hline
(1,2) & $\{BC\}+\{AD\} + \frac{\tau}{3} (\{BCD\} + \{ADD\})$ \\
\hline
(2,2) & $\{AB\} + \frac{\tau}{3} (\{ABD\} + \{BBC\})$\\
\hline
(2,3) & $A + \frac{\tau}{3} (\{AD\} + AD + \{BC\} + BC)$\\
\hline
\end{tabular}
\end{multline}
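The permutation shorthand defined in the footnote can be made concrete with a small sketch (acting on plain $d\times d$ operators only; in the tables the factors additionally carry virtual legs that are fixed by the row and column labels):
\begin{verbatim}
using Combinatorics

# {B,D} = BD + DB, {A,B,B} = ABB + BAB + BBA, ...: sum of operator products over
# all distinct orderings of the factors.
perm_sum(ops::AbstractMatrix...) =
    sum(reduce(*, p) for p in unique(collect(permutations(collect(ops)))))

X = [0 1; 1 0]
Z = [1 0; 0 -1]
perm_sum(X, Z) == X * Z + Z * X   # true
\end{verbatim}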
\par When generalizing to arbitrary order $N$, it is more convenient to apply the extension step at the level of $H^N$. Pseudocode for the resulting MPO can be found in Alg.~\ref{alg:approximate_extension}.
\begin{figure}[H]
\begin{algorithm}[H]
\begin{algorithmic}[1]
\State Inputs $\hat H, N, \tau$
\State $O \gets \hat H^N$
\For{$a,b \in $ possible levels in $O \And 1 \in b$} \Comment{Incorporate higher order corrections}
\State if $2 \not \in a \And 3 \in a$, skip
\For{$c,d \in [1:N+1]$}
\State $a_e = $ insert a $1$ at position $c$ in $a$
\State $b_e = $ insert a $3$ at position $d$ in $b$
\State $n_1 = $ the number of $1$'s in $a_e$
\State $n_3 = $ the number of $3$'s in $b_e$
\State $O[a,b] = O[a,b] + H^{N+1}[ a_e, b_e] \tau \frac{N!}{(N+1)! n_1 n_3}$
\EndFor
\EndFor
\State{Apply algorithm \ref{alg:exact_compression}}
\end{algorithmic}
\caption{Pseudocode for the extension step}
\label{alg:approximate_extension}
\end{algorithm}
\end{figure}
\section{Approximate compressions}
\label{sec:aproxcompression}
There is one more possible compression for an $N$th order MPO, similar in spirit to the previous extension step. This compression step is only accurate up to order $N$, and it therefore slightly lowers the precision of the extended MPO. We will again illustrate the method starting from the second-order MPO, and then extend it to arbitrary order.
The essential observation is that the levels (1,2) and (2,3) in the second-order MPO are similar. The diagonal elements $O_2[(1,2),(1,2)]$ and $O_2[(2,3),(2,3)]$ are equal to lowest order in $\tau$. Furthermore, the transitions from (1,2) to (1,1) and from (2,3) to (1,1) are also related: the lowest order of $O_2[(2,3),(1,1)]$ equals the lowest order of $O_2[(1,2),(1,1)]$, multiplied by an extra factor of $\frac{\tau}{2}$.
\begin{equation}
\diagram{12}
\end{equation}
This means that one can add $\frac{\tau}{2}$ times the (2,3) column to the (1,2) column and remove the (2,3) level, and the resulting MPO will still be accurate up to second order! The second-order MPO now becomes
\begin{multline}
\begin{tabular}{ |c|c|c| }
\hline
&(1,1) & (1,2)\\
\hline
(1,1) & $\ensuremath{\mathbb{I}} + \tau D + \frac{\tau^2}{2!} D^2 + \frac{\tau^3}{3!} D^3$ & $C + \frac{\tau}{2} \{CD\} + \frac{\tau^2}{6}\{CDD\}$ \\
\hline
(1,2) & $\tau B + \frac{\tau^2}{2!} \{BD\} + \frac{\tau^3}{3!} \{BDD\}$ & $A + \frac{\tau}{2}(\{BC\} + \{AD\}) + \frac{\tau^2}{6} (\{CBD\}+\{ADD\})$ \\
\hline
(2,2) & $\frac{\tau^2}{2} BB + \frac{\tau^3}{3!} \{BBD\}$ & $\frac{\tau}{2} \{AB\} + \frac{\tau^2}{6} (\{ABD\} + \{BBC\})$ \\
\hline
\end{tabular} \quad \cdots \\
\cdots \quad \begin{tabular}{ |c|c| }
\hline
&(2,2) \\
\hline
(1,1) & $CC + \frac{\tau}{3} \{CCD\}$ \\
\hline
(1,2) & $\{AC\} + \frac{\tau}{3}(\{ACD\}+\{CCB\})$ \\
\hline
(2,2) & $AA + \frac{\tau}{3} (\{ABC\}+\{AAD\})$ \\
\hline
\end{tabular}\;.
\end{multline}
Once again we can generalize this step to any order:
\begin{figure}[H]
\begin{algorithm}[H]
\begin{algorithmic}[1]
\State Apply algorithm \ref{alg:approximate_extension}
\For{$a \in $ possible levels in $O \And 1 \not \in a$}
\State $n_3$ = the number of $3$'s in $a$
\State $b$ = replace all $3$'s with $1$'s in $a$
\State $O[:,b] = O[:,b] + O[:,a] \tau^{n_3} \frac{(N-n_3)!}{N!}$
\State remove level $a$
\EndFor
\end{algorithmic}
\caption{Pseudocode for the approximate compression step}
\label{alg:approxcompression}
\end{algorithm}
\end{figure}
\section{Numerical compression}
In the previous three sections, we have provided analytical techniques for compressing and extending our construction for approximating $\ensuremath{\mathrm{e}}^{\tau H}$ as an MPO. However, we can also compress the MPO numerically using singular-value decompositions. The idea is to interpret the MPO as a regular MPS with two physical legs and truncate with respect to the 2-norm for states. This procedure should be applied with care, because this norm is not suitable for operators; it should perhaps only be used in cases where we can do \emph{exact} compressions (for which the singular values are exactly zero, so that the choice of norm does not matter).
\par We can use this numerical compression for checking whether we have found all exact compression steps. If we do this on the uncompressed MPOs from Sec.~\ref{sec:dalai}, we observe that we indeed find a number of exact zero singular values in the MPO, corresponding to the analytical compression steps that were identified above. After having done these analytical compressions, we find that the MPO cannot be compressed further, showing that we have found all possible exact compressions.
\section{Benchmarks}
\label{sec:bench}
\subsection{Precision of \texorpdfstring{$n$}{n}th order MPO}
Let us first illustrate the precision of our MPO construction. To this end, we first optimize an MPS ground-state approximation $\ket{\Psi_0}$ of a given Hamiltonian $H$ in the thermodynamic limit and subsequently evaluate
\begin{equation}
p = \left| \lambda - i E_0 \delta t \right|, \qquad \lambda = \lim_{N\to\infty} \frac{1}{N} \log \bra{\Psi_0} U(\delta t) \ket{\Psi_0} ,
\label{eq:phase_error}
\end{equation}
where $\lambda$ can be evaluated directly in the thermodynamic limit (see Sec.~\ref{sec:mps}) and $E_0$ is the ground-state energy per site. In this set-up, we make sure that the MPS $\ket{\Psi_0}$ approximates the true ground state quasi-exactly -- in practice, we simply take a very large bond dimension -- such that $p$ indeed measures the error in the MPO approximation $U(\delta t)$ of the time-evolution operator.
\par In Fig.~\ref{fig:precision} we plot this quantity as a function of $\delta t$, both for the plain $n$th-order MPO (without extension and approximate compression) and for the extended and compressed MPO, each time for different orders. We find that the error has the expected scaling as a function of $\delta t$, showing that our MPO construction is correct up to the given order. We observe that the approximate compression and extension steps give rise to more precise MPOs, even though the bond dimension is smaller. This shows that it is always beneficial to work with these MPOs.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{./Figures/dalai_precision.pdf}
\caption{Precision of the $n$th order operator (as measured by Eq.~\ref{eq:phase_error}) for the $\ensuremath{\mathrm{SU}}(2)$ symmetric spin-1 Heisenberg model, plotted as a function of the time step. We find the expected power-law behavior, where different orders directly correspond to different powers (the $n$th-order operator has an error scaling as $\delta t^{n+1}$). The open circles show the results from only using exact compression steps, while the filled circles were obtained using both the approximate compression and extension.}
\label{fig:precision}
\end{figure}
\subsection{Efficiency}
After having shown that our construction works as intended, we now show that it is actually efficient to use higher-order MPOs in practical MPS time-evolution algorithms. Let us therefore take the Hamiltonian of the two-dimensional transverse-field Ising model on a finite cylinder with spiral boundary conditions [Eq.~\eqref{eq:ising_cylinder}], find an MPS ground state, perform a spin flip in the middle of the cylinder and time-evolve the state. This is the typical set-up for evaluating a spectral function. We time-evolve for a total time $T=1$ with different time steps $\delta t$, where we approximate the time-evolution operator $\ensuremath{\mathrm{e}}^{i H\delta t}$ by MPOs of different orders. In each time step, we perform a variational sweeping optimization of the new time-evolved MPS and keep the bond dimension fixed. After the time evolution we evaluate the fidelity per site with respect to a benchmark time-evolved state (which was obtained by the algorithm based on the time-dependent variational principle (TDVP) \cite{Haegeman2016} with time step $\delta t=0.0001$):
\begin{equation} \label{eq:fid}
f = \frac{1}{N} \log \braket{\Psi_{\text{bench}}(T) | \Psi_n(T) },
\end{equation}
with $N$ the number of sites.
\par In the first panel of Fig.~\ref{fig:timescaling} we plot this fidelity density as a function of time step, showing that we indeed find higher precision with higher-order MPOs and that the error scales with the correct power of $\delta t$.
Note that the first-order MPO is exactly the same as the $W_{II}$ operator from Ref.~\cite{Zaletel2015}. Curiously, we find that the error for the second-order MPO scales like that of a third-order MPO, but this is not generically true and depends on the particular Hamiltonian.
\par In the second panel, we show the computational time as a function of the fidelity density, i.e.\ how much time is needed to reach a certain accuracy. This plot clearly shows that it is beneficial to go beyond the first-order MPO. For general models, we expect that we can obtain better fidelity at the same computational cost by using higher-order methods. The extraordinary performance of the second-order MPO in this particular example originates from the fact that it is correct up to third order, which is not expected in general.
\begin{figure}[!tbp]
\includegraphics[width=0.49\textwidth]{./Figures/hW4_ising_prec_dt_scaling.pdf}
\includegraphics[width=0.49\textwidth]{./Figures/hW4_ising_prec_time_scaling.pdf}
\caption{Benchmark results for the transverse-field Ising model on a cylinder ($W = 4$). In the left panel we plot the fidelity density [Eq.~\ref{eq:fid}] as a function of time step. In the right panel we plot the fidelity density as a function of total simulation time.}
\label{fig:timescaling}
\end{figure}
\subsection{Splitting schemes}
There is a well-known approach for generating higher-order time-evolution methods out of lower-order approximation schemes, by combining ingeniously chosen time steps \cite{3-540-30663-3}. Given a first-order method, such as our time-evolution operator $O_1(t)$, it can be combined with alternating time steps $t_1 = (1+i)/2$ and $t_2 = (1-i)/2$. The composite operator $O_1(t_1) O_1(t_2) = O_2(t_1+t_2)$ is then accurate up to second order \cite{Zaletel2015}. In general, a second-order method and more than two time steps are required to construct higher-order schemes that combine only real time steps. This is also the basis behind higher-order Suzuki-Trotter decompositions.
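A quick dense-matrix check of this composition rule (a hedged numerical sketch, not part of the MPO machinery): since $t_1+t_2=1$ and $t_1 t_2 = 1/2$, two first-order Taylor steps with these complex sub-steps reproduce $\ensuremath{\mathrm{e}}^{\tau H}$ to second order.
\begin{verbatim}
using LinearAlgebra

A = randn(ComplexF64, 8, 8)
H = (A + A') / 2                        # random dense Hermitian "Hamiltonian"
tau = 1e-2
first_order(t) = I + t * tau * H        # O_1(t*tau) = 1 + t*tau*H
U = exp(tau * H)
err1 = opnorm(U - first_order(1.0))                                       # ~ tau^2
err2 = opnorm(U - first_order((1 + im) / 2) * first_order((1 - im) / 2))  # ~ tau^3
@show err1 err2
\end{verbatim}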
In contrast to these splitting schemes, the construction of the $N$th order MPO $O_N$ has a bond dimension as listed in the following table (where $\chi$ is the bond dimension of the $A$ block in the Hamiltonian):
\begin{center}
\begin{tabular}{ |c|c| }
\hline
Order & Bond dimension \\
\hline
1 & $1+\chi$ \\
2 & $1+\chi+\chi^2$\\
3 & $1+3\chi+\chi^2+\chi^3$\\
4 & $1+5\chi+4\chi^2+\chi^3+\chi^4$\\
\hline
\end{tabular}
\end{center}
Even if we assume that our MPO operators are fully dense, the composition of $N$ first-order operators will therefore always have a larger bond dimension than the construction we put forward.
Furthermore, for splitting schemes including complex-valued time steps, the resulting operator will exponentially grow high energy contributions before exponentially suppressing them in a subsequent step, raising serious concerns about their stability. These splitting schemes may however be useful as a trade-off between CPU time and memory usage. A high-order time evolution operator corresponds to an MPO tensor with an exponentially large bond dimension. By combining a splitting scheme with the highest-order operator that can still reside in memory, one can push time evolution simulations to even higher levels of accuracy.
\subsection{Finite temperature}
\begin{figure}[!tbp]
\includegraphics[width=0.49\textwidth]{./Figures/finite_temp_nn_heis_energy.pdf}
\includegraphics[width=0.49\textwidth]{./Figures/finite_temp_nn_heis_free.pdf}
\caption{Finite temperature results for the spin-$\frac{1}{2}$ XXZ model in the thermodynamic limit. In the left panel we plot the energy density. The ground state energy density is indicated by a black dashed line. In the right panel we plot the free energy density.}
\label{fig:finitetemp}
\end{figure}
Our method can also be used to directly construct the finite-temperature density matrix $e^{-\beta H}$, at different orders of precision. We have calculated the free energy and energy density for different values of $\beta$ at different expansion orders for the spin-$\frac{1}{2}$ XXZ model, directly in the thermodynamic limit (see Fig.~\ref{fig:finitetemp}).
This calculation is highly and straightforwardly parallelisable (at least on a shared-memory architecture), as it boils down to solving an iterative dominant eigenvalue problem for a block-sparse matrix. It is however fundamentally limited in the achievable $\beta$: at some crossover point the error term will always start to dominate, and the results become wildly inaccurate. The best results will presumably be obtained by multiplying multiple density matrices at smaller $\beta$ (which can be calculated up to arbitrary precision).
\section{Conclusion and outlook}
We have introduced a new way of approximating the time evolution operator as an MPO correctly up to arbitrary order in the time step. The algorithm is formulated in the language of Hamiltonians represented as first-degree MPOs and is directly compatible with spatial symmetries (in particular, translation invariance) and non-abelian on-site symmetries. While such a construction is interesting in its own right, we have demonstrated that a higher order scheme allows us to speed up more conventional time simulations by an order of magnitude! The higher-order MPOs can be readily used in existing time-evolution algorithms, leading to immediate speedups. For the reader's convenience, we have summarized the most useful MPO expressions in the Appendix.
It would be interesting to explore the interplay between the approximate compression step from Sec.~\ref{sec:aproxcompression} and the extension step from Sec.~\ref{sec:aproxextend}. The compression step should in principle introduce errors of order $N+1$, while these are precisely the kind of terms the extension step tries to incorporate correctly, so in principle we would expect these steps to be at odds with each other. Nevertheless we observe that a combination of the two gives the best results, which is not yet fully understood.
In principle it is clear how one can apply a very similar methodology to time-dependent Hamiltonians. For the example of periodic driving, it could allow us to construct the time evolution operator over an entire period at once. In turn, we would be able to analyze this operator with spectral methods, extracting information on the effective time-averaged operator, as an alternative to the more conventional perturbative expansion! We leave this open for future work.
\paragraph*{Acknowledgments}
We would like to thank Bram Vanhecke and Frank Verstraete for earlier collaborations that inspired this work.
\paragraph*{Code availability}
The computer code can be found in the software package MPSKit.jl \cite{mpskit}.
\paragraph*{Funding information}
MV and JH have received support from the European Research Council (ERC) under the European Union’s Horizon 2020 program [Grant Agreement No. 715861 (ERQUAF)] and from the Research Foundation Flanders. LV is supported by the Research Foundation Flanders (FWO) via grant FWO20/PDS/115.
\section{Introduction}
\label{intro}
Quantum gravity is one of the frontiers of current physics. It attempts to reconcile the contradiction between general relativity and quantum field theory in order to achieve a consistent understanding of such questions as black holes and the origin of the universe \cite{Polchinski1,Polchinski2,Rovelli}.
Physicists have explored quantum gravity for decades and come up with loop quantum gravity theory and string theory (as well as other theories of quantum gravity)\cite{Polchinski1,Polchinski2,Rovelli}.
The main idea of loop quantum gravity theory is to quantize gravity while preserving the background independence of general relativity; it is one of the main approaches that preserves both the non-perturbative character and the background independence \cite{2007LNP...721..185T,2003LNP...631...41T}.
From the perspective of general relativity, loop quantum gravity theory is expected to resolve the classical singularity problems (such as the Big Bang singularity and black hole singularities).
Since general relativity is a geometric theory describing gravitational phenomena, loop quantum gravity can be understood as quantizing the continuous geometric structure, which makes it also a theory of quantum geometry.
In recent decades, there has been extensive work by physicists on loop quantum gravity theory and its applications, due to the significant progress made on the problem of cosmological singularities
\cite{2005LRR.....8...11B,2001PhRvL..86.5227B,2006PhRvD..73l4038A}.
In recent years, black hole physics has become a testing ground for theories of quantum gravity, as scientists have made breakthroughs in the observation of gravitational waves from binary black holes and of the shadows of supermassive black holes \cite{2016PhRvL.116f1102A,2019ApJ...875L...1E}.
This includes testing loop quantum gravity using black hole gravitational waves and black hole shadow observations, which naturally makes the study of black holes within loop quantum gravity an attractive problem \cite{2004PhRvD..70l4009M,2007hep.th....1239M,2006CQGra..23..391A,2006CQGra..23.5587M,2008PhRvL.101p1301G}.
Remarkably, physicists have discovered that it is possible to solve the singularity problem of spherically symmetric black holes via loop quantum gravity theory. They have found that when black holes are considered in loop quantum gravity theories, there is no singularity at the center of the black hole, while the event horizon of the black hole is preserved. This is true for both the full loop quantum gravity theory case and the semi-classical case of loop quantum gravity
\cite{2006CQGra..23..391A}.
In the case of the full loop quantum gravity theory, the analytical form of the black hole metric under spherically symmetric conditions was obtained in Ref.~\cite{2019arXiv191200774B}. Brahma et al. extended it to the case of rotating black holes via the Newman-Janis method and investigated the possibility of testing loop quantum gravity in astronomical phenomena
\cite{2021PhRvL.126r1301B}.
In the semi-classical approximation, Leonardo Modesto reduced the action of loop quantum gravity for black holes to two effective parameters and obtained the semi-classical spherically symmetric black hole metric in loop quantum gravity theory \cite{2010IJTP...49.1649M}.
After this, Francesco Caravelli and Leonardo Modesto extended the semi-classical black hole solution to rotating black holes by the Newman-Janis method, but Mustapha Azreg-Aïnou showed that the resulting rotating black hole metric is not a solution of Einstein's gravitational field equations \cite{2010CQGra..27x5022C,2011CQGra..28n8001A}.
In Ref.~\cite{2020PhRvD.101h4001L}, the Newman-Janis method was used to derive the rotating black hole in the semi-classical case of loop quantum gravity theory, but an unknown function $H$ was retained, so that the solution of the rotating black hole in the semi-classical case of loop quantum gravity theory was not actually obtained.
In this work, we obtain an analytical expression for the unknown function $H$ by an approximate calculation, so that the solution of the rotating black hole in the semi-classical case of loop quantum gravity theory can be obtained.
The paper is organized as follows. In Section \ref{kerr-like}, we construct the solution of the rotating black hole according to the Newman-Janis method and obtain the analytic expression of the unknown function $H$. In Section \ref{proper}, we analyze and discuss the properties of rotating black holes in the semi-classical case of loop quantum gravity theory. A summary is given in Section \ref{discuss}. We use natural units throughout the paper.
\section{Kerr-like semi-classical black hole in LQG}
\label{kerr-like}
According to Refs.~\cite{2010IJTP...49.1649M,2010CQGra..27x5022C,2020PhRvD.101h4001L}, the space-time line element of a spherically symmetric black hole (BH) in the semi-classical approximation of loop quantum gravity (LQG) is
\begin{equation}
ds^{2}=-f(r)dt^{2}+\dfrac{1}{g(r)}dr^{2}+h(r)(d\theta^{2}+\sin^{2}\theta d\phi^{2}),
\label{metric1}
\end{equation}
where $f(r)$, $g(r)$ and $h(r)$ are the space-time metric coefficients, whose expressions are
\begin{equation}
f(r)=\frac{(r-r_{+})(r-r_{-})(r+r_{*})^{4}}{r^{4}+a^{2}_{0}},
\label{metric2}
\end{equation}
\begin{equation}
g(r)=\frac{(r-r_{+})(r-r_{-})r^{4}}{(r+r_{*})^{2}(r^{4}+a^{2}_{0})},
\label{metric3}
\end{equation}
and
\begin{equation}
h(r)=r^{2}+\frac{a^{2}_{0}}{r^{2}}.
\label{metric4}
\end{equation}
In these metric coefficients, the constants $r_{-}$ and $r_{+}$ are the inner and outer horizons, given by $r_{+}=2M/(1+P)^{2}$ and $r_{-}=2MP^{2}/(1+P)^{2}$, respectively, and the constant $r_{*}$ is defined as $r_{*}=\sqrt{r_{+}r_{-}}$.
The physical meaning of the parameters $a_{0}$, $M$ and $P$ can be found in Ref.~\cite{2010IJTP...49.1649M}.
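As a quick numerical illustration (not part of the original derivation), the metric functions can be evaluated directly from the definitions above. The following Python sketch uses hypothetical parameter values $M=1$, $P=0.05$ and $a_{0}=0.1$, chosen only for demonstration.
\begin{verbatim}
import numpy as np

# Hypothetical illustrative parameters (not taken from observational data)
M, P, a0 = 1.0, 0.05, 0.1

r_plus  = 2.0 * M / (1.0 + P)**2          # outer horizon r_+
r_minus = 2.0 * M * P**2 / (1.0 + P)**2   # inner horizon r_-
r_star  = np.sqrt(r_plus * r_minus)       # r_* = sqrt(r_+ r_-)

def f(r):
    return (r - r_plus)*(r - r_minus)*(r + r_star)**4 / (r**4 + a0**2)

def g(r):
    return (r - r_plus)*(r - r_minus)*r**4 / ((r + r_star)**2*(r**4 + a0**2))

def h(r):
    return r**2 + a0**2 / r**2

# f and g vanish at the outer horizon, as expected
for r in (r_plus, 2.0, 3.0, 10.0):
    print(f"r={r:6.3f}  f={f(r): .4e}  g={g(r): .4e}  h={h(r): .4e}")
\end{verbatim}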
The energy-momentum tensor corresponding to the metric (\ref{metric1}) of the semi-classical spherically symmetric BH can be written as $T_{\mu\nu}=\mathrm{diag}(-\rho,p_{r},p_{\theta},p_{\phi})$, where $\rho$ is the energy density, $p_{r}$ is the radial pressure, and $p_{\theta}$ and $p_{\phi}$ are the tangential pressures. The energy density and pressures of the semi-classical BH in LQG are
\begin{equation}
-8\pi\rho=-g(r)\Big(-\frac{h^{'}g^{'}}{2hg}-\frac{1}{h}\Big)-\frac{1}{h},
\label{energy-m1}
\end{equation}
\begin{equation}
8\pi p_{r}=g(r)\Big(\frac{h^{'}f^{'}}{2hf}+\frac{1}{h}\Big)-\frac{1}{h},
\label{energy-m2}
\end{equation}
\begin{equation}
8\pi p_{\theta}=8\pi p_{\phi}=g(r)\Big[\frac{f^{''}}{2f}-\frac{(f^{'})^{2}}{4f^{2}}+\frac{g^{'}f^{'}}{4gf}+\frac{h^{'}}{4h}(\frac{f^{'}}{f}+\frac{g^{'}}{g})\Big].
\label{energy-m3}
\end{equation}
According to LQG theory, quantum gravitational effects should be extremely tiny in ordinary astrophysical phenomena (they have not been observed so far), which strongly restricts the parameter values of the model. It is commonly believed that the parameters $P$ and $a_{0}$ of semi-classical black holes tend to zero or are much smaller than $1M$.
Therefore, in LQG theory, the radial and tangential pressures of the semi-classical black hole (\ref{metric1}) should be approximately equal, $p_{r}\simeq p_{\theta}\equiv p$. In other words, the following condition should be satisfied:
\begin{equation}
\frac{f^{''}}{2f}-\frac{(f^{'})^{2}}{4f^{2}}+\frac{g^{'}f^{'}}{4gf}+\frac{h^{'}}{4h}\Big(\frac{g^{'}}{g}-\frac{f^{'}}{f}\Big)=\frac{1}{h}\Big(1-\frac{1}{g}\Big).
\label{relation1}
\end{equation}
This condition indicates that the pressure corresponding to the semi-classical black hole metric (\ref{metric1}) is isotropic. By direct computation, we find that the expression for the pressure $p$ takes the form:
\begin{equation}
p=\frac{(r-r_{+})(r-r_{-})(r^{4}-a^{2}_{0})r^{3}}{8\pi(r+r_{*})^{2}(r^{4}+a^{2}_{0})}\Big(\dfrac{2r-(r_{+}+r_{-})}{(r-r_{+})(r-r_{-})}+\dfrac{4}{r+r_{*}}-\dfrac{4r^{3}}{r^{4}+a^{2}_{0}}\Big)+\frac{(r-r_{+})(r-r_{-})r^{6}}{8\pi(r+r_{*})^{2}(r^{4}+a^{2}_{0})^{2}}-\dfrac{r^{2}}{8\pi(r^{4}+a^{2}_{0})}.
\label{relation2}
\end{equation}
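The near-isotropy invoked above can be checked directly from Eqs.~(\ref{energy-m2}) and (\ref{energy-m3}). The following sketch computes $p_{r}$ and $p_{\theta}$ symbolically from the metric functions and compares them at a few radii outside the outer horizon; the small values of $P$ and $a_{0}$ are illustrative assumptions only.
\begin{verbatim}
import sympy as sp

r, M, P, a0 = sp.symbols('r M P a0', positive=True)

r_p = 2*M/(1 + P)**2
r_m = 2*M*P**2/(1 + P)**2
r_s = sp.sqrt(r_p*r_m)

f = (r - r_p)*(r - r_m)*(r + r_s)**4/(r**4 + a0**2)
g = (r - r_p)*(r - r_m)*r**4/((r + r_s)**2*(r**4 + a0**2))
h = r**2 + a0**2/r**2

fp, gp, hp = sp.diff(f, r), sp.diff(g, r), sp.diff(h, r)

# Radial and tangential pressures, Eqs. (energy-m2) and (energy-m3)
p_r  = (g*(hp*fp/(2*h*f) + 1/h) - 1/h)/(8*sp.pi)
p_th = g*(sp.diff(f, r, 2)/(2*f) - fp**2/(4*f**2)
          + gp*fp/(4*g*f) + hp/(4*h)*(fp/f + gp/g))/(8*sp.pi)

vals = {M: 1, P: sp.Rational(1, 100), a0: sp.Rational(1, 10)}
for rv in (3, 5, 10):
    pr  = float(p_r.subs(vals).subs(r, rv))
    pth = float(p_th.subs(vals).subs(r, rv))
    print(rv, pr, pth)   # nearly equal for small P and a0
\end{verbatim}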
Following the discussion in Ref.~\cite{2014PhLB..730...95A} (and references therein), the semi-classical spherically symmetric space-time metric (\ref{metric1}) can be extended to the rotating case; the general rotating space-time line element is
\begin{equation}
ds^{2}=-\dfrac{H}{\Sigma^{2}}\big(1-\dfrac{2\overline{f}}{\Sigma^{2}} \big)dt^{2}+\dfrac{H}{\Delta}dr^{2}-\dfrac{4a\overline{f}\sin^{2}\theta H}{\Sigma^{4}}dtd\phi+H d\theta^{2}+\dfrac{H A \sin^{2}\theta}{\Sigma^{4}}d\phi^{2},
\label{NJA10}
\end{equation}
where $k(r)=h(r)\sqrt{f(r)/g(r)}$, $2\overline{f}=k(r)-h(r)f(r)$, $\Delta(r)=h(r)f(r)+a^{2}$, $A=(k(r)+a^{2})^{2}-a^{2}\Delta \sin^{2}\theta$, $\Sigma^{2}=k(r)+a^{2}\cos^{2}\theta$, and $H=H(r,\theta,a)$ is an unknown function.
In order to find the specific expression of the unknown function $H$, we need to use the rotational condition $G_{r\theta}=0$ and the Einstein field equation $G_{\mu\nu}=8\pi T_{\mu\nu}$. By substituting the space-time line element (\ref{NJA10}) into these two conditions, we can obtain the following system of equations
\begin{equation}
(k+a^{2}y^{2})^{2}(3H_{,r}H_{,y^{2}}-2HH_{,ry^{2}})=3a^{2}k_{,r}H^{2},
\label{NJA11}
\end{equation}
\begin{equation}
H[k^{2}_{,r}+k(2-k_{,rr})-a^{2}y^{2}(2+k_{,rr})]+(k+a^{2}y^{2})(4y^{2}H_{,y^{2}}-k_{,r}H_{,r})=0,
\label{NJA12}
\end{equation}
where $H_{,ry^{2}}=\partial^{2}H/\partial r \partial y^{2}$, $k_{,r}=\partial k(r)/\partial r$ and $y=\cos\theta$.
In Ref.~\cite{2020PhRvD.101h4001L}, the authors discuss the properties of rotating BHs while retaining the unknown function $H$. In fact, the expression for the unknown function $H$ can be worked out. For a spherically symmetric metric with $f(r)\neq g(r)$, the unknown function $H$ can be determined under the condition $p_{r}\simeq p_{\theta}\simeq p_{\phi}=p$. In Ref.~\cite{2014PhLB..730...95A}, an exact solution of equations (\ref{NJA11}) and (\ref{NJA12}) is given, whose analytical form is as follows
\begin{equation}
H=r^{2}+p^{2}+a^{2}y^{2}=r^{2}+p^{2}+a^{2}\cos^{2}\theta,
\label{NJA15}
\end{equation}
where $p$ is the pressure given by expression (\ref{relation2}).
To sum up, the analytical expressions of the unknown functions and parameters in the rotating BH metric (\ref{NJA10}) can be written out explicitly as
\begin{align}
H={}&\Big[\frac{(r-r_{+})(r-r_{-})(r^{4}-a^{2}_{0})r^{3}}{8\pi(r+r_{*})^{2}(r^{4}+a^{2}_{0})}\Big(\dfrac{2r-(r_{+}+r_{-})}{(r-r_{+})(r-r_{-})}+\dfrac{4}{r+r_{*}}-\dfrac{4r^{3}}{r^{4}+a^{2}_{0}}\Big)\notag\\
&+\frac{(r-r_{+})(r-r_{-})r^{6}}{8\pi(r+r_{*})^{2}(r^{4}+a^{2}_{0})^{2}}-\dfrac{r^{2}}{8\pi(r^{4}+a^{2}_{0})}\Big]^{2}+r^{2}+a^{2}\cos^{2}\theta,
\label{NFW-BH1}
\end{align}
\begin{equation}
k(r)=\Big(r^{2}+\frac{a^{2}_{0}}{r^{2}}\Big)\Big(1+\frac{r_{*}}{r}\Big)^{2}(r+r_{*}),~~~~~\Sigma^{2}=\Big(r^{2}+\frac{a^{2}_{0}}{r^{2}}\Big)\Big(1+\frac{r_{*}}{r}\Big)^{2}(r+r_{*})+a^{2}\cos^{2}\theta,
\label{NFW-BH2}
\end{equation}
\begin{equation}
2\overline{f}=\Big(r^{2}+\frac{a^{2}_{0}}{r^{2}}\Big)\Big(1+\frac{r_{*}}{r}\Big)^{2}(r+r_{*})-(1-\frac{r_{+}}{r} )(1-\frac{r_{-}}{r})(r+r_{*})^{4},~~~~~~
\Delta(r)=(1-\frac{r_{+}}{r} )(1-\frac{r_{-}}{r})(r+r_{*})^{4}+a^{2},
\label{NFW-BH3}
\end{equation}
\begin{equation}
A=\Big(\Big(r^{2}+\frac{a^{2}_{0}}{r^{2}}\Big)\Big(1+\frac{r_{*}}{r}\Big)^{2}(r+r_{*})+a^{2}\Big)^{2}-a^{2}\sin^{2}\theta\Big((1-\frac{r_{+}}{r} )(1-\frac{r_{-}}{r})(r+r_{*})^{4}+a^{2}\Big),
\label{NFW-BH4}
\end{equation}
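For numerical work it is often more convenient to assemble the functions entering the rotating metric (\ref{NJA10}) directly from the definitions $k=h\sqrt{f/g}$, $2\overline{f}=k-hf$, $\Delta=hf+a^{2}$, $\Sigma^{2}=k+a^{2}\cos^{2}\theta$ and $A=(k+a^{2})^{2}-a^{2}\Delta\sin^{2}\theta$ than to re-type the expanded expressions above. The sketch below is a minimal illustration with hypothetical parameter values.
\begin{verbatim}
import numpy as np

# Hypothetical illustrative parameters
M, P, a0, a = 1.0, 0.05, 0.1, 0.3

r_p = 2*M/(1 + P)**2
r_m = 2*M*P**2/(1 + P)**2
r_s = np.sqrt(r_p*r_m)

f = lambda r: (r - r_p)*(r - r_m)*(r + r_s)**4/(r**4 + a0**2)
g = lambda r: (r - r_p)*(r - r_m)*r**4/((r + r_s)**2*(r**4 + a0**2))
h = lambda r: r**2 + a0**2/r**2

def rotating_functions(r, theta):
    k      = h(r)*np.sqrt(f(r)/g(r))        # k(r) = h*sqrt(f/g)
    two_fb = k - h(r)*f(r)                   # 2*fbar
    Delta  = h(r)*f(r) + a**2
    Sigma2 = k + a**2*np.cos(theta)**2
    A      = (k + a**2)**2 - a**2*Delta*np.sin(theta)**2
    return k, two_fb, Delta, Sigma2, A

print(rotating_functions(5.0, np.pi/3))      # evaluated outside the horizon
\end{verbatim}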
\section{Some properties and discussion}
\label{proper}
For the semi-classical BH metric (\ref{NJA10}) in LQG theory, we now discuss and analyze its basic properties.
First, we analyze the structure of the BH event horizon determined by the semi-classical BH metric (\ref{NJA10}).
According to the event horizon condition $g^{rr}=\Delta/H=0$, the event horizon structure of the BH is determined solely by the expression for $\Delta$ and does not depend on the function $H$.
Therefore, the explicit form of the function $H$ does not modify the analysis of the BH event horizon structure given in Ref.~\cite{2020PhRvD.101h4001L}, where the event horizon structure and properties of the BH (\ref{NJA10}) have been discussed in detail.
Second, for the stationary limit surfaces and the ergosphere of the semi-classical BH (\ref{NJA10}): since the inner and outer stationary limit surfaces are determined by the equation $g_{tt}=-\dfrac{H}{\Sigma^{2}}\big(1-\dfrac{2\overline{f}}{\Sigma^{2}}\big)=0$, the structure of the function $H$ does not modify their properties. A detailed analysis of the inner and outer stationary limit surfaces can be found in Ref.~\cite{2020PhRvD.101h4001L}.
The boundary of the ergosphere is determined by the outer horizon of the BH and the outer stationary limit surface, so the size of the ergosphere region does not depend on the function $H$, and its analysis remains exactly the same as in Ref.~\cite{2020PhRvD.101h4001L}.
Finally, we consider the singularity problem of the semi-classical BH metric (\ref{NJA10}). The singularity of the BH can be analyzed by computing the Kretschmann scalar $R$, which reads
\begin{align}
R={}&\dfrac{2}{\Big[\Big(r^{2}+\frac{a^{2}_{0}}{r^{2}}\Big)\Big(1+\frac{r_{*}}{r}\Big)^{2}(r+r_{*})+a^{2}\cos^{2}\theta \Big]^{3}}\times\Big[r^{2}+\Big[\frac{(r-r_{+})(r-r_{-})(r^{4}-a^{2}_{0})r^{3}}{8\pi(r+r_{*})^{2}(r^{4}+a^{2}_{0})}\Big(\dfrac{2r-(r_{+}+r_{-})}{(r-r_{+})(r-r_{-})}+\dfrac{4}{r+r_{*}}\notag\\
&-\dfrac{4r^{3}}{r^{4}+a^{2}_{0}}\Big)+\frac{(r-r_{+})(r-r_{-})r^{6}}{8\pi(r+r_{*})^{2}(r^{4}+a^{2}_{0})^{2}}-\dfrac{r^{2}}{8\pi(r^{4}+a^{2}_{0})}\Big]^{2}+a^{2}(2-\cos^{2}\theta)-\frac{2(r-r_{+})(r-r_{-})(r+r_{*})^{4}}{r^{4}+a^{2}_{0}}\Big]\notag\\
&\times\Big[\frac{(r-r_{+})(r-r_{-})(r^{4}-a^{2}_{0})r^{3}}{8\pi(r+r_{*})^{2}(r^{4}+a^{2}_{0})}\Big(\dfrac{2r-(r_{+}+r_{-})}{(r-r_{+})(r-r_{-})}+\dfrac{4}{r+r_{*}}-\dfrac{4r^{3}}{r^{4}+a^{2}_{0}}\Big)+\frac{(r-r_{+})(r-r_{-})r^{6}}{8\pi(r+r_{*})^{2}(r^{4}+a^{2}_{0})^{2}}-\dfrac{r^{2}}{8\pi(r^{4}+a^{2}_{0})}\Big]^{2}\notag\\
&-\dfrac{2}{\Big(r^{2}+\frac{a^{2}_{0}}{r^{2}}\Big)\Big(1+\frac{r_{*}}{r}\Big)^{2}(r+r_{*})+a^{2}\cos^{2}\theta}\times\Big[ \frac{2(r+r_{*})^{4}+8(2r-r_{+}-r_{-})(r+r_{*})^{3}+12(r-r_{+})(r-r_{-})(r+r_{*})^{2}}{r^{4}+a^{2}_{0}}\notag\\
&+\frac{32(r-r_{+})(r-r_{-})(r+r_{*})^{4}r^{6}}{(r^{4}+a^{2}_{0})^{3}}
-\frac{12(r-r_{+})(r-r_{-})(r+r_{*})^{4}r^{3}+8(2r-r_{+}-r_{-})(r+r_{*})^{4}r^{3}+32(r-r_{+})(r-r_{-})(r+r_{*})^{3}r^{3}}{(r^{4}+a^{2}_{0})^{2}} \Big].
\label{Kretschmann scalar}
\end{align}
It follows from this expression that $R$ is finite in all spacetime regions.
The specific form of the function $H$ does not affect the qualitative discussion of the singularity of the BH, but it can modify the magnitude of $R$ and thus alter the related physical properties.
Although the preceding discussion treats the semi-classical BH metric (\ref{NJA10}) as a vacuum space-time, it can equivalently be regarded as the space-time curvature produced by some material fluid, which allows us to understand the physical properties of the BH by computing the energy-momentum tensor components of the metric (\ref{NJA10}). By a series of calculations, we find that its non-zero components are
\begin{equation}
\rho(\text{Kerr-like BH})=\frac{1}{\Sigma^{6}}\Big[-6f+r^{2}+p^{2}+a^{2}(2-\cos^{2}\theta)\Big]p^{2}-\frac{2}{\Sigma^{4}}\Big(rf^{'} -f\Big),
\label{EMT1}
\end{equation}
\begin{equation}
p_{r}(\text{Kerr-like BH})=\frac{1}{\Sigma^{6}}\Big[2f+r^{2}+p^{2}+a^{2}\cos^{2}\theta\Big]p^{2}-\frac{2}{\Sigma^{4}}\Big(rf^{'} -f\Big),
\label{EMT2}
\end{equation}
\begin{equation}
p_{\theta}(\text{Kerr-like BH})=\frac{2(r^{2}+a^{2}\cos^{2}\theta)}{\Sigma^{6}}f-\frac{1}{\Sigma^{4}}\Big[2rf^{'}+p^{2}\Big]+\frac{1}{\Sigma^{2}}f^{''},
\label{EMT3}
\end{equation}
\begin{equation}
p_{\phi}(\text{Kerr-like BH})=\frac{2}{\Sigma^{6}}[(r^{2}+a^{2}\cos^{2}\theta)f-2a^{2}\sin^{2}\theta p^{2}]-\frac{1}{\Sigma^{4}}\Big[2rf^{'}+p^{2}\Big]+\frac{1}{\Sigma^{2}}f^{''},
\label{EMT4}
\end{equation}
where $f^{'}$ and $f^{''}$ are
\begin{equation}
f^{'}=\frac{(2r-r_{+}-r_{-})(r+r_{*})^{4}}{r^{4}+a^{2}_{0}}+\frac{4(r-r_{+})(r-r_{-})(r+r_{*})^{3}}{r^{4}+a^{2}_{0}}-\frac{4(r-r_{+})(r-r_{-})(r+r_{*})^{4}r^{3}}{(r^{4}+a^{2}_{0})^{2}},
\label{f1}
\end{equation}
\begin{align}
f^{''}={}&\frac{2(r+r_{*})^{4}+8(2r-r_{+}-r_{-})(r+r_{*})^{3}+12(r-r_{+})(r-r_{-})(r+r_{*})^{2}}{r^{4}+a^{2}_{0}}+\frac{32(r-r_{+})(r-r_{-})(r+r_{*})^{4}r^{6}}{(r^{4}+a^{2}_{0})^{3}}\notag\\
&-\frac{12(r-r_{+})(r-r_{-})(r+r_{*})^{4}r^{3}+8(2r-r_{+}-r_{-})(r+r_{*})^{4}r^{3}+32(r-r_{+})(r-r_{-})(r+r_{*})^{3}r^{3}}{(r^{4}+a^{2}_{0})^{2}}.
\label{f2}
\end{align}
From these expressions for the energy density and pressures, one can see that in the semi-classical rotating BH space-time of LQG theory, the equivalent fluid corresponding to the BH (\ref{NJA10}) does not satisfy the isotropy condition.
From the previous discussion, we know that the semi-classical BH metric (\ref{metric1}) is approximately isotropic in the spherically symmetric case. Since the BH metric (\ref{NJA10}) takes the BH spin into account, the isotropy condition is broken, which makes the physical properties of the BH (\ref{NJA10}) more complicated.
\section{Summary}
\label{discuss}
In the research paper \cite{2020PhRvD.101h4001L}, the authors used the Newman-Janis method to derive the semi-classical rotating BH metric in the theory of LQG, but an unknown function $H$ remained in their semi-classical rotating BH metric. In this comment we give an analytical expression for this unknown function $H$, which makes it possible to write down the metric of semi-classical rotating BHs in the theory of LQG in closed form.
The conclusions of the research paper \cite{2020PhRvD.101h4001L} are largely unaffected; our work only completes the space-time metric, so that some of the conclusions can be promoted from qualitative statements to quantitative calculations.
\begin{acknowledgments}
We acknowledge the anonymous referee for a constructive report that has significantly improved this paper. We acknowledge the Special Natural Science Fund of Guizhou University (grant
No. X2020068) and the financial support from the China Postdoctoral Science Foundation funded project under grants No. 2019M650846.
\end{acknowledgments}
\section{Introduction}
\label{introduction}
Ultracold neutral atoms provide a fertile playground for engineering artificial gauge fields~\cite{Goldman,Galitski,Zhai2015,Zhang2016}.
Synthetic spin-orbit coupling, utilizing atomic hyperfine levels as pseudo-spins, can be realized by coupling these states via Raman lasers~\cite{Lin,WangP,Cheuk}.
Spin-orbit-coupled Bose-Einstein condensates (BECs) open a new route to exploring exotic superfluid states and simulating topological matter~\cite{JiS,WuZ,HuangL,Mossman,Valdes,Frolian}. One of the interesting features is that the spin-orbit coupling modifies the dispersion relation of a BEC. The spin-orbit-coupled dispersion may have multiple energy minima. Condensation into these energy minima gives rise to exotic quantum phases, such as the plane-wave (PW) phase and the stripe phase
~\cite{Wang2010,Wucong2011,Hotian2011,Hu2012,Yongping2012,LiY2012}.
The PW phase occupies one of the minima and possesses a nonzero quasimomentum, which breaks time-reversal symmetry~\cite{Wucong2011}. The phase transition and excitations of PW states have been observed experimentally~\cite{JiS,Khamehchi}. The stripe phase, condensing into at least two minima, combines spatial density modulation with superfluidity and has been identified as having supersolid properties~\cite{LiY2013}.
Realization of the stripe phase requires miscibility of the two spin components and a low Rabi frequency of the Raman lasers~\cite{LiY2012, Zheng2013}.
This is quite challenging in $^{87}$Rb experiments, since the atomic interactions are insensitive to the hyperfine states~\cite{Martone2014, Luo2019,Peter2019}.
Recently, the spin-orbit-coupling-induced stripe phase has been observed in atoms loaded into superlattices~\cite{LiJR}, in which the sub-lattice sites are treated as pseudo-spins.
A spinor BEC has more degrees of freedom and intriguing interactions, which lead to a rich ground-state phase diagram~\cite{Stamper-Kurn}. A spin-orbit-coupled spin-1 BEC has been realized experimentally~\cite{Campbell}. Quantum phases in spin-orbit-coupled spin-1 BECs depend on antiferromagnetic and ferromagnetic spin-spin interactions and show salient features~\cite{Lan,Sun, Yu,Martone2016,ChenY}. Three different kinds of stripe phases have been shown to exist, and the phase transitions between different phases are so abundant that tricritical points emerge~\cite{Yu,Martone2016}. One outstanding feature is that the quadratic Zeeman field plays an important role in the existence of stripe phases. In particular, in a ferromagnetic spinor BEC, stripes appear only in a limited regime of low Rabi frequency and quadratic Zeeman field~\cite{Campbell}.
On the other hand, Floquet engineering is a powerful tool in quantum physics for controlling system parameters and manipulating quantum states~\cite{Bukov,Eckardt,Oka}.
In a periodically driven system, an effective static Hamiltonian can be tailored which depends on the driving parameters.
The driving could lead to dramatic changes of the system properties.
Ultracold atoms provide an ideal platform for Floquet engineering due to the tunability and purity of the system,
which has been used to explore artificial gauge fields, topological insulators, and soliton dynamics~\cite{Jotzu,Struck,GoldmanPRX,Flaschner,Ha,Schweizer,Wintersperger,Mitchell,LuM}. In spin-orbit-coupled ultracold atoms, a coherently periodic modulation of Raman laser intensities can produce a tunable spin-orbit coupling strength~\cite{Zhang2013,Jimenez-Garcia,Llorente},
which provides a practical way for dynamical control. A periodic modulation of Raman laser frequencies is employed to manipulate the emergence of the Dirac point in Floquet bands, thus to change band topology~\cite{Huang2018}. A shaking Raman lattice that generates high dimensional spin-orbit coupling is implemented to tune Floquet topological bands~\cite{ZhangJY}.
Recently, a Floquet spinor BEC has been proposed by a periodically driven quadratic Zeeman field~\cite{Kazuya}. Compared with a usual spinor BEC, the Floquet spinor BEC has an additional spin-exchange interaction which is induced by the high-frequency driving. Such an induced spin-exchange interaction can have a profound effect in ferromagnetic condensates and generate unconventional quantum phases~\cite{Kazuya}.
In this paper, we study a Floquet spin-1 BEC with spin-orbit coupling. In spin-1 spin-orbit coupling experiments, three external Raman lasers are used to couple three hyperfine states and the quadratic Zeeman effect is proportional to the two-photon detunings between Raman lasers and hyperfine states~\cite{Campbell}. We propose to drive the quadratic Zeeman effect periodically around a constant value via a periodic modulation of Raman laser frequencies. Under high-frequency driving, the spin-orbit-coupled spinor BEC is effectively described by a static Floquet spinor BEC, in which the Rabi coupling is modulated by a Bessel function and a unique spin-exchange interaction emerges. Quantum ground phases are investigated in such a spin-orbit-coupled Floquet spinor BEC with antiferromagnetic or ferromagnetic spin-spin interactions. Our main results are the following.
(i) Due to the Bessel-function-modulated Rabi frequency, each quantum phase can exist in a broad region of the Rabi frequency. Previous studies show that the stripe phases in antiferromagnetic and ferromagnetic spinor BECs exist in a small regime of the Rabi frequency, namely $\Omega_{c1} <\Omega <\Omega_{c2} $, where $\Omega$ is the Rabi frequency and $\Omega_{c1,c2}$ are two critical values with $\Omega_{c2}-\Omega_{c1}$ being a small quantity~\cite{Campbell,Yu,Martone2016}. In the Floquet spinor BEC, the Rabi frequency is modulated as $\Omega J_0$ with $J_0$ the zero-order Bessel function of the first kind. We find that the corresponding phases appear in $\Omega_{c1}/J_0 <\Omega <\Omega_{c2}/J_0 $. Since $J_0$ is tunable and less than 1, the $\Omega$ region for the existence of the stripe phase is enlarged significantly. Such an extension of the Rabi frequency range for the stripe phases is beneficial for their experimental observation.
(ii) For antiferromagnetic interactions, the Floquet-induced spin-exchange interaction extends the second stripe phase, which exists in an extremely narrow region of the quadratic Zeeman field in a usual spin-orbit-coupled spinor BEC, to a much broader quadratic Zeeman field domain.
(iii) For ferromagnetic interactions, a new stripe phase is induced by the combined effects of the Floquet-induced spin-exchange interaction and the Rabi coupling. These stripes have a very high density contrast. Their Bogoliubov excitations are identified to have two gapless Nambu-Goldstone modes.
This paper is organized as follows.
In Sec.~\ref{model}, we present the theoretical model for a spin-orbit-coupled Floquet spinor BEC. It features the Floquet-induced spin-exchange interaction and the Bessel-function-modulated Rabi frequency. In Sec.~\ref{noninteracting}, phase diagram of a noninteracting spin-orbit-coupled Floquet spinor BEC is analyzed. In Sec.~\ref{interacting}, phase diagrams in antiferromagnetic and ferromagnetic spin-spin interactions are studied separately.
Finally, the conclusion follows in Sec.~\ref{conclusion}.
\section{MODEL}
\label{model}
We consider an experimentally realizable spin-orbit-coupled spin-1 BEC. The spin-orbit coupling is implemented by coupling three hyperfine states with total angular momentum $F=1$ ($m_F=0,\pm1$)
via three Raman lasers propagating along the $x$-axis~\cite{Campbell}. Adjusting the two-photon detunings between the Raman lasers and the hyperfine states to be equal can mimic an effective quadratic Zeeman field. We propose to drive it periodically by a periodic oscillation of the Raman laser frequencies. The mean-field energy functional of the driven system is
\begin{align}
E[\Phi]&=\int d\bm{r} \Phi^\dagger \left[ H_{\text{SOC}}+ \xi(t) F_z^2 \right] \Phi \notag\\
&\phantom{={}}+\int d\bm{r} \Phi^\dagger\left[\frac{c_0}{2} \Phi^\dagger \Phi+\frac{c_2}{2}
\Phi^\dagger \bm{F} \Phi\cdot \bm{F} \right] \Phi,
\label{EnergyS}
\end{align}
with $\Phi=(\Phi_1,\Phi_2,\Phi_3)$ being the spin-1 spinor to describe three-component wave functions.
$\bm{F}=(F_x,F_y,F_z)$ are the spin-1 Pauli matrices. $H_{\text{SOC}}$ is the single-particle spin-orbit-coupled Hamiltonian,
\begin{align}
H_{\text{SOC}}=\left(-i\frac{\partial}{\partial x} +2F_z\right)^2 + \varepsilon F^2_{z} +\frac{\Omega}{\sqrt{2}} F_{x},
\label{SocH}
\end{align}
where $\Omega$ is the Rabi frequency depending on the laser intensities, and $\varepsilon$ is a constant quadratic Zeeman shift which is induced
by the detunings of the Raman lasers~\cite{Campbell}. The spin-1 spin-orbit coupling is represented by the second term in Eq.~(\ref{SocH}) with a fixed coupling strength due to the experimentally chosen gauge.
In our dimensionless equations, the units of momentum, length, and energy are $\hbar k_{\text{Ram}}$, $1/k_{\text{Ram}}$,
and $\hbar^2k^2_{\text{Ram}}/2m$, respectively.
Here, $m$ is the atom mass, and $k_{\text{Ram}} = 2\pi/\lambda_{\text{Ram}}$ is the wave number of the Raman lasers with $\lambda_{\text{Ram}}$ being the wavelength.
The quadratic Zeeman field is periodically driven,
\begin{equation}
\xi(t)=\alpha \cos(\omega t),
\end{equation}
with $\omega$ and $\alpha$ being
the frequency and amplitude of the driving, respectively.
$c_{0}$ and $c_{2}$ in Eq.~(\ref{EnergyS}) denote density-density and spin-spin interactions, respectively,
which depend on the $s$-wave scattering lengths in the total spin channels.
In this work, we assume a repulsive density-density interaction ($c_0>0$), while the
spin-spin interaction $c_2$ can be either positive (antiferromagnetic) or negative (ferromagnetic).
For a high-frequency driving, we can derive an effective static Hamiltonian by averaging the time-dependent Hamiltonian over one modulation period~\cite{Eckardt}.
We transform the system into an oscillating frame by using the transformation,
\begin{align}
U(t)=\exp\left(-i\frac{\alpha}{\omega}\sin(\omega t)F_z^2\right).
\end{align}
After applying the transformation $\Phi=U(t)\Psi$, the resulting rapidly oscillating terms are dropped by averaging over one driving period. We then end up with the time-independent energy functional
\begin{align}
E[\Psi]&=\int d\bm{r} \Psi^\dagger\left[ H^{\prime}_{\text{SOC}}
+\frac{c_0}{2} \Psi^\dagger \Psi+\frac{c_2}{2}
\Psi^\dagger \bm{F} \Psi\cdot \bm{F} \right] \Psi \notag\\
&\phantom{={}} +c_{f}\int d\bm{r} \left( \Psi_{1}^{\dagger}\Psi_{3}^{\dagger}\Psi^{2}_{2}+ \Psi_{1}\Psi_{3}\Psi_{2}^{\dagger2}\right).
\label{eq:energy}
\end{align}
The energy functional describes a spin-orbit-coupled Floquet spinor BEC with the spinor $\Psi=(\Psi_1,\Psi_2,\Psi_3)$ . The modulated single-particle Hamiltonian is
\begin{align}
H^{\prime}_{\text{SOC}}&=\left(-i\frac{\partial}{\partial x} +2F_z\right)^2 + \varepsilon F^2_{z}
+\frac{\Omega}{\sqrt{2}}J_0\left(\frac{\alpha}{\omega}\right) F_{x}.
\label{eq:SOC}
\end{align}
Note that the only difference between the Floquet spin-orbit coupled Hamiltonian $H^{\prime}_{\text{SOC}}$ and the original one $H_{\text{SOC}}$ is that the Rabi frequency is modulated by the zero-order Bessel function of the first kind $J_0(\alpha/\omega)$.
The density-density and spin-spin interactions in Eq.~(\ref{eq:energy}) are the same as in a usual spinor BEC. Nevertheless, there is a new spin-exchange interaction with coefficient $c_f$, which is a pure effect of the Floquet modulation~\cite{Kazuya},
\begin{equation}
c_{f}=c_{2}\left[1- J_{0}\left(2 \alpha /\omega \right)\right].
\end{equation}
The spin-orbit-coupled Floquet spinor BEC is reduced back to a usual spin-orbit-coupled spinor BEC if the driving disappears, i.e., $\alpha/\omega=0$.
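The appearance of the Bessel functions can be traced to the time average of the phase factors generated by $U(t)$: matrix elements that change $F_z^2$ by one unit acquire a factor $e^{\pm i(\alpha/\omega)\sin\omega t}$, whose period average is $J_0(\alpha/\omega)$, while the spin-exchange terms acquire $e^{\pm 2i(\alpha/\omega)\sin\omega t}$, averaging to $J_0(2\alpha/\omega)$. The following short numerical check of this averaging identity is a minimal sketch and does not depend on any particular parameter choice of this paper.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

omega = 2*np.pi      # driving frequency; its value drops out of the average
T = 2*np.pi/omega

def period_average(z):
    """(1/T) * int_0^T exp(i z sin(omega t)) dt, real and imaginary parts."""
    re, _ = quad(lambda t: np.cos(z*np.sin(omega*t)), 0, T)
    im, _ = quad(lambda t: np.sin(z*np.sin(omega*t)), 0, T)
    return re/T, im/T

for z in (2.0, 4.0):   # alpha/omega = 2 gives z = 2 and 2*alpha/omega = 4
    avg_re, avg_im = period_average(z)
    print(z, avg_re, j0(z), avg_im)   # avg_re matches J_0(z); avg_im is ~0
\end{verbatim}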
\begin{figure}[t]
\includegraphics[width=3.2in]{Fig1.pdf}
\caption{Quantum ground-state phase diagram of a noninteracting spin-orbit-coupled Floquet spinor BEC in the space of the Rabi frequency $\Omega$ and the quadratic Zeeman field $\varepsilon$. The driving is $\alpha/\omega=2$ ($J_0(\alpha/\omega)=0.224$). The background corresponds to values of the tensor magnetization $\langle {F}^2_z \rangle$.
The black and white dotted lines represent first-order and second-order phase transitions, respectively. Below the dotted lines is the plane-wave phase, and beyond is the zero momentum phase.
The red star denotes a tricritical point. Insets show the lowest bands of the single-particle dispersion.
The black dashed lines separate different regions where the lowest band of the dispersion has three, two or one local energy minima.
}
\label{Fig1}
\end{figure}
\section{Phase diagram of the noninteracting spin-orbit-coupled Floquet spinor BEC}
\label{noninteracting}
We now study the quantum phases of a spin-orbit-coupled Floquet spinor BEC. First, we analyze the single-particle phase diagram, which has been addressed in Refs.~\cite{Lan,Sun, Yu}. The analysis of the single-particle phase diagram provides insight into the ground states of the interacting system. The dispersion of $H^{\prime}_{\text{SOC}}$ can be calculated by direct diagonalization. Depending on the spin-orbit-coupling parameters, the lowest band of the dispersion may have three, two, or one local minima. The ground state occupies one of these minima. Therefore, a general ground-state wave function can be written as
\begin{align}
\Psi=\sqrt{\bar{n}}e^{ikx}\left(\begin{array}{c}
\cos\theta\cos\varphi \\ -\sin\theta \\ \cos\theta\sin\varphi
\end{array}\right),
\label{eq:pw}
\end{align}
where $\bar{n}=N/V$, with $N$ being the total atom number and $V$ the volume of the system,
$k$ is the quasimomentum, and $\theta$ and $\varphi$ are two parameters. By substituting Eq.~(\ref{eq:pw}) into Eq.~(\ref{eq:energy}) (with $c_0=c_2=0$), we obtain the energy per particle,
\begin{align}
E_{k}=k^{2}- \left( \frac{A_{k}^{\prime}}{54}\right)^{\frac{3}{2}}-A_{k} \left(\frac{2} {27 A_{k}^{\prime}}\right)^{\frac{3}{2}} +\frac{2}{3} A_{0},
\end{align}
with
\begin{align}
A_{k}&=48 k^{2}+(\varepsilon+4)^2+\frac{3}{2}J^2_{0}\left(\frac{\alpha}{\omega}\right) \Omega^{2},\notag\\
A^{\prime}_{k}&=(\varepsilon+4)A_{k}^{\prime \prime}+\sqrt{(\varepsilon+4)^{2} A_{k}^{\prime \prime 2}-4 A_{k}^{3}} , \notag\\
A_{k}^{\prime \prime}&=-288 k^{2}+2(\varepsilon+4)^{2}+\frac{9}{2}J^2_{0}\left(\frac{\alpha}{\omega}\right) \Omega^{2}. \notag
\end{align}
Then the quasimomentum can be determined by solving $\partial E_k/\partial k=0$. The occupation of $k=0$ is the zero momentum (ZM) state, and the occupation of a nonzero quasimomentum is the plane-wave (PW) state.
Fig.~\ref{Fig1} shows the ground-state phase diagram in the $(\Omega, \varepsilon)$ plane,
in which the tensor magnetization $\langle {F}^2_z \rangle=\cos^2\theta$ is chosen as the order parameter. The dotted lines mark the transition between the PW and ZM phases; above them lies the ZM phase and below them the PW phase. We also show the lowest band of $H^{\prime}_{\text{SOC}}$ in Fig.~\ref{Fig1}. The dashed line in the ZM regime separates the region where the lowest band has only one minimum at $k=0$ (above) from the region where it has three local minima with the lowest one at $k=0$ (below). In the PW regime, the lowest band may have three local minima or two; the separation between these two cases is indicated by the black dashed lines. The two dashed lines merge with the phase transition line at the so-called tricritical point, which is labeled by the red star in Fig.~\ref{Fig1}. The location of the tricritical point can be calculated analytically from $\partial^2 E_k/\partial k^2=0$ together with the condition of equal energy between the PW and ZM states~\cite{Sun,Yu}. The calculated value for the tricritical point is $(\Omega^{\ast},\varepsilon^{\ast})=(30.14,-1.66)$. When $\Omega<\Omega^{\ast}$ the PW-ZM transition is first order, and when $\Omega>\Omega^{\ast}$ it is second order.
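The single-particle bands used above can be reproduced by diagonalizing the $3\times3$ matrix obtained from Eq.~(\ref{eq:SOC}) with $-i\partial/\partial x \rightarrow k$. The sketch below is a minimal illustration; the values of $\Omega$, $\varepsilon$ and $\alpha/\omega$ are arbitrary and serve only to locate the local minima of the lowest band numerically.
\begin{verbatim}
import numpy as np
from scipy.special import j0

# Spin-1 matrices
Fz = np.diag([1.0, 0.0, -1.0])
Fx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)

def lowest_band(k, Omega, eps, drive):
    """Lowest eigenvalue of H'_SOC(k), in units of hbar^2 k_Ram^2 / 2m."""
    H = (np.diag([(k + 2.0)**2, k**2, (k - 2.0)**2])
         + eps * Fz @ Fz
         + Omega / np.sqrt(2) * j0(drive) * Fx)
    return np.linalg.eigvalsh(H)[0]

Omega, eps, drive = 10.0, -1.0, 2.0        # arbitrary illustrative values
ks = np.linspace(-4.0, 4.0, 2001)
E = np.array([lowest_band(k, Omega, eps, drive) for k in ks])

# interior local minima of the lowest band
idx = np.where((E[1:-1] < E[:-2]) & (E[1:-1] < E[2:]))[0] + 1
print([(round(ks[i], 3), round(E[i], 4)) for i in idx])
\end{verbatim}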
\section{Phase diagram of the interacting spin-orbit-coupled Floquet spinor BEC}
\label{interacting}
For a spin-orbit-coupled spin-1 BEC, previous works revealed ground states including the PW, ZM, and stripe phases, and rich phase transitions between them~\cite{Campbell, Sun,Yu,Martone2016}. The single-particle spin-orbit-coupled dispersion provides diverse arrangements of energy minima, and the interactions determine how atoms condense into these minima. Since the dispersion of $H^{\prime}_{\text{SOC}}$ has at most three energy minima, we construct ground-state wave functions as a superposition of the spinors at these minima, which can be written as
\begin{align}
\Psi&=\sqrt{\bar{n}} C_{+}e^{ikx}\left(\begin{array}{c}
\cos\theta_1\cos\varphi \\ -\sin\theta_1 \\ \cos\theta_1\sin\varphi
\end{array}\right)+\sqrt{\bar{n}} C_{0}\left(\begin{array}{l}
\sin\theta_2/\sqrt{2} \\ -\cos\theta_2 \\ \sin\theta_2/\sqrt{2}
\end{array}\right) \notag\\
&\phantom{={}}+\sqrt{\bar{n}}C_{-}e^{-ikx} \left(\begin{array}{c}
\cos\theta_1\sin\varphi \\ -\sin\theta_1 \\ \cos\theta_1\cos\varphi
\end{array}\right).
\label{eq:variation}
\end{align}
The superposition coefficients satisfy normalization condition, $|C_{+}|^2+|C_0|^2+|C_{-}|^2=1$. The spinors are the eigenstates of $H^{\prime}_{\text{SOC}}$ with the concrete parameters $\theta_{1,2}$ and $\varphi$ to be specified. The second state in Eq.~(\ref{eq:variation}) is the spinor at $k=0$, and the first and third ones are spinors modulated by the plane waves at $\pm k$. The symmetry of $H^{\prime}_{\text{SOC}}$ requires that the first and third states have the same $\theta_1$ and $\varphi$. We substitute the above variational wave functions Eq.~(\ref{eq:variation}) into the energy functional in Eq.~(\ref{eq:energy}). The minimization of the resultant energy functional gives the values of parameters $k$, $C_{0,\pm}$, $\theta_{1,2}$ and $\varphi$. From these parameters, we can classify ground states:
the ZM phase has $C_\pm=0$; the PW phase has a nonzero $k$ and $C_0=0$ with one of $C_\pm$ being nonzero; the stripe phase requires $k\ne 0$ and at least two of $C_{\pm,0}$ are nonzero. The stripe phases can be further classified according to relative values of $C_{\pm,0}$~\cite{Yu,Martone2016}.
Considering that the classification of ground states depends strongly on $C_{\pm,0}$, we use the tensor magnetization $\left\langle F^2_z\right\rangle$ as the order parameter to identify different phases,
\begin{equation}
\left\langle F^2_z\right\rangle=\left(\left|C_{+}\right|^2+\left|C_{-}\right|^2\right) \cos ^2 \theta_1+\left|C_0\right|^2 \sin ^2 \theta_2.
\label{order}
\end{equation}
We find that antiferromagnetic and ferromagnetic spin-spin interactions have very different ground-state phase diagrams,
which are studied separately.
\subsection{Antiferromagnetic interactions}
\begin{figure}[t]
\includegraphics[ width=3.2in]{Fig2.pdf}
\caption{Quantum ground-state phase diagram of a spin-orbit coupled Floquet spinor BEC with an antiferromagnetic spin-spin interaction ($\bar{n}c_0=1$ and $\bar{n}c_2=0.1$). The background corresponds to values of the tensor magnetization $\langle {F}^2_z \rangle$ defined in Eq.~(\ref{order}). The black and white dotted lines represent the first-order and second-order phase transitions.
The different tricritical points are denoted by the red and purple stars.
The driving is $\alpha/\omega=2$ ($J_0(\alpha/\omega)=0.224$ and $J_0(2\alpha/\omega)=-0.397$).
}
\label{Fig2}
\end{figure}
The antiferromagnetic interaction corresponds to $c_2>0$, which is typical for the $^{23}$Na BEC. Fig.~\ref{Fig2} demonstrates the phase diagram for antiferromagnetic interactions with a driving $\alpha/\omega=2$ in the space of the quadratic Zeeman field $\varepsilon$ and the Rabi frequency $\Omega$. When $\varepsilon$ is negative, the single-particle dispersion has two lowest minima located at $\pm k_m$ [see the inset in Fig.~\ref{Fig1}], and for a low $\Omega$ the antiferromagnetic interaction allows atoms to occupy these two minima simultaneously to form a stripe. This stripe phase, labeled S1 in Fig.~\ref{Fig2}, has $|C_{+}|=|C_{-}|=1/\sqrt{2}$ and $C_0=0$.
Using the wave functions in Eq.~(\ref{eq:variation}) with $C_0=0$ and considering the single-particle spinors at $\pm k_m$ having $\varphi= \pi/2$, we get the energy of antiferromagnetic interaction $\langle E \rangle_{c_2}$ and Floquet-induced spin-exchange interaction $\langle E \rangle_{c_f}$,
\begin{align}
\langle E \rangle_{c_2} +\langle E \rangle_{c_f} &=\frac{c_2\bar{n}^2 }{2}\cos^4\theta_1 + c_2\bar{n}^2|C_{-}|^2|C_{+}|^2
\notag \\
&\phantom{={}}\cdot\left[(1+\frac{c_f}{c_2}) \sin^2(2\theta_1)-2\cos^4\theta_1\right].
\label{Twoenergy}
\end{align}
For a low $\Omega$, we have $\theta_1\approx 0$ and the minimization of $\langle E \rangle_{c_2} +\langle E \rangle_{c_f}$ leads to $|C_{+}|=|C_{-}|=1/\sqrt{2}$,
corresponding to the S1 phase, the tensor magnetization of which is $\left\langle F^2_z\right\rangle \approx 1$, as shown in Fig.~\ref{Fig2}.
$\theta_1$ prefers to be nonzero for a large $\Omega$.
Meanwhile, the first term
$c_2\bar{n}^2\cos^4\theta_1/2$ in $\langle E \rangle_{c_2} +\langle E \rangle_{c_f}$ is minimized as $\theta_1$ approaches $\pi/2$,
so $\theta_1$ grows from zero toward $\pi/2$ as $\Omega$ increases. Consequently, for a high $\Omega$, we may have $(1+c_f/c_2) \sin^2(2\theta_1)-2\cos^4\theta_1>0$. Then the minimization of $\langle E \rangle_{c_2} +\langle E \rangle_{c_f}$ requires one of $C_{\pm}$ to be zero. Even though the single-particle dispersion has two minima, the antiferromagnetic interaction chooses one of them to occupy, generating the PW phase shown in Fig.~\ref{Fig2}. The phase transition between the S1 and PW phases is first order. Physically, $\langle E \rangle_{c_2} +\langle E \rangle_{c_f}$ is proportional to $c_2\bar{n}^2 |C_{+}|^2|C_{-}|^2 [(1+c_f/c_2)\langle F_x \rangle_+ \langle F_x \rangle_- + \langle F_z \rangle_+ \langle F_z \rangle_-]$, where $\langle F_x \rangle_\pm $ and $\langle F_z \rangle_\pm $ are the $x$ and $z$ polarizations of the spinors at $\pm k_m$. The antiferromagnetic interaction generates $ \langle F_z \rangle_+ \langle F_z \rangle_- <0$ and the Rabi coupling favors $\langle F_x \rangle_+ \langle F_x \rangle_->0$. The competition between these two effects gives rise to the S1-PW transition, and we have $\left\langle F^2_z\right\rangle < 1$ in the PW phase [see Fig.~\ref{Fig2}].
The emergence of the ZM phase in Fig.~\ref{Fig2} is due to the fact that the lowest minimum of the single-particle dispersion lies at $k=0$. There is a second stripe phase, labeled S2, which is unique to antiferromagnetic interactions. The S2 phase is featured with $|C_-|=|C_+|\ne 0, |C_0| \ne 0$ and $\Theta\equiv \arg(C_-)+ \arg(C_+)-2\arg(C_0)=\pi$.
At first glance, the phase diagram shown in Fig.~\ref{Fig2} is similar to that of a usual spin-orbit-coupled BEC demonstrated in Refs.~\cite{Yu,Martone2016} (i.e., Fig.~1(a) in~\cite{Yu} and Fig.~1 in~\cite{Martone2016}). There are two tricritical points represented by stars in Fig.~\ref{Fig2}. The first (second) order phase transitions between different phases are shown by black-dotted (white-dotted) lines.
However, there are two new features in our system. (i) All phases exist in a broadened region of the Rabi frequency. This is a straightforward consequence of the Bessel-function modulation $\Omega J_0$. (ii) The existence of the S2 phase is also extended in the $\varepsilon$ domain. In the usual spin-orbit-coupled antiferromagnetic BEC the S2 phase exists in an extremely narrow region of $\varepsilon$ (see Fig.~1(a) in~\cite{Yu} and Fig.~1 in~\cite{Martone2016}). Our Floquet system has a large extension, which benefits from the Floquet-induced interaction.
\begin{figure}[t]
\includegraphics[ width=3.2in]{Fig3.pdf}
\caption{Phase diagram for an antiferromagnetic interaction ($\bar{n}c_0=1$ and $\bar{n}c_2=0.1$) as a function of the driving $\alpha/\omega$. The Rabi frequency is $\Omega=2$. The background corresponds to values of the tensor magnetization $\langle {F}^2_z \rangle$.
The black and white dotted lines represent the first-order and second-order phase transitions.}
\label{Fig3}
\end{figure}
To reveal the extension of the stripe regions more clearly, we study the phase diagram as a function of the driving $\alpha/\omega$. The results are shown in Fig.~\ref{Fig3}
for $\Omega=2$.
It is clear from the figure that without the driving ($\alpha/\omega =0$) the S2 phase exists in an extremely narrow region of $\varepsilon$, which poses a challenge for its experimental observation. The upper boundary of the S2 phase corresponds to the degeneracy of the three minima of the single-particle dispersion, i.e., $E_{-k_{m}}= E_{k=0}=E_{k_{m}}$; beyond this boundary $E_{k=0}$ becomes the lowest, so that the ground state is the ZM phase. Below the boundary, we have $E_{-k_{m}}=E_{k_{m}}< E_{k=0}$.
For a low $\Omega$, the spinors at $\pm k_{m}$ have $\theta_1=0$, $\varphi=\pi/2$ and the spinor at $k=0$ has $\theta_2=0$. The general wave function becomes
\begin{equation}
\Psi=\sqrt{\bar{n}} \begin{pmatrix}
C_-e^{-ikx} \\ -C_0 \\ C_+e^{ikx}
\end{pmatrix},
\label{Approx}
\end{equation}
which is a good approximation for a low $\Omega$. By using Eq.~(\ref{Approx}), we find that
the antiferromagnetic energy can be minimized as $\langle E \rangle_{c_2}=0$ in both the S1 phase ($|C_{\pm}|=1/\sqrt{2}, C_0=0$) and the S2 phase
($|C_-|=|C_+|<1/\sqrt{2}$, $C_0\neq 0$, $\Theta=\pi$) .
However, the S2 phase does not minimize the quadratic Zeeman energy $\langle E \rangle_{\varepsilon}=\varepsilon\bar{n} ( |C_-|^2+|C_+|^2 )$ for
$\varepsilon<0$, so the ground state is the S1 phase. A dominant Rabi frequency $\Omega$ causes a small deviation of $\theta_1$ from 0, i.e., $\theta_1=\delta\theta$, where $\delta\theta>0$ is a very small quantity. This deviation leads to $\langle E \rangle_{c_2}=8c_2\bar{n}^2|C_+|^4(\delta \theta)^2$ for both the S1 and S2 phases. Since the S1 phase has $|C_+|^2=1/2$ and the S2 phase has $|C_+|^2<1/2$, this extra antiferromagnetic energy prefers the S2 phase as the ground state if the quadratic Zeeman energy is weak. If the quadratic Zeeman energy exceeds this extra energy, the S1 phase returns as the ground state. Since the extra energy is a small quantity of second order, the S2 ground state exists only in a very small $\varepsilon$ domain.
In the presence of the driving, the region of the S2 phase is dramatically extended around $\varepsilon=0$ [see Fig.~\ref{Fig3}]. The upward shift of the region is due to the Bessel-function-modulated Rabi frequency. As the driving $\alpha/\omega$ increases from 0, $\Omega J_0(\alpha/\omega)$ decreases towards zero. As shown in Fig.~\ref{Fig2}, for a small $\Omega$, the S2 phase is located around $\varepsilon=0$. The dramatic expansion of the existence area is the consequence of the Floquet-induced spin-exchange interaction. The S2 phase can greatly minimize the spin-exchange-interaction energy, which can be easily seen from the approximate wave function for a low $\Omega$ in Eq.~(\ref{Approx}). With this wave function, the spin-exchange-interaction energy
becomes $\langle E \rangle_{c_f}=2c_f\bar{n}^2 |C_-||C_+||C_0|^2\cos(\Theta)$. The S2 phase, having $0<|C_-|=|C_+|<1/\sqrt{2}$ and $\Theta=\pi$, makes the spin-exchange energy minimized. Other phases, such as the ZM phase ($C_0=1$), the PW phase ($C_0=0$, $|C_+|+|C_-|=1$), and the S1 phase ($C_0=0, |C_{\pm}|=1/\sqrt{2}$), lead to $\langle E \rangle_{c_f}=0$,
so that the Floquet-induced spin-exchange energy cannot be minimized. Meanwhile, the S2 phase also minimizes the antiferromagnetic interaction energy, $\langle E \rangle_{c_2}=c_2\bar{n}^2/2 (|C_-|^2-|C_+|^2)^2 +c_2\bar{n}^2[|C_-|^2|C_0|^2+|C_+|^2|C_0|^2+2|C_-||C_+||C_0|^2\cos(\Theta)]=0$. The only obstacle for the existence of the S2 phase is the quadratic Zeeman energy $\langle E \rangle_{\varepsilon}=\varepsilon \bar{n}( |C_-|^2+|C_+|^2 )$. If $\varepsilon>0$, the quadratic Zeeman energy prefers the ZM phase and when $\varepsilon<0$ it prefers the S1 phase. Therefore, the competition between the Floquet-induced spin-exchange interaction and the quadratic Zeeman field leads to the existence region for the S2 phase which is dramatically extended in comparison with the usual case with $\alpha/\omega=0$. The S2-ZM (white-dotted) and S2-S1 (black-dotted) transition lines oscillate as a function of $\alpha/\omega$. It is noted that the maxima of the transition lines correspond to the zeros of $J_0( \alpha/\omega)$, therefore the oscillations come from $\Omega J_0(\alpha/\omega)$. It is also interesting that without the driving the S2 phase always exists in the negative $\varepsilon$ area, but with the driving it can exist even in positive $\varepsilon$ areas.
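The competition described above can be made concrete with a small numerical minimization of the low-$\Omega$ energy discussed in this paragraph, $\langle E \rangle_{\varepsilon}+\langle E \rangle_{c_2}+\langle E \rangle_{c_f}$, over $(|C_-|,|C_+|,|C_0|,\Theta)$ with $|C_-|^2+|C_+|^2+|C_0|^2=1$. The sketch below is only a toy model of this reduced energy: it assumes the degenerate-minima limit and omits the kinetic and density-density contributions, and the parameter values are illustrative.
\begin{verbatim}
import numpy as np
from scipy.special import j0

nbar, c2, drive = 1.0, 0.1, 2.0            # antiferromagnetic, illustrative
cf = c2*(1.0 - j0(2*drive))                # Floquet-induced coefficient

def energy(sm, sp_, c0sq, cosT, eps):
    """eps-, c2- and cf-parts of the reduced low-Omega energy."""
    e_eps = eps*nbar*(sm**2 + sp_**2)
    e_c2  = 0.5*c2*nbar**2*(sm**2 - sp_**2)**2 \
            + c2*nbar**2*(sm**2*c0sq + sp_**2*c0sq + 2*sm*sp_*c0sq*cosT)
    e_cf  = 2*cf*nbar**2*sm*sp_*c0sq*cosT
    return e_eps + e_c2 + e_cf

def ground_state(eps, n=201):
    best = None
    for sm in np.linspace(0, 1, n):
        for sp_ in np.linspace(0, np.sqrt(max(1 - sm**2, 0)), n):
            c0sq = 1 - sm**2 - sp_**2
            for cosT in (-1.0, 1.0):
                E = energy(sm, sp_, c0sq, cosT, eps)
                if best is None or E < best[0]:
                    best = (E, sm, sp_, c0sq)
    return best

for eps in (-0.3, 0.05, 0.3):   # below, inside and above the expected S2 window
    E, sm, sp_, c0sq = ground_state(eps)
    print(f"eps={eps:+.2f}  |C-|={sm:.2f}  |C+|={sp_:.2f}  |C0|^2={c0sq:.2f}")
\end{verbatim}
Within this toy model the output reproduces the qualitative picture described above: for sufficiently negative $\varepsilon$ the minimizer is an S1-like configuration, for sufficiently positive $\varepsilon$ a ZM-like configuration, and in between a mixed S2-like configuration with $\Theta=\pi$ is selected by the Floquet-induced term.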
\subsection{Ferromagnetic interactions}
The ferromagnetic interaction is $c_2<0$. We consider $c_2/c_0=-0.5$, which is typical of $^7$Li atoms~\cite{Martone2016}. Fig.~\ref{Fig4} demonstrates the phase diagram for ferromagnetic interactions with a driving $\alpha/\omega=1.6$. In the low $\Omega$ region, there is a third stripe phase, which is labeled as S3 in Fig.~\ref{Fig4}. It has $|C_-|=|C_+|\ne 0, |C_0| \ne 0$, and $\Theta=0$. Using the approximate wave function in Eq.~(\ref{Approx}), we know that the S3 phase only minimizes the second term in the ferromagnetic interaction energy $\langle E \rangle_{c_2}=c_2\bar{n}^2/2 (|C_-|^2-|C_+|^2)^2 +c_2\bar{n}^2[|C_-|^2|C_0|^2+|C_+|^2|C_0|^2+2|C_-||C_+||C_0|^2\cos(\Theta)]$ ($c_2<0$) and it cannot minimize the first term $c_2\bar{n}^2/2 (|C_-|^2-|C_+|^2)^2$ which is minimized by the PW phase. With the effect of the quadratic Zeeman field, the S3, PW and ZM phases are distributed in the way shown in Fig.~\ref{Fig4}. These three phases are similar to the previous studies~\cite{Yu,Martone2016} (i.e., Fig.~1(b) in~\cite{Yu} and Fig.~2 in~\cite{Martone2016}), but with an outstanding feature that every phase exists in a broaden region of $\Omega$ due to the Bessel-function modulation.
\begin{figure}[t]
\includegraphics[width=3.2in]{Fig4.pdf}
\caption{Quantum ground-state phase diagram of a spin-orbit coupled Floquet spinor BEC with a ferromagnetic spin-spin interaction $\bar{n}c_0=1$ and $\bar{n}c_2=-0.5$. The background corresponds to values of the tensor magnetization $\langle {F}^2_z \rangle$. The black and white dotted lines represent the first-order and second-order phase transitions. The different tricritical points are denoted by red and yellow stars. The driving is $\alpha/\omega=1.6$ ($J_0(\alpha/\omega)=0.455$ and $J_0(2\alpha/\omega)=-0.320$). }
\label{Fig4}
\end{figure}
Different from the case of $\alpha/\omega=0$ in Refs.~\cite{Yu,Martone2016, Campbell}, we find in the Floquet spinor BEC that there exists a new stripe phase, which is labeled as S4. The S4 phase is located inside the region where the single-particle dispersion has two energy minima at $\pm k_m$,
and they are equally occupied by the S4 phase with $|C_{\pm}|=1/\sqrt{2}$ and $C_0=0$. This condition is exactly the same as for the S1 phase with antiferromagnetic interactions. Nevertheless, the S1 phase exists in the low $\Omega $ region [see Fig.~\ref{Fig2}] while the S4 phase is in the high $\Omega$ region [see Fig.~\ref{Fig4}]. Such a difference in the existence region with respect to $\Omega$ brings new features to the S4 phase. With $C_0=0$, the minimization of the ferromagnetic energy and the Floquet-induced energy demonstrated in Eq.~(\ref{Twoenergy}) leads to $|C_-|=0$ or $|C_+|=0$ for low $\Omega$ ($\theta_1\approx 0$). In this case, the ground state is the PW phase with $\left\langle F^2_z\right\rangle\approx 1$, as shown in Fig.~\ref{Fig4}.
For a high $\Omega$, one may have $\theta_1\ne 0$ and $(1+c_f/c_2) \sin^2(2\theta_1)-2\cos^4\theta_1>0$.
The minimization of
$\langle E \rangle_{c_2} +\langle E \rangle_{c_f}$ requires $|C_{\pm}|=1/\sqrt{2}$, so that the ground state is the S4 phase. Due to the existence of the S4 phase, there are two tricritical points labeled by stars in Fig.~\ref{Fig4}.
\begin{figure}[b]
\includegraphics[ width=3.2in]{Fig5.pdf}
\caption{Phase diagram in a ferromagnetic interaction ($\bar{n}c_0=1$ and $\bar{n}c_2=-0.5$) as a function of the driving $\alpha/\omega$. The Rabi frequency is $\Omega=8$. The background corresponds to values of the tensor magnetization $\langle {F}^2_z \rangle$. The black and white dotted lines represent the first-order and second-order phase transitions. The different tricritical points are denoted by red and yellow stars.}
\label{Fig5}
\end{figure}
We want to emphasize that without the driving ($c_f=0$) the S4 phase cannot exist~\cite{Yu,Martone2016, Campbell}.
In the absence of the driving, Eq.~(\ref{Twoenergy}) becomes $\langle E \rangle_{c_2} =c_2\bar{n}^2 /2\cos^4\theta_1+
c_2\bar{n}^2|C_{-}|^2|C_{+}|^2\left[ \sin^2(2\theta_1)-2\cos^4\theta_1\right]$. For $c_2<0$,
the first term $c_2\bar{n}^2 /2\cos^4\theta_1$ prefers $\theta_1=0$. According to the second term, the realization of the S4 phase needs a nonzero $\theta_1$ satisfying $\sin^2(2\theta_1)-2\cos^4\theta_1>0$, which can be achieved by increasing the Rabi frequency.
Besides, the negative Rabi coupling energy is also beneficial for lowering the total energy.
However, the single-particle dispersion with a large $\Omega$ will have the energy minimum at $k=0$
lower than the energy minima at $k=\pm k_{m}$, and the ground state prefers the ZM phase. Thus, there is no way for the S4 phase to exist. The Floquet-induced interaction has the nature of a spin exchange. It has two effects: first, the spin-exchange interaction competes directly with the first term, since it prefers $\theta_1=\pi/4$, so that the three components have equal populations in each spinor; second, according to Eq.~(\ref{Twoenergy}), the S4 phase requires $ (1+c_f/c_2) \sin^2(2\theta_1)-2\cos^4\theta_1>0$,
and the positive prefactor $c_f/c_2$ also increases the possibility for $\theta_1$ to satisfy this requirement. Therefore, the combined effects of the Rabi coupling and the Floquet-induced interaction make the existence of the S4 phase possible.
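To make the role of the prefactor $(1+c_f/c_2)$ explicit, one can solve $(1+c_f/c_2)\sin^2(2\theta_1)-2\cos^4\theta_1=0$ for the threshold angle as a function of the driving, using $c_f/c_2=1-J_0(2\alpha/\omega)$. The sketch below is purely illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.special import j0

def threshold_theta(drive):
    """Smallest theta_1 satisfying (1+cf/c2)*sin(2t)^2 - 2*cos(t)^4 = 0."""
    ratio = 1.0 - j0(2.0*drive)         # cf/c2 = 1 - J0(2*alpha/omega)
    g = lambda t: (1.0 + ratio)*np.sin(2.0*t)**2 - 2.0*np.cos(t)**4
    return brentq(g, 1e-6, np.pi/2 - 1e-6)

# the threshold angle decreases as cf/c2 grows
for drive in (0.0, 0.8, 1.6, 2.4):
    print(drive, np.degrees(threshold_theta(drive)))
\end{verbatim}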
\begin{figure}[t]
\includegraphics[ width=3.2in]{Fig6.pdf}
\caption{Profiles of the S4 phase in a ferromagnetic interaction ($\bar{n}c_0=1$ and $\bar{n}c_2=-0.5$). The driving is $\alpha/\omega=1.6$ ($J_0(\alpha/\omega)=0.455$ and $J_0(2\alpha/\omega)=-0.320$). The quadratic Zeeman field is $\varepsilon=-2$. (a) Spatial density distributions, $n_i=|\Psi_i|^2$, and $n$ the total density $n=n_1+n_2+n_3$. The Rabi frequency is $\Omega=8$. (b) The contrast $(n_{\mathrm{max}}-n_{\mathrm{min}})/(n_{\mathrm{max}}+n_{\mathrm{min}})$ as a function of $\Omega$.}
\label{Fig6}
\end{figure}
In order to know how the S4 phase emerges in the presence of the driving, we analyze the phase diagram as a function of the driving $\alpha/\omega$, which is demonstrated in Fig.~\ref{Fig5}. The Rabi frequency is fixed as $\Omega=8$.
For $\alpha/\omega=0$, the ground state is the ZM phase, as shown in the focused range of $\varepsilon$ in Fig.~\ref{Fig5}, which is consistent with the results in Refs.~\cite{Yu,Martone2016, Campbell}. As $\alpha/\omega$ increases, the S3, S4 and PW phases appear and show the interesting distribution displayed in Fig.~\ref{Fig5}.
The transition (white-dotted and black-dotted) lines have an oscillating behavior, with the maxima matching the zeros of $J_0(\alpha/\omega)$. The S3 and S4 phases are located between the two transition lines. Furthermore, the S4 phase exists only in limited regions. Changing $\alpha/\omega$ is equivalent to scanning $\Omega$. A high $\alpha/\omega$ confines $\Omega J_0(\alpha/\omega)$ around $0$. According to Fig.~\ref{Fig4}, the ground state around $\Omega=0$ is the S3 phase.
Therefore, for a high $\alpha/\omega$ there is no S4 phase anymore [see Fig.~\ref{Fig5}].
In Fig.~\ref{Fig6}(a), we show density distributions $n_i=|\Psi_i|^2$ of a typical S4 state. The outstanding feature is that the second component $n_2$ is comparable with other components $n_1=n_3$. This is completely different from the S1 phase with antiferromagnetic interactions, where $n_2 \ll n_1=n_3$.
This is due to the low $\Omega$ region in which the S1 phase exists. For a low $\Omega$, the spinors at $\pm k_{m}$ can be physically approximated as $e^{ik_mx}(\delta^2, \delta, 1)^T$ and $e^{-ik_mx}(1, \delta, \delta^2)^T$ respectively, where $\delta$ is a small quantity. The S1 phase is an equal superposition of the two spinors, and we have $n_1=n_3=1+2\delta^2\cos(2k_{m}x)$ and $n_2=4\delta^2\cos^2(k_{m}x)$. Therefore, the S1 phase has $n_2 \ll n_1=n_3$ and a very low contrast for $n_1$ and $n_3$, which is proportional to a small quantity of second order. The contrast is defined as $(n_{\mathrm{max}}-n_{\mathrm{min}})/(n_{\mathrm{max}}+n_{\mathrm{min}})$ with $n_{\mathrm{max}}$ ($n_{\mathrm{min}}$) the density maximum (minimum). The low contrast of the S1 phase is unfavorable for experimental observations. However, the S4 phase with ferromagnetic interactions exists in the high $\Omega$ region, and with the further help of the Floquet-induced spin exchange, $\delta$ is no longer a small quantity. Therefore, the contrast of $n_1$ and $n_3$ is appreciably high for the S4 phase. A remarkable feature of the second component is that its contrast is always maximal (equal to one). The appreciable occupation of the second component makes it well suited for direct experimental observation. In Fig.~\ref{Fig6}(b), we show the contrast in the full $\Omega$ region. The contrast of $n_1$ and $n_3$ increases with increasing $\Omega$, and it is always one, as expected, for the second component $n_2$.
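The density profiles quoted above follow directly from the approximate spinors; the short sketch below evaluates them and the corresponding contrasts for a hypothetical smallness parameter $\delta$ (illustrative values only).
\begin{verbatim}
import numpy as np

def superposition_densities(delta, k_m=1.0, npts=400):
    """Densities of the equal superposition of the approximate spinors
    e^{+i k_m x}(delta^2, delta, 1) and e^{-i k_m x}(1, delta, delta^2)."""
    x = np.linspace(0.0, 2*np.pi/k_m, npts)
    n1 = np.abs(delta**2*np.exp(1j*k_m*x) + np.exp(-1j*k_m*x))**2
    n2 = np.abs(delta*np.exp(1j*k_m*x) + delta*np.exp(-1j*k_m*x))**2
    n3 = np.abs(np.exp(1j*k_m*x) + delta**2*np.exp(-1j*k_m*x))**2
    return n1, n2, n3

def contrast(n):
    return (n.max() - n.min())/(n.max() + n.min())

for delta in (0.05, 0.3):      # small delta ~ S1-like, larger delta ~ S4-like
    n1, n2, n3 = superposition_densities(delta)
    # contrast(n1) ~ 2*delta**2, while contrast(n2) is always 1
    print(delta, round(contrast(n1), 4), round(contrast(n2), 4))
\end{verbatim}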
\begin{figure}[t]
\includegraphics[ width=3in]{Fig7.pdf}
\caption{Bogoliubov excitation spectrum $\zeta(q_x)$ of a typical S4 state. The parameters are $\bar{n}c_0=1$, $\bar{n}c_2=-0.5$, $\Omega=7$ and $\varepsilon=-1.1$. The lowest two bands are gapless corresponding to two Nambu-Goldstone modes.
}
\label{Fig7}
\end{figure}
A study closely related to ground states is that of their elementary excitations. The excitation spectra of the phases in usual spin-orbit-coupled spin-1 BECs have been investigated~\cite{Sun,Yu, ChenY}. The new S4 phase exists only in Floquet spinor BECs, and we study its Bogoliubov excitations. The stripe wave function ansatz in Eq.~(\ref{eq:variation}) only includes low-order plane waves. It is known that such an ansatz cannot precisely capture the Bogoliubov excitations and that higher-order plane waves should be included~
\cite{Martone2014,LiY2013,Xiaolong,Lyu,Guanqiangli2021}. Therefore, we use an ansatz with high-order modes~\cite{ChenY},
\begin{equation}
\Psi= \sqrt{\bar{n}} \sum_{j=-L}^L e^{ijKx}\begin{pmatrix}
\varphi_1^{(j)} \\ \varphi_2^{(j)} \\ \varphi_3^{(j)}
\end{pmatrix},
\label{stripeexcitation}
\end{equation}
with the normalization condition $\sum_{\sigma,j}|\varphi_\sigma^{(j)}|^2=1$. Here, $L$ is the cutoff of plane waves, and $K$ relates to the period of the stripes. Spinors $ (\varphi_1^{(j)}, \varphi_2^{(j)}, \varphi_3^{(j)})^T$ and $K$ are determined by minimizing the energy function in Eq.~(\ref{eq:energy}) using Eq.~(\ref{stripeexcitation}). In the S4 phase parameter region, we first get the stripe wave function by the minimization procedures, and then we use the ground state to solve Bogoliubov-de Gennes equation to get the elementary excitation energy $\zeta$~\cite{ChenY} . A typical excitation spectrum $\zeta(q_x)$, i.e., the relation between excitation energy $\zeta$ and excitation quasimomentum $q_x$, is demonstrated in Fig.~\ref{Fig7}, in which only three lowest bands are shown. The size of Brillouin zone is $2K$ which means that the period of stripes is $\pi/K$. The lowest two bands are gapless, corresponding to two Nambu-Goldstone modes. The physical origin of these two gapless modes is that stripes spontaneously break the continuously translational symmetry and gauge symmetry~\cite{LiY2013}.
\section{Conclusion}
\label{conclusion}
Spin-orbit-coupled spin-1 BECs have been realized in experiments. Based on this experimental platform, we propose a spin-orbit-coupled Floquet spinor BEC obtained by periodically driving the quadratic Zeeman field at a high frequency. In the Floquet spinor BEC, the Rabi frequency is modulated by a Bessel function and a Floquet-induced spin-exchange interaction emerges. We study the quantum ground-state phase diagram of a spin-orbit-coupled Floquet spinor BEC, considering antiferromagnetic and ferromagnetic spin-spin interactions separately. A general result is that, due to the Bessel-function modulation, every phase in the diagram can exist in a broadened Rabi frequency region. For antiferromagnetic interactions, we find that the existence region of a stripe phase can be dramatically extended in the $\varepsilon$ domain due to the Floquet-induced spin-exchange interaction. For ferromagnetic interactions, a new stripe phase is revealed, and its features, including a high contrast and its Bogoliubov excitations, are identified. In all previous studies of spin-$1/2$ and spin-1 spin-orbit-coupled BECs, stripes have a very low contrast, since they exist in the low $\Omega$ regime and the contrast is proportional to the Rabi frequency $\Omega$~\cite{Martone2014}. The new stripe phase in the Floquet spinor BEC exists in the high $\Omega$ region, and its high contrast is favorable for experimental observations.
\section*{Acknowledgments}
We appreciate Prof. Peter Engels for stimulating discussions. This work is supported by the National Natural Science Foundation of China with Grants No. 11974235 and No. 11774219.
H.L. acknowledges support from the Okinawa Institute of Science and Technology Graduate University.
\section{Introduction}
This paper defines and develops the theory of signal propagation in double edged relays. We will refer to various configurations of double edged relays as SPIDER models. This acronym is derived from the biological inspiration for this work. If you imagine a spider waiting patiently on its web, it can sense vibrations along the strands of the web and use those vibrations to obtain information about the type and location of prey and other disturbances anywhere in its web. As any frustrated childhood arachnologist can tell you spiders appear to be able to distinguish between different kinds of incident vibrations within the web and are not often fooled by curious children trying to trick the spider into exploring the web.\\
A double edged relay mixes the formalism of finite state automata with the discretized dynamics of the one dimensional wave equation. By blending these two mathematical frameworks we are able to create a very flexible framework for creating signaling simulations which can reveal many interesting features of graphs and networks. By creating double edged relays which are forced to behave like the linear wave equation we are able to exploit several important well known properties of solutions to linear hyperbolic partial differential equations: finite propagation speed, and linear superposition of signals.\\
The paper begins by reviewing a limited form of the d'Alembert solution to the wave equation in one spatial dimension. This theory is used to demonstrate how information can be encoded into traveling waves. This introduction is followed by the definition of a double edged relay, which we prove obeys a limited form of superposition (namely, superposition of signals at different positions on the relay and superposition of amplitudes). With these fundamentals in place we demonstrate how signal propagation in individual arrays can be used to achieve simple computational tasks.\\
With the theoretical foundations in place we explain how to overlay double edged relays on existing graphs, together with several techniques for encoding information into the amplitude of the signals. These ideas are then applied to a well-known graph-theoretic problem: we explore how SPIDER models can solve the shortest path problem and offer a narratively compelling algorithmic alternative to Dijkstra's algorithm. \\
\section{Review of d'Alembert's formula for a stationary initial displacement}
The wave equation of mathematical physics is a hyperbolic partial differential equation which has been studied for hundreds of years. As a hyperbolic equation it possesses many remarkable properties, including linear superposition and a finite propagation speed of information. The d'Alembert solution to the wave equation in one dimension takes a particularly beautiful form when an elastic string is subjected to an initial displacement with no initial velocity \cite{1}. For those unfamiliar with the equation or this particular solution:\\
The initial value problem is to find $u(x,t)$ for $(x,t)\in (-\infty, \infty)\times (0,\infty)$, given $f(x)$:
\begin{align}
\frac{\partial^2 u}{\partial t^2} -c^2\frac{\partial^2 u}{\partial x^2} = 0, \hspace{.2in} u(x,0)= f(x), \hspace{.2in} \frac{\partial u}{\partial t}\Big|_{t=0} = 0, \nonumber\end{align}
has the exact solution:
\begin{align}
u(x,t) = \frac{1}{2}f(x-ct) + \frac{1}{2} f(x+ct) \nonumber
\end{align}
In the special case where $f$ has compact support (i.e. is identically zero outside a finite interval) this eventually resolves into two disjoint waves which appear to have the same shape as the initial displacement, but move in opposite directions at constant speed.\\
\begin{center}
\includegraphics[height=150pt]{wavepacket.png}\end{center}
This behavior is robust and can be observed in simulation and experiment (until the solution decays due to dissipative forces, which are neglected in the theoretical model). Because of this property, one can encode information into any functional shape and transmit it from one location to another along a wave packet. The approach we adopt here is to pick a single signal shape and encode information into the amplitude of the signal, making it stronger or weaker as needed.\\
In 1929 (reprinted in English in 1967) Courant, Friedrichs, and Lewy developed the theory of the difference equations of mathematical physics, and made the remarkable observation that, because of the constant propagation speed of the linear wave equation, it is possible to create solutions from $\delta$ impulses which are exact solutions to both the continuous and discretized equations. These solutions may be simulated with zero error when the grid spacing for the finite difference method is chosen to match the time step.\cite{2} For the applications we envision, it is sufficient to limit our attention to unit propagation speed.
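To make this concrete, the following Python sketch (our own illustration; the grid size, number of steps, and impulse location are arbitrary choices) advances the standard leapfrog discretization of the wave equation with unit Courant number ($\Delta x = \Delta t = c = 1$), so a single $\delta$ displacement splits exactly into two half-amplitude packets travelling in opposite directions, as d'Alembert's formula predicts.
\begin{verbatim}
import numpy as np

N, steps, x0 = 101, 30, 50
u_prev = np.zeros(N)
u_prev[x0] = 1.0                       # initial displacement f(x): a unit delta
# First step uses the zero initial velocity: u(x,1) = (f(x-1) + f(x+1)) / 2
u_curr = np.zeros(N)
u_curr[1:-1] = 0.5 * (u_prev[2:] + u_prev[:-2])
for _ in range(steps - 1):             # leapfrog update with Courant number 1
    u_next = np.zeros(N)
    u_next[1:-1] = u_curr[2:] + u_curr[:-2] - u_prev[1:-1]
    u_prev, u_curr = u_curr, u_next
peaks = np.nonzero(u_curr)[0]
print(peaks, u_curr[peaks])            # two 0.5-amplitude spikes, `steps` cells from x0
\end{verbatim}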
\newpage
\section{The SPIDER model}
The double edged relay is created by using paired, equal-length discrete arrays to link what we will call 'relays'. In all the examples and applications we develop, the relays coincide with the vertices of graphs; the purpose of a relay is to perform operations on the signals carried by the connecting arrays. By changing these operations one can create SPIDER models for myriad applications we have not yet envisioned.\\
One simple instance of a double edged relay is given below:\\
\begin{center}
\includegraphics[height=150pt]{DoubleEdgeRelay.png}
\end{center}
We restrict the behavior of the edges to follow the transport properties of the discretized wave equation. By limiting ourselves to integer edge weights and delta signals we ensure that SPIDER simulations blend exact solutions to both the continuous and the discretized wave equation. This elegant blending of discrete and continuous creates a compelling model with many exploitable mathematical properties. At each time step we move the array entries exactly one position (according to the arrows). This ensures that signals travel along the array with constant speed and obey all of the expected superposition properties of the discrete wave equation. By modeling two edges and giving them opposite propagation directions we ensure that the relays can communicate with each other.\\
When a signal reaches a relay we are free to specify any desired operation, including transport, amplification, splitting, copying, and filtering. A relay could have a single function or multiple functions, and could change its behavior based on any specified criteria. The only restriction on the operation of the relays is that any output signals are computed and transported to the next edge location(s) at the beginning of the time step after the incident signal reaches the relay.\\
This restriction ensures that signals traveling through one or more double edged relays maintain the constant propagation speed of the wave equation, so that the time step is always a reliable measure of the distance a signal (possibly mutated) has traveled through the array.\\
To illustrate some of the possible behaviors we have designed double edged relays that achieve three simple tasks, explaining the relay operations graphically; a minimal code sketch of the loop behaviors is given after the list.
\begin{enumerate}
\item A periodic signal loop:
\begin{center}
\includegraphics[height=75pt]{relayloop.png}
\end{center}
Here, regardless of signal amplitude, we simply bounce signals from one edge to the other. This type of relay precisely mimics a homogeneous Neumann boundary condition.
\item An amplitude alternating loop:
This single relay creates a periodic signal which alternates in sign as it passes from one edge to the other. Several time steps are shown so the reader can follow the logic through the relay.
\begin{center}
\includegraphics[height=75pt]{altloop1.png}
\includegraphics[height=75pt]{altloop2.png}
\includegraphics[height=75pt]{altloop3.png}
\includegraphics[height=75pt]{altloop4.png}
\end{center}
This type of condition mimics a traveling wave packet interacting with a homogeneous Dirichlet boundary condition on a finite domain.
\item A signal which degrades after a finite number of steps. Here the operations performed by the relays have a direct impact on the signal amplitudes and result in non-linear behavior.
\begin{center}
\includegraphics[height=75pt]{finloop1.png}
\includegraphics[height=75pt]{finloop2.png}
\includegraphics[height=75pt]{finloop3.png}
\includegraphics[height=75pt]{finloop4.png}
\end{center}
{\bf{Remark:}} If we amplified the signal amplitude here rather than limiting it, we would create a signal whose amplitude evolves according to a geometric progression.
\end{enumerate}
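The sketch below (our own minimal Python illustration; the array length, initial signal, and the way the far ends of the two arrays are joined are hypothetical choices, not taken from the figures) simulates a single double edged relay whose two arrays form a loop, with either the bouncing or the sign-alternating relay operation described above.
\begin{verbatim}
# Two equal-length arrays: `outgoing` carries signals away from the relay,
# `incoming` carries them back; their far ends are joined to close the loop.
def tick(outgoing, incoming, op):
    arrived_at_relay = incoming[-1]            # cell adjacent to the relay
    wrapped = outgoing[-1]                     # cell at the far end of the loop
    outgoing = [op(arrived_at_relay)] + outgoing[:-1]
    incoming = [wrapped] + incoming[:-1]
    return outgoing, incoming

bounce = lambda a: a      # periodic loop: Neumann-like reflection at the relay
negate = lambda a: -a     # alternating loop: Dirichlet-like reflection

outgoing, incoming = [1, 0, 0, 0], [0, 0, 0, 0]   # one unit signal just emitted
for t in range(16):
    outgoing, incoming = tick(outgoing, incoming, negate)
    print(t, outgoing, incoming)   # the signal circulates, flipping sign each lap
\end{verbatim}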
\section{SPIDERs on graphs}
The most straightforward application of the SPIDER model is to provide a dynamic simulation framework for conducting numerical experiments on graphs. To overlay double edged relays on an arbitrary undirected graph (if the graph is weighted we assume the weights are positive integers), we model the vertices of the graph as the relays, and whenever two vertices are connected we model the graph edge with a double edged array whose length equals that of the original edge. The figure below shows the configuration for a single vertex drawn with 4 incident edges and empty signals within a discrete radius of 2 of this vertex.
\begin{center}
\includegraphics[height=100pt]{emptyvertex.png}
\end{center}
With this overlay in place we adopt several conventions which make the underlying mathematics particularly elegant. We model signals traveling through the relays as integers and treat an amplitude of 1 as an empty signal. We label the vertices of the graph with the prime numbers and have each vertex use the following relay operations.\\
\begin{enumerate}
\item When a signal with amplitude different from 1 is incident upon a relay, we amplify that signal's amplitude by the prime label of the corresponding vertex.\\
\item After amplification, each vertex transmits the newly amplified signal to all connecting outgoing edges. (It is usually also useful to filter out signals that are revisiting vertices they have already visited. This is easy with the prime labeling, since we can check newly amplified signals using modulo filtering. We make exceptions to this filtering when completing cycles, etc.)
\end{enumerate}
We show the interaction between an incident signal and the vertex in the frames below. In the first illustration we only show amplification and splitting. In the second illustration we show how filtering is employed to stop certain signals from propagating.
\begin{center}
\includegraphics[height=100pt]{signalvertex1.png}\hspace{.2in}
\includegraphics[height=100pt]{signalvertex2.png}\hspace{.2in}
\includegraphics[height=100pt]{signalvertex3.png}
\end{center}
Filtering:
\begin{center}
\includegraphics[height=100pt]{filter1.png}\hspace{.2in}
\includegraphics[height=100pt]{filter2.png}\hspace{.2in}
\includegraphics[height=100pt]{filter3.png}
\end{center}
This behavior results in the following emergent property: the amplitude of a signal propagating through the graph carries historical information about the vertices visited by that signal. This powerful idea means that SPIDERs can be used to easily generate algorithms for solving combinatorial problems about paths and cycles through graphs.\\
If we consider the shortest path problem for a particular pair of vertices in a graph with positive integer edge weights, we can create a SPIDER overlay for the graph and initialize it with the prime signal of the starting vertex on all edges leaving that vertex. We let those signals propagate through the graph until they reach the desired ending vertex. The first signal which reaches the destination vertex corresponds to a shortest path. In the case of signal collisions (situations where two or more signals are incident upon the same vertex at the same time step), any one of the incident signals can be propagated forward along the edge, since all of the incident signals can be traced back to the starting vertex using equal-length paths.\\
Because of the rigid adherence to the finite propagation speed of the signals, the shortest path length is guaranteed to equal the number of time steps of the simulation. By using the prime labeling and multiplicative amplification we can factor the amplitude to obtain a list of all the vertices on the shortest path. (By analyzing the corresponding signal propagation we can retrieve the ordering of the path as well; this is guaranteed since this instantiation of the algorithm splits signals forward in time, and backtracking through the signal history is equivalent to backtracking through a tree.) The factoring here is not difficult, since amplitudes are limited to products of distinct vertex primes. If one wishes to use this model for combinatorial optimization rather than just finding the shortest path between two vertices, then using bit strings recording vertex visitation and pointers to the relay edge locations would provide better scaling for parallel implementation. \\
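As a concrete illustration of these rules, the following Python sketch (our own simplification: unit edge lengths, a hypothetical five-vertex graph, and arbitrarily chosen prime labels) advances the prime-amplified signals one edge per time step, filters revisits with a modulo check, and recovers the shortest path by factoring the first amplitude to reach the goal.
\begin{verbatim}
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3, 4], 3: [1, 2, 4], 4: [2, 3]}
prime = {0: 2, 1: 3, 2: 5, 3: 7, 4: 11}
start, goal = 0, 4

# signals in flight: (vertex about to be reached, amplitude accumulated so far)
signals = [(nbr, prime[start]) for nbr in graph[start]]
t = 0
while True:
    t += 1
    arrived = [(v, amp) for v, amp in signals if v == goal]
    if arrived:
        amp = arrived[0][1] * prime[goal]
        path_vertices = [v for v, p in prime.items() if amp % p == 0]
        print("shortest path length:", t, "vertices on path:", path_vertices)
        break
    nxt = []
    for v, amp in signals:
        if amp % prime[v] == 0:      # signal has already visited v: filter it out
            continue
        amp *= prime[v]              # amplify by the prime label of the vertex
        for nbr in graph[v]:
            nxt.append((nbr, amp))
    signals = nxt
\end{verbatim}
On this toy graph the first arrival occurs at time step 2 with amplitude $2\cdot5\cdot11=110$, whose factorization recovers the shortest path $0\to2\to4$.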
\section{Discussion and Further Work}
The SPIDER model provides an interesting hybrid modeling paradigm which mixes elements of continuous linear wave propagation and discrete signal processing in a way that allows knowledge of the analog properties of hyperbolic systems to yield powerful \textit{a priori} guarantees linking the temporal and spatial aspects of computations on graphs. This model belongs to a family of biologically inspired computing approaches, including the Bat algorithm\cite{4} and Water Wave optimization\cite{5}, all of which take narratively and conceptually compelling approaches to computational problems. We feel the SPIDER model's combination of a rigorous foundation, computationally exploitable properties, and extremely simple signal representation options may provide a computational framework whose strengths and weaknesses differ from those of other approaches. We have already created basic implementations of this model for solving the Hamiltonian cycle problem and the traveling salesman problem, and continue to benchmark the performance of the algorithm on these problems.
\section{Introduction}
\label{sec:intro}
Link prediction \citep{lichtenwalter2010new,zhang2018link,safdari2022reciprocity,yang2022few,nasiri2022impact} is an important technique used in various applications concerning complex network systems, such as e-commerce item recommendation \citep{chen2005link}, social network analysis \citep{al2011survey}, knowledge graph relation completion \citep{kazemi2018simple}, and more. State-of-the-art methods leverage Graph Neural Networks (GNNs) to discover latent links in the system. Methods such as Graph AutoEncoder (GAE) \citep{kipf2016variational}, SEAL \citep{zhang2018link,zhang2021labeling}, PLNLP \citep{wang2021pairwise}, and Neo-GNN \citep{yun2021neo} perform reliably on link prediction when the target graph has a high ratio of edge connectivity. However, much real-world data is \textit{sparse}, and these methods are less effective in such situations. This issue is known as the ``cold-start problem,'' and has recently been studied in the context of e-commerce \citep{zheng2021cold} and social networks \citep{leroy2010cold}.
One solution to address the difficulties related to cold-start settings is transfer learning \citep{gritsenkograph,cai2021graph}.
To alleviate sparsity issues of the target graph, transfer learning seeks to bring knowledge from a related graph, i.e., the \textit{source graph}, which shares similar structures or features with the target graph. The source graph should have better observable connectivity.
If such a related source graph can be found, its richer connectivity can support the target graph, augment its training data, and enhance latent link discovery.
However, transferring knowledge between graphs poses significant challenges \citep{kan2021zero,zhu2020transfer}, mainly due to differences in optimization objectives and data distributions between pre-training and downstream tasks (graphs) \citep{han2021adaptive, zhu2020transfer}.
To this end, \cite{ruiz2020graphon} theoretically bounded the transfer error between two graphs from the same ``graphon family,'' but this highly restrictive assumption limits its applications in real-world scenarios.
Another series of studies have examined the transferability of generally pre-trained GNNs \citep{hu2020strategies,you2020does,hu2020gpt}, aiming to leverage the abundant self-supervised data for auxiliary tasks \citep{hwang2020self}. However, they do not study which self-supervised data or tasks are more beneficial for downstream tasks.
Other GNN transfer learning methods leverage a meta-learning framework \citep{lan2020node} or introduce domain-adaptive modules with specified losses \citep{wu2020unsupervised}, but they fail to capture the potential structural prior when the source and target graphs have shared nodes.
To better exploit the potential of source-target graph transfer,
we observe a widespread structural prior: the source graph may share an \textit{intersection} subgraph with the sparse target graph, i.e., they may have nodes and edges in common. We discuss a few real-world examples before further specifying this setting.
\subsection{Motivating Examples}
\label{sec:assumptions}
\begin{wrapfigure}{r}{0.4\textwidth}
\begin{center}
\vspace{-1em}
\centerline{\includegraphics[trim={3cm 3.6cm 6cm 5.5cm},clip,width=0.4\columnwidth]{figs/g12.pdf}}
\caption{An illustration of the proposed GITL\ setting. In this setting, the target graph is sparse, while the source graph has rich link information. The source graph and the target graph are assumed to have shared nodes and edges. The goal is to use the rich information in the source graph to improve link prediction in the target graph by exploiting the structural prior.}
\label{fig:illu}
\end{center}
\vspace{-2em}
\end{wrapfigure}
Our setting assumes that given a specific graph of interest, we can find another graph with a common subset of nodes. Furthermore, the second graph has richer link information than the original graph. We refer to the second graph as the source graph, which we employ to transfer knowledge. We refer to the original graph as the target graph.
We conceptually illustrate this setting in \Cref{fig:illu}.
This setting is motivated by a few important real-world examples, and we discuss two of them drawn from \textbf{global e-commerce} and \textbf{social network} scenarios.
In global e-commerce stores such as Amazon, eBay, or Taobao, the product items and the customer queries constitute bipartite graphs. The products and user queries are defined as nodes, with the user behaviors (clicks, add-to-carts, purchases, etc.) defined as edges. These graphs can be \textit{huge in size}, and are instrumental for customizing query recommendations, predicting search trends, and improving the search experience. To improve the recommendation engine, one may formulate the information filtering process as a link prediction task and then train GNN models to predict user behavior data.
These global e-commerce stores operate in multiple locales simultaneously. Among the emerging (smaller) locales, one practical and critical challenge commonly arises: these locales have not yet accumulated rich enough user behavior, which may lead to the \textit{cold-start} issue \citep{hu2021graph,zhang2021graph,zheng2021cold}. In other words, very few customer interactions result in a sparse and noisy graph, leading to less reliable predictions.
To improve prediction performance in these emerging locales, we can leverage rich behavioral data from more established locales, which may have years of user activity.
In this case, one structural prior facilitates such a transfer: many items are available in multiple locales, and some query words might also be used by customers in multiple locales. These \textit{shared nodes} (products and user queries) naturally bridge the two graphs.
Note that the emerging and established locale graphs may have different node feature distributions. This \textit{domain gap} arises from differences in the items available across locales, as well as the customer behavior differences related to societal, economic, cultural, or other reasons.
Other examples can be found in social networks. For instance, academic collaborations can be modeled as a graph where the nodes are authors and the edges indicate collaborations or co-authored papers.
One task in such a graph is to predict co-authorship links. In this task, we can once again formulate the source-target transfer: the source graph can be an established field where authors collaborate extensively, and the target graph can be an emerging field with fewer collaborations. As another formulation, the source graph can be the author collaborations in past years and the target graph can refer to projected collaborations in future years. In both formulations, we can identify a shared subgraph: the shared nodes are the common authors, and the shared edges are pairs of authors who have publications in both disciplines (or within the same year).
With these real-world examples, we summarize the common properties of these tasks, and formulate them into a new setting, which we term as the \textit{Graph Intersection-induced Transfer Learning} (\textbf{GITL}). In a nutshell, the GITL\ setting represents the cases where the source graph shares nodes with the target graph, so that we can broadcast the source graph's richer link information to the target graph via the common subgraph.
\begin{figure*}[!t]
\begin{center}
\centerline{\includegraphics[trim={4cm 10.5cm 4.5cm 5.5cm},clip,width=0.9\columnwidth]{figs/method.pdf}}
\caption{A visualization of the proposed framework, which contains two stages. The first stage identifies the training dataset for the model, and the second stage investigates two broadcasting approaches: the modified edge-centric label propagation, and the knowledge distillation MLP.}
\vspace{-2em}
\label{fig:model}
\end{center}
\end{figure*}
\subsection{Contributions}
We propose a framework that tackles the GITL\ setting from two angles: \textit{training instance optimization} and \textit{prediction broadcast}.
Our framework addresses these two aspects using a two-stage learning process, as shown in \Cref{fig:model}.
For training instance optimization, we leverage the shared subset of nodes as the key structural information to transfer from the source graph to the target graph. In this step, the GNN model is trained only on the shared subgraph instead of the full source graph, which we show through experiments as a more effective method of transfer. For the prediction broadcast, we design a novel label propagation approach, which shifts the node-based graph to an edge-centric graph. This avoids over-smoothing during the broadcast.
We also study a pointwise MLP model via teacher-student knowledge distillation.
Our method falls into the category of \textit{instance-level transfer} \citep{pan2009survey,koh2017understanding,wang2018data,wang2019instance}. Distinct from other transfer learning approaches that fine-tune pre-trained GNNs on the target domain (dubbed the \textit{parameter-level transfer} \citep{pan2009survey}), the instance-level transfer \textit{selects} or \textit{re-weights} the samples from the source domain to form a new training set using guidance from the target domain. As a result, models trained on these \textit{processed} source domain samples generalize better without fine-tuning the model weights.
Our method is an instantiation of this approach, as we first leverage the structural-overlap prior to select the training instances, then re-weight the sample predictions via source-to-target broadcast.
We consider this instance-level transfer to be better suited for our setting, as the graph model is lightweight and easy to optimize. On the other hand, since the training data is usually massive in size, dropping samples is unlikely to cause model underfitting.
The contributions of this paper are outlined as follows:
\begin{itemize}
\item We formulate GITL, a practically important graph transfer learning setting that exists in several real-world applications.
We propose a novel framework to optimize the GITL\ setting, which leverages the intersection subgraph as the key to transfer important graph structure information.
\item The proposed framework first identifies the shared intersection subgraph as the training set, then broadcasts link information from this subgraph to the full target graph. We investigate two broadcasting strategies: a label propagation approach and a pointwise MLP model.
\item We show through comprehensive experiments on proprietary e-commerce graphs and open-source academic graphs that our approach outperforms other state-of-the-art methods.
\end{itemize}
\subsection{Related works}
\label{sec:related works}
\textbf{Link Prediction in Sparse Graphs.} The link prediction problem is well studied in literature \citep{liben2007link}, with many performant models \citep{singh2021edge,wang2021pairwise,yun2021neo,zhang2021labeling,zhang2018link,subbian2015plums,zhu2021neural} and heuristics \citep{chowdhury2010introduction,zhou2009predicting,adamic2003friends,newman2001clustering}.
Unfortunately, most methods are hampered by link sparsity (i.e., low edge connection density). Mitigating the link prediction challenge in sparsely connected graphs has attracted considerable effort. Some works suggest that improved node embeddings can alleviate this problem \citep{chen2021ssne}, e.g., using similarity-score-based linking \citep{liben2007link} or auxiliary information \citep{leroy2010cold}. However, these methods do not yet fully resolve sparsity challenges. \cite{bose2019meta,yang2022wsdm} treated few-shot prediction as a meta-learning problem, but their solutions depend on having many sparse graph samples coming from the same underlying distribution.
\textbf{Graph Transfer Learning.} While transfer learning has received extensive research in deep learning research, it remains highly non-trivial to transfer learned structural information across different graphs. \citet{gritsenkograph} theoretically showed that a classifier trained on embeddings of one graph is generally no better than random guessing when applied to embeddings of another graph. This is because general graph embeddings capture only the relative (instead of absolute) node locations.
GNNs have recently outperformed traditional approaches in numerous graph-based tasks \citep{sun2020benchmarking,chen2021bag,duancomprehensive}. While many modern GNNs are trained in (semi-)supervised and dataset-specific ways, recent successes of self-supervised graph learning \citep{velivckovic2018deep,you2020graph,sun2020infograph,lan2020node,hu2020strategies,you2020does,hu2020gpt} have invoked the interest in transferring learned graphs representations to other graphs. However, transferring them to node/link prediction over a different graph has seen limited success, and is mostly restricted to graphs that are substantially similar \citep{you2020does,hu2020gpt}. An early study on graph transfer learning under shared nodes \citep{jiang2015social} uses ``common nodes'' as the bridge to transfer, without using graph neural networks (GNN) or leveraging a selected subgraph. \cite{wu2020unsupervised} addressed a more general graph domain adaptation problem, but only when the source and target tasks are homogeneous. Furthermore, most GNN transfer works lack a rigorous analysis on their representation transferability, save for a few pioneering works \citep{ruiz2020graphon,zhu2020transfer} that rely on strong similarity assumptions between the source and target graphs.
Another related concept to our approach is \textit{entity alignment} across different knowledge graphs (KGs), which aims to match entities from different KGs that represent the same real-world entities \citep{zhu2020collective,sun2020benchmarking}.
Since most KGs are sparse \citep{zhang2021comprehensive}, entity alignment will also enable the enrichment of a KG from a complementary one, hence improving its quality and coverage.
However, the primary focus of GITL\ is different from entity alignment, as we assume the shared nodes to be already known or easily identifiable. For example, in an e-commerce network, products have unique IDs, so nodes can be easily matched.
\textbf{Instance-Level Transfer Learning.}
Though model-based transfer has become the most frequently used method in deep transfer learning, several works have shown that the choice of training data has a significant effect on the performance of deep transfer learning.
\cite{koh2017understanding} first studied the influence of the training data on the testing loss. They provided a Hessian-based approximation for the influence of a single sample in the training data. Inspired by this work, \cite{wang2018data} proposed a scheme centered on data dropout to optimize the training data. The data dropout approach loops over each instance in the training dataset, estimates its influence on the validation loss, and drops all ``bad samples.''
Following this line, \cite{wang2019instance} proposed an instance-based approach to improve deep transfer learning in a target domain. It first pre-trains a model in the source domain, then leverages this model to optimize the training data of the target domain by removing the training samples that would lower the performance of the pre-trained model. Finally, it fine-tunes the model using the optimized training data. Though such instance-wise training data dropout does yield improved performance, it requires a time complexity of $\mathcal{O}(N_\mathrm{training\_set\_size}*N_\mathrm{validation\_set\_size})$, which can be prohibitively costly for large graphs or dynamic real-world graphs that are constantly growing.
\textbf{Label Propagation.} Label propagation (LP) \citep{zhu2005semi,wang2007label,karasuyama2013manifold,gong2016label,liu2018learning} is a classical family of graph algorithms for semi-supervised transductive learning, which diffuses labels in the graph and makes predictions based on the diffused labels. Early works include several semi-supervised learning algorithms such as the spectral graph transducer \citep{joachims2003transductive}, Gaussian random field models \citep{zhu2003semi}, and label spreading \citep{zhou2004learning}. Later, LP techniques have been used for learning on relational graph data \citep{koutra2011unifying,chin2019decoupled}. More recent works provided theoretical analysis \citep{wang2020unifying} and also found that combining label propagation (which ignores node features) with simple MLP models (which ignores graph structure) achieves surprisingly high node classification performance \citep{huang2020combining}.
However, LP is not suitable for the link prediction task. LP is prone to over-smoothing \citep{wang2020unifying} for nodes within the same class, which may also hurt link prediction. For these reasons, we focus on an edge-centric variation of LP, discussed in Section \ref{sec:lp}.
\section{Model Formulations}
\label{sec:method}
\subsection{Notations And Assumptions}
Denote the source graph as ${\mathcal{G}}^\mathrm{src}$ and the target graph as ${\mathcal{G}}^\mathrm{tar}$, and denote their node sets as ${\mathcal{V}}^\mathrm{src}$ and ${\mathcal{V}}^\mathrm{tar}$, respectively. We make the following assumptions:
\begin{itemize}
\item[1.] \textbf{Large size}: the source and the target graphs are large, and there are possibly new nodes being added over time. This demands simplicity and efficiency in the learning pipeline.
\item[2.] \textbf{Distribution shift}: node features may follow different distributions between these two graphs. The source graph also has relatively richer links (e.g., higher average node degrees) than the target graph.
\item[3.] \textbf{Overlap}: there are common nodes between the two graphs, i.e., ${\mathcal{V}}^\mathrm{src} \bigcap {\mathcal{V}}^\mathrm{tar} \neq \emptyset$. We frame the shared nodes as a bridge that enables effective cross-graph transfer. We do not make assumptions about the size or ratio of the common nodes.
\end{itemize}
In the following sections, we describe the details of the proposed framework under the GITL\ setting, which consists of two stages.
\subsection{GITL Stage I: Instance Selection}
We consider the instance-level transfer, which selects and/or re-weights the training samples to improve the transfer learning performance in the target domain. Previous instance-level transfer research leverages brute-force search \citep{wang2019instance}: it loops over all instances and drops the ones that negatively affect performance. However, most real-world graphs are large and constantly growing (\textit{Assumption 1}), which makes such an approach infeasible. We instead leverage a practically simple instance selection method, choosing the \textit{intersection subgraph} between the source and target graphs as the training data. While this seems counter-intuitive to the ``more is better'' philosophy of knowledge transfer, we find it more effective in \Cref{sec:exps}; we refer to the observation that training on more source data can hurt as the \textit{negative transfer} phenomenon.
Specifically, we compare three different learning regimes.
\ding{202} \textit{target $\to$ target} (\textit{Tar. $\to$ Tar.}) directly trains on the target graph without leveraging any of the source graph information.
\ding{203} \textit{union $\to$ target} (\textit{Uni. $\to$ Tar.}), where the training set consists of all source graph edges plus part of the available edges in the target graph. The dataset size is the largest among the three regimes, but no instance optimization is applied, and it is therefore suboptimal due to the observed \textit{negative transfer} phenomenon.
\ding{204} \textit{intersection $\to$ target} (\textit{Int. $\to$ Tar.}) is our proposed framework, which trains on the intersection subgraph.
The training nodes are selected based on the intersection structural prior, and the edges are adaptively enriched based on the source graph information. Specifically, we first extract the common nodes in the two graphs ${\mathcal{G}}^\mathrm{src}$ and ${\mathcal{G}}^\mathrm{tar}$:
\begin{equation}
{\mathcal{V}}^*={\mathcal{V}}^\mathrm{src} \bigcap {\mathcal{V}}^\mathrm{tar}
\end{equation}
Next, all edges from the source and target graphs that have at least one end node in ${\mathcal{V}}^*$ are collected to form the intersection graph, i.e., ${\mathcal{E}}^*=\{(i,j)\in{\mathcal{E}}^\mathrm{src} \cup {\mathcal{E}}^\mathrm{tar} : i\in{\mathcal{V}}^* \text{ or } j\in{\mathcal{V}}^*\}$. Then, we build a positive edge set ${\mathcal{E}}^+ = {\mathcal{E}}^*$ and sample a negative edge set ${\mathcal{E}}^-$.
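As a minimal sketch of this instance selection step (with toy node and edge sets of our own; a production system would of course operate on sparse adjacency structures), the shared nodes and the intersection-induced edge set can be computed with plain set operations:
\begin{verbatim}
V_src = {"q1", "q2", "p1", "p2", "p3"}
V_tar = {"q2", "q3", "p2", "p4"}
E_src = {("q1", "p1"), ("q2", "p2"), ("q2", "p3")}
E_tar = {("q2", "p4"), ("q3", "p4")}

V_star = V_src & V_tar                                   # shared nodes
E_star = {(u, v) for (u, v) in E_src | E_tar
          if u in V_star or v in V_star}                 # intersection-induced edges
print(V_star)   # the shared nodes: q2 and p2
print(E_star)   # every source/target edge touching a shared node
\end{verbatim}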
\textbf{Train/Test Split.}
Under the graph transfer learning setting, we care about the knowledge transfer quality in the target graph. Therefore, the most important edges are those with at least one node exclusive to the target graph.
To this end, the positive and negative edges with both end nodes in the source graph are used as the training set. 20\% of positive and negative edges that have at least one node outside the source graph are also added to the training set. Finally, the remaining positive and negative edges are evenly split into the validation set and testing set.
\subsection{GITL Stage II: Source-to-Target Broadcast}
\label{sec:lp}
In stage II, we first train a GNN model for the link prediction task to generate initial predictions, then use label propagation or MLP-based methods to broadcast the predictions to the entire target graph.
\textbf{Training the GNN Base Model for Link Prediction.}
Our GNN model training follows the common practice of link prediction benchmarks \citep{wang2021pairwise,zhang2021labeling,zhang2018link}.
For the construction of node features, we concatenate the original node features ${\bf X}$ (non-trainable) with a randomly initialized trainable vector ${\bf X}'$ for all nodes.
On the output side, the GNN generates $d$-dimensional node embeddings for all nodes via ${\bf Y} = \mathrm{GNN}({\bf A},[{\bf X},{\bf X}'])$. For node pair $(i, j)$ with their embeddings ${\bf Y}[i]\in{\mathbb{R}}^d$ and ${\bf Y}[j]\in{\mathbb{R}}^d$, the link existence is predicted as the inner product:
\begin{equation}\label{eq:dot-prod}
z_{i,j}=\langle{\bf Y}[i], {\bf Y}[j]\rangle
\end{equation}
where $z_{i,j}>0$ indicates a positive estimate of the link existence and vice versa. We randomly sample a positive edge set ${\mathcal{E}}^+$ and a negative edge set ${\mathcal{E}}^-$ to train the GNN model. The model is trained with the common AUC link prediction loss \citep{wang2021pairwise}:
\begin{equation}\label{eq:linkp-loss}
\Theta, {\bf X}' = \mathop{\arg\min}_{\Theta, {\bf X}'} \sum_{e^+\in{\mathcal{E}}^+, e^- \in{\mathcal{E}}^-} (1 - {\bf Z}[e^+] + {\bf Z}[e^-])^2
\end{equation}
After the model is trained, we have predictions $z_{i,j}$ for all edges between nodes $i$ and $j$. These predictions are already usable for downstream tasks. However, in the proposed framework, we further apply label propagation as a post-processing step, which broadcasts information from the source graph to the target graph.
Specifically, we concatenate $z_{i,j}$ for all edges $e=(i,j)$ in ${\mathcal{E}}^+$ and ${\mathcal{E}}^-$, and get ${\bf Z}\in{\mathbb{R}}^{(|{\mathcal{E}}^+|+|{\mathcal{E}}^-|)\times1}$.
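For concreteness, the following numpy sketch (with random embeddings standing in for the GNN output ${\bf Y}$ and randomly sampled edge sets; it illustrates the formulas above rather than the actual training code) computes the inner-product scores and the squared AUC-style objective:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, d = 100, 16
Y = rng.normal(size=(N, d))                 # node embeddings from the GNN
pos = rng.integers(0, N, size=(200, 2))     # sampled positive edges (i, j)
neg = rng.integers(0, N, size=(200, 2))     # sampled negative edges

def score(edges):                           # z_ij = <Y[i], Y[j]> for each edge
    return np.einsum("ed,ed->e", Y[edges[:, 0]], Y[edges[:, 1]])

z_pos, z_neg = score(pos), score(neg)
loss = np.mean((1.0 - z_pos + z_neg) ** 2)  # minimized over the GNN parameters
print(loss)
\end{verbatim}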
\textbf{Broadcasting Predictions with Edge Centric Label Propagation.}
Label propagation (LP) is simple to implement and hardware friendly: it is easy to parallelize on GPUs and fast on CPUs. Generic LP methods \citep{zhu2005semi,huang2020combining} diffuse the node embeddings across the edges of the graph. To avoid the \textit{over-smoothing} induced by the node-level diffusion of traditional LP, we shift the role of nodes to edges and propose an edge-centric variant of LP, which we term logit diffusion-based LP (Logit-LP).
Denote $N$ as the number of nodes in the graph to be broadcast. The LP requires two sets of inputs: the stack of the \textit{initial embedding} of all nodes, denoted as ${\bf Z}^{(0)}\in{\mathbb{R}}^{N\times d}$, and the \textit{diffusion source embedding}, denoted as ${\bf G}\in{\mathbb{R}}^{N\times d}$. The diffusion procedure generally used in LP methods \citep{zhu2005semi,wang2007label,karasuyama2013manifold,gong2016label,liu2018learning} can be summarized into the formula below, which iterates $k$ until it reaches a predefined maximum value $K_\mathrm{max}$:
\begin{equation}
{\bf Z}^{(k+1)}=\alpha {\bf A}{\bf Z}^{(k)}+(1-\alpha){\bf G}
\label{eq:lp0}
\end{equation}
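A minimal sketch of this diffusion iteration (our own illustration on a toy path graph, using a row-normalized adjacency matrix as one common choice of ${\bf A}$) is:
\begin{verbatim}
import numpy as np

def label_propagation(A_hat, G, alpha=0.8, K_max=50):
    Z = G.copy()
    for _ in range(K_max):                  # Z <- alpha * A * Z + (1 - alpha) * G
        Z = alpha * (A_hat @ Z) + (1.0 - alpha) * G
    return Z

A = np.array([[0, 1, 0, 0],                 # 4-node path graph
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A / A.sum(axis=1, keepdims=True)    # row normalization
G = np.array([[1.0], [0.0], [0.0], [-1.0]]) # diffusion source known at the ends
print(label_propagation(A_hat, G))
\end{verbatim}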
In our method, the Logit-LP uses an \textit{edge-centric} view to model the diffusion procedure.
As visualized in \Cref{fig:method}, it shifts the role of edges into nodes, i.e., it builds a new graph $\tilde{{\mathcal{G}}}$, which consists of the original edges as its nodes.
The idea is that it operates on an edge-centric graph $\tilde{{\mathcal{G}}}$ with ``switched roles,'' where the nodes in $\tilde{{\mathcal{G}}}$ are edges in ${\mathcal{G}}$, and the edges in $\tilde{{\mathcal{G}}}$ represent the connectivity of edges in ${\mathcal{G}}$: if two edges in ${\mathcal{G}}$ share a node, then there is a corresponding edge in $\tilde{{\mathcal{G}}}$. It shifts the embeddings from the node embeddings to the edge embeddings. Mathematically, this is done by re-building the adjacency matrix $\tilde{{\bf A}}$, the initial embedding $\tilde{{\bf Z}}^{(0)}$, and the diffusion source embedding $\tilde{{\bf G}}$.
The new nodes are binary labeled: the positive and negative edges in the original graph. The number of new nodes is $|\tilde{{\mathcal{G}}}|=|{\mathcal{E}}^+| + |{\mathcal{E}}^-|$. The embeddings of these nodes, denoted as $\tilde{{\bf Z}}_\mathrm{Logit-LP}$, are the edge prediction logits. In other words, the initial embedding is inherited from the GNN: $\tilde{{\bf Z}}^{(0)}_\mathrm{Logit-LP}=\mathrm{vec}(z_{i,j})\in{\mathbb{R}}^{(|{\mathcal{E}}^+|+|{\mathcal{E}}^-|)\times1}$, where $z_{i,j}$ is defined in \Cref{eq:dot-prod}.
\begin{wrapfigure}{r}{0.6\textwidth}
\begin{center}
\vspace{-1em}
\centerline{\includegraphics[trim={7cm 10cm 8.2cm 6.5cm},clip,width=0.55\columnwidth]{figs/BeTr-WSDM.key.pdf}}
\caption{The edge-centric label propagation algorithm demonstrated with a subgraph. The edge prediction is first computed from the node embeddings (GNN output), then the graph is switched to an edge-centric view. The diffusion propagates the residual error from the labeled edges to the entire graph.}
\label{fig:method}
\end{center}
\vspace{-1em}
\end{wrapfigure}
The next operations are all based on $\tilde{{\mathcal{G}}}$, where we follow \cite{huang2020combining} for the propagation procedure optimization.
The initial embedding $\tilde{{\bf Z}}^{(0)}_\mathrm{Logit-LP}$ is first processed by the $\mathrm{sigmoid}(\cdot)$ function, then the diffusion source embedding ${\bf G}$ is set up in the following way. \textit{(1) On the training set of ${\mathcal{E}}^+$/${\mathcal{E}}^-$}: the node values are $0/1$-valued labels minus the initial embedding, referring to the residual error. \textit{(2) On validation and testing sets}: the node values are all-zero embeddings.
After \Cref{eq:lp0} generates the final residual errors $\tilde{{\bf Z}}^{(k)}\vert_{k=K_\mathrm{max}+1}$, we add the initial values $\tilde{{\bf Z}}^{(0)}$ and convert the residuals to the final predictions.
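To make the edge-centric construction concrete, the following sketch (our own, on a toy edge list) builds the graph $\tilde{{\mathcal{G}}}$ whose nodes are the original edges and whose edges connect pairs of original edges sharing an endpoint:
\begin{verbatim}
from collections import defaultdict
from itertools import combinations

edges = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "c")]   # original graph edges

incident = defaultdict(list)          # original node -> indices of incident edges
for idx, (u, v) in enumerate(edges):
    incident[u].append(idx)
    incident[v].append(idx)

edge_graph = set()                    # adjacency of the edge-centric graph
for node, idxs in incident.items():
    for i, j in combinations(idxs, 2):
        edge_graph.add((min(i, j), max(i, j)))
print(sorted(edge_graph))             # pairs of original edges that share a node
\end{verbatim}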
Besides the Logit-LP, we also propose and discuss two other ways to shift the original graph to an edge-centric view, namely, the embedding diffusion LP and the XMC diffusion-based LP.
\textbf{Variant Model: The Embedding Diffusion-Based LP.}
The embedding diffusion LP (Emb-LP) is similar to Logit-LP in that it also performs diffusion on an edge-centric graph $\tilde{{\mathcal{G}}}$, the nodes of which represent edges in the original graph.
The number of nodes of the edge-centric graph $\tilde{{\mathcal{G}}}$ in this case is $|{\mathcal{E}}^+|$, the number of positive edges in ${\mathcal{G}}$.
In Emb-LP, the initial embedding $\tilde{{\bf Z}}^{(0)}$ and the diffusion source embedding $\tilde{{\bf G}}$ are identical, which is processed from the GNN output embeddings ${\bf Y}$. Denote the embedding for node $i$ and $j$ as ${\bf Y}[i]$ and ${\bf Y}[j]$. If there is an edge between the node pair $(i,j)$, then in $\tilde{{\mathcal{G}}}$, the embedding for this edge is the concatenation of ${\bf Y}[i]$ and ${\bf Y}[j]$.
The label propagation procedures are the same as \Cref{eq:lp0}.
After the propagation, an edge's existence is predicted via the dot product of the original embeddings of its two end nodes.
\textbf{Variant Model: The XMC Diffusion-Based LP.} The third LP variant is based on the eXtreme Multi-label Classification (XMC) \citep{liu2017deep,bhatia2015sparse} formulation of link prediction (abbreviated as XMC-LP). In the multi-label classification formulation of the link prediction, each node can independently belong to $N=|{\mathcal{G}}|$ classes (not mutually exclusive). Each class means the existence of the link to the corresponding node (one of $N$ total nodes). XMC-LP operates on the original graph ${\mathcal{G}}$, and the adjacency matrix is the same as the original one. The initial embedding $\tilde{{\bf Z}}^{(0)}$ is set to be the post-dot-product logits and has the shape of ${N\times N}$. The value at the location $(i,j)$ corresponds to the dot product of the GNN output ${\bf Y}[i]$ and ${\bf Y}[j]$. The remaining steps are the same as Logit-LP. After performing diffusion using \Cref{eq:lp0}, the edge existence between node $i$ and $j$ can be predicted by looking at the location $(i,j)$ of the diffusion result $\tilde{{\bf Z}}^{(k)}\vert_{k=K_\mathrm{max}+1}$.
\textbf{Summary of Three LP Variants.}
The three different views leverage different advantages that LP offers. Logit-LP is supervised by edge labels (positive or negative edges) and performs the best. Emb-LP is unsupervised, and is the only variant that outputs embeddings instead of logits. XMC-LP operates on the smaller original graph instead of the edge-centric graph, though it requires more memory.
Our edge-centric LP algorithm is conceptually simple, lightweight, and generic. However, many real-world graphs follow long-tail distributions, with the majority of their nodes having few connections. Some nodes are even isolated, with no connected neighborhood. These cases are shown to have negative effects on message-passing-based approaches \citep{zheng2021cold}. Therefore, we also study and compare an MLP-based graph predictor \citep{hu2021graph,zhang2021graph,zheng2021cold}, which has been shown to be effective in sparsely linked graphs.
\subsection{Alternative Broadcasting Approach for Stage II: Teacher-Student Learning via GNN-MLP}
\label{sec:cb}
In this alternative approach, we train an MLP with a simple procedure of knowledge distillation from a GNN teacher. Here, the GNN teacher is the same link prediction GNN model discussed previously. The student MLP is first trained to mimic the output of the teacher GNN:
\begin{equation}
\Theta_\textrm{MLP}, {\bf X}' = \mathop{\arg\min}_{\Theta_\textrm{MLP}, {\bf X}'} || {\bf Y}-\mathrm{MLP}([{\bf X},{\bf X}']; \Theta_\textrm{MLP}) ||^2
\end{equation}
where $\Theta_\textrm{MLP}$ denotes the MLP parameters. After training with this imitation loss until convergence, the MLP is then fine-tuned alone on the link prediction task using the loss in \Cref{eq:linkp-loss}. The distilled MLP has no graph dependency during inference, so it can be applied on low-degree nodes. Furthermore, it can generalize well due to the structural knowledge learned from the GNN teacher on the well-connected nodes.
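A minimal PyTorch sketch of this two-step procedure (with placeholder tensors standing in for the node features and the teacher GNN's output, and arbitrary hyperparameters) is given below.
\begin{verbatim}
import torch
import torch.nn as nn

N, d_in, d = 1000, 64, 32
X = torch.randn(N, d_in)            # node features [X, X'] (placeholder)
Y_teacher = torch.randn(N, d)       # teacher GNN output embeddings (placeholder)
mlp = nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(), nn.Linear(128, d))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)

for _ in range(200):                # step 1: imitation (distillation) loss
    opt.zero_grad()
    loss = ((mlp(X) - Y_teacher) ** 2).mean()
    loss.backward()
    opt.step()

pos = torch.randint(0, N, (512, 2)) # step 2: fine-tune on the pairwise link loss
neg = torch.randint(0, N, (512, 2))
for _ in range(100):
    opt.zero_grad()
    Y = mlp(X)
    z_pos = (Y[pos[:, 0]] * Y[pos[:, 1]]).sum(-1)
    z_neg = (Y[neg[:, 0]] * Y[neg[:, 1]]).sum(-1)
    loss = ((1.0 - z_pos + z_neg) ** 2).mean()
    loss.backward()
    opt.step()
\end{verbatim}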
\textbf{Comparing Logit-LP and GNN-MLP in Stage II.} So far, we have introduced two options for Stage II broadcasting: a novel edge-centric LP variant and one that adopts off-the-shelf tools (originally developed for accelerated inference and cold-start generalization) in a new context (graph transfer learning).
Because this paper focuses on diving into the new GITL\ setting and workflow, we purposely keep the individual stages simple. The novelty of our method does not lie in inventing Stage II building blocks.
We also clarify another important question: \textit{why do we need two options?} In short, we propose Logit-LP to address the over-smoothness when transferring across source/target graph samples, and conveniently scale up to industry-scale graphs \citep{chen2020scalable,sun2021scalable,wu2019simplifying,huang2020combining,chen2021bag,duancomprehensive}. On the other hand, we develop and study the GNN-MLP as an alternative strategy with complementary merits to tackle the inherently diverse real-world graph learning challenges.
\section{Empirical Evaluation}
\label{sec:exps}
\subsection{Experimental Settings}
\label{sec:exp-settings}
In this section, we evaluate the proposed framework on several concrete applications and datasets. We use an e-commerce graph dataset of queries and items, and two public social network and citation graph benchmarks (OGB-collab and OGB-citation2). The statistics of these datasets are summarized in \Cref{tab:datasets}.
\begin{table*}[h]
\begin{center}
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{1.2}
\resizebox{0.7\columnwidth}{!}{
\begin{tabular}[t]{l|c|c|c|c|c|cc}
\toprule[1.5pt]
\multirow{1}{*}{\textbf{Datasets}} & \multicolumn{2}{|c}{\bf E-commerce (E1)} & \multicolumn{2}{|c}{\bf OGB-citation2} & \multicolumn{2}{|c}{\bf OGB-collab} \\
\midrule
Intersection nodes & \multicolumn{2}{c|}{1,061,674} & \multicolumn{2}{c|}{347,795} & \multicolumn{2}{c}{55,423} & \\
Union nodes & \multicolumn{2}{c|}{11,202,981} & \multicolumn{2}{c|}{2,927,963} & \multicolumn{2}{c}{235,868} & \\
\midrule
\textbf{Subgraphs} & \textbf{Source} & \textbf{Target} & \textbf{Source} & \textbf{Target} & \textbf{Source} & \textbf{Target} \\
\midrule
Num. of Nodes & 10,456,209 & 1,934,188 & 2,604,211 & 671,547 & 218,738 & 77,137 & \\
Num. of Edges & 82,604,051 & 6,576,239 & 24,582,568 & 2,525,272 & 2,213,952 & 622,468 & \\
Mean Degree & 7.9 & 3.4 & 9.4 & 3.8 & 10.1 & 8.0 & \\
Median Degree & 2 & 1 & 6 & 1 & 5 & 5 & \\
\bottomrule[1.5pt]
\end{tabular}}
\captionof{table}{\small The statistics of datasets selected for evaluation.}
\label{tab:datasets}
\end{center}
\end{table*}
\textbf{Datasets.}
The e-commerce dataset is sampled from anonymized logs of a global e-commerce store.
The data used here is not representative of production.
The source and target graphs correspond to two locales. The source graph locale has much more frequent user behavior. The graphs are naturally bipartite: the two types of nodes correspond to products and user query terms. The raw texts of the product titles or user queries are available for each node, and the node features are generated from these texts using Byte-Pair Encoding \citep{heinzerling2018bpemb}.
OGB-collab (representing a collaboration between co-authors) and OGB-citation2 (representing papers that cite each other) are open-source academic datasets. The \textit{edges} of OGB-collab contain the year that the two authors collaborated, and the \textit{nodes} of OGB-citation2 contain the year the corresponding paper was published. To better simulate our setting, we manually split the data into source and target graphs according to time: the collaborations/papers prior to a given year $y^{(h)}$ form the source graph, while the collaborations/papers after a given year $y^{(l)}$ form the target graph. We set $y^{(l)}<y^{(h)}$ to ensure that the source and target graphs have overlapping nodes.
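As an illustration of this split (our own toy sketch for the co-authorship case, where a year is attached to each edge; the thresholds are hypothetical), the source and target edge sets can be obtained as follows.
\begin{verbatim}
edges = [("a", "b", 2005), ("b", "c", 2010), ("c", "d", 2015), ("a", "d", 2019)]
y_l, y_h = 2008, 2016                 # y_l < y_h so the two graphs overlap

E_src = [(u, v) for (u, v, year) in edges if year <= y_h]   # source graph edges
E_tar = [(u, v) for (u, v, year) in edges if year >= y_l]   # target graph edges
V_src = {n for e in E_src for n in e}
V_tar = {n for e in E_tar for n in e}
print(sorted(V_src & V_tar))          # nodes shared by the source and target graphs
\end{verbatim}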
\textbf{Metrics and Baselines.}
We mainly use recall as the evaluation metric to judge the model's ability to recover unseen edges in the sparse target graphs. We adopt PLNLP \citep{wang2021pairwise} as the link prediction baseline (shown as \textit{GNN} in the tables) for comparison as well as the GNN embedding generator in our approach. On the public datasets, we also compare our methods to SEAL \citep{zhang2018link,zhang2021labeling}, Neo-GNN \citep{yun2021neo}, unsupervised/self-supervised pretraining methods such as EGI \citep{zhu2020transfer} and DGI \citep{velickovic2019deep}, and a few heuristics-based approaches, including Common Neighbors (CN), Adamic Adar (AA) \citep{adamic2003friends}, and Personalized Page Rank (PPR). In the following, we specify the details and discuss the results of the proprietary e-commerce dataset and the two public datasets.
\textbf{Special Dataset Processing.}
For each dataset, we have slight differences in how we build the source and target graphs. For the proprietary e-commerce recommendation graph, the source and target graphs naturally come from the two locales.
We use additional purchase information to build three different views of the data: \textbf{E1}, \textbf{E2}, \textbf{E3}. In \textbf{E1}, there exists an edge between query and product if there is at least one purchase. In \textbf{E2}, we threshold the number of purchases to be at least three to form the \textit{less connected graph}. This leads to a sparser but cleaner graph to learn from and transfer. \textbf{E3} is a graph that uses the human-measured relevance relation as the edges between queries and products. The nodes of \textbf{E3} remain the same as \textbf{E1}, \textbf{E2}, while the edges of \textbf{E3} are the edges of \textbf{E1} enriched by the positive relevance ratings. Therefore, \textbf{E2} is the most sparse graph and \textbf{E3} is the most dense one. Table \ref{tab:datasets} reflects the statistics of \textbf{E1}. For \textbf{E2}, the mean and median degrees of the source and target graphs are (2.1, 1) and (1.2, 1), respectively. For \textbf{E3}, these numbers are (8.3, 2) and (4.2, 1), respectively.
For the open-source graphs, we treat the original graph as the union graph and manually split it into a source graph and a target graph according to the timestamp metadata of the nodes or edges.
\subsection{Main Results}
We next verify the performance of the GITL\ framework via systematic experiments\footnote{Our codes are available at \url{https://github.com/amazon-science/gnn-tail-generalization}}.
Our results address the following questions.
\textbf{Q1: How does the method discover latent edges under different sparsity levels?}
We show the model performances on the e-commerce data in Table \ref{tab:ama-full}.
In the tables, $N^+_{e}$ is the number of positive edges in the graph. The \textit{recall@$N^+_{e}$} and \textit{recall@$1.25N^+_{e}$} columns report the recall among the top $N^+_{e}$ and top $1.25N^+_{e}$ largest model predictions, respectively.
To test the model performance with respect to sparsity in the target graph (the key pain point in the GITL\ setting), we customized three graphs \textbf{E1}, \textbf{E2}, \textbf{E3} with different sparsity levels as specified above, where \textbf{E2} is the sparsest one. As can be seen from the results, the proposed Logit-LP performs the best across the most sparsity levels.
In a few cases on the less connected graph (${\bf E2}$), propagation is limited by disconnected subgraphs. In these cases, because the GNN-MLP only requires node features to make predictions, its performance degrades less, making it a better candidate.
\begin{table}[h]
\centering
\resizebox{0.6\textwidth}{!}{
\begin{tabular}{c|c|cc|cc|cccccccc}
\toprule[1.5pt]
\multirow{2.5}{*}{\textbf{Datasets}} & \multicolumn{1}{c}{ \textbf{Regimes} } & \multicolumn{2}{|c}{\bf Tar. $\to$ Tar.} & \multicolumn{2}{|c}{\bf Uni. $\to$ Tar.} & \multicolumn{2}{|c}{\bf Int. $\to$ Tar. } \\
\cmidrule(r){2-8}
& \textbf{Recall@} & $N^+_{e}$ & $1.25N^+_{e}$ & $N^+_{e}$ & $1.25N^+_{e}$& $N^+_{e}$ & $1.25N^+_{e}$ & \\
\midrule
\multirow{6.5}{*}{\textbf{E1}} & GNN & 86.1 & 89.7 & 73.8 & 77.2 & 90.2 & 93.4 \\
\cmidrule(r){2-2}
& Emb-LP & 86.8 & 90.0 & 74.0 & 77.6 & 90.6 & 93.7 \\
\cmidrule(r){2-2}
& Logit-LP & \textbf{88.4} & \textbf{91.5} & \textbf{76.6} & \textbf{79.9} & \textbf{91.4} & \textbf{94.7} \\
\cmidrule(r){2-2}
& XMC-LP & 86.3 & 89.2 & 73.8 & 77.4 & 90.5 & 93.6 \\
\cmidrule(r){2-2}
& GNN-MLP & 84.3 & 87.1 & 69.4 & 73.1 & 88.3 & 91.0 \\
\midrule
\multirow{6.5}{*}{\textbf{E2}} & GNN & 84.3 & 86.4 & 71.6 & 74.2 & 86.0 & 89.1 \\
\cmidrule(r){2-2}
& Emb-LP & 84.7 & 87.1 & 72.4 & 74.9 & 87.2 & 90.3 \\
\cmidrule(r){2-2}
& Logit-LP & 85.4 & 87.6 & \textbf{74.0} & \textbf{76.7} & 87.1 & 90.0 \\
\cmidrule(r){2-2}
& XMC-LP & 83.4 & 85.3 & 72.0 & 73.9 & 86.5 & 89.7 \\
\cmidrule(r){2-2}
& GNN-MLP & \textbf{86.8} & \textbf{89.9} & 68.5 & 70.1 & \textbf{88.0} & \textbf{91.0} \\
\midrule
\multirow{6.5}{*}{\textbf{E3}} & GNN & 66.5 & 68.4 & 62.4 & 65.0 & 69.5 & 72.1 \\
\cmidrule(r){2-2}
& Emb-LP & 66.9 & 68.8 & \textbf{62.9} & \textbf{65.7} & 70.1 & 72.6 \\
\cmidrule(r){2-2}
& Logit-LP & \textbf{68.9} & \textbf{71.1} & 61.4 & 63.8 & \textbf{70.2} & 72.6 \\
\cmidrule(r){2-2}
& XMC-LP & 66.8 & 68.6 & 62.7 & 65.5 & 69.7 & 72.2 \\
\cmidrule(r){2-2}
& GNN-MLP & 68.5 & 70.0 & 60.8 & 62.3 & 70.1 & \textbf{72.8} \\
\bottomrule[1.5pt]
\end{tabular}}
\caption{The evaluations for different views of the e-commerce graph. \textbf{E1}/\textbf{E2}/\textbf{E3} correspond to the graph with no purchase thresholding (original), the graph with purchases thresholded at a minimum of 3, and the user-rated relevance indicator graph, respectively. Best results are bolded.}
\label{tab:ama-full}
\end{table}
\textbf{Q2: How does the proposed framework compared with other methods?}
\begin{table}[h]
\begin{minipage}{0.45\linewidth}
\centering
\resizebox{1\textwidth}{!}{
\begin{tabular}{c|cc|cc|cccccc}
\toprule[1.5pt]
\multicolumn{1}{c}{ \textbf{Regimes} } & \multicolumn{2}{|c}{\bf Tar. $\to$ Tar.} & \multicolumn{2}{|c}{\bf Uni. $\to$ Tar.} & \multicolumn{2}{|c}{\bf Int. $\to$ Tar. } \\
\cmidrule(r){2-8}
\textbf{Recall@} & $N^+_{e}$ & $1.25N^+_{e}$ & $N^+_{e}$ & $1.25N^+_{e}$& $N^+_{e}$ & $1.25N^+_{e}$ & \\
\midrule
GNN (PLNLP) & 51.7 & 62.9 & 49.3 & 60.1 & 51.9 & 63.3 \\
\cmidrule(r){1-1}
Emb-LP & 52.2 & 63.3 & 49.9 & 61.2 & 52.4 & 63.7 \\
\cmidrule(r){1-1}
Logit-LP & \textbf{55.7} & \textbf{65.7} & \textbf{51.4} & \textbf{63.4} & \textbf{55.9} & \textbf{65.2} \\
\cmidrule(r){1-1}
XMC-LP & 52.2 & 63.2 & 49.5 & 61.0 & 52.5 & 63.6 \\
\cmidrule(r){1-1}
GNN-MLP & 49.8 & 61.4 & 48.5 & 60.0 & 50.6 & 62.9 \\
\cmidrule(r){1-1}
Neo-GNN & 51.9 & 63.2 & 49.7 & 61.0 & 51.5 & 63.5 \\
\cmidrule(r){1-1}
CN & 51.1 & 62.5 & 51.1 & 62.5 & 51.1 & 62.5 \\
\cmidrule(r){1-1}
AA & 50.1 & 62.5 & 50.1 & 62.5 & 50.1 & 62.5 \\
\cmidrule(r){1-1}
PPR & 51.2 & 62.7 & 50.4 & 61.5 & 51.5 & 63.0 \\
\cmidrule(r){1-1}
EGI & 52.4 & 64.3 & 50.8 & 61.6 & 53.2 & 65.0 \\
\cmidrule(r){1-1}
DGI & 52.0 & 64.0 & 50.2 & 61.0 & 52.5 & 64.5 \\
\bottomrule[1.5pt]
\end{tabular}}
\caption{The recall evaluations on OGB-collab graph. }
\label{tab:coll-full}
\end{minipage}
\hfill
\begin{minipage}{0.5\linewidth}
\centering
\resizebox{1\textwidth}{!}{
\begin{tabular}{c|cc|cc|cccccc}
\toprule[1.5pt]
\multicolumn{1}{c}{ \textbf{Regimes} } & \multicolumn{2}{|c}{\bf Tar. $\to$ Tar.} & \multicolumn{2}{|c}{\bf Uni. $\to$ Tar.} & \multicolumn{2}{|c}{\bf Int. $\to$ Tar. } \\
\cmidrule(r){2-8}
\textbf{Recall@} & $N^+_{e}$ & $1.25N^+_{e}$ & $N^+_{e}$ & $1.25N^+_{e}$& $N^+_{e}$ & $1.25N^+_{e}$ & \\
\midrule
GNN (PLNLP) & 46.2 & 58.1 & 47.9 & 60.0 & 48.7 & 60.5 \\
\cmidrule(r){1-1}
Emb-LP & 45.9 & 58.4 & 48.3 & 60.3 & 48.9 & 60.8 \\
\cmidrule(r){1-1}
Logit-LP & 47.2 & \textbf{61.2} & 48.0 & \textbf{63.2} & \textbf{51.0} & \textbf{64.2} \\
\cmidrule(r){1-1}
GNN-MLP & 44.0 & 55.9 & 45.2 & 58.1 & 48.2 & 58.8 \\
\cmidrule(r){1-1}
Neo-GNN & 45.9 & 58.2 & 47.5 & 60.6 & 48.0 & 60.9 \\
\cmidrule(r){1-1}
CN & 12.2 & 12.5 & 29.7 & 30.3 & 18.9 & 19.4 \\
\cmidrule(r){1-1}
AA & 12.1 & 12.4 & 29.5 & 30.0 & 18.8 & 19.1 \\
\cmidrule(r){1-1}
EGI & \textbf{48.0} & 60.6 & \textbf{49.3} & 62.6 & 49.9 & 62.1 \\
\cmidrule(r){1-1}
DGI & 47.7 & 59.0 & 49.0 & 60.6 & 49.2 & 61.6 \\
\bottomrule[1.5pt]
\end{tabular}}
\captionof{table}{The recall evaluations on OGB-citation2 graph. }
\label{tab:cite-full}
\end{minipage}
\end{table}
The results on the two open-source graphs are shown in Table~\ref{tab:coll-full} and Table~\ref{tab:cite-full}, where the baselines are described in \Cref{sec:exp-settings}.
We see that the proposed edge-centric LP approaches achieve the best performance, better than other state-of-the-art methods in most cases.
In contrast, other methods, especially the heuristics-based ones, cannot yield meaningful predictions on low-degree graphs (OGB-citation2): to score a pair of nodes, they rely on shared neighborhoods, which are empty for most node pairs.
\textbf{Q3: What are the precision and accuracy metrics?}
\begin{table}[h]
\centering
\resizebox{0.7\textwidth}{!}{
\begin{tabular}{c|ccc|ccccccc }
\toprule[1.5pt]
\multirow{2}{*}{\textbf{\makecell[c]{Setting}}} &
\multicolumn{3}{c}{\textbf{\makecell{Precision}}} &
\multicolumn{3}{|c}{\textbf{\makecell{Accuracy}}} \\
\cmidrule(r){2-4} \cmidrule(r){5-7}
& Uni.$\to$Tar. & Int.$\to$Tar. & Tar.$\to$Tar.
& Uni.$\to$Tar. & Int.$\to$Tar. & Tar.$\to$Tar. & \\
\hline
SEAL & \textbf{33.6} & 35.2 & \textbf{35.5} & \textbf{56.3} & 57.5 & \textbf{56.4} \\
Logit-LP & 27.4 & \textbf{39.2} & 34.8 & 46.1 & \textbf{58.6} & 55.0 \\
\bottomrule[1.5pt]
\end{tabular}
}
\caption{The precision and accuracy metrics.}
\label{tab:acc}
\end{table}
We present the precision and accuracy results on the OGB-collab dataset in Table~\ref{tab:acc}. Comparing across training-set settings, both metrics are best when training on the intersection-enhanced source graph (i.e., the \textit{Int.$\to$Tar.} case). Comparing Logit-LP with SEAL, Logit-LP is better in the \textit{Int.$\to$Tar.} case while performing worse than SEAL in the other cases.
\textbf{Q4: How does instance selection affect the model performance?}
In the tables above, \textit{Int. $\to$ Tar.} consistently outperforms other settings. Comparing the \textit{Tar. $\to$ Tar.} and \textit{Uni. $\to$ Tar.} settings, in the OGB-collab graph, \textit{Uni. $\to$ Tar.} is worse, while in the OGB-citation2 graph, the \textit{Uni. $\to$ Tar.} is generally better. Nevertheless, we still see that \textit{Int. $\to$ Tar.} achieves the best performance among the three settings.
We refer to this phenomenon, where \textit{Int. $\to$ Tar.} performs better than \textit{Uni. $\to$ Tar.}, as \textit{negative transfer}. This is possibly due to the different distributions of the source graph and the target graph. This lends credence to our hypothesis that adding more data to the source for transfer learning can possibly be counterproductive.
\textbf{Q5: Does the model perform better on the source graph compared to the target graph?}
We answer this question by comparing the training and test sets, and show the results in Table~\ref{tab:splits-full}.
The training set is mostly composed of the source graph edges while the validation and test sets are only selected from the unshared subgraph of the target graph. From the table, we see that the models behave differently in the source graph (training set) and the target graph (validation/test sets). Since the GNN, LP, and GNN-MLP rely on node features, they perform better on the training set. On the other hand, the featureless heuristics (CN and AA) achieve slightly better performance on the test set.
Comparing the performances on the training and testing sets, most non-heuristic-based methods perform better on the source graph (training set). This performance gap is due to the fact that the edge information is much richer in the source graph. On the other hand, Logit-LP significantly outperforms heuristic-based methods and the GNN, which verifies the effectiveness of the proposed method.
\begin{table*}[h]
\centering
\resizebox{0.7\textwidth}{!}{
\begin{tabular}{c|ccc|ccc|ccccccc}
\toprule[1.5pt]
\multicolumn{1}{c}{ \textbf{Regimes} } & \multicolumn{3}{|c}{\textbf{Target $\to$ Target} } & \multicolumn{3}{|c}{\bf \textbf{Union $\to$ Target}} & \multicolumn{3}{|c}{\bf \textbf{Intersection $\to$ Target} } \\
\cmidrule(r){2-10}
\textbf{Splits} & \textbf{Train} & \textbf{Valid} & \textbf{Test} & \textbf{Train} & \textbf{Valid} & \textbf{Test} & \textbf{Train} & \textbf{Valid} & \textbf{Test} \\
\midrule
GNN (PLNLP) & 64.9 & 37.8 & 37.8 & 55.0 & 39.6 & 39.3 & 62.9 & 39.4 & 39.5 \\
\cmidrule(r){1-1}
Emb-LP & 65.3 & 38.3 & 38.3 & 55.7 & 40.4 & 39.9 & 63.5 & 40.3 & 40.3 \\
\cmidrule(r){1-1}
Logit-LP & 65.9 & 38.9 & \textbf{38.9} & 56.6 & 41.4 & \textbf{40.3} & 64.3 & 41.0 & \textbf{41.0} \\
\cmidrule(r){1-1}
GNN-MLP & 60.9 & 32.9 & 33.6 & 51.5 & 37.4 & 35.9 & 59.8 & 36.6 & 36.2 \\
\cmidrule(r){1-1}
CN & 10.5 & 11.1 & 11.2 & 26.6 & 27.2 & 27.2 & 16.4 & 17.2 & 17.3 \\
\cmidrule(r){1-1}
AA & 10.2 & 10.6 & 10.6 & 26.4 & 26.7 & 26.7 & 16.0 & 16.8 & 16.9 \\
\bottomrule[1.5pt]
\end{tabular}}
\caption{The \textit{recall @ $0.8N^+_{e}$} expanded for train/validation/test splits of the OGB-citation2 graph.}
\label{tab:splits-full}
\end{table*}
\textbf{Q6: How does the intersection size influence the performance?}
The influence of the intersection size on OGB-collab is shown in Table~\ref{tab:intersection size}, where we vary the intersection ratio (the number of nodes in the shared subgraph divided by the total number of nodes in the target graph) by sampling the shared nodes i.i.d. Even at a ratio of only 20\%, the proposed transfer setting already outperforms the no-transfer baseline.
\begin{table}[h]
\centering
\resizebox{0.65\textwidth}{!}{
\begin{tabular}{c|cccccc|c }
\toprule[1.5pt]
Intersection size & 1\% & 5\% & 10\% & 15\% & 20\% & 30\% & Tar.$\to$Tar. \\
\hline
Recall & 13.2 & 32.5 & 53.6 & \textbf{55.5} & \textbf{56.4} & 56.9 & \textbf{55.7} \\
\bottomrule[1.5pt]
\end{tabular}
}
\caption{Influence of the intersection size on OGB-collab.}
\label{tab:intersection size}
\end{table}
\textbf{Q7: How expensive is the computational overhead?}
We provide the computational and memory overhead of the LP variants below. For a given graph, denote the number of nodes as $N_{nodes}$, the number of edges as $N_{e}$, and the number of edges in the edge-centric graph as $N_E$. We can estimate $N_E$ via $N_E\approx 4\,N_e\cdot\mathrm{mean\_deg}$, where $\mathrm{mean\_deg}$ is the mean degree of the graph. The number of multiplications per iteration, as well as the memory overhead of the LP variants, are summarized in Table~\ref{tab:complexity}, followed by a short illustrative sketch.
\begin{table*}[h]
\centering
\resizebox{0.75\textwidth}{!}{
\begin{tabular}{c|c c c c cccc }
\toprule[1.5pt]
\textbf{LP variants} & Emb-LP & Logit-LP & XMC-LP \\
\hline
\textbf{Num. of multiplications at each iteration} & $N_{E}*d$ & $N_{E}*1$ & $N_{e}*N_{nodes}$ \\
\textbf{Memory overhead after $k$ iterations} & $\mathcal{O}(N_{E}*d)$ & $\mathcal{O}(N_{E}*1)$ & $\mathcal{O}(N_{e}*k)$ \\
\bottomrule[1.5pt]
\end{tabular}
}
\caption{Computation complexity for different LP variants.}
\label{tab:complexity}
\end{table*}
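As a rough illustration of these estimates, the following minimal sketch (ours; the function and its inputs are illustrative and not part of any released code) plugs a graph's statistics into the formulas above:
\begin{verbatim}
# Sketch: rough per-iteration multiplication counts for the LP variants,
# following N_E ~ 4 * N_e * mean_deg. All names and inputs are illustrative.
def lp_costs(n_nodes: int, n_edges: int, mean_deg: float, d: int):
    # Estimated multiplications per iteration for each LP variant.
    n_E = 4 * n_edges * mean_deg          # edges of the edge-centric graph
    return {
        "Emb-LP":   n_E * d,              # propagates d-dimensional embeddings
        "Logit-LP": n_E * 1,              # propagates scalar logits
        "XMC-LP":   n_edges * n_nodes,    # label matrix of size N_e x N_nodes
    }

# Example: 1M nodes, 10M edges, mean degree 20, 128-dimensional embeddings.
print(lp_costs(n_nodes=1_000_000, n_edges=10_000_000, mean_deg=20.0, d=128))
\end{verbatim}
Under these (hypothetical) statistics, Logit-LP is roughly two orders of magnitude cheaper per iteration than Emb-LP, consistent with the scaling in Table~\ref{tab:complexity}.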
\section{Conclusion and Future Work}
\label{sec:limitation}
In this paper, we discuss a special case of graph transfer learning called GITL, where there is a subset of nodes shared between the source and the target graphs. We investigate two alternative approaches to broadcast the link information from the source-enriched intersection subgraph to the full target graph: an edge-centric label propagation and a teacher-student GNN-MLP framework. We demonstrate the effectiveness of our proposed approach through extensive experiments on real-world graphs.
In spirit, this new setting may be reminiscent of previous transfer learning methods, in which only a subset of latent components is selectively transferred.
Our next steps will study curriculum graph adaptation, in which the target is a composite of multiple graphs without domain labels. This will help us progressively bootstrap generalization across more domains.
|
{
"arxiv_id": "2302.14206",
"language": "en",
"timestamp": "2023-03-01T02:05:16",
"url": "https://arxiv.org/abs/2302.14206",
"yymm": "2302"
} | \section*{Introduction}
Atomically-thin transition metal dichalcogenides (TMDs) offer the possibility to optically address the spin and valley degrees-of-freedom of charge carriers.
Through circularly polarized light, $\sigma_{+}$ or $\sigma_{-}$, one can excite electron-hole pairs at opposite points at the edges of the Brillouin zone, known as $K_{+}$ and $K_{-}$ valleys \cite{Mak2012,Zeng2012,Xu2014}.
The high spin-orbit coupling in these materials further causes a spin splitting of the levels, connecting the spin and valley degrees-of-freedom and their dynamics.
For this reason, TMDs are very attractive for (opto)valleytronic and (opto)spintronic applications \cite{Mak2016, Liu2016, Schaibley2016, Zhong2017, Luo2017}.
One of the main bottlenecks for such applications is the control of these degrees-of-freedom.
Magnetic fields have been shown to strongly affect the valley polarization.
This has been demonstrated by photoluminescence (PL) measurements of TMD monolayers under high out-of-plane magnetic fields, showing a linear shift of the emission peaks reflecting the reduction of the band gap for one valley and its increase for the other \cite{Macneill2015, Li2014, Srivastava2015, Aivazian2015, Koperski2019}.
This effect became known as the valley-Zeeman effect.
These experiments extract an exciton and trion g-factor of $\sim$ $-$4, which is in agreement with theoretical expectations \cite{Wang20152D, Rybkovskiy2017, Wozniak2020}.
Different works have reported the dynamics of electrons, excitons, and trions by performing time-resolved photoluminescence (TRPL) \cite{Wang2015, Godde2016, Wang2020}, time-resolved differential reflectivity (TRDR) \cite{Kumar2014, Kumar2014prb, Ye2018}, and time-resolved Kerr rotation (TRKR) \cite{Zhu2014, Hsu2015, Guimaraes2018, Anghel2018, Ersfeld2019, Li2020}, with most works focusing on either zero or in-plane external magnetic fields.
More recently, Zhang et al. \cite{Zhang2021} showed the effect of out-of-plane magnetic fields on polarized TRPL, but without any clear difference between the results obtained at different circularly polarized excitation and high magnetic fields.
Nonetheless, PL is solely sensitive to the valley polarization and radiative decay of excitons and trions.
Therefore, PL is insensitive to light-induced spin polarization of resident carriers which can sustain the spin information over much longer times.
The understanding of the decay processes and possibilities of control of resident carriers are of huge importance for the engineering of a new generation of opto-spintronic devices \cite{Seyler2018, Benitez2018, Luo2017, Sierra2021}.
However, the use of an out-of-plane magnetic field to control the long-lived spin dynamics in monolayer TMDs -- beyond the radiative recombination times -- remains largely unexplored.
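As a rough scale estimate (ours, not a value extracted from the measurements reported below), a valley g-factor of magnitude $\sim$4 corresponds to a valley-Zeeman splitting at the highest field used in this work of
\begin{equation*}
\Delta E_{Z} = |g|\,\mu_{B} B \approx 4 \times 57.9~\mu\mathrm{eV\,T^{-1}} \times 5~\mathrm{T} \approx 1.2~\mathrm{meV},
\end{equation*}
i.e. on the order of a millielectronvolt, setting the energy scale of the valley asymmetry discussed in what follows.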
\begin{figure*}[ht!]
\centering
\includegraphics[scale=0.92]{Fig1.eps}
\caption{(a) Schematics of the TRKR measurements with an external magnetic field. (b) Valley-Zeeman effect on the K$_{+}$ and K$_{-}$ spin states of monolayer MoSe$_2$. Dashed lines indicate the zero-field spin states, while solid lines represent the field-shifted states. The valley-Zeeman effect results in a larger energy shift in the valence band ($\Delta E_v$) than in the conduction band ($\Delta E_c$). (c)--(e) TRKR signal for $\sigma_+$ (red) and $\sigma_-$ (blue) polarized pump at B = $-$5 T (c), 0 T (d), and $+$5 T (e), for $\lambda$ = 755 nm and T = 6 K.}
\label{Fig1:summary}
\end{figure*}
Here we show that the spin information of long-lived carriers in monolayer MoSe$_2$ can be effectively controlled via the valley-Zeeman effect.
Using time-resolved magneto-optic Kerr effect, we show that magnetic fields can induce an ultrafast spin-valley scattering and enhancement of a light-induced spin accumulation with a preference for the valley with higher valence band energy.
While we find that the spin scattering rates do not show a clear trend for the external magnetic field strength, the magneto-optical signal strength shows a clear linear behavior.
Particularly, we observe a nearly field-independent long spin lifetime of $\sim$2 ns.
The magnetic field dependence of the light-induced spin accumulation we measure agrees with hole spin dynamics and resident carrier recombination at longer timescales, while we argue that electron spins may have faster relaxation rates.
Our results are well described by a simple rate-equation model, which takes into consideration a field-dependent carrier recombination and intervalley scattering.
Our model indicates that the magnetic field has a stronger effect on the hole transfer between valleys than on the electron transfer, resulting in a spin imbalance with a larger population of carriers in the valley with the smaller bandgap.
In this way, we demonstrate here that an applied magnetic field can be used to effectively control the optically-generated spin accumulation in TMDs.
Our samples were prepared by mechanical exfoliation of a bulk MoSe$_2$ crystal (HQ Graphene) onto a polydimethylsiloxane (PDMS - Gel Pack) and then transferred to a 285 nm SiO$_2$/Si substrate.
The spin dynamics in our samples was measured by TRKR, which presents a higher sensitivity to spin-related phenomena, compared with TRPL or TRDR.
We perform a single-color (degenerate) pump-probe technique with dual-frequency modulation (see supplementary information), using a similar experimental setup as described elsewhere \cite{Guimaraes2018}.
When a high intensity circularly polarized ($\sigma _{\pm}$) laser pulse (pump) excites the monolayer, electrons of a single valley ($K_{\pm}$) are excited to the conduction band and can generate a spin imbalance.
By shining a linearly polarized (probe) pulse at a certain time-delay after the pump pulse, the spin/valley imbalance is measured by a change on the polarization axis of the reflected beam, which is rotated by an angle $\theta_{K}$ (Fig. \ref{Fig1:summary}a).
Finally, the time evolution of the spin imbalance in the sample is measured by changing the delay time (d$t$) between the pump and probe pulses.
All measurements were performed at low temperature (T = 6 K).
Figure \ref{Fig1:summary}d shows the TRKR at zero magnetic field and how the sign of the Kerr rotation depends on which valley we are exciting: a positive (negative) Kerr rotation is observed upon shining $\sigma _+$ ($\sigma _-$) polarized pump pulses.
This behavior is a fingerprint of the symmetry and spin selectivity of the two valleys of the TMD, when excited with circularly polarized light \cite{Zhu2014, Hsu2015, DalConte2015, Guimaraes2018, Li2020}.
Two decay time constants are visible in our measurements, a faster ($\tau_1 \sim$ 1.5 ps) and a longer one ($\tau_2 \sim$ 2 ns) that goes beyond the measurement range.
We observe a drastic change on the TRKR dynamics upon an applied magnetic field.
Figure \ref{Fig1:summary}c and e show the TRKR signal of our sample for an applied magnetic field of -5 T and +5 T, respectively.
Within the first 3 ps after excitation we observe a reversal of the TRKR signal for one pump polarization, while the signal for the other stabilizes.
This can be explained by an out-of-plane magnetic field lifting the valley degeneracy causing an opposite shift on the TMD bands of opposite valleys \cite{Macneill2015, Li2014, Srivastava2015, Aivazian2015} (Fig \ref{Fig1:summary}b).
Due to the different net angular momenta of the valence and conduction bands, the energy shifts for the conduction ($\Delta E_c$) and valence bands ($\Delta E_v$) are also different.
The resulting effect is that the lowest-energy state for holes lies in one valley, while that for electrons lies in the other.
These observations reveal that our TRKR signal is dominated by hole relaxation.
Upon populating the $K_{-}$ valley through a $\sigma_{-}$-polarized pump, we observe a reversal of the signal from negative to positive for positive magnetic fields.
For this case, the electron ground state lies in the conduction band of the $K_{-}$ valley, while the ground state of holes is located at the $K_{+}$ valley.
The fact that we observe a positive TRKR signal after $\sim$3 ps indicates a higher spin population in the $K_{+}$ valley, implying a fast intervalley scattering and that the spin accumulation measured in our experiments arises from the hole spin.
When the direction of the magnetic field is reversed, similar behavior is observed but with an opposite TRKR signal.
The long decay times we observe do not show any clear dependence with the magnetic field.
The two decay time constants are obtained through a biexponential decay of the form $\theta _K = A_1 e^{-t/\tau _1}+A_2 e^{-t/\tau _2}$, where $A_n$ are the Kerr rotation amplitudes at delay time d$t$ = 0 and $\tau_n$ the decay times of the fast and slow processes ($n = 1, 2$), respectively.
This is done for measurements using both directions of pump polarization and various values of applied magnetic fields, from B = -5 T to +5 T.
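As an illustration of this fitting procedure, the following minimal sketch (ours, using synthetic data in place of the measured traces; it is not the analysis code used for the figures) fits the biexponential form with a standard nonlinear least-squares routine:
\begin{verbatim}
# Sketch: biexponential fit theta_K(t) = A1*exp(-t/tau1) + A2*exp(-t/tau2).
# Synthetic data stand in for the measured TRKR traces; times are in ps.
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, A1, tau1, A2, tau2):
    return A1 * np.exp(-t / tau1) + A2 * np.exp(-t / tau2)

t = np.geomspace(0.1, 1550.0, 400)           # log-spaced delays up to 1.55 ns
data = biexp(t, 1.0, 1.5, 0.3, 2000.0)       # fast ~1.5 ps, slow ~2 ns
data += 0.01 * np.random.default_rng(0).normal(size=t.size)

p0 = (1.0, 1.0, 0.1, 1000.0)                 # initial guesses
popt, _ = curve_fit(biexp, t, data, p0=p0)
print("A1, tau1 (ps), A2, tau2 (ps):", popt)
\end{verbatim}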
Figures \ref{Fig3:lifetimes}a and \ref{Fig3:lifetimes}b show the TRKR signals at B = $-$5 T and $+$5 T over a long d$t$ range, up to 1.55 ns.
The TRKR signals are still clearly measurable even at these long-time delays.
The magnetic field dependence for $\tau_2$ is shown in Figure \ref{Fig3:lifetimes}c (see the supplementary material for $\tau_1$).
For zero, and also high magnetic fields, we find non-zero pump-induced TRKR signals.
However, for 0$< |B| <$ 3 T, the signal for one of the pump polarizations is within our experimental noise, and therefore only measurements for one pump polarization were used within this range.
Nonetheless, we do not observe any striking features on the TRKR decay times for the whole range of magnetic fields studied, even though additional spin scattering channels are predicted to arise from the breaking of time-reversal symmetry \cite{Gilardoni2021}.
This indicates that, while the intervalley scattering rate could be modified by the magnetic field, the main source of spin scattering remains unaffected for our samples.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Fig2.eps}
\caption{(a), (b) TRKR signal for $\sigma_+$ (red) and $\sigma_-$ (blue) polarized pump at B = $-$5 T (a) and $+$5 T (b) at long time scales. The orange dashed lines show the fit that extracts the slow relaxation times. (c) Extracted slow decay times ($\tau _2$) and (d) amplitudes (A$_2$) at different magnetic fields for the two excitation polarizations.}
\label{Fig3:lifetimes}
\end{figure}
The amplitude of our TRKR signals shows a clear linear behavior with the magnetic field, Fig. \ref{Fig3:lifetimes}d.
Strikingly, the slopes for the two pump polarizations are slightly different, resulting in a non-symmetric response with the magnetic field direction at long time delays.
Previous measurement runs of the same sample also presented a linear trend with the magnetic field, with different slopes for each excitation polarization, see supplementary material for details.
Although small experimental artifacts cannot be ruled out, a physical origin of this asymmetry would be inconsistent with a simple description involving solely a Zeeman energy shift of the states.
We currently do not have a clear understanding of the origin of such an effect, which should be explored in more detail in later studies.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Fig3.eps}
\caption{(a) TRKR polarization at different wavelengths and zero external magnetic field. Measurements are offset vertically for clarity. (b) Polarized photoluminescence spectra of our sample at B = 0 T. Inset: intensity profile of the TRKR polarization at different delay times: 2, 4, 6, 10, 100, and 200 ps. (c), (d) TRKR for $\sigma_+$ (red) and $\sigma_-$ (blue) polarized pump at $\lambda$ = 765 nm for B = $-$5 T (c) and $+$5 T (d) at T = 6 K. }
\label{Fig2:wavelength}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.95]{Fig4.eps}
\caption{(a) Experimental (left) and theoretical (right) results for B = 0 T with $\tau_+=\tau_- =$ 20 ps, $\tau_{c_{+-}}=\tau_{c_{-+}} =$ 1.8 ps and $\tau_{v_{+-}}=\tau_{v_{-+}} =$ 5 ns. (b) Experimental (left) and theoretical (right) results for B = +5 T with an asymmetric hole scattering time $\tau_{v_{+-}} = 5$ ps, $\tau_{v_{-+}} =$ 5 ns. (c) Representation of the dynamics of the spin population under positive magnetic fields: the ground state of the system with an initial hole population is photo-excited ($\sigma _-$) at the valley $K_-$. Later, electrons are scattered between valleys in the conduction band followed by the scattering of holes from $K_-$ to $K_+$. Subsequently, excited states recombine radiatively. Finally, resident holes in $K_+$ reach thermal equilibrium with $K_-$. The represented transfer times indicate the dominant process at each panel.}
\label{Fig4:model}
\end{figure*}
By studying the dependence of the TRKR signal on the excitation wavelength, we can probe the contribution of excitons and trions to the spin signal.
To reduce spurious effects which are independent of the valley/spin polarization, here we use the polarization of the Kerr rotation, namely the difference of the TRKR measurements for the two polarizations of excitation $(\theta_K(\sigma_+)-\theta_K(\sigma_-))/2$ \cite{Guimaraes2018}.
As expected, our signal is strongly modulated upon a change of the wavelength when going over the exciton and trion resonances (Fig. \ref{Fig2:wavelength}a).
We observe that when exciting with $\lambda \leqslant$ 745 nm, KR polarization decays in the first couple of picoseconds.
On the other hand, when exciting at $\lambda \geqslant$ 750 nm, the spin imbalance remains finite, within the time range of our measurements.
The largest signal is seen at $\lambda=$ 755 nm, see inset of Figure \ref{Fig2:wavelength}b.
A comparison to photoluminescence measurements (Fig. \ref{Fig2:wavelength}b) reveals that our findings are consistent with short-lived signals coming from excitations at and below the wavelength of the exciton resonance (X), while the long-lived signals come from energies close to the peak of trion emission (T).
Our observations are in agreement with previous reports in literature \cite{Schwemmer2017, Godde2016, Hsu2015, Wang2015} showing that trions dominate the spin signal in TMDs monolayers.
We observe an interesting effect arising from the magnetic field-induced reduction of the bandgap for one valley with respect to the other.
For high magnetic fields (B = $\pm$5 T) and the lowest excitation energy resulting in a measurable signal ($\lambda$ = 765 nm), we only observe a TRKR signal above our noise level for one pump polarization (Figure \ref{Fig2:wavelength}c and d).
For the positive magnetic field, the bandgap of the K$_+$ valley is reduced and produces a spin accumulation when excited with $\sigma_+$.
Nevertheless, as the bandgap at K$_-$ is increased, the energy of the pumping photons is not enough to excite the electrons and generate any spin imbalance, resulting in no TRKR signal for $\sigma_-$ excitation.
A similar behavior is observed for negative magnetic fields, but with opposite spin accumulation in the K$_-$ valley and no signal for $\sigma_+$ excitation.
Additional measurements of wavelength dependence with the magnetic field can be found in the supplementary material.
To quantitatively describe the spin dynamics and to elucidate the relaxation processes behind our measurements, we use a rate-equation model similar to what has been proposed before \cite{Hsu2015}, but including the effect of an external magnetic field.
We consider three main carrier scattering processes: direct radiative recombination at the same valley, the spin-valley flip of electrons in the conduction band, and the spin-valley flip of holes in the valence band.
The breaking of the energy degeneracy of the $K_+$ and $K_-$ valleys is represented by different intervalley scattering rates.
Therefore, considering the more general case, in which the conduction- and valence-band transfer rates, as well as the radiative recombination times of the two valleys, are all different, we arrive at the set of equations
\begin{eqnarray}
\frac{dn_+}{dt}&=&-\frac{n_+p_+}{\tau_+}-\frac{n_+}{\tau_{c_{+-}}}+\frac{n_-}{\tau_{c_{-+}}} \\
\frac{dn_-}{dt}&=&-\frac{n_-p_-}{\tau_-}-\frac{n_-}{\tau_{c_{-+}}}+\frac{n_+}{\tau_{c_{+-}}} \\
\frac{dp_+}{dt}&=&-\frac{n_+p_+}{\tau_+}-\frac{p_+}{\tau_{v_{+-}}}+\frac{p_-}{\tau_{v_{-+}}} \\
\frac{dp_-}{dt}&=&-\frac{n_-p_-}{\tau_-}-\frac{p_-}{\tau_{v_{-+}}}+\frac{p_+}{\tau_{v_{+-}}}
\end{eqnarray}
\noindent where $n_i$ and $p_i$ are the photo-excited populations at the $K_i$ valley for electrons and holes, respectively.
The first term on the right-hand side of each equation represents the radiative recombination in a given valley, with recombination time $\tau _\pm$.
The second term represents the population of carriers transferred to the opposite valley in the conduction or valence band, with an intervalley scattering time $\tau _{c,v_{\pm \mp}}$.
Finally, the last term accounts for the carriers transferred in from the opposite valley (source term).
As Kerr rotation measures the spin/valley imbalance of the system, we model our measurements as the difference between the valley populations $\theta _K = (n_++p_+)-(n_-+p_-)$.
The initial conditions allow us to consider the two types of excitation, for instance, $n_+(0)=N/2$, $p_+(0)=N/2$, $n_-(0)=0$, and $p_-(0)=0$ for $\sigma_+$ excitation, where $N/2$ is the number of excited carriers.
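The following minimal numerical sketch (ours; scipy's integrator stands in for whichever solver was actually used) integrates the rate equations for $\sigma_+$ excitation at B = 0, using the representative parameter values quoted in the caption of Fig.~\ref{Fig4:model}a:
\begin{verbatim}
# Sketch: rate-equation model above with the B = 0 parameters of Fig. 4a.
# Populations are in arbitrary units; times are in ps.
import numpy as np
from scipy.integrate import solve_ivp

tau_p = tau_m = 20.0                  # radiative recombination, K+ / K-
tau_c_pm = tau_c_mp = 1.8             # electron intervalley scattering
tau_v_pm = tau_v_mp = 5000.0          # hole intervalley scattering (5 ns)

def rhs(t, y):
    n_p, n_m, p_p, p_m = y
    dn_p = -n_p * p_p / tau_p - n_p / tau_c_pm + n_m / tau_c_mp
    dn_m = -n_m * p_m / tau_m - n_m / tau_c_mp + n_p / tau_c_pm
    dp_p = -n_p * p_p / tau_p - p_p / tau_v_pm + p_m / tau_v_mp
    dp_m = -n_m * p_m / tau_m - p_m / tau_v_mp + p_p / tau_v_pm
    return [dn_p, dn_m, dp_p, dp_m]

N = 1.0
y0 = [N / 2, 0.0, N / 2, 0.0]         # sigma_+ excitation: carriers in K+
sol = solve_ivp(rhs, (0.0, 100.0), y0, dense_output=True)

t = np.linspace(0.0, 100.0, 500)
n_p, n_m, p_p, p_m = sol.sol(t)
theta_K = (n_p + p_p) - (n_m + p_m)   # modeled Kerr rotation signal
\end{verbatim}
Populating $K_-$ instead, and introducing the asymmetric hole scattering times discussed below ($\tau_{v_{-+}} \ll \tau_{v_{+-}}$), is the modification used to mimic a positive magnetic field.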
The results obtained by solving our rate-equation system are in good agreement with the experimental measurements with and without magnetic fields.
In Figure \ref{Fig4:model}a we compare our experimental and theoretical results at zero magnetic field, where valley degeneracy is still present.
We noticed that, to properly reproduce our measurements, the radiative recombination could be neither the fastest nor the slowest decay process; instead, good results were achieved using recombination times ($\tau_\pm$) of tens of picoseconds.
The latter agrees with previous reports of the trion lifetime for MoSe$_2$ \cite{Wang2015, Godde2016, Anghel2018, Zhang2021}.
Therefore, as our results point to a spin accumulation of holes at long times ($\tau_v$ = 5~ns), the fast decay ($\tau_c$ = 1.8~ps) is assigned to electron transfer in the conduction band.
This is consistent with previous reports pointing to the fast depolarization of electrons in the conduction band compared to that of holes, and even to the radiative recombination of trions \cite{Hsu2015, Mai2014}.
To simulate our results at high magnetic fields (Figure \ref{Fig4:model}b) we introduce an asymmetry in the scattering times of the different considered processes (see supplementary information for details).
Our model indicates that the main process responsible for the fast switch of spin polarization is a strong reduction of the hole intervalley scattering time toward the valley with the higher valence-band energy.
In the right panel of Figure \ref{Fig4:model}b we show the result of the TRKR calculation just with the effect of an asymmetric scattering process at the valence band, by considering $\tau _{v_{-+}}$ = 5 ps, three orders of magnitude smaller than $\tau _{v_{+-}}$.
Meanwhile, the introduction of different scattering times for the electrons in the conduction band and different radiative recombination times in the two valleys can help refine the result at short times, but has no major impact on the overall decay profile.
We point out that in both calculations in Figures \ref{Fig4:model}a and \ref{Fig4:model}b, we did not obtain a long-lived decay process, such as the ones observed in the experimental results, even considering larger scattering lifetimes for holes $\tau _v$.
Therefore, we associate this long-lived component to resident charge carriers in agreement with previous reports \cite{Schwemmer2017, Ersfeld2019, Anghel2018, Hsu2015}.
In Figure \ref{Fig4:model}c, we illustrate the obtained spin dynamics for positive magnetic field and an optical excitation of the $K_-$ valley.
Our results show that optically-generated spins in TMDs can be very effectively controlled by magnetic fields, which is explained by a strong field-induced asymmetry on the intervalley scattering rates.
We envision that the external magnetic field used in our experiments could be replaced by magnetic proximity to van der Waals magnets.
These heterostructures have already been shown to provide significant control over the valley polarization in photoluminescence measurements \cite{Seyler2018}, and our work implies that a significant polarization of long-lived resident carriers could be obtained and controlled via proximity effects, with lifetimes of several ns as opposed to a few ps for excitons/trions.
We believe that this will open new directions for opto-spintronic devices based on TMDs which do not depend on the stabilization of bound electron-hole pairs, but make use of the full potential of the long spin lifetimes offered by resident carriers in these materials.
See Supplementary Information for additional details regarding the experimental setup, photoluminescence measurements with magnetic field, TRKR data at different wavelengths with an external magnetic field, TRDR, and further discussion of the rate-equation model.
We thank J. G. Holstein, H. de Vries, F. van der Velde, H. Adema, and A. Joshua for their technical support.
This work was supported by the Dutch Research Council (NWO—STU.019.014), the Zernike Institute for Advanced Materials, and the Brazilian funding agencies CNPq, FAPEMIG and the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Project code 88887.476316/2020-00.
\nocite{*}
|
{
"arxiv_id": "2302.14207",
"language": "en",
"timestamp": "2023-03-01T02:05:16",
"url": "https://arxiv.org/abs/2302.14207",
"yymm": "2302"
} |
\bibliographystyle{apalike}
\DeclareMathOperator{\MI}{\mathtt{MI}}
\DeclareMathOperator{\SemStr}{\mathtt{SemanticStrengthening}}
\begin{document}
\twocolumn[
\aistatstitle{Semantic Strengthening of Neuro-Symbolic Learning}
\aistatsauthor{Kareem Ahmed\And Kai-Wei Chang \And Guy Van den Broeck }
\aistatsaddress{ Computer Science Department\\UCLA\\ahmedk@cs.ucla.edu
\And Computer Science Department\\UCLA\\kwchang@cs.ucla.edu
\And Computer Science Department\\UCLA\\guyvdb@cs.ucla.edu } ]
\begin{abstract}
Numerous neuro-symbolic approaches have recently been proposed typically with the goal of adding symbolic
knowledge to the output layer of a neural network.
Ideally, such losses maximize the probability that the neural network's predictions satisfy the
underlying domain.
Unfortunately, this type of probabilistic inference is often computationally infeasible.
Neuro-symbolic approaches therefore commonly resort to fuzzy approximations of this probabilistic objective,
sacrificing sound probabilistic semantics, or to sampling which is very seldom feasible.
We approach the problem by first assuming the constraint decomposes conditioned on the features learned by the network.
We iteratively \emph{strengthen} our approximation, restoring the dependence between the constraints most responsible for degrading the quality of the approximation.
This corresponds to computing the mutual information between pairs of constraints conditioned on the network's learned features,
and may be construed as a measure of how well aligned the gradients of two distributions are.
We show how to compute this efficiently for tractable circuits.
We test our approach on three tasks: predicting a minimum-cost path in Warcraft, predicting a minimum-cost perfect matching, and solving Sudoku puzzles, observing that it improves upon the baselines while sidestepping intractability. %
\end{abstract}
\section{Introduction}
Neural networks have been established as excellent feature extractors,
managing to learn intricate statistical features from large datasets.
However, without a notion of the \emph{symbolic rules} underlying any
given problem domain, neural networks are often only able to achieve
decent \emph{label-level} accuracy, with a complete disregard to the
structure \emph{jointly} encoded by the individual labels.
These structures may encode, for example, a path in a graph, a matching of users to their preferences, or even the solution to a Sudoku puzzle.
Neuro-symbolic approaches \citep{RaedtIJCAI2020} hope to remedy
the problem by injecting into the training process knowledge
regarding the underlying problem domain, e.g. a Sudoku puzzle
is characterized by the uniqueness of the elements of every row,
column, and $3 \times 3$ square.
This is achieved by maximizing the probability allocated by the
neural network to outputs satisfying the rules of the underlying
domain.
Computing this quantity is, in general, a \#P-hard problem \citep{Valiant1979b}, which
while tractable for a range of practical problems \citep{Xu18,Ahmed22nesyentropy},
precludes many problems of interest.
A common approach is to side step the hardness of computing the
probability \emph{exactly} by replacing logical operators with
their fuzzy t-norms, and logical implications with simple
inequalities \citep{Grespan21, Krieken2020AnalyzingDF}.
This, however, does not preserve the sound probabilistic semantics
of the underlying logical statement: equivalent logical statements no
longer correspond to the same set of satisfying assignments; they map
to different probability distributions and, consequently, to vastly
different constraint probabilities.
On the other hand, obtaining a Monte Carlo estimate of the probability
\citep{Ahmed22pylon} is infeasible in exponentially-sized output spaces
where the valid outputs represent only a sliver of the distribution's
support.
In this paper, starting from first principles, we derive a probabilistic approach to scaling probabilistic
inference for neuro-symbolic learning while retaining the sound semantics of the underlying logic.
Namely, we start by assuming that the probability of the constraint decomposes, conditioned on the network's learned features.
That is, we assume the events encoded by the logical formula to be \emph{mutually independent} given the learned features, and therefore, the joint probability factorizes as a product of probabilities.
This generalizes the prolific assumption that the variables are \emph{mutually independent} conditioned on the network's learned features \citep{mullenbach2018explainable,Xu18,giunchiglia2020coherent} to events over an arbitrary number of atoms.
This reduces the (often intractable) problem of probabilistically satisfying the global constraint, e.g. the validity of a Sudoku puzzle, to the (tractable) problem of probabilistically satisfying the individual local constraints, e.g. the uniqueness of the elements of a row, column, or square.
This, however, introduces inconsistencies: an assignment that satisfies one constraint might violate another, leading to misaligned gradients.
More precisely, for each pair of constraints, we are interested in the penalty incurred, in terms of modeling error, by assuming the constraints to be independent when they are in fact dependent, conditioned on the features learned by the neural network.
This corresponds exactly to the conditional mutual information, a quantity notoriously hard to calculate.
We give an algorithm for tractably computing the conditional mutual information, given that our constraints are represented as circuits satisfying certain structural properties.
Training then proceeds by interleaving the process of learning the neural network with the process of \emph{semantic strengthening}, in which we iteratively tighten our approximation, using the neural network to guide us as to which constraints should be made dependent.
\definecolor{color1}{rgb}{0.50, 0.70, 1}
\definecolor{color2}{rgb}{0.10, 0.10, 0.1}
\begin{figure*}[t]
\begin{subfigure}[b]{0.3\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
no markers, domain=0:10, samples=100,
axis lines*=left, xlabel=$\rvar{y}$, ylabel=$p(\rvar{y}|x)$,
every axis y label/.style={at=(current axis.above origin),anchor=south},
every axis x label/.style={at=(current axis.right of origin),anchor=west},
height=4cm,
xtick=\empty, ytick=\empty,
enlargelimits=false, clip=false, axis on top,
grid = major,
]
\addplot [fill=sanae1, draw=none, domain=4.0:5.0] {gaussianmixture2(3,0.5,6,1,0.5)} \closedcycle;
\addplot [very thick,sanae5] {gaussianmixture2(3,0.5,6,1,0.5)};
\draw [yshift=-0.3cm, latex-latex, |-|](axis cs:2.5,0) -- node [fill=white] {$m(\alpha)$} (axis cs:6.5,0);
\end{axis}
\end{tikzpicture}
\caption{Setting where satisfying assignments are only a fraction of the distribution's support.}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
no markers, domain=0:10, samples=100,
axis lines*=left, xlabel=$\rvar{y}$, ylabel=$p(\rvar{y}|x)$,
every axis y label/.style={at=(current axis.above origin),anchor=south},
every axis x label/.style={at=(current axis.right of origin),anchor=west},
height=4cm,
xtick=\empty, ytick=\empty,
enlargelimits=false, clip=false, axis on top,
grid = major,
]
\addplot [fill=sanae1, draw=none, domain=4.0:10.0] {gaussianmixture2(0.5,3,7,1,0.1)} \closedcycle;
\addplot [very thick,sanae5] {gaussianmixture2(0.5,2,7,1,0.1)};
\draw [yshift=-0.3cm, latex-latex, |-|](axis cs:4.0,0) -- node [fill=white] {$m(\alpha)$} (axis cs:10,0);
\end{axis}
\end{tikzpicture}
\caption{A network allocating \emph{most} of the probability mass to satisfying assignments.}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[scale=0.5]{sudoku-5.pdf}
\vspace{0.8em}
\caption{Distributions over the empty entries of a Sudoku row and column, modeled separately.}
\end{subfigure}
\caption{%
Estimating the probability of a constraint using sampling can fail when, (a)
the set of satisfying assignments represents only a minuscule subset of the
distribution's support,
or, (b) when the network already largely satisfies the constraints, and consequently,
we are very unlikely to sample very low-probability assignments violating the constraint.
Using the product t-norm, (c), to model the probability of satisfying constraints reduces the problem to satisfying the constraints locally, which can often lead to conflicting probabilities, and therefore, conflicting gradients. Here, e.g., according to the distribution over the Sudoku row, $3$ is the likely value of the cell in grey, whereas, according to the distribution over the Sudoku column, $4$ is the likely value.
}
\label{fig:tnorm-failure}
\end{figure*}
We test our approach on three different tasks: predicting a minimum-cost path in a Warcraft terrain, predicting a minimum-cost perfect matching, and solving Sudoku puzzles, where we observe that our approach greatly improves upon the baselines, all for a minuscule increase in computation time (our experiments are capped at roughly 2--3 seconds per iteration for Warcraft min-cost path and MNIST perfect matching, and 7 seconds for Sudoku), thereby sidestepping the intractability of the problem. Our code is publicly available at
\hyperlink{https://github.com/UCLA-StarAI/Semantic-Strengthening}{github.com/UCLA-StarAI/Semantic-Strengthening}.
\section{Problem Statement and Motivation}\label{sec:problem_statement}
We will start by introducing the notational choices used throughout the remainder of the paper, followed by a motivation of the problem.
We write uppercase letters ($\rvar{X}$, $\rvar{Y}$) for Boolean variables and lowercase letters ($\rvar{x}$, $\rvar{y}$) for their instantiation ($\rvar{Y}=0$ or $\rvar{Y}=1$).
Sets of variables are written in bold uppercase ($\rvars{X}$, $\rvars{Y}$), and their joint instantiation in bold lowercase ($\ensuremath{\vv{x}}\xspace$, $\ensuremath{\vv{y}}\xspace$).
A literal is a variable ($\rvar{Y}$) or its negation ($\neg \rvar{Y}$).
A logical sentence ($\alpha$ or $\beta$) is constructed from variables and logical connectives ($\land$, $\lor$, etc.), and is also called a (logical) formula or constraint.
A state or world $\ensuremath{\vv{y}}\xspace$ is an instantiation to all variables $\rvars{Y}$.
A state $\ensuremath{\vv{y}}\xspace$ satisfies a sentence $\alpha$, denoted $\ensuremath{\vv{y}}\xspace \models \alpha$, if the sentence evaluates to true in that world. A state $\ensuremath{\vv{y}}\xspace$ that satisfies a sentence $\alpha$ is also said to be a model of $\alpha$.
We denote by $m(\alpha)$ the set of all models of $\alpha$.
The notation for states $\ensuremath{\vv{y}}\xspace$ is used to refer to an assignment, the logical sentence enforcing the assignment, or the binary output vector capturing the assignment, as these are all equivalent notions.
A sentence $\alpha$ entails another sentence $\beta$, denoted $\alpha \models \beta$, if all worlds that satisfy $\alpha$ also satisfy $\beta$.
\paragraph{A Probability Distribution over Possible Structures}
Let $\alpha$ be a logical sentence defined over Boolean variables $\rvars{Y} = \{\rvar{Y}_1,\dots,\rvar{Y}_n\}$.
Let ${\bm{p}}$ be a vector of probabilities for the same variables $\rvars{Y}$, where ${\bm{p}}_i$ denotes the predicted probability of variable $\rvar{Y}_i$ and corresponds to a single output of the neural network.
The neural network's outputs induce a probability distribution $P(\cdot)$ over possible states $\ensuremath{\vv{y}}\xspace$ of $\rvars{Y}$:
\begin{equation}\label{eqn:pr_struct}
P(\ensuremath{\vv{y}}\xspace) = \prod_{i: \ensuremath{\vv{y}}\xspace \models \rvar{Y}_i} {\bm{p}}_i \prod_{i: \ensuremath{\vv{y}}\xspace \models \lnot \rvar{Y}_i} (1 - {\bm{p}}_i).
\end{equation}
\paragraph{Semantic Loss}\label{sec:semantic_loss}
The semantic loss \citep{Xu18} is a function of the logical constraint $\alpha$ and a probability vector ${\bm{p}}$.
It quantifies how close the neural network comes to satisfying the constraint by computing the probability of the constraint under the distribution $P(\cdot)$ induced by ${\bm{p}}$.
It does so by reducing the problem of probability computation to weighted model counting (WMC): summing up the models of $\alpha$, each weighted by its likelihood under $P(\cdot)$.
It, therefore, maximizes the probability mass allocated by the network to the models of $\alpha$
\begin{equation}
\label{eq:sloss}
P(\alpha)
= \mathbb{E}_{\ensuremath{\vv{y}}\xspace \sim P}\left[ \bm{1}\{\ensuremath{\vv{y}}\xspace \models \alpha\} \right]
= \sum_{\ensuremath{\vv{y}}\xspace \models \alpha} P(\ensuremath{\vv{y}}\xspace).
\end{equation}
Taking the negative logarithm recovers semantic loss.
Computing the above expectation is generally \#P-hard \citep{Valiant1979b}: there are potentially exponentially many models of $\alpha$.
For instance, there are $6.67 \times 10^{21}$ valid $9 \times 9$ Sudokus \citep{Felgenhauer2005}, whereas the number of valid matchings or paths in an $n \times n$ grid grows doubly exponentially in the grid size \citep{STREHL2001}.
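As a small worked illustration (ours, for a toy constraint; realistic instances are of course far too large to enumerate), the weighted model count above can be computed by brute force:
\begin{verbatim}
# Sketch: brute-force semantic loss for a toy constraint over 3 variables.
# alpha(y) encodes "exactly one of Y1, Y2, Y3 is true".
import itertools
import math

def alpha(y):
    return sum(y) == 1

def prob_state(y, p):
    # P(y) under the fully factorized distribution induced by p.
    return math.prod(pi if yi else (1.0 - pi) for yi, pi in zip(y, p))

def semantic_loss(p):
    # -log sum_{y |= alpha} P(y), enumerating all 2^n states.
    p_alpha = sum(prob_state(y, p)
                  for y in itertools.product([0, 1], repeat=len(p))
                  if alpha(y))
    return -math.log(p_alpha)

print(semantic_loss([0.7, 0.2, 0.1]))  # ~0.54: mass concentrated on one-hot states
print(semantic_loss([0.5, 0.5, 0.5]))  # ~0.98: higher loss
\end{verbatim}
The enumeration over $2^n$ states is exactly what becomes infeasible at scale, motivating the relaxed and decomposed computations discussed next.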
A common approach resorts to \emph{relaxing} the logical statements, replacing logical operators with their fuzzy t-norms and implications with simple inequalities. These relaxations come in different flavors: Product~\citep{rocktaschel2015, li2019, asai2020}, G\"{o}del~\citep{minervini2017}, and \L{}ukasiewicz~\citep{Bach2017}, which differ only in their interpretation of the logical operators. \citet{Grespan21} offer a comprehensive theoretical and empirical treatment of the subject matter.
While attractive due to their tractability, t-norms suffer from a few major
drawbacks. First, they \emph{lose the precise meaning of the logical
statement}, i.e. the satisfying and unsatisfying assignments of the relaxed
logical formula differ from those of the original logical formula. Second, the logic
is no longer consistent, i.e. logical statements that are otherwise equivalent
correspond to different truth values, as the relaxations are a function of
their syntax rather than their semantics. Lastly, the relaxation sacrifices sound
probabilistic semantics: unlike other approaches~\citep{Xu18, manhaeve2018},
where the output probability corresponds to the probability mass allocated to
the truth assignments of the logical statement, the output probability of a
relaxation has no sound probabilistic interpretation~\citep{Grespan21}.
A slightly more benign relaxation~\citep{rocktaschel2015} only assumes that,
for a constraint $\alpha = \beta_1 \land \ldots \land \beta_n$, a neural
network $f(\cdot)$, and an input $\ensuremath{\vv{x}}\xspace$, the events $\beta_i$ are mutually
independent conditioned on the features learned by the neural network.
That is, the probability of the constraint factorizes as $P(\alpha
\given{f(\ensuremath{\vv{x}}\xspace)}) = P(\beta_1 \given{f(\ensuremath{\vv{x}}\xspace)}) \times \ldots \times
P(\beta_n \given{f(\ensuremath{\vv{x}}\xspace)})$.
This recovers the true probabilistic semantics of the logical statement
when $\beta_1, \ldots, \beta_n$ are over disjoint sets of variables,
i.e. $\forall_{i, j}$ $ \ensuremath{\mathsf{vars}}(\beta_i) \cap \ensuremath{\mathsf{vars}}(\beta_j) = \emptyset$ for $i \neq j$
and can otherwise
be thought of as a \emph{tractable} approximation, the basis of which
is the neural network's ability to sufficiently encode the dependencies
shared between the constraints, rendering them conditionally independent
given the learned features.
That is assuming the neural network makes almost-deterministic predictions
of the output variables given the embeddings.
However, even assuming the true function being learned is deterministic,
there is still the problem of an imperfect embedding giving probabilistic
predictions whereby clauses are dependent.
The above relaxation reduces the \emph{intractable} problem of satisfying
the global constraint to the \emph{tractable} problem of satisfying the local
constraints, and can therefore often lead to \emph{misaligned gradients}.
Consider cell $(1,1)$ of the Sudoku in \cref{fig:tnorm-failure}.
Consider the two constraints asserting that the elements of row $2$ and that
the elements of column $2$ are unique, and assume the probability distribution
induced by the network over row and column assignments are as shown in
\cref{fig:tnorm-failure}, right.
This leads to opposing gradients for cell $(1,1)$: On the one hand, the gradient
from maximizing the probability of the column constraint pushes it to $2$, whereas
the gradient from maximizing the probability of the row constraint pushes it to $4$.
The problem here stems from modeling as independent two constraints that are strongly
coupled, so much so that the value of one determines the value of the other.
Recently, \citet{Ahmed22pylon} proposed using sampling to obtain a Monte Carlo
estimate of the probability of the constraint being satisfied.
This offers the convenience of specifying constraints as PyTorch functions,
as well as of accommodating non-differentiable elements in the training pipeline.
However, intractable problems typically come with a combinatorially large state space, so the probability of sampling a valid structure drops precipitously with the size of that space, making it near impossible to obtain any learning signal: almost all sampled states will violate the constraint.
The converse problem arises when the constraint is almost satisfied, in which case we almost never sample
the low-probability assignments that violate it.
That is not to mention the downsides of gradient estimators: the estimator employed by \citet{Ahmed22pylon} is REINFORCE, which, while unbiased, exhibits a variance that makes learning very hard. Gradient estimators that do not exhibit this variance problem trade variance for bias, making it unlikely to obtain the true gradient.
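The collapse of the Monte Carlo estimate is easy to reproduce (a sketch of ours, not the estimator of \citet{Ahmed22pylon}): with near-uniform predictions over a combinatorial space, essentially no sample satisfies the constraint, so the estimate of $P(\alpha)$, and any gradient derived from it, is zero:
\begin{verbatim}
# Sketch: Monte Carlo estimate of P(alpha) collapses to zero when valid
# structures are a vanishing fraction of the output space.
import numpy as np

rng = np.random.default_rng(0)
n_vars, n_samples = 81, 10_000        # e.g. an output grid of 81 binary variables
p = np.full(n_vars, 0.5)              # near-uniform network predictions

def alpha(y):
    # Toy "structured" constraint: exactly 9 of the 81 variables are on.
    return y.sum() == 9

samples = rng.random((n_samples, n_vars)) < p
estimate = np.mean([alpha(y) for y in samples])
print("Monte Carlo estimate of P(alpha):", estimate)   # almost surely 0.0
\end{verbatim}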
\section{Semantic Strengthening}
We are interested in an approach that, much like the approaches
discussed in \cref{sec:problem_statement} is tractable, but retains sound
probabilistic semantics, and yields a non-zero gradient when the constraint
is locally, or globally, violated.
Let our constraint $\alpha$ be given by a conjunctive normal form (CNF),
$\alpha = \beta_1 \land \ldots \land \beta_n$.
We start by assuming that, for a neural
network $f(\cdot)$, and an input $\ensuremath{\vv{x}}\xspace$, the clauses $\beta_i$
are mutually independent conditioned on the features learned
by the neural network i.e. the probability of the constraint
factorizes as $P(\alpha \given{f(\ensuremath{\vv{x}}\xspace)}) = P(\beta_1
\given{f(\ensuremath{\vv{x}}\xspace)}) \times \ldots \times P(\beta_n \given{
f(\ensuremath{\vv{x}}\xspace)})$, where the probability of each of the clauses,
$P(\beta_i)$, can be computed tractably.
This recovers the true probabilistic semantics of the logical statement
when $\beta_1, \ldots, \beta_n$ are over disjoint sets of variables,
i.e. $\forall_{i, j}$ $ \ensuremath{\mathsf{vars}}(\beta_i) \cap \ensuremath{\mathsf{vars}}(\beta_j) = \emptyset$ for $i \neq j$, and can otherwise
be thought of as a \emph{tractable} approximation, the basis of which
is the neural network's ability to sufficiently encode the dependencies
shared between the constraints, rendering them conditionally independent
given the learned features, again, assuming the true function is deterministic,
with no inherent uncertainty.
The above approximation is semantically sound in the sense that the probability
of each term, $P(\beta_i)$, accounts for all the truth assignments of the clause
$\beta_i$.
It is also guaranteed to yield a semantic loss value of $0$,
and therefore a zero gradient
if and only if all the clauses, $\beta_i$, are satisfied.
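In the same spirit as the toy example above (again our own sketch, not the released implementation), the fully decomposed objective is a sum of per-clause semantic losses, each computable by enumerating only the variables that the clause mentions:
\begin{verbatim}
# Sketch: decomposed semantic loss, assuming the clauses are conditionally
# independent given the learned features. Each clause touches few variables,
# so its probability is cheap to compute exactly.
import itertools
import math

def clause_prob(clause_vars, clause_fn, p):
    # P(beta) for a clause over the listed variable indices.
    total = 0.0
    for a in itertools.product([0, 1], repeat=len(clause_vars)):
        if clause_fn(a):
            total += math.prod(p[i] if ai else 1.0 - p[i]
                               for i, ai in zip(clause_vars, a))
    return total

def decomposed_semantic_loss(clauses, p):
    # -log prod_i P(beta_i) = -sum_i log P(beta_i)
    return -sum(math.log(clause_prob(v, f, p)) for v, f in clauses)

# Two overlapping "at most one" clauses sharing variable 1.
clauses = [((0, 1), lambda a: sum(a) <= 1),
           ((1, 2), lambda a: sum(a) <= 1)]
print(decomposed_semantic_loss(clauses, p=[0.6, 0.7, 0.8]))
\end{verbatim}
In practice each $P(\beta_i)$ is computed by a tractable circuit rather than by enumeration, but the independence assumption on overlapping clauses, here the shared variable 1, is exactly what semantic strengthening later revisits.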
\begin{figure*}[t!]
\begin{subfigure}[b]{0.18\textwidth}
\centering
\scalebox{.8}{
\begin{tikzpicture}[circuit logic US]
\node (or1) [or gate, inputs=nn, rotate=90, scale=0.9] at (4.25,7) {};
\node (output) at (4.25, 8.0) {};
\draw (or1) -- (output) node[pos=0.2, above right, color=sanae5] {$0.76$};
\node (and1) [and gate, inputs=nn, rotate=90, scale=0.9] at (3,5.8) {};
\node (and2) [and gate, inputs=nn, rotate=90, scale=0.9] at (5.7,5.8) {};
\node (c) at (2.2,4.6) {$C$};
\node (cval) at (2.2,3.8) {$\color{sanae1}0.2$};
\draw[->] (cval) edge (c);
\node (nc) at (4.7,4.6) {$\neg C$};
\node (ncval) at (4.7,3.8) {$\color{sanae1}0.8$};
\draw[->] (ncval) edge (nc);
\node (or2) [or gate, inputs=nn, rotate=90, scale=0.9] at (3.1,4.6) {};
\node (or3) [or gate, inputs=nnn, rotate=90, scale=0.9] at (5.8,4.6) {};
\node (a1) at (2.7,3.6) {$A$};
\node (a1val) at (2.7,2.8) {$\color{gray}0.3$};
\draw[->] (a1val) edge (a1);
\node (na1) at (3.2,3.6) {$\neg A$};
\node (na1val) at (3.2,2.8) {$\color{gray}0.7$};
\draw[->] (na1val) edge (na1);
\draw (or1.input 1) -- ++ (down: 0.25) -| (and1) node[pos=-0.2, above right, color=sanae5] {$0.56$};
\draw (or1.input 2) -- ++ (down: 0.25) -| (and2) node[pos=-0.5, above right, color=sanae5] {$0.20$};
\draw (and1.input 1) -- ++ (down: 0.25) -| (c);
\draw (and1.input 2) -- (or2) node[pos=0.4, right, color=sanae5] {$1.0$};
\draw (and2.input 1) -- ++ (down: 0.25) -| (nc);
\draw (and2.input 2) -- (or3) node[pos=0.4, right, color=sanae5] {$0.7$};
\draw (or2.input 1) -- ++ (down: 0.25) -| (a1);
\draw (or2.input 2) edge (na1);
\draw (or3.input 2) -- ++ (down: 0.25) -| (na1);
\end{tikzpicture}
}
\end{subfigure}
\begin{subfigure}[b]{0.07\textwidth}
\scalebox{.4}{
\begin{tikzpicture}[baseline=-250pt]
\node[circle,draw,minimum size=0.8cm] (a1) at (0,0.5) {};
\node[circle,draw,minimum size=0.8cm] (a2) at (0,1.5) {};
\node[circle,draw,minimum size=0.8cm] (a3) at (0,2.5) {};
\node[circle,draw,minimum size=0.8cm] (b1) at (1.5,0) {};
\node[circle,draw,minimum size=0.8cm] (b2) at (1.5,1) {};
\node[circle,draw,minimum size=0.8cm] (b3) at (1.5,2) {};
\node[circle,draw,minimum size=0.8cm] (b4) at (1.5,3) {};
\foreach \i in {1,...,3} {
\foreach \j in {1,...,4} {
\draw (a\i) -- (b\j);
}
}
\node[circle,draw,minimum size=0.8cm] (c1) at (3.0,0) {};
\node[circle,draw,minimum size=0.8cm] (c2) at (3.0,1) {};
\node[circle,draw,minimum size=0.8cm] (c3) at (3.0,2) {};
\node[circle,draw,minimum size=0.8cm] (c4) at (3.0,3) {};
\foreach \i in {1,...,4} {
\foreach \j in {1,...,4} {
\draw (b\i) -- (c\j);
}
}
\node[circle,draw,minimum size=0.8cm] (d1) at (4.5,0.5) {$\color{gray}0.3$};
\node[circle,draw,minimum size=0.8cm] (d2) at (4.5,1.5) {$\color{sanae3}0.5$};
\node[circle,draw,minimum size=0.8cm] (d3) at (4.5,2.5) {$\color{sanae1}0.2$};
\foreach \i in {1,...,4} {
\foreach \j in {1,...,3} {
\draw (c\i) -- (d\j);
}
}
\end{tikzpicture}
}
\end{subfigure}
\begin{subfigure}[b]{0.16\textwidth}
\centering
\scalebox{.8}{
\begin{tikzpicture}[circuit logic US]
\node (or1) [or gate, inputs=nn, rotate=90, scale=0.9] at (4.25,7) {};
\node (output) at (4.25, 8.0) {};
\draw (or1) -- (output) node[pos=0.2, above right, color=sanae5] {$0.60$};
\node (and1) [and gate, inputs=nn, rotate=90, scale=0.9] at (3,5.8) {};
\node (and2) [and gate, inputs=nn, rotate=90, scale=0.9] at (5.7,5.8) {};
\node (c) at (2.2,4.6) {$C$};
\node (cval) at (2.2,3.8) {$\color{sanae1}0.2$};
\draw[->] (cval) edge (c);
\node (nc) at (4.7,4.6) {$\neg C$};
\node (ncval) at (4.7,3.8) {$\color{sanae1}0.8$};
\draw[->] (ncval) edge (nc);
\node (or2) [or gate, inputs=nn, rotate=90, scale=0.9] at (3.1,4.6) {};
\node (or3) [or gate, inputs=nnn, rotate=90, scale=0.9] at (5.8,4.6) {};
\node (a1) at (2.7,3.6) {$B$};
\node (a1val) at (2.7,2.8) {$\color{sanae3}0.5$};
\draw[->] (a1val) edge (a1);
\node (na1) at (3.2,3.6) {$\neg B$};
\node (na1val) at (3.2,2.8) {$\color{sanae3}0.5$};
\draw[->] (na1val) edge (na1);
\draw (or1.input 1) -- ++ (down: 0.25) -| (and1) node[pos=-0.2, above right, color=sanae5] {$0.40$};
\draw (or1.input 2) -- ++ (down: 0.25) -| (and2) node[pos=-0.5, above right, color=sanae5] {$0.20$};
\draw (and1.input 1) -- ++ (down: 0.25) -| (c);
\draw (and1.input 2) -- (or2) node[pos=0.4, right, color=sanae5] {$1.0$};
\draw (and2.input 1) -- ++ (down: 0.25) -| (nc);
\draw (and2.input 2) -- (or3) node[pos=0.4, right, color=sanae5] {$0.5$};
\draw (or2.input 1) -- ++ (down: 0.25) -| (a1);
\draw (or2.input 2) edge (na1);
\draw (or3.input 2) -- ++ (down: 0.25) -| (na1);
\end{tikzpicture}
}
\end{subfigure}
\hspace{1em}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\scalebox{.55}{
\begin{tikzpicture}[circuit logic US]
\node (output) at (4.25, 8.0) {};
\node (or1) [or gate, inputs=nn, rotate=90, scale=0.9] at (4.25,7) {};
\node (output) at (4.25, 8.0) {};
\draw (or1) -- (output) node[pos=0.2, above right, color=sanae5] {$0.38$};
\draw (or1.input 1) -- ++ (down: 0.25) -- ++ (down: 1.0) -- ++ (down: 0.0) -- (3, 5.48) -- (3, 4.98) node[pos=0.1, above right, color=sanae5] {$0.03$};
\draw (or1.input 2) -- ++ (down: 0.25) -| (and2) node[pos=0.2, above right, color=sanae5] {$0.35$};
\node (and1) [and gate, inputs=nn, rotate=90, scale=0.9] at (3,4.6) {};
\node (or2) [or gate, inputs=nn, rotate=90, scale=0.9] at (2.6,3.6) {};
\node (c) at (3.09,3.6) {$C$};
\node (nc) at (4.8,3.7) {$\neg C$};
\node (cval) at (3.09,2.8) {$\color{sanae1}0.2$};
\draw[->] (cval) edge (c);
\node (ncval) at (4.8,2.9) {$\color{sanae1}0.8$};
\draw[->] (ncval) edge (nc);
\draw (and1.input 2) edge (c);
\draw (and1.input 1) -- ++ (down: 0.05) -| (or2) node[pos=0.4, left, color=sanae5] {$0.15$};
\node (and2) [and gate, inputs=nn, rotate=90, scale=0.9] at (5.7,5.8) {};
\node (or3) [or gate, inputs=nn, rotate=90, scale=0.9] at (4.7,4.6) {};
\draw (or3.input 1) -- + (down: 0.25) -| (c);
\draw (or3.input 2) edge (nc);
\node (or4) [or gate, inputs=nnn, rotate=90, scale=0.9] at (5.8,3.6) {};
\draw (and2.input 1) -- ++ (down: 0.25) -| (or3) node[pos=0.4, above right, color=sanae5] {$1$};
\draw (and2.input 2) -- (or4) node[pos=0.4, above right, color=sanae5] {$0.35$};
\node (and3) [and gate, inputs=nn, rotate=90, scale=0.9] at (5.8,1.4) {};
\draw (or4.input 2) -- (and3) node[pos=0.4, above right, color=sanae5] {$0.35$};
\node (b) at (4.5, 0) {$B$};
\node (nb) at (5.9,0) {$\neg B$};
\node (bval) at (4.5,-0.8) {$\color{sanae3}0.5$};
\draw[->] (bval) edge (b);
\node (nbval) at (5.9,-0.8) {$\color{sanae3}0.5$};
\draw[->] (nbval) edge (nb);
\node (a) at (2.42, 0) {$A$};
\node (na) at (3.0,0) {$\neg A$};
\node (aval) at (2.42,-0.8) {$\color{gray}0.3$};
\draw[->] (aval) edge (a);
\node (naval) at (3.0,-0.8) {$\color{gray}0.7$};
\draw[->] (naval) edge (na);
\draw (and3.input 2) edge (nb);
\draw (and3.input 1) -- ++ (down: 0.25) -- (2.62, 0.8) (na);
\node (and4) [and gate, inputs=nn, rotate=90, scale=0.9] at (2.52,2.6) {};
\node (and5) [and gate, inputs=nn, rotate=90, scale=0.9] at (3.6,2.6) {};
\draw (or2.input 1) -- (and4) node[pos=0.6, left, color=sanae5] {$0.5$};
\draw (or2.input 2) -- ++ (down: 0.25) -| (and5) node[pos=0.4, above right, color=sanae5] {$0.3$};
\node (or4) [or gate, inputs=nn, rotate=90, scale=0.9] at (2.52,1.4) {};
\node (or5) [or gate, inputs=nn, rotate=90, scale=0.9] at (4.9,1.4) {};
\draw (or4.input 1) -- (a);
\draw (or4.input 2) -- ++ (down: 0.50) -| (na);
\draw (or5.input 1) -- ++ (down: 0.25) -| (b);
\draw (or5.input 2) -- ++ (down: 0.25) -- (5.9, 0.9) (nb);
\draw (and4.input 1) -- (2.43, 1.8) (or4) node[pos=0.4, left, color=sanae5] {$1$};
\draw (and4.input 2) -- ++ (down: 0.40) --++ (right: 1.40) --++ (down:0.9) --++ (right: 0.79)(b);
\draw (and5.input 1) -- (3.51, 1.0) -- (2.43, 1.0) ;
\draw (and5.input 2) -- ++ (down: 0.25) -| (or5) node[pos=0.4, above right, color=sanae5] {$1$};
\end{tikzpicture}
}
\end{subfigure}
\begin{subfigure}[b]{0.20\textwidth}
\centering
\includegraphics[scale=0.30, clip]{LOTP.pdf}
\vspace{2em}
\end{subfigure}
\caption{%
(Left) Example of two compatible constraint circuits parameterized by the outputs of a neural network.
To compute the probability of a circuit, we plug in the output of the neural network ${\bm{p}}_i$ and $1-{\bm{p}}_i$ for positive and negative literal $i$, respectively.
The computation proceeds bottom-up, taking products at AND gates and summations at OR gates, and the probability is accumulated at the root of the circuit.
(Right) The conjunction of the two constraint circuits and its probability, together with the law of total probability used to compute the probabilities required for the mutual information.
}
\label{fig:circuits_mi}
\end{figure*}
However, as discussed in \cref{sec:problem_statement}, training the neural network to satisfy
the local constraints can often be problematic: two \emph{dependent} constraints assumed
independent can often disagree on the value of their shared variables leading to opposing gradients.
If we are afforded more computational resources, we can start strengthening our approximation
by relaxing some of the independence assumptions made in our model.
\subsection{Deriving the Criterion}
The question then becomes, \emph{which independence assumptions to relax}.
We are, of course, interested in relaxing the independence assumptions that
have the most positive impact on the quality of the approximation.
Or, put differently, we are interested in relaxing the independence assumptions
for which we incur the greatest penalty by assuming otherwise dependent constraints
to be independent.
For each pair of constraints $\beta_i$ and $\beta_j$, for all $i \neq j$,
this corresponds to the Kullback-Leibler divergence of the product of their marginals
from their joint distribution, and is a measure of the modeling error we incur, in bits,
by assuming the independence of the two constraints
\begin{equation}\label{eqn:kl_div}
D_{\mathrm{KL}}\!\left(P_{(X,Y)} \Vert P_X \cdot P_Y\right)
\end{equation}
where $X$ and $Y$ are Bernoulli random variables, $X \sim P(\beta_i)$, $Y \sim P
(\beta_j)$, and $(X,Y) \sim P(\beta_i, \beta_j)$,
for
all $i,j$ such that $i \neq j$.
\cref{eqn:kl_div} equivalently corresponds to the \emph{mutual information} $I(X;Y)$
given by
\begin{equation}\label{eqn:mi}
I(X;Y) = \mathbb{E}_{(X,Y)}\!\left[ \log \frac{P_{(X,Y)}(X,Y)}{P_X(X)\cdot P_Y(Y)}\right],
\end{equation}
between the random variables $X$ and $Y$, or the measure of \emph{dependence} between
them.
Intuitively, mutual information captures the information shared between $X$ and $Y$:
it measures how much knowing one reduces the uncertainty about the other. When
they are independent, knowing one gives no information about the
other, and therefore the mutual information is $0$. At the other extreme, when one is a
deterministic function of the other, the mutual information is
maximized and equals their entropy.
Note that the expectations in both \cref{eqn:kl_div} and \cref{eqn:mi} are over
the joint distribution $P_{(X,Y)}$.
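
As a minimal illustration of \cref{eqn:mi}, the following Python snippet computes the mutual information of a pair of Bernoulli variables directly from their $2 \times 2$ joint distribution; the numerical values are arbitrary examples and reproduce the two extremes discussed above.
\begin{verbatim}
import math

def mutual_information(joint):
    """Mutual information (in bits) of two Bernoulli variables
    given their 2x2 joint distribution joint[x][y]."""
    p_x = [joint[0][0] + joint[0][1], joint[1][0] + joint[1][1]]
    p_y = [joint[0][0] + joint[1][0], joint[0][1] + joint[1][1]]
    mi = 0.0
    for x in (0, 1):
        for y in (0, 1):
            if joint[x][y] > 0.0:
                mi += joint[x][y] * math.log2(joint[x][y] / (p_x[x] * p_y[y]))
    return mi

# Independent variables: knowing one says nothing about the other.
print(mutual_information([[0.56, 0.24], [0.14, 0.06]]))  # 0.0 bits
# One variable determines the other: MI equals their entropy.
print(mutual_information([[0.7, 0.0], [0.0, 0.3]]))      # ~0.881 bits
\end{verbatim}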
We would be remiss, however, to dismiss the features learned by the
network, as they already encode some of the dependencies between the
constraints, affording us the ability to make stronger approximations.
That is, we are interested in the mutual information between all pairs
of constraints $\beta_i, \beta_j$ \emph{conditioned} on the neural
network's features.
Let $\mathcal{D}$ be our data distribution, and let $Z$ be a random
variable distributed according to $\mathcal{D}$. We are interested in computing
\begin{align}\label{eqn:conditional_mi}
&I(X;Y\mid Z) = \mathbb{E}_{Z}\!\left[\mathbb{E}_{(X,Y)\mid Z}\!\left[ \log \frac{P(x,y \mid \ensuremath{\vv{z}}\xspace)}{P(x \mid \ensuremath{\vv{z}}\xspace)\cdot P(y \mid \ensuremath{\vv{z}}\xspace)}\right]\right]\\
&=\mathbb{E}_{Z}\!\left[\sum_{x = 0}^{1}\sum_{y = 0}^{1} P(x, y \mid \ensuremath{\vv{z}}\xspace) \log \frac{P(x,y \mid \ensuremath{\vv{z}}\xspace)}{P(x \mid \ensuremath{\vv{z}}\xspace)\cdot P(y \mid \ensuremath{\vv{z}}\xspace)}\right],
\end{align}
where, as is commonplace, we estimate the outer expectation using
Monte Carlo sampling from the data distribution.
Perhaps rather surprisingly, notwithstanding the expectation w.r.t.\
the data distribution, the quantity in \cref{eqn:conditional_mi} is
hard to compute.
This is not only due to the intractability of the probability, which
as we have already stated is \#P-hard in general, but also due to the
hardness of conjunction, in general.
Loosely speaking, one could have constraints $\beta_i$ and $\beta_j$
for which computing the probabilities $P(\beta_i)$ and $P(\beta_j)$
is tractable, yet computing $P(\alpha)$, where once again $\alpha =
\beta_i \land \beta_j$, is hard \citep{Yujia2016, Khosravi19}.
Intuitively, the hardness of conjunction comes from finding the intersection
of the satisfying assignments without enumeration.
We formalize this in \cref{sec:tractable_computation}.
\begin{figure*}[t]
\scalebox{0.95}
{
\begin{minipage}[t]{\dimexpr.5\textwidth-.5\columnsep}
\begin{algorithm}[H]
\begin{algorithmic}
\State \hskip-1em\textbf{Input}: Two compatible constraint circuits $\beta_1$ and $\beta_2$
\State \hskip-1em\textbf{Output}: Mutual Information of $\beta_1$ and $\beta_2$ given features
\\\textcolor{ourspecialtextcolor}{// Conjoin $\beta_1$ and $\beta_2$}
\State $\alpha = \beta_1 \land \beta_2$
\\\textcolor{ourspecialtextcolor}{// Compute the probability of $\alpha, \beta_1$ and $\beta_2$ c.f.\ \cref{fig:circuits_mi}}
\State $p_\alpha, p_{\beta_1}, p_{\beta_2}$ = $\text{prob}(\alpha)$, $\text{prob}(\beta_1)$, $\text{prob}(\beta_2)$
\\\textcolor{ourspecialtextcolor}{// Calculate marginals and joint using total probability}
\State $p_{\rvar{X}} = [1 - p_{\beta_1}, p_{\beta_1}]$, $p_{\rvar{Y}} = [1 - p_{\beta_2}, p_{\beta_2}]$
\State $p_{(\rvar{X}, \rvar{Y})} = [[1 - p_{\beta_1} - p_{\beta_2} + p_{\alpha}, p_{\beta_2} - p_\alpha], [p_{\beta_1} - p_\alpha, p_\alpha]]$
\State $\text{mi} = 0$
\For{$x$, $y$ \textbf{in} $\text{product}([0,1], [0,1])$}
\State $\text{mi} \mathrel{+}= p_{(\rvar{X}, \rvar{Y})}[x][y] \times \log(\frac{p_{(\rvar{X}, \rvar{Y})}[x][y]}{p_{\rvar{X}}[x] \times p_{\rvar{Y}}[y]})$
\EndFor
\vspace{-0.8mm}
\State \textbf{return} $\text{mi}$
\end{algorithmic}
\caption{$\MI(\beta_1;\beta_2 \mid f(\ensuremath{\vv{x}}\xspace))$}
\label{algo:MI}
\end{algorithm}
\end{minipage}}\hfill
\scalebox{0.95}
{
\begin{minipage}[t]{\dimexpr.5\textwidth-.5\columnsep}
\begin{algorithm}[H]
\begin{algorithmic}
\State \hskip-1em\textbf{Input}: Current set of constraint circuits
\State \hskip-1em\textbf{Output}: \emph{Strengthened} set of constraints
\State $\text{pwmi} = [\ ]$
\For{$\beta_i$, $\beta_j$ \textbf{in} $\text{combinations}(\text{constraints}, 2)$}
\State \textbf{if} $\text{disjoint}(\ensuremath{\mathsf{vars}}(\beta_i), \ensuremath{\mathsf{vars}}(\beta_j))$ \textbf{then} \textbf{continue}
\\\textcolor{ourspecialtextcolor}{// Keep track of constraints with mutual information}
\State $\text{pwmi}.\text{append}((\MI(\beta_i;\beta_j \mid f(\ensuremath{\vv{x}}\xspace)), \beta_i, \beta_j))$
\EndFor
\\\textcolor{ourspecialtextcolor}{// Consider only the top $\kappa$ pairs of constraints}
\State $\text{to\_merge} = \text{sorted}(\text{pwmi, reverse=True})[:\kappa]$
\For{$\text{mi}$, $\beta_i$, $\beta_j$ \textbf{in} $\text{to\_merge}$}
\State constraints.remove($\beta_i$, $\beta_j$)
\State constraints.append($\beta_i \land \beta_j$)
\EndFor
\State \textbf{return} constraints
\end{algorithmic}
\caption{$\SemStr(\text{constraints}, \kappa)$}
\label{algo:Strengthen}
\end{algorithm}
\end{minipage}
}
\end{figure*}
\subsection{The Semantic Strengthening Algorithm}
For the purposes of this section, we will assume we can tractably
compute the conditional mutual information in \cref{eqn:conditional_mi},
and proceed with giving our Semantic Strengthening algorithm.
The idea is, simply put, to use the neural network to guide the process
of relaxing the independence assumptions introduced between the constraints.
Specifically, we are given an interval, $\eta$, a constraint budget, $\kappa$,
and a computational budget $\tau$.
We initiate the process of training the neural network, interrupting training
every $\eta$ epochs, computing the conditional mutual information between
pairs of constraints,
considering only those pairs sharing at least one variable (e.g.
the two constraints asserting the uniqueness of the first and last row, respectively,
do not share variables, are therefore independent, and by definition have a mutual
information of $0$, so we need not consider joining them, \emph{yet}).
Subsequently, we identify the $\kappa$ pairs of constraints with the highest pairwise
conditional mutual information, and that therefore, have the most detrimental
effect on the quality of our approximation.
We then detect the connected components among these pairs of constraints, and conjoin them:
if $\beta_1$ and $\beta_2$ should be made dependent, and $\beta_2$ and $\beta_3$
should be made dependent, then $\beta_1$, $\beta_2$ and $\beta_3$ are all made dependent.
We remove the old constraints from our set of constraints, add the newly conjoined constraints,
and resume training.
This process is repeated every $\eta$ epochs until we have exhausted our computational budget $\tau$.
Our full algorithm is shown in \cref{algo:Strengthen}.%
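
The sketch below summarizes one such strengthening step in Python. The helpers \texttt{conditional\_mi}, \texttt{conjoin} and \texttt{vars\_of} are placeholders for the circuit-based routines developed in \cref{sec:tractable_computation}; the union-find bookkeeping implements the transitive merging of overlapping pairs described above.
\begin{verbatim}
from itertools import combinations

def strengthen(constraints, features, kappa,
               conditional_mi, conjoin, vars_of):
    """One semantic-strengthening step: conjoin the kappa most dependent
    pairs of constraints, propagating dependence transitively.  The last
    three arguments are placeholders for the circuit-based routines
    (vars_of returns the set of variables of a constraint)."""
    scored = []
    for b1, b2 in combinations(constraints, 2):
        if vars_of(b1).isdisjoint(vars_of(b2)):
            continue  # no shared variables: mutual information is zero
        scored.append((conditional_mi(b1, b2, features), b1, b2))
    top = sorted(scored, key=lambda t: t[0], reverse=True)[:kappa]

    # Union-find over constraints: if (b1, b2) and (b2, b3) are both
    # selected, then b1, b2 and b3 end up in one group.
    parent = {id(b): b for b in constraints}
    def find(b):
        while parent[id(b)] is not b:
            parent[id(b)] = parent[id(parent[id(b)])]
            b = parent[id(b)]
        return b
    for _, b1, b2 in top:
        r1, r2 = find(b1), find(b2)
        if r1 is not r2:
            parent[id(r1)] = r2

    groups = {}
    for b in constraints:
        groups.setdefault(id(find(b)), []).append(b)
    merged = []
    for group in groups.values():
        circuit = group[0]
        for b in group[1:]:
            circuit = conjoin(circuit, b)  # replace group by its conjunction
        merged.append(circuit)
    return merged
\end{verbatim}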
\subsection{Tractably Computing the Criterion}\label{sec:tractable_computation}
Unlike previous approaches~\citep{Chen2018, Mesner2019ConditionalMI, Tezuka2021InformationBA}, we do not need to resort to variational approximations or neural estimation to
compute the mutual information, and instead appeal to the language of \emph{tractable
circuits}.
That is, we appeal to knowledge compilation techniques\textemdash a class of methods
that transform, or \emph{compile}, a logical theory into a target form, \emph{tractable
circuits}, which represent functions as parameterized computational graphs.
By imposing certain structural properties on these computational graphs, we enable
the tractable computation of certain classes of probabilistic queries over the
encoded functions.
As such, circuits provide us with a language for building and reasoning about tractable
representations.
\paragraph{Logical Circuits}
More formally, a \emph{logical circuit} is a directed, acyclic computational
graph representing a logical formula.
Each node $n$ in the DAG encodes a logical sub-formula, denoted $[n]$.
Each inner node in the graph is either an AND or an OR gate, and each leaf
node encodes a Boolean literal ($Y$ or $\lnot Y$).
We denote by $\ensuremath{\mathsf{in}}(n)$ the set of $n$'s children, that is, the operands of
its logical gate.
\paragraph{Structural Properties} As already alluded to, circuits enable
the tractable computation of certain classes of queries over encoded functions
provided that a set of structural properties is enforced. We explicate such
properties below.
A circuit is \emph{decomposable} if the inputs of every AND gate depend on
disjoint sets of variables i.e.\ for $\alpha = \beta \land \gamma$, $\ensuremath{\mathsf{vars}}
(\beta) \cap \ensuremath{\mathsf{vars}}(\gamma) = \varnothing$.
Intuitively, decomposable AND nodes encode local factorizations over
variables of the function. For simplicity, we assume
that decomposable AND gates always have two inputs, a condition that
can be enforced on any circuit in exchange for a polynomial increase
in its size~\citep{vergari2015simplifying,peharz2020einsum}.
A second useful property is \emph{smoothness}.
A circuit is \emph{smooth} if the children
of every OR gate depend on the same set of variables i.e. for
$\alpha = \bigvee_i \beta_i$, we have that $\ensuremath{\mathsf{vars}}(\beta_i) =
\ensuremath{\mathsf{vars}}(\beta_j)\ \forall i,j$. Decomposability and smoothness
are together necessary and sufficient for tractable integration
over arbitrary sets of variables in a single pass, as they allow
larger integrals to decompose into smaller ones~\citep{choi2020pc}.
Furthermore, a circuit is said to be \emph{deterministic} if,
for any input, at most one child of every OR node has a non-zero
output i.e. for $\alpha = \bigvee_i \beta_i$, we have that $ \beta_i
\land \beta_j = \bot$ for all $i \neq j$.
Similar to decomposability, determinism induces a recursive
partitioning of the function, but over the support, i.e.\
satisfying assignments, of the function, rather than the
variables.
Determinism, taken together with smoothness and decomposability,
allows us to tractably compute the probability of a constraint
~\citep{darwiche02}.%
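
As a minimal sketch of this bottom-up evaluation (cf.\ \cref{fig:circuits_mi}), the snippet below computes the probability of a circuit represented as nested tuples; this toy encoding is purely illustrative and is not the representation used by any particular compiler.
\begin{verbatim}
def circuit_probability(node, leaf_prob):
    """Bottom-up pass over a smooth, decomposable and deterministic circuit:
    leaves look up the probability of their literal, AND gates multiply
    their children, OR gates sum them."""
    kind = node[0]
    if kind == 'lit':
        return leaf_prob[node[1]]   # p_i for literal i, 1 - p_i for -i
    values = [circuit_probability(child, leaf_prob) for child in node[1]]
    if kind == 'and':
        result = 1.0
        for v in values:
            result *= v
        return result
    return sum(values)              # 'or'

# Example: (A and B) or (not A and not B) with P(A) = 0.3, P(B) = 0.5.
p = {1: 0.3, -1: 0.7, 2: 0.5, -2: 0.5}
circuit = ('or', [('and', [('lit', 1), ('lit', 2)]),
                  ('and', [('lit', -1), ('lit', -2)])])
print(circuit_probability(circuit, p))   # 0.3*0.5 + 0.7*0.5 = 0.5
\end{verbatim}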
What remains is to show that we can tractably conjoin two constraints.
Conjoining two decomposable and deterministic circuits is NP-hard if we
wish the result to also be decomposable and deterministic, which as we
mentioned is a requirement for tractable probability
computation~\citep{darwiche02, Yujia2016, Khosravi19}.
To guarantee the tractability of the probability computation of the
conjoined constraint, we will, therefore, need to introduce one last
structural property, namely the notion of \emph{compatibility} between
two circuits~\citep{VergariNeurIPS21}.
Two circuits, $c_1$ and $c_2$ over variables $\rvars{Y}$ are said to be compatible
if (1) they are smooth and decomposable, and (2) any pair of AND nodes, $n
\in c_1$ and $m \in c_2$ with the same scope over $\rvars{Y}$ can be rearranged to be
mutually compatible and decompose in the same way i.e.\ $\ensuremath{\mathsf{vars}}(n) = \ensuremath{\mathsf{vars}}(m)
\implies \ensuremath{\mathsf{vars}}(n_i) = \ensuremath{\mathsf{vars}}(m_i)$, and $n_i$ and $m_i$ are compatible, for some
arrangement of the inputs $n_i$ and $m_i$ of $n$ and $m$.
A sufficient condition for compatibility is that both $c_1$ and $c_2$ share the
exact same hierarchical scope partitioning~\citep{VergariNeurIPS21},
sometimes called a vtree or variable ordering~\citep{choi2020pc,pipatsrisawat2008new}.
Intuitively, the two circuits should share the order in which they factorize the
function over its variables.
\cref{fig:circuits_mi} shows an example of smooth, decomposable, deterministic and compatible circuits.
At a high level, there exist off-the-shelf compilers utilizing SAT solvers,
essentially through case analysis, to compile a logical formula into a tractable
logical circuit.
We are agnostic to the exact flavor of circuit so long as the properties outlined
herein are respected.
In our experiments, we use PySDD\footnote{https://github.com/wannesm/PySDD} --
a Python SDD compiler~\citep{darwiche11,choi2013dynamic}.
Now that we have shown that we can tractably compute the probabilities
$P(\beta_1), P(\beta_2)$ and $P(\alpha)$, we can utilize
the law of total probability (c.f.\ \cref{fig:circuits_mi}) to compute
the remaining probabilities, and therefore, the mutual information.
Our algorithm is shown in \cref{algo:MI}.
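
For concreteness, a direct Python transcription of \cref{algo:MI} is given below; \texttt{p\_b1}, \texttt{p\_b2} and \texttt{p\_and} stand for the circuit probabilities $P(\beta_1)$, $P(\beta_2)$ and $P(\beta_1 \land \beta_2)$, and the numbers in the example call are merely illustrative.
\begin{verbatim}
import math

def pairwise_mi(p_b1, p_b2, p_and):
    """Mutual information (in bits) of two constraints, given the circuit
    probabilities P(beta_1), P(beta_2) and P(beta_1 AND beta_2)."""
    # Law of total probability: reconstruct the 2x2 joint from three numbers.
    joint = [[1.0 - p_b1 - p_b2 + p_and, p_b2 - p_and],
             [p_b1 - p_and, p_and]]
    p_x, p_y = [1.0 - p_b1, p_b1], [1.0 - p_b2, p_b2]
    mi = 0.0
    for x in (0, 1):
        for y in (0, 1):
            if joint[x][y] > 0.0:
                mi += joint[x][y] * math.log2(joint[x][y] / (p_x[x] * p_y[y]))
    return mi

print(pairwise_mi(0.38, 0.50, 0.35))  # positive: the constraints are dependent
\end{verbatim}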
\section{Related Work}\label{sec:related_work}
There has been increasing interest in combining neural learning with symbolic reasoning,
a class of methods that has been termed \emph{neuro-symbolic} methods, studying how to best combine both paradigms in a bid to accentuate their positives and mitigate their negatives.
The focus of many such approaches has therefore been on making probabilistic reasoning tractable through first-order approximations, and differentiable, through reducing logical formulas into arithmetic objectives, replacing logical operators with their fuzzy t-norms, and implications with inequalities~\citep{kimmig2012short,rocktaschel2015,fischer19a, pryor22}.
\citet{diligenti2017} and \citet{donadello2017} use first-order logic to specify constraints on outputs of a neural network. They employ fuzzy logic to reduce logical formulas into differential, arithmetic objectives denoting the extent to which neural network outputs violate the constraints, thereby supporting end-to-end learning under constraints. More recently, \citet{Xu18} introduced semantic loss, which circumvents the shortcomings of fuzzy approaches, while supporting end-to-end learning under constraints. More precisely, \emph{fuzzy reasoning} is replaced with \emph{exact probabilistic reasoning}, by compiling logical formulae into structures supporting efficient probabilistic queries.
\cite{LiuAAAI23} use semantic loss to simultaneously
learn a neural network and extract generalized logic rules. Different from other neural-symbolic methods that require background knowledge and candidate logical rules, they aim to induce task semantics with minimal priors.
Another class of neuro-symbolic approaches have their roots in logic programming. DeepProbLog~\citep{manhaeve2018} extends ProbLog, a probabilistic logic programming language, with the capacity to process neural predicates, whereby the network's outputs are construed as the probabilities of the corresponding predicates. This simple idea retains all essential components of ProbLog: the semantics, inference mechanism, and the implementation. \cite{KR2021-45} attempts to scale DeepProbLog by considering only the top-$k$ proof paths. In a similar vein, \citet{dai2018} combine domain knowledge specified as purely logical Prolog rules with the output of neural networks, dealing with the network's uncertainty through revising the hypothesis by iteratively replacing the output of the neural network with anonymous variables until a consistent hypothesis can be formed. \citet{bosnjak2017programming} present a framework combining prior procedural knowledge, as a Forth program, with neural functions learned through data. The resulting neural programs are consistent with specified prior knowledge and optimized with respect to data.
\begin{figure*}[th!]
\centering
\includegraphics[scale=0.175]{wc.png}
\hfill
\includegraphics[scale=0.175]{wc_gt.png}
\hfill
\includegraphics[scale=0.8]{pm.png}
\hfill
\includegraphics[scale=0.8]{pm_gt.png}
\caption{An example of a Warcraft terrain map (left) and an MNIST grid (right), along with the corresponding ground-truth labels.
}
\label{fig:example_tasks}
\end{figure*}
There has recently been a plethora of approaches ensuring consistency by embedding the constraints as predictive layers, including semantic probabilistic layers (SPLs) \citep{Ahmed2022}, MultiplexNet~\citep{Hoernle2021MultiplexNetTF} and HMCCN~\citep{giunchiglia2020coherent}.
Much like semantic loss \citep{Xu18}, SPLs maintain sound probabilistic semantics, and while they display impressive scalability to real-world problems, they might struggle with encoding harder constraints.
SIMPLE \citep{AhmedICLR2023} proposes an SPL for the $k$-subset distribution, to be used as a latent space to induce a distribution over features, for which they derive a low-bias, low-variance gradient estimator.
MultiplexNet is able to encode only constraints in disjunctive normal form, which is problematic for generality and efficiency as neuro-symbolic tasks often involve an intractably large number of clauses.
HMCCN encodes label dependencies as a fuzzy relaxation and is the current state-of-the-art model for hierarchical multi-label classification~\citep{giunchiglia2020coherent}, but, similar to its recent extension~\citep{giunchiglia2021multi}, is restricted to a certain family of constraints.
\cite{Daniele2022RefiningNN} discusses how to enforce consistency for fuzzy relaxations with general formulas.
\section{Experimental Evaluation}\label{sec:experiments}
We evaluated our approach, semantic strengthening, on several neuro-symbolic tasks,
namely Warcraft minimum-cost path finding, minimum-cost perfect matching of MNIST
digits, as well as the task of training neural networks to solve Sudoku puzzles.
The challenge with all of the above tasks, when looked at through a neuro-symbolic
lens, is the vastness of the state space: as previously mentioned, there are $6.6
\times 10^{21}$ valid $9 \times 9$ Sudokus, and the number of valid matchings,
or paths in a grid grows doubly-exponentially in the grid size\textemdash simply
too much to enumerate.
Even approaches like semantic loss which rely on circuit approaches to exploit the
local structure in the problem, essentially through caching solutions to repeated
subproblems, do not scale to large instances of these tasks.
As has been established in previous work~\citep{Xu18,Ahmed22nesyentropy,Ahmed2022},
label-level accuracy, or the accuracy of predicting individual labels is very often
a poor indication of the performance of the neural network, and is often uninteresting
in neuro-symbolic settings, where we are rather more interested in the accuracy of
our predicted structured object \emph{exactly} matching the ground truth, e.g., \emph{is the prediction a shortest path?}, a metric which we denote ``Exact'' in our
experiments, as well as the accuracy of predicting objects that are \emph{consistent}
with the constraint, e.g., \emph{is the prediction a valid path?}, a metric which we
denote ``Consistent'' in our experiments.
Note that, unlike the other two tasks, for the case of Sudoku,
these measures are one and the same: a valid Sudoku has a single \emph{unique} solution.
In all of our experiments, we compare against two baselines: a neural network, whose architecture
we specify in the corresponding experimental section, and the same neural network augmented
with product t-norm, where we assume the independence of constraints throughout training.
\paragraph{Warcraft Shortest Path}
We evaluate our approach, semantic strengthening, on the challenging task of predicting
the minimum-cost path in a weighted grid imposed over Warcraft terrain maps.
Following \citet{Pogancic2020}, our training set consists of $10,000$
terrain maps curated using the Warcraft II tileset.
Each map encodes an underlying grid of dimension $12 \times 12$, where each vertex is
assigned a cost depending on the type of terrain it represents (e.g. earth has lower cost than water).
The shortest (minimum cost) path between the top left and bottom right vertices is encoded as
an indicator matrix, and serves as label.
\cref{fig:example_tasks} shows an example input presented to the network and the input with an annotated shortest path as a groundtruth.
Presented with an image of a terrain map, a convolutional neural network\textemdash similar to \citet{Pogancic2020}, we use ResNet18 \citep{He2016}\textemdash outputs a $12 \times 12$ binary matrix indicating a set of vertices.
Note that the minimum-cost path is not unique: there may exist several paths sharing the same minimum cost, all of which are considered to be
correct by our metrics.
Table~\ref{tab:sp} shows our results.
\begin{table}[h]
\centering
\caption {Warcraft shortest path prediction results}
{
\begin{tabular}{@{}l l l l @{}}
Test accuracy \% & & Exact & Consistent\\
\midrule \midrule
ResNet-18& & $44.80$ & $56.90$\\
\midrule
+ Product t-norm& & $50.40$ & $63.20$\\
+ Semantic Strengthening& & $\mathbf{61.20}$& $\mathbf{72.70}$\\
\end{tabular}
}
\label{tab:sp}
\end{table}
We observe that incorporating constraints into learning improves the accuracy of predicting
the optimal path from $44.80\%$ to $50.40\%$, and the accuracy of predicting a \emph{valid}
path from $56.90\%$ to $63.20\%$, as denoted by the ``Exact'' and ``Consistent'' metrics,
respectively. Furthermore, and perhaps more interestingly, we see that our approach,
\emph{semantic strengthening}, greatly improves upon the baseline, as well as product t-norm
improving the accuracy of predicting the optimal path from $44.80\%$ and $50.40\%$ to $61.20\%$,
while greatly improving the accuracy of predicting a valid path from $56.90\%$ and $63.20\%$ to
$72.70\%$.
\paragraph{MNIST Perfect Matching}
Our next task consists in predicting a minimum-cost perfect-matching of a set
of $k^2$ MNIST digits arranged in a $k \times k$ grid, where diagonal matchings
are not permitted.
We consider the problem for the instance when $k = 10$.
Similar to \citet{Pogancic2020}, we generate the ground truth
by considering the underlying $k \times k$ grid graph, and solving
a min-cost perfect-matching problem using Blossom V \citep{Kolmogorov2009},
where the edge weights are given simply by reading the two vertex digits as
a two-digit number, reading downwards for vertical edges, and left to right
for horizontal edges.
The minimum-cost perfect matching label is then encoded
as an indicator vector for the subset of the selected edges.
Similar to the Warcraft experiment, the grid image is input to a (pretrained)
ResNet-18, which simply outputs a set of predicted edges.
\cref{tab:pm} shows our results.
\begin{table}[h]
\centering
\caption {Perfect Matching prediction test results}
{
\begin{tabular}{@{}l l l l @{}}
Test accuracy \% & & Exact & Consistent\\
\midrule \midrule
ResNet-18& & $9.30$ & $10.00$\\
\midrule
+ Product t-norm& & $12.70$ & $12.90$\\
+ Semantic Strengthening& & $\mathbf{15.50}$& $\mathbf{18.40}$\\
\end{tabular}
}
\label{tab:pm}
\end{table}
Similar to the Warcraft experiment, we observe that incorporating constraints into learning
improves the accuracy of predicting the optimal perfect matching from $9.30\%$ to $12.70\%$, and the
accuracy of predicting a \emph{valid} perfect matching from $10.00\%$ to $12.90\%$, as denoted by the
``Exact'' and ``Consistent'' metrics, respectively. Furthermore,
we see that our approach, \emph{semantic strengthening}, greatly improves upon the baseline,
as well as product t-norm improving the accuracy of predicting the optimal perfect matching from $9.30\%$
and $12.70\%$ to $15.50\%$, while greatly improving the accuracy of predicting a valid perfect matching
from $10.00\%$ and $12.90\%$ to $18.40\%$.
\paragraph{Sudoku}
Lastly, we consider the task of predicting a solution to a given Sudoku puzzle.
Here the task is, given a $9 \times 9$ partially-filled grid of numbers,
to fill in the remaining cells of the grid such that the entries in each row, column,
and $3 \times 3$ square are unique i.e.\ each of the numbers from $1$ through $9$
appears exactly once.
We use the dataset provided by \citet{Wang19}, consisting of 10K Sudoku puzzles,
split into 9K training examples, and 1K test samples, all puzzles having 10 missing entries.
As our baseline, we follow \citet{Wang19} in using a convolutional neural network
modeled on that of \citet{Park2018}.
The input to the neural network is given as a bit representation
of the initial Sudoku board, along with a mask representing the bits
to be learned, i.e.\ the bits in the empty Sudoku cells.
The network interprets the bit inputs as 9 input image channels (one for each square
in the board) and uses a sequence of 10 convolutional layers (each with 512 3$\times$3
filters) to output the solution, with the mask input as a set of additional image channels
in the same format as the board.
\cref{tab:sudoku} shows our results.
\begin{table}[h]
\centering
\caption {Sudoku test results}
{
\begin{tabular}{@{}l l l l @{}}
Test accuracy \% & & Exact & Consistent\\
\midrule \midrule
10-Layer ConvNet& & $16.80$ & $16.80$\\
\midrule
+ Product t-norm& & $22.10$ & $22.10$\\
+ Semantic Strengthening& & $\mathbf{28.00}$& $\mathbf{28.00}$\\
\end{tabular}
}
\label{tab:sudoku}
\end{table}
In line with our previous experiments, we observe that incorporating
constraints into learning improves the accuracy of predicting correct
Sudoku solutions, the ``Exact'' metric from $16.80\%$ to $22.10\%$.
Furthermore, we see that our approach, \emph{semantic strengthening},
greatly improves upon the baseline, as well as product t-norm, improving
the accuracy from $16.80\%$ and $22.10\%$ to $28.00\%$.
\section{Conclusion}\label{sec:conclusion}
In conclusion, we proposed semantic strengthening, a tractable approach to neuro-symbolic learning,
that remains faithful to the probabilistic semantics of the distribution defined by the neural network
on a given constraint.
Semantic strengthening starts by assuming the independence of the clauses in a given constraint, thereby
reducing the, often intractable, problem of satisfying a global constraint, to the tractable problem of
satisfying individual local constraints.
It uses a principled criterion, the conditional mutual information, to determine and relax the unjustified
independence assumptions that are most detrimental to the quality of our approximation.
We have shown that we are able to greatly improve upon the baselines on three challenging tasks, where
semantic strengthening was able to increase the \emph{accuracy} and \emph{consistency} of the model's predictions.
\subsubsection*{Acknowledgements}
KA would like to thank Arthur Choi and Yoojung Choi for helpful discussions throughout the project.
This work was funded in part by the DARPA Perceptually-enabled Task Guidance (PTG) Program under
contract number HR00112220005, and NSF grants \#IIS-1943641, \#IIS-1956441, and \#CCF-1837129.
\vfill
\end{document}
|
{
"arxiv_id": "2302.14116",
"language": "en",
"timestamp": "2023-03-01T02:01:29",
"url": "https://arxiv.org/abs/2302.14116",
"yymm": "2302"
} | \section{\label{sec:I_intro}Introduction}
Picosecond ultrasonics subsumes the generation and detection of laser-induced nanoscopic strain pulses with GHz or even THz frequency components \cite{grah1989, mari1998}. Most studies of picosecond ultrasound have been and will be conducted with all-optical setups, which allow tackling both fundamental and applied research questions in light-matter interaction, condensed matter and material science \cite{tas1994, mari1998, tas1998, anto2006, mats2015b}. Picosecond ultrasonics experiments exploit that the energy deposition of an optical femtosecond pump-pulse creates a stress profile within an absorbing transducer layer. The subpicosecond rise of the stress gradient introduces a source term to the elastic wave equation, which is used to rationalize the occurrence of highly localized strain pulses \cite{thom1986, mats2015b, ruel2015}.
Many aspects of the strain propagation can be measured by reflection, transmission or scattering of optical probe pulses at interfaces and in the bulk. In the most straightforward experiments, strain-induced changes of the reflectivity
witness the arrival of a strain pulse in the layer and the timing can be used to infer layer thicknesses or elastic properties \cite{grah1989, wrig1991b, anto2006,lejm2014}. In transparent materials, the detection process has been interpreted as time-domain Brillouin scattering (TDBS), which is observed as oscillations of the optical reflectivity at a frequency that is proportional to the sound velocity \cite{guse2014,boja2013,guse2018}. Although the development of all-optical probing schemes with ever-growing sensitivity has enabled studies of a multitude of picosecond ultrasonics effects such as strain pulses in heterostructures \cite{wrig1991b,wrig1995b}, shear waves \cite{peze2016}, acoustic solitons \cite{van2010} or even imaging of elastic properties \cite{daly2004, pere2020}, the reported strain signatures mostly remain on a qualitative level.
In ultrafast x-ray diffraction (UXRD) experiments the optical probe pulses are replaced by ultrashort hard x-ray pulses that are diffracted from the crystal structure in motion \cite{barg2006,lind2017, trig2021}. Similar to optical reflectivity, x-ray diffraction can detect propagating strain pulses in two ways: Strain in a material is heralded by Bragg peak shifts which encode the amplitude, sign and shape of the elastic wave \cite{lind2000,lars2002,haya2006}. Secondly, TDBS of x-rays \cite{boja2013} from the sum of a reciprocal lattice vector and a phonon wave vector can probe the phonon population and the elastic properties even for wave vectors close to the Brillouin zone boundary \cite{heni2016,dorn2019,shay2022}. For such studies the high degree of collimation of hard x-rays from synchrotron and free-electron laser (FEL) sources is highly beneficial. This is also true for the time-resolved detection of x-rays scattered from incoherent phonons by thermal diffuse scattering, which yields a time- and wavevector-resolved picture of the phonon population \cite{trig2010, trig2013, zhu2015, wall2018}.
X-rays from laser-based plasma sources, in contrast, are sufficient to support the conceptually simple picosecond ultrasonics experiments that investigate Bragg peak shifts \cite{rose1999, barg2004, nico2011, quir2012, zeus2019}. These setups are particularly useful for broad diffraction peaks associated with thin films of imperfect crystallinity due to the relatively large divergence and bandwidth of the provided x-rays. For ultrathin layers, crystals consisting of light atoms, nanoparticles or advanced concepts using resonant diffraction, however, FELs are ideal as they pair high photon flux and sub-picosecond time resolution \cite{bost2016, abel2017, scho2019}. Synchrotron radiation sources typically lack temporal resolution, which can be improved via fs-pulse slicing schemes \cite{scho2000, beau2007}, x-ray streak camera setups \cite{lind2000, lars2002, enqu2010}, special short-pulse filling patterns \cite{jank2013,ross2021} or picosecond Bragg switches \cite{gaal2014, sand2019}, that all come at the cost of a reduction of the photon flux by multiple orders of magnitude.
In this article, we highlight the conceptual advantages of picosecond ultrasonics with x-rays (PUX) experiments that observe shifts of Bragg peaks of layered heterostructures. Each crystalline material scatters at a characteristic Bragg diffraction angle, and hence, the signal is often a layer-specific measure of the lattice distortions caused by the strain pulse propagation and heat flow. In other words, for each pump-probe delay, x-ray probe pulses can project the Bragg peak of each layer onto the detector with a peak shift that is proportional to the instantaneous strain averaged over one layer.
The four central advantages of x-ray probing in picosecond ultrasonics are: (i) Diffraction yields quantitative strain values (ii) with layer specific information for (iii) strain pulses and quasi static strain. (iv) The x-ray probe penetrates metals and insulators irrespective of their optical properties. This permits a very flexible exploitation of dedicated strain-sensing layers that can be placed within in the heterostructure and often naturally exist as buffer, contact or electrode layers. We elaborate and illustrate concepts for the analysis of PUX experiments that have emerged to utilize the quantitative access to the strain response for tracking the energy flow in laser-excited heterostructures.
We structured the manuscript as follows: In Section~\ref{sec:II_PtCuNi_example} we introduce the measurement principle and observables of PUX experiments based on a representative example. We show that the layer for optical excitation can be separated from a dedicated probe layer, which is optimized for x-ray scattering. This facilitates the separation of the strain pulse traveling at the speed of sound and quasi-static strain that propagates via heat diffusion.
In Section~\ref{sec:III_model_theory} we reconsider aspects of the inhomogeneous elastic wave equation that are important for a quantitative modeling of the picosecond strain response. It becomes evident that an in-plane motion introduces a transverse elastic stress that affects the out-of-plane strain response. Accordingly, constrained in-plane motion distinguishes the laser-induced thermal expansion response of a homogeneously excited continuous film from its unconstrained near equilibrium thermal expansion. We elaborate the direct relation of laser-induced energy and stress via a Gr\"uneisen parameter and emphasize that the quasi-static expansion is linearly proportional to the energy density that generates the stress. The concept of subsystem-specific Gr\"uneisen parameters is introduced to capture the stress contributions of different quasi-particle excitations to the strain response.
In Section~\ref{sec:IV_examples} we present scenarios where ultrashort hard x-ray probe pulses excel at probing the strain response in laser-excited nanolayered heterostructures. At first, we illustrate in \ref{sec:IV_a_tbfe} how PUX tracks bipolar and unipolar picosecond strain pulses launched by an opaque transducer with transparent capping layers of various thicknesses. The second use case is the quantitative determination of the picosecond strain response in granular and continuous FePt thin films, which exhibits a strong dependence on the sample morphology as shown in \ref{sec:IV_b_fept}. The third use case demonstrates that PUX experiments can distinguish the strain response of nanoscopic heterostructures that are thinner than the optical penetration depth, as shown for Au-Ni bilayer structures in \ref{sec:IV_c_auni}. This thin film scenario complements the introductory example where we access the electronic energy transfer between layers of an opaque Pt-Cu-Ni heterostructure discussed in Section~\ref{sec:II_PtCuNi_example}.
The last use case in \ref{sec:IV_a_dy} is the detection of ultrafast negative thermal expansion (NTE) exhibited by a rare-earth transducer, which launches unconventional picosecond strain pulses towards a buried detection layer. This example furthermore demonstrates the extraction of subsystem-specific Gr\"uneisen parameters from equilibrium thermal expansion data.
Section~\ref{sec:V_conclusion} summarizes our findings and briefly discusses the advantages of large scale sources and advanced experimental schemes that utilize x-rays for picosecond ultrasonics.
The appendix in Section~\ref{sec:VI_xray_diffraction} contains a brief discussion of the diffraction geometry and the relation between the diffraction angles and the reciprocal space coordinates. We revisit the concept of reciprocal space slicing (RSS) as a rapid data acquisition approach that is often sufficient and less time-consuming compared to the acquisition of reciprocal space maps (RSMs), which requires scanning the incidence angle of the x-rays on the sample. We explain the scaling factor that relates the Bragg peak shift in reciprocal space to the shift measured on an area or line detector in the RSS scheme and discuss scenarios when time-resolved RSMs are required for a proper strain assessment.
\section{\label{sec:II_PtCuNi_example}PUX experiments in a nutshell}
A typical picosecond ultrasonics experiment is schematically depicted in Fig.~\ref{fig:II_concept_sketch}, where we illustrate the generic series of events common to laser-excited heterostructures \cite{mats2015b}. The laser-induced strain response in this type of experiment contains information on all four conceptual steps that occur in response to the light-matter interaction between the femtosecond laser pulse and the absorbing transducer layer i.e.,\ the energy deposition profile (1), the strain generation from a laser-induced stress (2), strain pulse propagation and reflection (3) and quasi-static strain concomitant with thermal transport (4).
\begin{figure}[tbh!]
\centering
\includegraphics[width = 0.95\columnwidth]{Figures/fig_1_general_sketch.pdf}
\caption{\textbf{Generic series of events for picosecond ultrasonics:} A femtosecond laser pulse excites a metallic heterostructure that consists of multiple, often nanoscopically thin layers. The deposited energy creates a stress that drives a picosecond strain response consisting of strain pulses propagating at the speed of sound and a quasi-static thermal expansion which evolves via heat diffusion. The small skin depth of optical probes limits the probed volume to the near-surface region whereas hard x-rays often penetrate multiple microns or more into the bulk.}
\label{fig:II_concept_sketch}
\end{figure}
In applications using optical probe pulses, the main results of such experiments are often the echo time for thickness determination \cite{anto2006,lejm2014}, the acoustic transmission amplitude through an interface for measuring impedance mismatches \cite{tas1998, gros2017} or the oscillations in TDBS to determine the sound velocity \cite{thom1986b,lin1991,guse2018}. Here, we demonstrate that PUX provides information on film thicknesses, strain-pulse reflections and microscopic energy transfer processes within the heterostructure, that set the space and time-dependent stress driving the strain response. As indicated in Fig.~\ref{fig:II_concept_sketch} PUX experiments benefit from the large extinction length of hard x-rays (e.g.\ $8\,\text{keV}$), which is typically on the order of few micrometers irrespective of the optical properties, e.g.\ of metals, semiconductors and insulators. Therefore, the hard x-ray probe pulse can report on all layers of thicker heterostructures. More importantly, when each layer exhibits a different lattice constant, i.e.,\ has a characteristic Bragg diffraction angle, the probe is even layer-specific.
In the remainder of this section, we discuss a representative PUX experiment on metallic heterostructures composed of Pt, Cu and Ni to exemplify the measurement principle and data evaluation. This example demonstrates the advantages of PUX experiments on a sample structure that is frequently used to study the effect of hot-electron pulses that are launched in a Pt layer and propagate through an opaque Cu stack towards a buried functional detection layer \cite{berg2016, fert2017, xu2017, berg2020}.
\begin{figure*}[tbh!]
\centering
\includegraphics[width = 1.00\textwidth]{Figures/fig_2_pux_experiment_ptcunisi.pdf}
\caption{\textbf{Extraction of layer-specific strain via x-ray diffraction exemplified for a Pt-Cu-Ni heterostructure:} (a) Sketch of the samples and diffraction geometry using a PSD. Scanning the angles $\alpha_\text{in}$ and $\alpha_\text{out}$ yields the RSM (c) around the Bragg peaks of Pt, Cu and Ni that are separated along the out-of-plane reciprocal coordinate $q_\text{z}$ (b). The yellow and blue lines denote the diffraction intensity distributions for the respective subset of the reciprocal space that is probed by the area detector at particular fixed angles as utilized in the time-resolved experiment. The black line is the integrated intensity distribution of (c) along $q_\text{x}$. Panels (d--f) display the laser-induced shift of the Bragg peaks of sample 1 that is used to determine the layer-specific transient strain according to Eq.~\eqref{eq:II_1_strain_definition}.}
\label{fig:II_pux_data}
\end{figure*}
Fig.~\ref{fig:II_pux_data}(a) introduces the sample structures and the specular, i.e.,\ symmetric and coplanar, diffraction geometry. A femtosecond x-ray pulse derived from a laser-based plasma x-ray source \cite{schi2013c} is incident under the angle $\alpha_\text{in}$ with respect to the sample surface. The pixels of a position-sensitive detector (PSD) record the x-ray intensity distribution diffracted from the individual layers of the heterostructure. On the PSD we observe three maxima separated within the diffraction plane which correspond to the Bragg peaks of the individual metal layers. The respective diffraction angles $\alpha_\text{out}$ are determined by the reciprocal lattice vector $\bm{G}$ via the Laue condition:
\begin{align}
\bm{q}=\bm{k}_\text{out} - \bm{k}_\text{in} =k \begin{pmatrix} \cos{(\alpha_\text{out})} - \cos{(\alpha_\text{in})} \\ \sin{(\alpha_\text{in})} + \sin{(\alpha_\text{out}) } \end{pmatrix} =\bm{G} \,,
\label{eq:II_0_Laue}
\end{align}
where the scattering vector $\bm{q}$ is determined by the wave vector of the incident ($\bm{k}_\text{in}$) and diffracted ($\bm{k}_\text{out}$) x-rays, which have equal magnitude $k=2 \pi /\lambda$ according to the wavelength $\lambda$ of the elastically scattered x-rays. The magnitude of $\bm{G}$ encodes the out-of-plane lattice constant $d_3$ and the diffraction order $n$ via $|\bm{G}|=2\pi n/ d_3$. Each detector pixel of the PSD probes a small volume around a specific point in reciprocal space and a symmetric scan of $\alpha_\text{in}$ and $\alpha_\text{out}$ maps the reciprocal space along the out-of-plane direction defining the $q_\mathrm{z}$-axis. A representative RSM, i.e.,\ the intensity scattered along $\bm{q}$, of a Pt-Cu-Ni heterostructure is depicted in Fig.~\ref{fig:II_pux_data}(c) and reveals Bragg peaks of each (111)-oriented layer. The width of the Bragg peaks along $q_\text{x}$ and $q_\text{z}$ is given by the crystalline quality of the layers characterized by the mosaicity and the in-plane and out-of-plane coherence length of the crystallites forming the metal layers. A more detailed discussion of the RSM and the transformation of the angle-dependent intensity to reciprocal space $q_\text{x}$-$q_\text{z}$ is provided in appendix~\ref{sec:VI_a_rsm} and \cite{schi2013c}.
The integration of the RSM along $q_\text{x}$ yields the intensity distribution along the reciprocal coordinate $q_\text{z}$ (Fig.~\ref{fig:II_pux_data}(b)) that encodes the average out-of-plane lattice constant $d_{3}$ of the diffracting layers by the center-position of their Bragg peaks $q_\text{z}=2\pi n/d_{3}$ (see Eq.~\eqref{eq:II_0_Laue}). Following the laser-induced time-dependent shift of the Bragg peaks along $q_\text{z}$ in a pump-probe experiment yields the layer-specific transient strain $\eta(t)$ which represents the relative change of the average out-of-plane lattice constant with respect to its value before excitation ($t<0$):
\begin{align}
\eta(t) = \frac{d_{3}(t)-d_{3}(t<0)}{d_{3}(t<0)}= \frac{q_\text{z}(t<0)-q_\text{z}(t)}{q_\text{z}(t)}\,.
\label{eq:II_1_strain_definition}
\end{align}
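
As a minimal numerical sketch of Eqs.~\eqref{eq:II_0_Laue} and \eqref{eq:II_1_strain_definition}, the following Python snippet converts the diffraction angles into the scattering vector and a transient Bragg peak position into strain. The function names and the numerical values are purely illustrative; the latter are chosen close to a Cu(111) reflection probed with Cu~K$\alpha$ radiation and are not taken from the experiment.
\begin{verbatim}
import numpy as np

def scattering_vector(alpha_in_deg, alpha_out_deg, wavelength):
    """q_x and q_z of the Laue condition for the coplanar geometry;
    angles in degrees, wavelength and q in consistent units."""
    k = 2 * np.pi / wavelength
    a_in, a_out = np.radians(alpha_in_deg), np.radians(alpha_out_deg)
    return (k * (np.cos(a_out) - np.cos(a_in)),
            k * (np.sin(a_in) + np.sin(a_out)))

def transient_strain(qz_t, qz_before):
    """Layer-averaged out-of-plane strain from the Bragg peak centre."""
    return (qz_before - qz_t) / qz_t

# Symmetric scan close to a Cu(111) reflection with Cu K-alpha radiation:
_, qz0 = scattering_vector(21.7, 21.7, 1.5418)  # wavelength in Angstroem
print(2 * np.pi / qz0)                # lattice spacing d3 of about 2.08 A
# A peak shifted to 99.8% of its initial position: ~0.2% expansion.
print(transient_strain(0.998 * qz0, qz0))
\end{verbatim}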
Instead of scanning the incidence angle $\alpha_\text{in}$ to create an RSM at each pump-probe delay, we often follow the transient Bragg peak position by the time-efficient RSS method \cite{zeus2021} that is further discussed in Section~\ref{sec:VI_b_rss}. The PSD simultaneously probes a subset of the reciprocal space as indicated in Fig.~\ref{fig:II_pux_data}(c) by the blue and yellow lines for two different fixed $\alpha_\text{in}$ corresponding to the Bragg angles of Ni and Pt, respectively. The intensity on the detector, exemplarily shown in the inset of panel (a) for an intermediate $\alpha_\text{in}$ probing all three peaks simultaneously with less intensity, is integrated along the $q_\text{y}$ coordinate of the detector. For the yellow and blue lines $\alpha_\text{in}$ is chosen to efficiently capture slices through the different maxima of the RSM (Fig.~\ref{fig:II_pux_data}(c)), in order to collect RSS (Fig.~\ref{fig:II_pux_data}(b)) which closely resemble the integrated RSM (black line). We relate the transient shifts of the Bragg peaks on the detector to transient shifts along $q_\text{z}$ depicted in Fig.~\ref{fig:II_pux_data}(d--f) that yield the layer-specific transient out-of-plane strains according to Eq.~\eqref{eq:II_1_strain_definition}.
The resulting strain response of the Pt transducer, the Cu propagation layer, and the Ni detection layer are compared in Fig.~\ref{fig:II_PtCuNiSi_comparison}(a--c) for two different heterostructures with and without a $5\,\text{nm}$ thin insulating MgO interlayer in front of the buried Ni detection layer. The near-infrared pump-pulse mainly deposits energy in the $7\,\text{nm}$ thin Pt transducer as illustrated by the absorption profile in Fig.~\ref{fig:II_PtCuNiSi_comparison}(e). In absence of an MgO interlayer, we observe a rapid expansion of both the Pt and the Ni layer upon laser-excitation and the Cu layer is compressed within the first picoseconds. The rapid expansion of the Ni layer compressing the adjacent Cu layer originates from a fast transport of hot electrons from the laser-excited Pt transducer through Cu to the buried Ni layer where they release their energy to phonons. This dominant electronic heat transport to Ni causes the compression of the Cu layer that is heated only on a longer timescale owing to its weak electron-phonon coupling. This surprising observation was referred to as "heat transport without heating" in a publication \cite{pude2020b} that provides additional information on the role of layer-specific electron-phonon coupling and the modelling of the strain response.
Here, we highlight the suppression of the crucial electronic energy transport from Pt to Ni, if the additional MgO interlayer is introduced (blue data in Fig.~\ref{fig:II_PtCuNiSi_comparison}). Now the Ni layer is rapidly compressed by the expansion of Cu and compressed even more at $20\,\text{ps}$, when the strain pulse generated in Pt reaches the Ni layer. The expansion of the Ni detection layer only rises on the timescale of hundreds of picoseconds after laser-excitation via phononic heat transport from Cu through the MgO interlayer separating sound and heat in the time domain. The suppressed electronic energy transport to the buried Ni detection layer yields a background-free signal of the strain pulse and an increased expansion of the Cu layer compared to the pure metal heterostructure.
\begin{figure}[t!]
\centering
\includegraphics[width = 1.00\columnwidth]{Figures/fig_3_ptcuni_data.pdf}
\caption{\textbf{Comparison of the picosecond strain response in Pt-Cu-(MgO-)Ni heterostructures:} Transient strain in the (a) Pt, (b) Cu and (c) Ni layers of the heterostructures with and without MgO interlayer as depicted in (d). Blue lines are for the heterostructure with an insulating MgO barrier between Cu and Ni. (e) Absorption of the pump-pulses occurs only in the Pt layer and in the first few nm of Cu. The suppression of the electronic heat transport from Pt to Ni by the MgO interlayer changes the strain response of the Ni detection layer from expansion to compression and delays the rise of the quasi-static expansion.}
\label{fig:II_PtCuNiSi_comparison}
\end{figure}
The experiment nicely illustrates PUX: We obtain the strain response by tracking the shift of layer specific diffraction peaks that are well separated due to their material-specific lattice constants. The total strain response is a superposition of propagating strain pulses and a quasi-static expansion due to heating that dominates after the strain pulses have propagated into the substrate. Since PUX measures the strain amplitude, both the strain pulses and the quasi-static strain from laser-induced temperature changes can be quantitatively evaluated on equal footing. In addition, the transient strain accesses the layer thicknesses via the timing of the expansion and compression pulses.
\section{\label{sec:III_model_theory} Concepts for modelling the strain response}
In this section, we discuss the fundamental concepts for quantitatively modelling the picosecond strain response, driven by a laser-induced spatio-temporal stress. These dynamics are generally described by the elastic wave equation, and we elaborate on the special case of a laterally homogeneous excitation of continuous thin films and heterostructures investigated in typical picosecond ultrasonic experiments. The spatio-temporal stress is proportional to the local contributions to the energy density, which changes in time according to the heat transport within the sample. The different degrees of freedom such as electrons and phonons contribute differently to the transport and have to be accounted for in nanoscale metals. We advocate the Gr\"uneisen concept to model the effect of the energy transfer between these subsystems on the laser-induced stress. The Gr\"uneisen parameters linearly relate the energy deposited in different degrees of freedom to the respective stress contributions. Their superposition yields the total external stress driving the strain response. Finally, we provide an example of modelling the strain response of a sample by numerically solving the elastic wave equation for an educative case of an inhomogeneously excited transducer on a non-absorbing detection layer.
\subsection{\label{sec:III_a_strain_3D} Poisson stresses in a 3D strain response}
In general, the strain response of an elastic solid to a time- and space-dependent stress is found as a solution of the inhomogeneous elastic wave equation, i.e.,\ the equation of motion for the displacement field $u_i(\bm{x},t)$ at a specific position $\bm{x}$ in the sample and at a time $t$.\footnote{To enhance the readability of this section the space and time dependences $(\bm{x},t)$ are not listed further after the introduction of physical quantities.} The index~$i$ enumerates the three spatial dimensions. The mass density $\rho_\text{m}(\bm{x})$ is accelerated by the spatial gradient of a total stress $\sigma^\text{tot}_{ij}(\bm{x},t)$, as described by the inhomogeneous elastic wave equation:
\begin{align}
\begin{split}
\rho_\text{m}\frac{\partial^2 u_i}{\partial t^2} &= \sum_{j} \frac{\partial }{\partial x_{j}} \sigma_{ij}^\text{tot} \\
&= \sum_{j} \frac{\partial}{\partial x_{j}} \left( \sum_{k,l} c_{ijkl}\eta_{kl} - \sigma_{ij}^\text{ext}\right)\,.
\end{split}
\label{eq:III_1_general_wave_equation}
\end{align}
The deformation of the solid is described by the strain
\begin{align}
\eta_{kl}(\bm{x},t) = \frac{\partial u_k(\bm{x},t)}{\partial x_l}\,,
\label{eq:III_0_strain}
\end{align}
which is determined by the displacement $u_k$.
The proportionality constants of the elastic stress are the direction-dependent elastic constants $c_{ijkl}(\bm{x})$\footnote{Here, $c_{ijkl}$ does not necessarily denote the elastic constants along the high symmetry axes of the crystal but along the spatial directions given by the sample geometry. If the high symmetry axes do not match the principal axes of the sample, a transformation of the elastic tensor into the coordinate system of the sample is required.}. Adding an external stress $\sigma_{ij}^\text{ext}(\bm{x},t)$ drives the atomic motion.
In this publication, we limit our discussion to longitudinal laser-induced stresses $\sigma^\text{ext}_{ii}$ and strains $\eta_{kk}$, i.e.,\ volume changing elements of the stress-strain relation. Under this limitation the elastic wave equation \eqref{eq:III_1_general_wave_equation} simplifies to:
\begin{align}
\begin{split}
\rho_\text{m} \frac{\partial^2 u_i}{\partial t^2} &= \frac{\partial}{\partial x_{i}} \left( c_{iiii}\eta_{ii} + \sum_{k \neq i} c_{iikk}\eta_{kk}- \sigma_{ii}^\text{ext} \right) \\
&= \frac{\partial }{\partial x_{i}} \left( \sigma^\text{elastic}_{ii}+ \sigma_{ii}^\text{Poi} - \sigma_{ii}^\text{ext} \right) \,.
\end{split}
\label{eq:III_2_wave_equation_without_shear}
\end{align}
Here, the negative sign in front of the external stress is chosen such that a positive longitudinal stress $\sigma^\mathrm{ext}_{ii}$ leads to an expansion $(\eta_{ii}>0)$ along the direction of the stress. If the gradients in the external stress rise faster than the elastic stress, which is proportional to the strain propagating at the sound velocity, a propagating strain pulse is launched. Its propagation is affected by interfaces between different layers within the sample structure unless they are acoustically impedance matched\footnote{The acoustic impedance $Z$ of a material is defined by the product of its mass density $\rho_\mathrm{m}$ and the appropriate sound velocity $v_\mathrm{s}= \sqrt{c_{3333}/\rho_\text{m}}$ i.e.,\ $Z= \rho_\mathrm{m} v_\mathrm{s}$. The acoustic reflection coefficient $R$ for a strain pulse traversing the interface from material 1 to material 2 considered in the model, is given by $R = \frac{Z_2-Z_1}{ Z_1 + Z_2}$, which implicitly assumes perfect adhesion \cite{gros2017, roye2000}.}. The strain pulses are partially reflected from the interfaces. When reflection occurs at an interface to a medium with lower acoustic impedance, e.g.\ in particular for reflections at the surface of the sample, the sign of the strain pulse changes. The strain $\eta_{ii}$ induces an elastic stress $\sigma^\text{elastic}_{ii}(\bm{x},t)$ that partially compensates the external stress. In addition, the three-dimensional response of the solid introduces Poisson stress contributions $\sigma_{ii}^\text{Poi}$ that originate from the spatio-temporal strains $\eta_{kk}$ along the perpendicular directions, analogously determined by the elastic wave equation along the respective directions. In total, the three-dimensional strain response of the solid to the external stresses $\sigma_{ii}^\text{ext}$ requires a solution of the three coupled differential equations in Eq.~\eqref{eq:III_1_general_wave_equation}.
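
To make the connection between a rapidly rising stress and the launched strain pulse explicit, the following Python sketch integrates the one-dimensional limit of Eq.~\eqref{eq:III_2_wave_equation_without_shear} with a simple finite-difference scheme. The material parameters and the exponential stress profile are generic placeholder values chosen only for illustration and do not correspond to a specific sample.
\begin{verbatim}
import numpy as np

# 1D limit: rho * d2u/dt2 = d/dz ( c33 * du/dz - sigma_ext(z) ),
# with a step-like, exponentially decaying stress switched on at t = 0.
nz, dz = 600, 0.5e-9                  # 300 nm depth, 0.5 nm cells
rho, c33 = 8.9e3, 250e9               # placeholder density (kg/m^3), c_3333 (Pa)
v_s = np.sqrt(c33 / rho)              # longitudinal sound velocity (~5.3 nm/ps)
dt = 0.5 * dz / v_s                   # time step obeying the CFL stability limit
z = (np.arange(nz) + 0.5) * dz
sigma_ext = 1e8 * np.exp(-z / 10e-9)  # 0.1 GPa stress, 10 nm penetration depth

u_prev = np.zeros(nz)                 # displacement at t - dt
u = np.zeros(nz)                      # displacement at t
for _ in range(1000):                 # roughly 50 ps of propagation
    sigma = np.zeros(nz + 1)          # total stress on the cell boundaries;
    sigma[1:-1] = (c33 * np.diff(u) / dz              # zero end values model
                   - 0.5 * (sigma_ext[1:] + sigma_ext[:-1]))  # free surfaces
    accel = np.diff(sigma) / dz / rho
    u_prev, u = u, 2 * u - u_prev + dt**2 * accel

strain = np.gradient(u, dz)           # eta_33(z): quasi-static expansion near
print(strain.max())                   # the surface plus a propagating pulse
\end{verbatim}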
When the driven strain pulses and their reflections have propagated out of the volume of interest, the elastic and the Poisson stress contributions fully compensate the external stress and the vanishing total stress marks a quasi-static state\footnote{The quasi-static state slowly changes due to diffusion of heat and other slow processes.}. The vanishing time derivative in Eq.~\eqref{eq:III_2_wave_equation_without_shear} then yields the quasi-static strain $\eta_{ii}^\text{qs}$:
\begin{align}
\begin{split}
\eta_{ii}^\mathrm{qs} &= \frac{\sigma_{ii}^\text{ext}-\sigma_{ii}^\text{Poi}}{c_{iiii}} \\
&= \frac{\sigma_{ii}^\text{ext}}{c_{iiii}}-\sum_{k \neq {i}} \frac{c_{iikk}}{c_{iiii}}\eta_{kk}^\text{qs}= \alpha_{i} \Delta T \,.
\end{split}
\label{eq:III_3_general_quasi_static_strain}
\end{align}
The anisotropic linear expansion coefficient $\alpha_{i}(\bm{x})$ relates this quasi-static strain to a temperature increase $\Delta T(\bm{x},t)$ as in thermal equilibrium\footnote{Note that this approach is only valid for small $\Delta T=T_\text{f}-T_\text{i}$. Otherwise, $\eta_{ii}^\mathrm{qs}=\int_{T_\text{i}}^{T_\text{f}} \alpha_i(T') \text{d}T'$ holds.}. Here, the expansion is not only driven by the externally induced stress $\sigma_{ii}^\text{ext}$ but also reduced by a Poisson contribution that arises from the expansion along the perpendicular directions $\eta_{kk}^\text{qs}$ driven by the external stresses $\sigma_{kk}^\text{ext}$ (see Fig.~\ref{fig:III_alpha_ultrafast}(a)).
\begin{figure}[t!]
\centering
\includegraphics[width =1\columnwidth]{Figures/fig_4_poisson.pdf}
\caption{\textbf{Morphology-dependent strain response of nanoparticles and thin films:} Sketch of the laser-driven quasi-static expansion for an in-plane nanostructured film (a) and an in-plane homogeneous and continuous thin film (b) of an isotropic material without an attached substrate. The out-of-plane expansion of the continuous film is enhanced by the absence of contractive Poisson stress contributions that would partially compensate the expansive external stress. The arrows indicate the effective driving stress $\sigma_{ii}$.}
\label{fig:III_alpha_ultrafast}
\end{figure}
The parametrization of $\eta_{kk}^\text{qs}$ in Eq.~\eqref{eq:III_3_general_quasi_static_strain} by their respective linear thermal expansion coefficient $\alpha_{k}$ yields a relation between the external stress and the temperature increase:
\begin{align}
\begin{split}
\sigma_{ii}^\text{ext}(x_i,t) &= c_{iiii} \left( \alpha_{i} + \sum_{k \neq i} \frac{c_{iikk}}{c_{iiii}} \alpha_{k}\right)
\Delta T \\
&= \Gamma_{i} \rho_{Q}\,.
\end{split}
\label{eq:III_4_general_external_stress}
\end{align}
This temperature increase originates from an optically deposited energy density via $\Delta T= \rho_Q/C_V$, determined by the heat capacity at constant volume $C_V$\footnote{Note that this approach is only valid for small $\Delta T=T_\text{f}-T_\text{i}$ where $C_V(T)$ barely changes. Otherwise it should read $\rho_{Q}=\int_{T_\text{i}}^{T_\text{f}} C_V(T') \text{d}T'$. However, the chosen notation simplifies the introduction of the Gr\"uneisen parameter.}. Equation~\eqref{eq:III_4_general_external_stress} introduces the direction-dependent Gr\"uneisen parameter $\Gamma_{i}(\bm{x})$:
\begin{align}
\Gamma_{i} &= \frac{c_{iiii}}{C_V} \underbrace{\left( \alpha_{i} + \sum_{k \neq i} \frac{c_{iikk}}{c_{iiii}} \alpha_{k} \right)}_{\alpha_{i}^\text{uf}} \,,
\label{eq:III_5_definition_Gr\"uneisen_constant}
\end{align}
that linearly relates this deposited energy density to the induced external stress $\sigma_{ii}^\text{ext}$. It describes how efficiently energy density generates stress and is determined by the direction-dependent $\alpha_{i}^\text{uf}$. This is the expansion coefficient along direction $i$ assuming that the lattice is clamped along the other spatial directions, e.g.\ for the ultrafast excitation of a homogeneous thin film (see next section).
The advantage of using the Gr\"uneisen concept is that the ratios of the intrinsically temperature-dependent quantities $\alpha_{i}(T)$, $\alpha_{i}^\text{uf}(T)$ and $C_V(T)$ in Eq.~(\ref{eq:III_5_definition_Gr\"uneisen_constant}) can be combined into temperature-independent, unitless parameters $\Gamma_{i}$. Eq.~\eqref{eq:III_4_general_external_stress} states that the energy density $\rho_Q$ is proportional to the stresses $\sigma_{ii}^\text{ext}$, which themselves linearly relate to the quasi-static strain $\eta_{ii}^\text{qs}$. Moreover, in materials with several subsystems hosting the energy density, we find a simple recipe for modeling the out-of-equilibrium expansion response by adding the stresses of all subsystems, even if their energy content may correspond to different equilibrium subsystem temperatures \cite{nie2006,nico2011,heni2016,mald2017} (see Section~\ref{sec:III_d_Gr\"uneisen_stress} for further details).
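As a minimal Python sketch of Eqs.~\eqref{eq:III_4_general_external_stress} and \eqref{eq:III_5_definition_Gr\"uneisen_constant}, the following snippet evaluates $\alpha_i^\text{uf}$ and $\Gamma_i$ from elastic constants, expansion coefficients and the heat capacity; all numerical values are hypothetical placeholders:
\begin{verbatim}
import numpy as np

# Sketch: ultrafast expansion coefficient and Grueneisen parameter along
# direction i. All numbers are hypothetical placeholders.

def gruneisen(i, c, alpha, C_V):
    """c[i][k] ~ c_iikk (Pa), alpha[k] ~ linear expansion coefficients (1/K),
    C_V ~ heat capacity per volume (J/(m^3 K))."""
    alpha_uf = alpha[i] + sum(c[i][k] / c[i][i] * alpha[k]
                              for k in range(3) if k != i)
    return c[i][i] / C_V * alpha_uf, alpha_uf

# isotropic example with nu = 1/3: c_iikk / c_iiii = 0.5 for k != i
c = np.array([[250e9, 125e9, 125e9],
              [125e9, 250e9, 125e9],
              [125e9, 125e9, 250e9]])
Gamma_3, alpha_3_uf = gruneisen(2, c, alpha=[13e-6] * 3, C_V=2.5e6)
print(Gamma_3, alpha_3_uf)   # alpha_3_uf ~ 2 * alpha, cf. Eq. (III_10)
\end{verbatim}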
\subsection{\label{sec:III_b_strain_1D} Constraints of the thin film geometry}
In most picosecond ultrasound experiments the footprint of the excitation laser pulse (sub-mm) is much larger than both the thickness of the transducer (sub-$\text{\textmu}$m) and the footprint of the probe pulse, which results in a laterally homogeneous excitation of the probed sample volume.
Therefore, for times $t \leq t_\text{1D} \ll d_\text{L}/v_\text{s}$, i.e.\ short compared to the timescale set by the size of the pump-laser footprint $d_\text{L}$ and the sound velocity $v_\text{s}(\bm{x})$, the in-plane stresses are balanced and only the spatial derivative $\frac{\partial}{\partial x_{3}} \sigma_{33}^\text{ext}$ remains. Under this condition, the system of coupled Eqs.~\eqref{eq:III_2_wave_equation_without_shear} simplifies to a one-dimensional equation for the strain along the out-of-plane direction ($x_3$) of the thin film:
\begin{align}
\label{eq:III_6_1d_wave_equation_strain}
\rho_\text{m}\frac{\partial^2 u_3}{\partial t^2} &= \frac{\partial}{\partial x_3} \left( c_{3333} \eta_{33}
- \sigma^\text{ext}_{33} \right)\, .
\end{align}
Figure~\ref{fig:III_alpha_ultrafast} compares the three-dimensional response of a thin film with in-plane nanostructure to the purely one-dimensional expansion of a continuous thin film. This anisotropic picosecond strain of continuous thin films occurs even in otherwise isotropic solids. It is driven by gradients in the external stress along the out-of-plane direction, which typically appear at the sample surface, at layer interfaces, or at the slopes of the excitation profile that slowly changes via heat transport. The out-of-plane strain $\eta_{33}$ arising upon laser excitation partially compensates the external stress until the total stress vanishes when the strain pulses have propagated into the substrate. The absence of in-plane expansion for $t\leq t_\text{1D}$ suppresses any Poisson stress contributions. Therefore, the remaining quasi-static expansion $\eta_{33}^\text{qs}$ is directly related to the external stress $\sigma_{33}^\text{ext}$, which simplifies Eq.~\eqref{eq:III_3_general_quasi_static_strain} to:
\begin{align}
\eta_{33}^\mathrm{qs} = \frac{\sigma^\mathrm{ext}_{33}}{c_{3333}} = \frac{\Gamma_{3}\rho_Q}{c_{3333}} = \alpha_{3}^\text{uf} \Delta T \,.
\label{eq:III_8_1d_quasi_static_strain}
\end{align}
Using the concept of a Gr\"uneisen parameter we express the external stress by $\sigma^\mathrm{ext}_{33}=\Gamma_{3}\rho_Q$. With the definition of the Gr\"uneisen parameter in Eq.~\eqref{eq:III_5_definition_Gr\"uneisen_constant} the quasi-static expansion in the thin film geometry is related to the ultrafast expansion coefficient $\alpha_{3}^\text{uf}$, which differs from the corresponding equilibrium expansion coefficient $\alpha_{3}$ that is used for three-dimensional expansion (Eq.~\eqref{eq:III_3_general_quasi_static_strain}):
\begin{equation}
\alpha_{3}^\text{uf} = \alpha_{3} \left( 1 + \sum_{k \neq 3} \frac{c_{33kk} \alpha_{k}}{c_{3333} \alpha_{3}} \right)\,.
\label{eq:III_9_alpha_ultrafast}
\end{equation}
In the case of isotropic solids this expression simplifies due to identical values of the off-diagonal elements of the elastic tensor and isotropic expansion coefficients $\alpha_{i}=\alpha$. For a metal with a typical Poisson ratio of $\nu=c_{1133}/(c_{3333}+c_{1133})\approx 1/3$, the ultrafast expansion coefficient
\begin{equation}
\alpha^\text{uf} = \alpha \left( 1 + 2 \cdot \frac{c_{1133}}{c_{3333}} \right) \approx 2\alpha\,.
\label{eq:III_10_alpha_ultrafast_isotropic_solid}
\end{equation}
is approximately twice the equilibrium value, because the Poisson stresses are absent in typical picosecond ultrasonic experiments on homogeneous thin films.
Therefore, the Poisson effect requires considering the morphology of the thin film (Fig.~\ref{fig:III_alpha_ultrafast}), because the morphology determines the dimensionality of the strain response and hence the expansion driven by the laser-induced temperature increase.
The one-dimensional nature of the picosecond strain response of continuous thin films was already considered implicitly in the seminal work by Thomsen et al.\ \cite{thom1986}. Their formulation of the one-dimensional wave equation~\eqref{eq:III_13_wave_equation_thomson} can be transformed into the much simpler Gr\"uneisen formulation (Eq.~\eqref{eq:III_11_wave_equation_Gr\"uneisen}) by considering the bulk modulus $B=(c_{3333} +2c_{3311})/3$ and the Poisson factor $\nu$:
\begin{align}
\label{eq:III_13_wave_equation_thomson}
\rho_\text{m}\frac{\partial^2 u_3}{\partial t^2} &= \frac{\partial}{\partial x_3} \left( 3\frac{1-\nu}{1+\nu}B\eta_{33}-3B\alpha \Delta T \right)\\
\label{eq:III_12_wave_equation_thomson_comparison}
&= \frac{\partial}{\partial x_3} \left( c_{3333} \eta_{33} - c_{3333} \alpha^\text{uf} \Delta T \right) \\
\label{eq:III_11_wave_equation_Gr\"uneisen}
&= \frac{\partial}{\partial x_3} ( \underbrace{c_{3333} \eta_{33}}_{\sigma^\text{elastic}_{33}} - \underbrace{\Gamma\rho_Q}_{\sigma^\text{ext}_{33}} )\,.
\end{align}
Equation~\eqref{eq:III_12_wave_equation_thomson_comparison} illustrates that the modified thermal expansion coefficient $\alpha^\text{uf}$ has to be used to quantify the laser-induced stress in the absence of an in-plane expansion. This formulation is useful as it connects $\alpha^\text{uf}$ to the quasi-static strain $\eta_{33}^\text{qs}$ (see Eq.~\eqref{eq:III_8_1d_quasi_static_strain}) that exists in homogeneous thin films on timescales where out-of-plane strain pulses have propagated out of the investigated region of interest, but in-plane motion is still negligible due to the large, homogeneously excited area of the thin film. When $t>t_\text{1D}$ the film laterally relaxes and induces the Poisson stresses that re-establish the thermal expansion coefficients used in Eq.~\eqref{eq:III_3_general_quasi_static_strain}.
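The identities behind the transformation from Eq.~\eqref{eq:III_13_wave_equation_thomson} to Eq.~\eqref{eq:III_12_wave_equation_thomson_comparison} can also be verified numerically; the following minimal Python sketch uses hypothetical elastic constants for an isotropic solid:
\begin{verbatim}
# Numerical check of the elastic and stress prefactors in Eqs. (III_13) and
# (III_12) for an isotropic solid; c3333, c1133 and alpha are placeholders.
c3333, c1133, alpha = 250e9, 125e9, 13e-6

B  = (c3333 + 2 * c1133) / 3                 # bulk modulus
nu = c1133 / (c3333 + c1133)                 # Poisson factor
alpha_uf = alpha * (1 + 2 * c1133 / c3333)   # Eq. (III_10)

print(3 * (1 - nu) / (1 + nu) * B, c3333)    # identical elastic prefactors
print(3 * B * alpha, c3333 * alpha_uf)       # identical stress terms
\end{verbatim}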
Finally, the simple formulation of the wave equation given in Eq.~\eqref{eq:III_11_wave_equation_Gr\"uneisen} highlights the direct access to the spatio-temporal energy density by measuring the transient strain response using the Gr\"uneisen concept. This perspective is particularly useful in the context of materials, where the excitation of several degrees of freedom (e.g.\ electrons, phonons and spins) simultaneously contribute to the stress with different time-dependencies as discussed in Section~\ref{sec:III_d_Gr\"uneisen_stress}.
\subsection{\label{sec:III_c_heat_transport} Energy transfer processes and diffusive two temperature models }
The elastic wave equation~\eqref{eq:III_12_wave_equation_thomson_comparison} relates the picosecond strain response of a homogeneous thin film to a laser-induced stress that is characterized by a temperature increase $\Delta T=T_\text{f}-T_\text{i}$. This simple formulation contains the strong assumption that the temperatures of the different degrees of freedom - in particular electrons and phonons - are the same.
Under this assumption the shape of both the driven picosecond strain pulses and the spatio-temporal distribution of the remaining quasi-static expansion $\eta_{33}^\text{qs}$ is determined by the spatio-temporal profile of the temperature $T(x_3,t)$ that is a solution to the one-dimensional heat equation:
\begin{equation}
C_V(T)\frac{\partial T}{\partial t}=\frac{\partial}{\partial x_3} \left( \kappa(T) \frac{\partial T}{\partial x_3} \right) + S \,,
\label{III_14_1D_heat_diffusion_1TM}
\end{equation}
with the thermal conductivity $\kappa=\kappa(T(x_3,t))$ that inherits its depth-dependence from the temperature profile and differs for different materials. In addition, the description of heat transport across interfaces may require considering interface resistances that account for different dispersion relations of the involved quasi-particles \cite{redd2005,oomm2022,herz2022}. The absorption of energy from the incoming photons is treated by the source term $S(x_3,t)$, which is in the simplest case described by the Lambert-Beer law. Especially in heterostructures and films thinner than the optical skin depth it is often necessary to consider internal optical reflections, which can be accounted for using transfer matrix approaches \cite{yeh2005,leg2013}. The one-dimensional approach is again only valid for thin, laterally homogeneously excited films where in-plane thermal transport within the probed volume can be neglected.
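As a minimal Python sketch, the source term $S(x_3,t)$ of the heat equation above can be modeled by the Lambert-Beer law with a Gaussian pump pulse in time; fluence, penetration depth and pulse duration below are hypothetical placeholders, and multilayer interference would instead require the transfer-matrix treatment mentioned above:
\begin{verbatim}
import numpy as np

# Sketch of the source term S(x3, t): exponential Lambert-Beer absorption
# profile times a normalized Gaussian pump pulse. Parameters are placeholders.

def source_term(x3, t, fluence=60.0, delta=30e-9, tau=1e-13):
    """Absorbed power density in W/m^3 on grids x3 (m) and t (s)."""
    depth_profile = np.exp(-x3 / delta) / delta              # 1/m, normalized
    time_profile = (np.exp(-0.5 * (t / tau) ** 2)
                    / (np.sqrt(2 * np.pi) * tau))             # 1/s, normalized
    return fluence * np.outer(time_profile, depth_profile)    # shape (t, x3)

x3 = np.linspace(0, 100e-9, 200)
t  = np.linspace(-0.5e-12, 2e-12, 100)
S  = source_term(x3, t)     # absorbed fluence of 60 J/m^2 = 6 mJ/cm^2
\end{verbatim}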
However, for most materials, assuming a single temperature, i.e.,\ quasi-instantaneous equilibration of the electrons, phonons and any other energy reservoirs in the solid, is a strong oversimplification. In typical metals such as Au, Cu and Pt the laser excitation leads to a sudden increase of the energy density in the electron system, which is subsequently transferred to phonons within a few picoseconds. This coupling of the subsystems on a timescale comparable to the relaxation of the lattice may be crucial for the induced stress and the driven picosecond strain pulses as experimentally demonstrated for Al \cite{nie2006}, Au \cite{nico2011} and Ni \cite{wang2008}. In contrast, the strain response of materials that exhibit a very strong electron-phonon coupling such as SrRuO$_3$ is in some scenarios sufficiently well described by a single temperature \cite{schi2014b,korf2008}.
In addition to the energy distribution among different degrees of freedom, a one-temperature model also oversimplifies spatial heat transport within metal heterostructures because it disregards non-equilibrium transport phenomena like ballistic \cite{bror1987,hohl1997b} and super-diffusive \cite{batt2012,mali2018} electron transport. Already the modeling of the diffusive transport within the Pt-Cu-Ni metal stack discussed in Section~\ref{sec:II_PtCuNi_example} requires a two-temperature model (2TM) that captures the electronic thermal conductivity $\kappa^\text{el}\propto T^\text{el}/T^\text{ph}$ \cite{pude2020b, hohl2000} enhanced by a long lasting non-equilibrium of electrons and phonons ($T^\text{el}(x_3,t) \gg T^\text{ph}(x_3,t)$) due to weak electron-phonon coupling in Cu. In general, the propagation of quasiparticle excitations following the non-equilibrium after optical excitation can be discussed in a quantitative and state-resolved way by Boltzmann-transport equations \cite{nenn2018,wais2021,chen2001b}. However, in most cases the conceptually simpler diffusive 2TM suffices:
\begin{align}
\begin{split}
C^\text{el}(T^\text{el}) \frac{\partial T^\text{el}}{\partial t} &= \frac{\partial}{\partial x_3} \left( \kappa^\text{el}(T^\text{el},T^\text{ph}) \frac{\partial T^\text{el}}{\partial x_3} \right)\\
&~ - g^\text{el-ph}\left(T^\text{el}-T^\text{ph}\right) + S\,,\\
C^\text{ph}(T^\text{ph}) \frac{\partial T^\text{ph}}{\partial t} &= \frac{\partial}{\partial x_3} \left( \kappa^\text{ph}(T^\text{ph}) \frac{\partial T^\text{ph}}{\partial x_3} \right)\\
&~ + g^\text{el-ph}\left(T^\text{el}-T^\text{ph}\right)\,.
\end{split}
\label{eq:III_heat_diffusion_2TM}
\end{align}
Such a diffusive 2TM not only includes the coupling between the two subsystems (here by the electron-phonon coupling constant $g^\text{el-ph}$) but also the individual diffusion of electrons and phonons. Transport across interfaces can be treated via the spatial dependence of the thermo-physical parameters. By modeling the thermal conductivity of electrons $\kappa^\text{el}$ as a parameter depending on both the electron and phonon temperature, we can even rationalize the picosecond ultrasonic response of thin metal nanolayers, where ballistic or superdiffusive transport occurs. However, the analysis of our PUX experiments has up to now not depended on finer details of the electron transport since very rapid processes may be masked by the comparatively slow rise of the strain limited by the sound velocity.
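A minimal explicit finite-difference sketch of the diffusive 2TM in Eq.~\eqref{eq:III_heat_diffusion_2TM} for a single homogeneous layer with constant coefficients is given below; all material parameters are hypothetical placeholders, and temperature-dependent parameters, multilayers and interface resistances, as handled e.g.\ by the \textsc{udkm1Dsim} toolbox, are omitted:
\begin{verbatim}
import numpy as np

# Minimal explicit Euler sketch of the diffusive 2TM for one homogeneous
# layer with constant coefficients; all parameters are placeholders.

nx, dx, dt = 200, 1e-9, 5e-17          # 200 nm depth, 1 nm cells, 0.05 fs steps
C_el, C_ph = 2e4, 2.5e6                # J/(m^3 K)
k_el, k_ph = 100.0, 10.0               # W/(m K)
g = 1e17                               # W/(m^3 K), electron-phonon coupling

T_el = np.full(nx, 300.0)
T_ph = np.full(nx, 300.0)
T_el[:30] += 500.0                     # crude initial electron excitation

def laplacian(T, dx):
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2  # insulating edges
    return lap

for _ in range(40000):                 # evolve for 2 ps
    coupling = g * (T_el - T_ph)
    T_el = T_el + dt / C_el * (k_el * laplacian(T_el, dx) - coupling)
    T_ph = T_ph + dt / C_ph * (k_ph * laplacian(T_ph, dx) + coupling)
\end{verbatim}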
In materials with magnetic order, the excitation of the magnetic degrees of freedom has to be treated in addition to the electron and phonon subsystems. Studies of the electron-phonon coupling in ferromagnetic (FM) transition metal elements (Ni, Fe, Co) in the high excitation fluence regime observe a distinct fluence and temperature dependence of the electron-phonon coupling timescale \cite{wang2010, zahn2021, zahn2022}. In order to explain this observation the authors explicitly treated the excitation of magnetic degrees of freedom via electrons by extending the 2TM. However, the variety of different demagnetization behaviors \cite{koop2010, batt2010, roth2012,frie2015} is related to different timescales of energy transfer to magnetic degrees of freedom. Therefore, modeling the spatio-temporal excitation of quasi-particles requires the explicit treatment of magnetic excitations in three-temperature, N-temperature or even more complex models \cite{shok2017,sieg2019,frie2020}.
\subsection{\label{sec:III_d_Gr\"uneisen_stress} Subsystem-specific stresses and Gr\"uneisen parameters}
The prevailing paradigms in picosecond ultrasonics are thermoelastic stresses that drive the observed picosecond strain response according to the elastic wave equation~\eqref{eq:III_11_wave_equation_Gr\"uneisen}. Here, the introduced Gr\"uneisen parameter $\Gamma_{3}$ linearly relates the laser-induced stress $\sigma_{33}^\text{ext}(x_3,t)$ to the spatio-temporal energy density $\rho_Q(x_3,t)$.
This energy density initially deposited to electronic excitations is subsequently distributed within the sample structure and also locally transferred to other degrees of freedom as already discussed in Sec.~\ref{sec:III_c_heat_transport}. This heat transport and subsystem couplings determine the spatio-temporal energy densities $\rho_Q^r(x_3,t)$ stored in each subsystem $r$ that add up to the total deposited energy density $\rho_Q(x_3,t)$:
\begin{align}
\label{eq:III_subsystem_energy_density}
\rho_Q(x_3,t) = \sum_r \rho_Q^r(x_3,t)\,.
\end{align}
Within the Gr\"uneisen approach the energy density deposited in the respective degrees of freedom is linearly related to a stress contribution $\sigma_{33}^r=\Gamma_{3}^r\rho_Q^r$ by subsystem-specific Gr\"uneisen parameters $\Gamma_{3}^r(x_3)$ \cite{nico2011,ruel2015,heni2016,pude2019,repp2020}. In case of different subsystem-specific Gr\"uneisen parameters $\Gamma_{3}^r$ the total stress in Eq.~(\ref{eq:III_11_wave_equation_Gr\"uneisen}) has to be adapted in order to individually treat the subsystem contributions to the laser-induced stress $\sigma_{33}^\text{ext}$:
\begin{align}
\label{eq:III_subsystem_stress_Gruneisen}
\sigma_{33}^\text{ext} = \sum_r \sigma_{33}^r(x_3,t)=\sum_r \Gamma_{3}^r \rho_Q^r(x_3,t)\,,
\end{align}
i.e.,\ their superposition gives the total laser-induced stress driving the picosecond strain response.
The properties of the material under investigation determine which degrees of freedom have to be considered separately to account for the total spatio-temporal external stress. In general, the individual treatment of different degrees of freedom is only necessary for describing the picosecond strain response if the subsystem-specific Gr\"uneisen parameters differ. Conversely, the strain measurement only provides access to ultrafast microscopic processes if the involved quasi-particle excitations contribute differently to the total external stress. The separation of subsystem contributions to the strain response of the atomic lattice is schematically visualized in Fig.~\ref{fig:III_triangle} for the case of a material where electronic excitations, phonons and magnetic excitations contribute.
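In practice, the superposition of Eq.~\eqref{eq:III_subsystem_stress_Gruneisen} amounts to a weighted sum of the subsystem energy-density maps, as in the following minimal Python sketch with hypothetical Gr\"uneisen parameters and array shapes:
\begin{verbatim}
import numpy as np

# Sketch: total laser-induced stress as the superposition of subsystem
# stresses, each linear in the subsystem energy density. The Grueneisen
# parameters and the energy-density maps (n_times, n_depths) are placeholders.

Gamma_3 = {"el": 2.0, "ph": 1.6, "mag": -3.0}          # unitless, per subsystem
rho_Q   = {r: np.zeros((500, 200)) for r in Gamma_3}   # J/m^3, e.g. from a 3TM

sigma_ext_33 = sum(Gamma_3[r] * rho_Q[r] for r in Gamma_3)   # Pa
\end{verbatim}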
The quasi-instantaneous electronic stress contribution is captured by an electronic Gr\"uneisen parameter $\Gamma^\text{el}= \frac{\partial \ln{\gamma_S}}{\partial \ln{(V)}}$ that can be derived as a first-order approximation from the electronic density of states at the Fermi level that determines the Sommerfeld constant $\gamma_S$ \cite{barr1980}. The value depends on details of the band structure and for the idealized case of free electrons in a parabolic band with spherical symmetry in the Sommerfeld approximation one obtains $\Gamma^\text{el}= 2/3$ \cite{barr1980}. The subsequent transfer of energy to phonons gives rise to a phonon stress that can be parameterized to first order by a macroscopic Gr\"uneisen constant $\Gamma^\text{ph}$ assuming a similar phonon population as in thermal equilibrium. However, mode-specific electron-phonon coupling can give rise to long-lasting non-equilibria of the phonon system itself \cite{heni2016,mald2017}, which becomes highly relevant for the modeling of picosecond acoustics in case of strongly mode-specific Gr\"uneisen parameters $\Gamma^\text{ph}_l = \frac{\partial \ln{(\hbar \omega_l)}}{\partial \ln{ (V)}}$. Here the change of the respective phonon energy $\hbar \omega_l$ with volume $V$ parametrizes the efficiency of a class of phonon modes $l$ to generate stress. In various semiconducting materials such as tellurides the transverse phonon modes exhibit negative Gr\"uneisen parameters in contrast to the typically positive Gr\"uneisen parameter of longitudinal phonon modes \cite{whit1993,barr1980}. In thermal equilibrium this results in negative thermal expansion (NTE) at low temperatures since the low-frequency transverse acoustic modes are already excited at lower temperatures, as opposed to the longitudinal acoustic modes with higher frequency.
In case of strong non-equilibria between phonon modes with strongly mode-specific Gr\"uneisen parameters upon laser-excitation, the individual treatment of the different modes is necessary. However, in metals the application of a macroscopic phononic Gr\"uneisen parameter is typically sufficient to quantify the transient phonon stress \cite{nie2006, nico2011, wang2008, pude2020b, repp2016b}.
\begin{figure}[t!]
\centering
\includegraphics[width =1\columnwidth]{Figures/fig_5_subsystem_triangle.pdf}
\caption{\textbf{Gr\"uneisen concept for laser-induced stress on the lattice:} This viewgraph depicts the treatment of magnetic excitations in addition to the optically excited electrons and phonons for the laser-induced stress using the Gr\"uneisen concept. The total equilibrium heat capacity is the superposition of contributions from all subsystems. The energy density stored in their excitations generates stresses on the lattice according to the subsystem-specific Gr\"uneisen parameters $\Gamma_3^r$. Finally, the strain response is driven by the time- and space-dependent superposition of all subsystem-stress contributions that exclusively depend on the energy transfer into each subsystem.}
\label{fig:III_triangle}
\end{figure}
In laser-excited magnetically ordered metals, the laser-induced spin disorder provides a magnetic stress contribution in addition to the electron and phonon stresses. For a Heisenberg exchange interaction the respective Gr\"uneisen parameter can be expressed as $\Gamma^\text{mag} = \frac{\partial \ln{(J)}}{\partial \ln{(V)}}$ resulting from the dependence of the exchange constant $J(V)$ on the volume of the unit cell \cite{whit1962,pytt1965,argy1967}. However, so far only a few experiments \cite{reid2018,pude2019,repp2020,repp2020b,matt2021} have investigated the magnetic stress that adds to the electron and phonon stress contribution as discussed in Sec.~\ref{sec:IV_a_dy}.
Figure~\ref{fig:III_triangle} graphically represents the main idea of the Gr\"uneisen approach treating these three stresses $\sigma^r_{33}=\Gamma^r_{3} \rho_Q^r$ with $r=\{\text{el,ph,mag}\}$ that all act on the strain $\eta_{33}$. The central idea is that, by Hooke's law -- and more generally by Eq.~\eqref{eq:III_8_1d_quasi_static_strain} -- the strain is a linear measure of each of the stress contributions. In the Gr\"uneisen model for several subsystems within the same material each of the stresses is proportional to the respective energy density $\rho_Q^r$, which can be expressed via the specific heat contributions $C^r$ in thermal equilibrium. Introducing subsystem-specific Gr\"uneisen parameters is useful as the thermal expansion coefficients and the heat capacities often share the same temperature-dependence, which originates from the same occupation probability of the underlying quantum states \cite{ashc2012}. In essence, the macroscopic out-of-plane Gr\"uneisen parameters $\Gamma^r_{3}$ encode how efficiently energy densities $\rho_Q^r(x_3,t)$ in each of the subsystems generate out-of-plane stress $\sigma^r_{33}(x_3,t)$. Their extraction is exemplarily demonstrated for the rare-earth element Dy in Section~\ref{sec:IV_a_dy} and requires the separation of the specific heat and thermal expansion into the subsystem contributions \cite{nix1941,nick1971,barr2005,whit1993}.
In total, the Gr\"uneisen approach provides a separation of the laser-induced stress into distinct degrees of freedom, which is linear in the contributing energy densities, and can be used even if these subsystems are out-of-equilibrium with each other for many picoseconds \cite{repp2016,mald2017,heni2016, mald2020, ritz2020}. In such situations, the stress and strain only depend on the energy density transferred between the subsystems under the boundary condition of energy conservation. Therefore, the subsystem-specific Gr\"uneisen parameters determining the transient stress contributions enable the description of expansion and contraction in thermal equilibrium and after photoexcitation on an equal footing.
\subsection{\label{sec:III_e_numerical_simulation} Numerical modelling of the picosecond strain response}
\begin{figure*}[tbh!]
\centering
\includegraphics[width =1\textwidth]{Figures/fig_6_lcm_v3.pdf}
\caption{\textbf{Visualization of the contributions to the elastic response of a laser-excited heterostructure:} The transducer layer (thickness $L_1$) on top of a non-absorbing detection layer (thickness $L_2 > L_1$) exhibits an absorption profile for the optical excitation that is indicated as red area on top of the sample structure (a). (b) LCM representation of the elastic response wherein spheres represent a local mass element, springs encode the elastic coupling and incompressible spacer sticks represent the laser-induced external stress. The motion of the masses is greatly enlarged, and thermal fluctuations are omitted for better visibility. The quasi-static strain is reached when the springs have relaxed to their equilibrium length and a new inter-atomic distance is attained. (c) Spatio-temporal maps of the external stress $\sigma^\text{ext}_{33}$, the elastic stress $\sigma^\text{elastic}_{33}$, the total stress $\sigma^\text{tot}_{33}$, the atomic displacement $u_{3}$, and its spatial derivative, the strain $\eta_{33}$. (d) Spatial profiles of the same quantities at selected times. (e) shows the spatially averaged strain $\eta$ of the individual layers of the bilayer heterostructure, which can be compared with the center-of-mass evolution of the Bragg peaks extracted from a PUX experiment.}
\label{fig:III_toolbox_simulation}
\end{figure*}
Modelling the time-dependent strain response allows us to obtain the spatio-temporal stress profile $\sigma_{33}^\text{ext}(x_3,t)$ that occurs as inhomogeneous term in the 1D-elastic wave equation~\eqref{eq:III_6_1d_wave_equation_strain}. This provides insights into energy transfer processes as the energy density distribution between the subsystems $\rho_Q^r(x_3,t)$ determines the external stress contributions $\sigma^{r}_{33}(x_3,t)$ as introduced in Section~\ref{sec:III_d_Gr\"uneisen_stress}.
Different approaches for solving Eq.~\eqref{eq:III_11_wave_equation_Gr\"uneisen}
for a given $\sigma_{33}^\text{ext}(x_3,t)$ exist. Analytical solutions for the strain field can be constructed for time-independent stress profiles as shown in \cite{thom1986,boja2015,mats2015b}. Numerical approaches \cite{schi2014, schi2021} may be easier to implement when the time dependence of the stresses due to subsystem couplings, thermal diffusion and interface effects needs to be accounted for. A natural spatial grid in a numerical simulation is provided by the atomic layers or unit cells, which also represent the smallest physically meaningful discretization of the strain response. Linear chain models (LCMs) of masses and springs provide an intuitive approach for the numerical calculation of the strain field $\eta_{33}(x_3,t)$ with unit-cell precision \cite{herz2012b}. Publicly available implementations of an LCM are, for example, provided by the \textsc{udkm1Dsim} \textsc{Matlab} \cite{schi2014} and Python \cite{schi2021} code-libraries, which include modules for modelling the pump-laser absorption profile using a transfer matrix model, heat-transport via diffusive N-temperature models, the strain response and dynamical x-ray scattering, which can also include magnetic scattering.
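To illustrate the principle of an LCM without reference to a specific implementation, the following minimal Python sketch integrates a chain of identical masses coupled by harmonic springs, with the laser-induced stress encoded as a sudden change of the spring equilibrium lengths (the ``spacer sticks'' of Fig.~\ref{fig:III_toolbox_simulation}(b)); all parameters are dimensionless placeholders, and this sketch is not the \textsc{udkm1Dsim} implementation:
\begin{verbatim}
import numpy as np

# Minimal linear-chain-model sketch: N identical masses coupled by springs;
# the external stress enters as extra equilibrium lengths d_eq near the
# surface. All parameters are dimensionless placeholders.

N, m, k, dt = 400, 1.0, 1.0, 0.01
u = np.zeros(N)                              # displacements
v = np.zeros(N)                              # velocities
d_eq = np.zeros(N - 1)                       # extra equilibrium length ("spacer")
d_eq[:100] = 0.01 * np.exp(-np.arange(100) / 30.0)   # exponential excitation

def forces(u):
    stretch = np.diff(u) - d_eq              # elongation relative to new length
    f = np.zeros_like(u)
    f[:-1] += k * stretch                    # spring acts on the left mass
    f[1:]  -= k * stretch                    # and reacts on the right mass
    return f

for _ in range(5000):                        # velocity-Verlet integration
    a = forces(u) / m
    u += v * dt + 0.5 * a * dt**2
    v += 0.5 * (a + forces(u) / m) * dt

strain = np.diff(u)                          # discrete analogue of eta_33
\end{verbatim}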
In the following, we display and discuss the modeled laser-driven strain response for a generic sample structure that consists of an opaque transducer with thickness $L_1$ and a transparent detection layer of thickness $L_2$ grown on a semi-infinite, transparent substrate. Fig.~\ref{fig:III_toolbox_simulation} illustrates the sample structure and the space- and time-dependent results for the terms in the inhomogeneous elastic wave equation that is solved numerically using the \textsc{udkm1Dsim} code \cite{schi2014, schi2021}. The strain response can be visualized by an LCM as shown in Fig.~\ref{fig:III_toolbox_simulation}(b), which serves as a mechanistic analog for the time-dependent elastic response of the bilayer structure sketched in Fig.~\ref{fig:III_toolbox_simulation}(a). The time-dependent position of the masses in the LCM visualizes the displacement $u_3$ averaged over a $5\,\text{nm}$ length-fraction of the sample, where the elongation of the adjacent spring encodes the corresponding local elastic stress $\sigma^\text{elastic}_{33}$. The thickness $L_1$ of the metallic transducer is chosen to exceed the optical penetration depth, leading to an inhomogeneous energy deposition shown in Fig.~\ref{fig:III_toolbox_simulation}(a). For simplicity, we assume an instantaneous rise of the laser-induced external stress $\sigma^\text{ext}_{33}$ that is indicated by the appearance of incompressible red spacer sticks compressing the springs of the linear chain directly after excitation at $t = 0$ (see Fig.~\ref{fig:III_toolbox_simulation}(b)). Unbalanced gradients in the external stress drive the elastic response, which consists of an expansion wave starting at the sample surface and a compression within the inhomogeneously excited transducer. The resulting strain pulse propagates at the speed of sound from the transducer through the detection layer into the substrate. Reflections at the interfaces are omitted for clarity by choosing perfect acoustic impedance matching of all constituents. When the strain pulse has passed, the material reaches its new equilibrium position where the residual quasi-static expansion is represented by the spacer sticks' length that varies due to heat diffusion within the heterostructure. Note that the LCM representation of the strain response in Fig.~\ref{fig:III_toolbox_simulation}(b) omits thermal fluctuations and strongly exaggerates the strain response.
The color maps and selected profiles in Fig.~\ref{fig:III_toolbox_simulation}(c) and (d) provide a numerical representation of the elastic stress, the total stress, the driven displacement $u_3$ and the corresponding strain $\eta_{33}$ that appear in the elastic wave equation~\eqref{eq:III_6_1d_wave_equation_strain}. Figures~\ref{fig:III_toolbox_simulation}(c) and (d) illustrate that the finite total stress $\sigma_{33}^\text{tot}$ arising from the unbalanced external stress $\sigma_{33}^\text{ext}$ drives a displacement of the lattice. The rising strain response $\eta_{33}$ induces an elastic stress $\sigma_{33}^\text{elastic}$ that lowers the total stress by partially compensating the external stress within the transducer. After the strain pulse has propagated into the substrate, the elastic stress arising from the induced quasi-static expansion compensates the external stress, resulting in a vanishing total stress that marks the quasi-static state. The resulting strain is given by the derivative of the displacement and displays both the quasi-static expansion and the strain pulse propagating through the heterostructure\footnote{The oscillations in the strain originate from surface vibrations that occur due to the discretization of the strain response and can be relevant for atomically flat samples \cite{chan1997,meln2003}. They are not present in analytic solutions of the continuum elastic wave equation.}.
Fig.~\ref{fig:III_toolbox_simulation}(e) displays the average strain of the transducer and the detection layer that can be inferred from the simulated strain field $\eta_{33}(x_3,t)$ and compared to the strain extracted from the Bragg peak shift in a PUX experiment. While the average strain of the laser-excited transducer contains contributions from the strain pulse and the quasi-static expansion, the strain response of the non-absorbing detection layer is dominated by the propagating strain pulse. The timing of the inflection points in the average strain response depends on the layer thicknesses, whereas the shape of the rising and falling edge in the strain response is related to the stress profile as discussed in \cite{schi2014b,repp2020}. At the delay $t_1=L_1/v_\text{s}$ the strain pulse has propagated through the transducer and the compressive part has fully entered the detection layer. This causes a maximum expansion of the transducer due to the superposition of the quasi-static expansion and the expansive part of the strain pulse as well as a maximum compression of the detection layer. Subsequently, the expansive part also enters the detection layer, which reduces the expansion of the transducer until the strain pulse has completely left the transducer at $2t_1$. The entrance of the expansive part and the propagation of the compressive part from the detection layer into the substrate both result in an overall expansion of the detection layer. When the compressive part of the strain pulse has completely entered the substrate at $t_1+t_2$, the detection layer reaches its maximum expansion, which subsequently decreases as the expansive part of the strain pulse exits towards the substrate. Finally, for $t\geq 2t_1+t_2$ only the quasi-static expansion remains in the detection layer, which varies due to thermal transport.
Modeling the laser-driven strain response via the elastic wave equation is independent of the experimental probing technique, e.g.,\ optical probe pulses or x-ray diffraction. A direct comparison of experimental data with the model, however, requires a weighting of the modeled spatio-temporal strain map $\eta_{33}(x_3,t)$ by a sensitivity function that is specific for the detection method. The center-of-mass evolution of the Bragg peaks observed in PUX experiments is, in many cases, well approximated by the average strain in the probed material. Variations of the crystallinity within a layer may modify the sensitivity and thus require a weighting of the modelled strain map. Inhomogeneous strain profiles lead to peak broadening or even splitting effects \cite{schi2014b} that can be accounted for by explicitly modelling the Bragg peak evolution via kinematical or dynamical x-ray diffraction \cite{schi2014,schi2021}. The presented numerical modelling approach not only helps to rationalize experimental data, but also aids in the design of the sample structure as the amplitude, shape and timing of the detected strain signal can be predicted even for complex heterostructures \cite{pude2020b,zeus2019, matt2022, repp2020}.
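A minimal Python sketch of the layer averaging used to compare a modeled strain map with the Bragg-peak shift observed in PUX is given below; the grid, layer boundaries and the optional sensitivity weighting are hypothetical placeholders:
\begin{verbatim}
import numpy as np

# Sketch: layer-averaged strain from a modeled spatio-temporal strain map,
# approximating the Bragg-peak center-of-mass shift observed in PUX.
# The strain map and layer boundaries are placeholders.

x3  = np.linspace(0, 200e-9, 400)          # depth grid (m)
eta = np.zeros((1000, x3.size))            # eta_33(t, x3) from the model

def average_layer_strain(eta, x3, z_min, z_max, weight=None):
    """Average eta over z_min <= x3 < z_max; 'weight' can encode a
    depth-dependent probe sensitivity (defaults to uniform)."""
    mask = (x3 >= z_min) & (x3 < z_max)
    w = np.ones(mask.sum()) if weight is None else weight[mask]
    return eta[:, mask] @ w / w.sum()

eta_transducer = average_layer_strain(eta, x3, 0, 100e-9)       # cf. Fig. 6(e)
eta_detection  = average_layer_strain(eta, x3, 100e-9, 200e-9)
\end{verbatim}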
\section{\label{sec:IV_examples}Use cases for PUX}
This section is dedicated to the presentation of PUX experiments that illustrate the theoretical concepts discussed in the previous section by utilizing the capabilities of hard x-ray diffraction as a quantitative, material specific probing technique.
In Section~\ref{sec:IV_a_tbfe} we exemplify how the large x-ray penetration depth extends the sensitivity of classical picosecond ultrasonics beyond the near-surface region of a metallic transducer. A quantitative comparison of the strain response of a nanogranular and a continuous FePt film in \ref{sec:IV_b_fept} highlights the importance of the Poisson stresses and geometrical constraints discussed in \ref{sec:III_a_strain_3D} and \ref{sec:III_b_strain_1D}. Signatures of the energy transfer processes discussed in \ref{sec:III_c_heat_transport} are demonstrated in \ref{sec:IV_c_auni} using PUX on a nanoscopic Au/Ni heterostructure with and without an insulating MgO interlayer.
Section~\ref{sec:IV_a_dy} demonstrates the utility of subsystem-specific Gr\"uneisen parameters introduced in \ref{sec:III_d_Gr\"uneisen_stress} for a Dy transducer that exhibits giant magnetic stress contributions, which result in a contraction upon laser-excitation.
\subsection{\label{sec:IV_a_tbfe}Sensing shape and timing of strain pulses in buried layers}
Observing the timing, shape and amplitude of picosecond strain pulses is central to picosecond ultrasonics experiments. Here, we showcase the ability of PUX to track the propagation of strain pulses within an opaque heterostructure consisting of a thick transducer on a thin detection layer \cite{zeus2019}. We illustrate that the presence of a transparent capping layer on top of the transducer leads to the emission of unipolar strain pulses towards the sample surface and a pronounced asymmetry of the bipolar strain-pulse that propagates towards the substrate.
In particular, we discuss the strain response of heterostructures that consist of a few hundred nm thick TbFe$_\mathrm{2}$ transducer on top of a $50\,\text{nm}$ thin Nb detection layer grown on an Al$_\mathrm{2}$O$_\mathrm{3}$ substrate, which are capped with an amorphous, transparent SiO$_\mathrm{2}$ layer with a variable thickness ranging from $0$ to $1100\,\text{nm}$ as sketched in Fig.~\ref{fig:IV_tbfe_data}(d--f). The samples are excited by femtosecond laser pulses with an optical penetration depth of $30\,\text{nm}$ in the TbFe$_\mathrm{2}$ layer and the resulting strain responses observed via PUX are depicted in Fig.~\ref{fig:IV_tbfe_data}(a--c). Further details on the sample growth and experimental parameters are given in \cite{zeus2019}.
\begin{figure}[t!]
\centering
\includegraphics[width =1\columnwidth]{Figures/fig_7_tbfe_data_v3.pdf}
\caption{\textbf{Sensing propagating strain waves within an opaque heterostructure with a variable capping:} Transient strain of the metallic TbFe$_2$ transducer (yellow) and Nb detection layer (blue) for an uncapped structure (a), a structure with a transparent $550\,\text{nm}$ SiO$_2$ capping (b) and a $1100\,\text{nm}$ SiO$_\mathrm{2}$ capping (c). Solid lines represent the modeled strain response obtained with the \textsc{udkm1Dsim} toolbox \cite{schi2014} using parameters given in \cite{zeus2019}. Panels (d) to (f) sketch the corresponding sample structures that are laser-excited from the TbFe$_\mathrm{2}$ side. A comparison of the strain response shows that the SiO$_\mathrm{2}$ capping layer separates the bipolar strain pulse emitted by the uncapped transducer (a) into an asymmetric bipolar strain pulse and a unipolar strain pulse (b, c) whose delay with respect to the bipolar pulse is set by the thickness of the capping layer and its sound velocity.}
\label{fig:IV_tbfe_data}
\end{figure}
The femtosecond laser pump-pulse deposits energy into the near-surface region, inducing an expansion of the $450\,\text{nm}$-thick TbFe$_\mathrm{2}$ transducer of the uncapped structure, which launches a bipolar strain pulse with a leading compression and a trailing expansion propagating through the metal heterostructure into the substrate as discussed in Sec.~\ref{sec:III_e_numerical_simulation}. Fig.~\ref{fig:IV_tbfe_data}(a) displays the corresponding rise of the average strain of the TbFe$_\mathrm{2}$ layer and the bipolar shape of the strain response of the Nb detection layer that are extracted from the diffraction peak shifts. While the strain pulse travels at the speed of sound, the deposited energy density causing a quasi-static expansion of the transducer diffuses into the Nb detection layer on a nanosecond timescale. The different propagation speeds of sound and heat thus yield a background-free signature of the strain pulse in the buried Nb detection layer. The average strain in Nb is determined by the integral over the part of the strain pulse within the Nb layer. When the compressive part enters the Nb layer at $100\,\text{ps}$, the layer exhibits a negative average strain until $125\,\text{ps}$, when the trailing tensile part has entered the layer and the compressive part has progressed towards the substrate.
Subsequently, the tensile part dominates and the resulting positive strain recedes as the strain pulse propagates into the substrate. The average strain of the Nb layer detected via PUX is thus determined by the spatial shape of the bipolar strain pulse convoluted with the propagation of the strain pulse through the layer.
Adding a transparent SiO$_\mathrm{2}$ layer on top of a $350\,\text{nm}$ TbFe$_\mathrm{2}$ transducer modifies both the shape and timing of the emitted strain pulses as shown in Fig.~\ref{fig:IV_tbfe_data}(b) and (c). As before, the expansion of the TbFe$_\mathrm{2}$ launches a bipolar strain pulse propagating towards the Nb detection layer. In addition, a unipolar compression pulse is launched into the SiO$_\mathrm{2}$ capping. Consequently, the amplitude of the tensile part of the initial bipolar strain pulse detected in the buried Nb layer is reduced due to conservation of elastic energy. Subsequently, we observe a train of unipolar strain pulses with alternating sign and a period that is given by the propagation time of the strain pulse back and forth through the capping layer. The presence of the unipolar strain pulse inside the thick TbFe$_\mathrm{2}$ layer is heralded by small plateau-like signatures in the measured TbFe$_\mathrm{2}$ strain, which is rather insensitive to the shape of the strain pulses in comparison to the short, pronounced peaks that occur in the strain response of the thin Nb detection layer. It is interesting to note that if we extrapolated the results in Fig.~\ref{fig:IV_tbfe_data}(b) and (c) with a cap layer thickness of $550\,\text{nm}$ and $1100\,\text{nm}$, respectively, to a cap layer thickness of zero, the positive unipolar strain pulse would superimpose onto the asymmetric bipolar pulse in such a way that the symmetric bipolar pulse from Fig.~\ref{fig:IV_tbfe_data}(a) is recovered. One can therefore rationalize the occurrence of the tensile part of the bipolar strain pulse shape by an instantaneous and complete reflection of a compression wave driven at the transducer-air interface.
\begin{figure}[b!]
\centering
\includegraphics[width =1\columnwidth]{Figures/fig_8_tbfe_simulation_v2.pdf}
\caption{\textbf{Modeled strain map:} Spatio-temporal strain of the sample structure without SiO$_\mathrm{2}$ capping (a) and with $550\,\text{nm}$ capping (b) simulated using the modular \textsc{udkm1Dsim} toolbox \cite{schi2014}. The spatio-temporal strain displays the propagation of the strain pulses through the heterostructure and the distribution of heat via heat diffusion indicated by the slowly growing expanded part of the TbFe$_\mathrm{2}$ transducer.}
\label{fig:IV_tbfe_simulation}
\end{figure}
We modelled the strain response of the TbFe$_\mathrm{2}$ transducer and the Nb detection layer for all heterostructures using the approach introduced in Section~\ref{sec:III_e_numerical_simulation} with parameters given in \cite{zeus2019}. The modeled average layer strains shown as solid lines in Fig.~\ref{fig:IV_tbfe_data} match the experimental data indicated by symbols. Moreover, the modelling yields detailed spatio-temporal maps of the strain inside the heterostructures without and with SiO$_\mathrm{2}$ capping layer which are depicted in Fig.~\ref{fig:IV_tbfe_simulation}(a) and (b), respectively. The modeled strain maps provide a detailed depiction of the strain profile within the structure at any given time, which extends the insights from the average layer strain obtained via PUX experiments. Fig.~\ref{fig:IV_tbfe_simulation}(b) shows that the unipolar strain pulse launched into the capping layer inverts its sign upon reflection at the sample surface and that a considerable fraction of the returning wave is reflected at the SiO$_\mathrm{2}$-TbFe$_\mathrm{2}$-interface due to an acoustic impedance mismatch between these two materials. This rationalizes the multiple echoes of unipolar pulse that consecutively traverse the structure as revealed in Fig.~\ref{fig:IV_tbfe_data}(b) and (c). Modeling the strain allows us to precisely calibrate the layer thicknesses from the timing of the detected strain pulses. However, the modeled strain map does not only help to rationalize the strain observed in a PUX experiment but can furthermore guide and support the interpretation of strain signatures in all-optical data, e.g.\ from time-resolved magneto-optics and reflectivity \cite{peze2016, zeus2019, parp2021}.
Overall, we find that the observation of the strain pulses in a thin crystalline detection layer is advantageous compared to the analysis of the strain response of a thick transducer layer. The separation of the detection layer from the laser-excitation region separates the strain signatures of sound and heat in the time domain which can be exploited for the detection of strain pulses with an unconventional shape, as shown in Section~\ref{sec:IV_a_dy}. The current example shows that in principle only the detection layer needs to be crystalline to observe the strain pulses, which extends the applicability of PUX to a large class of heterostructures and transducer materials.
\subsection{\label{sec:IV_b_fept}Quantifying a morphology-dependent strain response}
Here, we compare the qualitative and quantitative picosecond strain response of thin films for various in-plane expansion constraints that are shown to affect the out-of-plane strain response. Different sample morphologies change the nature of the ultrafast strain response from one-dimensional in the case of a homogeneous film to three-dimensional in the case of nanograins even if they are attached to a substrate. Describing the ultrafast strain response of granular morphologies requires a model for the three-dimensional elastic response in accordance with Eq.~\eqref{eq:III_2_wave_equation_without_shear} which includes Poisson stress contributions that couple in-plane and out-of-plane motion.
Specifically, we discuss the picosecond strain response of three $10\,\text{nm}$ thin crystalline $\text{L1}_0$-phase FePt layers having their tetragonal c-axis oriented out-of-plane. Interestingly, the $\text{L1}_0$-phase of FePt exhibits a vanishing expansion along its c-axis ($\alpha_3 \approx 0$ \cite{tsun2004,repp2020b}) under near-equilibrium heating conditions and a distinct expansion along the in-plane directions ($\alpha_1 = \alpha_2 \approx 9 \cdot 10^{-6}$) \cite{rasm2005,repp2020b}. Fig.~\ref{fig:IV_comparison_fept} contrasts the laser-induced strain response along the tetragonal c-axis for a continuous and a granular thin film on an MgO substrate as measured via UXRD \cite{repp2018} and a quasi-free-standing granular film transferred to a transmission electron microscopy (TEM) grid measured via ultrafast electron diffraction \cite{reid2018}. The compiled experiments used comparable laser excitation conditions with a fluence of $\approx 6\,\mathrm{mJ/cm^2}$. Both substrate-supported FePt samples, i.e.,\ the continuous and the granular thin film, respond on average by an out-of-plane expansion upon laser-excitation despite the Invar-like behavior in near-equilibrium heating conditions. However, the free-standing grains on the TEM grid exhibit an initial out-of-plane contraction while the lattice expands in-plane (not shown here) \cite{reid2018}. After the coherent strain pulse oscillations have ceased, one observes a nearly vanishing out-of-plane expansion, in agreement with the Invar-like behavior that is expected from quasi-static heating experiments.
\begin{figure}[tbh!]
\centering
\includegraphics[width = 1\columnwidth]{Figures/fig_9_fetpt_strain.pdf}
\caption{\textbf{Morphology-dependent strain response of $10\,\text{nm}$ FePt specimens:} Comparison of the laser-induced out-of-plane strain response of $\text{L1}_0$-phase FePt for three different in-plane boundary conditions: (a) continuous thin film, (b) FePt nanograins on an MgO substrate, (c) free-standing FePt grains on a TEM grid. The data in panels (a) and (b) are from UXRD measurements \cite{repp2018} at an incident laser fluence of $\approx 6\,\mathrm{{mJ}/{cm^2}}$. The data in panel (c) are reproduced from the publication by Reid et al.\ \cite{reid2018} in order to highlight the qualitative differences in the FePt strain response. Reid et al.\ determined the strain response for a comparable laser fluence of $5\,\mathrm{{mJ}/{cm^2}}$ using time-resolved electron diffraction. Solid lines in (a--c) are derived from modeling approaches discussed in the corresponding publications \cite{repp2018, reid2018} and schematic insets depict the sample morphologies. Panels (d--f) illustrate the constrained expansion with respect to the initial condition indicated as a black dashed line.}
\label{fig:IV_comparison_fept}
\end{figure}
The schematic depictions (Fig.~\ref{fig:IV_comparison_fept}(d--f)) adjacent to the data illustrate the hypothesis for the data interpretation that rationalizes the observed behaviors \cite{repp2018, repp2020b}. They sketch the equilibrium film dimensions as a dashed black rectangle and the change in dimensions upon laser excitation in color. In case of the free-standing grains the modeling of the picosecond strain response requires the three-dimensional elastic wave equation Eq.~\eqref{eq:III_2_wave_equation_without_shear} that includes a coupling of the in-plane and out-of-plane strain response. The non-vanishing laser-driven in-plane strains introduce a time-dependent out-of-plane Poisson stress contribution in addition to the laser-induced out-of-plane stresses from electrons, phonons and spin-excitations \cite{repp2020b, reid2018}. The rising in-plane strain induces a contractive out-of-plane Poisson stress that dominates the strain response of the free-standing grains. When the coherent motion has ceased at $\approx 20\,\text{ps}$, the remaining quasi-static expansion is given by the near equilibrium thermal expansion coefficient $\alpha_{3}\approx 0$ according to Eq.~\eqref{eq:III_3_general_quasi_static_strain} as indicated by the out-of-plane Invar behavior and an in-plane expansion.
In contrast, the strictly one-dimensional picosecond strain response of the continuous film along the out-of-plane direction can be described by the one-dimensional elastic wave equation~\eqref{eq:III_6_1d_wave_equation_strain} derived in Section~\ref{sec:III_b_strain_1D}. Because the homogeneous laser excitation of a thin film lacks in-plane stress gradients, there is no in-plane motion on ultrashort timescales. The absence of the Poisson stress contributions modifies the out-of-plane thermal expansion coefficient on ultrafast timescales according to Eq.~\eqref{eq:III_9_alpha_ultrafast}, which in the case of FePt simplifies to:
\begin{align}
\label{eq:IV_1_FePt}
\alpha_{3}^\text{uf} &= \alpha_{3} + 2 \frac{c_{3311}}{c_{3333}}\alpha_{1} = \alpha_{3} + 2 \frac{\nu}{1-\nu} \alpha_{1}\,.
\end{align}
Assuming a Poisson ratio $\nu = 1/3$, as is common for metals, one finds that $\alpha_{3}^\text{uf}$ is equal to $\alpha_{1}$ and thus positive despite the vanishing $\alpha_{3}$. According to Eq.~\eqref{eq:III_5_definition_Gr\"uneisen_constant} this is related to a positive out-of-plane Gr\"uneisen parameter that translates deposited energy density to a positive out-of-plane stress $\sigma_{33}^\text{ext}$ that is fully compensated by the Poisson stress in thermal equilibrium.
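A brief numerical illustration of the relation above, using the expansion coefficients quoted in the text and an assumed Poisson ratio of $\nu = 1/3$, reads:
\begin{verbatim}
# Numerical illustration: ultrafast out-of-plane expansion coefficient of
# L1_0 FePt in the thin-film geometry, with the expansion coefficients quoted
# in the text and an assumed Poisson ratio of 1/3.
alpha_3, alpha_1, nu = 0.0, 9e-6, 1.0 / 3.0

alpha_3_uf = alpha_3 + 2 * nu / (1 - nu) * alpha_1
print(alpha_3_uf)   # 9e-6: alpha_3_uf ~ alpha_1 despite the vanishing alpha_3
\end{verbatim}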
The nanograins attached to a substrate represent an intermediate case between the freestanding grains and the continuous film morphology. Accordingly, they exhibit an intermediate behavior with a short-lived contraction that is followed by a similar, albeit smaller out-of-plane thermal expansion compared to the continuous FePt film. The clamping to the substrate partially hinders the in-plane expansion of the FePt grains as sketched in Fig.~\ref{fig:IV_comparison_fept}(e). The nature of the strain response for this specimen is intrinsically three-dimensional and further complicated by the additional boundary conditions introduced by the clamping to the substrate, by a carbon filling in between the FePt grains and the grain size distribution. Finite-element modeling is however able to reproduce the observed short-lived contraction followed by an out-of-plane expansion for these boundary conditions \cite{repp2020b}. The partially allowed in-plane expansion induces a Poisson stress contribution that reduces the observed out-of-plane expansion in the quasi-static limit for $t \geq 20\,\mathrm{ps}$ compared to the continuous thin film but does not lead to the Invar-like behavior of the free-standing grains.
This use case illustrates the crucial role of the constraints for the in-plane expansion on the out-of-plane strain response. Utilizing the picosecond strain response for a quantitative determination of the laser-induced energy density or even temperature change therefore requires a careful consideration of the in-plane boundary conditions in the sample. Application of the near-equilibrium linear thermal expansion coefficient in the common thin film geometry overestimates the laser-induced temperature change, due to the absence of the Poisson stress contributions on ultrafast timescales. Nanoscale granularity drastically influences the amplitude of the out-of-plane strain, not only for samples purposely grown as lateral nanostructures but also for thin granular films or islands that unintentionally form during sample growth. However, if the Poisson effect is taken into account, time-resolved strain measurements via PUX may serve for tracking ultrafast energy transfer processes and enable thermometry applications in nanoscale structures.
\subsection{\label{sec:IV_c_auni}Identifying non-equilibrium heat transport between ultrathin layers}
Here, we showcase the ability of PUX to observe the non-equilibrium energy transfer within nanoscale heterostructures down to few-unit-cell thickness. Time-resolved, quantitative strain measurements extend the applicability of thermal expansion as a proxy for energy content and temperature changes from quasi-equilibrium to the picosecond timescale. The shape of the picosecond strain pulses provides supporting insights into the spatio-temporal stress profile that is determined by the energy density distribution and heat transfer within the structure which we quantify by modelling the strain response.
In the following, we discuss the ultrafast energy transport within a metallic bilayer of $5\,\text{nm}$ Au and $13\,\text{nm}$ Ni in the framework of a diffusive 2TM (Eq.~\eqref{eq:III_heat_diffusion_2TM}) that we previously applied to capture the energy transfer and resulting strain response in similar Au-Ni \cite{pude2018,herz2022} and Au-Fe bilayers \cite{matt2022} across a large range of layer thicknesses. The picosecond strain response of both Au and Ni serves as a reference for the heat transfer that is dominated by electronic transport within the first hundreds of femtoseconds. In order to experimentally verify this crucial electron energy transport we compare the strain response of the Au-Ni bilayer to the strain response of a Au-MgO-Ni trilayer, where the $8\,\text{nm}$ thin insulating MgO interlayer inhibits electronic energy transfer between the metal layers.
\begin{figure}[t!]
\centering
\includegraphics[width = 1\columnwidth]{Figures/fig_10_auni_simulation_v2.pdf}
\caption{\label{fig:IV_auni_sim} \textbf{Subsystem-specific results from modelling the strain response of a thin Au-Ni bilayer excited by 400\,nm pump pulses using a diffusive 2TM:} (a) spatio-temporal electron temperature increase $\Delta T^\text{el}$, (b) energy density increase in the electron system $\Delta \rho_Q^\text{el}$, (c) phonon temperature increase $\Delta T^\text{ph}$ and (d) phonon energy density increase $\Delta \rho_Q^\text{ph}$. Panel (e) displays the resulting laser-induced stress that drives the strain response depicted in panel (f). (g) Layer-selective spatial averaging yields the average strain of Au and Ni (solid lines) in reasonable qualitative and quantitative agreement with the experimental strain response (dots). The modelling illustrates the ultrafast electronic energy transfer, which is evident from the expansion of the Ni that compresses the Au, which in turn heats up only within tens of ps as discussed in \cite{pude2018,herz2022}.}
\end{figure}
Fig.~\ref{fig:IV_auni_sim}(g) displays the picosecond strain response of the Au (yellow dots) and the Ni (grey dots) layer of the Au-Ni bilayer to a $100\,\text{fs}$ laser pulse with a central wavelength of $400\,\text{nm}$ observed via PUX. The modeling process yields spatio-temporal electron and phonon temperature maps and the corresponding energy densities that are depicted in Fig.~\ref{fig:IV_auni_sim}(a--d). The electron temperature map in Fig.~\ref{fig:IV_auni_sim}(a) displays an equilibration of the electron temperature across the metal stack within the first $200\,\text{fs}$ after laser excitation. The flat 3d-bands of Ni close to the Fermi level provide a large density of states, which results in a large electronic specific heat and also a large electron-phonon coupling strength compared to Au \cite{lin2008}. The thermalization of the Au and Ni electrons thus rapidly transfers most of the energy to the Ni electrons as visualized by the electronic energy density map in Fig.~\ref{fig:IV_auni_sim}(b). Subsequently, the energy is transferred to phonons in Ni via electron-phonon coupling within the first $1\,\text{ps}$ to $2\,\text{ps}$, which lowers the overall electron temperature and the energy density stored in electron excitations. The phonon temperatures of both layers equilibrate only on the timescale of tens of picoseconds as discussed in \cite{pude2018,herz2022} and illustrated by Fig.~\ref{fig:IV_auni_sim}(c).
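To make this modelling step more tangible, the following minimal sketch integrates a diffusive 2TM in the spirit of Eq.~\eqref{eq:III_heat_diffusion_2TM} for a Au-Ni bilayer with an explicit finite-difference scheme. All material parameters, the absorption profile and the numerical settings are illustrative placeholders and not the values underlying the fits in Fig.~\ref{fig:IV_auni_sim}.
\begin{verbatim}
import numpy as np

# Minimal explicit finite-difference sketch of a diffusive two-temperature
# model for a Au(5 nm)/Ni(13 nm) bilayer. All values are placeholders.
nm = 1e-9
dx, dt = 1.0*nm, 2e-17                 # grid spacing [m], time step [s]
x = np.arange(0.5*nm, 18*nm, dx)
au = x < 5*nm                          # True inside the Au layer

gamma_e = np.where(au, 68.0, 1065.0)   # C_e = gamma_e*T_e  [J m^-3 K^-2]
C_ph    = np.where(au, 2.5e6, 3.9e6)   # phonon heat capacity [J m^-3 K^-1]
k_e     = np.where(au, 318.0, 91.0)    # electronic conductivity [W m^-1 K^-1]
g_ep    = np.where(au, 2.5e16, 8.0e17) # e-ph coupling [W m^-3 K^-1]

T_e = np.full(x.size, 300.0)
T_ph = np.full(x.size, 300.0)

def div_k_grad(T, k):
    """d/dx (k dT/dx) with zero-flux boundaries."""
    flux = np.zeros(T.size + 1)
    flux[1:-1] = 0.5*(k[1:] + k[:-1])*(T[1:] - T[:-1])/dx
    return (flux[1:] - flux[:-1])/dx

def source(t):
    """Absorbed pump power density: ~50 fs pulse, mostly absorbed in Au."""
    return 5e20*np.where(au, 1.0, 0.3)*np.exp(-((t - 2e-13)/5e-14)**2)

for n in range(int(2e-12/dt)):         # integrate the first 2 ps
    t = n*dt
    C_e = gamma_e*T_e                  # electronic heat capacity
    ep = g_ep*(T_e - T_ph)             # local e-ph energy exchange
    T_e  += dt/C_e*(div_k_grad(T_e, k_e) - ep + source(t))
    T_ph += dt/C_ph*ep                 # phonon diffusion neglected here

print("T_e, T_ph in Au:", T_e[au].mean(), T_ph[au].mean())
print("T_e, T_ph in Ni:", T_e[~au].mean(), T_ph[~au].mean())
\end{verbatim}
In a realistic simulation, temperature-dependent parameters, the calculated optical absorption profile and interface conductances would replace these placeholders, and the resulting energy densities would enter the stress calculation discussed next.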
The rapidly rising energy density in the electrons and the phonons of Ni induces a rapidly rising expansive stress via the respective positive Gr\"uneisen parameters (see Section~\ref{sec:III_d_Gr\"uneisen_stress}) \cite{nix1941,wang2008} which remains unbalanced at the Au-Ni interface and drives a rapid expansion of the Ni layer that compresses the adjacent Au layer.
The compression pulse propagates through the Au layer to the sample surface where it is turned into an expansion pulse upon reflection. This propagation of the strain pulse through the heterostructure superimposed with the quasi-static expansion of the layers is displayed in the strain map in Fig.~\ref{fig:IV_auni_sim}(f) that is obtained by solving the elastic wave equation~\eqref{eq:III_6_1d_wave_equation_strain} for the modelled external stress $\sigma_{33}^\text{ext}(x_3,t)$. Averaging the spatio-temporal strain over the Au and the Ni layer, respectively, enables the comparison of the model with the experimental data for the average strains of the layers shown in Fig.~\ref{fig:IV_auni_sim}(g). The bipolar feature in the Au strain in combination with the rapidly rising expansion of Ni indicates the initial compression of the Au layer due to the dominant energy transfer into Ni. In addition, the slowly rising expansion of Au on tens of picoseconds indicates the surprisingly slow equilibration of both metal layers due to a backward transfer of energy from Ni into the Au phonon system via conduction electrons and weak electron-phonon coupling in Au and via phonon transport across the Ni-Au interface. For heterostructures with Au thinner than $\approx 10\,\text{nm}$ one finds that the energy transfer between the two metals occurs predominantly via phonons \cite{herz2022}. The Au layer reaches its maximum quasi-static strain and thus temperature at $\approx 100\,\text{ps}$ after which the entire bilayer cools towards the MgO substrate via diffusive phonon transport, which results in the overall decrease of the strain in Fig.~\ref{fig:IV_auni_sim}(g).
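The conversion of such a stress profile into layer-averaged strain transients can be sketched just as compactly. The snippet below integrates a one-dimensional elastic wave equation for a displacement field driven by an external stress that rises predominantly in the Ni layer, with the sign convention that a positive external stress drives expansion; densities, elastic constants, the stress profile and the simple impedance condition mimicking the substrate are assumptions for illustration rather than the parameters entering Eq.~\eqref{eq:III_6_1d_wave_equation_strain}.
\begin{verbatim}
import numpy as np

# Sketch: 1D elastic wave equation driven by a stress rising in the Ni layer;
# layer averaging yields the transient strain of Au and Ni. Placeholders only.
nm, ps = 1e-9, 1e-12
dx = 0.25*nm
x = np.arange(0.5*dx, 18*nm, dx)       # 0-5 nm Au on 5-18 nm Ni
au = x < 5*nm
rho = np.where(au, 19300.0, 8900.0)    # mass densities [kg m^-3]
C33 = np.where(au, 200e9, 300e9)       # elastic constants [Pa]
dt = 0.5*dx/np.sqrt(C33/rho).max()     # CFL-limited time step
Z_sub = np.sqrt(9000.0*300e9)          # substrate impedance (placeholder)

def sigma_ext(t):
    """Expansive stress rising within ~1 ps, ten times larger in Ni."""
    return 1e8*np.where(au, 0.1, 1.0)*(1.0 - np.exp(-t/ps))

u = np.zeros(x.size + 1)               # displacement on cell boundaries
v = np.zeros(x.size + 1)               # velocity on cell boundaries
rho_n = np.concatenate(([rho[0]], 0.5*(rho[1:] + rho[:-1]), [rho[-1]]))

for n in range(int(20*ps/dt)):
    eta = np.diff(u)/dx                # strain in each cell
    sigma = C33*eta - sigma_ext(n*dt)  # total stress in each cell
    # free surface at the front, radiation into the substrate at the back
    v += dt*np.diff(np.concatenate(([0.0], sigma, [-Z_sub*v[-1]])))/(dx*rho_n)
    u += dt*v

eta = np.diff(u)/dx
print("strain after 20 ps  Au: %.2e  Ni: %.2e" % (eta[au].mean(), eta[~au].mean()))
\end{verbatim}
Storing the layer averages at every time step yields transient strain curves that can be compared to the experimentally observed Bragg peak shifts of the individual layers.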
It is instructive to consider further excitation scenarios and sample structures that support and extend our findings. Fig.~\ref{fig:IV_auni}(a) compares the strain response of the Au and the Ni layer to an ultrashort laser pulse excitation with a central wavelength of $400\,\text{nm}$ and $800\,\text{nm}$. Fig.~\ref{fig:IV_auni}(c) and (d) display the corresponding spatial absorption profiles calculated from literature values for the optical constants using a transfer matrix model \cite{khor2014,esch2014,pude2018}. While the absorption in Ni is essentially independent of the excitation wavelength, the Au absorption is strongly increased at $400\,\text{nm}$. However, the strain response in the bilayer does not depend on where the optically deposited energy is absorbed except for an overall scaling factor, because the heat energy in the electron system is rapidly distributed throughout the metallic heterostructure. The strong electron-phonon coupling in Ni localizes the energy in Ni phonons that dominate the overall strain response. The spatial profile of the source term $S(x_3,t)$ in the diffusive 2TM~\eqref{eq:III_heat_diffusion_2TM} therefore has only a minor influence on the resulting strain response.
\begin{figure}[tbp!]
\centering
\includegraphics[width = \columnwidth]{Figures/fig_11_auni_data.pdf}
\caption{\label{fig:IV_auni} \textbf{Strain response for wavelength-dependent energy deposition profiles in bi- and trilayer Au-Ni samples:} Comparison between the strain response of a sample without insulating barrier (a) and with insulating MgO barrier (b) subjected to laser pulses with $800\,\text{nm}$ (red lines) and $400\,\text{nm}$ (blue lines) central wavelength. Solid lines are spline interpolations to the data that serve as a guide to the eye. Panels (c--f) represent the modeled absorbance per length $a$ in the bi- and trilayer structure from a transfer matrix model that uses the refractive indices for similarly thin Au films \cite{yaku2019} for $800\,\text{nm}$ (red areas) and $400\,\text{nm}$ (blue areas) excitation.}
\end{figure}
This changes if we insert an insulating MgO layer between Au and Ni that inhibits the ultrafast electronic energy transfer between the layers (Fig.~\ref{fig:IV_auni}(b)). Now the Au and Ni layers both expand individually due to the optical excitation of electrons and subsequent local electron-phonon coupling. The absorbed energy density in the Au and Ni layers (Fig.~\ref{fig:IV_auni}(e) and (f)) determines the stress profile, which sets the shape and amplitude of the picosecond strain pulses and the quasi-static expansion of the individual layers within the first picoseconds. Phonon heat transport subsequently equilibrates the temperature in the trilayer within tens of picoseconds and cools it towards the substrate on a timescale of hundreds of picoseconds. In the trilayer, both the coherent phonon excitation (picosecond strain pulses) and the incoherent phonon population (thermal energy) depend drastically on the excitation wavelength as seen in Fig.~\ref{fig:IV_auni}(b). In line with the increased absorption of the Au layer for 400\,nm excitation, Au shows a strongly enhanced quasi-static expansion around $10\,\text{ps}$ in comparison to $800\,\text{nm}$ excitation. The strain oscillation signature of the coherently driven picosecond strain pulse observed at $4\,\text{ps}$ in Au, which is attributed to the compression pulse launched by the Ni expansion, also heralds the varied energy distribution. In comparison to the bilayer, the compression signature is much less pronounced in the trilayer where the electron energy transfer from Au is prohibited. Moreover, for $400\,\text{nm}$ excitation the laser-induced stress of the Au and Ni layers is almost equal as indicated by the similar quasi-static strain around $10\,\text{ps}$ to $20\,\text{ps}$. Accordingly, the amplitudes of the strain pulses launched by Au and Ni become comparable and the oscillatory signatures in the early Au strain response essentially cancel.
This use case thus displays various signatures of energy transfer in nanoscale heterostructures and illustrates that the insertion of an insulating interlayer suppresses the electronic transport that is key for the surprisingly slow thermal equilibration in similar structures \cite{pude2018,matt2022,herz2022}. Ultrafast electronic energy transport within metallic heterostructures has been reported and utilized previously in various all-optical experiments \cite{wang2012,choi2014,shin2020}. However, the layer dimensions often need to be larger than the optical skin depth in the involved metals in order to unequivocally attribute the experimental signals to the layers. The layer-specificity of PUX overcomes this limitation and thus enables such investigations on few-nm thin films.
\subsection{\label{sec:IV_a_dy} Detecting ultrafast negative thermal expansion}
In this section, we discuss the strain response of a magnetostrictive transducer which exhibits NTE caused by a contractive stress originating from magnetic excitations. Depending on the temperature, i.e.,\ the initial magnetic state, we observe a laser-induced expansion or contraction of the transducer, which we identify via Bragg peak shifts. The strain response of a buried detection layer reveals that parts of the inhomogeneously excited transducer expand while others contract upon laser excitation. We apply the Gr\"uneisen concept to individually treat the stress contributions of electrons, phonons and magnetic excitations via subsystem-specific Gr\"uneisen parameters extracted from near-equilibrium thermal expansion experiments. Based on the strain response of the transducer and the detection layer, which encodes the shape of the strain pulse, we extract the spatio-temporal subsystem-specific stress contributions. The proportionality of energy and stress in the Gr\"uneisen concept thus provides insight into energy transfer processes between the subsystems.
Specifically, we discuss the picosecond strain response of a heterostructure that consists of an $80\,\text{nm}$ Dy transducer embedded between a Y capping and a Y buffer layer on top of a $100\,\text{nm}$ buried Nb detection layer (Fig.~\ref{fig:IV_dy_transient_strain}(a)). Below its Néel temperature ($T_\text{N}=180\,\text{K}$) the rare-earth Dy transducer hosts a helical antiferromagnetic (AFM) order of its large localized 4f-magnetic moments along the c-axis of its hexagonal unit cell, which is oriented along the out-of-plane direction. At its Curie temperature ($T_\text{C}=60\,\text{K}$) the Dy layer undergoes a first-order phase transition below which the magnetic order becomes FM \cite{dume1996}.
\begin{figure}[tb!]
\centering
\includegraphics[width = 1\columnwidth]{Figures/fig_13_dy_data_v2.pdf}
\caption{\textbf{Temperature-dependent picosecond strain driven by the laser-excited Dy transducer:} The investigated heterostructure sketched in (a) contains an opaque Dy transducer and a buried Nb detection layer. Panels (b) and (c) display the picosecond strain response of Dy and Nb at sample temperatures above and below $T_\text{N}=180\,\text{K}$ for a fixed laser fluence of $7.2\,\mathrm{mJ/cm^2}$. Lowering the temperature below $T_\text{N}$ changes the response of Dy from expansive to contractive and modifies the shape of the emitted picosecond strain pulse that is detected in the Nb layer. Solid lines depict the modelling results as reported in a previous publication \cite{repp2020}.}
\label{fig:IV_dy_transient_strain}
\end{figure}
Figures~\ref{fig:IV_dy_transient_strain}(b) and (c) display the transient average strain of the Dy transducer and Nb detection layer, respectively, extracted from the shift of their material-specific Bragg peaks for a fixed laser fluence of $7.2\,\mathrm{mJ/cm^2}$ \cite{repp2020}. Measurements at selected temperatures above and below $T_\text{N}=180\,\text{K}$ compare the response for initial states with and without magnetic order, which directly shows that the laser-induced disorder of the spin system provides a contractive magnetic stress. The Dy transducer discussed in this section serves as a representative of the class of heavy rare-earth elements. We obtain similar findings for Gd \cite{koc2017} and Ho \cite{pude2019} in their respective FM and AFM phases.
In the paramagnetic (PM) phase at $250\,\text{K}$, we observe the conventional strain response of a laser-excited metallic transducer and a buried detection layer (see Sec.~\ref{sec:III_e_numerical_simulation}). The optical excitation of electrons in Dy within the optical penetration depth of $\approx 25\,\text{nm}$ and the subsequent energy transfer to phonons via electron-phonon coupling induces a rapidly rising expansive electron and phonon stress. This drives a bipolar strain pulse with a leading compression propagating from the Y-Dy interface through the Dy transducer into the Nb detection layer. In addition, the laser-induced stress gives rise to an expansion of the Dy layer reaching its maximum at $\approx 29\,\text{ps}$ at the time when the compressive part has completely left the Dy layer (Fig.~\ref{fig:IV_dy_transient_strain}(b)). Partial back reflections at interfaces cause the damped oscillations in the strain response of Dy that superimpose with the quasi-static expansion, which decays on a nanosecond timescale due to thermal transport into the buried Nb layer.
The entrance of the leading compression into the unexcited Nb layer leads to a negative average strain in Nb, while the exit of the compression and the entrance of the tensile part of the strain pulse cause an expansion (Fig.~\ref{fig:IV_dy_transient_strain}(c)). In total, the bipolar strain response of the Nb layer within the first picoseconds maps out the spatial stress profile within the inhomogeneously excited Dy transducer, which determines the spatial profile of the propagating strain pulse according to Eq.~\eqref{eq:III_6_1d_wave_equation_strain}. Finally, the quasi-static expansion of Nb due to heating rises on hundreds of picoseconds via the slow diffusive transport of thermal energy from Dy into Nb shown in Fig.~\ref{fig:IV_dy_transient_strain}(c).
The excitation of Dy in its AFM or FM state below $T_\text{N}=180\,\text{K}$ changes the strain response by the presence of an additional contractive stress originating from the excitation of the spin system that adds to the expansive electron-phonon stress. Already slightly below the Néel temperature at $160\,\text{K}$ we observe a reduced expansion of Dy that continuously changes to a pronounced contraction at $31\,\text{K}$ upon further decreasing the initial sample temperature. At the same time, the strain response of Nb changes from a conventional bipolar shape at $250\,\text{K}$ to a unipolar expansion at $31\,\text{K}$. At intermediate temperatures we observe an unconventional composite shape of the strain response consisting of a leading expansion and a bipolar contribution that indicates a complex space and time dependence of the total external stress within the inhomogeneously excited Dy transducer. With decreasing temperature, the leading expansion continuously becomes more pronounced and the amplitude of the bipolar component decreases.
\begin{figure}[tb!]
\centering
\includegraphics[width = 1.00\columnwidth]{Figures/fig_14_dy_grueneisen.pdf}
\caption{\textbf{Separation of the magnetic and phonon stresses in Dy thin films using the Gr\"uneisen concept:} (a) Specific heat of Dy \cite{pech1996} separated into a magnetic, a phonon and a very small electron contribution \cite{repp2020, repp2016}. (b) Thermal expansion of Dy along the hexagonal c- and a-axis. (c) The associated Poisson stress $\sigma_{33}^\text{Poi}$ (grey dots) and external stress $\sigma_{33}^\text{ext}$ (black dots) along the out-of-plane direction, separated into the magnetic $\sigma_{33}^\text{mag}$ (solid blue line) and phononic $\sigma_{33}^\text{ph}$ (solid red line) contribution by the respective Gr\"uneisen parameters determined from the linear dependence of the stress on the energy densities $\rho_Q^\text{mag}$ and $\rho_Q^\text{ph}$ in panels (d) and (e).}
\label{fig:IV_dy_Gr\"uneisen}
\end{figure}
The solid lines in Figs.~\ref{fig:IV_dy_transient_strain}(b) and (c) represent the modeled strain response from a previous work \cite{repp2020} that reproduces the observed temperature dependence and yields subsystem-specific stress contributions. This provides insights into the energy transfer processes among the different degrees of freedom. The modeling approach utilizes the concept of subsystem-specific Gr\"uneisen parameters to individually treat the stress contributions of electrons, phonons and magnetic excitations as sketched in Figure~\ref{fig:III_triangle}. Fig.~\ref{fig:IV_dy_Gr\"uneisen} exemplifies the determination of the subsystem-specific Gr\"uneisen parameters along the out-of-plane direction of the Dy film from heat capacity data and the anisotropic thermal expansion in equilibrium. In Fig.~\ref{fig:IV_dy_Gr\"uneisen}(b) the temperature-dependent out-of-plane lattice strain of the Dy layer along its hexagonal c-axis $\eta_{33}^\text{qs}$ illustrates an anomalous expansion below $180\,\text{K}$ that is concomitant with the onset of magnetic order and results in a NTE during heating. This behavior is universal for the class of the heavy rare-earth elements from Gd to Er, which exhibit a pronounced NTE below their respective magnetic ordering temperatures as summarized in the works of Darnell et al.\ \cite{darn1963,darn1963b,darn1963c}.
Utilizing the anisotropic expansion of Dy $\eta_{kk}^\text{qs}(T)$ ($k=\{ 1,2,3\}$) shown in Fig.~\ref{fig:IV_dy_Gr\"uneisen}(b), we extract the external stress $\sigma_{33}^\text{ext}(T)$ along the out-of-plane direction taking the Poisson stress contribution $\sigma_{33}^\text{Poi}(T)$ into account that arises from the three-dimensional expansion in thermal equilibrium according to Eq.~\eqref{eq:III_3_general_quasi_static_strain}. The hexagonal symmetry of Dy above $T_\text{C}$ simplifies Eq.~\eqref{eq:III_3_general_quasi_static_strain} by $c_{1133}=c_{2233}$ and $\eta_{11}^\text{qs}(T)=\eta_{22}^\text{qs}(T)$ to:
\begin{align}
\begin{split}
\sigma_{33}^\text{ext}(T)&=c_{3333}\eta_{33}^\text{qs}(T) + \sigma_{33}^\text{Poi}(T)\\
&=c_{3333}\eta_{33}^\text{qs}(T) + 2 c_{1133}\eta_{11}^\text{qs}(T) \,,
\end{split}
\label{eq:IV_1_dy_external_stress}
\end{align}
with the Poisson stress contribution arising from the expansion within the hexagonal plane $\eta_{11}^\text{qs}(T)$ \footnote{We assume the same in-plane expansion as for bulk Dy taken from literature \cite{bula1996, cher2008}. However, we shift the strain signature of the first-order phase transition in temperature to account for the lower Curie temperature of our thin film sample in comparison to bulk Dy due to the Y layers, which stabilize the spin helix, i.e.,\ the AFM order \cite{dume1996}.}. Fig.~\ref{fig:IV_dy_Gr\"uneisen}(c) displays the resulting out-of-plane Poisson (grey dots) and external stress that is dominated by phonon excitations well above $T_\text{N}=180\,\text{K}$ due to a vanishing magnetic and a negligible electron heat capacity contribution shown in Fig.~\ref{fig:IV_dy_Gr\"uneisen}(a). The energy density in phonon excitations is given by the integral of the heat capacity contribution, $\rho_Q^\text{ph}=\int C^\text{ph}\,\text{d}T$. The linear dependence of the external stress on this energy density yields the phononic Gr\"uneisen parameter $\Gamma_{3}^\text{ph}=1.1$ (Fig.~\ref{fig:IV_dy_Gr\"uneisen}(d)) that determines the phonon stress contribution over the entire temperature range. The difference between the total external stress and the phonon contribution is the magnetic stress contribution, which vanishes above $T_\text{N}$ since no additional energy density can be deposited in magnetic excitations, as indicated by the finite integral of the magnetic heat capacity contribution. Although $\sigma_{33}^\text{mag}$ is a strongly nonlinear function of the temperature, it depends linearly on the energy density in magnetic excitations $\rho_Q^\text{mag}=\int C^\text{mag}\,\text{d}T$ (Fig.~\ref{fig:IV_dy_Gr\"uneisen}(e)). The resulting large negative magnetic Gr\"uneisen parameter $\Gamma_{3}^\text{mag}=-2.9$ rationalizes that the contractive magnetic stress dominates the smaller phonon-driven expansion below $T_\text{N}$. In Fig.~\ref{fig:IV_dy_Gr\"uneisen}(c) the superposition of the temperature-dependent magnetic (blue solid line) and phononic stress contribution (red solid line) modelled within the Gr\"uneisen approach (solid black line) matches the temperature-dependent total external stress and thus describes the expansion of Dy along the out-of-plane direction in thermal equilibrium.
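The numerical side of this extraction can be condensed to a few lines. The sketch below uses placeholder heat-capacity curves and a mock external stress that merely mimic the qualitative behaviour of Dy in order to illustrate the fitting procedure; it is not the analysis of the actual data of Fig.~\ref{fig:IV_dy_Gr\"uneisen}.
\begin{verbatim}
import numpy as np

# Sketch of the Grueneisen-parameter extraction: subsystem energy densities
# follow from integrating the heat-capacity contributions, the Grueneisen
# parameters from linear fits of stress versus energy density. All curves
# are placeholders that only mimic the qualitative behaviour of Dy.
T = np.linspace(5.0, 300.0, 60)                        # temperature [K]
C_ph  = 1.5e6*(1.0 - np.exp(-T/80.0))                  # phonon heat capacity
C_mag = 2.0e5*np.where(T < 180.0, 1.0 + T/60.0, 0.0)   # vanishes above T_N

def cumint(y, x):
    """Cumulative trapezoidal integral of y(x), starting at zero."""
    return np.concatenate(([0.0], np.cumsum(0.5*(y[1:] + y[:-1])*np.diff(x))))

rho_ph, rho_mag = cumint(C_ph, T), cumint(C_mag, T)    # energy densities
sigma_ext = 1.1*rho_ph - 2.9*rho_mag                   # mock "measured" stress

pm = T > 180.0                                         # paramagnetic regime
Gamma_ph = np.polyfit(rho_ph[pm], sigma_ext[pm], 1)[0] # phonon Grueneisen
sigma_mag = sigma_ext - Gamma_ph*rho_ph                # remaining magnetic part
Gamma_mag = np.polyfit(rho_mag, sigma_mag, 1)[0]       # magnetic Grueneisen
print("Gamma_ph = %.2f, Gamma_mag = %.2f" % (Gamma_ph, Gamma_mag))
\end{verbatim}
With the measured heat capacity and the external stress of Eq.~\eqref{eq:IV_1_dy_external_stress} as input, this procedure corresponds to the determination of $\Gamma_{3}^\text{ph}=1.1$ and $\Gamma_{3}^\text{mag}=-2.9$ quoted above.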
\begin{figure}[tb!]
\centering
\includegraphics[width = 1\columnwidth]{Figures/fig_12_dy_stress_maps.pdf}
\caption{\textbf{Model of the external stress separated into subsystem contributions:} Extracted spatio-temporal electron-phonon stress contribution (a--c), magnetic stress contribution (d--f) and total external stress (g--i) for three representative sample temperatures. The laser-induced magnetic stress contributions at $130\,\text{K}$ and $31\,\text{K}$ display a saturation of the magnetic stress in the strongly excited near-surface region and an increase of the maximum magnetic stress for lower temperatures. The spatio-temporal strain (j--l) driven by the total external stress shows that at $130\,\text{K}$ the saturated magnetic stress results in a non-monotonic total stress that launches a composite strain pulse into Nb due to an expansion of Dy at the front and a contraction at its backside.}
\label{fig:IV_dy_stress_maps}
\end{figure}
Finally, the extracted Gr\"uneisen parameters are used to model the laser-induced strain response of Dy and Nb shown in Fig.~\ref{fig:IV_dy_transient_strain}(b) and (c) \cite{repp2020}. The modelled strain response (solid lines) excellently reproduces the experimental data and thus reveals the spatio-temporal laser-induced stresses separated into an expansive electron-phonon and a contractive magnetic stress contribution shown in Fig.~\ref{fig:IV_dy_stress_maps}. Within the Gr\"uneisen approach introduced in Section~\ref{sec:III_d_Gr\"uneisen_stress} the total laser-induced stress is given by the superposition of the subsystem contributions that are determined by their Gr\"uneisen parameters and the deposited energy densities as stated in Eq.~\eqref{eq:III_subsystem_stress_Gruneisen}. This approach fulfills energy conservation as the local sum of the transient subsystem energy density contributions corresponds to the total laser-deposited energy density (Eq.~\eqref{eq:III_subsystem_energy_density}) which we calibrate in the PM phase in the absence of magnetic excitations. In addition, we determine the finite rise time of the electron-phonon stress from the amplitude and shape of the strain pulse detected by the Nb strain response, which we model by assuming a smaller Gr\"uneisen parameter for the electrons than for the phonons and introduce an electron-phonon coupling constant that corresponds to a timescale of $2\,\text{ps}$ for the used excitation. However, we limit our discussion in the following to the combined expansive electron-phonon stress in contrast to the contractive magnetic stress since the phonon contribution dominates by far after electron-phonon equilibration.
The magnetic degrees of freedom are locally excited by energy transfer from the phonon subsystem on two timescales, with a sub-picosecond contribution and a slower $15\,\text{ps}$ timescale, resulting in a reduced expansive phonon stress contribution below $T_\text{N}$ as shown in Fig.~\ref{fig:IV_dy_stress_maps}(b) and (c). The maximum amount of energy density transferred to magnetic excitations is given by the finite integral of the magnetic heat capacity (Fig.~\ref{fig:IV_dy_Gr\"uneisen}(a)) starting at the initial sample temperature $T_\text{i}$. The Gr\"uneisen concept linearly relates this energy density to a maximum magnetic stress that is reached in the strongly excited near-surface region of the Dy transducer (see Fig.~\ref{fig:IV_dy_stress_maps}(e) and (f)). In contrast, the phonon system can take up very large amounts of energy up to the melting point. For large pump fluences, the expansive electron-phonon stress in the near-surface region therefore dominates the saturated magnetic stress (red color in Fig.~\ref{fig:IV_dy_stress_maps}(h)). In contrast, the contractive magnetic stress (blue) dominates in the region of the transducer close to the Nb layer, because the weak excitation there does not saturate the spin excitations. Hence, the energy is shared almost equally between the spin and phonon systems according to Fig.~\ref{fig:IV_dy_Gr\"uneisen}(a) and the contractive spin stress prevails because of its large negative Gr\"uneisen constant.
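Schematically, the contractive magnetic stress at a depth that was initially at temperature $T_\text{i}$ can therefore at most reach
\begin{equation*}
\sigma_{33}^\text{mag,max} = \Gamma_{3}^\text{mag} \int_{T_\text{i}}^{\infty} C^\text{mag}(T)\,\text{d}T \,,
\end{equation*}
which is finite because $C^\text{mag}$ vanishes above $T_\text{N}$, whereas the expansive electron-phonon stress keeps growing with the locally deposited energy density.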
This complex interplay of a saturable magnetic stress and a non-saturable phonon stress results in a spatially non-monotonic total stress within the inhomogeneously excited Dy transducer at $130\,\text{K}$ and $31\,\text{K}$ as displayed in Fig.~\ref{fig:IV_dy_stress_maps}(h) and (i). The corresponding strain maps, which are generated by solving the elastic wave equation~\eqref{eq:III_11_wave_equation_Gr\"uneisen}, are depicted in Fig.~\ref{fig:IV_dy_stress_maps}(k) and (l). They display an expansion of Dy at the frontside launching a bipolar strain pulse into Nb. The weaker and more slowly rising contraction at the backside drives a unipolar expansion pulse into the adjacent Nb layer. Their superposition yields the composite shape of the average strain response of the Nb detection layer as revealed by PUX in Fig.~\ref{fig:IV_dy_transient_strain}(c) for intermediate temperatures. With decreasing temperature and increasing maximum magnetic stress, the expansive electron-phonon stress and the amplitude of the bipolar strain pulse become less pronounced, while the amplitude of the unipolar expansive strain pulse driven by the contraction in Dy is enhanced. Furthermore, the total stress averaged over the Dy transducer becomes more contractive, which results in the increasing contraction of Dy that is even enhanced within tens of picoseconds due to the second, slowly rising magnetic stress contribution (cf. Fig.~\ref{fig:IV_dy_transient_strain}(b)). In summary, the Gr\"uneisen model reveals that the saturation of the magnetic stress due to the finite amount of energy that can be deposited in magnetic excitations is crucial to explain the temperature dependence of the strain response of the Dy and the Nb layer. Utilizing the sensitivity of the Nb strain to the stress profile within Dy, we were also able to extract the spatial profile of the magnetic stress and thus gain insights into the spatio-temporal excitations of the AFM order in Dy from our PUX experiment.
In summary, this use case demonstrates the unambiguous detection of NTE on picosecond timescales via PUX and exemplifies the extraction of the subsystem-specific stress contributions utilizing subsystem-specific Gr\"uneisen constants extracted from thermal expansion in equilibrium. In heavy rare-earth elements NTE arises from magnetostrictive stresses. However, several other mechanisms for NTE and Invar behavior exist, e.g.,\ dominant transverse phonon contributions at low temperature and repulsive deformation potential contributions \cite{barr2005, dove2016}. For each of these mechanisms it is interesting to investigate whether the NTE is also active in non-equilibrium states after ultrashort laser pulse excitation. An ultrafast NTE or Invar behavior may lead to novel transducer applications that may allow for tunable waveforms for picosecond ultrasonics as discussed in \cite{matt2023}.
\section{\label{sec:V_conclusion}Conclusion and Outlook}
We have presented an overview of the capabilities and concepts of PUX, where hard x-ray probe pulses detect laser-induced transient material strain via Bragg peak shifts.
Layered heterostructures provide excellent use cases, because the strain propagation and reflections connect the signals from different layers, which -- thanks to the large penetration depth of x-rays -- are accessible even beneath thick metal layers. PUX experiments often provide an intuitive picture of both the strain pulses and the quasistatic strain from the local increase in the energy density as these effects can be separated in the time-domain due to their different propagation speeds.
Motivated by the quantitative access to the strain response, we have revisited continuum elasticity theory to connect the established formulation of picosecond ultrasonics in homogeneously excited continuous thin films via the thermal expansion coefficient to a formulation that incorporates NTE and is able to reveal the Poisson effect on ultrafast timescales.
We have shown that PUX experiments may serve as an ultrafast probe of the local energy content in nanoscopic layers. The observed strain is driven by a stress, which is in many scenarios directly proportional to the energy densities in different subsystems. The proportionality constants are the subsystem-specific Gr\"uneisen parameters for electrons, phonons and magnetic excitations, which enable a simplified expression for the driving stress in the inhomogeneous elastic wave equation.
The selected examples of PUX experiments illustrate scenarios where x-ray probing is advantageous compared to established and versatile all-optical experiments. Strain pulse detection in buried layers that allows us to separate propagating strain pulses and quasi-static strain in the time-domain is one of the key advantages of PUX experiments as demonstrated for a thin Nb layer buried below an opaque TbFe$_\text{2}$ transducer. We discussed the case of granular vs. continuous FePt films to illustrate the morphology dependence of the strain response and expansion amplitude. For Au-Ni heterostructures that are thinner than the optical penetration depth of visible probe pulses we showcased that the material-specific Bragg peaks in PUX experiments enable the detection of energy transfer, where all-optical techniques would convolute the contributions from both layers. Finally, the NTE on ultrafast timescales was exemplified by a Dy transducer layer whose ultrafast giant magnetostriction is capable of launching unconventional picosecond strain pulses into a buried Nb layer.
The presented experiments work with symmetric diffraction of the rather dim and poorly collimated x-rays from a laser-driven plasma source \cite{schi2013c}. Already this technologically simple approach has many more applications beyond those discussed in the current manuscript. Moreover, PUX can be extended to asymmetric diffraction geometries that are sensitive to in-plane strain and to shear waves \cite{juve2020}. Optical phonons, i.e.,\ coherent atomic motions within the crystallographic unit cell, are accessible via the modulations of the x-ray intensity through the structure factor \cite{soko2003, john2009b, kozi2017}. The excitation can be tailored by changing the wavelength and duration of the pump pulse, by multipulse-excitation sequences \cite{boja2015,repp2020b} or by combining optical excitation with externally applied magnetic or electric fields \cite{matt2023}. Time-resolved studies of phase transitions that are accompanied by lattice strain \cite{mari2012,jong2013, mogu2020} or non-thermal stresses that arise in laser-excited piezo- and ferroelectric materials \cite{schi2013,chen2016} represent another intriguing field for the application of PUX.
Large-scale facilities from synchrotron radiation sources to FELs provide many additional opportunities beyond the basic PUX experiments discussed here. The excellent collimation and energy resolution of the x-ray beam support grazing-incidence measurements, which allow for tuning the penetration depth of x-rays in the bulk and provide a higher resolution of the transient shape changes of the diffraction peaks in reciprocal space. The enormous photon flux at FELs enables the investigation of even thinner layers, pushing the spatial strain-pulse dimensions further towards the single-layer limit. Inelastic scattering provides alternative access to phonon dynamics via thermal diffuse scattering \cite{trig2010}, Fourier-transform inelastic scattering \cite{trig2013}, TDBS in the x-ray range \cite{boja2013,heni2016,shay2022} and time-resolved x-ray reflectivity on non-crystalline specimens \cite{nusk2011, jal2017}. The coherence of \nth{4} generation synchrotrons and FELs even allows for advanced strain-field imaging techniques \cite{robi2009, newt2014, hols2022}.
\renewcommand\thesection{\Alph{section}}
\setcounter{section}{0}
\numberwithin{equation}{section}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\numberwithin{figure}{section}
\renewcommand{\thefigure}{\thesection.\arabic{figure}}
\numberwithin{table}{section}
\renewcommand{\thetable}{\thesection.\arabic{table}}
\section{\label{sec:VI_xray_diffraction}Appendix: Detecting Bragg peak shifts in reciprocal space}
Here, we outline details of the x-ray diffraction process that is central to PUX experiments. We discuss both RSM and RSS schemes that utilize a PSD for the detection of Bragg peak shifts that encode the strain response of the sample.
In this article, we limit the discussion to the symmetric and coplanar scattering geometry, where the sample normal is parallel to the scattering plane, generated by the incident and the diffracted x-ray beam. This specular Bragg diffraction geometry specifically accesses the out-of-plane lattice constant.
A quantitative analysis of the three-dimensional strain response as discussed in Section~\ref{sec:IV_b_fept} requires access to asymmetric Bragg reflections. This is also true for the detection of shear waves via PUX \cite{juve2020,dorn2019}. More details on the theory of x-ray diffraction can be found in dedicated books \cite{als2011,warr1990,auth2008}, and treatises that specialize on x-ray diffraction from thin films \cite{noya1995,holy1999} or time-resolved x-ray diffraction \cite{barg2005,schi2013c,hell2007}.
\subsection{\label{sec:VI_a_rsm}Reciprocal space mapping}
As typical for picosecond ultrasonics experiments, we employ a pump-probe geometry where the probe pulses are focused sufficiently tightly that they are scattered from a region of the sample that is homogeneously laser-excited within the sample plane. The laser-induced strain response then exclusively develops along the out-of-plane direction of the sample as explained in Section~\ref{sec:III_b_strain_1D}. This results in a change of the out-of-plane lattice constant $d_{3}$, i.e.,\ a change of the spacing of the lattice planes parallel to the sample surface that can be probed via symmetric x-ray diffraction.
Fig.~\ref{fig:VI_1_diffraction_geometry}(a) introduces the corresponding experimental geometry: The incident angles of the x-ray beam with respect to the sample surface and to the lattice planes are $\alpha_\text{in}$ and $\omega$, respectively. The diffracted intensity is recorded on the PSD under the angles $\alpha_\text{out}$ and $\theta$ relative to the sample surface and the lattice planes, respectively. In case of symmetric diffraction from lattice planes parallel to the sample surface, i.e.,\ $\alpha_\text{in}=\omega$ and $\alpha_\text{out}=\theta$, Bragg's law
\begin{align}
n \lambda = 2d_{3} \sin(\theta_\text{B})
\label{eq:VI_1_Braggs_law}
\end{align}
relates the out-of-plane lattice constant $d_{3}$ to the Bragg angle $\theta_\text{B}$ for a fixed wavelength of the x-rays $\lambda$ and a given diffraction order $n$. The Bragg angle $\theta_\text{B}$ represents the center and the maximum of the Bragg peak intensity reflected by a set of parallel lattice planes that are exposed to a monochromatic x-ray probe beam when scanning the angle $\theta$ relative to the lattice planes.
Note that irrespective of the detector position, Bragg diffraction only occurs if the incoming x-rays are incident at $\theta_\text{B}$ with respect to the lattice planes, i.e.,\ $\omega=\theta_\text{B}$. Within this simplified picture it is required to carry out $\theta-2\theta$ scans that symmetrically vary both $\alpha_\text{in}$ and $\alpha_\text{out}$ to identify the changing lattice constant $d_{3}$ from the corresponding change of the Bragg angle $\theta_\text{B}$.
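For the small strains relevant here, differentiating Bragg's law at fixed $\lambda$ and $n$ directly relates the measured shift of the Bragg angle to the average out-of-plane strain,
\begin{equation*}
\eta_{33} = \frac{\Delta d_{3}}{d_{3}} = -\cot(\theta_\text{B})\,\Delta\theta_\text{B} \,,
\end{equation*}
i.e.,\ an expansion of the lattice shifts the Bragg peak towards smaller angles.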
\begin{figure}[tb]
\centering
\includegraphics[width = \columnwidth]{Figures/fig_15_laue.pdf}
\caption{\textbf{Sketch of the diffraction geometry:} (a) symmetric diffraction geometry where the angle of incidence relative to the sample surface plane $\alpha_\text{in}$ is equal to the reflection angle $\alpha_\text{out}$, which in turn are equal to the angles with respect to the lattice planes $\omega$ and $\theta$, respectively. In this case the scattering vector is aligned along the out-of-plane reciprocal space direction $q_\text{z}$. (b) Sketch of an asymmetric scattering event where reflection occurs from a scattering vector that has a component along the in-plane reciprocal space coordinate $q_\text{x}$. The asymmetry can equally well result from the tilting of nanoscale crystallites with respect to the surface and from the associated nanoscale inhomogeneity within the film plane (finite coherence length). The diffraction processes in both panels are elastic so that $|\bm{k}_\text{in}|= |\bm{k}_\text{out}|$, but the scattered photons are detected on different pixels of the PSD. Sketch adapted from the book of Hol\'y et al.\ \cite{holy1999}.}
\label{fig:VI_1_diffraction_geometry}
\end{figure}
This basic description of x-ray diffraction via Bragg's law has to be convolved with the finite divergence of the x-ray sources. However, investigating thin films with mosaicity or finite crystalline coherence length requires more work. Fig.~\ref{fig:VI_1_diffraction_geometry}(b) illustrates the diffraction from a mosaic film that consists of small crystallites tilted with respect to the sample surface, in comparison to the idealized situation depicted in Fig.~\ref{fig:VI_1_diffraction_geometry}(a). In case (b), where diffraction occurs with $\alpha_\text{in} \neq \omega$ and $\alpha_\text{out} \neq \theta$, the description by the Laue condition
\begin{equation}
\bm{k}_\text{out} - \bm{k}_\text{in} = \bm{G} \,
\label{eq:VI_2_laue_condition}
\end{equation}
is better suited. Here, \nolinebreak{$\bm{k}_\text{in}$ and $\bm{k}_\text{out}$} are the wave-vectors of the incident and elastically scattered x-rays ($|\bm{k}_\text{in}| = |\bm{k}_\text{out}|= k = {2\pi}/{\lambda}$) which define the scattering vector $\bm{q}=\bm{k}_\text{out} - \bm{k}_\text{in}$.
In reciprocal space, Bragg peaks occur at scattering vectors $\bm{q}$ being equal to a reciprocal lattice vector $\bm{G}$ as stated by the Laue condition Eq.~\eqref{eq:VI_2_laue_condition}. As the reciprocal lattice vectors $|\bm{G}| = {2\pi n}/{d_3}$ are inversely related to the lattice plane spacing $d_3$, the determination of the position of a Bragg peak in reciprocal space yields the lattice strain.
Fig.~\ref{fig:VI_1_diffraction_geometry}(b) displays the application of this description to diffraction from a crystallite whose lattice planes are slightly tilted with respect to the sample surface. Even under this condition the diffraction from the lattice planes only occurs at the Bragg condition $\omega=\theta=\theta_B$. However, the corresponding reciprocal lattice vector $\bm{G}$ does not point along the out-of-plane direction anymore but is tilted relative to the reciprocal space coordinates $q_\text{x}$ and $q_\text{z}$ that are defined with respect to the sample surface, where $q_\text{z}$ denotes the out-of-plane direction. Under this condition the maximum of the Bragg peak occurs at $\alpha_\text{in} \neq \alpha_\text{out}$. In addition, a finite nanoscopic in-plane size of the crystallites relaxes the $q_\text{x}=0$ condition by broadening the diffraction condition along $q_\text{x}$, even in the case of lattice planes parallel to the surface. This leads to diffraction intensity even if the Bragg condition is not fulfilled as illustrated in Fig.~\ref{fig:VI_1_diffraction_geometry}(b) by the diffuse spread of the reciprocal lattice vector $\bm{G}$. In total, mosaicity leads to a broadening of the Bragg peak along the in-plane direction $q_\text{x}$ and similarly along $q_\text{y}$. Hence, diffraction can even occur if both the incoming angle $\alpha_\text{in}$ and the scattering angle $\alpha_\text{out}$ deviate from $\theta_B$. The PSD simultaneously captures a range of $\alpha_\text{out}$ and additionally probes diffraction intensity perpendicular to the diffraction plane, i.e.,\ along $q_\text{y}$. However, we restrict our discussion to the diffraction plane characterized by $q_\text{x}$ and $q_\text{z}$ and integrate the recorded intensity along $q_\text{y}$. This assumes equivalent in-plane directions of the film and allows us to treat the PSD effectively as a line detector. The transformation from angular $\alpha_\text{in}$-$\alpha_\text{out}$-space to reciprocal $q_\text{x}$-$q_\text{z}$-space by geometric considerations is then given by \cite{holy1999,schi2013c}:
\begin{equation}
\bm{q} = \begin{pmatrix} q_\text{x} \\ q_\text{z} \end{pmatrix} =k \begin{pmatrix} \cos{(\alpha_\text{out})} - \cos{(\alpha_\text{in})} \\ \sin{(\alpha_\text{in})} + \sin{(\alpha_\text{out}) } \end{pmatrix}\,.
\label{eq:VI_3_transformation}
\end{equation}
According to this transformation the PSD captures a range of $\alpha_\text{out}$ and therefore probes a subset of the reciprocal $q_\text{x}$-$q_\text{z}$-space for a fixed $\alpha_\text{in}$. The individual treatment of $\alpha_\text{in}$ and $\alpha_\text{out}$ is a generalization of Bragg's law~\eqref{eq:VI_1_Braggs_law} that is recovered from the Laue condition~\eqref{eq:VI_2_laue_condition} for $\alpha_\text{in}=\alpha_\text{out}=\theta$ implying $q_\text{x}=0$ as in the symmetric case in Fig.~\ref{fig:VI_1_diffraction_geometry}(a). The variation of $\alpha_\text{in}$ during a symmetric $\theta-2\theta$ scan with a line detector yields the diffraction intensity along several lines within the $q_\text{x}$-$q_\text{z}$-space with an offset along $q_\text{z}$. This set of reciprocal space slices yields the RSM in the vicinity of the Bragg peak in the form of diffraction intensity versus $q_\text{x}$ and $q_\text{z}$ as exemplified by Fig.~\ref{fig:II_pux_data}(c).
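As a simple numerical illustration of this transformation, the following snippet maps a set of detector angles onto reciprocal-space coordinates and converts a small shift of the peak position along $q_\text{z}$ into strain; the wavelength (Cu~K$_{\alpha 1}$) and all angles are placeholder assumptions.
\begin{verbatim}
import numpy as np

# Map incidence/exit angles to (q_x, q_z) and convert a q_z shift to strain.
# Wavelength and angles are placeholder assumptions for illustration.
lam = 1.5406e-10                       # Cu K_alpha1 wavelength [m] (assumed)
k = 2.0*np.pi/lam

def q_from_angles(alpha_in, alpha_out):
    """Angular to reciprocal-space transformation (angles in rad)."""
    q_x = k*(np.cos(alpha_out) - np.cos(alpha_in))
    q_z = k*(np.sin(alpha_in) + np.sin(alpha_out))
    return q_x, q_z

theta_B = np.deg2rad(22.0)             # placeholder Bragg angle
alpha_out = theta_B + np.deg2rad(np.linspace(-1.0, 1.0, 5))  # PSD pixels
q_x, q_z = q_from_angles(theta_B, alpha_out)
print("probed q_z range [1/m]:", q_z.min(), q_z.max())

# a Bragg-angle shift of -0.1 mrad corresponds to an out-of-plane expansion
q_z0 = 2.0*k*np.sin(theta_B)
q_z1 = 2.0*k*np.sin(theta_B - 1e-4)
print("strain = %.2e" % (-(q_z1 - q_z0)/q_z0))
\end{verbatim}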
While the width of the Bragg peak along $q_\text{x}$ is a measure of the in-plane coherence length and mosaicity of the layer, its width along $q_\text{z}$ encodes the layer thickness or the out-of-plane coherence length. The central observable of PUX is the $q_\text{z}$-position of the Bragg peak which encodes the average out-of-plane lattice constant of the corresponding crystalline layer. Therefore, a heterostructure consisting of different materials with distinct out-of-plane lattice constants results in layer-specific Bragg peaks separated in reciprocal space along $q_\text{z}$ which enables probing the layer-specific laser-induced strain response as demonstrated in Section~\ref{sec:II_PtCuNi_example}. The angular divergence and the spectral width of the x-rays impinging on the sample, i.e.,\ the instrument function, broaden the recorded diffraction peaks \cite{zeus2021}.
Time-resolved RSM measurements follow the shift of the material-specific Bragg peaks along $q_\text{z}$ by recording the diffraction intensity in reciprocal space via $\omega$ scans or $\theta-2\theta$ scans for each pump-probe delay \cite{schi2013c}. The transient position of the center-of-mass of the Bragg peak yields the laser-induced out-of-plane strain by the relative change of the $q_\text{z}$ coordinate according to Eq.~\eqref{eq:II_1_strain_definition}.
Most PUX experiments focus on the average strain of each layer within the heterostructure. However, strain gradients within a material can be observed as peak broadening. Especially inhomogeneous energy deposition within the transducer region or localized, high-amplitude strain pulses can result in a Bragg peak splitting for the inhomogeneously strained layer, where one maximum corresponds to the compressed part and one to the expanded part of the layer \cite{schi2014b}. In extreme cases dynamical x-ray diffraction analysis is required to reconstruct the underlying strain profile from the observed Bragg peak deformations. Such capabilities are provided by modules of the \textsc{udkm1Dsim} \textsc{Matlab} \cite{schi2014} and Python \cite{schi2021} code libraries.
\subsection{\label{sec:VI_b_rss}Reciprocal space slicing}
Recording RSMs around multiple Bragg peaks for each time delay between laser pump pulse and x-ray probe pulse contains the full information about changes of the crystalline structure. To date, most PUX experiments only measure the out-of-plane expansion encoded by Bragg peak shifts along $q_\text{z}$, because this is the natural strain response on a picosecond timescale in the standard experimental setting utilizing a thin film excited by a pump laser profile with much larger lateral extent than the film thickness (see Section~\ref{sec:III_b_strain_1D}). In these situations we can often apply the RSS technique, which considerably speeds up the measurement, because only one or very few slices of reciprocal space and hence angular positions $\alpha_\text{in}$ are required. This means the measurement is done at an angle that offers a large number of diffracted x-ray photons, whereas RSMs contain many positions in the reciprocal space with a low number of diffracted photons. Details of this technique and its application to different types of x-ray sources have been reported previously \cite{zeus2021}.
Fig.~\ref{fig:VI_2_rss} displays the central idea of the RSS technique. Due to the finite coherence length of nanoscopic thin films both along $q_\mathrm{z}$ (thickness) and $q_\mathrm{x}$ (mosaicity) asymmetrically scattered x-rays contribute to the scattered intensity distribution around the reciprocal lattice vector $\bm{G}$. Eventually, this results in a characteristic extent of the Bragg peak as indicated by the yellow contour lines around the equilibrium reciprocal lattice vector $\bm{G}$ in Fig.~\ref{fig:VI_2_rss}(a).
In the case of a purely one-dimensional strain response of the film along the out-of-plane direction, as in typical picosecond ultrasonic experiments, the Bragg peak exclusively shifts along $q_\text{z}$ as represented by the grey contour lines around $\bm{G'}$. The pixels of the PSD simultaneously record the x-ray intensities at various diffraction angles $\alpha_\text{out}$ for a fixed $\alpha_\text{in}$.
The PSD positioned at a fixed angle thus records a specific lineout of the Bragg peak's intensity profile as indicated by the line with ticks. The PSD position is initially chosen to slice the reciprocal space through the center of the Bragg peak which maximizes the recorded diffraction intensity profile (orange profile). This exactly corresponds to the symmetrical Bragg condition. After the shift of the Bragg peak, the fixed PSD intersects the Bragg peak's intensity distribution in the wings of the intensity distribution. Hence, the recorded intensity profile shifts on the detector and is reduced in intensity (grey peak). The red arrow denotes the shift of the Bragg peak projection observed on the detector which is closely related to the actual shift of the Bragg peak (contour lines) along $q_\text{z}$ in reciprocal space.
\begin{figure}[t!]
\centering
\includegraphics[width = 1.00\columnwidth]{Figures/fig_16_rss_sketch.pdf}
\caption{\textbf{Sketch of the RSS scheme:} (a) RSS utilizes a PSD that probes multiple diffraction angles $\alpha_\mathrm{out}$ without any mechanical movements of the goniometer. The detector is approximated as a straight line in reciprocal space with $\alpha_\mathrm{in}$ and $\alpha_\mathrm{out}$ chosen such that the maximum intensity occurs on the center pixel of the detector. The peak shifts by $\Delta q_\mathrm{z,RSM}$ as a result of out-of-plane strain. The projection $\Delta q_\mathrm{z,RSS}$ of this shift in the reciprocal space slice defined by the PSD yields the measured strain. Panels (b--d) display the relation of both shifts for different shapes of the Bragg peak in reciprocal space, where both shifts are nearly identical for a Bragg peak that is elongated along $q_\text{x}$.}
\label{fig:VI_2_rss}
\end{figure}
Fig.~\ref{fig:VI_2_rss}(b--d) show a schematic zoom into the intersection of the detector slice (black line) and shifting Bragg peaks with different widths along $q_\text{x}$ and $q_\text{z}$. The ratio of the shift of the recorded intensity profile on the detector $\Delta q_\text{z,RSS}$ and the shift of the Bragg peak in reciprocal space $\Delta q_\text{z,RSM}$ depends on the shape of the Bragg peak in reciprocal space. The shifts are related by geometric arguments, and the strain $\eta_\mathrm{RSM}$ corresponding to the full shift of the Bragg peak in reciprocal space is given by:
\begin{align}
\begin{split}
\eta_\mathrm{RSM} &=-\frac{\Delta q_\text{z,RSM}}{q_\text{z,0}} \\&=-\frac{\Delta q_\text{z,RSS}}{q_\text{z,0}} \underbrace{\left(1+\left(\frac{w_\text{z}}{w_\text{x}}\right)^2\tan^2{(\theta_\text{B})} \right)}_{:=s} \,,
\label{eq:VI_correction_factor}
\end{split}
\end{align}
and can be related to the shift $-\Delta q_\text{z,RSS}/q_\text{z,0}$ of the RSS on the PSD via a scaling factor $s$. Here, $w_\text{z}$ and $w_\text{x}$ denote the width of the Bragg peak in reciprocal space along the $q_\text{z}$ and $q_\text{x}$ coordinate, respectively, which can be characterized for a given sample by standard RSM techniques. Under symmetrical scattering conditions the angle $\theta_\text{B}$ in Fig.~\ref{fig:VI_2_rss}(b) is the chosen fixed incident angle of the x-ray beam, $\alpha_\text{in}=\theta_\text{B}$, i.e.,\ the Bragg angle of the film, which yields maximum intensity on the detector by slicing the Bragg peak through its center. Fig.~\ref{fig:VI_2_rss}(b--d) illustrate the dependence of the correction factor $s$ on the width of the Bragg peak in reciprocal space. The observed shift on the detector and in reciprocal space nearly coincide for a peak that is broad along $q_\text{x}$ (Fig.~\ref{fig:VI_2_rss}(d)), resulting in a correction factor $s\approx 1$. However, $s$ can become rather large for a Bragg peak that is narrow along $q_\text{x}$ and broad along $q_\text{z}$ (Fig.~\ref{fig:VI_2_rss}(c)). This situation is met e.g.\ for very thin films with excellent in-plane epitaxy, i.e.,\ with negligible mosaicity. Although the strain may considerably shift the Bragg peak along $q_\text{z}$, the shift of the peak on the detector can become very small in such cases, thus limiting the sensitivity and accuracy of the RSS technique.
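In practice, Eq.~\eqref{eq:VI_correction_factor} amounts to a single multiplicative correction of the shift read off the detector, as the following short sketch with placeholder peak widths, Bragg angle and shift illustrates.
\begin{verbatim}
import numpy as np

# Convert the Bragg-peak shift observed on the PSD into strain using the
# RSS scaling factor s. Widths, angle and shift are placeholder values.
theta_B = np.deg2rad(22.0)             # fixed incidence angle alpha_in
w_z, w_x = 0.05e10, 0.25e10            # peak widths along q_z and q_x [1/m]
q_z0 = 3.0e10                          # equilibrium peak position [1/m]
dq_z_RSS = -6.0e6                      # shift observed on the PSD [1/m]

s = 1.0 + (w_z/w_x)**2*np.tan(theta_B)**2
eta = -dq_z_RSS/q_z0*s
print("scaling factor s = %.3f, strain = %.2e" % (s, eta))
\end{verbatim}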
In general, samples exhibiting Bragg peaks elongated along $q_\text{x}$ and small Bragg angles $\theta_\text{B}$ are well suited for the RSS probing scheme that is able to reduce the measurement time by an order of magnitude by avoiding $\theta-2\theta$ scans or full RSMs at each pump-probe delay. The specimens discussed in the current manuscript all exhibit Bragg peaks that are broad along $q_\text{x}$, and the shifts were recorded via RSS. In contrast, some specimens do require ultrafast RSM experiments: if the Bragg peak becomes broad along $q_\text{z}$ due to very thin films or if the Bragg peak is in close vicinity of a substrate peak that provides a strong background due to its crystal truncation rod \cite{matt2021, rong2022}. Furthermore, the analysis of transient peak intensities in terms of the Debye-Waller effect requires RSM measurements since the peak shift induces an additional decrease of the peak intensity within the RSS scheme.
\section*{Acknowledgments and funding sources}
We gratefully acknowledge the DFG for financial support via No.\ BA 2281/11-1 Project-No.\ 328545488 – TRR 227, project A10.
\bibliographystyle{elsarticle-num}
\section{Introduction}
Massive neutrinos affect cosmological observables due to their unique behaviour as dark radiation at early times and as dark matter at late times, see \cite{Lesgourgues:2006nd} for a review.
Effectively, cosmology is sensitive to the total energy density of relic neutrinos.
In the standard scenario, after they become non-relativistic,
the correspondence between non-relativistic neutrino energy density and neutrino masses is approximately given by~\cite{Froustey:2021qqq}
\begin{equation}
\Omega_\nu h^2 = \sum_i m_i/(93.12\text{ eV})\,,
\end{equation}
where $h$ is the reduced Hubble parameter.
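For instance, the minimal mass sum allowed by oscillation data, $\sum_i m_i \simeq 0.06$~eV (see Eq.~\eqref{eq:lower_bound} below), corresponds to $\Omega_\nu h^2 \simeq 6.4\times 10^{-4}$.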
Constraining the non-relativistic neutrino energy density, therefore,
allows us to obtain bounds on the sum of the neutrino masses
under the assumption that the above equation holds. Combining information from CMB and BAO observations, the Planck collaboration obtains~\cite{Aghanim:2018eyx}
\begin{equation}\label{eq:Planck}
\sum m_\nu \equiv \sum_{i=1}^3m_i < 0.12\,{\rm eV} \,(95\%\,{\rm CL}) \,.
\end{equation}
Adding more recent data, even more stringent limits can be obtained. For instance, Ref.~\cite{DiValentino:2021hoh} finds
\begin{equation}\label{eq:cosmo_bound}
\sum m_\nu < 0.09\,{\rm eV} \,(95\%\,{\rm CL}) \, ,
\end{equation}
see also \cite{Palanque-Delabrouille:2019iyz,diValentino:2022njd}. In the near future we expect that the
DESI~\cite{DESI:2016fyo} and/or Euclid~\cite{Amendola:2016saw} surveys may provide sensitivities to $\sum m_\nu$ of down to $0.02\,\mathrm{eV}$ or beyond, see e.g.
\cite{Font-Ribera:2013rwa,Basse:2013zua,Hamann:2012fe,Carbone:2010ik,Brinckmann:2018owf}.
On the other hand, neutrino oscillation experiments provide accurate determinations of the two neutrino mass-squared splittings \cite{deSalas:2020pgw}, see also Refs.~\cite{Esteban:2020cvm,Capozzi:2021fjo}:
\begin{equation}\label{eq:oscillations}
\begin{split}
\Delta m^2_{21} &=(7.50\pm 0.21)\times 10^{-5} \,{\rm eV}^2 \,, \\
|\Delta m^2_{31}|&= \left\{
\begin{array}{ll}
(2.550\pm0.025)\times 10^{-3}\, {\rm eV}^2 &\quad \rm (NO) \\
(2.450\pm0.025)\times 10^{-3}\, {\rm eV}^2 &\quad \rm (IO)
\end{array}
\right. \,,
\end{split}
\end{equation}
where $\Delta m^2_{ij} \equiv m_i^2-m_j^2$. The sign of $\Delta m^2_{31}$ determines the type of neutrino mass ordering, being positive for normal ordering (NO) and negative for inverted ordering (IO).
With the mass-splittings determined, oscillation data provide a lower bound on the sum of the neutrino masses, obtained by assuming that the lightest neutrino mass is zero. From Eq.~\eqref{eq:oscillations}, one finds
\begin{equation}\label{eq:lower_bound}
\begin{split}
\sum m_\nu > \left\{
\begin{array}{ll}
(0.0591 \pm 0.00027) \, {\rm eV} &\quad \rm (NO) \\
(0.0997 \pm 0.00051) \, {\rm eV} &\quad \rm (IO) \\
\end{array}\right.
\end{split}\,.
\end{equation}
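These minimal values follow directly from the measured splittings once the lightest mass is set to zero, as the following short sketch using the central values of Eq.~\eqref{eq:oscillations} illustrates.
\begin{verbatim}
import numpy as np

# Minimal mass sums for a massless lightest neutrino, using the central
# values of the splittings quoted above (inputs in eV^2, results in eV).
dm21, dm31_NO, dm31_IO = 7.50e-5, 2.550e-3, 2.450e-3

sum_NO = np.sqrt(dm21) + np.sqrt(dm31_NO)            # m1 = 0
sum_IO = np.sqrt(dm31_IO) + np.sqrt(dm31_IO + dm21)  # m3 = 0

print("minimal sum, NO: %.4f eV" % sum_NO)           # ~0.0592 eV
print("minimal sum, IO: %.4f eV" % sum_IO)           # ~0.0997 eV
\end{verbatim}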
Comparing these results with Eqs.~\eqref{eq:Planck} and \eqref{eq:cosmo_bound}, one notices that cosmological upper bounds are already comparable to the lower bound for IO, and near-future sensitivities will probe the NO case as well.
This may happen in two ways: by measuring a value equal to or slightly larger than $\ensuremath{\sum m_\nu}\simeq0.06$~eV and confirming that at least two neutrinos have a positive mass in agreement with oscillation results, or by strengthening the current upper bounds to the level that both minimal values in Eq.~\eqref{eq:lower_bound} will be disfavored by cosmology.
Therefore, it is mandatory to quantify a possible tension between cosmology and oscillation data, which constitutes the main goal of this manuscript. Such a tension would have important implications: the absence of a detection of a finite neutrino mass in cosmology as predicted by Eq.~\eqref{eq:lower_bound} could be a striking signal for a non-standard cosmological model beyond the vanilla $\Lambda$CDM model and/or non-standard neutrino properties, see e.g.~\cite{Alvey:2021xmq} for a discussion.
Furthermore, the tension between the lower bound on $\sum m_\nu$ for IO and the bound from cosmology could be used in principle to disfavour IO compared to NO, see e.g. \cite{Hannestad:2016fog,Gerbino:2016ehw,Vagnozzi:2017ovm,Simpson:2017qvj,Gariazzo:2018meg,
DeSalas:2018rby,Heavens:2018adv,Gariazzo:2018pei,RoyChoudhury:2019hls,Mahony:2019fyb,
Hergt:2021qlh,Jimenez:2022dkn,Gariazzo:2022ahe} for an incomplete list of studies on this topic.
Typically, in these papers some kind of Bayesian model comparison between NO and IO is performed, leading to posterior odds for the two models.
In the following we will address this question with a slightly different approach, namely by quantifying the tension between cosmology and oscillation data for the two orderings. We argue that it is meaningful to reject IO from a comparison of cosmology and oscillations \emph{only if} these two data sets are consistent for NO. In the case when there is tension between cosmology and oscillations for both orderings, a relative comparison of the two models can be misleading. We shall explore this putative tension exploiting both current and future cosmological measurements.
The structure of the manuscript is as follows. In Sec.~\ref{sec:metrics} we describe the different methods commonly exploited in the literature to quantify tensions between two sets of measurements. Section \ref{sec:analysis} contains a description of the methodology for the numerical analyses, the parameterizations employed to describe the parameter space and the data involved in quantifying the tension between cosmological and terrestrial neutrino mass measurements. Section~\ref{sec:results} presents the results from our analyses, including a mass ordering comparison. Finally, we conclude in Sec.~\ref{sec:conclusions}.
\section{Tension metrics}\label{sec:metrics}
In this section we provide a brief review of various metrics used to quantify a tension between different data sets. We follow closely the discussion in Ref.~\cite{DES:2020hen}, where a number of tests are reviewed and applied in the context of the $H_0$ tension. We refer the interested reader to Ref.~\cite{DES:2020hen} for further references and more in-depth discussions of the various tests. Additional discussions can be found, for instance, in the context of cosmology in \cite{Lin:2019zdn,Raveri:2018wln}, in the context of Type Ia Supernova analysis in \cite{Amendola:2012wc}, and within a frequentist framework in the context of neutrino oscillations in \cite{Maltoni:2003cu}.
To fix the notation, in the following
$\mathcal{L}_D = P(D|\theta,M)$ denotes the likelihood, which is the probability for the data $D$ given a model $M$ with parameters $\theta$, $\Pi = P(\theta|M)$ is the prior for the parameters,
\begin{equation}
\mathcal{Z}_D = P(D|M) = \int \text{d}\theta\, \mathcal{L}_D(\theta) \, \Pi(\theta)~,
\end{equation}
is the Bayesian evidence, and
\begin{equation}
\mathcal{P}_D(\theta) = P(\theta | D, M) = \frac{\mathcal{L}_D(\theta) \, \Pi(\theta)}{\mathcal{Z}_D}~,
\end{equation}
is the posterior density for the parameters $\theta$ for data $D$. Considering now two data sets $D=A,B$, the question posed here is whether these data are consistent within a given model. In order to quantitatively address this question, the following tests can be used:
\begin{itemize}
\item \textbf{Bayesian evidence ratio.} Consider the ratio
\begin{equation}
\label{eq:lnZratio}
R\equiv
\frac{\mathcal{Z}_{AB}}{\mathcal{Z}_{A}\mathcal{Z}_{B}} \,.
\end{equation}
The numerator corresponds to the evidence when data sets $A$ and $B$ are described by the same set of parameters $\theta$, whereas in the denominator different parameters may be preferred by the two data sets. Values of $R \gg 1 \, (\ll 1)$ would indicate agreement (disagreement) between the two data sets. As discussed in \cite{DES:2020hen}, $R$ is dependent on the prior volume, and small values of $R$, indicating a possible tension between data sets, can be increased by increasing the prior volume. Therefore, we will not use the Bayesian evidence ratio in our tension analysis below.
\item \textbf{Bayesian suspiciousness.}
This test starts from the Bayesian evidence ratio, but uses the information ratio $I$, based on the Kullback-Leibler divergence, to remove the prior dependence.
Consider the log-information ratio
\begin{equation}
\label{eq:lnI}
\ln I = \mathcal{D}_A+\mathcal{D}_B-\mathcal{D}_{AB}\,,
\end{equation}
where the Kullback-Leibler divergence is defined as
\begin{equation}
\label{eq:KL_div}
\mathcal{D}_D
=
\int \text{d}\theta \,
\mathcal{P}_D
\ln\left(\frac{\mathcal{P}_D}{\Pi}\right) \,.
\end{equation}
Using the log-information ratio
we can cancel the prior dependence from the Bayesian evidence ratio $R$
and define the suspiciousness \cite{Handley:2019wlz}:
\begin{equation}
\label{eq:lnS}
\ln S
\equiv
\ln R - \ln I \,.
\end{equation}
As for $R$, positive values of $\ln S$ indicate agreement among the data sets while negative ones indicate disagreement.
For Gaussian posteriors, the quantity $(d-2\ln S)$ follows a $\chi_d^2$ distribution,
where the number of degrees of freedom can be obtained
using the Bayesian model dimensionality defined in \cite{Handley:2019pqx}:
\begin{equation}
\label{eq:bmd}
d_D = 2
\int
\text{d}\theta \,
\mathcal{P}_D
\left[
\ln\left(\frac{\mathcal{P}_D}{\Pi}\right)
-\mathcal{D}_D
\right]^2 \,.
\end{equation}
In order to compute the significance of the tension between two data sets, the relevant Bayesian dimensionality can be obtained using \cite{Handley:2019pqx}
\begin{equation}\label{eq:d}
d=d_A+d_B-d_{AB} \,.
\end{equation}
As we will discuss in the following, the Bayesian model dimensionality $d_D$
may have problems when dealing with posteriors that are not
Gaussian in the parameters under consideration,
or when the prior limits impose a significant cut on the posterior shape.
In these cases,
we will replace the Bayesian model dimensionality with a more naive counting for the number of degrees of freedom, see below.
\item \textbf{Parameter goodness-of-fit tests.} This test is based on the idea of evaluating the ``cost'' of explaining data sets together (i.e., with the same parameter values) as compared to describing them separately (i.e., each data set can choose its own preferred parameter values). Therefore this type of test is sometimes also called a ``goodness-of-fit loss'' test.
We take as an example two data sets $A$ and $B$, as relevant for this study (the generalization to more data sets is straightforward). Compatibility of the data sets is evaluated using the test statistic
\begin{equation}\label{eq:PG}
\begin{split}
Q \equiv &-2\ln\mathcal{L}_{AB}(\hat\theta_{AB}) \\
&+ 2\ln\mathcal{L}_{A}(\hat\theta_{A}) + 2\ln\mathcal{L}_{B}(\hat\theta_{B}) \,.
\end{split}
\end{equation}
Here $\hat\theta_D$ denotes the parameter values which ``best'' describe data set $D$.
In the context of frequentist statistics, $\hat\theta_D$ is taken as the parameter values maximizing the likelihood $\mathcal{L}_D(\theta)$~\cite{Maltoni:2003cu}. In this case, the test statistic is denoted as $Q\equiv\chi^2_{\rm PG}$ and, by construction, $\chi^2_{\rm PG}$ is independent of the prior and of any re-parameterization (as long as the number of independent parameters remains the same). In the context of a Bayesian analysis, $\hat\theta_D$ is taken to be the parameter values at the ``maximum a posteriori'' (MAP, the point at which the posterior assumes its maximum value), which in general does depend on the prior choice, see Refs.~\cite{DES:2020hen,Raveri:2018wln}, where the corresponding test statistic is denoted by $Q_{\rm DMAP}$ (difference of log-likelihoods at their MAP point). For flat uninformative priors ($\Pi(\theta) = \mathrm{const}$), maximum likelihood and maximum posterior points are identical and $\chi^2_{\rm PG} = Q_{\rm DMAP}$.
Under certain regularity conditions, $Q$ from Eq.~\eqref{eq:PG} follows a $\chi^2_n$ distribution, where $n$ is the number of parameters in common to both data sets $A$ and $B$,
\begin{equation}\label{eq:n}
n=p_A+p_B-p_{AB} \,,
\end{equation}
where $p_D$ denotes the number of parameters describing data set $D$, see \cite{Maltoni:2003cu,Raveri:2018wln} for precise definitions.
\item \textbf{Parameter differences.}
This test measures the distance between posterior distributions for the parameters $\theta$,
given two different datasets \cite{Raveri:2021wfz,Raveri:2019gdp}.
Let us define the difference $\Delta\theta=\theta_1 - \theta_2$,
where $\theta_1$ and $\theta_2$ are two points in the shared parameter space.
Assuming, as in our case, that the datasets $A$ and $B$ are independent,
the posterior distribution for $\Delta\theta$ can be computed using:
\begin{equation}
\mathcal{P}_\Delta(\Delta\theta)
=
\int
\mathcal{P}_A(\theta)\mathcal{P}_B(\theta-\Delta\theta)
\,
d\theta
\,,
\end{equation}
where the integral runs over the entire parameter space of the shared parameters.
The probability that there is a parameter shift between the two posteriors
is quantified by the posterior mass above the iso-contour of no shift ($\Delta\theta=0$).
This can be obtained by performing the following integral:
\begin{equation}
\label{eq:delta_par_shift}
\Delta
=
\int_{\mathcal{P}_\Delta(\Delta\theta)>\mathcal{P}_\Delta(0)}
\mathcal{P}_\Delta(\Delta\theta)
\,
d\Delta\theta
\,,
\end{equation}
which is symmetric under the exchange of datasets A$\leftrightarrow$B and gives us the probability $\Delta$.
If $\Delta$ is close to zero, no shift is present and the two datasets are in agreement.
Conversely, a probability $\Delta$ close to one indicates a tension between the datasets; a minimal numerical sketch of this integral for a single shared parameter is given after this list.
\end{itemize}
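As mentioned in the last item above, for the case of a single shared parameter the integral in Eq.~\eqref{eq:delta_par_shift} can be evaluated directly. A minimal Monte Carlo sketch of this one-dimensional case is shown below; it assumes two arrays of equally weighted posterior samples and a kernel density estimate, and is meant only to illustrate the definition, not to reproduce the implementation used for our results.
\begin{verbatim}
# Illustrative Monte Carlo estimate of the 1D parameter-shift probability
# of Eq. (delta_par_shift); sample arrays and names are placeholders.
import numpy as np
from scipy.stats import gaussian_kde

def parameter_shift_probability(samples_a, samples_b, n_pairs=10000, seed=0):
    """Probability Delta that the shift theta_A - theta_B differs from zero,
    given independent, equally weighted 1D posterior samples."""
    rng = np.random.default_rng(seed)
    diff = (rng.choice(samples_a, size=n_pairs)
            - rng.choice(samples_b, size=n_pairs))  # samples of P_Delta
    kde = gaussian_kde(diff)  # density estimate of P_Delta (O(n^2) to evaluate)
    # posterior mass enclosed by the iso-density contour through Delta theta = 0
    return np.mean(kde(diff) > kde(0.0))
\end{verbatim}
A probability close to one then translates into a large significance via the two-sided Gaussian conversion described next.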
For all the tests considered below we will report significances in terms of numbers of standard deviations, obtained by converting probabilities into two-sided Gaussian standard deviations.
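For concreteness, the following sketch collects these conversions, under the $\chi^2$ approximations quoted above for $(d-2\ln S)$ and $Q$; it is illustrative only.
\begin{verbatim}
# Illustrative conversions of test statistics into consistency probabilities
# and two-sided Gaussian standard deviations (assumes the chi^2 approximation).
from scipy.stats import chi2, norm

def pvalue_to_sigma(p):
    """Two-sided Gaussian significance equivalent to a consistency p-value."""
    return norm.isf(p / 2.0)

def suspiciousness_significance(lnS, d):
    """p-value and significance assuming (d - 2 lnS) ~ chi^2_d."""
    p = chi2.sf(d - 2.0 * lnS, df=d)
    return p, pvalue_to_sigma(p)

def gof_loss_significance(Q, n):
    """p-value and significance for the parameter goodness-of-fit statistic."""
    p = chi2.sf(Q, df=n)
    return p, pvalue_to_sigma(p)

def shift_significance(delta):
    """Significance corresponding to a parameter-shift probability Delta."""
    return pvalue_to_sigma(1.0 - delta)
\end{verbatim}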
\section{Analysis}
\label{sec:analysis}
\subsection{Technical details}
One of the objectives of this analysis is to compute the suspiciousness test,
for which the calculation of Bayesian evidences and Kullback-Leibler divergences is required.
In order to obtain such quantities, we perform our numerical scans with
\texttt{PolyChord}~\cite{Handley:2015aa}
and analyse the results using \texttt{anesthetic}~\cite{Handley:2019mfs}.
Implementations of other tests are taken from the code \texttt{Tensiometer}~\footnote{\url{https://github.com/mraveri/tensiometer}.}.
Concerning the implementation of the parameter differences test,
we adopt the \texttt{Tensiometer} when considering multi-dimensional parameter spaces,
while we directly implement the integral in Eq.~\eqref{eq:delta_par_shift}
when dealing with only one parameter.
Our numerical implementation considers
different parameterizations (see later) for the neutrino masses,
which are constrained using a set of cosmological and terrestrial observations.
In order to reduce the random fluctuations that arise from the initial sampling of the live points
in \texttt{PolyChord},
we repeat the nested sampling runs several times for each data combination and
neutrino mass parameterization,
varying the number of live points each time between 500 and 1500.
The quoted results are taken as the mean of the tension metrics applied to each run separately.
\subsection{Parameterizations}
\label{subs:parameterizations}
Our interest below is focused on studying different constraints on neutrino masses.
Considering a model with three massive neutrinos, there are several possible ways
to describe their mass spectrum that have been adopted in the past in the context of cosmological studies, e.g.,
\cite{Hannestad:2016fog,Simpson:2017qvj,Schwetz:2017fey,Gariazzo:2018pei,Heavens:2018adv,Jimenez:2022dkn,Gariazzo:2022ahe}. Below we will present results for two representative examples, denoted by ``$3\nu$'' and
``$\Sigma$'', respectively:
\begin{itemize}
\item $\mathbf{3\nu}$\textbf{-parameterization:}
We consider the three neutrino masses $m_A$, $m_B$, $m_C$
as independent parameters in the analyses.
After sampling the parameters, the masses are ordered from the smallest to the largest and
assigned to the mass eigenstates, depending on the considered mass ordering:
$m_1<m_2<m_3$ for NO, $m_3<m_1<m_2$ for IO.
Similar to \cite{Jimenez:2022dkn} (see also \cite{Schwetz:2017fey,Gariazzo:2022ahe}),
we impose a Gaussian prior on the logarithm of the three neutrino masses,
with the same mean $\mu$ and standard deviation $\sigma$. Hence, neutrino masses are sampled according to a log-normal distribution, without any prior boundaries.\footnote{We have tested alternative sampling methods, such as sampling the masses or the logarithms of the masses uniformly within a given range and then applying the lognormal distribution \cite{Gariazzo:2022ahe}, leading to similar results.} The mean $\mu$ and standard deviation $\sigma$ are hyper-parameters in the analysis. We sample them considering a uniform prior on their logarithm, with bounds $5\cdot10^{-4}<\mu/\text{eV}<0.3$ and $5\cdot10^{-3}<\sigma/\text{eV}<20$, respectively, and marginalize over them.
\item $\mathbf{\Sigma}$\textbf{-parameterization:} We describe the neutrino masses by means of their sum \ensuremath{\sum m_\nu}\ and the two
mass splittings \ensuremath{\Delta m^2_{21}}\ and \ensuremath{|\Delta m^2_{31}|}~\cite{Heavens:2018adv}.
Since, for practical purposes, the current and future cosmological probes considered here depend only on \ensuremath{\sum m_\nu}, it is possible to first marginalize the likelihood of terrestrial data over the two mass splittings and then perform the combined analysis or the compatibility analysis with just one free parameter (\ensuremath{\sum m_\nu}).
We verified that this procedure leads to very similar results as performing the entire calculation with three free parameters (\ensuremath{\sum m_\nu}, \ensuremath{\Delta m^2_{21}}, \ensuremath{|\Delta m^2_{31}|}).
For definiteness we show here only the results sampling \ensuremath{\sum m_\nu}\ with a linear prior, since our checks using a logarithmic prior provide very similar results.
\end{itemize}
Following \cite{Gariazzo:2018pei}, we have also considered a range of other parameterizations, e.g., using either \ensuremath{\sum m_\nu}\ or the lightest neutrino mass together with the two mass splittings \ensuremath{\Delta m^2_{21}}\ and \ensuremath{|\Delta m^2_{31}|}, with uniform prior distributions either on the parameters themselves or on their logarithms.
We identified our benchmark parameterizations $3\nu$ and $\Sigma$ described above as representative examples, and therefore restrict the discussion to these two.
\subsection{Cosmological and terrestrial information on neutrino masses}
\label{subs:data}
The aim of this study is to determine the level of tension between cosmological measurements
of neutrino masses
and terrestrial constraints on the masses and mass splittings.
For that purpose, we shall consider the following data constraints:
\begin{itemize}
\item Neutrino oscillation constraints are simulated by a Gaussian likelihood
on the solar and atmospheric mass differences with mean and standard deviations according to Eq.~\eqref{eq:oscillations}. Note that the $\Delta\chi^2$
between NO and IO from oscillation data does not affect the tension metrics and is therefore not relevant for our analyses.
\item For terrestrial neutrino mass measurements we include the result from
KATRIN by adopting a Gaussian likelihood with \cite{KATRIN:2021uub}
\begin{equation}
\label{eq:mbeta}
m_\beta^2 = (0.06 \pm 0.32) \, \text{eV}^2 \,.
\end{equation}
The region of interest corresponds to quasi-degenerate neutrinos and we can use the approximation for the effective mass parameter relevant for KATRIN:
\begin{equation}
m_\beta^2 \approx \left(\frac{\sum m_\nu}{3}\right)^2 \,.
\end{equation}
Effectively, this provides an upper bound on $\sum m_\nu$ for the terrestrial data; a minimal sketch of the combined likelihood of the oscillation and KATRIN constraints is given after this list.
\end{itemize}
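As anticipated above, a minimal sketch of the combined likelihood of the oscillation and KATRIN constraints is given here, taking the three neutrino masses as input (as in the $3\nu$ parameterization); the Gaussian forms follow Eqs.~\eqref{eq:oscillations} and \eqref{eq:mbeta} together with the quasi-degenerate approximation for $m_\beta$, while the function and variable names are purely illustrative.
\begin{verbatim}
# Illustrative -2 ln L for the terrestrial data (oscillations + KATRIN),
# given three neutrino masses in eV; Gaussian likelihoods as described above.
DM21, S21 = 7.50e-5, 0.21e-5             # eV^2 (solar splitting)
DM31 = {"NO": 2.550e-3, "IO": 2.450e-3}  # eV^2 (|Delta m^2_31|)
S31 = 0.025e-3                           # eV^2
MB2, SB2 = 0.06, 0.32                    # eV^2 (KATRIN m_beta^2)

def chi2_terrestrial(m1, m2, m3, ordering="NO"):
    dm21 = m2**2 - m1**2
    dm31 = abs(m3**2 - m1**2)
    chi2 = ((dm21 - DM21) / S21)**2 + ((dm31 - DM31[ordering]) / S31)**2
    # KATRIN, using the quasi-degenerate approximation m_beta^2 ~ (sum/3)^2
    mb2 = ((m1 + m2 + m3) / 3.0)**2
    return chi2 + ((mb2 - MB2) / SB2)**2

# e.g. the minimal NO spectrum (m1 = 0) is essentially unconstrained by KATRIN:
print(chi2_terrestrial(0.0, 0.00866, 0.05050, ordering="NO"))
\end{verbatim}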
The combination of these two data sets is denoted as ``terrestrial'' in the following. For the cosmological data we consider current data, as well as two possible future scenarios:
\begin{itemize}
\item For current cosmological data, we consider the full posterior distribution obtained using
Planck temperature, polarization and lensing data together with Supernovae Ia luminosity distance measurements and Baryon Acoustic Oscillations plus Redshift Distortions from SDSS IV, which corresponds to a 95\% CL upper limit
$\ensuremath{\sum m_\nu}<0.09$~eV~\cite{DiValentino:2021hoh}.
\item For future cosmological probes, we shall consider a precision of 0.02~eV on the sum of neutrino masses \cite{Font-Ribera:2013rwa,Basse:2013zua,Hamann:2012fe,Carbone:2010ik,Brinckmann:2018owf} and two alternative scenarios: either a value for \ensuremath{\sum m_\nu}\ corresponding to the minimal value as predicted for the NO, see Eq.~\eqref{eq:lower_bound},
\begin{equation}\label{eq:futureNO}
\ensuremath{\sum m_\nu}=0.06\pm0.02\, \text{eV \quad (future NO)}\,,
\end{equation}
or a hypothetical non-observation of finite neutrino masses in cosmology,
\begin{equation}\label{eq:future0}
\ensuremath{\sum m_\nu}=0 \pm0.02\, \text{eV \quad (future 0)}\,.
\end{equation}
Note that the latter case, by construction, is in tension with oscillation data. We will use the statistical tests discussed above to quantify this statement. In both cases, we assume a Gaussian likelihood for \ensuremath{\sum m_\nu}.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{posteriors_summnu.pdf}
\caption{Posterior of $\ensuremath{\sum m_\nu}$ for NO (IO) in solid (dashed) lines,
given different data sets, either terrestrial or cosmological ones (no combinations are shown).
}
\label{fig:S_mnu_post}
\end{figure}
Figure~\ref{fig:S_mnu_post} shows the posteriors for various data sets using the $\Sigma$-parameterization. We observe the top-hat shaped distribution for terrestrial data (red curves), with the lower bound provided by oscillations (its value depending on NO or IO) and the upper bound provided by KATRIN. The interplay with the assumed cosmological data sets is apparent from the figure, and below we are going to quantify possible existing tensions among them.
\section{Results}
\label{sec:results}
\begin{table*}[t]
\centering
\input{summary.tex}
\caption{Tension between cosmological and terrestrial neutrino mass determination assuming different cosmological data sets: current data, a future observation with a value for \ensuremath{\sum m_\nu}\ consistent with NO (future NO, Eq.~\eqref{eq:futureNO}), and a non-observation of \ensuremath{\sum m_\nu}\ (future 0, Eq.~\eqref{eq:future0}). In each case we show results for the two parameterizations $3\nu$ and $\Sigma$ (see section~\ref{subs:parameterizations}) and the two mass orderings. The table shows the test statistics $\ln S$, $Q_{\rm DMAP}$ from Eqs.~\eqref{eq:lnS} and \eqref{eq:PG} respectively, and the probabilities of the data sets being consistent, $p_S$, $1-P(Q_{\rm DMAP})$ and $1-P(\Delta)$, corresponding to the suspiciousness test, the parameter goodness-of-fit test, and the parameter shift test, respectively. In the right part of the table we show the Bayesian model dimensionalities according to Eqs.~\eqref{eq:bmd} and \eqref{eq:d}, indicating with $A$ the terrestrial and $B$ the cosmological data sets. The values for $p_S$ [as well as for $1-P(Q_{\rm DMAP})$] are calculated with the parameter counting according to Eq.~\eqref{eq:n}, i.e., for 3 (1) dof for the $3\nu$ ($\Sigma$) parameterization.}
\label{tab:summary}
\end{table*}
Let us now present the results of our analysis of the consistency or possible tension between cosmology and terrestrial neutrino mass determinations. The numerical results for the suspiciousness, parameter goodness-of-fit, and parameter shift tests are summarized in Tab.~\ref{tab:summary}. We show the results for the corresponding test statistics as well as significances. The compatibility is tested assuming either the current cosmological likelihood, or possible future determinations of \ensuremath{\sum m_\nu}, with the two cases future~NO and future~0 discussed in section~\ref{subs:data}. Furthermore, we check how the results depend on the neutrino mass ordering (normal versus inverted) as well as on the parameterization used for the neutrino masses ($3\nu$ versus $\Sigma$, see section~\ref{subs:parameterizations}).
\subsection{Suspiciousness and parameter goodness-of-fit tests}
\begin{figure*}
\centering
\includegraphics[width=0.8\columnwidth]{legend}\\
\includegraphics[width=0.99\columnwidth]{d2lnS}$\qquad$
\includegraphics[width=0.99\columnwidth]{Q_DMAP} \\
\includegraphics[width=0.99\columnwidth]{pValS_N}$\qquad$
\includegraphics[width=0.99\columnwidth]{Q_DMAP_N}
\caption{Tension between cosmological and terrestrial experiments according to the suspiciousness test (left panels) and parameter goodness-of-fit test (right panels). Upper panels show the corresponding test statistics as defined in Eqs.~\eqref{eq:lnS} and \eqref{eq:PG}. The left (right) part in each panel corresponds to the $3\nu$ ($\Sigma$) parameterization as defined in Sec.~\ref{subs:parameterizations}.
Lower panels show the corresponding significance in numbers of standard deviations obtained by converting to a $p$-value assuming a $\chi^2_d$ distribution, where $d = 3$ (1) for the
$3\nu$ ($\Sigma$) parameterization. Different colors indicate different assumptions on the cosmological data set, see Sec.~\ref{subs:data}, and square (triangle) symbols correspond to normal (inverted) neutrino mass ordering.}
\label{fig:lnS-PG}
\end{figure*}
We start by discussing the results for the suspiciousness and the parameter goodness-of-fit test, which are illustrated graphically in Fig.~\ref{fig:lnS-PG}. In the upper panels we show the corresponding test statistics $(d-2\ln S)$ and $Q_{\rm DMAP}$, see Eqs.~\eqref{eq:lnS} and \eqref{eq:PG}. We see from the figure that these quantities are numerically very similar for the two tests, as well as for the two parameterizations. For the parameter goodness-of-fit test, the quantity $Q_{\rm DMAP}$ is obtained by taking $\hat\theta_D$ in Eq.~\eqref{eq:PG} as the parameter values at the maximum of the posterior. For the $\Sigma$ parameterization, we have only one relevant parameter (namely $\sum m_\nu$) for which we take a flat linear prior. Hence, in this case maximum posterior (MAP) and maximum likelihood (MLH) are identical and therefore $Q_{\rm DMAP} \equiv \chi^2_{\rm PG}$. For the $3\nu$ parameterization we adopt flat priors in the logarithm of the three neutrino masses, constrained by hyper-parameters (see section~\ref{subs:parameterizations}). Hence, here in principle, MAP and MLH are not identical. However, we see from Fig.~\ref{fig:lnS-PG} and Tab.~\ref{tab:summary} that the $Q_{\rm DMAP}$ values for the $3\nu$ and $\Sigma$ parameterizations are very close, and hence we find also for $3\nu$ that the relationship $Q_{\rm DMAP} \approx \chi^2_{\rm PG}$ holds to good accuracy.
Under certain regularity conditions the quantities shown in the upper panels of Fig.~\ref{fig:lnS-PG} follow a $\chi^2_d$ distribution, with $d$ corresponding to the effective number of parameters in common to the two data sets, as defined in Eqs.~\eqref{eq:d} and \eqref{eq:n}, respectively. We give the Bayesian model dimensionalities obtained for the various data set combinations in the right part of Tab.~\ref{tab:summary}. We observe that in many cases Eq.~\eqref{eq:d} leads to negative values for $d$, which do not correspond to a meaningful $\chi^2$ definition. As we discuss in the Appendix, this follows from the properties of the Bayesian model dimensionality and the specific shape of the posteriors in our application. Therefore, using Bayesian dimensionalities does not appear suitable in our case to evaluate the effective number of degrees of freedom. Instead, we are going to use the simple parameter counting from Eq.~\eqref{eq:n} also in the case of the suspiciousness test, which gives $n=3$ or 1 for the $3\nu$ or $\Sigma$ parameterization, corresponding either to the 3 neutrino masses or to the single parameter $\sum m_\nu$, respectively.\footnote{Using a $\chi^2_d$ distribution for $(d-2\ln S)$ in any case requires regularity conditions, such as Gaussian-shaped posteriors. Large deviations of $d_D$ from naive parameter counting signal non-Gaussian posteriors. The probabilities reported in Tab.~\ref{tab:summary} and lower panels of Fig.~\ref{fig:lnS-PG} have to be interpreted with care, and are understood {\it under the assumption} that $(d-2\ln S)$ follows a $\chi^2_n$ distribution with $n$ given in Eq.~\eqref{eq:n}.}
We observe from the lower panels of Fig.~\ref{fig:lnS-PG} that, although the test statistics themselves are very similar, the tension quantified by the corresponding significance is somewhat stronger in the $\Sigma$ parameterization, due to the smaller number of dof. This effect is a known property of the parameter goodness-of-fit test: introducing more model parameters reduces the tension \cite{Maltoni:2003cu}.
Let us now discuss the physics results. Notice that current cosmological data (blue symbols) show mild tension with terrestrial data for NO at the level of $1-2\sigma$ and for IO at the level of $2-3\sigma$. We can neither claim significant tension, nor disfavour IO due to strong tension.
Assuming a future determination of $\sum m_\nu$ of 0.06~eV according to the minimal NO value (green symbols), both tests show full consistency of the data sets for NO and disfavour IO at the level of $2\sigma$. This is expected and in agreement with the trivial observation that for a determination according to Eq.~\eqref{eq:futureNO}, $\sum m_\nu \ge 0.1$~eV can be excluded at $2\sigma$. Note also that the tension for IO in this case even slightly decreases with respect to current data, due to the finite mean value for $\sum m_\nu$. In particular, the PG test in the $3\nu$ parameterization signals a tension for IO only at the $1\sigma$ level, i.e., no tension. Moving now to the hypothetical case of no mass detection by future cosmological data (magenta symbols), we see very strong tension for IO (above $4\sigma$), but also significant tension for NO (between 2 and $3\sigma$). Hence, rejection of IO on the basis of this tension becomes problematic, since the alternative hypothesis also suffers from a non-negligible tension.
\subsection{Parameter shift test}
\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth]{legend}\\
\includegraphics[width=0.99\columnwidth]{shift_N}
\caption{Tension between cosmological and terrestrial experiments in terms of Gaussian standard deviations according to the parameter shift test as defined in Eq.~\eqref{eq:delta_par_shift}. The left (right) part of the figure corresponds to the $3\nu$ ($\Sigma$) parameterization, see Sec.~\ref{subs:parameterizations}. Different colors indicate different assumptions on the cosmological data set, see Sec.~\ref{subs:data}, and square (triangle) symbols correspond to NO (IO) mass ordering.}
\label{fig:shifts}
\end{figure}
Figure~\ref{fig:shifts} depicts the corresponding results for the parameter shift test. In general we observe a pattern similar to that for the suspiciousness and parameter goodness-of-fit tests, and the physics interpretation is similar. However, we notice that in all cases the parameter shift test leads to a higher tension. According to the parameter shift test, current data show a tension of $\gtrsim 2\sigma$ ($\gtrsim 3\sigma$) for NO (IO). The non-observation of neutrino mass by future cosmology will lead to a (very) strong tension with oscillation data regardless of the mass ordering. Even for future NO, some tension close to the 2$\sigma$ level still remains for NO in the case of the $\Sigma$ parameterization.
The reason for the relatively stronger tensions obtained with the parameter shift test is a Bayesian volume effect. This test, as defined in Eq.~\eqref{eq:delta_par_shift}, measures the relative size of the overlap of the posterior volumes in parameter space of the two data sets. As an example, we can see from Fig.~\ref{fig:S_mnu_post} that even for the future NO case, the overlap volume with the terrestrial posterior is rather small. The result of the parameter shift test depends on the available parameter volume of the data sets, in particular on the upper bound on $\sum m_\nu$ from KATRIN: the tension will become stronger (weaker) for a weaker (stronger) upper bound on $\sum m_\nu$, just by increasing (decreasing) the terrestrial posterior volume in the region far away from the cosmological posterior volume.\footnote{We have checked that this effect is still relatively small if we use the final KATRIN sensitivity instead of the present result, for which results of the parameter shift test are rather similar.} Note also that there is no systematic trend when switching from the $3\nu$ to the $\Sigma$ parameterization: while for current data the tension becomes weaker, for future NO as well as future 0 it becomes stronger (both for NO and IO).
\subsection{Mass ordering comparison}
Let us now briefly compare the tension measures presented above to a direct model comparison of NO versus IO. To this aim we consider the so-called Bayes factor, in analogy to the Bayesian evidence ratio from Eq.~\eqref{eq:lnZratio}:
\begin{equation}\label{eq:B}
B_{\rm NO,IO}\equiv
\frac{\mathcal{Z}_{\rm NO}}{\mathcal{Z}_{\rm IO}} \,.
\end{equation}
This quantity describes the Bayesian odds in favour of NO, i.e., large values of $B_{\rm NO,IO}$ correspond to a preference for NO. We convert Bayes factors into probabilities by using $P_{\rm NO}=B_{\rm NO,IO}/(1+B_{\rm NO,IO})$ and $P_{\rm IO}=1/(1+B_{\rm NO,IO})$ (given equal initial prior probabilities).
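The short sketch below makes this conversion explicit, using the same two-sided Gaussian convention adopted throughout; it is illustrative only and assumes equal prior odds for the two orderings.
\begin{verbatim}
# Illustrative conversion of a log-Bayes factor into probabilities and a
# Gaussian-equivalent significance for IO (equal prior odds assumed).
import numpy as np
from scipy.stats import norm

def mass_ordering_preference(lnB_NO_IO):
    B = np.exp(lnB_NO_IO)
    p_no, p_io = B / (1.0 + B), 1.0 / (1.0 + B)
    return p_no, p_io, norm.isf(p_io / 2.0)  # sigma equivalent for IO

print(mass_ordering_preference(2.0))  # ln B = 2  ->  P_IO ~ 0.12, ~1.6 sigma
\end{verbatim}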
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{lnBnoio}
\caption{Bayes factors in favor of NO for various data combinations and the two
considered parameterizations. Significances indicated by the background shading correspond to the probability for IO obtained as $P_{\rm IO}=1/(1+B_{\rm NO,IO})$ converted into Gaussian standard deviations.}
\label{fig:lnBnoio}
\end{figure}
Figure~\ref{fig:lnBnoio} shows the logarithm of the Bayes factor. Here we show only the contribution from the available parameter-space volume arising from the interplay of cosmological and terrestrial data, in order to compare with the tension measures discussed above. We note that the direct contribution from the $\chi^2$ difference between NO and IO from oscillation data alone \cite{deSalas:2020pgw,Esteban:2020cvm,Capozzi:2021fjo}, which may provide additional NO/IO discrimination in the Bayes factor, is not considered here; see \cite{Gariazzo:2022ahe} for a recent discussion.
The black, red, and dark-blue symbols in the figure correspond to using the prior only, terrestrial data alone, and current cosmology without terrestrial data, respectively. None of these cases shows any significant mass-ordering preference. Note that the slightly non-zero value of $\ln B_{\rm NO,IO}$ for terrestrial data in the $3\nu$ parameterization is a pure volume effect. By comparing the
$\ln B_{\rm NO,IO}$ results for the combination of terrestrial and cosmological data (light-blue, green and magenta), we observe a significant dependence on the parameterization. This is in line with the arguments discussed in \cite{Gariazzo:2022ahe,Schwetz:2017fey}, where it is stressed that parameterizations with three independent neutrino masses in general lead to a strong preference for NO compared to other parameterizations. Indeed, from Fig.~\ref{fig:lnBnoio} we see a difference of approximately $1\sigma$ between the two considered parameterizations.
Concerning future cosmological data,
a measurement $\ensuremath{\sum m_\nu}=0.06\pm0.02$~eV would provide a significance
of approximately $2-3\sigma$ in favor of NO. Hence, from this argument alone (i.e., without using additional information from oscillation data), a precision such as the one considered here is not sufficient for a decisive determination of the mass ordering.
In the ``future 0'' case, for which the measurements prefer a value $\ensuremath{\sum m_\nu}=0$~eV,
the preference for NO is strong, close to the $4\sigma$ level (even for the $\Sigma$ parameterization).
This result, however, is a consequence of the stronger rejection of the region at $\ensuremath{\sum m_\nu}>0.1$~eV with respect to the one at $\ensuremath{\sum m_\nu}>0.06$~eV, and does not take into account that the NO solution also suffers from a tension between cosmology and oscillation data, as discussed in the previous sections.
\section{Conclusions}
\label{sec:conclusions}
The neutrino mass sensitivity from cosmological data analyses is entering an exciting phase, approaching the minimal values for $\ensuremath{\sum m_\nu}$ as required by oscillation data, i.e., $\ensuremath{\sum m_\nu} \approx 0.06$ (0.1)~eV for NO (IO).
In this manuscript we discuss quantitative measures to evaluate a possible tension between cosmology and terrestrial neutrino mass determinations. In particular we have applied the Bayesian suspiciousness test, parameter goodness-of-fit tests and Bayesian parameter differences, and studied implications for current cosmological data or sensitivities to be expected in the near future. In the latter case we assume an accuracy of 0.02~eV on $\ensuremath{\sum m_\nu}$ and consider two possible scenarios, either a mean value of $\ensuremath{\sum m_\nu} = 0.06$~eV, i.e. the minimal value predicted for NO, or $\ensuremath{\sum m_\nu} = 0$, i.e., a hypothetical non-observation of neutrino mass in cosmology. Our main conclusions can be summarized as follows:
\begin{itemize}
\item Current data show modest tension between cosmology and terrestrial data, at the level of $1-2\sigma$ for NO and $2-3\sigma$ for IO.
\item If future cosmology finds a value of $\ensuremath{\sum m_\nu}\approx 0.06$~eV
(corresponding to NO with vanishing lightest neutrino mass), the tension for IO will be at the level of $2-3\sigma$. Hence, the assumed accuracy on $\ensuremath{\sum m_\nu}$ of 0.02~eV is not sufficient to exclude IO with decisive, strong significance.
\item If future cosmological measurements do not find evidence for a non-zero neutrino mass, the tension with terrestrial data will be at the level of $2-3\sigma$ for NO and $\gtrsim 4\sigma$ for IO. Only in this case can IO be strongly disfavoured, however at the price of a tension between cosmology and terrestrial data being present also for NO.
\item Bayesian suspiciousness and parameter goodness-of-fit tests give very similar results. In both cases, tension quantified in terms of significances depends on the number of model parameters.
\item We find that Bayesian model dimensionality is not a useful measure for the relevant degrees of freedom in our case of interest; our results are based on ``naive'' parameter counting.
\item Parameter differences in general show stronger tensions and depend on priors and parameterizations in a non-trivial way.
\end{itemize}
In conclusion, in this work we have emphasized the well-known fact that the neutrino mass-ordering sensitivity in cosmological data analyses emerges from the available parameter space in the interplay between cosmology and neutrino oscillation data. In such a situation, the relative comparison of NO and IO in terms of model-comparison may be misleading. Excluding one of the hypotheses makes sense only if the alternative hypothesis provides a good fit to the data. We have used statistical tests as tension diagnostics between data sets in order to address this point. We argue that IO can only be excluded in a meaningful way if cosmology finds a result for $\ensuremath{\sum m_\nu}$ consistent with the NO prediction. Quantitatively we find that, based on this argument, an accuracy better than 0.02~eV from cosmological observations will be absolutely required in order to reject the inverted mass ordering with decisive significance.
If a tension between cosmological measurements and oscillation results is found also in the NO case, we will have a striking signal for a non-standard cosmological model beyond the vanilla $\Lambda$CDM model and/or
non-standard neutrino properties.
\begin{acknowledgments}
This project has received support from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreements
No 754496 (FELLINI) and No 860881 (HIDDeN). OM is supported by the MCIN/AEI/10.13039/501100011033 of Spain under grant PID2020-113644GB-I00, by the Generalitat Valenciana of Spain under the grant PROMETEO/2019/083 and by the European Union’s Framework Programme for Research and Innovation Horizon 2020 (2014–2020) under grant H2020-MSCA-ITN-2019/860881-HIDDeN.
\end{acknowledgments}
\section{Introduction} \label{sec:intro}
The Universe is currently expanding at an increasing rate, a finding supported by a number of observations. The most widely accepted explanation for this acceleration is dark energy, a hypothetical substance with negative pressure. In the spatially-flat $\Lambda$CDM\ model, \citep{peeb84}, dark energy is taken to be a cosmological constant and contributes about 70\% of the total energy budget of the current Universe. However, recent observations may indicate potential discrepancies with this model, \citep{DiValentinoetal2021b,PerivolaropoulosSkara2021,Morescoetal2022,Abdallaetal2022,Hu:2023jqc}, and have led to the exploration of alternate models that allow for non-zero spatial curvature and dark energy dynamics. In our analyses here we also explore some of these alternatives.
Cosmological models have been compared and cosmological parameter constraints have been determined using various observations, including cosmic microwave background (CMB) anisotropy data, \citep{planck2018b}, that probe the high-redshift Universe, and lower-redshift expansion-rate observations like those we use here. These lower-redshift data sets include better-established probes such as Hubble parameter [$H(z)$] data that reach to redshift $z\sim2$, and baryon acoustic oscillation (BAO) and type Ia supernova (SN Ia) measurements that reach to $z\sim2.3$, \cite{Yuetal2018, eBOSS_2020, Brout:2022vxf}, as well as emerging probes such as H\,\textsc{ii}\ starburst galaxy (H\,\textsc{ii}G) apparent magnitude data that reach to $z\sim2.5$, \citep{Mania_2012, Chavez_2014, GM2021, Johnsonetal2022, Mehrabietal2022, CaoRyanRatra2022}, quasar angular size (QSO-AS) measurements that reach to $z\sim2.7$, \citep{Cao_et_al2017b, Ryanetal2019, CaoRyanRatra2020, Zhengetal2021, Lian_etal_2021}, reverberation-measured (RM) Mg\,\textsc{ii}\ and C\,\textsc{iv}\ quasar (QSO) measurements that reach to $z\sim3.4$, \citep{Czernyetal2021, Zajaceketal2021, Yuetal2021, Khadkaetal_2021a, Khadka:2022ooh, Cao:2022pdv, Czerny:2022xfj}, and gamma-ray burst (GRB) data that reach to $z\sim8.2$, \citep{Wang_2016, Dirirsa2019, KhadkaRatra2020c, Caoetal_2021, Dainottetal2020, Huetal_2021, Daietal_2021, Demianskietal_2021, Khadkaetal_2021b, CaoDainottiRatra2022b, DainottiNielson2022}, of which only 118 Amati-correlated (A118) GRBs, with lower intrinsic dispersion, are suitable for cosmological purposes, \citep{Khadkaetal_2021b, LuongoMuccino2021, CaoKhadkaRatra2021, CaoDainottiRatra2022, Liuetal2022}.
In our analyses here we also exclude RM $\mathrm{H}\beta$ QSO data that probe to $z \sim 0.9$, \citep{Czernyetal2021, Zajaceketal2021, Khadkaetal2021c}, because the resulting cosmological parameter constraints are in $\sim 2\sigma$ tension with those from more established probes. QSO flux observations that reach to $z \sim 7.5$ have been studied, \citep{RisalitiLusso2015, RisalitiLusso2019, KhadkaRatra2020a, Yangetal2020, KhadkaRatra2020b, Lussoetal2020, KhadkaRatra2021, KhadkaRatra2022, Rezaeietal2022, Luongoetal2021, DainottiBardiacchi2022}; however, we also exclude these QSOs from our analyses here since the latest QSO flux compilation, \cite{Lussoetal2020}, is not standardizable, \citep{KhadkaRatra2021, KhadkaRatra2022, Petrosian:2022tlp, Khadka:2022aeg}.
We use only the above-listed, not unreliable, lower-redshift ($z \leq 8.2$) expansion-rate data sets to derive cosmological parameter constraints. This is because we want to derive constraints that are independent of CMB anisotropy data and also independent of the more conventional distance-ladder measurements of the Hubble constant $H_0$, which in some cases give contradictory results for the value of $H_0$, \citep{DiValentinoetal2021b,PerivolaropoulosSkara2021,Morescoetal2022,Abdallaetal2022,Hu:2023jqc}. We also do not use growth-rate data here, since use of such data requires the assumption of a primordial inhomogeneity power spectrum and so additional freedom.
In this paper we build and improve upon our earlier work in Ref.\ \cite{CaoRatra2022}. In particular we use new Pantheon\!\raisebox{0.2ex}{+}\ SN Ia (SNP\!\raisebox{0.2ex}{+}) data, update our BAO data compilation, and update our $H(z)$ compilation while now also accounting for the correlations between some of the $H(z)$ measurements. We now also include new RM C\,\textsc{iv} QSO\ data and more correctly account for the asymmetric errors in RM Mg\,\textsc{ii} QSO\ and C\,\textsc{iv} QSO\ data. We show that the results from each of the individual data sets are mutually consistent and also show that the updated joint results do not differ significantly from the joint results of Ref.\ \cite{CaoRatra2022}. In particular, our joint analysis of new $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ + QSO-AS + H\,\textsc{ii}G\ + Mg\,\textsc{ii}\ + C\,\textsc{iv}\ + A118 data here yields summary model-independent values of the non-relativistic matter density parameter $\Omega_{m0}=0.288\pm0.017$ and $H_0=69.8\pm1.3$ $\rm{km \ s^{-1} \ Mpc^{-1}}$, which are $0.29\sigma$ lower and $0.057\sigma$ higher than the summary joint constraints from Ref.\ \cite{CaoRatra2022}, $\Omega_{m0}=0.295\pm0.017$ and $H_0=69.7\pm1.2$ $\rm{km \ s^{-1} \ Mpc^{-1}}$.
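The quoted differences in units of $\sigma$ are consistent with simply combining the old and new uncertainties in quadrature, as the short (illustrative) check below shows.
\begin{verbatim}
# Quick check of the quoted sigma-differences between the new and old joint
# constraints, combining the two errors in quadrature (illustrative only).
import numpy as np

def sigma_difference(x1, e1, x2, e2):
    return abs(x1 - x2) / np.hypot(e1, e2)

print(sigma_difference(0.288, 0.017, 0.295, 0.017))  # ~ 0.29 (Omega_m0)
print(sigma_difference(69.8, 1.3, 69.7, 1.2))        # ~ 0.057 (H_0)
\end{verbatim}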
Our $H_0$ measurement is in better agreement with the median statistics $H_0$ estimate of Ref.\ \cite{chenratmed} and the Tip of the Red Giant Branch (TRGB) local expansion rate $H_0$ estimate of Ref.\ \cite{Freedman2021} than with the SN Ia and Cepheids local expansion rate $H_0$ estimate of Ref.\ \cite{Riessetal2022}. These data show at most mild evidence for non-flat spatial hypersurfaces, but more significant evidence for dark energy dynamics, 2$\sigma$ or larger in the spatially-flat dynamical dark energy models we study. Based on the deviance information criterion, flat $\phi$CDM\ is the most favored cosmological model.
This paper is organized as follows. In Section \ref{sec:model} we summarize the cosmological models and parametrizations used in our analyses. Section \ref{sec:data} provides a detailed description of the data sets used in our analyses. The methods we employ are summarized in Section \ref{sec:analysis}. In Section \ref{sec:results} we present our findings on the constraints of cosmological parameters. Finally, we summarize our conclusions in Section \ref{sec:conclusion}.
\section{Cosmological models}
\label{sec:model}
We use six cosmological models to study the data sets we consider; these models cover different combinations of flat or non-flat spatial geometry and of a cosmological constant or a dynamical dark energy density, and the data are used to constrain their parameters. For recent determinations of constraints on spatial curvature see Refs.\ \citep{Ranaetal2017, Oobaetal2018a, Oobaetal2018b, ParkRatra2019a, ParkRatra2019b, DESCollaboration2019, Lietal2020, EfstathiouGratton2020, DiValentinoetal2021a, KiDSCollaboration2021, Vagnozzietal2021, ArjonaNesseris2021, Dhawanetal2021, Renzietal2021, Gengetal2022, WeiMelia2022, MukherjeeBanerjee2022, Glanvilleetal2022, Wuetal2022, deCruzPerezetal2022, DahiyaJain2022, Stevensetal2022} and references therein. We compare the goodness of fit of these models and study how model-dependent the cosmological parameter constraints are.
To compute cosmological parameter constraints in these models we use the Hubble parameter, $H(z, \textbf{p})$, which is a function of redshift $z$ and the cosmological parameters $\textbf{p}$ in the given model. The Hubble parameter is related to the expansion rate function and the Hubble constant as $H(z, \textbf{p}) = H_0E(z, \textbf{p})$. In these models we assume the presence of one massive and two massless neutrino species, with the total neutrino mass ($\sum m_{\nu}$) being $0.06$ eV and an effective number of relativistic neutrino species of $N_{\rm eff} = 3.046$. This allows us to compute the current non-relativistic matter density parameter value, $\Omega_{m0}$, from the current values of the physical energy density parameters for non-relativistic neutrinos ($\Omega_{\nu}h^2$), baryons ($\Omega_{b}h^2$), and cold dark matter ($\Omega_{c}h^2$), and the Hubble constant $h$ in units of 100 $\rm{km \ s^{-1} \ Mpc^{-1}}$, as $\Omega_{m0} = (\Omega_{\nu}h^2 + \Omega_{b}h^2 + \Omega_{c}h^2)/{h^2}$, where $\Omega_{\nu}h^2=\sum m_{\nu}/(93.14\ \rm eV)$ is a constant.
In the flat and non-flat $\Lambda$CDM\ models the expansion rate function is
\begin{equation}
\label{eq:EzL}
E(z, \textbf{p}) = \sqrt{\Omega_{m0}\left(1 + z\right)^3 + \Omega_{k0}\left(1 + z\right)^2 + \Omega_{\Lambda}},
\end{equation}
where the current value of the spatial curvature energy density parameter $\Omega_{k0} = 0$ in flat $\Lambda$CDM\ and the cosmological constant dark energy density parameter $\Omega_{\Lambda} = 1 - \Omega_{m0} - \Omega_{k0}$. The cosmological parameters being constrained are $\textbf{p}=\{H_0, \Omega_{b}h^2\!, \Omega_{c}h^2\}$ and $\textbf{p}=\{H_0, \Omega_{b}h^2\!, \Omega_{c}h^2\!, \Omega_{k0}\}$ in the flat and non-flat $\Lambda$CDM\ models, respectively.
In the flat and non-flat XCDM parametrizations,
\begin{equation}
\label{eq:EzX}
\resizebox{0.475\textwidth}{!}{%
$E(z, \textbf{p}) = \sqrt{\Omega_{m0}\left(1 + z\right)^3 + \Omega_{k0}\left(1 + z\right)^2 + \Omega_{{\rm X}0}\left(1 + z\right)^{3\left(1 + w_{\rm X}\right)}},$%
}
\end{equation}
where the current value of the X-fluid dynamical dark energy density parameter $\Omega_{{\rm X}0} = 1 - \Omega_{m0} - \Omega_{k0}$ and the X-fluid (dark energy) equation of state parameter $w_{\rm X} = p_{\rm X}/\rho_{\rm X}$ is allowed to take values different from $-1$ (which corresponds to a cosmological constant), where $p_{\rm X}$ and $\rho_{\rm X}$ are the pressure and energy density of the X-fluid, respectively. The cosmological parameters being constrained are $\textbf{p}=\{H_0, \Omega_{b}h^2\!, \Omega_{c}h^2\!, w_{\rm X}\}$ and $\textbf{p}=\{H_0, \Omega_{b}h^2\!, \Omega_{c}h^2\!, w_{\rm X}, \Omega_{k0}\}$ in the flat and non-flat XCDM parametrizations, respectively.
In the flat and non-flat $\phi$CDM\ models, \citep{peebrat88,ratpeeb88,pavlov13},
\begin{equation}
\label{eq:Ezp}
E(z, \textbf{p}) = \sqrt{\Omega_{m0}\left(1 + z\right)^3 + \Omega_{k0}\left(1 + z\right)^2 + \Omega_{\phi}(z,\alpha)},
\end{equation}
where the scalar field ($\phi$) dynamical dark energy density parameter is
\begin{equation}
\label{Op}
\Omega_{\phi}(z,\alpha)=\frac{1}{6H_0^2}\bigg[\frac{1}{2}\dot{\phi}^2+V(\phi)\bigg],
\end{equation}
which is determined by numerically solving the Friedmann equation \eqref{eq:Ezp} and the equation of motion of the scalar field
\begin{equation}
\label{em}
\ddot{\phi}+3H\dot{\phi}+V'(\phi)=0,
\end{equation}
with an overdot and a prime denoting a derivative with respect to time and $\phi$, respectively. Here we assume an inverse power-law scalar field potential energy density
\begin{equation}
\label{PE}
V(\phi)=\frac{1}{2}\kappa m_p^2\phi^{-\alpha},
\end{equation}
where $m_p$ is the Planck mass, $\alpha$ is a positive constant ($\alpha=0$ corresponds to a cosmological constant), and $\kappa$ is a constant that is determined by the shooting method in the Cosmic Linear Anisotropy Solving System (\textsc{class}) code \citep{class}. The cosmological parameters being constrained are $\textbf{p}=\{H_0, \Omega_{b}h^2\!, \Omega_{c}h^2\!, \alpha\}$ and $\textbf{p}=\{H_0, \Omega_{b}h^2\!, \Omega_{c}h^2\!, \alpha, \Omega_{k0}\}$ in the flat and non-flat $\phi$CDM\ models, respectively. For recent studies on constraints on $\phi$CDM\ see Refs.\ \citep{Zhaietal2017, ooba_etal_2018b, ooba_etal_2019, park_ratra_2018, park_ratra_2019b, park_ratra_2020, SolaPercaulaetal2019, Singhetal2019, UrenaLopezRoy2020, SinhaBanerjee2021, Xuetal2021, deCruzetal2021, Jesusetal2022, Adiletal2022} and related references within these papers.
Note that in analyses of some of the data sets we use, we set $H_0=70$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ and $\Omega_{b}=0.05$, with $\textbf{p}$ changing accordingly, because these data are unable to constrain $H_0$ and $\Omega_{b}$.
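As an illustration of Eqs.~\eqref{eq:EzL} and \eqref{eq:EzX} (the $\phi$CDM\ case additionally requires numerically solving Eqs.~\eqref{eq:Ezp} and \eqref{em}, with the constant $\kappa$ determined via the \textsc{class} shooting method as described above), a minimal Python sketch of the expansion rate functions and of the assembly of $\Omega_{m0}$ from the physical density parameters is given below. It is not the analysis code used in this paper, and the example parameter values are hypothetical.
\begin{verbatim}
# Illustrative expansion rate functions E(z) of Eqs. (EzL) and (EzX), and
# Omega_m0 assembled from the physical density parameters (one massive neutrino).
import numpy as np

def omega_m0(omega_b_h2, omega_c_h2, h, sum_mnu_eV=0.06):
    """Omega_m0 = (Omega_nu h^2 + Omega_b h^2 + Omega_c h^2) / h^2."""
    return (sum_mnu_eV / 93.14 + omega_b_h2 + omega_c_h2) / h**2

def E_LCDM(z, Om0, Ok0=0.0):
    """Flat (Ok0 = 0) or non-flat LambdaCDM, Eq. (EzL)."""
    OL = 1.0 - Om0 - Ok0
    return np.sqrt(Om0 * (1 + z)**3 + Ok0 * (1 + z)**2 + OL)

def E_XCDM(z, Om0, wX, Ok0=0.0):
    """Flat or non-flat XCDM, Eq. (EzX); wX = -1 recovers LambdaCDM."""
    OX0 = 1.0 - Om0 - Ok0
    return np.sqrt(Om0 * (1 + z)**3 + Ok0 * (1 + z)**2
                   + OX0 * (1 + z)**(3 * (1 + wX)))

# example (hypothetical values): H(z) = H0 E(z) at z = 1 in flat LambdaCDM
Om0 = omega_m0(0.0224, 0.120, 0.698)
print(69.8 * E_LCDM(1.0, Om0))  # km/s/Mpc
\end{verbatim}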
\section{Data}
\label{sec:data}
\begin{table}
\centering
\begin{threeparttable}
\caption{32 $H(z)$ data.}\label{tab:hz}
\setlength{\tabcolsep}{7.5mm}{
\begin{tabular}{lcc}
\hline
$z$ & $H(z)$\tnote{a} & Reference\\
\hline
0.07 & $69.0\pm19.6$ & \cite{73}\\
0.09 & $69.0\pm12.0$ & \cite{69}\\
0.12 & $68.6\pm26.2$ & \cite{73}\\
0.17 & $83.0\pm8.0$ & \cite{69}\\
0.2 & $72.9\pm29.6$ & \cite{73}\\
0.27 & $77.0\pm14.0$ & \cite{69}\\
0.28 & $88.8\pm36.6$ & \cite{73}\\
0.4 & $95.0\pm17.0$ & \cite{69}\\
0.47 & $89.0\pm50.0$ & \cite{15}\\
0.48 & $97.0\pm62.0$ & \cite{71}\\
0.75 & $98.8\pm33.6$ & \cite{Borghi_etal_2021}\\
0.88 & $90.0\pm40.0$ & \cite{71}\\
0.9 & $117.0\pm23.0$ & \cite{69}\\
1.3 & $168.0\pm17.0$ & \cite{69}\\
1.43 & $177.0\pm18.0$ & \cite{69}\\
1.53 & $140.0\pm14.0$ & \cite{69}\\
1.75 & $202.0\pm40.0$ & \cite{69}\\
0.1791 & 74.91 & \cite{Morescoetal2020}\tnote{b}\\
0.1993 & 74.96 & \cite{Morescoetal2020}\tnote{b}\\
0.3519 & 82.78 & \cite{Morescoetal2020}\tnote{b}\\
0.3802 & 83.0 & \cite{Morescoetal2020}\tnote{b}\\
0.4004 & 76.97 & \cite{Morescoetal2020}\tnote{b}\\
0.4247 & 87.08 & \cite{Morescoetal2020}\tnote{b}\\
0.4497 & 92.78 & \cite{Morescoetal2020}\tnote{b}\\
0.4783 & 80.91 & \cite{Morescoetal2020}\tnote{b}\\
0.5929 & 103.8 & \cite{Morescoetal2020}\tnote{b}\\
0.6797 & 91.6 & \cite{Morescoetal2020}\tnote{b}\\
0.7812 & 104.5 & \cite{Morescoetal2020}\tnote{b}\\
0.8754 & 125.1 & \cite{Morescoetal2020}\tnote{b}\\
1.037 & 153.7 & \cite{Morescoetal2020}\tnote{b}\\
1.363 & 160.0 & \cite{Morescoetal2020}\tnote{b}\\
1.965 & 186.5 & \cite{Morescoetal2020}\tnote{b}\\
\hline
\end{tabular}}
\begin{tablenotes}[flushleft]
\item[a] $\rm{km \ s^{-1} \ Mpc^{-1}}$.
\item[b] These 15 measurements are correlated and used in our analyses with a full covariance matrix as noted in Sec. \ref{sec:data}.
\end{tablenotes}
\end{threeparttable}
\end{table}
\begin{table}
\centering
\begin{threeparttable}
\caption{12 BAO data.}\label{tab:bao}
\setlength{\tabcolsep}{2.3mm}{
\begin{tabular}{lccc}
\toprule
$z$ & Measurement\tnote{a} & Value & Reference\\
\midrule
$0.122$ & $D_V\left(r_{s,{\rm fid}}/r_s\right)$ & $539\pm17$ & \cite{Carter_2018}\\
$0.38$ & $D_M/r_s$ & 10.23406 & \cite{eBOSSG_2020}\tnote{b}\\
$0.38$ & $D_H/r_s$ & 24.98058 & \cite{eBOSSG_2020}\tnote{b}\\
$0.51$ & $D_M/r_s$ & 13.36595 & \cite{eBOSSG_2020}\tnote{b}\\
$0.51$ & $D_H/r_s$ & 22.31656 & \cite{eBOSSG_2020}\tnote{b}\\
$0.698$ & $D_M/r_s$ & 17.85823691865007 & \cite{eBOSSG_2020, eBOSSL_2021}\tnote{c}\\
$0.698$ & $D_H/r_s$ & 19.32575373059217 & \cite{eBOSSG_2020, eBOSSL_2021}\tnote{c}\\
$0.835$ & $D_M/r_s$ & $18.92\pm0.51$ & \cite{DES_2022}\tnote{d}\\
$1.48$ & $D_M/r_s$ & 30.6876 & \cite{eBOSSQ_2020, eBOSSQ_2021}\tnote{e}\\
$1.48$ & $D_H/r_s$ & 13.2609 & \cite{eBOSSQ_2020, eBOSSQ_2021}\tnote{e}\\
$2.334$ & $D_M/r_s$ & 37.5 & \cite{duMas2020}\tnote{f}\\
$2.334$ & $D_H/r_s$ & 8.99 & \cite{duMas2020}\tnote{f}\\
\bottomrule
\end{tabular}}
\begin{tablenotes}[flushleft]
\item[a] $D_V$, $r_s$, $r_{s, {\rm fid}}$, $D_M$, $D_H$, and $D_A$ have units of Mpc.
\item[b] The four measurements from Ref.\ \cite{eBOSSG_2020} are correlated; see equation \eqref{CovM2} below for their correlation matrix.
\item[c] The two measurements from Refs.\ \cite{eBOSSG_2020} and \cite{eBOSSL_2021} are correlated; see equation \eqref{CovM3} below for their correlation matrix.
\item[d] This measurement is updated relative to the one from Ref.\ \cite{DES_2019b} used in Ref.\ \cite{CaoRatra2022}.
\item[e] The two measurements from Refs.\ \cite{eBOSSQ_2020} and \cite{eBOSSQ_2021} are correlated; see equation \eqref{CovM4} below for their correlation matrix.
\item[f] The two measurements from Ref.\ \cite{duMas2020} are correlated; see equation \eqref{CovM1} below for their correlation matrix.
\end{tablenotes}
\end{threeparttable}
\end{table}
In this paper, compared to the data we used in Ref.\ \citep{CaoRatra2022}, we now use updated $H(z)$ data and an improved $H(z)$ analysis that includes the covariance matrix for a subset of these data from Ref.\ \cite{Morescoetal2020}, updated BAO data, new Pantheon\!\raisebox{0.2ex}{+}\ SN Ia data, an improved analysis of reverberation-measured Mg\,\textsc{ii}\ QSO data, and new reverberation-measured C\,\textsc{iv}\ QSO data, as well as other data sets, to constrain cosmological parameters. We also correct an error in one GRB measurement used in Ref.\ \citep{CaoRatra2022}, as discussed below. These data are summarized next.
\begin{itemize}
\item[]{$\textbf{\emph{H(z)}}$ \bf data.} The 32 $H(z)$ measurements listed in Table \ref{tab:hz} have a redshift range of $0.07 \leq z \leq 1.965$. The covariance matrix of the 15 correlated measurements originally from Refs.\ \cite{70,72,moresco_et_al_2016}, discussed in Ref.\ \cite{Morescoetal2020}, can be found at \url{https://gitlab.com/mmoresco/CCcovariance/}. In the following we refer to the $H(z)$ data set used in Ref.\ \citep{CaoRatra2022} as Old $H(z)$ data.
\item[]{\bf BAO data}. The 12 BAO measurements listed in Table \ref{tab:bao} cover the redshift range $0.122 \leq z \leq 2.334$. The quantities listed in Table \ref{tab:bao} are described as follows:
\begin{itemize}
\item $D_V(z)$: Spherically averaged BAO distance, $D_V(z)=\left[cz(1+z)^2H(z)^{-1}D^2_A(z)\right]^{1/3}$, where $c$ is the speed of light and the angular diameter distance $D_A(z) = D_M(z)/(1+z)$ with $D_M(z)$ defined in the following
\item $D_H(z)$: Hubble distance, $D_H(z)=c/H(z)$
\item $r_s$: Sound horizon at the drag epoch, with fiducial value $r_{s, {\rm fid}}=147.5$ Mpc in Ref.~\cite{Carter_2018}
\item $D_M(z)$: Transverse comoving distance,
\begin{equation}
\label{eq:DM}
D_M(z) =
\begin{cases}
\frac{c}{H_0\sqrt{\Omega_{k0}}}\sinh\left[\frac{H_0\sqrt{\Omega_{k0}}}{c}D_C(z)\right] & \text{if}\ \Omega_{k0} > 0, \\
D_C(z) & \text{if}\ \Omega_{k0} = 0,\\
\frac{c}{H_0\sqrt{|\Omega_{k0}|}}\sin\left[\frac{H_0\sqrt{|\Omega_{k0}|}}{c}D_C(z)\right] & \text{if}\ \Omega_{k0} < 0,
\end{cases}
\end{equation}
where the comoving distance
\begin{equation}
\label{eq:gz}
D_C(z) = c\int^z_0 \frac{dz'}{H(z')}.
\end{equation}
\end{itemize}
The covariance matrices for the correlated BAO data are as follows. The covariance matrix $\textbf{C}$ for the BAO data from Ref.\ \cite{duMas2020} is
\begin{equation}
\label{CovM1}
\begin{bmatrix}
1.3225 & -0.1009 \\
-0.1009 & 0.0380
\end{bmatrix},
\end{equation}
for BAO data from Ref.\ \cite{eBOSSG_2020} it is
\begin{equation}
\label{CovM2}
\resizebox{\columnwidth}{!}{%
$\begin{bmatrix}
0.02860520 & -0.04939281 & 0.01489688 & -0.01387079\\
-0.04939281 & 0.5307187 & -0.02423513 & 0.1767087\\
0.01489688 & -0.02423513 & 0.04147534 & -0.04873962\\
-0.01387079 & 0.1767087 & -0.04873962 & 0.3268589
\end{bmatrix},$%
}
\end{equation}
for BAO data from Refs.\ \cite{eBOSSG_2020} and \cite{eBOSSL_2021} it is
\begin{equation}
\label{CovM3}
\begin{bmatrix}
0.1076634008565565 & -0.05831820341302727\\
-0.05831820341302727 & 0.2838176386340292
\end{bmatrix},
\end{equation}
and for BAO data from Refs.\ \cite{eBOSSQ_2020} and \cite{eBOSSQ_2021} it is
\begin{equation}
\label{CovM4}
\begin{bmatrix}
0.63731604 & 0.1706891\\
0.1706891 & 0.30468415
\end{bmatrix}.
\end{equation}
In the following we refer to the BAO data compilation used in Ref.\ \citep{CaoRatra2022} as Old BAO data.
\item[]{\bf SN Ia data.} We used two sets of SN Ia data jointly in the analyses of Ref.\ \cite{CaoRatra2022}: 1048 Pantheon (abbreviated as ``SNP'') measurements from Ref.\ \cite{scolnic_et_al_2018} that span a redshift range of $0.01 < z < 2.3$ and 20 binned DES 3yr (abbreviated as ``SND'') measurements from Ref.\ \cite{DES_2019d} that span a redshift range of $0.015 \leq z \leq 0.7026$. In the following we refer to the SN Ia data compilation used in Ref.\ \citep{CaoRatra2022} as SNP + SND data. In this paper we use 1590 of 1701 Pantheon\!\raisebox{0.2ex}{+}\ SN Ia (abbreviated as ``SNP\!\raisebox{0.2ex}{+}'') measurements from Ref.\ \cite{Brout:2022vxf} that span a redshift range of $0.01016 \leq z \leq 2.26137$, with a minimum redshift of $z>0.01$ to avoid dependence on peculiar velocity models. We note that SNP\!\raisebox{0.2ex}{+}\ data includes updated SNP and SND data.
\item[]{\bf QSO angular size (QSO-AS) data.} The 120 QSO angular size measurements are listed in table 1 of Ref.\ \cite{Cao_et_al2017b} and span the redshift range $0.462 \leq z \leq 2.73$. The angular size of a QSO can be predicted in a given cosmological model by using the formula $\theta(z)=l_{\rm m}/D_A(z)$, where $l_{\rm m}$ is the characteristic linear size of the QSOs in the sample and $D_A(z)$ is the angular diameter distance at redshift $z$.
\item[]{\bf H\,\textsc{ii}G data.} The 181 H\,\textsc{ii}G\ measurements listed in table A3 of Ref.\ \cite{GM2021} include 107 low-$z$ ones from Ref.\ \cite{Chavez_2014} recalibrated in Ref.\ \cite{G-M_2019}, spanning the redshift range $0.0088 \leq z \leq 0.16417$, and 74 high-$z$ ones spanning the redshift range $0.63427 \leq z \leq 2.545$. These sources follow a correlation between the observed luminosity ($L$) of the Balmer emission lines and the velocity dispersion ($\sigma$) of the ionized gas, represented by the equation $\log L = \beta \log \sigma + \gamma$, where $\log = \log_{10}$. The slope and intercept parameters, $\beta$ and $\gamma$, are found to be $5.022 \pm 0.058$ and $33.268 \pm 0.083$, respectively, under the assumption of the Gordon extinction law \cite{Gordon_2003}. Using this relation the observed distance modulus of an H\,\textsc{ii}G\ can be computed as $\mu_{\rm obs} = 2.5\log L - 2.5\log f - 100.2$, where $f(z)$ is the measured flux at redshift $z$. The theoretical distance modulus in a given cosmological model is $\mu_{\rm th}(z) = 5\log D_{L}(z) + 25$, where $D_L(z)=(1+z)D_M(z)$ is the luminosity distance (see the sketch following this list for an illustration of these two relations).
\item[]{\bf Mg\,\textsc{ii} QSO\ and C\,\textsc{iv} QSO\ sample.} The 78 Mg\,\textsc{ii} QSO\ and 38 C\,\textsc{iv} QSO\ measurements, listed in tables A1 of Refs.\ \cite{Khadkaetal_2021a} and \cite{Cao:2022pdv}, respectively, span the wide redshift ranges $0.0033 \leq z \leq 1.89$ for Mg\,\textsc{ii}\ QSOs and $0.001064 \leq z \leq 3.368$ for C\,\textsc{iv}\ QSOs. Both Mg\,\textsc{ii} QSO\ and C\,\textsc{iv} QSO\ sources follow the radius-luminosity ($R-L$) relation $\log \tau=\beta+\gamma(\log L-44)$. Here $\tau$ (days) is the QSO time-lag, $\beta$ is the intercept parameter, and $\gamma$ is the slope parameter. For Mg\,\textsc{ii}\ QSOs and C\,\textsc{iv}\ QSOs we denote $\beta$ and $\gamma$ as $\beta_{\rm\textsc{m}}$ and $\gamma_{\rm\textsc{m}}$, and $\beta_{\rm\textsc{c}}$ and $\gamma_{\rm\textsc{c}}$, respectively. The monochromatic luminosity $L=4\pi D_L^2F$, where $F$ is the QSO flux measured in units of $\rm erg\ s^{-1}\ cm^{-2}$ at 3000\,\AA\ and 1350\,\AA\ for Mg\,\textsc{ii}\ QSOs and C\,\textsc{iv}\ QSOs, respectively. As described in Refs.\ \cite{Khadkaetal_2021a} and \cite{Cao:2022pdv}, in our analysis we must simultaneously determine both the $R-L$ relation parameters and the cosmological parameters, and verify that the $R-L$ relation parameters are independent of the assumed cosmological model, for the Mg\,\textsc{ii}\ QSOs and C\,\textsc{iv}\ QSOs to be standardizable. In addition to what we used in Ref.\ \cite{CaoRatra2022}, here we also use the 38 C\,\textsc{iv}\ QSOs and improve on the analyses of these QSOs by now accounting for the asymmetry in the data error bars, as described in Ref.\ \cite{Cao:2022pdv}.
\item[]{\bf A118 GRB sample.} The A118 sample, which is listed in table 7 of Ref.\ \cite{Khadkaetal_2021b}, consists of 118 long GRBs and spans a wide redshift range, from $0.3399$ to $8.2$. The isotropic radiated energy $E_{\rm iso}$ of a GRB source in its rest frame is $E_{\rm iso}=4\pi D_L^2S_{\rm bolo}/(1+z)$, where $S_{\rm bolo}$ is the bolometric fluence computed in the standard rest-frame energy band $1-10^4$ keV. The peak energy of a source is $E_{\rm p} = (1+z)E_{\rm p, obs}$, where $E_{\rm p, obs}$ is the observed peak energy. There is a correlation between $E_{\rm iso}$ and $E_{\rm p}$ known as the Amati correlation, \citep{Amati2008, Amati2009}, which is given by $\log E_{\rm iso} = \beta_{\rm\textsc{a}} + \gamma_{\rm\textsc{a}}\log E_{\rm p}$. As described in Ref.\ \cite{Khadkaetal_2021b}, we must simultaneously determine both Amati relation and cosmological parameters and verify that the Amati relation parameters are independent of the assumed cosmological model for the A118 GRBs to be standardizable. Note that here we use the correct value of $E_{\rm p}=871\pm123$ keV for GRB081121, as discussed in Ref.\ \cite{Liuetal2022}, rather than the value listed in table 7 of Ref.\ \cite{Khadkaetal_2021b} and used in Ref.\ \citep{CaoRatra2022}.
\end{itemize}
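As an illustration of how the H\,\textsc{ii}G\ $L$--$\sigma$ relation quoted above is used, the following sketch converts a hypothetical measurement into an observed distance modulus and compares it with a model prediction (the data point and luminosity distance are made up, and the slope and intercept are treated as fixed numbers purely for illustration):
\begin{verbatim}
# Illustrative sketch (made-up data point) of the HIIG distance modulus:
# mu_obs from the L-sigma relation and measured flux, mu_th from D_L(z).
import numpy as np

beta, gamma = 5.022, 33.268            # slope and intercept quoted above

def mu_obs_hiig(log_sigma, log_flux):
    logL = beta * log_sigma + gamma    # log10 luminosity from L-sigma relation
    return 2.5 * logL - 2.5 * log_flux - 100.2

def mu_th(D_L_Mpc):                    # theoretical distance modulus
    return 5.0 * np.log10(D_L_Mpc) + 25.0

# hypothetical measurement: log10(sigma) = 1.3, log10(f) = -13.5,
# and a model luminosity distance of 1500 Mpc
print(mu_obs_hiig(1.3, -13.5), mu_th(1500.0))
\end{verbatim}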
\section{Data Analysis Methodology}
\label{sec:analysis}
The natural log of the likelihood function for the C\,\textsc{iv}, Mg\,\textsc{ii}, and A118 data sets (denoted with subscript ``\textsc{s}'' that is either C\,\textsc{iv}, Mg\,\textsc{ii}, or A118) with measured quantity \textbf{Q}, \citep{D'Agostini_2005}, is
\begin{equation}
\label{eq:LF_s1}
\ln\mathcal{L}_{\rm\textsc{s}}= -\frac{1}{2}\Bigg[\chi^2_{\rm\textsc{s}}+\sum^{N}_{i=1}\ln\left(2\pi\sigma^2_{\mathrm{tot,\textsc{s}},i}\right)\Bigg],
\end{equation}
where
\begin{equation}
\label{eq:chi2_s1}
\chi^2_{\rm\textsc{s}} = \sum^{N}_{i=1}\bigg[\frac{(\mathbf{Q}_{\mathrm{obs,\textsc{s}},i} - \mathbf{Q}_{\mathrm{th,\textsc{s}},i})^2}{\sigma^2_{\mathrm{tot,\textsc{s}},i}}\bigg]
\end{equation}
with total uncertainty
\begin{equation}
\label{eq:sigma_s1}
\sigma^2_{\mathrm{tot,\textsc{s}},i}=\sigma_{\rm int,\,\textsc{s}}^2+\sigma_{{\mathbf{Q}_{\mathrm{obs,\textsc{s}},i}}}^2+\sigma_{{\mathbf{Q}_{\mathrm{th,\textsc{s}},i}}}^2.
\end{equation}
$\sigma_{\rm int,\,\textsc{s}}$ represents the intrinsic scatter parameter for data \textsc{s}, which also accounts for unknown systematic uncertainties. $N$ is the total number of data points.
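A minimal sketch of this likelihood, for a generic data vector with an intrinsic-scatter parameter (the arrays below are illustrative placeholders), is:
\begin{verbatim}
# Sketch of the intrinsic-scatter likelihood defined above.
import numpy as np

def ln_like_intrinsic(Q_obs, Q_th, sig_obs, sig_th, sigma_int):
    sig2_tot = sigma_int**2 + sig_obs**2 + sig_th**2
    chi2 = np.sum((Q_obs - Q_th)**2 / sig2_tot)
    return -0.5 * (chi2 + np.sum(np.log(2.0 * np.pi * sig2_tot)))

# toy numbers, for illustration only
Q_obs = np.array([1.2, 2.1, 3.3])
Q_th  = np.array([1.0, 2.0, 3.0])
print(ln_like_intrinsic(Q_obs, Q_th, 0.1 * np.ones(3), 0.05 * np.ones(3), 0.2))
\end{verbatim}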
The natural log of the likelihood function for some $H(z)$ and BAO data and for QSO-AS and H\,\textsc{ii}G\ data (also denoted ``\textsc{s}'') with measured quantity \textbf{Q} is
\begin{equation}
\label{eq:LF_s2}
\ln\mathcal{L}_{\rm\textsc{s}}= -\frac{1}{2}\chi^2_{\rm\textsc{s}},
\end{equation}
where
\begin{equation}
\label{eq:chi2_s2}
\chi^2_{\rm\textsc{s}} = \sum^{N}_{i=1}\bigg[\frac{(\mathbf{Q}_{\mathrm{obs,\textsc{s}},i} - \mathbf{Q}_{\mathrm{th,\textsc{s}},i})^2}{\sigma^2_{\mathrm{tot,\textsc{s}},i}}\bigg]
\end{equation}
with total uncertainty
\begin{equation}
\label{eq:sigma_s2}
\sigma^2_{\mathrm{tot,\textsc{s}},i}=\sigma_{\mathrm{sys,\textsc{s}},i}^2+\sigma_{{\mathbf{Q}_{\mathrm{obs,\textsc{s}},i}}}^2+\sigma_{{\mathbf{Q}_{\mathrm{th,\textsc{s}},i}}}^2,
\end{equation}
with $\sigma_{\mathrm{sys,\textsc{s}},i}$ being the systematic uncertainty at redshift $z_i$. Note that we ignore the systematic uncertainties of the H\,\textsc{ii}G\ data because they have not yet been properly quantified. Following Ref.\ \cite{Cao_et_al2017b}, the total uncertainty for QSO-AS data is calculated as $\sigma_{\mathrm{tot},i}=\sigma_{\theta_{\mathrm{obs},i}} + 0.1\theta_{\mathrm{obs},i}$, which supplements the observational error $\sigma_{\theta_{\mathrm{obs},i}}$ with a 10\% allowance for intrinsic errors in the observed angular sizes $\theta_{\mathrm{obs},i}$.
For those $H(z)$ and BAO data (also denoted ``\textsc{s}'') with covariance matrix $\textbf{C}_{\textsc{s}}$,
\begin{equation}
\label{eq:chi2_s3}
\chi^2_{\textsc{s}} = [\mathbf{Q}_{\mathrm{th,\textsc{s}},i} - \mathbf{Q}_{\mathrm{obs,\textsc{s}},i}]^T\textbf{C}_{\textsc{s}}^{-1}[\mathbf{Q}_{\mathrm{th,\textsc{s}},i} - \mathbf{Q}_{\mathrm{obs,\textsc{s}},i}],
\end{equation}
in which superscripts $T$ and $-1$ represent transpose and inverse of the matrix, respectively.
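For example, a sketch of this correlated $\chi^2$ using the first $2\times2$ BAO covariance matrix listed above and placeholder data and model vectors is:
\begin{verbatim}
# Sketch of chi^2 = (Q_th - Q_obs)^T C^{-1} (Q_th - Q_obs) for correlated data.
import numpy as np

C = np.array([[ 1.3225, -0.1009],
              [-0.1009,  0.0380]])   # 2x2 covariance matrix quoted above
Q_obs = np.array([10.0, 25.0])       # placeholder measured values
Q_th  = np.array([10.2, 24.8])       # placeholder model predictions

d = Q_th - Q_obs
chi2 = d @ np.linalg.solve(C, d)     # avoids forming C^{-1} explicitly
print(chi2)
\end{verbatim}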
For SN Ia data, $\chi^2_{\rm SN}$ is calculated using equation (C1) in appendix C of Ref.\ \cite{Conley_et_al_2011}. In this equation the variable $\mathcal{M}$, which includes the SN Ia absolute magnitude $M$ and the Hubble constant $H_0$, is marginalized over, so SN Ia data alone cannot constrain $H_0$. However, when we allow $\Omega_{b}h^2$\ and $\Omega_{c}h^2$\ to be free parameters, $H_0$ is derived from the $\Omega_{b}h^2$\ and $\Omega_{c}h^2$\ constraints.
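A sketch of one standard way to implement this analytic marginalization, assuming a flat prior on the constant offset (our illustration; the exact form and conventions of equation (C1) of Ref.\ \cite{Conley_et_al_2011} should be consulted for the actual analysis), is:
\begin{verbatim}
# Sketch of marginalizing the SN Ia chi^2 over a constant magnitude offset
# (flat prior); illustration only, not necessarily identical in detail to
# the expression used in the analysis.
import numpy as np

def chi2_sn_marginalized(mu_obs, mu_th, C):
    d = mu_obs - mu_th
    Cinv_d = np.linalg.solve(C, d)
    Cinv_1 = np.linalg.solve(C, np.ones_like(d))
    A = d @ Cinv_d
    B = np.sum(Cinv_d)                 # 1^T C^{-1} d
    E = np.sum(Cinv_1)                 # 1^T C^{-1} 1
    return A - B**2 / E + np.log(E / (2.0 * np.pi))

# toy example with three SNe and a diagonal covariance
mu_obs = np.array([35.1, 38.3, 40.0])
mu_th  = np.array([35.0, 38.5, 40.1])
print(chi2_sn_marginalized(mu_obs, mu_th, np.diag([0.02, 0.03, 0.02])))
\end{verbatim}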
\begin{table}
\centering
\resizebox{\columnwidth}{!}{%
\begin{threeparttable}
\caption{Flat priors of the constrained parameters.}
\label{tab:priors}
\begin{tabular}{lcc}
\toprule
Parameter & & Prior\\
\midrule
& Cosmological Parameters & \\
\midrule
$h$\tnote{a} & & [None, None]\\
$\Omega_{b}h^2$\,\tnote{b} & & [0, 1]\\
$\Omega_{c}h^2$\,\tnote{c} & & [0, 1]\\
$\Omega_{k0}$ & & [$-2$, 2]\\
$\alpha$ & & [0, 10]\\
$w_{\rm X}$ & & [$-5$, 0.33]\\
\midrule
& Non-Cosmological Parameters & \\
\midrule
$\gamma_{\mathrm{\textsc{m}}}$ & & [0, 5]\\
$\beta_{\mathrm{\textsc{m}}}$ & & [0, 10]\\
$\sigma_{\rm int,\,\textsc{m}}$ & & [0, 5]\\
$\gamma_{\mathrm{\textsc{c}}}$ & & [0, 5]\\
$\beta_{\mathrm{\textsc{c}}}$ & & [0, 10]\\
$\sigma_{\rm int,\,\textsc{c}}$ & & [0, 5]\\
$\gamma_{\mathrm{\textsc{a}}}$ & & [0, 5]\\
$\beta_{\mathrm{\textsc{a}}}$ & & [0, 300]\\
$\sigma_{\rm int,\,\textsc{a}}$ & & [0, 5]\\
$l_{\rm m}$ & & [None, None]\\
\bottomrule
\end{tabular}
\begin{tablenotes}[flushleft]
\item [a] $H_0$ in unit of 100 $\rm{km \ s^{-1} \ Mpc^{-1}}$. In the Mg\,\textsc{ii}\ + C\,\textsc{iv}\ and A118 cases $h=0.7$, in the SNP + SND and SNP\!\raisebox{0.2ex}{+}\ cases $0.2\leq h\leq 1$, and in other cases the $h$ prior range is irrelevant (unbounded).
\item [b] In the Mg\,\textsc{ii}\ + C\,\textsc{iv}\ and A118 cases $\Omega_{b}h^2$\ is set to be 0.0245, i.e. $\Omega_{b}=0.05$.
\item [c] In the Mg\,\textsc{ii}\ + C\,\textsc{iv}\ and A118 cases $\Omega_{m0}\in[0,1]$ is ensured.
\end{tablenotes}
\end{threeparttable}%
}
\end{table}
In this paper, we do not use CMB data. Instead, in our analyses of BAO data we determine the sound horizon $r_s$ by also constraining $\Omega_{b}h^2$\ and $\Omega_{c}h^2$\ from these data. Consequently our cosmological parameter constraints do not depend on CMB data.
The QSO-AS + H\,\textsc{ii}G\ and Mg\,\textsc{ii}\ + C\,\textsc{iv}\ constraints used in our paper are taken from Refs.\ \cite{CaoRatra2022} and \cite{Cao:2022pdv}, respectively. The analyses of Ref.\ \cite{Cao:2022pdv} account for the asymmetric errors of the Mg\,\textsc{ii}\ + C\,\textsc{iv}\ measurements.
The flat priors for the free cosmological and non-cosmological parameters are listed in Table \ref{tab:priors}. The Markov chain Monte Carlo (MCMC) code \textsc{MontePython}, \citep{Audrenetal2013,Brinckmann2019}, is utilized to maximize the likelihood functions and determine the posterior distributions of all free parameters. The \textsc{python} package \textsc{getdist}, \citep{Lewis_2019}, is used to analyze the MCMC results and create plots.
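As an example of the post-processing step, the following hypothetical \textsc{getdist} sketch builds an \texttt{MCSamples} object from a synthetic chain (not one of our \textsc{MontePython} chains; parameter names and values are placeholders) and produces a triangle plot:
\begin{verbatim}
# Hypothetical getdist post-processing sketch with a synthetic chain.
import numpy as np
from getdist import MCSamples, plots

rng = np.random.default_rng(0)
chain = rng.multivariate_normal(mean=[0.30, 70.0],
                                cov=[[0.02**2, 0.01],
                                     [0.01,    2.0**2]],
                                size=5000)
samples = MCSamples(samples=chain, names=['omegam', 'H0'],
                    labels=[r'\Omega_{m0}', r'H_0'])

g = plots.get_subplot_plotter()
g.triangle_plot([samples], filled=True)
g.export('triangle_example.pdf')
\end{verbatim}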
The Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the Deviance Information Criterion (DIC) are
\begin{equation}
\label{AIC}
\mathrm{AIC}=-2\ln \mathcal{L}_{\rm max} + 2n,
\end{equation}
\begin{equation}
\label{BIC}
\mathrm{BIC}=-2\ln \mathcal{L}_{\rm max} + n\ln N,
\end{equation}
and
\begin{equation}
\label{DIC}
\mathrm{DIC}=-2\ln \mathcal{L}_{\rm max} + 2n_{\rm eff},
\end{equation}
where $n$ is the number of parameters in the given model, and $n_{\rm eff}=\langle-2\ln \mathcal{L}\rangle+2\ln \mathcal{L}_{\rm max}$ represents the number of effectively constrained parameters. Here, the angular brackets indicate an average over the posterior distribution and $\mathcal{L}_{\rm max}$ is the maximum value of the likelihood function.
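Given a chain of log-likelihood values from an MCMC run, these three criteria can be evaluated as in the following sketch (the chain values, parameter count, and data count are placeholders):
\begin{verbatim}
# Sketch of the AIC, BIC, and DIC definitions given above.
import numpy as np

def information_criteria(lnL_chain, n_params, n_data):
    lnL_max = np.max(lnL_chain)
    aic = -2.0 * lnL_max + 2.0 * n_params
    bic = -2.0 * lnL_max + n_params * np.log(n_data)
    n_eff = np.mean(-2.0 * lnL_chain) + 2.0 * lnL_max   # effective parameters
    dic = -2.0 * lnL_max + 2.0 * n_eff
    return aic, bic, dic

# placeholder chain of log-likelihood values, 3 parameters, 32 data points
print(information_criteria(np.array([-16.1, -15.4, -15.0, -15.7]), 3, 32))
\end{verbatim}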
$\Delta \mathrm{A/B/DIC}$ values are computed by subtracting the A/B/DIC value of the flat $\Lambda$CDM\ reference model from the values of the other five cosmological dark energy models. Negative values of $\Delta \mathrm{A/B/DIC}$ indicate that the model being evaluated fits the data set better than does the flat $\Lambda$CDM\ reference model, while positive values indicate a worse fit. In comparison to the model with the lowest A/B/DIC value, a $\Delta \mathrm{A/B/DIC}$ value within the range $(0, 2]$ indicates weak evidence against the model being evaluated, a value within $(2, 6]$ indicates positive evidence against the model, a value within $(6, 10]$ indicates strong evidence against the model, and a value greater than 10 indicates very strong evidence against the model.
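The qualitative scale described above can be encoded as in this small sketch (the $\Delta$ value is measured relative to the model with the lowest information criterion value, so it is non-negative by construction):
\begin{verbatim}
# Sketch of the qualitative evidence scale for Delta(A/B/D)IC values.
def evidence_strength(delta_ic):
    if delta_ic <= 2.0:
        return "weak evidence against"
    if delta_ic <= 6.0:
        return "positive evidence against"
    if delta_ic <= 10.0:
        return "strong evidence against"
    return "very strong evidence against"

print(evidence_strength(3.2))   # -> "positive evidence against"
\end{verbatim}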
\section{Results}
\label{sec:results}
\subsection{Comparison of constraints obtained from Old $H(z)$ data and $H(z)$ data, Old $H(z)$ + Old BAO data and $H(z)$ + BAO data, and SNP + SND data and SNP\!\raisebox{0.2ex}{+}\ data}
\label{subsec:comp1}
\begin{table*}
\centering
\resizebox*{2.05\columnwidth}{2.35\columnwidth}{%
\begin{threeparttable}
\caption{Unmarginalized best-fitting parameter values for all models from various combinations of BAO, $H(z)$, and SN Ia data.}\label{tab:BFP}
\begin{tabular}{lcccccccccccccc}
\toprule
Model & Data set & $\Omega_{b}h^2$ & $\Omega_{c}h^2$ & $\Omega_{m0}$ & $\Omega_{k0}$ & $w_{\mathrm{X}}$/$\alpha$\tnote{a} & $H_0$\tnote{b} & $-2\ln\mathcal{L}_{\mathrm{max}}$ & AIC & BIC & DIC & $\Delta \mathrm{AIC}$ & $\Delta \mathrm{BIC}$ & $\Delta \mathrm{DIC}$ \\
\midrule
\multirow{8}{*}{Flat $\Lambda$CDM} & Old $H(z)$ & 0.0273 & 0.1201 & 0.319 & -- & -- & 68.16 & 14.54 & 20.54 & 24.94 & 18.87 & 0.00 & 0.00 & 0.00\\%20.538539999999998, 24.935747708399177, 18.86827718991932
& $H(z)$ & 0.0120 & 0.1366 & 0.309 & -- & -- & 69.43 & 14.50 & 20.50 & 24.90 & 18.78 & 0.00 & 0.00 & 0.00\\%20.5036, 24.900807708399178, 18.782939164871124
& Old $H(z)$ + Old BAO & 0.0244 & 0.1181 & 0.301 & -- & -- & 68.98 & 25.64 & 31.64 & 36.99 & 32.32 & 0.00 & 0.00 & 0.00\\
& $H(z)$ + BAO & 0.0254 & 0.1200 & 0.297 & -- & -- & 70.12 & 30.56 & 36.56 & 41.91 & 37.32 & 0.00 & 0.00 & 0.00\\%36.5576, 41.91016890175479, 37.318529778549525
& SNP + SND & 0.0102 & 0.0133 & 0.309 & -- & -- & 27.93 & 1056.64 & 1062.64 & 1077.56 & 1058.68 & 0.00 & 0.00 & 0.00\\%1062.638, 1077.5586290585604, 1058.6801502695175
& SNP\!\raisebox{0.2ex}{+} & 0.0139 & 0.1031 & 0.331 & -- & -- & 59.59 & 1406.97 & 1412.97 & 1429.08 & 1409.17 & 0.00 & 0.00 & 0.00\\%1412.97, 1429.0844678856429, 1409.1667470308278
& Old $H(z)$ + Old BAO + SNP + SND & 0.0242 & 0.1191 & 0.304 & -- & -- & 68.86 & 1082.39 & 1088.39 & 1103.44 & 1089.92 & 0.00 & 0.00 & 0.00\\
& $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ & 0.0239 & 0.1256 & 0.312 & -- & -- & 69.35 & 1439.59 & 1445.59 & 1461.79 & 1446.05 & 0.00 & 0.00 & 0.00\\%1445.59, 1461.7863588262599, 1446.0532163049218
\\
\multirow{8}{*}{Non-flat $\Lambda$CDM} & Old $H(z)$ & 0.0205 & 0.1515 & 0.362 & $-0.136$ & -- & 69.09 & 14.49 & 22.49 & 28.36 & 20.19 & 1.96 & 3.42 & 1.32\\%22.49388, 28.356823611198905, 20.191722346437206
& $H(z)$ & 0.0180 & 0.1328 & 0.314 & $-0.012$ & -- & 69.47 & 14.50 & 22.50 & 28.37 & 20.09 & 2.00 & 3.47 & 1.31\\%22.50328, 28.366223611198905, 20.091280759915843
& Old $H(z)$ + Old BAO & 0.0260 & 0.1098 & 0.292 & 0.048 & -- & 68.35 & 25.30 & 33.30 & 40.43 & 33.87 & 1.66 & 3.44 & 1.54\\
& $H(z)$ + BAO & 0.0269 & 0.1128 & 0.289 & 0.041 & -- & 69.61 & 30.34 & 38.34 & 45.48 & 38.80 & 1.78 & 3.56 & 1.48\\%38.3384, 45.475158535673046, 38.79768870799054
& SNP + SND & 0.0335 & 0.1292 & 0.326 & $-0.043$ & -- & 70.84 & 1056.58 & 1064.58 & 1084.47 & 1060.96 & 1.94 & 6.91 & 2.28\\%1064.578, 1084.4721720780806, 1060.9595132771578
& SNP\!\raisebox{0.2ex}{+} & 0.0336 & 0.0663 & 0.295 & 0.095 & -- & 58.43 & 1406.46 & 1414.46 & 1435.95 & 1410.81 & 1.49 & 6.87 & 1.65\\%1414.464, 1435.949957180857, 1410.8144058086018
& Old $H(z)$ + Old BAO + SNP + SND & 0.0255 & 0.1121 & 0.295 & 0.035 & -- & 68.53 & 1082.11 & 1090.11 & 1110.16 & 1091.17 & 1.72 & 6.72 & 1.24\\
& $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ & 0.0276 & 0.1078 & 0.288 & 0.084 & -- & 68.69 & 1437.61 & 1445.61 & 1467.21 & 1446.04 & 0.02 & 5.42 & $-0.01$\\%1445.614, 1467.2091451016797, 1446.038510912359
\\
\multirow{8}{*}{Flat XCDM} & Old $H(z)$ & 0.0376 & 0.1236 & 0.321 & -- & $-1.261$ & 70.95 & 14.39 & 22.39 & 28.25 & 22.17 & 1.85 & 3.32 & 3.30\\%22.38862, 28.251563611198904, 22.16813089615162
& $H(z)$ & 0.0106 & 0.1464 & 0.316 & -- & $-1.140$ & 70.63 & 14.47 & 22.47 & 28.33 & 22.28 & 1.97 & 3.43 & 3.49\\%22.46914, 28.332083611198904, 22.277358141645745
& Old $H(z)$ + Old BAO & 0.0296 & 0.0951 & 0.290 & -- & $-0.754$ & 65.79 & 22.39 & 30.39 & 37.52 & 30.63 & $-1.25$ & 0.53 & $-1.69$\\
& $H(z)$ + BAO & 0.0318 & 0.0938 & 0.283 & -- & $-0.734$ & 66.67 & 26.58 & 34.58 & 41.71 & 34.83 & $-1.98$ & $-0.20$ & $-2.49$\\%34.575, 41.71175853567304, 34.826071644238766
& SNP + SND & 0.0162 & 0.1648 & 0.319 & -- & $-1.028$ & 75.45 & 1056.62 & 1064.62 & 1084.52 & 1061.01 & 1.98 & 6.96 & 2.33\\%1064.622, 1084.5161720780807, 1061.0118711892112
& SNP\!\raisebox{0.2ex}{+} & 0.0243 & 0.0745 & 0.288 & -- & $-0.895$ & 58.73 & 1406.52 & 1414.52 & 1436.00 & 1410.84 & 1.55 & 6.92 & 1.67\\%1414.516, 1436.0019571808573, 1410.8391507174683
& Old $H(z)$ + Old BAO + SNP + SND & 0.0258 & 0.1115 & 0.295 & -- & $-0.940$ & 68.37 & 1081.34 & 1089.34 & 1109.40 & 1090.43 & 0.95 & 5.96 & 0.50\\
& $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ & 0.0283 & 0.1092 & 0.290 & -- & $-0.883$ & 68.96 & 1434.63 & 1442.63 & 1464.22 & 1443.28 & $-2.96$ & 2.43 & $-2.77$\\%1442.626, 1464.2211451016797, 1443.280654848378
\\
\multirow{8}{*}{Non-flat XCDM} & Old $H(z)$ & 0.0223 & 0.0736 & 0.172 & 0.324 & $-2.272$ & 75.05 & 14.14 & 24.14 & 31.47 & 20.73 & 3.60 & 6.53 & 1.86\\%24.14188, 31.470559513998634, 20.730357562538686
& $H(z)$ & 0.0316 & 0.0530 & 0.151 & 0.378 & $-2.278$ & 75.06 & 14.21 & 24.21 & 31.54 & 21.46 & 3.71 & 6.64 & 2.68\\%24.2148, 31.543479513998633, 21.464934473762344
& Old $H(z)$ + Old BAO & 0.0289 & 0.0985 & 0.296 & $-0.053$ & $-0.730$ & 65.76 & 22.13 & 32.13 & 41.05 & 32.51 & 0.49 & 4.06 & 0.19\\
& $H(z)$ + BAO & 0.0305 & 0.0998 & 0.293 & $-0.084$ & $-0.703$ & 66.79 & 26.00 & 36.00 & 44.92 & 36.11 & $-0.56$ & 3.01 & $-1.21$\\%35.9978, 44.91874816959131, 36.11102222322448
& SNP + SND & 0.0057 & 0.0965 & 0.145 & $-0.576$ & $-0.596$ & 84.24 & 1056.24 & 1066.24 & 1091.11 & 1064.60 & 3.60 & 13.55 & 5.92\\%1066.238, 1091.1057150976008, 1064.5964234552887
& SNP\!\raisebox{0.2ex}{+} & 0.0334 & 0.0649 & 0.295 & 0.194 & $-1.155$ & 57.96 & 1406.43 & 1416.43 & 1443.29 & 1413.23 & 3.46 & 14.21 & 4.06\\%1416.434, 1443.2914464760713, 1413.2273870568781
& Old $H(z)$ + Old BAO + SNP + SND & 0.0255 & 0.1155 & 0.300 & $-0.030$ & $-0.922$ & 68.68 & 1081.28 & 1091.28 & 1116.35 & 1092.47 & 2.89 & 12.91 & 2.55\\
& $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ & 0.0278 & 0.1118 & 0.294 & $-0.032$ & $-0.865$ & 69.07 & 1434.46 & 1444.46 & 1471.45 & 1445.42 & $-1.13$ & 9.66 & $-0.63$\\%1444.456, 1471.4499313770996, 1445.4249805329234
\\
\multirow{8}{*}{Flat $\phi$CDM} & Old $H(z)$ & 0.0140 & 0.1341 & 0.321 & -- & 0.000 & 68.04 & 14.54 & 22.54 & 28.40 & 20.81 & 2.00 & 3.47 & 1.94\\%22.54038, 28.403323611198907, 20.811424086544335
& $H(z)$ & 0.0158 & 0.1335 & 0.312 & -- & 0.001 & 69.31 & 14.51 & 22.51 & 28.37 & 20.05 & 2.00 & 3.47 & 1.27\\%22.505679999999998, 28.368623611198906, 20.053752085926554
& Old $H(z)$ + Old BAO & 0.0330 & 0.0911 & 0.278 & -- & 1.018 & 66.98 & 22.14 & 30.14 & 37.27 & 29.56 & $-1.33$ & 0.46 & $-2.42$\\
& $H(z)$ + BAO & 0.0336 & 0.0866 & 0.271 & -- & 1.157 & 66.80 & 26.50 & 34.50 & 41.64 & 34.15 & $-2.05$ & $-0.27$ & $-3.17$\\%34.504599999999996, 41.64135853567304, 34.150297701330516
& SNP + SND & 0.0198 & 0.0089 & 0.308 & -- & 0.001 & 30.81 & 1056.64 & 1064.64 & 1084.53 & 1062.54 & 2.00 & 6.98 & 3.86\\%1064.64, 1084.5341720780807, 1062.5409766778905
& SNP\!\raisebox{0.2ex}{+} & 0.0132 & 0.2553 & 0.279 & -- & 0.399 & 98.21 & 1406.50 & 1414.50 & 1435.98 & 1411.49 & 1.53 & 6.90 & 2.33\\%1414.498, 1435.9839571808573, 1411.4923708931528
& Old $H(z)$ + Old BAO + SNP + SND & 0.0263 & 0.1097 & 0.292 & -- & 0.203 & 68.39 & 1081.22 & 1089.22 & 1109.28 & 1089.91 & 0.83 & 5.84 & $-0.01$\\
& $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ & 0.0288 & 0.1060 & 0.286 & -- & 0.402 & 68.84 & 1434.43 & 1442.43 & 1464.02 & 1442.92 & $-3.16$ & 2.23 & $-3.14$\\%1442.426, 1464.0211451016796, 1442.9161999694343
\\
\multirow{8}{*}{Non-flat $\phi$CDM} & Old $H(z)$ & 0.0213 & 0.1514 & 0.365 & $-0.144$ & 0.036 & 68.91 & 14.50 & 24.50 & 31.83 & 21.42 & 3.96 & 6.89 & 2.55\\%24.50086, 31.829539513998633, 21.419449207080486
& $H(z)$ & 0.0358 & 0.1120 & 0.310 & 0.003 & 0.011 & 69.18 & 14.51 & 24.51 & 31.84 & 20.63 & 4.00 & 6.94 & 1.85\\%24.50852, 31.837199513998634, 20.627998854591883 D
& Old $H(z)$ + Old BAO & 0.0306 & 0.0920 & 0.284 & $-0.058$ & 1.200 & 65.91 & 22.05 & 32.05 & 40.97 & 31.30 & 0.41 & 3.98 & $-1.02$\\
& $H(z)$ + BAO & 0.0337 & 0.0894 & 0.275 & $-0.074$ & 1.393 & 67.16 & 25.92 & 35.92 & 44.84 & 35.29 & $-0.64$ & 2.93 & $-2.03$\\%35.921800000000005, 44.8427481695913, 35.2914687221718 D
& SNP + SND & 0.0283 & 0.1387 & 0.251 & $-0.251$ & 1.107 & 81.71 & 1056.40 & 1066.40 & 1091.26 & 1062.52 & 3.76 & 13.71 & 3.84\\%1066.396, 1091.2637150976007, 1062.517350402408 D
& SNP\!\raisebox{0.2ex}{+} & 0.0140 & 0.0525 & 0.297 & 0.085 & 0.005 & 47.56 & 1406.47 & 1416.47 & 1443.32 & 1411.50 & 3.50 & 14.24 & 2.33\\%1416.466, 1443.323446476071, 1411.4993360797994 D
& Old $H(z)$ + Old BAO + SNP + SND & 0.0261 & 0.1119 & 0.295 & $-0.023$ & 0.253 & 68.56 & 1081.12 & 1091.12 & 1116.19 & 1091.27 & 2.73 & 12.75 & 1.35\\
& $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ & 0.0282 & 0.1104 & 0.291 & $-0.045$ & 0.480 & 69.13 & 1434.23 & 1444.23 & 1471.22 & 1444.26 & $-1.36$ & 9.43 & $-1.79$\\%1444.226, 1471.2199313770998, 1444.2616651905555 D
\bottomrule
\end{tabular}
\begin{tablenotes}[flushleft]
\item [a] $w_{\rm X}$\ corresponds to flat/non-flat XCDM and $\alpha$ corresponds to flat/non-flat $\phi$CDM.
\item [b] $\rm{km \ s^{-1} \ Mpc^{-1}}$.
\end{tablenotes}
\end{threeparttable}%
}
\end{table*}
\begin{table*}
\centering
\resizebox*{2.05\columnwidth}{2.35\columnwidth}{%
\begin{threeparttable}
\caption{One-dimensional posterior mean parameter values and uncertainties ($\pm 1\sigma$ error bars or $2\sigma$ limits) for all models from various combinations of BAO, $H(z)$, and SN Ia data.}\label{tab:1d_BFP}
\begin{tabular}{lccccccc}
\toprule
Model & Data set & $\Omega_{b}h^2$ & $\Omega_{c}h^2$ & $\Omega_{m0}$ & $\Omega_{k0}$ & $w_{\mathrm{X}}$/$\alpha$\tnote{a} & $H_0$\tnote{b}\\
\midrule
\multirow{8}{*}{Flat $\Lambda$CDM} & Old $H(z)$ & $0.0225\pm0.0108$ & $0.1264\pm0.0207$ & $0.328^{+0.052}_{-0.073}$ & -- & -- & $67.98\pm3.24$ \\
& $H(z)$ & $0.0225\pm0.0107$ & $0.1275\pm0.0208$ & $0.319^{+0.050}_{-0.074}$ & -- & -- & $69.31\pm4.25$ \\
& Old $H(z)$ + Old BAO & $0.0247\pm0.0030$ & $0.1186^{+0.0076}_{-0.0083}$ & $0.301^{+0.016}_{-0.018}$ & -- & -- & $69.14\pm1.85$ \\
& $H(z)$ + BAO & $0.0260\pm0.0040$ & $0.1212^{+0.0091}_{-0.0101}$ & $0.297^{+0.015}_{-0.017}$ & -- & -- & $70.49\pm2.74$ \\
& SNP + SND & $0.0224\pm0.0109$ & $0.1658^{+0.0927}_{-0.0598}$ & $0.310^{+0.021}_{-0.023}$ & -- & -- & $>45.87$ \\
& SNP\!\raisebox{0.2ex}{+} & $0.0224\pm0.0109$ & $0.1785^{+0.1038}_{-0.0623}$ & $0.332\pm0.020$ & -- & -- & $>44.39$ \\
& Old $H(z)$ + Old BAO + SNP + SND & $0.0244\pm0.0027$ & $0.1199\pm0.0076$ & $0.304^{+0.014}_{-0.015}$ & -- & -- & $69.04\pm1.77$ \\
& $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ & $0.0243\pm0.0034$ & $0.1267^{+0.0080}_{-0.0089}$ & $0.312\pm0.013$ & -- & -- & $69.65\pm2.48$ \\
\\
\multirow{8}{*}{Non-flat $\Lambda$CDM} & Old $H(z)$ & $0.0223^{+0.0109}_{-0.0108}$ & $0.1685^{+0.0736}_{-0.1139}$ & $0.390^{+0.167}_{-0.172}$ & $-0.174^{+0.501}_{-0.491}$ & -- & $69.09^{+4.70}_{-4.67}$ \\
& $H(z)$ & $0.0222\pm0.0108$ & $0.1612^{+0.0691}_{-0.1207}$ & $0.374^{+0.151}_{-0.210}$ & $-0.136^{+0.564}_{-0.457}$ & -- & $69.56^{+4.89}_{-4.88}$ \\
& Old $H(z)$ + Old BAO & $0.0266^{+0.0039}_{-0.0045}$ & $0.1088\pm0.0166$ & $0.291\pm0.023$ & $0.059^{+0.081}_{-0.091}$ & -- & $68.37\pm2.10$ \\
& $H(z)$ + BAO & $0.0275^{+0.0046}_{-0.0051}$ & $0.1131^{+0.0180}_{-0.0181}$ & $0.289\pm0.023$ & $0.047^{+0.082}_{-0.089}$ & -- & $69.81\pm2.80$ \\
& SNP + SND & $0.0224\pm0.0107$ & $0.1698^{+0.0766}_{-0.0911}$ & $0.317\pm0.068$ & $-0.017^{+0.172}_{-0.174}$ & -- & $>46.83$ \\
& SNP\!\raisebox{0.2ex}{+} & $0.0224\pm0.0107$ & $0.1745^{+0.0750}_{-0.0752}$ & $0.298\pm0.056$ & $0.089\pm0.132$ & -- & $>53.95$ \\
& Old $H(z)$ + Old BAO + SNP + SND & $0.0260^{+0.0037}_{-0.0043}$ & $0.1119\pm0.0157$ & $0.294\pm0.022$ & $0.040\pm0.070$ & -- & $68.62\pm1.90$ \\
& $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ & $0.0282^{+0.0046}_{-0.0050}$ & $0.1082\pm0.0152$ & $0.288\pm0.021$ & $0.087\pm0.063$ & -- & $68.89\pm2.44$ \\
\\
\multirow{8}{*}{Flat XCDM} & Old $H(z)$ & $0.0225\pm0.0107$ & $0.1505^{+0.0337}_{-0.0206}$ & $0.285^{+0.061}_{-0.075}$ & -- & $-1.972^{+1.164}_{-0.588}$ & $79.55^{+7.01}_{-15.05}$ \\
& $H(z)$ & $0.0225\pm0.0108$ & $0.1505^{+0.0303}_{-0.0200}$ & $0.278^{+0.065}_{-0.081}$ & -- & $-2.127^{+1.335}_{-0.629}$ & $80.96^{+7.59}_{-16.10}$ \\
& Old $H(z)$ + Old BAO & $0.0295^{+0.0042}_{-0.0050}$ & $0.0969^{+0.0178}_{-0.0152}$ & $0.289\pm0.020$ & -- & $-0.784^{+0.140}_{-0.107}$ & $66.22^{+2.31}_{-2.54}$ \\
& $H(z)$ + BAO & $0.0308^{+0.0053}_{-0.0046}$ & $0.0978^{+0.0184}_{-0.0164}$ & $0.285\pm0.019$ & -- & $-0.776^{+0.130}_{-0.103}$ & $67.18\pm3.18$ \\
& SNP + SND & $0.0224\pm0.0108$ & $0.1750^{+0.0824}_{-0.0980}$ & $0.315^{+0.083}_{-0.057}$ & -- & $-1.054^{+0.237}_{-0.171}$ & $>48.66$ \\
& SNP\!\raisebox{0.2ex}{+} & $0.0224\pm0.0107$ & $0.1552^{+0.0718}_{-0.0909}$ & $0.281^{+0.079}_{-0.061}$ & -- & $-0.900^{+0.166}_{-0.124}$ & $>49.26$ \\
& Old $H(z)$ + Old BAO + SNP + SND & $0.0262^{+0.0033}_{-0.0037}$ & $0.1120\pm0.0110$ & $0.295\pm0.016$ & -- & $-0.941\pm0.064$ & $68.55\pm1.85$ \\
& $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ & $0.0287\pm0.0044$ & $0.1097\pm0.0117$ & $0.290\pm0.016$ & -- & $-0.886\pm0.053$ & $69.15\pm2.52$\\
\\
\multirow{8}{*}{Non-flat XCDM} & Old $H(z)$ & $0.0218^{+0.0075}_{-0.0144}$ & $0.0927^{+0.0217}_{-0.0890}$ & $0.228^{+0.055}_{-0.168}$ & $0.241^{+0.451}_{-0.261}$ & $-2.148^{+1.682}_{-0.776}$ & $71.98^{+5.86}_{-11.09}$ \\
& $H(z)$ & $0.0218^{+0.0093}_{-0.0119}$ & $<0.2364$ & $0.228^{+0.054}_{-0.175}$ & $0.228^{+0.456}_{-0.267}$ & $-2.149^{+1.673}_{-0.772}$ & $73.06^{+6.61}_{-11.48}$ \\
& Old $H(z)$ + Old BAO & $0.0294^{+0.0047}_{-0.0050}$ & $0.0980^{+0.0186}_{-0.0187}$ & $0.292\pm0.025$ & $-0.027\pm0.109$ & $-0.770^{+0.149}_{-0.098}$ & $66.13^{+2.35}_{-2.36}$ \\
& $H(z)$ + BAO & $0.0303^{+0.0054}_{-0.0048}$ & $0.1021\pm0.0193$ & $0.292\pm0.024$ & $-0.054\pm0.103$ & $-0.757^{+0.135}_{-0.093}$ & $67.33\pm2.96$ \\
& SNP + SND & $0.0224\pm0.0107$ & $0.1455^{+0.0644}_{-0.0987}$ & $0.271\pm0.085$ & $0.130^{+0.426}_{-0.249}$ & $-1.499^{+0.901}_{-0.237}$ & $>48.72$ \\
& SNP\!\raisebox{0.2ex}{+} & $0.0224\pm0.0107$ & $0.1404^{+0.0654}_{-0.0845}$ & $0.259^{+0.071}_{-0.063}$ & $0.215^{+0.350}_{-0.202}$ & $-1.424^{+0.798}_{-0.251}$ & $>49.14$ \\
& Old $H(z)$ + Old BAO + SNP + SND & $0.0262^{+0.0037}_{-0.0043}$ & $0.1119\pm0.0157$ & $0.295\pm0.022$ & $-0.001\pm0.098$ & $-0.948^{+0.098}_{-0.068}$ & $68.53\pm1.90$ \\
& $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ & $0.0284\pm0.0047$ & $0.1115^{+0.0151}_{-0.0165}$ & $0.293\pm0.021$ & $-0.017\pm0.095$ & $-0.884^{+0.082}_{-0.058}$ & $69.23\pm2.53$ \\
\\
\multirow{8}{*}{Flat $\phi$CDM} & Old $H(z)$ & $0.0222\pm0.0107$ & $0.0642^{+0.0288}_{-0.0432}$ & $0.215^{+0.066}_{-0.108}$ & -- & $3.959^{+1.353}_{-3.789}$ & $63.95^{+3.01}_{-3.29}$ \\
& $H(z)$ & $0.0221\pm0.0108$ & $0.0620^{+0.0273}_{-0.0440}$ & $0.199^{+0.059}_{-0.106}$ & -- & $3.972^{+1.394}_{-3.739}$ & $65.80^{+4.12}_{-4.10}$ \\
& Old $H(z)$ + Old BAO & $0.0320^{+0.0054}_{-0.0041}$ & $0.0855^{+0.0175}_{-0.0174}$ & $0.275\pm0.023$ & -- & $1.267^{+0.536}_{-0.807}$ & $65.47^{+2.22}_{-2.21}$ \\%1.267^{+1.240}_{-1.221} 2$\sigma$
& $H(z)$ + BAO & $0.0326^{+0.0061}_{-0.0030}$ & $0.0866^{+0.0197}_{-0.0180}$ & $0.272^{+0.024}_{-0.022}$ & -- & $1.271^{+0.507}_{-0.836}$ & $66.19^{+2.89}_{-2.88}$ \\%$1.271^{+1.294}_{-1.228}$ 2$\sigma$
& SNP + SND & $0.0221\pm0.0107$ & $0.0875^{+0.0333}_{-0.0733}$ & $0.181^{+0.075}_{-0.076}$ & -- & $<4.052$ & $>50.07$ \\%
& SNP\!\raisebox{0.2ex}{+} & $0.0220^{+0.0100}_{-0.0118}$ & $0.0844^{+0.0288}_{-0.0757}$ & $0.175^{+0.065}_{-0.092}$ & -- & $1.966^{+0.479}_{-1.907}$ & $>50.58$ \\%
& Old $H(z)$ + Old BAO + SNP + SND & $0.0278^{+0.0032}_{-0.0039}$ & $0.1054^{+0.0117}_{-0.0100}$ & $0.287\pm0.017$ & -- & $0.324^{+0.122}_{-0.264}$ & $68.29\pm1.78$ \\
& $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ & $0.0300^{+0.0047}_{-0.0046}$ & $0.1040\pm0.0129$ & $0.282\pm0.018$ & -- & $0.475^{+0.189}_{-0.265}$ & $69.01\pm2.43$ \\%$0.475^{+0.444}_{-0.432}$
\\
\multirow{8}{*}{Non-flat $\phi$CDM} & Old $H(z)$ & $0.0215^{+0.0090}_{-0.0120}$ & $0.0536^{+0.0160}_{-0.0495}$ & $0.193^{+0.056}_{-0.117}$ & $0.262^{+0.265}_{-0.337}$ & $3.911^{+1.188}_{-3.871}$ & $62.81^{+2.65}_{-3.18}$ \\
& $H(z)$ & $0.0213^{+0.0083}_{-0.0124}$ & $0.0481^{+0.0135}_{-0.0452}$ & $0.172^{+0.048}_{-0.104}$ & $0.277^{+0.249}_{-0.331}$ & $4.087^{+1.393}_{-3.890}$ & $64.32^{+3.59}_{-4.04}$ \\
& Old $H(z)$ + Old BAO & $0.0320^{+0.0057}_{-0.0038}$ & $0.0865^{+0.0172}_{-0.0198}$ & $0.277^{+0.023}_{-0.026}$ & $-0.034^{+0.087}_{-0.098}$ & $1.360^{+0.584}_{-0.819}$ & $65.53\pm2.19$ \\%$1.360^{+1.289}_{-1.300}$
& $H(z)$ + BAO & $0.0325^{+0.0064}_{-0.0029}$ & $0.0881^{+0.0199}_{-0.0201}$ & $0.275\pm0.025$ & $-0.052^{+0.093}_{-0.087}$ & $1.427^{+0.572}_{-0.830}$ & $66.24\pm2.88$ \\%$1.427^{+1.365}_{-1.317}$
& SNP + SND & $0.0223\pm0.0107$ & $0.1098^{+0.0426}_{-0.0861}$ & $0.212^{+0.080}_{-0.082}$ & $-0.026^{+0.126}_{-0.150}$ & $<3.067$ & $>50.82$ \\%
& SNP\!\raisebox{0.2ex}{+} & $0.0224\pm0.0107$ & $0.1113^{+0.0464}_{-0.0758}$ & $0.210^{+0.068}_{-0.069}$ & $-0.001^{+0.108}_{-0.132}$ & $1.282^{+0.290}_{-1.255}$ & $>52.64$ \\%
& Old $H(z)$ + Old BAO + SNP + SND & $0.0271^{+0.0038}_{-0.0043}$ & $0.1095\pm0.0152$ & $0.292\pm0.022$ & $-0.038^{+0.071}_{-0.085}$ & $0.382^{+0.151}_{-0.299}$ & $68.48\pm1.85$ \\
& $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ & $0.0296^{+0.0048}_{-0.0047}$ & $0.1067^{+0.0153}_{-0.0154}$ & $0.286\pm0.021$ & $-0.035^{+0.071}_{-0.085}$ & $0.550^{+0.231}_{-0.314}$ & $69.15\pm2.53$ \\%$0.550^{+0.514}_{-0.527}$
\bottomrule
\end{tabular}
\begin{tablenotes}[flushleft]
\item [a] $w_{\rm X}$\ corresponds to flat/non-flat XCDM and $\alpha$ corresponds to flat/non-flat $\phi$CDM.
\item [b] $\rm{km \ s^{-1} \ Mpc^{-1}}$.
\end{tablenotes}
\end{threeparttable}%
}
\end{table*}
For Old $H(z)$, $H(z)$, Old $H(z)$ + Old BAO, $H(z)$ + BAO, SNP + SND, and SNP\!\raisebox{0.2ex}{+}\ data, the best-fitting parameter values, likelihood values, and information criteria values for all models are given in Table \ref{tab:BFP} and the marginalized posterior mean parameter values and uncertainties for all models are listed in Table \ref{tab:1d_BFP}. Figures \ref{fig1}--\ref{fig3} show the probability distributions and confidence regions of cosmological parameters, obtained from Old $H(z)$ and $H(z)$ data, from Old $H(z)$ + Old BAO and $H(z)$ + BAO data, and from SNP + SND and SNP\!\raisebox{0.2ex}{+}\ data, respectively.
\begin{figure*}
\centering
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{FLCDM_Comp4.pdf}}
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{NFLCDM_Comp4.pdf}}\\
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{FXCDM_Comp4.pdf}}
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{NFXCDM_Comp4.pdf}}\\
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{FPCDM_Comp4.pdf}}
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{NFPCDM_Comp4.pdf}}\\
\caption{One-dimensional likelihoods and 1$\sigma$, 2$\sigma$, and 3$\sigma$ two-dimensional likelihood confidence contours from Old $H(z)$ (red) and $H(z)$ (blue) data for six different models. The black dashed zero-acceleration lines in panels (b)--(f), computed with the third cosmological parameter set to the $H(z)$ + BAO data best-fitting values listed in Table \ref{tab:BFP} in panels (d) and (f), divide the parameter space into regions associated with currently-accelerating (below or below left) and currently-decelerating (above or above right) cosmological expansion. The crimson dash-dot lines represent flat hypersurfaces, with closed spatial hypersurfaces either below or to the left. The magenta lines represent $w_{\rm X}=-1$, i.e.\ flat or non-flat $\Lambda$CDM\ models. The $\alpha = 0$ axes correspond to flat and non-flat $\Lambda$CDM\ models in panels (e) and (f), respectively.}
\label{fig1}
\end{figure*}
\begin{figure*}
\centering
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{FLCDM_Comp10.pdf}}
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{NFLCDM_Comp10.pdf}}\\
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{FXCDM_Comp10.pdf}}
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{NFXCDM_Comp10.pdf}}\\
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{FPCDM_Comp10.pdf}}
\subfloat[]{%
\includegraphics[width=0.5\textwidth,height=0.35\textwidth]{NFPCDM_Comp10.pdf}}\\
\caption{Same as Fig. \ref{fig1} but for Old $H(z)$ + Old BAO (red) and $H(z)$ + BAO (blue) data.}
\label{fig2}
\end{figure*}
\begin{figure*}
\centering
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{FLCDM_Comp7.pdf}}
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{NFLCDM_Comp7.pdf}}\\
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{FXCDM_Comp7.pdf}}
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{NFXCDM_Comp7.pdf}}\\
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{FPCDM_Comp7.pdf}}
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{NFPCDM_Comp7.pdf}}\\
\caption{Same as Fig.\ \ref{fig1} but for SNP + SND (red) and SNP\!\raisebox{0.2ex}{+}\ (blue) data. The black dashed zero-acceleration lines in panels (b)--(f), computed with the third cosmological parameter set to the $H(z)$ + BAO data best-fitting values listed in Table \ref{tab:BFP} in panels (d) and (f), divide the parameter space into regions associated with currently-accelerating (below or below left) and currently-decelerating (above or above right) cosmological expansion.}
\label{fig3}
\end{figure*}
Old $H(z)$ data constraints on $\Omega_{m0}$\ range from $0.193^{+0.056}_{-0.117}$ (non-flat $\phi$CDM) to $0.390^{+0.167}_{-0.172}$ (non-flat $\Lambda$CDM), with a difference of $1.1\sigma$. In contrast, $H(z)$ data favor values of $\Omega_{m0}$\ lower by $\lesssim0.17\sigma$ and ranging from $0.172^{+0.048}_{-0.104}$ (non-flat $\phi$CDM) to $0.374^{+0.151}_{-0.210}$ (non-flat $\Lambda$CDM), with a difference of $0.94\sigma$.
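The differences quoted in units of $\sigma$ here and below appear consistent with dividing the gap between two central values by the quadrature sum of the error bars facing each other (the upper error bar of the lower value and the lower error bar of the higher value); a sketch of this convention, which reproduces the $1.1\sigma$ figure quoted above, is:
\begin{verbatim}
# Sketch of a sigma-difference convention consistent with the quoted values:
# the gap between central values over the quadrature sum of the facing errors.
import numpy as np

def sigma_difference(lo_val, lo_upper_err, hi_val, hi_lower_err):
    return (hi_val - lo_val) / np.hypot(lo_upper_err, hi_lower_err)

# Old H(z) Omega_m0: 0.193 (+0.056) vs 0.390 (-0.172)  ->  ~1.1 sigma
print(sigma_difference(0.193, 0.056, 0.390, 0.172))
\end{verbatim}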
Old $H(z)$ data constraints on $H_0$ range from $62.81^{+2.65}_{-3.18}$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ (non-flat $\phi$CDM) to $79.55^{+7.01}_{-15.05}$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ (flat XCDM), with a difference of $1.1\sigma$. In contrast, $H(z)$ data favor values of $H_0$ higher by $\lesssim0.36\sigma$ (and with larger error bars), ranging from $64.32^{+3.59}_{-4.04}$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ (non-flat $\phi$CDM) to $80.96^{+7.59}_{-16.10}$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ (flat XCDM), with a difference of $1.0\sigma$.
Old $H(z)$ data constraints on $\Omega_{k0}$\ are $-0.174^{+0.501}_{-0.491}$, $0.241^{+0.451}_{-0.261}$, and $0.262^{+0.265}_{-0.337}$ for non-flat $\Lambda$CDM, XCDM, and $\phi$CDM, respectively. In contrast, $H(z)$ data constraints on $\Omega_{k0}$\ are $-0.136^{+0.564}_{-0.457}$, $0.228^{+0.456}_{-0.267}$, and $0.277^{+0.249}_{-0.331}$ for non-flat $\Lambda$CDM, XCDM, and $\phi$CDM, which are $0.056\sigma$ higher, $0.025\sigma$ lower, and $0.035\sigma$ higher than those from Old $H(z)$ data, respectively. Both Old $H(z)$ and $H(z)$ data indicate that closed spatial geometry is mildly favored by non-flat $\Lambda$CDM, and that open spatial geometry is mildly favored by non-flat XCDM and non-flat $\phi$CDM, but flat hypersurfaces are still within 1$\sigma$.
Both Old $H(z)$ data and $H(z)$ data indicate a slight preference for dark energy dynamics, with broadly similar evidence for deviation from the $\Lambda$CDM\ models. For $H(z)$ data, the $w_{\rm X}$ parameter of the flat and non-flat XCDM models is found to be $0.84\sigma$ and $0.68\sigma$ lower than $-1$, respectively. Similarly, for both the flat and non-flat $\phi$CDM\ models, the $\alpha$ parameter is found to be $1.1\sigma$ away from $0$.
Old $H(z)$ + Old BAO data constraints on $\Omega_{m0}$\ range from $0.275\pm0.023$ (flat $\phi$CDM) to $0.301^{+0.016}_{-0.018}$ (flat $\Lambda$CDM), with a difference of $0.89\sigma$. In contrast, $H(z)$ + BAO data favor values of $\Omega_{m0}$\ lower by $\lesssim0.17\sigma$ and ranging from $0.272^{+0.024}_{-0.022}$ (flat $\phi$CDM) to $0.297^{+0.015}_{-0.017}$ (flat $\Lambda$CDM), with a difference of $0.85\sigma$.
Old $H(z)$ + Old BAO data constraints on $H_0$ range from $65.47^{+2.22}_{-2.21}$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ (flat $\phi$CDM) to $69.14\pm1.85$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ (flat $\Lambda$CDM), with a difference of $1.3\sigma$. In contrast, $H(z)$ + BAO data favor values of $H_0$ higher by $\lesssim0.41\sigma$ (and with larger error bars), ranging from $66.19^{+2.89}_{-2.88}$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ (flat $\phi$CDM) to $70.49\pm2.74$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ (flat $\Lambda$CDM), with a difference of $1.1\sigma$.
Old $H(z)$ + Old BAO data constraints on $\Omega_{k0}$\ are $0.059^{+0.081}_{-0.091}$, $-0.027\pm0.109$, and $-0.034^{+0.087}_{-0.098}$ for non-flat $\Lambda$CDM, XCDM, and $\phi$CDM, respectively. In contrast, $H(z)$ + BAO data constraints on $\Omega_{k0}$\ are $0.047^{+0.082}_{-0.089}$, $-0.054\pm0.103$, and $-0.052^{+0.093}_{-0.087}$ for non-flat $\Lambda$CDM, XCDM, and $\phi$CDM, which are $0.10\sigma$, $0.18\sigma$, and $0.13\sigma$ lower than those from Old $H(z)$ + Old BAO data, respectively. In contrast to Old $H(z)$ and $H(z)$ data, both Old $H(z)$ + Old BAO and $H(z)$ + BAO data indicate that open spatial geometry is mildly favored by non-flat $\Lambda$CDM, and closed spatial geometry is mildly favored by non-flat XCDM and non-flat $\phi$CDM, but flat hypersurfaces are still within 1$\sigma$.
Both Old $H(z)$ + Old BAO data and $H(z)$ + BAO data show strong evidence for dark energy dynamics. In the Old $H(z)$ + Old BAO data case, for flat (non-flat) XCDM ($1\sigma$ and $2\sigma$), $w_{\rm X}=-0.784^{+0.140}_{-0.107}{}^{+0.230}_{-0.243}$ ($w_{\rm X}=-0.770^{+0.149}_{-0.098}{}^{+0.233}_{-0.256}$), with central values being $<2\sigma$ higher than $w_{\rm X}=-1$ ($\Lambda$CDM); and for flat (non-flat) $\phi$CDM\ ($1\sigma$ and $2\sigma$), $\alpha=1.267^{+0.536}_{-0.807}{}^{+1.240}_{-1.221}$ ($\alpha=1.360^{+0.584}_{-0.819}{}^{+1.289}_{-1.300}$), with central values being $>2\sigma$ away from $\alpha=0$ ($\Lambda$CDM). In the $H(z)$ + BAO data case, for flat (non-flat) XCDM ($1\sigma$ and $2\sigma$), $w_{\rm X}=-0.776^{+0.130}_{-0.103}{}^{+0.221}_{-0.232}$ ($w_{\rm X}=-0.757^{+0.135}_{-0.093}{}^{+0.215}_{-0.236}$), with central values being $<2\sigma$ ($>2\sigma$) higher than $w_{\rm X}=-1$ ($\Lambda$CDM); and for flat (non-flat) $\phi$CDM\ ($1\sigma$ and $2\sigma$), $\alpha=1.271^{+0.507}_{-0.836}{}^{+1.294}_{-1.228}$ ($\alpha=1.427^{+0.572}_{-0.830}{}^{+1.365}_{-1.317}$), with central values being $>2\sigma$ away from $\alpha=0$ ($\Lambda$CDM).
Since SN Ia data alone cannot constrain $H_0$, we choose a narrower prior of $H_0\in[20,100]$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ for these data. The resulting $H_0$ constraints, derived from those on $\Omega_{b}h^2$\ and $\Omega_{c}h^2$, are $2\sigma$ lower limits for both SN Ia data sets and are consistent with $H_0$ constraints derived from other data.
SNP + SND data constraints on $\Omega_{m0}$\ range from $0.181^{+0.075}_{-0.076}$ (flat $\phi$CDM) to $0.317\pm0.068$ (non-flat $\Lambda$CDM), with a difference of $1.3\sigma$. In contrast, SNP\!\raisebox{0.2ex}{+}\ data constraints on $\Omega_{m0}$\ differ from the SNP + SND ones by $-0.35\sigma$ to $0.76\sigma$, ranging from $0.175^{+0.065}_{-0.092}$ (flat $\phi$CDM) to $0.332\pm0.020$ (flat $\Lambda$CDM), with a difference of $2.3\sigma$.
SNP + SND data constraints on $\Omega_{k0}$\ are $-0.017^{+0.172}_{-0.174}$, $0.130^{+0.426}_{-0.249}$, and $-0.026^{+0.126}_{-0.150}$ for non-flat $\Lambda$CDM, XCDM, and $\phi$CDM, respectively. In contrast, SNP\!\raisebox{0.2ex}{+}\ data constraints on $\Omega_{k0}$\ are $0.089\pm0.132$, $0.215^{+0.350}_{-0.202}$, and $-0.001^{+0.108}_{-0.132}$ for non-flat $\Lambda$CDM, XCDM, and $\phi$CDM, which are $0.49\sigma$, $0.18\sigma$, and $0.14\sigma$ higher than those from SNP + SND data, respectively. SNP + SND data show that open spatial geometry is mildly favored by non-flat XCDM, and closed spatial geometry is mildly favored by non-flat $\Lambda$CDM\ and $\phi$CDM, while SNP\!\raisebox{0.2ex}{+}\ data show that open geometry is mildly favored by non-flat $\Lambda$CDM, XCDM, and $\phi$CDM. Both sets of data indicate that flat hypersurfaces remain within the 1$\sigma$ confidence interval, with the exception of the non-flat XCDM scenario, where SNP\!\raisebox{0.2ex}{+}\ data deviate by $1.1\sigma$ from flatness.
Both SNP + SND and SNP\!\raisebox{0.2ex}{+}\ data show a slight preference for dark energy dynamics, but deviations from the $\Lambda$CDM\ models are within $1\sigma$. In the SNP + SND case, for flat (non-flat) XCDM, $w_{\rm X}=-1.054^{+0.237}_{-0.171}$ ($w_{\rm X}=-1.499^{+0.901}_{-0.237}$), with central values being $0.23\sigma$ ($0.55\sigma$) lower than $w_{\rm X}=-1$ ($\Lambda$CDM); and for flat (non-flat) $\phi$CDM, $2\sigma$ upper limits of $\alpha<4.052$ and $\alpha<3.067$ suggest that $\alpha=0$ ($\Lambda$CDM) is within $1\sigma$. In the SNP\!\raisebox{0.2ex}{+}\ case, for flat (non-flat) XCDM, $w_{\rm X}=-0.900^{+0.166}_{-0.124}$ ($w_{\rm X}=-1.424^{+0.798}_{-0.251}$), with central values being $0.81\sigma$ ($0.53\sigma$) higher (lower) than $w_{\rm X}=-1$ ($\Lambda$CDM); and for flat (non-flat) $\phi$CDM, $\alpha=1.966^{+0.479}_{-1.907}$ ($\alpha=1.282^{+0.290}_{-1.255}$), with both central values being $1.0\sigma$ away from $\alpha=0$ ($\Lambda$CDM).
\subsection{Constraints from Old $H(z)$ + Old BAO + SNP + SND data and $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ data}
\label{subsec:comp2}
\begin{figure*}
\centering
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{FLCDM_Comp2.pdf}}
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{NFLCDM_Comp2.pdf}}\\
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{FXCDM_Comp2.pdf}}
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{NFXCDM_Comp2.pdf}}\\
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{FPCDM_Comp2.pdf}}
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{NFPCDM_Comp2.pdf}}\\
\caption{Same as Fig.\ \ref{fig1} but for SNP\!\raisebox{0.2ex}{+}\ (gray), $H(z)$ + BAO (green), and $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ (blue) data.}
\label{fig4}
\end{figure*}
\begin{figure*}
\centering
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{FLCDM_Comp5.pdf}}
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{NFLCDM_Comp5.pdf}}\\
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{FXCDM_Comp5.pdf}}
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{NFXCDM_Comp5.pdf}}\\
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{FPCDM_Comp5.pdf}}
\subfloat[]{%
\includegraphics[width=0.5\textwidth,height=0.35\textwidth]{NFPCDM_Comp5.pdf}}\\
\caption{Same as Fig.\ \ref{fig1} but for Old $H(z)$ + Old BAO + SNP + SND (red) and $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ (blue) data.}
\label{fig5}
\end{figure*}
For Old $H(z)$ + Old BAO + SNP + SND and $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ data, the best-fitting parameter values, likelihood values, and information criteria values for all models are given in Table \ref{tab:BFP} and the marginalized posterior mean parameter values and uncertainties for all models are listed in Table \ref{tab:1d_BFP}. Figures \ref{fig4} and \ref{fig5} show the probability distributions and confidence regions of cosmological parameters, obtained from SNP\!\raisebox{0.2ex}{+}, $H(z)$ + BAO, $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}, and Old $H(z)$ + Old BAO + SNP + SND data. Constraints from SNP\!\raisebox{0.2ex}{+}, $H(z)$, and BAO data are mutually consistent, so these data can be used together to constrain cosmological parameters, as discussed below. Note that the Old $H(z)$ + Old BAO + SNP + SND data results are from Ref.\ \cite{CaoRatra2022} and are used here to compare to the $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ constraints.
$H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ (HzBSN) data constraints on $\Omega_{m0}$\ range from $0.282\pm0.018$ (flat $\phi$CDM) to $0.312\pm0.013$ (flat $\Lambda$CDM), with a difference of $1.4\sigma$. In contrast, Old $H(z)$ + Old BAO + SNP + SND data favor values of $\Omega_{m0}$\ higher by $\lesssim0.22\sigma$ (or lower by $0.42\sigma$ for flat $\Lambda$CDM) and ranging from $0.287\pm0.017$ (flat $\phi$CDM) to $0.304^{+0.014}_{-0.015}$ (flat $\Lambda$CDM), with a difference of $0.75\sigma$.
HzBSN data constraints on $H_0$ range from $68.89\pm2.44$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ (non-flat $\Lambda$CDM) to $69.65\pm2.48$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ (flat $\Lambda$CDM), with a difference of $0.22\sigma$. These $H_0$ values are $0.24\sigma$ (non-flat $\Lambda$CDM) and $0.44\sigma$ (flat $\Lambda$CDM) higher than the median statistics estimate of $H_0=68\pm2.8$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ \citep{chenratmed}, $0.31\sigma$ (non-flat $\Lambda$CDM) and $0.05\sigma$ (flat $\Lambda$CDM) lower than the TRGB estimate of $H_0=69.8\pm1.7$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ \citep{Freedman2021}, and $1.6\sigma$ (non-flat $\Lambda$CDM) and $1.3\sigma$ (flat $\Lambda$CDM) lower than the SN Ia and Cepheids measurement of $73.04\pm1.04$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ \cite{Riessetal2022}. The $H_0$ constraint from flat $\Lambda$CDM\ is $0.90\sigma$ higher than the $H_0$ estimate of $67.36 \pm 0.54$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ from \textit{Planck} 2018 TT,TE,EE+lowE+lensing CMB anisotropy data \cite{planck2018b}. In contrast, Old $H(z)$ + Old BAO + SNP + SND data favor values of $H_0$ lower by $\lesssim0.41\sigma$ (and with smaller error bars), ranging from $68.29\pm1.78$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ (flat $\phi$CDM) to $69.04\pm1.77$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ (flat $\Lambda$CDM), with a difference of $0.30\sigma$.
HzBSN data constraints on $\Omega_{k0}$\ are $0.087\pm0.063$, $-0.017\pm0.095$, and $-0.035^{+0.071}_{-0.085}$ for non-flat $\Lambda$CDM, XCDM, and $\phi$CDM, respectively. In contrast, Old $H(z)$ + Old BAO + SNP + SND data constraints on $\Omega_{k0}$\ are $0.040\pm0.070$, $-0.001\pm0.098$, and $-0.038^{+0.071}_{-0.085}$ for non-flat $\Lambda$CDM, XCDM, and $\phi$CDM, which are $0.50\sigma$ lower, $0.12\sigma$ higher, and $0.027\sigma$ lower than those from HzBSN data, respectively. For both data sets, non-flat $\Lambda$CDM\ favors open spatial geometry, with HzBSN data being $1.4\sigma$ away from flat and Old $H(z)$ + Old BAO + SNP + SND data being within $1\sigma$ of flat, while closed spatial geometry is favored by non-flat XCDM and non-flat $\phi$CDM, with flatness well within 1$\sigma$.
HzBSN data indicate a strong preference for dark energy dynamics. In particular, the central values of the XCDM equation of state parameter, $w_{\rm X}$, are found to be $>2\sigma$ and slightly $<2\sigma$ higher than $-1$ for flat and non-flat parametrizations respectively. Similarly, the central values of the parameter $\alpha$ in both the flat and non-flat $\phi$CDM\ models are found to be $>2\sigma$ away from 0. Note that these constraints are skewed and non-Gaussian. Old $H(z)$ + Old BAO + SNP + SND data show somewhat less preference for dark energy dynamics. Specifically, for flat (non-flat) XCDM, $w_{\rm X}=-0.941\pm0.064$ ($w_{\rm X}=-0.948^{+0.098}_{-0.068}$), with central values being $0.92\sigma$ ($0.76\sigma$) higher than $w_{\rm X}=-1$, and for flat (non-flat) $\phi$CDM, $\alpha=0.324^{+0.122}_{-0.264}$ ($\alpha=0.382^{+0.151}_{-0.299}$), with central values being $1.2\sigma$ ($1.3\sigma$) away from $\alpha=0$.
Relative to the Old $H(z)$ + Old BAO + SNP + SND data constraints of Ref.\ \cite{CaoRatra2022}, the most significant changes in the constraints from HzBSN data are that they more strongly favor dark energy dynamics and provide larger $H_0$ error bars.
\subsection{Constraints from QSO-AS + H\,\textsc{ii}G\ + Mg\,\textsc{ii}\ + C\,\textsc{iv}\ + A118 data}
\label{subsec:comp3}
\begin{table*}
\centering
\resizebox*{2.05\columnwidth}{2.6\columnwidth}{%
\begin{threeparttable}
\caption{Unmarginalized best-fitting parameter values for all models from various combinations of data.}\label{tab:BFPs}
\begin{tabular}{lcccccccccccccccccccccccc}
\toprule
Model & Data set & $\Omega_{b}h^2$ & $\Omega_{c}h^2$ & $\Omega_{m0}$ & $\Omega_{k0}$ & $w_{\mathrm{X}}$/$\alpha$\tnote{a} & $H_0$\tnote{b} & $l_{\rm m}$ & $\gamma_{\rm \textsc{m}}$ & $\beta_{\rm \textsc{m}}$ & $\sigma_{\rm int,\,\textsc{m}}$ & $\gamma_{\rm \textsc{c}}$ & $\beta_{\rm \textsc{c}}$ & $\sigma_{\rm int,\,\textsc{c}}$ & $\gamma_{\rm \textsc{a}}$ & $\beta_{\rm \textsc{a}}$ & $\sigma_{\rm int,\,\textsc{a}}$ & $-2\ln\mathcal{L}_{\mathrm{max}}$ & AIC & BIC & DIC & $\Delta \mathrm{AIC}$ & $\Delta \mathrm{BIC}$ & $\Delta \mathrm{DIC}$ \\
\midrule
& QSO-AS + H\,\textsc{ii}G & 0.0332 & 0.0947 & 0.251 & -- & -- & 71.53 & 11.06 & -- & -- & -- & -- & -- & -- & -- & -- & -- & 786.45 & 794.45 & 809.28 & 792.69 & 0.00 & 0.00 & 0.00\\
& Mg\,\textsc{ii}\ + C\,\textsc{iv} & -- & 0.0082 & 0.068 & -- & -- & -- & -- & 0.286 & 1.647 & 0.280 & 0.412 & 0.995 & 0.274 & -- & -- & -- & 50.94 & 64.94 & 84.21 & 69.10 & 0.00 & 0.00 & 0.00\\%64.9378, 84.21293133774455, 69.10238244041872
Flat & A118 & -- & 0.2768 & 0.616 & -- & -- & -- & -- & -- & -- & -- & -- & -- & -- & 1.166 & 49.92 & 0.382 & 118.63 & 126.63 & 137.71 & 125.95 & 0.00 & 0.00 & 0.00\\%126.6294, 137.71213849786267, 125.95055999632672
$\Lambda$CDM & QHMCA\tnote{c} & 0.0376 & 0.0925 & 0.256 & -- & -- & 71.45 & 11.01 & 0.286 & 1.686 & 0.287 & 0.435 & 1.066 & 0.274 & 1.205 & 49.99 & 0.380 & 958.61 & 984.61 & 1040.28 & 984.21 & 0.00 & 0.00 & 0.00\\%984.61, 1040.279467709648, 984.2102470770409
& HzBSN\tnote{d} & 0.0239 & 0.1256 & 0.312 & -- & -- & 69.35 & -- & -- & -- & -- & -- & -- & -- & -- & -- & -- & 1439.59 & 1445.59 & 1461.79 & 1446.05 & 0.00 & 0.00 & 0.00\\%1445.59, 1461.7863588262599, 1446.0532163049218
& OHzBSNQHMA\tnote{e} & 0.0258 & 0.1207 & 0.300 & -- & -- & 70.06 & 10.93 & 0.296 & 1.671 & 0.286 & -- & -- & -- & 1.110 & 50.26 & 0.406 & 2031.30 & 2051.30 & 2105.13 & 2051.86 & 0.00 & 0.00 & 0.00\\
& HzBSNQHMCA\tnote{f} & 0.0258 & 0.1279 & 0.309 & -- & -- & 70.64 & 10.81 & 0.306 & 1.687 & 0.273 & 0.423 & 1.095 & 0.268 & 1.219 & 49.93 & 0.383 & 2400.54 & 2426.54 & 2500.41 & 2427.47 & 0.00 & 0.00 & 0.00\\%2426.54, 2500.4062796407493, 2427.466554346306
\\
& QSO-AS + H\,\textsc{ii}G & 0.0228 & 0.1145 & 0.260 & $-0.360$ & -- & 72.91 & 11.60 & -- & -- & -- & -- & -- & -- & -- & -- & -- & 784.18 & 794.18 & 812.71 & 793.24 & $-0.27$ & 3.43 & 0.55\\
& Mg\,\textsc{ii}\ + C\,\textsc{iv} & -- & 0.0791 & 0.213 & $-0.678$ & -- & -- & -- & 0.293 & 1.642 & 0.279 & 0.512 & 1.018 & 0.269 & -- & -- & -- & 42.92 & 58.92 & 80.95 & 68.83 & $-6.01$ & $-3.26$ & $-0.27$\\%58.924, 80.95272152885092, 68.83413723479097
Non-flat & A118 & -- & 0.4633 & 0.997 & 1.553 & -- & -- & -- & -- & -- & -- & -- & -- & -- & 1.177 & 49.71 & 0.380 & 117.47 & 127.47 & 141.32 & 126.29 & 0.84 & 3.61 & 0.34\\%127.4658, 141.31922312232834, 126.28595007424988
$\Lambda$CDM & QHMCA\tnote{c} & 0.0301 & 0.0997 & 0.246 & $-0.229$ & -- & 72.83 & 11.49 & 0.311 & 1.668 & 0.274 & 0.432 & 1.042 & 0.271 & 1.172 & 50.10 & 0.387 & 957.08 & 985.08 & 1045.03 & 984.54 & 0.47 & 4.75 & 0.33\\%985.076, 1045.027734456544, 984.5376217959476
& HzBSN\tnote{d} & 0.0276 & 0.1078 & 0.288 & 0.084 & -- & 68.69 & -- & -- & -- & -- & -- & -- & -- & -- & -- & -- & 1437.61 & 1445.61 & 1467.21 & 1446.04 & 0.02 & 5.42 & $-0.01$\\%1445.614, 1467.2091451016797, 1446.038510912359
& OHzBSNQHMA\tnote{e} & 0.0261 & 0.1182 & 0.297 & 0.008 & -- & 69.90 & 10.94 & 0.301 & 1.674 & 0.281 & -- & -- & -- & 1.126 & 50.20 & 0.400 & 2031.26 & 2053.26 & 2112.48 & 2053.69 & 1.96 & 7.35 & 1.84\\
& HzBSNQHMCA\tnote{f} & 0.0296 & 0.1100 & 0.286 & 0.088 & -- & 70.00 & 10.85 & 0.311 & 1.670 & 0.280 & 0.444 & 1.054 & 0.278 & 1.234 & 49.90 & 0.381 & 2398.84 & 2426.84 & 2506.39 & 2427.33 & 0.30 & 5.98 & $-0.14$\\%2426.84, 2506.3883011515763, 2427.329705077832
\\
& QSO-AS + H\,\textsc{ii}G & 0.0174 & 0.1308 & 0.285 & -- & $-1.280$ & 72.32 & 11.23 & -- & -- & -- & -- & -- & -- & -- & -- & -- & 786.05 & 796.05 & 814.58 & 795.03 & 1.60 & 5.30 & 2.34\\
& Mg\,\textsc{ii}\ + C\,\textsc{iv} & -- & $-0.0212$ & 0.008 & -- & $-4.875$ & -- & -- & 0.248 & 1.399 & 0.262 & 0.337 & 0.757 & 0.225 & -- & -- & -- & 40.24 & 56.24 & 78.27 & 66.94 & $-8.70$ & $-5.95$ & $-2.16$\\%56.2384, 78.26712152885091, 66.94437147345775
Flat & A118 & -- & $-0.0223$ & 0.006 & -- & $-0.203$ & -- & -- & -- & -- & -- & -- & -- & -- & 1.186 & 49.87 & 0.383 & 118.07 & 128.07 & 141.92 & 126.87 & 1.44 & 4.21 & 0.92\\%128.07, 141.92302312232832, 126.86662214989649
XCDM & QHMCA\tnote{c} & 0.0275 & 0.1323 & 0.305 & -- & $-1.607$ & 72.58 & 11.43 & 0.288 & 1.667 & 0.276 & 0.438 & 1.053 & 0.265 & 1.176 & 50.11 & 0.384 & 957.29 & 985.29 & 1045.24 & 984.99 & 0.68 & 4.96 & 0.78\\%985.29, 1045.241734456544, 984.9893221193076
& HzBSN\tnote{d} & 0.0283 & 0.1092 & 0.290 & -- & $-0.883$ & 68.96 & -- & -- & -- & -- & -- & -- & -- & -- & -- & -- & 1434.63 & 1442.63 & 1464.22 & 1443.28 & $-2.96$ & 2.43 & $-2.77$\\%1442.626, 1464.2211451016797, 1443.280654848378
& OHzBSNQHMA\tnote{e} & 0.0266 & 0.1162 & 0.296 & -- & $-0.975$ & 69.62 & 10.94 & 0.278 & 1.696 & 0.280 & -- & -- & -- & 1.118 & 50.24 & 0.406 & 2030.88 & 2052.88 & 2112.10 & 2053.30 & 1.58 & 6.97 & 1.44\\
& HzBSNQHMCA\tnote{f} & 0.0298 & 0.1056 & 0.282 & -- & $-0.879$ & 69.45 & 10.89 & 0.279 & 1.708 & 0.286 & 0.435 & 1.068 & 0.283 & 1.195 & 50.01 & 0.384 & 2396.06 & 2424.06 & 2503.61 & 2425.97 & $-2.48$ & 3.20 & $-1.50$\\%2424.06, 2503.608301151576, 2425.9650664145006
\\
& QSO-AS + H\,\textsc{ii}G & 0.0300 & 0.0031 & 0.065 & $-0.560$ & $-0.651$ & 71.87 & 11.45 & -- & -- & -- & -- & -- & -- & -- & -- & -- & 781.18 & 793.18 & 815.43 & 799.59 & $-1.27$ & 6.15 & 6.90\\
& Mg\,\textsc{ii}\ + C\,\textsc{iv} & -- & $-0.0149$ & 0.021 & $-0.034$ & $-5.000$ & -- & -- & 0.285 & 1.262 & 0.270 & 0.374 & 0.808 & 0.226 & -- & -- & -- & 32.43 & 50.43 & 75.21 & 68.46 & $-14.51$ & $-9.00$ & $-0.71$\\%50.4292, 75.21151171995729, 68.39431106535574
Non-flat & A118 & -- & 0.4579 & 0.986 & 1.260 & $-1.127$ & -- & -- & -- & -- & -- & -- & -- & -- & 1.179 & 49.71 & 0.383 & 117.50 & 129.50 & 146.12 & 126.97 & 2.87 & 8.41 & 1.02\\%129.4992, 146.12330774679398, 126.97333446532176
XCDM & QHMCA\tnote{c} & 0.0248 & 0.1329 & 0.293 & $-0.196$ & $-1.194$ & 73.47 & 11.37 & 0.300 & 1.673 & 0.278 & 0.422 & 1.062 & 0.289 & 1.168 & 50.09 & 0.394 & 956.99 & 986.99 & 1051.22 & 986.72 & 2.38 & 10.94 & 2.51\\%986.988, 1051.2220012034402, 986.719356367329
& HzBSN\tnote{d} & 0.0278 & 0.1118 & 0.294 & $-0.032$ & $-0.865$ & 69.07 & -- & -- & -- & -- & -- & -- & -- & -- & -- & -- & 1434.46 & 1444.46 & 1471.45 & 1445.42 & $-1.13$ & 9.66 & $-0.63$\\%1444.456, 1471.4499313770996, 1445.4249805329234
& OHzBSNQHMA\tnote{e} & 0.0260 & 0.1158 & 0.295 & $-0.016$ & $-0.947$ & 69.53 & 10.94 & 0.277 & 1.697 & 0.288 & -- & -- & -- & 1.151 & 50.15 & 0.409 & 2030.78 & 2054.78 & 2119.38 & 2055.42 & 3.48 & 14.25 & 3.56\\
& HzBSNQHMCA\tnote{f} & 0.0298 & 0.1093 & 0.286 & 0.007 & $-0.900$ & 69.87 & 10.80 & 0.306 & 1.692 & 0.274 & 0.443 & 1.062 & 0.276 & 1.222 & 49.93 & 0.387 & 2396.10 & 2426.10 & 2511.33 & 2427.35 & $-0.44$ & 10.92 & $-0.12$\\%2426.1, 2511.330322662403, 2427.3463043361803
\\
& QSO-AS + H\,\textsc{ii}G & 0.0198 & 0.1066 & 0.249 & -- & 0.000 & 71.42 & 11.10 & -- & -- & -- & -- & -- & -- & -- & -- & -- & 786.46 & 796.46 & 815.00 & 796.31 & 2.01 & 5.72 & 3.62\\
& Mg\,\textsc{ii}\ + C\,\textsc{iv} & -- & $-0.0078$ & 0.035 & -- & 0.014 & -- & -- & 0.265 & 1.662 & 0.283 & 0.399 & 0.984 & 0.263 & -- & -- & -- & 51.19 & 67.19 & 89.21 & 72.04 & 2.25 & 5.00 & 2.94\\%67.1854, 89.21412152885091, 72.04063227907369
Flat & A118 & -- & 0.1226 & 0.301 & -- & 9.805 & -- & -- & -- & -- & -- & -- & -- & -- & 1.173 & 49.89 & 0.383 & 118.24 & 128.24 & 142.10 & 125.51 & 1.62 & 4.39 & $-0.44$\\%128.2448, 142.09822312232833, 125.51414811591812
$\phi$CDM & QHMCA\tnote{c} & 0.0227 & 0.1014 & 0.243 & -- & 0.009 & 71.72 & 11.15 & 0.299 & 1.685 & 0.272 & 0.419 & 1.070 & 0.280 & 1.222 & 49.97 & 0.392 & 958.98 & 986.98 & 1046.93 & 988.57 & 2.37 & 6.65 & 4.36\\%986.976, 1046.9277344565442, 988.5670623982281
& HzBSN\tnote{d} & 0.0288 & 0.1060 & 0.286 & -- & 0.402 & 68.84 & -- & -- & -- & -- & -- & -- & -- & -- & -- & -- & 1434.43 & 1442.43 & 1464.02 & 1442.92 & $-3.16$ & 2.23 & $-3.14$\\%1442.426, 1464.0211451016796, 1442.9161999694343
& OHzBSNQHMA\tnote{e} & 0.0274 & 0.1116 & 0.289 & -- & 0.150 & 69.51 & 10.97 & 0.292 & 1.685 & 0.280 & -- & -- & -- & 1.121 & 50.23 & 0.409 & 2030.52 & 2052.52 & 2111.74 & 2053.18 & 1.22 & 6.61 & 1.33\\
& HzBSNQHMCA\tnote{f} & 0.0292 & 0.1152 & 0.294 & -- & 0.286 & 70.28 & 10.78 & 0.286 & 1.696 & 0.286 & 0.426 & 1.067 & 0.294 & 1.183 & 50.04 & 0.384 & 2396.50 & 2424.50 & 2504.05 & 2424.36 & $-2.04$ & 3.64 & $-3.11$\\%2424.5, 2504.048301151576, 2424.3575784058503
\\
& QSO-AS + H\,\textsc{ii}G & 0.0338 & 0.0979 & 0.251 & $-0.250$ & 0.000 & 72.53 & 11.47 & -- & -- & -- & -- & -- & -- & -- & -- & -- & 784.61 & 796.61 & 818.85 & 801.32 & 2.16 & 9.57 & 8.63\\
& Mg\,\textsc{ii}\ + C\,\textsc{iv} & -- & 0.0611 & 0.176 & $-0.173$ & 0.115 & -- & -- & 0.289 & 1.659 & 0.271 & 0.421 & 1.034 & 0.264 & -- & -- & -- & 50.74 & 68.74 & 93.52 & 72.86 & 3.80 & 9.31 & 3.76\\%68.7414, 93.52371171995728, 72.85802398764606
Non-flat & A118 & -- & 0.2763 & 0.615 & 0.383 & 6.632 & -- & -- & -- & -- & -- & -- & -- & -- & 1.180 & 49.87 & 0.381 & 118.00 & 130.00 & 146.62 & 126.28 & 3.37 & 8.91 & 0.33\\%129.995, 146.619107746794, 126.27807861071165
$\phi$CDM & QHMCA\tnote{c} & 0.0321 & 0.1029 & 0.259 & $-0.208$ & 0.101 & 72.42 & 11.12 & 0.304 & 1.676 & 0.277 & 0.451 & 1.039 & 0.273 & 1.155 & 50.11 & 0.386 & 957.66 & 987.66 & 1051.89 & 990.25 & 3.05 & 11.61 & 6.04\\%987.658, 1051.89200120344, 990.2481516952522 D
& HzBSN\tnote{d} & 0.0282 & 0.1104 & 0.291 & $-0.045$ & 0.480 & 69.13 & -- & -- & -- & -- & -- & -- & -- & -- & -- & -- & 1434.23 & 1444.23 & 1471.22 & 1444.26 & $-1.36$ & 9.43 & $-1.79$\\%1444.226, 1471.2199313770998, 1444.2616651905555 D
& OHzBSNQHMA\tnote{e} & 0.0251 & 0.1207 & 0.300 & $-0.056$ & 0.195 & 69.84 & 10.91 & 0.279 & 1.699 & 0.284 & -- & -- & -- & 1.132 & 50.19 & 0.411 & 2030.76 & 2054.76 & 2119.36 & 2055.13 & 3.46 & 14.23 & 3.28\\
& HzBSNQHMCA\tnote{f} & 0.0313 & 0.1001 & 0.275 & $-0.015$ & 0.545 & 69.33 & 10.85 & 0.287 & 1.694 & 0.272 & 0.424 & 1.059 & 0.281 & 1.197 & 50.00 & 0.389 & 2396.34 & 2426.34 & 2511.57 & 2425.80 & $-0.20$ & 11.16 & $-1.66$\\%2426.34, 2511.570322662403, 2425.803303852993 D
\bottomrule
\end{tabular}
\begin{tablenotes}[flushleft]
\item [a] $w_{\rm X}$\ corresponds to flat/non-flat XCDM and $\alpha$ corresponds to flat/non-flat $\phi$CDM.
\item [b] $\rm{km \ s^{-1} \ Mpc^{-1}}$.
\item [c] QSO-AS + H\,\textsc{ii}G\ + Mg\,\textsc{ii}\ + C\,\textsc{iv}\ + A118.
\item [d] $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}.
\item [e] Old $H(z)$ + Old BAO + SNP + SND + QSO-AS + H\,\textsc{ii}G\ + Mg\,\textsc{ii}\ + A118.
\item [f] $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ + QSO-AS + H\,\textsc{ii}G\ + Mg\,\textsc{ii}\ + C\,\textsc{iv}\ + A118.
\end{tablenotes}
\end{threeparttable}%
}
\end{table*}
\begin{table*}
\centering
\resizebox*{2.05\columnwidth}{2.6\columnwidth}{%
\begin{threeparttable}
\caption{One-dimensional posterior mean parameter values and uncertainties ($\pm 1\sigma$ error bars or $2\sigma$ limits) for all models from various combinations of data.}\label{tab:1d_BFPs}
\begin{tabular}{lccccccccccccccccc}
\toprule
Model & Data set & $\Omega_{b}h^2$ & $\Omega_{c}h^2$ & $\Omega_{m0}$ & $\Omega_{k0}$ & $w_{\mathrm{X}}$/$\alpha$\tnote{a} & $H_0$\tnote{b} & $l_{\rm m}$ & $\gamma_{\rm \textsc{m}}$ & $\beta_{\rm \textsc{m}}$ & $\sigma_{\rm int,\,\textsc{m}}$ & $\gamma_{\rm \textsc{c}}$ & $\beta_{\rm \textsc{c}}$ & $\sigma_{\rm int,\,\textsc{c}}$ & $\gamma_{\rm \textsc{a}}$ & $\beta_{\rm \textsc{a}}$ & $\sigma_{\rm int,\,\textsc{a}}$\\
\midrule
& QSO-AS + H\,\textsc{ii}G & $0.0225\pm0.0113$ & $0.1076^{+0.0197}_{-0.0224}$ & $0.257^{+0.037}_{-0.047}$ & -- & -- & $71.52\pm1.79$ & $11.04\pm0.34$ & -- & -- & -- & -- & -- & -- & -- & -- & -- \\
& Mg\,\textsc{ii}\ + C\,\textsc{iv} & -- & -- & $<0.444$\tnote{c} & -- & -- & -- & -- & $0.292\pm0.045$ & $1.691\pm0.061$ & $0.289^{+0.023}_{-0.030}$ & $0.440\pm0.042$ & $1.030\pm0.089$ & $0.305^{+0.037}_{-0.054}$ & -- & -- & -- \\
Flat & A118 & -- & -- & $0.598^{+0.292}_{-0.226}$ & -- & -- & -- & -- & -- & -- & -- & -- & -- & -- & $1.171\pm0.087$ & $49.93\pm0.25$ & $0.393^{+0.027}_{-0.032}$ \\%$>0.215$
$\Lambda$CDM & QHMCA\tnote{d} & $0.0225\pm0.0117$ & $0.1090^{+0.0193}_{-0.0222}$ & $0.260^{+0.036}_{-0.046}$ & -- & -- & $71.43\pm1.73$ & $11.03\pm0.34$ & $0.292\pm0.043$ & $1.690\pm0.055$ & $0.288^{+0.023}_{-0.029}$ & $0.439\pm0.038$ & $1.030^{+0.072}_{-0.063}$ & $0.301^{+0.035}_{-0.052}$ & $1.197\pm0.086$ & $50.02\pm0.24$ & $0.393^{+0.025}_{-0.031}$ \\
& HzBSN\tnote{e} & $0.0243\pm0.0034$ & $0.1267^{+0.0080}_{-0.0089}$ & $0.312\pm0.013$ & -- & -- & $69.65\pm2.48$ & -- & -- & -- & -- & -- & -- & -- & -- & -- & -- \\
& OHzBSNQHMA\tnote{f} & $0.0256\pm0.0020$ & $0.1201\pm0.0061$ & $0.300\pm0.012$ & -- & -- & $69.87\pm1.13$ & $10.96\pm0.25$ & $0.293\pm0.044$ & $1.684\pm0.055$ & $0.292^{+0.023}_{-0.029}$ & -- & -- & -- & $1.131\pm0.087$ & $50.20\pm0.24$ & $0.413^{+0.026}_{-0.033}$ \\
& HzBSNQHMCA\tnote{g} & $0.0250\pm0.0021$ & $0.1260\pm0.0064$ & $0.308\pm0.012$ & -- & -- & $70.13\pm1.25$ & $10.86\pm0.26$ & $0.294\pm0.044$ & $1.691\pm0.054$ & $0.288^{+0.023}_{-0.029}$ & $0.442\pm0.039$ & $1.035^{+0.072}_{-0.063}$ & $0.302^{+0.035}_{-0.052}$ & $1.190\pm0.085$ & $50.02\pm0.24$ & $0.392^{+0.025}_{-0.031}$ \\
\\
& QSO-AS + H\,\textsc{ii}G & $0.0224\pm0.0111$ & $0.1122^{+0.0223}_{-0.0218}$ & $0.260^{+0.039}_{-0.045}$ & $-0.196^{+0.112}_{-0.295}$ & -- & $72.25\pm1.99$ & $11.35\pm0.49$ & -- & -- & -- & -- & -- & -- & -- & -- & -- \\
& Mg\,\textsc{ii}\ + C\,\textsc{iv} & -- & -- & $0.473^{+0.187}_{-0.311}$ & $-0.818^{+0.391}_{-0.637}$ & -- & -- & -- & $0.314^{+0.048}_{-0.052}$ & $1.662\pm0.065$ & $0.285^{+0.023}_{-0.030}$ & $0.491^{+0.050}_{-0.064}$ & $1.073^{+0.093}_{-0.094}$ & $0.299^{+0.036}_{-0.053}$ & -- & -- & -- \\
Non-flat & A118 & -- & -- & $>0.267$ & $0.789^{+0.664}_{-0.775}$ & -- & -- & -- & -- & -- & -- & -- & -- & -- & $1.186\pm0.089$ & $49.82\pm0.26$ & $0.392^{+0.026}_{-0.032}$ \\
$\Lambda$CDM & QHMCA\tnote{d} & $0.0224\pm0.0117$ & $0.1144^{+0.0213}_{-0.0212}$ & $0.266^{+0.036}_{-0.044}$ & $-0.157^{+0.109}_{-0.211}$ & -- & $71.97\pm1.85$ & $11.24\pm0.41$ & $0.292\pm0.043$ & $1.684\pm0.055$ & $0.288^{+0.023}_{-0.029}$ & $0.441\pm0.038$ & $1.028^{+0.071}_{-0.064}$ & $0.298^{+0.035}_{-0.052}$ & $1.182\pm0.087$ & $50.06\pm0.24$ & $0.395^{+0.026}_{-0.031}$ \\
& HzBSN\tnote{e} & $0.0282^{+0.0046}_{-0.0050}$ & $0.1082\pm0.0152$ & $0.288\pm0.021$ & $0.087\pm0.063$ & -- & $68.89\pm2.44$ & -- & -- & -- & -- & -- & -- & -- & -- & -- & -- \\
& OHzBSNQHMA\tnote{f} & $0.0265^{+0.0032}_{-0.0038}$ & $0.1168\pm0.0127$ & $0.295\pm0.019$ & $0.018\pm0.059$ & -- & $69.79\pm1.14$ & $10.96\pm0.25$ & $0.293\pm0.044$ & $1.685\pm0.055$ & $0.292^{+0.023}_{-0.029}$ & -- & -- & -- & $1.131\pm0.086$ & $50.20\pm0.24$ & $0.413^{+0.027}_{-0.033}$ \\
& HzBSNQHMCA\tnote{g} & $0.0291^{+0.0036}_{-0.0041}$ & $0.1115\pm0.0126$ & $0.288\pm0.019$ & $0.074\pm0.056$ & -- & $70.02\pm1.25$ & $10.87\pm0.26$ & $0.293\pm0.044$ & $1.692\pm0.054$ & $0.288^{+0.023}_{-0.029}$ & $0.440\pm0.038$ & $1.033^{+0.073}_{-0.062}$ & $0.303^{+0.035}_{-0.052}$ & $1.197^{+0.084}_{-0.085}$ & $50.01\pm0.24$ & $0.392^{+0.025}_{-0.031}$ \\
\\
& QSO-AS + H\,\textsc{ii}G & $0.0224\pm0.0112$ & $0.1391^{+0.0333}_{-0.0256}$ & $0.305^{+0.056}_{-0.047}$ & -- & $-1.683^{+0.712}_{-0.387}$ & $72.92^{+2.15}_{-2.40}$ & $11.31\pm0.43$ & -- & -- & -- & -- & -- & -- & -- & -- & -- \\
& Mg\,\textsc{ii}\ + C\,\textsc{iv} & -- & -- & $<0.563$ & -- & $<-1.509$ & -- & -- & $0.282^{+0.042}_{-0.046}$ & $1.557^{+0.117}_{-0.101}$ & $0.283^{+0.023}_{-0.029}$ & $0.405\pm0.047$ & $0.900^{+0.122}_{-0.121}$ & $0.282^{+0.038}_{-0.054}$ & -- & -- & -- \\
Flat & A118 & -- & -- & $0.557^{+0.277}_{-0.274}$ & -- & $-2.521^{+2.330}_{-2.370}$ & -- & -- & -- & -- & -- & -- & -- & -- & $1.167\pm0.088$ & $50.01^{+0.27}_{-0.31}$ & $0.393^{+0.026}_{-0.032}$ \\%$>0.159$
XCDM & QHMCA\tnote{d} & $0.0225\pm0.0117$ & $0.1457^{+0.0291}_{-0.0242}$ & $0.313\pm0.045$ & -- & $-1.929^{+0.825}_{-0.426}$ & $73.51^{+2.15}_{-2.52}$ & $11.41\pm0.43$ & $0.295\pm0.044$ & $1.677\pm0.056$ & $0.287^{+0.023}_{-0.029}$ & $0.439\pm0.038$ & $1.029^{+0.071}_{-0.064}$ & $0.297^{+0.035}_{-0.052}$ & $1.183\pm0.086$ & $50.07\pm0.24$ & $0.392^{+0.025}_{-0.031}$ \\
& HzBSN\tnote{e} & $0.0287\pm0.0044$ & $0.1097\pm0.0117$ & $0.290\pm0.016$ & -- & $-0.886\pm0.053$ & $69.15\pm2.52$ & -- & -- & -- & -- & -- & -- & -- & -- & -- & -- \\
& OHzBSNQHMA\tnote{f} & $0.0271^{+0.0027}_{-0.0031}$ & $0.1147^{+0.0098}_{-0.0097}$ & $0.294\pm0.015$ & -- & $-0.959\pm0.059$ & $69.66\pm1.16$ & $10.94\pm0.25$ & $0.293\pm0.044$ & $1.685\pm0.055$ & $0.292^{+0.023}_{-0.029}$ & -- & -- & -- & $1.131\pm0.087$ & $50.20\pm0.24$ & $0.413^{+0.027}_{-0.033}$ \\
& HzBSNQHMCA\tnote{g} & $0.0294^{+0.0030}_{-0.0035}$ & $0.1106^{+0.0100}_{-0.0102}$ & $0.288\pm0.015$ & -- & $-0.895\pm0.051$ & $69.84\pm1.26$ & $10.80\pm0.26$ & $0.294\pm0.044$ & $1.692\pm0.054$ & $0.288^{+0.023}_{-0.029}$ & $0.442\pm0.039$ & $1.034^{+0.074}_{-0.062}$ & $0.303^{+0.035}_{-0.052}$ & $1.193\pm0.085$ & $50.01\pm0.24$ & $0.392^{+0.025}_{-0.031}$ \\
\\
& QSO-AS + H\,\textsc{ii}G & $0.0224\pm0.0114$ & $0.1122^{+0.0473}_{-0.0326}$ & $0.258^{+0.086}_{-0.057}$ & $0.018^{+0.345}_{-0.383}$ & $-1.670^{+1.063}_{-0.245}$ & $72.34\pm2.16$ & $11.20\pm0.49$ & -- & -- & -- & -- & -- & -- & -- & -- & -- \\
& Mg\,\textsc{ii}\ + C\,\textsc{iv} & -- & -- & $0.338^{+0.101}_{-0.299}$ & $-0.410^{+0.368}_{-0.222}$ & $<-1.124$ & -- & -- & $0.319^{+0.048}_{-0.054}$ & $1.526^{+0.132}_{-0.108}$ & $0.280^{+0.023}_{-0.030}$ & $0.456^{+0.047}_{-0.056}$ & $0.966^{+0.119}_{-0.110}$ & $0.282^{+0.037}_{-0.053}$ & -- & -- & -- \\
Non-flat & A118 & -- & -- & $>0.226$ & $0.690^{+0.512}_{-0.798}$ & $-2.342^{+2.067}_{-1.106}$ & -- & -- & -- & -- & -- & -- & -- & -- & $1.185\pm0.090$ & $49.82\pm0.28$ & $0.392^{+0.026}_{-0.032}$ \\
XCDM & QHMCA\tnote{d} & $0.0226\pm0.0118$ & $0.1327^{+0.0335}_{-0.0277}$ & $0.291^{+0.056}_{-0.049}$ & $0.039^{+0.219}_{-0.229}$ & $-2.066^{+1.287}_{-0.446}$ & $73.13^{+2.14}_{-2.46}$ & $11.30\pm0.46$ & $0.294\pm0.044$ & $1.680\pm0.056$ & $0.287^{+0.023}_{-0.030}$ & $0.439\pm0.038$ & $1.030^{+0.071}_{-0.065}$ & $0.298^{+0.035}_{-0.052}$ & $1.187\pm0.087$ & $50.06\pm0.24$ & $0.393^{+0.025}_{-0.031}$ \\
& HzBSN\tnote{e} & $0.0284\pm0.0047$ & $0.1115^{+0.0151}_{-0.0165}$ & $0.293\pm0.021$ & $-0.017\pm0.095$ & $-0.884^{+0.082}_{-0.058}$ & $69.23\pm2.53$ & -- & -- & -- & -- & -- & -- & -- & -- & -- & -- \\
& OHzBSNQHMA\tnote{f} & $0.0269^{+0.0033}_{-0.0039}$ & $0.1155^{+0.0128}_{-0.0127}$ & $0.295\pm0.019$ & $-0.009^{+0.077}_{-0.083}$ & $-0.959^{+0.090}_{-0.063}$ & $69.65\pm1.16$ & $10.93\pm0.26$ & $0.293\pm0.044$ & $1.685\pm0.055$ & $0.292^{+0.023}_{-0.029}$ & -- & -- & -- & $1.130\pm0.087$ & $50.20\pm0.24$ & $0.413^{+0.027}_{-0.033}$ \\
& HzBSNQHMCA\tnote{g} & $0.0293^{+0.0036}_{-0.0041}$ & $0.1108^{+0.0124}_{-0.0125}$ & $0.289\pm0.019$ & $-0.004\pm0.078$ & $-0.897^{+0.075}_{-0.055}$ & $69.81\pm1.26$ & $10.80\pm0.27$ & $0.294\pm0.044$ & $1.692\pm0.054$ & $0.288^{+0.023}_{-0.030}$ & $0.442\pm0.039$ & $1.034^{+0.074}_{-0.063}$ & $0.303^{+0.035}_{-0.052}$ & $1.192\pm0.085$ & $50.01\pm0.24$ & $0.392^{+0.025}_{-0.031}$ \\
\\
& QSO-AS + H\,\textsc{ii}G & $0.0217^{+0.0081}_{-0.0138}$ & $0.0543^{+0.0225}_{-0.0471}$ & $0.154^{+0.053}_{-0.086}$ & -- & $<6.506$ & $70.64\pm1.80$ & $10.81\pm0.34$ & -- & -- & -- & -- & -- & -- & -- & -- & -- \\
& Mg\,\textsc{ii}\ + C\,\textsc{iv} & -- & -- & $<0.537$\tnote{c} & -- & $<6.202$\tnote{c} & -- & -- & $0.299\pm0.046$ & $1.717^{+0.059}_{-0.053}$ & $0.289^{+0.023}_{-0.030}$ & $0.449\pm0.043$ & $1.069^{+0.091}_{-0.074}$ & $0.312^{+0.037}_{-0.054}$ & -- & -- & -- \\
Flat & A118 & -- & -- & $0.535^{+0.293}_{-0.287}$ & -- & -- & -- & -- & -- & -- & -- & -- & -- & -- & $1.171\pm0.088$ & $49.88\pm0.24$ & $0.392^{+0.026}_{-0.032}$ \\%$>0.124$
$\phi$CDM & QHMCA\tnote{d} & $0.0219^{+0.0082}_{-0.0165}$ & $0.0647^{+0.0378}_{-0.0408}$ & $0.175^{+0.074}_{-0.080}$ & -- & $<6.630$ & $70.66\pm1.75$ & $10.83\pm0.35$ & $0.292\pm0.044$ & $1.695\pm0.054$ & $0.288^{+0.023}_{-0.029}$ & $0.439\pm0.039$ & $1.033^{+0.074}_{-0.064}$ & $0.303^{+0.035}_{-0.053}$ & $1.204\pm0.086$ & $50.00\pm0.24$ & $0.393^{+0.025}_{-0.031}$ \\
& HzBSN\tnote{e} & $0.0300^{+0.0047}_{-0.0046}$ & $0.1040\pm0.0129$ & $0.282\pm0.018$ & -- & $0.475^{+0.189}_{-0.265}$ & $69.01\pm2.43$ & -- & -- & -- & -- & -- & -- & -- & -- & -- & -- \\%$0.475^{+0.444}_{-0.432}$
& OHzBSNQHMA\tnote{f} & $0.0286^{+0.0025}_{-0.0033}$ & $0.1089^{+0.0103}_{-0.0083}$ & $0.286\pm0.015$ & -- & $0.249^{+0.069}_{-0.239}$ & $69.50\pm1.14$ & $10.92\pm0.25$ & $0.293\pm0.044$ & $1.686\pm0.054$ & $0.292^{+0.023}_{-0.029}$ & -- & -- & -- & $1.132\pm0.086$ & $50.20\pm0.24$ & $0.413^{+0.027}_{-0.033}$ \\
& HzBSNQHMCA\tnote{g} & $0.0307^{+0.0032}_{-0.0038}$ & $0.1057^{+0.0118}_{-0.0107}$ & $0.281\pm0.017$ & -- & $0.423^{+0.168}_{-0.246}$ & $69.79^{+1.25}_{-1.26}$ & $10.79\pm0.26$ & $0.294\pm0.044$ & $1.692\pm0.054$ & $0.288^{+0.023}_{-0.029}$ & $0.442\pm0.039$ & $1.034^{+0.075}_{-0.062}$ & $0.303^{+0.035}_{-0.052}$ & $1.193\pm0.085$ & $50.01\pm0.24$ & $0.392^{+0.025}_{-0.031}$ \\%$0.423^{+0.405}_{-0.393}$
\\
& QSO-AS + H\,\textsc{ii}G & $0.0219^{+0.0093}_{-0.0130}$ & $0.0576^{+0.0268}_{-0.0424}$ & $0.163^{+0.058}_{-0.081}$ & $0.181^{+0.180}_{-0.339}$ & $<7.875$ & $70.21\pm1.83$ & $10.70^{+0.36}_{-0.41}$ & -- & -- & -- & -- & -- & -- & -- & -- & -- \\
& Mg\,\textsc{ii}\ + C\,\textsc{iv} & -- & -- & $<0.536$\tnote{c} & $0.088^{+0.384}_{-0.364}$ & $<6.162$\tnote{c} & -- & -- & $0.299\pm0.046$ & $1.719\pm0.055$ & $0.290^{+0.023}_{-0.030}$ & $0.450\pm0.043$ & $1.072^{+0.088}_{-0.076}$ & $0.312^{+0.037}_{-0.054}$ & -- & -- & -- \\%
Non-flat & A118 & -- & -- & $0.516^{+0.215}_{-0.288}$ & $0.064^{+0.293}_{-0.282}$ & $5.209^{+3.855}_{-2.462}$ & -- & -- & -- & -- & -- & -- & -- & -- & $1.174\pm0.089$ & $49.88\pm0.24$ & $0.392^{+0.026}_{-0.032}$ \\%$0.516^{+0.452}_{-0.390}$
$\phi$CDM & QHMCA\tnote{d} & $0.0221^{+0.0117}_{-0.0165}$ & $0.0742^{+0.0405}_{-0.0329}$ & $0.194^{+0.077}_{-0.063}$ & $0.018^{+0.083}_{-0.232}$ & $<6.688$ & $70.63\pm1.85$ & $10.84\pm0.39$ & $0.292\pm0.044$ & $1.694\pm0.055$ & $0.288^{+0.023}_{-0.029}$ & $0.439\pm0.039$ & $1.033^{+0.075}_{-0.064}$ & $0.303^{+0.035}_{-0.053}$ & $1.201\pm0.086$ & $50.00\pm0.24$ & $0.393^{+0.026}_{-0.031}$ \\
& HzBSN\tnote{e} & $0.0296^{+0.0048}_{-0.0047}$ & $0.1067^{+0.0153}_{-0.0154}$ & $0.286\pm0.021$ & $-0.035^{+0.071}_{-0.085}$ & $0.550^{+0.231}_{-0.314}$ & $69.15\pm2.53$ & -- & -- & -- & -- & -- & -- & -- & -- & -- & -- \\%$0.550^{+0.514}_{-0.527}$
& OHzBSNQHMA\tnote{f} & $0.0277^{+0.0034}_{-0.0040}$ & $0.1126\pm0.0128$ & $0.291\pm0.019$ & $-0.040^{+0.064}_{-0.072}$ & $0.316^{+0.101}_{-0.292}$ & $69.52\pm1.15$ & $10.89\pm0.25$ & $0.294\pm0.044$ & $1.685\pm0.055$ & $0.292^{+0.023}_{-0.030}$ & -- & -- & -- & $1.128\pm0.087$ & $50.20\pm0.24$ & $0.413^{+0.027}_{-0.033}$ \\
& HzBSNQHMCA\tnote{g} & $0.0303^{+0.0038}_{-0.0041}$ & $0.1068\pm0.0127$ & $0.283\pm0.019$ & $-0.021^{+0.067}_{-0.074}$ & $0.468^{+0.200}_{-0.292}$ & $69.76\pm1.25$ & $10.78\pm0.26$ & $0.294\pm0.044$ & $1.692\pm0.054$ & $0.288^{+0.023}_{-0.029}$ & $0.442\pm0.039$ & $1.035^{+0.074}_{-0.063}$ & $0.303^{+0.035}_{-0.052}$ & $1.191\pm0.084$ & $50.01\pm0.24$ & $0.392^{+0.025}_{-0.031}$ \\%$<0.922$
\bottomrule
\end{tabular}
\begin{tablenotes}[flushleft]
\item [a] $w_{\rm X}$\ corresponds to flat/non-flat XCDM and $\alpha$ corresponds to flat/non-flat $\phi$CDM.
\item [b] $\rm{km \ s^{-1} \ Mpc^{-1}}$.
\item [c] This is the 1$\sigma$ limit. The $2\sigma$ limit is set by the prior, and is not shown here.
\item [d] QSO-AS + H\,\textsc{ii}G\ + Mg\,\textsc{ii}\ + C\,\textsc{iv}\ + A118.
\item [e] $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}.
\item [f] Old $H(z)$ + Old BAO + SNP + SND + QSO-AS + H\,\textsc{ii}G\ + Mg\,\textsc{ii}\ + A118.
\item [g] $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ + QSO-AS + H\,\textsc{ii}G\ + Mg\,\textsc{ii}\ + C\,\textsc{iv}\ + A118.
\end{tablenotes}
\end{threeparttable}%
}
\end{table*}
\begin{figure*}
\centering
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{FLCDM_Comp6.pdf}}
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{NFLCDM_Comp6.pdf}}\\
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{FXCDM_Comp6.pdf}}
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{NFXCDM_Comp6.pdf}}\\
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{FPCDM_Comp6.pdf}}
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{NFPCDM_Comp6.pdf}}\\
\caption{Same as Fig.\ \ref{fig1} but for QSO-AS + H\,\textsc{ii}G\ (blue), Mg\,\textsc{ii}\ + C\,\textsc{iv}\ (green), A118 (gray), and QSO-AS + H\,\textsc{ii}G\ + Mg\,\textsc{ii}\ + C\,\textsc{iv}\ + A118 (red) data. The black dashed zero-acceleration lines, computed for the third cosmological parameter set to the $H(z)$ + BAO data best-fitting values listed in Table \ref{tab:BFP} in panels (d) and (f), divide the parameter space into regions associated with currently-accelerating (below left) and currently-decelerating (above right) cosmological expansion.}
\label{fig6}
\end{figure*}
Constraints from QSO-AS + H\,\textsc{ii}G, Mg\,\textsc{ii}\ + C\,\textsc{iv}, and A118 data are listed in Tables \ref{tab:BFPs} and \ref{tab:1d_BFPs} and their confidence regions are shown in Fig.\ \ref{fig6}. Note that A118 results here are corrected with respect to the ones reported in Ref.\ \cite{CaoDainottiRatra2022} and used in Ref.\ \cite{CaoRatra2022}. Since the changes are insignificant we do not discuss these results in detail here. QSO-AS + H\,\textsc{ii}G, Mg\,\textsc{ii}\ + C\,\textsc{iv}, and A118 data constraints are mutually consistent, so these data can be used together to constrain cosmological parameters, as discussed next.
Joint QSO-AS + H\,\textsc{ii}G\ + Mg\,\textsc{ii}\ + C\,\textsc{iv}\ + A118 (QHMCA) data constraints on $\Omega_{m0}$\ range from $0.175^{+0.074}_{-0.080}$ (flat $\phi$CDM) to $0.313\pm0.045$ (flat XCDM), with a difference of $1.6\sigma$.
QHMCA data constraints on $H_0$ range from $70.63\pm1.85$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ (non-flat $\phi$CDM) to $73.51^{+2.15}_{-2.52}$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ (flat XCDM), with a difference of $0.92\sigma$. These $H_0$ values are $0.78\sigma$ (non-flat $\phi$CDM) and $1.5\sigma$ (flat XCDM) higher than the median statistics estimate of $H_0=68\pm2.8$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ \citep{chenratmed} (also see Refs.\ \citep{gott_etal_2001, Calabreseetal2012}), $0.33\sigma$ (non-flat $\phi$CDM) and $1.2\sigma$ (flat XCDM) higher than the TRGB estimate of $H_0=69.8\pm1.7$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ \citep{Freedman2021}, and $1.1\sigma$ (non-flat $\phi$CDM) lower and $0.17\sigma$ (flat XCDM) higher than the SN Ia and Cepheids measurement of $73.04\pm1.04$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ \cite{Riessetal2022}. The $H_0$ constraint from flat $\Lambda$CDM\ here, $71.43\pm1.73$ $\rm{km \ s^{-1} \ Mpc^{-1}}$, is $2.2\sigma$ higher than the $H_0$ estimate of $67.36 \pm 0.54$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ from \textit{Planck} 2018 TT,TE,EE+lowE+lensing CMB anisotropy data \cite{planck2018b}.
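For concreteness, the differences quoted above in units of $\sigma$ can be reproduced by combining the two error bars in quadrature (for asymmetric error bars, we take the bar on the side facing the other measurement); a minimal sketch, with the function name ours:
\begin{verbatim}
import math

def tension_sigma(x1, err1, x2, err2):
    """Difference between two measurements in units of the
    quadrature-combined uncertainty."""
    return abs(x1 - x2) / math.sqrt(err1 ** 2 + err2 ** 2)

# flat LCDM QHMCA H0 versus Planck 2018 H0 (km/s/Mpc)
print(tension_sigma(71.43, 1.73, 67.36, 0.54))  # ~2.2
\end{verbatim}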
QHMCA data constraints on $\Omega_{k0}$\ are $-0.157^{+0.109}_{-0.211}$, $0.039^{+0.219}_{-0.229}$, and $0.018^{+0.083}_{-0.232}$ for non-flat $\Lambda$CDM, XCDM, and $\phi$CDM, respectively. The non-flat $\Lambda$CDM\ result favors closed spatial geometry, being $1.4\sigma$ away from flat, while open spatial geometry is favored in both the non-flat XCDM and non-flat $\phi$CDM\ models, but with flatness well within 1$\sigma$.
QHMCA data indicate a preference for dark energy dynamics. In particular, the central values of the XCDM equation of state parameter, $w_{\rm X}$, are found to be $1.1\sigma$ and $0.83\sigma$ lower than $-1$ for the flat and non-flat parametrizations, respectively. The $2\sigma$ upper limits on $\alpha$ in both the flat and non-flat $\phi$CDM\ models suggest that $\alpha=0$ lies within $1\sigma$.
QSO-AS + H\,\textsc{ii}G\ + Mg\,\textsc{ii}\ + C\,\textsc{iv}\ + A118 data constraints derived here are very consistent with the QSO-AS + H\,\textsc{ii}G\ + Mg\,\textsc{ii}\ + A118 data constraints derived in Ref.\ \cite{CaoRatra2022}.
\subsection{Constraints from $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ + QSO-AS + H\,\textsc{ii}G\ + Mg\,\textsc{ii}\ + C\,\textsc{iv}\ + A118 and Old $H(z)$ + Old BAO + SNP + SND + QSO-AS + H\,\textsc{ii}G\ + Mg\,\textsc{ii}\ + A118 data}
\label{subsec:comp4}
\begin{figure*}
\centering
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{FLCDM_Comp8.pdf}}
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{NFLCDM_Comp8.pdf}}\\
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{FXCDM_Comp8.pdf}}
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{NFXCDM_Comp8.pdf}}\\
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{FPCDM_Comp8.pdf}}
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{NFPCDM_Comp8.pdf}}\\
\caption{Same as Fig.\ \ref{fig1} but for QSO-AS + H\,\textsc{ii}G\ + Mg\,\textsc{ii}\ + C\,\textsc{iv}\ + A118 (green), $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ (red), and $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ + QSO-AS + H\,\textsc{ii}G\ + Mg\,\textsc{ii}\ + C\,\textsc{iv}\ + A118 (blue) data. The black dashed zero-acceleration lines in panels (b)--(f), computed for the third cosmological parameter set to the $H(z)$ + BAO data best-fitting values listed in Table \ref{tab:BFP} in panels (d) and (f), divide the parameter space into regions associated with currently-accelerating (below or below left) and currently-decelerating (above or above right) cosmological expansion.}
\label{fig7}
\end{figure*}
\begin{figure*}
\centering
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{FLCDM_Comp9.pdf}}
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{NFLCDM_Comp9.pdf}}\\
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{FXCDM_Comp9.pdf}}
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{NFXCDM_Comp9.pdf}}\\
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{FPCDM_Comp9.pdf}}
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{NFPCDM_Comp9.pdf}}\\
\caption{Same as Fig.\ \ref{fig7} but including non-cosmological parameters.}
\label{fig8}
\end{figure*}
\begin{figure*}
\centering
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{FLCDM_Comp0.pdf}}
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{NFLCDM_Comp0.pdf}}\\
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{FXCDM_Comp0.pdf}}
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{NFXCDM_Comp0.pdf}}\\
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{FPCDM_Comp0.pdf}}
\subfloat[]{%
\includegraphics[width=0.45\textwidth,height=0.35\textwidth]{NFPCDM_Comp0.pdf}}\\
\caption{Same as Fig.\ \ref{fig1} but for Old $H(z)$ + Old BAO + SNP + SND + QSO-AS + H\,\textsc{ii}G\ + Mg\,\textsc{ii}\ + A118 (red) and $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ + QSO-AS + H\,\textsc{ii}G\ + Mg\,\textsc{ii}\ + C\,\textsc{iv}\ + A118 (blue) data.}
\label{fig9}
\end{figure*}
For $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ + QSO-AS + H\,\textsc{ii}G\ + Mg\,\textsc{ii}\ + C\,\textsc{iv}\ + A118 (HzBSNQHMCA) data and Old $H(z)$ + Old BAO + SNP + SND + QSO-AS + H\,\textsc{ii}G\ + Mg\,\textsc{ii}\ + A118 (OHzBSNQHMA) data, the best-fitting parameter values, likelihood values, and information criteria values for all models are given in Table \ref{tab:BFPs} and the marginalized posterior mean parameter values and uncertainties for all models are listed in Table \ref{tab:1d_BFPs}. The OHzBSNQHMA data results are from Ref.\ \cite{CaoRatra2022} and are used here to compare to HzBSNQHMCA data constraints. Note that there is a typo in table 5 of Ref.\ \cite{CaoRatra2022}, where in the flat XCDM OHzBSNQHMA case, $\Omega_{c}h^2$\ should be $0.1147^{+0.0098}_{-0.0097}$ instead of $0.1449^{+0.0098}_{-0.0097}$.
Figures \ref{fig7} and \ref{fig8} show that $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ and QSO-AS + H\,\textsc{ii}G\ + Mg\,\textsc{ii}\ + C\,\textsc{iv}\ + A118 data constraints are mutually consistent so these data can be used together to more restrictively constrain cosmological parameter values. Figure \ref{fig9} compares the probability distributions and confidence regions of cosmological parameters, obtained from HzBSNQHMCA and OHzBSNQHMA data.
HzBSNQHMCA data constraints on $\Omega_{m0}$\ range from $0.281\pm0.017$ (flat $\phi$CDM) to $0.308\pm0.012$ (flat $\Lambda$CDM), with a difference of $1.3\sigma$. This difference is somewhat larger than the $0.73\sigma$ difference for the OHzBSNQHMA data set \cite{CaoRatra2022}.
HzBSNQHMCA data constraints on $\Omega_{b}h^2$\ range from $0.0250\pm0.0021$ (flat $\Lambda$CDM) to $0.0307^{+0.0032}_{-0.0038}$ (flat $\phi$CDM), with a difference of $1.3\sigma$. The constraints on $\Omega_{c}h^2$\ range from $0.1057^{+0.0118}_{-0.0107}$ (flat $\phi$CDM) to $0.1260\pm0.0064$ (flat $\Lambda$CDM), with a difference of $1.5\sigma$.
HzBSNQHMCA data constraints on $H_0$ range from $69.76\pm1.25$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ (non-flat $\phi$CDM) to $70.13\pm1.25$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ (flat $\Lambda$CDM), with a difference of $0.21\sigma$. This difference is slightly smaller than the $0.23\sigma$ difference for the OHzBSNQHMA data set \cite{CaoRatra2022}. These $H_0$ values are $0.57\sigma$ (non-flat $\phi$CDM) and $0.69\sigma$ (flat $\Lambda$CDM) higher than the median statistics estimate of $H_0=68\pm2.8$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ \citep{chenratmed}, $0.02\sigma$ (non-flat $\phi$CDM) lower and $0.16\sigma$ (flat $\Lambda$CDM) higher than the TRGB estimate of $H_0=69.8\pm1.7$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ \citep{Freedman2021}, and $2.0\sigma$ (non-flat $\phi$CDM) and $1.8\sigma$ (flat $\Lambda$CDM) lower than the SN Ia and Cepheids measurement of $73.04\pm1.04$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ \cite{Riessetal2022}. The $H_0$ constraint from flat $\Lambda$CDM\ is $2.0\sigma$ higher than the $H_0$ estimate of $67.36 \pm 0.54$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ from \textit{Planck} 2018 TT,TE,EE+lowE+lensing CMB anisotropy data \cite{planck2018b}.
HzBSNQHMCA data constraints on $\Omega_{k0}$\ are $0.074\pm0.056$, $-0.004\pm0.078$, and $-0.021^{+0.067}_{-0.074}$ for non-flat $\Lambda$CDM, XCDM, and $\phi$CDM, respectively. Non-flat $\Lambda$CDM\ favors open spatial geometry, being $1.3\sigma$ away from flat, while closed spatial geometry is favored by non-flat XCDM and non-flat $\phi$CDM, with flatness within 1$\sigma$. It is interesting that for the non-CMB data compilation of Ref.\ \cite{deCruzPerezetal2022} the two non-flat $\Lambda$CDM\ models, with two different primordial power spectra, favor closed geometry at $\sim 0.7\sigma$, but that non-CMB data compilation includes growth factor measurements and so those constraints also depend on the primordial power spectrum assumed, unlike the constraints we derive here.
HzBSNQHMCA data indicate a strong preference for dark energy dynamics. For flat (non-flat) XCDM ($1\sigma$ and $2\sigma$), $w_{\rm X}=-0.895\pm0.051{}^{+0.099}_{-0.105}$ ($w_{\rm X}=-0.897^{+0.075}_{-0.055}{}^{+0.127}_{-0.139}$), with central values being $2.0\sigma$ ($1.6\sigma$) higher than $w_{\rm X}=-1$ ($\Lambda$CDM). For flat (non-flat) $\phi$CDM\ ($1\sigma$ and $2\sigma$), $\alpha=0.423^{+0.168}_{-0.246}{}^{+0.405}_{-0.393}$ ($\alpha=0.468^{+0.200}_{-0.292}{}^{+0.454}$), with central values being $>2\sigma$ ($1.6\sigma$) away from $\alpha=0$ ($\Lambda$CDM).
HzBSNQHMCA data constraints are in good agreement with OHzBSNQHMA data constraints. Specifically, the $\Omega_{b}h^2$\ constraints from the former are between $0.21\sigma$ lower (flat $\Lambda$CDM) and $0.52\sigma$ higher (flat XCDM); the $\Omega_{c}h^2$\ constraints from the former are between $0.32\sigma$ lower (non-flat $\phi$CDM) and $0.67\sigma$ higher (flat $\Lambda$CDM); the $\Omega_{m0}$\ constraints from the former are between $0.30\sigma$ lower (non-flat $\phi$CDM) and $0.47\sigma$ higher (flat $\Lambda$CDM); and the $H_0$ constraints from the former are between $0.093\sigma$ higher (non-flat XCDM) and $0.17\sigma$ higher (flat $\phi$CDM).
In non-flat $\Lambda$CDM, XCDM, and $\phi$CDM, the $\Omega_{k0}$\ constraints from HzBSNQHMCA data are $0.69\sigma$, $0.046\sigma$, and $0.19\sigma$ higher than those from OHzBSNQHMA data, respectively. In both data sets, open spatial geometry is favored by non-flat $\Lambda$CDM, and closed spatial geometry is favored by non-flat XCDM and non-flat $\phi$CDM.
In flat and non-flat XCDM and $\phi$CDM, the $w_{\rm X}$\ and $\alpha$ constraints from HzBSNQHMCA data are $0.82\sigma$, $0.59\sigma$, $0.68\sigma$, and $0.49\sigma$ higher than those from OHzBSNQHMA data, respectively. The former, new, data indicate stronger evidence for dark energy dynamics than do the latter, old data of Ref.\ \cite{CaoRatra2022}.
Overall, the changes in constraints found here, using updated and expanded data and improved analyses, compared to those found in Ref.\ \cite{CaoRatra2022}, are not large. This supports the expectation that the constraints we have derived here are reasonably reliable. We note, however, some trends from the above numerical values of the differences, and from Fig.\ \ref{fig9}: in all six models the new HzBSNQHMCA data results favor slightly larger values of $H_0$; in the four dynamical dark energy models they favor slightly more dark energy dynamics; and in five of the six models they favor slightly larger $\Omega_{b}h^2$\ and slightly smaller $\Omega_{c}h^2$\ and $\Omega_{m0}$\ values, the exception being the flat $\Lambda$CDM\ model, which shows the opposite behavior.
\subsection{Model Comparison}
\label{subsec:comp}
From the AIC, BIC, and DIC values listed in Tables \ref{tab:BFP} and \ref{tab:BFPs},
we find the following results:
\begin{itemize}
\item {\bf AIC.} Flat $\Lambda$CDM\ is favored the most by $H(z)$, SNP\!\raisebox{0.2ex}{+}, A118, and QHMCA data; flat $\phi$CDM\ is favored the most by $H(z)$ + BAO and $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ data; non-flat XCDM is favored the most by QSO-AS + H\,\textsc{ii}G\ and Mg\,\textsc{ii}\ + C\,\textsc{iv}\ data; and flat XCDM is favored the most by HzBSNQHMCA data.
The evidence against the rest of the models/parametrizations is either only weak or positive, except for the Mg\,\textsc{ii}\ + C\,\textsc{iv}\ data, where the evidence against flat XCDM is positive, against non-flat $\Lambda$CDM\ is strong, and against the other models/parametrizations is very strong.
\item {\bf BIC.} $H(z)$ + BAO data favor flat $\phi$CDM\ the most, Mg\,\textsc{ii}\ + C\,\textsc{iv}\ data favor non-flat XCDM the most, and the other data combinations favor flat $\Lambda$CDM\ the most.
$H(z)$ + BAO and $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ data provide only weak or positive evidence against other models/parametrizations.
$H(z)$, QSO-AS + H\,\textsc{ii}G, and A118 data provide positive or strong (non-flat XCDM and non-flat $\phi$CDM) evidence against other models/parametrizations.
Mg\,\textsc{ii}\ + C\,\textsc{iv}\ data provide positive evidence against non-flat $\Lambda$CDM\ and flat XCDM, strong evidence against flat $\Lambda$CDM, and very strong evidence against flat and non-flat $\phi$CDM.
SNP\!\raisebox{0.2ex}{+}\ data provide strong or very strong (non-flat XCDM and non-flat $\phi$CDM) evidence against other models/parametrizations.
QHMCA data provide positive evidence against non-flat $\Lambda$CDM\ and flat XCDM, strong evidence against flat $\phi$CDM, and very strong evidence against non-flat XCDM and non-flat $\phi$CDM.
HzBSNQHMCA data provide positive or very strong (non-flat XCDM and non-flat $\phi$CDM) evidence against other models/parametrizations.
\item {\bf DIC.} Mg\,\textsc{ii}\ + C\,\textsc{iv}\ data favor flat XCDM the most, $H(z)$ + BAO, $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}, A118, and HzBSNQHMCA data favor flat $\phi$CDM\ the most, and the other data combinations favor flat $\Lambda$CDM\ the most.
There is strong evidence against non-flat XCDM and non-flat $\phi$CDM\ from QSO-AS + H\,\textsc{ii}G\ data, strong evidence against non-flat $\phi$CDM\ from QHMCA data, and weak or positive evidence against the others from the remaining data sets.
\end{itemize}
Based on the more reliable DIC values, we conclude that HzBSNQHMCA data do not provide strong evidence against any of the considered cosmological models and parametrizations.
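For reference, the qualitative categories above follow from information criterion differences taken with respect to the most favored model; a minimal sketch, assuming the conventional thresholds of $\Delta\mathrm{IC}=2$, $6$, and $10$ separating weak, positive, strong, and very strong evidence (the numerical values below are purely illustrative, not taken from our tables):
\begin{verbatim}
def delta_ic(ic):
    """IC differences relative to the most favored (minimum-IC) model."""
    best = min(ic.values())
    return {model: value - best for model, value in ic.items()}

def evidence_against(d):
    """Qualitative evidence against a model from its Delta IC."""
    if d <= 2:
        return "weak"
    if d <= 6:
        return "positive"
    if d <= 10:
        return "strong"
    return "very strong"

# purely illustrative DIC values for six models
dic = {"flat LCDM": 100.0, "non-flat LCDM": 101.2, "flat XCDM": 98.5,
       "non-flat XCDM": 99.9, "flat phiCDM": 96.9, "non-flat phiCDM": 98.3}
for model, d in delta_ic(dic).items():
    print(model, round(d, 2), evidence_against(d))
\end{verbatim}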
\section{Conclusion}
\label{sec:conclusion}
We have used a large compilation of available lower-redshift, non-CMB and non-distance-ladder, expansion-rate data sets to derive cosmological constraints. By analyzing 32 $H(z)$, 12 BAO, 1590 Pantheon\!\raisebox{0.2ex}{+}\ SN Ia (SNP\!\raisebox{0.2ex}{+}), 120 QSO-AS, 181 H\,\textsc{ii}G, 78 Mg\,\textsc{ii} QSO, 38 C\,\textsc{iv} QSO, and 118 A118 GRB measurements, we find that the results from the individual data sets are mutually consistent and so these data can be jointly used to study cosmological models. Additionally, we compare these new data set constraints to their older counterparts and do not find large differences.
The $H(z)$ + BAO + SNP\!\raisebox{0.2ex}{+}\ + QSO-AS + H\,\textsc{ii}G\ + Mg\,\textsc{ii}\ + C\,\textsc{iv}\ + A118 (HzBSNQHMCA) data combination results in a fairly precise summary value of $\Omega_{m0}=0.288\pm0.017$ (very similar to the $\Omega_{m0}=0.295\pm0.017$ we found in Ref.\ \cite{CaoRatra2022}), which is in agreement with many recent measurements, and summary values of $\Omega_{b}h^2=0.0294\pm0.0036$ and $\Omega_{c}h^2=0.1107\pm0.0113$. Our summary value of the Hubble constant, $H_0=69.8\pm1.3$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ (very similar to the $H_0=69.7\pm1.2$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ we found in Ref.\ \cite{CaoRatra2022}), is more in line with the values reported in Refs.\ \cite{Freedman2021} and \cite{chenratmed} than with the result of Ref.\ \cite{Riessetal2022}. Specifically, our summary central value of $H_0$ matches that of Ref.\ \cite{Freedman2021}, and is $0.58\sigma$ higher and $1.9\sigma$ lower than the values reported in Refs.\ \cite{chenratmed} and \cite{Riessetal2022}, respectively. Similar to the approach outlined in Refs.\ \cite{CaoRyanRatra2021, CaoRyanRatra2022, CaoRatra2022}, the summary central value is the average of the two central of the six model mean values, while the uncertainties are the quadrature sum of the systematic uncertainty, defined as half of the difference between the two central mean values, and the statistical uncertainty, defined as the average of the error bars of the two central results. We note that from model to model the $\Omega_{m0}$\ and $\Omega_{b}h^2$\ values range over $1.3\sigma$, while those of $\Omega_{c}h^2$\ range over $1.5\sigma$, unlike the $H_0$ values which range over only $0.21\sigma$.
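A minimal sketch of this summary-value construction (assuming that the ``two central mean values'' are the two middle values of the six ordered model means, and that asymmetric error bars are first symmetrized):
\begin{verbatim}
import math

def summary_value(means, errors):
    """Average of the two central (middle) model means, with a systematic
    uncertainty of half their difference added in quadrature to the
    average of their statistical error bars."""
    order = sorted(range(len(means)), key=lambda i: means[i])
    i, j = order[len(means) // 2 - 1], order[len(means) // 2]
    central = 0.5 * (means[i] + means[j])
    sys_err = 0.5 * abs(means[i] - means[j])
    stat_err = 0.5 * (errors[i] + errors[j])
    return central, math.sqrt(sys_err ** 2 + stat_err ** 2)

# H0 from the six models for HzBSNQHMCA data (km/s/Mpc)
means = [70.13, 70.02, 69.84, 69.81, 69.79, 69.76]
errors = [1.25, 1.25, 1.26, 1.26, 1.26, 1.25]
print(summary_value(means, errors))  # ~ (69.8, 1.3)
\end{verbatim}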
Our flat $\Lambda$CDM\ HzBSNQHMCA constraints, $\Omega_{b}h^2=0.0250\pm0.0021$, $\Omega_{c}h^2=0.1260\pm0.0064$, $H_0=70.13\pm1.25$ $\rm{km \ s^{-1} \ Mpc^{-1}}$, and $\Omega_{m0}=0.308\pm0.012$, are $1.2\sigma$, $0.92\sigma$, and $2.0\sigma$ higher, and $0.52\sigma$ lower than those from \textit{Planck} TT,TE,EE+lowE+lensing CMB anisotropy data, $\Omega_{b}h^2=0.02237\pm0.00015$, $\Omega_{c}h^2=0.1200\pm0.0012$, $H_0=67.36\pm0.54$ $\rm{km \ s^{-1} \ Mpc^{-1}}$, and $\Omega_{m0}=0.3153\pm0.0073$, \citep{planck2018b}, where our uncertainties are 14, 5.3, 2.3, and 1.6 times larger than those from \textit{Planck} data, respectively. Our summary values of $\Omega_{b}h^2$, $\Omega_{c}h^2$, $H_0$, and $\Omega_{m0}$, are $2.0\sigma$ higher, $0.82\sigma$ lower, $1.7\sigma$ higher, and $1.5\sigma$ lower than those from flat $\Lambda$CDM\ \textit{Planck} data, where our summary values uncertainties are 24, 9.4, 2.4, and 2.3 times larger than those from \textit{Planck} data, respectively.
Our estimated error bar for $H_0$ is slightly larger than that of Ref.\ \cite{Riessetal2022}, but is still much (2.4 times) larger than the error bar from the flat $\Lambda$CDM\ model \textit{Planck} value \citep{planck2018b}. Our measured summary value for $H_0 = 69.8 \pm 1.3$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ falls between the results from the flat $\Lambda$CDM\ model \textit{Planck} value, \citep{planck2018b}, and the local expansion rate measurement of Ref.\ \cite{Riessetal2022}, differing by about 2$\sigma$ from both. It is reasonably consistent with the slightly less-constraining flat $\Lambda$CDM\ model Atacama Cosmology Telescope (ACT) and South Pole Telescope (SPT) CMB anisotropy values, $H_0 = 67.9 \pm 1.5$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ and $H_0 = 68.8 \pm 1.5$ $\rm{km \ s^{-1} \ Mpc^{-1}}$, \citep{ACT:2020gnv, SPT-3G:2021eoc}, respectively. Our measured summary value of $H_0$ also agrees well with the slightly less-constraining TRGB measurement of Ref.\ \cite{Freedman2021}. These agreements might mean that $H_0=69.8\pm1.3$ $\rm{km \ s^{-1} \ Mpc^{-1}}$\ is the current most reasonable value for the Hubble constant.
HzBSNQHMCA data show at most mild evidence for non-flat geometry (the strongest being 1.3$\sigma$ evidence for open geometry in the non-flat $\Lambda$CDM\ model), but indicate more significant evidence for dark energy dynamics, from $1.6\sigma$ in the non-flat models to $2\sigma$ or larger in the flat dynamical dark energy models.
The DIC analysis shows that the HzBSNQHMCA data combination supports flat $\phi$CDM\ the most, but it does not provide strong evidence against models with constant dark energy or a small spatial curvature energy density (the evidence against them is either weak or positive).
We look forward to a future where the quality and quantity of lower-redshift, non-CMB and non-distance-ladder, expansion-rate data, such as those utilized in this study, are significantly improved to the level where they can measure cosmological parameter values with error bars comparable to those obtained from \textit{Planck} CMB anisotropy data.
\begin{acknowledgments}
We thank Dillon Brout and Javier de Cruz P\'{e}rez for useful discussions about Pantheon\!\raisebox{0.2ex}{+}\ data and thank Michele Moresco and Adam Riess for encouraging us to account for $H(z)$ correlations. This research was supported in part by DOE grant DE-SC0011840. The computations for this project were performed on the Beocat Research Cluster at Kansas State University, which is funded in part by NSF grants CNS-1006860, EPS-1006860, EPS-0919443, ACI-1440548, CHE-1726332, and NIH P20GM113109.
\end{acknowledgments}
\section{Introduction}
In problems with limited context-specific labeled data, machine learning models often fail to generalize well. Modern machine learning approaches such as transfer learning~\citep{pan2010survey, weiss2016survey}, domain adaptation~\citep{ sun2015survey}, meta-learning~\citep{vilalta2002perspective, vanschoren2019meta,finn2019online, hospedales2021meta}, and continual learning~\citep{van2019three, hadsell2020embracing,de2021continual, vogelstein2022representation} can mitigate this ``small data effect'' but still require a sufficient amount of context-specific data to train a model that can then be updated to the context of interest. These approaches are either ineffective or unavailable for problems where the input signals are highly variable across contexts or where a single model does not have access to a sufficient amount of data due to privacy or resource constraints \citep{muhlhoff2021predictive}. Note that the terms ``context'' and ``task'' can be used interchangeably here. In this paper we deal with the physiological predictive problem -- a problem with both high variability across contexts (e.g., different recording sessions or different people) and privacy constraints (e.g., biometric data is often considered sensitive and identifiable and should not be shared freely across devices).
We define the physiological predictive problem as any setting that uses biometric or physiological data (e.g., EEG, ECG, breathing rate, etc.) or any derivative thereof to make predictions related to the state of a person. The predicted state could be as high level as the person's cognitive load or affect \citep{10.3389/fnhum.2019.00191} or as low level as which hand they are trying to move \citep{10.3389/fnins.2018.00130}. The machine learning models for these problems often use expert-crafted features and a relatively simple model such as a polynomial classifier or regressor \citep{eeg-da}.
The standard cognitive load classification baseline using EEG data, for example, is a linear model trained using the band powers of neuroscientifically relevant frequencies (e.g., the theta band from 4--7 Hz, the alpha band from 8--12 Hz, etc. \citep{kandel2021principles}) as features \citep{akrami2006eeg, 8741944, GUERRERO2021e07258}. Recent systems for mental state classification have explored more complicated feature sets \citep{10.3389/fnhum.2022.930291} with some success. Similarly, popular ECG-based predictive systems use simple statistics related to the inter-peak intervals as features for a linear classifier to predict cognitive state \citep{mcduff2014remote, lee2022mental}. Outperforming these simple systems is more challenging for physiological prediction than in domains such as computer vision and natural language processing because of the high variability and privacy limitations of the input data.
To be more specific, a common approach used in modern machine learning methods is to leverage data from multiple tasks to learn a representation of the data that is effective for every task~\citep{baxter2000model,zhang2018overview}. Once the representation is learned from the set of tasks it can then be applied to a new task and it is assumed that the representation is similarly effective for it. If the variability across the different tasks is high then data from a large set of tasks is necessary to learn a generalizable representation~\citep{baxter2000model}. For physiological predictive systems, there is context variability caused by changes in sensor location or quality, changes in user, and changes in environmental conditions. Hence, the variability across physiological predictive contexts is high and data from a large set of contexts is necessary to learn a transferable representation. Unfortunately, collecting data from a large set of relevant physiological contexts can be prohibitively expensive.
In this paper we propose a class of models based on Fisher's Linear Discriminant (``FLD")~\citep{izenman2013linear, devroye2013probabilistic} that interpolates between i) a model that is a combination of linear models trained on previous source contexts and ii) a linear model trained only on data from a new target context. We study the class of models in a particular generative setting and derive an analytical form of the risk under the 0-1 loss with respect to the target distribution. Using this analytical risk expression, we show that we can obtain an optimal combination of model i) and model ii), that improves generalization on the target context. We compare the performance of the optimal combination to the performance of model i) and model ii) across various generative parameter settings. Finally, we empirically demonstrate that by using the analytical form of the target risk we can improve upon the performance of model i) and model ii) in real-world physiological predictive settings.
\section{Problem setting}
The physiological classification problem is an instance of a more general statistical pattern recognition problem~\citep[Chapter~1]{devroye2013probabilistic}: Given training data $ \{(X_{i}, Y_{i})\}_{i=1}^{n} \in \left(\mathcal{X} \times \{1, \hdots, K\}\right)^{n} $ assumed to be $i.i.d.$ samples from a classification distribution $ \mathcal{P} $, construct a function $ h_{n} $ that takes as input an element of $ \mc{X} $ and outputs an element of $ \{1, \hdots, K\} $ such that the expected loss of $ h_n $ with respect to $ \mc{P} $ is small. With a sufficient amount of data, there exist classifiers $ h_{n} $ that have a statistically minimal expected loss for all $ \mc{P} $. In the physiological prediction problem, however, there is often \textit{not} enough data to adequately train classifiers and we assume, instead, that there is auxiliary data (or derivatives thereof) from different contexts available that can be used to improve the expected loss.
In particular, given $ \{(X_{i}^{(j)}, Y_{i}^{(j)})\}_{i=1}^{n} $ assumed to be $ i.i.d. $ samples from the classification distribution $ \mc{P}^{(j)} $ for $ j \in \{0, \hdots, J\} $, we want to construct a function $ h^{(0)} $ that minimizes the expected loss with respect to the target distribution $ \mc{P}^{(0)} $. We refer to the other classification distributions, $ \mc{P}^{(1)}, \hdots, \mc{P}^{(J)} $, as source distributions. Variations of this setup are often called the transfer learning problem or the domain adaptation problem and are, again, not unique to physiological prediction. Generally, for the classifier $ h^{(0)} $ to improve upon the single-task classifier $ h_{n} $, the source distributions need to be related to the target distribution such that the information learned when constructing the mappings from the input space to the label space in the context of the source distributions $ \mc{P}^{(1)}, \hdots, \mc{P}^{(J)} $ can be ``transferred'' or ``adapted'' to the context of the target distribution $ \mc{P}^{(0)} $~\citep{baxter2000model}.
The physiological prediction problem imposes one additional constraint compared to the standard setup: the data from one individual must remain private to the individual \citep{muhlhoff2021predictive}. The method we propose in Section \ref{sec:generative-model} addresses this constraint by leveraging the parametric model that relates the source and target classifiers.
\section{A Generative Model}
\label{sec:generative-model}
\begin{figure}[t!]
\centering
\includegraphics[width=0.5\linewidth]{figures/illustrative-figure/illustrative-figure-tfld-3d-no-axes.PNG}
\caption{A geometric illustration of the generative assumptions, information constraints, and the model class under study. The red dots represent the data from the target distribution, the red arrow represents an estimate of the projection vector for the target distribution, the grey arrows represent source projection vectors, the blue arrow represents the average-source projection vector, and the green line interpolating between the blue and red arrows represents the possible end points of a convex combination of the red and blue arrows.}
\label{fig:illustrative-figure}
\end{figure}
Our goal is to develop a private (in some sense, see Section~\ref{subsec:privacy}) and optimal classifier that can leverage information from data drawn from the target and source distributions. For this purpose, we first put relatively strict generative assumptions on the data and then make the relationship between the target distribution and the source distributions explicit.
In particular, we assume that each distribution $ \mc{P}^{(j)} $ is a binary classification distribution that can be described as follows:
\begin{equation}
\label{eq:model-assumptions}
\mc{P}^{(j)} = \pi^{(j)} \;\mathcal{N}\left(\nu^{(j)}, \Sigma^{(j)}\right) + (1 - \pi^{(j)}) \;\mathcal{N}\left((-1)\nu^{(j)}, \Sigma^{(j)}\right).
\end{equation} That is, $ \mc{P}^{(j)} $ is a mixture of two Gaussians such that the midpoint of the class conditional means is the origin and that the class conditional covariance structures are equivalent. Note that $\mc{P}^{(j)}$ is uniquely parameterized by $\nu^{(j)}, \Sigma^{(j)},$ and $\pi^{(j)}$. It is known that Fisher's Linear Discriminant function is optimal under 0-1 loss for distributions of the form described in Eq. \eqref{eq:model-assumptions}~\citep[Chapter~4]{devroye2013probabilistic}. Recall that Fisher's Linear Discriminant function is the classifier defined as
\begin{align}
\label{eq:fld}
h_{FLD}(x) = \mathbb{1}\left\{\omega^{\top} x > c\right\},
\end{align} where $ \mathbb{1}\{s\} $ is 1 if $ s $ is true and $ 0 $ otherwise and where
\begin{align*}
\omega = (\Sigma_0 + \Sigma_1)^{-1} (\nu_1 - \nu_0) \quad \quad \text{and} \quad \quad c = \omega^\top \left(\nu_0 + \nu_1\right) + \log \frac{\pi_0}{\pi_1}.
\end{align*}
Here $\nu_i, \Sigma_i$ and $\pi_i$ are the mean, covariance matrix, and prior probability pertaining to class $i \in \{0, 1\}$. For a distribution $\mc{P}$ of the form described in Eq. \eqref{eq:model-assumptions}, and assuming $ \pi = 0.5 $ for simplicity, the above expressions reduce to $\omega = \frac{1}{2}\Sigma^{-1} (\nu_1 - \nu_0)$ and $c = 0$. From the form of Eq. \eqref{eq:fld} it is clear that the projection vector $\omega$ is a sufficient statistic for classification under 0-1 loss with respect to the distribution $ \mc{P}$. To this end, \textbf{we will assume that for the source distributions we only have access to the projection vectors $\omega^{(1)}, \hdots, \omega^{(J)}$}.
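A minimal sketch of estimating the FLD projection vector and applying the rule in Eq. \eqref{eq:fld} under the shared-covariance, equal-prior setting above (function names are ours):
\begin{verbatim}
import numpy as np

def fld_projection(X, y):
    """Estimate w = 0.5 * Sigma^{-1} (nu_1 - nu_0) from labeled data,
    assuming a shared class-conditional covariance and equal priors."""
    X0, X1 = X[y == 0], X[y == 1]
    nu0, nu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sigma = 0.5 * (np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False))
    return 0.5 * np.linalg.solve(Sigma, nu1 - nu0)

def fld_predict(X, w, c=0.0):
    """h(x) = 1{ w^T x > c }; c = 0 under the symmetric-means assumption."""
    return (X @ w > c).astype(int)
\end{verbatim}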
We next make a generative assumption about the source projection vectors $\omega^{(1)},
\hdots, \omega^{(J)} $ to make the relationship between the distributions explicit. Recall that the von Mises-Fisher (vMF) distribution \citep{fisher1993statistical}, denoted by $\mc{V}(\mu, \kappa)$, has realizations on the $ d $-sphere and is completely characterized by a mean direction vector $ \mu \in \mathbb{R}^{d} $ and a concentration parameter $ \kappa \in \mathbb{R}_{\ge 0} $. When the concentration parameter is close to $ 0 $ the vMF distribution is close to a uniform distribution on the $ d$-sphere. When the concentration parameter is large, the vMF distribution resembles a normal distribution with mean $ \mu $ and a scaled isotropic variance proportional to the inverse of $ \kappa $.
For our analysis we assume that the source vectors $ \omega^{(1)}, \hdots, \omega^{(J)} \stackrel{iid}{\sim} \mathcal{V}(\mu, \kappa) $ for unspecified $ \mu $ and $ \kappa $. This assumption forces additional constraints on the relationship between $ \nu^{(j)} $ and $ \Sigma^{(j)} $ in the context of Eq. \eqref{eq:model-assumptions} -- namely that $ || (\Sigma^{(j)})^{-1}\nu^{(j)} ||_{2} = \sqrt{2} $. In our simulated settings below, the generative models adhere to this constraint. In practical applications, we can use the (little) training data that we have access to in order to force our estimates of $ \nu $ and $ \Sigma $ to conform to this constraint.
\subsection{A Simple Class of Classifiers}
With the generative assumptions described above, we propose a class of classifiers $ \mc{H} $ that can effectively leverage the information available with just access to the projection vectors:
\begin{equation*}
\mc{H} := \left\{ h_{\alpha}(x) = \mathbb{1} \left\{ \left( \underbrace{\alpha \omega^{(0)} + (1 - \alpha) \sum_{j=1}^{J} \omega^{(j)}}_{\omega_{\alpha}}\right)^{\top}x > 0\right\} : \alpha \in [0,1] \right\}.
\end{equation*} The set $ \mc{H} $ is exactly the set of classifiers parameterized by convex combinations of the target projection vector and the sum of the source projection vectors. We refer to this convex combination as $ \omega_{\alpha} $. Letting $ \bar{\omega} := \frac{1}{J}\sum_{j=1}^{J} \omega^{(j)} $, we note that $ \omega_{\alpha} $ can be reparametrized in the context of the vMF distribution with the observation that
\begin{align*}
(1-\alpha) \sum_{j=1}^{J} \omega^{(j)} &= \frac{J(1 - \alpha)||\bar{\omega}||}{||\bar{\omega}||} \bar{\omega} \\
&= J(1-\alpha)||\bar{\omega}|| \; \hat{\mu},
\end{align*} where $ \hat{\mu} = \bar{\omega} / ||\bar{\omega} || $ is the maximum likelihood estimate for the mean direction vector of the vMF distribution. By letting $ \alpha \gets \frac{\alpha}{\alpha + J (1-\alpha)||\bar{\omega}||}$
we maintain the same set $ \mc{H} $ but make the individual classifiers more amenable to analysis. Figure \ref{fig:illustrative-figure} illustrates the geometry of $ \mc{H} $ for $ d = 3 $.
We view different decision rules in $ \mc{H} $ as elements along a classical bias--variance trade-off curve parameterized by $ \alpha $ \citep{belkin2019reconciling}. In particular, when the amount of data available from the target distribution is small, the projection induced by an $ \alpha $ value closer to 1 can be interpreted as a high variance, low bias estimate of the target projection vector. Conversely, an $ \alpha $ value of 0 can be interpreted as a low variance, high bias estimate. In situations where the concentration parameter $ \kappa $ is relatively large, for example, we expect to prefer combinations that favor the average-source vector. We discuss this in more detail and in the context of different parameter settings for our generative model in Section \ref{subsec:simulations}.
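A minimal sketch of constructing an element of $ \mc{H} $ from an estimated target projection vector and the source projection vectors, using the normalized source average $ \hat{\mu} $ (names are ours):
\begin{verbatim}
import numpy as np

def omega_alpha(w_target, w_sources, alpha):
    """alpha * w_target + (1 - alpha) * mu_hat, where mu_hat is the
    normalized average of the source projection vectors (the vMF
    maximum-likelihood mean direction)."""
    w_bar = np.mean(w_sources, axis=0)
    mu_hat = w_bar / np.linalg.norm(w_bar)
    return alpha * w_target + (1.0 - alpha) * mu_hat

def h_alpha(X, w_target, w_sources, alpha):
    """Decision rule 1{ omega_alpha^T x > 0 }."""
    return (X @ omega_alpha(w_target, w_sources, alpha) > 0).astype(int)
\end{verbatim}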
\subsection{Approximating optimality}\label{subsec:approximating-optimality}
We define the optimal classifier $ h_{\alpha^{*}} $ in the set $ \mc{H} $ as the classifier that minimizes the expected risk with respect to the target distribution $ \mc{P}^{(0)} $. Given the projection vectors $ \{\omega^{(j)}\}_{j=1}^{J} $ and the target class conditional mean and covariance, $ \nu^{(0)} $ and $ \Sigma^{(0)} $, the classification error (0-1 loss) of a classifier $ h_{\alpha} \in \mc{H} $ is
\begin{equation*}
\label{risk}
\ell \left( h_{\alpha} \mid \{\omega^{(j)}\}_{j=1}^{J}, \nu^{(0)}, \Sigma^{(0)} \right) = \Phi \left( \frac{-\omega_{\alpha}^\top \nu^{(0)}}{\sqrt{\omega_{\alpha}^\top \Sigma^{(0)} \omega_{\alpha}}} \right)
\end{equation*}
for $\mathcal{P}^{(0)} $ of the form described in Eq. \eqref{eq:model-assumptions} and where $ \Phi $ is the cumulative distribution function of the standard normal. The derivation is given in the \cref{sec:app-derivation}. In practice, the source projection vectors, source thresholds, target class conditional mean and covariance structure are all estimated. Based on the above loss, we can define the risk of $h_{\alpha}$ as,
\begin{equation}
\label{eq:risk-integral}
R(h_{\alpha}) = \mbb{E}_{\omega_{\alpha}} \left[ \ell \left(h_{\alpha} \mid \{\omega^{(j)}\}_{j=1}^{J}, \nu^{(0)}, \Sigma^{(0)}\right)\right].
\end{equation}
Unfortunately, despite the strong distributional assumptions we have in place, the expected risk is too complicated to analyze entirely.
Instead, we approximate $ R(h_{\alpha}) $ with $\hat{R}(h_{\alpha})$, which we can evaluate via Monte Carlo integration by sampling from the distribution of $ \omega_{\alpha} $ (derived in Section \ref{subsec:distribution-of-omega-alpha}) and using the plug-in estimates for $ \nu^{(0)} $ and $ \Sigma^{(0)} $.
Using this risk function, we can search for optimal $\alpha^\ast$ that minimizes the expected risk over $\alpha \in [0, 1]$. The procedure for calculating $\alpha^\ast$ is outlined in Algorithm \ref{alg:cap}. For the remainder of this section, we use $ \hat{t} $ to denote an estimate of the parameter $ t $.
\begin{algorithm}[t]
\caption{Calculating the optimal convex coefficient}\label{alg:cap}
\label{alg:optimal}
\begin{algorithmic}[1]
\Require target task class conditional means $\hat \nu_0^{(0)}$ and $\hat \nu_1^{(0)} $, target task class conditional covariance $\hat \Sigma^{(0)} $, normalized source proj. vectors $\{ \hat \omega^{(j)} \}_{j=1}^J$, the number of Monte Carlo samples $ B $.
\State
$ \hat \omega^{(0)} \gets \textproc{Normalize} \left( \frac{1}{2} (\hat \Sigma^{(0)})^{-1} (\hat \nu_1^{(0)} - \hat \nu_0^{(0)}) \right)$
\Comment{Estimate the target proj. vector}
\State
$ \hat{\mu} \gets$ \textproc{Normalize}$\left(\frac{1}{J} \sum_{j=1}^{J} \hat \omega^{(j)} \right)$
\Comment{Estimate vMF mean direction vector}
\State
$ \hat \Psi \gets $ \textproc{Variance}$\left(\hat{\mu}\right) $
\Comment{Variance of vMF mean direction vector}
\State
$ \hat \Sigma_{\omega} \gets $ \textproc{Variance}$\left(\hat \omega^{(0)} \right) $ \Comment{Variance of the target proj. vector (see \cref{subsec:distribution-of-omega-alpha})}
\For{each $\alpha \in [0, 1]$}
\State
$ \hat \omega_{\alpha} \gets \left(\alpha \; \hat \omega^{(0)} + (1-\alpha)\; \hat{\mu} \right)$ \Comment{Average convex combination}
\State
$ \hat \Sigma_{\alpha} \gets \alpha^{2} \hat \Sigma_{\omega} + (1-\alpha)^{2}\hat \Psi $
\Comment{Variance of average convex combination}
\For{each $ b $ in $\{1, .. , B\} $}
\State
$ \omega_{b} \gets \mc{N}\left(\hat \omega_{\alpha}, \hat \Sigma_{\alpha}\right)$
\Comment{Sample from appropriate normal distribution}
\State
$r_{b} \gets \Phi \left( - \frac{\omega_{b}^{\top} \hat \nu^{(0)}}{\sqrt{\omega_{b}^{\top} \hat \Sigma^{(0)} \; \omega_{b}}} \right) $ \Comment{Calculate error for sample}
\EndFor
\State $R\left({\alpha}\right) \gets \frac{1}{B} \sum_{b=1}^{B} r_{b} $ \Comment{Calculate risk}
\EndFor
\State $\alpha^\ast \gets \operatornamewithlimits{argmin}_{\alpha} R(\alpha)$ \Comment{Select optimal alpha}
\end{algorithmic}
\end{algorithm}
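For concreteness, the short Python sketch below mirrors Algorithm \ref{alg:optimal}; it is an illustration under the assumptions above rather than the exact implementation used in our experiments, and the function and variable names (e.g., \texttt{optimal\_alpha}) are ours. The covariance inputs are the plug-in estimates discussed in \cref{subsec:distribution-of-omega-alpha}.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def classification_error(w, nu, Sigma):
    # Phi(-w^T nu / sqrt(w^T Sigma w)), cf. the loss above
    return norm.cdf(-w @ nu / np.sqrt(w @ Sigma @ w))

def optimal_alpha(omega_target, Sigma_omega, mu_hat, Psi, nu, Sigma,
                  alphas=np.linspace(0.0, 1.0, 11), B=100, seed=0):
    """Monte Carlo search for the convex coefficient minimizing the risk."""
    rng = np.random.default_rng(seed)
    risks = []
    for a in alphas:
        w_a = a * omega_target + (1.0 - a) * mu_hat    # mean of omega_alpha
        S_a = a**2 * Sigma_omega + (1.0 - a)**2 * Psi  # covariance of omega_alpha
        W = rng.multivariate_normal(w_a, S_a, size=B)  # B samples of omega_alpha
        risks.append(np.mean([classification_error(w, nu, Sigma) for w in W]))
    return alphas[int(np.argmin(risks))], np.array(risks)
\end{verbatim}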
\subsection{Deriving the asymptotic distribution of $ \hat{\omega}_{\alpha} $}
\label{subsec:distribution-of-omega-alpha}
We are interested in deriving a data-driven method for finding the element of $ \mc{H} $ that performs best on the target task. For this, we rely on the asymptotic distribution of $ \hat{\omega}_{\alpha} = \alpha \hat{\omega}^{(0)} + (1-\alpha) \hat{\mu} $.
First, we consider the estimated target projection vector $\hat \omega^{(0)} = \frac{1}{2} (\hat \Sigma^{(0)})^{-1} (\hat\nu_1^{(0)} - \hat\nu_0^{(0)})$ as a product of the independent random variables, $A := n (\hat \Sigma^{(0)})^{-1}$ and $\tau := \frac{1}{2}(\hat\nu_1^{(0)} - \hat\nu_0^{(0)})$ and note that $A \sim W_{d}(n, \Sigma^{(0)})$ is distributed according to a Wishart distribution with $n$ degrees of freedom and scatter matrix $\Sigma^{(0)}$ and that $\tau \sim \mc{N}_d(\nu^{(0)}, \Sigma^{(0)} / n)$ is normally distributed.
For large $n$, the random vector $nA^{-1}\tau$ has the asymptotic distribution given by,
\[
\sqrt{n} \left( nA^{-1}\tau - (\Sigma^{(0)})^{-1} \nu^{(0)} \right) \stackrel{d}{\longrightarrow} \mc{N}_d(0, \tilde{\Sigma}),
\]
where $\tilde{\Sigma} = (1 + (\nu^{(0)})^\top (\Sigma^{(0)})^{-1} \nu^{(0)}) (\Sigma^{(0)})^{-1} - (\Sigma^{(0)})^{-1} \nu^{(0)} (\nu^{(0)})^\top (\Sigma^{(0)})^{-1} $ \citep{kotsiuba2016asymptotic}. It follows that $\hat \omega^{(0)}$ is asymptotically distributed according to a normal distribution with mean $\omega^{(0)} = (\Sigma^{(0)})^{-1} \nu^{(0)}$ and covariance matrix $\Sigma_{\omega} := \tilde{\Sigma} / n$.
Next, we observe that $\hat \mu$ is the sample mean direction computed from $J$ i.i.d.\ samples drawn from a $\mathcal{V}(\mu, \kappa)$ distribution. For large $J$, $\hat \mu$ is asymptotically normally distributed with mean $\mu$ and covariance $\Psi$ given by
\[
\Psi = \left( \frac{1 - \frac{1}{J} \sum_{i=1}^{J} (\mu^\top \omega^{(i)})^2}{J \| \bar\omega \|} \right)^{1/2} I_d,
\]
where $I_d$ is the $d \times d$ identity matrix \citep{fisher1993statistical}.
Finally, since $\hat \omega^{(0)}$ and $\hat \mu$ are independent and asymptotically normally distributed, for large $n$ and $J$, we have that
\begin{equation}
\label{risk-integral}
\hat{\omega}_{\alpha} \sim \mathcal{N}\left(\underbrace{\alpha \omega^{(0)} + (1-\alpha) \mu}_{\omega_{\alpha}}, \underbrace{\alpha^2 \Sigma_{\omega} + (1-\alpha)^{2} \Psi}_{\Sigma_{\alpha}}\right).
\end{equation}
We use samples from this asymptotic distribution when evaluating the risk function described in Eq. \eqref{eq:risk-integral}.
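The two covariance expressions above translate directly into plug-in estimators. The sketch below is one possible realization (all names are ours) and assumes the source vectors have already been normalized.
\begin{verbatim}
import numpy as np

def sigma_omega(nu_hat, Sigma_hat, n):
    """Plug-in estimate of Cov(omega_hat^(0)) = Sigma_tilde / n."""
    Si = np.linalg.inv(Sigma_hat)
    Si_nu = Si @ nu_hat
    Sigma_tilde = (1.0 + nu_hat @ Si_nu) * Si - np.outer(Si_nu, Si_nu)
    return Sigma_tilde / n

def psi(omegas, mu_hat):
    """Plug-in estimate of the isotropic covariance Psi of mu_hat."""
    J, d = omegas.shape
    omega_bar = omegas.mean(axis=0)
    num = 1.0 - np.mean((omegas @ mu_hat) ** 2)
    return np.sqrt(num / (J * np.linalg.norm(omega_bar))) * np.eye(d)
\end{verbatim}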
\subsection{Simulations}
\label{subsec:simulations}
In this section we study (i) the difference between the true-but-analytically-intractable risk and our proposed approximation under a fixed set of generative model parameters, and (ii) the effect of different generative model parameters on the relative risks of the target classifier, the average-source classifier, and the optimal classifier derived in Section \ref{subsec:approximating-optimality}. For each simulation setting we report the expected accuracy (i.e., 1 minus the expected risk) and the optimal convex coefficient $ \alpha^{*} $.
Let $d$ be the dimensionality of the data. Without loss of generality, we consider a von Mises-Fisher distribution with mean direction $\mu = [1, 0_{d-1}]^\top$ and concentration parameter $\kappa$. In all of our experiments, we fix the mixing coefficient $ \pi^{(j)} = 0.5 $ and the class-conditional covariance $ \Sigma^{(j)} = I_{d} $ for all task distributions. We sample $ \nu^{(0)} $ and $ \{\omega^{(j)}\}_{j=1}^{J} $ from the described vMF distribution and let $ n $ be the number of samples from the target distribution. We assume that the source projection vectors are known. Finally, for each simulation setting we report the mean metric over 1,000 iterations and, hence, the standard error of each estimate is effectively zero.
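For illustration, the sketch below draws one instance of this simulation setting. It assumes a SciPy version ($\geq 1.11$) that provides \texttt{scipy.stats.vonmises\_fisher}; the helper name and the exact bookkeeping are ours.
\begin{verbatim}
import numpy as np
from scipy.stats import vonmises_fisher

def simulate_setting(d=10, J=100, kappa=10.0, n=20, seed=0):
    """One draw of the generative setting used in the simulations."""
    rng = np.random.default_rng(seed)
    mu = np.zeros(d)
    mu[0] = 1.0                                            # vMF mean direction
    vmf = vonmises_fisher(mu, kappa)
    nu0 = np.atleast_2d(vmf.rvs(1, random_state=rng))[0]   # target class-1 mean
    omegas = np.atleast_2d(vmf.rvs(J, random_state=rng))   # source proj. vectors
    # n labelled target samples with pi = 0.5 and identity class covariance
    y = rng.integers(0, 2, size=n)
    X = rng.standard_normal((n, d)) + np.where(y[:, None] == 1, nu0, -nu0)
    return nu0, omegas, X, y
\end{verbatim}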
\subsubsection{Analytical versus empirical risks}
\begin{figure*}[t!]
\centering
\includegraphics[width=\linewidth]{figures/analytical-versus-empirical/analytical_versus_empirical.pdf}
\caption{Comparing the analytical and empirical accuracies and optimal convex coefficient $\alpha^\ast$ for different amounts of target training data and number of source tasks.
}
\label{fig:analytical-versus-empirical}
\end{figure*}
Under the assumptions above, Figure \ref{fig:analytical-versus-empirical} compares the analytical risk derived in Section \ref{subsec:approximating-optimality} to the oracle empirical risk in the described generative setting. In particular, we simulate the generative setting by sampling 1,000 different $ (\nu^{(0)}, \{\nu^{(j)}\}_{j=1}^{J}) $ pairs with $ d = 10, \kappa=10 $ and varying amounts of source tasks $ J $. We assume that the target class covariance is known. For a given pair we sample $ n \in \{10, 20, 50, 100\} $ data from the target distribution and calculate the risk according to Eq. \eqref{eq:risk-integral} for $ \alpha \in \{0, 0.1, 0.2, \hdots, 1.0\} $ using 100 samples from $ \mathcal{N}\left(\mu_{\omega_{\alpha}}, \Sigma_{\omega_{\alpha}}\right) $. The top row of Figure \ref{fig:analytical-versus-empirical} shows these risks for $ \alpha = 0 $ (average-source), $ \alpha = 1 $ (target), and $ \alpha = \alpha^{*} $ (optimal). The top row also includes the empirical risk where we sample 10,000 $ (X, Y) $ pairs from the target distribution and evaluate the three decision functions directly. Here, the optimal vector is the convex combination of the average-source and target vectors that performs best on the test set.
Focusing on a single figure in the top row of Figure \ref{fig:analytical-versus-empirical} we see that the gap between the analytical and empirical risks associated with the target classifier decreases as the number of samples from the target distribution increases. In the early parts of the regime the asymptotic approximation of the variance associated with the target data is not as well suited to estimate the risk as it is in the later parts of the regime. In any case, the optimal classifier is able to outperform the target classifier throughout and the analytical and empirical risks are indistinguishable for large $ n $.
Now looking from the left to the right of Figure \ref{fig:analytical-versus-empirical}, we see that the gap between the analytical and empirical risks associated with the average-source and optimal classifiers decreases as we increase the number of source tasks.
This is because modeling the distribution of the average-source vector as a normal distribution becomes more appropriate as the number of source tasks increases. When simulating this setting for $ J = 1,000 $, for example, the difference between the empirical and analytical risks associated with the average-source task is negligible.
The validity of our approximation as $ n $ and $ J $ grow is apparent when evaluating the differences between the optimal convex coefficients -- for $ J = 10 $ the coefficients are separated over the entire regime, for $ J = 100 $ there is meaningful separation for small $ n $ that disappears for larger $ n $, and for $ J = 1,000 $ the separation quickly becomes small. We take this as evidence that our approximation is appropriate.
\subsubsection{The effect of plug-in estimates, concentration, and dimensionality}
\begin{figure*}[t!]
\centering \includegraphics[width=\linewidth]{figures/effect-of-parameters/effect-of-parameters.pdf}
\caption{Studying the effect of using plug-in estimates (left) and the effect of varying different generative model parameters (center, right) on the expected accuracy of the average-source, target, and optimal classifiers and on the optimal convex coefficient.}
\label{fig:effect-of-parameters}
\end{figure*}
Figure \ref{fig:effect-of-parameters} shows the effect of estimating the covariance structure (left column), the effect of different vMF concentration parameters (middle column), and the effect of dimensionality (right column) on the relative accuracies of the average-source, target, and optimal classifiers and the calculated optimal convex coefficients.
We fix the dimensionality to be $ 10 $, the number of source tasks to be $ 100 $, the vMF concentration parameter to be $ 10 $ and the number of target samples available to be $ 20 $ when appropriate. The classifiers are evaluated using 10,000 samples from the target distribution.
The left column of Figure \ref{fig:effect-of-parameters} illustrates the effect of estimating the target task's class conditional covariance structure $ \Sigma^{(0)} $ and class 1 conditional mean $ \nu^{(0)} $ and using these estimates as plug-ins for their population values when approximating the risk described in Eq. \eqref{eq:risk-integral}.
In particular, we compare the expected accuracy and optimal coefficient when using the plug-in estimates (solid lines) $ \hat{\Sigma}^{(0)} $ and $ \hat{\nu}^{(0)} $ to using the population covariance $ \Sigma^{(0)} $ and $ {\nu}^{(0)} $ (dashed lines).
We note that the difference between the performance of the optimal classifiers in the two paradigms is smaller than the difference between the performance of optimal classifier and the target classifier for small $ n $.
This behavior is expected, as the optimal classifier has access to more information through the average-source projection vector.
Finally, we note that the difference between the two optimal coefficients is smaller at the poles of the regime and larger in the middle. We attribute this to greater uncertainty in the trade-off between the ``high bias, low variance'' average-source classifier and the ``low bias, high variance'' target classifier.
For both the middle and right columns of Figure \ref{fig:effect-of-parameters} we study only the plug-in classifiers.
The middle column of Figure \ref{fig:effect-of-parameters} investigates the effect of the vMF concentration parameter $ \kappa $.
Recall that as $\kappa $ gets larger the expected cosine distance between samples from the vMF distribution gets smaller. This means that the expected cosine distance between the average-source projection vector and the true-but-unknown target projection gets smaller.
Indeed, through the expected accuracies of the average-source and target classifiers we see that the average-source classifier dominates the target classifier in the latter part of the studied regime due to the average-source vector providing good bias.
Notably, the combined classifier is always at least as effective as, and sometimes more effective than, the target classifier, but is slightly less effective than the average-source classifier when $ \kappa $ is large. This, again, is due to the appropriateness of modeling the average-source vector as Gaussian.
The optimal convex coefficient is close to 1 when the vMF distribution is close to the uniform distribution on the unit sphere ($ \kappa $ small) and closer to 0 when the vMF distribution is closer to a point mass ($\kappa $ large).
The right column of Figure \ref{fig:effect-of-parameters} shows the effect of the dimensionality of the classification problem on the expected accuracies and optimal coefficient. The top figure demonstrates that the optimal classifier is always at least as good as, and sometimes better than, both the average-source and target classifiers, with the margin being small when the dimensionality is both small and large. The reason the margin between the accuracies starts small, gets larger, and then becomes small again is likely due to the interplay between the estimation error associated with the covariance structure and the relative concentration of the source vectors. We do not investigate this complicated interplay further. The optimal coefficient gets progressively larger as the dimensionality increases with the exception of a dip at $ d=20 $. We think this dip is due to a regime change in the interplay mentioned previously.
\subsection{Privacy considerations}
\label{subsec:privacy}
As presented in Algorithm \ref{alg:optimal} the process for calculating the optimal convex coefficient $ \alpha^{*} $ requires access to the normalized source projection vectors $ \{\omega^{(j)}\}_{j=1}^{J} $. This requirement can be prohibitive in applications where the data (or derivatives thereof) from a source task are required to stay local to a single device or are otherwise unable to be shared. For example, it is common for researchers to collect data in a lab setting, deploy a similar data collection protocol in a more realistic setting, and to use the in-lab data as a source task and the real-world data as a target task. Depending on the privacy agreements between the researchers and the subjects, it may be impossible to use the source data directly.
The requirements for Algorithm \ref{alg:optimal} can be changed to address these privacy concerns by calculating the average source vector $ \hat{\mu} $ and its corresponding standard error $ \Psi $ in the lab setting and sharing only these two parameters. Indeed, given $ \hat{\mu} $ and $ \Psi $, the algorithm is independent of the normalized source vectors, so these two quantities are all that need to be stored and shared with the devices and systems collecting data from the target task.
\section{Applications to physiological prediction problems}
\begin{figure*}[t!]
\captionsetup[subfigure]{justification=centering}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/matb/matb_cognitive_load_bas_and_alphas.pdf}
\end{subfigure} \\
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/matb/cognitive_load_histograms.pdf}
\label{subfig:matb-histograms}
\end{subfigure}
\caption{Balanced accuracy and relevant convex coefficients (top) and relative performance of the optimal and target classifiers (bottom) for the MATB-II cognitive load classification task.
}
\label{fig:cognitive-load}
\end{figure*}
We next study the proposed class of classifiers in the context of three physiological prediction problems: EEG-based cognitive load classification, EEG-based stress classification, and ECG-based social stress classification. In each setting we have access to data from a target study participant and the projection vectors from other participants. The data for each subject is processed such that the assumptions of Eq. \eqref{eq:model-assumptions} are matched as closely as possible. For example, we use the available training data from the target participant to force the class conditional means to lie on the unit sphere and their midpoint to pass through the origin. Further, we normalize the learned projection vectors so that the assumption that the vectors come from a von Mises-Fisher distribution is sensible.
The descriptions of the cognitive load and stress datasets are altered versions of the descriptions found in Chen et al. \citep{10.3389/fnhum.2022.930291}.
Unless otherwise stated, the balanced accuracy and the convex coefficient corresponding to each method are calculated using 100 different train-test splits for each participant.
Conditioned on the class type, the windowed data used for training consist of consecutive windows.
A grid search in $ \{0, 0.1, 0.2, \hdots, 1.0\} $ was used when calculating convex coefficients.
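The affine standardization described above is not fully specified, so the following sketch should be read as one plausible realization (the function name and the specific choice of scale are ours): the class-mean midpoint is shifted to the origin and the class means are scaled onto the unit sphere using the target training data only.
\begin{verbatim}
import numpy as np

def standardize(X_train, y_train, X_test):
    """Shift the class-mean midpoint to the origin and scale the class
    means onto the unit sphere (one plausible realization)."""
    m1 = X_train[y_train == 1].mean(axis=0)
    m0 = X_train[y_train == 0].mean(axis=0)
    centre = (m1 + m0) / 2.0
    scale = np.linalg.norm(m1 - centre)  # distance of class means from midpoint
    return (X_train - centre) / scale, (X_test - centre) / scale
\end{verbatim}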
\subsection{Cognitive load (EEG)}
\label{subsec:matb}
The first dataset we consider was collected under NASA's Multi-Attribute Task Battery II (MATB-II) protocol. MATB-II is used to understand a pilot's ability to perform under various cognitive load requirements \citep{santiago2011multi} by attempting to induce four different levels of cognitive load -- no (passive), low, medium, and high -- that are a function of how many tasks the participant must actively tend to.
The data includes 50 healthy subjects with normal or corrected-to-normal vision. There were 29 female and 21 male participants and each participant was between the ages of 18 and 39 (mean 25.9, std 5.4 years). Each participant was familiarized with MATB-II and then participated in two sessions containing three segments. The three segments were further divided into blocks with the four different levels of cognitive requirements. The sessions lasted around 50 minutes and were separated by a 10 minute break. We focus our analysis on a per-subject basis, meaning there will be two sessions per subject for a total of 100 different sessions.
The EEG data was recorded using a 24-channel Smarting MOBI device and was processed using high pass (0.5 Hz) and low pass (30 Hz) filters and segmented in ten second, non-overlapping windows. Once the EEG data was windowed we calculated the mass in the frequency domain for the theta (4-8 Hz), alpha (8-12 Hz), and lower beta (12-20 Hz) bands. We then normalized the mass of each band on a per channel basis. In our analysis we consider only the frontal channels \{Fp1, Fp2, F3, F4, F7, F8, Fz, aFz\}. Our choice of channels and bands is an attempt to limit the number of features while maintaining the presence of known cognitive load indicators \citep{article}. The results reported in Figure \ref{fig:cognitive-load} are for this $ (3\times 8) = 24 $-dimensional two class problem \{no \& low cognitive load, medium \& high cognitive load\}.
For a fixed session we randomly sample a continuous proportion of the participant's windowed data $ p \in \{0.05, 0.1, 0.2, 0.5\} $ and also have access to the projection vectors corresponding to all sessions except for the target participant's other session (i.e., we have 100 - 1 - 1 = 98 source projection vectors). As mentioned above, we use the training data to learn a translation and scaling to best match the model assumptions of Section \ref{sec:generative-model}.
The top left figure of Figure \ref{fig:cognitive-load} shows the mean balanced accuracy on the non-sampled windows of four different classifiers: the average-source classifier, the target classifier, the optimal classifier, and the oracle classifier. The average-source, target, and optimal classifiers are as described in Section \ref{subsec:simulations}. The oracle classifier is the convex combination of the average-source and target projection vectors that performs the best on the held out test set. The median balanced accuracy of each classifier is the median (across sessions) calculated from the mean balanced accuracy of 100 different train-test samplings for each session.
The relative behaviors of the average-source, target and optimal classifiers in this experiment are similar to what we observe when varying the amount of target data in the simulations for large $ \kappa $ -- the average-source classifier outperforms the target classifier in small data regimes, the target classifier outperforms the average-source classifier in large data regimes, and the optimal classifier is able to outperform or match the performance of both classifiers throughout the regime. Indeed, in this experiment the empirical value of $ \kappa $ when estimating the projection vectors using all of each session's data is approximately 17.2.
The top right figure of Figure \ref{fig:cognitive-load} shows scatter plots of the convex coefficients for the optimal and oracle methods. Each dot represents the average of 100 coefficients for a particular session for a given proportion of training data from the target task (i.e., one dot per session). The median coefficient is represented by a short line segment. The median coefficient for both the oracle and the optimal classifiers gets closer to 1 as more target data is available. This behavior is intuitive, as we would expect the optimal algorithm to favor the in-distribution data when the estimated variance of the target classifier is ``small''.
The bottom row of Figure \ref{fig:cognitive-load} is the set of histograms of the difference between the optimal classifier's balanced accuracy and the target classifier's balanced accuracy, where each count represents a single session. These histograms give us a better sense of the relative performance of the two classifiers -- a distribution centered around 0 would mean that we have no reason to prefer the optimal classifier over the target classifier, whereas a distribution shifted to the right of 0 would mean that we would prefer the optimal classifier to the target classifier.
For $ p=0.05 $ the optimal classifier outperforms the target classifier for 92 of the 100 sessions with differences as large as 19.2\% and a median absolute accuracy improvement of about 9.3\%.
The story is similarly dramatic for $ p = 0.10 $ with the optimal classifier outperforming the target classifier for 92 of the 100 sessions, a maximum difference of about 19.2\%, and a median difference of 7.8\%.
For $ p = 0.2 $ the distribution of the differences is still shifted to the right of $ 0 $ with a non-trivial median absolute improvement of about 3.7\%, a maximum improvement of 12\%, and an improvement for 81 of the sessions.
For $ p = 0.5 $ the optimal classifier outperforms the target classifier for 76 of the 100 sessions, though the distribution is only slightly shifted to the right of $ 0 $.
The p-values, up to 3 decimal places, from the one-sided Wilcoxon's rank-sum test for the hypothesis that the distribution of the paired differences is symmetric and centered around 0 are $ 0.000, 0.000, 0.000 $, and, $ 0.000 $ for available proportions of target data $ 0.05, 0.1, 0.2 $ and $ 0.5 $, respectively.
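The paired-difference hypothesis described above corresponds to the one-sided paired Wilcoxon test available in SciPy; whether this exact routine was used to produce the reported p-values is our assumption, and the accuracy arrays below are dummy placeholders rather than the real per-session results.
\begin{verbatim}
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
# dummy per-session balanced accuracies standing in for the real results
acc_optimal = 0.75 + 0.05 * rng.standard_normal(100)
acc_target = acc_optimal - rng.uniform(0.0, 0.1, size=100)
stat, p = wilcoxon(acc_optimal - acc_target, alternative="greater")
print(f"one-sided p-value: {p:.3f}")
\end{verbatim}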
\begin{figure*}[t!]
\captionsetup[subfigure]{justification=centering}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/mental-math/mental-math_bas_and_alphas.pdf}
\end{subfigure} \\
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/mental-math/stress_histograms.pdf}
\label{subfig:mm-histograms}
\end{subfigure}
\caption{Balanced accuracy and relevant convex coefficients (top) and relative performance of the optimal and target classifiers on a per-participant basis (bottom) for the Mental Math EEG-based stress classification task.
}
\label{fig:stress}
\end{figure*}
\subsection{Stress from mental math (EEG)}
\label{subsec:mental-math}
In the next study we consider, there are two recordings for each session -- one corresponding to a resting state and one corresponding to a stressed state.
For the resting state, participants counted mentally (i.e., without speaking or moving their fingers) with their eyes closed for three minutes.
For the stressful state, participants were given a four digit number (e.g., 1253) and a two digit number (e.g., 43) and asked to recursively subtract the two digit number from the four digit number for 4 minutes.
This type of mental arithmetic is known to induce stress \citep{noto2005relationship}.
There were initially 66 participants (47 women and 19 men) of matched age in the study. 30 of the participants were excluded from the released data due to poor EEG quality.
Thus we consider the provided set of 36 participants first analyzed by the study's authors \citep{zyma2019electroencephalograms}.
The released EEG data was preprocessed via a high-pass filter and a power line notch filter (50 Hz). Artifacts such as eye movements and muscle tension were removed via ICA.
We windowed the data into two and a half second chunks with no overlap and consider the two-class classification task \{stressed, not stressed\} with access only to the channels along the centerline \{Fz, Cz, Pz\} and the theta, alpha and lower beta bands described above.
The results of this experiment are displayed in Figure \ref{fig:stress} and are structured in the same way as the cognitive load results.
For this study we see relative parity between the target and average-source classifiers when $ p = 0.05 $. In this case, the optimal classifier is able to leverage the discriminative information in both sources and improve the balanced accuracy. This advantage is maintained until the target classifier's performance matches the optimal classifier's performance at $ p =0.5 $. The poor performance of the average-source classifier is likely due to the empirical value for $ \kappa $ being less than 3.
Interestingly, we do not see as clear of a trend for the median convex coefficients in the top right figure. They are relatively stagnant between $ p=0.05, 0.1 $ and $ 0.2 $ before jumping considerably closer to $ 1 $ for $ p=0.5 $.
When comparing the optimal classifier to the target classifier on a per-participant basis directly (bottom row) it is clear that the optimal classifier is favorable: for $ p = 0.05, 0.10 $ and $ p=0.2 $ the optimal classifier outperforms the target classifier for 25, 24, and 24 of the 36 participants, respectively, and the median absolute difference of these wins is in the 1.8\% - 2.6\% range for all three settings, with maximum improvements of 19.2\% for $ p=0.05 $, 19.2\% for $ p=0.1 $, and 12.1\% for $ p=0.2$. As with the cognitive load task, this narrative shifts for $ p=0.5 $ as the distribution of the differences is approximately centered around 0. The p-values from the one-sided rank-sum test reflect these observations: 0.001, 0.01, 0.007, and 0.896 for $ p =0.05, 0.1, 0.2$, and $ 0.5 $, respectively.
\subsection{Stress in social settings (ECG)}
\label{subsec:wesad}
\begin{figure*}[t!]
\captionsetup[subfigure]{justification=centering}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/ecg/social_stress_ecg_bas_and_alphas.pdf}
\end{subfigure} \\
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/ecg/social_stress_histograms.pdf}
\label{subfig:ecg-histograms}
\end{subfigure}
\caption{Balanced accuracy and relevant convex coefficients (top) and relative performance of the optimal and target classifiers on a per-participant basis (bottom) for the Social Stress, ECG-based classification task.
}
\label{fig:ecg}
\end{figure*}
The last dataset we consider is the WEarable Stress and Affect Detection (WESAD) dataset. For WESAD, the researchers collected multi-modal data while participants underwent a neutral baseline condition, an amusement condition and a stress condition. The participants meditated between conditions. For our purposes, we will only consider the baseline condition where participants passively read a neutral magazine for approximately 20 minutes and the stress condition where participants went through a combination of the Trier Social Stress Test and a mental arithmetic task for a total of 10 minutes.
For our analysis, we consider 14 of the 15 participants and only work with their corresponding ECG data recorded at 700 Hz. Before featurizing the data, we first downsampled to 100 Hz and split the time series into 15 second, non-overlapping windows. We used Hamilton's peak detection algorithm \citep{4122227} to find the time between heartbeats for a given window. We then calculated the proportion of intervals larger than 20 milliseconds, the normalized standard deviation of the interval length, and the ratio of the high (between 15 and 40 Hz) and low (between 4 and 15 Hz) frequencies of the interval waveform after applying a Lomb-Scargle correction for waves with uneven sampling. These three features are known to have discriminative power in the context of stress prediction \citep{hrv-review}, though typically for larger time windows.
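For readers who want to reproduce this featurization, the sketch below computes the three interval-based features for one window. It interprets the first feature as the proportion of successive interval differences exceeding 20 ms, takes the low- and high-frequency band limits as caller-supplied values, and uses helper names of our own choosing; it is an approximation of the pipeline described above, not the exact code used here.
\begin{verbatim}
import numpy as np
from scipy.signal import lombscargle

def hrv_features(rr, lo_band, hi_band):
    """rr: inter-beat intervals in ms; lo_band/hi_band: (f_min, f_max) in Hz,
    with f_min > 0 for the Lomb-Scargle evaluation."""
    rr = np.asarray(rr, dtype=float)
    diffs = np.abs(np.diff(rr))
    p20 = np.mean(diffs > 20.0)          # proportion of successive diffs > 20 ms
    sd_norm = np.std(rr) / np.mean(rr)   # normalized spread of interval lengths
    t = np.cumsum(rr) / 1000.0           # beat times in seconds (uneven sampling)
    x = rr - np.mean(rr)
    freqs = np.linspace(lo_band[0], hi_band[1], 512)
    pgram = lombscargle(t, x, 2.0 * np.pi * freqs)
    lo = pgram[(freqs >= lo_band[0]) & (freqs < lo_band[1])].sum()
    hi = pgram[(freqs >= hi_band[0]) & (freqs < hi_band[1])].sum()
    return p20, sd_norm, hi / lo
\end{verbatim}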
We report the same metrics for this dataset in Figure \ref{fig:ecg} as we do for the two EEG studies above: the mean balanced accuracies are given in the top left figure, the convex coefficients for the optimal and oracle classifiers are given in the top right and the paired difference histograms between the optimal classifier's balanced accuracy and the target classifier's balanced accuracy are given in the bottom row.
The relative behaviors of the classifiers in this study are similar to the behaviors in the EEG-based stress study above. The optimal classifier is able to outperform the other two classifiers for $ p = 0.05 $ and is matched by the target classifier for the rest of the regime. The average-source classifier is never preferred and the empirical value of $ \kappa $ is approximately 1.5. The distributions of the optimal coefficients get closer to 1 as $ p $ increases but are considerably higher compared to the MATB study for each value of $ p $ -- likely due to the large difference between the empirical values of $ \kappa $ across the two problems.
Lastly, the paired difference histograms for $ p = 0.05 $ favor the optimal classifier. The histograms for $ p=0.1, 0.2, $ and $ 0.5 $ are inconclusive. The p-values for Wilcoxon's rank-sum test are $ 0.029, 0.313, 0.620 $ and $ 0.700 $ for $ p = 0.05, 0.1, 0.2 $ and $ 0.5 $, respectively.
\subsection{Visualizing the projection vectors}
\label{subsec:visualizations}
\begin{figure*}[t]
\captionsetup[subfigure]{justification=centering}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/visualizing-projection-vectors/matb-projection-vectors-clustered.pdf}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/visualizing-projection-vectors/mental-math-projection-vectors-clustered.pdf}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.45\linewidth]{figures/visualizing-projection-vectors/ecg-projection-vectors-clustered.pdf}
\end{subfigure}
\caption{Visualizations of the projection vectors for each of the three datasets under study where each dot or arrow corresponds to a session.
The projection vectors were estimated using the entire data from each session.
For the MATB (top left) and Mental Math (top right) visualizations we show the first two principal components scaled by their corresponding eigenvalues of the $ J \times J $ cosine similarity matrix.
The WESAD visualization (bottom) shows the three dimensional projection vectors.
Colors denote the component of a Gaussian mixture model fitted to the projection vectors.
}
\label{fig:visualization}
\end{figure*}
The classification results above provide evidence that our proposed approximation to the optimal combination of the average-source and target projection vectors is useful from the perspective of improving the balanced accuracy. There is, however, a consistent gap that remains between the performance of the optimal classifier and the performance of the oracle classifier. To begin to diagnose potential issues with our model, we visualize the projection vectors from each of the tasks.
The three subfigures of Figure \ref{fig:visualization} show representations of the projection vectors for each task. The dots in the top row correspond to projection vectors from sessions from the MATB dataset (left) and the Mental Math dataset (right). The arrows with endpoints on the sphere in the bottom row correspond to projection vectors from sessions from WESAD. For these visualizations the entire dataset was used to estimate the projection vectors. The two-dimensional representations for MATB and Mental Math are the first two components of the spectral embedding \citep{ase} of the affinity matrix $ A $ with entries $ a_{ij} = (\omega^{(i)\top}\omega^{(j)} + 1) / 2 $ and $ a_{ii} = 0 $. The projection vectors for the WESAD task are three dimensional and are thus amenable to visualization.
For each task we clustered the representations of the projection vectors using a Gaussian mixture model where the number of components was automatically selected via minimization of the Bayesian Information Criterion (BIC). The colors of the dots and arrows reflect this cluster membership.
The BIC objective function prefers a model with at least two components to a model with a single component for all of the classification problems -- meaning that modeling the distribution of the source vectors as a unimodal von Mises-Fisher distribution is likely wrong and that a multi-modal von Mises-Fisher distribution may be more appropriate. We do not pursue this idea further but do think that it could be a fruitful future research direction for mitigating the gap between the performances of the optimal and oracle classifiers.
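A compact sketch of this embedding-and-clustering step is given below. The eigenvalue scaling follows the caption of Figure \ref{fig:visualization}, the mixture fit uses \texttt{sklearn.mixture.GaussianMixture} with BIC-based model selection, and the cap on the number of components as well as the function name are assumptions of ours.
\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

def embed_and_cluster(omegas, max_components=6, seed=0):
    """Spectral embedding of the cosine affinity matrix followed by
    BIC-selected Gaussian mixture clustering."""
    A = (omegas @ omegas.T + 1.0) / 2.0      # a_ij = (w_i . w_j + 1) / 2
    np.fill_diagonal(A, 0.0)                 # a_ii = 0
    vals, vecs = np.linalg.eigh(A)
    top = np.argsort(vals)[::-1][:2]         # two leading eigenpairs
    emb = vecs[:, top] * vals[top]           # scale components by eigenvalues
    fits = [GaussianMixture(k, random_state=seed).fit(emb)
            for k in range(1, max_components + 1)]
    best = min(fits, key=lambda g: g.bic(emb))
    return emb, best.predict(emb)
\end{verbatim}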
\subsection{The effect of the number of samples used to calculate $\alpha^{*}$}
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{figures/effect-of-samples/effect_of_B_mm_subj_6.pdf}
\caption{The effect of the number of samples drawn from the distribution of $ \omega_{\alpha} $ on the absolute error between the optimal $ \alpha $ calculated using $ B $ samples and the optimal $ \alpha $ calculated using $ B^{*}=10,000 $ samples for subject 6 of the Mental Math dataset. }
\label{fig:effect-of-B}
\end{figure}
In the simulation experiments described in Section \ref{subsec:simulations} and the applications to different physiological prediction problems in Sections \ref{subsec:matb}, \ref{subsec:mental-math}, and \ref{subsec:wesad} we used 100 samples from the distribution of $ \omega_{\alpha} $ to estimate the risk for a given $ \alpha $. There is no way to know \textit{a priori} how many samples is sufficient for estimating the optimal coefficient. We can, however, study how different numbers of samples affect the absolute error of the optimal coefficient compared to a coefficient calculated using an unrealistically large number of samples. For this analysis we focus on a single session from the Mental Math dataset described in \ref{subsec:mental-math}. The choice of dataset was somewhat arbitrary. The session was chosen because it is the session where the optimal classifier performs closest to the median balanced accuracy for $ p = 0.1 $.
Figure \ref{fig:effect-of-B} shows the effect of $ B $, the number of samples from the distribution of $ \omega_{\alpha} $ used to calculate the risk for a given $ \alpha $, on the mean absolute error when compared to a convex coefficient calculated using $B^{*} = 10,000 $ samples. The mean absolute errors shown are calculated for $ p \in \{0.05, 0.1, 0.2\} $ by first sampling a proportion of data $ p $ from the target task, training the target classifier using the sampled data, and then estimating the optimal coefficient using $ B^{*} = 10,000 $ samples from the distribution of $ \omega_{\alpha} $. We then compare this optimal coefficient to the coefficient found using $ B \in \{5, 10, 20, 50, 100, 200, 500, 1000\} $ samples from the distribution of $ \omega_{\alpha} $ 30 different times, calculate the absolute differences, and record the mean. The lines shown in Figure \ref{fig:effect-of-B} are averages over 100 different training sets. In this experiment the coefficients $ \alpha \in \{0, 0.1, \hdots, 1.0\} $ were evaluated.
There are a few things of note. First, the more target data is available, the fewer samples from the distribution of $ \omega_{\alpha} $ are needed to reach a given mean absolute error. Second, the mean absolute error curves appear to decay roughly exponentially in $ B $ and, for this subject, the benefit of additional samples diminishes quickly after $ B = 500 $. Lastly, although coefficients closer to the one calculated using $ B^{*} $ samples make the classifier behave more like the analytically derived optimal classifier, the gap between the performance of the oracle classifier and the optimal classifier in the real-data sections above indicates that a non-zero mean absolute error is not necessarily harmful.
\section{Discussion}
The approximation to the optimal convex combination of the target and average-source projection vectors proposed in Section \ref{sec:generative-model} is effective in improving the classification performance in simulation and, more importantly, across different physiological prediction settings. The improvement is both operationally and statistically significant in settings where very little training data from the target distribution is available. In most human-computer interface systems an improvement in this part of the regime is the most critical, as manufacturers want to reduce the amount of configuration time (i.e., the time spent collecting labeled data) the users endure and, more generally, make the systems easier to use. We think that our proposed method, along with the privacy-preserving properties inherent to sharing only parameter estimates, is helpful towards that goal.
With that said, there are limitations in our work. For example, the derivation of the optimal convex coefficient and, subsequently, our proposed approximation is only valid for the two class problem. We do not think that an extension to the multi-class problem is trivial, though treating a multi-class problem as multiple instances of the two class problem is a potential way forward \citep{tibshirani, li2006using}.
Similarly, our choice to use a single coefficient on the average-source projection vector, as opposed to one coefficient per source task, may be limiting in situations where the source vectors are not well concentrated. In the WESAD analysis where $ \kappa \approx 1.5 $, for example, it may be possible to maintain an advantage over the target classifier for a larger section of the regime with a more flexible class of hypotheses. The flexibility, however, comes at the cost of privacy and computational resources. A potential middle ground between maximal flexibility and the combination of privacy preservation and computational efficiency is modeling the distribution of the source projection vectors as a multi-modal vMF, where the algorithm would only need access to the mean direction vector and standard errors associated with each constituent distribution. The visualizations in Section \ref{subsec:visualizations} provide evidence that this model may be more appropriate than the one studied here.
\section{Related Works}
\paragraph{Connection to domain adaptation theory} The problem we address in this work can be framed as a domain adaptation problem with multiple sources. While a rich body of literature~\citep{ben2010theory, mansour2008domain, duan2012domain, duan2009domain, sun2015survey, guo2018multi, zhang2015multi, zhao2018adversarial} has studied and discussed this setting, our work most closely resembles the theoretical analysis proposed by~\citep{mansour2008domain}. There, the approach is to optimally combine the source hypotheses to derive a hypothesis that achieves a small error with respect to the target task, which is assumed to have a distribution equal to a mixture of the source distributions. In contrast, our work proposes a way in which we can optimally combine the average source hypothesis with the target hypothesis to achieve a minimal generalization error on the target task. Owing to the simple nature of the linear hypothesis class and task distributions under consideration, we are able to employ an analytically derived expected risk expression to find the optimal combination. This is in contrast with the studies on more general classes of hypotheses and distributions, led by the aforementioned body of literature. Moreover, \cite{de2022value} reveals that the target generalization error can be a non-monotonic function of the number of source samples and points out that an optimally weighted empirical risk minimization between target and source samples can yield a better generalization error. While a weighted ERM exploits the bias-variance trade-off by optimally weighting target and source samples during training, our work achieves it by combining only the trained hypotheses. This removes the need to store and retrieve source samples, thus allowing for a more private means of achieving domain adaptation.
\paragraph{Domain adaptation for physiological prediction problems} Domain adaptation and transfer learning are ubiquitous in the physiological prediction literature due to large context variability and small in-context sample sizes. See, for example, a review for EEG-inspired methods \citep{zhang2020application} and a review for ECG-inspired methods \citep{ecg-da}. Most similar to our work are methods that combine general-context data and personalized data \citep{nkurikiyeyezu2019effect} or weigh individual classifiers or samples from the source task based on similarities to the target distribution \citep{zadrozny2004proceedings, azab2019weighted}. Our work differs from \citep{nkurikiyeyezu2019effect}, for example, by explicitly modeling the relationship between the source and target tasks. This modeling allows us to derive an optimal combination of the models as opposed to relying strictly on empirical measures.
\paragraph{Measures of task similarity}
The similarity between two tasks can be measured in various ways and is typically used to determine how fit a pre-trained model is for a particular target task \citep{bao2018information, tran2019transferability, leep} or to define a taxonomy of tasks \citep{taskonomy}.
The convex coefficient $ \alpha $ parameterizing our proposed class of models can be thought of as a measure of model-based task dissimilarity between the target task and the average-source task -- the farther the distribution of the target projection vector is from the distribution of the source projection vector the larger the convex coefficient.
Popular task similarity measures utilize information theoretic quantities to evaluate the effectiveness of a pre-trained source model for a particular target task such as H-score \citep{bao2018information}, NCE \citep{tran2019transferability}, LEEP \citep{leep}. This collection of work is mainly empirical and does not place explicit generative relationships on the source and target tasks.
Other statistically inspired task similarity measures, like ours, rely on the representations induced by the source and target classifiers such as partitions \citep{tasksim} and others \citep{baxter2000model,ben2003exploiting, xue2007multi}.
\clearpage
\section{Derivation of the analytical expression for classification error with respect to the target distribution}
\label{sec:app-derivation}
Suppose the target distribution is given by $\mc{P} = \pi_0 \mc{P}_0 + \pi_1 \mc{P}_1$ where $\pi_i$ is the prior probability and $\mc{P}_i$ is the class conditional density of the $i$-th class. The generative model in \cref{sec:generative-model} specifies that $\mc{P}_i = \mc{N}_d\left( (-1)^{i+1}\nu, \Sigma \right)$. For simplicity, we only consider the case where $\pi_0 = \pi_1 = \frac{1}{2}$, but we note that the analysis can be easily extended to unequal priors. Under the 0-1 loss, the classification error of an FLD hypothesis $\hat h(x) = \mathbb{1}\{ \hat \omega^\top x > 0 \}$ with respect to the target distribution $\mc{P}$ is given by,
\begin{align*}
\ell(\hat h) &= \mathbb{P}_{(X, Y) \sim \mc{P}}\left[ \hat h(X) \neq Y \mid \hat \omega \right] \\
&= \frac{1}{2}\mathbb{P}_{X \sim \mc{P}_0}\left[ \hat \omega ^\top X > 0 \right] + \frac{1}{2}\mathbb{P}_{X \sim \mc{P}_1}\left[ \hat \omega ^\top X < 0 \right] \\
&= \frac{1}{2} - \frac{1}{2}\mathbb{P}_{X \sim \mc{P}_0}\left[ \hat \omega ^\top X < 0 \right] + \frac{1}{2}\mathbb{P}_{X \sim \mc{P}_1}\left[ \hat \omega ^\top X < 0 \right]
\end{align*}
Since $\hat \omega^\top X \sim \mc{N}_1\left( \hat \omega^\top \mathbb{E}[X], \hat \omega^\top \Sigma \; \hat \omega \right)$, we have
\begin{align*}
\ell(\hat h) &= \frac{1}{2} - \frac{1}{2}\mathbb{P}\left[ Z < \frac{\hat \omega^\top \nu}{\sqrt{\hat \omega^\top \Sigma \; \hat \omega}} \right] + \frac{1}{2}\mathbb{P}\left[ Z < \frac{-\hat \omega^\top \nu}{\sqrt{\hat \omega^\top \Sigma \; \hat \omega}} \right],
\end{align*}
where $Z$ is a standard normal random variable. Therefore,
\begin{align*}
\ell(\hat h) &= \frac{1}{2} - \frac{1}{2}\Phi \left( \frac{\hat \omega^\top \nu}{\sqrt{\hat \omega^\top \Sigma \; \hat \omega}} \right) + \frac{1}{2}\Phi \left( \frac{-\hat \omega^\top \nu}{\sqrt{\hat \omega^\top \Sigma \; \hat \omega}} \right)
\end{align*}
Using the fact that $\Phi(-x) = 1 - \Phi(x)$, we arrive at the desired expression:
\begin{equation*}
\ell(\hat h) = \Phi \left( \frac{-\hat \omega^\top \nu}{\sqrt{\hat \omega^\top \Sigma \; \hat \omega}} \right).
\end{equation*}
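This closed form is easy to verify numerically. The sketch below compares the Monte Carlo 0-1 error of an arbitrary fixed projection vector with the expression above; all parameter values are arbitrary choices of ours.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
d, N = 5, 200_000
nu = rng.standard_normal(d)
nu /= np.linalg.norm(nu)
Sigma = np.eye(d)
w = rng.standard_normal(d)                  # an arbitrary fixed omega_hat

y = rng.integers(0, 2, size=N)
X = rng.multivariate_normal(np.zeros(d), Sigma, size=N)
X += np.where(y[:, None] == 1, nu, -nu)     # class means are +/- nu
err_mc = np.mean((X @ w > 0).astype(int) != y)
err_cf = norm.cdf(-w @ nu / np.sqrt(w @ Sigma @ w))
print(err_mc, err_cf)                       # the two values should agree closely
\end{verbatim}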
\section{Fisher's Linear Discriminant}
The decision rule of Fisher's Linear Discriminant (FLD) for a binary classification problem is given by,
\begin{equation}
h(x; \omega, c) =
\begin{cases}
1 & \omega^\top x > c \\
0 & \text{otherwise}
\end{cases}
\end{equation}
where,
\begin{align}
\omega &= (\hat \Sigma_0 + \hat \Sigma_1)^{-1} (\hat \mu_1 - \hat \mu_0) \notag \\
c &= \omega^\top \bigg( \frac{\hat \mu_1 + \hat \mu_0}{2} \bigg) + 2 \log \frac{1-\hat \pi}{\hat \pi} \notag
\end{align}
In the above expressions, $\hat \mu_k$ and $\hat \Sigma_k$ are the sample mean and covariance matrix of class $k$ ($k \in \{0, 1\}$) and $\hat \pi$ is the sample proportion (prior probability) of class 1.
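For reference, the rule above can be written in a few lines of Python; the threshold follows the display above, including the factor of two in the log-odds term, and the function names are ours.
\begin{verbatim}
import numpy as np

def fld_fit(X, y):
    """Fisher's Linear Discriminant as defined above."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    S = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(S, m1 - m0)
    pi1 = np.mean(y == 1)
    c = w @ ((m1 + m0) / 2.0) + 2.0 * np.log((1.0 - pi1) / pi1)
    return w, c

def fld_predict(X, w, c):
    return (X @ w > c).astype(int)
\end{verbatim}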
\section{Analysis when $d>2$, $J>1$}
We have $n$ samples from a target task, denoted by $D_t = \{ (x_i, y_i) \}_{i=1}^n \sim P_t$ and $m$ samples from a source task denoted by $D_s^j = \{ (x_i^j, y_i^j)\}_{i=1}^m \sim P_s^j$, where $x \in X \subseteq \mathbb{R}^d$, $y \in Y = \{ 0, 1 \}$, and $j \in \{ 1, \dots, J\}$. The target task distribution $P_t$ is characterized by the class conditional densities,
\begin{equation}
f_{k,t} \stackrel{d}{=} \mathcal{N}((-1)^{k+1}\mu, I_d), \notag
\end{equation}
while the source task distribution $P_s^j$ is characterized by the class conditional densities,
\begin{equation}
f_{k,s^j} \stackrel{d}{=} \mathcal{N}((-1)^{k+1}R(\nu_j) \mu, I_d). \notag
\end{equation}
Here, $k \in \{0, 1\}$, $\mu = [ 1, 0_{d-1}]^\top$ and $R(\nu)$ is a rotational matrix given by,
\begin{equation}
R(\nu) = 2 \frac{(\mu + \nu)(\mu + \nu)^\top}{(\mu + \nu)^\top(\mu + \nu)} - I_d \notag
\end{equation}
where, $\nu \sim \text{VMF}(\mu, \kappa)$ is sampled from the Von Mises-Fisher distribution. All the target and source tasks have the same label distribution, i.e. $p(y_t=1) = p(y_{s^j}=1) = \pi$.
Let $h_t(x; \omega_t, c_t)$ be an FLD decision rule trained with target task data $D_t$ and $h_{s^j}(x; \omega_{s^j}, c_{s^j})$ be an FLD decision rule trained with $j$-th source task data $D_s^j$. We consider a classifier $g_{\alpha}(x)$ defined as,
\begin{equation}
g_\alpha(x) =
\begin{cases}
1 & \omega_{\alpha}^\top x > c_{\alpha} \\
0 & \text{otherwise}
\end{cases}
\end{equation}
where,
\begin{align}
\omega_{\alpha} &= \alpha \omega_t + (1-\alpha) \sum_{j=1}^J \omega_{s^j} \notag \\
c_{\alpha} &= \alpha c_t + (1-\alpha) \sum_{j=1}^J c_{s^j} \notag
\end{align}
We make several assumptions to simplify our analysis: (1) the sample class covariance matrices are known and are equal to the identity matrix ($\hat \Sigma_0 = \hat \Sigma_1 = I_d$), (2) the midpoint of the sample class means is equal to zero ($\frac{\hat \mu_0 + \hat \mu_1}{2} = 0$), and (3) the sample class proportion is equal to the true class proportion ($\hat \pi = \pi$). Under these assumptions $\omega_t$ and $c_t$ can be written as,
\begin{align}
\omega_t &= \frac{\hat \mu_1 - \hat \mu_0}{2}, \notag \\
c_t &= 2 \log \frac{1-\pi}{ \pi}. \notag
\end{align}
Since the class conditional densities are normal, we have $\hat \mu_1 \sim \mathcal{N}(\mu, \frac{I_d}{n\pi})$ and $\hat \mu_0 \sim \mathcal{N}(-\mu, \frac{I_d}{n(1-\pi)})$. Therefore, $\omega_t$ is also normally distributed with $\omega_t \sim \mathcal{N}(\mu, \frac{I_d}{4n \pi(1-\pi)})$. Similarly, we can show that the projection term $\omega_{s^j}$ of the $j$-th source task has the conditional distribution $\omega_{s^j} \mid \nu_j \sim \mathcal{N}(R(\nu_j)\mu, \frac{I_d}{4m \pi(1-\pi)})$, while the threshold term $c_{s^j}$ is equal to $2 \log \frac{1-\pi}{\pi}$. Building off these intermediate results, we can find the conditional expected value $\mu_{\omega_{\alpha}}$ and variance $\Sigma_{\omega_{\alpha}}$ of the combined projection term $\omega_{\alpha}$, given $\{ \nu_j \}_{j=1}^J$:
\begin{align}
\mu_{\omega_{\alpha}} = \mathbb{E}[\omega_{\alpha} \mid \{ \nu_j \}_{j=1}^J] &= \alpha \mathbb{E}[\omega_t] + (1-\alpha) \sum_{j=1}^J \mathbb{E}[\omega_{s^j} \mid \{ \nu_j \}_{j=1}^J] \notag \\
&= \alpha \mu + (1-\alpha) \sum_{j=1}^J R(\nu_j) \mu \notag \\
&= \bigg( \alpha I_d + (1-\alpha) \sum_{j=1}^J R(\nu_j) \bigg) \mu \notag, \\
\Sigma_{\omega_{\alpha}} = \text{Var}[\omega_{\alpha} \mid \{ \nu_j \}_{j=1}^J] &= \alpha^2 \text{Var}[\omega_t] + (1-\alpha)^2 \sum_{j=1}^J \text{Var}[\omega_{s^j} \mid \{ \nu_j \}_{j=1}^J] \notag \\
&= \alpha^2 \frac{I_d}{4n \pi(1-\pi)} + (1-\alpha)^2 \sum_{j=1}^J \frac{I_d}{4m \pi(1-\pi)} \notag \\
&= \frac{1}{4\pi (1-\pi)} \bigg( \frac{\alpha^2}{n} + \frac{(1-\alpha)^2 J}{m} \bigg) I_d \notag.
\end{align}
Since $\omega_{\alpha}$ is a weighted sum of normal random variables, we can conclude that $\omega_{\alpha} \mid \{ \nu_j \}_{j=1}^J \sim \mathcal{N}(\mu_{\omega_{\alpha}}, \Sigma_{\omega_{\alpha}})$. Furthermore, we can find an expression for $c_{\alpha}$ as follows:
\begin{align}
c_{\alpha} &= \alpha c_t + (1-\alpha) \sum_{j=1}^J c_{s^j} \notag \\
&= 2 \alpha \log \frac{1-\pi}{ \pi} + 2 (1-\alpha) \sum_{j=1}^J \log \frac{1-\pi}{ \pi} \notag \\
&= 2(\alpha + (1-\alpha)J) \log \frac{1-\pi}{ \pi} \notag.
\end{align}
The generalization error on the target task of the classifier $g_{\alpha}(x)$ is given by,
\begin{align}
p(g_{\alpha}(x) \neq y \mid x, \omega_{\alpha}, \{ \nu_j \}_{j=1}^J) &= \frac{1}{2} p_{x \sim f_{1,t}}[\omega_{\alpha}^\top x < c_{\alpha}] + \frac{1}{2}p_{x \sim f_{0, t}}[\omega_{\alpha}^\top x > c_{\alpha}] \notag \\
&= \frac{1}{2} + \frac{1}{2} p_{x \sim f_{1,t}}[\omega_{\alpha}^\top x < c_{\alpha}] - \frac{1}{2}p_{x \sim f_{0, t}}[\omega_{\alpha}^\top x < c_{\alpha}] \notag \\
&= \frac{1}{2} \bigg[ \Phi \bigg( \frac{-\omega_{\alpha}^\top \mu + c_{\alpha}}{\| \omega_{\alpha} \|} \bigg) + \Phi \bigg( \frac{-\omega_{\alpha}^\top \mu - c_{\alpha}}{\| \omega_{\alpha} \|} \bigg) \bigg] \notag.
\end{align}
Let $e_t(g_{\alpha}) = \mathbb{E}_{\{ \nu_j \}_{j=1}^J \sim \text{VMF}(\mu, \kappa)} \big[ \mathbb{E}_{\omega_{\alpha} \sim \mathcal{N}(\mu_{\omega_{\alpha}}, \Sigma_{\omega_{\alpha}})} [ p(g_{\alpha}(x) \neq y \mid x, \omega_{\alpha}, \{ \nu_j \}_{j=1}^J) ] \big]$ be the expected generalization error on the target task. We compute this expected value using MC-integration. Finally, we obtain the expected balanced accuracy on the target task as $1 - e_t(g_{\alpha})$.
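The nested Monte Carlo integration can be sketched as follows. The rotation, mean, covariance, and threshold expressions are taken from the displays above; drawing from the von Mises-Fisher distribution assumes SciPy $\geq 1.11$, and the function names and default parameter values are ours.
\begin{verbatim}
import numpy as np
from scipy.stats import norm, vonmises_fisher

def rotation(mu, nu):
    """Householder-type map R(nu) taking mu to nu, as defined above."""
    s = mu + nu
    return 2.0 * np.outer(s, s) / (s @ s) - np.eye(len(mu))

def expected_error(alpha, d=5, J=10, n=20, m=20, kappa=100.0, pi=0.5,
                   outer=200, inner=50, seed=0):
    """Nested Monte Carlo estimate of e_t(g_alpha)."""
    rng = np.random.default_rng(seed)
    mu = np.zeros(d)
    mu[0] = 1.0
    c_a = 2.0 * (alpha + (1.0 - alpha) * J) * np.log((1.0 - pi) / pi)
    errs = []
    for _ in range(outer):
        nus = np.atleast_2d(vonmises_fisher(mu, kappa).rvs(J, random_state=rng))
        R_sum = sum(rotation(mu, nu) for nu in nus)
        mean_w = (alpha * np.eye(d) + (1.0 - alpha) * R_sum) @ mu
        var_w = (alpha**2 / n + (1.0 - alpha)**2 * J / m) / (4.0 * pi * (1.0 - pi))
        W = rng.multivariate_normal(mean_w, var_w * np.eye(d), size=inner)
        norms = np.linalg.norm(W, axis=1)
        proj = W @ mu
        errs.append(np.mean(0.5 * (norm.cdf((-proj + c_a) / norms)
                                   + norm.cdf((-proj - c_a) / norms))))
    return float(np.mean(errs))
\end{verbatim}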
\section{Analytical Results}
\begin{figure*}[h!]
\centering
\includegraphics[width=\textwidth]{figures/effect-of-kappa-analytical.pdf}
\caption{Performance and optimal $ \alpha $ as a function of $ \kappa $. \textbf{Top:} The expected accuracy for the three classifiers as a function of the concentration parameter of the von Mises-Fisher distribution $ \kappa $. Each column corresponds to a different amount of data available from the target task, $ n \in \{10, 20, 40\} $, and includes accuracies for the classifiers for various amounts of source tasks, $ J \in \{1, 10, 100\} $. For this figure we fix the dimension $ d = 5 $ and the amount of data from each of the source tasks $ m = 20 $. \textbf{Bottom:} The optimal $ \alpha $ as a function of $ \kappa $ for varying amounts of source tasks. The optimal $ \alpha $ is calculated using Equation ?.}
\label{fig:kappa_vs_acc}
\end{figure*}
\begin{figure*}[h!]
\centering
\includegraphics[width=\textwidth]{figures/effect-of-dimension-analytical.pdf}
\caption{Performance and optimal $ \alpha $ as a function of dimensionality. \textbf{Top:}
The expected accuracy for the three classifiers as a function of the dimensionality $ d $. Each column corresponds to a different amount of source data, $ m \in \{10, 20, 40\} $, and includes accuracies for the classifiers for various amounts of source tasks, $ J \in \{1, 10, 100\} $. For this figure we fix the concentration parameter $ \kappa = 100 $ and the amount of data from the target task $ n = 20 $. \textbf{Bottom:} The optimal $ \alpha $ as a function of $ d $ for varying amounts of source tasks. The optimal $ \alpha $ is calculated using Equation ?.
}
\label{fig:my_label}
\end{figure*}
\clearpage
\section{Deviations between Theory and Implementation}
\begin{enumerate}
\item In the analysis, we define the projection vector of the combined classifier as,
\begin{equation}
\omega_{\alpha} = \alpha \omega_t + (1-\alpha) \sum_{j=1}^J \omega_{s^j} \notag
\end{equation}
In the implementation, first the source projection vectors are summed and converted to a unit vector.
\begin{equation}
\omega_s = \frac{\sum_{j=1}^J \omega_{s^j}}{\| \sum_{j=1}^J \omega_{s^j} \|} \notag
\end{equation}
Then the projection vector is computed as,
\begin{equation}
\omega_{\alpha} = \frac{\alpha \omega_t + (1-\alpha) \omega_s}{\| \alpha \omega_t + (1-\alpha) \omega_s \|} \notag
\end{equation}
\item In the analysis, the covariance matrix of the combined projection vector is given by,
\begin{equation}
\Sigma_{\omega_{\alpha}} = \frac{1}{4\pi (1-\pi)} \bigg( \frac{\alpha^2}{n} + \frac{(1-\alpha)^2 J}{m} \bigg) I_d \notag
\end{equation}
In the implementation, the following covariance matrix is used:
\begin{equation}
\Sigma_{\omega_{\alpha}} = \frac{1}{4\pi (1-\pi)} \bigg( \frac{\alpha^2}{n} + \frac{(1-\alpha)^2}{m} \bigg) I_d \notag
\end{equation}
This covariance matrix might not be accurate for two reasons: (1) we have to account for the normalization, and (2) the identity covariance assumption will not hold true in real data.
\item The analysis assumes that,
\begin{itemize}
\item $\mu = [1, 0, \dots, 0]$
\item Class covariance matrices are $I_d$
\item Mid-point of the class means is 0.
\item $\pi = 0.5$
\end{itemize}
The implementation is modified such that these assumptions are held true, except for the $\mu$ and $I_d$ assumptions.
\end{enumerate}
$ \frac{\alpha \mu + (1-\alpha) R\left(\frac{v_{1} + v_{2}}{||v_{1} + v_{2}||}\right)\mu}{||\alpha \mu + (1-\alpha) R\left(\frac{v_{1} + v_{2}}{||v_{1} + v_{2}||}\right)\mu||}$
\section{Introduction} \label{sec:1}
Star formation is known to require environments characterized by a sufficiently low temperature ($<$20 K) and a high gas density \citep[][]{Hsieh2021}. In contrast, the gaseous-dusty filaments in the Galactic center reach gas temperatures of $\sim 6000\,{\rm K}$ and dust temperatures of $\sim 250$ K \citep{Cotera1999, Moser2017}. Furthermore, the velocity dispersion of all known objects close to Sgr~A* exceeds typical values in star formation regions by several orders of magnitude \citep{Larson1981, Genzel2000}. Due to the presence of the $4\,\times\,10^{6}\,M_{\odot}$ supermassive black hole Sgr~A*, tidal forces hinder gas clumping, which is necessary for the formation of stars. However, \cite{Yusef-Zadeh2013} found SiO clumps at a distance of about 0.6 pc from Sgr~A* that imply high-mass star formation. The authors suggest that the observed clumps are an indication of {\it in situ} star formation that took place during the last 10$^4$ - 10$^5$ years. In a subsequent analysis, ALMA observations showed high-velocity clumps in the {\it inner parsec} but also within the Circum-Nuclear Disk (CND) directed towards Sgr~A* \citep{Moser2017, Hsieh2021}. Some of these dense clumps meet the conditions necessary for star formation. In addition, the authors of \cite{Jalali2014} propose that the interplay between Sgr~A* and in-spiralling clumps of several tens of Solar masses could trigger star formation. Similar interpretations of the observational findings can be found in the theoretical works of \cite{Nayakshin2007} and \cite{Hobbs2009}.
The in-spiralling of clumps and/or clusters was already suggested by \citet{Portegies-Zwart2003} and \citet{Maillard2004} to explain the high-mass stars of IRS 13 and IRS 16 \citep[for a review, see][]{Genzel2010}. While the IRS 16 cluster is the origin of prominent stellar winds originating at high-mass stars \citep[][]{Krabbe1991, Krabbe1995}, IRS 13 could contain dust-enshrouded YSOs \citep[][]{Eckart2004} and evolved Wolf-Rayet (WR) stars \citep[][]{Moultaka2005}. The presence of dusty objects in IRS 13 is accompanied by the finding of dusty D-sources in or close to the S-cluster \citep[][]{Eckart2013, Peissker2020b}\footnote{\cite{Ciurlo2020} denote these objects as G-sources.}. It is evident that the finding of two distinct populations with sources that share photometric properties such as the H-K and K-L colors implies a common formation history. Furthermore, it is suggested that more sources such as X7 \citep[][]{peissker2021} and G2/DSO \citep[][]{peissker2021c} can be found in the {\it inner parsec} \citep[][]{Yusef-Zadeh2015, Yusef-Zadeh2017}.
The previously identified bow-shock source X3, which was assumed to be coreless, is located at a distance of about one arcsecond from IRS 13 \citep[][]{Clenet2003, muzic2010}. It has been speculated that the bow-shock shape of X3 is created by stellar winds originating at the IRS 16 cluster \citep[][]{Wardle1992, peissker2021}.
Here we revisit the morphological and emission properties of the bow-shock source X3 by analyzing an extensive archival data set observed between 1995 and 2020. The near- and mid-infrared observations were carried out in the H- (1.65 $\mu m$), K- (2.20 $\mu m$), L- (3.80 $\mu m$), and M-band (4.80 $\mu m$) with a total on-source integration time of several weeks. The mid-infrared domain is complemented by narrow-filter observations between 8-20 $\mu m$ using VISIR, covering the N- and Q-band. In addition to radio/submm observations at 232 GHz (1292 $\mu m$) and 343 GHz (874 $\mu m$), we include 3-dimensional Integral Field Unit (IFU) data cubes observed with SINFONI. Based on the analysis presented here, we propose the detection of a high-mass YSO (HMYSO) associated with X3 {characterized by strong outflows with velocities of several hundred km/s}. This surprising and novel interpretation of the X3 system suggests an urgent need to revise our view on this and similar objects in the Milky Way center. To justify our claim, we investigate the spectral footprint of the X3 system and compare it to close-by early- and late-type stars. Furthermore, we present a comprehensive multi-wavelength analysis to target the complexity of the X3 system. We discuss formation scenarios consistent with theoretical models that aim to explain the origin of young stars in the {\it inner parsec}.\newline
In the following Sec. \ref{sec:analysis}, we provide an overview of the different telescopes, instruments, and methods that are used for the analysis. Hereafter, we present the { main} results in Sec. \ref{sec:results}. These results will be discussed in Sec. \ref{sec:discuss} and are { followed} by the conclusions in Sec. \ref{sec:conclusion}.
\section{Data and Tools}
\label{sec:analysis}
{In this section}, we describe the data and analysis tools that are used in this work. {Data tables} can be found in Appendix \ref{ref:data_appendix}. In Table \ref{tab:telescopes_instruments}, we list all the instruments/telescopes used with the {corresponding} wavelength domain.
\begin{table}[hbt!]
\centering
\begin{tabular}{|cc|}
\hline
\hline
Telescope/Instrument & Wavelength [$\mu m$] \\
\hline
VLT(NACO) & 1.6, 2.1, 3.8, 4.7 \\
VLT(SINFONI) & 1.4-2.3 \\
VLT(ISAAC) & 3.0-4.5 \\
VLT(VISIR) & 8.0-20.0 \\
NTT(SHARP) & 2.2 \\
KECK(NIRC2) & 1.6, 2.1, 3.7 \\
KECK(OSIRIS) & 2.1 \\
ALMA & 874, 1292 \\
\hline
\end{tabular}
\caption{Telescopes and instruments used in this work. The listed wavelength for the ALMA observations corresponds to 232 GHz and 343 GHz.}
\label{tab:telescopes_instruments}
\end{table}
Throughout this work, we will not distinguish between the various sub-bands such as for example the K$_S$-, K'-, and K-band since the color difference ($|K-K'|$, $|K-K_S|$ , $|K'-K_S|$) is smaller than the typical photometric uncertainty of about 0.1-0.2 mag \citep[][]{Ott1999, Eckart2004}.
\subsection{Very Large Telescope}
The Very Large Telescope (VLT) is located at Cerro Paranal (Chile) at an altitude of about 2600 m. The VLT harbors four main telescopes (Unit Telescopes, abbreviated UTs) with an individual dish size of 8.2 m and four smaller Auxiliary Telescopes (ATs) with a dish size of 1.8 m each. While every UT is capable of Adaptive Optics (AO) to correct for the turbulent atmosphere using a Natural Guide Star (NGS), UT4 additionally supports the use of a Laser Guide Star (LGS). Except for VISIR, the VLT instruments used here, SINFONI, NACO, and ISAAC, are already decommissioned. However, ERIS \citep[][]{Davies2018} is the successor to NACO and SINFONI and combines the capabilities of both instruments. ERIS is scheduled for first light in 2022.
\subsubsection{SINFONI and NACO}
SINFONI \citep[][]{Eisenhauer2003, Bonnet2004} is a NIR instrument that operates between $1.1\,-\,2.45\,\mu m$ and is equipped with an Integral Field Unit (IFU). The IFU is responsible for the shape of the data, which is arranged as a 3d cube containing two spatial and one spectral dimension. This data setup allows for the analysis of individual emission lines by subtracting the underlying continuum. If the source of interest { exhibits} a non-zero Line Of Sight (LOS) velocity, it manifests itself as a Doppler shift of a line with respect to its rest wavelength. Since the K-band around $2.2\,\mu m$ is not strongly affected by telluric emission and absorption features, the Br$\gamma$ line with a rest wavelength of $2.1661\,\mu m$ is commonly used for the analysis of stars in the K-band because of the minimized confusion.\newline
For the observations, an exposure time of 300 s per single data cube is used. By combining and stacking single data cubes, we are able to artificially increase the on-source integration time. The spatial pixel scale for the SINFONI data presented here is $0.1"$ with a Field Of View (FOV) of $3"\,\times\,3"$ per single data cube; the grating is set to the H+K band ($1.45\,\mu m\,-\,2.45\,\mu m$) with a spectral resolution $R\,=\,\frac{\lambda}{\Delta\lambda}\,=\,1500$. With a central wavelength in the H+K band of $\lambda\,=\,1.95\,\mu m$, we get a value for the smallest distinguishable wavelength interval of $\Delta\lambda\,=\,0.0013\,\mu m$. This, however, is an upper limit since the PSF, and therefore the computed spectral resolution, differs as a function of position on the detector\footnote{See the SINFONI manual, available at \url{www.eso.org}.}. Using the measured values from the instrument commissioning, we adopt a line uncertainty of $\pm\,25\,$km/s.\newline\newline
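For reference, the quoted wavelength sampling follows directly from the definition of the spectral resolution; a minimal check in Python (the numbers are the values quoted above):
\begin{verbatim}
# Smallest distinguishable wavelength interval of SINFONI in the H+K band.
lam_central = 1.95   # micron, central wavelength of the H+K grating
R = 1500             # spectral resolution R = lambda / delta_lambda
delta_lam = lam_central / R
print(f"delta_lambda ~ {delta_lam:.4f} micron")  # ~0.0013 micron, as quoted above
\end{verbatim}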
In addition to SINFONI, we use the near-infrared imager NACO \citep[][]{Lenzen2003, Rousset2003} in the H-, K-, L-, and M-band with an FOV of about $\rm 15\,arcsec\,\times\,15\,arcsec\approx\,0.6\,pc\,\times\,0.6\,pc$ that shows almost the entire NSC/inner parsec of the GC. Because of the size of the FOV, the bright supergiant IRS7 is regularly used for AO. We choose the standard randomized dither pattern inside the largest dither-box of 4 arcsec. Calibration data like sky-, flat-, and dark-frames are observed with a standard procedure and are provided by the telescope site\footnote{Sky-frames are obtained from a nearby empty region.}. The reduction of the data is done by using DPUSER (Ott, MPE Garching) { as well as} in-built scripts such as dead pixel correction.
\subsubsection{ISAAC and VISIR}
The Infrared Spectrometer And Array Camera \citep[ISAAC, see][]{Moorwood1998} and the VLT Imager and Spectrometer for mid Infrared \citep[VISIR, see][]{Lagage2004} are capable of performing imaging and spectroscopic observations in the MIR { domain}. The ISAAC data used to create the MIR cube ($3.0\,\mu m\,-\,4.5\,\mu m$) were already discussed and analyzed in \cite{Moultaka2005}, where the authors describe the reduction process in detail.
The ISAAC observations were performed in July 2003 using the long-wavelength and low-resolution mode with $R=700$.
Of the instruments listed, VISIR is the only one that is currently mounted at the VLT, namely at UT3. It is, like ISAAC, a MIR imager with spectrometer capabilities. In contrast to the continuum observations with ISAAC, several filters of VISIR were used. The selected filters for the observations that took place in 2004 are PAH1r1 ($8.19\,\mu m$), PAH1 ($8.59\,\mu m$), ArIII ($8.99\,\mu m$), SIVr1 ($10.02\,\mu m$), SIVr2 ($11.11\,\mu m$), PAH2 ($11.25\,\mu m$), NeIIr1 ($12.51\,\mu m$), NeII ($12.81\,\mu m$), NeIIr2 ($13.04\,\mu m$), Q2 ($18.72\,\mu m$), and Q3 ($19.50\,\mu m$). In order to increase the signal-to-noise ratio, we stack individual frames to create a final mosaic.
\subsection{NTT telescope}
The ESO New Technology Telescope (NTT) uses active optics to correct for distortion effects of the dish ($\sim 3.6$ m). It was commissioned in 1989 and hosted instruments such as the NIR speckle camera SHARP. The data used in this work was published in \cite{Menten1997} and is high-pass filtered with the Lucy-Richardson algorithm \citep[][]{Lucy1974}. We {refer to \cite{peissker2020a, Peissker2022}, where we describe and discuss different aspects of the high-pass filter technique.}
\subsection{Keck observatory}
The Keck observatory is located on Mauna Kea (Hawaii). With an elevation of over 4000 m, it is one of the highest ground-based observatories. The observatory consists of two single telescopes, Keck I and Keck II, with a dish size of 10 m each. Both telescopes can work as an interferometer with an increased baseline compared to the single-dish setup.
\subsubsection{NIRC2}
NIRC2 operates in the NIR in the K$_S$-band at a central wavelength of $2.12\,\mu m$. The spatial pixel scale is $0.0099$ arcsec, and the size of a single exposure with an FOV of about $10\,\times\,10$ arcsec is comparable to the NACO data. We use the KOA archive\footnote{\url{www.koa.ipac.caltech.edu}} to download data observed in 2019. For the observations carried out in 2019 (PI: Tuan Do, UCLA), an LGS was used to perform the AO correction. We use the pre-calibrated, i.e., science-ready data (dark-, flat-, and sky-corrected), and apply the shift-and-add algorithm to maximize the signal-to-noise ratio.
\subsubsection{OSIRIS}
The OH-Suppressing Infrared Imaging Spectrograph (OSIRIS) is mounted at the Keck II telescope. Comparable to SINFONI, OSIRIS provides 3d data cubes that consist of 2 spatial and 1 spectral dimension \citep[][]{Larkin2006, Mieda2014} and uses an Integral Field Spectrograph (IFS) supported by AO.
For the purpose of guidance in a crowded FOV, a science camera is coupled with the instrument, producing NIR continuum data. For this work, we utilize this science camera and investigate the K-band emission of the X3 system in 2020. The presented data was observed with the Kn3 filter that represents the K-band ($2.12-2.22\,\mu m$). The corresponding detector format is $2048\,\times\,2048$ pixels with a spatial pixel scale of 10 mas. The science-ready archival data downloaded from the Keck Online Archive (KOA) is shifted and added to suppress noise and artefacts.
\subsection{ALMA}
The Atacama Large (Sub)Millimeter Array (ALMA) is located on the Chajnantor plateau. With 66 radio antennas and a maximum distance of about 16 km between single units, it is the largest ground-based observatory to date. The antennas cover a frequency range of 31 GHz to 1000 GHz. The data used here (PI Masato Tsuboi, project code: 2015.1.01080.S) was previously discussed and analyzed in \cite{Tsuboi2017, tsuboi2019, Tsuboi2020a, Tsuboi2020b} and makes use of Band 7 ($\approx\,350$ GHz). In addition, we use archival science-ready data\footnote{\url{https://almascience.eso.org/aq/}} observed at 232 GHz (H30$\alpha$) by PI Lena Murchikova \citep[project code: 2016.1.00870.S, see][]{Murchikova2019}.
\subsection{Methods}
In the following, we describe the analysis techniques used in this work. Since we used and discussed these tools in detail in previous works \citep[e.g.,][]{peissker2020a, Peissker2020b}, we refer the reader to the related publications for further details.
\subsubsection{High-pass filter}
For the K-band detection presented in Fig. \ref{fig:x3_system}, we used a high-pass filter to minimize the influence of overlapping PSFs. We applied the high-pass filter, which can be described as an {\it image sharpener}, to all H- and K-band images presented in this work (except the Br$\gamma$ line maps shown in Fig. \ref{fig:linemaps_sinfo_sinfo} {and} Fig. \ref{fig:osiris_2020}, Appendix \ref{sec:appendix_further_detections}). The motivation for this process is the extended PSF wings of bright sources in the investigated FOV. If the PSF, and especially its wings, becomes too dominant, some image details are suppressed. The process is outlined in the following:
\begin{enumerate}
\item The input image ($I_{in}$) is smoothed with a Gaussian that matches the PSF of $I_{in}$
\item The resulting image $I_{low}$ is a low-pass-filtered version of $I_{in}$
\item $I_{in}$ can be regarded as the real image (i.e., the natural scene without the influence of a PSF) convolved with the full PSF
\end{enumerate}
The above relation can be mathematically described by
\begin{equation}
I_{in}\,-\,I_{low}\,=\,I_{high}
\label{eq:highpass}
\end{equation}
where $I_{high}$ is the desired high-pass filtered image of the input frame. In addition to the process described, we can again use a Gaussian and apply it to $I_{high}$ to construct a smoothed version of the final result. With this procedure, broader PSF wings are again added to the data. However, the influence of the PSF wings is significantly reduced. In \cite{Peissker2020b}, Fig. 2, we show a comparison between filtered and non-filtered data to visualize the advantage of an {\it image sharpener}. Compared to the Lucy-Richardson algorithm \citep{Lucy1974}, the {\it image sharpener} is accessible and easy to use. It can be classified as a cosmetic procedure and shows similarities to Angular Differential Imaging \citep{Marois2006}. However, we add that the SHARP data of 1995 \citep[see Sec. \ref{sec:results} and][]{Menten1997} was treated with a Lucy-Richardson (LR) deconvolution algorithm. We guide the interested reader to \cite{Peissker2022} for a detailed description of the LR process.
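A minimal sketch of the {\it image sharpener} described above, based on Gaussian smoothing with \texttt{scipy.ndimage}; the PSF width and the optional re-smoothing width are placeholder values, not the ones used for the NACO and SINFONI data:
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def image_sharpener(img_in, psf_sigma, resmooth_sigma=None):
    """High-pass filter ('image sharpener') as outlined above.

    img_in         : 2d numpy array, the input image I_in
    psf_sigma      : Gaussian sigma (pixels) matching the PSF of I_in
    resmooth_sigma : optional sigma for the final smoothing step
    """
    img_low = gaussian_filter(img_in, sigma=psf_sigma)  # low-pass image I_low
    img_high = img_in - img_low                         # I_high = I_in - I_low
    if resmooth_sigma is not None:                      # optional re-smoothing
        img_high = gaussian_filter(img_high, sigma=resmooth_sigma)
    return img_high

# Example with placeholder values (a 2.5 px PSF, mild re-smoothing):
# sharpened = image_sharpener(frame, psf_sigma=2.5, resmooth_sigma=1.0)
\end{verbatim}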
\subsubsection{Keplerian fit}
For the trajectory of X3, \citet{muzic2010} derived a linear solution. Since we have access to a data baseline that is twice as long as the one analyzed by \citet{muzic2010}, we apply a Keplerian solution to the data.
Based on the analysis in \cite{Parsa2017}, \cite{Ali2020}, and \cite{Peissker2022}, we assume a mass of $4.0\,\times\,10^{6}\,M_{\odot}$ with a distance of $8.0\,$kpc for the central potential SgrA*. This assumption is justified by the independently derived mass of $4.0\,\times\,10^{6}\,M_{\odot}$ for Sgr~A* by \cite{eht2022} and in reasonable agreement with \cite{Do2019S2}.
\subsection{Theoretical models}
To put the observational photometric results into perspective, we { apply the 3d Monte Carlo radiative transfer} model by \citet{Robitaille2011} to model the SED of the X3 system. The code, called $\rm HYPERION$, is based on a 3-d dust continuum emission model where ray-tracing is used to enhance the results. A detailed documentation of the code is freely accessible\footnote{\url{www.hyperion-rt.org}}.
HYPERION takes various structures of YSOs into account. For example, the gaseous accretion disk, bipolar cavities, and a rotationally flattened infalling dust envelope \citep[Ulrich type, see][]{Ulrich1976} can be modeled. We refer the interested reader to the publication of \citet{SiciliaAguilar2016}, where the authors provide a rough overview of the composition of a YSO (see Sec. \ref{sec:discuss}). {An additional example of a successful application of the code can be found in \cite{Zajacek2017}, where we modeled the NIR continuum emission of the infrared excess source DSO/G2 using similar gaseous-dusty structures typical of YSOs as for the X3 system.}
For the dust grains, we use a model that is supposed to represent the dust envelope of the X3-system. Our dust model is based on \cite{Draine2003} with a slope of the optical extinction curve of $\rm R_V\,=\,A_V/(A_B-A_V)\,=\,3.1$ which appears to be suitable and agrees with detailed high-resolution studies of the region by \cite{Fritz2011}.
\section{Results} \label{sec:results}
In this section, we present the results of our multiwavelength analysis. In the following subsections, we analyze the data that reveal the stellar counterpart of X3 with its components that are related to a YSO. Based on the presented observations and the SED model, we derive various parameters that describe the young dust-enshrouded star. { In Figure \ref{fig:x3_system}, we provide} an overview of the components that are associated with the X3 system. The three brightest and closest early-type stars to X3 are arranged as a triangle that is used throughout this manuscript for guidance to identify the investigated system in the crowded field. Please note that we { adopt} the nomenclature of \cite{Gautam2019} for S3-374.
\begin{figure*}[htbp!]
\centering
\includegraphics[width=.7\textwidth]{NACO_fc_stars_included.png}
\caption{Finding chart for the X3 system. We show two zoomed views toward the X3 system. The cyan zoom box to the left shows a K- and L-band overlay image where blue represents the dust of the bow shock and red is associated with hot and thermal emission. Whereas X3a can be classified as a YSO, X3b and X3c are thermal blobs. The cyan box also indicates the position of the three closest and brightest stars, S3-373, S3-374, and S3-375 \citep[see][]{Gautam2019}. The blue box on the right shows the CO emission at 343 GHz {where we incorporate K-band} contours of X3a (lime colored) and the bow shock {which is primarily observed in the L-band (light blue colored)}.
The background image is observed with NACO in the {MIR (L-band, 3.8$\mu m$)} where we indicate prominent clusters like IRS16 or IRS13 and stars. Sgr A* is located at the position of the yellow $\times$.}
\label{fig:x3_system}
\end{figure*}
Since the other two stars of the triangle are not named yet, we follow the established nomenclature with S3-373 and S3-375. The cyan box in Fig. \ref{fig:x3_system} shows the three {\it triangle stars} S3-373, S3-374, and S3-375. {In the following, all figures are normalized to their peak flux intensity to maintain comparability. Some of the closer stars might appear over-saturated due to the intrinsic flux density of X3.}
\subsection{Near- and Mid-Infrared detection}
During the analysis of NACO data observed in 2004, we found three compact sources at the position of the dusty $L$-band bow-shock source X3, which we call X3a, X3b, and X3c (see Fig. \ref{fig:x3_system} and Table \ref{tab:components} for an overview).
\begin{table}[hbt!]
\centering
\begin{tabular}{|cc|}
\hline
\hline
Name & Classification\\
\hline
X3 & Bow shock dust envelope \\
X3a & Central stellar source \\
X3b & Hot thermal blob \\
X3c & Hot blob? \\
\hline
\end{tabular}
\caption{Components of the X3 system. While we find strong evidence in our data set that justifies the listed classification of X3, X3a, and X3b, the analysis of X3c remains challenging.}
\label{tab:components}
\end{table}
K-band observations with the SHARP camera (NTT) of 1995 \citep[see Figure 2 in][]{Menten1997} in comparison with NACO (VLT) data of 2002 imply the absence of X3b in earlier years while X3a can be detected well above the confusion limit between 1995 and 2020.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.5\textwidth]{1995_SHARP_2002_naco_2019_nircam2.png}
\caption{K-band detection of X3a in 1995, 2002, and 2019. The position of the dashed gold-colored circle marks the (expected) position of X3b. Because X3b appeared brighter than X3a in the K-band in 2002, the blob should be traceable in 1995 and 2019. We indicate the position of X3a with a blue-colored dashed circle. For guidance, we marked the triangle stars S3-373, S3-374, and S3-375 with magenta colored circles (please see Fig. \ref{fig:x3_system}). North is up, East is to the left.}
\label{fig:detection_x3a_x3b}
\end{figure}
Setting X3a as a reference source, we find that the distance of X3b to X3a decreases between 2002 and 2011 (see the K-band observations in Appendix \ref{sec:appendix_multiwavelength_detection}, Fig. \ref{fig:x3_all_k}). In the years after 2011, the data does not show any significant $K$-band flux above the noise level for X3b. This is unexpected, especially because X3b was brighter than X3a in 2002. We pick three epochs to reflect this dynamical behavior of the X3 system. From the data, it is apparent that X3b exhibits no K-band emission above the noise level at the expected position, either in 1995 or in 2019 (Fig. \ref{fig:detection_x3a_x3b}).\newline
In addition to X3b, we observe a second blob in the L-band that moves along with X3a between 2002 and 2019. To exclude the chance of a sporadic fly-by, we center the available L-band NACO data on the position of the dusty envelope X3. The resulting stacked data is shown in Fig. \ref{fig:counterjet}, where we also indicate the position of X3c with respect to the bow shock X3 (see also Fig. \ref{fig:x3_system}). We find a compact emission with a FWHM of about 0.1". If X3 and X3c were not related, the flux of the blob would have been averaged out, as is evident from the low background emission displayed in Fig. \ref{fig:counterjet}. In Sec. \ref{sec:discuss}, we estimate the statistical probability of a random fly-by event for objects close to the IRS13 cluster based on the K-band luminosity function (KLF) formulated by \cite{Paumard2006}.\newline
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.4\textwidth]{counterjet_x3.png}
\caption{Dust blob X3c of the X3 system observed in the L-band with NACO. For the data presented, we stack all L-band NACO observations between 2002 and 2018 and subtract the triangle stars to emphasize the components of the system. We mark the position of X3 and the X3c blob. The contour lines refer to 20$\%$, 40$\%$, 60$\%$, 80$\%$, and 100$\%$ of the peak emission of the bow shock. Between 2002 and 2018, the distance of X3 to X3c stays constant. The measured projected length from tip to tail is about 0.8".}
\label{fig:counterjet}
\end{figure}
Furthermore, we determine the distance of X3 and X3a to Sgr~A* using the orbital elements of the well observed S2 orbit \citep[][]{Peissker2022}. From the orbital fit of S2, we calculate the position of Sgr~A* which then is used as a reference position to derive the distance of X3 and X3a.
To account for uncertainties caused by the detector \citep{Plewa2018}, we chose $\pm\,0.5$ pixel to additionally incorporate the footprint of the variable background as well as the large distance ($>0.1$parsec) between the X3 system and Sgr~A*. With the spatial pixel scale of NACO of 0.0133 arcsec (H- and K-band) and 0.027 arcsec (L-band), this translates to an uncertainty of $\pm\,6.6$ mas and $\pm\,13.5$ mas. A short clip showing X3 and X3c is appended.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.5\textwidth]{x3_pm.png}
\caption{Proper motion of the X3 system based on NACO H-, K-, and L-band data. We use Sgr~A* as the reference position. We derive an average proper motion of the X3 system of $244\,\pm\,27$ km/s, where the uncertainty is the standard deviation. Because the fitted central position of the bow shock deviates from the K- and H-band positions due to its extended dimensions, a certain offset is expected. However, the direction of the proper motion of all three components is consistent, which is reflected by the fit.}
\label{fig:proper_motion}
\end{figure}
For the dust envelope X3 and the blob X3a, we find an averaged proper motion of $244\,\pm\,27$ km/s, as shown in Fig. \ref{fig:proper_motion}. Individual values are $220\,\pm\,30$ km/s for X3a (K-band) and $229\,\pm\,30$ km/s for the bow shock X3 (L-band). For the H-band detection of X3a, we find a proper motion of $282\,\pm\,30$ km/s, which is slightly higher than the L- and K-band values. Given the shorter data baseline and the higher noise level of the H-band data, this is expected. However, the proper motion estimates of X3 and X3a in the H-, K-, and L-band are in reasonable agreement considering the uncertainties of the analysis. The continuum data used for the proper motion observed with NACO is shown in Appendix \ref{sec:appendix_multiwavelength_detection}, Fig. \ref{fig:x3_all_h}, Fig. \ref{fig:x3_all_k}, and Fig. \ref{fig:x3_all_l}.\newline
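Assuming that the averaged value is the mean of the three band measurements and that the quoted uncertainty is their standard deviation, the numbers above can be reproduced as follows (a minimal check):
\begin{verbatim}
import numpy as np

# Proper motions of the X3 system in the individual bands (km/s), as quoted above.
v_pm = np.array([220.0, 229.0, 282.0])   # K-band (X3a), L-band (X3), H-band (X3a)
print(f"mean  = {v_pm.mean():.0f} km/s")        # ~244 km/s
print(f"sigma = {v_pm.std(ddof=0):.0f} km/s")   # ~27 km/s
\end{verbatim}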
As mentioned above, the distance between X3a and X3b decreases between 2002 and 2012. In addition, we find a constant distance between X3a and X3c. We reflect this manifold behavior of the two components in Fig. \ref{fig:distance_blobs} where we assumed that X3b was created around 1995. However, any other assumed epoch for the creation of X3b before 2002 does not impact the overall trend of the decreasing distance between 2002 and 2011.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.5\textwidth]{distance_x3a_x3b_1.png}
\caption{Distance of X3b and X3c with respect to X3a. We trace X3b without confusion until 2012 in the NACO K-band data (see Fig. \ref{fig:x3_all_k}). {The data implies that the thermal blob X3b was created or ejected between 1995 and 2002. Between 2002 and 2007, the distance of X3b to X3a stayed constant at about 1000 AU.} After 2012, we do not find any significant K-band emission above the detection limit at the expected position of X3b. In addition, we trace X3c without confusion along with X3 in the L-band between 2002 and 2019. All data points show the related standard deviation of the fit (black and grey dashed line). The {purple dotted} line indicates the Hill radius where objects are expected to be significantly influenced by the gravitational field of X3a (see Sec. \ref{sec:discuss}).}
\label{fig:distance_blobs}
\end{figure}
From the data displayed in Fig. \ref{fig:counterjet} and Fig. \ref{fig:distance_blobs}, it is evident that the proper motion of X3c matches the estimated $244\,\pm\,27$ km/s for X3a and the bow shock X3. Consequently, we identify without confusion all three components at their expected positions in 2019 (Fig. \ref{fig:2019_keck}).
\begin{figure}[htbp]
\centering
\includegraphics[width=0.4\textwidth]{HKLband_KECK_2019_x3c.png}
\caption{Multi-wavelength detection of the X3 system in the H-, K-, and L-band with NIRC2 (KECK) in 2019. The upper panel shows a combination of the K-band (background image) and the L-band ({light-blue} contours). The lower panel shows the H-band emission, where the lightblue-colored contours again indicate the L-band emission of the bow shock X3. The green circle is placed at the identical position showing the H- and K-band emission of X3a. The blob X3c is indicated by the golden contours above S3-374, which is, together with S3-373 and S3-375, marked by a magenta colored circle (see Fig. \ref{fig:x3_system}). No emission of X3b above the noise level is detected at the expected position. We refer the interested reader to the evolution of X3a and X3b shown in Fig. \ref{fig:x3_all_k}, Appendix \ref{sec:appendix_multiwavelength_detection}. Please also consider the setup of the X3 system in 2002 as observed with NACO (Fig. \ref{fig:detection_x3a_x3b}).}
\label{fig:2019_keck}
\end{figure}
\subsection{Line emission of the X3-system}
Inspecting SINFONI data observed in 2014 for spectroscopic analysis, we find several emission lines, such as Br10, Br$\delta$, Br$\gamma$, HeI, Pa$\gamma$, and the prominent [FeIII] multiplet that are related to X3a (see Fig. \ref{fig:x3_spec_sinfo}). For the telluric-corrected\footnote{We use the standard star Hip094122 for the telluric correction.} SINFONI spectrum, we applied a 3-pixel aperture to the position of X3a. {To avoid the contamination of the spectrum, we do not apply a background subtraction (see Appendix \ref{sec:appendix_spectral_analysis} for a further discussion).}
\begin{figure}[htbp!]
\centering
\includegraphics[width=.5\textwidth]{spec_X3_reduced.png}
\caption{Spectrum (H+K band) of the X3-system extracted from the SINFONI data observed in 2014. Because of different line strengths, we subdivide the spectrum into two subplots (top and bottom). The emission shows tracers such as a double-peaked Br$\gamma$ line around 2.1661$\mu m$, and a P-Cygni profile close to the blueshifted HeI line at 2.0575$\mu m$. We find no { significant} NIR CO tracers which excludes { an evolved} late-type nature of X3a. We will discuss the missing CO absorption lines in detail in Sec. \ref{sec:discuss}.}
\label{fig:x3_spec_sinfo}
\end{figure}
The observed Br$\gamma$ line exhibits a double-peak profile with a blue- and a red-shifted velocity of -152 km/s and +55 km/s, respectively. The isolation of the Doppler-shifted Br$\gamma$ lines and the subtraction of the underlying continuum reveal a compact gas emission at the position of X3a which is shown in Fig. \ref{fig:linemaps_sinfo_sinfo}. By selecting the individual Br$\gamma$ peaks, we create two related line maps that show the distribution of the ionized gas. We append a short movie that shows the offset of the line that we interpret as {photoionized outflows originating in a gaseous disk close to X3a.}
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.5\textwidth]{linemap_sinfo_sinfo_with_bs.png}
\caption{Combined continuum and Br$\gamma$ line emission observed with SINFONI. The left plot shows the Br$\gamma$ emission in combination with the continuum (indicated by the surrounding gas and stellar emission). The right image shows a 3 pixel smoothed isolated Br$\gamma$ line where we subtracted the continuum. For the isolated Br$\gamma$ line on the right, we integrate over the full width of the double peaked emission shown in Fig. \ref{fig:x3_spec_sinfo}. {For the blue contour lines, we adapt the normalized emission of the shown isolated Br$\gamma$-line corresponding to 50$\%$, 60$\%$, 70$\%$, 80$\%$, and 90$\%$ of the peak intensity.}
In the upper right, we show a position-position-velocity diagram at the position of the double peak Br$\gamma$ line. The velocity range is about $\Delta v\,\simeq \,200$km/s. Additional material shows the velocity gradient and the related offset of the blue- and red-shifted Br$\gamma$ line. A short clip showing the double peak Br$\gamma$ line is appended.}
\label{fig:linemaps_sinfo_sinfo}
\end{figure}
{ Furthermore, we construct} a position-position-velocity (PPV) map based on the Doppler-shifted Br$\gamma$ emission and include it in Fig. \ref{fig:linemaps_sinfo_sinfo}. This PPV map shows a continuous velocity gradient with $\Delta v\,\simeq \,200$km/s. Analyzing the SINFONI spectrum, we find no tracers of CO band heads at 2.29$\mu m$, 2.32$\mu m$, 2.35$\mu m$, and 2.38$\mu m$, implying a young age of X3a. The dimensions of the Br$\gamma$-line distribution can be treated as an upper limit due to uncertainties imposed by the variable background and the marginally resolved emission.\newline
Furthermore, we find a P-Cygni profile for the near-infrared HeI line observed with a blue-shifted velocity of about -480 km/s (Fig. \ref{fig:pcygni_sinfo}), which indicates a stellar outflow of $\sim 400$ km/s when corrected for the LOS velocity of the source, which is $\sim -50$ km/s as inferred from the average of the red and blue peaks of the Br$\gamma$ line. Interestingly, the blue-shifted Br$\delta$ velocity of -478 km/s matches the peak velocity of the P-Cygni profile, implying a correlation and a potential origin of the recombination line in the outflow. A detailed investigation of a possible connection between Br$\delta$ and the P-Cygni profile is beyond the scope of this work and should be analyzed separately. We note that \cite{Mizumoto2018} find [FeII] emission lines related to the investigated P-Cygni absorption profiles. Based on that work, it is plausible that the emission and absorption features in the spectrum are related.
\begin{figure}[htbp!]
\centering
\includegraphics[width=.4\textwidth]{pcygni_sinfo.png}
\caption{P-Cygni profile of X3a observed with SINFONI. The infrared HeI (transition $2p^1 P^0-2s^1S$) line with a rest wavelength of $2.0586\,\mu m$ is marked with a green dashed line. The spectrum of X3a exhibits a blue-shifted HeI line with a related line of sight velocity of -173 km/s. Furthermore, a P-Cygni profile at $2.0554\,\mu m$ with -479 km/s is clearly traceable.}
\label{fig:pcygni_sinfo}
\end{figure}
{Table \ref{tab:emissionlines} lists all detected Doppler-shifted NIR emission lines as observed with SINFONI. In addition to the H+K band, we investigate the observation campaign that covers the region around X3, which was carried out with VISIR in the N- and Q-band}.
\begin{table*}[hbt!]
\centering
\begin{tabular}{|c|ccc|}
\hline
\hline
Spectral line ($@$rest wavelength [$\mu$m]) & transition & central wavelength [$\mu$m] & Velocity [km/s] \\
\hline
$\rm [FeII]@$1.6440 $\mu$m & $a^4D_{7/2} - a^4F_{9/2}$ & 1.6445 $\pm$ 0.0002 & 91 \\
Br10 $@$1.7366 $\mu$m & n=10-4 & 1.7360 $\pm$ 0.0002 & -104 \\
Br$\delta$ $@$1.9450 $\mu$m & n=8-4 & 1.9419 $\pm$ 0.0002 & -478 \\
HeI $@$2.0586 $\mu$m & $2p^1 P^0-2s^1S$ & 2.0554 $\pm$ 0.0002 & -479 \\
HeI $@$2.0586 $\mu$m & $2p^1 P^0-2s^1S$ & 2.0575 $\pm$ 0.0002 & -173 \\
H$_2$ $@$2.1218 $\mu$m & v=1-0 S(1) & 2.1232 $\pm$ 0.001 & 198 \\
Br$\gamma$ $@$2.1661 $\mu$m & n=7-4 & 2.1650 $\pm$ 0.0002 & -152 \\
Br$\gamma$ $@$2.1661 $\mu$m & n=7-4 & 2.1665 $\pm$ 0.0002 & 55 \\
$\rm [FeIII]@$2.1451 $\mu$m & $^3G_3\,-\,^3H_4$ & 2.1447 $\pm$ 0.0002 & -56 \\
$\rm [FeIII]@$2.2178 $\mu$m & $^3G_5\,-\,^3H_6$ & 2.2175 $\pm$ 0.0002 & -41 \\
$\rm [FeIII]@$2.2420 $\mu$m & $^3G_4\,-\,^3H_4$ & 2.2414 $\pm$ 0.0002 & -80 \\
$\rm [FeIII]@$2.3479 $\mu$m & $^3G_5\,-\,^3H_5$ & 2.3475 $\pm$ 0.0002 & -51 \\
\hline
\end{tabular}
\caption{Emission lines extracted from the SINFONI (NIR) data cube of 2014. We list the rest wavelength of the related emission line. Furthermore, we indicate the transition of the investigated species. The uncertainty of the derived Doppler-shifted velocity is about $\pm\,25$ km/s.}
\label{tab:emissionlines}
\end{table*}
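The velocities in Table \ref{tab:emissionlines} follow from the non-relativistic Doppler relation $v\,=\,c\,(\lambda_{\rm obs}-\lambda_{\rm rest})/\lambda_{\rm rest}$; a minimal check for the double-peaked Br$\gamma$ line (wavelengths taken from the table):
\begin{verbatim}
C_KMS = 299792.458  # speed of light in km/s

def doppler_velocity(lam_obs, lam_rest):
    """Non-relativistic Doppler velocity in km/s."""
    return C_KMS * (lam_obs - lam_rest) / lam_rest

lam_rest_brg = 2.1661  # micron, Br-gamma rest wavelength
for lam_obs in (2.1650, 2.1665):
    print(f"{lam_obs} micron -> {doppler_velocity(lam_obs, lam_rest_brg):+.0f} km/s")
# -> about -152 km/s and +55 km/s, matching the values in the table
\end{verbatim}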
Based on these MIR observations, we trace NeII, PAH1, ArIII-, SIVr1-, SIVr2-, PAH2-, Q2-, and Q3-lines which are associated with YSOs \citep{Woitke2018}. In Fig. \ref{fig:sivr2_visir}, we present a SIVr2 continuum map with the indicated position of X3. All other lines are shown in Appendix \ref{sec:appendix_multiwavelength_detection}, Fig. \ref{fig:x3_all_visir}.
\begin{figure*}[htbp!]
\centering
\includegraphics[width=0.8\textwidth]{mir_comparison.png}
\caption{Comparison of the M-band continuum with the SIVr2 line emission of X3. The left image was observed with NACO in 2012, the right image shows observations carried out with VISIR in 2004. The golden colored arrow indicates the position of X3; the M-band observations exhibit lime colored contour lines of the bow-shock source at 35$\%$ of the peak emission (see Table \ref{tab:mag_flux}). The length of the bow shock observed at 4.5$\mu m$ is about 0.8 arcsec measured from tip to tail, while the FWHM of the SIVr2 line emission is about 0.3 arcsec. Here, North is up, East is to the left.}
\label{fig:sivr2_visir}
\end{figure*}
Although the spatial pixel scale of the VISIR data is coarser ($\sim\,0.1$ arcsec) than that of the NACO data ($\sim\,0.027$ arcsec), the elongation of the bow shock X3 should be detectable. We measure a rough size of about 0.8 arcsec in the M-band for the dusty emission associated with X3 (see Fig. \ref{fig:sivr2_visir}), whereas the SIVr2 line exhibits a FWHM of 0.3 arcsec. Hence, we would expect an emission almost three times larger for the lines observed with VISIR if the MIR dust emission matched the distribution of the SIVr2 line. In contrast, the observations show that the MIR emission lines arise from a very dense and compact region which does not exhibit any elongation (please see also Appendix \ref{sec:appendix_multiwavelength_detection}, Fig. \ref{fig:x3_all_visir}).\newline
To cover longer wavelength regimes compared to the IR observations, we display the results of some of the ALMA observation campaigns targeting the vicinity of Sgr~A*. The presented radio data is also analyzed in \cite{Tsuboi2017}, \cite{tsuboi2019}, \cite{Tsuboi2020a}, and \cite{Tsuboi2020b} where the authors investigate, among other things, the IRS 13 cluster in detail.
In Fig.~\ref{fig:x3_disk}, we display the radio/submm emission at the position of X3a. In the same figure, we incorporate the contour lines of the stellar emission X3a (lime-colored) and the dust envelope (light blue-colored). The CO emission originates at the stellar position, { whereas} the H30$\alpha$ line is distributed around X3a {in a ring-like structure as it has been recently observed for the isolated massive YSO G28.20-0.05 by \cite{Law2022}}. Compared to the downstream bow-shock section, the H30$\alpha$ emission at the tip of the bow shock is enhanced by $\sim $20$\%$. This increased flux density at the position of the bow shock front implies heating caused by the interaction of X3 with the ambient medium.
\begin{figure*}[htbp!]
\centering
\includegraphics[width=1.\textwidth]{X3_radio_disk.png}
\caption{Gas emission observed with ALMA overlaid with K- and L-band contour lines of X3a and X3. With lime-colored contours, we indicate the K-band peak emission of 2.8 mJy at 20$\%$ and 40$\%$ for X3a as observed in 2015 (see Table \ref{tab:mag_flux}). {With light-blue contours}, the L-band dust emission of X3 of the same epoch is shown. The background image was observed with ALMA and shows H30$\alpha$ and CO radio/submm continuum emission at the position of X3a. {While the H30$\alpha$ line seems to be arranged in a ring-like structure around X3a, following the morphology of the recently observed massive protostar G28.20-0.05, the CO line exhibits a compact density distribution. Please note that the K-band contours do not imply the true size of the stellar source X3a. In the right figure, we show the proper motion vector with a velocity of about $244\pm 27$km/s (magenta colored, Fig. \ref{fig:proper_motion}).}}
\label{fig:x3_disk}
\end{figure*}
Taking into account the ALMA observations presented in Fig. \ref{fig:x3_disk} and considering the SINFONI/VISIR line observations, it is evident that the ionized gas species seem to originate at different components inside the X3 system.
\subsection{Photometric analysis}
{ In this subsection}, we present the results of the photometric analysis. {Since \cite{Gautam2019} find that more than 50$\%$ of their star sample (about 570 stars) located in the NSC are variable, we are limited in our selection of a proper reference star to estimate the magnitudes and flux densities of the X3 system. Hence, variability, but also fluctuating extinction values \citep[][]{Peissker2020c}, introduces uncertainties into the analysis that need to be addressed. Because the bright and close-by members E1, E2, and E3 of the IRS13 cluster show varying extinction in all investigated bands \citep[see Fig. \ref{fig:x3_system} and][]{muzic2008, Fritz2011}, we address these fluctuations using an asymmetric uncertainty range for the estimated magnitudes and flux densities (Table \ref{tab:mag_flux}). Furthermore, we used} the most frequently observed object in the NSC, the S-cluster star S2, as a calibration source for the K-band whenever possible. Regarding the H-band, we note that the flux analysis of S2 in the literature shows inconsistencies. Therefore, we fit a blackbody SED to the emission of S2 with known individual numerical results (Table \ref{tab:mag_fux_reference_values}) for the K-\citep{Sabha2012}, L- and M-band \citep[both values are taken from][]{Viehmann2006}. For the SED model presented in Fig. \ref{fig:s2_sed} (Appendix \ref{sec:appendix_s2_sed}), we adopt common B2V stellar parameters of $\rm R\,=\,8.5\,R_{\odot}$ and $\rm T_{eff}\,=\,22500$ K \citep[see also][]{Hanson1996}. From the fit, we derive a flux density of $\rm 32.0\,\pm\,0.2\,mJy$ with the related magnitude of 16.0 mag \citep{Schoedel2010, Peissker2020b} for the S-cluster star S2 in the H-band.
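For illustration, the extrapolated H-band value can be cross-checked with a plain blackbody evaluated with the quoted B2V parameters at a distance of 8 kpc (a minimal sketch using \texttt{astropy}; extinction and the details of the SED fit are neglected, so only approximate agreement with the quoted 32 mJy is expected):
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.modeling.models import BlackBody

# B2V parameters adopted for S2 and the GC distance used in this work.
T_eff  = 22500 * u.K
R_star = 8.5 * u.Rsun
d      = 8.0 * u.kpc

bb = BlackBody(temperature=T_eff)
nu = (1.65 * u.micron).to(u.Hz, equivalencies=u.spectral())   # H-band

# Dereddened flux density: F_nu = pi * B_nu * (R/d)^2
f_nu = (np.pi * u.sr * bb(nu) * (R_star / d).decompose()**2).to(u.mJy)
print(f_nu)   # a few tens of mJy, close to the quoted H-band value of S2
\end{verbatim}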
For mid-infrared analysis, we include the PAH1 and NeII emission of the close-by reference source IRS2L \citep[][]{Bhat2022} and the relation
\begin{equation}
\rm mag_{\lambda}\,=\,-2.5\times log(f_{\lambda}/f_0)
\label{eq:apparent_magnitude}
\end{equation}
where $\rm mag_{\lambda}$ is the magnitude in the related band and $f_0$ the zero flux adopted from \cite{Tokunaga2007}\footnote{\citet{Tokunaga2007} use Vega as a zero magnitude star.}.
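As an example, the PAH1 and NeII reference magnitudes of IRS2L listed in Table \ref{tab:mag_fux_reference_values} follow directly from Eq. \ref{eq:apparent_magnitude} and the quoted zero fluxes (a minimal check):
\begin{verbatim}
import numpy as np

def apparent_mag(flux, zero_flux):
    """Apparent magnitude: mag = -2.5 log10(f / f0)."""
    return -2.5 * np.log10(flux / zero_flux)

# Dereddened IRS2L fluxes and zero fluxes (Jy) as quoted in the text and table.
print(apparent_mag(21.4, 50.0))   # PAH1: ~0.9 mag
print(apparent_mag(18.9, 28.6))   # NeII: ~0.4 mag
\end{verbatim}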
\begin{table}[hbt!]
\centering
\setlength{\tabcolsep}{0.9pt}
\begin{tabular}{|cccc|}
\hline
\hline
Band & Magnitude & Flux$_{\lambda}$ & Reference \\
& [mag] & [Jy] & \\
\hline
H-band & 16.0 & 0.032 & S2, see text \\
K-band & 14.1 & 0.0147 & S2, \cite{Sabha2012} \\
L-band & 6.4 & 2.98 & IRS2L, \cite{Viehmann2006} \\
M-band & 5.5 & 3.98 & IRS2L, \cite{Viehmann2006} \\
PAH1 & 0.9 & 21.4 & IRS2L, \cite{Bhat2022} \\
NeII & 0.4 & 18.9 & IRS2L, \cite{Bhat2022} \\
\hline
\end{tabular}
\caption{Dereddened reference values for S2 and IRS2L used in this work. The related references are listed \citep[see also][]{Viehmann2007}. For PAH1 and NeII, we use a zero flux of 50 Jy and 28.6 Jy, respectively \citep{Tokunaga2007}.}
\label{tab:mag_fux_reference_values}
\end{table}
With the magnitude and flux information, we photometrically analyze almost 20 years of observations of the GC using
\begin{equation}
\rm F_{\lambda}\,=\,F_{0}\,\times\,10^{(-0.4(mag_{2}-mag_{1}))}
\label{eq:flux}
\end{equation}
where mag$_2$ refers to the { investigated object} and mag$_1$ to the reference source. The resulting magnitudes and fluxes are listed in Table \ref{tab:mag_flux}. The H-, K-, L-, and M-band data were observed with NACO, the narrow-filter PAH1 and NeII data with VISIR.
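As an example of Eq. \ref{eq:flux}, the 2002 K-band entry of Table \ref{tab:mag_flux} follows from the S2 reference values (a minimal check that ignores the 0.1-0.2 mag calibration scatter):
\begin{verbatim}
def flux_from_mag(mag_obj, mag_ref, flux_ref):
    """Flux scaling: F = F_ref * 10**(-0.4 * (mag_obj - mag_ref))."""
    return flux_ref * 10 ** (-0.4 * (mag_obj - mag_ref))

# S2 K-band reference values: 14.1 mag, 14.7 mJy (see the reference table).
print(flux_from_mag(16.3, 14.1, 14.7))   # ~1.9 mJy, the 2002 K-band flux of X3
\end{verbatim}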
\begin{table*}[hbt!]
\centering
\setlength{\tabcolsep}{0.5pt}
\begin{tabular}{|c|cccccccccccc|}
\hline
Year & \multicolumn{2}{c}{H-band} & \multicolumn{2}{c}{K-band} & \multicolumn{2}{c}{L-band} & \multicolumn{2}{c}{M-band} & \multicolumn{2}{c}{PAH1} & \multicolumn{2}{c|}{NeII}\\
[YYYY] & [mag] & [mJy] & [mag] & [mJy] & [mag] & [Jy] & [mag] & [Jy] & [mag] & [Jy] & [mag] & [Jy] \\
\hline
2002 & $19.6^{+0.7}_{-0.3}$ & $1.1^{+0.4}_{-0.5}$ & $16.3^{+0.7}_{-0.3}$ & $1.9^{+0.6}_{-0.9}$ & $9.8^{+0.8}_{-0.3}$ & $0.13^{+0.03}_{-0.07}$ & - & - & - & - & - & - \\
2003 & - & - & $15.8^{+0.7}_{-0.3}$ & $3.0^{+1.0}_{-1.4}$ & $9.9^{+0.8}_{-0.3}$ & $0.11^{+0.04}_{-0.06}$ & - & - & - & - & - & - \\
2004 & $19.2^{+0.7}_{-0.3}$ & $1.6^{+0.6}_{-0.7}$ & $15.8^{+0.7}_{-0.3}$ & $3.0^{+1.0}_{-1.4}$ & $9.9^{+0.8}_{-0.3}$ & $0.11^{+0.04}_{-0.06}$ & - & - & $3.0^{+0.3}_{-0.3}$ & $3.1^{+0.9}_{-0.7}$ & $1.4^{+0.3}_{-0.3}$ & $7.5^{+2.4}_{-1.8}$ \\
2005 & - & - & $15.5^{+0.7}_{-0.3}$ & $3.0^{+1.3}_{-0.9}$ & $9.7^{+0.8}_{-0.3}$ & $0.14^{+0.04}_{-0.07}$ & - & - & - & - & - & - \\
2006 & - & - & - & - & $9.6^{+0.8}_{-0.3}$ & $0.15^{+0.05}_{-0.08}$ & - & - & - & - & - & - \\
2007 & - & - & $15.9^{+0.7}_{-0.3}$ & $2.8^{+0.9}_{-1.4}$ & $9.5^{+0.8}_{-0.3}$ & $0.17^{+0.05}_{-0.09}$ & - & - & - & - & - & - \\
2008 & - & - & $16.0^{+0.7}_{-0.3}$ & $2.5^{+0.8}_{-1.2}$ & $9.4^{+0.8}_{-0.3}$ & $0.18^{+0.06}_{-0.09}$ & - & - & - & - & - & - \\
2009 & - & - & $15.9^{+0.7}_{-0.3}$ & $2.8^{+0.9}_{-1.4}$ & $9.3^{+0.8}_{-0.3}$ & $0.20^{+0.07}_{-0.10}$ & - & - & - & - & - & - \\
2010 & - & - & $15.7^{+0.7}_{-0.3}$ & $3.3^{+1.1}_{-1.5}$ & $8.9^{+0.8}_{-0.3}$ & $0.29^{+0.10}_{-0.15}$ & - & - & $2.4^{+0.3}_{-0.3}$ & $5.3^{+1.7}_{-1.3}$ & $1.5^{+0.3}_{-0.3}$ & $6.8^{+2.2}_{-1.6}$ \\
2011 & - & - & $15.8^{+0.7}_{-0.3}$ & $3.0^{+1.0}_{-1.4}$ & $8.4^{+0.8}_{-0.3}$ & $0.47^{+0.15}_{-0.25}$ & - & - & - & - & - & - \\
2012 & $19.2^{+0.7}_{-0.3}$ & $1.6^{+0.6}_{-0.7}$ & $15.8^{+0.7}_{-0.3}$ & $3.0^{+1.0}_{-1.4}$ & $8.4^{+0.8}_{-0.3}$ & $0.47^{+0.15}_{-0.25}$ & $7.9^{+0.7}_{-0.3}$ & $0.43^{+0.14}_{-0.24}$ & - & - & - & - \\
2013 & - & - & - & - & $8.7^{+0.8}_{-0.3}$ & $0.35^{+0.12}_{-0.18}$ & - & - & - & - & - & - \\
2014 & - & - & - & - & - & - & - & - & - & - & - & - \\
2015 & - & - & $15.9^{+0.7}_{-0.3}$ & $2.8^{+0.9}_{-1.4}$ & $8.5^{+0.8}_{-0.3}$ & $0.43^{+0.13}_{-0.23}$ & - & - & - & - & - & - \\
2016 & - & - & - & - & $8.4^{+0.8}_{-0.3}$ & $0.47^{+0.15}_{-0.25}$ & - & - &$2.0^{+0.3}_{-0.3}$ & $7.7^{+2.3}_{-1.8}$ & $1.3^{+0.3}_{-0.3}$ & $8.2^{+1.8}_{-2.0}$ \\
2017 & - & - & - & - & $8.4^{+0.8}_{-0.3}$ & $0.47^{+0.15}_{-0.25}$ & - & - & - & - & - & - \\
2018 & - & - & $15.9^{+0.7}_{-0.3}$ & $2.8^{+0.9}_{-1.4}$ & $8.2^{+0.8}_{-0.3}$ & $0.56^{+0.18}_{-0.29}$ & - & - & $2.0^{+0.3}_{-0.3}$ & $7.7^{+2.3}_{-1.8}$ & $1.3^{+0.3}_{-0.3}$ & $8.2^{+1.8}_{-2.0}$ \\
2019 & $19.6^{+0.7}_{-0.3}$ & $1.1^{+0.4}_{-0.5}$ & $16.1^{+0.7}_{-0.3}$ & $2.3^{+0.7}_{-1.0}$ & - & - & - & - & - & - & - & - \\
2020 & - & - & $16.0^{+0.7}_{-0.3}$ & $2.5^{+0.8}_{-1.2}$ & - & - & - & - & - & - & - & - \\
\hline
Averaged & $19.4\,\pm\,0.2$ & \multicolumn{1}{|l|}{$1.4\,\pm\,0.3$} & $15.9\,\pm\,0.2$ & \multicolumn{1}{|l|}{$2.8\,\pm\,0.4$} & $9.0\,\pm\,0.6$ & \multicolumn{1}{|l|}{$0.29\,\pm\,0.15$} & $7.9\,\pm\,0.5$ & \multicolumn{1}{|l|}{$0.43\,\pm\,0.1$} & $2.3\,\pm\,0.4$ & \multicolumn{1}{|l|}{$5.9\,\pm\,1.9$} & $1.3\,\pm\,0.1$ & \multicolumn{1}{|l|}{$7.7\,\pm\,0.6$} \\
\hline
\end{tabular}
\caption{Results of the photometric multiwavelength analysis of X3. Please note that all flux densities are given in Jy except for the H- and K-band, which are given in mJy. Lower magnitude values correspond to brighter source emission.}
\label{tab:mag_flux}
\end{table*}
We adapt the listed averaged L-band magnitude of 9.0 mag from Table \ref{tab:mag_flux} to estimate the L-band magnitude of X3c.
With Eq. \ref{eq:apparent_magnitude}, we calculate an averaged L-band magnitude of $10.6\,\pm\,0.6$ for X3c. The uncertainties are adopted from the averaged L-band magnitude of X3 and represent the standard deviation of the individual measurements per epoch.
\subsection{Spectral Energy Distribution}
\label{sec:results_sed}
Due to the robust H- and K-band detection of X3a between 1995 and 2020 (see Fig. \ref{fig:detection_x3a_x3b}, Fig. \ref{fig:x3_all_k}, and Fig. \ref{fig:osiris_2020}), the rich line emission in various bands, and the strong outflow {that might emerge from the star or a protoplanetary disk} (Fig. \ref{fig:pcygni_sinfo}), the question about the stellar nature of the object arises. At this point, we can exclude a late-type stellar nature due to the missing infrared CO band heads at 2.29$\mu m$, 2.32$\mu m$, 2.35$\mu m$, and 2.38$\mu m$. The absence of these absorption features clearly separates X3a from late-type stars \citep[][]{Buchholz2009}. We address the question about the nature of X3a with the support of the 3D Monte Carlo radiative transfer code HYPERION to model the composite Spectral Energy Distribution (SED) of the X3 system \citep{Robitaille2011}; { see also \citet{Zajacek2017} for a similar set-up}. For HYPERION, we assume that X3a is a YSO, motivated by the above arguments. The 3D dust continuum radiative transfer code calculates individual spectra and allows for a variety of input parameters. From the observed magnitudes and flux densities of X3a, we arrange the input spectrum for HYPERION as the flux density (in [Jy]) as a function of wavelength (in [$\mu m$]). See Table \ref{tab:mag_flux} for the numerical values of the related input spectrum.
This spectrum is then combined with the input values of the model listed in Table~\ref{tab:sed_values}. We choose 10$^6$ photons and 10$^4$ ray-tracing sources, resulting in an average computation time of 12-24 hours per simulation run. In total, we used more than 100 different setups to derive suitable best-fit parameters for the emission of the X3-system. For example, we checked different stellar mass assumptions between 0.5-25 M$_{\odot}$ in steps of 1 M$_{\odot}$.
\begin{table}[hbt!]
\centering
\begin{tabular}{|cc|}
\hline
\hline
Properties & Setting\\
\hline
Radius [R$_{\odot}$] & 10 \\
Luminosity [L$_{\odot}$] & 24$\times\,10^3$ \\
Mass [M$_{\odot}$] & 15 \\
Disk mass [M$_{\odot}$] & 0.01 \\
Disk radius (min) [R$_{\odot}$] & 50 \\
Disk radius (max) [AU] & 700 \\
Disk scale height [R$_{\odot}$] & 0.01 \\
Accretion [M$_{\odot}$/yr] & $10^{-6}$ \\
Ulrich env. (min) [R$_{\odot}$] & 50 \\
Ulrich env. (max) [AU] & 1000 \\
Number of Photons & 10$^6$ \\
Ray-tracing sources & 10$^4$ \\
Number of Iterations & 10 \\
\hline
\hline
\end{tabular}
\caption{Final input parameters for HYPERION. In addition to these settings, we define an input spectrum related to the derived fluxes listed in Table \ref{tab:mag_flux}.}
\label{tab:sed_values}
\end{table}
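For orientation, the parameters of Table \ref{tab:sed_values} translate into a HYPERION run roughly as sketched below. This is only a schematic outline: the attribute names follow the \texttt{AnalyticalYSOModel} interface of HYPERION as documented at \url{www.hyperion-rt.org}, and the dust file name, centrifugal radius, grid resolution, and wavelength grid are placeholders rather than the exact settings used for the models in this work.
\begin{verbatim}
from hyperion.model import AnalyticalYSOModel
from hyperion.util.constants import rsun, lsun, msun, au, yr

m = AnalyticalYSOModel()

# Central source (values from the table of input parameters)
m.star.radius = 10. * rsun
m.star.luminosity = 24.e3 * lsun
m.star.mass = 15. * msun

# Flared disk
disk = m.add_flared_disk()
disk.mass = 0.01 * msun
disk.rmin = 50. * rsun
disk.rmax = 700. * au
disk.dust = 'd03_3.1_6.0_A.hdf5'       # placeholder: Draine (2003) R_V = 3.1 dust

# Rotationally flattened infalling (Ulrich) envelope
env = m.add_ulrich_envelope()
env.mdot = 1.e-6 * msun / yr
env.rmin = 50. * rsun
env.rmax = 1000. * au
env.rc = 700. * au                     # placeholder centrifugal radius
env.dust = 'd03_3.1_6.0_A.hdf5'

# Grid, photon numbers, and ray tracing (photon numbers as quoted in the text)
m.set_spherical_polar_grid_auto(200, 100, 1)   # placeholder grid resolution
m.set_raytracing(True)
m.set_n_initial_iterations(10)
m.set_n_photons(initial=1e6, imaging=1e6,
                raytracing_sources=1e4, raytracing_dust=1e4)

# SED for the two inclination angles discussed in the text
sed = m.add_peeled_images(sed=True, image=False)
sed.set_viewing_angles([77.5, 87.5], [0., 0.])
sed.set_wavelength_range(150, 1., 2000.)       # placeholder wavelength grid (micron)

m.write('x3_yso.rtin')
m.run('x3_yso.rtout', mpi=False)
\end{verbatim}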
The results of the model are shown in Fig. \ref{fig:x3_sed}. Among the best-fit parameters are a stellar mass of $15M_{\odot}$ with a related effective temperature of $24\times 10^3$ K for X3a. Considering the values listed in Table \ref{tab:mag_flux}, we transfer the uncertainties of the estimated magnitudes and flux densities of X3 to the comparison plot. Due to the logarithmic scaling of Fig. \ref{fig:x3_sed}, these uncertainties are about the size of the used symbols. Since the uncertainties do not entirely reflect the data distribution, we constrain the mass of X3a to $15^{+10}_{-5} M_{\odot}$.
\begin{figure}[htbp!]
\centering
\includegraphics[width=.45\textwidth]{SED_X3_final.png}
\caption{Spectral energy distribution of the X3 system in comparison with early-type stars. We show the flux density in [Jy] as a function of wavelength in [$\mu m$] with a logarithmic axis scaling. The inclination angles of 77.5$^{\circ}$ and 87.5$^{\circ}$ with a related stellar mass of 15M$_{\odot}$ are marked with green and red, respectively. The (light)blue lines indicate the lower stellar mass limit of $10 M_{\odot}$. In contrast, magenta/pink shows the upper mass limit of $25 M_{\odot}$ resulting in a final stellar mass estimate of $15^{+10}_{-5} M_{\odot}$. The estimated flux densities are represented by orange dots whereas the size correlates with the uncertainties. The grey and black dashed lines represent the blackbody SED fits of S2 and IRS16NE, respectively \citep{Krabbe1995, Habibi2017}.}
\label{fig:x3_sed}
\end{figure}
Consistent with the previous study by \cite{muzic2010}, we independently constrain an inclination angle between 77.5$^{\circ}$ and 87.5$^{\circ}$ for the X3 system. When comparing the mass and luminosity of the X3 system with the GAIA YSO survey, a source with comparable characteristics is identified in MWC 297 \citep{Wichittanakom2020}. We note that the YSO MWC 297 is classified as a Herbig Ae/Be star, a classification that we also adopt for the X3 system.
{ Using} the derived mass and the effective temperature of the star, we use the PARSEC code { output files} \citep{2012MNRAS.427..127B} to construct the Hertzsprung-Russell diagram (Fig. \ref{fig:hr_x3}). Taking into account evolutionary paths of pre-main-sequence stars of different masses, we estimate the age of X3 to be $\sim 0.04$ Myr.
\begin{figure}[htbp!]
\centering
\includegraphics[width=.45\textwidth]{hr_x3.pdf}
\caption{Position of X3 in the Hertzsprung-Russell diagram. The X3 position (orange asterisk) in the Hertzsprung-Russell diagram is compared with the pre-main-sequence evolutionary tracks according to the PARSEC code \cite{2012MNRAS.427..127B}, calculated for the near-Solar metallicity of $Z=0.01$. Evolutionary paths of stars in the mass range 1-10\,$M_{\odot}$ are shown (with 1\,$M_{\odot}$ increments), as well as those of $0.4$, $12$, $14$, and $16\,M_{\odot}$ stars. In addition, we plot the $10^4$, $10^5$, and $10^6\,{\rm yrs}$ isochrones, the { zero-age} main sequence, and the lines standing for $1\,R_{\odot}$, $5\,R_{\odot}$, and $10\,R_{\odot}$ (blue dotted lines). The inferred temperature and luminosity of X3 are consistent with a pre-main-sequence massive star ($\sim 14-15\,M_{\odot}$ with an age of $\sim 40\,000$ years). For comparison, we also plot the positions of the X7 and G2 objects according to the broad-band SED fits of \cite{peissker2021} and \cite{peissker2021c}, respectively. In contrast to X3, they follow the evolutionary track of a low-mass star of $\sim 0.4\,M_{\odot}$ with an age of $\gtrsim 10^6$ years. The denoted OB stars S2 and IRS16NE are also clearly offset from X3.}
\label{fig:hr_x3}
\end{figure}
The projected stagnation radius of $R_{\rm BS}\sim 0.4''\sim 3300\,{\rm AU}$, which is close to the deprojected value due to the high inclination, allows us to estimate the mass-loss rate of X3. For this, we use the ram-pressure equilibrium relation \citep[e.g.,][]{Wilkin1996}
\begin{equation}
R_{\rm BS}=(\dot{m}_{\rm w}v_{\rm w}/[4\pi \mu m_{\rm H} n_{\rm a} v_{\rm rel}^2])^{1/2}
\end{equation}
where the ambient number density $n_{\rm a}\sim 26\,{\rm cm^{-3}}$ is the value close to the Bondi radius \citep{2003ApJ...591..891B}. The terminal stellar wind velocity $v_{\rm w}\sim 400\,{\rm km\,s^{-1}}$ is adopted from the HeI P Cygni profile (Fig. \ref{fig:pcygni_sinfo}), and the relative velocity with respect to the ambient medium satisfies $v_{\rm rel}\gg v_{\star}$, with $v_{\rm rel}\gtrsim 1000$ km/s, due to the nearly perpendicular orientation of the bow shock with respect to the proper motion vector (Fig. \ref{fig:x3_disk}). From the above numerical approach we obtain $\dot{m}_{\rm w}\sim 2.65\times 10^{-6}-1.06\times 10^{-5}\,{\rm M_{\odot}\,yr^{-1}}$. This result is at the higher end of typical mass-loss rates for Herbig Ae/Be stars \citep{1995A&A...302..169N} and is comparable to the input accretion rate for the HYPERION SED calculation. We note that the estimate is strongly dependent on the exact value of the relative velocity and on the bow-shock stagnation radius.
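The quoted range of $\dot{m}_{\rm w}$ can be reproduced by inverting the ram-pressure relation; a minimal sketch with \texttt{astropy} units, where the mean molecular weight $\mu\simeq 0.5$ and the bracket $v_{\rm rel}\simeq 1000-2000$ km/s are our illustrative assumptions chosen to bracket the quoted values:
\begin{verbatim}
import numpy as np
import astropy.units as u
import astropy.constants as const

def mdot_wind(R_BS, v_w, n_a, v_rel, mu=0.5):
    """Invert R_BS = sqrt(mdot_w v_w / (4 pi mu m_H n_a v_rel^2)) for mdot_w."""
    mdot = 4 * np.pi * mu * const.m_p * n_a * v_rel**2 * R_BS**2 / v_w
    return mdot.to(u.Msun / u.yr)

R_BS = 3300 * u.au         # bow-shock stagnation radius
v_w  = 400 * u.km / u.s    # terminal wind speed from the HeI P Cygni profile
n_a  = 26 * u.cm**-3       # ambient number density close to the Bondi radius

for v_rel in [1000, 2000] * u.km / u.s:   # assumed bracket of the relative velocity
    print(v_rel, "->", mdot_wind(R_BS, v_w, n_a, v_rel))
# -> roughly 2.6e-6 and 1.1e-5 Msun/yr, consistent with the quoted range
\end{verbatim}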
Furthermore, the stellar orbital velocity of X3a can be estimated simply as $v_{\star}\,\simeq \,(v^2_{\rm LOS}+v^2_{\rm PROP})^{1/2}$ where $v_{\rm LOS}$ defines the LOS velocity and $v_{\rm PROP}$ is the velocity related to the proper motion. With $v_{\rm PROP}\,=\,244\pm 27$km/s from the fit shown in Fig. \ref{fig:proper_motion} and the Br$\gamma$-based line center LOS velocity of $v_{\rm LOS}\,\sim (v_{\rm blue}+v_{\rm red})/2\,\sim -48.5 $km/s, we obtain the estimate of the stellar orbital velocity of X3a, $v_{\star}\,=\,249$km/s. The mean orbital distance then is $d_{\rm X3}\simeq GM_{\rm SgrA*}/v_{\star}^2\simeq 0.28\,{\rm pc}$, which is slightly larger than the projected distance of $\sim 0.1\,{\rm pc}$ of the X3 system from Sgr A*. This is expected due to the inclined stellar orbit with a non-zero LOS velocity traced in the emission lines.
For the modeled SED, we assumed the presence of a massive {protoplanetary disk with a related radius of $\lesssim 700$ AU around} the star. If the double-peak Br$\gamma$ line originates in the {inner disk which gets spatially extended by outflows}, the characteristic Keplerian velocity is estimated as $v_{\rm Br\gamma, K} \gtrsim 103.5$km/s, which serves as a lower limit for the Keplerian velocity in the disk due to an uncertain inclination \citep{Kraus2012}. The mean radius for the {origin of the} Br$\gamma$ emitting material is $r_{\rm disk,Br\gamma}\lesssim Gm_{\rm X3a}/v_{\rm Br\gamma, K}^2\sim 1.2\,{\rm AU}$. This is in agreement with the orbiting bound ionized material around a YSO that could be either inflowing or outflowing, that is, an accretion or a decretion disk \citep{2014A&A...561A...2A}.
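Both characteristic radii quoted above follow from the same relation $r\,\simeq\,GM/v^{2}$; a minimal consistency check (the Sgr~A* mass, the X3a mass, and the velocities are the values used in the text):
\begin{verbatim}
import numpy as np
import astropy.units as u
import astropy.constants as const

def grav_radius(mass, velocity):
    """Characteristic radius r = G M / v**2 of bound (Keplerian) motion."""
    return const.G * mass / velocity**2

# Stellar orbital velocity of X3a around Sgr A* and the mean orbital distance.
v_los, v_pm = -48.5 * u.km / u.s, 244. * u.km / u.s
v_star = np.sqrt(v_los**2 + v_pm**2)
print(v_star.to(u.km / u.s))                          # ~249 km/s
print(grav_radius(4.0e6 * u.Msun, v_star).to(u.pc))   # ~0.28 pc

# Keplerian radius of the Br-gamma emitting material around the ~15 Msun star X3a.
v_kep = 103.5 * u.km / u.s
print(grav_radius(15. * u.Msun, v_kep).to(u.au))      # ~1.2 au
\end{verbatim}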
This provides a consistency test of the overall kinematics of the system (stellar orbital motion around Sgr~A* as well as the bound gas motion around the X3a star) which is in agreement with the proposed computational model (Fig. \ref{fig:x3_sed}). {We note that the observed Br$\gamma$ line is spatially extended as it is expected for Herbig Ae/Be stars \citep{Tatulli2007, Davis2011}. We will elaborate this relation in the following section.}
\section{Discussion} \label{sec:discuss}
In the following, we will discuss the various findings of the { presented} analysis. First, we will summarize the different components related to the X3 system. Second, we will motivate the stellar nature of the bow shock source. Third, we will propose a speculative scenario to explain the formation process of the young star.
\subsection{The X3 system}
Analyzing NIR data observed with the VLT and KECK, we trace X3a along with X3 between 2002 and 2020, which is reflected by a matching proper motion of 244$\pm$27 km/s. In the case of two non-related sources, we would expect a significant difference in the proper motion but also in the trajectory. Following the statistical analysis of \cite{Sabha2012} for a random fly-by event, the 3d motion of X3 and X3a is a six-parameter problem ($x$, $y$, $z$, $v_{{\rm pm}_x}$, $v_{{\rm pm}_y}$, $v_{\rm los}$), where it is unlikely that at least 4 parameters are arranged in a way that $x_{X3}=x_{X3a}$ and $y_{X3}=y_{X3a}$. However, several studies \citep[see, for example,][]{Gallego-Cano2018} investigate the stellar density as a function of distance { from} Sgr~A*. Hence, we expect by definition that at larger distances the chance for a random encounter is { significantly} decreased. Considering the KLF for IRS 13E taken from \cite{Paumard2006}, we find at the position of the X3 system a total of {$\sim $8 sources/(1 arcsec)$^2$}. Taking into account the spatial pixel scale of NACO, we { obtain a mean value of 0.14 sources per resolution element (i.e., pixel)}. For the {almost consecutive} NACO observations between 2002 and 2018, we find a probability for a random source at a random position of about {2 $\%$. The authors of \cite{Sabha2012} and \cite{Eckart2013} estimate a similar value for the more crowded S-cluster. Considering that a random fly-by encounter of two stars dissolves after three years, it becomes highly unlikely that such an event can explain} the co-movement of the H-, K-, and L-band detection of X3 and X3a.\newline
Regarding X3b, it is reasonable to assume that the thermal blob was created (or confused) between 1995 and 2002 and disappeared around 2012. While the ejection of a blob from the central stellar source X3a is rather speculative, the formation of dense material within the bow shock due to instabilities is supported by simulations of bubbles around the bow-shock tip \citep{Gardner2016}. {By their nature, unstable bubbles are expected to exhibit magnitude and flux variations. Investigating the $K$-band magnitude of X3b in 2002 revealed a magnitude of about 16 mag, which is higher than that of X3a (see Table \ref{tab:mag_flux}). Since no emission from X3b above the noise is detected in 2012, we compare this magnitude to the NACO detection limit of 18.5 mag and find a decrease of $\Delta_{\rm mag}\,=\,18.5-16.0\,=\,2.5$ mag.}
Therefore, the analysis implies that X3b suffered a decrease in thermal energy between 2002 and 2012 due to radiative or adiabatic cooling, which justifies the classification as a thermal blob. Furthermore, the proper motion of X3a observed in the $H$- and $K$-band is in reasonable agreement with X3, which suggests that both sources belong to the same system. This argumentation is also valid for X3c, although the nature of the compact blob is challenging to uncover. The X3c component could be related to the collimated bow-shock downstream flow, such as within the Bondi–Hoyle–Lyttleton (BHL) accretion flow model \cite{Matsuda2015}. This mechanism describes the shock formation downstream at a specific stagnation point. Taking into account the thermal pressure of the ambient hot plasma as discussed in \cite{Christie2016}, it is also plausible that the surrounding medium could create enough pressure to close the bow-shock shell, and X3c would be the shock formed where the streams meet. As a consequence, the blob is expected to be variable and the material could be accreted over time. Although we did not find indications of an accretion process between 2002 and 2018 (see Fig. \ref{fig:distance_blobs}), this could be the subject of future observations. However, the idea of high thermal pressure to explain the nature of X3c seems { appealing} since the cigar shape of X7 \citep[][]{peissker2021} could also be created by a dominant { thermal pressure of the} ambient medium.
\subsection{The nature of X3a}
Examining the different Doppler-shifted velocities observed for X3 with SINFONI implies that various components of the system are responsible for the emission. For example, the observed Br$\gamma$ {emission (Fig. \ref{fig:linemaps_sinfo_sinfo}) exhibits} a double-peaked {line with a velocity gradient along the source, implying the presence of a disk or strong outflows \citep{Davis2011}. We will discuss this particular feature in Sec. \ref{Sec:discussion_brgamma}. Furthermore, the} HeI line might be connected to outflows from a possible jet or strong stellar winds (see the P-Cygni profile shown in Fig. \ref{fig:pcygni_sinfo}). Helium pumping can be considered the responsible mechanism for the forbidden iron multiplet \citep{peissker2021}. We note the detection of a Doppler-shifted H$_2$ line (Fig. \ref{fig:x3_spec_sinfo_h2}), {which might serve as an additional tracer for photoionized outflows originating close to the massive protostar X3a \citep{Kumar2002, Tanaka2016}. The detection of the NIR H$_2$ line is accompanied by the radio/submm CO emission presented in Fig. \ref{fig:x3_disk}. Both lines are common indicators of the presence of a protoplanetary disk and of outflows of Herbig Ae/Be stars \citep{Thi2001, Davis2011}. Although the spatial resolution of SINFONI forbids a detailed determination of the exact origin of the H$_2$ emission line, the arrangement of the ionized CO and H30$\alpha$ lines observed with ALMA and displayed in Fig. \ref{fig:x3_disk} suggests different origins of the detected gas species related to the X3 system.}
\begin{figure}[htbp!]
\centering
\includegraphics[width=.45\textwidth]{spec_X3_H2_final.png}
\caption{Zoomed-in view of the SINFONI spectrum in the spectral region between 2.08$\mu m$ and 2.155$\mu m$ showing the Doppler-shifted H$_2$ line. Besides the H$_2$ line at 2.1232$\mu m$ (rest wavelength 2.1218$\mu m$), one forbidden iron transition at 2.1447$\mu m$ is indicated. The presence of the Doppler-shifted H$_2$ emission line with v$_{H_2}\,=\,198$ km/s observed with SINFONI is an indicator of the presence of a protoplanetary disk \citep[][]{Glassgold2004}. {For the related line map of the H$_2$ detection, please consult Fig. \ref{fig:x3_h2_linemap}, Appendix \ref{sec:appendix_spectral_analysis}.}}
\label{fig:x3_spec_sinfo_h2}
\end{figure}
{To exclude the possibility of a late-type stellar classification, we inspect the SINFONI spectrum for CO absorption lines, which are typical of evolved stars. In contrast, young stars do not show extended infrared CO band heads between $2.3-2.4\mu m$.} To visualize the difference of the CO band heads for different stellar ages, we compare the SINFONI spectrum of X3a with the late-type star S1-23, which is located about 2.36 arcsec east of IRS 13 \citep{Gautam2019}. In Fig. \ref{fig:x3_spec_co}, the spectra of both sources are shown and the CO band head locations are marked.
\begin{figure}[htbp!]
\centering
\includegraphics[width=.45\textwidth]{spec_X3_selected_source_co.png}
\caption{Search for IR CO band-heads for X3a and comparison with the late-type analog S1-23. For the identification of the late-type star S1-23, we use the spectral analysis of \cite{Gautam2019}.}
\label{fig:x3_spec_co}
\end{figure}
Due to the absence of the CO band head features in the X3a spectrum, the stellar temperature must be higher than that of S1-23. From the normalized spectrum presented in Fig. \ref{fig:x3_spec_co}, we evaluated the depth of the CO band (CBD) using the feature at 2.36$\mu m$. We follow the analysis of \cite{Buchholz2009} and derive a CBD of $-0.1$ for X3a and of 0.4 for S1-23. From a critical point of view, the CBD might be biased by the telluric correction, the background, or the noise level. However, the CBD is only intended to provide a rough classification that separates late-type stars from young stellar sources, and it serves as an independent parameter that underlines our findings for X3a.\newline
This classification is supported by the broad-band spectral energy distribution with a prominent NIR and MIR excess, the double-peaked profile of the Br$\gamma$ line with a velocity gradient of $\sim 200\,{\rm km/s}$ indicating a rotating structure {or outflows}, and the prominent P-Cygni profile of the HeI emission line. { Therefore, we conclude} that the X3 system can be described as a YSO with a {protoplanetary} disk embedded in a bow-shock dust envelope. With the support of the stellar evolutionary tracks shown in Fig. \ref{fig:hr_x3}, we propose that X3a is a young Herbig Ae/Be star with a mass of $15^{+10}_{-5} M_{\odot}$ and { an estimated age of a few} $10^4$ yr.
Estimating the $H$-$K$ and $K$-$L$ colors from Table~\ref{tab:mag_flux} and comparing them with those of the surrounding stars and dusty objects of the nearby cluster IRS 13, we find further support for the pre-main-sequence nature of X3a (see Fig. \ref{fig:color_color_diagramm}).
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.45\textwidth]{colorcolor_diagram_x3.png}
\caption{Color-color diagram of close-by sources. We use published H-K and K-L colors of close-by sources \citep{Eckart2004, Viehmann2006} and incorporate the estimated magnitudes for X3 (Table \ref{tab:mag_flux}), X7 \citep{peissker2021}, and G2 \citep{peissker2021c}. The dashed line represents a blackbody emitting at different temperatures. The presumable YSOs exhibit a higher K-L color compared to the O/W type stars in our sample.}
\label{fig:color_color_diagramm}
\end{figure}
To get a better understanding of the setup of the X3 system, we display a possible arrangement for the different components (proto-star, protoplanetary disk, dust envelope, bow shocks) in the sketch shown in Fig. \ref{fig:sketch}.
\begin{figure*}[htbp!]
\centering
\includegraphics[width=1.0\textwidth]{X3_model_sketch.pdf}
\vspace{-1.5cm}
\caption{{ Sketch of the X3 system. The figure depicts the central massive star X3a that is surrounded by the circumstellar disk (green), whose inner ionized part and the associated disk outflows (blue) emit Br$\gamma$ emission (reddish regions). The stellar and disk outflows get shocked within the stellar-wind bow shocks. The downstream bow shock may be associated with the stable infrared continuum component X3c. Since the star moves supersonically with respect to the ambient medium, a bow shock forms in the ambient medium as well (yellow). The inner (stellar wind) and outer (ambient medium) bow-shock shells mix together and hydrodynamic instabilities form downstream. The transient infrared component X3b might be associated with a larger instability formed downstream. Yellow and orange arrows indicate the stellar proper motion and the ambient outflow directions, respectively, including their inferred velocities. Not drawn to scale.} }
\label{fig:sketch}
\end{figure*}
{ We note that this sketch is not to scale, but it captures the main components and dynamics of the X3 system. In particular, the interaction of the stellar and disk outflows (Fig. \ref{fig:pcygni_sinfo}) leads to the formation of two bow shocks, one of which (the downstream one) may be associated with the stable infrared continuum component X3c. Furthermore, due to the supersonic motion of X3a with respect to the ambient medium, a bow shock forms in the ambient medium as well and the shocked gaseous-dusty material flows downstream, giving rise to the elongated shape of the X3 system visible in the L-band. According to several molecular and recombination-line tracers, the X3a star is surrounded by a circumstellar disk ($\sim 700\,{\rm AU}$ in radius), whose inner part is ionized and is the source of the double-peaked Br$\gamma$ line. The Br$\gamma$ emission is further extended above and below the disk plane due to bipolar disk winds. The whole model may be even more complex, e.g. due to the presence of the so-called transition disk that connects the dust envelope with the accretion disk \citep[][]{Haworth2016a}.} Such a transition area also increases the size of {the accretion} disk compared to the sketch shown in Fig. \ref{fig:sketch}. Recent observations of (massive) YSOs suggest the existence of this feature, which extends possible disk sizes \citep[][]{Frost2019, GravityCollaborationYSO2020}. While it can be argued that additional background noise may impose uncertainties that increase the confusion in estimating the exact disk size (Fig. \ref{fig:linemaps_sinfo_sinfo}), observations of massive YSOs in the Carina Nebula revealed even larger disk dimensions compared to the X3 system \citep{Preibisch2011}. {Using the settings for the disk radius of the model from Table \ref{tab:sed_values}, we estimate an expected size of almost 100 mas, which is in reasonable agreement with the dimensions displayed in Fig. \ref{fig:linemaps_sinfo_sinfo}. In the next section, we provide a detailed discussion of the composition of the X3 system.}
\subsection{The disk structure of the X3 system}
\label{Sec:discussion_brgamma}
{In Fig. \ref{fig:linemaps_sinfo_sinfo}, we show the detection of ionized Doppler-shifted Br$\gamma$ with an FWHM of about 0.15" ($\sim$ 1200 AU), which is below the size of the SINFONI PSF ($\sim$ 0.25") for the corresponding spatial plate scale. The diffraction-limited detection of the integrated Br$\gamma$ line shown in Fig. \ref{fig:linemaps_sinfo_sinfo} is confined by the resolution of the telescope and indicates that the ionized hydrogen originates in an area smaller than 0.15". In contrast, the individual blue- and red-shifted line maps (see the PPV diagram in Fig. \ref{fig:linemaps_sinfo_sinfo} and the attached movie) exhibit a slightly extended size of about 0.25" ($\sim$ 2000 AU) that matches the FWHM of the SINFONI PSF.\newline
Although the Br$\gamma$ line is often used as a tracer for accretion disks, the dimensions of the X3 system exceed typical disk sizes of a few tens to about 100 AU \citep[][]{Beck2010, Beck2019}. However, \cite{Davis2011} and \cite{Ward2017} show spatially extended Br$\gamma$ lines of massive YSOs that exceed dimensions of several hundred up to 1000 AU. This apparent inconsistency can be explained by photoionized disk-wind outflows and, more generally, stellar winds that increase the spatial distribution of the Br$\gamma$ line. For example, \cite{Kraus2012} performed VLTI/AMBER observations of Br$\gamma$ gas distributions and showed that the Doppler-shifted line exhibits a photocenter offset compared to the central emission. Although \citet{Kraus2012} investigate classical Be stars, \citet{Tanaka2016} simulate density distributions n$_H$ of hydrogen around massive protostars. \citet{Tanaka2016} report strongly ionized outflows of high-mass protostars with associated temperatures of about 5000-10000 K on 100-2000 AU scales. Since Br$\gamma$ can be detected at temperatures around 8000 K \citep[][]{Wojtczak2022}, the results of \cite{Tanaka2016} are in agreement with the observations carried out by \cite{Davis2011}, \cite{Ward2017}, and this work. Due to the high mass of the X3 system (Fig. \ref{fig:x3_sed}), it is expected that ionized stellar winds distribute gas species on scales of several hundred AU \citep[][]{Tanaka2016}. This argument follows the analysis of \cite{Davis2011}, who interpret the spatially extended Br$\gamma$ emission as a tracer for outflows, which was independently shown by VLTI/AMBER observations carried out by \cite{Tatulli2007}.\newline
Although we cannot spatially resolve the accretion disk of the X3 system, we can safely assume that we observe a superposition of the warm disk material with strongly ionized outflows that are traced by the detected P-Cygni profile. Another mechanism might impose constraints on the detected size of the Br$\gamma$ emission area. As we show in Fig. \ref{fig:x3_disk}, the H30$\alpha$ line is arranged in a ring-like structure with an approximate diameter of about 0.25" or 2000 AU, exhibiting a morphology comparable to that of the massive protostar G28.200.05 observed by \cite{Law2022}. The size of the ionized hydrogen matches the Br$\gamma$ emission but also the MIR lines detected with VISIR (see Fig. \ref{fig:sivr2_visir} and Fig. \ref{fig:x3_all_visir}, Appendix \ref{sec:appendix_further_detections}) and ISAAC (Fig. \ref{fig:linemap_isaac}, Appendix \ref{sec:appendix_further_detections}), implying that all detected atomic and molecular species originate in the same region. The enhanced intensity of the H30$\alpha$ line at the tip of the bow shock of about 20$\%$ might be related to the interaction of the ambient medium with the X3 system, which is in agreement with the 3D magnetohydrodynamic models of $\zeta$ Ophiuchi carried out by \cite{Green2022}. The increased temperature in the bow-shock tip, caused by the interaction of the supersonic X3 system with the ambient medium, could also be responsible for the increased Br$\gamma$ emission size. \cite{Scoville2013}, for example, show spatially extended Br$\gamma$ emission created by the interaction of a young T Tauri star with the ambient medium around Sgr~A*. This contributes to the high temperature within the disk material in combination with the photoionized disk-wind outflows. Because there might be an interplay between the bow-shock-induced temperatures and strong winds, gas flows through the disk cannot be ruled out. A probe of these gas flows is provided by polycyclic aromatic hydrocarbons (PAH, see Fig. \ref{fig:x3_all_visir}). We use the SED of IRS 3 (Fig. \ref{fig:x3_system}) derived by \cite{Pott2008} to normalize the PAH1 and PAH2 detections (Fig. \ref{fig:x3_all_visir}, Appendix \ref{sec:appendix_multiwavelength_detection}) of the X3 system and infer an intensity ratio $\rm I_{PAH1}/I_{PAH2}$ of $\sim\,6$. Following the analysis of Herbig Ae/Be stars shown in \cite{Maaskant2014}, we conclude that the total PAH emission can be distributed in a radius around the protostar between a few AU and up to 1000 AU. The PAH intensity ratio implies gas flows between optically thick disk regions close to and farther away from the protostar.\newline
For the X3 system, we can therefore conclude that the presence of a thick accretion disk and an optically thin region of gas flow \citep[][]{Maaskant2014} or a transition disk \citep[][]{Haworth2016a}, embedded in an optically thick dust envelope, is the most plausible explanation for the observed lines. Strongly ionized outflows are responsible for the spatially extended Br$\gamma$ emission. The size of the Br$\gamma$-emitting area matches that of all other lines detected in this work, in agreement with the hypothesis of dominating photoionized winds. For a visualization of the different components and their interplay, please see the sketch displayed in Fig. \ref{fig:sketch}.}
\subsection{The gravitationally stable X3 system}
Although the gravitational footprint of Sgr~A* can be traced at a distance of $\sim 0.1$ pc, the high stellar mass of X3a effectively shields the inner region of the system on the scale of $\sim 100\,{\rm AU}$. To provide a quantitative estimate of the gravitational imprint of Sgr~A* on the X3 system, we estimate the tidal radius with
\begin{equation}
r_{\rm t}\,\sim \,R_{\rm X3} (2M_{\rm SgrA*}/m_{\rm X3a})^{1/3}
\end{equation}
where $R_{\rm X3}\,\sim \,0.005 $pc is the approximate size of the X3 bow shock, $M_{\rm SgrA*}\,=\,4\times 10^{6} M_{\odot}$ is the mass of Sgr~A* \citep{Peissker2022}, and $m_{\rm X3a}$ is the mass of X3a ($\sim 15\,M_{\odot}$).
Using the above relation, we estimate a tidal radius of $r_{\rm t}\sim 0.41$ pc, which is greater than or comparable to the X3 distance from Sgr~A*, $d_{\rm X3}\sim 0.3\,{\rm pc}$ (see Sec. \ref{sec:results_sed}). Therefore, the outermost bow shock can be affected by the ambient tidal field of Sgr~A*. However, the Hill radius of X3a is $r_{\rm Hill}\sim d_{\rm X3}[m_{\rm X3a}/(3M_{\rm SgrA*})]^{1/3}\sim 700$ AU, which implies that the stellar core and the surrounding circumstellar material are bound and not significantly affected by the gravitational field of Sgr~A*. The extended sphere of influence of X3a has likely also determined the fate of the thermal warm blob X3b, which disappeared when it reached a comparable distance from X3a. The remnant of X3b will approach the star on the free-fall timescale of $t_{\rm ff}\sim \pi (r_{\rm Hill}/2)^{3/2}(Gm_{\rm X3a})^{-1/2}\sim 850$ years, see also Fig.~\ref{fig:distance_blobs}. Alternatively, the disappearance of X3b may be related to faster hydrodynamical processes, specifically the interaction with the bound material around the star or the stellar outflow and subsequent shocks.
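A short Python sketch (illustrative only; it merely re-evaluates the relations above with the rounded values $R_{\rm X3}\sim 0.005$ pc, $d_{\rm X3}\sim 0.3$ pc, $M_{\rm SgrA*}=4\times 10^6\,M_{\odot}$, and $m_{\rm X3a}\sim 15\,M_{\odot}$) reads:
\begin{verbatim}
import math

R_X3, M_sgra, m_x3a, d_x3 = 0.005, 4.0e6, 15.0, 0.3   # pc, M_sun, M_sun, pc
AU_PER_PC = 206265.0

r_t = R_X3 * (2.0 * M_sgra / m_x3a)**(1.0 / 3.0)        # tidal radius, ~0.41 pc
r_hill = d_x3 * (m_x3a / (3.0 * M_sgra))**(1.0 / 3.0)   # Hill radius in pc
r_hill_au = r_hill * AU_PER_PC                          # ~700 AU

# Free-fall time: t_ff = pi (r_Hill/2)^(3/2) (G m)^(-1/2);
# with r in AU, m in M_sun, and G = 4 pi^2 AU^3 M_sun^-1 yr^-2, t_ff comes out in yr.
G_AU_YR = 4.0 * math.pi**2
t_ff = math.pi * (r_hill_au / 2.0)**1.5 / math.sqrt(G_AU_YR * m_x3a)

print(f"r_t = {r_t:.2f} pc, r_Hill = {r_hill_au:.0f} AU, t_ff = {t_ff:.0f} yr")
# ~0.41 pc, ~670 AU, ~800 yr (~850 yr when r_Hill is rounded to 700 AU)
\end{verbatim}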
\subsection{Dust temperature and sublimation radius}
{ Given the large bolometric luminosity of the X3a star, $L_{\rm bol}\simeq 24\times 10^3\,L_{\odot}$, and the stellar radius, $R_{\star}\simeq 10\,R_{\odot}$, see Table~\ref{tab:sed_values}, we obtain an effective temperature of $T_{\star}\simeq 22700\,{\rm K}$. This implies that the X3a star emits most of its radiation in the UV domain ($\lambda_{\rm max}\sim 128\,{\rm nm}$), and due to the large luminosity, the circumstellar dust evaporates at the sublimation radius $r_{\rm sub}$, the distance from the star where it reaches the sublimation temperature of $T_{\rm sub}\sim 1500\,{\rm K}$. To estimate $r_{\rm sub}$, we use the model of \citet{1987ApJ...320..537B} for the temperature of graphite grains at the distance $r_{\star}$ from the star, given the UV luminosity $L_{\rm UV}$ of the central source and the optical depth $\tau_{\rm UV}$ of the material in which the grains are located. For $L_{\rm UV}\sim L_{\rm bol}$, we obtain the dust temperature scaled approximately to the sublimation temperature,}
\begin{equation}
T_{\rm dust}\simeq 1501.3 \left[\left(\frac{L_{\rm UV}}{24\times 10^3\,L_{\odot}} \right) \left(\frac{r_{\star}}{23\,{\rm AU}} \right)^{-2} e^{-\tau_{\rm UV}}\right]^\frac{1}{5.6}\,{\rm K}\,,
\label{eq_dust_temp1}
\end{equation}
{ hence, if the dust is not shielded, it evaporates at $r_{\rm sub}\sim 23\,{\rm AU}$. At larger distances, colder dust contributes to the mid- and far-infrared thermal emission of the X3 system. The peak in the SED close to 20-30\,${\rm \mu m}$, see Fig.~\ref{fig:x3_sed}, can be interpreted as the thermal emission of dust with a temperature of $\sim 100\,{\rm K}$, which must be shielded from the star by the optically thick circumstellar disk and outflows. If the optical depth reaches values close to $\tau_{\rm UV}\sim 10$, dust with $T_{\rm dust}\sim 100\,{\rm K}$ is located at $r_{\star}\sim 300\,{\rm AU}$ according to Eq.~\ref{eq_dust_temp1}.}
{ A comparable dust sublimation radius is obtained from the numerically determined relation by \citet{2004ApJ...617.1177W},
\begin{equation}
r_{\rm sub2}=R_{\star}\left(\frac{T_{\rm sub}}{T_{\star}} \right)^{-2.085}\,,
\label{eq_dust_temp2}
\end{equation}
which gives $r_{\rm sub2}\sim 289R_{\star}\sim 13.4\,{\rm AU}$. Eq.~\ref{eq_dust_temp2} takes into account shielding by an optically thick inner disk wall; hence, the dust can survive closer to the star in comparison with the optically thin limit ($\tau_{\rm UV}=0$) evaluated in Eq.~\ref{eq_dust_temp1}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{temperature_X3.png}
\caption{{ Dependence of the dust temperature $T_{\rm dust}$ on the distance from the X3a star ($\log{r}$) and the optical depth $\tau_{\rm UV}$ of the circumstellar medium at a given distance. The vertical dashed gray line marks the distance of 50$R_{\odot}$ for orientation. The contours for $100$, $500$, and $1000\,{\rm K}$ are depicted as white solid lines. The white region in the left corner stands for the parameter space where $T_{\rm dust}>1500\,{\rm K}$.}}
\label{fig_dust_temp_X3}
\end{figure}
In Fig.~\ref{fig_dust_temp_X3}, we show the distribution of the dust temperature around the X3a star as a function of the distance from the star (in AU) and the optical depth, using the simplified model of \citet{1987ApJ...320..537B} for a single point source with the UV luminosity of $L_{\rm UV}=24000\,L_{\odot}$. The plot shows a strong dependence of the dust temperature at a given distance on the optical depth, and hence on the shielding by the opaque circumstellar disk and outflows.
}
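For completeness, the numbers above follow directly from Eqs.~\ref{eq_dust_temp1} and \ref{eq_dust_temp2}; a minimal Python sketch (illustrative only, with the stellar parameters from Table~\ref{tab:sed_values}) is:
\begin{verbatim}
import math

L_UV = 24.0e3                       # L_sun, assuming L_UV ~ L_bol
T_star, T_sub = 22700.0, 1500.0     # K
R_star_au = 10.0 * 0.00465          # 10 R_sun expressed in AU

def t_dust(r_au, tau_uv=0.0):
    """Dust temperature of Eq. (1) at distance r_au [AU] for optical depth tau_uv."""
    return 1501.3 * ((L_UV / 24.0e3) * (r_au / 23.0)**-2
                     * math.exp(-tau_uv))**(1.0 / 5.6)

print(t_dust(23.0))                 # ~1500 K: unshielded sublimation radius ~23 AU
print(t_dust(300.0, 10.0))          # ~100 K: cold dust at ~300 AU if tau_UV ~ 10

# Eq. (2): sublimation radius behind an optically thick inner disk wall
r_sub2 = R_star_au * (T_sub / T_star)**-2.085
print(r_sub2)                       # ~13 AU, i.e. ~289 R_star
\end{verbatim}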
\subsection{Formation history of X3}
As a consequence of this analysis, the presence of a high-mass YSO requires { us to assess} the question of the origin and the formation process of the X3 system. We therefore discuss three different possible scenarios for the formation of the X3 system.\newline
\begin{enumerate}
\item For the first scenario, we consider the top-heavy initial mass function for young stars in the GC proposed by \cite{Lu2013}. The authors assume a single starburst about 6-7 Myr ago that explains the presence of the investigated young stars. Due to the age of the X3 system of $\sim 0.04$ Myr, this formation scenario can be excluded.
\item The second scenario is proposed by, among others, the authors of \cite{Jalali2014}. This scenario of in-spiralling molecular clouds with masses of about 100 M$_{\odot}$ might be appealing since \cite{Yusef-Zadeh2013} already found indications of high-mass star formation, which was underlined by the observations of \cite{Moser2017} and \cite{Hsieh2021}. However, there should be at least some tracers of a recent massive cloud infall, such as elongated gaseous-dust features or trail stars \citep[for further discussion, see also][]{Paumard2006}.
\item The third and last scenario discussed here is a continuous star formation process in the GC. In this scenario, it is believed that ongoing star formation takes place in the so-called (counter-)clockwise disks as proposed by \cite{Paumard2006}. Due to the high gas and dust temperatures \citep{Cotera1999}, the turbulent environment \citep{Genzel2000}, and strong magnetic fields, this scenario seems at least questionable. On the other hand, it can explain the variety of stellar members and regions in the inner parsec, such as IRS 13, IRS 16, and early/late-type stars \citep[][]{Krabbe1995, Habibi2017, Gautam2019}. Overdensities in the two disks could have created cluster structures like the mentioned IRS 13 and IRS 16 regions.
\end{enumerate}
We consider a combination of the second and third scenarios as the most plausible explanation for the origin of the X3 system {without excluding the starburst scenario as a general concept}. Specifically, a cluster of young stars could have started to form outside the inner parsec, e.g. in the region of the current circumnuclear disk, and at the same time, it was migrating towards Sgr~A*. {Due to cloud-cloud interactions, the initial birthplace of the X3 system could have lost angular momentum. The dynamical instability timescales that are required for the formation of the X3 system are in agreement with Kelvin-Helmholtz timescales, as we have shown for a young T-Tauri star \citep[][]{peissker2021c}.} As suggested by \cite{Maillard2004}, IRS 13 might be an example of an evaporating cluster with ongoing star formation. \cite{Maillard2004} propose that IRS 13 is the remnant core of a massive cluster. As a result, IRS 13 could be the birth site of some of the high-mass stars in the GC, as discussed by the authors. \cite{Portegies-Zwart2003} favor a similar explanation for the IRS 16 stars, as they also propose an in-spiral of the cluster caused by dynamical friction.\newline
Taking into account the direction and proper motion of the studied dusty objects of IRS 13 \citep{Eckart2004}, our estimated value of v$_{\rm PROP}\,=\,244\pm 27$ km/s for the X3 system coincides with that of these presumable YSOs of the cluster \citep[][]{muzic2008, muzic2010}. Furthermore, v$_{\rm PROP}$ of X3 matches the proper motion of the evolved WR stars E2 and E4 \citep[][]{Zhu2020}. Therefore, we cannot exclude the possibility that the X3 system might be a cluster member of IRS 13. Considering its young age, X3a must have formed {\it in situ} given the migration timescales \citep{Morris1993}. This implies that IRS 13 could have initially served as the birthplace of the X3 system, in agreement with the discussion about the origin of some high-mass stars by \cite{Maillard2004}. During the infall of IRS 13, the X3 system might have been separated from the cluster due to tidal disintegration and the initial velocity dispersion of forming stellar cores. This could also explain the spatial offset of about 1 arcsec between the X3 system and IRS 13. Consequently, the in-spiralling and evaporating cluster should have lost more sources similar to X3a \citep{Paumard2006}. Tracers of this event might be the {low- and high-mass YSOs observed by \cite{Yusef-Zadeh2013, Yusef-Zadeh2015, Yusef-Zadeh2017} in the inner two parsecs of the GC. If IRS 13 is, however,} not classified as a cluster but rather as a fragmented disk structure, the theoretical scenarios proposed by \citet{Bonnel2008}, \citet{Hobbs2009}, and \citet{Jalali2014} that describe the in-spiral of massive molecular clouds should be considered to explain the formation of the X3 system. In particular, the simulations by \citet{Bonnel2008} favor the formation of high-mass stars, which could serve as a plausible explanation for the high-mass YSO described here. {While this formation scenario seems appealing, it does not explain the discovery of 11 low-mass bipolar outflow sources by \cite{Yusef-Zadeh2017}. These bipolar outflow sources are located in the inner parsec, but also in the S-cluster, as we suggest in \cite{Peissker2019}. Recently, \cite{Owen2022} proposed a new formation path for X8 \citep{Peissker2019} and similar sources \citep[][]{Peissker2020b, Ciurlo2020} that requires the presence of giant planets in the disk of the young protostar. Although all of the above scenarios aim to explain a specific stellar type or group in the NSC, no single approach is able to address the rich presence of stars of different ages. For example, the young stars in the S-cluster exhibit an age range of $\sim\,3-15$ Myr, clearly inconsistent with the starburst 6 Myr ago \citep[][]{Lu2013} or an infalling molecular cloud with concurrent star formation \citep[][]{Jalali2014}.}
In the mid-term perspective, upcoming GC observations with the James Webb Space Telescope utilizing MIRI {IFU data} could add valuable insights into {these scenarios}.
\section{Conclusion} \label{sec:conclusion}
In this work, we have presented observations of the first { candidate} high-mass YSO close to Sgr~A*, using a data baseline of almost 30 years obtained with four different telescopes in various wavelength regimes. In the following, we outline our key findings and the related interpretation as discussed above.
\begin{enumerate}
\item In the NIR/MIR, we have identified several components related to the X3 system in the H-, K-, L-, and M-band,
\item Because of the broad wavelength coverage, a coreless gas/dust feature can be excluded,
\item Based on the extensive data baseline covering two decades of observations, the components of the system (X3, X3a, X3b, X3c) move with a comparable proper motion towards the IRS 13 cluster,
\item {The H- and K-band detections of X3a between 1995 and 2020 imply a stellar classification of this component of the X3 system. It is therefore plausible to classify X3a as the embedded stellar source of the dusty envelope X3,}
\item Due to the missing NIR CO absorption lines and the depth of the 2.36$\mu m$ spectral feature, X3a { is consistent with} a young (proto)star { rather than an evolved, late-type star,}
\item The hot blob X3b with a decreasing distance towards the central stellar source X3a is below the detection limit in 2012 with no traceable emission in the following epochs. { The independently calculated theoretical Hill radius matches the NIR observations and implies that X3b was likely accreted in 2011/2012,}
\item For the hot L-band blob X3c, we find a constant distance towards X3a which suggests that it is created due to the { thermal} pressure of the ambient medium,
\item The spectroscopic footprint reveals a rich abundance of NIR emission lines,
\item We { detect a} P-Cygni profile of the HeI line, which indicates the presence of a wind with a { terminal velocity} of over 400 km/s. { Such a high wind velocity is a common property of young stars},
\item A detailed analysis of the Br$\gamma$ line reveals a continuous velocity gradient { that coincides with the position of X3a implying a physical connection. This emission line is most likely connected to photoionized outflows/winds that might originate close to the protostar or a gaseous accretion disk,}
\item This interpretation of the Br$\gamma$ emission correlates with the MIR lines that originate in a dense and compact region,
\item The {spatial distribution and dimensions of the NIR and MIR emission match the size of the radio/submm observations of the ionized H30$\alpha$ line, implying that the ionization might be produced by the same mechanism, namely, photoionized outflows},
\item {The H30$\alpha$ line is arranged in a ring-like structure with a diameter of about 2000 AU and is most likely shielded in an optically thick envelope, in agreement with recent independent observations of massive YSOs such as G28.200.05},
\item In general, the organic and complex molecules observed in the {NIR and MIR are tracers associated with the presence of a YSO},
\item Based on our { 3d MCMC radiative transfer calculations}, we { infer the stellar mass of $15^{+10}_{-5} M_{\odot}$ and an age of a few $10^4$ years for the X3 system,}
\item Considering the 3d distance and proper motion of the X3 system, it may have been a former member of the IRS 13 cluster.
\end{enumerate}
In terms of the { future} perspective, we expect more insights into the X3 system with ERIS (VLT), MIRI (JWST), GRAVITY (VLTI), and METIS (ELT).
\begin{acknowledgments}
This work was supported in part by the
Deutsche Forschungsgemeinschaft (DFG) via the Cologne
Bonn Graduate School (BCGS), the Max Planck Society
through the International Max Planck Research School
(IMPRS) for Astronomy and Astrophysics as well as special
funds through the University of Cologne. We acknowledge support for the Article Processing Charge from the DFG (German Research Foundation, 491454339). MZ acknowledges the financial support by the National Science Center, Poland, grant No. 2017/26/A/ST9/00756 (Maestro 9) and the NAWA financial support under the agreement PPN/WYM/2019/1/00064 to perform a three-month exchange stay at the Charles University in Prague and the Astronomical Institute of the Czech Academy of Sciences. MZ also acknowledges the GA\v{C}R EXPRO grant 21-13491X (``Exploring the Hot Universe and Understanding Cosmic Feedback") for financial support. Part of this
work was supported by fruitful discussions with members of
the European Union funded COST Action MP0905: Black
Holes in a Violent Universe and the Prague--Cologne Exchange Program for university students. VK thanks the Czech Science Foundation
(No.\ 21-11268S). AP, JC, SE, and GB contributed useful points to the discussion. We also would like to
thank the members of the SINFONI/NACO/VISIR and ESO's Paranal/Chile team for their support and collaboration.
\end{acknowledgments}
This research has made use of the Keck Observatory Archive (KOA), which is operated by the W. M. Keck Observatory and the NASA Exoplanet Science Institute (NExScI), under contract with the National Aeronautics and Space Administration.
\vspace{5mm}
\facilities{HST (NICMOS), VLT (SINFONI, VISIR, ISAAC, and NACO), ALMA, KECK (NIRCAM2).}
\software{astropy \citep{2013A&A...558A..33A,2018AJ....156..123A},
Hyperion \citep{Robitaille2011},
DPuser \citep{Ott2013}.
}
\section{Descriptive Analysis of Exploit Data}
\section{Conclusion}
In this paper, we presented results from an international, community-driven effort to collect and analyze software vulnerability exploit data, and to build a machine learning model capable of estimating the probability that a vulnerability would be exploited within 30 days following the prediction. In particular, we described the process of collecting each of the additional variables, and described the approaches used to create the machine learning model based on 6.4 million observed exploit attempts. Through the expanded data sources we achieved an unprecedented 82\% improvement in classifier performance over the previous iterations of EPSS.
We illustrated practical use of EPSS by way of comparison with a set of alternative vulnerability remediation strategies. In particular, we showed the sizeable and meaningful improvement in coverage, efficiency and level of effort (as measured by the number of vulnerabilities that would need to be remediated) by using EPSS v3 over any and all current remediation approaches, including CVSS, CISA's KEV list, and Metasploit.
As the EPSS effort continues to grow, acquire and ingest new data, and improve modeling techniques with each new version, we believe it will continue to improve in performance, and provide new and fundamental insights into vulnerability exploitation for many years to come.
\section{Acknowledgements} We would like to acknowledge the participants of the EPSS Special Interest Group (SIG), as well as the organizations that have contributed to the EPSS data model, including: Fortinet, Shadow Server Foundation, GreyNoise, AlienVault, Cyentia, and FIRST.
\section{Data}
The data used in this research is based on 192,035 published vulnerabilities (not marked as ``REJECT'' or ``RESERVED'') listed in MITRE's Common Vulnerabilities and Exposures (CVE) list through December 31, 2022. The CVE identifier has been used to combine records across our disparate data sources. Table \ref{table:features} lists the categories of data, number of features in each category, and the source(s) or other notes. In total, EPSS collects 1,477 unique independent variables for every vulnerability.
\begin{table*}[t]
\caption{Description of data sources used in EPSS.}
\label{table:features}
\begin{tabular}{p{0.35\linewidth} p{0.1\linewidth} p{0.5\linewidth}}
\hline
Description & \# of variables & Sources \\
\hline
Exploitation activity in the wild (ground truth) & 1 (with dates) & Fortinet, AlienVault, ShadowServer, GreyNoise \\
Publicly available exploit code & 3 & Exploit-DB, GitHub, MetaSploit \\
CVE is listed/discussed on a list or website (``site'') & 3 & CISA KEV, Google Project Zero, Trend Micro's Zero Day Initiative (ZDI) \\
Social media & 3 & Mentions/discussion on Twitter \\
Offensive security tools and scanners & 4 & Intrigue, sn1per, jaeles, nuclei \\
References with labels & 17 & MITRE CVE List, NVD \\
Keyword description of the vulnerability & 147 & Text description in MITRE CVE List \\
CVSS metrics & 15 & National Vulnerability Database (NVD) \\
CWE & 188 & National Vulnerability Database (NVD) \\
Vendor labels & 1,096 & National Vulnerability Database (NVD) \\
Age of the vulnerability & 1 & Days since CVE published in MITRE CVE list \\
\hline
\end{tabular}
\end{table*}
\subsection{Ground truth: exploitation in the wild}
EPSS collects and aggregates evidence of exploits from multiple sources: Fortiguard, AlienVault OTX, the Shadow Server Foundation, and GreyNoise (though not all sources cover the full time period). Each of these data sources employs network- or host-layer intrusion detection/prevention systems (IDS/IPS), or honeypots, in order to identify attempted exploitation. These systems are also predominantly signature-based (as opposed to anomaly-based) detection systems. Moreover, all of these organizations have large enterprise infrastructures of sensor and collection networks. Fortiguard, for example, manages tens of thousands of IDS/IPS devices that identify and report exploitation activity from across the globe. AlienVault OTX, GreyNoise, and the Shadow Server Foundation also maintain worldwide networks of sensors for detecting exploitation activity.
These data sources include the list of CVEs observed to be exploited on a daily basis. The data are then cleaned, and exploitation activity is consolidated into a single boolean value (0 or 1), identifying days on which exploitation activity was reported for any given CVE across any of the available data sources. Structuring the training data according to this boolean time-series enables us to estimate the probability of exploitation activity in any upcoming window of time, though the consensus in the EPSS Special Interest Group was to standardize on a 30-day window to align with most enterprise patch cycles.
\noindent\fbox{%
\parbox{\linewidth}{%
The exploit data used in this research paper covers activity from July 1, 2016 to December 31st, 2022 (2,374 days / 78 months / 6.5 years), over which we collected 6.4 million exploitation observations (date and CVE combinations), targeting 12,243 unique vulnerabilities. Based on this data, we find that 6.4\% (12,243 of 192,035) of all published vulnerabilities were observed to be exploited during this period, which is consistent with previous findings \cite{jacobs2020improving, jacobs2021epss}.
}%
}
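Conceptually, the consolidation described above reduces to a simple aggregation over the per-source exploitation reports. The following Python/pandas sketch illustrates the idea; the column names and records are hypothetical and do not correspond to the actual feed formats:
\begin{verbatim}
import pandas as pd

# Hypothetical per-source reports: one row per (source, CVE, date) observation.
reports = pd.DataFrame({
    "source": ["fortinet", "greynoise", "shadowserver", "fortinet"],
    "cve":    ["CVE-2021-44228", "CVE-2021-44228", "CVE-2017-0144", "CVE-2017-0144"],
    "date":   pd.to_datetime(["2022-12-01", "2022-12-01", "2022-12-05", "2022-12-07"]),
})

# A CVE/day pair is marked 1 if any source reported exploitation on that day.
daily = (reports.drop_duplicates(["cve", "date"])
                .assign(exploited=1)[["cve", "date", "exploited"]])

def exploited_within(cve, start, days=30):
    """Ground-truth label: was the CVE exploited within `days` days of `start`?"""
    window = daily[(daily.cve == cve)
                   & (daily.date >= start)
                   & (daily.date < start + pd.Timedelta(days=days))]
    return int(not window.empty)

print(exploited_within("CVE-2017-0144", pd.Timestamp("2022-12-01")))   # -> 1
\end{verbatim}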
\subsection{Explanatory variables/features}
\label{subsec:explan_variables}
In total, EPSS leverages 1,477 features for predicting exploitation activity. Next, we describe the data sources used to construct these features.
\paragraph{Published exploit code}
We first consider the correlation between exploitation in the wild and the existence of publicly available exploit code, which is collected from three sources (courtesy of Cyentia\footnote{\url{https://www.cyentia.com/services/exploit-intelligence-service}}): Exploit-DB, GitHub, and Metasploit. In total, we identified 24,133 CVEs with published exploit code, consisting of 20,604 CVEs from Exploit-DB, 4,049 published on GitHub, and 1,905 published as Metasploit modules. Even though Exploit-DB contains the majority of published exploits, GitHub has become a valuable source in recent years. For example, in 2022, 1,591 exploits were published on GitHub, while Exploit-DB and Metasploit added 196 and 94 entries, respectively.
\paragraph{Public vulnerability lists}
Next, we consider that exploitation activity may be forecasted by the presence of vulnerabilities on popular lists and/or websites that maintain and share information about selective vulnerabilities. Google Project Zero maintains a listing\footnote{\url{https://docs.google.com/spreadsheets/d/1lkNJ0uQwbeC1ZTRrxdtuPLCIl7mlUreoKfSIgajnSyY/view\#gid=1190662839}.} of ``publicly known cases of detected zero-day exploits''.\footnote{\url{https://googleprojectzero.blogspot.com/p/0day.html}.} This may help us forecast exploitation activity as the vulnerability slides into N-day status. We include 162 unique CVEs listed by Google Project Zero.
Trend Micro's Zero Day Initiative (ZDI), the ``world's largest vendor-agnostic bug bounty program'',\footnote{\url{https://www.zerodayinitiative.com/about}.} works with researchers and vendors to responsibly disclose zero-day vulnerabilities and issue public advisories about vulnerabilities at the conclusion of their process. We include 7,356 CVEs that have public advisories issued by ZDI.
The Known Exploited Vulnerabilities (KEV) catalog from the US Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (CISA) is an ``authoritative source of vulnerabilities that have been exploited in the wild''.\footnote{\url{https://www.cisa.gov/known-exploited-vulnerabilities}} We include 866 CVEs from CISA's KEV list.
These sources lack transparency about when exploitation activity was observed, and for how long this activity was ongoing. However, because past exploitation attempts might influence the likelihood of future attacks, we include these indicators as binary features for our model.
\paragraph{Social media}
Exploitation may also be correlated with social media discussions, and therefore we collect Twitter mentions of CVEs, counting these mentions within three different historical time windows (7, 30, and 90 days). We only count primary and original tweets and exclude retweets and quoted retweets. The median number of daily unique tweets mentioning CVEs is 1,308, with the 25th and 75th percentiles of daily tweets being 607 and 1,400, respectively. We currently make no attempt to validate the content or filter out automated posts (from bots).
\paragraph{Offensive security tools}
We also collect evidence of vulnerabilities being used in offensive security tools that are designed, in part, to identify vulnerabilities during penetration tests. We are currently gathering information from four different offensive security tools with varying numbers of CVEs identified in each: Nuclei with 1,548 CVEs, Jaeles with 206 CVEs, Intrigue with 169 CVEs and Sn1per with 63 CVEs. These are encoded as binary features which indicate whether each particular source is capable of scanning for and reporting on the presence of each vulnerability.
\paragraph{References}
In order to capture metrics around the activity and analysis related to vulnerabilities, for each CVE, we count the number of references listed in MITRE's CVE list, as well as the number of references with each of the 16 reference tags assigned by NVD. The labels and their associated prevalence across CVEs are: Vendor Advisory (102,965), Third Party Advisory (84,224), Patch (59,660), Exploit (54,633), VDB Entry (31,880), Issue Tracking (16,848), Mailing List (15,228), US Government Resource (11,164), Release Notes (9,308), Permissions Required (3,980), Broken Link (3,934), Product (3,532), Mitigation (2,983), Technical Description (1,686), Not Applicable (961), and Press/Media Coverage (124).
\paragraph{Keyword description of the vulnerability}
To capture attributes of the vulnerabilities themselves, we use the same process as described in previous research~\cite{jacobs2020improving, jacobs2021epss}. This process detects and extracts hundreds of common multiword expressions used to describe and discuss vulnerabilities. These expressions are then grouped and normalized into common vulnerability concepts. The top tags we included and their associated CVEs are as follows: ``remote attacker'' (80,942), ``web'' (31,866), ``code execution'' (31,330), ``denial of service'' (28,478), and ``authenticated'' (21,492). In total, we include 147 binary features for identifying such tags.
We followed the same process as EPSS v1 for extracting multiword expressions from the text of the references using Rapid Automatic Keyword Extraction~\cite{rose2010automatic}.
\paragraph{CVSS metrics}
To capture other attributes of vulnerabilities, we collect CVSS base metrics. These consist of exploitability measurements (attack vector, attack complexity, privileges required, user interaction, scope) and the three impact measurements (confidentiality, integrity and availability). These categorical variables are encoded using one-hot encoding. We collected CVSS version 3 information from NVD for 118,087 vulnerabilities. However, 73,327 vulnerabilities were published before CVSSv3 was created and are only scored in NVD using CVSSv2. To address this, we developed a separate and dedicated machine learning model to estimate the CVSSv3 measurement values for each of these vulnerabilities.
We use a process similar to prior work~\cite{nowak2021conversion}, where we use the CVSSv2 sub-components of CVEs that have both CVSSv2 and CVSSv3 scores. We then train a feedforward neural network to predict CVSSv3 vectors. The model was validated using 8-fold, yearly stratified, cross-validation, achieving 74.9\% accuracy when predicting the exact CVSSv3 vector. For 99.9\% of vectors, we predict the majority (5 or more) of the individual metrics correctly. For each individual portion of the CVSSv3 vector, we were able to achieve a minimum of 93.4\% accuracy (on the Privileges Required metric). We note that this exceeds the accuracy achieved by \cite{nowak2021conversion}, and likely warrants further research into the robustness of CVSSv3 prediction and its possible application to future versions of CVSS.
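The architecture of this auxiliary model is not reproduced here; the following scikit-learn sketch only illustrates the general idea of mapping one-hot encoded CVSSv2 sub-components to a CVSSv3 metric, with toy feature values standing in for the real training data:
\begin{verbatim}
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import OneHotEncoder

# Toy stand-in for CVEs scored under both CVSS versions: X holds CVSSv2
# sub-components, y holds one CVSSv3 metric (here: Privileges Required).
cvss_v2 = np.array([["NETWORK", "LOW",    "NONE"],
                    ["LOCAL",   "HIGH",   "SINGLE"],
                    ["NETWORK", "MEDIUM", "NONE"],
                    ["LOCAL",   "LOW",    "SINGLE"]])
cvss_v3_pr = np.array(["NONE", "HIGH", "NONE", "LOW"])

enc = OneHotEncoder(handle_unknown="ignore")
X = enc.fit_transform(cvss_v2)

# One small feedforward network per CVSSv3 metric; in practice each metric
# (AV, AC, PR, UI, S, C, I, A) gets its own classifier and proper cross-validation.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X, cvss_v3_pr)

print(clf.predict(enc.transform([["NETWORK", "LOW", "NONE"]])))
\end{verbatim}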
\paragraph{CWE}
We also capture the observation that different types of vulnerabilities may be more or less attractive to attackers, using the Common Weakness Enumeration (CWE), which is a ``community-developed list of software and hardware weakness types''.\footnote{\url{https://cwe.mitre.org}} We collect the CWE assignments from NVD, noting that 21,570 CVEs do not have a CWE assigned. We derived binary features for CWEs found across at least 10 vulnerabilities, resulting in 186 CWE identifiers being included. In addition, we maintain two features for vulnerabilities where CWE information is not available, or the assigned CWEs are not among the common ones. The top CWE identifiers and their vulnerability counts are CWE 79 (20,797), CWE 119 (11,727), CWE 20 (9,590), CWE 89 (8,790), CWE 787 (7,624), CWE 200 (7,270), CWE 264 (5,485), CWE 22 (4,918), CWE 125 (4,743), and CWE 352 (4,081).
\paragraph{Vulnerable vendors}
We suspect that exploitation activity may be correlated with the market share and/or install base that vendors achieve. Therefore, we parse the Common Platform Enumeration (CPE) data provided by NVD in order to identify platform records marked as ``vulnerable'', and extract only the vendor portion of the record. We did not make any attempt to fill in missing information or correct any typos or misspellings that may occasionally appear in the records. We ranked vendors according to the number of vulnerabilities, creating one binary feature for each vendor, and evaluated the effect of including less frequent vendors as features. We observed no performance improvements by including vendors with fewer than 10 CVEs in our dataset. As a result, we extracted 1,040 unique vendor features in the final model. The most prevalent vendors and their vulnerability counts are Microsoft (10,127), Google (9,100), Oracle (8,970), Debian (7,627), Apple (6,499), IBM (6,409), Cisco (5,766), RedHat (4,789), Adobe (4,627), and Fedora Project (4,166).
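The vendor indicators (and, analogously, the CWE indicators) reduce to thresholded one-hot features. A sketch of this construction with made-up records and a lowered threshold, so that the toy data yields features, could look as follows:
\begin{verbatim}
import pandas as pd

# Hypothetical CVE-to-vendor assignments extracted from vulnerable CPE records.
cpe = pd.DataFrame({
    "cve":    ["CVE-A", "CVE-B", "CVE-C", "CVE-D", "CVE-E"],
    "vendor": ["microsoft", "microsoft", "google", "rareco", "google"],
})

MIN_CVES = 2   # the EPSS model uses 10; lowered here for the toy data
counts = cpe.vendor.value_counts()
keep = counts[counts >= MIN_CVES].index   # vendors retained as features

features = (pd.crosstab(cpe.cve, cpe.vendor)
              .reindex(columns=keep, fill_value=0)
              .clip(upper=1))             # one binary indicator per retained vendor
print(features)
\end{verbatim}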
\paragraph{Age of the vulnerability}
Finally, the age of a vulnerability might contribute or detract from the likelihood of exploitation. Intuitively, we expect old vulnerabilities to be less attractive to attackers due to a smaller vulnerable population. To capture this, we create a feature which records the number of days elapsed from CVE publication to the time of feature extraction in our model.
\section{Discussion and Future Work}
Currently, the EPSS model ingests data concerning which vulnerabilities were exploited on which days. However, exploitation has many other characteristics, which may be useful to capture and examine. For example, we may be interested in studying the number of exploits per vulnerability (volume), fragmentation of exploitation over time (that is, the pattern of periods of exploitation), or prevalence, which would measure the spread of exploitation, typically by counting the number of devices detecting exploitation. We leave these topics for future work.
\subsection{Limitations and adversarial consideration}
This research is conducted with a number of limitations. First, insights are limited to data collected from our data partners and the geographic and organizational coverage of their network collection devices. While these data providers collectively manage hundreds of thousands of sensors across the globe, and across organizations of all sizes and industries, they do not observe every attempted exploit event in every network. Nevertheless, it is plausible to think that the data used, and therefore any inferences provided, are representative of all mass exploitation activity.
In regard to the nature of how vulnerabilities are detected, any signature-based detection device is only able to alert on events that it was programmed to observe. Therefore, we are not able to observe vulnerabilities that were exploited but undetected by the sensor because a signature was not written.
Moreover, the nature of the detection devices generating the events will be biased toward detecting network-based attacks, as opposed to attacks from other attack vectors such as host-based attacks or methods requiring physical proximity.\footnote{For example, it is unlikely to find evidence of exploitation for CVE-2022-37418 in our data set, a vulnerability in the remote keyless entry systems on specific makes and models of automobiles.} Similarly, these detection systems will typically be installed on public-facing perimeter internet devices, and therefore are less suited to detecting computer attacks against internet of things (IoT) devices, automotive networks, ICS, SCADA, operational technology (OT), medical devices, etc.
Given the exploit data from the data partners, we are not able to distinguish between exploit activity generated by researchers or commercial entities, versus actual malicious exploit activity. While it is likely that some proportion of exploitation does originate from non-malicious sources, at this point we have no reliable way of estimating the true proportion. However, based on the collective authors' experience, and discussions with our data providers, we do not believe that this represents a significant percentage of exploitation activity.
While these points may limit the scope of our inferences, to the extent that our data collection is representative of an ecosystem of public-facing, network-based attacks, we believe that many of the insights presented here are generalizable beyond this dataset.
In addition to these limitations, there are other adversarial considerations that fall outside the scope of this paper. For example, one potential concern is the opportunity for adversarial manipulation either of the EPSS model, or using the EPSS scores. For example, it may be possible for malicious actors to poison or otherwise manipulate the input data to the EPSS model (e.g. Github, Twitter). These issues have been studied extensively in the context of machine learning for exploit prediction~\cite{sabottke2015vulnerability} and other tasks~\cite{suciu2018does,chakraborty2018adversarial}, and their potential impact is well understood. Given that we have no evidence of such attacks in practice, and our reliance on data from many distinct sources which would reduce the leverage of adversaries, we leave an in-depth investigation of the matter for future work. Additionally, it is possible that malicious actors may change their strategies based on EPSS scores. For example, if network defenders increasingly adopt EPSS as the primary method for prioritizing vulnerability remediation, thereby deprioritizing vulnerabilities with lower EPSS scores, it may be conceivable that attackers begin to strategically incorporate these lower scoring vulnerabilities into their tactics and malware. While possible, we are not aware of any actual or suggestive evidence to this effect.
Finally, while evolving the model from a logistic regression to a more sophisticated machine learning approach greatly improved performance of EPSS, an important consequence is that interpretability of variable contributions is more difficult to quantify as we discuss in the next section.
\subsection{Variable importance and contribution}
While an XGBoost model is not nearly as intuitive or interpretable as linear regression, we can use SHAP values~\cite{lundberg2017unified} to reduce the opacity of a trained model by quantifying feature contributions, breaking down the score assigned to a CVE as $\phi_0 + \sum_i \phi_i$, where $\phi_i$ is the contribution from feature $i$, and $\phi_0$ is a bias term. We use SHAP values due to their good properties such as local accuracy (attributions sum up to the output of the model), missingness (missing features are given no importance), and consistency (modifying a model so that a feature is given more weight never decreases its attribution).
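In practice, the per-feature contributions $\phi_i$ can be extracted directly from the trained gradient-boosted trees. A minimal sketch (with random toy data standing in for the 1,477 EPSS features) is:
\begin{verbatim}
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.random((500, 20))                                # toy feature matrix
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)          # toy labels
model = xgb.XGBClassifier(n_estimators=50, max_depth=3).fit(X, y)

explainer = shap.TreeExplainer(model)
phi = explainer.shap_values(X)       # phi[i, j]: contribution of feature j to row i
phi0 = explainer.expected_value      # bias term phi_0

# Mean absolute contribution per feature (cf. the SHAP summary figures).
mean_abs = np.abs(phi).mean(axis=0)
print(mean_abs.argsort()[::-1][:5])  # indices of the five most influential features
\end{verbatim}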
The contributions from different classes of variables are shown in the kernel density plot in Figure \ref{figure:aggregated-contribution}. First, note that the figure displays the absolute value of the SHAP values, in order to infer the contribution of each variable away from zero. Second, note that the horizontal axis is presented on a log scale to highlight that the majority of features do not contribute much weight to the final output. In addition, the thin line extending out to the right in Figure \ref{figure:aggregated-contribution} illustrates how there are instances of features within each class that contribute a significant amount. Finally, note that Figure \ref{figure:aggregated-contribution} is sorted in decreasing mean absolute SHAP value for each class of features, highlighting the observation that published exploit code is the strongest contributor to the estimated probability of exploitation activity.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{fig06-variable_class_importance_density.pdf}
\caption{Density plots of the absolute SHAP values for each family of features}
\label{figure:aggregated-contribution}
\end{figure}
Figure \ref{figure:individual-contribution} identifies the 30 most significant features with their calculated mean absolute SHAP values. Again, note that higher values imply a greater influence (either positive or negative) on the final predicted value. Note that Figure \ref{figure:aggregated-contribution} shows the mean absolute SHAP value for an entire class of features. So even though Exploit Code as a class of features has a higher mean absolute SHAP value, the largest individual feature contribution comes from the count of references in the published CVE (which is in the ``CVE'' class).
\noindent\fbox{%
\parbox{\linewidth}{%
Note how the most influential feature is the count of the number of references in MITRE's CVE List, followed by ``remote attackers,'' ``code execution,'' and published exploit code in Exploit-DB, respectively.
}%
}
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{fig07-variable_importance.pdf}
\caption{Mean absolute SHAP value for individual features}
\label{figure:individual-contribution}
\end{figure}
\section{Evaluation}
\subsection{Precision (efficiency) and recall (coverage)}
Precision and recall are commonly used machine learning performance metrics, but they are not intuitive for security practitioners, and it can therefore be difficult to contextualize what these performance metrics represent in practice.
Precision (efficiency) measures how well resources are being allocated, (where low efficiency represents wasted effort), and is calculated as the true positives divided by the sum of the true and false positives.
\noindent\fbox{%
\parbox{\linewidth}{%
In the vulnerability management context, efficiency addresses the question, ``out of all the vulnerabilities remediated, how many were actually exploited?'' If a remediation strategy suggests patching 100 vulnerabilities, 60 of which were exploited, the efficiency would be 60\%.
}%
}
Recall (coverage), on the other hand, considers how well a remediation strategy actually addresses those vulnerabilities that should be patched (e.g., that have observed exploitation activity), and is calculated as the true positives divided by the sum of the true positives and false negatives.
\noindent\fbox{%
\parbox{\linewidth}{%
In the vulnerability management context, coverage addresses the question, ``out of all the vulnerabilities that are being exploited, how many were actually remediated?'' If 100 vulnerabilities are exploited, 40 of which are patched, the coverage would be 40\%.
}%
}
Therefore, for the purpose of this article, we use the terms efficiency and coverage interchangeably with precision and recall, respectively, in the discussions below.
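To make the two boxed definitions concrete, the toy calculation below computes efficiency and coverage directly from a remediation decision and the corresponding exploitation outcomes; the two arrays are illustrative placeholders, not real data.
\begin{verbatim}
import numpy as np

# Placeholder data: 1 = remediated / exploited, 0 = otherwise.
remediated = np.array([1, 1, 0, 1, 0, 0, 1, 0])
exploited  = np.array([1, 0, 0, 1, 1, 0, 1, 0])

tp = np.sum((remediated == 1) & (exploited == 1))  # remediated and exploited
fp = np.sum((remediated == 1) & (exploited == 0))  # remediated, never exploited
fn = np.sum((remediated == 0) & (exploited == 1))  # exploited, not remediated

efficiency = tp / (tp + fp)  # precision: useful fraction of remediation effort
coverage   = tp / (tp + fn)  # recall: exploited CVEs that were actually patched
print(f"efficiency={efficiency:.0%}, coverage={coverage:.0%}")  # 75%, 75%
\end{verbatim}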
\subsection{Model performance}
After several rounds of experiments to find the optimal set of features, amount of historical data, and model parameters as discussed in the previous section, we generated one final model using all vulnerabilities from November 1st, 2021 to October 31st, 2022. We then predicted the probability of exploitation activity in the next 30 days based on the state of vulnerabilities on December 1st, 2022. Using evidence of exploitation activity for the following 30 days (through Dec 30th, 2022), we measured overall performance as shown in Figure \ref{figure:precision-recall}. For comparison, we also show performance metrics for the EPSS versions 1 and 2, as well as CVSS v3 base scores for the same date and exploitation activity (Dec 1st, 2022). Figure \ref{figure:precision-recall} includes points along the precision-recall curves that represent the thresholds with each prioritization strategy.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{figures/fig01-final_pr_all_epss.pdf}
\caption{Performance of EPSS v3 compared to previous versions and CVSS Base Score}
\label{figure:precision-recall}
\end{figure}
Figure \ref{figure:precision-recall} clearly illustrates the significant improvement of the EPSS v3 model over previous versions, as well as the CVSS version 3 base score.
\noindent\fbox{%
\parbox{\linewidth}{%
EPSS v3 produces an area under the curve (AUC) of 0.7795, and an F1 score of 0.728. A remediation strategy based on this F1 score would prioritize remediation for vulnerabilities with EPSS probabilities of 0.36 and above, and would achieve an efficiency of 78.5\% and coverage of 67.8\%.
}%
}
In addition, this strategy would prioritize remediation of 3.5\% of all published vulnerabilities (representing the level of effort).
EPSS v2 has an AUC of 0.4288 and a calculated F1 score at 0.451, which prioritizes vulnerabilities with a probability of 0.16 and above. At the F1 threshold, EPSS v2 achieves an efficiency rating of 45.5\% and coverage of 44.8\% and prioritizes 4\% of the vulnerabilities in our study. EPSS v1 has an AUC of 0.2998 and a calculated F1 score at 0.361, which prioritizes vulnerabilities with a probability of 0.2 and above. At the F1 threshold, EPSS v1 achieves an efficiency rating of 43\% and coverage of 31.1\% and prioritizes 2.9\% of the vulnerabilities in our study. Finally, CVSS v3.x base score has an AUC of 0.051 and a calculated F1 score at 0.108, which prioritizes vulnerabilities with a CVSS base score of 9.7 or higher. At the F1 threshold, CVSS v3.x achieves an efficiency rating of 6.5\% and coverage of 32.3\% and prioritizes 13.7\% of the vulnerabilities in our study.
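The threshold-based numbers reported above can be reproduced for any scoring system with standard tooling; the sketch below assumes placeholder vectors \texttt{scores} (predicted probabilities) and \texttt{labels} (observed 30-day exploitation outcomes) and locates the F1-maximizing threshold along the precision-recall curve.
\begin{verbatim}
import numpy as np
from sklearn.metrics import precision_recall_curve, auc

# `labels` and `scores` are placeholders for the evaluation data described above.
precision, recall, thresholds = precision_recall_curve(labels, scores)
pr_auc = auc(recall, precision)

f1 = 2 * precision * recall / np.clip(precision + recall, 1e-12, None)
best = np.argmax(f1[:-1])   # the final (precision, recall) point has no threshold
print(f"PR-AUC={pr_auc:.3f}  F1={f1[best]:.3f} at threshold {thresholds[best]:.3f}")
print(f"efficiency={precision[best]:.1%}  coverage={recall[best]:.1%}")
\end{verbatim}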
\subsection{Probability calibrations}
A significant benefit of this model over alternative exploit scoring systems (described above) is that the output scores are true probabilities (i.e., probability of any exploitation activity being observed in the next 30 days) and can therefore be scaled to produce a threat score based on one or more vulnerabilities, such as would be found in a single network device (laptop, server), network segment, or an entire enterprise. For example, standard mathematical techniques can be used to answer questions like ``what is the probability that at least one of this asset's vulnerabilities will be exploited in the next 30 days?'' Such estimates, however, are only useful if they are calibrated and therefore reflect the true likelihood of the event occurring.
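For instance, if the vulnerabilities present on an asset have EPSS probabilities $p_1,\ldots,p_k$ and are treated as independent (an assumption we make here purely for illustration), then
\begin{equation*}
P(\text{at least one exploited in the next 30 days}) = 1-\prod_{i=1}^{k}(1-p_i),
\end{equation*}
so an asset carrying three hypothetical scores of $0.10$, $0.05$, and $0.02$ would have $1-(0.90)(0.95)(0.98)\approx 0.16$, i.e. roughly a 16\% chance of seeing exploitation activity against at least one of them.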
In order to address this, we measure calibration in two ways. First, we calculate a Brier Score~\cite{brier1950verification}, which produces a score between 0 and 1, with 0 being perfectly calibrated and 1 being perfectly uncalibrated (the original 1950 paper doubles the range from 0 to 2). Our final estimate revealed a Brier score of 0.0162, which is objectively very low (good). We also plot the predicted (binned) values against the observed (binned) exploitation activity (commonly referred to as a ``calibration plot'') as shown in Figure \ref{figure:calibration}. The closer the plotted line is to a 45 degree line (i.e. a line with a slope of 1, represented by the dashed line), the better the calibration. Again, by visual inspection, our plotted line very closely matches the 45 degree line.
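Both calibration measures can be computed with standard tooling; the sketch below assumes the same placeholder \texttt{scores} and \texttt{labels} vectors as above and mirrors the construction of Figure \ref{figure:calibration}.
\begin{verbatim}
import matplotlib.pyplot as plt
from sklearn.calibration import calibration_curve
from sklearn.metrics import brier_score_loss

# `labels` and `scores` are placeholders for observed outcomes and predictions.
print("Brier score:", brier_score_loss(labels, scores))

# Calibration plot: binned predicted probability vs. observed exploitation rate.
frac_pos, mean_pred = calibration_curve(labels, scores, n_bins=10,
                                        strategy="quantile")
plt.plot(mean_pred, frac_pos, marker="o", label="model")
plt.plot([0, 1], [0, 1], linestyle="--", label="perfect calibration")
plt.xlabel("predicted probability")
plt.ylabel("observed exploitation rate")
plt.legend()
plt.show()
\end{verbatim}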
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{figures/fig02.pdf}
\caption{Calibration Plot comparing predicted probabilities to observed exploitation period in the following 30 days}
\label{figure:calibration}
\end{figure}
\subsection{Simple remediation strategies}
Research conducted by Kenna Security and Cyentia tracked vulnerabilities at hundreds of companies and found that on average, companies were only able to remediate about 15.5\% of their open vulnerabilities in a month\cite{cyentia2022p2pv8}. This research also found that resource capacity for remediating vulnerabilities varies considerably across companies, which suggests that any vulnerability remediation strategy should accommodate varying levels of corporate resources and budgets. Indeed, organizations with fewer resources (presumably smaller organizations) may prefer to emphasize efficiency over coverage, to optimize their spending, while larger organizations may accept less efficient strategies in exchange for the greater coverage (i.e. more vulnerabilities patched).
Therefore, we compare the amount of effort required (as measured by the number of vulnerabilities needing to be remediated) for differing remediation strategies. Figure \ref{figure:alternates} highlights the performance of 6 simple (but practical) vulnerability prioritization strategies based on our test data (December 1st, 2022).\footnote{Performance is then measured based on exploitation activity in the following 30 days.}
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{figures/fig03-venn_alternative_strategies.pdf}
\caption{Alternative strategies based on simple heuristics}
\label{figure:alternates}
\end{figure}
The first diagram in the upper row considers a strategy based on the CVSS v3.x vector ``Privileges Required: None''. From an attacker's perspective, a vulnerability that can be exploited without any established account credentials is an attractive target.
\noindent\fbox{%
\parbox{\linewidth}{%
While this strategy would yield 88.1\% coverage, it would achieve only 5.1\% efficiency. That is, from a defender's perspective, this class of vulnerabilities represents over 130,000 (70\%) of all published CVEs, and would easily surpass the resource capacity of most organizations.
}%
}
``Code Execution'' is another attractive vulnerability attribute for attackers since these vulnerabilities could allow the attacker to achieve full control of a target asset. However, remediating all the code execution vulnerabilities (17\% or about 32,000 of all CVEs) would achieve 48\% coverage and 11.4\% efficiency.
The middle row of Figure \ref{figure:alternates} shows remediation strategies for vulnerabilities published in Exploit DB (left) and Buffer Overflows (CWE-119; right), respectively.
The bottom row of Figure \ref{figure:alternates} is especially revealing. The bottom right diagram shows performance metrics for a remediation strategy based on patching vulnerabilities from the Known Exploited Vulnerabilities (KEV) list (as of Dec 1, 2022) from DHS/CISA. The KEV list is meant to prioritize vulnerability remediation for US Federal agencies as per Binding Operational Directive 22-01\footnote{See \url{https://www.cisa.gov/binding-operational-directive-22-01}.}. Strictly following the KEV would remediate half of one percent (0.5\%) of all published CVEs, and produce a relatively high efficiency of 53.2\%. However, with almost 8,000 unique CVEs with exploitation activity in December, the coverage obtained from this strategy is only 5.9\%.
Alternatively, the strategy identified in the bottom left diagram is based on whether a vulnerability appears in a Metasploit module. In this case, a network defender would need to remediate almost twice as many vulnerabilities as appear on the KEV list, but would enjoy 13\% greater efficiency (60.5\% vs 53.2\%) and almost three times more coverage (14.9\% vs 5.9\%).
\noindent\fbox{%
\parbox{\linewidth}{%
Therefore, based on this simple heuristic (KEV vs Metasploit), the Metasploit strategy outperforms the KEV strategy.
}%
}
\subsection{Advanced remediation strategies}
Next, we explore the real-world performance of our model using two separate approaches. We first compare coverage among four remediation strategies while holding the \textit{level of effort} constant (i.e., the number of vulnerabilities needing to be remediated), and we then compare levels of effort while holding \textit{coverage} constant.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{figures/fig04-venn_holding_effort_constant_prezo.pdf}
\caption{Strategy comparisons holding the level of effort constant}
\label{figure:venn_holding_effort}
\end{figure}
Figure \ref{figure:venn_holding_effort} compares the four strategies while maintaining approximately the same level of effort. That is, the blue circle in the middle of each figure -- representing the number of vulnerabilities that would need to be remediated -- is fixed to the same size for each strategy, at approximately 15\% or about 28,000 vulnerabilities. The CVSS strategy, for example, would remediate vulnerabilities with a base score of 9.1 or greater, and would achieve coverage and efficiency of 33.5\% and 6.1\%, respectively.
A remediation strategy based on EPSS v2, on the other hand, would remediate vulnerabilities with an EPSS v2 score of 0.037 and greater, yielding 69.9\% coverage and 18.5\% efficiency. Already, this strategy doubles the coverage and triples the efficiency, relative to the CVSS strategy.
Even better results are achieved with a remediation strategy based on EPSS v3 which enjoys 90.4\% coverage and 24.1\% efficiency.
Figure \ref{figure:venn_holding_coverage} compares the four strategies while maintaining approximately the same level of coverage, that is, the proportion of the red circle (exploitation activity) covered by the blue circle (the number of vulnerabilities needing to be remediated). The baseline for coverage is set by a CVSS strategy of remediating vulnerabilities with a base score of 7 and above (CVEs with a ``High'' or ``Critical'' CVSS score). Such a strategy yields a respectable coverage of 82.1\%, but at the cost of a higher level of effort, needing to remediate 58.1\% (roughly 110,000) of all published CVEs. Practitioners can achieve a similar level of coverage (82\%) using EPSS v3 by prioritizing vulnerabilities scored at 0.088 and above, but with a much lower level of effort, needing to remediate only 7.3\% (just under 14,000) of vulnerabilities.
\noindent\fbox{%
\parbox{\linewidth}{%
Remediating CVEs rated as High or Critical with CVSS v3 gives a respectable level of coverage at 82.1\%, but requires remediating 58.1\% of published CVEs. On the other hand, EPSS v3 can achieve the same level of coverage but reduces the amount of effort from 58.1\% to 7.3\% of all CVEs, or fewer than 14000 vulnerabilities.
}%
}
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{figures/fig05-venn_holding_coverage_constant.pdf}
\caption{Strategy comparisons holding the coverage constant}
\label{figure:venn_holding_coverage}
\end{figure}
\section{Evolution of EPSS}
EPSS was initially inspired by the Common Vulnerability Scoring System (CVSS). The first EPSS model~\cite{jacobs2021epss} was designed to be lightweight, portable (i.e. implemented in a spreadsheet), and parsimonious in terms of the data required to score vulnerabilities. Because of these design goals, the first model used a logistic regression which produced interpretable and intuitive scores, and predicted the probability of exploitation activity being observed in the first year following the publication of a vulnerability. In order to be parsimonious, the logistic regression model was trained on only 16 independent variables (features) extracted at the time of vulnerability disclosure. While this model outperformed CVSS, the SIG highlighted some key limitations which hindered its practical adoption.
Informed by this feedback, the second version of EPSS aimed to address the major limitations of the first version. The first design decision was to switch to a centralized architecture. By centralizing and automating the data collection and scoring, a more complex model could be developed to improve performance. This decision came with a trade-off, namely a loss of the model's portability and thus, the ability to score vulnerabilities which are not publicly disclosed (e.g., zero day vulnerabilities, or flaws that may never be assigned a CVE ID). Nevertheless, focusing on public vulnerabilities under the centralized model removed the need for each implementation of EPSS to perform their own data collection, and further allowed more complex features and models. The model used in v2 is XGBoost~\cite{chen2016xgboost}, and the feature set was greatly expanded from 16 to 1,164. These efforts led to a significant improvement in predictive performance over the previous version by capturing higher order interactions in the extended feature set. Another major component of a centralized architecture was being able to adapt to new vulnerability artifacts (e.g., the publication of exploits) and produce new predictions, daily. Moreover, the SIG also commented that producing scores based on the likelihood of exploitation within the first year of a vulnerability's lifecycle was not very practical, since most prioritization decisions are made with respect to an upcoming patching cycle. As a result, v2 switched to predicting exploitation activity within the following 30-day window as of the time of scoring, which aligns with the typical remediation window of practitioners in the SIG.
For the third version of EPSS, the SIG highlighted a requirement for improved precision at identifying vulnerabilities likely to be exploited in the wild.
This drove an effort to expand the sources of exploit data by partnering with multiple organizations willing to share data for model development, and engineer more complex and informative features.
These label and feature improvements, along with a methodical hyper-parameter tuning approach, enabled improved training of an XGBoost classifier. This allowed the proposed v3 model to achieve an overall 82\% improvement in classifier performance over v2, with the Area Under the Precision/Recall Curve increasing from 0.429 to 0.779.
This boost in prediction performance allows organizations to substantially improve their prioritization practices and design data-driven patching strategies.
\section{Introduction}
Vulnerability management, the practice of identifying, prioritizing, and patching known software vulnerabilities, has been a continuous challenge for defenders for decades. This issue is exacerbated by the increasing number of new vulnerabilities that are being disclosed annually. For example, MITRE published\footnote{Not marked as REJECT or RESERVED.} 25,068 new vulnerabilities during the 2022 calendar year, a 24.3\% increase over 2021.
Adding to the increasing rate of published vulnerabilities are challenges incurred by practitioners when trying to remediate them. Recent research conducted by Kenna Security and Cyentia tracked exposed vulnerabilities at hundreds of companies and found that the monthly median rate of remediation was only 15.5\%, while a quarter of companies remediated less than 6.6\% of their open vulnerabilities per month~\cite{cyentia2022p2pv8}. As a consequence of the increasing awareness of software flaws and the limited capacity to remediate them, vulnerability prioritization has become both a chronic and an acute concern for every organization attempting to reduce their attack surface.
The prioritization process involves scoring and ranking vulnerabilities according to assessments, often based on the industry standard Common Vulnerability Scoring System (CVSS)~\cite{cvss3guide}. However, only the Base metric group of CVSS is being assigned and distributed at scale by NIST, and this group of metrics is unable to adapt to post-disclosure information, such as the publication of exploits or technical artifacts, which can affect the odds of attacks against a vulnerability being observed in the wild.
As a result, while only 5\% of known vulnerabilities are exploited in the wild~\cite{jacobs2020improving}, numerous prior studies have shown that CVSS does not perform well when used to prioritize exploited vulnerabilities over those without evidence of exploitation~\cite{Allodi12:VulnerabilityScores,eiram2013exploitability,allodi2014comparing}.
While several other efforts have been made to capture exploitation likelihood in vulnerability assessments, these approaches are either vendor-specific~\cite{MS:ExploitabilityIndex,RedHat:SeverityRating} or proprietary and not available publicly~\cite{VPRDoc,Rapid7Risk,recordedfuturerisk}.
\noindent\fbox{%
\parbox{\linewidth}{%
In order to improve remediation practices, network defenders need a scoring system that can accurately quantify the \emph{likelihood of exploits in the wild}, and that is able to \emph{adapt to new information} published after the initial disclosure of a vulnerability.
}
}
Any effort to develop a new capability to understand, anticipate, and respond to new cyber threats must overcome three main challenges: i) it must address the requirements of practitioners who rely on it; ii) it must provide significant performance improvements over existing scoring systems; and iii) it must have a low barrier to entry for adoption and use.
To address these challenges, a Special Interest Group (SIG) was formed in early 2020 at the Forum of Incident Response and Security Teams (FIRST). From its inception until the time of this writing, the Exploit Prediction Scoring System (EPSS) SIG has gathered 170 members from across the world, representing practitioners, researchers, government agencies, and software developers.\footnote{See \url{https://www.first.org/epss}.}
The SIG was created with the publication of the first EPSS model for predicting the likelihood of exploits in the wild~\cite{jacobs2021epss} and is organized around a mailing list, a discussion forum, and bi-weekly meetings.
This unique environment represented an opportunity to understand the challenges faced by practitioners when performing vulnerability prioritization, and therefore address the first challenge raised above by designing a scoring system that takes into account practitioner requirements.
To address the second challenge and achieve significant performance improvements, the SIG provided subject matter expertise, which guided feature engineering with high utility at predicting exploits in the wild. Finally, to address the challenges of designing a public and readily-available scoring system, the SIG attracted a set of industry partners willing to share proprietary data for the development of the model, the output of which can then be made public. This allowed EPSS scores to be publicly available at scale, lowering the barrier to entry for those wanting to integrate EPSS into their prioritization pipeline.
This paper presents the latest (third) iteration of the EPSS model, as well as lessons learned in its design and their impact on designing a scoring system. The use of a novel and diverse feature set and state-of-the-art machine learning techniques allows EPSS to improve prediction performance by 82\% over its predecessor (as measured by the area under the precision/recall curve, which improved from 0.429 to 0.779). EPSS is able to score all vulnerabilities published on MITRE's CVE List (and the National Vulnerability Database), and can reduce the amount of effort required to patch critical vulnerabilities to one-eighth of that required by a comparable strategy based on CVSS.
This paper makes the following contributions:
\begin{enumerate}
\item Presents lessons learned from developing an exploit prediction model that integrates the functional requirements of a community of nearly 200 practitioners and researchers.
\item Engineers novel features for exploit prediction and uses them to train the EPSS classifier for predicting the likelihood of exploits in the wild.
\item Analyzes the practical utility of EPSS by showing that it can significantly improve remediation strategies compared to static baselines.
\end{enumerate}
\section{Modeling Approach}
\subsection{Preparing ground truth and features}
Exploitation activity is considered as any recorded attempt to exploit a vulnerability, regardless of the success of the attempt, and regardless of whether the targeted vulnerability is present. All observed exploitation activity is recorded with the date the activity occurred and aggregated across all data sources by the date and CVE identifier. The resulting ground truth is a binary value for each vulnerability of whether exploitation activity was observed or not, for each day.
Since many of the features may change from day to day, we construct features for the training data on a daily basis. In order to reduce the size of our data (and thus the time and memory needed to train models), we aggregate consecutive daily observations over which the features do not change. The length of each aggregated exposure window and the number of days with exploitation activity within it are included in the model training.
When constructing the test data, a single date is selected (typically "today", see next section) and all of the features are generated based on the state of vulnerabilities for that date. Since the final model is intended to estimate the probability of exploitation in the next 30 days, we construct the ground truth for the test data by looking for exploitation activity over the following 30 days from the test date selected.
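As a concrete illustration of this label construction, the sketch below builds the 30-day lookahead ground truth from a table of daily exploitation observations; the table \texttt{exploitation}, the set \texttt{all\_cves}, the column names, and the snapshot date are all placeholders for the real data pipeline.
\begin{verbatim}
import pandas as pd

# Placeholders: `exploitation` has one row per (cve, date) with observed
# exploitation activity; `all_cves` is the set of CVEs being scored.
snapshot = pd.Timestamp("2022-12-01")        # date on which features are frozen
window_end = snapshot + pd.Timedelta(days=30)

in_window = exploitation[(exploitation["date"] >= snapshot) &
                         (exploitation["date"] < window_end)]
exploited_cves = set(in_window["cve"])

# Binary ground truth: any exploitation activity within the 30-day window?
labels = {cve: int(cve in exploited_cves) for cve in all_cves}
\end{verbatim}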
\subsection{Model selection}
The first EPSS model~\cite{jacobs2021epss} sought not only to accurately predict exploitation but also to do so in a parsimonious, easy-to-implement way. As a result, regularized logistic regression (Elasticnet) was chosen to produce a generalized linear model with only a handful of variables. The current model relaxes this requirement in the hope of improving performance and providing more accurate exploitation predictions. In particular, capturing non-linear relationships between inputs and exploitation activity allows finer-grained exploitation activity to be predicted more accurately.
Removing the requirement of a simple model, together with the need to model complex relationships, expands the universe of potential models. Indeed, many machine learning algorithms have been developed for this exact purpose. However, testing all models is impractical because each model requires significant engineering and calibration to achieve an optimal outcome. We therefore focus on a single type of model that has proven to be particularly performant on these data. Recent research has illustrated that panel (tabular) data, such as ours, can be most successfully modeled using tree-based methods (in particular, gradient boosted trees)~\cite{grinsztajn2022tree}, arriving at similar or better predictive performance with less computation and tuning in comparison to other methods such as neural networks. Given the results in \cite{grinsztajn2022tree}, we focus our efforts on tuning a common implementation of gradient boosted trees, XGBoost~\cite{chen2016xgboost}.
\noindent\fbox{%
\parbox{\linewidth}{%
XGBoost is a popular, well documented, and performant implementation of the gradient boosted tree algorithm in which successive decision trees are trained to iteratively reduce prediction error.
}%
}
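For readers unfamiliar with the library, a minimal training sketch is shown below, using the non-default hyperparameter values reported later in Table \ref{table:hyperparams}; the data objects \texttt{X\_train}, \texttt{y\_train}, and \texttt{X\_test} are placeholders.
\begin{verbatim}
from xgboost import XGBClassifier

# Placeholders: X_train/y_train are the training features and 30-day labels,
# X_test the feature matrix for the snapshot date being scored.
model = XGBClassifier(
    n_estimators=65,          # number of boosting rounds
    learning_rate=0.11,       # shrinkage applied to each tree's contribution
    max_depth=20,             # maximum tree depth
    subsample=0.75,           # row subsampling per boosting round
    gamma=10,                 # minimum loss reduction for a leaf-node split
    max_delta_step=0.9,       # cap on each tree's weight estimates
    objective="binary:logistic",
)
model.fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]   # probability of exploitation activity
\end{verbatim}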
\subsection{Train/test split and measuring performance}
In order to reduce over-fitting, we implement two restrictions. First, we implement a time-based train/test split, constructing our training data sets on data up to and including October 31, 2021. We then construct the test data set based on the state of vulnerabilities on December 1st, 2021, providing one month between the end of the training data and the test data. As mentioned above, the ground truth in the test data is any exploitation activity from December 1st to December 30th, 2021. Second, we use 5-fold cross validation, with the folds based on each unique CVE identifier. This selectively removes vulnerabilities from the training data and tests the performance on the held-out set, thus further reducing the likelihood of over-fitting.
Finally, we measure performance by calculating the area under the curve (AUC) based on precision and recall across the full range of predictions. We selected precision-recall curves since we have a severe class imbalance in exploited vulnerabilities, and using accuracy or traditional Receiver Operating Characteristic (ROC) curves may be misleading due to that imbalance.
\subsection{Tuning and optimizing model performance}
Despite being a well-studied approach, the use of gradient boosted trees and XGBoost for prediction problems still requires some effort in identifying useful features and tuning the model to achieve good performance. This requires a priori decisions about which features to include and about the hyperparameter values for the XGBoost algorithm.
The features outlined in \autoref{subsec:explan_variables} include 28,724 variables. Many of these variables are binary features indicating whether a vulnerability affects a particular vendor or can be described by a specific CWE. While the XGBoost algorithm is efficient, including all of these variables in our inference is technically infeasible. To reduce the scope of features, we take a naive, yet demonstrably effective, approach of removing variables below a specific occurrence rate~\cite{yang1997comparative}. This reduced the input feature set to 1,477 variables.
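The occurrence-rate filter can be expressed in a few lines; the feature matrix \texttt{X} and the threshold value below are placeholders (the actual threshold is selected during tuning, as described below).
\begin{verbatim}
import numpy as np

# Placeholders: `X` is a (CVEs x features) 0/1 matrix, `min_rate` a threshold.
X = np.asarray(X)
min_rate = 0.001
occurrence_rate = X.mean(axis=0)   # fraction of CVEs for which a feature is set
keep = occurrence_rate >= min_rate
X_reduced = X[:, keep]             # e.g. from 28,724 columns to the retained set
\end{verbatim}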
One additional challenge with our data is the temporal nature of our predictions; in particular, we must decide exactly how much historical data to include in the training set. In addition to the XGBoost hyperparameters and the sparsity threshold, we therefore constructed four different sets of training data, covering 6 months and 1, 2, and 3 years of history, to determine which time horizon would provide the best predictions.
To identify the time horizon and sparsity threshold described above, as well as the other hyperparameters needed by our implementation of gradient boosted trees, we take a standard approach described in \cite{yang2020hyperparameter}. We first define reasonable ranges for the hyperparameters, use Latin Hypercube sampling over the set of possible combinations, and compute model performance for each sampled set of hyperparameters; we then build an additional model (also a gradient boosted tree) that predicts performance from a given set of hyperparameters and use it to select the best-performing configuration.
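A minimal sketch of the sampling step is given below, using SciPy's quasi-Monte Carlo module; the parameter ranges are placeholders, and the train-and-evaluate loop as well as the surrogate-model step are only indicated in comments.
\begin{verbatim}
from scipy.stats import qmc

# Placeholder ranges for three hyperparameters:
# learning rate, maximum tree depth, subsample ratio.
lower = [0.01,  3, 0.5]
upper = [0.30, 25, 1.0]

sampler = qmc.LatinHypercube(d=3, seed=0)
unit_points = sampler.random(n=64)               # 64 points in the unit cube
candidates = qmc.scale(unit_points, lower, upper)

for lr, depth, subsample in candidates:
    # Train an XGBoost model with these values (depth rounded to an integer),
    # record its precision-recall AUC, then fit a surrogate gradient boosted
    # tree mapping hyperparameters to performance and pick the best candidate.
    pass
\end{verbatim}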
\begin{table}[t]
\caption{Non-default hyperparameter values for XGBoost algorithm and data selection}
\label{table:hyperparams}
\begin{tabular}{ | m{20em} | r | }
\hline
\textbf{Parameter} & \textbf{Value} \\
\hline
Time Horizon & 1 year \\
\hline
Learning rate & 0.11 \\
\hline
Maximum tree depth & 20 \\
\hline
Subsample ratio of the training instances & 0.75 \\
\hline
Minimum loss reduction for leaf node partition & 10 \\
\hline
Maximum delta step & 0.9 \\
\hline
The number of boosting rounds & 65 \\
\hline
\end{tabular}
\end{table}
The above process results in the parameters shown in Table \ref{table:hyperparams}. Note that, of the tested time horizons, none dramatically outperformed the others, with 1 year only slightly outperforming the other tested possibilities.
\section{Literature Review and Related Scoring Systems}
This research is informed by multiple bodies of literature. First, there are a number of industry efforts that seek to provide some measure of exploitability for individual vulnerabilities, though there is wide variation in their scope and availability. The base metric group of CVSS, the leading standard for measuring the severity of a vulnerability, is composed of two parts, measuring impact and exploitability~\cite{cvss3guide}.
The score is built on expert judgement, capturing, for example, the observation that a broader ability to exploit a vulnerability (i.e., remotely across the Internet, as opposed to requiring local access to the device) increases the apparent likelihood that a vulnerability could be exploited, while a more complex exploit or a requirement for user interaction decreases it, all else being equal. CVSS has been repeatedly shown by prior work~\cite{allodi2012preliminary,allodi2014comparing}, as well as our own evidence, to be insufficient for capturing all the factors that drive exploitation in the wild. The U.S. National Vulnerability Database (NVD) includes a CVSS base score with nearly all vulnerabilities it has published. Because of the widespread use of CVSS, specifically the base score, as a prioritization strategy, we will compare our performance against CVSS as well as against our previous models.
Exploit likelihood is also modeled through various vendor-specific metrics. In 2008, Microsoft introduced the Exploitability Index for vulnerabilities in their products~\cite{MS:ExploitabilityIndex}.
It provides four ratings of the likelihood that a vulnerability will be exploited: whether exploitation has already been detected, and whether exploitation is more likely, less likely, or unlikely. The metric has been investigated before~\cite{reuters2009microsoft,eiram2013exploitability,younis2015comparing} and was shown to have limited performance at predicting exploitation in the wild~\cite{MS:ExploitabilityIndexCritiqueDarkReading,reuters2009microsoft} or the development of functional exploits~\cite{suciu2022expected}.
Red Hat provides a 4-level severity rating: low, moderate, important, and critical~\cite{RedHat:SeverityRating}.
In addition to capturing a measure of the impact to a vulnerable system, this index also captures some notion of exploitability. For example, the ``low'' severity rating represents vulnerabilities that are unlikely to be exploited, whereas the ``critical'' severity rating reflects vulnerabilities that could be easily exploited by an unauthenticated remote attacker. Like the Exploitability Index, Red Hat's metric is vendor-specific and has limitations in reflecting exploitation likelihood~\cite{suciu2022expected}.
A series of commercial solutions also aim to capture the likelihood of exploits. Tenable, a leading vendor of intrusion detection systems, created the Vulnerability Priority Rating (VPR), which, like CVSS, combines information about both impact to a vulnerable system, and the exploitability (threat) of a vulnerability in order to help network defenders better prioritize remediation efforts~\cite{VPRDoc}.
For example, the threat component of VPR ``reflects both recent and potential future threat activity'' by examining whether exploit code is publicly available, whether there are mentions of active exploitation on social media or in the dark web, etc. Rapid7's Real Risk Score product uses its own collection of data feeds to produce a score between 1 and 1,000.
This score is a combination of the CVSS base score, ``malware exposure, exploit exposure and ease of use, and vulnerability age'' and seeks to produce a better measure of both exploitability and ``risk''~\cite{Rapid7Risk}. Recorded Future's Vulnerability Intelligence product integrates multiple data sources, including threat information, and localized asset criticality~\cite{recordedfuturerisk}.
The predictions, performance evaluations and implementation details of these solutions are not publicly available.
These industry efforts are either vendor-specific, score only subsets of vulnerabilities, based on expert opinion and assessments and therefore not entirely data-driven, or proprietary and not publicly available.
Our work is also related to a growing academic research field of predicting and detecting vulnerability exploitation. A large body of work focuses on predicting the emergence of proof-of-concept or functional exploits~\cite{bozorgi2010beyond,edkrantz2015predicting,bullough2017predicting,reinthal2018data,alperin2019risk,bhatt2021exploitability,suciu2022expected}, not necessarily whether these exploits will be used in the wild, as is done with EPSS. Papers predicting exploitation in the wild have used alternative sources of exploitation evidence, most notably data from Symantec's IDS, to build prediction models~\cite{sabottke2015vulnerability,almukaynizi2017proactive,chen2019using,xiao2018patching,tavabi2018darkembed,fang2020fastembed,hoque2021improved}. Most of these papers build vulnerability feature sets from commonly used data sources such as NVD or OSVDB, although some of them use novel identifiers for exploitation: \cite{sabottke2015vulnerability} infers exploitation using Twitter data, \cite{xiao2018patching} uses patching patterns and blacklist information to predict whether organizations are facing new exploits, while \cite{tavabi2018darkembed} uses natural language processing methods to infer the context of darkweb/deepweb discussions.
\noindent\fbox{%
\parbox{\linewidth}{%
Compared to the other scoring systems and research described above, EPSS is: a rigorous and ongoing research effort; an international, community-driven effort; designed to predict vulnerability exploitation in the wild; available for all known and published vulnerabilities; updated daily to reflect new vulnerabilities and new exploit-related information; and made available freely to the public.
}%
}
|
{
"arxiv_id": "2302.14199",
"language": "en",
"timestamp": "2023-03-01T02:04:33",
"url": "https://arxiv.org/abs/2302.14199",
"yymm": "2302"
} | \section{Introduction}\label{s1}
Let $a$ and $q$ be variables and define the conventional $q$-Pochhammer symbol $$(a)_n=(a;q)_n:=\prod_{k=0}^{n-1}(1-aq^k)$$ for any positive integer $n$ and $(a)_0=1$. For $\lvert q\rvert<1$, we define $$(a)_{\infty}=(a;q)_{\infty}:=\lim_{n\rightarrow\infty}(a;q)_n.$$ We define $(a)_n$ for all real numbers $n$ by $$(a)_n := \dfrac{(a)_{\infty}}{(aq^n)_{\infty}}.$$ For variables $a_1,a_2,\ldots,a_k$, we define the shorthand notations $$(a_1,a_2,\ldots,a_k;q)_n:=\prod_{i=1}^{k}(a_i;q)_n\, ,$$ $$(a_1,a_2,\ldots,a_k;q)_{\infty}:=\prod_{i=1}^{k}(a_i;q)_{\infty}.$$\par Next, we require the following formulas from Gasper and Rahman \cite[Appendix I]{Gas-Rah04}
\begin{equation}\label{eq11}
(a;q)_{n+k}=(a;q)_n(aq^n;q)_k,
\end{equation}
\begin{equation}\label{eq12}
(a;q)_{-n}=\dfrac{1}{(aq^{-n};q)_n}=\dfrac{(-q/a)^n}{(q/a;q)_n}q^{\binom{n}{2}},
\end{equation}
\begin{equation}\label{eq13}
(aq^{-n};q)_k=\dfrac{(a;q)_k(q/a;q)_n}{(q^{1-k}/a;q)_n}q^{-nk},\quad \text{and}
\end{equation}
\begin{equation}\label{eq14}
\dfrac{(a;q)_{n-k}}{(b;q)_{n-k}}=\dfrac{(a;q)_n}{(b;q)_n}\dfrac{(q^{1-n}/b;q)_k}{(q^{1-n}/a;q)_k}\left(\dfrac{b}{a}\right)^k.
\end{equation}
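To illustrate how these identities follow from the definitions above, note for instance that (\ref{eq12}) is obtained by writing
\begin{equation*}
(a;q)_{-n}=\dfrac{(a;q)_{\infty}}{(aq^{-n};q)_{\infty}}=\dfrac{1}{\prod_{j=1}^{n}(1-aq^{-j})}=\dfrac{1}{(aq^{-n};q)_n},
\end{equation*}
and then factoring $1-aq^{-j}=-aq^{-j}(1-q^{j}/a)$ in each term of the product, which gives
\begin{equation*}
(a;q)_{-n}=\dfrac{(-1/a)^{n}q^{n(n+1)/2}}{(q/a;q)_n}=\dfrac{(-q/a)^n}{(q/a;q)_n}q^{\binom{n}{2}}.
\end{equation*}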
\\\par We invite the reader to examine Gasper and Rahman’s text \cite{Gas-Rah04} for an introduction to basic hypergeometric series, whose notations we follow. For instance, the ${}_r\phi_{r-1}$ unilateral and ${}_r\psi_r$ bilateral basic hypergeometric series with base $q$ and argument $z$ are defined, respectively, by
\begin{equation*}
\begin{multlined}
{}_r\phi_{r-1} \left[
\setlength\arraycolsep{2pt}
\begin{matrix}
a_1,\ldots,a_r \\
\multicolumn{2}{c}{
\begin{matrix}
b_1,\ldots,b_{r-1}
\end{matrix}}
\end{matrix} \hspace{1pt}
;q, z \right]:=\sum_{k=0}^{\infty}\dfrac{(a_1,\ldots,a_r;q)_k}{(q,b_1,\ldots,b_{r-1};q)_k}z^k,\quad \lvert z\rvert<1, \\
\quad{}_r\psi_r \left[
\begin{matrix}
a_1,\ldots,a_r \\
b_1,\ldots,b_r
\end{matrix}
;q, z \right]:=\sum_{k=-\infty}^{\infty}\dfrac{(a_1,\ldots,a_r;q)_k}{(b_1,\ldots,b_r;q)_k}z^k,\quad \left\lvert\dfrac{b_1\ldots b_r}{a_1\ldots a_r}\right\rvert<\lvert z\rvert<1.
\end{multlined}
\end{equation*}
\par Throughout the remainder of this paper, we assume that $\lvert q\rvert<1$. We now present the statements of the main identities which we prove in this paper.
\\
\begin{theorem}(Bailey \cite[eq. $3.1$]{Ba50})\label{5psi51}
For any non-negative integer $n$,
\begin{equation}\label{eq15}
{}_5\psi_5 \left[
\begin{matrix}
b, &c, &d, &e, &q^{-n} \\
q/b, &q/c, &q/d, &q/e, &q^{n+1}
\end{matrix}
;q, q \right]=\dfrac{(q,q/bc,q/bd,q/cd;q)_n}{(q/b,q/c,q/d,q/bcd;q)_n}
\end{equation}
where $bcde=q^{n+1}$.
\end{theorem}
\begin{theorem}(Bailey \cite[eq. $3.2$]{Ba50})\label{5psi52}
For any non-negative integer $n$,
\begin{equation}\label{eq16}
{}_5\psi_5 \left[
\begin{matrix}
b, &c, &d, &e, &q^{-n} \\
q^2/b, &q^2/c, &q^2/d, &q^2/e, &q^{n+2}
\end{matrix}
;q, q \right]=\dfrac{(1-q)(q^2,q^2/bc,q^2/bd,q^2/cd;q)_n}{(q^2/b,q^2/c,q^2/d,q^2/bcd;q)_n}
\end{equation}
where $bcde=q^{n+3}$.
\end{theorem}
\begin{theorem}(Bailey \cite[eq. $2.2$]{Ba50})\label{3psi31}
\begin{equation}\label{eq17}
{}_3\psi_3 \left[
\begin{matrix}
b, &c, &d \\
q/b, &q/c, &q/d
\end{matrix}
;q, \dfrac{q}{bcd} \right] = \dfrac{(q,q/bc,q/bd,q/cd;q)_{\infty}}{(q/b,q/c,q/d,q/bcd;q)_{\infty}}.
\end{equation}
\end{theorem}
\begin{theorem}(Bailey \cite[eq. $2.3$]{Ba50})\label{3psi32}
\begin{equation}\label{eq18}
{}_3\psi_3 \left[
\begin{matrix}
b, &c, &d \\
q^2/b, &q^2/c, &q^2/d
\end{matrix}
;q, \dfrac{q^2}{bcd} \right] = \dfrac{(q,q^2/bc,q^2/bd,q^2/cd;q)_{\infty}}{(q^2/b,q^2/c,q^2/d,q^2/bcd;q)_{\infty}}.
\end{equation}
\end{theorem}
\bigskip
Bailey \cite{Ba50} proved Theorems \ref{3psi31} and \ref{3psi32} by letting $a\rightarrow 1$ and setting $a=q$ in the ${}_6\phi_5$ summation formula \cite[II.$20$]{Gas-Rah04} respectively and mentioned that (\ref{eq15}) and (\ref{eq16}) follow from Jackson's $q$-analogue of Dougall's theorem \cite[II.$22$]{Gas-Rah04}.\par Our work is motivated by Ismail's initial proof \cite{Is77} of Ramanujan's ${}_1\psi_1$ summation formula which can be stated as
\begin{equation}\label{eq19}
{}_1\psi_1 \left[
\begin{matrix}
a \\
b
\end{matrix}
;q, z \right]=\dfrac{(q,b/a,az,q/az;q)_{\infty}}{(b,q/a,z,b/az;q)_{\infty}}
\end{equation} where $\lvert b/a\rvert<\lvert z\rvert<1$ and later Askey and Ismail's proof \cite{As-Is79} of Bailey's very-well-poised ${}_6\psi_6$ identity which is
\begin{equation}\label{eq110}
\begin{multlined}
{}_6\psi_6 \left[
\begin{matrix}
q\sqrt{a}, &-q\sqrt{a}, &b, &c, &d, &e \\
\sqrt{a}, &-\sqrt{a}, &aq/b, &aq/c, &aq/d, &aq/e
\end{matrix}
;q, \dfrac{qa^2}{bcde} \right] \\ =\dfrac{(aq,aq/bc,aq/bd,aq/be,aq/cd,aq/ce,aq/de,q,q/a;q)_{\infty}}{(aq/b,aq/c,aq/d,aq/e,q/b,q/c,q/d,q/e,qa^2/bcde;q)_{\infty}}
\end{multlined}
\end{equation}
provided $\lvert qa^2/bcde\rvert<1$.\par To prove (\ref{eq19}) and (\ref{eq110}), Ismail \cite{Is77} and Askey and Ismail \cite{As-Is79} show that the two sides of \ref{eq19} and \ref{eq110} are analytic functions that agree infinitely often near a point that is an interior point of the domain of analyticity and hence they are identically equal.\\\par
To this end, we employ the following $q$-hypergeometric series identities
\\
\begin{theorem}(Carlitz \cite[eq. $3.4$]{Car73})\label{5phi4}
For any non-negative integer $n$,
\begin{equation}\label{eq111}
\begin{multlined}
{}_5\phi_4 \left[
\setlength\arraycolsep{6pt}
\begin{matrix}
q^{-n}, &\quad\quad b, &\quad\quad c, &\quad\quad d, &e \\
\multicolumn{5}{c}{
\begin{matrix}
q^{-n+1}/b, &q^{-n+1}/c, &q^{-n+1}/d, &q^{-n+1}/e
\end{matrix}}
\end{matrix} \hspace{2pt}
;q, q \right] \\ =q^{m(1+m-n)}(de)^{-m}\dfrac{(q^{-n})_{2m}(q^{-n+1}/bc,q^{-n+1}/bd,q^{-n+1}/be;q)_m}{(q,q^{-n+1}/b,q^{-n+1}/d,q^{-n+1}/e,q^{n-m}c;q)_m}(q^{2m-n})_{n-2m}
\end{multlined}
\end{equation}
where $m=\lfloor n/2\rfloor$ \textit{and} $bcde=q^{1+m-2n}$.
\end{theorem}
We note that for $n$ even, Theorem \ref{5phi4} is Chu's \cite[p.~$279$]{Chu09} Corollary $3$ where $\delta=0$ and for $n$ odd, Theorem \ref{5phi4} is Chu's \cite[p.~$280$]{Chu09} Corollary $7$ where $\delta=0$.
\\
\begin{theorem}(Jackson's terminating q-analogue of Dixon's sum \cite[II.$15$]{Gas-Rah04})\label{3phi2Jackson}
For any non-negative integer $m$,
\begin{equation}\label{eq112}
{}_3\phi_2 \left[
\setlength\arraycolsep{6pt}
\begin{matrix}
q^{-2m}, &\quad a, &\quad b \\
\multicolumn{3}{c}{
\begin{matrix}
q^{-2m+1}/a, &q^{-2m+1}/b
\end{matrix}}
\end{matrix} \hspace{2pt}
;q, \dfrac{q^{-m+2}}{ab} \right] \\ =\dfrac{(a,b;q)_m(q,ab;q)_{2m}}{(q,ab;q)_m(a,b;q)_{2m}}.
\end{equation}
\end{theorem}
\begin{theorem}(Carlitz \cite[eq. $2.5$]{Car73})\label{3phi2Carlitz}
For any non-negative integer $n$,
\begin{equation}\label{eq113}
\begin{multlined}
{}_3\phi_2 \left[
\setlength\arraycolsep{6pt}
\begin{matrix}
q^{-n}, &\quad a, &\quad b \\
\multicolumn{3}{c}{
\begin{matrix}
q^{-n+1}/a, &q^{-n+1}/b
\end{matrix}}
\end{matrix} \hspace{2pt}
;q, \dfrac{q^{-n+m+1}z}{ab} \right] \\ =\sum_{2j\le n}(-1)^j\dfrac{(q^{-n})_{2j}(q^{-n+1}/ab)_j}{(q,q^{-n+1}/a,q^{-n+1}/b;q)_j}q^{-j(j-1)/2+mj}z^j(z)_{m-j}(q^{j+m-n}z)_{n-m-j}
\end{multlined}
\end{equation}
where $m=\lfloor n/2\rfloor$.
\end{theorem}
\bigskip
The paper is organized as follows. In section \ref{s2}, we give the proofs of the two ${}_5\psi_5$ identities (\ref{eq15}) and (\ref{eq16}), respectively. In section \ref{s3}, we show that the two ${}_5\psi_5$ identities (\ref{eq15}) and (\ref{eq16}) become the two ${}_3\psi_3$ identities (\ref{eq17}) and (\ref{eq18}), respectively, when $n\rightarrow \infty$. Finally, we provide proofs of the two ${}_3\psi_3$ identities (\ref{eq17}) and (\ref{eq18}) in section \ref{s4}.
\section{Proofs of the two ${}_5\psi_5$ identities}\label{s2}
\subsection{Proof of Theorem \ref{5psi51}}
\begin{proof}
Replacing $n$ by $2m$, $b$ by $bq^{-m}$, $c$ by $cq^{-m}$, $d$ by $dq^{-m}$ and $e$ by $eq^{-m}$ in (\ref{eq111}), we get
\begin{equation}\label{eq21}
\begin{multlined}
{}_5\phi_4 \left[
\setlength\arraycolsep{5pt}
\begin{matrix}
q^{-2m}, &\quad bq^{-m}, &\quad cq^{-m}, &\quad dq^{-m}, &\quad eq^{-m} \\
\multicolumn{5}{c}{
\begin{matrix}
q^{-m+1}/b, &q^{-m+1}/c, &q^{-m+1}/d, &q^{-m+1}/e
\end{matrix}}
\end{matrix} \hspace{2pt}
;q,q \right] \\ =q^{m^2+m}(de)^{-m}\dfrac{(q^{-2m})_{2m}(q/bc,q/bd,q/be;q)_m}{(q,q^{-m+1}/b,q^{-m+1}/d,q^{-m+1}/e,c;q)_m}
\end{multlined}
\end{equation} where $bcde=q^{m+1}$. Now, we have
\[{}_5\psi_5 \left[
\begin{matrix}
b, &c, &d, &e, &q^{-n} \\
q/b, &q/c, &q/d, &q/e, &q^{n+1}
\end{matrix}
;q, q \right]\]
\begin{align*}&=\sum_{k=-\infty}^{\infty}\dfrac{(b,c,d,e,q^{-n};q)_k}{(q/b,q/c,q/d,q/e,q^{n+1};q)_k}q^k\\
&=\sum_{k=-n}^{\infty}\dfrac{(b,c,d,e,q^{-n};q)_k}{(q/b,q/c,q/d,q/e,q^{n+1};q)_k}q^k\quad (\text{since}\,1/(q^{n+1})_k=0\,\text{for all}\,k<-n)\\
&=\sum_{k=0}^{\infty}\dfrac{(b,c,d,e,q^{-n};q)_{k-n}}{(q/b,q/c,q/d,q/e,q^{n+1};q)_{k-n}}q^{k-n}\\
&=\dfrac{(b,c,d,e,q^{-n};q)_{-n}q^{-n}}{(q/b,q/c,q/d,q/e,q^{n+1};q)_{-n}}\sum_{k=0}^{\infty}\dfrac{(q^{-2n},bq^{-n},cq^{-n},dq^{-n},eq^{-n};q)_k}{(q,q^{-n+1}/b,q^{-n+1}/c,q^{-n+1}/d,q^{-n+1}/e;q)_k}q^k\\
&=\dfrac{(b,c,d,e,q^{-n};q)_{-n}(q^{-2n})_{2n}(q/bc,q/bd,q/be;q)_nq^{n^2}}{(q/b,q/c,q/d,q/e,q^{n+1};q)_{-n}(q,q^{-n+1}/b,q^{-n+1}/d,q^{-n+1}/e,c;q)_n(de)^n}
\end{align*}
\\
where the last equality above follows from (\ref{eq21}) (after replacing $m$ by $n$). Then simplifying the last expression above using (\ref{eq11}), (\ref{eq12}) and (\ref{eq13}) with appropriate substitutions, we get
\\
\[{}_5\psi_5 \left[
\begin{matrix}
b, &c, &d, &e, &q^{-n} \\
q/b, &q/c, &q/d, &q/e, &q^{n+1}
\end{matrix}
;q, q \right]=\dfrac{(q,q/bc,q/bd,q/cd;q)_n}{(q/b,q/c,q/d,q/bcd;q)_n}\]
\\
where $bcde=q^{n+1}$ for $n\in \mathbb{N}\cup\{0\}$. This completes the proof of Theorem \ref{5psi51}.
\end{proof}
\bigskip
\subsection{Proof of Theorem \ref{5psi52}}
\begin{proof}
Replacing $n$ by $2m+1$, $b$ by $bq^{-m-1}$, $c$ by $cq^{-m-1}$, $d$ by $dq^{-m-1}$ and $e$ by $eq^{-m-1}$ in (\ref{eq111}), we get
\begin{equation}\label{eq22}
\begin{multlined}
{}_5\phi_4 \left[
\setlength\arraycolsep{5pt}
\begin{matrix}
q^{-2m-1}, &bq^{-m-1}, &cq^{-m-1}, &dq^{-m-1}, &eq^{-m-1} \\
\multicolumn{5}{c}{
\begin{matrix}
q^{-m+1}/b, &q^{-m+1}/c, &q^{-m+1}/d, &q^{-m+1}/e
\end{matrix}}
\end{matrix} \hspace{2pt}
;q,q \right] \\ =(q-1)q^{m^2+2m-1}(de)^{-m}\dfrac{(q^{-2m-1})_{2m}(q^2/bc,q^2/bd,q^2/be;q)_m}{(q,q^{-m+1}/b,q^{-m+1}/d,q^{-m+1}/e,c;q)_m}.
\end{multlined}
\end{equation} where $bcde=q^{m+3}$. Now, we have
\[{}_5\psi_5 \left[
\begin{matrix}
b, &c, &d, &e, &q^{-n} \\
q^2/b, &q^2/c, &q^2/d, &q^2/e, &q^{n+2}
\end{matrix}
;q, q \right]\]
\begin{align*}&=\sum_{k=-\infty}^{\infty}\dfrac{(b,c,d,e,q^{-n};q)_k}{(q^2/b,q^2/c,q^2/d,q^2/e,q^{n+2};q)_k}q^k\\
&=\sum_{k=-n-1}^{\infty}\dfrac{(b,c,d,e,q^{-n};q)_k}{(q^2/b,q^2/c,q^2/d,q^2/e,q^{n+2};q)_k}q^k\quad (\text{since}\,1/(q^{n+2})_k=0\,\text{for all}\,k<-n-1)\\
&=\sum_{k=0}^{\infty}\dfrac{(b,c,d,e,q^{-n};q)_{k-n-1}}{(q^2/b,q^2/c,q^2/d,q^2/e,q^{n+2};q)_{k-n-1}}q^{k-n-1}\\
&=\dfrac{(b,c,d,e,q^{-n};q)_{-n-1}q^{-n-1}}{(q^2/b,q^2/c,q^2/d,q^2/e,q^{n+2};q)_{-n-1}}\sum_{k=0}^{\infty}\dfrac{(q^{-2n-1},bq^{-n-1},cq^{-n-1},dq^{-n-1},eq^{-n-1};q)_k}{(q,q^{-n+1}/b,q^{-n+1}/c,q^{-n+1}/d,q^{-n+1}/e;q)_k}q^k\\
&=\dfrac{(q-1)(b,c,d,e,q^{-n};q)_{-n-1}(q^{-2n-1})_{2n}(q^2/bc,q^2/bd,q^2/be;q)_nq^{n^2+n-2}}{(q^2/b,q^2/c,q^2/d,q^2/e,q^{n+2};q)_{-n-1}(q,q^{-n+1}/b,q^{-n+1}/d,q^{-n+1}/e,c;q)_n(de)^n}
\end{align*}
\\
where the last equality above follows from (\ref{eq22}) (after replacing $m$ by $n$). Then simplifying the last expression above using (\ref{eq11}), (\ref{eq12}) and (\ref{eq13}) with appropriate substitutions, we get
\\
\[{}_5\psi_5 \left[
\begin{matrix}
b, &c, &d, &e, &q^{-n} \\
q^2/b, &q^2/c, &q^2/d, &q^2/e, &q^{n+2}
\end{matrix}
;q, q \right]=\dfrac{(1-q)(q^2,q^2/bc,q^2/bd,q^2/cd;q)_n}{(q^2/b,q^2/c,q^2/d,q^2/bcd;q)_n}\]
\\
where $bcde=q^{n+3}$ for $n\in \mathbb{N}\cup\{0\}$. This completes the proof of Theorem \ref{5psi52}.
\end{proof}
\section{Two limiting cases}\label{s3}
Letting $n\rightarrow \infty$ in (\ref{eq15}) and simplifying using (\ref{eq13}) with appropriate substitutions, we get
\[{}_3\psi_3 \left[
\begin{matrix}
b, &c, &d \\
q/b, &q/c, &q/d
\end{matrix}
;q, \dfrac{q}{bcd} \right]=\dfrac{(q,q/bc,q/bd,q/cd;q)_\infty}{(q/b,q/c,q/d,q/bcd;q)_\infty}\]
which is exactly (\ref{eq17}).
\\\par
Similarly, letting $n\rightarrow \infty$ in (\ref{eq16}) and simplifying using (\ref{eq13}) with appropriate substitutions, we get
\[{}_3\psi_3 \left[
\begin{matrix}
b, &c, &d \\
q^2/b, &q^2/c, &q^2/d
\end{matrix}
;q, \dfrac{q^2}{bcd} \right]=\dfrac{(q,q^2/bc,q^2/bd,q^2/cd;q)_\infty}{(q^2/b,q^2/c,q^2/d,q^2/bcd;q)_\infty}\]
which is exactly (\ref{eq18}).
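Since the bilateral series above converge rapidly for generic parameter values inside their regions of convergence, identities such as (\ref{eq17}) can also be checked numerically. The following short script is only such a sanity check (it plays no role in the proofs); the parameter values are arbitrary test values with $\lvert q/bcd\rvert<1$, and both the bilateral sum and the infinite products are truncated.
\begin{verbatim}
# Numerical sanity check of (1.7); not part of the argument.
from math import prod

def qpoch(a, q, n):
    """(a;q)_n for any integer n, via (a;q)_{-m} = 1/prod_{j=1..m}(1-a*q^(-j))."""
    if n >= 0:
        return prod(1 - a * q**k for k in range(n))
    return 1.0 / prod(1 - a * q**(-j) for j in range(1, -n + 1))

def qpoch_inf(a, q, terms=200):
    """Truncated (a;q)_infinity."""
    return prod(1 - a * q**k for k in range(terms))

q, b, c, d = 0.3, 1.7, 2.1, 2.4        # arbitrary test values, |q/(b*c*d)| < 1
z = q / (b * c * d)

# Terms outside k in [-12, 60) are numerically negligible for these values.
lhs = sum(qpoch(b, q, k) * qpoch(c, q, k) * qpoch(d, q, k)
          / (qpoch(q/b, q, k) * qpoch(q/c, q, k) * qpoch(q/d, q, k)) * z**k
          for k in range(-12, 60))
rhs = ((qpoch_inf(q, q) * qpoch_inf(q/(b*c), q)
        * qpoch_inf(q/(b*d), q) * qpoch_inf(q/(c*d), q))
       / (qpoch_inf(q/b, q) * qpoch_inf(q/c, q)
          * qpoch_inf(q/d, q) * qpoch_inf(q/(b*c*d), q)))
print(lhs, rhs)    # the two values should agree to high precision
\end{verbatim}
The analogous check for (\ref{eq18}) only requires the corresponding changes of parameters and argument.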
\section{Ismail type proofs of the two ${}_3\psi_3$ identities}\label{s4}
In this section, we derive the two ${}_3\psi_3$ identities (\ref{eq17}) and (\ref{eq18}) using Ismail's method \cite{Is77}.
\subsection{Proof of Theorem \ref{3psi31}}
\begin{proof}
Replacing $a$ by $bq^{-m}$ and $b$ by $cq^{-m}$ in (\ref{eq112}), we get
\begin{equation}\label{eq41}
{}_3\phi_2 \left[
\setlength\arraycolsep{6pt}
\begin{matrix}
q^{-2m}, &\quad bq^{-m}, &\quad cq^{-m} \\
\multicolumn{3}{c}{
\begin{matrix}
q^{-m+1}/b, &q^{-m+1}/c
\end{matrix}}
\end{matrix} \hspace{2pt}
;q, \dfrac{q^{m+2}}{bc} \right] = \dfrac{(bq^{-m},cq^{-m};q)_m(q,bcq^{-2m};q)_{2m}}{(q,bcq^{-2m};q)_m(bq^{-m},cq^{-m};q)_{2m}}.
\end{equation}
We now have
\[ {}_3\phi_2 \left[
\setlength\arraycolsep{6pt}
\begin{matrix}
q^{-2m}, &\quad bq^{-m}, &\quad cq^{-m} \\
\multicolumn{3}{c}{
\begin{matrix}
q^{-m+1}/b, &q^{-m+1}/c
\end{matrix}}
\end{matrix} \hspace{2pt}
;q, \dfrac{q^{m+1}}{bc} \right] \]
\begin{align*}
&=\sum_{k=0}^{\infty}\dfrac{(q^{-2m},bq^{-m},cq^{-m};q)_k}{(q,q^{-m+1}/b,q^{-m+1}/c;q)_k}(q^{m+1}/bc)^k\\
&=\sum_{k=0}^{2m}\dfrac{(q^{-2m},bq^{-m},cq^{-m};q)_k}{(q,q^{-m+1}/b,q^{-m+1}/c;q)_k}(q^{m+1}/bc)^k\quad (\text{since}\,(q^{-2m})_k=0\,\text{for all}\,k>2m)\\
&=\sum_{k=0}^{2m}\dfrac{(q^{-2m},bq^{-m},cq^{-m};q)_{2m-k}}{(q,q^{-m+1}/b,q^{-m+1}/c;q)_{2m-k}}(q^{m+1}/bc)^{2m-k}\, \text{(reversing the order of summation)}\\
&=\dfrac{(q^{-2m},bq^{-m},cq^{-m};q)_{2m}(q^{m+1}/bc)^{2m}}{(q,q^{-m+1}/b,q^{-m+1}/c;q)_{2m}}\sum_{k=0}^{2m}\dfrac{(q^{-2m},bq^{-m},cq^{-m};q)_k}{(q,q^{-m+1}/b,q^{-m+1}/c;q)_k}(q^{m+2}/bc)^k\numberthis\label{eq42}\\
&=\dfrac{(q^{-2m},bq^{-m},cq^{-m};q)_{2m}(q^{m+1}/bc)^{2m}}{(q,q^{-m+1}/b,q^{-m+1}/c;q)_{2m}}\sum_{k=0}^{\infty}\dfrac{(q^{-2m},bq^{-m},cq^{-m};q)_k}{(q,q^{-m+1}/b,q^{-m+1}/c;q)_k}(q^{m+2}/bc)^k\\
&=\dfrac{(q^{-2m},bq^{-m},cq^{-m},q,bcq^{-2m};q)_{2m}(bq^{-m},cq^{-m};q)_m(q^{m+1}/bc)^{2m}}{(q,q^{-m+1}/b,q^{-m+1}/c,bq^{-m},cq^{-m};q)_{2m}(q,bcq^{-2m};q)_m}\numberthis\label{eq43}
\end{align*}
\\
where (\ref{eq42}) follows using (\ref{eq14}) with appropriate substitutions and (\ref{eq43}) follows from (\ref{eq41}).
\\\par Firstly, we note that the series on the left-hand side of (\ref{eq17}) is an analytic function of $1/d$ provided $\left\lvert q^3/bcd\right\rvert<\left\lvert q/bcd\right\rvert<1$. If we set $1/d=q^m$ for any positive integer $m$ in (\ref{eq17}), we get
\[{}_3\psi_3 \left[
\begin{matrix}
b, &c, &q^{-m} \\
q/b, &q/c, &q^{m+1}
\end{matrix}
;q, \dfrac{q^{m+1}}{bc} \right]\]
\begin{align*}
&=\sum_{k=-\infty}^{\infty}\dfrac{(b,c,q^{-m};q)_k}{(q/b,q/c,q^{m+1};q)_k}(q^{m+1}/bc)^k\\
&=\sum_{k=-m}^{\infty}\dfrac{(b,c,q^{-m};q)_k}{(q/b,q/c,q^{m+1};q)_k}(q^{m+1}/bc)^k\quad (\text{since}\,1/(q^{m+1})_k=0\,\text{for all}\,k<-m)\\
&=\sum_{k=0}^{\infty}\dfrac{(b,c,q^{-m};q)_{k-m}}{(q/b,q/c,q^{m+1};q)_{k-m}}(q^{m+1}/bc)^{k-m}\\
&=\dfrac{(b,c,q^{-m};q)_{-m}(q^{m+1}/bc)^{-m}}{(q/b,q/c,q^{m+1};q)_{-m}}\sum_{k=0}^{\infty}\dfrac{(q^{-2m},bq^{-m},cq^{-m};q)_k}{(q,q^{-m+1}/b,q^{-m+1}/c;q)_k}(q^{m+1}/bc)^k\\
&=\dfrac{(b,c,q^{-m};q)_{-m}(q^{-2m},bq^{-m},cq^{-m},q,bcq^{-2m};q)_{2m}(bq^{-m},cq^{-m};q)_m(q^{m+1}/bc)^{m}}{(q/b,q/c,q^{m+1};q)_{-m}(q,q^{-m+1}/b,q^{-m+1}/c,bq^{-m},cq^{-m};q)_{2m}(q,bcq^{-2m};q)_m}
\end{align*}
where the last equality above follows from (\ref{eq43}). Then simplifying the last expression above using (\ref{eq11}), (\ref{eq12}) and (\ref{eq13}) with appropriate substitutions, we get
\\
\[{}_3\psi_3 \left[
\begin{matrix}
b, &c, &q^{-m} \\
q/b, &q/c, &q^{m+1}
\end{matrix}
;q, \dfrac{q^{m+1}}{bc} \right] = \dfrac{(q,q/bc,q^{m+1}/b,q^{m+1}/c;q)_{\infty}}{(q/b,q/c,q^{m+1},q^{m+1}/bc;q)_{\infty}}.\]
\\
\par Thus, the two sides of (\ref{eq17}) constitute analytic functions of $1/d$ provided $\left\lvert q^3/bcd\right\rvert<\left\lvert q/bcd\right\rvert<1$ where we note that the first of these inequalities always holds simply because $\lvert q\rvert<1$ and the second inequality can be rearranged to give $\lvert 1/d\rvert<\lvert bc/q\rvert$ which is a disk of radius $\lvert bc/q\rvert$ centred about $0$. Thus, both the sides of (\ref{eq17}) agree on an infinite sequence of points $(q^m)_{m\in \mathbb{N}}$ which converges to the limit $0$ inside the disk $\left\{1/d\in\mathbb{C}:\lvert 1/d\rvert<\lvert bc/q\rvert\right\}$. Hence, (\ref{eq17}) is valid in general. This completes the proof of Theorem \ref{3psi31}.
\end{proof}
\bigskip
\subsection{Proof of Theorem \ref{3psi32}}
\begin{proof}
Replacing $n$ by $2m+1$, $z$ by $q^2$, $a$ by $bq^{-m-1}$ and $b$ by $cq^{-m-1}$ in (\ref{eq113}), we get
\begin{equation}\label{eq44}
\begin{multlined}
{}_3\phi_2 \left[
\setlength\arraycolsep{6pt}
\begin{matrix}
q^{-2m-1}, &bq^{-m-1}, &cq^{-m-1} \\
\multicolumn{3}{c}{
\begin{matrix}
q^{-m+1}/b, &q^{-m+1}/c
\end{matrix}}
\end{matrix} \hspace{2pt}
;q, \dfrac{q^{m+4}}{bc} \right] \\ =\dfrac{(-1)^m(q^{-2m-1})_{2m}(q^2/bc)_{m}q^{m(m+5)/2}}{(q^2)_{m-1}(q^{-m+1}/b,q^{-m+1}/c;q)_{m}}.
\end{multlined}
\end{equation}
We now have
\[ {}_3\phi_2 \left[
\setlength\arraycolsep{6pt}
\begin{matrix}
q^{-2m-1}, &bq^{-m-1}, &cq^{-m-1} \\
\multicolumn{3}{c}{
\begin{matrix}
q^{-m+1}/b, &q^{-m+1}/c
\end{matrix}}
\end{matrix} \hspace{2pt}
;q, \dfrac{q^{m+2}}{bc} \right] \]
\begin{align*}
&=\sum_{k=0}^{\infty}\dfrac{(q^{-2m-1},bq^{-m-1},cq^{-m-1};q)_k}{(q,q^{-m+1}/b,q^{-m+1}/c;q)_k}(q^{m+2}/bc)^k\\
&=\sum_{k=0}^{2m+1}\dfrac{(q^{-2m-1},bq^{-m-1},cq^{-m-1};q)_k}{(q,q^{-m+1}/b,q^{-m+1}/c;q)_k}(q^{m+2}/bc)^k\quad (\text{since}\,(q^{-2m-1})_k=0\,\text{for all}\,k>2m+1)\\
&=\sum_{k=0}^{2m+1}\dfrac{(q^{-2m-1},bq^{-m-1},cq^{-m-1};q)_{2m+1-k}}{(q,q^{-m+1}/b,q^{-m+1}/c;q)_{2m+1-k}}(q^{m+2}/bc)^{2m+1-k}\, \text{(reversing the order of summation)}\\
&=\dfrac{(q^{-2m-1},bq^{-m-1},cq^{-m-1};q)_{2m+1}(q^{m+2}/bc)^{2m+1}}{(q,q^{-m+1}/b,q^{-m+1}/c;q)_{2m+1}}\sum_{k=0}^{2m+1}\dfrac{(q^{-2m-1},bq^{-m-1},cq^{-m-1};q)_k}{(q,q^{-m+1}/b,q^{-m+1}/c;q)_k}(q^{m+4}/bc)^k\numberthis\label{eq45}\\
&=\dfrac{(q^{-2m-1},bq^{-m-1},cq^{-m-1};q)_{2m+1}(q^{m+2}/bc)^{2m+1}}{(q,q^{-m+1}/b,q^{-m+1}/c;q)_{2m+1}}\sum_{k=0}^{\infty}\dfrac{(q^{-2m-1},bq^{-m-1},cq^{-m-1};q)_k}{(q,q^{-m+1}/b,q^{-m+1}/c;q)_k}(q^{m+4}/bc)^k\\
&=\dfrac{(-1)^m(q^{-2m-1},bq^{-m-1},cq^{-m-1};q)_{2m+1}(q^{-2m-1})_{2m}(q^2/bc)_mq^{(5m^2+15m+4)/2}}{(q,q^{-m+1}/b,q^{-m+1}/c;q)_{2m+1}(q^2)_{m-1}(q^{-m+1}/b,q^{-m+1}/c;q)_m(bc)^{2m+1}}\numberthis\label{eq46}
\end{align*}
\\
where (\ref{eq45}) follows using (\ref{eq14}) with appropriate substitutions and (\ref{eq46}) follows from (\ref{eq44}).
\\\par Firstly, we note that the series on the left-hand side of (\ref{eq18}) is an analytic function of $1/d$ provided $\left\lvert q^6/bcd\right\rvert<\left\lvert q^2/bcd\right\rvert<1$. If we set $1/d=q^m$ for any positive integer $m$ in (\ref{eq18}), we get
\[{}_3\psi_3 \left[
\begin{matrix}
b, &c, &q^{-m} \\
q^2/b, &q^2/c, &q^{m+2}
\end{matrix}
;q, \dfrac{q^{m+2}}{bc} \right]\]
\begin{align*}
&=\sum_{k=-\infty}^{\infty}\dfrac{(b,c,q^{-m};q)_k}{(q^2/b,q^2/c,q^{m+2};q)_k}(q^{m+2}/bc)^k\\
&=\sum_{k=-m-1}^{\infty}\dfrac{(b,c,q^{-m};q)_k}{(q^2/b,q^2/c,q^{m+2};q)_k}(q^{m+2}/bc)^k\quad (\text{since}\,1/(q^{m+2})_k=0\,\text{for all}\,k<-m-1)\\
&=\sum_{k=0}^{\infty}\dfrac{(b,c,q^{-m};q)_{k-m-1}}{(q^2/b,q^2/c,q^{m+2};q)_{k-m-1}}(q^{m+2}/bc)^{k-m-1}\\
&=\dfrac{(b,c,q^{-m};q)_{-m-1}(q^{m+2}/bc)^{-m-1}}{(q^2/b,q^2/c,q^{m+2};q)_{-m-1}}\sum_{k=0}^{\infty}\dfrac{(q^{-2m-1},bq^{-m-1},cq^{-m-1};q)_k}{(q,q^{-m+1}/b,q^{-m+1}/c;q)_k}(q^{m+2}/bc)^k\\
&=\dfrac{(-1)^m(b,c,q^{-m};q)_{-m-1}(q^{-2m-1},bq^{-m-1},cq^{-m-1};q)_{2m+1}(q^{-2m-1})_{2m}(q^2/bc)_mq^{(3m^2+9m)/2}}{(q^2/b,q^2/c,q^{m+2};q)_{-m-1}(q,q^{-m+1}/b,q^{-m+1}/c;q)_{2m+1}(q^2)_{m-1}(q^{-m+1}/b,q^{-m+1}/c;q)_m(bc)^m}
\end{align*}
\\
where the last equality above follows from (\ref{eq46}). Then simplifying the last expression above using (\ref{eq11}), (\ref{eq12}) and (\ref{eq13}) with appropriate substitutions, we get
\\
\[{}_3\psi_3 \left[
\begin{matrix}
b, &c, &q^{-m} \\
q^2/b, &q^2/c, &q^{m+2}
\end{matrix}
;q, \dfrac{q^{m+2}}{bc} \right] = \dfrac{(q,q^2/bc,q^{m+2}/b,q^{m+2}/c;q)_{\infty}}{(q^2/b,q^2/c,q^{m+2},q^{m+2}/bc;q)_{\infty}}.\]
\\
\par Thus, the two sides of (\ref{eq18}) are analytic functions of $1/d$ provided $\left\lvert q^6/bcd\right\rvert<\left\lvert q^2/bcd\right\rvert<1$; the first of these inequalities always holds simply because $\lvert q\rvert<1$, and the second can be rearranged to give $\lvert 1/d\rvert<\lvert bc/q^2\rvert$, which defines a disk of radius $\lvert bc/q^2\rvert$ centred at $0$. Moreover, both sides of (\ref{eq18}) agree on the infinite sequence of points $(q^m)_{m\in \mathbb{N}}$, which converges to the limit $0$ inside the disk $\left\{1/d\in\mathbb{C}:\lvert 1/d\rvert<\lvert bc/q^2\rvert\right\}$. Hence, (\ref{eq18}) is valid in general. This completes the proof of Theorem \ref{3psi32}.
\end{proof}
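The evaluation displayed in the proof of Theorem \ref{3psi32} can be checked in the same spirit; the sketch below (again with arbitrary illustrative parameters) verifies it for several values of $m$, using the fact that the nonzero terms of the bilateral sum have $-m-1\le k\le m$.
\begin{verbatim}
# Illustrative numerical check of the evaluation displayed above.
from functools import reduce

def qpoch(a, q, n):
    # (a; q)_n for integer n; negative n via (a;q)_{-n} = 1/(a q^{-n}; q)_n
    if n < 0:
        return 1.0 / qpoch(a * q**n, q, -n)
    return reduce(lambda p, j: p * (1.0 - a * q**j), range(n), 1.0)

def qpoch_inf(a, q, terms=400):
    return reduce(lambda p, j: p * (1.0 - a * q**j), range(terms), 1.0)

q, b, c = 0.3, 1.7, 2.3
for m in range(1, 5):
    z = q**(m + 2) / (b * c)
    lhs = sum(qpoch(b, q, k) * qpoch(c, q, k) * qpoch(q**-m, q, k)
              / (qpoch(q**2 / b, q, k) * qpoch(q**2 / c, q, k)
                 * qpoch(q**(m + 2), q, k)) * z**k
              for k in range(-(m + 1), m + 1))
    rhs = (qpoch_inf(q, q) * qpoch_inf(q**2 / (b * c), q)
           * qpoch_inf(q**(m + 2) / b, q) * qpoch_inf(q**(m + 2) / c, q)) \
        / (qpoch_inf(q**2 / b, q) * qpoch_inf(q**2 / c, q)
           * qpoch_inf(q**(m + 2), q) * qpoch_inf(q**(m + 2) / (b * c), q))
    print(m, lhs, rhs)   # each pair should agree to floating-point accuracy
\end{verbatim}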
\section{Conclusion}\label{s5}
In this paper, we prove only (\ref{eq17}) and (\ref{eq18}) using Ismail's method. However, we note that both sides of (\ref{eq15}) and (\ref{eq16}) are (symmetric) rational functions of the variables $b,c,d,$ and $e$, since the left-hand side is a terminating series at both ends and the right-hand side is a finite product; in the limiting case $n\rightarrow\infty$, they become (\ref{eq17}) and (\ref{eq18}) respectively.\par Proofs of basic hypergeometric series sum-product formulas using Ismail's technique are not so common in the vast literature of basic hypergeometric series. One such instance is Bhargava and Adiga's \cite{Bha-Adi94} proof, in the style of Ismail's proof of (\ref{eq19}), of the following ${}_2\psi_2$ summation formula
\[{}_2\psi_2 \left[
\begin{matrix}
q/a, &b \\
d, &bq
\end{matrix}
;q, a \right] = \dfrac{(d/b,ab,q,q;q)_{\infty}}{(q/b,d,a,bq;q)_{\infty}}\]
where $\lvert a\rvert<1$, $\lvert d\rvert<1$, and $\lvert q\rvert<1$; their proof uses the following version of Heine's ${}_2\phi_1$ transformation formula \cite[III.$2$]{Gas-Rah04}
\[{}_2\phi_1 \left[
\setlength\arraycolsep{6pt}
\begin{matrix}
a, &b \\
\multicolumn{2}{c}{
\begin{matrix}
c
\end{matrix}}
\end{matrix} \hspace{2pt}
;q, z \right] =
\dfrac{(c/b,bz;q)_{\infty}}{(c,z;q)_{\infty}}
{}_2\phi_1 \left[
\setlength\arraycolsep{6pt}
\begin{matrix}
abz/c, &b \\
\multicolumn{2}{c}{
\begin{matrix}
bz
\end{matrix}}
\end{matrix} \hspace{2pt}
;q, \dfrac{c}{b} \right].\]
Another such instance is Kadell's \cite{Ka05} proof of Heine's $q$-Gauss sum \cite[II.$8$]{Gas-Rah04}
\[{}_2\phi_1 \left[
\setlength\arraycolsep{6pt}
\begin{matrix}
a, &b \\
\multicolumn{2}{c}{
\begin{matrix}
c
\end{matrix}}
\end{matrix} \hspace{2pt}
;q, \dfrac{c}{ab} \right] = \dfrac{(c/a,c/b;q)_{\infty}}{(c,c/ab;q)_{\infty}}\]
using Ramanujan's ${}_1\psi_1$ summation formula (\ref{eq19}).\par Thus, it would be very interesting to investigate whether there are more basic hypergeometric series sum-product formulas which may be proved using Ismail's method. Jonathan Bradley-Thrush and the author are currently working on proving more identities using Ismail's method and other techniques.
\section{Acknowledgments}
The author would like to thank Alexander Berkovich for encouraging him to prove Theorems \ref{5psi51}, \ref{5psi52}, \ref{3psi31}, \ref{3psi32} and for his very helpful comments and suggestions. The author would also like to thank George E. Andrews and Jonathan Bradley-Thrush for previewing a preliminary draft of this paper and for their helpful comments.
\bibliographystyle{amsplain}
|
{
"arxiv_id": "2302.14166",
"language": "en",
"timestamp": "2023-03-01T02:03:27",
"url": "https://arxiv.org/abs/2302.14166",
"yymm": "2302"
} |
\section{Method}~\label{sec:method}
We introduce the generic attack requests in Sec.~\ref{sec:attk_req} and propose our method GLOW in Sec.~\ref{sec:generation}.
\subsection{Attack requests}~\label{sec:attk_req}
To attack a victim image, the user may or may not specify victim objects, e.g. by providing their locations or labels. Therefore, besides the conventional attack request where a specific object and its targeted label are given, more generic requests should also be addressed,
such as ``give me 2 cats'' or ``mis-classify the rightmost boat as a car''.
Let $\mathcal{D}=\{I_d\}_{d=1}^D$ denote the set of victim images and $\mathcal{C}=\{c_p\}_{p=1}^C$ the semantic label space with $C$ categories. Given a known object detector $f$, which can be the victim model in the white-box attack or the surrogate model under the black-box setting, we can obtain a set of predicted objects $\mathcal{O}_d$ on the $d$-th image by passing $I_d$ to $f$, i.e. $f(I_d)=\mathcal{O}_d$. Specifically, $\mathcal{O}_d=\{l^d_n, s^d_n\}_{n=1}^N$, where $N$ is the number of detected object instances, $l^d_n$ is a vector of bounding box center coordinates, height and width, reflecting the coordinates of the $n$-th bounding box in the $d$-th image, and $s^d_n\in\mathcal{C}$ is its category label.
\noindent{\textbf{R1: mis-classify the object $s^d_n$ to $c_p$.}} This is the conventional attack request where the $n$-th object is our specific victim object and $c_p$ is the targeted label.
Though in theory one can always choose a random object as the victim and a random category as $c_p$, we observe that the choice of victim object and target label plays an important role in attack performance (see Sec.~\ref{sec:exps} for more details).
To this end, we set different selection criteria for the victim object and the targeted label to evaluate attack methods in various aspects. As for the victim object, it is impractical to assume that ground-truth locations can be provided by the users, e.g. bounding box annotations can be time-consuming~\cite{bearman2016s}. Therefore, we turn to the predictions as reliable sources to determine where the attack should take place. More specifically, the prediction with the largest bounding box among all predictions whose confidence scores are above 0.85 is selected.
There are two main reasons for this design. Firstly, larger objects are harder to attack in general, as more delicate perturbations might be required on both the contextual and the object regions. Secondly, the higher the confidence score is, the more likely the object category is correct and the harder the attack task would be.
As for the target label $c_p$, we mainly follow~\cite{cai2022context,cai2022zero} where the out-of-context attack is considered. Specifically, to eliminate the chance of miscounting existing objects as successes, $c_p$ is selected if and only if $c_p$ is not present in $I_d$. Rather than randomly selecting $c_p$ among all unpresented categories~\cite{cai2022context,cai2022zero}, our decision is made according to distance in a word vector space~\cite{yu2019mcan}. Mathematically, for each $c_p$ that does not occur in $\mathcal{S}^d_n=\{s^d_n\}_{n=1}^N$, we have:
\begin{equation}\label{eq:label_dis}
v_d(c_p)=\frac{1}{N}\sum_{n}v(c_p,s^d_n), \quad \forall c_p\notin\mathcal{S}^d_n,
\end{equation}
where $v(c_p,s^d_n)$ denotes the cosine distance between category $c_p$ and $s^d_n$ in word vector space. And $v_d(c_p)$ is the averaged distance.
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\linewidth]{figures/r1_example_v1.png}
\caption{The first row visualizes victim images and victim objects highlighted with yellow bounding boxes. The second, third and fourth rows are the corresponding target labels generated with R1-5, R1-50 and R1-95, respectively.
}
\label{fig:r1_example}
\end{figure}
To evaluate the impact of different target labels, we collect three target labels $c_p$ according to $v_d(c_p)$. Specifically, we rank all $c_p\notin\mathcal{S}^d_n$ based on $v_d(c_p)$ and then select the top $5\%$, $50\%$ and $95\%$ categories as our requested target classes. We refer to these three attack requests as R1-95, R1-50 and R1-5, respectively.
And we demonstrate our target label for each attack request
in Fig.~\ref{fig:r1_example}.
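A minimal sketch of this target-label selection step is given below. It is illustrative only: the word-embedding lookup \texttt{embed} is a hypothetical helper (any pre-trained word vectors could be substituted), and the mapping of the returned percentile picks to the R1-5/R1-50/R1-95 naming follows the convention described above.
\begin{verbatim}
# Sketch of target-label selection based on word-vector distance (illustrative).
import numpy as np

def cosine_distance(u, v):
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def select_targets(pred_labels, categories, embed, percentiles=(5, 50, 95)):
    """pred_labels: labels s_n^d predicted on the victim image.
    categories:  the full label space C.
    embed:       dict mapping a category name to its word vector (assumed given).
    Returns one absent category per requested rank percentile of v_d(c_p)."""
    absent = [c for c in categories if c not in set(pred_labels)]
    # v_d(c_p): averaged cosine distance to the labels present in the image
    v_d = {c: np.mean([cosine_distance(embed[c], embed[s]) for s in pred_labels])
           for c in absent}
    ranked = sorted(absent, key=lambda c: v_d[c])   # from closest to farthest
    return {p: ranked[min(int(len(ranked) * p / 100), len(ranked) - 1)]
            for p in percentiles}
\end{verbatim}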
Our ultimate goal in R1 is not only to mis-classify the victim object, but also to defeat a potential consistency-check defense. Therefore, the challenge of R1 mainly lies in finding an attack plan that is contextually consistent and, in practice, beneficial for the mis-classification.
\noindent{\textbf{R2: show me the category $c_p$ in $I_d$.}} Rather than assuming that a specific victim object is known to the attacker as in R1, R2 goes one step further in relaxing the attack request. Specifically, R2 comes in a much vaguer form where the user only specifies the target label $c_p$.
Though it may seem that asking for the existence of $c_p$ is an easier task than R1, as one can always flip a random object to $c_p$, we argue that this conclusion holds only if a coarse semantic consistency check/defense, e.g. a co-occurrence matrix~\cite{cai2022zero}, is available. Such a defense largely neglects geometric context, e.g. scene layout, and is thus not satisfactory. A more desirable one, on the other hand, should be capable of capturing both geometric and semantic context,
such as the fact that a traffic light is less likely to appear at the bottom of the image while poles usually have string-like bounding boxes. Our goal is to fool the victim model and such a delicate defense simultaneously.
Therefore, we claim that R2 is more challenging than R1 as it requires an additional understanding of the potential distribution of category $c_p$, including both sizes and object locations. We note that this challenge is beyond~\cite{cai2022zero,cai2022context} (see Fig.~\ref{fig:r2_example}). We omit the details of $c_p$ in R2-5, R2-50 and R2-95 as they are selected based on the same criteria as in R1 (see supplementary).
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\linewidth]{figures/r2_example_v2.png}
\caption{We visualize two victim images with their R2-5, R2-50 and R2-95 labels. We highlight the victim object, which is localized by GLOW, with a yellow bounding box.
}
\label{fig:r2_example}
\end{figure}
\noindent{\textbf{R3: give me multiple $c_p$s.}} R3 reflects another realistic attack scenario, e.g. having a monitor and a mouse in the victim image $I_d$. Besides not specifying the victim object and providing only target label information, which is similar to R2, R3 enforces an additional constraint on the object amount, making it much more challenging. Specifically, multi-object relationships should be considered together with hard restrictions on the number of objects. For example, besides modelling the sizes and locations of the mouse and monitor individually, estimating their layout, e.g. the monitor is more likely to be above the mouse, is also essential to achieve context-consistent yet fooled predictions.
We demonstrate the challenges of R3 in Fig.~\ref{fig:r3_example}. Details of R3-5, R3-50 and R3-95 can be found in supplementary.
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\linewidth]{figures/r3_example_v2.png}
\caption{Challenge of R3. We show the victim image on the left and four example proposals based on request R3-5 on the right. Among these four proposals, the leftmost one is more plausible than the one in the middle considering the layout relations, and the right two are simply wrong as they violate the amount restriction.}
\label{fig:r3_example}
\end{figure}
\begin{figure*}[t!]
\centering
\includegraphics[width=1.0\linewidth]{figures/cvpr23_main_v1.png}
\caption{Overview of GLOW. The first step of GLOW locates the victim object $\mathcal{O}_{d^*}$ under generic attack requests according to the dataset distribution.
Afterwards, GLOW produces various context-consistent attack plans $\mathcal{O}_{\hat{d}}$, together with their consistency scores. Combined with $\mathcal{O}_{d^*}$, the plan with the highest score is selected as our final attack plan.}
\label{fig:main}
\end{figure*}
\subsection{GLOW: Global LayOut aWare attacks}~\label{sec:generation}
Contextually consistent attacks have been considered in much previous work~\cite{cai2022context,cai2022zero}. The main motivation is that perturbing only the victim object may lead to inconsistency in context; thus, a global attack plan should be considered. Specifically, an attack plan assigns target labels to all objects in the victim image, including ones that are not victims originally, to both avoid inconsistency and benefit the attack request. Though well-motivated, existing methods largely rely on a co-occurrence matrix/graph to produce the attack plan or evaluate context consistency~\cite{cai2022zero},
neglecting geometric context such as scene layout.
In addition, the ability to model prior knowledge, such as the fact that more than ten beds in an image is unlikely while ten books are plausible, is lacking in the literature.
To this end, we propose a novel attack plan generation method, GLOW, that accounts for both semantic and geometric context, such as object locations, sizes and the overall scene layout. GLOW consists of two steps. The first, localization, step locates the victim object based on the target labels and their amounts under the generic attack requests R2 and R3. The second, generation, step then produces multiple context-consistent attack plans as well as their scores given the victim objects.
Afterwards, the plan with the highest score is selected as our final attack plan and passed to existing attackers. See Fig.~\ref{fig:main} for more details.
\noindent{\textbf{Victim object localization}} We aim to localize victim objects under R2 and R3, where constraints on target labels and/or their amount are available.
Let us first assume that some images from an annotated detection dataset are available; in the simplest case, this can be the training set that our victim/surrogate model is pre-trained on.
We denote this dataset as $\mathcal{T}$, including $T$ images and their bounding box annotations $\mathcal{A}=\{\mathcal{A}_t\}_{t=1}^T$, where $\mathcal{A}_t=\{l^t_m,s^t_m\}_{m=1}^M$ is the set of bounding box annotations on the $t$-th image $I_t$. Similarly, we assume there exist $M$ objects and $l^t_m$ and $s^t_m\in\mathcal{C}$ are the location and semantic category of the $m$-th annotation.
Determining the location of the victim object under R2 and R3 is equivalent to estimating the center, height and width of the bounding boxes of target label $c_p$. We formulate this localization as a probability maximization problem. This is achieved by modelling the joint probability of bounding box center, height and width per category.
Specifically, for each $c_p$, we have $\mathcal{L}_{c_p}=\{l^t_m|s^t_m=c_p\}_{t,m}$, where $l^t_m$ is normalized by image height and width. We then apply a GMM~\cite{rasmussen1999infinite} to fit $q=\{1,\dots,Q\}$ Gaussians $\mathcal{N}^p_q(\mu^p_q,\delta^p_q)$ on $\mathcal{L}_{c_p}$. $\mu^p_q\in\mathbb{R}^4$ is the mean of the $q$-th Gaussian,
$\delta^p_q$ is its 4$\times$4 covariance matrix, and $\pi^p_q$ is the weight of $\mathcal{N}^p_q$. Given any $x\in\mathbb{R}^4$,
our model provides a weighted probability density $w_p(x)$ by:
\begin{equation}
w_p(x)=\frac{1}{Q}\sum_{q}\pi^p_q\times pdf_q^p(x)
\end{equation}
where $pdf_q^p$ is the probability density function of $\mathcal{N}_q^p$. Intuitively, the higher $w_p(x)$ is, the more likely $x$ is to be the location of $c_p$.
Simply going through all $x$ and choosing the ones with the highest $w_p(x)$ ignores the overall scene layout, which might result in significant layout changes, e.g. a large bounding box on an objectless area or heavy occlusions, leading to less plausible overall layouts.
Alternatively, we narrow down our search space to existing bounding boxes and find the optimal location among all $l_n^d$. As for R2, the victim object can be found by $*=\arg\max_n w_p(l_n^d)$. As for R3, we rank and select the top ones depending on the detailed request, e.g. choose the top 2 if R3 asks for two objects of the same target label $c_p$. We then denote the victim objects in $I_d$ as $\mathcal{O}_{d^*}=\{c_p^*,l_p^*\}_*$, where $c_p^*$ equals $c_p$ in R1 and R2 and $\{c_p^*\}_*$ is the set of requested target labels in R3. Similarly, $l_p^*$ is $l_n^d$ in R1 and comes from the estimation in R2 ($l_*^d$) and R3. We further denote the number of target objects as $X$, with $X=1$ under R1 and R2. Example victim objects can be found in Fig.~\ref{fig:process}.
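A compact sketch of this localization step is shown below; it is not the implementation used in this paper, and it assumes scikit-learn's \texttt{GaussianMixture} as the GMM fitter and boxes encoded as normalized $[c_x, c_y, h, w]$ vectors.
\begin{verbatim}
# Sketch of victim-object localization for R2/R3 (illustrative assumptions only).
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.stats import multivariate_normal

def fit_location_model(boxes_cp, Q=4):
    """boxes_cp: (M, 4) array of [cx, cy, h, w] for category c_p, normalized
    by image width/height; returns a fitted Q-component GMM."""
    return GaussianMixture(n_components=Q, covariance_type='full').fit(boxes_cp)

def weighted_density(gmm, x):
    """w_p(x) as defined in the text: (1/Q) * sum_q pi_q * pdf_q(x)."""
    w = 0.0
    for pi_q, mu_q, cov_q in zip(gmm.weights_, gmm.means_, gmm.covariances_):
        w += pi_q * multivariate_normal.pdf(x, mean=mu_q, cov=cov_q)
    return w / gmm.n_components

def localize_victims(gmm, predicted_boxes, top_k=1):
    """Restrict the search to existing predicted boxes l_n^d and return the
    indices of the top_k boxes with the highest w_p (top 1 for R2)."""
    scores = np.array([weighted_density(gmm, x) for x in predicted_boxes])
    return np.argsort(scores)[::-1][:top_k]
\end{verbatim}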
\begin{figure*}[t!]
\centering
\includegraphics[width=1.0\linewidth]{figures/cvpr23_process_v1.png}
\caption{Step-wise illustration of GLOW. From left to right, we have victim image $I_d$ with their initial prediction results $\mathcal{O}_d$, target label $c_p$ and the localized victim object $\mathcal{O}_{d^*}$, best matching image $I_{t^*}$, the plan for other objects $\mathcal{O}_{\hat{d}}$ and our final attack plan $\hat{\mathcal{O}}_d$.}
\label{fig:process}
\end{figure*}
\noindent{\textbf{Global attack plan generation}} Given $\mathcal{O}_{d^*}$, our next step is to generate target labels for the objects that are not victims. We denote these objects as $\mathcal{O}_{\bar{d}}=\{l_n^d,s_n^d\}_{l_n^d\notin\{l_p^*\}_*}$; attack plan generation then aims to find a mapping function $g(s_n^d)\in\mathcal{C}$ that perturbs the original label if necessary, leading to $\mathcal{O}_{\hat{d}}=\{l^d_n,g(s_n^d)\}_n$. The overall generated attack plan on $I_d$ is $\hat{\mathcal{O}}_d=\{\mathcal{O}_{d^*},\mathcal{O}_{\hat{d}}\}$.
Theoretically, there exist $C^{N-X}$ possible configurations of $\mathcal{O}_{\hat{d}}$. Instead of permuting all possible solutions, we restrict ourselves to plausible ones that occur in $\mathcal{T}$, since therein the scene layouts are naturally context-consistent. To this end, we formulate our global attack plan generation as a layout similarity measurement problem, with hard constraints from $\mathcal{O}_{d^*}$. Our goal is therefore to map the bounding box labels according to the best match based on layout similarity in $\mathcal{T}$. Intuitively, the more similar these layouts are, the more confident we are when performing the mapping; the layout similarity score therefore reflects context consistency to some extent. Our insights lie in the following design choices for obtaining the mapping function $g$ and the score $s$:
\begin{packed_lefty_item}
\item Generate $\mathcal{T}^*=\{\mathcal{T}_{c_p^*}\}_{c_p^*}$ where $\mathcal{T}_{c_p^*}$ consists of images including target label $c_p^*$. $\mathcal{T}_{c_p^*}=\{I_t|c_p^*=s^t_m\}_m$
\item Compute the Intersection over Union (IoU) score between $\mathcal{L}^t=\{l^t_m\}_m$ and $\{l_p^*\}_*$. Mathematically:
\begin{equation}
s_1(I_t)=\frac{1}{X}\sum_{*}s(l_p^*); \forall I_t\in\mathcal{T}^*
\end{equation}
where $s(l_p^*)=\max_m \mathbbm{1}_{\{c_p^*=s^t_m\}}IoU(l_p^*,l^t_m)$. The IoU score between victim object location $l_p^*$ and $m$-th bounding box in $I_t$ is obtained by $IoU(l_p^*,l^t_m)$.
\item Perform Hungarian matching~\cite{stewart2016end,carion2020end} between $\mathcal{L}^t$ and $\mathcal{L}^d=\{l^d_n\}_n$. Specifically, we find a bipartite matching between these two sets by searching for a permutation of M elements $\mathbb{S}_M$ with the lowest cost:
\begin{equation}
\begin{split}
\delta_t^* & = \arg\min_{\delta\in\mathbb{S}_M}\sum_{l^d_n\in\mathcal{O}_{\bar{d}}} \mathcal{L}_m(l^d_n,l^t_{\delta(n)}) \\
& = \arg\min_{\delta\in\mathbb{S}_M}\sum_{l^d_n\in\mathcal{O}_{\bar{d}}} L1(l^d_n,l^t_{\delta(n)})-GIoU(l^d_n,l^t_{\delta(n)})
\end{split}
\end{equation}
where $L1(\cdot)$ and $GIoU(\cdot)$ denote the L1 distance and the GIoU~\cite{rezatofighi2019generalized} between bounding boxes, so that the cost is low when two boxes are close and strongly overlapping. $\delta_t^*(n)$ is the index of the best match of $l^d_n$ in $\mathcal{L}^t$, and the matching loss is obtained as $s_2(I_t)=\frac{1}{N-X}\sum_n\mathcal{L}_m(l^d_n,l^t_{\delta_t^*(n)})$. Our mapping function is $g(s^d_n)=s^t_{\delta_t^*(n)}$ based on the current $I_t$.
\end{packed_lefty_item}
The final score of $I_t$ is obtained by combining $s_1$ and $s_2$, i.e. $s(I_t)=s_1(I_t)-\lambda s_2(I_t)$, where $\lambda$ is a hyper-parameter. We note that the score $s$ accounts not only for the overlap of the victim objects, reflected by $s_1$, but also for the overall layout similarity incorporated in $s_2$.
After obtaining $s(I_t)$ for each candidate image $I_t\in\mathcal{T}^*$, we choose the best match $I_{t^*}$ that 1) gives the highest score and 2) matches more than 95$\%$ of the objects in $I_d$.
Consequently, the mapping function is defined as $g(s^d_n)=s^{t^*}_{\delta_{t^*}^*(n)}$. We refer the readers to Fig.~\ref{fig:process} for more details.
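The sketch below illustrates one way to compute the two scores and the resulting mapping; it is an illustrative sketch rather than the paper's code. Boxes are assumed to be NumPy arrays in normalized $[c_x, c_y, h, w]$ format, the candidate image is assumed to contain at least as many annotated boxes as there are non-victim predictions, and the Hungarian step uses SciPy's \texttt{linear\_sum\_assignment}.
\begin{verbatim}
# Sketch of the layout-similarity score s = s_1 - lambda * s_2 (illustrative).
import numpy as np
from scipy.optimize import linear_sum_assignment

def _corners(b):                        # [cx, cy, h, w] -> [x1, y1, x2, y2]
    cx, cy, h, w = b
    return np.array([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])

def _inter_union(b1, b2):
    a, b = _corners(b1), _corners(b2)
    lt, rb = np.maximum(a[:2], b[:2]), np.minimum(a[2:], b[2:])
    inter = np.prod(np.clip(rb - lt, 0, None))
    union = np.prod(a[2:] - a[:2]) + np.prod(b[2:] - b[:2]) - inter
    return inter, union, a, b

def iou(b1, b2):
    inter, union, _, _ = _inter_union(b1, b2)
    return inter / union if union > 0 else 0.0

def giou(b1, b2):
    _, union, a, b = _inter_union(b1, b2)
    lt, rb = np.minimum(a[:2], b[:2]), np.maximum(a[2:], b[2:])
    hull = np.prod(rb - lt)
    return iou(b1, b2) - (hull - union) / hull if hull > 0 else 0.0

def layout_score(victim_boxes, victim_labels, other_boxes,
                 ann_boxes, ann_labels, lam=1.0):
    """Score one candidate image I_t against the victim image.
    s_1: mean best IoU of each victim box with same-category annotated boxes.
    s_2: mean Hungarian matching cost (L1 - GIoU) between the remaining
         predicted boxes and the annotated boxes of I_t."""
    s1 = np.mean([max((iou(v, a) for a, l in zip(ann_boxes, ann_labels) if l == c),
                      default=0.0)
                  for v, c in zip(victim_boxes, victim_labels)])
    cost = np.array([[np.abs(o - a).sum() - giou(o, a) for a in ann_boxes]
                     for o in other_boxes])
    rows, cols = linear_sum_assignment(cost)    # lowest-cost assignment
    s2 = cost[rows, cols].mean()
    mapping = dict(zip(rows, cols))             # n -> delta*_t(n)
    return s1 - lam * s2, mapping
\end{verbatim}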
\subsection{Implementation of attack plan}
To implement $\hat{\mathcal{O}}_d$, evasion attacks can be carried out using the victim model itself under the white-box setting, or a single or multiple surrogate model(s) under the zero-query black-box setting. In the white-box scenario, our implementation of the attack plan is based on TOG~\cite{chow2020adversarial}, which can be viewed as training the detector on the modified labels given in the attack plan $\hat{\mathcal{O}}_d$. We fix the weights of the victim model $f$ and learn a perturbation image $\delta$ for each $I_d$ by minimizing $\mathbf{L}(clip(I_d+\delta);\hat{\mathcal{O}}_d)$ at every iteration, where $clip()$ is enforced to ensure a bounded perturbation. Specifically, we follow I-FGSM~\cite{goodfellow2014explaining} to update $\delta$ at every iteration. Afterwards, the perturbed image $clip(I_d+\delta)$ is passed to another, unknown victim model, mimicking the zero-query black-box setting.
Note that our attack plan implementation can be generalized to different attackers, e.g. DAG~\cite{xie2017adversarial} or MIM~\cite{dong2018boosting}, or to multiple surrogate models. We demonstrate in Sec.~\ref{sec:exps} that our attack plan~$\hat{\mathcal{O}}_d$ not only produces contextually consistent predictions in the white-box setting but also shows strong transferability in the zero-query black-box setting.
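For concreteness, a minimal PyTorch-style sketch of this step is given below. It is illustrative only: \texttt{detector\_loss} is a hypothetical callable standing for the detector's training loss evaluated against the modified labels in $\hat{\mathcal{O}}_d$, images are assumed to lie in $[0,1]$, and the step size is an assumed value.
\begin{verbatim}
# I-FGSM-style sketch of implementing an attack plan on one image (illustrative).
import torch

def attack_image(model, image, plan_boxes, plan_labels, detector_loss,
                 eps=30 / 255, alpha=2 / 255, iters=50):
    model.eval()
    for p in model.parameters():        # victim/surrogate weights stay fixed
        p.requires_grad_(False)
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        adv = torch.clamp(image + delta, 0.0, 1.0)
        loss = detector_loss(model, adv, plan_boxes, plan_labels)
        loss.backward()
        with torch.no_grad():
            # move against the gradient to minimize the loss on the plan,
            # then project back into the L_inf ball of radius eps
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return torch.clamp(image + delta, 0.0, 1.0).detach()
\end{verbatim}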
\section{Related Work}~\label{sec:works}
~\hspace{-7mm}~\noindent{\textbf{Object detection}} The goal of object detection is to predict a set of bounding boxes and category labels for each object of interest. Starting from~\cite{dalal2005histograms,felzenszwalb2008}, object detection explored extensive cues, including semantic~\cite{ladicky2010and}, geometric~\cite{yang2010layered} and other contextual cues~\cite{yao2012describing}, to improve its performance as well as its interpretability. Recently, deep neural networks (DNNs)~\cite{krizhevsky2017imagenet} have significantly improved many computer vision~\cite{krizhevsky2017imagenet,girshick2015fast} and natural language processing tasks~\cite{hinton2012deep,devlin2014fast}. Modern detectors follow this neural network design, such as two-stage models where proposals are first generated and then regression and classification are performed~\cite{girshick2015fast,cai2019cascade}, and one-stage models~\cite{lin2017focal,zhou2019objects,tian2019fcos} that simultaneously predict multiple bounding boxes and class probabilities with the help of pre-defined anchors or object centers. More recently, transformer-based models~\cite{carion2020end,zhu2020deformable} have been proposed to further simplify the detection process by formulating object detection as a set prediction problem where unique predictions can be achieved by bi-partite matching, rather than non-maximum suppression~\cite{hosang2017learning,bodla2017soft}. Similarly, contextual cues are also explored in modern detectors~\cite{bell2016inside,zhang2017relationship,chen2018context,liu2018structure,barnea2019exploring} in various forms. In this paper, we focus on adversarial attacks on DNN-based detectors. Our GLOW generates contextually coherent attack plans for various requests, which are also transferable to detectors of different architectures.
\noindent{\textbf{Adversarial attacks and defenses in object detection}}
Despite impressive performance boosts, DNNs are vulnerable to adversarial attacks, e.g. adding visually imperceptible perturbations to images leads to significant performance drop~\cite{goodfellow2014explaining,szegedy2013intriguing,carlini2017adversarial}. Adversarial attacks can be categorized into white-box~\cite{goodfellow2014explaining,madry2017towards} and black-box~\cite{dong2018boosting,lin2019nesterov}, depending on whether parameters of victim models are accessible or not.
Attacks such as DAG~\cite{xie2017adversarial}, RAP~\cite{li2018robust} and CAP~\cite{zhang2020contextual} are architecture-specific white-box attacks on detectors where a two-stage architecture is required, since they work on the proposals generated by the first stage. More generic attacks, such as UAE~\cite{wei2018transferable} and TOG~\cite{chow2020adversarial}, are capable of attacking models regardless of their architectures.
Compared to the aforementioned methods that perturb the image globally~\cite{xie2017adversarial}, patch-based attacks~\cite{liu2018dpatch} also showcase their ability in terms of fooling the detectors without touching the victim objects~\cite{hu2021cca}. In contrast, black-box attacks~\cite{li2020practical,chakraborty2018adversarial,papernot2017practical,liu2016delving,li2020towards} are more practical yet challenging where either a few queries or known surrogate models are exploited to fool an unknown victim model.
Observing the impacts of adversarial attacks on detectors, various defense methods are proposed to detect such attacks, wherein contextual cues are explored~\cite{yin2021exploiting}. However, contextual cues are almost always represented in the form of semantic co-occurrence matrix where global layouts are largely neglected~\cite{cai2022context,cai2022zero}.
In contrast, we propose a generic attack plan generation algorithm that leverages both semantic and geometric coherency, e.g. scene layout. Consequently, it manages both conventional single targeted victim setting and generic attack requests where locations are unknown or object amount is further restricted, translating to SOTA performance under white-box and black-box settings.
\section{Experiment}~\label{sec:exps}
To evaluate GLOW under various requests, we perform extensive experiments on MS COCO~\cite{lin2014microsoft}, with both white-box and black-box settings. Specifically, the detection victim models are trained with coco2017train and we use all images in coco2017val as our attacked set, which consists of 80 object categories that commonly appear in natural environments and about 5K images in total. We report our performance under perturbation budgets of both 10 and 30. Due to the space limitation, we refer the readers to the supplementary material for the results with the former; our claims are valid with different perturbation budgets.
\begin{table*}[t!]\small
\centering
\input{tables/res_r1}
\caption{Overall performance of R1. As described in Sec.~\ref{sec:attk_req}, we have three different target labels, R1-5, R1-50 and R1-95, for each victim object and we report the results on each of them. We highlight the best and second best with bold and underline respectively.
}
\label{tbl:r1_res}
\end{table*}
\noindent{\textbf{Victim detection model}} As for the white-box attack, our victim model $f$ is Faster-RCNN-R50-FPN-1X-COCO~\cite{ren2015faster}, and we use TOG~\cite{chow2020adversarial} to implement the attacks. The aforementioned Faster-RCNN is used as the single surrogate model in our black-box attacks, where
DETR~\cite{carion2020end} serves as the victim model $f$. Our black-box attack is zero-query based, meaning no feedback from the victim model is available. Our GLOW is generally applicable to different victim detectors,
and we choose the aforementioned models mainly for efficiency and reproducibility purposes~\cite{cai2022context}.
\noindent{\textbf{Baselines}} We compare GLOW with four baselines. To perform fair comparison, attack plan implementations are all obtained with TOG~\cite{chow2020adversarial} thus we describe only the attack plan generation process in the following:
\begin{packed_lefty_item}
\item TOG~\cite{chow2020adversarial}. The attack plan generated by TOG is context-agnostic, i.e. $g(s^d_n)=s^d_n$.
Victim object is given in R1 and will be randomly selected under R2 and R3.
\item TOG+RAND. TOG+RAND. focuses on both victim objects and other objects $\mathcal{O}_{\bar{d}}$. Victim object is provided in R1 and randomly selected under R2 and R3.
Mapping function $g(s^d_n)$ is a random permutation function $r$ that maps $\{1,2,\dots,C\}\rightarrow\{1,2,\dots,C\}$.
\item TOG+SAME. Attack plan generated by TOG+SAME. includes all objects.
And we enforce $g(s^d_n)=c_p$, meaning all objects share the same target label $c_p$.
\item Zikui~\cite{cai2022context}. Zikui~\cite{cai2022context} can be directly applied to R1. As for R2 and R3, it first selects random objects as victims and then generates the attack plan.
\end{packed_lefty_item}
\noindent{\textbf{Evaluation Metrics}} We follow the basic metric from~\cite{cai2022zero} and also introduce others for the generic attack requests. The fooling rate ($\textbf{F}$)~\cite{cai2022zero} is used to evaluate the attack performance on victim objects. Specifically, an attack is successful if (1) the victim object is perturbed to the target label with an IoU score greater than 0.3 w.r.t. the ground truth and (2) it passes the co-occurrence check. We define the fooling rate as the percentage of test cases for which the above two conditions are satisfied. Besides, we further introduce $\textbf{T}$ to measure the consistency on victim objects. $\textbf{T}$ itself reports the averaged $w_p(l_p^*)$. When combined with other metrics, $\textbf{T}$ is satisfied as long as the averaged $w_p(l_p^*)$ is above 0.02 (see Sec.~\ref{sec:generation}).
To measure the overall layout consistency, we introduce $\textbf{R}$, the percentage of images whose maximum recall rate w.r.t. $\mathcal{A}$ is above 0.5.
We further design two metrics, $\textbf{E}$ and $\textbf{C}$, for R2 and R3 to report the success rate. $\textbf{E}$ checks whether $c_p$ exists in the predictions, while $\textbf{C}$ further verifies the amount of $c_p$: an attack is successful only if both the target labels and their amounts satisfy the request in R3.
We refer the readers to supplementary for more details of all metrics.
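As a small illustration (the prediction format and labels are assumptions, not the exact evaluation code), the existence and counting checks behind $\textbf{E}$ and $\textbf{C}$ can be expressed as follows.
\begin{verbatim}
# Sketch of the existence (E) and counting (C) checks for R2/R3 (illustrative).
def metric_E(pred_labels, target_label):
    """E: the target label appears at least once among the predicted labels."""
    return target_label in pred_labels

def metric_C(pred_labels, requested):
    """C: every requested label appears exactly the requested number of times,
    e.g. requested = {'monitor': 1, 'mouse': 1} for the R3 example above."""
    return all(pred_labels.count(c) == k for c, k in requested.items())
\end{verbatim}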
\subsection{Main results}~\label{sec:white_main}
~\hspace{-7mm}\noindent{\textbf{Attack performance on R1}} We report our main results on the conventional attack request R1 in Tab.~\ref{tbl:r1_res}, where the perturbation budget is set to 30. In general, we observe that under the white-box setting, our $\textbf{F}$ is comparable to existing methods, which is reasonable as this metric considers only the co-occurrence matrix and both TOG+SAME and Zikui~\cite{cai2022context} consider this semantic consistency. This is also valid for $\textbf{T}$ as the victim object is shared among all baselines. When considering the global layout $\textbf{R}$, we observe a clear performance improvement over existing methods under all scenarios (R1-5, R1-50, R1-95), meaning that GLOW is able to not only fool the victim object, but also give a more contextually consistent layout.
Noticeably, our observation also holds in the challenging zero-query black-box setting, which further demonstrates the transferability of our proposed attack plan generation.
There are also other interesting observations in Tab.~\ref{tbl:r1_res}. Firstly, there exists a trend of performance improvement over all methods when comparing R1-5, R1-50 and R1-95, indicating that the selection of the target label plays an important role in performance. This trend validates our hypothesis that far-away labels, e.g. R1-5, are harder to attack compared to close-by ones, which in turn proves the necessity of a systematic design of the target label rather than random generation. Secondly,
though TOG+SAME simply assigns the target label $c_p$ to all existing objects, it gives good performance under $\textbf{F}$. This observation further supports our design of more delicate consistency check metrics, e.g. $\textbf{R}$, as the co-occurrence matrix is far from sufficient since it can always be satisfied by simple hacks.
\begin{table*}[t!]\small
\centering
\input{tables/res_r2_v1}
\caption{Overall performance of R2. Similar to R1, we have three different target labels for victim image. Since the victim object location is not provided in R2, $\textbf{T}$, $\textbf{F+T}$ and $\textbf{E+R}$ reflects different aspects of layout consistency.}
\label{tbl:r2_res}
\end{table*}
\begin{table*}[t!]\small
\centering
\input{tables/res_r3_v1}
\caption{Overall performance of R3. Compared to R2, our $\textbf{C+R}$ accounts for both layout consistency and amount restriction.}
\label{tbl:r3_res}
\end{table*}
\noindent{\textbf{Attack performance on R2}}
The advantages of GLOW are more noticeable in R2, where the victim object is to be localized by the algorithm itself rather than being provided, as can be observed in Tab.~\ref{tbl:r2_res}. There are two main observations. Firstly, GLOW almost always beats the SOTAs in terms of all evaluation metrics under R2-5, R2-50 and R2-95 in the white-box setting, e.g. about a 45$\%$ relative improvement compared to the second best in terms of $\textbf{F+T}$ and $\textbf{E+R}$ under R2-5. Interestingly, unlike R1 where $\textbf{T}$ is somewhat similar among all methods due to the fixed victim objects, the results on R2 showcase that the victim object selection matters.
Though neither TOG+SAME nor Zikui~\cite{cai2022context} considers the overall layout consistency, the former gives a better score than the latter as it naively enforces all objects to share the same target label, and \textbf{E} in \textbf{E+R} measures only the existence of the target label. Please note that \textbf{E} and \textbf{F} are different.
For instance, assuming layout consistency is already satisfied, if an attack fails on the victim object but turns another object into the target label, it will be regarded as a success in \textbf{E} but a failure in \textbf{F}. GLOW, again, produces superior results by leveraging the layout explicitly. Our second observation from Tab.~\ref{tbl:r2_res} is that GLOW has better transfer rates than the context-aware baseline~\cite{cai2022context} and various types of random assignment under the black-box setting, which further showcases the benefits of utilizing the global layout in attack plan generation and the potential limitation of exploiting only semantic context. We observe the same trend that the overall performance improves when the target label is closer to the presented labels in word space, supporting our design of various target labels.
\noindent{\textbf{Attack performance on R3}}
Results on R3, provided in Tab.~\ref{tbl:r3_res}, give the strongest evidence of the advantages of GLOW. We remind our readers that \textbf{C+R} and \textbf{F+T+C} reflect different aspects of an algorithm, as the former does not care about specific objects but checks both the target labels and their amount. Assuming our R3 is to have two apples in the victim image and our attacks are contextually consistent, \textbf{F+T+C} will be successful if the two victim objects are perturbed to apple.
In contrast, \textbf{C+R} reflects the amount of apples in the perturbed images, and a mismatch in numbers would lead to failure. Again, our GLOW is a much safer choice for R3 as it can almost always give good performance in both the white-box and black-box settings.
\section{Introduction}~\label{sec:intro}
Object detection aims to localize and recognise multiple objects in given images with their 2D bounding boxes and corresponding semantic categories~\cite{dalal2005histograms,felzenszwalb2008}. Due to physical commonsense and viewpoint preferences~\cite{divvala2009empirical}, detected bounding boxes in natural images are not only semantically labeled but also placed relative to each other within a coherent scene geometry, reflecting the
underlying 3D scene structure. Such a bounding box representation allows us to derive a notion of both semantic and geometric constraints. For example, the co-occurrence matrix is a commonly exploited semantic constraint where certain object categories are more likely to co-occur, e.g., bed and pillow~\cite{galleguillos2008object}. Geometric constraints, on the other hand, leverage the inductive bias of scene layout~\cite{chen2017spatial}, such as that, when co-occurring in a scene, a traffic light is more likely to appear in the upper region with a smaller bounding box compared to a car.
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\linewidth]{figures/cvpr23_teasor_v1.png}
\caption{We propose a novel attack generation algorithm GLOW that manages both the conventional single targeted object request (R1) and our generic attack requests (R2, R3). Specifically, GLOW consists of two steps. The first step localizes victim objects, if not provided. The second step generates various attack plans with their consistency scores. The one with the highest score is our final attack plan and is passed to the attackers. Best viewed in color.}
\label{fig:teasor}
\end{figure}
Adversarial attacks on object detectors mainly focus on the targeted victim setting~\cite{xie2017adversarial,cai2022context}, where the goal is to perturb a specific victim object to a target class. In this case, the location, ground truth and target class of the victim object are assumed to be known to attackers. Naturally, contextual cues are leveraged in attack and defense mechanisms~\cite{yin2021exploiting,cai2022context,cai2022zero} on detectors to enhance or detect holistic context (in)consistency~\cite{cai2022zero}.
Though well-motivated and demonstrating good performance in the conventional setting, the state-of-the-art methods~\cite{cai2022context,cai2022zero} suffer from the following problems in practice. Firstly, the assumption of a known location and ground-truth label of the victim object might be strong due to annotation cost~\cite{bearman2016s}. Therefore, vaguer attack requests where victim objects are not specified, e.g. show me an apple and a chair, should be considered in practice, which are beyond the existing methods. Secondly, the global geometric layout is commonly neglected, as they either model semantic co-occurrence~\cite{cai2022zero} or consider relative sizes and distances w.r.t. a given victim object~\cite{cai2022context}.
In this work, we introduce a novel yet generic attack plan generation algorithm GLOW on both conventional and generic attack requests to effectively leverage both categorical and global layout cues, leading to superior white-box attack performance and better transferability in black-box setting.
As for generic requests, we first loosen the assumption of a known specific victim object by requesting only the existence of a certain target label, e.g. show me category X in the image. Compared to the conventional setting, our request demands modelling of the locations and sizes of target label X. Our second request further constrains the label amount, e.g. give me N objects of category X and M objects of category Y, which necessitates the global layout of the victim image.
To fulfill these requests, we propose a novel attack plan generation method GLOW that accounts for both categorical and geometrical relations. Specifically, GLOW aims to figure out the most context-consistent attack plan for each victim image according to its underlying layout while considering the hard constraints, e.g. existence or amount of some target labels under generic requests or a specific victim object under conventional request.
The first step in GLOW localizes victim objects with given target label or amount on victim image by modeling the joint distribution of bounding box sizes and centers. And it enables generic attack requests. Given the victim objects, the second step further leverages the layouts of victim images to generate globally context-consistent attack plans with consistency scores. This is achieved by reformulating the generation as a layout similarity measurement problem. And these consistency scores therefore are similarity scores. Finally, the plan with the highest score would be our selected attack plan. We then implement the selected plan with existing attack generation methods, or attackers. Details of our proposed requests and GLOW can be found in Fig.~\ref{fig:teasor}.
We validate our ideas on coco2017val images~\cite{lin2014microsoft} under both white-box and zero-query black-box settings, and we design a new evaluation metric to measure layout consistency, thus mimicking consistency defenses. We demonstrate that, in the white-box setting, our proposed method achieves superior performance compared to SOTAs under both the conventional and the proposed generic attack settings, under all evaluation metrics. More importantly, GLOW provides significantly better transfer success rates in the zero-query black-box setting compared to existing methods.
\noindent Our contributions can be summarized as follows:
\begin{packed_lefty_item}
\item A novel method GLOW that is capable of generating context-consistent attack plans while accounting for both categorical and geometric layout coherency.
\item Two generic attack requests on coco2017val images and one consistency evaluation metric to mimic realistic attack requests and delicate attack defenses.
\item State-of-the-art performance on coco2017val images under both white-box and zero-query black-box settings. Code, models and requests will be made available.
\end{packed_lefty_item}
\section{Conclusion}~\label{sec:conclusion}
In this paper, we propose a novel attack generation algorithm for adversarial attacks on detectors. Compared to existing work, our algorithm GLOW explicitly takes both semantic context and geometric layout into consideration. By validating on coco17val images, we demonstrate that GLOW produces superior performance compared to existing methods under both the conventional attack request and more generic ones where victim objects are obtained by estimation. GLOW also showcases better transfer rates under the challenging zero-query black-box setting.
\noindent{\textbf{Limitations}} More complex requests, such as turning the furthest brown chair into a lamb, require a high-level understanding of both scene and language. They are beyond the scope of GLOW and will be discussed in future work. |
{
"arxiv_id": "2302.14210",
"language": "en",
"timestamp": "2023-03-01T02:05:19",
"url": "https://arxiv.org/abs/2302.14210",
"yymm": "2302"
} | \section{\label{sec:intro} Introduction:}
Silicon-germanium heterojunction bipolar transistors (HBTs) are widely used in microwave applications such as high-speed communications and radar systems owing to their competitive microwave performance, low cost, and ease of integration compared with III-V compound semiconductor devices \cite{cressler2003silicon}. Technological advances such as reduced emitter widths, decreased base resistances and extrinsic capacitances, and advanced epitaxial techniques have enabled RF performance rivaling that of III-V high electron mobility transistors \cite{Chevalier2017, Wong2020a}. Recently, owing to these strengths, cryogenic SiGe HBTs have been considered for applications in quantum computing and radio astronomy \cite{Bardin2017a, Ying2018}.
The cryogenic microwave performance of SiGe HBTs has been investigated following their initial development in the late 1980s \cite{Patton1988a} as various performance metrics such as transconductance and noise figure improve with cooling. However, below $\sim$ 77 K, these improvements are observed to plateau with decreasing temperature \cite{Joseph1995, Bardin2009a}, corresponding to a temperature-dependent ideality factor $n(T)$ that greatly exceeds unity at cryogenic temperatures \cite{Rucker2017, Jin2021}. This behavior differs markedly from the predictions of thermionic emission theory of an ideal junction in which $n$ is equal to unity and is independent of temperature, and the transconductance increases inversely with temperature. \cite{Monch2001, sze2021}.
This cryogenic non-ideal behavior has been attributed to various mechanisms including quasiballistic transport \cite{Bardin2009a, richey1996}, direct tunneling \cite{Rucker2017}, or trap-assisted tunneling \cite{Davidovic2017}. However, a recent theory work has reported that quasiballistic electron transport cannot explain the observed collector cryogenic non-idealities \cite{Naik2021}. Further, these non-idealities have been observed in devices with base widths of $\sim$ 100 nm \cite{Ying2018} for which direct tunneling is negligible. As a result, the origin of these cryogenic anomalies remains a topic of investigation.
In a different context, similar anomalies have been observed and extensively investigated in Schottky diodes \cite{Hackam1972,Bhuiyan1988a, Monch2001}, and they were ultimately attributed to lateral inhomogeneities in the built-in potential $\Phi_{BI}$. \cite{Tung1992} It was shown through numerical simulations \cite{Sullivan1991} and experiments \cite{ Tung1992, Monch2001} that different spatial distributions of barrier inhomogeneities yielded different trends of ideality factor versus temperature, with some that are similar to those seen in HBTs at cryogenic temperatures \cite[Fig.~15]{Tung1992}. The experimental evidence for this effect is the discrepancy between the built-in potentials extracted from capacitance-voltage ($C V$) characteristics versus those from $I V$ characteristics at cryogenic temperatures because the $I V$ characteristics vary exponentially with the barrier height whereas the $C V$ characteristics vary with a weaker polynomial dependence. \cite{Bhuiyan1988a}. However, whether this explanation is also applicable to HBTs has not been considered.
Here, we experimentally investigate this possibility by characterizing the base-emitter built-in potential using $IV$ and $CV$ measurements in modern SiGe HBTs at cryogenic temperatures. We observe a marked discrepancy between $\Phi_{BI}$ extracted from these methods, consistent with presence of lateral inhomogeneities in the base-emitter junction potential. We suggest a possible physical origin of the inhomogeneities as Ge clusters or electrically active C impurities which are introduced to minimize dopant diffusion. Our work advances efforts to improve the cryogenic microwave noise performance of SiGe HBTs.
\section{\label{sec:th&met} Theory and Methods:}
\subsection{Overview}
Spatial inhomogeneities in the base-emitter potential may be identified by a discrepancy in built-in potential as measured from $IV$ and $CV$ characteristics due to the difference in their functional dependence on $\Phi_{BI}$. In more detail, the collector current $I_C$ is given by the diode equation:
\begin{equation}
I_C = A \exp\left( -\frac{q \Phi_{BI}}{k T}\right) \exp\left( \frac{q V_{BE}}{n(T) k T} \right)
\label{eq:IV1}
\end{equation}
where $A$ is a constant prefactor, $V_{BE}$ is the base-emitter voltage, and $n(T)$ is the ideality factor. The ideality factor may also be reported as an effective temperature $T_{eff}= n(T) T_{phys}$ \cite{Bardin2009a}. The temperature dependence of $\Phi_{BI}$, relative to a value at a reference temperature, can be obtained by fitting the ideal portion of the $I_C - V_{BE}$ characteristics with (\ref{eq:IV1}) and neglecting any temperature-dependence of $A$. $n(T)$ can also be extracted from the slope of the $IV$ Gummel curves. Variations in $\Phi_{BI}$ over the emitter area can lead to $n(T) \gg 1$, particularly at cryogenic temperatures due to the $1/T$ dependence in the exponential as seen in (\ref{eq:IV1}).
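As an illustration of this extraction (a sketch with assumed array inputs, not the analysis code used here), the ideality factor and the intercept used to track the relative $\Phi_{BI}$ can be obtained from a linear fit of $\ln I_C$ versus $V_{BE}$:
\begin{verbatim}
# Sketch: extract n(T) and the fit intercept from Gummel data at one temperature.
import numpy as np

kB, qe = 1.380649e-23, 1.602176634e-19   # SI units

def fit_ideal_region(V_BE, I_C, T_phys, I_max=0.2e-3):
    """Linear fit of ln(I_C) vs V_BE restricted to I_C < I_max (to exclude
    series-resistance effects). slope = q/(n kB T), so n follows directly;
    the intercept equals ln(A) - q*Phi_BI/(kB*T), so with A assumed
    temperature-independent it tracks Phi_BI relative to a reference T."""
    mask = (I_C > 0) & (I_C < I_max)
    slope, intercept = np.polyfit(V_BE[mask], np.log(I_C[mask]), 1)
    n = qe / (slope * kB * T_phys)
    return n, intercept
\end{verbatim}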
The capacitance-voltage characteristics of the base-emitter junction may also be used to obtain the built-in potential. The base-emitter depletion capacitance $C_{BE}$ is given by \cite{sze2021}
\begin{equation}
C_{BE}(V_{BE}) = \frac{C_{BE,0}}{(1 - \frac{V_{BE}}{\Phi_{BI}})^m }
\label{eq:CV}
\end{equation}
where $C_{BE,0}$ is the zero-bias junction capacitance and $m$ is a characteristic exponent. For a uniformly doped junction, $m=1/2$. As is evident from (\ref{eq:CV}), $C_{BE}$ varies with $\Phi_{BI}$ much less strongly compared to (\ref{eq:IV1}), and therefore $C_{BE}$ is less sensitive to spatial inhomogeneities relative to $I_C - V_{BE}$ characteristics. Discrepancies between the $\Phi_{BI}$ extracted using these two methods are therefore interpreted as evidence for the presence of inhomogeneities in the junction potential.
\subsection{Experimental methods}
We extracted $\Phi_{BI}$ from $C_{BE}-V_{BE}$ and $I_C - V_{BE}$ characteristics from 20 -- 300 K on a SiGe HBT (SG13G3, IHP). The discrete transistors were probed in a custom-built cryogenic probe station. \cite{Gabritchidze2022, russell2012cryogenic} We employed Nickel/Tungsten probes (40A-GSG-100-DP, GGB Industries) which are suitable for probing Al pads. $I_C - V_{BE}$ characteristics are performed at a constant collector voltage $V_{CE} =1$ V. $\Phi_{BI}$ is extracted relative to its value at a reference temperature from the intercept of a linear fit of $\ln(I_C)$ versus $V_{BE}$ from (\ref{eq:IV1}), and $n(T)$ is extracted from the slope of this fit.
Following standard procedure \cite{Jin2021, Monch2001} $C_{BE}-V_{BE}$ characteristics are obtained using a vector network analyzer (VNA, Keysight E5061B). In reverse-bias and low-forward bias regimes, the $Y-$ parameters are given by $Y_{11} = g_{BE} + j\omega( C_{BE} +C_{BC})$ and $Y_{12} = -j\omega C_{BC}$, where $C_{BC}$ is the base-collector depletion capacitance. The base-emitter capacitance can therefore be expressed as $C_{BE} = (Im(Y_{11} + Y_{12}))/2 \pi f$. $V_{BE}$ is restricted to [-0.5 V, +0.5 V] to minimize the contributions of the conductance and diffusion capacitance. $V_{BC} = $ 0 V is held constant to ensure $C_{BC}$ is constant while $V_{BE}$ is swept. The $Y$ parameters are measured in $1-3$ GHz, and the extractions are performed at 2.4 GHz. At these frequencies, it is observed that the imaginary part of $Y_{11}$ is linear in frequency, indicating purely capacitive behavior. SOLT calibration is performed on a CS-5 calibration standard at each temperature, and the shunt parasitic capacitance at the input of the device is de-embedded using an OPEN structure. The intermediate-frequency bandwidth (1 kHz) and frequency points (every 0.2 GHz) are selected to limit the total sweep time to less than 15 s to avoid drift. At each bias, Y-parameters are swept across frequency and ensemble-averaged 10 times. $\Phi_{BI}$ is extracted from a sweep of $C_{BE}$ versus $V_{BE}$ by fitting the parameters $\Phi_{BI}$, $C_{BE,0}$ and $m$ as in (\ref{eq:CV}) using a trust region reflective (TRF) algorithm from the SciPy library. \cite{2020SciPy-NMeth} $\Phi_{BI}$ is constrained to [0.5 V, 1.2 V], $C_{BE,0}$ is constrained between the minimum and maximum values of the sweep, and $m$ is constrained to [0, 1].
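A minimal sketch of the bounded fit described above is shown below (illustrative only; it uses SciPy's \texttt{curve\_fit} with the trust-region-reflective method and the parameter bounds stated in the text).
\begin{verbatim}
# Sketch: fit C_BE(V_BE) = C0 / (1 - V_BE/Phi_BI)^m with bounded TRF least squares.
import numpy as np
from scipy.optimize import curve_fit

def c_be_model(V_BE, Phi_BI, C0, m):
    return C0 / (1.0 - V_BE / Phi_BI) ** m

def fit_cv(V_BE, C_BE):
    p0 = [0.8, 0.5 * (C_BE.min() + C_BE.max()), 0.5]          # initial guess
    bounds = ([0.5, C_BE.min(), 0.0], [1.2, C_BE.max(), 1.0])  # as in the text
    popt, pcov = curve_fit(c_be_model, V_BE, C_BE, p0=p0,
                           bounds=bounds, method='trf')
    Phi_BI, C0, m = popt
    return Phi_BI, C0, m, np.sqrt(np.diag(pcov))   # estimates and 1-sigma errors
\end{verbatim}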
\section{\label{sec:results} Results}
\begin{figure}
\centering
\includegraphics[width= 0.99\textwidth]{Arxiv_Figs/Arxiv_Plot_IV_n_T_Tplots.pdf}
\caption{(a) Measured $I_C$ versus $V_{BE}$ for various temperatures. The characteristics become independent of temperature at cryogenic temperatures. (b) $T_{eff} = n(T) T_{phys}$ vs $T_{phys}$ from measurements (symbols) and theory (line), indicating the non-ideality of the base-emitter junction at cryogenic temperatures. }
\label{fig:IVnT}
\end{figure}
\begin{figure}
\centering%
\includegraphics[width= 0.99\textwidth]{Arxiv_Figs/Arxiv_Plot_ImY11_CV_300K.pdf}
\caption{(a) $Im(Y_{11})$ versus $f$ at 300 K for various $V_{BE}$ in steps of 0.1 V. The linear trend in frequency confirms purely capacitive behavior within the frequency and bias range. (b) Measured and fitted $C_{BE}$ versus $V_{BE}$ at 300 K. The fit yields $\Phi_{BI} = 0.83$ V and $m = 0.10$.}
\label{fig:Y11CVfit}
\end{figure}
Fig.~\ref{fig:IVnT}a shows the collector current $I_C$ versus $V_{BE}$ at various temperatures between 20 K and 300 K. Consistent with prior findings, \cite{Ying2018, Bardin2009a, Rucker2017} the measurements exhibit deviations from (\ref{eq:IV1}) at cryogenic temperatures, with the curves exhibiting a weakening dependence on temperature below $\sim$ 80 K. $n(T)$ at each physical temperature is extracted from the slope obtained by fitting a line to each curve in Fig.~\ref{fig:IVnT}a. The current range used for this fitting is limited to 0.2 mA to exclude effects of series resistance. We plot the extracted $n(T)$ as $T_{eff} = n(T) T_{phys}$ versus $T_{phys}$ in Fig.~\ref{fig:IVnT}b. $T_{eff}$ is observed to deviate from the ideal predictions of (\ref{eq:IV1}), saturating to $\sim$ 100 K due to $n(T) > 1$ at cryogenic temperatures as has been reported previously \cite{Bardin2009a}.
We next examine the RF characteristics. Fig.~\ref{fig:Y11CVfit}a plots $Im(Y_{11})$ versus $f$ for various $V_{BE}$ at 300 K. A narrowed frequency range from the $1-3$ GHz measurements is plotted to distinguish the curves. From these data and those for $Im(Y_{12})$, the de-embedded base-emitter capacitance $C_{BE}$ is obtained. Fig.~\ref{fig:Y11CVfit}b plots the resulting $C_{BE}$ versus $V_{BE}$ at 300 K along with the fitted curve from (\ref{eq:CV}). The error bars, representing the 2$\sigma$ error in $C_{BE}$, are obtained from the 10 $C-V$ sweeps performed at each temperature.
Following the discussion in Sec.~\ref{sec:th&met}, these data are analyzed to obtain $\Phi_{BI}$ from the $I_C - V_{BE}$ and $C_{BE} - V_{BE}$ characteristics. At room temperature, $\Phi_{BI}(CV)$ is found to be 0.83 V, in good agreement with \cite{Jin2021}. This value is specified as the room temperature value for $\Phi_{BI}(IV)$ to facilitate comparison at other temperatures. Fig.~\ref{fig:VBImaster} plots the $\Phi_{BI}$ from both measurements versus $T_{phys}$. For $\Phi_{BI}(CV)$, the error bars represent the 2-$\sigma$ error in $\Phi_{BI}$, obtained by performing fits with (\ref{eq:CV}) for 100 $C_{BE} - V_{BE}$ sweeps with errors randomly determined based on a normal distribution defined by the uncertainty in the measured $C_{BE}$. The extracted $\Phi_{BI}(CV)$ is observed to weakly increase with decreasing temperature, consistent with observations for similar devices \cite{Jin2021} and Schottky diodes \cite{Bhuiyan1988a}. In contrast, $\Phi_{BI}(IV)$ exhibits a stronger dependence on temperature, as observed previously for Schottky diodes \cite{ Bhuiyan1988a, Monch2001}, and differs markedly in magnitude from the $\Phi_{BI}(CV)$. Following a method employed in the Schottky junction literature, \cite{Bhuiyan1988a} we also plot $\Phi_{BI}(IV) n(T)$, which for Schottky junctions has been empirically observed to agree with the measured $\Phi_{BI}(CV)$. This effect is also observed in the present data for the SiGe base-emitter junction.
\begin{figure}
\centering
\includegraphics[width= 0.55\textwidth]{Arxiv_Figs/Arxiv_Plot_PhiBvsT.pdf}
\caption{Built-in potential $\Phi_{BI}$ versus physical temperature from $CV$ and $IV$ data. Also plotted is the empirical relation $\Phi_{BI} (IV) n(T)$ from the Schottky junction literature. The measured $\Phi_{BI}(CV)$ agrees well with this quantity, supporting the existence of lateral spatial inhomogeneities in $\Phi_{BI}$.}
\label{fig:VBImaster}
\end{figure}
\section{\label{sec:disc} Discussion}
We have shown that the cryogenic current anomalies in SiGe HBTs exhibit features that are qualitatively similar to those observed previously in Schottky junctions. The similarity of the base-emitter junction characteristics with those reported for Schottky junctions provides evidence that the underlying physical mechanism is the same: inhomogeneities in $\Phi_{BI}$ across the emitter area. In Schottky junctions, techniques such as ballistic emission electron microscopy have enabled the barrier-potential variation to be linked to specific material defects such as dislocations \cite{Monch2001}. We now discuss possible material-defect origins of the base-emitter junction potential inhomogeneities in HBTs. Prior modeling studies of Schottky junctions have found that variations in barrier potential on the scale of the space-charge region (SCR) width can result in $n(T) \gg 1$ at cryogenic temperatures \cite{Tung1992, Sullivan1991}. Such variations in potential have been postulated to occur in SiGe HBTs and other SiGe films grown by chemical vapor deposition due to clustering of Ge \cite{Kiehl1993} or electrically active carbon defects \cite{Raoult2008}. Non-uniformities in Ge content over a few nanometers in SiGe p-wells with mean Ge content exceeding 30\% have been shown to negatively affect electrical properties such as hole mobility \cite{Kiehl1993}. With Ge concentrations in modern SiGe HBTs being on the order of 30\% \cite{Rucker2017}, Ge clusters are a conceivable origin of the barrier potential inhomogeneity.
Studies of low-frequency noise in HBTs have also found evidence for trap states associated with C impurities \cite{Raoult2008}, which are present in modern HBTs in concentrations on the order of $10^{20}$ cm\textsuperscript{-3} \cite{cressler2003silicon}. The precise origin of the inhomogeneities in terms of materials defects will require a thorough characterization of the structural and electrical properties of carefully prepared SiGe heterojunctions.
Finally, we discuss the possible improvements in microwave amplification and noise performance that may be obtained with more ideal base-emitter junctions. A more homogeneous base-emitter junction potential will result in cryogenic transconductance and collector current values that are closer to their ideal values. Further, improved ideality in these DC parameters directly affects the minimum noise temperature $T_{min}$ of HBTs. Because $T_{min}$ scales linearly with $T_{eff}$ in the low-frequency and low base-resistance limit (see Eqn.~(2) in Ref.~\cite{Bardin2017a}), a decrease in $T_{eff}$ from 80 K to the physical temperature of 20 K, corresponding to an ideality factor of $n = 1$ instead of $n = 4$ at 20 K, improves the achievable $T_{min}$ by a factor of 4. Therefore, minimizing inhomogeneities in the base-emitter junction potential is expected to lead to improved cryogenic microwave noise performance, advancing the application of these devices in quantum science and engineering.
\section{\label{sec:concl} Conclusion}
We have reported measurements of the built-in potential of the base-emitter junction of a SiGe HBT and its temperature dependence using $IV$ and $CV$ characteristics. The differing values of the built-in potential obtained from these two methods at cryogenic temperatures support the interpretation that the cryogenic electrical anomalies arise from lateral inhomogeneities in the base-emitter junction potential. The physical origin of these barrier inhomogeneities is hypothesized to be Ge clustering or C impurities. This work advances efforts to improve the cryogenic microwave noise performance of SiGe HBTs.
\begin{acknowledgments}
The authors thank Sander Weinreb, Pekka Kangaslahti, Akim Babenko, Holger R{\"u}cker, Nicolas Derrier, John Cressler, and Xiaodi Jin for useful discussions. This work was supported by NSF Award Number 1911926 and by JPL PDRDF Project Number 107978.
\end{acknowledgments}
\section*{Author Declarations}
\subsection*{Conflict of Interest} The authors have no conflicts to disclose.
\section*{Data Availability}
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
{
"arxiv_id": "2302.14153",
"language": "en",
"timestamp": "2023-03-01T02:02:41",
"url": "https://arxiv.org/abs/2302.14153",
"yymm": "2302"
} | \section{Introduction}\label{introduction}
A \emph{dagger category} is a category $\mathbf{C}$ with an operation $\dagger\: \mathrm{Mor}(\mathbf{C}) \to \mathrm{Mor}(\mathbf{C})$ such that
\begin{enumerate}
\item $\mathrm{id}_X^\dagger = \mathrm{id}_X$ for each object $X$;
\item $f^{\dag \dag} = f$ for each morphism $f$;
\item $(f \circ g)^\dagger = g^\dag \circ f^\dag$ for all composable pairs $(f, g)$.
\end{enumerate}
Two prominent examples of dagger categories are $\mathbf{Rel}$, the category of sets and binary relations, and $\mathbf{Hilb}_\mathbb{F}$, the dagger category of Hilbert spaces and bounded operators over $\mathbb{F}$, where $\mathbb{F} = \mathbb{R}$ or $\mathbb{F} = \mathbb{C}$. For a binary relation $r$, the binary relation $r^\dagger$ is the converse of $r$, and for a bounded operator $a$, the bounded operator $a^\dagger$ is the Hermitian adjoint of $a$.
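Since $\mathbf{Rel}$ is the category axiomatized below, it can help to keep a concrete model of it in mind. The following minimal sketch (an illustration we add here, representing a relation as a Python set of pairs) checks the three dagger axioms for relational composition and the converse.
\begin{verbatim}
# A binary relation X -> Y is modeled as a set of pairs (x, y).
def compose(s, r):
    """Relational composition s . r: pairs (x, z) with (x, y) in r, (y, z) in s."""
    return {(x, z) for (x, y1) in r for (y2, z) in s if y1 == y2}

def dagger(r):
    """The converse relation, which plays the role of the dagger."""
    return {(y, x) for (x, y) in r}

def identity(xs):
    return {(x, x) for x in xs}

r = {(1, 'a'), (2, 'a')}           # a relation {1, 2, 3} -> {'a', 'b'}
s = {('a', True), ('b', False)}    # a relation {'a', 'b'} -> {True, False}

assert dagger(identity({1, 2, 3})) == identity({1, 2, 3})      # id^dag = id
assert dagger(dagger(r)) == r                                  # involutive
assert dagger(compose(s, r)) == compose(dagger(r), dagger(s))  # contravariant
\end{verbatim}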
The dagger categories $\mathbf{Rel}$ and $\mathbf{Hilb}_\mathbb{F}$ have many properties in common. These properties may be expressed in terms of morphisms that behave like the inclusion of one object into another. Explicitly, we work with the class of normal dagger monomorphisms. Recall that a dagger monomorphism is a morphism $m$ such that $m^\dagger \circ m$ is an identity, and recall that a normal monomorphism is a morphism $m$ that is a kernel of some morphism.
Following Heunen and Jacobs, we use the term \emph{dagger kernel} for morphisms that are both dagger monomorphisms and normal monomorphisms \cite{HeunenJacobs}. Heunen and Jacobs showed that in any dagger category satisfying axioms (A) and (B), below, each dagger kernel $m$ has a complement $m^\perp$. Explicitly, defining $m^\perp = \ker(m^\dagger)$, they showed that $m$ and $m^{\perp\perp}$ are isomorphic as morphisms into their shared codomain.
A dagger kernel in $\mathbf{Rel}$ is an injective function, and a dagger kernel in $\mathbf{Hilb}_\mathbb{F}$ is a linear isometry.
$\mathbf{Rel}$ and $\mathbf{Hilb}_\mathbb{F}$ are both \emph{dagger symmetric monoidal categories}: each is also equipped with a symmetric monoidal structure whose product is a dagger functor and whose associators, braidings, and unitors are dagger isomorphisms \cite{Selinger}\cite{AbramskyCoecke2}. The monoidal product of $\mathbf{Rel}$ is the Cartesian product, and the monoidal product of $\mathbf{Hilb}_\mathbb{F}$ is the tensor product. A \emph{dagger functor} is a functor that preserves the dagger operation in the obvious sense, and a \emph{dagger isomorphism} is an isomorphism $m$ such that $m^{-1} = m^\dagger$. Equivalently, a dagger isomorphism is an epic dagger kernel. In $\mathbf{Rel}$, a dagger isomorphism is a bijection, and in $\mathbf{Hilb}_\mathbb{F}$, a dagger isomorphism is a unitary.
The dagger symmetric monoidal categories $\mathbf{Rel}$ and $\mathbf{Hilb}_\mathbb{F}$ both satisfy the following axioms:
\begin{enumerate}[ (A)]
\item there is a zero object;
\item each morphism has a kernel that is a dagger kernel;
\item each pair of complementary dagger kernels is jointly epic;
\item each pair of objects has a coproduct whose inclusions are complementary dagger kernels;
\item each nonzero endomorphism of the monoidal unit is invertible;
\item the monoidal unit is a monoidal separator.
\end{enumerate}
An object $I$ is said to be a separator in case the morphisms $a\:I \to X$ are jointly epic, for all objects $X$. It is said to be a \emph{monoidal separator} in case the morphisms $a \otimes b\: I \to X \otimes Y$ are jointly epic, for all objects $X$ and $Y$. Axiom (F) refers to this property.
These shared axioms (A)--(F) are almost sufficient to axiomatize both
$\mathbf{Rel}$ and $\mathbf{Hilb}_\mathbb{F}$:
\begin{corollary}\label{A}
Let $(\mathbf{C}, \otimes, I, \dagger)$ be a dagger symmetric monoidal category that satisfies axioms (A)--(F). Then,
\begin{enumerate}[ (i)]
\item $(\mathbf{C}, \otimes, I, \dagger)$ is equivalent to $(\mathbf{Rel}, \times, \{\ast\}, \dagger)$ if and only if every object has a dagger dual and every family of objects has a coproduct whose inclusions are pairwise-orthogonal dagger kernels;
\item $(\mathbf{C}, \otimes, I, \dagger)$ is equivalent to $(\mathbf{Hilb_\mathbb{F}}, \otimes, \mathbb{F}, \dagger)$ for $\mathbb{F} = \mathbb{R}$ or $\mathbb{F} = \mathbb{C}$ if and only if every dagger monomorphism is a dagger kernel and the wide subcategory of dagger kernels has directed colimits.
\end{enumerate}
\end{corollary}
\noindent Two dagger kernels $m$ and $n$ are said to be \emph{orthogonal} if $m^\dagger \circ n$ is zero. Equivalently, $m$ and $n$ are orthogonal if and only if $n$ factors through $m^\perp$ \cite{HeunenJacobs}.
Recall that an object $X$ has a \emph{dagger dual} $X^*$ if there exists a morphism $\eta_X\: I \to X^* \otimes X$ such that $(\eta_X^\dagger \otimes \mathrm{id}_X) \circ (\mathrm{id}_X \otimes \eta_{X^*}) = \mathrm{id}_X$ and $(\mathrm{id}_{X^*} \otimes \eta_X^\dagger) \circ (\eta_{X^*} \otimes \mathrm{id}_{X^*}) = \mathrm{id}_{X^*}$. Of course, a dagger dual of $X$ is also a dual of $X$ in the standard sense \cite{MacLane2}. A dagger symmetric monoidal category in which every object has a dagger dual has been called strongly compact closed \cite{AbramskyCoecke2} and then \emph{dagger compact closed} \cite{Selinger}.
Every object in $\mathbf{Rel}$ has a dagger dual. The dagger dual of a set $X$ is the set $X$ itself, and $\eta_X \: \ast \mapsto \{(x,x)\,|\, x \in X\}$. Similarly, every object in $\mathbf{fHilb}_\mathbb{F}$, the category of finite-dimensional Hilbert spaces, has a dagger dual. The dagger dual of a finite-dimensional Hilbert space $X$ is the dual Hilbert space $X^*$, and $\eta_X\: 1 \mapsto \sum_{i\in N} e_{i*} \otimes e_i$, where $\{e_i \,|\, i \in N\}$ is an orthonormal basis of $X$ and $\{e_{i*} \,|\, i \in N\}$ is the corresponding orthonormal basis of $X^*$.
However, no infinite-dimensional Hilbert space has a dagger dual in $\mathbf{Hilb}_\mathbb{F}$ \cite{HeunenVicary}*{example~3.2}. Thus, $\mathbf{fHilb}_\mathbb{F}$ has the property that every object has a dagger dual, as does $\mathbf{Rel}$ but not $\mathbf{Hilb}_\mathbb{F}$, and $\mathbf{fHilb}_\mathbb{F}$ also has the property that every dagger monomorphism is a dagger kernel, as does $\mathbf{Hilb}_\mathbb{F}$ but not $\mathbf{Rel}$. Of course, $\mathbf{fHilb}_\mathbb{F}$ also satisfies axioms (A)--(F).
\begin{proof}[Proof of Corollary~\ref{A}]
Axiom (D) is just the existence of binary dagger biproducts \cite{Selinger}\cite{AbramskyCoecke2}. Indeed, any binary coproduct whose inclusions are complementary dagger kernels is clearly a dagger biproduct. Conversely, the inclusions of a binary dagger biproduct are complementary dagger kernels \cite{HeunenVicary}*{exercise 2.6}. By the same argument, the condition that every family of objects has a coproduct whose inclusions are pairwise-orthogonal dagger kernels is just the existence of all dagger biproducts.
Statement (ii) is a corollary of \cite{HeunenKornell}*{Theorem 10}, which implies that a dagger symmetric monoidal category is equivalent to $(\mathbf{Hilb_\mathbb{F}}, \otimes, \mathbb{F}, \dagger)$ for $\mathbb{F} = \mathbb{R}$ or $\mathbb{F} = \mathbb{C}$ if and only if it satisfies axioms (A), (D), and (F), the two conditions of statement (ii), and the assumption that each parallel pair of morphisms has an equalizer that is a dagger monomorphism. This last assumption follows from axioms (A)--(F): we obtain the scalar $-1$ as in the proof of \cite{HeunenKornell}*{Lemma 1}, and we obtain the dagger equalizer of $f$ and $g$ as the dagger kernel of $f - g$. Axioms (B), (C), and (E) are all clearly satisfied by $(\mathbf{Hilb_\mathbb{F}}, \otimes, \mathbb{F}, \dagger)$.
Statement (i) is a corollary of Theorem~\ref{V}. In place of axiom (F), it is enough to assume that $I$ is a separator as a consequence of the condition that every object has a dagger dual; see Proposition~\ref{M}. Of course, axioms (A) and (D) are special cases of the condition that every family of objects has a dagger biproduct.
\end{proof}
Dagger categories have been considered for more than half a century \cite{BrinkmannPuppe}*{Definition~6.4.1}. Interest in dagger categories in the context of categorical quantum information theory began with \cite{AbramskyCoecke}. The term originates in \cite{Selinger}. The axiomatizations of $\mathbf{Hilb}_\mathbb{R}$ and $\mathbf{Hilb}_\mathbb{C}$ in \cite{HeunenKornell} derive from Sol\`{e}r's theorem \cite{Soler}. Axiomatizations of $\mathbf{Con}_\mathbb{R}$ and $\mathbf{Con}_\mathbb{C}$, the categories of Hilbert spaces and contractions, have also been obtained \cite{HeunenKornellVanDerSchaaf}.
The classic work of Lawvere provides axioms for the category $\mathbf{Set}$ of sets and functions \cite{Lawvere}. The close relationship between $\mathbf{Set}$ and $\mathbf{Rel}$ and the similarity between Lawvere's assumption of limits and our assumption of biproducts naturally invite a comparison between \cite{Lawvere}*{Corollary} and Theorem~\ref{V}. Unlike Lawvere, we have not chosen our axioms to provide a foundation for mathematics but rather to draw a comparison between the category $\mathbf{Rel}$ and the categories $\mathbf{Hilb}_\mathbb{F}$, as in Corollary~\ref{A}. Less directly, our assumptions about dagger kernels derive from \cite{Selinger2}, \cite{Vicary}, and \cite{HeunenJacobs}, and even less directly, they derive from elementary results on abelian categories \cite{MacLane}. Nevertheless, we refer the reader to Corollary~\ref{W}.
\section{Dagger biproducts}\label{biproducts}
The biproduct $\oplus$ is classically defined in the setting of abelian categories \cite{MacLane2}. In any abelian category, we have that $f + g = \nabla_Y \circ (f \oplus g) \circ \Delta_X$, where $\Delta_X\: X \to X \oplus X$ and $\nabla_Y\: Y \oplus Y \to Y$ are the diagonal and the codiagonal morphisms, respectively. This equation provides a bridge to an alternative definition of abelian categories, in which no enrichment is assumed \cite{Freyd}\cite{Schubert}. In this context, a \emph{biproduct} of objects $X$ and $Y$ is an object $X \oplus Y$ together with ``projections'' $p\: X \oplus Y \to X$ and $q\: X \oplus Y \to Y$ and ``injections'' $i\: X \to X \oplus Y$ and $j\: Y \to X \oplus Y$ such that $(X \oplus Y, p, q)$ is a product, such that $(X \oplus Y, i, j)$ is a coproduct, and such that $p \circ i = \mathrm{id}_X$, $q \circ j = \mathrm{id}_Y$, $p \circ j = 0$, and $q \circ i = 0$.
Neither $\mathbf{Rel}$ nor $\mathbf{Hilb_\mathbb{F}}$ is an abelian category. Fortunately, biproducts yield a canonical enrichment over commutative monoids in a more general setting that includes both of these categories \cite{MacLane}*{section 19}. In $\mathbf{Rel}$, each infinite family of objects has a biproduct, and this property distinguishes $\mathbf{Rel}$ from $\mathbf{Hilb}_\mathbb{F}$. This means that for any family of objects $\{X_\alpha\}_{\alpha \in M}$, there exists an object $X = \bigoplus_{\alpha \in M} X_\alpha$ together with ``projections'' $p_\alpha \: X \to X_\alpha$ and ``injections'' $i_\alpha\: X_\alpha \to X$ such that $p_\alpha \circ i_\alpha$ is an identity and otherwise $p_\alpha \circ i_{\beta}$ is zero. These infinite biproducts yield a canonical enrichment over commutative infinitary monoids in a straightforward generalization of the finite case. Following Mac Lane, we leave this claim as an exercise for the reader \cite{MacLane2}*{exercise VIII.2.4(a)}.
In the current setting of dagger categories, the definition of a biproduct includes an additional condition. Many familiar category-theoretic notions have standard refinements in this setting. We refer to \cite{AbramskyCoecke2} and also to \cite{Karvonen}. A \emph{biproduct in a dagger category} is additionally required to satisfy $p_\alpha = i_\alpha^\dagger$ for each $\alpha \in M$. It follows that $i_\alpha$ is a dagger monomorphism in the sense that $i_\alpha^\dagger \circ i_\alpha = \mathrm{id}_{X_\alpha}$. In fact, it follows that the morphisms $i_\alpha$ are pairwise-orthogonal dagger kernels in the sense of \cite{HeunenJacobs}.
In the sequel, we will appeal to the fact that any dagger category with biproducts for all families of objects is canonically enriched over commutative infinitary monoids, which we may define as follows:
\begin{definition}\label{B}
Let $S$ be a set. Let $\mathrm{Fam}(S)$ be the class of all families, i.e., indexed families, in $S$. For us, an \emph{infinitary operation} on $S$ is simply a function $\Sigma\: \mathrm{Fam}(S) \to S$, and furthermore,
\begin{enumerate}
\item $\Sigma$ is \emph{commutative} if $\sum_{\alpha \in M} s_{f(\alpha)} = \sum_{\beta \in N} s_\beta$ for all bijections $f\: M \to N$;
\item $\Sigma$ is \emph{associative} if $\sum_{\beta \in N} \sum_{\alpha \in M_\beta} s_{(\alpha,\beta)} = \sum_{(\alpha,\beta) \in P} s_{(\alpha,\beta)}$, where $P = \bigcup_{\beta \in N} M_\beta \times \{\beta\}$.
\end{enumerate}
A \emph{commutative infinitary monoid} is simply a set $S$ that is equipped with an infinitary operation $\Sigma$ that is associative and commutative in this sense.
\end{definition}
\noindent If $\{r_\alpha\}_{\alpha \in M}$ is a family of morphisms $X \to Y$ in a dagger category with biproducts, then $\sum_{\alpha \in M} r_\alpha = \Delta^\dagger \circ \left( \bigoplus_{\alpha \in M} r_\alpha \right) \circ \Delta,$ where $\Delta\: X \to \bigoplus_{\alpha \in M} X$ is the diagonal map. The proof of this enrichment is omitted because it differs from the proof of the same well-known fact for finite dagger biproducts and finitary commutative monoids only by tedious bookkeeping.
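Concretely in $\mathbf{Rel}$, the biproduct of a family of sets is (up to isomorphism) their disjoint union, and chasing the displayed formula shows that the induced sum of parallel relations is simply their union. The following small check (again an illustration we add here, reusing \texttt{compose} and \texttt{dagger} from the sketch in the introduction) verifies this for a binary sum.
\begin{verbatim}
# The biproduct of X with itself in Rel is the disjoint union X + X,
# tagged here by 0 and 1.  Delta : X -> X + X relates x to both copies of x.
def delta(xs):
    return {(x, (tag, x)) for x in xs for tag in (0, 1)}

def oplus(r, s):
    """r (+) s : X + X -> Y + Y, acting as r on copy 0 and as s on copy 1."""
    return ({((0, x), (0, y)) for (x, y) in r}
            | {((1, x), (1, y)) for (x, y) in s})

def plus(r, s, xs, ys):
    """The biproduct-induced sum  Delta_Y^dag . (r (+) s) . Delta_X."""
    return compose(dagger(delta(ys)), compose(oplus(r, s), delta(xs)))

xs, ys = {1, 2, 3}, {'a', 'b'}
r = {(1, 'a')}
s = {(1, 'b'), (3, 'a')}
assert plus(r, s, xs, ys) == r | s   # the induced addition is union of relations
\end{verbatim}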
\section{Dagger symmetric monoidal categories}\label{part1}
Let $(\mathbf{C}, \otimes, I, \dagger)$ be a dagger symmetric monoidal category with dagger biproducts for all families of objects \cite{AbramskyCoecke2}. Assume that every morphism has a kernel that is dagger monic and that $k$ and $k^\perp$ are \emph{jointly epic} for every dagger kernel $k$. The latter condition means that $f = g$ whenever $f \circ k = g \circ k$ and $f \circ k^\perp = g \circ k^\perp$. Further, assume that $I$ is a separator and that all nonzero morphisms $I \to I$ are invertible. We show that for each object $X$, morphisms $I \to X$ form a complete Boolean algebra. First, we use an Eilenberg swindle to conclude that the scalars of $\mathbf{C}$ must be the Boolean algebra $\{0,1\}$.
\begin{lemma}\label{C}
Let $(R, \Sigma, \cdot)$ be an infinitary rig in the sense that
\begin{enumerate}
\item $(R, \Sigma)$ is a commutative infinitary monoid;
\item $(R, \cdot)$ is a monoid;
\item $(\sum_{\alpha \in M} a_\alpha) \cdot b = \sum_{\alpha \in M} a_\alpha \cdot b$ and $a \cdot (\sum_{\beta \in N} b_\beta) = \sum_{\beta \in N} a \cdot b_\beta$.
\end{enumerate}
If $R$ is an infinitary division rig in the sense that $R^\times := R \setminus \{0\}$ is a group, then $R^\times = \{1\}$, and $1 + 1 = 1$.
\end{lemma}
\begin{proof}
Let $\omega = 1 + 1 + \cdots$. Clearly $\omega + \omega = \omega$. Furthermore $\omega \neq 0$, because equality would imply that $0 = \omega = \omega + 1 = 0 + 1 = 1$. We now calculate that $1 + 1 = \omega^{-1} \cdot \omega + \omega^{-1} \cdot \omega = \omega^{-1} \cdot (\omega + \omega) = \omega^{-1} \cdot \omega = 1$. Thus, $a + a = a$ for all $a \in R$, and $R$ is a join semilattice with $a \vee b = a + b$.
By distributivity, $R^\times$ is a partially ordered group. Furthermore, it has a maximum element $m : = \sum_{a \in R} a$. We now calculate for all $a \in R^\times$ that $a = a \cdot 1 = a \cdot m \cdot m^{-1} \leq m \cdot m ^{-1} = 1$. This implies that $R^\times$ is trivial because $1 = a \cdot a^{-1} \leq a \cdot 1 = a \leq 1$ for all $a \in R^\times$.
\end{proof}
For all objects $X$ and $Y$, let $0_{X,Y}$ be the unique morphism $X \to Y$ that factors through $0$.
\begin{proposition}\label{D}
The endomorphisms of $I$ are $0 := 0_{I, I}$ and $1:= \mathrm{id}_{I}$.
\end{proposition}
\begin{proof}
Let $X$ be any object of $\mathbf{C}$. The hom set $\mathbf{C}(X, X)$ is of course a monoid under composition. We now define $\Sigma: \mathrm{Fam}(\mathbf{C}(X,X)) \to \mathbf{C}(X,X)$ by $\sum_{\alpha \in M} r_\alpha = \Delta^\dagger \circ (\bigoplus_{\alpha \in M} r_\alpha) \circ \Delta$, where $\Delta\: X \to \bigoplus_{\alpha \in M} X$ is the diagonal. The verification of assumptions (1)--(3) of Lemma~\ref{C} is then a routine exercise; see section~\ref{biproducts} and \cite{MacLane2}*{exercise VIII.2.4(a)}.
If $X = I$, then $\mathbf{C}(X, X) = \mathbf{C}(I, I)$ is an infinitary division rig by assumption. Therefore, by Lemma~\ref{C}, the only nonzero element of $\mathbf{C}(I,I)$ is the identity.
\end{proof}
\begin{proposition}\label{1 + 1 = 1}\label{E}
Let $X$ and $Y$ be objects of $\mathbf{C}$. We can partially order the morphisms $X \to Y$ by $r \leq s$ if $r + s = s$. Then, $\mathbf{C}(X, Y)$ is a complete lattice with $\bigvee_{\alpha \in M} r_\alpha = \sum_{\alpha \in M} r_\alpha$.
\end{proposition}
\begin{proof}
For all $r \: X \to Y$, we calculate that $r + r = 1 \bullet r + 1 \bullet r = (1 + 1) \bullet r = 1 \bullet r = r$. Hence, $\mathbf{C}(X, Y)$ is an idempotent commutative monoid. Therefore, it is a poset with the given order, and moreover, $r_1 + r_2$ is the join of morphisms $r_1, r_2\: X \to Y$.
Let $\{r_\alpha\}_{\alpha \in M}$ be any nonempty indexed family of morphisms $X \to Y$. The sum $\sum_{\alpha \in M} r_\alpha$ is clearly an upper bound. Let $s$ be another upper bound. Then, $r_\alpha + s = s$ for all $\alpha \in M$, and hence
$$
s = \sum_{\alpha \in M} s = \sum_{\alpha \in M} (r_\alpha + s) = \sum_{\alpha \in M} r_\alpha + \sum_{\alpha \in M} s = \left( \sum_{\alpha \in M} r_\alpha \right) + s.
$$
For the equality $\sum_{\alpha \in M} s = s$, we argue that $\sum_{\alpha \in M} s = \sum_{\alpha \in M} 1 \bullet s = \left( \sum_{\alpha \in M} 1 \right) \bullet s = 1 \bullet s = s$. The equality $\sum_{\alpha \in M} 1 = 1$ holds because $\sum_{\alpha \in M} 1 = 1 + \sum_{\alpha \in M'} 1$, where $M'$ is $M$ with an element removed. We conclude that $\sum_{\alpha \in M} r_\alpha \leq s$ and, more generally, that $\sum_{\alpha \in M} r_\alpha$ is the least upper bound of $\{r_\alpha\}_{\alpha \in M}$.
\end{proof}
In any dagger symmetric monoidal category with biproducts, both dagger and composition preserve sums of morphisms \cite{AbramskyCoecke2}. The involution $\dagger\: \mathbf{C}(X, Y) \to \mathbf{C} (Y, X)$ is thus a join homomorphism and hence an order isomorphism. The composition $\circ\: \mathbf{C}(X,Y) \times \mathbf{C}(Y, Z) \to \mathbf{C}(X,Z)$ is similarly a join homomorphism in each variable separately, and hence, it is monotone in each variable separately. Furthermore, a straightforward generalization of the standard argument shows that dagger and composition preserve infinite sums of morphisms. In other words, dagger and composition preserve suprema.
\begin{definition}\label{F}
For each object $X$, let $\top_X$ be the maximum morphism $I \to X$.
\end{definition}
We will soon show that $\ker(\top_X^\dagger)$ is zero. To avoid clutter, we choose a representative for each isomorphism class of dagger kernels into $X$, so that for all morphisms $r$ and $s$ out of $X$, the kernels $\ker(r)$ and $\ker(s)$ are uniquely defined and furthermore $\ker(r) = \ker(s)$ whenever $\ker(r) \cong \ker(s)$. If the objects of $\mathbf{C}$ form a proper class, and if our foundations do not allow us to choose representative dagger kernels for each of them, then we make such choices only as necessary.
\begin{proposition}\label{G}
Let $r\: X \to Y$. We have that $r=0_{X,Y}$ if and only if $r \circ \top_X = 0_{Y}$. Furthermore, $\mathrm{coker}(r) = \mathrm{coker}(r \circ \top_X)$.
\end{proposition}
\begin{proof}
The forward direction of the equivalence is trivial. For the backward direction, assume that $r \circ \top_X = 0_Y$. By the monotonicity of composition in the second variable, we have that $r \circ a = 0_Y$ for all $a\: I \to X$. Because $I$ is a separator, we conclude that $r = 0$, as desired. Hence, we have proved the equivalence.
To prove the equality, we compare $\mathrm{coker}(r)\: X \to A$ and $\mathrm{coker}(r \circ \top_X)\: X \to B$. We first observe that $\mathrm{coker}(r) \circ r \circ \top_X = 0_A$, so $\mathrm{coker}(r)$ factors through $\mathrm{coker}(r \circ \top_X)$. Next, we observe that $\mathrm{coker}(r \circ \top_X) \circ r \circ \top_X = 0_B$. Via the proved equivalence, we infer that $\mathrm{coker}(r \circ \top_X) \circ r = 0_{X,B}$, so $\mathrm{coker}(r \circ \top_X)$ factors through $\mathrm{coker}(r)$. It follows that $\mathrm{coker} (r)$ and $\mathrm{coker} (r \circ \top_X)$ are equal.
\end{proof}
\begin{definition}\label{H}
For each morphism $a\: I \to X$, let $\neg a$ be the maximum morphism $I \to X$ such that $a^\dagger \circ \neg a = 0$.
\end{definition}
\begin{lemma}\label{I}
Let $a\: I \to X$. Then, $j = \ker(a^\dagger)^\perp$ satisfies $a = j \circ \top_{A}$ and $\neg a = j^\perp \circ \top_{A^\perp}$, where $A$ is the domain of $j$ and $A^\perp$ is the domain of $j^\perp$.
\end{lemma}
\begin{proof}
For all $b\:I \to X$, we have the following chain of equivalences:
\begin{align*}
(j \circ \top_A)^\dagger \circ b = 0 & \quad\Longleftrightarrow\quad \top_A^\dagger \circ j^\dagger \circ b = 0 \quad\Longleftrightarrow\quad j^\dagger \circ b = 0 \quad\Longleftrightarrow\quad (\exists c)\; b = \ker(j^\dagger) \circ c
\\ & \quad\Longleftrightarrow\quad (\exists c)\; b = j^\perp \circ c \quad\Longleftrightarrow\quad (\exists c)\; b = \ker(a^\dagger) \circ c \quad\Longleftrightarrow\quad a^\dagger \circ b = 0.
\end{align*}
The second equivalence follows by Proposition \ref{G}. The second-to-last equivalence follows by \cite{HeunenJacobs}*{Lemma 3}. Because $I$ is a separator, we conclude that $(j \circ \top_A)^\dagger = a^\dagger$ or equivalently that $j \circ \top_A = a$.
We prove the equation $\neg a = j^\perp \circ \top_{A^\perp}$ as a pair of inequalities. In one direction, we calculate that $a^\dagger \circ j^\perp \circ \top_{A^\perp} = a^\dagger \circ \ker(a^\dagger) \circ \top_{A^\perp} = 0$, concluding that $j^\perp \circ \top_{A^\perp} \leq \neg a$. In the other direction, we reason that
\begin{align*}&
a^\dagger \circ \neg a = 0 \quad\Longrightarrow\quad (\exists c)\; \neg a = \ker(a^\dagger) \circ c = j^\perp \circ c \quad\Longrightarrow\quad \neg a \leq j^\perp \circ \top_{A^\perp}.
\end{align*}
Therefore, $\neg a = j^\perp \circ \top_{A^\perp}$, as claimed.
\end{proof}
\begin{proposition}\label{J}
For each object $X$, morphisms $I \to X$ form a complete ortholattice with orthocomplement $a \mapsto \neg a$.
\end{proposition}
\begin{proof}
The set $\mathbf{C}(I,X)$ is a complete lattice by Proposition \ref{E}. The operation $a \mapsto \neg a$ is antitone as an immediate consequence of Definition \ref{H}, and we now show that it is furthermore an order-reversing involution. Let $a\: I \to X$, and let $b = \neg a$. By Lemma~\ref{I}, the morphisms $j = \ker (a^\dagger)^\perp\: A \to X$ and $k = \ker(b^\dagger)^\perp\: B \to X$ are such that $a = j \circ \top_A$, that $\neg a = j^\perp \circ \top_{A^\perp}$, that $b = k \circ \top_B$, and that $\neg b = k^\perp \circ \top_{B^\perp}$. By Proposition~\ref{G}, $k = \ker(b^\dagger)^\perp = \ker(\top_{A^\perp}^\dagger \circ j^{\perp\dagger})^\perp = \ker(j^{\perp\dagger})^\perp = j^{\perp\perp\perp} = j^\perp$. Thus, $k^\perp = j^{\perp \perp} = j$, and $\neg \neg a = \neg b = k^\perp \circ \top_{B^\perp} = j \circ \top_A = a$. Therefore, $a \mapsto \neg a$ is indeed an order-reversing involution on $\mathbf{C}(I,X)$.
For all $a\: I \to X$, we also have that $(a \And \neg a)^\dagger \circ (a \And \neg a) \leq a^\dagger \circ \neg a = 0$ and thus that $a \And \neg a = 0_X$. Dually, $a \mathbin{\vee} \neg a = \neg \neg a \mathbin{\vee} \neg a = \neg (\neg a \And a) = \neg 0_X = \top_X$. Thus, $\neg a$ is a complement of $a$ for all $a \: I \to X$, and therefore, $\mathbf{C}(I, X)$ is an ortholattice.
\end{proof}
\begin{lemma}\label{K}
Let $j\: A \to X$ be a dagger kernel. Then, $j \circ j^\dagger + j^\perp \circ j^{\perp\dag} = \mathrm{id}_X$.
\end{lemma}
\begin{proof}
Let $i = [j, j^\perp]\: A \oplus A^\perp \to X$, and let $\mathrm{inc}_1\: A \to A \oplus A^\perp$ and $\mathrm{inc}_2\: A^\perp \to A \oplus A^\perp$ be the coproduct inclusions. We calculate that $\mathrm{inc}_1^\dagger \circ i^\dagger \circ i \circ \mathrm{inc}_1 = j^\dagger \circ j = \mathrm{id}_A = \mathrm{inc}_1^\dagger \circ \mathrm{id}_{A \oplus A^\perp} \circ \mathrm{inc}_1$, and similarly, $\mathrm{inc}_2^\dagger \circ i^\dagger \circ i \circ \mathrm{inc}_2 = \mathrm{inc}_2^\dagger \circ \mathrm{id}_{A \oplus A^\perp} \circ \mathrm{inc}_2$. We also calculate that $\mathrm{inc}_1^\dag \circ i^\dag \circ i \circ \mathrm{inc}_2 = j^\dag \circ j^\perp = 0_{A^\perp, A} = \mathrm{inc}_1^\dag \circ \mathrm{id}_{A \oplus A^\perp} \circ \mathrm{inc}_2$, and dually, $\mathrm{inc}_2^\dag \circ i^\dag \circ i \circ \mathrm{inc}_1 = \mathrm{inc}_2^\dag \circ \mathrm{id}_{A \oplus A^\perp} \circ \mathrm{inc}_1$. We conclude that $i^\dagger \circ i = \mathrm{id}_{A \oplus A^\perp}$, in other words, that $i$ is dagger monic. It is also epic because $j$ and $j^\perp$ are jointly epic by assumption. Therefore, $i$ is a dagger isomorphism. We now calculate that
$$
\mathrm{id}_X = i \circ i^\dagger = [j,j^\perp] \circ [j, j^\perp]^\dagger = \nabla_X \circ ( j \oplus j^\perp) \circ (j \oplus j^\perp)^\dagger \circ \nabla^\dag_X = j \circ j^\dagger + j^\perp \circ j^{\perp\dag}.
$$
\end{proof}
\begin{theorem}\label{L}
For each object $X$, morphisms $I \to X$ form a complete Boolean algebra.
\end{theorem}
\begin{proof}
We have already shown that $\mathbf{C}(I, X)$ is a complete ortholattice. It remains to prove the distributive law. Let $a\: I \to X$. We will show that $b \mapsto a \And b$ distributes over joins.
Let $b\: I \to X$. By Lemma~\ref{I}, the dagger kernel $j = \ker(a^\dagger)^\perp\: A \to X$ satisfies $j \circ \top_A = a$. We claim that $j \circ j^\dagger \circ b = a \And b$. We certainly have that $j \circ j^\dagger \circ b \leq j \circ \top_A = a$, and by Lemma~\ref{K}, we also have that $j \circ j^\dagger \circ b \leq j \circ j^\dag \circ b + j^\perp \circ j^{\perp\dag}\circ b = b$. Thus, $j \circ j^\dagger \circ b$ is a lower bound for $a$ and $b$.
Let $c\: I \to X$ be any lower bound for $a$ and $b$. Then, $(\neg a)^\dagger \circ c \leq (\neg a)^\dagger \circ a = 0$, so $c = \ker((\neg a)^\dagger) \circ d$ for some morphism $d$. Applying Lemma~\ref{I} again, we calculate that
$
c = \ker(\top_{A^\perp}^\dagger \circ j^{\perp\dagger}) \circ d = \ker(j^{\perp\dagger}) \circ d= j^{\perp\perp} \circ d = j \circ d.
$
It follows that
$$
c = j \circ d = j \circ j^\dagger \circ j \circ d = j\circ j^\dag \circ c \leq j \circ j^\dagger \circ b.
$$
Therefore, $j \circ j^\dagger \circ b = a \And b$ for all $b\: I \to X$.
Let $b_1, b_2 \: I \to X$. We calculate that
$$
a \And (b_1 \mathbin{\vee} b_2) = j \circ j^\dagger \circ (b_1 + b_2) = j \circ j^\dagger \circ b_1 + j \circ j^\dagger \circ b_2 = (a \And b_1) \mathbin{\vee} (a \And b_2).
$$
Therefore, $a \And (b_1 \mathbin{\vee} b_2) = (a \And b_1) \mathbin{\vee} (a \And b_2)$ for all $a, b_1, b_2\: I \to X$. We conclude that $\mathbf{C}(I, X)$ is a Boolean algebra.
\end{proof}
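For orientation, it may help to unwind Theorem~\ref{L} in the motivating example $\mathbf{C} = \mathbf{Rel}$ (a reading we spell out here; it is not needed for the proof). A morphism $a\: \{\ast\} \to X$ is just a subset $A \subseteq X$, the order is inclusion, and the operations above become
$$
a \mathbin{\vee} b = A \cup B, \qquad a \And b = A \cap B, \qquad \neg a = X \setminus A, \qquad \top_X = X,
$$
so $\mathbf{C}(I, X)$ is the power-set Boolean algebra of $X$, whose atoms are the singletons.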
\section{Dagger compact closed categories}\label{part2}
Additionally, assume that $(\mathbf{C}, \otimes, I, \dagger)$ is dagger compact closed \cite{Selinger}\cite{AbramskyCoecke}. This means that each object has a dagger dual. Explicitly, for each object $X$, there exists an object $X^*$ and a morphism $\eta_X \: I \to X^* \otimes X$ such that $(\eta_X^\dagger \otimes \mathrm{id}_X) \circ (\mathrm{id}_X \otimes \eta_{X^*}) = \mathrm{id}_X$ and $(\mathrm{id}_{X^*} \otimes \eta_X^\dagger) \circ (\eta_{X^*} \otimes \mathrm{id}_{X^*}) = \mathrm{id}_{X^*}$. Here, we have suppressed associators and unitors. More commonly, the dagger dual $X^*$ of $X$ is defined together with a morphism $\eta_X\: I \to X^* \otimes X$ and a morphism $\epsilon_X\: X \otimes X^* \to I$ that are then related by $\epsilon_X^\dagger = \beta_{X^*, X} \circ \eta_X$, and we have simplified this definition in the obvious way. Here, $\beta_{X^*, X} \: X^* \otimes X \to X \otimes X^*$ is the braiding.
In any dagger compact closed category, we have a bijection $\mathbf{C}(X, Y) \to \mathbf{C}(I, X^* \otimes Y)$ that is defined by $r \mapsto \breve r := (\mathrm{id}_{X^*} \otimes r) \circ \eta_X$. In a dagger compact closed category with biproducts, this is an isomorphism of commutative monoids. Hence, as a corollary of Theorem~\ref{L}, $\mathbf{C}(X,Y)$ is a complete Boolean algebra for all objects $X$ and $Y$.
We show that $\mathbf{C}(I, X^* \otimes Y)$ and hence $\mathbf{C}(X, Y)$ is an atomic complete Boolean algebra.
\begin{proposition}\label{M}
$I$ is a monoidal separator.
\end{proposition}
\begin{proof}
Let $r_1, r_2\: X \otimes Y \to Z$, and assume that $r_1 \circ (a \otimes b) = r_2 \circ (a \otimes b)$ for all $a\: I \to X$ and $b\: I \to Y$. This equation is equivalent to $r_1 \circ (\mathrm{id}_X \otimes b) \circ a = r_2 \circ (\mathrm{id}_X \otimes b) \circ a$. It follows that $r_1 \circ (\mathrm{id}_X \otimes b) = r_2 \circ (\mathrm{id}_X \otimes b)$ for all $b\: I \to Y$, because $I$ is a separator. Applying the canonical isomorphism $\mathbf{C}(X, Z) \to \mathbf{C}(I, X^* \otimes Z)$, we find that $(\mathrm{id}_{X^*} \otimes (r_1 \circ (\mathrm{id}_X \otimes b))) \circ \eta_X = (\mathrm{id}_{X^*} \otimes (r_2 \circ (\mathrm{id}_X \otimes b))) \circ \eta_X$. Now we compute that
\begin{align*}
(\mathrm{id}_{X^*} \otimes r_1) \circ (\eta_X \otimes \mathrm{id}_Y) \circ b
& =
(\mathrm{id}_{X^*} \otimes (r_1 \circ (\mathrm{id}_X \otimes b))) \circ \eta_X
=
(\mathrm{id}_{X^*} \otimes (r_2 \circ (\mathrm{id}_X \otimes b))) \circ \eta_X
\\ & =
(\mathrm{id}_{X^*} \otimes r_2) \circ (\eta_X \otimes \mathrm{id}_Y) \circ b.
\end{align*}
It follows that $(\mathrm{id}_{X^*} \otimes r_1) \circ (\eta_X \otimes \mathrm{id}_Y) = (\mathrm{id}_{X^*} \otimes r_2) \circ (\eta_X \otimes \mathrm{id}_Y)$, because $I$ is a separator. The function $r \mapsto (\mathrm{id}_{X^*} \otimes r) \circ (\eta_X \otimes \mathrm{id}_Y)$ is an isomorphism $\mathbf{C}(X \otimes Y, Z) \to \mathbf{C}(Y, X^* \otimes Z)$. Therefore, $r_1 = r_2$. More generally, we conclude that $I$ is a monoidal separator.
\end{proof}
\begin{lemma}\label{N}
Let $X$ be an object. If $\top_X^\dagger \circ \top_X = 1$, then $\mathbf{C}(I,X)$ contains an atom.
\end{lemma}
\begin{proof}
Assume that $\top_X^\dag \circ \top_X = 1$, and assume that $\mathbf{C}(I, X)$ contains no atoms. Let $r\: X \to X$ be the morphism $r=\sup\{ \neg c \circ c^\dagger \,|\, c\:I \to X\}$. Let $a$ be a nonzero morphism $I \to X$. By assumption, $a$ is not an atom, so $a = a_1 \mathbin{\vee} a_2$ for some disjoint nonzero $a_1, a_2\: I \to X$. Hence,
\begin{align*}
r \circ a \geq
((\neg a_1 \circ a_1^\dagger) \mathbin{\vee} (\neg a_2 \circ a_2^\dagger)) \circ a & = (\neg a_1 \circ a_1^\dagger \circ a) \mathbin{\vee} (\neg a_2 \circ a_2^\dagger \circ a) \\ & = \neg a_1 \mathbin{\vee} \neg a_2 = \neg (a_1 \And a_2) = \neg 0_X = \top_X.
\end{align*}
We conclude that $r \circ a = \top_X$ for all nonzero $a\: I \to X$, and of course, $r\circ 0_X = 0_X$. Because $I$ is separating, it follows that $r = \top_X \circ \top_X^\dagger$.
The monoidal category $(\mathbf{C}, \otimes, I)$ has a trace because it is compact closed. We calculate
\begin{align*}
1 & = \mathrm{Tr}(1) = \mathrm{Tr} (\top_X^\dagger \circ \top_X) = \mathrm{Tr}(\top_X \circ \top_X^\dagger) = \mathrm{Tr}\left( \bigvee_{c\: I \to X} \neg c \circ c^\dag\right) \\ & = \bigvee_{c\: I \to X} \mathrm{Tr}(\neg c \circ c^\dag) = \bigvee_{c\: I \to X} \mathrm{Tr}(c^\dagger \circ \neg c) = \bigvee_{c\: I \to X} 0 = 0.
\end{align*}
This conclusion contradicts Lemma~\ref{C}. Therefore, $\mathbf{C}(I, X)$ has at least one atom.
\end{proof}
\begin{theorem}\label{O}
Let $X$ be an object. Then $\mathbf{C}(I, X)$ is a complete atomic Boolean algebra.
\end{theorem}
\begin{proof}
Assume that $\mathbf{C}(I, X)$ is not atomic. It follows that there exists a nonzero morphism $a\: I \to X$ such that there exist no atoms $x \leq a$. By Lemma~\ref{I}, there exists a dagger kernel $j\: A \to X$ such that $j \circ \top_A = a$ and hence $\top_A^\dagger \circ \top_A = a^\dagger \circ a = 1$. By Lemma~\ref{N}, $\mathbf{C}(I, A)$ contains an atom $z$.
We claim that $j \circ z$ is an atom of $\mathbf{C}(I,X)$. This morphism is certainly nonzero, because $j^\dag \circ j \circ z = z \neq 0$. Let $b \leq j \circ z$ be nonzero too. Then, $j^\perp \circ j^{\perp\dag} \circ b \leq j ^\perp \circ j^{\perp \dag} \circ j \circ z = 0_{I, X}$, so
$$
j \circ j^\dag \circ b = j \circ j^\dag \circ b + j^\perp \circ j^{\perp\dag} \circ b = b
$$
by Lemma~\ref{K}. Thus, $j^\dag \circ b \neq 0$ because $b\neq 0$. Furthermore, $j^\dag \circ b \leq j^\dag \circ j \circ z = z$. Because $z$ is an atom, we conclude that $j^\dagger \circ b = z$ and hence that $b = j \circ j^\dag \circ b = j \circ z$. Therefore, $j \circ z$ is an atom.
Of course, $j \circ z \leq j \circ \top_A = a$, so there is a contradiction with our choice of $a$. We conclude that $\mathbf{C}(I, X)$ is atomic after all.
\end{proof}
\begin{definition}\label{P}
For each object $X$, define $E(X)$ to be the set of atoms of $\mathbf{C}(I,X)$. For each morphism $r\: X \to Y$, define $E(r) = \{(x,y) \in E(X) \times E(Y) \,|\, y^\dagger \circ r \circ x = 1\}.$
\end{definition}
We now show that $E$ is an equivalence of dagger symmetric monoidal categories $\mathbf{C} \to \mathbf{Rel}$. We often appeal to the fact that $x_1 = x_2$ if and only if $x_1^\dagger \circ x_2 = 1$, for all $x_1, x_2 \in E(X)$. Indeed, we reason that $x_1 \neq x_2$ if and only if $x_2 \leq \neg x_1$ if and only if $x_1^\dagger \circ x_2 = 0$.
\begin{lemma}\label{Q}
Let $X$ be an object. Then, $\mathrm{id}_X = \sup\{x \circ x^\dag \,|\, x \in E(X)\}$.
\end{lemma}
\begin{proof}
For all $a\: I \to X$, we calculate that
\begin{align*}&
\left(\bigvee_{x \in E(X)} x \circ x^\dagger\right) \circ a = \bigvee_{x \in E(X)} x \circ x^\dagger \circ a = \bigvee_{\begin{smallmatrix} x \in E(X) \\ x \leq a \end{smallmatrix}} x = a = \mathrm{id}_X \circ a.
\end{align*}
We conclude the claimed equality because $I$ is a separator.
\end{proof}
\begin{lemma}\label{R}
$E$ is a dagger functor $\mathbf{C} \to \mathbf{Rel}$. This means that $E$ is a functor such that $E(r^\dagger) = E(r)^\dagger$ for all morphisms $r$ of $\mathbf{C}$.
\end{lemma}
\begin{proof}
Let $X$ be an object of $\mathbf{C}$.
\begin{align*}
E(\mathrm{id}_X)
& =
\{(x_1, x_2) \in E(X) \times E(X) \,|\, x_2^\dagger \circ \mathrm{id}_X \circ x_1 = 1\}
\\& =
\{(x_1, x_2) \in E(X) \times E(X) \,|\, x_1 = x_2\} = \mathrm{id}_{E(X)}.
\end{align*}
Let $r\: X \to Y$ and $s\: Y \to Z$ be morphisms of $\mathbf{C}$. We apply Lemma~\ref{Q} to calculate that
\begin{align*}
E(s \circ r) & = \{(x,z) \in E(X) \times E(Z) \,|\, z^\dagger \circ s \circ r \circ x= 1\}
\\ & \textstyle = \{(x,z) \in E(X) \times E(Z) \,|\, \bigvee_{y \in E(Y)} z^\dagger \circ s \circ y \circ y^\dag\circ r \circ x= 1\}
\\ & \textstyle = \{(x,z) \in E(X) \times E(Z) \,|\, z^\dagger \circ s \circ y = 1 \text{ and } y^\dag\circ r \circ x= 1\text{ for some }y \in E(Y)\}
\\ & =
E(s) \circ E(r).
\end{align*}
Thus, $E$ is a functor. That $E$ is a dagger functor follows immediately from the definition.
\end{proof}
\begin{proposition}\label{S}
$E$ is a dagger equivalence $\mathbf{C} \to \mathbf{Rel}$. This means that $E$ is a full and faithful dagger functor and every set is dagger isomorphic to $E(X)$ for some object $X$ of $\mathbf{C}$.
\end{proposition}
\begin{proof}
Let $r, s\: X \to Y$. Assume that $E(r) = E(s)$, i.e., that $y^\dagger \circ r \circ x = y^\dagger \circ s \circ x$ for all atoms $x\: I \to X$ and all atoms $y\: I \to Y$. Since $\mathbf{C}(I, X)$ and $\mathbf{C}(I, Y)$ are complete atomic Boolean algebras by Theorem~\ref{O}, we find that $b^\dagger \circ r \circ a = b^\dagger \circ s \circ a$ for all morphisms $a\: I \to X$ and all morphisms $b\: I \to Y$. Appealing twice to our assumption that $I$ is a separator, we conclude that $r = s$. Therefore, $E$ is faithful.
Let $X$ and $Y$ be objects of $\mathbf{C}$, and let $R\: E(X) \to E(Y)$ be a binary relation. We reason that for all $x_0 \in E(X)$ and $y_0 \in E(Y)$,
\begin{align*}
(x_0, y_0) \in E\left(\bigvee_{(x,y) \in R} y \circ x^\dagger\right)
{}& \quad\Longleftrightarrow\quad
y_0^\dagger \circ \left(\bigvee_{(x,y) \in R} y \circ x^\dagger\right) \circ x_0 = 1
\\ & \quad\Longleftrightarrow\quad
\bigvee_{(x,y) \in R} y_0^\dagger \circ y \circ x^\dagger \circ x_0 = 1
\quad\Longleftrightarrow\quad
(x_0, y_0) \in R.
\end{align*}
We conclude that $E\left(\bigvee_{(x,y) \in R} y \circ x^\dagger\right) = R$. Therefore, $E$ is full.
Let $M$ be a set. Let $X = \bigoplus_{m \in M} I$, and for each $m \in M$, let $j_m\: I \to X$ be the inclusion morphism for the summand of index $m$. We prove that $j_m$ is an atom. Let $a\: I \to X$ be a nonzero morphism such that $a \leq j_m$. It follows that $a^\dagger \circ j_m \geq a^\dagger \circ a = 1$. Furthermore, for all $m' \neq m$, we have that $a^\dagger \circ j_{m'} \leq j_m^\dagger \circ j_{m'} = 0$. By the universal property of $X$, we conclude that $a^\dagger = j_m^\dagger$ or equivalently that $a = j_m$. Therefore, $j_m$ is an atom for all $m \in M$.
Suppose that there is an atom $x\: I \to X$ such that $x \neq j_m$ for all $m \in M$. Then $x^\dagger \circ j_m = 0$. By the universal property of $X$, we conclude that $x^\dagger = 0_{X, I}$, contradicting that $x$ is an atom. Thus, $E(X) = \{j_m \,|\, m \in M\}$. The bijection $m \mapsto j_m$ is a dagger isomorphism $M \to \{j_m \,|\, m \in M\}$ in $\mathbf{Rel}$, so $M$ is dagger isomorphic to $E(X)$. Therefore, every set is dagger isomorphic to $E(X)$ for some object $X$ of $\mathbf{C}$.
\end{proof}
Finally, we prove that $E$ is a monoidal functor. We suppress unitors throughout.
\begin{lemma}\label{T}
Let $X$ and $Y$ be objects. Then, $x \otimes y \in E(X \otimes Y)$ for all $x \in E(X)$ and $y \in E(Y)$, and this defines a bijection $\mu_{X,Y}\: E(X) \times E(Y) \to E(X \otimes Y)$.
\end{lemma}
\begin{proof}
Let $x \in E(X)$ and $y \in E(Y)$. Then, $x \otimes y$ is nonzero because $(x \otimes y)^\dag \circ (x \otimes y) = 1$. The Boolean algebra $\mathbf{C}(I, X \otimes Y)$ is atomic, so there is an atom $z\in E(X \otimes Y)$ such that $z \leq x \otimes y$. We now show that $z = x \otimes y$ by appealing to the fact that $I$ is a monoidal separator by Proposition~\ref{M}.
Let $a\: I \to X$ and $b\: I \to Y$. If $x \leq \neg a$ or $y \leq \neg b$, then $x^\dag \circ a = 0$ or $y^\dag \circ b = 0$, so
$$
z^\dag \circ (a \otimes b) \leq (x \otimes y)^\dag \circ (a \otimes b) = (x^\dag \circ a) \otimes (y^\dag \circ b) = 0
$$
and thus $z^\dag \circ (a \otimes b) = 0 = (x \otimes y)^\dag \circ (a \otimes b)$. If $x \leq a$ and $y \leq b$, then
$$
z^\dag \circ (a \otimes b) \geq z^\dag \circ (x \otimes y) \geq z^\dag \circ z = 1,
$$
and thus $z^\dag \circ (a \otimes b) = 1 = (x \otimes y)^\dag \circ (a \otimes b)$. Therefore, $z^\dag \circ (a \otimes b) = (x \otimes y)^\dag \circ (a \otimes b)$ for all $a\: I \to X$ and $b\: I \to Y$, and we conclude that $z^\dag = (x \otimes y)^\dag$ or equivalently that $z = x \otimes y$. Consequently, $x \otimes y$ is an atom.
We have shown that $x \otimes y \in E(X \otimes Y)$ for all $x \in E(X)$ and $y \in E(Y)$, and hence $(x, y) \mapsto (x \otimes y)$ defines a function $\mu_{X,Y}\: E(X) \times E(Y) \to E(X \otimes Y)$. This function is injective because $(x_1 \otimes y_1)^\dagger \circ (x_2 \otimes y_2) = (x_1^\dagger \circ x_2) \otimes (y_1^\dagger \circ y_2) = 0$ whenever $x_1 \neq x_2$ or $y_1 \neq y_2$. This function is surjective because, by Lemma~\ref{Q}, for all $z \in E(X \otimes Y)$, we have that
$$
z = \mathrm{id}_{X \otimes Y} \circ z = (\mathrm{id}_X \otimes \mathrm{id}_Y) \circ z
=
\bigvee_{x \in E(X)} \bigvee_{y \in E(Y)} (x \otimes y) \circ (x \otimes y)^\dagger \circ z
$$
and thus $(x \otimes y)^\dagger \circ z \neq 0$ for some $(x,y) \in E(X) \times E(Y)$. Therefore, $\mu_{X, Y}$ is a bijection $E(X) \times E(Y) \to E(X \otimes Y)$.
\end{proof}
\begin{proposition}\label{U}
$E$ is a strong symmetric monoidal functor $(\mathbf{C}, \otimes, I) \to (\mathbf{Rel}, \times, \{\ast\})$:
\begin{enumerate}
\item the isomorphism $\{\ast\} \to E(I)$ is the function $\ast \mapsto 1$;
\item the natural isomorphism $E(X) \times E(Y) \to E(X \otimes Y)$ is the function $(x, y) \mapsto x \otimes y$.
\end{enumerate}
\end{proposition}
\begin{proof}
For all objects $X$, $Y$ and $Z$, let $a_{X, Y, Z}\: (X \otimes Y) \otimes Z \to X \otimes (Y \otimes Z)$ be the associator in $\mathbf{C}$, and for all sets $L$, $M$, and $N$, let $\alpha_{L, M, N}\: (L \times M) \times N \to L \times (M \times N)$ be the associator in $\mathbf{Rel}$. We prove that the following diagram commutes:
$$
\begin{tikzcd}[column sep = 10em]
(E(X) \times E(Y)) \times E(Z)
\arrow{r}{\alpha_{E(X), E(Y), E(Z)}}
\arrow{d}[swap]{\mu_{X,Y} \times \mathrm{id}_Z}
&
E(X) \times (E(Y) \times E(Z))
\arrow{d}{\mathrm{id}_X \times \mu_{Y,Z}}
\\
E(X \otimes Y) \times E(Z)
\arrow{d}[swap]{\mu_{X \otimes Y, Z}}
&
E(X) \times E(Y \otimes Z)
\arrow{d}{\mu_{X, Y \otimes Z}}
\\
E((X \otimes Y) \otimes Z)
\arrow{r}[swap]{E(a_{X, Y, Z})}
&
E(X \otimes (Y \otimes Z))
\end{tikzcd}
$$
The six morphisms in this diagram are binary relations that are functions. In particular, $E(a_{X, Y, Z})$ consists of pairs $(((x_1 \otimes y_1) \otimes z_1), (x_2 \otimes (y_2 \otimes z_2)))$ that satisfy the following equivalent conditions:
\begin{align*}
(x_2 \otimes (y_2 \otimes z_2))^\dagger \circ a_{X, Y, Z}\circ & ((x_1 \otimes y_1) \otimes z_1) = 1
\\ & \quad\Longleftrightarrow\quad
(x_2 \otimes (y_2 \otimes z_2))^\dagger \circ (x_1 \otimes (y_1 \otimes z_1)) = 1
\\ & \quad\Longleftrightarrow\quad
(x_2^\dagger \circ x_1) \otimes ((y_2^\dagger \circ y_1) \otimes (z_2^\dagger \circ z_1)) = 1
\\ & \quad\Longleftrightarrow\quad
x_1 = x_2 \; \text{and} \; y_1 = y_2 \; \text{and} \, z_1 = z_2.
\end{align*}
We can now prove that the diagram commutes via function application. We simply compute that for all $x \in E(X)$, $y \in E(Y)$, and $z \in E(Z)$, we have that
\begin{align*}&
(E(a_{X,Y,Z}) \circ \mu_{X \otimes Y, Z} \circ (\mu_{X,Y} \times \mathrm{id}_Z))((x, y), z)
=
(E(a_{X,Y,Z}) \circ \mu_{X \otimes Y, Z})(x \otimes y, z)
\\ & =
E(a_{X,Y,Z})((x \otimes y) \otimes z)
=
x \otimes (y \otimes z)
=
\mu_{X, Y \otimes Z}(x, y \otimes z)
\\ & =
(\mu_{X, Y \otimes Z} \circ (\mathrm{id}_X \times \mu_{Y,Z}))(x, (y, z))
=
(\mu_{X, Y \otimes Z} \circ (\mathrm{id}_X \times \mu_{Y,Z}) \circ \alpha_{E(X), E(Y), E(Z)})((x,y), z).
\end{align*}
We conclude that $E$ together with the natural bijection $\mu_{X, Y}\: E(X) \times E(Y) \to E(X \otimes Y)$ is a strong monoidal functor. The canonical bijection $\{\ast\} \to E(I)$ for this monoidal functor is evidently the unique such bijection \cite{EtingofGelakiNikshychOstrik}*{section 2.4}.
We verify that $E$ respects the braiding. For all objects $X$ and $Y$, let $b_{X,Y}\: X \otimes Y \to Y \otimes X$ be the braiding in $\mathbf{C}$, and for all sets $M$ and $N$, let $\beta_{M, N}\: M \times N \to N \times M$ be the braiding in $\mathbf{Rel}$. We prove that the following diagram commutes:
$$
\begin{tikzcd}[column sep = 6em]
E(X) \times E(Y)
\arrow{d}[swap]{\mu_{X,Y}}
\arrow{r}{\beta_{E(X), E(Y)}}
&
E(Y) \times E(X)
\arrow{d}{\mu_{Y,X}}
\\
E(X \otimes Y)
\arrow{r}[swap]{E(b_{X,Y})}
&
E(Y \otimes X)
\end{tikzcd}
$$
As before, the four morphisms in this diagram are binary relations that are functions. In particular, $E(b_{X,Y})$ consists of pairs $(x_1 \otimes y_1, y_2 \otimes x_2)$ that satisfy the following equivalent conditions:
\begin{align*}&
(y_2 \otimes x_2)^\dagger \circ b_{X,Y} \circ (x_1 \otimes y_1) = 1
\quad\Longleftrightarrow\quad
(y_2 \otimes x_2)^\dagger \circ (y_1 \otimes x_1) = 1
\\ & \quad\Longleftrightarrow\quad
(y_2^\dagger \circ y_1) \otimes (x_2^\dagger \circ x_1) = 1
\quad\Longleftrightarrow\quad
x_1 = x_2 \text{ and } y_1 = y_2.
\end{align*}
We can now prove that the diagram commutes via function application. We simply compute that for all $x \in E(X)$ and $y \in E(Y)$, we have that
\begin{align*}
(E(b_{X,Y}) \circ \mu_{X,Y})(x,y)
=
E(b_{X,Y})(x \otimes y)
=
y \otimes x
=
\mu_{Y, X}(y, x)
=
(\mu_{Y,X} \circ \beta_{E(X), E(Y)})(x, y).
\end{align*}
Therefore, $E$ is a strong symmetric monoidal functor.
\end{proof}
\begin{theorem}\label{V}
Let $(\mathbf{C}, \otimes, I, \dagger)$ be a dagger compact closed category. If
\begin{enumerate}
\item each family of objects has a dagger biproduct,
\item each morphism has a kernel that is dagger monic,
\item $k$ and $k^\perp$ are jointly epic for each dagger kernel $k$,
\item each nonzero morphism $I \to I$ is invertible,
\item $I$ is a separator,
\end{enumerate}
then the functor $E\: \mathbf{C} \to \mathbf{Rel}$ of Definition~\ref{P} is a strong symmetric monoidal dagger equivalence. Conversely, it is routine to verify that $(\mathbf{Rel}, \times, \{\ast\}, \dagger)$ is a dagger compact closed category satisfying (1)--(5).
\end{theorem}
\begin{proof}
Combine Propositions \ref{S} and \ref{U}.
\end{proof}
Assuming sufficient choice, the adjoint of $E$ \cite{MacLane2}*{Theorem IV.4.1} can be selected to be a dagger functor \cite{Vicary}*{Lemma 5.1} and can then be made a strong symmetric monoidal functor \cite{EtingofGelakiNikshychOstrik}*{Remark 2.4.10}.
\begin{corollary}\label{W}
Let $(\mathbf{C}, \otimes, I, \dagger)$ be a dagger compact closed category. If
\begin{enumerate}[\quad\;\,(1')]
\item each family of objects has a dagger biproduct,
\item $I$ is simple and separating,
\item each object $X$ has a unique morphism $\top_X\: I \to X$ such that $\mathrm{coker}(\top_X) = 0$;
\item each morphism $a\: I \to X$ has a dagger isomorphism $i\: A \oplus B \to X$ such that
$$
\begin{tikzcd}
I
\arrow{r}{a}
\arrow{d}[swap]{\top_A}
&
X
\\
A
\arrow{r}[swap]{\mathrm{inc}_1}
&
A \oplus B,
\arrow{u}[swap]{i}
\end{tikzcd}
$$
\end{enumerate}
then the functor $E\: \mathbf{C} \to \mathbf{Rel}$ of Definition~\ref{P} is a strong symmetric monoidal dagger equivalence. Conversely, it is routine to verify that $(\mathbf{Rel}, \times, \{\ast\}, \dagger)$ is a dagger compact closed category satisfying (1')--(4').
\end{corollary}
\begin{proof}
Assume (1')--(4'). First, we claim that for all $a\: I \to X$, if $a^\dagger \circ a = 0$, then $a = 0$. Applying assumption (4'), we write $a = i \circ \mathrm{inc}_1 \circ \top_A$, where $i$ is a dagger isomorphism and $\mathrm{coker}(\top_A) = 0$. Assume $a^\dagger \circ a = 0$. Then, $0 = a^\dagger \circ a = \top_A^\dagger \circ {\mathrm{inc}_1}^\dagger \circ i^\dagger \circ i \circ \mathrm{inc}_1 \circ \top_A = \top_A^\dagger \circ \top_A$. It follows that $\top_A^\dagger$ factors through $0$. Thus, $\top_A$ and hence $a$ factor through $0$. We have established our first claim.
Second, we claim that there are exactly two morphisms $I \to I$, namely, $0:= 0_I \neq \mathrm{id}_I$ and $1 := \top_I = \mathrm{id}_I$. Let $a\: I \to I$. By assumption (2'), $\mathrm{coker}(a) = \;!\: I \to 0$ or $\mathrm{coker}(a) = \mathrm{id}_I\: I \to I$ up to isomorphism. In the former case, $a = \top_I$ by assumption (3'), and in the latter case, $a = 0_I$. In particular, $\mathrm{id}_I = \top_I$ or $\mathrm{id}_I = 0_I$. In the latter case, $I \cong 0$, contradicting assumption (2'). Therefore, $\mathrm{id}_I \neq 0_I$, and hence $\mathrm{id}_I = \top_I$. We have established our second claim.
Thus, $(\mathbf{C}, \otimes, I, \dagger)$ is a dagger compact closed category that satisfies assumptions (1), (4), and (5) of Theorem~\ref{V}. It remains to show that $(\mathbf{C}, \otimes, I, \dagger)$ satisfies assumptions (2) and (3) of Theorem~\ref{V}.
Let $r\: X \to Y$. Let $a = r \circ \top_X$. By assumption (4'), there exists a dagger isomorphism $i\: A \oplus B \to Y$ such that $a = i \circ \mathrm{inc}_1 \circ \top_A$. We claim that $\mathrm{inc}_2^\dagger \circ i^\dagger$ is a cokernel of $r$. First, we calculate that $\mathrm{inc}_2^\dagger \circ i^\dagger \circ r \circ \top_X = \mathrm{inc}_2^\dagger \circ i^\dagger \circ a = \mathrm{inc}_2^\dagger \circ i^\dagger \circ i \circ \mathrm{inc}_1 \circ \top_A = \mathrm{inc}_2^\dagger \circ \mathrm{inc}_1 \circ \top_A = 0$. By assumption (3'), we have that $\mathrm{inc}_2^\dagger \circ i^\dagger \circ r = 0$.
Let $s\: Y \to Z$ be such that $s \circ r = 0_{X, Z}$. It follows that $s \circ i \circ \mathrm{inc}_1 \circ \top_A = s \circ a = s \circ r \circ \top_X = 0$. By assumption (3'), we have that $s \circ i \circ \mathrm{inc}_1 = 0$. As for any dagger biproduct of two objects, we have that $\mathrm{coker}(\mathrm{inc}_1) = \mathrm{inc}_2^\dagger$, and thus, $s \circ i = t \circ \mathrm{inc}_2^\dagger$ for some morphism $t$. We conclude that $s = s \circ i \circ i^\dagger = t \circ \mathrm{inc}_2^\dagger \circ i^\dagger$.
Therefore, $ \mathrm{inc}_2^\dagger \circ i^\dagger$ is a cokernel of $r$, as claimed. In other words $i \circ \mathrm{inc}_2$ is a kernel of $r^\dagger$. The kernel $i \circ \mathrm{inc}_2$ is dagger monic, and hence we have verified assumption (2) of Theorem~\ref{V}. Furthermore, as for any dagger biproduct of two objects, we have that $\mathrm{inc}_1$ and $\mathrm{inc}_2$ are jointly epic and that $\mathrm{inc}_2^\perp = \mathrm{inc}_1$. Hence, $i \circ \mathrm{inc}_1$ and $i \circ \mathrm{inc}_2$ are jointly epic and $(i \circ \mathrm{inc}_2)^\perp = i \circ \mathrm{inc}_1$. We conclude that every dagger kernel is jointly epic with its orthogonal complement, verifying assumption (3) of Theorem~\ref{V}.
We have verified the assumptions of Theorem~\ref{V}, and we now apply it to obtain the desired conclusion.
\end{proof}
\begin{remark}
It is routine to verify that the dagger compact closed category $(\mathbf{Rel}, \times, \{\ast\}, \dagger)$ also has the property that the wide subcategory of dagger kernels has directed colimits. Indeed, the latter category is simply the category of sets and injections.
\end{remark}
\section*{Acknowledgements}
I thank Chris Heunen and Bert Lindenhovius for useful discussion.
|
{
"arxiv_id": "2302.14197",
"language": "en",
"timestamp": "2023-03-01T02:04:33",
"url": "https://arxiv.org/abs/2302.14197",
"yymm": "2302"
} | \section{Introduction}
With the increase in online shopping, the online apparel shopping market is growing rapidly. Online shopping is a convenient method for purchasing items over the Internet without physically visiting a store. However, when purchasing clothes online, the customer cannot try out clothes to check if they fit.
Han et al.~\cite{viton} first proposed the virtual try-on (VITON) method, which generates visualization images of users trying on clothes online. Allowing users to virtually try on clothes to check the fit not only improves their shopping experience and changes the manner of shopping for clothes but also reduces costs for retailers. Variants of VITON~\cite{Wang_ECCV18,FW-GAN,VTNFP,2d-tryon,Yang_CVPR20} have been developed to improve the visual quality of the synthesized images. To improve the resolution of the synthesized try-on image, Han et al.~\cite{Clothflow} predicted the optical flow maps of the clothes and the desired clothing regions.
Choi et al. proposed a high-resolution virtual try-on method called VITON-HD~\cite{viton-hd}, in which a novel clothing-agnostic person representation leveraging the pose information and the segmentation map is introduced. Furthermore, in the alignment-aware segment normalization introduced in VITON-HD, information irrelevant to the clothing texture in the misaligned regions is removed and semantic information is propagated throughout the network. Although VITON-HD can generate photorealistic fitting images, the clothing size is not considered. Even if clothes of various sizes are used, the same output is always generated, because the system deforms the clothes to fit the shape of the person's body, and the target shape of the deformed clothes is based on that of the clothes worn in the person image.
In this study, an image-based virtual fitting system that can adjust the clothing size was proposed. Because VITON-HD generates its segmentation map from the clothes the person is already wearing, the resulting fitting image does not reflect the actual size of the new clothes. In the proposed method, the clothing area of the segmentation map is rescaled according to the ratio of the actual person and clothing sizes, which are supplied as additional inputs. The clothing size is adapted to the person by measuring the shoulder-width and clothing-height distances in the person image. If only the clothing region is enlarged, the boundaries of the region, such as the collar and the overlap with an arm, may become inconsistent. Therefore, the coordinates of the segmentation map are adjusted and the boundary of the clothing region is corrected to maintain quality.
\section{Virtual Try-On System}
Given an image of a desired cloth and an image of a person, an image-based virtual fitting system generates an image of the person wearing that cloth. In this section, we briefly review image-based virtual fitting methods and explain their drawbacks.
\subsection{VITON-HD}
VITON-HD~\cite{viton,viton-hd} consists of four independent modules, namely pre-processing, segmentation generation, clothes deformation, and try-on synthesis.
In this model, pose estimation and body-part estimation of the person wearing the target clothes are used for virtual try-on image synthesis. First, the segmentation map and pose map are used to pre-process the input image into a clothing-agnostic person image and its corresponding segmentation.
Furthermore, semantic segmentation is used in VITON-HD to separate appearance and shape generation, which yields spatially consistent and natural results. Thus, a clothing-independent representation of the person can be obtained, information about the clothes worn before fitting can be removed, and the body information needed for reconstruction can be retained. Based on the skeletal estimation, parts of the body that are difficult to reconstruct (e.g., arms and hands) are removed from this representation.
The semantic segmentation generation module (SGM) generates masks for exposed body parts (i.e., composite body part masks) and warped clothing regions using semantic segmentation of body parts and clothing. The SGM has a UNet~\cite{U-net, spectral} structure consisting of a convolutional layer, a downsampling layer, and an upsampling layer. Furthermore, two multiscale discriminators~\cite{semantic-gans} are used for the conditional adversarial loss. The SGM generates semantic masks in a two-step process: the body part masks are generated first and the clothing mask is then synthesized, which makes the network completely agnostic to the original clothing shape in the person image.
In clothes deformation, the input clothes are deformed by a geometric transformation to fit the given person. A thin-plate spline (TPS) transformation~\cite{tps}, an extension of the spline curve to a two-dimensional plane, is used to deform the clothing image; its parameters warp the clothing image to fit the estimated layout.
A UNet-based network then generates the virtual fitting image using the intermediate products obtained up to this point (target layout, region of the compositing process, and warped clothing image) as the input. The try-on synthesis module explicitly determines areas that cannot be complemented by the cloth warping and inputs them to the generation module to reduce the image quality degradation resulting from misalignment.
\subsection{Problem}
Conventional image-based virtual fitting systems depend on the size of the clothes the person is originally wearing. In VITON-HD, a segmentation map is generated from the person image by using the face, arms, legs, and clothes worn by the person as features. The segmentation map is then used as an input to the geometric transformation, so the virtual try-on image is generated according to the size of the clothes worn by the person before try-on.
If the clothing region indicated by the segmentation map is simply extended, non-clothing areas are affected as the clothing area is enlarged. Moreover, this module provides no way to change the appearance to a specified clothing size.
\section{Proposed Method}
In the proposed method, the clothing size is changed to match the actual size by adding real-space size information as an input and applying a correction process to the VITON-HD segmentation.
\subsection{Overview}
To change the clothes size according to the person, the actual sizes of the person and clothes were added as inputs to the correction process. Size information is defined as the shoulder width in the horizontal direction and the height of clothes in the vertical direction.
Virtual-space information is calculated based on the coordinates of the key points detected using OpenPose~\cite{openpose,openpose2}, namely the shoulder-width and height distances measured in the person image. Figure~\ref{fig:keypoint} displays the key points used.
Only the clothing region of the estimated segmentation map is resized. Its target size is obtained by scaling the person size measured in the image by the ratio of the actual clothing size to the actual person size, so that the clothing in the image takes the size it would have on that person.
Changing the clothing size causes the coordinates of the segmentation map to differ from those produced by VITON-HD, which leads to inconsistencies at the boundaries. Therefore, a correction technique is applied to the boundary of the clothing region to maintain the same quality as that of VITON-HD.
\subsection{Ratio to Clothes Size}
To reflect the clothing size in the person image, the actual clothing and person size information are first added as inputs. The size information consists of the shoulder width in the horizontal direction and the height of the clothes in the vertical direction. Using the key points detected by OpenPose, the clothing size can then be mapped onto the person image according to the input size information.
\begin{figure}
\centering
\begin{minipage}{3cm}
\centering
\includegraphics[scale=0.15]{keypoints.png}
\caption{Some key points detected by OpenPose.}
\label{fig:keypoint}
\end{minipage}
\hspace*{5mm}
\begin{minipage}{4cm}
\centering
\includegraphics[scale=0.25]{irregular.png}
\caption{Irregularities at the collar region and at the overlap between the arm and the clothing area.}
\label{fig:irregular}
\end{minipage}
\end{figure}
\subsubsection{Adjustment of the clothing size}
To reflect the clothing size in the person image, the clothing and person size information are first added as inputs. Size information is calculated from the shoulder width and height of the clothes in the horizontal and vertical directions, respectively.
The vertical size of the clothing in the person image is obtained by detecting 25 key points using OpenPose and scaling the coordinate distance between key points 1 and 8 by the ratio of the actual clothing height to the height of the person's clothing region. Suppose that the height of the person's clothing region is $H_p$, the height of the clothing is $H_c$, and the coordinate of key point $t$ is $(x_t, y_t)$. The clothing height $\tilde{H}$ in the virtual space can be obtained using the following equation:
\begin{equation}
\tilde{H} = \frac{H_c}{H_p} \times \delta_{1,8},
\end{equation}
where the Euclidean distance $\delta_{i,j}$ between key points $i$ and $j$ is defined as follows:
\begin{equation}
\delta_{i,j} = \sqrt{(x_i-x_j)^{2} + (y_i-y_j)^{2}}.
\end{equation}
Similarly, the horizontal size $\tilde{W}$ of the clothing in a virtual space can be calculated using the following equation:
\begin{equation}
\tilde{W} = \frac{W_c}{W_p} \times \delta_{2,5},
\end{equation}
where $W_c$ and $W_p$ are the shoulder widths of the clothing and person, respectively.
The lateral extent of the clothing is calculated from the distance between key points 2 and 5, the distance between key points 2 and 3, and the distance between key points 5 and 6 in Fig.~\ref{fig:keypoint}. The sum of the shoulder width and the sleeve lengths, denoted $\alpha$, in the person image can then be calculated using the following equation:
\begin{equation}
\alpha = \delta_{2,5} + \delta_{2,3} + \delta_{5,6}.
\end{equation}
From the segmentation map obtained by the layout estimation, only the clothes region is extracted by color extraction. Using $\tilde{H}$, $\tilde{W}$, and $\alpha$, the clothing area is enlarged in the vertical and horizontal directions to reflect the size of the new clothes.
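For illustration, a minimal NumPy sketch of this computation is given below; the key-point indices follow Fig.~\ref{fig:keypoint} (the OpenPose BODY\_25 layout), and the function and variable names are only illustrative, not part of the actual implementation.
\begin{verbatim}
import numpy as np

def delta(kp, i, j):
    # Euclidean distance between key points i and j in the person image
    return float(np.linalg.norm(kp[i] - kp[j]))

def clothing_size_in_image(kp, H_c, H_p, W_c, W_p):
    # kp : (25, 2) array of OpenPose BODY_25 key-point coordinates
    # H_c, H_p : real clothing height and real height of the person's
    #            clothing region; W_c, W_p : real shoulder widths
    H_tilde = (H_c / H_p) * delta(kp, 1, 8)   # vertical size in the image
    W_tilde = (W_c / W_p) * delta(kp, 2, 5)   # horizontal size in the image
    # shoulder width plus both upper-arm segments in the image
    alpha = delta(kp, 2, 5) + delta(kp, 2, 3) + delta(kp, 5, 6)
    return H_tilde, W_tilde, alpha
\end{verbatim}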
\subsection{Adjustment of Details}
As displayed in Fig.~\ref{fig:irregular}, when the clothes region is enlarged, some irregularities may occur. To suppress such side effects, the following two post-processing operations are performed in the proposed method.
\subsubsection{Collar Region}
When the segmentation map was merged by enlarging the clothes region, a gap appeared around the collar because the size of the segmentation map created by VITON-HD differed from that of the original shoulder coordinates when the clothes region was enlarged, and the other segmentation maps were combined.
In the proposed method, an erosion operator~\cite{book} was applied in morphological image processing to the collar area of the clothes region to generate a natural fitting image. Because clothes shrink if the contraction is applied to the entire clothes region, limiting the contraction to the collar region is necessary. Therefore, we consider a rectangle whose center is key point 1 in Fig.~\ref{fig:collar}, whose horizontal direction $s_x$ is proportional to the distance between key points 2 and 5, and whose vertical direction $s_y$ is proportional to the cloth’s size. In this study, we use $s_x=3\tilde{W}/4$ and $s_y=3\tilde{H}/4$.
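A minimal OpenCV sketch of this collar correction is shown below; the mask and key-point variables are illustrative, and the erosion kernel size is an assumed default rather than a value specified in this paper.
\begin{verbatim}
import cv2
import numpy as np

def erode_collar(cloth_mask, kp, W_tilde, H_tilde, ksize=3, iters=1):
    # cloth_mask : binary (H, W) uint8 mask of the enlarged clothes region
    # kp         : (25, 2) OpenPose key points; index 1 is the neck
    s_x, s_y = int(3 * W_tilde / 4), int(3 * H_tilde / 4)
    cx, cy = int(kp[1][0]), int(kp[1][1])
    x0, x1 = max(cx - s_x // 2, 0), min(cx + s_x // 2, cloth_mask.shape[1])
    y0, y1 = max(cy - s_y // 2, 0), min(cy + s_y // 2, cloth_mask.shape[0])
    kernel = np.ones((ksize, ksize), np.uint8)
    out = cloth_mask.copy()
    # erode only inside the collar rectangle so the rest keeps its size
    out[y0:y1, x0:x1] = cv2.erode(cloth_mask[y0:y1, x0:x1], kernel,
                                  iterations=iters)
    return out
\end{verbatim}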
\subsubsection{Overlapping Arms in Clothes Area}
If an arm overlaps the clothing region, as displayed in the red frame in Fig.~\ref{fig:irregular}, the clothes and arm regions become misaligned, because the clothes region is divided by the arm and only the coordinates of the clothes region change when it is enlarged, whereas those of the arm region do not. Therefore, the coordinates of the shortest distance between the clothes and arm regions are obtained before and after enlargement, and the coordinates of the clothes region are corrected so that the shortest distance between the two regions is maintained.
Before enlargement, the distances between contour points of the two separated clothes regions are computed over all point pairs, and the minimum distance $D$ together with its end-point coordinates is determined, as displayed in Fig.~\ref{fig:contour}. After the two clothes regions are enlarged about their center coordinates, the pair of points with the shortest distance is computed again in the same manner.
\begin{figure}
\centering
\begin{minipage}{3cm}
\centering
\includegraphics[scale=0.45]{collar.png}
\caption{Correction of the collar region.}
\label{fig:collar}
\end{minipage}
\hspace*{5mm}
\begin{minipage}{4.5cm}
\centering
\includegraphics[scale=0.24]{contour.png}
\caption{Shortest distance between two separated regions.}
\label{fig:contour}
\end{minipage}
\end{figure}
The difference between the closest-point coordinates before enlargement and those after enlargement is determined, and the center coordinates of each region are moved by this difference. Let $(x_C, y_C)$ and $(x^\prime_C, y^\prime_C)$ denote the center coordinates of a region before and after enlargement, respectively. Using the horizontal difference $\Delta_x=x^\prime_C-x_C$ and the vertical difference $\Delta_y=y^\prime_C-y_C$, the smaller region is shifted toward the larger region. By moving the center coordinates of the two clothing regions in this way, the clothes region can be expanded while maintaining the distance to the arm region.
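The sketch below illustrates one way to implement this correction with OpenCV and SciPy (OpenCV version 4 and contour-based distances are assumed); the function names and the use of an affine translation are illustrative choices rather than details taken from this paper.
\begin{verbatim}
import cv2
import numpy as np
from scipy.spatial.distance import cdist

def closest_pair(mask_a, mask_b):
    # closest pair of contour points (x, y) between two binary regions
    ca, _ = cv2.findContours(mask_a, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    cb, _ = cv2.findContours(mask_b, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    pa = np.vstack([c.reshape(-1, 2) for c in ca])
    pb = np.vstack([c.reshape(-1, 2) for c in cb])
    d = cdist(pa, pb)
    ia, ib = np.unravel_index(np.argmin(d), d.shape)
    return pa[ia], pb[ib], d[ia, ib]

def shift_divided_region(mask, p_before, q_before, p_after, q_after):
    # translate the smaller clothes region so that the closest-point gap
    # (p on the small region, q on the other region) measured before the
    # enlargement is restored after the enlargement
    offset = (q_after - q_before) - (p_after - p_before)
    dx, dy = int(round(offset[0])), int(round(offset[1]))
    h, w = mask.shape
    M = np.float32([[1, 0, dx], [0, 1, dy]])
    return cv2.warpAffine(mask, M, (w, h))
\end{verbatim}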
\section{Evaluation}
To evaluate whether the proposed method can perform virtual fitting corresponding to the actual clothing size, we captured person images and measured the sizes of the persons and the clothes.
The segmentation maps generated by VITON-HD~\cite{viton-hd} and by the proposed method are displayed in Fig.~\ref{fig:segmentation}. The segmentation map of the proposed method is enlarged based on the shoulder width and height of the person and the clothing. In the collar area, the clothes region is enlarged while maintaining the same height as the person's shoulder. Even when the clothes area overlaps with the arm area, the clothes region is enlarged without shifting the relative positions of the arm and clothes areas.
\begin{figure}
\centering
\includegraphics[scale=0.6]{segmentation.png}
\caption{Comparison of the generated segmentation map.}
\label{fig:segmentation}
\end{figure}
The fitting images generated by VITON-HD using the segmentation map created by the proposed method are presented in Fig.~\ref{fig:example}. The body and clothing length information was set to 66 and 73 cm, respectively, so the clothing size differs considerably from the body size. In VITON-HD, the fitting image is generated according to the size of the clothing originally worn, and the size of the new clothing is ignored, whereas the proposed method changes the clothing area in the fitting image according to the actual clothing size. Even when an arm overlapped the clothing area, the proposed method generated a fitting image of the appropriate size without separating the clothes from the arm.
\begin{figure}
\centering
\includegraphics[scale=0.61]{example.png}
\caption{Comparison of generated images.}
\label{fig:example}
\end{figure}
To confirm the effectiveness of the proposed method, a subjective evaluation experiment was conducted. Four testers (A, B, C, and D) tried on the clothes; their shoulder widths and heights, together with those of the clothes, are listed in Table~\ref{tab:result}. Ten students in our faculty responded to a questionnaire comparing the generated images, and the number of students who preferred the proposed method is also summarized in Table~\ref{tab:result}.
To maintain a fair evaluation, the order of the VITON-HD and proposed-method images was randomly changed, and which image came from the proposed method was not revealed. The results of the questionnaire show that the proposed method produces more natural and appropriately resized images than VITON-HD does.
\begin{table}[t]
\centering
\caption{Results of the questionnaire from ten users, where the number of users who chose the proposed method is displayed.}
\label{tab:result}
\begin{tabular}{|c|c||c|c|c|c|c|} \hline
\multicolumn{2}{|c||}{} & \multicolumn{4}{|c|}{tester} & \\\cline{3-6}
\multicolumn{2}{|c||}{} & ~ A ~ & ~ B ~ & ~ C ~ & ~ D ~ & clothes \\ \hline \hline
size & height & 52 & 45 & 46 & 38 & 62 \\ \cline{2-7}
(cm) & width & 47 & 41 & 43 & 32 & 51 \\ \hline\hline
\multicolumn{2}{|c||}{Results} & 10 & 8 & 7 & 10 & --- \\\hline
\end{tabular}
\end{table}
\section{Conclusions}
In this study, an image-based virtual fitting system that reflects the actual clothing size was proposed, in which OpenPose key points are used to resize the clothing region of the segmentation map. The segmentation map generated by VITON-HD can be resized accurately by estimating the skeletal structure using OpenPose and adding the clothing size, body length, and shoulder width as inputs.
In the future, appropriate size changes not only for T-shirts but also for various other types of clothing and the generation of natural-looking virtual fitting images according to the shooting environment will be studied.
\bibliographystyle{IEEEbib}
|
{
"arxiv_id": "2302.14130",
"language": "en",
"timestamp": "2023-03-01T02:01:55",
"url": "https://arxiv.org/abs/2302.14130",
"yymm": "2302"
} | \section{Introduction}
In the past decade, convolutional neural networks (CNNs) have been widely deployed in many commercial applications. Various architectures that go beyond convolutional methods have also been developed. However, a core challenge in all of them is that they are accompanied by high computational complexity and large storage requirements \cite{gou2021knowledge, cho2019efficacy}. For this reason, the application of deep networks is still limited to environments that have massive computational support. In emerging applications, there is growing demand for deploying deep networks on edge, mobile, and IoT devices \cite{li2018learning, plastiras2018edge, jang2020experimental, wu2016quantized}. To move beyond these limitations, many studies have developed lightweight neural models that preserve performance while `lightening' the network scale \cite{cho2019efficacy, li2018learning, plastiras2018edge, jang2020experimental, wu2016quantized, han2015deep, hinton2015distilling}.
Knowledge distillation (KD) is one of the promising solutions that can reduce the network size and yield an efficient network model \cite{gou2021knowledge, cho2019efficacy, yim2017gift} for various fields, including wearable sensor data \cite{jeon2021role}, sound \cite{tripathi2022data, li2021mutual}, and image classification \cite{wen2021preparing, chen2021knowledge}. The concept of knowledge distillation is that the framework consists of two networks, a larger one called the teacher and a smaller one called the student \cite{hinton2015distilling}. While the student is being trained, the teacher transfers its knowledge to the student using the logits from the final layer, so that the student can approach the teacher model's classification performance.
Recent insights have shown that features learnt in deep networks often exhibit an angular distribution, usually leveraged via a hyperspherical embedding \cite{choi2020amc, liu2016large, liu2017sphereface}. Such embeddings lead to improved discriminative power and feature separability. In terms of loss functions, these can be implemented by using angular features that correspond to the geodesic distance on the hypersphere and incorporating a preset constant margin. In this work, we show that leveraging such spherical embeddings also improves knowledge distillation.
Firstly, to get more activated features, spatial attention maps are computed and decoupled into two parts: positive and negative maps.
Secondly, we construct a new form of knowledge by projecting the features onto the hypersphere to reflect the angular distance between them. Then, we introduce an angular margin to the positive feature to get a more attentive feature representation. Finally, during the distillation, the student tries to mimic the more separated decision regions of the teacher to improve the classification performance. Therefore, the proposed method effectively regularizes the feature representation of the student network to learn informative knowledge of the teacher network.
The contributions of this paper are:
\begin{itemize} [topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex]
\item We propose an angular margin based distillation loss (named as AMD) which performs knowledge distillation by transferring the angular distribution of attentive features from the teacher network to the student network.
\item We experimentally show that the proposed method results in significant improvements with different combinations of networks and outperforms other attention-based methods across four datasets of different complexities, corroborating that a higher-capacity teacher model does not necessarily yield a better student.
\item We rigorously validate the advantages of the proposed distillation method with various aspects using visualization of activation maps, classification accuracy, and reliability diagrams.
\end{itemize}
The rest of the paper is organized as follows. In section \ref{sec:related_work} and \ref{sec:background}, we describe related work and background, respectively. In section \ref{sec:proposed_method}, we provide an overview of the proposed method. In section \ref{sec:experimental_results}, we describe our experimental results and analysis. In section \ref{sec:conclusion}, we discuss our findings and conclusions.
\section{Related Work} \label{sec:related_work}
\textbf{Knowledge distillation.} Knowledge distillation, a transfer learning method, trains a smaller model by transferring knowledge from a larger model. KD was first introduced by Buciluǎ \emph{et al.} \cite{bucilua2006model} and further explored by Hinton \emph{et al.} \cite{hinton2015distilling}. The main concept of KD is using soft labels produced by a trained teacher network; mimicking soft probabilities helps the student acquire the teacher's knowledge and improves over using hard (training) labels alone. Cho \emph{et al.} \cite{cho2019efficacy} explore which teacher-student combinations yield the best performance and show that using a teacher whose training is stopped early improves the efficacy of KD.
KD can be categorized into two approaches that use the outputs of the teacher \cite{gou2021knowledge}. One is response-based KD, which uses the posterior probabilities with a softmax loss. The other is feature-based KD, which uses intermediate features with normalization. Feature-based methods can be combined with the response-based method to complement traditional KD \cite{gou2021knowledge}.
Recently, feature-based distillation methods for KD have been studied to learn richer information from the teacher for better-mimicking and performance improvement \cite{gou2021knowledge, wen2021preparing, wang2021knowledge}. Romero \emph{et al.} \cite{romero2014fitnets} firstly introduced the use of intermediate representations in FitNets using feature-based distillation. This method enables the student to mimic the teacher’s feature maps in intermediate layers.
\textbf{Attention transfer.} \label{sec:Attention_Transfer}
To capture the knowledge of a teacher network more effectively, attention transfer \cite{gou2021knowledge, zagoruyko2016paying, wang2019pay, ji2021show} has been utilized, which is one of the popular methods for feature-based distillation. Zagoruyko \emph{et al.} \cite{zagoruyko2016paying} suggest activation-based attention transfer (AT), which uses a sum-of-squares attention mapping function that computes statistics across the channel dimension. Even when the teacher and student have different numbers of channels, knowledge can be transferred, because the attention mapping function collapses the channel dimension to one. The activation-based spatial attention maps are used as the source of knowledge for distillation from intermediate layers, where the maps are created as $f^{d}_{sum}(A) = \sum_{j=1}^{c} |A_j|^{d}$, where $f$ is the computed attention map, $A$ is the output of a layer, $c$ is the number of channels of the output, $j$ indexes the channels, and $d > 1$. A higher value of $d$ corresponds to a heavier weight on the most discriminative parts defined by the activation level.
AT, a feature-based distillation method, is more effective when used together with traditional (response-based) KD \cite{zagoruyko2016paying}. The method encourages the student to generate normalized maps similar to those of the teacher. However, these studies have focused only on mimicking the teacher's activation from a layer \cite{wang2021knowledge}, without considering the teacher's dual ability to accurately distinguish between positive (relevant to the target object) and negative (irrelevant) features. The teacher can not only generate and transfer its knowledge directly as an activation map, but can also transfer its ability to separate positive from negative features. We refer to this as a dual ability, which we exploit for improved distillation.
The emphasized positive feature regions that encapsulate regions of the target object are crucial to predicting the correct class. In general, a higher-capacity model shows better performance, producing those regions with more attention and precision compared to the smaller network. This suggests that the transfer of distinct regions of the positive and negative pairs from teacher to student could significantly improve performance. This motivates us to focus on utilizing positive and negative pairs for extracting more attentive features, implying better separability, for distillation.
\textbf{Spherical feature embeddings.}
The majority of existing methods \cite{sun2014deep, wen2016discriminative} rely on the Euclidean distance for feature distinction. These approaches do not address the problem that classification under the open-set protocol yields meaningful results only when the maximal intra-class distance is successfully narrowed. To solve this problem, an angular-softmax (A-softmax) function was proposed, which distinguishes features by increasing the angular margins between them \cite{liu2017sphereface}. According to its geometric interpretation, using the A-softmax function is equivalent to projecting features onto a hypersphere manifold, which intrinsically matches the prior assumption that features lie on a manifold. Applying the angular margin penalty corresponds to a geodesic distance margin penalty on the hypersphere \cite{liu2017sphereface}. The A-softmax function encourages learned features to be discriminative on the hypersphere manifold and, for this reason, shows superior performance to the original softmax function on several classification problems \cite{liu2017sphereface}. On the other hand, Choi \emph{et al.} \cite{choi2020amc} introduced the angular margin based contrastive loss (AMC-loss) as an auxiliary loss, employing a discriminative angular distance metric that corresponds to the geodesic distance on a hypersphere manifold. AMC-loss increases inter-class separability and intra-class compactness, improving classification performance. The method can be combined with other deep techniques, because it easily encodes the angular distributions obtained from many types of deep feature learners \cite{choi2020amc}.
The previous methods work with logits only or with an auxiliary loss, such as a contrastive loss. We instead focus on features modeled as coming from angular distributions and on their separability. These observations give us the insight that high-quality features for knowledge distillation can be obtained by projecting the feature pairs onto a hypersphere. For better distillation, we derive a new type of implicit knowledge from positive and negative pairs in intermediate layers. The details are explained in section \ref{sec:proposed_method}.
\section{Background} \label{sec:background}
\subsection{Traditional knowledge distillation}
In standard knowledge distillation \cite{hinton2015distilling}, the loss for training a student is:
\begin{equation}
\mathcal{L} = (1- \lambda)\mathcal{L_C} + \lambda \mathcal{L_K},
\end{equation}
where $\mathcal{L_C}$ denotes the standard cross-entropy loss, $\mathcal{L_K}$ is the KD loss, and $\lambda$ is a hyperparameter with $0 < \lambda < 1$.
The error between the output of the softmax layer of a student network and the ground-truth label is penalized by the cross-entropy loss:
\begin{equation}
\mathcal{L_{C}} = \mathcal{H}(softmax(a_{S}), y),
\end{equation}
where $\mathcal{H(\cdot)}$ is a cross entropy loss function, $a_S$ is the logits of a student (inputs to the final softmax), and $y$ is a ground truth label.
The outputs of student and teacher are matched by KL-divergence loss:
\begin{equation}\label{eq3}
\mathcal{L_{K}} = \tau^{2}KL(z_{T}, z_{S}),
\end{equation}
where $z_T = softmax(a_T/\tau)$ is the softened output of the teacher network, $z_S = softmax(a_S/\tau)$ is the softened output of the student, and $\tau > 1$ is a temperature hyperparameter.
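For reference, a minimal PyTorch sketch of this combined objective is given below; the default values $\lambda = 0.9$ and $\tau = 4$ correspond to the CIFAR-10 setting used later, and the function name is illustrative.
\begin{verbatim}
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, lam=0.9, tau=4.0):
    # (1 - lam) * cross-entropy + lam * tau^2 * KL between softened outputs
    ce = F.cross_entropy(student_logits, labels)
    log_p_s = F.log_softmax(student_logits / tau, dim=1)  # softened student
    p_t = F.softmax(teacher_logits / tau, dim=1)          # softened teacher
    kl = F.kl_div(log_p_s, p_t, reduction="batchmean") * (tau ** 2)
    return (1.0 - lam) * ce + lam * kl
\end{verbatim}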
Feature distillation methods using intermediate layers can be combined with the standard knowledge distillation that uses output logits. When they are used together, the student network is, in general, guided towards producing patterns more similar to the teacher's and achieving better classification performance. Thus, we also utilize the standard knowledge distillation together with our proposed method.
\subsection{Attention map}
Denote an output as $A \in \mathcal{\mathbb{R}}^{c\times{h}\times{w}}$, where $c$ is the number of output channels and $h$ and $w$ are the height and width of the output. The attention map for the teacher is given as follows:
\begin{align}
f^{l}_{T} = \sum_{j=1}^{c} |A^{l}_{T,j}|^{2}.
\end{align}
Here, $A_{T}$ is the output of a layer of the teacher model, $l$ is a specific layer, $c$ is the number of channels, $j$ indexes the output channels, and $T$ denotes the teacher network. The attention map for the student is $f^{l'}_{S} = \sum_{j'=1}^{c'} |A^{l'}_{S,j'}|^{2}$, where $A_{S}^{l'}$ is the output of a layer of the student, $l'$ is the layer corresponding to $l$, $c'$ is the number of channels of the output, $j'$ indexes the output channels, and $S$ denotes the student network. If the student and teacher use the same depth for transfer, $l'$ can be the layer at the same depth as $l$; if not, $l'$ can be taken at the end of the block corresponding to that of the teacher.
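A short PyTorch sketch of this attention mapping, together with the flattening and $\ell_2$ normalization used for transfer, is given below; the function name is illustrative.
\begin{verbatim}
import torch

def attention_map(A, eps=1e-8):
    # A: (B, C, H, W) activation; channel-wise sum of squares gives a
    # spatial map, which is flattened and l2-normalized per sample so
    # that teacher and student maps are directly comparable
    f = A.pow(2).sum(dim=1).view(A.size(0), -1)
    return f / (f.norm(dim=1, keepdim=True) + eps)
\end{verbatim}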
From the attention map, we obtain positive and negative maps and we project features onto hypersphere to calculate angular distance for distillation. The details are explained in section \ref{sec:proposed_method}.
\subsection{Spherical feature with angular margin}
In order to promote the learned features to have an angular distribution, \cite{liu2017sphereface, wang2018additive} proposed to introduce the angular distance between the weights $W$ and the features $x$: $W^{T}x = \norm{W}\norm{x} cos(\theta)$, where the bias is set to $0$ for simplicity and $\theta$ is the angle between $W$ and $x$. Normalizing the features and weights makes the outputs depend only on the angle between them; further, $\norm{x}$ is replaced by a constant $s$ such that the features are distributed on a hypersphere with radius $s$. To enhance the discrimination power, an angular margin $m$ is applied to the angle of the target. Finally, the output logits are used to formulate the probability with angular margin $m$ as below \cite{liu2017sphereface, wang2018additive}:
\begin{equation}\label{eq4}
G^{i} = log\left (\frac
{e^{s\cdot(cos(m\cdot\theta_{y_{i}}))}}
{e^{s\cdot(cos(m\cdot\theta_{y_{i}}))} + \sum^{J}_{j=1,j\neq y_{i}}
e^{s\cdot(cos(\theta_{j}))}} \right ),
\end{equation}
where $y_{i}$ is the label, $\theta_{y_{i}}$ is the angle for the target class of sample $i$, $\theta_{j}$ is the angle obtained from the $j$-th element of the output logits, $s$ is a constant, and $J$ is the number of classes. Liu \emph{et al.} \cite{liu2017sphereface} and Wang \emph{et al.} \cite{wang2018additive} utilized the output logits to obtain more discriminative features for classification on a hypersphere manifold, which performs better than using the original softmax function. We use Equation \eqref{eq4} to create the new type of feature-knowledge in the intermediate layers instead of the output logits in the final classifier, whereby more attentive feature maps are transferred to the student model.
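A compact PyTorch sketch of this margin-penalized log-probability is shown below; it assumes the per-class angles $\theta_j$ have already been computed from normalized weights and features, and the function name and interface are illustrative rather than taken from \cite{liu2017sphereface, wang2018additive}.
\begin{verbatim}
import torch
import torch.nn.functional as F

def a_softmax_log_prob(angles, target, s=64.0, m=1.35):
    # angles : (B, J) angles theta_j between the feature and each class weight
    # target : (B,) ground-truth class indices y_i
    idx = target.unsqueeze(1)
    cos_all = torch.cos(angles)                      # cos(theta_j)
    cos_tgt = torch.cos(m * angles.gather(1, idx))   # cos(m * theta_{y_i})
    logits = torch.scatter(s * cos_all, 1, idx, s * cos_tgt)
    return F.log_softmax(logits, dim=1).gather(1, idx)  # G^i for each sample
\end{verbatim}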
\section{Proposed Method} \label{sec:proposed_method}
\begin{figure*}[ht!]
\includegraphics[width=0.975\textwidth]{figure/AMloss_proposed2.png}
\centering
\caption{The existing attention map-based method (AT \cite{zagoruyko2016paying}) suggested the direct use of the feature map in the intermediate layer as shown in the green box. Instead, we first decouple the feature map into the positive ($Q_p$) and negative ($Q_n$) features and map them on the hypersphere with angular margin, $m$. Then, we convert them into the probability forms and compute loss based on AM loss function. The details are explained in section \ref{proposed_AMD}.}
\label{figure:proposedAMloss}
\end{figure*}
The proposed method utilizes features from intermediate layers of deep networks to extract angular-margin based knowledge, as illustrated in Figure \ref{figure:proposedAMloss}. The resulting angular margin loss is computed at various depths of the student and teacher, as illustrated in Figure \ref{figure:proposedKDmethod}. To obtain the angular distance between positive and negative features, we first generate attention maps from the outputs of intermediate layers. We then decouple the maps into positive and negative features. The features are projected onto a hypersphere to extract angularly distributed features. For effective distillation, more attentive features are obtained by introducing an angular margin to the positive feature, and the probability forms used for distillation are computed. Finally, the knowledge of the teacher, which discriminates better between positive and negative features, is transferred to the student.
The details for obtaining the positive and negative maps and the angular margin based knowledge are explained in the following section.
\begin{figure}[ht!]
\includegraphics[width=0.475\textwidth]{figure/method1.png}
\centering
\caption{Schematics of teacher-student knowledge transfer with the proposed method.}
\label{figure:proposedKDmethod}
\end{figure}
\subsection{Generating attention maps}
To transfer activated features from the teacher to the student, the outputs of intermediate layers are used.
To match the dimension sizes between the teacher and student models, we create normalized attention maps \cite{zagoruyko2016paying}, which also helps generate maps that discriminate between positive and negative features. This removes the need for any additional training procedure to match the channel dimensions of the teacher and student. We use the power value $d = 2$ for generating the attention maps, which gave the best results in previous work \cite{zagoruyko2016paying}.
\subsection{Angular margin computation} \label{proposed_AMD}
Although the activation map-based distillation provides additional context information for student model learning, there is still room to craft an attentive activation map that can distill a superior student model in KD. To further refine the original attention map, we propose an angular margin-based distillation (AMD) that encodes new knowledge using the angular distance between positive (relevant to the target object) and negative features (irrelevant) on the hypersphere.
We denote the normalized positive map as $Q_{p}=f/\norm{f}$, where $f$ is the output map extracted from an intermediate layer of the network. Further, we obtain the normalized negative map as $Q_{n}=1-Q_{p}$.
Then, to make the positive map more attentive, we insert an angular margin $m$ into the positive features. In this way, a new feature-knowledge encoding attentive feature can be defined as follows:
\begin{equation}
\label{eq:g}
G^{l}(Q_{p}, Q_{n}) =
log\left (\frac
{e^{s\cdot(cos(m\cdot\theta_{p_{l}}))}}
{e^{s\cdot(cos(m\cdot\theta_{p_{l}}))} +
e^{s\cdot(cos(\theta_{n_{l}}))}} \right ),
\end{equation}
where $\theta_{p_{l}}=cos^{-1}(Q_p)$ and $\theta_{n_{l}}=cos^{-1}(Q_n)$ for the $l$-th layer in the network, and $m$ is a scalar angular margin. $G^{l}$ $\in \mathcal{\mathbb{R}}^{1\times{h}\times{w}}$ reflects the angular distance between positive and negative features in the $l$-th layer. For transferring knowledge, we aim to make the student's $G^{l}(Q_{Sp}, Q_{Sn})$ approximate the teacher's $G^{l}(Q_{Tp}, Q_{Tn})$ by minimizing the angular distance between the feature maps.
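A minimal PyTorch sketch of this computation is given below; it assumes the attention map has already been normalized as described above, uses $s = 64$ and $m = 1.35$ as in our experiments, and evaluates the two-term softmax in a log-sigmoid form for numerical stability (an implementation choice that is mathematically equivalent to Equation \eqref{eq:g}).
\begin{verbatim}
import torch
import torch.nn.functional as F

def angular_margin_knowledge(q_p, s=64.0, m=1.35, eps=1e-7):
    # q_p : normalized positive map Q_p with values in [0, 1]; Q_n = 1 - Q_p
    q_p = q_p.clamp(eps, 1.0 - eps)
    theta_p = torch.acos(q_p)          # angle of the positive feature
    theta_n = torch.acos(1.0 - q_p)    # angle of the negative feature
    a = s * torch.cos(m * theta_p)     # margin-penalized positive term
    b = s * torch.cos(theta_n)
    # log( e^a / (e^a + e^b) ) = logsigmoid(a - b)
    return F.logsigmoid(a - b)
\end{verbatim}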
\subsection{Angular margin based distillation loss}
With redesigned knowledge as above, we finally define the angular margin based distillation loss that accounts for the knowledge gap between the teacher and student activations as:
\begin{equation} \label{eq:amd}
\begin{split}
&\mathcal{L}_{AM}(Q_{Tp},Q_{Tn}, Q_{Sp},Q_{Sn}) = \frac{1}{3|L|}
\sum_{(l, l^\prime) \in L} \\
&\left( \underbracket{\bignorm{ \hat{G}^{l}(Q_{Tp}, Q_{Tn}) - \hat{G}^{l'}(Q_{Sp}, Q_{Sn})}^{2}_{F}}_\text{\normalsize\clap{\textbf{A}}} + \right.\\
&\left. \underbracket{\bignorm{ \hat{Q}^{l}_{Tp} - \hat{Q}^{l'}_{Sp} }^{2}_{F}}_\text{\normalsize\clap{\textbf{P}}}
+ \underbracket{\bignorm{ \hat{Q}^{l}_{Tn} - \hat{Q}^{l'}_{Sn} }^{2}_{F}}_\text{\normalsize\clap{\textbf{N}}}
\right).
\end{split}
\end{equation}
Here, $\hat{G}$ denotes the normalized output of the function $G$, and $\hat{Q}$ is a normalized map. $L$ collects the layer pairs ($l$ and $l^\prime$), and $\norm{\cdot}_F$ is the Frobenius norm \cite{tung2019similarity}.
We will verify the performance of each component (A, P, and N) in section \ref{5_2}.
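The sketch below shows how the three terms of Equation \eqref{eq:amd} can be computed in PyTorch for a single teacher/student layer pair; the maps are assumed to be flattened per sample, the hat normalization is applied inside the function, and averaging over the batch and over the layer pairs is left to the caller (an implementation choice).
\begin{verbatim}
import torch

def am_loss_single_pair(G_t, G_s, Qp_t, Qp_s, Qn_t, Qn_s, eps=1e-8):
    # each argument: (B, H*W) map for one teacher/student layer pair
    def hat(x):
        return x / (x.norm(dim=1, keepdim=True) + eps)
    a = (hat(G_t) - hat(G_s)).pow(2).sum(dim=1)    # term A
    p = (hat(Qp_t) - hat(Qp_s)).pow(2).sum(dim=1)  # term P
    n = (hat(Qn_t) - hat(Qn_s)).pow(2).sum(dim=1)  # term N
    return (a + p + n).mean() / 3.0
\end{verbatim}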
The final loss ($\mathcal{L}_{AMD}$) of our proposed method combines all the distillation losses, including the conventional logit distillation (Equation \ref{eq3}). Thus, our overall learning objective can be written as:
\begin{equation}
\mathcal{L}_{AMD} = \lambda_1\mathcal{L_{C}} + \lambda_2 \mathcal{L_{K}} + \gamma \mathcal{L_{A}},
\end{equation}
where $\mathcal{L_{C}}$ is the cross-entropy loss, $\mathcal{L_{K}}$ is the knowledge distillation loss, $\mathcal{L_{A}}$ denotes the angular margin based loss from $\mathcal{L}_{AM}$, and $\lambda_1$, $\lambda_2$, and $\gamma$ are hyperparameters that control the balance between the different losses.
\textbf{Global and local feature distillation.}
So far, we have considered only the global feature (i.e., the map with its dimension and size preserved). However, the global feature alone sometimes fails to transfer informative knowledge and rich spatial information across the different contexts of an input. Therefore, we also suggest utilizing local features during distillation. Specifically, the global feature is the original feature without any division of the map. Local features are obtained by dividing the global feature: we split the global feature map from each layer in half along both the width and the height, creating four ($2 \times 2$) local feature maps. That is, one local map has size $h/2\times w/2$, where $h$ and $w$ are the height and width of the global map. As before, local features encoding the attentive angle can be extracted for both the teacher and the student. The losses considering global and local features for our method are then:
\begin{equation}
\begin{split}
\mathcal{L_{A_\text{global}}}=\mathcal{L}_{AM}(Q_{T},Q_{S}), \hspace{0.2cm} \\
\mathcal{L_{A_\text{local}}}= \frac{1}{K} \sum^{K}_{k=1} \mathcal{L}_{AM}(Q^{k}_{T},Q^{k}_{S}),
\end{split}
\end{equation}
where $Q_{T}$ and $Q_{S}$ are the global features of the teacher and student used for distillation, and $Q^{k}_{T}$ and $Q^{k}_{S}$ are the $k$-th local features of the teacher and student, respectively, out of the $K$ local maps obtained from a map; $K = 4$. When $\mathcal{L_{A_\text{global}}}$ and $\mathcal{L_{A_\text{local}}}$ are used together, we apply weights of 0.8 to the global and 0.2 to the local features to balance learning.
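A short sketch of this global/local weighting is given below; \texttt{am\_map\_loss} stands for a hypothetical function that computes $\mathcal{L}_{AM}$ for one pair of (unflattened) teacher and student maps, and the $2\times2$ split follows the description above.
\begin{verbatim}
import torch

def split_local(q):
    # q: (B, H, W) global map -> list of four (B, H/2, W/2) local maps
    B, H, W = q.shape
    h, w = H // 2, W // 2
    return [q[:, i*h:(i+1)*h, j*w:(j+1)*w]
            for i in range(2) for j in range(2)]

def global_local_loss(am_map_loss, q_t, q_s, w_g=0.8, w_l=0.2):
    # weighted combination of the global loss and the averaged local losses
    g = am_map_loss(q_t, q_s)
    locals_ = [am_map_loss(a, b)
               for a, b in zip(split_local(q_t), split_local(q_s))]
    return w_g * g + w_l * sum(locals_) / len(locals_)
\end{verbatim}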
\section{Experiments} \label{sec:experimental_results}
\begin{table}[htb!]
\centering
\caption{Description of experiments and their corresponding sections.}
\begin{center}
\scalebox{0.88}{\begin{tabular}{@{}p{7cm}|c}
\hline
\centering
Description & Section \\
\hline
1.~Does AMD work to distill a better student? & \multirow{5}{*}{\ref{5_2}}\\
\noindent
\vspace{-1em}
\begin{itemize}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex]
\item Comparison with various attention based distillation methods.
\item Investigating the effect of each component of the proposed method. \end{itemize} & \vspace{-1em}\\ \hline
2.~What is the effect of learning with AMD from various teachers? & \multirow{3}{*}{\ref{sec:various_teacher}} \\
\noindent
\vspace{-1em}
\begin{itemize}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex]
\item Exploring with different capacity of teachers. \end{itemize} & \vspace{-1em}\\ \hline
3.~What is the effect of different hyperparameters? & \multirow{2}{*}{\ref{sec:Param} }\\
\noindent
\vspace{-1em}
\begin{itemize}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex]
\item Ablation study with $\gamma$ and $m$. \vspace{-1em} \end{itemize} & \\ \hline
4.~What are the visualized results for the area of interest? & \multirow{5}{*}{\ref{sec:activation_map}} \\
\noindent
\vspace{-1em}
\begin{itemize}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex]
\item Visualized results of activation maps from intermediate layers with or without local feature distillation. \vspace{-1em} \end{itemize} & \\ \hline
5.~Is AMD able to perform with existing methods? & \multirow{6}{*}{\ref{sec:combi_methods}} \\
\noindent
\vspace{-1em}
\begin{itemize}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex]
\item Evaluation with various methods such as fine-grained feature distillation, augmentation, and other distillation methods.
\item Generalizability analysis with ECE and reliability diagrams. \vspace{-1em}
\end{itemize} & \\ \hline
\end{tabular}}
\end{center}
\label{table:experiment_summary}
\end{table}
In this section, we present an experimental validation of the proposed method. We evaluate the proposed method, AMD, with various combinations of teacher and student that have different architectural styles. We run experiments on four public datasets of different complexities. We examine the sensitivity to several hyperparameters ($\gamma$ and $m$) of the proposed distillation and discuss which setting is best. To demonstrate the detailed contribution, we report results from various aspects, using classification accuracy as well as activation maps extracted by Grad-CAM \cite{selvaraju2017grad}. Finally, we investigate performance enhancement by combining previous methods, including filtered-feature based distillation. Each experiment and its corresponding section are described in Table \ref{table:experiment_summary}.
\subsection{Datasets}
\textbf{CIFAR-10.} The CIFAR-10 dataset \cite{Krizhevsky2009} includes 10 classes with 5000 training images and 1000 testing images per class. Each image is an RGB image of size 32$\times$32. We use the 50000 images as the training set and 10000 as the testing set. The experiments on CIFAR-10 help validate the efficacy of our models with less time consumption.
\textbf{CINIC-10.} We extend our experiments to CINIC-10 \cite{darlow2018cinic}. CINIC-10 is an augmented extension in the style of CIFAR-10, but the dataset contains 270,000 images, a scale closer to that of ImageNet. The images are equally split into the `train', `validate', and `test' sets. The size of the images is 32$\times$32. There are ten classes with 9000 images per class.
\textbf{Tiny-ImageNet / ImageNet.} To extend our experiments to a larger-scale dataset with more complexity, we use Tiny-ImageNet \cite{le2015tiny}. The size of the images in Tiny-ImageNet is 64$\times$64. For augmentation, to account for the complexity of the dataset, we pad the images to 68$\times$68, randomly crop them back to 64$\times$64, and horizontally flip them. The training and testing sets are of size 100k and 10k, respectively. The dataset includes 200 classes.
For ImageNet \cite{deng2009imagenet}, the dataset has 1k categories with 1.2M training images. The images are randomly cropped, resized to 224$\times$224, and horizontally flipped.
\begin{table}[]
\centering
\caption{Architecture of WRN used in experiments. Downsampling is performed in the first layers of conv3 and conv4. 16 and 28 denote the depth, and $k$ is the width (channel multiplication) of the network.}
\begin{center}
\scalebox{0.9}{
\begin{tabular}{p{3.2em} |p{3.2em} |c |c}
\hline
\centering
Group Name & Output Size & \multirow{2}{4.5em}{WRN16-$k$} &\multirow{2}{4.5em}{WRN28-$k$} \\
\hline
conv1 & 32$\times$32 & 3$\times$3, 16 & 3$\times$3, 16 \\ \hline \multirow{2}{1em}{conv2} & \multirow{2}{1em}{32$\times$32} & \multirow{2}{5.9em}{\bsplitcell{3$\times$3, 16$k$ \\ 3$\times$3, 16$k$}$\times$2} & \multirow{2}{5.9em}{\bsplitcell{3$\times$3, 16$k$ \\ 3$\times$3, 16$k$}$\times$4} \\[1.5pt]
&&&\\ \hline
\multirow{2}{1em}{conv3} & \multirow{2}{1em}{16$\times$16} & \multirow{2}{5.9em}{\bsplitcell{3$\times$3, 32$k$ \\ 3$\times$3, 32$k$}$\times$2} & \multirow{2}{5.9em}{\bsplitcell{3$\times$3, 32$k$ \\ 3$\times$3, 32$k$}$\times$4} \\ [1.5pt]
&&&\\ \hline
\multirow{2}{1em}{conv4} & \multirow{2}{1em}{8$\times$8} & \multirow{2}{5.9em}{\bsplitcell{3$\times$3, 64$k$ \\ 3$\times$3, 64$k$}$\times$2} & \multirow{2}{5.9em}{\bsplitcell{3$\times$3, 64$k$ \\ 3$\times$3, 64$k$}$\times$4} \\[1.5pt]
&&&\\ \hline
&1$\times$1&\multicolumn{2}{c}{average pool, 10-d fc, softmax}\\ \hline
\end{tabular} }
\end{center}
\label{table:WRNnetworks}
\end{table}
\subsection{Settings for experiments}
For experiments on CIFAR-10, CINIC-10, and Tiny-ImageNet, we set the batch size to 128 and train for 200 epochs using SGD with momentum 0.9, a weight decay of $1\times10^{-4}$, and an initial learning rate $lr$ of 0.1 that is decayed by a factor of 0.2 at epochs 40, 80, 120, and 160. For ImageNet, we use SGD with momentum 0.9 and a batch size of 256, and train for 100 epochs. The initial learning rate $lr$ is 0.1, decayed by a factor of 0.1 at epochs 30, 60, and 90.
In the experiments, we use the proposed method with WideResNet (WRN) \cite{zagoruyko2016wide}, which is popularly used for KD \cite{cho2019efficacy, yim2017gift, zagoruyko2016paying, tung2019similarity}, as the teacher and student models to evaluate classification accuracy. Their network architectures are described in Table \ref{table:WRNnetworks}.
\begin{figure}[]
\includegraphics[scale=0.5] {figure/KD_CIFAR10.png}
\centering
\caption{Accuracy ($\%$) of students (WRN16-1) trained with a teacher (WRN16-3) on CIFAR-10 for various $\lambda_2$. $\lambda_1$ is obtained by 1 - $\lambda_2$.}
\label{figure:lambda_cifar10}
\end{figure}
To determine optimal parameters $\lambda_1$ and $\lambda_2$ for KD, we tested different values of $\lambda_1$ and $\lambda_2$ for KD-based training on the CIFAR-10 dataset. As shown in Figure \ref{figure:lambda_cifar10}, the accuracy of a student (WRN16-1) trained with WRN16-3 as the teacher is best when $\lambda_1$ is 0.1 and $\lambda_2$ is 0.9 ($\tau$ = 4). If $\lambda_1$ is small and $\lambda_2$ is large, the distillation effect of KD is increased. Since the accuracy depends on $\lambda_1$ and $\lambda_2$, we referred to previous studies \cite{cho2019efficacy, ji2021show, tung2019similarity} to choose commonly used parameters for the experiments. The parameters ($\lambda_1$ = 0.1, $\lambda_2$ = 0.9, $\tau$ = 4), ($\lambda_1$ = 0.4, $\lambda_2$ = 0.6, $\tau$ = 16), ($\lambda_1$ = 0.7, $\lambda_2$ = 0.3, $\tau$ = 16), and ($\lambda_1$ = 1.0, $\lambda_2$ = 1.0, $\tau$ = 4) are used for KD on CIFAR-10, CINIC-10, Tiny-ImageNet, and ImageNet, respectively.
We perform baseline comparisons with traditional KD \cite{hinton2015distilling}, attention transfer (AT) \cite{zagoruyko2016paying}, relational knowledge distillation (RKD) \cite{park2019relational}, variational information distillation (VID) \cite{ahn2019variational}, similarity-preserving knowledge distillation (SP) \cite{tung2019similarity}, correlation congruence for knowledge distillation (CC) \cite{peng2019correlation}, contrastive representation distillation (CRD) \cite{tian2019contrastive}, attentive feature distillation and selection (AFDS) \cite{wang2019pay}, and attention-based feature distillation (AFD) \cite{ji2021show}, a recent feature-linking method that considers similarities between the teacher and student features; these baselines include state-of-the-art approaches. Note that, for a fair comparison, the distillation methods are performed together with traditional KD to see whether they enhance standard KD, keeping the same setting as the proposed method. The hyperparameters of the methods follow their respective papers. For the proposed method, the constant parameter $s$ and the margin parameter $m$ are 64 and 1.35, respectively. The loss weight $\gamma$ of the proposed method is 5000. We determine the hyperparameters empirically, considering the distillation effects for different model capacities. A more detailed description of the parameters appears in section \ref{sec:Param}. All experiments were repeated five times, and the averaged best accuracy and the standard deviation of the performance are reported.
No augmentation method is applied for CIFAR-10 and CINIC-10.
For the proposed method, additional techniques, such as using other hidden layers to generate better distillation effects from teachers or reshaping the dimension size of the feature maps, are not applied. All of our experiments are run on a 3.50 GHz CPU (Intel® Xeon(R) CPU E5-1650 v3), 48 GB memory, and an NVIDIA TITAN Xp (3840 NVIDIA® CUDA® cores and 12 GB memory) graphics card \cite{gpuspec}.
To obtain the best performance, we adopt early-stopped KD (ESKD) \cite{cho2019efficacy} for training teacher and student models, leveraging its across-the-board improvement of the efficacy of knowledge distillation. As shown in Figure \ref{figure:amd_eskd}, an early-stopped teacher tends to train student models better than Full KD, which uses a fully trained teacher.
\begin{figure}[]
\includegraphics[scale=0.46] {figure/AMD_eskd2.png}
\centering
\caption{Accuracy ($\%$) for Full KD and ESKD. (a) and (b) are on CIFAR-10, and (c) and (d) are on CINIC-10, respectively. T and S denotes teacher and student models, respectively.}
\label{figure:amd_eskd}
\end{figure}
\subsection{Attention-based distillation}\label{5_2}
\begin{table*}[htb!]
\centering
\caption{Details of teacher and student network architectures. ResNet \cite{he2016deep} and WideResNet \cite{zagoruyko2016wide} are denoted by ResNet (depth) and WRN (depth)-(channel multiplication), respectively.}
\begin{center}
\scalebox{0.83}{\begin{tabular}{c |c |c |c |c |c |c |c| c| c}
\hline
\centering
\multirow{2}{*}{DB}& \multirow{2}{*}{Setup} & \multirow{2}{*}{Compression type} & \multirow{2}{*}{Teacher} & \multirow{2}{*}{Student} & FLOPs & FLOPs &\# of params &\# of params & Compression \\
& & & & & (teacher) & (student) & (teacher) & (student) & ratio \\
\hline
\multirow{4}{*}{\rotatebox[origin=c]{90}{\scriptsize{CIFAR-10}}}&(a) & Channel & WRN16-3 & WRN16-1 & 224.63M & 27.24M & 1.50M & 0.18M & 11.30$\%$ \\
&(b) & Depth & WRN28-1 & WRN16-1 & 56.07M & 27.24M & 0.37M & 0.18M & 47.38$\%$ \\
&(c) & Depth+Channel & WRN16-3 & WRN28-1 & 224.63M & 56.07M & 1.50M & 0.37M & 23.85$\%$ \\
&(d) & Different architecture & ResNet44 & WRN16-1 & 99.34M & 27.24M & 0.66M & 0.18M & 26.47$\%$ \\
\hline
\multirow{4}{*}{\rotatebox[origin=c]{90}{\scriptsize{CINIC-10}}}&(a) & Channel & WRN16-3 & \multirow{4}{*}{WRN16-1} & 224.63M & \multirow{4}{*}{27.24M} & 1.50M & \multirow{4}{*}{0.18M} & 11.30$\%$ \\
&(b) & Depth & WRN28-1 & & 56.07M & & 0.37M & & 47.38$\%$ \\
&(c$^a$) & Depth+Channel & WRN28-3 & & 480.98M & & 3.29M & & 5.31$\%$ \\
&(d) & Different architecture & ResNet44 & & 99.34M & & 0.66M & & 26.47$\%$ \\
\hline
\multirow{4}{*}{\rotatebox[origin=c]{90}{\scriptsize{Tiny-ImageNet}}}&(a) & Channel & WRN16-3 & \multirow{4}{*}{WRN16-1} & 898.55M & \multirow{4}{*}{108.98M} & 1.59M & \multirow{4}{*}{0.19M} & 11.82$\%$ \\
&(b$^b$) & Depth & WRN40-1 & & 339.60M & & 0.58M & & 32.52$\%$ \\
&(c$^b$) & Depth+Channel & WRN40-2 & & 1,323.10M & & 2.27M & & 8.26$\%$ \\
&(d) & Different architecture & ResNet44 & & 397.36M & & 0.67M & & 27.82$\%$ \\
\hline
\end{tabular}}
\end{center}
\label{table:info_settings}
\end{table*}
\begin{table*}[htb!]
\centering
\caption{Accuracy ($\%$) on CIFAR-10 with various knowledge distillation methods. The methods denoted by ``*'' are attention based distillation. ``$\mathrm{g}$'' and ``$\mathrm{l}$'' denote using global and local feature distillation, respectively.}
\begin{center}
\scalebox{0.93}{
\begin{tabular}{c |c c | c c c c c c c | c c}
\hline
\centering
\multirow{3}{*}{Setup} & \multicolumn{11}{c}{Method} \\ \cline{2-12}
& \multirow{2}{*}{Teacher} & \multirow{2}{*}{Student} & \multirow{2}{*}{KD} & \multirow{2}{*}{AT$^*$} & \multirow{2}{*}{SP} & \multirow{2}{*}{RKD} & \multirow{2}{*}{VID} & \multirow{2}{*}{AFDS$^*$} & \multirow{2}{*}{AFD$^*$} & \multicolumn{2}{c}{AMD} \\
&&&&&&&&&&(g) & (g+l) \\ \hline
\multirow{2}{*}{(a)} & 87.76 & 84.11 & 85.29 & 85.79 & 85.69 & 85.45 & 85.40 & \multirow{2}{*}{--} & 86.23 & 86.28 & \textbf{86.36} \\
& \scriptsize$\pm$0.12 & \scriptsize$\pm$0.12 & \scriptsize$\pm$0.15 & \scriptsize$\pm$0.14 & \scriptsize$\pm$0.11 &\scriptsize$\pm$0.09 & \scriptsize$\pm$0.14 & & \scriptsize$\pm$0.13 &\scriptsize$\pm$0.06 &\scriptsize$\pm$0.10 \\
\multirow{2}{*}{(b)} & 85.59 & 84.11 & 85.48 & 85.79 & 85.77 & 85.47 & 84.92 & 85.53 & 85.84 & 86.04 & \textbf{86.10} \\
& \scriptsize$\pm$0.13 & \scriptsize$\pm$0.12 & \scriptsize$\pm$0.12 & \scriptsize$\pm$0.12 & \scriptsize$\pm$0.07 &\scriptsize$\pm$0.12 & \scriptsize$\pm$0.13 &\scriptsize$\pm$0.13 & \scriptsize$\pm$0.11 &\scriptsize$\pm$0.12 &\scriptsize$\pm$0.10 \\
\multirow{2}{*}{(c)} & 87.76 & 85.59 & 86.57 & 86.77 & 86.56 & 86.38 & 86.64 & \multirow{2}{*}{--} & 87.24 & 87.13 & \textbf{87.35} \\
& \scriptsize$\pm$0.12 & \scriptsize$\pm$0.12 & \scriptsize$\pm$0.16 & \scriptsize$\pm$0.11 & \scriptsize$\pm$0.09 &\scriptsize$\pm$0.22 & \scriptsize$\pm$0.24 & & \scriptsize$\pm$0.03 &\scriptsize$\pm$0.14 &\scriptsize$\pm$0.10 \\
\multirow{2}{*}{(d)} & 86.41 & 84.11 & 85.44 & 85.95 & 85.41 & 85.50 &85.17 & 85.14 & 85.78 & 86.22 & \textbf{86.34} \\
& \scriptsize$\pm$0.20 & \scriptsize$\pm$0.21 & \scriptsize$\pm$0.06 & \scriptsize$\pm$0.05 & \scriptsize$\pm$0.12 &\scriptsize$\pm$0.06 & \scriptsize$\pm$0.11 &\scriptsize$\pm$0.13 & \scriptsize$\pm$0.09 &\scriptsize$\pm$0.07 &\scriptsize$\pm$0.05 \\
\hline
\end{tabular} }
\end{center}
\label{table:att_CIFAR10}
\end{table*}
\begin{table*}[htb!]
\centering
\caption{Accuracy ($\%$) on CINIC-10 with various knowledge distillation methods. The methods denoted by ``*'' are attention based distillation. AMD outperforms RKD \cite{park2019relational}. ``$\mathrm{g}$'' and ``$\mathrm{l}$'' denote using global and local feature distillation, respectively. }
\begin{center}
\scalebox{0.93}{
\begin{tabular}{c |c c | c c c c c c | c c}
\hline
\centering
\multirow{3}{*}{Setup} & \multicolumn{10}{c}{Method} \\ \cline{2-11}
& \multirow{2}{*}{Teacher} & \multirow{2}{*}{Student} & \multirow{2}{*}{KD} & \multirow{2}{*}{AT$^*$} & \multirow{2}{*}{SP} & \multirow{2}{*}{VID} & \multirow{2}{*}{AFDS$^*$} & \multirow{2}{*}{AFD$^*$} & \multicolumn{2}{c}{AMD} \\
&&&&&&&&&(g) & (g+l) \\ \hline
\multirow{2}{*}{(a)} & 75.40 & \multirow{8}{*}{\shortstack{72.05 \\ {\scriptsize$\pm$0.12}}} & 74.31 & 74.63 & 74.43 & 74.35 & \multirow{2}{*}{--} & 74.13 & 75.04 & \textbf{75.18} \\
& \scriptsize$\pm$0.12 & & \scriptsize$\pm$0.10 & \scriptsize$\pm$0.13 & \scriptsize$\pm$0.14 &\scriptsize$\pm$0.05 & & \scriptsize$\pm$0.12 &\scriptsize$\pm$0.11 &\scriptsize$\pm$0.09 \\
\multirow{2}{*}{(b)} & 75.59 & & 74.66 & 74.73 & 74.94 & 73.85 & 74.54 & 74.36 & 75.14 & \textbf{75.21} \\
& \scriptsize$\pm$0.15 & & \scriptsize$\pm$0.08 & \scriptsize$\pm$0.02 & \scriptsize$\pm$0.11 &\scriptsize$\pm$0.08 & \scriptsize$\pm$0.08 & \scriptsize$\pm$0.04 &\scriptsize$\pm$0.06 &\scriptsize$\pm$0.04 \\
\multirow{2}{*}{(c$^a$)} & 76.97 & & 74.26 & 74.19 & 75.05 & 74.06 & \multirow{2}{*}{--} & 74.20 & 74.72 & \textbf{75.17} \\
& \scriptsize$\pm$0.05 & & \scriptsize$\pm$0.06 & \scriptsize$\pm$0.11 & \scriptsize$\pm$0.10 &\scriptsize$\pm$0.15 & & \scriptsize$\pm$0.12 &\scriptsize$\pm$0.07 &\scriptsize$\pm$0.07 \\
\multirow{2}{*}{(d)} & 74.30 & & 74.47 & 74.67 & 74.46 & 74.43 & 74.64 & 73.31 & 74.93 & \textbf{75.10} \\
& \scriptsize$\pm$0.15 & & \scriptsize$\pm$0.09 & \scriptsize$\pm$0.05 & \scriptsize$\pm$0.17 &\scriptsize$\pm$0.10 & \scriptsize$\pm$0.12 &\scriptsize$\pm$0.13 & \scriptsize$\pm$0.07 &\scriptsize$\pm$0.10 \\
\hline
\end{tabular} }
\end{center}
\label{table:att_CINIC10}
\end{table*}
\begin{table*}[htb!]
\centering
\caption{Accuracy ($\%$) on Tiny-ImageNet with various knowledge distillation methods. The methods denoted by ``*'' are attention based distillation. AMD outperforms VID \cite{ahn2019variational} and RKD \cite{park2019relational}. ``$\mathrm{g}$'' and ``$\mathrm{l}$'' denote using global and local feature distillation, respectively.}
\begin{center}
\scalebox{0.93}{
\begin{tabular}{c |c c | c c c c c | c c}
\hline
\centering
\multirow{3}{*}{Setup} & \multicolumn{9}{c}{Method} \\ \cline{2-10}
& \multirow{2}{*}{Teacher} & \multirow{2}{*}{Student} & \multirow{2}{*}{KD} & \multirow{2}{*}{AT$^*$} & \multirow{2}{*}{SP} & \multirow{2}{*}{AFDS$^*$} & \multirow{2}{*}{AFD$^*$} & \multicolumn{2}{c}{AMD} \\
&&&&&&&&(g) & (g+l) \\ \hline
\multirow{2}{*}{(a)} & 58.16 & \multirow{8}{*}{\shortstack{49.45 \\ {\scriptsize$\pm$0.20}}} & 49.99 & 49.72 & 49.27 & \multirow{2}{*}{--} & 50.00 & \textbf{50.32} & 49.92 \\
& \scriptsize$\pm$0.30 & & \scriptsize$\pm$0.15 & \scriptsize$\pm$0.15 & \scriptsize$\pm$0.19 & &\scriptsize$\pm$0.23 &\scriptsize$\pm$0.07 &\scriptsize$\pm$0.04\\
\multirow{2}{*}{(b$^b$)} & 54.74 & & 49.56 & 49.79 & 49.89 & 49.46 & 50.04 & \textbf{50.15} & 49.97 \\
& \scriptsize$\pm$0.24 & & \scriptsize$\pm$0.17 & \scriptsize$\pm$0.22 & \scriptsize$\pm$0.20 &\scriptsize$\pm$0.28 & \scriptsize$\pm$0.27 & \scriptsize$\pm$0.10 &\scriptsize$\pm$0.18 \\
\multirow{2}{*}{(c$^b$)} & 59.92 & & 49.67 & 49.62 & 49.59 & \multirow{2}{*}{--} & 49.78 & 49.88 & \textbf{50.07} \\
& \scriptsize$\pm$0.15 & & \scriptsize$\pm$0.13 & \scriptsize$\pm$0.16 & \scriptsize$\pm$0.25 & &\scriptsize$\pm$0.24&\scriptsize$\pm$0.20 &\scriptsize$\pm$0.10 \\
\multirow{2}{*}{(d)} & 54.66 & & 49.52 & 49.45 & 49.13 & 49.55 & 49.44 & 49.92 & \textbf{50.08} \\
& \scriptsize$\pm$0.14 & & \scriptsize$\pm$0.16 & \scriptsize$\pm$0.28 & \scriptsize$\pm$0.20 &\scriptsize$\pm$0.13 & \scriptsize$\pm$0.27 &\scriptsize$\pm$0.09 & \scriptsize$\pm$0.16 \\
\hline
\end{tabular} }
\end{center}
\label{table:att_Tiny}
\end{table*}
\begin{table*}[htb!]
\caption{Top-1 and Top-5 accuracy (\%) on ImageNet with various knowledge distillation methods. The methods denoted by ``*'' are attention based distillation. ``$\mathrm{g}$'' and ``$\mathrm{l}$'' denote using global and local feature distillation, respectively.}
\label{test_accuracy_table_on_imagenet}
\begin{center}
\scalebox{0.93}{
\begin{tabular}{l|cc|ccccccc|cc}
\hline
& \multirow{2}{*}{Teacher} & \multirow{2}{*}{Student} & \multirow{2}{*}{KD} & \multirow{2}{*}{AT$^*$} & \multirow{2}{*}{RKD} & \multirow{2}{*}{SP} & \multirow{2}{*}{CC} & \multirow{2}{*}{AFD$^*$} & \multirow{2}{*}{CRD(+KD)} & \multicolumn{2}{c}{AMD} \\
&&&&&&&&&&(g)&(g+l)\\\hline
Top-1 & 73.31 & 69.75 & 70.66 & 70.70 & 70.59 & 70.79 & 69.96 & 71.38 & 71.17(71.38) & \textbf{71.58} & 71.47 \\
Top-5 & 91.42 & 89.07 & 89.88 & 90.00 & 89.68 & 89.80 & 89.17 & -- & 90.13(90.49) & \textbf{90.50} & 90.49\\
\hline
\end{tabular}
}
\end{center}
\end{table*}
In this section, we explore the performance of attention based distillation approaches with different types of combinations for teacher and student.
We consider four teacher-student combinations, in which the two networks have either the same or different architectures; the four combinations are described in Table \ref{table:info_settings}. Since the proposed method relies on attention maps, we implemented several state-of-the-art attention based distillation baselines, including AT \cite{zagoruyko2016paying}, AFDS \cite{wang2019pay}, and AFD \cite{ji2021show}. As described in section \ref{sec:related_work}, AT \cite{zagoruyko2016paying} uses activation-based spatial attention maps for transferring knowledge from teacher to student. AFDS \cite{wang2019pay} includes attentive feature distillation and accelerates the transfer-learned model by feature selection; additional layers compute a transfer importance predictor that measures the importance of the source activation maps and enforces a different penalty for training the student. AFD \cite{ji2021show} extracts channel and spatial attention maps and identifies similar features between teacher and student, which are used to control the distillation intensities for all possible pairs and to compensate for the limitation of learning to transfer (L2T) \cite{pmlr-v97-jang19b}, which relies on manually selected links.
We implemented AFDS \cite{wang2019pay} only when the intermediate-layer feature dimensions of the student match those of the teacher, so as to concentrate on the distillation effects. We use four datasets with varying degrees of classification difficulty. These baselines are also used in the following experiments.
Table \ref{table:att_CIFAR10} presents the accuracy of various knowledge distillation methods for all setups in Table \ref{table:info_settings} on the CIFAR-10 dataset. The proposed method, AMD (global+local), achieves the best results in all cases. Table \ref{table:att_CINIC10} describes the CINIC-10 results, where AMD (global+local) achieves the best results in most cases. For experiments on Tiny-ImageNet, as illustrated in Table \ref{table:att_Tiny}, AMD outperforms previous methods; AMD (global) shows better results in the (a) and (b$^b$) setups, while AMD (global+local) provides better results in the (c$^b$) and (d) setups. For experiments on ImageNet, standard KD is not applied to baselines and Full KD is utilized. Teacher and student networks are ResNet34 and ResNet18, respectively. The baseline results are taken from prior works \cite{ji2021show, tian2019contrastive}. As described in Table \ref{test_accuracy_table_on_imagenet}, AMD (global) outperforms the other distillation methods, increasing the top-1 and top-5 accuracy by 1.83\% and 1.43\%, respectively, over learning from scratch.
\begin{figure}[htb!]
\includegraphics[scale=0.47] {figure/KD_CIFAR10_apn_histo.png}
\centering
\caption{Accuracy ($\%$) of students (WRN16-1) trained with teachers (WRN16-3 and WRN28-1) on CIFAR-10 for various loss functions.}
\label{figure:amdloss_cifar10}
\end{figure}
Compared to KD, AT obtains better performance in most cases across datasets; that is, the attention map helps the teacher transfer its knowledge. Even though there are cases in Table \ref{table:att_Tiny} where AT shows lower performance than KD, AMD outperforms KD in all cases. This verifies that applying the discriminative angular distance metric for knowledge distillation maximizes the attention map's efficacy in transferring knowledge and complements traditional KD for various combinations of teacher and student. The accuracies of SP with setups (a) and (d), and of AFD with setup (d), are even lower than the accuracy of learning from scratch, while AMD performs better than the other methods, as shown in Table \ref{table:att_Tiny}. When the classification problem is harder, AMD (global) can perform better than AMD (global+local) in some cases. When the teacher and student have different channels or architectural styles, AMD (global+local) can generate a better student than AMD (global).
\textbf{Components of AMD loss function.} As described in Equation \ref{eq:amd}, the angular margin distillation loss function ($\mathcal{L}_{AM}(Q_{Tp},Q_{Tn},Q_{Sp},Q_{Sn})$) includes three components (A, P, N). To verify the contribution of each component in the AMD loss, we experiment with each component separately. As shown in Figure \ref{figure:amdloss_cifar10}, among all components, (A) provides the strongest contribution. Each component in AMD transfers different knowledge and contributes to improvements in performance. Adding one component to another provides richer information, which leads to better performance, and the combination of all the components (AMD) shows a much higher performance. This result indicates that all components are critical to distilling the best student model.
\begin{figure*}[]
\includegraphics[scale=0.35] {figure/AMloss2.png}
\centering
\caption{$\mathcal{L_{A}}$ vs. Accuracy ($\%$) for (from left to right) WRN16-1 students (S) trained with WRN16-3, WRN28-1, and ResNet44 teachers (T), on CIFAR-10.}
\label{figure:sample_figure1}
\end{figure*}
\begin{figure*}[]
\includegraphics[scale=0.5] {figure/tsne_plot.png}
\centering
\caption{t-SNE plots of output for teacher model (ResNet44) and students (WRN16-1) trained with KD and AMD on CIFAR-10.}
\label{figure:tsne}
\end{figure*}
In Figure \ref{figure:sample_figure1} we show $\mathcal{L_{A}}$ vs. accuracy, when using KD, SP, and AMD (global), for WRN16-1 students trained with WRN16-3, WRN28-1, and ResNet44 teachers, on the CIFAR-10 test set. As shown in Figure \ref{figure:sample_figure1}, when the loss value is smaller, the accuracy is higher. Thus, these plots verify that $\mathcal{L_{A}}$ and performance are correlated.
\textbf{t-SNE visualization and cluster metrics.} To measure the clustering performance, we plot t-SNE \cite{vandermaaten08a} and calculate the V-Score \cite{rosenberg2007v} of the outputs from the penultimate layers of KD and the proposed method on CIFAR-10, where the V-Score is a clustering metric for which a higher value indicates better clustering. As shown in Figure \ref{figure:tsne}, compared to KD, AMD yields tighter clusters and better separation between classes, as reflected in the higher V-Score.
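For reference, a minimal Python sketch of this clustering analysis is given below; the feature files, the use of $k$-means to obtain cluster assignments, and the parameter choices are illustrative assumptions of ours rather than the exact evaluation pipeline used here.
\begin{verbatim}
# Minimal sketch of the t-SNE / V-Score analysis (illustrative only).
# The feature/label files and the k-means clustering step are assumptions.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans
from sklearn.metrics import v_measure_score

features = np.load("penultimate_features.npy")  # (N, D) penultimate outputs
labels = np.load("labels.npy")                   # (N,) ground-truth classes

# 2D embedding for the qualitative t-SNE plot
embedded = TSNE(n_components=2, init="pca",
                random_state=0).fit_transform(features)

# Cluster the raw features and score agreement with the labels
clusters = KMeans(n_clusters=10, n_init=10,
                  random_state=0).fit_predict(features)
print("V-Score:", v_measure_score(labels, clusters))
\end{verbatim}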
\subsection{Effect of teacher capacity} \label{sec:various_teacher}
\begin{table*}[]
\centering
\renewcommand{\tabcolsep}{1.2mm}
\caption{Accuracy ($\%$) with various knowledge distillation methods for different combinations of teachers and students. ``Teacher'' and ``Student'' denote results of the model used to train the distillation methods and trained from scratch, respectively. ``$\mathrm{g}$'' and ``$\mathrm{l}$'' denote using global and local feature distillation, respectively.}
\begin{center}
\scalebox{0.75}{
\begin{tabular}{c |c c| c c| c c c c c c| c c c| c c | c}
\hline
\centering
Method & \multicolumn{4}{c|}{CIFAR-10} &
\multicolumn{9}{c|}{CINIC-10} & \multicolumn{3}{c}{Tiny-ImageNet} \\
\hline
\multirow{4}{*}{Teacher} & WRN & WRN & WRN & WRN &
WRN & WRN & WRN & WRN & WRN & WRN & WRN & WRN & M.Net &
WRN & WRN & WRN\\
& 28-1 & 40-1 & 16-3 & 16-8 &
16-3 & 16-8 & 28-1 & 40-1 & 28-3 & 40-2 & 16-3 & 28-3 & V2 &
40-1 & 40-2 & 16-3 \\
& (0.4M, & (0.6M, & (1.5M, & (11.0M, &
(1.5M, & (11.0M, & (0.4M, & (0.6M, & (3.3M, & (2.2M, & (1.5M, & (3.3M, & (0.6M, &
(0.6M, & (2.3M, & (1.6M, \\
& 85.84) & 86.39) & 88.15) & 89.50) &
75.65) & 77.97) & 73.91) & 74.49) & 77.14) & 76.66) & 75.65) & 77.14) & 80.98) &
55.28) & 60.18) & 58.78) \\
\hline
\multirow{3}{*}{Student} & \multicolumn{2}{c|}{WRN16-1} & \multicolumn{2}{c|}{WRN28-1} &
\multicolumn{6}{c|}{WRN16-1} & \multicolumn{3}{c|}{ResNet20} &
\multicolumn{2}{c|}{WRN16-1} & ResNet20 \\
& \multicolumn{2}{c|}{(0.2M,} & \multicolumn{2}{c|}{(0.4M, } &
\multicolumn{6}{c|}{(0.2M, } & \multicolumn{3}{c|}{(0.3M, } &
\multicolumn{2}{c|}{(0.2M,} & (0.3M, \\
& \multicolumn{2}{c|}{84.11{\scriptsize$\pm$0.21})} & \multicolumn{2}{c|}{85.59{\scriptsize$\pm$0.13})} &
\multicolumn{6}{c|}{72.05{\scriptsize$\pm$0.12})} & \multicolumn{3}{c|}{72.74{\scriptsize$\pm$0.09})} &
\multicolumn{2}{c|}{49.45{\scriptsize$\pm$0.20})} & 51.75{\scriptsize$\pm$0.19} \\
\hline
\multirow{2}{*}{KD} & 85.48 & 85.42 & 86.57 & 86.68 &
74.31 & 74.17 & 74.66 & 74.45 & 74.26 & 74.29 & 75.12 & 74.97 & 76.69 &
49.56 & 49.67 & 51.72 \\
& {\scriptsize$\pm$0.12} & {\scriptsize$\pm$0.11} & {\scriptsize$\pm$0.16} & {\scriptsize$\pm$0.08} &
{\scriptsize$\pm$0.10} & {\scriptsize$\pm$0.16} & {\scriptsize$\pm$0.08} & {\scriptsize$\pm$0.03} & {\scriptsize$\pm$0.06} & {\scriptsize$\pm$0.09} & {\scriptsize$\pm$0.11} & {\scriptsize$\pm$0.07} & {\scriptsize$\pm$0.06} &
{\scriptsize$\pm$0.17} & {\scriptsize$\pm$0.13} & {\scriptsize$\pm$0.13} \\
\multirow{2}{*}{AT} & 85.79 & 85.79 & 86.77 & 87.00 &
74.63 & 74.23 & 74.73 & 74.55 & 74.19 & 74.48 & 75.33 & 75.18 & 77.34 &
49.79 & 49.62 & 51.65 \\
& {\scriptsize$\pm$0.12} & {\scriptsize$\pm$0.11} & {\scriptsize$\pm$0.11} & {\scriptsize$\pm$0.05} &
{\scriptsize$\pm$0.13} & {\scriptsize$\pm$0.14} & {\scriptsize$\pm$0.02} & {\scriptsize$\pm$0.06} & {\scriptsize$\pm$0.11} & {\scriptsize$\pm$0.08} & {\scriptsize$\pm$0.11} & {\scriptsize$\pm$0.09} & {\scriptsize$\pm$0.10} &
{\scriptsize$\pm$0.22} & {\scriptsize$\pm$0.16} & {\scriptsize$\pm$0.05} \\
\multirow{2}{*}{SP} & 85.77 & 85.90 & 86.56 & 86.94 &
74.43 & 74.34 & 74.94 & 74.86 & 75.04 & 74.81 & 75.29 & 75.50 & 73.71 &
49.89 & 49.59 & 51.87 \\
& {\scriptsize$\pm$0.07} & {\scriptsize$\pm$0.11} & {\scriptsize$\pm$0.09} & {\scriptsize$\pm$0.08} &
{\scriptsize$\pm$0.11} & {\scriptsize$\pm$0.13} & {\scriptsize$\pm$0.11} & {\scriptsize$\pm$0.07} & {\scriptsize$\pm$0.10} & {\scriptsize$\pm$0.09} & {\scriptsize$\pm$0.10} & {\scriptsize$\pm$0.09} & {\scriptsize$\pm$0.10} &
{\scriptsize$\pm$0.20} & {\scriptsize$\pm$0.25} & {\scriptsize$\pm$0.09} \\
\hline
AMD & 86.04 & 86.03 & 87.13 & 87.22 &
75.04 & 74.93 & 75.14 & \textbf{75.12} & 74.72 & 74.95 & 75.66 & 75.61 & 78.45 &
\textbf{50.15} & 49.88 & 51.89\\
(g) & {\scriptsize$\pm$0.12} & {\scriptsize$\pm$0.09} & {\scriptsize$\pm$0.14} & {\scriptsize$\pm$0.17} &
{\scriptsize$\pm$0.11} & {\scriptsize$\pm$0.09} & {\scriptsize$\pm$0.06} & {\scriptsize$\pm$0.07} & {\scriptsize$\pm$0.07} & {\scriptsize$\pm$0.20} & {\scriptsize$\pm$0.08} & {\scriptsize$\pm$0.06} & {\scriptsize$\pm$0.03} &
{\scriptsize$\pm$0.11} & {\scriptsize$\pm$0.20} & {\scriptsize$\pm$0.25} \\
AMD & \textbf{86.10} & \textbf{86.15} & \textbf{87.35} & \textbf{87.31} &
\textbf{75.18} & \textbf{75.20} & \textbf{75.21} & 75.10 & \textbf{75.22} & \textbf{75.04} & \textbf{75.75} & \textbf{75.76} & \textbf{78.62} &
49.97 & \textbf{50.07} & \textbf{52.12} \\
(g+l) & {\scriptsize$\pm$0.10} & {\scriptsize$\pm$0.06} & {\scriptsize$\pm$0.10} & {\scriptsize$\pm$0.15} &
{\scriptsize$\pm$0.09} & {\scriptsize$\pm$0.05} & {\scriptsize$\pm$0.04} & {\scriptsize$\pm$0.04} & {\scriptsize$\pm$0.07} & {\scriptsize$\pm$0.06} & {\scriptsize$\pm$0.08} & {\scriptsize$\pm$0.11} & {\scriptsize$\pm$0.04} &
{\scriptsize$\pm$0.18} & {\scriptsize$\pm$0.10} & {\scriptsize$\pm$0.15} \\
\hline
\end{tabular}
}
\end{center}
\label{table:various_CapacityT}
\end{table*}
To understand the effect of the capacity of the teacher, we implemented various combinations of teacher and student, where the teacher has a different capacity. We use well-known benchmarks for image classification which are WRN \cite{zagoruyko2016wide}, ResNet \cite{he2016deep}, and MobileNetV2 (M.NetV2) \cite{sandler2018mobilenetv2}. We applied the same settings as in the experiments of the previous section.
The classification accuracy of the student models trained with attention based and non-attention based methods \cite{hinton2015distilling, zagoruyko2016paying, tung2019similarity} is reported in Table \ref{table:various_CapacityT} across three datasets. The number of trainable parameters is noted in brackets. In all cases, the proposed method, AMD, shows the highest accuracy. When the complexity of the dataset is higher and the depth of the teacher differs greatly from that of the student, AMD (global) tends to generate a better student than AMD (global+local). When a student with larger capacity is used, the observed accuracy is higher. This is seen in the results of WRN16-1 and ResNet20 students with WRN16-3 and WRN28-3 teachers on the CINIC-10 dataset: the ResNet20 students, which have a larger capacity than WRN16-1, generate better results. Furthermore, on CIFAR-10, when a WRN16-3 teacher is used, a WRN28-1 student achieves 87.35$\%$ for AMD (global+local), whereas a WRN16-1 student achieves 86.36$\%$ for AMD (global+local). On Tiny-ImageNet, when AMD (global+local) is used, the accuracy of a ResNet20 student is 52.12$\%$, which is higher than the accuracy of a WRN16-1 student, which is 49.92$\%$.
Compared to KD, in most cases, AT achieves better performance. However, when the classification problem is difficult, such as when using Tiny-ImageNet, and when WRN40-2 teacher and WRN16-1 student are used, both AT and SP show worse performance than KD. When the WRN16-3 teacher and ResNet20 student are used, KD and AT perform worse than the model trained from scratch. The result of AT is even lower than that of KD. So, there are cases where AT and SP cannot complement the performance of the traditional KD. On the other hand, for the proposed method, the results are better than the baselines in all the cases. Interestingly, on CIFAR-10 and CINIC-10, the result of a WRN16-1 student trained by AMD with a WRN28-1 teacher is even better than the result of the teacher. Therefore, we conclude that the proposed method maximizes the attention map's efficacy of transferring the knowledge and complements traditional KD.
Performance degradation of AMD can also occur when a larger teacher is paired with a smaller student.
For example, on CINIC-10, a WRN16-1 student trained with a WRN40-1 (0.6M) teacher outperforms one trained with a WRN40-2 (2.3M) teacher.
Both AMD and other methods produce some cases with lower performance when a better (usually larger) teacher is used. This is consistent with prior findings \cite{cho2019efficacy, wang2021knowledge, stanton2021does} that a better teacher does not always guarantee a better student.
\textbf{Heterogeneous teacher-student.}
In Table \ref{table:various_CapacityT}, we present the results of the teacher-student combinations from similar architecture styles. Tian \emph{et al.} \cite{tian2019contrastive} found that feature distillation methods such as SP sometimes struggled to find the optimal solution in different architecture styles.
In this regard, we implemented heterogeneous teacher-student combinations, in which the teacher and student have very different network structures. We use the vgg \cite{simonyan2014very} networks to compose these heterogeneous combinations.
As described in Table \ref{table:heteroT}, we observe similar findings: SP shows degraded performance when the vgg13 teacher and ResNet20 student are used, while AMD consistently outperforms all baselines we explored. Also, in most cases, the WRN16-8 teacher distills a better student (vgg8) than the WRN28-1 teacher. However, KD and SP show better performance with the WRN28-1 teacher, which corroborates that a better teacher does not always distill a better student.
\begin{table}[htb!]
\centering
\renewcommand{\tabcolsep}{1.2mm}
\caption{Accuracy ($\%$) with various knowledge distillation methods for different structure of teachers and students on CIFAR-10. ``Teacher'' and ``Student'' denote results of the model used to train the distillation methods and trained from scratch, respectively. ``$\mathrm{g}$'' and ``$\mathrm{l}$'' denote using global and local feature distillation, respectively.}
\begin{center}
\scalebox{0.9}{
\begin{tabular}{c |c c| c c }
\hline
\centering
\multirow{4}{*}{Teacher} & WRN & WRN & \multirow{2}{*}{vgg13} & M.Net \\
& 28-1 & 16-8 & & V2 \\
& (0.4M, & (11.0M, & (9.4M, & (0.6M, \\
& 85.84) & 89.50) & 88.56) & 89.61) \\
\hline
\multirow{3}{*}{Student} & \multicolumn{2}{c|}{vgg8} & ResNet20 & ResNet26 \\
& \multicolumn{2}{c|}{(3.9M,} & (0.3M, & (0.4M, \\
& \multicolumn{2}{c|}{85.41{\scriptsize$\pm$0.06})} & 85.20{\scriptsize$\pm$0.17}) & 85.65{\scriptsize$\pm$0.20}) \\
\hline
\multirow{2}{*}{KD} & 86.93 & 86.74 & 85.39 & 87.74\\
& {\scriptsize$\pm$0.11} & {\scriptsize$\pm$0.13} & {\scriptsize$\pm$0.07} & {\scriptsize$\pm$0.08} \\
\multirow{2}{*}{AT} & 87.16 & 87.29 & 85.63 & 88.61 \\
& {\scriptsize$\pm$0.09} & {\scriptsize$\pm$0.10} & {\scriptsize$\pm$0.20} & {\scriptsize$\pm$0.04}\\
\multirow{2}{*}{SP} & 87.29 & 86.82 & 85.00 & 85.78 \\
& {\scriptsize$\pm$0.00} & {\scriptsize$\pm$0.07} & {\scriptsize$\pm$0.07} & {\scriptsize$\pm$0.10} \\
\hline
AMD & 87.43 & 87.61 & 86.18 & \textbf{88.70}\\
(g) & {\scriptsize$\pm$0.04} & {\scriptsize$\pm$0.11} & {\scriptsize$\pm$0.14} & {\scriptsize$\pm$0.03} \\
AMD & \textbf{87.56} & \textbf{87.63} & \textbf{86.41} & 88.42 \\
(g+l) & {\scriptsize$\pm$0.03} & {\scriptsize$\pm$0.07} & {\scriptsize$\pm$0.04} & {\scriptsize$\pm$0.08} \\
\hline
\end{tabular}
}
\end{center}
\label{table:heteroT}
\end{table}
\subsection{Ablations and sensitivity analysis} \label{sec:Param}
In this section, we investigate the sensitivity to the hyperparameters ($\gamma$ and $m$) used for the angular margin based attention distillation.
\subsubsection{\makebox{Effect of angular distillation hyperparameter $\gamma$}}
The results of a student model (WRN16-1) for AMD (global) trained with teachers (WRN16-3 and WRN28-1) by using various $\gamma$ on CIFAR-10 (the first row) and CINIC-10 (the second row) are depicted in Figure \ref{figure:cifar10_gamma} ($m$ = 1.35). When $\gamma$ is 5000, all results show the best accuracy. For CIFAR-10, when WRN16-3 is used as a teacher, the accuracy of $\gamma$ = 3000 is higher than that of $\gamma$ = 7000; for WRN28-1 as a teacher, the accuracy of $\gamma$ = 7000 is higher than that of $\gamma$ = 3000. When $\gamma$ is 1000, the accuracy is lower than KD, implying that it does not complement KD and adversely affects the performance. On the other hand, for CINIC-10, when the WRN16-3 teacher is used, the result of $\gamma$ = 7000 is better than that of $\gamma$ = 3000, but for the WRN28-1 teacher, $\gamma$ = 3000 is better than $\gamma$ = 7000. In summary, $\gamma$ values between 3000 and 7000 achieve good performance, while too small or too large $\gamma$ values do not help much with improvement; setting a proper $\gamma$ value is therefore important for performance. We recommend using $\gamma$ = 5000, which produces the best results across datasets and combinations of teacher and student.
\begin{figure}[htb!]
\includegraphics[scale=0.315] {figure/gamma_value22.png}
\centering
\caption{Accuracy ($\%$) of students (WRN16-1) for AMD (global) with various $\gamma$, trained with teachers (WRN16-3 and WRN28-1) on CIFAR-10 and CINIC-10. ``T'' and ``S'' denote teacher and student, respectively.}
\label{figure:cifar10_gamma}
\end{figure}
\begin{figure}[htb!]
\includegraphics[scale=0.29] {figure/margin_value22.png}
\centering
\caption{Accuracy ($\%$) of students (WRN16-1) for AMD (global) with various angular margin $m$, trained with teachers (WRN16-3 and WRN28-1) on CIFAR-10 and CINIC-10. ``T'' and ``S'' denote teacher and student, respectively.}
\label{figure:sample_margin}
\end{figure}
\subsubsection{Effect of angular margin $m$}
The results of a student model (WRN16-1) for AMD (global) trained with teachers (WRN16-3 and WRN28-1) by various angular margin $m$ on CIFAR-10 (the first row) and CINIC-10 (the second row) are illustrated in Figure \ref{figure:sample_margin} ($\gamma$ = 5000).
As described in section \ref{proposed_AMD}, using a large value of $m$ corresponds to producing more distinct positive features in the attention map and a larger gap between positive and negative features for distillation. When $m$ is 1.35 for the WRN16-3 teacher, the WRN16-1 student shows the best performance of 86.28$\%$ on CIFAR-10. When $m$ = 1.5 for CINIC-10, the student's accuracy is 75.13$\%$, which is higher than when $m$ = 1.35. When the teacher is WRN28-1, the student produces the best accuracy with $m$ = 1.35 on both datasets. The student model with $m$ = 1.35 performs better than the ones with $m$ = 1.1 and 2.0. When the complexity of the dataset is higher, a value of $m$ (1.5) larger than 1.35 can produce good performance.
When $m$ = 1.0 (no additional margin applied to the positive feature) for CIFAR-10 and CINIC-10 with setup ($b$), the results are 85.81$\%$ and 74.83$\%$, which are better than those of 85.31$\%$ and 74.75$\%$ from $m$ = 2.0, respectively.
This result indicates that it is important to set an appropriate $m$ value for our method. We believe that angular margin plays a key role in determining the gap between positive and negative features. As angular margin increases, the positive features are further emphasized, and in this case of over-emphasis by a much larger $m$, the performance is worse than that of the smaller $m$.
We recommend using a margin $m$ of around 1.35 ($m > 1.0$), which generates the best results in most cases.
\subsection{Analysis with activation maps} \label{sec:activation_map}
To analyze results with intermediate layers, we adopt Grad-CAM \cite{selvaraju2017grad}, which uses class-specific gradient information to visualize a coarse localization map of the important regions in the image. In this section, we present the activation maps from intermediate layers and from the high-level layer for various methods. The red regions are more crucial for the model prediction than the blue ones.
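For completeness, a minimal PyTorch-style sketch of the Grad-CAM computation underlying these visualizations is shown below; the hook-based implementation, the choice of target layer, and the tensor names are illustrative assumptions and not the exact code used to produce the figures.
\begin{verbatim}
# Minimal Grad-CAM sketch (PyTorch); hooks, layer choice, and names are
# illustrative assumptions, not the exact code used for the figures.
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx=None):
    """image: (1, C, H, W) tensor; returns a (1, 1, H, W) localization map."""
    store = {}
    fwd = target_layer.register_forward_hook(
        lambda m, i, o: store.update(act=o))
    bwd = target_layer.register_full_backward_hook(
        lambda m, gi, go: store.update(grad=go[0]))

    model.eval()
    logits = model(image)
    if class_idx is None:
        class_idx = int(logits.argmax(dim=1))
    model.zero_grad()
    logits[0, class_idx].backward()
    fwd.remove()
    bwd.remove()

    # channel weights = global-average-pooled gradients
    weights = store["grad"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
\end{verbatim}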
\begin{figure}[htb!]
\includegraphics[width=0.4\textwidth] {figure/activation_map_levels.png}
\centering
\caption{Activation maps for different levels of students (WRN16-1) trained with a teacher (WRN16-3) on CIFAR-10.}
\label{figure:activation_levels}
\end{figure}
\subsubsection{Activation maps for the different levels of layers}
\begin{figure}[htb!]
\includegraphics[width=0.43\textwidth] {figure/activation_map2.png}
\centering
\caption{Activation maps of high-level from students (WRN16-1) trained with a teacher (WRN16-3) for different input images on CIFAR-10.}
\label{figure:activation}
\end{figure}
The activation maps from intermediate layers with various methods are shown in Figure \ref{figure:activation_levels}. At the low level, the proposed method, AMD, shows activated regions intuitively similar to those of traditional KD \cite{hinton2015distilling}. However, at the mid and high levels, the proposed method produces higher activations around the region of the target object, which differs from the previous methods \cite{hinton2015distilling, zagoruyko2016paying}. Thus, the proposed method can classify positive and negative areas more discriminatively than the previous methods \cite{hinton2015distilling, zagoruyko2016paying}.
The high-level activation maps for various input images are shown in Figure \ref{figure:activation}. The activation from the proposed method is more centered on the target. The result shows that the proposed method focuses more distinctly on the foreground object with high weight, while being less distracted by the background compared to other methods \cite{hinton2015distilling, zagoruyko2016paying}. With higher weight over regions of interest, the student from the proposed method has a stronger discrimination ability. Therefore, the proposed method guides student models to increase class separability.
\subsubsection{Activation maps for global and local distillation of AMD}
To investigate the impact of using global and local features for AMD, we illustrate relevant results in Figure \ref{figure:attention_AMD}. When both global and local features are used for distillation, the activated area is located and shaped more similarly to the teacher's than when using the global feature only. Also, AMD (global+local) focuses on the foreground object with higher weights than AMD (global), guiding the student to focus more on the target regions and to find discriminative regions. Thus, for the proposed method, using global and local features is better than using global features alone.
\begin{figure}[htb!]
\includegraphics[width=0.43\textwidth] {figure/attention_AMD2.png}
\centering
\caption{Activation maps of high-level from students (WRN16-1) for AMD trained with a teacher (WRN16-3) for different input images on CIFAR-10.}
\label{figure:attention_AMD}
\end{figure}
\subsection{Combinations with existing methods} \label{sec:combi_methods}
Even if a model shows good performance in classification, it may have miscalibration problems \cite{guo2017calibration} and may not always obtain improved results when combined with other robust methods. In this section, to evaluate the generalizability of models trained by each method and to explore whether the method can complement other methods, we conduct experiments with various existing methods.
We use the method in various ways to demonstrate how easily it can be combined with previous learning techniques. We trained students with fine-grained features \cite{wang2019distilling, wang2020fully}, with augmentation methods, and with one of the baselines, SP \cite{tung2019similarity}, which is not an attention based distillation method. WRN16-1 students were trained with WRN16-3 and WRN28-1 teachers. We examine whether the proposed method can be combined with other techniques and compare the results to the baselines.
\subsubsection{Fine-grained feature-based distillation}
\begin{figure*}[htb!]
\includegraphics[scale=0.44] {figure/masked_feature_result2.png}
\centering
\caption{Accuracy ($\%$) from students (WRN16-1) for AMD trained with a teacher (WRN16-3) with/without masked features. ``g'', ``l'', and ``m'' denote global, local, and masked feature, respectively.}
\label{figure:masked}
\end{figure*}
If the features of the teacher and student are compatible, the student achieves only `minor gains' \cite{wang2019distilling}. To perform better distillation and to overcome this problem, a technique for generating a fine-grained feature has been used \cite{wang2019distilling, wang2020fully}. For distillation with AMD, a binary mask is adopted when the fine-grained (masked) negative feature is created: if the probability of a point in the negative map is higher than 0.5, the point is multiplied by 1, otherwise by 0.
Then, compared to non-masking, this boosts the difference between teacher and student, so that the loss function for training can focus more on this difference. The results for AMD with and without masked feature-based distillation are presented in Figure \ref{figure:masked}. The parameter $\gamma$ for training a student with AMD without masked features is 5000 for all setups across datasets. When masked features are used for AMD, to generate the best results, $\gamma$ = 3000 is applied to setup (b) on CIFAR-10, setup ($c^a$) on CINIC-10, and all setups on Tiny-ImageNet. For CIFAR-10, AMD (global+local) without masked features has the best result in most cases, while AMD (global+local) with masked features is best for setup (d). For CINIC-10, AMD with masked features is best for setup (d). For Tiny-ImageNet, AMD with masked features performs the best in most cases. Therefore, when the complexity of a dataset is high, fine-grained features can more effectively improve the performance, and the smaller value of $\gamma$, 3000, generates better accuracy. Also, AMD (global+local) with masked features produces better performance than AMD (global) with masked features. For setup (d) -- different architectures for teacher and student -- AMD (global+local) outperforms AMD (global) both with and without masked features. This could be due to the fact that the teacher's features differ from the student's because the two networks have different architectures, resulting in different distributions. Hence, masked features with both global and local distillation influence setup (d) more than the other setups. The difference between AMD (global) and AMD (global+local) with masked features is also more pronounced for harder classification problems. If the student's and teacher's architectural styles are similar, the student is more likely to achieve plausible results \cite{wang2021knowledge}.
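A minimal sketch of the binary masking step described above is given below; the tensor name and the placement of the 0.5 threshold follow only our reading of the text, not the released implementation.
\begin{verbatim}
# Minimal sketch of the fine-grained (masked) negative feature described
# above; names and threshold placement are assumptions.
import torch

def masked_negative_feature(neg_map, threshold=0.5):
    """neg_map: negative attention/probability map with values in [0, 1]."""
    mask = (neg_map > threshold).float()  # 1 where probability > 0.5, else 0
    return neg_map * mask                 # masked (fine-grained) feature
\end{verbatim}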
\subsubsection{Applying augmentation methods}
In this section, we investigate the compatibility with different types of augmentation methods.
\textbf{Mixup.} Mixup \cite{zhang2017mixup} is one of the most commonly used augmentation methods. We demonstrate here that AMD complements Mixup. The Mixup parameter is set to $\alpha_\text{Mixup}=0.2$. A teacher is trained from scratch with the original training set, and a student is trained with Mixup using the teacher as a pre-trained model.
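A minimal sketch of the Mixup batch construction with $\alpha_\text{Mixup}=0.2$ is shown below; only the mixing itself follows \cite{zhang2017mixup}, while the function and variable names, and the way the loss is combined, are illustrative.
\begin{verbatim}
# Minimal Mixup sketch (alpha = 0.2 as in the text); names are illustrative.
import numpy as np
import torch

def mixup_batch(x, y, alpha=0.2):
    """Return mixed inputs and the pair of targets with mixing weight lam."""
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(x.size(0), device=x.device)
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    return x_mixed, y, y[perm], lam

# usage with any classification criterion:
#   out = student(x_mixed)
#   loss = lam * criterion(out, y_a) + (1 - lam) * criterion(out, y_b)
\end{verbatim}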
\begin{figure}[htb!]
\includegraphics[width = 0.34\textwidth]
{figure/mixup_result31.png}
\centering
\caption{Accuracy ($\%$) of students (WRN16-1) for knowledge distillation methods, trained with Mixup and a teacher (WRN16-3) on CIFAR-10. ``T'' and ``S'' denote teacher and student, respectively. ``g'' and ``l'' denote using global and local feature distillation, respectively. ``Student'' is a result of WRN16-1 trained from scratch.}
\label{figure:mixup_results}
\end{figure}
\begin{table}[htb!]
\centering
\caption{ECE ($\%$) and NLL ($\%$) for various knowledge distillation methods with Mixup on CIFAR-10. ``$\mathrm{g}$'' and ``$\mathrm{l}$'' denote using global and local feature distillation, respectively. The results (ECE, NLL) for WRN16-3 and WRN28-1 teachers are (1.469$\%$, 44.42$\%$) and (2.108$\%$, 64.38$\%$), respectively. }
\begin{center}
\scalebox{0.72}{
\begin{tabular}{c |c |c c | c c}
\hline
\centering
\multirow{2}{*}{Setup} & \multirow{2}{*}{Method} & \multicolumn{2}{c|}{w/o Mixup }& \multicolumn{2}{c}{w/ Mixup}\\
& & ECE & NLL & ECE & NLL \\
\hline
& Student & 2.273 & 70.49 & 7.374 (\textcolor{red}{+5.101}) & 90.58 (\textcolor{red}{+20.09}) \\
\hline
\multirow{5}{*}{(a)} & KD \cite{hinton2015distilling} & 2.065 & 63.34 & 1.818 (\textcolor{blue}{-0.247}) & 55.62 (\textcolor{blue}{-7.71}) \\
&AT \cite{zagoruyko2016paying}& 1.978 & 60.48 & 1.652 (\textcolor{blue}{-0.326}) & 50.84 (\textcolor{blue}{-9.64}) \\
&AFD \cite{ji2021show}& \textbf{1.890} & \textbf{56.71} & 1.651 (\textcolor{blue}{-0.240}) & 50.22 (\textcolor{blue}{-6.49}) \\
&AMD (g) & 1.933 & 59.67 & 1.645 (\textcolor{blue}{-0.288}) & 50.33 (\textcolor{blue}{-9.34}) \\
&AMD (g+l) & 1.895 & 57.60 & \textbf{1.592} (\textcolor{blue}{-0.304}) & \textbf{49.68} (\textcolor{blue}{-7.92}) \\
\hline
\multirow{6}{*}{(b)} & KD \cite{hinton2015distilling}& 2.201 & 68.75 & 1.953 (\textcolor{blue}{-0.249}) & 58.81 (\textcolor{blue}{-9.93}) \\
&AT \cite{zagoruyko2016paying}& 2.156 & 67.14 & 1.895 (\textcolor{blue}{-0.261}) & 56.51 (\textcolor{blue}{-10.62}) \\
&AFDS \cite{wang2019pay}& 2.197 & 68.53 & 1.978 (\textcolor{blue}{-0.219}) & 58.86 (\textcolor{blue}{-9.68}) \\
&AFD \cite{ji2021show}& 2.143 & 66.05 & 1.900 (\textcolor{blue}{-0.243}) & 57.68 (\textcolor{blue}{-8.37}) \\
&AMD (g) & \textbf{2.117} & \textbf{66.47} & 1.869 (\textcolor{blue}{-0.248}) & 56.05 (\textcolor{blue}{-10.42}) \\
&AMD (g+l) & 2.123 & 67.51 & \textbf{1.853} (\textcolor{blue}{-0.270}) & \textbf{55.15} (\textcolor{blue}{-12.36}) \\
\hline
\end{tabular} }
\end{center}
\label{table:mixup_result}
\end{table}
\begin{figure}[htb!]
\includegraphics[scale=0.29] {figure/mixup_reliability.png}
\centering
\caption{Reliability diagrams of students (WRN16-1) for knowledge distillation methods, trained with Mixup and a teacher (WRN16-3) on CIFAR-10. For the results of each method, the left is the result without Mixup, and the right is with Mixup.}
\label{figure:mixup_reliability}
\end{figure}
As described in Figure \ref{figure:mixup_results}, with Mixup, most of the methods generate better results. However, KD shows slight degradation when a WRN16-3 teacher is used. This degradation might be related to the labels artificially blended by Mixup. Conventional KD achieves success by transferring concise logit knowledge; with Mixup, however, the knowledge from the teacher is affected by the mixed labels and the logits are no longer concise, which can hurt distillation quality \cite{das2020empirical}. So, the knowledge for separating different classes can be better encoded by traditional KD (without Mixup) \cite{das2020empirical}. Even though KD degrades with Mixup, all other baselines and the proposed methods, which transfer features from intermediate layers, show improvement. Thus, feature based distillation methods help to reduce the negative effects of noisy logits.
When a WRN28-1 teacher is used, the performance of the student from AFD is degraded. AFD utilizes the similarity of features for all possible pairs of the teacher and student, and for this combination Mixup produces noisy features, which can cause mismatched pairs for distillation and degrade performance. Compared to the baselines, AMD obtains more gains from Mixup. To study the generalizability and regularization effects of Mixup, we measured the expected calibration error (ECE) \cite{guo2017calibration, naeini2015obtaining} and the negative log likelihood (NLL) \cite{guo2017calibration} for each method. ECE is a metric measuring calibration, representing the reliability of the model \cite{guo2017calibration}, and NLL measures the quality of a probabilistic model \cite{guo2017calibration}. The results of training from scratch with Mixup show a higher ECE and NLL than the results of training without Mixup, as seen in Table \ref{table:mixup_result}. However, the methods including knowledge distillation generate lower ECE and NLL. This implies that knowledge distillation from teacher to student helps generate a better model not only in terms of accuracy but also in terms of reliability. In both (a) and (b), with Mixup, AMD (global+local) shows robust calibration performance. Therefore, we confirm that an augmentation method such as Mixup benefits from AMD in generating better calibrated performance. As can be seen in Figure \ref{figure:mixup_reliability}, WRN16-1 trained from scratch with Mixup produces underconfident predictions \cite{zhang2017mixup}, compared to KD \cite{hinton2015distilling} with Mixup. AMD (global+local) with Mixup achieves the best calibration performance. These results support the advantage of AMD, namely that it can be easily combined with common augmentation methods to improve classification performance with good calibration.
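For reference, a minimal sketch of the ECE computation is given below; the number of bins (15) and the equal-width binning are common conventions \cite{guo2017calibration} rather than settings stated in this paper.
\begin{verbatim}
# Minimal expected calibration error (ECE) sketch with equal-width bins.
# The bin count (15) is a common convention, not specified in this paper.
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """probs: (N, C) softmax outputs; labels: (N,) integer class labels."""
    conf = probs.max(axis=1)                  # predicted confidence
    pred = probs.argmax(axis=1)               # predicted class
    acc = (pred == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(acc[in_bin].mean() - conf[in_bin].mean())
    return ece
\end{verbatim}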
\textbf{CutMix.} CutMix \cite{yun2019cutmix} is one of the most popular augmentation methods and a more advanced variant of Mixup. We evaluate AMD with CutMix, setting its parameters by referring to the previous study \cite{yun2019cutmix}.
As illustrated in Figure \ref{figure:cutmix_results}, all methods are improved by CutMix. Compared to the other baselines, AFD gains less improvement. Both AMD (global) and AMD (global+local) perform better with CutMix, and these results again show that the proposed method can be easily combined with advanced augmentation methods.
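A minimal sketch of the CutMix batch construction is shown below; the box sampling follows the original formulation \cite{yun2019cutmix}, while parameter and variable names are illustrative.
\begin{verbatim}
# Minimal CutMix sketch: paste a random box from a permuted batch and
# re-weight the labels by the pasted area. Names are illustrative.
import numpy as np
import torch

def cutmix_batch(x, y, alpha=1.0):
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(x.size(0), device=x.device)
    H, W = x.shape[-2:]
    cut = np.sqrt(1.0 - lam)                       # box side ratio
    ch, cw = int(H * cut), int(W * cut)
    cy, cx = np.random.randint(H), np.random.randint(W)
    y1, y2 = np.clip(cy - ch // 2, 0, H), np.clip(cy + ch // 2, 0, H)
    x1, x2 = np.clip(cx - cw // 2, 0, W), np.clip(cx + cw // 2, 0, W)
    x[:, :, y1:y2, x1:x2] = x[perm, :, y1:y2, x1:x2]
    lam = 1.0 - ((y2 - y1) * (x2 - x1) / (H * W))  # exact area fraction kept
    return x, y, y[perm], lam
\end{verbatim}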
\begin{figure}[htb!]
\includegraphics[width = 0.4\textwidth]
{figure/KD_CIFAR10_281cutmix.png}
\centering
\caption{Accuracy ($\%$) of students (WRN16-1) for knowledge distillation methods, trained with CutMix and a teacher (WRN28-1) on CIFAR-10. ``g'' and ``l'' denote using global and local feature distillation, respectively. ``Student'' is a result of WRN16-1 trained from scratch.}
\label{figure:cutmix_results}
\end{figure}
\textbf{MoEx.} To test with a latent space augmentation method, we adopt MoEx \cite{li2021feature}, one of the state-of-the-art augmentation techniques, to train with AMD. We applied the same parameters as in the prior study \cite{li2021feature}. We apply MoEx to the layer before stage 3 in the student network (WRN16-1), which achieves the best result with KD.
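A minimal sketch of the moment exchange operation is given below; it uses PONO statistics (mean and standard deviation over the channel dimension) as in \cite{li2021feature}, but the interface and the omission of the interpolated label weighting are simplifications of ours.
\begin{verbatim}
# Minimal MoEx sketch: exchange positional (per-pixel, over-channel) moments
# between instances in a batch. Interface and simplifications are ours.
import torch

def moex(features, eps=1e-5):
    """features: (B, C, H, W) activations of the chosen intermediate layer."""
    perm = torch.randperm(features.size(0), device=features.device)
    mean = features.mean(dim=1, keepdim=True)
    std = features.var(dim=1, keepdim=True).add(eps).sqrt()
    normalized = (features - mean) / std
    # re-inject the moments of the permuted samples
    return normalized * std[perm] + mean[perm], perm
\end{verbatim}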
\begin{figure}[htb!]
\includegraphics[width = 0.4\textwidth]
{figure/KD_CIFAR10_281moex.png}
\centering
\caption{Accuracy ($\%$) of students (WRN16-1) for knowledge distillation methods, trained with MoEx and a teacher (WRN28-1) on CIFAR-10. ``g'' and ``l'' denote using global and local feature distillation, respectively. ``Student'' is a result of WRN16-1 trained from scratch.}
\label{figure:moex_results}
\end{figure}
As shown in Figure \ref{figure:moex_results}, most KD based methods perform better with MoEx than without it, whereas AFD shows degradation. Since AFD transfers knowledge by considering all pairs of features from the teacher and student, MoEx hinders the pair matching and the transfer of high-quality knowledge. Both AMD (global) and AMD (global+local) outperform the baselines. These results verify that latent space augmentation based methods can be combined with the proposed method. Therefore, the proposed method can be used with various augmentation methods to improve the performance.
\begin{figure}[htb!]
\includegraphics[width = 0.38\textwidth]
{figure/CIFAR10_281_moex.png}
\centering
\caption{Accuracy ($\%$) of students (WRN16-1) for knowledge distillation methods, trained with MoEx and a teacher (WRN28-1) on CIFAR-10. We denote the layer index to apply MoEx as (1=before stage 1, 2=before stage 2, 3=before stage 3, 4=after stage 3). ``g'' and ``l'' denote using global and local feature distillation, respectively.}
\label{figure:moex_layer_results}
\end{figure}
Additionally, we explore the effect of applying MoEx at different layers. As described in Figure \ref{figure:moex_layer_results}, when MoEx is applied to the layer before stage 3 of the student model, AMD shows the best performance. KD also performs best when MoEx is applied to the layer before stage 3. This differs from the result of learning from scratch, which is best when MoEx is applied to the layer before stage 1 \cite{li2021feature}. Thus, when latent space augmentation is combined with KD based methods, including the baselines and the proposed method, the layer at which the augmentation is applied has to be chosen carefully. These results imply that the layer before stage 3 plays a key role for knowledge distillation.
\subsubsection{Combination with other distillation methods}
\begin{figure}[htb!]
\includegraphics[width = 0.33\textwidth] {figure/sp_result31.png}
\centering
\caption{Accuracy ($\%$) of students (WRN16-1) for knowledge distillation methods, trained with SP and a teacher (WRN16-3) on CIFAR-10. ``T'' and ``S'' denote teacher and student, respectively. ``g'' and ``l'' denote using global and local feature distillation, respectively. ``Student'' is a result of WRN16-1 trained from scratch.}
\label{figure:SP_results}
\end{figure}
\begin{table}[htb!]
\centering
\caption{ECE ($\%$) and NLL ($\%$) for various knowledge distillation methods with SP on CIFAR-10. ``$\mathrm{g}$'' and ``$\mathrm{l}$'' denote using global and local feature distillation, respectively. The results (ECE, NLL) for WRN16-3 and WRN28-1 teachers are (1.469$\%$, 44.42$\%$) and (2.108$\%$, 64.38$\%$), respectively.}
\begin{center}
\scalebox{0.73}{
\begin{tabular}{c |c |c c | c c}
\hline
\centering
\multirow{2}{*}{Setup} & \multirow{2}{*}{Method} & \multicolumn{2}{c|}{w/o SP}& \multicolumn{2}{c}{w/ SP}\\
& & ECE & NLL & ECE & NLL \\
\hline
\multirow{4}{*}{(a)} &AT \cite{zagoruyko2016paying}& 1.978 & 60.48 & 1.861 (\textcolor{blue}{-0.118}) & 56.22 (\textcolor{blue}{-4.26}) \\
&AFD \cite{ji2021show}& \textbf{1.890} & \textbf{56.71} & 1.881 (\textcolor{blue}{-0.010}) & 56.73 (\textcolor{blue}{-0.02}) \\
&AMD (g) & 1.933 & 59.67 & 1.808 (\textcolor{blue}{-0.125}) & 54.74 (\textcolor{blue}{-4.93}) \\
&AMD (g+l) & 1.895 & 57.60 & \textbf{1.803} (\textcolor{blue}{-0.092}) & \textbf{53.80} (\textcolor{blue}{-3.80}) \\
\hline
\multirow{5}{*}{(b)} &AT \cite{zagoruyko2016paying}& 2.156 & 67.14 & 2.095 (\textcolor{blue}{-0.060}) & 65.38 (\textcolor{blue}{-1.75}) \\
&AFDS \cite{wang2019pay}& 2.197 & 68.53 & 2.128 (\textcolor{blue}{-0.069}) & 66.61 (\textcolor{blue}{-1.92}) \\
&AFD \cite{ji2021show}& 2.143 & 66.05 & 2.118 (\textcolor{blue}{-0.024}) & 65.39 (\textcolor{blue}{-0.66}) \\
&AMD (g) & \textbf{2.117} & \textbf{66.47} & 2.058 (\textcolor{blue}{-0.059}) & 63.37 (\textcolor{blue}{-3.10}) \\
&AMD (g+l) & 2.123 & 67.51 & \textbf{2.043} (\textcolor{blue}{-0.080}) & \textbf{63.23} (\textcolor{blue}{-4.28}) \\
\hline
\end{tabular} }
\end{center}
\label{table:addSP_result}
\end{table}
\begin{figure}[ht!]
\includegraphics[scale=0.31] {figure/sp_reliability.png}
\centering
\caption{Reliability diagrams of students (WRN16-1) for knowledge distillation methods, trained with SP and a teacher (WRN16-3) on CIFAR-10. For the results of each method, the left is the result without SP, and the right is with SP.}
\label{figure:sp_reliability}
\end{figure}
To demonstrate how AMD can perform with other distillation methods, we adopt SP \cite{tung2019similarity}, which is not an attention based distillation method. A teacher is trained from scratch with the original training set, and SP \cite{tung2019similarity} is applied while the student is being trained. We compare with the baselines, as depicted in Figure \ref{figure:SP_results}. In all cases, the accuracy is increased with SP. Compared to the other attention based methods, AMD obtains more gains from SP. Therefore, AMD can be enhanced by and performs well with other distillation methods such as SP. We additionally analyzed the reliability, described in Table \ref{table:addSP_result}. AMD (global+local) with SP shows the lowest ECE and NLL values, which verifies that AMD with SP can generate a model with higher reliability and better accuracy. Thus, the proposed method can be used with an additional distillation method, and the proposed method with SP performs well for different combinations of teacher and student with well-calibrated results. As illustrated in Figure \ref{figure:sp_reliability}, with SP \cite{tung2019similarity}, AT \cite{zagoruyko2016paying} and AFD \cite{ji2021show} produce more overconfident predictions, compared to AMD (global+local) with SP \cite{tung2019similarity}, which gives the best calibration performance. In conclusion, our empirical findings reveal that AMD can work with other distillation methods such as SP \cite{tung2019similarity} to generate more informative features for distillation from teacher to student.
\section{Conclusion} \label{sec:conclusion}
In this paper, we proposed a new type of distillation loss function, AMD loss, which uses the angular distribution of features. We validated the effectiveness of distillation with this loss, under the setting of multiple teacher-student architecture combinations of KD in image classification. Furthermore, we have confirmed that the proposed method can be combined with previous methods such as fine-grained feature, various augmentation methods, and other types of distillation methods.
In future work, we aim to extend the proposed method to explore the distillation effects with different hypersphere feature embedding methods \cite{wang2018cosface, deng2019arcface}. Also, we plan to extend AMD to different approaches in image classification, such as vision transformer \cite{dosovitskiy2020image} and
MLP-Mixer \cite{tolstikhin2021mlp}, which are not based on convolutional neural networks. In addition, our approach could provide insights for further advancements in other applications such as object detection and semantic segmentation.
\section*{Acknowledgements}
This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Agreement No. HR00112290073. Approved for public release; distribution is unlimited.
{
\bibliographystyle{elsarticle-num}
|
{
"arxiv_id": "2302.14175",
"language": "en",
"timestamp": "2023-03-01T02:03:43",
"url": "https://arxiv.org/abs/2302.14175",
"yymm": "2302"
} | \section{Introduction}
Dynamic mechanical metamaterials (MMs) feature sub-wavelength micro-structures that interact with stress waves, exhibiting exotic functionalities.
Numerous exciting potentials have been proposed for MMs, including wave attenuation\cite{Xiao2020,Baertsch2021}, negative refraction\cite{Ding2010,Seo2012,Li2016,Nemat-Nasser2015a}, cloaking\cite{Chen2007,Norris2011,Zhu2014,Cummer2016}, and insulators\cite{Oh2017,Matlack2018}.
The systematic design of metamaterials requires a comprehensive understanding of the dynamic behavior of the MM itself as well as of the manufacturing design constraints. The first natural step is to find the eigenfrequency band structure and mode shapes of the design. The former encodes major characteristic information of a MM unit cell such as resonance frequencies, wave velocities, and band gaps, while the latter is needed to determine scattering in finite structures and to fully evaluate interactions between modes.
Common methodologies of band structure calculation include plane wave expansion (PWE)\cite{Wu2004g,Sridhar2017,Lu2017a,Oudich2014}, transfer matrix method (TMM)\cite{Mead1996,Junyi2015,Amirkhizi2018c}, and finite element method (FEM)\cite{Huang2021}.
In almost all cases, the design and analysis of these dynamic systems are plagued by the geometric complexity and computational burden. The major challenges include 1) unclear design-performance relationship and 2) expensive computational cost for calculating the dynamic response.
For design optimization purposes, a reduced order modeling (ROM) method is clearly more suitable as it allows for simple and fast computation limited to frequency range or modalities of interest.
In this paper we present a ROM approach for fast computation of MM problems, which can be used for different study setups, including eigenfrequency band calculation and computation of time dependent dynamic responses of finite structures.
The metamaterials that are considered here are assumed to be comprised of 2D micro-structural designs with beam-like elements. The materials are assumed to have no loss or gain mechanisms, but the inclusion of linear viscoelastic response is considered as a natural future expansion.
Detailed numerical and experimental studies on some similar micro-structures can be found in previous work\cite{Aghighi2019,Amirkhizi2018d}.
Extensive reduced order modeling techniques have been developed for vibration problems, e.g. dynamic condensation\cite{Kidder1973}, improved reduced system (IRS)\cite{Gordis1994}, and system equivalent reduction expansion process (SEREP)\cite{Avitable1989}. These reduction methods in general employ certain transformation matrices that map the full set of degrees of freedom (DOFs) to a reduced set of DOFs.
For wave propagation problems, especially metamaterial problems, the existing model order reduction methods are limited and less applicable because the system matrices and the eigenfunctions are dependent on the wavevector.
The wavevector-dependence leads to the frequency and mode variation in the band structures and is a key element in metamaterial dynamics. Therefore, novel reduction schemes that can preserve the wavevector dependence and band accuracy are needed for metamaterial problems. To this end, Hussein\cite{Hussein2009a} introduced reduced Bloch mode expansion (RBME) for fast computation of band structures. The RBME method employs selected Bloch eigenfunctions to reduce the dimensionality.
A similar method, Bloch mode synthesis (BMS) \cite{Krattiger2014,Krattiger2018a}, is an extended sub-structuring technique that describes the structural DOFs by normal and constraint modes. Both the RBME and BMS methods utilize selected eigen-modes to construct transformation matrices that reduce the size of the full matrices. These transformation-based methods effectively reduce the number of equations, but the resulting matrices no longer represent the physical quantities (stiffness and inertia) and are therefore less suitable for geometric or material design problems. Additionally, these methods cannot be applied to time/frequency domain computations of finite arrays. Nevertheless, they have been shown to be useful for topology optimization\cite{Jung2020} in terms of reducing the computational cost.
An alternative scheme is to develop discrete models comprised of masses and springs.
The discretized mass-spring representation has been widely accepted in literature as it offers analytical formulations that simplify the computational effort while retaining essential physics. It has proven to be beneficial for various design aspects such as feasibility analysis \cite{Dertimanis2016}, reliability assessment \cite{Wagner2018}, and design space mapping \cite{Morris2022}. For higher order systems operating at high frequency ranges, an excellent example of modeling discrete weakly coupled MMs has been introduced by Matlack~\textit{et~al}. \cite{Matlack2018}. The model reduction is performed using the Schrieffer-Wolff transformation, so that the modes in the frequency range of interest are decoupled.
However, this method could only be applied to narrow-band dynamics.
While the mass-spring representation can significantly reduce the computational effort, certain vibration modes may exhibit mixed coupling between DOFs. To accurately capture such dynamics, the elastic spring elements cannot be simple 2-DOF elements.
In our previous work\cite{Amirkhizi2018d}, the elastic spring elements are physically represented as beams. The reduced stiffness and mass matrices can then be derived using simple strength of materials analysis. Such an approach provides analytical matrices that operate on physical DOFs and is naturally suitable for tuning the response via control of physical dimensions and material choices \cite{Morris2022}. It also makes interpreting the modal physics straightforward; for example, such a beam-based discrete model allows for accurate identification of the level repulsion\cite{Lu2018,Wang2021exceptional} and of the coupling between the DOFs. However, the selection of DOFs has been mostly a heuristic step that may affect the results.
In addition, approximating the structural components as standard beam elements does not generally match the actual response of the beam-like elements as accurately as needed.
The present work introduces a systematic implementation of the structural-element-based ROM approach that overcomes these limitations.
A generalized ROM procedure, parameterized in terms of effective structural stiffness parameters and discrete DOF inertia, is developed and is applicable to a large family of 3D printable MM designs.
The conceptual idea of the proposed ROM method takes advantage of the fact that the 3D printable MMs that operate as low frequency (long wavelength) locally resonant systems are often comprised of slender plate- or beam-like elements.
In addition, in most cases only the low frequency dynamics of the MM are of particular interest for practical applications, which reside in the subspace spanned by the lowest few eigen-modes, representable using a few carefully selected DOFs.
Modeling the system with these ``master'' DOFs can hence reduce the computational cost while maintaining high fidelity to the underlying physics.
A structural assembly system with symbolic matrices is used to represent the repeating unit cell (RUC). By optimizing the energy fitness compared with numerical results, one can find the effective stiffness and inertia parameters of this structural assembly. Such a ROM unit cell can accurately predict the eigenfrequency band structures with minimal computational effort.
This approach improves upon existing model order reduction methods in handling problems in metamaterials by maintaining eigen-solution accuracy within the Brillouin zone and providing parameterized matrices. It incorporates the propagating nature of waves, rather than just the modal response of finite structures. In MM systems, small variations in geometry can drastically change the overall response. The use of an analytical model that characterizes the MM with a small number of parameters is therefore advantageous for understanding the influence of each component and for fine-tuning the design. The resulting ROMs can also be extended to model finite-sized arrays, and the reduction in DOFs can accelerate the computation process significantly, especially in time dependent problems.
This paper is organized as follows. The general procedure of ROM development is first introduced in \cref{sec:procedure}. Then, two examples are given in \cref{sec:examples}, showing the accurately reproduced band structures. Finally, further uses of the proposed ROM approach are discussed in \cref{sec:apps}.
The conclusions and future outlook of this work are discussed in \cref{sec:conclusion}.
With accurate construction of cellular discrete models, the proposed approach will lead to efficient modeling and design discovery of mechanical metamaterials.
\section{Formulation and Procedure}\label{sec:procedure}
\subsection{Governing equations}
The first natural step to study a MM design is to obtain the eigenfrequency band structure.
Based on Bloch-Floquet theorem for wave propagation problems, the spatial domain is reduced to a single repeating unit cell (RUC). All fields, including the displacement field, must satisfy the Floquet periodicity:
\begin{equation}\label{eq:per}
\vec{u}(\vec{x} + (s_1\vec{a}_1 + s_2 \vec{a}_2), t) =\vec{u}_0 (\vec{x}) \exp[\mathrm{i} (\omega t-\vec{k} \cdot (s_1\vec{a}_1 + s_2 \vec{a}_2))],
\end{equation}
where~$\vec{u}$ is the complex displacement vector field, the real part of which is the physical displacement vector field, $\mathrm{i}=\sqrt{-1}$,~$\omega=2\pi f$ is the angular frequency, $\vec{k}=[k_x,k_y]$ is the wavevector in~$x-y$ plane, $s_{1,2}$ are integers indicating different cells, and~$\vec{a}_{1,2}$ are the primitive translation vectors for a 2D unit cell, respectively. The eigenfrequency problem can be written as:
\begin{equation}\label{eq:eigf}
\left[\vec{K}(\vec{k})-\omega^2\vec{M}\right]\vec{u}_0=\vec{0},
\end{equation}
where~$\vec{K}$ and $\vec{M}$ are the stiffness and mass matrices,~$\omega^2$ is the eigenvalue, and~$\vec{u}_0$ is the eigenfunction (mode shape). The detailed development of~\cref{eq:eigf} can be found in literature\cite{Hussein2009a}. Here the matrix~$\vec{K}$ is dependent on wavevector~$\vec{k}$ since the Floquet condition is applied to the RUC.
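As an illustration of how \cref{eq:eigf} is evaluated in practice, a minimal Python sketch is given below; the stiffness-assembly routine, the wavevector path, and the mode count are placeholders, and the sketch assumes a Hermitian $\vec{K}(\vec{k})$ and a positive definite $\vec{M}$ as in the lossless case considered here.
\begin{verbatim}
# Minimal band-structure sketch for the eigenfrequency problem: for each
# wavevector k, solve the generalized Hermitian problem K(k) u = w^2 M u.
# assemble_K, the wavevector path, and the mode count are placeholders.
import numpy as np
from scipy.linalg import eigh

def band_structure(assemble_K, M, k_path, n_modes=6):
    """assemble_K(k) -> Hermitian stiffness matrix for wavevector k."""
    bands = []
    for k in k_path:
        K = assemble_K(k)
        # eigenvalues are w^2; M must be Hermitian positive definite
        w2 = eigh(K, M, eigvals_only=True, subset_by_index=[0, n_modes - 1])
        bands.append(np.sqrt(np.abs(w2)) / (2.0 * np.pi))  # frequencies f
    return np.array(bands)  # shape: (number of k points, n_modes)
\end{verbatim}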
The Floquet condition~\cref{eq:per} and the eigenfrequency problem~\cref{eq:eigf} are the general setups used in finite element methods to find band structure and mode shapes, and have nearly identical counterparts in the ROM as well. A collection of eigen-modes can be organized in matrix $\vec{\Phi}$. Consider the~$m$-th eigen-mode solution, with frequency~$\omega_m$ and mode shape~$\vec{\Phi}_{m}$ (the $\vec{u}_0$ solution for the $m$-th mode), the time-averaged kinetic and strain energies for this mode are:
\begin{equation}\label{eq:Tm}
\begin{split}
{\overline{T}}_m &=\frac{\omega_m}{2\pi}\int_0^{\frac{2\pi}{\omega_m}}\frac{1}{2}\Re\left[\mathrm{i} \omega_m e^{\mathrm{i} \omega_m t}\vec{\Phi}_{m}^\top\right] \Re\left[\mathrm{i} \omega_m e^{\mathrm{i} \omega_m t}\vec{M} \vec{\Phi}_{m}\right] \mathrm{d} t\\
&=\frac{1}{4}\Re[\omega_m \vec{\Phi}_{m}^\top \left(\omega_m\vec{M} \vec{\Phi}_{m}\right)^*]\\
&=\frac{1}{4}\omega_m^2 \vec{\Phi}_{m}^\dagger\vec{M} \vec{\Phi}_{m},
\end{split}
\end{equation}
\begin{equation}\label{eq:Vm}
\begin{split}
{\overline{V}}_{m}&=\frac{\omega_m}{2\pi}\int_0^{\frac{2\pi}{\omega_m}}\frac{1}{2}\Re\left[ e^{\mathrm{i} \omega_m t}\vec{\Phi}_{m}^\top\right] \Re\left[ e^{\mathrm{i} \omega_m t}\vec{K} \vec{\Phi}_{m}\right] \mathrm{d} t\\
&=\frac{1}{4}\Re[\vec{\Phi}_{m}^\top \left(\vec{K} \vec{\Phi}_{m}\right)^*]\\
&=\frac{1}{4} \vec{\Phi}_{m}^\dagger\vec{K} \vec{\Phi}_{m},
\end{split}
\end{equation}
where~$*$ denotes the complex conjugate and~$\dagger$ the conjugate transpose. Here \cref{eq:Tm} and \cref{eq:Vm} are valid for lossless systems, for which~$\vec{K}$ and~$\vec{M}$ are Hermitian matrices.
Based on the modal orthogonality, we further have
\begin{equation}\label{eq:mt}
\overline{\vec{T}}=\mathrm{diag}[\overline{T}_1,\dots,\overline{T}_{n_m} ]=\frac{1}{4}\vec{\Phi}^\dagger\vec{M}\vec{\Phi}\vec{\omega}^2,
\end{equation}
\begin{equation}\label{eq:mv}
\overline{\vec{V}}=\mathrm{diag}[\overline{V}_1,\dots,\overline{V}_{n_m} ]=\frac{1}{4}\vec{\Phi}^\dagger\vec{K}\vec{\Phi}
\end{equation}
where~$\vec{\omega}=\mathrm{diag}[\omega_1,\dots,\omega_{n_m}]$ for the lowest~$n_m$ modes of interest.
The modal energy matrices~$\overline{\vec{T}}$ and~$\overline{\vec{V}}$ are global quantities that can be evaluated in finite element solvers.
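A quick numerical check of \crefrange{eq:mt}{eq:mv} is sketched below (a random Hermitian pair standing in for the unit-cell matrices): for exact eigenvectors, both modal energy matrices come out diagonal and equal.

\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 6
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
K = A @ A.conj().T + n * np.eye(n)           # Hermitian positive-definite "stiffness"
M = np.diag(rng.uniform(1.0, 2.0, size=n))   # lumped "mass"

lam, Phi = eigh(K, M)                        # K Phi = M Phi diag(lam)
T_bar = 0.25 * Phi.conj().T @ M @ Phi @ np.diag(lam)
V_bar = 0.25 * Phi.conj().T @ K @ Phi

print(np.allclose(T_bar, np.diag(np.diag(T_bar))),   # diagonal
      np.allclose(V_bar, T_bar))                     # equal
\end{verbatim}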
Notice, again, that the energy derivations \crefrange{eq:Tm}{eq:mv} are valid only for fully linear elastic systems, and the proposed method is developed under this assumption. For systems with loss elements (viscoelastic components), certain modifications are needed to construct the ROM; see \cref{appLoss}.
\subsection{Model order reduction}
The governing equations introduced in the previous subsection are generally valid for both continuum systems and their discrete (reduced order) counterparts.
The essential idea of the proposed ROM method, is to find the small-sized stiffness $\vec{K}^\mathrm{r}$ and mass matrices $\vec{M}^\mathrm{r}$ in such a way that the resulting global quantities~$\overline{\vec{T}}^\mathrm{r}$, $\overline{\vec{V}}^\mathrm{r}$, and~$\vec{\omega}^\mathrm{r}$ of the reduced system are preserved and directly associated with the continuum results, identified as~$\overline{\vec{T}}^\mathrm{c}$, $\overline{\vec{V}}^\mathrm{c}$, and~$\vec{\omega}^\mathrm{c}$. To achieve this, the matrix size reduction is performed by first down-sampling the continuum mode shapes~$\vec{\Phi}^\mathrm{c}$ at a set of $n_p$ primary nodal positions to obtain the sampled mode shapes $\vec{\Phi}^\mathrm{p}$, from which the ROM mode shapes~$\vec{\Phi}^\mathrm{r}$ are extracted.
Then, the effective ROM matrices $\vec{K}^\mathrm{r}$ and $\vec{M}^\mathrm{r}$ can be found in order to satisfy
\begin{equation}\label{eq:Tmatch}
\overline{\vec{T}}^\mathrm{r}=\frac{1}{4}\vec{\Phi}^{\mathrm{r}\dagger} \vec{M}^\mathrm{r} \vec{\Phi}^\mathrm{r}\vec{\lambda}^\mathrm{c}\approx\frac{1}{4}\vec{\Phi}^{\mathrm{p}\dagger} \vec{M}^\mathrm{p}\vec{\Phi}^\mathrm{p}\vec{\lambda}^\mathrm{c}\approx\overline{\vec{T}}^\mathrm{c}=\overline{\vec{V}}^\mathrm{c}
\end{equation}
\begin{equation}\label{eq:Vmatch}
\overline{\vec{V}}^\mathrm{r}=\frac{1}{4}\vec{\Phi}^{\mathrm{r}\dagger} \vec{K}^\mathrm{r} \vec{\Phi}^\mathrm{r}\approx\overline{\vec{T}}^\mathrm{r}.
\end{equation}
Here the target quantities to be identified are the ROM matrices~$\vec{M}^\mathrm{r},\vec{K}^\mathrm{r}$, and other quantities~$\vec{\lambda}^\mathrm{c}=(\vec{\omega}^\mathrm{c})^2,\vec{\Phi}^\mathrm{r},\overline{\vec{T}}^\mathrm{c},\overline{\vec{V}}^\mathrm{c}$ are obtained from continuum simulations.
Due to the known geometric layout and the domain knowledge (i.e., beam stiffness formulation), the ROM matrices are symbolically parameterized by a set of effective physical parameters that describe the structural and inertia features. In the proposed method, the effective ROM parameters are the beam stiffness parameters~$\vec{\beta}$ and the nodal inertias~$\vec{\mu}$.
Identification of~$\vec{\beta}$ and $\vec{\mu}$ for the beam elements and nodes will complete the construction of~$\vec{K}^\mathrm{r}(\vec{\beta},\vec{k})$ and~$\vec{M}^\mathrm{r}(\vec{\mu})$.
The full procedure of ROM construction is as follows:
\begin{enumerate}
\item Assign a set ($n_p$) of primary nodes in the continuum unit cell, from whose displacement and rotation values the full continuum mode shapes may be approximated;
\item Perform eigenfrequency simulations at a few~($n_k$) selected wavevectors to determine the frequency~$\vec{\omega}^\mathrm{c}$, down-sampled continuum mode shapes~$\vec{\Phi}^\mathrm{p}$, and the modal energies~$\overline{\vec{T}}^\mathrm{c}$,~$\overline{\vec{V}}^\mathrm{c}$ for the lowest~$n_m$ modes. The superscript~$\mathrm{c}$ indicates the global quantities measured from the continuum system, and the superscript~$\mathrm{p}$ denotes the down-sampled mode shapes;
\item Construct the symbolic stiffness $\vec{K}^\mathrm{f}(\vec{\beta})$ and mass $\vec{M}^\mathrm{f}(\vec{\mu})$ matrices based on the unit cell geometry (layout, connectivity, symmetry) and positions of the~$n_p$ primary nodes as well as those of the~$n_d$ dependent ones (associated with Floquet periodicity);
\item Apply Floquet boundary condition to the symbolic matrices so that the equations of motion only involve the DOFs at the~$n_p$ primary nodes, yielding the symbolic matrices $\vec{K}^\mathrm{p} (\vec{\beta},\vec{k})$ and $\vec{M}^\mathrm{p}(\vec{\mu})$;
\item Find the effective mass matrix~$\vec{M}^\mathrm{p}$ (i.e. parameters $\vec{\mu}$) by optimizing the kinetic energy fitness (matching the ROM values with the continuum~$\overline{\vec{T}}^\mathrm{c}$), for the lowest~$n_m$ modes, using the measured~$\vec{\omega}^\mathrm{c}$ and~$\vec{\Phi}^\mathrm{p}$;
\item Identify the slave DOFs (whose contribution to kinetic energy is negligible) and perform static condensation so that the number of studied DOFs is reduced (from $3n_p$ to~$n_r$). This step establishes the numerical-valued matrix~$\vec{M}^\mathrm{r}$ (a sub-matrix of $\vec{M}^\mathrm{p}$), the symbolic matrix~$\vec{K}^\mathrm{r}(\vec{\beta},\vec{k})$, and reduces the measured modes from~$\vec{\Phi}^\mathrm{p}$ to~$\vec{\Phi}^\mathrm{r}$;
\item Find the effective stiffness parameters~$\vec{\beta}$ by optimizing the potential energy fitness (matching the ROM modal matrix~$\overline{\vec{V}}^\mathrm{r}$ with the already determined~$\overline{\vec{T}}^\mathrm{r}$), for the lowest~$n_m$ modes, using~$\vec{\omega}^\mathrm{c}$ and~$\vec{\Phi}^\mathrm{r}$ and the established ROM mass matrix~$\vec{M}^\mathrm{r}$;
\item Use the established matrices~$\vec{K}^\mathrm{r}$ and~$\vec{M}^\mathrm{r}$ to compute the band structure, or adjust them for other types of problems.
\end{enumerate}
By matching the diagonalized modal matrices resulting from the symbolic discrete model and the FEM model, this process aims to construct a discretized lower order system that, at the selected wavevector~$\vec{k}$ points, inherits the continuum eigenvalues~$\vec{\omega}^\mathrm{c}$ and the associated mode shapes~$\vec{\Phi}^\mathrm{r}$ down-sampled from the continuum system.
With the ROM matrices $\vec{K}^\mathrm{r}(\vec{\beta},\vec{k})$ and~$\vec{M}^\mathrm{r}(\vec{\mu})$ symbolically parameterized instead of numerically found through the pseudo-inversion of \crefrange{eq:Tmatch}{eq:Vmatch}, the known structural knowledge is retained in the model. This allows for further analysis and optimization of the structural components in design efforts.
Since the number of unknown parameters in~$\vec{\beta}$ and~$\vec{\mu}$ is limited, one does not need to enforce the matching of \crefrange{eq:Tmatch}{eq:Vmatch} at every wavevector~$\vec{k}$. Instead, the eigen-information from only a few wavevectors is sufficient to determine the unknown ROM parameters, so the number of needed simulations is small.
In addition, the extracted stiffness ($\vec{\beta}$) and inertia~($\vec{\mu}$) values for the discrete system are properly scaled to represent effective physical quantities due to the matched modal energies. Since the effective parameters~$\vec{\beta}$ and~$\vec{\mu}$ are of high physical fidelity and independent of the wavevector~$\vec{k}$, one can easily compute the eigen-results at any arbitrary wavevector with the ROM matrices. Furthermore, the ROM can be easily extended to other types of computations, such as frequency- or time-domain problems, for which finite-sized arrays are modeled and the wavevector~$\vec{k}$ is not an explicit parameter.
In summary, such a model order reduction approach serves as both a parameter retrieval method that characterizes the continuum model as a discrete one, as well as a fast tool that accelerates the computation of eigen- or other dynamic problems.
In the next section of this paper, the detailed ROM construction steps are demonstrated through examples.
\section{Examples}\label{sec:examples}
\subsection{Node assignment and information collection}
To demonstrate the ROM parameterization process, two MM unit cells are selected. The square unit cell (\cref{fig:sh_geom}) features an H-shaped resonator mass, while the hexagonal cell (\cref{fig:h0_geom}) has two split resonators. The two RUCs are modeled in FEM using the same material, with Young's modulus~$E=\SI{300}{GPa}$, Poisson's ratio~$\nu=0.22$, and density~$\rho=\SI{3900}{kg/m^3}$. Both designs have the lattice constant~$a=\SI{10}{mm}$. One should first assign a set of $n_p$ nodes in the continuum model, where the deformation will be sampled and used for mode shape projections. These sampling points should, in principle, be located at the mass centers of structural components, or at the intersections between beam elements.
Further detailed analysis and principles of DOF selection are given in literature \cite{Kevin2015,Qu2004}.
The chosen sampling nodes in the two examples are denoted by the blue dots in \cref{fig:geom}. Notice that there is no node assigned at the top or right edge of the cell frames, due to the known Floquet periodicity~\cref{eq:per} for infinite arrays. The mode shapes~$\vec{\Phi}^\mathrm{p}$ will be represented by the deformation vector~$\vec{u}^\mathrm{p}$ containing the displacements~$u_x,u_y$ in the $x$--$y$ plane and the rotation~$\theta_z$ at the chosen nodes. The rotational component is derived from the curl of the displacement field,~$\theta_z=(\frac{\partial u_y}{\partial x}-\frac{\partial u_x}{\partial y})/2$.
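As an illustration of this sampling step, a minimal numpy sketch of the rotation extraction (with a synthetic rigid-rotation field standing in for the FEM displacement data) is:

\begin{verbatim}
import numpy as np

# Post-processing sketch: theta_z = (du_y/dx - du_x/dy)/2, evaluated on a
# synthetic rigid-rotation field u = theta * (-y, x) in place of an FEM solution.
theta = 1e-3
x = np.linspace(0.0, 0.01, 101)
y = np.linspace(0.0, 0.01, 101)
X, Y = np.meshgrid(x, y, indexing="ij")
ux, uy = -theta * Y, theta * X

duy_dx = np.gradient(uy, x, axis=0)
dux_dy = np.gradient(ux, y, axis=1)
theta_z = 0.5 * (duy_dx - dux_dy)

print(np.allclose(theta_z, theta))           # True: recovers the imposed rotation
\end{verbatim}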
\begin{figure}[!ht]
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=140pt]{figures/sh_geom}
\caption{Square MM \label{fig:sh_geom}}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=140pt]{figures/h0_heom}
\caption{Hexagonal MM\label{fig:h0_geom}}
\end{subfigure}
\caption{Unit cell geometry of the selected examples. The blue dots denote the selected~$n_p$ nodes where the mode shapes~$\vec{\Phi}^\mathrm{p}$ are sampled from FEM solutions at selected wavevectors. The base material is alumina ($E=\SI{300}{GPa},\nu=0.22,\rho=\SI{3900}{kg/m^3}$).
\label{fig:geom}}
\end{figure}
Then, one performs finite element simulations at a few selected~$\vec{k}$ points in the irreducible Brillouin zone (IBZ). Preferably, these~$\vec{k}$ points should be far away from each other. In the shown examples here, we select~$n_k=4$ points at~$\vec{k}=[0,0],[\pi/a,0],[0,\pi/a],[\pi/a,\pi/a]$. At each wavevector point, one collects the eigenfrequency results for the lowest~$n_m$ modes. The number~$n_m$ is determined such that: (1) the~$n_m$-th frequency~$f_{n_m}$ covers the frequency range of interest and (2) the locations associated with dominant deformations for the lowest~$n_m$ modes are included in the pre-selected nodes.
We chose~$n_m=6$ for the square cell and~$n_m=8$ for the hexagonal one. In this step, the collected information includes the diagonal frequency matrix~$\vec{\omega}^\mathrm{c}\in\mathbb{R}^{n_m\times n_m}$, the diagonal kinetic energy matrix~$\overline{\vec{T}}^\mathrm{c}\in\mathbb{R}^{n_m\times n_m}$, the diagonal potential energy matrix~$\overline{\vec{V}}^\mathrm{c}\in\mathbb{R}^{n_m\times n_m}$, and the mode shape matrix~$\vec{\Phi}^\mathrm{p}\in\mathbb{C}^{3n_p\times n_m}$.
\subsection{Matrix construction}
After the FEM data collection, the ROM matrices can be constructed symbolically based on the selected nodes, including the~$n_p$ primary ones and the~$n_d$ dependent ones, using the known connectivity and beam element stiffness formulation (see \cref{eq:beamK} in \cref{appBeam}). The dependent nodes are those whose kinematics are determined by the Floquet periodicity, yet are connected by a structural element to one of the primary nodes. The graphic representations of the ROM unit cells are shown in \cref{fig:rom_ruc_dof}.
For the square unit cell in~\cref{fig:sh_ruc_dofs}, the dependent DOFs are~$\vec{u}^\mathrm{d}=[\vec{u}^{(3)},\vec{u}^{(8)},\vec{u}^{(9)}]^\top$. For the hexagonal unit cell in~\cref{fig:h0_ruc_dofs}, the dependent DOFs are~$\vec{u}^\mathrm{d}=[\vec{u}^{(3)},\vec{u}^{(4)}]^\top$.
Notice that some edge elements (between dependent nodes) are not included in order to eliminate redundancy in the periodically generated array. For example, nodes 8 and 9 are not directly connected in \cref{fig:sh_ruc_dofs}.
Nevertheless, the structures shown in \cref{fig:rom_ruc_dof} are primitive unit cells whose 2D repetition reproduces the infinitely periodic system exactly.
Each node (denoted by a black dot) has three inertia parameters: the translational masses~$m_x$ and~$m_y=m_x$ and the rotational inertia~$I_z$. The force-balance relation between two connected nodes is approximated based on beam analysis, introduced in \cref{appBeam}. In this formulation, the beam element is allowed to have an asymmetric layout, and only four independent stiffness parameters~$\beta_{1,2,5,7}$ (diagonal components of the stiffness matrix) are needed to construct the local stiffness matrix. The other components are statically determined. The six force and moment components are related to the six displacement and rotation quantities through the symbolic stiffness matrix, in which four independent stiffness values need to be found. Such a form is not only compatible with standard beam elements (Euler-Bernoulli, Timoshenko), but is also suitable for any generalized 1D structural component with two end nodes.
To obtain the global stiffness matrix, each force-balance relation is first converted into the global coordinate system; see~\cref{appBeam}. Based on the equilibrium of the overall structure, the global stiffness matrix is obtained by summing, for each node, all the loads arising from the adjacent elements\cite{ferreira2008matlab}. The static balance equations then read
\begin{equation}
\vec{K}^\mathrm{f}\vec{u}^\mathrm{f}=\vec{K}^\mathrm{f}\begin{bmatrix}
u^{(1)}_x& u^{(1)}_y& \theta^{(1)}_z&\cdots&u^{(n_p+n_d)}_x& u^{(n_p+n_d)}_y& \theta^{(n_p+n_d)}_z
\end{bmatrix}^\top=\vec{F},
\end{equation}
where $\vec{F}$ is the nodal loading and superscript is the node index. The symbolic matrix~$\vec{K}^\mathrm{f}$ gives the constitutive description of the full unanchored structure, with free boundary conditions. The associated mass matrix is a diagonal matrix~$\vec{M}^\mathrm{f}=\mathrm{diag}[\vec{\mu}]=\mathrm{diag}[m^{(1)}_x,\dots,I^{(n_p+n_d)}_z]$. Then, the unit cell structure is effectively parameterized by the unknown stiffnesses~$\vec{\beta}$ and inertia values $\vec{\mu}$.
To apply the Bloch-Floquet periodicity condition, one can first describe the full set of DOFs~$\vec{u}^\mathrm{f}$ by a transformed version of the DOFs at the primary nodes~$\vec{u}^\mathrm{p}$:
\begin{equation}
\vec{u}^\mathrm{f}=\vec{P}(\vec{k})\vec{u}^\mathrm{p},
\end{equation}
where~$\vec{P}(\vec{k})$ is a rectangular transformation matrix containing the phase differences between the dependent and primary DOFs, determined by the nodal positions and the wavevector. A detailed discussion on applying the periodicity can be found in \cite{Krattiger2018a}.
For the periodic unit cells, these dependent DOFs are removed from the equations of motion, and only the primary DOFs are necessary:
\begin{equation}\label{eq:eom_periodic}
\vec{K}^\mathrm{p} (\vec{k}) \vec{u}^\mathrm{p}-\vec{\omega}^2 \vec{M}^\mathrm{p}\vec{u}^\mathrm{p}=\vec{0},
\end{equation}
where~$\vec{K}^\mathrm{p}=\vec{P}^\dagger \vec{K}^\mathrm{f}\vec{P}$,~$\vec{M}^\mathrm{p}=\vec{P}^\dagger \vec{M}^\mathrm{f}\vec{P}$ are the matrices for the primitive cell, and~$\dagger$ is Hermitian transpose. At this stage, the equations of motion \cref{eq:eom_periodic} effectively describe the dynamics of the discretized primitive unit cells, and the involved nodes are identical to the ones marked in \cref{fig:geom}.
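A minimal sketch of this reduction step is given below, with a 1D monatomic chain standing in for the 2D assembly; the periodic image (dependent) node is assigned zero inertia in this toy so that each physical mass is counted once, and the result reproduces the textbook dispersion.

\begin{verbatim}
import numpy as np

# Bloch reduction sketch: primary node 0 and its periodic image (dependent) node 1.
m, c, a = 1.0, 100.0, 1.0
Kf = c * np.array([[1.0, -1.0],
                   [-1.0, 1.0]])                   # free (unanchored) stiffness
Mf = np.diag([m, 0.0])                             # image node carries no inertia here

def reduce_bloch(k):
    P = np.array([[1.0], [np.exp(-1j * k * a)]])   # u_f = P(k) u_p
    return P.conj().T @ Kf @ P, P.conj().T @ Mf @ P

for k in np.linspace(0.1, np.pi / a, 4):
    Kp, Mp = reduce_bloch(k)
    omega = np.sqrt(np.real(Kp[0, 0]) / Mp[0, 0].real)
    exact = 2.0 * np.sqrt(c / m) * abs(np.sin(0.5 * k * a))
    print(f"{omega:.6f}  {exact:.6f}")             # matches the monatomic-chain dispersion
\end{verbatim}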
\begin{figure}[!ht]
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=140pt]{figures/sh_ruc_dofs}
\caption{\label{fig:sh_ruc_dofs}}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=140pt]{figures/h0_ruc_dofs}
\caption{\label{fig:h0_ruc_dofs}}
\end{subfigure}
\caption{ Beam assemblies as the reduced order unit cells for the (\subref{fig:sh_ruc_dofs}) square and (\subref{fig:h0_ruc_dofs}) hexagonal MMs. The red arrows denote the reduced inertia DOFs. \label{fig:rom_ruc_dof}}
\end{figure}
Taking advantage of the geometrical symmetries in the continuum model, the number of unknown parameters in the ROM property matrices can be reduced. For example, beam 5-6 and beam 7-6 are symmetric with respect to node 6 in~\cref{fig:sh_ruc_dofs}. Then, node 5 will have the same mass and rotational inertia as node 7. The two beams will share the same local stiffness matrix as well (in the un-rotated local coordinate system). Under this idealized description, the structural symmetries lead to degeneracy in the eigenvalues. However, any physical or numerical realization of such systems will have the tendency to become non-degenerate due to any small asymmetry.
\subsection{Inertia quantification and DOF reduction}
With the symbolic ROM matrices developed, the next step is to find the effective mass and inertia values of the selected nodes. Considering the kinetic energy formulation at the~$(j)$-th $\vec{k}$ point, the ROM values are expected to be identical to the continuum ones:
\begin{equation}\label{eq:mt_e}
\frac{1}{4}\vec{\lambda}^\mathrm{c}_{(j)}\vec{\Phi}^{\mathrm{p}\dagger}_{(j)}\vec{M}^\mathrm{p}(\vec{\mu})\vec{\Phi}^\mathrm{p}_{(j)}
= \overline{\vec{T}}^\mathrm{c}_{(j)}\in\mathbb{R}^{n_m\times n_m}.
\end{equation}
As is indicated by the rhs of this equation, this is an $n_m \times n_m$ matrix equation (for each $j$ value), whose purpose is to find the matrix~$\vec{M}^\mathrm{p}(\vec{\mu})$, assumed diagonalizable by the down-sampled eigenvectors~$\vec{\Phi}^\mathrm{p}_{(j)}$, with the known $n_m \times n_m$ diagonal eigenvalue matrix~$\vec{\lambda}^\mathrm{c}_{(j)}=(\vec{\omega}^\mathrm{c}_{(j)})^2$ and kinetic energy~$\overline{\vec{T}}^\mathrm{c}_{(j)}$.
As this is clearly mathematically over-determined, an error vector can be defined as
\begin{equation}\label{eq:masserror}
\vec{e}_{(j)} (\vec{\mu})=\mathrm{UT}\left[\frac{1}{4}\vec{\lambda}^\mathrm{c}_{(j)}\vec{\Phi}^{\mathrm{p}\dagger}_{(j)}\vec{M}^\mathrm{p}(\vec{\mu})\vec{\Phi}^{\mathrm{p}}_{(j)}
- \overline{\vec{T}}^\mathrm{c}_{(j)}\right],
\end{equation}
where~$\mathrm{UT}[\cdot]$ denotes the vector containing all the upper triangular entries of a matrix.
Combining the results at all the~$n_k$ wavevectors, the error vector is then $\vec{E}=[\vec{e}_{(1)}\dots\vec{e}_ {(n_k)}]$.
This problem can be stated as a constrained linear least-squares problem
\begin{equation}
\begin{aligned}
\min_{\vec{\mu}} \quad & ||\vec{E}(\vec{\mu})||_2^2\\
\textrm{s.t.} \quad & \vec{\mu} \geq 0 \\
\end{aligned}
\end{equation}
where $||\cdot||_2$ indicates the~$L_2$ norm, and the condition on $\vec{\mu}$ is understood to apply to each component independently. In practice, to guarantee the equal weight of each mode and each wavevector, all the mode shapes should be pre-normalized so that their kinetic energies are equal.
Nevertheless, the off-diagonal components in the modal matrix~$\overline{\vec{T}}^\mathrm{c}$ will remain zero.
The optimization problem is solved using the \textit{lsqlin} function in MATLAB\textsuperscript{\textregistered}. \Cref{fig:mass_fitting} shows the resulting energy fitness after optimization. The blue lines denote the right hand side of \cref{eq:mt_e}, i.e., the kinetic energy matrix components in the FEM results. The red lines represent the approximated ROM results. Good accuracy is observed in both cases. By using the simulation results at more than one~$\vec{k}$ point (in this case,~$n_k=4$), the real and imaginary parts of the upper triangular components together lead to ($n_k n_m (n_m+1)$) real equations for finding the unknown inertia parameters.
Furthermore, the lowest two modes at~$\vec{k}=\Gamma=[0,0]$ are rigid body modes with zero eigenfrequencies. Inclusion of the~$\Gamma$ point in this process automatically guarantees that the identified parameters satisfy mass conservation.
In addition, the parallel axis theorem could also be enforced in this optimization process, either as a constraint or as a part of the objective function. Based on this theorem, the discrete unit cell is expected to have the same total moment of inertia as the continuum model. In the given examples, it is found that introducing the parallel axis theorem would add a small error (8\%) in the kinetic energy match. The root of this discrepancy is the selected locations of the ROM nodes, and it can potentially be overcome by allowing these locations to be found as part of the optimization process. However, the extra computational cost and complexity of this process far outweigh its benefit to accuracy. Therefore, this additional consideration is not enforced here. Nevertheless, the resulting band structure and dynamic response of the ROMs are still very close matches of the continuum solutions, indicating high physical fidelity, as shown in the later sections.
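Because $\vec{M}^\mathrm{p}(\vec{\mu})$ depends linearly on the inertia parameters, the fit of \cref{eq:masserror} is a bounded linear least-squares problem. The sketch below illustrates its structure on synthetic real-valued modal data, using SciPy's \textit{lsq\_linear} in place of the \textit{lsqlin} call mentioned above; for complex Bloch modes the real and imaginary parts of the equations would be stacked, as described in the text.

\begin{verbatim}
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import lsq_linear

rng = np.random.default_rng(1)
n_dof, n_m = 5, 3                            # sampled DOFs, modes kept
mu_true = rng.uniform(0.5, 2.0, n_dof)       # "unknown" nodal inertias

rows, rhs = [], []
for _ in range(2):                           # two stand-ins for different k-points
    A = rng.normal(size=(n_dof, n_dof))
    K = A @ A.T + n_dof * np.eye(n_dof)      # toy real stiffness at this "k-point"
    lam, Phi = eigh(K, np.diag(mu_true))
    lam, Phi = lam[:n_m], Phi[:, :n_m]
    # Target diagonal modal kinetic energies, playing the role of T-bar^c.
    T = 0.25 * lam * np.einsum("im,i,im->m", Phi, mu_true, Phi)
    # Each upper-triangular entry of (1/4) lam_p Phi^T diag(mu) Phi is linear in mu.
    for p in range(n_m):
        for q in range(p, n_m):
            rows.append(0.25 * lam[p] * Phi[:, p] * Phi[:, q])
            rhs.append(T[p] if p == q else 0.0)

res = lsq_linear(np.asarray(rows), np.asarray(rhs), bounds=(0.0, np.inf))
print(np.round(res.x, 4))                    # recovered nodal inertias
print(np.round(mu_true, 4))                  # true values, for comparison
\end{verbatim}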
\begin{figure}[!ht]
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=120pt]{figures/sh_mass_fitting}
\caption{\label{fig:sh_mass}}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=120pt]{figures/h0_mass_fitting}
\caption{\label{fig:h0_mass}}
\end{subfigure}
\caption{Kinetic energy matrix components for the (\subref{fig:sh_mass}) square and (\subref{fig:h0_mass}) hexagonal MMs after optimization. The first and second halves of the equations are the real and imaginary parts of the kinetic energy matrix, respectively. \label{fig:mass_fitting}}
\end{figure}
Although three DOFs ($u_x,u_y,\theta_z$) are sampled at each node, not all of them have the same importance. Certain DOFs may become apparent in the mode shapes only when the frequency is high enough, and certain DOFs may be associated with negligible inertia. With the mass matrix found, the mode shapes~$\vec{\Phi}^\mathrm{p}$ can be re-scaled so that
\begin{equation}\label{eq:Tnorm}
\frac{1}{4} \lambda^\mathrm{c}_{m(j)} \vec{\Phi}^{\mathrm{p}\dagger}_{m(j)}\vec{M}^\mathrm{p}\vec{\Phi}^\mathrm{p}_{m(j)}=1 \ \qquad \mathrm{for\ any\ }m,(j)
\end{equation}
where~$\lambda=\omega^2$ is the eigenvalue, and subscript~$_{m(j)}$ indicates the~$m$-th mode eigenvalue at~$(j)$-th wavevector location.
Then, the weight of the~$i$-th DOF is evaluated as the averaged kinetic energy in the log scale:
\begin{equation}
W_i=\log_{10}\dfrac{\sum_{m=1}^{n_m}\sum_{j=1}^{n_k} \frac{1}{4} \lambda^\mathrm{c}_{m(j)}
\Phi^{\mathrm{p}*}_{im(j)}
M^{\mathrm{p}}_{ii}
\Phi^{\mathrm{p}}_{im(j)}}
{n_m n_k}.
\end{equation}
\Cref{fig:dof_weights} shows the relative weights of all the DOFs for the two MM examples. It is apparent that certain DOFs with weight~$\leq-4$ should be eliminated and be regarded as ``slave'' DOFs. For example, the ninth DOF of the square MM (the rotation at node 4) has negligible weight and is not active in the considered frequency range. Only the active ``master'' DOFs will be kept in the ROM formulations, and they are indicated by the red arrows in~\cref{fig:rom_ruc_dof}.
Deleting the slave DOFs from~$\vec{\Phi}^\mathrm{p}$ leads to the reduced mode shapes~$\vec{\Phi}^\mathrm{r}$, which is a subset of continuum data and is expected to be the eigenvectors of the ROM matrices.
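A compact sketch of this weighting and selection step is given below, with synthetic arrays replacing the sampled data; note that with the normalization of \cref{eq:Tnorm} a fixed threshold is used in this work, whereas the unnormalized toy below uses a relative one.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n_dof, n_m, n_k = 9, 4, 4
Phi = rng.normal(size=(n_k, n_dof, n_m)) + 1j * rng.normal(size=(n_k, n_dof, n_m))
lam = rng.uniform(1.0, 10.0, size=(n_k, n_m))        # eigenvalues omega^2
M_diag = rng.uniform(0.1, 1.0, size=n_dof)           # diagonal of M^p

E = 0.25 * lam[:, None, :] * M_diag[None, :, None] * np.abs(Phi) ** 2
W = np.log10(E.mean(axis=(0, 2)))                    # weight W_i per DOF
slave = np.where(W < W.max() - 4.0)[0]               # candidate slave DOFs
print(W, slave)
\end{verbatim}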
The removal of these slave DOFs follows the standard static condensation\cite{GUYAN1965}, as the associated inertia values are effectively zero. One can re-arrange the stiffness matrix as
$$
\vec{K}^\mathrm{p}=\begin{bmatrix}
\vec{K}_{\mathrm{mm}} & \vec{K}_{\mathrm{ms}} \\
\vec{K}_{\mathrm{sm}} & \vec{K}_{\mathrm{ss}}
\end{bmatrix}.
$$
Here the subscripts $\mathrm{m}$ and $\mathrm{s}$ denote the master and slave DOFs, respectively (the former not to be confused with the mode index $m$ used earlier).
The columns and rows related to slave DOFs in the mass matrix~$\vec{M}^\mathrm{p}$ are deleted, and it leads to the reduced mass matrix~$\vec{M}^\mathrm{r}$. The reduced (still symbolic) stiffness matrix is obtained by
\begin{equation}\label{eq:guyan}
\vec{K}^\mathrm{r}=\vec{K}_{\mathrm{mm}}-\vec{K}_{\mathrm{ms}}\vec{K}_{\mathrm{ss}}^{-1}\vec{K}_{\mathrm{sm}}.
\end{equation}
In practice, the inversion of symbolic sub-matrix~$\vec{K}_{\mathrm{ss}}$ is computationally challenging. However, this step can be equivalently implemented using Gaussian elimination.
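Numerically, \cref{eq:guyan} amounts to a few lines; a minimal sketch (a numeric toy matrix in place of the symbolic one) is:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n_master, n_slave = 4, 3
n = n_master + n_slave
A = rng.normal(size=(n, n))
Kp = A @ A.T + n * np.eye(n)                 # toy primitive-cell stiffness

m = slice(0, n_master)
s = slice(n_master, n)
Kmm, Kms, Ksm, Kss = Kp[m, m], Kp[m, s], Kp[s, m], Kp[s, s]

# K_mm - K_ms K_ss^{-1} K_sm, computed via a solve rather than an explicit inverse.
Kr = Kmm - Kms @ np.linalg.solve(Kss, Ksm)
print(np.allclose(Kr, Kr.T))                 # the condensed matrix stays symmetric
\end{verbatim}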
\begin{figure}[!ht]
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=110pt]{figures/sh_dof_weights}
\caption{\label{fig:sh_dofwei}}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=110pt]{figures/h0_dof_weights}
\caption{\label{fig:h0_dot_wei}}
\end{subfigure}
\caption{Weights of the DOFs (in log scale) for the (\subref{fig:sh_dofwei}) square and (\subref{fig:h0_dot_wei}) hexagonal MMs. \label{fig:dof_weights}}
\end{figure}
\subsection{Stiffness parameter extraction}
To ensure that the continuum eigenfrequencies and mode shapes can be accurately reproduced by the ROM, the modal potential energy must be equal to the modal kinetic energy. The ideal set of stiffness parameters~$\vec{\beta}$ can therefore be found by optimizing the potential energy fitness. The error in energy at the~$(j)$-th wavevector location is defined as
\begin{equation}\label{eq:betaerror}
\vec{e}_ {(j)}(\vec{\beta})=\mathrm{UT}\left[\frac{1}{4}\vec{\Phi}^{\mathrm{r}\dagger}_{(j)}\vec{K}^\mathrm{r}(\vec{\beta})\vec{\Phi}^\mathrm{r}_{(j)} -\frac{1}{4}\vec{\Phi}^{\mathrm{r}\dagger}_{(j)}\vec{M}^\mathrm{r}\vec{\Phi}^\mathrm{r}_{(j)}\vec{\lambda}^\mathrm{c}_{(j)}\right],
\end{equation}
where the superscript~$r$ denotes quantities associated with the reduced set of master DOFs. The error vector for all~$n_k$ wavevector locations is then~$\vec{E}=[\vec{e}_{(1)}\dots\vec{e}_{(n_k)}]$.
The optimization problem is then formulated as:
\begin{equation}\label{eq:Vopt}
\begin{aligned}
\min_{\vec{\beta}} \quad & {||\vec{E}(\vec{\beta})||_2}\\%/{||\vec{E}_r||_2} \\
\textrm{s.t.} \quad & \vec{\beta}>\vec{0}, \\
\end{aligned}
\end{equation}
where the constraint on $\vec{\beta}$ is understood as positivity of every single $\beta$ parameter in the structure; see \cref{appBeam} for details.
This process also ensures the diagonality of the potential energy in ROM formulation.
The mode shapes are normalized based on~\cref{eq:Tnorm} to ensure equal weights for all the modes. Notice that due to the static condensation process \cref{eq:guyan}, the stiffness matrix~$\vec{K}^\mathrm{r}$ and the error vector~$\vec{E}$ are no longer linear functions of the stiffness parameters~$\vec{\beta}$. Furthermore, the stiffness parameters need to be rescaled properly due to numerical considerations. For example, the rotational stiffnesses have different units than the axial or translational ones. Therefore, it is beneficial to express the stiffness as
\begin{equation}
\vec{\beta}=\vec{\alpha}\circ\hat{\vec{\beta}}
\end{equation}
where~$\circ$ denotes element-wise multiplication and~$\vec{\alpha}$ is a dimensionless stiffness ratio vector. The vector~$\hat{\vec{\beta}}$ contains the estimated stiffness values based on the beam geometry (length, height), which can be derived using the standard formulas of the Timoshenko beam stiffness matrix. Then, \cref{eq:Vopt} can be re-written as
\begin{equation}
\begin{aligned}
\min_{\vec{\alpha}} \quad & {||\vec{E}(\vec{\alpha})||_2}\\%{||\vec{E}_r||_2} \\
\textrm{s.t.} \quad & \vec{\alpha}>\vec{0} . \\
\end{aligned}
\end{equation}
Such a problem can be initialized from~$\vec{\alpha}=\vec{1}$ and is solvable using the \textit{fmincon} function in MATLAB\textsuperscript{\textregistered}. Furthermore, the effects of deviations from these initial estimates are of similar order of magnitude, which is numerically preferable. \Cref{fig:beta} shows the error (cost) convergence for the two examples considered here. For the square MM (\cref{fig:sh_beta}), it takes more iterations to reach the minimum. However, both cases reach sufficiently low errors in the final iterations.
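A self-contained sketch of this fitting step on a toy 2-DOF assembly is given below; SciPy's bounded \textit{minimize} stands in for \textit{fmincon}, a squared error norm is used for smoothness, and all parameter values are illustrative only.

\begin{verbatim}
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import minimize

# Toy stiffness identification: a 2-DOF assembly with three springs, fitted by
# minimizing the potential-energy mismatch over the scaling vector alpha.
def K_of(beta):
    b1, b2, b3 = beta
    return np.array([[b1 + b2, -b2],
                     [-b2, b2 + b3]])

M = np.diag([1.0, 2.0])
beta_true = np.array([400.0, 150.0, 250.0])
lam, Phi = eigh(K_of(beta_true), M)          # "continuum" modal data (Phi^T M Phi = I)

def cost(alpha, beta_hat):
    E = 0.25 * Phi.T @ K_of(alpha * beta_hat) @ Phi \
        - 0.25 * Phi.T @ M @ Phi @ np.diag(lam)
    return np.sum(E[np.triu_indices_from(E)] ** 2)

beta_hat = np.array([300.0, 200.0, 200.0])   # rough estimates from beam formulas
res = minimize(cost, x0=np.ones(3), args=(beta_hat,),
               bounds=[(1e-6, None)] * 3, method="L-BFGS-B")
print(np.round(res.x * beta_hat, 2))         # recovered stiffnesses
print(beta_true)                             # true values, for comparison
\end{verbatim}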
\begin{figure}[!ht]
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=110pt]{figures/sh_beta_training}
\caption{\label{fig:sh_beta}}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=110pt]{figures/h0_beta_training}
\caption{\label{fig:h0_beta}}
\end{subfigure}
\caption{Potential energy fitness optimization convergence plots for (\subref{fig:sh_beta}) square and (\subref{fig:h0_beta}) hexagonal unit cells. \label{fig:beta}}
\end{figure}
\subsection{Results and discussion}
With the effective stiffness~$\vec{\beta}$ and inertia~$\vec{\mu}$ parameters determined, the ROM procedure is completed. The optimized modal energy fitness ensures the fidelity of the ROM. It is observed that the optimized matching of the modal relations leads to accurate reproduction of the eigenfrequencies as well as the nodal mode shapes at the pre-calculated~$n_k$ wavevector locations. Beyond these locations, the eigen-analysis results are also well extrapolated because of the symbolic implementation of the structural stiffness and Bloch-Floquet periodicity.
One can solve for the eigenfrequencies and mode shapes through the analytical formulation~\cref{eq:eigf} for any given wavevector~$\vec{k}$. Such computations are extremely fast due to the compactness of the matrices.
\begin{figure}[!ht]
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=210pt]{figures/sh_band_rom}
\caption{Square MM\label{fig:sh_band}}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=210pt]{figures/h0_band_rom}
\caption{Hexagonal MM\label{fig:h0_band}}
\end{subfigure}
\caption{Band structure comparison. \label{fig:band}}
\end{figure}
\Cref{fig:band} shows the eigenfrequency band structures for the two studied examples, plotted in the dimensionless wavenumber space~$Q_{x,y}=k_{x,y}a$, where~$a=\SI{10}{mm}$ is the lattice constant. The colored surfaces are generated based on ROM, while the dots represent FEM results. It can be seen that the ROM provides close approximations of the band structures. Notice that the ROM construction only requires the simulations at four different wavevector locations $[Q_x,Q_y]=[0,0],[0,\pi],[\pi,0]$ and $[\pi,\pi]$. The number and locations of these input simulations are, however, not fixed to the given ones.
It should be noted that minimizing the matching error~$\vec{e}_{(j)}(\vec{\beta})\rightarrow\vec{0}$ in \cref{eq:betaerror} is a necessary yet insufficient requirement for the ROM system~$\vec{K}^\mathrm{r}(\vec{\beta}),~\vec{M}^\mathrm{r}$ to produce the exact eigen-solutions~$\vec{\Phi}^\mathrm{r}_{(j)},~\vec{\omega}^\mathrm{c}_{(j)}$ obtained from FEM simulations. The left multiplication by~$\vec{\Phi}^\mathrm{r}_{(j)}$ in \cref{eq:betaerror} reduces the number of equations since the mode shape matrix is rectangular. However, the number of unknowns in~$\vec{\beta}$ is limited, and \cref{eq:betaerror} is collected for multiple wavenumber points. Therefore, such an optimization scheme creates an over-determined problem for seeking the limited set of ROM parameters that are representative of the unit cell properties. With known eigen-states, the ROM produces the same modal energy matrices as the higher order FEM system. It is then observed that the resulting ROM leads to eigenvectors which closely agree with the FEM results. An alternative way to identify~$\vec{\beta}$ is to create a multi-objective optimization problem in which one also optimizes the ROM mode shape accuracy while minimizing the modal energy error given by \cref{eq:betaerror}. In practice, and in the examples here, the secondary objective of maintaining mode shape accuracy is omitted and only used as a sanity check, leading to significant computational cost savings with insignificant or no loss of accuracy. The latter outcome is understood to be a consequence of the symbolic development of the dynamic matrices based on the internal cell topology (beam connectivity).
This approach advances well-established model order reduction methods such as SEREP \cite{Avitable1989} in attacking MM problems in the sense that (1) the proposed ROM maintains the eigen-solution accuracy for any wavevector and (2) it provides analytical and parameterized matrices instead of numerical ones. It expands such methods by including the propagating nature of waves instead of the modal response of finite structures. In MM systems, the micro-structural features play vital roles in the dynamic properties. A small variation in the geometry could lead to a drastic change in the overall response. Therefore, an analytical model with parameterized structural elements is particularly advantageous for understanding the influence of each component and fine-tuning the design. Several applications of the method are discussed in the next section.
\section{Applications}\label{sec:apps}
The proposed ROM approach has a wide application spectrum, as the matrices are parameterized by the physical properties (structural stiffness and inertia) and the modeled DOFs are physical deformations instead of generalized coordinates. Therefore, the developed ROMs preserve the necessary physical ingredients for further analysis. The dependence of the ROM parameters on the geometric dimensions and material properties may be curve-fitted with continuous functions for design purposes, see \cite{Morris2022}. The fidelity of these models is inherently guaranteed by the optimized energy relations and the accurate reproduction of band structures and mode shapes. While the ROM is capable of generating the band structure accurately and efficiently, the band computation is not the ultimate goal of the ROM; rather, it is the basis and a starting point.
One immediate application is optimizing unit cell designs for desired eigenfrequencies. The ROM characterizes the continuum unit cell with a finite number of stiffness and inertia parameters and provides the analytical formulation of the eigenfrequency bands. It is then intuitive and direct to tune the structural parameters and associated geometry for the desired eigenfrequency performance (wave speeds and band gaps) based on the analytical model. The detailed steps are omitted here. A simplified example can be found in previous work\cite{Morris2022}. Other application examples are discussed below.
\subsection{Level repulsion identification}
The micro-geometry and periodicity of MMs add an extra layer of complexity to the analysis of band topology and scattering response. The high dimensionality of traditional models presents challenges in understanding physical phenomena and interpreting results. In MM and phononic band structures, many apparent crossing points may exist between eigenfrequency branches. It is important to classify these crossings as either degeneracy points (real crossings) or level repulsions (avoided crossings) \cite{Lu2018}. Although the quantitative difference between the two types of crossings is small, it can lead to misunderstandings of the modal nature and scattering response. Level repulsion indicates mixing of modes, caused by the coupling between DOFs, resulting in unexpected energy transfer in scattering analysis. Conversely, real crossings indicate fully decoupled modes \cite{Amirkhizi2018d}.
\begin{figure}[!ht]
\centering\includegraphics[height=150pt]{figures/lr_allband}
\caption{The square MM unit cell band structure with an apparent or real crossing region identified in the blue circle.\label{fig:lr_band}}
\end{figure}
An example of band identification can be seen in \crefrange{fig:lr_band}{fig:crossing}, where the 1D band structure along the $\Gamma-X$ direction is analyzed for the previously shown square cell. \Cref{fig:lr_band} shows good overall agreement between the ROM and FEM results. However, zooming into a region with an apparent crossing (indicated by the blue circle in \cref{fig:lr_band}) reveals a discrepancy. The FEM results, shown in \cref{fig:lrfem}, indicate that the two relevant branches appear to repel each other. The root of this apparent repulsion is not the cell response itself, but rather the asymmetry of the FEM mesh. To correct the sorting, one must refine the mesh so as to preserve the exact two-fold symmetry of the cellular structure, as shown by the black curves in \cref{fig:lrfem}. This task requires repeated simulations and mesh refinements with the FEM approach. On the other hand, the ROM results, shown by the blue dashed curves in \cref{fig:lrrom_band}, correctly indicate a real crossing between the two branches, since the discrete model possesses the same symmetry group as the continuum one. The analytical representation of the ROM formulas allows for easy distinction between real crossings and level repulsions. Additionally, manual assignment of perturbations to the parameterized ROM quantities can break the symmetry and lead to repulsed branches, as shown by the black curves in \cref{fig:lrrom_band}. This analysis shows that the studied crossing point is a symmetry-protected degeneracy (rather than an accidental one), which is unstable and prone to forming repulsed branches due to imperfections in the material or geometry of an actual specimen. The same is true for continuum FE models, which may lack symmetry due to their meshes.
Another example can be found in \cite{Amirkhizi2018d}, where the direct FEM result suggests a real crossing between two branches, but the ROM and a further refined FEM study indicate otherwise. The ROM approach is more efficient for band sorting purposes (and can easily be leveraged in calculation of topological invariants based on path integrals), and it provides benefits in understanding the difference between an ideal model and a realistic one, as well as in distinguishing normal and accidental degeneracy.
\begin{figure}[!ht]
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=150pt]{figures/femlr.pdf}
\caption{\label{fig:lrfem}}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=150pt]{figures/romlr}
\caption{\label{fig:lrrom_band}}
\end{subfigure}%
\caption{Band crossing identification example for the square MM cell. The small region identified with a circle in \cref{fig:lr_band} is magnified for both computational approaches: (\subref{fig:lrfem}) FEM, (\subref{fig:lrrom_band}) ROM. In both cases a slight baseline shift of the asymmetric band is applied to bring the two results into the same narrow frequency window. \label{fig:crossing}}
\end{figure}
\subsection{Equi-frequency contours}
The developed ROM matrices also allow for computation of the equi-frequency contours~$k_x(\omega,k_y)$ or~$k_y(\omega,k_x)$, i.e., to find the wavevector solutions~$k_{x}$ or~$k_y$ at prescribed~$k_{y}$ or~$k_x$ and frequency values. For complex $k$ components, the solved eigen-modes are the propagating and evanescent waves that constitute the basis solution for the oblique scattering problem.
Such problems are rarely studied for metamaterials but are of prime importance for analyzing the scattering dynamics\cite{Mokhtari2019a,wang2021angledependent}. Solving this type of problem in FEM is nearly impractical, because a real-valued frequency~$\omega$ cannot be easily enforced in a traditional eigenfrequency study with prescribed complex $k$ components. It is also generally not straightforward to assign a wavenumber component as an eigenvalue to be found, though for phononic media an elegant mixed eigenvalue approach is presented in \cite{Mokhtari2019a}. For metamaterials with complex internal features, the proposed ROM approach is an ideal alternative. Using the ROM matrices, this problem can be solved simply by finding the global minima of the determinant~$\mathrm{det}\left[\vec{K}^\mathrm{r}(k_x,k_y)-\omega^2\vec{M}^\mathrm{r}\right]$ in the complex~$(k_x,~k_y)$ space for prescribed~$\omega$ values. Examples of such contour calculations are deferred to focused studies on their use.
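As a minimal illustration of this determinant search (a 1D diatomic chain in place of the 2D ROM, and a derivative-free minimizer over the real and imaginary parts of $k$; all values are illustrative), one can write:

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# Toy contour search: find a complex wavenumber k at a prescribed frequency by
# minimizing |det(K(k) - w^2 M)| over (Re k, Im k).
m1, m2, c, a = 1.0, 2.0, 100.0, 1.0
M = np.diag([m1, m2])

def K_of_k(k):                               # k may be complex here
    return np.array([[2.0 * c, -c * (1.0 + np.exp(-1j * k * a))],
                     [-c * (1.0 + np.exp(1j * k * a)), 2.0 * c]])

def objective(p, w):
    k = p[0] + 1j * p[1]
    return abs(np.linalg.det(K_of_k(k) - w ** 2 * M))

w = 12.0                                     # inside the band gap of this chain
res = minimize(objective, x0=[1.0, 0.1], args=(w,), method="Nelder-Mead")
k_sol = res.x[0] + 1j * res.x[1]
print(k_sol, objective(res.x, w))            # complex (evanescent) root, small residual
\end{verbatim}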
\subsection{Finite array transient response}
The band structure computations for infinitely periodic arrays are the main tool and product of the ROM approach. However, it is of practical interest to study the dynamic response of finite-sized arrays under localized loading or scattering. With the unit cell matrices obtained from the ROM procedure, it is possible to assemble multiple ROM unit cells to model a finite-sized array, easily removing the $\vec{k}$ dependence of the stiffness matrix. This reduced order representation of finite systems allows for very fast computation of the frequency and time domain responses of the structure. Non-uniform and non-periodic arrays can be designed by stacking different unit cells, suitable for novel applications such as cloaking and insulation.
Time domain solutions can be directly calculated through time marching integration schemes. Here, we discuss the use of frequency domain solutions for solving such time domain problems. The governing equations of such an array, after the modal transformation, read:
\begin{equation}
\vec{\overline{M}}\vec{\ddot{q}}+\vec{\overline{K}}\vec{q}=\vec{\Phi}^\dagger\vec{F}(t),
\end{equation}
where~$\vec{\overline{M}}=\vec{\Phi}^\dagger\vec{M}\vec{\Phi}$ is the diagonal modal mass matrix,~$\vec{\overline{K}}=\vec{\Phi}^\dagger\vec{K}\vec{\Phi}$ is the diagonal modal stiffness matrix, both associated with and assembled for the full finite array,~$\vec{\Phi}$ is the eigenvector matrix of the finite structure (which needs to be calculated once),~$\vec{F}$ is the nodal load vector,~$\vec{q}$ is the generalized coordinate vector, and~$\vec{u}=\vec{\Phi}\vec{q}$ is the nodal DOF vector. Such a form decouples the equations and renders a set of single DOF equations to solve. For a single DOF system with mass~$M$, stiffness~$K$, eigenfrequency~$\omega$, displacement~$u$, and loading~$F(t)$, the dynamic response under arbitrary loading can be computed using the Duhamel integral:
\begin{equation}
u(t)=\int_0^{t} F(\tau)h(t-\tau)\mathrm{d}\tau,
\end{equation}
where the impulse response function (IRF) is
\begin{equation}
h(t)=\frac{1}{M\omega}\sin(\omega t).
\end{equation}
\begin{figure}[!ht]
\centering\includegraphics[height=130pt]{figures/load}
\caption{Time domain loading profile.}
\label{fig:load}
\end{figure}
Therefore, one can compute each entry in~$\vec{q}(t)$ in a similar way. Then, the physical response will be~$\vec{u}(t)=\vec{\Phi}\vec{q}(t)$. An example of time dependent response computation is shown in \cref{fig:td_res}. An impact force (\cref{fig:load}) is horizontally applied to one of the internal nodes, as indicated by the red arrows in \cref{fig:td_res}. It can be seen that the ROM solution can accurately reproduce the wave propagation pattern. The resulting displacement field from ROM has 94\% correlation with the FEM data, showing high physical fidelity.
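A compact sketch of this modal-superposition computation on a toy 2-DOF system (a discrete convolution approximating the Duhamel integral; all values are illustrative) is:

\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

# Modal superposition + Duhamel convolution for a toy 2-DOF assembly under a
# short force pulse; a stand-in for the assembled finite-array ROM.
M = np.diag([1.0, 2.0])
K = np.array([[300.0, -100.0],
              [-100.0, 200.0]])
lam, Phi = eigh(K, M)                        # Phi^T M Phi = I, Phi^T K Phi = diag(lam)
omega = np.sqrt(lam)

dt = 1e-3
t = np.arange(0.0, 2.0, dt)
F = np.zeros((2, t.size))
F[0] = np.exp(-((t - 0.05) / 0.01) ** 2)     # short pulse applied to DOF 0

Q = Phi.T @ F                                # modal forces (unit modal masses here)
q = np.empty_like(Q)
for j in range(2):
    h = np.sin(omega[j] * t) / omega[j]      # undamped unit-mass impulse response
    q[j] = np.convolve(Q[j], h)[: t.size] * dt   # Duhamel integral, discretized
u = Phi @ q                                  # back to physical DOFs
print(u[:, -1])
\end{verbatim}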
Finally, our preliminary results show that, using the time marching approach for the shown system, the FEM model has approximately 700,000 DOFs while the ROM only needs 2,000. Consequently, the ROM computation is about 5,000 times faster than the FEM approach. Therefore, the computational efficiency can be significantly improved for large-sized arrays. Along with data-driven and machine learning methods, one can efficiently explore a vast design space and achieve fast cell-by-cell optimization using the proposed ROM method.
\begin{figure}[!ht]
\centering\includegraphics[height=260pt]{figures/td_res.pdf}
\caption{Time domain response of a hexagonal MM array $40\ \mu$s after loading initiation. Top panel: FEM model and solutions. Bottom panel: ROM solutions.}
\label{fig:td_res}
\end{figure}
It must be noted that the modeling of finite arrays requires an extra step in terms of characterizing the boundary elements. The structural components of the main body remain the same as the ones obtained from the unit cell ROM. However, the outermost elements need to be quantified separately due to the different boundary conditions assigned to them, especially when the MM array is in contact with a different homogeneous medium. The detailed approach is the subject of current research and is initially carried out by optimizing the static response or impedance of edge cells with the help of a few FEM simulations.
\section{Conclusion and Outlook}\label{sec:conclusion}
A new reduced order modeling technique for periodic mechanical metamaterials is introduced. The method uses a limited number of simulations at selected wavevector locations to establish the reduced system matrices, which are parameterized based on the structural connectivity. Effective parameters are extracted by matching modal energies. This approach expands upon previous model order reduction techniques by considering the wave's propagating nature, leading to accurate eigen-solutions for any wavevector. The parameterized and analytical matrices generated by the ROM method offer valuable insights into the micro-structural influence on overall behavior and provide significant assistance in fine-tuning of design. Additionally, the ROM approach leads to fast computation of the dynamic responses of finite-sized arrays, with a significant reduction in computational effort compared to FEM.
To summarize, the highlights of this work are:
\begin{itemize}
\item the proposed ROM method can be easily applied to any periodic metamaterial that has beam-like components;
\item the reduced order matrices allow fast and accurate computation of band structures and of the dynamic responses in the frequency and time domains;
\item the ROM method can further benefit design optimization due to its computational efficiency.
\end{itemize}
The essential limitation of the proposed work is that the ROM method can only be applied to MM micro-structures comprised of beam-like elements (and potentially plate-like elements). Other types of micro-structures, such as layered media or unit cells with solid inclusions in a solid matrix, are not suitable modeling targets for the proposed ROM (instead, one can use RBME\cite{Hussein2009a}).
In addition, the proposed ROM approach is only applicable to systems with stiffness coupling between nearest neighbors, i.e., the long-range interaction\cite{Farzbod2020} is not considered.
The presented method could contribute to the modeling and design of finite and periodic mechanical metamaterials by reducing the computational effort. The micro-structure and periodicity of MMs lead to exciting dynamic properties and present theoretical questions in the physics of micro-structured media. The ROM method, equipped with vibration and strength-of-materials domain knowledge, can offer concise descriptions of the micro-structural dynamics and is a solid analytical tool for studying metamaterial dynamics. In addition, the presented method leads to significant improvements in computational efficiency and is a promising candidate for further design optimization of graded MM arrays with data-driven techniques. An immediate topic of research is the handling of edge cells due to their different dynamic response, while the interior cells appear to be very well represented by the ROM based on infinitely periodic media. Future work includes adjusting the element stiffness matrix formulation \cref{eq:beamK} for compatibility with 3D (and potentially composite) beam and plate elements. In addition, the optimization approaches for matching global quantities between the FEM and ROM can be adjusted so that lossy elements (viscoelastic materials) are allowed in the system (see \cref{appLoss}).
These future extensions would extend the modeling capability to 3D designs and enlarge the feasible design space.
\section*{Data Availability}
All data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request.
\section*{Acknowledgements}
The authors wish to thank US Army Research Laboratory for continued support throughout this effort. This research was supported by CC DEVCOM Army Research Laboratory through Cooperative Agreements W911NF-17-2-0173 and W911NF-20-2-0147.
\clearpage
\bibliographystyle{aipnum4-1}
\section{Introduction}
Dynamic mechanical metamaterials (MMs) feature sub-wavelength micro-structures that interact with stress waves, exhibiting exotic functionalities.
Numerous exciting potentials have been proposed for MMs, including wave attenuation\cite{Xiao2020,Baertsch2021}, negative refraction\cite{Ding2010,Seo2012,Li2016,Nemat-Nasser2015a}, cloaking\cite{Chen2007,Norris2011,Zhu2014,Cummer2016}, and insulators\cite{Oh2017,Matlack2018}.
The systematic design of metamaterials requires comprehensive understanding of the dynamic behaviors of MM itself as well as manufacturing design constraints. The first natural step is to find the eigenfrequency band structure and mode shapes of the design. The former encodes major characteristic information of a MM unit cell such as resonance frequencies, wave velocities, and band gaps, while the latter is needed to determine scattering in finite structures and to fully evaluate interactions between modes.
Common methodologies of band structure calculation include plane wave expansion (PWE)\cite{Wu2004g,Sridhar2017,Lu2017a,Oudich2014}, transfer matrix method (TMM)\cite{Mead1996,Junyi2015,Amirkhizi2018c}, and finite element method (FEM)\cite{Huang2021}.
In almost all cases, the design and analysis of these dynamic systems are plagued by the geometric complexity and computational burden. The major challenges include 1) unclear design-performance relationship and 2) expensive computational cost for calculating the dynamic response.
For design optimization purposes, a reduced order modeling (ROM) method is clearly more suitable as it allows for simple and fast computation limited to frequency range or modalities of interest.
In this paper we present a ROM approach for fast computation of MM problems, which can be used for different study setups, including eigenfrequency band calculation and computation of time dependent dynamic responses of finite structures.
The metamaterials that are considered here are assumed to be comprised of 2D micro-structural designs with beam-like elements. The materials are assumed to have no loss or gain mechanisms, but the inclusion of linear viscoelastic response is considered as a natural future expansion.
Detailed numerical and experimental studies on some similar micro-structures can be found in previous work\cite{Aghighi2019,Amirkhizi2018d}.
Extensive reduced order modeling techniques have been developed for vibration problems, e.g. dynamic condensation\cite{Kidder1973}, improved reduced system (IRS)\cite{Gordis1994}, and system equivalent reduction expansion process (SEREP)\cite{Avitable1989}. These reduction methods in general employ certain transformation matrices that map the full set of degrees of freedom (DOFs) to a reduced set of DOFs.
For wave propagation problems, especially metamaterial problems, the existing model order reduction methods are limited and less applicable because the system matrices and the eigenfunctions are dependent on the wavevector.
The wavevector-dependence leads to the frequency and mode variation in the band structures and is a key element in metamaterial dynamics. Therefore, novel reduction schemes that can preserve the wavevector dependence and band accuracy are needed for metamaterial problems. To this end, Hussein\cite{Hussein2009a} introduced reduced Bloch mode expansion (RBME) for fast computation of band structures. The RBME method employs selected Bloch eigenfunctions to reduce the dimensionality.
A similar method, Bloch mode synthesis (BMS) \cite{Krattiger2014,Krattiger2018a}, is an extended sub-structuring technique that describes the structural DOFs by normal and constraint modes. Both the RBME and BMS methods utilize selected eigen-modes to construct transformation matrices that reduce the size of the full matrices. These transformation-based methods effectively reduce the number of equations, but the resulting matrices are no longer representing the physical quantities (stiffness and inertia), therefore less suitable for geometric or material design problems. Additionally, these methods could not be applied to time/frequency domain computations of finite arrays. Nevertheless, these methods are have been shown useful for topology optimization\cite{Jung2020} in terms of reducing the computational cost.
An alternative scheme is to develop discrete models comprised of masses and springs.
The discretized mass-spring representation has been widely accepted in literature as it offers analytical formulations that simplify the computational effort while retaining essential physics. It has proven to be beneficial for various design aspects such as feasibility analysis \cite{Dertimanis2016}, reliability assessment \cite{Wagner2018}, and design space mapping \cite{Morris2022}. For higher order systems operating at high frequency ranges, an excellent example of modeling discrete weakly coupled MMs has been introduced by Matlack~\textit{et~al}. \cite{Matlack2018}. The model reduction is performed using the Schrieffer-Wolff transformation, so that the modes in the frequency range of interest are decoupled.
However, this method could only be applied to narrow-band dynamics.
While the mass-spring representation can significantly reduce the computational effort, certain vibration modes may exhibit mixed coupling between DOFs. To accurately capture such dynamics, the elastic spring elements cannot be simple 2-DOF elements.
In our previous work\cite{Amirkhizi2018d}, the elastic spring elements are physically represented as beams. The reduced stiffness and mass matrices can then be derived using simple strength of materials analysis. Such an approach provides analytical matrices that operate on physical DOFs and is naturally suitable for tuning the response via control of physical dimensions and material choices \cite{Morris2022}. They also make interpreting the modal physics straightforward, for example such a beam-based discrete model allows for accurate identification of the level repulsion\cite{Lu2018,Wang2021exceptional} and coupling between the DOFs. However, the selection of DOFs has been mostly a heuristic step that may affect the results.
In addition, approximating the structural components as standard beam elements does not generally match the actual response of the beam-like elements as accurately as needed.
The present work introduces a systematic implementation of the structural-element-based ROM approach that overcomes these limitations.
A generalized ROM procedure, parameterized in terms of effective structural stiffness parameters and discrete DOF inertia, is developed and is applicable to a large family of 3D printable MM designs.
The conceptual idea of the proposed ROM method takes advantage of the fact that the 3D printable MMs that operate as low frequency (long wavelength) locally resonant systems are often comprised of slender plate- or beam-like elements.
In addition, in most cases only the low frequency dynamics of the MM are of particular interest for practical applications, which reside in the subspace spanned by the lowest few eigen-modes, representable using a few carefully selected DOFs.
Modeling the system with these ``master'' DOFs can hence reduce the computational cost while maintaining high fidelity of the underlying physics.
A structural assembly system with symbolic matrices is used to represent the repeating unit cell (RUC). By optimizing the energy fitness compared with numerical results, one can find the effective stiffness and inertia parameters of this structural assembly. Such a ROM unit cell can accurately predict the eigenfrequency band structures with minimal computational effort.
This approach improves upon existing model order reduction methods in handling problems in metamaterials by maintaining eigen-solution accuracy within the Brillouin zone and providing parameterized matrices. It incorporates the propagating nature of waves, rather than just the modal response of finite structures. In MM systems, small variations in geometry can drastically change the overall response. The use of an analytical model that characterizes the MM with a small number of parameters is therefore advantageous for understanding the influence of each component and fine-tuning the design. The resulting ROMs can also be extended to model finite-sized arrays, and the reduction in DOFs can accelerate the computation process significantly, especially in time dependent problems.
This paper is organized as follows. The general procedure of ROM development is first introduced in \cref{sec:procedure}. Then, two examples are given in \cref{sec:examples}, showing the accurately reproduced band structures. Finally, further uses of the proposed ROM approach are discussed in \cref{sec:apps}.
The conclusions and future outlook of this work are discussed in \cref{sec:conclusion}.
With accurate construction of cellular discrete models, the proposed approach will lead to efficient modeling and design discovery of mechanical metamaterials.
\section{Formulation and Procedure}\label{sec:procedure}
\subsection{Governing equations}
The first natural step to study a MM design is to obtain the eigenfrequency band structure.
Based on the Bloch-Floquet theorem for wave propagation problems, the spatial domain is reduced to a single repeating unit cell (RUC). All fields, including the displacement field, must satisfy the Floquet periodicity:
\begin{equation}\label{eq:per}
\vec{u}(\vec{x} + (s_1\vec{a}_1 + s_2 \vec{a}_2), t) =\vec{u}_0 (\vec{x}) \exp[\mathrm{i} (\omega t-\vec{k} \cdot (s_1\vec{a}_1 + s_2 \vec{a}_2))],
\end{equation}
where~$\vec{u}$ is the complex displacement vector field, the real part of which is the physical displacement vector field, $\mathrm{i}=\sqrt{-1}$,~$\omega=2\pi f$ is the angular frequency, $\vec{k}=[k_x,k_y]$ is the wavevector in~$x-y$ plane, $s_{1,2}$ are integers indicating different cells, and~$\vec{a}_{1,2}$ are the primitive translation vectors for a 2D unit cell, respectively. The eigenfrequency problem can be written as:
\begin{equation}\label{eq:eigf}
\left[\vec{K}(\vec{k})-\omega^2\vec{M}\right]\vec{u}_0=\vec{0},
\end{equation}
where~$\vec{K}$ and $\vec{M}$ are the stiffness and mass matrices,~$\omega^2$ is the eigenvalue, and~$\vec{u}_0$ is the eigenfunction (mode shape). The detailed development of~\cref{eq:eigf} can be found in literature\cite{Hussein2009a}. Here the matrix~$\vec{K}$ is dependent on wavevector~$\vec{k}$ since the Floquet condition is applied to the RUC.
The Floquet condition~\cref{eq:per} and the eigenfrequency problem~\cref{eq:eigf} are the general setups used in finite element methods to find band structure and mode shapes, and have nearly identical counterparts in the ROM as well. A collection of eigen-modes can be organized in the matrix $\vec{\Phi}$. Considering the~$m$-th eigen-mode solution, with frequency~$\omega_m$ and mode shape~$\vec{\Phi}_{m}$ (the $\vec{u}_0$ solution for the $m$-th mode), the time-averaged kinetic and strain energies for this mode are:
\begin{equation}\label{eq:Tm}
\begin{split}
{\overline{T}}_m &=\frac{\omega_m}{2\pi}\int_0^{\frac{2\pi}{\omega_m}}\frac{1}{2}\Re\left[\mathrm{i} \omega_m e^{\mathrm{i} \omega_m t}\vec{\Phi}_{m}^\top\right] \Re\left[\mathrm{i} \omega_m e^{\mathrm{i} \omega_m t}\vec{M} \vec{\Phi}_{m}\right] \mathrm{d} t\\
&=\frac{1}{4}\Re[\omega_m \vec{\Phi}_{m}^\top \left(\omega_m\vec{M} \vec{\Phi}_{m}\right)^*]\\
&=\frac{1}{4}\omega_m^2 \vec{\Phi}_{m}^\dagger\vec{M} \vec{\Phi}_{m},
\end{split}
\end{equation}
\begin{equation}\label{eq:Vm}
\begin{split}
{\overline{V}}_{m}&=\frac{\omega_m}{2\pi}\int_0^{\frac{2\pi}{\omega_m}}\frac{1}{2}\Re\left[ e^{\mathrm{i} \omega_m t}\vec{\Phi}_{m}^\top\right] \Re\left[ e^{\mathrm{i} \omega_m t}\vec{K} \vec{\Phi}_{m}\right] \mathrm{d} t\\
&=\frac{1}{4}\Re[\vec{\Phi}_{m}^\top \left(\vec{K} \vec{\Phi}_{m}\right)^*]\\
&=\frac{1}{4} \vec{\Phi}_{m}^\dagger\vec{K} \vec{\Phi}_{m},
\end{split}
\end{equation}
where~$*$ is complex conjugate, and~$\dagger$ is conjugate transpose. Here \cref{eq:Tm} and \cref{eq:Vm} are valid for lossless systems for which~$\vec{K}$ and~$\vec{M}$ are Hermitian matrices.
Based on the modal orthogonality, we further have
\begin{equation}\label{eq:mt}
\overline{\vec{T}}=\mathrm{diag}[\overline{T}_1,\dots,\overline{T}_{n_m} ]=\frac{1}{4}\vec{\Phi}^\dagger\vec{M}\vec{\Phi}\vec{\omega}^2,
\end{equation}
\begin{equation}\label{eq:mv}
\overline{\vec{V}}=\mathrm{diag}[\overline{V}_1,\dots,\overline{V}_{n_m} ]=\frac{1}{4}\vec{\Phi}^\dagger\vec{K}\vec{\Phi},
\end{equation}
where~$\vec{\omega}=\mathrm{diag}[\omega_1,\dots,\omega_{n_m}]$ for the lowest~$n_m$ modes of interest.
The modal energy matrices~$\overline{\vec{T}}$ and~$\overline{\vec{V}}$ are global quantities that can be evaluated in finite element solvers.
Notice, again, that the energy derivations \crefrange{eq:Tm}{eq:mv} are only valid for fully linear elastic systems, and the proposed method in this paper is developed based on such assumptions. For systems with loss elements (viscoelastic components), certain modifications are suggested in order to construct the ROM, see \cref{appLoss}.
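For concreteness, the following minimal sketch (Python/NumPy, with hypothetical small matrices; all array names are illustrative) evaluates the modal energy matrices \crefrange{eq:mt}{eq:mv} from the eigen-solution of a small lossless discrete system and verifies that, as stated above, they come out diagonal and equal.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

# Hypothetical Hermitian stiffness and diagonal (lumped) mass matrices.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
K = A @ A.conj().T + 6 * np.eye(6)        # Hermitian positive definite
M = np.diag(rng.uniform(1.0, 2.0, 6))     # lumped mass matrix

# Generalized eigenproblem K phi = omega^2 M phi.
lam, Phi = eigh(K, M)                     # lam = omega^2, columns of Phi are modes
omega = np.sqrt(lam)

# Time-averaged modal kinetic and strain energy matrices.
T_bar = 0.25 * (Phi.conj().T @ M @ Phi) * lam   # (1/4) Phi^† M Phi omega^2
V_bar = 0.25 * (Phi.conj().T @ K @ Phi)         # (1/4) Phi^† K Phi

# For a lossless system both matrices are (numerically) diagonal and equal.
assert np.allclose(np.diag(np.diag(T_bar)), T_bar, atol=1e-8)
assert np.allclose(T_bar, V_bar, atol=1e-8)
\end{verbatim}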
\subsection{Model order reduction}
The governing equations introduced in the previous subsection are generally valid for both continuum systems and their discrete (reduced order) counterparts.
The essential idea of the proposed ROM method is to find the small-sized stiffness $\vec{K}^\mathrm{r}$ and mass $\vec{M}^\mathrm{r}$ matrices in such a way that the resulting global quantities~$\overline{\vec{T}}^\mathrm{r}$, $\overline{\vec{V}}^\mathrm{r}$, and~$\vec{\omega}^\mathrm{r}$ of the reduced system are preserved and directly associated with the continuum results, identified as~$\overline{\vec{T}}^\mathrm{c}$, $\overline{\vec{V}}^\mathrm{c}$, and~$\vec{\omega}^\mathrm{c}$. To achieve this, the matrix size reduction is performed by first down-sampling the continuum mode shapes~$\vec{\Phi}^\mathrm{c}$ at a set of $n_p$ primary nodal positions to obtain the sampled mode shapes $\vec{\Phi}^\mathrm{p}$, from which the ROM mode shapes~$\vec{\Phi}^\mathrm{r}$ are extracted.
Then, the effective ROM matrices $\vec{K}^\mathrm{r}$ and $\vec{M}^\mathrm{r}$ can be found in order to satisfy
\begin{equation}\label{eq:Tmatch}
\overline{\vec{T}}^\mathrm{r}=\frac{1}{4}\vec{\Phi}^{\mathrm{r}\dagger} \vec{M}^\mathrm{r} \vec{\Phi}^\mathrm{r}\vec{\lambda}^\mathrm{c}\approx\frac{1}{4}\vec{\Phi}^{\mathrm{p}\dagger} \vec{M}^\mathrm{p}\vec{\Phi}^\mathrm{p}\vec{\lambda}^\mathrm{c}\approx\overline{\vec{T}}^\mathrm{c}=\overline{\vec{V}}^\mathrm{c}
\end{equation}
\begin{equation}\label{eq:Vmatch}
\overline{\vec{V}}^\mathrm{r}=\frac{1}{4}\vec{\Phi}^{\mathrm{r}\dagger} \vec{K}^\mathrm{r} \vec{\Phi}^\mathrm{r}\approx\overline{\vec{T}}^\mathrm{r}.
\end{equation}
Here the target quantities to be identified are the ROM matrices~$\vec{M}^\mathrm{r},\vec{K}^\mathrm{r}$, and other quantities~$\vec{\lambda}^\mathrm{c}=(\vec{\omega}^\mathrm{c})^2,\vec{\Phi}^\mathrm{r},\overline{\vec{T}}^\mathrm{c},\overline{\vec{V}}^\mathrm{c}$ are obtained from continuum simulations.
Due to the known geometric layout and the domain knowledge (i.e., beam stiffness formulation), the ROM matrices are symbolically parameterized by a set of effective physical parameters that describe the structural and inertia features. In this proposed method, the effective ROM parameters are the beam stiffness parameters~$\vec{\beta}$ and the nodal inertia~$\vec{\mu}$.
Identification of~$\vec{\beta}$ and $\vec{\mu}$ for the beam elements and nodes will complete the construction of~$\vec{K}^\mathrm{r}(\vec{\beta},\vec{k})$ and~$\vec{M}^\mathrm{r}(\vec{\mu})$.
The full procedure of ROM construction is as follows:
\begin{enumerate}
\item Assign a set ($n_p$) of primary nodes in the continuum unit cell, from whose displacement and rotation values the full continuum mode shapes may be approximated;
\item Perform eigenfrequency simulations at a few~($n_k$) selected wavevectors to determine the frequency~$\vec{\omega}^\mathrm{c}$, down-sampled continuum mode shapes~$\vec{\Phi}^\mathrm{p}$, and the modal energies~$\overline{\vec{T}}^\mathrm{c}$,~$\overline{\vec{V}}^\mathrm{c}$ for the lowest~$n_m$ modes. The superscript~$\mathrm{c}$ indicates the global quantities measured from the continuum system, and the superscript~$\mathrm{p}$ denotes the down-sampled mode shapes;
\item Construct the symbolic stiffness $\vec{K}^\mathrm{f}(\vec{\beta})$ and mass $\vec{M}^\mathrm{f}(\vec{\mu})$ matrices based on the unit cell geometry (layout, connectivity, symmetry) and positions of the~$n_p$ primary nodes as well as those of the~$n_d$ dependent ones (associated with Floquet periodicity);
\item Apply Floquet boundary condition to the symbolic matrices so that the equations of motion only involve the DOFs at the~$n_p$ primary nodes, yielding the symbolic matrices $\vec{K}^\mathrm{p} (\vec{\beta},\vec{k})$ and $\vec{M}^\mathrm{p}(\vec{\mu})$;
\item Find the effective mass matrix~$\vec{M}^\mathrm{p}$ (i.e. parameters $\vec{\mu}$) by optimizing the kinetic energy fitness (matching the ROM values with the continuum~$\overline{\vec{T}}^\mathrm{c}$), for the lowest~$n_m$ modes, using the measured~$\vec{\omega}^\mathrm{c}$ and~$\vec{\Phi}^\mathrm{p}$;
\item Identify the slave DOFs (whose contribution to kinetic energy is negligible) and perform static condensation so that the number of studied DOFs is reduced (from $3n_p$ to~$n_r$). This step establishes the numerical-valued matrix~$\vec{M}^\mathrm{r}$ (a sub-matrix of $\vec{M}^\mathrm{p}$), the symbolic matrix~$\vec{K}^\mathrm{r}(\vec{\beta},\vec{k})$, and reduces the measured modes from~$\vec{\Phi}^\mathrm{p}$ to~$\vec{\Phi}^\mathrm{r}$;
\item Find the effective stiffness parameters~$\vec{\beta}$ by optimizing the potential energy fitness (matching the ROM modal matrix~$\overline{\vec{V}}^\mathrm{r}$ with the already determined~$\overline{\vec{T}}^\mathrm{r}$), for the lowest~$n_m$ modes, using~$\vec{\omega}^\mathrm{c}$ and~$\vec{\Phi}^\mathrm{r}$ and the established ROM mass matrix~$\vec{M}^\mathrm{r}$;
\item Use the established matrices~$\vec{K}^\mathrm{r}$ and~$\vec{M}^\mathrm{r}$ to compute the band structure, or adjust them for other types of problems.
\end{enumerate}
By matching the diagonalized modal matrices resulting from the symbolic discrete model and the FEM model, this process aims to construct a discretized lower order system that, at the selected wavevector~$\vec{k}$ points, inherits the continuum eigenvalues~$\vec{\omega}^\mathrm{c}$ and the associated mode shapes~$\vec{\Phi}^\mathrm{r}$ down-sampled from the continuum system.
With the ROM matrices $\vec{K}^\mathrm{r}(\vec{\beta},\vec{k})$ and~$\vec{M}^\mathrm{r}(\vec{\mu})$ symbolically parameterized instead of numerically found through the pseudo-inversion of \crefrange{eq:Tmatch}{eq:Vmatch}, the known structural knowledge is retained in the model. This allows for further analysis and optimization of the structural components in design efforts.
Since the number of these unknown parameters in~$\vec{\beta}$ and~$\vec{\mu}$ are limited, one does not need to optimize the matching of \crefrange{eq:Tmatch}{eq:Vmatch} for every wavevector~$\vec{k}$. Instead, the eigen-information from only a small number of wavevectors will be sufficient to determine the unknown ROM parameters and the number of needed simulations is small.
In addition, the extracted stiffness ($\vec{\beta}$) and inertia~($\vec{\mu}$) values for the discrete system are properly scaled to represent effective physical quantities due to the matched modal energies. Since the effective parameters~$\vec{\beta}$ and~$\vec{\mu}$ are of high physical fidelity and independent of wavevector~$\vec{k}$, one can easily compute the eigen-results at any arbitrary wavevector with the ROM matrices. Furthermore, the ROM can be easily extended to other types of computations, such as frequency or time domain problems, for which finite-sized arrays are modeled, and wavevector~$\vec{k}$ is not an explicit parameter.
In summary, such a model order reduction approach serves as both a parameter retrieval method that characterizes the continuum model as a discrete one, as well as a fast tool that accelerates the computation of eigen- or other dynamic problems.
In the next section of this paper, the detailed ROM construction steps are demonstrated through examples.
\section{Examples}\label{sec:examples}
\subsection{Node assignment and information collection}
To demonstrate the ROM parameterization process, two MM unit cells are selected. The square unit cell (\cref{fig:sh_geom}) features an H-shaped resonator mass, while the hexagonal cell (\cref{fig:h0_geom}) has two split resonators. The two RUCs are modeled in FEM using the same material, with Young's modulus~$E=\SI{300}{GPa}$, Poisson's ratio~$\nu=0.22$, and density~$\rho=\SI{3900}{kg/m^3}$. Both designs have the lattice constant~$a=\SI{10}{mm}$. One should first assign a set of $n_p$ nodes in the continuum model, where the deformation will be sampled and used for mode shape projections. These sampling points should, in principle, be located at the mass centers of structural components, or at the intersections between beam elements.
Further detailed analysis and principles of DOF selection are given in literature \cite{Kevin2015,Qu2004}.
The chosen sampling nodes in the two examples are denoted by the blue dots in \cref{fig:geom}. Notice that there is no node assigned at the top or right edge of the cell frames, due to the known Floquet periodicity~\cref{eq:per} for infinite arrays. The mode shapes~$\vec{\Phi}^\mathrm{p}$ will be represented by the deformation vector~$\vec{u}^\mathrm{p}$ containing the displacement~$u_x,u_y$ in~$x-y$ plane and the rotation~$\theta_z$ at the chosen nodes. The rotational component is derived from the curl of the displacement field~$\theta_z=(\frac{\partial u_y}{\partial x}-\frac{\partial u_x}{\partial y})/2$.
\begin{figure}[!ht]
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=140pt]{figures/sh_geom}
\caption{Square MM \label{fig:sh_geom}}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=140pt]{figures/h0_heom}
\caption{Hexagonal MM\label{fig:h0_geom}}
\end{subfigure}
\caption{Unit cell geometry of the selected examples. The blue dots denote the selected~$n_p$ nodes where the mode shapes~$\vec{\Phi}^\mathrm{p}$ are sampled from FEM solutions at selected wavevectors. The base material is alumina ($E=\SI{300}{GPa},\nu=0.22,\rho=\SI{3900}{kg/m^3}$).
\label{fig:geom}}
\end{figure}
Then, one performs finite element simulations at a few selected~$\vec{k}$ points in the irreducible Brillouin zone (IBZ). Preferably, these~$\vec{k}$ points should be far away from each other. In the shown examples here, we select~$n_k=4$ points at~$\vec{k}=[0,0],[\pi/a,0],[0,\pi/a],[\pi/a,\pi/a]$. At each wavevector point, one collects the eigenfrequency results for the lowest~$n_m$ modes. The number~$n_m$ is determined such that: (1) the~$n_m$-th frequency~$f_{n_m}$ covers the frequency range of interest and (2) the locations associated with dominant deformations for the lowest~$n_m$ modes are included in the pre-selected nodes.
We chose~$n_m=6$ for the square cell and~$n_m=8$ for the hexagonal one. In this step, the collected information includes the diagonal frequency matrix~$\vec{\omega}^\mathrm{c}\in\mathbb{R}^{n_m\times n_m}$, the diagonal kinetic energy matrix~$\overline{\vec{T}}^\mathrm{c}\in\mathbb{R}^{n_m\times n_m}$, the diagonal potential energy matrix~$\overline{\vec{V}}^\mathrm{c}\in\mathbb{R}^{n_m\times n_m}$, and the mode shape matrix~$\vec{\Phi}^\mathrm{p}\in\mathbb{C}^{3n_p\times n_m}$.
\subsection{Matrix construction}
After the FEM data collection, the ROM matrices can be constructed symbolically based on the selected nodes, including the~$n_p$ primary ones and the~$n_d$ dependent ones, based on the known connectivity and beam element stiffness formulation (see \cref{eq:beamK} in \cref{appBeam}). The dependent nodes are those whose kinematics are determined by the Floquet periodicity, yet which are connected by a structural element to one of the primary nodes. The graphic representations of the ROM unit cells are shown in \cref{fig:rom_ruc_dof}.
For the square unit cell in~\cref{fig:sh_ruc_dofs}, the dependent DOFs are~$\vec{u}^\mathrm{d}=[\vec{u}^{(3)},\vec{u}^{(8)},\vec{u}^{(9)}]^\top$. For the hexagonal unit cell in~\cref{fig:h0_ruc_dofs}, the dependent DOFs are~$\vec{u}^\mathrm{d}=[\vec{u}^{(3)},\vec{u}^{(4)}]^\top$.
Notice that some edge elements (between dependent nodes) are not included in order to eliminate redundancy in the periodically generated array. For example, nodes 8 and 9 are not directly connected in \cref{fig:sh_ruc_dofs}.
Nevertheless, the structures shown in \cref{fig:rom_ruc_dof} are primitive unit cells whose 2D repetitions will reproduce the infinitely periodic system perfectly.
Each node (denoted by a black dot) has three inertia parameters~$m_x,m_y=m_x$ (mass) and~$I_z$ (rotational inertia). The force-balance relation between two connected nodes is approximated based on beam analysis, introduced in \cref{appBeam}. In this formulation, the beam element is allowed to have an asymmetric layout, and only four independent stiffness parameters~$\beta_{1,2,5,7}$ (diagonal components of the stiffness matrix) are needed to construct the local stiffness matrix. The other components are statically determined. The six force and moment components are related to the six displacement and rotation quantities through the symbolic stiffness matrix, in which four independent stiffness values need to be found. Such a form is not only compatible with standard beam elements (Euler-Bernoulli, Timoshenko), but also suitable for any generalized 1D structural component with two end nodes.
To obtain the global stiffness matrix, each force-balance relation is first converted into the global coordinate system; See~\cref{appBeam}. Based on the equilibrium of the overall structure, the global stiffness matrix is obtained by summing all the loads arising from the adjacent elements for each node\cite{ferreira2008matlab}. The static balance equations then read
\begin{equation}
\vec{K}^\mathrm{f}\vec{u}^\mathrm{f}=\vec{K}^\mathrm{f}\begin{bmatrix}
u^{(1)}_x& u^{(1)}_y& \theta^{(1)}_z&\cdots&u^{(n_p+n_d)}_x& u^{(n_p+n_d)}_y& \theta^{(n_p+n_d)}_z
\end{bmatrix}^\top=\vec{F},
\end{equation}
where $\vec{F}$ is the nodal loading and the superscript is the node index. The symbolic matrix~$\vec{K}^\mathrm{f}$ gives the constitutive description of the full unanchored structure, with free boundary conditions. The associated mass matrix is a diagonal matrix~$\vec{M}^\mathrm{f}=\mathrm{diag}[\vec{\mu}]=\mathrm{diag}[m^{(1)}_x,\dots,I^{(n_p+n_d)}_z]$. Then, the unit cell structure is effectively parameterized by the unknown stiffnesses~$\vec{\beta}$ and inertia values $\vec{\mu}$.
To apply the Bloch-Floquet periodicity condition, one can first describe the full set of DOFs~$\vec{u}^\mathrm{f}$ by a transformed version of the DOFs at the primary nodes~$\vec{u}^\mathrm{p}$:
\begin{equation}
\vec{u}^\mathrm{f}=\vec{P}(\vec{k})\vec{u}^\mathrm{p},
\end{equation}
where~$\vec{P}(\vec{k})$ is a rectangular transformation matrix containing the phase differences between the dependent and primary DOFs, determined by the nodal positions and the wavevector. A detailed discussion on applying the periodicity can be found in \cite{Krattiger2018a}.
For the periodic unit cells, these dependent DOFs are removed from the equations of motion, and only the primary DOFs are necessary:
\begin{equation}\label{eq:eom_periodic}
\vec{K}^\mathrm{p} (\vec{k}) \vec{u}^\mathrm{p}-\vec{\omega}^2 \vec{M}^\mathrm{p}\vec{u}^\mathrm{p}=\vec{0},
\end{equation}
where~$\vec{K}^\mathrm{p}=\vec{P}^\dagger \vec{K}^\mathrm{f}\vec{P}$,~$\vec{M}^\mathrm{p}=\vec{P}^\dagger \vec{M}^\mathrm{f}\vec{P}$ are the matrices for the primitive cell, and~$\dagger$ is Hermitian transpose. At this stage, the equations of motion \cref{eq:eom_periodic} effectively describe the dynamics of the discretized primitive unit cells, and the involved nodes are identical to the ones marked in \cref{fig:geom}.
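As a minimal illustration of this reduction (not one of the unit cells studied here), consider a toy one-dimensional chain with a single primary node and one dependent node; the stiffness, mass, and lattice values below are hypothetical.
\begin{verbatim}
import numpy as np

a = 0.01                       # lattice constant [m] (toy value)
k = np.pi / (2 * a)            # a sample Bloch wavenumber

# Toy "free" unit cell: two nodes connected by one axial spring.
kappa = 1.0e6                  # spring stiffness [N/m] (hypothetical)
K_f = kappa * np.array([[ 1.0, -1.0],
                        [-1.0,  1.0]])
M_f = np.diag([0.5e-3, 0.0])   # all inertia lumped at the primary node (toy choice)

# Node 1 is primary, node 2 (at x = a) is dependent: u2 = u1 * exp(-i k a).
P = np.array([[1.0],
              [np.exp(-1j * k * a)]])

# Primitive-cell matrices acting on the primary DOF only.
K_p = P.conj().T @ K_f @ P
M_p = P.conj().T @ M_f @ P

# One-DOF dispersion relation omega(k) of the resulting chain.
omega = np.sqrt(np.real(K_p[0, 0] / M_p[0, 0]))
print(f"omega(k={k:.1f} 1/m) = {omega:.1f} rad/s")
\end{verbatim}
The reduced one-DOF system reproduces the classic monoatomic-chain dispersion $\omega^2=2\kappa(1-\cos ka)/m$, which is exactly the role the transformation $\vec{P}(\vec{k})$ plays for the richer unit cells considered in this work.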
\begin{figure}[!ht]
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=140pt]{figures/sh_ruc_dofs}
\caption{\label{fig:sh_ruc_dofs}}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=140pt]{figures/h0_ruc_dofs}
\caption{\label{fig:h0_ruc_dofs}}
\end{subfigure}
\caption{ Beam assemblies as the reduced order unit cells for the (\subref{fig:sh_ruc_dofs}) square and (\subref{fig:h0_ruc_dofs}) hexagonal MMs. The red arrows denote the reduced inertia DOFs. \label{fig:rom_ruc_dof}}
\end{figure}
Taking advantage of the geometrical symmetries in the continuum model, the number of unknown parameters in the ROM property matrices can be reduced. For example, beam 5-6 and beam 7-6 are symmetric with respect to node 6 in~\cref{fig:sh_ruc_dofs}. Then, node 5 will have the same mass and rotational inertia as node 7. The two beams will share the same local stiffness matrix as well (in the un-rotated local coordinate system). Under this idealized description, the structural symmetries lead to degeneracy in the eigenvalues. However, any physical or numerical realization of such systems will have the tendency to become non-degenerate due to any small asymmetry.
\subsection{Inertia quantification and DOF reduction}
With the symbolic ROM matrices developed, the next step is to find the effective mass and inertia values of the selected nodes. Considering the kinetic energy formulation at the~$(j)$-th $\vec{k}$ point, the ROM values are expected to be identical to the continuum ones:
\begin{equation}\label{eq:mt_e}
\frac{1}{4}\vec{\lambda}^\mathrm{c}_{(j)}\vec{\Phi}^{\mathrm{p}\dagger}_{(j)}\vec{M}^\mathrm{p}(\vec{\mu})\vec{\Phi}^\mathrm{p}_{(j)}
= \overline{\vec{T}}^\mathrm{c}_{(j)}\in\mathbb{R}^{n_m\times n_m}.
\end{equation}
As is indicated by the rhs of this equation, this is an $n_m \times n_m$ matrix equation (for each $j$ value), whose purpose is to find the matrix~$\vec{M}^\mathrm{p}(\vec{\mu})$, assumed diagonalizable by the down-sampled eigenvectors~$\vec{\Phi}^\mathrm{p}_{(j)}$, with the known $n_m \times n_m$ diagonal eigenvalue matrix~$\vec{\lambda}^\mathrm{c}_{(j)}=(\vec{\omega}^\mathrm{c}_{(j)})^2$ and kinetic energy~$\overline{\vec{T}}^\mathrm{c}_{(j)}$.
As this is clearly mathematically over-determined, an error vector can be defined as
\begin{equation}\label{eq:masserror}
\vec{e}_{(j)} (\vec{\mu})=\mathrm{UT}\left[\frac{1}{4}\vec{\lambda}^\mathrm{c}_{(j)}\vec{\Phi}^{\mathrm{p}\dagger}_{(j)}\vec{M}^\mathrm{p}(\vec{\mu})\vec{\Phi}^{\mathrm{p}}_{(j)}
- \overline{\vec{T}}^\mathrm{c}_{(j)}\right],
\end{equation}
where~$\mathrm{UT}[\cdot]$ denotes the vector containing all the upper triangular entries of a matrix.
Combining the results at all the~$n_k$ wavevectors, the error vector is then $\vec{E}=[\vec{e}_{(1)}\dots\vec{e}_ {(n_k)}]$.
This problem can be stated as a constrained linear least-squares problem
\begin{equation}
\begin{aligned}
\min_{\vec{\mu}} \quad & ||\vec{E}(\vec{\mu})||_2^2\\
\textrm{s.t.} \quad & \vec{\mu} \geq 0 \\
\end{aligned}
\end{equation}
where $||\cdot||_2$ indicates the~$L_2$ norm, and the condition on $\vec{\mu}$ is understood to apply to each component independently. In practice, to guarantee the equal weight of each mode and each wavevector, all the mode shapes should be pre-normalized so that their kinetic energies are equal.
Nevertheless, the off-diagonal components in the modal matrix~$\overline{\vec{T}}^\mathrm{c}$ will remain zero.
The optimization problem is solved using the \textit{lsqlin} function in MATLAB\textsuperscript{\textregistered}. \Cref{fig:mass_fitting} shows the resulting energy fitness after optimization. The blue lines denote the right hand side of \cref{eq:mt_e}, i.e., the kinetic energy matrix components in FEM results. The red lines represent the approximated ROM results. Good accuracy is observed in both cases. By using the simulation results at more than one~$\vec{k}$ point (in this case,~$n_k=4$), the real and imaginary parts of the upper triangular components together lead to $n_k n_m (n_m+1)$ real equations for finding the unknown inertia parameters.
Furthermore, the lowest two modes at~$\vec{k}=\Gamma=[0,0]$ are rigid body modes with zero eigenfrequencies. Inclusion of the~$\Gamma$ point in this process will automatically guarantee that the solved solution satisfies mass conservation.
In addition, the parallel axis theorem could also be enforced in this optimization process, either as a constraint or as a part of the objective function. Based on this theorem, the discrete unit cell is expected to have the same total moment of inertia as the continuum model. In the given examples, it is found that introducing the parallel axis theorem would add a small error (8\%) to the kinetic energy match. The root of this discrepancy is in the selected locations of the ROM nodes and can potentially be overcome by allowing these locations to be found as part of this optimization process. However, the extra computational cost and complexity of this process far outweigh its benefit to accuracy. Therefore, this additional consideration is not enforced here. Nevertheless, the resulting band structure and dynamic response of the ROMs are still very close matches of the continuum solutions, indicating high physical fidelity, as shown in the later sections.
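A minimal sketch of this inertia identification step is given below, assuming a purely diagonal (lumped) mass parameterization and synthetic stand-ins for the FEM modal data; \textit{scipy.optimize.lsq\_linear} plays the role of MATLAB's \textit{lsqlin}, and all sizes, names, and values are illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import lsq_linear

# Toy stand-ins for the FEM inputs (hypothetical sizes and values).
n_dof, n_m, n_par = 4, 3, 4            # sampled DOFs, modes kept, unknown inertias
rng = np.random.default_rng(1)
Phi_p = rng.standard_normal((n_dof, n_m)) + 1j * rng.standard_normal((n_dof, n_m))
lam_c = np.diag(rng.uniform(1.0, 4.0, n_m))       # omega_c^2 (diagonal)
mu_true = rng.uniform(0.5, 2.0, n_par)            # "exact" inertias, to be recovered

def M_p(mu):
    # Hypothetical linear parameterization: one lumped inertia per sampled DOF.
    return np.diag(mu)

def ut(A):
    # Upper-triangular entries, real and imaginary parts stacked into a vector.
    iu = np.triu_indices(A.shape[0])
    return np.concatenate([A[iu].real, A[iu].imag])

# Synthetic "continuum" kinetic-energy matrix produced by the true parameters.
T_c = 0.25 * lam_c @ Phi_p.conj().T @ M_p(mu_true) @ Phi_p

# Because M_p is linear in mu, the residual of the error vector is A @ mu - b.
cols = [ut(0.25 * lam_c @ Phi_p.conj().T @ M_p(e_i) @ Phi_p)
        for e_i in np.eye(n_par)]
A = np.column_stack(cols)
b = ut(T_c)

# Non-negative linear least squares (the role played by lsqlin in the paper).
res = lsq_linear(A, b, bounds=(0.0, np.inf))
print("recovered inertias:", np.round(res.x, 3), " true:", np.round(mu_true, 3))
\end{verbatim}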
\begin{figure}[!ht]
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=120pt]{figures/sh_mass_fitting}
\caption{\label{fig:sh_mass}}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=120pt]{figures/h0_mass_fitting}
\caption{\label{fig:h0_mass}}
\end{subfigure}
\caption{Kinetic energy matrix components for the (\subref{fig:sh_mass}) square and (\subref{fig:h0_mass}) hexagonal MMs after optimization. The first and second halves of the equations are the real and imaginary parts of the kinetic energy matrix, respectively. \label{fig:mass_fitting}}
\end{figure}
Although three DOFs ($u_x,u_y,\theta_z$) are sampled at each node, not all of them have the same importance. Certain DOFs may become apparent in the mode shapes only when the frequency is high enough, and certain DOFs may be associated with negligible inertia. With the mass matrix found, the mode shapes~$\vec{\Phi}^\mathrm{p}$ can be re-scaled so that
\begin{equation}\label{eq:Tnorm}
\frac{1}{4} \lambda^\mathrm{c}_{m(j)} \vec{\Phi}^{\mathrm{p}\dagger}_{m(j)}\vec{M}^\mathrm{p}\vec{\Phi}^\mathrm{p}_{m(j)}=1 \ \qquad \mathrm{for\ any\ }m,(j)
\end{equation}
where~$\lambda=\omega^2$ is the eigenvalue, and subscript~$_{m(j)}$ indicates the~$m$-th mode eigenvalue at~$(j)$-th wavevector location.
Then, the weight of the~$i$-th DOF is evaluated as the averaged kinetic energy in the log scale:
\begin{equation}
W_i=\log_{10}\dfrac{\sum_{m=1}^{n_m}\sum_{j=1}^{n_k} \frac{1}{4} \lambda^\mathrm{c}_{m(j)}
\Phi^{\mathrm{p}*}_{im(j)}
M^{\mathrm{p}}_{ii}
\Phi^{\mathrm{p}}_{im(j)}}
{n_m n_k}.
\end{equation}
\Cref{fig:dof_weights} shows the relative weights of all the DOFs for the two MM examples. It is apparent that certain DOFs with weight~$\leq-4$ should be eliminated and be regarded as ``slave'' DOFs. For example, the ninth DOF of the square MM (the rotation at node 4) has negligible weight and is not active in the considered frequency range. Only the active ``master'' DOFs will be kept in the ROM formulations, and they are indicated by the red arrows in~\cref{fig:rom_ruc_dof}.
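The weight computation itself is a simple post-processing step; a minimal sketch with hypothetical modal data follows (for brevity the mode shapes are not pre-normalized here, so the $-4$ cut is applied relative to the largest weight, purely for illustration).
\begin{verbatim}
import numpy as np

# Toy modal data (hypothetical): n_dof sampled DOFs, n_m modes at n_k wavevectors.
rng = np.random.default_rng(2)
n_dof, n_m, n_k = 6, 4, 4
Phi = rng.standard_normal((n_k, n_dof, n_m)) + 1j * rng.standard_normal((n_k, n_dof, n_m))
lam = rng.uniform(1.0, 9.0, (n_k, n_m))            # omega^2 per mode and wavevector
M_diag = rng.uniform(1e-4, 1e-3, n_dof)            # diagonal of the fitted mass matrix

# Averaged kinetic energy carried by each DOF, in log10 scale (the weight W_i).
E = 0.25 * lam[:, None, :] * M_diag[None, :, None] * np.abs(Phi) ** 2
W = np.log10(E.sum(axis=(0, 2)) / (n_m * n_k))

slave = np.where(W <= W.max() - 4)[0]              # candidate slave DOFs (threshold is a choice)
print("DOF weights:", np.round(W, 2), "  slave DOFs:", slave)
\end{verbatim}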
Deleting the slave DOFs from~$\vec{\Phi}^\mathrm{p}$ leads to the reduced mode shapes~$\vec{\Phi}^\mathrm{r}$, which is a subset of continuum data and is expected to be the eigenvectors of the ROM matrices.
The removal of these slave DOFs follows the standard static condensation\cite{GUYAN1965}, as the associated inertia values are effectively zero. One can re-arrange the stiffness matrix as
$$
\vec{K}^\mathrm{p}=\begin{bmatrix}
\vec{K}_{\mathrm{mm}} & \vec{K}_{\mathrm{ms}} \\
\vec{K}_{\mathrm{sm}} & \vec{K}_{\mathrm{ss}}
\end{bmatrix}.
$$
Here the subscripts denote ``m''aster (not to be confused with the index $m$ used earlier to indicate modes) and ``s''lave.
The columns and rows related to slave DOFs in the mass matrix~$\vec{M}^\mathrm{p}$ are deleted, and it leads to the reduced mass matrix~$\vec{M}^\mathrm{r}$. The reduced (still symbolic) stiffness matrix is obtained by
\begin{equation}\label{eq:guyan}
\vec{K}^\mathrm{r}=\vec{K}_{\mathrm{mm}}-\vec{K}_{\mathrm{ms}}\vec{K}_{\mathrm{ss}}^{-1}\vec{K}_{\mathrm{sm}}.
\end{equation}
In practice, the inversion of symbolic sub-matrix~$\vec{K}_{\mathrm{ss}}$ is computationally challenging. However, this step can be equivalently implemented using Gaussian elimination.
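A numerical (rather than symbolic) sketch of the condensation step \cref{eq:guyan}, applied to a hypothetical five-DOF stiffness matrix, reads:
\begin{verbatim}
import numpy as np

# Toy partitioned stiffness matrix (hypothetical values); indices 0-2 are master
# DOFs, indices 3-4 are slave DOFs with negligible inertia.
rng = np.random.default_rng(3)
B = rng.standard_normal((5, 5))
K = B @ B.T + 5 * np.eye(5)                 # symmetric positive definite

master, slave = [0, 1, 2], [3, 4]
K_mm = K[np.ix_(master, master)]
K_ms = K[np.ix_(master, slave)]
K_sm = K[np.ix_(slave, master)]
K_ss = K[np.ix_(slave, slave)]

# Guyan/static condensation: K_r = K_mm - K_ms K_ss^{-1} K_sm.
# Using a linear solve instead of an explicit inverse mirrors the Gaussian
# elimination remark above.
K_r = K_mm - K_ms @ np.linalg.solve(K_ss, K_sm)
print(np.round(K_r, 3))
\end{verbatim}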
\begin{figure}[!ht]
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=110pt]{figures/sh_dof_weights}
\caption{\label{fig:sh_dofwei}}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=110pt]{figures/h0_dof_weights}
\caption{\label{fig:h0_dot_wei}}
\end{subfigure}
\caption{Weights of the DOFs (in log scale) for the (\subref{fig:sh_dofwei}) square and (\subref{fig:h0_dot_wei}) hexagonal MMs. \label{fig:dof_weights}}
\end{figure}
\subsection{Stiffness parameter extraction}
To ensure that the continuum eigenfrequencies and mode shapes can be accurately reproduced by the ROM, the modal potential energy must equal the modal kinetic energy. The ideal set of stiffness parameters~$\vec{\beta}$ can therefore be found by optimizing the potential energy fitness. The error in energy at the~$(j)$-th wavevector location is defined as
\begin{equation}\label{eq:betaerror}
\vec{e}_ {(j)}(\vec{\beta})=\mathrm{UT}\left[\frac{1}{4}\vec{\Phi}^{\mathrm{r}\dagger}_{(j)}\vec{K}^\mathrm{r}(\vec{\beta})\vec{\Phi}^\mathrm{r}_{(j)} -\frac{1}{4}\vec{\Phi}^{\mathrm{r}\dagger}_{(j)}\vec{M}^\mathrm{r}\vec{\Phi}^\mathrm{r}_{(j)}\vec{\lambda}^\mathrm{c}_{(j)}\right],
\end{equation}
where the superscript~$r$ denotes quantities associated with the reduced set of master DOFs. The error vector for all~$n_k$ wavevector locations is then~$\vec{E}=[\vec{e}_{(1)}\dots\vec{e}_{(n_k)}]$.
The optimization problem is then formulated as:
\begin{equation}\label{eq:Vopt}
\begin{aligned}
\min_{\vec{\beta}} \quad & {||\vec{E}(\vec{\beta})||_2}\\%/{||\vec{E}_r||_2} \\
\textrm{s.t.} \quad & \vec{\beta}>\vec{0}, \\
\end{aligned}
\end{equation}
where the constraint on $\vec{\beta}$ is understood as positivity for every single $\beta$ parameter in the structure; See \cref{appBeam} for detail.
This process also ensures the diagonality of the potential energy in ROM formulation.
The mode shapes are normalized based on~\cref{eq:Tnorm} to ensure equal weights in all the modes. Notice that due to the static condensation process \cref{eq:guyan}, the stiffness matrix~$\vec{K}^\mathrm{r}$ and the error vector~$\vec{E}$ are no longer linear functions of the stiffness~$\vec{\beta}$. Furthermore, the stiffness parameters need to be rescaled properly due to numerical considerations. For example, the rotational stiffnesses have different units than the axial or translational ones. Therefore, it is beneficial to express the stiffness as
\begin{equation}
\vec{\beta}=\vec{\alpha}\circ\hat{\vec{\beta}}
\end{equation}
where~$\circ$ is the element-wise (Hadamard) product, and~$\vec{\alpha}$ is a dimensionless stiffness ratio vector. The vector~$\hat{\vec{\beta}}$ contains the estimated stiffness values based on the beam geometry (length, height), which can be derived using the standard formulas of the Timoshenko beam stiffness matrix. Then, \cref{eq:Vopt} can be re-written as
\begin{equation}
\begin{aligned}
\min_{\vec{\alpha}} \quad & {||\vec{E}(\vec{\alpha})||_2}\\%{||\vec{E}_r||_2} \\
\textrm{s.t.} \quad & \vec{\alpha}>\vec{0} . \\
\end{aligned}
\end{equation}
Such a problem can be initialized from~$\vec{\alpha}=\vec{1}$ and is solvable using the \textit{fmincon} function in MATLAB\textsuperscript{\textregistered}. Furthermore, the effects of deviations from these initial estimates are of similar order of magnitude, which is numerically preferred. \Cref{fig:beta} shows the error (cost) convergence for the two examples considered here. For the square MM (\cref{fig:sh_beta}), it takes more iterations to reach the minimum. However, both cases present decently low errors in the final iterations.
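The following minimal sketch mirrors this step with \textit{scipy.optimize.minimize} in place of \textit{fmincon}; the reduced mode shapes, the nonlinear dependence of the condensed stiffness matrix on $\vec{\alpha}$, and the target energy matrix are all synthetic stand-ins rather than the actual unit cell quantities.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# Toy stand-ins (hypothetical sizes, values, and parameterization).
rng = np.random.default_rng(4)
n_r, n_m, n_par = 4, 3, 3
Phi_r = rng.standard_normal((n_r, n_m))       # reduced mode shapes (real, for brevity)
beta_hat = np.array([1.0e5, 2.0e5, 5.0e4])    # a-priori beam stiffness estimates

def K_r(alpha):
    # Hypothetical condensed stiffness matrix, nonlinear in the ratios alpha
    # (standing in for the symbolic matrix obtained after static condensation).
    b = alpha * beta_hat
    return np.diag(np.concatenate([b, [b[0] * b[1] / (b[0] + b[1])]]))

# In the actual procedure the target below is (1/4) Phi^† M^r Phi lambda^c; here it
# is synthesized from known ratios so that the toy fit has a recoverable answer.
alpha_true = np.array([1.3, 0.8, 1.1])
T_r = 0.25 * Phi_r.T @ K_r(alpha_true) @ Phi_r

iu = np.triu_indices(n_m)
def cost(alpha):
    V_r = 0.25 * Phi_r.T @ K_r(alpha) @ Phi_r
    return np.linalg.norm((V_r - T_r)[iu])

# Positivity-constrained minimization initialized at alpha = 1 (the role of fmincon).
res = minimize(cost, x0=np.ones(n_par), bounds=[(1e-6, None)] * n_par,
               method="L-BFGS-B")
print("fitted alpha:", np.round(res.x, 3), " reference:", alpha_true)
\end{verbatim}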
\begin{figure}[!ht]
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=110pt]{figures/sh_beta_training}
\caption{\label{fig:sh_beta}}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=110pt]{figures/h0_beta_training}
\caption{\label{fig:h0_beta}}
\end{subfigure}
\caption{Potential energy fitness optimization convergence plots for (\subref{fig:sh_beta}) square and (\subref{fig:h0_beta}) hexagonal unit cells. \label{fig:beta}}
\end{figure}
\subsection{Results and discussion}
With the effective stiffness~$\vec{\beta}$ and inertia~$\vec{\mu}$ parameters determined, the ROM procedure is completed. The optimized modal energy fitness ensures the fidelity of the ROM. It is observed that the optimized matching of modal relation leads to accurate reproduction of the eigenfrequencies as well as the nodal mode shapes at the pre-calculated~$n_k$ wavevector locations. Beyond these locations, the eigen-analysis results are also well extrapolated because of the symbolic implementation of the structural stiffness and Bloch-Floquet periodicity.
One can solve for the eigenfrequency and mode shapes through the analytical formulation~\cref{eq:eigf} for any given wavevector~$\vec{k}$. Such computation will be extremely fast due to the compactness of the matrices.
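For illustration, the sketch below sweeps a wavevector path and solves the small generalized eigenproblem of a hypothetical two-DOF mass-in-mass ROM; the parameter values are arbitrary and do not correspond to either of the studied cells.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

a = 0.01                                    # lattice constant [m]

def rom_matrices(kx, ky):
    # Hypothetical 2-DOF ROM of a locally resonant cell: a frame mass coupled to a
    # resonator mass; only the frame DOF carries the Bloch phase coupling.
    k1, k2, m1, m2 = 4.0e6, 1.0e6, 2.0e-3, 1.0e-3
    K = np.array([[2 * k1 * (2 - np.cos(kx * a) - np.cos(ky * a)) + k2, -k2],
                  [-k2, k2]], dtype=complex)
    M = np.diag([m1, m2])
    return K, M

# Sweep the Gamma-X path and solve the small eigenproblem at each wavevector.
for Qx in np.linspace(0.0, np.pi, 5):
    K, M = rom_matrices(Qx / a, 0.0)
    lam = eigh(K, M, eigvals_only=True)
    freqs = np.sqrt(np.abs(lam)) / (2 * np.pi)
    print(f"Qx = {Qx:4.2f}:  f = {np.round(freqs / 1e3, 2)} kHz")
\end{verbatim}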
\begin{figure}[!ht]
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=210pt]{figures/sh_band_rom}
\caption{Square MM\label{fig:sh_band}}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=210pt]{figures/h0_band_rom}
\caption{Hexagonal MM\label{fig:h0_band}}
\end{subfigure}
\caption{Band structure comparison. \label{fig:band}}
\end{figure}
\Cref{fig:band} shows the eigenfrequency band structures for the two studied examples, plotted in the dimensionless wavenumber space~$Q_{x,y}=k_{x,y}a$, where~$a=\SI{10}{mm}$ is the lattice constant. The colored surfaces are generated based on ROM, while the dots represent FEM results. It can be seen that the ROM provides close approximations of the band structures. Notice that the ROM construction only requires the simulations at four different wavevector locations $[Q_x,Q_y]=[0,0],[0,\pi],[\pi,0]$ and $[\pi,\pi]$. The number and locations of these input simulations are, however, not fixed to the given ones.
It should be noted that minimizing the matching error~$\vec{e}_{(j)}(\vec{\beta})\rightarrow\vec{0}$ in \cref{eq:betaerror} is a necessary yet insufficient requirement for the ROM system~$\vec{K}^\mathrm{r}(\vec{\beta}),~\vec{M}^\mathrm{r}$ to produce the exact eigen-solutions~$\vec{\Phi}^\mathrm{r}_{(j)},~\vec{\omega}^\mathrm{c}_{(j)}$ obtained from FEM simulations. The left multiplication of~$\vec{\Phi}^\mathrm{r}_{(j)}$ in \cref{eq:betaerror} reduces the number of equations since the mode shape matrix is rectangular. However, the number of unknowns in~$\vec{\beta}$ is limited, and \cref{eq:betaerror} is collected for multiple wavenumber points. Therefore, such an optimization scheme creates an over-determined problem for seeking the limited set of ROM parameters that are representative of the unit cell properties. With known eigen-states, the ROM produces the same modal energy matrices as the higher order FEM system. Then, it is observed that the resulting ROM leads to eigenvectors which closely agree with the FEM results. An alternative way to identify~$\vec{\beta}$ is to create a multi-objective optimization problem in which one also optimizes the ROM mode shape accuracy while minimizing the modal energy error given by \cref{eq:betaerror}. In practice, and in the examples here, the secondary optimization objective of maintaining mode shape accuracy is omitted and only used as a sanity check, leading to significant computational cost savings with insignificant or no loss of accuracy. The latter outcome is understood to be a consequence of the symbolic development of the dynamic matrices based on the internal cell topology (beam connectivity).
This approach advances the well-established model order reduction methods such as SEREP \cite{Avitable1989} in attacking MM problems in the sense that (1) the proposed ROM maintains the eigen-solution accuracy for any wavevector and (2) it provides analytical and parameterized matrices instead of numerical ones. It expands such methods by including the propagating nature of waves instead of modal response of finite structures. In MM systems, the micro-structural features play vital roles in the dynamic properties. A small variation in the geometry could lead to a drastic change in the overall response. Therefore, an analytical model with parameterized structural elements is particularly advantageous for understanding the influence of each component and fine-tuning the design. Several applications of the method are discussed in the next section.
\section{Applications}\label{sec:apps}
The proposed ROM approach has a wide application spectrum, as the matrices are parameterized by the physical properties (structural stiffness and inertia) and the modeled DOFs are physical deformations instead of generalized coordinates. Therefore, the developed ROMs preserve the necessary physical ingredients for further analysis. The dependence of the ROM parameters on the geometric dimensions and material properties may be curve-fitted with continuous functions for design purposes; see \cite{Morris2022}. The fidelity of these models is inherently guaranteed by the optimized energy relations and the accurate reproduction of band structures and mode shapes. While the ROM is capable of generating the band structure accurately and efficiently, the band computation is not the ultimate goal of the ROM; rather, it is the basis and a starting point.
One immediate application is optimizing unit cell designs for desired eigenfrequencies. The ROM characterizes the continuum unit cell with a finite number of stiffness and inertia parameters and provides the analytical formulation of the eigenfrequency bands. It is then intuitive and direct to tune the structural parameters and associated geometry for desired eigenfrequency performance (wave speeds and band gaps) based on the analytical model. The detailed steps are omitted here. A simplified example can be found in the previous work\cite{Morris2022}. Other application examples are discussed below.
\subsection{Level repulsion identification}
The micro-geometry and periodicity of MMs add an extra layer of complexity to the analysis of band topology and scattering response. The high dimensionality of traditional models presents challenges in understanding physical phenomena and interpreting results. In MM and phononic band structures, many apparent crossing points may exist between eigenfrequency branches. It is important to classify these crossings as either degeneracy points (real crossings) or level repulsions (avoided crossings) \cite{Lu2018}. Despite a small quantitative difference between the two types of crossings, this discrepancy can lead to misunderstandings of modal natures and scattering responses. Level repulsion indicates mixing of modes, caused by the coupling between DOFs, resulting in unexpected energy transfer in scattering analysis. Conversely, real crossings indicate fully decoupled modes \cite{Amirkhizi2018d}.
\begin{figure}[!ht]
\centering\includegraphics[height=150pt]{figures/lr_allband}
\caption{The square MM unit cell band structure with an apparent or real crossing region identified in the blue circle.\label{fig:lr_band}}
\end{figure}
An example of band identification can be seen in \crefrange{fig:lr_band}{fig:crossing}, where the 1D band structure along the $\Gamma-X$ direction is analyzed for the previously shown square cell. \Cref{fig:lr_band} shows good overall agreement between the ROM and FEM results. However, zooming into a region with an apparent crossing (indicated by the blue circle in \cref{fig:lr_band}) reveals a discrepancy. The FEM results, shown in \cref{fig:lrfem}, indicate that the two relevant branches appear repulsed from each other. The root of this apparent repulsion is not associated with the cell response, but rather with asymmetry in the FEM mesh. To correct the sorting, one must refine the mesh and preserve the exact two-fold symmetry of the cellular structure, as shown by the black curves in \cref{fig:lrfem}. This task requires repeated simulations and mesh refinements with the FEM approach. On the other hand, the ROM results, shown by the blue dashed curves in \cref{fig:lrrom_band}, correctly indicate a real crossing between the two branches since the discrete model possesses the same symmetry group as the continuum one. The analytical representation of the ROM formulas allows for easy distinction between real crossings and level repulsions. Additionally, manual assignment of perturbations to the parameterized ROM quantities can break the symmetry and lead to repulsed branches, as shown by the black curves in \cref{fig:lrrom_band}. This analysis shows that the studied crossing point is a symmetry-protected degeneracy (rather than an accidental one), which is unstable and prone to forming repulsed branches due to imperfections in the material or geometry of an actual specimen. The same is true for continuum FE models, which may lack symmetry due to their meshes.
Another example can be found in \cite{Amirkhizi2018d}, where the direct FEM result suggests a real crossing between two branches, but the ROM and a further refined FEM study indicate otherwise. The ROM approach is more efficient for band sorting purposes (and can easily be leveraged in calculation of topological invariants based on path integrals), and it provides benefits in understanding the difference between an ideal model and a realistic one, as well as in distinguishing normal and accidental degeneracy.
\begin{figure}[!ht]
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=150pt]{figures/femlr.pdf}
\caption{\label{fig:lrfem}}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[height=150pt]{figures/romlr}
\caption{\label{fig:lrrom_band}}
\end{subfigure}%
\caption{Band crossing identification example for the square MM cell. The small region identified with a circle in \cref{fig:lr_band} is magnified for both computational approaches: (\subref{fig:lrfem}) FEM, (\subref{fig:lrrom_band}) ROM. In both cases slight baseline shift in the asymmetric band is applied to bring the two results in the same narrow frequency window. \label{fig:crossing}}
\end{figure}
\subsection{Equi-frequency contours}
The developed ROM matrices also allow for computation of the equi-frequency contours~$k_x(\omega,k_y)$ or~$k_y(\omega,k_x)$, i.e., to find the wavevector solutions~$k_{x}$ or~$k_y$ at prescribed~$k_{y}$ or~$k_x$ and frequency values. For complex $k$ components, the solved eigen-modes are the propagating and evanescent waves that constitute the basis solution for the oblique scattering problem.
Such problems are rarely studied for metamaterials but are of prime importance for analyzing the scattering dynamics \cite{Mokhtari2019a,wang2021angledependent}. Solving this type of problem in FEM would be nearly impractical, because the real-valued frequency~$\omega$ cannot be easily enforced in the traditional eigenfrequency study for given complex $k$ components. It is also generally not straightforward to assign a wavenumber component as an eigenvalue to be found, though for phononic media, an elegant mixed eigenvalue approach is presented in \cite{Mokhtari2019a}. For metamaterials with complex internal features, the proposed ROM approach would be an ideal alternative. Using the ROM matrices, this problem can be simply solved by finding the global minima of the determinant~$\mathrm{det}\left[\vec{K}^\mathrm{r}(k_x,k_y)-\omega^2\vec{M}^\mathrm{r}\right]$ in the complex~$(k_x,~k_y)$ space for prescribed~$\omega$ values. Examples of such contour calculations are deferred to focused studies on their use.
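A minimal sketch of such a determinant search, reusing the hypothetical two-DOF ROM from the band-structure sketch above and a simple multi-start Nelder-Mead minimization over the complex $k_x$ plane, is given below; the frequency and starting guesses are arbitrary choices for illustration.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

a = 0.01

def rom_matrices(kx, ky):
    # Same hypothetical 2-DOF ROM as above (illustrative only); kx may be complex.
    k1, k2, m1, m2 = 4.0e6, 1.0e6, 2.0e-3, 1.0e-3
    K = np.array([[2 * k1 * (2 - np.cos(kx * a) - np.cos(ky * a)) + k2, -k2],
                  [-k2, k2]], dtype=complex)
    M = np.diag([m1, m2]).astype(complex)
    return K, M

def log_abs_det(x, omega, ky):
    kx = x[0] + 1j * x[1]                      # complex wavenumber component
    K, M = rom_matrices(kx, ky)
    return np.log(np.abs(np.linalg.det(K - omega**2 * M)) + 1e-30)

# For a prescribed frequency and k_y, search the complex k_x plane for a zero of the
# determinant (a propagating or evanescent branch of the equi-frequency contour).
omega, ky = 2 * np.pi * 4.0e3, 0.0
best = min((minimize(log_abs_det, x0=[g, 0.0], args=(omega, ky), method="Nelder-Mead")
            for g in np.linspace(10.0, 300.0, 8)),
           key=lambda r: r.fun)
print("k_x =", best.x[0], "+", best.x[1], "j  (1/m)")
\end{verbatim}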
\subsection{Finite array transient response}
The band structure computations for infinitely periodic arrays are the main tool and product of the ROM approach. However, it is of practical interest to study the dynamic response of finite sized arrays under localized loading or scattering. With the unit cell matrices obtained from the ROM procedure, it is possible to assemble multiple ROM unit cells to model a finite sized array, easily removing the $\vec{k}$ dependence in the stiffness matrix. This reduced order representation of finite systems allows for very fast computation of the frequency and time domain responses of the structure. Non-uniform and non-periodic arrays can be designed by stacking different unit cells, suitable for novel applications such as cloaking and insulation.
Time domain solutions can be directly calculated through time marching integration schemes. Here, we discuss the use of modal superposition for solving such time domain problems. The governing partial differential equations of such an array, after the modal transformation, will read:
\begin{equation}
\vec{\overline{M}}\vec{\ddot{q}}+\vec{\overline{K}}\vec{q}=\vec{\Phi}^\dagger\vec{F}(t),
\end{equation}
where~$\vec{\overline{M}}=\vec{\Phi}^\dagger\vec{M}\vec{\Phi}$ is the diagonal modal mass matrix,~$\vec{\overline{K}}=\vec{\Phi}^\dagger\vec{K}\vec{\Phi}$ is the diagonal modal stiffness matrix, both associated with and assembled for the full finite array,~$\vec{\Phi}$ is the eigenvector matrix, also associated with the finite structure, which needs to be calculated,~$\vec{F}$ is the nodal load vector,~$\vec{q}$ is the generalized coordinate vector, and~$\vec{u}=\vec{\Phi}\vec{q}$ is the nodal DOF vector. Such a form decouples the equations and renders a set of single DOF equations to solve. For a single DOF system with mass~$M$, stiffness~$K$, eigenfrequency~$\omega$, displacement~$u$, and loading~$F(t)$, the dynamic response under arbitrary loading can be computed using the Duhamel integral:
\begin{equation}
u(t)=\int_0^t F(\tau)h(t-\tau)\,\mathrm{d}\tau,
\end{equation}
where the impulse response function (IRF) is
\begin{equation}
h(t)=\frac{1}{M\omega}\sin(\omega t).
\end{equation}
\begin{figure}[!ht]
\centering\includegraphics[height=130pt]{figures/load}
\caption{Time domain loading profile.}
\label{fig:load}
\end{figure}
Therefore, one can compute each entry in~$\vec{q}(t)$ in a similar way. Then, the physical response will be~$\vec{u}(t)=\vec{\Phi}\vec{q}(t)$. An example of time dependent response computation is shown in \cref{fig:td_res}. An impact force (\cref{fig:load}) is horizontally applied to one of the internal nodes, as indicated by the red arrows in \cref{fig:td_res}. It can be seen that the ROM solution can accurately reproduce the wave propagation pattern. The resulting displacement field from ROM has 94\% correlation with the FEM data, showing high physical fidelity.
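The modal superposition described above can be sketched on a toy grounded three-DOF assembly as follows; the matrices and loading are hypothetical, whereas in practice $\vec{K}$ and $\vec{M}$ are the assembled ROM matrices of the finite array.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

# Toy grounded three-DOF chain (hypothetical stiffness and mass values).
K = 1.0e6 * np.array([[ 2.0, -1.0,  0.0],
                      [-1.0,  2.0, -1.0],
                      [ 0.0, -1.0,  1.0]])
M = np.diag([1.0e-3, 1.0e-3, 1.0e-3])

lam, Phi = eigh(K, M)                        # modal decomposition of the finite assembly
omega = np.sqrt(lam)
M_bar = np.diag(Phi.T @ M @ Phi)             # diagonal modal masses (~1 after normalization)

t = np.linspace(0.0, 5.0e-3, 2000)
dt = t[1] - t[0]
F = np.zeros((3, t.size))
F[0] = np.exp(-((t - 2.0e-4) / 5.0e-5) ** 2) # smooth, impact-like load on DOF 0 [N]

# Duhamel integral per decoupled modal equation: q_m(t) = int F_m(tau) h_m(t-tau) dtau,
# with the undamped impulse response h_m(t) = sin(omega_m t) / (M_m omega_m).
Q = np.zeros((3, t.size))
for m in range(3):
    h = np.sin(omega[m] * t) / (M_bar[m] * omega[m])
    Q[m] = np.convolve(Phi[:, m] @ F, h)[: t.size] * dt

u = Phi @ Q                                  # back to physical DOFs
print("peak displacement per DOF [m]:", np.round(np.abs(u).max(axis=1), 6))
\end{verbatim}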
Finally, our preliminary results show that, using the time marching approach for the shown system, the FEM model has approximately 700,000 DOFs while the ROM only needs 2,000. Consequently, the computation speed of the ROM is about 5,000 times faster than the FEM approach. Therefore, the computational efficiency can be significantly improved for large-sized arrays. Along with data-driven and machine learning methods, one can efficiently explore a vast design space and achieve fast cell-by-cell optimization using the proposed ROM method.
\begin{figure}[!ht]
\centering\includegraphics[height=260pt]{figures/td_res.pdf}
\caption{Time domain response of a hexagonal MM array $40\ \mu$s after loading initiation. Top panel: FEM model and solutions. Bottom panel: ROM solutions.}
\label{fig:td_res}
\end{figure}
It must be noted that the modeling of finite arrays requires an extra step in terms of characterizing the boundary elements. The structural components of the main body remain the same as the ones obtained from the unit cell ROM. However, the outermost elements need to be quantified separately due to the different boundary conditions assigned to them, especially when the MM array is in contact with a different homogeneous medium. The detailed approach is the subject of current research and is initially carried out by optimizing the static response or impedance of edge cells with the help of a few FEM simulations.
\section{Conclusion and Outlook}\label{sec:conclusion}
A new reduced order modeling technique for periodic mechanical metamaterials is introduced. The method uses a limited number of simulations at selected wavevector locations to establish the reduced system matrices, which are parameterized based on the structural connectivity. Effective parameters are extracted by matching modal energies. This approach expands upon previous model order reduction techniques by considering the wave's propagating nature, leading to accurate eigen-solutions for any wavevector. The parameterized and analytical matrices generated by the ROM method offer valuable insights into the micro-structural influence on overall behavior and provide significant assistance in fine-tuning of design. Additionally, the ROM approach leads to fast computation of the dynamic responses of finite-sized arrays, with a significant reduction in computational effort compared to FEM.
To summarize, the highlights of this work are:
\begin{itemize}
\item the proposed ROM method can be easily applied to any periodic metamaterial that has beam-like components;
\item the reduced order matrices allows fast and accurate computation of band structures and the dynamic responses in frequency and time domains;
\item the ROM method can further benefit design optimization due to its computational efficiency.
\end{itemize}
The essential limitation of the proposed work is that the ROM method can only be applied to MM micro-structures comprised of beam-like elements (and potentially plate-like elements). Other types of micro-structures, such as layered media, or unit cells with solid inclusions in a solid matrix, are not suitable modeling targets for the proposed ROM (instead, one can use RBME\cite{Hussein2009a}).
In addition, the proposed ROM approach is only applicable to systems with stiffness coupling between nearest neighbors, i.e., the long-range interaction\cite{Farzbod2020} is not considered.
The presented method could contribute to the modeling and design of finite and periodic mechanical metamaterials by reducing the computational effort. The micro-structure and periodicity of MMs lead to exciting dynamic properties and present theoretical questions in the physics of micro-structured media. The ROM method, equipped with vibration and strength-of-materials domain knowledge, can offer concise descriptions of the micro-structural dynamics and is a solid analytical tool for studying metamaterial dynamics. In addition, the presented method leads to significant improvements in computational efficiency and is a promising candidate for further design optimization of graded MM arrays with data-driven techniques. An immediate topic of research is the handling of edge cells due to their different dynamic response, while the interior cells appear to be very well represented by the ROM based on infinitely periodic media. Future work includes adjusting the element stiffness matrix formulation \cref{eq:beamK} for compatibility with 3D (and potentially composite) beam and plate elements. In addition, the optimization approaches for matching global quantities between the FEM and ROM can be adjusted so that lossy elements (viscoelastic materials) are allowed in the system; see \cref{appLoss}.
These future extensions would extend the modeling capability to 3D designs and enlarge the feasible design space.
\section*{Data Availability}
All data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request.
\section*{Acknowledgements}
The authors wish to thank US Army Research Laboratory for continued support throughout this effort. This research was supported by CC DEVCOM Army Research Laboratory through Cooperative Agreements W911NF-17-2-0173 and W911NF-20-2-0147.
\clearpage
\bibliographystyle{aipnum4-1}
|
{
"arxiv_id": "2302.14225",
"language": "en",
"timestamp": "2023-03-01T02:05:41",
"url": "https://arxiv.org/abs/2302.14225",
"yymm": "2302"
} |
\section{Conclusion}
\vspace{-2mm}
We propose two Weighted Sampling methods to alleviate the frequency bias issue in Masked Language Modeling for pre-training language models. Extensive experiments show Weighted Sampling improves both sentence representations and the transferability of pre-trained models. We also analyze the token embeddings to explain how Weighted Sampling works. Future work includes investigating other dynamic sampling methods and exploring training objectives with a penalty for frequency bias.
\section{Experiments}
\label{sec:experiment}
\vspace{-2mm}
\begin{table}[]
\centering
\scalebox{0.65}{
\begin{tabular}{@{}l|lllllll|l@{}}
\toprule[1.5pt]
Method & STS12 & STS13 & STS14 & STS15 & STS16 & STS-B & SICK-R & Avg. \\ \midrule
BERT & 39.70 & 59.38 & 49.67 & 66.03 & 66.19 & 53.87 & 62.06 & 56.70 \\
BERT-CP &41.00 &60.02 &51.11 &68.43 &64.59 &56.32 &62.07 &57.65 \\
WSBERT\_Freq & 42.60 &61.32 &52.04 &69.84 &66.61 &59.89 &61.94 &59.18 \\
WSBERT\_Dynamic & 47.80 & 67.28 & 57.13 & 71.41 & 68.87 & 65.28 & 64.90 & 63.24 \\ \midrule
BERT-Whitening & {\textbf{63.90}} & {\textbf{73.76}} & {\textbf{69.08}} & 74.59 & 74.42 & 62.23 & {\textbf{71.43}} & 69.82 \\
WSBERT-Whitening & {\ul\textbf{64.17}} & {\ul\textbf{74.50}} & {\ul\textbf{69.73}} & {\textbf{75.26}} & {\textbf{75.05}} & 61.91 & {\ul\textbf{71.98}} & {\ul\textbf{70.45}} \\ \midrule
BERT + Prompt$\dag$ &60.96 &73.83 &62.18 &71.54 &68.68 &70.60 &67.16 &67.85 \\
WSBERT + Prompt & 63.03 & 71.66 & 63.80 & {\ul\textbf{75.32}} & {\ul\textbf{76.67}} & {\textbf{74.79}} & 65.32 & {\textbf{70.08}} \\ \bottomrule[1.5pt]
\end{tabular}
}
\caption{\small{Sentence representation performance on STS tasks. The reported score is Spearman’s correlation coefficient between the predicted similarity and the gold standard similarity scores. The best results are both underlined and in bold. WSBERT without a subscript refers to WSBERT\_Dynamic. Performance of BERT-Whitening is from the model learned on the full target dataset (train+development+test)~\cite{su2021whitening}. $\dag$ denotes the best manual-prompt results cited from \cite{jiang2022promptbert}}.
}
\label{table:sts}
\end{table}
\begin{table}[]
\centering
\scalebox{0.99}{%
\begin{tabular}{@{}c|c|c|c@{}}
\toprule[1.5pt]
Dataset & BERT & BERT-CP & WSBERT \\ \midrule
MNLI & $84.30_{\pm0.26}$ & ${84.26_{\pm{0.19}}}$ & {$\mathbf{84.42_{\pm{0.35}}}$} \\
QQP & $91.31_{\pm{0.04}}$ & $90.94_{\pm{0.59}}$ & {$\mathbf{91.43_{\pm{0.05}}}$} \\
QNLI & {$\mathbf{91.47_{\pm{0.01}}}$} & $91.32_{\pm{0.17}}$ & $91.14_{\pm{0.17}}$ \\
SST-2 & {$\mathbf{92.86_{\pm{0.13}}}$} & $92.78_{\pm{0.43}}$ & $91.35_{\pm{0.47}}$ \\
CoLa & $56.47_{\pm{0.65}}$ & $57.44_{\pm{0.95}}$ & {$\mathbf{58.29_{\pm{0.33}}}$} \\
STS-B & $89.68_{\pm{0.26}}$ & $89.52_{\pm{0.37}}$ & {$\mathbf{89.86_{\pm{0.18}}}$} \\
MRPC & $86.13_{\pm{1.63}}$ & $85.13_{\pm{0.53}}$ & {$\mathbf{88.20_{\pm{2.39}}}$} \\
RTE & $69.23_{\pm{0.4}}$ & $67.25_{\pm{1.84}}$ & {$\mathbf{70.89}_{\pm{0.17}}$} \\ \midrule
\textbf{AVG} & $82.68_{\pm0.33}$ & $82.33_{\pm0.32}$ & $\mathbf{83.20}_{\pm0.10}$ \\ \bottomrule[1.5pt]
\end{tabular}
}
\caption{\small{GLUE Validation results from \emph{BERT-base-uncased} (BERT-base), \emph{BERT-base-uncased} continually pre-trained (BERT-CP), and Weighted-Sampled BERT (WSBERT). BERT-CP and WSBERT both continually train on BERT with the same training settings. WSBERT refers to WSBERT\_Dynamic. The best results for each dataset and AVG are in bold.}}
\label{table:glue}
\end{table}
We conduct two sets of experiments. Firstly, we investigate the quality of unsupervised sentence representations using WSBERT, by evaluating on the commonly used STS tasks. Secondly, we investigate the efficacy of Weighted Sampling on the transfer learning capability of BERT, by fine-tuning and evaluating WSBERT on the GLUE benchmark. We also design ablation studies to analyze the impact of WSBERT on token embeddings for rare tokens and common tokens.
\subsection{Datasets and Implementation Details}
\label{sub:dataset}
For WSBERT, we continue pre-training \verb!bert-base-uncased! (BERT)\footnote{\url{https://huggingface.co/bert-base-uncased}} with Weighted Sampling on the WikiText dataset (over 100M tokens)\footnote{\url{https://huggingface.co/datasets/wikitext}}. We set the learning rate to 5e-5 and train for 10 epochs. We use 4 NVIDIA V100 GPUs with a batch size of 8 per device and 8 gradient accumulation steps. We use the WordPiece tokenizer as in~\cite{devlin2018bert}, and token frequency is computed on the tokenized WikiText. The STS tasks comprise the STS 2012--2016, STS Benchmark, and SICK-Relatedness datasets. GLUE includes eight datasets: MNLI, QQP, QNLI, SST-2, STS-B, CoLA, MRPC, and RTE. To analyze the effect of continual pre-training without Weighted Sampling, we also continue pre-training BERT with the same random sampling used for BERT, on the same WikiText data and with the same pre-training settings as WSBERT. We denote the resulting model \textbf{BERT-CP}. We compare the fine-tuning performance of \verb!bert-base-uncased! (BERT), BERT-CP, and WSBERT on GLUE.
For each model on each GLUE task, we perform three runs with different random seeds; for each run, we conduct a grid search over hyperparameters on the GLUE validation set, with learning rates in \{2e-5, 3e-5, 5e-5\} and \{5, 10\} epochs. The other hyperparameters are identical for the three models: we use a single V100 with a batch size of 32 per device, a warm-up ratio of 0.06, and weight decay of 0.01. We report the mean and standard deviation of the best results from the three runs in Table~\ref{table:glue}.
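To make the search protocol concrete, the following plain-Python sketch mirrors the structure of this setup; the function \verb!finetune_and_eval! and the concrete seed values are placeholders for the actual fine-tuning runs, not part of the original implementation.
\begin{verbatim}
import itertools, statistics

LEARNING_RATES = [2e-5, 3e-5, 5e-5]
EPOCHS = [5, 10]
SEEDS = [13, 42, 87]   # three runs with different random seeds (values are illustrative)

def best_score_per_seed(task, finetune_and_eval):
    """For each seed, grid-search (lr, epochs) on the validation set and keep the
    best score; report mean and standard deviation over the three runs.
    `finetune_and_eval(task, lr, epochs, seed)` is a placeholder for the actual
    fine-tuning run returning a validation metric."""
    best = [max(finetune_and_eval(task, lr, ep, seed)
                for lr, ep in itertools.product(LEARNING_RATES, EPOCHS))
            for seed in SEEDS]
    return statistics.mean(best), statistics.stdev(best)
\end{verbatim}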
\subsection{Main Results}
\label{sub:main-results}
\noindent \textbf{Semantic Textual Similarity}
Table~\ref{table:sts} shows the main STS results. All models in the table are of BERT base size. We report results of WSBERT with \emph{Frequency Weighted Sampling} and \emph{Dynamic Weighted Sampling}, denoted WSBERT\_Freq and WSBERT\_Dynamic. The first group in Table~\ref{table:sts} shows that WSBERT\_Dynamic outperforms BERT and BERT-CP by \textbf{6.54} and \textbf{5.59} absolute, significantly improving the quality of the sentence embeddings of PLMs. WSBERT\_Dynamic outperforms WSBERT\_Freq by 4.06 absolute, showing that Dynamic Weighted Sampling is more effective than sampling based only on token frequency. BERT-Whitening~\cite{su2021whitening}, as a calibration method, is compatible with WSBERT. The second group in Table~\ref{table:sts} shows that although WSBERT\_Dynamic alone yields a lower average STS score than BERT-Whitening, combining WSBERT with Whitening further improves the average score to \textbf{70.45}.
We also investigate enhancing BERT and WSBERT with prompts. Different from previous works using a single \verb![MASK]! in prompts~\cite{jiang2022promptbert}, we transform a sentence using manual prompt templates with multiple \verb![MASK]! tokens\footnote{The best prompt template is ``The sentence: ${[}X{]}$ means ${[}MASK{]}$. That is ${[}MASK{]}$ and also means ${[}MASK{]}$.''}.
Furthermore, instead of extracting and averaging the representations of the masked tokens as the final sentence embedding~\cite{jiang2022promptbert}, we encode the whole transformed sentence and compute the sentence embedding by average pooling over all token embeddings of the sentence.
The third group in Table~\ref{table:sts} shows that prompt-enhanced WSBERT achieves \textbf{70.08}. These results demonstrate that Weighted Sampling improves the sentence representations generated by PLMs and that combining WSBERT with Whitening and prompts further improves the sentence embeddings. We did not compare WSBERT to the SOTA models on STS, i.e., sentence-level contrastive learning (CL) based models such as SimCSE \cite{gao2021simcse}, since prior work~\cite{su2021token-aware} and our own studies show that sentence-level CL based models hurt transfer learning capability. We observe an absolute 0.5 degradation in the GLUE AVG score for SimCSE-BERT compared to BERT. However, as shown in the following GLUE experiments, WSBERT both enhances the sentence representations of BERT and improves its transfer learning capability.
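As an illustration of this prompt-based pooling, a minimal sketch using the HuggingFace \verb!transformers! API could look as follows; the checkpoint name and the exact template string are stand-ins (a WSBERT checkpoint would replace \verb!bert-base-uncased!), and the snippet is a simplified sketch rather than the authors' code.
\begin{verbatim}
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")   # placeholder checkpoint
model = AutoModel.from_pretrained("bert-base-uncased").eval()

# Approximation of the manual template with multiple [MASK] tokens (see footnote).
TEMPLATE = 'The sentence: "{}" means [MASK]. That is [MASK] and also means [MASK].'

def sentence_embedding(sentence):
    """Encode the prompt-transformed sentence and mean-pool all token embeddings
    (rather than reading out only the [MASK] positions)."""
    inputs = tok(TEMPLATE.format(sentence), return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state     # (1, seq_len, hidden_dim)
    mask = inputs["attention_mask"].unsqueeze(-1)      # ignore padding positions
    return (hidden * mask).sum(1) / mask.sum(1)        # average pooling
\end{verbatim}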
\noindent \textbf{GLUE Evaluation}
As shown in Table~\ref{table:glue}, WSBERT achieves the best average GLUE score compared to BERT and BERT-CP, outperforming BERT by \textbf{0.52} absolute.
WSBERT maintains competitive performance on MNLI and QQP and outperforms BERT on all other tasks. In contrast to models such as SimCSE, Dynamic Weighted Sampling improves the transfer learning capability while also enhancing sentence representations. Compared to BERT, BERT-CP degrades the GLUE AVG score by 0.35 absolute, while WSBERT outperforms BERT-CP by 0.87 absolute. These results indicate that the gain of WSBERT over BERT comes from continual pre-training with Dynamic Weighted Sampling rather than from simply continuing to pre-train BERT with random sampling for more steps on the same WikiText dataset. BERT-CP degrades GLUE performance compared to BERT, probably because the WikiText corpus (373.28 MB) used for continual pre-training is much smaller and less diverse than the standard BERT pre-training data (Wikipedia and BooksCorpus, 16 GB), which could hurt the generalizability of PLMs.
\vspace{-3mm}
\subsection{Analysis}
\vspace{-1mm}
As observed in~\cite{li2020sentence, consert}, token embeddings of MLM-pretrained PLMs can be biased by token frequency, causing embeddings of high-frequency tokens to concentrate densely and low-frequency tokens to disperse sparsely. Inspired by these works, to analyze whether Weighted Sampling could indeed alleviate the frequency bias problem, we propose two approaches to analyze distributions of BERT, BERT-CP, and WSBERT in the representation space. We also discuss the training time for training a model with and without weighted sampling.
\begin{table}[]
\centering
\begin{tabular}{@{}c|c|c@{}}
\toprule
Method & Rare Tokens & Common Tokens \\ \midrule
BERT & 14.97 & 64.82 \\
BERT-CP & 14.86 & 64.95 \\
WSBERT & {\ul \textbf{15.60}} & {\ul \textbf{65.54}} \\ \bottomrule
\end{tabular}
\caption{\small{Portion of \emph{common tokens} (high-frequency tokens) in the nearest neighbors of \emph{rare tokens} (low-frequency tokens) and \emph{common tokens}. We first sort tokens in the WikiText vocabulary by frequency in descending order. Then we select the tokens with ranks ranging from 10K-20K as rare tokens while choosing the Top-10K tokens as common tokens. We choose the 10 nearest neighbors decided by the Euclidean distance between representations of the target token and other tokens.}}
\label{table:nn}
\end{table}
\noindent \textbf{Nearest Neighbors}
We investigate the portion of common tokens in the nearest neighbors (NN) of rare tokens, denoted $P_{rare}$, and the portion of common tokens in the NN of common tokens, denoted $P_{common}$ (rare and common tokens are defined in the caption of Table~\ref{table:nn}). A larger portion of common tokens in the NN of rare/common tokens indicates a more concentrated token distribution and hence a smaller frequency bias in the token embeddings. In Table~\ref{table:nn}, $P_{rare}$/$P_{common}$ for WSBERT increase by 0.63/0.72 over those of BERT and by 0.74/0.59 over those of BERT-CP, suggesting that both rare and common tokens are more concentrated in WSBERT and that its token embeddings carry a smaller frequency bias than those of BERT and BERT-CP.
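A minimal sketch of this nearest-neighbor analysis, assuming the input-embedding matrix and a frequency-sorted vocabulary are available as NumPy arrays, could read as follows (the helper is illustrative, not the original analysis script).
\begin{verbatim}
import numpy as np

def common_token_fraction(emb, freq_order, target_ids, top_common=10_000, k=10):
    """Fraction of 'common tokens' (top-10K by frequency) among the k Euclidean
    nearest neighbors of each target token, averaged over all targets.
    emb: (vocab_size, dim) embedding matrix; freq_order: token ids sorted by
    descending corpus frequency; target_ids: ids of the tokens to analyze."""
    common = set(freq_order[:top_common])
    fractions = []
    for t in target_ids:
        dist = np.linalg.norm(emb - emb[t], axis=1)
        dist[t] = np.inf                          # exclude the token itself
        nn = np.argpartition(dist, k)[:k]         # k nearest neighbors (Euclidean)
        fractions.append(sum(i in common for i in nn) / k)
    return 100.0 * float(np.mean(fractions))      # reported as a percentage
\end{verbatim}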
\begin{table}[]
\centering
\scalebox{0.7}{
\begin{tabular}{@{}cc|cccc@{}}
\toprule[1.5pt]
\multicolumn{2}{c|}{\textbf{Rank of token frequency}} & 0-100 & 100-500 & 500-5k & 5k-10k \\ \midrule
\multicolumn{1}{c|}{\multirow{3}{*}{Mean $\ell_2$-norm}} & BERT & 0.9655 & 1.0462 & 1.2150 & 1.3639 \\
\multicolumn{1}{c|}{} & BERT-CP & 0.9597 & 1.0428 & 1.2141 & 1.3647 \\
\multicolumn{1}{c|}{} & WSBERT & \textbf{0.9562} & \textbf{1.0385} & \textbf{1.2112} & \textbf{1.3621} \\ \midrule
\multicolumn{1}{c|}{\multirow{3}{*}{Mean k-NN $\ell_2$-norm (k=3)}} & BERT & 0.6972 & 0.7782 & 0.8188 & 0.8953 \\
\multicolumn{1}{c|}{} & BERT-CP & 0.6913 & 0.7750 & 0.8180 & 0.8963 \\
\multicolumn{1}{c|}{} & WSBERT & \textbf{0.6883} & \textbf{0.7724} & \textbf{0.8154} & \textbf{0.8929} \\ \midrule
\multicolumn{1}{c|}{\multirow{3}{*}{Mean k-NN $\ell_2$-norm (k=5)}} & BERT & 0.8007 & 0.8868 & 0.9327 & 1.0083 \\
\multicolumn{1}{c|}{} & BERT-CP & 0.7936 & 0.8833 & 0.9319 & 1.0096 \\
\multicolumn{1}{c|}{} & WSBERT & \textbf{0.7899} & \textbf{0.8800} & \textbf{0.9287} & \textbf{1.0056} \\ \midrule
\multicolumn{1}{c|}{\multirow{3}{*}{Mean k-NN $\ell_2$-norm (k=7)}} & BERT & 0.8590 & 0.9458 & 0.9932 & 1.0671 \\
\multicolumn{1}{c|}{} & BERT-CP & 0.8513 & 0.9422 & 0.9924 & 1.0685 \\
\multicolumn{1}{c|}{} & WSBERT & \textbf{0.8471} & \textbf{0.9386} & \textbf{0.9888} & \textbf{1.0642} \\ \bottomrule[1.5pt]
\end{tabular}
}
\caption{\small{The mean $\ell_2$-norm calculated for each bin of tokens, where bins are ranges of frequency-based ranks in WikiText.
Common tokens occupy the higher rankings, while rare tokens occupy the lower rankings.
A lower mean $\ell_2$-norm suggests that the token embeddings in that bin are more concentrated.}}
\label{table:l2}
\end{table}
\noindent \textbf{Token Distribution}
Inspired by~\cite{li2020sentence}, we compute the mean $\ell_2$-norm between the token embeddings and the origin for the three models to analyze their token distributions. As shown in the first row of Table~\ref{table:l2}, although common tokens lie close to the origin and rare tokens are distributed far away from it, the smaller mean $\ell_2$-norm indicates that the token embeddings of WSBERT are more concentrated than those of BERT and BERT-CP. The $\ell_2$-norms of WSBERT are smaller than those of BERT in all bins, suggesting that both common and rare tokens are closer to the origin in WSBERT than in BERT.
Furthermore, the WSBERT tokens in each bin are more compact than those of BERT and BERT-CP, as shown by the smaller mean $k$-NN $\ell_2$-norm for each $k$ in Table~\ref{table:l2}, which indicates that the embedding space of WSBERT is less sparse than that of BERT and BERT-CP. Sparsity in the embedding space may cause poorly defined semantic meanings~\cite{li2020sentence}; hence, the gains in sentence embeddings and transfer learning capability of WSBERT over BERT may also be attributed to the reduction of sparsity in the embedding space by WSBERT.
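The corresponding statistics can be sketched as follows; we read the ``mean $k$-NN $\ell_2$-norm'' as the mean Euclidean distance to the $k$ nearest neighbors, which is an interpretation on our part rather than a statement of the exact original procedure.
\begin{verbatim}
import numpy as np

def bin_statistics(emb, freq_order,
                   bins=((0, 100), (100, 500), (500, 5000), (5000, 10000)), k=3):
    """For each frequency bin, report (i) the mean l2-norm of the token embeddings
    (distance to the origin) and (ii) the mean distance to the k nearest neighbors."""
    results = {}
    for lo, hi in bins:
        ids = freq_order[lo:hi]
        vecs = emb[ids]
        mean_norm = float(np.linalg.norm(vecs, axis=1).mean())
        knn_dists = []
        for v in vecs:
            d = np.linalg.norm(emb - v, axis=1)
            d.sort()
            knn_dists.append(d[1:k + 1].mean())   # skip d[0] = 0 (the token itself)
        results[(lo, hi)] = (mean_norm, float(np.mean(knn_dists)))
    return results
\end{verbatim}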
\noindent \textbf{Training Time}
Computing the sampling weight of each token during training requires some extra time. We therefore compare the training time of MLM with and without Weighted Sampling for 10 epochs: training without Weighted Sampling takes 11 hours, while training with Weighted Sampling takes 20 hours. We consider this trade-off between efficiency and performance acceptable.
\section{Introduction}
\label{sec:intro}
\vspace{-2mm}
Early language models model context unidirectionally, either left-to-right or right-to-left. In contrast, Masked Language Modeling (MLM) replaces a subset of tokens in the input sequence with a special token \verb![MASK]! and trains the model to predict the masked tokens using their bidirectional context. MLM has been widely adopted as a self-supervised pre-training objective for learning bidirectionally contextualized language representations, such as BERT~\cite{devlin2018bert} and RoBERTa~\cite{liu2019roberta}. BERT and its extensions as pre-trained language models (PLMs) have shown remarkable performance on various downstream NLP tasks.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{Image/input.png}
\caption{\small{An example from WikiText. Randomly selected tokens are in blue while Frequency Weighted Sampled tokens are in pink.}}
\label{fig:input}
\end{figure}
Nevertheless, recent studies reveal critical problems in MLM. \cite{gao2019representation, ethayarajh2019contextual} find that the contextualized word representations of BERT and other PLMs are not isotropic, i.e., not uniformly distributed w.r.t. direction; instead, they are anisotropic, with word representations occupying a narrow cone. Moreover, the token frequency in the pre-training data usually follows a long-tailed distribution, while the conventional masking strategy for MLM selects tokens to mask with a uniform distribution~\cite{devlin2018bert,liu2019roberta}.
This random masking strategy thus unavoidably introduces a frequency bias: high-frequency tokens are masked frequently, while more informative tokens, which typically have lower frequencies, are masked much less frequently during pre-training. This greatly harms the efficiency of pre-training, lowers the quality of the representations of rare tokens, and limits the performance of PLMs. As shown in Figure~\ref{fig:input}, tokens selected based on their frequency (in pink, see Eqn.~\ref{eq:freq_weight}) are apparently more informative than tokens selected randomly (in blue), which are mostly high-frequency tokens that are not essential to the semantics of the sentence. \cite{li2020sentence} investigates the embedding space of MLM-trained PLMs and confirms that embeddings are biased by token frequency and that rare tokens are distributed sparsely in the embedding space. \cite{jiang2022promptbert} demonstrates that frequency bias indeed harms the performance of sentence embeddings generated by MLM-trained PLMs.
As shown in these studies, alleviating the frequency bias issue is essential for improving the effectiveness of MLM and the performance of the resulting PLMs.
Several recent studies focus on improving efficiency of pre-training, including mixed-precision training~\cite{shoeybi2019mix}, parameter distillation for different layers~\cite{gong2019efficient}, introducing a note dictionary for saving information of rare tokens~\cite{tnf}, designing different training objectives~\cite{lan2019albert,clark2020electra,meng2021coco}, and dropping redundant tokens during pre-training~\cite{token_dropping}. However, most of these approaches focus on modifying model architecture or optimization for pre-training.
Our work focuses on alleviating the frequency bias issue in MLM and improving quality of PLMs. We propose two \textbf{Weighted Sampling} methods for masking tokens based on token frequency or training loss. The latter one can dynamically adjust sampling weights and achieve a good balance between masking probabilities of \emph{common tokens} and \emph{rare tokens}\footnote{We denote high-frequency and low-frequency tokens by \emph{common tokens} and \emph{rare tokens} in the rest of the paper.} based on the learning status of PLMs.
Our Weighted Sampling methods can be applied to \emph{any} MLM-pretrained PLM. In this work, we focus on investigating the effectiveness of applying Weighted Sampling to BERT as the backbone. We initialize from BERT and continue pre-training with Weighted Sampling, and denote the resulting PLM by \textbf{WSBERT}. We hypothesize that since Weighted Sampling could alleviate frequency bias, it could improve the representation learning of rare tokens and thereby the overall quality of the language representations. The quality of pre-trained language representations is generally assessed in two ways: through the sentence representations generated by PLMs, commonly evaluated on the Semantic Textual Similarity (STS) benchmarks \cite{agirre2012semeval,agirre-etal-2013-sem,agirre-etal-2014-semeval,agirre-etal-2015-semeval,agirre-etal-2016-semeval,cer-etal-2017-semeval,marelli2014sick}, and through the transfer learning capability of PLMs, commonly evaluated by fine-tuning and testing on the GLUE benchmark~\cite{wang2018glue}. Recent efforts on sentence representation modeling include calibration methods~\cite{li2020sentence,su2021whitening}, prompt learning~\cite{radford2018gpt,radford2019gpt,brown2020few-shot,schick2020exploiting, gao2020promptsearch, jiang2022promptbert}, and sentence-level contrastive learning (CL) based models such as SimCSE~\cite{gao2021simcse} and its variants~\cite{jiang2022deep,wu2022infocse}. Although SimCSE and its variants achieve state-of-the-art (SOTA) performance on STS,
they degrade the transfer learning capability on tasks such as SQuAD since they do not target improving token-level representation learning~\cite{su2021token-aware}. We also observe an absolute 0.5 performance degradation on GLUE for SimCSE-BERT compared to BERT.
In this work, to investigate whether the proposed Weighted Sampling could improve the quality of token embeddings, we evaluate sentence representations generated by WSBERT on STS and the transferability of WSBERT on GLUE. We also analyze the embedding space of WSBERT and BERT to understand how Weighted Sampling improves the quality of token embeddings.
Our contributions can be summarized as follows:
\begin{itemize}[leftmargin=*,noitemsep]
\item We propose two \textbf{Weighted Sampling} methods to alleviate the frequency bias issue in conventional masked language modeling.
\item We develop a new PLM, WSBERT, by applying Weighted Sampling to BERT. Different from SOTA sentence representation models, we find that WSBERT outperforms BERT on \textbf{both sentence representation quality and transfer learning capability}. We also find that integrating calibration methods and prompts into WSBERT further improves sentence representations.
\item We design ablation approaches to analyze the embedding spaces of WSBERT and BERT. We find that with Weighted Sampling, rare tokens are more concentrated around common tokens, and common tokens themselves are more concentrated in the embedding space than in BERT. We also find that both common and rare tokens are closer to the origin in WSBERT than in BERT and that the token embeddings of WSBERT are less sparse than those of BERT. We believe these improvements in the token embeddings brought by Weighted Sampling contribute to the improvements in sentence representations and transferability.
\end{itemize}
\footnotesize
\bibliographystyle{IEEEbib}
\section{Method}
\label{sec:method}
\vspace{-2mm}
In this section, we first describe traditional Masked Language Modeling (MLM). Then we propose \textbf{Weighted Sampling} for MLM to alleviate the frequency bias problem.
\subsection{Masked Language Modeling}
For a sentence $S=\{t_1, t_2, \ldots, t_n\}$, where $n$ is the number of tokens and $t_i$ is a token, the standard masking strategy of~\cite{devlin2018bert} randomly chooses 15\% of the tokens to mask. The language model learns to predict the masked tokens from their bidirectional context. To keep the model compatible with fine-tuning, a chosen token is replaced by the special token \verb![MASK]! 80\% of the time, replaced by a random token from the vocabulary 10\% of the time, and left unchanged the remaining 10\% of the time.
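For illustration, a minimal plain-Python sketch of this masking rule (vocabulary handling and special tokens are simplified assumptions, not the original implementation) is:
\begin{verbatim}
import random

def mask_tokens(tokens, vocab, mask_rate=0.15, mask_token="[MASK]"):
    """Standard MLM masking: pick ~15% of positions uniformly at random,
    then apply the 80/10/10 replacement rule to each chosen position."""
    tokens = list(tokens)
    labels = [None] * len(tokens)            # None = not a prediction target
    n_mask = max(1, int(round(mask_rate * len(tokens))))
    for i in random.sample(range(len(tokens)), n_mask):
        labels[i] = tokens[i]                # model must predict the original token
        r = random.random()
        if r < 0.8:                          # 80%: replace with [MASK]
            tokens[i] = mask_token
        elif r < 0.9:                        # 10%: replace with a random vocabulary token
            tokens[i] = random.choice(vocab)
        # remaining 10%: keep the original token unchanged
    return tokens, labels
\end{verbatim}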
\subsection{Weighted Sampling}
In order to tackle the frequency bias problem, we propose two weighted sampling strategies, namely, \textbf{Frequency Weighted Sampling} and \textbf{Dynamic Weighted Sampling}, to compute the masking probability for each
token, based on statistical signals and model-based signals, respectively.
\subsubsection{Frequency Weighted Sampling}
A natural statistical signal characterizing the informativeness of a token $w$ is its frequency $\mathrm{freq}(w)$ in the pre-training corpus. %
We first apply the following transformation to remove the excessive influences of extremely rare tokens, which are usually noise.
\begin{equation}
\mathrm{freq}^*(w) =
\begin{cases}
\mathrm{freq}(w) & \text{, if } \mathrm{freq}(w) > \theta \\
\theta & \text{, otherwise.}
\end{cases}
\end{equation}
\noindent Then we compute the sampling weight $wt(w)$ for $w$ as follows.
\begin{equation}
\label{eq:freq_weight}
\mathrm{wt}(w) = (\mathrm{freq}^*(w))^{-\alpha}
\end{equation}
\noindent In our experiments, we set the hyperparameters $\theta=10$ and $\alpha=0.5$ based on optimizing performance on the development set.
For each token $t_i$ in a sentence $S=\{t_1, t_2, \ldots, t_n\}$, where $n$ is the number of tokens, we compute the sampling probability $p(t_i)$ for masking $t_i$ by normalizing $wt(t_i)$.
\begin{equation}
\label{eq:freq_prob}
\small
p(t_i) = \frac{wt(t_i)}{\sum_{j=1}^{n}{wt(t_j)}}
\end{equation}
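Putting Eqns.~\ref{eq:freq_weight} and \ref{eq:freq_prob} together, a simple sketch of Frequency Weighted Sampling could look as follows; the corpus-counting helper and the sampling loop are illustrative assumptions rather than the exact original implementation.
\begin{verbatim}
import random
from collections import Counter

def sampling_weights(corpus_tokens, theta=10, alpha=0.5):
    """Clip very low counts at theta and weight each token by freq*^(-alpha)."""
    freq = Counter(corpus_tokens)
    return {w: max(c, theta) ** (-alpha) for w, c in freq.items()}

def choose_masked_positions(sentence, weights, mask_rate=0.15):
    """Normalize the weights of the tokens in one sentence into a probability
    distribution and draw ~15% of positions to mask."""
    wt = [weights.get(t, 10 ** (-0.5)) for t in sentence]  # unseen tokens: theta^(-alpha)
    total = sum(wt)
    probs = [w / total for w in wt]
    n_mask = max(1, int(round(mask_rate * len(sentence))))
    positions = set()
    while len(positions) < n_mask:
        # simple repeated weighted draw; real code would sample without replacement
        positions.add(random.choices(range(len(sentence)), weights=probs, k=1)[0])
    return sorted(positions)
\end{verbatim}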
\begin{figure}[]
\centering
\centerline{\includegraphics[width=0.8\linewidth]{Image/weighted_sampling}}
\caption{\small{Illustration of the proposed \textbf{Dynamic Weighted Sampling} for mask language modeling (MLM). The sampling weight of choosing a token to mask is computed based on the prediction loss of this token by the current PLM. We store the sampling weights of each token in the weight dictionary.}}
\label{fig:example}
\end{figure}
\subsubsection{Dynamic Weighted Sampling}
Frequency Weighted Sampling produces constant sampling probabilities for tokens and does not take into account the learning status of the backbone masked language model it is applied to. We hypothesize that the informativeness of a token $w$ may also be inferred from how poorly the masked language model currently predicts it. Therefore, we propose the Dynamic Weighted Sampling strategy shown in Figure~\ref{fig:example}. We keep a weight dictionary in memory that stores the sampling weight of each token and is updated after every mini-batch, rather than only once per iteration after all batches have been processed. Firstly, we set an initial sampling weight
$wt(t_i)=1$ for each token $t_i \in T$ in the weight dictionary, where $T$ denotes all tokens in the pre-training dataset. Then, we compute the sampling probabilities using sampling weights in the weight dictionary based on Eqn.~\ref{eq:freq_prob} and train a masked language model. During each mini-batch, the masked tokens are predicted by the current model and we compute the total cross-entropy loss for token $t_i$ as:
\begin{equation}
\small
L_{t_i} = -\log P(t_i \mid x, \theta)
\end{equation}
\noindent where $x$ denotes the input masked sequence and $\theta$ denotes the parameters of the current masked language model.
Then, we use $L_{t_i}$ to compute the sampling weight $wt(t_i)$ based on Eqn.~\ref{eq:dyn_weight}.
\begin{equation}
\small
wt(t_i) = \exp\!\left(\frac{L_{t_i}}{\tau}\right)
\label{eq:dyn_weight}
\end{equation}
\noindent where $\tau$ is a temperature parameter with a default value of 0.2. Finally, for the next mini-batch, we compute the sampling probability $p(t_i)$ for token $t_i$ by normalizing $wt(t_i)$ following Eqn.~\ref{eq:freq_prob}.
The sampling weights computed by Eqn.~\ref{eq:dyn_weight} are larger for tokens with a higher cross-entropy prediction loss, i.e., tokens that are poorly learned by the current masked language model and are often rare tokens; the sampling weights are smaller for tokens with a lower cross-entropy loss, i.e., tokens that are relatively well learned. We design the sampling weight function $wt(t_i)$ as in Eqn.~\ref{eq:dyn_weight} to enlarge the variance of the sampling weights between different tokens and to further boost the sampling probabilities of rare tokens. During each iteration of pre-training, the weight dictionary is updated with the latest sampling weight $wt(t_i)$ for each token $t_i$. In the next iteration, for a sequence $s=\{t_1, t_2, \ldots, t_n\}$, $s \in S$, the sampling probability of masking each token is computed from the updated weight dictionary via Eqn.~\ref{eq:freq_prob}.
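A compact sketch of the weight-dictionary bookkeeping described above (the source of the per-token losses is left as a placeholder) might read:
\begin{verbatim}
import math
from collections import defaultdict

class DynamicWeights:
    """Weight dictionary for Dynamic Weighted Sampling.
    Starts at wt(t) = 1 for every token and is refreshed from the per-token
    cross-entropy loss observed on the masked positions of each mini-batch."""

    def __init__(self, tau=0.2):
        self.tau = tau
        self.wt = defaultdict(lambda: 1.0)   # initial sampling weight wt(t_i) = 1

    def update(self, token_losses):
        """token_losses: dict mapping a token to its loss -log P(t | x, theta)."""
        for token, loss in token_losses.items():
            # Poorly predicted tokens receive larger sampling weights.
            self.wt[token] = math.exp(loss / self.tau)

    def masking_probs(self, sentence):
        """Normalize the current weights over one sentence (Eqn. for p(t_i))."""
        weights = [self.wt[t] for t in sentence]
        total = sum(weights)
        return [w / total for w in weights]
\end{verbatim}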
|
{
"arxiv_id": "2302.14170",
"language": "en",
"timestamp": "2023-03-01T02:03:30",
"url": "https://arxiv.org/abs/2302.14170",
"yymm": "2302"
} | \section{Introduction}\label{introduction}
From the movement of gravitationally coupled celestial objects \cite{Fischer2014} to swarming animals \cite{Reynolds1987} to bacteria \cite{Zhang2010} and colloids \cite{Denz2021} to molecules in a fluid \cite{Rahman1964}, there are countless examples of naturally occurring and synthetically produced many-particle systems, i.e.,\ systems of interacting discrete entities. While it is typically easy to describe the dynamics of single particles analytically, the dynamics of many interacting particles is often only accessible numerically. Therefore, simulations of many-particle systems have become a standard tool for scientific computation and have numerous applications in physics \cite{Kong2011,Ristow1992}, chemistry \cite{vanGunsteren1990,vanDuin2001}, biology \cite{Lee2006,Perez2012}, and mathematics \cite{Sakuma2017,Edvardsson2015}.
Despite the fact that these kinds of simulation methods have been in use since the dawn of scientific computer simulations \cite{Alder1959}, there is still active development in the field \cite{Schwantes2014, Hertig2016, Hollingsworth2018}. One reason for this is that many-particle simulations frequently involve very computationally intensive calculations and therefore require careful algorithmic optimization. The complexity of crafting efficient implementations of many-particle dynamics has led to the development of numerous software packages, which promise to alleviate some of this complexity. This is especially the case in the field of molecular dynamics which has brought forward software packages such as GROMACS \cite{Pall2014}, HOOMD-blue \cite{Anderson2020}, LAMMPS \cite{LAMMPS}, or NAMD \cite{Phillips2020}.
In recent times, there has been an increased interest in more general particle simulation software. This has given rise to software packages like FDPS \cite{Iwasawa2016}, PPM \cite{Awile2013}, or OpenFPM \cite{Incardona2019}. These software packages are not simply a library of common routines like many molecular dynamics packages, but rather a framework for implementing custom particle dynamics, thus giving the user much more control over the exact numerical details of the simulation. Most of these existing software solutions for many-particle simulations utilize the distributed memory model for parallelization, i.e.,\ the particle system is decomposed into multiple subsystems, which are then processed in parallel. While this means that the simulation can be spread evenly across multiple processing nodes, it also comes with an inherent communication overhead due to the need to send information between particle subsystems \cite{Kale1999}.
At the same time, we see a trend in computer hardware towards increasingly parallel computing devices, e.g.,\ in the form of processors with increasingly large numbers of cores or specialized hardware such as graphics processors. This makes it viable to utilize a shared memory model on a single processing node instead, where all threads have access to the same memory. Some modern molecular dynamics frameworks such as OpenMM \cite{Eastman2017}, thus target \textit{shared-memory} CPU and GPU architectures to exploit current hardware to its fullest extent. While the shared-memory model eliminates most of the communication overhead, it comes at the cost of a parallelism model that is much more prone to errors, in particular a class of errors called \textit{data races} \cite{Netzer1989}.
While data races are highly problematic in general computation, their impact can be greatly reduced by limiting computation to a specific problem domain. The most popular example of this phenomenon are graphics shaders, which by the nature of graphics processors run on shared memory hardware, but are rarely affected by data races. This is due to the design of the render pipeline that restricts access to data as it is being processed in parallel \cite{Segal2019}. This approach to preventing data races is taken even further in the novel systems programming language Rust \cite{Klabnik2019}, which generalizes the idea of access restrictions on variables such that general-purpose computation is possible.
Inspired by the success of Rust, we present in this article a domain-specific model and programming language for many-particle simulations with the goal of data-race freedom on shared-memory hardware. We show that many common particle dynamics can be expressed within this model and can therefore be parallelized safely. Our work can be seen as a continuation of Ref.\ \cite{Bamme2021}; however, we found it more productive to construct our model independently instead of maintaining compatibility with the notation used in Ref.\ \cite{Bamme2021}. Due to a very broad definition of what constitutes a \textit{particle dynamics}, the resulting model for safely parallel molecular dynamics surpasses the flexibility of existing shared-memory implementations such as OpenMM \cite{Eastman2017}.
This article is structured as follows: In Sec.\ \ref{sec:data_races_and_rust}, we briefly review the concept of data races and strategies to prevent them. Then we formulate a model to ensure data-race-free particle dynamics in Sec.\ \ref{sec:data_race_free_simulations}, before concretizing this model in form of a domain-specific programming language in Sec.\ \ref{sec:DSL}. We conclude in Sec.\ \ref{sec:conclusions}.
\section{Data races and data-race freedom}\label{sec:data_races_and_rust}
In this section, we briefly introduce the problem of \textit{data races} in parallel programming and present a number of methods to ensure their absence in programs \cite{Rauber2013,Klabnik2019}.
Generally speaking, programs can be viewed as sets of \textit{tasks}.\footnote{Sometimes these tasks are also called \textit{instructions}, which, however, might be confused with \textit{machine instructions}, which have additional technical connotations in computer architectures.} In a \textit{sequential} program, these tasks are executed in a predefined sequence called a \textit{thread of execution}, with only one task being worked on at the same time. In contrast, \textit{concurrent} programs do not have just a single thread of execution, but are composed of multiple threads of execution that can at least in principle progress independently. Notably, this does \textit{not} imply that multiple threads of execution advance at the same time. The program could instead simply switch between threads of execution periodically and advance them one at a time. A concurrent program, where multiple threads of execution do in fact advance \textit{simultaneously}, is called a \textit{parallel} program. \cite{Klabnik2019}
Compared to sequential programming, creating concurrent programs requires more diligence as one needs to take into account every possibility of the different threads of execution interleaving accesses to shared resources (e.g., memory). If the outcome of executing a program depends on the order of these accesses, the program is said to exhibit a \textit{race condition} as two or more threads \textit{race} for first access to a resource. A particularly significant case of this is a situation where two or more threads access the same location in memory concurrently with at least one thread \textit{writing} to memory and at least one access being \textit{unsynchronized}, i.e., not being forced into sequential execution with the other accesses. This situation is referred to as a \textit{data race} \cite{Klabnik2019}. Most programming language models give no guarantees to program behavior if a data race occurs (\textit{undefined behavior}). This, combined with the erratic nature of race conditions in general, makes data races a class of programming errors that poses a significant threat to program correctness.
Consequently, data race detection and avoidance is an important field of research \cite{Serebryany2009, Netzer1991}. One solution to avoid data races is to simply not share any memory. This is often used in the \textit{message-passing} scheme of parallel programming, where threads cannot share data, but can send data between each other. However, message passing is less efficient compared to sharing memory as the data sent between threads typically needs to be copied first to form a contiguous message. This creates a communication overhead that can degrade performance if too much information needs to be communicated.
Here, we want to focus on the solution to the problem of data races utilized by the Rust Programming Language (henceforth simply called \ZT{Rust}). Rust is a modern systems programming language with a focus on reliability and efficiency. On the one hand, Rust offers low-level access to details of execution, e.g., memory management or operating system level multithreading, like typical systems programming languages such as C or C++. On the other hand, it also comes with strong correctness guarantees via a powerful type system and static analysis checks. In fact, in the so-called \textit{Safe Rust} subset of Rust, it is \textit{impossible} to trigger undefined behavior in any legal program.
This safety is possible in part due to the concepts of memory \textit{ownership} and reference \textit{lifetimes} being explicit and pervasive in Rust programs.
At compile time, the Rust compiler validates programs with respect to a set of rules regarding these concepts that are designed specifically to prevent data races as well as other memory-related sources of undefined behavior. In brief, Rust prohibits \textit{aliasing} of values, i.e., at every point in time every value (i.e., some block of memory filled with data) of a Rust program is bound to exactly one variable. While the variable is in scope, the value must remain allocated, and conversely, when the variable drops out of scope, the value is freed. This prevents \textit{use-after-free} errors without the overhead of memory management via \textit{garbage collection} \cite{Barth1977}. Besides value ownership, Rust also allows \textit{borrowing} a value from a variable as a reference. To prevent use-after-free errors, the Rust compiler uses an elaborate system of tracking \textit{lifetimes} of references to prove that no reference can outlive the variable it is derived from.
Of particular importance for this work is a secondary restriction imposed on references in Rust, the so-called \textit{rules of borrowing}. These state that at every point in time, there can be either an arbitrary number of read-only references or a single mutable reference. In either case, data races are impossible: either there is no writing access, or only a single thread carries a reference to the given location in memory. It is this idea of regulating concurrent access and mutability of data that forms the basis for our domain-specific model for safely parallelizable particle simulations. One should keep in mind that all this reasoning has to happen at compile time so as not to incur a management overhead that degrades performance.
It is outside the scope of this article to explain the intricacies of Rust in more detail, so we instead refer the interested reader to Refs. \cite{Jung2017,Weiss2019,Jung2021,Klabnik2019} for in-depth explanations on the design of Rust.
At this point it should be highlighted that Rust does not provide its safety guarantees for free. Programmers have to handle the increased complexity of designing their programs in such a fashion that the compiler can track ownership and reference lifetimes. Because this is a special case of static program analysis, the Rust compiler is doomed by the impossibility of \textit{perfect} static program analysis \cite{Rice1953} to always reject some programs even though they are in fact (provably) safe.\footnote{In fact, it is trivial to find such a program in Rust. Because Rust treats arrays as a single value with respect to the aliasing rules the Rust compiler will reject a multithreaded program even if each thread accesses only a mutually disjoint subset of the array.} To circumvent this, Rust allows programmers to selectively opt out of some static analysis checks via so-called \textit{Unsafe Rust}. This, of course, places the responsibility of validating safety on the programmer, but can be used very effectively to create safe abstractions over inherently unsafe operations. This ``escape hatch'' design of Rust is another aspect we take inspiration from in this article.
Before we discuss the specific semantics of simulating particle systems numerically we will first present a more general formalization of the idea of safe parallelism via thread-exclusive mutability of variables. For this purpose, we first define the notion of a \textit{program state} as the collective state of all variables of a program. To distinguish between different variables we identify each variable with a natural number. In the interest of simplicity we do not differentiate between different variable types but instead only assume all variables can take values from some generic set.
\begin{definition}
A \textbf{program state} is a function $\sigma:I\rightarrow V$ that maps a finite index set $I\subset\mathbb{N}$ to the set of possible variable values $V$. The value of the $i$-th variable is represented as $\sigma(i)$. The set of all possible program states with a given index set $I$ and value set $V$ is $V^{I}$, i.e., the set of all functions that map from $I$ to $V$.
\end{definition}
It should be noted that, unlike typical automata models such as register machines, this definition of a program state lacks any notion of an instruction counter or a similar construct to track the flow of execution. Instead, we describe program flow extrinsically as a sequence of fixed program steps.
\begin{definition}
A \textbf{program step} for a set of possible variable values $V$ is defined as a tuple $(I,i,\delta)$ with an index set $I\subset\mathbb{N}$, a variable index $i\in\mathbb{N}$ and an update function $\delta\in(V^{I}\rightarrow V)\cup\{\mathrm{DEL}\}$ which must satisfy the condition $\delta=\mathrm{DEL}\Rightarrow i\in I$. Conceptually, this encodes a change in a program state $\sigma$ with index set $I$ where the $i$-th variable is substituted by $\delta(\sigma)$. If $i\notin I$, the variable is newly created and initialized by $\delta(\sigma)$ and if $\delta=\mathrm{DEL}$ the variable is deleted instead. We define the output index set $I_{\mathrm{out}}\subset\mathbb{N}$ as
\begin{align*}
I_{\mathrm{out}}((I,i,\delta))=\begin{cases}
I & i\in I\wedge\delta\neq\mathrm{DEL}\\
I\backslash\{i\} & i\in I\wedge\delta=\mathrm{DEL}\\
I\cup\{i\} & i\notin I\wedge\delta\neq\mathrm{DEL}
\end{cases}
\end{align*}
To map input program states to output program states we define the \textbf{execution function} $\mathrm{exec}((I,i,\delta)):V^{I}\rightarrow V^{I_{\mathrm{out}}((I,i,\delta))}$ as
\begin{align*}
\mathrm{exec}((I,i,\delta))(\sigma)(j)=\begin{cases}
\sigma(j) & i\neq j\\
\delta(\sigma) & i=j\wedge\delta\neq\mathrm{DEL}
\end{cases}
\end{align*}
\end{definition}
\begin{definition}
Let $p=(I_{1},i_{1},\delta_{1}),\dots,(I_{n},i_{n},\delta_{n})$ be a finite sequence of length $n\in\mathbb{N}$ such that $(I_{k},i_{k},\delta_{k})$ is a program step for all $1\leq k \leq n$. We call this sequence a \textbf{program} if the index set of each program step is the same as the output index set of the previous program step, i.e., for all $1\leq k<n$ it holds that $I_{k+1}=I_{\mathrm{out}}((I_{k},i_{k},\delta_{k}))$. We define the \textbf{execution function} of the program $\mathrm{exec}(p):V^{I_{1}}\rightarrow V^{I_{\mathrm{out}}((I_{n},i_{n},\delta_{n}))}$, i.e., the function that maps from the initial state to the output state after executing each program step in order, as the composition of the execution functions of each program step
\begin{align*}
\mathrm{exec}(p)=\mathrm{exec}((I_{n},i_{n},\delta_{n}))\circ\dots\circ\mathrm{exec}((I_{1},i_{1},\delta_{1})).
\end{align*}
Furthermore we define the \textbf{input index set} of the program as $I_{\mathrm{in}}(p)=I_{1}$ and the \textbf{output index set} of the program as $I_{\mathrm{out}}(p)=I_{\mathrm{out}}((I_{n},i_{n},\delta_{n}))$.
\end{definition}
This way of modeling programs comes with the obvious drawback that the program flow must be the same regardless of program input. While this is a severe limitation for general computing, it is much less problematic in the context of molecular dynamics, where the program flow is typically determined only by the underlying model equations and not the concrete state of a given physical system. It should also be noted that the static nature of the program flow in the above definition does not rule out branching computation in general since any kind of computation can be incorporated into the update functions $\delta$.\footnote{Naturally, for this model to be of any practical use we have to assume that $\delta$ is computable (if $\delta\neq\mathrm{DEL}$).} As we will see later, this will form an \ZT{escape hatch} in our domain-specific programming language similar to Unsafe Rust.
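To make the above definitions concrete, the following small Python sketch represents program states as dictionaries and implements the execution functions of program steps and programs; it is an illustration of the formalism, not part of the proposed domain-specific language.
\begin{verbatim}
DEL = object()   # sentinel standing in for the deletion marker "DEL"

def exec_step(step, state):
    """Execute one program step (I, i, delta) on a program state (dict: index -> value).
    delta is either the DEL sentinel or a function of the full state."""
    I, i, delta = step
    assert set(state) == set(I)
    out = dict(state)
    if delta is DEL:
        del out[i]              # variable i is removed from the state
    else:
        out[i] = delta(state)   # create or overwrite variable i
    return out

def exec_program(steps, state):
    """Compose the execution functions of all steps in order."""
    for step in steps:
        state = exec_step(step, state)
    return state

# Example: two variables; step 1 doubles variable 2, step 2 deletes variable 1.
program = [
    ({1, 2}, 2, lambda s: 2 * s[2]),
    ({1, 2}, 1, DEL),
]
print(exec_program(program, {1: 3, 2: 5}))   # {2: 10}
\end{verbatim}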
Using the definition of a program in terms of its access patterns to variables we can now formalize the concepts of a program depending on and mutating variables.
\begin{definition}
\label{def:program_mut}
Let $p=(I_{1},i_{1},\delta_{1}),\dots,(I_{n},i_{n},\delta_{n})$ be a program. We define the \textbf{set of variables mutated by} $p$ as the set $\mathrm{mut}(p)\subseteq I_{\mathrm{out}}(p)\cup I_{\mathrm{in}}(p)$ with $i\in\mathrm{mut}(p)$ if and only if there is a $1\leq k\leq n$ with $i=i_{k}$. In other words, a variable is mutated by a program if there is at least one program step in this program that updates it.
\end{definition}
\begin{definition}
Let $(I,i,\delta)$ be a program step. We say that this \textbf{step depends on a variable} $k$ if there are two program states $\sigma,\sigma'\in V^{I}$ with $\sigma(k)\neq\sigma'(k)$ and $\sigma(\ell)=\sigma'(\ell)$ for all $\ell\neq k$ such that $\delta(\sigma)\neq\delta(\sigma')$, i.e., there are two program states that differ only in the value for the $k$-th variable but produce different output states.
\end{definition}
We emphasize again that we do not make any assumption on how $\delta$ is computed. Therefore, a program step $(I,i,\delta)$ not depending on the $k$-th variable does not imply that \textit{every} implementation of $\delta$ will be free of reading operations for this variable, but merely implies the \textit{existence} of an implementation without such an operation. In practice, it is usually sufficient to analyze \textit{syntactically} if an implementation of $\delta$ reads a given variable, e.g., by finding the corresponding symbol for the variable in the program code.\footnote{This method of analysis allows for false positives as it does not verify the reachability of the expression containing the symbol in question. Therefore it does not violate the general undecidability of (perfect) static analysis.}
\begin{definition}
\label{def:program_dep}
Let $p=(I_{1},i_{1},\delta_{1}),\dots,(I_{n},i_{n},\delta_{n})$ be a program. For each variable $i\in I_{\mathrm{in}}(p)$ we can split the program into two programs $p_{\mathrm{ro},i}$ and $p_{\mathrm{rem},i}$ such that the following three properties hold:
\begin{enumerate}
\item $p_{\mathrm{ro},i}p_{\mathrm{rem},i}=p$
\item $p_{\mathrm{ro},i}$ contains no element $(I_{k},i_{k},\delta_{k})$ with $i_{k}=i$
\item $p_{\mathrm{ro},i}$ has maximum length
\end{enumerate}
In other words, $p_{\mathrm{ro},i}$ is the part of program $p$ before there is a write to the $i$-th variable, and $p_{\mathrm{rem},i}$ contains the remaining program steps of $p$. We then define the \textbf{set of variables $p$ depends on} as $\mathrm{dep}(p)\subseteq I_{\mathrm{in}}(p)$ where $i\in\mathrm{dep}(p)$ if and only if $p_{\mathrm{ro},i}$ contains a step that depends on the variable $i$. Conceptually, $p$ depending on a variable means that during the execution of $p$ the value of the variable is read before it is first written to.
\end{definition}
Finally, we use Def.\ \ref{def:program_mut} and \ref{def:program_dep} to formally define the notion of data-race freedom by exclusive mutability as follows.
\begin{definition}
\label{def:parallelizable_general}
Let $p$ be a program $p=p_{1}\dots p_{n}$ composed of $n$ subprograms with $I_{\mathrm{in}}(p)=I_{\mathrm{in}}(p_{k})=I_{\mathrm{out}}(p_{k})=I_{\mathrm{out}}(p)=I$ for all $1\leq k\leq n$. We say that $p$ can be \textbf{parallelized without data races} via $p_{1},\dots,p_{n}$ if for all variables $i\in I$ the following conditions hold:
\begin{enumerate}
\item The variable $i$ is mutated by at most one subprogram, i.e., there is at most one $k\in\mathbb{N}^{\leq n}$ such that $i\in\mathrm{mut}(p_{k})$.
\item If the variable is mutated by one subprogram, no other subprogram may depend on this variable, i.e., if there is a $k\in\mathbb{N}^{\leq n}$ such that $i\in\mathrm{mut}(p_{k})$ it is implied that for all $\ell\in\mathbb{N}^{\leq n}$ with $\ell\neq k$ it holds that $i\notin\mathrm{dep}(p_{\ell})$.
\end{enumerate}
\end{definition}
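Assuming the sets $\mathrm{mut}(p_{k})$ and $\mathrm{dep}(p_{k})$ have already been determined (e.g., syntactically), the two conditions of Def.\ \ref{def:parallelizable_general} can be checked mechanically, as in this Python sketch (an illustration of the definition, not part of the proposed language):
\begin{verbatim}
def parallelizable_without_data_races(mut_sets, dep_sets, variables):
    """mut_sets[k] and dep_sets[k] are the sets of variables mutated by /
    depended on by subprogram k (both assumed to be precomputed)."""
    for i in variables:
        writers = [k for k, m in enumerate(mut_sets) if i in m]
        if len(writers) > 1:                  # condition 1: at most one mutating subprogram
            return False
        if writers:
            w = writers[0]
            for k, d in enumerate(dep_sets):
                if k != w and i in d:         # condition 2: no other subprogram may read it
                    return False
    return True

# Two subprograms: one writes variable 1, the other only reads variable 2 -> safe.
print(parallelizable_without_data_races([{1}, set()], [{2}, {2}], {1, 2}))   # True
\end{verbatim}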
The strategy we used to obtain Def.\ \ref{def:parallelizable_general} serves as a guideline for the next section, where we consider the problem domain of particle simulations.
\section{Data-race-free particle simulations}\label{sec:data_race_free_simulations}
In this section, we derive a general model for particle simulations for which we can make guarantees in terms of data-race freedom. To this end, we first define three concepts: \textit{physical quantities} associated with particles, \textit{particle types}, and \textit{particle systems}. With these we can then define \textit{particle dynamics} in a very general fashion and classify certain \textit{particle dynamics} as particularly useful.
\subsection{Modelling static particle systems}
Conceptually, a \textit{physical quantity} of a particle is a single, semantically atomic property of the particle, e.g., its position, mass, charge, velocity, or orientation. We make no assumptions on these quantities or their physical meaning aside from the fact that they can be represented by a finite number of real numbers.\footnote{For the sake of simplicity, we ignore the fact that physical quantities typically also possess a unit.}
\begin{definition}
Let $n\in \mathbb{N}$. Then we call $Q=\mathbb{R}^n$ a \textbf{physical quantity type} of dimensionality $\textup{dim}(Q)=n$. $q\in Q$ is called a physical quantity of type $Q$. Furthermore, we call the \textbf{set of all physical quantity types} $\mathfrak{Q}=\{\mathbb{R}^n|n\in \mathbb{N}\}$ and the \textbf{set of all physical quantities} $\mathcal{I}(\mathfrak{Q})$, i.e., $q\in \mathcal{I}(\mathfrak{Q})$ implies the existence of an $n\in \mathbb{N}$ such that $q\in \mathbb{R}^n$.
\end{definition}
Generally, we can express the idea of a \textit{particle} as an entity that ties together a position in space with a varying number of physical quantities, e.g., orientation, (angular) velocity, mass, or charge. For a truly general model of particles we make no assumptions on the nature of these quantities except that their number must be finite.\footnote{Strictly speaking, this can be seen as a loss of generality. One could, e.g., imagine a particle that keeps a memory of its past states, e.g.\ to implement self-avoiding dynamics. Since a full representation of all past states of a particle cannot be expressed in a finite number of quantities, our model cannot capture these kinds of particles. However, practical implementations of particles with memory typically have either a temporal decay of memory (thus limiting the number of previous states that are remembered) or utilize global fields instead of per-particle memory to enable memory.} When defining this idea formally, we must distinguish between the concepts of \textit{particle types} and \textit{particle states}. The first can be seen as a blueprint for a set of particles equal in structure, while the second one encapsulates the idea of a single concrete particle at a given point in time.
\begin{definition}
Let $I\subset\mathbb{N}$ be a finite set with $1\in I$ and let $\kappa:I\rightarrow\mathfrak{Q}$ be a function. Then $P=(I,\kappa)$ is a \textbf{particle type} with the index set $I$ and the quantity function $\kappa$. For a particle type $P=(I,\kappa)$ we call $\textup{pos}(P)=\kappa(1)$ the \textbf{position quantity type} of $P$ and $\textup{dim}(P)=\textup{dim}(\kappa(1))$ the \textbf{dimensionality} of the particle type. Furthermore, we define $\mathfrak{P}$ as the set of all particle types and $\mathfrak{P}_{n_\textup{dim}}$ as the set of all particle types of dimensionality $n_\textup{dim}$.
\end{definition}
\begin{definition}
Let $P=(I,\kappa)$ be a particle type. Then we call $p:I\rightarrow\mathcal{I}(\mathfrak{Q})$ a \textbf{particle state} of type $P$ if for all $i\in I$ it holds that $p(i)\in \kappa(i)$, i.e., a particle state maps the index set $I$ to physical quantities in accordance to the quantity types defined in the corresponding particle type. For a particle state $p$ we define $\textup{pos}(p)=p(1)$ as the \textbf{position} of the particle state. Furthermore, we define $\mathcal{I}(P)$ as the set of all particle states of particle type $P$ as well as $\mathcal{I}(\mathfrak{P})$ as the set of all particle states of any particle type and $\mathcal{I}(\mathfrak{P}_{n_{\textup{dim}}})$ as the set of all particle states of any particle type of dimensionality $n_\textup{dim}$.
\end{definition}
The purpose of the index set $I$, which associates every quantity of a particle state with a unique index, may seem questionable to a reader at first glance. As we will see later, it is crucial, however, to reason about partial particle states, i.e., restrictions of a particle state onto a subset of quantities. $I$ can also serve to give physical meaning to quantities by some form of indexing scheme. This makes reasoning within the particle model much easier than, e.g., modelling particle states as (unordered) sets of quantities would do.
Finally, we can use the notion of particle types and states to define \textit{particle systems} and their states. Conceptually, a particle system contains two pieces of information. The first is which particle types are found in the system and how many particles of each type it contains. The second is physical information that is not bound to individual particles, i.e., information that is \textit{global} to the system. Examples of this are the simulation time or external fields. Again, we make no assumptions on this global information except for the fact that it must be representable by a finite set of real numbers. One might ask at this point if the global state could not simply be represented as a single particle of a unique particle type. As we will see in the next section, separating the particle system from the global state is worthwhile to exploit the fact that in many numerical simulations of particle systems the global state is mutated separately from the state of the particle system.
\begin{definition}\label{def:particle_system}
Let $I\subseteq\mathbb{N}$ be a finite set, $\tau:I\rightarrow\mathfrak{P}_{n_{\textup{dim}}}$ with $n_{\textup{dim}}\in\mathbb{N}$ and $\nu:I\rightarrow\mathbb{N}$ be functions as well as $G=\mathbb{R}^{m}$ for some $m\in\mathbb{N}_{0}$. Then $S=(I,\tau,\nu,G)$ is a \textbf{particle system} of dimensionality $n_{\textup{dim}}$ with the index set $I$, the particle type function $\tau$, the particle number function $\nu$, and the global-state space $G$. For each $i\in I$ we say that the system $S$ contains $\nu(i)$ particles of type $\tau(i)$.
\end{definition}
\begin{definition}
Let $S=(I,\tau,\nu,G)$ be a particle system, $\sigma:I\rightarrow\mathcal{M}(\mathcal{I}(\mathfrak{P}))$ be a function mapping every element of $I$ to a multiset\footnote{See Appendix \ref{appendix_multisets} for a complete overview of the multiset formalism used here.} of particle states of any type and $g\in G$. Then we call $s=(\sigma,g)$ a \textbf{state of the particle system} $S$ if for all $i\in I$ and for all $p\in\sigma(i)$ it holds that $p\in\mathcal{I}(\tau(i))$ and $|\sigma(i)|=\nu(i)$. In other words, $\sigma$ is a function that maps the index for each particle type to a multiset containing the states for each particle of this type in the particle system. We define $\mathcal{I}(S)$ as the set of all possible states of a particle system $S$, i.e.\ $s\in\mathcal{I}(S)$ if and only if $s$ is a state of $S$.
\end{definition}
One should note that in the definitions above there is no concept of any ordering of particles, as the particle states for each particle type in the system are represented by a multiset. For the same reason, there is also no notion of particle identity inherent in the model, i.e., two or more particles in the same state are indistinguishable when represented in the state of a particle system. However, if desired, one can express both an ordering of particles and an inherent distinguishability of particles by extending the particle type with a (non-physical) quantity signifying particle identity (e.g., by using a quantity to enumerate particles).
To illustrate the above formalism, let us consider a particle system comprised of three point masses, i.e., particles that have position, velocity, and mass, in three spatial dimensions. Additionally, two of these particles shall also have an electric charge.\footnote{At this point, one could ask if it was not more sensible to unify all particles into one type, by expressing the uncharged particles as particles with zero charge. While sensible for this tiny example system, in larger systems, where the number of uncharged particles is high compared to that of the charged particles, it becomes computationally wasteful to use the more complicated dynamics of charged particles for the uncharged particles as well.} We say the particles have a position $\vec{r}_i$, a velocity $\vec{v}_i$, a mass $m_i$, and in the case of the charged particles a charge $q_i$ where $i=1$ represents the uncharged particle and $i\in\{2,3\}$ the charged particles. Both uncharged and charged particles can then be represented by these particle types, respectively,
\begin{align}
P_{u}&=\left(\{1,2,3\},i\mapsto\begin{cases}
\mathbb{R}^{3} & i\in\{1,2\}\\
\mathbb{R} & i=3
\end{cases}\right),\\
P_{c}&=\left(\{1,2,3,4\},i\mapsto\begin{cases}
\mathbb{R}^{3} & i\in\{1,2\}\\
\mathbb{R} & i\in\{3,4\}
\end{cases}\right),
\end{align}
and we can define the particle system as
\begin{align}
S&=\left(\{1,2\},\tau,\nu,\mathbb{R}^{0}\right),\\
\tau&:\{1,2\}\rightarrow\{P_{u},P_{c}\},i\mapsto\begin{cases}
P_{u} & i=1\\
P_{c} & i=2
\end{cases},\\
\nu&:\{1,2\}\rightarrow\mathbb{N},i\mapsto\begin{cases}
1 & i=1\\
2 & i=2
\end{cases}.
\end{align}
Then we can express the state of the particle system as
\begin{align}
s=(\sigma,g),\;\;\sigma(1)=[p_{1}],\;\;\sigma(2)=[p_{2},p_{3}]
\end{align}
with the particle states defined by
\begin{alignat}{3}
p_{1}(1)&=\vec{r}_{1},& p_{1}(2)&=\vec{v}_{1}, & p_{1}(3)&=m_{1},\\
p_{2,3}(1)&=\vec{r}_{2,3},\;\;& p_{2,3}(2)&=\vec{v}_{2,3},\;\;&p_{2,3}(3)&=m_{2,3},\\
p_{2,3}(4)&=q_{2,3}.
\end{alignat}
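For illustration, this example system can be written down directly in a small Python sketch, with particle types as maps from quantity indices to dimensionalities and the multisets $\sigma(i)$ represented by lists; the concrete numerical values are arbitrary placeholders.
\begin{verbatim}
import numpy as np

# Particle types map a quantity index to the quantity's dimensionality;
# index 1 is always the position (mirroring P_u and P_c above).
P_u = {1: 3, 2: 3, 3: 1}            # uncharged point mass: position, velocity, mass
P_c = {1: 3, 2: 3, 3: 1, 4: 1}      # charged point mass:   position, velocity, mass, charge

def make_state(ptype, quantities):
    """Build a particle state (index -> quantity) and check it against its type."""
    state = {i: np.atleast_1d(np.asarray(q, dtype=float)) for i, q in quantities.items()}
    assert set(state) == set(ptype)
    assert all(state[i].shape == (ptype[i],) for i in ptype)
    return state

# Particle system S = ({1, 2}, tau, nu, R^0) and one concrete state of it.
tau = {1: P_u, 2: P_c}
nu = {1: 1, 2: 2}
sigma = {
    1: [make_state(P_u, {1: [0.0, 0.0, 0.0], 2: [0.1, 0.0, 0.0], 3: 1.0})],
    2: [make_state(P_c, {1: [1.0, 0.0, 0.0], 2: [0.0, 0.2, 0.0], 3: 2.0, 4: 1.0}),
        make_state(P_c, {1: [0.0, 1.0, 0.0], 2: [0.0, 0.0, 0.3], 3: 2.0, 4: -1.0})],
}
assert all(len(sigma[i]) == nu[i] for i in tau)   # |sigma(i)| = nu(i)
\end{verbatim}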
\subsection{Modelling particle dynamics}\label{subsec:particle_dynamics}
Using the definitions from the previous section, we can define \textit{particle dynamics} simply as transitions between states of particle systems.
\begin{definition}\label{def:particle_dynamics}
Let $S$ and $S'$ be particle systems. Then a function $d:\mathcal{I}(S)\rightarrow\mathcal{I}(S')$ is called a \textbf{particle dynamics}. For $s\in \mathcal{I}(S)$ we call $s'=d(s)$ the \textbf{evolved state} of $s$ under $d$.
\end{definition}
It should be noted that we do not impose \textit{any} restrictions on particle dynamics other than the fact that it maps from all valid states of \textit{some} particle system to valid states of \textit{some other} particle system. In particular, we do not assume that the particle system being mapped from is the same as the one being mapped to. This allows a particle dynamics to, e.g., change the number of particles in the system, alter particle types, and redefine global state.
An important algebraic property of particle dynamics as defined above is that they are closed under composition under the condition that input and output states are compatible. This is relevant since in practice particle simulations are often formulated as a loop over multiple operations, each of which is considered elementary within some numerical scheme (e.g., applying a force to all particles of a type or advancing particle positions). Our model can express this by defining a particle dynamics for each of these elementary operations and then composing them into more complex dynamics. As we will see later, this is highly useful as these elementary operations can typically be reasoned about more strongly than their composition into a complex particle dynamics.
On its own, the above definition of particle dynamics is far too generic to be of practical use. In a sense it is simply a (convoluted) way of expressing functions mapping between vector spaces of finite dimension. For this we cannot make any general statements regarding parallelism. Similar to how Safe Rust carves out of all possible programs only those programs for which it can guarantee safety, we can carve out of all possible particle dynamics only those which we can safely process in parallel in some fashion.
The first useful classification of particle dynamics distinguishes between dynamics that conserve global state and those that do not.
\begin{definition}
Let $d:\mathcal{I}(S)\rightarrow\mathcal{I}(S')$ be a particle dynamics with $S=(I,\tau,\nu,G)$ and $S'=(I',\tau',\nu',G')$ being particle systems. $d$ is a \textbf{global-state preserving} particle dynamics if $G=G'$ and for all $(\sigma,g)\in \mathcal{I}(S)$ it holds that $d((\sigma,g))=(\sigma',g)$ for some $\sigma'$. In other words, the global state is invariant under the dynamics.
\end{definition}
This separation is useful as the time evolution of global state can be driven by \textit{any} numerical procedure. For example, the global state might contain discretized fields that are integrated in time by a numerical scheme such as finite differences. Parallel implementations of these generic algorithms are not specific to many-particle simulations and thus outside the scope of our simulation model for particle dynamics. Instead, it is more sensible to delegate the analysis and implementation of these kinds of programs to general purpose programming languages such as Rust.
Another classification of particle dynamics can be made by looking at the definition of particle types in the two particle systems connected by a particle dynamics. By Def.\ \ref{def:particle_dynamics}, there is no requirement for both particle systems to contain the same particle types. In some special instances this can be useful, e.g., for initializing a system subject to a complex dynamics via a simple dynamics. For instance, one can use a simple diffusion dynamics to relax a system of particles into a homogeneous state and then run a more complex particle dynamics (with different particle types) from there. However, as in the case of dynamics not preserving the global state, dynamics that do not preserve particle types are not algorithmically restricted enough to make statements about safe parallelism. We therefore characterize \textit{particle-type preserving dynamics} as particle dynamics that do not alter which particle types are found in a given particle system.
\begin{definition}
Let $d:\mathcal{I}(S)\rightarrow\mathcal{I}(S')$ be a particle dynamics with $S=(I,\tau,\nu,G)$ and $S'=(I',\tau',\nu',G')$ being particle systems. $d$ is a \textbf{particle-type preserving} particle dynamics if $I=I'$ and $\tau=\tau'$.
\end{definition}
If we look at particle dynamics that are both global-state preserving and particle-type preserving, we can identify two subclasses of interest.
We first consider a class of particle dynamics that add particles of existing types to the particle system but leave existing particles untouched. To add particles to the system, we require the number of new particles as well as an initialization function for each particle type. Of particular interest for parallelization are dynamics of this kind where the initialization of each new particle can be performed in parallel, i.e., independently of the other new particles.
\begin{definition}\label{def:indep_init}
Let $d:\mathcal{I}(S)\rightarrow\mathcal{I}(S')$ be a global-state preserving and particle-type preserving particle dynamics with $S=(I,\tau,\nu,G)$ and $S'=(I,\tau,\nu',G)$ being particle systems. Also, let $\zeta:I\rightarrow\mathbb{N}_{0}$ and $\eta:I\rightarrow\bigcup_{i\in I}(\mathbb{N}_{0}^{\leq\zeta(i)}\times\mathcal{I}(S)\rightarrow\mathcal{I}(\tau(i)))$ be functions indicating the number of new particles and their initial states for each particle type, respectively. We then call $d$ an \textbf{independently initializing insertion} if $\nu'(i)=\nu(i)+\zeta(i)$ and $d((\sigma,g))=(i\mapsto\sigma(i)+\sum_{k=1}^{\zeta(i)}[(\eta(i))(k,(\sigma,g))],g)$.
\end{definition}
Definition \ref{def:indep_init} formalizes the idea of an independent initialization of new particles by generating a dynamics from two functions $\zeta$ and $\eta$, where $\zeta$ determines the number of particles to create for each particle type and $\eta$ initializes a new particle of a given type based on an index enumerating each new particle as well as the state of the particle system before adding any particles. Since $\eta$ only depends on the original system state and the number of particles to create for each type is known in advance, all evaluations of $\eta$ can be performed in parallel without a data race as long as the new particles are only added to the system state after all new particles have been initialized.
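The following Rust sketch illustrates how such an insertion could be parallelized; it uses the rayon crate for data parallelism, and the \texttt{Particle} struct, the function name, and the signature of the initializer are our own illustrative choices rather than part of a concrete implementation.
\begin{verbatim}
use rayon::prelude::*;

#[derive(Clone, Debug)]
struct Particle { x: [f64; 3], v: [f64; 3] }

// Sketch of an independently initializing insertion: the initializer `eta`
// only reads the original state (here just the particle list) and the index
// of the new particle, so all new particles can be built in parallel and are
// appended only after every initialization has finished.
fn insert_particles<F>(particles: &mut Vec<Particle>, n_new: usize, eta: F)
where
    F: Fn(usize, &[Particle]) -> Particle + Sync,
{
    let snapshot: &[Particle] = particles; // immutable view of the original state
    let new_particles: Vec<Particle> = (0..n_new)
        .into_par_iter()
        .map(|k| eta(k, snapshot))
        .collect(); // implicit barrier: all evaluations of eta are done here
    particles.extend(new_particles); // sequential append afterwards
}
\end{verbatim}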
Next, we consider a class of particle deleting dynamics based on a predicate function that selects the particles to delete. Again, we can parallelize this dynamics safely if the individual evaluations of this predicate only depend on the individual particle state as well as the original system state.
\begin{definition}\label{def:indep_sel_del}
Let $d:\mathcal{I}(S)\rightarrow\mathcal{I}(S')$ be a global-state preserving and particle-type preserving particle dynamics with $S=(I,\tau,\nu,G)$ and $S'=(I,\tau,\nu',G)$ being particle systems and $s=(\sigma,g)$ being a state of $S$. Also, let $\delta:I\rightarrow\bigcup_{i\in I}(\mathcal{I}(\tau(i))\times\mathcal{I}(S)\rightarrow\mathbb{B})$ be a Boolean predicate which flags particles for deletion. We then call $d$ an \textbf{independently selecting deletion} if $\nu'(i)=\nu(i)-|\textup{select}(\sigma(i),p\mapsto(\delta(i))(p,s))|$ and $d((\sigma,g))=(i\mapsto\textup{select}(\sigma(i),p\mapsto\neg(\delta(i))(p,(\sigma,g))),g)$.
\end{definition}
It should be noted that while for an independently initializing insertion the evolved particle system only depends on the original particle system, for an independently selecting deletion the evolved particle system depends both on the original particle system \textit{and} its state due to the fact that the number of deleted particles might depend on this state. Therefore, an independently selecting deletion as defined in Def.\ \ref{def:indep_sel_del} can only be meaningfully applied to a single state even if it is technically defined on all states of the original particle system. This is largely irrelevant for the domain-specific language developed later as we will design this language to be generic over particle numbers.
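Analogously, the following Rust sketch (again based on the rayon crate, with illustrative names) evaluates the deletion predicate for all particles in parallel against an immutable view of the original state and only afterwards removes the flagged particles sequentially.
\begin{verbatim}
use rayon::prelude::*;

#[derive(Clone, Debug)]
struct Particle { x: [f64; 3], age: u64 }

// Sketch of an independently selecting deletion: `delta` flags particles for
// deletion based only on the particle itself and the original system state.
fn delete_particles<F>(particles: &mut Vec<Particle>, delta: F)
where
    F: Fn(&Particle, &[Particle]) -> bool + Sync,
{
    let keep: Vec<bool> = {
        let snapshot: &[Particle] = particles; // original state, read-only
        snapshot
            .par_iter()
            .map(|p| !delta(p, snapshot))
            .collect()
    };
    // Apply the precomputed flags in order (Vec::retain visits elements in order).
    let mut flags = keep.iter();
    particles.retain(|_| *flags.next().unwrap());
}
\end{verbatim}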
Until now, we have considered particle dynamics that can change the number of particles. Let us now look at particle-number conserving dynamics.
\begin{definition}
Let $d:\mathcal{I}(S)\rightarrow\mathcal{I}(S')$ be a particle-type preserving particle dynamics with $S=(I,\tau,\nu,G)$ and $S'=(I,\tau,\nu',G')$ being particle systems. $d$ is a \textbf{particle-number preserving} particle dynamics if $\nu=\nu'$.
\end{definition}
By itself this class does not directly provide a parallelization opportunity as we have not specified how $d$ calculates the evolved state. Ideally, one would like to process every particle in parallel as this will provide ample opportunity for parallelization for most real-world systems. To this end we define a new subclass of particle dynamics.
\begin{definition}\label{def:particle_parallel}
Let $d:\mathcal{I}(S)\rightarrow\mathcal{I}(S)$ be a global-state, particle-type and particle-number preserving particle dynamics with $S=(I,\tau,\nu,G)$ being a particle system. We call $d$ a \textbf{particle-parallel} particle dynamics under $\Pi$ if there is a function $\Pi:I\rightarrow\bigcup_{i\in I}(\mathcal{I}(\tau(i))\times\mathcal{I}(S)\rightarrow\mathcal{I}(\tau(i)))$ such that $d((\sigma,g))=(i\mapsto\textup{map}(\sigma(i),p\mapsto(\Pi(i))(p,(\sigma,g))),g)$. In other words, for every particle type $\tau(i)$ the function $\Pi$ produces a function $\Pi(i)$ that takes the state of a single particle of type $\tau(i)$ as well as the initial state of the particle system and produces the new state for this particle.
\end{definition}
It is noteworthy that, as a consequence of enforcing this form of dynamics, particles of the same type in the same state are necessarily mapped to the same state in the evolved system. We also require particles of the same type to be subject to the same state-mapping function, i.e., the notion of a particle type is now not only used to unify particles with a common set of quantities associated with them, but also particles that follow the same kind of behavior within a particle dynamics. While formally a restriction, this form of dynamics does not diminish the expressiveness of the particle model in practice. In particular, dynamics with completely unique behavior for each particle can be implemented by using a unique particle type for each particle. Similarly, if the restriction that particles of the same type in the same state evolve identically is undesired, one can simply introduce an additional quantity in the particle type to make particles of this type distinguishable.
A particle-parallel particle dynamics can be trivially parallelized by evaluating the respective $\Pi(i)$ for each particle in parallel. However, $\Pi(i)$ also has a secondary dependency on the state of the entire particle system. This is necessary to allow the dynamics of a single particle to depend not only on its own original state but also on the state of other particles, i.e., this enables particle \textit{interactions}. If we wish to mutate the system state in-place (i.e., without making a copy first)\footnote{Making a full copy of the entire particle system in each step of the simulation would be prohibitively expensive in terms of memory and runtime in many applications.} we might cause data races if we naively allow \textit{arbitrary} functions $\Pi(i)$ to run in parallel. To formulate a restriction on $\Pi(i)$ such that data-race freedom is guaranteed for in-place state updates, we first introduce a number of helpful definitions related to particle systems and particle dynamics.
First, we formalize the idea of extracting partial information from a particle system by only considering some subset of the quantities of each particle type.
\begin{definition}
Let $S=(I,\tau,\nu,G)$ be a particle system with $\tau(i)=(I_{P,i},\kappa_{i})$ and $\varsigma:I\rightarrow\mathcal{P}(\mathbb{N})$ be a function such that for all $i\in I$ it holds that $\varsigma(i)\subseteq I_{P,i}$. Then $S'=(I,\tau',\nu,G)$ is called the \textbf{subsystem} of $S$ with respect to $\varsigma$ (written as $S'=\textup{subsystem}(S,\varsigma)$) if for all $i\in I$ it holds that $\tau'(i)=(\varsigma(i),\kappa_{i}|_{\varsigma(i)})$. In other words, $S'$ is the particle system obtained by taking only those quantities for each type $\tau(i)$ where the respective index of the quantity is in $\varsigma(i)$.
\end{definition}
\begin{definition}
Let $s$ be a state of a particle system $S=(I,\tau,\nu,G)$ and $S'=\textup{subsystem}(S,\varsigma)$ be a subsystem of $S$. Then $s'=(\sigma',g)$ is called the \textbf{substate} of $s$ with respect to $\varsigma$ (written as $s'=\textup{substate}(s,\varsigma)$) if $s'$ is a state of $S'$ and for all $i\in I$ it holds that $\sigma'(i)=\textup{map}(\sigma(i),p\mapsto p|_{\varsigma(i)})$.
\end{definition}
Next, we formalize the idea of a subsystem being invariant under a particle-parallel particle dynamics.
\begin{definition}
Let $S=(I,\tau,\nu,G)$ be a particle system and let $d:\mathcal{I}(S)\rightarrow\mathcal{I}(S)$ be a particle dynamics that is particle-parallel under $\Pi$. Furthermore, let $i\in I$ be a particle type index and $\tau(i)=(I_{P},\kappa)$ be the corresponding particle type. We then call $I_{\textup{immut}}\subseteq I_{P}$ an \textbf{immutable subset of the particle type} $\tau(i)$ under $d$ if for all $i_{P}\in I_{\textup{immut}}$, all $p\in\mathcal{I}(\tau(i))$, and all $s\in\mathcal{I}(S)$ it holds that
\begin{align}
\begin{split}
((\Pi(i))(p,s))(i_{P}) = p(i_{P})
\end{split}
\end{align}
In other words, there must be no state of $S$ where $\Pi(i)$ would mutate the state of any particle of type $\tau(i)$ such that a quantity with an index in $I_{\textup{immut}}$ is changed.
\end{definition}
\begin{definition}
Let $S=(I,\tau,\nu,G)$ be a particle system and let $d:\mathcal{I}(S)\rightarrow\mathcal{I}(S)$ be a particle dynamics that is particle-parallel under $\Pi$. Furthermore, let $\upsilon:I\rightarrow\mathcal{P}(\mathbb{N})$ be a function. Then we call $\mathrm{subsystem}(S,\upsilon)$ an \textbf{immutable subsystem} of $S$ under $d$ if for all $i\in I$ it holds that $\upsilon(i)$ is an immutable subset of the particle type $\tau(i)$ under $d$.
\end{definition}
The notion of an immutable subsystem can be used to express the idea of extracting from a particle-system state only information that is invariant under a specific particle dynamics. This allows us to formulate a data-race-free subclass of the particle-parallel particle dynamics defined in Def.\ \ref{def:particle_parallel} in the case of in-place state manipulation. Similar to Def.\ \ref{def:parallelizable_general} we demand that each quantity of any particle in the system must either remain unchanged by the dynamics, or, if it is mutated, may only influence the dynamics of the particle it is associated with while being irrelevant to the dynamics of any other particle. In other words, the particle dynamics has \textit{exclusive mutability}.
\begin{definition}\label{def:exclusive_mutability}
Let $S=(I,\tau,\nu,G)$ be a particle system and let $d:\mathcal{I}(S)\rightarrow\mathcal{I}(S)$ be a particle dynamics that is particle-parallel under $\Pi$. Also, let $\upsilon:I\rightarrow\mathcal{P}(\mathbb{N})$ be a function such that $S_\textup{immut}=\textup{subsystem}(S,\upsilon)$ is an immutable subsystem of $S$ under $d$. We then say that $d$ possesses \textbf{exclusive mutability} via the immutable subsystem $S_\textup{immut}$ and the restricted update function $\Pi_{\textup{immut}}:I\rightarrow\bigcup_{i\in I}(\mathcal{I}(\tau(i))\times\mathcal{I}(S_{\textup{immut}})\rightarrow\mathcal{I}(\tau(i)))$ if for all $i\in I$, all $p\in\mathcal{I}(\tau(i))$, and all $s\in\mathcal{I}(S)$
\begin{align}
\begin{split}
(\Pi(i))(p,s)=(\Pi_{\textup{immut}}(i))(p,\textup{substate}(s,\upsilon)).
\end{split}
\end{align}
In other words, for a dynamics $d$ that is particle parallel under $\Pi$ to possess exclusive mutability, there must be another function $\Pi_{\textup{immut}}$ that can reproduce the results of $\Pi$ while only depending on quantities of the respective particle and those quantities of other particles that are immutable under $d$.
\end{definition}
The reasoning why this implies data-race freedom is the same as in the case of the borrow rules of Rust. If a particle quantity is immutable under a particle dynamics, there is no write access to it, which would be required for a data race. Conversely, if the quantity is mutated, its visibility is restricted to a single thread of execution, namely that of the particle the quantity is associated with. This makes a data race, which requires at least two threads to access the same value, impossible.
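As a concrete (if simplistic) Rust sketch of this situation, consider particles whose positions form the immutable subsystem while their velocities are mutated; the rayon crate is used for the parallel loop, and the toy ``interaction'' towards the mean position is only meant to show the borrowing pattern.
\begin{verbatim}
use rayon::prelude::*;

// Sketch of a particle-parallel dynamics with exclusive mutability: the
// positions are only read (immutable subsystem), while each parallel task
// mutates exactly one velocity entry. The borrow checker accepts this because
// the shared slice `positions` and the exclusively partitioned slice
// `velocities` never alias.
fn update_velocities(positions: &[[f64; 3]], velocities: &mut [[f64; 3]], dt: f64) {
    velocities.par_iter_mut().enumerate().for_each(|(i, v)| {
        // toy interaction: accelerate each particle towards the mean position
        let n = positions.len() as f64;
        let mut mean = [0.0; 3];
        for p in positions {
            for d in 0..3 {
                mean[d] += p[d] / n;
            }
        }
        for d in 0..3 {
            v[d] += dt * (mean[d] - positions[i][d]);
        }
    });
}
\end{verbatim}
Writing to the velocity of another particle inside the loop would require a second mutable alias of \texttt{velocities} and is rejected by the compiler, which mirrors the restriction formulated in Def.\ \ref{def:exclusive_mutability}.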
While in Def.\ \ref{def:exclusive_mutability} we have reached the goal of defining a general class of particle dynamics that we can parallelize without the risk of data races, its usefulness still remains to be proven. To show that real-world particle dynamics can be expressed within this class, we will look at a number of special cases of particle dynamics with exclusive mutability that relate directly to real-world applications.
First, we consider the special case of a particle dynamics with exclusive mutability where each evolved particle state only depends on one particle state in the original system, i.e., the particle dynamics only processes information local to a single particle.
\begin{definition}
Let $S=(I,\tau,\nu,G)$ be a particle system and let $d:\mathcal{I}(S)\rightarrow\mathcal{I}(S)$ be a particle dynamics that possesses exclusive mutability under $\Pi_\textup{immut}$. Then we call $d$ a \textbf{particle-local} dynamics if for all $ i\in I$ there is a function $f_i:\mathcal{I}(\tau(i)) \rightarrow \mathcal{I}(\tau(i))$ such that $\Pi_\textup{immut}(i)=((p,s)\mapsto f_{i}(p))$. In other words, we drop the dependency of $\Pi_{\textup{immut}}(i)$ on the immutable substate of the system for all particle types $\tau(i)$.
\end{definition}
This kind of dynamics implies that no information is shared between particles or, in a more physical interpretation, that the particles do not interact. These kinds of dynamics can therefore be found, e.g., in the simulation of ideal gases.
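In Rust, such a particle-local dynamics reduces to a plain parallel in-place map; the following sketch (using rayon, with an illustrative particle representation as position-velocity tuples) advances non-interacting particles ballistically.
\begin{verbatim}
use rayon::prelude::*;

// Sketch of a particle-local dynamics (free streaming): the update of each
// particle reads and writes only that particle's own state.
fn free_streaming(particles: &mut [([f64; 3], [f64; 3])], dt: f64) {
    particles.par_iter_mut().for_each(|(x, v)| {
        for d in 0..3 {
            x[d] += v[d] * dt;
        }
    });
}
\end{verbatim}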
In many real-world particle dynamics, however, we find that particles do interact with each other. To limit the computational expense of simulating these dynamics, the interaction of each particle is typically restricted to a small subset of the particle system. This means that information between particles is not shared over the whole system, but only between small groups of particles. To define this formally, we first introduce the analog of a power set for states of particle systems.
\begin{definition}
\label{def:particle_system_powerset}
Let $S=(I,\tau,\nu,G)$ be a particle system and $s=(\sigma,g)$ be a state of $S$. We define the \textbf{power set $\mathcal{P}(s)$ of the particle-system state} $s$ as the set of all particle-system states\footnote{Note that in general $s'$ is not a state of $S$ in this context.} $s'=(\sigma',g)$ with $\sigma':I\rightarrow\mathcal{M}(\mathcal{I}(\mathfrak{P}))$ which fulfill the condition that for all $i \in I$ it holds that $\sigma'(i)$ is a submultiset of $\sigma(i)$. In physical terms, this means that all particles found in $s'$ appear in the same state in $s$, but not vice versa.
\end{definition}
Definition \ref{def:particle_system_powerset} allows us to formalize the idea that calculating new particle states may not require knowledge of the entire immutable substate of the particle system, but only of a (potentially small) selection of particles from this state.
\begin{definition}
Let $S=(I,\tau,\nu,G)$ be a particle system and let $d:\mathcal{I}(S)\rightarrow\mathcal{I}(S)$ be a particle dynamics that possesses exclusive mutability via the immutable subsystem $S_\textup{immut}$ and the restricted update function $\Pi_{\textup{immut}}$. Furthermore, let $\gamma_{ij}:\mathcal{I}(\tau(i))\times\mathcal{I}(\tau(j))\times G\rightarrow\mathbb{B}$ for all $i,j\in I$ be a family of Boolean predicates that encode group association between particles. Then we call $d$ a \textbf{group-local} dynamics under $\Pi_g$ with respect to $\gamma_{ij}$ if there is a function $\Pi_{g}:I\rightarrow\left(\bigcup_{i\in I}(\mathcal{I}(\tau(i))\times\{\mathcal{P}(s)|s\in\mathcal{I}(S_{\textup{immut}})\}\rightarrow\mathcal{I}(\tau(i)))\right)$ such that
\begin{align}
&(\Pi_{\textup{immut}}(i))(p,(\sigma_{\textup{immut}},g)) \nonumber\\
&=(\Pi_{g}(i))(p,j\mapsto\textup{select}(\sigma_{\textup{immut}}(j),p'\mapsto\gamma_{ij}(p,p',g)),g).
\end{align}
In other words, we restrict the dependencies of $\Pi_{\textup{immut}}$ such that for each particle only information from other particles for which $\gamma_{ij}$ indicates group association is necessary to calculate the new state of the particle.
\end{definition}
An important detail in this definition is the fact that not just the evolved state, but also group association, must be decidable based on the information in an immutable substate of the particle system to prevent data races. It should also be noted that every particle dynamics that possesses exclusive mutability is a group-local dynamics with respect to \textit{some} group predicate, e.g., in the case of a particle-local dynamics a group predicate $\gamma_{ij}(p,p',g)=0$ or in the case of unrestricted particle interaction $\gamma_{ij}(p,p',g)=1$. However, the formalism allows us to express two other relevant subclasses of particle dynamics with exclusive mutability besides particle-local dynamics.
On the one hand, we can have group-local dynamics where the group association of particles is determined by some form of particle identity.
\begin{definition}
Let $S=(I,\tau,\nu,G)$ be a particle system and let $d:\mathcal{I}(S)\rightarrow\mathcal{I}(S)$ be a particle dynamics that is group-local with respect to $\gamma_{ij}$. Furthermore, let $\textup{id}_{i}:\mathcal{I}(\tau(i))\rightarrow\mathbb{N}$ be a family of identifying functions for all $i\in I$ such that
\begin{align}
\forall &i\in I:\forall k\in\mathbb{N}:\forall(\sigma,g)\in\mathcal{I}(S):\nonumber\\
&|\textup{select}(\sigma(i),p\mapsto(\textup{id}_{i}(p)=k))|\leq1.
\end{align}
In other words, for every possible state of $S$ the function $\textup{id}_{i}$ must map every possible state of a particle of type $\tau(i)$ to a unique number.
We then call $d$ a \textbf{fixed-group local} particle dynamics if there is a function $\gamma_{\textup{group}}:\mathbb{N}^4\rightarrow\mathbb{B}$ such that $\gamma_{ij}(p,p',g)=\gamma_{\textup{group}}(i,\textup{id}_{i}(p),j,\textup{id}_{j}(p'))$. Conceptually, this means that the decision of group association does not need to inspect the whole state of each particle, but only the unique identifications produced by the function family $\textup{id}_{i}$ and the indices of the respective particle types.
\end{definition}
This class of particle dynamics is commonly used to enforce an association between particles that cannot be derived from within the simulation context and is therefore imposed on the system extrinsically. Typically, these kinds of associations between particles are also considered unbreakable within the context of the particle simulation, which gives rise to the name of this category of particle dynamics. The most notable examples of this are (fixed) chemical bonds. Notably, unlike many other molecular dynamics models, we make no assumption on the number of atoms involved in a bond; typically, groups of up to four atoms (dihedral angles) are considered in molecular dynamics.
On the other hand, we can also have dynamic group associations based on parts of the particle state that vary over the course of a many-particle simulation. Since particle interactions typically decay with the inter-particle distance, by far the most common case of this is a group predicate based on the distance between particles.
\begin{definition}\label{def:neighborhood_local_dynamics}
Let $S=(I,\tau,\nu,G)$ be a particle system and let $d:\mathcal{I}(S)\rightarrow\mathcal{I}(S)$ be a particle dynamics that is group-local with respect to $\gamma_{ij}$. Furthermore, let $r_\textup{cut}\in \mathbb{R}$ be a cutoff distance and $M \in \mathbb{B}^{|I| \times |I|}$ be a Boolean matrix indicating which particle types are allowed to interact. We then call $d$ a \textbf{neighborhood-local} dynamics if
\begin{align}
\gamma_{ij}(p,p',g)= M_{ij} \wedge \left\Vert \textup{pos}(p)-\textup{pos}(p')\right\Vert <r_{\mathrm{cut}}.
\end{align}
\end{definition}
In practice, one usually encounters a special case of group-local dynamics in the form of \textit{pairwise} particle interactions, e.g., pairwise forces between particles. These interactions represent a particular method of calculating the evolved state of each particle from the states of all particles interacting with it.
First, we find each particle that is interacting with the particle being updated. Then, we calculate a single quantity for each of these interaction partners using only the initial state of the particle being updated and the interaction partner. Finally, we use some reduction scheme to calculate the evolved state of the particle being updated from its initial state and all of the previously calculated pair quantities. In the example case of pairwise forces, this corresponds to first calculating the force for each pair of interacting particles and then summing these forces to obtain the net force for each particle. We can formalize this concept as follows.
\begin{definition}\label{def:pairwise_interaction}
Let $S=(I,\tau,\nu,G)$ be a particle system and let $d:\mathcal{I}(S)\rightarrow\mathcal{I}(S)$ be a particle dynamics that is group-local under $\Pi_g$. Furthermore, for $i,j\in I$ let $Q_{ij}$ be a family of physical quantity types, $\mu_{ij}:\mathcal{I}(\tau(i))\times\mathcal{I}(\tau(j))\rightarrow Q_{ij}$ be a function family to map pairs of particle states to physical quantities and $\rho_{i}:\mathcal{I}(\tau(i))\times\left(I\rightarrow\bigcup_{j\in I}\mathcal{M}(Q_{ij})\right)\rightarrow\mathcal{I}(\tau(i))$ be a family of reduction functions. We then call $d$ a \textbf{pairwise interaction} with the mapping functions $\mu_{ij}$ and the reduction functions $\rho_i$ if
\begin{align}
(\Pi_{g}(i))(p,\sigma_{g})=\rho_{i}\left(p, j\mapsto\textup{map}(\sigma_{g}(j),p'\mapsto\mu_{ij}(p,p'))\right).
\end{align}
\end{definition}
In practice, pairwise interactions are often also neighborhood-local, e.g., in the case of the popular Weeks-Chandler-Andersen potential for particle interactions.
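To make the combination of a neighborhood-local and a pairwise interaction concrete, the following Rust sketch computes a net force per particle as a sum of per-pair contributions within a cutoff; the soft repulsion used as the pair quantity, the brute-force neighbor search, and all names are illustrative simplifications (a real implementation would, e.g., use cell lists or neighbor lists).
\begin{verbatim}
use rayon::prelude::*;

// Sketch of a neighborhood-local pairwise interaction: for every particle the
// pair quantity (here a soft repulsive force) is computed against all other
// particles within the cutoff and reduced by summation. The positions form
// the immutable subsystem; each task writes only its own entry of `forces`.
fn pairwise_forces(positions: &[[f64; 3]], forces: &mut [[f64; 3]],
                   r_cut: f64, eps: f64) {
    forces.par_iter_mut().enumerate().for_each(|(i, f)| {
        *f = [0.0; 3];
        for (j, pj) in positions.iter().enumerate() {
            if i == j {
                continue;
            }
            let rvec = [
                positions[i][0] - pj[0],
                positions[i][1] - pj[1],
                positions[i][2] - pj[2],
            ];
            let r = (rvec[0] * rvec[0] + rvec[1] * rvec[1] + rvec[2] * rvec[2]).sqrt();
            if r < r_cut {
                // pair quantity: soft repulsion along the distance vector
                let scale = eps * (1.0 - r / r_cut) / r;
                for d in 0..3 {
                    f[d] += scale * rvec[d]; // reduction: summation
                }
            }
        }
    });
}
\end{verbatim}
Note that this formulation evaluates every pair twice; exploiting Newton's third law instead requires writing to two particles per pair, which is precisely what the target specifiers of the domain-specific language in Sec.\ \ref{sec:DSL} express and what an implementation then has to synchronize explicitly.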
When trying to map real-world particle simulations into the models developed in this section, we typically cannot find a suitably specialized classification of the \textit{entire} particle simulation for parallel execution. However, as stated before, real world particle simulations are often composed of multiple sub-dynamics which can be expressed as, e.g., particle-local dynamics or pairwise interactions. In these cases, one can use the parallel schemes developed here as long as there is a synchronization barrier between each sub-dynamics.\footnote{This is a result of the third condition for data races demanding that the memory accesses must be unsynchronized.}
An overview of the different kinds of particle dynamics presented in this section and their relationship towards one another can be found in Fig.\ \ref{fig:dynamics_levels}.
\begin{figure}
\includegraphics*[]{fig1.pdf}
\caption{Overview of the different kinds of particle dynamics characterized in this article. The colors indicate how data-race freedom can be achieved in an implementation of this model, if at all possible. The green areas are inherently free from data races by their formulation in this article. The orange region indicates possible data-race freedom by choosing (Safe) Rust (or any language with equivalent safety guarantees) as an implementation language, while the red region indicates the possibility of particle dynamics for which neither our models nor the language model of Safe Rust can make statements regarding data-race freedom.}
\label{fig:dynamics_levels}
\end{figure}
\section{A domain-specific language for data-race-free particle simulations}\label{sec:DSL}
In this section, we present a practical application of the results of Sec.\ \ref{sec:data_race_free_simulations} in the form of a domain-specific language for many-particle simulations that can be parallelized while guaranteeing the absence of data races. First, we define the grammar of this language in Sec.\ \ref{subsec:grammar} before discussing how to ensure deadlock freedom in Sec.\ \ref{subsec:dead_lock_elimination} and finally presenting a number of example simulation codes to demonstrate the practicability of our approach in Sec.\ \ref{subsec:code_examples}.
\subsection{Language grammar}\label{subsec:grammar}
The purpose of this subsection is to translate the abstract models developed in Sec.\ \ref{sec:data_race_free_simulations} into the practical form of a (minimalistic) programming language for expressing concrete particle dynamics in a fashion that can then be compiled into a runnable program. Much like the concepts in Sec.\ \ref{sec:data_race_free_simulations} lean heavily on the borrow rules of Rust, here we lean on Rust also on a syntactical level. To describe this formally, we use a slightly modified version of the EBNF notation (see Appendix \ref{appendix_ebnf} for an overview) to define the grammar of the language. It should be noted that the language we describe in the following is not meant to be stand-alone, but rather to cooperate with some kind of environment hosting it. This greatly simplifies the design as we can delegate tasks requiring general computation interfaces to this hosting environment.
It is useful to define some common syntax elements first before getting to the more unique design points of the language. For one, we define the following basic non-terminals by regular expressions in the syntax of \textit{Perl compatible regular expressions} (PCRE) \cite{PCRE}
\begin{verbatim}
Identifier = [A-Za-z][A-Za-z0-9_]*
Nat = 0|([1-9][0-9]*)
Integer = [+-](0|([1-9][0-9]*))
Float = [+-]?[0-9]+(\.[0-9]*([eE][+-]?[0-9]+)?)?
\end{verbatim}
which in order of appearance denote identifiers (i.e., symbol names), natural numbers, integer numbers, and floating point numbers.
Another common syntax element is that of an \textit{expression}, which we define by the EBNF rule
\begin{verbatim}
Expression =
Expression ("+" | "-" | "*" | "/") Expression |
Identifier "(" (Expression ** ",") ")" |
Identifier ("." Identifier)? ("[" Nat "]")? |
"[" (Expression ** ",") "]" |
Float | Integer
\end{verbatim}
where the productions (in order of appearance) encode binary arithmetic expressions, function calls, variables (including options for namespacing and static indexing), array literals, and primitive number literals. In terms of semantics, the only noteworthy aspect of expressions is that for arithmetic expressions involving array types both vector addition and scalar multiplication are defined. Otherwise the typical semantics for imperative languages apply.
Finally, we define types via the EBNF rule
\begin{verbatim}
Type = "i64" | "f64" | "[" Type ";" Nat "]" |
"position"
\end{verbatim}
which in order of appearance correspond to 64-bit integer numbers, 64-bit floating point numbers, array types of fixed length, and the special \texttt{position} type which is equivalent to an array type of floating point numbers with a length corresponding to the dimensionality of the system. The \texttt{position} type might seem redundant, but compared to the general array type it encodes additional semantic meaning as it can be used to mark a quantity as the distinguished position quantity of a particle type.
From these syntax elements, we first construct a representation of the global-state space introduced in Def.\ \ref{def:particle_system}. Unless stated otherwise, the syntax constructs introduced in the following are all top-level constructs, i.e., can coexist simply separated by whitespace in a single unit of compilation. While for considerations of data-race freedom modeling the global state as a single lumped vector was sufficient, for practical purposes it is more sensible to allow splitting this vector into multiple distinct physical quantities. These can be expressed syntactically as a top-level construct similar to that of global variables:
\begin{verbatim}
GlobalStateMember = "global" Identifier
":" "mut"? Type ";"
\end{verbatim}
Here, the optional \texttt{mut} keyword indicates whether this quantity of the global state can be changed after compilation.\footnote{This allows certain optimization strategies such as constant folding or rewriting of expensive operations like divisions as a sequence of cheaper operations like additions and bitwise shifts.}
While the syntax of the global state we propose here is similar to that of global variables, the semantics is not. As discussed in Sec.\ \ref{subsec:particle_dynamics}, the evolution of the global state can be better described by a general-purpose language. Therefore, we delegate the task of initializing and mutating the global state to the hosting environment.
Next, we look at the state space spanned by each particle type, which consists of an indexed set of physical quantities. For practical purposes, it is preferable to use symbolic names over numeric indices to distinguish both the quantities within a particle type and the particle types within a particle system. With this we can use a syntactical construct similar to heterogeneous product types (i.e., \texttt{struct} types) in Rust:
\begin{verbatim}
ParticleDefinition = "particle" Identifier
"{" (ParticleQuantity ** ",") "}"
ParticleQuantity = Identifier ":" ("mut")? Type
\end{verbatim}
The optional \texttt{mut} qualifier is again a concession to possible optimizations such as eliding redundant storage of quantities that are constant for all particles of the same type. It should be noted that while the syntax is similar to \texttt{struct} types in Rust, the semantics is very different as the definition of a particle type neither introduces a new data type nor makes any statements about the layout of particle data in memory.
With the definitions of the static state structure being complete, we proceed with the syntactic encoding of particle dynamics.
As discussed at the end of Sec.\ \ref{subsec:particle_dynamics}, real-world particle dynamics typically consist of a loop containing several of the different classes of dynamics discussed there. To express this syntactically, a simulation \textit{schedule} is required that defines the content of each loop iteration for each particle type. It is also useful to allow filtering of dynamics based on the loop counter (i.e., the simulation step number), e.g., to switch certain parts of the dynamics on and off at different times or to schedule evaluation steps that calculate statistical data from the state of the particle system. Overall, we propose the following syntax rules:
\begin{verbatim}
Simulation = "simulation" Identifier
"{" ScheduleFilter* "}"
ScheduleFilter =
"once" Nat "{" ParticleFilter "}" |
"step" StepRange "{" ParticleFilter "}"
StepRange = Nat | Nat? ".." Nat? ("," Nat)?
ParticleFilter = "particle" Identifier
"{" (Statement ** ";") "}"
\end{verbatim}
A simulation is thus described as a two-layered nested structure of \textit{statement} blocks where the first layer filters by simulation step and the second layer filters by particle type. For the simulation step filters, up to three numbers are required to express a step range where starting at step $a$ and ending at step $b$ (not inclusive) every $n$-th step is taken. We suggest that a single number is to be interpreted as a value of $n$ with $a=0$ and $b=\infty$, while the notation with up to three numbers should indicate $a$, $b$ and $n$ in this order with defaults of $a=0$, $b=\infty$, and $n=1$ being used if the respective number is omitted. The particle filter construct should express that the contents of its block of statements only applies to the particle type indicated by its symbolic name. To decide whether a simulation should terminate, control should (temporarily) be passed back to the hosting environment after each iteration of the simulation loop.
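One possible reading of these step-filter semantics is sketched below in Rust; the function name and the treatment of an unbounded end step via \texttt{Option} are our own choices.
\begin{verbatim}
// Sketch of the step-filter semantics: a step matches the range `a..b,n`
// if it lies in [a, b) and is offset from a by a whole multiple of n.
fn step_matches(step: u64, a: u64, b: Option<u64>, n: u64) -> bool {
    step >= a && b.map_or(true, |end| step < end) && (step - a) % n == 0
}
\end{verbatim}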
Nested within the simulation schedule, we use a set of statements to both express particle-local dynamics and provide a mechanism to ``escape'' the restrictions of the syntax we propose to the more expressive host environment. To this end, we define the following syntax rules for statements:
\begin{verbatim}
Statement =
"let" Identifier ":" Type "=" Expression |
Identifier ("[" Nat "]")? "=" Expression |
"call" Identifier |
"update" Identifier ("." Identifier)?
\end{verbatim}
The first two rules for statements are used to implement particle-local dynamics, with the first one allowing one to introduce new (temporary) variables and the second one allowing one either to write a value to one of the previously declared variables or to mutate a particle quantity of the particle type of the current particle filter. The optional natural number in square brackets can be used to assign values to individual fields of vector variables or quantities.
Statements beginning with the keyword \texttt{call} indicate that control should be handed over to the hosting environment temporarily at this point in the simulation. This is likely to be implemented as a form of \textit{callback}, i.e., a function pointer from the hosting environment registered under a symbolic name which can then be referenced in this kind of statement after the \texttt{call} keyword.
Finally, statements beginning with the \texttt{update} keyword implement general group-local interactions. Since interactions can be defined between two or more particle types, it is sensible to place these syntax constructs outside the simulation schedule (which is filtered by particle types) and reference them by a symbolic name after the \texttt{update} keyword. In this article, we only develop syntax for neighborhood-local, pairwise interactions and fixed-group local dynamics with a constant group size known at compile time as these are by far the most common primitives for molecular dynamics in particular and can be implemented with a relatively simple grammar. Both of these can be represented as follows:
\begin{verbatim}
PairInteraction =
"pair interaction" Identifier "("
Identifier ":" Identifier ","
Identifier ":" Identifier
")" "for" ("|" Identifier "|" "=")?
Identifier "<" Expression
InteractionBody
FixedGroupInteraction =
"fixed group interaction" Identifier "("
(Identifier ":" Identifier) ** ","
")"
InteractionBody
\end{verbatim}
Here, the dynamics is first bound to a name following the \texttt{pair interaction} or \texttt{fixed-group interaction} keywords, respectively. After this, the interacting particle types are specified: exactly two for pairwise interactions and an arbitrary number for fixed-group interactions. This is done with a syntax similar to function parameters in Rust as a pair of identifiers per particle type, where the first identifier defines a namespace for the quantities of each of the types and the second identifier specifies the particle type by its symbolic name. This again represents a syntactic similarity between particle types and structured data types even though the semantics of both are very different. Binding a namespace to each particle involved in the dynamics is necessary as there is no guarantee that quantity names are unique between different particle types.
In the case of a pairwise interaction, following the particle type specification, the \texttt{for} keyword introduces the information required for defining the particle neighborhood, which, as discussed in Def.\ \ref{def:neighborhood_local_dynamics}, is defined by a cutoff distance. This distance is given as an expression after the symbol \texttt{<}. An implementation has to verify that this expression evaluates to a numeric value and only depends on constant quantities from the global state. Before this, one or two identifiers have to be placed in a notation resembling the equation $\left\Vert \vec{r}\right\Vert =r<r_{\text{cut}}$. Both of these identifiers serve the purpose of binding symbolic names to the distance between two interacting particles, with the first, optional identifier denoting the distance \textit{vector} and the second identifier denoting the \textit{scalar} distance. Binding the distance (vector) to a name in such a prominent position is motivated by the fact that many particle interactions depend on the inter-particle distance to determine the interaction strength. In the case of periodic boundary conditions, this also allows one to differentiate between the position of the particle in the simulation domain and the distance vector that might correspond to one of the images of the particle.
The last part of both the definition of a pairwise interaction and a fixed-group local dynamics is a section containing the information on what calculations are involved in the respective dynamics and which quantities are to be mutated. We propose the following syntax rules for this task:
\begin{verbatim}
InteractionBody = "{"
("common" "{"
(InteractionStatement ** ";")*
"}")?
InteractionQuantity*
"}"
InteractionQuantity =
"quantity" Identifier
"-[" ReductionMethod "]->" TargetSpecifier "{"
(InteractionStatement ** ";")+ ";"
Expression
"}"
InteractionStatement =
"let" Identifier ":" Type "=" Expression |
Identifier ("[" Nat "]")? "=" Expression
ReductionMethod = "sum" | "max" | "min"
TargetSpecifier =
(("-")? Identifier "." Identifier)
** ("=" | ",")
\end{verbatim}
Here, an \texttt{InteractionBody} is composed of one optional \texttt{common} block as well as multiple definitions of \textit{interaction quantities}, i.e., the physical quantities calculated for every pair or group of interacting particles. The purpose of the \texttt{common} block is to allow the definition of variables used for the calculation of more than one interaction quantity. The definition of interaction quantities follows Def.\ \ref{def:pairwise_interaction} very closely.
Following the symbolic name of the interaction quantity is a specification of the reduction method, i.e., the equivalent to the function $\rho_i$ in Def. \ref{def:pairwise_interaction} which transforms the individual results of all pairwise interactions of a particle into a single physical quantity. Here, we suggest only three common reduction operations as examples, namely summing all pairwise interaction results or taking either the minimum or maximum of all interaction results.
After the specification of the reduction method follows a specification of the target particle quantities that this interaction quantity should be written to. The syntax for the target quantities can be seen as a variation of the pattern-matching syntax of Rust. Here, this pattern is matched against the result of the last expression in the block following the target specification. The target specification itself is composed of multiple namespaced particle quantities separated either by equality signs or commas. Here, an equality sign indicates that a single result from the calculation of the interaction quantity is written to multiple particle quantities, while a comma indicates that the result is an array which is to be split into its elements which are then written to distinct particle quantities. Both variants have the option of negating the result, which is a convenience for force-like quantities which are subject to Newton's third law as these can then simply be expressed as \texttt{p1.F = -p2.F} for two particle namespaces \texttt{p1} and \texttt{p2} each of which contains a particle quantity \texttt{F}. Notably, target specifiers can contain both target quantities separated by equality signs and targets separated by commas.\footnote{This is only useful for fixed-group local dynamics as pairwise interactions cannot have more than two targets.} For example, an interaction quantity for an interaction of three particles with namespaces \texttt{p1}, \texttt{p2}, and \texttt{p3}, respectively, and a particle quantity \texttt{F} in each namespace can have the target specifier \texttt{p1.F = -p3.F, p2.F}. This indicates that the calculation of the interaction quantity produces a vector of two values, the first of which is written to \texttt{p1.F} and in negated form to \texttt{p3.F} while the second one is written to \texttt{p2.F}.
Finally, after the target specifier, the actual calculation of each interaction quantity follows in the form of a block of statements terminated by an expression that determines the value of the interaction quantity. In this context, the statements are limited to variable definitions and assignments. In all expressions appearing in this section, it is essential to respect the constraints of Def.\ \ref{def:exclusive_mutability}, i.e., to forbid access to any particle quantities that are mutated by this interaction, namely all quantities appearing as part of a target specification in this interaction.
It should be noted that the fact that each interaction quantity can only be written to a single particle quantity is a restriction compared to the more general reduction function $\rho$ in Def.\ \ref{def:pairwise_interaction}. The reason for this is the necessity for an implementation to statically analyze which quantities of each particle are immutable under the interaction and therefore visible in the computation of the interaction quantities. With the grammar above, this task becomes trivial as the programmer must explicitly indicate which particle quantities are mutated. A similar mandatory opt-in for mutability can also be found in the design of Rust.
For the two data-race-free, particle-number-altering dynamics -- namely independently initializing insertions and independently selecting deletions -- it is necessary to implement them in particularly tight integration with the hosting environment as they often require general-purpose functionality (such as I/O operations to load data for initialization or random number generation for probabilistic deletion of particles). Therefore, we do not propose a special syntax for these dynamics but assume that they can be implemented by a suitable mechanism in the hosting environment (e.g., function pointers for dynamic dispatch or generics for static dispatch).
One aspect a reader might find surprising in the syntax developed in this section is the lack of any control-flow structures such as conditionals or loops. The primary reason for this is the fact that branching control flow makes static program analysis much harder, if not impossible. In particular, for reasons explained in the following section, the sequence of elementary particle dynamics for each particle type must be known at compile time to guarantee the absence of deadlocks. Allowing conditional control flow would require verifying all possible control-flow paths, which is impractical as their number grows exponentially with the number of control-flow statements. This problem could be worked around by preventing conditional control flow from ``leaking'' beyond the boundaries of each elementary particle dynamics, e.g., by using syntax elements such as ternary operators to allow branching control flow only on the level of expressions. However, even in this case branching code can still be problematic as it interferes with optimization strategies such as vectorization and adds complexity to the optimization tasks due to effects of branch prediction. Therefore, for very complex branching code it might be more viable to delegate this part of the dynamics to the hosting environment instead.
\subsection{Eliminating interaction deadlocks}\label{subsec:dead_lock_elimination}
As discussed at the end of Sec.\ \ref{subsec:particle_dynamics} for particle dynamics composed of multiple (sub-)dynamics, it is necessary to add synchronization barriers between each sub-dynamics for each particle type. As interactions can affect more than one particle type, this, however, also synchronizes the simulation schedules of multiple particle types at certain points. So far, we implicitly assumed that an implementation will ensure that these synchronization points match up, i.e., that if an interaction between a particle type \texttt{A} and a particle type \texttt{B} is implied in the simulation schedule of type \texttt{A} there is a corresponding one found in the schedule for type \texttt{B}. This, however, brings with it the problem of \textit{deadlocks}, i.e., multiple interactions blocking each other from progressing in a dependency cycle. For example, consider the following simulation:
\begin{verbatim}
particle A { /* ... */ }
particle B { /* ... */ }
particle C { /* ... */ }
pair interaction AB(a: A, b: B) for r < 1.0 {}
pair interaction BC(b: B, c: C) for r < 1.0 {}
pair interaction CA(c: C, a: A) for r < 1.0 {}
simulation DeadlockExample {
step 1 {
particle A {
update AB;
update CA;
}
particle B {
update BC;
update AB;
}
particle C {
update CA;
update BC;
}
}
}
\end{verbatim}
This simulation will stall indefinitely as each simulation schedule prescribes a different interaction to be performed first.
To detect the presence of deadlocks in a simulation, we can use a graph-based intermediate representation of the simulation schedule. The nodes of this graph represent the interactions in the simulation schedule, complemented by two special nodes representing the start and the end of each simulation step. Then, for each particle type a directed path is added beginning at the start node and connecting the interaction nodes in the order they appear in the simulation schedule for this particle type. Each of these paths is then terminated at the end node. Such an \textit{interaction graph} is presented in Fig.\ \ref{fig:deadlock} for the previous example.
Since interaction deadlocks are created by a cyclic dependency of interactions on one another, a simulation is deadlock-free if the \textit{interaction graph} is acyclic. This can be verified by standard methods such as Tarjan's algorithm \cite{Tarjan1972}.
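A minimal sketch of such a check in Rust is given below; instead of Tarjan's algorithm it uses Kahn's algorithm for topological sorting, which equally detects cycles, and the adjacency-list representation of the interaction graph is an illustrative choice.
\begin{verbatim}
use std::collections::VecDeque;

// Sketch of deadlock detection on the interaction graph: the schedule is
// deadlock-free exactly if the directed graph admits a topological order,
// i.e., contains no cycle. `adj[u]` lists the successors of node u.
fn is_deadlock_free(adj: &[Vec<usize>]) -> bool {
    let n = adj.len();
    let mut indegree = vec![0usize; n];
    for edges in adj {
        for &v in edges {
            indegree[v] += 1;
        }
    }
    let mut queue: VecDeque<usize> = (0..n).filter(|&v| indegree[v] == 0).collect();
    let mut visited = 0;
    while let Some(u) = queue.pop_front() {
        visited += 1;
        for &v in &adj[u] {
            indegree[v] -= 1;
            if indegree[v] == 0 {
                queue.push_back(v);
            }
        }
    }
    visited == n // all nodes ordered, hence the graph is acyclic
}
\end{verbatim}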
\begin{figure}
\includegraphics[]{fig2.pdf}
\caption{Example of an interaction graph of a particle simulation of three particle types. The color of the arrows indicates the particle type to which the simulation schedule for this directed edge belongs. The clearly visible cycle in the directed graph means that this simulation contains a deadlock and is therefore invalid.}
\label{fig:deadlock}
\end{figure}
\subsection{Examples}\label{subsec:code_examples}
In this section, we provide examples of common simulations to illustrate the application of the previous results to real-world particle dynamics.
\subsubsection*{Newtonian dynamics}
First, we present in Listing \ref{code:velocityverlet} an example of a particle system subject to Newtonian dynamics. We employ the commonly used velocity-Verlet scheme to solve these dynamics numerically. The particle system itself is composed of freely moving particles interacting via the Weeks-Chandler-Andersen (WCA) potential \cite{Weeks1971}
\begin{equation} \label{eq:wca}
U(r)=\begin{cases}
4\varepsilon \left[ \left( \frac{\sigma}{r} \right)^{12} - \left( \frac{\sigma}{r} \right)^{6} \right] + \varepsilon & \mbox{if } r \leq 2^{1/6} \sigma, \\
0 & \mbox{else}
\end{cases}
\end{equation}
to model particle interactions. Here, $r$ is the particle distance, $\varepsilon$ is an energy scaling factor, and $\sigma$ is the diameter of the particles.
In the implementation, we first make use of the global state to define the time-step size, the particle diameter $\sigma$, and the energy scaling factor $\varepsilon$ of the WCA potential as constants of the simulation. This allows us to insert the concrete values for these constants from the hosting environment without having to change the domain-specific simulation code.
Next, we define a particle type representing a spherical particle without orientational degrees of freedom. The physical state of the particle is fully described by its position, velocity, and mass, the latter of which is considered to be constant. Additionally, we define particle quantities for the net force acting on the particle and the total potential energy associated with the particle. These two quantities are not necessary to describe the physical state of the particle, but rather store the physical quantities resulting from particle interactions.
Afterwards, we define a pairwise particle interaction between two particles of the previously defined particle type with the cutoff distance defined by the WCA potential. After defining a variable for the common term $(\sigma/r)^6$, we define two interaction quantities. The first quantity is the repulsive force given by $-\vec{\nabla}U$, which is written to the force quantity of each particle with a negation for one of the particles to account for Newton's third law. The second quantity calculates the potential energy of the interaction evenly split between both particles. This allows one to calculate the total potential energy of the system by summing over all particles.
Finally, we define a simulation schedule for the \texttt{Spherical} particle type in combination with the velocity-Verlet integration scheme. Here, the particle dynamics is composed of a particle-local dynamics mutating position and velocity, a pairwise interaction that mutates forces and potential energy, and then another particle-local dynamics mutating velocity.
Overall, this example shows that even though the domain-specific language has no in-built support for any concrete numerical scheme, it can still express real-world simulations relatively concisely. For example, the entire velocity-Verlet scheme only takes up four lines of code.
\begin{lstlisting}[float=*,label={code:velocityverlet},
caption={Example of a Newtonian dynamics simulation of a system of spherical, monodisperse particles subject to a repulsive force based on the Weeks-Chandler-Andersen potential in three spatial dimensions. Here, the velocity-Verlet scheme is used to integrate the equations of motion. The \texttt{pow} function used here is defined as the result of taking the first argument to the power of the second argument. In addition to each particle trajectory, this simulation also calculates the potential energy of each particle by splitting the potential energy of each particle interaction evenly between particles.}
,basicstyle=\ttfamily]
global DT: f64; // Velocity-Verlet time step
global SIGMA: f64; // Particle diameter
global EPSILON: f64; // WCA scaling factor
// Representation of spherical particles
particle Spherical {
x : mut position,
v : mut [f64; 3], // Velocity of particle
F : mut [f64; 3], // Total force acting on particle
U : mut f64, // Total potential energy of particle
mass: f64 // Particle mass
}
// Pairwise repulsion of particles via the WCA potential
pair interaction Repulsion (p1: Spherical, p2: Spherical)
for |rvec| = r < pow(2, 1.0 / 6.0) * SIGMA
{
common {
// Sixth power of the particle diameter over the inter-particle distance
let relative_distance_inverse: f64 = pow(SIGMA/r,6);
}
// Force derived from WCA potential
quantity force_WCA -[sum]-> p1.F = -p2.F {
let scale: f64 = -4.0 * EPSILON * (
12.0 * relative_distance_inverse * relative_distance_inverse
- 6.0 * relative_distance_inverse) / (r * r);
scale * rvec
}
// Potential energy from WCA potential (split evenly between particles)
quantity potential_WCA -[sum]-> p1.U = p2.U {
0.5 * (4.0 * EPSILON * (
relative_distance_inverse * relative_distance_inverse
- relative_distance_inverse
) + EPSILON)
}
}
// Velocity-Verlet-based simulation of Newtonian particle dynamics
simulation VelocityVerlet {
step 1 {
particle Spherical {
v = v + 0.5 * F / mass * DT;
x = x + v * DT;
update Repulsion;
v = v + 0.5 * F / mass * DT;
}
}
}
\end{lstlisting}
\subsubsection*{Harmonic valence angle potential}
Next, we look at the example in Listing \ref{code:angularbond} which presents an implementation of a chemical bond between three atoms with a harmonic potential for the valence angle, i.e.,
\begin{equation}
U(\theta) = k(\theta - \theta_0)^2,
\end{equation}
where $\theta$ is the valence angle, $k$ is the bond strength, and $\theta_0$ is the equilibrium valence angle. This is an important example as potentials depending on bond angles are a very common archetype in molecular dynamics.
In this example, we reuse the same particle type as in Listing \ref{code:velocityverlet} for spherical particles and use a similar style of passing the constants $k$ and $\theta_0$ to the simulation via global state quantities. After this we define a fixed-group interaction for triples of particles. This interaction defines a single interaction quantity that writes three different forces to each of the three particles involved in the interaction. For the sake of clarity, we use the formulation of Ref.\ \cite{Monasse2014} for calculating the forces on each atom. This is algebraically equivalent, but numerically less efficient than the typical formulation found in molecular dynamics software packages for (harmonic) angular bonds.
Particularly noteworthy is the \texttt{distvec} function in this example. Typically, one would calculate the distance vector between two particles by a simple vector difference of the position vectors of each particle. However, this method is problematic if periodic boundary conditions are used, in which case one has to decide which image of each particle is used for the distance calculation. For this purpose, we propose that the implementation support the pseudo function \texttt{distvec}, which takes two particle namespaces and returns the distance vector for the particles they represent. In the case of periodic boundary conditions, we can define the distance vector between particles as the vector between the position of the first particle and the position of the closest image of the second particle (\textit{minimum image convention} \cite{Deiters2013}).
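For a cubic periodic box, a possible realization of \texttt{distvec} under the minimum image convention is sketched below in Rust; the function name and the restriction to a cubic box of edge length \texttt{l} are our own simplifying assumptions, and an actual implementation would derive this calculation from the boundary conditions of the system.
\begin{verbatim}
// Sketch of `distvec` under the minimum image convention for a cubic box of
// edge length l: every component of the naive difference is wrapped into
// the interval [-l/2, l/2].
fn distvec_minimum_image(x1: [f64; 3], x2: [f64; 3], l: f64) -> [f64; 3] {
    let mut d = [0.0; 3];
    for k in 0..3 {
        d[k] = x1[k] - x2[k];
        d[k] -= l * (d[k] / l).round(); // shift by a whole number of box lengths
    }
    d
}
\end{verbatim}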
Overall, this example shows one of the more common applications of fixed-group interactions in the form of a potential depending on bond angles. It can be extended easily to larger particle groups, e.g., to implement potentials depending on dihedral angles.
\begin{lstlisting}[float=*,label={code:angularbond},
caption={Definitions of a particle type for spherical particles and a three-particle interaction emulating a chemical bond with a harmonic potential for the valence angle in three spatial dimensions. Typically, this interaction would be combined with an interaction related to the bond \textit{length}, which is omitted here for the sake of brevity. In the calculation of the interaction quantity \texttt{force\_harmonic}, we use the \texttt{length} function to calculate the length of a vector, the \texttt{acos} function to calculate the arccosine of its argument, the \texttt{norm} function to normalize a vector, and the \texttt{dot} and \texttt{cross} functions to calculate the dot and cross products of two vectors, respectively. We also use a special notation to determine the distance vector between two particles by means of a pseudo function \texttt{distvec}, which we assume the implementation will substitute with an appropriate calculation given the boundary conditions of the system. The calculation itself closely follows Ref.\ \cite{Monasse2014}.}
,basicstyle=\ttfamily]
global K: f64; // Bond strength
global THETA_0: f64; // Bond equilibrium angle
particle Spherical {
x : mut position,
v : mut [f64; 3],
F : mut [f64; 3],
mass: f64
}
fixed-group interaction HarmonicAngle (p1: Spherical, p2: Spherical, p3: Spherical) {
quantity force_harmonic -[sum]-> p1.F, p2.F, p3.F {
// Distance vectors
let r_21: [f64; 3] = distvec(p2,p1);
let r_23: [f64; 3] = distvec(p2,p3);
let d_21: f64 = length(r_21);
let d_23: f64 = length(r_23);
// Bond angle
let theta: f64 = acos(dot(r_21,r_23) / (d_21 * d_23));
// Directions of forces on p1 and p2
let perp_1: [f64; 3] = norm(cross( r_21, cross(r_21,r_23)));
let perp_3: [f64; 3] = norm(cross(-r_23, cross(r_21,r_23)));
// Forces
let f_1: [f64; 3] = -2.0 * K * (theta - THETA_0) / d_21 * perp_1;
let f_3: [f64; 3] = -2.0 * K * (theta - THETA_0) / d_23 * perp_3;
let f_2: [f64; 3] = -f_1 - f_3;
[f_1, f_2, f_3]
}
}
\end{lstlisting}
\subsubsection*{Brownian dynamics}
The final example is shown in Listing \ref{code:eulermaruyama}, which demonstrates how simple Brownian dynamics can be implemented in the form of the Euler-Maruyama scheme. Assuming spherical particles, the numerical scheme reads
\begin{equation}\label{eq:brownian}
\vec{r}_i(t+\Delta t) = \vec{r}_i(t) + \frac{\Delta t}{\gamma} \vec{F}_i (\vec{r}_i (t), t) + \sqrt{\frac{2k_\mathrm{B}T\Delta t}{\gamma}} \vec{\eta}_i,
\end{equation}
where $\vec{r}_i(t)$ is the position of the $i$-th particle at time $t$, $\Delta t$ is the time-step size, $\gamma$ is the friction coefficient, $\vec{F}_i (\vec{r}_i (t), t)$ is the total force acting on the $i$-th particle at time $t$, $k_\mathrm{B}$ is the Boltzmann constant, $T$ is the system temperature, and $\vec{\eta}_i$ is a Gaussian white noise vector with zero mean and a variance of one.
Again, the implementation begins with a declaration of the global state quantities, which in this case consist of $\Delta t$, $\gamma$, and $T$. Additionally, we use the global-state quantities to pass $k_\mathrm{B}$ as a symbolic value for the sake of brevity in this example. The particle type is notably shorter compared to the earlier examples since particle mass and instantaneous velocity are irrelevant in the case of Brownian dynamics\footnote{More precisely, inertial forces are considered negligible compared to friction forces.}. Finally, in the definition of the simulation schedule, we find an almost identical expression to Eq.\ (\ref{eq:brownian}).
This example demonstrates how the model of particle dynamics we propose can be molded to the exact requirements of a numerical scheme. In particular, one can define particle types with just the degrees of freedom a particle has under a given numerical scheme, thus saving resources. For example, in Brownian dynamics there is no need to store the velocity of each particle. Of course, this places some responsibility on the programmer to implement a \textit{numerically} sound particle dynamics, as the domain-specific language only prevents the user from formulating particle simulations that are unsound in the sense that they exhibit undefined behavior (e.g., data races).
\begin{lstlisting}[float=*,label={code:eulermaruyama},
caption={Example of a Brownian dynamics simulation of a system of spherical particles. The exact definitions of the forces acting on the particles are omitted for the sake of brevity. This example makes use of the \texttt{random\_normal} function, which produces a single sample from a Gaussian white noise of mean zero and variance one, as well as the \texttt{sqrt} function, which calculates the square root of its argument.}
,basicstyle=\ttfamily]
global DT: f64; // Euler-Maruyama time step
global GAMMA: f64; // Friction coefficient
global TEMP: f64; // System temperature
global K_B: f64; // Boltzmann constant
particle Spherical {
x : mut position,
F : mut [f64; 3]
}
simulation EulerMaruyama {
step 1 {
particle Spherical {
// Interaction updates and external forces omitted for brevity
// [...]
let normal_noise: [f64; 3] = [random_normal(), random_normal(), random_normal()];
x = x + DT / GAMMA * F + sqrt(2.0 * K_B * TEMP * DT / GAMMA) * normal_noise;
}
}
}
\end{lstlisting}
\section{Conclusions}\label{sec:conclusions}
We have developed a formalization of the concept of generic particle systems and, based on this, a very general definition of particle dynamics. This was done in the context of safe parallelism, i.e., the avoidance of data races, to find a safe abstraction for computational implementations of particle dynamics in the shared-memory model of parallel programming. For this, we took inspiration from the design of the general-purpose programming language Rust, which is able to guarantee the absence of data races by enforcing a set of borrowing rules at compile time. We found that general particle dynamics are not algorithmically constrained enough to transfer these rules directly. Instead, we formulated a hierarchy of restrictions on particle dynamics that eventually allowed us to define concrete subclasses of particle dynamics which we can safely parallelize automatically. In particular, we identified two classes of particle dynamics that alter the number of particles in a particle system in such a way that they can be split into independent per-particle operations. For particle dynamics that do not add or remove particles, we found an equivalent of the borrowing rules of Rust in the context of particle systems. This concept was then concretized into multiple subclasses of particle dynamics that are common in practical applications.
After this, we designed a domain-specific programming language around our abstract formulation and classification of particle dynamics. Borrowing heavily from the syntax of existing languages, we formulated a minimalist grammar for this language and discussed where static analysis is required within this language to ensure data-race freedom. We found that unlike Rust, which utilizes an elaborate type system to ensure safety, for the language we designed it is sufficient to regulate symbol visibility and perform basic type checking. We also showed that by means of an appropriate intermediate representation we can not only guarantee data-race freedom, but also deadlock freedom. To prevent over-extending our language and still overcome the inherently limited expressiveness of a domain-specific language, we designed the language to be embedded in a general-purpose programming language and added mechanisms for these two contexts to cooperate.
Finally, we asserted the practicability of our model by expressing common operations from molecular dynamics in our domain-specific programming language. These examples also demonstrate the high degree of flexibility our model provides which we expect to be very useful for prototyping more exotic many-particle dynamics.
There are many opportunities for future extensions of our results presented in this work. For one, we have only formulated safe abstractions for particle dynamics that are local either to a single particle or to a small group of particles. There are, however, many physical systems that include long-range interactions (e.g., long-range electrostatic interactions) which often require more sophisticated algorithms such as Ewald summation \cite{Ewald1921}, the particle-particle-particle-mesh method \cite{Hockney1988}, or the Barnes-Hut method \cite{Barnes1986} for simulations. More research is required to find general, data-race-free abstractions for these kinds of methods.
Naturally, \textit{implementing} the language we designed is another big task. A partial implementation of our model for multicore CPU-based systems can be found in Ref.\ \cite{FIPS}. Due to the high degree of parallelism our model of particle dynamics provides, the development of an implementation based on GPU hardware is another worthwhile avenue for continuing this work.
There is also much opportunity for extending the syntax we gave in this article for real-world applications. Here, we presented a very minimal language model to ease the discussion, but this leaves much to be desired in terms of language features. Potential areas of improvement include extensions of the type system to support tensor quantities of rank higher than one, support for sharing behavior between particle types to reduce redundant code, a module system to provide common primitives such as interactions for commonly used potentials, limited forms of conditional expressions and loops, and many more. Since a general-purpose programming language can already be integrated into the model, these extensions strictly speaking do not add functionality, but they would drastically increase the usability of any implementation of our model.
We expect that the model of particle dynamics and the language we developed for it will aid future research related to the numerical treatment of many-particle systems, and we hope that in return feedback from these applications will guide future refinement of abstractions for safe particle dynamics.
\begin{acknowledgments}
We thank Cornelia Denz, Matthias R\"uschenbaum, Stephan Br\"oker and Tobias Nitschke for helpful discussions. This work is funded by the DFG -- Project-ID 433682494 - SFB 1459.
\end{acknowledgments}
|
{
"arxiv_id": "2302.14231",
"language": "en",
"timestamp": "2023-03-01T02:06:09",
"url": "https://arxiv.org/abs/2302.14231",
"yymm": "2302"
} | \section{Introduction}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{model.pdf}
\caption{\textbf{CHGNet model architecture} (a) CHGNet workflow: a crystal structure with unknown atomic charge is used as input to predict the energy, force, stress, and magnetic moments, resulting in a charge-decorated structure. (b) Atom graph: The pairwise bond information is drawn between atoms; Bond graph: the pairwise angle information is drawn between bonds. (c) Graphs run through basis expansions and embedding layers to create atom, bond, angle features. The features are updated through several interaction blocks, and the properties are predicted at output layers. (d) Interaction block in which the atom, bond, and angle share and update information. (e) Atom convolution layer where neighboring atom and bond information is calculated through weighted message passing and aggregates to the atoms.}
\label{fig:CHGNet}
\end{figure*}
Large-scale simulations, such as molecular dynamics (MD), are essential tools in the computational exploration of solid-state materials \cite{frenkel2001_understandMD}. They enable the study of reactivity, degradation, interfacial reactions, transport in partially disordered structures, and other heterogeneous phenomena relevant for the application of complex materials in technology. Technological relevance of such simulations requires rigorous chemical specificity which originates from the orbital occupancy of atoms. Despite their importance, accurate modeling of electron interactions or their subtle effects in MD simulations remains a major challenge. Classical force-fields treat the charge as an atomic property that is assigned to every atom \textit{a-priori} \cite{Lucas_Bauer_Patel_2012, Drautz2020_ACE}. Methodology developments in the field of polarizable force-fields such as the electronegativity equalization method (EEM) \cite{Mortier_Ghosh_Shankar_1986}, chemical potential equalization (CPE) \cite{York_Yang_1996}, and charge equilibration (Qeq) \cite{Rappe_Goddard_1991} realize charge evolution via the redistribution of atomic partial charge. However, these empirical methods are often not accurate enough to capture complex electron interactions.
\textit{Ab-initio} molecular dynamics (AIMD) with density functional theory (DFT) can produce high-fidelity results with quantum-mechanical accuracy by explicitly computing the electronic structure within the density functional approximation. The charge-density distribution and corresponding energy can be obtained by solving the Kohn--Sham equation \cite{KohnSham}. Long-time and large-scale spin-polarized AIMD simulations, which are critical for studying ion migration, phase transformations, and chemical reactions, are challenging and extremely compute-intensive \cite{Reed2004_ChemicalReview, Eum2020_TM_migration_layer}. These difficulties underscore the need for more efficient computational methods that can account for charged ions and their orbital occupancy at the time and length scales needed to model important phenomena.
Machine-learning interatomic potentials (MLIPs) such as \ae net \cite{Artrith_Morawietz_Behler_2011, Nong_aenet_GPU_2023} and DeepMD \cite{Zhang2021_DeepMD_water} have provided promising solutions to bridge the gap between expensive electronic structure methods and efficient classical interatomic potentials. Specifically, graph neural network (GNN)-based MLIPs such as DimeNet \cite{dimnet_2020}, NequIP \cite{NequIP_2022}, and MACE \cite{Batatia2022_MACE} have been shown to achieve state-of-the-art performance by incorporating invariant/equivariant symmetry constraints and long-range interaction through graph convolution \cite{Fu2022_forces}.
Most recently, GNN-based MLIPs trained on the periodic table (e.g., M3GNet) have demonstrated the possibility of universal interatomic potentials that may not require chemistry-specific training for each new application \cite{Chen_Ong_2022, Choudhary2023_universal}. However, so far none of these methods has included the important effects that valences have on chemical bonding.
The importance of an ion's valence derives from the fact that it can engage in very different bonding with its environment depending on its electron count. While traditional MLIPs treat the elemental label as the basic chemical identity, different valence states of transition metal ions behave as differently from each other as different elements. For example, high-spin Mn$^{4+}$ is a non-bonding spherical ion which almost always resides in octahedral coordination by oxygen atoms, whereas Mn$^{3+}$ is a Jahn--Teller active ion that radically distorts its environment, and Mn$^{2+}$ is an ion that strongly prefers tetrahedral coordination \cite{Reed2004_ChemicalReview}. Such strong chemical interaction variability across different valence states exists for almost all transition metal ions and requires specification of an ion beyond its chemical identity. In addition, the charge state is a degree of freedom that can create configurational entropy and whose dynamic optimization can lead to strongly coupled charge and ion motion, impossible to capture with an MLIP that only carries elemental labels. The relevance of explicit electron physics motivates the development of a robust MLIP model with charge information built in.
Charge has been represented in a variety of ways, from a simple oxidation state label to continuous wave functions derived from quantum mechanics \cite{Walsh2018_chg_review}. Challenges in incorporating charge information into MLIPs arise from many factors, such as the ambiguity of representations, complexity of interpretation, and impracticality of taking charge as an input ($E(\{\boldsymbol{r}_i\}, \{q_i\})$, as the labels $\{q_i\}$ are generally not \textit{a-priori} available). In this work, we define charge as an atomic property (\emph{atomic charge}) that can be inferred from the inclusion of magnetic moments (magmoms). We show that by explicitly incorporating the site-specific magmoms as the charge-state constraints into the \textbf{C}rystal \textbf{H}amiltonian \textbf{G}raph neural-\textbf{Net}work (CHGNet), one can both enhance the latent-space regularization and accurately capture electron interactions.
We demonstrate the charge constraints and latent-space regularization of atomic charge in Na$_2$V$_2$(PO$_4$)$_3$ and show the applications of CHGNet in the study of charge-transfer and phase transformation in Li$_x$MnO$_2$, electronic entropy in the Li$_x$FePO$_4$ phase diagram, and Li diffusivity in garnet-type Li-superionic conductors Li$_{3+x}$La$_3$Te$_2$O$_{12}$. By critically comparing and evaluating the importance of incorporating charge information in the construction of CHGNet, we offer new insights into the materials modeling of ionic systems with additional electronic degrees of freedom. Our analysis highlights the essential role that charge information plays in atomistic simulations for solid-state materials.
\section{Results}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{MPtrj.pdf}
\caption{\textbf{Element distribution of the Materials Project Trajectory (MPtrj) Dataset.} The color of the lower-left triangle indicates the total number of atoms/ions of an element. The color of the upper-right triangle indicates the number of times the atoms/ions carry magnetic moment labels in the MPtrj dataset. The lower part of the plot shows the counts and mean absolute deviations (MAD) of energy, magmoms, forces, and stresses.}
\label{fig:MPtrj}
\end{figure*}
\subsection{CHGNet architecture}
The foundation of CHGNet is a GNN, as shown in Fig. \ref{fig:CHGNet}, where the graph convolution layer is used to propagate atomic information via a set of nodes $\{v_i\}$ connected by edges $\{e_{ij}\}$. The translation, rotation, and permutation invariance are preserved in GNNs \cite{Bruna_Zaremba_Szlam_LeCun_2013, Geiger_Smidt_2022, Xie_Grossman_2018}.
Figure \ref{fig:CHGNet}(a) shows the workflow of CHGNet which takes a crystal structure with unknown atomic charges as input and outputs the corresponding energy, forces, stress, and magmoms. The charge-decorated structure can be inferred from the on-site magnetic moments and atomic orbital theory. The details are described in the following section.
In CHGNet, a periodic crystal structure is converted into an atom graph $G^a$ by searching for neighboring atoms $v_j$ within $r_\text{cut}$ of each atom $v_i$ in the primitive cell. The edges $e_{ij}$ are drawn with information from the pairwise distance between $v_i$ and $v_j$, as shown in Fig. \ref{fig:CHGNet}(b). Three-body interaction can be computed by using an auxiliary bond graph $G^b$, which can be similarly constructed by taking the angle $a_{ijk}$ as the pairwise information between bonds $e_{ij}$ and $e_{jk}$ (see Methods). We adopt similar approaches to include the angular/three-body information as other recent GNN MLIPs \cite{dimnet_2020, Chen_Ong_2022, Choudhary_DeCost_2021}.
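As a rough illustration of this graph-construction step (not the CHGNet implementation itself), the periodic neighbor search within $r_\text{cut}$ can be expressed with a recent version of \texttt{pymatgen}, the structure-processing library used in this work; the helper name \texttt{build\_atom\_graph} is ours.
\begin{lstlisting}[language=Python,basicstyle=\ttfamily]
from pymatgen.core import Lattice, Structure

def build_atom_graph(structure, r_cut=5.0):
    # Directed edges (i, j, distance) for all pairs within r_cut,
    # including periodic images of the primitive/unit cell.
    edges = []
    for i, neighbors in enumerate(structure.get_all_neighbors(r_cut)):
        for nb in neighbors:
            edges.append((i, nb.index, nb.nn_distance))
    return edges

# Example: rock-salt NaCl conventional cell.
nacl = Structure.from_spacegroup(
    "Fm-3m", Lattice.cubic(5.64), ["Na", "Cl"], [[0, 0, 0], [0.5, 0.5, 0.5]]
)
print(len(build_atom_graph(nacl)))  # number of directed edges within 5 A
\end{lstlisting}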
Figure \ref{fig:CHGNet}(c) shows the architecture of CHGNet, which consists of a sequence of basis expansions, embeddings, interaction blocks, and output layers (see Methods for details).
Figure \ref{fig:CHGNet}(d) illustrates the components within an interaction block, where the atomic interaction is simulated with the update of atom, bond, and angle features via the convolution layers.
Figure \ref{fig:CHGNet}(e) presents the convolution layer in the atom graph. Weighted message passing is used to propagate information between atoms, where the message weight $\Tilde{e}_{ij}^a$ from node $j$ to node $i$ decays to zero at the graph cutoff radius to ensure smoothness of the potential energy surface \cite{dimnet_2020}.
Unlike other GNNs, where the updated atom features $\{v^n_i\}$ after $n$ convolution layers are directly used to predict energies, CHGNet regularizes the node-wise features $\{v^{n-1}_i\}$ at the $n-1$ convolution layer to contain the information about magnetic moments. The regularized features $\{v^{n-1}_i\}$ carry rich information about both local ionic environments and charge distribution. Therefore, the atom features $\{v^n_i\}$ used to predict energy, force, and stress are charge-constrained by their charge-state information. As a result, CHGNet can provide charge-state information using only the nuclear positions and atomic identities as input, allowing the study of charge distribution in atomistic modeling.
\subsection{Materials Project Trajectory Dataset}
The Materials Project database contains a vast collection of DFT calculations on $\sim 146,000$ inorganic materials composed of 94 elements \cite{MaterialProject}. To accurately sample the universal potential energy surface, we extracted $\sim 1.37$ million Materials Project tasks of structure relaxation and static calculations using either the generalized gradient approximation (GGA) or GGA+U exchange-correlation (see Methods). This effort resulted in a comprehensive dataset with 1,580,395 atom configurations, 1,580,395 energies, 7,944,833 magnetic moments, 49,295,660 forces, and 14,223,555 stresses. To ensure the consistency of energies within the MPtrj dataset, we applied the GGA/GGA+U mixing compatibility correction, as described by \citet{Wang_Kingsbury_McDermott_Horton_Jain_Ong_Dwaraknath_Persson_2021}.
The distribution of elements in the MPtrj dataset is illustrated in Fig. \ref{fig:MPtrj}. The lower-left triangle (warm color) in an element's box indicates the frequency of occurrence of that element in the dataset, and the upper-right triangle (cold color) represents the number of instances where magnetic information is available for the element. With over 100,000 occurrences for 60 different elements and more than 10,000 instances with magnetic information for 76 different elements, the MPtrj dataset provides comprehensive coverage of all chemistries, excluding only the noble gases and actinoids. The lower boxes in Fig. \ref{fig:MPtrj} present the counts and mean absolute deviations of energy, force, stress, and magmoms in the MPtrj dataset.
CHGNet with 400,438 trainable parameters was trained on the MPtrj dataset, partitioned by materials into training, validation, and test sets in an 8:1:1 ratio (see Methods). The mean absolute errors of the energy, force, stress, and magmoms on the MPtrj test-set structures are shown in Table~\ref{table:1}. We observe a similar test-set error with slight improvements in the model trained with magmoms.
\begin{table}[h]
\caption{Mean-absolute-errors (MAEs) of CHGNet on MPtrj test set of 157,955 structures from 14,572 materials. 'With mag' and 'No mag' indicate whether the model is trained with magmoms ($\mu_B$ is the Bohr magneton).}
\begin{center}
\begin{tabular}{clllll}
\hline\hline
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{\begin{tabular}{@{}c@{}} $\textbf{Energy}$ \\ (meV/atom) \end{tabular}} &
\multicolumn{1}{c}{\begin{tabular}{@{}c@{}} $\textbf{Force}$ \\ (meV/\AA) \end{tabular}} &
\multicolumn{1}{c}{\begin{tabular}{@{}c@{}} $\textbf{Stress}$ \\ (GPa) \end{tabular}} &
\multicolumn{1}{c}{\begin{tabular}{@{}c@{}} $\textbf{Magmom}$ \\ ($\mu_B$) \end{tabular}}
\\ \hline
\textbf{With mag} & \multicolumn{1}{c}{30} & \multicolumn{1}{c}{77} & \multicolumn{1}{c}{0.348} & \multicolumn{1}{c}{0.032} \\
\hline
\textbf{No mag} & \multicolumn{1}{c}{33} & \multicolumn{1}{c}{79} & \multicolumn{1}{c}{0.351} & \multicolumn{1}{c}{ N/A } \\
\hline\hline
\end{tabular}
\end{center}
\label{table:1}
\end{table}
\subsection{Charge-constraints and charge-inference from magnetic moments}
\begin{figure}[htb]
\centering
\includegraphics[width=\linewidth]{NVP.pdf}
\caption{\textbf{Magmom and hidden-space regularization in Na$_2$V$_2$(PO$_4$)$_3$}. (a) Magmom distribution of the 216 V ions in the unrelaxed structure (blue) and CHGNet-relaxed structure (orange). (b) A two-dimensional visualization of the PCA on V-ion embedding vectors before the magmom projection layer indicates the latent space clustering is highly correlated with magmoms and charge information. The PCA reduction is calculated for both unrelaxed and relaxed structures.}
\label{fig:NVP}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{LMO2.pdf}
\caption{\textbf{Li$_{0.5}$MnO$_2$ phase transformation and charge disproportionation} (a) orthorhombic LiMnO$_2$ (\textit{o}-LMO) unit cell plotted with the tetrahedral site and the octahedral site. (b) Simulated XRD pattern of CHGNet MD structures as the system transforms from the \textit{o}-LMO phase to the \textit{s}-LMO. (c) Average magmoms of tetrahedral and octahedral Mn ions \textit{vs.} time. (d) Top: total potential energy and the relative intensity of \textit{o}-LMO and \textit{s}-LMO characteristic peaks \textit{vs.} time. Bottom: the histogram of magmoms on all Mn ions \textit{vs.} time. The brighter color indicates more Mn ions distributed at the magmom. (e) Predicted magmoms of tetrahedral Mn ions using r$^2$SCAN-DFT (black) and CHGNet (blue), where the structures are drawn from MD simulation at 0.4 ns (left) and 1.5 ns (right).}
\label{fig:LMO}
\end{figure*}
In solid-state materials that contain heterovalent ions, it is crucial to distinguish the atomic charge of the ions, as an element's interaction with its environment can depend strongly on its valence state. It is well known that the valence of heterovalent ions cannot be directly calculated through the DFT charge density because the charge density is almost invariant to the valence state due to the hybridization shift with neighboring ligand ions \cite{Mackrodt1996_HF_chg,Wolverton1998_LiCO2}. Furthermore, the accurate representation and encoding of the full charge density is another demanding task requiring substantial computational resources \cite{Gong2019_charge_density,Qiao2022_OrbitEqui}. An established approach is to rely on the magnetic moment for a given atom site as an indicator of its atomic charge, which can be derived from the difference in localized up-spin and down-spin electron densities in spin-polarized DFT calculations \cite{Reed2004_ChemicalReview,Barroso-Luque2022_CE_theory}. Compared with the direct use of charge density, magmoms are found to contain more comprehensive information regarding the electron orbital occupancy and therefore the chemical behavior of ions, as demonstrated in previous studies.
To rationalize our treatment of the atomic charge, we used a NASICON-type cathode material Na$_4$V$_2$(PO$_4$)$_3$ as an illustrative example. The phase stability of the (de-)intercalated material Na$_{4-x}$V$_2$(PO$_4$)$_3$ is associated with Na/vacancy ordering and is highly correlated to the charge ordering on the vanadium sites \cite{Wang2022_NVP_phase_stability}. We generated a supercell structure of Na$_4$V$_2$(PO$_4$)$_3$ with 2268 atoms and randomly removed half of the Na ions to generate the structure with composition Na$_2$V$_2$(PO$_4$)$_3$, where half of the V ions are oxidized to a V$^{4+}$ state. We used CHGNet to relax the (de-)intercalated structure and analyze its capability to distinguish the valence states of V atoms with the ionic relaxation (see Methods).
Figure \ref{fig:NVP}(a) shows the distribution of predicted magmoms on all V ions in the unrelaxed (blue) and relaxed (orange) structures. Without any prior knowledge about the V-ion charge distribution other than learning from the spatial coordination of the V nuclei, CHGNet successfully differentiated the V ions into two groups of V$^{3+}$ and V$^{4+}$. Figure \ref{fig:NVP}(b) shows the two-dimensional principal component analysis (PCA) of all the latent space feature vectors of V ions for both unrelaxed and relaxed structures after three interaction blocks. The PCA analysis demonstrates two well-separated distributions, indicating the latent space feature vectors of V ions are strongly correlated to the different valence states of V. Hence, imposing different magmom labels to the latent space (i.e., forcing the two orange peaks to converge to the red dashed lines in Fig. \ref{fig:NVP}(a)) would act as the \emph{charge constraints} for the model by regularizing the latent-space features.
Because energy, force, and stress are calculated from the same feature vectors, the inclusion of magmoms can improve the featurization of the heterovalent atoms in different local chemical environments (e.g., V$^{3+}$ and V$^{4+}$ display very distinct physics and chemistry) and therefore improve the accuracy and expressibility of CHGNet.
\subsection{Charge disproportionation in Li$_x$MnO$_2$ phase transformation}
The long-time and large-scale simulation of CHGNet enables studies of ionic rearrangements coupled with charge transfer \cite{Reed_Ceder_Ven_2001,Kang2006_Li_diffusion}, which is crucial for ion mobility and the accurate representation of the interaction between ionic species.
As an example, in the LiMnO$_2$ battery cathode material, transition-metal migration plays a central role in its phase transformations, which cause irreversible capacity loss \cite{Reimers_Fuller_Rossen_Dahn_1993, Koetschau_Richard_Dahn_Soupart_Rousche_1995}. The mechanism of Mn migration is strongly coupled with charge transfer, with Mn$^{4+}$ being an immobile ion, and Mn$^{3+}$ and Mn$^{2+}$ generally considered to be more mobile \cite{Jang_Chou_Huang_Sadoway_Chiang_2003,Jo2019_layer_Mn_migration,radin2019_NE_Mn7}. The dynamics of the coupling of the electronic degrees of freedom with those of the ions has been challenging to study but is crucial to understand the phase transformation from orthorhombic LiMnO$_2$ (\textit{o}-LMO, shown in Fig. \ref{fig:LMO}(a)) to spinel LiMnO$_2$ (\textit{s}-LMO), as the time scale and computational cost of such phenomena are far beyond any possible \textit{ab-initio} methods.
In early quasi-static \textit{ab-initio} studies, \citet{Reed_Ceder_Ven_2001} rationalized the remarkable speed at which the phase transformation proceeds at room temperature using a charge disproportionation mechanism: $2\text{Mn}^{3+}_{\text{oct}} \rightarrow \text{Mn}^{2+}_{\text{tet}} + \text{Mn}^{4+}_{\text{oct}}$, where the subscript indicates location in the tetrahedral or octahedral site of a face-centered cubic oxygen packing, as shown in Fig. \ref{fig:LMO}(a). The hypothesis, based on DFT calculations, was that Mn$^{2+}$ had a lower energy barrier for migration between tetrahedral and octahedral sites and preferred to occupy the tetrahedral site. The ability of Mn to dynamically change its valence would therefore explain its remarkable room-temperature mobility. However, \citet{Jang_Chou_Huang_Sadoway_Chiang_2003} showed in a later magnetic characterization experiment that the electrochemically transformed spinel LiMnO$_2$ has lower-spin (high-valence) Mn ions on the tetrahedral sites, which suggested the possibility that Mn with higher valence can be stable on tetrahedral sites during the phase transformation.
To demonstrate the ability of CHGNet to fully describe such a process, we used CHGNet to run a charge-informed MD simulation at 1100 K for 1.5 ns (see Methods). The MD simulation started from a partially delithiated supercell structure with the \textit{o}-LMO structure (Li$_{20}$Mn$_{40}$O$_{80}$), which is characterized by peaks at 15\textdegree, 26\textdegree, and 40\textdegree\ in the X-ray diffraction (XRD) pattern (the bottom line in Fig. \ref{fig:LMO}(b)). As the simulation proceeded, a phase transformation from orthorhombic ordering to spinel-like ordering was observed. Figure \ref{fig:LMO}(b) presents the simulated XRD pattern of MD structures at different time intervals from 0 to 1.5 ns, with a clear increase in the characteristic spinel peaks (18\textdegree, 35\textdegree) and a decrease in the orthorhombic peak. The simulated results agree well with the experimental in-situ XRD results \cite{Reimers_Fuller_Rossen_Dahn_1993, Jang_Chou_Huang_Sadoway_Chiang_2003}.
Figure \ref{fig:LMO}(d) presents the CHGNet-predicted energy of the LMO supercell structure as a function of simulation time, together with the peak strengths at $2\theta=15^\circ$ and $18^\circ$. An explicit correlation between the structural transformation and energy landscape is observed. The predicted energy of the spinel phase is approximately 26 meV/oxygen lower than that of the starting \textit{o}-LMO, suggesting that the phase transformation to spinel is indeed thermodynamically favored.
The advantage of CHGNet is shown in its ability to predict charge-coupled physics, as evidenced by the lower plot in Fig. \ref{fig:LMO}(d). A histogram of the magmoms of all the Mn ions in the structure is presented against time. In the early part of the simulation, the magmoms of Mn ions are mostly distributed between 3$\mu_B$ and 4$\mu_B$, which correspond to Mn$^{4+}$ and Mn$^{3+}$.
At approximately 0.8 ns, there is a significant increase in the amount of Mn$^{2+}$, which is accompanied by a decrease in the potential energy and changes in the XRD peaks. Following this major transformation point, the Mn$^{3+}$ ions undergo charge disproportionation, resulting in the coexistence of Mn$^{2+}$, Mn$^{3+}$, and Mn$^{4+}$ in the transformed spinel-like structure.
One important observation from the long-time charge-informed MD simulation is the correlation between ionic rearrangements and the charge-state evolution. Specifically, we noticed that the time scale of charge disproportionation ($\sim$ ns for the emergence of Mn$^{2+}$) is far longer than the time scale of ion hops ($\sim$ ps for the emergence of Mn$_{\text{tet}}$), indicating that the migration of Mn to the tetrahedral coordination is less likely related to the emergence of Mn$^{2+}$. Instead, our result indicates that the emergence of Mn$^{2+}_{\text{tet}}$ is correlated to the formation of the long-range spinel-like ordering. Figure \ref{fig:LMO}(c) shows the average magmoms of Mn$_{\text{tet}}$ and Mn$_{\text{oct}}$ as a function of time. The result reveals that Mn$^{2+}_{\text{tet}}$ only forms over a long time period, which cannot be observed using any conventional simulation techniques.
To further validate this hypothesis and the accuracy of CHGNet prediction, we used r$^2$SCAN-DFT static calculations to get the magmoms of the structures at 0.4 and 1.5 ns, where the results are shown in Fig. \ref{fig:LMO}(e). The r$^2$SCAN-DFT magmoms (black) infer the same Mn$_\text{tet}$ valence states as the CHGNet prediction (blue). The systematically lower magmoms from r$^2$SCAN are expected since CHGNet is trained with GGA+U which over-localizes the electron density in oxides \cite{Wang2006_GGAU}. The r$^2$SCAN-DFT shows a 34 meV/oxygen driving force between the structures at 0.4 and 1.5 ns.
\subsection{Electronic entropy effect in the phase diagram of Li$_x$FePO$_4$}
\begin{figure}[htb]
\centering
\includegraphics[width=\linewidth]{LFP.pdf}
\caption{\textbf{Li$_x$FePO$_4$ phase diagram from CHGNet}. The phase diagrams in (a) and (b) are calculated with and without electronic entropy on Fe$^{2+}$ and Fe$^{3+}$. The colored dots represent the stable phases obtained in semi-grand canonical MC. The dashed lines indicate the two-phase equilibria between solid solution phases.}
\label{fig:LFP}
\end{figure}
The configurational electronic entropy has a significant effect on the temperature-dependent phase stability of mixed-valence oxides, and its equilibrium modeling therefore requires an explicit indication of the atomic charge. However, no current MLIPs can provide such information. We demonstrate that, using CHGNet, one can include the electronic entropy in the thermodynamics of Li$_x$FePO$_4$ and its temperature-dependent phase diagram (PD).
Previous research has shown that the formation of a solid solution in Li$_x$FePO$_4$ is mainly driven by electronic entropy rather than by Li$^+$/vacancy configurational entropy \cite{Zhou_Maxisch_Ceder_2006}. We applied CHGNet as an energy calculator to generate two cluster expansions (CEs), which is the typical approach to studying configurational entropy \cite{VandeWalle2002_ATAT}. One of these is charge-decorated (considering Li$^+$/vacancy and Fe$^{2+}$/Fe$^{3+}$) and the other is non-charge-decorated (considering only Li$^+$/vacancy, without the Fe valence). Semi-grand canonical Monte Carlo was used to sample these cluster expansions and construct Li$_x$FePO$_4$ PDs (see Methods). The calculated PD with charge decoration in Fig. \ref{fig:LFP}(a) features a miscibility gap between FePO$_4$ and LiFePO$_4$, with a eutectoid-like transition to the solid-solution phase at intermediate Li concentration, qualitatively matching the experimental results \cite{Delacourt_Poizot_Tarascon_Masquelier_2005, Dodd_Yazami_Fultz_2006}. In contrast, the calculated PD without charge decoration in Fig. \ref{fig:LFP}(b) features only a single miscibility gap without any eutectoid transitions, in disagreement with experiments. This comparison highlights the importance of explicitly including the electronic degrees of freedom, as failure to do so can result in incorrect physics. These results show how practitioners may benefit from CHGNet's atomic charge inference for the equilibrium modeling of configurationally and electronically disordered systems.
\subsection{Activated Li diffusion network in Li$_3$La$_3$Te$_2$O$_{12}$}
\begin{figure}[htb]
\centering
\includegraphics[width=\linewidth]{LLTO_md.pdf}
\caption{\textbf{Li diffusivity in garnet Li$_3$La$_3$Te$_2$O$_{12}$}. The CHGNet simulation accurately reproduces the dramatic increase in Li-ion diffusivity when a small amount of extra Li is stuffed into the garnet structure, qualitatively matching the activated diffusion network theory and agreeing well with the DFT-computed activation energy.}
\label{fig:LLTO}
\end{figure}
In this section, we showcase the precision of CHGNet for general-purpose MD. Lithium-ion diffusivity in fast Li-ion conductors is known to show a drastic non-linear response to compositional change. For example, stuffing a small amount of excess lithium into stoichiometric compositions can result in orders-of-magnitude improvement of the ionic conductivity \cite{Jun_Sun_Xiao_Zeng_Kim_Kim_Miara_Im_Wang_Ceder_2022}. \citet{Xiao_Jun_Wang_Miara_Tu_Ceder_2021} reported that the activation energy of Li diffusion in stoichiometric garnet Li$_3$La$_3$Te$_2$O$_{12}$ decreases from more than 1 eV to $\sim$160 meV in a slightly stuffed Li$_{3+\delta}$ garnet ($\delta$ = 1/48), owing to the activated Li diffusion network of face-sharing tetrahedral and octahedral sites.
We performed a zero-shot test to assess the ability of CHGNet to capture the effect of such a slight compositional change on the diffusivity and its activation energy. Figure \ref{fig:LLTO} shows the Arrhenius plot from CHGNet-based MD simulations and compares it to AIMD results. Our results indicate that not only is the activated diffusion network effect precisely captured, but the activation energies from CHGNet are also in excellent agreement with the DFT results \cite{Xiao_Jun_Wang_Miara_Tu_Ceder_2021}. This effort demonstrates the capability of CHGNet to precisely capture the strong interactions between Li ions in activated local environments and the ability to simulate highly non-linear diffusion behavior. Moreover, CHGNet can dramatically decrease the error on simulated diffusivity and enable studies of systems with poor diffusivity, such as the unstuffed Li$_{3}$ garnet, by extending to nanosecond-scale simulations \cite{Huang2021_DP}.
\section{Discussion}
Large-scale computational modeling has proven essential in providing atomic-level information in materials science, medical science, and molecular biology. Many technologically relevant applications contain heterovalent species, for which a comprehensive understanding of the atomic charge involved in the dynamics of processes is of great interest.
The importance of assigning a valence to ions derives from the fundamentally different electronic and bonding behavior ions can exhibit when their electron count changes. \textit{Ab-initio} calculations based on DFT are useful for these problems, but the $\sim \mathcal{O}(N^3)$ scaling intrinsically prohibits its application to large time- and length-scales. Recent development of MLIPs provides new opportunities to increase computational efficiency while maintaining near DFT accuracy. The present work presents an MLIP that combines the need to include the electronic degrees of freedom with computational efficiency.
In this work, we developed CHGNet and demonstrated the effectiveness of incorporating magnetic moments as a proxy for inferring the atomic charge in atomistic simulations, which results in the integration of electronic information and the imposition of additional charge constraints as a regularization of the MLIP. We highlight the capability of CHGNet in distinguishing Fe$^{2+}$/Fe$^{3+}$ in the study of Li$_x$FePO$_4$, which is essential for the inclusion of electronic entropy and finite temperature phase stability. In the study of LiMnO$_2$, we demonstrate CHGNet's ability to gain new insights into the relation between charge disproportionation and phase transformation in a heterovalent transition-metal oxide system from long-time charge-informed MD.
CHGNet builds on recent advances in graph-based MLIPs \cite{dimnet_2020,Chen_Ong_2022}, but is pretrained with electronic degrees of freedom built in, which provides an ideal solution for high-throughput screening and atomistic modeling of a variety of technologically relevant oxides, including high-entropy materials \cite{Lun2020_high_entropy, sun2021_HE_catalysis}. As CHGNet is already generalized to broad chemistry during pretraining, it can also serve as a data-efficient model for high-precision simulations when augmented with fine-tuning to specific chemistries.
Despite these advances, further improvements can be achieved through several efforts. First, the use of magnetic moments for valence-state inference does not strictly ensure global charge neutrality. The formal valence assignment depends on how the atomic charges are partitioned \cite{Walsh2018_chg_review}. Second, although magnetic moments are good heuristics for the atomic charge from spin-polarized calculations in ionic systems, the atomic charge inference for non-magnetic ions may be ambiguous and thus requires extra domain knowledge; for such ions, atom-centered magnetic moments alone cannot accurately reflect the atomic charges. We posit that the model can be enhanced by incorporating more advanced and general charge representations, such as the electron localization function \cite{silvi1994_ELF}, electric polarization \cite{king1993_polarization_theory}, and atomic orbital-based partitioning (e.g., Wannier functions \cite{Zhang2022_wannier}). These approaches could be used for atom feature engineering in the latent space.
In conclusion, CHGNet enables charge-informed atomistic simulations amenable to the study of heterovalent systems using large-scale computational simulations, expanding opportunities to study charge-transfer-coupled phenomena in computational chemistry, physics, biology, and materials science.
\section{Acknowledgments}
This work was funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division under Contract No. DE-AC0205CH11231 (Materials Project program KC23MP). The work was also supported by the computational resources provided by the Extreme Science and Engineering Discovery Environment (XSEDE), supported by National Science Foundation grant number ACI1053575; the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory; and the Lawrencium Computational Cluster resource provided by the IT Division at the Lawrence Berkeley National Laboratory. The authors would also like to thank Jason Munro for helpful discussions.
\section{Methods}
\subsection{Data parsing}
The Materials Project Trajectory Dataset (MPtrj) was parsed from the September 2022 Materials Project database version.
We collected all the GGA and GGA+U task trajectories under each material-id and followed the criteria below:
\begin{enumerate}
\item We removed deprecated tasks and only kept tasks with the same calculation settings as the primary task, from which the material could be searched on the Materials Project website. To verify if the calculation settings were equal, we confirmed the following:
(1) The +U setting must be the same as the primary task.
(2) The energy of the final frame cannot differ by more than 20 meV/atom from the primary task.
\item Structures without energy and forces or electronic step convergence were removed.
\item Structures with energy more than 1 eV/atom above or more than 10 meV/atom below the relaxed structure from the Materials Project's \texttt{ThermoDoc} were filtered out to eliminate large energy differences caused by variations in VASP settings, etc.
\item Duplicate structures were removed to maintain a healthy data distribution. This removal was achieved using a \texttt{pymatgen} \texttt{StructureMatcher} together with an energy matcher to distinguish structures (see the sketch after this list). The screening criteria of the structure and energy matchers became more stringent as more structures under the same material-id were added to the MPtrj dataset.
\end{enumerate}
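The following Python sketch illustrates one way such a combined structure-and-energy duplicate check could look, using \texttt{pymatgen}'s \texttt{StructureMatcher}; the tolerances and the energy threshold are illustrative placeholders, not the values used to build MPtrj.
\begin{lstlisting}[language=Python,basicstyle=\ttfamily]
from pymatgen.analysis.structure_matcher import StructureMatcher
from pymatgen.core import Lattice, Structure

def is_duplicate(s1, s2, e1, e2, e_tol=0.002):
    # Treat two frames as duplicates if StructureMatcher finds them
    # equivalent and their energies per atom (eV) differ by less than e_tol.
    # Tolerances below are pymatgen defaults; the actual screening criteria
    # were tightened as more frames per material were collected.
    matcher = StructureMatcher(ltol=0.2, stol=0.3, angle_tol=5.0)
    return matcher.fit(s1, s2) and abs(e1 - e2) < e_tol

# Example: a simple-cubic Po cell compared with itself.
po = Structure(Lattice.cubic(3.35), ["Po"], [[0, 0, 0]])
print(is_duplicate(po, po, -3.500, -3.501))  # True
\end{lstlisting}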
\subsection{Model design}
In constructing the crystal graph, the default $r_\text{cut}$ is set to 5~\AA, which has been shown to be adequate for capturing long-range interactions \cite{Chen_Ong_2022}. The bond graph is constructed with a cutoff of 3~\AA\ for computational efficiency. The bond distances $r_{ij}$ were expanded to $\Tilde{e}_{ij}$ by a trainable smooth radial Bessel function (SmoothRBF), as proposed in \citet{dimnet_2020}. The SmoothRBF forces the radial Bessel function and its derivative to approach zero at the graph cutoff radius, thus guaranteeing a smooth potential energy surface. The angles $\theta_{ijk}$ were expanded by Fourier basis functions to create $\Tilde{a}_{ijk}$ with trainable frequencies. The atomic numbers $Z_i$, $\Tilde{e}_{ij}$, and $\Tilde{a}_{ijk}$ were then embedded into node $v^{0}_{i}$, edge $e^{0}_{ij}$, and angle features $a^{0}_{ijk}$ (all have 64 feature dimensions by default):
\begin{equation}
\begin{aligned}
v_i^0 &= \boldsymbol{W_v}Z_i + \boldsymbol{b_v},\\
e_{ij,n}^0 &= \boldsymbol{W_e}\Tilde{e}_{ij}, ~ \Tilde{e}_{ij} = \sqrt{\frac{2}{5}}\frac{\sin(\frac{n \pi r_{ij}}{5})}{r_{ij}} \odot u(r_{ij})\\
a_{ijk,\ell}^0 &=
\begin{split}
\begin{cases}
\frac{1}{\sqrt{2\pi}} & \text{if } \ell=0 \\
\frac{1}{\sqrt{\pi}} \cos\left[\ell\theta_{ijk}\right] &\text{if } \ell = [1,N]\\
\frac{1}{\sqrt{\pi}} \sin\left[(\ell-N)\theta_{ijk}\right] &\text{if } \ell = [N+1,2N]
\end{cases} , \\
\end{split}
\label{eq:embedding}
\end{aligned}
\end{equation}
where $\{\boldsymbol{W},\boldsymbol{b}\}$ are the trainable weights and biases. The angle is computed using $\theta_{ijk} = \arccos{\frac{e_{ij}\cdot e_{jk}}{|e_{ij}||e_{jk}|}}$. Here $u(r_{ij})$ is a polynomial envelope function that forces the value and the first and second derivatives of $\Tilde{e}_{ij}$ smoothly toward 0 at the graph cutoff radius \cite{dimnet_2020}. The subscripts $n,\ell$ are the expansion orders, and we set the maximum orders for both $n$ and $\ell$ to $2N+1=9$. The superscript denotes the index of the interaction block, and $\odot$ represents element-wise multiplication. The edge vectors $e^t_{ij}$ are bi-directional, which is essential for $e^t_{ij}$ and $e^t_{ji}$ to be represented as a single node in the bond graph \cite{Choudhary_DeCost_2021}.
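For illustration, a NumPy sketch of such a smoothly damped radial Bessel basis is given below. The envelope polynomial follows the DimeNet form with an assumed exponent $p=6$, and the frequencies are kept fixed here, whereas CHGNet makes them trainable; \texttt{n\_max = 9} matches the $2N+1=9$ expansion orders quoted above.
\begin{lstlisting}[language=Python,basicstyle=\ttfamily]
import numpy as np

def smooth_rbf(r, n_max=9, r_cut=5.0, p=6):
    # Radial Bessel basis sqrt(2/r_cut) * sin(n*pi*r/r_cut) / r for a scalar
    # distance r, damped by a polynomial envelope u(r) so that the basis and
    # its first and second derivatives vanish at r_cut.
    d = r / r_cut
    u = (1.0
         - (p + 1) * (p + 2) / 2 * d**p
         + p * (p + 2) * d**(p + 1)
         - p * (p + 1) / 2 * d**(p + 2))
    u = u if d < 1.0 else 0.0
    n = np.arange(1, n_max + 1)
    return np.sqrt(2.0 / r_cut) * np.sin(n * np.pi * r / r_cut) / r * u

print(smooth_rbf(2.3).shape)            # (9,) basis values at r = 2.3 A
print(np.abs(smooth_rbf(4.99)).max())   # ~0 near the cutoff
\end{lstlisting}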
For the atom graph convolution, a weighted message passing layer is applied to the concatenated feature vectors $(v_i^{t}||v_j^{t}||e_{ij}^{t})$ from two atoms and one bond. For the bond graph convolution, the weighted message passing layer is applied to the concatenated feature vectors $(e_{ij}^{t}||e_{jk}^{t}||a_{ijk}^{t}||v_j^{t+1})$ from two bonds, the angle between them, and the atom where the angle is located. For the angle update function, we used the same construction as for the bond graph message vector, but without the weighted aggregation step. The mathematical forms of the atom, bond, and angle updates are given below:
\begin{equation}
\begin{aligned}
v_i^{t+1} &= v_i^{t} + L_v^t \left[ \sum_{j} \Tilde{e}_{ij} \cdot \phi^t_v\left(v_i^{t}||v_j^{t}||e_{ij}^{t}\right) \right] ,\\
e_{jk}^{t+1} &= e_{jk}^{t} + L_e^t \left[ \sum_{i} \Tilde{e}_{ij} \cdot \Tilde{e}_{jk} \cdot \phi^t_e\left(e_{ij}^{t}||e_{jk}^{t}||a_{ijk}^{t}||v_j^{t+1}\right) \right],\\
a_{ijk}^{t+1} &= a_{ijk}^{t} + \phi^t_a\left(e_{ij}^{t+1}||e_{jk}^{t+1}||a_{ijk}^{t}||v_j^{t+1}\right).\\
\end{aligned}
\end{equation}
The $L$ is a linear layer and $\phi$ is the gated multilayer perceptron (gatedMLP) \cite{Xie_Grossman_2018}:
\begin{equation}
\begin{aligned}
L(x) &= \boldsymbol{W} x+\boldsymbol{b}, \\
\phi(x) &= \left(\sigma \circ L_{\text{gate}}(x)\right) \odot \left(g \circ L_{\text{core}}(x)\right),
\end{aligned}
\end{equation}
where $\sigma$ and $g$ are the \texttt{Sigmoid} and \texttt{SiLU} activation functions, respectively. The magnetic moments are predicted by a linear projection of the atom features $v_i^3$ after three interaction blocks by
\begin{equation}
m_i =\ L_m(v^3_i).
\end{equation}
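As a point of reference, a minimal PyTorch sketch of the gated MLP $\phi$ defined above (a sigmoid gate multiplying a SiLU core) could look as follows; the layer sizes are illustrative, and the actual CHGNet implementation may stack additional layers.
\begin{lstlisting}[language=Python,basicstyle=\ttfamily]
import torch
from torch import nn

class GatedMLP(nn.Module):
    # phi(x) = sigmoid(L_gate(x)) * silu(L_core(x))
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.gate = nn.Linear(in_dim, out_dim)
        self.core = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.gate(x)) * nn.functional.silu(self.core(x))

# Example: messages from concatenated (v_i || v_j || e_ij) features.
phi = GatedMLP(3 * 64, 64)
message = phi(torch.randn(10, 3 * 64))  # 10 edges -> 10 messages of size 64
print(message.shape)
\end{lstlisting}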
Instead of using a full interaction block, the last convolution layer only includes atom graph convolution
\begin{equation}
v^4_i = v_i^3 + \sum_{j} \Tilde{e}_{ij} \cdot \phi^3_v\left(v_i^{3}||v_j^{3}||e_{ij}^{3}\right).
\end{equation}
The energy is calculated by a non-linear projection of the site-wise averaged feature vector over all atoms $\{v_i^4\}$. The forces and stress are calculated via auto-differentiation of the energy with respect to the atomic Cartesian coordinates and strain:
\begin{equation}
\begin{aligned}
E_{\text{tot}} &= \sum_{i} L_3 \circ g \circ L_2 \circ g \circ L_1(v^4_i) ,\\
\Vec{f}_i &= - \frac{\partial E_{\text{tot}}}{\partial \Vec{x}_i}, \\
\boldsymbol{\sigma} &= \frac{1}{V}\frac{\partial E_{\text{tot}}}{\partial \boldsymbol{\varepsilon}}.
\end{aligned}
\end{equation}
Overall, with four atom convolution layers, the pretrained CHGNet can capture long-range interactions up to 20 \AA\ with a small computational cost.
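The force expression above can be reproduced with automatic differentiation; the following PyTorch sketch uses a toy energy function in place of the CHGNet energy head, and the stress would be obtained analogously by differentiating with respect to a strain on the cell.
\begin{lstlisting}[language=Python,basicstyle=\ttfamily]
import torch

def forces_from_energy(energy_fn, positions):
    # Forces as the negative gradient of a scalar energy with respect to
    # the Cartesian coordinates, obtained by automatic differentiation.
    positions = positions.clone().requires_grad_(True)
    energy = energy_fn(positions)
    (grad,) = torch.autograd.grad(energy, positions)
    return -grad

def toy_energy(pos):
    # Placeholder energy: half the sum of squared pair distances.
    diff = pos.unsqueeze(0) - pos.unsqueeze(1)
    return 0.5 * (diff ** 2).sum()

pos = torch.randn(4, 3)
print(forces_from_energy(toy_energy, pos).shape)  # torch.Size([4, 3])
\end{lstlisting}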
\subsection{Model training}
The model is trained to minimize the summation of Huber loss (with $\delta = 0.1$) of energy, force, stress, and magmoms:
\begin{equation}
\begin{aligned}
\mathcal{L}(x,\hat{x}) &=
\begin{split}
\begin{cases}
0.5\cdot(x-\hat{x})^2 & \text{if } |x-\hat{x}| < \delta \\
\delta\cdot(|x-\hat{x}| - 0.5\delta) &\text{otherwise}
\end{cases}
\end{split}.
\end{aligned}
\end{equation}
The loss function is a weighted sum of the contributions from energy, forces, stress, and magmoms:
\begin{equation}
\mathcal{L} = \mathcal{L}(E, \hat{E})+w_f\mathcal{L}(\boldsymbol{f}, \hat{\boldsymbol{f}})+w_{\sigma}\mathcal{L}(\boldsymbol{\sigma}, \hat{\boldsymbol{\sigma}})+ w_m\mathcal{L}(m, \hat{m}),
\end{equation}
where the weights for the forces, stress, and magmoms are set to $w_f = 1$, $w_{\sigma} = 0.1$, and $w_m = 0.1$, respectively. The DFT energies are normalized with elemental reference energies before fitting to CHGNet to decrease variances \cite{Chen_Ong_2022}. The batch size is set to 40 and the \texttt{Adam} optimizer is used with $10^{-3}$ as the initial learning rate. The \texttt{CosineAnnealingLR} scheduler is used to adjust the learning rate 10 times per epoch, and the final learning rate decays to $10^{-5}$ after 20 epochs.
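A minimal PyTorch sketch of this objective, with the weights quoted above, is shown below; the dictionary layout of predictions and targets is an assumption made for the example, not the actual training code.
\begin{lstlisting}[language=Python,basicstyle=\ttfamily]
import torch
from torch import nn

huber = nn.HuberLoss(delta=0.1)

def combined_loss(pred, target, w_f=1.0, w_s=0.1, w_m=0.1):
    # Weighted sum of Huber losses over energy, forces, stress and magmoms.
    return (huber(pred["energy"], target["energy"])
            + w_f * huber(pred["forces"], target["forces"])
            + w_s * huber(pred["stress"], target["stress"])
            + w_m * huber(pred["magmom"], target["magmom"]))

keys = ("energy", "forces", "stress", "magmom")
pred = {k: torch.randn(8) for k in keys}
target = {k: torch.zeros(8) for k in keys}
print(combined_loss(pred, target))
\end{lstlisting}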
\subsection{Software interface}
CHGNet was implemented using \texttt{pytorch 1.12.0} \cite{paszke2019pytorch}, with crystal structure processing from \texttt{pymatgen} \cite{Ong2013_pymatgen}. Molecular dynamics and structure relaxation were simulated using the interface to \texttt{Atomic Simulation Environment (ASE)} \cite{HjorthLarsen2017_ASE}. The cluster expansions were performed using the \texttt{smol} package \cite{Barroso-Luque2022_smol}.
\subsection{Structure relaxation and molecular dynamics}
All the structure relaxations were optimized by the \texttt{FIRE} optimizer over the potential energy surface provided by CHGNet \cite{Bitzek2006_FIRE}, where the atom positions, cell shape, and cell volume were simultaneously optimized to reach converged interatomic forces of 0.1 eV/\AA.
For the MD simulations of the \textit{o}-LMO to \textit{s}-LMO phase transformation, the initial structure Li$_{20}$Mn$_{40}$O$_{80}$ was generated by randomly removing Li from a Li$_{40}$Mn$_{40}$O$_{80}$ supercell of the orthorhombic structure and relaxing with DFT. The MD simulation was run under the NVT ensemble, with a time step of 2 fs at T = 1100 K for 2 ns. For the simulated XRD in Fig. \ref{fig:LMO}(b), the structures at 0.0, 0.3, 0.6, 0.9, 1.2, and 1.5 ns were coarse-grained to their nearest Wyckoff positions to remove noisy peaks. In Fig. \ref{fig:LMO}(c), Mn$_\text{oct}$ and Mn$_\text{tet}$ were determined by counting the number of bonding oxygen ions within 2.52 \AA. If six bonding oxygen ions were found, the Mn ion was categorized as Mn$_\text{oct}$; if fewer than six bonding oxygen ions were found, the Mn ion was coarse-grained as Mn$_\text{tet}$ to represent lower-coordinated environments. In Fig. \ref{fig:LMO}(e), Mn$^{2+}$ and Mn$^{3+}$ are classified by a CHGNet magmom threshold of 4.2 $\mu_B$ \cite{Barroso-Luque2022_CE_theory}.
For the MD simulations of garnet Li$_3$La$_3$Te$_2$O$_{12}$ systems, a time step of 2 fs was used. We ramped the system up to the target temperature in the NVT ensemble over at least 1 ps. Then, after equilibrating the system for 50 ps, the lithium self-diffusion coefficients were obtained by calculating the mean squared displacements of trajectories over at least 2.3 ns. The uncertainty analysis of the diffusion coefficient values was conducted following the empirical error estimation scheme proposed by \citet{He_Zhu_Epstein_Mo_2018}. In Li$_{3+\delta}$, the excess lithium was stuffed into an intermediate octahedral ($48g$) site face-sharing with the fully occupied $24d$ tetrahedral sites.
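For orientation, the diffusion coefficient extracted from such trajectories follows $D = \mathrm{MSD}/(6t)$ in three dimensions; the sketch below gives a simplified estimate from a displacement trajectory and omits the block-averaged error analysis of \citet{He_Zhu_Epstein_Mo_2018} cited above.
\begin{lstlisting}[language=Python,basicstyle=\ttfamily]
import numpy as np

def diffusion_coefficient(positions, dt_fs):
    # Tracer diffusivity from the slope of the mean squared displacement,
    # D = MSD / (6 t) in 3D. positions: (n_frames, n_ions, 3) in Angstrom.
    disp = positions - positions[0]
    msd = (disp ** 2).sum(axis=-1).mean(axis=1)   # average over ions
    t = np.arange(len(msd)) * dt_fs
    slope = np.polyfit(t[1:], msd[1:], 1)[0]      # Angstrom^2 / fs
    return slope / 6.0

# Example with a synthetic random-walk trajectory (1000 frames, 24 ions).
traj = np.cumsum(np.random.normal(0.0, 0.05, size=(1000, 24, 3)), axis=0)
print(diffusion_coefficient(traj, dt_fs=2.0))
\end{lstlisting}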
\subsection{Phase diagram calculations}
The cluster expansions (CEs) of Li$_x$FePO$_4$ were performed with pair interactions up to 11 \AA\ and triplet interactions up to 7 \AA\ based on the relaxed unit cell of LiFePO$_4$. For better energy accuracy, we first fine-tuned CHGNet on the Materials Project structures in the Li-Fe-P-O chemical space with an \texttt{MSE} loss function for 40 epochs, which resulted in a 12 meV/atom training energy error and a 19 meV/atom validation energy error. We applied CHGNet to relax 456 different structures in Li$_x$FePO$_4$ ($0\leq x\leq 1$) and predict their energies and magmoms, where the 456 structures were generated via an automatic workflow including CE fitting, canonical CE Monte Carlo to search for the ground state at varied Li$^+$ compositions, and CHGNet relaxation. The charge-decorated CE is defined on coupled sublattices over Li$^+$/vacancy and Fe$^{2+}$/Fe$^{3+}$ sites, where Fe$^{2+}$ and Fe$^{3+}$ are treated as different species. In addition, the non-charge-decorated CE
is defined only on Li$^+$/vacancy sites. In the charge-decorated CE, Fe$^{2+}$/Fe$^{3+}$ is classified with magmom in $[3\mu_B,4\mu_B]$ and $[4\mu_B,5\mu_B]$, respectively \cite{Barroso-Luque2022_CE_theory}.
The semigrand canonical Monte Carlo simulations used the Metropolis--Hastings algorithm, where 20\% of the MC steps were performed canonically (swapping Li$^+$/vacancy or Fe$^{2+}$/Fe$^{3+}$) and 80\% were performed grand-canonically using the table-exchange method \cite{Xie2023_chgMC,Deng2020_TableExchange}. The simulations were run on an $8\times 6\times 4$ supercell of the LiFePO$_4$ unit cell. In each MC simulation, we scanned the chemical potential over the range $[-5.6,-4.8]$ eV with a step of $0.01$ eV and sampled temperatures from 0 to 1000 K. The boundary of the stable solid-solution phases was determined as the region where the Li concentration changes by less than 0.05 per chemical-potential step of $\Delta\mu = 0.01$ eV.
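To illustrate the sampling step, the semigrand-canonical Metropolis acceptance criterion weighs an attempted change in energy $\Delta E$ and Li content $\Delta N$ by $\exp[-(\Delta E - \mu\Delta N)/k_\mathrm{B}T]$; the sketch below is illustrative only, as the actual sampling is performed with the \texttt{smol} package and the table-exchange method.
\begin{lstlisting}[language=Python,basicstyle=\ttfamily]
import math
import random

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def accept_semigrand(delta_e, delta_n, mu, temperature):
    # Accept with probability min(1, exp(-(dE - mu*dN) / kT)).
    exponent = -(delta_e - mu * delta_n) / (K_B * temperature)
    return random.random() < math.exp(min(exponent, 0.0))

# Example: insert one Li (delta_n = +1) that lowers the energy by 5.25 eV
# at mu = -5.2 eV and T = 300 K; this move is always accepted.
print(accept_semigrand(delta_e=-5.25, delta_n=1, mu=-5.2, temperature=300.0))
\end{lstlisting}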
\subsection{DFT calculations}
DFT calculations were performed with the \textit{Vienna ab initio simulation package} (VASP) using the projector-augmented wave method \cite{kresse1996VASP, kresse1999PAW}, a plane-wave basis set with an energy cutoff of 680 eV, and a reciprocal space discretization of 25 \textit{k}-points per \AA$^{-1}$. All the calculations were converged to $10^{-6}$ eV in total energy for electronic loops and 0.02 eV/Å in interatomic forces for ionic loops. We relied on the regularized strongly constrained and appropriately normed meta-GGA exchange-correlation functional (r$^2$SCAN) \cite{Sun2015SCAN,furness2020r2SCAN}, which has improved performance on volume, coordination, and formation-energy prediction in solid-state systems. r$^2$SCAN provides better computational efficiency than the earlier version of SCAN \cite{kingsbury2022r2SCAN_PRM}.
\subsection{Code availability}
The source code of CHGNet is available at \href{https://github.com/CederGroupHub/chgnet}{https://github.com/CederGroupHub/chgnet}.
\subsection{Data availability}
The dataset will be released after review.
|
{
"arxiv_id": "2302.14160",
"language": "en",
"timestamp": "2023-03-01T02:03:11",
"url": "https://arxiv.org/abs/2302.14160",
"yymm": "2302"
} | \section{Introduction}
A \emph{canon} is a musical piece in which two or more voices perform the same melodic line, but with transformations so that they do not sing in unison. In the simplest canons, the voices enter at equally spaced times and at the same pitch, forming a \emph{round} (such as the medieval ``Sumer is icumen in'' or the modern ``Fr\`ere Jacques''). Medieval and early Renaissance composers produced a menagerie of canons including such devices as contrary motion, omission of some indicated notes, augmentation, diminution, and simultaneous meters (the last found, for instance, in Ockeghem's \emph{Missa prolationum}). In some cases, one is left with the impression that devising the canonic scheme demanded more skill than writing the canon itself.
By the time of Giovanni Pierluigi da Palestrina (c.~1520--1594), composers were losing interest in these more recondite kinds of canons, and there appeared other specimens which, while superficially of simple types, demand great skill. Such are canons where \emph{three} or more voices sing from a single line. Palestrina's output includes a number of these, which are listed in Table \ref{tab:Pal_canons}.
We have found Palestrina a convenient composer to highlight here because his style has been intensively studied as representing the height of the Renaissance, and because he takes pains not to relax the rules of the style, which pertain especially to dissonance treatment, even in the face of additional demands imposed by a canonic structure \cite[p.~199] {jeppesen2012style}. For more examples, see Feininger \cite{Feininger}, a detailed history of the canon from its medieval genesis to the works of Josquin des Prez around 1500. In the course of the sixteenth century, the rules for treating dissonance grew increasingly strict, so that writing a canon became more challenging.
Very little is known about how these canons were written. Gauldin \cite{GauldinComposition} suggests that a skeleton in first species (whole notes only) could have been written first, based on the observation that some canons in the repertoire can be analyzed as a single underlying motive whose repetition is obscured by varied ornamentation of the melodic line. However, this presupposes that we have a canonic scheme that admits a canon on some motive, preferably one that admits some flexibility so that the entire piece is not a strict sequence. Thus the aims of this paper:
\begin{itemize}
\item Using a theoretical model based on first-species reduction, we give each canonic scheme $\S$ a \emph{flexibility} value (\emph{flex} for short) $\lambda(\S)$.
\item We exhaustively compute the flex values of canonic schemes whose time intervals are not too long. We verify that some schemes that are symmetric (i.e.~have equal time intervals and closely related pitch intervals) have a flex value much higher than average, explaining their ubiquity in the repertoire.
\item When using asymmetric schemes, we find that Palestrina picks the more flexible ones more consistently than his predecessors, helping him to compose canons without relaxing the dissonance treatment.
\item However, our method turns up new schemes that are just as flexible as those in the repertoire, and we present a new composition on such a scheme in accord with the rules of dissonance practiced in the late 16th century.
\end{itemize}
The remainder of the paper is organized as follows. In section \ref{sec:types}, we classify canonic schemes. In section \ref{sec:theor}, we define the flex of a scheme and show that it is invariant under a large family of transformations, allowing us to greatly shorten the list of schemes to be considered. In section \ref{sec:data}, we compare flex values for various schemes present and absent in the literature (the full data set is tabulated in Appendix \ref{app:tab}). In section \ref{sec:Pisot}, we explain the algorithm used to compute the flex value, demonstrating in the process that it is a Pisot number. Finally, in section \ref{sec:new}, we present a new composition in the Palestrina style utilizing an unexplored scheme that nonetheless has a sufficiently high flex value.
\section{Types of canons}
\label{sec:types}
\begin{table}
\begin{tabular}{llp{2.5in}}
\multicolumn{3}{l}{\textbf{I. Stacked Canons}} \\
Page & Work & Scheme \\ \hline
XI, 67 & Mass \emph{Ad fugam,} Benedictus & $\{(0,0), (4,-4), (8,-8)\text{B}\}$ \\
XI, 152 & Mass \emph{Papae Marcelli,} Agnus Dei II &
$\{(0,0),(6,4),(12,8)\}$ \\
XVII, 129ff. & Mass \emph{Sacerdotes Domini} & $\{(0,0), (3,1), (6,2)\}$ (Kyrie I, Credo, Sanctus) \\
&& $\{(0,0), (4,1), (8,2)\}$ (Christe, Kyrie II, Agnus Dei I--II, Gloria, Et unam sanctam) \\
&& $\{(0,0)\text{B}, (4,1), (8,2)\}$ (Pleni)\\
&& $\{(0,0), (6,-1), (12,-2)\}$ (Hosanna; time units are semibreves in $3/1$ time). \\
\\
\multicolumn{3}{l}{\textbf{II. Inverting Canons} (both at the 12th)} \\
Page & Work & Scheme \\ \hline
I, 158 & Motet ``Virgo prudentissima,'' secunda pars & $\{(0,0), (6,4), (12,-3)\}$ \\
XIX, 34 & Mass \emph{Gi\`a f\`u chi m'hebbe cara,} Benedictus & $\{(0,0),(4,-4)\text{B},(8,3)\}$ \\
\\
\multicolumn{3}{l}{\textbf{III. Asymmetric Canons}} \\
Page & Work & Scheme \\ \hline
I, 152 & Motet ``Virgo prudentissima,'' prima pars & $\{(0,0), (4,4), (10,-3)\}$ \\
III, 123 & Motet ``Accepit Jesus calicem'' & $\{(0,0), (1,3), (5,7)\}$ \\
XI, 53 & Mass \emph{Sine Nomine,} Agnus Dei II & $\{(0,0),(4,4),(14,6),(18,10)\}$ \\
XI, 65 & Mass \emph{Ad fugam,} Pleni &
$\{(0,0)\text{B},(1,3),(3,7)\}$ \\
XI, 69 & Mass \emph{Ad fugam,} Agnus Dei II &
$\{(0,0),(6,0),(7,3)\}$ (plus a $2$-voice canon) \\
XII, 47 & Mass \emph{Primi toni,} Agnus Dei II &
$\{(0,0),(7,-7),(8,-4)\}$ \\
XII, 132 & Mass \emph{Repleatur os meum,} Agnus Dei II &
$\{(0,0),(8,7),(12,3)\}$
\end{tabular}
\caption{Canons of 3-in-1 or 4-in-1 type in the works of Palestrina}
\label{tab:Pal_canons}
\end{table}
In Table \ref{tab:Pal_canons}, we have listed all canons in the works of Palestrina where at least three parts read from the same melodic line. Volume and page numbers in the table refer to the Breitkopf \& H\"artel edition of the complete works of Palestrina, available online on IMSLP \cite{Pal}. It will be noted that the great majority of the canons in the table occur in masses, specifically in the Pleni, Benedictus, and Agnus Dei II. In the Pleni and Benedictus, which are middle movements within the Sanctus, it was fairly common to reduce the texture to three voices, raising the possibility of a three-voice unaccompanied canon. The Agnus Dei II, by contrast, is the grand finale of the mass setting, frequently involving added voices and an intensification of the compositional procedures in use. The three Palestrina masses using canons as a structural motif throughout (\emph{Ad fugam,} \emph{Repleatur os meum,} and \emph{Sacerdotes Domini}) all include three-voice canons in the final Agnus Dei.
Each canon\footnote{Excluding the special types (augmentation, retrograde) which Palestrina occasionally uses.} has a \emph{scheme} which we notate as a set of ordered pairs $(t_i, p_i)$, meaning that a voice begins $t_i$ beats (semibreves) after the written timing and $p_i$ diatonic steps higher than the written pitch. A negative value of $p_i$ indicates that a particular voice sings \emph{below} the written pitch. The annotation ``$\text{B}$'' indicates that this voice is the bass of the texture. This is significant when it happens, because in Renaissance style, certain intervals (perfect and augmented fourths, diminished fifths) are treated as consonances between upper voices but dissonances when they involve the bass \cite[p.~175]{jeppesen1992counterpoint}.
As indicated in the table, the canons are of several types. The first and simplest is termed \emph{stacked canon} in \cite{GosmanStacked} and is used by many composers in the Renaissance and after. It occurs when each new voice has the same displacement in both time and pitch from the one before it; or, what is the same thing, when the pairs $(t_i, p_i)$, when plotted in the plane, are equally spaced points on a straight line. This simplifies the compositional process greatly, as the consonant intervals occurring between the first voice and the others will automatically repeat (except for possible chromatic alterations) to form consonances among the answering voices (see Figure \ref{fig:sacerdotes_pleni}).
\begin{figure}
\includegraphics[width=\linewidth]{figures/SacerdotesPleni.pdf}
\caption{A stacked canon from Palestrina's mass \emph{Sacerdotes Domini}}
\label{fig:sacerdotes_pleni}
\end{figure}
More artistically, a composer can employ equal time intervals but alter the pitch displacements so that the intervals between the 1st and 2nd voices are inverted when they recur between the 2nd and 3rd voices. We call this disposition an \emph{inverting canon.} As is well known, invertible counterpoint at the octave, 10th, or 12th is feasible; Palestrina and his contemporaries favored the 12th (see Figure \ref{fig:GiaFuBenedictus}).
\begin{figure}
\includegraphics[width=\linewidth]{figures/GiaFuBenedictus.pdf}
\caption{An inverting canon at the 12th from Palestrina's mass \emph{Gi\`a f\`u chi m'hebbe cara}}
\label{fig:GiaFuBenedictus}
\end{figure}
Are equal time intervals necessary for a feasible canon? Concerning this third, asymmetric possibility, Gauldin writes: ``Opening points of canonic imitation employing \emph{both} asymmetrical intervallic and temporal relations create extreme problems as regards the continuation of the strict imitation'' \cite[p.~115]{gauldin2013practical}. Nevertheless, numerous examples exist in the Renaissance repertoire, and Palestrina wrote no less than seven canons of this type. Most have three voices plus some accompanying voices, but the final Agnus Dei of the mass \emph{Sine Nomine} has a four-voice canon, each voice entering at a different pitch. How does he manage? The melody is constructed from a pentachord (D-E-F-G-A) and is imitated using the other three pentachords in the diatonic scale that have no melodic tritone (A-B-C-D-E, C-D-E-F-G, G-A-B-C-D). The relation between the last two voices is the same as that between the first two. In most of the other canons, the voices are separated by perfect intervals (fourths, fifths, and octaves).
Most of these canons are accompanied by free voices, which ease the compositional process by filling in the texture during the frequent rests in the canonic parts; also, the free bass\footnote{Accompanied canons where the bass is one of the canonic parts are rare; Palestrina writes them only in the case of $2$-voice canon, and we will not consider them here.} corrects any fourths between the canonic parts and includes strong leaps of fourths and fifths at cadences, which could be awkward to imitate through each voice of the texture. However, the Pleni from the mass \emph{Ad fugam,} perhaps Palestrina's most impressive canon, is for three parts unaccompanied (see Fig.~\ref{fig:ad_fugam_pleni}). Except possibly for the final cadence, which introduces the so-called ``consonant 4th'' idiom by an atypical octave leap (see Jeppesen \cite[p.~236]{jeppesen2012style}), its polyphonic style features the same meticulous dissonance treatment practiced in Palestrina's free imitative works. Its scheme $\{(0,0)\text{B},(1,3),(3,7)\}$ is the exact inversion of the scheme $\{(0,0),(1,-3),(3,-7)\text{B}\}$ used in one accepted realization of the famous English puzzle canon ``Non nobis Domine,'' once attributed to William Byrd.
\begin{figure}
\includegraphics[width=\linewidth]{figures/AdFugamPleni.pdf}
\caption{A canon from Palestrina's mass \emph{Ad fugam} with the asymmetric scheme $(0,0)\text{B}, (1,3), (3,7)$}
\label{fig:ad_fugam_pleni}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{figures/NonNobis_new.pdf}
\caption{An anonymous canon with the asymmetric scheme $(0,0),(1,-3),(3,-7)\text{B}$}
\label{fig:non_nobis}
\end{figure}
\section{A theoretical model}
\label{sec:theor}
To understand how Palestrina and his contemporaries chose the time and pitch displacements for their canons, we follow Gauldin \cite{GauldinComposition} and other authors in making a first-species reduction of the canon; that is, we assume a melody of whole notes only. We simplify further by ignoring all octave differences and chromatic alterations, so that our tones belong to the finite field $\mathbb{F}_7$, which consists of seven elements $\{0,1,2,3,4,5,6\} = \{B,C,D,E,F,G,A\}$. We then implement the dissonance rules of the style as follows:
\begin{defn}
A \emph{canonic scheme} $\S$ with $r$ \emph{voices} is a set of ordered pairs $\{(t_i, p_i)\}_{1 \leq i \leq r}$, each time displacement $t_i \in \mathbb{Z}$, each pitch displacement $p_i \in \mathbb{F}_7$, with no two $t_i$ equal, and with one of the pairs optionally bearing the annotation $\text{B}$ for ``bass.'' A \emph{melody} is a sequence $\{x_i\} = x_1, x_2, \ldots, x_n$ of \emph{notes} $x_i \in \mathbb{F}_7$. Given a melody, its \emph{realization} is the matrix of tones $[a_{ti}]$, where $a_{ti}$, the note sung by the $i$th voice at time $t$, is given by
\begin{equation}
a_{ti} = x_{t - t_i} + p_i.
\end{equation}
\end{defn}
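For concreteness, a small sketch of this realization with pitch arithmetic in $\mathbb{F}_7$ (the list-of-pairs encoding and the indexing convention are assumptions of the sketch, not part of the formal definition):
\begin{verbatim}
# Sketch: the realization a_{t,i} = x_{t - t_i} + p_i of a melody under a
# canonic scheme, with pitches reduced mod 7. Encoding conventions are ours.
def realization(melody, scheme):
    n = len(melody)
    voices = []
    for (t_i, p_i) in scheme:
        # voice enters t_i beats late: it sounds at times t_i + 1, ..., t_i + n
        voices.append({t_i + k: (melody[k - 1] + p_i) % 7
                       for k in range(1, n + 1)})
    return voices   # voices[i][t] = a_{t,i}, defined only while voice i sings
\end{verbatim}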
\begin{defn}
A melody is a \emph{valid canon} for a canonic scheme $\S$ if its realization obeys the following two rules for all voices $i$ and $j$, at all times $t \in \mathbb{Z}$ such that $a_{ti}, a_{tj}$ both exist:
\begin{enumerate}[$($a$)$]
\item\label{it:2nd} Harmonic seconds and sevenths are not allowed:
\[
a_{ti} - a_{tj} \notin \{1, 6\}.
\]
\item\label{it:4th} Fourths above the bass are not allowed:
\[
(t_j, p_j)\text{B} \implies a_{ti} - a_{tj} \neq 3.
\]
\end{enumerate}
Let $\mathcal{V}_n(\S)$ denote the set of valid $n$-note canons for the scheme $\S$, and let $V_n(\S) = \size{\mathcal{V}_n(\S)}$ be the number of valid $n$-note canons. Define the \emph{flex} (short for \emph{flexibility}) $\lambda(\S)$ to be a parameter measuring how fast $V_n(\S)$ grows as $n$ increases:
\[
\lambda(\S) = \lim_{n \to \infty} \sqrt[n]{V_n(\S)},
\]
assuming that the limit exists; we will later prove that it always does (see Theorem \ref{thm:Pisot}).
\end{defn}
Note that $0 \leq V_n(\S) \leq 7^n$ because there are seven choices for each note. Accordingly $0 \leq \lambda(\S) \leq 7$.
\begin{examp}
Let $\S = \{(0,0), (t, p)\text{B}\}$ be a two-voice canonic scheme, $t > 0$. We have
\[
V_n(\S) = \begin{cases}
7^n, & n < t \\
7^t 4^{n-t}, & n \geq t
\end{cases}
\]
because the first $t$ notes are completely free, and thereafter we may write the third, fifth, sixth, or octave above the already determined bass note. We have
\[
\lambda(\S) = \lim_{n\to \infty} \sqrt[n]{7^t 4^{n - t}} = 4.
\]
\end{examp}
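Rules (a)--(b) and the counts $V_n(\S)$ can be checked directly by brute force for small $n$; in the following sketch a scheme is encoded as a list of $(t_i, p_i, \text{is\_bass})$ triples, and the exhaustive enumeration over $7^n$ melodies is practical only for short lengths:
\begin{verbatim}
from itertools import product

# Sketch: direct transcription of rules (a)-(b) and brute-force count of V_n(S).
def is_valid(melody, scheme):
    n, r = len(melody), len(scheme)
    def note(t, i):                      # a_{t,i}, or None if voice i is silent
        t_i, p_i, _ = scheme[i]
        return (melody[t - t_i - 1] + p_i) % 7 if 1 <= t - t_i <= n else None
    t_lo = min(s[0] for s in scheme) + 1
    t_hi = max(s[0] for s in scheme) + n
    for t in range(t_lo, t_hi + 1):
        for i in range(r):
            for j in range(r):
                if i == j:
                    continue
                ai, aj = note(t, i), note(t, j)
                if ai is None or aj is None:
                    continue
                iv = (ai - aj) % 7
                if iv in (1, 6):              # rule (a): no seconds or sevenths
                    return False
                if scheme[j][2] and iv == 3:  # rule (b): no fourth above the bass
                    return False
    return True

def count_valid(scheme, n):
    return sum(is_valid(m, scheme) for m in product(range(7), repeat=n))

# For the two-voice example with t = 2, this reproduces V_4 = 7^2 * 4^2 = 784:
# count_valid([(0, 0, False), (2, 0, True)], 4) == 784
\end{verbatim}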
Note the vast simplification of the resources of Renaissance composition in this model. There is no provision for the use of rests; at any rate, a canonic scheme which forced the use of frequent rests could not be judged a good one. There is no imposition of the rule against parallel octaves and fifths, because these can often be evaded by ornamenting the melodic line. (However, the computer code used in computing flex values includes the option to impose this rule.) There is no provision for suspensions, which can and do complicate matters by allowing dissonance to occur on a strong beat. For example, Figure \ref{fig:quick}
\begin{figure}
\centering
\includegraphics[scale=1.0]{figures/quick.pdf}
\caption{A versatile motif for canons}
\label{fig:quick}
\end{figure}
is a useful motif in asymmetric canons because any of its three notes may dissonate according to the laws of the style: the first as a suspension, the second as an accented passing note, and the third as a passing note, neighbor note, or cambiata.
The payoff for such a stark simplification of the laws of the style is that the flex of a canonic scheme is invariant under a large set of transformations, vastly reducing the possibilities we need to consider:
\begin{prop}\label{prop:eqv}
Let $\S$ be a canonic scheme, and let $\S'$ be a new scheme derived by one of the following transformations:
\begin{enumerate}[$($a$)$]
\item\label{it:ttrans} Time translation $t_i \mapsto t_i + t$, for a constant $t \in \mathbb{Z}$
\item\label{it:ptrans} Pitch translation $p_i \mapsto p_i + p$, for a constant $p \in \mathbb{F}_7$
\item\label{it:shear} Shearing $p_i \mapsto p_i + ct_i$, for a constant $c \in \mathbb{F}_7$
\item\label{it:inv} Inversion: keeping the time intervals $t_i' = t_i$, set
\[
p_i' = \begin{cases*}
5 - p_i & if $(t_i, p_i)$ is a marked bass voice \\
-p_i & otherwise.
\end{cases*}
\]
\item\label{it:tdil} Time dilation (augmentation, diminution, or retrograde) $t_i \mapsto ct_i$, for a constant $c \in \mathbb{Q}^\times$ for which all $ct_i \in \mathbb{Z}$.
\end{enumerate}
Then $\S'$ and $\S$ have the same flex:
\[
\lambda(\S') = \lambda(\S).
\]
\end{prop}
\begin{proof}
Items \eqref{it:ttrans}--\eqref{it:inv} are proved by producing a bijection between the valid canons $\{x_i\}$ of the two schemes $\S$, $\S'$ in question, proving that $V_n(\S) = V_n(\S')$. For items \eqref{it:ttrans} and \eqref{it:ptrans}, the same melody $\{x_i\}$ serves for both. For item \eqref{it:shear}, we use a shearing transformation $x_i' = x_i + ci$ to transform a valid canon $\{x_i\}$ for $\S$ to a valid canon $\{x_i'\}$ for the new scheme $\S' = \{(t_i, p_i + ct_i)\}$. The vertical intervals $a_{ti} - a_{tj}$ are thereby preserved, although the new canon may have quite a different musical flavor from the original (see Fig.~\ref{fig:shear})!
\begin{figure}
\centering
\includegraphics[width=0.48\linewidth]{figures/shear.pdf}
\hfill
\includegraphics[width=0.48\linewidth]{figures/shear2.pdf}
\caption{Canons with the schemes $\{(0,0), (2,-2)\text{B}, (4,7)\}$ (left) and $\{(0,0), (2,-4)\text{B}, (4,3)\}$ (right), which both invert at the 12th. The two canons are related by a shear transformation that transposes the chord on the $i$th beat by $i$ steps downward, with some octave adjustments.}
\label{fig:shear}
\end{figure}
For item \eqref{it:inv}, we simply invert the melody, setting $x_i' = -x_i$. Note that intervals above the bass get inverted at the 10th and so remain consonant.
Item \eqref{it:tdil} is interesting because it cannot be proved by simply finding a bijection; the numbers of canons $V_n(\S)$, $V_n(\S')$ may differ in general. For instance, if $\S'$ is derived by dilating the times $t_i$ by a large integer $c$, then $V_n(\S') = 7^n$ for all $n \leq c$ because there is no opportunity for two notes to sound simultaneously under the scheme $\S'$.
Assume first that $c > 0$ is an integer. Note that a valid canon for $\S'$ is composed of $c$ interwoven valid canons for $\S$: a melody $x_1, x_2, \ldots, x_n$ is valid for $\S'$ if and only if the subsequences
\begin{equation} \label{eq:subseq}
x_a, x_{c + a}, x_{2c + a}, \ldots, \quad 1 \leq a \leq c
\end{equation}
are valid canons for $\S$. Now we use the division algorithm (dividing $n$ by $c$ with remainder) to write $n = qc + r$, where $0 \leq r < c$. We note that the sequence \eqref{eq:subseq} stops at $x_{qc + a}$ if $a \leq r$, at $x_{(q-1)c + a}$ otherwise. So
\begin{equation} \label{eq:tdil}
V_n(\S') = V_{qc + r}(\S') = V_q(\S)^{c - r} V_{q+1}(\S)^r.
\end{equation}
If $V_n(\S) = 0$ for all sufficiently large $n$, then the same is true for $\S'$, so both schemes have flex $0$. So we will assume that $\lambda(\S) \geq 1$. By definition, for every $\epsilon$ ($0 < \epsilon < 1$) there is an $N$ such that for all $n \geq N$,
\[
\big(\lambda(\S) - \epsilon\big)^n \leq V_n(\S) \leq \big(\lambda(\S) + \epsilon\big)^n.
\]
If $q > N$, then by \eqref{eq:tdil}, we get the upper bound
\begin{align*}
V_n(\S') &= V_q(\S)^{c - r} V_{q+1}(\S)^r \\
&\leq \big(\lambda(\S) + \epsilon\big)^{q(c - r)} \cdot \big(\lambda(\S) + \epsilon\big)^{(q+1)r} \\
&= \big(\lambda(\S) + \epsilon\big)^{qc + r} \\
&= \big(\lambda(\S) + \epsilon\big)^n
\end{align*}
and, analogously, the lower bound
\[
V_n(\S') \geq \big(\lambda(\S) - \epsilon\big)^n.
\]
Thus, for $n \geq cN$, we have the desired bounds
\[
\lambda(\S) - \epsilon \leq \sqrt[n]{V_n(\S')} \leq \lambda(\S) + \epsilon.
\]
Since $\epsilon$ was arbitrary, we have shown that $\lambda(\S') = \lambda(\S)$.
This concludes the proof in the case that $c$ is a positive integer. The other cases follow readily: if $c = a/b$ is a ratio of positive integers, we first multiply the time displacements $t_i$ by $a$ and then divide by $b$, using the previous case to show that both steps preserve the flex. Lastly, if $c = -1$, we put the melody in retrograde (that is, $x_i' = x_{n+1-i}$), taking care of all the negative values of $c$.
\end{proof}
Another consequence of item \eqref{it:tdil} is that the time unit used to measure the displacements $t_i$ is immaterial. In triple meter, for example, we can measure $t_i$ in beats or in bars without affecting the flex.
\section{Numerical data}
\label{sec:data}
We call two canonic schemes \emph{equivalent} if they can be transformed to one another by some combination of the transformations in Proposition \ref{prop:eqv}. In particular, we know that equivalent canonic schemes have the same flex.
In Figure \ref{fig:flex}, we graph the flex values of all three-voice canons $\{(t_1,p_1), (t_2,p_2), (t_3,p_3)\}$ with time displacements $t_i$ up to $8$, which conveniently is enough to include all three-voice canons in Table \ref{tab:Pal_canons} after suitably rescaling the time unit. Note that we can make great reductions using Proposition \ref{prop:eqv}. First, we can use time and pitch translation to assume that $t_1 = p_1 = 0$ (we always assume $t_1 < t_2 < t_3$). Then, we can use time dilation to assume that $t_2$ and $t_3$ are coprime positive integers. We can also use time dilation by $-1$ to assume that $t_3 \geq 2 t_2$ (that is, the last two voices are at least as far apart in time as the first two). Finally, we can use shearing to set $p_2 = 0$, unless $t_2$ is a multiple of $7$, in which case we can set $p_3 = 0$ (this last case does not occur in the range of values under consideration). Thus we have only three free variables $t_2, t_3, p_3$, together with the option to designate one of the three voices as bass. The flex values of these schemes are shown numerically in Table \ref{tab:flex} in Appendix \ref{app:tab}. More information about how these values were computed will be found in Section \ref{sec:Pisot}.
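The reduction just described is straightforward to enumerate; the following sketch lists one representative per reduced scheme for $t_3$ up to $8$, using the same $(t_i, p_i, \text{is\_bass})$ encoding as above (the encoding and the function name are illustrative):
\begin{verbatim}
from math import gcd

# Sketch: representatives of reduced three-voice schemes with t_3 <= 8
# (t_2, t_3 coprime, t_3 >= 2 t_2, p_1 = p_2 = 0, free p_3, optional bass).
def reduced_schemes(max_t3=8):
    schemes = []
    for t3 in range(2, max_t3 + 1):
        for t2 in range(1, t3):
            if gcd(t2, t3) != 1 or t3 < 2 * t2:
                continue
            for p3 in range(7):
                for bass in (None, 0, 1, 2):   # None: no canonic bass voice
                    schemes.append([(0, 0, bass == 0),
                                    (t2, 0, bass == 1),
                                    (t3, p3, bass == 2)])
    return schemes
\end{verbatim}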
In Table \ref{tab:rep_flex}, we list the flex values for the Palestrina canons in Table \ref{tab:Pal_canons}, plus those of some canons by other Renaissance composers for comparison.
\begin{figure}
\begin{tikzpicture}[>=latex, shorten >= 2pt]
\tikzset{
mynode/.style={rectangle,rounded corners,draw=black, top color=white, bottom color=yellow!50,very thick, inner sep=1em, minimum size=3em, text centered},
myarrow/.style={->, >=latex', shorten >=20pt},
mylabel/.style={text width=7em, text centered}
}
\begin{axis}[
width=\textwidth,
%
xlabel={Time displacements $(t_2,t_3)$},
ylabel={Flexibility $\lambda(\mathcal{S})$},
xtick=\empty,
extra x ticks={%
0, 2, 4, 6,7, 9, 11,12,13, 15,16
},
extra x tick style={
tick label style={
rotate=90,anchor=east}},
extra x tick labels={%
${(1,2)}$,
${(1,3)}$,
${(1,4)}$,
${(1,5)}$,${(2,5)}$,
${(1,6)}$,
${(1,7)}$,${(2,7)}$,${(3,7)}$,
${(1,8)}$,${(3,8)}$%
},
legend entries={%
accompanied,%
{accompanied, found in repertoire},
unaccompanied,%
{unaccompanied, found in repertoire},
}
]
%
\addplot[
scatter,
only marks,
point meta=explicit symbolic,
scatter/classes={
n={mark=o,blue},
nhl={mark=o,mark size=3,blue},
b={mark=triangle,red},
bhl={mark=triangle,mark size=4, red}
},
] table [meta=label,x=x,y=flex] {flex.dat};
%
\node[] (stacked1) at (axis cs: 0, 3.935) {};
\node[] (stacked2) at (axis cs: 0, 3.140) {};
\node[right=of stacked1, node distance=1in] (stacked-label) {stacked canons};
\draw[->,blue] (stacked-label.west) -- (stacked1);
\draw[->,red] (stacked-label.west) -- (stacked2);
\node[] (Forestier) at (axis cs: 0, 3.562) {};
\node[above right=0.1in and 0.5in of Forestier] (Forestier-label) {inverting canons at the 10th};
\draw[->,blue] (Forestier-label.west) -- (Forestier);
\node[] (impos) at (axis cs: 0, 1) {};
\node[right=of impos, node distance=1in] (impos-label) {inflexible canon (see Figure \ref{fig:impos})};
\draw[->] (impos-label) -- (impos);
\node[] (AFP) at (axis cs: 2, 2.992) {};
\node[above right=0.7in and 0.2in of AFP] (AFP-label) {\emph{Ad fugam,} Pleni};
\draw[->,red] (AFP-label.south west) -- (AFP.east);
\node[] (Non) at (axis cs: 2, 2.781) {};
\node[below right=0.7in and 0.2in of Non] (Non-label) {Non nobis Domine};
\draw[->,red] (Non-label.north west) -- (Non.east);
\node[] (New) at (axis cs: 13, 2.814) {};
\node[below right=0.7in and 0.2in of New,anchor=north] (New-label) {New canon (see Section \ref{sec:new})};
\draw[->,red] (New-label.north) -- (New.east);
\node[] (inv) at (axis cs: 0, 3) {};
\node[below right=1.5in and 0.3in of inv,anchor=north west] (inv-label) {inverting canons at the 12th (acc. and unacc.)};
\draw[->,black,double] (inv-label.north west) -- (inv);
\node[] (Herc) at (axis cs: 0, 2) {};
\node[below right=0.8in and 0.3in of Herc,anchor=north west] (Herc-label) {Josquin, \emph{Hercules,} Agnus Dei II};
\draw[->,red] (Herc-label.north west) -- (Herc);
\end{axis}
\end{tikzpicture}
\caption{Flex values for various three-voice canonic schemes.}
\label{fig:flex}
\end{figure}
\begin{table}
\begin{tabular}{rl|l}
Composer & Work & Flex \\ \hline
many & all 3-part accompanied stacked canons & 3.935 \\
Forestier & \emph{L'homme arm\'e,} Agnus I & 3.562 \\
Forestier & \emph{L'homme arm\'e,} Agnus II & 3.542 \\
Palestrina & ``Virgo prudentissima'' (I pars) & 3.363 \\
Palestrina & \emph{Primi toni,} Agnus II & 3.344 \\
Palestrina & \emph{Ad fugam,} Agnus II & 3.292 \\
Palestrina & \emph{Sine nomine,} Agnus II & 3.177 \\
many & all 3-part unaccompanied stacked canons & 3.140 \\
Palestrina & ``Accepit Jesus calicem'' & 3.118 \\
Palestrina & inverting canons at the 12th & 3 \\
Palestrina & \emph{Ad fugam}, Pleni & 2.992 \\
O. & New canon (see Section \ref{sec:new}) & 2.814 \\
anon. & ``Non nobis Domine'' & 2.781 \\
Palestrina & \emph{Repleatur,} Agnus II & 2.754 \\
La Rue & \emph{O Salutaris Hostia,} Kyrie II & 2.683 \\
La Rue & \emph{O Salutaris Hostia,} Christe & 2.678 \\
La Rue & \emph{O Salutaris Hostia,} Kyrie I & 2.420 \\
Forestier & \emph{L'homme arm\'e,} Agnus III & 2.420 \\
Josquin & \emph{Hercules dux Ferrariae,} Agnus II & 2
\end{tabular}
\caption{The flex values in canons by Palestrina and other Renaissance composers} \label{tab:rep_flex}
\end{table}
Several trends can be seen. The most extreme flex values occur near the left side of Figure \ref{fig:flex}, where $t_3$ is small, that is, the time displacements are in a simple ratio. In the top left data points (entries \ref{ti:stacked}, \ref{ti:inv}, and \ref{ti:stacked-u} of Table \ref{tab:flex}) we recover that stacked canons, as well as inverting canons at the octave, 10th, and 12th with the bass entering second, are very flexible, having flex values of $3$ or greater. By contrast, the canonic scheme $(0,0)\text{B}, (1,4), (2,7)$, appearing in the bottom left corner of Figure \ref{fig:flex}, has a flex value of exactly $1$ (the minimum possible nonzero flex value). For any length $n$, it admits only \emph{two} valid canons up to transposition, both rather degenerate, as shown in Figure \ref{fig:impos}. Unsurprisingly, there is \emph{no} canon in the Renaissance repertoire utilizing this scheme or an equivalent one.
\begin{figure}
\centering
\hfill
\includegraphics[width=0.4\linewidth]{figures/impos.pdf}
\hfill
\includegraphics[width=0.4\linewidth]{figures/impos2.pdf}
\hfill{}
\caption{The only two consonant canons with the scheme $\S = (0,0)\text{B}, (1,4), (2,7)$, which has flex $1$.}
\label{fig:impos}
\end{figure}
The scheme $\{(0,0), (4,4), (8,-3)\}$, an inverting canon at the 12th with the bass entering \emph{last,} has a flex value of only $2$. This is the scheme of the Agnus Dei II of Josquin's mass \emph{Hercules dux Ferrariae} \cite{Josquin}, which includes quite a few concessions in the dissonance treatment, such as upward-resolving suspensions, which look backward to Ockeghem rather than forward to Palestrina. This is the lowest flex value we have found in the repertoire analyzed. Pierre de la Rue's mass \emph{O Salutaris Hostia} \cite{LaRue} (not to be confused with the motet by the same name) is rather more meticulous in the dissonance treatment (although there are still some unsuspended dissonances on the accented beats). Space does not permit a complete analysis of this mass, which consists entirely of 4-in-1 and a few 2-in-1 canons, but let it be noted that the schemes used for the Kyrie have flex values in the $2.4$--$2.7$ range. The most ambitious endeavor in this direction, however, is almost certainly Mathurin Forestier's \emph{Missa L'homme arm\'e} \cite{Forestier}, sometimes attributed to Mouton (discussed at length in Burn \cite{BurnFurther}). It makes use of numerous canons with systematically varying pitch intervals and ends with a striking canon for seven voices with the scheme
\[
\{ (0,0)\text{B}, (1,3), (2,6), (3,0), (4,3), (5,6), (6,0) \}
\]
(time units are full measures of $3/1$). The scheme has a flex value of $2.420$, remarkably high for so many voices. Forestier makes it work with the help of frequent rests (even the final cadence does not include all the voices, breaking with a convention in force since medieval polyphony) and some laxity in the treatment of dissonance and parallels.
In short, the composers of Josquin's generation make use of the whole gamut of flexibility options from $2$ up to $3.935$, the maximum computed (or $5$ if we include two-voice canons); Palestrina excludes the lower portion of this, his least flexible scheme being found in the $6$-part Agnus Dei of the mass \emph{Repleatur os meum,} where the canonic parts have lengthy rests.
One tendency \emph{not} captured by the model, but familiar to theorists at the time and after, is the preference for canons by perfect intervals; note that in Table \ref{tab:Pal_canons}, most pieces use pitch displacements by $p \equiv 0, 3, 4$ mod $7$, that is, imitation at the fourth, fifth, and octave, and that $p \equiv 3$ and $p \equiv 4$ do not normally occur in the same piece. In our model, owing to the shearing symmetry, any interval is as good as any other. The preference for fourth and fifth imitations cannot, therefore, be motivated by flexibility of the canon as we have defined it. It must be traced rather by a desire to imitate melodic motives more exactly, in that a melody based on a Guidonian hexachord (such as C-D-E-F-G-A) can be imitated at the fifth or fourth on another hexachord (such as G-A-B-C-D-E) without altering the placement of whole and half steps, and avoiding the melodic tritone to boot.
Another habit of Palestrina's, evident in Table \ref{tab:Pal_canons}, is to bring in two voices in tight stretto, typically one semibreve apart at the interval of a fourth, while bringing in the third voice at a longer delay either before or after. This is equivalent to setting $t_2 = 1$ (or $t_2 = t_3 - 1$, which is equivalent under retrograde). In view of Figure \ref{fig:flex}, this cannot be said to increase the flex value significantly. However, it may help in producing canons with more stepwise movement, inasmuch as steps in the melody produce rich thirds and fifths against the tightly imitating line.
As the time displacements $t_i$ increase while remaining relatively prime, the flex values appear to even out and tend toward a limit, which is apparently around $3.3$ when there is no canonic bass voice and around $2.8$ when there is. Palestrina mainly draws from the former region, but the Pleni from his mass \emph{Ad fugam,} whose opening was shown earlier in Figure \ref{fig:ad_fugam_pleni}, successfully uses the second most flexible scheme (flex value $2.992$) among all unaccompanied asymmetric three-voice canons in the table! It is absurd to suggest that he chose this scheme by an exhaustive mathematical analysis of the possibilities as we have done here. Most likely, his compositional skill and vast experience with imitative polyphony suggested a theme and a workable canonic scheme for it. By contrast, the anonymous riddle canon ``Non nobis Domine'' uses a scheme with the mediocre flex value of $2.781$, so it is perhaps inevitable that this piece is short and essentially constructed from a single motive (as Gauldin points out \cite{GauldinComposition}).
The only more flexible scheme for an unaccompanied, asymmetric three-voice canon is given by $\{(0,0)$, $(1,0)\text{B}$, $(3,4)\}$ (or one of its numerous equivalent forms). It can be obtained from a 4-voice inverting canon with the scheme $\{(0,0), (1,0)\text{B}, (2,4), (3,4)\}$ (a scheme frequently used by Josquin) by omitting the third voice. Is there a canon somewhere in the Renaissance repertoire using this scheme?
\section{Computing the flex value}
\label{sec:Pisot}
Although it has little musical significance, one cannot help noticing that some of the flex values in Table \ref{tab:flex} have number-theoretic importance: these include the exact integers $1$, $2$, and $3$, the golden ratio $\frac{1 + \sqrt{5}}{2} = 1.618\ldots,$ and $1 + \sqrt{3} = 2.732\ldots.$ These are all examples of Pisot numbers. We recall that a \emph{Pisot number} is an algebraic integer that is greater than the absolute values of all its other Galois conjugates.
In the following theorem, we show that the limit defining the flex exists and provide an effective method to compute it. Moreover, the method sheds light on these number-theoretic properties.
\begin{thm} \label{thm:Pisot}
For any canonic scheme $\S$ with at least two voices, the flex $\lambda(\S)$ is an algebraic integer of degree at most $ V_t(\S) \leq 7^t$, where
\[
t = \max_i t_i - \min_i t_i
\]
is the total time displacement of the canon. Moreover, $\lambda(\S)$ is either a Pisot number or the $h$th root of a Pisot number for some integer $h \geq 1$.
\end{thm}
\begin{proof}
The technique of the proof uses the Perron--Frobenius theorem. A suitable form of the theorem is as follows:
\begin{defn}[see \cite{MeyerMatrix}, \textsection8.3]
A square matrix $A$ is called \emph{reducible} if, after permuting the rows and columns the same way, it becomes a block-triangular matrix
\begin{equation} \label{eq:XYZ}
\begin{bmatrix}
X & Y \\
0 & Z
\end{bmatrix}
\end{equation}
with $X$ and $Z$ square, and \emph{irreducible} otherwise. For an $n\times n$ matrix $A$, define its \emph{graph} $\mathcal{G}_A$ to be the (directed, possibly non-simple) graph on $n$ nodes with an edge from node $i$ to node $j$ for each nonzero entry $a_{ij}$. Observe that $A$ is irreducible exactly when $\mathcal{G}_A$ is \emph{strongly connected} (that is, for any nodes $i$ and $j$, there exists a directed path from $i$ to $j$).
\end{defn}
\begin{prop}[Perron--Frobenius theorem; see \cite{MeyerMatrix}, \textsection8.3]
Let $A$ be an irreducible square matrix with nonnegative entries. Then:
\begin{enumerate}[$($a$)$]
\item The largest positive eigenvalue $\lambda$ of $A$ is also the spectral radius of $A$, that is, has the largest absolute value of any eigenvalue of $A$.
\item $\lambda$ has algebraic multiplicity $1$.
\item The corresponding eigenvector $\mathbf{v}$ is nonnegative, and is the unique nonnegative eigenvector (up to scaling) of $A$.
\item Let $h$ be the \emph{imprimitivity index} of the graph $\mathcal{G}_A$, that is, the GCD of its cycle lengths. Then the eigenvalues of $A$ having absolute value $\lambda$ are precisely
\[
\lambda e^{2\pi k \sqrt{-1}/h}, \quad k = 0, 1, \ldots, h - 1.
\]
These eigenvalues are equally spaced around the circle of radius $\lambda$, and the whole spectrum of $A$ is rotationally symmetric under multiplication by any $h$-th root of unity.
\end{enumerate}
\end{prop}
Given a canonic scheme $\S$, we construct a graph $\mathcal{G}_\S$ and a matrix $A_\S$ as follows. Let $\mathcal{G}_\S$ have as nodes the $N = V_t(\S)$ valid canons for $\S$ of length $t$. For each of the $E = V_{t+1}(\S)$ valid canons $(x_1, x_2, \ldots, x_{t+1})$, draw a directed edge from
\[
(x_1, x_2, \ldots, x_t) \quad \text{to} \quad (x_2, x_3, \ldots, x_{t+1}).
\]
Thus $\mathcal{G}_\S$ is a directed graph on $N$ nodes, possibly with loops. Let $A_\S$ be the adjacency matrix of $\mathcal{G}_\S$; that is, an $N \times N$ matrix whose rows and columns number the nodes and whose entries $a_{ij}$ count the number of edges pointing from node $i$ to node $j$. It is evident that $A_\S$ has nonnegative integer entries and encodes the same information as the graph $\mathcal{G}_\S$.
Note also that $V_{t + k}(\S)$ is the number of walks traversing $k$ edges in $\mathcal{G}_\S$: we can encode a canon $(x_1, \ldots, x_{t+k})$ by following the edges corresponding to its $k$ substrings of length $t + 1$. Here it is essential that the validity of a canon can be checked by looking only at substrings of length at most $t + 1$, which is true by our choice of $t$. But, as is easy to see, counting walks amounts to taking powers of the adjacency matrix, so
\[
V_{t + k}(\S) = \mathbf{1}_N^{\top} A_\S^k \mathbf{1}_N,
\]
where $\mathbf{1}_N$ is the all-ones vector of length $N$. Thus we see the relevance of the eigenvalues of $A_\S$, which is a nonnegative integer matrix.
However, $A_\S$ need not be irreducible. So we break up $\mathcal{G}_\S$ into its \emph{strongly connected components} $\mathcal{G}_i$ with sizes $N_i$. The adjacency matrix $A_i$ of $\mathcal{G}_i$ satisfies the hypotheses of the Perron--Frobenius theorem and so has a largest eigenvalue $\lambda_i$. We have the following relation:
\begin{lem}
The flex $\lambda(\S)$ is the largest of the $\lambda_i$, which is the largest eigenvalue of the matrix $A_\S$.
\end{lem}
\begin{proof}
Using the fact that the characteristic polynomial of a block-triangular matrix
\[
\begin{bmatrix}
X & Y \\
0 & Z
\end{bmatrix}
\]
is the product of the characteristic polynomials of $X$ and $Z$, we see that the eigenvalues of $A_\S$ comprise the union (as a multiset) of the eigenvalues of the $A_i$. Thus, on the one hand, $V_{n}(\S)$ can grow at most as fast as a polynomial times a power of $\lambda = \max_i \lambda_i$ (this can be seen by converting $A_\S$ to Jordan normal form). On the other hand, choose $i$ such that $\lambda_i = \lambda$, and let $\mathbf{v}_i$ be the nonnegative eigenvector of $A_i$, normalised to unit Euclidean norm so that its entries are at most $1$. Then
\begin{align*}
V_{t + k}(\S) &= \mathbf{1}_N^{\top} A_\S^k \mathbf{1}_N \\
& \geq \mathbf{1}_{N_i}^\top A_i^k \mathbf{1}_{N_i} \\
& \geq \mathbf{v}_i^\top A_i^k \mathbf{v}_i \\
& = \mathbf{v}_i^\top \lambda^k \mathbf{v}_i \\
&= \lambda^k.
\end{align*}
So the flex of $\S$ is exactly $\lambda$.
\end{proof}
The remaining assertions in the theorem follow easily. We have that $\lambda$ is an algebraic integer of degree at most $N \leq 7^t$. If $h = 1$, then $\lambda$ is a Pisot number; in general, $\lambda$ is the $h$-th root of a Pisot number.
\end{proof}
This interpretation of the flex value also yields an efficient way to compute it. We first enumerate all valid canons of length at most $t + 1$ to find the graph $\mathcal{G}_\S$, which we then decompose into strongly connected components $\mathcal{G}_i$. For each $i$, we apply $A_i$ repeatedly to an arbitrary nonnegative vector (we choose $\mathbf{1}_{N_i}$) until the result stabilizes to the desired precision. This has the advantage that $\lambda(\S)$ can be computed quickly and to the desired precision once $A_\S$ is found; the dominant cost is the enumeration of $\mathcal{G}_\S$ itself, which grows exponentially with $t$.\footnote{Using transposition invariance, it is possible to reduce the number of nodes by a factor of $7$. We omit the details.} We have been able to compute examples for $t$ up to $8$, which conveniently is enough to include all the $3$-voice schemes in Table \ref{tab:Pal_canons}.
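A compact sketch of this procedure is given below; the variable names, the sparse-matrix representation, and the plain power iteration are implementation choices of the sketch rather than a description of the actual code, and the enumeration cost grows as $7^{t+1}$:
\begin{verbatim}
import numpy as np
from itertools import product
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Sketch: flex of a scheme (list of (t_i, p_i, is_bass) triples) via the graph
# of valid length-t canons, its strongly connected components, and power
# iteration for the dominant eigenvalue of each component.
def is_valid(m, scheme):
    n, r = len(m), len(scheme)
    note = lambda t, i: ((m[t - scheme[i][0] - 1] + scheme[i][1]) % 7
                         if 1 <= t - scheme[i][0] <= n else None)
    for t in range(min(s[0] for s in scheme) + 1,
                   max(s[0] for s in scheme) + n + 1):
        for i in range(r):
            for j in range(r):
                if i == j:
                    continue
                ai, aj = note(t, i), note(t, j)
                if ai is None or aj is None:
                    continue
                iv = (ai - aj) % 7
                if iv in (1, 6) or (scheme[j][2] and iv == 3):
                    return False
    return True

def flex(scheme, n_power=1000):
    t = max(s[0] for s in scheme) - min(s[0] for s in scheme)
    nodes = [m for m in product(range(7), repeat=t) if is_valid(m, scheme)]
    idx = {m: i for i, m in enumerate(nodes)}
    rows, cols = [], []
    for m in product(range(7), repeat=t + 1):   # edges: valid (t+1)-canons
        if is_valid(m, scheme):
            rows.append(idx[m[:-1]])
            cols.append(idx[m[1:]])
    A = csr_matrix((np.ones(len(rows)), (rows, cols)),
                   shape=(len(nodes), len(nodes)))
    n_comp, labels = connected_components(A, connection='strong')
    lam = 0.0
    for c in range(n_comp):
        sel = np.flatnonzero(labels == c)
        Ac = A[sel, :][:, sel]
        v = np.ones(len(sel))
        for _ in range(n_power):                # power iteration on A_i
            w = Ac @ v
            if not w.any():
                break
            v = w / np.linalg.norm(w)
        lam = max(lam, float(v @ (Ac @ v)))     # Rayleigh-quotient estimate
    return lam
\end{verbatim}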
\begin{qn}
Is the flex always a Pisot number, that is, can we always take $h = 1$ in Theorem \ref{thm:Pisot}?
\end{qn}
To be sure, there are schemes for which the imprimitivity index $h$ of one of the graphs $\mathcal{G}_i$, which is the GCD of the lengths of all directed cycles, is not $1$. For instance, a strongly connected component can be an $h$-cycle, which has imprimitivity index $h$. But this has dominant eigenvalue $1$, which is still a Pisot number.
\subsection{A Virah\=anka--Fibonacci canonic scheme}
The scheme $\S = \{(0,0),(1,6)\text{B},(3,0)\}$, marked \ref{ti:fib} in Table \ref{tab:flex}, has a flex value of $\frac{1 + \sqrt{5}}{2} = 1.618\ldots,$ the golden ratio. This value is too low to be of much musical use, but it is worth noting that, taking our vertices to be $2$-note fragments (strictly speaking, the proof of Theorem \ref{thm:Pisot} would require us to use fragments of $t = 3$ notes, but in this case it doesn't affect the validity of any canon) gives the pretty graph shown in Figure \ref{fig:fib_graph}. It has two connected components, which are interchanged by the inversion symmetry of Proposition \ref{prop:eqv}\eqref{it:inv}. Within each component, a path is formed by going around the $1$-cycle and the $2$-cycle in free alternation; thus we can write $V_n(\S)$ (more accurately $\frac{1}{7}V_{n}(\S)$) as the number of $n$-beat rhythms composed of $1$-beat and $2$-beat notes. This was precisely the combinatorial problem that led Virah\=anka to discover the Fibonacci numbers around the 8th century!
\begin{figure}
\centering
\begin{tikzpicture}[
scale = 1,
every node/.style = {draw, circle, minimum size = 10mm}
]
\node[] (A) {$\uparrow3$};
\node[right=15mm of A] (B) {$\uparrow1$};
\node[right=20mm of B] (C) {$\downarrow3$};
\node[right=15mm of C] (D) {$\downarrow1$};
%
\draw[->] (A) to[bend left] (B);
\draw[->] (B) to[bend left] (A);
\draw[->] (C) to[bend left] (D);
\draw[->] (D) to[bend left] (C);
\draw[->] (A) to[out=-150,in=150,loop] ();
\draw[->] (C) to[out=-150,in=150,loop] ();
\end{tikzpicture}
\caption{The graph of the canon $\{(0,0),(1,6)\text{B},(3,0)\}$, which shows Fibonacci behavior. (Intervals are zero-based, so that, for instance, $\uparrow3$ means that the melody rises by a fourth.)}
\label{fig:fib_graph}
\end{figure}
\section{A canon with a hitherto unexplored scheme} \label{sec:new}
To explore the viability of composition using the vast variety of canonic schemes summarized in Table \ref{tab:flex}, we present as an example a new composition on the scheme
\[
\S_{\mathrm{SC}} = \{(0,0)\text{B}, (4,4), (7,7)\},
\]
which is equivalent to $\{(0,0)\text{B}, (4,0), (7,0)\}$ (marked \emph{x} in the table). It has a flex value of 2.814, in the mid-to-low range of Palestrina's output. It was not picked at random but rather is the imitative scheme for the first three voices of Palestrina's motet ``Sicut cervus,'' one of the most beloved Renaissance pieces today. ``Sicut cervus'' is not a canon, but it makes use of imitation of a few themes (see Figures \ref{fig:SC1}--\ref{fig:SC2}) which we incorporate into the new composition. The new composition could thus serve as a movement for a parody mass (a frequent genre in the 16th century) based on this motet.
\begin{figure}
\centering
\hfill
\includegraphics[width=\linewidth]{figures/Sicut_Cervus_1.pdf}
\caption{The opening of Palestrina's ``Sicut Cervus.''}
\label{fig:SC1}
\end{figure}
\begin{figure}
\centering
\hfill
\includegraphics[width=\linewidth]{figures/Sicut_Cervus_2.pdf}
\caption{Another imitative passage in Palestrina's ``Sicut Cervus.''}
\label{fig:SC2}
\end{figure}
As an aid in construction, we asked the computer to generate random canons on this scheme beginning with various note sequences. Figures \ref{fig:SCgen1} and \ref{fig:SCgen2} show two of the many results that come from running this program starting from the seed values $(1,1,1,2,1,1,3,4)$ (taken from the first eight downbeats of the tenor voice of Figure \ref{fig:SC1}) and adjusting octaves as needed for a smooth melodic line.
Chromatic alterations in the answering parts are shown above the staff, following the custom of the time that only the leading voice be notated, the answering voices making alterations as needed in accord with their position in the diatonic scale.
The new composition shows the feasibility of the novel scheme by adhering strictly to the laws of dissonance of the Palestrina style and also, as far as possible, pursuing its aesthetics of voice independence and a flowing melodic line, which are much harder to formulate as rules.
\begin{figure}
\centering
\hfill
\includegraphics[width=\linewidth]{figures/SC_gen.pdf}
\caption{A randomly generated canon on the scheme $\S_{\mathrm{SC}}$ (see text).}
\label{fig:SCgen1}
\end{figure}
\begin{figure}
\centering
\hfill
\includegraphics[width=\linewidth]{figures/SC_gen_2.pdf}
\caption{A randomly generated canon on the scheme $\S_{\mathrm{SC}}$ (see text).}
\label{fig:SCgen2}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{figures/Benedictus_3v_canon-1.pdf}
\label{sc-begin}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{figures/Benedictus_3v_canon-2.pdf}
\label{sc-end}
\end{figure}
|
{
"arxiv_id": "2302.14149",
"language": "en",
"timestamp": "2023-03-01T02:02:39",
"url": "https://arxiv.org/abs/2302.14149",
"yymm": "2302"
} | \section{Introduction}
\label{sec:intro}
The superior detection capabilities of modern and upcoming radio arrays -- namely, the Square Kilometre Array (SKA) -- necessitate new and improved radio-interferometric (RI) imaging algorithms that are precise, robust, and scalable to larger quantities of data, wider fields-of-view, and broader frequency bandwidths. The ``Scalable precision wide-field imaging in radio interferometry'' series aims to showcase a novel imaging framework in action, as applied to real radio observations from the Australian Square Kilometre Array Pathfinder (ASKAP). The proposed imaging framework builds from compressed sensing techniques to reconstruct the true signal from incomplete, noisy data and operates at the interface of optimization theory and deep learning. Implemented in MATLAB, the framework is automated, highly parallelised, and capable of producing wide-field, high-dynamic range, super-resolved monochromatic intensity images. Two interchangeable image regularisation denoisers can be ``plugged'' in as the `backward' step of the underlying iterative forward-backward (FB) deconvolution structure \citep{Terris22,dabbech2022first} of the imaging framework, alternating with a gradient descent `forward' step promoting data fidelity.
The first algorithm, unconstrained Sparsity Averaging Reweighted Analysis (uSARA), is purely optimisation-based. It leverages the proximal operator of a state-of-the-art handcrafted sparsity-promoting regularisation function \citep{Carrillo12,Terris22} as regularisation denoiser. In Part I: ``uSARA validated on ASKAP data'', uSARA was validated against the widely-used CLEAN-based imager for RI {\tt WSClean} \citep{2014MNRAS.444..606O, 2017MNRAS.471..301O}. Our experiments with uSARA showed that we were able to take wide-field, imperfectly calibrated data and create images with exceptional resolution and enhanced sensitivity. The findings of Part I establish uSARA as an advanced RI imaging algorithm capable of surpassing the state-of-the-art in precision and robustness when applied to large-scale, real, and imperfect data. A remaining caveat with uSARA lies in its computational cost due to the iterative nature of the proximal operator underlying its image model.
In this sequel, we showcase AIRI \citep{Terris22} -- the second algorithm encapsulated in our automated, parallelised imaging framework -- which combines optimisation theory with the power of AI. Building on the recent success of Plug-and-Play (PnP) approaches in various applications such as image restoration \citep{zhang2020plug} and magnetic resonance imaging \citep{ahmad2020plug}, AIRI relies on the same FB iterative scheme as uSARA, with the proximal operator enforcing the image prior model replaced by a learned DNN denoiser. The learned denoiser, deployed on graphics processing units (GPUs), enables a significant acceleration of the backward step of the iterative FB algorithm when compared to the computationally heavy iterative proximal operator powering uSARA. The speed combined with the learning power of the DNNs -- trained herein as denoisers for high dynamic-range images -- makes for a scalable and robust tool in image reconstruction. Although the training of the DNNs requires significant computational time and resources, AIRI denoisers are pre-trained independently of the data under scrutiny. They can generalise to any RI data with a simple scaling procedure.
This paper specifically addresses testing AIRI on the same real and imperfectly calibrated data from ASKAP, used in Part I. To summarise from Part I, we selected three fields-of-view (FoVs) from ASKAP Early Science and Pilot Survey observations hosting radio sources of primary interest that exhibit emission with both complex diffuse and compact filamentary morphology. Our targets of interest include the merging galaxy cluster system Abell 3391-95 (hosting a candidate radio phoenix; \citealp{2021A&A...647A...3B}), the merging galaxy cluster SPT-CL J2023-5535 (hosting a radio halo and radio relic; \citealp{2020ApJ...900..127H}), the X-shaped radio galaxy PKS 2014-558 \citep[e.g.][]{2020MNRAS.495.1271C}, and the ``dancing ghosts,'' known collectively as PKS 2130–538 \citep[e.g.][]{2021PASA...38...46N}. We refer the reader to Table 1 of Part I for full details on the ASKAP observations selected for imaging. Further details of the observations selected, calibration and processing of the data, and the steps involved to prepare for imaging are elaborated upon in Section 3 of Part I.
The remainder of this article is structured as follows. In Section~\ref{sec:methods}, we recall the parallelised, automated imaging framework from Part I and expand upon the application of the AIRI algorithm as well as our approach to selecting DNN denoisers for imaging. In Section~\ref{sec:data}, we provide details of the ASKAP data used in this work and the imaging settings applied for the AIRI algorithm. Reconstruction results of our selected fields are presented and compared to the results of Part I in Section~\ref{sec:results}. The computational performance of AIRI is studied in Section~\ref{sec:time}. Finally, conclusions are made in Section~\ref{sec:con}.
\section{Methods} \label{sec:methods}
In this section, we briefly recall the RI data model in the context of wide-field imaging and provide a summary of AIRI, building from the underlying theory \citep{Terris22} and its first application to real RI data \citep{dabbech2022first}. We also outline the encompassing framework for wide-field imaging, focusing on the parallelisation of the AI denoiser and the automated selection of associated parameters. A description of the framework's underpinning wide-field parallel measurement operator can be found in Section 2 of Part I.
\subsection{RI data model}
We recall the discrete RI data model in the context of wide-field imaging, detailed in Part I, whereby the measured visibilities $\ensuremath{\boldsymbol{y}} \in \mathbb{C}^M$ are modelled from a discrete representation of the sought radio image $\ensuremath{\boldsymbol{x}} \in \mathbb{R}_+^N$ as follows
\begin{equation}
\label{eq:datamodel}
\ensuremath{\boldsymbol{y}} = \bm\Phi \ensuremath{\boldsymbol{x}} + \ensuremath{\boldsymbol{n}},
\end{equation}
where $\ensuremath{\boldsymbol{n}} \in \mathbb{C}^M$ is a realisation of random Gaussian noise with mean zero and standard deviation $\tau>0$, and $\bm\Phi \in \mathbb{C}^{M \times N}$ is the measurement operator encompassing the Fourier sampling and the so-called $w$-effect, a chirp-like phase modulation emanating from the non-coplanarity of the array \citep{Cornwell2008}. The measurement operator can also accommodate data-weighting schemes aiming at enhancing the effective resolution of the observation by compensating for the highly non-uniform Fourier sampling \citep[\emph{e.g.} Briggs weighting; ][]{briggs95}. Our imaging framework is shipped with a parallel and memory-efficient measurement operator, ensuring scalability to large data sizes \citep[see][ for a comprehensive summary]{dabbech2022first}.
\subsection{AIRI algorithm}
The recent PnP scheme established that proximal optimisation algorithms, such as FB, enable not only the use of proximal operators of handcrafted regularisation operators, but also the injection of learned DNN denoisers, which define regularisation implicitly \citep{venkatakrishnan2013,romano2017}. In order to preserve the convergence of the algorithm, and the interpretability of its solution, the PnP denoiser must typically satisfy a “firm non-expansiveness” constraint, ensuring that it contracts distances \citep{pesquet2020learning,hurault2022proximal}. Learning denoisers from rich databases (as opposed to handcrafting proximal operators) opens the door to more powerful regularisation. The speed of DNNs on GPU also offers a significant acceleration over iterative proximal operators. In this context, the AIRI imaging algorithm \citep{Terris22} is underpinned by the same FB structure as the uSARA imaging algorithm (see Section 2 in Part I) with the image update at each iteration alternating between a `forward' step enforcing data fidelity with respect to \eqref{eq:datamodel}, and a `backward' denoising step for image regularisation:
\begin{equation} \label{eq:dnn}
(\forall k\in \mathbb{N}) \qquad \ensuremath{\boldsymbol{x}}^{(k+1)} = {{\rm D}}\left( \ensuremath{\boldsymbol{x}}^{(k)} -\gamma \nabla f (\ensuremath{\boldsymbol{x}}^{(k)}) \right).
\end{equation}
The operator ${{\rm D}}$ denotes a learned DNN denoiser. Considering the standard data-fidelity function $f(\ensuremath{\boldsymbol{x}}; \ensuremath{\boldsymbol{y}}) = 1/2 \|\ensuremath{\boldsymbol{y}} -\bm\Phi \ensuremath{\boldsymbol{x}}\|^2_2$, where $\|.\|_2$ denotes the $\ell_2$-norm of its vector argument, the operator $\nabla f $ stands for its gradient and reads $\nabla f (\ensuremath{\boldsymbol{x}}) = \text{Re}\{\bm{\Phi}^\dagger\bm{\Phi}\} \ensuremath{\boldsymbol{x}} -\text{Re}\{\bm{\Phi}^\dagger\ensuremath{\boldsymbol{y}}\}$. The parameter $\gamma>0$ is a sufficiently small step size.
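For illustration, the iteration \eqref{eq:dnn} can be sketched as follows, with the measurement operator, its adjoint, and the pre-trained denoiser abstracted as callables (all names are assumptions of this sketch, not the MATLAB implementation of the framework):
\begin{verbatim}
import numpy as np

# Sketch of the AIRI forward-backward iteration: a gradient step on the
# data-fidelity term f, followed by the learned denoising step. `phi`, `phit`
# and `denoiser` are illustrative callables; `gamma` must be a sufficiently
# small step size (set in practice from the spectral norm of Re{Phi' Phi}).
def airi(y, phi, phit, denoiser, gamma, n_iter=500):
    x = np.zeros_like(phit(y).real)
    for _ in range(n_iter):
        grad = phit(phi(x) - y).real       # gradient of 0.5 * ||y - Phi x||^2
        x = denoiser(x - gamma * grad)     # backward (regularisation) step
    return x
\end{verbatim}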
\subsection{DNN training and noise level}
\label{ssec:dnnselection}
Following \citet{Terris22}, the AIRI denoisers used in this work were trained in a supervised approach to remove random Gaussian noise with zero-mean and standard deviation $\widehat{\sigma}>0$ from noisy input images. The denoisers rely on a simple denoising convolutional neural network (DnCNN) architecture and are trained using a rich high-dynamic range database synthesised from optical astronomy images, with groundtruth images normalised to have a peak value equal to 1. Importantly, the training loss function is regularised with an appropriate non-expansiveness term on the denoiser ${{\rm D}}$.
Given the normalisation of the groundtruth images from the training database, the DNN's noise level can be interpreted as the inverse of a target dynamic range, which can intuitively be adjusted to match the signal-to-noise ratio in the observed data. Considering $L$ as the spectral norm of $\text{Re}\{\bm{\Phi}^\dagger \bm{\Phi}\}$, the standard deviation of the measurement noise in the image domain can be estimated as $\sigma=\eta\tau/\sqrt{2L}$ \citep{thouvenin22,wilber221} where $\eta>0$ is derived from the data-weighting operator when considered in imaging and is set to 1 otherwise. Hence, the target dynamic range of the sought image is given by $\operatorname{max}_j\{{x}_j\}/\sigma$.
However, the peak value of the true image of the sky is not accessible in practice. We therefore resort to the dirty image defined as $\overline{\ensuremath{\boldsymbol{x}}}^{\textrm{dirty}}=\beta \text{Re}\{\bm{\Phi}^\dagger \ensuremath{\boldsymbol{y}} \}\in\mathbb{R}^N$, where $\beta>0$ is a normalisation factor\footnote{The factor $\beta$ corresponds to the peak value of the non-normalised point spread function given by $\text{Re}\{\bm{\Phi}^\dagger \bm{\Phi}\}\bm{\delta}$, where $\bm{\delta} \in \mathbb{R}^N$ is the image with one at its centre and zero otherwise.}, and approximate the peak value of the sought image by the peak value of the dirty image $\kappa=\operatorname{max}_j\{{\overline x}^{\textrm{dirty}}_j\} >0$. The value of $\kappa$ constitutes an upper bound on the estimated peak value\footnote{In our experiments, we observed that $\kappa$ is within one order of magnitude from the peak value of the reconstructed image.}. Consequently, $\kappa/\sigma$ provides an estimate (in fact, an upper bound) on the target dynamic range.
In this context, the RI inverse problem \eqref{eq:datamodel} is normalised by $\kappa$ to ensure that the sought image $\ensuremath{\boldsymbol{x}}/\kappa$ satisfies the same normalisation constraints of the training images, such that pixel values are below 1. The DNN denoiser is trained for the removal of a zero-mean random Gaussian noise with standard deviation
\begin{equation}
\label{eq:heuristic}
\widehat{\sigma}=\sigma/{\kappa} \textrm{, where~} \sigma=\eta\tau/\sqrt{2L}.
\end{equation}
The AIRI
reconstruction is later re-scaled back via multiplication by $\kappa$.
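As a simple illustration, the heuristic above amounts to a few lines of Python (a sketch only; \texttt{dirty} denotes the normalised dirty image defined above, and \texttt{tau}, \texttt{L} and \texttt{eta} are assumed to be available from the data and the measurement operator):
\begin{verbatim}
import numpy as np

def training_noise_level(dirty, tau, L, eta=1.0):
    sigma = eta * tau / np.sqrt(2.0 * L)  # image-domain noise std
    kappa = dirty.max()                   # peak of the dirty image
    return sigma / kappa                  # heuristic denoiser noise level
\end{verbatim}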
The apparent dependency of the training noise level on the statistics of the noise corrupting the RI data and the associated measurement operator raises a generalisability concern for the AIRI denoisers. However, this can be circumvented via further scaling of the inverse problem to bring the target dynamic range of the reconstruction to the inverse of the noise level of an already available DNN denoiser.
\subsection{Denoiser selection} \label{ssec:denoise-strat}
The selection of the appropriate denoiser can be conducted via two approaches. The first approach relies on a pre-trained \emph{shelf of denoisers}, from which the appropriate denoiser is selected depending on the target dynamic range of the reconstruction. A set of denoisers is thus trained, with noise levels sampled within a wide range of values, covering the dynamic ranges of interest in modern RI imaging. For each RI dataset, AIRI's denoiser is selected from the shelf as the DNN with the nearest noise level ${\sigma}_s$ below the inverse of the target dynamic range $\widehat{\sigma}$.
This implies considering a slightly looser image peak upper bound $\kappa\widehat{\sigma}/{\sigma}_{s}$ for the re-scaling of the inverse problem \eqref{eq:datamodel}, thus leading to a heuristic value ${\sigma}_{s}$ for the training noise level in \eqref{eq:heuristic}.
The second approach leverages a pre-trained \emph{single (universal) denoiser} to be applied for the image formation of any RI dataset. The denoiser is trained with a very low noise level ${\sigma}_u$, tailored for the highest target dynamic ranges of interest for modern RI imaging. For any RI dataset of interest, this amounts to considering a possibly much looser image peak upper bound $\kappa\widehat{\sigma}/{\sigma}_{u}$ for the re-scaling of the inverse problem \eqref{eq:datamodel}, systematically leading to a heuristic value ${\sigma}_{u}$ for the training noise level in \eqref{eq:heuristic}. This second approach was already shown to be efficient in the formation of high-quality radio maps when applied to observations from the MeerKAT telescope \citep{dabbech2022first}.
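The shelf-selection rule can be sketched as follows (the shelf values correspond to those used in Section~\ref{sec:data}; the fallback to the lowest available noise level when $\widehat{\sigma}$ lies below the whole shelf is an assumption made for illustration):
\begin{verbatim}
def select_from_shelf(sigma_hat, shelf=(2e-5, 4e-5, 8e-5)):
    below = [s for s in shelf if s <= sigma_hat]
    return max(below) if below else min(shelf)
\end{verbatim}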
\subsection{Denoiser faceting}
Owing to their convolutional nature and narrow receptive fields, AIRI's denoisers can be applied to facets of the image without causing any faceting-related artefacts, provided that appropriate facet overlaps are considered. This feature enables the scalability of AIRI denoisers to large image dimensions through their parallel application to image facets, as well as circumventing the memory limitation of the GPUs during inference.
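A deliberately simplified illustration of faceted denoising is given below for a $2\times2$ split: each facet is extended by an overlap region before denoising, and only its interior is written back. The overlap width and the crop-and-stitch strategy are assumptions for illustration, not the exact AIRI implementation.
\begin{verbatim}
import numpy as np

def denoise_faceted(x, denoise, overlap=32):
    h, w = x.shape
    out = np.empty_like(x)
    for i0, i1 in ((0, h // 2), (h // 2, h)):
        for j0, j1 in ((0, w // 2), (w // 2, w)):
            # extend the facet by the overlap, clipped at the image borders
            a0, a1 = max(i0 - overlap, 0), min(i1 + overlap, h)
            b0, b1 = max(j0 - overlap, 0), min(j1 + overlap, w)
            facet = denoise(x[a0:a1, b0:b1])
            # keep only the interior of the denoised facet
            out[i0:i1, j0:j1] = facet[i0 - a0:i1 - a0, j0 - b0:j1 - b0]
    return out
\end{verbatim}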
\section{Data, imaging, and analysis} \label{sec:data}
In validating the AIRI algorithm, we consider the same RI datasets imaged in Part I. These data consist of three individual beam observations from ASKAP Early Science and Evolutionary Map of the Universe Pilot survey \citep[EMU-PS;][]{2021PASA...38...46N} scheduling blocks (SBs): SB8275-15, SB9351-12, and SB9442-35. The measurement sets containing calibrated visibilities of these selected observations were produced by ASKAPsoft \citep{2021PASA...38....9H} and obtained from the CSIRO ASKAP Science Data Archive (CASDA; \citealp{2017ASPC..512...73C}). For our imaging purposes, each of these observations, with bandwidths of 288~MHz (ranging between [800,1158]~MHz), was split into eight spectral windows (SPWs). For the first two fields (SB8275-15 and SB9351-12), the resulting sub-band data were imaged separately to form monochromatic images with dimensions of $5500 \times 5500$ pixels and a cell-size of 2.2~arcsec. For the third field (SB9442-35), sub-band images and a single full-band image with dimensions of $4096 \times 4096$ pixels and a cell-size of 2.2~arcsec were formed. The full-band data of SB9442-35 were reconstructed into a monochromatic full-band image with the aim of increasing both the dimensionality and the sensitivity of the data. Full details of the data and the imaging settings can be found in Tables~1 \& 2 of Part I.
Specific to AIRI, in each imaging experiment we enabled the image-faceting functionality of the denoiser by splitting the image into four facets of equal dimensions. Faceting was found to be necessary for satisfying memory requirements. In the selection of the appropriate denoiser, we first determined the values of the inverse of the target dynamic range, $\widehat{\sigma}$, of all formed sub-band and full-band images, following \eqref{eq:heuristic}. We primarily opted for the pre-trained shelf strategy for all AIRI reconstructions, where the considered noise levels of the pre-trained denoisers are $\sigma_s = [2,4,8]\times 10^{-5}$.
We also investigated the pre-trained universal denoiser strategy for the field SB9351-12. In this case, the universal denoiser is chosen from the pre-trained shelf as the denoiser with the lowest noise level (equivalently, the highest dynamic range), that is, $\sigma_u = 2\times10^{-5}$.
We note that training under the firm non-expansiveness constraint is highly challenging. While \citet{Terris22} demonstrated in simulation that the AIRI training approach leads to a robust way to ensure convergence of the PnP algorithms, we acknowledge that, when used for real data and at large image sizes and dynamic ranges such as those of interest here, some denoisers lead to algorithm instability, ultimately requiring further training. This phenomenon was also witnessed by \citet{dabbech2022first}.
AIRI experiments were run on Cirrus\footnote{\url{http://www.cirrus.ac.uk}}, a UK Tier2 high-performance computing (HPC) service, and utilised its GPU compute nodes. A Cirrus GPU node comprises 4 GPUs and 40 CPU cores with 384~GB of shared memory.
Each AIRI imaging experiment was launched on one to five GPU nodes, depending on the memory requirements, whereby dynamically allocated CPU cores were utilised for the forward step (more precisely, the application of the measurement operator) and GPUs were exploited for the parallel application of the denoising DNN to facets of the image.
For a quantitative assessment of AIRI's performance, we focus on the same primary sources of interest as presented in Part I and analyse their associated flux measurements and spectral index maps. Results are compared to those obtained with uSARA and {\tt WSClean} for all spectral windows of each imaged field. The AIRI flux measurements were computed from hand-drawn regions (generated using the visualisation software SAOImageDS9; \citealp{ds903}) slightly different from those considered in the uSARA and {\tt WSClean} images, to better match the morphology of recovered emission. Spectral index maps inferred from the AIRI-ASKAP sub-band images were also obtained following the same procedure described in Part I. Likewise, only the first six sub-band images were used to generate the spectral index maps due to the limited diffuse signal recovered in the final two sub-bands. Similarly to uSARA, blurring with a circular Gaussian beam of 5 arcsec was applied to AIRI-ASKAP sub-band images to smooth the source structure before flux measurements were fitted to the spectral curve: $S_{\nu} \propto \nu^{-\alpha}$, where $S_{\nu}$ is the flux density for a given beam area at a given frequency $\nu$ and $\alpha > 0$ is the spectral index. In presenting our AIRI images, we also make use of optical images from the first data release of the Dark Energy Survey (DES; \citealp{2018ApJS..239...18A}).
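For completeness, the per-pixel spectral-index fit amounts to a linear fit of $\log S$ against $\log \nu$; a minimal sketch is given below, with \texttt{cube} (the stack of smoothed sub-band images, of shape \texttt{[n\_band, ny, nx]}) and \texttt{nus} (the sub-band central frequencies) as assumed inputs, and with the masking of low-signal pixels omitted for brevity:
\begin{verbatim}
import numpy as np

def spectral_index_map(cube, nus):
    logs = np.log(np.clip(cube, 1e-12, None))  # guard against non-positive pixels
    slope = np.polyfit(np.log(nus), logs.reshape(len(nus), -1), 1)[0]
    return (-slope).reshape(cube.shape[1:])    # alpha, following S ~ nu^{-alpha}
\end{verbatim}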
We performed an additional assessment towards the quantification of epistemic model uncertainty by comparing variations of AIRI (\emph{i.e.} using different DNN denoisers) when imaging each spectral window of the field SB9351-12. More details on this analysis can be found in Section~\ref{ssec:results-strat}.
\begin{figure}
\centering
\includegraphics[width=.49\textwidth]{Images/DNN-Shelf.png}
\caption{The positioning of the inverse of the target dynamic range, $\widehat{\sigma}$, associated with the different SPWs of the selected fields (dashed lines), with respect to the noise levels, $\sigma_s$, of the pre-trained shelf of DNN denoisers (black solid lines). From left to right: plots for the fields SB8275-15, SB9351-12, and SB9442-35, respectively. Specific to SB9442-35, we also show the inverse of the target dynamic range associated with the full-band imaging experiment.
\label{DNNshelf}}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Images/A3391_main_AIRI.pdf}
\caption{
SB8275-15 -- AIRI: Full FoV image covering the merging cluster system Abell 3391-95, at the first sub-band (SPW:1, centred at 887 MHz). For visual comparison with {\tt WSClean} and uSARA, refer to their respective Figures 1 and 2 from Part I. This monochromatic image is an AIRI model with a pixel resolution of $2.2 \times 2.2$ arcsec. Panel (a) centred on the FR I radio galaxy in A3391; panel (b) centred on a cluster-member FR II radio galaxy; (c) panels centred on the FR I and the diffuse source in A3395. Middle (c) panel: r-band optical image from DES overlaid with the AIRI model image, demarcated by blue contours at levels $\{2^n\}_{0 \leq n \leq 10} ~\mu$Jy pixel$^{-1}$. Rightmost (c) panel: spectral index map obtained with the first six sub-band images of AIRI after smoothing with a common circular Gaussian beam of 5 arcsec. All sub-band images are combined into the GIF \texttt{`SB8275-15\_AIRI'} provided in \citet{askapdataset}. \label{A3391airi}}
\end{figure*}
\begin{table*}
\begin{tabular}{| *{9}{c|} }
\hline
\hline
\textbf{A3395 Phoenix} & $S_{887}$ & $S_{923}$ & $S_{959}$ & $S_{995}$ & $S_{1031}$ & $S_{1067}$ & $S_{1103}$ & $S_{1139}$ \\
\hline
AIRI model & 25.3 & 27.0 & 17.9 & 8.5 & 11.3 & 12.1 & 9.7 & 1.6 \\
\hline
\end{tabular}
\caption{Integrated flux density values in [mJy] of the diffuse phoenix source in Abell 3395 for each SPW imaged with AIRI. Central frequency of each SPW is listed in MHz. See Table 3 in Part I for uSARA and {\tt WSClean} flux measurements of the diffuse phoenix source. \label{tab:fluxA3391}}
\end{table*}
\section{Results} \label{sec:results}
In this section, we showcase high-resolution high-fidelity images of our three selected fields produced by the AIRI algorithm encapsulated in our parallelised and automated imaging framework and investigate two approaches for the selection of the DNN denoiser, as described in Section~\ref{ssec:dnnselection}.
Figure~\ref{DNNshelf} illustrates the positioning of the inverse of the target dynamic range $\widehat{\sigma}$ of each imaged spectral window for each field with respect to the noise levels of the learned DNNs.
Select AIRI-ASKAP images are displayed in Figures~\ref{A3391airi}--\ref{9442airi}, in a format identical to the uSARA-ASKAP and {\tt WSClean} images presented in Part I. The AIRI-ASKAP figures consist of full FoV images with zoomed-in views focused on complex radio emission of interest, and their associated optical images and spectral index maps. In \citet{askapdataset}, we provide all sub-band AIRI-ASKAP images of the three selected fields (as FITS files) and combine them into GIF files to better show how emission and source morphology change over the full frequency band. Throughout this section, we refer the reader to Part I for specific comparisons of our imaging results with the uSARA and {\tt WSClean} reconstructions.
Upon visual inspection, our monochromatic AIRI-ASKAP images of all three fields capture more extended structure than seen in the pure-optimisation counterpart uSARA-ASKAP images. In particular, the faintest structures of our diffuse targets of interest appear more pronounced and defined than they did in the uSARA-ASKAP images. However, the intensity of the faintest point sources, as reconstructed by AIRI, seems to be diminished. In the following subsections, we focus on each ASKAP field and address these differences by examining specific sources of interest. We present detailed comparisons of source morphology, flux density measurements, and spectral index maps between the three imaging algorithms. In an experiment toward uncertainty quantification of AIRI denoisers, we also showcase AIRI reconstructions made via the universal denoiser approach.
\subsection{First field: SB8275-15}
This field contains the massive, merging galaxy clusters Abell 3391 (in the north) and Abell 3395 (in the south). The cluster pair is connected by a warm gas bridge, recently discovered in eROSITA X-ray observations \citep{2021A&A...647A...2R}. In Figure~\ref{A3391airi}, we present our AIRI image of this full imaged FoV (3.36$^{\circ}$) of the first spectral window (SPW:1). The figure includes zoomed-in views of the FR-I in Abell 3391 (a: top right panel), a FR-II cluster member in the east (b: middle right panel), and multiple sources in Abell 3395 (c: bottom panels). The bent-tail FR I radio galaxies at the centres of Abell 3391 and Abell 3395 (see Table 1 in Part I for source names) are reconstructed with similar brightness and resolution when compared to our uSARA image from Part I (the peak pixel flux of the FRI in Abell 3391 is 20~mJy in both the AIRI and uSARA images). The highly resolved detail of the `braiding' in these FRI jets, not resolved in {\tt WSClean} images, is present in both the AIRI and uSARA images. However, the radio galaxies as captured by AIRI exhibit more blended edges than seen in the uSARA image. In addition, the ring-like artefacts emanating from these bright FRI sources take on a much more extended structure in our AIRI image -- their appearance is dimmed and they propagate further out from their point of origin. Mainly, there is a noticeable difference in the diffuse structure recovered in the candidate phoenix of Abell 3395 and the background FR-II radio galaxy (c and b panels, respectively, in Figure~\ref{A3391airi}).
\subsubsection{Abell 3395}
In the middle panel (c) of Figure~\ref{A3391airi}, contours mapping the AIRI-recovered emission above $1~\mu$Jy pixel$^{-1}$ are overlaid on an r-band optical image from DES. In comparison with uSARA, the emission of the candidate phoenix recovered by AIRI is more extended. Although appearing slightly fainter and smoother, more extended structure is revealed in the AIRI reconstruction of the Abell 3395 phoenix, as the north-west arm now bridges the dim core and the compact sources at the north-west edge of the cluster. The structure of the recovered emission also changes from one spectral window to the next, with noticeable fading as the frequency increases \citep[see associated GIF in ][]{askapdataset}. Flux density measurements of the phoenix from the sub-band AIRI-ASKAP images are provided in Table~\ref{tab:fluxA3391}. In comparison with uSARA and {\tt WSClean}, we see some variation across the spectral windows, with less flux recovered in AIRI images at the higher frequencies.
The most exciting result is perhaps the improvement of the spectral index map of the phoenix obtained with AIRI when compared to uSARA (see rightmost panel (c) of Figure~\ref{A3391airi}). Since more diffuse structure is recovered by AIRI overall, even at the higher frequencies, the spectral index map has more coverage over the full source morphology. This coverage aids in source classification, enabling the identification of a trend in the spectral index as the emission shifts from the dim core to the north-west and south-west arms. The spectra are clearly steeper than in the uSARA map shown in Part I. The dim core recovered by AIRI has a steeper index ($2.1 \leq \alpha \leq 2.8$), and the north-west arm shows a sharp, rather than gradual, drop-off from the core. The ring of flatter emission around the core is still present, matching the results from uSARA and {\tt WSClean}. Our AIRI results are in line with the hypothesis that this source is no longer receiving a fresh injection from an active nucleus and that the surrounding emission may be undergoing some gentle re-energisation, which in turn is causing brightening and flattening of old and faded AGN emission. Interestingly, both the FR-I to the east and the compact source to the north-west exhibit consistent spectral behaviour between uSARA and AIRI.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Images/SPT2023_main_AIRI.pdf}
\caption{SB9351-12 -- AIRI: Full FoV image covering the merging cluster SPT2023 and the X-shaped radio galaxy PKS 2014-55, at the first sub-band (SPW:1, centred at 817 MHz). For visual comparison with {\tt WSClean} and uSARA, refer to their respective Figures 3 and 4 from Part I. This monochromatic image is an AIRI model with a pixel resolution of $2.2 \times 2.2$ arcsec. Panel (a) centred on the merging galaxy cluster SPT2023; panel (b) centred on a field containing compact and point sources; (c) panels centred on the X-shaped radio galaxy PKS 2014-55. Middle (c) panel: r-band optical image from DES overlaid with the AIRI model image, demarcated by blue contours at the levels $\{2^n\}_{0 \leq n \leq 10} ~\mu$Jy pixel$^{-1}$. Rightmost (c) panel: spectral index map obtained with the first six sub-band images of AIRI after smoothing with a common circular Gaussian beam of 5 arcsec. All sub-band images are combined into the GIF \texttt{`SB9351-12\_AIRI'} provided in \citet{askapdataset}. \label{SPT2023airi}}
\end{figure*}
\begin{table*}
\begin{tabular}{| *{9}{c|} }
\hline
\hline
\textbf{X-Shaped RG} & $S_{817~{\rm MHz}}$ & $S_{853~{\rm MHz}}$ & $S_{889~{\rm MHz}}$ & $S_{925~{\rm MHz}}$ & $S_{961~{\rm MHz}}$ & $S_{997~{\rm MHz}}$ & $S_{1033~{\rm MHz}}$ & $S_{1069~{\rm MHz}}$ \\
\hline
\hline
AIRI model & 678.7 & 580.0 & 485.0 & 427.7 & 436.2 & 352.7 & 302.3 & 190.5 \\
\hline
\end{tabular}
\caption{Integrated flux density values in [mJy] of the X-shaped radio galaxy PKS 2014-55 for each SPW imaged with AIRI. The listed flux densities are totals from summing the flux densities measured in regions mapping the east wing, the west wing, and the core. See Table 4 in Part I for uSARA and {\tt WSClean} flux measurements of the X-shaped radio galaxy.\label{tab:flux-xshape}}
\end{table*}
\begin{table*}
\begin{tabular}{| *{9}{c|} }
\hline
\hline
\textbf{SPT2023 Relic} & $S_{817~{\rm MHz}}$ & $S_{853~{\rm MHz}}$ & $S_{889~{\rm MHz}}$ & $S_{925~{\rm MHz}}$ & $S_{961~{\rm MHz}}$ & $S_{997~{\rm MHz}}$ & $S_{1033~{\rm MHz}}$ & $S_{1069~{\rm MHz}}$ \\
\hline
AIRI model & 4.3 & 3.2 & 1.8 & 2.0 & 2.8 & 2.3 & 1.7 & 0.5 \\
\hline
\end{tabular}
\caption{Integrated flux density values in [mJy] of the radio relic in SPT2023 for each SPW imaged with AIRI. See Table 5 in Part I for uSARA and {\tt WSClean} flux measurements. \label{tab:flux-SPT2023}}
\end{table*}
\subsection{Second field: SB9351-12}
\label{ssec:SB9351}
The second selected field covers the merging galaxy cluster SPT-CL J2023-5535 (hereafter SPT2023) and the X-shaped radio galaxy PKS 2014-55. As stated in Part I, two recent studies have been separately published for these sources of interest: \citet{2020ApJ...900..127H} confirmed the detection of a radio halo and radio relic in SPT2023 with the same data used in this work, and \citet{2020MNRAS.495.1271C} used MeerKAT observations to generate total intensity, polarisation, B-field, and spectral index maps of the X-shaped radio galaxy. In Figure~\ref{SPT2023airi}, we present our AIRI image of the full FoV (3.36$^{\circ}$) of the first spectral window (SPW:1) of SB9351-12. The figure includes zoomed-in views on the merging cluster SPT2023 (a: upper right panel), a field of compact and point-like sources (b: middle right panel), and the X-shaped radio galaxy PKS 2014-55 (c: bottom panels).
In comparison with the uSARA image (Figure 3 of Part I), there is an undeniable improvement in the recovery of faint emission within the zoomed-in views seen in the AIRI image. The diffuse emission stretching east-to-west in the {\tt WSClean} image of SPT2023, which was not recovered by uSARA, clearly emerges in the AIRI image. Similarly, faint point sources seen in the {\tt WSClean} image but not in the uSARA image are captured by AIRI, though appearing somewhat fainter and smoother (see panel (b) of Figure~\ref{SPT2023airi}). Finally, the calibration artefacts are still noticeable at the southern edge of the pointing, taking the form of ring-type artefacts emanating from the bright quasar RX J2024.3-5723 and propagating radially up to 1 deg. Compared to uSARA, these artefacts are generally fainter but extend further.
\subsubsection{X-shaped Radio Galaxy}
In the middle panel (c) of Figure~\ref{SPT2023airi}, we overlay AIRI-ASKAP emission of the X-shaped radio galaxy as contours on an r-band optical map from DES. Compared to uSARA, the X-shape radio galaxy as reconstructed by AIRI appears to have a greater extent of diffuse emission, though with a noticeable loss in the resolution of the compact structure within the lobes. This behaviour is consistent across all the sub-band images of AIRI, where smoother and more diffuse edges are observed. Table~\ref{tab:flux-xshape} reports the measured flux densities per spectral window for the X-shaped radio galaxy. AIRI flux measurements fluctuate around the uSARA measurements for each spectral window, with the same general trend of the flux dropping off at higher frequencies.
A spectral index map of the X-shaped radio galaxy is shown in the rightmost (c) panel of Figure~\ref{SPT2023airi}. There is a substantial improvement in the coverage of the AIRI spectral index map compared to uSARA, thanks to the diffuse flux consistently recovered at the edges of the lobes, even at the higher frequencies, with AIRI. The -- most likely -- artificial steepening at the edges of the lobes seen in the uSARA spectral index map is now corrected in the AIRI map. Nonetheless, AIRI recovers slightly steeper spectra on the borders of the lobes when compared to the {\tt WSClean} spectral index map. Owing to the superb resolution achieved by AIRI, turbulent activity can be traced where plasma in the lobes exhibits a flatter spectral index. The south-east leg of the east wing shows a spectral index of $1.4 \leq \alpha \leq 2.1$, which is much flatter than the emission in the west wing (with an average spectral index of about $\alpha = 3$). The furthest north-west portion of the west wing exhibits an ultra-steep spectrum ($3.5 \leq \alpha \leq 5$). Wide-band deconvolution is necessary to confirm these ultra-steep values.
\subsubsection{SPT-CL J2023-5535}
In Part I, our monochromatic {\tt WSClean} image showed the radio halo in SPT2023 as an increase in noise at the cluster centre, but our uSARA image did not show any diffuse structure resembling a radio halo. In Figure~\ref{SPT2023airi}, panel (a) focuses on the diffuse emission present in SPT2023. Here, our AIRI image does in fact recover the diffuse structure of the radio halo -- elongating from the western relic towards the east. Across the sub-bands, the radio halo is most clearly detected in AIRI images SPW:1 and SPW:2. It is quite remarkable that the radio halo is detected in these narrow, sub-band AIRI images since the full 288~MHz bandwidth (in the form of a wideband {\tt WSClean} image) was used to detect and measure the radio halo's signal in \citet{2020ApJ...900..127H}. We also find that the SPT2023 radio relic has a smoother, wider, and fainter morphology when compared to uSARA. AIRI flux measurements of the relic for SPW:1 \& 8 are slightly higher than uSARA flux measurements, as reported in Table~\ref{tab:flux-SPT2023}.
\subsection{Third field: SB9442-35}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Images/9442_main_AIRI.pdf}
\caption{SB9442-35 -- AIRI: Full FoV image covering PKS 2130-538, formed using the full-band data (centred at 943 MHz). For visual comparison with {\tt WSClean} and uSARA, refer to their respective Figures 5 and 6 from Part I. This monochromatic image is an AIRI model with a pixel resolution of $2.2 \times2.2$ arcsec. Panel (a) centred on a field containing extended and point-like radio galaxies; panel (c) centred on the star-forming galaxy NGC 7090; (b) panels centred on ``the dancing ghosts'' (PKS 2130-538). Middle (b) panel: image made with only the first sub-band of data (SPW:1, centred at 817 MHz), shown for a comparison of sensitivity. Rightmost (b) panel: spectral index map made with the first six sub-band images of AIRI after smoothing with a common circular Gaussian beam of 5 arcsec. All sub-band images are combined into the GIF \texttt{`SB9442-35\_AIRI'} provided in \citet{askapdataset}. \label{9442airi}}
\end{figure*}
\begin{table*}
\begin{tabular}{| *{10}{c|} }
\hline
\hline
\textbf{Dancing Ghosts} & $S_{\rm fullband - 943~{\rm MHz}}$ & $S_{817~{\rm MHz}}$ & $S_{853~{\rm MHz}}$ & $S_{889~{\rm MHz}}$ & $S_{925~{\rm MHz}}$ & $S_{961~{\rm MHz}}$ & $S_{997~{\rm MHz}}$ & $S_{1033~{\rm MHz}}$ & $S_{1069~{\rm MHz}}$ \\
\hline
AIRI model & 116.7 & 128.3 & 123.8 & 118.1 & 113.2 & 109.2 & 105.7 & 102.6 & 97.7 \\
\hline
\end{tabular}
\caption{Integrated flux density values in [mJy] of ``the dancing ghosts'' PKS 2130-538 for the full-band image and each SPW imaged with AIRI. See Table 6 in Part I for uSARA and {\tt WSClean} flux measurements of ``the dancing ghosts''. \label{tab:fluxghosts}}
\end{table*}
This final selected field is centred on the complex radio source PKS 2130-538, nicknamed ``the dancing ghosts,'' owing to its peculiar and mirrored ghost-like shape. Two radio lobes, bridged by arching jets from the primary AGN host, extend southwards and blend into each other. A secondary AGN in the south-east produces a similar arched jet with a bent-tail that curls back around to the eastern primary lobe. With the original images generated from the ASKAP Evolutionary Map of the Universe Survey (EMU; \citealp{2011PASA...28..215N}), \citet{2021PASA...38...46N} mention that the interaction between the primary and secondary AGN is unclear. With our super-resolved uSARA images, presented in Part I, we have been able to distinguish a clear physical separation between the secondary curling jet and the primary western lobe. Nonetheless, this strange source offers an interesting case study of turbulent dynamics in bent-tail radio galaxies.
For this field, we produced eight sub-band images as well as a monochromatic full-band image with AIRI. In Figure~\ref{9442airi}, we present the AIRI image of a $\sim 2.5^{\circ}$ FoV formed using the full-band (288 MHz) data of SB9442-35. This figure includes zoomed-in views on a field containing an extended radio galaxy and point sources (a: upper right panel), the star-forming galaxy NGC 7090 (c: middle right panel), and the ``dancing ghosts'' (b: bottom panels). The bottom panels include a view of PKS 2130-538 from the full-band image (leftmost) and the first sub-band image SPW:1, covering 36 MHz (middle), for a visual comparison of the sensitivity in both imaging settings. Also included in the bottom rightmost panel is a spectral index map of PKS 2130-538, generated with the first six sub-band images.
\subsubsection{The Dancing Ghosts}
The separation between the curling secondary jet and the eastern lobe of the primary AGN in PKS 2130-538 is less distinct in our AIRI image when compared to the uSARA image from Part I. However, the gain in resolution when moving to the full band is more pronounced with AIRI. Our AIRI sub-band image shows a much smoother structure than the full-band image, particularly noticeable when focusing on the sharpness of the jet bridge linking the two lobes from the primary AGN. Faint point sources emerge more clearly in the AIRI full-band image, with a slight improvement over its uSARA counterpart.
The filamentary emission extending from the eastern lobe (possibly a synchrotron thread similar to those discovered in \citealp{2020A&A...636L...1R}) appears slightly fainter and more diffuse in the AIRI image, with overall steeper spectra ($3.5 \leq \alpha \leq 6$) than seen in the uSARA maps. It is interesting that this eastward-extending filament has such an ultra-steep spectrum in the AIRI map since the spectral index over the rest of the source morphology remains similar between the uSARA and AIRI maps. When comparing the flux density measurements in Table~\ref{tab:fluxghosts} to the corresponding measurements in Part I, there appears to be a clear consistency among uSARA, AIRI, and {\tt WSClean}, with AIRI recovering slightly less flux at the higher frequencies.
\subsection{Universal Denoiser and model uncertainty\label{ssec:results-strat}}
In an experiment toward modelling epistemic uncertainty, we measure differences between AIRI reconstructions of the field SB9351-12 produced by the two denoiser selection strategies proposed in Section~\ref{ssec:denoise-strat}, namely the denoiser shelf and universal denoiser strategies. The two approaches differ by the denoiser instance used. The AIRI reconstructions leveraging a pre-trained shelf of denoisers were presented in Section~\ref{ssec:SB9351}. Here we also present the reconstruction results when utilising the universal denoiser approach, and study the robustness of AIRI reconstructions to denoiser (\emph{i.e.}~model) variations. We recall that the considered universal denoiser corresponds to the lowest training noise level on the shelf, $\sigma_u=2\times 10^{-5}$. Under this consideration, we note that the first sub-band (SPW:1) is not included in this analysis since its shelf-appropriate denoiser corresponds to the universal denoiser (\emph{i.e.} $\sigma_s=\sigma_u$); therefore, only spectral windows 2--8
were re-imaged.
Focusing on our target sources of interest in this field -- namely, the X-shaped galaxy and the merging galaxy cluster SPT2023 -- reconstruction results of SPW:7, obtained using uSARA and the two AIRI denoiser selection strategies (the nearest shelf-appropriate DNN with $\sigma_s=8\times 10^{-5}$
and the universal DNN denoiser with $\sigma_u=2\times 10^{-5}$) are showcased in Figure~\ref{variations}. The seventh spectral window is chosen for a visual comparison due to its high signal and lower dynamic range when compared to other spectral windows. All other spectral windows imaged via the AIRI universal denoiser strategy are provided as FITS files and combined into a GIF for easier viewing in \citet{askapdataset}. In what follows, we refer to the AIRI reconstructions generated via their associated denoiser strategy as $\sigma_u$-AIRI (universal approach) or $\sigma_s$-AIRI (shelf approach).
The most evident visual difference in the AIRI reconstructions -- particularly noticeable for the X-shaped galaxy -- is in the smoothness of the emission recovered in $\sigma_s$-AIRI and the arguably more detailed emission recovered in $\sigma_u$-AIRI, which targets higher dynamic ranges.
Both AIRI images recover significantly more diffuse emission than uSARA, yet with a slight compromise in resolution, as can be seen in the radio galaxies of the merging galaxy cluster SPT2023.
We examine the absolute difference between the $\sigma_u$-AIRI and $\sigma_s$-AIRI images. The resulting error map\footnote{A hard thresholding operation is applied to the error map, keeping only values above $10^{-8}$.} of SPW:7 is displayed in Figure~\ref{variations}, following a normalisation by the associated noise estimate in the image domain $\sigma$ (see Eq.~\ref{eq:heuristic}). From the full-FoV error maps associated with the sub-band images, we conduct a numerical analysis of AIRI reconstructions on the basis of the percentage of the pixels with values above $\sigma$. As shown in Table~\ref{tab:percentages}, for each sub-band error map, we find that $0.8 - 1.1\%$ of the pixels are of values higher than $\sigma$. This very small percentage corresponds mainly to pixel intensities within the brightest point-like sources. Because the integrated flux densities of these bright sources recovered by both AIRI reconstructions are very close, the discrepancy most likely arises from differences in individual pixel intensities which are slightly spatially offset from each other.
When comparing the uSARA reconstruction to each of the AIRI reconstructions, we find that the percentage of the pixels with absolute difference above $\sigma$ is slightly more significant, at $2.5\%$ for SPW:7. These findings suggest that uSARA and AIRI reconstructions are very similar with respect to each other and that AIRI is highly robust to variations of the denoiser instance used (highlighting a small epistemic uncertainty of the learned denoiser approach). This also validates the simpler high dynamic-range universal denoiser strategy for AIRI, as opposed to the denoiser shelf approach.
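The pixel-level comparison reduces to a simple computation; a sketch is given below, where the $10^{-8}$ hard threshold follows the footnote above and the function and variable names are illustrative only:
\begin{verbatim}
import numpy as np

def fraction_above_noise(img_a, img_b, sigma, floor=1e-8):
    err = np.abs(img_a - img_b)
    err = np.where(err > floor, err, 0.0)  # hard-threshold tiny values
    return 100.0 * np.count_nonzero(err > sigma) / err.size
\end{verbatim}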
\begin{table*}
\centering
\begin{tabular}{ccccccc}
\hline
\hline
\textbf{SPW:2} & \textbf{SPW:3} & \textbf{SPW:4} & \textbf{SPW:5} & \textbf{SPW:6} & \textbf{SPW:7} & \textbf{SPW:8} \\
\hline
99.6\% & 99.7\% & 99.7\% & 99.3\% & 99.5\% & 99.2\% & 98.9\% \\
\hline
\end{tabular}
\caption{
The percentage of the pixels in the error maps between $\sigma_u$-AIRI and $\sigma_s$-AIRI images with values below the estimated standard deviation of the noise in the image domain, $\sigma$, for each sub-band image of the field SB9351-12.
}
\label{tab:percentages}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Images/AIRI_variations.png}
\caption{Comparison of uSARA and AIRI reconstructions of the seventh sub-band data (SPW:7) of the field SB9351-12, focusing on the X-shaped radio galaxy (top) and the galaxy cluster SPT2023 (bottom). From left to right: uSARA reconstruction, $\sigma_s$-AIRI reconstruction (using the shelf-appropriate DNN denoiser), $\sigma_u$-AIRI reconstruction (using a universal DNN denoiser), and the error map between $\sigma_s$-AIRI and $\sigma_u$-AIRI, normalised by the estimated standard deviation of the noise in the image domain, $\sigma$.} \label{variations}
\end{figure*}
\section{Computational performance}\label{sec:time}
For all AIRI imaging experiments performed in this work, the decomposition of the measurement operator and consequently the number of CPU cores allocated to enforce data fidelity are identical to uSARA (see Tables 7--9 in Part I for further details). While uSARA was deployed on the CPU nodes of Cirrus, AIRI was run on its GPU nodes comprising both CPU cores and GPUs (see Section~\ref{sec:data} for details of the compute nodes). In this setting, the computing time of the forward step in AIRI was found to be up to 1.2 times faster than its counterpart in uSARA, which we attribute to the newer processors used in these GPU nodes.
The faceting functionality of AIRI was enabled, whereby the image is decomposed into $F=4$ facets. Hence, four GPUs were deployed for the parallel application of the DNN denoiser, one per image facet. As such, the learned denoiser reduced the computing time of AIRI's backward step by a factor of 10 to 30 (depending on the image dimensions) compared to that of its pure optimisation counterpart uSARA. For each imaging experiment of each field, AIRI's total compute time and computational cost in CPU core hours are reported in Table~\ref{tab:time-airi}. In light of its extremely fast denoiser, AIRI's computational cost, solely driven by its forward step, is on average four times lower than that of uSARA (see Tables~7--9 in Part I) and five times higher than that of {\tt WSClean} (see Table~10 in Part I).
Interestingly, preliminary experiments further leveraging GPUs to perform the Fourier transforms involved in the forward step have shown a reduction of AIRI's total compute time, and consequently its computational cost, by nearly a factor of two. However, a similar consideration in uSARA or {\tt WSClean} would not necessarily bring significant acceleration. The computational cost of uSARA would still be driven by its sub-iterative denoiser. Similarly, the Fourier transforms typically do not dominate the computational cost of {\tt WSClean}.
The speed and learning power of the DNN denoisers, pre-trained independently of the RI data under scrutiny, are significantly narrowing the gap between optimisation-based imaging algorithms and the standard CLEAN-based imager, thus highlighting their potential for scalability and computational efficiency when handling extreme data and image dimensions.
\begin{table*}
\begin{tabular}{cccccccccc}
\hline
\hline
& & \textbf{SB8275} & & &\textbf{SB9351} & & &\textbf{SB9442} &\\
\hline
& $F$ & C\textsubscript{Image}& T\textsubscript{Image}&$F$ &C\textsubscript{Image}&T\textsubscript{Image}&$F$&C\textsubscript{Image} & T\textsubscript{Image} \\
& & [CPUh] & [h] & & [CPUh] & [h] & & [CPUh] & [h] \\
\hline
\textbf{Full-band} & -- & -- & -- & -- & -- & -- & 4 & 203 & 1.1 \\
\hline
\textbf{SPW:1} & 4 & 48 & 1 & 4 & 95 & 2.1 & 4 & 45 & 1.4 \\
\hline
\textbf{SPW:2} & 4 & 63 & 1.1 & 4 & 95 & 2.2 & 4 & 54 & 1.5 \\
\hline
\textbf{SPW:3} & 4 & 104 & 1.4 & 4 & 101 & 2.6 & 4 & 51 & 1.6 \\
\hline
\textbf{SPW:4} & 4 & 104 & 1.6 & 4 & 105 & 2.7 & 4 & 50 & 1.6 \\
\hline
\textbf{SPW:5} & 4 & 119 & 1.7 & 4 & 99 & 2.3 & 4 & 52 & 1.5 \\
\hline
\textbf{SPW:6} & 4 & 129 & 1.7 & 4 & 104 & 2.5 & 4 & 62 & 1.6\\
\hline
\textbf{SPW:7} & 4 & 103 & 1.5 & 4 & 125 & 2.9 & 4 & 66 & 1.6\\
\hline
\textbf{SPW:8} & 4 & 144 & 2 & 4 & 91 & 2 & 4 & 69 & 1.6 \\
\hline
\end{tabular}
\caption{AIRI computational costs for all imaging experiments: $F$ refers to the number of image facets, each deployed on one GPU; C\textsubscript{Image} [CPUh] is the computational cost of the deconvolution in CPU core hours, and T\textsubscript{Image} is the duration of the deconvolution in hours. Memory and computational costs specific to the measurement operator are identical to those considered in uSARA (see Tables 7--9 in Part I for more details).
\label{tab:time-airi}}
\end{table*}
\section{Conclusions} \label{sec:con}
The results of this work show that the PnP-RI image reconstruction algorithm AIRI is on par with its pure optimisation counterpart uSARA in terms of precision, and surpasses {\tt WSClean} in both precision and robustness. A main and consistent feature of AIRI reconstructions is their sensitivity to the diffuse components of faint emission. This gives AIRI a distinct advantage over uSARA when the scientific goal is to detect and fully reconstruct low-surface-brightness diffuse emission at or near the noise level.
Building a shelf of suitable denoisers covering a range of potential target dynamic ranges has proven to be a solid approach for implementing AIRI. When resorting to a single universal high dynamic-range denoiser, high-fidelity reconstruction is also achieved, with comparable results to nearest-on-the-shelf reconstructions. In fact, we find that AIRI realisations reconstructed from denoisers with different training noise levels have about a 1\% discrepancy in terms of the percentage of pixel intensities above the estimated noise level of the imaged data.
In comparing the flux density tables of the scrutinised sources reported in Parts I and II, we find a general consistency between uSARA and AIRI. For the faintest, diffuse sources measured, uSARA and AIRI flux measurements are consistently lower than the measurements taken from {\tt WSClean} images. In this case, it is possible that uSARA and AIRI are introducing a bias for very faint emission, but it may also indicate that fluxes measured from the restored {\tt WSClean} map are over-estimated for faint sources whose brightness is at or near the noise level. Wide-band variants of uSARA and AIRI should provide more accurate estimates of flux density, and possibly verify these lower flux densities for the faintest sources.
Finally, the higher computational efficiency achieved by AIRI over uSARA is attributed to its substantially faster denoiser. For the ASKAP FoVs under scrutiny, this led to a four-fold acceleration on average. Such a reduction in imaging time and computational cost is a clear indication of the scalability of AIRI to large image dimensions.
\section*{Acknowledgements}
The first two authors contributed equally to this work. This work was supported by the UK Research and Innovation under the EPSRC grants EP/T028270/1 and EP/T028351/1, and the STFC grant ST/W000970/1. The research used Cirrus, a UK National Tier-2 HPC Service at EPCC funded by the University of Edinburgh and EPSRC (EP/P020267/1). ASKAP, from which the data under scrutiny originate, is part of the Australia Telescope National Facility managed by CSIRO. This project used public archival data from the Dark Energy Survey (DES).
\section*{Data Availability}
The ASKAP data underlying this article (calibrated visibilities and mosaic images of Scheduling Blocks) are made publicly available for viewing and download on the \href{https://data.csiro.au/collections/#domain/casdaObservation/search/}{CSIRO ASKAP Science Data Archive} (CASDA; \citealp{2017ASPC..512...73C}), and can be accessed with the unique Project Identifiers AS034 and AS101. The reconstructed images in FITS format as well as the GIF files showing the imaged fields over the spectral windows are made available in \citet{askapdataset}. The uSARA and AIRI code will become available in a later release of the Puri-Psi library for RI imaging.
\bibliographystyle{mnras}
|
{
"arxiv_id": "2302.14235",
"language": "en",
"timestamp": "2023-03-01T02:06:10",
"url": "https://arxiv.org/abs/2302.14235",
"yymm": "2302"
} | \section{Introduction}
\label{sec:Introduction}
In an increasingly diverse research landscape, management and curation of public data are becoming critical components of transdisciplinary science. Keys to the realization of an open research ecosystem that adds scientific value have been identified in the FAIR principles of scientific data management and stewardship \cite{fair_2016}. Making data Findable, Accessible, Interoperable, and Reusable, however, requires a considerable amount of tooling and infrastructure.
A common problem, which is acute for data in high-energy physics but increasingly an issue in other fields as well, is the sheer size of data sets stored in custom file formats.
For large-scale experimental facilities, such as the LHC at the European Organization for Nuclear Research (CERN), the data sets are so large that even access by the directly involved scientists has to be centrally managed. As an example, the LHCb data collected in the years 2011--12, corresponding to $\sim 3$ fb$^{-1}$ of proton-proton collisions, amount to a volume of 900 TB. This volume only refers to the already preprocessed data available to members of the collaboration and scheduled for release to the public. For the purpose of processing these data, extensive computing infrastructure has been set up by the countries participating in this type of research \cite{Bird:2014ctt}.
Replicating such an infrastructure to allow the public to handle the data would not only require dedicated expert knowledge, but it would also duplicate existing facilities. On the other hand, any individual research conducted on a typical LHC data set will often only make use of a tiny portion of the full data, filtered and selected according to the requirements of the respective research question. It is therefore natural to provide the public with FAIR access to those highly selective subsamples.
In the following, an application is presented that exposes a data query service to allow the public to request sub-samples of data collected and published by the LHCb experiment. The samples are delivered as ROOT Ntuples~\cite{ROOT}, a data format that requires no special LHCb-specific software to read and for which converters to other standard file formats exist. We call the application the Ntuple Wizard{}.
The application interface guides users with basic knowledge in particle physics through the process of discovering the available data and formulating a useful query. The queries can be processed by the existing data production infrastructure, and results will be delivered through the CERN Open Data Portal~\cite{ODP}. By splitting the data request into the construction of a data query and subsequent processing of the query on the internal infrastructure, the LHCb collaboration retains fine-grained control over access to the data. Crucially this system protects the compute infrastructure from attacks by malicious code injection.
\subsection{Accessible open data}
In 2020, the LHC experiments at CERN adopted a new Open Data Policy~\cite{CERN-OPEN-2020-013}, the scope of which expanded in 2022 to an Open Science Policy~\cite{CERN-OPEN-2022-013}. These documents define the commitments of CERN to make the data collected at the LHC, at several levels of complexity, publicly available~\cite{DPHEPStudyGroup:2012dsv}:
\begin{description}
\item[Level 1] Published results --- this can include tables and figures but also preprocessed Ntuples or binned and unbinned fit likelihood functions.
\item[Level 2] Outreach and education --- usually in the form of highly preprocessed Ntuples.
\item[Level 3] Reconstructed data --- these data have been preprocessed to derive physics objects, such as charged particle candidates, photons, or particle jets. Reconstructed data may or may not be corrected for detector effects, such as efficiency and resolution.
\item[Level 4] Raw data --- the basic quantities recorded by the experimental instruments.
\end{description}
Both Level 1 and 2 data are considered to be highly processed, abstracted, and manageable using commonly available computers. Level 4 raw data will not be made available due to practical reasons concerning data size but also detector-specific information needed for the interpretation of these data. This leaves Level 3 data as the most versatile and basic data set which will be publicly accessible.
All LHC experiments have long and intricate data reconstruction pipelines, which yield several intermediate output data formats. During a pipeline, the raw data are converted to physical objects such as charged particle trajectories, jets, and vertices. Furthermore, the raw data are classified and filtered to obtain samples enriched in interesting signatures.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=\textwidth]{figs/lhcb_run_2_data_flow.png}
\caption{LHCb data flow in Runs 1 and 2. The output of the stripping step will be made public through the CERN Open Data Portal~\cite{ODP}.}
\label{fig:data_flow_run1and2}
\end{figure*}
Figure \ref{fig:data_flow_run1and2} shows an overview of the data processing pipeline in LHCb as it has been used during LHC data-taking Runs 1 and 2 (2011--18). The various steps of the pipeline are outlined further in the LHCb computing technical design reports~\cite{TDR1, TDR2}. Level 3 data have been defined as the output of the {\it stripping} step. The stripping consists of a large number of selection algorithms called {\it lines}, which are designed to filter the data and sort events into several collections, which are called {\it streams}. Streams are defined according to common physics signatures and aim to collect selections with significant overlaps into a common set of files, to reduce duplication of the data. The LHCb data organization is discussed in more detail in Appendix~\ref{app:data_org}, including a list of streams available in Runs 1 and 2.
The stripping selections are based on the concept of physics {\it candidates}. A candidate refers to a set of data matching a particular physics signature. In most cases, this signature will be a particular particle decay, such as for example $B^+ \rightarrow \bar{D}^0 \pi^+$ with the subsequent decay $\bar{D}^0 \rightarrow K^+\pi^-$, where $B, D, K,$ and $\pi$ mesons are the lightest hadrons containing $b, c, s$, and $u/d$ quarks respectively. Such cascading decays are represented as tree-like data structures, where the nodes represent (intermediate) particles and the edges indicate a parent-child relationship in the decay. These data structures are referred to as {\it decay trees}. The root particle of the decay tree (the $B^+$ in our example) is called its {\it head}. Stripping selections attempt to find sets of physics objects in the reconstructed LHCb data, which match the desired decay tree and any additional criteria that might be applied to distinguish the intended signal process from background. Typical selection criteria include kinematic variables, vertex and track reconstruction qualities, and particle identification variables. Some stripping lines rely on multivariate classifiers to combine several observables into a single powerful selection criterion. The output of this procedure is collections of decay candidates specified by their particular decay trees in a custom LHCb-specific data format.
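To make the decay-tree picture concrete, a purely illustrative representation (not the LHCb-internal candidate format) is a nested structure of particle nodes, here for the example $B^+ \rightarrow (\bar{D}^0 \rightarrow K^+\pi^-)\,\pi^+$:
\begin{verbatim}
from dataclasses import dataclass, field

@dataclass
class DecayNode:
    name: str
    children: list = field(default_factory=list)

b_plus = DecayNode("B+", [
    DecayNode("D~0", [DecayNode("K+"), DecayNode("pi-")]),
    DecayNode("pi+"),
])

def leaves(node):
    # final-state particles are the leaves of the decay tree
    if not node.children:
        return [node.name]
    return [p for c in node.children for p in leaves(c)]
\end{verbatim}
The head of this tree is the $B^+$ node, and the final-state particles are its leaves.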
It is important to note that candidates are distinct from the concept of {\it events} in the LHCb data processing. An event is defined during the data acquisition and refers to a particular time window in which collisions can occur. Several proton-proton collisions can happen during this time window, and, in principle, several candidates for a particular decay may be identified in a single collision. In such cases, relevant quantities (related to vertex reconstruction and flight distances) can be computed for every primary vertex (i.e. collision point) in the event.
In order to convert these data into a framework-independent format, a useful concept is that of the aforementioned Ntuples. The idea of a Ntuple is simple: each candidate is described by a tuple of variables, i.e. physical observables of interest measured on the particular candidate, or referring to the global event in which the candidate was found. A data set consists of $N$ such tuples, much like a simple CSV file. Ntuples are saved in ROOT files~\cite{ROOT}, and only basic data types are allowed for the variables. As a small complication, in some instances the variables can be arrays of basic data types. In such cases, the Ntuple Wizard{} provides the necessary documentation for their interpretation.
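Once delivered, such a Ntuple can be read without any LHCb-specific software, for instance with the open-source \texttt{uproot} package; in the sketch below the file name, tree name and branch names are hypothetical placeholders:
\begin{verbatim}
import uproot

with uproot.open("Bu2D0pi.root") as f:
    tree = f["DecayTree"]                              # one entry per candidate
    data = tree.arrays(["B_M", "B_PT"], library="np")  # branches as NumPy arrays
print(len(data["B_M"]), "candidates")
\end{verbatim}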
\subsection{Principle of Ntuple creation and the Ntuple Wizard{}}
Both the stripping as well as the Ntuple-making step in Fig.~\ref{fig:data_flow_run1and2} are handled by DaVinci~\cite{DaVinci, TDR1, TDR2}, an LHCb application for event selection and data analysis using the Gaudi framework~\cite{Gaudi, TDR1, TDR2}. DaVinci is configured via Python scripts and used to process entire data sets with batch processing. Both the Python configuration as well as the batch production system are intentionally hidden from users of the Ntuple Wizard{} for security reasons.
The DaVinci application provides access to a number of algorithms that can be combined in sequence for event selection and processing. In order to produce a Ntuple, the user has to specify which variables should appear in the output data. This Ntuple configuration is handled by an algorithm named {\it DecayTreeTuple}, in which variables are registered through the use of so-called {\it TupleTools} and {\it LoKi functors}. A large collection of those tools and functors is available for the user to choose from. In general, a TupleTool will add a set of variables to the Ntuple, while a LoKi functor usually computes a single number. The {\it LoKi::Hybrid::TupleTool} can be used to write the output of functors into the tuple. Functors can be combined with standard arithmetic and logic operations, providing a flexible and powerful system to compute derived quantities. A list of important available tools is presented in Appendix~\ref{app:tupletools}.
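For orientation, a schematic DaVinci-style option fragment of the kind generated internally by the Ntuple Wizard{} is shown below (users of the Wizard never write such options themselves, and the snippet is only meaningful inside the DaVinci environment; the tuple name, input location, decay descriptor and LoKi variables are hypothetical examples):
\begin{verbatim}
from Configurables import DecayTreeTuple
from DecayTreeTuple.Configuration import *

dtt = DecayTreeTuple("Tuple_Bu2D0pi")
dtt.Inputs = ["/Event/Bhadron/Phys/SomeStrippingLine/Particles"]
dtt.Decay = "[B+ -> ^(D~0 -> ^K+ ^pi-) ^pi+]CC"
dtt.addTupleTool("TupleToolKinematic")           # adds kinematic variables
loki = dtt.addTupleTool("LoKi::Hybrid::TupleTool/LoKi_extra")
loki.Variables = {"ETA": "ETA", "PHI": "PHI"}    # LoKi functors -> extra branches
\end{verbatim}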
\begin{figure*}[!htbp]
\centering
\includegraphics[width=\textwidth]{figs/wizard.pdf}
\caption{Architecture of the Ntuple Wizard{}.}
\label{fig:arch}
\end{figure*}
Figure \ref{fig:arch} shows an overview of the Ntuple Wizard{} architecture, the core functionality of which is the configuration of DaVinci. The metadata and documentation describing the available data, preselections, as well as available selection operations are generated from the original provenance traces of the data and the stripping selection code. The web interface presents this metadata and documentation to the user in a pedagogical way that facilitates data discovery and formulation of the query. The query to the data has two principal parts: Data set discovery and Ntuple configuration. First, the application allows the user to select from the available predefined stripping selections, data-taking periods, and conditions. In the second step, the user defines what quantities should be computed and written to the output Ntuple. Standard tools for the computation of typical quantities, such as kinematic variables, particle identification (PID) variables, etc., are available. The query formulated by the user is stored in a set of configuration files. These files can be converted into a Python configuration compatible with the internal LHCb Analysis Productions system~\cite{AnalysisProductions}. This conversion and the final submission of the query to the compute infrastructure are handled through an LHCb Analysis Productions manager.
\subsection{Security considerations}
Accepting arbitrary external code to run on the LHCb computing resources has obvious unacceptable security risks. Therefore, the Ntuple Wizard{} is designed to generate the configuration in a pure data-structure format. As shown in Figure~\ref{fig:arch}, the configuration of the query is captured in YAML files, which can be downloaded and submitted to an LHCb Analysis Productions manager for further processing.
\section{Metadata and documentation acquisition}
\label{sec:meta}
In order to facilitate the core functionality of the Ntuple Wizard{} --- namely data set discovery (Sec.~\ref{sec:discover}) and algorithm configuration (Sec.~\ref{sec:alg}), metadata and documentation from several sources are required. In particular, the application needs to know what types of decays can be queried and what tools are available to compute derived quantities of interest about these candidates.
Since these metadata are unchanging, and providing direct access to the various sources requires authentication and introduces more points of failure, the metadata are collated and served as static files over HTTP. No additional access to the LHCb code or database is needed by the Ntuple Wizard{} once it has been deployed.
The sources of metadata can be grouped into two coarse categories: the LHCb software stack and the LHCb database. Metadata are acquired from the LHCb software stack in two ways. The first is from the Gaudi Python interface; particularly under the DaVinci application environment. Metadata about the configuration interface of each TupleTool are extracted from DaVinci. Details of the stripping lines, including the chain of selection algorithms that define them, are extracted from the DaVinci versions used in the corresponding stripping campaigns.
The process of building decay candidates in a stripping line often involves a combination of many algorithms from the LHCb selection framework, which combine particles, impose selection requirements, perform PID substitution, and build final-state particle candidates from trajectories of charged particles and calorimeter clusters.
The algorithms can be related to each other through their input and output locations. The full list of decays (including all sub-decays) must be inferred by traversing the `dependency tree' of the selection algorithms. This is performed using custom code during metadata acquisition.
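A sketch of such a traversal is given below, where the dictionary-based bookkeeping of algorithms and their input/output locations is an assumption made for illustration:
\begin{verbatim}
def collect_chain(output_location, producers):
    # producers: output location -> (algorithm name, list of input locations)
    algorithm, inputs = producers[output_location]
    chain = [algorithm]
    for loc in inputs:
        if loc in producers:  # stop at raw reconstruction inputs
            chain.extend(collect_chain(loc, producers))
    return chain
\end{verbatim}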
The second, more indirect way is from the LHCb Doxygen pages, which themselves are generated from the source code of the LHCb software stack.
The latest Doxygen pages for Run 1 or Run 2 DaVinci versions are used to extract the documentation for each TupleTool and LoKi functor.
A campaign to improve the Doxygen documentation at its source was undertaken during the development of the Ntuple Wizard{}.
The LHCb database provides metadata about the centrally managed data sets, which is necessary to configure the Ntupling productions as explained above.
In order not to duplicate effort, a common code base is employed to extract metadata from the LHCb database for both the Ntuple Wizard{} and the CERN Open Data Portal.
\section{User interface}
\label{sec:web}
The user interface consists of a sequence of dialogues that guide the user through the configuration steps. This is designed as a client-side dynamic web page that reads the metadata acquired at deployment time and served as static files (see Sec.~\ref{sec:meta}).
Since users of LHCb open data do not, in general, have access to the same support network of experienced users and developers enjoyed by LHCb collaboration members, a key design element of the Wizard is to provide the necessary documentation for a novice user to complete each step of the configuration process.
The existing documentation of DaVinci~\cite{DaVinci, TDR1, TDR2} is fragmented across several sources (Twiki~\cite{twiki}, the Starterkit~\cite{starterkit}, Doxygen~\cite{doxygen} and the source code itself), so where possible, the Wizard pulls text from each of these disparate sources and renders it in the relevant context within the user interface.
There are two main steps to formulate a query using the Ntuple Wizard{}: Dataset discovery and Ntuple configuration. These steps are explained in the following.
\section{Dataset discovery and production configuration}
\label{sec:discover}
The available data contain a wide range of stripping selections, data-taking periods, and running conditions. The {\bf Production configuration} dialogue of the Ntuple Wizard{} guides the user through the selection of the desired subsets. The interface allows the selection of several decays to be processed simultaneously as part of one query. For each decay, a separate Ntuple will be produced.
\subsection{Discovering available candidate decays}
In the {\bf Decay search} dialogue, the Ntuple Wizard{} presents a list of all decays selected by the stripping, accompanied by decay descriptors in LoKi and LaTeX formats, information about which stripping lines build them, as well as `tags' that can be used to filter different types of decays. Decays are searchable through various filters, including the identity or properties of the parent particle and decay products, whether the candidates are built by a specific stripping line, and the aforementioned tags. An example of the decay search is shown in Figure~\ref{fig:decay_search}. The selected candidate of interest is highlighted in blue, and the collection was narrowed down from the list of all possible decays by using the filters and tags at the top of the page. The `none of' option of the tags drop-down menu is chosen by default, indicating that decays with the displayed tags are hidden from the list of selectable decays. The tags `charge-violating' and `undefined-unstable', corresponding respectively to decays that violate charge conservation and to decays that contain unstable particles without defined decays, are hidden by default. If the user wishes to instead isolate decays that meet the criteria of a given tag, a different option can be selected from the `tags' drop-down menu. It is possible to select several decays for further processing at this stage.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=\textwidth]{figs/examples/decay_search.png}
\caption{Example of the decay candidate search function of the Ntuple Wizard{}.}
\label{fig:decay_search}
\end{figure*}
\subsection{Stripping line and data set selection}
Once a decay is selected by the user, all corresponding stripping lines and data sets from the various running periods are listed, and the desired combination(s) can be selected. The case can arise where the same stripping line shows up in multiple stripping versions within the same dataset (stream, running year, and magnet polarity). These are rendered as separate options in the dataset selection drop-down menu of the Ntuple Wizard{}. For a given decay, it is recommended to choose only one dataset for each magnet polarity within a given running year, and to use the most recent stripping version in the case of duplicates. The data organization of LHCb is elaborated on in Appendix~\ref{app:data_org}, including a table of running years, as well as corresponding collision energies and stripping versions.
Links to documentation about each stripping line including selection algorithms that went into building the decay candidates are displayed to the user to guide them in choosing the most suitable stripping line and data stream for their physics interest. Figure~\ref{fig:prod_config} shows an example of the production configuration page, where an available stripping line and data set have been chosen from lists of all possibilities corresponding to the selected decay channel. The blue question mark button contains links to the aforementioned stripping documentation. At this point, the query is specified up to deciding what information to write into the Ntuple.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=\textwidth]{figs/examples/prod_config_filled.png}
\caption{Example of the data set selection and production configuration step of the Ntuple Wizard{}.}
\label{fig:prod_config}
\end{figure*}
\section{Ntuple configuration}
\label{sec:alg}
The {\bf DecayTreeTuple configuration} dialogue is designed to guide the user through customization of the quantities written to the Ntuple for the selected candidates. For each decay, a separate DecayTreeTuple has to be configured. Care should be taken to name the Ntuples appropriately. The Ntuple Wizard{} requires a unique name for each Ntuple.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=\textwidth, height=0.55\textheight]{figs/examples/node_tree_tools.png} \\
\vspace{0.75em}
\includegraphics[width=\textwidth, height=0.4\textheight]{figs/examples/node_tree_tools_selected.png}
\caption{Example of an interactive graph used to configure DecayTreeTuple, with selected TupleTools displayed for both the entire candidate (top) and selected nodes (bottom).}
\label{fig:node_tree}
\end{figure*}
Selected decay trees are visually represented as graphs, where each physics object (e.g. particle) is represented by a node as shown in the screenshots in Figure \ref{fig:node_tree}. The user can interact with this graph by selecting one or multiple nodes at a time and determining which TupleTools will be added to each node, which in turn determines which quantities are saved to the Ntuple. A list is rendered on screen depending on the selected node(s), each element of which corresponds to a selected TupleTool, with buttons for configuring and removing the tool. The TupleTool configuration interface includes links to relevant documentation about the tool, including lists of quantities written by the tool where available. Each node in the graph comes with the standard set of TupleTools for LHCb analyses, but more will often be needed depending on the particular physics interests of the user. Furthermore, any added tool will come with the standard configuration, which can be further modified if the user desires. A custom set of standard LoKi variables and functions of these variables can also be saved to the Ntuple for each node, using the {\it Loki::Hybrid::TupleTool}. Appendix~\ref{app:tupletools} contains a brief description of the standard set of TupleTools included with each node on the graph, as well as other useful TupleTools for physics analysis. Figure~\ref{fig:node_tree} shows an example of the configurable graph corresponding to the selected candidate shown in Figures~\ref{fig:decay_search} and~\ref{fig:prod_config}, as well as a list of TupleTools corresponding to the entire decay candidate (top), and particular nodes selected on the graph (bottom). It can be seen from the figure that nodes can also be selected through the categories shown below the graph and that TupleTools can be added, removed, or configured for each node or grouping of nodes.
Figure~\ref{fig:TupleTool_config} shows an example of the user interface for configuring TupleTools, with the particular example showing \textit{TupleToolTISTOS}, which saves trigger information to the Ntuple. It can be seen at the bottom how relevant information is provided.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=\textwidth]{figs/examples/TISTOS.png}
\caption{Example of the configuration interface of a TupleTool within the Ntuple Wizard{}, (in particular, \textit{TupleToolTISTOS} for saving trigger information), including links to relevant documentation at the bottom of the modal.}
\label{fig:TupleTool_config}
\end{figure*}
\subsection{Configuration output}
Figure~\ref{lst:dtt_yaml} shows an example of the output YAML file used to configure the DecayTreeTuple algorithm, populated from the configuration captured in Figs.~\ref{fig:node_tree}--\ref{fig:TupleTool_config}. The \texttt{tools}, \texttt{groups} and \texttt{branches} keys specify which TupleTools, and therefore which information, will be saved to the Ntuple.
The top-level key \texttt{tools} contains a list of TupleTool configurations, from which the parsing functions create and configure TupleTool algorithms attached to the DecayTreeTuple itself, which will thus write either particle-level information about the decay or event-level information, depending on the class of the TupleTool.
The keys \texttt{branches} and \texttt{groups} themselves contain lists of dictionaries whose keys specify particles and have their own \texttt{tools} lists which are used similarly to attach TupleTool algorithms to the specified particle(s) in the decay tree.
Note that \texttt{groups} differs from \texttt{branches} in that it specifies multiple particles to be looped over and have identically configured TupleTool algorithms attached.
\begin{figure*}[!htbp]
\caption{The output file Btree.yaml, used to configure the DecayTreeTuple algorithm (continued in the following figure).}
\label{lst:dtt_yaml}
\begin{lstlisting}[language=yaml, frame=topline]
inputs:
- /Event/BhadronCompleteEvent/Phys/B2D0PiD2HHBeauty2CharmLine/Particles
descriptorTemplate: ${Bplus}[B+ -> ${D_0}(D~0 -> ${Kplus}K+ ${piminus}pi-)${piplus}pi+]CC
tools:
- TupleToolKinematic:
ExtraName: ''
Verbose: false
MaxPV: 100
Transporter: ParticleTransporter:PUBLIC
- TupleToolPid:
ExtraName: ''
Verbose: false
MaxPV: 100
- TupleToolANNPID:
ExtraName: ''
Verbose: false
MaxPV: 100
ANNPIDTunes:
- MC12TuneV2
- MC12TuneV3
- MC12TuneV4
- MC15TuneV1
PIDTypes:
- Electron
- Muon
- Pion
- Kaon
- Proton
- Ghost
- TupleToolGeometry:
ExtraName: ''
Verbose: false
MaxPV: 100
RefitPVs: false
PVReFitter: LoKi::PVReFitter:PUBLIC
FillMultiPV: false
- TupleToolEventInfo:
ExtraName: ''
Verbose: false
MaxPV: 100
branches:
Bplus:
particle: B+
tools: []
\end{lstlisting}
\end{figure*}
\begin{figure*}[!htbp]
\begin{lstlisting}[language=yaml, frame=bottomline]
D_0:
particle: D~0
tools: []
Kplus:
particle: K+
tools: []
piminus:
particle: pi-
tools: []
piplus:
particle: pi+
tools: []
groups:
Kplus,piminus:
particles:
- K+
- pi-
tools:
- TupleToolTISTOS:
ExtraName: ''
Verbose: false
MaxPV: 100
VerboseL0: false
VerboseHlt1: false
VerboseHlt2: false
VerboseStripping: false
FillL0: true
FillHlt1: true
FillHlt2: true
FillStripping: false
TriggerList: []
Hlt1TriggerTisTosName: Hlt1TriggerTisTos
Hlt2TriggerTisTosName: Hlt2TriggerTisTos
L0TriggerTisTosName: L0TriggerTisTos
PIDList: []
TopParticleOnly: false
Hlt1Phys: >-
Hlt1(?!ODIN)(?!L0)(?!Lumi)(?!Tell1)(?!MB)(?!NZS)(?!Velo)(?!BeamGas)(?!Incident).*Decision
Hlt2Phys: >-
Hlt2(?!Forward)(?!DebugEvent)(?!Express)(?!Lumi)(?!Transparent)(?!PassThrough).*Decision
TIS: true
TOS: true
TUS: false
TPS: false
name: DecayTreeTuple/Btree
\end{lstlisting}
\end{figure*}
\subsection{Future developments}
It is planned to extend the current functionality of the Ntuple Wizard{} by including the ability to create custom candidates from standard collections of LHCb particles. Another important planned addition is the ability to configure custom jet reconstruction. Ideally, support will be included for the full set of algorithms available in DaVinci for data analysis and event/candidate selection as resources allow.
As of Run 3, which started in 2022, the majority of the filtering and preselection of the data will be done in real time within the LHCb high-level trigger (HLT). In this architecture, the data will be fully reconstructed online and the final preselection algorithms will run in the HLT. Offline preselections will be feasible for a subset of the events. In both cases the output will have the same level of abstraction as the output of the stripping, allowing for a relatively simple adaptation of the Ntuple Wizard{} once the Run 3 data are made public.
\section{Request submission and execution}
\label{sec:prod}
Once the candidate(s) of interest, data set(s), and information to be saved in the Ntuple(s) are specified, and a name and email address have been provided for the production, a ZIP format file containing all relevant output files for the data query can be downloaded (as shown in Figure~\ref{fig:output_zip}) and submitted to an LHCb Analysis Productions manager.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=\textwidth]{figs/examples/output_zip.png}
\caption{Example of downloading the output files of the Ntuple Wizard{} after the query is fully specified.}
\label{fig:output_zip}
\end{figure*}
Requests for Ntuple creation are handled using the Analysis Productions package.
The files describing a new request are committed to a repository hosted on the CERN GitLab~\cite{git}, and a merge request is created once they are ready for review.
The Continuous Integration feature of GitLab is used to submit test productions to LHCbDIRAC~\cite{TDR1, TDR2}, which automatically processes a small fraction of the data when the remote repository is updated.
Once the request is submitted, it is handled by the LHCbDIRAC production system.
A production defines how a dataset is to be processed, and LHCbDIRAC will launch and manage computing jobs until the dataset is fully processed.
Productions are defined in `steps' that specify which application to run and which configuration files to read, and may be chained together such that the output of the first step is the input to the second, etc.
The \texttt{info.yaml} file produced by the Ntuple Wizard{} defines one production per dataset, each consisting of a single DaVinci step.
Within the production jobs, DaVinci is configured by functions defined in an external Python module according to the YAML files produced by the Ntuple Wizard{}.
The data structure configured in Section~\ref{sec:alg} and displayed in Figure~\ref{lst:dtt_yaml} is traversed, and the configurable properties of the DecayTreeTuple algorithm are assigned the corresponding values.
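A minimal sketch of this traversal is shown below; the \texttt{configure\_tool} helper stands in for the actual DaVinci configuration calls and is a hypothetical placeholder rather than the code used in production.
\begin{lstlisting}[language=Python]
# Minimal sketch of reading the YAML query and assigning TupleTool configurations.
# `configure_tool` is a hypothetical stand-in for the real DaVinci configuration step.
import yaml

def configure_tool(owner, tool_name, properties):
    """Placeholder: attach a TupleTool named `tool_name` to `owner` and set its properties."""
    print(f"{owner}: add {tool_name} with {properties}")

def configure_decaytreetuple(path):
    with open(path) as f:
        cfg = yaml.safe_load(f)
    # Tools attached to the DecayTreeTuple itself (candidate- or event-level information).
    for entry in cfg.get("tools") or []:
        for tool_name, properties in entry.items():
            configure_tool(cfg["name"], tool_name, properties)
    # Tools attached to individual branches, then to groups of identically treated particles.
    for section in ("branches", "groups"):
        for label, spec in (cfg.get(section) or {}).items():
            for entry in spec.get("tools") or []:
                for tool_name, properties in entry.items():
                    configure_tool(label, tool_name, properties)

configure_decaytreetuple("Btree.yaml")
\end{lstlisting}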
After the Analysis Production jobs are complete, the produced Ntuples will be delivered to the CERN Open Data Portal for retrieval.
\section{Summary}
Providing public access to the large data sets at the LHC is a significant technical challenge, but it is becoming increasingly important for the longevity of high-energy physics in order to optimize acquired knowledge from the collected data. The volume and complexity of the data collected at LHCb make providing direct access to reconstructed (Level 3) data suitable for physics research difficult, motivating the design of the Ntuple Wizard{}, where users can submit queries to obtain skimmed data samples (Ntuples) of the reconstructed data suitable for their physics interests. The Ntuple Wizard{} is a web-based application that intuitively guides the user through specifying a query, from discovering a data set from a physics candidate (e.g. decay) of interest, to configuring the information to be saved in the output Ntuple. The output of the Ntuple Wizard{} is a pure data structure (YAML) format, which is to be submitted to an LHCb Analysis Productions manager so it can be parsed internally to provide the necessary Python scripts needed to configure the DaVinci application. The Ntuples will ultimately be delivered to the CERN Open Data Portal for retrieval.
\section*{Appendices}
\usepackage{bbold}
\title{On the path integral approach to quantum anomalies in interacting models}
\author[a]{Alireza Parhizkar,}
\author[b]{Colin Rylands}
\author[a,c]{and Victor Galitski}
\affiliation[a]{Joint Quantum Institute, University of Maryland, \\ College Park, MD 20742, USA}
\affiliation[b]{SISSA and INFN, \\ via Bonomea 265, 34136 Trieste ITALY}
\affiliation[c]{Center for Computational Quantum Physics, The Flatiron Institute, New York, NY 10010, United States}
\emailAdd{alpa@umd.edu}
\emailAdd{crylands@sissa.it}
\emailAdd{galitski@umd.edu}
\date{
\today
}
\abstract{
The prediction and subsequent discovery of topological semimetal phases of matter in solid state systems has instigated a surge of activity investigating the exotic properties of these unusual materials. Amongst these are transport signatures which can be attributed to the chiral anomaly; the breaking of classical chiral symmetry in a quantum theory.
This remarkable quantum phenomenon, first discovered in the context of particle physics has now found new life in condensed matter physics, connecting topological quantum matter and band theory with effective field theoretic models.
In this paper we investigate the interplay between interactions and the chiral anomaly in field theories inspired by semimetals using Fujikawa’s path integral method. Starting from models in one spatial dimension we discuss how the presence of interactions can affect the consequences of the chiral anomaly, leading to a renormalization of the excitations and their transport properties. This is then generalised to the three-dimensional case, where we show that the anomalous responses of the system, namely the chiral magnetic and quantum Hall effects, are modified by the presence of interactions. These properties are investigated further through the identification of anomalous modes which exist within interacting Weyl semimetals. These massive excitations are nonperturbative in nature and are a direct consequence of the chiral anomaly.
The effects of interactions on mixed axial-gravitational anomalies are then investigated and the conditions required for interactions effects to be observed are discussed.}
\begin{document}
\maketitle
\flushbottom
\section{Introduction}
The state of a physical system is defined through parameters that fully determine its configuration. Each distinct realization of this set of parameters corresponds to a distinct configuration. For example, assigning new values to the generalized coordinates $\vec{x}=(x,y,z)$ of a point particle takes it to different positions in three-dimensional space. A transformation $\vec{x} \rightarrow \mathcal{T}[\vec{x}]$ then yields a different physical situation, and it will be a symmetry only if all such situations are governed by the same physical laws. In other words, if $\mathcal{T}$ is a symmetry transformation, the physical description (or physics) of the system will be blind to the difference between $\vec{x}$ and $\mathcal{T}[\vec{x}]$. For example, given a $\mathcal{T}$ that preserves the action functional, if the physical path $\vec{x}_\text{ph}$ minimizes the action then so will $\mathcal{T}[\vec{x}_\text{ph}]$.
There are two prominent types of physical descriptions, classical and quantum. Since the quantum description is canonically derived from the classical one, one expects two situations that are classically indistinguishable to remain indistinguishable in the quantum description as well. Therefore, it sounds peculiar that in some cases a symmetry of the classical theory is not a symmetry of the corresponding quantum theory. It means that quantum corrections know something about the system that the classical description is completely ignorant of. This phenomenon is known as a quantum anomaly: the classical descriptions of two situations are the same but their quantum descriptions differ.
For every symmetry there is a corresponding conserved Noether current. Thus, anomalies which destroy a classical symmetry consequently ruin a classical conservation law, giving rise to a source term which is usually called the anomalous term. This non-vanishing source term is responsible for the decay of the neutral pion---the phenomenon which would have been suppressed if not for quantum anomalies and thus led to their discovery~\cite{MesonDecay,DecaySchwinger,Adler,BellJackiw,BardeenWI}.
Anomalies are hence purely quantum mechanical, and they become of even greater interest when one finds out that there are macroscopic phenomena based on these effects which dwell deep in the quantum realm. One example of such macroscopic phenomena is the set of anomalous transport signatures in condensed matter systems.
Quantum anomalies are among the many concepts that have originated in high-energy physics but have gradually found their way into condensed matter physics. In this paper we will be concerned with the chiral anomaly and its corresponding anomalous term which supplies the non-conservation of the chiral current. In condensed matter systems such as crystals, chiral symmetry is an emergent property, since the periodic nature of the crystal results in a periodic band structure which allows for exactly as many left-handed chiral modes as there are right-handed ones. In other words, if a band crosses the Fermi surface at one point it will cross the Fermi surface back at least at one other point due to the periodicity of the band structure~\cite{NN1,NN2,NNnogo,NNFriedan}. As a consequence, exciting the system will generate left-moving particles around one point while it produces right-moving particles around the other. So a definite chirality cannot be attributed to a specific band. Even though chiral points are connected through the deep lattice structure, they can be thought of as distinct nodes in the low energy description of the material which is blind to the lattice structure. In this sense the chiral anomaly has a prosaic explanation in condensed matter systems, namely, the pumping of charge through the bottom of the band from one node to another. Amazingly, this simple picture relates key concepts such as the quantized Hall conductance (e.g. via Laughlin's argument~\cite{Laughlin}) and the existence of topological metals (e.g. the Weyl semimetal~\cite{WanTurnerVishwanathSavrasov,BurkovBalents, YangLuRab, XuWengWangDaiFang, HalaszBalents, Aji,WengFangZhongBernevigDai, Lv1, Lv2, Xu, Huang}) to chiral anomalies.
Topology is the cornerstone of the chiral anomaly. We will see that the chiral anomaly is connected to zero-modes of the fermionic theory on one hand and the winding number of the gauge field on the other, when we review the non-perturbative formulation of anomalies named after Fujikawa~\cite{Fujikawa, FujikawaErrata,Fujikawa2004Book} in section \ref{sec:Review}. Within perturbative QED the chiral anomaly arises \textit{only} from the triangle diagrams, where one needs to carefully regularize the difference between linearly divergent integrals~\cite{AdlerBardeen}. Higher order loop diagrams will cancel their own contribution to chiral symmetry breaking and render the chiral anomaly subject to non-renormalization theorems; the higher order terms will leave the form of the anomalous term unmodified and are accounted for by replacing the bare fields and parameters with their renormalized values.
While in theories of elementary particle physics one is restricted by renormalizability conditions and symmetries such as Lorentz invariance and charge conservation, this is not the case for condensed matter systems, where we often seek an effective low-energy description of a complex system. Condensed matter physics is therefore home to a diverse array of particles and interactions, and thus the low-energy description of a condensed matter system is apt to carry interaction terms that could be alien to fundamental physics. In this article we investigate the interplay of the chiral anomaly with these features that arise in condensed matter systems. In particular, building on previous work \cite{CAICMS}, we study the effects of interactions on anomalous chiral symmetry breaking in low-energy descriptions of Dirac and Weyl materials.
After the review of the basics presented in section \ref{sec:Review}, we employ the path-integral formulation in section \ref{sec:2D} to investigate in detail the effects of introducing interactions on the chiral anomaly in a $(1+1)$-dimensional system, e.g. a Luttinger liquid. We then proceed to do the same in section \ref{sec:4D} for $(3+1)$ dimensions, where a much richer behaviour will emerge. As we will see, due to interactions there are modifications to the chiral symmetry breaking in both systems, and how they are related to each other is the subject of section \ref{sec:DimRed}. As mentioned before, there are macroscopic phenomena attributed to the chiral anomaly, such as the chiral magnetic effect and the anomalous Hall response. Since the anomalous term will be different in a theory that contains the additional interaction, these phenomena will also be modified by interactions. In particular we will see that even though the Hall conductivity will remain the same in the equilibrium and homogeneous limit, it will have a different finite-frequency behavior in the presence of interactions. These are investigated in section \ref{sec:Measurement}, where we present them as a way to measure the effects of interactions. Curiously, the effects of interactions exceed the mere modification of anomalous transport phenomena; in section \ref{sec:DynamicAnomaly} we will see that due to the interplay of interactions and Weyl node separation, the anomalous current will have dynamics of its own even in the absence of the electromagnetic gauge field. These ``anomalous modes'' can further enforce dynamics on the gauge field; in particular they give rise to an axionic electrodynamics. Section \ref{sec:Gravity} carries on to thermal phenomena and investigates how interactions influence the gravitational anomaly and transport in the presence of a non-trivial geometry. Finally, although we mainly consider the consequences of a general local current-current interaction in this paper, in section \ref{sec:Beyond} we go beyond this type and consider one other kind, which can also encapsulate the effect of local spin-spin interactions on the chiral anomaly.
Overall, a reader who is \textit{only} interested in the effect of interactions on the anomaly in $(3+1)$ dimensions and is familiar with the subject of the chiral anomaly and Fujikawa's method for deriving it may start from section \ref{sec:4D} and refer back to the previous sections when questions or difficulties arise.
\subsection*{Notations}
Throughout the paper we use the natural units where the reduced Planck constant $\hbar$ and the speed of light $c$ are set to one. We also use Einstein's summation convention and sometimes represent the four-vector of current as $j^\mu \equiv (\rho,j^x,j^y,j^z)$ in Minkowski coordinates $x^\mu \equiv (t,x,y,z)$. The Minkowskian metric is denoted by $\eta_{\mu\nu} = \text{diag} (1,-1,-1,-1)$, the Kronecker delta by $\delta^\mu_\nu$, the d'Alembertian operator by $\Box \equiv \partial_t^2 - \vect\nabla^2$ and Dirac's slash notation, $\gamma^\mu \mathcal{V}_\mu = \slashed{\mathcal{V}}$, is employed.
Gamma matrices, $\gamma^\mu$, are the matrix representations of the Clifford algebra: $\{\gamma^\mu,\gamma^\nu\}=2\eta^{\mu\nu}$. We also define $\gamma_5$ as the matrix that satisfies a natural extension $\{\gamma^\mu,\gamma_5\}=0$ and $\gamma_5^2=1$. In a two dimensional spacetime we can choose the representation as $\gamma^0 = \sigma^x$ and $\gamma^1=i\sigma^y$, yielding $\gamma_5=\sigma^z$, with $\sigma^{x,y,z}$ being the Pauli matrices. In four dimensions we can write them as,
\begin{equation}
\gamma^\mu = \left( \begin{array}{cc}
0 & \sigma^\mu \\
\bar\sigma^\mu & 0
\end{array} \right) \, ,
\ \quad \
\gamma_5 = \left( \begin{array}{cc}
-\mathbb{1} & 0 \\
0 & \mathbb{1}
\end{array} \right) \, ,
\end{equation}
with $\sigma^\mu \equiv (\mathbb{1},\vect{\sigma})$ and $\bar\sigma^\mu \equiv (\mathbb{1},-\vect{\sigma})$. The basis of the above particular representation is called the Weyl basis. A Wick rotation, $t\rightarrow -it$, changes the Minkowskian metric, $\eta_{\mu\nu} \rightarrow \text{diag} (-1,-1,-1,-1)$, and hence changes the algebra, which leads to a redefinition of the zeroth gamma matrix by $\gamma^0 \rightarrow -i\gamma^0$.
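As a quick consistency check of the two-dimensional representation quoted above, one may verify directly that
\begin{equation*}
(\gamma^0)^2 = (\sigma^x)^2 = \mathbb{1} \, , \quad (\gamma^1)^2 = (i\sigma^y)^2 = -\mathbb{1} \, , \quad \{\gamma^0,\gamma^1\} = i\,\{\sigma^x,\sigma^y\} = 0 \, ,
\end{equation*}
so that $\{\gamma^\mu,\gamma^\nu\}=2\eta^{\mu\nu}$ is satisfied, while $\gamma_5 = \gamma^1\gamma^0 = \sigma^z$ indeed anticommutes with both gamma matrices and squares to the identity.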
Moreover, to avoid clutter and for better compatibility with high-energy physics conventions, we treat the Fermi velocity as equal to the speed of light, except where it needs to be restated explicitly.
\section{Fujikawa's Method} \label{sec:Review}
Here we briefly review our main technical tool, Fujikawa's path-integral approach to calculating quantum anomalies. The reader who already has fresh knowledge of the subject can safely skip ahead to the next section.
\subsection{Non-Trivial Jacobian}
A symmetry transformation leaves the equations of motion unchanged, or in the language of the action principle, it preserves the action functional (up to a boundary term). Let us assume that the action functional $S[\Phi]$ describes our classical system and that it has a symmetry transformation $\mathcal{T}$ that preserves it. All the degrees of freedom are represented by $\Phi$. Let us further formulate our quantum theory by the path-integral approach. All there is to know about the quantum system is given by path-integrals of the form $I=\int \mathcal{D}\Phi e^{iS[\Phi]}$. An anomaly, as we have introduced it, is an instance when a classical symmetry fails to also be a quantum one. But $S$ is invariant under $\mathcal{T}$, so $I$ would be as well if it were not for the path-integral measure $\mathcal{D}\Phi$. The only thing that can ruin the symmetry at the quantum level, when described in the path-integral language, is evidently the measure. Thus, for the quantum anomaly to appear, $\mathcal{D}\Phi$ must transform under $\mathcal{T}$ with a non-trivial Jacobian of transformation. This remarkable transparency, where the quantum anomaly is \textit{anticipated} at the outset, is a feature of the path-integral approach in comparison with the perturbative approach, where the quantum anomaly is \textit{discovered} when higher order corrections are being calculated. This Lagrangian formalism of quantum anomalies is named after Fujikawa~\cite{Fujikawa, FujikawaErrata,Fujikawa2004Book}.
In this paper we are concerned with chiral anomalies that occur in fermionic systems. For concreteness, consider the following action functional in (1+1)-dimensional spacetime, which has enough structure to capture the essence of the quantum anomalies,
\begin{equation}
S[\bar{\psi},\psi,A_\mu] = \int d^2 x \left[ \bar{\psi} i\gamma^\mu \left( \partial_\mu - i e A_\mu \right) \psi \right] \, ,
\label{SimpleAction}
\end{equation}
along with the corresponding path-integral,
\begin{equation}
I = \int \mathcal{D}\bar{\psi}\mathcal{D}\psi e^{iS[\bar{\psi},\psi,A_\mu]} \, ,
\end{equation}
where the integration is only over fermionic degrees of freedom $\psi$ and $\bar\psi \equiv \psi^\dagger \gamma^0$ (which in the current case are Weyl spinors) hence treating the gauge field $A^\mu$ as an external electromagnetic four-potential. Note also, that the path-integral treats $\psi$ and $\bar\psi$ as independent integral variables.
The action \eqref{SimpleAction} is symmetric under two $U(1)$ global transformations. A simple phase transformation,
\begin{equation}
\psi \longrightarrow e^{i\alpha} \psi \, , \ \ \bar{\psi} \longrightarrow \bar{\psi} e^{-i\alpha} \, ,
\label{PhaseTransG}
\end{equation}
and a chiral transformation,
\begin{equation}
\psi \longrightarrow e^{i\alpha\gamma_5} \psi \, , \ \ \bar{\psi} \longrightarrow \bar{\psi} e^{i\alpha\gamma_5} \, .
\label{ChiralTransG}
\end{equation}
We are then curious to see how the measure $\mathcal{D}\bar{\psi}\mathcal{D}\psi$ behaves under the above transformations.
To begin our investigation we use the usual method of Euclideanization, which has the reward of making the Dirac operator $\slashed{D} \equiv \gamma^\mu (\partial_\mu - ie A_\mu)$ hermitian and which helps significantly in the calculation of path-integrals. Hermitian operators possess complete orthonormal eigenbases. Therefore, if we now decide to expand the fermionic fields, there is a natural way of doing so. Namely, we can expand $\bar{\psi}$ and $\psi$ in the orthonormal modes of the hermitian Dirac operator, $\slashed{D}\phi_n = l_n \phi_n$, with the $l_n$s being the eigenvalues of $\slashed{D}$ and the $\phi_n$s the corresponding eigenfunctions.
\begin{equation}
\bar{\psi} = \sum_n \bar{b}_n \phi^\dagger_n(x) \, , \ \ \psi=\sum_n a_n\phi_n(x) \, .
\label{SpinorExpansion}
\end{equation}
Both $a_n$ and $\bar{b}_n$ are Grassmann numbers so that they form Grassmann field spinors when multiplied by two component fields $\phi_n$ and $\phi^\dagger_n$.
We could have expanded the fermionic fields in the basis of any hermitian operator. But this natural choice has the crucial property of formally diagonalizing the action. With this choice it becomes clear that the fermionic part of the path-integral is given by the product of all eigenvalues of the Dirac operator, which renders it exactly integrable.
\begin{align}
I &= \int \mathcal{D}\bar{\psi}\mathcal{D}\psi e^{S[\bar{\psi},\psi,A_\mu]} = \int \prod_n \left[ d\bar{b}_n da_n e^{l_n \bar{b}_n a_n} \right] = \prod_n l_n = \det (\slashed{D})
\end{align}
In the above and in future path-integral equalities we use the equal sign for all path-integrals that are proportional to each other by a constant coefficient. We should also note that integration over Grassmann numbers is defined by the left derivative.
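As a one-variable reminder of these rules (stated here only because it will be used shortly), note that for a single Grassmann variable $a$ and an ordinary number $c$,
\begin{equation*}
\int da \, a = 1 \quad \text{and} \quad a' = c\, a \quad \Longrightarrow \quad da' = c^{-1}\, da \, ,
\end{equation*}
so that the Grassmann measure rescales with the \textit{inverse} of the factor multiplying the variable, opposite to the behaviour of an ordinary c-number measure. This elementary fact is what produces the inverse Jacobian encountered below.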
Other than orthonormality and completeness,
\begin{align}
&\int d^2x \phi^\dagger_m(x) \phi_n(x) = \delta_{mn} \label{Orthonormality} \, , \\
&\sum_n \phi^\dagger_n(x)\phi_n(y) = \delta(x-y) \, , \label{Completeness}
\end{align}
eigenfunctions of the Dirac operator have another property which is of interest here. Since $\slashed{D}$ anti-commutes with $\gamma_5$, multiplying an eigenfunction with $\gamma_5$ produces another eigenfunction of the Dirac operator with an eigenvalue that is the negative of the original one:
\begin{equation}
\slashed{D}(\gamma_5\phi_n) = -\gamma_5\slashed{D}\phi_n = -l_n (\gamma_5\phi_n) \, .
\label{MirroredMode}
\end{equation}
Therefore, if $l_n \neq 0$ then $\phi_n$ and $\gamma_5 \phi_n$ are orthogonal to each other. When $l_n=0$ the corresponding eigenfunction is called a zero mode. In the subset of zero modes $\slashed{D}$ and $\gamma_5$ can be simultaneously diagonalized, since $[\slashed{D},\gamma_5]\phi_n$ vanishes there.
Under an infinitesimal chiral rotation \eqref{ChiralTransG} the expansion coefficients $a_n$ and $\bar{b}_n$ undergo a transformation.
\begin{align}
\sum_n a_n\phi_n &\rightarrow \sum_n e^{i\alpha\gamma_5} a_n\phi_n \approx \sum_n (1 + i\alpha\gamma_5) a_n \phi_n \Rightarrow \nonumber \\
a_m &\rightarrow a_m + i\sum_n a_n \int d^2x \phi^\dagger_m(x) \alpha \gamma_5 \phi_n(x) \, ,
\end{align}
where we have used \eqref{Orthonormality} to go from the first to the second line and have kept $\alpha$ inside the integral since in general it can depend on position. Therefore, $\{a_n\}$ transforms as $\{a_n\}' = M \{a_n\}$ with the matrix of transformation given by $M_{mn} = \left[ \delta_{mn} + i\int d^2x \phi^\dagger_m(x) \alpha \gamma_5 \phi_n(x) \right]$. Exactly the same goes for $\bar{b}_n$, because a chiral rotation \eqref{ChiralTransG} transforms $\bar{\psi}$ the same way it transforms $\psi$, unlike a phase rotation \eqref{PhaseTransG}.
\begin{equation}
\bar{b}_m \rightarrow \sum_n \left[ \delta_{mn} + i\int d^2x \phi^\dagger_m(x) \alpha \gamma_5 \phi_n(x) \right] \bar{b}_n \, .
\end{equation}
The phase transformation is obtained from the chiral transformation by substituting $\gamma_5$ with $1$ everywhere, and $i$ with $-i$ only for the variables that carry a bar, i.e. $\bar{\psi}$ and $\bar{b}_n$.
Since $a_n$ and $\bar{b}_n$ are Grassmann numbers and their integrals are given by left derivatives, the Jacobian of their transformation is the inverse of the Jacobian for standard c-number transformations and is given by,
\begin{align}
\prod_n d\bar{b}'_n da'_n &= \left( \det [M]^{-1} \prod_n d\bar{b}_n \right) \! \left(\det [M]^{-1} \prod_n da_n \right) \nonumber \\
&= \left[ \exp \left( -i \sum_n \int d^2x \phi^\dagger_n \alpha\gamma_5\phi_n \right) \prod_n d\bar{b}_n \right] \nonumber \\
&\times \left[ \exp \left( -i \sum_n \int d^2x \phi^\dagger_n \alpha\gamma_5\phi_n \right) \prod_n da_n \right] \nonumber \\
&= \exp \left( -2i \sum_n \int d^2x \phi^\dagger_n \alpha\gamma_5\phi_n \right) \prod_n d\bar{b}_n da_n \nonumber \\
&\equiv J_5 (\alpha) \prod_n d\bar{b}_n da_n \, ,
\label{dJacobian}
\end{align}
where $J_5(\alpha)$ is the Jacobian and we have used the identity $\ln \det (M^{-2}) = -2\text{Tr} \ln(M)$ for the second equality. To obtain the corresponding Jacobian for the phase rotation \eqref{PhaseTransG}, we can use the prescription above and observe that its Jacobian is unity, and therefore the path-integral measure remains unchanged under the phase rotation. This is what we naturally desire, since particle-number conservation consequently remains unbroken at the quantum level. As we can see, the chiral rotation has a different story. What was a classical symmetry, generating the continuity equation for the chiral current, $\partial_\mu j_5^\mu =0$, is now broken quantum mechanically. The broken conservation law now reads:
\begin{equation}
\int d^2x \partial_\mu \left(\bar{\psi}\gamma^\mu\gamma_5\psi\right)= -2i \sum_n \int d^2x \phi^\dagger_n \gamma_5\phi_n \, ,
\label{RawConservation}
\end{equation}
where $\bar{\psi}\gamma^\mu\gamma_5\psi$ is the Noether current $j_5^\mu$ for the classical chiral symmetry.
Looking back at \eqref{MirroredMode} and the statement below it, we find that only zero modes contribute to the sum in \eqref{RawConservation}, since for all other modes $\int d^2x \phi^\dagger_n \gamma_5\phi_n$ is zero by \eqref{Orthonormality}. The same goes for the exponents of \eqref{dJacobian} as well when $\alpha$ is taken to be constant. Thus, the chiral anomaly is attributed to the subset of zero modes. In fact, since zero modes are also eigenvectors of $\gamma_5$, it can be diagonalized in the subspace of zero modes, $\gamma_5=\mathrm{diag}(+1,-1)$, and then the value of $\int d^2x \phi^\dagger_n \gamma_5\phi_n$ becomes either $+1$ when $\gamma_5\phi_n=+\phi_n$, or $-1$ when $\gamma_5\phi_n=-\phi_n$. Therefore the sum above is equal to the number of right-moving zero modes $n_+$ minus the number of left-moving zero modes $n_-$,
\begin{equation}
\sum_n \int d^2x \phi^\dagger_n \gamma_5\phi_n = n_+ - n_- = \mathrm{Index}(\slashed{D}) \, .
\label{Index}
\end{equation}
Since the $\phi_n$s are eigenfunctions of the covariant derivative $\slashed{D}$, they depend on the gauge field, $\phi_n \equiv \phi_n(A_\mu(x))$. Therefore, $n_\pm$ also depend on $A_\mu(x)$, which, along with the knowledge that the anomaly belongs to the zero modes, points to the fact that the anomaly is topologically related to the gauge field. It is also worth mentioning that since a zero mode stays a zero mode after a chiral rotation, we can now write \eqref{dJacobian} as
\begin{equation}
\prod_{n \in \{0\}} d\bar{b}'_n da'_n = e^{-2i\big( n_+[A_\mu] - n_-[A_\mu] \big)} \prod_{n \in \{0\}} d\bar{b}_n da_n \, ,
\end{equation}
where $n$ goes only over the subspace of zero modes, $\{0\}$, while all other modes do not contribute to the Jacobian of transformation.
The dependence of $n_\pm$ on the gauge field is not surprising: as is clear from the one-dimensional model, applying an electric field, say pointing to the right, can of course favor the generation of right-moving particles and disfavor the left-moving ones. In fact, a sober guess can tell us that in (1+1)-dimensional spacetime the number of right-moving particles must increase at a rate proportional to $eE$, which is the electrical force exerted on each particle, with $e$ the charge of the particles and $E$ the applied electric field. If there were no anomaly, say at the classical level where there is no sea of anti-particles to source this generation, $(n_+ - n_-)$ would have been conserved; instead it is only $(n_+ + n_-)$, the total number of zero modes, that stays conserved at the quantum level.
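This guess can be made quantitative by a rough counting argument (a heuristic only; the careful derivation follows in the next subsection). In a system of length $L$ each momentum state occupies an interval $2\pi/L$, and the applied field shifts every momentum at the rate $\dot{k}=eE$, so the right-moving branch gains occupied states while the left-moving branch loses them,
\begin{equation*}
\dot{n}_\pm = \pm \frac{L}{2\pi}\, eE \quad \Longrightarrow \quad \frac{1}{L}\frac{d}{dt}\left( n_+ - n_- \right) = \frac{e E}{\pi} \, ,
\end{equation*}
which is precisely the rate dictated by the anomalous relation \eqref{2DAnomalyE} derived below.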
We now proceed to calculate what we have guessed above, specifically, how \eqref{Index} is given in terms of the gauge field $A_\mu$.
\subsection{Regularized Jacobian}
So far, we have found that the Jacobian of chiral transformations $J_5(\alpha)$ is given by what is essentially a trace over $\gamma_5$:
\begin{equation}
\ln J_5(\alpha) = -2i \lim_{N \to \infty} \sum^N_{n=1} \int d^2x \phi^\dagger_n(x)\alpha(x)\gamma_5\phi_n(x) \, ,
\label{lnJacobian}
\end{equation}
where we have introduced $N \to \infty$ to emphasize that this sum, roughly speaking, is over an infinite series of $\pm 1$s. It is not trivial that such a sum converges. The final value depends on how we decide to group the $+1$s and $-1$s to obtain a convergent series. We need a proper method of summing these numbers, one which conveys the physics behind them. We have already taken one step, namely choosing the basis of the Dirac operator. But the current labels attributed to the eigenfunctions $\phi_n$ in \eqref{lnJacobian} do not carry any physical meaning. The correct method of summation should be gauge invariant so that it can produce a gauge-invariant result; otherwise a term resulting from the breaking of gauge symmetry might be confused with the source of the anomaly. Therefore, we will further utilize the eigenvalues $l_n$ of the Dirac operator, which are invariant under gauge transformations, to relabel the eigenfunctions and regularize \eqref{lnJacobian} as follows.
\begin{align}
&\frac{i}{2}\ln J_5(\alpha) \nonumber \\
&= \lim_{M \to \infty} \int d^2x \alpha(x) \sum^\infty_{n=1} \phi^\dagger_n(x)\gamma_5 f\left(\frac{l_n^2}{M^2}\right) \phi_n(x) \nonumber \\
&= \lim_{M \to \infty} \int d^2x \alpha(x) \sum^\infty_{n=1} \phi^\dagger_n(x)\gamma_5 f\left(\frac{\slashed{D}^2}{M^2}\right) \phi_n(x) \, ,
\label{RegularizationIntro}
\end{align}
where $f(x)=1$ for $x<1$ but vanishes fast enough for $x>1$. Now the eigenfunctions are labeled by how large their corresponding eigenvalue is. In \eqref{lnJacobian} we needed to end the summation at some arbitrary $n$, which is reflected in the limit $N\to\infty$, but now we are using $M\to\infty$ and the summation ends on physical grounds.\footnote{At this stage it is worth emphasizing the distinction between the eigenbasis of the Dirac operator, which is used in the regularization, and that of the corresponding Hamiltonian. In particular, the zero-modes of the Dirac operator are \textit{not} zero \textit{energy} modes. The zero-modes of $\slashed{D}$ are those which solve the equations of motion of the free theory, in other words the on-shell modes. Therefore, higher eigenvalues belong to those modes which are farther from the shell. This means that the regularization above penalizes the most off-shell degrees of freedom.} The presence of $\slashed{D}$ in the last line of Eq. \eqref{RegularizationIntro} makes it clear how the gauge field $A_\mu$ appears in the anomalous relation. However, what we have so far is given in terms of the eigenvalues of $\slashed{D}$ and it is not yet manifest how \eqref{RegularizationIntro} is a function of time and space. To extract the dependence of the regularization $f$ on the coordinates we move to the basis of plane waves.
\begin{subequations}
\label{2DCal}
\begin{align}
&\frac{i}{2}\ln J_5(\alpha) = \text{Tr} \alpha\gamma_5 \nonumber \\
& \equiv \text{tr} \lim_{M \to \infty} \int \! d^2x \, \alpha \! \int \frac{d^2k}{(2\pi)^2} e^{-ik_\mu x^\mu}\gamma_5 f\left(\frac{\slashed{D}^2}{M^2}\right) e^{ik_\mu x^\mu} \label{2DCal:a} \\
&=\text{tr} \lim_{M \to \infty} \int \! d^2x \, \alpha \! \int \frac{d^2k}{(2\pi)^2} e^{-ik_\mu x^\mu}\gamma_5 f\left(\frac{D^\mu D_\mu -\frac{ie}{4} [\gamma^\mu,\gamma^\nu]F_{\mu\nu}}{M^2}\right) e^{ik_\mu x^\mu} \label{2DCal:b} \\
&=\text{tr} \lim_{M \to \infty} \int \! d^2x \, \alpha \! \int \frac{d^2k}{(2\pi)^2} \gamma_5 f\left(\frac{(-k_\mu k^\mu +2ik^\mu D_\mu + D^\mu D_\mu) -\frac{ie}{4} [\gamma^\mu,\gamma^\nu]F_{\mu\nu}}{M^2}\right) \label{2DCal:c} \\
&=\text{tr} \lim_{M \to \infty} M^2 \! \int \! d^2x\, \alpha \! \int \frac{d^2k}{(2\pi)^2} \gamma_5 f\left( \! -k_\mu k^\mu +\frac{2ik^\mu D_\mu}{M} + \frac{D^\mu D_\mu}{M^2} -\frac{\frac{ie}{4} [\gamma^\mu,\gamma^\nu]F_{\mu\nu}}{M^2} \! \right) \label{2DCal:d} \\
&=\text{tr} \lim_{M \to \infty} - M^2 \int \! d^2x \, \alpha \! \int \frac{d^2k}{(2\pi)^2} f'(-k_\mu k^\mu) \frac{\frac{ie}{4} [\gamma^\mu,\gamma^\nu]\gamma_5 F_{\mu\nu}}{M^2} \label{2DCal:e} \\
&= i\int \! d^2x \, \alpha \frac{ie}{4\pi}\epsilon^{\mu\nu}F_{\mu\nu} \, . \nonumber
\end{align}
\end{subequations}
In \eqref{2DCal:a} we have chosen the basis of plane waves and have traced over the remaining indices by $\text{tr}$. For the next line, \eqref{2DCal:b}, we have expanded $\slashed{D}^2=\gamma^\mu D_\mu \gamma^\nu D_\nu$ by separating $\gamma^\mu\gamma^\nu$ into its symmetric and anti-symmetric parts and using the fact that $[D_\mu,D_\nu] = -ieF_{\mu\nu}$. We have then pulled $e^{ik_\mu x^\mu}$ through the covariant derivatives $D_\mu$ from \eqref{2DCal:b} to \eqref{2DCal:c}, which leaves $ik_\mu$ behind wherever there is a covariant derivative, $D_\mu \rightarrow ik_\mu + D_\mu$. Changing the variables of integration from $k_\mu$ to $M k_\mu$ leads us to \eqref{2DCal:d}. This lets us expand $f(x)$ in orders of $1/M$ around $-k_\mu k^\mu$, which takes us to the next line. In \eqref{2DCal:e} we have only kept the leading orders of $1/M$ and have also used the fact that all terms coming with only $\gamma_5$ vanish since $\text{tr} \gamma_5 = 0$. Furthermore, in a two-dimensional spacetime $\text{tr}[\gamma^\mu,\gamma^\nu]\gamma_5 = 4 i\epsilon^{\mu\nu}$, which gives us the only surviving term, and we have
\begin{equation}
\int \frac{d^2k}{(2\pi)^2} f'(-k_\mu k^\mu) = -\int \frac{du}{4\pi} f'(u)= -\frac{1}{4\pi} \, .
\end{equation}
The above equality is true regardless of the details of the function $f$ and is satisfied only by the requirements described before. Therefore, for a general regularizing function we find the Jacobian of chiral transformations for the Minkowski metric to be
\begin{equation}
J_5 (\alpha) = \exp{ \left\{ -i\int d^2x \alpha(x) \frac{e}{2\pi}\epsilon^{\mu\nu}F_{\mu\nu} \right\} } \, .
\end{equation}
All in all, the differences caused by chiral rotation in the path-integral can be represented by the equality below,
\begin{equation}
\int \mathcal{D}\bar{\psi}\mathcal{D}\psi e^{iS} = \int \mathcal{D}\bar{\psi}\mathcal{D}\psi e^{iS + i \int \! d^2x \alpha \left( \partial_\mu j_5^\mu - \frac{e}{2\pi}\epsilon^{\mu\nu}F_{\mu\nu} \right) } \, ,
\end{equation}
where, going from the left-hand side to the right-hand side, a chiral rotation of the fermionic variables has taken place, while the equality sign reflects the fact that the chiral rotation is, after all, only a change of path-integral variables and must not change the whole path-integral. From the above equality the following anomalous relation is concluded,
\begin{equation}
\langle \partial_\mu j_5^\mu \rangle = \langle \frac{e}{2\pi}\epsilon^{\mu\nu}F_{\mu\nu} \rangle \, ,
\label{2DAnomaly}
\end{equation}
where $j_5^\mu \equiv \bar{\psi}\gamma^\mu\gamma_5\psi$ is the Noether current corresponding to chiral rotations and is called the ``chiral current''. Had it not been for the non-trivial Jacobian $J_5(\alpha)$, the right-hand side of the above equation would have been zero, which is the case for the classical theory where there are no path-integral measures to begin with. Moreover, the angle brackets $\langle \, \rangle$ are there to remind us that these equations are path-integral relations.
The chiral anomaly in (1+1) dimensions is probably the simplest form of quantum anomaly, but it is connected to its (3+1) dimensional counterpart if we note that in four dimensions we will have to integrate over a four dimensional momentum space and therefore $M^4$ appears in \eqref{2DCal:d} in place of $M^2$ behind the integral. This means that we now have to expand the regulator $f(x)$ up to the order of $1/M^4$ (instead of $1/M^2$ in two dimensional spacetime), since higher orders in expansion vanish in the limit $M\rightarrow \infty$. On the other hand, algebra of gamma matrices in (3+1) dimensions gives the following
\begin{subequations}
\begin{align}
\label{trgamma}
&\text{tr} \gamma_5 = \text{tr} \left[ \gamma^\mu,\gamma^\nu \right]\gamma_5 = 0 \\
&\text{tr} \left[ \gamma^\mu,\gamma^\nu \right]\left[ \gamma^\rho,\gamma^\sigma \right] \gamma_5 = -16\epsilon^{\mu\nu\rho\sigma} \, .
\end{align}
\end{subequations}
Thus, the only surviving term from the expansion will come from the square of $\left[ \gamma^\mu,\gamma^\nu \right]F_{\mu\nu}$ which, roughly speaking, gives us the square, $\epsilon^{\mu\nu\rho\sigma}F_{\mu\nu}F_{\rho\sigma}$, of what we had for (1+1) dimensions:
\begin{equation}
\langle \partial_\mu j^\mu_5 \rangle = \langle \frac{e^2}{16\pi^2}\epsilon^{\mu\nu\rho\sigma}F_{\mu\nu}F_{\rho\sigma} \rangle \, .
\label{4DAnomaly}
\end{equation}
By following the same reasoning in other dimensions one can show that the chiral anomaly vanishes in odd spacetime dimensions.\footnote{However, it has recently been shown \cite{TopolCriterion} that for certain systems, in particular those which develop localization, it is possible to reduce the dimensionality, by removing the time dimension, from odd to even number of spacetime dimensions hence reviving the chiral anomaly.}
It is also worthwhile to rewrite \eqref{2DAnomaly} and \eqref{4DAnomaly} in terms of electric and magnetic fields, which respectively yields
\begin{equation}
\langle \partial_\mu j_5^\mu \rangle = \langle \frac{e}{\pi} E \rangle \, , \quad \quad \text{(2D spacetime)}
\label{2DAnomalyE}
\end{equation}
and
\begin{equation}
\langle \partial_\mu j_5^\mu \rangle = \langle \frac{e^2}{2\pi^2}\vec{E} \cdot \vec{B} \rangle \, , \quad \quad \text{(4D spacetime)}
\label{4DAnomalyEB}
\end{equation}
where in the latter (four dimensional case) we see that it is the coincidence of electric and magnetic fields that breaks the conservation of the chiral current.
\section{Interaction in 1+1 Dimensions} \label{sec:2D}
We now turn our attention to the following specific question: how will the anomalous relation \eqref{2DAnomaly} be modified if the fermions are interacting with each other? To this end, consider the interacting action functional below,
\begin{equation}
S_\lambda = \int d^2x \left[ \bar{\psi} i \gamma^\mu \left( \partial_\mu - ieA_\mu \right) \psi -\frac{\lambda^2}{2} j^\mu j_\mu \right] \, ,
\label{2DInetractingAction}
\end{equation}
where $j^\mu \equiv \bar{\psi}\gamma^\mu\psi$ is the Noether current of the phase rotation, or simply the electrical current when multiplied by the electrical charge $e$, and $\lambda^2/2$ is the strength of an attractive interaction. Note that the interaction term does not break any of the classical symmetries that $S_\lambda$ had when $\lambda$ was zero. Knowing this, one might readily argue that since the anomaly comes from the measure of the path-integral, and adding a term to the action functional has nothing to do with the measure, the interaction term $-\frac{\lambda^2}{2}\bar{\psi}\gamma^\mu\psi \bar{\psi}\gamma_\mu\psi$ must have no effect on the anomalous relation whatsoever. This is, however, \textit{not} the case, as we shall now explain.
\subsection{Diagonalized Partition Function}
\label{sec:Regularization}
If the argument in the last sentences above were correct, then interactions would have no effect on the anomalous relation and going any further in this direction would be a lost cause. However, the assumption underlying that argument, namely that the action functional has nothing to do with the measure, is wrong.
In this subsection we discuss this fact, and show that it comes from a need for a self-consistent regularization of the path-integral, and postpone a detailed demonstration to appendix \ref{sec:SelfRegularization}.
The reader who is not concerned with such details or would like to return to them later, can safely jump to the next subsection, \ref{sec:Interaction2D}.
Unlike regular path-integration, fermionic path-integrals are defined by left differentiation of Grassmann numbers and they need regularization. For example, path integration over the fermionic degrees of freedom of a free Dirac fermion theory yields the determinant of the Dirac operator, $\det (i\slashed{D} + m) = \prod l_n$, only if the product is well-defined, which is the case when $\sum_n^\infty |l_n -1|$ converges~\cite{InfiniteProduct}. But the eigenvalues, $l_n$, of the Dirac operator are not even bounded. Thus a well-defined path-integral must carry a type of regularization with itself. This is where the action functional intrudes into the measure's business. The path-integral (or the partition function) has two elements, the measure and the action, and the regularization is not present in the latter. But \textit{how} the measure is regularized must be determined only by the action, since the partition function is to be a self-sufficient object and should be indifferent to the backstory of why it is written. Let us again look at the free Dirac fermion. We can write its partition function as,
\begin{align}
\det\left(i\slashed{D}+m\right) &= \int \prod_n d\bar{b}_n d a_n \left( 1 + l_n \bar{b}_n a_n \right) = \int \left( \prod_n d\bar{b}_n da_n\right) \exp\left\{\sum_n l_n \bar{b}_n a_n\right\} \, ,
\end{align}
with $\bar{b}_n$ and $a_n$ being the Grassmann amplitudes defined before in equation \eqref{SpinorExpansion}. Looking at above we see that the action is given by $\sum_n l_n \bar{b}_n a_n$. This setup, in which the action is \textit{formally diagonalized}, makes it easy for us to set a natural cut-off on the determinant: We can simply disregard all $l_n$s which are bigger than some limit $M$ and eventually take the limit $M \rightarrow \infty$. This is exactly what we have done for calculating the chiral anomaly in the previous section.
The above way of regularizing the path-integral clearly relies on the action, since the cut-off is actually on the eigenvalues of the Euclidean action. In other words, the basis chosen for the spinor expansion of the fermionic degrees of freedom is the one which formally diagonalizes the action functional. After this diagonalization the regularization is naturally given as above. Note also that this regularization procedure is indifferent to external information such as what physical system is being described by the path-integral, or whether a particular field has a certain symmetry or not. The regularization, being determined only by the action, will share the symmetries of the action, even if the measure does not respect them all.
Now we can go back to the question raised below equation \eqref{2DInetractingAction}. To calculate the anomaly we first have to expand the fermionic degrees of freedom in spinor modes. But in what basis should they be expanded? The answer is the basis that formally diagonalizes the action and therefore leads to a well-defined path-integral. Clearly, by introducing interaction terms into the action, the basis in which it is diagonalized will change accordingly. Through this, the presence of interactions will modify the anomaly.
This subject is rigorously expounded in the appendix \ref{sec:SelfRegularization} although the material covered there will not be essential for what follows.
\subsection{Effect of Interaction}
\label{sec:Interaction2D}
Now we can proceed to see how the interactions modify the anomalous relation in $(1+1)$ dimensions. In the case of \eqref{2DInetractingAction} we can easily find a regularization that fits the descriptions put forward in the previous subsection and in appendix \ref{sec:SelfRegularization}, although this regularization, which follows shortly, may be natural enough for the reader to ignore the previous subsection altogether.
We can decouple the interaction term using a Hubbard-Stratonovich auxiliary field, $a_\mu$ (not to be confused with the Grassmann variable $a_n$), arriving at the following equality
\begin{equation}
\int \mathcal{D}\bar{\psi}\mathcal{D}\psi e^{S_\lambda} = \int \mathcal{D}\bar{\psi}\mathcal{D}\psi \mathcal{D}a_\mu e^{S_a} \, ,
\label{IntHS}
\end{equation}
with $S_\lambda$ given by \eqref{2DInetractingAction} and $S_a$ as follows,
\begin{equation}
S_a = \int d^2x \left[ \bar{\psi} i \gamma^\mu \left( \partial_\mu - ieA_\mu -i\lambda a_\mu \right) \psi + \frac{1}{2}a_\mu a^\mu \right] \, .
\label{2DHSAction}
\end{equation}
To see the equality of the two path-integrals, we first shift $a_\mu$ in \eqref{2DHSAction} by $-\lambda \bar{\psi}\gamma_\mu\psi$ to obtain
\begin{equation}
S_a = \int d^2x \left[ \bar{\psi} i \slashed{D} \psi -\frac{\lambda^2}{2} j_\mu j^\mu + \frac{1}{2}a_\mu a^\mu \right] \, ,
\end{equation}
but a simple shift does not change the path-integral measure $\mathcal{D}a_\mu$. We can then integrate $a_\mu$ out and regain the action of equation \eqref{2DInetractingAction}. Recall that path-integral equalities of the form \eqref{IntHS} ought to be regarded as equalities up to a constant coefficient.
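Writing the fermion--auxiliary-field coupling in \eqref{2DHSAction} as $\lambda a_\mu j^\mu$, the intermediate algebra of this shift is simply a completion of the square,
\begin{equation*}
\lambda\, a_\mu j^\mu + \frac{1}{2}\, a_\mu a^\mu \;\longrightarrow\; \lambda \left( a_\mu - \lambda j_\mu \right) j^\mu + \frac{1}{2} \left( a_\mu - \lambda j_\mu \right)\left( a^\mu - \lambda j^\mu \right) = \frac{1}{2}\, a_\mu a^\mu - \frac{\lambda^2}{2}\, j_\mu j^\mu \, ,
\end{equation*}
after which the Gaussian integral over the decoupled $a_\mu$ contributes only an overall constant.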
What has been sandwiched between $\bar{\psi}$ and $\psi$ is now a generalized Dirac operator $\slashed{D}_g \equiv \gamma^\mu \left( \partial_\mu - ieA_\mu - i\lambda a_\mu \right)$ where $A_\mu$ and $a_\mu$ have the same mathematical status. It is therefore rational to think that $\slashed{D}_g$ must substitute $\slashed{D}$ in Fujikawa's method of regularization introduced in \eqref{RegularizationIntro}. Regardless of naturalness, moreover, regularizing with respect to $\slashed{D}_g$ satisfies the requirements put forward in the previous section and appendix \ref{sec:SelfRegularization}, namely expanding the spinor degrees of freedom in the basis of $\slashed{D}_g$ formally diagonalizes the fermionic part of the action.\footnote{To make a connection to what has been suggested in appendix \ref{sec:SelfRegularization} one can look at \eqref{EigenIntegral}, \eqref{RegularizationBasis} and \eqref{FormallyDiagonalized}. The fact that such a regularization is enforced can be observed simply by choosing the basis of generalized hermitian Dirac operator $\slashed{D}_g \phi_n = l_n \phi_n$ in order to expand fermionic degrees of freedom $\bar{\psi}$ and $\psi$ as in \eqref{SpinorExpansion}, which makes the fermionic part of the action $S_a$ formally diagonalized:
\begin{align}
I = \int &\mathcal{D}a_\mu \exp \left\{ \frac{1}{2}\int d^2x \, a_\mu a^\mu \right\} \times \int \prod_n d\bar{b}_n \, da_n \exp\left\{ \sum_n l_n \bar{b}_n a_n \right\} \, .
\end{align}
Then, for instance, equation \eqref{EigenIntegral} tells us that $\ell_n = l_n$, meaning that the eigenvalues of the generalized Dirac operator $\slashed{D}_g$ determine the regularization.
It is also worth noting that the on-shell modes are now the zero-modes of the generalized Dirac operator, $\slashed{D}_g \phi_{\{0\}} = 0$. So the regularization process should assign a penalty to the off-shell modes with respect to this operator.}
Having established the above, we can proceed to calculate the anomalous relation in the presence of interactions. We need to calculate $\sum_n \phi_n^\dagger \gamma_5 f(\slashed{D}_g^2/M^2) \phi_n$, where we can take the same steps as in \eqref{2DCal} but with $eA_\mu$ replaced by $eA_\mu + \lambda a_\mu$, which yields,
\begin{equation}
\langle \partial_\mu j_5^\mu \rangle_{a_\mu} = \langle \frac{e}{\pi} \epsilon^{\mu\nu} \partial_\mu A_\nu + \frac{\lambda}{\pi}\epsilon^{\mu\nu}\partial_\mu a_\nu \rangle _{a_\mu} \, ,
\end{equation}
where the subscript $a_\mu$ is a reminder that in addition to fermionic fields there is also an integration over the auxiliary field $a_\mu$. We have also used the following,
\begin{equation}
\epsilon^{\mu\nu} F_{\mu\nu} = \epsilon^{\mu\nu} \partial_\mu A_\nu + \epsilon^{\nu\mu} \partial_\nu A_\mu = 2\epsilon^{\mu\nu}\partial_\mu A_\nu \, .
\end{equation}
To get back to the original path-integral $\int\mathcal{D}[\bar{\psi},\psi] e^{S_\lambda}$, we must first shift $a_\mu$ to $a_\mu - \lambda j_\mu$ and then integrate it out,
\begin{align}
& \langle \partial_\mu j_5^\mu \rangle_{a_\mu} = \langle \frac{e}{\pi} \epsilon^{\mu\nu} \partial_\mu A_\nu -\frac{\lambda^2}{\pi}\epsilon^{\mu\nu}\partial_\mu j_\nu + \frac{\lambda}{\pi}\epsilon^{\mu\nu}\partial_\mu a_\nu \rangle _{a_\mu} \nonumber \\
& \Rightarrow \partial_\mu j_5^\mu = \frac{e}{\pi} \epsilon^{\mu\nu} \partial_\mu A_\nu -\frac{\lambda^2}{\pi}\epsilon^{\mu\nu}\partial_\mu j_\nu \, ,
\end{align}
where in the last line an integration over the fermionic degrees of freedom is implied. Note that the action is even in $a_\mu$; therefore, terms odd in $a_\mu$, such as $\langle a_\mu \rangle$, are eliminated after the integration.
We can look at this result in different ways, one of which is to write it in the following form,
\begin{equation}
\partial_\mu j_5^\mu = \frac{e}{\pi}\epsilon^{\mu\nu} \partial_\mu \left( A_\nu - \frac{\lambda^2}{e^2} ej_\nu \right) \, ,
\label{2DIntScreen}
\end{equation}
and see the interplay of the interaction and the chiral anomaly manifested as a modification of the electromagnetic field by the electrical current $ej^\mu$. Note that the anomalous term above is present also in the absence of the gauge field; its effect is then to renormalize the excitations of the system. To see this, recall that in $(1+1)$ dimensions we have $\gamma^\mu\gamma_5 = \epsilon^{\mu\nu}\gamma_\nu$, allowing us to write $j^\mu_5 = \epsilon^{\mu\nu}j_\nu$ and therefore, restricted to $(1+1)$ dimensions, to write the anomalous relation as,
\begin{equation}
\partial_\mu j_5^\mu = \frac{1}{1+\lambda^2/\pi} \frac{e}{2\pi}\epsilon^{\mu\nu}F_{\mu\nu} \, ,
\label{2DIntAnomaly}
\end{equation}
which gives us the same anomalous term on the right hand side as in the absence of interactions but now modified by a coefficient $(1+\lambda^2/\pi)^{-1}$.
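Explicitly, using $j_5^\mu = \epsilon^{\mu\nu}j_\nu$ to rewrite the interaction-induced term, the rearrangement leading to \eqref{2DIntAnomaly} reads
\begin{equation*}
\partial_\mu j_5^\mu = \frac{e}{\pi}\epsilon^{\mu\nu}\partial_\mu A_\nu - \frac{\lambda^2}{\pi}\partial_\mu j_5^\mu
\quad\Longrightarrow\quad
\left(1+\frac{\lambda^2}{\pi}\right)\partial_\mu j_5^\mu = \frac{e}{2\pi}\epsilon^{\mu\nu}F_{\mu\nu} \, .
\end{equation*}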
For an effective field theory of a condensed matter system there is no real reason to preserve Lorentz symmetry. Thus it is reasonable to investigate interactions that do not respect it, such as the density-density interaction $(\psi^\dagger\psi)^2$. This type of interaction fits our procedure if we replace $a_\mu$ by just its temporal component, $\delta^0_\mu a_0$, and integrate only over $a_0$ instead. We get
\begin{equation}
\partial_\mu j_5^\mu = \frac{e}{2\pi}\epsilon^{\mu\nu}F_{\mu\nu} - \frac{\lambda^2}{\pi}\partial_1 j_5^1 \, ,
\end{equation}
which shows that the chiral charge conservation law is modified by the interplay of interactions and the chiral anomaly even in the absence of external electromagnetic fields; the effect of the remaining anomalous term can then be seen as a renormalization of the velocity of the excitations which carry the chiral charge.
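For completeness, the interaction-induced term can be traced as follows: keeping only the temporal component of the auxiliary field, the Jacobian contributes $\frac{\lambda}{\pi}\epsilon^{\mu 0}\partial_\mu a_0$; after the shift $a_0 \rightarrow a_0 - \lambda j_0$ and the integration over $a_0$, this leaves behind
\begin{equation*}
-\frac{\lambda^2}{\pi}\epsilon^{10}\partial_1 j_0 = -\frac{\lambda^2}{\pi}\partial_1\!\left(\epsilon^{1\nu}j_\nu\right) = -\frac{\lambda^2}{\pi}\partial_1 j_5^1 \, ,
\end{equation*}
where again $j_5^\mu = \epsilon^{\mu\nu}j_\nu$ has been used.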
\subsection{Finite Chiral Rotation}
All the transformations we have been concerned with so far were infinitesimal, allowing us to obtain a (non-)conservation relation. In this subsection we will see how finite transformations differ from infinitesimal ones in the context of anomalies.
An infinitesimal chiral rotation $\psi \rightarrow e^{i\alpha\gamma_5}\psi$ adds a term $ \gamma^\mu (i\partial_\mu \alpha) \gamma_5$ to the Dirac operator $\slashed{D}$ and an anomalous term, $-\alpha\frac{e}{2\pi}\epsilon^{\mu\nu}F_{\mu\nu}$, coming from the non-trivial Jacobian, to the Lagrangian. Considering the effect of the former on the anomaly, one observes that the anomalous term itself should be modified; but since $\alpha$ is infinitesimal, these higher-order corrections can be disregarded. A finite chiral rotation, however, as it appears inside the generalized Dirac operator $\slashed{D}_g$, changes the anomalous term, and its potentially sizable contributions must be taken into account.
We begin by noticing that in two dimensional spacetime $\gamma^\mu\gamma_5$ is equal to $\epsilon^\mu_{\ \nu}\gamma^\nu$ so that we are able to write a generalized Dirac operator which carries an axial-vector term as
\begin{align}
\slashed{D}_g &= \gamma^\mu \left( \partial_\mu -ieA_\mu -ib_\mu\gamma_5 \right) \nonumber \\
&= \gamma^\mu \left( \partial_\mu -ieA_\mu - i\epsilon^\nu_{\ \mu} b_\nu \right) \equiv \gamma^\mu \left( \partial_\mu -iC_\mu \right) \, .
\end{align}
with $b_\mu$ being the axial-vector field. Its corresponding anomalous relation is therefore
\begin{equation}
\partial_\mu j^\mu_5 = \frac{1}{\pi}\epsilon^{\mu\nu}\partial_\mu C_\nu = \frac{e}{\pi}\epsilon^{\mu\nu}\partial_\mu A_\nu + \frac{1}{\pi}\partial_\mu b^\mu \, ,
\end{equation}
where the right hand side is due to the Jacobian in presence of $b_\mu$. We see that if $b_\mu$ is a constant across spacetime it will not contribute.
Now let us start over with the simple Dirac operator $\slashed{D} \equiv \gamma^\mu (\partial_\mu - ieA_\mu)$ and build up a finite chiral rotation in steps of $d\alpha$, so that at any given step an angle $\alpha=\int d\alpha$ has been accumulated, finally reaching $\alpha_f$. At each step the following term is added to the action
\begin{align}
\delta_{d\alpha} S = \int d^2x \bigg[ &-(\partial_\mu d\alpha)j_5^\mu - d\alpha \frac{e}{\pi}\epsilon^{\mu\nu} \partial_\mu A_\nu + \frac{d\alpha}{\pi}\partial_\mu \partial^\mu \alpha \bigg] \, .
\end{align}
Hence by integrating $d\alpha$ the total change due to the finite chiral rotation is obtained to be,
\begin{equation}
\delta_\alpha S = \int d^2x \left[ -b_\mu j_5^\mu + \frac{e}{\pi}\epsilon^{\mu\nu} b_\mu A_\nu - \frac{1}{2\pi} b_\mu b^\mu \right] \, ,
\end{equation}
with $b_\mu$ set equal to $\partial_\mu \alpha_f$. One conclusion here is that if we begin with the action below,
\begin{equation}
S = \int d^2x \bar{\psi}i\gamma^\mu \left( \partial_\mu - ieA_\mu - ib_\mu \gamma_5 \right) \psi \, ,
\end{equation}
with $b_\mu$ constant, we can remove $b_\mu$ from the Dirac operator by a chiral rotation with angle $\alpha$ satisfying $\partial_\mu \alpha = b_\mu$; dropping the resulting field-independent constant, we obtain
\begin{equation}
S + \delta_\alpha S = \! \int \! d^2x \bigg[ \bar{\psi}i\gamma^\mu \left( \partial_\mu - ieA_\mu \right) \psi + \frac{e}{\pi}\epsilon^{\mu\nu} b_\mu A_\nu \bigg] \, .
\end{equation}
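As a quick check, the $-b_\mu j_5^\mu$ piece of $\delta_\alpha S$ indeed removes the axial coupling, since
\begin{equation*}
\bar{\psi}\,i\gamma^\mu\left(-ib_\mu\gamma_5\right)\psi = b_\mu\,\bar{\psi}\gamma^\mu\gamma_5\psi = b_\mu j_5^\mu \, ,
\end{equation*}
so that only the coupling of $b_\mu$ to the gauge field survives, as displayed above.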
\subsection{Effective Action} \label{sec:EffAct}
In two dimensions the Helmholtz theorem takes quite a simple form and allows us to decompose any $\slashed{v}$ into vector and axial-vector parts, $\slashed{v}= \slashed{\partial} \rho_v + \slashed{\partial} \phi_v \gamma_5$, with $v_\mu$ a two-vector and $\rho_v$ and $\phi_v$ two scalars generating the curl-free and divergence-free parts of the vector field $v_\mu$, respectively. Returning to the $(1+1)$-dimensional Hubbard-Stratonovich action functional \eqref{2DHSAction},
\begin{equation}
S_a \! = \! \int \! \! d^2x \! \left\{ \bar{\psi}i\gamma^\mu\left[ \partial_\mu -ieA_\mu -i\lambda a_\mu \right] \psi +\frac{1}{2}a_\mu a^\mu \right\} \, ,
\end{equation}
we can rewrite $A_\mu$ and $a_\mu$ as the sum of their curl-free and divergence-free parts,
\begin{align}
S_a = \! \int \! d^2x \bigg\{ &\bar{\psi}i\gamma^\mu\left[ \partial_\mu -ie\epsilon_\mu^{\ \nu}\partial_\nu\Phi -i\lambda (\partial_\mu \rho + \epsilon_\mu^{\ \nu}\partial_\nu\phi) \right] \psi + \frac{1}{2}\partial_\mu\rho\partial^\mu \rho - \frac{1}{2}\partial_\mu\phi\partial^\mu\phi \bigg\}.
\end{align}
where we have chosen the Lorentz gauge $\partial_\mu A^\mu = \partial_\mu \partial_\nu \left( \epsilon^{\mu \nu} \Phi \right)=0$ and used the identity $\epsilon_{\mu\alpha}\epsilon^\mu_{\ \beta}=-\eta_{\alpha\beta}$. In two space-time dimensions we can write $\gamma^\mu\gamma_5=\epsilon^\mu_{\ \nu}\gamma^\nu$ and rearrange the above equation into
\begin{align}
S_a = \int d^2x \bigg\{ &\bar{\psi}i\gamma^\mu\left[ \partial_\mu + i\gamma_5 (e\partial_\mu\Phi +\lambda\partial_\mu\phi) -i\lambda\partial_\mu \rho \right] \psi + \frac{1}{2}\partial_\mu\rho\partial^\mu \rho - \frac{1}{2}\partial_\mu\phi\partial^\mu\phi \bigg\} \, .
\end{align}
Next we remove the $\gamma_5$ from inside the brackets by a finite chiral rotation $\psi \rightarrow e^{-i(e\Phi + \lambda\phi)\gamma_5}\psi'$. This in turn introduces a Jacobian which appears in the action as
\begin{align}
S_a &= \int d^2x \bigg\{ \bar{\psi}'i\gamma^\mu\partial_\mu \psi' + \frac{1}{2}\partial_\mu\rho\partial^\mu \rho \nonumber \\
& -\frac{1}{2\pi}\partial_\mu(e\Phi + \lambda\phi) \partial^\mu(e\Phi +\lambda\phi) - \frac{1}{2}\partial_\mu\phi\partial^\mu\phi \bigg\} \, ,
\end{align}
where we have also eliminated $-i\lambda\partial_\mu \rho$ by a $U(1)$ phase rotation without any cost. Completing the square gives
\begin{eqnarray} \label{BeforeInt}
S_a = \int d^2x &&\bigg\{ \bar{\psi}'i\gamma^\mu\partial_\mu \psi' + \frac{1}{2}\partial_\mu\rho\partial^\mu \rho \nonumber \\
&&- \frac{1+\lambda^2/\pi}{2} \left(\partial_\mu\phi + \frac{\lambda e/\pi}{1+\lambda^2/\pi} \partial_\mu\Phi \right)^2 -\frac{1}{1+\lambda^2/\pi} \frac{e^2}{2\pi}\partial_\mu\Phi\partial^\mu\Phi \bigg\} \, .
\end{eqnarray}
Here we can shift $\phi$ so that it absorbs the $\Phi$ term inside the parentheses, and then integrate out $\phi$, $\rho$ and the fermionic degrees of freedom, to end up with,
\begin{equation} \label{ModifiedMass}
S \equiv S[A^\mu] = \int d^2x \, \frac{1}{1+\lambda^2/\pi} \frac{e^2}{2\pi}A_\mu A^\mu \, ,
\end{equation}
where we have used that $A^\mu = \epsilon^{\mu\nu}\partial_\nu\Phi$ whenever $\partial_\mu A^\mu =0$. As we can see, the effective field theory for the gauge field in $(1+1)$ dimensions comes with a mass term for the photon which is modified by interactions. Moreover, by functionally differentiating \eqref{2DHSAction} and \eqref{ModifiedMass} with respect to $eA_\mu (x)$ we find
\begin{equation}
j^\mu = \frac{1}{1+\lambda^2/\pi} \frac{e}{\pi} A^\mu \, .
\end{equation}
Notice, however, that this equation holds if we begin by choosing the Lorentz gauge for $A_\mu$; consequently the electrical current $j^\mu$ is conserved. On the other hand, the chiral current $j^\mu_5 = \epsilon^\mu_{\ \nu} j^\nu$ is not, and its non-conservation is modified by the interaction strength $\lambda^2$.
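Explicitly, the two functional derivatives read
\begin{equation*}
\frac{\delta S_a}{\delta \left(eA_\mu(x)\right)} = \bar{\psi}\gamma^\mu\psi = j^\mu \, , \qquad
\frac{\delta S[A^\mu]}{\delta \left(eA_\mu(x)\right)} = \frac{1}{1+\lambda^2/\pi}\,\frac{e}{\pi}\,A^\mu \, ,
\end{equation*}
and equating their path-integral expectation values gives the relation above.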
Returning to \eqref{BeforeInt}, let us integrate out only the auxiliary fields $\rho$ and $\phi$ and leave the Grassmann fields unintegrated. We will have,
\begin{align}
S = \int d^2x \bigg\{ &\bar{\psi}'i\gamma^\mu\partial_\mu \psi' - \frac{1}{2\pi}\partial_\mu\frac{e\Phi}{\sqrt{1+\lambda^2/\pi}} \partial^\mu \frac{e\Phi}{\sqrt{1+\lambda^2/\pi}} \bigg\} \, .
\end{align}
By another (reverse) rotation $\psi' \rightarrow e^{-i\gamma_5 e\Phi/\sqrt{1+\lambda^2/\pi}}\Psi$ we can now re-couple the gauge field to fermion number current. This gives us
\begin{equation}
S = \int d^2x \, \bar{\Psi}i\gamma^\mu \left[ \partial_\mu -i\frac{e}{\sqrt{1+\lambda^2/\pi}}\gamma_5 \partial_\mu \Phi \right] \Psi \, .
\end{equation}
Using the relation $\gamma^\mu\gamma_5=\epsilon^\mu_{\ \nu}\gamma^\nu$ once again, we can write the above as
\begin{equation}
S = \int d^2x \, \bar{\Psi}i\gamma^\mu \left[ \partial_\mu -i\frac{e}{\sqrt{1+\lambda^2/\pi}}A_\mu \right] \Psi \, .
\end{equation}
This means that in (1+1)-dimensions the effect of the current-current interaction is summarized in having a \textit{free} fermionic theory with a ``dressed'' charge of $e/\sqrt{1+\lambda^2/\pi}$.
\section{Interaction in 3+1 Dimensions} \label{sec:4D}
Having established the basics, we now continue with the case of four spacetime dimensions and a more general short-range current-current interaction, appearing in the following path-integral,
\begin{equation}
I=\int \! \mathcal{D}[\bar{\psi}\psi] \exp \left\{ i \int \! d^4x \left[ \bar{\psi}i\slashed{D}\psi - \frac{1}{2} \lambda^2_{\mu\nu} j^\mu j^\nu \right] \right\} ,
\label{InteractingI}
\end{equation}
where the current $j^\mu$ and the Dirac operator $\slashed{D}$ are defined as before and $\lambda^2_{\mu\nu} \equiv \lambda_{\mu\alpha}\lambda_\nu^{\ \alpha}$ is the interaction strength.
The methods we outline in this paper are quite general and can be applied to arbitrary interaction strengths; however, for clarity we at times restrict our focus to the special cases $\lambda^2_{\mu\nu}=\lambda^2\eta_{\mu\nu}$, which preserves Lorentz symmetry, and $\lambda^2_{\mu\nu}=\lambda^2_0\eta_{0\mu}\eta_{0\nu} + \lambda^2_3\eta_{3\mu}\eta_{3\nu}$. In the latter, when $\lambda_3^2=0$ the interaction term simply becomes the density-density interaction; when $\lambda_3^2=\lambda^2_0$, it breaks the full Lorentz symmetry down to a reduced symmetry consisting of rotational invariance in the $x$-$y$ plane and boost invariance along the $z$ direction. The same symmetry reduction happens in the presence of a constant background magnetic field along the longitudinal direction, $z$.
Evidently, depending on the choice of $\lambda_{\mu\nu}$, some of the symmetries of the model may be broken, e.g.\ Lorentz invariance, but these interactions do not break the classical chiral symmetry. Unlike in $(1+1)$ dimensions, these interactions are RG irrelevant and typically not considered; however, we will see that in the presence of the constant magnetic field they should not be discounted.
\subsection{Interacting Anomalous Relation}
Again, through Hubbard-Stratonovich decoupling the path-integral $I$ can also be written as
\begin{equation}
I=\int \! \mathcal{D}[\bar{\psi}\psi a_\mu] \exp \left\{ i \! \int \! d^4x \left[ \bar{\psi}i\slashed{D}_g\psi +\frac{1}{2}a_\mu a^\mu \right] \right\}
\label{DecoupledI} ,
\end{equation}
with the generalized Dirac operator $\slashed{D}_g$ now defined as $\slashed{D}_g \equiv \gamma^\mu\left( \partial_\mu -ieA_\mu - i\lambda_{\mu\nu}a^\nu \right)$. The equivalence of \eqref{InteractingI} and \eqref{DecoupledI} can be observed by noticing that a shift in the auxiliary field by its on-shell value $a_\mu \rightarrow a_\mu - \lambda_{\nu\mu}j^\nu$ in the above equation, preserves the measure but changes the action to that of \eqref{InteractingI} plus a quadratic term in $a_\mu$ which can be integrated out leaving only a constant. Therefore these two path-integrals and all correlation functions generated by them are concluded to be equivalent. In what follows, we are going to reserve $S_\lambda$ for the action of \eqref{InteractingI} and $S_a$ for the action of \eqref{DecoupledI} as in the two-dimensional case.
An infinitesimal chiral rotation $\bar{\psi} \rightarrow \bar{\psi}e^{i\alpha\gamma_5}$, $\psi \rightarrow e^{i\alpha\gamma_5} \psi$, transforms the action $S_a$ to
\begin{equation}
S_a + \delta S_a \equiv S_a + \int d^4x \, \alpha (\partial_\mu j_5^\mu - \mathcal{A}_5) \, .
\end{equation}
The first term in the parentheses arises from the classical shift of the action itself, whereas the second is the anomalous term introduced by the non-invariance of the measure, in other words, the non-trivial Jacobian of the transformation. From the path-integral point of view, the transformation is a mere renaming of the variables of integration; demanding the same value for the path-integral after such a change of variables then translates into requiring $\delta S_a$ to vanish. The pre-regularization anomalous term is given, similarly to its two-dimensional counterpart, by
\begin{equation}
\int \! d^4x \, \mathcal{A}_5 (x) \equiv \! \int \! d^4x \left[ 2\lim_{N\rightarrow\infty} \sum^N_{n=1}\phi^\dagger_n (x) \gamma_5 \phi_n (x) \right] .
\label{A5}
\end{equation}
As we discussed in sections \ref{sec:Regularization} and \ref{sec:Interaction2D}, the above is well-defined after a regulator $f(l_n^2/M^2)$ is introduced in the sum, where the $l_n$ are the eigenvalues of the generalized Dirac operator $\slashed{D}_g$ and the $\phi_n$ are the corresponding eigenfunctions. The large-$N$ limit then gives way to the large-$M$ limit as in \eqref{RegularizationIntro},
\begin{equation}
\mathcal{A}_5 (x)= \lim_{M \to \infty} \sum^\infty_{n=1} \phi^\dagger_n(x)\gamma_5 f\left(\frac{\slashed{D}_g^2}{M^2}\right) \phi_n(x) \, ,
\label{RegA5}
\end{equation}
where now $\slashed{D}_g$ captures the position dependence of the anomaly after acting on the $\phi_n(x)$, which can further be replaced by plane waves via a change of basis. The four-dimensional counterpart of the calculation done in \eqref{2DCal}, as was discussed above equation \eqref{4DAnomaly}, determines the value of $\mathcal{A}_5$ to be,
\begin{align}
\mathcal{A}_5 = \frac{e^2}{16\pi^2} \epsilon^{\mu\nu\rho\sigma} \mathcal{F}_{\mu\nu}\mathcal{F}_{\rho\sigma} \, ,
\end{align}
with $e\mathcal{F}_{\mu\nu} \equiv \partial_\mu (eA_\nu+\lambda_{\nu\alpha}a^\alpha)-\partial_\nu (eA_\mu+\lambda_{\mu\beta}a^\beta)$, which renders the anomalous non-conservation law as below,
\begin{align} \label{4DMidInt}
\left\langle \partial_\mu j^\mu_5 \right\rangle_a &= \frac{e^2}{16\pi^2} \left\langle \epsilon^{\mu\nu\rho\sigma} \mathcal{F}_{\mu\nu}\mathcal{F}_{\rho\sigma} \right\rangle_a \\
& = \frac{e^2}{16\pi^2} \left\langle \epsilon^{\mu\nu\rho\sigma} F_{\mu\nu} F_{\rho\sigma} \right\rangle_a \nonumber \\
& \quad \ + \frac{e}{2\pi^2}\epsilon^{\mu\nu\rho\sigma} \left\langle \partial_\mu A_\nu \partial_\rho (\lambda_{\sigma\alpha} a^\alpha) \right\rangle_a
+ \frac{1}{4\pi^2}\epsilon^{\mu\nu\rho\sigma} \left\langle \partial_\mu (\lambda_{\nu\alpha} a^\alpha) \partial_\rho (\lambda_{\sigma\beta} a^\beta) \right\rangle_a \, , \nonumber
\end{align}
where we remember that all relations here are path-integral relations and in particular there is an integral over the auxiliary field $a_\mu$ in the above which establishes the path-integral equivalence
\begin{equation}
\int \mathcal{D}\bar{\psi}\mathcal{D}\psi e^{S_\lambda} = \int \mathcal{D}\bar{\psi}\mathcal{D}\psi \mathcal{D}a_\mu e^{S_a} \, .
\end{equation}
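For completeness, the expansion in \eqref{4DMidInt} follows from writing $e\mathcal{F}_{\mu\nu} = eF_{\mu\nu} + f_{\mu\nu}$ with $f_{\mu\nu} \equiv \partial_\mu(\lambda_{\nu\alpha}a^\alpha) - \partial_\nu(\lambda_{\mu\alpha}a^\alpha)$, so that
\begin{equation*}
\frac{e^2}{16\pi^2}\epsilon^{\mu\nu\rho\sigma}\mathcal{F}_{\mu\nu}\mathcal{F}_{\rho\sigma}
= \frac{1}{16\pi^2}\epsilon^{\mu\nu\rho\sigma}\left( e^2 F_{\mu\nu}F_{\rho\sigma} + 2e\, F_{\mu\nu}f_{\rho\sigma} + f_{\mu\nu}f_{\rho\sigma} \right) \, ,
\end{equation*}
together with $\epsilon^{\mu\nu\rho\sigma}F_{\mu\nu}f_{\rho\sigma} = 4\,\epsilon^{\mu\nu\rho\sigma}\partial_\mu A_\nu \partial_\rho(\lambda_{\sigma\alpha}a^\alpha)$ and $\epsilon^{\mu\nu\rho\sigma}f_{\mu\nu}f_{\rho\sigma} = 4\,\epsilon^{\mu\nu\rho\sigma}\partial_\mu(\lambda_{\nu\alpha}a^\alpha)\,\partial_\rho(\lambda_{\sigma\beta}a^\beta)$, which reproduce the coefficients $e/2\pi^2$ and $1/4\pi^2$ quoted above.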
We want to integrate $a_\mu$ out in equation \eqref{4DMidInt} and get the original path-integral \eqref{InteractingI} with the current-current interaction term back. To go from RHS to LHS in the above relation, we first shift the Hubbard-Stratonovich field by its on-shell value $a_\mu \rightarrow a_\mu - \lambda_{\nu\mu}j^\nu$ which turns \eqref{4DMidInt} into,
\begin{align}
\left\langle \partial_\mu j^\mu_5 \right\rangle_a &= \frac{e^2}{16\pi^2} \left\langle \epsilon^{\mu\nu\rho\sigma} F_{\mu\nu} F_{\rho\sigma} \right\rangle_a \\
& + \frac{e}{2\pi^2}\epsilon^{\mu\nu\rho\sigma} \left\langle \partial_\mu A_\nu \partial_\rho (\lambda_{\sigma\alpha} a^\alpha - \lambda^2_{\sigma\alpha}j^\alpha) \right\rangle_a \nonumber \\
& + \frac{1}{4\pi^2}\epsilon^{\mu\nu\rho\sigma} \left\langle \partial_\mu (\lambda_{\nu\alpha} a^\alpha - \lambda^2_{\nu\alpha}j^\alpha) \partial_\rho (\lambda_{\sigma\beta} a^\beta - \lambda^2_{\sigma\beta}j^\beta) \right\rangle_a \, . \nonumber
\end{align}
In the above, all terms are odd in $a_\mu$ except the term $\epsilon^{\mu\nu\rho\sigma}\lambda_\nu^{\ \alpha}\lambda_\sigma^{\ \beta}\partial_\mu a_\alpha \partial_\rho a_\beta$ coming from the last line. But if the matrix $\lambda_\mu^{\ \nu}$ has only one non-zero element in each of its rows and columns, which is the case for all the special cases we focus on, then this term is odd in each component of $a_\mu$ separately, because the totally antisymmetric tensor $\epsilon^{\mu\nu\rho\sigma}$ does not allow repeated indices. Therefore, after integrating the auxiliary field $a_\mu$ out, all terms that contain $a_\mu$ vanish, and we are left with
\begin{align}\label{FullWard}
\partial_\mu j^\mu_5 &=\frac{e^2}{16\pi^2} \epsilon^{\mu\nu\rho\sigma} F_{\mu \nu}F_{\rho \sigma}
-\frac{e}{2\pi^2}\epsilon^{\mu\nu\rho\sigma}\lambda^2_{\sigma\alpha}\partial_\mu A_\nu\partial_\rho j^\alpha +\frac{1}{4\pi^2}\epsilon^{\mu\nu\rho\sigma}\lambda^2_{\nu\alpha}\lambda^2_{\sigma\beta}\partial_\mu j^\alpha \partial_\rho j^\beta \, .
\end{align}
We see that there is a term depending only on the electromagnetic field, a term depending only on the interactions, and a mixed term requiring the presence of both. Note that the first term on the right hand side is the anomalous term in the absence of interactions.
\subsection{Interpretation through Screening}
It is possible to look at the above result as a screening process which is the subject of this subsection.
The anomalous identity in the absence of interactions was previously given by \eqref{4DAnomalyEB} which is $\partial_\mu j^\mu_5=\frac{e^2}{2\pi^2} \vec{E}\cdot\vec{B}$. In comparison, by defining
\begin{eqnarray}\label{D}
\tilde{E}_i&=&E_i-\frac{1}{e}\left[\lambda^2_{i\beta}\partial_0-\lambda^2_{0\beta}\partial_i\right]j^\beta \, ,\\\label{H}
\tilde{B}_i&=&B_i-\frac{1}{2e}\epsilon_{ijk}\left[\lambda^2_{j\beta}\partial_k-\lambda^2_{k\beta}\partial_j\right]j^\beta \, ,
\end{eqnarray}
the anomalous identity in presence of interactions \eqref{FullWard} can also take the following appearance
\begin{equation}
\partial_\mu j^\mu_5 = \frac{e^2}{2\pi^2}\tilde{\vec{E}} \cdot \tilde{\vec{B}} \, ,
\end{equation}
in analogy with the $(1+1)$-dimensional counterpart, equation \eqref{2DIntScreen}. Viewed this way, one can interpret the effect of interactions as a sort of anomalous screening, where the interactions between charge densities and currents screen the components of the external electromagnetic field responsible for the chiral symmetry breaking.
The interacting identity above can equivalently be written as,
\begin{equation}
\partial_\mu j^\mu_5 = \frac{e^2}{16\pi^2}\epsilon^{\mu\nu\rho\sigma} \tilde{F}_{\mu\nu} \tilde{F}_{\rho\sigma} \, ,
\label{tildeF}
\end{equation}
where $\tilde{F}_{\mu\nu} \equiv F_{\mu\nu} - \frac{1}{e}\left[ \partial_\mu \left(\lambda^2_{\nu\alpha}j^\alpha \right) - \partial_\nu \left(\lambda^2_{\mu\alpha}j^\alpha \right) \right]$ with the brackets being the contribution from the interacting currents.
Let us for the moment consider the electromagnetic field to be dynamical and focus on the case $\lambda^2_{\mu\nu}=\lambda^2 \eta_{\mu\nu}$ where interactions respect Lorentz symmetry. Upon treating the electromagnetic field in a semi-classical fashion i.e. saddle-point approximation of $A_\mu$ while it is coupled to fluctuating fermionic fields $\psi$ and $\bar{\psi}$, we have $\partial_\nu F^{\mu\nu}=ej^\mu$ and consequently $\partial_\mu j^\mu = 0$. Therefore, Maxwell's equations for $\tilde{F}_{\mu\nu}$ are given by the four components of
\begin{equation}
\partial_\nu \tilde{F}^{\mu\nu} = \left(1-\frac{\lambda^2}{e^2}\Box \right) ej^\mu \, .
\end{equation}
On the other hand, using Maxwell's equations and the commutative nature of partial derivatives, we have
\begin{equation}
\tilde{F}_{\mu\nu}= \left( 1 - \frac{\lambda^2}{e^2}\Box \right) F_{\mu\nu} \, ,
\end{equation}
or equivalently,
\begin{equation}
\tilde{A}_\mu = \left( 1 - \frac{\lambda^2}{e^2}\Box \right) A_\mu \, ,
\end{equation}
where $\tilde{A}_\mu$ is defined through $\tilde{F}_{\mu\nu} \equiv \partial_\mu \tilde{A}_\nu - \partial_\nu \tilde{A}_\mu$. Therefore, looking both at equation \eqref{tildeF} and equations above we see that the anomalous chiral symmetry breaking is generated not only by the background fields but also by backreactions of the interacting matter.
\subsection{A Perturbative Discussion}
So far we have exploited only the non-perturbative language; it is worthwhile, however, to discuss the results in a perturbative language as well. Consider the vacuum expectation value of the chiral current $j^\mu_5$ obtained from the path-integral \eqref{DecoupledI},
\begin{equation}
\langle \bar{\psi} \gamma^\mu \gamma_5 \psi \rangle_{a;\psi} \equiv \int \mathcal{D}a \int \mathcal{D}\bar{\psi}\mathcal{D}\psi \, \bar{\psi} \gamma^\mu \gamma_5 \psi \, e^{S_a} \, ,
\label{VevChiral}
\end{equation}
where the subscript $a$ designates that we have separated the integral over auxiliary field $a_\mu$ from the rest of the integrals denoted by the subscript $\psi$. We then notice that
\begin{equation}
\langle \bar{\psi}(x) \gamma^\mu \gamma_5 \psi (x) \rangle_\psi = -\lim_{y \to x} \langle T^\star \gamma^\mu_{\alpha\sigma} \gamma_{5\sigma\beta} \psi_\beta (x) \bar{\psi}_\alpha (y) \rangle_\psi \, , \nonumber
\end{equation}
where $\alpha$, $\beta$ and $\sigma$ run over the spinor indices of the gamma matrices and fermionic operators, and $T^\star$ is the time-ordering operator. The two-point function on the right hand side can moreover be written as,
\begin{align}
\langle T^\star \psi (x) \bar{\psi} (y) \rangle_\psi &= \frac{1}{i\slashed{D}^x_g} \frac{1}{Z} \int \mathcal{D}[\bar{\psi}\psi] i\slashed{D}^x_g \psi(x)\bar{\psi}(y) e^{S_a} \nonumber \\
&= \frac{1}{i\slashed{D}^x_g} \frac{1}{Z} \int \mathcal{D}[\bar{\psi}\psi] \frac{\delta \, e^{S_a}}{\delta \bar{\psi}(x)} \bar{\psi}(y) \nonumber \\
&= - \frac{1}{i\slashed{D}^x_g} \frac{1}{Z} \int \mathcal{D}[\bar{\psi}\psi] \frac{\delta \bar{\psi}(y)}{\delta \bar{\psi}(x)} e^{S_a} = - \frac{1}{i\slashed{D}^x_g} \delta (x-y) \, .
\end{align}
Here $\slashed{D}^x_g$ is the generalized Dirac operator introduced previously, containing a partial derivative that acts on functions of $x$, and $1/\slashed{D}^x_g$ simply represents the inverse of that operator. Knowing this we can rewrite \eqref{VevChiral} as
\begin{equation}
\langle \bar{\psi} \gamma^\mu \gamma_5 \psi \rangle_{a;\psi} = \lim_{y \to x} \langle \text{tr} [ \gamma^\mu\gamma_5 (i\slashed{D}_g)^{-1} \delta(x-y) ] \rangle_a \, ,
\label{<j5>}
\end{equation}
where the trace is over spinor indices. Recalling the definition of the generalized Dirac operator, $i\slashed{D}_g = i\slashed{\partial} + e\slashed{A} + \lambda \slashed{a}$, we can expand $\slashed{D}_g^{-1}$ in powers of $e\slashed{A} + \lambda \slashed{a}$ and then carry out the integration over the auxiliary field $a_\mu$ order by order. The expansion of $\slashed{D}_g^{-1}$ has the following form,
\begin{align}
\frac{1}{i\slashed{D}_g} &= \frac{1}{i\slashed{\partial}} - \frac{1}{i\slashed{\partial}} (e\slashed{A}+\lambda\slashed{a})\frac{1}{i\slashed{\partial}} + \frac{1}{i\slashed{\partial}} (e\slashed{A}+\lambda\slashed{a}) \frac{1}{i\slashed{\partial}} (e\slashed{A}+\lambda\slashed{a}) \frac{1}{i\slashed{\partial}} + \dots \, ,
\label{PropagatorExpansion}
\end{align}
the last term of which gives rise to the triangle diagram, Fig.~\ref{fig:Triangle}. In hindsight we know that only this term contributes to the chiral anomaly and all other higher-order terms cancel out~\cite{AdlerBardeen}. The difference from the non-interacting case is the new element $\slashed{a}$ that appears, because of the local current-current interactions, alongside $\slashed{A}$. Without disturbing the triangle nature of chiral anomalies, this new term will nevertheless contribute to the chiral symmetry breaking. The integration over $a_\mu$ is Gaussian and peaked at $a_\mu = 0$. Therefore, the triangle diagram coming from \eqref{PropagatorExpansion} can be viewed as having a Gaussian distribution of photon legs peaked at $(a_\mu + A_\mu) = A_\mu$ (see Fig.~\ref{fig:Triangle}).
\begin{figure}
\includegraphics[width=\linewidth]{TriangleH.PNG}
\centering
\caption{The chiral anomaly is associated with triangle diagrams. Left: the triangle diagram responsible for chiral non-conservation, with one chiral-current vertex and two photon vertices. Right: the effect of interactions on chiral non-conservation, shown schematically as a Gaussian distribution of the photon legs.}
\label{fig:Triangle}
\end{figure}
The chiral anomaly is given by substituting the last term of \eqref{PropagatorExpansion} in place of $(i\slashed{D}_g)^{-1}$ in \eqref{<j5>} and taking the divergence of the whole equality, or in momentum space, contracting it with an external momentum.
Instead of investigating the chiral anomaly by looking directly at the divergence of the chiral current, we can investigate it through the response of a chiral system by considering the number current $j^\mu$. For this, we first introduce a constant axial field $\slashed{b}\gamma_5$ into the generalized Dirac operator and again expand $(i\slashed{D}_g)^{-1}$ in powers of $e\slashed{A} + \lambda\slashed{a}$, which to first order is given by,
\begin{equation}
\frac{1}{i\slashed{D}_g} = \frac{1}{i\slashed{\partial}-\slashed{b}\gamma_5} - \frac{1}{i\slashed{\partial}-\slashed{b}\gamma_5} (e\slashed{A}+\lambda\slashed{a})\frac{1}{i\slashed{\partial}-\slashed{b}\gamma_5} +\dots \, .
\label{AxialPropagatorExpansion}
\end{equation}
Similarly to \eqref{<j5>} we have a relation for $\langle j^\mu \rangle$ which, after using the above expansion and successive Fourier transformations, reads
\begin{equation}
\langle j^\mu \rangle_{a;\psi} = \left\langle \int_{k,q} \!\! \text{tr} \left[\gamma^\mu \frac{-e^{-i q_\nu x^\nu}}{\slashed{k}+\slashed{q}-\slashed{b}\gamma_5}(e\slashed{A}+\lambda\slashed{a})\frac{1}{\slashed{k}-\slashed{b}\gamma_5} \right] \right\rangle_a \! ,
\end{equation}
where $\int_{k,q}$ is defined as $\int \frac{d^4 k}{(2\pi)^4} \frac{d^4 q}{(2\pi)^4}$ and the first term of the expansion \eqref{AxialPropagatorExpansion} has vanished. We can see, for example by multiplying the numerator and denominator of the last fraction by $\slashed{k}+\slashed{b}\gamma_5$, that additional linear in $A_\mu$ terms will appear due to the axial field $b_\mu$. These are of the following form
\begin{align}
\langle j^\mu \rangle_{a;\psi} = \bigg\langle \int_{k,q} f(k,q) \text{tr} [ & e\gamma^\mu\gamma^\alpha\gamma^\nu\gamma^\beta \gamma_5 q_\alpha A_\nu(q) b_\beta \nonumber \\
&+ \lambda\gamma^\mu\gamma^\alpha\gamma^\nu\gamma^\beta \gamma_5 q_\alpha a_\nu(q) b_\beta ] \bigg\rangle_a +\dots \, ,
\end{align}
with $f(k,q)$ being some function of the two four-momenta $k^\mu$ and $q^\mu$. Since in the Euclideanized picture we have $\text{tr}[\gamma^\mu\gamma^\alpha\gamma^\nu\gamma^\beta\gamma_5]=4\epsilon^{\mu\alpha\nu\beta}$, the vacuum expectation value of the current density $j^\mu (x)$ acquires a contribution from $\epsilon^{\mu\alpha\nu\beta}b_\alpha \partial_\nu A_\beta$ and another from $ \epsilon^{\mu\alpha\nu\beta} b_\alpha \partial_\nu \langle j_\beta \rangle$. We will discuss the exact path-integral treatment of the above in section \ref{sec:Measurement}, when we calculate the response as a way to measure the effect of interactions on the chiral anomaly.
\section{Dimensional Reduction} \label{sec:DimRed}
The chiral symmetry breaking relation in the presence of interactions, i.e.\ equation \eqref{FullWard}, can be simplified when the system admits certain symmetries. For instance, when both the electric and the magnetic field are kept parallel to the $z$-axis, $\vec{E}=E_z \hat{z}$ and $\vec{B}=B_z \hat{z}$, we have rotational and translational symmetry across the $x$--$y$ plane. This ensures that on average the currents and derivatives along the $x$ and $y$ directions vanish. Therefore the last term in equation \eqref{FullWard} vanishes and it reduces to
\begin{equation}
\partial_\mu j^\mu_5=\frac{e^2}{16\pi^2}\epsilon^{\mu\nu\rho\sigma} F_{\mu \nu}F_{\rho \sigma}-\frac{e B_z}{2\pi^2}\lambda^2_{\sigma\alpha}\epsilon^{12\rho\sigma} \partial_\rho j^\alpha \, .
\end{equation}
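For completeness, since the averages of $\partial_{1,2}$ and of $j^{1,2}$ vanish, the index $\rho$ in the mixed term of \eqref{FullWard} is restricted to the $t$ and $z$ directions, forcing $\{\mu,\nu\}=\{1,2\}$, so that
\begin{equation*}
-\frac{e}{2\pi^2}\epsilon^{\mu\nu\rho\sigma}\lambda^2_{\sigma\alpha}\,\partial_\mu A_\nu\,\partial_\rho j^\alpha
\;\longrightarrow\;
-\frac{e}{2\pi^2}\lambda^2_{\sigma\alpha}\,\epsilon^{12\rho\sigma}\left(\partial_1 A_2 - \partial_2 A_1\right)\partial_\rho j^\alpha \, ,
\end{equation*}
with $\partial_1 A_2 - \partial_2 A_1$ identified with $B_z$ as in the equation above.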
The magnetic field along $\hat{z}$ generates Landau levels in the non-interacting theory. It would be surprising if raising the interaction strength from zero to a small value destroyed the Landau level structure completely. Thus it is quite reasonable to assume that for small interaction strengths or large magnetic fields there still exist Landau levels and therefore also a lowest Landau level (LLL). The contributions of all Landau levels to the chiral anomaly cancel out due to the spin degeneracy of each level, except for the lowest Landau level, which does not suffer from such a degeneracy. As we have discussed before (for example around equation \eqref{RawConservation} and the following paragraph) the chiral anomaly comes from the zero-modes, and the current situation is no exception.
Let us examine $j^\mu=\bar{\psi}\gamma^\mu\psi$ in this situation and in chiral representation: We can write,
\begin{align}
j^\mu= \bar{\psi}\gamma^\mu\psi &=
\begin{pmatrix}
u^\dagger & v^\dagger
\end{pmatrix}
\begin{pmatrix}
\bar{\sigma}^\mu & 0 \\
0 & \sigma^\mu
\end{pmatrix}
\begin{pmatrix}
u \\
v
\end{pmatrix} = u^\dagger \bar{\sigma}^\mu u + v^\dagger \sigma^\mu v \, ,
\end{align}
where $u$ and $v$ are the two-component Weyl spinors that constitute the Dirac spinor $\psi$, $\sigma^\mu\equiv (\mathbb{1},\sigma^k)$ and $\bar{\sigma}^\mu\equiv (\mathbb{1},-\sigma^k)$. When the spinors are situated in the LLL they have definite spins along $\hat{z}$, leaving them with only one non-zero component in the basis of $\sigma^3$; $j^\mu$, given as above, will then have non-vanishing components only for $\mu=0$ and $3$. The same goes for the chiral current $j_5^\mu$. Considering all this, we realize that the system has gone through a dimensional reduction from $(3+1)$ to $(1+1)$ dimensions. We can further see this by noticing that within the zero-modes of our system, which are all we are concerned with here, the relation $\epsilon^{21\mu\nu}\gamma_\nu=\gamma^\mu\gamma_5$ holds if we keep $\mu \in \{0,3\}$ and $\nu \in \{0,3\}$. Compare this with the relation $\epsilon^{\mu\nu}\gamma_\nu=\gamma^\mu\gamma_5$ which only holds in two-dimensional spacetimes and was used to derive equation \eqref{2DIntAnomaly}. Using this relation, the interaction-induced terms become proportional to $\partial_3 j_5^3$ and $\partial_0 j_5^0$; writing $\partial_0 j_5^0 = \partial_\mu j_5^\mu - \partial_3 j_5^3$ and collecting the $\partial_\mu j_5^\mu$ terms on the left hand side, we arrive at
\begin{equation}\label{reducedAnomaly}
\partial_\mu j^\mu_5= \frac{1}{1+n_0\lambda^2_{3}/\pi}\frac{e^2}{2\pi^2}E_zB_z -\frac{n_0\left(\lambda^2_{0} - \lambda^2_{3}\right)/\pi}{1+n_0\lambda^2_{3}/\pi} \partial_3 j^3_5 \, ,
\end{equation}
where $n_0 \equiv eB_z/2\pi$. Here we have also specialized to the case where the interaction tensor is diagonal. In deriving this equation we have made no assumptions about the nature of the Landau levels or how they arise, only that they exist, which seems a physically reasonable proposition, especially in the limit of a large background field. In the opposite limit of zero background field, \eqref{reducedAnomaly} reduces to the non-interacting result.
Furthermore, when $\lambda_0^2=\lambda_3^2=\lambda^2$ e.g. for Lorentz symmetry respecting interactions, equation \eqref{reducedAnomaly} simplifies even more to
\begin{eqnarray}
\!\!\!\!\!\!\!\!\!\!\! \partial_\mu j_5^\mu = \frac{1}{1+n_0 \lambda^2/\pi}\frac{e^2}{2\pi^2}E_z B_z = \frac{n_0}{1+n_0 \lambda^2/\pi}\frac{e}{\pi}E_z \, .
\label{reducededAnomaly}
\end{eqnarray}
We see that here the chiral symmetry breaking in the absence of interactions has been modified by a coefficient much like \eqref{2DIntAnomaly} in $(1+1)$ dimensions. The only difference here is the quantity $n_0$ which is nothing but the degeneracy of the lowest Landau level per unit area.
The similarity of \eqref{reducededAnomaly} to the $(1+1)$-dimensional case \eqref{2DIntAnomaly} can be further expounded by describing a $(1+1)$-dimensional system of $N$ fermion flavors with all-to-all current-current interactions. Consider the following path-integral,
\begin{align}
I_N = \int &\mathcal{D}a_\mu \prod_{i=1}^N \mathcal{D}\bar{\psi}_i\mathcal{D}\psi_i \exp \left\{ \int d^2x \left[\sum_{i=1}^N \bar{\psi}_i i\slashed{D}_g \psi_i +\frac{1}{2}a_\mu a^\mu \right] \right\} \, ,
\label{PIofN}
\end{align}
with $\slashed{D}_g = \gamma^\mu\left(\partial_\mu - ieA_\mu - i\lambda a_\mu \right)$. We can separately define $N$ current densities $j_i^\mu \equiv \bar{\psi}_i \gamma^\mu \psi_i$, one for each flavor. The on-shell value of $a_\mu$ is then given by $a^\mu = -\lambda \sum^N_{i=1} j^\mu_i$, which, when $a_\mu$ is integrated out, generates current-current interactions between all the current densities with coupling $-\lambda^2/2$, similar to what we have done before.
Under the simultaneous chiral transformation of all flavors,
\begin{equation}
\psi_i \longrightarrow e^{i\alpha\gamma_5} \psi_i, \ \ \ \ \ \bar{\psi}_i \longrightarrow \bar{\psi}_i e^{i\alpha\gamma_5} ,
\label{ChiralTransN}
\end{equation}
the action functional of \eqref{PIofN} goes under the following change,
\begin{equation}
\delta S_N = \int d^2x \, \alpha \partial_\mu \sum^N_{i=1} \bar{\psi}_i\gamma^\mu\gamma_5\psi_i \equiv \int d^2x \, \alpha\partial_\mu J_5^\mu \, ,
\end{equation}
hence $J_5^\mu$, defined as above, is classically conserved. However the measure, under each one of the chiral rotations in \eqref{ChiralTransN}, will introduce a copy of the $(1+1)$-dimensional chiral anomaly to break the classical conservation
\begin{equation*}
\partial_\mu J_5^\mu = N \left( \frac{e}{\pi}E_z + \frac{\lambda}{\pi}\epsilon^{\mu\nu}\partial_\mu a_\nu \right) = N \frac{e}{\pi}E_z - N\frac{\lambda^2}{\pi}\partial_\mu J_5^\mu \, ,
\end{equation*}
where $e E_z/\pi$ is the anomalous term for one flavor in the absence of interactions, and the last equality stems from integrating the auxiliary field $a_\mu$ out. For $J_5^\mu$ to appear in the last equality we have again used the relation $\epsilon^{\mu\nu}\gamma_\nu = \gamma^\mu\gamma_5$ between two-dimensional gamma matrices. A simple rearrangement of the above equation yields,
\begin{equation}
\partial_\mu J_5^\mu = \frac{N}{\left( 1+N\lambda^2/\pi \right)} \frac{e}{\pi}E_z \, .
\label{2DNAnomaly}
\end{equation}
Upon identifying $N$ with the degeneracy of LLL per area $n_0$, equations \eqref{2DNAnomaly} and \eqref{reducededAnomaly} become identical.
Moreover, we can take the same steps as in section \ref{sec:EffAct} for this $N$-flavor fermionic system to transform the action functional into
\begin{equation}
S_N = \int d^2x \sum_{i=1}^N \bar{\Psi}_ii\gamma^\mu \left[ \partial_\mu -i\frac{e}{\sqrt{1+N\lambda^2/\pi}}A_\mu \right] \Psi_i \, ,
\end{equation}
which means that in $(1+1)$ dimensions we can treat an $N$-flavor interacting fermionic system as an $N$-flavor free fermionic system where the coupling to external electromagnetic field has been modified by $(1+N\lambda^2/\pi)^{-1/2}$. Consistently, if we start with a $(3+1)$-dimensional free fermionic system which has gone through a dimensional reduction due to the presence of parallel magnetic and electric fields, but has an electric charge given by
\begin{equation}
\tilde{e} \equiv \frac{e}{\sqrt{1+\frac{eB_z}{2\pi}\frac{\lambda^2}{\pi}}} \, ,
\end{equation}
we get the same result as \eqref{reducededAnomaly}.
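Indeed, the free-fermion anomaly evaluated with the dressed charge $\tilde{e}$,
\begin{equation*}
\frac{\tilde{e}^2}{2\pi^2}E_z B_z = \frac{1}{1+n_0\lambda^2/\pi}\,\frac{e^2}{2\pi^2}E_z B_z \, ,
\end{equation*}
coincides with the interacting result \eqref{reducededAnomaly}.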
\section{Measurable Consequences} \label{sec:Measurement}
The models that we have been considering so far can be realized in many physical systems. In lattice systems of this sort the band structure forms Dirac cones and, around the point where the two low-energy bands touch, the description of the system is given by fermionic path-integrals such as $\int \mathcal{D}[\bar{\psi}\psi] \exp\{ i \int \! d^n \!\!\; x \, \bar{\psi}i\slashed{D}\psi \} $. One can point to graphene, topological insulators and liquid $^3\text{He}$ as a few examples~\cite{Graphene, EEScales,TopolInsulator,Topol3D,TopolBirth,NonAbelianAnyons,VolovikExotic,VolovikUniverse}. When the energy dispersion has genuinely doubly degenerate Dirac cones, they can be separated into two chirally distinct cones by introducing a chiral or time-reversal breaking element. Among materials that possess such a feature are Weyl semimetals~\cite{BalentsBurkov,TopologicalMaterialsWeyl,Zhou,GuoFan,Wang}---a type of gapless topological matter with distinctive features including a large negative magnetoresistance~\cite{NielsenNinomiya, SonSpivak,Burkov, FukushimaKharzeevWarringa} and an anomalous Hall response~\cite{ZyuzinBurkov,ChenWuBurkov}. The chiral element in Weyl semimetals, which separates the otherwise degenerate cones, appears in the low-energy description as the additional term $\int d^4 x\, b_\mu j^\mu_5$ in the action functional, with $b_\mu$ constant. When projected onto left- and right-handed spinors this term becomes $\int d^4 x\, b_\mu (j^\mu_R - j^\mu_L)$, which clearly destroys the preexisting symmetry $L\leftrightarrow R$ under the exchange of left- and right-moving fermions.
\subsection{Prior to Interactions}
Before jumping to the interacting case, let us first derive the anomalous transport of a chiral system. The low energy description of the Weyl semimetal is provided by the following path-integral,
\begin{align}
Z_b[A_\mu]= \int \mathcal{D}[\bar{\psi}\psi] \exp\left\{\int d^4x \left[ \bar{\psi} i\slashed{D} \psi + b_\mu j^\mu_5 \right] \right\} \, ,
\end{align}
where as before $\slashed{D}=\gamma^\mu(\partial_\mu -ieA_\mu)$, with $A_\mu$ an external field, and the spinor degrees of freedom are integrated over. The subscript in the partition function $Z_b$ indicates that it carries a chiral element (or Weyl separation) which breaks the $L\leftrightarrow R$ symmetry.
Since $b_\mu$ is constant we can always perform a chiral transformation to remove the $b_\mu$ term, which can equivalently be viewed as a $-ib_\mu \gamma_5$ contribution inside the Dirac operator. This transformation is given by $\psi \rightarrow e^{ib_\mu x^\mu\gamma_5}\psi$, $\bar{\psi} \rightarrow \bar{\psi} e^{ib_\mu x^\mu\gamma_5}$. But of course the measure is not invariant under such a transformation and introduces an additional term to the action,
\begin{equation}
Z_b =\int \mathcal{D}[\bar{\psi}\psi] \exp\left\{\int d^4x \bigg [ \bar{\psi} i\slashed{D} \psi - \frac{e^2}{4\pi^2} \epsilon^{\mu\nu\rho\sigma} b_\mu A_\nu \partial_\rho A_\sigma \bigg ] \right\} \, ,
\end{equation}
where the same path-integral, $Z_b[A_\mu]$, now comes with a different action in which $b_\mu j^\mu_5$ has turned into a Chern-Simons-like term. Here we can establish a relation between the partition functions that carry the chiral element and the ones that do not:
\begin{align}
&Z_b[A_\mu] = Z[A_\mu] \exp{\left\{- \frac{e^2}{4\pi^2}\int d^4x \, \epsilon^{\mu\nu\rho\sigma} b_\mu A_\nu \partial_\rho A_\sigma \right\} } \, , \nonumber \\
& \mbox{with} \quad \quad \ Z[A_\mu] \equiv Z_b[A_\mu] \Big|_{b_\mu=0} \, .
\label{ZZRelation}
\end{align}
We wish to know how much the chiral element contributes to the electrical current. To obtain the current in non-zero $b_\mu$ we vary $Z_b$ with respect to the external gauge field and write $ e \langle j^\mu \rangle_b = \frac{1}{Z_b}\frac{\delta Z_b}{\delta A_\mu}$. Correspondingly we have the current $e \langle j^\mu \rangle = \frac{1}{Z}\frac{\delta Z}{\delta A_\mu}$ when $b_\mu$ has been excluded. The response due to the chiral element, which we may call the anomalous response $j_A^\mu$, is obtained from the difference of these two. Varying the above relation between $Z_b$ and $Z$ with respect to the external gauge field $A_\mu$ gives us just that,
\begin{equation}
e j_A^\mu = e \langle j^\mu \rangle_b- e \langle j^\mu \rangle = \frac{e^2}{2\pi^2}\epsilon^{\mu\nu\rho\sigma}b_\nu \partial_\rho A_\sigma \, .
\label{AnomalousCurrent}
\end{equation}
Note that the anomalous current is conserved i.e. $\partial_\mu j^\mu_A = \frac{e}{2\pi^2}b_\nu \epsilon^{\mu\nu\rho\sigma}\partial_\mu\partial_\rho A_\sigma=0$.
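For completeness, the functional derivative used here, for constant $b_\mu$, is
\begin{equation*}
\frac{\delta}{\delta A_\mu(x)} \int d^4y \, \epsilon^{\alpha\nu\rho\sigma} b_\alpha A_\nu \partial_\rho A_\sigma = 2\,\epsilon^{\alpha\mu\rho\sigma} b_\alpha \partial_\rho A_\sigma \, ,
\end{equation*}
where the two contributions, from the undifferentiated and the differentiated $A$, coincide after an integration by parts and a relabeling of indices; multiplying by $-e^2/4\pi^2$ and using the antisymmetry of $\epsilon$ reproduces \eqref{AnomalousCurrent}.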
\subsection{Interactions Included}
Having the current formula \eqref{AnomalousCurrent} at hand, and remembering our experience in deriving equation \eqref{tildeF}, we can readily guess that substituting $eA_\sigma$ by $e\tilde{A}_\sigma = eA_\sigma - \lambda^2_{\sigma\alpha} \langle j^\alpha \rangle_b$ should give us the anomalous current in the presence of interactions. Nevertheless, we are going to take more careful steps and derive the interacting formula rigorously, as follows.
The low energy description of an interacting Weyl material can then be given by the following path-integral
\begin{align}
I_b =\int \mathcal{D}[\bar{\psi}\psi]\exp \left\{ \int d^4 x \left[ \bar{\psi} i\slashed{D} \psi +b_\mu j^\mu_5 -\frac{1}{2}\lambda^2_{\mu\nu} j^\mu j^\nu \right] \right\} \, .
\label{Ic}
\end{align}
As we have done several times now, we decouple the current-current interaction by introducing an auxiliary field $a_\mu$, recasting the path-integral as
\begin{align}
I_b =\int &\mathcal{D}[\bar{\psi}\psi a_\mu] \exp \left\{ \int d^4x \left[ \bar{\psi} i \slashed{D}_g \psi +b_\mu j^\mu_5 +\frac{1}{2}a_\mu a^\mu \right] \right\} \, ,
\end{align}
where as before the generalized Dirac operator is given by $\slashed{D}_g \equiv \gamma^\mu\left( \partial_\mu -ieA_\mu - i \lambda_{\mu\nu} a^\nu \right)$ and the previous form of the path-integral is obtained by integrating over $a_\mu$.
We are going to exploit the fact that both $a_\mu$ and $b_\mu$ can decouple from fermions, the former by a shift with its on-shell value and the latter by a chiral rotation. Let us first decouple $b_\mu$ from fermions via the chiral rotation $\psi \rightarrow e^{ib_\mu x^\mu\gamma_5}\Psi$, $\bar{\psi} \rightarrow \bar{\Psi} e^{ib_\mu x^\mu\gamma_5}$.
This rotation removes the $b_\mu j^\mu_5$ term from the classical action (the piece generated by the kinetic term cancels it), while the remaining terms are invariant; the measure, however, transforms with a non-trivial Jacobian, and the path-integral becomes
\begin{align}
I_b =\int \mathcal{D}[\bar{\Psi}\Psi a_\mu] \exp \int d^4x \bigg [ &\bar{\Psi} i \slashed{D}_g \Psi +\frac{1}{2}a_\mu a^\mu \\
& - \frac{1}{4\pi^2} \epsilon^{\mu\nu\rho\sigma} b_\mu \left(eA_\nu + \lambda_{\nu\alpha}a^\alpha \right) \partial_\rho \left(eA_\sigma + \lambda_{\sigma\beta}a^\beta \right) \bigg ] \, . \nonumber
\end{align}
The relation corresponding to \eqref{ZZRelation} is given for the interacting case by
\begin{align}
&I_b \equiv \int \mathcal{D}a_\mu Z_b [A_\mu,a_\mu] = \int \mathcal{D}a_\mu Z[A_\mu,a_\mu] \exp \left\{ -\frac{e^2}{4\pi^2}\int d^4x \, \epsilon^{\mu\nu\rho\sigma} b_\mu \mathcal{A}_\nu \partial_\rho \mathcal{A}_\sigma \right\} \, , \nonumber \\
&\mbox{with} \quad \quad Z_b[A_\mu,a_\mu]= \int \mathcal{D}[\bar{\psi}\psi] \exp\left\{\int d^4x \left[ \bar{\psi} i\slashed{D}_g \psi + b_\mu j^\mu_5 + \frac{1}{2}a_\mu a^\mu\right] \right\} \, , \nonumber \\
&\text{and } \quad \quad Z [A_\mu,a_\mu]\equiv Z_b [A_\mu,a_\mu]\Big|_{b_\mu=0} \, , \label{ZZRelationI}
\end{align}
where $e \mathcal{A}_\mu \equiv e A_\mu + \lambda_{\mu\alpha} a^\alpha$. Therefore, similar to \eqref{AnomalousCurrent}, upon varying \eqref{ZZRelationI} with respect to $A_\mu$, we have the following relation for currents,
\begin{equation}
\int \mathcal{D}a_\mu \langle j^\mu \rangle_b \, Z_b [A_\mu,a_\mu] = \int \mathcal{D}a_\mu \left( \langle j^\mu \rangle + \frac{e}{2\pi^2}\epsilon^{\mu\nu\rho\sigma}b_\nu \partial_\rho \mathcal{A}_\sigma \right) \, Z_b [A_\mu,a_\mu] \, .
\end{equation}
We now rotate the fermionic degrees of freedom $\Psi$ back to the original $\psi$ by the reverse chiral rotation. Doing this leaves $Z_b$, $j^\mu$, and consequently everything in the above relation unchanged, but it is nevertheless crucial: first, because the relevant correlation functions are those expressed in terms of the original fermions, and second, to make the integration over the auxiliary field $a_\mu$ possible.
Now it is time to decouple $a_\mu$ from the fermions by the shift $a_\alpha \rightarrow a_\alpha - \lambda_{\beta\alpha}j^\beta$, which brings back the current-current interaction of \eqref{Ic}. Note that the reverse chiral rotation to the original fermions has removed the Chern-Simons-like term from the action, which, after the shift in the auxiliary field, is again quadratic in $a_\mu$. This means that the term $\epsilon^{\mu\nu\rho\sigma}b_\nu\partial_\rho (\lambda_{\sigma\alpha}a^\alpha)$, which is linear in $a_\mu$, will be eliminated from $\epsilon^{\mu\nu\rho\sigma}b_\nu \partial_\rho \mathcal{A}_\sigma$ after integration over the auxiliary field. At this stage both the current expectation values and $Z_b$ are untied from $a_\mu$, meaning that the $a_\mu$-terms can be factored out of them. Thus we are left with
\begin{equation}
ej_A^\mu = \frac{e}{2\pi^2}\epsilon^{\mu\nu\rho\sigma}b_\nu\partial_\rho\left(e A_\sigma - \lambda^2_{\sigma\alpha} \langle j^\alpha \rangle_b \right) \, .
\label{AnomalousCurrentI}
\end{equation}
This result is therefore true for all $\lambda_{\mu\nu}$.
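As in the non-interacting case, the right hand side of \eqref{AnomalousCurrentI} is automatically divergence-free,
\begin{equation*}
\partial_\mu \left( e j_A^\mu \right) = \frac{e}{2\pi^2}\epsilon^{\mu\nu\rho\sigma}b_\nu\,\partial_\mu\partial_\rho\left(e A_\sigma - \lambda^2_{\sigma\alpha} \langle j^\alpha \rangle_b \right) = 0 \, ,
\end{equation*}
by the antisymmetry of $\epsilon^{\mu\nu\rho\sigma}$ and the constancy of $b_\nu$.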
\subsection{Anomalous Transport}
Remember that the total current, $\langle j^\mu \rangle_b$, is the sum of the anomalous current, $j_A^\mu$, and the non-anomalous current. When the non-anomalous part of the total current vanishes, we can summarize the result \eqref{AnomalousCurrentI} as $ej^\mu = \frac{e^2}{4\pi^2}\epsilon^{\mu\nu\rho\sigma}b_\nu \tilde{F}_{\rho\sigma}$, with $\tilde{F}_{\mu\nu}$ defined below equation \eqref{tildeF}. Let us simplify the current equation by specializing to the case $b^\mu=b_z \delta^\mu_3$ and the Lorentz invariant interaction $\lambda_{\mu\nu}=\lambda\eta_{\mu\nu}$, which corresponds to a $2b_z$ separation of the Weyl nodes along the $k_z$ direction in momentum space and to the natural current-current interaction, namely the density-density interaction made Lorentz invariant. When only $b_3$ is non-zero, $j^3 \equiv j^z$ vanishes, and for the other spatial directions we have $ej^x = \sigma^{xy}_0 \tilde{E}^y$ with $\sigma^{xy}_0=e^2 b_z/2\pi^2$, and similarly for $j^y$. As we can see from the definition \eqref{D}, in the equilibrium and homogeneous limit $\tilde{\vec{E}}$ goes to $\vec{E}$. Therefore in this limit we end up with the same anomalous Hall response formula as in the non-interacting case. More explicitly,
\begin{equation}\label{HallResponse}
j^x=\frac{eb_z}{2\pi^2}E^y-\lambdabar\left[\partial_t j_y-\partial_{y}\rho\right] \, ,
\end{equation}
with $\lambdabar\equiv \lambda^2 b_z / 2\pi^2$. The first term gives the quantum anomalous Hall current in either non-interacting materials or the equilibrium and homogeneous limit; in both cases the second term vanishes. Therefore, even though interactions do not affect the equilibrium Hall current, they may nevertheless contribute to the nonequilibrium or inhomogeneous response.
To obtain the Hall conductivity in the homogeneous but nonequilibrium limit we combine equation \eqref{HallResponse} with the corresponding expression for $j^y$ and after switching to Fourier space we get
\begin{equation}
\sigma^{xy}(\omega)=\left[1+\left(\lambdabar \omega\right)^2\right]^{-1}\sigma^{xy}_0 \, ,
\end{equation}
where again, $\sigma^{xy}_0 = e^2b_z/2\pi^2$ is the Hall conductivity in the absence of interactions. There is also a contribution to the longitudinal conductivity arising solely due to interplay of interactions with the Hall conductivity,
\begin{equation} \label{LongResponse}
\! \sigma^{xx}(\omega)= \frac{\lambdabar \omega}{1+\left(\lambdabar \omega\right)^2} \sigma^{xy}_0 = \lambdabar \omega \sigma^{xy}(\omega) \, .
\end{equation}
For small $\omega$, i.e.\ small deviations from equilibrium, the leading order of $\sigma^{xx}(\omega)$ is simply $\lambdabar \omega \sigma_0^{xy}$, whereas at large enough $\omega$, provided the current formulation still holds, the anomalous longitudinal conductivity $\sigma^{xx}(\omega)$ vanishes along with $\sigma^{xy}(\omega)$. It is noteworthy that had the non-anomalous part of the current been non-zero, its contribution could appear as an additional non-anomalous longitudinal conductivity $\sigma_0^{xx}$ on the right hand side of \eqref{LongResponse}.\footnote{We can see this simply by substituting $j_A^\mu$ with $\langle j^\mu \rangle_b - \langle j^\mu \rangle$ in equation \eqref{AnomalousCurrentI} and then writing the spatial part of the non-anomalous current as conductivity times electric field, in particular $\langle j^x \rangle = \sigma_0^{xx} E^x$. It then becomes clear that, whatever the anomalous or Hall conductivity is, the total conductivity is obtained by adding $\sigma^{xx}_0$ to its longitudinal part.}
But the anomalous Hall effect is not the only anomalous transport phenomenon that might be influenced by interactions. What about other responses in other limits; do interactions, for example, affect the equilibrium density response to a change of the magnetic field? To see if that is the case, it is both convenient and insightful to look back at the dimensionally reduced system of section \ref{sec:DimRed}, where, as we calculated in \eqref{reducededAnomaly}, the effect of interactions on the chiral symmetry breaking is simply a factor that depends on the degeneracy $n_0 = eB_z/2\pi$ of the LLL, while a strong magnetic field has already generated a background charge density. We start from the path-integral \eqref{Ic} in the dimensionally reduced setup where the full Lorentz symmetry is broken down to a rotational symmetry in the $x$-$y$ plane and a boost symmetry along $t$-$z$, with $\lambda^2_{\mu\nu}=\lambda^2(\eta_{0\mu}\eta_{0\nu} + \eta_{3\mu}\eta_{3\nu})$, $\vec{E}=E_z \hat{z}$ and $\vec{B}=B_z \hat{z}$. We also restrict $b_\mu$ to $b_z\delta_\mu^3$ as before. Removing $b_z j^z_5$ from the Lagrangian by the chiral rotation employed in the previous section adds to the action the right hand side of the following relation:
\begin{align}
\int d^4x \, b_z j^z_5 =
\frac{e^2/4\pi^2}{1+n_0 \lambda^2/\pi} \int d^4x \, \epsilon^{\nu 3 \rho\sigma} b_z A_\nu\partial_\rho A_\sigma \, .
\label{reducedCS}
\end{align}
The above relation can be confirmed by integrating the modified chiral charge conservation law \eqref{reducededAnomaly} multiplied by the parameter of the chiral rotation, $b_\mu x^\mu$. Now that the right hand side of \eqref{reducedCS} substitutes $\int d^4 x \, b_z j^z_5$ inside the action, varying the path-integral with respect to $A_0 (x)$ gives us the anomalous density:
\begin{equation}
e j^0_A=e \rho_A=\frac{n_0}{1+n_0\lambda^2/\pi}\frac{e}{\pi}b_z \, .
\end{equation}
Remember that $n_0$ depends on $B_z$. Also note that in reaching the above equation we have made no assumptions regarding the non-anomalous part of the total current.
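For completeness, the variation behind the last equation reads, for constant $b_z$,
\begin{equation*}
e\rho_A = \frac{e^2/4\pi^2}{1+n_0\lambda^2/\pi}\; 2\,\epsilon^{03\rho\sigma} b_z\, \partial_\rho A_\sigma
= \frac{e^2/2\pi^2}{1+n_0\lambda^2/\pi}\; b_z B_z
= \frac{n_0}{1+n_0\lambda^2/\pi}\,\frac{e}{\pi}\, b_z \, ,
\end{equation*}
where the factor of two arises just as in \eqref{AnomalousCurrent}, $\epsilon^{03\rho\sigma}\partial_\rho A_\sigma$ reduces to the magnetic field $B_z$, and $n_0 = eB_z/2\pi$ has been used in the last step.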
If we change the background magnetic field by a small amount, $B_z \rightarrow B_z + \delta B_z$, and then let the system relax to the new equilibrium, the background density will be altered by a small amount. For positive $\lambda^2$, it is given to first order by
\begin{equation}
e\delta\rho_A=\frac{\delta n_0}{\left(1+n_0 \lambda^2/\pi\right)^2}\frac{e}{\pi}b_z \, ,
\label{DensityResponse}
\end{equation}
with $\delta n_0 = e\delta B_z/2\pi$. Here, provided the magnetic field is kept large enough for the LLL to be formed, there are two domains of magnetic field strength in which the density response behaves distinctly. In the large-$B_z$ limit there is no density response to a change of the magnetic field, whereas at low enough $B_z$ the density responds linearly.
Notice that due to dimensional reduction the chiral current in the longitudinal direction $\langle j^z_5 \rangle$ is equivalent to density $\langle \rho \rangle$ through the relation $\epsilon^{\mu\nu}\gamma_\nu = \gamma^\mu \gamma_5$ which holds in two dimensional spacetimes. Therefore, equation \eqref{DensityResponse} can be viewed as the generation of a chiral current in response to a change in the magnetic field which is known as the chiral separation effect (CSE)~\cite{Vilenkin, MetlitskiZhitnitsky, NewmanSon}.
At this point we briefly mention that the current formula \eqref{AnomalousCurrentI} can also be exploited for the interacting chiral magnetic effect, when the separation of the Weyl points has a temporal component $b_t \neq 0$. Moreover, investigations through bosonization and the random phase approximation shed more light on the subject. These are discussed in reference~\cite{CAICMS}.
\section{Anomalous Modes} \label{sec:DynamicAnomaly}
In this section we briefly turn our attention to the interacting anomalous current formula \eqref{AnomalousCurrentI} to better uncover the dynamics that resides inside it. Turning the external electromagnetic field off makes it clear that the interplay between interactions and the anomaly alone creates dynamical behavior among the anomaly-generated phenomena. Let us then set $A_\mu =0$ and, for simplicity, choose $b_\mu = b_z \delta^3_\mu$ together with the Lorentz invariant interaction $\lambda^2_{\mu\nu}=\lambda^2\eta_{\mu\nu}$, to obtain from equation \eqref{AnomalousCurrentI} the following formula,
\begin{equation}
j^\mu = \lambdabar \epsilon^{3\mu\alpha\beta} \partial_\alpha j_\beta \, ,
\label{DynamicCurrent}
\end{equation}
where we have dropped the subscript in $j^\mu_A$, since we are considering the situation where only the anomalous current is present, and we again use the notation $\lambdabar \equiv \lambda^2 b_z / 2\pi^2$ as in the previous section.
Since only the $\mu=3$ component of $b_\mu$ is non-zero, the Levi-Civita tensor ensures that $j^z$ is zero. The other three equations are
\begin{align}
j^x &= \lambdabar \left( \partial_y \rho - \partial_t j_y \right) \label{jx} \\
j^y &= \lambdabar \left( \partial_t j_x -\partial_x \rho \right) \label{jy} \\
\rho &= \lambdabar \left( \partial_x j_y - \partial_y j_x \right) = \lambdabar \vect{\nabla} \times \vect{j} \label{rho} \, .
\end{align}
The first two equations are the ones that give rise to dynamical behavior, while the third one is a constraint equation involving no time-derivative. Here the curl is by nature a two-dimensional differential operator in the $x$-$y$ plane. But since $j^z$ is kept zero and there are no equations, hence no dynamics, along the $z$-direction, we can pretend that the differential operators are three-dimensional whenever it is desirable.
To see how charge disappears from a region let us take the time derivative of \eqref{rho} and write $\partial_t \rho = -\lambdabar \partial_t \vect{\nabla} \times \vec{j}$. On the other hand, by adding up the partial derivatives of \eqref{jx} and \eqref{jy} respectively along $x$ and $y$, we have $\vect\nabla\cdot\vect j = \lambdabar \partial_t \vect\nabla \times \vect j$. Comparing these two relations tells us that the anomalous charge is locally conserved:~$\partial_t \rho = - \vect\nabla \cdot \vect j$. This of course should come as no surprise, since we are reiterating the fact that $\partial_\mu j^\mu$ vanishes due to commutativity of partial derivatives and anti-commutativity of the indices of the Levi-Civita tensor in \eqref{AnomalousCurrentI} or \eqref{DynamicCurrent}.
Furthermore, incorporating the dynamical equations \eqref{jx} and \eqref{jy} in the constraint equation \eqref{rho} gives $\rho = \lambdabar^2 \left( \partial_t \vect\nabla \cdot \vect j + \vect\nabla^2 \rho \right)$. Using the conservation of charge this becomes an equation for the density alone:~$\rho = \lambdabar^2 \left( \vect\nabla^2 \rho - \partial_t^2 \rho \right)$. A similar relation holds for $j^x$ and $j^y$. Thus we end up with a Klein-Gordon equation for each of the three non-zero components of $j^\mu$,
\begin{equation}
\left( \partial_t ^2 - \vect\nabla^2 + \lambdabar^{-2} \right) j^\mu =0 \, .
\label{KGwoA}
\end{equation}
Components of $j^\mu$ only depend on each other through the continuity equation; otherwise they have separate dynamics. Solutions to the above equation are relativistic propagating distributions $j^\mu = f^\mu(\omega t - \vec{k}\cdot\vec{x})$ where $\omega$ and $\vec{k}$ satisfy the relation $\omega^2 - k^2 = \lambdabar^{-2}$. Here $\lambdabar$ is the reduced Compton wavelength of these waves. By allowing the Fermi velocity $v_F$ and $\hbar$ to reappear, we can read off from $\lambdabar = \hbar/m v_F$ the mass attributed to these relativistic waves. In terms of the parameters of our model then,
\begin{equation}
m=\hbar/\lambdabar v_F =h^2/2v_F \lambda^2 b_z \, .
\end{equation}
So in the limit of very strong interactions or well-separated Weyl points, the waves become massless and propagate with the Fermi velocity.
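To make the dispersion quoted above explicit, inserting a plane wave $j^\mu \propto \exp\left[ i\left( \vec{k}\cdot\vec{x} - \omega t \right) \right]$ into \eqref{KGwoA} gives
\begin{equation*}
-\omega^2 + \vec{k}^2 + \lambdabar^{-2} = 0 \qquad \Longleftrightarrow \qquad \omega^2 - \vec{k}^2 = \lambdabar^{-2} \, ,
\end{equation*}
which is the massive relativistic dispersion relation with inverse Compton wavelength $\lambdabar^{-1}$.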
Electromagnetic fields, then, act as sources for equation \eqref{KGwoA}, and we are going to confirm this by using equation \eqref{AnomalousCurrentI} on itself. Let us, for the sake of accessibility, rewrite it below
\begin{equation}
j^\mu = \epsilon^{\mu\rho\sigma 3} \left(\frac{eb_z}{2\pi^2} \partial_\rho A_\sigma - \lambdabar \partial_\rho j_\sigma \right) \, .
\end{equation}
We are going to restrict all indices to $\{0,1,2\}$, since there is no need to bother about the third component. Using the above equation twice gives,
\begin{align}
j^\mu &= \epsilon^{\mu\rho\sigma 3} \frac{eb_z}{2\pi^2} \partial_\rho A_\sigma - \lambdabar\left(\eta^{\mu\alpha}\eta^{\rho\beta} - \eta^{\mu\beta}\eta^{\rho\alpha}\right) \partial_\rho \left(\frac{eb_z}{2\pi^2} \partial_\alpha A_\beta - \lambdabar \partial_\alpha j_\beta \right) \, .
\end{align}
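The rearranging that follows uses only two elementary identities (with all indices still restricted to $\{0,1,2\}$ as above), namely
\begin{align*}
\left(\eta^{\mu\alpha}\eta^{\rho\beta} - \eta^{\mu\beta}\eta^{\rho\alpha}\right) \partial_\rho \partial_\alpha A_\beta &= \partial^\mu \left(\partial_\beta A^\beta\right) - \partial^\rho\partial_\rho A^\mu = -\partial_\nu F^{\nu\mu} \, , \\
\left(\eta^{\mu\alpha}\eta^{\rho\beta} - \eta^{\mu\beta}\eta^{\rho\alpha}\right) \partial_\rho \partial_\alpha j_\beta &= \partial^\mu \left(\partial_\beta j^\beta\right) - \partial^\rho\partial_\rho j^\mu = -\partial^\rho\partial_\rho j^\mu \, ,
\end{align*}
where the last equality of the second line uses the conservation of the current.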
By employing the conservation of current and after some rearranging we have,
\begin{equation}
\left(1 + \lambdabar^2 \Box \right) j^\mu = \frac{eb_z}{4\pi^2} \left( \epsilon^{\mu\rho\sigma 3} F_{\rho\sigma} + 2\lambdabar \partial_\rho F^{\rho\mu} \right) \, ,
\label{Mawell}
\end{equation}
where we recall $\Box \equiv \partial_t^2 - \vect\nabla^2$ is the d'Alembertian. Again, the above equation describes three massive waves sourced by the electromagnetic field.
As an aside, note that the role of the speed of light is played by the Fermi velocity $v_F \neq c$ in the d'Alembertian. Were this not the case, shining electromagnetic waves on the material would have a rather similar effect to the non-interacting case. Let us for a moment set $v_F=c$ and then apply the inverse of $(1+\lambdabar^2\Box)$ from the left on both sides of the above equation. In effect, doing this removes the d'Alembertian, since $\Box F_{\mu\nu}$ vanishes in this case.
The origin of the gauge field $A_\mu$ was external, meaning that no dynamical terms were introduced for the gauge field inside the action and it was treated as a source without being integrated over. Now it is rather interesting that regardless of the way the electromagnetic field started, Maxwell's equations have forced their way through, utilizing the interplay of chiral anomaly and interactions. Notice the last term in \eqref{Mawell} vanishes either when interactions are absent or no chiral element $b_\mu$ exists, while the second to last term is a purely axionic one. We can therefore conclude that there exist massive modes coupled to electromagnetic fields obeying axionic electrodynamics all due to the interaction-anomaly interplay.
\section{Introducing Gravity} \label{sec:Gravity}
Using what we have so far constructed, it is easy to see that a mixed chiral-gravitational anomalous relation in (3+1)-dimensions~\cite{Witten} will also be modified in a similar way. In the presence of curvature but absence of interactions, the four dimensional anomalous relation will host an additional geometrical Pontryagin density,
\begin{equation}
\nabla_\mu j_5^\mu = \frac{e^2}{16\pi^2}\epsilon^{\mu\nu\rho\sigma}F_{\mu\nu}F_{\rho\sigma} + \frac{1}{384\pi^2}\epsilon^{\mu\nu\rho\sigma}R_{\mu\nu}^{\ \ \ \alpha\beta} R_{\rho\sigma\alpha\beta} \, ,
\label{WardCurved}
\end{equation}
with $R^\mu_{\ \nu\alpha\beta}$ being the Riemann curvature tensor. We need to be more careful about curved space notions such as the covariant derivative $\nabla_\mu$ and the fact that $\epsilon^{\mu\nu\rho\sigma}$ is a Levi-Civita \textit{tensor} defined as $\varepsilon^{\mu\nu\rho\sigma}/\sqrt{|g|}$, with $\varepsilon^{\mu\nu\rho\sigma}$ being the totally anti-symmetric Levi-Civita \textit{symbol} and $g$ the determinant of the curved spacetime metric $g_{\mu\nu}$.\footnote{The covariant derivative $\nabla_\mu \equiv \partial_\mu + \Gamma_\mu$ is defined with respect to the object it is acting on. For spinors, $\Gamma_\mu$ carries spinor indices, while for vectors or higher-rank tensors it only has spacetime components. The Riemann tensor is then defined as,
\begin{equation}
R^\mu_{\ \nu \alpha \beta} A_\mu = -\left[\nabla_\alpha,\nabla_\beta\right] A_\nu \, .
\end{equation}
The Levi-Civita \textit{symbol}, $\varepsilon^{\mu\nu\alpha\beta}$ is totally anti-symmetric with $\varepsilon^{0123}=1$ and all other components given by permutations and the anti-symmetry constraint. This \textit{symbol} is a tensor density and it can be made into a tensor if it is divided by the square root of the metric determinant.
}
These curved-space notions of course do not change our previous results in flat spacetime. This can be verified by noting that we can always transform the coordinates to a frame where the metric looks locally flat at any given point of spacetime.
The reason the gravitational term appears in the non-conservation of the chiral current is clear by the arguments made in subsection \ref{sec:Regularization}: When there is curvature the covariant derivative must accommodate its presence, leading to a generalized Dirac operator which also contains a spin-connection, $\slashed{D}_g \equiv \gamma^\mu (\partial_\mu - ieA_\mu - i\Gamma_\mu )$, with $\Gamma_\mu$ representing the spin-connection and $\gamma^\mu(x) = e^\mu_k(x) \gamma^k$ being the curved version of the flat gamma matrices $\gamma^k$. Also $e^\mu_k$ are the vielbeins satisfying $g^{\mu\nu}=e^\mu_m e^\nu_n \eta^{mn}$. The basis of the generalized Dirac operator formally diagonalizes the action and can be used to regularize the fermionic path-integral as well as the Jacobian of fermion transformations \eqref{RegA5}. In this way the curvature effects appear in the anomalous term and we arrive at equation \eqref{WardCurved}. (To better clarify the notation used above it should be added that we use Latin indices for objects that belong to the flat tangent space. Also, $\Gamma_\mu$ is a compact notation which encapsulates four matrices defined as $\Gamma_\mu \equiv \Gamma^{mn}_\mu [\gamma_m,\gamma_n]/4$ with the commutator of gamma matrices carrying all the spinor indices.)
We can treat the presence of interactions as before by a Hubbard-Stratonovich decoupling. Let us consider the same four-fermionic interaction term $\lambda^2_{\mu\nu}j^\mu j^\nu$ as in section \ref{sec:4D}. When curvature effects are included this leads to the following anomalous relation,
\begin{align}\label{FullCurvedWard} \nonumber
\nabla_\mu j^\mu_5 =&\frac{\epsilon^{\mu\nu\rho\sigma}}{4\pi^2}\left( \lambda^2_{\nu\alpha}\lambda^2_{\sigma\beta}\nabla_\mu j^\alpha \nabla_\rho j^\beta - 2e\lambda^2_{\sigma\alpha}\nabla_\mu A_\nu\nabla_\rho j^\alpha \right) \\
& + \frac{e^2}{16\pi^2} \epsilon^{\mu\nu\rho\sigma} F_{\mu \nu}F_{\rho \sigma} + \frac{1}{384\pi^2}\epsilon^{\mu\nu\rho\sigma}R_{\mu\nu}^{\ \ \ \alpha\beta} R_{\rho\sigma\alpha\beta} .
\end{align}
As is apparent these types of interactions do not produce cross-terms with curvature.
Perhaps the effects of interactions on the chiral-gravitational part of the anomalous term are most easily seen after a dimensional reduction as in section \ref{sec:DimRed} where the electric and magnetic fields are both pointing along $\hat{z}$ direction. Doing so and choosing $\lambda^2_{\mu\nu} = \lambda^2 \eta_{\mu\nu} $ greatly simplifies the above relation to,
\begin{align} \label{WardCurvedInt}
&\left( 1 + n_0\frac{\lambda^2}{\pi} \right) \nabla_\mu j_5^\mu = \frac{e^2}{16\pi^2}\epsilon^{\mu\nu\rho\sigma}F_{\mu\nu}F_{\rho\sigma} + \frac{1}{384\pi^2}\epsilon^{\mu\nu\rho\sigma}R_{\mu\nu}^{\ \ \ \alpha\beta} R_{\rho\sigma\alpha\beta} \, .
\end{align}
When we are interested in the non-conservation of the chiral current $\nabla_\mu j^\mu_5 = \mathcal{A}_5$ we see that it has been modified by a factor of $(1+n_0 \lambda^2/\pi)^{-1}$ where we recall that $n_0$ is defined as $eB_z/2\pi$ in section \ref{sec:DimRed}.
\subsection{Gravity and Temperature}
Now that we have obtained the interacting anomalous relation in the presence of curvature, \eqref{FullCurvedWard} and \eqref{WardCurvedInt}, we can discuss the corresponding response and possible measurable consequences, which are the subject of the next three subsections. First we consider two examples that provide some intuition about the geometrical part of the anomalous term.
For the winding density $R\tilde R \equiv \varepsilon^{\mu\nu\rho\sigma} R_{\mu\nu}^{\ \ \ \alpha\beta}R_{\rho\sigma\alpha\beta}$ to be non-zero, there should exist some sort of ``twist'' in the geometry. A spherically symmetric geometry, for instance, has a vanishing $R\tilde R$. Consider the following line element,
\begin{equation}
ds^2 = -dt^2 + dr^2 + dz^2 + r^2 d\phi^2 - 2 r\Omega \Theta (z \! + \! l,l \! - \! z) dt d\phi \, ,
\end{equation}
with $\Theta (z+l,l-z)$ being a generalized Heaviside step function: zero whenever any of its arguments is negative and equal to unity otherwise. A non-vanishing $\Omega$ generates a difference between an angular step forward $d\phi>0$ and backward $d\phi<0$. So the line element describes a twist within the region $-l \leq z \leq l$. For this metric we have $R\tilde R = 2[\delta (l+z) - \delta (l-z) ]\times \Omega^3/r^2(1+\Omega^2)^2$, which is non-zero only for non-vanishing $\Omega$. For a general $z$-dependent $\Omega (z)$ we would instead have $R\tilde R = 8\Omega^2 \partial_z \Omega / r^2(1+\Omega^2)^2$.
As the second example consider the following metric,
\begin{equation}
ds^2 = -dt^2 + dr^2 + \left( dz - \Omega(t) d\phi \right)^2 + r^2 d\phi^2 \, .
\end{equation}
Let us define $d\tilde z \equiv dz-\Omega d\phi$, where now $\Omega(t)$ is an arbitrary function of time, and take $\tilde z$ as the new substitute for the $z$ axis, which sets up a coordinate system in which the metric becomes diagonal. Consequently, a step in $\phi$ while the other coordinates $(t,r,\tilde z)$ are kept fixed would mean a step in the $z$ direction. This describes a spiral along $\hat z$. For this geometry we have $R\tilde R = -8(\partial_t\Omega/r)^3$, which is non-zero whenever there is a change in $\Omega$.
These examples give us at least some intuition of what $R\tilde R$ designates. But how does the relation of this term to the chiral current show itself in physical systems?
Quasi-particles propagating in a flowing fluid can be described as excitations of a field that lives on a curved background characterized by a non-trivial metric~\cite{Analogue}. One can intuitively guess that a rotating flow may translate, in analogue-gravity language, into one with non-vanishing $R\tilde R$. On the other hand, in a rotating system of fermions an axial current emerges. This phenomenon is known as the Chiral Vortical Effect (CVE)~\cite{CVEAnatomy}. An interesting feature of this effect is its relation to temperature, independent of the chemical potential $\mu$. For massless Dirac fermions this axial current relates to the temperature $T$ by the following,
\begin{equation}
\vec{j_5} = \vec{\Omega} \left( \frac{\mu}{4\pi^2} + \frac{|\vec{\Omega}|^2}{48\pi^2} + \frac{T^2}{12} \right) \, ,
\label{CVE}
\end{equation}
with $\vec{\Omega}$ being the angular velocity of the rotating system. There are other effects similar to the CVE that are worth mentioning, such as the axial magnetic effect, with measurable consequences in which the same temperature dependence appears~\cite{AMEChernodub}. In all these cases the temperature dependence has been connected to gravitational anomalies~\cite{GravAnomTrans}, albeit these effects can be derived by other methods as well \cite{CVEStone}.
As the right hand side of \eqref{CVE}, apart from the geometric parameter $\vec{\Omega}$, contains also the temperature $T$, the winding density $R\tilde R$ must somehow encode the notion of temperature as well.
To see how these seemingly unrelated notions, namely gravity and temperature, can possibly be connected, it is perhaps best to remember that gravity is the force that causes all types of \textit{energy} to flow, since it couples to everything. But this feature is just like that of a temperature gradient: all energy carriers contribute to heat transfer. So there should be a relation, even if a fictitious one, between these two concepts, as was employed before by Luttinger~\cite{LuttingerThermal} in his treatment of thermal transport using an auxiliary gravitational potential.
But how can we attribute a temperature to a specific spacetime geometry? One way to have a notion of temperature associated to geometry is to look at black hole spacetimes: they radiate thermally. Let us for simplicity imagine a $(1+1)$-dimensional black hole system. An event horizon divides spacetime into interior and exterior regions. In the exterior, particles are doomed to move towards the \textit{future} and are allowed to have positive or negative \textit{momenta}. In the interior, the roles of space and time are ``swapped'': particles are now doomed to move towards the \textit{singularity} and are allowed to have positive or negative \textit{energy}. This permission (for negative energy particles in the interior) allows for a \textit{real} pair creation near the horizon, where the negative energy particle is created inside the black hole while the positive energy exterior one can escape to infinity. In this way energy can be extracted from the black hole in the form of radiation. By calculating the probability of such pair creation \citep{Hawking,Parikh} one can see that this radiation is thermal, with a temperature proportional to the surface gravity of the horizon---a completely geometrical quantity.
Therefore, for CVE, in order to get an anomalous contribution proportional to $\Omega$ and $T$, what we need is perhaps a geometry that has both a twist and a horizon. In fact we hope for a non-vanishing Pontryagin density from which a $\vec\Omega T^2$ term, as in equation \eqref{CVE}, can be extracted.
\subsection{Adding a Horizon}
The following metric \cite{CVEStone} has all the properties we are looking for, namely a twist, a horizon, a nonzero Pontryagin density, and, as an additional welcome feature, asymptotic flatness. In return it is a bit more complicated than our previous examples:
\begin{align} \label{StoneMetric}
ds^2 = &-f(z)\frac{\left(dt - \Omega r^2 d\phi \right)^2}{\left( 1 - \Omega^2 r^2 \right)} + \frac{1}{f(z)} dz^2 +dr^2 + \frac{r^2\left( d\phi - \Omega dt \right)^2}{\left( 1 - \Omega^2 r^2 \right)} \, .
\end{align}
The horizon is located at $z=0$, where $f(z)$ is set to have a non-degenerate root. The Pontryagin density of the metric \eqref{StoneMetric} is given by
\begin{align}
&\frac{1}{4}\epsilon^{\mu\nu\rho\sigma}R_{\mu\nu}^{\ \ \ \alpha\beta}R_{\rho\sigma\alpha\beta} = - \frac{ 2 \Omega f'(z) \left[ \left( 1-r^2 \Omega^2 \right)^2 f''(z)-8 \Omega^2 \left( 1-f(z) \right) \right] }{ (1- r^2 \Omega^2)^3 } \, ,
\end{align}
with the prime designating differentiation with respect to $z$. Recall that $\epsilon^{\mu\nu\rho\sigma}=\varepsilon^{\mu\nu\rho\sigma}/\sqrt{|g|}$ is the Levi-Civita \textit{tensor} and $\sqrt{|g|}=r$ with our choice of metric.
One can effectively reduce the near-horizon physics to that of a $(1+1)$-dimensional chiral quantum field theory~\citep{Wilczek}. The anomalous non-conservation relation for a current of either right- or left-moving particles is the same as equation \eqref{WardCurved} but now with half of the right hand side. Looking back at the anomalous relation with the above Pontryagin density, and keeping only the leading term in $\Omega$, we have,
\begin{equation}
\frac{1}{\sqrt{|g|}}\partial_z \left( \sqrt{|g|} j_5^z \right) = -\frac{\Omega}{192\pi^2}\partial_z \left( f'(z)^2 \right) \, ,
\end{equation}
or
\begin{equation}
j_5^z \bigg |_{z\rightarrow\infty} = \left. \frac{\Omega}{192\pi^2} f'(z)^2 \right|_{z=0}= \frac{\Omega}{48\pi^2} \kappa^2 \, ,
\end{equation}
where $\kappa=f'(0)/2$ is the surface gravity of the horizon. Here we have used two boundary conditions: first, because of asymptotic flatness, $f'(z)$ vanishes at infinity; second, a non-vanishing current at the horizon would lead to an infinite flux there, which we discard on physical grounds~\cite{Wilczek2,Wilczek3}.
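Spelled out, since $\sqrt{|g|}=r$ carries no $z$-dependence, integrating the previous relation from the horizon out to infinity gives
\begin{equation*}
j_5^z \Big|_{z\rightarrow\infty} - j_5^z \Big|_{z=0} = -\frac{\Omega}{192\pi^2} \left( f'(z\rightarrow\infty)^2 - f'(0)^2 \right) \, ,
\end{equation*}
and the two boundary conditions set $f'(z\rightarrow\infty)=0$ and $j_5^z|_{z=0}=0$, reproducing the result above once $\kappa = f'(0)/2$ is inserted.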
The temperature of the Hawking radiation emitted from the horizon is related to its surface gravity by $T_H=\kappa/2\pi$, in natural units. Identifying this temperature with the temperature of our fermionic system, we will have,
\begin{equation}
j_5^z = \frac{1}{12}\Omega T_H^2 \, ,
\end{equation}
which is the temperature part of the CVE \eqref{CVE} as we were looking for.
\subsection{Turning Interactions On}
Looking back at \eqref{WardCurvedInt} we observe that, had we introduced interactions at the beginning, we would have had a different coefficient in front of the Pontryagin density,
\begin{align}
\nabla_\mu j_5^\mu = \ \text{the electromagnetic term} + \frac{1}{384\pi^2 \left( 1+n_0 \lambda^2/\pi \right) }\epsilon^{\mu\nu\rho\sigma}R_{\mu\nu}^{\ \ \ \alpha\beta} R_{\rho\sigma\alpha\beta} \, ,
\end{align}
which would have been dragged all the way to the end result, without altering anything else. In that case we would instead get,
\begin{equation}
j_5^z = \frac{1}{12 \left( 1+n_0 \lambda^2/\pi \right)} \Omega T_H^2 = \frac{1}{12}\Omega \tilde{T}_H^2 \, .
\end{equation}
This can be interpreted in two ways: either the interactions alter the flow of charge along $z$, or the temperature of this radiation is not simply given by $\kappa/2\pi$ in the presence of interactions. In the latter case the modified temperature would be $\tilde T_H \equiv T_H / \sqrt{1+ n_0 \lambda^2/\pi}$.
Mixed axial-gravitational anomalies similar to the non-conservation relation \eqref{WardCurved} have been invoked in the context of experimental condensed matter physics, as in \cite{NatureGooth}. If these anomalies truly appear in thermal phenomena, as they do in \cite{NatureGooth}, then according to what we have discussed so far one expects them and their consequent thermal phenomena to be modified by the effects of interactions, for example through equation \eqref{FullCurvedWard}. Seeing this modification in experiment would also be a confirmation of the previous results. On the other hand, if the modifications avoid observation, then the association made between the gravitational anomaly and the thermal phenomena would be called into question.
As a final remark here, let us go back to black hole radiation. Energy and charge fluxes out of the black hole horizon have been evaluated before using gravitational anomalies~\citep{Wilczek2,Wilczek3}. What is usually employed as the underlying theory for calculating the Hawking radiation is a non-interacting quantum field theory. It is therefore a notable question to ask how an interacting theory would differ in producing black hole radiation. Combining our method of treating interactions with the method used in~\citep{Wilczek2,Wilczek3} for calculating horizon fluxes in non-interacting theories, we can observe that, depending on the type of interactions, the resulting fluxes can indeed undergo some modification, even though interaction terms such as $\bar\psi\gamma^\mu\psi\bar\psi\gamma_\mu\psi$ are perhaps RG irrelevant.
\section{Beyond Electrical Interactions} \label{sec:Beyond}
The main concern of this paper has so far been local current-current interactions of the general type $\lambda^2_{\mu\nu} j^\mu j^\nu$. But not all interactions are between electrical currents, or in other words, not all interactions are expressible in terms of electrical interactions. Take, for example, interactions between chiral currents, $\lambda^2_{\mu\nu} j^\mu_5 j^\nu_5$. Even though in two spacetime dimensions this interaction is equivalent to the current-current interaction, as discussed in appendix \ref{sec:RemQues}, the equivalence does not hold in four dimensions. The spin-spin interaction between Dirac fermions can be seen as the spatial part of $j_\mu^5 j^\mu_5$. What follows is an investigation of the chiral anomaly in the presence of such an interaction.
The general local interaction between chiral currents, $\lambda^2_{\mu\nu} j^\mu_5 j^\nu_5$, can be investigated through the same procedure established in previous sections, but for the sake of simplicity we are going to specialize to the Lorentz invariant case expressed by the following path-integral,
\begin{equation}
I=\int \! \mathcal{D}[\bar{\psi}\psi] \exp i\left\{ \int \! d^4x \left[ \bar{\psi}i\slashed{D}\psi - \frac{1}{2} \lambda^2 j^\mu_5 j^5_\mu \right] \right\} .
\label{InteractingIJ5}
\end{equation}
As before, it is possible to decouple the interaction term above by the help of an auxiliary field $s^\mu$,
\begin{equation}
I \! = \! \int \! \mathcal{D}[\bar{\psi}\psi s_\mu] \exp i\left\{\int \! d^4x \left[ \bar{\psi}i\slashed{D}_g\psi +\frac{1}{2}s_\mu s^\mu \right] \! \right\} ,
\label{DecoupledIJ5}
\end{equation}
where now the generalized Dirac operator is given as $\slashed{D}_g \equiv \gamma^\mu\left( \partial_\mu -ieA_\mu - i\lambda s_\mu \gamma_5 \right)$. It is straightforward to check that a shift in the auxiliary field $s^\mu \rightarrow s^\mu - \lambda j^\mu_5$ decouples the auxiliary field from the fermions and leaves only a quadratic term in the action which can easily be integrated out to give \eqref{InteractingIJ5} back. But before we do that, we first benefit from the fact that the path-integral is formally diagonalized in its current state \eqref{DecoupledIJ5}, in which we can unambiguously calculate the chiral anomaly. (see subsection \ref{sec:Regularization} if needed.)
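In more detail (with the sign of the coupling fixed by the shift just quoted), the $s_\mu$-dependent part of the action in \eqref{DecoupledIJ5} is $\lambda s_\mu j^\mu_5 + \frac{1}{2} s_\mu s^\mu$, and completing the square gives
\begin{equation*}
\lambda s_\mu j^\mu_5 + \frac{1}{2} s_\mu s^\mu = \frac{1}{2}\left( s_\mu + \lambda j^5_\mu \right)\left( s^\mu + \lambda j_5^\mu \right) - \frac{1}{2}\lambda^2 j^5_\mu j_5^\mu \, ,
\end{equation*}
so the shifted $s_\mu$ formally decouples into a trivial Gaussian integral while the last term reproduces the interaction of \eqref{InteractingIJ5}.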
We encountered a constant axial field appearing inside the generalized Dirac operator in previous sections. Being constant, that axial field did not contribute to the chiral anomaly. But here we are integrating over all configurations of $s_\mu$, so the restriction to a constant $s_\mu$ no longer applies. Therefore we need to recalculate the chiral anomaly in the presence of both the gauge field $A_\mu (x)$ and the axial field $s_\mu(x)$, which means we have to go over similar steps as in \eqref{2DCal}, when we traced over $\gamma_5$, but this time with the generalized Dirac operator that itself carries a $\gamma_5$ inside, and in four spacetime dimensions. This is a rather long and tedious task, so we skip to the result below, which first appeared in~\cite{NonAbelianBosPRB} along with the corresponding Weyl anomaly.
\begin{align}
\partial_\mu j^\mu_5 = \lambda\left[\frac{M^2}{2\pi^2 }+\frac{\lambda^2s_\mu s^\mu}{\pi^2}-\frac{\partial_\mu\partial^\mu}{12\pi^2}\right]\partial_\nu s^\nu
+\frac{\epsilon^{\mu\nu\rho\sigma}}{16\pi^2}\left[\frac{\lambda^2}{3} G_{\mu\nu}G_{\rho\sigma}
+e^2 F_{\mu\nu}F_{\rho\sigma}\right] \, ,
\label{gamma5}
\end{align}
with $G_{\mu\nu} \equiv \partial_\mu s_\nu - \partial_\nu s_\mu$ and $M$ defined in \eqref{RegularizationIntro}. The terms constructed out of $s_\mu$ in the above equation are all odd in each component of $s_\mu$. Therefore, if we shift $s^\mu$ by its on-shell value, $-\lambda j^\mu_5$, and consequently make the action quadratic in $s_\mu$, all the odd terms above vanish after the integration. For the chiral anomaly we are then left with,
\begin{align}
\partial_\mu j^\mu_5 =-\lambda^2\left[\frac{\tilde{M}^2}{2\pi^2}+\frac{\lambda^4j^5_\mu j_5^\mu}{\pi^2}-\frac{\partial_\mu\partial^\mu}{12\pi^2}\right]\partial_\nu j_5^\nu
+\frac{\epsilon^{\mu\nu\rho\sigma}}{16\pi^2}\left[\frac{4\lambda^4}{3} \partial_\mu j^5_\nu \partial_\rho j^5_\sigma
+e^2 F_{\mu\nu}F_{\rho\sigma}\right] \, ,
\label{AnomalyJ5}
\end{align}
with $\tilde{M}$ now also containing the constant contribution coming from $\langle s_\mu s^\mu \rangle$ in addition to $M$. Depending on the physics behind the anomalous relation above, we adjust the chiral symmetry breaking formula accordingly. For example, if $j^\mu_5$ really describes the spin of Dirac fermions, then $j^5_\mu j_5^\mu$ becomes a constant $S$; moving the terms proportional to $\partial_\nu j_5^\nu$ to the left hand side then renders \eqref{AnomalyJ5} as
\begin{align}
\partial_\mu j^\mu_5 =\left[1+ \lambda^2\left(\frac{ \tilde{M}^2}{2\pi^2}+\frac{\lambda^4 S}{\pi^2}-\frac{ \Box}{12\pi^2}\right)\right]^{-1} \times \frac{\epsilon^{\mu\nu\rho\sigma}}{16\pi^2}\left[\frac{4\lambda^4}{3} \partial_\mu j^5_\nu \partial_\rho j^5_\sigma
+e^2 F_{\mu\nu}F_{\rho\sigma}\right] \, .
\label{AnomalySpin}
\end{align}
\section{Conclusion}
Interactions, as we saw, have non-trivial effects on the chiral anomaly and its many consequences. The non-conservation of the chiral current is modified by the presence of interactions, and consequently so are all the phenomena associated with it: the Hall conductivity in the inhomogeneous and non-equilibrium limit; the electric charge of $(1+1)$-dimensional pseudo-particles; the existence of a longitudinal non-equilibrium Hall conductivity; the density response to a magnetic field; the chiral magnetic effect; the chiral vortical effect and thermal or geometrical responses are a few examples we discussed here explicitly. We also found that the interplay of the anomaly with interactions and Weyl separation leads to the existence of curious massive modes coupled to the electromagnetic field through axion electrodynamics. Lastly, we showed that the appearance of these modifications is not restricted to electrical interactions but can just as easily occur under the influence of other types, such as spin-spin interactions. In order to make certain that the regularization used for the calculation of the anomaly is indeed justified, we have developed a general path-integral regularization procedure that reduces to well-known regularizations, such as Fujikawa's or Pauli-Villars, in those cases where these regularizations become relevant.\footnote{The details of this construction are presented in appendix~\ref{sec:SelfRegularization}.}
The exactness of the chiral anomaly extends to interacting systems as well, providing rare non-perturbative results. One can hope that the results and the method set forth in this paper will extend to various other problems as well. Possible examples include the Weyl-Kondo semimetal~\cite{NonAbelianBosPRB}, interacting topological insulators~\cite{TopolInsInt}, studies of Hawking radiation via anomalies~\cite{Wilczek} in interacting field theories, and non-equilibrium studies of Hall phenomena; the method can even extend to odd spacetime dimensions~\cite{TopolCriterion}. For our investigations here we used the path-integral technique for the low energy effective theory. It is nonetheless desirable to understand how these effects arise from a microscopic lattice model and to observe how the two descriptions coincide in the low energy limit as the extra symmetries of the effective theory emerge.
\acknowledgments
This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award No. DE-SC0001911 and Simons Foundation (V.G) and through ERC under Consolidator grant number 771536 NEMO (CR).
\bibliographystyle{JHEP}
|
{
"arxiv_id": "2302.14161",
"language": "en",
"timestamp": "2023-03-01T02:03:17",
"url": "https://arxiv.org/abs/2302.14161",
"yymm": "2302"
} | \section{Introduction}
Robot manipulation planning is primarily a problem of finding a sequence of \emph{valid} actions that move a set of target objects to a given goal configuration.
Actions are valid if they respect the problem's constraints, which may be task-specific (e.g., keeping a glass of water upright or maintaining dynamic stability of a stack of objects) or implicit and arising from the environment (e.g., avoiding collisions).
Most approaches to manipulation planning (e.g.,~\cite{krontiris_dealing_difficult_2015,krontiris_efficiently_solving_2016,dantam_incremental_constraint-based_2018,alami_geometrical_approach_1989,barry_manipulation_multiple_2013,toussaint_logic-geometric_programming_2015,garrett_pddlstream_integrating_2020}) rely on explicitly specified problem constraints through formal languages like Linear Temporal Logic (LTL)~\cite{he_manipulation_planning_2015} and the Planning Domain Definition Language (PDDL)~\cite{mcdermott_pddl-the_planning_1998,garrett_integrated_task_2021} or through natural language~\cite{misra_tell_me_2016}.
These manually created specifications identify both the valid, meaningful subsets of the state space and the valid transitions between these subsets.
The resulting transition system guides a search for a sequence of valid actions that perform the given task when executed by the robot.\looseness=-1
However, such specifications are onerous and error-prone to construct~\cite{thomason_counterexample-guided_repair_2021}, and may not capture the full set of possible actions.
They must not only define the valid dynamics for a problem's environment, but also be rich enough to describe a wide range of problems and goals.
Furthermore, problem specifications are not unique and the choice of specification can impact planning performance~\cite{vallati_general_approach_2018,vega-brown_admissible_abstractions_2018,dahlman_critical_assessment_2002,xia_learning_sparse_2019}.
Conversely, some manipulation planners forgo full generality for simplified problem specification and improved performance~\cite{krontiris_dealing_difficult_2015,krontiris_efficiently_solving_2016}.
These planners tend to be restricted to planar tabletop object rearrangement or similar problems~\cite{stilman_manipulation_planning_2007}.
\begin{figure}[t]
\includegraphics[width=\linewidth]{figures/fig1.png}
\caption{To rotate the pyramid, the robot must reason about the physical validity of sequences of manipulation actions.
For example, removing cubes from the bottom of the pyramid before the cubes at the top will cause the structure to topple.
Placing a cube at certain positions on the bumpy surface will affect the ability to transfer the other cubes.
Explicitly specifying potential intermediate arrangements and action validity for this setting is tedious.
Our approach leverages a physics simulator to discover the valid actions for a given arrangement and plan to reconfigure the objects.%
}%
\label{fig:expdemo}%
\end{figure}
We propose a middle ground: planners that can solve a broad set of classes of manipulation planning problems with no more problem specification than a typical low-level motion planning problem.
Our insight is that the necessary transition systems can be \emph{implicitly defined} through an environment simulator, reducing the manual specification burden.
This paper contributes a novel perspective on manipulation problem specification and manipulation planner design.
This perspective centers around embedding an environment simulation in a sampling-based planning algorithm as an implicit specification of a problem's valid transition system.
In support of these ideas, we contribute
\begin{enumerate*}
\item the \emph{arrangement space}, a novel
planning space representing object arrangements and dynamically discovered low-level robot motions moving between them,
\item \emph{Stable Arrangement Search Trees}{} (\texttt{SAST}\xspace), an arrangement-space planner using embedded environment simulators to discover valid action sequences and the associated low-level motions (\cref{sec:approach.sast}), and
\item a procedure to simplify the solutions found in the arrangement space.
\end{enumerate*}
Concretely, we investigate the use of an embedded off-the-shelf physics simulator~\cite{coumans_bullet_physics_2013,lee_dart_dynamic_2018} in~\texttt{SAST}\xspace{} to efficiently find statically stable, collision-free states for 3D object reconfiguration problems without manually specifying action semantics.
This setting is a specific instance of the broader manipulation planning paradigm that we propose.
We demonstrate that our proposed framework can efficiently solve 3D object reconfiguration problems with physical constraints without requiring more than an environment description and start/goal configurations to specify a problem.
These results argue for the viability of a family of planners based upon implicitly simulator-defined transition systems.
\section{Background and Related Work}
This paper proposes a novel perspective on planning with embedded simulators that combines and extends earlier uses of simulation in robot planning and control (\cref{sec:related.work:simulation}).
\texttt{SAST}\xspace, an example of this perspective in practice, is an efficient sampling-based planning algorithm (\cref{sec:related.work:sbmp}) that builds upon ideas from tabletop rearrangement planning and integrated task and motion planning (\cref{sec:related.work:rearrangement}) to solve dynamically-constrained object reconfiguration problems.
\subsection{Simulation in robotics}\label{sec:related.work:simulation}
Simulation is widely used in robot control and learning.
Model-predictive control (MPC) simulates control trajectories forward through time to choose optimal inputs~\cite{garcia_model_predictive_1989}.
Recent work improves MPC performance by integrating differentiable physics simulation~\cite{heiden_interactive_differentiable_2020,heiden_neuralsim_augmenting_2021}.
Efficient simulation for training~\cite{liang_gpu-accelerated_robotic_2018,makoviychuk_isaac_gym_2021} has been core to learning-based methods for robot control~\cite{kroemer_review_robot_2021,kober_reinforcement_learning_2013,shen_acid_action-conditional_2022}, despite the challenge of translating controllers from simulation to the real world~\cite{zhao_sim-to-real_transfer_2020}.
However, as noted by~\cite{saleem_planning_selective_2020}, simulation for planning is under-studied.
Prior work combining manipulation planning and simulation restricts to specific motion primitives~\cite{zickler_efficient_physics-based_2009,zito_two-level_rrt_2012} or 2D settings~\cite{haustein_kinodynamic_randomized_2015}; we operate in a 3D workspace and dynamically discover the valid motions available to the robot via a bi-level search over object arrangements and robot motions.
~\cite{saleem_planning_selective_2020} studied efficient planning with simulators by selectively simulating actions.
~\cite{huang_parallel_monte_2022} improved the long-horizon planning efficiency of a Monte-Carlo Tree Search-based planner by integrating parallel simulation.
~\cite{agboh_robust_physics-based_2021,agboh_combining_coarse_2019} use simulators with different precisions and interleaved simulation and execution to improve manipulation planning performance and robustness.
These approaches complement~\texttt{SAST}\xspace.
We propose that an embedded simulator can be an effective implicit specification of a problem's constraints.
\subsection{Sampling-based motion planning}\label{sec:related.work:sbmp}
Sampling-based motion planning (SBMP) is a family of efficient robot motion planning techniques based upon constructing approximations of a robot's high-dimensional configuration space~\cite{lozano-perez_spatial_planning_1990} from sampled configurations~\cite{kavraki_probabilistic_roadmaps_1996,lavalle_planning_algorithms_2006,kuffner_rrt-connect_efficient_2000,hsu_probabilistic_foundations_2006}.
Most SBMP algorithms operate by building up a graph~\cite{kavraki_probabilistic_roadmaps_1996} or tree~\cite{lavalle_planning_algorithms_2006} of valid configurations connected by short valid motions.
\texttt{RRTConnect}\xspace{}~\cite{kuffner_rrt-connect_efficient_2000} is among the fastest SBMP algorithms for many problems, due to its technique of growing two trees of valid motions, one from each of the start and the goal, toward each other using a local ``extension'' planner to control the trees' growth.
\texttt{SAST}\xspace{} adapts the high-level planning loop of \texttt{RRTConnect}\xspace{} to search an expansive space of stable object arrangements (\cref{sec:approach.sast}) using a simulation-based extension planner (\cref{sec:approach.extend}).
\subsection{Object reconfiguration}\label{sec:related.work:rearrangement}
Object reconfiguration has been studied in contexts including manipulation planning~\cite{alami_geometrical_approach_1989,barry_manipulation_multiple_2013,cambon_hybrid_approach_2009,berenson_task_space_2011}, rearrangement planning~\cite{han_high-quality_tabletop_2017,krontiris_dealing_difficult_2015,krontiris_efficiently_solving_2016,shome_synchronized_multi-arm_2020}, and integrated task and motion planning (TAMP)~\cite{garrett_integrated_task_2021}.
These approaches span an axis ranging from problem specialization (i.e., planar rearrangement planners~\cite{krontiris_dealing_difficult_2015,krontiris_efficiently_solving_2016,han_high-quality_tabletop_2017}) to relative generality (i.e., full TAMP solving~\cite{dantam_incremental_constraint-based_2018,toussaint_logic-geometric_programming_2015,garrett_pddlstream_integrating_2020}).
This axis also corresponds to the relative \emph{specification effort} for each planner: a measure of the work a user must do to provide a given planner with the information it needs to operate.
Planar rearrangement planners typically require specifying only the desired object arrangement (as well as the environment geometry), and exploit their assumption of planar problems to find solutions faster.
TAMP solvers also rely on symbolic action specifications, mechanisms for discovering states that satisfy action preconditions, and more (e.g., explicit problem constraint specifications)~\cite{garrett_integrated_task_2021}.
We strike a balance: simulators still require manual effort to create, but are more broadly reusable across problems and domains than the specifications and samplers required by most TAMP solvers.
Simulators can also implicitly encode a more general set of constraints than most rearrangement solvers, allowing for richer problems.
Further, as progress in learning problem-specific dynamics models advances~\cite{sanchez-gonzalez_graph_networks_2018,chang_compositional_object-based_2017,battaglia_interaction_networks_2016,battaglia_simulation_engine_2013}, the effort required to create simulators for planning will decrease.
\texttt{SAST}\xspace, like~\cite{krontiris_efficiently_solving_2016}, relies on an arrangement-aware extension primitive to find valid action sequences.
~\cite{haustein_kinodynamic_randomized_2015} also proposes a rearrangement planner incorporating a simplified 2D physics model to evaluate a predefined set of rearrangement actions.
Similarly,~\cite{ren_rearrangement-based_manipulation_2022} explores kinodynamic planning for planar rearrangement with a focus on reacting to unexpected events during rearrangement plan execution, and using a heuristic-based task specification.
\texttt{SAST}\xspace{} uses full 3D physics, does not predefine motion primitives, and models dynamic constraints such as stability.
In future work, synergistically combining \texttt{SAST}\xspace{} with the techniques of~\cite{haustein_kinodynamic_randomized_2015,ren_rearrangement-based_manipulation_2022} could allow \texttt{SAST}\xspace{} to use richer non-prehensile motions for manipulating objects.
\section{Problem Formulation}
We demonstrate implicit constraint definition via embedded simulation in a specific application: 3D object reconfiguration with stability constraints, using pick-and-place actions.
Consider a 3D workspace containing movable rigid-body \emph{objects}, \(\ensuremath{o} \in \ensuremath{\mathcal{O}}\), and a known set of posed static \emph{obstacle} geometries.
Objects have known 3D geometries and poses in \SE{3}.
An \emph{arrangement} assigns a pose to each object:
\begin{definition}[Arrangement]\label{def:arrangement}
An arrangement, \(\ensuremath{\alpha}\), prescribes a pose, \(\arrangementop{\ensuremath{o}} \in \SE{3}\), to each object in the workspace.
Denote the \emph{arrangement space}, the set of all arrangements, as \ensuremath{\mathcal{A}}, and let \(\ensuremath{\alpha} \setminus o\) be arrangement \(\ensuremath{\alpha}\) with object \(o \in \ensuremath{\mathcal{O}}\) removed from consideration.\looseness=-1
\end{definition}
Arrangements may be \emph{valid} or \emph{invalid}.
Valid arrangements are those that are both \emph{collision-free} and \emph{statically stable}.
\begin{definition}[Valid arrangement]\label{def:valid.arrangement}
Let \collision{\cdot} be a collision test for arrangements, such that \(\collision{\ensuremath{\alpha}} = \texttt{True}\) if \ensuremath{\alpha}{} has no objects in collision.
Similarly, let \stable{\ensuremath{\alpha}} be a static stability test for arrangements, such that \(\stable{\ensuremath{\alpha}} = \texttt{True}\) if \ensuremath{\alpha}{} is statically stable after a fixed duration.
An arrangement, \(\ensuremath{\alpha} \in \ensuremath{\mathcal{A}}\), is valid if and only if \(\collision{\ensuremath{\alpha}} = \texttt{True}\) and \(\stable{\ensuremath{\alpha}} = \texttt{True}\).
We evaluate \collision{\cdot} via a physics simulator's collision checker.
We check \stable{\cdot} by stepping the simulator for a fixed number of time steps and verifying that all objects' displacements remain below a small heuristic threshold.\looseness=-1
\end{definition}
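As a compact restatement of the stability test (the symbols $\operatorname{sim}_T$, $\Delta$, and $\epsilon$ are shorthand we introduce here for exposition, not part of the simulator's interface), one may write
\begin{equation*}
\stable{\ensuremath{\alpha}} = \texttt{True} \quad\iff\quad \max_{\ensuremath{o} \in \ensuremath{\mathcal{O}}} \Delta\big(\ensuremath{\alpha}[\ensuremath{o}],\ \operatorname{sim}_T(\ensuremath{\alpha})[\ensuremath{o}]\big) < \epsilon \, ,
\end{equation*}
where $\operatorname{sim}_T(\ensuremath{\alpha})$ denotes the arrangement obtained by stepping the physics simulator for $T$ time steps starting from \ensuremath{\alpha}{}, $\Delta$ is the per-object pose displacement, and $\epsilon$ is the small heuristic threshold of \cref{def:valid.arrangement}.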
Let \ensuremath{r}{} be a robot arm with a static base and joint configuration space \ensuremath{\mathcal{Q}}~\cite{lozano-perez_spatial_planning_1990}.
The arm is capable of two classes of motion: \emph{Transit} motions move the empty end effector along a collision-free path between two workspace poses.
\emph{Transfer} motions grasp a target object and move it and the end effector to a new pose along a collision-free path~\cite{alami_geometrical_approach_1989}.
\begin{definition}[Transit motions]\label{def:transit.motion}
A transit motion \(\transit{\ensuremath{\alpha}, \ensuremath{q}_i, \ensuremath{q}_j}\) is a continuous motion of the robot arm from initial configuration \(\ensuremath{q}_i \in \ensuremath{\mathcal{Q}}\) to \(\ensuremath{q}_j \in \ensuremath{\mathcal{Q}}\) that is collision-free with respect to \ensuremath{\alpha}.
\end{definition}
\begin{definition}[Transfer motions]\label{def:transfer.motion}
A transfer motion \(\transfer{\ensuremath{\alpha}_i, \ensuremath{o}, \ensuremath{q}, \ensuremath{q}', \ensuremath{\alpha}_j}\) is a continuous motion of the robot arm, holding object \(\ensuremath{o} \in \ensuremath{\mathcal{O}}\), from \(\ensuremath{q} \in \ensuremath{\mathcal{Q}}\) to \(\ensuremath{q}' \in \ensuremath{\mathcal{Q}}\), that is collision-free with respect to \(\ensuremath{\alpha}_i \setminus \ensuremath{o}\).
\(\ensuremath{q}\) and \(\ensuremath{q}'\) must place object \(\ensuremath{o}\) at \(\operatorname{\ensuremath{\alpha}_i}\lbrack\ensuremath{o}\rbrack\) and \(\operatorname{\ensuremath{\alpha}_j}\lbrack\ensuremath{o}\rbrack\), respectively.
\end{definition}
\begin{figure}[t]
\centering
\includesvg[width=\linewidth]{figures/rrtconnect2.svg}
\caption{%
Bidirectional search trees in the arrangement space.
Each vertex represents a valid arrangement (\cref{def:valid.arrangement}).
An edge \((\ensuremath{q}_i, o_i, \ensuremath{q}_i')\) represents a transformation between the two connected arrangements \(\ensuremath{\alpha}_{i - 1}\) and \(\ensuremath{\alpha}_{i}\).
This comprises a \texttt{TRANSIT}\xspace{} motion of the robot to configuration \(\ensuremath{q}_i\), followed by a stable \texttt{TRANSFER}\xspace{} motion of object \(o_i\) from its pose in \(\ensuremath{\alpha}_{i - 1}\) to \(\ensuremath{\alpha}_{i}\) by grasping \(o_i\) and moving the robot from \(\ensuremath{q}_i\) to \(\ensuremath{q}_i'\).
Edges are bidirectional---one can also transform arrangement \(\ensuremath{\alpha}_{i}\) to \(\ensuremath{\alpha}_{i - 1}\) using a \texttt{TRANSIT}\xspace{} motion to \(\ensuremath{q}_i'\), followed by a stable \texttt{TRANSFER}\xspace{} motion of object \(o_{i}\) from its pose in \(\ensuremath{\alpha}_{i}\) to its pose in \(\ensuremath{\alpha}_{i - 1}\) by grasping \(o_i\) and moving the robot from \(q_i'\) to \(q_i\).
}\label{fig:rrtc}
\end{figure}
\begin{algorithm}[t]
\caption{\texttt{SAST}\xspace}\label{alg:rrtc}
\DontPrintSemicolon
\SetKwFunction{InitTree}{InitTree}
\SetKwFunction{Sample}{SampleArrangement}
\SetKwFunction{True}{True}
\SetKwFunction{False}{False}
\SetKwFunction{Extend}{Extend}
\SetKwFunction{Connect}{Connect}
\SetKwFunction{Path}{Path}
\SetKwFunction{Swap}{Swap}
\SetKwFunction{GetNearest}{GetNearest}
\(\mathcal{T}_a \gets \InitTree(\ensuremath{\alpha}_0)\)\;\label{line:rrtc_inita}
\(\mathcal{T}_b \gets \InitTree(\ensuremath{\alpha}_\mathrm{goal})\)\;\label{line:rrtc_initb}
\While{\texttt{True}} { \label{line:rrtc_loop}
\(\ensuremath{\alpha}_\mathrm{rand} \gets \Sample{}\)\; \label{line:rrtc_sample}
\(\ensuremath{\alpha}_\mathrm{nearest} \gets \GetNearest(\mathcal{T}_a, \ensuremath{\alpha}_\mathrm{rand})\)\; \label{line:rrtc_nearest}
\(\ensuremath{\alpha}_\mathrm{new} \gets \Extend(\mathcal{T}_a, \ensuremath{\alpha}_\mathrm{nearest}, \ensuremath{\alpha}_\mathrm{rand}, \texttt{False}) \)\; \label{line:rrtc_extend}
\If{\(\ensuremath{\alpha}_\mathrm{new} \not= \ensuremath{\alpha}_\mathrm{nearest}\)} { \label{line:rrtc_ifextended}
\If{\(\Connect(\mathcal{T}_b, \ensuremath{\alpha}_\mathrm{new}) = \ensuremath{\alpha}_\mathrm{new}\)}{ \label{line:rrtc_connect}
\Return \(\Path(\mathcal{T}_a, \mathcal{T}_b)\)\; \label{line:rrtc_done}
}
}
\(\Swap(T_a, T_b)\)\; \label{line:rrtc_swap}
}
\end{algorithm}
Note that these motion classes do not predefine concrete motion primitives or actions.
We are now equipped to formally state the object reconfiguration problem:
\begin{definition}[Object Reconfiguration Problem]\label{def:reconfiguration.problem}
Given an initial valid arrangement (\cref{def:valid.arrangement}), \(\ensuremath{\alpha}_\mathrm{start} \in \ensuremath{\mathcal{A}}\), robot configuration, \(\ensuremath{q}_\mathrm{start} \in \ensuremath{\mathcal{Q}}\), and valid goal arrangement \(\ensuremath{\alpha}_\mathrm{goal} \in \ensuremath{\mathcal{A}}\), the object reconfiguration problem is to find a sequence of objects and robot configurations, \(\lbrack \ensuremath{q}_1, \ensuremath{o}_1, \ensuremath{q}'_1, \ldots, \ensuremath{q}_n, \ensuremath{o}_n, \ensuremath{q}'_n \rbrack\) and corresponding alternating \texttt{TRANSIT}\xspace{} and \texttt{TRANSFER}\xspace{} motions such that the sequence:
\begingroup
\allowdisplaybreaks
\begin{align*}
& \transit{\ensuremath{\alpha}_\mathrm{start}, q_\mathrm{start}, q_1} \\
\rightarrow & \transfer{\ensuremath{\alpha}_0, o_1, q_1, q_1', \ensuremath{\alpha}_1} \\
\rightarrow & \cdots \\
\rightarrow & \transit{\ensuremath{\alpha}_{n - 1}, q_{n - 1}', q_n} \\
\rightarrow & \transfer{\ensuremath{\alpha}_{n - 1}, o_n, q_n, q_n', \ensuremath{\alpha}_n}
\end{align*}
\endgroup
is valid and
\(\ensuremath{\alpha}_n = \ensuremath{\alpha}_\mathrm{goal}\), where \(\ensuremath{\alpha}_i\) is the arrangement after executing the \(i\)-th \texttt{TRANSFER}\xspace{} motion.
\end{definition}
This problem formulation is similar to that of~\cite{krontiris_efficiently_solving_2016}, but adds a 3D workspace and consideration of stability constraints.
\section{Approach}
We propose to solve the reconfiguration problem with a bidirectional tree search algorithm, \texttt{SAST}\xspace{}, that operates in a given problem's arrangement space.
\texttt{SAST}\xspace{} resembles \texttt{RRTConnect}\xspace, but operates in the arrangement space with a novel extension operator that exploits an embedded physics simulator (\cref{sec:approach.extend}) to automatically discover valid actions.
\begin{algorithm}[t]
\caption{\texttt{Connect(}\(\mathcal{T}, \ensuremath{\alpha}'\)\texttt{)}}
\label{alg:connect}
\DontPrintSemicolon
\SetKwFunction{True}{True}
\SetKwFunction{GetNearest}{GetNearest}
\SetKwRepeat{Do}{do}{while}
\SetKw{and}{and}
\Do{\(\ensuremath{\alpha}_\mathrm{next} \not\in \{\ensuremath{\alpha}_\mathrm{nearest}, \ensuremath{\alpha}'\}\)}{
\(\ensuremath{\alpha}_\mathrm{nearest} \gets \GetNearest(\mathcal{T}, \ensuremath{\alpha}')\)\; \label{line:connect_nearest}
\(\ensuremath{\alpha}_\mathrm{next} \gets \Extend(\mathcal{T}, \ensuremath{\alpha}_\mathrm{nearest}, \ensuremath{\alpha}', \texttt{True})\)\;
}
\Return{\(\ensuremath{\alpha}_\mathrm{next}\)}\;
\end{algorithm}
\subsection{Stable Arrangement Search Trees (\texttt{SAST}\xspace)}\label{sec:approach.sast}
\texttt{SAST}\xspace{} initializes two trees in the arrangement space, one rooted at the start arrangement, \(\ensuremath{\alpha}_\mathrm{start}\), and the other at the goal arrangement, \(\ensuremath{\alpha}_\mathrm{goal}\).
Vertices in these trees represent valid arrangements (\cref{def:valid.arrangement}); edges represent transformations between valid arrangements.
In this work, we consider pick-and-place transformations which move \emph{exactly} one object.
Given two valid arrangements \(\ensuremath{\alpha}_{i - 1}\) and \(\ensuremath{\alpha}_{i}\), a connecting edge can be described as \((\ensuremath{q}_i, o_i, \ensuremath{q}_i')\).
This transformation corresponds to a \texttt{TRANSIT}\xspace{} motion of the robot to \(\ensuremath{q}_i\), followed by a stable \texttt{TRANSFER}\xspace{} motion moving \(o_i\) from its pose in \(\ensuremath{\alpha}_{i - 1}\) to \(\ensuremath{\alpha}_{i}\) by grasping \(o_i\) and moving the robot from \(\ensuremath{q}_{i}\) to \(\ensuremath{q}_{i}'\).
Edges are bidirectional: the reverse transformation from \(\ensuremath{\alpha}_{i}\) to \(\ensuremath{\alpha}_{i - 1}\) corresponds to a \texttt{TRANSIT}\xspace{} motion to \(q_i'\), followed by a stable \texttt{TRANSFER}\xspace{} motion of \(o_i\) from its pose in \(\ensuremath{\alpha}_{i}\) to \(\ensuremath{\alpha}_{i - 1}\) by grasping \(o_i\) and moving the robot from \(\ensuremath{q}_{i}'\) to \(\ensuremath{q}_{i}\).
In the arrangement space representation, a solution to a reconfiguration problem is a path of edges that connect \(\ensuremath{\alpha}_\mathrm{start}\) to \(\ensuremath{\alpha}_\mathrm{goal}\).\looseness=-1
Planning starts from the tree rooted at \(\ensuremath{\alpha}_\mathrm{start}\).
Each iteration of the planning loop samples a random arrangement \(\ensuremath{\alpha}_\mathrm{rand}\) and finds its closest neighbor, \(\ensuremath{\alpha}_\mathrm{nearest}\) in the current tree (\cref{alg:rrtc}, \cref{line:rrtc_sample,line:rrtc_nearest}).
This is done via spatial lookup on a GNAT~\cite{brin_neighbor_search_1995} with arrangement distance defined as the summed \SE{3}{} distance\footnote{\SE{3}{} distance is the sum of the Euclidean distance of the translational components and the angular distance of the rotational components.} between the respective poses of each object in the two arrangements.
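Written out (as notation only; the relative weighting of the translational and rotational terms is an implementation detail we leave unspecified), this arrangement distance is
\begin{equation*}
d(\ensuremath{\alpha}_1, \ensuremath{\alpha}_2) = \sum_{\ensuremath{o} \in \ensuremath{\mathcal{O}}} \Big( \big\lVert \operatorname{trans}(\ensuremath{\alpha}_1[\ensuremath{o}]) - \operatorname{trans}(\ensuremath{\alpha}_2[\ensuremath{o}]) \big\rVert_2 + \operatorname{ang}\big(\operatorname{rot}(\ensuremath{\alpha}_1[\ensuremath{o}]),\, \operatorname{rot}(\ensuremath{\alpha}_2[\ensuremath{o}])\big) \Big) \, ,
\end{equation*}
where $\operatorname{trans}(\cdot)$ and $\operatorname{rot}(\cdot)$ extract the translational and rotational parts of a pose and $\operatorname{ang}(\cdot,\cdot)$ denotes the angular distance between two orientations.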
\texttt{SAST}\xspace{} then attempts to \texttt{Extend} the tree from \(\ensuremath{\alpha}_\mathrm{nearest}\) toward \(\ensuremath{\alpha}_\mathrm{rand}\) by growing a sequence of edges according to~\cref{alg:extend}.
If the resulting sequence of edges is non-empty (\cref{alg:rrtc}, \cref{line:rrtc_ifextended}), we try to \texttt{Connect} the other tree to the terminal vertex of the extended trajectory (\cref{alg:rrtc}, \cref{line:rrtc_connect}).
This is done (\cref{alg:connect}) by repeatedly extending the closest arrangement on the other tree to the terminal vertex, until either the connection succeeds or the extension fails.
If connection succeeds, \texttt{SAST}\xspace{} has found a solution and terminates.
Otherwise, it swaps the trees and repeats the planning loop.\looseness=-1
\begin{figure}[t]
\includegraphics[width=\linewidth]{figures/sa.png}
\caption{Stable arrangement sampling.
The pose of each object is uniformly sampled within workspace bounds; rejection sampling ensures no object intersection.
The dynamics of the world are stepped until the arrangement stabilizes, that is, zero displacement over a sufficient number of steps.
}\label{fig:sa}
\end{figure}
\subsection{Sampling stable arrangements}
The \texttt{SampleArrangement} subroutine (\cref{fig:sa}) samples a valid arrangement for use with \texttt{Extend}.
Here, we leverage the embedded physics simulator to find stable arrangements.
First, \texttt{SampleArrangement} picks uniform-random 3D poses for each object within the workspace bounds, using rejection sampling to ensure that the objects do not intersect.
Then, it simulates the dynamics of the arrangement forward for a fixed number of small timesteps, checking at fixed intervals whether the objects have maintained zero displacement since the previous interval.
If so, the arrangement resulting from the applied dynamics is kinematically valid and statically stable, and is returned as a result.
Otherwise, this process repeats until a valid sample is found.
\texttt{SampleArrangement} is easy to parallelize---our implementation of \texttt{SAST}\xspace{} uses multiple threads to sample stable arrangements.
Uniform-random initial pose sampling trades off performance for ease of specification, avoiding the specialized samplers used by TAMP solvers to find states on low and zero-measure state manifolds.
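For concreteness, the sampling loop can be summarized by the sketch in \cref{alg:samplearrangement}; the helper names \texttt{SampleUniformPoses} and \texttt{Settle} are illustrative placeholders for the corresponding simulator queries rather than part of the interface of \texttt{SAST}\xspace{}.
\begin{algorithm}[t]
\caption{\texttt{SampleArrangement} (illustrative sketch; helper names are placeholders)}\label{alg:samplearrangement}
\DontPrintSemicolon
\SetKwFunction{SamplePoses}{SampleUniformPoses}
\SetKwFunction{Settle}{Settle}
\SetKw{and}{and}
\While{\texttt{True}}{
\(\ensuremath{\alpha} \gets \SamplePoses{}\)\; \tcp{\textcolor{gb}{Uniform poses in workspace bounds, rejection-sampled until pairwise non-intersecting.}}
\(\ensuremath{\alpha}' \gets \Settle{\ensuremath{\alpha}}\)\; \tcp{\textcolor{gb}{Step the simulator forward for a fixed number of time steps.}}
\If{\(\collision{\ensuremath{\alpha}'} = \texttt{True}\) \and \(\stable{\ensuremath{\alpha}'} = \texttt{True}\)}{
\Return{\(\ensuremath{\alpha}'\)}\;
}
}
\end{algorithm}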
\subsection{Generating valid transformation actions}\label{sec:approach.extend}
The \Extend subroutine (\cref{alg:extend}) searches for a sequence of valid edges that transform a given arrangement \(\ensuremath{\alpha}\) of \(k\) objects into a target arrangement \(\ensuremath{\alpha}'\).
A major contribution of our work is to use an embedded physics simulator in \Extend to reason about the validity of these transformations.
The simulator allows us to treat the physics of the environment as an implicit specification of the valid transformation actions from any state.
We also ensure that a valid transformation has a valid instantiation with the robot by motion planning for its associated \texttt{TRANSIT}\xspace{} and \texttt{TRANSFER}\xspace{} motions.
\begin{algorithm}[t]
\caption{\texttt{Extend(}\(\mathcal{T}, \ensuremath{\alpha}, \ensuremath{\alpha}', connectTarget\)\texttt{)}}
\label{alg:extend}
\DontPrintSemicolon
\SetKwFunction{RandomObjectOrder}{RandomObjectOrder}
\SetKwFor{ForParallel}{for}{do parallel}{end for}
\SetKwFunction{InitTree}{InitTree}
\SetKwFunction{Sample}{SampleArrangement}
\SetKwFunction{True}{True}
\SetKwFunction{Extend}{Extend}
\SetKwFunction{Connect}{Connect}
\SetKwFunction{Path}{Path}
\SetKwFunction{Swap}{Swap}
\SetKwFunction{Copy}{Copy}
\SetKwFunction{IKOp}{SampleGraspConfs}
\SetKwFunction{ParentEdge}{PrecedingConf}
\SetKw{if}{if}
\SetKw{then}{then}
\SetKw{continue}{continue}
\SetKw{or}{or}
\SetKw{donot}{not}
\SetKw{and}{and}
\SetKw{invalid}{invalid}
\SetKw{valid}{valid}
\SetKwFunction{AddVertex}{AddVertex}
\SetKwFunction{AddEdge}{AddEdge}
\SetKwFunction{Transit}{TRANSIT}
\SetKwFunction{Transfer}{TRANSFER}
\(o_1, \dots, o_k \gets \RandomObjectOrder{}\)\; \label{line:extend_random}
\(\ensuremath{\alpha}_\mathrm{cur} \gets \Copy(\ensuremath{\alpha})\)\; \label{line:extend_copy}
\For{\(i\gets 1\) \KwTo k}{ \label{line:extend_loop}
\(\ensuremath{\alpha}_\textrm{next} \gets (\ensuremath{\alpha}_\mathrm{cur}[o_i] \gets \ensuremath{\alpha}'[o_i])\)\; \label{line:extend_next}
\tcp{\textcolor{gb}{Sample grasp configurations.}}
\(q_i, q_i' \gets \IKOp(\ensuremath{\alpha}_\mathrm{cur}[o_i], \ensuremath{\alpha}'[o_i])\)\;\label{line:extend_ik}
\tcp{\textcolor{gb}{Perform checks.}}
\if \(\collision{\ensuremath{\alpha}_\mathrm{next}} = \texttt{False}\) \continue\; \label{line:extend_check3}
\if \(\stable{\ensuremath{\alpha}_\mathrm{cur} \setminus o_i} = \texttt{False}\) \continue\;\label{line:extend_check4}
\if \(\stable{\ensuremath{\alpha}_\mathrm{next} \setminus o_i} = \texttt{False}\) \continue \;\label{line:extend_check5}
\(q_\mathrm{prev}' \gets \ParentEdge(\ensuremath{\alpha}_\mathrm{cur})\)\; \label{line:extend_parentedge}
\if \invalid \Transit{\(\ensuremath{\alpha}_\mathrm{cur}, q_\mathrm{prev}', q_i\)} \or
\invalid \Transfer{\(\ensuremath{\alpha}_\mathrm{cur}, o_i, q_i, q_i', \ensuremath{\alpha}_\mathrm{next}\)} \continue\; \label{line:extend_check7}
\tcp{\textcolor{gb}{Connect target, or create edge.}}
\eIf{\(\ensuremath{\alpha}_\mathrm{next} = \ensuremath{\alpha}'\) \and \(connectTarget\)}{ \label{line:extend_checkconnect1}
\(q_\mathrm{next}' \gets \ParentEdge(\ensuremath{\alpha}')\)\; \label{line:extend_childedge}
\If{\valid \Transit{\(\ensuremath{\alpha}_\mathrm{next}, q_i', q_\mathrm{next}'\)}}{ \label{line:extend_checkconnect2}
\(\mathcal{T}.\AddEdge(\ensuremath{\alpha}_\mathrm{cur}, \ensuremath{\alpha}', (q_i, o_i, q_i'))\)\;\label{line:extend_connectadd}
\Return{\(\ensuremath{\alpha}'\)}\;\label{line:extend_connectreturn}
}
}{
\(\mathcal{T}.\AddVertex(\ensuremath{\alpha}_\textrm{next})\)\;\label{line:extend_add1}
\(\mathcal{T}.\AddEdge(\ensuremath{\alpha}_\mathrm{cur}, \ensuremath{\alpha}_\textrm{next}, (q_i, o_i, q_i'))\)\;\label{line:extend_add2}
\(\ensuremath{\alpha}_\mathrm{cur} \gets \ensuremath{\alpha}_\textrm{next}\)\;\label{line:extend_add3}
}
}
\Return{\(\ensuremath{\alpha}_\mathrm{cur}\)}\;\label{line:extend_return}
\end{algorithm}
\Extend starts by selecting a random order to move the objects\footnote{We choose a random order for simplicity, but could substitute a more sophisticated permutation selector for performance.} and setting the current arrangement, \(\ensuremath{\alpha}_\mathrm{cur}\), to the given start arrangement, \(\ensuremath{\alpha}\).
It then tries to move each object in the chosen order to its target position in the given target arrangement, \(\ensuremath{\alpha}'\), while maintaining stability of the other objects.\looseness=-1
\begin{figure*}[t]
{\centering
\begin{tabularx}{\linewidth}{YYYYYY}
{\includegraphics[width=1.15\linewidth]{figures/problems_new_blender/reverse_start.png}} &
{\includegraphics[width=1.15\linewidth]{figures/problems_new_blender/reverse_goal.png}} &
{\includegraphics[width=1.15\linewidth]{figures/problems_new_blender/transform_start.png}} &
{\includegraphics[width=1.15\linewidth]{figures/problems_new_blender/transform_goal.png}} &
{\includegraphics[width=1.15\linewidth]{figures/problems_new_blender/rotate_start.png}} &
{\includegraphics[width=1.15\linewidth]{figures/problems_new_blender/rotate_goal.png}}
\\
\mbox{\footnotesize{(a) \textsc{Reverse}\xspace Start}} &
\mbox{\footnotesize{(b) \textsc{Reverse}\xspace Goal}} &
\mbox{\footnotesize{(c) \textsc{Transform}\xspace Start}} &
\mbox{\footnotesize{(d) \textsc{Transform}\xspace Goal}} &
\mbox{\footnotesize{(e) \textsc{Rotate}\xspace Start}} &
\mbox{\footnotesize{(f) \textsc{Rotate}\xspace Goal}}
\end{tabularx}
}
\caption{Test problems used in our experiments.
(a-b) \textsc{Reverse}\xspace: The robot starts with a stack of \(6\) cubes and must re-stack them on the same base location in reversed order.
(c-d) \textsc{Transform}\xspace: The robot must transform a stack of \(6\) cubes into a pyramid, centered at the same location. The table is covered in tiles of random height and size, which constrain the feasible intermediate arrangements.
(e-f) \textsc{Rotate}\xspace: The robot is given a \textit{diagonal} pyramid of cubes and must manipulate the cubes into another pyramid with cubes stacked cross-diagonally. The table is covered in bumps of random size and location, which prevent the cubes from being placed flat in intermediate arrangements and make it more difficult to find stable arrangements.
In all problems, the robot starts with its arm tucked in (\cref{fig:expdemo}). Across runs, the \(x\) and \(y\) positions of the structures to reconfigure, as well as the tiles and bumps, are randomized.}\label{fig:expproblems}
\end{figure*}
For each object, \(o_i\), \Extend creates a new arrangement \(\ensuremath{\alpha}_\mathrm{next}\) equal to \(\ensuremath{\alpha}_\mathrm{cur}\) with \(o_i\) at its pose in \(\ensuremath{\alpha}'\), and samples collision-free robot configurations grasping \(o_i\) at its pose in \(\ensuremath{\alpha}_\mathrm{cur}\) and at its pose in \(\ensuremath{\alpha}_\mathrm{next}\), using the same grasp.
Then, it checks that:
\begin{enumerate*}
\item \(\ensuremath{\alpha}_\mathrm{next}\) is collision-free,
\item \(\ensuremath{\alpha}_\mathrm{cur} \setminus o_i\) is stable, allowing \(o_i\) to be moved, and
\item \(\ensuremath{\alpha}_\mathrm{next} \setminus o_i\) is also stable, allowing \(o_i\) to be moved in the \emph{reverse} transformation
\end{enumerate*}.
If these conditions are met, \Extend attempts to find a valid \texttt{TRANSIT}\xspace{} motion between the preceding configuration of \(\ensuremath{\alpha}_\mathrm{cur}\)\footnote{If \(\ensuremath{\alpha}_\mathrm{cur} = \ensuremath{\alpha}_\mathrm{start}\), we select \(q_0\) as \(q_\mathrm{prev}'\).
If \(\ensuremath{\alpha}_\mathrm{cur} = \ensuremath{\alpha}_\mathrm{goal}\), we skip this check since there is no constraint on the robot configuration at the goal.} and the sampled grasp for \(o_i\)'s pose in \(\ensuremath{\alpha}_\mathrm{cur}\), and a valid \texttt{TRANSFER}\xspace{} motion between the sampled grasps for \(o_i\)'s pose in \(\ensuremath{\alpha}_\mathrm{cur}\) and \(\ensuremath{\alpha}_\mathrm{next}\), respectively, using a standard motion planner (\cref{alg:extend}, \cref{line:extend_parentedge,line:extend_check7}).
These motions are considered infeasible if the sub-planner fails to find a solution within a predefined timeout.
If \Extend finds the requisite \texttt{TRANSIT}\xspace{} and \texttt{TRANSFER}\xspace{} motions, then it either adds the discovered edge to the \emph{current} tree (\cref{alg:extend}, \cref{line:extend_add1,line:extend_add2,line:extend_add3}) and continues with the next object and \(\ensuremath{\alpha}_\mathrm{cur} = \ensuremath{\alpha}_\mathrm{next}\), or attempts to connect the newly-reached arrangement to the \emph{other} tree (\cref{alg:extend}, \cref{line:extend_checkconnect1,line:extend_childedge,line:extend_checkconnect2,line:extend_connectadd}) if \(\ensuremath{\alpha}_\mathrm{next}\) is the target arrangement and a connection is desired.
In the latter case, \Extend returns the target arrangement (\cref{alg:extend}, \cref{line:extend_connectreturn}); otherwise, it iterates through the remaining objects and returns the last reached arrangement.\looseness=-1
\subsection{Solution simplification}\label{sec:approach.simplification}
Although unnecessary for completeness, \texttt{SAST}\xspace{} applies heuristic simplifications to solutions to improve their quality.
If an object \(o\) has been moved twice along a solution trajectory, one of these motions may be unnecessary.
We can remove the first motion by altering the second motion to move \(o\) starting from the first motion's starting pose.
Similarly, we can remove the second motion by altering the first motion to move \(o\) to the second motion's ending pose.
Both cases modify the pose of \(o\) in the arrangements between the first and second motions.
This requires recomputing the grasps and planning motions for these intermediate arrangements to validate the altered arrangement trajectory.
In a third case, motions may also be removed if the pickup and placement locations of the object are exactly the same.
\texttt{SAST}\xspace{} iterates through these three simplification cases on solutions, rechecking for stability and recomputing the \texttt{TRANSIT}\xspace{} and \texttt{TRANSFER}\xspace{} motions after each modification to ensure that the solution remains feasible.
This simplification process continues until no potentially redundant actions remain.
Note that this heuristic set is non-exhaustive and does not guarantee optimal motions.\looseness=-1
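A minimal sketch of this loop is shown below, assuming a list of pick-and-place actions represented as namedtuples with \texttt{obj}, \texttt{start\_pose}, and \texttt{end\_pose} fields (poses assumed comparable, e.g., tuples) and a \texttt{revalidate} callback that replans grasps and motions and rechecks stability for a candidate trajectory. Both interfaces are illustrative, and only the first merge case is written out.
\begin{verbatim}
def simplify(solution, revalidate):
    # Iterate until no potentially redundant action remains.
    changed = True
    while changed:
        changed = False
        # Case 3: drop actions whose pickup and placement poses coincide.
        solution = [a for a in solution if a.start_pose != a.end_pose]
        for i, a in enumerate(solution):
            later = [j for j in range(i + 1, len(solution))
                     if solution[j].obj == a.obj]
            if not later:
                continue
            j = later[0]
            # Case 1: remove motion i; motion j now starts from i's start
            # pose (Case 2, merging in the other direction, is symmetric).
            cand = list(solution)
            cand[j] = cand[j]._replace(start_pose=a.start_pose)
            del cand[i]
            if revalidate(cand):
                solution, changed = cand, True
                break
    return solution
\end{verbatim}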
\begin{figure*}[t]
{\centering
\begin{tabularx}{\linewidth}{YYYY}
{\includegraphics[width=\linewidth]{figures/rotate_blender/trace0020.png}} &
{\includegraphics[width=\linewidth]{figures/rotate_blender/trace0040.png}} &
{\includegraphics[width=\linewidth]{figures/rotate_blender/trace0060.png}} &
{\includegraphics[width=\linewidth]{figures/rotate_blender/trace0080.png}} \\
{\includegraphics[width=\linewidth]{figures/rotate_blender/trace0100.png}} &
{\includegraphics[width=\linewidth]{figures/rotate_blender/trace0120.png}} &
{\includegraphics[width=\linewidth]{figures/rotate_blender/trace0140.png}} &
{\includegraphics[width=\linewidth]{figures/rotate_blender/trace0160.png}} \\
\end{tabularx}
}
\caption{Sequence of actions in a computed solution after simplification for the \textsc{Rotate}\xspace problem.
The robot is able to identify actions that remove the blocks on top before removing those at the bottom.
This ensures that blocks are not removed from the bottom of a stack in a way that would cause those stacked on top to topple.
The sequence shown achieves the minimum possible length for the given problem.}\label{fig:expsimplified}
\end{figure*}
\section{Experiments}
We evaluate \texttt{SAST}\xspace{} on a set of 3D tabletop rearrangement problems (\cref{fig:expproblems})---\textsc{Reverse}\xspace (a-b), \textsc{Transform}\xspace (c-d), and \textsc{Rotate}\xspace (e-f).
These problems involve using a single-arm manipulator to reconfigure cubes from one 3D structure to another. They require reasoning about the physical constraints among the cubes, as well as between the cubes and the environment.
The solutions are non-trivial, in that the robot must choose and move the objects through intermediate arrangements to achieve its goal.
In addition, some problems contain obstacles such as tiles and bumps, which further constrain the set of valid actions.
Grounding these details in order to apply contemporary approaches would be tedious and challenging.
\subsection{Implementation}
We use DART~\cite{lee_dart_dynamic_2018} as our embedded physics simulator and plan \texttt{TRANSIT}\xspace{} and \texttt{TRANSFER}\xspace{} motions via Robowflex~\cite{kingston_robowflex_robot_2022} with the Open Motion Planning Library (OMPL)~\cite{sucan2012open}.
All experiments ran on an AMD 5900x desktop CPU with \(12\) cores at \(4.8\) GHz, using \(6\) parallel threads for sampling stable arrangements and inverse kinematics.
\subsection{Planning performance}
\begin{table}[t]
\centering
\begin{tabularx}{\linewidth}{Yccc}
\toprule
& \textsc{Reverse}\xspace & \textsc{Transform}\xspace & \textsc{Rotate}\xspace \\
\midrule
Success Rate & 0.96 (0.03) & 0.96 (0.03) & 1.00 (0) \\
Solve Time (s) & 16.5 (0.9) & 55.0 (5.6) & 61.5 (6.6) \\
Solution Length & 25.6 (0.7) & 23.7 (0.8) & 15.2 (0.5) \\
Num. Nodes & 44.1 (2.4) & 86.2 (9.6) & 57.4 (6.0) \\
\bottomrule
\end{tabularx}
\caption{Run-time metrics of \texttt{SAST}\xspace{} across the test problems.
\textit{Success rate} refers to the proportion of runs where \texttt{SAST}\xspace{} finds a solution within the given time limit.
For successful runs,
\textit{Solve Time} refers to the time taken to find a solution;
\textit{Solution Length} refers to the solution length, in terms of the number of objects moved;
\textit{Num. Nodes} refers to the total number of tree vertices created by \texttt{SAST}\xspace{} during search.
Mean values are shown, with standard error shown in parentheses.}\label{expres:planning}
\vspace{-0.2cm}
\end{table}
We applied \texttt{SAST}\xspace{} to each test problem for \(50\) trials with a maximum timeout of \(300\) seconds per trial. In each trial, we randomize the start and goal positions of the structure to rearrange, together with the obstacle positions.
\Cref{expres:planning} shows that \texttt{SAST}\xspace{} was almost always able to find a solution within the stipulated time limit.
Solution times were also reasonable, taking not more than a minute per successful run despite having to invoke the simulator repeatedly for collision and static stability checking, and having to integrate the low-level motion planning of the \texttt{TRANSIT}\xspace{} and \texttt{TRANSFER}\xspace{} motions.
The sizes of the search trees, in terms of the number of nodes, were also small, indicating that a sparse coverage of arrangements was sufficient to identify a solution.
Across each problem and trial, we only had to specify the geometry and positions of the obstacles (steps and bumps) and the start and goal arrangement poses of the objects.
This highlights the strength of our approach in using the physics simulator to automatically derive action validity without requiring any manual, explicit specification.
Solution lengths, however, are often about twice the optimal number of steps.
This is because \texttt{SAST}\xspace, like \texttt{RRTConnect}\xspace, is non-optimizing.\looseness=-1
\subsection{Simplification performance}
The results in~\cref{expres:planning} do not use the solution simplification heuristics of~\cref{sec:approach.simplification}.
\Cref{expres:simplification} shows the results of applying these heuristics to the solutions found by each successful run, indicating that solution simplification usually terminated within \(40\) seconds.
Most of the additional time comes from rechecking for stability and replanning for the low-level \texttt{TRANSIT}\xspace{} and \texttt{TRANSFER}\xspace{} motions, required whenever two actions moving the same object merge.
This recheck is performed up to \(\mathcal{O}(n^3)\) times, where \(n\) is the number of actions in the initial solution.
Simplification usually decreased solution length by roughly half, reaching or coming close to the optimal solution length.
\Cref{fig:expsimplified} shows an example of a simplified \textsc{Rotate}\xspace{} solution.\looseness=-1
\begin{table}[t]
\centering
\begin{tabularx}{\linewidth}{Yccc}
\toprule
& \textsc{Reverse}\xspace & \textsc{Transform}\xspace & \textsc{Rotate}\xspace \\
\midrule
Simplify Time (s) & 26.5 (1.4) & 41.8 (2.7) & 28.0 (2.0) \\
Simplified Solution Length & 12.1 (0.1) & 12.0 (0.1) & 8.0 (0.1) \\
Improvement (\%) & 51.5 (1.2) & 46.8 (1.7) & 44.9 (1.7) \\
\bottomrule
\end{tabularx}
\caption{Solution simplification results.
\textit{Simplify Time} is the time taken for the simplification procedure to terminate.
\textit{Simplified Solution Length} is the length of the simplified solutions, in terms of the number of objects moved.
\textit{Improvement} is the percentage of the original solution length that simplification eliminated.
Mean values are shown, with standard error in parentheses.\looseness=-1%
}\label{expres:simplification}
\vspace{-0.2cm}
\end{table}
\subsection{How important is integrating motion planning?}
\texttt{SAST}\xspace{} verifies the feasibility of each \texttt{TRANSIT}\xspace{} and \texttt{TRANSFER}\xspace{} motion by planning a valid trajectory in the robot's configuration space for the motion.
To investigate the impact of this integrated verification, we conducted an ablation experiment by removing these low-level feasibility checks.
We find solutions in terms of sequences of object arrangements and object grasps, assuming that the transformations between arrangements are feasible.
After finding a full solution, we attempt to compute a motion plan for each of the associated low-level motions to check solution validity.
\Cref{expres:primitiveablation} shows a substantial drop in solution feasibility when low-level motion checks are skipped.
Indeed, \texttt{TRANSIT}\xspace{} motions require checking that an object is reachable with a given grasp without the manipulator colliding with the other objects; \texttt{TRANSFER}\xspace{} motions require checking that an object can be pulled away without the object or the manipulator intersecting with the other nearby objects. The problems we consider have environment obstacles (table, tiles, and bumps) that do not interfere much with the robot's motion---in more constrained environments, such as in a cupboard or drawer, we could expect feasibility to worsen.
\begin{table}[t]
\centering
\begin{tabularx}{\linewidth}{Yccc}
\toprule
& \textsc{Reverse}\xspace & \textsc{Transform}\xspace & \textsc{Rotate}\xspace \\
\midrule
w/ Motion Checks & 1.00 (0) & 1.00 (0) & 1.00 (0) \\
w/o Motion Checks & 0.60 (0.07) & 0.42 (0.07) & 0.56 (0.07) \\
\bottomrule
\end{tabularx}
\caption{Solution feasibility with and without low-level motion checking during the arrangement tree search.
We show the mean proportion of solutions with feasible low-level motions; the standard error is in parentheses.\looseness=-1%
}\label{expres:primitiveablation}
\vspace{-0.2cm}
\end{table}
\section{Discussion}
This work contributes a novel perspective on manipulation planning that embeds a simulator to implicitly encode the valid actions and states for a problem domain.
We demonstrate this perspective for 3D object reconfiguration planning, where we are able to efficiently find statically stable object configurations that would otherwise be onerous to specify.
\texttt{SAST}\xspace{} currently uses random sampling and extension to grow the arrangement space graph, but informed approaches like Expansive Space Trees~\cite{hsu_path_planning_1997} or Monte Carlo Tree Search~\cite{huang_parallel_monte_2022} may discover solutions faster.
SBMP advances such as biased samplers~\cite{hsu_bridge_test_2003,Lee2022} and optimizing planners~\cite{karaman_sampling-based_algorithms_2011,gammell_asymptotically_optimal_2021,strub_aitstar_eitstar_2022,janson2015fast} may also complement embedded-simulator planning.
Embedded-simulator planning is broadly applicable outside object reconfiguration, which opens several directions for future work.
How can we use simulators to encode constraints beyond stability, such as orientation or contact dynamics?
Similarly, what are the precise requirements for an embedded simulator?
For some problems, precise physics simulation may be unnecessary; for others, non-standard physics can encode problem constraints.
Further, how well do plans found via embedded simulation transfer to the real world?\looseness=-1
Finally, we wish to explore richer uses of the embedded simulator, including combining differentiable simulation with optimization techniques, to broaden the manipulation problem classes that we can efficiently solve.
\newpage
\balance
\printbibliography{}
\end{document}
|
{
"arxiv_id": "2302.14184",
"language": "en",
"timestamp": "2023-03-01T02:04:14",
"url": "https://arxiv.org/abs/2302.14184",
"yymm": "2302"
} | \section{Introduction}
\label{sec1}
\vspace*{-2mm}
Atomic nuclei are made of fundamental particles called quarks and gluons (a.k.a.\ partons). Due to the property of color confinement, partons cannot be studied in isolation. Confinement can be broken, however, in many-parton systems of very high densities \cite{Linde_1979, Collins1975}. Quark-Gluon Plasma (QGP) is one of several such emergent phases in chunks of strongly interacting matter. It is a hot and dense soup of quarks and gluons with liquid properties that can be created in nucleus--nucleus collisions at very high energies \cite{Gyulassy:2004zy} and has strong similarity with the matter that filled the universe right after the Big Bang before it cooled down to produce the hadronic matter that we observe today \cite{Yagi:2005yb}.
The largest experimental heavy-ion facility in the world, the Large Hadron Collider (LHC) at CERN, collides nuclei at center-of-mass energies of several TeV per nucleon pair. At these high energies all of the baryon number carriers from the incoming nuclei pass through each other without being stopped, while a fraction of the total energy is deposited in the mid-rapidity region of the collision, creating new matter with approximately zero net baryon density~\cite{Muller:2012zq}. Strong interactions among its constituents quickly turn this matter into a baryon-neutral QGP that rapidly cools via collective expansion, converting into color-neutral hadrons when the temperature drops below the hadronization temperature. These hadrons continue to interact until their density becomes too low, after which unstable hadronic resonances decay and the stable decay products fly out toward the detectors.
The QGP phase has an extremely short lifetime (${\sim\,}10^{-23}$\,s) and size (${\sim\,}10^{-14}$\,m). It can only be studied by recording the distributions of, and correlations among, the energies and momenta of the thousands of finally emitted hadrons. A quantitative theoretical analysis of these distributions requires their simulation using sophisticated dynamical evolution models and advanced statistical techniques~\cite{big_picture}.
Many large-scale heavy-ion simulation studies using Bayesian statistical techniques have been carried out in recent years to infer from the experimental measurements the properties of the QGP \cite{Petersen:2010zt, Novak:2013bqa, Sangaline:2015isa, Bernhard:2015hxa, Bernhard:2016tnd, Moreland:2018gsh, Bernhard:2019bmu, JETSCAPE:2020shq, Nijs:2020ors, Nijs:2020roc, JETSCAPE:2020mzn, Heffernan_CGC}. One of the major limitations of the current simulations is the breakdown of the hydrodynamic approach that describes the evolution of the QGP phase at early times when the QGP is being formed. This breakdown is caused by extremely large pressure gradients in the incipient stage of the newly created matter. To circumvent this issue, almost all previous Bayesian model calibrations invoked a weakly-coupled, gaseous pre-hydrodynamic stage.\footnote{%
A very recent study \cite{Heffernan_CGC} employed the Color-Glass Condensate based IP-Glasma model to dynamically evolve the pre-hydrodynamic stage. While this is a significant conceptual improvement over free-streaming partons, it shares with the latter approach that, being rooted in Classical Yang-Mills dynamics for the interacting gluon fields, it keeps the system from naturally approaching local thermal equilibrium.}
In this stage the fireball medium evolves initially as a conformal, weakly interacting gas. After a ``switching time'' of order 1\,fm/$c$ the density and pressure gradients have decreased to a level where second-order viscous hydrodynamics, the framework used to evolve the QGP fluid, becomes applicable. Strong interactions in the QGP break its conformal symmetry; the matching between a conformally symmetric pre-hydrodynamic stage to a non-conformal QGP fluid stage, with a realistic equation of state (EoS) $p(e)$ taken from lattice QCD simulations, introduces considerable theoretical ambiguity. The discontinuity of the EoS gives rise to a number of unphysical effects, including a large positive bulk viscous pressure at the start of the hydrodynamic stage \cite{Liu:2015nwa, NunesdaSilva:2020bfs}. The second-order viscous hydrodynamic equations must then rapidly evolve the bulk pressure toward its first-order Navier-Stokes value, which is of more moderate magnitude but has the opposite sign \cite{McNelis_2021}.
In this work we perform the first Bayesian model calibration of a novel dynamical evolution model based on viscous {\it anisotropic} hydrodynamics (\texttt{VAH}) as described in Refs.~\cite{mcnelis20183+, McNelis_2021} instead of standard second-order viscous hydrodynamics.\footnote{%
Our \texttt{VAH}{} approach differs from that used in Refs.~\cite{Martinez:2010sc, florkowski2011highly, alqahtani20173+, alqahtani2017anisotropic, almaalol2019anisotropic, alqahtani2021bulk} by including in the hydrodynamic evolution equations all dissipative terms arising from the residual viscous correction $\delta\tilde f$
\cite{bazow2014second}.
}
We use the JETSCAPE simulation and model calibration framework \cite{JETSCAPE:2020mzn} for relativistic heavy-ion collisions but modify it by eliminating the free-streaming pre-equilibrium module and replacing the second-order viscous hydrodynamic (VH) stage with the \texttt{VAH}{} module described in \cite{McNelis_2021}. \texttt{VAH}{} can handle the large anisotropy between the longitudinal and transverse pressure gradients that characterizes the early expansion stage in heavy-ion collisions much better than VH; this allows us to start the hydrodynamic stage at such an early time that neglecting the pre-hydrodynamic evolution completely becomes a good approximation. \texttt{VAH}{} transitions automatically into standard second-order viscous hydrodynamics (although with an algorithm based on a different decomposition of the fluid's energy-momentum tensor) at later times \cite{McNelis_2021}.
The rest of this paper is organized as follows. An overview of the \texttt{VAH}{} hybrid evolution model for ultra-relativistic heavy-ion collisions, including a description of its model parameters, is presented in Sec.~\ref{sec2}. In Sec.~\ref{sec3} we briefly describe the Bayesian model calibration workflow, including several innovations introduced and tested in the present study. Section~\ref{sec4} describes the construction and training of fast emulators for the \texttt{VAH}{} model. Closure test results for these emulators are reported in Sec.~\ref{sec5}. The actual model calibration process is described in Sec.~\ref{sec6}, and the posterior probability distributions for the inferred parameters are described and discussed. A model sensitivity analysis for the observables used in the calibration procedure is presented in Sec.~\ref{sec7}. The performance of the calibrated model in describing the calibration and other experimental data is discussed in Sec.~\ref{sec8}. Our conclusions are presented in Sec.~\ref{sec9}. Appendices \ref{app:datacollection}--\ref{app:Sobol} provide technical details related to procedures described in the main body of the text.
\section{Overview of the hybrid model for relativistic heavy-ion collisions}
\label{sec2}
\vspace*{-2mm}
The dynamical evolution of relativistic heavy-ion collisions involves physics ranging from small to large length scales and demands a multistage modeling approach, with different modules simulating different stages of the evolution. In this section, we briefly discuss each evolution stage and its simulation.
\subsection{Initial conditions}
\label{trento}
\vspace*{-2mm}
Our simulation of a relativistic heavy-ion collision starts right after the two incoming nuclei collide. In the lab frame both nuclei move initially close to the speed of light in opposite directions and, for an observer on the beam axis, appear as strongly Lorentz-contracted pancakes. As the nuclei hit each other, their valence quarks (which carry their baryon number) pass through each other without being fully stopped in the center-of-momentum frame. Due to interactions between their gluon clouds they lose, however, a large fraction of their energy, some of which is deposited in the form of newly created matter near mid-rapidity (i.e., with low longitudinal momenta in the lab frame) \cite{big_picture}. The spatial distribution of this matter is described phenomenologically with the stochastic model \texttt{T$_\mathrm{R}$ENTo}{} \cite{Moreland:2014oya, trento_code}.\footnote{%
For the default values of the \texttt{T$_\mathrm{R}$ENTo}{} model parameters described in the following please consult \cite{Moreland:2014oya, trento_code}.}
\texttt{T$_\mathrm{R}$ENTo}{} models the incoming nuclei as a conglomerate of nucleons whose positions in the plane perpendicular to the beam are held fixed during the extremely short nuclear interpenetration time. The transverse density distribution of a nucleon is parameterized by a three-dimensional Gaussian with width parameter $w$, integrated over the beam direction $z$:
\begin{equation}
\label{eq:fluctuated_thickness}
\rho(\mathbf{x}_\perp)=\int_{-\infty}^\infty \frac{dz}{(2\pi w^2)^{3/2}}\exp\left(-\frac{\mathbf{x}_\perp^2+z^2}{2w^2}\right).
\end{equation}
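Carrying out the $z$ integration explicitly yields the two-dimensional Gaussian transverse profile
\begin{equation*}
\rho(\mathbf{x}_\perp)=\frac{1}{2\pi w^2}\exp\left(-\frac{\mathbf{x}_\perp^2}{2w^2}\right),
\end{equation*}
normalized such that $\int d^2\mathbf{x}_\perp\, \rho(\mathbf{x}_\perp)=1$.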
The positions of the nucleons inside each incoming nucleus are sampled with a minimum pairwise separation $d_\mathrm{min}$ to simulate their repulsive hard core, but otherwise independently, from Woods-Saxon distributions whose radius $R$ and surface diffusion parameter $\alpha$ are adjusted such that, after folding with the above nucleon density, the measured nuclear charge density distributions are reproduced.\footnote{%
We do not account for the possibility of a neutron skin.
}
Next, assuming the experiment measures collisions with minimum bias, an impact parameter vector $\mathbf{b}$ in the transverse plane is sampled from a uniform distribution, and each nucleon in nucleus $A$ ($B$) is shifted by $+\mathbf{b}/2$\ \ ($-\mathbf{b}/2$). For each pair of colliding nucleons their collision probability is then sampled by taking into account the proton-proton collision cross section at the specified center-of-mass energy ($\sqrt{s_\textrm{NN}}{}=2.76$\,TeV for this work). Nucleons that do not undergo any collisions and thus do not contribute to mid-rapidity energy deposition are then thrown away. The remaining nucleons in each nucleus are labeled as participants, and their density distributions, integrated along the $z$-direction, are added to get the areal densities in the transverse plane of the participants in each of the two nuclei (their so-called nuclear thickness functions):
\begin{equation}
T_{A,B}(\mathbf{x}_\perp) = \sum_{i=1}^{N^{A,B}_\mathrm{part}} \gamma_i \int_{-\infty}^\infty dz \, \rho(\mathbf{x}-\mathbf{x}_i).
\end{equation}
Here $\mathbf{x}_i$ are the nucleon positions, and the $\gamma_i$ are independent random weights, sampled from a Gamma distribution with unit mean and standard deviation $\sigma_k$. These parameterize the measured large multiplicity fluctuations observed in minimum-bias proton--proton collisions. Note that the nuclear thickness functions fluctuate from event to event, due to the fluctuating nucleon positions $\mathbf{x}_i$ and their fluctuating ``interaction strengths'' $\gamma_i$. They describe how much longitudinally integrated matter each nucleus contributes to the collision at each transverse position $\mathbf{x}_\perp$.
The implementation chosen here then sets the initially deposited energy density at transverse position $\mathbf{x}_\perp$ to
\begin{equation}
\label{eq:edens}
\epsilon(\mathbf{x}_\perp)=\frac{1}{\tau_0} \frac{dE}{d\eta \, d^2\mathbf{x}_\perp}=\frac{1}{\tau_0} N\,T_R(\mathbf{x}_\perp;p),
\end{equation}
where the ``reduced thickness function'' $T_R$ is defined in terms of the two nuclear thickness functions by their ``generalized mean'' \cite{Moreland:2014oya}
\begin{equation}
\label{eq:harmonic_mean}
T_{R}(\mathbf{x}_\perp;p)=\left(\frac{T_A^p(\mathbf{x}_\perp)+T_B^p(\mathbf{x}_\perp)}{2}\right)^{1/p}\,.
\end{equation}
Here the longitudinal proper time $\tau_0$ marks the end of the energy deposition process and is taken as small as technically possible (see below). The normalization $N$ depends on $\sqrt{s_\textrm{NN}}$ and allows one to describe the growth of the energy deposited at mid-rapidity with increasing collision energy.
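For orientation, the generalized-mean parameter $p$ in Eq.~\eqref{eq:harmonic_mean} interpolates between familiar means of the two thickness functions: the arithmetic mean $(T_A{+}T_B)/2$ for $p=1$, the geometric mean $\sqrt{T_A T_B}$ in the limit $p\to 0$, and the harmonic mean $2\,T_A T_B/(T_A{+}T_B)$ for $p=-1$.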
In this work, we assume longitudinal boost invariance for the collision system (i.e., $\eta$-independence of all spatial distributions where $\eta=\frac{1}{2}\ln\bigl[(t{+}z)/(t{-}z)\bigr]$ is the space-time rapidity, and $y$-independence of all momentum distributions where $y=\frac{1}{2}\ln\bigl[(E{+}p_z)/(E{-}p_z)\bigr]$ is the momentum rapidity) and use only observables measured at mid-rapidity as also done in previous Bayesian calibrations of relativistic heavy-ion collision models \cite{JETSCAPE:2020mzn, Nijs:2020ors, Heffernan_CGC}. We start the viscous anisotropic hydrodynamic evolution at $\tau_0=0.05$\,fm/$c$. We use the above \texttt{T$_\mathrm{R}$ENTo}{} initial condition model to simulate Pb--Pb collisions at $\sqrt{s_\textrm{NN}}{}=2.76$\,TeV. The Woods-Saxon parameters used for Pb are $R=6.62$\,fm and $\alpha=0.546$\,fm. The nucleon width $w$, their minimum distance $d_\mathrm{min}$, the normalization $N$, the harmonic mean parameter $p$, and the standard deviation of the Gamma distribution $\sigma_k$ are model parameters of interest that we infer from the experimental data using Bayesian parameter estimation.
\subsection{Viscous Anisotropic Hydrodynamics (\texttt{VAH}{})}
\label{vah}
\vspace*{-2mm}
Hydrodynamics is an effective theory that can accurately model the space-time evolution of macroscopic properties of many dynamical systems found in nature, ranging from large-scale galaxy formation to small-scale systems such as cold atomic gases \cite{big_picture, Romatschke:2017ejr}. At even smaller scales, hydrodynamic modeling has also been extensively used for decades in describing relativistic heavy-ion collisions \cite{Kolb:2003dz, Heinz:2009xj, Heinz:2013th, Gale, Jeon:2015dfa}. Within the framework of Bayesian parameter estimation it is the workhorse for modeling the QGP stage of the collision fireball \cite{Nijs:2020ors, Bernhard:2015hxa, JETSCAPE:2020mzn, Heffernan_CGC}.
Taking the macroscopic degrees of freedom to be the energy density $\epsilon$ and the fluid four velocity $u_\mu$ (i.e., the four-velocity of the local rest frame (LRF) relative to the global frame), the energy momentum tensor for an ideal fluid (i.e., at zeroth-order in gradients of the macroscopic degrees of freedom) can be decomposed as
\begin{equation}
T^{\mu\nu} = \epsilon u^\mu u^\nu - p\Delta^{\mu\nu}.
\label{eq:T_munuideal}
\end{equation}
Here, $\Delta^{\mu\nu}=g^{\mu\nu}-u^\mu u^\nu$ projects on the space-like part in the LRF. The signature of the metric $g^{\mu\nu}$ is taken to be ``mostly minus'' $(+,-,-,-)$. $p(\epsilon)$ is the isotropic equilibrium pressure in the LRF, given by the EoS, which is a property of the medium that reflects its microscopic degrees of freedom and their interactions.
Ideal fluid dynamics embodies the unrealistic assumption of instantaneous local equilibration of the medium, i.e., zero mean free path for the microscopic constituents. For finite microscopic relaxation times (i.e., non-zero mean free path) an expanding fluid is always somewhat out of local equilibrium. These non-equilibrium effects can be written as first- and higher-order gradient corrections to the energy momentum tensor,
\begin{equation}
T^{\mu\nu} = \epsilon u^\mu u^\nu - (p+\Pi)\Delta^{\mu\nu} + \pi^{\mu\nu},
\label{eq:T_munuviscous}
\end{equation}
whose evolution is then described by viscous hydrodynamic equations of motion. In second-order viscous hydrodynamics one considers dissipative corrections $\Pi, \pi^{\mu\nu}$ given by a sum of terms containing up to two derivatives of the macroscopic degrees of freedom $\epsilon$ and $u^\mu$, multiplied by so-called transport coefficients that reflect the microscopic transport properties of the fluid medium. To ensure causality, these constitutive relations cannot, however, be imposed instantaneously (as done in Navier-Stokes theory), but must be implemented via additional equations of motion describing their dynamical evolution toward their Navier-Stokes limits, on time scales related to the microscopic relaxation time. The resulting ``M\"uller-Israel-Stewart (MIS) type'' theories \cite{Muller:1967, Israel:1976tn, Israel:1979wp} introduce additional non-hydrodynamic modes of higher frequencies and shorter wavelengths \cite{Romatschke:2017ejr} which dominate the macroscopic dynamics when the medium develops large spatial gradients, leading to a breakdown of the gradient expansion and eventually rendering the hydrodynamic approach invalid. In particular, second-order viscous hydrodynamics is not applicable for situations with very large pressure anisotropies such as those encountered in the earliest stage of ultra-relativistic heavy-ion collisions directly after nuclear impact.
State-of-the-art simulation models for relativistic heavy-ion collisions circumvent this issue by introducing a pre-hydrodynamic stage that evolves the out-of-equilibrium quark-gluon system microscopically until the gradients have decayed sufficiently that the medium can be fed into the hydrodynamic evolution module. One of these pre-hydrodynamic models is based on the simplifying assumption of free-streaming massless partons for which the microscopic kinetic theory can be solved exactly (see, for example, Refs.~\cite{Baym:1984np, Broniowski:2008qk, Liu:2015nwa, Romatschke:2015dha, Liu-thesis}). In this work we avoid this overly simplified picture by using a different formulation of hydrodynamics, viscous anisotropic hydrodynamics (\texttt{VAH}) \cite{Martinez:2010sc, florkowski2011highly, alqahtani20173+, alqahtani2017anisotropic, almaalol2019anisotropic, alqahtani2021bulk, bazow2014second, mcnelis20183+, McNelis_2021}, that significantly extends the range of validity of hydrodynamics in the presence of large pressure anisotropies like those encountered in the early stage of heavy-ion collisions \cite{McNelis_2021}, and can even describe the far-off-equilibrium free-streaming limit in that situation, for both massless \cite{bazow2014second} and massive partons \cite{Chattopadhyay:2021ive, Jaiswal:2021uvv}.
The development of viscous anisotropic hydrodynamics was motivated by the fact that the momentum distribution of the QGP is highly anisotropic right after the relativistic heavy-ion collision~\cite{PhysRevD.22.2793}. Initially, the medium has a large expansion rate along the longitudinal beam direction and can be approximated by a boost-invariant longitudinal velocity profile (Bjorken flow \cite{PhysRevD.27.140}). The transverse expansion rate of the medium is initially small, building up only gradually in response to the transverse pressure gradients. The initially highly anisotropic expansion rate engenders a large shear pressure, resulting in a high degree of anisotropy between the pressures in the transverse and longitudinal (beam) directions.
To formulate viscous anisotropic hydrodynamics one starts by decomposing the spatial projector $\Delta^{\mu\nu}$ further, splitting it into a longitudinal projector along the beam direction ($z^\mu z^\nu$) and a transverse projector ($\Xi^{\mu\nu}$) as $\Delta^{\mu\nu}=\Xi^{\mu\nu}-z^\mu z^\nu$. Multiplying these projectors by different longitudinal and transverse pressures, the energy-momentum tensor is now decomposed as \cite{Molnar:2016vvu}
\begin{equation}
\label{eq:vah}
T^{\mu\nu} = \epsilon u^\mu u^\nu + P_L z^\mu z^\nu - P_\perp \Xi^{\mu\nu} + 2 W_{\perp z}^{(\mu} z^{\nu)} + \pi_{\perp}^{\mu \nu}.
\end{equation}
The decompositions (\ref{eq:T_munuviscous}) and (\ref{eq:vah}) are mathematically equivalent \cite{Molnar:2016vvu}, but the \texttt{VAH}{} equations evolve the longitudinal and transverse pressures $P_L$ and $P_\perp$ separately, treating them on par with the thermal pressure in standard viscous hydrodynamics and not by assuming that their differences from the thermal pressure and from each other are small viscous corrections.
The evolution equations for the energy density and the flow velocity are obtained from the conservation laws for energy and momentum:
\begin{equation}
\partial_\mu T^{\mu \nu} = 0.
\end{equation}
The equilibrium pressure $p(\epsilon)$ is taken from lattice QCD calculations by the HotQCD collaboration \cite{PhysRevD.97.014510}. The dynamical evolution equations for the non-equilibrium flows $P_L, P_\perp, W_{\perp z}^{(\mu} z^{\nu)}, \pi_\perp^{\mu \nu}$ are derived assuming that the fluid's microscopic physics can be described by the relativistic Boltzmann equation with a medium-dependent mass \cite{Alqahtani:2016rth, PhysRevD.95.054007, mcnelis20183+}. The relaxation times for $P_L$ and $P_\perp$ are written in terms of those for the bulk and shear viscous pressures \cite{mcnelis20183+}, which are parameterized as
\begin{align}
\tau_\pi = \frac{\eta}{s\beta_\pi}, && \tau_\Pi = \frac{\zeta}{s\beta_\Pi}.
\end{align}
$\beta_\pi$, $\beta_\Pi$, as well as all of the anisotropic transport coefficients, are computed within the quasiparticle kinetic theory model discussed in Ref.~\cite{mcnelis20183+}.\footnote{%
Specifically, $\beta_\pi$ and $\beta_\Pi$ are given by the temperature-dependent isotropic thermodynamic integrals defined in Eq.~(82) in Ref.~\cite{mcnelis20183+}.}
For the temperature-dependent specific shear and bulk viscosities, $\bigl(\eta/s\bigr)(T)$ and $\bigl(\zeta/s\bigr)(T)$, we use the same parameterizations as the JETSCAPE Collaboration \cite{JETSCAPE:2020mzn}:
\begin{eqnarray}
\label{positivity}
\Bigl(\frac{\eta}{s}\Bigr)(T) &=& \max\left[\left.\frac{\eta}{s}\right\vert_{\rm lin}\!\!\!\!(T), \; 0\right],
\\
\text{with}\quad\left.\frac{\eta}{s}\right\vert_{\rm lin}\!\!\!(T) &=& a_{\rm low}\, (T{-}T_{\eta})\, \Theta(T_{\eta}{-}T)+ (\eta/s)_{\rm kink}
\nonumber\\
&+& a_{\rm high}\, (T{-}T_{\eta})\, \Theta(T{-}T_{\eta}),
\end{eqnarray}
and
\begin{eqnarray}
\Bigl(\frac{\zeta}{s}\Bigr)(T) &=& \frac{(\zeta/s )_{\max}\Lambda^2}{\Lambda^2+ \left( T-T_\zeta\right)^2},
\\
\text{with}\qquad\Lambda &=& w_{\zeta} \Bigl(1 + \lambda_{\zeta} \sign \left(T{-}T_\zeta\right) \Bigr)\nonumber.
\end{eqnarray}
A figure illustrating these parameterizations can be found in \cite{JETSCAPE:2020mzn}.
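As a cross-check of these formulas, the following Python sketch evaluates both parameterizations on a temperature grid; the parameter names mirror the symbols above, and the numerical values are placeholders rather than our prior ranges.
\begin{verbatim}
import numpy as np

def eta_over_s(T, a_low, a_high, T_eta, etas_kink):
    # Piecewise-linear "kink" parameterization, clipped at zero.
    lin = np.where(T < T_eta,
                   etas_kink + a_low  * (T - T_eta),
                   etas_kink + a_high * (T - T_eta))
    return np.maximum(lin, 0.0)

def zeta_over_s(T, zetas_max, T_zeta, w_zeta, lam_zeta):
    # Skewed Lorentzian parameterization with width Lambda.
    Lam = w_zeta * (1.0 + lam_zeta * np.sign(T - T_zeta))
    return zetas_max * Lam**2 / (Lam**2 + (T - T_zeta)**2)

T = np.linspace(0.12, 0.40, 200)        # temperature grid in GeV (placeholder)
print(eta_over_s(T, -0.5, 0.3, 0.16, 0.10)[:3])
print(zeta_over_s(T, 0.1, 0.18, 0.05, -0.2)[:3])
\end{verbatim}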
Apart from the eight parameters related to the viscosities, there is one additional parameter that we will infer from the experimental data: the initial ratio $R=(P_L/P_\perp)_0$ at the time $\tau_0$ when \texttt{VAH}{} is initialized:
\begin{equation}
P_{\perp 0}=\frac{3}{2{+}R}\, p_0\,, \qquad
P_{L 0}=\frac{3R}{2+R}\, p_0\,.
\label{eq:R}
\end{equation}
Here $p_0{\,\equiv\,}p(\epsilon_0)$ is the equilibrium pressure at time $\tau_0$. The allowed range for $R$ is restricted to $R \in (0.3, 1)$ for technical reasons: For $R<0.3$ the inversion of the relations expressing the macroscopic densities in terms of the microscopic parameters of the distribution function (needed for the calculation of the transport coefficients) fails to converge.\footnote{%
This can be avoided (Kevin Ingles, private communication) when using the modified Romatschke-Strickland distribution introduced in \cite{Chattopadhyay:2021ive, Jaiswal:2021uvv} but this modification has not yet been implemented in the Bayesian calibration code used here.}
By using this parameterization we also assume that the initial bulk viscous pressure is $\Pi=(2P_\perp{+}P_L)/3-p=0$. The initial flow profile is assumed to be static in Milne coordinates, $u^\mu=(1,0,0,0)$, and the residual shear stresses $W_{\perp z}^\mu$
and $\pi_\perp^{\mu \nu}$ are initially set to zero.
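As a consistency check, inserting Eq.~\eqref{eq:R} into the definition of the bulk viscous pressure indeed gives
\begin{equation*}
\Pi_0=\frac{2P_{\perp 0}+P_{L 0}}{3}-p_0
=\frac{1}{3}\,\frac{6+3R}{2+R}\,p_0-p_0=0\,.
\end{equation*}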
\subsection{Particlization}
\label{is3d}
\vspace*{-2mm}
As the QGP expands and cools, it eventually reaches the critical temperature for hadronization. Below that temperature, the fireball medium can be described as a hadron resonance gas. Its constituents are color neutral and thus interact much less strongly with each other than do the quarks and gluons in the QGP just before hadronization. Correspondingly, the mean free path increases rapidly below this transition, and fluid dynamics quickly becomes inadequate \cite{Song:2011hk}. This breakdown forces a transition back to a microscopic description, for which we use the Boltzmann transport code \texttt{SMASH}{} \cite{smash_code} briefly described in the following subsection.
This so-called particlization transition is not a physical phase transition but simply a change of language, from fluid dynamical degrees of freedom in \texttt{VAH}{} to particle degrees of freedom in \texttt{SMASH}. Following earlier calibration efforts \cite{Bernhard:2016tnd, Moreland:2018gsh, Bernhard:2019bmu, JETSCAPE:2020mzn, JETSCAPE:2020shq, Nijs:2020ors, Nijs:2020roc, Heffernan_CGC}, we keep the particlization temperature as a model parameter to be inferred from the experimental data and denote it by the switching temperature $T_\mathrm{sw}$. Fluid cells that reach the switching temperature, together with their surface normal vectors $d^3\sigma_\mu$, are identified using the code \texttt{CORNELIUS}{} \cite{Huovinen:2012is}. The particlization hypersurface is passed to the \texttt{iS3D}{} hadron sampler \cite{McNelis:2019auj,is3d_code}. There, the Cooper-Frye prescription is used to convert all energy and momentum of the fluid into different hadron types and momenta emitted from the switching hypersurface.
According to the Cooper-Frye formula \cite{Cooper:1974mv}, the Lorentz-invariant particle momentum distribution is given by
\begin{equation}
p^0 \frac{dN_i}{d^3p} = \frac{1}{(2\pi)^3} \int_{\Sigma} d^3\sigma_{\mu}(x) p^{\mu} f_i(x;p).
\end{equation}
Here $p$ is the particle four-momentum, $x$ is the position four-vector of the fluid cell, $f_i(x,p)$ is the phase-space distribution function for particle species $i$ (which carries the spin degeneracy factor), and $\Sigma$ is the switching hypersurface, with volume element $d^3\sigma_\mu(x)$ at point $x\in\Sigma$.
For a locally equilibrated hadron resonance gas the distribution function takes the form
\begin{equation}
f_{\mathrm{eq},i}(x,p)=\frac{g_i}{\exp\bigl[p{\cdot}u(x)/T(x)-\alpha_i(x)\bigr]+\Theta_i}.
\end{equation}
Here $g_i{\,=\,}2s_i{+}1$ is the spin degeneracy for species $i$, $u(x)$ is the 4-velocity of the fluid, $T(x)$ is the temperature, $\alpha_i(x)=b_i\mu_B(x)/T(x)$ is the baryon chemical potential-to-temperature ratio for a hadron with baryon number $b_i$, and $\Theta_i=+1\ (-1)$ accounts for the quantum statistics of fermions (bosons). In the present work $\mu_B{\,=\,}0$ since at midrapidity the fireball is taken to have zero net baryon number.
On the switching hypersurface $\Sigma$ the QGP liquid is still characterized by large dissipative flows and thus cannot be modeled as a locally equilibrated hadron resonance gas. The distribution functions must be modified such that the momentum moment $\sum_i\langle p^\mu p^\nu\rangle_i=T^{\mu\nu}$ matches the energy-momentum tensor (\ref{eq:vah}) on the particlization surface, including the dissipative flows. There are many types of modifications that fulfill this constraint -- here we choose the Pratt-Torrieri-McNelis-Anisotropic (PTMA) prescription \cite{PhysRevC.103.064903}:\footnote{%
The theoretical uncertainty introduced by different assumptions for the form of the momentum dependence of $f_i(x,p)$ on the particlization surface and its importance in Bayesian model parameter inference is discussed in Refs.~\cite{PhysRevC.103.064903, McNelis:2019auj, JETSCAPE:2020shq}. In the absence of a full microscopic kinetic description of the evolution preceding particlization it cannot be avoided. A maximum entropy (minimum information) approach to defining $f_i(x,p)$ at particlization was proposed in \cite{Everett:2021ulz} but has not yet been implemented in the Bayesian parameter inference framework. A full Bayesian assessment of the contribution to the error budget of the model parameters arising from the particlization and other model uncertainties will have to await the development of suitably adapted Bayesian model mixing tools \cite{Phillips:2020dmw}.}
\begin{equation}
\label{eq:PTMA}
f_{a,i}^{\mathrm{PTMA}}(x,p)=\frac{\mathcal{Z}(x)\, g_i}{\exp\Bigl[\frac{ \sqrt{{p'}^2(x)+m_i^2}}{\Lambda(x)}\Bigr]+\Theta_i}\,.
\end{equation}
The normalization $\mathcal{Z}$, effective temperature $\Lambda(x)$, and the transformation relating $p'(x)$ with $p$ depend on the dissipative terms in the \texttt{VAH}-decomposition (\ref{eq:vah}) of the energy-momentum tensor at point $x$; the interested reader is referred to Ref.~\cite{PhysRevC.103.064903} for details. Compared to some other prescriptions (see discussion in \cite{PhysRevC.103.064903, McNelis:2019auj}) the PTMA distribution (\ref{eq:PTMA}) has the advantage of being positive definite by construction.
\subsection{Hadronic afterburner}
\label{smash}
\vspace*{-2mm}
For the final decoupling stage of the fireball evolution, the hadrons sampled from the particlization surface are fed into the hadronic Boltzmann transport code \texttt{SMASH}{} \cite{smash_code}, which allows the hadrons to rescatter and the unstable resonances to decay and to be recreated in hadronic interactions, until the system becomes so dilute and collisions so rare that first the chemical composition and then the momentum distributions fall out of equilibrium and eventually freeze out \cite{Nonaka:2006yn, Hirano:2007ei, Petersen:2008dd, Song:2010aq, Heinz:2011kt, Song:2013qma, Zhu:2015dfa, Ryu:2017qzn}. To obtain sufficient particle statistics at limited computational cost, we limit the full hydrodynamic runs to a representative sample of the initial-state quantum fluctuations, comprising between 200 and 1600 hydro events, distributed over all collision centralities, per design point in parameter space (see App.~\ref{app:datacollection}), but oversample the switching hypersurface for each hydro event many times until a sufficient number of emitted hadrons has been generated for good statistical precision of all observables of interest \cite{Petersen:2010zt, Novak:2013bqa, Sangaline:2015isa, Bernhard:2015hxa, Bernhard:2016tnd, Moreland:2018gsh, Bernhard:2019bmu, JETSCAPE:2020shq, Nijs:2020ors, Nijs:2020roc, JETSCAPE:2020mzn, Heffernan_CGC}. In practice we oversample each hypersurface until a total of $10^5$ hadrons has been generated per hydrodynamic event, but not more than 1000 times. Finally, the experimental observables measured by the \texttt{ALICE} detector \cite{Aamodt:2010cz, Adam:2016thv, Abelev:2013vea, ALICE:2011ab} are calculated from the final ensemble of hadrons in the \texttt{SMASH}{} output, following the experimental procedures.
\section{Bayesian calibration}
\label{sec3}
\vspace*{-2mm}
This section provides a brief conceptual summary of Bayesian model calibration; details will be fleshed out in later sections. Let us represent a generic simulation model by a mathematical function $\mathbf{y}_\mathrm{sim}(\cdot)$ that takes values of parameters\footnote{%
The reader is asked to distinguish this use of $\mathbf{x}$ to describe sets of model parameters from its use as a spatial position vector elsewhere in the text. Its meaning should be clear from the context.
}
$\mathbf{x}{\,=\,}(x_1, \ldots, x_q) {\,\in\,}\mathcal{X}$ and returns output with mean $\mathbf{y}_\mathrm{sim}(\mathbf{x}){\,\in\,}\mathbb{R}^d$. A Bayesian parameter estimation process uses experimental observations, denoted by a vector $\mathbf{y_{\rm exp}}{\,=\,}(y_{{\rm exp}, 1}, \ldots, y_{{\rm exp}, d})$, to infer the simulation parameters $\mathbf{x}$~\cite{Sivia}. In this paper we carry out a Bayesian calibration of the \texttt{VAH}{} simulation for relativistic heavy-ion collisions, to align the simulation outputs $\mathbf{y}_\mathrm{sim}(\mathbf{x})$ with the experimental data $\mathbf{y_{\rm exp}}$. To do so we write down a statistical model of the form
\begin{equation}\label{eq:generalstatmodel}
\mathbf{y_{\rm exp}} = \mathbf{y}_\mathrm{sim}(\mathbf{x}) + \pmb{\epsilon},
\end{equation}
where the residual error, $\pmb{\epsilon}$, follows a multivariate normal (MVN) distribution with mean $\mathbf{0}$ and covariance matrix $\mathbf{\Sigma}$. In this work, we consider $d=98$ experimental observables for Pb--Pb collisions at a center-of-mass energy of $\sqrt{s_\textrm{NN}}{}=2.76$\,TeV (see Sec~\ref{sec6A} for details). The experimental measurements have experimental uncertainties due to finite measurement statistics and other instrumental effects. Our simulations are also stochastic, and thus the simulation outputs also have uncertainties (included in the error $\pmb{\epsilon}$ in (\ref{eq:generalstatmodel})). Hence, finding the model parameter values that would fit the experimental data (i.e., solving ``the inverse problem'' of determining the model parameters by analyzing the model output), taking into account {\it all} of the uncertainties, requires an advanced probabilistic framework.
In a Bayesian viewpoint, probability is defined as the degree of belief about a hypothesis, considering all available information \cite{Sivia}. Parameter inference for physical model simulations is based on Bayes' rule:
\begin{equation}
\label{eq:bayes}
\mathcal{P}(\mathbf{x}|\mathbf{y}_{\rm exp}) = \frac{\mathcal{P}(\mathbf{y_{\rm exp}|x})\mathcal{P}(\mathbf{x})}
{\mathcal{P}(\mathbf{y_{\rm exp}})}.
\end{equation}
The term $\mathcal{P}(\mathbf{x}|\mathbf{y}_{\rm exp})$ on the left-hand side of Eq.~\eqref{eq:bayes} is called the posterior (short for ``the posterior probability density''), which is the probability for the model parameters $\mathbf{x}$ to take particular values given the experimental data $\mathbf{y_{\rm exp}}$. It is the primary focus of Bayesian parameter inference. On the right-hand side of Eq.~\eqref{eq:bayes}, $\mathcal{P}(\mathbf{x})$ represents the prior probability density for the parameters to take values $\mathbf{x}$, given information from previous experiments and/or independent theoretical input. $\mathcal{P}(\mathbf{y_{\rm exp}|x})$ is the likelihood function, describing the probability density that the model output for a given set of model parameters $\mathbf{x}$ agrees with the experimental data $\mathbf{y_{\rm exp}}$. Given the distribution of $\pmb{\epsilon}$, the likelihood $\mathcal{P}(\mathbf{y_{\rm exp}|x})$ is
\begin{equation}
\label{eq:likelihood}
\frac{1}{\sqrt{|2\pi\mathbf{\Sigma}|}}
\exp\Bigl[-\frac{1}{2}(\mathbf{y}_\mathrm{sim}(\mathbf{x}) - \mathbf{y}_\mathrm{exp})^\top \mathbf{\Sigma}^{-1} (\mathbf{y}_\mathrm{sim}(\mathbf{x}) - \mathbf{y}_\mathrm{exp})\Bigr],
\end{equation}
where the $d \times d$ matrix $\mathbf \Sigma$ represents the total uncertainty, obtained by adding the experimental and simulation uncertainties.
The posterior generally does not have a closed-form expression, in particular not for heavy-ion collisions. Finding the posterior for the parameters $\mathbf{x}$ and quantifying their uncertainty therefore requires, in practice, numerical techniques that directly produce samples from the posterior. This is typically achieved by using Markov Chain Monte Carlo (MCMC) techniques \cite{Gelman2004, Trotta:2008qt}. These techniques require only relative probabilities, so the normalization $\mathcal{P}(\mathbf{y_{\rm exp}})$ in the denominator on the right of Eq.~\eqref{eq:bayes} (which is independent of the parameters to be inferred) does not need to be calculated.
To produce each sample, MCMC techniques have to evaluate the right-hand side of Eq.~\eqref{eq:bayes} many times for different values of $\mathbf{x}$. When the simulation model is computationally intensive, evaluating the likelihood $\mathcal{P}(\mathbf{y_{\rm exp}|x})$ becomes computationally expensive. Typically, MCMC methods require very many evaluations of the likelihood; this can make Bayesian parameter estimation computationally prohibitive for expensive simulation models such as the one described in the preceding section.
A popular solution is to build a computationally cheaper emulator to be used in place of the simulation model \citep{BMC, Higdon2004}. Gaussian Process (GP) emulators can serve as computationally cheap surrogates to replace an expensive simulation \cite{gramacy2020surrogates, Rasmussen2004}. GP emulators have to be trained on simulation data before they can be used to predict the simulation outputs at any other parameter set $\mathbf{x}$. Once the emulator is built, it returns the prediction mean $\pmb{{\mu}}(\mathbf{x})$ and the covariance ${\mathbf{C}}(\mathbf{x})$ to represent the simulation output $\mathbf{y}_\mathrm{sim}(\mathbf{x})$. In this case, the likelihood in Eq.~(\ref{eq:likelihood}) is approximated as \cite{BMC}
\begin{equation}
\label{eq:emulikelihood}
\frac{1}{\sqrt{|2\pi\mathbf{V}(\mathbf{x})|}}
\exp\Bigl[-\frac{1}{2}(\pmb{{\mu}}(\mathbf{x}) - \mathbf{y}_\mathrm{exp})^\top \mathbf{V}(\mathbf{x})^{-1} (\pmb{{\mu}}(\mathbf{x}) - \mathbf{y}_\mathrm{exp})\Bigr],
\end{equation}
where $\mathbf{V}(\mathbf{x}) = {\mathbf{C}}(\mathbf{x}) + \mathbf{\Sigma}$. MCMC techniques can then use Eq.~(\ref{eq:emulikelihood}) to draw samples from the posterior in order to avoid a sampling process that depends on extensive evaluation of the expensive computer simulation.
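To illustrate how the emulator-based likelihood~(\ref{eq:emulikelihood}) enters an MCMC sampler, the following schematic random-walk Metropolis sketch (much simpler than the parallel-tempering sampler actually used in Sec.~\ref{sec6A}) shows that only likelihood ratios are needed, so the evidence $\mathcal{P}(\mathbf{y_{\rm exp}})$ cancels. The names \texttt{emulator}, \texttt{Sigma\_exp}, \texttt{y\_exp}, \texttt{x0}, \texttt{in\_prior\_box}, \texttt{step\_size}, and \texttt{n\_steps} are placeholders, and \texttt{log\_likelihood} is the sketch given after Eq.~(\ref{eq:likelihood}).
\begin{verbatim}
import numpy as np

def log_posterior(x):
    if not in_prior_box(x):                  # uniform prior: zero outside the box
        return -np.inf
    mu, C = emulator(x)                      # emulator mean and covariance at x
    return log_likelihood(mu, y_exp, C + Sigma_exp)   # Eq. (emulikelihood)

rng = np.random.default_rng(0)
x = x0.copy()
lp = log_posterior(x)
samples = []
for _ in range(n_steps):
    prop = x + step_size * rng.standard_normal(x.size)
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # evidence cancels in the ratio
        x, lp = prop, lp_prop
    samples.append(x.copy())
\end{verbatim}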
\section{Emulators for the \texttt{VAH}{} model}
\label{sec4}
\vspace*{-2mm}
Relativistic heavy-ion collision simulations are computationally expensive. Due to irreducible quantum fluctuations in the initial state, for each set of model parameters the simulation has to run multiple times, with stochastically fluctuating initial conditions, in order to produce output that can be meaningfully compared with the experimental measurements. For example, in the JETSCAPE framework, for a single set of model parameters the full-model simulations for 2000 fluctuating initial conditions take approximately $1000$ core hours \cite{JETSCAPE:2020mzn}. A majority ($\approx$80\%) of the core hours is spent on the hadron transport stage after particlization; the remaining fraction is mostly utilized by the hydrodynamic QGP evolution code. The MCMC techniques employed in Bayesian parameter estimation require so many evaluations of the simulation that, without efficient emulators, the inference for heavy-ion collision models becomes computationally infeasible.
Getting the training data necessary for the GP emulators from the full heavy-ion model is the most computationally demanding step in Bayesian parameter estimation. In this work, we employ several novel methods to significantly reduce this computational cost: (a) we use a novel Minimum Energy Design (MED) \cite{MED} for selecting the model parameter values at which we run the full simulation, replacing the Min-Max Latin Hypercube Design \cite{LHS} used in most previous works \cite{Petersen:2010zt, Novak:2013bqa, Sangaline:2015isa, Bernhard:2015hxa, Bernhard:2016tnd, Moreland:2018gsh, Bernhard:2019bmu, JETSCAPE:2020shq, Nijs:2020ors, Nijs:2020roc, JETSCAPE:2020mzn}; (b) instead of getting all the simulation data at once, we follow a sequential process to obtain batches of simulation data with increasing accuracy in the most probable parameter region; (c) we test several GP emulation methods for relativistic heavy-ion collisions and use a thorough validation to select the best one. All of these features are explained in detail below.
\subsection{Generating the simulation data for emulator training}
\label{sec4b}
The popular Latin Hypercube Design (LHD) fills the multidimensional model parameter input space in a way that ensures that each parameter is broadly distributed within its full range \cite{LHS}. The emulators that are built using LHD effectively treat all regions of the parameter space as important.
Building emulators for Bayesian parameter estimation is a rather special application of emulation. In principle, the emulators built for this purpose need to accurately predict the simulation output only in the high-posterior regions of the model parameter space. Following this idea naturally leads to a sampling method where some regions of the input parameter space have a higher density of design/training points than others, deviating strongly from a LHD. This targeted sampling approach saves computational cost by requiring fewer high-precision simulations overall: it sacrifices emulation accuracy in uninteresting (low posterior probability) regions of the model parameter space in exchange for higher-precision simulations in the regions of interest (high posterior probability). This idea of building emulators by sequentially sampling with a bias towards the interesting regions, while still exploring (albeit with lower model precision) the full parameter space domain specified by the prior $\mathcal{P}(\mathbf{x})$, is known as {\it active learning} in the machine learning and statistics literature (see Chapter~6 in \cite{gramacy2020surrogates} for a detailed review).
In this work, we adopt an active learning approach to generate the VAH{} simulation data for training our emulators. Our simulation is inherently stochastic due to the random initial energy depositions and the probabilistic conversion of QGP fluid to hadrons at particlization. Our final aggregated observables are calculated from the stochastic simulation by evaluating it multiple times with random initial conditions at any given model parameter set. We refer to each of these evaluations as an event. As the number $N_\mathrm{events}$ of events simulated for a given design point increases, the statistical accuracy of the final simulation output observables increases roughly proportional to $\sqrt{N_\mathrm{events}}$ while the computational expense increases linearly. In the following we describe the sequential design approach to simulate design points with varying precision (from 200 events per design to 1600 events per design, which we refer to as low-precision and high-precision simulations, respectively) to build emulators for Bayesian parameter estimation.
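The aggregation of stochastic events into a single simulation output per design point, and the associated statistical error shrinking like $1/\sqrt{N_\mathrm{events}}$, can be summarized by a short sketch (the array \texttt{events}, holding the per-event observables at one design point, is a placeholder):
\begin{verbatim}
import numpy as np

# events: (N_events, d) array of per-event observables at one design point
y_bar = events.mean(axis=0)                              # aggregated output
sem = events.std(axis=0, ddof=1) / np.sqrt(len(events))  # ~ 1/sqrt(N_events)
\end{verbatim}
The squared standard error \texttt{sem**2} plays the role of the intrinsic, per-design simulation variance that enters the stochastic-kriging emulator as a noise term (cf.\ Sec.~\ref{sec4a}).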
First, we use low-precision simulations on a very sparse set of initial design points to estimate an intermediate posterior that guides the selection of additional design points. Informed by this initial design, we decide on which regions of the parameter space we should put more weight when sampling the next batch of design points, and use the minimum energy design (MED) for intelligent sequential sampling. MED selects the points that minimize the total ``potential energy'' as defined in \cite{Joseph2015}, and fast procedures for generating MED samples are provided in \cite{MED}. Suppose that we start with $j$ samples of the $q$-dimensional parameter space and their corresponding simulation outputs. According to the MED selection criterion, the $(j{+}1)$th point is selected as
\begin{equation}
\label{eq:medcriterion}
\mathbf{x}_{j+1} = \argmax_{\mathbf{x} \in \mathcal{L}} \min_{i=1:j} \mathcal{P}^{1/2q}(\mathbf{y_{\rm exp}|x}) \mathcal{P}^{1/2q}(\mathbf{y_{\rm exp}}|\mathbf{x}_i) d(\mathbf{x}, \mathbf{x}_i)
\end{equation}
where $\mathcal{L}$ represents a candidate list of design parameters that we generate from a LHD and $d(\mathbf{x}, \mathbf{x}_i)$ represents the Euclidean distance between $\mathbf{x}$ and $\mathbf{x}_i$.
Since model simulations have not yet been performed for any of the points $\mathbf{x}{\,\in\,}\mathcal{L}$, the likelihood $\mathcal{P}(\mathbf{y_{\rm exp}|x})$ at such an ``unseen'' point $\mathbf{x}$ must be estimated using Eq.~(\ref{eq:emulikelihood}) with the emulator built using the simulation data retrieved at design points $\{\mathbf{x}_i|i=1,\dots,j\}$. After generating a batch of design points in regions of large estimated values for the posterior via Eq.~(\ref{eq:medcriterion}), high-precision full-model simulations with increased event statistics per design are performed for this batch, and their output is added to the previously simulated events and used to build new emulators with improved precision. The process is iterated until the desired emulator precision has been reached. Full details are given in App.~\ref{app:datacollection}.
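A minimal sketch of the MED selection criterion~(\ref{eq:medcriterion}) is given below; it assumes that emulator-based estimates of the log-likelihood are already available at the candidate points and at the existing design points (all array names are placeholders):
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import cdist

def next_med_point(candidates, designs, loglike_cand, loglike_des, q):
    """Pick the next design point according to Eq. (medcriterion).
    candidates: (n_cand, q) LHD candidate list; designs: (j, q) existing points."""
    w_cand = np.exp(loglike_cand / (2.0 * q))      # P^{1/2q}(y_exp | x)
    w_des = np.exp(loglike_des / (2.0 * q))        # P^{1/2q}(y_exp | x_i)
    dist = cdist(candidates, designs)              # Euclidean distances d(x, x_i)
    crit = (w_cand[:, None] * w_des[None, :] * dist).min(axis=1)
    return candidates[np.argmax(crit)]
\end{verbatim}
In practice a whole batch of points is selected by repeating this step, adding each newly selected point to \texttt{designs} before choosing the next one.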
\subsection{Gaussian process emulation}
\label{sec4a}
In this study, we leverage a GP-based emulator using the basis vector approach \cite{Higdon2008} since the simulation returns a vector of length $d{\,=\,}98$. Principal component analysis (PCA) \cite{Ramsay97functionaldata} is used to project the high-dimensional outputs into a low-dimensional space where the projection is a collection of latent outputs. Then, each latent output is modeled using an independent GP model. We have tested different emulation strategies, and we provide the results for stochastic kriging \cite{Ankenman2009} since it returns the best prediction accuracy on the test data. Relativistic heavy-ion collision simulations are stochastic, meaning that each time the simulation model is evaluated with the same parameter setting the simulation output is different. The stochastic kriging approach incorporates both the intrinsic uncertainty inherent in a stochastic simulation and the extrinsic uncertainty about the unknown simulation output. Additional information on other emulators we explored is presented in App.~\ref{app:additionalemu}.
For the rest of this section, let $\{\mathbf{x}_1^{\rm tr}, \ldots, \mathbf{x}_n^{\rm tr}\}$ denote the ${n}$ unique parameter samples used to train each of the emulators. Let $\bar{\mathbf{y}}_\mathrm{sim}(\mathbf{x}^{\rm tr}_i)$ be the $d$-dimensional output vector of observables from the simulation model evaluated at the model parameter value $\mathbf{x}^{\rm tr}_i$, aggregated over a number of events with random initial conditions. The standardized outputs are stored in a $d \times n$ matrix $\Xi$ whose $i$th column is $(\bar{\mathbf{y}}_\mathrm{sim}(\mathbf{x}^{\rm tr}_i) - \mathbf{h})/\mathbf{c}$ (computed element-wise), where $\mathbf{h}$ and $\mathbf{c}$ are the centering and scaling vectors. Principal component analysis finds a $d{\,\times\,}p$ linear transformation matrix $\mathbf{A} = [\mathbf{a}_1, \ldots, \mathbf{a}_p]$, which projects the $d$-dimensional simulation outputs $\Xi$ to a collection of latent outputs $\mathbf{Z} = [\bm{z}_1, \ldots, \bm{z}_p] = \mathbf{A}^{\!\top}\Xi$ in a $p$-dimensional space. In this work, we find that keeping twelve principal components ($p{\,=\,}12$) explains 98\% of the variance in the original simulation data set. After the transformation, we build an independent GP for each latent output $z_t(\cdot) = \mathbf{a}_t^\top \mathbf{G}^{-1}(\bar{\mathbf{y}}_\mathrm{sim}(\cdot) - \mathbf{h})$ where $\mathbf{G} = {\rm diag}(\mathbf{c})$. The GP model results in a prediction with mean $m_t(\mathbf{x})$ and variance $s^2_t(\mathbf{x})$ such that
\begin{equation}
z_t(\mathbf{x})|\bm{z}_t \sim {\rm N}(m_t(\mathbf{x}), s^2_t(\mathbf{x})), \quad \text{for} \quad t = 1, \ldots, p.
\label{eq:gppost}
\end{equation}
Following the properties of GPs \cite{santner2003design}, the mean $m_t(\mathbf{x})$ and variance $s^2_t(\mathbf{x})$ are given by
\begin{align}
\begin{split}
m_t(\mathbf{x}) &= \mathbf{k}_t^\top \mathbf{K}_t^{-1} \bm{z}_t\\
s^2_t(\mathbf{x}) &= k_t(\mathbf{x},\mathbf{x}) - \mathbf{k}_t^\top \mathbf{K}_t^{-1} \mathbf{k}_t,
\label{eq:gpeqns}
\end{split}
\end{align}
where $k_t(\cdot,\cdot)$ is the covariance function, $\mathbf{k}_t = \bigl[k_t(\mathbf{x}, \mathbf{x}^{\rm tr}_i)\bigr]_{i=1}^n$ is the covariance vector between $n$ parameters $\{\mathbf{x}_1^{\rm tr}, \ldots, \mathbf{x}_n^{\rm tr}\}$ and any parameter $\mathbf{x}$, and $\mathbf{K}_t = \bigl[k_t(\mathbf{x}_i^{\rm tr},\mathbf{x}^{\rm tr}_j) + \delta_{ij} \frac{r_{t,i}}{a_i}\bigr]_{i,j=1}^n$ is the covariance matrix between $n$ parameters used to train the GP (see App.~\ref{gpfitting} for additional details). Here, $r_{t,i}$ represents the intrinsic uncertainty due to the stochastic nature of the simulation output across different events (i.e., different random initializations), $a_i$ is the number of events, and $\delta_{ij}$ is the Kronecker-$\delta$. For the covariance function, there are several popular choices including Gaussian,\footnote{%
In the statistics literature the Gaussian function is often called a ``squared-exponential", indicated here by the superscript SE.}
Mat\'ern, and cubic covariances \cite{Rasmussen2004}.
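The following sketch illustrates, under simplifying assumptions, the two building blocks described above: the standardization and PCA projection of the training outputs, and the posterior mean and variance~(\ref{eq:gpeqns}) of a single latent GP with a squared-exponential kernel and a per-design noise (nugget) term $r_{t,i}/a_i$. It is a schematic stand-in, not the stochastic-kriging implementation used in this work; the hyperparameters (\texttt{ls}, \texttt{var}) are taken as given, whereas in practice they are estimated from the training data, and \texttt{X\_train} ($n\times q$) and \texttt{Y} ($d\times n$) are placeholder arrays.
\begin{verbatim}
import numpy as np

# --- standardize and project the training outputs (PCA via SVD) ---
h = Y.mean(axis=1)                        # centering vector
c = Y.std(axis=1)                         # scaling vector
Xi = (Y - h[:, None]) / c[:, None]        # standardized d x n output matrix
U, _, _ = np.linalg.svd(Xi, full_matrices=False)
p = 12                                    # retained principal components
A = U[:, :p]                              # d x p transformation matrix
Z = A.T @ Xi                              # latent outputs, one row per component

# --- posterior mean/variance of one latent GP, Eq. (gpeqns) ---
def se_kernel(X1, X2, ls, var):
    d2 = (((X1[:, None, :] - X2[None, :, :]) / ls) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2)

def gp_predict(x, X_train, z_t, nugget, ls, var):
    K = se_kernel(X_train, X_train, ls, var) + np.diag(nugget)  # nugget ~ r_{t,i}/a_i
    k = se_kernel(X_train, x[None, :], ls, var).ravel()
    m_t = k @ np.linalg.solve(K, z_t)
    s2_t = var - k @ np.linalg.solve(K, k)   # k_t(x, x) = var for this kernel
    return m_t, s2_t
\end{verbatim}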
Once the GP is fitted, the goal is to predict the simulation output at an unseen point $\mathbf{x}$ as follows. Equations (\ref{eq:gppost})--(\ref{eq:gpeqns}) provide the basis for emulator modeling: the posterior mean $m_t(\mathbf{x})$ serves as the emulator model prediction at a new point $\mathbf{x}$, and the posterior variance $s_t^2(\mathbf{x})$ quantifies the emulator model uncertainty. A key appeal of GP emulators is that both their prediction and uncertainty can be efficiently computed via such closed-form expressions. For a prediction of the model output at any test point $\mathbf{x}$, we first obtain the mean $m_t(\mathbf{x})$ and variance $s^2_t(\mathbf{x})$ in Eq.~(\ref{eq:gpeqns}) for each of the corresponding latent outputs $t = 1, \ldots, p$. These are then transformed back to the original $d$-dimensional space through the inverse PCA transformation as follows: Define the $p$-dimensional vector $\mathbf{m}(\mathbf{x}) = (m_1(\mathbf{x}), \ldots, m_p(\mathbf{x}))$ and a $p \times p$ diagonal matrix $ \mathbf{S}(\mathbf{x})$ with diagonal elements $s^2_t(\mathbf{x})$ ($t = 1, \ldots, p$). Then the inverse PCA transformation yields
\begin{equation}
\mathbf{y}_\mathrm{sim}(\mathbf{x}) \sim \text{MVN}(\pmb{{\mu}}(\mathbf{x}), {\mathbf{C}}(\mathbf{x})),
\label{emu_final}
\end{equation}
where $\pmb{{\mu}}(\mathbf{x}) = \mathbf{h} + \mathbf{G} \mathbf{A} \mathbf{m}(\mathbf{x})$ is the emulator predictive mean and ${\mathbf{C}}(\mathbf{x}) = \mathbf{G} \mathbf{A} \mathbf{S}(\mathbf{x})\mathbf{A}^\top \mathbf{G}$ is the covariance matrix. By plugging $\pmb{{\mu}}(\mathbf{x})$ and ${\mathbf{C}}(\mathbf{x})$ into Eq.~(\ref{eq:emulikelihood}) we obtain the approximate likelihood at parameter point $\mathbf{x}$. We then use Eq.~(\ref{eq:bayes}) to compute the posterior $\mathcal{P}(\mathbf{x|y_{\rm exp}})$.
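Continuing the sketch from above (and reusing its placeholder names), the inverse PCA transformation~(\ref{emu_final}) and the emulator likelihood~(\ref{eq:emulikelihood}) can be written as
\begin{verbatim}
def emulate(x, hyper):
    """Emulator mean and covariance in observable space, Eq. (emu_final).
    hyper[t] = (nugget, ls, var) collects the hyperparameters of latent GP t."""
    m, s2 = np.empty(p), np.empty(p)
    for t in range(p):
        m[t], s2[t] = gp_predict(x, X_train, Z[t], *hyper[t])
    G = np.diag(c)
    mu = h + G @ A @ m                        # predictive mean
    C = G @ A @ np.diag(s2) @ A.T @ G         # predictive covariance
    return mu, C

mu, C = emulate(x_new, hyper)
logL = log_likelihood(mu, y_exp, C + Sigma_exp)   # Eq. (emulikelihood)
\end{verbatim}
where \texttt{hyper}, \texttt{x\_new}, \texttt{y\_exp}, and \texttt{Sigma\_exp} are again placeholders.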
In this way, by employing the MCMC method on the sequentially updated emulators, we extract posterior distributions for the parameters of the \texttt{VAH}{} simulations of heavy-ion collisions without running the computationally expensive simulation model when doing the MCMC sampling.
\begin{figure}[b]
\includegraphics[width=\linewidth]{Emulator_validation/R2_PCSK_ozge_req.png}
\caption{$\mathrm{R}^2$ scores calculated using the test simulations and emulator predictions for principal component stochastic kriging (PCSK) for eleven observable types. Each observable type is represented by a unique color, and each dot corresponds to measurements of that observable type at increasing centrality when going from left to right.
}
\label{R2_pcgpr_sub_groups}
\end{figure}
Once the emulators are fitted with training data, and before using them for Bayesian parameter inference, we test the accuracy of the emulators. To do that, we use the model simulation output from a batch of $m$ design points that was put aside and not used for emulator training (specifically batch (b) described in App.~\ref{app:datacollection}). The emulator accuracy is measured using the coefficient of determination ($R^2$ score) for the $l^\mathrm{th}$ observable ($l = 1, \ldots, d$), defined by
\begin{eqnarray}
\label{eq:R2}
&& R^2_l = 1 - \frac{\sum_{i=1}^m\bigl(\bar{\mathbf{y}}_{{\rm sim}, l}(\mathbf{x}_i^{\rm test})-\pmb{{\mu}}_l(\mathbf{x}_i^{\rm test})\bigr)^2}
{\sum_{i=1}^m\bigl(\bar{\mathbf{y}}_{{\rm sim},l}(\mathbf{x}_i^{\rm test})-\bar{\bar{\mathbf{y}}}_{{\rm sim},l}\bigr)^2}\,,
\\\nonumber
&& {\rm where} \quad \bar{\bar{\mathbf{y}}}_{{\rm sim},l} = \frac{1}{m}\sum_{i=1}^{m} \bar{\mathbf{y}}_{{\rm sim},l}(\mathbf{x}_i^{\rm test}).
\end{eqnarray}
The maximum possible value for $R_l^2$ is $R_l^2{\,=\,}1$, which occurs only when the emulation predictions for observable $l$ are identical to the simulation output for all $m$ test designs (i.e., the emulator is perfect on the test designs). The $R^2$ score can assume negative values when the emulator predictions deviate strongly from the simulation outputs for the test designs.
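A minimal sketch of this validation metric, applied to all $d$ observables at once, is given below (the placeholder arrays \texttt{Y\_test} and \texttt{Mu\_test} hold the simulated and emulated observables at the $m$ test designs):
\begin{verbatim}
import numpy as np

def r2_scores(Y_test, Mu_test):
    """Y_test, Mu_test: (m, d) arrays -> length-d vector of R^2_l, Eq. (R2)."""
    y_mean = Y_test.mean(axis=0)              # per-observable mean over test designs
    ss_res = ((Y_test - Mu_test) ** 2).sum(axis=0)
    ss_tot = ((Y_test - y_mean) ** 2).sum(axis=0)
    return 1.0 - ss_res / ss_tot
\end{verbatim}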
The $R^2$ scores we obtain for the \texttt{VAH}{} emulators trained by using the principal component stochastic kriging (PCSK) method\footnote{%
Two other types of emulators, obtained with the {\it principal component Gaussian process regression} (PCGPR) and the {\it principal component Gaussian process regression with grouped observables} (PCGPRG) methods, respectively, were also studied (see App.~\ref{app:additionalemu} for a description and the corresponding $R^2$ values). The $R^2$ values exhibit a general tendency to decrease as we switch from PCSK to PCGPR emulators, and then again as we move to PCGPRG emulators. For the analysis in the rest of the paper we have therefore chosen the PCSK emulators as the most accurate ones.}
\begin{figure*}[!t]
\noindent\makebox[1\textwidth]{%
\centering
\begin{minipage}{0.5\textwidth}
\includegraphics[width=\linewidth]{Closures/Closure_tests_for_shear_PCSK_PTMC.png}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\includegraphics[width=\linewidth]{Closures/Closure_tests_for_bulk_PCSK_PTMC.png}
\end{minipage}
}
\caption{Closure tests for the temperature-dependent specific (a) shear and (b) bulk viscosities. The colored bands show different confidence intervals (C.I.).}
\label{fig:closure_tests}
\end{figure*}%
are shown in Fig.~\ref{R2_pcgpr_sub_groups}. Each color corresponds to a different type of observable, and each dot corresponds to a different collision centrality, ordered from left to right by increasing centrality. In total there are 98 dots, corresponding to the 98 observables considered in this work.
\section{Closure tests}
\label{sec5}
\vspace*{-2mm}
Following Ref.~\cite{JETSCAPE:2020mzn}, we perform closure tests before proceeding to the model calibration stage. By using Bayesian inference to reconstruct the model parameters from simulated (``mock'') data generated by model runs for a known point in model parameter space, closure tests are an important check of the ability of the inference framework to correctly infer model parameters from real experimental measurements, and they also provide a deeper understanding of the behavior of the uncertainties associated with the inference process.
We randomly pick nine design points from our most accurate simulation data set (batch (e) in App.~\ref{app:datacollection}), which has 1600 events per design. We train our emulators without including these nine design points in our training data. Then, the simulation outputs of these nine design points are taken as pseudo-experimental data and we perform the Bayesian parameter inference to find the most probable values of the model parameters that can reproduce the pseudo-experimental data. Finally, we compare the inferred model parameter values with the known truth values that generated these pseudo-experimental data and validate our Bayesian parameter estimation framework.
Figure~\ref{fig:closure_tests} shows the closure test results for the temperature-dependent specific bulk and shear viscosities. One sees that in most of the nine cases the true model parameter values (giving rise to the dashed lines) are well captured by the inferred posteriors for these model parameters (indicated by lighter and darker colored regions for the 90\% and 60\% confidence intervals (C.I.), respectively). We expect the true value (black dashed line) to lie inside the 90\% confidence interval 90\% of the time. We also observe that for some pseudo-experimental data values (e.g., left column, middle row in both panels (a) and (b)) the closure can be poor. There are several possible reasons for this to happen: a) the emulators are not well trained to capture the simulation behavior around some of the parameter values used to generate the pseudo-experimental data; b) in some regions of the parameter space the currently used experimental observables are not sufficiently sensitive to the temperature-dependent viscosities for the latter to be inferred accurately; a similar issue was reported in Section~V.B of Ref.~\cite{Heffernan_CGC}. The observation of such instances of poor closure highlights the importance of further validation tests after completing the model calibration. For this purpose we perform posterior predictive tests for the experimental data as discussed in Sec.~\ref{sec8}. For the closure tests shown here, in addition to the viscosities we also checked the consistency of the posteriors for the remaining model parameters with their true values; we found all of them to be inferred accurately.
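The coverage statement above (``the true value should lie inside the 90\% interval 90\% of the time'') can be checked with a simple empirical-coverage sketch; the arrays below are placeholders for the closure-test truths and their posterior samples, and the nine closure points used here are of course too few for a precise coverage estimate.
\begin{verbatim}
import numpy as np

def coverage(true_vals, samples, level=0.90):
    """Fraction of closure points whose true value lies in the central
    `level` posterior interval.  samples: (n_points, n_draws) array."""
    tail = 100.0 * (1.0 - level) / 2.0
    lo, hi = np.percentile(samples, [tail, 100.0 - tail], axis=1)
    return np.mean((true_vals >= lo) & (true_vals <= hi))
\end{verbatim}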
\section{Bayesian calibration of \texttt{VAH}{} with LHC data}
\label{sec6}
\vspace*{-2mm}
\subsection{Calibration procedure}
\label{sec6A}
\vspace*{-2mm}
With our Bayesian tool set in place we perform Bayesian parameter inference using experimental data for Pb--Pb collisions at center-of-mass energy $\sqrt{s_\textrm{NN}}{}=2.76$\,TeV collected at the LHC. For ease of comparison, we use (almost\footnote{%
We omit the fluctuations in the mean transverse momentum, $\delta p_T /\langle p_T\rangle$ \cite{Abelev:2014ckr}, for centrality bins ranging from 0 to 70\% centrality in our analysis. We found that excluding this observable increases the validation scores $R^2$ for all observables. We suspect this is due to the current number of events per design being insufficient to calculate the mean transverse momentum accurately.}%
) the same set of measurements as the recent JETSCAPE analysis \cite{JETSCAPE:2020mzn} (98 measurements in total):
\begin{itemize}
\item the charged particle multiplicity $dN_{\text{ch}}/d\eta$ \cite{Aamodt:2010cz} for centrality bins covering $0{-}70$\% centrality;
%
\item the transverse energy $dE_T/d\eta$ \cite{Adam:2016thv} for centrality bins covering $0{-}70$\% centrality;
%
\item the multiplicity $dN/dy$ and mean transverse momenta $\langle p_T \rangle $ of pions, kaons, and protons \cite{Abelev:2013vea} for centrality bins covering $0{-}70$\% centrality;
%
\item the two-particle cumulant harmonic flows $v_n\{2\}$ $(n{\,=\,}2,3,4)$ for centrality bins covering $0{-}70$\% centrality for $n{\,=\,}2$ and $0{-}50$\% centrality for $n{\,=\,}3,\,4$ \cite{ALICE:2011ab}.
\end{itemize}
The 15 model parameters that we infer and their priors are listed in Table~\ref{table:prior}. Since \texttt{VAH}{} has no free-streaming stage, the associated JETSCAPE parameters are missing from the table. The additional parameter $R{\,\in\,}[0.3,\,1]$, defined in Eq.~(\ref{eq:R}), controls the initial pressure anisotropy $(\mathcal{P}_L/\mathcal{P}_\perp)_0$ and is unique to \texttt{VAH}.
\begin{table}
\begin{tabular}{l|l}
parameter & prior range\\ \hline \hline
$N$ & $[10, 30]$ \\ \hline
$p$ & $[-0.7, 0.7]$ \\ \hline
$w$ [fm] & $[0.5, 1.5]$ \\ \hline
$d_{\mathrm{min}}$ [fm] & $[0.0, 1.7]$ \\ \hline
$\sigma_k$ & $[0.3, 2.0]$ \\ \hline
$T_\mathrm{sw}$ [GeV] & $[0.135, 0.165]$ \\ \hline
$R$ & $[0.3, 1]$ \\ \hline
$T_{\eta/s, \mathrm{kink}}$\ [GeV] & $[0.13, 0.3]$ \\ \hline
$(\eta/s)_{\rm kink}$ & $[0.01, 0.2]$ \\ \hline
$a_{\mathrm{high}}$ [GeV$^{-1}$] & $[-1, 2]$ \\ \hline
$a_{\mathrm{low}}$ [GeV$^{-1}$] & $[-2, 1]$ \\ \hline
$(\zeta/s)_{\mathrm{max}}$ & $[0.01, 0.25]$ \\ \hline
$T_{\zeta}$ [GeV] & $[0.12, 0.3]$ \\ \hline
$w_{\zeta}$ [GeV] & $[0.025, 0.15]$ \\ \hline
$\lambda_\zeta$ & $[-0.8, 0.8]$
\end{tabular}
\caption{
List of \texttt{VAH}{} model parameters. We use uniform priors throughout, with prior ranges (specified in the right column) that agree with the ones used in Ref.~\cite{JETSCAPE:2020mzn}.}
\label{table:prior}
\end{table}
For the likelihood function we use the multivariate normal distribution (\ref{eq:likelihood}). The variances characterizing the experimental and simulation uncertainties are added together to obtain the total uncertainty appearing in the likelihood.
The posterior is found by combining the (multivariate Gaussian) likelihood with the (multivariate uniform) prior according to Bayes'\ rule (\ref{eq:bayes}). The resulting 15-dimensional posterior probability distribution is analyzed by sampling it using MCMC \cite{MCMC, Trotta_2008}. Specifically, we use the Parallel Tempering MCMC technique \cite{Vousden_2015, Foreman_Mackey_2013, ptemcee_code}, on account of its robustness in sampling multi-modal posterior distributions. For the temperature ladder we choose 500 values for the tempering temperature, evenly distributed between 0 and 1000. For each temperature in the ladder we run 100 randomly initialized chains (see \cite{ptemcee_code} for technical details of the parallel tempering algorithm). In each chain we discard the first 1000 steps as burn-in. The final posterior samples are obtained from the next 5000 steps, after thinning the chains by a factor of 10 (i.e., saving only every 10th sample).
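The post-processing of the chains (burn-in removal and thinning) amounts to a simple slicing operation; the sketch below is schematic and assumes the samples from the untempered (physical) rung of the temperature ladder have been collected into a placeholder array \texttt{chain}.
\begin{verbatim}
# chain: (n_steps, n_chains, n_dim) samples from the beta = 1 rung of the ladder
burn_in, n_keep, thin = 1000, 5000, 10
posterior = chain[burn_in:burn_in + n_keep:thin].reshape(-1, chain.shape[-1])
\end{verbatim}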
\subsection{Posterior for the model parameters}
The joint marginal posterior distributions (obtained by projecting the posterior on two dimensions in all possible ways) for the model parameters that are not related to the viscosities are shown in Fig.~\ref{fig:posterior}; the diagonal shows the marginal posterior distributions for each model parameter. These marginal distributions are obtained by integrating the posterior over all other model parameters, except the chosen one or two. The model parameter values that maximize the high-dimensional posterior (the ``mode'' of the distribution) are called Maximum a Posteriori (MAP) parameters --- they are indicated by blue dotted vertical lines in the diagonal panels.
In each panel the lilac color shades the uniform prior distribution density; shades of red indicate the projected density of the posterior distribution. In cases where the available experimental data have insufficient constraining power on a parameter one expects the prior (lilac) and posterior (red) marginal distributions to largely agree. In these cases the posterior essentially returns the information already contained in the prior. Figure~\ref{fig:posterior} shows that two of our model parameters, the initial pressure ratio $R$
and the minimum distance $d_\mathrm{min}$ between nucleons, are not well constrained by the experimental data.\footnote{%
To the extent that the very early fireball expansion stage respects Bjorken symmetry \cite{PhysRevD.22.2793} (an approximation expected to hold well in collisions between large nuclei at LHC energies), the dynamical evolution of $R$ is controlled by a far-from-equilibrium hydrodynamic attractor to which it decays rapidly on a time scale ${\sim\,}\tau_0$, irrespective of its initial value, following a power law decay \cite{Florkowski:2017olj, Jaiswal:2019cju, Kurkela:2019set, Chattopadhyay:2021ive, Jaiswal:2021uvv}, and which is well described by viscous anisotropic hydrodynamics \cite{Chattopadhyay:2021ive, Jaiswal:2021uvv}. This may be the main reason for our inability to constrain it well using final-state experimental measurements.}
\begin{figure*}[!t]
\noindent\makebox[1\textwidth]{%
\centering
\includegraphics[width=16cm]{Final_posterior/WithoutViscosity.png}
}
\caption{
Joint marginal distributions of the posterior for all model parameters except the viscosities, for \texttt{VAH}{} with PTMA viscous corrections at particlization and using experimental data from Pb--Pb collisions at $\sqrt{s_\textrm{NN}}{}=2.76$\,TeV. Posterior distributions are represented by shades of red while the (uniform) prior distributions are shown in lilac. Cyan numbers and vertical dotted lines in the diagonal panels indicate the MAP values for each parameter.
}
\label{fig:posterior}
\end{figure*}
The shape of the probability density contours in the off-diagonal panels conveys information about correlations among the inferred model parameters. One observes slight positive correlations of the \texttt{T$_\mathrm{R}$ENTo}{} normalization $N$ with both the nucleon width parameter $w$ and the multiplicity fluctuation parameter $\sigma_k$, and between $\sigma_k$ and the minimum nucleon distance $d_\mathrm{min}$; the power $p$ in the \texttt{T$_\mathrm{R}$ENTo}{} parameterization (\ref{eq:harmonic_mean}) of the reduced thickness function $T_R$ is slightly anti-correlated with $N$ and $R$. More striking is the joint marginal posterior distribution of the \texttt{T$_\mathrm{R}$ENTo}{} normalization parameter $N$ with the initial pressure anisotropy (shear stress) parameter $R$; this correlation exhibits a bimodal structure that is also reflected by twin peaks in each parameter's individual marginal distribution: the calibration reflects an ambiguity between large values of $N$ paired with large $R$ ratios and small $N$ values paired with small $R$, with almost a factor of two between the corresponding parameter values.\footnote{%
The mode characterized by the larger $(N,R)$ values abuts the edge of the joint prior distribution at its upper right corner; this suggests that this mode may actually lie outside the prior parameter range.}
Such a bimodal structure is also seen in the joint marginal distributions of other parameter pairs but hidden in their individual marginal distributions.
According to Eq.~\eqref{eq:edens} the \texttt{T$_\mathrm{R}$ENTo}{} normalization parameter $N$ controls the normalization of the initially deposited energy density distribution in the transverse plane. The Bayesian parameter estimation is unable to distinguish compellingly between an initially pressure isotropic configuration ($R{\,=\,}1$, corresponding to zero shear stress) with large initial energy density and a solution with large initial shear stress ($R=P_L/P_\perp{\,\sim\,}1/2$) whose initial energy density is about 40\% smaller. An increase in the value of $R$ corresponds to a decrease of the initial transverse pressure relative to the longitudinal one, resulting in a decrease of particle yields, transverse energy and mean transverse momenta in the final state mid-rapidity observables.\footnote{\label{widget}%
The interested reader is invited to confirm these claims with the help of the emulator-based ``\texttt{VAH}{} widget'' which can be found at \url{https://danosu-visualization-vah-streamlit-widget-wq49dw.streamlit.app/} \cite{vah_tool}}
This can be compensated by making $N$ larger, i.e., by increasing the initial energy density.
To break this degeneracy requires additional input. Unable to suggest any supplemental measurements that could directly address this particular aspect of the initial conditions for the hydrodynamic expansion, we instead offer additional ``prior theoretical'' input: In the widely considered Color Glass Condensate (CGC) model \cite{Gelis:2010nm}, which is expected to apply to heavy-ion collisions at LHC energies, the longitudinal pressure $P_L$ is predicted to be initially negative, rising very quickly (on a time scale $\sim 1/Q_s$ where $Q_s\sim 1{-}3$\,GeV is the saturation momentum of the CGC) to positive values and approaching the transverse pressure $P_T$ from below at late times as the system moves closer to thermal equilibrium \cite{Epelbaum:2013ekf}. This picture would definitely tilt the prior for $R$ in the lower right panel of Fig.~\ref{fig:posterior} strongly toward the left, assigning very low prior probability to initial pressure ratios near $R{\,=\,}1$.
Similar to earlier Bayesian model calibrations \cite{Moreland:2018gsh, Bernhard:2019bmu, Nijs:2020ors, JETSCAPE:2020mzn, Nijs:2021clz} we find the nucleon width parameter $w$ constrained to a value around 1\,fm. Since this value is larger than the nucleon width needed to match the recently measured total hadronic cross sections for $p$--Pb \cite{CMS:2015nfb} and Pb--Pb \cite{ALICE:2022xir} collisions at $\sqrt{s_\textrm{NN}}{\,=\,}5.02$\,TeV at the LHC \cite{Nijs:2022rme}, this suggests that a successful description of the experimental data used in our model calibration requires an initial density profile (\ref{eq:edens}) for the energy density deposited near mid-rapidity that fluctuates on a larger length scale than the strong interaction radius of the proton. The \texttt{VAH}{} widget \cite{vah_tool} (see footnote \ref{widget}) shows that bumpier initial conditions, whose density fluctuates on shorter length scales, increase the radial and anisotropic flows (reflected in the average momenta $\langle p_T\rangle$ and harmonic flow coefficients $v_n$ of the emitted hadrons), which must be compensated by larger values for the shear and bulk viscosities. Thus, the choice of $w$ has direct consequences for the viscosity coefficients inferred from the experimental data.
That the fluctuation length scale of the initially deposited matter should be larger than that of the nuclear thickness functions of the colliding nuclei makes immediate sense once one realizes that the matter created by a collision of two colored partons at transverse position $\mathbf{x}_\perp$, which is characterized by a typical transverse momentum scale $p_\perp{\,\lesssim\,}1$\,GeV, cannot be deposited at precisely the same point $\mathbf{x}_\perp$: the uncertainty relation requires it to be distributed around $\mathbf{x}_\perp$ in a cloud of transverse radius $\Delta r_\perp{\,\sim\,}\mathcal{O}(1/p_\perp)$. Unfortunately, the \texttt{T$_\mathrm{R}$ENTo}{} ansatz (\ref{eq:edens},\ref{eq:harmonic_mean}) does not
\begin{figure*}[!htb]
\noindent\makebox[\textwidth]{%
\centering
\begin{minipage}{0.5\textwidth}
\includegraphics[width=8cm]{Final_posterior/shear.png}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\includegraphics[width=8cm]{Final_posterior/bulk.png}
\end{minipage}
}
\caption{
Posteriors for the temperature-dependent specific (a) shear and (b) bulk viscosities in the \texttt{VAH}{} model with PTMA viscous corrections at particlization, using experimental data from Pb--Pb collisions at $\sqrt{s_\textrm{NN}}{}=2.76$\,TeV for model calibration. Grey bands show the 90\% prior interval. The colored regions corresponds to 90\% (light) and 60\% (dark) posterior credible intervals.
}
\label{fig:posterior_viscosities}
\end{figure*}%
allow one to change the width parameter $w$ independently in the nuclear thickness functions $T_{A,B}$ (which control, among other observables, the total inelastic $p$--Pb and Pb--Pb cross sections \cite{CMS:2015nfb, ALICE:2022xir, Nijs:2022rme}) and in the reduced thickness function $T_R$ (\ref{eq:harmonic_mean}) which controls the fluctuation length scale of the initially deposited transverse density profile. We suggest that future implementations of the \texttt{T$_\mathrm{R}$ENTo}{} model should allow for an additional Gaussian smearing in Eq.~(\ref{eq:edens}),
\begin{equation}
\label{eq:edens_new}
\epsilon(\mathbf{x}_\perp)=\frac{N}{\tau_0} \int d^2r_\perp T_R(\mathbf{r}_\perp;p,w)\, \frac{e^{-\frac{1}{2} \bigl(\frac{\mathbf{x}_\perp{-}\mathbf{r}_\perp}{\Delta r_\perp}\bigr)^2}}{2\pi(\Delta r_\perp)^2},
\end{equation}
where $T_R$ is characterized by a nucleon width $w$ matched to reproduce the total inelastic $p$--Pb and Pb--Pb cross sections \cite{Nijs:2022rme} while the additional smearing scale $\Delta r_\perp$, setting the length scale of the initial energy density fluctuations, is taken as a model parameter to be inferred from the heavy-ion collision data by Bayesian inference.\footnote{%
As suggested by Weiyao Ke (private communication) the above uncertainty argument $\Delta r_\perp{\,\sim\,}\mathcal{O}(1/p_\perp)$ implies that the smearing radius $\Delta r_\perp$ could be smaller for the production of hard particles than for the soft matter considered here which makes up the QGP medium.}
For the \texttt{VAH}{} model the MAP value for the harmonic mean parameter $p$ in \texttt{T$_\mathrm{R}$ENTo}{} (see Eq.~\ref{eq:harmonic_mean}) is 0.038, close to the preferred value of zero found in all previous Bayesian parameter inferences using a free-streaming pre-hydrodynamic stage \cite{Petersen:2010zt, Novak:2013bqa, Sangaline:2015isa, Bernhard:2015hxa, Bernhard:2016tnd, Moreland:2018gsh, Bernhard:2019bmu, JETSCAPE:2020shq, Nijs:2020ors, Nijs:2020roc, JETSCAPE:2020mzn}. However, its marginal posterior distribution is shifted toward slightly negative values. Its correlation with $R$ suggests that this shift is associated with the large posterior density around $R=1$, which we argued above is theoretically disfavored. Figure~\ref{fig:posterior} shows that large $R$ values correlate with somewhat larger values of the width parameter $w$ which reduce the harmonic flows by smoothing the initial density profile. To compensate for this effect the harmonic mean parameter $p$ tends toward negative values which, as observed in \cite{Moreland:2014oya} and confirmed with the \texttt{VAH}{} widget in \cite{vah_tool}, increase all flow harmonics. Increasing values of $R$ and $w$ also correlate with a growth of the multiplicity fluctuation parameter $\sigma_k$ (see Eq.~\ref{eq:fluctuated_thickness}), compensating for their larger fluctuation length scale $w$ with larger fluctuations in the magnitude of the initial density profiles.
Our Bayesian model calibration leaves the minimum distance $d_{\mathrm{min}}$ between nucleons unconstrained. This has also been observed in other Bayesian parameter estimation studies done with the \texttt{T$_\mathrm{R}$ENTo}{} model \cite{JETSCAPE:2020mzn}. The switching temperature $T_{\mathrm{sw}}$ where the QGP is converted into hadrons is constrained to a somewhat lower temperature than in previous studies \cite{JETSCAPE:2020mzn, Nijs:2020ors}. While its MAP value $T_{\mathrm{sw}}^\mathrm{MAP}=146$\,MeV is compatible with earlier analyses, its marginal posterior is tilted toward lower temperatures and abuts the lower edge of its prior interval. This shift toward lower particlization temperatures suggests that the very tight constraints for $T_\mathrm{sw}$ observed in earlier Bayesian analyses using different evolution models are affected by significant model uncertainties, and that in future Bayesian parameter studies with the \texttt{VAH}{} model the prior for this parameter should be extended toward lower temperature values.
\subsection{Posteriors for the temperature-dependent specific shear and bulk viscosities}
The posterior probability distributions for the temperature-dependent specific shear and bulk viscosities of the QGP, inferred from experimental data for Pb--Pb collisions at $\sqrt{s_\textrm{NN}}{}=2.76$\,TeV, are shown in Fig.~\ref{fig:posterior_viscosities}.
Comparing with two other recent Bayesian parameter studies \cite{JETSCAPE:2020mzn, Nijs:2020ors} we observe improved posterior constraints especially in the upper temperature range. The specific bulk viscosity is constrained to very low values ($\zeta/s{\,<\,}0.01$ with 60\% confidence) at temperatures above 350\,MeV; at temperatures below 220\,MeV the constraints are similar to those in Ref.~\cite{JETSCAPE:2020mzn}. For the specific shear viscosity $\eta/s$ we find posterior constraints that again are consistent with previous Bayesian parameter inference studies at temperatures below 250 MeV, but are much tighter and weighted toward lower $\eta/s$ values at higher temperatures when using \texttt{VAH}{} than found with the earlier models that assumed a free-streaming pre-hydrodynamic stage.
\begin{figure}
\includegraphics[width=8cm]{Sensitivity/PbPbsobolgroup_bar_no_log.png}
\caption{First-order Sobol' sensitivities (App.~\ref{app:Sobol}) for the \texttt{VAH}{} model.}
\label{fig:sensitivity}
\end{figure}
The better constraints at higher temperatures make sense when noting that in our \texttt{VAH}{} model the specific bulk and shear viscosities enter as model parameters into the description of the dynamical evolution at much earlier times, when the energy density (which controls the associated equilibrium temperature by Landau matching) is much higher. In the JETSCAPE \cite{JETSCAPE:2020mzn} and {\sl Trajectum} \cite{Nijs:2020ors} models $\eta/s$ and $\zeta/s$ do not enter until the beginning of the viscous hydrodynamic stage after about 1\,fm/$c$ or later, when the energy density has already dropped by a factor of 20 or more, corresponding to a decrease by more than a factor of 2 in temperature. The initial free-streaming stage assumed in Refs.~\cite{JETSCAPE:2020mzn, Nijs:2020ors} can in fact be thought of as a fluid with infinite shear and bulk viscosities. The present work shows that by replacing the unphysical free-streaming stage by viscous anisotropic hydrodynamics we can achieve a description of the experimental measurements that is at least as good as that achieved in the previous model calibrations (see further discussion below) and which allows us to probe the temperature dependence of the QGP viscosities to much higher temperatures than possible before. An additional benefit of the \texttt{VAH}{} approach is that it eliminates the unphysical switching time from free-streaming to hydrodynamics as a model parameter, and also avoids the large and positive (i.e., wrong-signed) artificial starting values of the bulk viscous pressure that arise from matching the conformal free-streaming stage to a hydrodynamic fluid with a realistic, non-conformal EoS.
\section{Model sensitivity}
\label{sec7}
\vspace*{-2mm}
In this section, we perform a sensitivity analysis on our model emulators to understand how the \texttt{VAH}{} model observables respond to changes in the model parameters. The first-order Sobol' sensitivity analysis performed here measures the global sensitivity of the model to its parameters \cite{sobol1990sensitivity, dan_tl}. For ease of presentation we have grouped the parameters related to shear and bulk viscosity into two separate groups and measure the overall sensitivity of the model to these grouped sets \cite{jacques2006sensitivity}; see App.~\ref{app:Sobol} for details.
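First-order Sobol' indices for (groups of) parameters can be estimated with a standard pick-freeze Monte Carlo scheme, using the computationally cheap emulator mean as the model. The sketch below follows a Saltelli-type estimator and is not the exact implementation used for Fig.~\ref{fig:sensitivity}; the names \texttt{f}, \texttt{bounds}, and \texttt{groups} are placeholders.
\begin{verbatim}
import numpy as np

def first_order_sobol(f, bounds, groups, N=4096, seed=0):
    """Pick-freeze estimate of first-order Sobol' indices for parameter groups.
    f: vectorized scalar model (e.g. the emulator mean of one observable);
    bounds: (q, 2) prior box; groups: list of lists of column indices."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    A = lo + (hi - lo) * rng.random((N, len(bounds)))
    B = lo + (hi - lo) * rng.random((N, len(bounds)))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = []
    for g in groups:
        AB = A.copy()
        AB[:, g] = B[:, g]                          # group g from B, the rest from A
        S.append(np.mean(fB * (f(AB) - fA)) / var)  # Saltelli-type estimator
    return np.array(S)
\end{verbatim}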
Figure~\ref{fig:sensitivity} shows the first-order Sobol' sensitivity indices calculated for 6 types of observables for the \texttt{VAH}{} model. The blue color is for observables measured in the most central collisions (0-5\% centrality) while the purple color represents observables in the mid-centrality range (40-50\% centrality).
We observe that charged and identified particle yields for pions and protons are mostly sensitive to the \texttt{T$_\mathrm{R}$ENTo}{} normalization, $N$. The normalization directly scales the magnitude of the initial energy deposition profile from \texttt{T$_\mathrm{R}$ENTo}{} and thus controls the number of particles produced at freeze-out. The same observables measured in peripheral collisions, but not in central collisions, are sensitive to the \texttt{T$_\mathrm{R}$ENTo}{} harmonic mean parameter $p$, which controls how the nucleon thickness functions of the two nuclei are combined to produce the initial energy profile.
The mean transverse momenta of both pions and protons are most sensitive to the grouped specific shear viscosity parameters, with somewhat weaker sensitivity to the normalization parameter $N$. The same observables also have high sensitivity to the nucleon width $w$ measured in peripheral collisions but not in central collisions.
The elliptic flow observables measured in the most central collisions exhibit the strongest sensitivity to the multiplicity fluctuation parameter $\sigma_k$. Since the average shape of the nuclear overlap region is perfectly azimuthally symmetric in the most central collisions, flow anisotropies in these collisions arise only from event-by-event fluctuations, explaining their sensitivity to $\sigma_k$. The elliptic flow also shows, unsurprisingly, significant sensitivities to the grouped specific shear viscosity parameters and the \texttt{T$_\mathrm{R}$ENTo}{} parameters $p$ and $w$, in both central and peripheral collisions. The latter again reflects the roles of these parameters in the event-by-event fluctuation spectrum characterizing the initial conditions.
None of the selected observables exhibit any appreciable sensitivity to the initial pressure anisotropy parameter $R$ or the minimum distance $d_\mathrm{min}$ between nucleons when sampling their positions from the Woods-Saxon distribution. This observation is consistent with the very broad marginal posteriors for these two parameters shown in Fig.~\ref{fig:posterior}. It suggests that additional novel observables (to sharpen their marginal posteriors) and/or trustworthy additional theoretical arguments (to better constrain their priors) are needed to better constrain these model parameters.
In Ref.~\cite{McNelis_thesis} the author kept all except two parameters in the \texttt{VAH}{} model fixed at the MAP values of the recently calibrated JETSCAPE SIMS model \cite{JETSCAPE:2020mzn}, seeking \texttt{VAH}{} values only for the normalization $N$ and nucleon width $w$. The resulting fit \cite{McNelis_thesis} agreed surprisingly well with the experimental data. The above sensitivity analysis can explain this unexpected success: Fig.~\ref{fig:sensitivity} shows that $N$ and $w$ are the two parameters to which most of the selected observables exhibit significant sensitivity, so adjusting them captures most of the variation of the observables under model parameter changes. For this reason we have also included the \texttt{VAH}{} parameter set proposed in \cite{McNelis_thesis} as their ``best guess'' as one of our design points when training the emulators.
Our trained emulators for the \texttt{VAH}{} model can be accessed by following the link in footnote \ref{widget} and Ref.~\cite{vah_tool}. The ``\texttt{VAH}{} widget'' produces the centrality dependent observables in real time for any model parameter values within their prior bounds. As long as only model predictions for the set of experimental data used in the \texttt{VAH}{} calibration are desired, the widget can serve as a fast and quantitatively precise emulator of the full \texttt{VAH}{} model. We found it very useful for developing intuition for the response of the observables to changes in single or combinations of model parameters. While no substitute for the full solution of the ``inverse problem'' (i.e., the inference of the model parameters from the observables), it can help to develop an in-depth understanding of such a solution.
\vspace*{-2mm}
\section{Predictions from the maximum a posteriori probability}
\label{sec8}
\vspace*{-2mm}
The final results of the Bayesian parameter estimation work presented here are computationally cheap samples of the experimental observables from the most probable region in the multidimensional posterior distribution for the model parameters. To visualize and understand the full posterior distribution we have used plots of the 1- and 2-dimensional (single or joint) marginal distributions in Figs.~\ref{fig:posterior} and \ref{fig:posterior_viscosities}. In this section we show how to find a point estimate from the posterior, i.e., the set of model parameters that best fits the experimental data. The comparison between the simulation predictions for this set of model parameter values (the MAP values) and the experimental measurements will be a final test of the validity of the Bayesian parameter inference framework that we have used in this work. We note, however, that model predictions that quantitatively include the full uncertainty range arising from the uncertainties of the inferred model parameters require a posterior-weighted sample from the entire high-probability region in the multidimensional parameter space.
\begin{table}[b]
\begin{tabular}{l|l}
parameter & MAP values\\ \hline \hline
$N$ & $20.013$ \\ \hline
$p$ & $0.038$ \\ \hline
$w$ [fm] & $0.985$ \\ \hline
$d_\mathrm{min}$ [fm] & $0.878$ \\ \hline
$\sigma_k$ & $1.184$ \\ \hline
$T_\mathrm{sw}$ [GeV]& $0.146$ \\ \hline
$R$ & $0.653$ \\ \hline
$T_{\eta/s, \mathrm{kink}}$\ [GeV] & $0.219$ \\ \hline
$(\eta/s)_\mathrm{kink}$ & $0.094$ \\ \hline
$a_{\mathrm{high}}$ [GeV$^{-1}$] & $0.493$ \\ \hline
$a_{\mathrm{low}}$ [GeV$^{-1}$] & $-0.383$ \\ \hline
$(\zeta/s)_{\mathrm{max}}$ & $0.044$ \\ \hline
$T_{\zeta}$ [GeV] & $0.235$ \\ \hline
$w_{\zeta}$ [GeV] & $0.032$ \\ \hline
$\lambda_\zeta$ & $0.037$
\end{tabular}
\caption{
Model parameters corresponding to the mode of the posterior distribution (MAP) for the \texttt{VAH}{} model. These model parameters provide simulation outputs that agree best with the experimental measurements.}
\label{table:MAP}
\end{table}
\begin{figure}[t]
\includegraphics[width=8cm]{Final_posterior/SIMS_VAH.png}
\caption{MAP predictions for the JETSCAPE SIMS model with Grad 14-moment viscous corrections at particlization from Ref.~\cite{JETSCAPE:2020mzn} (left column) compared with those for the \texttt{VAH}{} model with PTMA viscous corrections at particlization (right column).}
\label{fig:MAP}
\end{figure}
The model parameters that correspond to the mode of the posterior distribution are called maximum a posteriori (MAP) values. They are obtained with numerical optimization algorithms that minimize the negative logarithm of the posterior distribution \cite{diff_evolution, 2020SciPy-NMeth}. The MAP values for the posterior obtained in this work are listed in Table~\ref{table:MAP}.
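As an illustration, the MAP point can be obtained with a global optimizer such as differential evolution in SciPy; the sketch below assumes the emulator-based likelihood of Sec.~\ref{sec4} and a uniform prior whose support is encoded in \texttt{bounds} (the prior ranges of Table~\ref{table:prior}), with \texttt{emulate}, \texttt{hyper}, \texttt{log\_likelihood}, \texttt{y\_exp}, and \texttt{Sigma\_exp} reused from the earlier sketches.
\begin{verbatim}
from scipy.optimize import differential_evolution

def neg_log_posterior(x):
    mu, C = emulate(x, hyper)                          # emulator prediction at x
    return -log_likelihood(mu, y_exp, C + Sigma_exp)   # flat prior: bounds = support

result = differential_evolution(neg_log_posterior, bounds)
x_map = result.x                                       # MAP parameter estimate
\end{verbatim}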
In Fig.~\ref{fig:MAP} we compare the MAP prediction from the \texttt{VAH}{} model with that from another recent Bayesian parameter estimation study published in Ref.~\cite{JETSCAPE:2020mzn}. The left column of Fig.~\ref{fig:MAP} is obtained by using the JETSCAPE SIMS model \cite{JETSCAPE:2020mzn} with its MAP parameter values and running 3000 events with fluctuating initial conditions. The right column is obtained by the same procedure using the \texttt{VAH}{} model with its MAP parameter set. In both columns the black triangles show the experimental measurements taken with Pb--Pb collisions at the LHC at $\sqrt{s_\textrm{NN}}{}=2.76$\,TeV. While both fits look good, closer inspection reveals several detailed features in the data that are better described by \texttt{VAH}{} and none that are significantly better described by JETSCAPE SIMS. It is likely, however, that the two sets of model predictions are statistically consistent with each other\footnote{%
Perhaps with the exception of the mean $\langle p_T\rangle$ for kaons.}
once a full sample of the posterior probability distribution is generated. We leave this for a future study aimed at a quantitative comparison between the two models and improved observable predictions obtained by combining the models (and variations of them) with Bayesian model mixing techniques \cite{Phillips:2020dmw}.
It is important to note that the Bayesian parameter estimation carried out here did not use the experimental data for the transverse momentum fluctuations of charged particles, $\delta p_T^\mathrm{ch}/\langle p_T^\mathrm{ch}\rangle$, in the model calibration, nor any model predictions for this observable when training the model emulators. The fact that the MAP values for the \texttt{VAH}{} model successfully reproduce this observable (bottom right panel in Fig.~\ref{fig:MAP}) can be interpreted as a successful {\it pre}diction of the calibrated \texttt{VAH}{} model.
\vspace*{-2mm}
\section{Conclusions}
\label{sec9}
\vspace*{-2mm}
In this work we performed a Bayesian calibration of a novel relativistic heavy-ion collision model, \texttt{VAH}{}, which treats the early far-from-equilibrium stage of the collision with viscous anisotropic hydrodynamics that is optimized for the particular symmetries that characterize this stage. The experimental input in this study were data taken with Pb--Pb collisions at $\sqrt{s_\textrm{NN}}{}=2.76$\,TeV at the LHC. The \texttt{VAH}{} model can be used from very early times onward (here we use $\tau_0{\,=\,}0.05$\,fm/$c$), which largely obviates the need for any pre-hydrodynamic evolution at all. This eliminates model parameters and other conceptual uncertainties associated with the free-streaming modeling of the pre-hydrodynamic stage employed in similar analyses that employed standard viscous hydrodynamics to model the QGP.
An important advantage of being able to start the hydrodynamic modeling so much earlier is that transport coefficients enter as parameters of the dynamical description at a stage of much higher energy density and temperature. By using the \texttt{VAH}{} approach we can therefore constrain the specific shear and bulk viscosities at higher temperatures than accessible in other approaches that invoke an extended pre-hydrodynamic stage modeled microscopically in a way that cannot be meaningfully parameterized by hydrodynamic transport coefficients. We find that within the \texttt{VAH}{} approach the specific bulk viscosity is well constrained by the LHC data to very low values, $\zeta/s{\,<\,}0.03$ at 90\% confidence level and ${\,<\,}0.01$ at 60\% confidence level, for temperatures above 350\,MeV. Also the specific shear viscosity is more tightly constrained at temperatures above 250\,MeV than in previous analyses, with the 60\% and 90\% confidence intervals both pushed toward lower $\eta/s$ values than before. At lower temperatures below 220 MeV the \texttt{VAH}{} constraints agree with those obtained from previous Bayesian parameter estimation studies.
The joint marginal distributions of the posterior for pairs of model parameters exposed a bimodal posterior, with one maximum located at $R{\,=\,}1$, corresponding to an initially anisotropically expanding but nonetheless pressure isotropic medium. The available experimental measurements are not able to discriminate against this particular solution of the inverse problem, at least not within the \texttt{VAH}{} model studied here. Equality of the longitudinal and transverse pressures (i.e., $R{\,=\,}1$) directly after the collision contradicts the expectation from all CGC-based models for the early-time evolution of the collision fireball created at LHC energies. We argue that this solution should be eliminated by a theoretical prior that assigns very low probability to $R{\,=\,}1$ if one subscribes to the CGC effective theory.
We demonstrated that the mode of the posterior parameter probability distribution (i.e., its MAP parameter set) leads to a very good description of the data used in the model calibration, with no significant tensions between the calibrated model and the experimental data. The calibrated \texttt{VAH}{} model was shown to improve on several features where earlier Bayesian model fits exhibited weaknesses. In particular, the MAP parameters of our calibrated \texttt{VAH}{} model led to a quite successful prediction of the mean-$p_T$ fluctuations, a set of data that was not used in the model calibration on account of its high demand on event statistics but was still correctly predicted by the model after calibration with other, statistically less demanding experimental observables.
The comparison of our results here with those obtained earlier with different variants of the heavy-ion collision evolution model shines a light on model uncertainties and their effects on the uncertainty ranges for the fireball parameters inferred with these models from the experimental data. While state-of-the-art Bayesian analyses provide principled uncertainty quantification given the known experimental uncertainties, there is nothing principled about the way the heavy-ion community presently tries to assess theoretical and modeling uncertainties. The availability of several Bayesian calibrated heavy-ion evolution models now opens the window to, and underlines the urgency of, developing new statistical tools that enable principled uncertainty quantification in Bayesian model parameter inference, accounting for experimental and theoretical uncertainties on an equal footing.
\vspace*{2mm}
\section*{Acknowledgments}
\vspace*{-3mm}
We thank Derek Everett and the JETSCAPE Collaboration for providing the relativistic heavy-ion collision simulation and Bayesian calibration framework which we adapted for this work. The authors express particular gratitude to Mike McNelis for developing and providing the hydrodynamic simulation code for the \texttt{VAH}{} model and also for continuous support while running the simulations. U.H. acknowledges stimulating discussions with participants of the INT Program INT-23-1a and thanks the Institute for Nuclear Theory for its hospitality. Generous computing resources provided on Bebop, a high-performance computing cluster operated by the Laboratory Computing Resource Center at Argonne National Laboratory, are gratefully acknowledged. This work was supported by the NSF CSSI program under grant \rm{OAC-2004601} (BAND Collaboration). D.L.\ and U.H.\ acknowledge additional partial support within the framework of the JETSCAPE Collaboration under NSF Award No.~\rm{ACI-1550223}, as well as by the DOE Office of Science, Office for Nuclear Physics under Award No.~\rm{DE-SC0004286}. S.M.W\ acknowledges additional support by the U.S.\ Department of
Energy, Office of Science, Office of Advanced Scientific Computing
Research SciDAC program under Contract Nos.\ \rm{DE-AC02-05CH11231} and \rm{DE-AC02-06CH11357}.
|
{
"arxiv_id": "2302.14234",
"language": "en",
"timestamp": "2023-03-01T02:06:09",
"url": "https://arxiv.org/abs/2302.14234",
"yymm": "2302"
} | \section{Omitted proof from Section~\ref{section:predictions}}
\label{appendix:predictions}
We prove the set-theoretic result concerning $\mathsf{WC}(\cdot)$ and $\cB(\cdot,r)$ used in Theorem~\ref{theorem:random_expand}.
\begin{lemma}\label{lemma:wc_set_theoretic}
$\mathsf{WC}(\cB(\tTheta,r)) = \mathsf{WC}(\cB(\mathsf{WC}(\tTheta), r))$.
\end{lemma}
\begin{proof}
We first prove the forward containment. For the sake of contradiction suppose there exists $\ttheta\in\mathsf{WC}(\cB(\tTheta, r))$ such that $\ttheta\notin\mathsf{WC}(\cB(\mathsf{WC}(\tTheta), r))$. Then, there exists $\theta'\in\cB(\mathsf{WC}(\tTheta), r)$ such that $\theta'\preccurlyeq\ttheta$. But $$\mathsf{WC}(\tTheta)\subseteq\tTheta\implies\cB(\mathsf{WC}(\tTheta), r)\subseteq\cB(\tTheta, r),$$ so $\theta'\in\cB(\tTheta, r)$ and $\theta'\preccurlyeq \ttheta$, which contradicts the assumption that $\ttheta\in\mathsf{WC}(\cB(\tTheta, r))$.
We now prove the reverse containment. For the sake of contradiction suppose there exists $\ttheta\in\mathsf{WC}(\cB(\mathsf{WC}(\tTheta), r))$ such that $\ttheta\notin\mathsf{WC}(\cB(\tTheta, r))$. Then, there exists $\theta'\in\cB(\tTheta, r)$ such that $\theta'\preccurlyeq\ttheta$. Furthermore, if $\theta'\notin\mathsf{WC}(\cB(\tTheta, r))$, there exists $\theta''\in\mathsf{WC}(\cB(\tTheta, r))$ such that $\theta''\preccurlyeq\theta'\preccurlyeq\ttheta$ (if $\theta'\in\mathsf{WC}(\cB(\tTheta, r))$, set $\theta'' = \theta'$). From the forward inclusion, we have $$\mathsf{WC}(\cB(\tTheta, r))\subseteq\mathsf{WC}(\cB(\mathsf{WC}(\tTheta), r))\subseteq\cB(\mathsf{WC}(\tTheta), r),$$ so $\theta''\in\cB(\mathsf{WC}(\tTheta), r)$ and $\theta''\preccurlyeq\ttheta$, which contradicts the assumption that $\ttheta\in\mathsf{WC}(\cB(\mathsf{WC}(\tTheta), r))$.
\end{proof}
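As a quick sanity check, the following Python sketch spot-checks the identity of Lemma~\ref{lemma:wc_set_theoretic} in a toy finite model where, as an illustrative assumption only, $\mathsf{WC}(S)$ is taken to be the set of componentwise-minimal elements of $S$ and $\cB(S,r)$ is the discrete $\ell_\infty$-expansion of $S$ within a small grid of types:
\begin{verbatim}
import itertools, random

def wc(points):
    # Toy model of WC(S): the componentwise-minimal elements of S.
    pts = set(points)
    return {p for p in pts
            if not any(q != p and all(qk <= pk for qk, pk in zip(q, p))
                       for q in pts)}

def ball(points, r, grid):
    # Discrete l_inf expansion B(S, r), restricted to a finite grid of types.
    return {g for g in grid
            if any(max(abs(gk - pk) for gk, pk in zip(g, p)) <= r
                   for p in points)}

grid = list(itertools.product(range(6), repeat=2))
for _ in range(100):
    T = random.sample(grid, 4)
    r = random.randint(0, 3)
    # Spot check: WC(B(T, r)) == WC(B(WC(T), r)) under the toy model.
    assert wc(ball(T, r, grid)) == wc(ball(wc(T), r, grid))
\end{verbatim}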
\begin{comment}
\begin{lemma}\label{lemma:wch_ball_commute}
$\mathsf{WCH}(\cB(\tTheta, r)) = \cB(\mathsf{WCH}(\tTheta), r)$.
\end{lemma}
\begin{proof}
We first prove that $\mathsf{WC}(\cB(\tTheta, r)) = \mathsf{WC}(\cB(\mathsf{WCH}(\tTheta), r))$. We have $$\mathsf{WC}(\cB(\mathsf{WCH}(\tTheta), r)) \stackrel{\text{Lem.}~\ref{lemma:wc_set_theoretic}}{=} \mathsf{WC}(\cB(\mathsf{WC}(\mathsf{WCH}(\tTheta)), r)) \stackrel{\text{Prop.}~\ref{prop:wc_properties}}{=} \mathsf{WC}(\cB(\mathsf{WC}(\tTheta), r)).$$ So $\cB(\tTheta, r)$ and $\cB(\mathsf{WCH}(\tTheta), r)$ share the same weakest competitor set, which implies the reverse containment $\cB(\mathsf{WCH}(\tTheta), r)\subseteq\mathsf{WCH}(\cB(\tTheta), r)$. Given this, it suffices to show that $\cB(\mathsf{WCH}(\tTheta), r)$ is upwards closed, which is immediate since $\mathsf{WCH}(\tTheta)$ is upwards closed.
\end{proof}
\end{comment}
\begin{comment}
We now show that similar consistency and robustness ratios can be obtained by a more fine-tuned randomized mechanism that discards predictions for each agent independently. We instantiate our meta mechanism $\cM$ with the following randomized usage of predictions. Let $X_1,\ldots, X_n$ be independent Bernoulli random variables where $X_i = 1$ with probability $\beta_i$ and $X_i = 0$ with probability $1-\beta_i$. We set $\hTheta_i = \Theta$ if $X_i = 1$ and set $\hTheta_i=\mathsf{WCH}(\tTheta_i)$ if $X_i=0$. In other words, for each agent $i$, we discard its prediction with probability $\beta_i$. Let $V = \{i : \theta_i\in\mathsf{WCH}(\tTheta_i)\}$, so $V$ is the (deterministic) set of agents with valid predictions (which is of course apriori not known to the mechanism designer).
\begin{theorem}\label{theorem:random_discard}
Let $S = \{i : \theta_i\in\hTheta_i\}$ and $V = \{i : \theta_i\in\mathsf{WCH}(\tTheta_i)\}$. For any $\beta\in [0,1]$, $\cM$ satisfies the following guarantees: \begin{align*}&\E[\text{welfare}] \ge \sum_{i=1}^n\beta_i\cdot\theta_i[\alpha_{\mathsf{opt}}] + \sum_{i\in V} (1-\beta_i)\cdot\theta_i[\alpha_{\mathsf{opt}}] \\ &\E[\text{revenue}] \ge \sum_{i\in V}(1-\beta_i)\cdot\left(\E\left[\theta_i[\alpha_{\mathsf{opt}}(S)]\right] - \gamma_i^A\right).\end{align*}
\end{theorem}
\begin{proof}
Let $V = \{i : \theta_i\in\mathsf{WCH}(\tTheta_i)\}$, $S = \{i : \theta_i\in\hTheta_i\}$, and $D = \{i : X_i = 1\}$. $V$ is a fixed deterministic set consisting of the agents with valid predictions, while $S\supseteq V$ is a random set that additionally includes agents with invalid predictions that have been discarded. $S$ can be expressed as $S = V\cup (\overline{V}\cap D)$.
First we compute expected welfare. We have
\begin{align*}
\E[\text{welfare}] = \E\Bigg[\max_{\alpha}\sum_{i=1}^n\theta_i[\alpha]\cdot\mathbf{1}(\theta_i\in\hTheta_i)\Bigg]
&\ge \E\Bigg[\sum_{i=1}^n\theta_i[\alpha_{\mathsf{opt}}]\cdot\mathbf{1}(\theta_i\in\hTheta_i)\Bigg] \\
& = \sum_{i=1}^n\theta_i[\alpha_{\mathsf{opt}}]\cdot\Pr(\theta_i\in\hTheta_i).
\end{align*}
and \begin{align*}\Pr(\theta_i\in\hTheta_i) &= \Pr(\theta_i\in\hTheta_i | X_i = 1)\cdot\Pr(X_i = 1) + \Pr(\theta_i\in\hTheta_i | X_i = 0)\cdot\Pr(X_i = 0) \\ &= \beta_i\cdot\Pr(\theta_i\in\Theta) + (1-\beta_i)\cdot\Pr(\theta_i\in\mathsf{WCH}(\tTheta_i)) \\ &= \beta_i + (1-\beta_i)\cdot\mathbf{1}(\theta_i\in\mathsf{WCH}(\tTheta_i)).\end{align*} So \begin{align*}\E[\text{welfare}] &\ge \sum_{i=1}^n\beta_i\cdot\theta_i[\alpha_{\mathsf{opt}}] + \sum_{i=1}^n (1-\beta_i)\cdot\theta_i[\alpha_{\mathsf{opt}}]\cdot\mathbf{1}(\theta_i\in\mathsf{WCH}(\tTheta_i)) \\ &= \sum_{i=1}^n\beta_i\cdot\theta_i[\alpha_{\mathsf{opt}}] + \sum_{i\in V} (1-\beta_i)\cdot\theta_i[\alpha_{\mathsf{opt}}].\end{align*}
We now derive our lower bound on expected revenue. We will bound revenue by restricting our attention to payments made by agents in $V$ such that the corresponding predictions have not been discarded.
For brevity, let $D_i$ denote the event that $X_i = 1$. We have
\begin{align*}
\E[\text{revenue}] \ge \E\bigg[\sum_{i\in V} p_i\bigg] &= \sum_{i\in V}\E[p_i] \\
&\ge \sum_{i\in V}\E[p_i\mid\neg D_i]\cdot\Pr(\neg D_i) \\
&\ge \sum_{i\in V} (1-\beta_i)\cdot\E\left[\theta_i[\alpha_{\mathsf{opt}}(S)] - d_H(\theta_i,\mathsf{WC}(\tTheta_i))\mid \neg D_i\right] \\
&= \sum_{i\in V}(1-\beta_i)\cdot\left(\E\left[\theta_i[\alpha_{\mathsf{opt}}(S)]\right] - d_H(\theta_i, \mathsf{WC}(\tTheta_i))\right)
\end{align*} where the fourth line is due to the payment bound of Lemma~\ref{lemma:revenue_lb_1} and the fifth line is due to the fact that for $i\in V$, $\theta_i[\alpha_{\mathsf{opt}}(S)]$ and $d_H(\theta_i, \mathsf{WC}(\tTheta_i))$ are both independent of $D_i$. The latter independence is obvious. To see why the former independence holds, observe that $i\in V\implies i\in S$ deterministically, and given this $\alpha_{\mathsf{opt}}(S)$ does not depend on whether or not the prediction for agent $i$ is discarded.
\end{proof}
In particular, when all predictions are valid, $\text{welfare} = \mathsf{OPT}$ deterministically and $\E[\text{revenue}] \ge \sum_{i=1}^n (1-\beta_i)\cdot\left(\theta_i[\alpha_{\mathsf{opt}}]-\gamma_i^A\right)$. If furthermore all predictions are perfect, $\E[\text{revenue}]\ge \sum_{i=1}^n (1-\beta_i)\cdot\theta_i[\alpha_{\mathsf{opt}}]\ge (1-\max_i\beta_i)\cdot\mathsf{OPT}.$ If all predictions are invalid $\E[\text{welfare}]\ge\beta\cdot\mathsf{OPT}$ and $\E[\text{revenue}]=\mathsf{VCG}(\beta_1,\ldots,\beta_n)$. Furthermore, if revenue monotonicity holds (that is, VCG revenue is monotonically increasing in the number of bidders), we have the stronger statement that $$\E[\text{revenue}]\ge\max\bigg\{\sum_{i\in V}(1-\beta_i)\cdot\left(\E\left[\theta_i[\alpha_{\mathsf{opt}}(S)]\right] - \gamma_i^A\right), \mathsf{VCG}(\beta_1,\ldots,\beta_n)\bigg\}$$ where $\mathsf{VCG}(\beta_1,\ldots,\beta_n) = \E[\mathsf{VCG}(S)]$ where $S$ is sampled by including agent $i$ in $S$ independently with probability $\beta_i$. The main advantage of this mechanism over the trivial solution is that the guarantees are more fine-tuned to the possibility of some fraction of the predictions being valid and highly accurate.
\paragraph{Remark 1} If $\mathsf{VCG}(S)$ is a submodular function of agents, then $\mathsf{VCG}(\beta)\ge\beta\cdot\mathsf{VCG}(1)$~\citep{hartline2008optimal} and we get $(\beta,\beta)$-robustness. Without the submodularity assumption $\mathsf{VCG}$ revenue can shrink by more than a $\beta$ factor. There are simple settings where reduced competition leads to $\mathsf{VCG}(\beta) = \beta^2\cdot\mathsf{VCG}(1)$~\citep{balcan2022maximizing}, but there appear to be no general characterizations on how small or large $\frac{\mathsf{VCG}(\beta)}{\mathsf{VCG}(1)}$ can be.
\paragraph{Remark 2} To compute $\E[\text{revenue}]$ in the above proof it appears possible to directly take expectations on both sides of the statement of Lemma~\ref{lemma:revenue_lb_1}. The main pitfall of this approach is that without restricting attention to $V$, the loss term $\sum_{i=1}^n d_H(\theta_i,\mathsf{WC}(\tTheta_i))\cdot\mathbf{1}(\theta_i\in\hTheta_i)$ includes agents with invalid predictions that were discarded. So even if all predictions are valid and perfect, this approach will not recover the $(1-\beta)\cdot\mathsf{OPT}$ revenue guarantee when predictions are perfect.
\end{comment}
\section{Main guarantees of the mechanism in terms of prediction quality}
\label{section:predictions}
In this section we prove our main guarantees on our meta-mechanism $\cM$ in terms of the quality of the side information $\tTheta_1,\ldots,\tTheta_n$. We will largely refer to the side information as \emph{predictions} in this section to emphasize that $\tTheta_i$ could be wildly incorrect/inaccurate. To state our results we need the following notation which will be used throughout the remainder of the paper. Given agent types $\theta_1,\ldots,\theta_n$, let $\alpha_{\mathsf{opt}}$ denote the efficient allocation among the $n$ agents and
let $\mathsf{OPT} = \max_{\alpha\in\Gamma}\sum_{i=1}^n\theta_i[\alpha] = \sum_{i=1}^n\theta_i[\alpha_{\mathsf{opt}}]$ denote the welfare of the efficient allocation (also called the total social surplus).
For a subset $S\subseteq\{1,\ldots,n\}$ of agents, let $\mathsf{OPT}_S = \sum_{i\in S}\theta_i[\alpha_{\mathsf{opt}}]$ be the welfare generated by the efficient allocation restricted to agents in $S$.
Let $\mathsf{VCG}(S)$ denote the revenue of the vanilla VCG mechanism when run among the agents in $S$. Let $\mathsf{VCG}(\beta) = \E[\mathsf{VCG}(S)]$ where $S\subseteq\{1,\ldots,n\}$ is sampled by including each agent in $S$ independently with probability $\beta$. In general, VCG is not revenue monotonic~\citep{rastegari2011revenue}, that is $S\subseteq T\centernot\implies\mathsf{VCG}(S)\le\mathsf{VCG}(T)$, so $\mathsf{VCG}(\beta)$ need not be increasing in $\beta$, but there are various sufficient conditions for revenue monotonicity.\footnote{For example, if efficient welfare is a submodular set function over the agents, then $\mathsf{VCG}$ is revenue monotonic due to a result of~\citet{ausubel2002ascending}, and in this case $\mathsf{VCG}(\beta)$ is increasing in $\beta$. In the combinatorial auction setting, a sufficient condition for efficient welfare to be submodular is that bidders have gross-substitutes valuations over items~\citep{gul1999walrasian,yokoo2004effect}.}
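As a minimal illustration of the quantity $\mathsf{VCG}(\beta)$, the Python sketch below estimates it by Monte Carlo sampling; the routine \texttt{vcg\_revenue}, which returns the vanilla VCG revenue on a given subset of agents, is a caller-supplied placeholder:
\begin{verbatim}
import random

def vcg_beta(vcg_revenue, n, beta, trials=2000):
    # Monte Carlo estimate of VCG(beta): include each of the n agents
    # independently with probability beta and average the VCG revenue
    # on the sampled subset.
    total = 0.0
    for _ in range(trials):
        S = [i for i in range(n) if random.random() < beta]
        total += vcg_revenue(S)
    return total / trials
\end{verbatim}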
The following calculation shows that the revenue of $\cM$ can be related to the efficient welfare restricted to the subset of agents with valid predictions. We incur a loss term equal to the sum of $\ell_{\infty}$-Hausdorff distances from each type $\theta_i$ to $\mathsf{WC}(\hTheta_i)$. The Hausdorff distance between $\theta_i$ and $\mathsf{WC}(\hTheta_i)$ is defined as $d_H(\theta_i, \mathsf{WC}(\hTheta_i)):= \max_{\htheta_i\in\mathsf{WC}(\hTheta_i)}\lVert\theta_i - \htheta_i\rVert_{\infty}$.
\begin{lemma}\label{lemma:revenue_lb_1} Run $\cM$ with $\hTheta_1,\ldots,\hTheta_n$. Let $S = \{i : \theta_i\in\mathsf{WCH}(\hTheta_i)\}$ where $\theta_i$ is the true and revealed type of agent $i$ (so $S\subseteq\cI$). Then, for $i\in S$, $\cM$ satisfies $$p_i\ge \theta_i[\alpha_{\mathsf{opt}}] - d_H\big(\theta_i, \mathsf{WC}(\hTheta_i)\big),$$ and $$\text{revenue}\ge\mathsf{OPT}_S - \sum_{i\in S}d_H\big(\theta_i, \mathsf{WC}(\hTheta_i)\big).$$
\end{lemma}
\begin{proof} Let $p_i$ denote the payment collected from agent $i\in S$. Let $\theta_i^*$ be the weakest competitor in $\hTheta_i$ with respect to $\btheta_{-i}$. The utility for agent $i$ under $\cM$ is
\begin{align*}
\theta_i[\alpha_{\mathsf{opt}}] - p_i
&= \sum_{j=1}^{n}\theta_j[\alpha_{\mathsf{opt}}] - \min_{\htheta_i\in\hTheta_i}\Bigg(\max_{\alpha\in\Gamma}\sum_{j\neq i}\theta_j[\alpha] + \htheta_i[\alpha]\Bigg) \\
&= \sum_{j=1}^{n}\theta_j[\alpha_{\mathsf{opt}}] - \Bigg(\max_{\alpha\in\Gamma}\sum_{j\neq i}\theta_j[\alpha] + \theta_i^*[\alpha]\Bigg) \\
&\le \sum_{j=1}^{n}\theta_j[\alpha_{\mathsf{opt}}] - \Bigg(\sum_{j\neq i}\theta_j[\alpha_{\mathsf{opt}}] + \theta_i^*[\alpha_{\mathsf{opt}}]\Bigg) \\
&= \theta_i[\alpha_{\mathsf{opt}}] - \theta_i^*[\alpha_{\mathsf{opt}}] \\
&\le \max_{\htheta_i\in\mathsf{WC}(\hTheta_i)}\big\lVert\theta_i - \htheta_i\big\rVert_{\infty} = d_H(\theta_i,\mathsf{WC}(\hTheta_i)),
\end{align*} as required. The revenue guarantee follows by summing up the bound for $p_i$ over all $i\in S$.
\end{proof}
Truncating the proof of Lemma~\ref{lemma:revenue_lb_1} yields the following important bound, which is a more direct lower bound on $p_i$ in terms of the weakest competitor's value for the efficient allocation (versus the bound in terms of Hausdorff distances of Lemma~\ref{lemma:revenue_lb_1}).
\begin{lemma}\label{lemma:payment_lb}
Let $i\in S$ (adopting the same notation and setup as Lemma~\ref{lemma:revenue_lb_1}). Then $$p_i\ge\theta_i^*[\alpha_{\mathsf{opt}}]$$ where $\theta_i^*$ is the weakest competitor in $\hTheta_i$ with respect to $\btheta_{-i}$.
\end{lemma}
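For concreteness, the following sketch implements the allocation and weakest-competitor payment rule that appears in the proof of Lemma~\ref{lemma:revenue_lb_1}, under the simplifying assumptions that $\Gamma$ is a small finite set and each $\hTheta_i$ is supplied as a finite list of candidate value vectors (indexed by allocation):
\begin{verbatim}
def weakest_competitor_payments(types, hat_sets):
    # types[i][a]: reported value of agent i for allocation a.
    # hat_sets[i]: finite list of candidate type vectors playing the
    # role of hat_Theta_i.
    n, num_alloc = len(types), len(types[0])
    # Efficient allocation over the reported types.
    alpha_opt = max(range(num_alloc),
                    key=lambda a: sum(types[i][a] for i in range(n)))
    payments = []
    for i in range(n):
        def others(a):
            return sum(types[j][a] for j in range(n) if j != i)
        # Weakest-competitor charge: smallest (over candidates) optimal
        # welfare of the "others plus candidate" problem ...
        wc_welfare = min(max(others(a) + cand[a] for a in range(num_alloc))
                         for cand in hat_sets[i])
        # ... minus the welfare of the others at the efficient allocation.
        payments.append(wc_welfare - others(alpha_opt))
    return alpha_opt, payments
\end{verbatim}
Agents whose resulting utility $\theta_i[\alpha_{\mathsf{opt}}]-p_i$ is negative would be excluded, mirroring the set $\cI$ of the meta-mechanism.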
\subsection*{Measuring the error of a prediction}
Before instantiating $\cM$ with specific rules to determine the $\hTheta_i$ from the $\tTheta_i$, we define our main notions of error in the side information/predictions, which are motivated by Lemma~\ref{lemma:revenue_lb_1}.
\begin{definition}
The \emph{invalidity} of a prediction $\tTheta_i$, denoted by $\gamma_i^V$, is the distance from the true type $\theta_i$ of agent $i$ to $\mathsf{WCH}(\tTheta_i)$: $$\gamma^V_i := d(\theta_i, \mathsf{WCH}(\tTheta_i)) = \min_{\ttheta_i\in\mathsf{WCH}(\tTheta_i)}\big\lVert\theta_i - \ttheta_i\big\rVert_{\infty}.$$
\end{definition}
\begin{definition}
The \emph{inaccuracy} of a prediction $\tTheta_i$ is the quantity $$\gamma^A_i := d_H(\theta_i, \mathsf{WC}(\tTheta_i)) = \max_{\ttheta_i\in\mathsf{WC}(\tTheta_i)}\big\lVert\theta_i-\ttheta_i\big\rVert_{\infty}.$$
\end{definition}
We say that a prediction $\tTheta_i$ is \emph{valid} if $\gamma^V_i = 0$, that is, $\theta_i\in\mathsf{WCH}(\tTheta_i)$. We say that a prediction is \emph{perfect} if $\gamma_i^A = 0$ or, equivalently, $\mathsf{WC}(\tTheta_i) = \{\theta_i\}$. If a prediction is perfect, then it is also valid. Our main results will depend on these error measures. Figure~\ref{fig:accuracy} illustrates these notions of prediction error for two different prediction sets. We also briefly recall the bi-objective notions of \emph{consistency} and \emph{robustness} from the field of algorithms with predictions. We say a mechanism is $(a, b)$-consistent if, when all predictions are perfect, it satisfies $\E[\text{welfare}]\ge a\cdot\mathsf{OPT}$ and $\E[\text{revenue}]\ge b\cdot\mathsf{OPT}$; we say it is $(c, d)$-robust if it satisfies $\E[\text{welfare}]\ge c\cdot\mathsf{OPT}$ and $\E[\text{revenue}]\ge d\cdot\mathsf{VCG}(1)$ regardless of the prediction quality. We will not be overly concerned with robustness---our main goal is to design high-performance mechanisms that degrade gradually and continuously as the prediction errors increase.
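When the relevant sets are available as finite point clouds, both error measures are straightforward to evaluate. The sketch below assumes, for illustration, that $\mathsf{WCH}(\tTheta_i)$ and $\mathsf{WC}(\tTheta_i)$ have already been computed and are passed in as arrays with one candidate type per row:
\begin{verbatim}
import numpy as np

def invalidity(theta, wch_points):
    # gamma^V: l_inf distance from the true type theta to the
    # weakest-competitor hull, represented here by a finite point set.
    diffs = np.abs(np.asarray(wch_points) - np.asarray(theta))
    return float(np.min(np.max(diffs, axis=1)))

def inaccuracy(theta, wc_points):
    # gamma^A: Hausdorff l_inf distance from theta to the
    # weakest-competitor set, again represented by a finite point set.
    diffs = np.abs(np.asarray(wc_points) - np.asarray(theta))
    return float(np.max(np.max(diffs, axis=1)))
\end{verbatim}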
\begin{figure}[t]
\centering
\includegraphics[scale=0.21]{accuracy.png}
\caption{The prediction $\tTheta_1$ (depicted in gray) is valid ($\gamma^V = 0$), but is highly inaccurate. The prediction $\tTheta_2$ (depicted in light blue) is invalid ($\gamma^V > 0$), but is more accurate than $\tTheta_1$. A small expansion of $\tTheta_2$ would yield a valid and highly accurate prediction.}
\label{fig:accuracy}
\end{figure}
\subsection{First approach: trust predictions completely}
\label{section:random_discard}
The first basic instantiation of our meta mechanism $\cM$ is the following: the mechanism designer simply sets $\hTheta_i = \mathsf{WCH}(\tTheta_i)$ for all $i$. Let $V = \{i : \theta_i\in\mathsf{WCH}(\tTheta_i)\} = \{i : \gamma_i^V = 0\}$ denote the set of agents with valid predictions. The welfare of this mechanism is simply $\mathsf{OPT}_V$ and its revenue is bounded by Lemma~\ref{lemma:revenue_lb_1}. If all predictions are valid and perfect, that is, $\mathsf{WC}(\tTheta_i) = \{\theta_i\}$ for all $i$, both welfare and revenue are equal to $\mathsf{OPT}$. However, if all predictions are such that $\theta_i\notin\mathsf{WCH}(\tTheta_i)$, both welfare and revenue potentially drop to $0$.
\subsection{Second approach: discard predictions randomly}
The issue with the above mechanism is that if all predictions are invalid, it generates no welfare and no revenue. We show how randomization can quell that issue. One trivial solution is to discard all predictions with probability $\beta$, and trust all predictions completely with probability $(1-\beta)$. That is, with probability $\beta$ set $(\hTheta_1,\ldots,\hTheta_n) = (\Theta,\ldots,\Theta)$ and with probability $1-\beta$ set $(\hTheta_1,\ldots,\hTheta_n) = (\mathsf{WCH}(\tTheta_1),\ldots,\mathsf{WCH}(\tTheta_n))$, and then run $\cM$. This mechanism achieves strong consistency and robustness ratios. Let $V = \{i : \theta_i\in\mathsf{WCH}(\tTheta_i)\}$ be the set of valid predictions. From Lemma~\ref{lemma:revenue_lb_1}, we have $\E[\text{welfare}] = \beta\cdot\mathsf{OPT} + (1-\beta)\cdot\mathsf{OPT}_V$ and $\E[\text{revenue}] \ge \beta\cdot\mathsf{VCG}(1) + (1-\beta)\cdot \left(\mathsf{OPT}_V - \sum_{i\in V}\gamma_i^A\right)$, and thus obtain $(1, 1-\beta)$-consistency and $(\beta, \beta)$-robustness.
This approach suffers from a major issue: its revenue drops drastically the moment predictions are invalid ($\gamma_i^V > 0$). In particular, if predictions are highly accurate but very slightly invalid (such as the blue prediction in Figure~\ref{fig:accuracy}), this approach completely misses out on any payments from such agents and drops to the revenue of VCG (which can be drastically smaller than $\mathsf{OPT}$). But, a tiny expansion of these predictions would have sufficed to increase revenue significantly and perform competitively with $\mathsf{OPT}$. One simple approach is to set $\hTheta_i$ to be an expansion of $\tTheta_i$ by a parameter $\eta_i$ with some probability, and discard the prediction with complementary probability. If $\gamma_i^V\le\eta_i$ for all $i$, then such a mechanism would perform well. The main issue with such an approach is that the moment $\gamma_i^V > \eta_i$, our expansion by $\eta_i$ fails to capture the true type $\theta_i$ and the performance drastically drops. Our approach in the next subsection essentially selects the $\eta_i$ randomly from a suitable discretization of the ambient type space to be able to capture $\theta_i$ with reasonable probability.
\subsection{Third approach: expand predictions randomly}
\label{section:random_expansion}
Our main guarantee in this section will depend on $H$, which is an upper bound on any agent's value for any allocation. This is the only problem-domain-specific parameter in our results, and is naturally interpreted based on the setting (as in the examples we gave in Section~\ref{subsection:examples}). We set $H = \operatorname{diam}(\Theta) := \max_{\theta_1,\theta_2\in\Theta}\lVert\theta_1-\theta_2\rVert_{\infty}$, the $\ell_{\infty}$-diameter of $\Theta$. For a point $\theta$, let $\cB(\theta, r) = \{\theta' : \lVert \theta - \theta'\rVert_{\infty}\le r\}$ be the closed $\ell_{\infty}$-ball centered at $\theta$ with radius $r$. For a set $\tTheta$, let $\cB(\tTheta, r) = \cup_{\ttheta\in \tTheta}\cB(\ttheta, r) = \{\theta' : \exists\ttheta\in\tTheta\text{ s.t. }\big\lVert\theta'-\ttheta\big\rVert_{\infty}\le r\}$ denote the $\ell_{\infty}$-expansion of $\tTheta$ by $r$. For simplicity we shall assume that $H = 2^{a}$ for some non-negative integer $a$, but the following analysis works verbatim by replacing $\log_2 H$ with $\lceil\log_2 H\rceil$.
We instantiate our meta-mechanism $\cM$ with random expansions of the input predictions. We denote this instantiation by $\cM_{\text{expand}}$. For each $i$, we independently set $$\boxed{\hTheta_i = \mathsf{WCH}\left(\cB(\tTheta_i, r_i)\right), \text{ where } r_i\sim_{\text{unif.}}\Big\{2^0-1, 2^1-1,\ldots, 2^{\log_2 H+1}-1\Big\}.}$$
We could also instead set $\hTheta_i = \cB(\tTheta_i, r_i)$ or $\hTheta_i = \mathsf{WC}(\cB(\tTheta_i, r_i))$, or more generally use any set such that $\mathsf{WC}(\hTheta_i) = \mathsf{WC}(\cB(\tTheta_i, r_i))$. We use the $\mathsf{WCH}$ operation to emphasize that we are interested in types contained in $\mathsf{WCH}(\cB(\tTheta_i, r_i))$, since we will be able to meaningfully compute payments from such agents. Figure~\ref{fig:mechs} illustrates the expansions $\cB(\tTheta_i, r_i)$ that $\cM_{\text{expand}}$ randomizes over.
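The randomization in $\cM_{\text{expand}}$ is simple to implement. The sketch below samples the expansion radius and tests membership in the expanded prediction, assuming for illustration that $H$ is a power of two and that $\tTheta_i$ is represented by a finite set of points:
\begin{verbatim}
import math, random

def sample_radius(H):
    # Draw r uniformly from {2^0 - 1, 2^1 - 1, ..., 2^(log2 H + 1) - 1}.
    k = random.randrange(int(math.log2(H)) + 2)
    return 2 ** k - 1

def in_expansion(theta, pred_points, r):
    # Membership test for the l_inf expansion B(tilde_Theta, r) when the
    # prediction is represented by a finite set of points.
    return any(max(abs(t - p) for t, p in zip(theta, point)) <= r
               for point in pred_points)
\end{verbatim}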
\begin{theorem}\label{theorem:random_expand}
$\cM_{\text{expand}}$ satisfies$$\E[\text{welfare}]\ge\max\left\{1 - \frac{\log_2(1+2\max_i \gamma^V_i)}{2+\log_2 H}, \frac{1}{2+\log_2 H}\right\}\cdot\mathsf{OPT}$$ and $$\E[\text{revenue}]\ge \frac{1}{2+\log_2 H}\left(\mathsf{OPT} - \sum_{i=1}^n(\gamma^A_i + 2\gamma^V_i)\right).$$
\end{theorem}
\begin{proof}
For each agent $i$, let $$r_i^* = \min\left\{r\in\{2^0-1, 2^1-1,\ldots, 2^{\log_2 H+1}-1\}: \theta_i\in\mathsf{WCH}(\cB(\tTheta_i, r))\right\}$$ ($r_i^*$ is well-defined for all $i$ since no matter what $\tTheta_i$ is, $\cB(\tTheta_i, 2H-1)$ covers the entire ambient type space $\Theta$). First we compute expected welfare. We have, using the fact that $\theta_i\in\hTheta_i\implies i\in\cI$ (Theorem~\ref{theorem:wch_ir}), $$\E[\text{welfare}]=\E\bigg[\sum_{i=1}^n\theta_i[\alpha_{\mathsf{opt}}]\cdot\mathbf{1}(i\in\cI)\bigg] \ge \E\bigg[\sum_{i=1}^n\theta_i[\alpha_{\mathsf{opt}}]\cdot\mathbf{1}(\theta_i\in\hTheta_i)\bigg]=\sum_{i=1}^n\theta_i[\alpha_{\mathsf{opt}}]\cdot\Pr(\theta_i\in\hTheta_i)$$ and $$\Pr(\theta_i\in\hTheta_i) = \Pr(r_i\ge r_i^*) = 1- \Pr(r_i < r_i^*) = 1-\frac{\log_2(1+r_i^*)}{2+\log_2 H}.$$ Therefore $$\E[\text{welfare}]\ge\left(1 - \frac{\log_2(1+\max_i r_i^*)}{2+\log_2 H}\right)\cdot\mathsf{OPT}.$$ By definition, $2\gamma^V_i\ge r_i^*\ge\gamma^V_i$, which yields the desired welfare guarantee. If all predictions are valid, $r_i^* = \gamma^V_i = 0$ for all $i$, and we get $\E[\text{welfare}]=\mathsf{OPT}$. In general, this bound decays as the predictions get worse, and a simple calculation shows that $\E[\text{welfare}]\ge\frac{1}{2+\log_2 H}\cdot\mathsf{OPT}$.
We now compute expected revenue by computing $\E[p_i]$ for each agent $i$. Let $S = \{i : \theta_i\in\hTheta_i\}$ be the (random) set of agents with valid predictions post expansion. We have $$\E[p_i]\ge\E[p_i\mid r_i = r_i^*]\cdot\Pr(r_i = r_i^*) = \frac{1}{2+\log_2 H}\cdot\E[p_i\mid r_i = r_i^*].$$ Now $r_i = r_i^*\implies i\in S$, so we may apply the payment bound of Lemma~\ref{lemma:revenue_lb_1}: \begin{align*}\E[p_i\mid r_i = r_i^*] &\ge \E\left[\theta_i[\alpha_{\mathsf{opt}}] - d_H(\theta_i, \mathsf{WC}(\cB(\tTheta_i, r_i^*)))\mid r_i = r_i^*\right] \\ &= \theta_i[\alpha_{\mathsf{opt}}] - d_H(\theta_i, \mathsf{WC}(\cB(\tTheta_i, r_i^*))).\end{align*}
Next, we bound $d_H(\theta_i, \mathsf{WC}(\cB(\tTheta_i, r_i^*)))$ in terms of $\gamma^A_i = d_H(\theta_i, \mathsf{WC}(\tTheta_i))$. Let $\ttheta_i\in\mathsf{WC}(\cB(\tTheta_i, r_i^*))$ be arbitrary. By Lemma~\ref{lemma:wc_set_theoretic} (which we prove in Appendix~\ref{appendix:predictions}), $\ttheta_i\in\mathsf{WC}(\cB(\mathsf{WC}(\tTheta_i), r_i^*))$, so there exists $\theta_i'\in\mathsf{WC}(\tTheta_i)$ such that $\lVert\ttheta_i-\theta_i'\rVert_{\infty}\le r_i^*$. Moreover, $\lVert\theta_i- \theta_i'\rVert_{\infty}\le\gamma^A_i$ by definition of $\gamma^A_i$. The triangle inequality therefore yields $$\big\lVert\theta_i - \ttheta_i\big\rVert_{\infty}\le\big\lVert\ttheta_i - \theta_i'\big\rVert_{\infty} + \big\lVert\theta_i - \theta_i'\big\rVert_{\infty}\le \gamma^A_i + r_i^*,$$ so, as $\ttheta_i\in\mathsf{WC}(\cB(\tTheta_i, r_i^*))$ was arbitrary, $d_H(\theta_i, \mathsf{WC}(\cB(\tTheta_i, r_i^*)))\le\gamma^A_i + r_i^*\le \gamma^A_i + 2\gamma^V_i.$ Finally, we have
\begin{align*}\E[\text{revenue}] = \E\bigg[\sum_{i=1}^n p_i\bigg]=\sum_{i=1}^n\E[p_i] &\ge \frac{1}{2+\log_2 H}\left(\sum_{i=1}^n\theta_i[\alpha_{\mathsf{opt}}] - \sum_{i=1}^n(\gamma^A_i + 2\gamma^V_i)\right) \\ &= \frac{1}{2+\log_2 H}\left(\mathsf{OPT} - \sum_{i=1}^n(\gamma_i^A+2\gamma_i^V)\right),\end{align*} as desired.
\end{proof}
Our welfare guarantee degrades continuously from $\mathsf{OPT}$ to $\mathsf{OPT}/(2+\log_2 H)$ as the invalidity of the predictions increases. Our revenue guarantee degrades continuously from $\mathsf{OPT}/(2+\log_2 H)$ as both the invalidity and inaccuracy of the predictions increase. Assuming revenue monotonicity, since $\theta_i\in\hTheta_i$ with probability at least $1/(2+\log_2 H)$, the revenue of our mechanism is never worse than $\mathsf{VCG}(1/(2+\log_2 H))$. Thus, in the language of algorithms-with-predictions, $\cM_{\text{expand}}$ is $(1, 1/(2+\log_2 H))$-consistent and, assuming revenue monotonicity, $(1/(2+\log_2 H), \mathsf{VCG}(1/(2+\log_2 H))/\mathsf{VCG}(1))$-robust. If VCG revenue happens to be submodular, the robustness ratio is $\ge 1/(2+\log_2 H)$ (but in general revenue can shrink by more than this ratio~\citep{balcan2022maximizing}). In contrast to the previous trivial approaches that either trusted the side information completely or discarded predictions completely, our random expansion approach does not suffer from large discontinuous drops in welfare or revenue. Furthermore, the previous approaches had no way of gracefully dealing with invalid prediction sets. In particular, if $\tTheta_i$ is an invalid prediction, even if a tiny expansion of $\tTheta_i$ would have captured $\theta_i$ (such as the blue set in Figure~\ref{fig:accuracy}), we gave up on getting any meaningful revenue from agent $i$. When all predictions were invalid (that is, $\gamma_i^V > 0$ for all $i$), our guarantee dropped to $\beta\cdot\mathsf{VCG}(1)$. The random expansion of predictions remedies these issues. Its revenue is competitive with $\mathsf{OPT}$ up to a factor of roughly $\log H$ as long as $\sum_i (\gamma_i^A + 2\gamma_i^V)$ is not too large. In particular, if we have high-accuracy but invalid predictions that are just a small expansion away from capturing $\theta_i$, the mechanism in this section remains competitive with $\mathsf{OPT}$ up to a roughly $\log H$ factor, whereas the mechanism from the previous section falls back to the revenue of vanilla VCG due to the invalid predictions.
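The lower bounds of Theorem~\ref{theorem:random_expand} are easy to evaluate numerically for given error levels; a minimal sketch follows (note that the revenue bound becomes vacuous when the total prediction error is large):
\begin{verbatim}
import math

def expansion_bounds(opt, H, gamma_V, gamma_A):
    # Lower bounds of the theorem for M_expand, given per-agent invalidities
    # gamma_V and inaccuracies gamma_A (lists of length n) and the efficient
    # welfare opt.
    denom = 2 + math.log2(H)
    welfare = max(1 - math.log2(1 + 2 * max(gamma_V)) / denom,
                  1 / denom) * opt
    revenue = (opt - sum(a + 2 * v for a, v in zip(gamma_A, gamma_V))) / denom
    return welfare, revenue
\end{verbatim}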
\subsection{Combining the approaches}
The advantages of each of the previous approaches (near-optimal performance when predictions are perfect, continuous and gradual performance degradation as prediction errors increase, and robustness in the worst case over all predictions) can be simultaneously realized by taking a convex combination of all three approaches. Let $\beta_{1}, \beta_{2}, \beta_{3}\ge 0$ be parameters selected by the mechanism designer such that $\beta_{1} + \beta_{2} + \beta_{3} = 1$. Upon receiving $\tTheta_1,\ldots,\tTheta_n$, the mechanism designer either (1) trusts the predictions completely with probability $\beta_{1}$ and sets $\hTheta_i = \mathsf{WCH}(\tTheta_i)$ for all $i$, (2) expands predictions randomly with probability $\beta_{2}$ and for each $i$ sets $\hTheta_i = \mathsf{WCH}(\cB(\tTheta_i, r_i))$ where $r_i$ is chosen randomly as in the previous section, or (3) discards all predictions with probability $\beta_{3}$ and for each $i$ sets $\hTheta_i = \Theta$. Let $V = \{i : \gamma_i^V = 0\}$ be the set of agents with valid predictions. This approach satisfies $$\E[\text{welfare}] \ge \beta_{1}\cdot\mathsf{OPT}_V + \beta_{2}\cdot\max\left\{1 - \frac{\log_2(1+2\max_i \gamma^V_i)}{2+\log_2 H}, \frac{1}{2+\log_2 H}\right\}\cdot\mathsf{OPT} + \beta_{3}\cdot\mathsf{OPT}$$ and $$\E[\text{revenue}]\ge \beta_{1}\cdot \bigg(\mathsf{OPT}_V-\sum_{i\in V}\gamma_i^A\bigg) + \beta_2\cdot\frac{1}{2+\log_2 H}\bigg(\mathsf{OPT}-\sum_{i=1}^n(\gamma_i^A +2\gamma_i^V)\bigg) + \beta_{3}\cdot\mathsf{VCG}(1).$$ If the mechanism designer has a prior over the errors $\gamma_i^A$ and $\gamma_i^V$, the parameters $\beta_1,\beta_2,\beta_3$ can be set to optimize performance based on the prior. This is an interesting direction for future work.
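Operationally, the combined mechanism only needs to sample which of the three instantiations to run; a minimal sketch:
\begin{verbatim}
import random

def choose_instantiation(beta1, beta2, beta3):
    # Sample one of the three instantiations (beta1 + beta2 + beta3 = 1).
    u = random.random()
    if u < beta1:
        return "trust"    # hat_Theta_i = WCH(tilde_Theta_i)
    if u < beta1 + beta2:
        return "expand"   # hat_Theta_i = WCH(B(tilde_Theta_i, r_i)), r_i random
    return "discard"      # hat_Theta_i = Theta
\end{verbatim}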
\begin{figure}
\centering
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=.75\linewidth]{mech.png}
\end{subfigure}%
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=0.75\linewidth]{subspace.png}
\end{subfigure}
\caption{{\bf Left:} An illustration of the sets $\cB(\tTheta, r)$ that the random expansion mechanism $\cM_{\text{expand}}$ chooses from for a particular agent with true type $\theta$ and side-information set/prediction $\tTheta$. The blue expansion is the set $\cB(\tTheta, r^*)$.
{\bf Right:} An illustration of the weakest competitors $\ttheta(\ell_1,\ldots,\ell_k)$ the mechanism $\cM_k$ randomizes over for a particular agent when $k = 2$ and $H = 2^4$. The weakest competitors with $(\ell_1,\ldots,\ell_k)\in W_{2}$ are displayed in blue. (In both figures we drop the agent subscript $i$.)}
\label{fig:mechs}
\end{figure}
\section{Bayesian mechanism design}
So far we have assumed a completely prior-free setting with no assumptions on how $\btheta = (\theta_1,\ldots, \theta_n)$ are generated. In this section, we explore some initial ideas that apply our meta-mechanism $\cM$ to fruitfully exploit knowledge about priors on the agent types, whether that knowledge be a complete description of the value distribution or a training set of sample values drawn from the prior. Let $f_1,\ldots, f_n:[0,1]^{\Gamma}\to\R_{\ge 0}$ be the continuous density functions of the $n$ agents, and let $\Theta_i = \mathrm{supp}(f_i)\subseteq\Theta$ be the support of $f_i$. Let $\mu_i(S) = \int_{S}f_i$. The benchmark we aim to compete with is the \emph{ex-post} expected optimum $$\E_{\btheta}[\mathsf{OPT}_{\btheta}], \text{ where } \mathsf{OPT}_{\btheta}:= \max_{\alpha}\sum_{i=1}^n\theta_i[\alpha]$$ is the benchmark that is used in the prior-free setting of the previous sections. Let $\alpha_{\mathsf{opt},\btheta} = \argmax_{\alpha}\sum_{i=1}^n\theta_i[\alpha]$ denote the efficient allocation for type profile $\btheta$ (so $\alpha_{\mathsf{opt},\btheta}$ is a random variable taking values in $\Gamma$). For $S\subseteq\{1,\ldots,n\}$ let $\mathsf{OPT}_{\btheta}(S) = \max_{\alpha}\sum_{i\in S}\theta_i[\alpha]$ and let $\alpha_{\mathsf{opt},\btheta}(S) = \argmax_{\alpha}\sum_{i\in S}\theta_i[\alpha]$.
All of the prior-free results we have derived in the paper thus far apply to the Bayesian setting in expectation over the draw of $\btheta$. In this section we explore a few examples where the distributional knowledge can be exploited to derive new, and potentially improved, guarantees.
One approach is to simply run $\cM$ with $\hTheta_i = \supp(f_i)$ (while $\supp(f_i)$ is not necessarily upwards closed, $\cM$ is incentive compatible since there is a known restriction on the type space: agents cannot submit a misreported type outside of $\supp(f_i)$). The revenue of this approach is given by Lemmas~\ref{lemma:revenue_lb_1} and~\ref{lemma:payment_lb}, which depend on geometric properties of $\supp(f_i)$ (e.g. its $\ell_{\infty}$-diameter).
We propose a basic framework that trades off between welfare and revenue by using subsets of $\supp(f_i)$ that cover some prescribed level of probability mass of $f_i$. Given $\tTheta_i$ for each $i$, let $\tmu_i = \mu_i(\tTheta_i\cap\supp(f_i))$. We instantiate $\cM$ with $\hTheta_i = \mathsf{WCH}(\tTheta_i)$. To make the analysis simpler, we assume that the $\tTheta_i$ are axis-parallel boxes, so $\mathsf{WC}(\tTheta_i) = \{\ttheta_i\}$ is a singleton for each $i$.
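When $f_i$ is only accessible through samples, $\tmu_i$ can be estimated by Monte Carlo. The sketch below uses two caller-supplied (hypothetical) helpers: \texttt{sample\_type}, which draws a type from $f_i$, and \texttt{in\_prediction}, which tests membership in $\tTheta_i$:
\begin{verbatim}
def estimate_mass(sample_type, in_prediction, num_samples=10000):
    # Monte Carlo estimate of tilde_mu_i = mu_i(tilde_Theta_i intersected
    # with supp(f_i)): the fraction of sampled types landing in the
    # prediction set.
    hits = sum(1 for _ in range(num_samples) if in_prediction(sample_type()))
    return hits / num_samples
\end{verbatim}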
\begin{theorem} Let $S = \{i:\theta_i\in\hTheta_i\}$ and let $\tmu = \min_i\tmu_i$. $\cM$ satisfies $\E[\text{welfare}]\ge\tmu\cdot\sum_{i=1}^n \E[\theta_i[\alpha_{\mathsf{opt},\btheta}]\mid\theta_i\in\hTheta_i]$ and $\E[\text{revenue}]\ge\sum_{i=1}^n\tmu_i\cdot\ttheta_i[S]$.
\end{theorem}
\begin{proof}
We have
\begin{align*}
\E[\text{welfare}] = \E\bigg[\max_{\alpha}\sum_{i=1}^n\theta_i[\alpha]\cdot\mathbf{1}(\theta_i\in\hTheta_i)\bigg]
&\ge\E\bigg[\sum_{i=1}^n\theta_i[\alpha_{\mathsf{opt},\btheta}]\cdot\mathbf{1}(\theta_i\in\hTheta_i)\bigg] \\
&= \sum_{i=1}^n \E\big[\theta_i[\alpha_{\mathsf{opt},\btheta}]\cdot\mathbf{1}(\theta_i\in\hTheta_i)\big] \\
&= \sum_{i=1}^n \E\big[\theta_i[\alpha_{\mathsf{opt},\btheta}]\mid\theta_i\in\hTheta_i\big]\cdot\Pr(\theta_i\in\hTheta_i) \\
&\ge \sum_{i=1}^n \tmu_i\cdot\E\big[\theta_i[\alpha_{\mathsf{opt},\btheta}]\mid\theta_i\in\hTheta_i\big]
\end{align*}
and
\begin{align*}
\E[\text{revenue}] = \sum_{i=1}^n\E[p_i] &\ge \sum_{i=1}^n\E[p_i\mid\theta_i\in\hTheta_i]\cdot\Pr(\theta_i\in\hTheta_i) \\
&\ge\sum_{i=1}^n\tmu_i\cdot\E[p_i\mid\theta_i\in\hTheta_i] \\
&\ge\tmu\cdot\sum_{i=1}^n\E\big[\ttheta_i[\alpha_{\mathsf{opt},\btheta}(S)]\big] = \tmu\cdot\E\bigg[\sum_{i=1}^n\ttheta_i[\alpha_{\mathsf{opt},\btheta}(S)]\bigg]=\tmu\cdot\E\big[\mathsf{OPT}_{\widetilde{\btheta}}(S|\{1,\ldots,n\})\big].
\end{align*}
\end{proof}
\begin{example}
Suppose $\supp(f_i) = \{(\varepsilon,\ldots,\varepsilon)\}\cup A$ where every $\theta\in A$ satisfies $\theta\succcurlyeq (t,\ldots,t)$ for some $t$ much larger than $\varepsilon$, and $\mu_i(\{(\varepsilon,\ldots,\varepsilon)\}) = 0.1, \mu_i(A) = 0.9$. If $\hTheta_i = \supp(f_i)$, agent $i$'s value contributes as much as possible towards the efficient welfare, but as $\varepsilon\to 0$, $\E[p_i]$ approaches $i$'s VCG payment. If instead $\hTheta_i = A$, agent $i$'s value contributes to the efficient welfare with probability $0.9$, and $\E[p_i]\ge 0.9t.$
\end{example}
\begin{example}[Uniform distributions over boxes]
\end{example}
\begin{lemma}
Let $g:[0,1]^{\Gamma}\to\R_{\ge 0}$ be anti-monotone in the sense that $x\prec y\implies g(x) > g(y)$. Let $V\subseteq [0,1]^{\Gamma}$ be the cube $V = [0, s]^{\Gamma}$, where $0 < s < 1$. Let $f$ be a density on $[0,1]^{\Gamma}$ such that there exists a function $\lambda:(0,1)\to\R_{\ge 0}$ with $f(sx)\ge\lambda(s)f(x)$ for all $s\in(0,1)$ and $x\in[0,1]^{\Gamma}$. Then $$\E[g(x)\mid x\in V]\ge \frac{\lambda(s)\cdot s^{|\Gamma|}}{\int_V f}\cdot\E[g(x)].$$
\end{lemma}
\begin{proof}
Let $U = [0,1]^{\Gamma}$ denote the unit cube, and let $\varphi : U\to V$ be the map $\varphi(x) = sx$. Let $\mu(S) = \int_S f$. We have
\begin{align*}
\E[g(x)\mid x\in V] &= \frac{1}{\mu(V)}\int_V g(v)f(v)dv \\
&= \frac{1}{\mu(V)}\int_U g(\varphi(u))f(\varphi(u))|\det J_{\varphi}(u)|du \\
&\ge \frac{1}{\mu(V)}\int_U g(u)f(\varphi(u))|\det J_{\varphi}(u)|du \\
&= \frac{s^{|\Gamma|}}{\mu(V)}\int_U g(u) f(su)du \\
&\ge \frac{\lambda(s)\cdot s^{|\Gamma|}}{\mu(V)}\int_U g(u)f(u)du =\frac{\lambda(s)\cdot s^{|\Gamma|}}{\mu(V)}\cdot\E[g(x)]
\end{align*} where the second line follows from a change of variables ($J_{\varphi}$ is the Jacobian of $\varphi$) and the third line is due to anti-monotonicity of $g$.
\end{proof}
\section{Beyond the VCG mechanism: affine maximizers}
\label{appendix:amas}
Given agent-specific multipliers $\omega=(\omega_1,\ldots,\omega_n)\in\R_{\ge 0}$ and an allocation-based boost function $\lambda:\Gamma\to\R_{\ge 0}$, we define the following meta-mechanism $\cM(\omega,\lambda)$ which is a generalization of our meta-mechanism $\cM$. The mechanism designer receives as input $\tTheta_1,\ldots,\tTheta_n$, and based on these decides on prediction sets $\hTheta_1,\ldots,\hTheta_n$. The agents are then asked to reveal their true types $\theta_1,\ldots,\theta_n$. The allocation used is $$\alpha_{\omega,\lambda} = \argmax_{\alpha\in\Gamma}\sum_{i=1}^n\omega_{i}\theta_i[\alpha] + \lambda(\alpha).$$ Let $$p_i = \frac{1}{\omega_i}\left[\min_{\ttheta_i\in\hTheta_i}\left(\max_{\alpha\in\Gamma}\sum_{j\neq i}\omega_j\theta_j[\alpha]
+ \omega_i\ttheta_i[\alpha]+\lambda(\alpha)\right) - \left(\sum_{j\neq i}\omega_j\theta_j[\alpha_{\omega,\lambda}]+\lambda(\alpha_{\omega,\lambda})\right)\right].$$ Let $$\cI = \{i : \theta_i[\alpha_{\omega,\lambda}]-p_i\ge 0\}.$$ Agents in $\cI$ enjoy allocation $\alpha_{\omega,\lambda}$ and pay $p_i$. Agents not in $\cI$ do not participate and receive zero utility.
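A minimal sketch of $\cM(\omega,\lambda)$ under the same simplifying assumptions as before (finite allocation set, each $\hTheta_i$ given as a finite list of candidate value vectors); the boost function $\lambda$ is passed as a vector indexed by allocations:
\begin{verbatim}
def affine_maximizer(types, hat_sets, omega, boost):
    # types[i][a]: value of agent i for allocation a; omega[i] > 0;
    # boost[a] = lambda(alpha); hat_sets[i]: finite candidate set hat_Theta_i.
    n, num_alloc = len(types), len(types[0])
    def score(a):
        return sum(omega[i] * types[i][a] for i in range(n)) + boost[a]
    alpha = max(range(num_alloc), key=score)  # (omega, lambda)-efficient allocation
    payments = []
    for i in range(n):
        def others(a):
            return (sum(omega[j] * types[j][a] for j in range(n) if j != i)
                    + boost[a])
        wc = min(max(others(a) + omega[i] * cand[a] for a in range(num_alloc))
                 for cand in hat_sets[i])
        payments.append((wc - others(alpha)) / omega[i])
    return alpha, payments
\end{verbatim}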
This mechanism is the natural generalization of the affine-maximizer mechanism~\citep{Roberts79:characterization} parameterized by $\omega,\lambda$ to our setting. The special case where agent misreporting is limited to $\hTheta_i$ is the natural generalization of the weakest-competitor VCG mechanism of~\citet{krishna1998efficient} to affine-maximizer mechanisms. The following is a simple consequence of the proofs that $\cM$ and the affine-maximizer mechanism parameterized by $\omega,\lambda$ are incentive compatible and individually rational.
\begin{theorem}
For any $\omega\in\R_{\ge 0}^n$ and $\lambda:\Gamma\to\R_{\ge 0}$, $\cM(\omega,\lambda)$ is incentive compatible and individually rational.
\end{theorem}
Let $\mathsf{OPT}(\omega,\lambda) = \sum_{i=1}^n\theta_i[\alpha_{\omega,\lambda}]$ be the welfare of the $(\omega,\lambda)$-efficient allocation. All of the guarantees satisfied by $\cM$ carry over to $\cM(\omega,\lambda)$, the only difference being the modified benchmark of $\mathsf{OPT}(\omega,\lambda)$. Of course, $\mathsf{OPT}(\omega,\lambda)\le\mathsf{OPT}$ is a weaker benchmark than the welfare of the efficient allocation. However, the class of affine maximizer mechanisms is known to achieve much higher revenue than the vanilla VCG mechanism. We leave it as a compelling open question to derive even stronger guarantees on mechanisms of the form $\cM(\omega,\lambda)$ when the underlying affine maximizer is known to achieve greater revenue than vanilla VCG.
\section{Conclusions and future research}
\label{section:conclusion}
We developed a versatile new methodology for multidimensional mechanism design that incorporates side information about agent types with the bicriteria goal of generating high social welfare and high revenue simultaneously. We designed a side-information-dependent meta-mechanism. This mechanism generalizes the weakest-competitor VCG mechanism of~\citet{krishna1998efficient}. Careful instantiations of our meta-mechanism simultaneously achieved strong welfare and revenue guarantees that were parameterized by errors in the side information, and additionally proved to be fruitful in a setting where each agent's type lies on a constant-dimensional subspace (of the potentially high-dimensional ambient type space) that is known to the mechanism designer.
There are many new interesting research directions that stem from our work. First, how far off are our mechanisms from the welfare-versus-revenue Pareto frontier? The weakest-competitor VCG mechanism is one extreme point, but what does the rest of the frontier look like? One possible approach here would be to extend our theory beyond VCG to the larger class of affine maximizers (which are known to contain high-revenue mechanisms)---we provide some initial ideas in Appendix~\ref{appendix:amas}. Another important facet that we have largely ignored is computational complexity. The computations in our mechanism involving weakest competitors scale with the description complexity of $\tTheta_i$ (\emph{e.g.}, the number of constraints, the complexity of constraints, and so on). An important question here is to understand the computational complexity of our mechanisms as a function of the differing (potentially problem-specific) language structures used to describe the side-information sets $\tTheta_i$. In particular, the classes of side-information sets that are accurate, natural/interpretable, and easy to describe might depend on the specific mechanism design domain. Expressive bidding languages for combinatorial auctions have been extensively studied with massive impact in practice~\citep{sandholm2013very,sandholm2007expressive}. Can a similar methodology for side information be developed in conjunction with the results of this paper? Another natural direction is to extend our techniques to derive even stronger welfare and revenue guarantees when there is a known prior distribution on the agent types. All of our results apply to the Bayesian setting, but knowledge of the prior (or even sample access to the prior) could yield stronger guarantees. Another interesting direction along this vein is to generalize our mechanisms to depend on a known prior over prediction errors. Finally, the weakest-competitor variant of VCG of~\citet{krishna1998efficient} is a strict improvement over the vanilla VCG mechanism, yet it appears to not have been further studied nor applied since its discovery. The weakest-competitor paradigm highlighted by that work and our results could potentially have important applications in the field of economics and computation more broadly.
\subsection*{Acknowledgements}
This material is based on work supported by the NSF under grants IIS-1901403, CCF-1733556, CCF-1910321, and SES-1919453, and by the ARO under
award W911NF2210266. S. Prasad thanks Morgan McCarthy for helpful discussions about real-world use cases of multidimensional mechanism design.
\section{Introduction}\label{section:introduction}
Mechanism design is a high-impact branch of economics and computer science that studies the implementation of socially desirable outcomes among strategic self-interested agents. Major real-world use cases include combinatorial auctions (\emph{e.g.}, strategic sourcing, radio spectrum auctions), matching markets (\emph{e.g.}, housing allocation, ridesharing), project fundraisers, and many more. The two most commonly studied objectives in mechanism design are \emph{welfare maximization} and \emph{revenue maximization}. In many settings, welfare maximization, or \emph{efficiency}, is achieved by the classic Vickrey-Clarke-Groves (VCG) mechanism~\citep{vickrey1961counterspeculation,clarke1971multipart,groves1973incentives}. Revenue maximization is a much more elusive problem that is only understood in very special cases. The seminal work of~\citet{myerson1981optimal} characterized the revenue-optimal mechanism for the sale of a single item in the Bayesian setting, but it is not even known how to optimally sell two items to multiple buyers. It is known that welfare and revenue are generally competing objectives and optimizing one can come at the great expense of the other~\citep{ausubel2006lovely,abhishek2010efficiency,anshelevich2016pricing, kleinberg2013ratio, diakonikolas2012efficiency}.
In this paper we study how \emph{side information} (or \emph{predictions}) about the agents can help with \emph{bicriteria} optimization of both welfare and revenue. Side information can come from a variety of sources that are abundantly available in practice such as advice from domain experts, predictions from a machine-learning model trained on historical agent data, or even the mechanism designer's own gut instinct. Mechanism design approaches that exploit the proliferation of agent data have in particular witnessed a great deal of success both in theory~\citep{balcan2005mechanism,balcan2018general,morgenstern2016learning,munoz2017revenue} and in practice~\citep{edelman2007internet,dutting2019optimal,walsh2008computing,sandholm2007expressive}. In contrast to the typical Bayesian approach to mechanism design that views side information through the lens of a prior distribution over agents, we adopt a prior-free perspective that makes no assumptions on the correctness, accuracy, or source of the side information. A nascent line of work (that is part of a larger agenda on augmenting algorithms with machine-learned predictions~\citep{mitzenmacher2022algorithms}) has begun to examine the challenge of exploiting predictions (of \emph{a priori} unknown quality) when agents are self-interested, but only for fairly specific problem settings~\citep{xu2022mechanism,banerjee2022online,balkanski2022strategyproof,gkatzelis2022improved,balseiro2022single,agrawal2022learning}. We contribute to this line of work with a general side-information-dependent meta-mechanism for a wide swath of multidimensional mechanism design problems that aim for high social welfare and high revenue.
Here we provide a few examples of the forms of side information we consider in various multidimensional mechanism design scenarios. A formal description of the model is in Section~\ref{section:problem_formulation}. \emph{(1) The owner of a new coffee shop sets prices based on the observation that most customers are willing to pay \$9 for a coffee and a croissant, and are willing to pay \$5 for either item individually. (2) A real-estate agent believes that a particular buyer values a high-rise condominium with a city view three times more than one on the first floor. Alternately, the seller might know for a fact that the buyer values the first property three times more than the second based on set factors such as value per square foot. (3) A homeowner association is raising funds for the construction of a new swimming pool within a townhome complex. Based on the fact that a particular resident has a family with children, the association estimates that this resident is likely willing to contribute at least \$300 if the pool is opened within a block of the resident's house but only \$100 if outside a two-block radius.} These are all examples of side information available to the mechanism designer that may or may not be useful or accurate. Our methodology allows us to derive welfare and revenue guarantees under different assumptions on the veracity of the side information. We study two slightly different settings: one in which the side information can be completely bogus (Section~\ref{section:predictions}), and another in which the side information is constrained to be valid (Section~\ref{section:subspace}).
\subsection{Our contributions}
Our main contribution is a versatile meta-mechanism that integrates side information about agent types with the bicriteria goal of simultaneously optimizing welfare and revenue.
In Section~\ref{section:problem_formulation} we formally define the components of multidimensional mechanism design with side information. The abstraction of multidimensional mechanism design is a rich language that allows our theory to apply to many real-world settings including combinatorial auctions, matching markets, project fundraisers, and more---we expand on this list of examples further in Section~\ref{section:problem_formulation}. We also present the weakest-competitor VCG mechanism introduced by~\citet{krishna1998efficient} and prove that it is revenue-optimal among all efficient mechanisms in the prior-free setting (extending their work which was in the Bayesian setting for a fixed known prior).
In Section~\ref{section:meta_mechanism} we present our main meta-mechanism for efficient mechanism design with side information. It generalizes the mechanism of~\citet{krishna1998efficient}. We introduce the notion of a weakest-competitor set and a weakest-competitor hull, constructions that are crucial to understanding the payments and incentive properties of our meta-mechanism (Theorems~\ref{theorem:wch_ic} and~\ref{theorem:wch_ir}). These constructions do not appear to have been previously studied, and might be of independent interest in other mechanism design settings.
In Section~\ref{section:predictions} we prove that our meta-mechanism---when carefully instantiated---simultaneously achieves strong welfare and revenue guarantees that are parameterized by errors in the side information (Theorem~\ref{theorem:random_expand}). Our mechanism works by independently expanding the input predictions, where the expansion radius for each prediction is drawn randomly from a logarithmic discretization of the diameter of the ambient type space. Our mechanism achieves the efficient welfare $\mathsf{OPT}$ and revenue $\Omega(\mathsf{OPT}/\log H)$ when the side information is highly informative and accurate, where $H$ is an upper bound on any agent's value for any outcome. Its performance decays continuously and gradually as the quality of the side information decreases (whereas na\"{i}ve approaches suffer from huge discontinuous drops in performance). And it does not perform much worse than the vanilla VCG mechanism when the side information is completely bogus and/or misleading.
In Section~\ref{section:subspace} we use our meta-mechanism to derive new results in a setting where each agent's type is determined by a constant number of parameters. Specifically, agent types lie on constant-dimensional subspaces (of the potentially high-dimensional ambient type space) that are known to the mechanism designer. \emph{For example, in the condominium example from the introduction, an agent's value per square foot might completely determine her value for each property.} When each agent's true type lies on a known ray, we show that the revenue of (a particular instantiation of) our meta-mechanism is at least $\Omega(\mathsf{OPT}/\log H)$ while simultaneously guaranteeing a welfare of at least $\mathsf{OPT}/\log H$ (Theorem~\ref{theorem:line}). More generally, when each agent's true type is known to lie in a particular $k$-dimensional subspace of the ambient type space, we show how to use our meta-mechanism to guarantee revenue at least $\Omega(\mathsf{OPT}/k(\log H)^{k})$ while simultaneously guaranteeing welfare at least $\mathsf{OPT}/\log H$ (Theorem~\ref{theorem:subspace}).
Traditionally it is known that welfare and revenue are often at odds and maximizing one objective comes at the expense of the other. One perspective of our results is that side information can help mitigate this difficulty.
\subsection{Related work}
\emph{Side information in mechanism design.} Various mechanism design settings have been studied under the assumption that some form of public side information is available. \citet{munoz2017revenue} study single-item (unlimited supply) single-bidder posted-price auctions with approximate bid predictions. \citet{devanur2016sample} study the sample complexity of (single-parameter) auctions when the mechanism designer receives a distinguishing signal for each bidder. More generally, the field of \emph{algorithms with predictions} aims to improve the quality of classical algorithms when (potentially machine-learned) predictions about the solution are available. This is an extremely active area of research (\citet{mitzenmacher2022algorithms} provide a survey of work in this area). There have been recent explicit connections of the algorithms-with-predictions paradigm to mechanism design in specific settings such as strategic scheduling, facility location, online Nash social welfare maximization, and single-leg revenue management~\citep{agrawal2022learning, gkatzelis2022improved, balkanski2022strategyproof,banerjee2022online, balseiro2022single}. Most related to our work,~\citet{xu2022mechanism} study the design of high-revenue auctions to sell a (single copy of a) single item to multiple bidders when the mechanism designer has access to point predictions on the bidders' values for the items. Unlike our approach which heavily exploits randomization, they focus on \emph{deterministic} prediction-augmented modifications of a second-price auction. An important drawback of determinism is that revenue guarantees do not decay continuously as prediction quality degrades. For agents with value capped at $H$ there is an error threshold after which, in the worst case, only a $1/H$-fraction of revenue can be guaranteed (this is not even competitive with a vanilla second-price auction). \citet{xu2022mechanism} prove that such a drop in revenue is unavoidable by deterministic mechanisms. Finally, our setting is distinct from, but similar to in spirit, work that uses public attributes to perform market segmentation to improve revenue~\citep{balcan2005mechanism,balcan2020efficient}.
\vspace{2mm}
\noindent
\emph{Welfare-revenue tradeoffs in auctions.} The relationship between welfare and revenue in Bayesian auctions has been widely studied since the seminal work of~\citet{bulow1996auctions}. \citet{hartline2009simple,daskalakis2011simple} quantify welfare-revenue tradeoffs for second-price auctions with reserve prices in the single item setting. \citet{diakonikolas2012efficiency} study the Pareto frontier between efficiency and revenue in the single-item setting. They show that understanding the Pareto frontier is in general intractable, but derive polynomial-time approximation schemes to approximate the Pareto curve when there are two bidders. \citet{anshelevich2016pricing} study welfare-revenue tradeoffs in large markets,~\citet{aggarwal2009efficiency} study the efficiency of revenue-optimal mechanisms, and~\citet{abhishek2010efficiency} study the efficiency loss of revenue-optimal mechanisms.
\vspace{2mm}
\noindent
\emph{Constant-parameter mechanism design.} Revenue-optimal mechanism design for settings where each agent’s type space is of a constant dimension has been studied previously in certain specific settings. Single-parameter mechanism design is a well-studied topic dating back to the seminal work of~\citet{myerson1981optimal}, who (1) characterized the set of all truthful allocation rules and (2) derived the Bayesian optimal auction based on virtual values (a quantity that is highly dependent on knowledge of the agents' value distributions). \citet{archer2001truthful} also characterize the set of allocation rules that can be implemented truthfully in the single-parameter setting, and use this to derive polynomial-time mechanisms with strong revenue approximation guarantees in various settings. \citet{kleinberg2013ratio} prove revenue guarantees for a variety of single-parameter settings that depend on distributional parameters. Constrained buyers with values determined by two parameters have also been studied~\citep{malakhov2009optimal,pai2014optimal}.
\vspace{2mm}
\noindent
\emph{Revenue of combinatorial auctions for limited supply.} Our mechanism when agent types lie on known linear subspaces of low degree can be seen as a generalization of the well-known logarithmic revenue approximation that is achieved by a second-price auction with a random reserve price in the single-item setting~\citep{goldberg2001competitive}. Similar revenue approximations have been derived in multi-item settings for various classes of bidder valuation functions such as unit-demand~\citep{guruswami2005profit}, additive~\citep{sandholm2015automated,likhodedov2005approximating}, and subadditive~\citep{balcan2008item, chakraborty2013dynamic}. To the best of our knowledge, no previous techniques handle agent types on low-dimensional subspaces. Furthermore, our results are not restricted to combinatorial auctions unlike most previous research.
\section{Learning the type sets to use}
So far we have assumed a completely prior-free setting with no assumptions on how $\btheta = (\theta_1,\ldots, \theta_n)$ are generated. In this section, we explore how our meta-mechanism $\cM$ can be fruitfully used to exploit knowledge about priors on the agent types, whether that knowledge be a complete description of the value distribution or a training set of sample values drawn from the prior. In this section we will assume for simplicity that $\Theta = [0,1]^{\Gamma}$. Let $f_1,\ldots, f_n:[0,1]^{\Gamma}\to\R_{\ge 0}$ be the density functions of the $n$ agents, and let $\Theta_i = \mathrm{supp}(f_i)\subseteq\Theta$ be the support of $f_i$. Let $\mu_i(S) = \int_{S}f_i$.
There are a few benchmarks we might hope to compare the performance of our mechanisms against. The first is the \emph{ex-ante} optimum $$\max_{\alpha\in\Delta(\Gamma)}\E_{\btheta}\bigg[\sum_{i=1}^n\theta_i^T\alpha\bigg].$$ The second---and stronger---benchmark is the \emph{ex-post} optimum $$\E_{\btheta}[\mathsf{OPT}_{\btheta}], \text{ where } \mathsf{OPT}_{\btheta}:= \max_{\alpha}\sum_{i=1}^n\theta_i[\alpha],$$ which is the benchmark that is used in the prior-free setting of the previous section. We may also desire high-probability performance guarantees on our mechanisms, in which case we compare to $\mathsf{OPT}_{\btheta}$ directly (as random variables).
\subsection{Support estimation}
If $\Theta_i = \supp(f_i)$ is known for all $i$, we may simply run $\cM$ with $\hTheta_i = \Theta_i$. (While $\Theta_i$ is not necessarily upwards closed, $\cM$ is IC since there is a known restriction on the type space: agents cannot submit a misreported type in $\mathsf{WCH}(\Theta_i; \Theta)\setminus\Theta_i$. Phrased another way, $\Theta_i$ \emph{is} upwards closed with respect to itself as the ambient space, that is, $\Theta_i = \mathsf{WCH}(\Theta_i)\cap\Theta_i$. We will by and large not discuss weakest competitor hulls in this section and assume that all sets $\hTheta_i$ used in $\cM$ are upwards closed relative to $\Theta_i$.)
We now show how to instantiate $\cM$ with an estimate of $\Theta_i$ that is learned from agent data when a description of $f_i$ is not available.
Given samples of agent $i$'s type, the algorithm sets $\hTheta_i$ to be the minimal upwards-closed axis-parallel bounding box of the sampled points. The family of such boxes has VC dimension $|\Gamma|$. For a set $\tTheta$, let $\cR(\tTheta)$ denote the minimal axis-parallel rectangle that contains $\tTheta$.
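Concretely, given samples $\theta_i^1,\ldots,\theta_i^N$ of agent $i$'s type, this box is
$$\hTheta_i = \prod_{\alpha\in\Gamma}\Big[\min_{1\le j\le N}\theta^j_i[\alpha],\ 1\Big],$$
the smallest axis-parallel box that contains every sample and is upwards closed within the ambient cube (recall that $\Theta = [0,1]^{\Gamma}$ in this section).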
This choice of algorithm is motivated by the following artifact of our analysis: in $\ell_{\infty}$-distance, $d_H(\theta_i, \mathsf{WC}(\tTheta_i)) = d_H(\theta_i, \mathsf{WC}(\cR(\tTheta_i)))$. In the worst case over the types $\btheta_{-i}$ of the other agents, no revenue loss is incurred by moving from $\tTheta_i$ to $\cR(\tTheta_i)$.
\subsection{Known agent distributions}
Let $\cT_i\subseteq\cP(\Theta_i)$ be a collection of nested sets $\cT_i = \{\Theta_i^{\psi} : 0\le\psi\le 1\}$ parameterized by $\psi\in[0,1]$ such that $\psi<\psi'\implies\Theta_i^{\psi}\subset\Theta_i^{\psi'}$, $\Theta_i^0 = \emptyset$, and $\Theta_i^1\supseteq\Theta_i$. Let $h(\psi) = \mu_i(\Theta_i^{\psi})$; under suitable additional regularity conditions on the family $\cT_i$, $h$ is continuous in $\psi$.
Another idea is to consider distance distributions $\rho_i : \R_{\ge 0}\to [0,1]$ where $\Pr_{\theta_i\sim f_i}\big(a\le d_H(\theta_i, \mathsf{WC}(\supp(f_i)))\le b\big) = \int_{a}^{b} \rho_i(x)\,dx$. Our revenue guarantees would then depend on level sets of this density.
It suffices to use boxes in our analysis: the minimal axis-parallel bounding box of $\supp(f_i)$ satisfies the same revenue guarantees as $\supp(f_i)$ according to Lemma~\ref{lemma:revenue_lb_1}. So, given a parameter $\gamma$, we can consider the box $[a_1, H]\times[a_2, H]\times\cdots\times[a_{|\Gamma|}, H]$ that maximizes $\min_k a_k$ over all boxes such that $\int_{\prod_k [a_k, H]}f_i\ge\gamma$. This motivates the following definition.
Let $$E_{f,\gamma}\in\argmax\left\{\ell(E) : \int_{E} f\ge\gamma, E\text{ measurable}\right\}, \quad\text{where } \ell(E) := \inf_{x\in E}\min_{\alpha\in\Gamma} x[\alpha],$$ be a set of probability mass at least $\gamma$ with respect to $f$ that maximizes the minimum coordinate of any point in the set.
We take these sets to be nested: for $\gamma_1 \le \gamma_2$ we require $E_{f, \gamma_1}\subseteq E_{f, \gamma_2}$ (this might require changing the objective slightly, i.e., not using $\argmax\ell(E)$ exactly). This nesting is what we need, since then $\gamma_1\le\gamma_2\implies E_{f,\gamma_1}\subseteq E_{f,\gamma_2}\implies\ell(E_{f, \gamma_1})\ge \ell(E_{f, \gamma_2})$.
\begin{lemma}
Let $g:[0,1]^{\Gamma}\to\R_{\ge 0}$ be anti-monotone in the sense that $x\prec y\implies g(x) > g(y)$. Let $V\subseteq [0,1]^{\Gamma}$ be the cube $V = [0, s]^{\Gamma}$, where $0 < s < 1$. Let $f$ be a density on $[0,1]^{\Gamma}$ such that there exists a function $\lambda:[0,1]\to\R_{\ge 0}$ with $f(sx)\ge\lambda(s)f(x)$ for all $s\in[0,1]$ and $x\in[0,1]^{\Gamma}$. Then $$\E[g(x)\mid x\in V]\ge \frac{\lambda(s)\cdot s^{|\Gamma|}}{\int_V f}\cdot\E[g(x)].$$
\end{lemma}
\begin{proof}
Let $U = [0,1]^{\Gamma}$ denote the unit cube, and let $\varphi : U\to V$ be the map $\varphi(x) = sx$. Let $\mu(S) = \int_S f$. We have
\begin{align*}
\E[g(x)\mid x\in V] &= \frac{1}{\mu(V)}\int_V g(v)f(v)dv \\
&= \frac{1}{\mu(V)}\int_U g(\varphi(u))f(\varphi(u))|\det J_{\varphi}(u)|du \\
&\ge \frac{1}{\mu(V)}\int_U g(u)f(\varphi(u))|\det J_{\varphi}(u)|du \\
&= \frac{s^{|\Gamma|}}{\mu(V)}\int_U g(u) f(su)du \\
&\ge \frac{\lambda(s)\cdot s^{|\Gamma|}}{\mu(V)}\int_U g(u)f(u)du =\frac{\lambda(s)\cdot s^{|\Gamma|}}{\mu(V)}\cdot\E[g(x)]
\end{align*} where the second line follows from a change of variables ($J_{\varphi}$ is the Jacobian of $\varphi$), the third line is due to anti-monotonicity of $g$, the fourth line substitutes $|\det J_{\varphi}(u)| = s^{|\Gamma|}$ and $\varphi(u) = su$, and the final inequality uses the assumption $f(su)\ge\lambda(s)f(u)$.
\end{proof}
Consider the following mechanism, parameterized by $\gamma$, which uses $\hTheta_i = \mathsf{WCH}(E_{f_i, \gamma})$. We denote this mechanism by $\cM_{\gamma}$. In contrast to the randomized approach in the previous section, this mechanism is completely deterministic. Our guarantees, however, hold in expectation over the agent distributions.
\begin{theorem}
$\cM_{\gamma}$ satisfies $$\E[\text{welfare}]\ge\gamma\cdot\E[\mathsf{OPT}]$$ and $$\E[\text{revenue}]\ge\gamma\cdot\sum_{i=1}^n\ell(E_{f_i,\gamma}).$$
\end{theorem}
\begin{proof}
Let $\alpha_{\mathsf{opt},\btheta}$ denote the efficient allocation for type profile $\btheta$ (so in the following, $\alpha_{\mathsf{opt},\btheta}$ is a random variable taking values in $\Gamma$). We have
\begin{align*}
\E[\text{welfare}] &= \E\Bigg[\max_{\alpha}\sum_{j=1}^n\theta_j[\alpha]\cdot\mathbf{1}(\theta_j\in\hTheta_j)\Bigg] \\ &\ge\E\Bigg[\sum_{j=1}^n\theta_j[\alpha_{\mathsf{opt},\btheta}]\cdot\mathbf{1}(\theta_j\in\hTheta_j)\Bigg] \\
&= \sum_{j=1}^n \E\left[\theta_j[\alpha_{\mathsf{opt},\btheta}]\cdot\mathbf{1}(\theta_j\in\hTheta_j)\right] \\
&= \sum_{j=1}^n \E\left[\theta_j[\alpha_{\mathsf{opt},\btheta}]\mid\theta_j\in\hTheta_j\right]\cdot\Pr(\theta_j\in\hTheta_j) \\
&\ge \gamma\cdot\sum_{j=1}^n \E\left[\theta_j[\alpha_{\mathsf{opt},\btheta}]\mid\theta_j\in\hTheta_j\right].
\end{align*} We claim that $$\E\left[\theta_j[\alpha_{\mathsf{opt},\btheta}]\mid\theta_j\in\hTheta_j\right] \ge \E\left[\theta_j[\alpha_{\mathsf{opt},\btheta}]\right].$$ To prove this, observe that $\hTheta_j = \mathsf{WCH}(E_{f_j, \gamma})\subseteq\supp(f_j)$. We can thus write $$\E\left[\theta_j[\alpha_{\mathsf{opt},\btheta}]\right] = \E\left[\theta_j[\alpha_{\mathsf{opt},\btheta}] | \theta_j\in\hTheta_j\right]\cdot\Pr(\theta_j\in\hTheta_j) + \E\left[\theta_j[\alpha_{\mathsf{opt},\btheta}]|\theta_j\in\supp(f_j)\setminus\hTheta_j\right]\cdot\Pr(\theta_j\in\supp(f_j)\setminus\hTheta_j).$$
\end{proof}
As an instantiation, consider $n$ bidders with types drawn i.i.d.\ uniformly from $[0,1]^2$ (so $|\Gamma|=2$). For any $\gamma$, our mechanism satisfies $\E[\text{welfare}]\ge\gamma\E[\mathsf{OPT}]$ and $\E[\text{revenue}]\ge \gamma(1-\sqrt{\gamma})n$. The revenue bound is maximized at $\gamma = 4/9$, yielding expected welfare at least $(4/9)\,\E[\mathsf{OPT}]$ and expected revenue at least $(4/27)n$.
In general, for types i.i.d.\ uniform over $[0,1]^{\Gamma}$, we get $\E[\text{revenue}]\ge \gamma(1-\gamma^{1/|\Gamma|})\,n$.
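Optimizing this bound over $\gamma$ is a quick calculation: setting the derivative of $\gamma(1-\gamma^{1/|\Gamma|})$ to zero gives
$$1 - \Big(1+\tfrac{1}{|\Gamma|}\Big)\gamma^{1/|\Gamma|} = 0 \iff \gamma = \Big(\tfrac{|\Gamma|}{|\Gamma|+1}\Big)^{|\Gamma|},$$
at which point the per-agent revenue bound equals $\frac{1}{|\Gamma|+1}\big(\frac{|\Gamma|}{|\Gamma|+1}\big)^{|\Gamma|}$. For $|\Gamma| = 2$ this recovers $\gamma = 4/9$ and the $4/27$ factor above; as $|\Gamma|\to\infty$, the optimal $\gamma$ tends to $1/e$.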
\subsection{Thoughts}
Let $$E_{f}(\gamma) = \argmin\left\{\diam_{\infty}(E) : \int_{E}f\ge\gamma, E\text{ measurable}\right\}$$ denote the minimum-diameter set (with respect to the $\ell_{\infty}$ metric) that contains at least a $\gamma$ fraction of the probability mass of $f$, and let $d_f(\gamma) = \diam_{\infty}(E_f(\gamma))$ be its diameter. \textcolor{red}{$E_f(\gamma)$ need not be unique; for now, assume that it is unique or that there is a canonical way of picking one.} For $\gamma_1 < \gamma_2$ we have $d_f(\gamma_1) < d_f(\gamma_2)$.
Note that if $\hTheta$ is such that $\theta_i\in\hTheta$ with high probability ($\ge 1-\varepsilon$), then $\hTheta$ can be used directly to obtain a high-probability IC guarantee. However, one might as well use $\mathsf{WCH}(\hTheta)$, since then IC is guaranteed exactly and there is no revenue loss.
Now, if we choose $\gamma = 1-\varepsilon/n$ and use $\hTheta_i = \mathsf{WCH}(E_{f_i}(1-\varepsilon/n))$ for each agent, we have exact IC, and with probability $\ge 1-\varepsilon$, $\theta_i\in\hTheta_i$ for all $i$. We thus get that with probability $\ge1-\varepsilon$, $$\text{welfare} = \mathsf{OPT}$$ and $$\text{revenue}\ge\mathsf{OPT} - \sum_i d_{f_i}(1-\varepsilon/n).$$ This gives a form of welfare--revenue tradeoff in terms of the minimum-diameter sets.
The $\varepsilon/n$ split may not be necessary: one could instead use linearity of expectation.
\textcolor{green}{A caveat: the diameter argument may be misleading, since we use $\mathsf{WCH}$, which can vastly increase the diameter of the min-diameter set.} Still, with the probability stated above, the loss is at most the diameter bound we derived.
{\color{blue}
Some thoughts. The mechanism design problem can be phrased as follows: the mechanism designer gets predictions $\{\tTheta_i\}$ and, based on these, must commit to sets $\{\widehat{\Theta}_i\}$ to use in the above mechanism. These can be any sets!
The fact that we can use \emph{any} weakest-competitor hull in the mechanism, not just $\mathsf{WCH}(\tTheta_i)$, where $\tTheta_i$ are the received predictions, should make the learning problem much easier. For example, even a class $\cT\subseteq\cP(\Theta)$ with infinite VC dimension should be ``learnable'' in the sense that a simpler class of WCHs can be used.
For example, say the learning algorithm fits the minimal axis-parallel box to the type data. Then, with high probability, this box is $\varepsilon$-close to the minimal bounding box of the \emph{true} type set. The higher the VC dimension of the class of sets used, the better the revenue (roughly, because the true type space is approximated more closely), but the harder the learning problem (with a greater risk of invalid predictions).
}
An interesting learnability question arises here: given $N$ instances $\btheta_{1},\ldots, \btheta_{N}$ of the mechanism design environment, where $\btheta_i = (\theta^i_1,\ldots, \theta^i_n)$, can accurate type predictions be learned? The $\mathsf{WCH}$ relaxation could provide some interesting sample complexity results. For example, if $\cT$ is a family of type spaces, that is, $\cT$ is a collection of subsets of $\R^{\Gamma}$, something like $VCdim(\{\mathsf{WCH}(\Theta): \Theta\in\cT\}) = VCdim(\cT)$ should hold. So the hulls would be equally easy to learn while leaving some leeway for errors in learning.
Shorthand: $\mathsf{WCH}(\cT) := \{\mathsf{WCH}(\Theta) : \Theta\in\cT\}$. Should be fairly easy to show that $\VC(\mathsf{WCH}(\cT))\le \VC(\cT)$.
We expect that there exist classes $\cT$ such that $\VC(\cT) = \infty$ but $\VC(\mathsf{WCH}(\cT)) = O(|\Gamma|)$.
\begin{lemma}
Let $\cT\subseteq\cP(\R_{\ge 0}^2)$ be the collection of convex polygons such that exactly $k$ of the polygon's vertices satisfy $x + y \le 1$ and all other vertices satisfy $x > 1$ and $y > 1$. Then, $\VC(\cT) = \infty$, but $\VC(\mathsf{WCH}(\cT)) = O(k)$.
\end{lemma}
\begin{proof}
(To be written out formally; the claim still needs to be verified.)
\end{proof}
We can give an end-to-end procedure with guarantees for revenue maximization subject to an efficiency constraint by plugging into the WCVCG mechanism. The above lemma shows that there are instances where the exact restricted type spaces might not be learnable, but their weakest-competitor hulls are, and thus we may harness the full power of WCVCG anyway (after taking into account the error tolerance of the PAC guarantees).
Suppose we receive samples of the form $\btheta^1 = (\theta^1_1,\ldots, \theta^1_n),\ldots, \btheta^N = (\theta^N_1,\ldots,\theta^N_n)$. Let $\widehat{\Theta}_i = \conv(\{\theta_i^1,\ldots,\theta^N_i\})$. Then, on a fresh sample, run WCVCG with $\{\mathsf{WCH}_{\varepsilon}(\widehat{\Theta}_i)\}$, where $\varepsilon$ is the PAC error guarantee. The resulting mechanism is always efficient, and with probability $\ge 1-\delta$ it is IC and IR. The revenue guarantee is in terms of the ``spread'' of the samples.
Another interesting learnability setting could be when $\btheta^1,\ldots, \btheta^N,\ldots$ follow a general stochastic process (rather than being i.i.d.). In this setting, it may be more realistic to make ``point'' predictions.
\section{Other ideas}
Bundling predictions for combinatorial auctions: predictions of the form ``sell items $\{x,y,z\}$ together.''
\section{Problem formulation, example applications, and the weakest-competitor VCG mechanism}
\label{section:problem_formulation}
We consider a general multidimensional mechanism design setting with a finite allocation space $\Gamma$ and $n$ agents. $\Theta_i$ is the type space of agent $i$. Agent $i$'s true private type $\theta_i\in\Theta_i$ determines her value $v(\theta_i, \alpha)$ for allocation $\alpha\in\Gamma$. We will interpret $\Theta_i$ as a subset of $\R^{\Gamma}$, so $\theta_i[\alpha] = v(\theta_i, \alpha)$. We use $\btheta\in\bigtimes_{i=1}^n\Theta_i$ to denote a profile of types and $\btheta_{-i}\in\Theta_{-i}:=\bigtimes_{j\neq i}\Theta_j$ to denote a profile of types excluding agent $i$. We now introduce our model of side information. For each agent, the mechanism designer receives a subset of the type space predicting that the subset contains the agent's true yet-to-be-revealed type. Formally, the mechanism designer receives additional information about each agent in the form of a refinement of each agent's type space, given by $\tTheta_1\subseteq\Theta_1,\ldots, \tTheta_n\subseteq\Theta_n$. These refinements postulate that the true type of bidder $i$ is actually contained in $\tTheta_i$ (though the mechanism designer does not necessarily know whether or not these predictions are valid). We refer to the $\tTheta_i$ as \emph{side-information sets} or simply \emph{predictions}. To simplify exposition, we assume that prior to receiving side information the mechanism designer has no differentiating information about the agents' types, that is, $\Theta_1 = \cdots = \Theta_n$. Let $\Theta$ denote this common \emph{ambient type space}. We assume $0\in\Theta$. The assumption of a common ambient type space is without loss of generality---all definitions and theorems in this paper hold relative to whatever the ambient type space of agent $i$ is.
A mechanism with side information is specified by an allocation rule $\alpha(\btheta; \tTheta_1,\ldots,\tTheta_n)\in\Gamma$ and a payment rule $p_i(\btheta; \tTheta_1,\ldots,\tTheta_n)\in\R$ for each agent $i$. We assume agents have quasilinear utilities. A mechanism is \emph{incentive compatible} if $\theta_i\in\argmax_{\theta_i'\in\Theta_i} \theta_i[\alpha(\theta_i',\btheta_{-i};\tTheta_1,\ldots,\tTheta_n)] - p_i(\theta_i',\btheta_{-i};\tTheta_1,\ldots,\tTheta_n)$ holds for all $i, \theta_i, \btheta_{-i}, \tTheta_1,\ldots,\tTheta_n$, that is, agents are incentivized to report their true type regardless of what other agents report and regardless of the side information used by the mechanism (this definition is equivalent to the usual notion of dominant-strategy incentive compatibility and simply stipulates that side information ought to be used in an incentive compatible manner). A mechanism is \emph{individually rational} if $\theta_i[\alpha(\theta_i,\btheta_{-i};\tTheta_1,\ldots,\tTheta_n)] -p_i(\theta_i,\btheta_{-i};\tTheta_1,\ldots,\tTheta_n)\ge 0$ holds for all $i,\theta_i,\btheta_{-i},\tTheta_1,\ldots,\tTheta_n$. We will analyze a variety of randomized mechanisms that randomize over incentive compatible and individually rational mechanisms. Such randomized mechanisms are thus incentive compatible and individually rational in the strongest possible sense (as opposed to weaker notions of in-expectation incentive compatibility/individual rationality).
\subsubsection*{Example applications}
\label{subsection:examples}
Our model of side information within the rich language of multidimensional mechanism design allows us to capture a variety of different problem scenarios where both welfare and revenue are desired objectives. We list a few examples of different multidimensional mechanism settings along with examples of different varieties of side information sets.
\begin{itemize}
\item Combinatorial auctions: There are $m$ indivisible items to be allocated among $n$ agents (or to no one). The allocation space $\Gamma$ is the set of $(n+1)^m$ allocations of the items and $\theta_i[\alpha]$ is agent $i$'s value for the bundle of items she is allocated by $\alpha$. Let X and Y denote two of the items for sale. The set $\tTheta_i = \{\theta_i : \theta_i[\{\text{X, Y}\}]\ge 9, \theta_i[\{\text{X}\}]+\theta_i[\{\text{Y}\}]\ge 10\}$ represents the prediction that agent $i$'s values for X and Y individually sum up to at least \$10, and her value for the bundle is at least \$9. Here, $\tTheta_i$ is the intersection of linear constraints.
\item Matching markets: There are $m$ items (\emph{e.g.}, houses) to be matched to $n$ buyers. The allocation space $\Gamma$ is the set of matchings on the bipartite graph $K_{m,n}$ and $\theta_i[\alpha]$ is buyer $i$'s value for the item $\alpha$ assigns her. Let $\alpha_1,\alpha_2,\alpha_3$ denote three matchings that match house 1, house 2, and house 3 to agent $i$, respectively. The set $\tTheta_i = \{\theta_i : \theta_i[\alpha_1] = 2\cdot\theta_i[\alpha_2] = 0.75\cdot\theta_i[\alpha_3]\}$ represents the information that agent $i$ values house 1 twice as much as house 2, and $3/4$ as much as house 3. Here, $\tTheta_i$ is the linear space given by $\operatorname{span}(\langle 1, 1/2, 4/3\rangle)$.
\item Fundraising for a common amenity: A multi-story office building that houses several companies is opening a new cafeteria on a to-be-determined floor and is raising construction funds. The allocation space $\Gamma$ is the set of floors of the building and $\theta_i[\alpha]$ is the (inverse of the) cost incurred by building-occupant $i$ for traveling to floor $\alpha$. The set $\tTheta_i = \{\theta_i : \lVert\theta_i - \theta_i^*\rVert_p\le k\}$ postulates that $i$'s true type is no more than $k$ away from $\theta_i^*$ in $\ell_{p}$-distance, which might be derived from an estimate of the range of floors agent $i$ works on based on the company agent $i$ represents. Here, $\tTheta_i$ is given by a (potentially nonlinear) distance constraint.
\item Bidding for a shared outcome: A package delivery service that offers multiple delivery rates (priced proportionally) needs to decide on a delivery route to serve $n$ customers. The allocation space $\Gamma$ is the set of feasible routes and $\theta_i[\alpha]$ is agent $i$'s value for receiving her packages after the driving delay specified by $\alpha$. For $t = 0,1,\ldots$ let $\alpha_t$ denote an allocation that imposes a driving delay of $t$ time units on agent $i$. The set $\tTheta_i = \{\theta_i : \theta_i[\alpha_0]\ge 500, \theta_i[\alpha_{t+1}]\ge f_t(\theta_i[\alpha_{t}])\;\forall t\}$ is the prediction that agent $i$ is willing to pay \$500 to receive her packages as soon as possible, and is at worst a time discounter determined by (potentially nonlinear) discount functions $f_t$. Here, the complexity of $\tTheta_i$ is determined by the $f_t$.
\end{itemize}
In our results we will assume that $\Theta = [0, H]^{\Gamma}$ and thereby impose an upper bound $H$ on any agent's value for any allocation. This cap $H$ is the only problem-specific parameter in our results. In the above four bulleted examples $H$ represents the maximum value any agent has for the grand bundle of items, any available house, the cafeteria opening on her floor, and receiving her packages with no delay, respectively.
\subsubsection*{The weakest-competitor VCG mechanism} The VCG mechanism can generally be highly suboptimal when it comes to revenue~\citep{ausubel2006lovely,metz2018facebook,varian2014vcg} (and conversely mechanisms that shoot for high revenue can be highly welfare suboptimal). However, if efficiency is enforced as a constraint of the mechanism design, then the \emph{weakest-competitor VCG mechanism} introduced by~\citet{krishna1998efficient} is in fact revenue optimal (they call it the generalized VCG mechanism). While VCG payments are based on participation externalities, the payments of the weakest-competitor VCG mechanism are based on agents being replaced by \emph{weakest competitors} who have the smallest impact on welfare. This approach yields a strict revenue improvement over the vanilla VCG mechanism. ~\citet{krishna1998efficient} proved that the Bayesian version of the weakest-competitor VCG mechanism is revenue optimal among all efficient, incentive compatible, and individually rational mechanisms. The (prior-free) weakest-competitor VCG mechanism works as follows. Given reported types $\theta_1,\ldots, \theta_n$, it uses the efficient allocation $\alpha^* = \argmax_{\alpha\in\Gamma}\sum_{i=1}^n\theta_i[\alpha]$. The payments are given by $$p_i(\theta_1,\ldots, \theta_n) = \min_{\ttheta_i\in\Theta_{i}}\Bigg(\max_{\alpha\in\Gamma}\sum_{j\neq i} \theta_j[\alpha] + \ttheta_i[\alpha]\Bigg) - \sum_{j\neq i} \theta_j[\alpha^*].$$ If $0\in\Theta_i$, $p_i$ is simply the vanilla VCG payment. \citet{krishna1998efficient} prove the following result in the Bayesian setting, which we reproduce in a stronger prior-free form for completeness.
\begin{theorem}\label{theorem:rev_optimal}
The weakest-competitor VCG auction is incentive compatible and individually rational. Furthermore, it is the revenue maximizing mechanism among all mechanisms that are efficient, incentive compatible, and individually rational.
\end{theorem}
\begin{proof}
Weakest-competitor VCG is incentive compatible for the same reason that VCG is incentive compatible: the minimization in the payment formula (the \emph{pivot} term) is independent of bidder $i$'s reported type. Concretely, if $\alpha'$ is the welfare-maximizing allocation when bidder $i$ reports $\theta_i'$, bidder $i$'s utility from reporting $\theta_i'$ is $\sum_{j=1}^n \theta_j[\alpha'] - \min_{\ttheta_i\in\Theta_{i}}(\max_{\alpha}\sum_{j\neq i} \theta_j[\alpha] + \ttheta_i[\alpha]),$ which is maximized at $\alpha' = \alpha^*$ (which proves incentive compatibility). Furthermore, for each $i$, $\sum_{j=1}^n \theta_j[\alpha^*] - \min_{\ttheta_i\in\Theta_{i}}(\max_{\alpha}\sum_{j\neq i} \theta_j[\alpha] + \ttheta_i[\alpha])\ge \sum_{j=1}^n \theta_j[\alpha^*] - \max_{\alpha} \sum_{j=1}^n \theta_j[\alpha] = 0,$ which proves individual rationality. The proof that weakest-competitor VCG is revenue optimal follows from the revenue equivalence theorem; the necessary ingredients may be found in the monograph by~\citet{vohra2011mechanism}. Let $p_i(\btheta)$ be the weakest-competitor VCG payment rule, and let $p_i'(\btheta)$ be any other payment rule that also implements the efficient allocation rule. By revenue equivalence, for each $i$, there exists $h_i(\btheta_{-i})$ such that $p_i'(\theta_i, \btheta_{-i}) = p_i(\theta_i, \btheta_{-i}) + h_i(\btheta_{-i})$. Suppose $\btheta$ is a profile of types such that $p_i'$ generates strictly greater revenue than $p_i$, that is, $\sum_{i=1}^n p_i'(\btheta) > \sum_{i=1}^n p_i(\btheta)$. Equivalently, $\sum_{i=1}^n \big(p_i(\theta_i, \btheta_{-i}) + h_i(\btheta_{-i})\big) > \sum_{i=1}^n p_i(\theta_i, \btheta_{-i})$. Thus, there exists $i^*$ such that $h_{i^*}(\btheta_{-i^*}) > 0$. Now, let $\ttheta_{i^*}=\argmin_{\theta_{i^*}'\in\Theta_{i^*}}\max_{\alpha\in\Gamma}\sum_{j\neq i^*}\theta_j[\alpha] + \theta_{i^*}'[\alpha]$ be the \emph{weakest competitor} with respect to $\btheta_{-i^{*}}$. If weakest-competitor VCG is run on the type profile $(\ttheta_{i^*}, \btheta_{-i^*})$, the agent with type $\ttheta_{i^*}$ pays their value for the efficient allocation. In other words, the individual rationality constraint is binding for $\ttheta_{i^*}$. Since $h_{i^*}(\btheta_{-i^*}) > 0$, $p_{i^*}'$ violates individual rationality, which completes the proof. \end{proof}
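To make the payment rule concrete, consider the following illustration (ours): a single item is allocated among $n$ agents, so $\Gamma = \{\alpha_0,\alpha_1,\ldots,\alpha_n\}$ where $\alpha_j$ awards the item to agent $j$ (and $\alpha_0$ to no one), each type satisfies $\theta_i[\alpha_i] = v_i$ and $\theta_i[\alpha] = 0$ for $\alpha\neq\alpha_i$, and the type space encodes a known lower bound on the value: $\Theta_i = \{\theta_i : \theta_i[\alpha_i]\ge a_i,\ \theta_i[\alpha]=0\ \forall\alpha\neq\alpha_i\}$. The pivot term then evaluates to
$$\min_{\ttheta_i\in\Theta_{i}}\Bigg(\max_{\alpha\in\Gamma}\sum_{j\neq i} \theta_j[\alpha] + \ttheta_i[\alpha]\Bigg) = \min_{t\ge a_i}\max\Big(t,\ \max_{j\neq i}v_j\Big) = \max\Big(a_i,\ \max_{j\neq i}v_j\Big),$$
so the winning agent pays the second-highest value raised to her known lower bound $a_i$, a (weakly) larger payment than under vanilla VCG.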
Given this result, one cannot hope to meaningfully improve revenue via noisy side information if efficiency is a hard constraint. Our meta-mechanism (Section~\ref{section:meta_mechanism}) is a generalization of the weakest-competitor VCG mechanism that (1) does not assume misreporting is limited to the side information sets and (2) relaxes efficiency in order to capitalize on the available side information. The notion of a \emph{weakest competitor} highlighted by the weakest-competitor VCG mechanism is a key ingredient in our study of side information.
\section{Constant-parameter agents: types on low-dimensional subspaces}
\label{section:subspace}
In this section we show how the theory we have developed so far can be used to derive new revenue approximation results when the mechanism designer knows that each agent's type belongs to some low-dimensional subspace of $\R^{\Gamma}$ (these subspaces can be different for each agent).
This is a slightly different setup from the previous sections. So far, we have assumed that $\Theta_i = \Theta$ for all $i$, that is, there is an ambient type space that is common to all the agents. Side information sets $\tTheta_i$ are given as input to the mechanism designer, with no assumptions on quality/correctness (and our guarantees in Section~\ref{section:predictions} were parameterized by quality). Here, we assume the side information that each agent's type lies in a particular subspace is guaranteed to be valid. Two equivalent ways of stating this setup are (1) that $\Theta_i$ is the corresponding subspace for agent $i$ and the mechanism designer receives no additional prediction set $\tTheta_i$ or (2) $\Theta_i = \Theta$ for all $i$, $\tTheta_i = \Theta\cap U_i$ where $U_i$ is a subspace of $\R^{\Gamma}$, and the mechanism designer has the additional guarantee that $\theta_i\in U_i$ (so $\tTheta_i$ is a \emph{valid} side-information set). We shall use the language of the second interpretation.
Since $0\in U_i$ for each $i$, none of our instantiations of $\cM$ from the previous section with $\tTheta_i = \Theta\cap U_i$ would yield meaningful revenue improvements over vanilla VCG. This is because when weakest-competitor sets contain the origin, the weakest-competitor VCG mechanism is identical to the vanilla VCG mechanism. Another perspective is that the accuracy errors $\gamma_i^A$ of the sets $\tTheta_i = \Theta\cap U_i$ are too large to be meaningful in the guarantees of Lemma~\ref{lemma:revenue_lb_1} and Theorem~\ref{theorem:random_expand}.
In this section we show how to fruitfully use the information provided by the subspaces $U_1,\ldots,U_n$ within the framework of our meta-mechanism. We begin by analyzing the most informative case: when agent types lie on lines. We then generalize our reasoning to any $k$-dimensional subspace that admits an orthonormal basis in the non-negative orthant of $\R^{\Gamma}$. In this section we assume $\Theta=[1,H]^{\Gamma}$, thereby imposing a lower bound of $1$ on agent values (this choice of lower bound is not important, but the knowledge of some lower bound is needed in our analysis).
\subsection{Agent types that lie on lines}
Suppose for each $i$, the mechanism designer knows that $\theta_i$ lies in a line given by $U_i = \{\lambda u_i:\lambda\in\R\}$, for some $u_i\in\R_{\ge 0}^{\Gamma}$. Let $\cL_i = U_i\cap [0,H]^{\Gamma}$ be the line segment that is the portion of this line that lies in $[0,H]^{\Gamma}$. We assume that $H = 2^a$ for some positive integer $a$.
Let $\ttheta_i^0$ be the endpoint of $\cL_i$ with $\lVert \ttheta^0_i\rVert_{\infty} = H$ (the other endpoint of $\cL_i$ is the origin). Let $\ttheta^1_i = \ttheta^0_i/2$ be the midpoint of $\cL_i$, and for $\ell = 2,\ldots, \log_2 H$ let $\ttheta_i^{\ell} = \ttheta_i^{\ell-1}/2$ be the midpoint of $\overline{0\ttheta^{\ell-1}_i}$. So $\lVert\ttheta^{\log_2 H}_i\rVert_{\infty} = 1$. We terminate the halving of the line segment $\cL_i$ after $\log_2 H$ steps due to the assumption that $\theta_i\in[1, H]^{\Gamma}$.
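For instance, with $H = 8$ and $u_i = (1, 1/2)$ (so $|\Gamma| = 2$ and $\log_2 H = 3$), we have $\ttheta_i^0 = (8, 4)$, $\ttheta_i^1 = (4, 2)$, $\ttheta_i^2 = (2, 1)$, and $\ttheta_i^3 = (1, 1/2)$.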
We can now specify our instantiation of $\cM$, which we denote by $\cM_{1}$. For each $i$ we independently set $$\boxed{\hTheta_i = \overline{\ttheta_i^{\ell_i}\ttheta_i^0}\text{ where } \ell_i\sim_{\text{unif.}}\left\{1,\ldots,\log_2H\right\}.}$$ The weakest-competitor sets of the segments used in $\cM_{1}$ are precisely the points in our logarithmic discretizations of the segments $\cL_i$, that is, $\mathsf{WC}(\overline{\ttheta_i^{\ell}\ttheta_i^0}) = \{\ttheta_i^{\ell}\}.$ We could equivalently instantiate $\cM_1$ by simply setting $\hTheta_i = \{\ttheta_i^{\ell}\}$. We show that in this special case of types belonging to known rays, $\cM_{1}$ satisfies effectively the same guarantees as the general mechanism in the previous section, but without the loss associated with prediction error. The key observation is that by Lemma~\ref{lemma:payment_lb}, we can lower bound the payment $p_i$ of agent $i$ by the value $\ttheta_i[\alpha_{\mathsf{opt}}]$ of the corresponding weakest competitor. This is a more direct lower bound than the one we used previously in terms of inaccuracy $\gamma_i^A$ given by Lemma~\ref{lemma:revenue_lb_1}.
\begin{theorem}\label{theorem:line}
$\cM_1$ satisfies $$\E[\text{welfare}]\ge\frac{1}{\log_2H}\cdot\mathsf{OPT}$$ and $$\E[\text{revenue}]\ge\frac{1}{2\log_2 H}\cdot\mathsf{OPT}.$$
\end{theorem}
\begin{proof}
We have $\E[\text{welfare}]\ge\E[\sum_{i=1}^n\theta_i[\alpha_{\mathsf{opt}}]\cdot\mathbf{1}(\theta_i\in\hTheta_i)]=\sum_{i=1}^n\theta_i[\alpha_{\mathsf{opt}}]\cdot\Pr(\theta_i\in\hTheta_i)\ge\frac{1}{\log_2 H}\cdot\mathsf{OPT}$. We now prove the revenue guarantee. Let $\ell_i^* = \min\{\ell : \theta_i\succeq\ttheta^{\ell}_i\}$. By how the $\ttheta^{\ell}_i$ are defined, we have $\ttheta_i^{\ell_i^*}\succeq\frac{1}{2}\theta_i$. We have $$\E[p_i]\ge\E[p_i\mid\ell_i = \ell_i^*]\cdot\Pr(\ell_i = \ell_i^*) = \frac{1}{\log_2 H}\cdot\E[p_i\mid \ell_i = \ell_i^*].$$ Let $S = \{i : \theta_i\in\hTheta_i\}$. We have $\ell_i = \ell_i^*\implies i\in S$. Therefore, conditioned on $\ell_i = \ell_i^*$, Lemma~\ref{lemma:payment_lb} yields $p_i\ge\ttheta_i^{\ell_i^*}[\alpha_{\mathsf{opt}}]$. As $\ttheta_i^{\ell_i^*}\succeq\frac{1}{2}\theta_i$, we have $$\E[p_i\mid \ell_i^* = \ell_i]\ge \ttheta_i^{\ell_i^*}[\alpha_{\mathsf{opt}}]\ge\frac{1}{2}\theta_i[\alpha_{\mathsf{opt}}].$$ Finally, $$\E[\text{revenue}] =\sum_{i=1}^n\E[p_i]\ge\frac{1}{2\log_2 H}\cdot\sum_{i=1}^n\theta_i[\alpha_{\mathsf{opt}}]=\frac{1}{2\log_2 H}\cdot\mathsf{OPT},$$ as desired.
\end{proof}
\subsection{Generalization to agent types that lie in subspaces}
Suppose for each $i$, the mechanism designer knows that $\theta_i$ lies in a $k$-dimensional subspace $U_i=\text{span}(u_{i,1},\ldots,u_{i,k})$ of $\R^{\Gamma}$ where each $u_{i,j}\in\R^{\Gamma}_{\ge 0}$ lies in the non-negative orthant and $\{u_{i,1},\ldots, u_{i,k}\}$ is an orthonormal basis for $U_i$. As in the previous section, we assume $H = 2^a$ for some positive integer $a$.
We generalize the single-dimensional analysis from the previous subsection in the design of our mechanism here. Let $\cL_{i,j} = \{\lambda u_{i,j} : \lambda\ge 0\}\cap [0, H]^{\Gamma}$ be the line segment that is the portion of the ray generated by $u_{i,j}$ that lies in $[0, H]^{\Gamma}$. Let $y_{i,j}$ be the endpoint of $\cL_{i,j}$ with $\lVert y_{i,j}\rVert_{\infty} = H$ (the other endpoint of $\cL_{i,j}$ is the origin). Let $z_{i,j}^1 = y_{i,j}/2$ be the midpoint of $\cL_{i,j}$, and for $\ell = 2,\ldots, \log_2 H$ let $z_{i,j}^{\ell}=z_{i,j}^{\ell-1}/2$ be the midpoint of $\overline{0z_{i,j}^{\ell-1}}$. So $\lVert z_{i,j}^{\log_2 H}\rVert_{\infty} = 1$. We terminate the halving of $\cL_{i,j}$ after $\log_2H$ steps due to the assumption that $\theta_i\in [1,H]^{\Gamma}$. For every $k$-tuple $(\ell_1,\ldots, \ell_k)\in\{1,\ldots, \log_2H\}^k$, let $$\ttheta_i(\ell_1,\ldots, \ell_k) = \sum_{j=1}^kz_{i,j}^{\ell_j}.$$ Furthermore, let $W_{\ell} = \{(\ell_1,\ldots, \ell_k)\in\{1,\ldots,\log_2 H\}^k : \min_j\ell_j = \ell\}$. The sets $W_1,\ldots, W_{\log_2 H}$ partition $\{1,\ldots,\log_2 H\}^k$ into levels, where $W_{\ell}$ is the set of points with $\ell_{\infty}$-distance $H/2^{\ell}$ from the origin. We can now specify our instantiation of $\cM$, which we denote by $\cM_k$. For each $i$, we independently set $$\boxed{\hTheta_i = \mathsf{WCH}\big(\big\{\ttheta_i(\ell_{i,1},\ldots, \ell_{i,k})\big\}\big)\text{ where } \ell_i\sim_{\text{unif.}}\left\{1,\ldots, \log_2 H\right\}\text{ and }(\ell_{i,1},\ldots, \ell_{i,k})\sim_{\text{unif.}}W_{\ell_i}.}$$ Figure~\ref{fig:mechs} is an illustration of the mechanism in the case of $k=2$. We could equivalently set $\hTheta_i = \{\ttheta_i(\ell_{i,1},\ldots,\ell_{i,k})\}$ to be the weakest competitor itself.
\begin{theorem}\label{theorem:subspace}
$\cM_k$ satisfies $$\E[\text{welfare}]\ge\frac{1}{\log_2 H}\cdot\mathsf{OPT}$$ and $$\E[\text{revenue}]\ge\frac{1}{2k(\log_2 H)^{k}}\cdot\mathsf{OPT}.$$
\end{theorem}
\begin{proof}
We have $\E[\text{welfare}]\ge\sum_{i=1}^n\theta_i[\alpha_{\mathsf{opt}}]\cdot\Pr(\theta_i\in\hTheta_i)\ge\sum_{i=1}^n\theta_i[\alpha_{\mathsf{opt}}]\cdot\Pr(\ell_i = \log_2 H) = \frac{1}{\log_2 H}\cdot\mathsf{OPT}$ (since $\theta_i\succeq\ttheta_i(\log_2H,\ldots,\log_2H)$). The proof of the revenue guarantee relies on the following key claim: for each agent $i$, there exists $\ell_{i,1}^*,\ldots, \ell_{i,k}^*\in\{1,\ldots,\log_2H\}$ such that $\ttheta(\ell_{i,1}^*,\ldots,\ell_{i,k}^*)\succeq\frac{1}{2}\theta_i$. To show this, let $\theta_i^j$ denote the projection of $\theta_i$ onto $u_j$, so $\theta_i = \sum_{j=1}^k \theta_i^j$ since $\{u_{i,1},\ldots,u_{i,k}\}$ is an orthonormal basis. Let $\ell_{i, j}^* = \min\{\ell : \theta_i^j\succeq z_{i, j}^{\ell}\}$. Then, $z_{i, j}^{\ell_{i, j}^*}\succeq\frac{1}{2}\theta_i^j$, so $$\ttheta(\ell_{i, 1}^*,\ldots,\ell_{i, k}^*) = \sum_{j=1}^k z_{i, j}^{\ell_{i, j}^*}\succeq\sum_{j=1}^k\frac{1}{2}\theta_i^j = \frac{1}{2}\theta_i.$$ We now bound the expected payment of agent $i$ as in the previous results. Let $\ell_{i}^* = \min_j\ell_{i, j}^*$. We have
\begin{align*}
\E[p_i] &\ge\E\big[p_i\mid (\ell_{i, 1},\ldots,\ell_{i, k}) = (\ell_{i, 1}^*,\ldots,\ell_{i,k}^*)\big]\cdot\Pr\big((\ell_{i, 1},\ldots,\ell_{i, k}) = (\ell_{i, 1}^*,\ldots,\ell_{i,k}^*)\big) \\
&= \frac{1}{|W_{\ell_i^*}|\log_2 H}\cdot \E\big[p_i\mid (\ell_{i, 1},\ldots,\ell_{i, k}) = (\ell_{i, 1}^*,\ldots,\ell_{i,k}^*)\big] \\
&\ge \frac{1}{\log_2 H((\log_2 H)^{k} - (\log_2 H - 1)^{k})}\cdot\E\big[p_i\mid (\ell_{i, 1},\ldots,\ell_{i, k}) = (\ell_{i, 1}^*,\ldots,\ell_{i,k}^*)\big] \\
&\ge \frac{1}{k(\log_2 H)^{k}}\cdot\E\big[p_i\mid (\ell_{i, 1},\ldots,\ell_{i, k}) = (\ell_{i, 1}^*,\ldots,\ell_{i,k}^*)\big]
\end{align*} since the probability of obtaining the correct weakest competitor $\ttheta_i(\ell_{i,1}^*,\ldots,\ell_{i,k}^*)$ can be written as the probability of drawing the correct ``level'' $\ell_{i}^*\in\{1,\ldots,\log_2 H\}$ times the probability of drawing the correct weakest competitor within the correct level $W_{\ell_i^*}$; the second inequality above uses $|W_{\ell_i^*}|\le (\log_2 H)^{k} - (\log_2 H - 1)^{k}$, and the third uses $(\log_2 H)^{k} - (\log_2 H - 1)^{k}\le k(\log_2 H)^{k-1}$ (by the mean value theorem). We bound the conditional expectation as in the proof of Theorem~\ref{theorem:line}. By Lemma~\ref{lemma:payment_lb}, $$\E\big[p_i\mid (\ell_{i, 1},\ldots,\ell_{i, k}) = (\ell_{i, 1}^*,\ldots,\ell_{i,k}^*)\big] \ge\ttheta_i(\ell_{i, 1}^*,\ldots,\ell_{i, k}^*)[\alpha_{\mathsf{opt}}] \ge \frac{1}{2}\cdot\theta_i[\alpha_{\mathsf{opt}}].$$ Finally, $$\E[\text{revenue}] = \sum_{i=1}^n\E[p_i] \ge \frac{1}{2k(\log_2 H)^{k}}\cdot\sum_{i=1}^n\theta_i[\alpha_{\mathsf{opt}}]=\frac{1}{2k(\log_2 H)^{k}}\cdot\mathsf{OPT},$$ as desired.
\end{proof}
As mentioned in Section~\ref{section:introduction}, the mechanisms in this section can be viewed as a generalization of the well-known $\log H$ revenue approximation in the single-item limited-supply setting that is achieved by a second-price auction with a random reserve price chosen uniformly from $\{H/2, H/4, H/8,\ldots, 1\}$~\citep{goldberg2001competitive}. Our results apply not only to combinatorial auctions but to general multidimensional mechanism design problems such as the examples presented in Section~\ref{subsection:examples}.
\section{Weakest-competitor sets and our meta-mechanism}
\label{section:meta_mechanism}
In this section we present our meta-mechanism for multidimensional mechanism design with side information. Our meta-mechanism generalizes the weakest-competitor VCG mechanism. We begin by introducing some new constructions based on the concept of a weakest competitor. These constructions are the key ingredients in understanding the role of side information in our meta-mechanism. Let $\theta\preceq\theta'$ if $\theta[\alpha]\le\theta'[\alpha]$ for all $\alpha\in\Gamma$. Let $\theta\preccurlyeq\theta'$ if $\theta[\alpha]\le\theta'[\alpha]$ for all $\alpha\in\Gamma$ and there exists $\alpha'\in\Gamma$ with $\theta[\alpha'] < \theta'[\alpha']$. Let $\theta\prec\theta'$ if $\theta[\alpha] < \theta'[\alpha]$ for all $\alpha\in\Gamma$. We assume $\Theta_i = \Theta=[0, H]^{\Gamma}$ for all $i$, that is, all agents share a common ambient type space with no up-front differentiating information.
\begin{definition} The \emph{extended weakest-competitor set} of a closed set $\tTheta_i$, denoted by $\overline{\mathsf{WC}}(\tTheta_i)$, is the subset of all weakest competitors in $\tTheta_i$ over all possible type profiles of the other agents. Formally, $$\overline{\mathsf{WC}}(\tTheta_i) := \Bigg\{\argmin_{\ttheta_i\in\tTheta_i}\Bigg(\max_{\alpha\in\Gamma}\sum_{j\neq i} \theta_j[\alpha] + \ttheta_i[\alpha]\Bigg) : \boldsymbol{\theta}_{-i}\in\Theta_{-i}\Bigg\}\subseteq\mathrm{bd}(\tTheta_i).$$ The \emph{weakest-competitor set} of $\tTheta_i$, denoted by $\mathsf{WC}(\tTheta_i)$, is the subset of $\overline{\mathsf{WC}}(\tTheta_i)$ where ties in the argmin are broken by discarding any $\theta'$ in the argmin such that there exists $\theta$ also in the argmin with $\theta\preccurlyeq\theta'$. We call members of both $\overline{\mathsf{WC}}(\tTheta_i)$ and $\mathsf{WC}(\tTheta_i)$ \emph{weakest competitors} and say $\htheta_i$ is a \emph{weakest competitor relative to} $\btheta_{-i}$ if $\htheta_i\in\argmin_{\ttheta_i\in\tTheta_i}\max_{\alpha\in\Gamma}\sum_{j\neq i}\theta_j[\alpha]+\ttheta_i[\alpha]$.
\end{definition}
The weakest-competitor set is a natural notion of a ``lower bound'' set of types corresponding to a given predicted type set. From the perspective of the weakest-competitor VCG mechanism, the payment of an agent with true type in $\tTheta_i$ only depends on $\mathsf{WC}(\tTheta_i)$ and not on $\tTheta_i$. Motivated by this observation, we define the weakest-competitor hull, which can be viewed as a ``weakest-competitor relaxation'' of a given set $\tTheta_i$.
\begin{definition} The \emph{weakest-competitor hull} of $\tTheta_i$, denoted by $\mathsf{WCH}(\tTheta_i)$, is the maximal set $S$ such that $\mathsf{WC}(S) = \mathsf{WC}(\tTheta_i)$ (no $T\supset S$ satisfies $\mathsf{WC}(T) = \mathsf{WC}(\tTheta_i)$). Equivalently, $$\mathsf{WCH}(\tTheta_i) = \bigcup_{\widehat{\Theta}_i : \mathsf{WC}(\widehat{\Theta}_i) = \mathsf{WC}(\tTheta_i)}\widehat{\Theta}_i.$$
\end{definition}
We now give simpler characterizations of weakest-competitor sets and weakest-competitor hulls without explicit reference to the mechanics of the weakest-competitor VCG mechanism.
\begin{theorem}\label{thm:wch_characterization}
Let $\Theta = [0, H]^{\Gamma}$ and let $\tTheta\subseteq\Theta$ be closed and connected. Then $$\overline{\mathsf{WC}}(\tTheta) = \left\{\theta\in\tTheta : \left\{\theta'\in\tTheta : \theta'\prec\theta\right\} = \emptyset\right\}, \mathsf{WC}(\tTheta) = \left\{\theta\in\tTheta:\left\{\theta'\in\tTheta:\theta'\preccurlyeq\theta\right\}=\emptyset\right\},$$ and $$\mathsf{WCH}(\tTheta) = \left\{\theta\in\Theta : \exists\theta'\in\tTheta \text{ s.t. } \theta\succeq\theta'\right\}$$ is the upwards closure of $\tTheta$.
\end{theorem}
\begin{proof}
Let $i$ denote the index of the agent under consideration with type space $\tTheta$. Let $\theta\in\tTheta$ be a point such that there exists $\theta'\in\tTheta$ with $\theta'\prec\theta$. Then, $$\max_{\alpha\in\Gamma}\sum_{j\neq i}\theta_j[\alpha] + \theta'[\alpha]<\max_{\alpha\in\Gamma}\sum_{j\neq i}\theta_j[\alpha] + \theta[\alpha]$$ for all $\btheta_{-i}\in\Theta_{-i}$. So $\theta\notin \overline{\mathsf{WC}}(\tTheta)$, which shows that $\overline{\mathsf{WC}}(\tTheta)\subseteq\{\theta\in\tTheta : \{\theta'\in\tTheta : \theta'\prec\theta\} = \emptyset\}$. To show the reverse containment, let $\theta\in\tTheta$ be such that $\{\theta'\in\tTheta: \theta'\prec\theta\}=\emptyset$. Consider any $\btheta_{-i} = (\theta_1,\ldots,\theta_{i-1},\theta_{i+1},\ldots,\theta_n)$ such that $$\sum_{j\neq i}\theta_j[\alpha_1] + \theta[\alpha_1] = \sum_{j\neq i}\theta_j[\alpha_2] + \theta[\alpha_2] = \cdots = \sum_{j\neq i}\theta_j[\alpha_{|\Gamma|}] + \theta[\alpha_{|\Gamma|}].$$ The existence of such a $\btheta_{-i}$ can be shown explicitly as follows.
Let $j\neq i$ be arbitrary. For all $k\notin\{i,j\}$ set $\theta_k = (0,\ldots,0)$. Without loss of generality relabel the allocations in $\Gamma$ such that $\theta[\alpha_1]\ge\theta[\alpha_2]\ge\cdots\ge\theta[\alpha_{|\Gamma|}]$. Then, set $$\theta_j = \left(0, \theta[\alpha_1] - \theta[\alpha_2],\ldots, \theta[\alpha_1] - \theta[\alpha_{|\Gamma|}]\right)\in [0, H]^{\Gamma}.$$ Then, $\theta$ minimizes $$\max\left\{\sum_{j\neq i}\theta_j[\alpha_1] + \theta[\alpha_1] , \sum_{j\neq i}\theta_j[\alpha_2] + \theta[\alpha_2] , \ldots , \sum_{j\neq i}\theta_j[\alpha_{|\Gamma|}] + \theta[\alpha_{|\Gamma|}]\right\}$$ since any $\theta'$ that attains a strictly smaller value must satisfy $\theta'\prec\theta$ (and no such $\theta'$ exists, by assumption). So $\theta\in\overline{\mathsf{WC}}(\tTheta)$, which proves the reverse containment. The characterizations of $\mathsf{WC}$ and $\mathsf{WCH}$ follow immediately.
\end{proof}
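Two quick illustrations: first, with $|\Gamma| = 3$, $H\ge 5$, and $\theta = (5, 3, 2)$ (already sorted so that $\theta[\alpha_1]\ge\theta[\alpha_2]\ge\theta[\alpha_3]$), the construction in the proof sets $\theta_j = (0, 2, 3)$, and indeed all three sums $\theta_j[\alpha_\ell] + \theta[\alpha_\ell]$ equal $5$. Second, with $|\Gamma| = 2$ and $H\ge 2$, take $\tTheta = \{\theta\in[0,H]^{\Gamma} : \theta[\alpha_1]+\theta[\alpha_2] = 2\}$, the segment joining $(0,2)$ and $(2,0)$: no point of $\tTheta$ is coordinate-wise dominated by another, so $\overline{\mathsf{WC}}(\tTheta) = \mathsf{WC}(\tTheta) = \tTheta$, and by Theorem~\ref{thm:wch_characterization} the hull $\mathsf{WCH}(\tTheta) = \{\theta\in[0,H]^{\Gamma} : \theta[\alpha_1]+\theta[\alpha_2]\ge 2\}$ is its upwards closure.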
\begin{figure}[t]
\centering
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=.75\linewidth]{ex_1.png}
\label{fig:ex1}
\end{subfigure}%
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=0.75\linewidth]{ex_2.png}
\label{fig:ex2}
\end{subfigure}
\caption{Example weakest-competitor hulls in a two-dimensional type space ($|\Gamma| = 2$). $\overline{\mathsf{WC}}(\tTheta)$ is depicted in solid and dashed blue, $\mathsf{WC}(\tTheta)$ is depicted in solid blue, and $\mathsf{WCH}(\tTheta)$ is the region enclosed by $\overline{\mathsf{WC}}(\tTheta)$.}
\label{fig:wch}
\end{figure}
For example, in the single-dimensional case we have $\overline{\mathsf{WC}}([\underline{\theta},\overline{\theta}]) = \mathsf{WC}([\underline{\theta}, \overline{\theta}])=\{\underline{\theta}\}$, and $\mathsf{WCH}([\underline{\theta}, \overline{\theta}]) = [\underline{\theta}, H]$. Figure~\ref{fig:wch} displays example weakest-competitor sets and weakest-competitor hulls in a two-dimensional type space. We list a few additional properties of $\mathsf{WC}$ and $\mathsf{WCH}$ that are all immediate consequences of Theorem~\ref{thm:wch_characterization}.
\begin{prop}\label{prop:wc_properties}
Let $\tTheta\subseteq\Theta=[0, H]^{\Gamma}$ be closed and connected. Then the following properties hold:
\begin{enumerate}
\item $\mathsf{WC}(\mathsf{WCH}(\tTheta)) = \mathsf{WC}(\tTheta)$.
\item $\mathsf{WCH}(\mathsf{WC}(\tTheta)) =\mathsf{WCH}(\tTheta)$.
\item Idempotence: $\mathsf{WC}(\mathsf{WC}(\tTheta)) = \mathsf{WC}(\tTheta)$ and $\mathsf{WCH}(\mathsf{WCH}(\tTheta)) = \mathsf{WCH}(\tTheta)$.
\end{enumerate}
\end{prop}
We now present our meta-mechanism, which we denote by $\cM$. The intuition behind $\cM$ is that it still uses the efficient allocation, but that allocation is enjoyed only by the subset of agents able to compete with the weakest competitors in the side information set. $\cM$ then implements the weakest-competitor payments on those agents. If exact efficiency is a hard constraint then side information cannot be used in a meaningful way since weakest-competitor VCG is revenue optimal (Theorem~\ref{theorem:rev_optimal}). Our mechanism $\cM$ is a generalization of weakest-competitor VCG where efficiency is dependent on the veracity of the input side information. The input subsets $\tTheta_1,\ldots,\tTheta_n$ represent the side information/predictions given to the mechanism designer that postulate that $\theta_i\in\tTheta_i$.
\begin{tcolorbox}[colback=white, sharp corners]
\underline{Meta-mechanism $\cM$}\\
Input: subsets $\tTheta_1,\ldots,\tTheta_n\subseteq\Theta$ given to mechanism designer.
\begin{itemize}
\item Based on $\tTheta_1,\ldots,\tTheta_n$, come up with $\widehat{\Theta}_1,\ldots,\widehat{\Theta}_n$.
\end{itemize}
Agents asked to reveal types $\theta_1,\ldots,\theta_n$.
\begin{itemize}
\item Let $$\alpha^* = \argmax_{\alpha\in\Gamma}\sum_{i=1}^n\theta_i[\alpha]$$ and for each $i$ let $$p_i = \min_{\ttheta_i\in\mathsf{WC}(\widehat{\Theta}_i)}\Bigg(\max_{\alpha\in\Gamma}\sum_{j\neq i}\theta_j[\alpha] + \ttheta_i[\alpha]\Bigg) - \sum_{j\neq i}\theta_j[\alpha^*].$$
\item Let $$\cI = \left\{i : \theta_i[\alpha^*] - p_i\ge 0\right\}.$$
\item If agent $i\notin\cI$, $i$ does not participate and receives zero utility (zero value and zero payment). If agent $i\in\cI$, $i$ enjoys allocation $\alpha^*$ and pays $p_i$.
\end{itemize}
\end{tcolorbox}
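For concreteness, the following is a minimal sketch (ours, purely illustrative) of $\cM$ in the special case where $\Gamma$ is explicitly enumerated, types are given as vectors indexed by $\Gamma$, and each $\mathsf{WC}(\hTheta_i)$ has already been computed as a finite list of candidate weakest competitors; all function and variable names are our own.
\begin{verbatim}
import numpy as np

def meta_mechanism(theta, wc_sets):
    # theta: (n, |Gamma|) array of reported types.
    # wc_sets[i]: (k_i, |Gamma|) array of candidate weakest competitors.
    n, m = theta.shape
    totals = theta.sum(axis=0)
    alpha_star = int(np.argmax(totals))          # efficient allocation alpha^*
    payments, participants = np.zeros(n), []
    for i in range(n):
        S = totals - theta[i]                    # sum_{j != i} theta_j
        pivot = min(np.max(S + wc) for wc in wc_sets[i])
        payments[i] = pivot - S[alpha_star]      # weakest-competitor payment p_i
        if theta[i, alpha_star] - payments[i] >= 0:
            participants.append(i)               # the participation set I
    return alpha_star, payments, participants
\end{verbatim}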
While $\cM$ selects the efficient allocation among all agents, it does not necessarily achieve the efficient welfare. Its welfare is $\sum_{i\in\cI}\theta_i[\alpha^*]$ and its revenue is $\sum_{i\in \cI}p_i$. Furthermore, the meta-mechanism $\cM$ does not specify how to come up with $\hTheta_1,\ldots,\hTheta_n$ from $\tTheta_1,\ldots,\tTheta_n$ (hence the ``meta'' label). This challenge is the subject of the later sections where we will describe, based on the setting, how to select the $\hTheta_i$ in order to generate high welfare and high revenue. $\cM$ represents a large class of mechanisms that can differ in how the $\hTheta_i$ are ultimately selected by the mechanism designer. We now establish the incentive properties of $\cM$.
\begin{theorem}\label{theorem:wch_ic}
$\cM$ is incentive compatible and individually rational.
\end{theorem}
\begin{proof}
$\cM$ is incentive compatible for the exact same reason weakest-competitor VCG is incentive compatible (Theorem~\ref{theorem:rev_optimal}). Individual rationality is an immediate consequence of how $\cM$ is defined: all agents with potential individual-rationality violations (those not in $\cI$) do not participate and receive zero utility.
\end{proof}
Next we show that the weakest-competitor hull precisely captures the set of agent types that never violate individual rationality. This is not considered in the weakest-competitor VCG mechanism since in that setting misreporting is limited to the set used in the weakest-competitor minimization, and hence individual rationality is never violated. In our setting, we make no assumptions on the veracity of the sets $\hTheta_i$ and must therefore reckon with the possibility that an agent is unable to compete with the weakest competitors in $\mathsf{WC}(\hTheta_i)$.
\begin{theorem}\label{theorem:wch_ir}
Let $\theta_i$ denote the true type of agent $i$ and $\btheta_{-i}$ the reported types of the other agents. Let $\hTheta_1,\ldots,\hTheta_n$ denote the side information sets used by $\cM$. Then $$i\in\cI\text{ for all } \btheta_{-i}\iff \theta_i\in\mathsf{WCH}(\hTheta_i).$$
\end{theorem}
\begin{proof} Let $\theta_i$ denote the true type of agent $i$ and let $\btheta_{-i} = (\theta_1,\ldots,\theta_{i-1},\theta_{i+1},\ldots,\theta_n)$ denote the reported types of the other agents. Suppose $\theta_i\notin\mathsf{WCH}(\hTheta_i)$. Then, there exists $\ttheta_i\in\overline{\mathsf{WC}}(\hTheta_i)$ such that $\ttheta_i\succ\theta_i$, and there exists $\btheta_{-i}$ such that $\ttheta_i\in\argmin_{\htheta_i\in\hTheta_i}\max_{\alpha\in\Gamma}\sum_{j\neq i}\theta_j[\alpha] + \htheta_i[\alpha]$ is a weakest competitor relative to $\btheta_{-i}$ (the existence of $\btheta_{-i}$ follows from the same reasoning as in the proof of Theorem~\ref{thm:wch_characterization}). As $\ttheta_i\succ\theta_i$, agent $i$'s overall utility will be negative. The utility is unchanged and remains negative if $\ttheta_i$ is replaced by $\theta_i^*\in\mathsf{WC}(\hTheta_i)$ that is also a weakest competitor relative to $\btheta_{-i}$. So we have shown there exists $\btheta_{-i}$ such that $i\notin\cI$.
Conversely suppose $\theta_i\in\mathsf{WCH}(\hTheta_i)$. Then, there exists $\theta_i'\in\mathsf{WC}(\hTheta_i)$ such that $\theta_i\succeq\theta_i'$. Let $\btheta_{-i}$ be arbitrary. Agent $i$'s utility is $\sum_{j=1}^n \theta_j[\alpha^*] - \min_{\ttheta_i\in\mathsf{WC}(\hTheta_{i})}(\max_{\alpha}\sum_{j\neq i} \theta_j[\alpha] + \ttheta_i[\alpha]) \ge \sum_{j=1}^n \theta_j[\alpha^*] - (\max_{\alpha}\sum_{j\neq i} \theta_j[\alpha] + \theta_i'[\alpha])\ge \sum_{j=1}^n \theta_j[\alpha^*] - (\max_{\alpha}\sum_{j\neq i} \theta_j[\alpha] + \theta_i[\alpha]) = 0$, so $i\in\cI$, as desired.
\end{proof}
The key takeaway from Theorem~\ref{theorem:wch_ir} is that $i$ is guaranteed to participate in $\cM$ regardless of the other agents' types if and only if $\theta_i\in\mathsf{WCH}(\hTheta_i)$. We will capitalize on this observation when we derive revenue guarantees for $\cM$, since if $\hTheta_i$ is derived from a high quality prediction/side information set $\tTheta_i$, one would expect that $\theta_i\in\mathsf{WCH}(\hTheta_i)$. In particular, the welfare of $\cM$ is at least $\sum_{i : \theta_i\in\mathsf{WCH}(\hTheta_i)}\theta_i[\alpha^*]$ and its revenue is at least $\sum_{i : \theta_i\in\mathsf{WCH}(\hTheta_i)} p_i$.
\subsection{Computational considerations}
Before we proceed to our main analyses of the key properties and guarantees of $\cM$, we briefly discuss its computational complexity. We consider the special case where the side-information sets are polytopes. Let $size(\hTheta_i)$ denote the encoding size of the constraints defining $\hTheta_i$.
\begin{theorem} Let $\hTheta_i$ be a polytope. Agent $i$'s payment $p_i$ in the execution of $\cM$ can be computed in $poly(|\Gamma|, size(\hTheta_i), n)$ time. Furthermore, determining membership in $\mathsf{WCH}(\hTheta_i)$ can be done in $poly(|\Gamma|, size(\hTheta_i))$ time.
\end{theorem}
\begin{proof}
The weakest competitor in $\hTheta_i$ relative to $\btheta_{-i}$ is the solution $\ttheta_i\in\R^{\Gamma}$ to the linear program
\begin{equation*}\min
\left\{
\gamma \;:\;
\begin{aligned}
& \ttheta_i[\alpha] + \textstyle\sum_{j\neq i}\theta_j[\alpha]\le\gamma \;\;\forall\alpha\in\Gamma,\\
& \ttheta_i\in\hTheta_i, \gamma\ge 0
\end{aligned}
\right\}
\end{equation*} with $|\Gamma|+1$ variables and $|\Gamma| + size(\hTheta_i)$ constraints. Generating the first set of constraints requires computing $\sum_{j\neq i}\theta_j[\alpha]$ for each $\alpha\in\Gamma$, which takes time $\le n|\Gamma|$.
Checking membership of $\theta_i$ in $\mathsf{WCH}(\hTheta_i)$ is equivalent to checking feasibility of a polytope $$\theta_i\in\mathsf{WCH}(\hTheta_i)\iff \left\{\ttheta_i: \ttheta_i\in\hTheta_i, \theta_i[\alpha]\ge\ttheta_i[\alpha]\;\forall\alpha\in\Gamma\right\}\neq\emptyset$$ defined by $size(\hTheta_i) + |\Gamma|$ constraints.
\end{proof}
More generally, the complexity of the above two mathematical programs is determined by the complexity of constraints needed to define $\tTheta_i$: for example, if $\tTheta_i$ is a convex set then they are convex programs. Naturally, a major caveat of this brief discussion on computational complexity is that $|\Gamma|$ can be very large (for example, $|\Gamma|$ is exponential in combinatorial auctions).
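To illustrate the linear program above, here is a minimal sketch (ours, purely illustrative) that computes agent $i$'s payment with SciPy's LP solver in the simplest case where $\hTheta_i$ is an axis-aligned box $[lo, hi]^{\Gamma}$; a general polytope would contribute its own rows to the constraint matrix.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def wc_payment(theta, i, lo, hi):
    # theta: (n, |Gamma|) array of reported types.
    n, m = theta.shape
    S = theta.sum(axis=0) - theta[i]            # sum_{j != i} theta_j
    alpha_star = int(np.argmax(theta.sum(axis=0)))
    # Variables x = (tilde_theta_i, gamma); minimize gamma subject to
    # tilde_theta_i[alpha] + S[alpha] <= gamma for every alpha in Gamma.
    c = np.concatenate([np.zeros(m), [1.0]])
    A_ub = np.hstack([np.eye(m), -np.ones((m, 1))])
    b_ub = -S
    bounds = [(lo, hi)] * m + [(0, None)]       # box constraints, gamma >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun - S[alpha_star]              # pivot term minus others' welfare
\end{verbatim}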
|
{
"arxiv_id": "2302.14169",
"language": "en",
"timestamp": "2023-03-01T02:03:30",
"url": "https://arxiv.org/abs/2302.14169",
"yymm": "2302"
} | \section{Introduction}
Building and evaluating data-to-text (D2T) generation systems \cite{gatt2018survey,sharma2022innovations} requires understanding the data and observing system behavior. It is, however, not trivial to interact with the large volume of D2T generation datasets that have emerged in recent years (see Table~\ref{tab:datasets}).
Although research on D2T generation benefits from platforms providing unified interfaces, such as HuggingFace Datasets \cite{lhoest2021datasets} or the GEM benchmark \cite{gehrmann2021gem}, these platforms still leave the majority of the data processing load on the user.
A key component missing from current D2T tools is the possibility to visualize the input data and generated outputs. Visualization plays an important role in examining and evaluating scientific data \cite{Kehrer2013VisualizationAV} and can help D2T generation researchers to make more informed design choices. A suitable interface can also encourage researchers to step away from unreliable automatic metrics \cite{gehrmann2022repairing} and focus on manual error analysis \cite{van_miltenburg_underreporting_2021,van_miltenburg_barriers_2023}.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{img/teaser.pdf}
\caption{\textsc{TabGenie} provides a way to handle various data-to-text generation datasets through a unified web and programming interface. The \textit{web interface} enables interactive exploration and analysis of datasets and model outputs, while the \textit{programming interface} provides unified data loaders and structures.}\label{fig:teaser}
\end{figure}
Along with that, demands for a \textit{unified input data format} have recently been raised in connection with multi-task training of large language models (LLMs) \citep[\textit{inter alia}]{Sanh2021MultitaskPT,scao2022bloom,Ouyang2022TrainingLM}. Some works have used simple data linearization techniques for converting structured data to a textual format, in order to align it with the format used for other tasks \cite{UnifiedSKG,tang2022mvp}. However, these linearizations rely on custom preprocessing code, leading to discrepancies between individual works.
In this paper, we present \textsc{TabGenie} -- a multi-purpose toolkit for interacting with D2T generation datasets and systems designed to fill these gaps. On a high level, the toolkit consists of (a) an interactive web interface, (b) a set of command-line processing tools, and (c) a set of Python bindings (see Figure~\ref{fig:teaser}).
\begin{table*}[t]
\centering\small
\begin{tabular}{@{}lllrrrl@{}}
\toprule
\multirow{2}{*}{\textbf{Dataset}} & \multirow{2}{*}{\textbf{Source}} & \multirow{2}{*}{\textbf{Data Type}} & \multicolumn{3}{c}{\textbf{Number of examples}} & \multirow{2}{*}{\textbf{License}} \\ \cmidrule(lr){4-6}
& & & \textbf{train} & \textbf{dev} & \textbf{test} & \\ \midrule
CACAPO & \citet{van2020cacapo} & Key-value & 15,290 & 1,831 & 3,028 & CC BY \\
DART$^\dagger$ & \citet{nan2021dart} & Graph & 62,659 & 2,768 & 5,097 & MIT \\
E2E$^\dagger$ & \citet{duvsek2019semantic} & Key-value & 33,525 & 1,484 & 1,847 & CC BY-SA \\
EventNarrative & \citet{colas2021eventnarrative} & Graph & 179,544 & 22,442 & 22,442 & CC BY \\
HiTab & \citet{cheng2021hitab} & Table w/hl & 7,417 & 1,671 & 1,584 & C-UDA \\
Chart-To-Text & \citet{kantharaj2022chart} & Chart & 24,368 & 5,221 & 5,222 & GNU GPL \\
Logic2Text & \citet{chen2020logic2text} & Table w/hl + Logic & 8,566 & 1,095 & 1,092 & MIT \\
LogicNLG & \citet{chen2020logical} & Table & 28,450 & 4,260 & 4,305 & MIT \\
NumericNLG & \citet{suadaa-etal-2021-towards} & Table & 1,084 & 136 & 135 & CC BY-SA \\
SciGen & \citet{Moosavi2021LearningTR} & Table & 13,607 & 3,452 & 492 & CC BY-NC-SA \\
SportSett:Basketball$^\dagger$ & \citet{thomson-etal-2020-sportsett} & Table & 3,690 & 1,230 & 1,230 & MIT \\
ToTTo$^\dagger$ & \citet{parikh2020totto} & Table w/hl & 121,153 & 7,700 & 7,700 & CC BY-SA \\
WebNLG$^\dagger$ & \citet{ferreira20202020} & Graph & 35,425 & 1,666 & 1,778 & CC BY-NC \\
WikiBio$^\dagger$ & \citet{lebret2016neural} & Key-value & 582,659 & 72,831 & 72,831 & CC BY-SA \\
WikiSQL$^\dagger$ & \citet{zhongSeq2SQL2017} & Table + SQL & 56,355 & 8,421 & 15,878 & BSD \\
WikiTableText & \citet{bao2018table} & Key-value & 10,000 & 1,318 & 2,000 & CC BY \\
\bottomrule
\end{tabular}
\caption{The list of datasets included in \textsc{TabGenie}. Glossary of data types: \textit{Key-value}: key-value pairs, \textit{Graph}: subject-predicate-object triples, \textit{Table}: tabular data (\textit{w/hl}: with highlighted cells), \textit{Chart}: chart data, \textit{Logic / SQL}: strings with logical expressions / SQL queries. The datasets marked with $\dagger$ were already present on Huggingface Datasets. We uploaded the rest of the datasets to our namespace: \texttt{\url{https://huggingface.co/kasnerz}}.}
\label{tab:datasets}
\end{table*}
The cornerstone of \textsc{TabGenie} is a \textbf{unified data representation}. Each input is represented as a matrix of $m$ columns and $n$ rows consisting of individual cells, accompanied by metadata (see §\ref{sec:data}). Building upon this representation, \textsc{TabGenie} then provides multiple features for unified workflows with table-to-text datasets, including:
\begin{enumerate}
\item visualizing individual dataset examples in the tabular format (§\ref{sec:exploration}),
\item interacting with table-to-text generation systems in real-time (§\ref{sec:interactive}),
\item comparing generated system outputs (§\ref{sec:interactive}),
\item loading and preprocessing data for downstream tasks (§\ref{sec:python}),
\item exporting examples and generating spreadsheets for manual error analysis (§\ref{sec:cli}).
\end{enumerate}
In §\ref{sec:casestudies}, we present examples of practical use-cases of \textsc{TabGenie} in D2T generation research.
\section{Data}
\label{sec:data}
\textsc{TabGenie} currently includes the 16 datasets listed in Table~\ref{tab:datasets}, covering many subtasks of D2T generation. All of the datasets are available under a permissive open-source license.
\begin{figure*}[t]
\centering
\setlength{\fboxsep}{0pt}\fcolorbox{gray!20}{white}{\includegraphics[width=1.0\textwidth]{img/web.png}}
\caption{The web interface of \textsc{TabGenie}. The \textbf{left panel} and the \textbf{navigation bar} contain user controls; the \textbf{center panel} shows table properties and table content; the \textbf{right panel} contains system outputs.}
\label{fig:web}
\end{figure*}
\subsection{Data Format}
The inputs in D2T generation datasets do not consist only of tables; they may also include, e.g., graphs or key-value pairs. However, we noticed that in many cases, converting these formats to tables requires only minimal changes to the data structure while allowing a unified data representation and visualization. This conversion narrows down the task of D2T generation to the task of generating a description for tabular data, i.e., table-to-text generation \cite{parikh2020totto, liu2022plog, gong2020tablegpt}.
In our definition, a \textit{table} is a two-dimensional matrix with $m$ columns and $n$ rows, which together define a grid of $m \times n$ cells. Each cell contains a (possibly empty) text string. A continuous sequence of cells $\{c_{i}, \ldots, c_{i+k}\}$ from the same row or column may be merged, in which case the values of $\{c_{i+1},\ldots,c_{i+k}\}$ are linked to the value of $c_{i}$. A cell may be optionally marked as a \textit{heading}, which is represented as an additional property of the cell.\footnote{The headings are typically located in the first row or column, but may also span multiple rows or columns and may not be adjacent.} To better accommodate the format of datasets such as ToTTo \cite{parikh2020totto} or HiTab \cite{cheng2021hitab}, we also allow individual cells to be \textit{highlighted}. Highlighted cells are assumed to be preselected for generating the output description.
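The following minimal Python sketch illustrates this representation; the class and attribute names are illustrative only and do not necessarily correspond to \textsc{TabGenie}'s internal implementation.
\begin{codebox}
\begin{minted}[fontsize=\small]{python}
from dataclasses import dataclass, field

@dataclass
class Cell:
    value: str = ""               # cell content (possibly empty)
    is_header: bool = False       # marks the cell as a heading
    is_highlighted: bool = False  # preselected for the output description
    rowspan: int = 1              # >1 when merged with cells below
    colspan: int = 1              # >1 when merged with cells to the right

@dataclass
class Table:
    cells: list                                # n rows of m Cell objects
    props: dict = field(default_factory=dict)  # e.g. {"title": "..."}
\end{minted}
\end{codebox}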
The tables may be accompanied by an additional set of properties (see Figure~\ref{fig:web}) -- an example of such a property is a \textit{``title''} of the table in WikiBio \cite{lebret2016neural} or a \textit{``category''} in WebNLG \cite{gardent2017webnlg}. We represent properties as key-value pairs alongside the table. The properties may be used for generating the table description.
\subsection{Data Transformation}
We aim to present the data as true to the original format as possible and only make some minor changes for datasets which do not immediately adhere to the tabular format:
\begin{itemize}
\item For graph-to-text datasets, we format each triple as a row, using three columns labeled \textit{subject}, \textit{predicate}, and \textit{object} (see the sketch after this list).
\item For key-value datasets, we use two columns with keys in the first column as row headings.
\item For SportSett:Basketball \cite{thomson-etal-2020-sportsett}, we merge the \textit{box score} and \textit{line score} tables and add appropriate headings where necessary.
\end{itemize}
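The following Python sketch illustrates the first two conversions; it operates on plain lists of strings and is meant as an illustration rather than the exact transformation code used in \textsc{TabGenie}.
\begin{codebox}
\begin{minted}[fontsize=\small]{python}
def triples_to_rows(triples):
    """Graph-to-text input: one row per (subject, predicate, object) triple."""
    header = ["subject", "predicate", "object"]
    return [header] + [list(t) for t in triples]

def key_values_to_rows(pairs):
    """Key-value input: two columns, with keys acting as row headings."""
    return [[key, value] for key, value in pairs]
\end{minted}
\end{codebox}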
\subsection{Data Loading}
To ease the data distribution, we load all the datasets using the Huggingface \texttt{datasets} package \cite{lhoest2021datasets}, which comes equipped with a data downloader. Out of the 16 datasets we are using, 7 were already available in Huggingface Datasets, either through the GEM benchmark \cite{gehrmann2021gem} or other sources. We publicly uploaded the 9 remaining datasets (see Table~\ref{tab:datasets}).
\textsc{TabGenie} also supports adding custom data loaders. Creating a data loader consists of simply subclassing the data loader class and overriding a single method for processing individual entries, allowing anyone to add their own custom dataset.
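Schematically, a custom data loader may look as follows; the class and method names below are placeholders rather than the actual \textsc{TabGenie} interface, which is documented in the project repository.
\begin{codebox}
\begin{minted}[fontsize=\small]{python}
# Schematic sketch only; in practice, subclass TabGenie's data loader class.
class MyDatasetLoader:
    def prepare_table(self, entry):
        """Convert one raw dataset entry into the unified table format."""
        rows = [[key, str(value)] for key, value in entry["data"].items()]
        return {"rows": rows, "props": {"title": entry.get("title", "")}}
\end{minted}
\end{codebox}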
\section{Web Interface}
\label{sec:web}
\textsc{TabGenie} offers a user-friendly way to interact with table-to-text generation datasets through the \textit{web interface}. The interface can be rendered using a local server (cf. §\ref{sec:cli}) and can be viewed in any modern web browser. The interface features a simple, single-page layout, which contains a navigation bar and three panels containing user controls, input data, and system outputs (see Figure \ref{fig:web}). Although the interface primarily aims at researchers, it can also be used by non-expert users.
\subsection{Content Exploration}
\label{sec:exploration}
The input data in \textsc{TabGenie} is rendered as HTML tables, providing better visualizations than existing data viewers, especially in the case of large and hierarchical tables.\footnote{Compare, e.g., with the ToTTo dataset in Huggingface Datasets for which the table is provided in a single field called \textit{``table''}: \url{https://huggingface.co/datasets/totto}} In the web interface, users can navigate through individual examples in the dataset sequentially, access an example using its index, or go to a random example. The users can add notes to examples and mark examples as favorites for accessing them later. The interface also shows the information about the dataset (such as its description, version, homepage, and license) and provides an option to export the individual examples (see §\ref{sec:cli}).
\subsection{Interactive Mode}
\label{sec:interactive}
\textsc{TabGenie} offers an \textit{interactive mode} for generating an output for a particular example on-the-fly. The user can highlight different cells, edit cell contents, and edit parameters of the downstream processor. For example, the user can prompt an LLM for table-to-text generation and observe how it behaves while changing the prompt.
The contents of a table are processed by a processing \textit{pipeline}. This pipeline takes table contents and properties as input, processes them with a sequence of modules, and outputs HTML code. The modules are custom Python programs which may be re-used across the pipelines.
\textsc{TabGenie} currently provides two basic pipelines: (1) calling a generative language model through an API with a custom prompt, and (2) generating graph visualizations of RDF triples. We describe the case-study for the model API pipeline in §\ref{sec:cs:prompting}. Users can easily add custom pipelines by following the instructions in the project repository.
\subsection{Pre-generated Outputs}
\label{sec:outputs}
In addition to interactive generation, \textsc{TabGenie} allows users to visualize static pre-generated outputs. These are loaded in the JSONL\footnote{\url{https://jsonlines.org}} format from the specified directory and displayed similarly to the outputs from the interactive mode. Multiple outputs can be displayed alongside a specific example, allowing users to compare the outputs of multiple systems.
\section{Developer Tools}
\label{sec:developer}
\textsc{TabGenie} also provides a developer-friendly interface: Python bindings (§\ref{sec:python}) and a command-line interface (§\ref{sec:cli}). Both of these interfaces aim to simplify dataset preprocessing in downstream tasks. The key benefit of using \textsc{TabGenie} is that it provides streamlined access to data in a consistent format, removing the need for dataset-specific code for extracting information such as table properties, references, or individual cell values.
\subsection{Python Bindings}
\label{sec:python}
\textsc{TabGenie} can be integrated in other Python codebases to replace custom preprocessing code. With a \textit{single unified interface} for all the datasets, the \textsc{TabGenie} wrapper class allows users to:
\begin{itemize}
\item load a dataset from the Huggingface Datasets or from a local folder,
\item access individual table cells and their properties,
\item linearize tables using pre-defined or custom functions,
\item prepare the Huggingface \texttt{Dataset} objects for downstream processing.
\end{itemize}
\textsc{TabGenie} can be installed as a Python package, making the integration simple and intuitive.
See §\ref{sec:cs:generation} for an example usage of the \textsc{TabGenie} Python interface.
\subsection{Command-line Tools}
\label{sec:cli}
\textsc{TabGenie} supports several basic commands via the command line.
\paragraph{Run} The \texttt{tabgenie run} command launches the local web server, mimicking the behavior of \texttt{flask run}. Example usage:
\begin{codebox}
\begin{minted}[fontsize=\small]{bash}
tabgenie run --port=8890 --host="0.0.0.0"
\end{minted}
\end{codebox}
\paragraph{Export} The \texttt{tabgenie export} command enables batch exporting of the dataset. The supported formats are \texttt{xlsx}, \texttt{html}, \texttt{json}, \texttt{txt}, and \texttt{csv}. Except for \texttt{csv}, table properties can be exported along with the table content. Example usage:
\begin{codebox}
\begin{minted}[fontsize=\small]{bash}
tabgenie export --dataset "webnlg" \
--split "dev" \
--out_dir "export/datasets/webnlg" \
--export_format "xlsx"
\end{minted}
\end{codebox}
\noindent Export can also be done in the web interface.
\paragraph{Spreadsheet} For error analysis, it is common to select $N$ random examples from the dataset along with the system outputs and manually annotate them with error categories (see~§\ref{sec:cs:analysis}). The \texttt{tabgenie sheet} command generates a suitable spreadsheet for this procedure. Example usage:
\begin{codebox}
\begin{minted}[fontsize=\small]{bash}
tabgenie sheet --dataset "webnlg" \
--split "dev" \
--in_file "out-t5-base.jsonl" \
--out_file "analysis_webnlg.xlsx" \
--count 50
\end{minted}
\end{codebox}
\section{Implementation}
\label{sec:architecture}
\textsc{TabGenie} runs with Python >=3.8 and requires only a few basic packages as dependencies. It can be installed as a stand-alone Python module from PyPI (\texttt{pip install tabgenie}) or from the project repository.
\paragraph{Backend} The web server is based on \texttt{Flask},\footnote{\url{https://pypi.org/project/Flask/}} a popular lightweight Python-based web framework. The server runs locally and can be configured with a YAML\footnote{\url{https://yaml.org}} configuration file. On startup, the server loads the data using the \texttt{datasets}\footnote{\url{https://pypi.org/project/datasets/}} package. To render web pages, the server uses the \texttt{tinyhtml}\footnote{\url{https://pypi.org/project/tinyhtml/}} package and Jinja\footnote{\url{https://jinja.palletsprojects.com/}} templating language.
\paragraph{Frontend} The web frontend is built on HTML5, CSS, Bootstrap,\footnote{\url{https://getbootstrap.com/}} JavaScript, and jQuery.\footnote{\url{https://jquery.com}} We additionally use the D3.js\footnote{\url{https://d3js.org}} library for visualizing the structure of data in graph-to-text datasets. To keep the project simple, we do not use any other major external libraries.
\section{Case Studies}
\label{sec:casestudies}
In this section, we outline several recipes for using \textsc{TabGenie} in D2T generation research. The instructions and code samples for these tasks are available in the project repository.
\subsection{Table-To-Text Generation}
\label{sec:cs:generation}
\paragraph{Application} Finetuning a sequence-to-sequence language model for table-to-text generation in PyTorch \cite{paszke2019pytorch} using the Huggingface Transformers \cite{wolf2019huggingface} framework.
\paragraph{Process} In a typical finetuning procedure using these frameworks, the user needs to prepare a \texttt{Dataset} object with tokenized input and output sequences. Using \textsc{TabGenie}, preprocessing a specific dataset is simplified to the following:
\begin{codebox}
\begin{minted}[fontsize=\fontsize{8pt}{8pt}]{python}
from transformers import AutoTokenizer
import tabgenie as tg
# instantiate a tokenizer
tokenizer = AutoTokenizer.from_pretrained(...)
# load the dataset
tg_dataset = tg.load_dataset(
dataset_name="totto"
)
# preprocess the dataset
hf_dataset = tg_dataset.get_hf_dataset(
split="train",
tokenizer=tokenizer
)
\end{minted}
\end{codebox}
The function \texttt{get\_hf\_dataset()} linearizes the tables (the users may optionally provide their custom linearization function) and tokenizes the inputs and references.
For training a single model on multiple datasets in the multi-task learning setting \cite{UnifiedSKG}, the user may preprocess each dataset individually, prepending a dataset-specific task description to each example. The datasets may then be combined using the methods provided by the \texttt{datasets} package.
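As an illustration, a simple custom linearization function and the combination of two preprocessed datasets may look as follows; the keyword argument names (e.g. \texttt{linearize\_fn}) are assumptions and may differ from the actual \textsc{TabGenie} API.
\begin{codebox}
\begin{minted}[fontsize=\small]{python}
from datasets import concatenate_datasets
from transformers import AutoTokenizer
import tabgenie as tg

tokenizer = AutoTokenizer.from_pretrained("t5-small")

def linearize_with_prefix(prefix):
    # builds a linearization function prepending a task-specific prefix
    def linearize(rows):
        cells = " | ".join(" : ".join(str(c) for c in row) for row in rows)
        return f"{prefix}: {cells}"
    return linearize

totto = tg.load_dataset(dataset_name="totto").get_hf_dataset(
    split="train", tokenizer=tokenizer,
    linearize_fn=linearize_with_prefix("describe highlighted cells"))
webnlg = tg.load_dataset(dataset_name="webnlg").get_hf_dataset(
    split="train", tokenizer=tokenizer,
    linearize_fn=linearize_with_prefix("verbalize triples"))
combined = concatenate_datasets([totto, webnlg]).shuffle(seed=42)
\end{minted}
\end{codebox}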
\paragraph{Demonstration} For running the baselines, we provide an example script, which can be applied to any \textsc{TabGenie} dataset and pre-trained sequence-to-sequence model from the \texttt{transformers} library. For multi-task learning, we provide an example of joint training on several datasets with custom linearization functions. We run the example scripts for several datasets and display the resulting generations in the application demo. Details on the fine-tuned models can be found in Appendix \ref{appendix:models}.
\subsection{Interactive Prompting}
\label{sec:cs:prompting}
\paragraph{Application} Observing the impact of various inputs on the outputs of an LLM prompted for table-to-text generation.
\paragraph{Process} The user customizes the provided \texttt{model\_api} pipeline to communicate with an LLM through an API. The API can communicate either with an external model (using e.g. OpenAI API\footnote{\url{https://openai.com/api/}}), or with a model running locally (using libraries such as FastAPI\footnote{\url{https://fastapi.tiangolo.com}}). The user then interacts with the model through the \textsc{TabGenie} web interface, modifying the prompts, highlighted cells, and table content (see §\ref{sec:interactive}).
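For instance, a minimal local model API that such a pipeline could call may look as follows; the endpoint path and payload format are illustrative and need to match the pipeline configuration.
\begin{codebox}
\begin{minted}[fontsize=\small]{python}
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
generator = pipeline("text2text-generation", model="t5-small")

class GenerationRequest(BaseModel):
    prompt: str  # linearized table contents plus the instruction

@app.post("/generate")
def generate(req: GenerationRequest):
    out = generator(req.prompt, max_length=256)
    return {"output": out[0]["generated_text"]}
\end{minted}
\end{codebox}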
\paragraph{Demonstration} We provide interactive access to the instruction-tuned Tk-Instruct \texttt{def-pos-11b} LLM \cite{wang2022super} in the project live demo. The user can use the full range of possibilities included in the interactive mode, including customizing the prompt and the input data.\footnote{Note that using the model for the task of table-to-text generation is experimental and may not produce optimal outputs. The model should also not be used outside of demonstration purposes due to our limited computational resources.} The interface is shown in Appendix \ref{appendix:screenshots}.
\subsection{Error Analysis}
\label{sec:cs:analysis}
\paragraph{Application}
Annotating error categories in the outputs from a table-to-text generation model.
\paragraph{Process} The user generates the system outputs (see §\ref{sec:cs:generation}) and saves the outputs for a particular dataset split in a JSONL format. Through the command-line interface, the user then generates an XLSX file which can be imported into any suitable office software and distributed to annotators for error analysis.
\paragraph{Demonstration} We provide instructions for generating the spreadsheet in the project documentation. See Appendix \ref{appendix:screenshots} for a preview of the spreadsheet format.
\section{Related Work}
\subsection{Data Loading and Processing}
As noted throughout the work, Huggingface Datasets \cite{lhoest2021datasets} is the primary competitor package for data loading and preprocessing. Our package serves as a wrapper on top of this framework, providing additional abstractions for D2T generation datasets.
DataLab \cite{xiao-etal-2022-datalab} is another platform for working with NLP datasets. Similarly to Huggingface Datasets, this platform has a much broader focus than our package. Besides data access, it offers fine-grained data analysis and data manipulation tools. However, its capabilities for visualizing input data and for interactive generation are limited, and at present it does not cover the majority of the datasets available in \textsc{TabGenie}.
PromptSource \cite{bach2022promptsource} is a framework for constructing prompts for generative language models using the Jinja templating language. It can be used both for developing new prompts and for using the prompts in downstream applications.
Several tools have been developed for comparing outputs of language generation systems (notably for machine translation) such as CompareMT \cite{neubig2019comparemt} or Appraise \cite{federmann2018appraise}, but the tools do not visualize the structured data.
\subsection{Interactive D2T Generation}
Until now, interactive D2T generation has been primarily limited to commercial platforms, such as Arria,\footnote{\url{https://www.arria.com}} Automated Insights,\footnote{\url{https://automatedinsights.com}} or Tableau Software\footnote{\url{https://www.tableau.com}} (formerly Narrative Science). These platforms focus on proprietary solutions for generating business insights and do not provide an interface for research datasets. \citet{dou-etal-2018-data2text} present Data2Text Studio, a platform which provides a set of developer tools for building custom D2T generation systems. The platform currently does not seem to be publicly available.
\subsection{Table-To-Text Generation}
Although pre-trained sequence-to-sequence models have been found to be effective for D2T generation \citep{kale-rastogi-2020-text, UnifiedSKG}, they have difficulties with handling the input structure, generation diversity, and logical reasoning. Multiple works have tried to address these issues. For a comprehensive review of the field, we refer the interested reader to the recent survey of \citet{sharma2022innovations}.
\section{Conclusion}
We presented \textsc{TabGenie}, a multifunctional software package for table-to-text generation. \textsc{TabGenie} bridges several gaps including visualizing input data, unified data access, and interactive table-to-text generation. As such, \textsc{TabGenie} provides a comprehensive set of tools poised to accelerate progress in the field of D2T generation.
\section*{Limitations}
For some D2T generation inputs, the tabular structure may be inappropriate. These include hierarchical tree-based structures, bag-of-words inputs, or multimodal inputs \cite{balakrishnan2019constrained,lin2019commongen,krishna2017visual}. Due to deployment issues, \textsc{TabGenie} also does not include large synthetic datasets \cite{agarwal2021knowledge,jin2020genwiki}. \textsc{TabGenie} is currently in early development stages, which is why it primarily targets the research community.
\section*{Ethical Impact}
The table-to-text generation datasets may contain various biases or factually incorrect outputs, which may be further reproduced by the table-to-text generation models. Although our software package is designed to help to examine and eliminate the biases and errors, we cannot guarantee the correctness of the processed outputs.
As \textsc{TabGenie} is an open-source software package with a permissive license, we do not control its downstream applications. We advocate using it for responsible research with the aim of improving natural language generation systems.
|
{
"arxiv_id": "2302.14177",
"language": "en",
"timestamp": "2023-03-01T02:03:48",
"url": "https://arxiv.org/abs/2302.14177",
"yymm": "2302"
} | \section{Introduction}\label{introduction}}
Software production, use, and reuse is an increasingly crucial part of
scholarly work
\citep{open_source_code_repo_predict_impact, Trisovic2021ALS}. While
historically underutilized, citing and referencing software used during
the course of research is becoming common with new standards for
software citation \citep{Katz2021RecognizingTV, Du2022UnderstandingPI}
and work in extracting software references in existing literature
\citep{cz_software_mentions}. However, records of software production
are not readily identifiable or available at scale in the way that
peer-reviewed publications or other scholarly outputs are
\citep{Schindler2022TheRO}. To make progress on this problem, we
introduce two related datasets for studying and inferring software
produced as a part of research, which we refer to as the Soft-Search
dataset.
The Soft-Search dataset is aimed at identifying research projects which
are likely to have produced software while funded by a federal grant. We
start by identifying GitHub repositories that acknowledge funding from
at least one National Science Foundation (NSF) award. We then annotate
each GitHub repository found with a binary decision for its contents:
software or not-software (e.g.~not all github repositories contain
software, they might include research notes, course materials, etc.). We
then link each annotated GitHub repository to the specific NSF award
ID(s) referenced in its README.md file. Finally, we compile the
Soft-Search Training dataset using the annotations for each GitHub
repository, and the text from the linked NSF award abstract and the
project outcomes report.
Using the Soft-Search Training dataset, we train a variety of models to
predict software production using either the NSF award abstract or
project outcomes report text as input. We use the best performing models
to then infer software production against all awards funded by the
National Science Foundation from 2010 to 2023 (additional details are
offered in Section~\ref{sec-data-collection}). The predictions and
metadata for each NSF award between the 2010 and 2023 period are
compiled to form the Soft-Search Inferred dataset.
In total, our new Soft-Search dataset includes the following
contributions:
\begin{enumerate}
\def\arabic{enumi}.{\arabic{enumi}.}
\item
Soft-Search Training: A ground truth dataset compiled using linked NSF
awards and GitHub repositories which have been annotated for software
production.
\item
Multiple classifiers which infer software production from either the
text of an NSF award's abstract or project outcomes report.
\item
Soft-Search Inferred: A dataset of more than 150,000 NSF funded awards
from between 2010 and 2023. Each award has two predictions for
software production: one from prediction using the abstract text and
the other from prediction using the project outcomes report text.
\end{enumerate}
The rest of the paper proceeds as follows. In
Section~\ref{sec-data-collection} we detail the data collection and
annotation process used for creating the Soft-Search Training dataset.
In Section~\ref{sec-models} we briefly describe the model training
process and report results. In Section~\ref{sec-soft-search-dataset} we
provide summary statistics for the Soft-Search Inferred dataset and
observe trends in software production over time. We conclude with
discussion regarding the limitations of our approach and opportunities
for future work.
\hypertarget{sec-data-collection}{%
\section{Data Collection and Annotation}\label{sec-data-collection}}
\hypertarget{sec-finding-soft}{%
\subsection{Finding Software Produced by NSF
Awards}\label{sec-finding-soft}}
The first step in our data collection process was to find software
outputs from National Science Foundation (NSF) funded research. This
step has two potential approaches. The first approach is a manual search
for references and promises of software production within NSF award
abstracts, project outcome reports, and papers supported by each award.
This first approach is labor intensive and may be prone to labeling
errors because while there may be a promise of software production in
these documents, it may not be possible to verify such software was
ultimately produced. The other approach is to predict software
production using a trained model. We pursue this approach with the
caveat that there are also potential label errors.
To gather examples of verifiable software production, we created a
Python script which used the GitHub API to search for repositories which
included a reference to financial support from an NSF award in the
repository's README.md file. Specifically, our script queried for
README.md files which contained any of the following text snippets:
`National Science Foundation', `NSF Award', `NSF Grant', `Supported by
NSF', or `Supported by the NSF'. GitHub was selected as the basis for
our search because of its widespread adoption and mention in scholarly
publication \citep{riseofgithubinscholarlypublication}. This search
found 1520 unique repositories which contained a reference to the NSF in
the repository's README.md file.
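The following Python sketch illustrates this search; pagination, rate-limit
handling, and other details of the actual collection script are omitted, and
the query syntax shown is an approximation.
\begin{verbatim}
import requests

PHRASES = ["National Science Foundation", "NSF Award", "NSF Grant",
           "Supported by NSF", "Supported by the NSF"]

def search_nsf_readmes(token):
    headers = {"Authorization": f"token {token}",
               "Accept": "application/vnd.github+json"}
    repos = set()
    for phrase in PHRASES:
        query = f'"{phrase}" in:file filename:README.md'
        resp = requests.get("https://api.github.com/search/code",
                            params={"q": query, "per_page": 100},
                            headers=headers)
        resp.raise_for_status()
        for item in resp.json().get("items", []):
            repos.add(item["repository"]["full_name"])
    return repos
\end{verbatim}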
\hypertarget{software-production-annotation}{%
\subsection{Software Production
Annotation}\label{software-production-annotation}}
The next step in our data collection process was to annotate each of the
GitHub repositories found as either ``software'' or ``not software.'' In
our initial review of the repositories we had collected, we found that
the content of repositories ranged from documentation, experimental
notes, course materials, collections of one-off scripts written during a
research project, to more typical software libraries with installation
instructions, testing, and community support and use.
Using existing definitions of what constitutes research software to form
the basis of our annotation criteria
\citep{martinez_ortiz_carlos_2022_7185371, sochat2022research}, we
conducted multiple rounds of trial coding on samples of the data.
Fleiss' kappa was used to determine if there was agreement between our
research team on whether ten GitHub repositories contained `software' or
not. On each round of trial coding ten GitHub repositories were randomly
selected from our dataset for each member of our research team to
annotate independently. When assessing a repository, members of the
research team were allowed to use any information in the repository to
determine their annotation (i.e.~the content of the README.md file, the
repository activity, documentation availability, etc.).
Our final round of trial coding showed that there was near perfect
agreement between the research team (K=0.892)
\citep{viera2005understanding}.
Our final annotation criteria were generally inclusive of labeling
repositories as software; rather than inclusion criteria, we agreed on
specific exclusion criteria that resulted in a repository being labeled as
``not software''. Specifically, repositories were labeled as ``not software'' when a
repository primarily consisted of:
\begin{enumerate}
\def\arabic{enumi}.{\arabic{enumi}.}
\item
project documentation or research notes
\item
teaching materials for a workshop or course
\item
the source code for a project or research lab website
\item
collections of scripts specific to the analysis of a single experiment
without regard to further generalizability
\item
utility functions for accessing data without providing any additional
processing capacity
\end{enumerate}
We then annotated all GitHub repositories in our dataset as either
``software'' or ``not software'' according to our agreed upon annotation
criteria.
\hypertarget{linking-github-repositories-to-nsf-awards}{%
\subsection{Linking GitHub Repositories to NSF
Awards}\label{linking-github-repositories-to-nsf-awards}}
Our final step in the data collection process was to link the annotated
GitHub repositories back to specific NSF awards. To do so, we created a
script which would load the webpage for each GitHub repository, scrape
the content of the repository's README and find the specific NSF award
ID number(s) referenced. While annotating the dataset, and with this
script, our dataset size was reduced as we found some repositories were
returned in the initial search because of the ``NSF'' acronym being used
by other, non-United-States governmental agencies which also fund
research.
When processing each repository, our Python script would load the README
content, search for NSF Award ID patterns with regular expressions, and
then verify that each NSF award ID found was valid by requesting
metadata for the award from the NSF award API.
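The following sketch illustrates this step; the regular expression and the
request parameters shown are simplified approximations rather than the exact
implementation.
\begin{verbatim}
import re
import requests

AWARD_ID_RE = re.compile(r"\b(\d{7})\b")  # NSF award IDs are 7-digit numbers

def find_valid_award_ids(readme_text):
    valid = set()
    for award_id in set(AWARD_ID_RE.findall(readme_text)):
        resp = requests.get("https://api.nsf.gov/services/v1/awards.json",
                            params={"id": award_id})
        awards = resp.json().get("response", {}).get("award", [])
        if awards:  # metadata was returned, so the ID is a real award
            valid.add(award_id)
    return valid
\end{verbatim}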
We then retrieved the text for each award's abstract and project
outcomes report. This was the final step of our data collection process
and allowed us to create a dataset of 446 unique NSF awards labeled as
`produced software' and 471 unique NSF awards labeled as `did not
produce software'.
\hypertarget{sec-models}{%
\section{Predictive Models}\label{sec-models}}
Using the compiled Soft-Search Training dataset, we trained three
different models using the text from either the award abstract or
project outcomes report. The models trained include a logistic
regression model trained with TF-IDF word embeddings
(\texttt{tfidf-logit}), a logistic regression model trained with
semantic embeddings (\texttt{semantic-logit}), and a fine-tuned
transformer (\texttt{transformer}). The semantic embeddings and the base
model from which we fine-tuned our own transformer model was the
`distilbert-base-uncased-finetuned-sst-2-english' model
\citep{hf_canonical_model_maintainers_2022}. Each model was trained with
80\% of the Soft-Search Training dataset. We then test each of the
models and use F1 to rank each model's performance.
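As an illustration, a baseline of this kind can be set up with a few lines of
scikit-learn; the hyperparameters shown are library defaults rather than the
exact values used for the reported results.
\begin{verbatim}
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

def train_tfidf_logit(texts, labels):
    # 80/20 split mirroring the setup described above
    X_train, X_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.2, random_state=0, stratify=labels)
    model = make_pipeline(TfidfVectorizer(),
                          LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)
    return model, f1_score(y_test, model.predict(X_test))
\end{verbatim}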
\hypertarget{tbl-model-results-from-abstract}{}
\begin{table}
\caption{\label{tbl-model-results-from-abstract}Predictive Model Results (Trained with Abstract Text) }\tabularnewline
\centering
\begin{tabular}{llrrrr}
\toprule
{} & model & accuracy & precision & recall & f1 \\
\midrule
0 & tfidf-logit & 0.674 & 0.674 & 0.674 & 0.673 \\
1 & transformer & 0.636 & 0.608 & 0.697 & 0.649 \\
2 & semantic-logit & 0.630 & 0.630 & 0.630 & 0.630 \\
3 & regex & 0.516 & 0.515 & 0.516 & 0.514 \\
\bottomrule
\end{tabular}
\end{table}
Table~\ref{tbl-model-results-from-abstract} reports the results from
training using the abstract text as input. The best performing model was
the \texttt{tfidf-logit} which achieved an F1 of 0.673.
\hypertarget{tbl-model-results-from-project-outcomes}{}
\begin{table}
\caption{\label{tbl-model-results-from-project-outcomes}Predictive Model Results (Trained with Project Outcomes Report Text) }\tabularnewline
\centering
\begin{tabular}{llrrrr}
\toprule
{} & model & accuracy & precision & recall & f1 \\
\midrule
0 & tfidf-logit & 0.745 & 0.745 & 0.745 & 0.745 \\
1 & transformer & 0.673 & 0.638 & 0.771 & 0.698 \\
2 & semantic-logit & 0.633 & 0.633 & 0.633 & 0.632 \\
3 & regex & 0.510 & 0.507 & 0.510 & 0.482 \\
\bottomrule
\end{tabular}
\end{table}
Table~\ref{tbl-model-results-from-project-outcomes} reports the results
from training using the project outcomes reports as input. The best
performing model was the \texttt{tfidf-logit} which achieved an F1 of
0.745.
While the models trained with the project outcomes reports were trained
with less data, the best model of the group achieved a higher F1 than
any of the models trained with the abstracts. While we have not
investigated further, we believe this to be because the project outcomes
reports contain more direct citations of produced software rather than an
abstract's promise of software production.
The data used for training, and functions to reproduce these models, are
made available via our Python package:
\href{https://github.com/si2-urssi/eager}{\texttt{soft-search}}.
\hypertarget{sec-soft-search-dataset}{%
\section{The Soft-Search Dataset}\label{sec-soft-search-dataset}}
Using the predictive models, we compile the Soft-Search Inferred dataset
which contains the metadata, abstract text, and project outcomes report
text, for all NSF awarded projects during the 2010-2023 period. The
Soft-Search Inferred dataset additionally contains our predictions for
software production using both texts respectively.
\hypertarget{tbl-soft-search-stats}{}
\begin{table}
\caption{\label{tbl-soft-search-stats}Composition of the NSF Soft Search Dataset }\tabularnewline
\centering
\begin{tabular}{llrrr}
\toprule
{} & Program & \# Awards & \# Software & Fraction Software \\
\midrule
0 & MPS & 32885 & 19178 & 0.583184 \\
1 & CISE & 24633 & 13274 & 0.538871 \\
2 & ENG & 22900 & 11242 & 0.490917 \\
3 & GEO & 17822 & 5142 & 0.288520 \\
4 & BIO & 16990 & 6013 & 0.353914 \\
5 & EHR & 13703 & 575 & 0.041962 \\
6 & SBE & 13318 & 1966 & 0.147620 \\
7 & TIP & 8597 & 4501 & 0.523555 \\
8 & OISE & 2329 & 636 & 0.273079 \\
9 & OIA & 498 & 123 & 0.246988 \\
\bottomrule
\end{tabular}
\end{table}
\hypertarget{trends-and-observations}{%
\subsection{Trends and Observations}\label{trends-and-observations}}
\begin{figure}
{\centering \includegraphics[width=\linewidth]{paper_files/figure-pdf/fig-soft-over-time-output-1.pdf}
}
\caption{\label{fig-soft-over-time}Software Production Over Time (Using
Predictions from Abstracts)}
\end{figure}
Using the Soft-Search Inferred dataset we can observe trends in software
production over time. Figure~\ref{fig-soft-over-time} plots the percent
of awards which we predict to have produced software (using the award's
abstract) over time. While there are minor year-to-year deviations in
predicted software production, we observe the ``Math and Physical
Sciences'' (MPS) funding program as funding the most awards which we
predict to produce software, with ``Computer and Information Science and
Engineering'' (CISE), and ``Engineering'' (ENG) close behind.
\begin{figure}
{\centering \includegraphics[width=\linewidth]{paper_files/figure-pdf/fig-soft-over-duration-output-1.pdf}
}
\caption{\label{fig-soft-over-duration}Software Production Grouped By
Award Duration (Using Predictions from Abstracts)}
\end{figure}
We can additionally observe trends in software production as award
duration increases. Figure~\ref{fig-soft-over-duration} plots the
percent of awards which we predict to have produced software (using the
award's abstract) grouped by the award duration in years. We note that
as award duration increases, the percentage of awards which are
predicted to have produced software also tends to increase.
\hypertarget{conclusion}{%
\section{Conclusion}\label{conclusion}}
We introduce Soft-Search, a pair of novel datasets for studying software
production from NSF funded projects. The Soft-Search Training dataset is
a human-labeled dataset with almost 1000 examples used to train models
which predict software production from either the NSF award abstract
text or the project outcomes report text. We used these models to
generate the Soft-Search Inferred dataset. The Soft-Search Inferred
dataset includes project metadata, the award's abstract and project
outcomes report, and predictions of software production for each NSF
funded project between 2010 and 2023. We hope that Soft-Search helps
further new studies and findings in understanding the role software
development plays in scholarly publication.
All datasets and predictive models produced by this work are available
from our GitHub repository:
\href{https://github.com/si2-urssi/eager}{\texttt{si2-urssi/eager}}.
\hypertarget{limitations}{%
\subsection{Limitations}\label{limitations}}
As discussed in Section~\ref{sec-data-collection}, the Soft-Search
Training dataset was entirely composed of NSF awards which ultimately
released or hosted software (and other research products) on GitHub. Due
to our data collection strategy, it is possible that each of the
predictive models learned not to predict if an NSF award would produce
software, but rather, if an NSF award would produce software hosted on
GitHub.
\hypertarget{future-work}{%
\subsection{Future Work}\label{future-work}}
As discussed in Section~\ref{sec-finding-soft}, our initial method for
attempting to find research software produced from NSF supported awards
was to search for references and promises of software production in the
abstract, project outcomes report, and attached papers of each award.
While attempting this approach to create the dataset, we found that many
awards and papers that reference computational methods do not provide a
reference web link to their code repositories or websites. In some
cases, we found repositories related to an award or paper via Google and
GitHub search ourselves. While we support including references to code
repositories in award abstracts, outcomes reports, and papers, future
research should be conducted on how to enable automatic reconnection of
papers and their software outputs.
\hypertarget{acknowledgements}{%
\section{Acknowledgements}\label{acknowledgements}}
We thank the USRSSI team, especially Karthik Ram, for their input. This
material is based upon work supported by the National Science Foundation
under Grant 2211275.
\bibliographystyle{ACM-Reference-Format}
|
{
"arxiv_id": "2302.14176",
"language": "en",
"timestamp": "2023-03-01T02:03:43",
"url": "https://arxiv.org/abs/2302.14176",
"yymm": "2302"
} |
\subsection{Asset Depreciation}
When discussing a situation with decaying reward values, it is useful to distinguish between potential future rewards and actual rewards that have been obtained.
As such, we introduce the term \emph{asset} to refer to a reward that has been obtained by an agent at a previous moment in time.
Using this terminology, the present work may be described as an inquiry into optimization and learning under the assumption that assets \emph{depreciate}.
Depreciation, a term borrowed from the field of finance and accounting \citep{Wright64,Burt72}, describes exactly the phenomenon where the value of something decays with time.
We propose a notion of depreciation that is inspired by traditional discounting and is based on applying the same basic principle of time preference to an agent's history in addition to its future.
More precisely, we consider the situation in which an agent's behavior is evaluated with respect to an infinite sequence of cumulative accrued assets, each of which is discounted in proportion to how long ago it was obtained.
That is, we propose evaluating the agent in terms of functions on the sequence of assets
\[
\seq{\sum^n_{k=1} r_k \gamma^{n-k}}^\infty_{n=1},
\]
where $\gamma \in (0,1)$ is a discount factor, rather than on the sequence of rewards $\seq{r_n}^\infty_{n=1}$.
To motivate the study of depreciation and illustrate its naturalness, we examine the following hypothetical case-study.
\begin{example}[Used Car Dealership]
\label{ex:car}
Consider a used car dealership with a business model involving purchasing used cars in locations with favorable regional markets, driving them back to their shop, and selling them for profit in their local market.
Suppose that our optimizing agent is an employee of this dealership, tasked with managing capital acquisition.
More specifically, this employee's job is to decide the destination from which the next car should be purchased, whenever such a choice arises.
The objective of the agent is to maximize the sum of the values of all vehicles in stock at the dealership over a discounted time-horizon for some discount factor $\lambda \in (0,1)$.
Note that the discounted time-horizon problem is equivalent to the problem of maximizing expected terminal payoff of the process given a constant probability $(1-\lambda)$ of terminating operations at any point.
It has long been known~\citep{Wykof70,ackerman1973used} that cars tend to continually depreciate in value after being sold as new, and so any reasonable model for the value of all vehicles in the inventory should incorporate some notion of asset depreciation.
Suppose that another discount factor $\gamma \in (0,1)$ captures the rate at which automobiles lose value per unit of time.
Considering $\gamma$-depreciated rewards and $\lambda$-discounted horizon, the goal of our agent can be defined as a \emph{discounted depreciating optimization problem}.
Alternatively, one may seek to optimize the \emph{long run average} (mean payoff) of $\gamma$-depreciated rewards.
\end{example}
\subsection{Discounted Depreciating Payoff}
Consider the sequence $x = \seq{3, 4, 5, 3, 4, 5, \ldots}$ of (absolute) rewards accumulated by the agent.
In the presence of depreciation, the cumulative asset values at various points in time follow the sequence
\begin{gather*}
3, (3\gamma + 4), (3 \gamma^2+4\gamma+5), (3 \gamma^3+4\gamma^2+5\gamma+3), \\
(3 \gamma^4+4\gamma^3+5\gamma^2+3\gamma+4), \ldots
\end{gather*}
For the $\lambda$-discounted time horizon, the value of the assets can be computed as follows:
\begin{align*}
&\cla{3} + \clb{\lambda (3\gamma {+} 4}) + \clc{\lambda^2 (3 \gamma^2 {+}4\gamma {+}5}) + \cld{\lambda^3(3 \gamma^3 {+}4\gamma^2 {+}5\gamma {+}3)} +\\
&\qquad \cle{\lambda^4(3 \gamma^4 {+}4\gamma^3 {+}5\gamma^2 {+}3\gamma+4)} + \ldots \\
&= (\cla{3} {+} \clb{3 \lambda\gamma} {+}\clc{3\gamma^2\lambda^2} {+}\cdots) +(\clb{4 \lambda } {+}\clc{4 \lambda^2\gamma} {+}\cld{4\lambda^3\gamma^2} {+}\cdots) + \\
&\qquad (\clc{5\lambda^2} {+}\cld{5\lambda^3\gamma} {+} \cle{5\lambda^4\gamma^2} {+} \cdots)+ (\cld{3\lambda^3} {+} \cle{3\lambda^4\gamma} {+}3\gamma^2\lambda^5 {+}\cdots)+\cdots \\
&= 3(1 {+} \lambda\gamma {+}\gamma^2\lambda^2 {+}\cdots) + 4\lambda(1 {+} \lambda\gamma {+}\lambda^2\gamma^2 {+}\cdots) + \\
&\qquad 5\lambda^2(1 {+}\lambda\gamma {+}\lambda^2\gamma^2 {+} \cdots)+ 3\lambda^3(1 {+} \lambda\gamma {+} \gamma^2\lambda^2 {+}\cdots) +\ldots\\
&= \frac{3+4\lambda+5\lambda^2+3\lambda^3+\cdots}{(1-\lambda\gamma)} \\
&= \frac{3+4\lambda+5\lambda^2}{(1-\lambda\gamma)(1-\lambda^3)}.
\end{align*}
Notice that this $\gamma$-depreciated sum is equal to the $\lambda$-discounted sum when immediate rewards are scaled by a factor $\frac{1}{1-\lambda\gamma}$.
We show that this is not a mere coincidence, and prove that this equality holds also for general MDPs.
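This identity is also easy to check numerically; the following Python snippet compares a truncated evaluation of the depreciating payoff for the periodic reward sequence above with the scaled discounted sum and the closed form.
\begin{verbatim}
lam, gamma, N = 0.9, 0.7, 2000
rewards = [(3, 4, 5)[n % 3] for n in range(N)]

assets, total = [], 0.0
for r in rewards:
    total = gamma * total + r        # depreciate old assets, add new reward
    assets.append(total)

depreciated = sum(lam**n * a for n, a in enumerate(assets))
discounted  = sum(lam**n * r for n, r in enumerate(rewards))
closed_form = (3 + 4*lam + 5*lam**2) / ((1 - lam*gamma) * (1 - lam**3))
print(depreciated, discounted / (1 - lam*gamma), closed_form)  # ~ equal
\end{verbatim}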
\subsection{Average Depreciating Payoff}
Next, consider the long-run average of the depreciating asset values as the limit inferior of the sequence
\begin{gather*}
3, \frac{3\gamma {+} 4}{2}, \frac{3 \gamma^2 {+} 4\gamma {+} 5}{3}, \frac{3 \gamma^3 {+} 4\gamma^2 {+} 5\gamma {+} 3}{4}, \\
\frac{3 \gamma^4 {+} 4\gamma^3 {+} 5\gamma^2 {+} 3\gamma {+} 4}{5}, \ldots
\end{gather*}
Based on classical Tauberian results~\citep{BewleyKohlberg76}, it is tempting to conjecture that the normalized $\lambda$-discounted, $\gamma$-depreciating value, i.e.\ $(1-\lambda)$ times the discounted depreciating value, converges to this mean as $\lambda \to 1$; in our example,
\begin{align*}
\lim_{\lambda\to 1} (1-\lambda) \frac{3+4\lambda+5\lambda^2}{(1-\lambda\gamma)(1-\lambda^3)} &= \lim_{\lambda\to 1} \frac{3+4\lambda+5\lambda^2}{(1-\lambda\gamma)(1+\lambda+\lambda^2)} \\
&= \frac{3+4+5}{3(1-\gamma)}.
\end{align*}
Indeed, we prove that this conjecture holds.
\paragraph{Contributions.} The highlights of this paper are given below.
\begin{itemize}[wide]
\item[{$\blacktriangleright$}] We initiate the study of discounted and average payoff optimization in the presence of depreciation dynamics.
\item[{$\blacktriangleright$}] We characterize the optimal value of the discounted depreciating payoff via Bellman-style optimality equations and use them to show that stationary deterministic policies are sufficient for achieving optimality.
Moreover, our characterization enables computing the optimal value and an optimal policy in polynomial time in the planning setting.
\item [{$\blacktriangleright$}] The optimality equation also facilitates a formulation of a variant of Q-learning that is compatible with asset depreciation, thereby providing a model-free reinforcement learning approach to obtain optimal policies in the learning setting.
\item [{$\blacktriangleright$}]
We show the classical Tauberian theorem relating discounted and average objectives can be extended to the depreciating reward setting. This result allows us to establish the sufficiency of stationary deterministic policies for optimality with respect to the average depreciating payoffs.
\end{itemize}
\section{Introduction}
\input{intro.tex}
\paragraph{Organization.}
We begin by introducing necessary notation and reviewing the relevant technical background.
Section~\ref{sec:deprec} develops results on discounted depreciating payoff, while Section~\ref{sec:average} develops results for the average depreciating objective.
We discuss some closely related work in Section~\ref{sec:related} and recap our contributions in the concluding section.
\section{Preliminaries}
\input{prelims.tex}
\section{Discounted Depreciating Payoff}
\label{sec:deprec}
\input{past-disc.tex}
\section{Related Work}
\label{sec:related}
Discounted and average payoffs have played central roles in the theory of optimal control and reinforcement learning.
A multitude of deep results exist connecting these objectives
\citep{BewleyKohlberg76,BewleyKohlberg78,MertensNeyman81,AnderssonMiltersen09,ChatterjeeDoyenSingh11,ChatterjeeMajumdar12,Ziliotto16,Ziliotto16G,Ziliotto18}
in addition to an extensive body of work on algorithms for related optimization problems and their complexity
\citep{FilarSchultz86,RaghavanFilar91,RaghavanSyed03,ChatterjeeMajumdarHenzinger08,ChatterjeeIbsenJensen15}.
The value for the depreciating assets is defined as a past discounted sum of rewards.
Past discounted sums for finite sequences were studied in the context of optimization \citep{alur2012regular} and are closely related to exponential recency weighted average, a technique used in nonstationary multi-armed bandit problems \citep{Sutton18} to estimate the average reward of different actions by giving more weight to recent outcomes.
However, to the best of our knowledge, depreciating assets have not been formally studied as a payoff function.
Discounted objectives have found significant applications in areas of program verification and synthesis
\citep{deAlfaroHenzingerMajumdar03,CernyChatterjeeHenzingerRadhakrishnaSing11}.
Although the idea of past operators is quite old \citep{LichtensteinPnueliZuck85}, relatively recently a number of classical formalisms including temporal logics such as LTL and CTL and the modal $\mu$-calculus have been
extended with past-tense operators and with discounted quantitative semantics
\citep{deAlfaroFaellaHenzingerMajumdarStoelinga05,AlmagorBokerKupferman14,AlmagorBokerKupferman16,littma17}.
A particularly significant result \citep{Markey03} around LTL with classical
boolean semantics is that, while LTL with past operators is no more expressive
than standard LTL, it is exponentially more succinct.
It remains open whether this type of relationship holds for other logics and their extensions by past operators when interpreted with discounted quantitative semantics \citep{AlmagorBokerKupferman16}.
\section{Conclusion}
\label{sec:conclusion}
In the stochastic optimal control and reinforcement learning setting the agents select their actions to maximize a discounted payoff associated with the resulting sequence of scalar rewards.
This interaction models the way dopamine driven organisms maximize their reward sequence based on their capability to delay gratification (discounting). While this paradigm provides a natural model in the context of streams of immediate rewards, when the valuations and objectives are defined in terms of assets that depreciate, the problem cannot be directly modeled in the classic framework.
We initiated the study of optimization and learning for the depreciating assets, and showed a surprising connection between these problems and traditional discounted problems.
Our result enables solving optimization problems under depreciation dynamics by tweaking the algorithmic infrastructure that has been extensively developed over the last several decades for classic optimization problems.
We believe that depreciating assets may provide a useful abstraction to a number of related problems.
The following points sketch some of these directions and state several problems that remain open.
\begin{itemize}[wide]
\item[{$\blacktriangleright$}] Regret minimization \citep{Cesa-BianchiLugosi06} is a popular criterion in the setting of online
learning where a decision-maker chooses her actions so as to minimize the average regret---the difference between the realized reward and the reward that could have been achieved.
We posit that imperfect decision makers may view their regret in a depreciated sense, since a suboptimal action in the recent past tends to cause more regret than an equally suboptimal action in the distant past.
We hope that the results of this work spur further interest in developing foundations of past-discounted characterizations of regret in online learning and optimization.
\item[{$\blacktriangleright$}] In solving multi-agent optimization problems, a practical assumption involves bounding the capability of any adversary by assuming that they have a limited memory of the history of interaction, and this can be modeled via a discounting of past outcomes.
From our results it follows that two-player zero-sum games with depreciation dynamics under both discounted and average payoffs can be reduced to classic optimization games modulo some scaling of the immediate rewards.
\item[{$\blacktriangleright$}] The notion of state-based discount factors has been studied in the context of classic optimization and learning.
Is it possible to extend the results of this paper to the setting with state-dependent depreciation factors?
Such an extension does not directly follow from the tools developed in this paper and remains an open problem.
\item[{$\blacktriangleright$}] Continuous-time MDPs provide a dense-time analog of discrete-time MDPs and optimization and RL algorithms for such systems are well understood.
Is it possible to solve optimization and learning for CTMDPs with depreciating assets?
\end{itemize}
\bibliographystyle{plainnat}
\subsection{Discounted Optimization}
In this section, we study discounted optimization, for $\lambda \in (0,1)$, under depreciating asset dynamics.
The payoff in this setting is captured by the expression
\begin{equation}
\sum^\infty_{n=1} \lambda^{n-1} \sum^n_{k=1} R(s_k, a_k) \gamma^{n-k}, \tag{Discounted Depreciating Payoff}
\end{equation}
which has a corresponding value function
\begin{equation*}
V_\lambda^\gamma(s) = \sup_{\pi \in \Pi^M} \bb{E}^\pi_s\bk{\sum^\infty_{n=1} \lambda^{n-1} \sum^n_{k=1} R(s_k, a_k) \gamma^{n-k}}.
\end{equation*}
Let us now return to the used car dealership example.
\begin{example}[Used Car Dealership Cont.]
Recognizing that cars depreciate continually after their first purchase, the employee realizes that their model should incorporate a notion of asset depreciation.
After a bit of market research, the employee selects another discount factor $\gamma \in (0,1)$ to capture the rate at which automobiles typically lose value over a given time step.
Using both discount factors $\lambda$ and $\gamma$, the employee can model the scenario as a discounted depreciating optimization problem.
For the sake of simplicity, suppose that there are only two locations $s_1$ and $s_2$ from which to choose the next target market, and that the only point where the employee has more than one possible action is at the dealership $s_d$ (from where they can chose action $a_1$ to go to $s_1$ or $a_2$ to go to $s_2$).
Realizing that it is unreasonable to plan without expecting unforeseen delays, the employee also introduces two parameters $\rho_1$ and $\rho_2$, which are success rates for buying a desired vehicle in $s_1$ and $s_2$ respectively.
Given that the agent is in location $s_i$, the rate $\rho_i$ is interpreted as the probability that they find a seller and purchase a vehicle before the end of the day and thus $1 - \rho_i$ is the probability that they fail to do so.
This situation is represented graphically as a finite MDP in \autoref{fig:car_mdp}, where actions are displayed in red, transition probabilities in blue, and immediate rewards (i.e. car values when they are stocked) in green.
If an action is omitted from an edge label, then there is only one action $a$ available.
If a transition probability is omitted, then the transition is deterministic, i.e. occurs with probability 1.
If a reward value is omitted, then the reward obtained is 0.
\begin{figure}
\centering
\begin{tikzpicture}[> = Stealth, thick]
\node[state] (d) {$\bm{s_d}$};
\node[state] (s1)[above left = 1cm and 1cm of d] {$\bm{s_1}$};
\node[state] (t1)[above right = 1cm and 1cm of d] {$\bm{t_1}$};
\node[state] (s2)[below left = 1cm and 1cm of d] {$\bm{s_2}$};
\node[state] (t2)[below right = 1cm and 1cm of d] {$\bm{t_2}$};
\path[->] (d) edge node[above,sloped,color=red] {$\bm{a_1}$} (s1);
\path[->] (d) edge node[above,sloped,color=red] {$\bm{a_2}$} (s2);
\path[->] (s1) edge[loop left] node[color=blue] {$\bm{1 - \rho_1}$} (s1);
\path[->] (s1) edge node[above,color=blue] {$\bm{\rho_1}$} (t1);
\path[->] (s2) edge[loop left] node[color=blue] {$\bm{1 - \rho_2}$} (s2);
\path[->] (s2) edge node[above,color=blue] {$\bm{\rho_2}$} (t2);
\path[->] (t1) edge node[sloped,below,color=darkgreen] {$\bm{r_1}$} (d);
\path[->] (t2) edge node[sloped,above,color=darkgreen] {$\bm{r_2}$} (d);
\end{tikzpicture}
\caption{An MDP for the discounted depreciating optimization problem of the car dealership.}
\label{fig:car_mdp}
\end{figure}
In traditional discounted optimization, the discount factor $\lambda$ imposes a certain type of trade-off.
Suppose, for instance, that $\rho_1$ is large while $r_1$ is small and that $\rho_2$ is small while $r_2$ is large.
Then a small discount factor indicates that it may pay off more to take action $a_1$ since it is likely that taking $a_2$ will result in significant delays and thus diminish the value of the eventual reward $r_2$.
On the other hand, if the discount factor is close to 1, then it may be worth it for the agent to accept the high probability of delay since the eventual discounted value will be closer to $r_2$.
Adding in the depreciation dynamics with discount factor $\gamma$, the trade-off remains, but to what extent depreciation alters the dynamics of a given environment and policy is unclear.
Intuition may suggest that introducing depreciation to discounted optimization should only make the risk-reward trade-off sharper, and one might further conjecture that when $\gamma$ is close to 0, the higher decay rate of cumulative asset value should drive an agent towards riskier behavior.
On the other hand, it is plausible that a depreciation factor close to one might embolden the agent towards similar risky actions because the opportunity cost of such behavior diminishes as assets are accumulated in greater quantities.
As we proceed with our analysis of the discounted depreciating payoff we attempt to shed light on questions like this and get to the core of what depreciation entails in this context.
\end{example}
Our first main result establishes a Bellman-type equational characterization of the discounted depreciating value.
\begin{theorem}[Optimality Equation]
\label{thm:optimalityEQ}
The discounted depreciating value is the unique solution of the equation
\begin{equation}
V_\lambda^\gamma(s) = \max_{a \in A} \frac{R(s, a)}{1 - \lambda\gamma} + \lambda \bb{E}_T\bk{V_\lambda^\gamma(t) \:\middle\vert\: s, a}.
\end{equation}
\end{theorem}
\begin{proof}
By splitting the term $\lambda^{n-1}$ occurring in the definition of the discounted depreciating payoff into the product $\lambda^{n-k}\lambda^{k-1}$ and distributing these factors into the inner summation, we obtain the expression
\begin{equation}
\label{eq:to_be_factored}
\sum^\infty_{n=1} \sum^n_{k=1} \lambda^{k-1} R(s_k, a_k) \lambda^{n-k} \gamma^{n-k}.
\end{equation}
The next step of the proof relies on the following classical result of real analysis (cf.\ Theorem 3.50 of \citet{Rudin76}).
\begin{mertens}
Let $\sum^\infty_{n=1} x_n = X$ and $\sum^\infty_{n=1} y_n = Y$ be two convergent series of real numbers.
If at least one of the given series converges absolutely, then their Cauchy product converges to the product of their limits:
\begin{equation*}
\paren{\sum^\infty_{n=1} x_n} \paren{\sum^\infty_{n=1} y_n} = \sum^\infty_{n=1} \sum^n_{k=1} x_k y_{n-k} = XY.
\end{equation*}
\end{mertens}
The series \eqref{eq:to_be_factored} may be factored into the Cauchy product
\begin{equation}
\label{eq:cauchy_factored}
\paren{\sum^\infty_{n=1} (\lambda\gamma)^{n-1}} \paren{\sum^\infty_{n=1} \lambda^{n-1} R(s_n, a_n)},
\end{equation}
and since both terms in this Cauchy product converge absolutely, Mertens' theorem applies.
Thus, noticing that the left-hand series is geometric, the expression \eqref{eq:cauchy_factored} is equivalent to
\begin{equation*}
\frac{1}{1-\lambda\gamma} \sum^\infty_{n=1} \lambda^{n-1} R(s_n, a_n).
\end{equation*}
Consequently, the discounted depreciating value may be written as
\begin{equation}
\label{eq:FPtoF1}
\begin{aligned}
V_\lambda^\gamma(s) &= \sup_{\pi \in \Pi^M} \bb{E}^\pi_s\bk{\frac{1}{1-\lambda\gamma} \sum^\infty_{n=1} \lambda^{n-1} R(s_n, a_n)} \\
&= \frac{1}{1-\lambda\gamma} \sup_{\pi \in \Pi^M} \bb{E}^\pi_s\bk{\sum^\infty_{n=1} \lambda^{n-1} R(s_n, a_n)} \\
&= \frac{V_\lambda(s)}{1-\lambda\gamma}.
\end{aligned}
\end{equation}
The equational characterization of the discounted value $V_\lambda$ now facilitates the derivation of the desired equational characterization of the discounted depreciating value $V_\lambda^\gamma$ as
\begin{equation}
\label{eq:FPtoF2}
\begin{aligned}
V_\lambda^\gamma(s) &= \frac{1}{1-\lambda\gamma} \paren{\max_{a \in A} R(s,a) + \lambda \bb{E}_T\bk{V_\lambda(t) \:\middle\vert\: s, a}} \\
&= \max_{a \in A} \frac{R(s,a)}{1 - \lambda\gamma} + \lambda \bb{E}_T\bk{V_\lambda^\gamma(t) \:\middle\vert\: s,a}.
\end{aligned}
\end{equation}
\end{proof}
\noindent
An immediate consequence of \autoref{thm:optimalityEQ} is a characterization of the strategic complexity of discounted depreciating payoffs.
\begin{corollary}[Strategic Complexity]
For any discounted depreciating payoff over any finite MDP, there exists an optimal policy that is stationary and deterministic.
\end{corollary}
\autoref{thm:optimalityEQ} enables a number of extensively studied algorithmic techniques to be adapted for use under the discounted depreciating payoff.
In particular, the equational characterization of the discounted depreciating value implies that it is the unique fixed point of a contraction mapping \citep{banach1922operations}, which in turn facilitates the formulation of suitable variants of planning algorithms based on foundational methods such as value iteration and linear programming.
This allows us to bound the computational complexity of determining discounted depreciating values in terms of the size of the environmental MDP and the given discount factors.
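For instance, a value-iteration sketch for the discounted depreciating payoff differs from the classical algorithm only in the scaling of the immediate reward; the implementation below assumes a finite MDP given as reward and transition arrays.
\begin{verbatim}
import numpy as np

def depreciating_value_iteration(R, T, lam, gamma, tol=1e-8):
    # R[s, a]: immediate rewards; T[s, a, t]: transition probabilities
    S, A = R.shape
    V = np.zeros(S)
    scale = 1.0 / (1.0 - lam * gamma)  # only change w.r.t. the classic case
    while True:
        Q = scale * R + lam * np.einsum("sat,t->sa", T, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)  # optimal values, greedy policy
        V = V_new
\end{verbatim}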
\begin{theorem}[Computational Complexity]
The discounted depreciating value and a corresponding optimal policy are computable in polynomial time.
\end{theorem}
\begin{proof}
Let $\delta_{i,j} = \begin{cases}
1 &\tn{if } i=j \\
0 &\tn{otherwise}
\end{cases}$ be the Kronecker delta.
Suppose that, for each state $s$ in the environment $M$, we have an associated real number $0 < x_s$, chosen arbitrarily.
The unique solution to the following linear program is the vector of discounted depreciating values of the states of $M$.
\begin{equation}
\label{eq:valueLP}
\begin{aligned}
&\tn{minimize } \sum_{s \in S} x_s v_s \quad\tn{subject to} \\
&\frac{R(s,a)}{1-\lambda\gamma} \leq \sum_{t \in S} v_t \paren{\delta_{s,t} - \lambda T(t \mid s,a)} &\forall (s,a) \in S \times A
\end{aligned}
\end{equation}
From a solution $v^*$ to \eqref{eq:valueLP}, an optimal policy can be obtained as
\begin{equation*}
\pi(s) = \argmax_{a \in A} \frac{R(s,a)}{1-\lambda\gamma} + \lambda \bb{E}_T\bk{v^*_t \mid s,a}.
\end{equation*}
Alternatively, an optimal policy may be derived from the solution to the dual linear program given as follows.
\begin{equation}
\label{eq:policyLP}
\begin{aligned}
&\tn{maximize } \sum_{(s,a) \in S \times A} \frac{R(s,a)}{1 - \lambda\gamma}\, y_{s,a} \quad\tn{subject to} \\
&x_s = \sum_{(t,a) \in S \times A} y_{t,a} \paren{\delta_{s,t} - \lambda T(s \mid t,a)} &\forall s \in S \\
&0 \leq y_{s,a} &\forall (s,a) \in S \times A
\end{aligned}
\end{equation}
In particular, if $y^*$ is a solution to \eqref{eq:policyLP}, then any policy $\pi$ for which the inequality $0 < y^*_{s,\pi(s)}$ holds at every state is optimal.
The correctness of these linear programs follows from the proof of \autoref{thm:optimalityEQ}.
Since linear programs can be solved in polynomial time, the theorem follows.
\end{proof}
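To illustrate how \eqref{eq:valueLP} can be assembled for an off-the-shelf solver, we include the following sketch; the use of \texttt{scipy} and all helper names are our own choices and not part of the formal development.
\begin{verbatim}
# Sketch: solving the primal LP (eq:valueLP).  The constraints
#   R(s,a)/(1-lam*gam) <= sum_t v_t (delta_{s,t} - lam*T(t|s,a))
# are rewritten as A_ub @ v <= b_ub, one row per state-action pair.
import numpy as np
from scipy.optimize import linprog

def solve_depreciating_lp(R, T, lam, gam, x=None):
    # R: (S, A) rewards, T: (S, A, S) transition probabilities
    S, A = R.shape
    x = np.ones(S) if x is None else x      # arbitrary positive weights
    rows, rhs = [], []
    for s in range(S):
        for a in range(A):
            row = lam * T[s, a, :].copy()    # lam*T - delta
            row[s] -= 1.0
            rows.append(row)
            rhs.append(-R[s, a] / (1.0 - lam * gam))
    res = linprog(c=x, A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(None, None)] * S, method="highs")
    assert res.success
    v = res.x
    policy = [int(np.argmax([R[s, a] / (1.0 - lam * gam)
                             + lam * T[s, a, :] @ v for a in range(A)]))
              for s in range(S)]
    return v, policy
\end{verbatim}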
\autoref{thm:optimalityEQ} allows the formulation of an associated Q-value
\begin{equation*}
Q_\lambda^\gamma(s, a) = \frac{R(s, a)}{1 - \lambda\gamma} + \lambda \bb{E}_T\bk{V_\lambda^\gamma(t) \:\middle\vert\: s, a},
\end{equation*}
which may be used to construct a Q-learning iteration scheme for discounted depreciating payoffs as
\begin{equation}
\label{eq:Past-Q-update}
\hspace{-3pt} Q^{\gamma, n+1}_\lambda(s, a) {\gets} Q^{\gamma, n}_\lambda(s, a) {+} \alpha_n \paren{\frac{R(s, a)}{1 {-} \lambda\gamma} {+} \lambda V^{\gamma, n}_\lambda(t) {-} Q^{\gamma,n}_\lambda(s, a)}.
\end{equation}
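A schematic tabular implementation of the update \eqref{eq:Past-Q-update} might look as follows; the environment interface \texttt{step}, the $\varepsilon$-greedy exploration and the learning-rate schedule are illustrative choices of ours and are not prescribed by the theory.
\begin{verbatim}
# Schematic tabular Q-learning for the discounted depreciating payoff.
# step(s, a) is an assumed environment interface returning the sampled
# next state and the reward for the chosen state-action pair.
import random

def depreciating_q_learning(step, n_states, n_actions, lam, gam,
                            episodes=10000, horizon=200, eps=0.1):
    Q = [[0.0] * n_actions for _ in range(n_states)]
    visits = [[0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = random.randrange(n_states)
        for _ in range(horizon):
            if random.random() < eps:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda b: Q[s][b])
            t, r = step(s, a)
            visits[s][a] += 1
            alpha = 1.0 / visits[s][a]   # satisfies the Robbins-Monro conditions
            target = r / (1.0 - lam * gam) + lam * max(Q[t])
            Q[s][a] += alpha * (target - Q[s][a])
            s = t
    return Q
\end{verbatim}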
\begin{theorem}
If each state-action pair of the environment is encountered infinitely often and the learning rates satisfy the Robbins--Monro convergence criteria
\begin{equation*}
\sum^\infty_{n=0} \alpha_n = \infty \quad\tn{ and }\quad \sum^\infty_{n=0} \alpha_n^2 < \infty,
\end{equation*}
then iterating \eqref{eq:Past-Q-update} converges almost surely to the discounted depreciating Q-value as $n \to \infty$:
\begin{equation*}
\lim_{n \to \infty} Q^{\gamma,n}_\lambda = Q_\lambda^\gamma.
\end{equation*}
\end{theorem}
\begin{proof}
Equations \eqref{eq:FPtoF1} and \eqref{eq:FPtoF2} show that the optimality equation for the discounted depreciating value reduces to the optimality equation for the discounted value, modulo a multiplicative factor dependent on $\lambda$ and $\gamma$.
It therefore follows that discounted depreciating Q-learning, via iteration of \eqref{eq:Past-Q-update}, converges in the limit to the optimal $Q^\gamma_\lambda$ under the same conditions that standard discounted Q-learning, via iteration of \eqref{eq:Q-update}, converges in the limit to the optimal $Q_\lambda$.
Hence, we conclude that discounted depreciating Q-learning asymptotically converges given that each state-action pair is encountered infinitely often and that the convergence conditions in the theorem statement are satisfied by the learning rates.
\end{proof}
\subsection{Discussion}
Besides the technical implications of \autoref{thm:optimalityEQ}, its proof provides some insight about the interplay between discounting and depreciation.
A foundational result \citep{BewleyKohlberg76} in the theory of infinite-horizon optimization establishes that, over any finite MDP, the discounted value scaled by $(1-\lambda)$ converges to the average value as $\lambda$ approaches 1 from below:
\begin{equation*}
\lim_{\lambda \to 1} (1 - \lambda) V_\lambda = V.
\end{equation*}
Following this approach, we consider the asymptotic behavior of the discounted depreciating value when taking similar limits of the discount factors.
Using the identity $V_\lambda^\gamma = \frac{V_\lambda}{1 - \lambda\gamma}$ from equation \eqref{eq:FPtoF1} as the starting point for taking these limits yields the equations
\begin{gather}
\lim_{\lambda \to 1} (1 - \lambda) V_\lambda^\gamma = \frac{V}{1 - \gamma}, \label{eq:FPtoM} \\
\lim_{\gamma \to 1} V_\lambda^\gamma = \frac{V_\lambda}{1 - \lambda}, \label{eq:g_to_0} \\
\lim_{\gamma \to 0} V_\lambda^\gamma = V_\lambda. \label{eq:g_to_1}
\end{gather}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fixed_l_varied_g.png}
\caption{
A graph of the discounted depreciating value of the car dealership example as $\gamma$ varies over the interval $(0,1)$ with fixed $\lambda = \frac{1}{2}$.
The parameter values for this plot are $\rho_1 = \frac{1}{2}$, $\rho_2 = \frac{1}{4}$, $r_1 = 5$, $r_2 = 7$.
}
\label{fig:fixed_l_varied_g}
\end{figure}
The relationships described by equations \eqref{eq:g_to_1} and \eqref{eq:g_to_0}, illustrated by \autoref{fig:fixed_l_varied_g}, are justified conceptually by a simple interpretation that is helpful for building intuition around the behavior of the discounted depreciating payoff.
One can think of the standard discounted payoff as a special case of the discounted depreciating payoff where $\gamma = 0$.
That is, the optimizing agent working towards maximizing a discounted payoff does not consider the value of their assets whatsoever at any point in time; the only quantities of concern from their perspective are the incoming stream of rewards.
Interpreting $\gamma$ as a measure of the agent's memory of past outcomes, it follows naturally that the discounted depreciating payoff reduces to the discounted payoff when the agent has no recollection whatsoever.
Connecting this notion back to depreciation, it can be argued that, from the agent's perspective, externally driven depreciation of assets is morally equivalent to an internally driven perception of depreciation based on an imperfect recollection of past events.
Conversely, an agent with a perfect memory operating under a discounted payoff would end up maximizing this payoff on the sequence of cumulative assets $\seq{\sum^n_{k=1}R(s_k, a_k)}^\infty_{n=1}$ rather than the sequence $\seq{R(s_n, a_n)}_{n=1}^\infty$ of immediate rewards.
Assuming positive immediate rewards, this results in a greater value than would be obtained on the reward sequence itself, as evidenced by the plot in \autoref{fig:fixed_l_varied_g}.
As a consequence of the geometric decay provided by the standard discount factor $\lambda$, the overall sum converges even though the cumulative asset stream may be unbounded.
\section {Average Depreciating Payoff}
\label{sec:average}
Let us now consider the asymptotic average evaluation criterion, given that assets depreciate.
The payoff of an outcome in this context is defined as
\begin{equation}
\liminf_{n \to \infty} \sum^n_{k=1} \sum^k_{i=1} \frac{R(s_i, a_i) \gamma^{k-i}}{n}, \tag{Average Depreciating Payoff}
\end{equation}
and the associated average depreciating value function is
\begin{equation*}
V^\gamma(s) = \sup_{\pi \in \Pi^M} \bb{E}^\pi_s\bk{\liminf_{n \to \infty} \sum^n_{k=1} \sum^k_{i=1} \frac{R(s_i, a_i) \gamma^{k-i}}{n}}.
\end{equation*}
\noindent
Our main result in this section asymptotically relates the average depreciating value and the discounted depreciating value.
\begin{theorem}[Tauberian Theorem]
\label{thm:FP-MP}
As $\lambda \to 1$ from below, the discounted depreciating value scaled by $(1-\lambda)$ converges to the average depreciating value:
\begin{equation*}
\lim_{\lambda \to 1} (1 - \lambda) V_\lambda^\gamma = V^\gamma.
\end{equation*}
\end{theorem}
The proof of \autoref{thm:FP-MP} uses the following pair of lemmas.
\begin{lemma}
\label{lemma:helper1}
For any finite path in the environmental MDP,
\begin{equation}
\label{eq:helper1}
\sum^n_{k=1} \sum^k_{i=1} \frac{R(s_i, a_i) \gamma^{k-i}}{n} = \sum^n_{k=1} \frac{R(s_k, a_k) (1 - \gamma^{n+1-k})}{n(1 - \gamma)}.
\end{equation}
\end{lemma}
\begin{proof}
We proceed by induction on $n$.
\paragraph*{Base case.}
Suppose that $n=1$.
Then both expressions occurring in \eqref{eq:helper1} evaluate to $R(s_1, a_1)$.
\paragraph*{Inductive case.}
Suppose that \eqref{eq:helper1} holds for $n-1$.
By splitting the summation on the left-hand side of \eqref{eq:helper1}, we obtain the expression
\begin{equation*}
\sum^{n-1}_{k=1} \sum^k_{i=1} \frac{R(s_i, a_i) \gamma^{k-i}}{n} + \sum^n_{k=1} \frac{R(s_k, a_k) \gamma^{n-k}}{n}.
\end{equation*}
Factoring $\frac{n-1}{n}$ from the double summation in this expression yields
\begin{equation*}
\frac{n-1}{n} \sum^{n-1}_{k=1} \sum^k_{i=1} \frac{R(s_i, a_i) \gamma^{k-i}}{n-1} + \sum^n_{k=1} \frac{R(s_k, a_k) \gamma^{n-k}}{n}.
\end{equation*}
Now, applying the inductive hypothesis, this may be rewritten as
\begin{equation*}
\frac{n-1}{n} \sum^{n-1}_{k=1} \frac{R(s_k, a_k) (1 - \gamma^{n-k})}{(n-1)(1 - \gamma)} + \sum^n_{k=1} \frac{R(s_k, a_k) \gamma^{n-k}}{n}.
\end{equation*}
Factoring out $\frac{1}{n(1-\gamma)}$ from the entire expression, we get
\begin{equation*}
\frac{\sum\limits^{n-1}_{k=1} R(s_k, a_k) (1 {-} \gamma^{n-k}) + (1{-}\gamma)\sum\limits^n_{k=1} R(s_k, a_k) \gamma^{n-k}}{n(1-\gamma)}.
\end{equation*}
Distributing through the numerator results in the expression
\begin{equation*}
\hspace{-2pt} \frac{\sum\limits^{n{-}1}_{k{=}1} R(s_k{,}a_k) {-} R(s_k{,}a_k) \gamma^{n{-}k} {+} \sum\limits^n_{k{=}1} R(s_k{,}a_k) \gamma^{n{-}k} {-} R(s_k{,}a_k) \gamma^{n{+}1{-}k}}{n(1-\gamma)}
\end{equation*}
and removing those terms that cancel additively yields
\begin{equation*}
\frac{\sum\limits^{n}_{k{=}1} R(s_k, a_k) - \sum\limits^n_{k{=}1} R(s_k,a_k) \gamma^{n{+}1{-}k}}{n(1-\gamma)}.
\end{equation*}
Finally, we obtain \eqref{eq:helper1} by factoring the numerator one last time:
\begin{equation*}
\sum^n_{k=1} \frac{R(s_k,a_k) (1 - \gamma^{n+1-k})}{n(1-\gamma)},
\end{equation*}
thereby proving that if \eqref{eq:helper1} holds for paths of length $n-1$, then it also holds for paths of length $n$.
\end{proof}
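The identity \eqref{eq:helper1} is also easy to confirm numerically; the following throwaway snippet, in which a random bounded sequence stands in for the rewards $R(s_k,a_k)$, is included purely as a sanity check.
\begin{verbatim}
# Numerical sanity check of eq. (eq:helper1) with 0-based indices.
import random

random.seed(1)
gam, n = 0.6, 50
R = [random.uniform(-1.0, 1.0) for _ in range(n)]

lhs = sum(sum(R[i] * gam**(k - i) for i in range(k + 1))
          for k in range(n)) / n
rhs = sum(R[k] * (1.0 - gam**(n - k)) for k in range(n)) / (n * (1.0 - gam))

assert abs(lhs - rhs) < 1e-10, (lhs, rhs)
\end{verbatim}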
\begin{lemma}
\label{lemma:helper2}
For any infinite path in the environmental MDP,
\begin{equation*}
\lim_{n \to \infty} \sum^n_{k=1} \frac{R(s_k, a_k) \gamma^{n+1-k}}{n(1-\gamma)} = 0.
\end{equation*}
\end{lemma}
\begin{proof}
Since $\gamma^{n+1-k} = \gamma \cdot \gamma^{n-k}$ and $\frac{\gamma}{1-\gamma}$ is a positive constant, it suffices to show that
\begin{equation*}
\lim_{n \to \infty}\frac{1}{n} \sum^n_{k=1} R(s_k, a_k) \gamma^{n-k} = 0.
\end{equation*}
Since the environmental MDP is assumed to be finite, there are finitely many possible reward values and we can bound the summation in the above expression as
\begin{equation*}
\frac{r_\downarrow (1 - \gamma^{n})}{1-\gamma} \leq \sum^n_{k=1} R(s_k, a_k) \gamma^{n-k} \leq \frac{r_\uparrow (1 - \gamma^{n})}{1-\gamma}
\end{equation*}
where $r_\downarrow = \min_{(s,a) \in S \times A} R(s,a)$ and $r_\uparrow = \max_{(s,a) \in S \times A} R(s,a)$.
Lastly, noticing that
\begin{equation*}
\lim_{n \to \infty} \frac{r_\downarrow (1 - \gamma^{n})}{n(1-\gamma)} = \lim_{n \to \infty} \frac{r_\uparrow (1 - \gamma^{n})}{n(1-\gamma)} = 0,
\end{equation*}
it follows that
\begin{equation*}
\lim_{n \to \infty}\frac{1}{n} \sum^n_{k=1} R(s_k, a_k) \gamma^{n-k} = 0.
\end{equation*}
\end{proof}
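To make the decay in \autoref{lemma:helper2} concrete, one can observe it numerically; the snippet below (again with an arbitrary bounded reward sequence of our own choosing) prints the tail term for increasing $n$.
\begin{verbatim}
# The Cesaro-type tail (1/n) * sum_{k=1..n} R_k * gam^(n-k) vanishes
# as n grows, for any bounded reward sequence.
import random

random.seed(2)
gam = 0.8
R = [random.uniform(-1.0, 1.0) for _ in range(20000)]

def tail(n):
    return sum(R[k] * gam**(n - 1 - k) for k in range(n)) / n

print([round(tail(n), 6) for n in (10, 100, 1000, 10000)])
\end{verbatim}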
Now we are in position to prove \autoref{thm:FP-MP}.
\begin{proof}[Proof of \autoref{thm:FP-MP}]
In light of equation \eqref{eq:FPtoM}, it is sufficient to prove the identity $V^\gamma = \frac{V}{1 - \gamma}$.
Applying \autoref{lemma:helper1}, the average depreciating payoff may be rewritten as
\begin{equation*}
\liminf_{n \to \infty} \sum^n_{k=1} \frac{R(s_k, a_k) (1 - \gamma^{n+1-k})}{n(1 - \gamma)}.
\end{equation*}
Distributing the product in the numerator and then breaking the summation into a difference of summations yields the expression
\begin{equation*}
\liminf_{n \to \infty} \paren{\sum^n_{k=1} \frac{R(s_k, a_k)}{n (1-\gamma)} - \sum^n_{k=1} \frac{R(s_k, a_k) \gamma^{n+1-k}}{n (1-\gamma)}}.
\end{equation*}
By \autoref{lemma:helper2}, the right-hand term in this difference tends to 0 as $n \to \infty$, and so the above expression is equivalent to
\begin{equation*}
\liminf_{n \to \infty} \sum^n_{k=1} \frac{R(s_k, a_k)}{n (1-\gamma)}.
\end{equation*}
Factoring out the constant $\frac{1}{1-\gamma}$, the remaining limit is exactly the definition of the average payoff, and thus we conclude, for every state $s$, that
\begin{equation*}
V^\gamma(s) = \frac{V(s)}{1 - \gamma}.
\end{equation*}
\end{proof}
As a direct consequence of \autoref{thm:FP-MP}, there exists a Blackwell optimal policy, that is, a policy that is optimal for $V_\lambda^\gamma$ for all $\lambda$ sufficiently close to 1, which is also optimal for $V^\gamma$.
\begin{corollary}
There exists a discount factor $\lambda_0 \in (0,1)$ and a policy $\pi$ such that, for all $\lambda \in [\lambda_0, 1)$ and every state $s$, it holds that
\begin{align*}
V_\lambda^\gamma(s) &= \bb{E}^\pi_s\bk{\sum^\infty_{n=1} \lambda^{n-1} \sum^n_{k=1} R(s_k, a_k) \gamma^{n-k}}, \\
V^\gamma(s) &= \bb{E}^\pi_s\bk{\liminf_{n \to \infty} \frac{1}{n} \sum^n_{k=1}\sum^k_{i=1} R(s_i, a_i) \gamma^{k-i}}.
\end{align*}
\end{corollary}
In turn, this implies the following result on the strategic complexity for the average depreciating payoff.
\begin{corollary}[Strategic Complexity]
For any average depreciating payoff over any finite MDP, there exists an optimal policy that is stationary and deterministic.
\end{corollary}
\subsection{Markov Decision Processes}
A (finite) \emph{Markov decision process} (MDP) $M$ is a tuple $(S, A, T, R)$ in which $S$ is a finite set of states, $A$ is a finite set of actions, $T : \paren{S \times A} \to \dist{S}$ is a stochastic transition function specifying, for any $s,t \in S$ and $a \in A$ the conditional probability $T(t \mid s,a)$ of moving to state $t$ given that the current state is $s$ and that action $a$ has been chosen, and $R : \paren{S \times A} \to \bb{R}$ is a real-valued reward function mapping each state-action pair to a numerical valuation.
For any function $f : S \to \bb{R}$, i.e. any random variable on the state space of the MDP, we write $\bb{E}_T\bk{f(t) \mid s,a}$ to denote the conditional expectation $\sum_{t \in S} f(t) T(t \mid s,a)$ of $f$ on the successor state, given that the agent has selected action $a$ from state $s$.
A path in $M$ is a sequence $s_1 a_1 s_2 \cdots a_n s_{n+1}$ of alternating states and actions such that $0 < T(s_{k+1} \mid s_k, a_k)$ at every index.
Let $\mc{F}(M)$ denote the set of all finite paths in $M$ and $\mc{I}(M)$ denote the set of all infinite paths in $M$.
\paragraph{Payoffs, Policies, and Optimality.}
We focus on infinite duration quantitative optimization problems where an outcome may be concretized as an infinite path in the MDP.
Such an outcome is evaluated relative to some mapping into the real numbers $\mc{I}(M) \to \bb{R}$ called a payoff.
A policy on $M$ is a function $\pi : \mc{F}(M) \to \dist{A}$ that chooses a distribution over the action set, given a finite path in $M$.
Fixing a policy $\pi$ induces, for each state $s$, a unique probability measure $\bb{P}^\pi_s$ on the probability space over the Borel subsets of $\mc{I}(M)$.
This enables the evaluation of a policy, modulo a payoff and initial state $s$, in expectation $\bb{E}^\pi_s$.
Let $\Pi^M$ be the set of all policies on the MDP $M$.
A policy is optimal for a payoff if it maximizes, amongst all policies, the expected value of that payoff, and this maximal expectation is called the value of the payoff on $M$.
\paragraph{Strategic Complexity.}
The strategic complexity of a payoff characterizes the necessary structure required for a policy to be optimal.
A qualitative aspect of strategic complexity is based on whether or not there exist environments for which optimal policies are necessarily probabilistic (\emph{mixed}).
A policy is \emph{deterministic} (\emph{pure}) if it returns a point distribution for every input.
A policy is stationary if $\pi(s_1 a_1 \cdots a_{n-1} s_n) = \pi(s_n)$ holds at every time $n$.
The class of deterministic stationary policies is of special interest since there are finitely many such policies on any finite MDP; we consider these policies as functions $S \to A$.
\subsection{Discounted and Average Payoffs}
Given a path $s_1 a_1 s_2 \cdots$ in an MDP, two well-studied objectives are the discounted payoff, relative to a discount factor $\lambda \in (0,1)$, and the average payoff, defined as
\begin{gather}
\sum^\infty_{n=1} \lambda^{n-1} R(s_n, a_n), \text{ and } \tag{Discounted Payoff} \\
\liminf_{n \to \infty} \frac{1}{n} \sum^n_{k=1} R(s_k, a_k). \tag{Average Payoff}
\end{gather}
The discounted value and average value functions are defined
\begin{align}
V_\lambda(s) &= \sup_{\pi \in \Pi^M} \bb{E}^\pi_s\bk{\sum^\infty_{n=1} \lambda^{n-1} R(s_n, a_n)}, \tag{Discounted Value} \\
V(s) &= \sup_{\pi \in \Pi^M} \bb{E}^\pi_s\bk{\liminf_{n \to \infty} \sum^n_{k=1} \frac{R(s_k, a_k)}{n}}. \tag{Average Value}
\end{align}
A stronger notion of optimality, specific to the discounted payoff, is Blackwell optimality.
A policy $\pi$ is Blackwell optimal if there exists a discount factor $\lambda_0 \in (0,1)$ such that $\pi$ is optimal for the discounted payoff with any discount factor in the interval $[\lambda_0, 1)$.
An alternative characterization of the discounted value is as the unique solution to the optimality equation
\begin{equation*}
V_\lambda(s) = \max_{a \in A} R(s, a) + \lambda\bb{E}_T\bk{V_\lambda(t) \mid s, a},
\end{equation*}
which is the starting point for establishing the following result on the complexity of discounted and average payoffs \citep{Puterman94,FeinbergShwartz12,FilarVrieze96}.
\begin{theorem}
Both discounted and average payoffs permit deterministic stationary optimal policies. Moreover, optimal values for both payoffs can be computed in polynomial time.
\end{theorem}
\subsection{Reinforcement Learning}
Reinforcement learning (RL)~\citep{Sutton18} is a sampling-based optimization paradigm based on the feedback received from the environment in the form of scalar rewards.
The standard RL scenario assumes a discounted payoff, and model-free approaches typically leverage the \emph{state-action value} or \emph{Q-value}, defined as the optimal value attainable from state $s$ given that action $a$ has been selected first; it is the solution of the equation
\begin{equation*}
Q_\lambda(s, a) = R(s, a) + \lambda \bb{E}_T\bk{ V_\lambda(t) \mid s, a}.
\end{equation*}
The Q-value provides the foundation for the classic Q-Learning algorithm \citep{WatkinsDayan92}, which learns an optimal policy by approximating $Q_\lambda$ with a sequence $Q^n_\lambda$ of maps which asymptotically converge to $Q_\lambda$.
In particular, $Q^1_\lambda$ is initialized arbitrarily and then the agent explores the environment by selecting action $a = \argmax_{a \in A} Q^n_\lambda(s, a)$ from the current state $s$ and performing the update
\begin{equation}
\label{eq:Q-update}
Q^{n+1}_\lambda(s, a) \gets Q^n_\lambda(s, a) + \alpha_n \paren{R(s, a) + \lambda V^n_\lambda(t) - Q^n_\lambda(s, a)},
\end{equation}
in which $t$ is the next state as determined by the outcome of sampling the conditional distribution $T(\cdot \mid s, a)$, the family of $\alpha_n \in (0,1)$ are time-dependent parameters called learning rates, and $V^n_\lambda(t) = \max_{a \in A} Q^n_\lambda(t, a)$.
The following theorem gives a sufficient condition for asymptotic convergence of the $Q$-learning algorithm.
\begin{theorem}[\citet{WatkinsDayan92}]
If every state-action pair in the environmental decision process is encountered infinitely often and the learning rates $0 \leq \alpha_n < 1$ satisfy the Robbins--Monro conditions $\sum_{n=1}^{\infty} \alpha_n = \infty$ and $\sum_{n = 1} ^{\infty} \alpha_n^2 < \infty$, then $Q^{n}_\lambda \to Q_\lambda$ almost surely as $n \to \infty$.
\end{theorem}
\subsection{Depreciating Assets}
We define variations on the discounted and average payoffs based on the idea that the value of an asset decays geometrically in proportion with the amount of time elapsed since it was obtained as a reward.
That is, we consider the situation in which a payoff is determined as a function not of the sequence $\seq{R(s_n, a_n)}^\infty_{n = 1}$ of immediate rewards, but rather of the sequence
\[
\seq{\sum^n_{k=1} R(s_k, a_k) \gamma^{n-k}}^\infty_{n = 1}
\]
of exponential recency-weighted averages of the agent's assets, where $\gamma \in (0,1)$ is a discount factor.
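As a computational aside (not needed for the theory), the depreciating assets satisfy the recursion $A_n = R(s_n, a_n) + \gamma A_{n-1}$ with $A_0 = 0$, so the sequence can be generated incrementally; a minimal sketch with made-up rewards:
\begin{verbatim}
# The depreciating-asset sequence A_n = sum_{k<=n} R_k * gam^(n-k)
# can be generated incrementally via A_n = R_n + gam * A_{n-1}.
def depreciating_assets(rewards, gam):
    assets, a = [], 0.0
    for r in rewards:
        a = r + gam * a
        assets.append(a)
    return assets

print(depreciating_assets([5.0, 7.0, 5.0], gam=0.5))  # [5.0, 9.5, 9.75]
\end{verbatim}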
{
    "arxiv_id": "2302.14194",
    "language": "en",
    "timestamp": "2023-03-01T02:04:27",
    "url": "https://arxiv.org/abs/2302.14194",
    "yymm": "2302"
}
\section{Introduction}
\label{sec:introduction}
To what extent can a convex polytope be reconstructed from partial combinatorial and geometric data, such as its edge-graph, edge lengths, or dihedral angles, optionally up to combinatorial type, affine equivalence, or even isometry?
Questions of~this~na\-ture have a long history
and are intimately linked~to~the various notions of rigidity.
In this article we address the reconstruction from the edge-graph and some~``graph-compatible'' distance data, such as edge lengths.
It is well-understood that the~edge-graph alone carries very little information about the polytope's full combinatorics, and trying to fix this by supplementing
additional metric data reveals
two opposing effects at play.
First and foremost, we need to reconstruct the full combinatorics.
As a general rule of thumb, reconstruction from the edge-graph appears more tractable for polytopes that have relatively few edges (such as \emph{simple} polytopes as proven by Blind \& Mani \cite{blind1987puzzles} and later by Kalai \cite{kalai1988simple})\footnote{Though ``few edges'' is not the best way to capture this in general, see \cite{doolittle2017reconstructing} or \cite{joswig2000neighborly}.}.
At the same time, however, such polytopes often have too few edges to encode sufficient metric data for reconstructing the geometry.
This is most~evident~for polygons, but happens non-trivially in higher~di\-mensions and with non-simple polytopes as well (see~\cref{fig:flex_polytope}).
\begin{figure}[h!]
\centering
\includegraphics[width=0.62\textwidth]{fig/flex_polytope_2.pdf}
\caption{Non-isometric realizations with the same edge lengths.}
\label{fig:flex_polytope}
\end{figure}
In contrast, \emph{simplicial} polytopes have many edges, and it follows from Cauchy's rigidity theorem that they are determined up to isometry by their edge lengths, \emph{provided} that the full combinatorics is known.
For simplicial polytopes, however, the edge-graph alone is usually not enough to reconstruct the combinatorics in the first place (as evidenced by the abundance of neighborly polytopes).
This leads to the following question: how much and what kind of data do we~need to supplement to the edge-graph to permit
\begin{myenumerate}
\item unique reconstruction of the combinatorics, also for polytopes with many edges (such as simplicial polytopes), and at the same time,
\item unique reconstruction of the geometry, also for polytopes with few edges (such as simple polytopes).
\end{myenumerate}
Also, ideally the supplemented data fits into the structural framework provided by the edge-graph, that is, consists of on the order of $\#\text{edges}+\#\text{vertices}$ data values.
We propose the following: besides the edge-graph and edge lengths, we also fix a point in the interior of the polytope $P$, and we record its distance to each vertex of $P$ (\shortStyle{cf.}\ \cref{fig:central_rods}).
We believe that this is sufficient data to reconstruct the polytope up to isometry across all dimensions and all combinatorial types.
\begin{figure}[h!]
\centering
\includegraphics[width=0.37\textwidth]{fig/central_rods_2}
\caption{A ``pointed polytope'', \shortStyle{i.e.,}\ a polytope $P\subset\RR^d$ with a point $x\in\Int(P)$. In addition to the edge lengths we also record the lengths~of the gray bars -- the ``vertex-point distances''.}
\label{fig:central_rods}
\end{figure}
Here and in the following we can assume that the polytopes are suitably transla\-ted so that the chosen point is the origin $0\in\Int(P)$.
\begin{conjecture}\label{conj:main_rigid_intro}
Let $P\subset\RR^d$ and $Q\subset\RR^e$ be polytopes with the origin in their respective interiors and with isomorphic edge-graphs, so that corresponding edges are of the same length and corresponding vertices have the same distance to the origin.
Then $P\simeq Q$ (\shortStyle{i.e.,}\ $P$ and $Q$ are isometric via an orthogonal transformation).
\end{conjecture}
Requiring the origin to lie \emph{in the interior} is necessary to prevent counterexamples such as the one shown in \cref{fig:origin_outside_tnontriv_ex}.
This conjecture vastly generalizes~several~known reconstruction results, such as for matroid base polytopes or highly symmetric~polytopes~(see~\cref{sec:vast_generalization}).
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\textwidth]{fig/origin_outside_tnontriv_ex}
\caption{Two non-isometric realizations of a pentagon~with~the~same \mbox{edge lengths and vertex-point distances. This is possible~\mbox{because}~the~point} is not in the interior.}
\label{fig:origin_outside_tnontriv_ex}
\end{figure}
We also make the following stronger conjecture:
\begin{conjecture}\label{conj:main}
Given two polytopes $P\subset\RR^d$ and $Q\subset\RR^e$ with isomorphic edge-graphs, and so that
\begin{myenumerate}
\item $0\in\Int(Q)$,
\item edges in $Q$ are \ul{at most as long} as their counterparts in $P$, and
\item vertex-origin distances in $Q$ are \ul{at least as large} as their counterparts in $P$,
\end{myenumerate}
then $P\simeq Q$ ($P$ and $Q$ are isometric via an orthogonal transformation).
\end{conjecture}
Intuitively, \cref{conj:main} states that a polytope cannot become larger (or~``more expanded'' as measured in vertex-origin distances) while its edges are getting shorter.
It is clear that \cref{conj:main_rigid_intro} is a consequence of \cref{conj:main}, and we shall call the former the ``unique reconstruction version'' of the latter.
Here, the necessity of the precondition $0\in\Int(Q)$ can be seen even more quickly: vertex-origin distances can be increased arbitrarily by translating the polytope just far enough away from the origin (see also \cref{fig:origin_outside_trivial_ex}).
\begin{figure}[h!]
\centering
\includegraphics[width=0.43\textwidth]{fig/origin_outside_trivial_ex.pdf}
\caption{If $x\not\in\Int(Q)$, then it is possible for $Q$ to have shorter edges than $P$, while simultaneously having all vertices farther away from $x$.}
\label{fig:origin_outside_trivial_ex}
\end{figure}
In this article we develop techniques that we believe point the way towards a resolution of these conjectures.
We then verify the conjectures~in~the~follo\-wing three relevant special cases:
\begin{itemize}
\item $P$ and $Q$ are centrally symmetric (\cref{res:centrally_symmetric}),
\item $Q$ is a slight perturbation of $P$ (\cref{thm:main_local}),
\item $P$ and $Q$ are combinatorially equivalent (\cref{thm:main_comb_eq}).
\end{itemize}
Our eventual formulations of the first two special cases will in fact be more general, replacing $Q$ by some embedding $q\:V(G_P)\to\RR^e$ of the edge-graph $G_P$, where~$q$ is no longer assumed to be the skeleton of any polytope.
These results can then~also be interpreted as claiming rigidity, local or universal, of certain bar-joint or tensegrity frameworks.
\subsection{Notation and terminology}
\label{sec:setting}
Throughout the article, all polytopes are convex and bounded, in particular, can be written as the convex hull of their vertices:
$$P=\conv\{p_1,...,p_n\}:=\Big\{\smash{\sum_{i}\alpha_i p_i \mid \alpha\in\Delta_n}\Big\},$$
where $\Delta_n:=\{x\in\RR^n_{\ge 0}\mid x_1+\cdots+x_n=1\}$ denotes the set of convex coefficients.
If not stated otherwise, $P\subset\RR^d$ will denote a polytope in $d$-dimensional space~for $d\ge 2$, though
its affine hull $\aff(P)$ might be a proper subspace of $\RR^d$.
If $\dim\aff(P)=d$ we say that $P$ is \emph{full-dimensional}.
Our polytopes are often \emph{pointed}, that is, they come with a special point $x\in\Int(P)$ (sometimes also on $\partial P$ or outside); but we usually translate $P$ so that $x$ is the origin.
So, instead of distances from the vertices to $x$, we just speak of \emph{vertex-origin distances}.
By $\F(P)$ we denote the \emph{face-lattice} of $P$, and by $\F_\delta(P)$ the subset of $\delta$-dimensional faces.
We shall assume a fixed enumeration $\F_0(P)=\{p_1,...,p_n\}$ of the polytope's vertices (\shortStyle{i.e.,}\ our polytopes are \emph{labelled}), in particular, the number of vertices will be denoted by $n$.
We also often use a polytope $Q\subset\RR^e$ whose vertices are denoted $\F_0(Q)=\{q_1,...,q_n\}$.
The edge-graph of $P$ is the finite simple graph $G_P=(V,E)$, where $V=\{1,...,n\}$ is compatible with the vertex labelling, that is, $i\in V$ corresponds to $p_i\in \F_0(P)$~and $ij\in E$ if and only if $\conv\{p_i,p_j\}\in\F_1(P)$.
The graph embedding given by $i\mapsto p_i$ (with edges embedded as line segments) is called \emph{(1-)skeleton} $\skel(P)$ of $P$.
When speaking of combinatorially equivalent polytopes $P$ and $Q$, we shall implicitly fix a face-lattice isomorphism $\phi \:\F(P)\xrightarrow{\sim} \F(Q)$ compatible with the vertex labels, \shortStyle{i.e.,}\ $\phi(p_i)=q_i$.
This also allows us to implicitly associate faces of $P$ to faces of $Q$, for example, for a face $\sigma\in\F(P)$ we can write $\sigma_Q$ for the corresponding face in $\F(Q)$.
Likewise, if $P$ and $Q$ are said to have isomorphic edge-graphs, we implicitly assume a graph isomorphism $G_P\xrightarrow{\sim}G_Q$ sending $p_i$ onto $q_i$.
We will then often say that $P$ and $Q$ have \emph{a common} edge-graph, say, $G_P$.
We write $P\simeq Q$ to denote that $P$ and $Q$ are isometric. Since our polytopes are usually suitably translated, if not stated otherwise, this isometry can be assumed to be realized by an orthogonal transformation.
Let us repeat \cref{conj:main} using our terminology:
\begin{conjectureX}{\ref{conj:main}}
Given polytopes $P\subset\RR^d$ and $Q\subset\RR^e$ with the same edge-graph $G_P=(V,E)$, so that
\begin{myenumerate}
\item $0\in\Int(Q)$,
\item edges in $Q$ are at most as long as in $P$, \shortStyle{i.e.,}\
%
$$\|q_i-q_j\|\le \|p_i-p_j\|,\quad\text{for all $ij\in E$},$$
\item vertex-origin distances in $Q$ are at least as large as in $P$, \shortStyle{i.e.,}\
%
$$\|q_i\|\ge \|p_i\|,\quad\text{for all $i\in V$},$$
\end{myenumerate}
then $P\simeq Q$.
\end{conjectureX}
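Purely as an illustration, and not as part of any formal argument, the hypotheses of \cref{conj:main} are straightforward to check for explicit coordinates. The snippet below, with names of our own choosing, tests conditions \itm2 and \itm3 for two labelled vertex configurations sharing an edge list; condition \itm1 has to be checked separately.
\begin{verbatim}
# Checks conditions (2) and (3) of the conjecture for explicit
# coordinates P, Q (same vertex labelling) and a common edge list E.
import numpy as np

def satisfies_hypotheses(P, Q, E, tol=1e-12):
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    edges_ok = all(np.linalg.norm(Q[i] - Q[j])
                   <= np.linalg.norm(P[i] - P[j]) + tol for i, j in E)
    radii_ok = all(np.linalg.norm(Q[i]) + tol >= np.linalg.norm(P[i])
                   for i in range(len(P)))
    return edges_ok and radii_ok

# toy example: a square and a scaled-down copy of it
P = [[1, 1], [-1, 1], [-1, -1], [1, -1]]
Q = [[0.9 * x, 0.9 * y] for x, y in P]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(satisfies_hypotheses(P, Q, E))  # False: all vertices moved inwards
\end{verbatim}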
\subsection{Structure of the article}
\label{sec:overview}
In \cref{sec:warmup} we prove the instructive special case of \cref{conj:main} where both $P$ and $Q$ are simplices.
While comparatively straightforward, the proof helps us to identify a quantity\nlspace --\nlspace we call it the \emph{expansion}~of~a~polytope -- that is at the core of a more general approach.
The goal of \cref{sec:expansion_version} is to show that the \enquote{expansion} of a polytope is monotone in its edge lengths, that is, decreases when the edge lengths shrink.
In~fact,\nlspace we~verify this in the more general context that replaces $Q$ by a general embedding $q\: V(G_P)\to\RR^d$ of $P$'s edge-graph.
As a main tool we introduce~the~\emph{Wachspress coordinates} (a special class of generalized barycentric coordinates) and discuss a theo\-rem of Ivan Izmestiev.
In \cref{sec:expansion_rigidity_tensegrity} we apply these results to prove \cref{conj:main} for the three special cases: centrally symmetric, close-by and combinatorially equivalent polytopes. We also discuss the special case of inscribed polytopes.
We elaborate how our tools can potentially be used to attack \cref{conj:main}.
In \cref{sec:conclusions} we conclude our investigation with further thoughts on our results, notes on connections to the literature, as well as \emph{many} questions and future research directions.
Despite being a conclusion section, it is quite rich in content, as we found it more appropriate to gather many notes here rather than to repeatedly~interrupt the flow of~the main text.
\section{Warmup: a proof for simplices}
\label{sec:warmup}
To get acquainted with the task we discuss the instructive special case~of \cref{conj:main} where both $P$ and $Q$ are simplices.
The proof is reasonably short but already contains central ideas for the general case.
\begin{theorem}\label{res:special_case_simplices}
Let $P,Q\subset \RR^d$ be two simplices so that
\begin{myenumerate}
\item $0\in\Int(Q)$,
\item edges in $Q$ are at most as long as in $P$, and
\item vertex-origin distances in $Q$ are at least as large as in $P$,
\end{myenumerate}
then $P\simeq Q$.
\begin{proof}
By \itm1 we can choose barycentric coordinates $\alpha\in\Int\Delta_n$ for the origin~in~$Q$, that is, $0=\sum_i\alpha_i q_i$.
Consider the following system of equalities and inequalities:
\begin{align}\notag
\sum_i \alpha_i \|p_i\|^2
&= \Big\|\sum_i \alpha_i p_i\Big\|^2 \!+ \tfrac12\sum_{i,j} \alpha_i\alpha_j \|p_i-p_j\|^2
\\[-1.5ex]
\rotatebox{90}{$\ge$}
\qquad
&
\qquad \quad \;\,
\rotatebox{90}{$\le$}
\qquad \qquad \qquad \qquad\!
\rotatebox{90}{$\le$}
\\[-1.5ex]\notag
\sum_i \alpha_i \|q_i\|^2
&= \Big\|\sum_i \alpha_i q_i\Big\|^2 \!+ \tfrac12\sum_{i,j} \alpha_i\alpha_j \|q_i-q_j\|^2
\end{align}
\label{eq:simplex_case}
The equalities of the first and second row can be verified by rewriting the norms as inner products followed by a straightforward computation.
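For the reader's convenience we spell out this computation: using $\sum_i\alpha_i=1$ and $\|p_i-p_j\|^2=\|p_i\|^2+\|p_j\|^2-2\langle p_i,p_j\rangle$, one finds
$$\tfrac12\sum_{i,j} \alpha_i\alpha_j \|p_i-p_j\|^2 = \sum_i \alpha_i\|p_i\|^2 - \Big\|\sum_i\alpha_i p_i\Big\|^2,$$
which is exactly the equality in the first row; the second row follows in the same way with $q$ in place of $p$.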
The vertical inequalities follow, from left to right, using \itm3, the definition of $\alpha$, and \itm2 respectively.
But considering this system of (in)equalities, we must conclude that all inequalities are actually satisfied with equality.
In particular, equality in the right-most terms yields $\|p_i-p_j\|=\|q_i-q_j\|$ for all $i,j\in V(G_P)$ (here we are using $\alpha_i>0$).\nolinebreak\space%
But sets of points with pairwise identical distances are isometric.
\end{proof}
\end{theorem}
Why can't we apply this proof to general polytopes?
The right-most sum in~\eqref{eq:simplex_case} iterates over all vertex pairs and measures, if you will, a weighted average of pairwise vertex distances in $P$.
In simplices each~vertex pair forms an edge, and hence, if all edges decrease in length, this average de\-creases as well.
In general polytopes however, when edges become shorter, some \enquote{non-edge vertex distances} might still~increase, and so the right-most inequality cannot be obtained in the same term-wise fashion.
In fact, there is no reason to expect that the inequality holds at all.
It should then be surprising to learn that it actually does hold, at least in some controllable circumstances that we explore in the next section.
This will allow us to generalize \cref{res:special_case_simplices} beyond simplices.
\section{$\alpha$-expansion, Wachspress coordinates and the Izmestiev matrix}
\label{sec:expansion_version}
Motivated by the proof of the simplex case (\cref{res:special_case_simplices}) we define the following measure of size for a polytope (or graph embedding $p\: V(G_P)\to\RR^d$):
\begin{definition}
For $\alpha\in\Delta_n$ the \emph{$\alpha$-expansion} of $P$ is
$$\|P\|_\alpha^2 := \tfrac12\sum_{i,j} \alpha_i\alpha_j \|p_i-p_j\|^2.$$
\end{definition}
The sum in the definition iterates over all pairs of vertices and so the $\alpha$-expansion measures a weighted average of vertex distances, in particular, $\|P\|_\alpha$ is a translation invariant measure.
If all pairwise distances between vertices decrease, so does the~$\alpha$-expansion.
The surprising fact, and main result of this section (\cref{res:expansion_main_result}), is that for a carefully chosen $\alpha\in\Delta_n$ the $\alpha$-expansion decreases already if only the edge lengths decrease, independent of what happens to other vertex distances.
In fact, this statement holds true in much greater generality and we state it already here (it mentions \emph{Wachspress coordinates}, which we define in the next section; one should read this as \enquote{there exists an $\alpha\in\Delta_n$ so that ...}):
\begin{theorem}\label{res:expansion_main_result}
Let $P\subset\RR^d$ be a polytope with edge-graph $G_P=(V,E)$ and let~$\alpha\in\Delta_n$ be the Wachspress coordinates of some interior point $x\in\Int(P)$.
If $q\: V$ $\to\RR^e$ is some embedding of $G_P$ whose edges are at most as long as in $P$, then
$$\|P\|_\alpha\ge \|q\|_\alpha,$$
with equality if and only if $q$ is an affine transformation of the skeleton $\skel(P)$,\nolinebreak\space all~ed\-ges of which are of the same length as in $P$.
\end{theorem}
Indeed, \cref{res:expansion_main_result} is not so much about comparing $P$ with another polytope, but actually about comparing the skeleton $\skel(P)$ with some other graph embedding $q$ that might not be the skeleton of any polytope and might even be embedded in a lower- or higher-dimensional Euclidean space.
Morally, \cref{res:expansion_main_result} says: \emph{polytope skeleta are~maximally expanded for their edge lengths}, where \enquote{expansion}~here measures~an average of vertex distances with carefully chosen weights.
The result clearly hinges on the existence of these so-called \emph{Wachspress coordinates}, which we introduce now.
\subsection{Wachspress coordinates}
\label{sec:Wachspress_Izmestiev}
In a simplex $P\subset\RR^d$ each point $x\in P$ can~be~expressed as a convex combination of the simplex's vertices in a unique way:
$$x=\sum_i \alpha_i p_i.$$
The coefficients $\alpha\in\Delta_n$ are called the \emph{barycentric coordinates} of $x$ in $P$.
In a general polytope $P\subset\RR^d$ there are usually many ways to express a point~$x\in P$ as a convex combination of the polytope's vertices.
In many applications however it is desirable to have a canonical choice, so to say \enquote{generalized barycentric~coordinates}.
Various such coordinates have been defined (see \cite{floater2015generalized} for an overview), one of them being the so-called \emph{Wachspress coordinates}.
Those were initially defined by Wachspress for polygons \cite{wachspress1975rational}, and later generalized to general polytopes by Warren et al.~\cite{warren1996barycentric,warren2007barycentric}.
A construction, with a geometric interpretation due to \cite{ju2005geometric}, is given in \cref{sec:relation_Wachspress_Izmestiev} below.
The relevance of the Wachspress coordinates for our purpose is however not so much in their precise definition, but rather in their relation to a polytope invariant of \enquote{higher rank} that we introduce next.
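For polygons the coordinates are particularly easy to evaluate. The following sketch (in Python; not part of the paper and different from the general-dimensional construction below) uses the classical triangle-area formula for the polygon case (see \shortStyle{e.g.}\ \cite{floater2015generalized}), assuming a convex polygon with counterclockwise vertex order and an interior point.
\begin{verbatim}
# Sketch (not from the paper): Wachspress coordinates of an interior point x
# of a convex polygon, via the classical triangle-area formula for polygons.
import numpy as np

def signed_area(a, b, c):
    # signed area of the triangle (a, b, c); positive for counterclockwise order
    return 0.5 * ((b[0]-a[0]) * (c[1]-a[1]) - (b[1]-a[1]) * (c[0]-a[0]))

def wachspress_polygon(p, x):
    # p: vertices of a convex polygon in counterclockwise order, x: interior point
    n = len(p)
    w = np.empty(n)
    for i in range(n):
        prv, nxt = p[(i - 1) % n], p[(i + 1) % n]
        w[i] = signed_area(prv, p[i], nxt) / (
            signed_area(x, prv, p[i]) * signed_area(x, p[i], nxt))
    return w / w.sum()

# made-up convex pentagon (counterclockwise) and an interior point
p = np.array([[1.0, 0.0], [0.3, 0.9], [-0.8, 0.6], [-0.8, -0.6], [0.3, -0.9]])
x = np.array([0.1, 0.2])
alpha = wachspress_polygon(p, x)
print(alpha, alpha.sum(), alpha @ p)   # positive weights, summing to 1, reproducing x
\end{verbatim}
The last print confirms the defining barycentric property: the weights are positive, sum to one and reproduce the point $x$.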
\iffalse
\begin{construction}[Wachspress coordinates]
For $x\in\Int(P)$ let $P-x$ denote the translate of $P$, and $(P-x)^\circ$ its polar dual.
For $i\in\{1,...,n\}$ let $\sigma_i^\circ\in\F_{d-1}((P-x)^\circ)$ be the dual facet to the vertex $p_i-x\in\F_0(P-x)$ and $C_i$ the cone over $\sigma_i^\circ$ with apex at the origin.
Its volume is $\vol(C_i)=\vol(\sigma_i^\circ)/d\|p_i\|$.
The Wachspress coordinates $\alpha\in\Delta_n$ of the point $x\in\Int(P)$ can now be defined as follows: $\alpha_i$ is the fraction of $\vol(C_i)$ in the total volume of $(P-x)^\circ$, or
$$\alpha_i:=\frac{\vol(C_i)}{\vol((P-x)^\circ)}.$$
\end{construction}
\fi
\iffalse
We collect the most relevant properties of the Wachspress coordinates \cite{warren1996barycentric,warren2007barycentric, ju2005geometric}.
\begin{remark}[Properties of the Wachspress coordinates]\quad
\label{rem:Wachspress_properties}
\begin{myenumerate}
\item The Wachspress coordinates depend continuously on $x$ and can be continuously extended to the boundary $\partial P$. If $x$ is contained in a face $\sigma\in\F(P)$, then $\alpha_i>0$ if and only if $p_i\in \sigma$. In particular, if $x\in\Int(P)$, then~$\alpha\in\Int\Delta_n$.
\item The Wachspress coordinates are an (almost) local geometric invariant, that is, $\alpha_i$ only depends on a local neighborhood of $p_i$, as well as its distance~from $x$.
\end{myenumerate}
\end{remark}
We require one last ingredient before proving \cref{res:expansion_main_result}.
\fi
\subsection{The Izmestiev matrix}
At the core of our proof of \cref{res:expansion_main_result} is the observation that the Wachspress coordinates are merely a shadow of a \enquote{higher rank} object that we call the \emph{Izmestiev matrix} of $P$; an $(n\x n)$-matrix associated~to~an~$n$-vertex polytope with $0\in\Int(P)$, whose existence and properties in connection with graph skeleta were established by Lovász in dimension three \cite{lovasz2001steinitz}, and by Izmestiev in general dimension \cite{izmestiev2010colin}.
We summarize the findings:
\begin{theorem}
\label{res:Izmestiev}
Given a polytope $P\subset\RR^d$ with $0\in\Int(P)$ and edge-graph~$G_P=(V,$ $E)$,
there exists a symmetric matrix $M\in\RR^{n\x n}$ (the Izmestiev matrix of $P$) with~the following properties:
\begin{myenumerate}
\item $M_{ij}>0$ if $ij\in E$,
\item $M_{ij}=0$ if $i\not=j$ and $ij\not\in E$,
\item $\dim\ker M=d$,
\item $MX_P=0$, where $X_P^{\Tsymb}=(p_1,...,p_n)\in\RR^{d\x n}$, and
\item $M$ has a unique positive eigenvalue (of multiplicity one).
\end{myenumerate}
\end{theorem}
Izmestiev provided an explicit construction of this matrix that we discuss in~\cref{sec:relation_Wachspress_Izmestiev} below.
Another concise proof of the spectral properties of the Izmestiev~matrix can be found in the appendix of \cite{narayanan2021spectral}.
\begin{observation}\label{res:Izmestiev_observations}
Each of the properties \itm1 to \itm5 of the Izmestiev matrix will~be crucial for proving \cref{res:expansion_main_result} and we shall elaborate on each point below:
\begin{myenumerate}
\item \cref{res:Izmestiev} \itm1 and \itm2 state that $M$ is some form of generalized adjacency matrix, having non-zero off-diagonal entries if and only if the polytope has an edge between the corresponding vertices.
Note however that the theorem tells nothing directly about the diagonal entries of $M$.
\item \cref{res:Izmestiev} \itm3 and \itm4 tell us precisely what the kernel of $M$ looks like, namely, $\ker M=\Span X_P$. The inclusion $\ker M\supseteq \Span X_P$ follows directly from \itm4. But since $P$ has at least one interior point (the origin) it must be a full-dimensional polytope, meaning that $\rank X_P=d$. Comparison of dimensions (via \itm3) yields the claimed equality.
\item Let $\{\theta_1 > \theta_2 > \cdots > \theta_m\}$ be the spectrum of $M$. \cref{res:Izmestiev} \itm5 then tells us that $\theta_1>0$, $\theta_2=0$ and $\theta_k<0$ for all $k\ge 3$.
\item $M':= M+\gamma\Id$ is a non-negative matrix if $\gamma>0$ is sufficiently large,\nolinebreak\space and is then subject to the \emph{Perron-Frobenius theorem} (see \cref{res:Perron_Frobenius}). Since the edge-graph $G_P$ is connected, the matrix $M'$ is \emph{irreducible}.
The crucial information provided by the Perron-Frobenius theorem is that $M'$ has an eigenvector $z\in\RR^n$ to its largest eigenvalue (that is, $\theta_1+\gamma$), all entries~of which are positive.
By an appropriate scaling we can assume $z\in\Int(\Delta_n)$; this $z$ is then a $\theta_1$-eigenvector of the Izmestiev matrix $M$ and, in fact, spans its $\theta_1$-eigenspace.
\end{myenumerate}
\end{observation}
Note that the properties \itm1 to \itm5 in \cref{res:Izmestiev} are invariant under scaling~of $M$ by a \emph{positive} factor.
As we verify in \cref{sec:relation_Wachspress_Izmestiev} below, $\smash{\sum_{i,j}M_{ij}>0}$,\nolinebreak\space and so we can fix the convenient normalization $\smash{\sum_{i,j}M_{ij}=1}$.
In fact, with this~normalization in place we can now reveal that the Wachspress coordinates emerge simply as the row sums of $M$:
\begin{equation}
\label{eq:Wachspress_is_Izemstiev_row_sum}
\alpha_i=\sum_j M_{ij}, \quad\text{for all $i\in\{1,...,n\}$}.
\end{equation}
This connection has previously been observed in \cite[Section 4.2]{ju2005geometric} for 3-dimensional polytopes, and we shall verify the general case in the next section (\cref{res:Wachspress_is_Izemstiev_row_sum}).
\iffalse
This allows us to choose a convenient normalization for $M$, such as $\sum_{i,j} M_{ij}= 1$.
That this is possible with a positive factor requires that \cref{res:Izmestiev} provides a matrix with $\sum_{i,j} M_{ij}>0$, which we verify in \cref{sec:relation_Wachspress_Izmestiev}.
In fact, the latter is the~more suitable normalization for our purpose and we shall assume it implicitly in the following.
Most importantly, it allows us to state the fundamental relation between $M$ and the Wachspress coordinates $\alpha\in\Delta_n$ of the point $0\in\Int(P)$: the Wachspress coordinates of the origin $0\in \Int(P)$ are the row sums of the Izmestiev matrix, \shortStyle{i.e.,}\
$$\alpha_i:=\sum_j M_{ij}, \quad\text{for all $i\in\{1,...,n\}$}.$$
This connection has previously been observed in \cite[Section 4.2]{ju2005geometric} for 3-dimensional polytopes.
Using appropriate geometric definitions for the Wachspress coordinates and the Izmestiev matrix yields an elementary geometric proof in general dimension which can be found in \cref{sec:appendix_Izemstiev_row_sums}.
Izmestiev furthermore provides an explicit form for the entries of $M$.
These~and other tangential results concerning the Izmestiev matrix, as well as some historical notes, are collected in \cref{sec:appendix_Izemstiev_Wachspress}.
Among them (\cref{res:Izmestiev_sum_is_vol_dual}) is the fact that with Izmestiev's explicit entries we have
$$\sum_{i,j} M_{ij} = d(d-1) \vol(P^\circ) > 0,$$
where $P^\circ$ denotes the polar dual of $P$.
Note that the properties \itm1 to \itm5 in~\cref{res:Izmestiev} are invariant under scaling $M$ by a positive~factor.
This allows~us~to~choose a different normalization for $M$, such as $\sum_{i,j} M_{ij}= 1$.
In fact, the latter is the~more suitable normalization for our purpose and we shall assume it implicitly in the following.
Most importantly, it allows us to state the fundamental relation between $M$ and the Wachspress coordinates $\alpha\in\Delta_n$ of the point $0\in\Int(P)$: the Wachspress coordinates of the origin $0\in \Int(P)$ are the row sums of the Izmestiev matrix, \shortStyle{i.e.,}\
$$\alpha_i:=\sum_j M_{ij}, \quad\text{for all $i\in\{1,...,n\}$}.$$
This connection has previously been observed in \cite[Section 4.2]{ju2005geometric} for 3-dimensional polytopes.
Using appropriate geometric definitions for the Wachspress coordinates and the Izmestiev matrix yields an elementary geometric proof in general dimension which can be found in \cref{sec:appendix_Izemstiev_row_sums}.
\hrulefill
\fi
\subsection{The relation between Wachspress and Izmestiev}
\label{sec:relation_Wachspress_Izmestiev}
The Wachspress coordinates and the Izmestiev matrix can be defined simultaneously in a rather elegant fashion: given a polytope $P\subset\RR^d$ with $d\ge 2$ and~$0\in\Int(P)$, as well as a vector~$\mathbf c=$ $(c_1,...,c_n)\in\RR^n$, consider the \emph{generalized polar dual}
$$P^\circ(\mathbf c):=\big\{x\in\RR^d\mid \<x,p_i\>\le c_i\text{ for all $i\in V(G_P)$}\big\}.$$
We have that $P^\circ(\mathbf 1)$ with $\mathbf 1=(1,...,1)$ is the usual polar dual.
The (unnormalized) Wachspress coordinates $\tilde\alpha\in\RR^n$ of the origin and the (unnormalized) Izmestiev~matrix $\tilde M$ $\in$ $\RR^{n\x n}$ emerge as the coefficients in the Taylor expansion~of~the volume of $P^\circ(\mathbf c)$ at $\mathbf c=\mathbf 1$:
\begin{equation}\label{eq:Taylor_definition}
\vol\!\big(P^\circ(\mathbf c)\big)=\vol(P^\circ)+\<\mathbf c-\mathbf
1,\tilde\alpha\>+\tfrac12(\mathbf c-\mathbf 1)^{\Tsymb}\!\tilde M(\mathbf c-\mathbf 1)+\cdots.
\end{equation}
In other words,
\begin{equation}\label{eq:variational_definition}
\tilde\alpha_i := \frac{\partial\vol(P^\circ(\mathbf c))}{\partial c_i}\Big|_{\mathbf c=\mathbf 1}\quad\text{and}\quad
\tilde M_{ij} := \frac{\partial^2\vol(P^\circ(\mathbf c))}{\partial c_i\partial c_j}\Big|_{\mathbf c=\mathbf 1}.
\end{equation}
In this form, one might recognize the~(unnor\-malized) Izmes\-tiev matrix of $P$ as the \emph{Alexandrov ma\-trix} of the polar dual $P^\circ$.
Geometric interpretations for \eqref{eq:variational_definition} were given in \cite[Section 3.3]{ju2005geometric} and \cite[proof of Lemma 2.3]{izmestiev2010colin}: for a vertex $p_i\in \F_0(P)$ let $F_i\in\F_{d-1}(P^\circ)$ be~the corresponding dual~facet.
Likewise, for an edge $e_{ij}\in\F_1(P)$ let $\sigma_{ij}\in\F_{d-2}(P^\circ)$ be the corresponding dual face of codimension 2.
Then
\begin{equation}\label{eq:geometric_definition}
\tilde\alpha_i = \frac{\vol(F_i)}{\|p_i\|}\quad\text{and}\quad \tilde M_{ij}=\frac{\vol(\sigma_{ij})}{\|p_i\|\|p_j\|\sin\sphericalangle(p_i,p_j)} \quad\text{for $i\not=j$},
\end{equation}
where $\vol(F_i)$ and $\vol(\sigma_{ij})$ are to be understood as relative volume.
The expression for $\tilde\alpha$ is (up to a constant factor) the \emph{cone volume} of $F_i$ in $P^\circ$\!, \shortStyle{i.e.,}\ the volume~of~the cone with base face $F_i$ and apex at the origin.
As such it is positive, which confirms again~that we can normalize to $\alpha\in\Delta_n$, and we see that $\alpha_i$ measures the fraction~of the cone vo\-lume at $F_i$ in the total volume of $P^\circ$.
That $\smash{\tilde M}$ can be normalized~follows from the next statement, which is a precursor to \eqref{eq:Wachspress_is_Izemstiev_row_sum}:
\begin{proposition}
$\sum_j \tilde M_{ij} = (d-1)\tilde \alpha_i$.
\end{proposition}
\begin{proof}
Observe first that $\vol(P^\circ(\mathbf c))$ is a homogeneous function of degree $d$, \shortStyle{i.e.,}\
$$\vol(P^\circ(t\mathbf c)) = \vol(tP^\circ(\mathbf c)) = t^d\vol(P^\circ(\mathbf c))$$
for all $t\ge 0$.
Each derivative $\partial\vol(P^\circ(\mathbf c))/\partial c_i$ is then homogeneous of degree $d-1$.
\emph{Euler's homogeneous function theorem} (\cref{res:Eulers_homogeneous_function_theorem}) yields
$$
\sum_j c_j \frac{\partial^2\vol(P^\circ(\mathbf c))}{\partial c_i\partial c_j}
= (d-1) \frac{\partial\vol(P^\circ(\mathbf c))}{\partial c_i}.
$$
Evaluating at $\mathbf c=\mathbf 1$ and using \eqref{eq:variational_definition} yields the claim.
\end{proof}
We immediately see that $\sum_{i,j}\tilde M_{ij}=(d-1)\sum_i\tilde\alpha_i>0$, and so we can indeed normalize to $\sum_{i,j}M_{ij} = 1$. For the normalized quantities, \eqref{eq:Wachspress_is_Izemstiev_row_sum} then holds as claimed:
\begin{corollary}
\label{res:Wachspress_is_Izemstiev_row_sum}
$\sum_{i} M_{ij}=\alpha_j$ for all $j\in\{1,...,n\}$.
\end{corollary}
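The variational description \eqref{eq:variational_definition} also lends itself to a direct numerical experiment. The following sketch (in Python; not part of the paper) approximates $\tilde\alpha$ and $\tilde M$ for a made-up pentagon by finite differences of the dual volume, assuming SciPy's \texttt{HalfspaceIntersection} and \texttt{ConvexHull}, and then checks the row-sum identity, the kernel property $MX_P=0$, and the sign pattern of the spectrum from \cref{res:Izmestiev}.
\begin{verbatim}
# Sketch (not from the paper): approximate the unnormalized Wachspress
# coordinates and Izmestiev matrix of a polygon with 0 in its interior as
# first and second finite-difference derivatives of vol(P^o(c)) at c = 1.
import numpy as np
from scipy.spatial import ConvexHull, HalfspaceIntersection

P = np.array([[1.0, 0.0], [0.3, 0.9], [-0.8, 0.6], [-0.8, -0.6], [0.3, -0.9]])
n, d = P.shape

def vol_dual(c):
    # P^o(c) = {x : <x, p_i> <= c_i}, written as halfspaces A x + b <= 0
    hs = HalfspaceIntersection(np.hstack([P, -c[:, None]]), np.zeros(d))
    return ConvexHull(hs.intersections).volume

h, c0, E = 1e-4, np.ones(n), np.eye(n)
alpha = np.array([(vol_dual(c0 + h*E[i]) - vol_dual(c0 - h*E[i])) / (2*h)
                  for i in range(n)])
M = np.array([[(vol_dual(c0 + h*(E[i]+E[j])) - vol_dual(c0 + h*(E[i]-E[j]))
                - vol_dual(c0 - h*(E[i]-E[j])) + vol_dual(c0 - h*(E[i]+E[j])))
               / (4*h*h) for j in range(n)] for i in range(n)])

print(np.allclose(M.sum(axis=1), (d - 1) * alpha, atol=1e-3))  # row-sum identity
print(np.allclose(M @ P, 0, atol=1e-3))                        # M X_P = 0
eig = np.linalg.eigvalsh(M)
print(np.sum(eig > 1e-4) == 1, np.sum(np.abs(eig) < 1e-4) == d)  # spectrum
\end{verbatim}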
Lastly, the following properties of the Wachspress coordinates and the~Izmestiev matrix will be relevant and can be inferred from the above.
\iffalse
Izmestiev also provided a more geometric description of \eqref{eq:variational_definition} \cite[proof of Lemma 2.3]{izmestiev2010colin}: for a vertex $p_i\in \F_0(P)$ let $F_i\in\F_{d-1}(P^\circ)$ be the corresponding dual~facet.\nolinebreak\space
Likewise, for an edge $e_{ij}\in\F_1(P)$ let $\sigma_{ij}\in\F_{d-2}(P^\circ)$ be the corresponding dual face of codimension 2.
Then
\begin{equation}\label{eq:geometric_definition}
\tilde\alpha_i = \frac{\vol(F_i)}{\|p_i\|}\quad\text{and}\quad \tilde M_{ij}=\frac{\vol(\sigma_{ij})}{\|p_i\|\|p_j\|\sin\sphericalangle(p_i,p_j)} \text{ for $i\not=j$},
\end{equation}
where $\vol(F_i)$ and $\vol(\sigma_{ij})$ are to be understood as relative volume.
It is clear from \eqref{eq:geometric_definition} that $\tilde\alpha_i\ge 0$ and that normalization yields $\alpha\in\Delta_n$.
Due to \cref{res:Wachspress_is_Izemstiev_row_sum}, $\sum_{i,j} \tilde M_{ij}=\sum_i \tilde\alpha_i>0$, which justifies the normalization.
Even though defined initially only for $0\in\Int(P)$, normalization allows a continuous extension to $0\in\partial P$.
\msays{Extension to the boundary}
\msays{Local geometric invariant}
\msays{
The Wachspress coordinates depend continuously on $x$ and can be continuously extended to the boundary $\partial P$. If $x$ is contained in a face $\sigma\in\F(P)$, then $\alpha_i>0$ if and only if $p_i\in \sigma$. In particular, if $x\in\Int(P)$, then~$\alpha\in\Int\Delta_n$.}
\fi
\begin{samepage}
\begin{remark}\quad
\label{rem:Wachspress_properties}
\begin{myenumerate}
\item The Wachspress coordinates of the origin and the Izmestiev matrix depend continuously on the translation of $P$, and their normalized variants can be continuously extended to $0\in \partial P$. If the origin lies in the relative interior of a face $\sigma\in\F(P)$, then $\alpha_i>0$ if and only if $p_i\in \sigma$. In particular, if~$0\in$ $\Int(P)$, then $\alpha\in\Int\Delta_n$.
\item The Wachspress coordinates of the origin and the Izmestiev matrix are~invariant under linear transformation of $P$.
This can be inferred from \eqref{eq:geometric_definition} via an elementary computation, as was done for the Izmestiev matrix in \cite[Proposition 4.6.]{winter2021capturing}.
\end{myenumerate}
\end{remark}
\end{samepage}
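The invariance claimed in \cref{rem:Wachspress_properties} \itm2 is also easy to test numerically, at least for the coordinates in the polygon case. The following sketch (in Python; not part of the paper) reuses the classical polygon formula and a made-up invertible linear map.
\begin{verbatim}
# Sketch (not from the paper): the Wachspress coordinates of the origin in a
# polygon are unchanged when the polygon is hit by an invertible linear map.
import numpy as np

def wachspress_polygon(p, x):
    # classical polygon formula; p counterclockwise, x an interior point
    area = lambda a, b, c: (b[0]-a[0]) * (c[1]-a[1]) - (b[1]-a[1]) * (c[0]-a[0])
    n, w = len(p), np.empty(len(p))
    for i in range(n):
        prv, nxt = p[(i - 1) % n], p[(i + 1) % n]
        w[i] = area(prv, p[i], nxt) / (area(x, prv, p[i]) * area(x, p[i], nxt))
    return w / w.sum()

p = np.array([[1.0, 0.0], [0.3, 0.9], [-0.8, 0.6], [-0.8, -0.6], [0.3, -0.9]])
x = np.zeros(2)                               # the origin lies inside p
A = np.array([[2.0, 1.0], [0.5, 1.5]])        # some invertible linear map
print(np.allclose(wachspress_polygon(p, x), wachspress_polygon(p @ A.T, x)))
\end{verbatim}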
\iffalse
\subsection{Historical notes}
\label{sec:appendix_Izmestiev_Wachspress_historical_notes}
Wachspress coordinates were initially defined by Wachspress for polygons \cite{wachspress1975rational}, later generalized to general polytopes by Warren et al.~\cite{warren1996barycentric,warren2007barycentric}. The elegant construction in \eqref{eq:geometric_definition} is from \cite{ju2005geometric}.
The Izmestiev matrix $M\in\RR^{n\x n}$ in dimension three was first constructed and~investigated by Lovász \cite{lovasz2001steinitz} and subsequently generalized to higher dimensions by Izmestiev \cite{izmestiev2010colin}. Another concise proof of the spectral properties of the Izmestiev matrix can be found in the appendix of \cite{narayanan2021spectral}.
In other contexts a matrix $M$ defined variationally as in \eqref{eq:variational_definition} has also been called \emph{Alexandrov matrix} or simply \emph{formal Hessian}.
\fi
\subsection{Proof of \cref{res:expansion_main_result}}
\label{sec:proof_of_main_result}
Recall the main theorem.
\begin{theoremX}{\ref{res:expansion_main_result}}
Let $P\subset\RR^d$ be a polytope with edge-graph $G_P=(V,E)$ and let~$\alpha\in\Delta_n$ be the Wachspress coordinates of some interior point $x\in\Int(P)$.
If $q\: V$ $\to\RR^e$ is some embedding of $G_P$ whose edges are at most as long as in $P$, then
$$\|P\|_\alpha\ge \|q\|_\alpha,$$
with equality if and only if $q$ is an affine transformation of the skeleton $\skel(P)$,\nolinebreak\space all~ed\-ges of which are of the same length as in $P$.
\end{theoremX}
The proof presented below is completely elementary, using little more than~linear algebra.
In \cref{sec:semi_definite_proof} the reader can find a second shorter proof based on~the~duality theory of semi-definite programming.
\begin{proof}
At the core of this proof is rewriting the $\alpha$-expansions $\|P\|_\alpha$ and $\|q\|_\alpha$ as a sum~of terms, each of which is non-increasing when transitioning from $P$ to $q$:
\begin{align}\label{eq:3_term_decomposition}
\|P\|_\alpha^2 \,
&= \sum_{ij\in E} M_{ij}\|p_i-p_j\|^2 - \Big\|\sum_i\alpha_i p_i\Big\|^2 + \tr(MX_PX_P^{\Tsymb})
\\[-1.5ex] &\notag
\qquad\qquad\quad\; \rotatebox{90}{$\le$}
\qquad\qquad\qquad\;\;\, \rotatebox{90}{$\le$}
\qquad\qquad\qquad \rotatebox{90}{$\le$}
\\[-1.5ex]
&\;\,\phantom{\ge} \sum_{ij\in E} M_{ij}\|q_i-q_j\|^2 \,- \Big\|\sum_i\alpha_i q_i\Big\|^2 + \tr(MX_qX_q^{\Tsymb}) =\, \|q\|_\alpha^2. \notag
\end{align}
Of course, neither the decomposition nor the monotonicity of the terms is obvious; yet their proofs use little more than linear algebra.
We elaborate on this now.
For the setup,
we recall that the $\alpha$-expansion is a translation invariant measure of size.
We can therefore translate $P$ and $q$ to suit our needs:
\begin{myenumerate}
\item translate $P$ so that $x=0$, that is, $\sum_i \alpha_i p_i = 0$.
\item since then $0\in\Int(P)$, \cref{res:Izmestiev} ensures the existence of the Izmestiev matrix $M\in\RR^{n\x n}$.
\item
Let $\theta_1>\theta_2>\cdots>\theta_m$ be the eigenvalues of $M$, where $\theta_1>0$ and $\theta_2=0$.
By~\cref{res:Izmestiev_observations} \itm4 there exists a unique $\theta_1$-eigenvector $z\in\Int(\Delta_n)$.
\item translate $q$ so that $\sum_i z_i q_i=0$.
\end{myenumerate}
We are ready to derive the decompositions shown in \eqref{eq:3_term_decomposition}: the following equality can be verified straightforwardly by rewriting the squared norms as inner products:
\begin{align*}
\tfrac12 \sum_{i,j} M_{ij} \|p_i-p_j\|^2
&= \sum_i \Big(\sum_j M_{ij}\Big) \|p_i\|^2 - \sum_{i,j} M_{ij}\<p_i,p_j\>,
\end{align*}
We continue rewriting each of the three terms:
\begin{itemize}
\item on the left: $M_{ij}\|p_i-p_j\|^2$ is only non-zero for $ij\in E$ (using \cref{res:Izmestiev}~\itm2). The sum can therefore be rewritten to iterate over the edges of $G_P$ (where we consider $ij,ji\in E$ the same and so we can drop the factor $\nicefrac12$).
\item in the middle: the row sums of the Izmestiev matrix are exactly the Wachspress coordinates of the origin, that is, $\smash{\sum_{j} M_{ij}=\alpha_i}$.
\item on the right: recall the matrix $X_P\in\RR^{n\x d}$ whose rows are the vertex~coordinates of $P$.
The corresponding Gram matrix $X_P X_P^{\Tsymb}$ has entries $(X_PX_P^{\Tsymb})_{ij}=\<p_i,p_j\>$.
\end{itemize}
By this we reach the following equivalent identity:
\begin{align*}
\sum_{ij\in E} M_{ij} \|p_i-p_j\|^2
&= \sum_i \alpha_i \|p_i\|^2 - \sum_{i,j} M_{ij}(X_P X_P^{\Tsymb})_{ij}.
\end{align*}
We continue rewriting the terms on the right side of the equation:
\begin{itemize}
\item in the middle: the following transformation was previously used in the~simplex case (\cref{res:special_case_simplices}) and can be verified by straightforward expansion~of the squared norms: $$\sum_i\alpha_i \|p_i\|^2 = \tfrac12 \sum_{i,j}\alpha_i\alpha_j \|p_i-p_j\|^2 + \Big\|\sum_i \alpha_i p_i\Big\|^2.$$
Note that the first summand on the right-hand side is just the $\alpha$-expansion $\|P\|_\alpha^2$.
\item on the right: the sum iterates over entry-wise products of the two matrices $M$ and $X_PX_P^{\Tsymb}$, which can be rewritten as $\tr(M X_P X_P^{\Tsymb})$.
\end{itemize}
Thus, we arrive at
\begin{align*}
\sum_{ij\in E} M_{ij} \|p_i-p_j\|^2
&= \|P\|_\alpha^2 + \Big\|\sum_i \alpha_i p_i\Big\|^2\! - \tr(M X_PX_P^{\Tsymb}).
\end{align*}
This clearly rearranges to the first line of \eqref{eq:3_term_decomposition}.
An analogous sequence of transformations works for $q$ (we replace $p_i$ by $q_i$ and $X_P$ by $X_q$, but we keep the Izmestiev matrix of $P$).
This yields the second line of \eqref{eq:3_term_decomposition}.
It remains to verify the term-wise inequalities.
For the first term we have
$$
\sum_{ij\in E} M_{ij} \|p_i-p_j\|^2 \ge \sum_{ij\in E} M_{ij} \|q_i-q_j\|^2
$$
by term-wise comparison: we use that the sum is only over edges, that $M_{ij}>0$ for $ij\in E$ (by \cref{res:Izmestiev} \itm1), and that edges in $q$ are not longer than in $P$.
Next, by the wisely chosen translation in setup \itm1 we have $\sum_i \alpha_i p_i=0$, thus
$$ -\Big\|\sum_i \alpha_i p_i\Big\|^2\! = 0 \ge -\Big\|\sum_i \alpha_i q_i\Big\|^2.$$
The final term requires the most elaboration. By \cref{res:Izmestiev} \itm4 the Izmestiev matrix satisfies $MX_P=0$.
So it suffices to show that $\tr(M X_qX_q^{\Tsymb})$ is non-positive, since then
\begin{equation}
\label{eq:trace_neg}
\tr(M X_P X_P^{\Tsymb}) = 0 \overset?\ge \tr(M X_q X_q^{\Tsymb}).
\end{equation}
To prove $\smash{\tr(M X_q X_q^{\Tsymb})\le 0}$ consider the decomposition $\smash{X_q=X_q^1 + \cdots }$ $\smash{+\, X_q^m}$ where $MX_q^k = \theta_k X_q^k$ (since $M$ is symmetric, its eigenspaces are orthogonal and $X_q^k$ is the column-wise orthogonal projection of $X_q$ onto the $\theta_k$-eigenspace).
We compute
\begin{align*}
\tr(M X_q X_q^{\Tsymb})
&= \sum_{k,\ell} \tr(M X_q^k (X_q^\ell)^{\Tsymb})
\\&= \sum_{k,\ell} \theta_k \tr(X_q^k (X_q^\ell)^{\Tsymb}) && |\; \text{by $MX_q^k = \theta_k X_q^k$}
\\&= \sum_{k,\ell} \theta_k \tr((X_q^\ell)^{\Tsymb} X_q^k) && |\; \text{by $\tr(AB)=\tr(BA)$}
\\&= \sum_{k} \theta_k \tr((X_q^k)^{\Tsymb} X_q^k). && |\; \text{since $(X_q^\ell)^{\Tsymb} X_q^k = 0$ when $k\not=\ell$}
\end{align*}
Again, we have been wise in our choice of translation of $q$ in setup \itm4: $\sum_i z_i q_i=0$ can be written as $z^{\Tsymb} X_q=0$.
Since $z$ spans the $\theta_1$-eigenspace, the columns~of~$X_q$~are therefore orthogonal to the $\theta_1$-eigenspace, hence $X_q^1=0$. We conclude
\begin{align}\label{eq:tr_final_step}
\tr(M X_q X_q^{\Tsymb})
&= \sum_{k\ge 2} \theta_k \tr((X_q^k)^{\Tsymb} X_q^k) \le 0,
\end{align}
where the final inequality follows from two observations: first, the Izmestiev matrix $M$ has a unique positive eigenvalue $\theta_1$, thus $\theta_k\le 0$ for all $k\ge 2$ (\cref{res:Izmestiev}~\itm5); second, $(X_q^k)^{\Tsymb} X_q^k$ is a Gram matrix, hence is positive semi-definite and has a non-negative trace.
This finalizes the term-wise comparison and establishes the inequality \eqref{eq:3_term_decomposition}. It remains to discuss the equality case.
By now we see that the equality $\|P\|_\alpha=\|q\|_\alpha$ is equivalent to term-wise equality in \eqref{eq:3_term_decomposition}; and so we proceed term-wise.
To enforce equality in the first term
$$
\sum_{ij\in E} M_{ij} \|p_i-p_j\|^2 \overset!= \sum_{ij\in E} M_{ij} \|q_i-q_j\|^2
$$
we recall again that $M_{ij}>0$ whenever $ij\in E$ by \cref{res:Izmestiev} \itm1.
Thus, we require equality $\|p_i-p_j\|=\|q_i-q_j\|$ for all edges $ij\in E$. And so edges in $q$ must be of~the same length as in $P$.
We skip the second term for now and enforce equality in the last term:
$$0=\tr(M X_P X_P^{\Tsymb}) \overset!=\tr(M X_q X_q^{\Tsymb}) \overset{\eqref{eq:tr_final_step}} = \sum_{k\ge 2} \theta_k \tr((X_q^k)^{\Tsymb} X_q^k).$$
Since $\theta_k<0$ for all $k\ge 3$ (\shortStyle{cf.}\ \cref{res:Izmestiev_observations} \itm3), for the sum on the right~to~vanish we necessarily have
$$\tr((X_q^k)^{\Tsymb} X_ q^k)=0\;\text{ for all $k\ge 3$}\quad\implies\quad X_q^k=0\;\text{ for all $k\ge 3$}.$$
Since we also already know $X_q^1=0$, we are left with $\smash{X_q=X_q^2}$, that is, the columns of $X_q$ are in the $\theta_2$-eigenspace (aka.\ the kernel) of $M$.
In particular,\nolinebreak\space \mbox{$\Span X_q\subseteq\ker M$} $=\Span X_P$, where the last equality follows by \cref{res:Izmestiev_observations} \itm2.
It is well-known that if two matrices satisfy $\Span X_q\subseteq\Span X_P$, then the rows of $X_q$ are the images of the rows of $X_P$ under a common linear map, that is, $T X_P^{\Tsymb}=X_q^{\Tsymb}$ for some linear map $T\:\RR^d\to\RR^e$, or equivalently, $q_i = T p_i$ for all $i\in V$ (see \cref{res:linear_algebra} in the appendix for a short reminder of the proof).
Therefore, $q$ (considered with its original translation prior to the setup) must have been an \emph{affine} transformation of $\skel(P)$.
Lastly we note that equality in the middle term of \eqref{eq:3_term_decomposition} yields no new constraints.
In fact, by $\Span X_q\subseteq \ker M$ we have $MX_q=0$ and
$$\sum_i\alpha_iq_i = \sum_i\Big(\sum_j M_{ij}\Big) q_i=\sum_j\Big(\sum_i M_{ij} q_i\Big)=0=\sum_i\alpha_i p_i.$$
Thus, equality in the middle term follows already from equality in the last term.
For the other direction of the equality case assume that $q$ is an affine transformation of $\skel(P)$ with the same edge lengths.
Instead of setup \itm4 assume a
translation of $q$ for which it is a \emph{linear} transformation of $\skel(P)$, \shortStyle{i.e.,}\ $\smash{X_q^{\Tsymb} = T X_P^{\Tsymb}}$ for some linear map $T\:\RR^d\to\RR^e$.
Hence $\smash{\sum_i \alpha_i p_i = \sum_i \alpha_i q_i=0}$ and $M X_P=MX_q=0$, and \eqref{eq:3_term_decomposition} reduces to
$$\|P\|_\alpha^2 = \sum_{ij\in E} M_{ij}\|p_i-p_j\|^2 - 0 + 0 = \sum_{ij\in E} M_{ij}\|q_i-q_j\|^2 - 0 + 0 = \|q\|_\alpha^2.$$
\end{proof}
As an immediate consequence we have the following:
\begin{corollary}\label{res:polytope_by_lenghts_and_alpha}
A polytope is uniquely determined (up to affine equivalence) by its edge-graph, its edge lengths and the Wachspress coordinates of some interior point.
\begin{proof}
Let $P_1$ and $P_2$ be polytopes with the same edge-graph and edge lengths, and let $x_i\in\Int(P_i)$ be points with the same Wachspress coordinates $\alpha\in\Delta_n$.\nlspace By~\cref{res:expansion_main_result} we have $\|P_1\|_\alpha\ge\|P_2\|_\alpha\ge\|P_1\|_\alpha$, thus $\|P_1\|_\alpha=\|P_2\|_\alpha$.
Then $P_1$~and~$P_2$ are~affinely equi\-valent by the equality case of \cref{res:expansion_main_result}.
\end{proof}
\end{corollary}
Remarkably, this reconstruction works across all combinatorial types~and dimen\-sions.
That the reconstruction is only up to affine equivalence rather than~iso\-metry is due to examples such as rhombi and the zonotope in \cref{fig:length_preserving_linear_trafo}. In general, such flexibility via an affine transformation happens exactly if ``the edge directions lie on a conic at infinity'' \cite{connelly2018affine}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.53\textwidth]{fig/length_preserving_linear_trafo_2.pdf}
\caption{\mbox{Two affinely equivalent (but not isometric) polytopes~with~the} same edge lengths. The edge directions trace a circle on the ``plane~at~infinity''.}
\label{fig:length_preserving_linear_trafo}
\end{figure}
Lastly, the reconstruction permitted by \cref{res:polytope_by_lenghts_and_alpha} is feasible in practice. This follows from a reformulation of \cref{res:expansion_main_result} as a semi-definite program, which can be solved in polynomial time. This is elaborated on in the alternative proof given in \cref{sec:semi_definite_proof}.
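To indicate what such a reformulation can look like in practice, here is a minimal sketch (in Python with the cvxpy library; it is \emph{not} the program from \cref{sec:semi_definite_proof}): one maximizes the $\alpha$-expansion over Gram matrices subject to the edge-length constraints, and by \cref{res:expansion_main_result} every optimizer is the Gram matrix of an affine copy of $\skel(P)$. The edge list, squared edge lengths and Wachspress weights are assumed to be given.
\begin{verbatim}
# Sketch (not the appendix program): reconstruct a polytope skeleton, up to
# affine equivalence, by maximizing the alpha-expansion over Gram matrices
# subject to edge-length constraints; assumes cvxpy with an SDP-capable solver.
import numpy as np
import cvxpy as cp

def reconstruct(n, edges, len_sq, alpha):
    # n: number of vertices, edges: list of pairs (i, j),
    # len_sq[(i, j)]: squared edge length in P, alpha: Wachspress weights
    G = cp.Variable((n, n), PSD=True)                    # Gram matrix of q
    d2 = lambda i, j: G[i, i] + G[j, j] - 2 * G[i, j]    # ||q_i - q_j||^2
    constraints = [d2(i, j) <= len_sq[(i, j)] for (i, j) in edges]
    expansion = 0.5 * sum(alpha[i] * alpha[j] * d2(i, j)
                          for i in range(n) for j in range(n))
    cp.Problem(cp.Maximize(expansion), constraints).solve()
    # factor the optimal Gram matrix into vertex coordinates
    lam, V = np.linalg.eigh(G.value)
    keep = lam > 1e-7
    return V[:, keep] * np.sqrt(lam[keep])
\end{verbatim}
The objective is written as an explicit double sum for readability; it is linear in $G$, so the whole problem is indeed a semi-definite program.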
\iffalse
\hrulefill
\Cref{res:polytope_by_lenghts_and_alpha} can furthermore be used to derive non-trivial upper bounds on~the dimension of the realization space of a polytope.
For our purpose, let the \emph{realization space} $\mathcal R(P)$ of a $d$-polytope $P\subset\RR^d$ be the \enquote{space} of all $d$-polytopes that are~combinatorially equivalent to $P$ modulo affine equivalence (see \cite[Definition 4.10]{ziegler2012lectures} for a precise definition).
These spaces can generally be quite complex, and already the question for their dimensions is of\-ten hard to answer (\shortStyle{e.g.}\ the dimension of the realization space for the 24-cell is not completely settled, see \cite{rastanawi2021dimensions}).
We obtain the following:
\begin{corollary}\label{res:dim_realization_space}
$$\dim\mathcal R(P)\le f_0+f_1-d-1\wideword{and} \dim \mathcal R(P)\le f_{d-1}+f_{d-2}-d-1.$$
\end{corollary}
Here $f_i$ denotes the number of $i$-dimensional faces of $P$.
\begin{proof}
By \cref{res:polytope_by_lenghts_and_alpha}, $f_0+f_1$ parameters suffice to determine the polytope from its graph (up to affine equivalence): $f_1$ edge lengths and $f_0$ Wachspress coordinates $\alpha_1,...,\alpha_n\in\RR_+$ of some interior point.
This description however has $d+1$ redundancies: one for $\alpha_1+\cdots+\alpha_n=1$ and $d$ for the choice of the inner point (which mods out translation).
\end{proof}
For this bound to be of any use it has to beat the \enquote{naive bounds} $df_0$ and $df_{d-1}$ which a are tight for simplicial and simple polytopes respectively.
Note that the local dimension of the realization space of a graph can be larger than the local dimension of the usual realization of the combinatorial type.
In general, the bound of \cref{res:dim_realization_space} is very crude and usually weaker than the \enquote{trivial bounds} $d f_0 - d^2-d$ and $df_{d-1}-d^2-d$ \cite{...}.
In order for our bound to be the best, both $P$ and its dual must have \enquote{few edges}.
\begin{example}
The dimension of the realization space of a polytope is certainly smaller than the dimension of the realization space of a simplicial polytope with the same number of vertices, which is exactly $d \cdot f_0$. For an equivalent reason using simple polytopes, the dimension is upper bounded by $d\cdot f_{d-1}$.
We describe a polytope for which our bound is better than these bounds: consider the 4-crosspolytope with one vertex truncated. This polytope has 15 vertices, 37 edges, 40 2-faces and 17 facets. This yields
\begin{align*}
d\cdot f_0 &= 60, \\
d\cdot f_{d-1} &= 68, \\
f_0 + f_1 + {\textstyle {d\choose 2}} &= 57, \\
f_{d-1}+f_{d-2} + {\textstyle {d\choose 2}} &= 63.
\end{align*}
A well-known lower bound is $d(f_0+f_{d-1})-f_{0,d-1}=42$, and so there is still a large gap.
\end{example}
\fi
\section{Rigidity, tensegrity and reconstruction}
\label{sec:expansion_rigidity_tensegrity}
Our reason for pursuing \cref{res:expansion_main_result} in \cref{sec:expansion_version} was to transfer the proof of the simplex case (\cref{res:special_case_simplices}) to general polytopes with the eventual goal of verifying the main conjecture and its corresponding \enquote{unique reconstruction version}:
\begin{conjectureX}{\ref{conj:main}}
Given polytopes $P\subset\RR^d$ and $Q\subset\RR^e$ with common edge-graph.\nolinebreak\space If
\begin{myenumerate}
\item $0\in \Int(Q)$,
\item edges in $Q$ are at most as long as in $P$, and
\item vertex-origin distances in $Q$ are at least as large as in $P$,
\end{myenumerate}
then $P\simeq Q$.
\end{conjectureX}
\begin{conjectureX}{\ref{conj:main_rigid_intro}}
A polytope $P$ with $0\in\Int(P)$ is uniquely determined (up to~isometry) by its edge-graph, edge lengths and vertex-origin distances.
\end{conjectureX}
In contrast to our formulation of \cref{res:expansion_main_result}, both of the above conjectures~are \emph{false} when stated for a general graph embedding $q\:V(G_P)\to\RR^e$ instead of $Q$,\nlspace even if we require $0\in\Int\conv(q)$.
The following counterexample was provided by Joseph Doolittle \cite{410625}:
\begin{example}\label{ex:cube}
The 3-cube $P:=[-1,1]^3\subset\RR^3$ is inscribed in a sphere of~radius $\smash{\sqrt 3}$.
\Cref{fig:counterexample_cube} shows an inscribed embedding $q\:V(G_P)\to\RR^2$ with the same circumradius and edge lengths, collapsing $G_P$ onto a path.
In the circumcircle each edge spans~an arc of length
$$2\sin^{-1}(1/\sqrt 3)\approx 70.5287^\circ > 60^\circ.$$
The three edges therefore suffice to reach more than half around the circle.
In other words, the convex hull of $q$ contains the circumcenter in its interior.
A full-dimensional counterexample in $\RR^3$ can be obtained by interpreting $q$~as embedded in $\RR^2\times\{0\}\subset\RR^3$, followed by a slight perturbation.
\end{example}
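The numbers in the example are easily reproduced. The following sketch (in Python; not part of the paper) also builds one concrete collapsed embedding, where the pairing of cube vertices with positions on the circle is our own choice for illustration.
\begin{verbatim}
# Sketch (not from the paper): the arc spanned by one cube edge on the
# circumcircle, and one concrete collapse of the cube graph onto four points
# of that circle with the cube's edge lengths and circumradius.
import numpy as np
from itertools import product

r, L = np.sqrt(3), 2.0                      # circumradius and edge length of [-1,1]^3
theta = 2 * np.arcsin(L / (2 * r))          # arc spanned by one edge
print(np.degrees(theta), 3 * np.degrees(theta) > 180)   # ~70.5287, True

# send vertex v of {0,1}^3 to position sum(v) on the circle; every cube edge
# joins two consecutive positions, so it becomes a chord of arc theta
pos = np.array([[r * np.cos(k * theta), r * np.sin(k * theta)] for k in range(4)])
q = {v: pos[sum(v)] for v in product((0, 1), repeat=3)}
edges = [(u, v) for u in q for v in q if sum(a != b for a, b in zip(u, v)) == 1]
print({round(float(np.linalg.norm(q[u] - q[v])), 9) for (u, v) in edges})   # {2.0}
print({round(float(np.linalg.norm(y)), 9) for y in q.values()})             # sqrt(3)
\end{verbatim}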
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{fig/counterexample_cube}
\caption{\mbox{A 2-dimensional embedding of the edge-graph~of~the~3-di\-men}\-\mbox{sio\-nal cube with the same circumradius and edge lengths as the unit cube,} also containing the origin in its convex hull.}
\label{fig:counterexample_cube}
\end{figure}
Potential fixes to the \enquote{graph embedding versions} of \cref{conj:main_rigid_intro} and \cref{conj:main} are discussed in \cref{sec:fixing_q_version_conjecture}.
While the general \cref{conj:main} will stay open, we are confident that our~methods point the right way and highlight the essential difficulties.
We overcome~them in three relevant special cases, for some of which we actually can replace~$Q$ with a graph~embedding $q\:V(G_P)\to\RR^e$. Those are
\begin{myenumerate}
\item $P$ and $q$ are centrally symmetric (\cref{res:centrally_symmetric}).
\item $q$ is a sufficiently small perturbation of $\skel(P)$ (\cref{thm:main_local}).
\item $P$ and $Q$ are combinatorially equivalent (\cref{thm:main_comb_eq}).
\end{myenumerate}
\subsection{The remaining difficulty}
On trying to generalize the proof of \cref{res:special_case_simplices} beyond simplices using \cref{res:expansion_main_result} we face the following difficulty:
\cref{res:expansion_main_result} requires the $\alpha\in\Delta_n$ to be Wachspress coordinates of an interior point $x\in\Int(P)$, whereas in the proof of \cref{res:special_case_simplices} we use that $\alpha$ is a set of barycentric coordinates of $0\in$ $\Int(Q)$.
While we have some freedom in choosing $x\in\Int(P)$, and thereby $\alpha\in\Delta_n$, it is not clear that any such choice yields barycentric coordinates for $0\in\Int(Q)$.
In fact, this is the only obstacle preventing us from proving \cref{conj:main} right away.
For convenience we introduce the following map:
\begin{definition}\label{def:Wachspress_map}
Given polytopes $P\subset\RR^d$ and $Q\subset\RR^e$, the \emph{Wachspress map}~$\phi\: P\to Q$ is defined as follows: for $x\in P$ with Wachspress coordinates $\alpha(x)\in\Delta_n$ set
$$\phi(x):=\sum_i \alpha_i(x) q_i.$$
\end{definition}
In cases where we are working with a graph embedding $q\:V(G_P)\to \RR^e$ instead of $Q$ we have an analogous map $\phi\: P\to\conv(q)$.
Our previous discussion amounts to checking whether the origin is in the image of $\Int(P)$ \shortStyle{w.r.t.}\ $\phi$.
While this could be reasonably true if $\dim P \ge \dim Q$, it~is~certainly questionable if $\dim P<\dim Q$: the image $\phi(\Int(P))\subset Q$ is of smaller dimension than $Q$ and easily ``misses'' the origin.
Fortunately, we can ask for less, which we now formalize in \itm1 of the following lemma:
\iffalse
\hrulefill
\msays{Isn't the conjecture very suspect if $e>d$? Note: the unique reconstruction version is fine, because we can exchange $P$ and $Q$.}
And so we see that it is not $0\in\Int(Q)$ or $0\in\Int\conv(q)$ that we should ask for in \cref{conj:main} \itm1, but rather $0\in\phi_q(\Int(P))$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.23\textwidth]{fig/phi_image_cube.pdf}
\caption{The shaded area is the image of the 3-cube under the Wachspress map $\phi$ in the setting of \cref{ex:cube}. The image fails to contain the origin, which is the reason this counterexample is possible.}
\label{fig:phi_image_cube}
\end{figure}
Having said that, one could become rather doubtful of \cref{conj:main}, or at least of the approach presented here.
After all, we require $0\in\im(\phi)\subseteq Q$.
This image might not be all of $Q$, and it is certainly not all of $Q$ if $Q$ is of a higher dimension than $P$.
One could be rather doubtful about finding such a point, especially if $q$ lives in a higher-dimensional Euclidean space than $P$.
However, careful inspection of the proof of \cref{res:special_case_simplices} reveals that we have further leeway to work with.
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{fig/Wachpress_map_image.pdf}
\caption{...}
\label{fig:Wachpress_map_image}
\end{figure}
\hrulefill
\begin{lemma}\label{lem:main}
Let $P\subset\RR^d$ be a polytope and $q\:V(G_P)\to\RR^e$ some embedding. If
\begin{myenumerate}
\item there exists $x\in\Int(P)$ with $\phi(x)=0$,
\item edges in $q$ are at most as long as in $P$, and
\item vertex-origin distances in $q$ are at least as large as in $P$,
\end{myenumerate}
then $q\simeq\skel(P)$ (via an orthogonal transformation).
\begin{proof}
Choose $x\in \Int P$ with $\phi(x)=0$.
Note that the Wachspress coordinates~$\alpha\in\Int\Delta_n$ of $x$ are strictly positive.
In the remainder we follow the proof of \cref{res:special_case_simplices} almost verbatim: consider the system of (in)equalities:
\begin{align*}
\sum_i \alpha_i \|p_i\|^2
&= \Big\|\sum_i \alpha_i p_i\Big\|^2 \!+
\tfrac12\sum_{i,j} \alpha_i\alpha_j \|p_i-p_j\|^2
= \|x\|^2 + \|P\|_\alpha^2
\\[-1ex]
\rotatebox{90}{$\ge$}
\qquad
&
\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \,
\rotatebox{90}{$\le$}
\qquad\;\;
\rotatebox{90}{$\le$}
\\[-1ex]
\sum_i \alpha_i \|q_i\|^2
&= \Big\|\sum_i \alpha_i q_i\Big\|^2 \!+
\tfrac12\sum_{i,j} \alpha_i\alpha_j \|q_i-q_j\|^2\,
= \|0\|^2 \,+\, \|q\|_\alpha^2,
\end{align*}
where the rows hold by simple computation and the columns hold (from left~to~right) by \itm3, trivially, and \itm2 + \cref{res:expansion_main_result} respectively.
It follows that all inequali\-ties are actually equalities.
In particular, since $\alpha_i>0$ we find both $\|p_i\|=\|q_i\|$~for all $i\in V$ and $\|p_i-p_j\|=\|q_i-q_j\|$ for all $i,j\in V$, establishing that $q$ and $\skel(P)$~are indeed isometric via an orthogonal transformation.
\end{proof}
\end{lemma}
\fi
\begin{lemma}\label{lem:main}
Let $P\subset\RR^d$ be a polytope and $q\:V(G_P)\to\RR^e$ some embedding. If
\begin{myenumerate}
\item there exists an $x\in\Int(P)$ with $\|\phi(x)\|\le\|x\|$ (\shortStyle{e.g.}\ $\phi(x)=0$),
\item edges in $q$ are at most as long as in $P$, and
\item vertex-origin distances in $q$ are at least as large as in $P$,
\end{myenumerate}
then $q\simeq\skel(P)$ (via an orthogonal transformation).
\begin{proof}
Choose $x\in \Int P$ with $\|\phi(x)\|\le \|x\|$, and note that its Wachspress coordi\-nates $\alpha\in\Int\Delta_n$ are strictly positive.
In the remainder we follow closely the proof of \cref{res:special_case_simplices}: consider the system of (in)equalities:
\begin{align*}
\sum_i \alpha_i \|p_i\|^2
&= \Big\|\sum_i \alpha_i p_i\Big\|^2 \!+
\tfrac12\sum_{i,j} \alpha_i\alpha_j \|p_i-p_j\|^2
= \;\; \|x\|^2 \;\;\;\,+ \|P\|_\alpha^2
\\[-1.5ex]
\rotatebox{90}{$\ge$}
\qquad
&
\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \,\;\;
\rotatebox{90}{$\le$}
\qquad\;\;\;\;\;\,
\rotatebox{90}{$\le$}
\\[-1.5ex]
\sum_i \alpha_i \|q_i\|^2
&= \Big\|\sum_i \alpha_i q_i\Big\|^2 \!+
\tfrac12\sum_{i,j} \alpha_i\alpha_j \|q_i-q_j\|^2\,
= \|\phi(x)\|^2 \,+\, \|q\|_\alpha^2,
\end{align*}
where the two rows hold by simple computation and the vertical inequalities follow (from left to right) by \itm3, \itm1, and \itm2 + \cref{res:expansion_main_result} respectively.
It follows~that all inequali\-ties are actually equalities.
In particular, since $\alpha_i>0$ we find both~$\|p_i\|$ $=$ $\|q_i\|$ for all $i\in V$ and $\|p_i-p_j\|=\|q_i-q_j\|$ for all $i,j\in V(G_P)$, establishing that $q$ and $\skel(P)$~are indeed isometric via an orthogonal transformation.
\end{proof}
\end{lemma}
The only way for \cref{lem:main} \itm1 to fail is if $\|\phi(x)\|>\|x\|$ for all $x\in\Int(P)$.\nlspace
By \itm2 and \itm3 we have $\|\phi(x)\|=\|x\|$ whenever $x$ is a vertex or in an edge of~$P$, which makes it plausible that \itm1 should not fail, yet it seems hard to exclude~in~general.
In each of the three special cases of \cref{conj:main} discussed below we actually manage to verify $0\in\phi(\Int(P))$ in order to apply \cref{lem:main}.
\subsection{Central symmetry}
\label{sec:central_symmetry}
Let $P\subset\RR^d$ be \emph{centrally~symmetric} (more precisely, \emph{origin symmetric}), that is, $P=-P$.
This induces an involution \mbox{$\iota\: V(G_P)\to V(G_P)$} with $p_{\iota(i)}=-p_i$ for all $i\in V(G_P)$. We say that~an~embedding $q\: V(G_P)\to\RR^e$ of the edge-graph is centrally symmetric if $q_{\iota(i)}=$ $-q_i$ for all $i\in V(G_P)$.
\begin{theorem}[centrally symmetric version]\label{res:centrally_symmetric}
Let $P\subset\RR^d$ be a centrally symmetric polytope and $q\: V(G_P)\to\RR^e$ a centrally symmetric embedding of its edge-graph. If
\begin{myenumerate}
\item edges in $q$ are at most as long as in $P$, and
\item vertex-origin distances in $q$ are at least as large as in $P$,
\end{myenumerate}
then $q\simeq \skel(P)$.
\begin{proof}
Since $P$ is centrally symmetric, we have $0\in\relint(P)$ and we can find~Wachspress coordinates $\alpha\in\Int(\Delta_n)$ of the origin in $P$.
Since the Wachspress coordinates are invariant under linear transformations (as noted in \cref{rem:Wachspress_properties} \itm2), applying this to the point reflection $x\mapsto -x$ gives $\alpha_i = \alpha_{\iota(i)}$.
For the Wachspress map $\phi$ it follows that
$$
\phi(0)
= \tfrac12 \sum_{i\in V} \alpha_i q_i + \tfrac12 \sum_{i\in V} \alpha_{\iota(i)} q_{\iota(i)}
= \tfrac12 \sum_{i\in V} \alpha_i q_i - \tfrac12\sum_{i\in V} \alpha_i q_i = 0,
$$
where the first equality splits the sum in half and reindexes one half by $\iota$, and the second uses $\alpha_{\iota(i)}=\alpha_i$ and $q_{\iota(i)}=-q_i$.
The claim then follows via \cref{lem:main}.
\end{proof}
\end{theorem}
It is clear that \cref{res:centrally_symmetric} can be adapted to work with other types of symmetry that uniquely fix the origin, such as irreducible symmetry groups.
\Cref{res:centrally_symmetric} has a natural interpretation in the language of classical rigidity~theory, where it asserts the universal rigidity of a certain tensegrity framework. In this form it was proven up to dimension three by Connelly \cite{connelly2006stress}. We elaborate further on this in \cref{sec:classical_rigidity}.
It is now tempting to conclude the unique reconstruction version of \Cref{res:centrally_symmetric}, answering \cref{conj:main_rigid_intro} for centrally symmetric polytopes.
There is~however~a~subtlety:
our notion of ``central symmetry'' for the graph embedding $q\:V(G_P)\to\RR^e$ as used in \cref{res:centrally_symmetric} has been relative to $P$, in that it forces $q$ to have the same pairs of antipodal vertices as $P$.
It is however not true that any two centrally symmetric polytopes with the same edge-graph have this relation.
David E.\ Speyer \cite{antipodalNotUnique} constructed a 12-vertex 4-polytope whose edge-graph has an automorphism that does not preserve antipodality of vertices.
\iffalse
{\color{lightgray}
The dual version (\shortStyle{cf.}\ \cref{rem:dual_versions}) does not hold; a counterexample is shown in \cref{fig:octagon_ex}.
However, if $q$ is restricted to polytope skeleta, the dual version holds again by the symmetry in the statement.
\begin{corollary}\label{res:reconstruct_centrally_sym}
A centrally symmetric polytope is uniquely determined (up to isometry) by its edge-graph, the edge lengths and the vertex-origin distances.
\end{corollary}
This reconstruction too is unique across all combinatorial types and dimensions.
}
\hrulefill
It seems suggestive to infer a unique reconstruction results for centrally symmetric polytopes akin \cref{conj:main_rigid_intro}.
There is however a subtlety: it is not clear that every centrally symmetric polytope $Q$ with the edge-graph $G_Q\simeq G_P$ induces the same (or conjugate) involution $\iota\in\Aut(G_P)$ on $G_P$. This is assumed by \cref{res:centrally_symmetric}.
We discuss this further in \cref{sec:...}.
\begin{corollary}
A centrally symmetric polytope is uniquely determined (up to isometry) by its edge-graph, the edge lengths, the vertex-origin distances and the action of the central symmetry on the edge-graph.
\end{corollary}
\begin{corollary}
Let $P\subset\RR^d$ and $Q\subset\RR^e$ be centrally symmetric polytopes with the same edge-graph, edge lengths and vertex-origin distances; and let $\phi\:G_P\xrightarrow{\sim} G_Q$ be a graph isomorphism. Let $\iota_P\in\Aut(G_P)$ be the graph automorphism induced by the central symmetry of $P$, and $\iota_Q\in\Aut(G_Q)$ likewise. Do we have $\iota_P=\phi^{-1}\circ \iota_Q\circ \phi$.
\end{corollary}
Whether knowing the action is actually necessary is discussed in \cref{sec:...}.
\fi
\subsection{Local uniqueness}
\label{sec:local_rigidity}
Given a polytope $P\subset\RR^d$ consider the space
$$\mathcal E_P:=\{q:V(G_P)\to\RR^d\}$$
of $d$-dimensional embeddings of $G_P$.
We then have $\skel(P)\in\mathcal E_P$.
Since $\mathcal E_P\cong~\RR^{n\x d}$ (in some reasonable sense) we can pull back a metric $\mu\:\mathcal E_P\times\mathcal E_P\to\RR_{+}$.\;
\begin{theorem}[local version]\label{thm:main_local}
Given a polytope $P\subset\RR^d$ with $0\in\Int(P)$, there~exists an $\eps >0$ with the following property: if $q\:V(G_P)\to\RR^d$ is an embedding~with
\begin{myenumerate}
\item $q$ is $\eps$-close to $\skel(P)$, \shortStyle{i.e.,}\ $\mu(q,\skel(P))<\eps$,
\item edges in $q$ are at most as long as in $P$, and
\item vertex-origin distances in $q$ are at least as large as in $P$,
\end{myenumerate}
then $q\simeq \skel(P)$.
\end{theorem}
\Cref{thm:main_local} as well can be naturally interpreted in the language of rigidity~theory.
The proof below makes no use of this language, but in \cref{sec:classical_rigidity} we elaborate on this connection and sketch a second approach that proves \emph{prestress stability}.
This result too was proven up to dimension three by Connelly \cite{connelly2006stress}.
In order to prove \cref{thm:main_local} we again show $0\in\phi(\Int(P))$, which requires more work this time: since $0\in\Int(P)$, there exists an $\eps$-neighborhood $B_\eps(0)\subset P$~of the origin.
The hope is that for a sufficiently small perturbation of the vertices of~$P$~the image of $B_\eps(0)$ under $\phi$ is still a neighborhood of the origin.
This hope is formalized and verified in the following lemma, which we separated from the proof of \cref{thm:main_local} to reuse it in \cref{sec:combinatorial_equivalence}.
Its proof is standard and is included in \cref{sec:topological_argument}:
\begin{lemmaX}{D.1}
Let $K\subset\RR^d$ be a compact convex set with $0\in\Int(K)$ and $f\:K\times[0,1]$ $\to\RR^d$ a homotopy with $f(\free,0)=\id_K$.
If the restriction $f|_{\partial K}\: \partial K\times[0,1]\to\RR^d$ yields a homotopy of $\partial K$ in $\RR^d\setminus\{0\}$, then $0\in\Int f(K,1)$.
\end{lemmaX}
In other words: if a \enquote{well-behaved} set $K$ contains the origin in its interior,\nolinebreak\space and~it is~deformed so that its boundary never crosses the origin, then the origin stays~in\-side.
\begin{proof}[Proof of \cref{thm:main_local}]
Since $0\in\Int(P)$ there exists a $\delta>0$ with $B_\delta(0)\subset P$.
Fix some compact neighborhood $N\subset\mathcal E_P$ of $\skel(P)$.
Then $N\times P$ is compact in $\mathcal E_P\times \RR^d$ and the map
$$N\times P\to\RR^d,\;\;(q,x) \mapsto \phi_q(x):=\sum_i \alpha_i(x)q_i$$
is uniformly continuous: there exists an $\eps>0$ so that whenever $\mu(q,q')+\|x-x'\|< \eps$, we have $\|\phi_q(x)-\phi_{q'}(x')\|<\delta/2$.
We can assume that $\eps$ is sufficiently small,\nlspace so~that $B_\eps(\skel(P))\subset N$.
We show that this $\eps$ satisfies the statement of the theorem.
Suppose that $q$ is $\eps$-close to $\skel(P)$; then
$$\mu(\skel(P),q) + \|x-x\|<\eps \;\,\implies\;\, \|x-\phi_q(x)\|=\|\phi_{\skel(P)}(x) - \phi_q(x)\|<\delta/2.$$
The same is true when replacing $q$ by any linear interpolation $q(t):=(1-t)\skel(P)+t q$ with $t\in[0,1]$.
Define the following homotopy:
$$f\: B_\delta(0)\times[0,1]\to\RR^d,\quad (x,t)\mapsto \phi_{q(t)}(x).$$
We have $f(\free,0)=\id$.
In particular, if $x\in\partial B_\delta(0)$ then $\|f(x,0)\|=\delta$, as well as
$$\|f(x,t)\| \ge \|f(x,0)\| - \|f(x,0)-f(x,t)\| \ge \delta - \delta/2=\delta/2,$$
hence $f(x,t)\not=0$ for all $t\in[0,1]$.
Since $B_\delta(0)$ is compact and convex, the homotopy $f$ satisfies the conditions of \cref{res:topological_argument} and we conclude $0\in f(B_\delta(0),1)=\phi_q(B_\delta(0))$ $\subseteq\phi_q(\Int(P))$.
Then $q\simeq \skel(P)$ follows via \cref{lem:main}.
\end{proof}
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{fig/dimension_increase_ex.pdf}
\caption{If $q$ in \cref{thm:main_local} is not restricted to $\aff(P)$, the \mbox{vertex-ori}\-gin distances can be increased by moving out of the affine hull.}
\label{fig:dimension_increase_ex}
\end{figure}
The polytope $P$ in \cref{thm:main_local} is assumed to be full-dimensional.
This is necessary, since allowing $\skel(P)$ to deform beyond its initial affine hull already permits counterexamples such as the one shown in \cref{fig:dimension_increase_ex}. Even restricting to deformations with $0\in\Int\conv(q)$ is not sufficient, as shown in the next example:
\begin{example}
\label{ex:cube_counterexample_2}
Consider the 3-cube as embedded in $\RR^3\times\{0\}\subset\RR^3\times\RR^2\cong\RR^5$.
Let $p_1,...,p_8\in\RR^3$ $\times$ $\{0\}$ be its vertices, and let $q_1,...,q_8\in\{0\}\times\RR^2$ be the vertices as embedded in \cref{fig:counterexample_cube} (on the right). Both share the same edge lengths and vertex-origin distances, and since the two embeddings live in orthogonal subspaces, so does the em\-bedding $tp+sq$ whenever $t^2+s^2=1$ (the squared lengths simply add with weights $t^2$ and $s^2$).
Observe further that for both $p_i$ and $q_i$~the origin can be written as a convex combination using the same coefficients $\alpha\in\Delta_n$ (use an $\alpha$ with $\alpha_i=\alpha_j$ whenever $p_i=-p_j$). It follows that $0\in$ $\Int\conv(tp+sq)$.
\end{example}
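A quick numerical check of this interpolation (in Python; not part of the paper, with the collapsed embedding again chosen by us as above) is the following.
\begin{verbatim}
# Sketch (not from the paper): for t^2 + s^2 = 1 the interpolation t*p + s*q
# of the cube skeleton p in R^3 x {0} and a collapsed embedding q in
# {0} x R^2 keeps all edge lengths and all vertex-origin distances.
import numpy as np
from itertools import product

r = np.sqrt(3)
theta = 2 * np.arcsin(1 / r)
verts = list(product((-1, 1), repeat=3))                    # vertices of [-1,1]^3
p = {v: np.array(v + (0, 0), dtype=float) for v in verts}   # embedded in R^3 x {0}
pos = [np.array([0, 0, 0, r * np.cos(k * theta), r * np.sin(k * theta)])
       for k in range(4)]
q = {v: pos[sum(c == 1 for c in v)] for v in verts}         # collapsed onto a path
edges = [(u, v) for u in verts for v in verts if sum(a != b for a, b in zip(u, v)) == 1]

t, s = np.cos(0.7), np.sin(0.7)                             # any t^2 + s^2 = 1 works
emb = {v: t * p[v] + s * q[v] for v in verts}
print({round(float(np.linalg.norm(emb[u] - emb[v])), 9) for (u, v) in edges})  # {2.0}
print({round(float(np.linalg.norm(y)), 9) for y in emb.values()})              # sqrt(3)
\end{verbatim}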
\subsection{Combinatorially equivalent polytopes}
\label{sec:combinatorial_equivalence}
In this section we consider combi\-natorially equivalent polytopes $P,Q\subset\RR^d$ and prove the following:
\begin{theorem}[combinatorially equivalent version]\label{thm:main_comb_eq}
Let $P,Q\subset\RR^d$ be combinatorially equivalent polytopes so that
\begin{myenumerate}
\item $0\in\Int(Q)$,
\item edges in $Q$ are at most as long as in $P$, and
\item vertex-origin distances in $Q$ are at least as large as in $P$.
\end{myenumerate}
Then $P\cong Q$.
\end{theorem}
Once again the proof uses \cref{lem:main}.
Since $0\in\Int(Q)$, we can verify \mbox{$0\in\phi(\Int(P))$} by showing that the Wachspress map $\phi\: P\to Q$ is surjective.
This statement is of inde\-pendent interest, since the question whether $\phi$ is \emph{bijective} is a \mbox{well-known} open problem for $d\ge 3$ (see \cref{conj:Wachspress_injective}).
Our proof of surjectivity uses~\cref{res:topological_argument} and the following:
\begin{lemma}\label{res:Wachspress_map_sends_boundary}
Given a face $\sigma\in\F(P)$, the Wachspress map $\phi$ sends $\sigma$ onto the~corresponding face $\sigma_Q\in\F(Q)$.
In particular, $\phi$ sends $\partial P$ onto $\partial Q$.
\end{lemma}
\begin{proof}
Given a point $x\in\relint(\sigma)$ with Wachspress coordinates $\alpha\in\Delta_n$, the coeffi\-cient $\alpha_i$ is non-zero if and only if the vertex $p_i$ is in $\sigma$ (\cref{rem:Wachspress_properties} \itm1).
The claim $\phi(x)\in\sigma_Q$ follows immediately.
\end{proof}
\begin{lemma}\label{res:Wachspress_map_surjective}
The Wachspress map $\phi\:P\to Q$ is surjective.
\begin{proof}
We proceed by induction on the dimension $d$ of $P$.
For $d=1$ the Wachspress map is linear and the claim is trivially true.
For $d>1$ recall that $\phi$ sends $\partial P$ to~$\partial Q$ (by \cref{res:Wachspress_map_sends_boundary}).
By induction hypothesis, $\phi$ is surjective on each proper face,\nlspace thus sur\-jective on all of $\partial P$.
To show surjectivity in the interior, we fix $x\in\Int(Q)$; we show $x\in\im\phi$.
Let $\psi\:Q\to P$ be the Wachspress map in the other direction (which is usually \emph{not} the inverse of $\phi$) and define $\rho:=\phi\circ\psi\:Q\to Q$.
Note that by \cref{res:Wachspress_map_sends_boundary} $\rho$ sends each face of $Q$ to itself and is therefore homotopic to the identity on $Q$ via the following linear homotopy:
$$f\:Q\times[0,1]\to Q,\; (y,t)\mapsto (1-t)y+t\rho(y).$$
Since faces of $Q$ are closed under convex combination, $f(\free,t)$ sends $\partial Q$ to itself for all $t\in[0,1]$.
Thus, $f$ satisfies the assumptions of \cref{res:topological_argument} (with $K$ chosen as $Q$ and $x$ playing the role of the origin), and therefore $x\in f(Q,1)=\im \rho\subset\im\phi$.
\end{proof}
\end{lemma}
The proof of \cref{thm:main_comb_eq} follows immediately:
\begin{proof}[Proof of \cref{thm:main_comb_eq}]
Since $0\in\Int(Q)$ and the Wachspress map $\phi\:P\to Q$~is~sur\-jective (by \cref{res:Wachspress_map_surjective}), there exists $x\in P$ with $\phi(x)=0$.
Since $\phi(\partial P)=\partial Q$ (by \cref{res:Wachspress_map_sends_boundary}), we must have $x\in\Int(P)$.
$P\simeq Q$ then follows via \cref{lem:main}.
\end{proof}
\begin{corollary}
\label{res:comb_type_rigid}
A polytope with the origin in its interior is uniquely determined~by its face-lattice, its edge lengths and its vertex-origin distances.
\end{corollary}
\iffalse
We give two examples where this fails if we do not require the origin in the interior. \msays{TODO}
\begin{example}
\cref{fig:origin_outside_tnontriv_ex} shows a pentagon that does not contain the origin and that is not determined by its edge lengths and vertex-origin distances. \mbox{Still, the~poly}\-gon cannot be continuously deformed while keeping its lengths fixed (that is, it is \emph{locally rigid}, but not \emph{globally rigid}). Higher-dimensional polytopes with the same ``flaw'' can be obtained as prisms over this pentagon.
Moreover, if we consider the pentagon $P$ as contained in the plane $\RR^2\times\{1\}\subset\RR^3$, we can construct the pyramid $P^*:=\conv(P\cup\{0\})$ with base face $P$.
Here the origin lies in the polytope, but in a vertex rather than the interior, and the polytope is again not determined by its edge lengths and vertex origin distances. By considering prisms over $P^*$ one finds polytopes in any dimension with the origin inside a face of codimension 3 that likewise lack the unique reconstruction.
\end{example}
\msays{Is there an example that is not even locally rigid?}
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{fig/origin_outside_tnontriv_ex}
\caption{Two non-isometric realizations of a pentagon with the same edge lengths and vertex-origin distances. This is possible because~the ori\-gin is not in the interior.}
\end{figure}
It is natural to ask in how far a version of \cref{thm:main_comb_eq} or \cref{res:comb_type_rigid} holds if $0\in\partial P$.
In general, this fails, as demonstrated in the examples ...
However, more can be said if we specify the type of face in which the origin is contained.
For example, if the origin is in a facet, both results apply unchanged. If the origin is contained in a face of codimension $\ge 2$, then both theorems fail. \msays{TODO}
We briefly address the question in how far unique reconstruction works if $0\in\partial P$. We show that it works if the origin is in a facet or a face of codimension 2, but~can fail for faces of lower dimension.
\fi
If the origin does not lie in $P$, then a unique reconstruction is not guaranteed (re\-call \cref{fig:origin_outside_tnontriv_ex}).
However, if $0\in\partial P$ then we can say more.
Recall the \emph{tangent cone} of $P$ at a face $\sigma\in\F(P)$:
$$T_P(\sigma):=\cone\{x-y\mid x\in P,\, y\in\sigma\}.$$
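For example, for the square $P=[-1,1]^2$ the tangent cone at the vertex $\sigma=\{(1,1)\}$ is the quadrant
$$T_P(\sigma)=\{x\in\RR^2\mid x_1\le 0,\, x_2\le 0\},$$
while the tangent cone at the edge $[-1,1]\times\{1\}$ is the halfspace $\{x\in\RR^2\mid x_2\le 0\}$; more generally, the tangent cone at a facet is always a halfspace.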
\begin{theorem}
Let $P,Q\subset\RR^d$ be combinatorially equivalent polytopes with~the~following properties:
\begin{myenumerate}
\item $0\in\relint(\sigma_Q)$ for some face $\sigma_Q\in\F(Q)$, $\sigma_P$ is the corresponding~face~in~$P$, and $P$ and $Q$ have isometric tangent cones at $\sigma_P$ and $\sigma_Q$.
\item edges in $Q$ are at most as long as in $P$.
\item vertex-origin distances in $Q$ are at least as large as in $P$.
\end{myenumerate}
Then $P\cong Q$.
\end{theorem}
Property \itm1 is always satisfied if, for example, $\sigma_Q$ is a facet of $Q$, or if $\sigma_Q$~is~a face of co\-dimension two at which $P$ and $Q$ agree in the dihedral angle.
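To see this, note that the tangent cone at a face is the intersection of the tangent cones at the facets containing that face,
$$T_P(\sigma)\;=\;\bigcap\,\big\{\,T_P(F) \;\big\vert\; F\in\F_{d-1}(P),\ \sigma\subseteq F\,\big\},$$
each of which is a halfspace. For a facet there is thus nothing to check, and for a face of codimension two the tangent cone is the intersection of exactly two halfspaces, hence determined up to isometry by the dihedral angle along that face.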
\begin{proof}
The proof is by induction on the dimension $d$ of the polytopes. The induction base $d=1$ is clearly satisfied.
In the following we assume $d\ge 2$.
Note first that we can apply \cref{thm:main_comb_eq} to $\sigma_P$ and $\sigma_Q$ to obtain $\sigma_P\simeq\sigma_Q$ via an orthogonal transformation $T\:\RR^d\to\RR^d$, in particular $0\in\relint(\sigma_P)\subseteq P$.
By \itm1 this transformation extends to the tangent cones at these faces.
Let $F_{1,Q},...,F_{m,Q}\in\F_{d-1}(Q)$ be the facets of $Q$ that contain $\sigma_Q$, and let $F_{1,P},...,F_{m,P}\in\F_{d-1}(P)$ be the corresponding facets in $P$.
Then $F_{i,P}$ and $F_{i,Q}$ too have isometric tangent cones at $\sigma_P$ resp.\ $\sigma_Q$, and $F_{i,P}\simeq F_{i,Q}$ follows by induction hypothesis.
Now choose a point $x_Q\in\RR^d$ \emph{beyond} the face $\sigma_Q$ (above all facet-defining hyper\-planes that contain $\sigma_Q$, and below the others) and set $x_P:=T x_Q$.
Consider the polytopes $Q':=\conv(Q\cup\{x_Q\})$ and $P':=\conv(P\cup\{x_P\})$.
Since $x_Q$ was chosen beyond $\sigma_Q$, each edge of $Q'$ is either an edge of $Q$, or is an edge between $x_Q$ and a vertex of some facet $F_{i,Q}$; and analogously for $P'$.
The lengths of edges incident to $x_Q$ depend only on the shape of the tangent cone and the shapes of the facets $F_{i,Q}$, hence are the same for corresponding edges in $P'$.
Thus, $P'$ and $Q'$ satisfy the preconditions of \cref{thm:main_comb_eq}, and we have $P'\simeq Q'$.
Finally, as $x_Q\to 0$, we have $Q'\to Q$ and $P'\to P$ (in the Hausdorff metric), which shows that $P\simeq Q$.
\end{proof}
Thus, if the origin lies in the interior of $P$ or in the relative interior of a facet of $P$, then the conclusion of \cref{thm:main_comb_eq}~still holds. If the origin lies in a face of codimension three, then counterexamples exist.
\begin{example}
Consider the pentagons from \cref{fig:origin_outside_tnontriv_ex} as lying in the plane $\RR^2\times\{1\}$, with their former origins now at $(0,0,1)$. Consider the pyramids
$$P^*:= \conv(P\cup\{0\})\quad\text{and}\quad Q^*:=\conv(Q\cup\{0\}).$$
These polytopes have the origin in a vertex (a face of codimension three), have the same edge-graphs, edge lengths and vertex-origin distances, yet are not isometric. Examples in higher dimensions, with the origin in a face of codimension three, can be constructed by considering prisms over $P^*$ resp.\ $Q^*$.
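Indeed, the edges of $P^*$ are the edges of the pentagon together with the edges joining the apex $0\in\RR^3$ to the pentagon's vertices; for a vertex $(x,1)\in\RR^2\times\{1\}$, both the length of this new edge and the vertex-origin distance of $(x,1)$ equal
$$\sqrt{\|x\|^2+1},$$
and are therefore determined by the distance $\|x\|$ of the vertex to the former origin. The same applies to $Q^*$.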
\end{example}
We do not know whether \cref{thm:main_comb_eq} holds if the origin lies in a face of codimen\-sion two (see \cref{q:codimension_two}).
\subsection{Inscribed polytopes}
\label{sec:inscribed}
It is worthwhile to formulate versions of \cref{thm:main_comb_eq} for inscribed polytopes, that is, polytopes where all vertices lie on a common sphere -- the circumsphere.
For inscribed polytopes we can write down a direct monotone relation between edge lengths and the circumradius.
\begin{corollary}[inscribed version]\label{res:inscribed_version}
Let $P,Q\subset\RR^d$ be two combinatorially equivalent polytopes so that
\begin{myenumerate}
\item $P$ and $Q$ are inscribed in spheres of radii $r_P$ and $r_Q$ respectively,
\item $Q$ contains its circumcenter in the interior, and
\item edges in $Q$ are at most as long as in $P$.
\end{myenumerate}
Then $r_P\ge r_Q$, with equality if and only if $P\simeq Q$.
\begin{proof}
Translate $P$ and $Q$ so that both circumcenters lie at the origin.
Suppose~that $r_P\le r_Q$. Then all preconditions of \cref{thm:main_comb_eq} are satisfied, which yields $P\simeq Q$, hence $r_P=r_Q$.
\end{proof}
\end{corollary}
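As a simple illustration, consider regular polygons: a regular $n$-gon inscribed in a circle of radius $r$ has edge length
$$\ell = 2r\sin(\pi/n),$$
so within this family shorter edges indeed force a smaller circumradius, and equal edge lengths force equal circumradii.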
Interestingly, the corresponding \enquote{unique reconstruction version} does not require any assumptions about the location of the origin or an explicit value for the circum\-radius.
In fact, we do not even need to apply our results, as it already follows from Cauchy's rigidity theorem (\cref{res:Cauchy_rigidity}).
\begin{corollary}
An inscribed polytope of a fixed combinatorial type is uniquely~determined, up to isometry, by its edge lengths.
\begin{proof}
The case $d=2$ is straightforward: given any circle, there is only~a~single~way (up to isometry) to place edges of prescribed lengths. Also, there is only~a~single~radius for the circle for which the edges reach around the circle exactly once and close up perfectly.
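For instance, if the polygon contains its circumcenter, this can be made quantitative: an edge of length $\ell_i\le 2r$ spans a central angle of $2\arcsin\big(\tfrac{\ell_i}{2r}\big)$, so the edges close up around the circle exactly once if and only if
$$\sum_i 2\arcsin\Big(\frac{\ell_i}{2r}\Big) = 2\pi,$$
and since the left-hand side is strictly decreasing in $r$, at most one radius can satisfy this.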
This proves uniqueness for polygons.
If $P$ and $Q$ are two such polytopes of higher dimension, then their 2-dimensional faces are still inscribed (in the circles cut out from the circumsphere by their affine hulls), have prescribed edge lengths, and by the 2-dimensional case above, corresponding 2-faces of $P$ and $Q$ are therefore isometric.
Then $P\simeq Q$ follows from Cauchy's~rigidity~theorem (\cref{res:Cauchy_rigidity}).
\end{proof}
\end{corollary}
\iffalse
\newpage
\section{Inscribed polytopes}
At this point we have developed almost the complete machinery to come back to our initial question about inscribed polytopes:
\begin{theorem}
Given two combinatorially equivalent polytope $P,Q\subset\RR^d$ so that
\begin{myenumerate}
\item $P$ and $Q$ are inscribed in spheres of radii $r_P$ and $r_Q$ respectively,
\item $Q$ contains the origin in its interior, and
\item edges in $Q$ are not longer than edges in $P$,
\end{myenumerate}
then $r_P\ge r_Q$. Equality holds if and only if $P\cong Q$.
\end{theorem}
We discuss an idea for the proof to uncover where the remaining difficulties lie: the plan would be to apply the proof of \cref{res:special_case_simplices} verbatim: translate $Q$ so that the circumcenter becomes the origin, in particular, $0\in\Int(Q)$. Express the origin in Wachspress coordinates (in the simplex case we have simply used the barycentric coordinates) $0=\sum_i\alpha_i q_i$. Finally, observe
\begin{align*}
r_P^2
= \sum_i \alpha_i \|p_i\|^2
&= \Big\|\sum_i \alpha_i p_i\Big\|^2 \!+ \|P\|_\alpha^2
\\&\ge \Big\|\sum_i \alpha_i q_i\Big\|^2 + \|Q\|_\alpha^2
= \sum_i \alpha_i \|q_i\|^2
= r_Q^2. \notag
\end{align*}
However, there is one major oversight in this proof: the Wachspress coordinates of the origin in $Q$ have been chosen \shortStyle{w.r.t.}\ $Q$.
But in order to apply \cref{res:expansion_main_result} the coordinates must have been chosen \shortStyle{w.r.t.}\ to $P$.
So, what we actually need is a point $x\in\Int(P)$ whose Wachspress coordinates are convex coordinates (not necessarily Wachspress coordinates) of the origin in $Q$.
\subsection{The Wachspress map}
The Wachspress coordinates can be used to define canonical continuous maps between combinatorially equivalent polytopes:
\begin{definition}
For combinatorially equivalent polytopes $P,Q\subset\RR^d$ the \emph{Wachspress map} $\phi\: P\to Q$ is defined as follows: for $x\in P$ let $\alpha\in\Delta_n$ be the Wachspress coordinates of $x$. Set $\phi(x):=\sum_i \alpha_i q_i$.
\end{definition}
\begin{observation}
We list some relevant properties of the Wachspress map:
\begin{myenumerate}
\item the Wachspress map is continuous.
\item for each face $\sigma\in\F(P)$, the Wachspress map sends $\sigma$ onto the corresponding face $\sigma_Q\in\F(Q)$. More precisely, it sends the relative interior of $\sigma$ onto the relative interior of $\sigma_Q$.
\item for simplices, the Wachspress map is a linear map.
\end{myenumerate}
\end{observation}
\begin{lemma}\label{res:Wachspress_map_surjective}
The Wachspress map is surjective.
\begin{proof}
We proceed by induction on the dimension $d$ of $P$.
For $d=1$ the Wachspress map is linear and the claim is trivially true. Suppose then that the claim has been shown for all polytopes of dimension up to $d-1$; we prove it for $d$.
Recall that if $\sigma\in\F(P)$ is a face of $P$, $x\in \relint(\sigma)$ some point, and $\alpha\in\Delta_n$ its~Wachs\-press coordinates in $P$,
then $\alpha_i>0$ if and only if the corresponding vertex $p_i$ is in $\sigma$.
As a consequence, the Wachspress map sends $\sigma$ to the corresponding face $\sigma_Q\in\F(Q)$. In particular, it sends the boundary $\partial P$ to $\partial Q$.
Let now $\psi\:Q\to P$ be some homeomorphism which conversely maps each face $\sigma\in\F(Q)$ onto the respective face $\sigma_P\in\F(P)$ (such $\psi$ exists, consider constructing one from suitable triangulations of $P$ and $Q$). Define the map $\rho:=\psi\circ\phi\:P\to P$.
The map $\rho$ then sends $\partial P$ onto $\partial P$.
The following is a homotopy between $\rho$ and the identity on $\partial P$:
$$h\:[0,1]\times\partial P \to \partial P,\quad (t,x) \;\mapsto\; h_t(x):=(1-t)x + t\rho(x).$$
In fact, since both $x$ and $\rho(x)$ are in the same face of $P$, $h_t(x)$ is in this face as well, thus in $\partial P$.
Suppose now that $\phi$ is not surjective, so neither is $\rho$.
By the induction hypothesis, we can assume that $\rho$ is surjective on every face of $P$, and so there must be an \emph{interior} point $a\in\Int(P)$ that is not in the image of $\rho$.
Define the map $\kappa\:P\to\partial P$ as follows: for $x\in P$ let $R$ be the ray emanating from $a$ and passing through $\rho(x)$ (which is unique since $a\not= \rho(x)$). Set $\kappa(x)$ to be the intersection of $R$ with $\partial P$. Note that $\kappa(x)=\rho(x)$ for all $x\in\partial P$.
We have then constructed a continuous map from $P$ to $\partial P$ so that the restriction to $\partial P$ is homotopic to the identity map.
This is a well-known impossibility. It can be quickly shown by considering the following commuting diagram (left) and the diagram induced on the $\Bbb Z$-homology groups (right):
\begin{figure}[h!]
\centering
$\begin{tikzcd}
& P \\
{\partial P} && {\partial P}
\arrow["\partial\rho", from=2-1, to=2-3]
\arrow["i", hook, from=2-1, to=1-2]
\arrow["\kappa", from=1-2, to=2-3]
\end{tikzcd}$
\qquad
$\begin{tikzcd}
& H_\bullet(P) \\
{H_\bullet(\partial P)} && {H_\bullet(\partial P)}
\arrow["\partial\rho_*", from=2-1, to=2-3]
\arrow["i_*", from=2-1, to=1-2]
\arrow["\kappa_*", from=1-2, to=2-3]
\end{tikzcd}$
\end{figure}
\noindent
Since $\partial\rho$ is homotopic to the identity, the arrow $\partial\rho_*$ is an isomorphism, and so must be the arrows above it. This is impossible because $$H_{d-1}(\partial P)=\ZZ\not=0=H_{d-1}(P).$$
\end{proof}
\end{lemma}
\fi
\iffalse
\newpage
\section{Rigidity results}
\begin{theorem}\label{res:shrink_edges_expand_poly}
Let $P,Q\subset\RR^d$ be combinatorially equivalent polytopes so that
\begin{myenumerate}
\item $Q$ contains the origin in its interior,
\item edges in $Q$ are \ul{not longer} than in $P$, and
\item vertex-origin distances in $Q$ are \ul{not smaller} than in $P$.
\end{myenumerate}
Then $P\cong Q$.
\begin{proof}
By \cref{res:Wachspress_map_surjective} the Wachspress map $\phi\:P\to Q$ is surjective.
Since $0\in \Int(Q)$ we can therefore choose an $x\in P$ with $\phi(x)=0$.
Let~$\alpha\in\Int(\Delta_n)$ be the Wachspress coordinates of $x$.
Then
\begin{align*}
\|P\|_\alpha^2 &= \sum_i\alpha_i \|p_i\|^2 - \|x\|^2
\overset{(*)}\le \sum_i \alpha_i \|q_i\|^2 - \|\phi(x)\|^2
= \|Q\|_\alpha^2
\overset{(**)}\le \|P\|_\alpha^2,
\end{align*}
where in $(*)$ we used \itm3 and $\phi(x)=0$, and in $(**)$ we applied \cref{res:expansion_main_result}.
From this chain of inequalities we find
\begin{itemize}
\item $\|P\|_\alpha=\|Q\|_\alpha$, which by \cref{res:expansion_main_result} implies that $P$ and $Q$ have the same edge lengths and that $Q$ is an affine transformation of $P$.
\item $\sum_i\alpha_i\|p_i\|^2 - \|x\|^2 = \sum_i \alpha_i \|q_i\|^2$, which by $\alpha_i>0$ and \itm3 implies $\|x\|=0$ and that $P$ and $Q $ have the same vertex-origin distances.
\end{itemize}
The isometry $P\cong Q$ then follows from \cref{res:linear_is_isometry} below.
\end{proof}
\end{theorem}
\begin{corollary}
Let $P,Q\subset\RR^d$ be combinatorially equivalent polytopes so that
\begin{myenumerate}
\item $P$ and $Q$ are inscribed in spheres of radii $r_P$ and $r_Q$ respectively,
\item $Q$ contains the circumcenter in its interior, and
\item edges in $Q$ are \ul{not longer} than in $P$.
\end{myenumerate}
Then $r_P\ge r_Q$. Equality holds if and only if $P\cong Q$.
\begin{proof}
Translate $P$ and $Q$ so that both circumcenters lie at the origin.
Suppose that $r_P\le r_Q$. Then all preconditions of \cref{res:shrink_edges_expand_poly} are satisfied, which yields $P\cong Q$, hence $r_P=r_Q$.
\end{proof}
\end{corollary}
For a given polytope $P\subset\RR^d$ with edge-graph $G_P=(V,E)$ there are functions $\ell\:E\to \RR_+$, which to each edge of $P$ assigns its length, and $\omega\:V\to\RR_+$, which to each vertex of $P$ assigns its distance to the origin. The functions $\ell$ and $\omega$ will be called the \emph{parameters} of $P$.
\begin{corollary}\label{res:polytope_rigid_edge_lenghts_vod}
If two combinatorially equivalent polytopes $P,Q\subset\RR^d$, at least one of which contains the origin in its interior, have the same edge lengths and vertex-origin distances, then $P\cong Q$.
\end{corollary}
This result implies a uniqueness condition for polytopes: a polytope of a fixed combinatorial type with $0\in\Int(P)$ is uniquely determined by its edge lengths and vertex-origin distances.
Note however that the statement of the corollary is stronger: it also says that polytopes with and without origin in the interior cannot coexist if they have the same parameters.
We also point to the fact that the condition that at least one polytope contain the origin is necessary: \msays{example}
\begin{corollary}
A polytope $P\subset\RR^d$ of a fixed combinatorial type and with $0\in\Int(P)$ is uniquely determined by its edge lengths and vertex-origin distances.
\end{corollary}
Interestingly, if we restrict to inscribed polytopes then the requirement for the origin to be in the interior is no longer necessary.
This does not follow from our previous results, but follows from a simple application of Cauchy's rigidity theorem.
\begin{corollary}
Inscribed polytopes of a given combinatorial type are uniquely deter\-mined by their edge lengths.
\begin{proof}[Proof in the appendix (see \cref{res:inscribed_determined_by_edge_length_via_Cauchy})]
\end{proof}
\end{corollary}
This statement however can also be proven in a straightforward way by applying Cauchy's famous rigidity theorem (see \cref{res:Cauchy_rigidity}) in which case we do not even need any assumption on the location of the circumcenter.
\newpage
\begin{lemma}\label{res:linear_is_isometry}
If $P\subset\RR^d$ is a full-dimensional polytope and $T\:\RR^d\to\RR^d$ is an affine transformation that preserves all edge lengths and vertex-origin distances~of~$P$,\nolinebreak\space then $T$ is an isometry that fixes the origin.
\begin{proof}
Write $T$ as $x\mapsto Ax+b$, where $A\:\RR^d\to\RR^d$ is a linear map and $b\in\RR^d$. Fix a vertex $x\in\F_0(P)$.
Since $T$ preserves vertex-origin distances it holds
\begin{align*}
\|Ax+b\|^2=\|x\|^2 \quad\implies\quad (\mathrm I.)\;\;\|Ax\|^2 + 2\<Ax,b\> + \|b\|^2 = \|x\|^2.
\end{align*}
For every vertex $y\in\F_0(P)$ adjacent to $x$ likewise holds $(\mathrm{II}.)\;\|Ay\|^2+2\<Ay,b\>+\|b\|^2=\|y\|^2$. Summing up $(\mathrm{I}.)$ and $(\mathrm{II}.)$ yields
$$(\mathrm{III}.)\;\;\|Ax\|^2+\|Ay\|^2 + 2\<Ax+Ay,b\> + 2\|b\|^2 = \|x\|^2+\|y\|^2.$$
Since $x$ and $y$ form an edge, and $T$ preserves edge lengths, we also have
$$\|Ax-Ay\|^2 = \|x-y\|^2 \;\implies\; (\mathrm{IV}.)\; \|Ax\|^2+\|Ay\|^2 - 2\<Ax,Ay\> = \|x\|^2+\|y\|^2-2\<x,y\>.$$
We emphasize that the left equation, and hence $(\mathrm{IV}.)$ and everything that follows below, also holds if $x=y$.
Substituting the right side of $(\mathrm{III}.)$ into the right side of $(\mathrm{IV}.)$ and rewriting yields
\begin{align*}
\|Ax\|^2+\|Ay\|^2 - 2\<Ax,Ay\> &= \|Ax\|^2+\|Ay\|^2 + 2\<Ax+Ay,b\> + 2\|b\|^2-2\<x,y\> \\
- \<Ax,Ay\> &= \<Ax+Ay,b\> + \|b\|^2-\<x,y\> \\
- \<A^{\Tsymb} A x,y\> &= \<Ax,b\>+\<A^{\Tsymb} b,y\> + \|b\|^2-\<x,y\> \\
\<(\Id - A^{\Tsymb} A)x - A^{\Tsymb} b,y\> &= \<Ax,b\> + \|b\|^2
\end{align*}
Thus, $x$ itself and all vertices adjacent to $x$ must be contained in the affine subspace
$$\mathcal H_x:=\big\{\, y\in\RR^d \;\big\vert\; \<(\Id-A^{\Tsymb} A)x-A^{\Tsymb} b,y\> = \<Ax,b\>+\|b\|^2 \,\big\}.$$
But then also $P\subset\mathcal H_x$.
Since $P$ is full-dimensional we therefore require $\mathcal H_x=\RR^d$ for all vertices $x\in\F_0(P)$. This can only happen if
$$(\Id-A^{\Tsymb} A)x = A^{\Tsymb} b, \quad\text{for all $x\in\F_0(P)$}.$$
This states once more that the vertices of $P$ must be contained in an affine subspace.
But again, since $P$ is full-dimensional this restriction must be vacuous, meaning that $A^{\Tsymb} A = \Id$ ($A$ is orthogonal) and $A^{\Tsymb} b = 0\implies b=0$.
We conclude that $T$ is an isometry that fixes the origin.
\end{proof}
\end{lemma}
\fi
\iffalse
\newpage
\section{Expanded graph embeddings}
\label{sec:graph_embeddings}
The core motivation of our investigations have been polytopes and their skeleta.
Nevertheless, some of our results also apply in the more general setting of graph~embeddings.
The purpose of this section is to address this generalization briefly.
\subsection{Expansion matrices and expanded embeddings}
\label{sec:expansion_matrices}
The following objects are named in reference to the analogue of \cref{res:expansion_main_result} that they will satisfy (see \cref{res:main_result_for_embeddings} below).
\begin{definition}
Given a graph $G=(V,E)$, a symmetric matrix $M\in\RR^{n\x n}$ is called an \emph{expansion matrix} for $G$ if
\begin{myenumerate}
\item $M_{ij}>0$ if $ij\in E$,
\item $M_{ij}=0$ if $i\not=j$ and $ij\not\in E$,
\item $M$ has a unique positive eigenvalue (of multiplicity one).
\end{myenumerate}
\end{definition}
The Izmestiev matrix is an expansion matrix for the edge-graph of a polytope (by \cref{res:Izmestiev}).
More generally, Colin de Verdière matrices are expansion matrices (see \cite{van1999colin}; they are usually defined with opposite signs).
Given a graph embedding $p\: V\to\RR^d$, consider the matrix $X_p\in\RR^{n\x d}$ whose rows are $p_1,...,p_n$:
$$X_p:=\begin{pmatrix}
\;\rule[.5ex]{2.5ex}{0.4pt} \!\!\!\! & p_1 & \!\!\!\! \rule[.5ex]{2.5ex}{0.4pt} \;\, \\
& \vdots \\
\;\rule[.5ex]{2.5ex}{0.4pt} \!\!\!\! & p_n & \!\!\!\! \rule[.5ex]{2.5ex}{0.4pt} \;\, \\
\end{pmatrix}
$$
\begin{definition}
A realization $p\: V\to \RR^d$ of $G$ is called \emph{expanded} if $MX_p=0$~(or equivalently, $\Span X_p\subseteq \ker M$) for some expansion matrix $M$ of $G$.
We say that $p$ is \emph{strictly expanded} if $\Span X_p=\ker M$.
\end{definition}
For a matrix $M$, an embedding with $\Span X_p = \ker M$ is often called~a~\emph{null\-space embedding} for $M$, which is unique up to affine transformation (\shortStyle{cf.}\ \cref{res:linear_algebra}). Using this terminology, a strictly expanded embedding is thus~a~null\-space embedding of an expansion matrix.
The nullspace embeddings of a Colin de Verdiére matrix are usually called \emph{Colin de Verdiére embeddings} and provide examples of strictly expanded embeddings.\nlspace
In \cite{izmestiev2010colin} (the original source of \Cref{res:Izmestiev}) Izmestiev established that polytope skeleta are Colin de Verdiére embeddings.
\subsection{Main results}
\label{sec:embedding_main_result}
Expansion matrices and the corresponding strictly expanded embeddings are defined just right to satisfy statements equivalent to \itm1 - \itm5~in~\cref{res:Izmestiev}.
An analogue of \cref{res:expansion_main_result} can then be proven largely equivalent~to~\cref{sec:proof_of_main_result} and we do not repeat the proof in full.
Solely the equality case is different and is proven separately.
\msays{Why are the $\alpha_i>0$?}
\begin{theorem}
\label{res:main_result_for_embeddings}
Let $p\: V\to\RR^d$ be an expanded embedding of the graph $G$ with~expan\-sion matrix $M\in\RR^{n\x n}$\!.
If $q\: V\to\RR^e$ is any other embedding of $G$ with edges~that~are not longer than in $p$, then
$$\|p\|_\alpha\ge \|q\|_\alpha, \quad\text{where $\textstyle\alpha_i=\sum_j M_{ij}$}.$$
Equality holds if and only if $q$ is a translate of an expanded embedding for~$M$~and has the same edge lengths as $p$.
\end{theorem}
The name \enquote{expanded embedding} appears now justified as we see that they have maximal expansion among all embeddings with such edge lengths.
\begin{proof}[Proof of \cref{res:main_result_for_embeddings}]
We only verify the equality case.
Suppose $\|p\|_\alpha=\|q\|_\alpha$.
Analogous to the proof of \cref{res:expansion_main_result} \mbox{(in \cref{sec:proof_of_main_result})} we find that $p$ and $q$ must have the same edge lengths and, for a suitable translation~of $q$, $\Span X_q\subseteq\ker M$. The latter means that this translation of $q$ is expanded \shortStyle{w.r.t.}\ $M$.
Conversely, if both $p$ and $q$ are expanded \shortStyle{w.r.t.}\ the matrix $M$ and have the same edge lengths, then $\|p\|_\alpha\ge\|q\|_\alpha\ge\|p\|_\alpha$, hence $\|p\|_\alpha=\|q\|_\alpha$.
\end{proof}
Note for the equality case, if both $p$ and $q$ are expanded \shortStyle{w.r.t.}\ the same expansion matrix $M$, while they are not necessarily linear transformations of each other,\nlspace they are projections of a common strictly expanded embedding $\hat p\:V\to\RR^f$ where $f:=\dim\ker M$.
A conjecture akin \cref{conj:main} for expanded embeddings is certainly not true, as already the counterexample \cref{ex:cube} shows.
However, some of the special cases of the polytope case can be proven for embeddings as well.
Again, the proofs are essentially equivalent and use a version of \cref{lem:main}.
Note however that for general graph embeddings there exists no equivalent of the Wachspress map, but we only have a \emph{Wachspress point}: \msays{TODO}
\begin{lemma}
Let $p\: V(G)\to\RR^d$ and $q\:V(G)\to\RR^e$ be embeddings.
If $p$~is~\mbox{expanded} and it furthermore holds
\begin{myenumerate}
\item the Wachspress point $w$ is zero,
\item edges in $q$ are at most as long as in $p$, and
\item vertex-origin distances in $q$ are at least as large as in $p$,
\end{myenumerate}
then $p\simeq q$.
\end{lemma}
The proofs of the centrally symmetric case and the local version are both based on showing $\phi(0)=0$.
\hrulefill
Within the current setting it is harder to prove results akin to those of \cref{sec:expansion_rigidity_tensegrity} as we do not have an analogue of the Wachspress map. We only prove an analogue of the centrally symmetric case (\cref{res:centrally_symmetric}).
Given a graph $G$ and an involution $\iota\in\Aut(G)$, an embedding $p\: V(G)\to\RR^d$ is \emph{$\iota$-centrally symmetric} if $p_{\iota(i)}=-p_i$ for all $i\in V(G)$.
\begin{theorem}[centrally symmetric version]\label{res:centrally_symmetric_for_embeddings}
Given two $\iota$-centrally symmetric~graph embeddings $p\:V(G)\to\RR^d$ and $q\:V(G)\to\RR^e$, if $p$ is expanded and
\begin{myenumerate}
\item edges in $q$ are at most as long as in $p$, and
\item vertex-origin distances in $q$ are at least as large as in $p$,
\end{myenumerate}
then $p\simeq q$.
\end{theorem}
\msays{Here too we need that $\alpha_i>0$.}
\msays{We also need that $\alpha$ is a geometric invariant = transforms well under symmetries.}
\begin{proof}
Let $\alpha\in\Delta_n$ be given by $\alpha_i:=\sum_j M_{ij}>0$.
Then
\begin{align*}
\sum_i \alpha_i \|p_i\|^2
&= \Big\|\sum_i \alpha_i p_i\Big\|^2 \!+
\tfrac12\sum_{i,j} \alpha_i\alpha_j \|p_i-p_j\|^2
= \|x\|^2 + \|p\|_\alpha^2
\\[-1ex]
\rotatebox{90}{$\ge$}
\qquad
&
\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \,
\rotatebox{90}{$\le$}
\qquad\;\;
\rotatebox{90}{$\le$}
\\[-1ex]
\sum_i \alpha_i \|q_i\|^2
&= \Big\|\sum_i \alpha_i q_i\Big\|^2 \!+
\tfrac12\sum_{i,j} \alpha_i\alpha_j \|q_i-q_j\|^2\,
= \|0\|^2 \,+\, \|q\|_\alpha^2,
\end{align*}
where the rows hold by simple computation and the columns hold (from left~to~right) by \itm2, trivially, and \itm1 + \cref{res:expansion_main_result} respectively.
It follows that all inequali\-ties are actually equalities.
In particular, since $\alpha_i>0$ we find both $\|p_i\|=\|q_i\|$~for all $i\in V$ and $\|p_i-p_j\|=\|q_i-q_j\|$ for all $i,j\in V$, establishing that $p$ and $q$~are indeed isometric via an orthogonal transformation.
\end{proof}
\iffalse
\begin{theorem}[local version]\label{thm:main_local}
Given an expanded graph embedding $p\: V(G)\to \RR^d$, there exists an $\eps >0$ with the following property: if an embedding $q\:V(G_P)\to\RR^d$ satisfies
\begin{myenumerate}
\item $q$ is $\eps$-close to $p$, \shortStyle{i.e.,}\ $\mu(q,p)<\eps$,
\item edges in $q$ are at most as long as in $p$, and
\item vertex-origin distances in $q$ are at least as large as in $p$,
\end{myenumerate}
then $p\simeq q$.
\end{theorem}
\fi
We suspect that polytope skeleta are the only expanded embeddings for which an analogue of the Wachspress map can be defined. More rigorously, we wonder the following:
\begin{question}
Let $p$ be a graph embedding so that for all $x\in\Int \conv(p)$ the~translated embedding $p-x$ is an expanded embedding. Is then $p$ necessarily the skeleton of $\conv p$?
\end{question}
\msays{My conclusion for this section is: it will probably work, but it will require much more work than what would be justified for such a \enquote{brief} section. I guess we leave it out.}
\fi
\iffalse
\newpage
\section{Relation to other results and conjectures}
\label{sec:relations}
While we believe that the forms of rigidity for convex polytopes that we proved in \shortStyle{e.g.}\ \cref{res:shrink_edges_expand_poly} or \cref{res:comb_type_rigid} are interesting and required quite some elaboration, we also admit that they are not too surprising.
In this section we want to discuss that edge lengths and vertex-origin distances are potentially \emph{much more}
powerful in nailing down a single polytope than one might expect. In fact, by the vast generality of \cref{res:expansion_main_result} ($Q$ might not be a polytope, but any graph embedding), we believe the following:
\hrulefill
The generality of \cref{res:expansion_main_result} suggests that polytope skeleta are \enquote{maximally expanded} for their edge lengths, and that any other embedding of the edge-graph with the same or shorter edge lengths is therefore \enquote{less expanded}.
This is provably true when \enquote{expansion} stands for $\alpha$-expansion with a well-chosen parameter $\alpha\in\Delta_n$. But we can ask the same for our initial measure of polytopes size -- the circumradius. For example, we could ask whether the following is true:
\begin{statement}\label{stm:inscribed_expansion}
Let $P\subset \RR^d$ be a polytope and $q\:V(G_P)\to\RR^e$ an embedding of its edge-graph, so that
\begin{myenumerate}
\item $P$ and $q$ are inscribed in spheres of radii $r_P$ and $r_q$ respectively,
\item $\conv\{q_1,...,q_n\}$ contains the circumcenter of $q$, and
\item edges in $q$ are at most as long as in $P$.
\end{myenumerate}
Then $r_P\ge r_q$.
\end{statement}
\subsection{It's about the edge-graph}
\begin{conjecture}\label{conj:shrink_edges_expand_poly}
Let $P,Q\subset\RR^d$ be two polytopes with isomorphic edge-graphs, so that
\begin{myenumerate}
\item $Q$ contains the origin in its interior,
\item edges in $Q$ are \ul{not longer} than in $P$, and
\item vertex-origin distances in $Q$ are \ul{not smaller} than in $P$.
\end{myenumerate}
Then $P\cong Q$.
\end{conjecture}
Note the difference to \cref{res:shrink_edges_expand_poly}: we do not require $P$ and $Q$ to be combinatorially equivalent, but only to have the same edge-graph!
The following conjectures would be implied by \cref{conj:shrink_edges_expand_poly}:
\begin{conjecture}\label{conj:determined_by_edge_lengths}
Given a graph $G=(V,E)$ and functions $r\:V\to\RR_+$ and $\ell\:E\to\RR_+$, then there is at most one $d\ge 1$, $\gamma>0$ and polytope $P\subset\RR^d$ so that
\begin{myenumerate}
\item $P$ contains the origin in its interior,
\item for each $e\in E$ the corresponding edge in $P$ has length $\ell(e)$,
\item for each $i\in V$ the corresponding vertex-origin distance in $P$ is $\gamma r(i)$.
\end{myenumerate}
If there is at least one such polytope, then the statement stays valid if we drop~requirement \itm1.
\end{conjecture}
\begin{corollary}\label{conj:determined_by_edge_lengths_inscribed}
An inscribed polytope that contains the origin in its interior is uniquely determined by its edge-graph and edge lengths.
\end{corollary}
\begin{corollary}\label{conj:determined_by_edge_graph}
An inscribed polytope that contains the origin in its interior and all edges of which are of the same length is uniquely determined by its edge-graph.
\end{corollary}
To emphasize the power of \cref{conj:determined_by_edge_lengths_inscribed}: the edge-graph and the edge lengths would determine the polytope
\begin{itemize}
\item across all dimensions,
\item across all combinatorial types, and
\item across all radii of the circumsphere.
\end{itemize}
\subsection{Proving the conjectures in special cases}
What prevents us from proving these conjectures?
In fact, the proof of \cref{res:shrink_edges_expand_poly} seems valid even if we replace $Q\subset\RR^d$ by an arbitrary graph embedding $q\:V\to\RR^e$. The answer lies in \cref{res:Wachspress_map_surjective}, the surjectivity of the Wachspress map.
The definition of the Wachspress map can certainly be generalized: let $\phi\:P\to\RR^e$ be the mapping
$$x:=\sum_i \alpha_i p_i \mapsto \sum_i \alpha_i q_i.$$
If $q$ is not the skeleton of a polytope, it does not make much sense to ask whether $\phi$ is surjective.
But the proof of \cref{res:shrink_edges_expand_poly} only requires that $0\in\im\phi$.
Even better, the proof only requires to find an $x\in\Int(P)$ with $\|\phi(x)\|\le \|x\|$.
In general, it seems surprisingly hard to prove the existence of such a point $x$. We give an example where the most general conceivable formulation fails (this example is due to Joseph Doolittle):
\begin{example}
\msays{TODO}
\end{example}
\subsection{Beyond polytopes} \quad
\msays{TODO}
\subsection{Other conjectures}
We agree that the vertex-origin distances are rather unnatural parameters, given that we do not want to consider a polytope at a fixed position in space.
They are less arbitrary in the case of inscribed polytopes, centrally symmetric polytopes or other polytopes with a kind of symmetry that fixes one particular point.
\begin{question}
Given two (combinatorially equivalent) polytopes $P,Q\subset\RR^d$ so that for any two vertices $i,j\in V(G_P)$ with $\dist(i,j)\le 2$ holds $\|p_i-p_j\|=\|q_i-q_j\|$. Are these polytopes necessarily isometric?
\end{question}
\msays{Yes, it does $\to$ Cauchy!}
\begin{question}
Given two (combinatorially equivalent) polytopes $P,Q\subset\RR^d$ with the same edge lengths, and also with the same set of distances between all vertices and a fixed vertex $i\in V(G_P)$. Do we have $P\cong Q$?
\end{question}
\msays{No, counterexample of pyramid over flexible polygon!}
\begin{question}
What is the least amount of distances (between vertices of $P$, or between vertices and external point) necessary to determine the polytope uniquely?
\end{question}
This has certainly something to do with the dimension of the realization space.
\fi
\iffalse
\newpage
\section{Miscellaneous}
\subsection{Dimension of the realization space}
\begin{corollary}
The realization space of a $d$-polytope is at most
$$\dim\mathcal R(P)\le f_0+f_1+{\textstyle {d\choose 2}}.$$
\end{corollary}
In general, this bound is very crude; so crude in fact that we are not aware of a case where it is actually attained (even for polygons it is off by one). However, in a few cases it provides the best upper bound known.
\begin{example}
The dimension of the realization space of a polytope is certainly smaller than the dimension of the realization space of a simplicial polytope with the same number of vertices, which is exactly $d \cdot f_0$. For an equivalent reason using simple polytopes, the dimension is upper bounded by $d\cdot f_{d-1}$.
We describe a polytope for which our bound is better than these bounds: consider the 4-crosspolytope with one vertex truncated. This polytope has 15 vertices, 37 edges, 40 2-faces and 17 facets. This yields
\begin{align*}
d\cdot f_0 &= 60, \\
d\cdot f_{d-1} &= 68, \\
f_0 + f_1 + {\textstyle {d\choose 2}} &= 57, \\
f_{d-1}+f_{d-2} + {\textstyle {d\choose 2}} &= 63.
\end{align*}
A well-known lower bound is $d(f_0+f_{d-1})-f_{0,d-1}=42$, and so there is still a large gap.
\end{example}
\fi
\iffalse
\newpage
\noindent\hrulefill
Let $G=(V,E)$ be a connected finite simple graph with vertex set $V=\{1,...,n\}$.
A $d$-dim\-ensional embedding of $G$ is a map $v\: V\to\RR^d$ that maps the vertices of $G$ to the points denoted $v_1,...,v_n\in\RR^d$.
We are interested in embeddings that are \enquote{as expanded as possible} for their edge lengths. We shall be more precise later on, but intuitively we mean that if we have a \enquote{maximally expanded} embedding, and we try to move the points without making any edge longer (where we do not necessarily insist on a continuous move, or even on staying in the same dimension), then the \enquote{overall size} of the embedding can never go up. By \enquote{size} we could mean something like a circumradius or a weighted function of distances between vertices.
We do not know a general procedure by which to identify a given embedding~as \enquote{expanded} in this sense, but we shall discuss some ways to construct such embeddings.
For this, we start with \enquote{expansion matrices}:
\begin{definition}\label{def:expanded_matrix}
A symmetric matrix $M\in\RR^{n\times n}$ is called an \emph{expansion matrix} for~$G$ if it satisfies
\begin{myenumerate}
\item $M_{ij}=0$ whenever $i\not= j$ and $ij\not\in E$,
\item $M_{ij}>0$ whenever $ij\in E$, and
\item $M$ has a single positive eigenvalue $\theta>0$ of multiplicity one.
\end{myenumerate}
\end{definition}
This is (up to an irrelevant sign change) close to the definition of so-called Colin de Verdiére matrices (a matrix $M\in\RR^{n\x n}$ is a Colin de Verdiére matrix for $G$ if $-M$ satisfies \itm1, \itm2 and \itm3, as well as the so-called \emph{strong Arnold property}).
In fact, every Colin de Verdiére matrix yields an expansion matrix, and, since these matrices are well studied, they provide countless examples for our purpose (see \shortStyle{e.g.}\ \cref{res:Izmestiev}).
In the following, let $M\in\RR^{n\x n}$ be a fixed expansion matrix for $G$.
\begin{observation}\label{res:z_pos}
Consider a matrix $M':=M+\alpha \Id$ with $\alpha>0$ sufficiently large, so that
\begin{myenumerate}
\item all diagonal entries of $M'$ are non-negative (the off-diagonal entries are non-negative anyway by definition of $M$), and
\item $\theta+\alpha$ is the largest eigenvalue of $M'$ in absolute value (note that $M'$ has the same spectrum as $M$ shifted by $\alpha$).
\end{myenumerate}
Applying the theorem of Perron-Frobenius and the fact that $G$ is connected (which makes $M$ and $M'$ so-called \emph{irreducible} matrices) shows that every eigenvector $z\in \RR^n$ of $M'$ to eigenvalue $\theta+\alpha$ can be rescaled to have only non-negative entries.
Note that $z$ is also the $\theta$-eigenvector of $M$.
\end{observation}
Given an embedding $v$, the matrix $X_v\in\RR^{n\times d}$ with $X_v^{\Tsymb}=(v_1,...,v_n)$ (\shortStyle{i.e.,}\ the rows of $X_v$ are the $v_i$) shall be called the \emph{embedding matrix} of $v$. Its column span $U_v:=\Span X_v\subseteq\RR^n$ is called the \emph{embedding space} of $v$.
We are~inte\-rested in expansion matrices with non-trivial kernel because they give rise to interesting (\shortStyle{i.e.,}\ \enquote{maximally expanded}) embeddings:
\begin{definition}
An embedding $v$ with embedding space $\ker M$ is called a \emph{nullspace embedding} of $G$ corresponding to $M$.
\end{definition}
Note that any two embeddings with the same embedding space, hence any two nullspace embeddings for $M$, are just linear transformations of each other.
A first relevant property of such embeddings is that their convex hull always contains the origin.
\begin{proposition}\label{res:z_balance}
If $v$ is the nullspace embedding of the expansion matrix $M$, then
\begin{equation}\label{eq:z_balance}
\sum_i z_i v_i = 0,
\end{equation}
where $z\in\RR^n$ is the eigenvector to the unique positive eigenvalue $\theta>0$. In particular, $0\in\conv\{v_1,...,v_n\}$.
\begin{proof}
By definition of a nullspace embedding, the columns of $X_v$ (the embedding matrix) are in $\ker M$ (the $0$-eigenspace of $M$). Since $M$ is symmetric, its eigenspaces are pair-wise orthogonal and thus $z^{\Tsymb} X_v=0$.
But this is just \cref{eq:z_balance} written~as a matrix equation.
By \cref{res:z_pos}, $z$ can be rescaled to be non-negative, and if we rescale~further to satisfy $\sum_i z_i = 1$, we see that \eqref{eq:z_balance} describes the origin as a convex combination~of the $v_i$.
\end{proof}
\end{proposition}
\begin{remark}\label{res:Izmestiev_note}
Let $P\subset\RR^d$ be a convex polytope, that is, the convex hull of finitely many points $v_1,...,v_n\in\RR^d$.
The skeleton of $P$ is the embedding of the edge-graph $G_P$ that assigns each vertex to the corresponding point $v_i$.
One motivation for our investigation was to establish polytope skeleta as \enquote{maximally expanded}; at least when they satisfy the only obviously necessary criterion \cref{res:z_balance}, that $P$ contains the origin.
This will turn out as a special case of our investigation: by a fascinating result of Ivan Izmestiev \cite{izmestiev2010colin}, every polytope skeleton (of a polytope $P$ that contains the origin) is a nullspace embedding of some appropriately chosen expansion matrix (in fact, Colin de Verdiére matrix) of $G_P$.
\end{remark}
In the following, let $v$ denote a fixed nullspace embedding of $M$.
One convenient intuition for the relation between $M$ and $v$ is the interpretation of the entries of $M$ as forces acting on the vertices and along the edges of the embedding, so that the complete embedding is in equilibrium.
\begin{proposition}\label{res:M_balance}
For each $i\in \{1,...,n\}$ holds
\begin{equation}\label{eq:M_balance}
\sum_j M_{ij} v_j = 0.
\end{equation}
\begin{proof}
The proof is similar to \cref{res:z_balance}, and it comes down to the observation that \cref{eq:M_balance} is equivalent to the matrix equation $MX_v = 0$ (which holds because $v$ is a nullspace embedding, \shortStyle{i.e.,}\ $\Span X_v = \ker M$).
To be more explicit, for any two $i,k\in\{1,...,n\}$ holds
$$0=(MX_v)_{ik} = \sum_j M_{ij} (X_v)_{jk} = \sum_j M_{ij} (v_j)_k.$$
Since this holds for all $k$, this gives \eqref{eq:M_balance}.
\end{proof}
\end{proposition}
To follow on the force interpretation, for each $i\in\{1,...,n\}$ holds
$$\sum_j M_{ij}(v_j-v_i) = \underbrace{\sum_j M_{ij} v_j}_0 - \Big(\sum_{j} M_{ij}\Big) v_i = -m_i v_i,$$
where we defined $m_i:=\sum_j M_{ij}$.
Since $M_{ij}\not=0$ only if $ij\in E$ (for $i\not=j$), the left hand side allows the interpretation of forces acting on the vertex $v_i$ along the edges connecting it to its neighbors $v_j,j\sim i$. These forces are proportional to the length of the edge and the respective entry of $M_{ij}$.
The equation furthermore shows that these forces add up to a total force that attracts the vertex $v_i$ to the origin.
If this force is counteracted by a force repelling $v_i$ from the origin, proportional to $\|v_i\|$ and $m_i$, the total arrangement is in equilibrium.
One should note that the terms \enquote{attracts} and \enquote{repels} are used in a way that suggests $m_i>0$, which we do not yet know. This is our first open question:
\begin{question}\label{q:m_pos}
Do we have $m_i\ge 0$? If $\rank v\ge 2$, do we have $m_i >0$?
\end{question}
The answer is known to be positive for certain special cases, \shortStyle{e.g.}\ for inscribed embeddings (see \cref{res:inscribed_m_pos} further below).
\begin{corollary}
\begin{equation}\label{res:m_balance}
\sum_i m_i v_i =0.
\end{equation}
\begin{proof}
Using \cref{res:M_balance}:
$$\sum_i m_i v_i = \sum_i \Big(\sum_j M_{ij}\Big) v_i = \sum_j \Big(\underbrace{\sum_i M_{ij} v_i}_0\Big) = 0.$$
\end{proof}
\end{corollary}
Let us call an embedding $w$ \emph{$c$-centered} for a vector $c\in\RR^n$ if $\sum_i c_i w_i = 0$. In~this sense, the nullspace embedding $v$ is both $z$-centered and $m$-centered.
\begin{definition}
For a vector $c\in\RR^n$, the \emph{$c$-expansion} of an embedding $w$ is
$$\|w\|_c^2:=\sum_i c_i \|w_i\|^2.$$
\end{definition}
If $c\in\RR^n$ is a positive vector, then the $c$-expansion can naturally be interpreted as a kind of \enquote{size} of an embedding.
Note however, that $c$-expansion alone, even with positive $c_i$, is still a rather weak notion of size, as translating any given arrangement like $w'_i:= w_i+t$ for some $t\in\RR^d$ yields embeddings with arbitrarily large $c$-expansion.
A straightforward computation shows that among all translations of $w$ the one with the smallest $c$-expansion is the one that is $c$-centered.
This allows then to also define a translation-invariant version of $c$-expansion:
\begin{align*}
\sum_{i,j} c_i c_j\|w_i-w_j\|^2
&= \sum_{i,j} c_i c_j \big (\|w_i\|^2 - 2\<w_i,w_j\> + \|w_j\|^2\big)
\\& \!\!\!\!\!\!\!\!\!\!\! = 2 \Big(\sum_i c_i\Big) \Big(\sum_j c_j\|w_j\|^2 \Big) - 2\Big\<\underbrace{\sum_i c_i w_i}_0,\underbrace{\sum_j c_j w_j}_0\Big\> = C\|w\|^2_c,
\end{align*}
where $C:=2\sum_i c_i$.
Thus, it can be more natural to view $c$-expansion as measuring a weighted pair-wise (squared) distance between vertices (up to a factor $C$), where no reference to the origin is necessary anymore.
Note that all pair-wise distances matter, not only edge lengths.
Since we want to establish the nullspace embedding $v$ as \enquote{maximally expanded}, it is now natural to go for either $z$- or $m$-expansion.
We show the following:
\begin{theorem}\label{res:nullspace_expanded}
Among all $m$-centered embeddings with edges not longer than in $v$, $v$ has the maximal $m$-expansion.
More precisely, if $w$ is any $m$-centered embedding with
$$\|w_i-w_j\|\le \|v_i-v_j\|,\quad\text{for all $ij\in E$}$$
then $\|w\|_m^2\le \|v\|_m^2$.
Equality holds exactly when $w$ is a linear transformation of $v$ with edges of the same length as $v$.
\end{theorem}
In fact, we will prove a stronger statement: since $v$ is also $z$-centered, it is not too implausible to state the theorem for $z$-centered embeddings.
However, the catch is that we still stay with $m$-expansion instead of $z$-expansion:
\begin{theorem}\label{res:nullspace_expanded_z}
If $w$ is any $z$-centered embedding with
$$\|w_i-w_j\|\le \|v_i-v_j\|,\quad\text{for all $ij\in E$}$$
then $\|w\|_m^2\le \|v\|_m^2$.
Equality holds exactly when $w$ is a linear transformation of $v$ with edges of the same length as $v$.
\end{theorem}
Note that this implies \cref{res:nullspace_expanded}: if $w$ is any $m$-centered embedding satisfying the edge-length condition, then its $z$-centered translate $w'$ also satisfies the edge-length condition. Then
$$\|w\|_m^2 \le \|w'\|_m^2 \le \|v\|_m^2,$$
where the first inequality holds because the $m$-centered translate of $w$ has the~minimal $m$-expansion, and the second inequality hold because of \cref{res:nullspace_expanded_z}.
\begin{proof}[Proof of \cref{res:nullspace_expanded_z}]
Consider
\begin{align*}
\sum_{i,j} M_{ij} \|w_i-w_j\|^2
&= \sum_{i,j} M_{ij} \big(\|w_i\|^2 - 2\<w_i,w_j\> + \|w_j\|^2\big)
\\&= 2\sum_i \underbrace{\Big(\sum_j M_{ij}\Big)}_{m_i} \|w_i\|^2 - 2\underbrace{\sum_{i,j} M_{ij}\<w_i,w_j\>}_{\tr(M X_w X_w^{\Tsymb})},
\end{align*}
which, using $M_{ij}=0$ when $ij\not\in E$ (for $i\not=j$), rearranges to
\begin{align}\label{eq:exp_len_tr}
\underbrace{\sum_i m_i \|w_i\|^2}_{\|w\|_m^2}
&= \frac12 \sum_{ij\in E} M_{ij} \|w_i-w_j\|^2 + \tr(M X_w X_w^{\Tsymb}).
\end{align}
Eventually, the goal is to show that the left hand side is not larger than $\|v\|_m^2$.\nolinebreak\space We prove this by showing that both terms on the right hand side increase when $w$~is replaced by $v$.
This is clear for the first term since $M_{ij}>0$ and $\|w_i-w_j\|\le \|v_i-v_j\|$ for all $ij\in E$. Note that since $v$ is a nullspace embedding to $M$, we~have $MX_v=0$, and so it suffices to show that the second term is non-positive.
To see this, consider the decomposition $X_w=X_w^1+\cdots+X_w^r$ where $M X_w^k=\theta_k X_w^k$ and $\theta_k$ is the $k$-th largest eigenvalue of $M$ (that is, the columns of $X_w^k$ are $\theta_k$-eigen\-vectors).
Then
\begin{align}\label{eq:tr_neg}
\begin{split}
\tr(M X_w X_w^{\Tsymb})
&= \tr\Big(\sum_{k,\ell} M X_w^k (X_w^\ell)^{\Tsymb}\Big)
= \tr\Big(\sum_{k,\ell} \theta_k X_w^k (X_w^\ell)^{\Tsymb}\Big)
\\&= \sum_{k,\ell} \theta_k \tr\big( (X_w^\ell)^{\Tsymb} X_w^k\big)
\\&\overset{\mathclap{(*)}}=\, \sum_k \theta_k \tr\big( (X_w^k)^{\Tsymb} X_w^k \big)
\\&\overset{\mathclap{(**)}}=\, \sum_{k\ge 2} \underbrace{\theta_k}_{\le 0} \Big(\underbrace{\sum_{i,j} (X_w^k)_{ij}^2}_{\ge 0}\Big)
\;\le\; 0
\end{split}
\end{align}
where in $(*)$ we used that $(X_w^\ell)^{\Tsymb} X_w^k=0$ whenever $k\not=\ell$ (because different eigen\-spaces are orthogonal), and in $(**)$ we used that $w$ is $z$-centered, that is, $z^{\Tsymb} X_w = 0$ or $X_w^1=0$, and that $\theta_1$ is the only positive eigenvalue of $M$.
We can also immediately investigate the equality case by considering when the two terms on the right side of \cref{eq:exp_len_tr} do not change when we replace $w$ by $v$.
For the first term this means that $\|w_i-w_j\|=\|v_i-v_j\|$ for all $ij\in E$.\nolinebreak\space%
For the second term we have to check when equality holds in the last step of \eqref{eq:tr_neg}, which is the case when $X_w^k=0$ for all $k\not=2$ (recall that $\theta_2=0$). In other words, $X_w=X_w^2$, the embedding space $U_w=\Span X_w$ is a subspace of $\ker M$, and thus, $w$ is a linear transformation of $v$.
\end{proof}
Eventually, we want to formulate results for more natural notions of \enquote{expansion} (natural in the sense of being independent of any arbitrary-seeming weight vector~$m$).
One such notion can be defined for inscribed embeddings, namely, the \emph{circumradius}.
Ideally, we want to identify inscribed embeddings as maximally expanded in the sense that any other inscribed embeddings with edges that are never longer has a smaller circumradius.
In the following, assume that $v$ is a nullspace embedding for $M$ that is also~inscribed with circumcenter at the origin.
If $v$ has circumradius $r=r(v)>0$, then it holds
$$\|v\|_m^2 = \sum_i m_i \|v_i\|^2 = r^2 \Big(\sum_i m_i\Big).$$
If $w$ is any other $m$-centered (or even $z$-centered) inscribed embedding with circumcenter $0$, circumradius $\rho>0$ and not longer edges than $v$, an analogous computation and \cref{res:nullspace_expanded} yield
$$\rho^2 \Big(\sum_i m_i\Big) = \|w\|_m^2 \le \|v\|_m^2 = r^2 \Big(\sum_i m_i\Big).$$
From this we would like to conclude $\rho\le r$, but we do not yet know whether $\sum_i m_i$ $> 0$ (\cref{q:m_pos}).
Luckily, for inscribed embeddings we can show something even stronger:
\begin{lemma}\label{res:inscribed_m_pos}
If the nullspace embedding $v$~is~inscribed with circumcenter $0$, then $m_i > 0$ for all $i\in\{1,...,n\}$.
\begin{proof}
Fix some $i\in \{1,...,n\}$. Note that $\<v_i,v_j\>=r^2 \cos\angle(v_i,v_j)\le r^2$, with a strict inequality for $j\not=i$. Next,\nolinebreak\space recall \cref{res:M_balance}:
$$0 = \sum_j M_{ij} v_j = \sum_{j\sim i} M_{ij} v_j + M_{ii} v_i.$$
If we apply $\<v_i,\cdot\>$ on both side, we arrive at
$$0 = \sum_{j\sim i} M_{ij} \<v_i,v_j\> + M_{ii} r^2 \le r^2 \sum_j M_{ij} = r^2 m_i.$$
Note that the inequality becomes strict as soon as there is a single $j\not=i$ with~$j\sim i$, which is always the case for connected graphs with more than one vertex (careful analysis shows that 1-vertex graphs are excluded anyway, as they do not possess~expansion matrices).
\end{proof}
\end{lemma}
Thus, indeed, an \emph{inscribed} nullspace embedding has maximal circumradius among the inscribed embeddings (with the usual properties) \emph{if} we restrict to circumcenter $0$.
This is a rather strong restriction.
Also, we have still not gotten rid~of~all~appear\-ances of arbitrary weight vectors, as we still require the embeddings to be $m$-cente\-red.
We formulate some conjectures to highlight a worthwhile direction:
\begin{conjecture}\label{conj:translate_nullspace}
If $v$ is a nullspace embedding of $M$, and $c\in \conv\{v_1,...,v_n\}$, then there is an expansion matrix $M_c$ for which $v_c:=v-c$ is a nullspace embedding.
\end{conjecture}
The result of Izmestiev (\cref{res:Izmestiev}) already ensures that this is true for polytope skeleta.
\begin{conjecture}
Let $v$ be an inscribed nullspace embedding with circumcenter $0$. If $w$ is any inscribed embedding whose circumcenter is in $\conv(w)$ and whose edges are not longer than the ones of $v$, then $r(w)\le r(v)$.
\end{conjecture}
\msays{No, see Joseph's counterexample on MO}
\msays{Alternative conjecture: require that the edges at a vertex span a cone that contains the origin.}
This formulation is convenient because it got rid of all arbitrary weights. We do not make any claims about the equality case $r(w)=r(v)$, but it seems plausible to expect similar criteria as in \cref{res:nullspace_expanded}.
Perhaps the restriction to circumcenter $0$ for $v$ is not necessary, especially under consideration of \cref{conj:translate_nullspace} (note, if we drop this condition, the circumcenter of $v$ can be outside of $\conv(v)$).
There are some further questions that are interesting.
\begin{question}
If $v$ is a nullspace embedding, is it true that for all edges $ij\in E$ the set $\conv\{v_i, v_j\}$ lies on the surface of the polytope $\conv(v)$, perhaps is even an edge of $\conv(v)$?
\end{question}
This should be easy to answer, but I have not tried yet:
\begin{question}
In a nullspace embedding, can there be $i\not= j$ with $v_i = v_j$?
\end{question}
\msays{No, because projections of expanded embeddings are expanded, and we can choose an appropriate projection to map two vertices to the same point. But we can ask whether this holds for fully expanded embedding.}
The following asks for a converse of what we have done so far: it asks whether~all embeddings that are \enquote{maximally expanded} emerge in the way that we discussed.
\begin{question}
Suppose that $v$ is an embedding with the following property: there is a vector $m\in\RR^n$, so that $v$ is $m$-centered and has the largest $m$-expansion among all other $m$-centered embeddings with edges not longer than $v$.
Then $v$ is the nullspace embedding of some expansion matrix.
\end{question}
\msays{\textbf{Remark:} if the inscribed polytope is centrally symmetric, then all these special centers are in the origin and we get our $m$-independent maximal expansion.}
\newpage
\begin{lemma}
If $v$ is an expanded embedding, then $m_i < 0$ for at most one $i\in\{1,...,n\}$.
\begin{proof}
Inspecting \cref{eq:exp_len_tr} we see that $\|v\|_m^2>0$ as long as there is one edge of non-zero length.
Suppose that
\end{proof}
\end{lemma}
\begin{theorem}
Fix $m_1,...,m_n\ge 0$ and $\ell_{ij}\ge 0$ for all $ij\in E$.
Let $v$ be an $m$-centered embedding with $\|v_i-v_j\|\le \ell_{ij}$ for all $ij\in E$ that maximizes the $m$-expansion $\|v\|_m^2$.
Then there is an expansion matrix $M\in\RR^{n\x n}$ for which $v$ is a nullspace embedding.
\begin{proof}
Clearly, the embedding is in a force equilibrium, otherwise it could
\end{proof}
\end{theorem}
\newpage
$$\Delta_n:=\Big\{\, x\in\RR^n_{\ge 0}\;\Big\vert\; \sum_i x_i=1 \,\Big\}.$$
$$c_m(P):=\sum_i m_i p_i.$$
\begin{theorem}
Given a (not necessarily full-dimensional) polytope $P\subset\RR^d$ and a point $c\in \relint(P)$, there exist weights $m\in\Delta_n$, so that $c_m(P)=c$ and so that for each graph embedding $q\:V(G_P)\to \RR^d$ with
\begin{myenumerate}
\item $c_m(q)=c$ and
\item $\|q_i-q_j\|\le \|p_i-p_j\|$ for all $ij\in E(G_P)$,
\end{myenumerate}
it holds that $\|q\|_m\le\|P\|_m$, with equality if and only if equality holds for all inequalities in \itm2 and $q$ is a linear transformation of the skeleton of $P$.
\end{theorem}
For $m\in\Delta_n$, the \emph{$m$-expansion} of $P$ is
\begin{align*}
\|P\|_m^2 &:= \frac12 \sum_{i,j} m_i m_j \|p_i-p_j\|^2
\\ &= \sum_i m_i \|p_i - c\|^2 = \sum_i m_i \|p_i\|^2 - \|c\|^2,
\end{align*}
where $c=c_m(P):=\sum_i m_ip_i$ is the \emph{$m$-center} of $P$.
\newpage
\section{Izmestiev matrices and weights}
For this section let $P\subset\RR^d$ be a polytope with vertices $p_1,...,p_n\in\F_0(P)$.
Its edge-graph is denoted $G_P$ with vertex set $V(G_P)=\{1,...,n\}$, so that $i\in V(G_P)$ corresponds to $p_i$.
\begin{definition}
For $\mathbf c=(c_1,...,c_n)\in\RR^n$ the \emph{generalized polar} is
$$P^\circ(\mathbf c) := \big\{\,x\in\RR^d\mid \<x,p_i\>\le c_i\text{ for all $i\in\{1,...,n\}$}\,\big \}.$$
Note that $P^\circ(\mathbf c)$ for $\mathbf c=(1,...,1)$ is the usual polar polytope $P^\circ$.
\end{definition}
\begin{theorem}[Theorem of Izmestiev]\label{thm:Izmestiev}
If $0\in \Int(P)$, then the matrix $M\in\RR^{n\x n}$ with entries
$$M_{ij}:=\frac{\partial^2 \vol(P^\circ(\mathbf c))}{\partial c_i\partial c_j}\,\Big|_{\,\mathbf c=(1,...,1)}$$
is well-defined $($in particular, $\vol(P^\circ(\mathbf c))$ is twice continuously differentiable in the components of $\mathbf c$\!\! $)$ and has the following properties:
\begin{myenumerate}
\item $M_{ij} > 0$ whenever $ij\in E(G_P)$,
\item $M_{ij} = 0$ whenever $i\not=j$ and $ij\not\in E(G_P)$,
\item $\dim\ker M = d$,
\item $M\Phi^{\Tsymb} = 0$, where $\Phi=(p_1,...,p_n)\in\RR^{d\x n}$ is the matrix in which the vertices $p_i$ are the columns, and
\item $M$ has a unique positive eigenvalue.
\end{myenumerate}
\end{theorem}
We shall call the matrix $M=M(P)$ the \emph{Izmestiev matrix} of $P$. We remark that in contrast to its original definition in \cite{izmestiev2010colin} our definition is missing a minus sign to fit it better in the context of this article.
\begin{remark}
Some remarks on $M$ and \cref{thm:Izmestiev}:
\begin{myenumerate}
\item the Izmestiev matrix has a convincing geometric interpretation (see \cite{izmestiev2010colin}). If $ij\in E(G_P)$ and $f_{ij}\in\F_{d-2}(P^\circ)$ is the dual face to the edge $e_{ij}\in\F_1(P)$, then
%
$$M_{ij} = \frac{\vol(f_{ij})}{\|p_i\|\|p_j\|\sin\sphericalangle(p_i,p_j)}.$$
%
The interpretation for vertices can be derived from this via property \itm4 in \cref{thm:Izmestiev}. For all $i\in\{1,...,n\}$ and any $j\in\{1,...,d\}$ with $(p_i)_j\not=0$ holds
%
$$M_{ii} = -\frac1{(p_i)_j} \sum_{k\not=i} M_{ik} (p_k)_j ,$$
%
where $(p_i)_j$ is the $j$-th component of the $i$-th vertex of $P$.
\item If $P$ is not full-dimensional but still $0\in\relint(P)$, we agree to define its Izmestiev matrix as if $P$ were embedded in $\aff(P)$. Among other things, this allows us to speak about the Izmestiev matrix of faces of $P$.
\item From the geometric interpretation in \itm1 we see that $M(P-x)$ is continuous in $x$ for $x\in\relint(P)$.
\end{myenumerate}
\end{remark}
Our next goal is to extend the definition of the Izmestiev matrix for when $0\in \partial P$.
\msays{When is $M=0$ or sum of entries is negative?}
\begin{definition}\quad
\begin{myenumerate}
\item $M(P,x):=M(P-x)$, where $0\in\relint(P-x)$.
\item the \emph{normalized Izmestiev matrix} is a rescaling $\bar M(P,x):= c M(P,x)$, where the factor $c>0$ is chosen (dependent on $P$ and $x$) so that the largest absolute value of any entry of $\bar M(P,x)$ is one.
\end{myenumerate}
\end{definition}
There is no deeper intention in the exact choice of $c>0$, but the purpose of this definition is to force the normalized Izmestiev matrices to be contained in a compact set so that questions of convergence can be dealt with.
\begin{theorem}
Let $f\in\F(P)$ be a face and $x\in\relint(f)$. If $x_1,x_2,...\in\relint(P)$ is a sequence with $x_k\to x$, then
$$\bar M_{ij}(P,x_k) \to \begin{cases}
\bar M_{ij}(f,x) & \text{if $p_i,p_j\in\F_0(f)$} \\
0 & \text{otherwise}
\end{cases}.
$$
\begin{proof}
\msays{TODO}
\end{proof}
\end{theorem}
\begin{definition}
For $x\in P$ the \emph{compactified Izmestiev matrix} $\mathcal M(P,x)$ is the limit of $\bar M(P, x_k)$ for a sequence $x_1,x_2,...\in\relint(P)$ with $x_k\to x$.
\end{definition}
\begin{proposition}
$\mathcal M(P,x)$ is continuous in $x$. \msays{This is not obvious at the boundary.}
\begin{proof}
\msays{TODO}
\end{proof}
\end{proposition}
\begin{proposition}
For each $i\in\{1,...,n\}$
$$\sum_j \mathcal M_{ij}(P,x) \ge 0.$$
\begin{proof}
It suffices to show this for the standard Izmestiev matrix $M(P,x)$. For all $j\in\{1,...,n\}$ holds
$$\sum_i M_{ij}(P,x) p_i = 0.$$
\end{proof}
\end{proposition}
\begin{definition}
The \emph{Izmestiev weights} of $P$ at $x\in P$ is a vector of numbers $m\in\RR^n$ with
$$m_i=m_i(P,x):=\sum_{j} \mathcal M_{ij}(P,x), \quad\text{for $i\in\{1,...,n\}$}.$$
The \emph{normalized Izmestiev weights} are a rescaling $\bar m(P,x):=c\, m(P,x)\in\RR^n$, where $c>0$ is a factor chosen to ensure $\sum_i \bar m_i = 1$.
\end{definition}
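Continuing the example of the square $[-1,1]^2$ with $x=0$: every row of $M$ sums to $1$, the normalized matrix $\bar M(P,0)$ has entries $1$ on the edges and $0$ elsewhere, so $m_i(P,0)=2$ for all $i$ and the normalized Izmestiev weights are $\bar m_i(P,0)=\tfrac14$. In particular $\sum_i \bar m_i(P,0)\, p_i=0=x$, so $x$ is the corresponding $m$-center, as one would expect.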
\begin{proposition}\label{res:m_zero_components}
Given a face $f\in\F(P)$ and a point $x\in\relint f$.
If $p_i\not\in\F_0(f)$, then $\bar m_i(P,x)=0$.
\begin{proof}
\msays{TODO}
\end{proof}
\end{proposition}
\begin{definition}
Given two combinatorially equivalent polytopes $P,Q\subseteq\RR^d$, the \emph{Izmestiev map} $\phi\:P\to Q$ is defined by
\begin{equation}\label{eq:Iz_map}
\phi(x) := \sum_k \bar m_k(P,x)\kern1pt q_k.
\end{equation}
\end{definition}
This map is continuous by the continuity of $\bar m(P,x)$. It is also well-defined, \shortStyle{i.e.,}\ $\phi(x)\in Q$, by point \itm1 in the following lemma.
\begin{lemma}\quad \label{res:Iz_map_properties}
\begin{myenumerate}
\item given a face $f\in\F(P)$ and a point $x\in \relint f$, then $\phi(x)\in\relint f_Q$, where $f_Q$ is the face of $Q$ corresponding to $f$.
\item $\phi$ is surjective on $Q$.
\end{myenumerate}
\begin{proof}
Part \itm1 can be shown quickly: by \cref{res:m_zero_components} holds $\bar m_k(P,x)=0$ whenever $p_k\not\in \F_0(f)$, thus \eqref{eq:Iz_map} is a convex combination of the vertices in $\F_0(f_Q)$, hence $\phi(x)\in f_Q$.
Part \itm2 is based on a standard technique in algebraic topology. Suppose that there is a point $y\in Q$ that is not in the image of $\phi$.
We then can construct a map $\psi\:P\to\partial Q$ as follows: for each $x\in P$ draw the ray emanating from $y\in Q$ through $\phi(x)$ and let $\psi(x)$ be the intersection of this ray with $\partial Q$. This map is clearly continuous and on $\partial P$ agrees with $\phi$.
Let further $\bar \phi\: Q\to P$ be any homeomorphism that sends the relative interior of every face $f\in\F(Q)$ to the relative interior of the corresponding face $f_P\in\F(P)$. Consider the map $\bar\phi\circ \psi \: P\to\partial P$.
Its restriction to $\partial P$ agrees with $\bar\phi\circ\phi$. But $\bar\phi\circ\phi$ is homotopic to the identity $\id_P\:P\to P$ via the linear homotopy $t(\bar\phi\circ\phi)+(1-t)\id_P$. Note that this homotopy fixes relative interiors of faces of $P$. That is, $\bar\phi\circ\psi$ is homotopic to a deformation retract of $P$ onto $\partial P$, which is not possible (by a classic argument from homology).
\end{proof}
\end{lemma}
\msays{Is $\phi$ actually a homeomorphism? In particular, injective?}
\begin{proposition}\label{res:inscribed_off_center_expansion}
If $P$ is inscribed in a sphere of radius $r$ centered at the origin and $m\in\RR^n$ has nonnegative entries, not all zero, then the $m$-expansion of $P$ is
$$\|P\|_m^2 = Cr^2 + K,$$
where $C>0$ and $K$ are constants that depend only on $m$ and the distance of the $m$-center of $P$ from the origin.
\begin{proof}
Write $x:=c_m(P)$ for the $m$-center of $P$. Then
\begin{align*}
\|P\|_m^2 &= \sum_k m_k \|p_k - x\|^2
\\&= \sum_k m_k \|p_k\|^2 + \sum_k m_k \|x\|^2 - 2\sum_k m_k \<p_k,x\>
\\&= \sum_k m_k r^2 + \sum_k m_k \|x\|^2 - 2\Big \<\sum_k m_k p_k,x \Big\>
\\&= \Big(\sum_k m_k\Big) r^2 + \sum_k m_k \|x\|^2 - 2\<x,x\>
\\&= \Big(\sum_k m_k\Big) r^2 + \Big(\sum_k m_k - 2\Big) \|x\|^2.
\end{align*}
Thus $C=\sum_k m_k$ and $K=\big(\sum_k m_k - 2\big)\|x\|^2$. Given that $m_k\ge 0$ and not all $m_k$ are zero, we have $C>0$, and both constants depend only on $m$ and $\|x\|$.
\end{proof}
\end{proposition}
\begin{theorem}
Given two combinatorially equivalent polytopes $P,Q\subset\RR^d$ with common edge-graph $G_P$, so that
\begin{myenumerate}
\item $P$ and $Q$ are inscribed in spheres that are centered at the origin and are of radius $r(P)$ and $r(Q)$ respectively,
\item $0\in\relint(P)$ and $0\in\relint(Q)$, and
\item for all $ij\in E(G_P)$ holds $\|q_i-q_j\|\le\|p_i-p_j\|$.
\end{myenumerate}
Then $r(Q)\le r(P)$, with equality if and only if $P$ and $Q$ are isometric.
\begin{proof}
Let $\phi\:P\to Q$ be the Izmestiev map.
We show that there is an $x\in P$ so that $\|x\|=\|\phi(x)\|$.
If $\phi(0)=0$ then choose $x=0$. Otherwise $\|\phi(0)\|>0$, and by the surjectivity of $\phi$ (\cref{res:Iz_map_properties} \itm2) there is a $y\in P\setminus\{0\}$ with $\phi(y)=0$. Let $y(t)\in P$ be a curve with $y(0)=0$ and $y(1)=y$.
Then $\|y(t)\|$ transitions from $0$ to a positive number, while $\|\phi(y(t))\|$ transitions from a positive number to zero.
By the intermediate value theorem there is a $t^*\in[0,1]$ with $\|y(t^*)\|=\|\phi(y(t^*))\|$. Set $x:=y(t^*)$.
Note that we can reorient $Q$ (fixing the origin) without invalidating any relevant property of $Q$ (conditions \itm1, \itm2, \itm3 stay valid, the circumradius and combinatorial type are unchanged). Since $x\in P$ and $\phi(x)\in Q$ have the same distance from the origin, we are free to rotate $Q$ so that $x=\phi(x)$.
Now, let $m:=\bar m(P,x)$. Then $x$ is the $m$-center of $P$, and $\phi(x)$ is the $m$-center of $Q$ (\shortStyle{cf.}\ the definition of the Izmestiev map). By \cref{res:nullspace_expanded} $\|Q\|_m\le \|P\|_m$. Since $x=\phi(x)$, \cref{res:inscribed_off_center_expansion} yields the same constants $C>0$ and $K$ when computing the $m$-expansion of $P$ and $Q$, and so we find
$$Cr(Q)^2 + K = \|Q\|_m^2 \le \|P\|_m^2 = Cr(P)^2 + K,$$
which yields $r(Q)\le r(P)$.
If $P$ and $Q$ are isometric then this is clearly satisfied with equality.
An inscribed polytope with known combinatorial type is uniquely determined by its edge lengths. Thus, if $P$ and $Q$ are not isometric, then some edge of $Q$ must be shorter than the corresponding edge of $P$, and then all the above inequalities are already strict by \cref{res:nullspace_expanded}.
\end{proof}
\end{theorem}
\newpage
$$\|p\|_w^2:=\frac12\sum_{i,j} w_i w_j \|p_i-p_j\|^2 = \sum_i w_i \|p_i\|^2 - \Big \|\sum_i w_i p_i\Big\|^2.$$
\textbf{Question:}
\begin{enumerate}
\item Does our main result hold the other way around? If $P$ has longer edges than $Q$ and $w$ are Wachspress coordinates for $Q$ (!! not for $P$ !!), do we have $\|P\|_w\ge \|Q\|_w$? Alternatively, and more generally, is our main result really restricted to Wachspress coordinates?
\end{enumerate}
\newpage
\section{Consequences}
\begin{theorem}
Let $P,Q\subset\RR^d$ be two combinatorially equivalent inscribed polytopes, both of which contain the circumcenter, such that each edge of $Q$ is at most as long as the corresponding edge of $P$. Then $Q\simeq P$.
\end{theorem}
\begin{theorem}
Let $P,Q\subset\RR^d$ be combinatorially equivalent polytopes with
\begin{myenumerate}
\item $\|p_i\|=\|q_i\|$ for all $i\in V$, and
\item $\|p_i-p_j\|=\|q_i-q_j\|$ for all $ij\in E$,
\end{myenumerate}
then $P\simeq Q$.
\begin{proof}
\msays{TODO}
\end{proof}
\end{theorem}
Dual version:
\begin{corollary}\label{res:dual_version}
Let $P,Q\subset\RR^d$ be combinatorially equivalent polytopes with
\begin{myenumerate}
\item each dihedral angle in $P$ is identical to the one in $Q$, and
\item each facet in $P$ has the same distance from the origin as in $Q$,
\end{myenumerate}
then $P\simeq Q$.
\end{corollary}
This is very close to Stoker's conjecture (a proof of which has been announced earlier this year \cite{...}):
\begin{conjecture}\label{conj:Stokers}
If two combinatorially equivalent polytopes have identical dihedral angles, then also their facets (and by induction all faces of dimension $\ge 2$) have identical dihedral angles.
\end{conjecture}
While weaker in its assumptions, also the conclusion is weaker in comparison to \cref{res:dual_version}.
It is not clear to the author whether any one of these results, \cref{res:dual_version} or \cref{conj:Stokers}, easily implies the other one.
\fi
\iffalse
\newpage
\section{Capturing symmetries in the edge-graph}
\begin{theorem}
Given a polytope $P$ with $0\in\Int P$ and edge-graph $G_P$.
Let $\mathfrak c\:V\mathbin{\mathaccent\cdot\cup} E\to\RR_{\ge 0}$ be the coloring
\begin{align*}
\mathfrak c(i) &= \|p_i\|,\qquad&&\text{for all $i\in V$}, \\
\mathfrak c(ij) &= \|p_i-p_j\|,\qquad&&\text{for all $ij\in E$}.
\end{align*}
Then $\Aut(G_P^{\mathfrak c})\cong \Aut_{\Ortho}(P)$.
\begin{proof}
Obviously, every orthogonal symmetry of $P$ induces a combinatorial symmetry of $G_P^{\mathfrak c}$.
Conversely, let $\sigma\in\Aut(G_P^{\mathfrak c})$. Consider the polytope $Q$ whose vertices $q_i:=p_{\sigma(i)}$ are the same as for $P$ but permuted. Clearly, $Q$ is combinatorially equivalent to $P$, and $\|p_i-p_j\|=\|q_i-q_j\|$ for all $ij\in E$ as well as $\|p_i\|=\|q_i\|$ for all $i\in V$ as $\sigma$ preserves the coloring. By ?? $P$ and $Q$ are then isometric in the stronger sense that there exists an orthogonal transformation $T\in\Ortho(\RR^d)$ with $p_i = Tq_i$. This transformation therefore induces the permutation $\sigma$ as required.
\end{proof}
\end{theorem}
\fi
\section{Conclusion, further notes and many open questions}
\label{sec:conclusions}
We conjectured that a convex polytope is uniquely determined up to isometry~by its edge-graph, edge lengths and the collection of distances between its vertices and some interior~point, across all dimensions and combinatorial types (\cref{conj:main_rigid_intro}).
We~also~posed~a~more general conjecture expressing the idea that polytope skeleta, given their edge lengths, are maximally expanded (\cref{conj:main}).
We developed techniques~based~on~Wachs\-press coordinates and the so-called Izmestiev matrix that led us to resolve three relevant special cases: centrally symmetric~polytopes (\cref{res:centrally_symmetric}), small perturbations (\cref{thm:main_local}), and combinatorially equivalent polytopes (\cref{thm:main_comb_eq}).
We feel confident that our approach already highlights the essential difficulties in verifying the general case.
In this section we collect further thoughts on our results, notes on connections to the literature, as well as many questions and directions for future research.
\subsection{Consequences of the conjectures}
\label{sec:vast_generalization}
\cref{conj:main_rigid_intro} vastly generalizes several known ``reconstruction from the edge-graph'' results.
The following is a special case of \cref{conj:main_rigid_intro}: an inscribed polytope with all edges of the same length would~be uniquely determined by its edge-graph.
This includes the following special cases:
\begin{itemize}
\item The reconstruction of matroids from their base exchange graph: a matroid can be identified with its matroid base polytope, which is a 0/1-polytope (hence inscribed) and has all edges of length $\sqrt 2$.
This reconstruction was initially proven in \cite{holzmann1973graphical} and recently rediscovered in \cite{pineda2022reconstructibility}.
\item The reconstruction of simultaneously vertex- and edge-transitive polytopes from their edge-graph: this was proven, essentially using the tools of this article, in \cite{winter2020symmetric,winter2021spectral}.
\end{itemize}
It would imply an analogous reconstruction from the edge-graph for classes of polytopes such as the uniform polytopes or higher-dimensional Johnson solids \cite{johnson1966}.
Secondly, a positive answer to \cref{conj:main_rigid_intro} would also resolve Question~6.6~in \cite{winter2021capturing} on whether the metric coloring can capture the Euclidean symmetries~of~a~polytope.
\subsection{\Cref{conj:main} for graph embeddings}
\label{sec:fixing_q_version_conjecture}
In \cref{ex:cube} we show that~\cref{conj:main} does not hold when replacing $Q$ by some more general graph embedding $q\:V(G_P)\to\RR^e$ of $G_P$, even if $0\in\Int\conv(q)$.
Our intuition for why this fails, and for what distinguishes it from the setting of our conjectures and the verified special cases, is that the embedding of \cref{ex:cube} does not ``wrap around the origin'' properly.
It is not quite clear what this means for an embedding of a graph, except that it feels right to assign this quality to polytope skeleta, to embeddings close to them, and also to centrally symmetric embeddings.
One possible formalization of this idea is expressed in the conjecture below, that is even stronger than \cref{conj:main} (the idea is due to Joseph Doolittle):
\begin{conjecture}\label{conj:josephs_conjecture}
Given a polytope $P\subset\RR^d$ and a graph embedding $q\:V(G_P)\to\RR^e$ of its edge-graph $G_P$, so that
\begin{myenumerate}
\item for each vertex $i\in V(G_P)$ the cone
%
$$C_i:=q_i + \cone\{q_j-q_i\mid ij\in E(G_P)\}$$
%
contains the origin in its interior,
\item edges in $q$ are at most as long as in $P$, and
\item vertex-origin distances in $q$ are at least as large as in $P$,
\end{myenumerate}
then $\skel(P)\simeq q$.
\end{conjecture}
Note that since $\bigcap_i C_i\subseteq\conv(q)$, \itm1 already implies $0\in\Int\conv(q)$.
\subsection{Classical rigidity of frameworks}
\label{sec:classical_rigidity}
We previously remarked on natural interpre\-tations of \cref{res:centrally_symmetric} and \cref{thm:main_local} in the language of classical rigidity theory.
Consider the edges~of $P$ as \emph{cables} that can contract but not expand, and connect all vertices of $P$ to the origin using \emph{struts} that can expand but not contract.
This is known as a \emph{tensegrity framework}, and \Cref{thm:main_local} asserts that it is (locally) rigid.
Using the tools of rigidity theory, an alternative route to \cref{thm:main_local} is possible: with almost no additional effort the proof of \cref{res:expansion_main_result} yields that the~above~ten\-segrity framework is \emph{prestress stable} (we refer to \cite{connelly1996second} for the terminology), which then implies local rigidity.
The certifying self-stress assigns to the edge $\conv\{p_i,p_j\}$ the value $M_{ij}$, and to~the central strut at $p_i$ the value $-\alpha_i$.
The~associated~stress ma\-trix $\Omega$ is positive semi-definite:
$$\dot p^{\Tsymb}\Omega \dot p=\sum_{i,j} M_{ij}\|\dot p_i-\dot p_j\|^2 - 2\sum_i\alpha_i \|\dot p_i\|^2=-2\sum_{i,j}M_{ij}\<\dot p_i,\dot p_j\> \,\overset{\mathclap{\eqref{eq:trace_neg}}}\ge\, 0,$$
where $\dot p$ is some infinitesimal motion.
The equality case was discussed in the proof of \cref{res:expansion_main_result} and yields that $\dot p_i=Tp_i$ for some linear transformation $T\:\RR^d\to\RR^d$.
Since the stresses on the central struts are non-zero, the condition for the~infinitesi\-mal flexes is $0=\<p_i,\dot p_i\>=\<p_i,Tp_i\>$.\nlspace
Since the $p_i$ contain~a~basis, $T$~is~a~skew-sym\-metric matrix and $\dot p$ therefore a trivial infinitesimal flex.
It is natural to ask whether this tensegrity framework keeps its rigidity when~we instead put struts at the edges and cables between vertices and the origin.
\Cref{fig:twisting_cube} shows that this can fail.
Since an \emph{infinitesimally rigid} framework keeps its rigidity on swapping cables and struts, we see that such a polytopal framework~is~not~necessarily infinitesimally rigid.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{fig/twisting_cube.pdf}
\caption{Already the skeleton of the cube is not rigid if considered as a tensegrity framework with struts for edges and central cables. Twisting the top and bottom face lengthens the edge struts but keeps the central cables of a fixed length. This corresponds to the infinitesimal flex shown on the left.}
\label{fig:twisting_cube}
\end{figure}
For an analogous interpretation of \cref{res:centrally_symmetric} consider the tensegrity framework with~cables at the edges as before, but each central strut now connects~a~ver\-tex $p_i$ to~its antipodal counterpart $-p_i$, and is fixed in its center to the origin.
\Cref{res:centrally_symmetric} then asserts that this tensegrity framework is \emph{universally rigid}, \shortStyle{i.e.,}\ it has a unique~realization across all dimensions.
Here too, swapping cables and struts does not preserve universal or even global rigidity (see \cref{fig:octagon_ex}).
It does not preserve local rigidity either (see \cref{ex:twisting_4_cube}).
\begin{figure}[h!]
\centering
\includegraphics[width=0.37\textwidth]{fig/octagon_ex.pdf}
\caption{An octagon and an embedding of its edge-graph with longer edges but equally long central cables, showing that the respective tensegrity framework is not globally rigid under forced central symmetry.}
\label{fig:octagon_ex}
\end{figure}
\begin{example}
\label{ex:twisting_4_cube}
Consider the 4-cube with its ``top'' and ``bottom'' facets (which are~3-cubes) embedded in the hyperplanes $\RR^3\times\{\pm 1\}$ respectively.
We flex the~skeleton~as follows: deform the~top facet as shown in \cref{fig:twisting_cube}, and the bottom~facet~so~as~to keep the framework centrally symmetric, while keeping both inside their respective hyperplanes.
The edge struts~inside the facets become longer, and the edge~struts~between the facets, which have previously been of minimal length between the hyperplanes, can therefore also only increase in length.
The lengths of the central cables stay~the same.
\end{example}
As a consequence, the centrally symmetric tensegrity frameworks too are not~necessarily infinitesimally rigid.
\subsection{Schlegel diagrams}
\label{sec:Schlegel_diagram}
Yet another interpretation of the frameworks discussed in \cref{sec:classical_rigidity} is as skeleta of special \emph{Schlegel diagrams}, namely, of pyramids whose base facet is the polytope $P$. This immediately suggests the consideration of general Schlegel diagrams (which was brought up by Raman Sanyal).
\begin{figure}[h!]
\centering
\includegraphics[width=0.42\textwidth]{fig/Schlegel_diagram}
\caption{Two Schlegel diagrams of 4-polytopes: of the pyramid with the 3-cube as base facet (left) and of the 4-cube (right).}
\label{fig:Schlegel_diagram}
\end{figure}
We list some questions that come to mind, but we do not feel comfortable~to~state any conjectures.
The following is an analogue of \cref{conj:main_rigid_intro}:
\begin{question}
\label{q:Schlegel_reconstruction}
Let $\mathcal P$ and $\mathcal Q$ be Schlegel diagrams of polytopes with the same~edge-graph and edge lengths. Is then $\mathcal P\simeq \mathcal Q$?
\end{question}
The following stronger version is an analogue of \cref{conj:main}:
\begin{question}
\label{q:Schlegel_expansion}
Let $\mathcal P$ and $\mathcal Q$ be Schlegel diagrams of polytopes with the same~edge-graph and so that
\begin{myenumerate}
\item boundary edges in $\mathcal Q$ are at most as long as in $\mathcal P$, and
\item inner edges in $\mathcal Q$ are at least as long as in $\mathcal P$.
\end{myenumerate}
Do we have $\mathcal P\simeq\mathcal Q$?
\end{question}
\begin{question}
\label{q:Schlegel_locally_rigid}
Consider the skeleton of a Schlegel diagram $\mathcal P$ as a bar-joint framework. Is it (locally) rigid?
\end{question}
We formulated \cref{q:Schlegel_locally_rigid} specifically for bar-joint frameworks, as opposed to tensegrity frameworks, as there are counterexamples for the latter (see \cref{fig:Schlegel_diagram_and_twist}).
Schlegel diagrams are also not always globally rigid (see \cref{fig:Schlegel_diagram_cube_folding}).
\begin{figure}[h!]
\centering
\includegraphics[width=0.37\textwidth]{fig/Schlegel_diagram_and_twist}
\caption{The skeleton of the Schlegel diagram of a triangular prism with cables on the outside and struts on the inside is not rigid. Twisting the inner triangle increases the lengths of the struts and fixes all other lengths.}
\label{fig:Schlegel_diagram_and_twist}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.37\textwidth]{fig/Schlegel_diagram_cube_folding.pdf}
\caption{Folding the Schlegel diagram of the 3-cube along a diagonal preserves all edge lengths.}
\label{fig:Schlegel_diagram_cube_folding}
\end{figure}
\iffalse
\begin{question}
Is a polytope uniquely determined by its edge-graph and the length of edges in some Schlegel diagram? \msays{TODO}
\end{question}
It was recognized by Robert Connelly and Zhen Zhang (personal communication) that many Schlegel diagrams are \emph{not} rigid if considered as tensegrities with cables for the outer edges: consider ...
In contrast, for struts on the outside they provided computations showing that the standard Schlegel diagram of the 4-cube is not only rigid, but even prestress stable.
This is perhaps the better question to ask: ...
\end{disable}
\fi
\subsection{Stoker's conjecture}
\label{sec:Stokers_conjecture}
Stoker's conjecture asks whether the dihedral angles of a polytope determine its face angles, and thereby its overall shape to some degree.
Recall that \emph{dihedral angles} are the angle at which facets meet in faces of codimension two, whereas \emph{face angles} are the dihedral angles of the facets.
Stoker's~conjecture was asked in 1968 \cite{stoker1968geometrical}, and a proof was claimed recently by Wang and Xie \cite{wang2022gromov}:
\begin{theorem}[Wang-Xie, 2022]\label{res:Stokers_conjecture}
Let $P_1$ and $P_2$ be two combinatorially equivalent polytopes such~that corresponding dihedral angles are equal.
Then all corresponding face angles are equal as well.
\end{theorem}
Our results allow us to formulate a semantically similar statement.
The following is a direct consequence of \cref{res:comb_type_rigid} when expressed for the polar dual polytope:
\begin{corollary}
\label{res:Stokers_conjecture_variant}
Let $P_1$ and $P_2$ be two combinatorially equivalent polytopes such that corresponding dihedral angles and facet-origin distances are equal.
Then $P_1\simeq P_2$.
\end{corollary}
While the assumptions in \cref{res:Stokers_conjecture_variant} are stronger than in Sto\-ker's conjecture (we additionally require facet-origin distances), the conclusion is stronger as well: we obtain isometry instead of~just identical face angles.
While related, we are not aware that either of \cref{res:Stokers_conjecture}~or \cref{res:Stokers_conjecture_variant} follows from the other one easily.
\subsection{Pure edge length constraints}
Many polytopes cannot be reconstructed~up to isometry from their edge-graph and edge lengths alone (recall \cref{fig:flex_polytope}).
However, for all we know the following is open:
\begin{question}
\label{q:comb_from_edge_lengths}
Is the combinatorial type of a polytope uniquely determined by its edge-graph~and edge lengths?
\end{question}
Let us now fix the combinatorial type.
We are aware of three types of polytopes that are not determined (up to isometry) by their face-lattice and edge lengths:
\begin{myenumerate}
\item $n$-gons with $n\ge 4$.
\item Minkowski sums: if $P=Q+R$ and $Q$ and $R$ are generically oriented \shortStyle{w.r.t.}\ each other, then a slight reorientation of the summands changes the shape of $P$ but keeps its edge lengths (see \cref{fig:deforming_cuboctahedron}).
\item polytopes having all edge~directions on a ``conic at infinity'': this implies~an affine flex \cite{connelly2018affine}, which is most easily implemented for zonotopes (recall \cref{fig:length_preserving_linear_trafo}), but happens for other polytopes as well, such as 3-polytopes with~up to five edge directions (see \cref{fig:sliced_cube}).
\end{myenumerate}
\begin{figure}[h!]
\centering
\includegraphics[width=0.46\textwidth]{fig/deforming_cuboctahedron}
\caption{The cuboctahedron can be written as the Minkowski sum~of two simplices, and twisting these simplices leads to a flex of the cuboctahedron that preserves edge lengths.}
\label{fig:deforming_cuboctahedron}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.16\textwidth]{fig/sliced_cube.pdf}
\caption{A cuboid sliced at an angle in an appropriate way has only five edge directions and has an edge length preserving affine flex. The flex deforms the bottom face into a rhombus and keeps the vertical edges vertical.}
\label{fig:sliced_cube}
\end{figure}
We are not aware of other examples of polytopes that flex in this way and~so~we wonder whether this is already a full characterization.
\begin{question}
If a polytope is \emph{not} determined up to isometry by its combinato\-rial type and edge lengths, is it necessarily a polygon, a non-trivial Minkowski sum, or a polytope with all its edge directions on a conic at infinity?
Is this true at least up to dimension three?
\end{question}
To what extent a 3-polytope is determined by local metric data at~its edges
was~reportedly discussed in an Oberwolfach question session (as communicated by Ivan~Izmes\-tiev on MathOverflow \cite{434771}), where the following more general question was asked:
\begin{question}\label{q:Ivans_question}
Given a simplicial 3-polytope, suppose that at each edge we prescribe either the length or the dihedral angle. To what extent does this determine the polytope?
\end{question}
Having length constraints at every edge determines a simplicial polytope already up to isometry via Cauchy's rigidity theorem (\cref{res:Cauchy_rigidity}).
The angles-only version is exactly the 3-dimensional Stoker's conjecture (\cref{sec:Stokers_conjecture}). We are not~aware that this question has been addressed in the literature beyond these two extreme cases.
Note also that \cref{q:Ivans_question} is stated for \emph{simplicial} 3-polytopes, but actually~includes general 3-polytopes via a trick: if $P$ is not simplicial, triangulate every 2-face, and at each new edge created in this way prescribe a dihedral angle of $180^\circ$ to prevent~the faces from folding at it.
\iffalse
\subsection{Uniqueness of central symmetry}
\label{sec:unique_central_symmetry}
Let us state explicitly the centrally symmetric version of the unique reconstruction conjecture (\cref{conj:main_rigid_intro}):
\begin{conjecture}
\label{conj:centrally_symmetric_reconstruction}
A centrally symmetric polytope is uniquely determined, up to iso\-metry, by its edge-graph, edge lengths and vertex-origin distances.
\end{conjecture}
We emphasize that this is \emph{not} a direct consequence of \cref{res:centrally_symmetric}.
Recall that a ``centrally symmetric graph embedding $q\:V(G_P)\to\RR^e$'' as used in \cref{res:centrally_symmetric} is forced to realize its central symmetry in the same way as $P$ does.
It is however not clear that any two centrally symmetric polytopes with the same edge-graph do this.
An affirmative answer to the following question would verify \cref{conj:centrally_symmetric_reconstruction}:
\begin{question}
Suppose $P$ and $Q$ are centrally symmetric polytopes with isomorphic edge-graphs, and $\smash{\psi\:G_P\xrightarrow{\sim} G_Q}$ is a graph isomorphism.
If $p_i=-p_j$ (that is, $p_i$ and $p_j$ are antipodal), do we also have $q_{\psi(i)}=-q_{\psi(j)}$?
\end{question}
In other words: do edge-graphs of centrally symmetric polytopes already~deter\-mine which vertices form an antipodal pair?
\iffalse
assumes a fixed action of the central symmetry on the edge-graph, and while plausible, it seems open whether an edge-graph of a polytope can be centrally symmetric in more than one way.
\begin{question}[see \cite{uniqueCentralSymmetry}]
Given two centrally symmetric polytopes $P_1$ and $P_2$ and $\smash{\phi\:G_{P_1} \xrightarrow{\raisebox{-1ex}{\tiny$\sim$}} G_{P_2}}$ an isomorphism between their edge-graphs.
If $\sigma_i\in\Aut(G_{P_i})$ is the graph automorphism induced by the central symmetry $P_i \mapsto -P_i$, do we necessarily have $\sigma_1=\phi^{-1}\circ\sigma_2\circ \phi$?
\end{question}
\msays{Combining centrally symmetric and inscribed might give unique reconstruction from only the graph and edge-lengths. BUT we need that ``central symmetry'' is a unique involution in the graph. https://mathoverflow.net/questions/439787/can-a-polytopal-graph-be-centrally-symmetric-in-more-than-one-way}
\begin{disable}
\begin{corollary}
A centrally symmetric polytope is uniquely determined (up to isometry) by its edge-graph, the edge lengths, the vertex-origin distances and the action of the central symmetry on the edge-graph.
\end{corollary}
\begin{corollary}
Let $P\subset\RR^d$ and $Q\subset\RR^e$ be centrally symmetric polytopes with the same edge-graph, edge lengths and vertex-origin distances; and let $\phi\:G_P\xrightarrow{\sim} G_Q$ be a graph isomorphism. Let $\iota_P\in\Aut(G_P)$ be the graph automorphism induced by the central symmetry of $P$, and $\iota_Q\in\Aut(G_Q)$ likewise. Do we have $\iota_P=\phi^{-1}\circ \iota_Q\circ \phi$.
\end{corollary}
\end{disable}
\fi
\fi
\subsection{Injectivity of the Wachspress map $\boldsymbol{\phi}$}
\label{sec:Wachspress_map}
In \cref{res:Wachspress_map_surjective} we proved that~the Wachspress map $\phi\: P\to Q$ (\shortStyle{cf.}\ \cref{def:Wachspress_map}) between combinatorially equivalent polytopes is surjective.
In contrast, the injectivity of~the Wachspress map has been established only in dimension two by Floater and Kosinka \cite{floater2010injectivity} and is conjectured for all $d\ge 3$.
\begin{conjecture}
\label{conj:Wachspress_injective}
The Wachspress map $\phi\: P\to Q$ is injective.
\end{conjecture}
If true, the Wachspress map would provide an interesting and somewhat canonical homeomorphism (in fact, a rational map, see \cite{warren2003uniqueness}) between any two combinato\-rially~equivalent polytopes.
\subsection{What if $\boldsymbol{0\not\in\mathrm{int}(Q)}$?}
If $0\not\in Q$ then \cref{fig:origin_outside_trivial_ex,fig:origin_outside_tnontriv_ex} show that our conjectures fail.
We do not know, however, whether in the ``unique reconstruction''~case~the~num\-ber of solutions would be finite.
\begin{question}
Given edge-graph, edge lengths and vertex-origin distances, are there only finitely many polytopes with these parameters?
\end{question}
This is in contrast to when we replace $Q$ with a graph embedding $q\:V(G_P)\to\RR^e$, which can have a continuum of realisations (see \cref{fig:flex_pyramid}).
\begin{figure}[h!]
\centering
\includegraphics[width=0.4\textwidth]{fig/flex_pyramid}
\caption{The square based pyramid (left) is flexible as a framework (since then the bottom face need not stay flat). Likewise, the framework of the square based frustum with this particular choice of origin (right) flexes. It is however (locally) rigid as a polytope.}
\label{fig:flex_pyramid}
\end{figure}
In \cref{sec:combinatorial_equivalence} we showed that reconstruction from the face-lattice, edge lengths and vertex-origin distances is possible even if the origin lies only in the relative interior of a facet of $P$, but that it can fail if the origin lies in a face of codimension \emph{three}.
We do not know what happens for a face of codimension \emph{two}.
\begin{question}
\label{q:codimension_two}
Is a polytope
uniquely determined by its face-lattice, edge lengths and vertex-origin distances if the origin is allowed to lie in the relative interior of faces of~co\-dimension $0, 1$ and $2$?
\end{question}
\iffalse
\subsection{Dual versions}
\label{sec:dual_versions}
\Cref{conj:main} and each of its variants and special cases comes with a \emph{dual version}, asking for
\begin{itemize}
\item edges in $Q$ (or $q$) to be at \emph{least as long} as in $P$, and
\item vertex-origin distances in $Q$ (or $q$) to be at \emph{most as large} as in $P$.
\end{itemize}
In the case of \cref{conj:main}, the dual version is essentially equivalent but requires $0\in\Int(P)$ instead of $0\in\Int(Q)$. The same applies to every variant that keeps~the symmetry between $P$ and $Q$.
Replacing $Q$ by a graph embedding $q\:V(G_P)\to\RR^e$ however breaks the symmetry and yields an independent problem.
For example, the naive graph embedding formulation of the dual \cref{conj:main} is still false, as is the dual of \cref{res:centrally_symmetric} (see \cref{fig:octagon_ex}).
\begin{figure}[h!]
\centering
\includegraphics[width=0.37\textwidth]{fig/octagon_ex.pdf}
\caption{A polygon $P$ and a graph embedding of $G_P$ with the same vertex-origin distances, yet larger edge-lengths, providing a counterexample to the naive embedding version of the dual \cref{conj:main}. Note (for \cref{sec:central_symmetry}) that this counterexample is also centrally symmetric.}
\end{figure}
In contrast, we do not know whether the dual of the local version (\cref{thm:main_local}) holds as well.
\begin{question}
\label{q:dual_of_local}
Does a dual version of \cref{thm:main_local} hold? In other words, if edges can expand and vertex-origin distances can shrink, is this structure still rigid?
\end{question}
This would follow as a consequence of infinitesimal rigidity as discussed in \cref{sec:rigidity} (\cref{q:inf_rigid_prestress_stable}) since infinitesimal rigidity is preserved under exchanging cables and struts.
\fi
\iffalse
\subsection{Flexible polytopes}
\label{sec:conclusions_flexible}
From the perspective of classical rigidity theory, and considering the main geometric structure of a polytope, the edge-length constraints are much better motivated than the random vertex-origin distance constraints. It is therefore a pity that they alone cannot describe a realization of a combinatorial type (let alone of the edge-graph) up to isometry.
\msays{Terminology: bar-joint frameworks vs.\ tensegrity frameworks}
Classical examples of polytopes that are not determined by edge lengths alone are polygons and cubes (in all dimensions); or more generally, zonotopes.
We figure that for all we know, there might be an interesting underlying structure to these \enquote{flexible polytopes} that we are going to discuss briefly.
For this section we shall work with informal definitions. We call a polytope \emph{flexible} if there is a continuous transformation of the polytope that keeps its combinatorial type and edge lengths, but that is not just a continuous isometry.
We call it rigid otherwise.
As a simple observation we remark that if a polytope $P$ is flexible then we can build other flexible polytopes from it using Minkowski sum: $Q:=P+R$, where $R$ is generic \shortStyle{w.r.t.}\ $P$. Any continuous transition in $P$ translates into a continuous transition for $Q$.
\begin{definition}
A polytope is called \emph{elementarily flexible} if it is flexible but has no non-trivial flexible Minkowski summand.
\end{definition}
The only elementarily flexible polytopes known to us are $n$-gons for $n\ge 4$.
\begin{question}
Are there elementarily flexible polytopes other than polygons?
\end{question}
Note that each zonotope is flexible but contains a $4$-gon as Minkowski summand. If true, this would satisfyingly characterize flexible polytopes, namely, as polytopes that have an $n$-gon as Minkowski summand for some $n\ge 4$.
A further interesting note regarding flexible polytopes is the following: no two instances of a flex of a flexible polytope can have Wachspress coordinates in common, except if they are affine transformations of each other. This follows essentially from \cref{res:expansion_main_result} because edge lengths and Wachspress coordinates of an interior point determine the shape uniquely.
\begin{figure}[h!]
\centering
\includegraphics[width=0.4\textwidth]{fig/deforming_cuboctahedron}
\caption{The cuboctahedron can be written as the Minkowski sum~of two simplices, and twisting these simplices leads to a flex of the cuboctahedron.}
\end{figure}
\begin{conjecture}
If $P\subset\RR^d$ is flexible, then it can be written as Minkowski sum
%
$$P=Q_1+\cdots+Q_n$$
%
so that the $Q_i$ are rigid, and each flex of $P$ is obtained by continuously reorienting some of the $Q_i$.
\end{conjecture}
\hrulefill
A Minkowski sum is \emph{generic} if its faces are just translates of the faces of the constituents.
\begin{question}
Are there flexible polytopes other than $n$-gons ($n\ge 4$) and generic Minkowski sums?
\end{question}
A simple way to build many flexible polytopes is via Minkowski sums: let $P,Q\subset\RR^d$ be two generically oriented polytopes. Their Minkowski sum $P+Q$ is flexible, and a flex can be obtained by rotating one of the Minkowski summands slightly.
\begin{conjecture}
A flexible polytope is a (non-trivial) Minkowski sum.
\end{conjecture}
\begin{question}
Are there flexible polytopes other than polygons and (non-trivial) Minkowski sums?
\end{question}
The question of flexibility of 3-polytopes has supposedly been discussed before~in an Oberwolfach question session (as communicated by Ivan Izmestiev, see also the discussion on MathOverflow \cite{434771}). Their question was actually more general in the following way:
\begin{question}\label{q:Ivans_question}
Given a simplicial 3-polytope and at each edge we prescribe either the length or the dihedral angle, in how far does this determine the polytope?
\end{question}
Having length constraints at every edge determines a simplicial polytope already up to isometry by Cauchy's rigidity theorem (\cref{res:Cauchy_rigidity}).
The angles-only version is known as the 3-dimensional Stoker's conjecture (see \cref{sec:Stokers_conjecture}), which asserts uniqueness of the face angles. We are unaware that this question has been addressed in the literature beyond these two special cases.
Note that \cref{q:Ivans_question} is stated for \emph{simplicial} 3-polytopes, but actually includes general 3-polytopes via a trick: if $P$ is not simplicial, triangulate every 2-face, and at each new \enquote{improper edge} prescribe a $180^\circ$ dihedral angle, forcing the subdivided faces to stay flat.
\hrulefill
Polytopes with triangulated 2-faces are rigid in this sense already by Cauchy's rigidity theorem.
However, also polytopes with far fewer edges seem rigid: \shortStyle{e.g.}\ some experimental computations suggest that a generic realization of the dodecahedron is infinitesimally rigid, even though it has no triangular faces. The case of the \emph{regular dodecahedron} is especially interesting, since it is not infinitesimally rigid.
\begin{question}
Is the regular dodecahedron flexible?
\end{question}
\fi
\iffalse
\subsection{The self-stress conjecture by Kalai}
As already mentioned in the introduction, Kalai asked \cite{kalai1994some} about a particular way to encode additional data with the edge-graph that conjecturally helps a reconstruction.
This as well has just recently been proven by Novik and Zheng \cite{novik2021reconstructing}.
\begin{theorem}[Novik-Zheng, 2022]
If two polytopes have the same edge-graph and the same \enquote{space of self-stresses}, then they are affinely equivalent.
\end{theorem}
A variant of this already follows from the existence of the Izmestiev matrix: consider the framework consisting of the edges of $P$ and the bars connecting each vertex to the origin (which is in the interior of $P$).
Let's call the space of self-stresses of this framework the \enquote{extended space of self-stresses of $P$}:
\begin{corollary}
If two polytopes have the same edge-graph and the same \enquote{extended space of self-stresses}, then they are linearly equivalent.
\end{corollary}
\fi
\iffalse
\subsection{Graph embeddings}
In this article we argued that polytope skeleta are special in that they are \enquote{rigid} and \enquote{maximally expanded} in certain ways.
Looking closer however, we find that this does not hinge so much on the existence of convex polytopes for these graph embeddings, but only on the existence of this very special Izmestiev matrix.
And analogous matrices might be constructed for other graph embeddings that are not skeleta, which would then share many of the established properties.
The construction we present here will be \enquote{backwards} compared to the polytope case, in that we first have the graph and the matrix, and from that construct the embedding.
We then conclude that every embedding constructed in this way is necessarily \enquote{rigid} and \enquote{maximally expanded} in certain ways.
While this seems to downgrade polytope skeleta from their special \enquote{maximally expanded} place, we shall eventually find that they are still somehow special among all these \enquote{expanded embeddings}.
\begin{question}
Let $p$ be a graph embedding so that for all $x\in\Int \conv(p)$ the~translated embedding $p-x$ is an expanded embedding. Is then $p$ necessarily a polytope skeleton (most likely the skeleton of $\conv p$)?
\end{question}
\fi
\iffalse
\msays{
Turns out, the classic realization space has dimension at most $f_0+f_1-d-1$ everywhere.}
\msays{
Can there be \enquote{chiral} polytopes?}
\fi
\newpage
|
{
"arxiv_id": "2302.14221",
"language": "en",
"timestamp": "2023-03-01T02:05:35",
"url": "https://arxiv.org/abs/2302.14221",
"yymm": "2302"
} | \section*{Introduction}
This paper extends the results of \cite{QQWZ21} to algebras endowed with several operators, with applications to differential Rota-Baxter algebras and integro-differential algebras.
\subsection{Operated GS basis theory: from a single operator to multiple operators}\
Since its introduction by Shirshov \cite{Shirshov} and Buchberger \cite{Buc} in the 1960s, Gr\"obner-Shirshov (=GS) basis theory has become one of the main tools of computational algebra; see for instance \cite{Green, BokutChen14, BokutChen20}. In order to deal with algebras endowed with operators, Guo and his coauthors introduced a GS basis theory in a series of papers \cite{Guo09a, GGSZ14, GSZ13, GaoGuo17} (see also \cite{BokutChenQiu})
with the goal to attack Rota's program \cite{Rota} to classify ``interesting'' operators on algebras.
Guo et al. considered operators satisfying some polynomial identities, hence called operated polynomial identities (OPIs for short) \cite{Guo09a, GGSZ14, GSZ13, GaoGuo17}. Via GS basis theory and the essentially equivalent theory of rewriting systems, they could define when OPIs are GS. They are mainly interested in two classes of OPIs: differential type OPIs and Rota-Baxter type OPIs, which are carefully studied in \cite{GSZ13, GGSZ14, GaoGuo17}.
For the state of art, we refer the reader to the survey paper \cite{GGZ21} and for recent development, see \cite{ZhangGaoGuo21, GuoLi21, QQWZ21, ZhangGaoGuo}.
In these papers \cite{Guo09a, GGSZ14, GSZ13, GaoGuo17}, the operated GS theory has only been developed for algebras endowed with a single operator. The only exception is the paper \cite{BokutChenQiu}, which considers algebras with several operators. We will use the results of \cite{BokutChenQiu} to study free multi-operated algebras over algebras.
\medskip
\subsection{Free operated algebras over algebras}\
Recently, there has been a need to develop free operated algebras satisfying some OPIs over a fixed algebra, and to construct GS bases and linear bases for these free algebras whenever a GS basis is known for the given algebra. Ebrahimi-Fard and Guo \cite{ERG08a} used rooted trees and forests to give explicit constructions of free noncommutative
Rota-Baxter algebras on modules and sets; Lei and Guo \cite{LeiGuo} constructed the linear basis of free Nijenhuis algebras over associative algebras;
Guo and Li \cite{GuoLi21} gave a linear basis of the free differential algebra over associative algebras by introducing the notion of differential GS bases.
In a previous paper \cite{QQWZ21}, the authors considered a question which can be roughly stated as follows:
\begin{Ques} \label{Ques: GS basis for free algebras over algebras} Given a (unital or nonunital) algebra $A$ with a GS basis $G$ and a set $\Phi$ of OPIs,
assume that these OPIs $\Phi$ are GS in the sense of \cite{BokutChenQiu, GSZ13, GGSZ14,GaoGuo17}. Let $B$
be the free operated algebra satisfying $\Phi$ over $A$. When will $\Phi\cup G$ be a GS basis for $ B$?
\end{Ques}
They answered this question in the affirmative under a mild condition in \cite[Theorem 5.9]{QQWZ21}. When this condition is satisfied, $\Phi\cup G$ is a GS basis for $ B$ and, as a consequence, one also obtains a linear basis of $B$. This result has been applied to all Rota-Baxter type OPIs, a class of differential type OPIs, averaging OPIs and Reynolds OPI in \cite{QQWZ21}.
It was also applied to
differential type OPIs by introducing
some new monomial orders \cite{QQZ21}.
In this paper, we consider a similar question for multi-operated algebras.
Let $\Omega$ be a nonempty set which will be the index set of operators. Algebras endowed with operators indexed by $\Omega$ will be called $\Omega$-algebras. OPIs can be extended to the multi-operated setup and one can introduce the notion of $\Omega$-GS for OPIs.
\begin{Ques} \label{Ques: nulti-operated setting} Let $\Phi$ be a set of OPIs of a set of operators indexed by $\Omega$. Let $A$ be a (unital) algebra together with a GS basis $G$. Assume that these OPIs $\Phi$ are GS in the sense of Section~\ref{Section: multi-operated GS bases}. Let $B$
be the free $\Omega$-algebra over $A$ such that the operators satisfy $\Phi$. When will $\Phi\cup G$ be an $\Omega$-GS basis for $ B$?
\end{Ques}
It is surprising that the main result Theorem 5.9 of \cite{QQWZ21} generalises to the multi-operated case without much modification; see Theorem~\ref{Thm: GS basis for free unital Phi algebra over unital alg} for unital algebras and Theorem~\ref{Thm: GS basis for free nonunital Phi algebra over nonunital alg} for nonunital algebras.
\medskip
\subsection{Differential Rota-Baxter algebras and integro-differential algebras}\
The main motivation of this paper comes, in fact, from differential Rota-Baxter algebras and integro-differential algebras.
Differential Rota-Baxter algebras were introduced by Guo and Keigher \cite{GK08}; they reflect the relation between the differential operator and the integral
operator as in the First Fundamental Theorem of Calculus. Free differential Rota-Baxter algebras were constructed by using various tools including angularly decorated rooted forests and GS basis theory \cite{GK08, BokutChenQiu}.
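Recall (see \cite{GK08}) that a \emph{differential algebra of weight $\lambda$} is an algebra $R$ with a linear operator $d$ satisfying $d(xy)=d(x)y+xd(y)+\lambda d(x)d(y)$ (and $d(1)=0$ in the unital case), a \emph{Rota-Baxter operator of weight $\lambda$} is a linear operator $P$ satisfying $P(x)P(y)=P(xP(y))+P(P(x)y)+\lambda P(xy)$, and a \emph{differential Rota-Baxter algebra of weight $\lambda$} carries both structures subject to the section axiom $d\circ P=\id$, the algebraic abstraction of the First Fundamental Theorem of Calculus.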
Integro-differential algebras (of zero weight) were defined for the algebraic study of boundary problems
for systems of linear ordinary differential equations. Guo, Regensburger and Rosenkranz \cite{GRR14} introduced integro-differential algebras with weight.
Free objects and their linear bases were constructed by using GS basis theory \cite{GRR14, GGZ14, GGR15}.
The main goal of this paper is to study free differential Rota-Baxter algebras and free integro-differential algebras over algebras from the viewpoint of operated GS basis theory.
In particular, when the base algebra is reduced to $\bfk$, our results also give GS bases and linear bases for free differential Rota-Baxter algebras and free integro-differential algebras.
However, the original monomial orders used in \cite{GRR14, GGZ14, GGR15} do not work in this generalized setting, and we have to introduce new monomial orders to overcome this problem.
\medskip
\subsection{Outline of the paper}\
This paper is organized as follows.
The first section contains reminders on free objects in the multi-operated setting and on the construction of free $\Omega$-semigroups and related structures,
and introduces some new monomial orders for the case of two operators, which will be the key technical tool of this paper.
In the second section, we recall the theory of GS bases for the multi-operated setting. After introducing OPIs, GS property for OPIs and $\Omega$-GS bases for multi-operated algebras are defined; after giving some facts about free multi-operated $\Phi$-algebras on algebras, answers to Question~\ref{Ques: nulti-operated setting} are presented.
In the third section, multi-operated GS bases and linear bases for free differential Rota-Baxter algebras on algebras are studied and the fourth section contains our investigation for free integro-differential algebras on algebras.
\medskip
\textbf{Notation:} Throughout this paper, $\bk$ denotes a base field. All the vector spaces and algebras are over $\bk$.
\bigskip
\section{New monomial orders on free multi-operated semigroups and monoids}
In this section, we recall free objects in the multi-operated setting and the construction of free $\Omega$-semigroups and related structures, and
define two new monomial orders $\leq_{\operatorname{PD}}$ and $\leq_{\operatorname{uPD}}$ on free multi-operated semigroups and monoids.
The main results of this paper will depend heavily on these new monomial orders.
For a set $Z$, denote by $\bk Z$ (resp. $\cals(Z)$, $\calm(Z)$) the free $\bk$-vector space (resp. free semigroup, free monoid) generated by $Z$. Denote the category of sets (resp. semigroups, monoids) by $\Set$ (resp. $\Sem$, $\Mon$). Denote the categories of $\bk$-algebras and unital $\bk$-algebras by $\Alg$ and $\uAlg$ respectively.
Throughout this section, let $\Omega$ be a nonempty set which will be the index set of operators.
\subsection{Free objects in the multi-operated setup}\
\begin{defn}
An operated set with an operator index set $\Omega$ or simply an $\Omega$-set is a set $S$ endowed with a family of maps
$P_\omega: S\rightarrow S$ indexed by $\omega\in \Omega$ (which are not necessarily homomorphisms of semigroups). The morphisms between $\Omega$-sets can be defined in the obvious way. Denote the category of $\Omega$-sets by $\mtOpSet$.
Similarly, we can define $\Omega$-semigroups and $\Omega$-monoids. Their categories are denoted by $\mtOpSem$ and $\mtOpMon$ respectively.
$\Omega$-vector spaces, nonunital or unital $\Omega$-algebras can be defined in a similar way, except asking, moreover, that all the operators are $\bk$-linear maps. Denote the category of $\Omega$-vector spaces, (resp. nonunital $\Omega$-algebras, unital $\Omega$-algebras) by $\mtOpVect$ (resp. $\mtOpAlg$, $\mtuOpAlg$) with obvious morphisms.
\end{defn}
As in \cite{QQWZ21}, there exists the following diagram of functors:
\[\xymatrixrowsep{2.5pc}
\xymatrix{
& \mtOpVect \ar@<.5ex>@{->}[rr]\ar@<.5ex>@{->}[ld]\ar@<.5ex>@{-->}[dd]
& &\mtOpAlg \ar@<.5ex>@{->}[rr]\ar@<.5ex>@{->}[ll]\ar@<.5ex>@{->}[ld]\ar@<.5ex>@{-->}[dd]
& & \mtuOpAlg \ar@<.5ex>@{->}[ld] \ar@<.5ex>@{->}[ll]\ar@<.5ex>@{->}[dd] \\
\mtOpSet \ar@<.5ex>@{->}[ur] \ar@<.5ex>@{->}[rr] \ar@<.5ex>@{->}[dd]
& &\mtOpSem \ar@<.5ex>@{->}[ur] \ar@<.5ex>@{->}[rr]\ar@<.5ex>@{->}[ll]\ar@<.5ex>@{->}[dd]
& & \mtOpMon \ar@<.5ex>@{->}[ur]\ar@<.5ex>@{->}[ll] \ar@<.5ex>@{->}[dd] &\\
& \Vect \ar@<.5ex>@{-->}[uu] \ar@<.5ex>@{-->}[rr] \ar@<.5ex>@{-->}[ld]
& & \Alg \ar@<.5ex>@{-->}[uu] \ar@<.5ex>@{-->}[rr]\ar@<.5ex>@{-->}[ll]\ar@<.5ex>@{-->}[ld]
& & \uAlg \ar@<.5ex>@{->}[uu]\ar@<.5ex>@{-->}[ll]\ar@<.5ex>@{->}[ld] \\
\Set \ar@<.5ex>@{->}[uu] \ar@<.5ex>@{-->}[ur] \ar@<.5ex>@{->}[rr]
& & \Sem \ar@<.5ex>@{->}[rr] \ar@<.5ex>@{->}[uu]\ar@<.5ex>@{-->}[ur]\ar@<.5ex>@{->}[ll]
& & \Mon \ar@<.5ex>@{->}[ur] \ar@<.5ex>@{->}[uu] \ar@<.5ex>@{->}[ll] & }
\]
In this diagram, all functors from right to left, from below to above and from southwest to northeast are the obvious forgetful functors. The other functors are free object functors which are left adjoint to the forgetful functors.
Our notations for free object functors are analogous to those in \cite{QQWZ21}. For instance, $\calf^{\mtOpAlg}_{\Alg}$ denotes the free object functor from
the category of algebras to that of nonunital $\Omega$-algebras.
We could give similar constructions of these free object functors as in Sections 1-3 of \cite{QQWZ21}. However, as we do not need the details, we will not repeat them; the curious reader may consult \cite{QQWZ21} and extend the constructions given there without essential difficulty.
\subsection{Free multi-operated semigroups and monoids}\
Now we explain the construction of the free $\Omega$-semigroup generated by a set $Z$.
For $\omega\in \Omega$, denote by $\lfloor Z \rfloor_\omega$ the set of all formal elements $ \lfloor z \rfloor_\omega, z\in Z$ and put
$\lfloor Z \rfloor_\Omega=\sqcup_{\omega\in \Omega} \left \lfloor Z\right \rfloor_\omega$. The inclusion into the first component $ Z\hookrightarrow Z\sqcup \lfloor Z \rfloor_\Omega$ induces an injective semigroup homomorphism
$$ i_{0,1}: \frakS_{,0}(Z):=\cals(Z) \hookrightarrow \frakS_{,1}(Z):= \cals(Z\sqcup \lfloor Z \rfloor_\Omega).
$$
For $n \geq 2$, assume that we have constructed $\frakS_{,n-2}(Z)$ and $\frakS_{,n-1}(Z)= \cals(Z\sqcup \lfloor \frakS_{,n-2}(Z) \rfloor_\Omega)$ endowed with an injective homomorphism of semigroups
$
i_{n-2,n-1}: \frakS_{,n-2}(Z) \hookrightarrow \frakS_{,n-1}(Z).
$
We define the semigroup
$$
\frakS_{,n}(Z):= \cals(Z\sqcup \lfloor \frakS_{,n-1}(Z) \rfloor_\Omega)
$$ and
the natural injection
$$
\Id_Z\sqcup \lfloor i_{n-2,n-1} \rfloor_\Omega:Z\sqcup \lfloor \frakS_{,n-2}(Z) \rfloor_\Omega \hookrightarrow Z\sqcup \lfloor \frakS_{,n-1}(Z) \rfloor_\Omega
$$
induces an injective semigroup homomorphism
$$ i_{n-1,n}: \frakS_{,n-1}(Z)= \cals(Z\sqcup \lfloor \frakS_{,n-2}(Z) \rfloor_\Omega) \hookrightarrow \frakS_{,n}(Z) = \cals (Z\sqcup \lfloor \frakS_{,n-1}(Z) \rfloor_\Omega).$$
Define $ \frakS(Z)=\varinjlim \frakS_{,n}(Z) $; the maps sending $u\in \frakS_{,n}(Z)$ to $\left \lfloor u\right \rfloor_{\omega}\in \frakS_{,n+1}(Z)$ induce a family of operators $P_{\omega}, \omega \in \Omega$ on $\frakS(Z)$.
The construction of the free $\Omega$-monoid $\frakM (Z)$ over a set $Z$ is similar, by just replacing $\cals(Z)$ by $\calm(Z)$ everywhere in the construction.
\begin{remark}\label{remark: monoids}
We will use another construction of $\frakM (Z)$. In fact, add some symbols $\lfloor1\rfloor_\Omega=\{\lfloor1\rfloor_\omega, \omega\in \Omega\}$ to $Z$ and form $ \frakS (Z \sqcup \lfloor1\rfloor_\Omega )$, then $\frakM (Z)$ can be obtained from $ \frakS(Z \sqcup \lfloor1\rfloor_\Omega )$ by just adding the empty word $1$.
\end{remark}
It is easy to see that $\bk\frakS(Z)$ (resp. $\bk\frakM (Z)$) is the free nonunital (resp. unital) $\Omega$-algebra generated by $Z$.
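As a purely illustrative aside, the following small Python sketch (ours; all names are ad hoc) shows one concrete way to encode elements of $\frakM(Z)$ for $\Omega=\{D,P\}$ as nested tuples, together with the monoid product and the nesting depth, i.e. the smallest step of the recursive construction above containing a given word.
\begin{verbatim}
# Illustrative sketch only: operated words over Z for Omega = {"D", "P"}.
# A word is a tuple of letters; a letter is ("gen", z) with z a generator,
# or ("op", omega, word) standing for the bracketed letter |word|_omega.

OMEGA = ("D", "P")

def gen(z):
    # a generator letter coming from Z
    return ("gen", z)

def op(omega, word):
    # the letter |word|_omega: the operator indexed by omega applied to word
    assert omega in OMEGA
    return ("op", omega, word)

def concat(*words):
    # the monoid product: concatenation of words (the empty tuple is the unit 1)
    return tuple(letter for w in words for letter in w)

def depth(word):
    # maximal nesting of brackets, i.e. the smallest n such that the word
    # already appears at the n-th step of the recursive construction
    return max((1 + depth(l[2]) for l in word if l[0] == "op"), default=0)

# Example: the word  x |y|_D |xy|_P
x, y = gen("x"), gen("y")
w = concat((x,), (op("D", (y,)),), (op("P", (x, y)),))
print(depth(w))  # -> 1
\end{verbatim}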
\subsection{Monomial orders}\label{section: order}\
In this subsection, we introduce some new monomial orders on free $\Omega$-semigroups and free $\Omega$-monoids. We
only consider the case of two operators, say $\Omega=\{P, D\}$ as the main examples in mind are differential Rota-Baxter algebras and integro-differential algebras following the convention from \cite{GGR15}.
We first recall the definitions of well orders and monomial orders.
\begin{defn}
Let $Z$ be a nonempty set.
\begin{itemize}
\item [(a)] A preorder $\leq$ is a binary relation on $Z$ that is reflexive and transitive, that is, for all $x, y, z \in Z$, we have
\begin{itemize}
\item [(i)] $x \leq x$; and
\item[(ii)]if $x \leq y, y \leq z$, then $x \leq z$.
\end{itemize}
In the presence of a preorder $\leq$, we denote $x=_{Z} y$ if $x \leq y$ and $x \geq y$; if $x \leq y$ but $x \neq y$, we write $x< y$ or $y> x$.
\item[(b)] A pre-linear order $\leq$ on $Z$ is a preorder $\leq$ such that either $x \leq y$ or $x \geq y$ for all $x, y \in Z$.
\item[(c)] A linear order or a total order $\leq$ on $Z$ is a pre-linear order $\leq$ that is moreover antisymmetric, that is, $x \leq y$ and $y \leq x$ imply $x=y$.
\item[(d)] A preorder $\leq$ on $Z$ is said to satisfy the descending chain condition, if for each descending chain $x_1\geq x_2\geq x_3\geq \cdots$, there exists $N\geq 1$ such that $x_N=_Z x_{N+1}=_Z\cdots$.
A linear order satisfying the descending chain condition is called a well order.
\end{itemize}
\end{defn}
Before giving the definition of monomial orders, we need to introduce the following notions generalising the case of one operator.
\begin{defn}
Let $Z$ be a set and $\star$ a symbol not in $Z$.
\begin{itemize}
\item [(a)] Define $\frakMstar(Z)$ to be the subset of $\frakM(Z\cup\star)$ consisting of elements with $\star$ occurring only once.
\item [(b)] For $q\in\frakMstar(Z)$ and $u\in \frakM(Z)$, we define $q|_{u}\in \frakM(Z)$ to be the element obtained by
replacing the symbol $\star$ in $q$ by $u$. In this case, we say $u$ is a subword of $q|_{u}$.
\item [(c)] For $q\in\frakMstar(Z)$ and $s=\sum_ic_iu_i\in \bk \frakM(Z)$ with $c_i\in\bk$ and $u_i\in\frakM(Z)$, we define
$$q|_s:=\sum_ic_iq|_{u_i}.$$
\item [(d)] Define $\frakSstar(Z)$ to be the subset of $\frakS(Z\cup\star)$ consisting of elements with $\star$ occurring only once. It is easy to see $\frakSstar(Z)$ is a subset of $\frakMstar(Z)$, so we also have notations in (a)-(c) for $\frakSstar(Z)$ by restriction.
\end{itemize}
\end{defn}
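For instance, for $Z=\{x,y,z\}$, taking $q=x\star\lfloor y\rfloor_P\in\frakMstar(Z)$ and $u=\lfloor z\rfloor_D\in\frakM(Z)$ gives $q|_u=x\lfloor z\rfloor_D\lfloor y\rfloor_P$, so $\lfloor z\rfloor_D$ is a subword of $x\lfloor z\rfloor_D\lfloor y\rfloor_P$.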
\begin{defn}
Let $Z$ be a set.
\begin{itemize}
\item [(a)]A monomial order on $\cals(Z)$ is a well-order $\leq$ on $\cals(Z)$ such that
$$ u < v \Rightarrow uw < vw\text{ and }wu<wv \text{ for any }u, v, w\in \cals(Z);$$
\item [(a')] a monomial order on $\calm(Z)$ is a well-order $\leq$ on $\calm(Z)$ such that
$$ u < v \Rightarrow wuz < wvz \text{ for any }u, v, w,z\in \calm(Z);$$
\item [(b)]a monomial order on $\frakS(Z)$ is a well-order $\leq$ on $\frakS(Z)$ such that
$$u< v \Rightarrow q|_u<q|_v\quad\text{for all }u,v\in\frakS(Z)\text{ and } q\in \frakSstar(Z);$$
\item [(b')]a monomial order on $\frakM(Z)$ is a well-order $\leq$ on $\frakM(Z)$ such that
$$u< v \Rightarrow q|_u<q|_v\quad\text{for all }u,v\in\frakM(Z)\text{ and } q\in \frakMstar(Z). $$
\end{itemize}
\end{defn}
Let us recall some known preorders.
\begin{defn}
For two elements $u, v \in \frakS(Z)$,
\begin{itemize}
\item [(a)] define
$$
u \leq_{\operatorname{D}} v \Leftrightarrow \operatorname{deg}_{D}(u) \leq \operatorname{deg}_{D}(v),
$$
where the $D$-degree $\operatorname{deg}_{D}(u)$ of $u$ is the number of occurrences of $\lfloor~\rfloor_D$ in $u$;
\item [(b)] define
$$
u \leq_{\operatorname{P}} v \Leftrightarrow \operatorname{deg}_{P}(u) \leq \operatorname{deg}_{P}(v),
$$
where the $P$-degree $\operatorname{deg}_{P}(u)$ of $u$ is the number of occurrences of $\lfloor~\rfloor_P$ in $u$;
\item [(c)] define
$$
u \leq_{\operatorname{dZ}} v \Leftrightarrow \operatorname{deg}_{Z}(u) \leq \operatorname{deg}_{Z}(v),
$$
where the $Z$-degree $\operatorname{deg}_{Z}(u)$ is the number of elements of $Z$ occurring in $u$ counting the repetitions;
\end{itemize}
\end{defn}
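For example, for $u=\lfloor x\rfloor_D\lfloor\lfloor y\rfloor_P\rfloor_P$ with $x,y\in Z$, we have $\operatorname{deg}_{D}(u)=1$, $\operatorname{deg}_{P}(u)=2$ and $\operatorname{deg}_{Z}(u)=2$; in particular $\lfloor xy\rfloor_P <_{\operatorname{P}} u$ while $\lfloor xy\rfloor_P =_{\operatorname{dZ}} u$.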
\begin{defn}\label{Def: deg-lex order}
Let $Z$ be a set endowed with a well order $\leq_{Z}$.
Introduce the degree-lexicographical order $\leq_{\rm {dlex }}$ on $\cals(Z)$ by imposing, for any $u\neq v \in \cals(Z)$, $u <_{\rm {dlex}}v$ if
\begin{itemize}
\item[(a)] either $\operatorname{deg}_{Z}(u)<\operatorname{deg}_{Z}(v)$, or
\item[(b)] $\operatorname{deg}_{Z}(u)=\operatorname{deg}_{Z}(v)$, and $u=mu_{i}n$, $v=mv_{i}n^\prime$ for some $m,n,n^\prime\in \calm(Z)$ and $u_{i},v_{i}\in Z$ with $u_{i}<_{Z} v_{i}$.
\end{itemize}
\end{defn}
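For instance, if $x<_{Z} y$ in $Z$, then $y<_{\rm{dlex}}xx$ by (a), and $xxy<_{\rm{dlex}}xyx$ by (b) with $m=x$, $u_{i}=x$ and $v_{i}=y$.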
It is obvious that the degree-lexicographical order $\leq_{\mathrm{dlex}}$ on $\cals(Z)$ is a well order.
We now define a preorder $\leq_{\operatorname{Dlex}}$ on $\frakS(Z)$, by the following recursion process:
\begin{itemize}
\item [(a)] For $u,v\in \frakS_{,0}(Z)=\cals(Z)$, define
$$u\leq_{\operatorname{Dlex}_0} v \Leftrightarrow u \leq_{\mathrm{dlex}}v.$$
\item [(b)] Assume that we have constructed a well order $\leq_{\operatorname{Dlex}_n}$ on $\frakS_{,n}(Z)$ for $n\geq 0$ extending all $\leq_{\operatorname{Dlex}_i}$ for any $0\leq i\leq n-1$. The well order $\leq_{\operatorname{Dlex}_n}$ on $\frakS_{,n}(Z)$ induces a well order on $\lfloor\frakS_{,n}(Z)\rfloor_P$ (resp. $\lfloor\frakS_{,n}(Z)\rfloor_D$), by imposing
$\lfloor u\rfloor_P\leq \lfloor v\rfloor_P$ (resp. $\lfloor u\rfloor_D\leq \lfloor v\rfloor_D$) whenever $u\leq_{\operatorname{Dlex}_n} v\in \frakS_{,n}(Z)$.
By setting $u<v<w$ for all $u\in Z$, $v\in \lfloor\frakS_{,n}(Z)\rfloor_D$, and $w\in \lfloor\frakS_{,n}(Z)\rfloor_P$, we obtain a well order on $Z\sqcup \lfloor\frakS_{,n}(Z)\rfloor_P \sqcup \lfloor\frakS_{,n}(Z)\rfloor_D$.
Let $\leq_{\operatorname{Dlex}_{n+1}}$ be the degree lexicographic order on $\frakS_{,n+1}(Z)=\cals(Z\sqcup \lfloor\frakS_{,n}(Z)\rfloor_P \sqcup \lfloor\frakS_{,n}(Z)\rfloor_D)$ induced by that on $Z\sqcup \lfloor\frakS_{,n}(Z)\rfloor_P \sqcup \lfloor\frakS_{,n}(Z)\rfloor_D$.
\end{itemize}
Obviously $\leq_{\operatorname{Dlex}_{n+1}}$ extends $\leq_{\operatorname{Dlex}_{n}}$. By a limit process, we get a preorder on $\frakS(Z)$ which will be denoted by
$\leq_{\operatorname{Dlex}}$. As is readily seen, $\leq_{\operatorname{Dlex}}$ is a linear order.
\begin{remark}\label{remark: monomial order for multi-operator case}
It is easy to see that the above construction of $\leq_{\operatorname{Dlex}}$ extends to the case of more than two operators.
In fact, for a given well order
$\leq_{\Omega}$ on the index set $\Omega$,
the defining process of $\leq_{\operatorname{Dlex}}$ on $\frakS(Z)$ is the same as above except for one detail in step (b), where
we put $u<v<w$ for all $u\in Z$, $v\in \lfloor\frakS_{,n}(Z)\rfloor_{\omega_1}$ and $w\in \lfloor\frakS_{,n}(Z)\rfloor_{\omega_2}$ with $\omega_1 <_{\Omega} \omega_2$ in $\Omega$.
\end{remark}
\begin{defn}\label{GD}
For any $u\in\frakS(Z)$, let $u_1,\dots ,u_n\in Z$ be all the elements of $Z$ occurring in $u$ from left to right. If a right half bracket $\rfloor_D$ lies in the gap between $u_i$ and $u_{i+1}$, where $1\leq i<n$, the GD-degree of this right half bracket is defined to be $n-i$; if a right half bracket $\rfloor_D$ appears on the right of $u_n$, we define the GD-degree of this half bracket to be $0$. We define the GD-degree of $u$, denoted by $\deg_{GD}(u)$, to be the sum of the GD-degrees of all the right half brackets $\rfloor_D$ in $u$.
For example, the GD-degrees of the half right brackets in $u=\left \lfloor x \right \rfloor_D \left \lfloor y \right \rfloor_D$ with $x,y \in Z$ are respectively 1 and 0 from left to right, so $\deg_{GD}(u)=1$ by definition.
For $u, v\in\frakS(Z)$, define the GD-degree order $\leq_{\mathrm{GD}}$ by
$$ u\leq_{\mathrm{GD}}v\Leftrightarrow \deg_{GD}(u)\leq \deg_{GD}(v).$$
\end{defn}
\begin{defn}\label{GP}
For any $u\in\frakS(Z)$, let $u_1,\dots ,u_n\in Z$ be all the elements of $Z$ occurring in $u$ from left to right. If a bracket $\lfloor~\rfloor_P$ contains $i$ elements of $Z$, the GP-degree of this bracket is defined to be $n-i$. We denote by $\deg_{GP}(u)$ the sum of the GP-degrees of all the brackets $\lfloor~\rfloor_P$ in $u$.
For example, the GP-degrees of the brackets $\lfloor~\rfloor_P$ in $u=\left \lfloor xy \right \rfloor_P \left \lfloor z \right \rfloor_P$ with $x,y,z \in Z$ are respectively 1 and 2 from left to right, so $\deg_{GP}(u)=3$ by definition.
For $u,v\in\frakS(Z)$, define the GP-degree order $\leq_{\mathrm{GP}}$ by
$$ u\leq_{\mathrm{GP}}v\Leftrightarrow \deg_{GP}(u)\leq \deg_{GP}(v).$$
\end{defn}
It is easy to obtain the following lemma whose proof is thus omitted.
\begin{lem}\label{lemma: prelinear order with descending chain condition}
The orders $\leq_{\mathrm{D}}$, $\leq_{\mathrm{P}}$, $\leq_{\mathrm{dZ}}$, $\leq_{\operatorname{GD}}$ and $\leq_{\mathrm{GP}}$ are pre-linear orders satisfying the descending chain condition.
\end{lem}
Combining all the orders above, we can now construct an order $\leq_{\operatorname{PD}}$ on $\frakS(Z)$:
$$u \leq_{\operatorname{PD}}v \Leftrightarrow \left\{
\begin{array}{lcl}
u \leq_{\mathrm{D}} v ,\text{or }\\
u =_{\mathrm{D}} v \text{ and } u \leq_{\mathrm{P}} v , \text{or }\\
u =_{\mathrm{D}} v , u =_{\mathrm{P}} v \text{ and } u \leq_{\mathrm{dZ}} v, \text{or }\\
u =_{\mathrm{D}} v , u =_{\mathrm{P}} v , u =_{\mathrm{dZ}} v \text{ and } u \leq_{\mathrm{GD}} v , \text{or }\\
u =_{\mathrm{D}} v , u =_{\mathrm{P}} v , u =_{\mathrm{dZ}} v, u =_{\mathrm{GD}} v \text{ and } u \leq_{\mathrm{GP}} v , \text{or }\\
u =_{\mathrm{D}} v , u =_{\mathrm{P}} v , u =_{\mathrm{dZ}} v, u =_{\mathrm{GD}} v , u =_{\mathrm{GP}} v \text{ and } u \leq_{\mathrm{Dlex}} v.
\end{array}
\right. $$
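To illustrate how $\leq_{\operatorname{PD}}$ compares monomials, take $x,y\in Z$ and consider $u=\lfloor x\lfloor y\rfloor_P\rfloor_D$ and $v=\lfloor x\rfloor_D\lfloor y\rfloor_P$. The two monomials agree on $\deg_D$, $\deg_P$, $\deg_Z$ and $\deg_{GP}$, while $\deg_{GD}(u)=0<1=\deg_{GD}(v)$, so $u<_{\operatorname{PD}}v$.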
To prove that $\leq_{\operatorname{PD}}$ is a well order, we need some preparation.
\begin{defn} \begin{itemize}
\item [(a)] Given some preorders $\leq_{\alpha_{1}}, \dots, \leq_{\alpha_{k}}$ on a set $Z$ with $k\geq 2$, introduce another preorder
$\leq_{\alpha_{1}, \dots, \alpha_{k}}$ by imposing recursively
$$
u \leq_{\alpha_{1}, \dots, \alpha_{k}} v \Leftrightarrow\left\{
\begin{array}{l}
u<_{\alpha_{1}} v, \text{ or } \\
u=_{\alpha_{1}} v \text { and } u \leq_{\alpha_{2}, \dots, \alpha_{k}} v.
\end{array}\right.
$$
\item [(b)] Let $k \geq2$ and let $\leq_{\alpha_i}$ be a pre-linear order on $Z_{i},~1 \leq i \leq k$. Define the lexicographical product order $\leq_{\text{clex}}$ on the cartesian product $Z_{1} \times Z_{2} \times \cdots \times Z_{k}$ by defining
$$(x_{1},\cdots, x_{k}) \leq_{\text {clex}}(y_{1},\cdots, y_{k}) \Leftrightarrow\left\{\begin{array}{l}x_{1}<_{\alpha_{1}} y_{1}, \text {or } \\ x_{1}=_{Z_{1}}y_{1} \text { and }\left(x_{2}, \cdots, x_{k}\right) \leq_{\rm{clex}}\left(y_{2}, \cdots, y_{k}\right),\end{array}\right.$$ where $\left(x_{2}, \cdots, x_{k}\right) \leq_{\rm{clex}}\left(y_{2}, \cdots, y_{k}\right)$ is defined by induction, with the convention that $\leq_{\rm{clex}}$ is the trivial relation when $k=1$.
\end{itemize}
\end{defn}
\begin{lem}[{\cite[Lemma 1.7]{QQZ21}}] \label{sequence of order gives linear order}
\begin{itemize}
\item[(a)] For $k \geq 2$, let $\leq_{\alpha_{1}}, \dots, \leq_{\alpha_{k-1}}$ be pre-linear orders on $Z$, and $\leq_{\alpha_{k}}$ a linear order on $Z$. Then $\leq_{\alpha_{1}, \dots, \alpha_{k}}$ is a linear order on $ Z$.
\item[(b)] Let $ \leq_{\alpha_{i}} $ be a well order on $Z_i$, $1\leq i\leq k$. Then the lexicographical product
order $\leq_{\rm{clex}}$ is a well order on the cartesian product $Z_1\times Z_2\times \cdots \times Z_k$.
\end{itemize}
\end{lem}
\begin{prop}
The order $\leq_{\operatorname{PD}}$ is a well order on $\frakS(Z)$.
\end{prop}
\begin{proof}Since $\leq_{\mathrm{Dlex}}$ is a linear order, so is $\leq_{\operatorname{PD}}$ by Lemma~\ref{lemma: prelinear order with descending chain condition} and Lemma~\ref{sequence of order gives linear order}(a).
It suffices to verify that $\leq_{\operatorname{PD}}$ satisfies the descending chain condition. Let $$v_1\geq_{\operatorname{PD}} v_2\geq_{\operatorname{PD}} v_3\geq_{\operatorname{PD}}\cdots \in\frakS(Z) $$ be a descending chain. By Lemma~\ref{lemma: prelinear order with descending chain condition}, there exists $N\geq 1$ such that $$\deg_D(v_N)=\deg_D(v_{N+1})=\deg_D(v_{N+2})=\cdots=:k,$$
$$\deg_P(v_N)=\deg_P(v_{N+1})=\deg_P(v_{N+2})=\cdots=:p,$$ $$\deg_Z(v_N)=\deg_Z(v_{N+1})=\deg_Z(v_{N+2}) =\cdots,$$ $$\deg_{GD}(v_N)=\deg_{GD}(v_{N+1})=\deg_{GD}(v_{N+2})=\cdots,$$ and $$\deg_{GP}(v_N)=\deg_{GP}(v_{N+1})=\deg_{GP}(v_{N+2})=\cdots.$$
Thus all $v_i$ with $i\geq N$ belong to $\frakS_{,k+p}(Z)$. The restriction of the order $\leq_{\mathrm{Dlex}}$ to $\frakS_{,k+p}(Z)$ coincides with
the well order $\leq_{\operatorname{Dlex}_{k+p}}$, which by definition satisfies the descending chain condition, so the chain $v_1\geq_{\operatorname{PD}} v_2\geq_{\operatorname{PD}} v_3\geq_{\operatorname{PD}}\cdots$ stabilizes after finitely many steps.
\end{proof}
\begin{defn}[{\cite[Definition~5.6]{GGSZ14}}]
A preorder $\leq_{\alpha}$ on $\frakS(Z)$ is called bracket compatible (resp. left compatible, right compatible) if
$$u \leq_{\alpha} v \Rightarrow\lfloor u\rfloor_D \leq_{\alpha}\lfloor v\rfloor_D \text{ and } \lfloor u\rfloor_P \leq_{\alpha}\lfloor v\rfloor_P \quad (\text{resp. } w u \leq_{\alpha} w v,\ u w \leq_{\alpha} v w, \text{ for all } w \in \frakS(Z)).
$$
\end{defn}
\begin{lem}[{\cite[Lemma~5.7]{GGSZ14}}]\label{monomial order lemma}
A well order $\leq$ is a monomial order on $\frakS(Z)$ if and only if $ \leq$ is bracket compatible, left compatible and right compatible.
\end{lem}
Now we can prove the main result of this section, which is the main technical point of this paper.
\begin{thm}
The well order $\leq_{\operatorname{PD}}$ is a monomial order on $\frakS(Z)$.
\end{thm}
\begin{proof}
Let $u\leq_{\operatorname{PD}}v$.
It is obvious that the preorders $\leq_{\mathrm{D}}$, $\leq_{\mathrm{P}}$ and $\leq_{\mathrm{dZ}}$ are bracket compatible, left compatible and right compatible. This settles the following three cases: $u<_{\mathrm{D}} v$; $u=_{\mathrm{D}} v$ and $u<_{\mathrm{P}} v$; $u=_{\mathrm{D}} v$, $u=_{\mathrm{P}} v$ and $u<_{\mathrm{dZ}} v$.
If $u=_{\mathrm{D}} v$, $u=_{\mathrm{P}} v, u=_{\mathrm{dZ}} v$ and $u <_{\mathrm{GD}}v$,
obviously $\lfloor u\rfloor_D <_{\mathrm{GD}}\lfloor v\rfloor_D$, $\lfloor u\rfloor_P <_{\mathrm{GD}}\lfloor v\rfloor_P$, $uw <_{\mathrm{GD}}vw$ and $wu <_{\mathrm{GD}}wv$ for $w\in \frakS(Z)$. So $\lfloor u\rfloor_D <_{\operatorname{PD}}\lfloor v\rfloor_D$, $\lfloor u\rfloor_P <_{\operatorname{PD}}\lfloor v\rfloor_P$, $uw <_{\operatorname{PD}}vw$ and $wu <_{\operatorname{PD}}wv$.
The case that $u=_{\mathrm{D}} v$, $u=_{\mathrm{P}} v, u=_{\mathrm{dZ}} v$, $u =_{\mathrm{GD}}v$ and $u <_{\mathrm{GP}}v$ is similar to the above one.
It remains to consider the case that $u=_{\mathrm{D}} v$, $u=_{\mathrm{P}} v, u=_{\mathrm{dZ}} v$, $u =_{\mathrm{GD}}v$, $u =_{\mathrm{GP}}v$ and $u <_{\mathrm{Dlex}}v$.
Let $n\geq \deg_D(u)+\deg_P(u)$; since $u=_{\mathrm{D}}v$ and $u=_{\mathrm{P}}v$, both $u$ and $v$ belong to $\frakS_{,n}(Z)$, and thus $u\leq_{\operatorname{Dlex}_{n}} v$. By the fact that the restriction of $\leq_{\operatorname{Dlex}_{n+1}} $ to $\lfloor\frakS_{,n}(Z)\rfloor_D$ is induced by $ \leq_{\operatorname{Dlex}_n}$, we have $\lfloor u\rfloor_D \leq_{\operatorname{Dlex}_{n+1}} \lfloor v\rfloor_D$, hence $\lfloor u\rfloor_D \leq_{\operatorname{Dlex}} \lfloor v\rfloor_D$ and $\lfloor u\rfloor_D \leq_{\operatorname{PD}} \lfloor v\rfloor_D$. Similarly $\lfloor u\rfloor_P \leq_{\operatorname{PD}} \lfloor v\rfloor_P$.
Let $w\in\frakS_{,m}(Z)$. One can obtain $uw\leq_{\operatorname{Dlex}_r}vw$ and $wu\leq_{\operatorname{Dlex}_r}wv$ for $r=\max \left\lbrace m, n \right\rbrace $, so $uw\leq_{\operatorname{PD}} vw$ and $wu\leq_{\operatorname{PD}}wv$.
We are done.
\end{proof}
Let us now move to the unital case: we extend $\leq_{\operatorname{PD}} $ from $\frakS(Z)$ to $\frakM(Z)$ by using
Remark~\ref{remark: monoids}.
\begin{defn}\label{udl}
Let $Z$ be a set with a well order. Let $\dagger_P$ (resp. $\dagger_D$) be a symbol which is understood to be $\lfloor 1 \rfloor_P$ (resp. $\lfloor 1 \rfloor_D$), and write $Z'=Z\sqcup \{\dagger_P ,\dagger_D\}$. Consider the free operated semigroup $\frakS(Z')$ over the set $Z'$. The well order on $Z$ extends to a well order $\leq$ on $Z'$ by setting $\dagger_P>z>\dagger_D$ for any $z\in Z$. Besides, we impose $\operatorname{deg}_{P}(\dagger_P) = 1$ and $\deg_{GP}(\dagger_P)=0$.
Then the monomial order $\leq_{\operatorname{PD}} $ on $\frakS(Z')$ induces a well order $\leq_{\operatorname{uPD}}$ on $\frakM(Z)=\frakS(Z')\sqcup \{1\}$ (in which $\lfloor 1 \rfloor_P$ and $\lfloor 1 \rfloor_D$ are identified with $\dagger_P$ and $\dagger_D$ respectively), by setting $u>_{\operatorname{uPD}}1$ for any $u\in \frakS(Z')$.
\end{defn}
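For instance, for any $z\in Z$, one has $1<_{\operatorname{uPD}}\lfloor 1\rfloor_D<_{\operatorname{uPD}}z<_{\operatorname{uPD}}\lfloor 1\rfloor_P$: indeed, $1$ is smallest by definition, $\dagger_D$ and $z$ agree on all the degrees involved and $\dagger_D<z$ in $Z'$, while $\operatorname{deg}_{P}(\dagger_P)=1>0=\operatorname{deg}_{P}(z)$.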
\begin{thm}
The well order $\leq_{\operatorname{uPD}}$ is a monomial order on $\frakM(Z)$.
\end{thm}
\begin{proof} Obviously,
the well order $\leq_{\operatorname{uPD}}$ is bracket compatible on $\frakM(Z)\backslash \{1\}$.
Let $x\in \frakM(Z)\backslash \{1\}$. By definition, $x>_{\operatorname{uPD}}1$. We have $\lfloor x\rfloor_P>_{\mathrm{Dlex}}\lfloor 1\rfloor_P$ which implies $\lfloor x\rfloor_P>_{\operatorname{uPD}}\dagger_P$.
It is easy to see that $\lfloor x\rfloor_D>_{\operatorname{uPD}} x >_{\operatorname{uPD}}\dagger_D$.
Thus $\leq_{\operatorname{uPD}}$ is bracket compatible.
Clearly, $\leq_{\operatorname{uPD}}$ is left and right compatible.
\end{proof}
We record several important conclusions which will be useful later.
\begin{prop}\label{uDl}
For any $ u,v\in \frakM(Z)\backslash \{1\}$, we have
\begin{itemize}
\item [(a)] $\lfloor u\rfloor_P\lfloor 1\rfloor_P >_{\operatorname{uPD}} \lfloor u\lfloor 1\rfloor_P \rfloor_P \geq_{\operatorname{uPD}} \lfloor \lfloor u\rfloor_P \rfloor_P $,
\item [(b)] $\lfloor 1\rfloor_P\lfloor v \rfloor_P>_{\operatorname{uPD}} \lfloor \lfloor v\rfloor_P \rfloor_P \geq_{\operatorname{uPD}} \lfloor \lfloor 1\rfloor_P v \rfloor_P $,
\item [(c)] $\lfloor 1\rfloor_P\lfloor 1\rfloor_P>_{\operatorname{uPD}} \lfloor \lfloor 1\rfloor_P\rfloor_P$,
\item [(d)] $\lfloor 1\rfloor_P\lfloor v\rfloor_D>_{\operatorname{uPD}} \lfloor \lfloor 1\rfloor_P v\rfloor_D$,
\item [(e)] $\lfloor u\rfloor_D\lfloor 1\rfloor_P>_{\operatorname{uPD}} \lfloor u\lfloor 1\rfloor_P\rfloor_D$.
\end{itemize}
\end{prop}
\begin{proof}
Let $u , v\in \frakM(Z)\backslash \{1\}=\frakS(Z')$.
\begin{itemize}
\item [(a)]
It is easy to see that $\lfloor \lfloor u\rfloor_P \rfloor_P$ has the lowest $\deg_{Z'}$ among $\lfloor u\rfloor_P\lfloor 1\rfloor_P$, $\lfloor u\lfloor 1\rfloor_P \rfloor_P$ and $\lfloor \lfloor u\rfloor_P \rfloor_P $, and
we also have $\deg_{GP}(\lfloor u\rfloor_P\lfloor 1\rfloor_P)>\deg_{GP}(\lfloor u\lfloor 1\rfloor_P \rfloor_P)$.
\item [(b)] It is similar to $(a)$.
\item [(c)] It follows from $\deg_{Z'}(\lfloor 1\rfloor_P\lfloor 1\rfloor_P)>\deg_{Z'}(\lfloor \lfloor 1\rfloor_P\rfloor_P)$.
\item [(d)] It can be deduced from $\lfloor 1\rfloor_P\lfloor v\rfloor_D >_{\operatorname{Dlex}} \lfloor \lfloor 1\rfloor_P v\rfloor_D$ by Definition~\ref{Def: deg-lex order}.
\item [(e)]It holds because $\deg_{GD}(\lfloor u\rfloor_D\lfloor 1\rfloor_P)>\deg_{GD}(\lfloor u\lfloor 1\rfloor_P\rfloor_D)$.
\end{itemize}
\end{proof}
\section{Operator polynomial identities and multi-operated GS bases}\label{Section: multi-operated GS bases}
In this section, we extend the theory of operated GS bases due to \cite{BokutChenQiu, GSZ13, GGSZ14, GaoGuo17} from the case of a single operator to that of multiple operators. The presentation is essentially contained in \cite{GGR15}.
\subsection{Operator polynomial identities}\
In this subsection, we give some basic notions and facts related to operator polynomial identities.
Throughout this section, $X$ denotes a set.
\begin{defn}
We call an element $\phi(x_1,\dots,x_n) \in \bk\frakS(X) $ (resp. $\bk\frakM (X)$) with $ n\geq 1, x_1, \dots, x_n\in X$ an operated polynomial identity (aka OPI).
\end{defn}
\textit{{From now on, we always assume that OPIs are multilinear, that is, they are linear in each $x_i$.}}
\begin{defn} Let $\phi(x_1,\dots,x_n)$ be an OPI. A (unital) $\Omega$-algebra $A=(A,\{P_\omega\}_{\omega\in \Omega})$ is said to satisfy the OPI $\phi(x_1,\dots,x_n)$ if
$\phi(r_1,\dots,r_n)=0,$ for all $r_1,\dots,r_n\in A.$
In this case, $(A,\{P_\omega\}_{\omega\in \Omega})$ is called a (unital) $\phi$-algebra.
Generally, for a family $\Phi$ of OPIs, we call a (unital) $\Omega$-algebra $(A, \{P_\omega\}_{\omega\in \Omega})$ a (unital) $\Phi$-algebra if it is a (unital) $\phi$-algebra for any $\phi\in \Phi$.
Denote the category of $\Phi$-algebras (resp. unital $\Phi$-algebras) by $\Phi\zhx\Alg$ (resp. $\Phi\zhx\uAlg$).
\end{defn}
\begin{defn} An $\Omega$-ideal of a (unital) $\Omega$-algebra is an ideal of the underlying associative algebra that is closed under the action of the operators.
The $\Omega$-ideal generated by a subset $S\subseteq A$ is denoted by $\left\langle S\right\rangle_{\mtOpAlg}$ (resp. $\left\langle S\right\rangle_{\mtuOpAlg}$ in the unital case).
\end{defn}
Obviously the quotient of an $\Omega$-algebra (resp. unital $\Omega$-algebra) by an $\Omega$-ideal is naturally an $\Omega$-algebra (resp. unital $\Omega$-algebra).
From now on, $\Phi$ denotes a family of OPIs in $\bk\frakS(X) $ or $\bk\frakM (X)$. For a set $Z$ and a subset $Y$ of $ \frakM (Z)$, introduce the subset $S_\Phi(Y)\subseteq\bk \frakM (Z)$ to be
\[S_{\Phi}(Y):=\{\phi(u_1,\dots,u_k)\ |\ u_1,\dots,u_k\in Y,~\phi(x_1,\dots,x_k)\in \Phi\}.\]
\subsection{Multi-operated GS bases for $\Phi$-algebras}\
In this subsection, operated GS basis theory is extended to algebras with multiple operators following closely \cite{BokutChenQiu}.
\begin{defn}
Let $Z$ be a set, $\leq$ a linear order on $\frakM(Z)$ and $f \in \bk \frakM(Z)$.
\begin{itemize}
\item [(a)] Let $f\notin \bk$. The leading monomial of $f$ , denoted by $\bar{f}$, is the largest monomial appearing in $f$. The leading coefficient of $f$ , denoted by $c_f$, is the coefficient of $\bar{f}$ in $f$. We call $f$ monic with respect to $\leq$ if $c_f = 1$.
\item [(a')] Let $f\in \bk$ (including the case $f=0$). We define the leading monomial of $f$ to be $1$ and the leading coefficient of $f$ to be $c_f=f$.
\item [(b)] A subset $S\subseteq \bk\frakM(Z)$ is called monicized with respect to $\leq$, if each nonzero element of $S$ has leading coefficient $1$.
\end{itemize}
\end{defn}
Obviously, each subset $S\subseteq \bk\frakM(Z)$ can be monicized by dividing each nonzero element by its leading coefficient.
We need another notation.
Let $Z$ be a set. For $u\in\frakM(Z)$ with $u\neq1$, as $u$ can be uniquely written as a product $ u_1 \cdots u_n $ with $ u_i \in Z \cup \left \lfloor\frakM(Z)\right \rfloor_\Omega$ for $1\leq i\leq n$, call $n$ the breadth of $u$, denoted by $|u|$; for $u=1$, we define
$ |u| = 0 $.
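For instance, for $x,y\in Z$, the breadth of $x\lfloor xy\rfloor_P y$ is $3$, while the breadth of $\lfloor xy\rfloor_P$ is $1$.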
\begin{defn}
Let $\leq$ be a monomial order on $\frakS(Z)$ (resp. $\frakM(Z)$) and $ f, g \in \bk\frakS(Z)$ (resp. $\bk\frakM(Z)$) be monic.
\begin{itemize}
\item[(a)] If there are $p,u,v\in \frakS(Z)$ (resp. $\frakM(Z)$) such that $p=\bar{f}u=v\bar{g}$ with $\max\left\lbrace |\bar{f}| ,|\bar{g}| \right\rbrace < \left| p\right| < |\bar{f}| +|\bar{g}|$, we call
$$\left( f,g\right) ^{u,v}_p:=fu-vg$$
the intersection composition of $f$ and $g$ with respect to $p$.
\item[(b)] If there are $p\in\frakS(Z)$ (resp. $\frakM(Z)$) and $q\in\frakSstar(Z)$ (resp. $\frakMstar(Z)$) such that $p=\bar{f}=q|_{\bar{g}}$, we call
$$\left( f,g\right)^q_p:=f-q|_g $$
the inclusion composition of $f$ and $g$ with respect to $p$.
\end{itemize}
\end{defn}
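As an illustration with hypothetical monic elements $f$ and $g$: if $\bar{f}=xy$ and $\bar{g}=yz$ for some $x,y,z\in Z$, then $p=xyz$ yields the intersection composition $\left( f,g\right)^{z,x}_p=fz-xg$, since $p=\bar{f}z=x\bar{g}$ and $\max\{|\bar{f}|,|\bar{g}|\}=2<|p|=3<|\bar{f}|+|\bar{g}|=4$; if instead $\bar{f}=xyz$ and $\bar{g}=y$, then $q=x\star z$ yields the inclusion composition $\left( f,g\right)^q_p=f-xgz$ with $p=\bar{f}=q|_{\bar{g}}$.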
\begin{defn}
Let $Z$ be a set and $\leq$ a monomial order on $\frakS(Z)$ (resp. $\frakM(Z)$). Let $\calg\subseteq \bk\frakS(Z) $ (resp. $\bk\frakM(Z)$).
\begin{itemize}
\item[(a)]An element $f\in \bk\frakS(Z) $ (resp. $\bk\frakM(Z) $) is called trivial modulo $(\calg,p)$ for $p\in \frakS(Z)$ (resp. $\frakM(Z)$) if
$$f=\underset{i}{\sum}c_iq_i|_{s_i}\text{ with }q_i|_{\bar s_i}<p\text{, where }c_i\in \bk,\ q_i\in\frakSstar(Z)~(\text{resp.}~\frakMstar(Z))\ \mathrm{and}\ s_i\in \calg.$$
If this is the case, we write
$$
(f, g)_p \equiv 0 \bmod (\calg, p) .
$$
In general, for any $u, v\in \frakS(Z)$ (resp. $\frakM(Z)$), $u \equiv v \bmod (\calg, p)$ means that $u-v=\left.\sum c_i q_i\right|_{s_i}$, with $q_i|_{\bar s_i}<p$, where $c_i\in \bk,\ q_i\in\frakSstar(Z)$ (resp. $\frakMstar(Z))$ and $s_i\in \calg$.
\item[(b)] The subset $\calg$ is called a GS basis in $\bk\frakS(Z)$ (resp. $\bk\frakM(Z)$) with respect to $\leq$ if, for all pairs $f,g\in \calg$ monicized with respect to $\leq$, every intersection composition of the form $\left( f,g\right) ^{u,v}_p$ is trivial modulo $(\calg,p)$, and every inclusion composition of the form $\left( f,g\right)^q_p$ is trivial modulo $(\calg,p)$.
\end{itemize}
\end{defn}
\textit{ {To distinguish them from the usual GS bases for associative algebras, from now on we shall call GS bases in multi-operated contexts $\Omega$-GS bases.}}
\begin{thm} \label{Thm: unital CD}
(Composition-Diamond Lemma) Let $Z$ be a set, $\leq$ a monomial order on $\frakM(Z)$ and $\calg\subseteq \bk\frakM(Z) $. Then the following conditions are equivalent:
\begin{itemize}
\item[(a)] $\calg$ is an $\Omega$-GS basis in $\bk\frakM(Z) $.
\item[(b)] Denote
$$\Irr(\calg):=\frakM(Z)\backslash \left\lbrace q|_{\overline{s}}~|~s\in \calg,\ \ q\in\frakMstar(Z)\right\rbrace. $$
As a $\bk$-space, $\bk\frakM(Z) =\bk\Irr(\calg)\oplus\left\langle \calg \right\rangle_\mtuOpAlg$ and $ \Irr(\calg) $ is a $\bk$-basis of $\bk\frakM(Z)\slash\left\langle\calg \right\rangle_\mtuOpAlg$.
\end{itemize}
\end{thm}
\begin{thm} \label{Thm: nonunital CD}
(Composition-Diamond Lemma) Let $Z$ be a set, $\leq$ a monomial order on $\frakS(Z)$ and $\calg\subseteq \bk\frakS(Z) $. Then the following conditions are equivalent:
\begin{itemize}
\item[(a)] $\calg$ is an $\Omega$-GS basis in $\bk\frakS(Z) $.
\item[(b)] Denote
$$\Irr(\calg):=\frakS(Z)\backslash \left\lbrace q|_{\overline{s}}~|~s\in \calg,\ \ q\in\frakSstar(Z)\right\rbrace. $$
As a $\bk$-space, $\bk\frakS(Z) =\bk\Irr(\calg)\oplus\left\langle \calg \right\rangle_\mtOpAlg$ and $ \Irr(\calg)$ is a $\bk$-basis of $\bk\frakS(Z)\slash\left\langle\calg \right\rangle_\mtOpAlg$.
\end{itemize}
\end{thm}
\begin{defn}[{\cite[Definition~2.21(a)]{GaoGuo17}}]\label{def: Omega-GS}\begin{itemize}
\item [(a)] Let $\Phi\subseteq \bk\frakS(X)$ be a family of OPIs. Let $Z$ be a set and $\leq$ a monomial
order on $\frakS(Z)$. We call $\Phi$ $\Omega$-GS on $\bk\frakS(Z)$ with respect to $\leq$ if $S_{\Phi}(\frakS(Z))$ is an $\Omega$-GS basis in $\bk\frakS(Z)$ with respect to $\leq$.
\item [(b)]Let $\Phi\subseteq \bk\frakM(X)$ be a family of OPIs. Let $Z$ be a set and $\leq$ a monomial
order on $\frakM(Z)$. We call $\Phi$ $\Omega$-GS on $\bk\frakM(Z)$ with respect to $\leq$ if $S_{\Phi}(\frakM(Z))$ is an $\Omega$-GS basis in $\bk\frakM(Z)$ with respect to $\leq$.
\end{itemize}
\end{defn}
\subsection{Multi-operated GS basis for free $\Phi$-algebras over algebras}\
In this subsection, we consider multi-operated GS bases for free $\Phi$-algebras over algebras and generalise the main result of \cite{QQWZ21} to the multi-operated case.
We will use the following results without proof, as they are the counterparts in the multi-operated setup of \cite[Propositions 4.8]{QQWZ21}.
\begin{prop}\label{free phi algebra}
\begin{itemize}
\item [(a)] Let $\Phi\subset\bk\frakS(X)$ and $A=\bk \cals (Z) \slash I_A$ an algebra. Then
$$ \calf_{\Alg}^{\Phi\zhx\Alg}(A):=\bk\frakS(Z)\slash\left\langle S_{\Phi}(\frakS(Z))\cup I_A\right\rangle_\mtOpAlg$$
is the free $\Phi$-algebra generated by $A$.
\item [(b)] Let $\Phi\subset\bk\frakM(X)$ and $A=\bk \calm (Z) \slash I_A$ a unital algebra. Then
$$\calf_{\uAlg}^{\Phi\zhx\uAlg}(A):=\bk\frakM(Z)\slash\left\langle S_{\Phi}(\frakM(Z))\cup I_A\right\rangle_\mtuOpAlg$$
is the free unital $\Phi$-algebra over $A$.
\end{itemize}
\end{prop}
As in \cite{QQWZ21}, we consider the following question:
\begin{Ques} Let $A$ be a (unital) algebra together with a Gr\"obner-Shirshov basis $G$. Assume that a set $\Phi$ of operated polynomial identities is $\Omega$-GS in the sense of Definition~\ref{def: Omega-GS}.
Considering the free (unital) $\Phi$-algebra $B$ over $A$, when will
the union ``$\Phi\cup G$'' be an $\Omega$-GS basis for $B$?
\end{Ques}
It is surprising that the answer to the corresponding question given in
\cite{QQWZ21} can be generalised to the multi-operated case without much modification.
\begin{thm} \label{Thm: GS basis for free unital Phi algebra over unital alg}
Let $X$ be a set and $\Phi\subseteq \bk\frakM (X)$ a system of OPIs. Let $A=\bk \calm (Z) \slash I_A$ be a unital algebra with generating set $Z$.
Assume that $\Phi$ is $\Omega$-GS on $Z$ with respect to a monomial order $\leq$ on $\frakM (Z)$ and that $G$ is a GS basis of $I_{A}$ in $\bk \calm (Z)$ with respect to the restriction of $\leq$ to $ \calm(Z)$.
Suppose that the leading monomial of any OPI $\phi(x_1, \dots, x_n)\in \Phi$ has no subword in $\calm(X)\backslash X$, and that $\phi(u_1, \dots, u_n) $ vanishes or its leading monomial is $\overline{\phi}(u_1, \dots, u_n) $ for all $u_1, \dots, u_n\in \frakM (Z)$. Then $S_{\Phi}(\frakM (Z))\cup G$ is an $\Omega$-GS basis of $\left\langle S_{\Phi}(\frakM (Z))\cup I_A\right\rangle_{\mtuOpAlg}$ in $\bk\frakM (Z)$ with respect to $\leq$.
\end{thm}
\begin{proof}
The proof of \cite[Theorem~5.9]{QQWZ21} carries over verbatim to the multi-operated case, because it reveals that the key point is that the leading monomial of any OPI $\phi(x_1, \dots, x_n)\in \Phi$ has no subword in $\calm(X)\backslash X$.
For details, see the proof of \cite[Theorem~5.9]{QQWZ21}.
\end{proof}
There exists a nonunital version of the above result, which is also a multi-operated version of \cite[Theorem 2.15]{QQZ21}.
\begin{thm}\label{Thm: GS basis for free nonunital Phi algebra over nonunital alg}
Let $X$ be a set and $\Phi\subseteq \bk\frakS(X)$ a system of OPIs. Let $A=\bk \cals (Z) \slash I_A$ be an algebra with generating set $Z$.
Assume that $\Phi$ is $\Omega$-GS on $Z$ with respect to a monomial order $\leq$ on $\frakS(Z)$ and that $G$ is a GS basis of $I_{A}$ in $\bk \cals(Z)$ with respect to the restriction of $\leq$ to $ \cals(Z)$.
Suppose that the leading monomial of any OPI $\phi(x_1, \dots, x_n)\in \Phi$ has no subword in $\cals(X)\backslash X$, and that for all $u_1, \dots, u_n\in \frakS (Z)$, $\phi(u_1, \dots, u_n) $ vanishes or its leading monomial is $\overline{\phi}(u_1, \dots, u_n) $. Then $S_{\Phi}(\frakS(Z))\cup G$ is an $\Omega$-GS basis of $\left\langle S_{\Phi}(\frakS(Z))\cup I_A\right\rangle_\mtOpAlg$ in $\bk\frakS(Z)$ with respect to $\leq$.
\end{thm}
\section{Free differential Rota-Baxter algebras over algebras }
In this section, we apply Theorems~\ref{Thm: GS basis for free unital Phi algebra over unital alg} and \ref{Thm: GS basis for free nonunital Phi algebra over nonunital alg} to differential Rota-Baxter algebras.
From now on, let $\Omega=\{D, P\}$ and fix a two-element set $X=\{x, y\}$; variables in OPIs take values in $X$.
When talking about algebras or reductions of OPIs, we fix a set $Z$ and understand that variables in OPIs are replaced by elements of
$\frakS(Z)$ or $\frakM(Z)$.
We first recall the definition of differential Rota-Baxter algebras. We use $D(~)$ and $P(~)$ instead of the linear operators $\lfloor ~ \rfloor_D$ and $\lfloor ~ \rfloor_P$.
\begin{defn}[{\cite[Definition~2.1]{GGR15}}]\label{differentila rota-baxter algebras}
Let $\lambda \in \bk$ be fixed.
\begin{itemize}
\item[(a)] A (unital) differential $\bk$-algebra of weight $\lambda$ (also called a (unital) $\lambda$-differential $\bk$-algebra) is a (unital) associative $\bk$-algebra $R$ together with a linear operator $D: R \rightarrow R$ such that
$$
\quad D(u v)=D(u) v+u D(v)+\lambda D(u) D(v) \quad \text { for all } u, v \in R;
$$
when $R$ has a unit $1$, it is required that $D(1)=0$.
\item[(b)] A Rota-Baxter $\bk$-algebra of weight $\lambda$ is an associative $\bk$-algebra $R$ together with a linear operator $P: R \rightarrow R$ such that
$$
P(u) P(v)=P(u P(v))+P(P(u) v)+\lambda P(u v) \quad \text { for all } u, v \in R.
$$
\item[(c)] A (unital) differential Rota-Baxter $\bk$-algebra of weight $\lambda$ (also called a (unital) $\lambda$-differential Rota-Baxter $\bk$-algebra) is a (unital) differential $\bk$-algebra $(R, D)$ of weight $\lambda$ together with a Rota-Baxter operator $P$ of weight $\lambda$ such that
$$
D \circ P=\text { id }.
$$
\end{itemize}
\end{defn}
When we consider free differential Rota-Baxter algebras over algebras, it is disappointing that the traditional order (see \cite{BokutChenQiu}) does not meet the conditions of Theorems~\ref{Thm: GS basis for free unital Phi algebra over unital alg} and \ref{Thm: GS basis for free nonunital Phi algebra over nonunital alg}. This is the motivation for the new monomial orders $\leq_{\operatorname{PD}}$ and $\leq_{\operatorname{uPD}}$ introduced in Section~\ref{section: order}.
\medskip
\subsection{ Case of nonunital algebras with $\lambda\neq 0$} \label{Subsection: Case of lambda non zero}\
Assume in this subsection that $\lambda\neq 0$. Denote
\begin{itemize}
\item[(1)] $\phi_1(x,y) = P(x) P(y)-P(x P(y))-P(P(x) y)-\lambda P(x y)$,
\item[(2)] $\phi_2(x,y) = D(x)D(y) + \lambda^{-1}D(x)y + \lambda^{-1}xD(y) - \lambda^{-1}D(xy)$,
\item[(3)] $\phi_3(x) = D(P(x))-x$.
\end{itemize}
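Note that $\phi_2(x,y)=\lambda^{-1}\bigl(\lambda D(x)D(y)+D(x)y+xD(y)-D(xy)\bigr)$, which is just the defining identity of a differential algebra of weight $\lambda$ rescaled by $\lambda^{-1}$ and rewritten with $D(x)D(y)$ as the first term.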
We first consider nonunital free differential Rota-Baxter algebras on algebras.
\begin{prop}For any $u, v \in \frakS(Z)$, the leading monomials of $\phi_1(u, v)$, $\phi_2(u, v)$ and $\phi_3(u)$ under $\leq_{\operatorname{PD}}$ are respectively
$ P(u)P(v), D(u)D(v)$ and $D(P(u))$.
\end{prop}
\begin{proof}
Let $u_1, \cdots, u_n$ and $v_1, \cdots, v_m$ be all the elements of $Z$ occurring in $u$ and in $v$, respectively, from left to right.
For $\phi_1(u,v) = P(u) P(v)-P(u P(v))-P(P(u) v)-\lambda P(u v)$, the monomial $P(uv)$ has smaller $P$-degree than the other three terms, while those three terms share the same $\deg_D$, $\deg_P$, $\deg_Z$ and $\deg_{GD}$. One then sees that $$\deg_{GP}(P(u)P(v))-\deg_{GP}(P(uP(v)))=m>0,$$ $$\deg_{GP}(P(u)P(v))-\deg_{GP}(P(P(u)v))=n>0,$$ so the leading monomial of $\phi_1(u, v)$ is $P(u)P(v)$.
The statements about $\phi_2(u, v)$ and $\phi_3(u)$ are obvious by comparing $\deg_D$.
\end{proof}
Now let $$\dRB':=\left\lbrace\phi_1(x, y) , \phi_2(x,y) , \phi_3(x) \right\rbrace. $$
However, $\dRB'$ is not $\Omega$-GS in $\bk\frakS(Z)$ with respect to $\leq_{\operatorname{PD}}$, as the following examples show.
\begin{exam}\label{phi_4}
For $u, v\in \frakS(Z)$, let
$$\begin{array}{rcl} f&=& \phi_2(P(u),v)=D(P(u))D(v) + \lambda^{-1}D(P(u))v + \lambda^{-1}P(u)D(v) - \lambda^{-1}D(P(u)v),\\\
g &=& \phi_3(u)=D(P(u))-u,\\
q&=&\star D(v),\\
p&=&D(P(u))D(v)=\bar{f}=\left.q\right|_{\bar{g}}. \end{array}$$
Then $$
(f,g)_p^q=f-\left.q\right|_{g}\equiv \lambda^{-1}( P(u)D(v)-D(P(u)v)+uv+\lambda uD(v)).
$$
Let $$\phi_4(x,y)=P(x)D(y)-D(P(x)y)+xy+\lambda xD(y).$$ It is clear that the leading monomial of $\phi_4(u,v)$ is $P(u)D(v)$ with respect to $\leq_{\operatorname{PD}} $ which cannot be reduced further.
\end{exam}
\begin{exam}\label{phi_5}
For $u, v\in \frakS(Z)$, let
$$\begin{array}{rcl} f&=& \phi_2(u,P(v))=D(u)D(P(v)) + \lambda^{-1}D(u)P(v) + \lambda^{-1}uD(P(v)) - \lambda^{-1}D(uP(v)),\\
g &=& \phi_3(v)=D(P(v))-v,\\
q&=&D(u)\star, \\
p&=&D(u)D(P(v))=\bar{f}=\left.q\right|_{\bar{g}}.\end{array}$$
Then
$$
(f,g)_p^q=f-\left.q\right|_{g}\equiv \lambda^{-1}(D(u)P(v)-D(uP(v))+uv+\lambda D(u)v).
$$
Let $$\phi_5(x,y)=D(x)P(y)-D(xP(y))+xy+\lambda D(x)y.$$
It is clear that the leading monomial of $ \phi_5(u,v) $ is $D(u)P(v)$ with respect to $\leq_{\operatorname{PD}} $ which cannot be reduced further.
\end{exam}
Now denote $\dRB$ to be the set of the following OPIs:
\begin{itemize}
\item [(1)] $\phi_1(x,y) =P(x)P(y) - P(xP(y)) - P(P(x)y) - \lambda P(xy)$,
\item [(2)] $ \phi_2(x,y)=D(x)D(y) + \lambda^{-1}D(x)y + \lambda^{-1}xD(y) - \lambda^{-1}D(xy) $,
\item [(3)] $\phi_3(x)=D(P(x)) - x$,
\item [(4)] $\phi_4(x,y)=P(x)D(y)-D(P(x)y)+xy+\lambda xD(y)$,
\item [(5)] $\phi_5(x,y)=D(x)P(y)-D(xP(y))+xy+\lambda D(x)y$.
\end{itemize}
It is obvious that
$\left\langle S_{\dRB'}(Z)\right\rangle_\mtOpAlg=\left\langle S_{\dRB}(Z)\right\rangle_\mtOpAlg$ for each set $Z$.
Next we will show that $\dRB$ is $\Omega$-GS with respect to $\leq_{\operatorname{PD}}$. Before that, we need the following lemma to simplify our proof.
\begin{lem}\label{including}
Let $\phi(x_1,\dots,x_n)$ and $\psi(y_1,\dots,y_m)$ be two OPIs. Let $Z$ be a set. Suppose that, for any $u_1, \dots, u_n, v_1, \dots, v_m\in \frakS(Z)$, the leading monomial of $\phi(u_1,\dots,u_n)$ is $\bar\phi(u_1,\dots,u_n)$ and leading monomial of $\psi(v_1,\dots,v_m)$ is $\bar\psi(v_1,\dots,v_m)$.
Now write $f=\phi(u_1,\dots,u_n)$ and $g=\psi(v_1,\dots,v_m)$ for fixed $u_1, \dots, u_n$, $v_1, \dots, v_m\in \frakS(Z)$.
If there exist $i~(1 \leq i \leq n)$ and $r \in \frakS^{\star}(Z)$ such that
$u_i = \left.r\right|_{\bar{g}}$, then the inclusion composition $(f, g)_p^q = f - q|_g$, with $p=\bar f$ and $q = \bar{\phi}(u_1, \dots, u_{i-1}, r , u_{i+1}, \dots, u_n)$, is trivial modulo $(S_{\{\phi,\psi\}}(Z),p)$.
\end{lem}
\begin{proof}
The assertion follows from
$$
\begin{aligned}
(f,g)_{p}^q
&=f-q|_{g}\\
&=(\phi-\bar\phi)(u_1, \dots, u_{i-1}, r|_{\bar g}, u_{i+1}, \dots,u_n)-\bar\phi(u_1,\dots, u_{i-1}, r|_{g-\bar g}, u_{i+1}, \dots,u_n) \\
&=(\phi-\bar\phi)(u_1, \dots, u_{i-1}, r|_{g}, u_{i+1}, \dots,u_n)-\phi(u_1,\dots, u_{i-1}, r|_{g-\bar g}, u_{i+1}, \dots,u_n).
\end{aligned}
$$
Indeed, the first term in the last expression is a linear combination of elements $q_\mu|_{g}$ with $q_\mu=\mu(u_1,\dots,u_{i-1},r,u_{i+1},\dots,u_n)\in\frakSstar(Z)$, where $\mu$ runs over the non-leading monomials of $\phi$, so that $q_\mu|_{\bar g}=\mu(u_1,\dots,u_n)<\bar{f}=p$; the second term is a linear combination of elements $\phi(u_1,\dots,r|_{m},\dots,u_n)\in S_{\{\phi,\psi\}}(Z)$, where $m$ runs over the monomials of $g-\bar g$, whose leading monomials $\bar\phi(u_1,\dots,r|_{m},\dots,u_n)$ are smaller than $p$.
\end{proof}
Now we can prove $\dRB$ is $\Omega$-GS.
\begin{thm}\label{S2}
$\dRB$ is $\Omega$-GS in $\bk\frakS(Z)$ with respect to $\leq_{\operatorname{PD}} $.
\end{thm}
\begin{proof}
We write $i \wedge j$ for the composition of the OPIs $\phi_i$ and $\phi_j$, meaning that $\phi_i$ lies on the left and $\phi_j$ on the right for an intersection composition, or that $\phi_j$ is included in $\phi_i$ for an inclusion composition. The ambiguities of all possible compositions in $\dRB$ are listed below: for arbitrary $u, v, w \in \frakS(Z)$ and $q \in \frakS^{\star}(Z)$,
\begin{itemize}
\item [$1 \wedge 1$] \quad $\underline{P(u)P(v)P(w)}$, \quad $P\left(\left.q\right|_{P(u)P(v)} \right)P\left(w\right)$, \quad $P\left(u\right) P\left(\left.q\right|_{P(v)P(w)}\right)$
\item [$1 \wedge 2$] \quad $P\left(\left.q\right|_{D(u)D(v)} \right)P\left(w\right)$, \quad $P\left(u\right) P\left(\left.q\right|_{D(v)D(w)}\right)$,
\item [$1 \wedge 3$] \quad $P\left(\left.q\right|_{D(P(u))} \right)P\left(v\right)$, \quad $P\left(u\right) P\left(\left.q\right|_{D(P(v))}\right)$,
\item [$1 \wedge 4$] \quad $\underline{P(u)P(v)D(w)}$,\quad $P\left(\left.q\right|_{P(u)D(v)} \right)P\left(w\right)$, \quad $P\left(u\right) P\left(\left.q\right|_{P(v)D(w)}\right)$,
\item [$1 \wedge 5$] \quad $P\left(\left.q\right|_{D(u)P(v)} \right)P\left(w\right)$, \quad $P\left(u\right) P\left(\left.q\right|_{D(v)P(w)}\right)$,
\item [$2 \wedge 1$] \quad $D\left(\left.q\right|_{P(u)P(v)}\right)D\left(w\right)$,\quad $D\left(u\right)D\left(\left.q\right|_{P(v)P(w)}\right)$,
\item [$2 \wedge 2$] \quad$\underline{D(u)D(v)D(w)}$ , \quad $D\left(\left.q\right|_{D(u)D(v)}\right)D\left(w\right)$,\quad $D\left(u\right)D\left(\left.q\right|_{D(v)D(w)}\right)$,
\item [$2 \wedge 3$] \quad $\underline{D(P(u))D(v)}$, \quad $\underline{D(u)D(P(v))}$,\quad $D\left(\left.q\right|_{D(P(u))}\right)D\left(v\right)$,\quad $D\left(u\right)D\left(\left.q\right|_{D(P(v))}\right)$,
\item [$2 \wedge 4$] \quad $D\left(\left.q\right|_{P(u)D(v)}\right)D\left(w\right)$,\quad $D\left(u\right)D\left(\left.q\right|_{P(v)D(w)}\right)$,
\item [$2 \wedge 5$] \quad $\underline{D(u)D(v)P(w)}$,\quad $D\left(\left.q\right|_{D(u)P(v)}\right)D\left(w\right)$,\quad $D\left(u\right)D\left(\left.q\right|_{D(v)P(w)}\right)$,
\item [$3 \wedge 1$] \quad $D\left(P\left(\left.q\right|_{P(u)P(v)}\right)\right)$,
\item [$3 \wedge 2$] \quad $D\left(P\left(\left.q\right|_{D(u)D(v)}\right)\right)$,
\item [$3 \wedge 3$] \quad $D\left(P\left(\left.q\right|_{D(P(u))}\right)\right)$,
\item [$3 \wedge 4$] \quad $D\left(P\left(\left.q\right|_{P(u)D(v)}\right)\right)$,
\item [$3 \wedge 5$] \quad $D\left(P\left(\left.q\right|_{D(u)P(v)}\right)\right)$,
\item [$4 \wedge 1$] \quad $P\left(\left.q\right|_{P(u)P(v)}\right)D\left(w\right)$,\quad $P\left(u\right)D\left(\left.q\right|_{P(v)P(w)}\right)$,
\item [$4 \wedge 2$] \quad $\underline{P(u)D(v)D(w)}$, \quad $P\left(\left.q\right|_{D(u)D(v)}\right)D\left(w\right)$,\quad $P\left(u\right)D\left(\left.q\right|_{D(v)D(w)}\right)$,
\item [$4 \wedge 3$] \quad $\underline{P(u)D(P(v))}$, \quad $P\left(\left.q\right|_{D(P(u))}\right)D\left(v\right)$,\quad $P\left(u\right)D\left(\left.q\right|_{D(P(v))}\right)$,
\item [$4 \wedge 4$] \quad $P\left(\left.q\right|_{P(u)D(v)}\right)D\left(w\right)$,\quad $P\left(u\right)D\left(\left.q\right|_{P(v)D(w)}\right)$,
\item [$4 \wedge 5$] \quad $\underline{P(u)D(v)P(w)}$, \quad $P\left(\left.q\right|_{D(u)P(v)}\right)D\left(w\right)$,\quad $P\left(u\right)D\left(\left.q\right|_{D(v)P(w)}\right)$,
\item [$5 \wedge 1$] \quad $\underline{D(u)P(v)P(w)}$, \quad $D\left(\left.q\right|_{P(u)P(v)}\right)P\left(w\right)$,\quad $D\left(u\right)P\left(\left.q\right|_{P(v)P(w)}\right)$,
\item [$5 \wedge 2$] \quad $D\left(\left.q\right|_{D(u)D(v)}\right)P\left(w\right)$,\quad $D\left(u\right)P\left(\left.q\right|_{D(v)D(w)}\right)$,
\item [$5 \wedge 3$] \quad $\underline{D(P(u))P(v)}$,\quad $D\left(\left.q\right|_{D(P(u))}\right)P\left(v\right)$,\quad $D\left(u\right)P\left(\left.q\right|_{D(P(v))}\right)$,
\item [$5 \wedge 4$] \quad $\underline{D(u)P(v)D(w)}$,\quad $D\left(\left.q\right|_{P(u)D(v)}\right)P\left(w\right)$,\quad $D\left(u\right)P\left(\left.q\right|_{P(v)D(w)}\right)$,
\item [$5 \wedge 5$] \quad $D\left(\left.q\right|_{D(u)P(v)}\right)P\left(w\right)$,\quad $D\left(u\right)P\left(\left.q\right|_{D(v)P(w)}\right)$.
\end{itemize}
Notice that all compositions above but the underlined ones can be dealt with by Lemma~\ref{including}.
It remains to consider the underlined compositions. We only give the complete proof for the case $5 \wedge 1$, the other cases being similar.
For the case $5 \wedge 1$, write
$f = \phi_5(u,v)$, $g= \phi_1(v,w)$ and $p= D(u)P(v)P(w)$ . So we have
$$
\begin{aligned}
(f,g)_p^{P(w),D(u)} &=-D(uP(v))P(w)+uvP(w)+ \lambda D(u)vP(w)\\
& \quad +D(u)P(vP(w))+D(u)P(P(v)w)+\lambda D(u)P(vw) \\
& \equiv -D(uP(v)P(w))+uP(v)w +\lambda D(uP(v))w+uvP(w)+ \lambda D(u)vP(w) \\
& \quad +D(uP(P(v)w))-uP(v)w- \lambda D(u)P(v)w\\
& \quad +D(uP(vP(w)))-uvP(w)-\lambda D(u)vP(w)\\
& \quad +\lambda D(uP(vw))-\lambda uvw-\lambda^2 D(u)vw\\
& \equiv -D(uP(v)P(w)) +D(uP(vP(w)))+D(uP(P(v)w))+\lambda D(uP(vw))\\
& \quad - \lambda D(u)P(v)w+\lambda D(uP(v))w -\lambda uvw-\lambda^2 D(u)vw \\
& = -D(u\phi_1(v,w))-\lambda \phi_5(u,v)w\\
& \equiv 0 \bmod \left(S_{\dRB}(Z), p\right).
\end{aligned}
$$
We are done.
\end{proof}
\begin{thm}
Let $Z$ be a set, $A=\bk \cals(Z)\slash I_A$ a nonunital $\bk$-algebra. Then we have:
$$\calf^{\dRB\zhx\Alg}_{\Alg}(A)=\bk\frakS(Z)\slash\left\langle S_{\dRB}(Z)\cup I_A\right\rangle_\mtOpAlg.$$
Moreover, assume $I_A$ has a GS basis $G$ with respect to the degree-lexicographical order $\leq_{\rm {dlex}}$. Then $S_{\dRB}(Z)\cup G$ is an operated GS basis of $\left\langle S_{\dRB}(Z)\cup I_A\right\rangle_\mtOpAlg$ in $\bk\frakS(Z) $ with respect to $\leq_{\operatorname{PD}}$.
\end{thm}
\begin{proof}
Since the leading monomial in $\dRB$ has no subword in $\cals(X)\backslash X$, the result follows immediately from Theorem~\ref{S2} and Theorem~\ref{Thm: GS basis for free nonunital Phi algebra over nonunital alg}.
\end{proof}
As a consequence, we obtain a linear basis.
\begin{thm}
Let $Z$ be a set, $A=\bk \cals(Z)\slash I_A$ a nonunital $\bk$-algebra with a GS basis $G$ with respect to $\leq_{\rm{dlex}}$. Then the set
$\Irr(S_{\dRB}(Z)\cup G)$, which is by definition the complement of
$$\left\lbrace q|_{\bar{s}},\ q|_{P(u)P(v)},\ q|_{D(u)D(v)},\ q|_{D(P(u))},\ q|_{P(u)D(v)},\ q|_{D(u)P(v)} ~|~ s\in G,\ q\in\frakSstar(Z),\ u,v\in\frakS(Z)\right\rbrace$$ in $\frakS(Z)$, is a linear basis of the free nonunital $\lambda$-differential Rota-Baxter algebra $\calf^{\dRB\zhx\Alg}_{\Alg}(A)$ over $A$.
\end{thm}
\begin{proof}
It follows directly from Theorem~\ref{Thm: nonunital CD}.
\end{proof}
\begin{remark}Since the monomial order $\leq_{\operatorname{PD}}$ is different from that used in \cite{BokutChenQiu}, our operated GS basis and linear basis are different from theirs.
\end{remark}
\subsection{ Case of nonunital algebras with $\lambda=0$}\label{Subsection: case zero}\
Now we consider
nonunital free differential Rota-Baxter algebras on algebras with $\lambda=0$.
This case can be studied similarly to the case $\lambda\neq 0$, so we omit the details in this subsection.
Denote $\phi_1(x,y)$ with $\lambda=0$ by $\phi_1^0(x,y)$ . Let
$$\phi_2^0(x,y)=D(x)y+xD(y)-D(xy).$$
We also write $\phi_3^0(x)=\phi_3(x)$ for convenience.
\begin{prop}For any $u,v \in \frakS(Z)$, the leading monomials of $\phi_1^0(u, v)$, $\phi_2^0(u, v)$ and $\phi_3^0(u)$ with respect to $\leq_{\operatorname{PD}}$ are
$ P(u)P(v), D(u)v$ and $D(P(u))$ respectively.
\end{prop}
Let $$\OdRB' :=\left\lbrace ~ \phi_1^0(x,y) , \phi_2^0(x,y) , \phi_3^0(x) \right\rbrace. $$
By the following example, one can see that $\OdRB'$ is not $\Omega$-GS in $\bk\frakS(Z)$ with respect to $\leq_{\operatorname{PD}}$.
\begin{exam}
For $u, v\in \frakS(Z)$, let
$$\begin{array}{rcl} f&=&\phi_2^0(P(u),v)=D(P(u))v+P(u)D(v)-D(P(u)v),\\\
g &=& \phi_3^0(u)=D(P(u))-u,\\
q&=&\star v,\\
p&=&D(P(u))v=\bar{f}=\left.q\right|_{\bar{g}}. \end{array}$$
Then $$
(f,g)_p^q=f-\left.q\right|_{g}\equiv P(u)D(v)-D(P(u)v)+uv.
$$
Let $$\phi_4^0(x,y)=P(x)D(y)-D(P(x)y)+xy.$$ It is clear that the leading monomial of $\phi_4^0(u,v)$ with $u,v \in \frakS(Z)$ is $P(u)D(v)$ with respect to $\leq_{\operatorname{PD}} $ which cannot be reduced further.
\end{exam}
Now denote $\OdRB$ to be the set of the following OPIs:
\begin{itemize}
\item [(1)] $\phi_1^0(x,y) =P(x)P(y) - P(xP(y)) - P(P(x)y)$,
\item [(2)] $ \phi_2^0(x,y)=D(x)y + xD(y) - D(xy) $,
\item [(3)] $\phi_3^0(x)=D(P(x)) -x$,
\item [(4)] $\phi_4^0(x,y)=P(x)D(y)-D(P(x)y)+xy$.
\end{itemize}
It is obvious that
$\left\langle S_{\OdRB'}(Z)\right\rangle_\mtOpAlg=\left\langle S_{\OdRB}(Z)\right\rangle_\mtOpAlg$ for an arbitrary set $Z$.
Similar to the case $\lambda\neq0$, it can be
proved that $\OdRB$ is $\Omega$-GS with respect to $\leq_{\operatorname{PD}}$.
\begin{remark}
Note that $\phi_4^0(x,y)$ is just $\phi_4(x,y)$ with $\lambda=0$, and for $u,v \in \frakS(Z)$
$$
\phi_2^0(u,P(v)) =D(u)P(v) + uD(P(v)) - D(uP(v)) \equiv D(u)P(v) + uv - D(uP(v)),
$$
which is exactly $\phi_5(u,v)$ with $\lambda=0$. So $\phi_5(x,y)$ ($\lambda=0$) does not appear in $\OdRB$.
\end{remark}
\begin{thm}\label{S1}
$\OdRB$ is $\Omega$-GS in $\bk\frakS(Z)$ with respect to $\leq_{\operatorname{PD}} $.
\end{thm}
\begin{proof} As in the proof of Theorem~\ref{S2},
we write $i \wedge j$ for the composition of the OPIs $\phi_i$ and $\phi_j$. There are two kinds of ambiguities among all possible compositions in $\OdRB$. Since $\phi_1^0(x,y)$,
$\phi_3^0(x)$ and
$\phi_4^0(x,y)$ have the same leading monomials as in the case $\lambda\neq 0$, the corresponding ambiguities $i\wedge j$ with $ i, j\in \{1, 3, 4\}$ are the same as in the proof of Theorem~\ref{S2}.
Since $\phi^0_2(x,y)$ has a different leading monomial, the ambiguities of the cases $i\wedge j$ with $i=2$ or $j=2$ are the following: for arbitrary $u,v,w \in \frakS(Z), q \in \frakS^{\star}(Z) $ and $s \in \frakS(Z) \text{ or } \emptyset$,
\begin{itemize}
\item [$1 \wedge 2$] \quad $P\left(\left.q\right|_{D(u)v}\right) P\left(w\right)$, \quad $P\left(u\right) P\left(\left.q\right|_{D(v)w}\right)$;
\item [$2 \wedge 1$] \quad $D\left(\left.q\right|_{P(u)P(v)}\right)w$,\quad $D\left(u\right)\left.q\right|_{P(v)P(w)}$;
\item [$2 \wedge 2$] \quad $\underline{D(u)sD(v)w}$, \quad $D\left(\left.q\right|_{D(u)v}\right)w$,\quad $D\left(u\right)\left.q\right|_{D(v)w}$;
\item [$2 \wedge 3$] \quad $\underline{D\left(P(u)\right)v}$, \quad$D\left(\left.q\right|_{D(P(u))}\right)v$,\quad $D\left(u\right)\left.q\right|_{D(P(v))}$;
\item [$2 \wedge 4$] \quad $\underline{D(u)sP(v)D(w)}$, \quad $D\left(\left.q\right|_{P(u)D(v)}\right)w$,\quad $D\left(u\right)\left.q\right|_{P(v)D(w)}$;
\item [$3 \wedge 2$] \quad $D\left(P\left(\left.q\right|_{D(u)v}\right)\right)$;
\item [$4 \wedge 2$] \quad $\underline{P(u)D(v)w}$, \quad $P\left(\left.q\right|_{D(u)v}\right)D\left(w\right)$,\quad $P\left(u\right)D\left(\left.q\right|_{D(v)w}\right)$.
\end{itemize}
Almost all the cases can be treated as in the proof of Theorem~\ref{S2}, except for a slight difference in the case $2\wedge 2$. In fact,
let
$f = \phi_2^0(u,sD(v))$, $g= \phi_2^0(v,w)$ and $p=D(u)sD(v)w$. So we have
$$
\begin{aligned}
(f,g)_p^{w,D(u)s} &=uD(sD(v))w-D(usD(v))w-D(u)svD(w)+D(u)sD(vw)\\
& \equiv -usD(v)D(w)+uD(sD(v)w)+usD(v)D(w)-D(usD(v)w)\\
& \quad +uD(svD(w))-D(usvD(w))-uD(sD(vw))+D(usD(vw))\\
& = uD(s\phi_2^0(v,w))-D(us\phi_2^0(v,w))\\
& \equiv 0 \bmod \left(S_{\OdRB}(Z), p\right).
\end{aligned}
$$
We are done.
\end{proof}
\begin{thm}
Let $Z$ be a set and $A=\bk \cals(Z)\slash I_A$ a nonunital $\bk$-algebra. Then we have:
$$\calf^{\OdRB\zhx\Alg}_{\Alg}(A)=\bk\frakS(Z)\slash\left\langle S_{\OdRB}(Z)\cup I_A\right\rangle_\mtOpAlg.$$
Moreover, assume $I_A$ has a GS basis $G$ with respect to the degree-lexicographical order $\leq_{\rm {dlex}}$. Then $S_{\OdRB}(Z)\cup G$ is an operated GS basis of $\left\langle S_{\OdRB}(Z)\cup I_A\right\rangle_\mtOpAlg$ in $\bk\frakS(Z) $ with respect to $\leq_{\operatorname{PD}}$.
\end{thm}
As a consequence, we obtain a linear basis.
\begin{thm}
Let $Z$ be a set and $A=\bk \cals(Z)\slash I_A$ a nonunital $\bk$-algebra with a GS basis $G$ with respect to $\leq_{\rm{dlex}}$. Then the set
$\Irr(S_{\OdRB}(Z)\cup G)$, which is by definition the complement of
$$\left\lbrace q|_{\bar{s}},\ q|_{P(u)P(v)},\ q|_{D(u)v},\ q|_{D(P(u))},\ q|_{P(u)D(v)} ~|~ s\in G,\ q\in\frakSstar(Z),\ u,v\in\frakS(Z)\right\rbrace$$ in $\frakS(Z)$, is a linear basis of the free nonunital $0$-differential Rota-Baxter algebra $\calf^{\OdRB\zhx\Alg}_{\Alg}(A)$ over $A$.
\end{thm}
\subsection{ Case of unital algebras}\
Now we consider unital differential Rota-Baxter algebras. Since the proofs are similar to those in the previous subsections, we omit most of them.
The study is again divided into the cases $\lambda \neq 0$ and $\lambda=0$.
\medskip
When $\lambda \neq 0$, since unital differential Rota-Baxter algebras satisfy the condition $D(1)=0$, we put $\udRB$ to be the union of $\dRB$ with $\{D(1)\}$, where, by abuse of notation, the variables $x,y$ in $\dRB$ now take their values in $ \frakM(Z)$ instead of $\frakS(Z)$.
\begin{remark}
We have:
$$
\left\{\begin{array}{lll}
\phi_2(u,v)\equiv 0 & \text{ when } u=1 \text{ or } v=1; \\
\phi_4(u,v)\equiv -D(P(u))+u=-\phi_3(u) & \text{ when } v=1; \\
\phi_5(u,v)\equiv -D(P(v))+v=-\phi_3(v) & \text{ when } u=1.
\end{array}\right.
$$
So adding the unit $1$ does not produce new OPIs.
Moreover,
it is clear that except the above cases, the leading monomials of OPIs in $\dRB$ are the same with respect to $\leq_{\operatorname{PD}}$ and $\leq_{\operatorname{uPD}}$ by Proposition~\ref{uDl}.
\end{remark}
With similar proofs as in Subsection~\ref{Subsection: Case of lambda non zero}, we can prove the following results.
\begin{thm}\label{uS1}
$\udRB$ is $\Omega$-GS in $\bk\frakM(Z)$ with respect to $\leq_{\operatorname{uPD}} $.
\end{thm}
\begin{thm}
Let $Z$ be a set and $A=\bk \calm(Z)\slash I_A$ a unital $\bk$-algebra. Then we have:
$$\calf^{\udRB\zhx\uAlg}_{\uAlg}(A)=\bk\frakM(Z)\slash\left\langle S_{\udRB}(Z)\cup I_A\right\rangle_\mtuOpAlg.$$
Moreover, assume $I_A$ has a GS basis $G$ with respect to the degree-lexicographical order $\leq_{\rm {dlex}}$. Then $S_{\udRB}(Z)\cup G$ is an operated GS basis of $\left\langle S_{\udRB}(Z)\cup I_A\right\rangle_\mtuOpAlg$ in $\bk\frakM(Z) $ with respect to $\leq_{\operatorname{uPD}}$.
\end{thm}
\begin{thm}
Let $Z$ be a set and $A=\bk \calm(Z)\slash I_A$ a unital $\bk$-algebra with a GS basis $G$ with respect to $\leq_{\rm{dlex}}$. Then the set
$\Irr(S_{\udRB}(Z)\cup G)$, which is by definition the complement of
$$\left\lbrace q|_{\bar{s}},\ q|_{P(u)P(v)},\ q|_{D(u)D(v)},\ q|_{D(P(u))},\ q|_{P(u)D(v)},\ q|_{D(u)P(v)},\ q|_{D(1)} ~|~ s\in G,\ q\in\frakMstar(Z),\ u,v\in\frakM(Z)\right\rbrace$$ in $\frakM(Z)$, is a linear basis of the free unital $\lambda$-differential Rota-Baxter algebra $\calf^{\udRB\zhx\uAlg}_{\uAlg}(A)$ over $A$.
\end{thm}
\medskip
When $\lambda= 0$, denote $\OudRB:=\OdRB$ (again by abuse of notation, the variables $x, y$ in $\OdRB$ now take their values in $ \frakM(Z)$ instead of $\frakS(Z)$).
\begin{remark}
In $\OudRB$, we have
$$
\phi_2^0(1,1)=D(1)+D(1)-D(1)=D(1),
$$
so it is not necessary to add $D(1)$ to $\OudRB$.
Note that $\phi_4^0(u,1) \equiv -D(P(u))+u=-\phi_3^0(u)$, so adding the unit $1$ does not induce any new OPI.
\end{remark}
By using similar proofs in Subsection~\ref{Subsection: case zero}, one can show the following results.
\begin{thm}\label{uS2}
$\OudRB$ is $\Omega$-GS in $\bk\frakM(Z)$ with respect to $\leq_{\operatorname{uPD}} $.
\end{thm}
\begin{thm}
Let $Z$ be a set and $A=\bk \calm(Z)\slash I_A$ a unital $\bk$-algebra. Then we have:
$$\calf^{\OudRB\zhx\uAlg}_{\uAlg}(A)=\bk\frakM(Z)\slash\left\langle S_{\OudRB}(Z)\cup I_A\right\rangle_\mtuOpAlg.$$
Moreover, assume $I_A$ has a GS basis $G$ with respect to the degree-lexicographical order $\leq_{\rm {dlex}}$. Then $S_{\OudRB}(Z)\cup G$ is an operated GS basis of $\left\langle S_{\OudRB}(Z)\cup I_A\right\rangle_\mtuOpAlg$ in $\bk\frakM(Z) $ with respect to $\leq_{\operatorname{uPD}}$.
\end{thm}
\begin{thm}\label{different RB}
Let $Z$ be a set and $A=\bk \calm(Z)\slash I_A$ a unital $\bk$-algebra with a GS basis $G$ with respect to $\leq_{\rm{dlex}}$. Then the set
$\Irr(S_{\OudRB}(Z)\cup G)$, which is by definition the complement of
$$\left\lbrace q|_{\bar{s}},\ q|_{P(u)P(v)},\ q|_{D(u)v},\ q|_{D(P(u))},\ q|_{P(u)D(v)} ~|~ s\in G,\ q\in\frakMstar(Z),\ u,v\in\frakM(Z)\right\rbrace$$ in $\frakM(Z)$, is a linear basis of the free unital $0$-differential Rota-Baxter algebra $\calf^{\OudRB\zhx\uAlg}_{\uAlg}(A)$ over $A$.
\end{thm}
So far, we have completed the study of differential Rota-Baxter algebras.
\section{Free integro-differential algebras over algebras }
In this section, we carry out the study of GS bases of free integro-differential algebras over algebras.
It turns out that integro-differential algebras can be investigated by a method similar to that used for differential Rota-Baxter algebras, but the details are more involved.
We first recall the definition of integro-differential algebras.
\begin{defn} Let $\lambda\in \bk$.
An integro-differential $\bk$-algebra of weight $\lambda$ (also called a $\lambda$-integro-differential $\bk$-algebra) is a differential $\bk$-algebra $(R, D)$ of weight $\lambda$ with a linear operator $P: R \rightarrow R$ which satisfies (c) in Definition~\ref{differentila rota-baxter algebras}:
$$
D \circ P=\text { id },
$$
and such that
$$
\begin{array}{ll}
P(D(u) P(v))=u P(v)-P(u v)-\lambda P(D(u) v) & \text { for all } u, v \in R, \\
P(P(u) D(v))=P(u) v-P(u v)-\lambda P(u D(v)) & \text { for all } u, v \in R.
\end{array}
$$
\end{defn}
Recall that
\begin{itemize}
\item[(1)] $\phi_1(x,y) = P(x) P(y)-P(x P(y))-P(P(x) y)-\lambda P(x y)$,
\item[(2)] $\phi_2(x,y)= D(x)D(y) + \lambda^{-1}D(x)y + \lambda^{-1}xD(y) - \lambda^{-1}D(xy) $,
\item[(3)] $\phi_3(x) = D(P(x))-x$,
\item [(4)] $\phi_4(x,y)=P(x)D(y)-D(P(x)y)+xy+\lambda xD(y)$,
\item [(5)] $\phi_5(x,y)=D(x)P(y)-D(xP(y))+xy+\lambda D(x)y$,
\end{itemize}
and
denote
\begin{itemize}
\item[(6)] $\phi_6(x,y) = P(D(x) P(y)) - x P(y)+P(x y)+\lambda P(D(x) y)$,
\item[(7)] $\phi_7(x,y) = P(P(x) D(y))-P(x) y+P(x y)+\lambda P(x D(y))$.
\end{itemize}
Notice that for $u, v\in\frakS(Z)$,
since $P(D(u) P(v))$ (resp. $P(P(u) D(v))$) has the largest $P$-degree in $\phi_6(u,v)$ (resp. $\phi_7(u,v)$), the leading monomial of $\phi_6(u,v)$ (resp. $\phi_7(u,v)$) with respect to $\leq_{\operatorname{PD}}$ is $P(D(u) P(v))$ (resp. $P(P(u) D(v))$).
\subsection{ Case of nonunital algebras with $\lambda\neq 0$}\
Assume in this subsection that $\lambda\neq 0$.
We first consider nonunital free integro-differential $\bk$-algebras over algebras.
According to the definition of integro-differential algebras, define
$$
\inte' :=\left\lbrace ~ \phi_2(x, y) , \phi_3(x), \phi_6(x, y) , \phi_7(x, y)~ \right\rbrace.
$$
By Example~\ref{phi_4}, Example~\ref{phi_5}, Example~\ref{phi8} and Example~\ref{phi9}, $\inte'$ is not $\Omega$-GS in $\bk\frakS(Z)$ with respect to $\leq_{\operatorname{PD}}$.
\begin{exam}\label{phi8} For $u, v\in \frakS(Z)$, let
$$\begin{array}{rcl} f&=&\phi_7(u,v)=P(P(u) D(v))-P(u) v+P(u v)+\lambda P(u D(v)),\\\
g &=& \phi_4(u,v)=P(u)D(v)-D(P(u)v)+uv+\lambda uD(v),\\
q&=&P(\star),\\
p&=&P(P(u) D(v))=\bar{f}=\left.q\right|_{\bar{g}}. \end{array}$$
Then $$
(f,g)_p^q=f-\left.q\right|_{g}= P(D(P(u)v))-P(u)v.
$$
Let$$
\phi_8(x,y)=P(D(P(x)y))-P(x)y.$$
It is clear that the leading monomial of $\phi_8(u, v)$ is $P(D(P(u)v))$ with respect to $\leq_{\operatorname{PD}}$ which cannot be reduced further.
\end{exam}
\begin{exam}\label{phi9} For $u, v\in \frakS(Z)$, let
$$\begin{array}{rcl} f&=&\phi_6(u,v)=P(D(u) P(v)) - uP(v)+P(u v)+\lambda P(D(u) v),\\\
g &=& \phi_5(u,v)=D(u)P(v)-D(uP(v))+uv+\lambda D(u)v,\\
q&=&P(\star),\\
p&=&P(D(u)P(v))=\bar{f}=\left.q\right|_{\bar{g}}. \end{array}$$
Then $$
(f,g)_p^q=f-\left.q\right|_{g}= P(D(uP(v)))-uP(v).
$$
Let$$
\phi_9(x,y)=P(D(xP(y)))-xP(y).$$
It is clear that the leading monomial of $\phi_9(u, v)$ is $P(D(uP(v)))$ with respect to $\leq_{\operatorname{PD}}$ which cannot be reduced further.
\end{exam}
\begin{remark}\label{phi1}
Note that the OPI $\phi_1(x,y)$ can be deduced from $\phi_3(x)$ and $\phi_6(x,y)$. So an integro-differential algebra can be seen as a differential Rota-Baxter algebra.
Explicitly, for $u, v\in \frakS(Z)$, let
$$\begin{array}{rcl} f&=&\phi_6(P(u),v)=P(D(P(u)) P(v)) - P(u)P(v)+P(P(u) v)+\lambda P(D(P(u)) v),\\\
g &=& \phi_3(u)=D(P(u))-u,\\
q&=&P(\star P(v)),\\
p&=&P(D(P(u)) P(v))=\bar{f}=\left.q\right|_{\bar{g}}. \end{array}$$
Then $$
(f,g)_p^q=f-\left.q\right|_{g}\equiv -P(u)P(v) + P(uP(v)) + P(P(u)v) + \lambda P(uv)=-\phi_1(u,v).
$$
\end{remark}
\bigskip
Now denote $\inte$ to be the set of the following OPIs:
\begin{itemize}
\item [(1)] $\phi_1(x, y) =P(x)P(y) - P(xP(y)) - P(P(x)y) - \lambda P(xy)$,
\item [(2)] $ \phi_2(x, y)=D(x)D(y) + \lambda^{-1}D(x)y + \lambda^{-1}xD(y) - \lambda^{-1}D(xy) $,
\item [(3)] $\phi_3(x)=D(P(x)) - x$,
\item [(4)] $\phi_4(x,y)=P(x)D(y)-D(P(x)y)+xy+\lambda xD(y)$,
\item [(5)] $\phi_5(x,y)=D(x)P(y)-D(xP(y))+xy+\lambda D(x)y$,
\item [(8)] $\phi_8(x,y)=P(D(P(x)y))-P(x)y$,
\item [(9)] $\phi_9(x,y)=P(D(xP(y)))-xP(y)$.
\end{itemize}
Notice that $\inte= \dRB\cup \{\phi_8(x,y), \phi_9(x,y)\}$.
\begin{prop}\label{s3s3'}
$\left\langle S_{\inte'}(Z)\right\rangle_\mtOpAlg=\left\langle S_{\inte}(Z)\right\rangle_\mtOpAlg$ for each set $Z$.
\end{prop}
\begin{proof}
We firstly prove $\left\langle S_{\inte}(Z)\right\rangle_\mtOpAlg \subseteq \left\langle S_{\inte'}(Z)\right\rangle_\mtOpAlg$, which follows from
$$
\left\{\begin{array}{lll}
\phi_1(u, v) \in \left\langle \phi_3(u), \phi_6(u, v)\right\rangle_\mtOpAlg \ \mathrm{by\ Remark}~\ref{phi1},\\
\phi_4(u, v) \in \left\langle \phi_2(u, v) , \phi_3(u)\right\rangle_\mtOpAlg\ \mathrm{by \ Example}~\ref{phi_4},\\
\phi_5(u, v) \in \left\langle \phi_2(u, v) , \phi_3(u)\right\rangle_\mtOpAlg \ \mathrm{by \ Example}~\ref{phi_5},\\
\phi_8(u, v) \in \left\langle \phi_4(u, v) , \phi_7(u, v) \right\rangle_\mtOpAlg\ \mathrm{by \ Example}~\ref{phi8}, \\
\phi_9(u, v) \in \left\langle \phi_5(u, v) , \phi_6(u, v) \right\rangle_\mtOpAlg\ \mathrm{by \ Example}~\ref{phi9},
\end{array}\right.
$$
where $ u,v\in \frakS(Z)$.
Next we show $ \left\langle S_{\inte'}(Z)\right\rangle_\mtOpAlg \subseteq \left\langle S_{\inte}(Z)\right\rangle_\mtOpAlg $. Note that
$$
\begin{aligned}
P(\phi_4(u,v)) &=P(P(u)D(v))-P(D(P(u)v))+P(uv)+\lambda P(uD(v))\\
& = P(P(u)D(v))-P(u)v+P(uv)+\lambda P(uD(v))-P(D(P(u)v))+P(u)v\\
& = \phi_7(u,v)-\phi_8(u,v),
\end{aligned}
$$
and
$$
\begin{aligned}
P(\phi_5(u,v)) &=P(D(u)P(v))-P(D(uP(v)))+P(uv)+\lambda P(D(u)v)\\
& = P(D(u)P(v))-uP(v)+P(uv)+\lambda P(D(u)v)-P(D(uP(v)))+uP(v)\\
& = \phi_6(u,v)-\phi_9(u,v).
\end{aligned}
$$
So we have
$$
\left\{\begin{array}{lll}
\phi_6(u,v) \in \left\langle \phi_5(u,v), \phi_9(u,v)\right\rangle_\mtOpAlg,\\
\phi_7(u,v) \in \left\langle \phi_4(u,v), \phi_8(u,v)\right\rangle_\mtOpAlg.
\end{array}\right.
$$
It proves $ \left\langle S_{\inte'}(Z)\right\rangle_\mtOpAlg \subseteq \left\langle S_{\inte}(Z)\right\rangle_\mtOpAlg $.
We are done.
\end{proof}
Now we can prove $\inte$ is $\Omega$-GS.
\begin{thm}\label{S3,S4}
$\inte$ is $\Omega$-GS in $\bk\frakS(Z)$ with respect to $\leq_{\operatorname{PD}} $.
\end{thm}
\begin{proof}
Since the ambiguities $i\wedge j$ with $i, j = 1, 2,3,4,5$ in $\inte$ are the same as in Theorem~\ref{S2}, we only need to consider the ambiguities involving $\phi_8$ and $\phi_9$. The cases that cannot be dealt with directly by Lemma~\ref{including} are listed below: for arbitrary $u,v,w \in \frakS(Z), q\in\frakS^{\star}(Z)$ and $s \in \frakS(Z) \text{ or } \emptyset$,
\begin{itemize}
\item [$1 \wedge 8$] \quad $P\left(D\left(P\left(u\right)v\right)\right)P\left(w\right)$, \quad $P\left(u\right)P\left(D\left(P\left(v\right)w\right)\right)$,
\item [$3 \wedge 8$] \quad $D\left(P\left(D\left(P\left(u\right)v\right)\right)\right)$,
\item [$4 \wedge 8$] \quad $P\left(D\left(P\left(u\right)v\right)\right)D\left(w\right)$,
\item [$5 \wedge 8$] \quad $D\left(u\right)P\left(D\left(P\left(v\right)w\right)\right)$,
\item [$1 \wedge 9$] \quad $P\left(D\left(uP\left(v\right)\right)\right)P\left(w\right)$, \quad $P\left(u\right)P\left(D\left(vP\left(w\right)\right)\right)$,
\item [$3 \wedge 9$] \quad $D\left(P\left(D\left(uP\left(v\right)\right)\right)\right)$,
\item [$4 \wedge 9$] \quad $P\left(D\left(uP\left(v\right)\right)\right)D\left(w\right)$,
\item [$5 \wedge 9$] \quad $D\left(u\right)P\left(D\left(vP\left(w\right)\right)\right)$,
\item [$8 \wedge 1$] \quad $P\left(D\left(P\left(u\right)P\left(v\right)s\right)\right)$,
\item [$8 \wedge 4$] \quad $P\left(D\left(P\left(u\right)D\left(v\right)s\right)\right)$,
\item [$9 \wedge 1$] \quad $P\left(D\left(sP\left(u\right)P\left(v\right)\right)\right)$,
\item [$9 \wedge 5$] \quad $P\left(D\left(sD\left(u\right)P\left(v\right)\right)\right)$,
\item [$8 \wedge 8$] \quad $P\left(D\left(P\left(D\left(P\left(u\right)v\right)\right)w\right)\right)$,
\item [$8 \wedge 9$] \quad $P\left(D\left(P\left(D\left(uP\left(v\right)\right)\right)w\right)\right)$,
\item [$9 \wedge 8$] \quad $P\left(D\left(uP\left(D\left(P\left(v\right)w\right)\right)\right)\right)$,
\item [$9 \wedge 9$] \quad $P\left(D\left(uP\left(D\left(vP\left(w\right)\right)\right)\right)\right)$.
\end{itemize}
All these compositions can be treated similarly as in the proof of Theorem~\ref{S2}. We only give the complete proof for the case $8 \wedge 1$.
Take
$f = \phi_8(u,P(v)s)$, $g= \phi_1(u,v)$, $p= P(D(P(u)P(v)s))$ and $q=P(D(\star s))$. Then we have
$$
\begin{aligned}
(f,g)_p^q &=-P(u)P(v)s+P(D(P(uP(v))s))+P(D(P(P(u)v)s))+\lambda P(D(P(uv)s))\\
& \equiv -P(uP(v))s-P(P(u)v)s-\lambda P(uv)s+P(uP(v))s+ P(P(u)v)s+\lambda P(uv)s\\
& \equiv 0 \bmod \left(S_{\inte}(Z), p\right).
\end{aligned}
$$
We are done.
\end{proof}
\begin{thm}
Let $Z$ be a set and $A=\bk \cals(Z)\slash I_A$ a nonunital $\bk$-algebra. Then we have:
$$\calf^{\inte\zhx\Alg}_{\Alg}(A)=\bk\frakS(Z)\slash\left\langle S_{\inte}(Z)\cup I_A\right\rangle_\mtOpAlg.$$
Moreover, assume $I_A$ has a GS basis $G$ with respect to the degree-lexicographical order $\leq_{\rm {dlex}}$. Then $S_{\inte}(Z)\cup G$ is an operated GS basis of $\left\langle S_{\inte}(Z)\cup I_A\right\rangle_\mtOpAlg$ in $\bk\frakS(Z) $ with respect to $\leq_{\operatorname{PD}}$.
\end{thm}
\begin{proof}
Since the leading monomial in $\inte$ has no subword in $\cals(X)\backslash X$, the result follows immediately from Theorem~\ref{S3,S4} and Theorem~\ref{Thm: GS basis for free nonunital Phi algebra over nonunital alg}.
\end{proof}
As a consequence, we obtain a linear basis.
\begin{thm}
Let $Z$ be a set and $A=\bk \cals(Z)\slash I_A$ a nonunital $\bk$-algebra with a GS basis $G$ with respect to $\leq_{\rm{dlex}}$. Then a linear basis of the free nonunital $\lambda$-integro-differential algebra $\calf^{\inte\zhx\Alg}_{\Alg}(A)$ over $A$ is given by the set
$\Irr(S_{\inte}(Z)\cup G)$, which is by definition the complement in $\frakS(Z)$ of the subset consisting of $ q|_w$ where $w$ runs through
$$ \bar{s},P(u)P(v),D(u)D(v),D(P(u)),P(u)D(v), D(u)P(v),P(D(P(u)v)),P(D(uP(v)))$$
for arbitrary $ s\in G,q\in\frakSstar(Z),u,v\in\frakS(Z).$
\end{thm}
\begin{proof}
It can be induced directly from Theorem~\ref{Thm: nonunital CD}.
\end{proof}
\begin{remark}Since the monomial order $\leq_{\operatorname{PD}}$ is different from that used in \cite{GGR15}, our operated GS basis and linear basis are different from theirs.
\end{remark}
\begin{remark}
Define a new OPI $\phi_{10}(x)=P(D(x))$,
and let $$\Phi_{\mathsf{IID}}=\{~\phi_1(x,y),\phi_2(x,y),\phi_3(x),\phi_{10}(x)~\}.$$
A $\Phi_{\mathsf{IID}}$-algebra is just a nonunital $\lambda$-integro-differential algebra in which the operators $P$ and $D$ are inverses of each other, so we call such an operated algebra an invertible integro-differential algebra. One can show that
$\Phi_{\mathsf{IID}}\cup\{\phi_4(x,y),\phi_5(x,y)\}$ is $\Omega$-GS in $\bk\frakS(Z)$ with respect to $\leq_{\operatorname{PD}} $.
\end{remark}
\subsection{ Case of nonunital algebras with $\lambda=0$}\
Now we consider
nonunital free integro-differential algebras on algebras with $\lambda=0$.
This case can be studied similarly as the case $\lambda\neq 0$, so we omit the details in this subsection.
As in Subsection~\ref{Subsection: case zero}, for an OPI $\phi$, we write $\phi^0$ for $\phi$ with $\lambda=0$; for convenience, we also write $\phi^0=\phi$ when $\lambda$ does not appear in $\phi$.
Let
$$
\Ointe' :=\left\lbrace ~ \phi_2^0(x, y) , \phi_3^0(x) , \phi_6^0(x, y), \phi_7^0(x, y) \right\rbrace.
$$
Again, $\Ointe'$ is not $\Omega$-GS in $\bk\frakS(Z)$ with respect to $\leq_{\operatorname{PD}}$.
\begin{remark}
By Example~\ref{phi8}, we can get $\phi_8^0(u,v)$ from $\phi_4^0(u,v)$ and $\phi_7^0(u,v)$.
One cannot obtain $\phi_9^0(u,v)$ from $S_{\Ointe'}(Z)$ as in Example~\ref{phi9}, since $\phi_5$ does not belong to $\Ointe'$.
However, we can still generate $\phi_9^0(u,v)$ as follows: for $u, v\in \frakS(Z)$, let
$$\begin{array}{rcl} f&=&\phi_6^0(u,v)=P(D(u)P(v))-uP(v)+P(u v),\\\
g &=& \phi_2^0(u,P(v))=D(u)P(v)+uD(P(v))-D(uP(v)),\\
q&=&P(\star),\\
w&=&P(D(u)P(v))=\bar{f}=\left.q\right|_{\bar{g}}. \end{array}$$
Then $$
(f,g)_w=f-\left.q\right|_{g}\equiv P(D(uP(v)))-uP(v)=\phi_9^0(u,v).
$$
\end{remark}
Now denote $\Ointe$ to be the set of the following OPIs:
\begin{itemize}
\item [(1)] $\phi_1^0(x,y) =P(x)P(y) - P(xP(y)) - P(P(x)y)$,
\item [(2)] $ \phi_2^0(x,y)=D(x)y + xD(y) - D(xy) $,
\item [(3)] $\phi_3^0(x)=D(P(x)) - x$,
\item [(4)] $\phi_4^0(x,y)=P(x)D(y)-D(P(x)y)+xy$,
\item [(8)] $\phi_8^0(x,y)=P(D(P(x)y))-P(x)y$,
\item [(9)] $\phi_9^0(x,y)=P(D(xP(y)))-xP(y)$.
\end{itemize}
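For readers who want a concrete model, the standard analytic example with $D=d/dx$ and $P(f)(x)=\int_0^x f(t)\,dt$ on smooth functions satisfies all of the OPIs listed above (with $\lambda=0$). The following short \texttt{sympy} sketch, with the illustrative choices $u=x^2+1$ and $v=\sin x$ (our choices, not from the text), prints $0$ for each identity; it is only a sanity check and plays no role in the formal development.
\begin{verbatim}
import sympy as sp

x, t = sp.symbols('x t')
D = lambda f: sp.diff(f, x)                            # derivation d/dx
P = lambda f: sp.integrate(f.subs(x, t), (t, 0, x))    # P(f)(x) = int_0^x f(t) dt

u = x**2 + 1
v = sp.sin(x)

checks = {
    'phi_2^0 (Leibniz)':       D(u)*v + u*D(v) - D(u*v),
    'phi_3^0 (section)':       D(P(u)) - u,
    'phi_4^0 (int. by parts)': P(u)*D(v) - D(P(u)*v) + u*v,
    'phi_1^0 (Rota-Baxter)':   P(u)*P(v) - P(u*P(v)) - P(P(u)*v),
    'phi_8^0':                 P(D(P(u)*v)) - P(u)*v,
    'phi_9^0':                 P(D(u*P(v))) - u*P(v),
}
for name, expr in checks.items():
    print(name, sp.simplify(expr))   # each line should print 0
\end{verbatim}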
As in the previous subsection, one can prove the following results.
\begin{prop}
$\left\langle S_{\Ointe'}(Z)\right\rangle_\mtOpAlg=\left\langle S_{\Ointe}(Z)\right\rangle_\mtOpAlg$ for any set $Z$.
\end{prop}
\begin{thm}\label{S1ID}
$\Ointe$ is $\Omega$-GS in $\bk\frakS(Z)$ with respect to $\leq_{\operatorname{PD}} $.
\end{thm}
\begin{thm}
Let $Z$ be a set and $A=\bk \cals(Z)\slash I_A$ a nonunital $\bk$-algebra. Then we have:
$$\calf^{\Ointe\zhx\Alg}_{\Alg}(A)=\bk\frakS(Z)\slash\left\langle S_{\Ointe}(Z)\cup I_A\right\rangle_\mtOpAlg.$$
Moreover, assume $I_A$ has a GS basis $G$ with respect to the degree-lexicographical order $\leq_{\rm {dlex}}$. Then $S_{\Ointe}(Z)\cup G$ is an operated GS basis of $\left\langle S_{\Ointe}(Z)\cup I_A\right\rangle_\mtOpAlg$ in $\bk\frakS(Z) $ with respect to $\leq_{\operatorname{PD}}$.
\end{thm}
\begin{thm}\label{integro}
Let $Z$ be a set and $A=\bk \cals(Z)\slash I_A$ a nonunital $\bk$-algebra with a GS basis $G$ with respect to $\leq_{\rm{dlex}}$. Then the set
$\Irr(S_{\Ointe}(Z)\cup G)$ which is by definition the complement of
$$\left\lbrace q|_{\bar{s}},q|_{P(u)P(v)},q|_{D(u)v},q|_{D(P(u))},q|_{P(u)D(v)},q|_{P(D(P(u)v))},q|_{P(D(uP(v)))}, s\in G,q\in\frakSstar(Z),u,v\in\frakS(Z)\right\rbrace$$ in $\frakS(Z)$ is a linear basis of the free nonunital $0$-integro-differential algebra $\calf^{\Ointe\zhx\Alg}_{\Alg}(A)$ over $A$.
\end{thm}
\subsection{ Case of unital algebras}\
Now we consider unital integro-differential algebras. Since the proofs are similar to those in the previous subsections, we omit most of them.
The study is again divided into the cases $\lambda \neq 0$ and $\lambda=0$.
When $\lambda \neq 0$, since unital integro-differential algebras have the condition $D(1)=0$, we put $\uinte:=\inte\cup \{D(1)\}$.
\begin{thm}\label{uS1ID}
$\uinte$ is $\Omega$-GS in $\bk\frakM(Z)$ with respect to $\leq_{\operatorname{uPD}} $.
\end{thm}
\begin{thm}
Let $Z$ be a set and $A=\bk \calm(Z)\slash I_A$ a unital $\bk$-algebra. Then we have:
$$\calf^{\uinte\zhx\uAlg}_{\uAlg}(A)=\bk\frakM(Z)\slash\left\langle S_{\uinte}(Z)\cup I_A\right\rangle_\mtuOpAlg.$$
Moreover, assume $I_A$ has a GS basis $G$ with respect to the degree-lexicographical order $\leq_{\rm {dlex}}$. Then $S_{\uinte}(Z)\cup G$ is an operated GS basis of $\left\langle S_{\uinte}(Z)\cup I_A\right\rangle_\mtuOpAlg$ in $\bk\frakM(Z) $ with respect to $\leq_{\operatorname{uPD}}$.
\end{thm}
\begin{thm}
Let $Z$ be a set, $A=\bk \calm(Z)\slash I_A$ a unital $\bk$-algebra with a GS basis $G$ with respect to $\leq_{\rm{dlex}}$. Then a linear basis of the free unital $\lambda$-integro-differential algebra $\calf^{\uinte\zhx\uAlg}_{\uAlg}(A)$ over $A$ is given by the set
$\Irr(S_{\uinte}(Z)\cup G)$, which is by definition the complement in $\frakM(Z)$ of the subset consisting of $ q|_w$ where $w$ runs through
$$ \bar{s},P(u)P(v),D(u)D(v),D(P(u)),P(u)D(v), D(u)P(v),P(D(P(u)v)),P(D(uP(v))), D(1)$$
for arbitrary $ s\in G,q\in\frakMstar(Z),u,v\in\frakM(Z).$
\end{thm}
When $\lambda= 0$, denote $\Ouinte:=\Ointe$.
\begin{thm}\label{uS2ID}
$\Ouinte$ is $\Omega$-GS in $\bk\frakM(Z)$ with respect to $\leq_{\operatorname{uPD}} $.
\end{thm}
\begin{thm}
Let $Z$ be a set and $A=\bk \calm(Z)\slash I_A$ a unital $\bk$-algebra. Then we have:
$$\calf^{\Ouinte\zhx\uAlg}_{\uAlg}(A)=\bk\frakM(Z)\slash\left\langle S_{\Ouinte}(Z)\cup I_A\right\rangle_\mtuOpAlg.$$
Moreover, assume $I_A$ has a GS basis $G$ with respect to the degree-lexicographical order $\leq_{\rm {dlex}}$. Then $S_{\Ouinte}(Z)\cup G$ is an operated GS basis of $\left\langle S_{\Ouinte}(Z)\cup I_A\right\rangle_\mtuOpAlg$ in $\bk\frakM(Z) $ with respect to $\leq_{\operatorname{uPD}}$.
\end{thm}
\begin{thm}
Let $Z$ be a set and $A=\bk \calm(Z)\slash I_A$ a unital $\bk$-algebra with a GS basis $G$ with respect to $\leq_{\rm{dlex}}$. Then the set
$\Irr(S_{\Ouinte}(Z)\cup G)$ which is by definition the complement of
$$\left\lbrace q|_{\bar{s}},q|_{P(u)P(v)},q|_{D(u)v},q|_{D(P(u))},q|_{P(u)D(v)} ,q|_{P(D(P(u)v))},q|_{P(D(uP(v)))}, s\in G,q\in\frakMstar(Z),u,v\in\frakM(Z)\right\rbrace$$ in $\frakM(Z)$ is a linear basis of the free unital $0$-integro-differential algebra $\calf^{\Ouinte\zhx\uAlg}_{\uAlg}(A)$ over $A$.
\end{thm}
\subsection{Differential Rota-Baxter algebras vs integro-differential algebras }\
Since integro-differential algebras have one more defining relation than differential Rota-Baxter algebras,
by Proposition~\ref{free phi algebra}, the free integro-differential algebra over an algebra $A$ is in general a quotient of the free differential Rota-Baxter algebra over $A$. However, by using the descriptions of $\dRB$ and $\inte$ and Theorems~\ref{S2} and \ref{S3,S4}, we can also show that the former is a differential Rota-Baxter subalgebra of the latter.
\begin{thm}\label{thm: RB vs ID}
The free nonunital $\lambda$-integro-differential algebra $\calf^{\inte\zhx\Alg}_{\Alg}(A)$ over an algebra $A$ is a differential Rota-Baxter subalgebra of the free nonunital $\lambda$-differential Rota-Baxter algebra $\calf^{\dRB\zhx\Alg}_{\Alg}(A)$ over $A$.
\end{thm}
\begin{proof}
Recall the observation made above that
$$\inte= \dRB\cup \{\phi_8(x,y), \phi_9(x,y)\}.$$
That is to say, the operated Gr\"obner-Shirshov basis of the free nonunital $\lambda$-differential Rota-Baxter algebra $\calf^{\dRB\zhx\Alg}_{\Alg}(A)$ over an algebra $A$ is a subset of that of the free nonunital $\lambda$-integro-differential algebra $\calf^{\inte\zhx\Alg}_{\Alg}(A)$ over $A$. So by the Diamond Lemma, $\calf^{\inte\zhx\Alg}_{\Alg}(A)$ is a subspace of $\calf^{\dRB\zhx\Alg}_{\Alg}(A)$. It is then clear that $\calf^{\inte\zhx\Alg}_{\Alg}(A)$ is also a differential Rota-Baxter subalgebra of $\calf^{\dRB\zhx\Alg}_{\Alg}(A)$.
\end{proof}
\begin{remark}
Gao and Guo \cite{GGR15} also studied GS bases of free integro-differential algebras and free differential Rota-Baxter algebras, both generated by sets, and they deduced that the free integro-differential algebra generated by a set is a subalgebra of the free differential Rota-Baxter algebra generated by the same set.
Theorem~\ref{thm: RB vs ID} proves an analogous fact for these free algebras generated by algebras.
However, our method is completely different from theirs.
\end{remark}
\begin{remark} By using the descriptions of $\OdRB$ and $\Ointe$ (resp. $\udRB$ and $\uinte$, $\OudRB$ and $\Ouinte$) and Theorems~\ref{S1} and \ref{S1ID} (resp. Theorems~\ref{uS1} and \ref{uS1ID}, Theorems~\ref{uS2} and \ref{uS2ID}), the analogue of Theorem~\ref{thm: RB vs ID} holds
in both the unital and nonunital cases for any $\lambda$ (zero or nonzero).
\end{remark}
\bigskip
\textbf{Acknowledgements:} The authors were supported by NSFC (No. 11771085, 12071137) and by STCSM (22DZ2229014).
|
{
"arxiv_id": "2302.14202",
"language": "en",
"timestamp": "2023-03-01T02:04:44",
"url": "https://arxiv.org/abs/2302.14202",
"yymm": "2302"
} | \subsubsection*{\bibname}}
\usepackage{amsmath}
\usepackage{hyperref}
\usepackage[capitalise,noabbrev]{cleveref}
\usepackage{url}
\usepackage{booktabs}
\usepackage{amsfonts}
\usepackage{nicefrac}
\usepackage{microtype}
\usepackage[usenames, dvipsnames]{xcolor}
\usepackage{amsthm}
\usepackage{xspace}
\usepackage{natbib}
\usepackage{bbm}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{graphicx}
\usepackage{enumitem}
\usepackage{dsfont}
\graphicspath{ {./figures/} }
\bibliographystyle{plainnat}
\newcommand{\mathbb{P}}{\mathbb{P}}
\newcommand{\abs}[1]{\left\lvert#1\right\rvert}
\newcommand{\rvars}[1]{\ensuremath{\mathbf{#1}}\xspace}
\newcommand{\rvars{X}}{\rvars{X}}
\newcommand{\rvars{Y}}{\rvars{Y}}
\newcommand{\rvars{Z}}{\rvars{Z}}
\newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{\jstate}[1]{\ensuremath{\mathbf{#1}}\xspace}
\newcommand{\jstate{e}}{\jstate{e}}
\newcommand{\jstate{x}}{\jstate{x}}
\newcommand{\jstate{y}}{\jstate{y}}
\newcommand{\jstate{z}}{\jstate{z}}
\newcommand{\ensuremath{\mathsf{val}}}{\ensuremath{\mathsf{val}}}
\newcommand{\comment}[1]{%
\text{\phantom{(#1)}} \tag{#1}
}
\newcommand{\bigr\rvert}{\bigr\rvert}
\theoremstyle{definition}
\newtheorem{thm}{Theorem}
\newtheorem{prop}{Proposition}
\newtheorem{defn}{Definition}
\newtheorem{lem}{Lemma}
\newtheorem{cor}{Corollary}
\newtheorem{conj}{Conjecture}
\begin{document}
\twocolumn[
\aistatstitle{Mixtures of All Trees}
\aistatsauthor{ Nikil Roashan Selvam \And Honghua Zhang \And Guy Van den Broeck }
\aistatsaddress{
UCLA Computer Science \\ \texttt{nikilrselvam@ucla.edu}
\And UCLA Computer Science \\ \texttt{hzhang19@cs.ucla.edu}
\And UCLA Computer Science \\ \texttt{guyvdb@cs.ucla.edu}
}
]
\setcitestyle{authoryear,round,citesep={;},aysep={,},yysep={;}}
\begin{abstract}
Tree-shaped graphical models are widely used for their tractability. However, they unfortunately lack expressive power as they require committing to a particular sparse dependency structure. We propose a novel class of generative models called mixtures of \emph{all} trees: that is, a mixture over all possible~($n^{n-2}$) tree-shaped graphical models over $n$ variables. We show that it is possible to parameterize this Mixture of All Trees (MoAT) model compactly (using a polynomial-size representation) in a way that allows for tractable likelihood computation and optimization via stochastic gradient descent. Furthermore, by leveraging the tractability of tree-shaped models, we devise fast-converging conditional sampling algorithms for approximate inference, even though our theoretical analysis suggests that exact computation of marginals in the MoAT model is NP-hard. Empirically, MoAT achieves state-of-the-art performance on density estimation benchmarks when compared against powerful probabilistic models including hidden Chow-Liu Trees.
\end{abstract}
\section{INTRODUCTION}
Probabilistic graphical models~(PGMs) have been extensively studied due to their ability to exploit structure in complex high-dimensional distributions and yield compact representations. The underlying graph structure of these models typically dictates the trade-off between expressive power and tractable probabilistic inference. On one end of the spectrum lie tree-shaped graphical models including Chow-Liu trees~\citep{chow-liu}, where the underlying graph is a spanning tree $T=(V,E)$ on $n$ vertices. Tree distributions allow for efficient sampling and exact inference on a variety of queries such as computing marginals~\citep{pearl1988probabilistic, darwiche2003differential} and are widely used in practice~\citep{zhang2017latent}. However, by committing to a single sparse dependency structure (by choice of spanning tree) their expressive power is limited. On the other end of the spectrum, we have densely connected graphical models such as Markov random fields~(MRFs)~\citep{koller2009probabilistic, rabiner1986introduction}, Bayesian networks~\citep{pearl1988probabilistic}, and factor graphs~\citep{loeliger2004introduction}, which excel at modelling arbitrarily complex dependencies~\citep{mansinghka2016crosscat}, but do so at the cost of efficient computation of marginal probabilities.
This spectrum and the underlying tradeoff extends beyond graphical models to generative models at large. For instance, deep generative models like variational autoencoders~(VAEs)~\citep{maaloe2019biva} are extremely expressive, but do not support tractable inference.
In this work, we propose a novel class of probabilistic models called Mixture of \emph{All} Trees (MoAT): a mixture over all possible~($n^{n-2}$) tree-shaped MRFs over $n$ variables; e.g., MoAT represents a mixture over~$10^{196}$ components when modeling joint distributions on $100$ variables. Despite the large number of mixture components, MoAT can be compactly represented by $O(n^2)$ parameters, which are shared across the tree components.
The MoAT model strikes a new balance between expressive power and tractability:
(i)~it concurrently models all possible tree-shaped dependency structures, thereby greatly boosting expressive power; (ii)~by leveraging the tractability of the spanning tree distributions and the tree-shaped MRFs, it can not only tractably compute \emph{normalized likelihood} but also efficiently estimate \emph{marginal probabilities} via sampling. In addition, as a fixed-structure model, MoAT circumvents the problem of structure learning, which plagues most probabilistic graphical models.
This paper is organized as follows. Section 2 defines the MoAT model and shows the tractability of exact (normalized) likelihood computation despite the presence of super-exponentially many mixture components. In Section 3, we discuss the MoAT model's parameterization and learning, and demonstrate state-of-the-art performance on density estimation for discrete tabular data. Next, in Section 4, we discuss the tractability of marginals and MAP inference in MoAT and prove hardness results. Finally, we view MoAT as a latent variable model and devise fast-converging importance sampling algorithms that let us leverage the extensive literature on inference in tree distributions.
\section{MIXTURES OF ALL TREES}
\label{sec:moat}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.98\linewidth]{moat-example-new.pdf}
\caption{An example MoAT distribution over 3 binary random variables $X_1, X_2, \text{ and }X_3$. The summation at the top denotes a mixture distribution where the mixture weights are given by the weights of the corresponding spanning tree~(shown on the edges). The tables on the left shows the univariate and pairwise marginal distributions, which are shared across the mixture components~(3 possible spanning trees).}
\label{fig:moat:example}
\end{figure*}
In this section, we propose mixture of all trees (MoAT) as a new class of probabilistic models. We first introduce tree-shaped Markov random fields~(MRFs) and define the MoAT model as a mixture over all possible tree distributions weighted by the spanning tree distribution. Then, we demonstrate how to tractably compute normalized likelihood on the MoAT model.
\subsection{Mixture of Tree-shaped Graphical Models}
A \emph{tree-shaped MRF} with underlying graph structure $G(V, E)$ represents a joint probability distribution ${\Pr}_{G}$ over $n$ random variables $\mathbf{X} = X_1, \cdots, X_n$ by specifying their univariate and pairwise marginal distributions. Specifically, assuming $G$ is a tree with vertex set $V = \{1, \cdots, n\}$, we associate with each edge $(u, v) \in E$ a pairwise marginal distribution ${P}_{u v}(X_u, X_v)$ and with each vertex $v$ a univariate marginal distribution ${P}_{v}(X_v)$. Assuming that ${P}_{uv}$ and ${P}_{v}$ are consistent, the normalized joint distribution ${\Pr}_{G}$ is given by~\citet{meila-jordan}:
\begin{align}
\label{eq:tree-shape-mrf}
{\Pr}_{G}(\mathbf{x}) = \frac{\prod_{(u, v) \in E}{{P}_{uv}(x_u, x_v)}}{\prod_{v \in V} {P}_{v}(x_v)^{\deg v - 1}},
\end{align}
where $\mathbf{x}\!=\!(x_1, \cdots, x_n)$ denotes assignment to $\mathbf{X}$ and $\deg{v}$ denotes the degree of $v$ in $G$; see $\Pr_{1}(X_1, X_2, X_3)$ in Figure~\ref{fig:moat:example} as an example tree-shaped MRF.
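To make Equation~\ref{eq:tree-shape-mrf} concrete, the following Python sketch (our illustration, not the authors' code) evaluates the likelihood of a tree-shaped MRF over binary variables directly from the univariate and pairwise marginal tables:
\begin{verbatim}
import numpy as np

# Evaluate Eq. (1) for a tree-shaped MRF over binary variables.
#   x     : dict mapping vertex -> observed value in {0, 1}
#   edges : list of tree edges (u, v)
#   P_uv  : dict (u, v) -> 2x2 array of pairwise marginals
#   P_v   : dict v -> length-2 array of univariate marginals
def tree_mrf_likelihood(x, edges, P_uv, P_v):
    deg = {v: 0 for v in P_v}
    num = 1.0
    for (u, v) in edges:
        num *= P_uv[(u, v)][x[u], x[v]]
        deg[u] += 1
        deg[v] += 1
    den = np.prod([P_v[v][x[v]] ** (deg[v] - 1) for v in P_v])
    return num / den
\end{verbatim}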
Despite the tractability of tree-shaped MRFs, they suffer from the problem of limited expressive power. To improve the expressive power, prior works propose to learn mixtures of tree models~\citep{anandkumar2012learning, meila-jordan}, where they focus on simple mixtures of a few trees, and propose EM algorithms for parameter and structure learning. This idea, however, suffers from several limitations. Firstly, while it is known how to optimally pick a single tree distribution with respect to the training data via the Chow-Liu algorithm~\citep{chow-liu}, no known closed form solution exists for picking the optimal set of tree distributions as mixture components from the super-exponentially many possible choices for spanning trees. Secondly, by having a small fixed number of (even possibly optimal) mixture components, the model forces us to commit to a few sparse dependency structures that might not be capable of capturing complex dependencies anyway.
Though the mixture of trees model becomes more expressive as more tree structures are included, the number of parameters increases with the number of mixture components, which seems to suggest that a mixture over a large number of tree components is infeasible. Despite this, we propose the mixture of {\bf all} trees model~(MoAT), a polynomial-size representation for the mixture over all possible (super-exponentially many) tree-shaped MRFs.
Formally, we define:
\begin{align}
\label{eq:moat-definition}
{\Pr}_{\text{MoAT}}(\jstate{x}) = \frac{1}{Z} \sum_{T \in \mathsf{ST}(K_n)} \left(\prod_{e \in T} w_e \right) {\Pr}_{T}(\jstate{x})
\end{align}
where $K_n$ denotes the complete graph on $n$ vertices, $\mathsf{ST}(G)$ denotes the set of spanning trees of a connected graph $G$, and $Z$ is the normalization constant. Each mixture component is a tree-shaped MRF $\Pr_{T}$ weighted by $\prod_{e \in T} w_e$, that is, \emph{product of the edge weights of the tree}.
Note that we define the weight of each tree to be proportional to its probability in the \emph{spanning tree distribution}~\citep{borcea2009negative}, which is tractable, allowing for efficient likelihood computation on MoAT~(Section~\ref{subsec:tractable-likelihood}.)
Though a MoAT model represents a mixture over super-exponentially many tree-shaped MRFs, the number of parameters in MoAT is polynomial-size due to the \emph{parameter sharing} across its mixture components. Specifically, all tree-shaped MRFs $\Pr_{T}(\mathbf{x})$ share the same univariate and pairwise marginals (i.e., $P_{u}(x_u)$ and $P_{uv}(x_u, x_v)$); in addition, each edge in the graph $K_{n}$ is parameterized by a positive weight $w_{uv}$. To summarize, a MoAT model over $n$ variables has $O(n^{2})$ parameters.
Figure~\ref{fig:moat:example} shows an example MoAT model over 3 binary random variables $X_1, X_2, X_3$, for which there are $3$ possible spanning trees. Note that each of the mixture components (tree distributions) share the same set of marginals, but encode different distributions by virtue of their different dependency structures.
For example, for the distribution represented in \cref{fig:moat:example},
\begin{align*}
&{\Pr}_{\text{MoAT}}(X_1=0,X_2=1,X_3=0) \\
& =\frac{1}{Z} \sum_{T \in \mathsf{ST}(K_n)} \left(\prod_{e \in T} w_e \right) {\Pr}_{T}(\jstate{x}) \\
& =\frac{1}{2 \times 3 + 3 \times 6 + 6 \times 2} \times [ \left(2 \times 3 \times \frac{0.2 \times 0.3}{0.4} \right) \\
& \quad + \left(2 \times 6 \times \frac{0.2 \times 0.1}{0.3} \right) + \left(3 \times 6 \times \frac{0.1 \times 0.3}{0.5} \right)]
\end{align*}
By Cayley's formula~\citep{chaiken1978matrix}, the number of spanning trees increases super-exponentially with respect to the number of random variables, thus preventing us from evaluating them by enumeration.
\subsection{Tractable Likelihood for MoAT}
\label{subsec:tractable-likelihood}
Despite a super-exponential number ($n^{n-2}$) of mixture components, we show that computing~(normalized) likelihood
on MoAT is tractable. Our approach primarily leverages the tractability of spanning tree distributions and their compact representation as \emph{probability generating polynomials}, which has been extensively studied in the context of machine learning~\citep{li2016fast, mariet2018exponentiated, robinson2019flexible, ZhangICML21}.
\begin{defn}
Let $\Pr(\cdot)$ be a probability distribution over $n$ binary random variables $\mathbf{X}\!=\!X_1, X_2, \dots,
X_n$, then the \emph{probability generating polynomial} for $\Pr$ is defined as
\begin{align*}
{\sum}_{\mathbf{x}\in \{0,1\}^{n}} \Pr(\mathbf{X} = \mathbf{x}) \left({\prod}_{i \text{ s.t. } x_i = 1} z_i\right),
\end{align*}
where each $z_i$ is an indeterminate associated with $X_i$.
\end{defn}
To define spanning tree distributions and present their representation as probability generating polynomials, we first introduce some notation. Let $G = (V,E)$ be a connected graph with vertex set $V = \{1, \dots, n\}$ and edge set $E$.
Associate to each edge $e \in E$ an indeterminate $z_{e}$ and a weight $w_{e} \in \mathbb{R}_{\geq 0}$.
If $e = \{i, j\}$, let $A_e$ be
the $n \times n$ matrix where $A_{ii} = A_{jj} = 1$, $A_{ij} = A_{ji} = -1$ and all
other entries equal to $0$. Then the \emph{weighted Laplacian} of $G$ is given by
$L(G) = \sum_{e \in E} w_e z_e A_e.$
For instance, the weighted Laplacian for the example MoAT distribution in \cref{fig:moat:example} is
\begin{align*}
\begin{bmatrix}
2z_{ab}+6z_{ac} & -2z_{ab} & -6z_{ac}\\
-2z_{ab} & 2z_{ab}+3z_{bc} & -3z_{bc}\\
-6z_{ac} & -3z_{bc} & 3z_{bc}+6z_{ac}\\
\end{bmatrix}
\end{align*}
Using $L(G)_{\backslash\{i\}}$ to denote the principal minor of $L(G)$ that is obtained by removing its $i^{th}$ row and column, by the Matrix Tree Theorem~\citep{chaiken1978matrix}, the probability generating polynomial for the spanning tree distribution is given by:
\begin{align}
\label{eq:st-generating-polynomial}
\operatorname{det}(L(G)_{\backslash\{i\}})=\sum_{T \in \mathsf{ST}(G)} \left(\prod_{e \in T} w_e z_e \right)
\end{align}
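As a quick numerical illustration of Equation~\ref{eq:st-generating-polynomial} on the weighted Laplacian above (our own check, with every $z_e$ set to $1$), any principal minor recovers the total spanning-tree weight $2\cdot 3 + 2\cdot 6 + 3\cdot 6 = 36$:
\begin{verbatim}
import numpy as np

# Weighted Laplacian of the 3-vertex example (w_ab = 2, w_bc = 3, w_ac = 6) at z_e = 1.
L = np.array([[ 8., -2., -6.],
              [-2.,  5., -3.],
              [-6., -3.,  9.]])
# Matrix Tree Theorem: deleting any row/column and taking the determinant
# sums the spanning-tree weights.
print(np.linalg.det(L[1:, 1:]))   # prints 36.0 (up to floating point)
\end{verbatim}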
Now we derive the formula for computing ${\Pr}_{\text{MoAT}}(\mathbf{x})$ efficiently. We first set $G=K_n$ and $z_e=\frac{P_{uv}}{P_u P_v}$ and define:
$$L^{*} := L(K_n)_{\backslash\{i\}}\bigr\rvert_{z_e=\frac{P_{uv}}{P_u P_v}};$$
and it follows from Equation~\ref{eq:st-generating-polynomial} that
\begin{align*}
\operatorname{det}\left(L^{*}\right) = \sum_{T \in \mathsf{ST}(K_n)} \left(\prod_{e \in T} w_e \right) \prod_{(u,v) \in T} \frac{P_{uv}}{P_u P_v};
\end{align*}
note that $\prod_{(u,v) \in T} P_{u} P_{v} = \prod_{u} P_{u}^{\text{deg}(u)}$; hence,
\begin{align*}
&\operatorname{det}\left(L^{*}\right)
\!=\!\frac{1}{\prod_{v \in V} P_v} \sum_{T \in \mathsf{ST}(K_n)}\!\left(\prod_{e \in T} w_e \right)\!\frac{\prod_{(u,v) \in T}P_{uv}}{\prod_{v \in V} P_v^{\operatorname{deg} v-1}}\\
&\quad\!=\!\frac{Z}{\prod_{v \in V} P_v} {\Pr}_{\text{MoAT}},
\end{align*}
where the second equality follows from the definition of MoAT~(Equation~\ref{eq:moat-definition}). Finally, multiplying both sides by $\left(\prod_{v\in V} P_v\right) / Z$, we see that ${\Pr}_{\text{MoAT}}(\jstate{x})$ can be evaluated as:
\begin{align*}
{\Pr}_{\text{MoAT}}(\jstate{x})
= \frac{1}{Z} \left(\prod_{v \in V} P_v\left(x_v\right)\right) \operatorname{det}(L^{*} \bigr\rvert_{\mathbf{x}}).
\end{align*}
Note that the normalization constant of the MoAT model $Z=\sum_{T \in \mathsf{ST}(K_n)} \left(\prod_{e \in T} w_e \right)$ can be evaluated efficiently as a determinant by replacing the indeterminate $z_e$ with the constant 1. As the computational bottleneck is the determinant calculation, the time complexity is upper bounded as $\mathcal{O}(n^{\omega})$, where $\omega$ is the matrix multiplication exponent.
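The full likelihood computation can be summarized in a few lines of code. The sketch below (an illustration under our own conventions, assuming binary variables, a symmetric weight matrix \texttt{W}, pairwise tables \texttt{P\_uv[(u,v)]} for $u<v$, and univariate tables \texttt{P\_v[v]}) builds the weighted Laplacian minor, evaluates it at $z_e = P_{uv}/(P_u P_v)$, and combines it with the prefactor derived above:
\begin{verbatim}
import numpy as np

def moat_likelihood(x, W, P_uv, P_v):
    """Pr_MoAT(x) = (1/Z) * prod_v P_v(x_v) * det(L*|_x)."""
    n = len(x)
    def laplacian_minor(z):          # z(u, v) gives the value of the indeterminate z_e
        L = np.zeros((n, n))
        for u in range(n):
            for v in range(u + 1, n):
                a = W[u, v] * z(u, v)
                L[u, u] += a; L[v, v] += a
                L[u, v] -= a; L[v, u] -= a
        return L[1:, 1:]             # drop an arbitrary row and column
    Z = np.linalg.det(laplacian_minor(lambda u, v: 1.0))
    ratio = lambda u, v: P_uv[(u, v)][x[u], x[v]] / (P_v[u][x[u]] * P_v[v][x[v]])
    det_star = np.linalg.det(laplacian_minor(ratio))
    prefactor = np.prod([P_v[v][x[v]] for v in range(n)])
    return prefactor * det_star / Z
\end{verbatim}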
\section{DENSITY ESTIMATION}
In the previous section, we introduced the MoAT model and described how we can compute likelihood tractably.
In this section, we describe how to parameterize the MoAT model in a way that is amenable to learning and subsequently effective density estimation on real-world datasets. There are a few desirable properties we seek from this parameterization (of the univariate and pairwise marginals in particular). Firstly, we need to parameterize the marginals in a way that makes them \emph{consistent} with each other. This is essential as it guarantees that all tree-shaped mixture components~(Equation~\ref{eq:tree-shape-mrf}) in the MoAT model are normalized.
Secondly, we want our parameterization to capture the entire space of \emph{consistent} combinations of univariate and pairwise marginals. In particular, this also ensures that every tree distribution is representable by our parameterization.
\subsection{MoAT Parameter Learning}
For a MoAT model over $n$ binary random variables $V=\{X_1, X_2,..., X_n\}$, we propose the following parameterization (as illustrated in \cref{fig:parameterization}):
\begin{itemize}[noitemsep,leftmargin=*]
\item Edge weights: $w_e \in \mathbb{R}_{\geq 0}$ for $e \in {V \choose 2}$.
\item Univariate marginals: $p_v\!=\!P(X_v\!=\!1) \in [0,1]$ $\forall v \in V$.
\item Pairwise marginals: $p_{uv}\!=\!P(X_u\!=\!1, X_v\!=\!1) \in [\max(0,p_u+p_v-1),\min(p_u,p_v)]$ for $\{u,v\} \in {V \choose 2}$.
\end{itemize}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{parameterization-new.pdf}
\caption{Parameterization for multivariate and univariate marginals for the example distribution on three binary random variables. The $\alpha_{i}$s and $\beta_{ij}$s are the free parameters.}
\label{fig:parameterization}
\end{figure}
As mentioned in Section~\ref{sec:moat}, to ensure that all the mixture components of MoAT are normalized, our parameterization for $P_{u}$ and $P_{uv}$ needs to be consistent; specifically, they need to satisfy the following constraints:
\begin{itemize}[noitemsep,leftmargin=*]
\item $P(X_v\!=0)+P(X_v\!=\!1)\!=\!1$ for all $v \in V$.
\item $\sum_{a\in \{0,1\}}P(X_u\!=\!a,X_v\!=\!b)\!=\!P(X_v\!=b) \text{ }\forall b\in \{0,1\}, \forall \{u,v\} \in {V \choose 2}$.
\item $\sum_{(a,b)\in \{0,1\}^{2}}P(X_u\!=\!a,X_v\!=\!b)\!=\!1 \text{ }\forall \{u,v\} \in {V \choose 2}$.
\end{itemize}
\begin{lem}
For any distribution $\Pr(\cdot)$ over binary random variables $X_1, \dots, X_n$, there exists a set of parameters~(i.e., $p_v$ and $p_{uv}$) in our hypothesis space such that $\Pr(X_u) = P_{u}(X_u)$ and $\Pr(X_u, X_v) = P_{uv}(X_u, X_v)$ for all $1 \leq u, v \leq n$; i.e., the univariate and pairwise marginals of $\Pr$ are the same as $P_{u}$ and $P_{uv}$.
\end{lem}
See appendix for proof. This lemma shows that the MoAT parameterization is not just valid, but also fully general in the sense that it covers all possible \emph{consistent} combinations of univariate and pairwise marginals.
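One concrete way to realize such a consistent parameterization for binary variables (a sketch on our part; the exact mapping from the free parameters $\alpha_i$ and $\beta_{ij}$ of Figure~\ref{fig:parameterization} to the marginals is not spelled out in the text) is to squash unconstrained reals through sigmoids so that $p_v\in[0,1]$ and $p_{uv}$ lies in the Fr\'echet interval $[\max(0,p_u+p_v-1),\min(p_u,p_v)]$:
\begin{verbatim}
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# alpha: length-n vector; beta: n x n array (upper triangle used).
def marginals_from_params(alpha, beta):
    p = sigmoid(alpha)                                   # univariate P(X_v = 1)
    n = len(p)
    P_v = {v: np.array([1 - p[v], p[v]]) for v in range(n)}
    P_uv = {}
    for u in range(n):
        for v in range(u + 1, n):
            lo = max(0.0, p[u] + p[v] - 1.0)
            hi = min(p[u], p[v])
            p11 = lo + sigmoid(beta[u, v]) * (hi - lo)   # P(X_u = 1, X_v = 1)
            P_uv[(u, v)] = np.array([[1 - p[u] - p[v] + p11, p[v] - p11],
                                     [p[u] - p11,            p11]])
    return P_v, P_uv
\end{verbatim}
By construction each pairwise table sums to one and its row/column sums equal the corresponding univariate marginals, so the consistency constraints above hold.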
Further, the MoAT parameterization naturally extends to categorical variables. For categorical random variables $V=\{X_1, X_2,..., X_n\}$, let $\ensuremath{\mathsf{val}}(X_i)=\{1,2,\cdots , k_i\}$. It is easy to see that the values $P(X_v=i)$ for $i \in \{1,2,\cdots , k_v-1\}$ uniquely determine the univariate marginals. Similarly, the values $P(X_u\!=\!i, X_v\!=\!j)$ for $(i,j) \in \{1,2,\cdots , k_u-1\} \times \{1,2,\cdots , k_v-1\}$ uniquely determine the pairwise marginals. This extension is provably valid, but not fully general.
For MoAT over categorical variables, whether there exists a fully general parameterization~(i.e., Lemma 1 holds) is unknown. See appendix for a detailed discussion.
\paragraph{Parameter Learning} For individual tree distributions, the optimal tree structure (as measured by KL divergence from training data) is the maximum weight spanning tree of the complete graph, where edge weights are given by mutual information between the corresponding pairs of variables~\citep{chow-liu}. Following this intuition, we use mutual information to initialize $w_e$; besides, we also initialize the univariate and pairwise marginals of the MoAT model by estimating them from training data. Finally, given our parameter initialization, we train the MoAT model by performing maximum likelihood estimation~(MLE) via stochastic gradient descent.
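A minimal sketch of this deterministic initialization (our reconstruction of the description above, assuming a binary data matrix \texttt{X} of shape \texttt{[num\_samples, n]}) estimates the marginals from empirical counts and sets each edge weight to the empirical mutual information:
\begin{verbatim}
import numpy as np

def initialize_moat(X, eps=1e-6):
    N, n = X.shape
    p = X.mean(axis=0)                       # univariate marginals P(X_v = 1)
    W = np.zeros((n, n))
    P_uv = {}
    for u in range(n):
        for v in range(u + 1, n):
            joint = np.zeros((2, 2))         # empirical joint of (X_u, X_v)
            for a in (0, 1):
                for b in (0, 1):
                    joint[a, b] = np.mean((X[:, u] == a) & (X[:, v] == b)) + eps
            joint /= joint.sum()
            P_uv[(u, v)] = joint
            indep = np.outer(joint.sum(axis=1), joint.sum(axis=0))
            W[u, v] = W[v, u] = np.sum(joint * np.log(joint / indep))  # mutual info
    return p, P_uv, W
\end{verbatim}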
It is worth noting that \emph{our parameter initialization is deterministic}. We perform ablation studies to check the effectiveness of our initialization. As shown in Figure~\ref{fig:init}, compared to random initialization, we observe that our
special initialization always leads to better initial log likelihood, faster convergence and better final log likelihood.
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=0.95\columnwidth]{figures/cr_nltcs_big3.png}
\caption{NLTCS Dataset}
\label{fig:ablation1}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=0.95\columnwidth]{figures/cr_jester_big3.png}
\caption{Jester Dataset}
\label{fig:ablation2}
\end{subfigure}
{\caption{{Average log-likelihood throughout training on datasets across various data dimensionalities: our initialization vs. random initialization~(averaged over 5 runs).}}
\label{fig:init}}
\end{figure}
\subsection{Density Estimation via MoAT}
\begin{table}[]
\centering
\begin{tabular}{l|l| lll}
\toprule
Dataset & \# vars & MoAT & HCLT & MT \\ \hline
nltcs & 16 & -6.07 & \textbf{-5.99} & -6.01 \\
msnbc & 17 & -6.43 & \textbf{-6.05} & -6.07 \\
kdd & 65 & \textbf{-2.13} & -2.18 & \textbf{-2.13} \\
plants & 69 & -13.50 & -14.26 & \textbf{-12.95} \\
baudio & 100 & \textbf{-39.03} & -39.77 & -40.08 \\
jester & 100 & \textbf{-51.65} & -52.46 & -53.08 \\
bnetflix & 100 & \textbf{-55.52 } & -56.27 & -56.74 \\
accidents & 111 & -31.59 & \textbf{-26.74} & -29.63 \\
tretail & 135 & \textbf{-10.81} & -10.84 & -10.83 \\
pumsb & 163 & -29.89 & \textbf{-23.64} & -23.71 \\
dna & 180 & -87.10 & \textbf{-79.05} & -85.14 \\
kosarek & 190 & \textbf{-10.57} & -10.66 & -10.62 \\
msweb & 294 & \textbf{-9.80} & -9.98 & -9.85 \\
book & 500 & \textbf{-33.46} & -33.83 & -34.63 \\
tmovie & 500 & \textbf{-49.37} & -50.81 & -54.60 \\
cwebkb & 839 & \textbf{-147.70} & -152.77 & -156.86 \\
cr52 & 889 & \textbf{-84.78} & -86.26 & -85.90 \\
c20ng & 910 & \textbf{-149.44} & -153.4 & -154.24 \\
bbc & 1058 & \textbf{-243.82} & -251.04 & -261.84 \\
ad & 1556 & \textbf{-15.30} & -16.07 & -16.02 \\
\bottomrule
\end{tabular}
\caption{Comparison of average log likelihood of MoAT, HCLT, and MT across the Twenty Datasets benchmarks. Best results are presented in bold.}
\label{tab:density}
\end{table}
We evaluate MoAT on a suite of density estimation datasets called the Twenty Datasets~\citep{twenty-datasets}, which contains 20 real-world datasets covering a wide range of application domains including media, medicine, and retail. This benchmark has been extensively used to evaluate tractable probabilistic models. We compare MoAT against two baselines:~(1)~hidden Chow-Liu trees~(HCLTs)~\citep{hclt}, which are a class of probabilistic models that achieve state-of-the-art performance on the Twenty Datasets benchmark
and~(2)~the mixture of trees model~(MT)~\citep{meila-jordan}.
\cref{tab:density} summarizes the experiment results. MoAT outperforms both HCLT and MT on 14 out of 20 datasets. In particular, the MoAT model beats baselines by large margins on all datasets with more than 180 random variables. It is also worth noting that despite having fewer parameters ($\mathcal{O}(n^2)$) than MT ($\mathcal{O}(k\cdot n^2)$, where $k$ is the number of mixture components in MT), MoAT almost always outperforms MT, with the exception of a few smaller datasets, where MoAT does not have enough parameters to fit the data well.
\section{ON THE HARDNESS OF MARGINALS AND MAP INFERENCE}
In this section, we prove the hardness of semiring queries (which is a generalization of marginals) and Maximum a posteriori~(MAP) inference on the MoAT model.
\subsection{On the Hardness of Computing Marginals}
\label{sec:marginal-hardness}
First, we define the notion of semiring queries.
\defn{Semiring Queries (SQ): Let $p(\mathbf{X})$ be a real-valued function over random variables $\mathbf{X}$. The class of semiring queries $\mathcal{Q}_{\mathrm{F}}$ is the set of queries that compute values of the following form:
\begin{align*}
f(\boldsymbol{e})=&\sum_\mathbf{z}p(\mathbf{z}, \boldsymbol{e})
\end{align*}
where $\boldsymbol{e} \in \mathrm{val}(\mathbf{E})$ is a partial configuration for any subset of random variables $\mathbf{E} \subseteq \mathbf{X}$, and $\mathbf{Z}=\mathbf{X} \backslash \mathbf{E}$ is the set of remaining random variables.}
When the semiring sum/product operations correspond to the regular sum/product operations and the function $p$ is a likelihood function, the semiring query $f(\jstate{e})$ actually computes marginal probabilities.
In fact, if $p$ is the likelihood function for the MoAT model, for an assignment $\mathbf{e}$ to $\mathbf{E} \subseteq \mathbf{X}$,
\begin{align*}
f(\boldsymbol{e})=& \frac{1}{Z} \sum_\mathbf{z} \sum_{T \in \mathsf{ST}(K_n)} \left(\prod_{v \in V} P_v\left(x_v\right)\right) \\
& \left(\prod_{(u,v) \in T} w_{(u,v)} \frac{P_{uv}(x_u,x_v)}{P_u(x_u) P_v(x_v)} \right),
\end{align*}
where $Z=\sum_{T \in \mathsf{ST}(K_n)} \left(\prod_{e \in T} w_e \right)$ is the normalization constant and $\mathbf{z}$ enumerates over all instantiations of $\mathbf{Z}=\mathbf{X} \backslash \mathbf{E}$. Thus, in this case, $f(\jstate{e})$ actually computes marginals in the MoAT model. However, the generality of the semiring queries allows for negative parameter values and hence negative ``probabilities'', which we leverage to prove hardness of semiring queries on the MoAT model.
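For small $n$, the semiring query above can be evaluated by brute-force enumeration over the unobserved variables; the sketch below (our illustration, reusing the hypothetical \texttt{moat\_likelihood} helper given earlier) makes the query concrete, at a cost exponential in $|\mathbf{Z}|$:
\begin{verbatim}
import numpy as np
from itertools import product

def moat_marginal_bruteforce(evidence, n, W, P_uv, P_v):
    """evidence: dict vertex -> value; returns f(e) = sum_z Pr_MoAT(z, e)."""
    free = [i for i in range(n) if i not in evidence]
    total = 0.0
    for vals in product([0, 1], repeat=len(free)):
        assignment = dict(evidence)
        assignment.update(zip(free, vals))
        x = [assignment[i] for i in range(n)]
        total += moat_likelihood(x, W, P_uv, P_v)   # hypothetical helper from above
    return total
\end{verbatim}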
Since most marginal computation algorithms on tractable probabilistic models (such as the jointree algorithm which relies on variable elimination \citep{ZhangPoole, Dechter1996} and circuit compilation based methods \citep{chavira,darwiche2002factor}) are semiring generalizable~\citep{wachter,kimmig17,bacchus09}, the hardness of semiring queries on the MoAT model would strongly suggest the hardness of marginal computation. In other words, the hardness of semiring queries would rule out most marginal inference techniques in the literature as they perform purely algebraic computations on the parameter values without any restrictions/assumptions on the range of these values. We dedicate the rest of this subsection to establishing the same, while deferring most technical proof details to the appendix.
\thm{Computation of semiring queries on the MoAT model is NP-hard.}
\begin{proof}
To prove the hardness of semiring queries, we proceed by a reduction from the subset spanning tree problem~(denoted $\mathsf{SST}$), which we define below.
\lem{Define $\mathsf{SST}$ as the following decision problem: given a connected graph $G=(V,E)$ and a subset $K \subset V$ of the vertices, decide if there exists a spanning tree of $G$ whose leaves are exactly $K$. $\mathsf{SST}$ is NP-hard.}
Consider an arbitrary connected graph $G=(V,E)$ on $|V|=n \geq 3$ vertices and a subset of vertices $K \subset V$. Set MoAT likelihood function parameters on $n$ binary random variables $\rvars{X}=\{X_1,X_2,\ldots, X_n\}$ (corresponding to the vertices of G) as follows:
\begin{itemize}
\item $ 0 <\epsilon < 1$
\item $w_e= \begin{cases}
1, & e=\{i,j\} \in E \\
0, & \text{otherwise}
\end{cases}$
\item $\mathit{val}(X_i)=\{0,1\}$.
\item $P_v(0)=\epsilon, P_v(1)=-1$ for all $v \in V$.
\item $P_{uv}(\alpha, \beta)=\begin{cases}
\epsilon, & \alpha = \beta = 0 \\
0, & \alpha = \beta = 1 \\
-\epsilon, &\text{otherwise} \\
\end{cases}$
\end{itemize}
One can intuitively interpret an assignment of $1$ as corresponding to labelling a node as a leaf, and $0$ as marking it as unknown. The univariate and pairwise marginals have been carefully chosen to ensure that any tree assigns higher probability to assignments where all the nodes assigned $1$ are leaves in the tree and lower probabilities to assignments where one or more nodes that are assigned $1$ are actually internal nodes. In fact, for any tree, there exists a likelihood separation of $\epsilon$ between assignments that agree on the leaves and those that do not. By assigning $1$ to all the variables in $K \subset V$ and $0$ to others, and by choosing a sufficiently small $\epsilon$, we can now effectively use the MoAT likelihood as an indicator for the presence of a spanning tree whose leaves are a superset of $K$. More impressively, we can exactly count the number of spanning trees that satisfy the desired property, and we formalize the same in the following lemma.
\lem{Let $\jstate{x}$ be a complete assignment, and denote by $\mathsf{ONES}(\jstate{x})$ the set of variables that are set to $1$ in $\jstate{x}$. Denote by $\abs{\jstate{x}}$ the value $\abs{\mathsf{ONES}(\jstate{x})}$ and $\mathsf{LEAVES}(T)$ the set of leaves of a spanning tree $T$. Let $k$ be the number of spanning trees $T$ of $G$ with $\mathsf{ONES}(\jstate{x}) \subseteq \mathsf{LEAVES}(T)$. Then,
$$\begin{cases}
\frac{k}{\epsilon^{n-2}} \leq Z \cdot p(\jstate{x}) \leq \frac{k}{\epsilon^{n-2}} +\frac{n^{n-2}}{\epsilon^{n-3}}, & \abs{\jstate{x}}\%2=0\\
\frac{-k}{\epsilon^{n-2}}+\frac{-n^{n-2}}{\epsilon^{n-3}} \leq Z \cdot p(\jstate{x}) \leq \frac{-k}{\epsilon^{n-2}} , & \abs{\jstate{x}}\%2=1
\\
\end{cases}$$
}
See appendix for proof.
\cor{Let $\epsilon < \frac{1}{2^{n+1}\cdot n^{n-2}}$. The number of spanning trees $T$ with $K \subseteq \mathsf{LEAVES}(T)$ is given by $\abs{\lfloor Z \cdot \epsilon^{n-2} \cdot p(\jstate{x}) \rceil}$, where $x_i=1$ if and only if $i \in K$ (that is, $\jstate{x}$ is the assignment that assigns $1$ to all the variables in $K$ and $0$ to all the other variables), and $\lfloor x \rceil$ denotes the closest integer to $x$.}
\label{cor:count-trees}
\begin{proof}
Let $k$ be the number of spanning trees $T$ with $K \subseteq \mathsf{LEAVES}(T)$. When $\abs{\jstate{x}}\%2=0$, $ k \leq Z \epsilon^{n-2} p(\jstate{x}) \leq k + \epsilon n^{n-2} \leq k + \frac{1}{2^{n+1}}$. Thus, $\abs{\lfloor Z \epsilon^{n-2} p(\jstate{x}) \rceil}=k$ as desired. An analogous proof holds for the case of $\abs{\jstate{x}}\%2=1$.
\end{proof}
Note that $\frac{P_{uv}}{P_u P_v} \geq 0$, and hence the sign of $p(\jstate{x})$ depends solely on the parity of $\abs{\jstate{x}}$. Thus, we can leverage the inclusion-exclusion formula to count spanning trees $T$ with $K = \mathsf{LEAVES}(T)$ using expressions for number of spanning trees $T$ with $K \subseteq \mathsf{LEAVES}(T)$ given by Corollary~\ref{cor:count-trees}.
\lem{The number of spanning trees $T$ with $K = \mathsf{LEAVES}(T)$ is given by $\abs{\lfloor Z \epsilon^{n-2} f(\jstate{e}) \rceil}$. }
\begin{proof}[Proof Sketch]
From the inclusion-exclusion formula we obtain that the number of spanning trees $T$ with $K = \mathsf{LEAVES}(T)$ (up to sign) is given by
\begin{align*}
& \sum_{K\subseteq L} (-1)^{\abs{L}}\sum_{T \in \mathsf{ST}(G)} \mathds{1}(L \subseteq \mathsf{LEAVES}(T)) \\
&\quad =\sum_{val(z_1)} \sum_{val(z_2)} \ldots \sum_{val(z_k)} (-1)^{\abs{\jstate{x}}}\abs{\lfloor Z \epsilon^{n-2} p(\jstate{x}) \rceil} \\
&\quad =\lfloor Z \epsilon^{n-2} \sum_{val(z_1)} \sum_{val(z_2)} \ldots \sum_{val(z_k)} p(\jstate{x}) \rceil \\
&\quad =\lfloor Z \epsilon^{n-2} f(\jstate{e}) \rceil
\end{align*}
\end{proof}
We now obtain that there exists a spanning tree $T$ with $K = \mathsf{LEAVES}(T)$ if and only if $\abs{\lfloor Z \epsilon^{n-2} f(\jstate{e}) \rceil} > 0$. This completes the reduction from $\mathsf{SST}$, as desired.
\end{proof}
It is worth re-emphasizing the strength of this hardness result in the context of marginal computation, in that it eliminates all marginal inference algorithms that are agnostic to parameter values (which is, to the best of our knowledge, all possible known exact marginal inference techniques in literature). This opens up an interesting question about new classes of marginal computation algorithms that are not parameter-value agnostic.
\subsection{On the Hardness of MAP Inference}
In this section, we prove that maximum-a-posteriori~(MAP) inference~(i.e., computing the most likely assignment) for the MoAT model is NP-hard via a reduction from the 3-coloring problem~\citep{lovasz1973coverings}.
\thm{MAP inference for MoAT is NP-hard}.
\begin{proof}
Consider an arbitrary connected graph $G=(V,E)$ on $|V|=n$ vertices. Build a MoAT model $M$ on $n$ discrete random variables $\rvars{X}=\{X_1,X_2,\ldots, X_n\}$ (corresponding to the vertices of G) as follows:
\begin{itemize}
\item $w_e= \begin{cases}
1, & e=\{i,j\} \in E \\
0, & \text{otherwise}
\end{cases}$
\item $\mathit{val}(X_i)=\{R,G,B\}$.
\item $P_v(R)=P_v(B)=P_v(G)=\frac{1}{3}$ for all $v \in V$.
\item $P_{uv}(\alpha, \beta)=\begin{cases}
0, & \alpha = \beta \\
\frac{1}{6}, & \alpha \neq \beta
\end{cases}$
\end{itemize}
Observe that the weights define a uniform distribution over all possible spanning trees of $G$. Furthermore, the univariate marginals $P_v$ and pairwise marginals $P_{uv}$ are consistent and define a valid tree distribution.
Next, observe that a complete assignment $\jstate{x}$ to $\rvars{X}$ corresponds to a coloring of the original graph $G$. It is easy to check that for any particular spanning tree $T$, \\
$T(\jstate{x})=\begin{cases}
\frac{1}{3\times 2^{n-1}}, & \jstate{x} \text{ is a valid 3-coloring of the tree} \\
0, & \text{otherwise}
\end{cases}$
Now, we show that $\jstate{x}$ is a valid 3-coloring of the given graph $G$ if and only if $M(\jstate{x})=\frac{1}{3\times 2^{n-1}}$.
Firstly, if $\jstate{x}$ is a valid 3-coloring of $G$, then no pair of adjacent vertices in $G$ are assigned the same color. Hence, the probability assigned to $\jstate{x}$ by any of the spanning trees of $G$ is $\frac{1}{3\times 2^{n-1}}$. Hence,
\begin{align*}
M(\jstate{x}) &= \frac{1}{Z} \sum_{T \in \mathsf{ST}(G)} \left(\prod_{e \in T} w_e \right) \frac{\prod_{(u,v) \in T}P_{uv}\left(x_u, x_v\right)}{\prod_{v \in V} P_v\left(x_v\right)^{\operatorname{deg} v-1}}\\
=& \frac{1}{3\times 2^{n-1}} \left( \frac{1}{Z} \sum_{T \in \mathsf{ST}(G)} \left(\prod_{e \in T} w_e \right) \right)
= \frac{1}{3\times 2^{n-1}}
\end{align*}
Conversely, if $\jstate{x}$ is not a valid 3-coloring of $G$, then there exists at least one pair of neighboring vertices in $G$ which share the same color. Now, any spanning tree that contains the corresponding edge (which always exists) would assign zero likelihood to $\jstate{x}$, and $M(\jstate{x})$ would be strictly less than $\frac{1}{3\times 2^{n-1}}$.\\
Thus, the graph is 3-colorable if and only if the global MAP state of $M$ has a probability of $\frac{1}{3\times 2^{n-1}}$.
\end{proof}
\section{EFFICIENT APPROXIMATE INFERENCE}
Unlike usual mixture models, all mixture components in MoAT are close to maximum likelihood on the entire dataset (owing to their consistent univariate and pairwise marginals), yet they are sufficiently different to model complex dependencies. In this section, we explore how this key observation, combined with the tractability of tree-shaped models, lets us devise fast-converging algorithms for approximate inference on MoAT.
\subsection{MoAT as a Latent Variable Model}
Interestingly, the MoAT model lends itself to being interpreted as a latent variable model in an extremely natural way with clear semantics. Defining $Y$ to be the latent random variable with $\ensuremath{\mathsf{val}}(Y)=\mathsf{ST}(G)$, one can view MoAT as a distribution over $\{Y,X_1, X_2,..., X_n\}$, where $Y$ models the choice of spanning tree, and inference of the form ${\Pr}_{\text{MoAT}}(\jstate{x})$ amounts to marginalizing out the latent variable~$Y$. More precisely,
\begin{align*}
{\Pr}_{\text{MoAT}}(\jstate{x})
\!=&\!\sum_{T \in \mathsf{ST}(G)} \frac{\left(\prod_{e \in T} w_e \right)}{Z} \cdot \frac{\prod_{(u,v) \in T}{\Pr}_{uv}\left(x_u, x_v\right)}{\prod_{v \in V} P_v\left(x_v\right)^{\operatorname{deg} v-1}} \\
=& \sum_{y \in \ensuremath{\mathsf{val}}(Y)} \quad P(y) \quad \quad \cdot \qquad P(\jstate{x} \mid y)
\end{align*}
It is worth emphasizing the distinctiveness of this characterization. Typically in latent variable models, the latent variables act as higher dimensional features over some subset of the variables. However, for the MoAT model, the latent variable controls the sparse dependency structure that is enforced across the same set of variables.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.275,trim={0 0.33cm 0 0},clip]{figures/cr_conv_big.png}
\caption{Convergence of various sampling algorithms for posterior marginal inference on NLTCS with different evidence sizes. The reported results are averaged across 5 random seeds. }
\label{fig:kld-convergence}
\end{figure*}
\subsection{Efficient Importance Sampling on MoAT}
Exact marginals and conditionals are provably tractable on tree distributions owing to classic techniques such as variable elimination. Consequently, tree distributions are extremely amenable to efficient conditional sampling~\citep{koller2009probabilistic}. We show that MoAT, a mixture over tree distributions, also supports effective
conditional sampling even though our theoretical analysis (\cref{sec:marginal-hardness}) suggests that even computation of marginals in MoAT is NP-hard.
\paragraph{Importance Sampling} Revisiting the view of MoAT as a latent variable model $P(Y,\rvars{X})$, we arrive at a very natural choice of proposal distribution $Q(Y,\rvars{X})$ that leads to an efficient importance sampling algorithm~\citep{Tokdar2010ImportanceSA}. For evidence $\jstate{e}$, (and abusing notation to have $\jstate{x}$ refer to an assignment to the unobserved variables) we have that:
\begin{align*}
&Q(y,\jstate{x} \mid \jstate{e}) \\
&\quad = P(y)P(\jstate{x} \mid y\jstate{e}) \approx P(y\mid \jstate{e})P(\jstate{x} \mid y\jstate{e}) = P(y,\jstate{x} \mid \jstate{e})
\end{align*}
At a high level, this amounts to sampling a spanning tree unconditionally~\citep{sample-spanning-tree}, and then sampling the remaining variables from the chosen tree distribution conditioned on the evidence. More precisely, the weighting function for the samples drawn from the proposal distribution is given by
\begin{align*}
w(y,\jstate{x} \mid \jstate{e})
=\frac{P(y,\jstate{x} \mid \jstate{e})}{Q(y,\jstate{x} \mid \jstate{e})}
=\frac{P(y \mid \jstate{e})}{P(y)} = \frac{P(\jstate{e} \mid y)}{P(\jstate{e})}
\end{align*}
The efficiency of the sampling algorithm (as evaluated through, say, the effective sample size) depends on how close the sample weights are to $1$. Intuitively, the ratio $\frac{P(\jstate{e} \mid y)}{P(\jstate{e})}$ captures how much the likelihood of partial evidence in a single spanning tree differs from the corresponding likelihood in the model. As all the mixture components share the same consistent set of univariate and pairwise marginals, it is natural to expect that this ratio does not deviate significantly from $1$, thereby leading to high-quality samples. Indeed, our empirical analysis demonstrates that the aforementioned intuition does hold.
Note that we do not actually need to compute $P(\jstate{e})$ to obtain the sample weights when computing expectations. We can use the unnormalized weight $w'(y,\jstate{x} \mid \jstate{e})=P(\jstate{e} \mid y)$ as $P(\jstate{e})$ is a multiplicative constant given $\jstate{e}$, thereby leading to a self-normalizing importance sampling algorithm. The expectation of any function $f(\rvars{X})$ over $P$ can be estimated using samples $\mathcal{D}=\{\boldsymbol{x}y[1], \ldots, \boldsymbol{x}y[M]\}$ from $Q$ as:
\begin{align*}
&\hat{\boldsymbol{E}}_{\mathcal{D}}(f)
=\frac{\sum_{m=1}^M f(\boldsymbol{x}[m]) w(\boldsymbol{x}y[m])}{\sum_{m=1}^M w(\boldsymbol{x}y[m])} \\
&\quad=\frac{\sum_{m=1}^M f(\boldsymbol{x}[m]) P(\jstate{e} \mid y[m])}{\sum_{m=1}^M P(\jstate{e} \mid y[m])} \\
&\quad=\frac{\sum_{m=1}^M f(\boldsymbol{x}[m]) w'(\boldsymbol{x}y[m])}{\sum_{m=1}^M w'(\boldsymbol{x}y[m])}
\end{align*}
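A sketch of the resulting self-normalized importance sampler is given below; the helpers \texttt{sample\_spanning\_tree}, \texttt{tree\_conditional\_sample}, and \texttt{tree\_evidence\_likelihood} are assumed (spanning trees can be sampled from the edge weights, and tree-shaped MRFs support exact conditionals), not functions from any particular library:
\begin{verbatim}
import numpy as np

def importance_estimate(f, evidence, num_samples, W, P_uv, P_v):
    values, weights = [], []
    for _ in range(num_samples):
        T = sample_spanning_tree(W)                            # y ~ P(Y)
        x = tree_conditional_sample(T, evidence, P_uv, P_v)    # x ~ P(X | y, e)
        w = tree_evidence_likelihood(T, evidence, P_uv, P_v)   # w' = P(e | y)
        values.append(f(x))
        weights.append(w)
    weights = np.array(weights)
    return np.dot(weights, np.array(values)) / weights.sum()
\end{verbatim}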
\paragraph{Collapsed Sampling} Observe that the sample weights $w(y,\jstate{x} \mid \jstate{e})=\frac{P(\jstate{e} \mid y)}{P(\jstate{e})}=w(y\mid \jstate{e})$ only depend on $\jstate{e}$ and $y$ and are independent of $\jstate{x}$.
Given an arbitrary function $f(\jstate{x})$, this allows us to effectively ``push the expectation inside'' to the tree level, and freely leverage any estimation method available for estimating the expectation of $f(\jstate{x})$ on a tree distribution. This amounts to a form of collapsed sampling~\citep{koller2009probabilistic}:
\begin{align*}
&\boldsymbol{E}_{\jstate{x},y \sim P(\cdot \mid \jstate{e})}(f(\jstate{x}))=\sum_{\jstate{x} y} P(y \mid \jstate{e})\cdot P(\jstate{x} \mid y\jstate{e}) \cdot f(\jstate{x})\\
&\quad=\sum_{\jstate{x} y} w(y,\jstate{x} \mid \jstate{e})\cdot P(y)\cdot P(\jstate{x} \mid y\jstate{e}) \cdot f(\jstate{x})\\
&\quad=\sum_{y} P(y) \sum_{\jstate{x}} w(y,\jstate{x} \mid \jstate{e}) \cdot P(\jstate{x} \mid y\jstate{e})\cdot f(\jstate{x})\\
&\quad=\sum_{y} P(y)\cdot w(y \mid \jstate{e}) \sum_{\jstate{x}} \left(P(\jstate{x} \mid y\jstate{e})\cdot f(\jstate{x})\right)\\
&\quad=\sum_{y} P(y)\cdot w(y \mid \jstate{e}) \cdot \boldsymbol{E}_{\jstate{x} \sim P(\cdot \mid y\jstate{e})}f(\jstate{x})
\end{align*}
Our empirical estimator then becomes
\begin{align*}
\hat{\boldsymbol{E}}_{\mathcal{D}}(f)
=&\frac{\sum_{m=1}^M w'(y[m]) \boldsymbol{E}_{\jstate{x} \sim P(\cdot \mid y[m]\jstate{e})}(f(\jstate{x})) }{\sum_{m=1}^M w'(y[m])}
\end{align*}
Intuitively, we sample a spanning tree, compute the desired quantity in the corresponding tree distribution, and weight the estimate appropriately. We are thus able to drastically speed up convergence by leveraging the whole suite of exact and approximate techniques available for estimation in tree distributions which have been extensively studied in the literature. For instance, as conditionals of the form $P(X_i=1 \mid \jstate{e})$ are tractable in tree distributions, we can efficiently estimate ${\Pr}_{\text{MoAT}}(X_i=1 \mid \jstate{e})$ as
\begin{align*}
\hat{\Pr}_{\text{MoAT}}(X_i\!=\!1 \mid \jstate{e})
=&\frac{\sum_{m=1}^M w'(y[m]) P(X_i=1 \mid y[m]\jstate{e}) }{\sum_{m=1}^M w'(y[m])}
\end{align*}
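In code, the collapsed estimator for a posterior marginal only swaps the inner Monte Carlo sample for an exact tree conditional (again with assumed helpers, here \texttt{tree\_conditional\_prob} standing in for exact message passing on a tree):
\begin{verbatim}
def collapsed_marginal(i, evidence, num_samples, W, P_uv, P_v):
    """Estimate Pr_MoAT(X_i = 1 | e) by collapsed importance sampling."""
    num, den = 0.0, 0.0
    for _ in range(num_samples):
        T = sample_spanning_tree(W)                            # y ~ P(Y)
        w = tree_evidence_likelihood(T, evidence, P_uv, P_v)   # w' = P(e | y)
        num += w * tree_conditional_prob(T, i, 1, evidence, P_uv, P_v)
        den += w
    return num / den
\end{verbatim}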
\paragraph{Empirical Evaluation} Empirically, we evaluate our importance sampling algorithm and the collapsed importance sampling algorithm against a standard Gibbs sampling algorithm~\citep{gelfand1990sampling}, which is enabled by tractable likelihood computation on the MoAT model. In our experiments, we focus on posterior marginal inference: we fix evidence $\jstate{e}$ of various sizes, and estimate univariate marginals of the remaining variables conditioned on the evidence, $P(X_i \mid \jstate{e})$. To illustrate the speed of convergence to the true value, we need to compute these ground-truth conditionals exactly. To that end, we limit ourselves to a MoAT model on the 16-variable NLTCS dataset from the Twenty Datasets benchmark, where we can exactly compute MoAT marginals and conditionals by exhaustive enumeration. We use average KL-divergence as our metric to assess the speed of convergence:
\begin{align*}
&D_{\mathrm{KL}}(P \| \hat{P}) \\
&=\!\sum_{X_i \in \rvars{X} \setminus E}\!P(x_i\!\mid\!\jstate{e}) \log \frac{P(x_i\!\mid\!\jstate{e})}{\hat{P}(x_i\!\mid\!\jstate{e})}
+P(\overline{x_i}\!\mid \jstate{e})\!\log \frac{P(\overline{x_i}\!\mid\!\jstate{e})}{\hat{P}(\overline{x_i}\!\mid\!\jstate{e})}
\end{align*}
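This metric is computed per variable from the true and estimated conditionals; a minimal Python sketch is given below, where the clamping of probabilities to avoid division by zero is an implementation detail not discussed in the text.
\begin{verbatim}
import math

def binary_kl_sum(true_marginals, est_marginals, eps=1e-12):
    """Sum over non-evidence variables of KL(P(X_i | e) || P_hat(X_i | e))
    for binary X_i; both arguments map a variable index to P(X_i = 1 | e)."""
    total = 0.0
    for i, p in true_marginals.items():
        p = min(max(p, eps), 1.0 - eps)
        q = min(max(est_marginals[i], eps), 1.0 - eps)
        total += p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))
    return total
\end{verbatim}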
As we see in \cref{fig:kld-convergence}, importance sampling and collapsed importance sampling converge orders of magnitude faster than Gibbs sampling.
These results are all the more impressive when we account for the lower computational complexity of importance sampling. The bottleneck in the importance sampling algorithm is the spanning tree sampling, leading to a time complexity of $\mathcal{O}(n^\omega)$ per sample. In contrast, each sample in Gibbs sampling requires $n$ likelihood computations, resulting in a complexity of $\mathcal{O}(n \cdot n^\omega)$. Further, we observe that the importance sampling algorithm produces very high-quality samples, as illustrated by the closeness of the sample weights to $1$ (\cref{fig:weights-hist}).
\begin{figure}[thb]
\centering
\includegraphics[width=0.7\linewidth]{figures/cr_wts4.png}
\caption{Distribution of sample weights for $\abs{\jstate{e}}=4$.}
\label{fig:weights-hist}
\end{figure}
\section{CONCLUSION}
In this paper, we propose a novel class of generative models called mixture of all trees (MoAT), which strikes a new balance between expressivity and tractability. Despite being a mixture over super-exponentially many tree-shaped distributions, we show that
it allows for tractable computation of the (normalized) likelihood. Moreover, learning a MoAT model does not involve structure learning, which plagues most probabilistic graphical models.
While we prove hardness of certain classes of queries such as MAP, we demonstrate how MoAT's foundation in tree-shaped models allows us to naturally obtain extremely fast approximate inference algorithms by interpreting it as a latent variable model with clear semantics and leveraging the tractability of its underlying mixture components. Empirically, we see that MoAT achieves state-of-the-art performance on a variety of density estimation tasks, outperforming powerful probabilistic models such as HCLTs. We leave it to future work to explore MoAT's potential to scale to non-tabular data such as images and text.
We hope that MoAT opens up interesting questions that push the boundaries of tractability and expressive power for probabilistic graphical models.
\section*{Acknowledgements}
We thank the reviewers for their thoughtful feedback towards improving this paper.
This work was funded in part by the DARPA Perceptually-enabled Task Guidance (PTG) Program under
contract number HR00112220005, and NSF grants \#IIS-1943641, \#IIS-1956441, and \#CCF-1837129.
|
{
"arxiv_id": "2302.14162",
"language": "en",
"timestamp": "2023-03-01T02:03:17",
"url": "https://arxiv.org/abs/2302.14162",
"yymm": "2302"
} | \section{Introduction}
Robots have been extensively applied to many challenging applications, ranging from manufacturing and agriculture to space and ocean applications \cite{robotM1}, \cite{robotM2}. In some applications, the use of single robots or autonomous vehicles has limited efficiency, due to limitations in sensing, endurance, and payload capacity. To increase the efficiency of robots and autonomous vehicles for these applications, the concept of multiple collaborative robots, or swarm robotics, has been introduced \cite{swarm}. In the underwater environment, multiple collaborative autonomous underwater vehicles (AUVs) have shown great efficiency for many challenging applications such as seabed monitoring, wind turbine inspection, and marine debris monitoring and cleaning \cite{AUV1}. However, controlling multiple AUVs working collaboratively is not a trivial task because the effects of nonlinear dynamics, communication delays between AUVs, and underwater environmental disturbances, e.g., waves and currents, become more severe in the underwater environment \cite{AUV2}.
Many elegant control methods have been investigated for increasing the tracking accuracy and robustness of multi-agent systems. Optimal controllers using distributed optimization have been developed \cite{optimal1}. A safe optimal controller based on a control barrier function (CBF) has been proposed in \cite{optimal2}. Another approach based on reinforcement learning has been developed for the collaborative control of multi-agent systems \cite{optimal3}. Model predictive control (MPC) has been explored for multi-agent systems generally \cite{MPC1}, \cite{MPC2}, and for multiple AUV systems specifically \cite{MPCAUV}. Although optimal controllers and MPCs provide good tracking performance when full knowledge of the system is available in advance, they have difficulty handling disturbance and/or uncertainty components. In order to handle disturbance/uncertainty components, robust controllers have been extensively developed \cite{robust1}. Robust controllers are particularly efficient for the control of AUVs due to their robustness against the nonlinear effects of underwater working conditions. Due to their strong robustness against disturbances (i.e., matched disturbances), sliding mode control (SMC) techniques have been developed \cite{SMC1}, \cite{SMC2}. Despite the advantage of high robustness, SMC generates severe chattering, which can cause significant oscillations within the operation of mechanical systems. To reduce the chattering of SMC, a distributed bio-inspired SMC has been proposed for multiple AUVs \cite{SMC3}. However, conventional SMCs do not provide finite-time/fixed-time convergence for the systems. To provide finite-time convergence, finite-time consensus control methods have been developed for multi-agent systems \cite{finite1}, \cite{finite2}. To obtain both finite-time convergence and higher robustness, finite-time sliding mode controllers have also been introduced \cite{finiteSMC1}, \cite{finiteSMC2}. Finite-time controllers have also been employed for single AUV systems \cite{finiteM} and multiple AUV systems \cite{finiteAUV}, \cite{finiteAUV2}. In \cite{finiteAUV3}, a terminal SMC has been developed for the formation tracking control of multiple AUVs. The main drawback of finite-time controllers is that the convergence time of the system depends on the initial states of the system. This issue, unfortunately, limits the applicability of finite-time controllers in many practical applications because, in practice, some initial states of some agents are unavailable or unknown. To overcome this drawback, fixed-time controllers have been studied recently \cite{fixedtime1}, \cite{fixedtime11}. The use of fixed-time controllers can provide fixed-time convergence, which is independent of the initial states, for multi-agent systems \cite{fixedtime2}, \cite{fixedtime3}, \cite{fixedtime4}.
One of the issues that reduce the tracking performance of robotic and multi-agent systems is the effect of unknown components such as unknown system parameters, friction terms, and faults \cite{fault1}. This becomes even more severe for AUVs, especially for multiple collaborative AUV systems, due to the strong effects of external environmental disturbances \cite{learning}. To approximate the unknown components, many learning techniques have been developed. An iterative learning method has been employed for multi-agent systems \cite{ilearning}. Adaptive neural networks (NNs) have been developed for multi-agent systems \cite{NN1}, \cite{NN2}, and for multiple AUVs \cite{NNAUV}. Adaptive fuzzy logic controllers (FLCs) have also been developed to incorporate human knowledge about the dynamic system into the design and thereby increase the approximation performance of the FLC \cite{FLC1}, \cite{FLC2}. Adaptive fixed-time FLCs have been developed in \cite{FxTFLC1}, \cite{FxTFLC2} to combine the fixed-time convergence property with the approximation capacity of the FLC. However, the adaptive laws of the existing fixed-time FLCs do not provide fixed-time convergence for the system. In practice, it is desirable that all the adaptive laws of the system converge within a fixed time to guarantee the global fixed-time convergence of the system. This is the main motivation of this paper.
Input saturation is another important consideration in the design of practical controllers for single-agent and multi-agent systems since, in practice, the control efforts of actuators (e.g., motors) are limited \cite{Sat1}, \cite{Sat2}. Much effort has been devoted to finding effective mechanisms to mitigate the effects of input saturation. In general, to reduce the effects of saturated control torques, an auxiliary design system can be employed \cite{Sat3}, \cite{Sat4}.
In summary, there are research gaps in formation tracking control for multiple AUVs that will be addressed in this study: (i) the fixed-time convergence of the controllers designed for multiple AUV systems, (ii) the fixed-time convergence of the adaptive laws of the adaptive FLCs, and (iii) the input saturation problem, which needs to be addressed within the design of distributed formation control of multiple AUV systems. To address these research gaps, a new fixed-time distributed formation tracking control for multiple AUV systems is proposed. The distributed fixed-time consensus formation is derived based on a backstepping SMC method. To approximate the unknown components, an adaptive fixed-time FLC is developed, in which the adaptive laws of the FLC are derived such that they converge within a fixed time to guarantee global fixed-time convergence for the system. Furthermore, an auxiliary adaptive function is introduced into the fixed-time controller to compensate for the effects of the control efforts exceeding the saturation limits. The effectiveness of the new control algorithm is tested on a consensus formation of four AUVs and compared with a counterpart distributed SMC in a computer simulation. To highlight the novelties of this paper, we compare the proposed method with the existing approaches as follows:
\begin{itemize}
\item Unlike the existing distributed consensus formation controllers for multiple AUV systems \cite{SMC2}, this paper develops a fixed-time distributed formation algorithm for AUVs using a backstepping SMC method to preserve the merits of Lyapunov stability of the backstepping control, high robustness of SMC and bounded convergence time of the fixed-time control theory.
\item Unlike the existing adaptive fixed-time fuzzy controllers \cite{FxTFLC1}, \cite{FxTFLC2}, which do not guarantee fixed-time convergence of the adaptive laws of the FLC, this paper develops a new adaptive fixed-time fuzzy law to guarantee the fixed-time convergence of the adaptive weights of the FLC. This ensures global fixed-time convergence of the whole collaborative multi-AUV system.
\item Unlike the existing consensus formation controllers for multiple AUVs \cite{AUV1}, \cite{finiteAUV}, \cite{SMC2}, which do not consider the input saturation issues, this paper incorporates an adaptive auxiliary function into the fixed-time distributed consensus controller to handle the problem of saturated control efforts.
\end{itemize}
\section{FIXED-TIME STABILITY AND CONVERGENCE, FUZZY LOGIC, GRAPH THEORY, AND PROBLEM FORMULATION}
\subsection {Fixed-time stability}
A typical nonlinear system can be represented as follows\cite{Pol2012}:
\begin{equation}
\begin{aligned}
\label {DD1}
\dot{\xi}(t)=f(\xi(t)), \quad \xi(t_0)=\xi_0, \quad \xi\in\Re^n
\end {aligned}
\end{equation}
where $f(\cdot):\Re^n\rightarrow\Re^n$ is a possibly discontinuous vector field. System (\ref{DD1}) is said to be fixed-time convergent when it is globally finite-time stable and its convergence time is bounded regardless of the initial states of the system, i.e., $\forall{\xi_0}\in\Re^n$, $T(\xi_0)\leq{T_\text{max}}$ is satisfied, where ${T_\text{max}}$ is a positive constant.
\begin{lemma}[\cite{Pol2012}]
\label{lemma1} If a positive definite continuous function $V(\xi):\Re^n\rightarrow\Re$ for system (\ref{DD1}) satisfies $\dot{V}(\xi)\leq{-\chi_1V^\varrho(\xi)-\chi_2V^\varsigma(\xi)}$ for some $\chi_1>0$, $\chi_2>0$, $\varrho>1$, and $0<\varsigma<1$, then system (\ref{DD1}) is globally fixed-time stable. The convergence time can be bounded independently of the initial states of system (\ref{DD1}) as follows:
\begin{equation}
\begin{aligned}
\label {D2}
T(\xi_0)\leq\frac{1}{\chi_1(\varrho-1)}+\frac{1}{\chi_2(1-\varsigma)}.
\end {aligned}
\end{equation}
\end{lemma}
\begin{lemma}[\cite{Pol2012}]
\label{lemma2}
If a positive definite continuous function $V(\xi):\Re^n\rightarrow\Re$ for system (\ref{DD1}) satisfies $\dot{V}(\xi)\leq{-\chi_1V^p(\xi)-\chi_2V^q(\xi)}+\varphi$ for some $\chi_1>0$, $\chi_2>0$, $p>1$, $0<q<1$, and $0<\varphi<\infty$, then system (\ref{DD1}) is called a practically fixed-time stable system. Furthermore, the solution of system (\ref{DD1}) has a residual set:
\begin{equation}
\begin{aligned}
\label {D3}
\left\{ \lim_{t\to{T}}\xi \;\middle|\; \Vert{\xi}\Vert \leq \min\left\{\chi_1^{-\frac{1}{p}}\left(\frac{\varphi}{1-\kappa}\right)^{\frac{1}{p}},\; \chi_2^{-\frac{1}{q}}\left(\frac{\varphi}{1-\kappa}\right)^{\frac{1}{q}}\right\} \right\}
\end {aligned}
\end{equation}
where $\kappa$ satisfies $0<\kappa<1$. The settling time can be bounded independently of the initial states of the system as follows:
\begin{equation}
\begin{aligned}
\label {D4}
T(\xi_0)\leq\frac{1}{\chi_1\kappa(p-1)}+\frac{1}{\chi_2\kappa(1-q)}.
\end {aligned}
\end{equation}
\end{lemma}
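The bound of Lemma \ref{lemma1} is easy to check numerically on a scalar example. The Python sketch below integrates $\dot{\xi}=-\chi_1\,\mathrm{sig}(\xi)^{\varrho}-\chi_2\,\mathrm{sig}(\xi)^{\varsigma}$, where $\mathrm{sig}(\xi)^a=\mathrm{sign}(\xi)|\xi|^a$, and compares the observed settling time with the bound in (\ref{D2}); the gains and exponents are arbitrary illustrative values, not taken from this paper.
\begin{verbatim}
import numpy as np

def settle_time(xi0, chi1=2.0, chi2=2.0, rho=1.4, varsigma=0.6,
                dt=1e-4, tol=1e-6, t_max=10.0):
    """Euler integration of d(xi)/dt = -chi1*sig(xi)^rho - chi2*sig(xi)^varsigma,
    returning the first time at which |xi| drops below `tol`."""
    sig = lambda x, a: np.sign(x) * abs(x) ** a
    xi, t = float(xi0), 0.0
    while abs(xi) > tol and t < t_max:
        xi += dt * (-chi1 * sig(xi, rho) - chi2 * sig(xi, varsigma))
        t += dt
    return t

# The bound (D2) is independent of the initial state xi0.
bound = 1.0 / (2.0 * (1.4 - 1.0)) + 1.0 / (2.0 * (1.0 - 0.6))
print(bound, settle_time(1.0), settle_time(1e3))
\end{verbatim}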
\subsection{Fuzzy Logic System}
Given an input vector $Z=(z_1,z_2,...,z_n)^T\in\Re^n$ and an output variable ${y}=f({Z})\in\Re$, a fuzzy logic system can be used to map the input to the output. The fuzzy rules of the fuzzy logic system can be described as:
\begin{equation}
\label {fuz1}
Rule \: \textit{j}: \text{If}\: z_1\: \text{is}\: A_1^j\: \text{and}\: ...\: \text{and}\:z_n\: \text{is}\: A_n^j\: \text{then}\: y\: \text{is}\: B^j
\end{equation}
where $A_1^j$, $A_2^j$,..., $A_n^j$ and $B^j$ represent fuzzy sets. The fuzzy output can be obtained as:
\begin{equation}
\label {fuz2}
y=\frac{\sum\limits_{j=1}^{h}{{w}_{j}}\left( \prod\limits_{i=1}^{n}{{\mu }_{A_{i}^{j}}}({{z}_{i}}) \right)}{\sum\limits_{j=1}^{h}\left( \prod\limits_{i=1}^{n}{{\mu }_{A_{i}^{j}}}({{z}_{i}}) \right)}={\text{w}}^{T}\Psi({Z})
\end{equation}
where $h$ specifies the number of fuzzy rules used, and ${{\mu }_{A_{i}^{j}}}({{z}_{i}})$ represents the membership function of ${z}_{i}$. $\text{w}={{\left[ {{w}_{1}},{{w}_{2}},..,{{w}_{h}} \right]}^{T}}$ represents the fuzzy weights, and $\Psi({Z})={{\left[ {{\Psi }_{1}}({Z}),{{\Psi }_{2}}({Z}),\dots,{{\Psi }_{h}}({Z}) \right]}^{T}}$ is the fuzzy basis vector, whose elements ${{\Psi }_{j}}({Z})$ can be described as
\begin{equation}
\label {fuz3}
{{\Psi }_{j}}({Z})=\frac{\prod\limits_{i=1}^{n}{{\mu }_{A_{i}^{j}}}({{z}_{i}})}{\sum\limits_{j=1}^{h}\left( \prod\limits_{i=1}^{n}{{\mu }_{A_{i}^{j}}}({{z}_{i}}) \right)}.
\end{equation}
\begin{lemma}[\cite{FxTFLC1, FxTFLC2}]
Let ${f(Z)}$ be a continuous function on a compact set $\Omega \in {{\Re }^{n}}$; then there exists a fuzzy logic system, i.e., ${\text{w}}^{T}\Psi({Z})$, such that
\begin{equation}
\label {fuz4}
\underset{{Z}\in \Omega }{\mathop{\sup }}\,\left| {f(Z)}-{\text{w}}^{T}\Psi({Z}) \right|\le \bar{\varrho}
\end{equation}
where $\bar{\varrho}$ is the minimum fuzzy approximation error and $\Psi({Z})={{{\left[ {{\Psi }_{1}}({Z}),{{\Psi }_{2}}({Z}),...,{{\Psi }_{h}}({Z}) \right]}^{T}}}/{\sum\limits_{j=1}^{h}{{{\Psi }_{j}}({Z})}}$ is the fuzzy basis function vector.
\end{lemma}
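As an illustration of (\ref{fuz2}) and (\ref{fuz3}), the following Python sketch evaluates the normalized fuzzy basis vector and the output of a fuzzy logic system with Gaussian membership functions; the rule centers, the width, and the random weights are illustrative choices only.
\begin{verbatim}
import numpy as np

def fuzzy_basis(z, centers, width=2.0):
    """Normalized fuzzy basis vector Psi(Z) for Gaussian memberships.

    z: input vector of shape (n,); centers: array of shape (h, n) holding
    one center per rule and input. Rule activations are the products of
    the memberships over the inputs, normalized to sum to one."""
    act = np.exp(-((z[None, :] - centers) ** 2) / width ** 2).prod(axis=1)
    return act / act.sum()

def fuzzy_output(z, centers, w, width=2.0):
    """Fuzzy logic system output y = w^T Psi(Z)."""
    return float(w @ fuzzy_basis(z, centers, width))

# Example: h = 9 rules on a 2-dimensional input, random weights.
rng = np.random.default_rng(0)
cx, cy = np.meshgrid(np.linspace(-7, 7, 3), np.linspace(-7, 7, 3))
centers = np.stack([cx.ravel(), cy.ravel()], axis=1)
w = rng.normal(size=centers.shape[0])
print(fuzzy_output(np.array([0.5, -1.0]), centers, w))
\end{verbatim}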
\subsection {Graph theory}
A graph $G=\{\Lambda,\Xi\}$ is used to describe the formation shape among a group of AUVs, where $\Lambda=\{\nu_1, \nu_2,...,\nu_N\}$ denotes the $N$ AUV followers and $\Xi\subseteq{\Lambda\times{\Lambda}}$ is the set of edges. $A=[\alpha_{ij}]\in\Re^{N\times{N}}$ denotes the weights of the edges, where $\alpha_{ij}=\alpha_{ji}>0$ if there is an edge between AUVs $i$ and $j$, i.e., $(\nu_j,\nu_i)\in{\Xi}$, and $\alpha_{ij}=\alpha_{ji}=0$ otherwise. Let $B=\text{diag}\{b_1,...,b_N\}$, where $b_i>0$ indicates that the follower AUV $i$ can receive the command signals directly from the AUV leader; otherwise $b_i=0$. The Laplacian matrix is $L=[l_{i,j}]\in\Re^{N\times{N}}$ with $l_{i,j}=-\alpha_{i,j}$ for $i\neq j$, and $l_{i,i}=\sum_{j=1}^N{\alpha_{i,j}}$. It is assumed that the graph $G$ is undirected and connected, and that the desired trajectory information from the virtual leader is transferred to at least one AUV, so that not all the elements of $B$ are identically zero. Therefore, $L+B>0$.
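The matrices $L$ and $B$ are straightforward to construct and to check for positive definiteness; a small Python sketch for an illustrative chain topology, with the leader signal available only to the first follower, is given below.
\begin{verbatim}
import numpy as np

def laplacian(adj):
    """Graph Laplacian L = diag(row sums of A) - A for a symmetric A."""
    adj = np.asarray(adj, dtype=float)
    return np.diag(adj.sum(axis=1)) - adj

# Illustrative chain topology 1-2-3-4; only AUV 1 receives the leader signal.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
B = np.diag([1.0, 0.0, 0.0, 0.0])
L = laplacian(A)
# For a connected graph with at least one b_i > 0, L + B is positive definite.
print(np.all(np.linalg.eigvalsh(L + B) > 0))
\end{verbatim}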
\subsection{Dynamics of AUVs and Control Objective}
In this paper, a control method is derived that coordinates the operations of $N$ AUVs with the dynamics described in (\ref{D1}) in a consensus manner.
\begin{equation}
\begin{aligned}
\label {D1}
&\dot{\eta}_i=J_i(\eta_{2,i})\upsilon_i,\\
&M_i\dot{\upsilon}_i+C_i(\upsilon_i)\upsilon_i+D_i(\upsilon_i)\upsilon_i=u_i\left(\tau_i(t)\right)+d_i(t,\eta_i,\upsilon_i)
\end{aligned}
\end{equation}
where $\eta_i=[\eta_{1,i},\eta_{2,i}]^T\in\Re^{6\times{1}}$, $\eta_{1,i}=[x_i, y_i, z_i]^T\in\Re^{3\times{1}}$, $\eta_{2,i}=[\phi_i, \theta_i, \psi_i]^T\in\Re^{3\times{1}}$ denote the position and orientation of the $i$-th AUV, respectively. $\upsilon_i=[\upsilon_{1,i}, \upsilon_{2,i}]^T\in\Re^{6\times1}$, $\upsilon_{1,i}=[\upsilon_{x,i}, \upsilon_{y,i}, \upsilon_{z,i}]^T\in\Re^{3\times1}$, $\upsilon_{2,i}=[\omega_{x,i}, \omega_{y,i}, \omega_{z,i}]^T\in\Re^{3\times1}$ represent the translational and rotational velocities of the $i$-th AUV, respectively. $u_i\left(\tau_i(t)\right)\in\Re^{6\times1}$, which will be described in (\ref{B15}), represents the control effort subject to saturation nonlinearity for the $i$-th AUV. The descriptions of the inertia matrix $M_i\in\Re^{6\times6}$, the Coriolis and centripetal matrix $C_i(\upsilon_i)\in\Re^{6\times6}$, the hydrodynamic matrix $D_i(\upsilon_i)\in\Re^{6\times6}$, and the Jacobian matrix $J_i(\eta_{2,i})$ can be found in \cite{SMC3}. $d_i(t,\eta_i,\upsilon_i)\in\Re^{6\times1}$ denotes the lumped model uncertainty and disturbance component in the system.
\textit{Control Objective:} The objective of distributed consensus control is to design an appropriate controller for each AUV with the dynamics (\ref{D1}) so that the group of AUVs can: (i) form a desired formation shape, and (ii) follow a predefined trajectory, known as the virtual leader, within a fixed time. The desired formation shape of a group of AUVs is determined by specific relative postures, i.e., positions and orientations, between AUVs.
\section{Fixed-time Backstepping Sliding Mode Control Design for Consensus Formation Tracking Control}
Let $\eta_i^d$, $\dot{\eta}_i^d$ and $\ddot{\eta}_i^d$ be the desired position, velocity and acceleration of the virtual leader. Define the position and orientation tracking errors between the objective trajectories and the reference trajectory for $i$-th AUV $(i\in\Gamma, \Gamma=\{1,...,N\})$ as follows:
\begin{equation}
\begin{aligned}
\label {2}
&\varepsilon_{1,i}=\sum_{j\in{\Gamma}}^{} \alpha_{ij} (\eta_i-\eta_j-\delta_{ij})+b_i(\eta_i-\eta_i^d-\delta_{id})\\
&\dot{\varepsilon}_{1,i}=\sum_{j\in{\Gamma}}^{} \alpha_{ij} (\dot{\eta}_i-\dot{\eta}_j)+b_i(\dot{\eta}_i-\dot{\eta}_i^d).
\end {aligned}
\end{equation}
Here, $\alpha_{ij}\geq{0}$ and $b_i\geq{0}$ are defined as in section II.C. $\delta_{ij}$ indicates the relative position and orientation between $i$-th AUV and $j$-th AUV ($j\in{\Gamma}$). $\delta_{id}$ denotes the relative posture between the $i$-th AUV and the reference trajectory (i.e., the virtual leader). All the AUVs are expected to have the same velocity and acceleration as the desired reference trajectory.
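For implementation, the first line of (\ref{2}) reduces to a weighted sum of posture differences. The Python sketch below evaluates it for one AUV, with array shapes chosen for illustration (postures stored as $6$-vectors); it is not part of the derivation.
\begin{verbatim}
import numpy as np

def consensus_error(i, eta, eta_d, adj, b, delta, delta_d):
    """Consensus tracking error epsilon_{1,i} of eq. (2), first line.

    eta:     (N, 6) postures of all AUVs
    eta_d:   (6,)   posture of the virtual leader
    adj:     (N, N) edge weights alpha_ij
    b:       (N,)   leader-access gains b_i
    delta:   (N, N, 6) desired offsets delta_ij between AUVs
    delta_d: (N, 6) desired offsets delta_id to the leader"""
    err = np.zeros(6)
    for j in range(eta.shape[0]):
        err += adj[i, j] * (eta[i] - eta[j] - delta[i, j])
    err += b[i] * (eta[i] - eta_d - delta_d[i])
    return err
\end{verbatim}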
Differentiating the velocity of tracking error $\dot{\varepsilon}_{1,i}$ with respect to time, we have:
\begin{equation}
\begin{aligned}
\label {3}
\ddot{\varepsilon}_{1,i}=\sum_{j\in{\Gamma}}^{} \alpha_{ij} (\ddot{\eta}_i-\ddot{\eta}_j)+b_i(\ddot{\eta}_i-\ddot{\eta}_i^d)
\end {aligned}
\end{equation}
where $\ddot{\eta}_i$ and $\ddot{\eta}_j$ represent the acceleration of $i$-th AUV and its neighbors $j\in{\Gamma}$, respectively. Based on (\ref{D1}), the dynamic model of the $i$-th AUV can be expressed as:
\begin{equation}
\begin{aligned}
\label {4}
\ddot{\eta}_i=&\Phi_i\left(\upsilon_i,\eta_{i}\right)\upsilon_i+J_i\left(\eta_{2,i}\right)\Pi_iu\left(\tau_i(t)\right)\\
&+J_i(\eta_{2,i})\Pi_id_i(t,\eta_i,\upsilon_i)
\end {aligned}
\end{equation}
where,\\
$\Pi_i=M_i^{-1}$ and $\Phi_i(\upsilon_i,\eta_{i})=\dot{J}_i(\eta_{2,i})-J_i(\eta_{2,i})\Pi_iC_i(\upsilon_i)-J_i(\eta_{2,i})\Pi_iD_i(\upsilon_i)$.
\\
For facilitating the design of controllers later, the following matrices are defined:\\
$\bar{\Phi}(\upsilon,\eta)=\text{diag}\{\Phi_1(\upsilon_1, \eta_1), ...,\Phi_N(\upsilon_N, \eta_N)\}$,\\
$\bar{\Pi}=\text{diag}\{\Pi_1,...,\Pi_N\}$,\\
$\bar{J}(\eta_2)=\text{diag}\{J_1(\eta_{2,1}),...,J_N(\eta_{2,N})\}$,\\
$u\left(\tau(t)\right)=[u_1\left(\tau_1(t)\right), u_2\left(\tau_2(t)\right),...,u_N\left(\tau_N(t)\right)]^T$,\\
$d=[d_1(t,\eta_1,\upsilon_1),d_2(t,\eta_2,\upsilon_2),...,d_N(t,\eta_N,\upsilon_N)]^T$.\\
Therefore,
\begin{equation}
\begin{aligned}
\label {4_2}
\ddot{\eta}=\bar{\Phi}(\upsilon,\eta)\upsilon+\bar{J}(\eta_2)\bar{\Pi}u\left(\tau(t)\right)+\bar{J}(\eta_2)\bar{\Pi}d.
\end {aligned}
\end{equation}
\begin{assumption}
The disturbance term $J_i(\eta_{2,i})\Pi_id_i(t,\eta_i,\upsilon_i)$ is bounded by the positive constant $\tilde{\lambda}_i$:
\begin{equation}
\begin{aligned}
\label {5}
\|J_i(\eta_{2,i})\Pi_id_i(t,\eta_i,\upsilon_i)\|\leq\tilde{\lambda}_i,{}{}{}i\in\Gamma.
\end {aligned}
\end{equation}
The parameter $\tilde{\lambda}_i$ typically depends on the internal model uncertainties and external environmental disturbances (i.e., marine environment) of the vehicles.
\end{assumption}
Letting the following variables:
\begin{equation}
\begin{aligned}
\label {6}
\bar{\varepsilon}_1=[\varepsilon_{1,1}, \varepsilon_{1,2},...,\varepsilon_{1,N}]^T,\\
\bar{\varepsilon}_2=[\dot{\varepsilon}_{1,1}, \dot{\varepsilon}_{1,2},...,\dot{\varepsilon}_{1,N}]^T,
\end {aligned}
\end{equation}
and\\
\begin{equation}
\begin{aligned}
\label {7}
\ddot{\eta}=[\ddot{\eta}_1, \ddot{\eta}_2,...,\ddot{\eta}_N]^T.
\end {aligned}
\end{equation}
Combining the results in (\ref{3}), (\ref{6}), and (\ref{7}), the overall error dynamics become:
\begin{equation}
\begin{aligned}
\label {8}
&\dot{\bar{\varepsilon}}_1=\bar{\varepsilon}_2,\\
&\dot{\bar{\varepsilon}}_2=\left(L+B\right)\left(\ddot{\eta}-\textbf{1}_N\otimes\ddot{\eta}^d\right),\\
\end {aligned}
\end{equation}
where $\otimes$ denotes the Kronecker product between two matrices. $\textbf{1}_N$ stands for an $N\times{1}$ vector with unitary elements.
Then, based on (\ref{8}), a fixed time backstepping SMC can be designed as follows:
\\
\\
\textbf{Step 1}: The first sliding surface is selected as:
\begin{equation}
\begin{aligned}
\label {B2}
s_1(t)=\bar{\varepsilon}_1(t).
\end {aligned}
\end{equation}
Differentiating (\ref{B2}) yields
\begin{equation}
\begin{aligned}
\label {B3}
\dot{s}_1(t)=\alpha_s(t),
\end {aligned}
\end{equation}
where $\alpha_s(t)=\dot{\bar{\varepsilon}}_1(t)$ is identified as the virtual control of the system (\ref{B3}).
To stabilise the sliding surface $s_1(t)$, the following virtual control input is designed:
\begin{equation}
\begin{aligned}
\label {B4}
\alpha_s=-\left(k_1s_1+k_2s_1^{\gamma}+k_3s_1^{\iota}\right),
\end {aligned}
\end{equation}
where $k_1>0$, $k_2>0$, and $k_3>0$, with $0<\gamma<1$ and $\iota>1$.
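In an implementation, the fractional powers in (\ref{B4}) are usually applied element-wise in the sign-preserving sense, $\mathrm{sig}(s)^{a}=\mathrm{sign}(s)|s|^{a}$, so that the virtual control remains well defined for negative errors. The Python sketch below follows this convention; the gains are the ones used later in the simulation section, but the sketch itself is only illustrative.
\begin{verbatim}
import numpy as np

def sig(x, a):
    """Element-wise sign-preserving power: sign(x) * |x|^a."""
    return np.sign(x) * np.abs(x) ** a

def virtual_control(s1, k1=5.0, k2=0.4, k3=0.4, gamma=5/7, iota=7/5):
    """Fixed-time virtual control of (B4):
    alpha_s = -(k1*s1 + k2*sig(s1)^gamma + k3*sig(s1)^iota)."""
    return -(k1 * s1 + k2 * sig(s1, gamma) + k3 * sig(s1, iota))

print(virtual_control(np.array([0.5, -2.0, 0.0])))
\end{verbatim}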
Consider a candidate Lyapunov function below:
\begin{equation}
\begin{aligned}
\label {B5}
V_1&=\frac{1}{2}s_1^T{s}_1.\\
\end {aligned}
\end{equation}
Substituting (\ref{B4}) into the derivative of (\ref{B5}), we obtain:
\begin{equation}
\begin{aligned}
\label {B6}
\dot{V}_1&=s_1^T\dot{s}_1\\
&=-s_1^T\left(k_1s_1+k_2s_1^{\gamma}+k_3s_1^{\iota}\right)\\
&=-k_1s_1^Ts_1-k_2\left(s_1^Ts_1\right)^{\frac{\gamma+1}{2}}-k_3\left(s_1^Ts_1\right)^{\frac{\iota+1}{2}}\\
&\leq{-k_2\left(s_1^Ts_1\right)^{\frac{\gamma+1}{2}}-k_3\left(s_1^Ts_1\right)^{\frac{\iota+1}{2}}}\\
&\leq-2^\frac{\gamma+1}{2}k_2\left(\frac{1}{2}s_1^Ts_1\right)^\frac{\gamma+1}{2}-2^\frac{\iota+1}{2}k_3\left(\frac{1}{2}s_1^Ts_1\right)^\frac{\iota+1}{2}.
\end {aligned}
\end{equation}
\\
\textbf{Step 2}: Define the second sliding surface:
\begin{equation}
\begin{aligned}
\label {B7}
s_2=\bar{\varepsilon}_2-(L+B)\alpha_s.
\end {aligned}
\end{equation}
The derivative of $s_2$ is:
\begin{equation}
\begin{aligned}
\label {B8}
\dot{s}_2&=\dot{\bar{\varepsilon}}_2-\left(L+B\right)\dot{\alpha}_s\\
&=\left(L+B\right)\left(\ddot{\eta}-\textbf{1}_N\otimes\ddot{\eta}^d-\dot{\alpha}_s\right).\\
\end {aligned}
\end{equation}
A candidate Lyapunov function is selected as
\begin{equation}
\begin{aligned}
\label {B9}
V_2=\frac{1}{2}s_2^T{s}_2.\\
\end {aligned}
\end{equation}
Substituting (\ref{B8}) into the derivative of (\ref{B9}), we obtain:
\begin{equation}
\begin{aligned}
\label {B10}
\dot{V}_2&=s_2^T\dot{s}_2\\
&=s_2^T\left(\left(L+B\right)\left(\ddot{\eta}-\textbf{1}_N\otimes\ddot{\eta}^d-\dot{\alpha}_s\right)\right)\\
&=s_2^T\left((L+B)\left(\bar{\Phi}(\upsilon,\eta)\upsilon+\bar{J}(\eta_2)\bar{\Pi}u(\tau(t)) \right. \right.\\
&\left. \left.+\bar{J}(\eta_2)\bar{\Pi}d-\textbf{1}_N\otimes\ddot{\eta}^d-\dot{\alpha}_s\right)\right).\\
\end {aligned}
\end{equation}
Based on (\ref{B10}), the backstepping sliding mode controller can be taken as
\begin{equation}
\begin{aligned}
\label {B11}
u(\tau(t))&=[\bar{J}(\eta_2)\bar{\Pi}]^{-1}\left(-\bar{\Phi}(\upsilon,\eta)\upsilon+\textbf{1}_N\otimes\ddot{\eta}^d+\dot{\alpha}_s+\tau^{'}\right),
\end {aligned}
\end{equation}
where,
\begin{equation}
\begin{aligned}
\label {B11_2}
\tau^{'}&=-\beta_s\text{sign}(s_2)-k_8s_2-k_9s_2^\gamma-k_{10}s_2^\iota,
\end {aligned}
\end{equation}
where $k_8$, $k_9$, and $k_{10}$ are positive constants, and $\beta_s$ is chosen such that $\beta_s>\tilde{\lambda}_i$ for all $i\in\Gamma$, where the bounds $\tilde{\lambda}_i$ are defined in Assumption 1.
Inserting the control input in (\ref{B11}) and (\ref{B11_2}) into (\ref{B10}), we have:
\begin{equation}
\begin{aligned}
\label {B12}
\dot{V}_2&=s_2^T\dot{s}_2\\
&=s_2^T\left(L+B\right)\left({-\beta_s\text{sign}(s_2)-k_8s_2-k_9s_2^{\gamma+1}} \right.\\
&\left. {-k_{10}s_2^{\iota+1}+\bar{J}(\eta_2)\bar{\Pi}d} \right)\\
&\leq-2^\frac{\gamma+1}{2}\left(L+B\right)k_9\left(\frac{1}{2}s_2^Ts_2\right)^\frac{\gamma+1}{2}\\
&-2^\frac{\iota+1}{2}\left(L+B\right)k_{10}\left(\frac{1}{2}s_2^Ts_2\right)^\frac{\iota+1}{2}.
\end {aligned}
\end{equation}
\textbf{Step 3}: We define a compounded candidate Lyapunov function:
\begin{equation}
\begin{aligned}
\label {B13_1}
{V}=V_1+V_2.
\end {aligned}
\end{equation}
Differentiating (\ref{B13_1}) yields:
\begin{equation}
\begin{aligned}
\label {B13}
\dot{V}&=\dot{V}_1+\dot{V}_2\\
&\leq-2^\frac{\gamma+1}{2}k_2\left(\frac{1}{2}s_1^Ts_1\right)^\frac{\gamma+1}{2}-2^\frac{\iota+1}{2}k_3\left(\frac{1}{2}s_1^Ts_1\right)^\frac{\iota+1}{2}\\
&-2^\frac{\gamma+1}{2}(L+B)k_9\left(\frac{1}{2}s_2^Ts_2\right)^\frac{\gamma+1}{2}\\
&-2^\frac{\iota+1}{2}(L+B)k_{10}\left(\frac{1}{2}s_2^Ts_2\right)^\frac{\iota+1}{2}\\
&\leq{-2^{\frac{\gamma+1}{2}}}\zeta_1\left(V_1^{\frac{\gamma+1}{2}}+V_2^{\frac{\gamma+1}{2}}\right)\\
&-{2}^{\frac{\iota+1}{2}}\zeta_2\left(V_1^{\frac{\iota+1}{2}}+V_2^{\frac{\iota+1}{2}}\right)\\
&\leq{-2^{\frac{\gamma+1}{2}}}\zeta_1\left(V^{\frac{\gamma+1}{2}}\right)-{2}^{\frac{\iota+1}{2}}\zeta_2\left(V^{\frac{\iota+1}{2}}\right).
\end {aligned}
\end{equation}
where $\zeta_1=\text{min}\left(k_2,\left(L+B\right)k_9\right)$ and $\zeta_2=\text{min}\left(k_3,\left(L+B\right)k_{10}\right)$.
Therefore, thanks to Lemma \ref{lemma1}, the global fixed-time convergence can be established for the system (\ref{B13}), and the reaching time can be obtained as:
\begin{equation}
\begin{aligned}
\label {B14}
T\leq{\frac{2}{{\zeta_12^{\frac{\gamma+1}{2}}}(1-\gamma)}}+\frac{2}{{2}^{\frac{\iota+1}{2}}\zeta_2(\iota-1)}.
\end {aligned}
\end{equation}
\section{Distributed Backstepping Fuzzy Sliding Mode Controller with Input Saturation}
The backstepping SMC presented in section III has two main shortcomings: (i) a large sliding gain must be chosen based on Assumption 1, so when the disturbance is large the controller produces severe chattering in the system, and (ii) the effects of saturated control torques have not been considered. In this section, we introduce an auxiliary variable and a fuzzy approximation to overcome these shortcomings.
The designed control input $\tau_i(t)\in\Re^n$ is affected by the saturation nonlinearity and can be expressed as \cite{Sat2}:
\begin{equation}
\begin{aligned}
\label {B15}
u_i(\tau_i(t))=\begin{cases}
\text{sign}(\tau_i(t))\tau_{\text{max}_i}, & |\tau_i(t)|\geq{\tau_{\text{max}_i}},\\
\tau_i(t), & |\tau_i(t)|<{\tau_{\text{max}_i}},
\end{cases}
\end {aligned}
\end{equation}
where ${\tau_{\text{max}_i}}$ represents the maximum control torque allowed for the $i$-th AUV.
Furthermore, considering the input saturation, the saturated control torque can be approximated by
\begin{equation}
\begin{aligned}
\label {B16}
u_i(\tau_i)=g_i(\tau_i)+\varsigma_i(\tau_i),
\end {aligned}
\end{equation}
where $g_i(\tau_i)$ is a smooth function. $\varsigma_i(\tau_i)$ is the bounded approximation error. $g_i(\tau_i)$ can be chosen as \cite{Sat2}:
\begin{equation}
\begin{aligned}
\label {B17}
g_i(\tau_i)&=\tau_{\text{max}_i}\times{\text{tanh}}\left(\frac{\tau_i}{\tau_{\text{max}_i}}\right)\\
&=\tau_{\text{max}_i}\frac{e^{\tau_i/\tau_{\text{max}_i}}-e^{-\tau_i/\tau_{\text{max}_i}}}{e^{\tau_i/\tau_{\text{max}_i}}+e^{-\tau_i/\tau_{\text{max}_i}}}.
\end {aligned}
\end{equation}
The approximation error $\varsigma_i(\tau_i)$ is bounded by
\begin{equation}
\begin{aligned}
\label {B18}
|\varsigma_i(\tau_i)|=|u_i(\tau_i)-g_i(\tau_i)|\leq\tau_{{\text{max}_i}}(1-\text{tanh}(1))=\bar{\Delta}_i.
\end {aligned}
\end{equation}
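The bound (\ref{B18}) can be verified numerically: the gap between the hard saturation (\ref{B15}) and the smooth approximation (\ref{B17}) is largest at $|\tau_i|=\tau_{\text{max}_i}$. A short Python sketch is given below, using $\tau_{\text{max}}=300$ (the value used later in the simulations) purely for illustration.
\begin{verbatim}
import numpy as np

tau_max = 300.0
tau = np.linspace(-3 * tau_max, 3 * tau_max, 2001)

u_hard = np.clip(tau, -tau_max, tau_max)      # hard saturation, eq. (B15)
g_smooth = tau_max * np.tanh(tau / tau_max)   # smooth approximation, eq. (B17)
gap = np.abs(u_hard - g_smooth)

bound = tau_max * (1.0 - np.tanh(1.0))        # Delta_bar of eq. (B18)
print(gap.max() <= bound + 1e-9, gap.max(), bound)
\end{verbatim}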
Let: $g(\tau(t))=[g_1(\tau_1(t)), g_2(\tau_2(t)),...,g_N(\tau_N(t))]^T$, $\varsigma(\tau(t))=[\varsigma_1(\tau_1(t)), \varsigma_2(\tau_2(t)),...,\varsigma_N(\tau_N(t))]^T$. To compensate for the saturated controller, an adaptive auxiliary variable $\mu$ is introduced as:
\begin{equation}
\begin{aligned}
\label {B19}
\dot{\mu}=-\mu+\bar{J}(\eta_2)\bar{\Pi}\left(g(\tau)-\tau\right).
\end {aligned}
\end{equation}
For this controller, \textbf{Step 1} is as in section III. \textbf{Step 2} will be re-designed as follows:\\
\\
\textbf{Step 2}: The error variable $s_2$ can be redefined as
\begin{equation}
\begin{aligned}
\label {B20}
s_2=\bar{\varepsilon}_2-(L+B)\alpha_s-(L+B)\mu.
\end {aligned}
\end{equation}
The derivative of $s_2$ can be computed as
\begin{equation}
\begin{aligned}
\label {B21}
\dot{s}_2&=\dot{\bar{\varepsilon}}_2-(L+B)\dot{\alpha}_s-(L+B)\dot{\mu}\\
&=\left(L+B\right)\left(\ddot{\eta}-\textbf{1}_N\otimes\ddot{\eta}^d-\dot{\alpha}_s-\dot{\mu}\right)\\
&=\left(L+B\right)\left(\bar{\Phi}(\upsilon,\eta)\upsilon+\bar{J}(\eta_2)\bar{\Pi}u(\tau) \right.\\
&\left. +\bar{J}(\eta_2)\bar{\Pi}d-\textbf{1}_N\otimes\ddot{\eta}^d-\dot{\alpha}_s-\dot{\mu}\right)\\
&=(L+B)\left(\bar{\Phi}(\upsilon,\eta)\upsilon+\bar{J}(\eta_2)\bar{\Pi}u(\tau)\right.\\
&\left. +\bar{J}(\eta_2)\bar{\Pi}d-\textbf{1}_N\otimes\ddot{\eta}^d-\dot{\alpha}_s\right.\\
&\left.+\mu-\bar{J}(\eta_2)\bar{\Pi}\left(g(\tau)-\tau\right)\right)\\
&=(L+B)\left(\bar{\Phi}(\upsilon,\eta)\upsilon+\bar{J}(\eta_2)\bar{\Pi}\left(\varsigma(\tau\right)+d) \right.\\
&\left. -\textbf{1}_N\otimes\ddot{\eta}^d+\mu+\bar{J}(\eta_2)\bar{\Pi}\tau-\dot{\alpha}_s\right).
\end {aligned}
\end{equation}
By using a FLC to approximate the lumped uncertainty and disturbance $\bar{J}(\eta_2)\bar{\Pi}(\varsigma(\tau)+d)$, the derivative of the error $s_2$ in (\ref{B21}) can be represented as
\begin{equation}
\begin{aligned}
\label {45}
\dot{s}_2=&(L+B)\left(\bar{\Phi}(\upsilon,\eta)\upsilon+W^{*T}\Psi(Z)+\epsilon(t) \right.\\
&\left.-\textbf{1}_N\otimes\ddot{\eta}^d+\mu+\bar{J}(\eta_2)\bar{\Pi}\tau-\dot{\alpha}_s \right),
\end {aligned}
\end{equation}
where $\epsilon(t)=\bar{J}(\eta_2)\bar{\Pi}\left(\varsigma(\tau)+d(t,\eta,\upsilon)\right)-W^{*T}\Psi(Z)$. From (\ref{5}), (\ref{B18}), and Lemma 3, we obtain $||\epsilon(t)||\leq\bar{\epsilon}$, where $\bar{\epsilon}>0$.
A candidate Lyapunov function is defined as
\begin{equation}
\begin{aligned}
\label {46}
V_3(s_2,\tilde{\theta})=\frac{1}{2}s_2^Ts_2+\frac{1}{2}\tilde{\theta}^T\tilde{\theta},
\end {aligned}
\end{equation}
where $\tilde{\theta}= \theta-\hat{\theta}$ is the weight approximation error with $\theta=\max_{h\in{H}}\Vert{W^*}\Vert$, and $\hat{\theta}$ is the approximation of $\theta$.
The derivative of $V_3$ in (\ref{46}) can be computed as:
\begin{equation}
\begin{aligned}
\label {47}
\dot{V}_3&=s_2^T\dot{s}_2-\tilde{\theta}^T\dot{\hat{\theta}}\\
&=s_2^T(L+B)\left(\bar{\Phi}(\upsilon,\eta)\upsilon+W^{*T}\Psi(Z)+\epsilon(t) \right.\\
&\left. -\textbf{1}_N\otimes\ddot{\eta}^d+\mu+\bar{J}(\eta_2)\bar{\Pi}\tau-\dot{\alpha}_s\right)-\tilde{\theta}^T\dot{\hat{\theta}}.
\end {aligned}
\end{equation}
Let $F_{sum}=\bar{\Phi}(\upsilon,\eta)\upsilon-\textbf{1}_N\otimes\ddot{\eta}^d+\mu-\dot{\alpha}_s$. Applying Young's inequality, we have
\begin{equation}
\begin{aligned}
\label {48}
s_2^TW^{*T}\Psi(Z)\leq{\frac{1}{2}+\frac{1}{2}s_2^2\theta\Psi(Z)^T\Psi(Z)}.
\end {aligned}
\end{equation}
Then, $\dot{V}_3$ becomes:
\begin{equation}
\begin{aligned}
\label {49}
\dot{V}_3&\leq{s_2}^T(L+B)\left({\bar{J}(\eta_2)\bar{\Pi}\tau+F_{sum}+\epsilon(t)}\right.\\
&\left.{+\frac{1}{2}s_2^2\hat{\theta}\Psi(Z)^T\Psi(Z)}\right)+\frac{1}{2}(L+B)-\tilde{\theta}^T\dot{\hat{\theta}}\\
&\leq{s_2^T}(L+B)\left({\bar{J}(\eta_2)\bar{\Pi}\tau+F_{sum}+\epsilon(t)}\right.\\
& \left.{+\frac{1}{2}s_2^2\hat{\theta}\Psi(Z)^T\Psi(Z)}\right)+\frac{1}{2}(L+B)\\
&+\tilde{\theta}^T\left(\frac{1}{2}(L+B)s_2^2\Psi(Z)^T\Psi(Z)-\dot{\hat{\theta}}\right).\\
\end {aligned}
\end{equation}
Based on (\ref{49}), a distributed control input is designed as:
\begin{equation}
\begin{aligned}
\label {491}
\tau&=\left(\bar{J}\left(\eta_2\right)\bar{\Pi}\right)^{-1}\left(-F_{sum}-\frac{1}{2}s_2^2\hat{\theta}\Psi(Z)^T\Psi(Z)\right.\\
&\left. -\beta_s{\text{sign}}(s_2)-k_8s_2-k_9s_2^\gamma-k_{10}s_2^\iota \right),
\end {aligned}
\end{equation}
where $\beta_s$ is selected such that $\beta_s>\bar{\epsilon}$.
The adaptive law of the FLC can be selected as
\begin{equation}
\begin{aligned}
\label {50}
\dot{\hat{\theta}}=\frac{1}{2}\left(L+B\right)s_2^2\Psi(Z)^T\Psi(Z)-w_1\hat{\theta}^{\gamma}-w_2\hat{\theta}^{\iota}.
\end {aligned}
\end{equation}
Therefore,
\begin{equation}
\begin{aligned}
\label {51}
\dot{V}_3&\leq{s_2^T(L+B)\left(-k_8s_2-k_9s_2^\gamma-k_{10}s_2^\iota\right)}\\
&+\frac{1}{2}(L+B)+\tilde{\theta}^T\left(w_1\hat{\theta}^{\gamma}+w_2\hat{\theta}^{\iota}\right).
\end {aligned}
\end{equation}
Using the following inequalities:
\begin{equation}
\begin{aligned}
\label {52}
\tilde{\theta}\hat{\theta}^{\gamma}\leq{l_1\theta^{1+\gamma}-l_2\tilde{\theta}^{1+\gamma}},
\end {aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\label {52b}
\tilde{\theta}\hat{\theta}^{\iota}\leq{l_1\theta^{1+\iota}-l_2\tilde{\theta}^{1+\iota}}.
\end {aligned}
\end{equation}
Taking the results in (\ref{51}), (\ref{52}) and (\ref{52b}) together, we have:
\begin{equation}
\begin{aligned}
\label {53}
\dot{V}_3&\leq\left(L+B\right)\left(-k_8s_2-k_9s_2^{\gamma+1}-k_{10}s_2^{\iota+1}\right)\\
&+\frac{1}{2}(L+B)+w_1\left({l_1\theta^{1+\gamma}-l_2\tilde{\theta}^{1+\gamma}}\right)\\
&+w_2\left({l_1\theta^{1+\iota}-l_2\tilde{\theta}^{1+\iota}}\right)\\
&\leq(L+B)\left({-k_8s_2-k_9s_2^{\gamma+1}-k_{10}s_2^{\iota+1}}\right)\\
&-l_2\left(w_1\tilde{\theta}^{1+\gamma}+w_2\tilde{\theta}^{1+\iota}\right)+\sigma,
\end {aligned}
\end{equation}
where:
\begin{equation}
\begin{aligned}
\label {54}
\sigma=\frac{1}{2}\left(L+B\right)+l_1\left(w_1{\theta}^{1+\gamma}+w_2{\theta}^{1+\iota}\right).
\end {aligned}
\end{equation}
Therefore,
\begin{equation}
\begin{aligned}
\label {55}
\dot{V}_3&\leq{-k_9\left(L+B\right)s_2^{\gamma+1}}-l_2w_1\tilde{\theta}^{1+\gamma}\\
&-k_{10}\left(L+B\right)s_2^{\iota+1}-l_2w_2\tilde{\theta}^{1+\iota}+\sigma\\
&\leq{-2^{\frac{\gamma+1}{2}}}\nu_1{V_3}^{\frac{1+\gamma}{2}}{-2^{\frac{\iota+1}{2}}}\nu_2{V_3}^{\frac{1+\iota}{2}}+\sigma.\\
\end {aligned}
\end{equation}
Here,
\begin{equation}
\begin{aligned}
\label {56}
\nu_1=\text{min}\left(k_9\left(L+B\right),l_2w_1\right)
\end {aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\label {57}
\nu_2=\text{min}\left(k_{10}\left(L+B\right),l_2w_2\right)
\end {aligned}
\end{equation}
Thus, according to Lemma \ref{lemma2}, $s_2$ and $\tilde{\theta}$ converge to a small residual set around zero within a fixed time. The convergence time can be estimated as $T\leq\frac{2}{\nu_1{(1-\gamma)}}+\frac{2}{\nu_2{(\iota-1)}}$.
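For completeness, one Euler step of the adaptive law (\ref{50}) can be written down directly. The Python sketch below treats $\hat{\theta}$ as a scalar, replaces $(L+B)$ by a scalar stand-in, and applies the fractional powers in the sign-preserving sense; the gains match the later simulation values, but the sketch is only an illustration.
\begin{verbatim}
import numpy as np

def sig(x, a):
    """Sign-preserving power: sign(x) * |x|^a."""
    return np.sign(x) * np.abs(x) ** a

def theta_hat_step(theta_hat, s2_sq, psi_sq, lb_gain,
                   w1=1.0, w2=1.0, gamma=5/7, iota=7/5, dt=1e-3):
    """One Euler step of the adaptive law (50).

    s2_sq   : squared magnitude of the sliding variable s_2
    psi_sq  : Psi(Z)^T Psi(Z)
    lb_gain : scalar stand-in for (L + B)"""
    dtheta = 0.5 * lb_gain * s2_sq * psi_sq \
             - w1 * sig(theta_hat, gamma) - w2 * sig(theta_hat, iota)
    return theta_hat + dt * dtheta
\end{verbatim}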
\\
\\
\textbf{Step 3}: Define a candidate Lyapunov function:
\begin{equation}
\begin{aligned}
\label {58}
{V}=V_1+V_3.
\end {aligned}
\end{equation}
The derivative of the above Lyapunov function is
\begin{equation}
\begin{aligned}
\label {59}
\dot{V}&\leq{-2^{\frac{\gamma+1}{2}}}\lambda_{min}\{k_2,k_5\}V_1^{\frac{\gamma+1}{2}}{-2^{\frac{\iota+1}{2}}}\lambda_{min}\{k_3,k_6\}V_1^{\frac{\iota+1}{2}}\\
&{-2^{\frac{\gamma+1}{2}}}\nu_1V_3^{\frac{\gamma+1}{2}}{-2^{\frac{\iota+1}{2}}}\nu_2V_3^{\frac{\iota+1}{2}}+\sigma\\
&\leq{-2^{\frac{\gamma+1}{2}}}\chi_1\left(V_1^{\frac{\gamma+1}{2}}+V_3^{\frac{\gamma+1}{2}}\right)\\
&-{2}^{\frac{\iota+1}{2}}\chi_2\left(V_1^{\frac{\iota+1}{2}}+V_3^{\frac{\iota+1}{2}}\right)+\sigma\\
&\leq{-2^{\frac{\gamma+1}{2}}}\chi_1\left(V^{\frac{\gamma+1}{2}}\right)-{2}^{\frac{\iota+1}{2}}\chi_2\left(V^{\frac{\iota+1}{2}}\right)+\sigma,\\
\end {aligned}
\end{equation}
where $\chi_1=\text{min}\{\nu_1, \lambda_{min}\{k_2,k_5\}\}$ and $\chi_2=\text{min}\{\lambda_{min}\{k_3,k_6\}, \nu_2\}$.
Therefore, according to Lemma \ref{lemma2}, the global fixed-time convergence of the system is guaranteed. The settling time can be calculated as:
\begin{equation}
\begin{aligned}
\label {60}
T\leq{\frac{2}{{\chi_12^{\frac{\gamma+1}{2}}}\kappa(1-\gamma)}}+\frac{2}{{\chi_2}2^{\frac{\iota+1}{2}}\kappa(\iota-1)}.
\end {aligned}
\end{equation}
\begin{remark}
The employment of the $\text{sign}$ function in (\ref{491}) generates chattering in the system. To reduce the chattering, the controller (\ref{491}) can be revised as
\begin{equation}
\begin{aligned}
\label {492}
\tau&=\left(\bar{J}(\eta_2)\bar{\Pi}\right)^{-1}\left(-F_{sum}-\frac{1}{2}s_2^2\hat{\theta}\Psi(Z)^T\Psi(Z) \right.\\
&\left. -\beta_s(\frac{s_2}{||s_2||+\epsilon_1})-k_8s_2-k_9s_2^\gamma-k_{10}s_2^\iota \right),
\end {aligned}
\end{equation}
where $\epsilon_1$ is a small positive number.
\end{remark}
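The effect of this modification is easy to see numerically: near the sliding surface the discontinuous term switches between $\pm 1$, while the boundary-layer term in (\ref{492}) stays small and continuous. A brief Python sketch, with the value $\epsilon_1=0.01$ used in the simulations:
\begin{verbatim}
import numpy as np

def smooth_sign(s, eps1=0.01):
    """Boundary-layer replacement for sign(s): s / (||s|| + eps1)."""
    return s / (np.linalg.norm(s) + eps1)

s = np.array([1e-3, -2e-3, 0.0])
print(np.sign(s))       # switches abruptly through zero -> chattering
print(smooth_sign(s))   # small, continuous correction near the surface
\end{verbatim}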
\section{Results and Discussions}
In this section, we validate the performance of the proposed algorithm. The dynamic model of each vehicle is described as in (\ref{D1}), where the parameters are selected as in Table \ref{table_1} \cite{SMC3}.
\begin{table}
\caption{Parameters used in the simulation of $i$th AUV ($i\in\{1,2,3,4\}$)}
\centering
\label{table_1}
\begin{tabular}{l l l l}
\hline\hline\\[-0.5ex]
Parameters &$Value$&Parameters &$Value$\\[0.5ex] \hline\\[-0.5ex]
$m_i$& $20$&$I_{x,i}$ & $20$\\
$I_{y,i}$ & $30$&$I_{z,i}$ & $35$\\
$\iota_{\upsilon{x},i}$&$-8$&$\iota_{\upsilon{y},i}$&$-10$\\
$\iota_{\upsilon{z},i}$&$-9$&$\iota_{\dot{\upsilon}{x},i}$&$-7$\\
$\iota_{\dot{\upsilon}{y},i}$&$-8$&$\iota_{\dot{\upsilon}{z},i}$&$-6$\\
$\iota_{{\omega}{x},i}$&$-0.2$&$\iota_{{\omega}{y},i}$&$-0.25$\\
$\iota_{{\omega}{z},i}$&$-0.15$&$\iota_{\dot{\omega}{x},i}$&$-20$\\
$\iota_{\dot{\omega}{y},i}$&$-30$&$\iota_{\dot{\omega}{z},i}$&$-35$\\
\hline\hline
\end{tabular}
\end{table}
Fig. \ref{fig.1} illustrates the connections between the AUVs and the virtual leader, with $\alpha_{12}=\alpha_{21}=\alpha_{23}=\alpha_{32}=\alpha_{34}=\alpha_{43}=1$. As illustrated in Fig. \ref{fig.1}, in the considered communication topology the desired trajectory is communicated only to AUV-1, i.e., $b_1=1$. Therefore, the $L$ and $B$ matrices can be calculated as:
\begin{equation}
\begin{aligned}
\label {61}
L=\begin{bmatrix}
1 &-1 & 0 & 0\\
-1 & 2 & -1 &0\\
0 & -1& 2&- 1\\
0 & 0 & -1 &1
\end{bmatrix},
B=\begin{bmatrix}
1 &0 & 0 & 0\\
0 & 0 & 0 &0\\
0 & 0& 0&0\\
0 & 0 & 0 &0
\end{bmatrix}.
\end {aligned}
\end{equation}
The moving trajectory of the virtual leader is selected as $\eta^d(t)=[30-30e^{-t}, 5t, 2t, 0, 0, 0]^T$. The desired relative postures between AUVs are given by $\delta_{12}=[0, 10, 0]^T$, $\delta_{21}=[0, -10, 0]^T$, $\delta_{23}=[-10, 0, 0]^T$, $\delta_{32}=[10, 0, 0]^T$, $\delta_{34}=[0, -10, 0]^T$, and $\delta_{43}=[0, 10, 0]^T$. All the vehicles have the same orientation. The relative distance between the virtual leader and AUV 1 is $\delta_{1d}=[20, 0, 0]^T$. It is assumed that the four AUVs start from the initial positions $\eta_1(0)=[2, 3, 3, 0.3, 0, 0.2]^T$, $\eta_2(0)=[2.5, 3.5, 3, 0.2, 0, 0.25]^T$, $\eta_3(0)=[2, 3, 3, 0.3, 0, 0.2]^T$, $\eta_4(0)=[3, 3, 2, 0.3, 0, 0.2]^T$, and $\upsilon_i(0)=0_{6\times{1}}$, $i\in\{1, 2, 3, 4\}$, is set for the initial velocities of the AUVs.
The disturbance term is assumed to be:
\begin{equation}
\begin{aligned}
\label {63}
d_i(t,\eta_i,\upsilon_i)=&[2.5\sin(t)-0.5\upsilon_{xi}^2-0.7\sin(\upsilon_{xi}\upsilon_{yi}),\\
&2.5\cos(t)+0.1\upsilon_{xi}^2+0.5\sin(\upsilon_{yi}), \\
&2.5\sin(t)+0.7\upsilon_{xi}^2+0.8\sin(\upsilon_{zi}),\\
&0.5\sin(t)+0.2\upsilon_{\phi{i}}^3,\\
&0.5\cos(t)-0.2\upsilon_{\theta{i}}^2,\\
&0.5\sin(t)-0.4\upsilon_{\psi{i}}^3]^T,\\
(i\in\{1, 2, 3, 4\}).
\end {aligned}
\end{equation}
Note that the above parameters are selected to be quite similar to the parameters used in \cite{SMC3} to facilitate the comparison later. However, in this experiment, the disturbance term (\ref{63}) is modeled to be more severe, including both environmental disturbances and model uncertainties.
The parameters of the proposed controller in this simulation, chosen based on a trial-and-error procedure, are $k_1=k_8=5$, $k_2=k_3=k_9=k_{10}=0.4$, $w_1=w_2=1$, and $\beta_s=20$. The exponents are $\gamma=5/7$ and $\iota=7/5$. The control efforts are saturated at $\tau_\text{max}=300$~Nm.
The proposed controller uses the below membership functions, which were tuned based on a trial-and-error procedure:\\
$
{{\mu }_{A_{i}^{1}}}=\exp \left( {-{{\left( {{Z}_{i}}+7 \right)}^{2}}}/{4}\; \right), {{\mu }_{A_{i}^{2}}}=\exp \left( {-{{\left( {{Z}_{i}}+5 \right)}^{2}}}/{4}\; \right),
{{\mu }_{A_{i}^{3}}}=\exp \left( {-{{\left( {{Z}_{i}}+3 \right)}^{2}}}/{4}\; \right), {{\mu }_{A_{i}^{4}}}=\exp \left( {-{{\left( {{Z}_{i}}+1 \right)}^{2}}}/{4}\; \right),
{{\mu }_{A_{i}^{5}}}=\exp \left( {-{{\left( {{Z}_{i}}+0 \right)}^{2}}}/{4}\; \right), {{\mu }_{A_{i}^{6}}}=\exp \left( {-{{\left( {{Z}_{i}}-1 \right)}^{2}}}/{4}\; \right),
{{\mu }_{A_{i}^{7}}}=\exp \left( {-{{\left( {{Z}_{i}}-3 \right)}^{2}}}/{4}\; \right), {{\mu }_{A_{i}^{8}}}=\exp \left( {-{{\left( {{Z}_{i}}-5 \right)}^{2}}}/{4}\; \right),
{{\mu }_{A_{i}^{9}}}=\exp \left( {-{{\left( {{Z}_{i}}-7 \right)}^{2}}}/{4}\; \right).$\\
The input of the FLC is $Z_i=[\eta_i, \upsilon_i]^T$. To reduce chattering, the controller (\ref{492}) is used and $\epsilon_1=0.01$.
In order to highlight the superior performance of the proposed controller, it is analysed in comparison with the distributed SMC \cite{SMC3}. The SMC is designed as in Appendix A. The sliding gain of the SMC is selected as $\beta_0=200$. Note that the SMC \cite{SMC3} does not consider the effects of input saturation in its design. The tracking performances of the proposed controller are shown in Figs. \ref{fig.2}, \ref{fig.3}, \ref{fig.4}, and \ref{fig.5}, while the performances of the SMC are shown in Figs. \ref{fig.6}, \ref{fig.7}, \ref{fig.8}, and \ref{fig.9}. In particular, Fig. \ref{fig.2} shows the formation shape of the four AUVs under the proposed controller. Compared with the formation shape of the AUVs under the SMC controller shown in Fig. \ref{fig.6}, the proposed controller provides faster and smoother convergence, as shown in Fig. \ref{fig.2}. Figs. \ref{fig.3} and \ref{fig.4} show the tracking errors $\varepsilon_{1,i}$ $(i=1,2,3,4)$ and $\varepsilon_{2,i}$ $(i=1, 2, 3, 4)$ under the proposed controller, which converge to zero. The errors $\varepsilon_{1,i}$ and $\varepsilon_{2,i}$ under the SMC are shown in Figs. \ref{fig.7} and \ref{fig.8}, respectively. Comparing Figs. \ref{fig.7} and \ref{fig.8} with Figs. \ref{fig.3} and \ref{fig.4}, respectively, shows that the SMC provides faster convergence of the tracking errors. However, this is because the sliding gain of the SMC was chosen to be much larger. Consequently, it leads to much higher, possibly physically unrealisable, control efforts, as shown in Fig. \ref{fig.9}. Fig. \ref{fig.5} shows the control efforts of the proposed controller, which are smoother and bounded by $\tau_\text{max}$.
\begin{figure}[!t]\centering
\includegraphics[width=8.5cm]{topology.JPG}
\caption{The communication topology graph for the formation control of four AUVs}
\label{fig.1}
\end{figure}
\begin{figure}[!t]\centering
\includegraphics[width=8.5cm]{3dspace.JPG}
\caption{The formation shape of four AUVs under the proposed controller}
\label{fig.2}
\end{figure}
\begin{figure}[!t]\centering
\includegraphics[width=8.5cm]{e1_bar_each.JPG}
\caption{Position tracking error $\varepsilon_{1}={e}_{1}$ of the AUVs under the proposed controller}
\label{fig.3}
\end{figure}
\begin{figure}[!t]\centering
\includegraphics[width=8.5cm]{e2_bar_each.JPG}
\caption{Velocity tracking error $\varepsilon_{2}={e}_{2}$ of the AUVs under the proposed controller}
\label{fig.4}
\end{figure}
\begin{figure}[!t]\centering
\includegraphics[width=8.5cm]{tau_each.JPG}
\caption{Control efforts $\tau_i (i=1,2,3,4)$ of the AUVs under the proposed controller}
\label{fig.5}
\end{figure}
\begin{figure}[!t]\centering
\includegraphics[width=8.5cm]{3dSMC.JPG}
\caption{The formation shape of four AUVs under the SMC controller}
\label{fig.6}
\end{figure}
\begin{figure}[!t]\centering
\includegraphics[width=8.5cm]{e1SMC.JPG}
\caption{Position tracking errors $\varepsilon_{1}=e_{1}$ of the AUVs under the SMC controller}
\label{fig.7}
\end{figure}
\begin{figure}[!t]\centering
\includegraphics[width=8.5cm]{e2SMC.JPG}
\caption{Velocity tracking errors $\varepsilon_{2}=e_{2}$ of the AUVs under the SMC controller}
\label{fig.8}
\end{figure}
\begin{figure}[!t]\centering
\includegraphics[width=8.5cm]{tauSMC.JPG}
\caption{The control efforts $\tau_{i}, (i=1, 2, 3, 4)$ of the AUVs under the SMC controller}
\label{fig.9}
\end{figure}
\section{Conclusion}
This paper has introduced a new distributed fixed-time consensus formation control for multiple AUV systems based on adaptive backstepping fuzzy sliding mode control. In this scheme, a distributed consensus tracking error was derived to form the distributed global error dynamics. Then, a fixed-time backstepping SMC was designed to obtain fixed-time convergence for the whole system. However, the backstepping fixed-time SMC had two main shortcomings: (i) it produces larger chattering due to the selection of a large sliding gain, and (ii) it does not consider the input saturation effects in the design. To overcome these shortcomings, an auxiliary variable and an adaptive fixed-time fuzzy logic approximation have been employed. The computer simulation of consensus formation control for four AUVs demonstrated that the fixed-time controller provides a quick response and stability for the group of AUVs and can handle the input saturation problem well.
In future work, we will address the multiple-constraint problem (i.e., workspace, state, and input constraints simultaneously) and obstacle avoidance problems for multiple collaborative AUVs. A physical experimental system based on BlueROV2 robots is being developed, and the experimental results will be reported in future work.
\appendices
{
\section*{Appendix A: Design Distributed sliding mode control}
The SMC can be derived as follows \cite{SMC3}:\\
First, the sliding surface is selected as
\begin{equation}
\begin{aligned}
\label {70}
s=k_1(L+B)\bar{\varepsilon}_1+\bar{\varepsilon}_2
\end {aligned}
\end{equation}
where $k_1$ is a positive constant.
The sliding mode control can be designed as
\begin{equation}
\begin{aligned}
\label {71}
\tau&=[\bar{J}(\eta_2)\bar{\Pi}]^{-1}\left(-\bar{\Phi}(\upsilon,\eta)\upsilon+\tau^{'}\right)
\end {aligned}
\end{equation}
where $\tau'$ is designed as
\begin{equation}
\begin{aligned}
\label {711}
\tau^{'}&=-k_1\bar{\varepsilon}_2+\textbf{1}_N\otimes\ddot{\eta}^d-\beta_0\text{sign}(s)
\end {aligned}
\end{equation}
To reduce the chattering, the term $\tau'$ in (\ref{711}) is revised as:
\begin{equation}
\begin{aligned}
\label {712}
\tau^{'}&=-k_1\bar{\varepsilon}_2+\textbf{1}_N\otimes\ddot{\eta}^d-\beta_0\frac{s}{||s||+\epsilon_1}.
\end {aligned}
\end{equation}
The convergence and stability analysis of the SMC can be found in \cite{SMC3}.
}
\section{Introduction}
Robotics have been extensively applied for many challenging applications, ranging from manufacturing, agriculture, space and ocean applications \cite{robotM1}, \cite{robotM2}. In some applications, the use of single robots or autonomous vehicles has limited efficiency, due to the limitation of sensing, endurance and payload carrying. To increase the efficiency of robots and autonomous vehicles for these applications, a concept of multiple collaborative robotics or swam robotics have been introduced \cite{swarm}. For underwater environment, multiple collaborative autonomous underwater vehicles (AUVs) have shown their great efficiency for many challenging applications like seabed monitoring, wind turbine inspection, marine debris monitoring and cleaning, etc \cite{AUV1}. However, controlling multiple AUVs working collaboratively is not a trivial task because the effects of nonlinear dynamics, communication delay between AUVs, and the effects of underwater environmental disturbances, i.e., waves, currents, etc., become more severe in underwater environment \cite{AUV2}.
Many elegant control methods have been investigated for increasing the tracking accuracy and robustness of multi-agent systems. Optimal controllers using distributed optimization have been developed \cite{optimal1}. A safe optimal controller based on control barrier function (CBF) has been proposed in \cite{optimal2}. Another approach based on reinforcement learning has been developed for the collaborative control of multi-agent systems \cite{optimal3}. Model predictive control (MPC) has been explored for multi-agent systems generally \cite{MPC1}, \cite{MPC2}, and for multiple AUV systems specifically \cite{MPCAUV}. Although optimal controllers and MPCs provide a good tracking performance when the full knowledge of the system can be known in advance, it is difficult to handle the disturbances and/or uncertainty components. In order to handle disturbance/uncertainty components, robust controllers have been extensively developed \cite{robust1}. Robust controllers are particularly efficient for the control system of AUVs due to their robustness against the nonlinear effects of underwater working conditions. Due to its strong robustness against disturbances (i.e., matched disturbances), sliding mode control (SMC) techniques have been developed \cite{SMC1}, \cite{SMC2}. Despite the advantages of high robustness, SMC generates high chattering, which can cause significant oscillations within the operation of mechanical systems. To reduce the chattering for SMC, a distributed bio-inspired SMC has been proposed for multiple AUVs \cite{SMC3}. However, the conventional SMCs do not provide finite time/fixed time convergence for the systems. To provide a finite time convergence, finite time consensus control methods have been developed for multi-agent systems \cite{finite1}, \cite{finite2}. To obtain both finite time convergence and higher robustness, finite time sliding mode controllers have also been introduced \cite{finiteSMC1}, \cite{finiteSMC2}. Finite time controllers have also been employed for single AUV system \cite{finiteM} and multiple AUV systems \cite{finiteAUV}, \cite{finiteAUV2}. In \cite{finiteAUV3}, a terminal SMC has been developed for the formation tracking control of multiple AUVs. The main drawback of the finite time controllers is that the convergence time of the system is dependent on the inital states of the systems. This issue, unfortunately, prevents the applicability of finite time controllers for many practical applications because, in practice, some initial states of some agents are unavailable or unknown. To overcome this drawback, fixed-time controllers have been studied recently \cite{fixedtime1}, \cite{fixedtime11}. The use of fixed-time controllers can provide a fixed-time convergence, which is independent with the initial states, for the multi-agent systems \cite{fixedtime2}, \cite{fixedtime3}, \cite{fixedtime4}.
One of the issues that reduces the tracking performance of robotic systems and multi-agent systems is the effects of unknown components such as unknown system parameters, friction terms, and faults, etc \cite{fault1}. This becomes even more severe for AUVs due to the severe effects of external environmental disturbances, especially for multiple collaborative AUV systems \cite{learning}. To approximate the unknown components, many learning techniques have been extensively developed. An iterative learning method has been employed for multi-agent systems \cite{ilearning}. An adaptive NN has been developed for multi-agent systems \cite{NN1}, \cite{NN2}, and for multiple AUVs \cite{NNAUV}. Adaptive fuzzy logic controllers have also been developed to take the knowledge of human about the dynamic system into the design to increase the approximation performance of the FLC \cite{FLC1}, \cite{FLC2}. Adaptive fixed-time FLCs have been developed to preserve the advantages of both fixed-time convergence property and the approximation capacity of FLC in \cite{FxTFLC1}, \cite{FxTFLC2}. However, the adaptive laws of the existing fixed-time FLCs do not provide fixed-time convergence for the system. In practice, it is desired that all the adaptive laws of the system can be convergent within a fixed-time to guarantee the global convergence of the system within a fixed-time. This is the main motivation of this paper.
Input saturation is another important consideration in the design of practical controllers for single agent and multi-agent systems since, in practice, the control efforts of actuators (i.e., motors) are limited \cite{Sat1}, \cite{Sat2}. Many efforts have been spent to find an effective mechanism to mitigate the effects of input saturation. In general, to reduce the effects of saturated control torques, an auxiliary design system can be employed \cite{Sat3}, \cite{Sat4}.
In summary, there are existing research gaps for formation tracking control for multiple AUVs, which will be addressed in this study: (i) the fixed-time convergence of the design of controllers for multiple AUV systems, (ii) the fixed-time convergence of the adaptive laws of the adaptive FLCs, (iii) the input saturation problem needs to be addressed within the design of distributed formation control of multiple AUV systems. To address the research gaps, a new fixed-time distributed formation tracking control for multiple AUV systems is proposed. The distributed fixed-time consensus formation will be derived based on a backstepping SMC method. To approximate the unknown components, an adaptive fixed-time FLC will be developed, in which the adaptive laws of FLC will be derived such that it can be convergent within a fixed-time to guarantee a global fixed-time convergence for the system. Furthermore, an auxiliary adaptive function will be introduced into the fixed-time controller to compensate for the effects of the overhead control efforts. The effectiveness of the new control algorithm will be tested on a consensus formation of four AUVs and compared with the counterpart distributed SMC based on a computer simulation. To highlight the novelties of this paper, we compare the proposed method with the existing approaches as follows:
\begin{itemize}
\item Unlike the existing distributed consensus formation controllers for multiple AUV systems \cite{SMC2}, this paper develops a fixed-time distributed formation algorithm for AUVs using a backstepping SMC method to preserve the merits of Lyapunov stability of the backstepping control, high robustness of SMC and bounded convergence time of the fixed-time control theory.
\item Unlike the existing adaptive fixed-time fuzzy controllers \cite{FxTFLC1}, \cite{FxTFLC2}, which do not guarantee fixed-time convergence of the adaptive laws of the FLC, this paper develops a new adaptive fixed-time fuzzy law to guarantee fixed-time convergence of the adaptive weights of the FLC. This ensures global fixed-time convergence of the whole collaborative multi-AUV system.
\item Unlike the existing consensus formation controllers for multiple AUVs \cite{AUV1}, \cite{finiteAUV}, \cite{SMC2}, which do not consider the input saturation issue, this paper incorporates an adaptive auxiliary function into the fixed-time distributed consensus controller to handle the problem of saturated control efforts.
\end{itemize}
\section{FIXED-TIME STABILITY AND CONVERGENCE, FUZZY LOGIC, GRAPH THEORY, AND PROBLEM FORMULATION}
\subsection {Fixed-time stability}
A typical nonlinear system can be represented as follows\cite{Pol2012}:
\begin{equation}
\begin{aligned}
\label {DD1}
\dot{\xi}(t)=f(\xi(t)), \quad \xi(t_0)=\xi_0, \quad \xi\in\Re^n
\end {aligned}
\end{equation}
where $f(\cdot):\Re^n\rightarrow\Re^n$ is a possibly discontinuous vector field. System (\ref{DD1}) is said to be fixed-time convergent when it is globally finite-time stable and its convergence time is bounded regardless of the initial states of the system, i.e., $\forall{\xi_0}\in\Re^n$, $T(\xi_0)\leq{T_\text{max}}$ is satisfied, where ${T_\text{max}}$ is a positive constant.
\begin{lemma}[\cite{Pol2012}]
\label{lemma1} If a positive definite continuous function $V(\xi):\Re^n\rightarrow\Re$ for system (\ref{DD1}) satisfies $\dot{V}(\xi)\leq{-\chi_1V^\varrho(\xi)-\chi_2V^\varsigma(\xi)}$ for some $\chi_1>0$, $\chi_2>0$, $\varrho>1$, and $0<\varsigma<1$, then system (\ref{DD1}) is globally fixed-time stable. The convergence time can be bounded independently of the initial states of system (\ref{DD1}) as follows:
\begin{equation}
\begin{aligned}
\label {D2}
T(\xi_0)\leq\frac{1}{\chi_1(\varrho-1)}+\frac{1}{\chi_2(1-\varsigma)}.
\end {aligned}
\end{equation}
\end{lemma}
\begin{lemma}[\cite{Pol2012}]
\label{lemma2}
If a positive definite continuous function $V(\xi):\Re^n\rightarrow\Re$ for system (\ref{DD1}) satisfies $\dot{V}(\xi)\leq{-\chi_1V^p(\xi)-\chi_2V^q(\xi)}+\varphi$ for some $\chi_1>0$, $\chi_2>0$, $p>1$, $0<q<1$, and $0<\varphi<\infty$, then system (\ref{DD1}) is called practically fixed-time stable. Furthermore, the solution of system (\ref{DD1}) has a residual set:
\begin{equation}
\begin{aligned}
\label {D3}
\lim_{t\to{T}}\xi\in\Big\{\xi \,\Big|\, \Vert{\xi}\Vert \leq{\min}\Big\{\chi_1^{\frac{-1}{p}}\left(\frac{\varphi}{1-\kappa}\right)^{\frac{1}{p}}, \chi_2^{\frac{-1}{q}}\left(\frac{\varphi}{1-\kappa}\right)^{\frac{1}{q}}\Big\}\Big\}
\end {aligned}
\end{equation}
where $\kappa$ satisfies $0<\kappa<1$. The settling time can be bounded independently of the initial states of the system as follows:
\begin{equation}
\begin{aligned}
\label {D4}
T(\xi_0)\leq\frac{1}{\chi_1\kappa(p-1)}+\frac{1}{\chi_2\kappa(1-q)}.
\end {aligned}
\end{equation}
\end{lemma}
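As a quick numerical illustration (not part of the original analysis), the settling-time bounds in Lemmas \ref{lemma1} and \ref{lemma2} can be evaluated directly; in the sketch below, the values of $\chi_1$, $\chi_2$, the exponents, and $\kappa$ are placeholders chosen only for demonstration.
\begin{verbatim}
# Illustrative sketch only; all parameter values are placeholders.
def T_fixed(chi1, chi2, rho, sigma):
    # Lemma 1: dV/dt <= -chi1*V^rho - chi2*V^sigma with rho > 1, 0 < sigma < 1
    return 1.0 / (chi1 * (rho - 1.0)) + 1.0 / (chi2 * (1.0 - sigma))

def T_practical(chi1, chi2, p, q, kappa):
    # Lemma 2: dV/dt <= -chi1*V^p - chi2*V^q + phi with p > 1, 0 < q < 1
    return 1.0 / (chi1 * kappa * (p - 1.0)) + 1.0 / (chi2 * kappa * (1.0 - q))

print(T_fixed(1.0, 1.0, 7 / 5, 5 / 7))          # 2.5 + 3.5 = 6.0
print(T_practical(1.0, 1.0, 7 / 5, 5 / 7, 0.5)) # 5.0 + 7.0 = 12.0
\end{verbatim}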
\subsection{Fuzzy Logic System}
Given an input vector $Z=(z_1,z_2,...,z_n)^T\in\Re^n$ and an output variable ${y}=f({Z})\in\Re$, a fuzzy logic system can be used to map the input to the output. The fuzzy rules of the fuzzy logic system can be described as:
\begin{equation}
\label {fuz1}
Rule \: \textit{j}: \text{If}\: z_1\: \text{is}\: A_1^j\: \text{and}\: ...\: \text{and}\:z_n\: \text{is}\: A_n^j\: \text{then}\: y\: \text{is}\: B^j
\end{equation}
where $A_1^j$, $A_2^j$,..., $A_n^j$ and $B^j$ represent fuzzy sets. The fuzzy output can be obtained as:
\begin{equation}
\label {fuz2}
y={}^{\sum\limits_{j=1}^{h}{{{w}_{j}}}\left( \prod\limits_{i=1}^{n}{{{\mu }_{A_{i}^{j}}}({{z}_{i}})} \right)}/{}_{\sum\limits_{j=1}^{h}{\left( \prod\limits_{i=1}^{n}{{{\mu }_{A_{i}^{j}}}({{z}_{i}})} \right)}}={{\text{w}}^{T}}\text{ }\!\!\Psi\!\!\text{ }({Z})
\end{equation}
where $h$ specifies the number of fuzzy rules used, and ${{\mu }_{A_{i}^{j}}}({{z}_{i}})$ represents the membership function of ${z}_{i}$. $\text{w}={{\left[ {{w}_{1}},{{w}_{2}},..,{{w}_{h}} \right]}^{T}}$ represents the fuzzy weights, and $\text{ }\!\!\Psi\!\!\text{ }({Z})={{\left[ {{\Psi }_{1}}({Z}),{{\Psi }_{2}}({Z}),\dots,{{\Psi }_{h}}({Z}) \right]}^{T}}$ is a fuzzy basis vector, where its elements ${{\Psi }_{j}}({Z})$ can be described as
\begin{equation}
\label {fuz3}
{{\Psi }_{j}}({Z})={}^{\prod\limits_{i=1}^{n}{{{\mu }_{A_{i}^{j}}}({{z}_{i}})}}/{}_{\sum\limits_{j=1}^{h}{\left( \prod\limits_{i=1}^{n}{{{\mu }_{A_{i}^{j}}}({{z}_{i}})} \right)}}.
\end{equation}
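For concreteness, a minimal sketch of evaluating the fuzzy output (\ref{fuz2})--(\ref{fuz3}) is given below; the Gaussian form of the membership functions, the rule centres, and the width are placeholders, and the weight vector $\text{w}$ is assumed to be supplied by an adaptive law.
\begin{verbatim}
import numpy as np

def fuzzy_output(z, centers, width, w):
    # Firing strength of rule j: product over inputs of Gaussian memberships
    # mu_{A_i^j}(z_i) = exp(-(z_i - c_j)^2 / width)   (assumed form)
    firing = np.array([np.prod(np.exp(-(z - c) ** 2 / width)) for c in centers])
    Psi = firing / firing.sum()       # normalized fuzzy basis vector, (fuz3)
    return float(w @ Psi)             # y = w^T Psi(Z), (fuz2)

centers = np.linspace(-7.0, 7.0, 9)   # placeholder rule centres
w = np.random.default_rng(0).normal(size=9) * 0.1   # placeholder weights
print(fuzzy_output(np.array([0.1, -0.3]), centers, 4.0, w))
\end{verbatim}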
\begin{lemma}[\cite{FxTFLC1, FxTFLC2}]
Let ${f(Z)}$ be a continuous function on a compact set $\Omega \subset {{\Re }^{n}}$. Then there exists a fuzzy logic system ${{\text{w}}^{T}}\text{ }\!\!\Psi\!\!\text{ }({Z})$ such that
\begin{equation}
\label {fuz4}
\underset{{Z}\in \Omega }{\mathop{\sup }}\,\left| {f(Z)}-{{\text{w}}^{T}}\text{ }\!\!\Psi\!\!\text{ }({Z}) \right|\le \bar{\varrho}
\end{equation}
where $\bar{\varrho}$ is the fuzzy minimum approximation error, $\text{ }\!\!\Psi\!\!{ (Z)}={{{\left[ {{\Psi }_{1}}({Z}),{{\Psi }_{2}}({Z}),...,{{\Psi }_{h}}({Z}) \right]}^{T}}}/{\sum\limits_{j=1}^{h}{{{\Psi }_{j}}({Z})}}$ is the fuzzy basis function vector.
\end{lemma}
\subsection {Graph theory}
Consider a graph $G=\{\Lambda,\Xi\}$ used to describe the formation shape among a group of AUV vehicles, where $\Lambda=\{\nu_1, \nu_2,...,\nu_N\}$ denotes $N$ AUV followers and $\Xi\subseteq{\Lambda\times{\Lambda}}$ is the set of edges. $A=[\alpha_{ij}]\in\Re^{N\times{N}}$ denotes the weights of the edges, where $\alpha_{ij}=\alpha_{ji}>0$ if there is an edge between AUVs $i$ and $j$, i.e., $(\nu_j,\nu_i)\in{\Xi}$, and $\alpha_{ij}=\alpha_{ji}=0$ otherwise. Let $B=\text{diag}\{b_1,...,b_N\}$, where $b_i>0$ indicates that the follower AUV $i$ can receive the command signals directly from the AUV leader; otherwise $b_i=0$. The Laplacian matrix is $L=[l_{i,j}]\in\Re^{N\times{N}}$ with $l_{i,j}=-\alpha_{i,j}$ for $i\neq j$, and $l_{i,i}=\sum_{j=1}^N{\alpha_{i,j}}$. It is assumed that the graph $G$ is undirected and connected, and that the desired trajectory information from the virtual leader is transferred to at least one AUV, so that not all the elements of $B$ are zero. Therefore, $L+B>0$.
\subsection{Dynamics of AUVs and Control Objective}
In this paper, a control method that can form the operations of $N$ AUVs with the dynamics described in (\ref{D1}) in a consensus manner will be derived.
\begin{equation}
\begin{aligned}
\label {D1}
&\dot{\eta}_i=J_i(\eta_{2,i})\upsilon_i,\\
&M_i\dot{\upsilon}_i+C_i(\upsilon_i)\upsilon_i+D_i(\upsilon_i)\upsilon_i=u_i\left(\tau_i(t)\right)+d_i(t,\eta_i,\upsilon_i)
\end{aligned}
\end{equation}
where $\eta_i=[\eta_{1,i},\eta_{2,i}]^T\in\Re^{6\times{1}}$, $\eta_{1,i}=[x_i, y_i, z_i]^T\in\Re^{3\times{1}}$, $\eta_{2,i}=[\phi_i, \theta_i, \psi_i]^T\in\Re^{3\times{1}}$ denote the position and orientation of $i$-th AUV, respectively. $\upsilon_i=[\upsilon_{1,i}, \upsilon_{2,i}]^T\in\Re^{6\times1}$, $\upsilon_{1,i}=[\upsilon_{x,i}, \upsilon_{y,i}, \upsilon_{z,i}]^T\in\Re^{3\times1}$, $\upsilon_{2,i}=[\omega_{x,i}, \omega_{y,i}, \omega_{z,i}]^T\in\Re^{3\times1}$ represents the translational and rotational velocities of $i$-th AUV, respectively. $u_i\left(\tau_i(t)\right)\in\Re^{6\times1}$, which will be described in (\ref{B15}), represents the control effort subject to saturation nonlinearity for the $i$-th AUV. The description of the inertia matrix $M_i\in\Re^{6\times6}$, the Coriolis and centripetal matrix $C_i(\upsilon_i)\in\Re^{6\times6}$, the hydrodynamic matrix $D_i(\upsilon_i)\in\Re^{6\times6}$ and the Jacobian matrix $J_i(\eta_{2,i})$ can be found in \cite{SMC3}. $d_i(t,\eta_i,\upsilon_i)\in\Re^{6\times1}$ denotes the lumped model uncertainty and disturbance component in the system.
\textit{Control Objective:} The objective of a distributed consensus controller is to design an appropriate controller for each AUV with the dynamics (\ref{D1}) so that the group of AUVs can: (i) form a desired formation shape, and (ii) follow a predefined trajectory, known as the virtual leader, within a fixed time. The desired formation shape of a group of AUVs can be determined by specific relative postures, i.e., positions and orientations, between AUVs.
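To make the model concrete, a minimal simulation sketch of one integration step of (\ref{D1}) is shown below; it is only illustrative, and the matrices $M_i$, $C_i$, $D_i$, and $J_i$ are assumed to be supplied from the vehicle model of \cite{SMC3}.
\begin{verbatim}
import numpy as np

def auv_step(eta, nu, u, d, M, C_of, D_of, J_of, dt=0.01):
    """One explicit Euler step of the kinematics/dynamics (D1) -- sketch only.
    eta, nu, u, d: 6-vectors; M: 6x6 inertia matrix; C_of, D_of, J_of:
    callables returning the 6x6 Coriolis, damping, and Jacobian matrices."""
    eta_dot = J_of(eta[3:6]) @ nu
    nu_dot = np.linalg.solve(M, u + d - C_of(nu) @ nu - D_of(nu) @ nu)
    return eta + dt * eta_dot, nu + dt * nu_dot
\end{verbatim}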
\section{Fixed-time Backstepping Sliding Mode Control Design for Consensus Formation Tracking Control}
Let $\eta_i^d$, $\dot{\eta}_i^d$ and $\ddot{\eta}_i^d$ be the desired position, velocity and acceleration of the virtual leader. Define the position and orientation tracking errors between the objective trajectories and the reference trajectory for $i$-th AUV $(i\in\Gamma, \Gamma=\{1,...,N\})$ as follows:
\begin{equation}
\begin{aligned}
\label {2}
&\varepsilon_{1,i}=\sum_{j\in{\Gamma}}^{} \alpha_{ij} (\eta_i-\eta_j-\delta_{ij})+b_i(\eta_i-\eta_i^d-\delta_{id})\\
&\dot{\varepsilon}_{1,i}=\sum_{j\in{\Gamma}}^{} \alpha_{ij} (\dot{\eta}_i-\dot{\eta}_j)+b_i(\dot{\eta}_i-\dot{\eta}_i^d).
\end {aligned}
\end{equation}
Here, $\alpha_{ij}\geq{0}$ and $b_i\geq{0}$ are defined as in section II.C. $\delta_{ij}$ indicates the relative position and orientation between $i$-th AUV and $j$-th AUV ($j\in{\Gamma}$). $\delta_{id}$ denotes the relative posture between the $i$-th AUV and the reference trajectory (i.e., the virtual leader). All the AUVs are expected to have the same velocity and acceleration as the desired reference trajectory.
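A direct sketch of computing the consensus errors (\ref{2}) from the graph data is given below; the array shapes are assumptions made only for illustration.
\begin{verbatim}
import numpy as np

def consensus_errors(eta, eta_dot, eta_d, eta_d_dot, alpha, b, delta, delta_d):
    """eps_{1,i} and its derivative as in (2) -- sketch.
    eta, eta_dot: (N,6) poses/velocities; alpha: (N,N) edge weights;
    b: (N,) leader gains; delta: (N,N,6) inter-AUV offsets;
    delta_d: (N,6) offsets to the virtual leader."""
    N = eta.shape[0]
    eps1 = np.zeros_like(eta)
    eps1_dot = np.zeros_like(eta)
    for i in range(N):
        for j in range(N):
            eps1[i] += alpha[i, j] * (eta[i] - eta[j] - delta[i, j])
            eps1_dot[i] += alpha[i, j] * (eta_dot[i] - eta_dot[j])
        eps1[i] += b[i] * (eta[i] - eta_d - delta_d[i])
        eps1_dot[i] += b[i] * (eta_dot[i] - eta_d_dot)
    return eps1, eps1_dot
\end{verbatim}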
Differentiating the velocity of tracking error $\dot{\varepsilon}_{1,i}$ with respect to time, we have:
\begin{equation}
\begin{aligned}
\label {3}
\ddot{\varepsilon}_{1,i}=\sum_{j\in{\Gamma}}^{} \alpha_{ij} (\ddot{\eta}_i-\ddot{\eta}_j)+b_i(\ddot{\eta}_i-\ddot{\eta}_i^d)
\end {aligned}
\end{equation}
where $\ddot{\eta}_i$ and $\ddot{\eta}_j$ represent the acceleration of $i$-th AUV and its neighbors $j\in{\Gamma}$, respectively. Based on (\ref{D1}), the dynamic model of the $i$-th AUV can be expressed as:
\begin{equation}
\begin{aligned}
\label {4}
\ddot{\eta}_i=&\Phi_i\left(\upsilon_i,\eta_{i}\right)\upsilon_i+J_i\left(\eta_{2,i}\right)\Pi_iu\left(\tau_i(t)\right)\\
&+J_i(\eta_{2,i})\Pi_id_i(t,\eta_i,\upsilon_i)
\end {aligned}
\end{equation}
where,\\
$\Pi_i=M_i^{-1}$ and $\Phi_i(\upsilon_i,\eta_{i})=\dot{J}_i(\eta_{2,i})-J_i(\eta_{2,i})\Pi_iC_i(\upsilon_i)-J_i(\eta_{2,i})\Pi_iD_i(\upsilon_i)$.
\\
For facilitating the design of controllers later, the following matrices are defined:\\
$\bar{\Phi}(\upsilon,\eta)=\text{diag}\{\Phi_1(\upsilon_1, \eta_1), ...,\Phi_N(\upsilon_N, \eta_N)\}$,\\
$\bar{\Pi}=\text{diag}\{\Pi_1,...,\Pi_N\}$,\\
$\bar{J}(\eta_2)=\text{diag}\{J_1(\eta_{2,1}),...,J_N(\eta_{2,N})\}$,\\
$u\left(\tau(t)\right)=[u_1\left(\tau_1(t)\right), u_2\left(\tau_2(t)\right),...,u_N\left(\tau_N(t)\right)]^T$,\\
$d=[d_1(t,\eta_1,\upsilon_1),d_2(t,\eta_2,\upsilon_2),...,d_N(t,\eta_N,\upsilon_N)]^T$.\\
Therefore,
\begin{equation}
\begin{aligned}
\label {4_2}
\ddot{\eta}=\bar{\Phi}(\upsilon,\eta)\upsilon+\bar{J}(\eta_2)\bar{\Pi}u\left(\tau(t)\right)+\bar{J}(\eta_2)\bar{\Pi}d.
\end {aligned}
\end{equation}
\begin{assumption}
The disturbance term $J_i(\eta_{2,i})\Pi_id_i(t,\eta_i,\upsilon_i)$ is bounded by the positive constant $\tilde{\lambda}_i$:
\begin{equation}
\begin{aligned}
\label {5}
\|J_i(\eta_{2,i})\Pi_id_i(t,\eta_i,\upsilon_i)\|\leq\tilde{\lambda}_i,{}{}{}i\in\Gamma.
\end {aligned}
\end{equation}
The parameter $\tilde{\lambda}_i$ typically depends on the internal model uncertainties and external environmental disturbances (i.e., marine environment) of the vehicles.
\end{assumption}
Define the following stacked variables:
\begin{equation}
\begin{aligned}
\label {6}
\bar{\varepsilon}_1=[\varepsilon_{1,1}, \varepsilon_{1,2},...,\varepsilon_{1,N}]^T,\\
\bar{\varepsilon}_2=[\dot{\varepsilon}_{1,1}, \dot{\varepsilon}_{1,2},...,\dot{\varepsilon}_{1,N}]^T,
\end {aligned}
\end{equation}
and\\
\begin{equation}
\begin{aligned}
\label {7}
\ddot{\eta}=[\ddot{\eta}_1, \ddot{\eta}_2,...,\ddot{\eta}_N]^T.
\end {aligned}
\end{equation}
Combining (\ref{3}), (\ref{6}), and (\ref{7}), the overall error dynamics can be written as:
\begin{equation}
\begin{aligned}
\label {8}
&\dot{\bar{\varepsilon}}_1=\bar{\varepsilon}_2,\\
&\dot{\bar{\varepsilon}}_2=\left(L+B\right)\left(\ddot{\eta}-\textbf{1}_N\otimes\ddot{\eta}^d\right),\\
\end {aligned}
\end{equation}
where $\otimes$ denotes the Kronecker product between two matrices. $\textbf{1}_N$ stands for an $N\times{1}$ vector with unitary elements.
Then, based on (\ref{8}), a fixed time backstepping SMC can be designed as follows:
\\
\\
\textbf{Step 1}: The first sliding surface is selected as:
\begin{equation}
\begin{aligned}
\label {B2}
s_1(t)=\bar{\varepsilon}_1(t).
\end {aligned}
\end{equation}
Differentiating (\ref{B2}) yields
\begin{equation}
\begin{aligned}
\label {B3}
\dot{s}_1(t)=\alpha_s(t),
\end {aligned}
\end{equation}
where $\alpha_s(t)=\dot{\bar{\varepsilon}}_1(t)$ is identified as the virtual control of the system (\ref{B3}).
To stabilise the sliding surface $s_1(t)$, the following virtual control input is designed:
\begin{equation}
\begin{aligned}
\label {B4}
\alpha_s=-\left(k_1s_1+k_2s_1^{\gamma}+k_3s_1^{\iota}\right),
\end {aligned}
\end{equation}
where $k_1>0$, $k_2>0$, and $k_3>0$, and $0<\gamma<1$, $\iota>1$.
Consider a candidate Lyapunov function below:
\begin{equation}
\begin{aligned}
\label {B5}
V_1&=\frac{1}{2}s_1^T{s}_1.\\
\end {aligned}
\end{equation}
Substituting (\ref{B4}) into the derivative of (\ref{B5}), we obtain:
\begin{equation}
\begin{aligned}
\label {B6}
\dot{V}_1&=s_1^T\dot{s}_1\\
&=-s_1^T\left(k_1s_1+k_2s_1^{\gamma}+k_3s_1^{\iota}\right)\\
&=-k_1s_1^Ts_1-k_2\left(s_1^Ts_1\right)^{\frac{\gamma+1}{2}}-k_3\left(s_1^Ts_1\right)^{\frac{\iota+1}{2}}\\
&\leq{-k_2\left(s_1^Ts_1\right)^{\frac{\gamma+1}{2}}-k_3\left(s_1^Ts_1\right)^{\frac{\iota+1}{2}}}\\
&\leq-2^\frac{\gamma+1}{2}k_2\left(\frac{1}{2}s_1^Ts_1\right)^\frac{\gamma+1}{2}-2^\frac{\iota+1}{2}k_3\left(\frac{1}{2}s_1^Ts_1\right)^\frac{\iota+1}{2}.
\end {aligned}
\end{equation}
\\
\textbf{Step 2}: Define the second sliding surface:
\begin{equation}
\begin{aligned}
\label {B7}
s_2=\bar{\varepsilon}_2-(L+B)\alpha_s.
\end {aligned}
\end{equation}
The derivative of $s_2$ is:
\begin{equation}
\begin{aligned}
\label {B8}
\dot{s}_2&=\dot{\bar{\varepsilon}}_2-\left(L+B\right)\dot{\alpha}_s\\
&=\left(L+B\right)\left(\ddot{\eta}-\textbf{1}_N\otimes\ddot{\eta}^d-\dot{\alpha}_s\right).\\
\end {aligned}
\end{equation}
A candidate Lyapunov function is selected as
\begin{equation}
\begin{aligned}
\label {B9}
V_2=\frac{1}{2}s_2^T{s}_2.\\
\end {aligned}
\end{equation}
Substituting (\ref{B8}) into the derivative of (\ref{B9}), we obtain:
\begin{equation}
\begin{aligned}
\label {B10}
\dot{V}_2&=s_2^T\dot{s}_2\\
&=s_2^T\left(\left(L+B\right)\left(\ddot{\eta}-\textbf{1}_N\otimes\ddot{\eta}^d-\dot{\alpha}_s\right)\right)\\
&=s_2^T\left((L+B)\left(\bar{\Phi}(\upsilon,\eta)\upsilon+\bar{J}(\eta_2)\bar{\Pi}u(\tau(t)) \right. \right.\\
&\left. \left.+\bar{J}(\eta_2)\bar{\Pi}d-\textbf{1}_N\otimes\ddot{\eta}^d-\dot{\alpha}_s\right)\right).\\
\end {aligned}
\end{equation}
Based on (\ref{B10}), the backstepping sliding mode controller can be taken as
\begin{equation}
\begin{aligned}
\label {B11}
u(\tau(t))&=[\bar{J}(\eta_2)\bar{\Pi}]^{-1}\left(-\bar{\Phi}(\upsilon,\eta)\upsilon+\textbf{1}_N\otimes\ddot{\eta}^d+\dot{\alpha}_s+\tau^{'}\right),
\end {aligned}
\end{equation}
where,
\begin{equation}
\begin{aligned}
\label {B11_2}
\tau^{'}&=-\beta_s\text{sign}(s_2)-k_8s_2-k_9s_2^\gamma-k_{10}s_2^\iota,
\end {aligned}
\end{equation}
where $k_8, k_9, k_{10}$ are positive constants. $\beta_s$ is chosen to be $\beta_s>\tilde{\lambda}$, where $\tilde{\lambda}=[\tilde{\lambda}_1, \ldots, \tilde{\lambda}_N]^T$, which were defined as in Assumption 1.
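For reference, a compact sketch of evaluating the control law (\ref{B4}), (\ref{B7}), (\ref{B11})--(\ref{B11_2}) on the stacked error vectors is shown below. It is only a sketch: fractional powers are interpreted elementwise as $|s|^{\gamma}\,\mathrm{sign}(s)$ (a common implementation convention), the default gains are the values used later in the simulation section, and \texttt{LB} denotes the coupling matrix $L+B$ expanded to the stacked state dimension.
\begin{verbatim}
import numpy as np

def frac_pow(s, a):
    # elementwise |s|^a * sign(s), the usual interpretation of s^a in SMC laws
    return np.sign(s) * np.abs(s) ** a

def backstepping_smc(eps1, eps2, LB, Phi_nu, J_Pi, acc_ref, alpha_s_dot,
                     k1=5.0, k2=0.4, k3=0.4, k8=5.0, k9=0.4, k10=0.4,
                     gamma=5/7, iota=7/5, beta_s=20.0):
    """Sketch of (B4), (B7), (B11)-(B11_2); gains are placeholders.
    Phi_nu = Phi(v,eta) v, J_Pi = J(eta_2) Pi, acc_ref = 1_N (x) eta_dd^d,
    alpha_s_dot = time derivative of the virtual control (assumed available)."""
    s1 = eps1
    alpha_s = -(k1 * s1 + k2 * frac_pow(s1, gamma) + k3 * frac_pow(s1, iota))
    s2 = eps2 - LB @ alpha_s
    tau_p = (-beta_s * np.sign(s2) - k8 * s2
             - k9 * frac_pow(s2, gamma) - k10 * frac_pow(s2, iota))
    return np.linalg.solve(J_Pi, -Phi_nu + acc_ref + alpha_s_dot + tau_p)
\end{verbatim}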
Inserting the control input in (\ref{B11}) and (\ref{B11_2}) into (\ref{B10}), we have:
\begin{equation}
\begin{aligned}
\label {B12}
\dot{V}_2&=s_2^T\dot{s}_2\\
&=s_2^T\left(L+B\right)\left({-\beta_s\text{sign}(s_2)-k_8s_2-k_9s_2^{\gamma}} \right.\\
&\left. {-k_{10}s_2^{\iota}+\bar{J}(\eta_2)\bar{\Pi}d} \right)\\
&\leq-2^\frac{\gamma+1}{2}\left(L+B\right)k_9\left(\frac{1}{2}s_2^Ts_2\right)^\frac{\gamma+1}{2}\\
&-2^\frac{\iota+1}{2}\left(L+B\right)k_{10}\left(\frac{1}{2}s_2^Ts_2\right)^\frac{\iota+1}{2}.
\end {aligned}
\end{equation}
\textbf{Step 3}: We define a compounded candidate Lyapunov function:
\begin{equation}
\begin{aligned}
\label {B13_1}
{V}=V_1+V_2.
\end {aligned}
\end{equation}
Differentiating (\ref{B13_1}) yields:
\begin{equation}
\begin{aligned}
\label {B13}
\dot{V}&=\dot{V}_1+\dot{V}_2\\
&\leq-2^\frac{\gamma+1}{2}k_2\left(\frac{1}{2}s_1^Ts_1\right)^\frac{\gamma+1}{2}-2^\frac{\iota+1}{2}k_3\left(\frac{1}{2}s_1^Ts_1\right)^\frac{\iota+1}{2}\\
&-2^\frac{\gamma+1}{2}(L+B)k_9\left(\frac{1}{2}s_2^Ts_2\right)^\frac{\gamma+1}{2}\\
&-2^\frac{\iota+1}{2}(L+B)k_{10}\left(\frac{1}{2}s_2^Ts_2\right)^\frac{\iota+1}{2}\\
&\leq{-2^{\frac{\gamma+1}{2}}}\zeta_1\left(V_1^{\frac{\gamma+1}{2}}+V_2^{\frac{\gamma+1}{2}}\right)\\
&-{2}^{\frac{\iota+1}{2}}\zeta_2\left(V_1^{\frac{\iota+1}{2}}+V_2^{\frac{\iota+1}{2}}\right)\\
&\leq{-2^{\frac{\gamma+1}{2}}}\zeta_1\left(V^{\frac{\gamma+1}{2}}\right)-{2}^{\frac{\iota+1}{2}}\zeta_2\left(V^{\frac{\iota+1}{2}}\right).
\end {aligned}
\end{equation}
where $\zeta_1=\text{min}\left(k_2,\left(L+B\right)k_9\right)$ and $\zeta_2=\text{min}\left(k_3,\left(L+B\right)k_{10}\right)$.
Therefore, by Lemma \ref{lemma1} and (\ref{B13}), global fixed-time convergence of the closed-loop error dynamics can be established, and the reaching time can be bounded as:
\begin{equation}
\begin{aligned}
\label {B14}
T\leq{\frac{2}{{\zeta_12^{\frac{\gamma+1}{2}}}(1-\gamma)}}+\frac{2}{{2}^{\frac{\iota+1}{2}}\zeta_2(\iota-1)}.
\end {aligned}
\end{equation}
\section{Distributed Backstepping Fuzzy Sliding Mode Controller with Input Saturation}
The backstepping SMC presented in section III has two main shortcomings: (i) the sliding gain is chosen based on the disturbance bound in Assumption 1, so a large disturbance leads to severe chattering in the system; (ii) the effects of saturated control torques have not been considered. In this section, we introduce an auxiliary variable and a fuzzy approximation to overcome these shortcomings.
The designed control input $\tau_i(t)\in\Re^n$ is affected by the saturation nonlinearity and can be expressed as \cite{Sat2}:
\begin{equation}
\begin{aligned}
\label {B15}
u_i(\tau_i(t))=\begin{cases}
\text{sign}(\tau_i(t))\tau_{\text{max}_i}, & |\tau_i(t)|\geq{\tau_{\text{max}_i}},\\
\tau_i(t), & |\tau_i(t)|<{\tau_{\text{max}_i}},
\end{cases}
\end {aligned}
\end{equation}
where ${\tau_{\text{max}_i}}$ represents the maximum control torque allowed for the $i$-th AUV.
Furthermore, considering the input saturation, the saturated control torque can be approximated by
\begin{equation}
\begin{aligned}
\label {B16}
u_i(\tau_i)=g_i(\tau_i)+\varsigma_i(\tau_i),
\end {aligned}
\end{equation}
where $g_i(\tau_i)$ is a smooth function. $\varsigma_i(\tau_i)$ is the bounded approximation error. $g_i(\tau_i)$ can be chosen as \cite{Sat2}:
\begin{equation}
\begin{aligned}
\label {B17}
g_i(\tau_i)&=\tau_{\text{max}_i}\times{\text{tanh}}\left(\frac{\tau_i}{\tau_{\text{max}_i}}\right)\\
&=\tau_{\text{max}_i}\frac{e^{\tau_i/\tau_{\text{max}_i}}-e^{-\tau_i/\tau_{\text{max}_i}}}{e^{\tau_i/\tau_{\text{max}_i}}+e^{-\tau_i/\tau_{\text{max}_i}}}.
\end {aligned}
\end{equation}
The approximation error $\varsigma_i(\tau_i)$ is bounded by
\begin{equation}
\begin{aligned}
\label {B18}
|\varsigma_i(\tau_i)|=|u_i(\tau_i)-g_i(\tau_i)|\leq\tau_{{\text{max}_i}}(1-\text{tanh}(1))=\bar{\Delta}_i.
\end {aligned}
\end{equation}
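The saturation model (\ref{B15})--(\ref{B18}) can be checked numerically; the sketch below uses the simulation torque limit $\tau_{\text{max}}=300$ as a placeholder value.
\begin{verbatim}
import numpy as np

def u_sat(tau, tau_max):        # hard saturation, eq. (B15)
    return np.clip(tau, -tau_max, tau_max)

def g_smooth(tau, tau_max):     # smooth approximation, eq. (B17)
    return tau_max * np.tanh(tau / tau_max)

tau_max = 300.0                              # placeholder torque limit
tau = np.linspace(-1000.0, 1000.0, 2001)
err = np.abs(u_sat(tau, tau_max) - g_smooth(tau, tau_max))
bound = tau_max * (1.0 - np.tanh(1.0))       # bar{Delta}, eq. (B18)
print(err.max() <= bound + 1e-9)             # True
\end{verbatim}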
Let: $g(\tau(t))=[g_1(\tau_1(t)), g_2(\tau_2(t)),...,g_N(\tau_N(t))]^T$, $\varsigma(\tau(t))=[\varsigma_1(\tau_1(t)), \varsigma_2(\tau_2(t)),...,\varsigma_N(\tau_N(t))]^T$. To compensate for the saturated controller, an adaptive auxiliary variable $\mu$ is introduced as:
\begin{equation}
\begin{aligned}
\label {B19}
\dot{\mu}=-\mu+\bar{J}(\eta_2)\bar{\Pi}\left(g(\tau)-\tau\right).
\end {aligned}
\end{equation}
For this controller, \textbf{Step 1} is as in section III. \textbf{Step 2} will be re-designed as follows:\\
\\
\textbf{Step 2}: The error variable $s_2$ can be redefined as
\begin{equation}
\begin{aligned}
\label {B20}
s_2=\bar{\varepsilon}_2-(L+B)\alpha_s-(L+B)\mu.
\end {aligned}
\end{equation}
The derivative of $s_2$ can be computed as
\begin{equation}
\begin{aligned}
\label {B21}
\dot{s}_2&=\dot{\bar{\varepsilon}}_2-(L+B)\dot{\alpha}_s-(L+B)\dot{\mu}\\
&=\left(L+B\right)\left(\ddot{\eta}-\textbf{1}_N\otimes\ddot{\eta}^d-\dot{\alpha}_s-\dot{\mu}\right)\\
&=\left(L+B\right)\left(\bar{\Phi}(\upsilon,\eta)\upsilon+\bar{J}(\eta_2)\bar{\Pi}u(\tau) \right.\\
&\left. +\bar{J}(\eta_2)\bar{\Pi}d-\textbf{1}_N\otimes\ddot{\eta}^d-\dot{\alpha}_s-\dot{\mu}\right)\\
&=(L+B)\left(\bar{\Phi}(\upsilon,\eta)\upsilon+\bar{J}(\eta_2)\bar{\Pi}u(\tau)\right.\\
&\left. +\bar{J}(\eta_2)\bar{\Pi}d-\textbf{1}_N\otimes\ddot{\eta}^d-\dot{\alpha}_s\right.\\
&\left.+\mu-\bar{J}(\eta_2)\bar{\Pi}\left(g(\tau)-\tau\right)\right)\\
&=(L+B)\left(\bar{\Phi}(\upsilon,\eta)\upsilon+\bar{J}(\eta_2)\bar{\Pi}\left(\varsigma(\tau\right)+d) \right.\\
&\left. -\textbf{1}_N\otimes\ddot{\eta}^d+\mu+\bar{J}(\eta_2)\bar{\Pi}\tau-\dot{\alpha}_s\right).
\end {aligned}
\end{equation}
By using a FLC to approximate the lumped uncertainty and disturbance $\bar{J}(\eta_2)\bar{\Pi}(\varsigma(\tau)+d)$, the derivative of the error $s_2$ in (\ref{B21}) can be represented as
\begin{equation}
\begin{aligned}
\label {45}
\dot{s}_2=&(L+B)\left(\bar{\Phi}(\upsilon,\eta)\upsilon+W^{*T}\Psi(Z)+\epsilon(t) \right.\\
&\left.-\textbf{1}_N\otimes\ddot{\eta}^d+\mu+\bar{J}(\eta_2)\bar{\Pi}\tau-\dot{\alpha}_s \right),
\end {aligned}
\end{equation}
where $\epsilon(t)=\bar{J}(\eta_2)\bar{\Pi}\left(\varsigma(\tau)+d(t,\eta,\upsilon)\right)-W^{*T}\Psi(Z)$. From (\ref{5}), (\ref{B18}), and Lemma 3, we obtain $||\epsilon(t)||\leq\bar{\epsilon}$, where $\bar{\epsilon}>0$.
A candidate Lyapunov function is defined as
\begin{equation}
\begin{aligned}
\label {46}
V_3(s_2,\tilde{\theta})=\frac{1}{2}s_2^Ts_2+\frac{1}{2}\tilde{\theta}^T\tilde{\theta},
\end {aligned}
\end{equation}
where $\tilde{\theta}= \theta-\hat{\theta}$ is the weight approximation error with $\theta=\max_{h\in{H}}\Vert{W^*}\Vert$, and $\hat{\theta}$ is the approximation of $\theta$.
The derivative of $V_3$ in (\ref{46}) can be computed as:
\begin{equation}
\begin{aligned}
\label {47}
\dot{V}_3&=s_2^T\dot{s}_2-\tilde{\theta}^T\dot{\hat{\theta}}\\
&=s_2^T(L+B)\left(\bar{\Phi}(\upsilon,\eta)\upsilon+W^{*T}\Psi(Z)+\epsilon(t) \right.\\
&\left. -\textbf{1}_N\otimes\ddot{\eta}^d+\mu+\bar{J}(\eta_2)\bar{\Pi}\tau-\dot{\alpha}_s\right)-\tilde{\theta}^T\dot{\hat{\theta}}.
\end {aligned}
\end{equation}
Let $F_{sum}=\bar{\Phi}(\upsilon,\eta)\upsilon-\textbf{1}_N\otimes\ddot{\eta}^d+\mu-\dot{\alpha}_s$. Applying Young's inequality, we have
\begin{equation}
\begin{aligned}
\label {48}
s_2^TW^{*T}\Psi(Z)\leq{\frac{1}{2}+\frac{1}{2}s_2^2\theta\Psi(Z)^T\Psi(Z)}.
\end {aligned}
\end{equation}
Then, $\dot{V}_3$ becomes:
\begin{equation}
\begin{aligned}
\label {49}
\dot{V}_3&\leq{s_2}^T(L+B)\left({\bar{J}(\eta_2)\bar{\Pi}\tau+F_{sum}+\epsilon(t)}\right.\\
&\left.{+\frac{1}{2}s_2^2\hat{\theta}\Psi(Z)^T\Psi(Z)}\right)+\frac{1}{2}(L+B)-\tilde{\theta}^T\dot{\hat{\theta}}\\
&\leq{s_2^T}(L+B)\left({\bar{J}(\eta_2)\bar{\Pi}\tau+F_{sum}+\epsilon(t)}\right.\\
& \left.{+\frac{1}{2}s_2^2\hat{\theta}\Psi(Z)^T\Psi(Z)}\right)+\frac{1}{2}(L+B)\\
&+\tilde{\theta}^T\left(\frac{1}{2}(L+B)s_2^2\Psi(Z)^T\Psi(Z)-\dot{\hat{\theta}}\right).\\
\end {aligned}
\end{equation}
Based on (\ref{49}), a distributed control input is designed as:
\begin{equation}
\begin{aligned}
\label {491}
\tau&=\left(\bar{J}\left(\eta_2\right)\bar{\Pi}\right)^{-1}\left(-F_{sum}-\frac{1}{2}s_2^2\hat{\theta}\Psi(Z)^T\Psi(Z)\right.\\
&\left. -\beta_s{\text{sign}}(s_2)-k_8s_2-k_9s_2^\gamma-k_{10}s_2^\iota \right),
\end {aligned}
\end{equation}
where $\beta_s$ is selected such that $\beta_s>\bar{\epsilon}$.
The adaptive law of the FLC can be selected as
\begin{equation}
\begin{aligned}
\label {50}
\dot{\hat{\theta}}=\frac{1}{2}\left(L+B\right)s_2^2\Psi(Z)^T\Psi(Z)-w_1\hat{\theta}^{\gamma}-w_2\hat{\theta}^{\iota}.
\end {aligned}
\end{equation}
Therefore,
\begin{equation}
\begin{aligned}
\label {51}
\dot{V}_3&\leq{s_2^T(L+B)\left(-k_8s_2-k_9s_2^\gamma-k_{10}s_2^\iota\right)}\\
&+\frac{1}{2}(L+B)+\tilde{\theta}^T\left(w_1\hat{\theta}^{\gamma}+w_2\hat{\theta}^{\iota}\right).
\end {aligned}
\end{equation}
Using the following inequalities:
\begin{equation}
\begin{aligned}
\label {52}
\tilde{\theta}\hat{\theta}^{\gamma}\leq{l_1\theta^{1+\gamma}-l_2\tilde{\theta}^{1+\gamma}},
\end {aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\label {52b}
\tilde{\theta}\hat{\theta}^{\iota}\leq{l_1\theta^{1+\iota}-l_2\tilde{\theta}^{1+\iota}}.
\end {aligned}
\end{equation}
Combining (\ref{51}), (\ref{52}), and (\ref{52b}), we have:
\begin{equation}
\begin{aligned}
\label {53}
\dot{V}_3&\leq\left(L+B\right)\left(-k_8s_2-k_9s_2^{\gamma+1}-k_{10}s_2^{\iota+1}\right)\\
&+\frac{1}{2}(L+B)+w_1\left({l_1\theta^{1+\gamma}-l_2\tilde{\theta}^{1+\gamma}}\right)\\
&+w_2\left({l_1\theta^{1+\iota}-l_2\tilde{\theta}^{1+\iota}}\right)\\
&\leq(L+B)\left({-k_8s_2-k_9s_2^{\gamma+1}-k_{10}s_2^{\iota+1}}\right)\\
&-l_2\left(w_1\tilde{\theta}^{1+\gamma}+w_2\tilde{\theta}^{1+\iota}\right)+\sigma,
\end {aligned}
\end{equation}
where:
\begin{equation}
\begin{aligned}
\label {54}
\sigma=\frac{1}{2}\left(L+B\right)+l_1\left(w_1{\theta}^{1+\gamma}+w_2{\theta}^{1+\iota}\right).
\end {aligned}
\end{equation}
Therefore,
\begin{equation}
\begin{aligned}
\label {55}
\dot{V}_3&\leq{-k_9\left(L+B\right)s_2^{\gamma+1}}-l_2w_1\tilde{\theta}^{1+\gamma}\\
&-k_{10}\left(L+B\right)s_2^{\iota+1}-l_2w_2\tilde{\theta}^{1+\iota}+\sigma\\
&\leq{-2^{\frac{\gamma+1}{2}}}\nu_1{V_3}^{\frac{1+\gamma}{2}}{-2^{\frac{\iota+1}{2}}}\nu_2{V_3}^{\frac{1+\iota}{2}}+\sigma.\\
\end {aligned}
\end{equation}
Here,
\begin{equation}
\begin{aligned}
\label {56}
\nu_1=\text{min}\left(k_9\left(L+B\right),l_2w_1\right)
\end {aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\label {57}
\nu_2=\text{min}\left(k_{10}\left(L+B\right),l_2w_2\right)
\end {aligned}
\end{equation}
Thus, according to Lemma \ref{lemma2}, $s_2$ and $\tilde{\theta}$ converge to a small neighbourhood of zero within a fixed time, which can be estimated as $T\leq\frac{2}{\nu_1{(1-\gamma)}}+\frac{2}{\nu_2{(\iota-1)}}$.
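As an implementation note, the adaptive law (\ref{50}) can be integrated with a simple explicit scheme; the sketch below performs a one-step Euler update, where the already-computed scalar aggregate of $(L+B)s_2^2\Psi(Z)^T\Psi(Z)$ is passed in, the gain values follow the simulation section, and the step size is a placeholder.
\begin{verbatim}
def theta_hat_update(theta_hat, lb_s2sq_psi, w1=1.0, w2=1.0,
                     gamma=5/7, iota=7/5, dt=0.01):
    """One Euler step of the adaptive law (50) -- sketch.
    lb_s2sq_psi collects the scalar value of (L+B) s_2^2 Psi(Z)^T Psi(Z)
    already computed by the controller; theta_hat is assumed nonnegative."""
    theta_dot = (0.5 * lb_s2sq_psi
                 - w1 * theta_hat ** gamma - w2 * theta_hat ** iota)
    return theta_hat + dt * theta_dot
\end{verbatim}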
\\
\\
\textbf{Step 3}: Define a candidate Lyapunov function:
\begin{equation}
\begin{aligned}
\label {58}
{V}=V_1+V_3.
\end {aligned}
\end{equation}
The derivative of the above Lyapunov function is
\begin{equation}
\begin{aligned}
\label {59}
\dot{V}&\leq{-2^{\frac{\gamma+1}{2}}}\lambda_{min}\{k_2,k_5\}V_1^{\frac{\gamma+1}{2}}{-2^{\frac{\iota+1}{2}}}\lambda_{min}\{k_3,k_6\}V_1^{\frac{\iota+1}{2}}\\
&{-2^{\frac{\gamma+1}{2}}}\nu_1V_3^{\frac{\gamma+1}{2}}{-2^{\frac{\iota+1}{2}}}\nu_2V_3^{\frac{\iota+1}{2}}+\sigma\\
&\leq{-2^{\frac{\gamma+1}{2}}}\chi_1\left(V_1^{\frac{\gamma+1}{2}}+V_3^{\frac{\gamma+1}{2}}\right)\\
&-{2}^{\frac{\iota+1}{2}}\chi_2\left(V_1^{\frac{\iota+1}{2}}+V_3^{\frac{\iota+1}{2}}\right)+\sigma\\
&\leq{-2^{\frac{\gamma+1}{2}}}\chi_1\left(V^{\frac{\gamma+1}{2}}\right)-{2}^{\frac{\iota+1}{2}}\chi_2\left(V^{\frac{\iota+1}{2}}\right)+\sigma,\\
\end {aligned}
\end{equation}
where $\chi_1=\text{min}\{\nu_1, \lambda_{min}\{k_2,k_5\}\}$ and $\chi_2=\text{min}\{\lambda_{min}\{k_3,k_6\}, \nu_2\}$.
Therefore, according to Lemma \ref{lemma2}, the global fixed-time convergence of the system is guaranteed. The settling time can be calculated as:
\begin{equation}
\begin{aligned}
\label {60}
T\leq{\frac{2}{{\chi_12^{\frac{\gamma+1}{2}}}\kappa(1-\gamma)}}+\frac{2}{{\chi_2}2^{\frac{\iota+1}{2}}\kappa(\iota-1)}.
\end {aligned}
\end{equation}
\begin{remark}
The use of the $\text{sign}$ function in (\ref{491}) generates chattering in the system. To reduce the chattering, the controller (\ref{491}) can be revised as
\begin{equation}
\begin{aligned}
\label {492}
\tau&=\left(\bar{J}(\eta_2)\bar{\Pi}\right)^{-1}\left(-F_{sum}-\frac{1}{2}s_2^2\hat{\theta}\Psi(Z)^T\Psi(Z) \right.\\
&\left. -\beta_s(\frac{s_2}{||s_2||+\epsilon_1})-k_8s_2-k_9s_2^\gamma-k_{10}s_2^\iota \right),
\end {aligned}
\end{equation}
where $\epsilon_1$ is a small positive number.
\end{remark}
\section{Results and Discussions}
In this section, we validate the performance of the proposed algorithm. The dynamic model of each vehicle is described as in (\ref{D1}), where the parameters are selected as in Table \ref{table_1} \cite{SMC3}.
\begin{table}
\caption{Parameters used in the simulation of $i$th AUV ($i\in\{1,2,3,4\}$)}
\centering
\label{table_1}
\begin{tabular}{l l l l}
\hline\hline\\[-0.5ex]
Parameter & Value & Parameter & Value\\[0.5ex] \hline\\[-0.5ex]
$m_i$& $20$&$I_{x,i}$ & $20$\\
$I_{y,i}$ & $30$&$I_{z,i}$ & $35$\\
$\iota_{\upsilon{x},i}$&$-8$&$\iota_{\upsilon{y},i}$&$-10$\\
$\iota_{\upsilon{z},i}$&$-9$&$\iota_{\dot{\upsilon}{x},i}$&$-7$\\
$\iota_{\dot{\upsilon}{y},i}$&$-8$&$\iota_{\dot{\upsilon}{z},i}$&$-6$\\
$\iota_{{\omega}{x},i}$&$-0.2$&$\iota_{{\omega}{y},i}$&$-0.25$\\
$\iota_{{\omega}{z},i}$&$-0.15$&$\iota_{\dot{\omega}{x},i}$&$-20$\\
$\iota_{\dot{\omega}{y},i}$&$-30$&$\iota_{\dot{\omega}{z},i}$&$-35$\\
\hline\hline
\end{tabular}
\end{table}
Fig. \ref{fig.1} illustrates the connection between the AUVs and the virtual leader, with $\alpha_{12}=\alpha_{21}=\alpha_{23}=\alpha_{32}=\alpha_{34}=\alpha_{43}=1$. As illustrated in Fig. \ref{fig.1}, in the considered communication topology the desired trajectory is communicated only to AUV-1, i.e., $b_1=1$. Therefore, the $L$ and $B$ matrices can be calculated as:
\begin{equation}
\begin{aligned}
\label {61}
L=\begin{bmatrix}
1 &-1 & 0 & 0\\
-1 & 2 & -1 &0\\
0 & -1& 2&- 1\\
0 & 0 & -1 &1
\end{bmatrix},
B=\begin{bmatrix}
1 &0 & 0 & 0\\
0 & 0 & 0 &0\\
0 & 0& 0&0\\
0 & 0 & 0 &0
\end{bmatrix}.
\end {aligned}
\end{equation}
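These matrices can be reproduced directly from the adjacency weights; the short check below (illustrative only) also confirms that $L+B$ is positive definite, as assumed in the graph-theory preliminaries.
\begin{verbatim}
import numpy as np

A = np.array([[0, 1, 0, 0],      # chain topology 1-2-3-4, unit edge weights
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A   # Laplacian, matches (61)
B = np.diag([1.0, 0.0, 0.0, 0.0])  # only AUV 1 receives the leader signal
print(L)
print(np.all(np.linalg.eigvalsh(L + B) > 0))   # True: L + B > 0
\end{verbatim}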
The moving trajectory of the virtual leader is selected as $\eta^d(t)=[30-30e^{-t}, 5t, 2t, 0, 0, 0]^T$. The desired relative postures between AUVs are given by $\delta_{12}=[0, 10, 0]^T$, $\delta_{21}=[0, -10, 0]^T$, $\delta_{23}=[-10, 0, 0]^T$, $\delta_{32}=[10, 0, 0]^T$, $\delta_{34}=[0, -10, 0]^T$, and $\delta_{43}=[0, 10, 0]^T$. All the vehicles have the same orientation. The relative distance between the virtual leader and AUV 1 is $\delta_{1d}=[20, 0, 0]^T$. It is assumed that the four AUVs start from the initial positions: $\eta_1(0)=[2, 3, 3, 0.3, 0, 0.2]^T$, $\eta_2(0)=[2.5, 3.5, 3, 0.2, 0, 0.25]^T$, $\eta_3(0)=[2, 3, 3, 0.3, 0, 0.2]^T$, $\eta_4(0)=[3, 3, 2, 0.3, 0, 0.2]^T$, and $\upsilon_i=0_{6\times{1}}, i\in\{1, 2, 3, 4\}$ is set for the initial velocities of the AUVs.
The disturbance term is assumed to be:
\begin{equation}
\begin{aligned}
\label {63}
d_i(t,\eta_i,\upsilon_i)=&[2.5\sin(t)-0.5\upsilon_{xi}^2-0.7\sin(\upsilon_{xi}\upsilon_{yi}),\\
&2.5\cos(t)+0.1\upsilon_{xi}^2+0.5\sin(\upsilon_{yi}), \\
&2.5\sin(t)+0.7\upsilon_{xi}^2+0.8\sin(\upsilon_{zi}),\\
&0.5\sin(t)+0.2\upsilon_{\phi{i}}^3,\\
&0.5\cos(t)-0.2\upsilon_{\theta{i}}^2,\\
&0.5\sin(t)-0.4\upsilon_{\psi{i}}^3]^T,\\
(i\in\{1, 2, 3, 4\}).
\end {aligned}
\end{equation}
Note that the above parameters are selected to be quite similar to the parameters used in \cite{SMC3} to facilitate the comparison later. However, in this experiment, the disturbance term (\ref{63}) is modeled to be more severe, including both environmental disturbances and model uncertainties.
The parameters of the proposed controller in this simulation, chosen based on a trial-and-error procedure, are $k_1=k_8=5$, $k_2=k_3=k_9=k_{10}=0.4$, $w_1=w_2=1$, and $\beta_s=20$. The exponents are $\gamma=5/7$ and $\iota=7/5$. The control efforts are saturated at $\tau_\text{max}=300\,\mathrm{Nm}$.
The proposed controller uses the following membership functions, which were tuned based on a trial-and-error procedure:\\
$
{{\mu }_{A_{i}^{1}}}=\exp \left( {-{{\left( {{Z}_{i}}+7 \right)}^{2}}}/{4}\; \right), {{\mu }_{A_{i}^{2}}}=\exp \left( {-{{\left( {{Z}_{i}}+5 \right)}^{2}}}/{4}\; \right),
{{\mu }_{A_{i}^{3}}}=\exp \left( {-{{\left( {{Z}_{i}}+3 \right)}^{2}}}/{4}\; \right), {{\mu }_{A_{i}^{4}}}=\exp \left( {-{{\left( {{Z}_{i}}+1 \right)}^{2}}}/{4}\; \right),
{{\mu }_{A_{i}^{5}}}=\exp \left( {-{{\left( {{Z}_{i}}+0 \right)}^{2}}}/{4}\; \right), {{\mu }_{A_{i}^{6}}}=\exp \left( {-{{\left( {{Z}_{i}}-1 \right)}^{2}}}/{4}\; \right),
{{\mu }_{A_{i}^{7}}}=\exp \left( {-{{\left( {{Z}_{i}}-3 \right)}^{2}}}/{4}\; \right), {{\mu }_{A_{i}^{8}}}=\exp \left( {-{{\left( {{Z}_{i}}-5 \right)}^{2}}}/{4}\; \right),
{{\mu }_{A_{i}^{9}}}=\exp \left( {-{{\left( {{Z}_{i}}-7 \right)}^{2}}}/{4}\; \right).$\\
The input of the FLC is $Z_i=[\eta_i, \upsilon_i]^T$. To reduce chattering, the controller (\ref{492}) is used and $\epsilon_1=0.01$.
In order to highlight the superior performance of the proposed controller, it is compared with the distributed SMC \cite{SMC3}. The SMC is designed as in Appendix A, and its sliding gain is selected as $\beta_0=200$. Note that the SMC \cite{SMC3} does not consider the effects of input saturation in the design. The tracking performances of the proposed controller are shown in Figs. \ref{fig.2}, \ref{fig.3}, \ref{fig.4}, and \ref{fig.5}, while the performances of the SMC are shown in Figs. \ref{fig.6}, \ref{fig.7}, \ref{fig.8}, and \ref{fig.9}. In particular, Fig. \ref{fig.2} shows the formation shape of the four AUVs under the proposed controller. Compared with the formation shape of the AUVs under the SMC controller shown in Fig. \ref{fig.6}, the proposed controller provides faster and smoother convergence. Figs. \ref{fig.3} and \ref{fig.4} show the tracking errors $\varepsilon_{1,i}$ and $\varepsilon_{2,i}$ $(i=1,2,3,4)$ under the proposed controller, which converge to zero. The errors $\varepsilon_{1,i}$ and $\varepsilon_{2,i}$ under the SMC are shown in Figs. \ref{fig.7} and \ref{fig.8}, respectively. Comparing Figs. \ref{fig.7} and \ref{fig.8} with Figs. \ref{fig.3} and \ref{fig.4}, respectively, shows that the SMC provides faster convergence of the tracking errors. However, this is because the sliding gain of the SMC was chosen to be larger, which leads to much higher, possibly physically unrealisable, control efforts, as shown in Fig. \ref{fig.9}. Fig. \ref{fig.5} shows the control efforts of the proposed controller, which are smoother and bounded by $\tau_\text{max}$.
\begin{figure}[!t]\centering
\includegraphics[width=8.5cm]{topology.JPG}
\caption{The communication topology graph for the four-AUV formation control}
\label{fig.1}
\end{figure}
\begin{figure}[!t]\centering
\includegraphics[width=8.5cm]{3dspace.JPG}
\caption{The formation shape of four AUVs under the proposed controller}
\label{fig.2}
\end{figure}
\begin{figure}[!t]\centering
\includegraphics[width=8.5cm]{e1_bar_each.JPG}
\caption{Position tracking error $\varepsilon_{1}={e}_{1}$ of the AUVs under the proposed controller}
\label{fig.3}
\end{figure}
\begin{figure}[!t]\centering
\includegraphics[width=8.5cm]{e2_bar_each.JPG}
\caption{Velocity tracking error $\varepsilon_{2}={e}_{2}$ of the AUVs under the proposed controller}
\label{fig.4}
\end{figure}
\begin{figure}[!t]\centering
\includegraphics[width=8.5cm]{tau_each.JPG}
\caption{Control efforts $\tau_i (i=1,2,3,4)$ of the AUVs under the proposed controller}
\label{fig.5}
\end{figure}
\begin{figure}[!t]\centering
\includegraphics[width=8.5cm]{3dSMC.JPG}
\caption{The formation shape of four AUVs under the SMC controller}
\label{fig.6}
\end{figure}
\begin{figure}[!t]\centering
\includegraphics[width=8.5cm]{e1SMC.JPG}
\caption{Position tracking errors $\varepsilon_{1}=e_{1}$ of the AUVs under the SMC controller}
\label{fig.7}
\end{figure}
\begin{figure}[!t]\centering
\includegraphics[width=8.5cm]{e2SMC.JPG}
\caption{Velocity tracking errors $\varepsilon_{2}=e_{2}$ of the AUVs under the SMC controller}
\label{fig.8}
\end{figure}
\begin{figure}[!t]\centering
\includegraphics[width=8.5cm]{tauSMC.JPG}
\caption{The control efforts $\tau_{i}, (i=1, 2, 3, 4)$ of the AUVs under the SMC controller}
\label{fig.9}
\end{figure}
\section{Conclusion}
This paper has introduced a new distributed fixed-time consensus formation control for multiple AUV systems based on an adaptive backstepping fuzzy sliding mode control. In this scheme, a distributed consensus tracking error was derived to form the distributed global error dynamics. Then, a fixed-time backstepping SMC was designed to obtain fixed-time convergence for the whole system. However, the backstepping fixed-time SMC had two main shortcomings: (i) it produces significant chattering because a large sliding gain must be selected, and (ii) it does not consider input saturation effects in the design. To overcome these shortcomings, an auxiliary variable and an adaptive fixed-time fuzzy logic approximation have been employed. The computer simulation of a consensus formation control for four AUVs demonstrated that the fixed-time controller provided a quick response and stability for the group of AUVs and handled the input saturation problem well.
In future work, we will address the multiple-constraint problem (i.e., workspace, state, and input constraints simultaneously) and obstacle avoidance for multiple collaborative AUVs. A physical experimental system based on BlueROV2 robots is being developed, and the experimental results will be reported in future work.
\appendices
{
\section*{Appendix A: Design of the Distributed Sliding Mode Control}
The SMC can be derived as follows \cite{SMC3}:\\
First, the sliding surface is selected as
\begin{equation}
\begin{aligned}
\label {70}
s=k_1(L+B)\bar{\varepsilon}_1+\bar{\varepsilon}_2
\end {aligned}
\end{equation}
where $k_1$ is a positive constant.
The sliding mode control can be designed as
\begin{equation}
\begin{aligned}
\label {71}
\tau&=[\bar{J}(\eta_2)\bar{\Pi}]^{-1}\left(-\bar{\Phi}(\upsilon,\eta)\upsilon+\tau^{'}\right)
\end {aligned}
\end{equation}
where $\tau'$ is designed as
\begin{equation}
\begin{aligned}
\label {711}
\tau^{'}&=-k_1\bar{\varepsilon}_2+\textbf{1}_N\otimes\ddot{\eta}^d-\beta_0\text{sign}(s)
\end {aligned}
\end{equation}
To reduce the chattering, the controller (\ref{711}) is revised as:
\begin{equation}
\begin{aligned}
\label {712}
\tau^{'}&=-k_1\bar{\varepsilon}_2+\textbf{1}_N\otimes\ddot{\eta}^d-\beta_0\frac{s}{||s||+\epsilon_1}.
\end {aligned}
\end{equation}
The convergence and stability of the SMC can be referred to \cite{SMC3}.
}
|
{
"arxiv_id": "2302.14150",
"language": "en",
"timestamp": "2023-03-02T02:18:35",
"url": "https://arxiv.org/abs/2302.14150",
"yymm": "2302"
} | \section{Motivation}
Maximal inequalities are at the heart of empirical process theory \citep{van2014probability}.
The case of Gaussian processes is well-understood via the celebrated generic chaining technique
\citep{Talagrand2016upper}. There, a key role in the lower
bounds is played by Slepian's inequality,
which allows one to approximate a Gaussian process
by an appropriate uncorrelated one.
The absence of a generic analog of Slepian's inequality
--- say, for the kind of Binomial process considered in
\citet{CK22}
--- can be a major obstruction in obtaining tight lower
bounds. Indeed, as Proposition~\ref{prop:pinelis-cont}
below shows,
for nonnegative $X_i$,
any upper bound on
$\mathop{\mexp}\max_i \tilde X_i$,
where $\tilde X_i$
is ``the independent version'' of $X_i$,
automatically yields an upper bound on
$\mathop{\mexp}\max_i X_i$.
The reverse direction, of course, fails without
additional structural assumptions. We discover that
pairwise independence suffices for the reverse direction,
and that this condition can be relaxed further.
\section{The Bernoulli case}
Let $X_1,X_2,\ldots,X_n$
and
$\tilde X_1,\tilde X_2,\ldots,\tilde X_n$
be two
collections
of
Bernoulli random variables,
where
the $\tilde X_i$s are
mutually
independent (and independent of the $X_i$s),
with $X_i,\tilde X_i\sim\mathrm{Bernoulli}(p_i)$.
Letting
$Z=\sum_{i=1}^n X_i$
and
$\tilde Z=\sum_{i=1}^n \tilde X_i$,
we have
\begin{eqnarray*}
\mathop{\mexp}\max_{i\in[n]}X_i=\P(Z>0),
\qquad
\mathop{\mexp}\max_{i\in[n]}\tilde X_i=\P(\tilde Z>0).
\end{eqnarray*}
\paragraph{Decoupling from above.}
An elegant result of \citet{pinelis22}
(answering our question)
shows that
$\P(Z>0)\lesssim\P(\tilde Z>0)$;
his proof is provided here for completeness:
\begin{proposition}[Pinelis]
\label{prop:pinelis}
For $c=\mathrm{e}/(\mathrm{e}-1)$
and $X_i,\tilde X_i,Z,\tilde Z,p_i$ as above,
we have
\begin{eqnarray*}
\P(Z>0)\le c\P(\tilde Z>0).
\end{eqnarray*}
\end{proposition}
\begin{proof}
Put $M=\P(Z>0)$,
$\tilde M=\P(\tilde Z>0)$
and $S=\sum_{i=1}^n p_i$,
and observe that
\begin{eqnarray*}
M\le\min\set{S,1}\le c(1-\mathrm{e}^{-S}).
\end{eqnarray*}
On the other hand,
\begin{eqnarray*}
\tilde M=1-\prod_{i=1}^n(1-p_i)
\ge1-\mathrm{e}^{-S},
\end{eqnarray*}
whence $M\le c\tilde M$.
\end{proof}
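A quick Monte Carlo sanity check of Proposition~\ref{prop:pinelis} (illustrative only, not part of the argument) for one particular dependent construction, namely comonotone $X_i$ driven by a single uniform random variable, is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, trials = 20, 200_000
p = rng.uniform(0.0, 0.1, size=n)

U = rng.random(trials)                        # comonotone X_i = 1{U < p_i}
Z_pos = (U[:, None] < p).any(axis=1).mean()   # estimate of P(Z > 0)
tilde = rng.random((trials, n)) < p           # independent copies
tZ_pos = tilde.any(axis=1).mean()             # estimate of P(tilde Z > 0)

c = np.e / (np.e - 1.0)
print(Z_pos <= c * tZ_pos)                    # True (up to Monte Carlo error)
\end{verbatim}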
Further, we note that Pinelis's constant
$c=\mathrm{e}/(\mathrm{e}-1)$
is optimal. Indeed,
consider the case where
$p_i=1/n$, $i\in[n]$, and $\P(Z=1)=1$.
This makes $\P(\tilde Z>0)=1-(1-1/n)^n\to 1-1/\mathrm{e}$
as $n\to\infty$.
Despite its elegance,
Proposition~\ref{prop:pinelis}
will likely have limited applications,
since in practice, the techniques for upper-bounding
$\mathop{\mexp}\max_i X_i$ rely on the union bound and
are insensitive to the dependence structure of $X_i$
--- in which case
the technique employed in upper-bounding
$\mathop{\mexp}\max_i X_i$ automatically upper-bounds
$\mathop{\mexp}\max_i \tilde X_i$ as well.
\paragraph{Decoupling from below.}
A more interesting and useful direction
would be to obtain an estimate of the form
$\P(Z>0)\gtrsim\P(\tilde Z>0)$.
Clearly, no such
dimension-free
estimate can hold without
further assumptions on the $X_i$.
Indeed,
for a small $\varepsilon>0$,
let
$\P(X_1=X_2=\ldots=X_n=1)=\varepsilon$
and
$\P(X_1=X_2=\ldots=X_n=0)=1-\varepsilon$.
In this case, $\P(Z>0)=\varepsilon$.
On the other hand,
$\P(\tilde Z>0)=1-(1-\varepsilon)^n
=n\varepsilon+O(\varepsilon^2)
$, and so
$
\P(\tilde Z>0)
/
\P(Z>0)
\to n
$
as $\varepsilon\to0$.
Nor can the ratio exceed $n$,
since
\begin{eqnarray*}
\mathop{\mexp}\max_{i\in[n]}\tilde X_i
\le
\sum_{i=1}^n\mathop{\mexp} \tilde X_i
\le
n\max_{i\in[n]}\mathop{\mexp} \tilde X_i
=
n\max_{i\in[n]}\mathop{\mexp} X_i
\le
n\mathop{\mexp}\max_{i\in[n]} X_i.
\end{eqnarray*}
Let us recall the notion of
pairwise independence. For the Bernoulli case,
it means that
for each $i\neq j\in [n]$, we have
$\mathop{\mexp}[X_iX_j]=
\mathop{\mexp}[X_i]\mathop{\mexp}[X_j]
$. The main result of this note
is that
pairwise independence suffices
for
$\P(Z>0)\gtrsim\P(\tilde Z>0)$.
\begin{proposition}
\label{prop:main}
Let $X_i,\tilde X_i,Z,\tilde Z,p_i$
be as
above,
and assume additionally that the $X_i$
are pairwise independent.
Then
\begin{eqnarray*}
\P(Z>0) \ge \frac12\P(\tilde Z>0).
\end{eqnarray*}
\end{proposition}
\begin{proof}
By the Paley-Zygmund inequality,\footnote{
We thank Ron Peled for the suggestion of applying
Paley-Zygmund to $Z$.
}
\begin{eqnarray*}
\P(Z>0) \ge
\frac{(\mathop{\mexp} Z)^2}{\mathop{\mexp}[Z^2]}.
\end{eqnarray*}
Now $\mathop{\mexp} Z=\sum_{i=1}^n p_i$
and, by pairwise independence,
\begin{eqnarray}
\label{eq:Z^2}
\mathop{\mexp}[Z^2]
=
\sum_{i=1}^n p_i
+
2\sum_{1\le i<j\le n} p_ip_j
=
\sum_{i=1}^n p_i
+
\paren{\sum_{i=1}^n p_i}^2
-
\sum_{i=1}^n p_i^2
\le
\sum_{i=1}^n p_i
+
\paren{\sum_{i=1}^n p_i}^2
.
\end{eqnarray}
Hence,
\begin{eqnarray*}
\frac{(\mathop{\mexp} Z)^2}{\mathop{\mexp}[Z^2]}
\ge
\frac{
\paren{\sum_{i=1}^n p_i}^2
}{
\sum_{i=1}^n p_i
+
\paren{\sum_{i=1}^n p_i}^2
}
.
\end{eqnarray*}
On the other hand,
$\P(\tilde Z>0)$ is readily computed:
\begin{eqnarray*}
\P(\tilde Z>0)
=
1-\prod_{i=1}^n(1-p_i).
\end{eqnarray*}
Therefore, to prove the claim, it suffices to show that
\begin{eqnarray*}
F(p_1,\ldots,p_n)
:=
2\paren{\sum_{i=1}^n p_i}^2
-
\paren{\sum_{i=1}^n p_i
+
\paren{\sum_{i=1}^n p_i}^2
}
\paren{1-\prod_{i=1}^n(1-p_i)}
\ge0.
\end{eqnarray*}
To this end,\footnote{
This elegant proof that $F\ge0$
is due to D. Berend, who also corrected
a mistake in an earlier, clunkier proof of ours.} we factorize
$F=SG$,
where
$G=S+P+SP-1$,
$S=\sum_i p_i$
and $P=\prod_i(1- p_i)$.
\iffalse
$F=
G
\sum_{i=1}^n p_i
$,
where
\begin{eqnarray*}
G(p_1,\ldots,p_n)
=
2\sum_{i=1}^n p_i
-
\paren{1
+
\sum_{i=1}^n p_i
}
\paren{1-\prod_{i=1}^n(1-p_i)}
.
\end{eqnarray*}
\fi
Thus,
$F\ge0\iff G\ge 0$
and in particular, it suffices to
verify the latter.
Now if $S\ge1$ then obviously $G\ge0$
and we are done.
Otherwise,
since $P\ge1-S$
trivially holds,
we have
$G\ge S(1-S)$.
In this case, $S<1\implies G\ge0$.
\end{proof}
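The key algebraic fact $F\ge0$ can also be spot-checked numerically; the snippet below is a sanity check on random probability vectors, not a substitute for the argument above.
\begin{verbatim}
import numpy as np

def F(p):
    S, P = p.sum(), np.prod(1.0 - p)
    return 2.0 * S**2 - (S + S**2) * (1.0 - P)

rng = np.random.default_rng(1)
ok = all(F(rng.uniform(0.0, 1.0, size=rng.integers(1, 30))) >= -1e-12
         for _ in range(10_000))
print(ok)   # True
\end{verbatim}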
We conjecture that the constant $\frac12$ in Proposition~\ref{prop:main} is not optimal.
For a fixed $n$, define the joint
pairwise independent
distribution on
$(X_1,\ldots,X_n)$
---
conjecturally, an extremal
one for minimizing
$
\P(Z=0)/
\P(\tilde Z>0)
$
---
as follows: $p_i=1/(n-1)$, $i\in[n]$,
$\P(Z=0)=
\frac12-\frac{1}{2(n-1)}
$, and
$\P(Z=2)=1-\P(Z=0)$.
This makes
$\P(\tilde Z>0)=1-(1-1/(n-1))^n\to 1-1/\mathrm{e}$
as $n\to\infty$.
If our conjecture is correct, the optimal
constant for the lower bound is $c'=\frac{\mathrm{e}}{2(\mathrm{e}-1)}$, or exactly half of Pinelis's constant.\footnote{
We thank Daniel Berend, Alexander Goldenshluger,
and
Yuval Peres
for raising the question of the constants.
AG
(and also Omer Ben-Porat)
pointed out a possible connection to
{\em prophet inequalities}
---
and in particular, the
Bernoulli selection lemma
in \cite{Correa17}
}
\paragraph{Relaxing pairwise independence.}
An inspection of the proof
shows that we do not actually need
$\mathop{\mexp}[X_iX_j]=p_ip_j$,
but rather only
$\mathop{\mexp}[X_iX_j]\le p_ip_j$.
This condition is called {\em negative (pairwise) covariance} \citep{Dubhashi:1998:BBS:299633.299634}.
\section{Positive real case}
In this section, we assume that
$X_1,\ldots,X_n$ are nonnegative
integrable random variables
and the
$\tilde X_1,\ldots,\tilde X_n$
are their independent copies:
each $\tilde X_i$ is distributed identically to $X_i$
and the $\tilde X_i$ are mutually independent.
As a warmup, let us see how
Proposition~\ref{prop:pinelis}
yields $
\mathop{\mexp}\max_{i\in[n]}X_i
\lesssim
\mathop{\mexp}\max_{i\in[n]}\tilde X_i
$:
\begin{proposition}
\label{prop:pinelis-cont}
Let
$X_1,\ldots,X_n$ be nonnegative and
integrable
with independent copies
$\tilde X_i$ as above.
For $c=\mathrm{e}/(\mathrm{e}-1)$, we have
\begin{eqnarray*}
\mathop{\mexp}\max_{i\in[n]}X_i
\le
c\mathop{\mexp}\max_{i\in[n]}\tilde X_i
.
\end{eqnarray*}
\end{proposition}
\begin{proof}
For $t>0$ and $i\in[n]$,
put
$Y_i(t)=\pred{X_i>t}$,
$\tilde Y_i(t)=\pred{\tilde X_i>t}$
and
$Z(t)=\sum_{i=1}^nY_i(t)$,
$\tilde Z(t)=\sum_{i=1}^n\tilde Y_i(t)$.
Then
\begin{eqnarray*}
\mathop{\mexp}\max_{i\in[n]}X_i
&=&
\int_0^\infty
\P\paren{
\max_{i\in[n]}X_i>t
}\mathrm{d} t
\\
&=&
\int_0^\infty
\P\paren{
Z(t)>0
}\mathrm{d} t
\\
&\le&
c\int_0^\infty
\P\paren{
\tilde Z(t)>0
}\mathrm{d} t
\\
&=&
c\int_0^\infty
\P\paren{
\max_{i\in[n]}\tilde X_i>t
}\mathrm{d} t
\\
&=&
c\mathop{\mexp}\max_{i\in[n]}\tilde X_i
.
\end{eqnarray*}
\end{proof}
For pairwise independent $X_i$,
we have a reverse inequality:
\begin{proposition}
\label{prop:main-cont}
Let
$X_1,\ldots,X_n$ be nonnegative and
integrable
with independent copies
$\tilde X_i$ as above.
If additionally the $X_i$
are pairwise independent, then
\begin{eqnarray*}
\mathop{\mexp}\max_{i\in[n]}X_i
\ge
\frac12\mathop{\mexp}\max_{i\in[n]}\tilde X_i
.
\end{eqnarray*}
\end{proposition}
\begin{proof}
The proof is entirely analogous to
that of Proposition~\ref{prop:pinelis-cont},
except that Proposition~\ref{prop:main}
is invoked in the inequality step.
\end{proof}
\paragraph{Relaxing pairwise independence.}
As before, the full strength of pairwise
independence of the $X_i$ is not needed.
The condition $\P(X_i>t,X_j>t)\le \P(X_i>t)\P(X_j>t)$
for all $i\neq j\in[n]$ and $t>0$ would suffice;
it is weaker than
pairwise
{\em negative upper orthant dependence}
\citep{10.1214/aos/1176346079}.\footnote{
Thanks to Murat Kocaoglu
for this reference.
}
\section{Back to Bernoulli: beyond negative covariance}
What if the Bernoulli $X_i$
do not satisfy the negative
covariance condition
$\mathop{\mexp}[X_iX_j]\le p_ip_j$?
Proposition~\ref{prop:main} is not directly applicable,
but not all is lost.
For $i\neq j\in[n]$, define $\eta_{ij}$ by
\begin{eqnarray*}
\eta_{ij}=\paren{
\mathop{\mexp}[X_iX_j]
-
p_ip_j
}_+
\end{eqnarray*}
and put $\eta_{ii}:=0$.
Thus,
$
\mathop{\mexp}[X_iX_j]\le p_ip_j+\eta_{ij}
$, and, repeating the calculation in \eqref{eq:Z^2},
\begin{eqnarray*}
\mathop{\mexp}[Z^2]
\hide{
=
\sum_{i=1}^n p_i
+
2\sum_{1\le i<j\le n} p_ip_j
=
\sum_{i=1}^n p_i
+
\paren{\sum_{i=1}^n p_i}^2
-
\sum_{i=1}^n p_i^2
}
\le
\sum_{i=1}^n p_i
+
\paren{\sum_{i=1}^n p_i}^2
+
\sum_{i,j\in[n]}\eta_{ij}
.
\end{eqnarray*}
Let us put
$S=\sum_{i=1}^n p_i$,
$A=S^2$,
$B=
S
+
S^2
$,
$C=\frac12\P(\tilde Z>0)$,
and $H=\sum_{i,j\in[n]}\eta_{ij}$.
Now, for $A,B,C,H\ge 0$, we have
\begin{eqnarray*}
\frac{A}{B}\ge C
&\implies&
\frac{A}{B+H}\ge C\paren{1-\frac{H}{B+H}}.
\end{eqnarray*}
and
so
we obtain a generalization of
Proposition~\ref{prop:main}:
\begin{proposition}
\label{prop:eta}
Let $X_i,\tilde X_i,Z,\tilde Z,p_i,B,H$
be as
above.
Then
\begin{eqnarray*}
\P(Z>0) \ge \frac12\paren{1-\frac{H}{B+H}}\P(\tilde Z>0)
.
\end{eqnarray*}
\end{proposition}
When $H\lesssim B$,
Proposition~\ref{prop:eta}
yields useful estimates.
\bibliographystyle{plainnat}
|
{
"arxiv_id": "2302.14117",
"language": "en",
"timestamp": "2023-03-01T02:01:31",
"url": "https://arxiv.org/abs/2302.14117",
"yymm": "2302"
} |
\section{Introduction}
People create videos to share their experiences and expertise.
To create a compelling video, creators first capture raw video footage then edit it to remove irrelevant or low-quality footage and add effects.
For instance, creators speed up repetitive actions (\textit{e.g.}, walking or cleaning), remove long pauses in their speech, and cut blurry footage due to camera shakes.
With current video editing tools (\textit{e.g.}, Premiere~\cite{premiere}, Final Cut Pro~\cite{finalcut}, Descript~\cite{descript}), creators first need to \textit{visually inspect} the video footage and the corresponding video timeline~\cite{premiere,finalcut} or transcript~\cite{descript}, to identify edit points.
Just as visual inspection presents an accessibility barrier for BLV creators authoring static visuals (\textit{e.g.}, photos~\cite{bennett2018teens}, presentations~\cite{peng2022diffscriber}, documents~\cite{das2022co11ab}),
relying on visual inspection makes video editing inaccessible to a growing number of blind and low vision (BLV) video creators~\cite{seo2021understanding} who author and share general-purpose and accessibility-focused videos including vlogs, reviews, and tutorials online.
While prior work explored how to make videos accessible to BLV audience members ~\cite{liu2022crossa11y,pavel2020rescribe,liu2021what}, and how to make video editing more efficient for sighted creators~\cite{huber2019bscript,Berthouzoz2012,chi2013democut}, existing work has not yet explored how to make video editing accessible to BLV creators.
To better understand BLV creators' current video production strategies and challenges, we interviewed 8 BLV video creators and analyzed 24 videos about screen reader-based video editing.
While BLV creators devised creative techniques to film and edit their videos such as describing the visual content during video capture, and editing the video using audio editing tools, the creators reported that video editing remained challenging due to
four core accessibility barriers of videos and video editing tools: lack of access to the visual content of the video (\textit{e.g.}, settings, objects), lack of access to the visual quality of the video (\textit{e.g.}, lighting, blurriness), lack of efficient ways to navigate to different parts of the video, and limited screen reader support (\textit{e.g.}, deeply nested menus or icons listed as ``button'').
As a result, BLV creators reported that they either recruited sighted collaborators to review and edit their videos, or uploaded their original video recordings without editing the footage to their preferred level of polish.
To address accessibility barriers of current video editing tools, we present AVscript{}, a system that supports accessible video editing via \textit{audio-visual scripts} (\autoref{fig:teaser}). ~AVscript{}'s audio-visual script (\autoref{fig:teaser}B) features a transcript of the narration in the video {(\textit{e.g.}, ``First of all,...'')} overlaid with information about the visual content in the video {(\textit{e.g.}, ``Scene 7: A pantry full of food...'')} and visual errors in the video (\textit{e.g.}, ``Camera blur'').
We align the audio-visual script to the video such that BLV creators can directly review, navigate, and edit the video via text.
{As creators play the video, ~AVscript{} surfaces visual information by alerting creators to scene changes and visual errors using audio notifications. AVscript{} also allows creators to inspect objects in the current frame using the ``Inspect'' feature.}
To facilitate efficient navigation, ~AVscript{} features an outline (\autoref{fig:teaser}C) and search feature (\autoref{fig:teaser}E).
~AVscript{}'s outline (\autoref{fig:teaser}C) of key scenes and errors lets creators skim to gain a high-level overview of the video or click to navigate to the corresponding point in the script and video. ~AVscript{}'s search (\autoref{fig:teaser}E) lets creators navigate the video by searching for visual objects (\textit{e.g.}, ``microwave'') or transcript words of interest.
{To assess AVscript{}, we conducted a within-subjects study with 12 BLV editors comparing AVscript{} to their existing workflows and invited 3 BLV creators to edit their own footage using AVscript{}. In the within-subjects study with 12 BLV editors, creators reported lower mental demand and greater confidence in their final video when editing videos with ~AVscript{} compared to using their own video editing tools (\textit{e.g.}, Reaper, a timeline-based editor, and FFmpeg, a command-line tool). All creators expressed that they wanted to use the tool in the future as it helped them efficiently review their video footage and identify visual errors. BLV creators editing their own footage with AVscript{} used AVscript's visual descriptions to efficiently recall what they filmed, and AVscript's error detection to remove visual errors. After using AVscript{} to edit their own footage, creators reported that ~AVscript{} would enable them to edit more videos without assistance, thus decreasing the time required to produce videos and empowering them to create new types of videos with more diverse content and styles.}
Our work contributes:
\begin{itemize}
\item A formative study revealing current practices and unique challenges of video editing by BLV creators.
\item Design and development of AVscript{}, a novel system that uses \textit{audio-visual script} to improve accessibility in reviewing, navigating, and editing videos.
\item Two user studies demonstrating how BLV creators leverage AVscript{} to edit given videos and their own videos.
\end{itemize}
\section{Related Work}
Our work builds upon previous work in accessible authoring tools (Section ~\ref{RW4}), video accessibility (Section ~\ref{RW3}), text-based editing tools for audio and visual media (Section ~\ref{RW1}), and interaction techniques for video navigation (Section ~\ref{RW2}).
\subsection{Accessible Authoring Tools}\label{RW4}
Prior research has explored how creators with visual impairments currently author photos~\cite{adams2016blind, brady2013investigating}, drawings~\cite{kurze1996tdraw}, documents~\cite{villaverde2014facilitating}, websites~\cite{li2019editing}, presentations~\cite{schaadhardt2021understanding, peng2022diffscriber}, audio~\cite{saha2020understanding}, and videos~\cite{seo2021understanding}. Such work identified that authoring tools for visual content remain inaccessible because they do not convey visual information about the content the creator is authoring (\textit{e.g.}, framing of a photo~\cite{adams2016blind}, layout of a website~\cite{li2019editing}).
Thus, prior work created tactile displays to make authoring websites~\cite{li2019editing}, documents~\cite{avila2018tactile}, and maps~\cite{shi2020molder} accessible. While tactile displays allow creators with visual impairments to access visual content, creators must have access to an embosser or laser cutter to print tactile sheets for the initial and revised visual designs.
To provide access to visual designs and revisions without tactile displays, Peng et al. generated visual descriptions that let presenters with visual impairments obtain information about the content, layout, and style of their slides~\cite{peng2022diffscriber}.
Prior work also explores how to support collaboration between creators who are sighted and creators with visual impairments (\textit{i.e.} mixed-ability collaboration) by providing descriptions of revisions~\cite{peng2022diffscriber}, and access to awareness information that describes who was authoring what during collaborative editing~\cite{das2022co11ab,lee2022}.
While these tools make authoring static visuals, including text documents~\cite{das2022co11ab,lee2022} and visual designs~\cite{peng2022diffscriber}, accessible for creators with visual impairments, we explore how to make video authoring accessible to creators with visual impairments by representing the dynamic visual and audio content of a video as text.
\subsection{Video Accessibility}\label{RW3}
Creating an accessible tool for authoring videos is challenging partially due to the inaccessibility of videos themselves.
Videos are inaccessible to BLV audiences when the visual content in the video is not described by the audio (\textit{e.g.,} travel videos with scenic shots set to music)~\cite{liu2022crossa11y,liu2021what,peng2021say}.
To make videos accessible, video creators~\cite{pavel2020rescribe}, volunteers~\cite{youdescribe}, or professional audio describers~\cite{3playmedia} add audio descriptions to describe important visual content that is not understandable from the audio alone.
As authoring audio descriptions is challenging, prior work developed tools that help creators gain feedback on audio descriptions~\cite{saray2011adaptive, natalie2020viscene}, respond to audience requests for descriptions~\cite{youdescribe}, optimize descriptions to fit within time available~\cite{pavel2020rescribe}, and recognize mismatches between audio and visuals to add descriptions as they capture~\cite{peng2021say} or edit~\cite{liu2022crossa11y} videos. Beyond helping creators author accessible videos, prior work makes inaccessible videos accessible on demand by generating automatic~\cite{wang2021toward} or interactive~\cite{peng2021slidecho, huh2022cocomix} visual descriptions.
While such prior work provides BLV audience members access to visual content in videos, the prior approaches were designed to make videos accessible for consumption rather than authoring, such that they lack important information required for video authoring tasks (\textit{e.g.,} lighting, camera stability).
We investigate how to improve the accessibility of video authoring by providing text descriptions of both the visual content and quality of videos.
\subsection{Text-based Audio and Video Editing}\label{RW1}
Audio and video editing is time-consuming as it requires multiple iterations of reviewing footage to find edit points, navigating to the edit points, and applying edits~\cite{chi2013democut}. To improve the efficiency of editing audio and video footage, prior work introduced text-based editing, which allows users to edit audio or video as they would a text document by time-aligning the words in the speech transcript with words in the audio~\cite{shin2016, rubin2013content, huber2019bscript, descript, reductvideo, imvidu,truong2016quickcut, Leake2017, Berthouzoz2012, fried2019}. Researchers further improved the efficiency of text-based editing by: highlighting pauses or repeated words in the transcript~\cite{rubin2013content}, suggesting opportunities for B-roll~\cite{huber2019bscript}, and matching voice-over recordings with relevant narrated video segments~\cite{truong2016quickcut}.
In addition, prior research used text-based editing to improve the quality of the video output by creating seamless transitions when cuts or dialogue changes occur in talking head videos~\cite{fried2019}, dialogue-driven scenes~\cite{Leake2017}, and interview videos~\cite{Berthouzoz2012}.
However, existing text-based editing tools were designed for sighted video editors who can
visually inspect video footage to identify editing opportunities, and visually skim the text transcript to navigate efficiently.
We explore how to make text-based video editing accessible by
integrating visual content and quality into the speech transcript (i.e. creating an audio-visual script), improving non-visual skimming via an outline, and making editing operations screen reader accessible.
\subsection{Video Navigation Interaction Techniques}\label{RW2}
{Traditional video editors such as Premiere~\cite{premiere} and Final Cut Pro~\cite{finalcut}, as well as video players, require navigating the video using a timeline. However, timeline-based navigation is challenging, as video creators and consumers need to scrub back and forth to find content of interest.
To help video consumers skim and navigate to content of interest, prior work introduced approaches to navigate videos based on transcripts~\cite{kim2014data, pavel2015scene, pavel2016vidcrit}, high-level chapters and scenes~\cite{kim2014crowd, chi2012mixt, fraser2020, yang2022catchlive, pavel2014digests, truong2021makeup,pavel2015scene}, or key objects and concepts~\cite{chang2021ruby, liu2018concept, peng2021slidecho}.
While transcripts help users efficiently search for words used in the video~\cite{kim2014data, pavel2015scene, pavel2016vidcrit}, they can be difficult to skim as they are often long, unstructured, and contain disfluencies present in speech~\cite{pavel2014digests}.
To help video consumers skim and navigate videos more efficiently, prior work segmented videos into high-level segments (i.e. scenes or chapters) and let consumers browse these segments based on representative thumbnails or text descriptions~\cite{kim2014crowd, chi2012mixt, fraser2020, yang2022catchlive, pavel2014digests, truong2021makeup,pavel2015scene}.
Prior work segmented videos into chapters or scenes by using the transcript to automatically segment the video based on transcript topics~\cite{fraser2020, yang2022catchlive, pavel2014digests, truong2021makeup}, crowdsourcing to annotate segmentation points~\cite{kim2014crowd}, or metadata such as command logs to segment based on interactions~\cite{fraser2020,chi2012mixt,pavel2016vidcrit}.
As it is particularly challenging for screen-reader users to skim text~\cite{ahmed2012read}, we similarly segment the video to create an outline of important moments (\textit{e.g.,} scene descriptions, visual errors) such that readers can quickly navigate the audio-visual script using the outline.
We also build on prior work that uses low-level features in the video (\textit{e.g.,} keywords~\cite{chang2021ruby}, presentation slide elements~\cite{peng2021slidecho}) to facilitate search, as we similarly enable search via speech and visual objects to help BLV creators locate footage to edit.}
\section{Formative Study}
Prior work explores practices of content creators with visual impairments creating media such as photos~\cite{adams2016blind,bennett2018teens}, drawings~\cite{kurze1996tdraw}, documents~\cite{villaverde2014facilitating}, and audio~\cite{saha2020understanding}, and explores community aspects of YouTube content creation such as high-level motivations for content creation and engagement with viewers~\cite{seo2021understanding}, but existing work has not yet explored BLV creators' unique practices and challenges in video editing.
To understand how BLV video creators edit videos, we analyzed {24} YouTube videos\footnote{See Appendix C for details on our video collection approach} {(V1-V24)} by 20 BLV creators about their video editing processes and interviewed an additional 8 experienced BLV video creators (P1-P8, Table~\ref{tab:participants}) about their video editing motivations, current practices, and accessibility barriers\footnote{See Supplemental Material for the full list of questions}.
Participants were recruited using mailing lists and compensated \$20 USD for the 1-hour semi-structured interview. We transcribed\footnote{https://clovanote.naver.com} the YouTube videos and interview recordings. Two researchers then independently coded all videos using open coding~\cite{hartson2012ux} and met to resolve conflicts and update the codes.
Then, the same two researchers used affinity diagramming~\cite{holtzblatt1997contextual} to group the codes into themes: goals for editing videos, strategies for editing videos, and challenges in editing videos.
\subsection{Findings: Goals for video editing}\label{form_results}
Interview participants reported that their motivation for editing was to make videos more engaging (6 participants), or tailor videos to a specific audience (2 participants). As P2 summarized: \textit{``I only keep the highlights [...] because short and snappy videos are more popular''} (P2).
When editing, 5 participants mentioned that they polished their videos by editing out visual or audio mistakes, and 4 participants highlighted that they make videos concise by removing unimportant footage.
For example, participants mentioned they removed audio mistakes like `um's and `ah's in the video (P2, P3, P5), pauses in the speech (P5, P7), or answering an audience question incorrectly (P3).
While participants currently edited primarily via the video's audio track, they were often creating videos for a broad audience: \textit{``My video is not just for people with visual impairments. For sighted viewers, I want to make sure nothing is visually awkward''} (P7).
P2 added that editing the visuals is particularly crucial for BLV video creators as they often make mistakes while capturing video (\textit{e.g.}, filming with lights turned off), and re-filming can be a burden.
Finally, in addition to cutting out unimportant footage and mistakes, participants wanted to capture viewer attention by adding music and intros to their videos.
\subsection{Findings: Strategies for video editing}\label{practices}
\ipstart{Describing visual content and mistakes} BLV creators are unable to recognize the visual content in a video (\textit{e.g.}, objects, actions, background setting) unless the visual content is understandable from the audio alone (\textit{e.g.}, described by a narrator, or accompanied by sound)~\cite{liu2021what}.
Thus, most participants mentioned that they verbally described visual content (\textit{e.g.}, where they are and what they are doing) as they filmed their video.
Participants reported a dual benefit to visual descriptions: identifying visual content while editing, and making the final video more accessible to BLV audience members.
In addition to describing visual content, P3, P4, and P8 added verbal cues for editing when they made mistakes during filming. For example, P8 explained \textit{``When I drop something, I'd say out loud `Don't use the earlier part' so that I will easily remove it later''}.
P7 dealt with a lack of information about the visual content and quality by focusing on the audio: \textit{``Because I cannot check the visual quality of the footage, I am very picky about the audio. If there is some traffic noise, I don't use that part.''}
P8 renamed his video files with visual descriptions or editing cues so that he could locate relevant clips without playing the video.
However, creators reported that their descriptions were inaccurate or incomplete when they were not aware of all relevant mistakes (\textit{e.g.}, bad lighting, blur, poor framing or composition) or visual content.
P2 recalled that once \textit{``When I was pointing at an object describing it, it wasn't there!''}.
Creators also shared that constantly describing visual content during filming took attention away from being creative (V19) and forced them to replace their audio track with music when they did not want the narration to be included in the final video (P7).
\ipstart{Identifying accessible video editing tools} To edit videos, seven participants used timeline-based editing tools (\textit{e.g.}, Final Cut Pro) and one participant used FFmpeg, a command-line tool (P4).
Participants noted that video editing tools were largely inaccessible: \textit{``Finding an accessible editing tool in the first place is difficult. Very few tools are accessible themselves and also have accessible documents or tutorials.''} (P4). Participants identified accessible video editing tools via other BLV creators and then learned how to use these tools with a screen reader via text tutorials, videos aimed at BLV editors, and official documentation.
Even with the most accessible timeline-based tools, participants reported that the menus, buttons, and sliders were often unclickable with a screen reader or not properly labeled (\textit{e.g.,} only reading `button' instead of describing the function).
In addition, such tools have complex menus that are difficult to navigate with a screen reader: \textit{``Having no access to GUI, I have to continuously tab to locate the button of my interest. This becomes tedious because video editing is a complex task''} (P1).
\ipstart{Navigating videos linearly} While most BLV creators edited their videos with timeline-based tools, the visual aids that these tools provide for sighted creators to browse, skim and select video footage (\textit{e.g.}, visual thumbnails to preview video content by hovering, audio waveforms to preview start and end of speech) are not accessible to BLV creators.
As a result, all participants reported that they review and edit videos by first watching the entire video all the way through, and either editing as they go (6 participants) or noting timestamps to edit later (P1, P4).
P7 noted that he usually watched the entire clip several times because he cannot jump from place to place in the video.
To avoid re-watching long videos in order to find content of interest, participants commonly filmed multiple short clips: \textit{``Because navigating within a single clip is so tedious, I never create a long clip.''} (P8).
\ipstart{Recruiting sighted collaborators}
BLV creators commonly sought out assistance from sighted people for filming, editing, and reviewing the final video (V1, V3, V6, V19, P2, P3, P7, P8).
For example, the creator of V3 has her sighted husband set up a camera for filming, make video intro templates, and apply color correction. The creator of V1 and V6 recently hired an editor for even basic editing tasks, as editing takes too long and causes back pain.
Before publishing their video, 4 participants (P2, P3, P6, and P8) wanted a sighted person with or without video editing experience to view the video and provide a sense of how an ``average viewer'' would see it.
For example, P3 often uses Be My Eyes\footnote{https://www.bemyeyes.com/} or Aira\footnote{https://aira.io/} to ask a volunteer to provide feedback on visual quality (\textit{e.g.,} whether she is centered in the frame and well-lit).
All interview participants mentioned that they wished to edit videos independently, as sighted assistance is not always available or affordable, and they wanted to gain control over the process as creators.
\subsection{Reflection: Opportunities for BLV creator support}\label{challenges}
While BLV creators resourcefully crafted strategies to work around inaccessible video editing tools, creators' remaining challenges (C1-C5) point to opportunities for technical or social support:
\begin{itemize}
\item[\textbf{C1.}] Recognizing visual content in a video (\textit{e.g.}, setting, actions)
\item[\textbf{C2.}] Assessing the visual quality of a video (\textit{e.g.}, lighting)
\item[\textbf{C3.}] Accessing editing tool menus with a screen reader
\item[\textbf{C4.}] Non-linear browsing and skimming of videos
\item[\textbf{C5.}] Performing visual edits (\textit{e.g.}, color correction)
\end{itemize}
Our formative study indicates that BLV creators currently expend time, effort, creative agency, and social resources to overcome these challenges: for example, narrating the visual content and noting mistakes as they film (\textbf{C1, C2}), losing and regaining editing task focus to navigate menus (\textbf{C3}), watching a video linearly rather than jumping to the point of interest (\textbf{C4}), and recruiting sighted collaborators for inaccessible or overly tedious tasks (\textbf{C1-C5}).
Our work explores how to make video editing more accessible by providing creators access to video visuals (\textbf{C1, C2}) and more efficient by improving the ability of creators to skim and browse for content of interest (\textbf{C4}), while the remaining challenges (\textbf{C3, C5}) indicate rich opportunities for future research and commercial accessibility improvements.
\section{AVscript{} }
AVscript{} (\autoref{fig:teaser}) makes video editing accessible and efficient for BLV creators with \textit{audio-visual scripts} that let creators navigate and edit based on a text script of visual content, visual quality, and speech.
We first illustrate how BLV creators can use ~AVscript{} to edit videos through an example use scenario. Then, we describe ~AVscript{}'s interface and the computational pipeline that powers it.
\subsection{Editing a video with ~AVscript{}}\label{scenario}
Anna, a YouTube content creator with a visual impairment, filmed a cooking tutorial video to upload to her channel.
Anna wants to improve the conciseness and quality of her tutorial to make it engaging to viewers, so she imports the tutorial video into ~AVscript{} to edit it.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{figures/outline.pdf}
\caption{~AVscript{}'s outline pane displays a navigable summary of the audio-visual script including the high-level scenes and potential edit points (pauses and visual errors).}\label{fig:outline-pane}
\Description{The outline pane and the audiovisual script are presented in the figure. In both panes, information about scenes 5 to 8 is shown. On the left outline pane, scene descriptions such as ``The person presses on a time in the oven'' and visual errors such as ``Camera blur in 4:32'' are listed. On the right audio-visual script, all of this information is embedded with the narration script.}
\end{figure}
Using her screen reader, Anna first skims the outline to review her footage (\autoref{fig:outline-pane}). Because the outline summarizes the video with a description of each scene, she quickly recalls the video's content and plans what to edit from the footage. Reading through the outline, Anna notices that the second scene, where she shows her pantry, is over ten minutes long. To shorten the scene, she clicks the item of the outline and jumps to the pantry scene.
As she plays the video from the start of the pantry scene, Anna hears a notification indicating that there is a visual error (\autoref{fig:video-pane}). To check the error, she pauses the video. As the position of her cursor in the audio-visual script updates alongside the video progress,
she can easily read the corresponding line in the audio-visual script which indicates there was a camera blur. To learn more about the visuals at that point, she presses the `i' key to inspect the frame. As ~AVscript{} reads out the objects detected in the frame, Anna notices `door' and `hand' and realizes that the camera was shaking as she tried to open the pantry door. To remove the blurry footage, Anna selects the line that contains `camera blur' and deletes the text.
Anna also remembers that she spent a long time silently waiting for the microwave to finish while filming. She searches for `microwave' (\autoref{fig:search}) to find where the microwave appeared in the video and clicks on the relevant result. She shortens the pause by using the `speed change' feature to make the clip two times faster.
\subsection{Interface}\label{interface}
The ~AVscript{} interface consists of: a ~\textit{video pane} (\autoref{fig:video-pane}A), an ~\textit{audio-visual script} (\autoref{fig:outline-pane}D), an ~\textit{outline pane} (\autoref{fig:outline-pane}C), a ~\textit{tool pane} (\autoref{fig:search}D), and a ~\textit{search pane} (\autoref{fig:search}E).
\begin{figure}[t]
\includegraphics[width=0.4\textwidth]{figures/video-pane.pdf}
\caption{AVscript{}'s video pane provides two types of audio notifications: scene change notifications (page-flipping sound) and visual error notifications (warning sound). By pressing the `i' key, users can activate inspect mode to access detected objects in the current frame.}\label{fig:video-pane}
\Description{The close-up of the video pane. Over the video player timeline, notification icons are shown to indicate that this information is provided as users play the video. When the user clicks the `i' key, the following speech is provided: ``Inspect mode, currently in the frame: cereal box, snacks, shelf''}
\end{figure}
\subsubsection{Video Pane}
The ~\textit{video pane} displays the video and the timeline (\autoref{fig:video-pane}). As the user listens to the video, the system provides sound notifications for the key visual events (e.g., ``Scene Change'', ``Camera Blur''). Users can access visual information in the current frame by pausing the video and pressing the `I' key to activate \textit{inspect mode}, which reads out a list of detected objects in the frame.
\subsubsection{Audio-Visual Script \& Outline Pane}
The ~\textit{audio-visual script} (\autoref{fig:outline-pane}D) displays the narration and pauses in the video speech along with high-level visual scenes and visual errors. The audio-visual script is aligned to the video, so navigating within the script will navigate within the video (and vice versa), and edits to the script (\textit{e.g.}, selecting and deleting a sentence) are reflected in the video.
The audio-visual script first includes lines that represent each sentence and comma-separated phrases greater than three words in the transcript (\textit{e.g.}, ``First of all...'').
To inform users about the scene changes in the video, ~AVscript{} provides high-level scene headings in the script that summarize the key visual content in the scene (\textit{e.g.}, ``A person is holding a can next to an empty refrigerator'').
In addition to the scene headings, ~AVscript{} also provides recommendations for potential edit points alongside the text that occurs at that time. For visual errors (highlighted in orange), the system describes the type of error (e.g., ``Bad lighting''), and for long silences (highlighted in blue), the system provides the duration of silence (\textit{e.g.} ``25.5 seconds'').
~AVscript{}'s audio-visual script is designed to enable screen reader users to easily navigate the video at different levels of granularity (high-level visual scenes, narration or pause lines, and words) using key commands (\textit{e.g.}, ctrl/cmd + ~$\rightarrow$/$\leftarrow$ to jump forward or backward by a line).
In the ~\textit{outline pane} (\autoref{fig:outline-pane}C), the scene headings and recommendations for edit points are listed and sorted in timeline order to provide an overview of the major visual events. By clicking an item in the outline, creators can directly jump to the corresponding part of the video, with an updated cursor position in the script.
\begin{figure}[t]
\includegraphics[width=\columnwidth]{figures/search.pdf}
\caption{~AVscript{} supports search over the transcribed speech and visual objects in the video. BLV creators can skim the results and click on a search result to jump to the corresponding point in the video.}\label{fig:search}
\Description{The tool pane and the search pane are presented in the figure. The user is searching for the keyword ``microwave'' and seven search results are listed, where one of the results is narration search and the other seven results are visual search.}
\end{figure}
\subsubsection{Tool Pane \& Search Pane}
To edit the video with ~AVscript{}, the creator selects a segment in the audio-visual script and either presses delete (i.e. backspace) to remove that part of the video, shortens the segment by adjusting the start and end time with the ``Trim'' tool, or changes the playback speed of the segment by using the ``Speed'' tool.
When creators have a specific editing target in mind (\textit{e.g.}, a microwave), they can use the ~\textit{search pane} (\autoref{fig:search}) to query a speech word, a visual object, or a visual error (\textit{e.g.}, ``microwave'', ``Camera Moving'', or ``Pause''). Then, creators can review and select a search result to jump to the start time of the result in the video and audio-visual script.
\subsubsection{{Implementation}}
{
We implemented AVscript{} using React.js, HTML, and CSS for the front-end web interface and Firebase for the back-end. For embedding a video player, we used Remotion~\cite{remotion} for efficient server-side rendering and parametrization. For \textit{audio-visual scripts}, we used Draft.js~\cite{draft}, a text editor framework for React. We followed the W3C guidelines~\cite{wai} and tested the compatibility of AVscript{} with all three major screen readers: \textit{NVDA}, \textit{JAWS}, and \textit{VoiceOver}.
}
\subsection{Computational Pipeline}\label{pipeline}
\begin{figure*}[t]
\includegraphics[width=\textwidth]{figures/pipeline_younghokim.pdf}
\caption{Computational Pipeline of creating ~AVscript{}'s audio-visual script from raw footage. It takes two inputs: audio and frames. To generate an aligned transcript, we obtain the transcript from audio using Otter.ai and align using P2FA. To segment the footage into multiple scenes, we first detect objects in each frame with Detic, using the nouns extracted from the transcript as custom vocabulary. Then, we segment the footage when there is a salient change in the objects detected in nearby frames. For each scene, we caption the first frame using BLIP, then use the caption as the scene's title in the audio-visual script. }\label{fig:pipeline}
\Description{The figure shows how audio and visual frames of the video are processed to segment scenes, generate scene descriptions, and detect pauses and visual errors. On the right, the resulting audio-visual script is shown.}
\end{figure*}
~AVscript{}'s computational pipeline (\autoref{fig:pipeline}) transcribes and aligns video speech (Section \ref{transcription}), detects objects and segments scenes (Section \ref{segmentation}), and detects visual errors (Section \ref{recommendation}).
\subsubsection{Transcribing and Aligning Speech}\label{transcription}
To enable word-level editing, ~AVscript{} transcribes the video speech using Otter.ai~\footnote{https://otter.ai/home}, then uses P2FA to align each word in the transcript to the corresponding word in the speech.
Following Rubin ~\textit{et al.}~\cite{rubin2013content}, we use CMU Sphinx Knowledge Base Tool~\cite{sphinx} to obtain word phonemes of out-of-vocabulary words (\textit{e.g.}, the coffee machine name ``Keurig'').
To enable phrase-level navigation and editing, AVscript{} then splits the transcript into sentences and comma-separated phrases that are three words or longer. ~AVscript{} also creates pause segments for any pause longer than three seconds.
As widely used screen readers (\textit{e.g.}, VoiceOver, NVDA, JAWS) read the text in HTML `input' elements line-by-line, we place each phrase and pause on a different line for ease of screen reader navigation.
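To make this step concrete, the following is a minimal Python sketch of how an aligned transcript could be split into the script's speech and pause lines. It assumes the forced alignment has already been converted into a list of (word, start, end) tuples; the data format and function names are illustrative rather than our exact implementation.
\begin{verbatim}
# Minimal sketch: split an aligned transcript into speech and pause lines.
# `aligned_words` is assumed to be a list of (word, start_sec, end_sec)
# tuples from forced alignment; this is illustrative, not the actual code.

def build_script_lines(aligned_words, min_phrase_words=3, min_pause_sec=3.0):
    lines, phrase = [], []

    def flush(buf):
        if buf:
            lines.append({"type": "speech",
                          "text": " ".join(w for w, _, _ in buf),
                          "start": buf[0][1], "end": buf[-1][2]})

    prev_end = 0.0
    for word, start, end in aligned_words:
        # Insert a pause line for silences longer than the threshold.
        if start - prev_end >= min_pause_sec:
            flush(phrase); phrase = []
            lines.append({"type": "pause", "start": prev_end, "end": start,
                          "text": "%.1f seconds" % (start - prev_end)})
        phrase.append((word, start, end))
        # Break at sentence ends, or at commas once the phrase is long enough.
        if word.endswith((".", "?", "!")) or (
                word.endswith(",") and len(phrase) >= min_phrase_words):
            flush(phrase); phrase = []
        prev_end = end
    flush(phrase)
    return lines
\end{verbatim}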
\subsubsection{Segmenting and Labeling Scenes}\label{segmentation}
Using OpenCV we extract frames from the video at a rate of one frame per second.
For each frame, we detect objects in the frame using Detic~\cite{zhou2022detecting} to retrieve visual information of the content for frame inspection, visual search, and detection of major visual changes for scene segmentation.
In our pilot experiments, using all objects detected in the frame resulted in too much irrelevant information passed to the pipeline or presented to the creator (\textit{e.g.,} listing all the objects in the background such as a coffee mug, spoons, forks).
To limit our object detection to objects that are likely to be important, we only detect the objects referred to in the narration. To create a custom vocabulary set, we use Spacy's part-of-speech tagger ~\cite{Honnibal_spaCy_Industrial-strength_Natural_2020} to extract all noun phrases in the transcript (`NN': noun, singular or mass, `NNP': noun, proper singular, `NNPS': noun, proper plural, `NNS': noun, plural). Then we pass the custom vocabulary to Detic~\cite{zhou2022detecting} to detect all instances of each noun in each frame. Detic provides the bounding box for each noun in each frame and a confidence value. We include all objects with a confidence value greater than 0.3.
In the inspect and search mode, ~AVscript{} reads objects in order of the size of their bounding box (largest first).
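As an illustration, the sketch below shows one way to build the narration vocabulary and to filter and order detections. It assumes a small English spaCy model is installed and that the detector's per-frame output (Detic in our pipeline) has been wrapped into simple dictionaries; the dictionary format and function names are hypothetical.
\begin{verbatim}
# Minimal sketch: narration vocabulary and object filtering/ordering.
# Assumes a small English spaCy model; the detection format is hypothetical
# (one dict per detection, e.g. from an open-vocabulary detector like Detic).
import spacy

nlp = spacy.load("en_core_web_sm")
NOUN_TAGS = {"NN", "NNS", "NNP", "NNPS"}

def narration_vocabulary(transcript_text):
    # Collect the nouns mentioned in the narration as a custom vocabulary.
    doc = nlp(transcript_text)
    return sorted({tok.lemma_.lower() for tok in doc if tok.tag_ in NOUN_TAGS})

def important_objects(detections, min_confidence=0.3):
    # Keep confident detections; order by bounding-box area (largest first).
    # Each detection: {"label": str, "score": float, "box": (x1, y1, x2, y2)}.
    def area(box):
        x1, y1, x2, y2 = box
        return max(0, x2 - x1) * max(0, y2 - y1)
    kept = [d for d in detections if d["score"] >= min_confidence]
    return sorted(kept, key=lambda d: area(d["box"]), reverse=True)
\end{verbatim}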
We segment videos into higher-level scenes by using a sliding window of width 4 to compare the percentage of shared objects between the 2 frames before and the 2 frames after a potential boundary (similar to Haq et al.~\cite{haq2019movie}).
If a scene boundary occurs in the middle of a phrase boundary, we adjust the scene boundary to match the phrase boundary. We cut short scenes that did not encompass any entire phrase.
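A minimal sketch of this boundary detection is shown below, assuming one set of detected object labels per extracted frame (one frame per second); the similarity threshold is illustrative and would be tuned empirically.
\begin{verbatim}
# Minimal sketch: object-based scene boundary detection with a width-4 window.
# `frame_objects` is a list of sets of object labels, one per extracted frame
# (one frame per second); the similarity threshold is illustrative.

def similarity(a, b):
    # Percentage of shared objects between two sets of labels.
    return 1.0 if not (a or b) else len(a & b) / len(a | b)

def scene_boundaries(frame_objects, threshold=0.2):
    boundaries = []
    for i in range(2, len(frame_objects) - 1):
        before = frame_objects[i - 2] | frame_objects[i - 1]
        after = frame_objects[i] | frame_objects[i + 1]
        if similarity(before, after) < threshold:
            boundaries.append(i)  # boundary time in seconds
    return boundaries
\end{verbatim}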
Then, to obtain a description for each scene, we generate the caption of the first non-blurry frame of each scene using BLIP's~\cite{li2022blip} pre-trained model (CapFilt-L) with nucleus sampling. {While BLIP produces state-of-the-art image captioning performance, BLIP occasionally misidentified objects, misgendered people, and cited incorrect emotions in pilot experiments.}
{We evaluated our scene segmentation on two videos (V1 and V2 in Section \ref{sec:eval:materials}) by comparing our predicted scene boundaries to scene boundaries independently labeled by two researchers (Coders A and B who are authors of this paper). We measured percent similarity (i.e. Jaccard Index) between each set of scene boundary labels by dividing the number of matching labels by the total number of labels, considering any labels less than 3 seconds apart as the same.
For V1, coders A and B shared 34\% matching boundaries with each other, while our segmentation algorithm shared 37\% and 38\% matching boundaries with coders A and B respectively.
For V2, coders A and B shared 57\% matching boundaries with each other, while our segmentation algorithm shared 64\% and 48\% matching boundaries with coders A and B respectively. Overall, our segmentation algorithm achieved similar agreement with human coders as they did with each other. When disagreements occurred they typically represented high-quality segmentation boundaries provided at different levels of granularity (\textit{e.g.}, a single segment for adding ingredients vs. three segments for adding flour, water and salt). }
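For reference, the agreement measure can be sketched as follows, where boundaries less than 3 seconds apart count as the same label; the matching scheme shown is a simplification of the procedure described above.
\begin{verbatim}
# Minimal sketch: agreement between two sets of scene boundaries (in seconds),
# treating boundaries less than `tolerance` seconds apart as the same label.
# This is a simplification of the matching procedure described in the text.

def boundary_agreement(pred, truth, tolerance=3.0):
    matched = sum(1 for p in pred if any(abs(p - t) < tolerance for t in truth))
    union = len(pred) + len(truth) - matched
    return matched / union if union else 1.0
\end{verbatim}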
\subsubsection{Detecting Visual Errors}\label{recommendation}
The common components of photo quality that BLV people find difficult to achieve are blur, lighting, framing, and composition~\cite{adams2016blind, brady2013investigating}.
Among these, ~AVscript{} supports identifying blur and poor lighting, and also considers camera motion blur to support video rather than photo content.
To detect dark lighting, for each frame in the video we reduce the size of the frame to 100x100 pixels to reduce computation, then classify the frame as ``dark'' if the mean pixel luminance value falls below an empirically determined threshold of 0.25.
To detect blurry frames, we use the modified Laplacian method~\cite{pech2000diatom}. For each frame, we first convert the image to grayscale using OpenCV and then compute the variance of the Laplacian to calculate the focus score. Then, we classify the frame as ``blurry'' if the focus score falls below an empirically determined threshold of 5.
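A minimal OpenCV sketch of the per-frame checks, using the thresholds reported above, is given below; the exact normalization of the luminance values is an assumption of this sketch.
\begin{verbatim}
# Minimal sketch: per-frame dark and blur checks with OpenCV, using the
# thresholds reported in the text; the luminance normalization is an
# assumption of this sketch.
import cv2

def classify_frame(frame_bgr, dark_threshold=0.25, blur_threshold=5.0):
    # Dark-lighting check: downscale, then compare mean luminance (0-1 scale).
    small = cv2.resize(frame_bgr, (100, 100))
    gray_small = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    is_dark = gray_small.mean() / 255.0 < dark_threshold

    # Blur check: variance of the Laplacian as a focus score.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    focus_score = cv2.Laplacian(gray, cv2.CV_64F).var()
    is_blurry = focus_score < blur_threshold

    return {"dark": bool(is_dark), "blurry": bool(is_blurry)}
\end{verbatim}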
Using the detection results of each frame, we mark a segment as `dark' or `blurry' when more than three consecutive frames are identified as such.
Finally, to avoid naively identifying all camera movement (\textit{e.g.,} turning the camera toward a different object) as `blurry', we also use the object detection results to detect `camera moving' between scenes (frequent change in the object set over time). For segments that were classified as both ``blurry'' and ``camera moving'', we label them as ``camera moving'' to indicate that the motion blur may make objects in the frame difficult to see.
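The segment-level logic can be sketched as follows, assuming per-frame flags such as those produced by the per-frame sketch above and a precomputed list of `camera moving' segments derived from the object-detection results; names and data formats are illustrative.
\begin{verbatim}
# Minimal sketch: group per-frame flags (one frame per second) into error
# segments and relabel blurry segments that overlap camera motion.
# `per_frame_flags` follows the output of the per-frame sketch above;
# `camera_moving_segments` is assumed to be precomputed from the detections.

def flagged_segments(per_frame_flags, key, min_run=3):
    segments, run_start = [], None
    for i, flags in enumerate(per_frame_flags + [{key: False}]):
        if flags.get(key):
            run_start = i if run_start is None else run_start
        else:
            if run_start is not None and i - run_start > min_run:
                segments.append((run_start, i))  # [start_sec, end_sec)
            run_start = None
    return segments

def label_errors(per_frame_flags, camera_moving_segments):
    labeled = []
    for start, end in flagged_segments(per_frame_flags, "blurry"):
        moving = any(s < end and start < e for s, e in camera_moving_segments)
        labeled.append((start, end, "camera moving" if moving else "blurry"))
    labeled += [(s, e, "dark")
                for s, e in flagged_segments(per_frame_flags, "dark")]
    return labeled
\end{verbatim}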
{We evaluated our error detection pipeline on two videos (V1 and V2 in Section \ref{sec:eval:materials}) by first creating a set of ground truth labels of visual errors based on existing video editing guidelines~\cite{vlog, Colostate, 101, TTU, handbook, ncsl}. Two researchers first met to group established guidelines into common themes (See Supplemental Material for aggregated guidelines), then separately annotated edit points for a single video (V1) and met to resolve conflicts and revise the guidelines.
One of the researchers annotated the other video (V2) following the revised guidelines. In total, the ground truth labels included 18 errors for V1 and 15 errors for V2. }
{When compared with the ground truth edit points, AVscript{}'s pipeline achieved high precision and low recall for visual error detection (precision=100\%, recall=38.89\% for V1, precision=87.50\%, recall=46.67\% for V2). The most common error type not detected by our pipeline was a partial blur due to the main object being out of focus, as our pipeline only calculates the focus score of the entire frame. One of the reasons for the high precision and low recall is that we empirically set the thresholds of AVscript{}'s pipeline low to avoid falsely notifying creators of errors or presenting them with too many error suggestions.
}
\section{Comparison Study}\label{comparison-study}
We conducted a within-subjects study to examine how ~AVscript{} impacts experienced BLV creators' video editing practice compared to their personal video editing tools.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{figures/stacked-bar.pdf}
\caption{Distribution of the rating scores for the participants' personal editing tools and ~AVscript{} (1 = low, 7 = high). Note that a lower value indicates positive feedback and vice versa.
{The asterisks indicate statistical significance as determined by the Wilcoxon signed-rank test} (~\textit{p} < 0.05 is marked with * and ~\textit{p} < 0.01 is marked with **). AVscript{} significantly outperformed users' own tools in mental demand, temporal demand, effort, frustration, confidence in the output, independence in reviewing output, and helpfulness in identifying errors.}
\label{fig:NASA-TLX}
\Description{The two stacked bar charts display the distribution of the rating scores for the participants' personal editing tools and AVscript. Blue colors are used to indicate lower responses (positive) and red colors are used to indicate higher responses (negative). }
\end{figure*}
\subsection{Method}\label{comparison-methods}
\subsubsection{Participants}
We recruited 12 participants using mailing lists and social media (Table~\ref{tab:participants}); all had a visual impairment, used a screen reader to access their device, and had prior experience editing videos (8 males and 4 females, aged 28--58). Three of the participants also participated in the formative study (P1, P4, P8).
Among the participants, ten participants had a YouTube channel where they posted videos to the public, while two participants shared their videos privately (\textit{e.g.}, to the company they work for or family and friends). All 12 participants mentioned that they create videos for both sighted and BLV audience members. Participants authored a variety of videos including vlogs, tutorials, product reviews, presentations, and more (see Table~\ref{tab:participants} for a complete list).
To edit videos, 11 participants used timeline-based editing tools (Reaper, Windows Movie Maker, Microsoft Photos, Final Cut Pro, and VideoReDo), and 1 participant used scripting tools (FFmpeg, Python). Reaper is primarily an audio production tool, but it also supports videos. {All participants used one or more of the three popular screen readers (NVDA, JAWS, VoiceOver) which are all compatible with ~AVscript{}}.
\subsubsection{Materials}\label{sec:eval:materials}
We selected three videos from YouTube authored by BLV creators that contained (Table~\ref{tab:videos}): primarily raw video footage with few edits, real-world camera footage rather than screen recordings, and narration in English by the video author. We selected a short video (V0) for the tutorial. Videos used in the main sessions (V1-V2) were created by the same YouTube creator\footnote{https://www.youtube.com/c/BlindMovingOn} and were selected to be similar in terms of length, amount of narration, and shot changes.
For both videos, we only used around the first 11 minutes of the video such that participants could edit the video within the study time. For each video, we did not manually correct algorithmic results except for replacing the incorrect gender identification of the speaker.
\subsubsection{Procedure}
We conducted a 120-minute remote study on Zoom {where all participants had a 1:1 session with one of the researchers}.
We first asked participants demographic and background questions about their prior video editing experience. We then gave a 20-minute tutorial on the ~AVscript{} interface in which participants edited V0 to learn system features. Participants then edited one video (V1 or V2) with ~AVscript{} and the other video (V1 or V2) using their existing editing tools (within subjects). The order of system type (their own editing tools vs. ~AVscript{}) and video clips (V1 or V2), was counterbalanced and randomly assigned to participants. {During the task, we answered participant questions about AVscript's screen reader controls and the amount of task time remaining but did not provide any help with understanding or editing the video.} We encouraged participants to take a short break between two sessions. For each interface, we conducted a post-stimulus survey that included three types of questions: NASA-TLX ratings, ratings about the final video output, and ratings about the perceived helpfulness of system operations. {As we did not provide assistance with video understanding or editing during the study, ratings related to \textit{assistance} (Figure \ref{fig:NASA-TLX}) intend to capture participants' perceptions of their ability to use each tool independently.} All ratings were on a 7-point Likert scale.
After the session using ~AVscript{}, the edited video was saved to our server. We also asked the participants to share the output video edited using their personal video editing tools. At the end of the study, we conducted a semi-structured interview to understand participants' strategies using ~AVscript{} and the pros and cons of both ~AVscript{} and their own tools. We compensated participants with a 40 USD Amazon Gift Card. This study was approved by our institution's Institutional Review Board (IRB).
\subsubsection{Analysis}
We collected the video recordings, the interaction logs, the output videos, and the survey responses to perform both quantitative and qualitative analyses.
AVscript{}'s interaction logs were collected using Google Firebase~\cite{firebase}.
We reviewed both the session recording and interaction logs to extract the operations participants performed using their baseline video editing tools and AVscript. We triangulated the logs with the output videos to validate the extraction (\textit{e.g.,} comparing the edit points in the video to the edit operations).
We transcribed the exit interviews and participants' spontaneous comments during the tasks and grouped the transcript according to (1) strategies of using AVscript{} and (2) perceived benefits and limitations of our system.
\subsubsection{Study Limitations} In this study, participants used AVscript{} for the first time and compared it to their own editing tools, which they are already familiar with. Thus, this study neither reveals how long-term use might impact the editing experience of users, nor how participants who have never edited videos before might use the system. We selected the video length (around 11 min) and the editing time provided (around 30 min) to balance providing a realistic use scenario while keeping the study time short, especially as editing is cognitively demanding.
As a result, not all participants were able to complete editing within the time provided.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{figures/interaction_sequence_younghokim.pdf}
\caption{Sequences of operations that are relevant to ~AVscript{}'s navigation and editing features by participant (comparison study). Participants' data is grouped by video ID and then sorted by participant ID. Note that trivial cursor movements in the transcript without triggering the video control were not included for brevity.}\label{fig:sequence}
\Description{The figure shows the pattern of users' interaction when using AVscript including navigations, search, and edits. There is no common pattern among multiple users. }
\end{figure*}
\input{tables/comparison-study}
\subsection{Results}
Overall, participants rated using ~AVscript{} to edit videos as requiring significantly less effort ($\mu$=4.58, $\sigma$=1.51 vs. $\mu$=2.17, $\sigma$=1.11; $Z$=2.96, $p$<0.01), frustration ($\mu$=3.58, $\sigma$=2.11 vs. $\mu$=2.08, $\sigma$=1.31; $Z$=2.39, $p$<0.05), mental demand ($\mu$=3.17, $\sigma$=1.80 vs. $\mu$=2.00, $\sigma$=0.95; $Z$=2.03, $p$<0.05), and temporal demand ($\mu$=4.5, $\sigma$=1.93 vs. $\mu$=2.58, $\sigma$=1.16; $Z$=2.54, $p$<0.05) compared to their own editing tools that they were experienced with (Figure~\ref{fig:NASA-TLX}). Perceived performance and physical demand were not significantly decreased for ~AVscript{}, and all significance testing was performed with the Wilcoxon Signed Rank test.
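For readers who wish to reproduce this style of analysis, a paired Wilcoxon signed-rank test on such ratings can be computed with SciPy as sketched below; the ratings shown are hypothetical and do not reproduce our study data.
\begin{verbatim}
# Minimal sketch: paired Wilcoxon signed-rank test on 7-point ratings.
# The ratings below are hypothetical and do not reproduce the study data.
from scipy.stats import wilcoxon

own_tool = [5, 6, 3, 4, 7, 5, 4, 3, 6, 5, 4, 3]   # own editing tool
avscript = [2, 3, 1, 2, 4, 2, 3, 1, 3, 2, 2, 1]   # AVscript condition

stat, p = wilcoxon(own_tool, avscript)  # paired, non-parametric comparison
print("W = %s, p = %.4f" % (stat, p))
\end{verbatim}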
All participants stated they would like to use ~AVscript{} in the future for reviewing and editing their videos.
We report the statistics of the videos edited by participants in ~\autoref{tab:video_stat}. While 30 minutes were given for each editing session, six participants using ~AVscript{} finished the task early. Due to the limited time, ten participants using their own editing tools did not edit the later part of the footage (P4, P8, P9, P11-17).
The Video Timeline column in \autoref{tab:video_stat} shows the edited time segments over the timeline of the videos.
{As participants using their own tools often did not reach the second half of the video, the output videos in the baseline condition included notable errors in the latter half of the video such as leaving in dark scenes (V1), long pauses (V2), and repetitive actions (V2). However, across both conditions, short edits to the video timeline often introduced jump cuts ~\cite{ncsl, TTU} in the final output video.}
\autoref{fig:sequence} summarizes how creators used ~AVscript{} by visualizing operation sequences relevant to navigation and editing.
Overall, participants frequently jumped between different parts of the video using the headings, transcript lines, and words in the \textit{audio-visual script} (\autoref{fig:sequence}, light blue ``Text Jump'' cells).
Four participants (P10, P14, P16, P17) used the search feature once (\autoref{fig:sequence}, blue ``Search Jump'' cells).
Participants frequently deleted speech, pauses, and visual errors in the video (\autoref{fig:sequence}, yellow, orange and red ``deletion'' cells).
Because AVscript{}'s ~\textit{audio-visual script} is aligned with the video timeline and contains descriptions of pauses and errors (\textit{e.g.,} duration, error type), five participants (P4, P8, P9, P16, P17) often deleted problematic segments using only the text descriptions, without actually playing the video.
In addition to deleting clips, participants tried to recover from pauses and visual errors by trimming or changing the speed; five participants trimmed pause segments (P8, P10, P15, P16, P17) and one participant changed the playback speed (P1).
\subsubsection{Reviewing Videos and Identifying Errors to Edit. }
Participants rated ~AVscript{} as significantly more helpful for reviewing their video footage to identify errors compared to their existing editing tools ($\mu$=4.25, $\sigma$=2.22 vs. $\mu$=2.00, $\sigma$=1.04; $Z$=2.17, $p$<0.01). When reflecting on their final video, participants expressed that they were more confident with their final result ($\mu$=4.67, $\sigma$=1.37 vs. $\mu$=3.00, $\sigma$=1.41; $Z$=2.34, $p$<0.01), and needed less assistance reviewing it ($\mu$=5, $\sigma$=1.54 vs. $\mu$=2.75, $\sigma$=1.66; $Z$=2.31, $p$<0.01) compared to their typical process.
\ipstart{Text-based vs. Timeline-based Video Review} Using ~AVscript{}, participants primarily reviewed the video by reading the text of the audio-visual script and outline, while with their own video editing tools participants primarily reviewed the video by playing the video.
For example, of 7 participants who reviewed the entire video before editing it with AVscript{}, five participants read the entire audio-visual script using a screen reader or braille display without playing the video (P4, P9, P10, P16, P17), and three read the outline to gain an overview of the video (P10, P11, P13). P10 did both. On the other hand, when using their baseline tools, all participants played the video from the beginning to identify points to edit. Participants expressed that reading the text of the audio-visual script or outline allowed them to skim the footage faster than the video alone. P16, who reviewed the 11-minute video with ~AVscript{} in 3 minutes, remarked, \textit{``I've been using NVDA [screen reader] for so long that I can understand a very fast TTS [Text-To-Speech]. Because I read 1,075 words-per-minute, reading the script instead of playing video saves so much time for me.''}
\ipstart{Gaining an Overview of Visual Content and Errors} In addition to providing options for faster review, participants reported that they used ~AVscript{}'s high-level description of visual scenes and errors to (1) form a mental picture of the visual content (\textit{e.g.}, connecting background sounds with the scene descriptions, or imagining what the scene contained), (2) plan what edits they would like to make later (\textit{e.g.}, get an overview of the parts of the video that they needed to edit), and (3) mentally bookmark their progress as they edited (\textit{e.g.}, using a scene title to remember where they had left off editing). P16 remarked that the descriptions were particularly helpful for silences: \textit{``Even in silence, I know what is going on in this video! Reading these scene labels, I can construct mental imagery of what the footage looks like.''} P10 and P14 also used the \textit{inspect} feature in concert with the high-level descriptions of visual content and errors (\textit{e.g.}, to understand the content of a pause, and to access objects at the beginning of a scene).
\ipstart{Identifying Opportunities to Edit Video Footage} Participants considered ~AVscript{}'s visual errors in making decisions for visual editing, while they edited audio errors (\textit{e.g.}, pauses, and repeated words) with both systems.
Using ~AVscript{}, all participants reviewed the visual errors in the video, and 11 of the 12 participants edited a visual error (\textit{e.g.}, by deleting, speeding up, or trimming the error). When evaluating visual errors, participants read the error along with the speech associated with the error to decide whether to delete it or not. For example, when assessing a visual error that overlapped with an important sentence in the speech that would harm the meaning of the speech if deleted, participants left the footage intact. On the other hand, if participants could make a natural edit (\textit{e.g.}, cutting out an unnecessary sentence, or trimming the length of the error), they would cut it out. To edit the errors detected by ~AVscript{}, 11 participants deleted the entire segment of the error, whereas one participant changed the playback speed of the error segment, leaving some part of it. P13 stated, \textit{``If I just get rid of the error, it might result in a jumpcut or leave a too small gap between the sentences which is unnatural.''}
Participants expressed that getting informed of the visual errors made them more confident in their edits, but P4, P9, P11, and P12 noted they would like severity information about the error to inform quality vs. content trade-offs. P12 noted \textit{``It says bad lighting, but what I want to know is how bad so that I can make a decision whether to keep it, fix it, or remove it.''}
With both systems, participants edited out irrelevant footage and audio errors (\textit{e.g.}, pauses, repeated words). With ~AVscript{}, participants made edits at the word or line level (a sentence, a long phrase, or a pause) and sometimes removed multiple lines at once when they decided not to keep a big chunk of a scene that they did not find interesting. Using their own editing tools, all participants made edits to remove filler words or pauses between speech segments, and some participants similarly deleted uninteresting content.
\subsubsection{Navigating and Applying Edits.}
While participants found AVscript to be beneficial for high-level navigation and editing operations (\textit{e.g.}, by scenes, lines, words, long pauses) and non-linear navigation, the current version lacked the fine-grained navigation and editing provided by their typical video editors, which enable participants to edit fine-grained audio (\textit{e.g.}, short pauses). As participants found ~AVscript{} more helpful for some navigation tasks than others, participants did not rate ~AVscript{} to be significantly more helpful than their existing tools for navigation ($\mu$=2.5,$\sigma$=2.11 vs. $\mu$=1.3,$\sigma$=0.78; $Z$=1.63, $p$>0.05) or applying edits ($\mu$=2.58,$\sigma$=1.73 vs. $\mu$=1.83,$\sigma$=1.19; $Z$=0.99, $p$>0.05).
\ipstart{Coarse-Grained Navigation} Using ~AVscript{}'s audio-visual script, all participants efficiently navigated the video content by moving the cursor in the transcript both line-by-line (up/down arrow keys) and word-by-word (alt/option + right/left arrow keys). P12 and P16 also jumped to the next scene in the audio-visual script by pressing the `H' key in the screen reader's browser mode (used to navigate to the next heading element). As participants edited the video, 7 participants also used the outline pane to quickly navigate to a scene or an error suggestion. In contrast, using their typical video editors' timelines, all participants navigated by skipping ahead in a fixed time or frame interval (\textit{e.g.}, skip ahead 5 seconds) rather than by content (\textit{e.g.}, sentence, word, pause, error, or scene). Participants then needed to iterate multiple times to find the relevant cut point, as described by P11: \textit{``To delete one word, I have to navigate so many times to precisely set the start and end of what I want to cut out. So I often create a small loop around the target just for editing.''} Four participants also scrubbed backward or forward to navigate to a near word or pause target (P8, P10, P11, P14) despite its disadvantages: \textit{``The scrubbing audio makes no sense to me, but it can still be used to detect pauses''} (P11). P10 and P11 also used the tab key in Reaper to jump to the next audio peak to locate the end of long silences.
\ipstart{Fine-Grained Navigation} While ~AVscript{} makes editing words or pauses convenient, participants asked that in the future ~AVscript{} also include frame- and interval-level navigation to facilitate fine-grained adjustments to the cursor placement, especially when speech is not present. In addition, as the system limited the pauses displayed to screen reader users to 3s long to optimize skimming the audio-visual script, participants expressed that they wanted a mode for fine-grained edits that would display small pauses.
\ipstart{Non-linear navigation}
Participants also used ~AVscript{} to efficiently navigate the video non-linearly, using the outline to navigate to an error they would like to edit, then moving their cursors back to play from a few lines prior to figuring out where to make the edit by considering the audio content and the visual error together (P4, P9).
Four participants used the search pane to find and skip to a specific part in the script (P10, P14, P16, P17). P10 exclaimed: \textit{``This search feature is revolutionary! I can search not just for text, but an object or even pauses so easily.''} Yet, participants who never used the search feature to navigate the video speculated that searching for visual content would be more useful for their own videos. P9 noted \textit{``I didn't know what to search for as I didn't film this video. If I use it (AVscript{}) for my own video, I will definitely find it useful.''}
\ipstart{Applying edits} The ability to apply edits with ~AVscript{} was limited to high-level edits of the video footage itself. With their own editing tools, participants additionally applied effects to improve the audio or visual quality of the footage, including: applying a high pass filter to remove background noise and heavy breaths (P13), inserting music and adjusted its volume so that the original audio is louder than the music (P17), adding an intro and credit to the footage by inserting a black image with white text (P17). After making a cut in the video, P15 and P17 used a transition effect to avoid the abrupt jump in the audio or visual.
When making edits, 2 participants often referred to a help menu or a self-created list of hotkeys and commands to remember the keys they should use (P4, P8).
Participants who didn't use the built-in video player of the editing tool read the timestamps from the player and then passed them into the command line (P4 using FFmpeg), or to the input field of the tool (P16 using VideoReDo). Both P4 and P16 noted the inconvenience of switching between two different interfaces. P16 said ``\textit{Because the video player and VideoReDo use different time formats, I cannot directly copy and paste. When I manually read and type them, I sometimes make typos and this makes very confusing results.}'' P4 also mentioned ``\textit{While the script-based editing is very accessible, I have to run the command after each edit to check the results. If it's a long video, I have to wait for a long time for the video to be processed.}''
\section{Exploratory Case Studies}\label{exploratory-study}
The comparison study demonstrated that BLV creators were able to use AVscript{} to understand and edit videos.
To learn how BLV creators would use ~AVscript{} to edit their own videos, we conducted an exploratory study with 3 BLV creators (P14, P18, P19 in Table~\ref{tab:participants}) where the creators edited their own footage.
\subsection{Method}
We recruited 3 video creators with visual impairments who used screen readers to access their devices using mailing lists and social media (P14, P18, P19). P14 also participated in the comparison study.
All three participants created and uploaded videos to their YouTube channels on a regular basis, and two of the three participants had not edited videos before. Before the study, we collected footage from each participant that they had filmed themselves (\autoref{fig:exp-videos}). If participants provided multiple clips, we concatenated them in the order they were filmed.
During a 120-minute remote study session, we asked participants background questions, provided a tutorial of ~AVscript{}, invited participants to edit their own footage with ~AVscript{}, and asked participants semi-structured interview questions about their experience (see Supplemental Material). We compensated participants \$80 via Amazon Gift Card for filming their footage and participating in the study.
\subsection{Three Vignettes: How BLV Creators Use ~AVscript{} in Context}
\subsubsection{V3: Growing with Bryan{}}
Bryan{} (P18) regularly posts videos to demonstrate how nature is accessible on his YouTube channel. To film his planting demonstration video (V3), he strapped a camera to his chest or forehead to use both hands freely and filmed four clips over four different days. Because the first two clips were filmed more than a month ago, Bryan{} quickly reviewed the footage by skimming through ~AVscript{}'s script. He mentioned \textit{``I usually make videos comparing how a plant changed after several months. Using [AVscript{}], I don't have to watch all the clips again. I can save so much time reviewing and remembering what I filmed!''}
With ~AVscript{}, Bryan{} first used the ~\textit{outline} to jump to the start of each scene (clip), then deleted the first few lines of the script where he mentioned the date it was filmed.
When he noticed that he pointed at a plant to describe it in the video, he used the \textit{inspect} feature to make sure the plant was in the frame.
Bryan{} described that with ~AVscript{} he could be more independent in making videos, which would enable him to create videos more quickly. He explained that he typically required sighted reviewers: \textit{``Because it is so difficult to make sure everything I mention is in the picture, I usually film the same content with several takes, and ask sighted friends to ask which one is the most appropriate.''}
\subsubsection{V4: An Adventure to Dinner}
Rachel{} (P19) is a content creator who makes a wide range of different media: podcasts, interviews, live streams, Vlogs, and tech demos. While she is an experienced audio editor, she had never tried editing a video due to the steep learning curve. For the study, she filmed a Vlog on her way to dinner (V4).
Rachel{} mentioned that ~AVscript{} is easy to learn and use with a screen reader: \textit{``Absolutely fantastic, I have never been able to edit videos before, but after 15 minutes of learning how to use this, I can edit my video. It's a giant leap forward.''}
While editing, Rachel{} found ~AVscript{}'s search feature useful because she still remembered most of the content of the footage, which she had filmed two days earlier: \textit{``When I was walking on the street, I met a family and chatted with them for a while. To jump and edit that part, I tried searching for `boy' or `person'.''} She enjoyed having the option to search for the visual content of the footage, as she might forget the exact word she said but still remember what was visible in the frame.
Rachel{} also noticed that ~AVscript{} had errors in the speech-aligned transcript and scene description. When she read one of the scene labels, she said \textit{``Oh it says I'm holding a purple leash! That is my purple cane. I guess this is created by AI?''}
Overall, Rachel{} mentioned that she felt more confident showing her video to more people after fixing the visual errors detected by ~AVscript{}. As a creator without light perception, Rachel{} has often experienced filming videos with bad lighting (\textit{e.g.,} with the lights turned off, or facing away from the sunlight). She noted \textit{``[AVscript{}] is also guiding me on how to film with fewer visual errors.''}
\subsubsection{V5: Blind Construction Tools}
Lewis{} (P14) creates workout videos, product reviews, and tips for people with visual impairments. He often shares his videos on social media or participates in workout video contests. To film a video on construction tools (V5), he set up a camera with a tripod and used TalkBack to guide his filming position (e.g., TalkBack giving directions such as ``Face detected - upper right'').
To quickly skim through his footage, Lewis{} pressed and held the down arrow key to mimic the scrubbing feature of Reaper (his typical video editor). He described that the script lines helped him navigate efficiently: \textit{``I don't have to read the entire line to check where I am (the cursor is) in the video. Just listening to the first word or first syllable is enough.''}
When Lewis{} reached a part of the video that ~AVscript{} detected as \textit{blurry} he mentioned: \textit{``Oh this is not a bad thing here, I had to walk quickly, and it's probably because of that.''}
Lewis{} also used the ~\textit{inspect} feature to choose an editing point. To find the first few seconds where he had started the recording but was not yet in the frame, he continuously clicked ~\textit{inspect} to find the exact timestamp where he appeared, then trimmed the video up to that point. He noted: \textit{``The script does tell when the word begins and ends, but it doesn't tell when an action begins and ends.''} Lewis{} also reported speech recognition errors: \textit{``I mumbled something here, but it wasn't caught in the transcript. Maybe because of the radio music. It is difficult to edit that part out when I don't see it on the transcript.''}
Using ~AVscript{}, Lewis{} anticipated that collaboration with sighted reviewers would be easier because he could show them only the errors detected by the system instead of asking them to review the entire footage. He also noted that he wanted to create different content and styles of videos with the help of ~AVscript{}: \textit{``In the past, I always used a tripod to avoid camera shakes. Now that I can check whether my footage is shaky, I want to try carrying around my camera.''}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figures/exp-videos.pdf}
\caption{Three videos filmed by BLV creators for exploratory case studies.}
\label{fig:exp-videos}
\Description{The thumbnails of the three videos used in the exploratory case studies. In the first image, the hands of a gardener can be seen. In the second image, a woman is walking with a purple cane. In the third image, the inside of a garage is shown.}
\end{figure}
\subsubsection{Reflection on Three Vignettes}
All three BLV creators used ~AVscript{} to speed up video editing steps (\textit{e.g.,} Rachel{} browsing the video for a specific scene) or to locate visual errors and actions (\textit{e.g.,} Lewis{} noticing camera blur), which had not been possible prior to using ~AVscript{}.
Creators also reported limitations of ~AVscript{}: (1) errors in the speech-aligned transcript and scene descriptions, and (2) a lack of detailed information about visual content, such as motion details or object colors, especially for clips without much narration.
Overall, all three creators wanted to use ~AVscript{} in the future to be more creative with the content and styles of videos.
\section{Discussion and Future Work}
{In this section, we reflect on our findings from the
design, development, and evaluation of AVscript.
We also present future directions for research exploring accessible authoring tools.}
\begin{revisedenv}\ipstart{Navigating Videos based on Visual Content}
Our formative study revealed that BLV video creators' current tools did not enable access to visual content in their video footage (\textbf{C1.} Recognizing visual content in a video).
To address this challenge, AVscript{} provides access to visual content including: a summary of key visual moments via \textit{scene descriptions}, a list of low-level objects via \textit{inspect mode}, and on-demand access to visuals of interest via \textit{search}.
While creators using AVscript{} occasionally listened to the video and scene descriptions linearly (similar to how BLV audiences currently listen to audio descriptions that describe visual content in a video alongside the video narration~\cite{pavel2020rescribe,li2021non}), creators also used scene descriptions for new use cases including skimming the outline of scene descriptions to gain an overview of visual content, and clicking on scene descriptions to navigate to video scenes (similar to how sighted video creators use video keyframes to navigate with timeline-based video editing tools~\cite{finalcut, premiere}).
Scene description-based navigation helped address an existing challenge for video creators (\textbf{C4}. Non-linear browsing and skimming of videos), and future work may explore extending this navigation approach to video consumers.
However, state-of-the-art scene descriptions still include errors. In our studies, BLV creators editing their own footage were able to recognize and recover from errors that mismatched their expectations (\textit{e.g.}, ``leash'' vs. ``cane'' in a walking video), but creators editing unfamiliar footage missed notable errors (a pile of laundry described as ``animal on bed'').
Future improvements to scene description accuracy could help AVscript{} better support BLV creators using unfamiliar footage (\textit{e.g.}, when adding stock video b-roll).
While creators used AVscript's low-level inspect mode less often than the high-level scene descriptions, one creator used inspect mode to achieve fine-grained navigation to a visual scene boundary, similar to the fine-grained navigation to audio pause boundaries that BLV creators currently perform via audio.
Future work may explore how navigation practices change with long-term use of AVscript{} for video editing, and how to further facilitate fine-grained visual navigation.
\end{revisedenv}
\begin{revisedenv}\ipstart{Editing based on Visual Error Suggestions}
To address the challenge of assessing the visual quality of a video (\textbf{C2}), AVscript{} informs users of potential edit points (blur, bad lighting, camera motion, and audio pause).
Participants frequently used the visual errors provided by AVscript{} to remove distracting and low-quality visuals from the video (\textit{e.g.}, camera shakes, dark lighting), and participants reported that edit point suggestions improved their confidence in their final video.
While participants occasionally noticed errors in visual content descriptions, none of the participants expressed skepticism toward visual quality predictions.
However, participants asked for information about the severity of the predicted visual errors to help them weigh the content and quality of a clip before removing it.
In the future, AVscript{} will provide confidence scores and severity levels for predicted visual errors to better support BLV creators in making editing decisions.
In addition, describing errors in more detail and explaining potential causes (\textit{e.g.,}``\textit{Blur -- out of focus, possibly due to the camera being too close to the target object}'') could help novice video editors understand errors and film better footage. Finally, AVscript{} could recognize other types of visual errors in the future, such as \textit{composition errors}~\cite{adams2013qualitative} and \textit{jump cuts}~\cite{TTU} which we observed in the videos edited by BLV creators.
\end{revisedenv}
\begin{revisedenv}
\ipstart{Text-based Video Editing for BLV Creators}
Prior research on text-based editing primarily focused on sighted video authors~\cite{truong2016quickcut, huber2019bscript, Berthouzoz2012}. We explored using text-based editing to help BLV creators navigate videos efficiently (\textbf{C3}). Our studies revealed benefits of using text-based editing that echo findings in prior work (\textit{e.g.}, lower mental load than timeline-based interface \cite{truong2016quickcut, huber2019bscript}), and demonstrated unique benefits for BLV creators (\textit{e.g.}, better screen reader compatibility than timeline-based interfaces, and access to rapid screen reader text-to-speech for quick skimming and editing).
Still, creators mentioned that timeline-based video editing interfaces enable granular access to the audio track without word-level constraints (\textit{e.g.,} editing out background noise which is not captured in the transcript).
In the future, we plan to integrate timeline-based editing into ~AVscript{} to enable creators to use the \textit{audio-visual script} for coarse navigation and the timeline for fine-grained navigation.
\end{revisedenv}
\ipstart{Supporting New Editing Tasks}
While our system explored deleting or speeding up video segments --- core tasks in video production --- future work can explore how to support BLV creators in making visual edits, such as composing title slides or adding visual effects. For example, an editing system could describe the impact of an applied effect on the visual content in the video (\textit{e.g.}, ``the vignette effect now covers the hands'') using techniques from prior work in BLV visual design authoring~\cite{peng2022diffscriber} and computer vision approaches for captioning differences between pairs of similar images~\cite{jhamtani2018learning}.
Recent strides in prompt-driven text generation~\cite{brown2020language}, image generation~\cite{ramesh2022hierarchical, saharia2022photorealistic}, and image editing~\cite{nichol2021glide} suggest that prompt-driven video editing (\textit{e.g.}, make this clip moody) may be possible in the future~\cite{hong2022cogvideo}. Future research is needed to help BLV creators evaluate their results with such tools.
In addition, ~AVscript{} considers single-track videos as the format common in BLV creators' videos today. However, in the future, we could explore approaches to help creators enhance their videos with b-roll (\textit{e.g.}, by helping creators find and insert their own footage using text~\cite{truong2016quickcut} or suggesting opportunities for adding online b-roll~\cite{huber2019bscript}).
\ipstart{Supporting New Stages of Video Production}
Our formative study suggested that BLV creators currently use creative but effort-intensive filming strategies (\textit{e.g.}, describing visual content) and editing strategies (\textit{e.g.}, navigating video footage only linearly) to produce their videos and share them with broad audiences.
As ~AVscript{} enabled BLV creators to edit videos more efficiently, with less mental demand, and with more confidence in their end result, BLV creators mentioned that it would change their filming practice, since they could capture additional desired footage.
Future work may explore how improved access to video editing will impact filming practices, and further improve the filming process by providing additional information about the visual content and errors, as provided in our system, at capture time. Similar to prior work in supporting BLV photographers, future systems could provide information about framing the shot~\cite{adams2016blind} paired with the presence and severity of potential visual errors. When BLV creators move as they film videos such as Vlogs and tutorials, approaches that alert creators to potential errors may distract them (similar to the demand of describing visual content). Thus, future work could explore enabling BLV creators to capture a wide field of view at capture time (\textit{e.g.}, 360- or 180-degree video) and then edit the video to produce smooth, normal-field-of-view footage that captures the content of interest~\cite{su2016pano2vid}.
Finally, our work points to solutions in the video publishing process including improving the acceptance of sighted audiences to visual errors, and platform-supported funding for BLV creators seeking to hire sighted reviewers.
\ipstart{Beyond Manual Text-based Editing}
We designed ~AVscript{} to use text because the BLV video creators we interviewed were highly proficient at using screen readers. Text enabled creators to use their screen reader experience to review and navigate video at high speeds.
We are currently exploring multimodal approaches for editing videos that combine a screen reader and voice input to facilitate fast and low-burden navigation and editing (\textit{e.g.}, ``\textit{jump to 5 minutes}'', ``\textit{delete this scene}''). The visualization community has demonstrated many applications that support multimodal data exploration with touch and speech~(\textit{e.g.,}~\cite{Srinivasan2018Orko, Srinivasan2020DataBreeze, Srinivasan2020InChorus, kim2021data}). In a similar vein, we plan to build on work in tactile displays~\cite{zhang2020automatic} to surface the visual content in the video. While consuming video with tactile displays may be challenging, editing video may benefit from providing creators access to slow frame-by-frame content (\textit{e.g.}, to assess when a person moves out of the frame) and waveform visualizations.
\section{Conclusion}
In this work, we designed and developed ~AVscript{}, an accessible text-based video editing tool that enables BLV creators to edit videos using a text script that describes the visual content and visual errors in the footage. The design was informed by a formative study consisting of YouTube video analysis and interviews with BLV creators. The comparison study (N=12) showed that AVscript{} significantly reduces the mental demand of BLV creators when compared to their own video editing tools. In the exploratory case studies (N=3), we also explored how BLV creators edit their own videos using AVscript{}. We hope our research catalyzes future work on improving the accessibility of media authoring.
\section{STUDY PARTICIPANTS DEMOGRAPHICS}
\begin{table*}[htp]
\aptLtoXcmd{}{%
\small\sffamily
\def1.2{1.2}
\setlength{\tabcolsep}{0.3em}
}
\centering
\caption{Study Participants (P1-P8: Formative study participants; P1, P4, P8-P17: Controlled study participants; P14, P18, P19: Exploratory study participants. Three participants marked with {$^{*}$} participated in both the formative and the controlled study (P1, P4, P8), and one participant marked with ~\dag~ participated in both the controlled and the exploratory study (P14). All participants are screen reader users.)}
\label{tab:participants}
\begin{tabular}{lccccccc}
\toprule
PID & Age & Gender & Visual impairment & Onset & Video editing tool & Content type & Experience (yr.)\\
\midrule
P1{$^{*}$} & 27 & M & Legally blind & Acquired & Microsoft Photos & User interviews & 4 \\
P2 & 23 & M & Totally blind & Acquired & VirtualDub 2 & Sports videos, Product reviews & 5\\
P3 & 22 & F & Legally blind & Congenital & Final Cut Pro & Live streams, Presentations & 1\\
P4{$^{*}$} & 35 & M & Low vision & Acquired & Python \& FFmpeg & Art demonstrations, Tutorials & 7 \\
P5 & 28 & F & Legally blind & Congenital & iMovie & Video podcasts & 11\\
P6 & 24 & M & Low vision & Acquired & Final Cut Pro & Short-form videos & 4\\
P7 & 41 & M & Legally blind & Congenital & iMovie (mobile) & Vlogs & 2\\
P8{$^{*}$} & 41 & M & Legally blind & Acquired & Final Cut Pro & Short film & 20\\
P9 & 40 & F & Totally blind & Congenital & Microsoft Photos & Accessibility videos & 1 \\
P10 & 54 & M & Totally blind & Congenital & Reaper & Podcasts, Music videos & 1\\
P11 & 31 & F & Legally blind & Acquired & Reaper & Fashion videos, Accessibility videos & 8\\
P12 & 30 & M & Totally blind & Congenital & Reaper & Twitch streams, Short-form videos & 3\\
P13\dag & 58 & M & Legally blind & Acquired & Reaper & \makecell{Workout videos, product reviews,\\Accessibility videos} & 2\\
P14 & 40 & M & Legally blind & Congenital & Reaper & Tech demonstrations & 1\\
P15 & 29 & F & Totally blind & Acquired & Windows Movie Maker & \makecell{Accessibility videos,\\Video editing tutorials} & 5\\
P16 & 31 & M & Totally blind & Congenital & VideoReDo & Conference talks & 6\\
P17 & 30 & F & Totally blind & Acquired & Windows Movie Maker & Video podcasts, Short-form video & 3\\
P18 & 63 & M & Totally blind & Congenital & None & Planting tutorials & 5\\
P19 & 29 & F & Totally blind & Acquired & None & Vlogs, Video podcasts, Live streams & 0\\
\bottomrule
\end{tabular}
\end{table*}
\section{EVALUATION STUDY VIDEO DATASET}
\begin{table}[htp]
\aptLtoXcmd{}{%
\small\sffamily
\def1.2{1.2}
\setlength{\tabcolsep}{0.3em}
}
\centering
\caption{Videos used in the evaluation study. }
\label{tab:videos}
\begin{tabular}{ccccc}
\toprule
\multicolumn{1}{l}{\textbf{Video ID}} & \textbf{Title} & \multicolumn{1}{l}{\textbf{Duration (Original)}} & \textbf{Creator} & \multicolumn{1}{l}{\textbf{URL}} \\
\midrule
V0 & College Life...As A Blind Girl! & 3m 12s (9m 10s) & Rae Green & \cite{v0} \\
V1 & How Blind Mom Cooks & 11m 12s (20m 38s) & Ashley Nemeth & \cite{v1} \\
V2 & Day In The Life Blind Mom & 11m 5s (20m 16s) & Ashley Nemeth & \cite{v2} \\ \hline
V3 & \makecell{Growing With Blind Bryan: \\ New border, Rampant Runner and a Juicy Peach Tree} & 10m 2s & P14 & None \\
V4 & An adventure to dinner: Demonstrating O\&M techniques & 12m 57s & P18 & None \\
V5 & Blind construction tools & 9m 37s & P19 & None \\
\bottomrule
\end{tabular}
\end{table}
\section{FORMATIVE STUDY VIDEO DATASET}
To collect videos demonstrating how BLV creators edit videos, we first searched YouTube for all combinations of a set of vision-related keywords (blind, low vision, visual impairment, screen reader), and a set of video editing keywords (editing videos, making videos, creating videos), following prior work~\cite{li2021non, li2022feels}.
For each search phrase, we included all unique videos that had a title related to vision and video editing and stopped the search when the results of an entire search page were irrelevant.
We then filtered out videos that did not cover video editing (1 filtered) or had poor audio or video quality (3 filtered). Our final dataset contained 24 videos (V1-V24) uploaded before October 12, 2021.
The videos contained overviews of the video production process (2 videos) and tutorials of video editing software (22 videos). For the full list of videos, see Supplemental Material.
|
{
"arxiv_id": "2302.14171",
"language": "en",
"timestamp": "2023-03-01T02:03:33",
"url": "https://arxiv.org/abs/2302.14171",
"yymm": "2302"
} | \section{Introduction}
\label{sec:1}
The theoretical and experimental study of the behavior of liquids and
their mixtures in the vicinity of their critical point (see, e.g.,
works \cite{l104,b112,p112,y114,y118,v118,o197,p104}) is an important
and challenging task. In our previous works \cite{kpd118,p120},
the behavior of a fluid was studied in the immediate
vicinity of the critical point, and in works \cite{kd116,kdp117,pd120}
beyond this vicinity. As a result, a wide region near
the critical point has been covered. The mathematical
description was carried out in the framework of
the cell fluid model using the grand canonical ensemble.
The whole volume $V$ of a system consisting
of $N$ interacting particles was conventionally divided
into $N_v$ cells, each of the volume $v=V/N_v=c^3$,
where $c$ is the linear cell size. Note that unlike the
lattice gas model where the cell is assumed to contain
no more than one particle, in this approach the
cell can include more than one particle.
In works \cite{kpd118,p120}, the analysis was performed in the
framework of the collective-variables approach
using the renormalization group transformation \cite{ymo287}.
An analytical procedure for calculating the grand
partition function and the thermodynamic potential
of the cell fluid model was developed in works
\cite{kpd118,p120} in the approximation of a non-Gaussian
(quartic) distribution for order parameter fluctuations
without involving the hard-spheres reference
system. The formation of the reference system
as a part of the repulsive component of the interaction
potential made it possible to take into account all
kinds of interaction (both short- and long-range) on
a unified footing within the collective-variables approach.
The role of interaction potential in this work is
played by the Morse potential. The interaction potential
parameters, which are given in works \cite{kpd118,p120} and
are necessary for quantitative estimates, correspond
to the data for sodium. They were taken from work
\cite{singh}, which was devoted to the study of vapor-liquid
equilibrium curves for metals using Monte Carlo simulation
and Morse potential. Taking this potential
into account, the vapor-liquid coexistence curves for
metals were also analyzed in the framework of the
integral-equations approach \cite{apf_11}.
The Morse potential is widely used when studying
the melting and laser ablation processes using computer
simulation \cite{z102,x104,l102}. There are works in the literature
devoted to the study of the structural properties
of the Morse and Lennard-Jones clusters, as well
as to a comparison between them \cite{l102,d196,d104,t102}. In some
cases, modifications of the Morse function were made
\cite{c198,s199,l105,d198} in order to improve the numerical results.
Although the Morse potential was traditionally used to
model covalently bound diatomic molecules \cite{m129,l199,c105}, it
is also applied to estimate non-bounding interactions
\cite{m101,o103}. This potential is qualitatively similar to the
Lennard-Jones potential, but they are quite different
from the quantitative point of view. The Lennard-Jones
and Morse potentials can be compared directly
using a mathematical relationship enabling the point
of energy minimum to be located at the same position
\cite{s102,okumura_00}. In addition, it was shown that either of the
potentials can be derived from the other \cite{l103}.
This work complements the study of the critical
behavior of the Morse fluid that was performed in
works \cite{kpd118,p120}. In particular, for temperatures lower
than the critical one, solutions of a certain cubic equation
are obtained. They govern the quantities entering
the equation of state of the cell fluid model. The
behavior of the equation solutions depending on the
chemical potential value is analyzed in the immediate
vicinity of the critical point, and the regions where
the chemical potential changes are singled out. Each
of those regions is considered separately. Expressions
for the boundary densities (the densities at the regions'
boundaries) are obtained. For this purpose, a
nonlinear equation, which relates the density to the
chemical potential, is used. The equation of state of
the cell fluid model and the binodal equation are presented.
\section{Chemical potential and density changes
at temperatures below the critical one}
\label{sec:2}
The equation of state of the cell fluid model for temperatures
$T<T_c$ \cite{p120} contains quantities dependent
on the solution $\sigma'_0$ of the equation
\be
(\sigma'_0)^3 + p' \sigma'_0 + q' = 0.
\label{3d12fb}
\ee
(the solution is also given in work \cite{p120}). The coefficients
\[
p' = 6 \frac{r_{n'_p+2}}{u_{n'_p+2}}, \quad
q' = - 6 \frac{s^{5/2}}{u_{n'_p+2}} \frac{\tilde h}{(\tilde h^2 + h^2_{cm})^{1/2}}
\]
include the quantities $r_{n'_p+2}$ and $u_{n'_p+2}$ determining
the long-wavelength part of the grand partition function
of the model. The quantity $\tilde h$ is proportional to
the chemical potential $M$, and the quantity $h_{cm}$ is
characterized by the renormalized relative temperature
$\tau = (T-T_c)/T_c$ (see work \cite{p120}). The renormalization
group parameter $s$ determines the separation
of the phase space of collective variables into layers.
The form of the solutions of Eq. (\ref{3d12fb}) depends on the
sign of the discriminant
\be
Q = (p'/3)^3 + (q'/2)^2.
\label{3d13fb}
\ee
If $Q>0$, the single real solution $\sigma'_0$ of Eq. (\ref{3d12fb}), according
to Cardano's formula, looks like
\bea
&&
\sigma'_{0b} = A+B, \non
&&
A = (-q'/2 + Q^{1/2})^{1/3}, \non
&&
B = (-q' / 2 - Q^{1/2})^{1/3}.
\label{3d15fb}
\eea
If $Q<0$, there are three real solutions (the quantity
$\sigma'_0$ acquires three possible real values)
\bea
&&
\sigma'_{01} = 2 (-p' / 3)^{1/2} \cos (\alpha_r / 3), \non
&&
\sigma'_{02,03} = -2 (-p' / 3)^{1/2} \cos (\alpha_r / 3 \pm \pi / 3),
\label{3d16fb}
\eea
where $\alpha_r$ is determined from the equation
\be
\cos \alpha_r = - \frac{q'}{2(-p'/3)^{3/2}}.
\label{3d17fb}
\ee
If the discriminant is negative, solutions (\ref{3d16fb}) can be
rewritten as follows:
\bea
&&
\sigma'_{01} = 2\sigma_{0r} \cos\frac{\alpha_r}{3}, \quad
\sigma'_{02} = - 2\sigma_{0r} \cos\left( \frac{\alpha_r}{3} + \frac{\pi}{3}\right),\non
&&
\sigma'_{03} = - 2\sigma_{0r} \cos\left( \frac{\alpha_r}{3} - \frac{\pi}{3}\right).
\label{5d1fb}
\eea
Here,
\bea
&&
\sigma_{0r} = \left( - \frac{2r_{n'_p+2}}{u_{n'_p+2}}\right)^{1/2}, \non
&&
\alpha_r = \arccos \left( \frac{M\left( \tilde h^2_q+h^2_{cm}\right)^{1/2}}
{M_q\left( \tilde h^2+h^2_{cm}\right)^{1/2}}\right).
\label{5d2fb}
\eea
The chemical potential $M_q$ is determined from the condition $Q=0$ and
satisfies the equality
\be
M_q = \Biggl[ - \frac{8 r^3_{n'_p+2} (1 + \alpha^2_{mq})}{9 u_{n'_p+2} s^5 \beta W(0)}
\Biggr]^{1/2} h_{cm},
\label{3d14fb}
\ee
where
\be
\alpha_{mq} = \tilde h_q/h_{cm}, \quad
\tilde h_q = M_q \left( \beta W(0)\right)^{1/2}.
\label{5d3fb}
\ee
Here, $\beta = 1/(kT)$ is the inverse temperature, and
$W(0)$ is the Fourier transform of the effective interaction
potential \cite{kpd118} at the zero wave vector. For all
$|M|<M_q$, the discriminant $Q<0$ and, therefore,
there are three real roots of Eq. (\ref{3d12fb}) in this interval of
$M$-values.
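As a purely numerical aside (not part of the original derivation), the case analysis above can be summarized by a short script that returns the real root(s) of Eq. (\ref{3d12fb}) for given coefficients $p'$ and $q'$; the numerical values used below are placeholders chosen only to illustrate the two signs of the discriminant (\ref{3d13fb}):
\begin{verbatim}
import numpy as np

def sigma0(p, q):
    """Real root(s) of x^3 + p*x + q = 0."""
    Q = (p / 3.0) ** 3 + (q / 2.0) ** 2        # discriminant
    if Q > 0:                                  # a single real root (Cardano)
        A = np.cbrt(-q / 2.0 + np.sqrt(Q))
        B = np.cbrt(-q / 2.0 - np.sqrt(Q))
        return (A + B,)
    alpha = np.arccos(-q / (2.0 * (-p / 3.0) ** 1.5))
    r = 2.0 * np.sqrt(-p / 3.0)                # equals 2*sigma_{0r}
    return (r * np.cos(alpha / 3.0),                  # sigma'_01
            -r * np.cos(alpha / 3.0 + np.pi / 3.0),   # sigma'_02
            -r * np.cos(alpha / 3.0 - np.pi / 3.0))   # sigma'_03

print(sigma0(-3.0, 1.0))   # Q < 0: three real roots
print(sigma0( 3.0, 1.0))   # Q > 0: a single real root
\end{verbatim}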
The dependence of the solutions of the cubic equation
(\ref{3d12fb}) on the chemical potential $M$ at $T<T_c$ is
shown in Figs.~\ref{fig_1fb} and \ref{fig_2fb}.
\begin{figure}
\centering \includegraphics[width=0.70\textwidth]{fig1_sig0sm_m.eps}
\caption{Solutions of cubic equation (\ref{3d12fb}) as functions of the
chemical potential $M$ at $\tau=-0.005$. Curves 1, 2, 3, and 4 correspond
to $\sigma'_0=\sigma'_{01}$, $\sigma'_0=\sigma'_{02}$,
$\sigma'_0=\sigma'_{03}$, and $\sigma'_0=\sigma'_{0b}$, respectively.}
\label{fig_1fb}
\end{figure}
\begin{figure}
\centering \includegraphics[width=0.70\textwidth]{fig2_sig0sm_t_m1.eps}
\caption{Dependence of the solutions of cubic equation (\ref{3d12fb})
on the chemical potential $M$ at various values of the relative temperature
($\tau=-0.005$, $\tau=-0.007$, $\tau=-0.009$).}
\label{fig_2fb}
\end{figure}
Figure~\ref{fig_2fb} makes it possible to trace the shift of $M_q$
(the joint points of the thin and thick solid curves) as $\tau$ changes.
As one can see, the absolute value of $M_q$ decreases with the reduction
of $|\tau|$.
In the case $|M|>M_q$, as was the case at $T>T_c$, Eq. (\ref{3d12fb}) has
a single real solution (\ref{3d15fb}). For the latter, at $M=-M_q$
and taking into account the equality $Q=0$, we obtain
\be
\sigma_{0bq}^{'(-)} \! = \! 2 \left[ - 3s^{5/2} \frac{M_q(\beta W(0))^{1/2}}
{u_{n'_p+2} h_{cm}(1+\alpha^2_{mq})^{1/2}}\right]^{1/3} \!\!\! = \!\! - 2\sigma_{0r},
\label{5d4fb}
\ee
whereas at $M=M_q$ we have
\be
\sigma_{0bq}^{'(+)} = 2 \left[ 3s^{5/2} \frac{M_q(\beta W(0))^{1/2}}
{u_{n'_p+2} h_{cm}(1+\alpha^2_{mq})^{1/2}}\right]^{1/3} = 2\sigma_{0r}.
\label{5d5fb}
\ee
Let us analyze the asymptotics of solutions (\ref{5d1fb}) at
$|M|=M_q$. If $M=-M_q$, we obtain $\cos \alpha_{rq}^{(-)}=-1$,
$\alpha_{rq}^{(-)}=\pi$, and
\bea
&&
\sigma_{01}^{'(-)} = 2 \sigma_{0r} \cos\frac{\pi}{3} = \sigma_{0r},\non
&&
\sigma_{02}^{'(-)} = - 2 \sigma_{0r} \cos\left( \frac{2\pi}{3}\right) = \sigma_{0r},\non
&&
\sigma_{03}^{'(-)} = - 2 \sigma_{0r} \cos 0 = - 2\sigma_{0r}.
\label{5d6fb}
\eea
The case $M=M_q$ brings us to the formulas $\cos \alpha_{rq}^{(+)}=1$,
$\alpha_{rq}^{(+)}=0$, and
\bea
&&
\sigma_{01}^{'(+)} = 2 \sigma_{0r} \cos 0 = 2 \sigma_{0r},\non
&&
\sigma_{02}^{'(+)} = - 2 \sigma_{0r} \cos \frac{\pi}{3} = - \sigma_{0r},\non
&&
\sigma_{03}^{'(+)} = - 2 \sigma_{0r} \cos \left( - \frac{\pi}{3} \right) = - \sigma_{0r}.
\label{5d7fb}
\eea
Thus, if $M=-M_q$, the solution is $\sigma_{0bq}^{'(-)}=-2\sigma_{0r}$ (see
Eq. (\ref{5d4fb})), which coincides with $\sigma_{03}^{'(-)}$ from
Eq. (\ref{5d6fb}). If $M=M_q$, the solution is
$\sigma_{0bq}^{'(+)}=2\sigma_{0r}$ (see Eq. (\ref{5d5fb}))
and it coincides with $\sigma_{01}^{'(+)}$ from Eq. (\ref{5d7fb}).
The conclusion drawn from the above calculations
is as follows. As the chemical potential increases to
$-M_q$ from the side of negative values, Eq. (\ref{3d12fb}) has a
single solution given by Eq. (\ref{3d15fb}) (region I, gas phase,
in Fig.~\ref{fig_3fb}). At $-M_q<M<0$, it transforms into the solution
\begin{figure}
\centering \includegraphics[width=0.70\textwidth]{fig3_reg_mn.eps}
\caption{Regions of chemical potential variation and the corresponding
densities for temperatures below the critical one.}
\label{fig_3fb}
\end{figure}
$\sigma'_{03}$ from Eq. (\ref{5d1fb}), which is valid up to $M=-0$
(region II, transient gas phase). For $M=-M_q$, we obtain
$\sigma'_{03}=\sigma_{03}^{'(-)}=-2\sigma_{0r}$ (see Eq. (\ref{5d6fb})),
and for $M=-0$, we arrive at the expressions $\cos \alpha_{r0}^{(-)}=0$,
$\alpha_{r0}^{(-)}=\pi/2$, and
\be
\lim_{M\rightarrow-0}\sigma'_{03} = \sigma_{030}^{'(-)} =
- 2 \sigma_{0r} \cos\lp - \frac{\pi}{6}\rp = - \sqrt 3 \sigma_{0r}.
\label{5d8fb}
\ee
On the other hand, as $M$ decreases to $M_q$ from
the side of positive values, there is the single solution
(\ref{3d15fb}) (region IV, fluid phase). At $0<M<M_q$, this
solution transforms into the solution $\sigma'_{01}$ from
Eq. (\ref{5d1fb}), which is valid up to $M=+0$ (region III, transient
fluid phase). For $M=M_q$, we have
$\sigma'_{01}=\sigma_{01}^{'(+)}=2\sigma_{0r}$ (see Eq. (\ref{5d7fb})),
and for $M=+0$ we obtain
\be
\lim_{M\rightarrow +0}\sigma'_{01} = \sigma_{010}^{'(+)} =
2 \sigma_{0r} \cos\lp \frac{\pi}{6}\rp = \sqrt 3 \sigma_{0r}.
\label{5d9fb}
\ee
Depending on the value of the chemical potential $M$, the equation of state of
the cell fluid model at $T<T_c$ (see work \cite{p120}) can be written
in the form
\bea
&&
\frac{Pv}{kT} = P_a^{(-)}(T) + E_{\mu} +
D_{13} (\sigma'_{0b}) \left[ \Theta (-M-M_q) + \rd \non
&&
\ld + \Theta (M - M_q)\right] +
D_{13} (\sigma'_{03}) \Theta(-M) \Theta (M+M_q) + \non
&&
+ D_{13} (\sigma'_{01}) \Theta(M) \Theta(M_q-M).
\label{5d10fb}
\eea
Here, the quantity
\bea
&&
D_{13}(\sigma'_0) = \lp \gamma_s^{(-)} - e_2^{(-)}\rp
\lp \tilde h^2 + h^2_{cm}\rp^{\frac{d}{d+2}} + \non
&&
+ e_0^{(-)} \tilde h \lp \tilde h^2 + h^2_{cm}\rp^{\frac{d-2}{2(d+2)}}
\label{5d11fb}
\eea
depends on the solution $\sigma'_0$ of Eq. (\ref{3d12fb}); $d=3$ is
the space dimension; $v$ is the cell volume; $\Theta(M)$ is
the Heaviside function, which is equal to unity if $M>0$,
to zero if $M<0$, and to $1/2$ if $M=0$; the quantity $P_a^{(-)}(T)$
depends analytically on the temperature; the coefficient $\gamma_s^{(-)}$
characterizes the non-analytical contribution to the thermodynamic
potential; the quantities $e_0^{(-)}$ and $e_2^{(-)}$ depend on the roots
of the cubic equation (\ref{3d12fb}). Expressions for all those
quantities, as well as for $E_{\mu}$, are given in work \cite{p120}.
The equation of state (\ref{5d10fb}) makes it possible to study
the dependence of the pressure $P$ on the chemical potential $M$ and
the relative temperature $\tau$. This equation can be rewritten in terms
of the temperature and density variables. For this purpose, the chemical
potential expressed via the temperature $\tau$ and the average
density $\bar n$ should be substituted into Eq. (\ref{5d10fb}),
and the intervals of chemical potential values in the
Heaviside functions have to be replaced by the corresponding
density values from the regions shown in Fig.~\ref{fig_3fb}.
Let us consider each of the regions separately.
{\bf Region I ($M\leq -M_q$).} Here the solution $\sigma'_0$ of
Eq. (\ref{3d12fb}) looks like $\sigma'_{0b}$ from Eq. (\ref{3d15fb}).
If $M=-M_q$, then expression (\ref{5d4fb}), where $\sigma_{0r}$ is given
by the relationship from Eq. (\ref{5d2fb}), is valid for $\sigma'_{0b}$.
Solution (\ref{5d4fb}) for $\sigma_{0bq}^{'(-)}$ coincides with
$\sigma_{03}^{'(-)}$ from Eq. (\ref{5d6fb}). On the other hand,
the equality \cite{p120}
\be
b_3^{(-)} M^{1/5} = \bar n - n_g + M,
\label{4d37fb}
\ee
holds, which couples the average density $\bar n$ with the
chemical potential (in this case, $M=-M_q$) and the
quantity $\sigma_{00}^{(-)}=f(\sigma'_{0b})$, which is included
into the coefficient $b_3^{(-)}$. Note that $n_g$ is determined via
the coefficients in the initial expression for the grand partition
function, and $\sigma_{00}^{(-)}$ is a function of the quantity
$\alpha_m = \tilde h/h_{cm}$, which includes the initial chemical
potential $\mu$ (included into $M$) and the relative
temperature $\tau$. From Eq. (\ref{4d37fb}), we can determine
the density $\bar n_{12}$ (the boundary density between
regions I and II) corresponding to the value
$M=-M_q$. Neglecting the last term in Eq. (\ref{4d37fb}), we
obtain
\bea
&&
\bar n_{12} = n_g + b_3^{(-)} M^{1/5}\big|_{\substack{M=-M_q}} =
n_g + \left[ (1+\alpha^2_{mq})^{1/2}h_{cm}\right]^{1/5}
\sigma_{00}^{(-)}(\sigma_{03}^{'(-)}), \non
&&
\sigma_{03}^{'(-)} = - 2 \sigma_{0r}.
\label{5d12fb}
\eea
{\bf Region II ($-M_q < M \leq -0$).} At $M=-0$, the equality
(\ref{5d8fb}) holds true and the boundary density
$\bar n_{20}=\lim_{M\rightarrow -0} \bar n$ takes the form
\bea
&&
\bar n_{20} = n_g +
\lim_{M\rightarrow -0} \left[ (1+\alpha^2_{m})^{1/2}h_{cm}\right]^{1/5}
\sigma_{00}^{(-)} =
n_g + h_{cm}^{1/5} \sigma_{00}^{(-)}(\sigma_{030}^{'(-)}), \non
&&
\sigma_{030}^{'(-)} = - \sqrt 3 \sigma_{0r}.
\label{5d13fb}
\eea
{\bf Region III ($+0 \leq M < M_q$).} This region starts from the value
$M=+0$, where $\sigma_{010}^{'(+)}=\sqrt 3 \sigma_{0r}$ and, accordingly,
\be
\bar n_{03} = n_g + h_{cm}^{1/5} \sigma_{00}^{(-)} (\sigma_{010}^{'(+)}).
\label{5d14fb}
\ee
The chemical potential $M$ in region III acquires values less than $M_q$.
{\bf Region IV ($M \geq M_q$).} This region starts from the value
$M=M_q$, which corresponds to $\sigma_{01}^{'(+)}=2\sigma_{0r}$,
so that
\be
\bar n_{34} = n_g + \left[ (1 + \alpha^2_{mq})^{1/2} h_{cm}\right]^{1/5}
\sigma_{00}^{(-)} (\sigma_{01}^{'(+)}).
\label{5d15fb}
\ee
In region IV, the density increases from its boundary value $\bar n_{34}$
up to a certain value $\bar n_{max}$, which corresponds to $M_{max}$.
At $\bar n > \bar n_{max}$, the chemical potential $M$ decreases with
increasing density $\bar n$, which does not reflect the physical
nature of the phenomenon (a similar picture is observed
at $\bar n < \bar n_{min}$).
The determination of the boundary densities $\bar n_{12}$, $\bar n_{20}$,
$\bar n_{03}$, and $\bar n_{34}$ makes it possible to write the equation
of state (\ref{5d10fb}) in the form
\bea
&&
\frac{Pv}{kT} = P_a^{(-)}(T) + E_{\mu} +
D_{13} (\sigma'_{0b}) \left[ \Theta (\bar n_{12}-\bar n) + \rd \non
&&
\ld + \Theta (\bar n - \bar n_{34})\right] +
D_{13} (\sigma'_{03}) \Theta(\bar n - \bar n_{12}) \Theta (-\bar n + \bar n_{20}) + \non
&&
+ D_{13} (\sigma'_{01}) \Theta(\bar n - \bar n_{03}) \Theta(\bar n_{34}-\bar n),
\label{5d16fb}
\eea
where
\be
D_{13}(\sigma'_0) \! = \! \lp \frac{\bar n \! - \! n_g}{\sigma_{00}^{(-)}} \rp^6
\!\! \left[ \! e_0^{(-)} \! \frac{\alpha_m}{(1+\alpha_m^2)^{1/2}} \! + \!
\gamma_s^{(-)} \! - \! e_2^{(-)} \! \right].
\label{5d17fb}
\ee
The dependence of the pressure $P$ (see Eq. (\ref{5d16fb})) on $\bar n$
for various $\tau$ is shown in Fig.~\ref{fig_4fb}.
\begin{figure}
\centering \includegraphics[width=0.70\textwidth]{fig4_pn_betc.eps}
\caption{Pressure as a function of average density for various values
of the relative temperature.}
\label{fig_4fb}
\end{figure}
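For clarity, the piecewise structure of Eq. (\ref{5d16fb}) can also be written schematically as a short routine (a sketch only, not part of the calculation); the quantities $P_a^{(-)}$, $E_{\mu}$ and the three branches of $D_{13}$ enter as placeholder inputs to be taken from Eqs. (\ref{5d11fb}) and (\ref{5d17fb}):
\begin{verbatim}
import numpy as np

def theta(x):
    # Heaviside function with Theta(0) = 1/2, as adopted in the text
    return np.heaviside(x, 0.5)

def pressure(n, n12, n20, n03, n34, D13_b, D13_3, D13_1, P_a, E_mu):
    # Piecewise form of the equation of state in terms of the average
    # density; D13_b, D13_3, D13_1 are callables returning D13(sigma'_0)
    # evaluated with the root relevant to each region.
    return (P_a + E_mu
            + D13_b(n) * (theta(n12 - n) + theta(n - n34))
            + D13_3(n) * theta(n - n12) * theta(n20 - n)
            + D13_1(n) * theta(n - n03) * theta(n34 - n))
\end{verbatim}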
\section{Relationship between the density
and the chemical potential of fluid.
Limiting cases}
\label{sec:3}
Nonlinear equation (\ref{4d37fb}), which describes the relationship
between the density $\bar n$ and the chemical potential $M$, can be
rewritten in the form \cite{p120}
\be
\bar n = n_g - M + \sigma_{00}^{(-)} \lp \tilde h^2 + h^2_{cm} \rp^{\frac{d-2}{2(d+2)}}.
\label{6d1fb}
\ee
The general form of equation (\ref{6d1fb}) or (\ref{4d37fb}) makes it
possible to change in a natural way to the cases when
either of the variables (the temperature or the chemical
potential) is crucial for the description of the order
parameter behavior.
Let us describe the behavior of $\bar n$ in some limiting
cases. One of them is the case of a vanishing chemical potential $M$ (i.e., $M=0$
and hence $\tilde h=0$) and $T\neq T_c$. Then, we obtain
\be
\sigma_{00}^{(-)}(M=0) = \frac{e_0^{(-)}}{(\beta W(0))^{1/2}} + e_{020}^{(-)},
\label{6d2fb}
\ee
where
\[
e_{020}^{(-)} = e_{02}^{(-)}(M=0) = \frac{1}{(\beta W(0))^{1/2}}
\frac{f_{Iv}}{s^3}.
\]
The expression for $f_{Iv}$ is given in work \cite{p120}. From
Eq. (\ref{6d1fb}), we obtain the dependence
\be
\bar n = n_g + \sigma_{00}^{(-)}(M=0) \tilde\tau_1^{\beta},
\label{6d3fb}
\ee
where the critical exponent $\beta=\nu/2$.
Another limiting case is $M\neq 0$ and $T=T_c$. The density $\bar n$ in
Eq. (\ref{6d1fb}) at $T=T_c$ satisfies the equality
\be
\bar n = n_g - M + \sigma_{00}^{(-)}(T_c) \tilde h^{1/\delta},
\label{6d4fb}
\ee
where
\be
\sigma_{00}^{(-)}(T_c) = \frac{6}{5} \frac{1}{(\beta_c W(0))^{1/2}}
\lb e_0^{(-)} + \gamma_s^{(-)} - e_2^{(-)} \rb
\label{6d5fb}
\ee
and the critical exponent $\delta=5$.
In the general case, i.e., at $M\neq 0$ and $T\neq T_c$,
Eq. (\ref{6d1fb}) can be presented as follows:
\be
\bar n = n_g - M + \sigma_{00}^{(-)} \lp \tilde h^2 +
\tilde\tau_1^{2\beta\delta} \rp^{1/(2\delta)}.
\label{6d6fb}
\ee
Note that $M\ll 1$, and $\tilde h\sim M$. Therefore, the second
summand, $M$, in the right-hand sides of Eqs. (\ref{6d1fb}),
(\ref{6d4fb}), and (\ref{6d6fb}) is much smaller than the third term
and can be neglected.
\section{Binodal equation}
\label{sec:4}
The binodal equation can be obtained from Eq. (\ref{6d1fb}) by
putting $M=0$. Then we arrive at Eq. (\ref{6d3fb}). Now, substituting
the expression $\tilde\tau_1 = -\tau \frac{c_{11}}{q} E_2^{n_0}$, we
obtain
\be
\bar n = n_g + \sigma_{00}^{(-)}(M=0)
\lp -\tau \frac{c_{11}}{q} E_2^{n_0} \rp^{\beta}.
\label{7d1fb}
\ee
Here, $E_2$ is one of the eigenvalues of the matrix for the
linear transformation of the renormalization group,
the quantity $c_{11}$ characterizes one of the coefficients in
the solutions of recurrence relations for the $\rho^4$-model
\cite{p120,kpd118}, $n_0$ is the difference between the exit points
from the critical fluctuation regime at $T>T_c$ and $T<T_c$, and $q$ is
associated with the averaging of the wave vector square. For more
information on those parameters, see work \cite{p120}.
Let us solve Eq. (\ref{7d1fb}) with respect to the temperature.
Taking the equalities $\tau=T/T_c-1$ and $\beta=\nu/2$ into account,
we obtain
\be
\lb \frac{\lp \frac{\bar n}{n_g}-1 \rp n_g}{\sigma_{00}^{(-)}(M=0)} \rb^{2/\nu}
\frac{q}{c_{11} E_2^{n_0}} = - \frac{T}{T_c} + 1
\label{7d2fb}
\ee
or
\be
\frac{T}{T_c} = 1 - \lbr \lb \frac{\lp \frac{\bar n}{n_g}-1 \rp n_g}
{\sigma_{00}^{(-)}(M=0)} \rb^2 \rbr^{1/\nu} \frac{q}{c_{11} E_2^{n_0}}.
\label{7d3fb}
\ee
On the basis of Eq. (\ref{7d3fb}), we can plot the binodal
curve in the temperature versus density plane (see Fig.~\ref{fig_5fb}).
\begin{figure}
\centering \includegraphics[width=0.70\textwidth]{fig5_ttnnMFACPs_m1.eps}
\caption{Coexistence curve (binodal curve) obtained in an
immediate vicinity of the critical point taking into account
the interaction potential parameters that are characteristic of
sodium. The solid curve (the dome) was plotted according to
the obtained binodal equation, and the dotted curve is a result
of the zero-mode approximation \cite{kdp117}.}
\label{fig_5fb}
\end{figure}
This curve agrees with the data predicted for
sodium by extrapolating the results of computer simulation
\cite{singh} to $T/T_c \approx 1$ (see work \cite{p120}).
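As a simple numerical illustration (not part of the original analysis), Eq. (\ref{7d3fb}) can be evaluated directly; all parameter values in the sketch below are placeholders and must be replaced by the sodium values of work \cite{p120}:
\begin{verbatim}
import numpy as np

def binodal_T_over_Tc(n, n_g, sigma00, q, c11, E2, n0, nu):
    # T/T_c = 1 - {[(n - n_g)/sigma00]^2}^(1/nu) * q / (c11 * E2^n0)
    x = ((n - n_g) / sigma00) ** 2
    return 1.0 - x ** (1.0 / nu) * q / (c11 * E2 ** n0)

dens = np.linspace(0.3, 0.7, 5)      # placeholder densities
print(binodal_T_over_Tc(dens, n_g=0.5, sigma00=0.4, q=1.0,
                        c11=0.7, E2=8.0, n0=1.0, nu=0.63))
\end{verbatim}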
The spinodal equation, which describes the limiting
states of the system that determine the boundaries
of the instability region, can be found from the
extremum condition
\[
\frac{\partial (P v/k T)}{\partial \bar n}\bigg|_T= 0
\]
for the equation of state (\ref{5d16fb}), where we should
substitute the chemical potential $M$ expressed from
Eq. (\ref{4d37fb}) in terms of the average density $\bar n$.
\section{Conclusions}
\label{sec:5}
In this paper, the cell model was used to study
the behavior of a fluid in a close vicinity of its critical
point. This is an issue that is interesting (because of its fundamental
and applied aspects) and difficult (because
of the substantial role of fluctuation effects) to
analyze. The study of the relationship between the
density and the chemical potential at temperatures
$T<T_c$ made it possible to determine the corresponding
densities in the regions where the chemical potential
changes and obtain both the equation of state
and the binodal equation. Solutions were presented for
a certain cubic equation on which the equation of
state of the cell fluid model depends. Their analysis
made it possible to describe the transition from
one solution to another when the chemical potential
tends to zero. On the basis of the obtained equation of
state, the pressure variation with the density growth
at various temperatures has been illustrated graphically.
Using the binodal equation, a binodal curve in a
narrow temperature interval was constructed for the
microscopic parameters of the Morse potential that
are inherent to sodium. As compared with the case
of the zero-mode approximation, the obtained dome
of the coexistence curve is wider and agrees better
with the results of computer simulation \cite{singh}.
|
{
"arxiv_id": "2302.14118",
"language": "en",
"timestamp": "2023-03-01T02:01:31",
"url": "https://arxiv.org/abs/2302.14118",
"yymm": "2302"
} | \section{Introduction}
Quantum chromodynamics (QCD), in the limit of a large number of colors $N_c$ \cite{Hooft:74,Witten:79}, is a successful tool to reproduce the qualitative features of strong interaction phenomena at moderate energies of the order of the $\rho$-meson mass. To succeed in the quantitative description of hadronic physics, one should rely on the effective Lagrangian approach, or on lattice calculations. Various options of the QCD inspired effective Lagrangian are discussed in the literature, differing in the content of the fields used and the rules of $N_c$ counting \cite{Witten:80,Veneziano:80,Trahern:80,Ohta:80,Ohta:81,Leutwyler:96a,Leutwyler:96b,Taron:97,Kaiser:00,Osipov:06,Weinberg:10}. For instance, in \cite{Leutwyler:96a,Leutwyler:96b}, the properties of the lightest pseudoscalar nonet were studied by the effective Lagrangian arranged according to the powers of momenta, masses of light quarks, and $N_c$. In particular, it was assumed that the masses of light quarks are of the order $m_i=\mathcal{O}(1/N_c)$, where $i=u,d,s$ are flavors. This approach is known now as $1/N_c$ chiral perturbation theory ($1/N_c\chi$PT) \cite{Kaiser:00} wherein the $\eta'$ meson is included consistently by means of the $1/N_c$ expansion. When $N_c\to\infty$, the axial $U(1)_A$ anomaly is absent and the pseudoscalar $SU(3)$ singlet becomes the ninth Goldstone boson associated with the spontaneous symmetry breaking of $U(3)_L \times U(3)_R \to U(3)_V$ \cite{Witten:79b,Veneziano:79}. A simultaneous chiral and $1/N_c$ expansion leads to an effective theory for the pseudoscalar nonet that is not only internally consistent but is also very useful in practice \cite{Goity:02,Bickert:20}.
In view of the fruitfulness of Leutwyler's idea to count $m_i=\mathcal{O}(1/N_c)$, we find it interesting to apply this counting rule to calculations based on the low-energy meson Lagrangian derived from the effective $U(3)_L\times U(3)_R$ symmetric four-quark interactions of the Nambu -- Jona-Lasinio (NJL) type \cite{Nambu:61a,Nambu:61b}. Our interest in the NJL model (in connection with the task of studying the properties of the pseudoscalar nonet) is due to two reasons.
First, the model implies a specific mechanism of spontaneous chiral symmetry breaking (S$\chi$SB). Therefore, its use allows us to express a number of arbitrary parameters, known from the analysis of Leutwyler, through the characteristics of the hadron vacuum, and thereby obtain their numerical values. This makes it possible to study in detail the four-quark mechanism of S$\chi$SB.
Second, in obtaining the meson Lagrangian it is important to properly account for the effect of explicit violation of chiral symmetry. For this purpose, we for the first time use a new asymptotic expansion of the quark determinant \cite{Osipov:21a,Osipov:21b,Osipov:21c}, which is based on the Volterra series. This series together with the Fock-Schwinger proper-time method turns out to be an efficient tool that allows not only to isolate divergent parts of quark loop diagrams, but also to accurately reproduce the flavor structure of coupling constants of the effective meson Lagrangian. The latter circumstance is fundamental in studying the explicit violation of chiral symmetry in the NJL model.
The method used here differs significantly from the schemes applied earlier in the NJL model to extract the consequences of explicit chiral symmetry breaking. The noncommutativity of the mass matrix of quarks with meson fields leads to an additional rearrangement of the asymptotic series in powers of proper time. As a result, the effective meson Lagrangian not only contains divergent vertices (at removal of regularization in loop integrals), but also has uniquely defined finite terms which vanish in the limit of equal masses. These terms contain, apparently, important additional information about isospin and flavor symmetry breaking, which is absent in the standard meson Lagrangian of the NJL model \cite{Volkov:84,Wadia:85, Volkov:86,Ebert:86,Bijnens:93,Osipov:17}. The study of the physical consequences of accounting for these finite contributions is a long-term goal of the approach developed here.
The pseudoscalar mesons offer an excellent ground for checking the effectiveness of the asymptotic expansion based on the Volterra representation. This concerns both the mass formulas and other low-energy characteristics of the light pseudoscalar nonet, primarily those associated with an explicit violation of chiral symmetry. The counting rule $m_i=\mathcal{O}(1/N_c)$ makes this task more tractable for the NJL model. Indeed, any attempt to calculate the first correction to the leading-order $1/N_c$ result within the standard approach requires to account for chiral logarithms, a step that implies a major modification of the NJL model. However, if $m_i=\mathcal{O}(1/N_c)$, the contribution of chiral logarithms starts only from the order $(m_i/N_c)\ln m_i$. It is with this circumstance that the possibility of effective use of the $1/N_c$ NJL model for estimating the masses and other characteristics of the pseudoscalar nonet of mesons is connected. And it is in this case that the Volterra series plays the main role in describing the effects of explicit chiral symmetry breaking.
In this article, we deal with electrically charged and strange pseudoscalars. The neutral states $\pi^0$, $\eta$, and $\eta'$ are considered in a separate article. This is due both to the volume of the material presented and to the clarity of its presentation.
The article is organized as follows. In Sec.\,\ref{s2}, we briefly describe the method for deriving the effective meson Lagrangian based on four-quark interactions, and demonstrate how the Volterra representation is embedded in the general scheme of the Fock-Schwinger proper-time method. In Sec.\,\ref{s3}, we discuss modifications related with the $1/N_c$ treatment of the NJL gap equation. The mixing between pseudoscalar and axialvector fields is considered in Sec.\,\ref{s4}. The kinetic terms of the meson effective Lagrangian are considered in Sec.\,\ref{s5}. Here we obtain the decay constants of pseudoscalars by rescaling the corresponding collective variables. The masses of the charged pion and strange pseudoscalars are discussed in Sec.\,\ref{s6}. The Gasser-Leutwyler ellipse and other sum rules relating pseudoscalar masses with light quark masses are discussed in Sec.\,\ref{s7}. Our numerical estimates are given in Sec.\,\ref{s8}. We summarize in Sec.\,\ref{Conclusions}.
\section{Effective Lagrangian}
\label{s2}
Four-quark interactions are widely used to describe the mechanism of S$\chi$SB and construct the effective action of mesons at moderate energies \cite{Volkov:84,Volkov:86,Ebert:86,Osipov:17,Bijnens:93}
\begin{equation}
\label{Lagr1}
\mathcal L=\bar q(i\gamma^\mu\partial_\mu -m)q+\mathcal{L}_{4q}.
\end{equation}
Here $\gamma^\mu$ are the Dirac gamma-matrices, $\bar q=(\bar u,\bar d,\bar s)$ is a flavor triplet of Dirac 4-spinors with $\bar u=u^\dagger \gamma^0$, and $m$ is a diagonal matrix $m=\mbox{diag}(m_u,m_d,m_s)$ containing the masses of the current up, down and strange quarks. The Lagrangian density describing four-quark interactions has the form $\mathcal{L}_{4q}=\mathcal L^{(0)}+\mathcal L^{(1)}$, where the sum consists of $G=U(3)_L\times U(3)_R$ chiral-symmetric four-quark operators with spin zero and one, respectively
\begin{eqnarray}
\mathcal L^{(0)} &=& \frac{G_S}{2} \left[ (\bar q\lambda_a q)^2+(\bar q i\gamma_5\lambda_a q)^2 \right], \\
\mathcal L^{(1)} &=&- \frac{G_V}{2} \left[ (\bar q\gamma^\mu\lambda_a q)^2+(\bar q \gamma^\mu\gamma_5\lambda_a q)^2 \right],
\end{eqnarray}
where $a=0,1,\ldots, 8$, the matrix $\lambda_0=\sqrt{2/3}\,\mathbf{1}$ is proportional to the $3\times 3$ unit matrix $\mathbf{1}$, and $\lambda_i$ ($i=1,\ldots,8$) are the eight Gell-Mann matrices of $SU(3)$. The coupling constants $G_S$ and $G_V$ have dimensions of (mass)$^{-2}$ and can be fixed from the meson mass spectrum.
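As a quick numerical check of the conventions (an aside, not part of the derivation), the nine matrices $\lambda_a$ introduced above obey the normalization $\mbox{tr}\,(\lambda_a\lambda_b)=2\delta_{ab}$, which fixes the factor $\sqrt{2/3}$ in $\lambda_0$; a minimal sketch:
\begin{verbatim}
import numpy as np

# lambda_0 = sqrt(2/3) * identity plus the eight Gell-Mann matrices
l = np.zeros((9, 3, 3), dtype=complex)
l[0] = np.sqrt(2.0 / 3.0) * np.eye(3)
l[1] = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
l[2] = [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]]
l[3] = [[1, 0, 0], [0, -1, 0], [0, 0, 0]]
l[4] = [[0, 0, 1], [0, 0, 0], [1, 0, 0]]
l[5] = [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]]
l[6] = [[0, 0, 0], [0, 0, 1], [0, 1, 0]]
l[7] = [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]]
l[8] = np.diag([1.0, 1.0, -2.0]) / np.sqrt(3.0)

# check tr(lambda_a lambda_b) = 2 delta_ab
gram = np.einsum('aij,bji->ab', l, l)
assert np.allclose(gram, 2.0 * np.eye(9))
\end{verbatim}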
The spin-0 short-range attractive force between light quarks, $\sim G_S(\bar q\lambda_a q)^2$, is responsible for the S$\chi$SB. If this interaction is sufficiently strong, $G_S\geq G_{\mbox{\footnotesize{cr}}}$, it can rearrange the vacuum, and the ground state becomes superconducting, with a nonzero quark condensate. As a result, nearly massless current quarks become effectively massive constituent quarks. The short-range interaction can then bind these constituent quarks into mesons.
For a theory described by the Lagrangian density (\ref{Lagr1}), the vacuum to vacuum amplitude is given by functional integration
\begin{eqnarray}
\label{Zq}
Z&\!=\!\!&\int [dq][d\bar q] \exp \left( i\!\!\int\!\! d^4x \,\mathcal L\right) \\
\label{Zsq}
&\!=\!\!&\int [dq][d\bar q] [ds_a][dp_a][dv_a^{\mu}][da_a^{\mu}]\exp \left( i\!\!\int\!\! d^4x \,\mathcal L' \right)\!, \nonumber
\end{eqnarray}
where
\begin{eqnarray}
\label{Lagr2}
\mathcal L' &=& \bar q [i\gamma^\mu \partial_\mu +s+i\gamma_5 p
+\gamma^\mu (v_\mu +\gamma_5 a_\mu )] q \nonumber \\
&-& \frac{\mbox{tr}[(s+m)^2+p^2]}{4G_S} +\frac{\mbox{tr}(v_\mu^2+a_\mu^2)}{4G_V}.
\end{eqnarray}
The new Lagrangian $\mathcal L'$ has the same dynamical content as $\mathcal L$ since if we perform a functional integration over collective variables $s_a$, $p_a$, $v^\mu_a$ and $a^\mu_a$ in (\ref{Zsq}) we obtain the original expression (\ref{Zq}). Notice that $s=s_a\lambda_a$, $p=p_a\lambda_a$, $v^\mu=v^\mu_a\lambda_a$ and $a^\mu=a^\mu_a\lambda_a$.
In the world of zero quark bare masses $m=0$, $\mathcal L'$ is invariant under global $U(3)_L\times U(3)_R$ transformations. In particular, the group $G$ acts on the quark fields as follows
\begin{equation}
q'=(P_RV_R+P_LV_L)q=e^{i\alpha}e^{i\gamma_5\beta} q,
\end{equation}
where the projection operators $P_{R,L}$ are $P_R=(1+\gamma_5)/2$, $P_L=(1-\gamma_5)/2$. It is convenient to choose finite unitary transformations $V_{R,L}\in G$ in the form of a product of two exponents $V_R=e^{i\alpha}e^{i\beta}$, $V_L=e^{i\alpha}e^{-i\beta}$, where $\alpha=\alpha_a\lambda_a$, $\beta=\beta_a\lambda_a$ and the parameters $\alpha_a$ and $\beta_a$ are real. Then it follows that
\begin{eqnarray}
s'+ip'&=&V_L(s+ip)V^\dagger_R, \nonumber \\
v'_\mu+a_\mu' &=& V_R(v_\mu+a_\mu)V_R^\dagger, \nonumber \\
v'_\mu -a_\mu' &=& V_L (v_\mu -a_\mu)V_L^\dagger.
\end{eqnarray}
Let us use the freedom of choice of dynamical variables in (\ref{Zsq}) to carry out the transition to a nonlinear realization of chiral symmetry for Goldstone particles. To do this, we represent the complex $3\times 3$ matrix $s+ip$ as the product of the unitary matrix $\xi$ and the Hermitian matrix $\tilde\sigma$
\begin{equation}
s+ip=\xi\tilde\sigma\xi.
\end{equation}
From the covariance of this expression under the action of the group $G$, it follows that the matrices $\xi$ and $\tilde\sigma$ are transformed as
\begin{equation}
\xi'=V_L\xi h^\dagger = h\xi V_R^\dagger, \quad \tilde\sigma'=h\tilde\sigma h^\dagger,
\end{equation}
where $h$ is a unitary compensating transformation belonging to the maximal subgroup $H\subset G$, leaving the vacuum invariant, and arising under the action of the chiral group $G$ on the coset representative $\xi$ of the $G/H$ manifold. In these variables we have $\bar q(s+i\gamma_5 p)q=\bar Q\tilde\sigma Q,$ where the new quark fields are given by $Q=(\xi P_R+\xi^\dagger P_L)q$. A nonlinear realization of $G$ becomes a linear representation when restricted to the subgroup $H$ \cite{Wess:69a,Wess:69b}. Indeed, one can see that the field $Q$ transforms as the fundamental representation of the subgroup $H$: $Q'=hQ$.
Having done similar redefinitions in the rest of the Lagrangian (\ref{Lagr2}), we find $ \mathcal L' \to \mathcal L''$, where
\begin{eqnarray}
\label{ELM}
\mathcal L''&=&\bar Q (i\gamma^\mu d_\mu -M+\sigma ) Q +\frac{1}{4G_V} \mbox{tr}(V_\mu^2+A_\mu^2) \nonumber \\
&-&\frac{1}{4G_S} \mbox{tr}\left[\sigma^2-\{\sigma ,M\} +(\sigma-M)\Sigma \right].
\end{eqnarray}
Notice the replacement $\tilde\sigma=\sigma -M$ made in (\ref{ELM}). The matrix $M$ is diagonal, $M=\mbox{diag} (M_u,M_d,M_s)$, and its elements are considered as the masses of the constituent quarks $Q$. We assume at this step that chiral symmetry is realized in the Nambu-Goldstone sense (as $m_i\to 0$), i.e., the heavy constituent masses result from dynamical symmetry breaking and are controlled by the gap equation (see Eq.\,(\ref{gapEq}) below). In turn, $\sigma (x)$ describes quantum fluctuations of the $\tilde\sigma$ field around the physical vacuum.
The corresponding collective variables for vector, axial-vector, scalar and pseudoscalar fields are given by Hermitian matrices $V_\mu=V_\mu^a\lambda_a$, $A_\mu=A_\mu^a\lambda_a$, $\sigma=\sigma_a\lambda_a$, $\phi=\phi_a\lambda_a$, where
\begin{eqnarray}
V_\mu &=& \frac{1}{2} \left[\xi (v_\mu +a_\mu )\xi^\dagger +\xi^\dagger (v_\mu -a_\mu )\xi \right], \nonumber \\
A_\mu &=& \frac{1}{2} \left[\xi (v_\mu +a_\mu )\xi^\dagger -\xi^\dagger (v_\mu -a_\mu )\xi \right], \nonumber \\
\Sigma &=&\xi m \xi+\xi^\dagger m \xi^\dagger , \quad \xi=\exp \left(\frac{i}{2}\,\phi\right).
\end{eqnarray}
The pseudoscalar field $\phi$ is dimensionless; later on, when passing to the field functions of the physical states, it will acquire the required dimension of mass. The vector $V_\mu$ and axial-vector $A_\mu$ fields are chosen to transform as $V_\mu'=hV_\mu h^\dagger$, $A_\mu'=hA_\mu h^\dagger$.
The symbol $d_\mu =\partial_\mu -i\Gamma_\mu$ in (\ref{ELM}) denotes the covariant derivative, where
\begin{eqnarray}
\label{gammamu}
&&\Gamma_\mu = \xi^{(+)}_\mu +V_\mu+\gamma_5\left( \xi_\mu^{(-)}+A_\mu \right), \\
&& \xi_\mu^{(\pm )}= \frac{i}{2} \left( \xi\partial_\mu \xi^\dagger \pm \xi^\dagger \partial_\mu \xi \right).
\end{eqnarray}
It is easy to establish that $\Gamma_\mu$ is a connection on $G/H$ satisfying the standard transformation rules under the local action of $G$
\begin{equation}
\Gamma_\mu' = h\Gamma_\mu h^\dagger +ih\partial_\mu h^\dagger.
\end{equation}
To be precise, this is how $\xi_\mu^{(+)}$ transforms; the other fields in $\Gamma_\mu$ are covariant objects that transform by similarity.
To exclude quark degrees of freedom in the functional integral
\begin{equation}
\label{Z3}
Z\!=\!\!\int [dQ][d\bar Q] [d\sigma_a] \mathcal D\mu [\phi_a] [dV_a^{\mu}][dA_a^{\mu}] e^{\, i\!\!\int\!\! d^4x \,\mathcal L''}\!,
\end{equation}
it is necessary to integrate over the quark variables. Before we do this, let us clarify that the $G$-invariant measure $\mathcal D\mu [\phi_a]$ in $Z$ is related to the curvature of the $G/H$ group manifold parametrized by the pseudoscalar variables $\phi_a$. It is easy to find an explicit expression for this differential form, but we will not need it in what follows. Therefore, we proceed directly to the calculation of the real part of the effective meson Lagrangian by taking the integral over the quark fields. The result is a functional determinant
\begin{equation}
\label{logdet1}
W_E =\ln |\det D_E|
= -\int\limits^\infty_0\!\frac{dt}{2t}\,\rho_{t,\Lambda}\,\mbox{Tr}\left(e^{-t D_E^\dagger D_E^{}}\right),
\end{equation}
representing the real part of the one-loop effective action in Euclidean (E) space as an integral over the proper time $t$. Here, $D=i\gamma^\mu d_\mu -M+\sigma \to D_E$. Note that the rules we use to continue to Euclidean space are standard and can be found, for instance, in \cite{Osipov:21c}. The symbol ``Tr'' denotes the trace over Dirac $(D)$ $\gamma$-matrices, color $(c)$ $SU(3)$ matrices, and flavor $(f)$ matrices, as well as integration over the coordinates of Euclidean space: $\mbox{Tr}\equiv \mbox{tr}_I \int\! d^4x_E$, where $I=(c,D,f)$. The trace in color space is trivial: it leads to the overall factor $N_c=3$.
The dependence on matter fields in $D_E$ after switching to the Hermitian operator
\begin{equation}
D_E^\dagger D_E^{}=M^2 -d^2+Y
\end{equation}
is collected in the $3\times 3$ matrix $Y$
\begin{eqnarray}
Y&=&\sigma^2-\{\sigma,M\} +i[\gamma_\alpha (\sigma -M), d_\alpha ] \nonumber \\
&+&\frac{1}{4}[\gamma_\alpha, \gamma_\beta ] [d_\alpha, d_\beta ],
\end{eqnarray}
and the covariant derivative $d_\alpha$
\begin{eqnarray}
d_\alpha &=&\partial_\alpha +i\Gamma_\alpha, \\
\Gamma_\alpha &=&V_\alpha -\xi_\alpha^{(+)} +\gamma_{5E} (A_\alpha -\xi_\alpha^{(-)} ),
\end{eqnarray}
where $\Gamma_\alpha$ is a connection in a curved factor space of Goldstone fields (in four-dimensional Euclidean space).
In (\ref{logdet1}) we use the proper-time regularization $\rho_{t,\Lambda}$ with two subtractions at the mass scale $\Lambda$
\begin{equation}
\rho_{t,\Lambda}=1-(1+t\Lambda^2)e^{-t\Lambda^2},
\end{equation}
which in the NJL model was first used in \cite{Osipov:85}. The ultraviolet cutoff $\Lambda$ characterizes the scale of $S\chi SB$, i.e., above this scale four-quark interactions disappear and QCD becomes perturbative. Obviously, the value of $\Lambda$ depends on the regularization scheme used, and generally varies in the interval $0.65-1.3\,\mbox{GeV}$ \cite{Osipov:85,Klevansky:92}. In the present paper, we apply the proper-time regularization with $\Lambda =1.1\,\mbox{GeV}$. This value, as will be shown below, is phenomenologically justified and is consistent with an estimate of the chiral symmetry breaking scale $\Lambda_{\chi SB}\leq 4\pi f_\pi$ \cite{Georgi:84}, where $f_\pi=92.2\,\mbox{MeV}$ is the pion decay constant.
The functional trace in (\ref{logdet1}) can be evaluated by the Schwinger technique of a fictitious Hilbert space. The use of a plane wave with Euclidean 4-momenta $k$, $\langle x|k\rangle $, as a basis greatly simplifies the calculations (details, for instance, are given in \cite{Osipov:21c}) and leads to the representation of the functional trace by the integrals over coordinates and 4-momenta
\begin{equation}
\label{logdet-2}
W_E=\! - \!\!\int\!\!\frac{d^4x d^4k}{(2\pi )^4}\, e^{-k^2}\!
\!\! \int\limits^\infty_0\!\!\frac{dt}{2t^3}\,\rho_{t,\Lambda}\,
\mbox{tr}_I\! \left(e^{-t(M^2+A)}\right)\! .
\end{equation}
The self-adjoint operator $A$ is given by
\begin{equation}
\label{A}
A= -d^2 -2ik d / \sqrt{t} +Y,
\end{equation}
where a summation over four-vector indices is implicit.
To advance further in our calculation of (\ref{logdet-2}), we use the Volterra series
\begin{equation}
\label{alg}
e^{-t(M^2+A)}=e^{-tM^2}\!\left[1+\sum_{n=1}^\infty (-1)^n f_n(t,A) \right],
\end{equation}
where the expression in the square brackets is the time-ordered exponential $\mbox{OE}[-A](t)$ of $A(s)= e^{sM^2}\! A\, e^{-sM^2}$, with
\begin{equation}
f_n(t,A)=\!\int\limits_0^t\!\! ds_1\!\!\int\limits_0^{s_1}\!\! ds_2 \ldots \!\!\!\!\int\limits_0^{s_{n-1}}\!\!\!\! ds_n A(s_1) A(s_2) \ldots A(s_n).
\end{equation}
This series generalizes the standard large-mass expansion of the heat kernel to the case of unequal masses. If the masses are equal, this formula yields the well-known large-mass expansion with the standard Seeley-DeWitt coefficients $a_n(x,y)$ \cite{Ball:89}. In fact, formula (\ref{alg}) is an extension of Schwinger's method of isolating the divergent parts of a calculation in proper-time integrals \cite{Schwinger:51,DeWitt:65} to the non-commutative case $[M,A]\neq 0$ (see also \cite{Feynman:51}).
Inserting Eq.\,(\ref{alg}) into (\ref{logdet-2}) and carrying out the integrations over the four-momenta $k_\alpha$ and the proper time $t$, one finds -- after the continuation to Minkowski space -- the one-quark-loop (1QL) contribution to the effective meson Lagrangian in the form of the asymptotic series
\begin{equation}
\label{WE}
\mathcal L_{\mbox{\scriptsize 1QL}} = - \frac{N_c}{32\pi^2} \sum_{n=1}^{\infty}\,\mbox{tr}\,b_n(x,x),
\end{equation}
where coefficients $b_n(x, x)$ depend on the meson fields and quark masses. These coefficients contain the full information about both the effective meson vertices and the corresponding coupling constants. The first two coefficients are \cite{Osipov:21c}
\begin{eqnarray}
\label{cb1}
\mbox{tr}\,b_1&=&\mbox{tr}_{Df} \left[-J_0\circ Y -\frac{1}{4} (\Delta J_0\circ \Gamma_\mu )\Gamma^\mu \right], \\
\label{cb2}
\mbox{tr}\,b_2&=&\mbox{tr}_{Df} \left[\frac{Y}{2} J\!\circ Y-\frac{1}{12}\Gamma^{\mu\nu} (J\circ \Gamma_{\mu\nu})\right] \nonumber\\
&+&\mbox{tr}_D\,\Delta b_2,
\end{eqnarray}
where $\Gamma^{\mu\nu}=\partial^\mu\Gamma^\nu -\partial^\nu\Gamma^\mu - i[\Gamma^\mu , \Gamma^\nu ]$.
For convenience, along with the usual matrix multiplication, we use here the non-standard Hadamard product \cite{Styan:73}, which is the matrix of elementwise products $(A\circ B)_{ij} =A_{ij} B_{ij}$. The Hadamard product is commutative unlike regular matrix multiplication, but the distributive and associative properties are retained. In addition, the following notations are used.
The proper-time integral $J_0$ is considered as a diagonal matrix with elements given by $(J_0 )_{ij} = \delta_{ij} J_0 (M_i)$, where
\begin{equation}
\label{J0}
J_0(M_i)\! =\!\!\int\limits_0^\infty \!\frac{dt}{t^2}\,\rho_{t,\Lambda}\, e^{-tM_i^2}\!=\!
\Lambda^2-M^2_i\ln\left(1+\frac{\Lambda^2}{M^2_i}\right).
\end{equation}
This matrix collects the contributions of one-loop Feynman diagrams known as ``tadpoles''.
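As a quick cross-check, the closed form (\ref{J0}) is easy to verify numerically. The following minimal Python sketch (illustrative only; it assumes standard NumPy/SciPy, the regulator $\rho_{t,\Lambda}$ given above, and a representative mass value) compares the proper-time integral with the quoted expression:
\begin{verbatim}
# Minimal check of Eq. (J0): proper-time integral with the two-subtraction
# regulator rho_{t,Lambda} versus the quoted closed form (GeV units).
import numpy as np
from scipy.integrate import quad

Lam, M = 1.1, 0.274          # cutoff and a representative constituent mass

rho = lambda t: 1.0 - (1.0 + t*Lam**2)*np.exp(-t*Lam**2)

J0_num, _ = quad(lambda t: rho(t)/t**2*np.exp(-t*M**2), 0.0, np.inf)
J0_closed = Lam**2 - M**2*np.log(1.0 + Lam**2/M**2)

print(J0_num, J0_closed)     # both ~0.997 GeV^2
\end{verbatim}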
The other set of proper-time integrals in (\ref{cb2}) is given by the matrix $J$ with elements $J_{ij}=J_1(M_i,M_j)$
\begin{eqnarray}
\label{J-ij}
&&J_{ij}=\frac{1}{\Delta_{ij}}\left[J_0(M_j)-J_0(M_i)\right] \\
&&=\frac{1}{\Delta_{ij}}\left[M_i^2 \ln \left(1+\frac{\Lambda^2}{M_i^2}\right)-M_j^2 \ln \left(1+\frac{\Lambda^2}{M_j^2}\right)\right] \nonumber
\end{eqnarray}
with $\Delta_{ij}=M_i^2-M_j^2$. If the masses are equal, $M_i=M_j$, one obtains the diagonal elements
\begin{equation}
J_{ii}\equiv J_1(M_i) =\ln\left(1+\frac{\Lambda^2}{M^2_i}\right)-\frac{\Lambda^2}{\Lambda^2+M^2_i}.
\end{equation}
It can be seen from this expression that the integral diverges logarithmically as $\Lambda\to\infty$. To stress this, we use the subscript $1$ in labelling such integrals, distinguishing them from the quadratically divergent integrals $J_0$.
The last set of proper-time integrals which we will need in the following is given by the matrix $\Delta J_0$ with elements $(\Delta J_0)_{ij}=\Delta J_0(M_i,M_j)$. Here
\begin{equation}
\label{DJ0ij}
\Delta J_0(M_i,M_j)=2J_0(M_i,M_j)-J_0(M_i)-J_0(M_j)
\end{equation}
and
\begin{eqnarray}
\label{J0ij}
&&\!\!\!\!\!\!\! J_0(M_i,M_j)=\frac{\Lambda^2}{2} +\frac{\Lambda^4}{2\Delta_{ij}} \ln\frac{\Lambda^2+M_i^2}{\Lambda^2+M_j^2} \nonumber \\
&&\!\!\!\!\!\!\!\!\! -\frac{1}{2\Delta_{ij}}\left[M_i^4\ln\left(1+\frac{\Lambda^2}{M_i^2}\right)-M_j^4\ln\left(1+\frac{\Lambda^2}{M_j^2}\right)\right]\!.
\end{eqnarray}
In the coincidence limit $M_i\to M_j$, we have $$\lim_{M_i\to M_j} J_0(M_i,M_j)=J_0(M_i)\,,$$ and therefore $(\Delta J_0)_{ii}=0$. In the case of unequal masses, the difference (\ref{DJ0ij}) is finite (at $\Lambda\to\infty$) and thus gives us an example of a contribution that does not occur in the standard approach to the NJL model.
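These limiting properties are also easy to check numerically. The short Python sketch below (illustrative only; it uses the closed forms (\ref{J-ij}) and (\ref{J0ij}) with sample masses chosen for illustration) confirms that $J_{ij}\to J_1(M_i)$ in the coincidence limit and that $\Delta J_0$ approaches a finite constant as $\Lambda$ grows:
\begin{verbatim}
# Illustrative check: J_ij -> J_1(M_i) in the coincidence limit, and
# Delta J_0 stays finite as Lambda grows (sample masses in GeV).
import numpy as np

def J0(M, L):  return L**2 - M**2*np.log(1 + L**2/M**2)
def J1(M, L):  return np.log(1 + L**2/M**2) - L**2/(L**2 + M**2)
def Jij(Mi, Mj, L):  return (J0(Mj, L) - J0(Mi, L))/(Mi**2 - Mj**2)

def J0ij(Mi, Mj, L):                          # Eq. (J0ij)
    D = Mi**2 - Mj**2
    return (L**2/2 + L**4/(2*D)*np.log((L**2 + Mi**2)/(L**2 + Mj**2))
            - (Mi**4*np.log(1 + L**2/Mi**2)
               - Mj**4*np.log(1 + L**2/Mj**2))/(2*D))

def dJ0(Mi, Mj, L):                           # Eq. (DJ0ij)
    return 2*J0ij(Mi, Mj, L) - J0(Mi, L) - J0(Mj, L)

Mi, Mj = 0.28, 0.57
print(Jij(Mi, Mi*1.000001, 1.1), J1(Mi, 1.1))   # coincidence limit
for L in (1.1, 5.0, 50.0):
    print(L, dJ0(Mi, Mj, L))                    # approaches a constant
\end{verbatim}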
Recall that the standard meson NJL Lagrangian absorbs only the divergent parts of the one-loop quark diagrams \cite{Volkov:86,Ebert:86,Kikkawa:76}. They are represented by the first term in (\ref{cb1}) and the first two terms in (\ref{cb2}). Contrary to the standard approach, the coefficients $b_1$ and $b_2$ additionally contain many (about a hundred) finite contributions of Feynman diagrams, compactly assembled in the second term in (\ref{cb1}) and the third term in (\ref{cb2}). Each of them vanishes in the chiral limit; therefore they break either isotopic or $SU(3)_f$ symmetry. The appearance of these new vertices is understandable: they arise as the finite difference left over when two or more divergent integrals with different masses are subtracted. Both the structure and the coupling constant of any finite vertex are uniquely fixed by the Volterra series. Since we do not need the expression for $\mbox{tr}_D\,\Delta b_2$ for the tasks considered here, we do not give its explicit form but refer the interested reader to \cite{Osipov:21c}, where the corresponding expressions were obtained.
It is easy to understand why there are no finite terms in the standard approach. The reason is contained in the treatment of the heat kernel $\exp[-t(M^2+A)]$. To find the asymptotics of this object, one usually separates a commutative matrix $\mu$: $M^2+A=\mu^2+A+(M^2-\mu^2)$ to factorize it from the exponent. As a result, the expansion of the heat kernel contains only standard Seeley-DeWitt coefficients
\begin{equation}
\label{alg8}
e^{-t(M^2+A)}=e^{-t\mu^2}\sum_{n=0}^\infty t^n a_n(x,x).
\end{equation}
The mass scale $\mu$ is arbitrary. For example, this parameter may be identified with the average constituent quark mass $\mu =\mbox{tr} M/3$ \cite{Ebert:86} or with the constituent quark mass in the chiral limit $\mu =M_0$ \cite{Bijnens:93}. In both cases, the integration over the proper time $t$ in (\ref{alg8}) yields couplings $\propto J_n(\mu )$ which are not sensitive to the flavor content of the quark-loop integrals. The explicit violation of chiral symmetry enters only through the corresponding part of $Y$ and the term $M^2-\mu^2$. This approach leads to a different pattern of flavor symmetry breaking and, moreover, does not contain the finite terms, because usually only the first two field-dependent terms in (\ref{alg8}) ($n=1,2$) are considered.
An attempt made in \cite{Ebert:86} to restore the flavor dependence of the coupling constants by replacing the divergent integrals $J_n(\mu )$ with the expressions following from direct calculations of the corresponding Feynman graphs is mathematically inconsistent, although qualitatively correct. In such an approach it is impossible to trace the pattern of explicit chiral symmetry breaking without distorting it. The Volterra series (\ref{alg}) not only gives a rigorous foundation for the substitutions made in \cite{Ebert:86}, but also associates with them a definite finite part.
So, as a result of the calculations performed, we finally arrive at the effective meson Lagrangian given by
\begin{eqnarray}
\label{EL}
\mathcal L''&\to& \mathcal L_{\mbox{\scriptsize eff}}=\mathcal L_{\mbox{\scriptsize 1QL}} +\frac{1}{4G_V}\mbox{tr}_f(V_\mu^2+A_\mu^2) \nonumber \\
&-&\frac{1}{4G_S} \mbox{tr}_f\! \left[\sigma^2-\{\sigma, M\}+(\sigma -M)\Sigma\right].
\end{eqnarray}
This Lagrangian contains all the information about chiral symmetry breaking, including effects induced by unequal quark masses. In what follows we will be interested only in the part of this Lagrangian that is responsible for the physics of pseudoscalar mesons.
\section{Gap equation and $1/N_c$ expansion}
\label{s3}
First, let us exclude from the effective Lagrangian (\ref{EL}) the term linear in $\sigma$ (the tadpole). The corresponding contributions are contained in $\mathcal L_{\mbox{\scriptsize 1QL}}$ and in the last term in (\ref{EL}). Singling them out, e.g.,
\begin{equation}
\mbox{tr}_{Df}\, (-J_0\circ Y)\to 8\!\!\sum_{i=u,d,s}\!\! J_0(M_i) M_i \sigma_i ,
\end{equation}
we arrive at the Lagrangian
\begin{equation}
\label{tadpoleL}
\mathcal L_\sigma =\sum_{i=u,d,s}\!\! \sigma_i\left[\frac{M_i-m_i}{2G_S}-\frac{N_c}{4\pi^2} M_i J_0(M_i) \right].
\end{equation}
Requiring that the tadpole term vanishes, we obtain a self-consistency equation
\begin{equation}
\label{gapEq}
M_i \left(1-\frac{N_c G_S}{2\pi^2} J_0(M_i)\right)=m_i,
\end{equation}
which relates the light current quark mass $m_i$ to the heavy constituent quark mass $M_i$. This equation can be rewritten in terms of the quark condensate
\begin{equation}
\label{QC}
\langle \bar q\lambda_i q\rangle =-\frac{M_i-m_i}{2G_S}.
\end{equation}
In the strong coupling regime
\begin{equation}
\label{SCR}
G_S\Lambda^2>\frac{2\pi^2}{N_c}=6.58,
\end{equation}
each of the three $(i=u,d,s)$ equations (\ref{gapEq}) has a nontrivial solution, which describes a gap in the fermion spectrum. This solution signals that the ground state becomes superconducting, with a nonzero quark condensate. Knowing that spontaneous breaking of chiral symmetry is present in QCD at large $N_c$ \cite{Hooft:74,Witten:79}, we conclude that $G_SN_c=\mbox{const}$ and $\Lambda\sim {\mathcal O}(1)$ in the large-$N_c$ limit.
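The numbers involved are easy to reproduce; a minimal Python check (illustrative only; the coupling and cutoff values used as an example are those of set $(b)$ of Table~\ref{ParameterSets} below) gives:
\begin{verbatim}
# Check of the strong-coupling threshold (SCR) and of a sample coupling.
import numpy as np
Nc = 3
print(2*np.pi**2/Nc)        # -> 6.58, the threshold quoted in Eq. (SCR)
GS, Lam = 6.6, 1.1          # GeV^-2, GeV (set (b), used here as an example)
print(GS*Lam**2)            # -> 7.99 > 6.58: a nontrivial gap solution exists
\end{verbatim}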
Let us emphasize the difference between the approaches associated with the two alternative assumptions made for the current quark mass counting rule at large $N_c$.
If we assume that $m_i=\mathcal O(1)$, then both sides of the gap equation (\ref{gapEq}) are present at leading order in $1/N_c$. As a consequence, one should look for an exact solution of the gap equation. The result can also be presented as a series in powers of the current quark masses \cite{Osipov:92}. In this case, it is always possible to estimate the accuracy of the expansion used by comparing the truncated result with the exact solution.
The counting rule $m_i=\mathcal O(1/N_c)$ implies that the right-hand side of Eq.\,(\ref{gapEq}) vanishes at leading order (LO) in $1/N_c$: chiral symmetry is then exact, $m_i = 0$, and the masses of all constituent quarks are equal to the same value $M_0$, which is determined by the equation
\begin{equation}
\label{gap2}
1-\frac{N_cG_S}{2\pi^2}J_0(M_0)=0.
\end{equation}
The nontrivial solution of the gap equation, $M_i(m_i)$, must then be represented as a series in powers of the current quark masses. For specific calculations, one keeps only those terms that do not exceed the accuracy of the calculation performed. So, up to and including the next-to-leading-order (NLO) correction, we can write
\begin{equation}
\label{expm}
M_i(m_i)=M_0+M'(0) \, m_i +\mathcal O (m_i^2).
\end{equation}
Here we suppose that the quark condensate, the order parameter of the spontaneously broken symmetry, counts as the leading order in $N_c$. It then follows from (\ref{QC}) that $M_0=\mathcal O(N_c^0)$. As we will see later, the $1/N_c$ correction to the mass formulae of the charged pseudoscalars is small, which speaks in favor of the hypothesis just adopted.
The coefficients of the Taylor expansion (\ref{expm}) can be determined by differentiating Eq. (\ref{gapEq}) under the assumption that $m_i$ are independent variables. For example, at the first step, we have
\begin{equation}
\label{Mprime}
M'(0)=\frac{\pi^2}{N_cG_S M_0^2 J_1(M_0)}\equiv a.
\end{equation}
Since $J_1(M_0)$ is a monotonically decreasing positive definite function of $M_0$ in the region $M_0>0$, we conclude that $a>0$. It follows that the first correction increases the mass of the constituent quark.
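For orientation, $M_0$ and the slope $a$ are easily obtained numerically. The Python sketch below (illustrative only; it takes the set~$(b)$ parameters of Table~\ref{ParameterSets} as input and assumes NumPy/SciPy) solves Eq.\,(\ref{gap2}) and then applies the expansion (\ref{expm}), reproducing within rounding the constituent masses quoted in the table:
\begin{verbatim}
# LO gap equation (gap2) and NLO slope a of Eq. (Mprime); the expansion (expm)
# then gives the constituent masses (GeV units, set (b) values as input).
import numpy as np
from scipy.optimize import brentq

Nc, Lam, GS = 3, 1.1, 6.6

def J0(M): return Lam**2 - M**2*np.log(1 + Lam**2/M**2)
def J1(M): return np.log(1 + Lam**2/M**2) - Lam**2/(Lam**2 + M**2)

M0 = brentq(lambda M: 1 - Nc*GS/(2*np.pi**2)*J0(M), 0.05, 1.0)
a  = np.pi**2/(Nc*GS*M0**2*J1(M0))
print(M0, a)                                  # -> ~0.274 GeV and a ~ 3.5

for m in (0.0026, 0.0046, 0.084):             # m_u, m_d, m_s of set (b)
    print(M0 + a*m)                           # -> ~0.283, 0.290, 0.568 GeV
\end{verbatim}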
It is important to understand that the Lagrangian (\ref{tadpoleL}) and, as a consequence, the expansion (\ref{expm}) may receive additional contributions from one-loop meson diagrams. In this case, $M_0$ obtains a $1/N_c$ correction from a scalar tadpole graph. However, such a correction is avoided if one restricts oneself to the mean-field approximation. A pseudoscalar tadpole, on the other hand, gives a leading (in the chiral limit $m_i\to 0$) nonanalytic contribution only at the higher order $m/N_c \ln m$, and therefore does not affect (\ref{expm}). Thus, in the following we restrict our consideration to the mean-field approximation; in this case the result (\ref{expm}) is not affected by the one-loop meson contributions.
For the same reason, the question of the exact solution of Eq.\,(\ref{gapEq}) does not arise: each expansion step in (\ref{expm}) is associated with the need to take into account more and more complicated contributions of meson loop diagrams.
\section{PA-mixing}
\label{s4}
To address the physical pseudoscalar fields, it is necessary to eliminate the mixing of pseudoscalars with axial-vector fields (PA-mixing), and also to separate the kinetic part of the free Lagrangian of pseudoscalars in $\mathcal L_{\mbox{\scriptsize 1QL}}$.
The first goal is achieved by redefining the axial vector field \cite{Morais:17}
\begin{equation}
\label{PA}
A_\mu=A_\mu'-\kappa_A\circ \xi_\mu^{(-)},
\end{equation}
where the nonet of axial-vector fields is given by
$$
A_\mu =
\left(\begin{array}{ccc}
f_{u\mu} &\sqrt 2 a_{1\mu}^+ & \sqrt 2 K^+_{1A\mu} \\
\sqrt 2 a_{1\mu}^- & f_{d\mu} & \sqrt 2 K^0_{1A\mu} \\
\sqrt 2 K^-_{1A\mu} & \sqrt 2 \bar K^0_{1A\mu} & f_{s\mu}
\end{array}\right).
$$
The nine pseudoscalar fields are collected in a hermitian matrix
$$
\phi = \phi_a \lambda_a =
\left(\begin{array}{ccc}
\phi_u &\sqrt 2 \pi^+ & \sqrt 2 K^+ \\
\sqrt 2 \pi^- & \phi_d & \sqrt 2 K^0 \\
\sqrt 2 K^- & \sqrt 2 \bar K^0 & \phi_s
\end{array}\right),
$$
where the diagonal elements are
\begin{eqnarray}
\label{uds-038}
\phi_u&=&\phi_3+\frac{1}{\sqrt 3}\left(\phi_8 +\sqrt 2\phi_0 \right), \nonumber \\
\phi_d&=&-\phi_3+\frac{1}{\sqrt 3}\left(\phi_8 +\sqrt 2\phi_0 \right), \nonumber \\
\phi_s&=& \frac{1}{\sqrt 3}\left(\sqrt 2\phi_0 -2\phi_8 \right).
\end{eqnarray}
After the replacement (\ref{PA}), the PA-mixing terms contained in $\mathcal L_{\mbox{\scriptsize 1QL}}$ and in the second term of (\ref{EL}) can be canceled by an appropriate choice of the matrix $\kappa_A$. To demonstrate this, it is necessary to consider the following terms of the effective meson Lagrangian
\begin{eqnarray}
\label{pa1}
\mathcal L_{\mbox{\scriptsize 1QL}}^{(b_1)}&\to& \frac{N_c}{32\pi^2}\left\{ (\Delta J_0)_{ud} \left[(1-\kappa_{Aud})\partial_\mu \pi^+ +2a_{1\mu}^{'+}\right] \right. \nonumber \\
&\times & \left[(1-\kappa_{Aud})\partial_\mu \pi^- +2a_{1\mu}^{'-}\right] \nonumber \\
&+&(\Delta J_0)_{us} \left[(1-\kappa_{Aus})\partial_\mu K^+ +2K_{1A\mu}^{'+}\right] \nonumber \\
&\times & \left[(1-\kappa_{Aus})\partial_\mu K^- +2K_{1A\mu}^{'-}\right] \nonumber \\
&+&(\Delta J_0)_{ds} \left[(1-\kappa_{Ads})\partial_\mu K^0 +2K_{1A\mu}^{'0}\right] \nonumber \\
&\times &\left. \left[(1-\kappa_{Ads})\partial_\mu \bar K^0 +2\bar K_{1A\mu}^{'0}\right] \right\},
\end{eqnarray}
where the symbol $(b_1)$ indicates that the considered contribution is due to the coefficient $b_1$.
The next contribution owes its origin to the coefficient $b_2$, namely its part described by the first term of Eq.(\ref{cb2})
\begin{eqnarray}
\label{pa2}
&&\mathcal L_{\mbox{\scriptsize 1QL}}^{(b_2)}\to \frac{N_c}{16\pi^2}\left\{ (M_u+M_d)^2 J_1(M_u,M_d) \right. \nonumber \\
&&\times \left[(1-\kappa_{Aud})\partial_\mu \pi^+ +2a_{1\mu}^{'+}\right] \left[(1-\kappa_{Aud})\partial_\mu \pi^- +2a_{1\mu}^{'-}\right] \nonumber \\
&&+(M_u+M_s)^2 J_1(M_u,M_s) \left[(1-\kappa_{Aus})\partial_\mu K^+ +2K_{1A\mu}^{'+}\right] \nonumber\\
&&\times \left[(1-\kappa_{Aus})\partial_\mu K^- +2K_{1A\mu}^{'-}\right] \nonumber \\
&&+(M_d+M_s)^2 J_1(M_d,M_s) \left[(1-\kappa_{Ads})\partial_\mu K^0 +2K_{1A\mu}^{'0}\right] \nonumber \\
&&\times \left. \left[(1-\kappa_{Ads})\partial_\mu \bar K^0 +2\bar K_{1A\mu}^{'0}\right] \right\}.
\end{eqnarray}
It remains to take into account the last contribution related to the PA-mixing, which arises due to the second term in (\ref{EL}). This contribution is
\begin{eqnarray}
\label{pa3}
&&-\frac{1}{2G_V} \left(\kappa_{Aud} a_{1\mu}^{'-} \partial_\mu \pi^+
+\kappa_{Aus} K_{1A\mu}^{'-} \partial_\mu K^+ \right. \nonumber \\
&& \ \ \ \ \ \ \ \ \ \ \ \left. +\kappa_{Ads} \bar K_{1A\mu}^{'0} \partial_\mu K^0 \right) +h.c.
\end{eqnarray}
Collecting the results (\ref{pa1}), (\ref{pa2}) and (\ref{pa3}), we find that the matrix $\kappa_A$ is symmetric $(\kappa_A)_{ij}=(\kappa_A)_{ji}$, with elements given by
\begin{equation}
\label{kA-1}
\kappa_{Aij}^{-1}=1+\frac{8\pi^2}{N_cG_V[2(M_i+M_j)^2 J_{ij}+\Delta J_{0ij}]}.
\end{equation}
In the chiral limit this result coincides with that of the standard NJL approach, but it differs in the general case. It then follows that $G_V=\mathcal O(1/N_c)$, and, in particular, for Eq.\,(\ref{Mprime}) we find
\begin{equation}
a=\frac{G_V}{G_S}\left(\kappa^{-1}_{A0}-1\right),
\end{equation}
where the index $0$ means that the function of the quark masses $(\kappa^{-1}_{A})_{ij}$ is calculated in the chiral limit $m_i\to 0$, i.e.,
\begin{equation}
\kappa^{-1}_{A0}=1+\frac{\pi^2}{N_cG_VM_0^2J_1(M_0)}=\frac{Z_0}{Z_0-1}.
\end{equation}
The last equality relates $\kappa^{-1}_{A0}$ with the constant $Z$ \cite{Osipov:85b,Volkov:86,Ebert:86} commonly used in the NJL model to get rid of the PA-mixing effect, $Z_0=\lim_{m_i\to 0} Z$.
The first two terms in the expansion of Eq.\,(\ref{kA-1}) in powers of $1/N_c$ are given by
\begin{equation}
\label{kAij-1exp}
\kappa_{Aij}^{-1}=\kappa^{-1}_{A0}\left[1-\frac{m_i+m_j}{2M_0}\left(a-\delta_M\right) \right]+\mathcal O(1/N_c^2),
\end{equation}
where
\begin{equation}
\label{dM}
\delta_M=a\left\{1-2(1-\kappa_{A0})\left[1- \frac{\Lambda^4 J_1(M_0)^{-1}}{(\Lambda^2+M_0^2)^2} \right]\right\}.
\end{equation}
Notice that $\Delta J_{0ij}$ contributes to (\ref{kAij-1exp}) only starting from the $1/N_c^2$ order. As we will show shortly, $\delta_M$ determines the first-order correction to the current algebra result for the masses of the electrically charged and strange pseudoscalars. It is a function of the four parameters $\Lambda$, $G_S$, $G_V$ and $M_0$, which determine the structure of the hadronic vacuum.
\section{Kinetic terms and decay constants}
\label{s5}
Our next task is to obtain the kinetic part of the free Lagrangian of pseudoscalar fields. To do this, we need the already known expressions (\ref{pa1}), (\ref{pa2}) and, in addition, one should write out the corresponding contribution of the second term in (\ref{EL}), that was omitted in (\ref{pa3})
\begin{eqnarray}
&&\frac{1}{4G_V} \left(\kappa_{Aud}^2 \partial_\mu\pi^+\partial_\mu\pi^-
+ \kappa_{Aus}^2 \partial_\mu K^+\partial_\mu K^- \right. \nonumber \\
&&\ \ \ \ \ \ \left. + \kappa_{Ads}^2 \partial_\mu \bar K^0\partial_\mu K^0 \right).
\end{eqnarray}
Collecting all these contributions, one finds, for instance in the case of charged pions, that the kinetic term is given by
\begin{eqnarray}
\mathcal L_{\mbox{\scriptsize kin}}^{\pi^+\pi^-}&=& \partial_\mu\pi^+\partial_\mu\pi^-
\left\{ \frac{\kappa^2_{Aud}}{4G_V} + \frac{N_c}{32\pi^2} (1-\kappa_{Aud})^2\right. \nonumber \\
&\times&\left. [2(M_u+M_d)^2J_1(M_u,M_d)+\Delta J_{0ud} ]\frac{}{}\right\} \nonumber \\
&=& \left( \frac{\kappa_{Aud}}{4G_V} \right)\, \partial_\mu\pi^+\partial_\mu\pi^- .
\end{eqnarray}
To give this expression a standard form, one should introduce the physical pion fields $\pi^\pm_{\mbox{\tiny ph}}$
\begin{equation}
\label{fpi}
\pi^{\pm} =\sqrt{\frac{4G_V}{\kappa_{Aud}}}\, \pi^{\pm} _{\mbox{\tiny ph}}=\frac{1}{f_\pi} \, \pi^{\pm} _{\mbox{\tiny ph}}.
\end{equation}
The dimensionful parameter $f_\pi$ is nothing but the weak decay constant of the charged pion. Similar calculations in the case of kaons give the corresponding weak decay constants
\begin{equation}
\label{fK}
f_{K^\pm}= \sqrt{\frac{\kappa_{Aus}}{4G_V}}, \quad f_{K^0}= \sqrt{\frac{\kappa_{Ads}}{4G_V}}.
\end{equation}
The resulting expressions require a more detailed discussion.
First, they differ from the standard result of the NJL model, where the constant $f_\pi$ is estimated through the quark analog of the Goldberger-Treiman relation. The latter is a result of current algebra. Therefore, it is valid only in the leading order of expansion in current quark masses. It can be easily shown that in the chiral limit the formula (\ref{fpi}) coincides with the result of the standard approach.
Second, using Eq.\,(\ref{kAij-1exp}), one obtains from (\ref{fpi}) and (\ref{fK}) the first order corrections to the current algebra result
\begin{equation}
\label{fij}
f_{ij}=F\left(1+\frac{m_i+m_j}{4M_0}(a-\delta_M )\right),
\end{equation}
where
\begin{equation}
F=\sqrt{\frac{\kappa_{A0}}{4G_V}}=\mathcal O(\sqrt{N_c})
\end{equation}
is the pion weak decay constant in the chiral limit. In particular, it follows then that
\begin{equation}
\label{K/pi}
\frac{f_{K^\pm}}{f_\pi}=1+\frac{m_s-m_d}{4M_0}(a-\delta_M ).
\end{equation}
It is instructive to compare our result (\ref{fij}) with that of $1/N_c\chi$PT. In this approach the corrections to $f_\pi$ and $f_K$ are determined by the constant $K_6=4B_0 L_5^r/F^2$ \cite{Gasser:85}, where the low-energy coupling constant $L_5^r$ is of $\mathcal O(N_c)$, and the constant $B_0=\mathcal O(N_c^0)$ is related to the quark condensate. Such a comparison yields
\begin{equation}
\label{K6}
\frac{a-\delta_M}{4M_0}\leftrightarrow K_6.
\end{equation}
This demonstrates full agreement between the two approaches at this stage.
It is well known that the numerical values of the ratio (\ref{K/pi}) calculated by various groups using the NJL model lie between $1.02$ and $1.08$ \cite{Klevansky:92} and thus underestimate the experimental value $f_K/f_\pi =1.19$. As we show below, formula (\ref{K/pi}) reproduces the experimental value well, as is also the case in $1/N_c\chi$PT.
\section{Mass formulas and current quark masses}
\label{s6}
Let us now establish the mass formulas of the $\pi^\pm$, $K^\pm$, $K^0$ and $\bar K^0$ mesons. To do this, we need the corresponding contribution arising from the last term of the Lagrangian (\ref{EL})
\begin{eqnarray}
\label{psm}
\frac{1}{4G_S}\mbox{tr}_f\, M\Sigma \to &-&\frac{1}{4G_S}\left[ (M_u+M_d)(m_u+m_d)\pi^+\pi^- \right. \nonumber \\
&+&(M_u+M_s)(m_u+m_s)K^+K^- \nonumber \\
&+&\left. (M_d+M_s)(m_d+m_s)\bar K^0K^0 \right].
\end{eqnarray}
Note that the Lagrangian $\mathcal L_{\mbox{\scriptsize 1QL}}$ does not contribute to the pseudoscalar masses.
Now, after the redefinitions of the fields in Eq.\,(\ref{psm}), we finally arrive at the result
\begin{eqnarray}
\mathcal L_{\mbox{\scriptsize mass}}&=&-\frac{G_V}{G_S}
\left[ \frac{1}{\kappa_{Aud}}(M_u+M_d)(m_u+m_d)\pi_{\mbox{\tiny ph}}^+\pi_{\mbox{\tiny ph}}^- \right. \nonumber \\
&+&\frac{1}{\kappa_{Aus}}(M_u+M_s)(m_u+m_s)K_{\mbox{\tiny ph}}^+K_{\mbox{\tiny ph}}^- \nonumber \\
&+&\left.\frac{1}{\kappa_{Ads}} (M_d+M_s)(m_d+m_s)\bar K_{\mbox{\tiny ph}}^0K_{\mbox{\tiny ph}}^0 \right].
\end{eqnarray}
It then follows that the masses are
\begin{eqnarray}
\label{pi}
&&\bar M_{\pi^\pm}^2=\frac{ 1}{4G_S f^2_{ud}} (M_u+M_d)(m_u+m_d), \\
\label{K+}
&&\bar M_{K^\pm}^2=\frac{1}{4G_S f^2_{us}} (M_u+M_s)(m_u+m_s), \\
\label{K0}
&&\bar M_{K^0}^2=\frac{1}{4G_S f^2_{ds}} (M_d+M_s)(m_d+m_s).
\end{eqnarray}
Here and below, the overline indicates that the masses were obtained without taking into account electromagnetic corrections. It should be emphasized that Eqs.\,(\ref{pi})-(\ref{K0}) differ from similar expressions obtained in \cite{Volkov:86,Ebert:86} and in other available works where the NJL model has been used. In our result the sum of the current quark masses is factorized, i.e., the Gell-Mann--Oakes--Renner relation \cite{Oakes:68} is already satisfied at this level. Obviously, all the above NJL-based results coincide in the limit of exact $SU(3)_f$ symmetry. However, when calculating the first $1/N_c$ correction, these approaches lead to different results. As we will now see, Eqs.\,(\ref{pi})-(\ref{K0}) are favored by their agreement with the results of similar calculations in $1/N_c\chi$PT.
Expanding expressions (\ref{pi})-(\ref{K0}) in a $1/N_c$ series, one can not only obtain the known result of current algebra \cite{Weinberg:77}
\begin{eqnarray}
\label{GOR}
\bar\mu^2_{\pi^\pm}&=&B_0 (m_u+m_d), \nonumber \\
\bar\mu^2_{K^\pm}&=&B_0 (m_u+m_s), \nonumber \\
\bar\mu^2_{K^0}&=&B_0 (m_d+m_s),
\end{eqnarray}
where the constant $B_0$ is related to the quark condensate $\langle\bar qq\rangle_0\equiv\langle\bar uu\rangle_0=\langle\bar dd\rangle_0=\langle\bar ss\rangle_0$
\begin{equation}
B_0=\frac{2G_VM_0}{G_S\kappa_{A0}}=\frac{M_0}{2G_SF^2}=-\frac{\langle\bar qq\rangle_0}{F^2},
\end{equation}
but also move further and obtain the first-order correction
\begin{eqnarray}
\label{pi1}
&&\bar m_{\pi^+}^2=\bar\mu^2_{\pi^+}\left(1+\frac{m_u+m_d}{2M_0}\, \delta_M \right), \\
\label{K+1}
&&\bar m_{K^+}^2=\bar\mu^2_{K^+}\left(1+\frac{m_u+m_s}{2M_0} \, \delta_M \right), \\
\label{K01}
&&\bar m_{K^0}^2=\bar\mu^2_{K^0} \left(1+\frac{m_d+m_s}{2M_0} \, \delta_M \right).
\end{eqnarray}
These relations agree with those of $1/N_c\chi$PT \cite{Leutwyler:96a}, i.e., the following correspondence between parameters takes place
\begin{equation}
\label{dmch}
\frac{\delta_M}{2M_0}\leftrightarrow K_3=8 \frac{B_0}{F^2}\left(2L_8^r-L_5^r\right).
\end{equation}
As pointed out in \cite{Kaplan:86}, in chiral perturbation theory $L_8^r$ cannot be determined on purely phenomenological grounds. Treating $L_8^r$ as a free parameter, one may obtain either a positive or a negative sign for the difference $2L_8^r-L_5^r$. In the framework of $1/N_c\chi$PT, Leutwyler managed to establish a generous lower bound for the range in which a truncated $1/N_c$ expansion leads to meaningful results: $2L^r_8-L_5^r>0$. Based on formulas (\ref{K6}) and (\ref{dmch}), it is easy to express these low-energy constants in terms of the NJL model parameters
\begin{equation}
\label{L8}
L_5^r=\frac{(a-\delta_M)G_SF^4}{8M_0^2}, \quad L_8^r=\frac{aG_SF^4}{16M_0^2}.
\end{equation}
As a consequence of (\ref{dmch}), we find the $1/N_c$ correction $\Delta_M$ considered in \cite{Leutwyler:96a}
\begin{equation}
\Delta_M = \frac{8}{F^2}(M_K^2 - M_\pi^2)\left(2L_8^r-L_5^r\right) \leftrightarrow \frac{m_s-\hat m}{2M_0}\,\delta_M,
\end{equation}
where $\hat m=(m_u+m_d)/2$. The value of $\Delta_M$ characterizes the degree of breaking of $SU(3)_f$ symmetry. Although $\Delta_M$ cannot be calculated within the framework of $1/N_c\chi$PT, the estimate $0 < \Delta_M \leq 0.13$ was obtained in \cite{Leutwyler:97} based on additional reasonable considerations.
In the model studied here, the sign of $\delta_M$ coincides with the sign of the ratio $\delta_M/a$ in Eq.\,(\ref{dM}), which, as shown in Fig.\,\ref{fig1}, is a monotonically increasing function of the variable $M_0$ (at $M_0 \geq 0$) and becomes strictly positive beyond a certain value $M_{0\mbox{\tiny min}}$. Thus, any solution of Eq.\,(\ref{gap2}) with $M_0>M_{0\mbox{\tiny min}}$ gives $\delta_M>0$. As we will show later, the value of $\delta_M$ is uniquely determined in the model, but first we should discuss the role of Dashen's theorem in the parameter-fixing procedure.
\begin{figure}
\includegraphics[width=0.45\textwidth]{fig1.pdf}
\caption{The ratio $\delta_M/a$ (see Eq.\,(\ref{dM})) is shown as a function of $M_0$ (in GeV) at fixed values of $\Lambda = 1.1\,\mbox{GeV}$, $G_S = 6.4\,\mbox{GeV}^{-2}$, and $G_V = 3.6\,\mbox{GeV}^{-2}$. For such parameter setting, the point $M_{0\mbox{\tiny min}}=0.244\,\mbox{GeV}$ satisfies both the requirement $\delta_M=0$, and Eq.\,(\ref{gap2}).}
\label{fig1}
\end{figure}
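The statement in the caption of Fig.\,\ref{fig1} can also be verified directly. A short Python sketch (illustrative only; it uses the closed forms quoted above together with the parameter values of the caption) shows that $M_0=0.244\,\mbox{GeV}$ indeed satisfies both $\delta_M=0$ and Eq.\,(\ref{gap2}):
\begin{verbatim}
# Check of the Fig. 1 caption: delta_M = 0 and the LO gap equation at
# M_0 = 0.244 GeV for Lambda = 1.1 GeV, G_S = 6.4 GeV^-2, G_V = 3.6 GeV^-2.
import numpy as np

Nc, Lam, GS, GV, M0 = 3, 1.1, 6.4, 3.6, 0.244

J0 = Lam**2 - M0**2*np.log(1 + Lam**2/M0**2)
J1 = np.log(1 + Lam**2/M0**2) - Lam**2/(Lam**2 + M0**2)

a      = np.pi**2/(Nc*GS*M0**2*J1)                        # Eq. (Mprime)
kA0    = 1.0/(1.0 + np.pi**2/(Nc*GV*M0**2*J1))            # chiral-limit kappa_A
deltaM = a*(1 - 2*(1 - kA0)*(1 - Lam**4/((Lam**2 + M0**2)**2*J1)))  # Eq. (dM)

print(deltaM/a)                       # -> ~0: delta_M vanishes at M_0min
print(1 - Nc*GS/(2*np.pi**2)*J0)      # -> ~0: Eq. (gap2) is satisfied as well
\end{verbatim}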
Let us return to the mass formulas. From (\ref{pi1})-(\ref{K01}) it follows that
\begin{eqnarray}
\label{udFull}
&&\frac{\bar m_{K^+}^2\!-\bar m_{K^0}^2\!+\bar m_{\pi^+}^2}{\bar m_{K^0}^2\!-\bar m_{K^+}^2\!+\bar m_{\pi^+}^2}
= \frac{m_u}{m_d}\! -\! \frac{m_s\delta_M}{2M_0}\left(\!1\!-\!\frac{m_u^2}{m_d^2}\right)\!, \\
\label{sdFull}
&&\frac{\bar m_{K^+}^2\!+\bar m_{K^0}^2\!-\bar m_{\pi^+}^2}{\bar m_{K^0}^2\!-\bar m_{K^+}^2\!+\bar m_{\pi^+}^2}
= \frac{m_s}{m_d}\! +\! \frac{m_u\delta_M}{2M_0}\left(\frac{m_s^2}{m_d^2}\!-\!1\!\right)\!.
\end{eqnarray}
The current algebra result arises from here in the leading order of the chiral expansion,
\begin{eqnarray}
\label{udchex}
&& \frac{m_u}{m_d}=\frac{\bar \mu_{K^+}^2 -\bar \mu_{K^0}^2+\bar \mu_{\pi^+}^2}{\bar \mu_{K^0}^2-\bar \mu_{K^+}^2+\bar \mu_{\pi^+}^2}\equiv R_x, \\
\label{sdchex}
&&\frac{m_s}{m_d}=\frac{\bar \mu_{K^+}^2+\bar \mu_{K^0}^2-\bar \mu_{\pi^+}^2}{\bar \mu_{K^0}^2-\bar \mu_{K^+}^2+\bar \mu_{\pi^+}^2}\equiv R_y.
\end{eqnarray}
Additionally, one may wish to take into account the electromagnetic interaction of charged particles, which increases the masses of these states:
\begin{eqnarray}
\mu_{\pi^+}^2 &=&\bar\mu^2_{\pi^+} +\Delta^2_{el}, \quad \mu^2_{\pi^0}=\bar\mu^2_{\pi^0}=\bar\mu^2_{\pi^+}, \\
\mu_{K^+}^2&=&\bar\mu^2_{K^+} +\tilde\Delta^2_{el}, \quad \mu^2_{K^0}=\bar\mu^2_{K^0}.
\end{eqnarray}
The difference between the masses of the charged and neutral pions $\mu_{\pi^+}>\mu_{\pi^0}$ is due primarily to the electromagnetic interaction. The contribution of the strong interaction is proportional to $(m_d-m_u)^2$ and is thereby negligibly small. Using the Dashen theorem \cite{Dashen:69}
\begin{equation}
\label{Dteor}
\Delta^2_{el}=\tilde\Delta^2_{el},
\end{equation}
which is a strict result of the current algebra, one arrives at the well-known Weinberg ratios \cite{Weinberg:77}
\begin{eqnarray}
\label{ud}
\frac{m_u}{m_d}&=&\frac{2\mu^2_{\pi^0}-\mu^2_{\pi^+}+\mu^2_{K^+}-\mu^2_{K^0}}{\mu^2_{K^0}-\mu^2_{K^+}+\mu^2_{\pi^+}}=0.56, \\
\label{sd}
\frac{m_s}{m_d}&=&\frac{\mu^2_{K^+}+\mu^2_{K^0}-\mu^2_{\pi^+}}{\mu^2_{K^0}-\mu^2_{K^+}+\mu^2_{\pi^+}}=20.18.
\end{eqnarray}
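These numbers follow directly from the physical meson masses; a minimal Python sketch (the PDG values of the masses, in MeV, are an input assumption here) reproduces them:
\begin{verbatim}
# Weinberg ratios (ud) and (sd) from the physical meson masses (PDG, MeV).
mpi0, mpip, mKp, mK0 = 134.98, 139.57, 493.68, 497.61

den = mK0**2 - mKp**2 + mpip**2
print((2*mpi0**2 - mpip**2 + mKp**2 - mK0**2)/den)   # -> ~0.56
print((mKp**2 + mK0**2 - mpip**2)/den)               # -> ~20.2
\end{verbatim}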
Taking into account the first $1/N_c$ correction in (\ref{udFull})-(\ref{sdFull}) and including the electromagnetic corrections, we arrive at the Leutwyler inequalities \cite{Leutwyler:96a}
\begin{eqnarray}
\label{Lud}
\frac{m_u}{m_d}&>&\frac{2m^2_{\pi^0}-m^2_{\pi^+}+m^2_{K^+}-m^2_{K^0}}{m^2_{K^0}-m^2_{K^+}+m^2_{\pi^+}}\equiv R_{xD}, \\
\label{Lsd}
\frac{m_s}{m_d}&<&\frac{m^2_{K^+}+m^2_{K^0}-m^2_{\pi^+}}{m^2_{K^0}-m^2_{K^+}+m^2_{\pi^+}}\equiv R_{yD},
\end{eqnarray}
which are valid at $\delta_M > 0$ (here and below, the subscript $D$ marks expressions that are derived with the Dashen theorem). If $\delta_M < 0$, the reverse inequalities are fulfilled.
The specific case $\delta_M=0$ indicates that the first correction to the result of the current algebra vanishes and the Weinberg ratios are satisfied, which is possible if $M_0 = M_{0\mbox{\tiny min}}$. In this case, at $\Lambda = 1.1\,\mbox{GeV}$, the remaining six parameters $G_S, G_V$, $m_u$, $m_d$, $m_s$ and $\Delta^2_{el}$ can be fixed by the phenomenological values of the masses $m_{\pi^0}$, $m_{\pi^+}$, $m_{K^0}$, $m_{K^+}$, the weak decay constant of the pion $f_\pi = 92.3\pm 0.1\,\mbox{MeV}$, and the requirement that the first-order correction to the current algebra result vanishes, $\delta_M = 0$ (in this case the Dashen theorem is exact). As a result, we obtain $M_0 = M_{0\mbox{\tiny min}} = 236\,\mbox{MeV}$, $G_S = 6.35\,\mbox{GeV}^{-2}$, $G_V = 4.28\,\mbox{GeV}^{-2}$; the magnitude of the quark condensate is $\langle 0|\bar qq|0\rangle_0 = -(265\,\mbox{MeV})^3$; the light quark masses are $m_u = 2.8\,\mbox{MeV}$, $m_d=5.0\,\mbox{MeV}$, and $m_s =101\,\mbox{MeV}$. The parameter characterizing the relative magnitude of isotopic symmetry breaking compared to $SU(3)_f$ symmetry breaking is
\begin{equation}
\label{R}
R=\frac{m_s-\hat m}{m_d-m_u}=44.
\end{equation}
All these results are summarized in Table\,\ref{ParameterSets} (see set $(a)$, for which $\delta_M=0$).
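The internal consistency of this parameter set is easy to verify. The Python sketch below (illustrative only; it takes the set $(a)$ values of Table~\ref{ParameterSets} as input) reproduces $F$, $f_\pi$, $f_K$, the condensate and $R$ from the formulas given above:
\begin{verbatim}
# Consistency check of set (a) (delta_M = 0): reproduce F, f_pi, f_K,
# the quark condensate and R from the quoted formulas (GeV units).
import numpy as np

Nc, Lam, GS, GV, M0 = 3, 1.1, 6.35, 4.28, 0.236
mu_, md_, ms_ = 0.0028, 0.0050, 0.101

J1  = np.log(1 + Lam**2/M0**2) - Lam**2/(Lam**2 + M0**2)
a   = np.pi**2/(Nc*GS*M0**2*J1)                  # Eq. (Mprime)
kA0 = 1.0/(1.0 + np.pi**2/(Nc*GV*M0**2*J1))

F    = np.sqrt(kA0/(4*GV))                       # chiral-limit decay constant
fpi  = F*(1 + (mu_ + md_)/(4*M0)*a)              # Eq. (fij) with delta_M = 0
fK   = F*(1 + (mu_ + ms_)/(4*M0)*a)
cond = (M0/(2*GS))**(1/3)                        # |<qq>_0|^(1/3), Eq. (QC)
R    = (ms_ - (mu_ + md_)/2)/(md_ - mu_)         # Eq. (R)

print(F, fpi, fK)      # -> ~0.089, ~0.092, ~0.131 GeV
print(cond, R)         # -> ~0.265 GeV and ~44
\end{verbatim}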
\begin{table*}
\caption{The six parameters of the model $\Lambda$, $G_{S,V}$, $m_{u,d,s}$ and electromagnetic correction to the masses of charged mesons, $\Delta^2_{el}$, $\tilde\Delta^2_{el}$, are fixed by using the meson masses $m_{\pi^0}$, $m_{\pi^+}$, $m_{K^0}$, $m_{K^+}$, the weak pion decay constant $f_\pi$ and the cutoff $\Lambda$ as an input (input values are marked with an asterisk ($^\ast$)). In the first row of the table, set $(a)$, the condition $\delta_M=0$ is used as the seventh input value. This set describes a hypothetical case of the complete absence of the first $1/N_c$ correction to the mass formulas (\ref{pi1})-(\ref{K01}). Set $(b)$ describes a realistic case when the first correction is nonzero (here $f_K$ and $\eta\to 3\pi$ decay rate are used as additional input values). All units, except $\left[G_{S,V}\right]=\text{GeV}^{-2}$ and dimensionless ratio $R$ (see Eq.\,(\ref{R})), are given in MeV.}
\label{ParameterSets}
\begin{footnotesize}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lrrrrrrrrcrrrrrcr@{}}
\hline
\hline
\multicolumn{1}{c}{Set}
& \multicolumn{1}{c}{$\delta_M$}
& \multicolumn{1}{c}{$\Lambda$}
& \multicolumn{1}{c}{$G_S$}
& \multicolumn{1}{c}{$G_V$}
& \multicolumn{1}{c}{$m_u$}
& \multicolumn{1}{c}{$m_d$}
& \multicolumn{1}{c}{$m_s$}
& \multicolumn{1}{c}{$M_0$}
& \multicolumn{1}{c}{$-\langle\bar qq\rangle^{1/3}_0$}
& \multicolumn{1}{c}{$M_u$}
& \multicolumn{1}{c}{$M_d$}
& \multicolumn{1}{c}{$M_s$}
& \multicolumn{1}{c}{$F$}
& \multicolumn{1}{c}{$f_\pi$}
& \multicolumn{1}{c}{$f_K$}
& \multicolumn{1}{c}{$R$} \\
\hline
$(a)$
& $0^\ast$
& $1.1^\ast$
& $6.35$
& $4.28$
& $2.8$
& $5.0$
& $101$
& $236$
& $265$
& $248$
& $257$
& $668$
& $89.0$
& $92.2^\ast$
& $131$
& $44$\\
$(b)$
& $ 0.67$
& $1.1^\ast$
& $6.6$
& $7.4$
& $2.6$
& $4.6$
& $84$
& $274$
& $275$
& $283$
& $290$
& $567$
& $90.5$
& $92.2^\ast$
& $111^\ast$
& $40$\\
\hline
\hline
\end{tabular*}
\end{footnotesize}
\end{table*}
If the first correction is nonzero and $M_0>M_{0\mbox{\tiny min}}$, i.e., $\delta_M>0$, the value of the quark condensate increases. Consequently, the light quark masses decrease. Therefore, the above estimates for the masses $m_u$, $m_d$, and $m_s$ should be considered as an upper bound of the model.
\section{The Gasser-Leutwyler ellipse and higher order curves}
\label{s7}
Let us return to the analysis of mass formulas (\ref{pi1})-(\ref{K01}) at $\delta_M \neq 0$. A number of sum rules which do not involve $\delta_M$ can be obtained from these formulas.
Gasser and Leutwyler \cite{Gasser:85} considered the simplest case described by a second-order curve in two independent variables $x = m_u/m_d$ and $y = m_s/m_d$. We arrive at this curve using two ratios
\begin{eqnarray}
\label{Ratio1}
&&R_1\!=\!\frac{\bar m_{K^+}^2}{\bar m_{\pi^+}^2}\!=\!\frac{m_u\!+\!m_s}{m_u\!+\!m_d}\left[1\!+\!\frac{m_s\!-\!m_d}{2M_0} \,\delta_M\right]\!, \\
\label{Ratio2}
&&R_2\!=\!\frac{\bar m_{K^0}^2\!-\!\bar m_{K^+}^2}{\bar m_{K^0}^2\!-\!\bar m_{\pi^+}^2}\!=\!\frac{m_d\!-\!m_u}{m_s\!-\!m_u}\left[1\!+\!\frac{m_s\!-\!m_d}{2M_0} \,\delta_M\right]\!,
\end{eqnarray}
from which it follows that
\begin{equation}
\label{Q2}
Q^2\equiv \frac{R_1}{R_2} =\frac{m_s^2-m^2_u}{m_d^2-m_u^2}.
\end{equation}
The right-hand side of this expression depends only on the ratios $x$ and $y$ of the light quark masses: dividing the numerator and denominator by $m_d^2$ gives $Q^2=(y^2-x^2)/(1-x^2)$. The locus of these points is therefore an ellipse
\begin{equation}
\label{ellipse}
y^2-x^2(1-Q^2)=Q^2.
\end{equation}
Note that replacing $\bar m_{K^+}\!\leftrightarrow\! \bar m_{K^0}$ in the left-hand side of equations Eqs.\,(\ref{Ratio1}) and (\ref{Ratio2}) reduces to replacing $m_u\!\leftrightarrow\! m_d$ in the right-hand side, and it would seem that we arrive at a new relation
\begin{equation}
\label{Q22}
\tilde Q^2\equiv \frac{\bar m_{K^0}^2}{\bar m_{\pi^+}^2}\frac{\bar m_{K^+}^2-\bar m_{\pi^+}^2}{\bar m_{K^0}^2-\bar m_{K^+}^2} =\frac{m_s^2-m^2_d}{m_d^2-m_u^2}.
\end{equation}
However, it is easy to see that this equation coincides with (\ref{Q2}) because $Q^2=\tilde Q^2+1$.
Taking into account electromagnetic corrections, according to the Dashen theorem, we obtain an ellipse with a semimajor axis $Q \to Q_D$
\begin{eqnarray}
\label{QR12D}
Q^2\to Q^2_D&=&\frac{(m_{K^0}^2-m_{\pi^0}^2)(m_{K^+}^2-m_{\pi^+}^2+m_{\pi^0}^2)}{m_{\pi^0}^2 (m_{K^0}^2-m_{K^+}^2+m_{\pi^+}^2-m_{\pi^0}^2)}, \nonumber \\
R_1\to R_{1D}&=&1+\frac{m_{K^+}^2-m_{\pi^+}^2}{m_{\pi^0}^2}, \\
R_2\to R_{2D}&=&1-\frac{m_{K^+}^2-m_{\pi^+}^2}{m_{K^0}^2-m_{\pi^0}^2}, \nonumber
\end{eqnarray}
which gives $Q_D = 24.3$, $R_{1D}=13.3$, $R_{2D}=0.0225$ for physical values of the masses. Obviously, the point $(x, y) = (R_{xD}, R_{yD})$ belongs to this ellipse. Below, for the sake of brevity, this point is called the Weinberg point, where $\delta_M = 0$ and, consequently, the Weinberg ratios (\ref{ud}) and (\ref{sd}) are satisfied.
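These values follow from Eq.\,(\ref{QR12D}) with the physical masses; a short Python check (PDG mass values in MeV are an input assumption) gives:
\begin{verbatim}
# Numerical check of Eq. (QR12D) with physical meson masses (PDG, MeV).
mpi0, mpip, mKp, mK0 = 134.98, 139.57, 493.68, 497.61

QD2 = ((mK0**2 - mpi0**2)*(mKp**2 - mpip**2 + mpi0**2)
       / (mpi0**2*(mK0**2 - mKp**2 + mpip**2 - mpi0**2)))
R1D = 1 + (mKp**2 - mpip**2)/mpi0**2
R2D = 1 - (mKp**2 - mpip**2)/(mK0**2 - mpi0**2)

print(QD2**0.5, R1D, R2D)     # -> ~24.3, ~13.3, ~0.0225
\end{verbatim}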
Consider now a set of arbitrary ratios $R_{i}$ ($i =1,2,\ldots$) combined from the meson masses (\ref{pi1})-(\ref{K01})
\begin{equation}
\label{Ri}
R_i=k_i\left(1+l_i\frac{m_d}{2M_0}\delta_M\right),
\end{equation}
where coefficients $k_i$ and $l_i$ are functions of $x$ and $y$. It is clear that by taking two arbitrary elements of the given set, say $R_i$ and $R_j$, we can eliminate the dependence on $m_d\delta_M/2M_0$ and arrive at the equation of the curve in the $(x,y)$ plane
\begin{equation}
\label{gencase1}
k_ik_j(l_i-l_j)=l_ik_iR_j-l_jk_jR_i.
\end{equation}
If $l_i=l_j$, the equation simplifies to $k_iR_j=k_jR_i$. This curve, for $i=1,j=2$, is the ellipse (\ref{ellipse}). If $l_i\neq l_j$, the curve (\ref{gencase1}) is of a higher order, and consequently the allowed values of $x$ and $y$ do not belong to the ellipse. Hence we have a set of alternative sum rules to determine the light quark mass ratios. The ellipse is distinguished by two properties: (i) it survives even after accounting for chiral logarithms; (ii) it has an additional symmetry with respect to the replacement $m_u\!\leftrightarrow\! m_d$. Nevertheless, in the approximation considered here, it is difficult to give preference to any one of the curves. Let us describe the most important property of the family.
For this purpose, note that the family (\ref{gencase1}) is enclosed between two curves. The first one is given by the ratios $R_1$ (see Eq.\,(\ref{Ratio1})) and $R_3$
\begin{eqnarray}
\label{us}
R_3&=&\frac{\bar m^2_{K^+}+\bar m^2_{K^0}-\bar m^2_{\pi^+}}{\bar m^2_{K^+}-\bar m^2_{K^0}+\bar m^2_{\pi^+}}\nonumber \\
&=&\frac{y}{x}\left[1+\left(\frac{y}{x}-\frac{x}{y}\right)\frac{m_d\delta_M}{2M_0}\right].
\end{eqnarray}
In this specific case, Eq.\,(\ref{gencase1}) leads to the elliptic curve
\begin{equation}
\label{lbc}
x(y-1)(y-xR_3)=(x-y)(1+x)R_1+y^2-x^2
\end{equation}
which has two connected components, one of which passes through the Weinberg point and determines the lower bound in Fig.\,\ref{fig2}. To plot the curve (\ref{lbc}) we use the physical values of the meson masses, with electromagnetic corrections taken into account in accord with the Dashen theorem, i.e., $R_1\to R_{1D}$ and
\begin{equation}
R_3\to R_{3D}=\frac{m^2_{K^+}+m^2_{K^0}-m^2_{\pi^+}}{2m^2_{\pi^0}+m^2_{K^+}-m^2_{K^0}-m^2_{\pi^+}}.
\end{equation}
\begin{figure}
\includegraphics[width=0.45\textwidth]{fig2.pdf}
\caption{Ellipse (dashed line) specified by Eq.\,(\ref{ellipse}), where $Q\to Q_D$, and curves given by Eq.\,(\ref{lbc}) (lower solid line) and Eq.\,(\ref{ubc}) (upper solid line) obtained with the Dashen theorem ($R_{1,3,x}\to R_{1D,3D,xD}$). The dot belonging to all three curves is the Weinberg point. The curves intersect at the $SU(3)$ limit point $(x,y)=(1,1)$.}
\label{fig2}
\end{figure}
To establish the upper bound of the family (\ref{gencase1}), we consider the ratios $R_3$ and $R_x$. This leads to the fifth-order curve
\begin{equation}
\label{ubc}
xy(1-x^2)(xR_3-y)=(y^2-x^2)(x-R_x).
\end{equation}
It has three connected components. Fig.\,\ref{fig2} shows the component passing through the Weinberg point. It is also obtained with the Dashen theorem, $R_3\to R_{3D}$ and $R_x \to R_{xD}$, and it lies mostly above the ellipse specified by Eq.\,(\ref{ellipse}). The other curves lie inside the indicated boundaries. The common property of the family is that all of them pass through the Weinberg point. The existence of numerous curves generated by the mass formulas (\ref{pi1})-(\ref{K01}) does not affect the Leutwyler inequalities (\ref{Lud}) and (\ref{Lsd}). The question of which of the curves (sum rules) is more suitable for approximating the final result can be clarified only after the model parameters are fixed.
\section{Numerical estimates}
\label{s8}
Let us fix the six parameters of the model $\Lambda$, $G_S$, $G_V$, $m_u$, $m_d$, $m_s$, and the electromagnetic corrections to the masses of the charged mesons, $\Delta_{el}$ and $\tilde\Delta_{el}$. For a direct comparison with the empirical data and $1/N_c\chi$PT results, we have used the values of the pion and kaon decay constants, $f_\pi\simeq 92\,\mbox{MeV}$, $f_K\simeq 110\,\mbox{MeV}$, and the masses of the pseudoscalar mesons $m_{\pi^0}$, $m_{\pi^+}$, $m_{K^0}$, $m_{K^+}$. Let us recall that $\Delta_{el}^2\simeq m^2_{\pi^+}-m^2_{\pi^0}$. In addition, as noted above, we choose the cutoff $\Lambda$ according to a generally accepted estimate of the scale of spontaneous chiral symmetry breaking, $\Lambda_{\chi SB}\simeq 4\pi f_\pi$ \cite{Georgi:84}. To fix $\tilde\Delta_{el}$ we use (following Leutwyler's analysis \cite{Leutwyler:97}) the $\eta\to 3 \pi$ decay. The latter requires some clarification.
The Dashen theorem is valid only at leading order of the chiral expansion. Since $\delta_M\neq 0$, Eq.\,(\ref{Dteor}) no longer holds, and the inequality $\tilde\Delta_{el}^2\neq \Delta_{el}^2=m_{\pi^+}^2-m_{\pi^0}^2$ should be considered instead. It follows that the ratio (\ref{Q2}) now depends on $\tilde \Delta_{el}^2$
\begin{equation}
Q^2_{\tilde D}=\left(\frac{m_{K^0}^2}{m_{\pi^0}^2}-1\right)\left(\frac{m_{K^0}^2}{m_{K^0}^2-m_{K^+}^2+\tilde\Delta_{el}^2}-1\right)\!.
\end{equation}
In \cite{Leutwyler:97} it was argued that the value of $Q^2_{\tilde D}$ can be extracted accurately from the observed $\eta\to 3 \pi $ decay width. The reason is that electromagnetic contributions to this process are suppressed and, as a consequence, the determination of $Q^2_{\tilde D}$ is less sensitive to the uncertainties therein. The current knowledge based on $\eta\to 3 \pi $ gives the range $Q_{\tilde D} = 22.3 \pm 0.8 $ \cite{Leutwyler:09}, which leads to $\tilde\Delta_{el}=(47.1\mp 4.5)\,\mbox{MeV}$. For comparison, $\Delta_{el} = 35.5\,\mbox{MeV}$. The gray elliptic band in Fig.\,\ref{fig3} corresponds to the range of $\tilde\Delta_{el}$ indicated above. Similar bands are also plotted for the higher-order curves. Obviously, the closer the value of $\tilde\Delta_{el}$ is to $\Delta_{el}$, the closer the curve is to the corresponding one obtained with Dashen's theorem.
The interval we use differs slightly from the recent result $Q_{\tilde D} =22.1\pm 0.7$ \cite{Colangelo:18}. Nonetheless, we prefer to consider a wider region because lattice QCD collaborations report larger values: $Q_{\tilde D}=23.4\pm 0.6$ \cite{Fodor:16} for $N_f=2+1$ and $Q_{\tilde D}=23.8\pm 1.1$ \cite{Giusti:17} for $N_f=2+1+1$ simulations.
The above estimate for $\tilde\Delta_{el}$ means that the mass difference between charged and neutral kaons due to electromagnetic interactions is $(m_{K^+}-m_{K^0})_{el}=(2.2\mp 0.4)\,\mbox{MeV}$. It agrees with the result of lattice QCD calculations $(m_{K^+}-m_{K^0})_{el}=1.9\,\mbox{MeV}$ \cite{Duncan:96}.
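The translation from $Q_{\tilde D}$ to $\tilde\Delta_{el}$, and from there to the electromagnetic kaon mass shift, can be sketched in a few lines of Python (PDG meson masses in MeV are an input assumption; the last printed column uses the approximation $(m_{K^+}-m_{K^0})_{el}\simeq\tilde\Delta^2_{el}/(m_{K^+}+m_{K^0})$, and the output matches the quoted numbers up to rounding of the input masses):
\begin{verbatim}
# Invert Q_{D~} for Delta~_el and estimate the electromagnetic K+ - K0 shift.
import numpy as np

mpi0, mKp, mK0 = 134.98, 493.68, 497.61    # PDG masses, MeV

def Delta_el_tilde(Q):
    A = mK0**2/mpi0**2 - 1
    D2 = mK0**2/(Q**2/A + 1) - (mK0**2 - mKp**2)
    return np.sqrt(D2)

for Q in (21.5, 22.3, 23.1):               # Q_{D~} = 22.3 -/+ 0.8
    D = Delta_el_tilde(Q)
    print(Q, D, D**2/(mKp + mK0))          # -> ~52/47/43 MeV; ~2.7/2.2/1.8 MeV
\end{verbatim}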
The input values give the following estimates for the couplings $G_S=6.6\,\mbox{GeV}^{-2}$ and $G_V=7.4\,\mbox{GeV}^{-2}$. These constants, in particular, describe the theory in the limit $N_c\to\infty$, i.e., when the masses of the current quarks vanish and $f_\pi=f_K=F$. This means that their values should mainly determine some vacuum characteristics. Indeed, after fixing the parameters, we see that
\begin{eqnarray}
\sqrt{\frac{1}{2G_S}}&\simeq& M_0\simeq |\langle \bar qq\rangle_0^{1/3}|, \nonumber \\
\sqrt{\frac{Z_0-1}{4G_VZ_0}}&\simeq&\sqrt{\frac{1}{16G_V}}\simeq F.
\end{eqnarray}
Some of the numerical estimates are given in the second line of Table\,\ref{ParameterSets}. For $Z_0$ we have $Z_0=1.32$; taking into account the $1/N_c$ correction, this gives $Z_\pi =1.34$ and $Z_K=1.51$, which, when averaged, agrees with the estimate $Z=1.4$ of the standard NJL model \cite{Volkov:86}.
The Gell-Mann--Oakes--Renner result for $B_0$ is modified at NLO by a factor equal to the ratio of the squared pseudoscalar meson mass at NLO to its value at LO, i.e.,
\begin{equation}
B_0\to B_P=B_0 \left(\frac{\bar m_P^2}{\bar\mu_P^2}\right),
\end{equation}
where $P=\pi^\pm, K^\pm, K^0$-$\bar K^0$. Numerically the correction is less than 1\% for pions, and around 11\% for kaons. Thus, the correction of $\mathcal O(m^2_i)$ in the $1/N_c$ expansion of the pseudoscalar masses is much less than the LO result. This supports the assumption made that the quark condensate is of the order $N_c$.
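To make these percentages explicit, they can be evaluated with the set $(b)$ parameters of Table~\ref{ParameterSets} (an illustrative Python sketch; the GOR masses and the relative NLO shifts follow from Eqs.\,(\ref{GOR}) and (\ref{pi1})-(\ref{K01})):
\begin{verbatim}
# B_0, the LO (GOR) masses and the relative NLO corrections for set (b).
import numpy as np

GS, M0, F     = 6.6, 0.274, 0.0905            # GeV units, set (b) values
mu_, md_, ms_ = 0.0026, 0.0046, 0.084
deltaM        = 0.67

B0 = M0/(2*GS*F**2)
print(B0)                                      # -> ~2.5 GeV

for mi, mj in [(mu_, md_), (mu_, ms_), (md_, ms_)]:     # pi+, K+, K0
    mu2  = B0*(mi + mj)                        # LO squared mass, Eq. (GOR)
    corr = (mi + mj)/(2*M0)*deltaM             # relative NLO shift
    print(np.sqrt(mu2), corr)                  # ~0.135/0.47/0.47 GeV; ~0.9%/11%/11%
\end{verbatim}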
The mass formulas (\ref{pi1})-(\ref{K01}) allow one to obtain the absolute values of the quark masses, provided one knows the vacuum characteristics encoded in the parameter $B_0$ and in the ratio $\delta_M/M_0$. We also need to take into account the electromagnetic corrections. Given that the difference between the charged and neutral pion masses is mainly of electromagnetic origin, one finds
\begin{eqnarray}
&&\bar m_{\pi^\pm}^2=m_{\pi^\pm}^2-\Delta^2_{el}=m_{\pi^0}^2, \nonumber \\
&&\bar m_{K^\pm}^2=m_{K^\pm}^2-\tilde\Delta_{el}^2, \nonumber \\
&&\bar m_{K^0}^2=m_{K^0}^2,
\end{eqnarray}
where $m_{\pi^0}$, $m_{\pi^\pm}$, $m_{K^\pm}$, and $m_{K^0}$ are the physical masses of the states. We collect our results in the first line of the Table \ref{QMR}. The error bars there indicate the change in values within the range $\tilde\Delta_{el}=(47.1\mp 4.5)\,\mbox{MeV}$. The results of calculations in $1/N_c\chi$PT \cite{Leutwyler:96} (second line) and data quoted by the Particle Data Group (PDG) \cite{PDG:22} (third line) are also given there.
\begin{table*}
\caption{The light quark masses (in MeV) and their ratios obtained in the $1/N_c$ NJL model are compared with the results of $1/N_c\chi$PT and PDG. In the first line, the error bars indicate the change in values within the range $\tilde\Delta_{el}=(47.1\mp 4.5)\,\mbox{MeV}$ correspondingly. }
\label{QMR}
\begin{footnotesize}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lllllllll@{}}
\hline
\hline
\multicolumn{1}{c}{Set}
& \multicolumn{1}{c}{$m_u $}
& \multicolumn{1}{c}{$m_d$}
& \multicolumn{1}{c}{$m_s$}
& \multicolumn{1}{c}{$m_u/m_d$}
& \multicolumn{1}{c}{$m_s/m_d$}
& \multicolumn{1}{c}{$m_s/\hat m$}
& \multicolumn{1}{c}{$m_s/m_u$}
& \multicolumn{1}{c}{$R$} \\
\hline
$1/N_c \mbox{NJL}$
& $2.57\pm 0.07$
& $4.56^{-0.06}_{+0.08}$
& $83.7\pm 0.1$
& $0.564^{-0.025}_{+0.023}$
& $18.3 \mp 0.3$
& $23.46\mp 0.02$
& $32.5\mp 0.9$
& $40.3^{+2.9}_{-2.8}$
\\
$1/N_c \chi\mbox{PT}$ \cite{Leutwyler:96}
&
&
&
& $0.553\pm 0.043$
& $18.9\pm 0.8$
& $24.4\pm 1.5$
& $34.4\pm 3.7$
& $40.8\pm 3.2$
\\
$\mbox{PDG}$ \cite{PDG:22}
& $2.16^{+0.49}_{-0.26}$
& $4.67^{+0.48}_{-0.17}$
& $93.4^{+8.6}_{-3.4}$
& $0.474^{+0.056}_{-0.074}$
& $19.5\pm 2.5$
& $27.33^{+0.67}_{-0.77}$
& $$
& $$
\\
\hline
\hline
\end{tabular*}
\end{footnotesize}
\end{table*}
Here it is appropriate to make a few remarks.
First, we can conclude that the results of the $1/N_c$ NJL model are in remarkable agreement with the estimates made in \cite{Leutwyler:96}. This is also evidenced by the estimates we obtain for a number of parameters of $1/N_c$ chiral perturbation theory. For the problem considered here these are the constants $L_5^r$, $L_8^r$, $\Delta_M$ and $R$. Our estimates are
\begin{equation}
L_5^r= 2.1\cdot 10^{-3}, \quad
L_8^r= 1.3\cdot 10^{-3}, \quad
\Delta_M= 0.10.
\end{equation}
This agrees with the phenomenological estimates of these low-energy coupling constants, $L_5^r= (2.2\pm 0.5) \cdot 10^{-3}$ and $L_8^r= (1.1\pm 0.3)\cdot 10^{-3}$, in \cite{Gasser:85}. The parameter $\Delta_M$ cannot be calculated within $1/N_c\chi$PT. Our result is compatible with the estimate $0 < \Delta_M \leq 0.13$ \cite{Leutwyler:96a}. For the ratio $R$, $1/N_c\chi$PT yields $R=40.8\pm 3.2$; our calculations give $R=40.3^{+2.9}_{-2.8}$. Let us also indicate the independent estimate $R = 41\pm 4$ obtained in \cite{Urech:95}.
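These estimates can be traced back to the model parameters in a few lines. The Python sketch below (illustrative only; it uses the set $(b)$ inputs of Table~\ref{ParameterSets}) evaluates $a$, $\delta_M$, the constants (\ref{L8}), $\Delta_M$ and the ratio (\ref{K/pi}):
\begin{verbatim}
# a, delta_M, L_5^r, L_8^r, Delta_M and f_K/f_pi from the set (b) parameters.
import numpy as np

Nc, Lam, GS, GV, M0, F = 3, 1.1, 6.6, 7.4, 0.274, 0.0905
mu_, md_, ms_ = 0.0026, 0.0046, 0.084

J1     = np.log(1 + Lam**2/M0**2) - Lam**2/(Lam**2 + M0**2)
a      = np.pi**2/(Nc*GS*M0**2*J1)                      # Eq. (Mprime)
kA0    = 1.0/(1.0 + np.pi**2/(Nc*GV*M0**2*J1))
deltaM = a*(1 - 2*(1 - kA0)*(1 - Lam**4/((Lam**2 + M0**2)**2*J1)))  # Eq. (dM)

L5 = (a - deltaM)*GS*F**4/(8*M0**2)                     # Eq. (L8)
L8 = a*GS*F**4/(16*M0**2)
DM = (ms_ - (mu_ + md_)/2)/(2*M0)*deltaM                # Delta_M
fKfpi = 1 + (ms_ - md_)/(4*M0)*(a - deltaM)             # Eq. (K/pi)

print(a, deltaM)    # -> ~3.5 and ~0.67
print(L5, L8, DM)   # -> ~2.1e-3, ~1.3e-3, ~0.10
print(fKfpi)        # -> ~1.20  (cf. f_K/f_pi = 111/92.2 from the table)
\end{verbatim}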
Evaluation of the absolute values of the current quark masses shows that our result differs little from the standard estimates obtained in the NJL model. For comparison, let us point out the paper \cite{Volkov:86}, where it was found that $m_u=m_d=3\,\mbox{MeV}$ and $m_s=80\,\mbox{MeV}$. Recall that the resulting masses only make sense in the limited context of a particular quark model and cannot be related to the quark mass parameters of the Standard Model. However, the quark-mass ratios found here have a broader meaning because they do not depend on the absolute normalization of the quark mass.
The point in Fig.\,\ref{fig3} inside the gray band corresponds to the central values of the ratios $m_u/m_d$ and $m_s/m_d$ given in the $1/N_c$\,NJL row of Table \ref{QMR}.
\begin{figure}
\includegraphics[width=0.45\textwidth]{fig3.pdf}
\caption{The area of admissible values of the ratios $x, y$ is formed by the intersection of three bands corresponding to the interval $\tilde\Delta_{el}=(47.1\mp 4.5)\,\mbox{MeV}$ at the ellipse (gray band) and at curves of higher order (\ref{lbc}) and (\ref{ubc}). The dot belonging to all three bands is the estimate of the NJL model.}
\label{fig3}
\end{figure}
It is also interesting to note that bounds on the two ratios $m_u/m_d$ and $m_s/m_d$ can be established solely from the sum rules (\ref{gencase1}). Since all curves (\ref{gencase1}) intersect at the Weinberg point, considering only the extreme curves automatically cuts out the area belonging to the entire curve family in question. In Fig.\,\ref{fig3} we show two bands obtained on the basis of the extremal curves (\ref{lbc}) and (\ref{ubc}); from their intersection one can deduce that
\begin{equation}
\label{curmassratios-extr}
\frac{m_u}{m_d}=0.50\pm 0.09, \quad \frac{m_s}{m_d}=19.22 \pm 0.62.
\end{equation}
These ratios do not assume any definite value for the parameter $\delta_M$ and can therefore be considered as the maximum range of possible values of $m_u/m_d$ and $m_s/m_d$ that arises in the truncated theory based on formulas (\ref{pi1})-(\ref{K01}). This result agrees well with the values $m_u/m_d=0.474^{+0.056}_{-0.074}$ and $m_s/m_d=19.5\pm 2.5$ quoted by the Particle Data Group \cite{PDG:22}.
A narrower interval is obtained if we additionally require the fulfillment of the Leutwyler inequalities (\ref{Lud}) and (\ref{Lsd}), i.e., $\delta_M>0$. In this case, the result reads
\begin{equation}
\label{curmassratios-extr2}
\frac{m_u}{m_d}=0.53\pm 0.06, \quad \frac{m_s}{m_d}=19.13 \pm 0.53.
\end{equation}
\section{Conclusions}
\label{Conclusions}
Here an attempt is made to implement, within the framework of the NJL model, the well established idea of Leutwyler, according to which the masses of light quarks are $1/N_c$ suppressed. This hypothesis previously turned out to be fruitful in constructing the effective Lagrangian of the $1/N_c\chi$PT. Extending this idea to the NJL model, we conclude that a coherent picture of masses and decay constants of electrically charged and strange pseudoscalars arises as well.
Let us emphasize that in the NJL model, the vertices of the effective meson Lagrangian result from a direct calculation of the one-loop quark diagrams. This procedure needs to account for the effects of explicit chiral symmetry breaking. Many authors have taken steps in this direction. Still, we find that the approach presented above leads to results and to a formulation of the problem that have not yet been reported in the literature (excluding a short letter published recently \cite{Osipov:22}). In particular, employing a recently developed method based on the Fock-Schwinger proper-time asymptotic expansion and the Volterra series, we demonstrate that this tool faithfully reproduces the symmetry-breaking pattern captured by the effective Lagrangian of the $1/N_c\chi$PT.
The mass formulas obtained take into account the first $1/N_c$ correction to the result of current algebra. At this level it is possible to establish a set of mutually exclusive relations that directly relate the masses of $\pi^{\pm}$, $K^\pm$, $K^0$ and $\bar K^0$ mesons to the ratios of quark masses. We show that the $\eta\to 3\pi$ decay data do not allow giving preference to any one of the relations, but single out a physically significant range associated with all of them. The existence of such a range makes it possible to set limits for the light quark mass ratios $0.47<m_u/m_d<0.59$ and $18.60<m_s/m_d<19.66$, if one additionally requires that the Leutwyler inequalities are satisfied.
We have not considered here neutral $\pi^0$, $\eta$ and $\eta'$ states. They are the subject of a separate work, which is being prepared.
\section*{Acknowledgments}
I am grateful to B. Hiller, O. Teryaev and M.\,K. Volkov for their interest in the subject and useful discussions. This work is supported by a grant from Funda\c{c}\~{a}o para a Ci\^{e}ncia e Tecnologia (FCT), Grant No. CERN/FIS-COM/0035/2019.
\section{Introduction}\label{sec:intro}
Pair correlation is one of the most important many-body correlations influencing the ground state
structure and low-energy excitations in nuclei\cite{Brink-Broglia,Broglia-Zelevinsky,Dean-Hjorth-Jensen,BM2,Ring-Schuck}.
It also brings about superfluidity or superconductivity
in infinite nuclear matter and in neutron stars\cite{Broglia-Zelevinsky,Dean-Hjorth-Jensen,Takatsuka-Tamagaki,Sedrakian-Clark}.
It originates from
attractive interactions between
like nucleons (or unlike nucleons), depending on the quantum numbers of the
interacting pair, e.g.\ $^{1}S_0$, $^{3}S_1$, $^{3}P_2$, etc. It leads to condensation of
the correlated pairs, the Cooper pairs.
One of the most important aspects of the superfluidity/superconductivity in Fermion systems
is the presence of the pair gap $\Delta$, which originates from the
condensation of the Cooper pairs\cite{BCS}. The pair gap or the pair condensate is the
order parameter of the superfluidity/superconductivity, and it also plays the role of a dynamical variable in the
phenomenological Ginzburg-Landau theory\cite{Ginzburg-Landau}. Another important feature is
that the finite pair gap breaks the U(1) gauge symmetry. The spontaneous symmetry breaking
produces characteristic collective modes of excitations, known as the
Anderson-Bogoliubov (Nambu-Goldstone) mode\cite{Anderson1958,Bogoliubov1958,Nambu1960} and
the Higgs mode\cite{Anderson1958,Higgs1964,Littlewood-Varma1981-82,Pekker-Varma,Shimano-Tsuji}, which are
oscillations of the order parameter with respect to the phase and the amplitude,
respectively. The Anderson-Bogoliubov mode is a massless phonon with a linear dispersion relation
whereas the Higgs mode is massive, i.e. the lowest excitation energy is $2\Delta$\cite{Anderson1958,Higgs1964,Littlewood-Varma1981-82,Pekker-Varma,Shimano-Tsuji}.
An analogy to the superfluidity/superconductivity
has been brought into finite nuclei
by applying the BCS theory to the nuclear Hamiltonian\cite{Bohr-Mottelson-Pines,Belyaev1959,BM2,Ring-Schuck}.
In this description, nuclei may be
in a superconducting/superfluid or a normal phase depending on whether they are closed-
or open-shell nuclei, and also on other conditions such as the rotational frequency\cite{Brink-Broglia,Broglia-Zelevinsky}.
The odd-even mass difference is a typical indicator of the nuclear superfluid phase\cite{BM2,Ring-Schuck}
since it corresponds to the pair gap in the single-particle excitation spectrum.
The two-nucleon transfer (pair transfer) reaction has been considered as a probe of the pair correlation
in nuclei\cite{Yoshida62, Bes-Broglia66, Broglia73, Brink-Broglia, BM2, Oertzen2001,Potel2013}.
A typical example is provided by the $(p,t)$ and $(t,p)$ reactions on the even-even Sn isotopes,
which show enhanced cross sections for
transitions between the 0$^{+}$ ground states of adjacent isotopes\cite{Yoshida62,Bes-Broglia66,Broglia73,Brink-Broglia}.
In analogy with
the rotational motion of deformed nuclei, the concept of
pair rotation has been introduced\cite{Bes-Broglia66,Broglia73,Brink-Broglia}, as a nuclear counterpart of the
Anderson-Bogoliubov mode\cite{Anderson1958,Bogoliubov1958}.
It has been argued that the pair rotation is a phase rotation of the gap, and is
characterized by a rotational band,
a series of 0$^{+}$ ground states along the isotope chain. The
transition strength of the pair transfer within the rotational band is
related to the order parameter, the pair gap.
Another collective mode associated with the pair correlation is
the pair vibration\cite{Bes-Broglia66,Broglia73,Brink-Broglia},
which is introduced in analogy to the shape vibration modes in nuclei.
It is a collective vibrational state with spin parity 0$^{+}$ and excitation energy around
$2\Delta$. It is regarded as an
oscillation of the amplitude of the pair gap $\Delta$.
The lowest-lying excited 0$^{+}$ states populated
by the $(p,t)$ and $(t,p)$ reactions or other two-neutron transfer reactions are identified as
the pair vibrational states. Although not stated explicitly in the preceding works,
the low-lying pair vibrational mode might be regarded as a counterpart of the Higgs mode
in superfluids/superconductors.
We remark here that microscopic theories such as the
quasiparticle random phase approximation also predict another pair vibrational mode
with high excitation energy, often called the giant pair vibration (GPV) \cite{Broglia1977,Cappuzzello2015,Cavallaro2019,Assie2019},
which arises from the shell structure inherent to finite nuclei.
It is therefore a non-trivial issue how the pair vibrations in nuclei, either low-lying or high-lying,
or both, are related
to the macroscopic picture of the Higgs mode. Recently an observation of
the giant pair vibrations in a two-neutron transfer reaction populating $^{14}$C and $^{15}$C
has been reported\cite{Cappuzzello2015,Cavallaro2019}. Identification in medium and heavy nuclei
is an open question\cite{Assie2019} and further experimental study is awaited.
In the present study, we intend to describe the pair vibrations, including both low-lying and
high-lying ones, from the viewpoint of the Higgs mode. We shall then argue that this viewpoint
provides a new perspective on the nature of the pair correlation in nuclei.
Our study proceeds in two steps. Firstly, we
introduce a new scheme of exploring a
response that probes the Higgs mode in a pair correlated nucleus.
One of the commonly adopted methods to describe the pairing collective modes, including the
pair vibrations, is to consider the addition or removal of a nucleon pair, i.e.\ to
describe transition strengths for pair-addition and pair-removal operators.
In the present work, we propose a variant of
these operators, which is more suitable for probing the Higgs mode in finite nuclei. We then formulate
the strength function for this new two-nucleon transfer operator, named the Higgs
strength function, and characterize the
pair vibrations, both low-lying and high-lying, in terms of it.
The formulation is based on the Skyrme-Hartree-Fock-Bogoliubov mean-field model and the
quasiparticle random phase approximation, which are widely used to describe the
ground state and collective excitations, including pair vibrations,
in medium and heavy nuclei\cite{Ring-Schuck,Nakatsukasa-etal2016,Paar-etal2007}.
Secondly, we shall argue that the Higgs strength function carries important
information on the pair correlation in nuclei, in particular, the pair condensation energy or the effective
potential energy, which is the
energy gain obtained by the condensation of Cooper pairs.
Similarly to the Ginzburg-Landau theory of superconducting systems, an effective potential energy as a function
of the order parameter, i.e., the pair condensate, can be considered also in finite nuclei, and it
is straightforwardly evaluated in the framework of the Hartree-Fock-Bogoliubov mean-field model.
On the other hand, the Higgs strength function probes the small-amplitude oscillation of the order parameter
and one can expect that it can be related to the curvature of the effective potential. Motivated by this idea,
we examine in detail the effective potential energy obtained with the present Skyrme-Hartree-Fock-Bogoliubov model.
As shown later, the effective potential has a rather simple structure, parametrized by a fourth-order
polynomial in the pair condensate. Combining these observations, we propose a novel method with which one can
evaluate the pair condensation energy using the Higgs strength function. We shall discuss the validity of this
method with numerical calculations performed for neutron pairing in Sn isotopes.
\section{Theoretical framework}
As a theoretical framework for our discussion, we adopt the
Hartree-Fock-Bogoliubov (HFB) theory in which the pair correlation
is described by means of Bogoliubov quasiparticles and the
self-consistent pair field\cite{Ring-Schuck}. It has the same structure as that of
the Kohn-Sham density functional theory for superconducting electron systems
with an extension based on the Bogoliubov-de Gennes scheme\cite{Nakatsukasa-etal2016}.
The adopted model is essentially the same as the one employed in our previous studies\cite{Serizawa2009,Matsuo2010,Shimoyama2011,Shimoyama2013}.
We here describe an outline of the model with emphasis on some aspects which are
necessary in the following discussion.
In the HFB framework, the superfluid/superconducting state is described with a
generalized Slater determinant $\ket{\Psi}$, which is a determinantal state
of Bogoliubov's quasiparticles. The total energy of the system for $\ket{\Psi}$
is a functional of one-body densities of various types, and we adopt
the Skyrme functional $\mathcal{E}_{Skyrme}$
combined with the pairing energy $\mathcal{E}_{pair}$.
Assuming that the
pairing energy originates from a contact two-body interaction, $\mathcal{E}_{pair}[\rho(\mbox{\boldmath $r$}),\tilde{\rho}(\mbox{\boldmath $r$}),\tilde{\rho}^*(\mbox{\boldmath $r$})]$
is a functional of the local pair condensate (also called the pair density in the literature)
\begin{align}
\tilde{\rho}(\mbox{\boldmath $r$}) &= \braket{\Psi}{\hat{P}(\mbox{\boldmath $r$})}{\Psi}
= \bra{\Psi} \psi(\mbox{\boldmath $r$} \uparrow)\psi(\mbox{\boldmath $r$} \downarrow)\ket{\Psi},
\ \ \ &
\hat{P}(\mbox{\boldmath $r$})=\frac{1}{2}\sum_\sigma \psi(\mbox{\boldmath $r$}\tilde{\sigma})\psi(\mbox{\boldmath $r$}\sigma), \\
\tilde{\rho}^*(\mbox{\boldmath $r$}) &= \braket{\Psi}{\hat{P}^\dagger(\mbox{\boldmath $r$})}{\Psi}
= \bra{\Psi} \psi^\dagger(\mbox{\boldmath $r$} \downarrow)\psi^\dagger(\mbox{\boldmath $r$} \uparrow)\ket{\Psi},
\ \ \ &
\hat{P}^\dagger(\mbox{\boldmath $r$})=\frac{1}{2}\sum_\sigma \psi^\dagger(\mbox{\boldmath $r$}\sigma) \psi^\dagger(\mbox{\boldmath $r$}\tilde{\sigma}),
\end{align}
and local one-body density
\begin{align}
\rho(\mbox{\boldmath $r$}) &= \braket{\Psi}{\hat{\rho}(\mbox{\boldmath $r$})}{\Psi}, \ \ \ & \hat{\rho}(\mbox{\boldmath $r$})=\sum_\sigma \psi^\dagger(\mbox{\boldmath $r$}\sigma)\psi(\mbox{\boldmath $r$}\sigma)
\end{align}
where
$\psi^\dagger(\mbox{\boldmath $r$}\sigma)$ and $\psi(\mbox{\boldmath $r$}\sigma)$ are nucleon creation and annihilation operators
with the coordinate and the spin variables. Note that the local pair condensates $\tilde{\rho}(\mbox{\boldmath $r$})$ and $\tilde{\rho}^*(\mbox{\boldmath $r$})$
are finite
if the condensation of Cooper pairs occurs.
Here and hereafter we do not write the isospin index for simplicity. Notation follows Ref.\cite{Matsuo2001}.
\noindent
\underline{Stationary ground state}
The variational principle $\delta \mathcal{E}=\delta \mathcal{E}_{Skyrme} + \delta \mathcal{E}_{pair}=0$ leads to the
so-called Hartree-Fock-Bogoliubov equation or the Bogoliubov-de Gennes equation
\begin{equation}\label{HFB}
\left(
\begin{array}{cc}
\hat{t}+\Gamma(\mbox{\boldmath $r$})-\lambda & \Delta(\mbox{\boldmath $r$}) \\
\Delta^*(\mbox{\boldmath $r$}) & -(\hat{t}+\Gamma(\mbox{\boldmath $r$})-\lambda)
\end{array}
\right)
\left(
\begin{array}{c}
\varphi_{1,i}(\mbox{\boldmath $r$}\sigma) \\
\varphi_{2,i}(\mbox{\boldmath $r$}\sigma)
\end{array}
\right)
=E_i
\left(
\begin{array}{c}
\varphi_{1,i}(\mbox{\boldmath $r$}\sigma) \\
\varphi_{2,i}(\mbox{\boldmath $r$}\sigma)
\end{array}
\right),
\end{equation}
which determines energy $E_i$ and two-component wave function $\phi_i(\mbox{\boldmath $r$}\sigma)=(\varphi_{1,i}(\mbox{\boldmath $r$}\sigma),\varphi_{2,i}(\mbox{\boldmath $r$}\sigma))^T$ of the quasiparticles.
Here the Hartree-Fock potential $\Gamma(\mbox{\boldmath $r$})$ and the pair potential or the position-dependent pair gap
$\Delta(\mbox{\boldmath $r$})=\partial \mathcal{E}_{pair} /\partial \tilde{\rho}^*(\mbox{\boldmath $r$})$ are also functionals of the one-body densities including
$\tilde{\rho}(\mbox{\boldmath $r$})$ and $\tilde{\rho}^*(\mbox{\boldmath $r$})$.
The one-body
density and the pair density are evaluated as sums over the quasiparticle wave functions:
\begin{align}
\rho(\mbox{\boldmath $r$})&=\sum_i \sum_\sigma |\varphi_{2,i}(\mbox{\boldmath $r$}\sigma) |^2, \\
\tilde{\rho}(\mbox{\boldmath $r$})&=-\frac{1}{2}\sum_i \sum_\sigma \varphi_{2,i}^*(\mbox{\boldmath $r$}\sigma)\varphi_{1,i}(\mbox{\boldmath $r$}\sigma).
\end{align}
The HFB ground state $\ket{\Psi_0}$ is obtained by solving the HFB equation with an iterative procedure.
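For illustration, the self-consistent solution of Eq.(\ref{HFB}) can be sketched schematically as follows. The Python fragment below is only a minimal sketch of the iteration logic, in which \texttt{build\_hfb\_matrix} and \texttt{compute\_densities} are hypothetical placeholders standing in for the radial HFB solver described above.
\begin{verbatim}
import numpy as np

def hfb_selfconsistent_loop(rho, rho_tilde, build_hfb_matrix,
                            compute_densities, max_iter=200,
                            tol=1e-6, mixing=0.5):
    """Schematic HFB self-consistency loop (illustrative sketch only)."""
    for it in range(max_iter):
        # Gamma(r) and Delta(r) are rebuilt from the current densities
        H = build_hfb_matrix(rho, rho_tilde)
        E, W = np.linalg.eigh(H)             # quasiparticle energies/amplitudes
        rho_new, rho_tilde_new = compute_densities(E, W)
        change = (np.max(np.abs(rho_new - rho))
                  + np.max(np.abs(rho_tilde_new - rho_tilde)))
        # simple linear mixing to stabilize the iteration
        rho = mixing * rho_new + (1.0 - mixing) * rho
        rho_tilde = mixing * rho_tilde_new + (1.0 - mixing) * rho_tilde
        if change < tol:
            break
    return rho, rho_tilde
\end{verbatim}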
\noindent
\underline{U(1) gauge symmetry and its violation}
\begin{figure}[h]
\centering
\begin{minipage}{\columnwidth}
\includegraphics[width=0.52\columnwidth]{fig1.pdf}
\end{minipage}
\caption{Schematic figure of the effective potential $U(p)=\mathcal{E}(p)-\mathcal{E}(0)$
as a function of the complex order parameter $p$, which
represents the pair condensate or the pair gap.
}
\label{fig_mexican}
\end{figure}
For the sake of the discussions below, we briefly recapitulate the U(1) gauge symmetry in
the Hartree-Fock-Bogoliubov (HFB) model. Consider the global U(1) gauge transformation
$\ket{\Psi} \rightarrow e^{i\theta \hat{N}}\ket{\Psi} $ with $\hat{N}$ being the nucleon
number operator $\hat{N}=\int d\mbox{\boldmath $r$} \sum_{\sigma} \psi^\dagger(\mbox{\boldmath $r$}\sigma)\psi(\mbox{\boldmath $r$}\sigma)$.
The energy functional or the total Hamiltonian is symmetric
with respect to the
U(1) gauge transformation. However,
the HFB ground state $\ket{\Psi_0}$ violates
spontaneously this symmetry as
$e^{i\theta \hat{N}}\ket{\Psi_0} $ differs from $\ket{\Psi_0}$. Correspondingly
the pair condensate and the pair field
are modulated in their phases as
\begin{align} \label{gauge-trans}
\tilde{\rho}(\mbox{\boldmath $r$})\rightarrow e^{2i\theta}\tilde{\rho}(\mbox{\boldmath $r$}) , \ \ \ \ \
\Delta(\mbox{\boldmath $r$}) \rightarrow e^{2i\theta} \Delta(\mbox{\boldmath $r$}).
\end{align}
All the states $e^{i\theta \hat{N}}\ket{\Psi_0} $ transformed from one realization of the HFB ground state
$\ket{\Psi_0} $
are also degenerate HFB ground states. This can be seen from the fact that the above equations are invariant
under
$\varphi_{1,i}(\mbox{\boldmath $r$}\sigma) \rightarrow e^{i\theta}\varphi_{1,i}(\mbox{\boldmath $r$}\sigma)$, $\varphi_{2,i}(\mbox{\boldmath $r$}\sigma) \rightarrow e^{-i\theta}\varphi_{2,i}(\mbox{\boldmath $r$}\sigma)$ together with Eq.(\ref{gauge-trans}).
In the following we choose one of the HFB ground states in which
the pair condensate $\tilde{\rho}(\mbox{\boldmath $r$})$ is real. In fact, the pair density and the pair potential
are assumed to be real in our numerical code for the HFB ground state.
The situation is illustrated schematically in Fig.\ref{fig_mexican}. Here the energy gain or the
effective potential $U(p)=\mathcal{E}(p)-\mathcal{E}(0)$ is shown
as a function of an ``order parameter'' $p$, which could be an average
value of the pair condensate $\tilde{\rho}(\mbox{\boldmath $r$})$ or the pair potential $\Delta(\mbox{\boldmath $r$})$
(Details will be specified later). The order parameter $p$ is a complex variable
and it transforms as
$p \rightarrow e^{2i\theta}p$ under the U(1) gauge transformation. The origin
$p=0$ corresponds to the Hartree-Fock ground state where the pair condensate is
absent, and $p=p_0$ corresponds to the HFB ground state.
The difference $U_{\mathrm{cond}}=\mathcal{E}(p_0)-\mathcal{E}(0)$ is the energy gain associated with the
condensation of the Cooper pairs.
\noindent
\underline{Excitation modes}
We describe excitation modes built on top of the HFB ground state $\ket{\Psi_0}$ by means of the
quasiparticle random-phase approximation (QRPA). The QRPA is equivalent to describing
the linear response under an external perturbation in the framework of the
time-dependent HFB theory\cite{Matsuo2001,Nakatsukasa-etal2016}.
We consider a time-dependent perturbation $\mu e^{-i\omega t}\hat{V}_{\mathrm{ext}}$ where a perturbing field
$\hat{V}_{\mathrm{ext}}$
belongs to a class of generalized one-body operators
expressed in terms of
the local density $\hat{\rho}(\mbox{\boldmath $r$})$
and the pair densities $\hat{P}(\mbox{\boldmath $r$})$ and $\hat{P}^\dagger(\mbox{\boldmath $r$})$:
\begin{equation} \label{ext-field}
\hat{V}_{\mathrm{ext}}=\int d\mbox{\boldmath $r$} \left\{
v_{0}(\mbox{\boldmath $r$})\hat{\rho}(\mbox{\boldmath $r$}) +v_{r}(\mbox{\boldmath $r$})\hat{P}(\mbox{\boldmath $r$}) +v_{a}(\mbox{\boldmath $r$})\hat{P}^\dagger(\mbox{\boldmath $r$})
\right\}.
\end{equation}
The perturbation causes time-dependent fluctuations
$\delta\rho(\mbox{\boldmath $r$},\omega)$, $\delta\tilde{\rho}(\mbox{\boldmath $r$},\omega)$ and $\delta\tilde{\rho}^\star(\mbox{\boldmath $r$},\omega)$
(expressed in the frequency domain) in the three densities $\rho(\mbox{\boldmath $r$})$, $\tilde{\rho}(\mbox{\boldmath $r$})$ and $\tilde{\rho}^*(\mbox{\boldmath $r$})$.
It induces also fluctuations in the self-consistent field
\begin{equation} \label{sc-field}
\delta \hat{V}_{sc}(\omega) = \int d\mbox{\boldmath $r$} \left\{
\delta\Gamma(\mbox{\boldmath $r$},\omega)\hat{\rho}(\mbox{\boldmath $r$}) +\delta\Delta^*(\mbox{\boldmath $r$},\omega)\hat{P}(\mbox{\boldmath $r$}) +\delta\Delta(\mbox{\boldmath $r$},\omega)\hat{P}^\dagger(\mbox{\boldmath $r$})
\right\}
\end{equation}
through the densities.
Here we assume that fluctuations in the Hartree-Fock potential
and the pair potential, $\delta\Gamma$ and $\delta\Delta$,
are proportional to the density
fluctuations $\delta\rho(\mbox{\boldmath $r$},\omega)$, $\delta\tilde{\rho}(\mbox{\boldmath $r$},\omega)$ and $\delta\tilde{\rho}^\star(\mbox{\boldmath $r$},\omega)$.
Then
the linear response of the system is governed by
\begin{equation} \label{rpa}
\left(
\begin{array}{c}
\delta\rho(\mbox{\boldmath $r$},\omega) \\
\delta\tilde{\rho}(\mbox{\boldmath $r$},\omega) \\
\delta\tilde{\rho}^\star(\mbox{\boldmath $r$},\omega)
\end{array}
\right)
=\int_0 d\mbox{\boldmath $r$}'
\left(
\begin{array}{ccc}
& & \\
& R_{0}^{\alpha\beta}(\mbox{\boldmath $r$},\mbox{\boldmath $r$}',\omega)& \\
& &
\end{array}
\right)
\left(
\begin{array}{l}
\frac{\delta \Gamma}{\delta\rho}
\delta\rho(\mbox{\boldmath $r$}',\omega)+ v_{0}(\mbox{\boldmath $r$}') \\
\frac{\delta\Delta^*}{\delta\tilde{\rho}^*} \delta\tilde{\rho}^\star(\mbox{\boldmath $r$}',\omega) + v_{r}(\mbox{\boldmath $r$}') \\
\frac{\delta\Delta}{\delta\tilde{\rho}} \delta\tilde{\rho}(\mbox{\boldmath $r$}',\omega)+ v_{a}(\mbox{\boldmath $r$}')
\end{array}
\right).
\end{equation}
Here $R_{0}^{\alpha\beta}(\mbox{\boldmath $r$},\mbox{\boldmath $r$}',\omega)$ is the unperturbed density response function
\begin{equation} \label{resp-fn}
R_{0}^{\alpha\beta}(\mbox{\boldmath $r$},\mbox{\boldmath $r$}',\omega)=\frac{1}{2}\sum_{ij}\left\{
\frac{\braket{0}{\hat{\rho}_\alpha(\mbox{\boldmath $r$})}{ij}\braket{ij}{\hat{\rho}_\beta(\mbox{\boldmath $r$}')}{0}}
{\hbar\omega+i\epsilon-E_i-E_j}
-
\frac{\braket{0}{\hat{\rho}_\beta(\mbox{\boldmath $r$}')}{ij}\braket{ij}{\hat{\rho}_\alpha(\mbox{\boldmath $r$})}{0}}
{\hbar\omega+i\epsilon+E_i+E_j}
\right\}
\end{equation}
with $E_i$ in the denominator being the excitation energy of the quasiparticle state $i$.
The numerators on the r.h.s. are matrix elements
of the density operators
$\hat{\rho}_\alpha(\mbox{\boldmath $r$})=\hat{\rho}(\mbox{\boldmath $r$}),\hat{P}(\mbox{\boldmath $r$}),\hat{P}^\dagger(\mbox{\boldmath $r$})$
between the HFB ground state $\ket{0}=\ket{\Psi_0}$ and
two-quasiparticle configurations $\ket{ij}=\beta_i^\dagger\beta_j^\dagger\ket{\Psi_0}$.
Here we use the spectral representation of the unperturbed
density response function expressed with a discretized spectrum even for unbound quasiparticle
states. The continuum nature of the unbound quasiparticle states can be taken into account
explicitly using the continuum QRPA formalism\cite{Matsuo2001}.
See the Appendix for the two-quasiparticle matrix elements and the continuum QRPA.
The excited states $\ket{\nu}$ of the system appear as poles of the density fluctuations as a
function of the excitation energy $\hbar\omega$. The transition matrix elements for the perturbing field are
obtained through the strength function
\begin{equation}
S(\hbar\omega)\equiv \sum_\nu \left| \braket{\nu}{\hat{V}_{\mathrm{ext}}}{0} \right|^2 \delta(\hbar\omega - E_\nu),
\end{equation}
which can be calculated in terms of the solution of the linear response equation:
\begin{align}
S(\hbar\omega) & = -\frac{1}{\pi} \mathrm{Im} \langle \hat{V}_{\mathrm{ext}}^\dagger \rangle(\omega) \nonumber \\
& = -\frac{1}{\pi} \mathrm{Im}
\int d\mbox{\boldmath $r$} \left\{ v_{0}^*(\mbox{\boldmath $r$})\delta\rho(\mbox{\boldmath $r$},\omega)
+v_{a}^*(\mbox{\boldmath $r$})\delta\tilde{\rho}(\mbox{\boldmath $r$},\omega)+v_{r}^*(\mbox{\boldmath $r$})\delta\tilde{\rho}^{\star}(\mbox{\boldmath $r$},\omega) \right\}.
\end{align}
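In a numerical implementation with a discretized quasiparticle spectrum, the delta functions in the strength function are replaced by Lorentzians of width $\epsilon$. The following Python sketch, with illustrative array names, only shows this smoothing step; it is not the continuum-QRPA evaluation used in parts of this work.
\begin{verbatim}
import numpy as np

def strength_function(E_grid, E_nu, m_nu, eps=0.1):
    """S(E) = sum_nu |<nu|V_ext|0>|^2 * delta(E - E_nu), with the delta
    function smoothed into a Lorentzian of width eps (in MeV)."""
    E_grid = np.asarray(E_grid, dtype=float)[:, None]
    E_nu = np.asarray(E_nu, dtype=float)
    weights = np.abs(np.asarray(m_nu)) ** 2          # |<nu|V_ext|0>|^2
    lorentz = (eps / np.pi) / ((E_grid - E_nu) ** 2 + eps ** 2)
    return lorentz @ weights                          # S(E) on E_grid
\end{verbatim}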
\noindent
\underline{Model parameters and numerical details}
We discuss the pairing correlation of neutrons in semi-magic Sn isotopes, for which
the ground states are expected to be spherical. We
adopt the SLy4 parameter set\cite{chabanat98} for the Skyrme energy functional. The pairing interaction defining the pairing energy functional is the density-dependent
delta interaction (DDDI), a contact force whose strength depends on position through the
nucleon density:
\begin{align}\label{DDDI}
v^{{\rm pair}}_q(\mbox{\boldmath $r$},\mbox{\boldmath $r$}')&={1\over2}(1-P_\sigma)V_q(\mbox{\boldmath $r$})
\delta(\mbox{\boldmath $r$}-\mbox{\boldmath $r$}'), \ \ \ (q=n,p) \\
V_q(\mbox{\boldmath $r$})&=v_0\left(1-\eta\left({\rho_q(\mbox{\boldmath $r$})\over \rho_0}\right)^\alpha\right).
\end{align}
Correspondingly the pair potential is given by
$
\Delta_q(\mbox{\boldmath $r$})=V_q(\mbox{\boldmath $r$})\tilde{\rho}_q(\mbox{\boldmath $r$}).
$
In the actual calculation, we assume a spherical HFB mean field and solve all the equations using
the polar coordinate system and the partial wave expansion. The radial coordinate is truncated at
$r=R_{\mathrm{max}}$ with $R_{\mathrm{max}}=20$ fm and discretized with an interval $\Delta r=0.2$ fm. The partial waves are taken into account up to the maximum angular momentum
quantum numbers $l_{\mathrm{max}},j_{\mathrm{max}}=12,25/2$. We truncate the quasiparticle states at a maximum quasiparticle energy $E_{\mathrm{max}}=60$ MeV.
The parameters of the DDDI are $v_0=-458.4$ MeV fm$^{-3}$, $\rho_0=0.08$ fm$^{-3}$, $\alpha=0.845$, and $\eta=0.71$ which are
chosen to reproduce the scattering length of the nuclear force in the $^{1}S_0$ channel and the experimental gap in $^{120}$Sn\cite{Matsuo2010}.
The HFB equation is solved with the Runge-Kutta method with the box boundary condition, i.e.\ $\phi(r)=0$ at $r=R_{\mathrm{max}}$. With this boundary condition, unbound quasiparticle states have a discrete spectrum, and so does the resultant strength function. It is possible to impose the boundary
condition appropriate for unbound and continuum quasiparticle states using the framework of the continuum
QRPA\cite{Matsuo2001,Serizawa2009,Matsuo2010,Shimoyama2011,Shimoyama2013} (see also the Appendix).
The QRPA calculation is performed using the discretized spectral representation while the
continuum QRPA is also used in some examples. The smoothing parameter is $\epsilon=100$ keV.
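As a minimal illustration, the DDDI pairing strength and the resulting local pair potential quoted above can be encoded as follows; the density arrays passed to these functions are placeholders for the self-consistent HFB densities.
\begin{verbatim}
import numpy as np

# DDDI parameters quoted in the text
v0, rho0, alpha, eta = -458.4, 0.08, 0.845, 0.71   # MeV fm^-3, fm^-3, -, -

def pairing_strength(rho_q):
    """V_q(r) = v0 * (1 - eta*(rho_q(r)/rho0)**alpha)."""
    return v0 * (1.0 - eta * (np.asarray(rho_q) / rho0) ** alpha)

def pair_potential(rho_q, rho_tilde_q):
    """Delta_q(r) = V_q(r) * rho_tilde_q(r)."""
    return pairing_strength(rho_q) * np.asarray(rho_tilde_q)
\end{verbatim}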
\section{Pair vibrations and Higgs response}
\subsection{Response to pair transfer operators}
\begin{figure}
\centering
\begin{minipage}{\columnwidth}
\includegraphics[width=0.82\columnwidth]{fig2.pdf}
\end{minipage}
\caption{Pair-addition and pair-removal strength functions,
$S_{\mathrm{ad}}(E)$ and $S_{\mathrm{rm}}(E)$, for neutrons in $^{120}$Sn, plotted
as functions of the excitation energy $E$. The unit of the strength functions is
MeV$^{-1}$ since $\hat{P}_{\mathrm{ad}}$ and $\hat{P}_{\mathrm{rm}}$ are dimensionless.
The solid line is the result
of the QRPA calculation based on the discretized spectral representation
of the response function while the dotted line is the result of the
continuum QRPA. The smoothing width is $\epsilon=100$ keV.}
\label{Pad-rm-strength}
\end{figure}
\begin{figure}
\centering
\begin{minipage}{\columnwidth}
\includegraphics[width=0.52\columnwidth]{fig3.pdf}
\end{minipage}
\caption{Pair-addition and pair-removal transition densities
$r^2P^{(\mathrm{ad})}_{\nu}(r)$ (solid curve) and $r^2P^{(\mathrm{rm})}_{\nu}(r)$ (dotted curve) of neutrons
for some peaks seen in the pair-addition and
pair-removal strength functions for $^{120}$Sn, shown in Fig.\ref{Pad-rm-strength}.
}
\label{trans-dens}
\end{figure}
Sensitive probes to the pairing correlation may be expressed in terms of the
pair field operators $\hat{P}(\mbox{\boldmath $r$}) =\psi(\mbox{\boldmath $r$}\uparrow) \psi(\mbox{\boldmath $r$}\downarrow)$ and
$\hat{P}^\dagger(\mbox{\boldmath $r$}) =\psi^\dagger(\mbox{\boldmath $r$}\downarrow) \psi^\dagger(\mbox{\boldmath $r$}\uparrow)$. We define
a pair-addition operator
\begin{equation} \label{Pad-op}
\hat{P}_{\mathrm{ad}}=\int d\mbox{\boldmath $r$} Y_{00}f(r)\hat{P}^{\dagger}(\mbox{\boldmath $r$})
=\frac{1}{\sqrt{4\pi}}\int d\mbox{\boldmath $r$} f(r) \psi^\dagger(\mbox{\boldmath $r$} \downarrow)\psi^\dagger(\mbox{\boldmath $r$} \uparrow)
\end{equation}
and a pair-removal operator
\begin{equation} \label{Prm-op}
\hat{P}_{\mathrm{rm}}=\int d\mbox{\boldmath $r$} Y_{00} f(r) \hat{P}(\mbox{\boldmath $r$})
=\frac{1}{\sqrt{4\pi}}\int d\mbox{\boldmath $r$} f(r) \psi(\mbox{\boldmath $r$} \uparrow)\psi(\mbox{\boldmath $r$} \downarrow).
\end{equation}
These operators bring about a transition changing particle number
by $\Delta N=\pm 2$.
Here we introduce a form factor $f(r)$ which is effective in a spatial region where the
nucleon density is finite, but diminishes far outside the nucleus as we are interested in a process where a nucleon
pair is added to or removed from a nucleus. Specifically
we choose a Woods-Saxon function,
\begin{equation}
f(r)=\frac{1}{1+\exp((r-R)/a)}
\end{equation}
with $R=1.27 \times A^{1/3}$ fm and $a=0.67$ fm,
but as shown later the main conclusion does not depend on
detailed form of $f(r)$.
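For concreteness, a minimal sketch tabulating this Woods-Saxon form factor on the radial mesh of the previous section reads as follows (the mesh parameters repeat the values quoted there).
\begin{verbatim}
import numpy as np

def form_factor(r, A, r0=1.27, a=0.67):
    """Woods-Saxon form factor f(r) = 1/(1 + exp((r-R)/a)), R = r0*A^(1/3) fm."""
    R = r0 * A ** (1.0 / 3.0)
    return 1.0 / (1.0 + np.exp((r - R) / a))

r = np.arange(0.0, 20.0 + 1e-9, 0.2)    # radial mesh, 0-20 fm, dr = 0.2 fm
f_120Sn = form_factor(r, A=120)         # form factor for 120Sn
\end{verbatim}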
In the present study we describe the pair vibration with the lowest multipolarity and
spin parity $J^\pi=0^+$.
Here $Y_{00}$ is the rank-0 spherical harmonic,
and both $\hat{P}_{\mathrm{ad}}$ and $\hat{P}_{\mathrm{rm}}$ carry the angular momentum and parity
quantum numbers $0^+$.
The response of a nucleus to these operators is represented by the strength functions
\begin{equation}
S_{\mathrm{ad}}(E)=\sum_\nu \left| \braket{\nu}{\hat{P}_{\mathrm{ad}}}{0} \right|^2 \delta(E - E_\nu)
\end{equation}
for the pair-addition process, and
\begin{equation}
S_{\mathrm{rm}}(E)=\sum_\nu \left| \braket{\nu}{\hat{P}_{\mathrm{rm}}}{0} \right|^2 \delta(E- E_\nu)
\end{equation}
for the pair-removal process. Here $\ket{\nu}$ are the QRPA excited states
with excitation energies $E_\nu$,
whereas $\braket{\nu}{\hat{P}_{\mathrm{ad}}}{0}$ and $ \braket{\nu}{\hat{P}_{\mathrm{rm}}}{0}$ are the matrix elements of
the pair-addition and pair-removal operators between the HFB ground state $\ket{0}=\ket{\Psi_0}$ and the
QRPA excited states. Transition densities
\begin{align}
P^{(\mathrm{ad})}_{\nu}(\mbox{\boldmath $r$})&=\braket{\nu}{\hat{P}^\dagger(\mbox{\boldmath $r$})}{0}=\braket{\nu}{\psi^\dagger(\mbox{\boldmath $r$}\downarrow)\psi^\dagger(\mbox{\boldmath $r$}\uparrow)}{0}
=P^{(\mathrm{ad})}_{\nu}(r)Y_{00}^* , \\
P^{(\mathrm{rm})}_{\nu}(\mbox{\boldmath $r$})&=\braket{\nu}{\hat{P}(\mbox{\boldmath $r$})}{0}=\braket{\nu}{\psi(\mbox{\boldmath $r$}\uparrow)\psi(\mbox{\boldmath $r$}\downarrow)}{0}=P^{(\mathrm{rm})}_{\nu}(r)Y_{00}^* ,
\end{align}
representing the amplitudes of pair addition and removal, are also useful measures to look into the
structure of the excited states.
Figure \ref{Pad-rm-strength} (a) and (b) show the pair-addition and pair-removal strength functions, respectively,
calculated for neutrons in $^{120}$Sn.
The results exhibit several noticeable peaks. We first discuss them referring to
the concept of the pair rotation and the pair vibration.
First, we note the
peak with the lowest excitation energy $E=0.60$ MeV, which is seen both in the pair-addition and in the pair-removal strength functions.
It corresponds to the pair rotation, i.e. the transition from the ground state of the system $N$ to the ground state with
$N \pm 2$. The pair rotation, which is associated with a displacement along the dashed line in Fig.\ref{fig_mexican}, should appear as a zero-energy Nambu-Goldstone mode arising from the
gauge transformation $e^{i\theta \hat{N}}\ket{\Psi_0}$. In the actual numerical implementation
of the HFB+QRPA formalism,
the calculated excitation energy of the pair rotation deviates from zero by a small amount, 0.60 MeV
in the present case, due to numerical errors and truncations.
The transition densities of the
lowest energy peak, the pair rotation mode, are shown in Fig.\ref{trans-dens}. It is seen that the pair-addition
transition density and the pair-removal one have the same shape as functions of $r$, but
with opposite phases.
This is what is expected for the pair rotation, since
the phase change in the pair condensate $\tilde{\rho}(\mbox{\boldmath $r$})$ gives
$\delta\tilde{\rho}(\mbox{\boldmath $r$})= \delta e^{2i\delta\theta}\tilde{\rho}(\mbox{\boldmath $r$})\sim 2i\delta\theta\tilde{\rho}(\mbox{\boldmath $r$})$ and $\delta\tilde{\rho}^*(\mbox{\boldmath $r$})=
\delta e^{-2i\delta\theta}\tilde{\rho}(\mbox{\boldmath $r$})\sim -2i\delta\theta\tilde{\rho}(\mbox{\boldmath $r$})$, i.e.,
opposite sign in these quantities
with a common shape proportional to the ground state pair condensate $\tilde{\rho}(\mbox{\boldmath $r$})$.
Other peaks in Fig.\ref{Pad-rm-strength}(a) and (b) represent transitions to excited $0^+$ states in the neighbor nuclei.
The peak at $E=2.44$ MeV, the lowest excited $0_2^+$ state, has transition
strengths which are not very large.
This state corresponds to the pair vibration,
which is predicted to be
the lowest excited $0_2^+$ state with energy about $2\Delta$\cite{Bes-Broglia66,Broglia73,Brink-Broglia}.
The excitation energy 2.44 MeV is consistent with the
predicted value $E \approx 2\Delta$ as an average value of the pair gap is
$\Delta=\int d\mbox{\boldmath $r$} \rho(r)\Delta(r) /\int d\mbox{\boldmath $r$}\rho(r)=1.15$ MeV in the present case.
In the interval of excitation energy from 3 MeV up to around 20 MeV,
there exist several peaks which have larger pair-addition or -removal strengths
than those of the pair vibration at $E=2.44$ MeV, e.g.
peaks at $E=12.36$ and 17.45 MeV seen in the pair-addition strength,
and those at $E=3.85$ and 17.31 MeV in the pair-removal
strength. These states may also be regarded as pair vibrational states as the pair-transfer
strengths are enhanced by the RPA correlation (see below). Although
the high-lying pair vibration is often called the
giant pair vibration (GPV)\cite{Broglia1977,Cappuzzello2015,Cavallaro2019,Assie2019},
we adopt in the present paper
a more inclusive term ``high-lying pair vibrations'' for these peaks
since the strength distribution
does not form a single giant peak, but rather multiple peaks in a wide energy
interval.
We remark that the low-lying pair vibration at $E=2.44$ MeV and the high-lying pair vibrations have
slightly different characters. The low-lying pair vibration has both pair-addition and pair-removal strength,
whereas the high-lying pair vibrations have either the pair-addition
strength or the pair-removal strength. This difference is also clearly seen
in the transition density, shown in Fig.\ref{trans-dens}: the low-lying pair vibration
at $E=2.44$ MeV has both the pair-addition and pair-removal
amplitudes. By contrast,
the high-lying pair vibrations
are seen either in the pair-addition
strength function or in the pair-removal strength function, but not in both. This is reflected also
in the transition density. As shown in Fig.\ref{trans-dens},
the transition densities
of the peaks at $E=12.31$ MeV and
$E \approx 17.45$ MeV seen in the pair-addition
strength function have a large pair-addition amplitude while the pair-removal
amplitude is almost negligible. These states can be populated only by the
pair-addition process. On the contrary, the peaks at $E=3.85$ and 17.31 MeV
in the pair-removal strength function have a dominant
pair-removal amplitude in the transition density.
\subsection{Higgs response}
\begin{figure}
\centering
\begin{minipage}{\columnwidth}
\includegraphics[width=0.82\columnwidth]{fig4.pdf}
\end{minipage}
\caption{(a) Higgs strength function
$S_{\mathrm{H}}(E)$ of neutrons for $^{120}$Sn, plotted
as a function of the excitation energy $E$.
(b) Sum of the
pair-addition and pair-removal strength functions,
$S_{\mathrm{ad}}(E)+S_{\mathrm{rm}}(E)$.
(c) Unperturbed Higgs strength function $S_{\mathrm{H}}(E)$
for uncorrelated neutron two-quasiparticle excitations.
}
\label{Higgs-strength}
\end{figure}
We shall here address the question of how the pair vibrations discussed above are related to
the macroscopic picture of an amplitude oscillation of the order parameter,
the Higgs mode. However, the existence of multiple states, including
both the low-lying and high-lying pair vibrations, makes it difficult to relate
this macroscopic picture to one of the pair vibrational
states. We need a new way of characterizing the pair vibrations, which is
more suitable for obtaining the macroscopic information.
As is illustrated by Fig.\ref{fig_mexican}, the response of the pair fields around the HFB equilibrium point has
two different directions with respect to the order parameter of the pair condensation:
the phase mode, changing the
phase of the pair gap (or the pair condensate), and the amplitude mode, changing the
amplitude of the pair gap/condensate.
The pair-addition operator, $\hat{P}_{\mathrm{ad}} \sim \psi^\dagger\psi^\dagger$, and the
pair-removal operator $\hat{P}_{\mathrm{rm}} \sim \psi\psi$, do not probe these two
different modes separately, since both the pair rotation (the phase
mode) and the pair vibrations (a possible amplitude mode) are seen in
both the pair-addition
and pair-removal strength functions.
Instead we introduce a pair field operator that is
a linear combination of $\hat{P}_{\mathrm{ad}} $ and $\hat{P}_{\mathrm{rm}}$:
\begin{equation} \label{Higgs-op}
\hat{P}_{\mathrm{H}}= \hat{P}_{\mathrm{ad}} + \hat{P}_{\mathrm{rm}}=\frac{1}{\sqrt{4\pi}}\int d\mbox{\boldmath $r$} f(r)
\left\{ \psi(\mbox{\boldmath $r$} \uparrow)\psi(\mbox{\boldmath $r$} \downarrow) +\psi^\dagger(\mbox{\boldmath $r$} \downarrow)\psi^\dagger(\mbox{\boldmath $r$} \uparrow) \right\}.
\end{equation}
Note that fluctuation of the amplitude $|\tilde{\rho}(\mbox{\boldmath $r$})|$ of the pair condensate reads
$\delta|\tilde{\rho}(\mbox{\boldmath $r$})|=\left(\tilde{\rho}(\mbox{\boldmath $r$})\delta\tilde{\rho}^*(\mbox{\boldmath $r$},t)+\tilde{\rho}^*(\mbox{\boldmath $r$})\delta\tilde{\rho}(\mbox{\boldmath $r$},t)\right)/2|\tilde{\rho}(\mbox{\boldmath $r$})|=
(\delta\tilde{\rho}^*(\mbox{\boldmath $r$},t)+\delta\tilde{\rho}(\mbox{\boldmath $r$},t))/2$ for real $\tilde{\rho}(\mbox{\boldmath $r$})$, and therefore the Higgs operator $\hat{P}_{\mathrm{H}}$
defined by Eq.(\ref{Higgs-op}) probes the amplitude fluctuation.
In the following, we call $\hat{P}_{\mathrm{H}}$ the Higgs operator, following the nomenclature in which the amplitude mode of the
pair field is often called the Higgs mode\cite{Pekker-Varma,Shimano-Tsuji}. The strength function for the Higgs operator is
defined by
\begin{equation}
S_{\mathrm{H}}(E) = \sum_\nu \left| \braket{\nu}{\hat{P}_{\mathrm{H}}}{0} \right|^2 \delta(E - E_\nu).
\end{equation}
Figure \ref{Higgs-strength} (a) is the calculated Higgs strength function $S_{\mathrm{H}}(E)$
of neutrons for $^{120}$Sn.
For comparison we show in Fig.\ref{Higgs-strength}(b) a sum
$S_{\mathrm{ad}}(E)+S_{\mathrm{rm}}(E)$ of the pair-addition and pair-removal
strength functions (Fig.\ref{Pad-rm-strength}). The panel (c) is a
Higgs strength function for independent two-quasiparticle excitations,
namely a calculation in which the RPA correlation is removed.
We observe the following features.
We first note that, compared with the uncorrelated
result (c), the Higgs strength in (a) is enhanced by several times for the peaks of
low-lying and high-lying pair vibrations. This clearly indicates collectivity of both the
low- and high-lying pair vibrations.
Secondly we observe that
the Higgs strength function is close to the sum of the pair-addition and pair-removal
strengths although there are some
differences. A significant and important difference is seen for the pair rotation mode
(the peak at $E=0.60$ MeV in this example), for which the Higgs strength is almost invisible.
Indeed, it should vanish in principle due to the very nature of the phase mode
associated with the U(1) gauge symmetry. As discussed above,
the pair-addition and pair-removal amplitudes of the phase mode
have the same shape but with opposite sign (cf. Fig.\ref{trans-dens}),
and consequently these two amplitudes cancel each other in the matrix element
of the Higgs operator $\braket{\nu}{\hat{P}_{\mathrm{H}}}{0}=\braket{\nu}{\hat{P}_{\mathrm{ad}}}{0} + \braket{\nu}{\hat{P}_{\mathrm{rm}}}{0}=0$.
By contrast, for the low-lying pair vibration (the peak at $E=2.44$ MeV)
the Higgs strength is enhanced: it exceeds the sum of the pair-addition
and pair-removal strengths (it is approximately twice the sum). This is because
both the pair-addition and pair-removal transition densities of this mode
have the same sign, as seen in Fig.\ref{trans-dens}, and they lead to a
coherent sum in the matrix element of the Higgs operator
$\braket{\nu}{\hat{P}_{\mathrm{H}}}{0} = \braket{\nu}{\hat{P}_{\mathrm{ad}}}{0}+ \braket{\nu}{\hat{P}_{\mathrm{rm}}}{0}$.
Consequently the low-lying pair vibration contributes more strongly
to the Higgs strength than to the pair-addition and pair-removal strengths. Because of
this enhancement due to the coherence, the low-lying pair vibration could be regarded
as a Higgs mode in nuclei. This interpretation might be supported also by
the energy relation $E \approx 2\Delta$ of the low-lying pair vibration, which is
also the case for
the Higgs mode in the superconductors\cite{Anderson1958,Littlewood-Varma1981-82,Pekker-Varma,Shimano-Tsuji}
with the energy relation $E = 2\Delta$. However, for the reasons given below we refrain from a definite identification
of the low-lying pair vibration as the Higgs mode.
The high-lying pair vibrations, emerging as several peaks distributed up to around $E\sim 20$ MeV,
have the Higgs strength comparable to that of the low-lying pair vibration. From the viewpoint of
the response to the Higgs operator, the high-lying pair vibrations may also be candidates of
the Higgs mode. However the energy relation $E = 2 \Delta$ does not hold.
The high-lying pair vibrations exhibit no coherence between the matrix elements
$\braket{\nu}{\hat{P}_{\mathrm{ad}}}{0}$ and $\braket{\nu}{\hat{P}_{\mathrm{rm}}}{0}$.
The Higgs strengths
of these states are an incoherent sum of the pair-addition and pair-removal strengths. It therefore may not be
reasonable to identify either the
high-lying or the low-lying pair vibrations as a pure Higgs mode.
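The interplay between coherence and incoherence discussed above can be summarized in a small sketch: per state, the Higgs strength is $|\braket{\nu}{\hat{P}_{\mathrm{ad}}}{0}+\braket{\nu}{\hat{P}_{\mathrm{rm}}}{0}|^2$, which differs from the incoherent sum by an interference term. The Python fragment below, with hypothetical matrix-element arrays, only illustrates this arithmetic.
\begin{verbatim}
import numpy as np

def higgs_vs_incoherent(m_add, m_rm):
    """Per-state Higgs strength |m_add + m_rm|^2 compared with the incoherent
    sum |m_add|^2 + |m_rm|^2; the difference 2*Re(conj(m_add)*m_rm) vanishes
    when a state is seen in only one channel, is positive for same-sign
    amplitudes (low-lying pair vibration) and negative for opposite-sign
    amplitudes (pair rotation)."""
    m_add, m_rm = np.asarray(m_add), np.asarray(m_rm)
    s_higgs = np.abs(m_add + m_rm) ** 2
    s_incoherent = np.abs(m_add) ** 2 + np.abs(m_rm) ** 2
    return s_higgs, s_incoherent
\end{verbatim}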
\subsection{Static polarizability}
The Higgs strength is not concentrated in a single state, but
distributed over many excited states, including both the low-lying and high-lying pair vibrations.
Since no single pair vibrational state can be interpreted as a pure Higgs mode, we need
a different viewpoint which integrates both the low- and high-lying pair vibrations.
Here we consider the static polarizability
which can be derived from the Higgs strength function. It is defined by
\begin{equation} \label{static-pol}
\alpha_{\mathrm{H}}= \frac{d \langle \hat{P}_{\mathrm{H}} \rangle'}{d\mu},
\end{equation}
where $\langle \hat{P}_{\mathrm{H}} \rangle'$ is a change of the expectation value
under the static perturbation $\hat{V}_{\mathrm{ext}}=\mu \hat{P}_{\mathrm{H}}$. The static
polarizability is related to the strength function through the
inversely energy weighted sum
\begin{equation} \label{e-inv-sum}
I_{-1}=2\int_{E_{\mathrm{min}}}^{E_{\mathrm{max}}} dE \frac{S_{\mathrm{H}}(E)}{E}=2\sum_\nu \frac{\left| \braket{\nu}{\hat{P}_{\mathrm{H}}}{0} \right|^2}{E_\nu} = \alpha_{\mathrm{H}}
\end{equation}
of the Higgs strength function $S_{\mathrm{H}}(E)$.
In other words, the static polarizability for the Higgs operator, called the Higgs polarizability hereafter,
can be evaluated if the
Higgs strength function is obtained. We note that the inversely energy-weighted sum
and the static polarizability have been discussed intensively for the case of the
E1 strength function, i.e. the electric dipole excitations in nuclei, as a probe of
neutron skin and nuclear matter properties
\cite{Reinhard-Nazarewicz2010, Tamii2011,Piekarewicz2012,Roca-Maza2015,Roca-Maza-Paar2018}.
In the present study the inversely energy-weighted sum is calculated by integrating over
an energy interval from $E_{\mathrm{min}}=1.1$ MeV to the upper limit of the excitation energy $E_{\mathrm{max}}=60$ MeV.
The lower bound is set to exclude the contribution from the pair rotation, which would vanish
if complete self-consistency were achieved in the numerical calculation.
Figure \ref{fig_e_inv_sum} shows the running sum of Eq.(\ref{e-inv-sum}) in which the upper bound $E_{\mathrm{max}}$ is
varied. It is seen that although the contribution of the low-lying pair vibration (the lowest excitation)
is as large as about 35\% of the total sum, strengths distributed at higher excitation energies contribute
significantly. The large strengths of the high-lying pair vibrations (distributed up to around 20 MeV)
give a sizable contribution of about 45\% (i.e.\ 80\% including everything up to 20 MeV) to the total sum.
Contributions of strengths above 20 MeV are small, and there is no distinct concentration
of strength there.
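A running inverse energy-weighted sum of this kind can be computed from a list of QRPA poles as in the sketch below; the excitation energies and Higgs matrix elements are assumed to be available from the QRPA calculation, and the lower cutoff of 1.1 MeV excludes the spurious pair-rotation peak as in the text.
\begin{verbatim}
import numpy as np

def running_inverse_ew_sum(E_nu, m_higgs, e_min=1.1, e_max_grid=None):
    """I_{-1}(E_max) = 2 * sum_{e_min < E_nu <= E_max} |<nu|P_H|0>|^2 / E_nu."""
    E_nu = np.asarray(E_nu, dtype=float)
    w = 2.0 * np.abs(np.asarray(m_higgs)) ** 2 / E_nu
    w[E_nu <= e_min] = 0.0                     # drop the pair-rotation peak
    if e_max_grid is None:
        e_max_grid = np.linspace(0.0, 60.0, 601)
    running = np.array([w[E_nu <= emax].sum() for emax in e_max_grid])
    return e_max_grid, running
\end{verbatim}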
\begin{figure}
\centering
\begin{minipage}{\columnwidth}
\includegraphics[width=0.82\columnwidth]{fig5.pdf}
\end{minipage}
\caption{Inversely energy-weighted sum $I_{-1}(E_{\mathrm{max}})$ of the Higgs strength
function for $^{120}$Sn, as a function of the maximum excitation energy $E_{\mathrm{max}}$ of the
sum.
}
\label{fig_e_inv_sum}
\end{figure}
\section{Pair condensation energy}
\subsection{Effective potential}
\begin{figure}
\centering
\begin{minipage}{\columnwidth}
\includegraphics[width=0.7\columnwidth]{fig6.pdf}
\end{minipage}
\caption{The potential energy curve as a function of the
order parameter $p=\langle \hat{P}_{\mathrm{H}} \rangle$ for $^{120}$Sn, $^{132}$Sn
and $^{140}$Sn. The total energy obtained with the constrained Hartree-Fock-Bogoliubov (CHFB)
calculation is plotted with the diamond symbol. The dashed curve is
a quartic function, Eq.(\ref{quartic-fn}), fitted to the CHFB results.
}
\label{fig_cond_potential}
\end{figure}
Let us now relate the above results to the macroscopic picture discussed with Fig.\ref{fig_mexican}.
For this purpose we specify the order parameter and the effective potential.
As the order parameter we adopt the expectation value of the pair removal
operator
\begin{equation}
p = 2\langle \hat{P}_{\mathrm{rm}}\rangle = \frac{1}{\sqrt{\pi}}\int d\mbox{\boldmath $r$} f(r) \tilde{\rho}(\mbox{\boldmath $r$}),
\end{equation}
which also represents a spatial integral (or average) of the pair condensate $\tilde{\rho}(\mbox{\boldmath $r$})$.
(The factor of two is introduced here for later convenience.) The order parameter $p$ is a
complex variable since
it transforms as $p \rightarrow e^{2i\theta}p$ under the U(1) gauge transformation. We then
consider
the total energy of the system $\mathcal{E}(p)$ as the effective potential, a function of the order parameter $p$.
Because of the U(1) symmetry $\mathcal{E}(p)$ depends only on the amplitude $|p|$, and it is
sufficient to consider the potential curve along the real axis of $p$, which
corresponds to the expectation value of the Higgs operator $\hat{P}_{\mathrm{H}}$.
In the following analysis we use
\begin{equation}
p = \langle \hat{P}_{\mathrm{H}} \rangle
\end{equation}
which in fact represents the amplitude $|p|$ of the order parameter.
The effective potential $U(p)=\mathcal{E}(p)-\mathcal{E}(0)$ can be calculated with the total energy $\mathcal{E}(p)$
for the generalized Slater determinant $\ket{\Psi(p)}$, which is obtained
by means of the constrained Hartree-Fock-Bogoliubov (CHFB) calculation\cite{Ring-Schuck}
using $\hat{P}_{\mathrm{H}}$ as a constraining operator.
Figure \ref{fig_cond_potential} shows the effective potential calculated
for $^{120}$Sn, $^{132}$Sn and $^{140}$Sn.
Let us focus on the representative example of $^{120}$Sn.
The effective potential in this example has a so-called Mexican hat shape, which has a minimum
at finite $p=p_0$ where $p_0=\langle \hat{P}_{\mathrm{H}} \rangle$ corresponds
to the HFB ground state $\ket{\Psi_0}$. Since the shape is
smooth as a function of $p$, it may be approximated by a polynomial. Guided by the
Ginzburg-Landau theory of the superconductivity\cite{Ginzburg-Landau}, we here assume a
fourth-order polynomial
\begin{equation} \label{quartic-fn}
U_{\mathrm{4th}}(p) = ap^4 +b p^2+c,
\end{equation}
and we fit this function to the calculated effective potential $\mathcal{E}(p)$. As seen in Fig.\ref{fig_cond_potential},
the fourth-order polynomial is a good representation of the CHFB results.
We have checked and confirmed good accuracy of Eq.(\ref{quartic-fn}) for other even-even Sn isotopes,
see two other examples in Fig.\ref{fig_cond_potential}.
\subsection{Pair condensation energy}
In the case where the effective potential is approximated well by the fourth-order polynomial
there holds a simple relation between the potential parameters, $a$ and $b$, and
the order parameter $p_0$ and the curvature $C$ of the effective potential at the potential
minimum:
\begin{align}
p_0&=\sqrt{\frac{-b}{2a}}, \\
C&=\left. \frac{d^2 U_{\mathrm{4th}}(p)}{dp^2}\right|_{p=p_0}=-4b.
\end{align}
As a consequence, the potential depth $D$ can be also related to
the two parameters characterizing the minimum:
\begin{equation}
D=U_{\mathrm{4th}}(p_0)-U_{\mathrm{4th}}(0) = -\frac{b^2}{4a}=-\frac{1}{8}Cp_0^2.
\end{equation}
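As a sketch of how these relations are used in practice, the quartic form can be fitted to the constrained-HFB energies by linear least squares and the quantities $p_0$, $C$ and $D$ extracted as follows; the input arrays stand for the CHFB points of Fig.\ref{fig_cond_potential}, and the fragment is an illustration rather than the code used in this work.
\begin{verbatim}
import numpy as np

def fit_quartic_potential(p_vals, U_vals):
    """Fit U_4th(p) = a*p**4 + b*p**2 + c and extract p0, C and D."""
    p_vals = np.asarray(p_vals, dtype=float)
    U_vals = np.asarray(U_vals, dtype=float)
    X = np.vstack([p_vals ** 4, p_vals ** 2, np.ones_like(p_vals)]).T
    (a, b, c), *_ = np.linalg.lstsq(X, U_vals, rcond=None)
    if b < 0:                       # Mexican-hat case: minimum at finite p0
        p0 = np.sqrt(-b / (2.0 * a))
        C = -4.0 * b                # curvature d^2U/dp^2 at p = p0
        D = -b ** 2 / (4.0 * a)     # depth, equal to -(1/8)*C*p0**2
    else:                           # no condensate: minimum at p = 0
        p0, C, D = 0.0, 2.0 * b, 0.0
    return a, b, c, p0, C, D
\end{verbatim}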
We remark here that the parameters $p_0$ and $C$ can be derived from the
Higgs response. The order parameter $p_0$ at the minimum is
the expectation value of the Higgs operator for the ground state
\begin{equation}
p_0=\braket{\Psi_0}{\hat{P}_{\mathrm{H}}}{\Psi_0}.
\end{equation}
The potential curvature $C$ at the minimum $p=p_0$
is related to the Higgs polarizability $\alpha_{\mathrm{H}}$, Eq.(\ref{static-pol}),
\begin{equation}
C=1/\alpha_{\mathrm{H}}
\end{equation}
through the Hellmann-Feynman theorem\cite{Hellmann-Feynman,Ring-Schuck}.
Consequently the potential depth $D$, which can be interpreted as the pair condensation
energy $U_{\mathrm{cond}}=\mathcal{E}(p_0)-\mathcal{E}(0)$, can be evaluated as
\begin{equation} \label{cond-eng}
U_{\mathrm{cond}}^{\mathrm{Higgs}}=-\frac{1}{8}\frac{p_0^2}{\alpha_{\mathrm{H}}}
\end{equation}
using the Higgs polarizability $\alpha_{\mathrm{H}}$.
Note that the quantities in the last expression can be evaluated using
the matrix element of the Higgs operator $\hat{P}_{\mathrm{H}}$ for the
ground state $\ket{\Psi_0}$, and the strength function
of the Higgs operator $\hat{P}_{\mathrm{H}}$, i.e. the information of
the Higgs response of the system.
The pair condensation energy in $^{120}$Sn obtained using the Higgs response, i.e.\ Eq.(\ref{cond-eng}),
is $U_{\mathrm{cond}}^{\mathrm{Higgs}}=1.51$ MeV.
It is compared with the pair condensation energy obtained directly from the
CHFB calculation, $U_{\mathrm{cond}}^{\mathrm{CHFB}}=1.58$ MeV. The evaluation using the
Higgs response thus reproduces the value obtained directly from the CHFB calculation
within an error of less than 10\%.
This suggests that the pair condensation energy may be evaluated
via the Higgs response with good accuracy.
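In practice, once $p_0$ is taken from the HFB ground state and $\alpha_{\mathrm{H}}$ from the inversely energy-weighted sum of the Higgs strength function, Eq.(\ref{cond-eng}) reduces to the one-line evaluation sketched below.
\begin{verbatim}
def condensation_energy_higgs(p0, alpha_H):
    """U_cond^Higgs = -(1/8) * p0**2 / alpha_H  (negative: energy gain)."""
    return -0.125 * p0 ** 2 / alpha_H
\end{verbatim}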
\section{Sn isotopes}
\begin{figure}
\centering
\begin{minipage}{\columnwidth}
\includegraphics[width=0.82\columnwidth]{fig7.pdf}
\end{minipage}
\caption{(a) The order parameter $p_0=\langle \hat{P}_{\mathrm{H}} \rangle$ and the
average pair gap $\Delta$ of neutrons, calculated for the ground states of the
Sn isotopes. (b) The static polarizability $\alpha_{\mathrm{H}}$ for the Higgs operator $\hat{P}_H$, calculated
from the inversely energy weighted sum of the Higgs strength function.
}
\label{fig_order_gap_static}
\end{figure}
\begin{figure}
\centering
\begin{minipage}{\columnwidth}
\includegraphics[width=0.82\columnwidth]{fig8.pdf}
\end{minipage}
\caption{The pair condensation energy $U_{\mathrm{cond}}^{\mathrm{Higgs}}$ evaluated using the
Higgs strength function in the Sn isotopes is plotted with crosses.
It is compared with the pair condensation energy $U_{\mathrm{cond}}^{\mathrm{CHFB}}$
which is calculated directly using the constrained Hartree-Fock-Bogoliubov method,
plotted with circles. The dotted line is an analytic estimate
$\frac{1}{2}g\Delta^2$ with the oscillator estimate of the
single-particle level density $g=0.015A$ MeV$^{-1}$. }
\label{fig_cond_eng}
\end{figure}
We shall apply the above method of evaluating the pair condensation energy to the neutron pair correlation
in even-even Sn isotopes, ranging from the doubly magic, neutron-deficient $^{100}$Sn to neutron-rich
$^{150}$Sn.
The ground state value of the order parameter
$p_0=\langle \hat{P}_{\mathrm{H}} \rangle $ as well as the average pairing gap $\Delta=\int d\mbox{\boldmath $r$} \rho(r)\Delta(r) /\int d\mbox{\boldmath $r$}\rho(r)$
are shown in Fig.\ref{fig_order_gap_static} (a).
It is seen that these two
quantities are
essentially proportional to each other. They are zero in
the neutron-magic nuclei $^{100}$Sn and $^{132}$Sn, where the pair condensate vanishes
due to a large shell gap at the neutron
Fermi energy, corresponding to the ``normal''
phase.
Figure \ref{fig_order_gap_static} (b) shows the Higgs polarizability $\alpha_{\mathrm{H}}$, evaluated from the Higgs strength function.
A noticeable feature is that its neutron number dependence is not correlated with
that of the pairing gap $\Delta$ or the order parameter $p_0=\langle \hat{P}_{\mathrm{H}} \rangle$.
For instance, the Higgs
polarizability in the closed-shell nuclei
$^{100}$Sn and $^{132}$Sn is larger than in the neighboring isotopes. It is also remarkable that the largest value
occurs in $^{140}$Sn, where the pairing gap takes an intermediate value.
Figure \ref{fig_cond_eng} shows the pair condensation energy $U_{\mathrm{cond}}^{\mathrm{Higgs}}$ of neutrons evaluated from the
Higgs response, i.e. using Eq.(\ref{cond-eng}). It is compared with
the pair condensation energy $U_{\mathrm{cond}}^{\mathrm{CHFB}}$ which is calculated directly using the constrained
Hartree-Fock-Bogoliubov method for several isotopes. We confirm that
the proposed method works well not only for $^{120}$Sn discussed above,
but also for the other isotopes.
Another point is that the isotopic trend of the pair condensation energy
shows both resemblances to and differences from that of the pair gap. For comparison
we plot also an
estimate of the pair condensation energy
$\frac{1}{2}g\Delta^2$ based on the equidistant single-particle model\cite{Ring-Schuck}
where $g$ is the single-particle level density. This estimate reflects the pair correlation through
the pair gap alone.
It is seen that the microscopic condensation energy differs from this estimate
with respect to both the magnitude and the isotopic dependence.
This indicates that the pair condensation energy is a physical quantity which carries
information not necessarily expected from the pairing gap.
A noticeable example is $^{140}$Sn, where the pair condensation energy is very small
(about 0.1 MeV). The small condensation energy is due to the large value of the
Higgs polarizability (see Fig.\ref{fig_order_gap_static} (b)), and its origin is seen
in the Higgs strength function, shown in Fig.\ref{fig_higgs_120-132-140}.
The small pair condensation energy, reflecting the very shallow effective potential
with a small potential curvature (Fig.\ref{fig_cond_potential}),
manifests itself in the presence of a low-lying pair vibration which has a very large Higgs strength,
several times larger than that in $^{120}$Sn.
We also note $^{132}$Sn, in which the HFB ground state has no pair condensate.
Correspondingly the effective potential shown in Fig. \ref{fig_cond_potential} (b) has the minimum at $p_0=0$,
and there is no pair rotation in this case.
The Higgs polarizability (or the potential curvature at the minimum) in this case is slightly larger
(smaller) than that for $^{120}$Sn
(see Fig.\ref{fig_order_gap_static} (b) and Fig.\ref{fig_cond_potential} ). This feature can be traced to
the presence of the pair vibrations with relatively large pair-transfer strengths as seen
in Fig.\ref{fig_higgs_120-132-140}.
\begin{figure}
\centering
\begin{minipage}{\columnwidth}
\includegraphics[width=0.82\columnwidth]{fig9.pdf}
\end{minipage}
\caption{Higgs strength function $S_{\mathrm{H}}(E)$ for $^{120}$Sn, $^{132}$Sn and $^{140}$Sn,
calculated with the continuum QRPA.
}
\label{fig_higgs_120-132-140}
\end{figure}
\section{Discussion}
The method to evaluate the pair condensation energy from the Higgs response
should not depend on details of the definitions of the pair transfer operators
$\hat{P}_{\mathrm{ad}}$, $\hat{P}_{\mathrm{rm}}$ and $\hat{P}_{\mathrm{H}}$.
We have checked other choices of the form factor $f(r)$ appearing in Eqs.(\ref{Pad-op}) and (\ref{Prm-op})
by using $f(r)=\frac{d}{dr}\frac{1}{1+\exp((r-R)/a)}$ and $f(r)=\frac{r^2}{1+\exp((r-R)/a)}$,
both of which are weighted toward the nuclear surface. The pair condensation energy
derived from the Higgs response in $^{120}$Sn is $U^{\mathrm{Higgs}}_{\mathrm{cond}}=1.51$ MeV and $1.58$ MeV
for the above two choices. They coincide well with $U^{\mathrm{Higgs}}_{\mathrm{cond}}=1.51$ MeV obtained with
the volume-type form factor $f(r)=\frac{1}{1+\exp((r-R)/a)}$.
We also note that Eq.\ (\ref{cond-eng}) for the pair condensation energy
is expressed in terms of the relative quantity $\alpha_{\mathrm{H}}/p_0^2$,
for which what matters is not the absolute values but the
relative transition strengths $S_{\mathrm{H}}(E)/p_0^2$ or
$\braket{\nu}{\hat{P}_{\mathrm{H}}}{0}^{2}/\langle \hat{P}_{\mathrm{H}} \rangle^2$ normalized
by the ground-state matrix element $p_0=\langle \hat{P}_{\mathrm{H}} \rangle$.
We envisage application of the present method to experimental studies
using two-neutron transfer reactions, such as $(p,t)$ and
$(t,p)$ reactions. As discussed in Ref.\cite{Yoshida62,Broglia73,Brink-Broglia}, the cross sections of a two-neutron
stripping and pick-up reactions may be related to matrix elements
of the pair-addition
and the pair-removal operators if one assumes a one-step process. In this simple picture,
the transition amplitude for the ground-to-ground transition $N \rightarrow N\pm 2$
may correspond to
matrix elements $\braket{0_{\mathrm{gs}}^+, N+2}{\hat{P}_{\mathrm{ad}}}{0_{\mathrm{gs}}^+, N}$
and $\braket{0_{\mathrm{gs}}^+, N-2}{\hat{P}_{\mathrm{rm}}}{0_{\mathrm{gs}}^+, N}$, which are approximated by
the expectation value $\braket{\Psi_0}{\hat{P}_{\mathrm{ad}}}{\Psi_0}=\braket{\Psi_0}{\hat{P}_{\mathrm{rm}}}{\Psi_0}=\langle \hat{P}_{\mathrm{H}} \rangle/2$
for the HFB ground state $\ket{\Psi_0}$.
Thus the ground-to-ground pair transfer is expected to provide the order parameter $p_0=\langle \hat{P}_{\mathrm{H}} \rangle$.
Similarly the two-neutron transfer cross sections populating excited $0^+$ states
may be related to matrix elements $\braket{0_{\nu}^+, N+2}{\hat{P}_{\mathrm{ad}}}{0_{\mathrm{gs}}^+, N}$
and $\braket{0_{\nu}^+, N-2}{\hat{P}_{\mathrm{rm}}}{0_{\mathrm{gs}}^+, N}$, which correspond to
the matrix elements $\braket{\nu}{\hat{P}_{\mathrm{ad}}}{0}$ and $\braket{\nu}{\hat{P}_{\mathrm{rm}}}{0}$
for the QRPA excited states $\ket{\nu}$. We thus expect that
two-neutron stripping and pick-up reactions populating excited $0^+$ states
provide the pair-addition and pair-removal strength functions, $S_{\mathrm{ad}}(E)$
and $S_{\mathrm{rm}}(E)$. The Higgs strength function $S_{\mathrm{H}}(E)$ is then evaluated as a
sum of $S_{\mathrm{ad}}(E)$ and $S_{\mathrm{rm}}(E)$ (cf. Fig.\ref{Higgs-strength}). Here
the transition to the low-lying pair vibration needs to be treated separately
since in this case the Higgs transition amplitude is a coherent
sum of the amplitudes $\braket{\nu}{\hat{P}_{\mathrm{ad}}}{0}$ and $\braket{\nu}{\hat{P}_{\mathrm{rm}}}{0}$.
Once the Higgs strength function is obtained in this way, the Higgs polarizability $\alpha_{\mathrm{H}}$
can be evaluated. Consequently, combining $p_0$ and $\alpha_{\mathrm{H}}$ thus obtained, we
may estimate the neutron pair condensation energy $U_{\mathrm{cond}}^{\mathrm{Higgs}}$ using Eq.(\ref{cond-eng}).
Note that
this argument based on the one-step picture may be simplistic from the viewpoint of
reaction mechanisms, such as finite-range effects and two-step processes \cite{Potel2013}. A more quantitative analysis of two-nucleon transfer reactions remains to be explored in detail; however,
this is beyond the scope of the present work, and we leave it for a future
study.
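As a rough numerical illustration of this last step (strength function $\rightarrow$ polarizability $\rightarrow$ condensation energy), the short Python sketch below integrates a toy Higgs strength function with an inverse energy weight and converts the result into a condensation-energy estimate. The normalization of $\alpha_{\mathrm{H}}$ and the quartic-potential relation $U_{\mathrm{cond}}\simeq p_0^2/(8\alpha_{\mathrm{H}})$ used below are illustrative assumptions on our part and stand in for the actual prescription of Eq.(\ref{cond-eng}); the toy numbers carry no physical significance.
\begin{verbatim}
import numpy as np

def higgs_polarizability(E, S_H, dE):
    # Inverse energy-weighted integral of the Higgs strength function.
    # The factor of 2 follows the usual static-polarizability convention;
    # the normalization should be matched to Eq.(cond-eng) of the text.
    return 2.0 * np.sum(S_H / E) * dE

def condensation_energy(alpha_H, p0):
    # Quartic-potential estimate: for U(p) = -a p^2 + b p^4 the depth of the
    # minimum equals U''(p0) * p0^2 / 8.  Here U''(p0) ~ 1/alpha_H is assumed,
    # an illustrative choice rather than the exact relation of the text.
    return p0 ** 2 / (8.0 * alpha_H)

# Toy strength distribution: a narrow low-lying pair vibration plus a broad
# high-lying bump (arbitrary units, energies in MeV).
dE = 0.01
E = np.arange(0.5, 30.0, dE)
S_H = (40.0 * np.exp(-0.5 * ((E - 4.0) / 0.3) ** 2)
       + 15.0 * np.exp(-0.5 * ((E - 14.0) / 4.0) ** 2))

alpha_H = higgs_polarizability(E, S_H, dE)
print(condensation_energy(alpha_H, p0=20.0))   # order-of-magnitude toy estimate
\end{verbatim}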
\section{Conclusions}
We have discussed a new idea of the Higgs response, which probes the Cooper pair condensate
in pair correlated nuclei. It is based on the analogy between the pair correlated nuclei and the
superfluid/superconducting state in infinite Fermi systems. The latter systems exhibit
characteristic collective excitation modes which emerge as a consequence of the spontaneous breaking of the U(1) gauge symmetry: the Higgs mode, the
amplitude oscillation of the pair condensate, and
the Nambu-Goldstone mode (or the Anderson-Bogoliubov
mode in neutral superfluid systems), the phase oscillation of the pair condensate.
In the present study, we explored a possible counterpart of the Higgs mode in finite nuclei.
For this purpose, we considered a new kind of pair-transfer operator, the Higgs operator, which probes the
amplitude motion of the pair condensate. We then described the strength function for the Higgs operator by means of
the quasiparticle random phase approximation for the Skyrme-Hartree-Fock-Bogoliubov model. Using the
numerical examples performed for neutron pairing in Sn isotopes, we have shown that a strong
Higgs response is seen not only in the low-lying pair vibration but also in high-lying pair vibrations
which have excitation energies up to around 20 MeV. It is more appropriate to deal with the strength distribution
rather than to consider a single pure Higgs mode. We also find that the calculated Higgs strength function (the one for the Higgs operator) is very close to the incoherent sum of the strength functions defined separately for the pair-addition
and pair-removal operators, except for the strength of the low-lying pair vibration mode.
This indicates that the Higgs response
can be evaluated through the pair-addition and pair-removal responses of nuclei.
The Higgs response provides us with
the pair condensation energy, the energy gain caused by the Cooper pair condensate. Considering
an effective potential curve $U(p)$ as a function of the Cooper pair condensate
$p=\langle \hat{P}_{\mathrm{H}} \rangle$, the Hellmann-Feynman theorem relates the curvature of the potential
to the static polarizability $\alpha_{\mathrm{H}}$ with respect to the Higgs operator, and the latter
is directly obtained
as the inverse energy-weighted sum of the Higgs strength function.
Furthermore, we have shown using the constrained HFB calculations that the effective
potential is well approximated by a quartic function, as in the Ginzburg-Landau
phenomenology. Utilizing a simple relation valid for the quartic potential, we arrive at an expression
of the pair condensation energy, Eq.(\ref{cond-eng}), which can be evaluated
in terms of the Higgs polarizability $\alpha_{\mathrm{H}}$, an integrated Higgs response. The validity and accuracy of
this evaluation are demonstrated using systematic numerical calculations performed for even-even Sn isotopes.
Possible application of this scheme to pair-transfer experiments is a subject
to be studied in the future.
\section*{Acknowledgement}
The authors deeply thank S.~Shimoura, S.~Ota, M.~Dozono and K.~Yoshida for stimulating and fruitful
discussions held in various stages of the present study. This work was supported by the
JSPS KAKENHI (Grant No. 20K03945).
\section*{Appendix}
The matrix elements of the one-body operators $\hat{\rho}_\alpha(\mbox{\boldmath $r$})=\hat{\rho}(\mbox{\boldmath $r$}),\hat{P}(\mbox{\boldmath $r$}),\hat{P}^\dagger(\mbox{\boldmath $r$})$
for two-quasiparticle states $\ket{ij}$, which appear in the unperturbed response function
in the spectral representation, Eq.(\ref{resp-fn}),
are given by
\begin{align}
\braket{ij}{\hat{\rho}_\alpha(\mbox{\boldmath $r$})}{0}& =\sum_\sigma \phi_i^\dagger(\mbox{\boldmath $r$}\sigma){{\cal{A}}}_{\alpha} \bar{\phi}_{\tilde{j}}(\mbox{\boldmath $r$}\sigma), \\
\braket{0}{\hat{\rho}_\alpha(\mbox{\boldmath $r$})}{ij}&=\sum_\sigma \bar{\phi}_{\tilde{j}}^\dagger(\mbox{\boldmath $r$}\sigma){{\cal{A}}}_{\alpha} \phi_{i}(\mbox{\boldmath $r$}\sigma),
\end{align}
where $\bar{\phi}_{\tilde{j}}(\mbox{\boldmath $r$}\sigma)=(-\varphi_{2,j}^*(\mbox{\boldmath $r$}\tilde{\sigma}),\varphi_{1,j}^*(\mbox{\boldmath $r$}\tilde{\sigma}))^T$.
The $2 \times 2$ matrix ${{\cal{A}}}_\alpha$ is
\begin{equation} \label{qp-matel}
{{\cal{A}}}_\alpha=
\left(
\begin{array}{cc}
2 & 0 \\
0 & 0
\end{array}
\right)
,
\left(
\begin{array}{cc}
0 & 0 \\
1 & 0
\end{array}
\right)
,
\left(
\begin{array}{cc}
0 &1 \\
0 & 0
\end{array}
\right)
\end{equation}
for the density $\hat{\rho}_\alpha(\mbox{\boldmath $r$})=\hat{\rho}(\mbox{\boldmath $r$}),\hat{P}(\mbox{\boldmath $r$}),\hat{P}^\dagger(\mbox{\boldmath $r$})$, respectively,
which appear in the l.h.s. of Eq.(\ref{rpa}). For the density $\hat{\rho}_\beta(\mbox{\boldmath $r$})$ which causes
the perturbation,
the matrix ${{\cal{B}}}_\beta$ is
${{\cal{B}}}_{\beta}=
\left(
\begin{array}{cc}
1 & 0 \\
0 & -1
\end{array}
\right)
$
for $\hat{\rho}_\beta(\mbox{\boldmath $r$})=\hat{\rho}(\mbox{\boldmath $r$})$, and the same as Eq.(\ref{qp-matel}) for
$\hat{\rho}_\beta(\mbox{\boldmath $r$})=\hat{P}(\mbox{\boldmath $r$})$ and $\hat{P}^\dagger(\mbox{\boldmath $r$})$.
The continuous nature of the unbound quasiparticle states can be taken into account in the
unperturbed response function by using the Green's function for the quasiparticle states:
\begin{align}
R_{0}^{\alpha\beta}(\mbox{\boldmath $r$},\mbox{\boldmath $r$}',\omega) =\frac{1}{4\pi i}
\int_C dE \sum_{\sigma\sigma'}
& \left\{ {\mathrm Tr}{\cal{A}}_\alpha {\cal{G}}_0(\mbox{\boldmath $r$}\sigma\mbox{\boldmath $r$}'\sigma',E+\hbar\omega+i\epsilon){\cal{B}}_\beta
{\cal{G}}_0(\mbox{\boldmath $r$}'\sigma'\mbox{\boldmath $r$}\sigma,E) \right. \nonumber \\
& +
\left.
{\mathrm Tr}{\cal{A}}_\alpha {\cal{G}}_0(\mbox{\boldmath $r$}\sigma\mbox{\boldmath $r$}'\sigma',E){\cal{B}}_\beta
{\cal{G}}_0(\mbox{\boldmath $r$}'\sigma'\mbox{\boldmath $r$}\sigma,E-\hbar\omega-i\epsilon)
\right\}
\end{align}
where ${\cal{G}}_0(\mbox{\boldmath $r$}'\sigma'\mbox{\boldmath $r$}\sigma,E)$ is the Green's function for the HFB equation (\ref{HFB}), and
$\int_C dE$ is a contour integral in the complex $E$ plane. Details are given in Ref.\cite{Matsuo2001}.
|
{
"arxiv_id": "2302.14143",
"language": "en",
"timestamp": "2023-03-01T02:02:25",
"url": "https://arxiv.org/abs/2302.14143",
"yymm": "2302"
} | \section{Introduction}
Given a finite set $X$ and a cyclic group $\langle g \rangle$ of order $n$ that acts on $X$, we can consider the cardinality of the fixed point set $X^{g^d}$, for a positive integer $d$. The triple $(X, \langle g \rangle,f(q))$, where $f(q) \in \mathbb{N}[q]$, is said to exhibit the cyclic sieving phenomenon (CSP) if $\vert X^{g^d} \vert =f(\omega^d)$ for all $d\geq 0$, where $\omega$ is a primitive $n$th root of unity. The cyclic sieving phenomenon was introduced by Reiner, Stanton and White in 2004 \cite{rsw} and has been widely studied since then, in various settings (see \cite{sagan} for details).
Several authors have produced CSPs for various sets of Young tableaux (see, for instance, \cite{linusson, bms, fontaine, gaetz, ohpark1, ohpark2, pechenik, psv, rhoades}). Candidates for cyclic sieving polynomials are generally $q$-analogues of a natural counting formula (for example, the hook-length formula for standard tableaux) and a cyclic action on standard or semistandard tableaux is given by Sch\"utzenberger's jeu-de-taquin promotion operator $\partial$ (\cite{schutz1,schutz2}). One roadblock is that the order of promotion (the least positive integer that fixes all tableaux in the set under $\partial$) is unknown for most shapes. There are also situations where the order of promotion is known but the most natural cyclic sieving polynomial does not yield a CSP. For example, the order of promotion for staircase tableaux was given in \cite{ponwang} but, so far, a CSP for staircase tableaux remains elusive.
For standard rectangular tableaux of shape $\lambda=(a^b)$, the order of promotion is $ab$ \cite{haiman} and if $X=SSYT(a^b,k)$ is the set of semistandard rectangular tableaux with entries less than or equal to $k$, the order of promotion on $X$ is $k$. Rhoades proved CSPs for both standard and semistandard rectangular tableaux \cite{rhoades}. We give a summary of CSP results for semistandard tableaux thus far. For a list of CSPs in other settings, see \cite[Table 1]{linusson}.
\bigskip
\noindent (1) Rhoades \cite{rhoades} proved that $(SSYT(\lambda,k),\langle \partial \rangle, q^{-\kappa(\lambda)}s_{\lambda}(1,q,\ldots,q^{k-1}))$ is a CSP triple, where $\lambda$ is a rectangular partition, $s_{\lambda}(1,q,\ldots,q^{k-1})$ is a principal specialization of the Schur polynomial and $\kappa(\lambda)=\sum_i(i-1)\lambda_i$.
\bigskip
\noindent (2) In \cite{fontaine}, the authors showed that $(SSYT(a^b,\gamma), \langle \partial^d \rangle, q^*K_{a^b,\gamma}(q))$ is a CSP triple, refining Rhoades's result. Here $SSYT(a^b,\gamma)$ is the set of rectangular tableaux with fixed content $\gamma$, with $\gamma$ invariant under the $d$th cyclic shift, where $d$ is the frequency of $\gamma$---the number of cyclic shifts to return $\gamma$ to itself---and $q^*K_{a^b,\gamma}(q)$ is a Kostka-Foulkes polynomial up to a power of $q$.
\bigskip
\noindent (3) A CSP for semistandard hook tableaux with content $\mu$ was given in \cite{bms} where it is shown that $(SSYT((n-m,1^m),\mu), \langle \partial^d \rangle, f(q))$
is a CSP triple, with cyclic sieving polynomial $f(q)=\left[ \begin{array}{c}z(\mu)-1 \cr m \cr \end{array}\right]_q.$ Here $z(\mu)$ is the number of non-zero entries in $\mu$.
\bigskip
\noindent (4) Using the cyclic action $\tt{c}$ arising from the $U_q(\mathfrak{sl}_n)$ crystal structure for semistandard tableaux, Oh and Park \cite{ohpark1} proved
$(SSYT(\lambda),\langle \mathtt{c} \rangle, q^{-\kappa(\lambda)}s_\lambda(1,q,\ldots,q^{k-1}))$
exhibits the CSP when the length of $\lambda$ is less than $k$ and $\mbox{gcd}(k,\vert \lambda \vert)=1$. The result was extended to skew shapes in \cite{alexander}.
\bigskip
\noindent (5) In \cite{linusson}, the authors gave a CSP for semistandard tableaux of stretched hook shape $\lambda=((a+1)n,n^b)$ and rectangular content $\mu=(n^{a+b+1})$. They proved that
$(SSYT(((a+1)n,n^b),\mu), \langle \partial \rangle, f(q))$ exhibits the CSP, where $$f(q)=\prod_{1 \leq i \leq a}\prod_{1 \leq j \leq b} \frac{[i+j+n-1]_q}{[i+j-1]_q}=q^{-n\binom{b+1}{2}}\widetilde{K}_{\lambda,\mu}(q).$$ Here $\widetilde{K}_{\lambda,\mu}(q)$ is a modified Kostka-Foulkes polynomial.
In this paper, we give a CSP for the set of semistandard tableaux $SSYT(\lambda,\mu)$ of shape $\lambda=(m,n^b)$ and content $\mu=(\mu_1,\ldots,\mu_{b+2})$, where $m, n, b$ are positive integers. The shape is a more general version of the stretched hook shape $\lambda=((a+1)n,n^b)$ in (5) and our content is a $(b+2)$-tuple whereas the content in (5) is rectangular of the form $(n^{a+b+1})$. The CSP polynomial is a $q$-binomial coefficient, which is a modified Kostka-Foulkes polynomial. Our CSP coincides with (5) in the case where $a=1$; that is when $\lambda=(2n,n^b)$ and $\mu=(n^{b+2})$.
After reviewing the necessary definitions and results in Sections \ref{sec:prelim1} and \ref{sec:prelim2} we prove our main result in Section \ref{sec:main}, which is that $(SSYT(\lambda,\mu), \langle \partial^{b+2} \rangle, q^{-n\binom{b+1}{2}}\widetilde{K}_{\lambda,\mu}(q))$ is a CSP, for $\lambda=(m,n^b)$, $\mu=(\mu_1,\ldots,\mu_{b+2})$, and $\widetilde{K}_{\lambda,\mu}(q)$ a modified Kostka-Foulkes polynomial.
\section{Semistandard tableaux and jeu de taquin promotion}\label{sec:prelim1}
A weakly decreasing $r$-tuple $\lambda=(\lambda_1,\ldots,\lambda_r)$ is a partition of a positive integer $n$ if $\lambda_i \geq 0$ and $\sum_{i=1}^r \lambda_i=n$. The Young diagram of shape $\lambda$ consists of $n$ boxes in $r$ left-justified rows with the $i$th row containing $\lambda_i$ boxes. A $\lambda$-tableau $T$ is obtained by filling the Young diagram with positive integers. A $\lambda$-tableau is {\em semistandard} if the entries in its columns are strictly increasing from top to bottom and the entries in its rows are weakly increasing from left to right. If $T$ contains entries from the set $\{1, \ldots, k\}$, the {\em content} of $T$ is the $k$-tuple $\mu=(\mu_1,\ldots,\mu_k)$ where $\mu_i$ is equal to the number of entries equal to $i$ in $T$. We will denote the set of semistandard $\lambda$-tableau with content $\mu$ by $SSYT(\lambda, \mu)$.
\begin{exa} \label{firstexa} The tableau $T=\begin{ytableau}1& 1 & 2 & 3 & 5 \cr 2 & 3 & 4 \cr 3 \cr \end{ytableau}$ belongs to $SSYT(\lambda,\mu)$ where $\lambda=(5,3,1)$ and $\mu=(2,2,3,1,1).$
\end{exa}
{\em Jeu-de-taquin promotion} (\cite{schutz1,schutz2}) is a combinatorial algorithm that gives an action on semistandard tableaux. We will use the version defined in \cite{bms}, which is the inverse of the operation used in \cite{linusson}. For a semistandard tableau $T$ with entries in $\{1,\ldots,k\}$, first replace each entry equal to $k$ with a dot. If there is a dot in the figure that is not contained in a continuous strip of dots in the northwest corner, choose the westernmost dot and slide it north or west until it lands in a connected component of dots in the northwest corner according to the following rules:
$$ \begin{ytableau} a & b \cr \bullet \cr \end{ytableau} \rightarrow \begin{ytableau} \bullet & b \cr a \cr \end{ytableau}\ ; \ \ \begin{ytableau} a & \bullet \cr b \cr\end{ytableau} \rightarrow \begin{ytableau} \bullet & a \cr b \cr \end{ytableau}\ ; \ \ \begin{ytableau} a & b \cr c & \bullet \cr \end{ytableau} \rightarrow
\left\{\begin{array}{ll}
\begin{ytableau} a & \bullet \cr c & b \cr\end{ytableau} & \text{if $c \leq b$} \\ \\
\begin{ytableau} a & b \cr \bullet & c \cr \end{ytableau} & \text{if $b<c$.}
\end{array}\right. $$
Repeat for the remaining dots, then replace each dot with $1$ and increase all other entries by one, giving $\partial(T)$, which is semistandard. If $T$ has content $\mu=(\mu_1,\ldots,\mu_ k)$, then $\partial(T)$ has content $(\mu_k,\mu_1,\ldots,\mu_{k-1})$.
\begin{exa} Below is an illustration of jeu-de-taquin promotion.
$$T=\begin{ytableau} 1 & 1 & 2 & 3 \cr 2 & 3 & 4 & 5 \cr 5 & 5 \cr\end{ytableau} \rightarrow \begin{ytableau} 1 & 1 & 2 & 3 \cr 2 & 3 & 4 & \bullet \cr \bullet & \bullet \cr \end{ytableau} \rightarrow \begin{ytableau} \bullet & 1 & 2 & 3 \cr 1 & 3 & 4 & \bullet \cr 2 & \bullet \cr \end{ytableau} \rightarrow \begin{ytableau}\bullet & \bullet & 2 & 3 \cr 1 & 1 & 4 & \bullet \cr 2 & 3 \cr \end{ytableau}$$
$$ \rightarrow \begin{ytableau} \bullet & \bullet & \bullet & 3 \cr 1 & 1 & 2 & 4 \cr 2 & 3 \cr \end{ytableau} \rightarrow \begin{ytableau} 1 & 1 & 1 & 4 \cr 2 & 2 & 3 & 5 \cr 3 & 4 \cr \end{ytableau} =\partial(T)$$
\end{exa}
The {\em order of promotion} of a tableau $T$ is the least positive integer $r$ such that $\partial^r(T)=T$. If a set $X$ of semistandard tableaux is invariant under $\partial$, the least positive integer $r$ such that $\partial^r(T)=T$ for all $T \in X$ is the {\em order of promotion} on $X$.
\section{Kostka-Foulkes polynomials}\label{sec:prelim2}
The \textit{Kostka-Foulkes polynomials}, denoted $K_{\lambda, \mu}(q)$, relate Hall-Littlewood polynomials to Schur polynomials (see \cite{butler} for a comprehensive overview). They generalize the Kostka coefficients $K_{\lambda \mu}$, since $K_{\lambda,\mu}(1)=K_{\lambda \mu}$, which is the number of semistandard tableaux of shape $\lambda$ and content $\mu$. It was shown by Lascoux and Sch\"utzenberger \cite{lascoux} that the Kostka-Foulkes polynomials can be found using a statistic, called {\em charge}, which had previously been conjectured by Foulkes \cite{foulkes}:
$$K_{\lambda,\mu}(q)=\sum_{T \in SSYT(\lambda,\mu)}q^{charge(T)}.$$
We will work with \textit{modified Kostka-Foulkes polynomials} $\widetilde{K}_{\lambda,\mu}(q)$, which are related to the Kostka-Foulkes polynomials by the relation $\widetilde{K}_{\lambda,\mu}(q)=q^{\kappa(\mu)}K_{\lambda,\mu}(q^{-1})$, where $\kappa(\mu)=\sum_i(i-1)\mu_i$. These can be obtained via a statistic on tableaux called {\em cocharge}, denoted $cc(T)$, which we will define shortly: $$\widetilde{K}_{\lambda, \mu}(q)=\sum_{T \in SSYT(\lambda,\mu)}q^{cc(T)}.$$
Given a permutation $w=w_1\ldots w_n \in \mathfrak{S}_n$, where $\mathfrak{S}_n$ is the symmetric group on $n$ letters, define the {\em cocharge} of $j$ in $w$ recursively as follows:
\begin{equation*}
cc(w,j) := \left\{
\begin{array}{ll}
0 & \quad \text{if} \, j=1 \\
cc(w,j-1)+1 & \quad \text{if $j$ precedes $j-1$ in} \, w \\
cc(w,j-1) & \quad \text{otherwise}.
\end{array}
\right.
\end{equation*}
The cocharge of the word $w$ is $cc(w)=\sum_{j=1}^n cc(w,j)$ and $charge(w)=\binom{n}{2} - cc(w)$.
The content of a word $w$ is $\mu=(\mu_1,\ldots,\mu_n)$, where $\mu_i$ records the number of entries $i$ in $w$. We can define cocharge for a word $w$ whenever its content $\mu$ is a partition. To do so, obtain $\mu_1$ standard subwords from $w$ in the following way: start by selecting the rightmost $1$, then move left to find the rightmost $2$ that precedes the chosen $1$; if there is no $2$ preceding the $1$, loop around to the beginning of the word to choose the rightmost $2$. Continue for $3,4,$ etc., until the largest entry in the word has been selected. The selected entries, listed in the order they appear in $w$, form the first standard subword $w^{(1)}$. Delete the entries in $w^{(1)}$ from $w$ and repeat the process with the word consisting of the remaining entries to obtain $w^{(2)}$. Continue until no entries in the word remain, forming $\mu_1$ subwords. Each of the subwords $w^{(i)}$ is a permutation, and the lengths of $w^{(1)}, \ldots,w^{(\mu_1)}$ are the parts of the conjugate partition $\mu^t$. Define the cocharge of $w$ as $cc(w)=\sum_{i=1}^{\mu_1} cc(w^{(i)})$.
To get the cocharge of a tableau, we work with its reading word $\mbox{rw}(T)$, which is obtained by listing the entries of $T$, left to right, across the rows, starting with the bottom row of $T$. Define
$cc(T)=cc(\mbox{rw}(T))$. For the cocharge of $T$ to be well-defined, it is necessary for the content of $T$ to be a partition. However, $\widetilde{K}_{\lambda,\mu}=\widetilde{K}_{\lambda,\sigma \mu}$, where $\sigma$ is a permutation and $\sigma (\mu_1,\ldots,\mu_n)=(\mu_{\sigma(1)}, \ldots, \mu_{\sigma(n)})$ so this does not impede the use of cocharge to find the modified Kostka-Foulkes polynomial.
\begin{exa} Let $T=\begin{ytableau} 1 & 1 & 1 & 1 & 2 & 2 & 3 & 4 \cr 2 & 2 & 3 \cr 3 & 4 & 4 \cr \end{ytableau}$ with $\mbox{rw}(T)=34422311112234$. The content of $w$, which is the content of $T$, is $\mu=(4,4,3,3)$. The four standard subwords obtained from $w$ are:
$w^{(1)}=3214, \ w^{(2)}=4213, \ w^{(3)}=4312, \ w^{(4)}=12$. Then $cc(w^{(1)})=cc(w^{(1)},1)+cc(w^{(1)},2)+cc(w^{(1)},3)+cc(w^{(1)},4)=0+1+2+2=5$, $cc(w^{(2)})=0+1+1+2=4, cc(w^{(3)})=0+0+1+2=3,
cc(w^{(4)})=0+0=0$ so $cc(w)=12$. \end{exa}
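For readers who wish to experiment with these definitions, the short Python sketch below implements the subword extraction and the cocharge statistic exactly as described above (the wrap-around step is coded as taking the rightmost available value when none lies to the left of the previous choice); the function names are ours. Run on the word of this example it returns the subwords $3214$, $4213$, $4312$, $12$ and $cc(w)=12$.
\begin{verbatim}
def cocharge_perm(w):
    # Cocharge of a permutation of {1,...,len(w)} in one-line notation.
    pos = {v: i for i, v in enumerate(w)}
    cc, total = 0, 0
    for j in range(2, len(w) + 1):
        if pos[j] < pos[j - 1]:       # j precedes j-1 in w
            cc += 1
        total += cc
    return total

def standard_subwords(w):
    # Split w (content must be a partition) into standard subwords.
    avail = list(enumerate(w))        # (position, value) pairs, in word order
    subwords = []
    while avail:
        k = max(i for i, (_, v) in enumerate(avail) if v == 1)
        chosen = [avail.pop(k)]       # rightmost 1
        t = 2
        while any(v == t for _, v in avail):
            p_prev = chosen[-1][0]
            cand = [i for i, (p, v) in enumerate(avail) if v == t and p < p_prev]
            if not cand:              # wrap around: rightmost t in the word
                cand = [i for i, (p, v) in enumerate(avail) if v == t]
            chosen.append(avail.pop(max(cand)))
            t += 1
        subwords.append([v for _, v in sorted(chosen)])
    return subwords

def cocharge_word(w):
    return sum(cocharge_perm(sw) for sw in standard_subwords(w))

w = [3, 4, 4, 2, 2, 3, 1, 1, 1, 1, 2, 2, 3, 4]
print(standard_subwords(w))   # [[3,2,1,4], [4,2,1,3], [4,3,1,2], [1,2]]
print(cocharge_word(w))       # 12
\end{verbatim}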
\section{Main result}\label{sec:main}
Our aim in this section is to prove a CSP for semistandard tableaux of shape $\lambda=(m,n^b)$ and content $\mu=(\mu_1,\ldots,\mu_{b+2})$. For $\lambda$ and $\mu$ so defined, let $$\displaystyle \beta(\lambda,\mu)=m-n-\sum_{i=1}^{b+2}\gamma_i, \mbox{ where }
\gamma_i= \begin{cases}
\mu_i-n &\mbox{if } \mu_i>n \\
0 & \text{otherwise}.
\end{cases} $$
If $\mu_i>n$, at least $\gamma_i=\mu_i-n$ entries equal to $i$ are forced into the last $m-n$ columns of $T\in SSYT(\lambda,\mu)$ and we will refer to these as {\em forced entries}. Thus $\sum_{i=1}^{b+2}\gamma_i$ is the number of entries in the last $m-n$ columns that are fixed and there are $m-n-\sum_{i=1}^{b+2} \gamma_i$ boxes in the last $m-n$ columns for which the entries can vary. The entries remaining in the last $m-n$ columns of $T$ after deleting $\gamma_i$ entries equal to $i$, for each $1 \leq i \leq b+2$, will be called the {\em free entries} in $T$. Each tableau $T \in SSYT(\lambda,\mu)$ has $\beta(\lambda,\mu)$ free entries, which belong to the set $\{2,\ldots,b+2\}$. Furthermore, since the sum $\sum_{i=1}^{b+2}\gamma_i$ is the same for any permutation $\sigma \mu$ of the content, any tableau in $SSYT(\lambda,\sigma \mu)$ also has $\beta(\lambda,\mu)$ free entries.
The free entries in $T \in SSYT(\lambda,\mu)$ can also be determined by considering a multiset of elements from $\{2,\ldots,b+2\}$ that are missing from the first $n$ columns. Each of the first $n$ columns of $T$ is necessarily missing one element from $\{1,\ldots,b+2\}$ and the collection of these elements forms a multiset. If $\mu_i <n$, then $i$ is missing from at least $n-\mu_i$ of the first $n$ columns in any tableau, so for each $i$ in the multiset with $\mu_i<n$, remove $n-\mu_i$ entries equal to $i$ to get a multiset $A_T$ of elements from $\{2,\ldots,b+2\}$. The set $A_T$ consists precisely of the free elements in $T$ so $\beta(\lambda,\mu)=\vert A_T \vert=n-\sum_{\mu_i<n} (n-\mu_i)$. Let $\mathcal{A}$ denote the set of $\beta(\lambda,\mu)$-element multisets of $\{2,\ldots,b+2\}$ and define a map $$\phi:SSYT(\lambda,\mu) \rightarrow \mathcal{A} \mbox{ where } \phi(T)=A_T.$$
Since $\vert A_T \vert \leq n$, for $T \in SSYT(\lambda,\mu)$, the following lemma is immediate.
\begin{lem}\label{betalem}
Suppose that $\lambda=(m,n^b)$ and $\mu=(\mu_1,\ldots,\mu_{b+2})$. Then $\beta(\lambda,\mu) \leq n$.
\end{lem}
\begin{exa} Let $T =\ \ \begin{ytableau}
1 & 1 & 1 & 1 & 1 & {\bf 1} & 2 & 4 & 4& {\bf 4} & {\bf 4} & {\bf 6} \\
2 & 2 & 2 & 3 & 3 \\
3 & 3 & 4 & 4 & 4 \\
5 & 5 & 5 & 5 & 5 \\
6 & 6 & 6 & 6 & 6 \\
\end{ytableau}\ ,$
\noindent where $\lambda=(12,5^4)$ and $\mu=(6,4,4,7,5,6)$. Then $\gamma_1=\gamma_6=1,\gamma_4=2, \gamma_2=\gamma_3=\gamma_5=0$ and $\beta(\lambda,\mu)=3$. Entries corresponding to $\gamma_1,\gamma_4,\gamma_6$, which are forced into the top row, are boldfaced in the tableau. The missing elements from the first five columns of $T$ are $\{4,4,3,2,2\}$. Since $\mu_2, \mu_3<5$ and $n-\mu_2=n-\mu_3=1$, we remove both a $3$ and a $2$ to get $\phi(T)=A_T=\{4,4,2\}$. These are the free entries in the arm of the first row, which are not boldfaced. \end{exa}
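The quantities in this example are easy to check by machine. The Python sketch below (names ours) computes the content, the numbers $\gamma_i$, $\beta(\lambda,\mu)$, and the multiset $A_T$ directly from the tableau, reproducing $\beta(\lambda,\mu)=3$ and $\phi(T)=\{4,4,2\}$.
\begin{verbatim}
from collections import Counter

def free_entry_multiset(T, n, b):
    # A_T: the element of {1,...,b+2} missing from each of the first n columns,
    # minus n - mu_i copies of every value i with mu_i < n.
    mu = Counter(x for row in T for x in row)
    missing = Counter()
    for c in range(n):
        col = {row[c] for row in T if c < len(row)}
        missing.update(set(range(1, b + 3)) - col)
    for i in range(1, b + 3):
        if mu[i] < n:
            missing[i] -= n - mu[i]
    return sorted(missing.elements(), reverse=True)

T = [[1, 1, 1, 1, 1, 1, 2, 4, 4, 4, 4, 6],
     [2, 2, 2, 3, 3],
     [3, 3, 4, 4, 4],
     [5, 5, 5, 5, 5],
     [6, 6, 6, 6, 6]]
m, n, b = 12, 5, 4
mu = Counter(x for row in T for x in row)
gamma = {i: max(mu[i] - n, 0) for i in range(1, b + 3)}
beta = m - n - sum(gamma.values())
print(gamma)                         # {1: 1, 2: 0, 3: 0, 4: 2, 5: 0, 6: 1}
print(beta)                          # 3
print(free_entry_multiset(T, n, b))  # [4, 4, 2]
\end{verbatim}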
\noindent We will use the following straightforward fact in the proofs that follow.
\begin{lem} Suppose that $T$ is a semistandard tableau of shape $\lambda=(m,n^b)$ and content $\mu=(\mu_1,\ldots,\mu_{b+2})$. Then any row $i$ of $T$, where $i \geq 2$, can contain only the entries $i$ or $i+1$.
\end{lem}
\begin{lem} \label{lem:sigma} Suppose that $\lambda=(m,n^b)$, $\mu=(\mu_1,\ldots,\mu_{b+2})$ and that $T \in SSYT(\lambda,\mu)$. Let $\sigma=(2,3\ldots,b+2) \in \mathfrak{S}_{b+2}$. Then $\phi(\partial(T))=\sigma \phi(T)$. \end{lem}
\begin{proof}
Since jeu de taquin promotion permutes the content of $T$, $\vert \phi(\partial(T)) \vert=\vert \phi(T)\vert=\beta(\lambda,\mu)$. Let $f_i^T$ denote the number of $i$'s in the multiset $\phi(T)=A_T$ and $c_i^T$ the number of $i$'s in the first $n$ columns of $T$. If $\mu_i>n$ then $f_i^T=n- c_i^T$, and if $\mu_i<n$ then $f_i^T=n-c_i^T-(n-\mu_i)=\mu_i-c_i^T$. Any entry $i$ in $T$ with $3 \leq i \leq b+1$ either belongs to the first $n$ columns below the first row or in the last $m-n$ columns and after jeu de taquin promotion becomes an $i+1$ that belongs to the first $n$ columns below the first row of $\partial(T)$ or in the last $m-n$ columns of $\partial(T)$, respectively, so for $3 \leq i \leq b+1$, $c_i^T=c_{i+1}^{\partial(T)}$, which yields $f_i^T=f_{i+1}^{\partial(T)}$.
Entries equal to $2$ belong to either the first or second row of the tableau. Those below the first row or in the last $m-n$ columns move to boxes below the first row or in the last $m-n$ columns and become $3$'s under jeu de taquin promotion. Any of the first $n$ columns that contains a $2$ in the first row contains a $b+2$ in the last row. If there are also $(b+2)$'s in row $b+1$ with $1$'s above them in the first row, these are moved first by jeu de taquin promotion. Since the top entry in the column in this case is equal to $1$, then for some $i$ the $i$th row contains the entry $i$ while the row beneath it contains the entry $i+2$. Jeu de taquin promotion then commences in the following way, beginning with the leftmost column that contains a $b+2$ in row $b+1$:
$$\begin{ytableau} 1 & 1 \cr 2 & 2 \cr \vdots & \vdots \cr i & i \cr \scriptstyle{i+1} & \scriptstyle{i+2} \cr \vdots & \vdots \cr \scriptstyle{b+1} & \bullet \end{ytableau} \rightarrow \begin{ytableau} 1 & 1 \cr 2 & 2 \cr \vdots & \vdots \cr i & i \cr \scriptstyle{i+1} & \bullet \cr \vdots & \vdots \cr \scriptstyle{b+1} & \scriptstyle{b+1} \cr \end{ytableau}
\rightarrow \begin{ytableau} 1 & 1 \cr 2 & 2 \cr \vdots & \vdots \cr i & i \cr \bullet & \scriptstyle{i+1} \cr \vdots & \vdots \cr \scriptstyle{b+1} & \scriptstyle{b+1} \cr \end{ytableau}$$
The jeu de taquin promotion path then moves left across row $i+1$ to the first column without a dot and north to the first row, replacing a $1$ with a dot. Promotion behaves in the same way for all remaining columns that contain a $b+2$ in the last row and a $1$ in the top row. For columns that contain $(b+2)$'s in the last row and entries equal to $2$ in the first row, it is now the case that for the leftmost such column, any entry $i$ in the column has an $i-1$ immediately to its left. Thus, jeu de taquin promotion slides the box in the last row directly to row one, which moves the $2$ into the second row. Each remaining $b+2$ in row $b+1$ behaves in the same way, sliding each remaining $2$ into the second row. It follows that $c_2^T=c_3^{\partial(T)}$ so $f_i^T=f_{i+1}^{\partial(T)}$ for $2 \leq i \leq b+1$.
Since $\displaystyle \beta(\lambda,\mu)=\sum_{i=2}^{b+2} f_i^{\partial(T)}=f_2^{\partial(T)}+\sum_{i=2}^{b+1}f_i^T$ and $\displaystyle \beta(\lambda,\mu)=\sum_{i=2}^{b+2}f_i^T=f_{b+2}^T+\sum_{i=2}^{b+1}f_i^T$, we have $f_2^{\partial(T)}=f_{b+2}^T$ and the result follows. \end{proof}
The following lemma shows that if $\lambda=(m,n^b)$ and $T \in SSYT(\lambda,\mu)$ has fixed content $\mu=(\mu_1,\ldots,\mu_{b+2})$, then $T$ is uniquely determined by its free entries. It follows that the map $\phi:SSYT(\lambda,\mu) \rightarrow \mathcal{A}$ is a bijection.
\begin{lem}\label{lem:unique} Let $\lambda=(m,n^b)$, $\mu=(\mu_1,\ldots,\mu_{b+2})$ and let $T \in SSYT(\lambda,\mu)$. Then $T$ is uniquely determined by the multiset $A_T$. \end{lem}
\begin{proof}
For each $i$ with $\mu_i<n$, add $n-\mu_i$ elements $i$ to $A_T$ to produce an $n$-element multiset $X$. Since $T$ is semistandard, listing the elements of $X$ in weakly decreasing order completely determines the entries in $\{1,\ldots,b+2\}$ that are missing from each of the first $n$ columns of $T$. The complement of the $k$th element in the multiset gives the $k$th column of $T$. The remaining entries in $T$, determined from $\mu$, appear in weakly increasing order in the first row of $T$.
\end{proof}
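The proof is constructive, and the reconstruction can be carried out mechanically. The following minimal Python sketch (our names; entries are the values $1,\ldots,b+2$, lists are $0$-indexed) rebuilds the tableau of the example above from $A_T=\{4,4,2\}$ and $\mu=(6,4,4,7,5,6)$.
\begin{verbatim}
from collections import Counter

def tableau_from_multiset(A, mu, n, b):
    # Pad A with n - mu_i copies of each value i having mu_i < n, sort weakly
    # decreasingly, and let the k-th column be the complement of the k-th
    # element; the leftover content fills the first row in weakly increasing order.
    X = list(A)
    for i in range(1, b + 3):
        if mu[i - 1] < n:
            X += [i] * (n - mu[i - 1])
    X.sort(reverse=True)
    cols = [[v for v in range(1, b + 3) if v != x] for x in X]
    rows = [[col[r] for col in cols] for r in range(b + 1)]
    used = Counter(v for col in cols for v in col)
    arm = [i for i in range(1, b + 3) for _ in range(mu[i - 1] - used[i])]
    rows[0] += sorted(arm)
    return rows

print(tableau_from_multiset([4, 4, 2], mu=[6, 4, 4, 7, 5, 6], n=5, b=4))
# recovers the tableau of the example above
\end{verbatim}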
Since jeu de taquin promotion permutes the content of a tableau, the content of $T$ and $\partial^{b+2}(T)$ are equal, so $\partial^{b+2}:SSYT(\lambda,\mu) \rightarrow SSYT(\lambda,\mu)$ for $\lambda=(m,n^b)$ and $\mu=(\mu_1,\ldots,\mu_{b+2})$. By Lemma \ref{lem:sigma}, if $T \in SSYT(\lambda,\mu)$ and $\sigma=(2,\ldots,b+2)$ then $\phi(\partial^{(b+2)j}(T))=\sigma^{j}\phi(T)$. Thus $\phi({\partial^{(b+2)(b+1)}(T)})=\sigma^{b+1}\phi(T)=\phi(T)$ so $\partial^{(b+2)(b+1)}(T)=T$. If $1\leq j <b+1$ satisfies $\partial^{(b+2)j}(T)=T$ for all $T$, then $\sigma^j\phi(T)=\phi(T)$ for all $T$, which is not possible, so we have the following lemma.
\begin{lem} Let $\lambda=(m,n^b)$ and $\mu=(\mu_1,\ldots,\mu_{b+2})$. The {\em order of promotion} on $SSYT(\lambda,\mu)$ under the cyclic action of $\partial^{b+2}$ is equal to $b+1$. \end{lem}
For a positive integer $n$, let
$\displaystyle [n]_q=\frac{q^{n}-1}{q-1}$ and
$[n]_q!=[n]_q[n-1]_q \cdots [1]_q$. The {\em $q$-binomial
coefficients} are defined by
$\displaystyle \left[\begin{array}{c}n \cr k
\cr\end{array}\right]_q=\frac{[n]_q!}{[k]_q![n-k]_q!}.$
To prove our CSP, we will use the bijection $\phi$ and invoke the following theorem due to Reiner, Stanton and White.
\begin{thm}\label{thm:rsw} (Reiner, Stanton and White \cite{rsw}) Let $X$ be the set of $k$-element multisets of $\{1,2,\ldots,n\}$, let $C=\mathbb{Z}/n\mathbb{Z}$ act on $X$ via the permutation $\theta=(1,2,\ldots,n)$ and let $\displaystyle f(q)=\left[ \begin{array}{c} n+k-1 \cr k \cr \end{array} \right]_q$. Then $(X,C,f(q))$ exhibits the cyclic sieving phenomenon. \end{thm}
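This statement is easy to check by brute force in small cases. The Python sketch below (function names ours) builds the coefficient list of the $q$-binomial coefficient from the $q$-Pascal recursion and compares $f(\omega^d)$ with a direct count of fixed multisets.
\begin{verbatim}
import cmath
from itertools import combinations_with_replacement

def qbinom_coeffs(n, k):
    # Coefficient list of the Gaussian binomial [n choose k]_q, via the
    # q-Pascal recursion  [n,k]_q = [n-1,k-1]_q + q^k [n-1,k]_q.
    if k < 0 or k > n:
        return [0]
    if k == 0 or k == n:
        return [1]
    a, b = qbinom_coeffs(n - 1, k - 1), qbinom_coeffs(n - 1, k)
    out = [0] * (k * (n - k) + 1)
    for j, c in enumerate(a):
        out[j] += c
    for j, c in enumerate(b):
        out[j + k] += c
    return out

def csp_check(n, k):
    # Compare |X^{theta^d}| with f(omega^d) for X = k-multisets of {1,...,n},
    # theta = (1,2,...,n), and f(q) = [n+k-1 choose k]_q.
    coeffs = qbinom_coeffs(n + k - 1, k)
    f = lambda q: sum(c * q ** j for j, c in enumerate(coeffs))
    X = list(combinations_with_replacement(range(1, n + 1), k))
    for d in range(n):
        shifted = lambda m: tuple(sorted((v - 1 + d) % n + 1 for v in m))
        fixed = sum(1 for m in X if shifted(m) == m)
        value = f(cmath.exp(2j * cmath.pi * d / n))
        assert abs(value - fixed) < 1e-8, (d, fixed, value)
    return True

print(csp_check(5, 3), csp_check(6, 4))   # True True
\end{verbatim}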
\begin{thm}\label{thm:main} Let $\lambda=(m,n^b)$, $\mu=(\mu_1,\ldots,\mu_{b+2})$, let $C=\mathbb{Z}/(b+1)\mathbb{Z}$ act on $SSYT(\lambda,\mu)$ via $\partial^{b+2}$ and let $f(q)=\left[
\begin{array}{c} b+\beta(\lambda,\mu) \cr \beta(\lambda,\mu) \cr \end{array} \right]_q$ . Then $(SSYT(\lambda,\mu),C,f(q))$ exhibits the cyclic sieving phenomenon.\end{thm}
\begin{proof} Adjust the map $\phi$ by decrementing each of the entries in $\phi(T)$ to get $\psi:SSYT(\lambda,\mu) \rightarrow \mathcal{B}$, where $\mathcal{B}$ is the set of $\beta(\lambda,\mu)$-element multisets of $\{1,\ldots,b+1\}$. Let $\theta=(1,2,\ldots,n)$. By Lemma \ref{lem:sigma}, $\psi(\partial^{b+2}(T))=\theta(\psi(T))$ and $\psi(\partial^{(b+2)j}(T))=\theta^j(\psi(T))$.
We have $\partial^{(b+2)j}(T)=T$ if and only if $\psi(\partial^{(b+2)j}(T))=\psi(T)$, which, by the above, yields $\theta^j(\psi(T))=\psi(T)$. But, by Theorem \ref{thm:rsw}, $\vert \mathcal{B}^{\theta^j} \vert=f(\omega^j)$, where $\omega$ is a
primitive $(b+1)$-th root of unity. The result now follows. \end{proof}
\bigskip
We will now examine the relationship between the cyclic sieving polynomial in Theorem \ref{thm:main} and the modified Kostka-Foulkes polynomial $\widetilde{K}_{\lambda,\mu}$. To do so, we work with plane partitions to get a nice formula for cocharge in the case where $\lambda=(m,n^b)$ and $\mu=(\mu_1,\ldots,\mu_{b+2})$.
A \textit{plane partition} is an array $\pi = (\pi_{ij})_{i,j \geq 1}$ of nonnegative integers such that $\pi$ has finitely many nonzero entries and is weakly decreasing in rows and columns. If $\sum \pi_{ij} = n$, we write $\vert{\pi}\vert=n$ and say that $\pi$ is a plane partition of $n$. We can adjust the bijection $\phi$ between $SSYT(\lambda,\mu)$ and the set of $\beta(\lambda,\mu)$-element multisets of $\{2,\ldots,b+2\}$ by subtracting two from each of the entries in $\phi(T)$ and reversing the order to get a bijection between $SSYT(\lambda,\mu)$ and the set of one-row plane partitions $\pi=(\pi_1,\ldots,\pi_{\beta(\lambda,\mu)})$ with $\pi_1\leq b$ and $\beta(\lambda,\mu)$ columns; we will denote the image of $T$ under this bijection by $\pi_T$.
\bigskip
\begin{thm}\label{thm:cc} Let $\lambda=(m,n^b)$, let $\mu=(\mu_1,\ldots,\mu_{b+2})$ be a partition of $m+nb$ and let $T \in SSYT(\lambda,\mu)$. Then $\displaystyle \mbox{cc}(\mbox{rw}(T))=\vert \pi_T \vert + n \binom{b+1}{2}$, where
$\vert \pi_T \vert$ is the sum of the entries in $\pi_T$. \end{thm}
\begin{proof}
We will consider the contribution of a given entry $i$ in the tableau to the cocharge. An entry $i$ copies the cocharge contribution of $i-1$ in its subword if $i-1$ precedes $i$ in its subword (an $(i-1,i)$ pairing) and it increases cocharge otherwise (an $(i,i-1)$ pairing).
If an entry $i$ belongs to row $i$, each of the entries $1, \ldots, i-1$ appear above it in the same column so $i$ pairs with an $i-1$ in a row above it, giving associated subword $w=\cdots i (i-1) \cdots 3 2 1$. Thus, every entry $i$ in row $i$ contributes $i-1$ to the cocharge.
Any entry $i$ in row one, where $i \geq 2$, pairs as $(i-1,i)$ in the associated subword so copies the cocharge of $i-1$. Any subword containing a forced entry consists entirely of forced entries. Since there are $\mu_1-n$ forced $1$'s, there are $\mu_1-n$ subwords consisting of all forced entries, and these are of the form $w=12\cdots i$, so each forced entry contributes zero to the cocharge.
If $i-1$ belongs to one of the first $n$ columns, it either belongs to row $i-1$ or to row $i-2$. If $i$ is a free entry in row one, it cannot pair with a forced entry $i-1$ and, if it pairs with an entry $i-1$ in row $i-2$ or with a free entry $i-1$ in row one, this forces an entry $i$ in row $i-1$. It would then follow that $i-1$ pairs with an entry $i$ in the row beneath it, instead of the $i$ in the first row. Thus each free entry $i$ in row one pairs with an $i-1$ in row $i-1$, creating an $(i-1,i)$ pairing in the subword so copies the contribution of $i-1$ to the cocharge. Thus each free entry $i$ in row one contributes $i-2$ to the cocharge.
Finally, we will show that entries $i+1$ in row $i$ contribute $i-1$ to the cocharge. If $i+1$ pairs with an $i$ in the same row, this yields an $(i,i+1)$ pairing in the subword so $i+1$ contributes the same value to cocharge as $i$, which is $i-1$. If $i+1$ pairs with a free entry $i$ in row one this yields an $(i+1,i)$ pairing in the subword so that $i+1$ increases the contribution of the free entry, giving a contribution of $i-1$. The last case is when $i+1$ pairs with an $i$ in row $i-1$ creating an $(i,i-1)$ pairing in the subword so increasing the contribution of $i-1$ by one. By induction, $i-1$ contributes $i-2$ to the cocharge so $i+1$ contributes $i-1$ to cocharge. It follows that all entries in row $i$ contribute $i-1$ to the cocharge so
$\displaystyle cc(\mbox{rw}(T))=\vert \pi_T \vert + n \sum_{i=1}^{b+1} (i-1)=\vert \pi_T \vert + n \binom{b+1}{2}.$ \end{proof}
\begin{cor} Let $\lambda=(m,n^b)$, $\mu=(\mu_1,\ldots,\mu_{b+2})$ and $f(q)=\left[
\begin{array}{c} b+\beta(\lambda,\mu) \cr \beta(\lambda,\mu) \cr \end{array} \right]_q$. Then $f(q)=q^{-n\binom{b+1}{2}}\widetilde{K}_{\lambda,\mu}(q).$ \end{cor}
\begin{proof}
Denoting $SSYT(\lambda,\mu)$ by $SSYT$, the result follows from Theorem \ref{thm:cc} since
\begin{eqnarray*}\widetilde{K}_{\lambda,\mu}(q)=\sum_{T\in SSYT} q^{cc(rw(T))}&=&q^{n \binom{b+1}{2}}\sum_{T\in SSYT} q^{\vert \pi_T \vert}\cr
&=&q^{n \binom{b+1}{2}}\sum_{\substack{\pi=(\pi_1,\ldots,\pi_{\beta(\lambda,\mu)}) \\ \pi_1 \leq b}}q^{\vert \pi \vert} = q^{n \binom{b+1}{2}}\left[
\begin{array}{c} b+\beta(\lambda,\mu) \cr \beta(\lambda,\mu) \cr \end{array} \right]_q,\cr \end{eqnarray*} by \cite[I.3.19]{stanley1} (see also \cite[\S 7.21]{stanleybook}).
\end{proof}
\noindent {\bf Acknowledgement.} The authors wish to thank two anonymous referees who provided useful suggestions that improved the paper, including an equivalent variation on our map between tableaux and multisets.
\begin{bibdiv}
\begin{biblist}
\bib{alexander}{article}{author={P. Alexandersson},title={\em{Free action and cyclic sieving on skew semi-standard Young tableaux}}, journal={\em{Bull. Iran. Math. Soc.}}, volume={49}, issue={6}, year={2023}}
\bib{linusson}{article}{author={P. Alexandersson}, author={E. Kantarci O\u{g}uz}, author={S. Linusson}, title={\em{Promotion and cyclic sieving on families of SSYT}}, journal={\em{Ark. Mat.}}, year={2021}, volume={59}, pages={247--274}}
\bib{bms}{article}{author={M. Bennett}, author={B. Madill}, author={A. Stokke}, title={\em{Jeu-de-taquin promotion and a cyclic sieving phenomenon for semistandard hook tableaux}}, journal={\em{Discrete Math.}}, year={2014}, volume={319}, pages={62--67}}
\bib{butler}{book}{author={L. M. Butler}, title={Subgroup lattices and symmetric functions}, publisher={Amer. Math. Soc.}, place={Providence}, year={1994}}
\bib{fontaine}{article}{author={B. Fontaine}, author={J. Kamnitzer}, title={\em{Cyclic sieving, rotation, and geometric representation theory}}, journal={\em{Sel. Math. New Ser.}}, volume={20}, year={2013}, pages={609--625}}
\bib{foulkes}{collection.article}{author={H.O. Foulkes}, title={\em{A survey of some combinatorial aspects of symmetric functions}}, booktitle={\em{Permutations}}, publisher={Gauthier-Villars}, place={Paris}, year={1974}, pages={79--92}}
\bib{gaetz}{article}{author={C. Gaetz}, author={O. Pechenik}, author={J. Striker}, author={J.P. Swanson}, title={\em{Curious cyclic sieving on increasing tableaux}}, journal={\em{Enumer. Comb. Appl.}}, volume={2}, number={3}, article={S2R18}, year={2022}}
\bib{haiman}{article}{author={M. D. Haiman}, title={\em{Dual equivalence with applications, including a conjecture of Proctor}}, journal={\em{Discrete Math.}}, volume={99}, year={1992}, pages={79--113}}
\bib{lascoux}{article}{author={A. Lascoux}, author={M.-P. Sch\"{u}tzenberger}, title={\em{Sur une conjecture de H.O. Foulkes}}, journal={\em{C.R. Acad. Sc. Paris}}, volume={286A}, year={1978}, pages={323--324}}
\bib{ohpark1}{article}{author={Y.-T. Oh}, author={E. Park}, title={\em{Crystals, semistandard tableaux and cyclic sieving phenomenon}}, journal={\em{Electron. J. Comb.}}, volume={26}, year={2019}}
\bib{ohpark2}{article}{author={Y.-T. Oh }, author={E. Park}, title = {\em{q-dimensions of highest weight crystals and cyclic sieving phenomenon}},
journal = {\em{European J. of Combin.}},
volume = {97},
pages = {103372},
year = {2021}}
\bib{pechenik}{article}{author={O. Pechenik}, title={\em{Cyclic sieving of increasing tableaux and small Schr\"oder paths}}, journal={\em{J. Combin. Theory, Ser. A}}, volume={125}, year={2014}, pages={357--378}}
\bib{ponwang}{article}{author={S. Pon}, author={Q. Wang}, title={\em{Promotion and evacuation on standard
Young tableaux of rectangle and staircase shape}}, journal={\em{Electron. J. Combin.}}, volume={18}, year={2011}}
\bib{psv}{article}{author={T. Pressey}, author={A. Stokke}, author={T. Visentin}, title={\em{Increasing tableaux, Narayana numbers and an instance of the cyclic sieving phenomenon}}, journal={\em{Ann. Comb.}}, volume={20}, year={2016}, pages={609--621}}
\bib{rsw}{article}{author={V. Reiner}, author={D. Stanton},
author={D. White}, title={\em{The cyclic sieving phenomenon}},
journal={\em{J. Combin. Theory Ser. A}}, volume={108}, year={2004},
pages={17--50}}
\bib{rhoades}{article}{author={B. Rhoades}, title={\em{Cyclic sieving,
promotion, and representation theory}}, journal={\em{J. Combin. Theory
Ser.~A}}, volume={117}, year={2010}, pages={38--76}}
\bib{sagan}{book}{author={B. Sagan}, title={The cyclic sieving phenomenon: a survey},
series={Surveys in
Combinatorics 2011, London Math. Soc. Lecture Note Series},
volume={392}, publisher={Cambridge University Press},
place={Cambridge}, year={2011}, pages={183--234}}
\bib{schutz1}{article}{author={M.-P. Sch\"{u}tzenberger},
title={\em{Quelques remarques sur une construction de Schensted}},
journal={\em{Canad. J. Math.}}, volume={13}, year={1961},
pages={117--128}}
\bib{schutz2}{article}{author={ M.-P.
Sch\"{u}tzenberger},title={\em{Promotion des morphismes d'ensembles
ordonn\'{e}s}}, journal={\em{Discrete Math.}}, volume={2}, year={1972},
pages={73--94}}
\bib{stanley1}{book}{author={R. Stanley}, title={Enumerative Combinatorics}, volume={1}, edition={2}, publisher={Cambridge Univ. Press}, place={Cambridge}, year={2011}}
\bib{stanleybook}{book}{author={ R. Stanley}, title={Enumerative
Combinatorics}, volume={2}, publisher={Cambridge Univ. Press},
place={Cambridge}, year={1997}}
\end{biblist}
\end{bibdiv}
\end{document} |
{
"arxiv_id": "2302.14185",
"language": "en",
"timestamp": "2023-03-01T02:04:15",
"url": "https://arxiv.org/abs/2302.14185",
"yymm": "2302"
} | \section{Introduction}
Radio recombination lines (RRLs) have long been a valuable tool used to study the interstellar medium in our Galaxy. These lines appear in the spectra of \hi and \hii regions along the plane of the Galaxy \citep{lockman1996detection}, and act as a probe of the physical conditions in the regions where they are observed.
Line width and integrated optical depth give information about the temperature, electron density, and emission measure of the gas \citep{salgado2017low, salas2017lofar}. \cite{oonk2017carbon} show that low-frequency carbon and hydrogen lines from molecular clouds can give a lower limit to the cosmic-ray ionization rate. A quantitative study of the low-frequency recombination lines and how they can be used to distinguish hot and cold components of the interstellar medium is done by \cite{shaver1975characteristics, shaver1975theoretical}.
Diffuse recombination lines not associated with any specific star-forming region are also seen along the Galactic Plane and are typically about 0.01-0.1\% of the continuum \citep{erickson1995low}. These lines are typically associated with the colder, lower-density areas inside molecular clouds illuminated by radio-bright objects near the observer's line of sight. Low-frequency carbon line regions have also been suggested to be associated with photo-dissociation regions \citep{kantharia2001carbon}, stimulated emission from low-density \hii regions \citep{pedlar1978studies}, \hi self-absorbing regions \citep{roshi2011carbon}, CO-dark surface layers of molecular clouds \citep{oonk2017carbon}, and denser regions within CO-emitting clouds \citep{roshi2022arecibo}. Although helium lines have been observed at 1.4~GHz toward the Galactic center \citep{heiles1996radio} and at 750~MHz toward DR21 \citep{roshi2022arecibo}, due to the lower ionization levels, carbon and hydrogen RRLs are brighter and widely used observables to study the interstellar medium at lower frequencies \citep{lowfreqrrl2002}. \citet{erickson1995low} used the Parkes 64~m telescope with a beam of $\approx$~4$^\circ$ to survey the inner region of the Galactic Plane for carbon RRLs at 76.4~MHz and found a large line-forming region spanning longitudes $|\ell|<20\degree$ and latitudes within a few degrees of the plane. They also observed numerous targets in the plane of the Galaxy in the frequency range of 44~to 92~MHz. C$\alpha$ lines had widths ranging from 5~to 47~km/s. The region was estimated to lie at distances up to 4~kpc, placing it in the Sagittarius and/or Scutum arms of the Galaxy. They also found lines tangent to the Scutum arm at longitudes between $-48\degree<\ell<20\degree$, which decreased sharply at $\ell=20^\circ$, believed to be due to a change in the opacity of the absorbing region. \cite{erickson1995low} suggest that the likely sites for the lines are cold \hi regions; however, there were no hydrogen RRL detections.
One of the highly-studied regions is around the supernova remnant Cassiopeia A (Cas A). It enables observation of both emission and absorption carbon RRLs \citep{payne1989stimulated} and is a common target for low-frequency radio telescopes. \cite{oonk2017carbon} used LOFAR to observe Cas~A from 34~to 78~MHz. They found line widths ranging from 5.5~to 18~km/s, with a line width of 6.27$\pm$0.57~km/s for C467$\alpha$. \cite{payne1989stimulated} observed carbon recombination lines in the frequency range 34~to~325~MHz using the Green Bank telescope in the direction of Cas A and suggest the origins of the lines to be neutral HI regions in the interstellar medium, which is further supported by the evidence presented in \cite{roshi1997hydrogen} using Ooty Radio Telescope at 328~MHz. \cite{roshi2002carbon} performed low and high-resolution surveys of the inner Galaxy over longitudes $-28\degree<\ell<89\degree$ using the Ooty Radio Telescope at 327 MHz, finding carbon RRLs in emission, primarily for $-1\degree<\ell<20\degree$. \cite{kantharia1998carbon} detect carbon lines towards Cas A at 34.5~MHz, 332~MHz, 560~MHz, and 770~MHz, and suggest the line forming regions to be associated with cold atomic hydrogen in the interstellar medium using the integrated line-to-continuum ratio. Spatially resolved carbon RRLs have also been mapped towards Cas A and used as tracers of the cold interstellar gas \citep{salas2018mapping}. Carbon RRLs have also been instrumental in evaluating the physical conditions of the line-forming regions around Orion A \citep{salas2019carbon}.
High-frequency recombination lines are commonly found in dense \hii regions associated with star formation, where the energetic photons ionize the surrounding gas, allowing recombination to occur. This results in RRLs associated with hydrogen, helium, and carbon. Planetary nebulae are another common source, as the expanding ionization bubbles around protostars and stellar nurseries provide targets for detection. These types of recombination lines have typically been observed above 1~GHz. Using data from the \hi Parkes All-Sky Survey at 1.4~GHz to observe H168$\alpha$, H167$\alpha$, and H166$\alpha$ RRLs, \cite{alves2015hipass} mapped the diffuse lines of the inner Galactic Plane ($-164\degree<\ell<52\degree$)
and compared the spatial distribution of the ionized gas with that of carbon monoxide (CO). They reported the first detection of RRLs in the southern ionized lobe of the Galactic Center and found helium RRLs in \hii regions, as well as diffuse carbon RRLs. A higher resolution survey is presented by SIGGMA \citep{liu2019survey} with a beam size of 3.4$'$ using Arecibo L-band Feed Array in the frequency range of 1225~to 1525~MHz spanning H163$\alpha$ to H174$\alpha$. The project THOR uses the Very Large Array (VLA) to survey HI, OH, and recombination lines from the Milky Way at a resolution of 20$"$ in the frequency range of 1~GHz to 2~GHz \citep{bihr2015thor, wang2020hi}.
Beyond our Galaxy, extragalactic RRLs have been seen in both emission and absorption. \cite{shaver1978extragalactic} used the Westerbork Synthesis Radio Telescope to observe H$\alpha$ recombination lines in emission from M82. The Expanded Very Large Array (EVLA) detected hydrogen RRLs in emission from NGC253 \citep{kepley2011unveiling}. More recently, \cite{emig2019first} detected RRLs in the frequency range 109-189.84~MHz in the spectrum of 3C190 with a FWHM of $31.2\pm8.3$~km/s and at a redshift of $z=1.124$.
With the advent of experiments aiming to detect redshifted 21~cm signals from neutral gas in the intergalactic medium (IGM) between early galaxies at $z>6$, RRLs from our Galaxy and others have been considered as possible foregrounds for the cosmological observations \citep{peng2003foregrounds}. Foregrounds for redshifted 21~cm observations are dominated by Galactic synchrotron radiation, which is typically $\sim$200~K away from the inner Galactic Plane at 150~MHz \citep{{liu2020data, mozdzen2016improved, monsalve2021absolute}} increasing to $\sim$2000~K at 75~MHz \citep{ mozdzen2019low}, and a factor of ten higher along the inner Plane. These levels are $10^5-10^6$ times larger than the expected 21~cm emission amplitude of 1-10~mK during reionization. Before reionization, some astrophysical scenarios predict 21~cm absorption of $\sim$100~mK \citep{furlanetto2006cosmology} and non-standard physics models can yield 21~cm signals up to 1000~mK \citep{fialkov2018constraining}. There are strategies in place to mitigate the spectrally-smooth foregrounds from observations, including subtraction based on sky models or parameterized fits and avoidance in either Fourier or delay space (e.g. \citealt{ di2002radio, bowman2009foreground, bernardi2011subtraction, parsons2012per, 2014PhRvD..90b3018L, 2015ApJ...804...14T, 2016MNRAS.458.2928C, 2018ApJ...864..131K, 2019MNRAS.488.2904S}).
Redshifted 21~cm observations aim to study the early epochs of the Universe ($6<z<200$) and target the frequency range of 10-200~MHz. The current generation of instruments primarily aims to make statistical measurements in the Epoch of Reionization ($6<z<13$ or roughly 100-200~MHz \citep{fan2006observational}), where RRLs are typically 0.1-1\% of the continuum brightness along the Galactic Plane \citep{alves2015hipass}. Below 200~MHz, RRLs are typically $\sim$10~K along the Plane. This is much weaker than the synchrotron foreground but is still larger than the expected cosmological 21~cm signal. RRLs within this frequency range are therefore interesting not only to characterize the diffuse gas regions but also to constrain the effects on 21~cm observations. Hydrogen RRLs are primarily expected to be observed in emission, while carbon RRLs are expected to transition from emission above $\sim150$~MHz to absorption below \citep{payne1989stimulated, erickson1995low}. Away from the Galactic Center and at high Galactic latitudes where 21~cm observations are generally targeted, RRLs remain poorly quantified \citep{roshi2000hydrogen}. If diffuse RRLs away from the Galactic Center are about 0.1\% of the synchrotron continuum, similar to their ratio near the Center, we might expect amplitudes as large as 200~mK at 150~MHz increasing to 2~K at 75~MHz.
A number of experiments are underway to detect and characterize the redshifted 21~cm signal. They are divided into two groups by observational strategy. The first group attempts to measure the all-sky averaged global signal and includes EDGES \citep{bowman2018absorption}, SARAS \citep{singh2017first}, LEDA \citep{2018MNRAS.478.4193P}, and REACH \citep{de2019reach}. The second group aims to detect the power spectrum of angular and spectral fluctuations in the signal and includes HERA \citep{deboer2017hydrogen}, LOFAR \citep{gehlot2019first}, OVRO-LWA \citep{eastwood201921}, and MWA \citep{tingay2013murchison}. In the 50-200~MHz frequency band of redshifted 21~cm observations, diffuse Galactic RRLs potentially form a ``picket fence'' with a $\approx$10~kHz-wide line every $\approx$300-1900~kHz. The narrow RRLs are easier to mitigate in 21~cm experiments than the brighter, spectrally smooth foregrounds. In global 21~cm observations, the RRL frequencies can be flagged and omitted from signal analysis with little impact on the result (which we will show in Section~\ref{sec:G21_effects}). However, for 21~cm power spectrum observations, flagging the RRL frequencies may complicate the analysis by correlating spectral Fourier modes used for the power-spectrum estimate. This could potentially spread synchrotron contamination into otherwise foreground-free parts of the power spectrum, even after applying inpainting techniques that aim to fill in missing frequency channels. It would be preferable to skip this flagging if possible.
Here, we use observations from the EDGES low-band and mid-band instruments to characterize diffuse RRLs at low radio frequencies. Although EDGES is primarily an instrument designed to measure the redshifted global 21~cm signal, it also provides an opportunity to study diffuse RRLs at frequencies relevant to 21~cm measurements during the era of Cosmic Dawn and the Epoch of Reionization. While the EDGES instruments are very sensitive to faint signals in the radio spectrum between 50 and 200~MHz, they have poor angular resolution. Hence EDGES observations provide primarily an average line strength over large regions of the sky. Building on the observations, we use the detected line strengths to study the effects of RRLs on global and power spectrum 21~cm observations and to determine if the detected RRLs and upper-limit levels will have a significant impact on redshifted 21~cm analyses.
In Section~\ref{sec:methods} we summarize the observations and our methods. In Section~\ref{sec:results} we present the results from EDGES low-band between 50-87~MHz, focusing on the inner Galactic Plane in Section~\ref{sec:innerplane} and away from the inner Plane in Section~\ref{sec:3.3}. We extend the analysis to include 108-124.5~MHz in Section~\ref{sec:3.6}. In Section~\ref{sec:21cm} we discuss the implications for 21~cm global measurements and 21~cm power spectrum observations, including foreground cleaning. We conclude in Section~\ref{sec:4}.
\begin{figure}
\hskip-0.5cm
\includegraphics[scale=0.6]{low_band_spectra_and_residuals.pdf}
\caption{Integrated spectrum (top) and residuals following a 5-term polynomial fit subtraction (middle) and a 9-term polynomial fit subtraction (bottom) for the two-hour bin centered at LST~18~h, when the Galactic Center is at zenith. A 5-term polynomial fit fails to capture all the bandpass features, whereas a 9-term polynomial fit removes all the structures, giving continuum- and bandpass-normalized residuals. In the middle and bottom panels, the 85~vertical red lines represent the C$\alpha$ frequencies that are expected within the observed frequency range of 50-87~MHz. Many of the most extreme negative deviations align with RRL frequencies. The lines are strongest at low frequencies, reaching about 2~K below 55~MHz, and weaker at higher frequencies with amplitudes of about 1~K at 60~MHz and less than 0.5~K above 70~MHz. }
\label{fig:specresid}
\end{figure}
\section{Methods}
\label{sec:methods}
The Experiment to Detect the Global EoR Signature (EDGES) is located at the Murchison Radio-astronomy Observatory (MRO) in Western Australia. It includes multiple instruments, each spanning a subset of the frequency range between 50 and 200~MHz. Each instrument consists of a wideband dipole-like single polarization antenna made from two rectangular metal plates mounted horizontally above a metal ground plane. Below the ground plane sits the receiver, and signals are carried from the antenna to the receiver via a balun.
\subsection{Observations}
We use 384~days of observations from the EDGES low-band instrument between October 2015 and April 2017, in the frequency range of 50-100~MHz. The data include and expand on the observations used by \cite{bowman2018absorption} and \cite{mozdzen2019low}. EDGES is a zenith-pointing drift-scan instrument without any steering capability. At 75~MHz, the antenna beam has a full width at half maximum (FWHM) of 72$\degree$ parallel to the excitation axis in the north-south direction and 110$\degree$ perpendicular to the excitation axis \citep{mahesh2021beam}. This gives the instrument a pointing declination corresponding to the site latitude of -26.7$^\circ$. The spectrometer samples antenna voltages at 400~MS/s and applies a Blackman-Harris window function and fast Fourier transform to blocks of 65536 samples to yield spectra with 32768 channels from 0-200~MHz. This results in a frequency channel spacing of 6.1~kHz. Neighboring channels are correlated due to the window function, yielding an effective spectral resolution of 12.2~kHz.
The size of the EDGES antenna beam is much larger than any single radio source or region.
This has two main effects. First, any one source generally will not contribute substantially to the observed antenna temperature. Second, many individual sources or regions may be within the beam at any time, especially along the Galactic Plane. The line strengths and line widths observed by EDGES, therefore, also will be the aggregate effect of many contributions, each with its own intrinsic broadening and Doppler shift.
The averaging effect of the beam is compounded by the relatively poor spectral resolution of the observations. At 75~MHz, the effective spectral resolution of 12.2~kHz corresponds to a velocity resolution of 48.8~km/s, which is generally larger than typical gas velocities in RRL regions (see Section~\ref{sec:widths}).
The end result is that these observations are not suitable for studying individual RRL targets in detail, but rather for characterizing the broad RRL properties in different regions of the sky, particularly away from the Galactic Center.
\begin{figure}
\centering
\includegraphics[scale=0.6]{c508_and_stacked_profile.pdf}
\caption{Observed C503$\alpha$ absorption line centered at 51.5~MHz (top) and the stacked average line profile from all 85~RRLs across the band, $423 \leq n \leq 507$ (bottom) for the LST~18~h bin. The effective frequency of the stacked profile is 66.1~MHz. The significance of detection improves from about 4$\sigma$ for the individual line to 28$\sigma$ for the stacked profile. }
\label{fig:line_profile}
\end{figure}
\subsection{Data Processing}
\label{sec2.2}
We reduce the EDGES observations and calibrate to an absolute temperature scale following the processes described in \cite{rogers2012calibration}, \cite{monsalve2017calibration}, and \cite{bowman2018absorption}, including filtering of radio-frequency interference (RFI). Following this filtering, instances of persistent, weak RFI from terrestrial FM radio band transmissions can still be seen above 87~MHz. To avoid complications from this interference, we keep only data below 87~MHz for the remainder of the low-band analysis. The data for each day are then binned into two-hour segments in Local Sidereal Time (LST). Bin centers are referenced to the Galactic Center at LST 17.8~h, yielding a three-dimensional dataset in frequency, day, and LST. For simplicity, we refer to the LST bins by their nearest whole hour in the rest of this paper (e.g. using LST 18~h to refer to the bin centered on the Galactic Center) since the distinction is insignificant given the two-hour width of each bin and the large EDGES beam that spans about 6~h in right ascension.
The spectra are fitted with a 9-term polynomial over the frequency range of 50-87~MHz. The expected RMS in the daily spectra is typically $\sim$20-30~K. We subtract the polynomial fits from the spectra to obtain residuals that have continuum levels normalized to zero. We perform RFI filtering again, marking any day-LST bins with residuals larger than 30~K as bad and removing them from the analysis. Typically these cases are due to local weather or abnormal conditions in the upper atmosphere. The resulting total number of spectra for each LST bin varies from 375 to 384. The residuals for all remaining good days within a given LST bin are then averaged to yield a two-dimensional dataset in frequency and LST. The duty cycle of the observations was only 17\% of wall-clock time, yielding a total of about 126 hours of effective integration time in each LST bin. Figure~\ref{fig:specresid} shows the spectra for LST 18~h and the corresponding residuals and the expected C$\alpha$ frequencies. The residual RMS in each spectrum provides an estimate of thermal noise and varies from a high of 374~mK for the spectrum at LST~18~h, when the Galactic Center is overhead, to a low of about 100~mK for LST bins where the plane is primarily near or below the horizon. The noise in each spectrum is strongest at low frequencies because EDGES is sky-noise limited and the sky noise follows the synchrotron power-law spectrum.
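As a minimal sketch of this continuum-removal and day-flagging step (this is not the EDGES pipeline; the array names, the simple polynomial basis, and the threshold handling are illustrative assumptions), the procedure can be summarized as:
\begin{verbatim}
import numpy as np

def continuum_residuals(freq_mhz, daily_spectra_k, nterms=9, rms_cut_k=30.0):
    """Fit an nterms-term polynomial to each daily spectrum (rows, in K),
    subtract it, and drop days whose residual RMS exceeds rms_cut_k."""
    x = (freq_mhz - freq_mhz.mean()) / (freq_mhz.max() - freq_mhz.min())
    residuals = np.empty_like(daily_spectra_k)
    for i, spec in enumerate(daily_spectra_k):
        coeffs = np.polyfit(x, spec, deg=nterms - 1)   # 9 terms = degree 8
        residuals[i] = spec - np.polyval(coeffs, x)
    good = np.sqrt(np.mean(residuals**2, axis=1)) < rms_cut_k
    return residuals[good], good

# Average the surviving days within one LST bin:
# lst_bin_residual = continuum_residuals(freq, spectra)[0].mean(axis=0)
\end{verbatim}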
\subsection{Radio Recombination Lines}
\label{sec_rrl}
When an electron in an atom loses or gains energy, it produces an emission or absorption line based on the change in energy levels. An electron falling from a higher energy state, $n_2$, to a lower energy state, $n_1$, will produce a photon with frequency \citep[see][for details]{2002ASSL..282.....G}:
\begin{equation}
\nu_{n_2 \rightarrow n_1} = \frac{E_{n_2} - E_{n_1}}{h} = c~R~Z^2 \left( \frac{1}{n_{1} ^2}-\frac{1}{n_{2} ^2}\right)
\label{eq1}
\end{equation}
where $c$ is the speed of light, $R$ is the Rydberg constant, and $Z$ is the effective nuclear charge seen by the electron (usually very nearly $Z=1$ for an excited electron in the outer shell of a neutral atom), with all constants in c.g.s.~units. The Rydberg constant is given by:
\begin{equation}
R = \frac{2 \pi^2~m_e~e^4}{c~h^3} \left({\frac{M}{M+m_e}} \right)
\end{equation}
where $h$ is Planck's constant, $e$ is the charge of an electron, $M$ is the mass of the nucleus, and $m_e$ is the mass of the electron.
Free electrons captured by ionized atoms will cascade down through energy levels until they reach the ground state. Transitions are denoted by the element abbreviation, the principal quantum number of the lower energy level ($n_1$), and a Greek letter indicating the change in principal quantum number ($n_2$ - $n_1$) with $\alpha=1$, $\beta=2$, etc.
Single ($\alpha$) transitions are the most prevalent and should yield the strongest signals. Within the EDGES low-band range, neighboring $\alpha$ lines for a given species are generally separated by about 300-600~kHz. C$\alpha$ and H$\alpha$ lines of the same order are offset from each other by 150~km/s in velocity space, with hydrogen lines lower in frequency due to the lower mass of the hydrogen nucleus compared to carbon. \citet{erickson1995low} observed that $\beta$ lines are about a factor of two to three weaker than $\alpha$ lines, and $\gamma$ lines are about a factor of four weaker than $\alpha$ lines at similar frequencies, but this ratio is dependent on specific physical conditions of the gas of any line-forming region. $\beta$ and $\gamma$ lines follow similar spacing and offset patterns to $\alpha$ lines in the EDGES band.
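As a brief numerical sketch of Equation~\ref{eq1} (standard constants; the function name and the choice of $n=503$ are illustrative only), the rest frequencies of carbon and hydrogen $\alpha$ lines and their $\approx150$~km/s offset can be computed as:
\begin{verbatim}
R_inf_c = 3.2898419602e15     # Rydberg constant times c, in Hz
m_e_u   = 5.48579909e-4       # electron mass in atomic mass units

def rrl_freq_hz(n, delta_n=1, nucleus_mass_u=12.0):
    """Rest frequency of the (n + delta_n) -> n transition for a
    hydrogen-like system with the given nuclear mass (Z = 1)."""
    R = R_inf_c * nucleus_mass_u / (nucleus_mass_u + m_e_u)
    return R * (1.0 / n**2 - 1.0 / (n + delta_n)**2)

f_C = rrl_freq_hz(503, nucleus_mass_u=12.0)    # C503alpha, ~51.5 MHz
f_H = rrl_freq_hz(503, nucleus_mass_u=1.0078)  # H503alpha
# MHz and velocity offset in km/s (negative: hydrogen below carbon)
print(f_C / 1e6, (f_H - f_C) / f_C * 2.998e5)
\end{verbatim}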
\subsubsection{Line Widths}
\label{sec:widths}
The observed linewidth of a typical RRL is determined by radiation broadening, pressure broadening, and thermal and turbulent Doppler broadening \citep{salas2017lofar, roshi2002carbon}.
Only Doppler broadening is significant given the EDGES spectral resolution. The full width half maximum (FWHM) line width due to Doppler broadening is given by:
\begin{equation}
{\Delta \nu_G = \frac{\nu_0}{c} \left[ 4\ln{(2)} \frac {2kT}{M} + V_T^2\right]^{1/2} }
\end{equation}
where $\nu_0$ is the center frequency of the line, $k$ is the Boltzmann constant, $T$ is the temperature of the gas, and $V_T$ is the turbulent velocity. For the most extreme case of hot hydrogen gas at 7000~K \citep{oonk2019spectroscopy} and using an example H423$\alpha$ line just below 87~MHz, the thermally-broadened FWHM is 3.4~kHz, equivalent to 12~km/s. For cold hydrogen gas at $\approx100$~K, the thermal broadening is nearly a factor of ten smaller. C$\alpha$ lines are broadened less than H$\alpha$ lines by an additional factor of 3.5 due to the higher mass of carbon atoms, reducing the thermal broadening to about 0.1~kHz, equivalent to 0.4~km/s. Turbulent velocities for motions of individual cells of gas are typically $|V_T|\approx20$~km/s for observations on degree scales \citep{2002ASSL..282.....G}, generally setting a lower bound of $\approx5$~kHz for line widths in the EDGES band.
These typical line widths are small compared to the spectral resolution of the EDGES observations. However, the observed lines will include additional Doppler broadening because of other variations in radial velocity between the gas and Earth, including Galactic rotation and Earth's orbit around the Sun. First, when the Galactic center is at the zenith, the large EDGES beam encompasses the broad molecular ring within about $|\ell|<30^\circ$. The radial velocity for most of the molecular gas in this region sweeps from about $-60$ to $60$~km/s, increasing linearly with $\ell$, although some gas in the inner nuclear disk reaches over 200~km/s \citep{dame2001milky}. Elsewhere along the plane, the radial velocity of molecular gas is generally within $|V_R|<40$~km/s. These Galactic rotational contributions are strong compared to the thermal and turbulent contributions to the line width. At the C$\alpha$ mean effective frequency of 66.1~MHz (see Section~\ref{stacked_profiles_sec}), this will contribute 13~kHz to the line width observed by EDGES when the Galactic Center is near the zenith and 7~kHz when other parts of the plane are in the beam. Second, since the data were recorded over 18~months, the projection of Earth's $|V_E|=30$~km/s orbital velocity around the Sun toward the gas varies over that period. This further broadens line widths in the long integrations. Regions viewed along the ecliptic plane, including the Galactic Center and anti-center, are broadened by the full 30~km/s, equivalent to nearly 7~kHz. The effect decreases away from the ecliptic plane. At a declination of -26.7\degree, the effect is weakest around LST~6~h.
Treating the radial velocity effects analogously to turbulent motions and summing all of the Doppler velocity broadening contributions, we expect FWHM line widths at an effective frequency of 66.1~MHz to be about:
\begin{equation}
\Delta\nu_G \approx \frac{\nu_0}{c} \left [ V_T^2 + V_R^2 + V_E^2 \right ]^{1/2} \approx 12~\text{kHz}
\end{equation}
The Doppler broadened line width is proportional to the frequency and will be about 30\% larger at the high end of the observed band. Even with this additional broadening, individual lines are essentially unresolved by the 12~kHz effective resolution of the observations.
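As a quick numerical check of the estimate above (the velocity values are the approximate ones quoted in the text and are illustrative, not fitted quantities):
\begin{verbatim}
c_kms = 2.998e5
nu0_hz = 66.1e6
V_T, V_R, V_E = 20.0, 40.0, 30.0   # turbulent, Galactic rotation, Earth orbit (km/s)
fwhm_hz = nu0_hz / c_kms * (V_T**2 + V_R**2 + V_E**2) ** 0.5
print(fwhm_hz)                     # ~12 kHz
\end{verbatim}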
\begin{figure*}[t]
\centering
\includegraphics[scale=0.45]{low_band_alpha_plot.pdf}
\caption{C$\alpha$ line profile (blue) for each LST bin, with 85 individual lines stacked with the principal quantum number $423 \leq n \leq 507$ centered at 66.1~MHz, along with the best-fit double Gaussian profile (orange). The small emission feature about 35~kHz below the C$\alpha$ absorption in each panel is due to H$\alpha$. The stacked line profiles are strong when the Galactic Center is in the beam, peaking at LST~18~h when the Center is at the zenith. They are weak when higher Galactic latitudes are in the beam (e.g. LST 2-6~h). The best-fit Gaussian profile parameters for each LST bin are listed in Table~\ref{tab:fitparamsalpha}. }
\label{fig:average_profiles}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.45]{low_band_beta_plot.pdf}
\caption{Same as Figure~\ref{fig:average_profiles}, but centered on C$\beta$ lines, with 106 individual lines stacked, $533 \leq n \leq 638$, centered at 66.3~MHz. The presence of any features corresponding to hydrogen emission is less clear here, hence we have fit only a single Gaussian to the main C$\beta$ profile in each bin.}
\label{fig:betasingle}
\end{figure*}
\subsection{Stacked Profiles}
\label{stacked_profiles_sec}
Figure~\ref{fig:specresid} shows the final integrated spectrum from EDGES for the LST~18~h bin when the Galactic Center is at the zenith, as well as the residuals after removing the continuum fit of the spectrum. The RRLs are particularly evident in the residuals below 60~MHz as numerous absorption spikes extending below the noise. An example of an individual line is shown in Figure~\ref{fig:line_profile}.
The thermal noise in the residual spectrum of each LST bin is sufficiently large that individual RRLs are not detected at high significance. We therefore take one additional step to average all of the individual RRLs together within each LST bin. Using the residual spectrum for each LST bin, we extract a window around each expected carbon RRL, centered on the channel that contains the line center. This results in a total of 85~windowed spectra for each LST bin with an effective mean frequency of 66.1~MHz. The effective frequency is below the mid-point of the band because RRLs are more closely spaced at the lower end of the band. Each extracted window spans 300~kHz and contains 49~frequency channels. The full-band polynomial fit subtracted earlier in the analysis removed the continuum background for all lines. The 85~windowed spectra are averaged to yield the stacked profile. The stacking analysis is performed at the data's true spectral resolution of 6.1~kHz, aligning the frequency channels closest to the line centers, resulting in a maximum misalignment in individual lines of 6.1~kHz.
Figure \ref{fig:line_profile} illustrates the improvement in signal-to-noise ratio from a single C$\alpha$ line to the average profile for all C$\alpha$ lines in the band.
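A minimal sketch of the stacking step described above (window length and channel spacing follow the text; the function and variable names are hypothetical):
\begin{verbatim}
import numpy as np

def stack_lines(freq_hz, residual_k, line_freqs_hz, half_width=24):
    """Cut a 49-channel (~300 kHz) window centred on the channel nearest
    each expected line frequency and average the windows.
    Returns the stacked profile (K) and the frequency offsets (Hz)."""
    dchan = freq_hz[1] - freq_hz[0]                  # ~6.1 kHz channels
    windows = []
    for f0 in line_freqs_hz:
        i0 = int(np.argmin(np.abs(freq_hz - f0)))    # channel nearest the line
        if i0 - half_width < 0 or i0 + half_width >= freq_hz.size:
            continue
        windows.append(residual_k[i0 - half_width:i0 + half_width + 1])
    offsets = dchan * np.arange(-half_width, half_width + 1)
    return np.mean(windows, axis=0), offsets
\end{verbatim}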
\section{Results}
\label{sec:results}
The average $\alpha$ line profiles for each LST bin are shown in Figure~\ref{fig:average_profiles}. $T_{\text{RRL}}$ represents the observed brightness temperature of the stacked line profile as a function of frequency relative to the expected line center. In each bin, we see a clear absorption centered on the expected C$\alpha$ line. In most of the average line profiles, we also see a persistent excess in the brightness temperature centered about 35~kHz below the observed absorption line. This emission excess coincides with the expected offset in frequency for the corresponding H$\alpha$ lines for the same principal quantum numbers, as noted in Section~\ref{sec_rrl}. We therefore begin in Section~\ref{gaussian_fit} by simultaneously fitting for C$\alpha$ absorption and H$\alpha$ emission line profiles in the stacked spectrum at each LST bin. Then we proceed to restack the spectra at C$\beta$ and C$\gamma$ line centers and fit those lines independently. We note that the stacked profiles show non-Gaussian noise that is correlated with the Galactic Plane. This could be due to possible narrowband systematics and/or the possible overlapping of some $\alpha$ lines with higher principal quantum number $\beta$ and $\gamma$ lines. More sophisticated simultaneous fitting techniques could better address this possibility, but this is beyond the scope of this work.
\subsection{Observed Line Profiles}
\label{gaussian_fit}
\begin{table*}[]
\caption{\label{tab:fitparamsalpha} Best-fit Gaussian model parameters and integrated optical depths for each LST bin}
\begin{tabular*}{\textwidth}{ c c c c c c c c c c c c c }
\hline \hline
Bin & \multicolumn{4}{c}{C$\alpha$} & \multicolumn{4}{c}{C$\beta$} & \multicolumn{4}{c}{H$\alpha$} \\
\hline
LST & $A$ & $\nu_0$ & FWHM & $\tau d\nu$ & $A$ & $\nu_0$ & FWHM & $\tau d\nu$ & $A$ & $\nu_0$ & FWHM & $\tau d\nu$ \\
(h) & (mK) & (kHz) & (kHz) & (Hz) & (mK) & (kHz) & (kHz) & (Hz) & (mK) & (kHz) & (kHz) & (Hz) \\
\hline
00 & -81$\pm$10 & -1$\pm$1 & 18$\pm$3 & 0.3$\pm$0.1 & -13$\pm$5 & 10$\pm$12 & 58$\pm$28 & 0.2$\pm$0.1 & 44$\pm$15 & -33$\pm$2 & 10$\pm$4 & -0.1$\pm$0.1 \\
02 & -37$\pm$11 & -2$\pm$2 & 12$\pm$6 & 0.1$\pm$0.1 & 0$\pm$7 & 10$\pm$1 & 3$\pm$0 & 0.0$\pm$0.1 & 21$\pm$10 & -31$\pm$3 & 13$\pm$12 & -0.1$\pm$0.1 \\
04 & -43$\pm$6 & -3$\pm$1 & 14$\pm$6 & 0.1$\pm$0.1 & -8$\pm$5 & 10$\pm$15 & 426$\pm$33 & 0.6$\pm$0.4 & 2$\pm$1 & -27$\pm$24 & 24$\pm$57 & -0.0$\pm$0.1 \\
06 & -44$\pm$7 & -4$\pm$2 & 20$\pm$5 & 0.2$\pm$0.1 & -29$\pm$9 & -5$\pm$2 & 12$\pm$5 & 0.1$\pm$0.1 & 27$\pm$8 & -36$\pm$2 & 13$\pm$7 & -0.1$\pm$0.1 \\
08 & -53$\pm$9 & 0$\pm$2 & 21$\pm$5 & 0.2$\pm$0.1 & -48$\pm$10 & -5$\pm$2 & 21$\pm$5 & 0.2$\pm$0.1 & 45$\pm$9 & -33$\pm$3 & 24$\pm$7 & -0.2$\pm$0.1 \\
10 & -65$\pm$8 & 1$\pm$1 & 23$\pm$6 & 0.3$\pm$0.1 & -37$\pm$9 & 0$\pm$2 & 24$\pm$7 & 0.1$\pm$0.1 & 45$\pm$13 & -31$\pm$2 & 16$\pm$5 & -0.1$\pm$0.1 \\
12 & -183$\pm$14 & 4$\pm$1 & 22$\pm$2 & 0.8$\pm$0.1 & -73$\pm$10 & 5$\pm$1 & 19$\pm$5 & 0.2$\pm$0.1 & 50$\pm$27 & -40$\pm$3 & 24$\pm$7 & -0.2$\pm$0.1 \\
14 & -420$\pm$28 & 5$\pm$1 & 21$\pm$1 & 1.8$\pm$0.2 & -149$\pm$15 & 5$\pm$1 & 25$\pm$2 & 0.7$\pm$0.1 & 66$\pm$18 & -31$\pm$5 & 24$\pm$9 & -0.3$\pm$0.2 \\
16 & -677$\pm$34 & 3$\pm$1 & 22$\pm$1 & 3.0$\pm$0.2 & -305$\pm$25 & 4$\pm$1 & 26$\pm$2 & 1.5$\pm$0.2 & 123$\pm$34 & -32$\pm$4 & 24$\pm$7 & -0.6$\pm$0.3 \\
18 & -795$\pm$40 & 2$\pm$1 & 22$\pm$1 & 3.4$\pm$0.3 & -265$\pm$29 & 2$\pm$1 & 26$\pm$2 & 1.3$\pm$0.2 & 203$\pm$46 & -34$\pm$2 & 17$\pm$2 & -0.7$\pm$0.2 \\
20 & -616$\pm$33 & 2$\pm$1 & 22$\pm$1 & 2.6$\pm$0.2 & -204$\pm$23 & 3$\pm$1 & 27$\pm$2 & 1.1$\pm$0.2 & 143$\pm$32 & -39$\pm$3 & 24$\pm$5 & -0.6$\pm$0.2 \\
22 & -346$\pm$21 & 2$\pm$1 & 24$\pm$2 & 1.6$\pm$0.2 & -150$\pm$16 & -1$\pm$1 & 24$\pm$2 & 0.7$\pm$0.1 & 98$\pm$23 & -34$\pm$1 & 20$\pm$5 & -0.4$\pm$0.2\\
\hline
\end{tabular*}
\tablecomments{Integrated optical depths ($\tau d\nu$) are calculated using Equation~\ref{eqn_taudnu}. Reported C$\alpha$ and H$\alpha$ line profiles have 85~stacked lines $423 \leq n \leq 507$ with an effective mean frequency of 66.1~MHz, while C$\beta$ has 106 stacked lines $533 \leq n \leq 638$ with an effective mean frequency of 66.3~MHz.}
\vspace{2pt}
\end{table*}
The shape of the average profile is determined by the line profiles of each of the 85~lines, including their Doppler broadening (12~kHz), the instrument's spectral resolution (12~kHz), and misalignment of the individual line centers during averaging due to stacking with discrete spectral channels (6~kHz). The first is a Gaussian profile, while the last two are top hat functions. Combining all the broadening contributions results in the effective convolution of an 18~kHz top hat function with a 12~kHz Gaussian yielding a total expected line width of about $[(12+6)^2 + 12^2]^{1/2} \approx 22$~kHz. This width is similar to the separation between C$\alpha$ and H$\alpha$ lines within the band, which ranges from 25~to 43~kHz with an average of 33.2~kHz. Thus, we expect there to be some line blending between C$\alpha$ and H$\alpha$ lines, motivating simultaneous fits of both lines in the stacked profiles to reduce bias on the inferred properties.
In each LST bin, we simultaneously fit Gaussian models for the H$\alpha$ and C$\alpha$ lines to account for any overlap. The Gaussian model for each line is given by:
\begin{equation}
\phi_{\text{RRL}}(\nu)= A \exp \left[ {- \frac {(\nu-\nu_0)^2}{2\sigma^2}}\right]
\end{equation}
where $A$ is the peak amplitude of the profile, $\nu_0$ is the frequency of the line center, and $\sigma$ is the standard deviation width of the line. The FWHM of the profile is $2(2 \ln 2)^{1/2}\sigma$. The thermal noise variance used in the fit of the average line profile is derived from the variances for each window. A larger 300-channel region is used to derive these variances, with the lines in each window masked out. The variance is used in the Gaussian fits to estimate the uncertainty on the best-fit parameters. The maximum hydrogen line width is limited by a prior to 24~kHz to prevent the model from trying to absorb any broad structure in the continuum, particularly when the hydrogen line is weak or not present.
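A hedged sketch of this simultaneous C$\alpha$/H$\alpha$ fit (a generic least-squares implementation, not the actual analysis code; starting values, bounds, and names are illustrative, with the 24~kHz cap on the hydrogen FWHM expressed as a bound on its $\sigma$):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

FWHM2SIG = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def double_gauss(nu, A_c, nu_c, sig_c, A_h, nu_h, sig_h):
    """Carbon absorption plus hydrogen emission Gaussians (offsets in Hz)."""
    return (A_c * np.exp(-(nu - nu_c) ** 2 / (2 * sig_c ** 2)) +
            A_h * np.exp(-(nu - nu_h) ** 2 / (2 * sig_h ** 2)))

def fit_profile(offsets_hz, profile_k, noise_k):
    p0 = [-0.5, 0.0, 20e3 * FWHM2SIG, 0.1, -33e3, 15e3 * FWHM2SIG]
    bounds = ([-np.inf, -20e3, 1e3, 0.0, -50e3, 1e3],
              [0.0, 20e3, 50e3, np.inf, -20e3, 24e3 * FWHM2SIG])
    popt, pcov = curve_fit(double_gauss, offsets_hz, profile_k,
                           p0=p0, sigma=noise_k, absolute_sigma=True,
                           bounds=bounds)
    return popt, np.sqrt(np.diag(pcov))
\end{verbatim}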
The best-fit Gaussian profiles for each LST bin are also shown in Figure~\ref{fig:average_profiles}. Table~\ref{tab:fitparamsalpha} lists the best-fit parameters. We find C$\alpha$ line detection significance ranges from 3$\sigma$ at LST~4~h where the best-fit line amplitude is only -33~mK to 28$\sigma$ at LST~18~h where the amplitude is -795~mK. H$\alpha$ line detections range from no significant detection at LST~4~h to 6$\sigma$ confidence at LST~18~h where the line is in emission with amplitude 203~mK.
The typical observed line FWHM is about 23~kHz for C$\alpha$ and 20~kHz for H$\alpha$, in reasonable agreement with expectations.
We also report integrated optical depths in Table~\ref{tab:fitparamsalpha}. They are calculated as the area under the best-fit Gaussian model for each stacked absorption line profile normalized by the observed continuum sky temperature, $T_{sky}$, at the average line center frequency. This is given analytically as:
\begin{equation}
\label{eqn_taudnu}
\int \tau(\nu) d\nu \approx \frac{\sqrt{2 \pi}~\sigma~|A|}{T_{\text{sky}}(\nu_0)}
\end{equation}
where $\tau(\nu) \equiv |T_{\text{RRL}}(\nu)| / T_{\text{sky}}(\nu)$. For C$\alpha$, the observed integrated optical depths range from 0.1~Hz at LST~2~h to 3.4~Hz at LST~18~h. We note the integrated optical depths are affected by beam dilution (see Section~\ref{sec:innerplane} below) if the signal does not fill the full beam.
Figure~\ref{fig:betasingle} shows the stacked profiles centered on C$\beta$ lines in each LST bin. Within the same frequency range of 50-87~MHz, we stack 106 C$\beta$ lines with principal quantum numbers ranging $533 \leq n \leq 638$, resulting in an effective mean frequency of 66.3~MHz. We find that EDGES is sensitive to these lines with a maximum significance of 13$\sigma$ for a line amplitude of -305~mK at LST~16~h. The C$\beta$ profiles drop to the noise level when the Galactic Center moves out of the instrument beam. Parameters from these fits are included in Table~\ref{tab:fitparamsalpha}. We do not see clear evidence for H$\beta$ lines, which would be centered about 35~kHz below the C$\beta$ lines, but note additional structures in some LST bins (LST~14~h, LST~16~h, LST~18~h, LST~20~h) roughly 50~kHz below the C$\beta$ line centers. We rule out RFI as the cause of these structures because they are correlated with the Galactic Plane, possibly indicating unaccounted-for lines or instrumental errors, as discussed above.
Figure~\ref{fig:gamma_lines} shows the stacked profiles in LST~18~h and LST~6~h centered at C$\gamma$ line centers. We stack 122 C$\gamma$ lines with principal quantum numbers ranging $610 \leq n \leq 731$ at an effective mean frequency of 66.1~MHz. The Gaussian fit to the stacked profile in the LST~18~h bin gives a 5$\sigma$ detection of $-171.5\pm34.3$~mK. Similar to C$\alpha$ and C$\beta$ lines, the signal reduces as the Galactic Center moves out of the beam and we see no evidence for detection between LST~0 and~6~h.
Overall, the magnitudes of C$\beta$ lines are about 30-50\% of the C$\alpha$ lines at corresponding LSTs, and C$\gamma$ lines are about 20\% of C$\alpha$ lines, in agreement with trends reported by \cite{erickson1995low}. Table \ref{tab:beamcorrection} gives a summary of all observed line properties at LST~18~h.
\begin{figure}[t]
\hskip-0.5cm
\includegraphics[scale=0.6]{low_band_gamma_lines.pdf}
\caption{C$\gamma$ line profiles, with 122 individual lines stacked, $610 \leq n \leq 731$, and an effective frequency of 66.1~MHz. The Gaussian model in the top panel gives the best-fit C$\gamma$ amplitude of $-171.5\pm34.4$~mK when the Galactic Plane is in the beam at LST~18~h and the bottom panel shows a non-detection when the Galactic Plane moves away from the beam at LST~06~h with a noise level of $\sim$4~mK.}
\label{fig:gamma_lines}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.5]{COskymaportho70.pdf}
\caption{Simulated all-sky RRL proxy maps for the center of each LST bin. The images are orthographic projections of the Planck~2015 CO map at 115.27~GHz \citep{adam2016planck, ade2016planck}, which provides a rough proxy for the location of carbon in the Galaxy. The CO line emission is tightly concentrated along the Galactic Plane, similar to RRL surveys of the Plane.}
\label{fig:fig7}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.5]{ghaskymaportho70.pdf}
\caption{Same as Figure~\ref{fig:fig7}, but for the total radio emission visible by EDGES derived from the \cite{haslam1981408} 408-MHz sky map scaled to 70~MHz using a spectral index of $\beta=-2.4$. The maps are dominated by synchrotron and free-free emission, which extends from the Galactic Plane to higher latitudes compared to the CO emission observed by Planck.}
\label{fig:fig6}
\end{figure*}
\subsection{Inner Galactic Plane}
\label{sec:innerplane}
We begin by comparing the EDGES observations to prior measurements along the inner Galactic Plane. At LST~18~h, when the Galactic Center is directly overhead and the primary response of the EDGES beam includes the strong RRL regions seen in previous surveys, the average line profile shows distinct carbon absorption and hydrogen emission lines. The C$\alpha$ line has a best-fit Gaussian amplitude of $A=-795$~mK and the H$\alpha$ has a line amplitude of 203~mK. However, we cannot directly compare the amplitudes observed by EDGES to prior observations with higher angular resolution. Due to the large size of the EDGES beam, the inner Galactic Plane only fills a relatively small fraction of the total beam. We must account for this beam dilution when the Galactic Plane is overhead to compare EDGES observations with prior measurements. The large beam is also responsible for spectral effects not present in narrow beam observations. Narrow beam observations do not span a large range of Galactic longitudes, hence the spectral smearing from Galactic rotation is lower in existing narrow beam measurements. Lastly, prior measurements have typically been acquired in (or translated to) a single epoch, eliminating the spectral offset due to Earth's orbital velocity that manifests as an additional broadening contribution in the EDGES observations. Here we will account for these effects in addition to the simple beam dilution.
To calculate the beam dilution, we take the area of the Galactic Plane containing the strong line regions with height $|b|<{3^{\circ}}$ \citep{roshi2002carbon} and longitudinal width $|\ell|<40^\circ$ \citep{alves2015hipass}, yielding an effective solid angle for the Galactic Plane of 480~square degrees. Comparing this to the effective area of the EDGES beam yields an approximate beam dilution ratio of $f_b=0.088$.
The effect of spectral smearing in EDGES observations also reduces the peak amplitude of observed line profiles.
Typical intrinsic widths for low-frequency RRLs along the Galactic Plane are 30~km/s or about 7~kHz at the center of the observed band, compared to 23~kHz observed width for EDGES. This yields a spectral smearing factor in EDGES observations of $f_s=0.30$.
\begin{table}[h!]
\caption{\label{tab:beamcorrection} Undiluted Line Estimates for LST~18~h}
\hskip-2.0cm
\begin{tabular}{ccccccc}
\hline \hline
Line & n & \multicolumn{2}{c}{Observed} & \multicolumn{3}{c}{Undiluted Estimate}\\
\hline
& & A & FWHM & A & $L/C$ & $\tau d\nu$ \\
& & (K) & (kHz)& (K) & ($10^{-4}$) & (Hz) \\
\hline
\multicolumn{7}{c}{ 50-87~MHz } \\
\hline
C$\alpha$ & 423-507 & -0.795 & 22 & -30 & -12 & 9 \\
C$\beta$ & 533-638 & -0.277 & 28 & -10 & -4 & 3 \\
C$\gamma$ & 610-731 & -0.171 & 22 & -6 & -2 & 2 \\
H$\alpha$ & 423-507 & 0.203 & 17 & 7 & 3 & -2 \\
\hline
\multicolumn{7}{c}{ 108-124.5~MHz} \\
\hline
C$\alpha$ & 375-392 & -0.046 & 73 & -2 & -2 & 6 \\
H$\alpha$ & 375-392 & 0.098 & 49 & 4 & 5 & -8
\end{tabular}
\tablecomments{Estimates of line strengths that would be observed by a narrow beam instrument for the LST~18~h bin along the inner Galactic Plane. The effects of dilution due to the large EDGES beam and long observing season are accounted for.}
\end{table}
As an estimate of the undiluted amplitude of the RRL signal from the Galactic Plane, we divide our fitted amplitudes by the calculated beam dilution and spectral smearing ratios for LST~18~h. Table~\ref{tab:beamcorrection} summarizes the equivalent undiluted line strengths, which broadly match the expectations of $\approx$10~K \citep{emig2019first}. As an additional assessment, we report line-to-continuum ratios ($L/C$) and integrated optical depths using the undiluted line amplitudes. The line-to-continuum ratio is calculated using the undiluted peak amplitude of each stacked profile and an estimate of 26,945~K for the undiluted continuum temperature along the inner Galactic Plane at the effective mean frequency of 66.3~MHz.
The undiluted integrated optical depth, $\tau d\nu$, is a measure of the total line power. Assuming line blending is minimal (as described in Section~\ref{gaussian_fit}), $\tau d\nu$ is conserved through spectral smearing. Hence, to calculate the undiluted optical depth we apply only the beam dilution factor to the line amplitudes reported in Table~\ref{tab:fitparamsalpha} and use the same estimated undiluted continuum temperature at the effective mean frequency as for the $L/C$ calculations. We find $10^{-4} \lesssim |L/C| \lesssim 10^{-3}$ and $2\lesssim|\tau d\nu|\lesssim9$~Hz. Both are generally in good agreement with previous measurements. The undiluted $\tau d\nu$ estimates for C$\alpha$ and C$\beta$ match the observations of \citet{oonk2019spectroscopy} within about 20\%. The H$\alpha$ estimates, however, are about twice as large as the \citet{oonk2019spectroscopy} observations, which could reflect different distributions of carbon and hydrogen gas along the inner Galactic Plane.
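As a rough numerical check of the undiluted LST~18~h C$\alpha$ entries in Table~\ref{tab:beamcorrection}, using the dilution factors and continuum temperature quoted above (the arithmetic is approximate and intended only as a consistency sketch):
\begin{verbatim}
import numpy as np

A_obs, fwhm_hz = -0.795, 22e3           # observed stacked Calpha fit
f_b, f_s = 0.088, 0.30                  # beam dilution and spectral smearing
T_sky = 26945.0                         # undiluted continuum at 66.3 MHz (K)

A_undiluted = A_obs / (f_b * f_s)       # ~ -30 K peak amplitude
L_over_C = A_undiluted / T_sky          # ~ -1e-3
sigma = fwhm_hz / (2 * np.sqrt(2 * np.log(2)))
# tau dnu is conserved through smearing, so only the beam factor applies:
tau_dnu = np.sqrt(2 * np.pi) * sigma * abs(A_obs / f_b) / T_sky   # ~8-9 Hz
print(A_undiluted, L_over_C, tau_dnu)
\end{verbatim}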
We investigate the frequency dependence of C$\alpha$ line strength further in Section~\ref{sec:rrl_frequency_dependence}.
\subsection{Away From the Inner Galactic Plane}
\label{sec:3.3}
We are particularly interested in the RRL strength at high-Galactic latitudes and along the outer disk because these are the regions where redshifted 21~cm experiments primarily observe. The best-fit H$\alpha$ and C$\alpha$ amplitudes reduce as the Galactic Plane moves out of the EDGES beam. When the beam is nearly opposite the Galactic Center (around LST~2-6~h), the line strengths reach minima.
The RRLs observed by EDGES away from the inner Galactic Plane may come from a discrete number of small regions, each with strong lines, or they may arise from large areas of diffuse gas with weak lines. In the scenario of diffuse gas with weak lines, we can assume the gas fills the entire EDGES beam and so requires no dilution factor to estimate its true strength. We use this assumption to place an upper limit on the strength of these signals directly from the observations.
The inner plane of the Galaxy passes out of the EDGES beam starting at about LST~0~h, leaving only high latitudes or the outer disk visible until about LST~10~h. During this period, C$\alpha$ is always detected at 3$\sigma$ or higher and reaches its weakest amplitude of $-33\pm11$~mK near the Galactic anti-center at LST~2~h, as listed in Table~\ref{tab:fitparamsalpha}. In contrast, the H$\alpha$ line is detected at the beginning and end of the period at 3$\sigma$ or higher, but decreases from about 40~mK at these boundaries to a minimum of 4$\pm$8~mK at LST~4~h, where it is consistent with no detection. The 3$\sigma$ upper limit on H$\alpha$ emission at LST~4~h is 28~mK. Similarly, C$\beta$ is marginally detected at about $2\sigma$ at LST~0 and~4~h, but not at LST~2~h. For the upper limit of C$\beta$ absorption in this region, we use the $3\sigma$ upper limit of the LST~0~h weak detection, giving 32~mK.
\subsection{Model Tracers}
While EDGES observations provide sensitive tests of RRLs away from the inner Galactic Plane, they cannot produce maps of the line strength distribution on the sky. Here we investigate if existing all-sky maps of molecular gas can predict the observed variations in RRL strength with LST, thus potentially serving as proxies for the RRL distribution on the sky.
We use the Planck 2015 carbon monoxide (CO) map at 115.27~GHz \citep{adam2016planck, ade2016planck}. CO traces molecular gas in the Galaxy and may serve as a proxy for C$\alpha$ strength \citep{chung2019cross}. We convolve the EDGES beam with the CO map to simulate an observed drift scan as if EDGES were to see the sky described by the map. For simplicity, we take the CO map as a direct proxy to the RRL line strength. We do not account for underlying astrophysics and do not attempt to convert CO intensity to CO column density and C column density that could be used to more accurately calculate C$\alpha$ line strengths. Figure~\ref{fig:fig7} illustrates the CO sky as it would be viewed by EDGES. For comparison, Figure~\ref{fig:fig6} shows the same view of the total radio emission found by following the same procedure to create simulated observations using the \citet{haslam1981408} 408~MHz map that is dominated by synchrotron radiation. We assume a constant spectral index of $\beta=-2.4$ for scaling the Haslam map to the observed frequencies.
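A hedged sketch of how such a simulated drift scan can be built (this is a simplified stand-in for the actual procedure: it assumes a HEALPix map in celestial coordinates, approximates the elongated EDGES beam with a single circular Gaussian, and ignores the frequency dependence of the beam):
\begin{verbatim}
import numpy as np
import healpy as hp

def drift_scan(sky_map, lst_hours, lat_deg=-26.7, fwhm_deg=85.0):
    """Beam-weighted average of sky_map for a zenith-pointing antenna at
    latitude lat_deg, evaluated at each LST (hours)."""
    nside = hp.npix2nside(sky_map.size)
    theta, phi = hp.pix2ang(nside, np.arange(sky_map.size))   # colat, RA
    dec, ra = np.pi / 2 - theta, phi
    out = []
    for lst in lst_hours:
        ra0, dec0 = np.deg2rad(lst * 15.0), np.deg2rad(lat_deg)
        cos_sep = (np.sin(dec0) * np.sin(dec) +
                   np.cos(dec0) * np.cos(dec) * np.cos(ra - ra0))
        sep = np.arccos(np.clip(cos_sep, -1.0, 1.0))
        beam = np.exp(-4 * np.log(2) * (sep / np.deg2rad(fwhm_deg)) ** 2)
        beam[sep > np.pi / 2] = 0.0                 # horizon cut
        out.append(np.sum(beam * sky_map) / np.sum(beam))
    return np.array(out)
\end{verbatim}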
In Figure~\ref{fig:fig12}, we show the simulated drift scans along with EDGES C$\alpha$ and H$\alpha$ observations, all normalized to their maximum amplitudes at LST~18~h.
The CO proxy map reasonably reproduces the relative LST trends of the C$\alpha$ and H$\alpha$ drift scans observed by EDGES, capturing the peak-to-peak range and the temporal width of the cycle more closely than the synchrotron continuum map. This is consistent with RRLs being more concentrated on the Galactic Plane than synchrotron emission. Despite capturing the broad trends, the simplistic CO proxy fails to predict the relative RRL amplitudes and trends with LST observed between LST~0-10~h, suggesting it is a poor template for predicting RRLs in regions of the sky of most interest to redshifted 21~cm power spectrum observations.
\begin{figure}[t]
\centering
\includegraphics[trim={0.1in 0 0 0},clip, scale=0.6]{normalizedcomparison.pdf}
\caption{Peak normalized comparison of measured RRL strength and simulated proxy measurements as a function of LST. The average line profile magnitudes for C$\alpha$ (purple diamonds) and H$\alpha$ (red squares) are shown along with the C$\alpha$ integrated optical depths (black circles). The simulated proxy measurement based on the Planck 115.27~GHz CO~map (dash-dot green line) provides a reasonable predictor of relative RRL strength, particularly when the Galactic Center is in the beam. However, it over-predicts the RRL strength at low LST. For reference, the continuum drift scan from EDGES observations (solid blue line) is plotted along with a simulated continuum drift scan using the \cite{haslam1981408} 408-MHz map (dotted black line). As expected, the continuum drift scans show a broader peak than the RRL scans, consistent with synchrotron emission extending farther from the Galactic Plane compared to line emission. }
\label{fig:fig12}
\end{figure}
\subsection{Extending to 124.5~MHz}
\label{sec:3.6}
Here we repeat the above analysis for data from the EDGES mid-band instrument using 66~days of observations from early 2020. The mid-band instrument uses a smaller antenna than the low-band instrument, shifting the frequency range to 60-160~MHz. At 90~MHz, the mid-band antenna beam has FWHM of 75.4$\degree$ parallel to excitation axis in the north-south direction and 106.6$\degree$ perpendicular \citep{monsalve2021absolute}. For this analysis, we reduce the frequency range to 108-124.5~MHz, following a conservative approach of excluding RFI from FM band below 108~MHz and from satellite communication bands and air-traffic control channels above 125~MHz. Even in this narrow band, some of the frequency channels are dominated by RFI and thus need rigorous flagging. We completely discard time bins that have 60\% or more of the frequency channels flagged. We fit and remove the continuum using an 11-term polynomial and further clean the data by discarding days with residual amplitudes larger than 10~K.
From Equation~\ref{eq1}, we expect 18~RRL C$\alpha$ transitions ($375 \leq n \leq 392$) in the frequency range of 108~to 124.5~MHz that are roughly spaced 1~MHz apart. We select 600~kHz windows around each line center and stack these segments from the continuum-removed residual spectra, resulting in an effective mean frequency of 116.3~MHz. We expect a similar LST dependence to the low-band results and thus, for simplicity, we investigate only two LST bins: a two-hour bin centered on LST~18~h, when the Galactic Center is overhead, and a large twelve-hour bin centered on LST~6~h, when the Galactic Center is not overhead. The average line profile in each bin was fit with a double Gaussian for hydrogen emission and carbon absorption.
In the LST 18~h bin, we find a best-fit amplitude of $-46 \pm 15$~mK and FWHM of 73.4~kHz for C$\alpha$ and $98 \pm 38$~mK and FWHM of 48.9~kHz for H$\alpha$, as shown in Figure~\ref{fig:fig9}. We can also correct for beam dilution and compare these observations to existing RRL surveys. The EDGES mid-band antenna is a scaled copy of the low-band antenna, hence it has similar beam properties and the dilution factors we calculated earlier are still valid. Accounting for the dilution effects and using an undiluted continuum temperature of 6980~K, we find an undiluted H$\alpha$ amplitude of about 4~K with $\tau d\nu\approx-8$~Hz and $L/C\approx5\times10^{-4}$.
C$\alpha$ has an undiluted amplitude of 2~K with $\tau d\nu\approx6$~Hz with $L/C\approx-2\times10^{-4}$. Away from the Galactic Center in the large LST bin centered on 6~h, we find the best fit amplitude for C$\alpha$ of $3.4 \pm 3.5$~mK, consistent with no detection, and yielding a $3\sigma$ upper limit of 14~mK. Similarly, we see no evidence of H$\alpha$ emission present and can assign a comparable upper limit.
We note that at LST~18~h \cite{emig2020lofar}, \cite{oonk2019spectroscopy}, and \cite{shaver1978extragalactic} report increasing amplitudes of H$\alpha$ lines with increasing frequency that are inconsistent with the amplitude changes observed by EDGES from 66~MHz to 116~MHz. \cite{oonk2019spectroscopy} find a factor of 6.5 increase in detected H$\alpha$ line amplitudes at 113~MHz compared to 63~MHz. We find a factor of two decrease at the higher frequency at LST~18~h and a non-detection away from the Galactic Plane. One possible explanation for the discrepancy could be that stronger H$\alpha$ emission lines are getting flagged during the RFI excision in the EDGES analysis, although we have checked and see no bias in the flagged RFI channels at the frequencies corresponding to H$\alpha$ lines. Another possibility is that the unique beam properties (e.g. size and frequency dependence) of each study lead to different perceived frequency dependence of the line amplitudes. Substantial overlap between the H$\alpha$ and C$\alpha$ lines could hinder the model fitting in the EDGES analysis. If this effect is larger at 116~MHz than at 66~MHz in the EDGES analysis, the recovered amplitude estimates for both types of lines would likely be lower than their true levels. However, in Figure~\ref{fig:rrl_frequency_dependence}, the EDGES C$\alpha$ line at 116~MHz and the nearest counterpart observed by \cite{oonk2019spectroscopy} at 113~MHz appear in reasonable agreement. Further investigation is warranted but beyond the scope of this work.
\begin{figure}[t]
\centering
\includegraphics[scale=0.6]{rrlmidband.pdf}
\caption{Stacked line profiles (blue) in mid-band EDGES data spanning the frequency range 108-124.5~MHz, with $375 \leq n \leq 392$ and effective mean frequency of 116.3~MHz. The top panel shows a two-hour LST bin centered on the Galactic Center. A double Gaussian model (orange) gives a best-fit C$\alpha$ amplitude of $-46 \pm 15$~mK. The best-fit H$\alpha$ amplitude of $98\pm38$~mK, peaking 59~kHz below the carbon absorption, is about half of the peak level found at lower-frequencies. The bottom panel shows a long twelve-hour bin centered on LST~6~h, yielding stacked line profiles consistent with no detection (best-fit C$\alpha$ amplitude of $3.4\pm3.5$~mK).}
\label{fig:fig9}
\end{figure}
\subsection{Frequency Dependence of RRLs}
\label{sec:rrl_frequency_dependence}
In this section, we investigate the frequency dependence of C$\alpha$ and C$\beta$ line amplitudes within the observed bands and compare the EDGES observations to other prior observations and theoretical expectations. The observed brightness temperature of C$\alpha$ absorption lines relative to the background is dependent on the electron temperature ($T_e$) \citep{shaver1980}, optical depth of the lines ($\tau_L$), and background radiation temperature ($T_{R}$).
The assumption of local thermodynamic equilibrium (LTE) may not hold for $n\gtrsim100$ \citep{2017ApJ...837..141S}, resulting in increased contributions from stimulated emission or absorption. In this regime, observed line temperatures relative to the total continuum can be written as:
\begin{equation}
\label{eqn:rrl_frequency_dependence}
T_{RRL}(\nu_n) \approx \tau^{LTE}_L \left [ b_n T_e - b_n \beta_n T_R(\nu_n) \right ]
\end{equation}
where $b_n$ and $\beta_n$ are departure coefficients \citep{2002ASSL..282.....G} accounting for deviations from LTE and $\nu_n$ is the frequency corresponding to a specific transition line. For the spectrally unresolved lines observed here, $\tau^{LTE}_L$ is effectively constant with frequency due to the nearly identical Saha-Boltzmann occupations for nearby $n$ and low photon energy relative to the excitation temperature. The departure coefficients have smooth but complicated behaviors as a function of $n$ and can span $0.6\lesssim b_n \lesssim 1.3$ and $-20 \lesssim b_n \beta_n \lesssim 15$, depending on the density and temperature of the gas \citep{2017ApJ...837..141S}, which is generally expected to be $n_e\approx0.1$~cm$^{-3}$ and $T_e\approx50-100$~K.
The sky continuum temperature is dominated by synchrotron and free-free emission at low radio frequencies and follows a power-law, $T_{cont}=T_{100} (\nu/\nu_{100})^{\beta}$~K, where $\beta$ is the spectral index and $T_{100}$ is a normalizing temperature at $\nu_{100}=100$~MHz. The observed sky temperature from Earth toward the inner Galactic Plane is about 10,000~K at 100~MHz, falling to about 1000~K away from the Plane \citep{2017MNRAS.469.4537D}. The gas that produces RRLs is at varying distances from Earth. Some of the synchrotron and free-free emission we observe originates between Earth and the gas. This portion of the received emission does not experience RRL absorption in the gas and is not included in the radiation background, $T_R$, in Equation~\ref{eqn:rrl_frequency_dependence}. To account for this, $T_R$ is typically approximated as a fraction of $T_{cont}$.
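A compact sketch of the line-temperature model of Equation~\ref{eqn:rrl_frequency_dependence} with this power-law background (the default parameter values shown are the ``strong prior'' assumptions used below; the function names are illustrative):
\begin{verbatim}
def t_cont(nu_mhz, T100=10000.0, beta=-2.4):
    """Continuum sky temperature toward the inner Plane (K)."""
    return T100 * (nu_mhz / 100.0) ** beta

def t_rrl(nu_mhz, tau_L, T_e=100.0, b_n=1.0, bn_beta_n=1.0, f_R=0.2,
          T100=10000.0, beta=-2.4):
    """Line brightness temperature (K); T_R is taken as a fraction f_R
    of the total continuum, as in the background model described above."""
    T_R = f_R * t_cont(nu_mhz, T100, beta)
    return tau_L * (b_n * T_e - bn_beta_n * T_R)

# Example: expected Calpha line depth near 66 MHz for tau_L = 2e-3
print(t_rrl(66.1, 2e-3))   # negative => absorption against the background
\end{verbatim}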
In Figure~\ref{fig:rrl_frequency_dependence}, we show the frequency dependence of the C$\alpha$ and C$\beta$ lines measured by EDGES. To place the EDGES observations on the same scale as previously reported results from other instruments, we use undiluted $T_{RRL}$ line amplitude estimates instead of the directly measured values.
To estimate the undiluted $T_{\text{RRL}}$ of individual lines, we use the amplitude of the continuum-subtracted residual spectrum at each line center frequency. The $1\sigma$ uncertainty for each line is estimated from the residual spectrum RMS, scaled to each line frequency by a power-law matching the sky spectrum. For LST~18~h, we plot the undiluted estimates and compare them with the \citet{oonk2019spectroscopy} and \citet{erickson1995low} observations of the Galactic Center region. The \citet{oonk2019spectroscopy} data were reported as $\tau d\nu$ integrals, where $\tau=T_{RRL}/T_{cont}$ is the observed line-to-continuum ratio. To convert them to brightness temperature units, we estimate $\tau$ from their reported $\tau d\nu$ and FWHM$_{line}$ and then multiply by $T_{cont}$ assuming $\beta=-2.4$ and $T_{100}=10,000$~K, appropriate for the inner Galactic Plane. We also calculate and correct for beam dilution factors for each of their measurements using their reported FWHM$_{beam}$ values and treating them as the diameters of circular top hat areas in solid angle. The beam dilution factors increase from 0.63 at 40~MHz to unity (no dilution) at 87~MHz and above. \cite{erickson1995low} report the line amplitudes scaled by the system temperature. With a FWHM$_{beam}$ of 4$\degree$ at 76.4~MHz, the line amplitudes are likely not beam diluted and we apply no correction. We take all 35 reported fields from the line-forming region with Galactic longitudes $l=340 \degree$ to $l=20 \degree$, rescale them with the reported system temperature and calculate an effective mean line amplitude of $-7.9 \pm0.9$~K for C348$\alpha$-C444$\alpha$. Even with the simple assumptions and approximations used to place all of the observations on the same scale, we see good agreement between the line amplitudes measured here and previously reported results by \citet{oonk2019spectroscopy} and \citet{erickson1995low}. We repeat a similar process for the middle row of Figure~\ref{fig:rrl_frequency_dependence} to show the undiluted $T_{\text{RRL}}$ of C$\alpha$ and C$\beta$ lines measured by EDGES away from the inner Galactic Plane in a large LST~0-12~h averaged bin. In this case, however, there are no comparable observations from other instruments for comparison.
As a further check, we fit a model to the data using Equation~\ref{eqn:rrl_frequency_dependence} simplified to its LTE limit by setting $b_n=1$ and $b_n \beta_n=1$. For LST~18~h, we fit all three data sets jointly and initially assume $T_e=100$~K and $T_R(\nu) = 0.2 \times T_{cont}(\nu)$, similar to the background radiation model used by \citet{oonk2019spectroscopy}. We find a best-fit $\tau_L= 2 \times 10^{-3}$ for C$\alpha$ with Bayesian information criterion (BIC) of 2938. This is shown as the ``strong prior'' model in Figure~\ref{fig:rrl_frequency_dependence}. Removing our assumptions for $T_e$ and $T_R$ and including $T_e$, $T_{100}$, and $\beta$ in the parameter estimation improves agreement between data and model and is highly preferred, decreasing the BIC to 844. This ``relaxed prior'' fit yields $T_e=190$~K, $T_{100}=1110$~K, $\beta=-3.6$, and $\tau_L=5\times10^{-3}$. Similarly, for C$\beta$, the ``strong prior'' yields a $\tau_L= 1.6 \times 10^{-3}$ with a BIC of 899 while a ``relaxed prior'' fit yields $T_e=162$~K, $T_{100}=1800$~K, $\beta=-2.7$, and $\tau_L=1.7\times10^{-3}$ with a BIC of 854, thus favoring a relaxed prior model. Collectively, these results suggest assuming ideal LTE conditions is insufficient to accurately capture the full frequency dependence of the line amplitudes along the inner Galactic Plane between 40 and 200~MHz, at least for C$\alpha$.
For the LST~0-12~h case, we test the same pair of strong and relaxed prior models. For the strong prior, we use $T_{100}=200$~K, but retain $T_e=100$~K and $\beta=-2.4$. We find for C$\alpha$, $\tau_L=1\times10^{-4}$ with a BIC of 281. For relaxed priors we find best-fit $T_e=63$~K, $T_{100}=3110$~K, $\beta=-4.1$, and $\tau_L=3\times10^{-6}$ with a BIC of 271, thus slightly favoring a relaxed prior model. Similarly for C$\beta$, a strong prior model yields $\tau_L=5\times10^{-5}$ with a BIC of 365, while a relaxed prior model yields $\tau_L = 2\times10^{-4}$, $T_e=278$~K, $T_{100}= 257$~K, $\beta=-1$ with a BIC of 365. The relatively low signal to noise ratio of the data and more limited frequency range at LST~0-12~h compared to LST~18~h likely reduces the sensitivity to distinguish between the two models in this case.
For both LST~18 and 0-12~h cases, the relaxed prior models prefer steep spectral indices for $T_R$ that are physically unrealistic. A full treatment of $b_n$ and $\beta_n$ in the model would likely improve agreement with data without driving the models to extreme background radiation properties. It is beyond the scope of this work to incorporate detailed departure coefficient models, which are sensitive to $T_e$ and $n_e$, but we can test if smoothly varying departure coefficients can provide the needed degrees of freedom for the model to agree with the data. Since $b_n$ has been calculated to remain close to unity and the background radiation brightness temperature is likely much larger than the gas temperature, we expect that RRL amplitudes in our observations are dominated by the $T_R$ term in Equation~\ref{eqn:rrl_frequency_dependence} and relatively insensitive to the $T_e$ term. We keep $b_n=1$ fixed and focus on the product $b_n \beta_n$, for both C$\alpha$ and C$\beta$ cases. The departure coefficient models of \citet{2017ApJ...837..141S} show that $b_n \beta_n$ is generally a monotonically varying smooth function for $300 \lesssim n \lesssim 550$, for C$\alpha$, and $490 \lesssim n \lesssim 700$ for C$\beta$, over a large range of $T_e$ and $n_e$. We account for this structure by modeling $b_n \beta_n$ as a 3$^{rd}$-order polynomial in frequency. We fit for the $b_n \beta_n$ model and $\tau_L$ while using the strong prior scenario described above with fixed $T_e$, $T_{100}$, and $\beta$. In case of both C$\alpha$ and C$\beta$, for LST~18~h, including the polynomial for $b_n \beta_n$ is the most preferred model, lowering the BIC by 86 beyond the relaxed prior case, while for LST~0-12~h, including the $b_n \beta_n$ polynomial is not preferred, with similar BIC to the relaxed prior model.
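A minimal sketch of this model-comparison step (a Gaussian likelihood is assumed, the strong-prior values are fixed as above, and the data arrays and optimizer settings are placeholders rather than the actual fitting setup):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

T_E, T100, BETA = 100.0, 10000.0, -2.4      # strong-prior assumptions

def model_k(nu_mhz, tau_L, bn_beta_n):
    T_R = 0.2 * T100 * (nu_mhz / 100.0) ** BETA
    return tau_L * (T_E - bn_beta_n * T_R)

def neg_log_like(params, nu_mhz, data_k, err_k):
    tau_L, c0, c1, c2, c3 = params
    x = (nu_mhz - 100.0) / 100.0                # 3rd-order polynomial in frequency
    bn_beta_n = c0 + c1 * x + c2 * x**2 + c3 * x**3
    return 0.5 * np.sum(((data_k - model_k(nu_mhz, tau_L, bn_beta_n)) / err_k) ** 2)

def bic(nll_min, n_params, n_data):
    return 2.0 * nll_min + n_params * np.log(n_data)

# res = minimize(neg_log_like, [2e-3, 1, 0, 0, 0],
#                args=(nu_mhz, data_k, err_k), method="Nelder-Mead")
# print(bic(res.fun, 5, len(data_k)))
\end{verbatim}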
The bottom panel of Figure~\ref{fig:rrl_frequency_dependence} shows the derived $b_n \beta_n$ dependence on frequency. For LST~18~h, it matches qualitatively with \citet[their Figure~8]{2017ApJ...837..141S} with a large smoothly varying trend across most of the frequency range. The LST~0-12~h observations are more limited without accompanying measurements at high frequencies and the best-fit is not constrained by data above 120~MHz. This makes $b_n \beta_n$ highly covariant with $\tau_L$. We had to fix $\tau_L=1\times10^{-4}$ to prevent the estimation from trending to unrealistically large $\tau_L\approx1$ and small $b_n \beta_n$. If the gas in both LST cases has similar properties, we might expect that the better constrained $b_n \beta_n$ polynomial from the LST~18~h data would also improve the LST~0-12~h fits. Using that polynomial instead of $b_n=\beta_n=1$, and fixing all other parameters except $\tau_L$ with the strong prior assumptions, decreases the BIC by nearly three, making it the most preferred model for LST~0-12~h case.
\begin{figure*}[t]
\includegraphics[scale=0.7]{spectral_dependence_alpha_beta.pdf}
\caption{Frequency dependence of C$\alpha$ and C$\beta$ absorption lines. The top panels show LST~18~h after correcting for beam dilution and spectral smearing. The middle panels show the average over LST~0-12~h assuming the lines are due to diffuse gas filling the beam, hence no beam dilution correction is applied. In both cases, EDGES low-band data (blue circles) have a strong frequency dependence. A single mid-band point (green square) for C$\alpha$ represents the average across its analyzed data. For LST~18~h, the \citet{oonk2019spectroscopy} data (red diamonds) and a composite of the \citet{erickson1995low} data (blue square) are shown after adjusting to brightness temperature units as described in Section~\ref{sec:rrl_frequency_dependence}. 1$\sigma$ error bars are plotted for all points, but do not include dilution uncertainty in the LST~18~h case, which is conservatively estimated as a 50\% error in undiluted $T_{RRL}$. Overall, there is good agreement between this work and \citet{oonk2019spectroscopy}. The black solid and dashed lines show best-fit models in the LTE limit of Equation~\ref{eqn:rrl_frequency_dependence} as described in Section~\ref{sec:rrl_frequency_dependence}. The black dash-dot lines show the best-fit model when the departure coefficient product, $b_n \beta_n$, is modeled as a 3$^{rd}$-order polynomial in frequency and included in the parameter estimation with the strong priors. The best-fit departure coefficient models are shown in the bottom panel. For LST~18~h, the best-fit $b_n \beta_n$ for C$\alpha$ qualitatively resembles the curves in \citet{2017ApJ...837..141S}.
}
\label{fig:rrl_frequency_dependence}
\end{figure*}
\section{Implications for 21~cm Observations}
\label{sec:21cm}
Summarizing the findings from Section~\ref{sec:3.3} for diffuse RRL contributions away from the Galactic Center, we have detected C$\alpha$ absorption at about 30-40~mK and placed upper limits on C$\beta$ absorption and H$\alpha$ emission lines at 30~mK levels in the 50-87~MHz band. In the 108-124.5~MHz band, we have placed upper limits on C$\alpha$ absorption and H$\alpha$ emission at about 14~mK. These levels are generally at or below the expected redshifted 21~cm angular and spectral fluctuation amplitudes in theoretical models (e.g., \citealt{furlanetto2004observing, barkana2009studying, 2019MNRAS.486.1763F}), which are typically 10-100~mK, depending on model and redshift. Here we empirically study the quantitative effects induced by RRLs on the global 21~cm signal and the 21~cm power spectrum using the amplitudes of the detected lines.
\subsection{Global 21~cm Signal}
\label{sec:G21_effects}
Early results in the search for the global 21~cm signal (e.g. \citealt{monsalve2017results, bowman2018absorption, 2018ApJ...858...54S, singh2022detection}) have generally ignored the presence of RRLs. To test if this is a reasonable assumption, we create simulated observations to investigate the recovery of the global 21~cm signal with and without the strongest RRLs in an extreme scenario assuming the measurement is made at LST~18~h when the strong lines on the inner Galactic Plane are present.
To model the RRLs, we start with the best-fit stacked line amplitudes in the LST~18~h bin at the effective frequency of 66.1~MHz (for the 50-87~MHz band). We model all individual C$\alpha$ lines in the full EDGES low-band of 50-100~MHz by scaling the measured stacked line amplitude in frequency and adding Gaussian profiles with 19.8~kHz FWHM at each line center. For the frequency scaling, we note that for the target frequencies, Equation~\ref{eqn:rrl_frequency_dependence} is dominated by the $T_R$ term. We therefore simplify the expression and use only a power-law with spectral index of -3.6 for scaling the C$\alpha$ line amplitudes and a spectral index of -2.7 for scaling the C$\beta$ line amplitudes. This is consistent with the relaxed prior best-fit parameters from above.
H$\alpha$ emission lines are added with a constant amplitude of 203~mK, as an extreme case that takes the highest detected amplitude to represent all of the lines across the full frequency band. This RRL model has an RMS of 207~mK for a simulated spectrum with 6~kHz resolution, matching the raw resolution of the EDGES data. The RMS reduces to 101~mK after binning to 61~kHz, which matches the spectral resolution of SARAS \citep{singh2022detection}, and 14~mK after binning to the 390~kHz resolution used in \cite{bowman2018absorption}. The noise falls faster than the square root of the channel size because, for large spectral bins, some of the emission and absorption lines fall in the same bins, offsetting their contributions to the RMS. At the respective spectral resolutions, the RMS is below the thermal noise, which is about 25~mK for deep EDGES observations and about 200~mK for SARAS \citep{singh2022detection}. Hence we expect the RRLs will have little impact on the signal recovery.
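A simplified sketch of this extreme-case RRL forest and its RMS after re-binning (illustrative only: C$\beta$ lines are omitted, a single constant 33~kHz offset stands in for the varying C$\alpha$-H$\alpha$ separation, and line centers are computed from the Rydberg formula):
\begin{verbatim}
import numpy as np

R_C = 3.2898419602e15 * 12.0 / (12.0 + 5.48579909e-4)   # carbon Rydberg * c (Hz)
n = np.arange(400, 520)
calpha = R_C * (1.0 / n**2 - 1.0 / (n + 1) ** 2)
calpha = calpha[(calpha > 50e6) & (calpha < 100e6)]      # Calpha lines in band

def gaussian(nu, amp, nu0, fwhm):
    sig = fwhm / (2 * np.sqrt(2 * np.log(2)))
    return amp * np.exp(-(nu - nu0) ** 2 / (2 * sig**2))

nu = np.arange(50e6, 100e6, 6.1e3)                       # raw 6 kHz grid
spec = np.zeros_like(nu)
for nu_c in calpha:
    a_c = -0.795 * (nu_c / 66.1e6) ** -3.6               # scaled Calpha amplitude (K)
    spec += gaussian(nu, a_c, nu_c, 19.8e3)
    spec += gaussian(nu, 0.203, nu_c - 33e3, 19.8e3)     # constant Halpha emission

def binned_rms(s, factor):
    m = (s.size // factor) * factor
    return np.std(s[:m].reshape(-1, factor).mean(axis=1))

print(binned_rms(spec, 1), binned_rms(spec, 64))         # raw vs ~390 kHz bins
\end{verbatim}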
To test signal recovery, we start by extracting a realistic foreground continuum model from the EDGES observations used in \cite{bowman2018absorption}. We use \texttt{edges-analysis}\footnote{\href{https://github.com/edges-collab/edges-analysis}{https://github.com/edges-collab/edges-analysis}} to calibrate the data and flag RFI, correct for beam chromaticity, and bin the observations between LST 17-18~h. We simultaneously fit a 5-term linearized logarithmic polynomial (``linlog'') foreground model and a 5-term flattened Gaussian 21~cm absorption profile. We use this best-fit foreground model as the continuum in the simulated observations. We then add the RRL model and a modeled 21~cm absorption feature matching the best-fit properties reported in \cite{bowman2018absorption}, but with a conservative amplitude of 200~mK. When we fit a 5-term linlog foreground model and 21~cm absorption flattened Gaussian profile simultaneously to the simulated observations, the difference in amplitude of the best-fit global 21~cm absorption feature with and without the RRLs is about 2~mK, extremely small compared to the 200~mK injected signal. Figure~\ref{fig:global21cm} shows the RRL model and resulting global 21~cm fit. Lastly, we tested flagging RRLs in the analysis of actual observations and found it did not affect the results reported in \cite{bowman2018absorption}.
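A hedged sketch of the signal-recovery test (a generic least-squares stand-in, not the \texttt{edges-analysis} code: a plain Gaussian substitutes for the flattened-Gaussian absorption profile, and the reference frequency, starting values, and function names are assumptions):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

NU_C = 75.0   # reference frequency (MHz), assumed

def linlog(nu, *a):
    """5-term linearized logarithmic polynomial foreground model."""
    x = np.log(nu / NU_C)
    return (nu / NU_C) ** -2.5 * sum(a_i * x**i for i, a_i in enumerate(a))

def fg_plus_21cm(nu, a0, a1, a2, a3, a4, A21, nu0, w):
    """Foreground plus a Gaussian absorption stand-in for the 21 cm profile."""
    absorption = -A21 * np.exp(-(nu - nu0) ** 2 / (2 * (w / 2.355) ** 2))
    return linlog(nu, a0, a1, a2, a3, a4) + absorption

def recovered_amplitude(nu_mhz, spectrum_k):
    p0 = [1500, 0, 0, 0, 0, 0.2, 78.0, 19.0]
    popt, _ = curve_fit(fg_plus_21cm, nu_mhz, spectrum_k, p0=p0, maxfev=20000)
    return popt[5]   # best-fit absorption amplitude (K)

# delta = recovered_amplitude(nu, sim_with_rrls) - recovered_amplitude(nu, sim_no_rrls)
\end{verbatim}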
These findings apply directly to EDGES observations. They should also be representative of most other global 21~cm experiments since most instruments have large beams that would similarly dilute the observed amplitude of strong lines from the inner Galactic Plane. We note, however, the effects of RRLs on global 21~cm searches could be more significant for instruments with higher spectral and angular resolution.
\begin{figure}[t]
\centering
\includegraphics[scale=0.55]{rrlglobalresiduals.pdf}
\caption{ \textit{Top:} Residuals after fitting and removing a 5-term foreground model from simulated spectra in the 50-100~MHz band at LST~18~h with 6~kHz resolution. The orange curve shows the residuals to a model without RRLs, whereas the blue curve includes modeled C$\alpha$, C$\beta$, and H$\alpha$ lines. \textit{Middle:} The residuals rebinned to 390~kHz for comparison with \citet{bowman2018absorption}. \textit{Bottom:} Residuals after the foreground model is fit and removed when a 200~mK flattened Gaussian global 21~cm absorption signal is also included in the simulated spectra. RRLs contribute to an increase in RMS by 12~mK in the data. The difference in amplitude of the best-fit 21~cm profile is only 2~mK, an error of 1\%.}
\label{fig:global21cm}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.55]{rrldelayspeclst0to12.pdf}
\caption{ \textit{Top:} Simulated RRL spectrum, similar to Figure~\ref{fig:global21cm} but for LST~0-12~h, away from the inner Galactic Plane. C$\alpha$, C$\beta$ and H$\alpha$ lines are modeled in the 50-100~MHz band with 1~kHz resolution. No other foreground contributions are included. \textit{Middle:} The simulated spectrum rebinned to 100~kHz, which is approximately the spectral resolution used by HERA. \textit{Bottom:} Delay spectra calculated from the simulated RRLs for three different spectral binnings. These are equivalent to delay spectra from a notional zero-length baseline in an interferometer. Most of the power contributed by the RRLs is above 2~$\mu$s delay, corresponding to spectral scales smaller than 500~kHz. The 100~kHz spectral resolution of HERA couples power from smaller scales to larger scales of order 1~MHz. An intermediate spectral resolution of 40~kHz removes much of the coupling to large scales. Even with coupling, the power from scales larger than 500~kHz ($< 2$~$\mu$s) contributes less than 10~mK$^2$ to the total 36~mK$^2$ variance in the simulated spectrum with HERA-like resolution. }
\label{fig:delay_lst0-12}
\end{figure}
\subsection{21~cm Power Spectrum}
\label{sec:21cm_powerspectrum}
Interferometers optimized for 21~cm power spectrum observations aim to detect and characterize the IGM during Cosmic Dawn and reionization. Most instruments have focused on observing the reionization era between $13>z>6$, corresponding to frequencies between 100~and 200~MHz. Recently, several instruments have begun to place initial upper limits on the 21~cm signal between $25>z>15$, corresponding to frequencies between about 50~and 90~MHz \citep{eastwood201921, gehlot2019first, gehlot2020aartfaac}. During reionization, fluctuations in the 21~cm signal are dominated by the growth of reionized regions on scales of order 10~Mpc, corresponding to spectral fluctuations on scales of order 1~MHz in observations, although there is also significant power on smaller scales. Before reionization, the 21~cm power spectrum traces the matter power spectrum and the ultraviolet and X-ray heating of the IGM, yielding power on generally similar scales as during reionization.
The amplitude of 21~cm fluctuations is predicted to be $\Delta^2 \lesssim 10^2$~mK$^2$, varying with redshift as the IGM evolves.
Although there have been significant improvements in upper limits on 21~cm power spectrum measurements in the past decade, the current best upper limits remain an order of magnitude higher than the fiducial model \citep{barry2021ska, abdurashidova2022first}.
Here, we estimate the approximate power contributed by RRLs over these spectral scales in a typical observation by simulating all C$\alpha$, C$\beta$, and H$\alpha$ RRLs spanning 50-100~MHz following a similar method as for the global 21~cm test but using the average line amplitudes and limits found for LSTs 0-12~h. These lines are much weaker than those along the inner Galactic Plane. This analysis is directly relevant for the 21~cm interferometers targeting the Cosmic Dawn between $25>z>15$, including OVRO-LWA, LOFAR, and AARTFAAC. Although the Hydrogen Epoch of Reionization Array (HERA; \citealt{deboer2017hydrogen,abdurashidova2022first}) is optimized to observe reionization, it is sensitive down to 50~MHz. We focus our analysis here on HERA given our familiarity with the instrument.
To simulate observations including RRLs, we use the best-fit strong prior model shown in the middle panel of Figure~\ref{fig:rrl_frequency_dependence} as our reference for C$\alpha$ amplitudes and a constant H$\alpha$ amplitude of 33~mK across the band, which is the average from the individual two-hour LST bins centered between 0-12~h. This model somewhat overestimates the line strength compared to the very weakest region of sky at LSTs 2-6~h, but will be more typical of the region covered by long drift scans with HERA. We note that instruments like HERA have high angular resolution (0.8$\degree$ at 75~MHz) in comparison to EDGES (72$\degree$ at 75~MHz), and here we study an averaging effect covering the full beam of HERA. The resulting simulated spectra have 14~mK RMS when the lines are fully resolved, falling to 6~mK RMS when binned with 100~kHz resolution, typical of HERA observations.
Figure~\ref{fig:delay_lst0-12} shows simulated RRL sky spectra and their corresponding delay spectra. The delay spectra are found by applying a Blackman-Harris window function to the simulated sky spectra and Fourier transforming into delay space. The delay spectra are equivalent to what would be seen by a zero-length baseline in an interferometer if the sky contained only RRLs. They illustrate that much of the power introduced by the RRLs resides on small spectral scales below 500~kHz. The limited spectral resolution of HERA couples some of this small-scale power to larger scales, but the total power contributed by scales larger than 500~kHz accounts for less than 10~mK$^2$ of the total 36~mK$^2$ variance in the 100~kHz-binned spectrum. These variances are two orders of magnitude lower than the current best upper limits at high redshift of $\Delta^2 \leqslant 3496$~mK$^2$ at $z=10.4$ \citep{abdurashidova2022improved} and $\Delta^2 \leqslant 7388$~mK$^2$ at $z=17.9-18.6$ \citep{gehlot2020aartfaac}. The RRL power will likely be lower on non-zero length baselines, falling quickly with baseline length as other diffuse Galactic emission does. Future work to map the spatial distribution of RRLs would be valuable to confirm this assumption, and the level of systematics introduced by RRLs should be assessed for next-generation instruments. Nevertheless, the power levels introduced by RRLs are at the low end of expected 21~cm signal levels, and we anticipate the lines can likely be ignored by the current generation of 21~cm interferometers, which are focused on detection rather than detailed characterization of the signal.
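As a rough illustration of the delay-spectrum calculation just described, the short Python sketch below applies a Blackman-Harris window to a toy spectrum and Fourier transforms it into delay space. The line depths, the assumed $\sim$500~kHz line spacing, and the 100~kHz channelization are placeholder values, not the actual simulated RRL model.
\begin{verbatim}
import numpy as np
from scipy.signal.windows import blackmanharris

df = 100e3                                   # HERA-like 100 kHz channels
freqs = np.arange(50e6, 100e6, df)           # Hz
nchan = freqs.size

# Toy stand-in for the RRL sky spectrum: narrow absorption features a few mK
# deep, spaced every 5 channels (roughly 500 kHz).
spec = np.zeros(nchan)
spec[::5] = -0.010                           # K

window = blackmanharris(nchan)
delays = np.fft.fftshift(np.fft.fftfreq(nchan, d=df))    # seconds
dspec = np.fft.fftshift(np.fft.fft(window * spec)) / nchan
power = np.abs(dspec) ** 2                                # ~ K^2 per delay mode

high = np.abs(delays) > 2e-6                              # |tau| > 2 microseconds
print("fraction of power above 2 us delay: %.2f"
      % (power[high].sum() / power.sum()))
\end{verbatim}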
If it ultimately turns out that the lines cannot be neglected, two scenarios are possible. First, if our assumption that diffuse weak lines dominate the observations is incorrect and strong lines from several small regions instead dominate, then the diffuse contribution would be even lower than the upper limits estimated above. Surveys of the dominant localized regions would likely enable them to be subtracted from 21~cm observations during processing with reasonable accuracy. Second, if diffuse lines are present across large areas of the sky at levels that interfere with the 21~cm analysis, as may be possible (see e.g., \citealt{grenier2005clouds}), then those frequencies may need to be omitted from the analysis entirely.
To investigate possible spectral leakage effects that RRL flagging could have on 21~cm power spectrum analysis, we use \texttt{pyuvsim} to simulate observations with HERA. HERA is a 350-element radio interferometer being built in South Africa to observe the redshifted 21~cm power spectrum, designed to cover the frequency range of 50-250~MHz. We simulate HERA visibilities at LST~0, 6, 12, and~18~h on 18~baselines for the telescope's east-west polarization using the Global Sky Model \citep{de2010global}. The resulting visibilities are then passed through a Blackman-Harris window and transformed into delay space both before and after masking RRL channels. We apply a delay-inpainting algorithm to the masked spectra \citep{parsons2009,aguirre2022validation} and attempt to reconstruct the ideal observation as if RRLs had not been present. The inpainting algorithm is a deconvolution that effectively extrapolates along frequency for each visibility to fill in gaps due to RFI excision or other missing data. An example of the resulting delay spectra is shown in Figure~\ref{fig:fig15} for LST~6~h, which is broadly representative of all baselines and LSTs simulated. The effect of masking the RRL frequencies, even with inpainting, is to raise the noise floor in the modes of interest to 21~cm power spectrum analysis ($|\tau|>0.1$~$\mu$s) by about two orders of magnitude. This limits the dynamic range to approximately $10^3$, below the target performance of the telescope that is likely needed to detect the 21~cm power spectrum. This suggests further development would be needed to identify successful strategies for mitigating diffuse RRLs in power spectrum analysis if flagging is required.
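The qualitative leakage mechanism can also be illustrated with a much simpler toy calculation than the full \texttt{pyuvsim} plus inpainting analysis: masking narrow channels in an otherwise smooth foreground spectrum before the delay transform scatters foreground power to high delays. The sketch below (plain masking, no inpainting, a single power-law foreground, placeholder numbers) is meant only to show that mechanism, not to reproduce the simulation.
\begin{verbatim}
import numpy as np
from scipy.signal.windows import blackmanharris

df = 100e3
freqs = np.arange(50e6, 100e6, df)
nchan = freqs.size
foreground = 1e3 * (freqs / 75e6) ** -2.5   # smooth power-law sky, ~1000 K at 75 MHz

mask = np.ones(nchan)
mask[::5] = 0.0                             # flag channels at an assumed RRL spacing

window = blackmanharris(nchan)
delays = np.fft.fftshift(np.fft.fftfreq(nchan, d=df))

def delay_power(spec):
    return np.abs(np.fft.fftshift(np.fft.fft(window * spec))) ** 2

high = np.abs(delays) > 1e-7                # the |tau| > 0.1 us modes of interest
ratio = (delay_power(foreground * mask)[high].max()
         / delay_power(foreground)[high].max())
print("increase in high-delay foreground leakage: %.1e" % ratio)
\end{verbatim}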
\\
\begin{figure}[t]
\centering
\includegraphics[scale=0.6]{simulated_delay_spectra_mK.pdf}
\caption{Comparison of simulated delay spectra for a baseline on the HERA radio telescope at LST~6~h with and without assuming RRL frequencies need to be masked. The dotted line shows the reference delay spectrum without flagging RRLs. Masking and inpainting RRL frequencies before the delay transform (solid line) raises the foreground leakage by over two orders of magnitude in the delay modes used for redshifted 21~cm science. A similar trend was observed for the other simulated LSTs and baselines in HERA.}
\label{fig:fig15}
\end{figure}
\section{Conclusions}
\label{sec:4}
We have used existing long drift scans from the EDGES low-band and EDGES mid-band instruments, with an effective spectral resolution of 12.2~kHz, divided into LST bins, to assess the strength of RRLs averaged over large areas of the sky. In the lower frequency range of 50-87~MHz, C$\alpha$ absorption lines were seen at all LST periods with amplitudes from -33~to -795~mK, and H$\alpha$ lines were seen in emission with average fitted amplitudes ranging from no detection to 203~mK. C$\beta$ lines were also seen in absorption, with average fitted amplitudes between 0 and -305~mK. In the higher frequency band of 108-124.5~MHz, RRLs were seen only when the Galactic Center was overhead: H$\alpha$ lines were seen in emission with an amplitude of 98~mK and C$\alpha$ in absorption with an amplitude of -46~mK.
Conservatively interpreting the observations away from the Galactic Center as RRLs from diffuse gas, as opposed to isolated sources, we find that the amplitudes of the diffuse lines are at or below the expected redshifted 21~cm signal from Cosmic Dawn. They should have little or no effect on global 21~cm observations and are an order of magnitude or more below current 21~cm power spectrum limits at high redshift. This suggests that RRLs do not impose significant systematics in this low-frequency regime and may not need immediate mitigation in current 21~cm experiments aiming for a detection. However, the power levels introduced by RRLs may need mitigation in the next generation of instruments aiming to characterize the 21~cm signal in detail post-detection. If so, a delay spectrum analysis of simulated HERA observations showed that masking and inpainting RRL frequencies raises the foreground leakage by about two orders of magnitude in the higher delay modes used for 21~cm analysis. This reduced the dynamic range of the delay spectra to about $10^3$, suggesting new development would be needed to ensure such masking does not prevent a 21~cm detection. Further work is also warranted to extend this analysis to higher frequencies between 100~and 200~MHz, corresponding to the 21~cm redshifts of reionization, although the non-detection of C$\alpha$ and H$\alpha$ away from the inner Galactic Plane at 116~MHz suggests RRLs will be less of a problem for reionization observations than for higher-redshift Cosmic Dawn observations.
\begin{acknowledgements}
This work was supported by the NSF through research awards for EDGES (AST-1609450, AST-1813850, and AST-1908933) and by the NASA Solar System Exploration Research Virtual Institute cooperative agreement number 80ARC017M0006. N.~M. was supported by the Future Investigators in NASA Earth and Space Science and Technology (FINESST) cooperative agreement 80NSSC19K1413. The authors thank the anonymous reviewers for their detailed comments and suggestions. The authors also thank Dr. Anish Roshi for useful discussions and for providing relevant references. EDGES is located at the Murchison Radio-astronomy Observatory. We acknowledge the Wajarri Yamatji people as the traditional owners of the Observatory site. We thank CSIRO for providing site infrastructure and support.
\end{acknowledgements}
\software{astropy \citep{2013A&A...558A..33A,2018AJ....156..123A},
edges-analysis (\url{http://github.com/edges-collab}),
pyuvsim \citep{2019JOSS....4.1234L}, pyuvdata \citep{pyuvdata2017}
}
{
"arxiv_id": "2302.14226",
"language": "en",
"timestamp": "2023-03-01T02:05:44",
"url": "https://arxiv.org/abs/2302.14226",
"yymm": "2302"
}
\section{Introduction}
A main objective of synthetic biology is to design programmable chemical controllers that can operate in molecular contexts incompatible with traditional electronics \cite{del2016control,vasic2020}. Many computational methods, such as artificial neural networks and genetic algorithms, were originally learned from how living systems work; conversely, inserting advanced computational methods into living organisms to accomplish specific tasks, e.g., biochemical sensing and drug delivery, remains to be studied. A great deal of related work has appeared in recent years: Moorman et al. \cite{moorman2019dynamical} proposed a biomolecular perceptron network in order to recognize patterns or classify cells \emph{in vivo}. The notable work of Vasic et al. \cite{vasic2022programming} transformed feed-forward RRelu neural networks into chemical reaction networks (CRNs for short) and evaluated the resulting models on standard machine learning training sets. There have also been attempts to build CRNs capable of learning \cite{blount2017feedforward,chiang2015reconfigurable}. However, few works have implemented a whole neural network computation in a biochemical system. The main reason is that instruction-based algorithms perform operations sequentially, whereas biochemical reactions proceed concurrently. This contradiction calls for an appropriate regulation method that prevents two reaction modules \cite{vasic2020} from co-occurring and controls their order of execution. Blount et al. \cite{blount2017feedforward} constructed cell-like compartments and artificially added electrical clock signals to solve this problem, which increases the difficulty of biochemical implementation. A more natural idea is to design chemical oscillators that automatically generate chemical species acting as periodic clock signals, exploiting the oscillation of their concentrations between high and low phases to turn the corresponding reaction modules on or off.
\par Oscillation phenomena are often encountered in chemical and biological systems, e.g., the Belousov-Zhabotinskii reaction \cite{tyson1980target} and circadian rhythms \cite{forger2017biological}; the analysis and design of oscillators have been studied extensively in many industrial applications \cite{castro2004unique,kannapan2016synchronization,zhao2019orbital}, so regulating reaction sequences by chemical oscillators is naturally of interest. Arredondo and Lakin \cite{arredondo2022supervised} utilized a $20$-dimensional oscillator extended from an ant colony model \cite{lachmann1995computationally} to order the parts of their chemical neural networks. Jiang et al. introduced a different oscillator model with $12$ species and $24$ reactions \cite{jiang2011}, and then chose two of these species to serve as clock signals. These works follow the same logic: first, find a suitable oscillator model with an appropriate selection of parameters and initial values, and then confirm by simulation that the model is indeed usable. There are two main drawbacks to such a design. One is the lack of analysis of the oscillation mechanism, i.e., it is often unclear why oscillatory behaviour emerges in these models. The other is that the initial concentrations required by these oscillators seem too restrictive to be realized in a real chemical reaction system; e.g., Arredondo and Lakin require the initial concentration of some species to equal $10^{-6}$ while that of others is $1$ \cite{arredondo2022supervised}. In view of this, we summarize the requirements for species to serve as clock signals and design a universal chemical oscillator model based on relaxation oscillation with a clear dynamical analysis, i.e., we can explain why the oscillatory behaviour occurs and how it evolves, and we ensure that the admissible set of initial values is broad. We also consider the effect of parameter selection on the oscillation properties and provide a period estimate for our clock signals.
\par In this paper we mainly focus on designing chemical oscillators for the sequential execution of two chemical reaction modules. Sequence control and alternation of two reaction modules are very common in molecular operations and synthetic biology, for instance in module instructions that involve a judgment statement before execution, and in reaction modules that realize the loop of feed-forward transmission and back-propagation learning in artificial neural networks. We not only provide a general approach for designing a suitable chemical oscillator model for such requirements, but also offer a strategy for spontaneous loop termination of the regulated reaction modules. Our oscillator model can be transformed into abstract chemical reaction networks \cite{feinberg2019foundations} under an appropriate kinetic assumption (mainly \textit{mass-action kinetics}), and DNA strand displacement cascades \cite{soloveichik2010dna} or other technical means can finally be used to implement the CRNs in real chemistry.
This paper is organized as follows. Preliminaries and the problem statement are given in \cref{sec:basic}. \Cref{sec:model} exhibits the structure of a $4$-dimensional chemical relaxation oscillator built on a $2$-dimensional relaxation oscillation, which is able to generate a pair of symmetrical clock signals satisfying our requirements for module regulation. We also provide a detailed dynamical analysis of this model based on geometric singular perturbation theory. In \cref{sec:fur} we discuss how to control the period and occurrence order of the oscillating species by adjusting oscillator parameters and initial values. We then demonstrate the loop control of molecular computations and present a termination strategy for them in \cref{sec:ter}. Finally, \cref{sec:conclusions} is dedicated to the conclusion of the paper.
\section{Preliminaries and problem statement}
\label{sec:basic}
In this section we provide the preparatory knowledge on CRNs \cite{feinberg2019foundations}, and further formulate the problem through a motivating example of how to construct a chemical oscillator to control the occurrence order of two reaction modules. We first give some notations. The sets of positive integers, real numbers, non-negative real numbers and positive real numbers are denoted by $\mathbb{Z}_{>0}, \mathbb{R}, \mathbb{R}_{\geq 0}$ and $\mathbb{R}_{>0}$, respectively. We use $\mathbb{R}^n$ to denote the $n$-dimensional Euclidean space; a vector $\alpha \in \mathbb{R}^n_{\geq0}$ if every component $\alpha_{i} \in \mathbb{R}_{\geq 0}$ with $i=1,2,...,n$, and $\alpha \in \mathbb{R}^n_{>0}$ if every $\alpha_{i} \in \mathbb{R}_{> 0}$.
\subsection{CRN}
A CRN with the $j$th $(j=1,...,m)$ reaction following
\begin{equation}
R_j:~~~~~a_{j1}S_{1}+\cdots +a_{jn}S_{n} \to b_{j1}S_{1}+\cdots +b_{jn}S_{n}
\end{equation}
consists of three nonempty finite sets $\mathcal{S}$, $\mathcal{C}$ and $\mathcal{R}$, i.e.,
\begin{itemize}
\item species set $\mathcal{S}=\{S_1,...,S_n\}$;
\item complex set $\mathcal{C}=\bigcup_{j=1}^m \{a_{j1}S_{1}+\cdots +a_{jn}S_{n}, b_{j1}S_{1}+\cdots +b_{jn}S_{n}\}$ with each element to be a linear combination of species over the non-negative integers;
\item reaction set $\mathcal{R}=\{R_1,...,R_m\}$ with each element consisting of two complexes connected by an arrow, where the complex on the left of the arrow is called the reactant and the one on the right the product.
\end{itemize}
Denote the concentration of species $S_i$ by $s_i\in\mathbb{R}_{\geq 0}$; then the dynamics describing the concentration changes of all species can be written as
\begin{equation}\label{dynamics1}
\frac{\mathrm{d} s}{\mathrm{d} t} = \Xi \cdot r\left(s\right)\ ,
\end{equation}
where $s\in\mathbb{R}^n_{\geq 0}$ is the $n$-dimensional concentration vector, $\Xi\in\mathbb{Z}^{n\times m}$ is called the stoichiometric matrix with every entry defined by $\Xi_{ij} = b_{ij}-a_{ij}$, and $r(s)$ is the
$m$-dimensional vector-valued function evaluating the reaction rate. The most common model to specify the reaction rate is \textit{mass-action kinetics} that induces $r(s)$ by
\begin{equation}\label{dynamics2}
r (s) = \left ( k_{1}\prod_{i=1}^{n}s_{i}^{a_{1i}},...,k_{m}\prod_{i=1}^{n}s_{i}^{a_{mi}} \right )^{\top}
\end{equation}
with $k_{j}>0$ representing the reaction rate constant of reaction $R_j$. A CRN equipped with \textit{mass-action kinetics} is called a mass-action system, which is essentially a group of polynomial ODEs. Throughout this paper, we work with this class of systems. The following example gives an illustration of a mass-action system.
\begin{example}
For a reaction network taking the route
\begin{equation*}
2S_{1} \overset{k_{1}}{\rightarrow} S_{2} + S_{3}\ ,\ \ \
S_{3} \overset{k_{2}}{\rightarrow} 2S_{1}\ ,
\end{equation*}
the species set is $\mathcal{S}=\left \{ S_{1}, S_{2}, S_{3} \right \}$, complex set $\mathcal{C}=\left \{2S_{1}, S_{2}+S_{3},S_{3} \right \}$, stoichiometric matrix $\Xi_{3\times 2} = \begin{pmatrix}
-2&2\\
1&0 \\
1&-1
\end{pmatrix}$, rate function $r\left (s \right ) = \left ( k_{1}s_{1}^{2}, k_{2}s_{3} \right )^{\top}$, and the corresponding ODEs are:
\begin{subequations}
\begin{align}
\frac{\mathrm{d} s_{1}}{\mathrm{d} t} &= -2k_{1}s_{1}^{2} + 2k_{2}s_{3}\ , \\
\frac{\mathrm{d} s_{2}}{\mathrm{d} t} &= k_{1}s_{1}^{2}\ , \\ \frac{\mathrm{d} s_{3}}{\mathrm{d} t} &= k_{1}s_{1}^{2} - k_{2}s_{3}\ .
\end{align}
\end{subequations}
\end{example}
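The construction of these ODEs from the network data is mechanical, and a minimal numerical sketch (in Python, with placeholder rate constants) may make the formula $\mathrm{d}s/\mathrm{d}t=\Xi\cdot r(s)$ of (\ref{dynamics1}) more concrete:
\begin{verbatim}
import numpy as np

# Stoichiometric matrix of the example network (species order S1, S2, S3)
Xi = np.array([[-2,  2],
               [ 1,  0],
               [ 1, -1]])
k1, k2 = 1.0, 0.5          # placeholder rate constants

def rate(s):
    # Mass-action rate vector r(s) = (k1*s1^2, k2*s3)
    s1, s2, s3 = s
    return np.array([k1 * s1**2, k2 * s3])

def dsdt(s):
    # Right-hand side ds/dt = Xi . r(s)
    return Xi @ rate(s)

print(dsdt(np.array([1.0, 0.0, 2.0])))   # ds/dt evaluated at s = (1, 0, 2)
\end{verbatim}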
It has been proved that mass-action chemical kinetics is Turing universal \cite{fages2017strong}. This means that any computation can be embedded into a group of polynomial ODEs \cite{bournez2017polynomial}, which can then be realized by mass-action systems. In practice, this process is implemented by mapping the input of the calculation into the initial concentrations of some species of the network and the output into the limiting value of other species, usually taken at equilibrium. We present an example of an ``addition calculation" to give the reader a clearer illustration \cite{vasic2020}.
\begin{example}\label{ex_gao1}
A CRN follows
\begin{align*}
S_{1} &\to S_{1} + S_{2}\ , &
S_{3} &\to S_{3} + S_{2}\ , & S_{2} &\to \varnothing
\end{align*}
with all reaction rate constants equal to $1$ (throughout this paper, a reaction rate constant equal to $1$ is simply omitted), where the last reaction is an outflow reaction. This network can serve to implement an addition calculation, $a+b=c$ with $a,b,c\in\mathbb{R}$. To this end, we write the ODEs of the dynamics as
\begin{align*}
\frac{\mathrm{d} s_{2}}{\mathrm{d} t} &= s_{1} + s_{3} - s_{2}\ , &
\frac{\mathrm{d} s_{1}}{\mathrm{d} t} &=\frac{\mathrm{d} s_{3}}{\mathrm{d} t} = 0
\end{align*}
with the initial concentration vector $s(0)$. Clearly, when all reactions reach equilibrium, the equilibrium point vector $s^*$ satisfies $s_1^*=s_1(0)$, $s_2^*=s_1^*+s_3^*$ and $s_3^*=s_3(0)$. Therefore, by letting $s_1(0)=a$, $s_3(0)=b$, and $s_2^*=c$, we realize the addition calculation with this network.
\end{example}
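A minimal numerical check of this input/output convention, assuming nothing beyond the ODEs written above (and using \texttt{scipy} as an illustrative integrator), is the following sketch:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

a, b = 2.0, 3.0                               # inputs mapped to s1(0) and s3(0)

def addition_module(t, s):
    s1, s2, s3 = s
    return [0.0, s1 + s3 - s2, 0.0]           # ds1/dt = ds3/dt = 0, ds2/dt = s1+s3-s2

sol = solve_ivp(addition_module, (0.0, 20.0), [a, 0.0, b], rtol=1e-8)
print("s2 at t = 20:", sol.y[1, -1])          # converges exponentially to a + b = 5
\end{verbatim}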
\subsection{Problem statement}\label{sec2.3}
When implementing calculations using chemical reactions, the core difficulty is to handle the contradiction between the sequential execution of calculation instructions and the intrinsically parallel occurrence of chemical reactions. This is especially true for compound arithmetic, such as loop calculations, in which the computational steps have a definite order of priority. We consider the task of implementing the frequently used loop iteration $s_1=s_1+1$, which appears in many machine learning algorithms, through the following CRNs.
\begin{example}\label{ex2.2}
Given two reaction modules ($\mathcal{M}$) \begin{align*}
\mathcal{M}_1:
S_{1} &\to S_{1} + S_{2}\ , &\mathcal{M}_2: S_{2} &\to S_{1} + S_{2}\ , \\
S_{3} &\to S_{3} + S_{2}\ , & S_{1} &\to \varnothing\ , \\
S_{2} &\to \varnothing \ ;
\end{align*}
the ODEs are
\begin{align*}
\mathcal{M}_1:
\frac{\mathrm{d} s_{2}}{\mathrm{d} t} &= s_{1} + s_{3} - s_{2}\ , &
\mathcal{M}_2: \frac{\mathrm{d} s_{1}}{\mathrm{d} t} &= s_{2} - s_{1}\ , \\
\frac{\mathrm{d} s_{1}}{\mathrm{d} t} &=\frac{\mathrm{d} s_{3}}{\mathrm{d} t} = 0\ ; & \frac{\mathrm{d} s_{2}}{\mathrm{d} t} &= 0\ .
\end{align*}
It is easy to get their solutions to be
\begin{subequations}\label{solution}
\begin{align}
\mathcal{M}_1&: s_{2}(t)=s_{1}(0)+s_{3}(0)-(s_1(0)+s_3(0)-s_2(0))e^{-t}, \\
\mathcal{M}_2&: s_{1}(t)=s_{2}(0)-(s_2(0)-s_1(0))e^{-t}.
\end{align}
\end{subequations}
$\mathcal{M}_1$ is actually the network given in Example \ref{ex_gao1}, called the addition module, and $\mathcal{M}_2$ performs the load task and is called the load module \cite{vasic2020}. When these two modules work independently, $\mathcal{M}_1$ performs the calculation $s_2^*=s_1^*+1$ by setting $s_3(0)=1$, while $\mathcal{M}_2$ realizes $s_1^*=s_2(0)=s_2^*$. Moreover, the expressions of the solutions (\ref{solution}) imply that both modules converge to equilibrium exponentially. Therefore, alternating and looping $\mathcal{M}_1$ and $\mathcal{M}_2$ could realize the desired loop iteration $s_1=s_1+1$. However, if we directly put the two reaction modules together, there is strong coupling in the dynamics of $S_{1}$ and $S_{2}$: summing the two ODEs gives $\mathrm{d}(s_1+s_2)/\mathrm{d}t=s_3>0$, so in the absence of regulation the concentrations grow without bound and the calculation instruction is not executed.
\end{example}
The above example suggests that we need a new tool to strictly and alternately control the ``turning on" and ``turning off" of the two modules $\mathcal{M}_1$ and $\mathcal{M}_2$. A possible solution is to introduce a chemical oscillator that produces periodic signals to control the reactions. For this purpose, we modify $\mathcal{M}_1$ and $\mathcal{M}_2$ as follows.
\begin{example}\label{ex2.3}
The modified reaction modules are
\begin{align*}
\tilde{\mathcal{M}}_{1}:
S_{1} + U &\to S_{1} + S_{2} + U\ , & \tilde{\mathcal{M}}_{2}: S_{2} + V&\to S_{1} + S_{2} + V\ ,\\
S_{3} + U&\to S_{3} + S_{2} + U\ , & S_{1} + V&\to V\ .\\
S_{2} + U&\to U\ ;
\end{align*}
Here, we introduce two species $U$ and $V$ that participate in the reactions as catalysts. On the one hand, their own concentrations are not changed as the reactions proceed; on the other hand, their participation only rescales the reaction rates of the original species and does not change the equilibria, i.e., it does not interfere with the original calculation tasks of $\mathcal{M}_1$ and $\mathcal{M}_2$. The ODEs for the dynamics of the whole network ($\tilde{\mathcal{M}}_{1}$ plus $ \tilde{\mathcal{M}}_{2}$) are expressed as
\begin{subequations}\label{eq2.4}
\begin{align}
\frac{\mathrm{d} s_{1}}{\mathrm{d} t} &= (s_{2} - s_{1})v\ , \\
\frac{\mathrm{d} s_{2}}{\mathrm{d} t} &= (s_{1} + s_{3} -s_{2})u\ , \\
\frac{\mathrm{d} s_{3}}{\mathrm{d} t} &= 0\ .
\end{align}
\end{subequations}
From the reaction routes it can be concluded that the presence or absence of species $U$/$V$ ``turns on" or ``turns off" $\tilde{\mathcal{M}}_{1}$/$\tilde{\mathcal{M}}_{2}$. Hence, as long as the concentrations of $U$ and $V$ are designed to be a pair of clock signals oscillating with the same period, they can generate a ``loop" that controls the execution sequence of $\tilde{\mathcal{M}}_{1}$ and $\tilde{\mathcal{M}}_{2}$, and finally realizes the loop iteration $s_1=s_1+1$. \cref{fig1} gives a schematic diagram of a pair of standard clock signals that ``turn on" and ``turn off" two reaction modules alternately and periodically; a minimal numerical illustration of this gating mechanism with idealized square-wave clocks is sketched after this example.
\begin{figure}[htbp]
\centering
\includegraphics[width=1\linewidth,scale=1.00]{fig1.pdf}
\caption{A schematic diagram of a pair of standard clock signals $U$ and $V$ for module regulation.}
\label{fig1}
\end{figure}
\end{example}
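To make the gating idea in Example \ref{ex2.3} concrete before any oscillator is designed, the following sketch replaces the oscillatory species $U$ and $V$ by idealized square-wave clock functions and integrates the gated ODEs (\ref{eq2.4}); the clock period and solver settings are arbitrary choices for illustration only.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

T = 20.0                                   # assumed clock period; each phase lasts T/2

def u_clock(t):                            # 1 during the first half-period, else 0
    return 1.0 if (t % T) < T / 2 else 0.0

def v_clock(t):
    return 1.0 - u_clock(t)

def gated_modules(t, s):
    s1, s2, s3 = s
    ds1 = (s2 - s1) * v_clock(t)           # load module, active when V is high
    ds2 = (s1 + s3 - s2) * u_clock(t)      # addition module, active when U is high
    return [ds1, ds2, 0.0]

sol = solve_ivp(gated_modules, (0.0, 3 * T), [0.0, 0.0, 1.0],
                max_step=0.1, rtol=1e-8)
for k in range(1, 4):
    idx = np.argmin(np.abs(sol.t - k * T))
    print("after cycle %d: s1 = %.3f" % (k, sol.y[0, idx]))   # ~1, ~2, ~3
\end{verbatim}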
Based on Example \ref{ex2.3} and the usual requirements for clock signals \cite{jiang2011}, we define ours for the current work, called symmetrical clock signals.
\begin{definition}[symmetrical clock signals]\label{def2.4}
A pair of oscillatory species $U$ and $V$ are called symmetrical clock signals if
\begin{enumerate}
\item $U$ and $V$ oscillate synchronously and exhibit abrupt transitions between their high and low phases;
\item the concentration of $U$ is strictly greater than $0$ at high amplitude and approximately $0$ at low amplitude, and the same holds for $V$;
\item the amplitudes of $U$ and $V$ are complementary, i.e., when the concentration of $U$ is at high amplitude, that of $V$ is at low amplitude, and vice versa.
\end{enumerate}
\end{definition}
Note that the last two requirements are easy to satisfy and may be produced by many forms of oscillation, while the first one is not trivial; it serves to guarantee the accuracy of module regulation. This motivates us to consider relaxation oscillation \cite{fernandez2020symmetric,field1974oscillations,krupa2013network} as the basic oscillation structure for generating symmetrical clock signals. The remaining task is thus how to develop a chemical relaxation oscillator for controlling molecular computation.
\section{Chemical relaxation oscillator}\label{sec:model}
In this section, we introduce the mechanism of relaxation oscillation and develop $4$-dimensional chemical relaxation oscillator for the current task.
\subsection{Mechanism of 2-dimensional relaxation oscillator}
Relaxation oscillation is a common type of oscillation in biochemical systems \cite{krupa2013network}; its general form in the $2$-dimensional case is
\begin{equation}\begin{aligned}\label{eq2.5}
\epsilon \frac{\mathrm{d} x}{\mathrm{d} t} &= f(x,y)\ , \\
\frac{\mathrm{d} y}{\mathrm{d} t} &=g(x,y)\ ,
& x \in \mathbb{R},\ y \in \mathbb{R},\ 0 < \epsilon \ll 1\ , \end{aligned}\end{equation}
where $f,g$ are $C^k$-functions with $k\geq3$, and $\mathscr{C}_0\triangleq\left \{(x,y):f(x,y)=0\right \}$ is the critical manifold.
\begin{definition}[normally hyperbolic manifold \cite{fenichel1979geometric}]\label{def3.6}
A manifold $\mathscr{C} \subseteq \mathscr{C}_{0}$ is normally hyperbolic if $\forall (x,y) \in \mathscr{C}$, $\frac{\partial f}{\partial x}(x,y) \neq 0$. Point with $\frac{\partial f}{\partial x}(x,y) =0$ is accordingly called non-hyperbolic point or fold point.
Further, a normally hyperbolic manifold $\mathscr{C}$ is attracting if $\frac{\partial f}{\partial x}(x,y) < 0$ for $\forall (x,y) \in \mathscr{C}$ and is repelling if $\frac{\partial f}{\partial x}(x,y) > 0$.
\end{definition}
There has been a great deal of research \cite{fernandez2020symmetric,krupa2001relaxation,grasman2012asymptotic,chuang1988asymptotic} on the dynamics of (\ref{eq2.5}); here we are mainly concerned with results related to oscillation. When the critical manifold $\mathscr{C}_0$ is S-shaped, the function $y=\varphi(x)$ induced by $\mathscr{C}_0$ has precisely two critical points, a non-degenerate minimum $x_{m}$ and a non-degenerate maximum $x_{M}$ satisfying $x_{M}>x_{m}$, which together with $x_l: \varphi(x_l)=y_M=\varphi(x_M), \ x_r: \varphi(x_r)=y_m=\varphi(x_m)$
defines a singular trajectory $\Gamma_{0}$ by
\begin{equation}\begin{aligned}\label{eq_gao3}
\Gamma_{0}&=\left \{(x,\varphi(x)):x_{l}<x\leq x_{m}\right \} \cup \left \{(x,y_{m}):x_{m}<x\leq x_{r}\right \} \\
&\cup \left \{(x,\varphi(x)):x_{M}\leq x<x_{r}\right \} \cup \left \{(x,y_{M}):x_{l}\leq x<x_{M}\right \}\ . \end{aligned}\end{equation}
For this class of systems, Krupa and Szmolyan \cite{krupa2001relaxation} gave a detailed geometric analysis of relaxation oscillation and suggested a sufficient condition for the existence of a relaxation oscillation orbit $\Gamma_{\epsilon}$ lying in the $O(\epsilon)$-neighborhood of $\Gamma_{0}$. Here we omit the details of this condition and refer the reader to Theorem 2.1 of that paper for more information. Note that in common models of relaxation oscillation the oscillation orbit is often accompanied by an unstable equilibrium point on the repelling part of the critical manifold; we retain this constraint in our models.
\begin{remark}
The dynamics of (\ref{eq2.5}) serving as a chemical relaxation oscillator should satisfy the following three conditions:
\begin{itemize}
\item [1.] it can generate a relaxation oscillation in the first quadrant;
\item [2.] its detailed expression should match mass-action kinetics;
\item [3.] the signals generated should be symmetrical according to Definition \ref{def2.4}.
\end{itemize}
\end{remark}
The van der Pol equation \cite{braaksma1993critical} is a typical instance of the structure \cref{eq2.5} with an ``S-shaped" manifold, and it will act as the basis for designing the needed oscillator. Instead of using it directly, we provide a coordinate-transformed version for the current purpose, written as
\begin{subequations} \label{eq:gao2}
\begin{align}
\epsilon \frac{\mathrm{d} x}{\mathrm{d} t} &=-x^3+9x^2-24x+21-y \ ,\\
\frac{\mathrm{d} y}{\mathrm{d} t} &=x-3 \ , ~~x,~y\in\mathbb{R}_{> 0}
\ , ~~ 0 < \epsilon \ll 1\ .
\end{align}
\end{subequations}
Clearly, $(x^*,y^*)=(3,3)$ is its sole equilibrium and the singular trajectory is
\begin{equation}\label{Gamma_0}
\begin{aligned}
\Gamma_{0}=\left\{ \left ( x,\varphi (x) \right ) :1 < x \leq 2 \right\} \cup \left\{ \left ( x,1 \right ):2 < x \leq 5\right\} \\
\cup \left\{ \left ( x,\varphi (x) \right ) : 4 \leq x < 5\right\} \cup \left\{ \left ( x,5 \right ): 1 \leq x < 4\right\}
\end{aligned}
\end{equation}
with $\varphi (x)=-x^3+9x^2-24x+21$. The phase plane portrait is presented in \cref{fig2}. As can be seen, the equilibrium is unstable and the relaxation oscillation orbit $\Gamma_{\epsilon}$, which lies in the $O(\epsilon)$-neighborhood of $\Gamma_0$, is also in the first quadrant. Hence, flows starting from points in the first quadrant, except the unstable equilibrium, soon converge to $\Gamma_{\epsilon}$ through horizontal motion. Note that the model (\ref{eq:gao2}) does not match the kinetic equations (\ref{dynamics1}) and (\ref{dynamics2}) of a mass-action system, since the negative terms $-y$ and $-3$ do not contain the corresponding species concentrations $x$ and $y$ as factors and therefore cannot represent the consumption of species $X$ and $Y$, respectively. To fix this point while avoiding destroying the inherent dynamical properties of (\ref{eq:gao2}), a simple idea is to multiply the right-hand side of the first equation by $x$ and that of the second equation by $y$, which gives a modified version as follows.
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\linewidth,scale=1.00]{fig2.pdf}
\caption{Phase plane portrait of the relaxation oscillation model \cref{eq:gao2}, where the green dotted broken curve represents the critical manifold $\mathscr{C}_0$, the shaded region is the $\epsilon$-neighborhood of the corresponding part of $\mathscr{C}_0$, the rail enclosed by the red broken lines approximates the position of the relaxation oscillation orbit $\Gamma_{\epsilon}$, the black dotted arrows indicate the direction of the flows, and $E(3,3)$ is the unique unstable equilibrium.}
\label{fig2}
\end{figure}
\begin{example}[Modified van der Pol model]\label{ex2.6}
The modified van der Pol model is governed by
\begin{subequations}\label{eq:gao4}
\begin{align}
\epsilon \frac{\mathrm{d} x}{\mathrm{d} t} &=(-x^3+9x^2-24x+21-y)x \ ,\\
\frac{\mathrm{d} y}{\mathrm{d} t} &=(x-3)y \ ,
~~x,~y\in\mathbb{R}_{> 0}
\ , ~~ 0 < \epsilon \ll 1\ .
\end{align}
\end{subequations}
Compared with (\ref{eq:gao2}), the current model adds $\left \{(x,y):x=0\right \}$ to the critical manifold and creates two saddle points on the axis. \Cref{fig3} displays the oscillation of $x$ and $y$ obtained by taking $(x_{0},y_{0})=(5,5)$ and $\epsilon=0.001$. Obviously, the corresponding species $X$ and $Y$ cannot directly play the role of a pair of symmetrical clock signals as \cref{def2.4} demands. However, the oscillatory species $X$ does satisfy the requirement of abrupt transitions between phases. We will utilize it and design additional structure to build a pair of symmetrical clock signals.
\end{example}
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\linewidth,scale=1.00]{fig3.pdf}
\caption{Diagram of oscillators $x$ and $y$ of the model (\ref{eq:gao4}) starting from $(5,5)$ when $\epsilon=0.001$.}
\label{fig3}
\end{figure}
\subsection{Development of 4-dimensional chemical relaxation oscillator}
As suggested by Example \ref{ex2.6}, the species $X$ of a $2$-dimensional relaxation oscillator can act as a driving signal for producing the symmetrical clock signals required in Definition \ref{def2.4}. However, this signal is not approximately $0$ at its low amplitude, so we introduce a ``subtraction operation" to pull its low amplitude down to $0$, i.e., we consider the known truncated subtraction module \cite{vasic2020,buisman2009computing}
\begin{equation}
\begin{aligned}\label{subtraction}
P &\to P+U\ , & U &\to \varnothing\ , \\
X &\to X+V\ , & U+V &\to \varnothing\ .
\end{aligned}
\end{equation}
This module performs the task $u^*=p(0)-x(0)$ when $p(0)>x(0)$, and $u^*=0$ when $p(0)\leq x(0)$. Therefore, as long as the initial concentration of species $P$ is taken to be less than that of species $X$, the equilibrium $u^*$ of species $U$ will be ``pulled down" to $0$, i.e., $U$ has the potential to be one of the symmetrical clock signals. Based on this module, we set the species $X$ to follow the dynamics of (\ref{eq2.5}) exactly, and further modify the module to be
\begin{equation}
\begin{aligned}\label{msubtraction}
P &\overset{\kappa}{\rightarrow} P+U\ ,\ \ \ U \overset{\kappa}{\rightarrow} \varnothing\ , \\
X &\overset{\kappa}{\rightarrow} X+V\ , \ \ \ V \overset{\kappa}{\rightarrow} \varnothing\ , \ \ \ U+V \overset{\kappa/\epsilon}{\rightarrow} \varnothing\
\end{aligned}
\end{equation}
with $\kappa \gg 1$. The modifications are: i) a new outflow reaction $V \overset{\kappa}{\rightarrow} \varnothing$ is added; ii) the reaction rate constant of $U+V \overset{\kappa/\epsilon}{\rightarrow} \varnothing$ is set to be rather large compared to the others; iii) the overall reaction rate constants increase significantly in order of magnitude since $\kappa \gg 1$. We give the reasons for these modifications in the subsequent analysis.
By combining the dynamics for the driving signal $X$, i.e., (\ref{eq2.5}), and that of the mass-action system (\ref{msubtraction}), we get
\begin{subequations}\label{eq3.2}
\begin{align}
\epsilon_{1}\frac{\mathrm{d} x}{\mathrm{d} t} &= \eta_{1}f(x,y) \label{eq3.2a} \ , \\
\frac{\mathrm{d} y}{\mathrm{d} t} &= \eta_{1}g(x,y) \label{eq3.2b} \ ,\\
\epsilon_{1}\epsilon_{2}\frac{\mathrm{d} u}{\mathrm{d} t} &= \eta_{1}(\epsilon_{1}(p-u)-uv)\label{eq3.2c}\ , \\
\epsilon_{1}\epsilon_{2}\frac{\mathrm{d} v}{\mathrm{d} t} &= \eta_{1}(\epsilon_{1}(x-v)-uv)\label{eq3.2d}\
\end{align}
\end{subequations}
with $0 < \epsilon_{1}, \epsilon_{2}=\eta_1/\kappa \ll 1$ and $\eta_{1}, p >0$. Note that we reshape the dynamics (\ref{eq2.5}) by multiplying by $\eta_{1}$, mainly in order to distinguish the time scales of the reaction rates of species $X,~Y$ from those of $U,~V$. The ODEs (\ref{eq3.2a}) and (\ref{eq3.2b}) degenerate to (\ref{eq2.5}) if $\eta_1$ is absorbed into $f(x,y)$ and $g(x,y)$. Here, $f(x,y)$ and $g(x,y)$ are assumed to guarantee the existence of a relaxation oscillation \cite{krupa2001relaxation} whose orbit $\Gamma_{\epsilon}$ lies strictly in the first quadrant along with a unique unstable equilibrium; $p$ is a constant representing the initial concentration of the catalyst $P$; and $\eta_1/\epsilon_2\gg \eta_1$ ensures that $U$ and $V$ respond quickly enough to the oscillator $X$ so that they oscillate synchronously with $X$.
\begin{remark}
In the ODEs (\ref{eq3.2c}) and (\ref{eq3.2d}), the very large reaction rate constant for $U+V \to \varnothing$ is set to $\eta_1/\epsilon_1$, which directly borrows the small parameter $\epsilon_1$ from the perturbed system (\ref{eq3.2a}) and (\ref{eq3.2b}). This choice is made only for the convenience of the dynamical analysis; it is not necessary for developing a chemical relaxation oscillator.
\end{remark}
For this 4-dimensional oscillator model, i.e., the model describing the evolution of the species set $\mathcal{S}=\left \{X,Y,U,V\right \}$, there are two time-scale parameters $\epsilon_{1}$ and $\epsilon_{2}$, which motivates us to analyze its dynamics using singular perturbation theory \cite{kuehn2015multiple}. Let $\alpha=(u,v)$, $\beta=(x,y)$, $F(\alpha, \beta)=(\eta_{1}(p-u-uv/\epsilon_{1}),\eta_{1}(x-v-uv/\epsilon_{1}))$ and $G(\alpha, \beta)=(\eta_{1}f(x,y)/\epsilon_{1},\eta_{1}g(x,y))$; then we define the corresponding slow-fast systems (labeled $\sigma_{sl}$ and $\sigma_{fa}$, respectively) as follows
\begin{subequations}\label{eq3.3}
\begin{align}
\sigma_{sl}&\triangleq\left\{(\alpha,\beta)\bigg| \epsilon_{2}\frac{\mathrm{d} \alpha}{\mathrm{d} t} = F(\alpha, \beta), ~\frac{\mathrm{d} \beta}{\mathrm{d} t} = G(\alpha, \beta)~\text{as}~\epsilon_{2} \to 0 \right\}\ ,\\
\sigma_{fa}&\triangleq\left\{(\alpha,\beta)\bigg| \frac{\mathrm{d} \alpha}{\mathrm{d} \tau} = F(\alpha, \beta),~\frac{\mathrm{d} \beta}{\mathrm{d} \tau} = \epsilon_{2}G(\alpha, \beta), ~\tau =t/\epsilon_{2}~\text{as}~\epsilon_{2} \to 0\right\} \ ,
\end{align}
\end{subequations}
which are essentially equivalent. Further, we define their reduced versions by setting $\epsilon_2=0$, i.e.,
\begin{subequations}\label{eq3.5}
\begin{align}
\sigma_{rsl}&\triangleq\left\{(\alpha,\beta)\bigg| 0 = F(\alpha, \beta), ~\frac{\mathrm{d} \beta}{\mathrm{d} t} = G(\alpha, \beta) \right\}\label{eq.3.5a}\ ,\\
\sigma_{rfa}&\triangleq\left\{(\alpha,\beta)\bigg| \frac{\mathrm{d} \alpha}{\mathrm{d} \tau} = F(\alpha, \beta),~\frac{\mathrm{d} \beta}{\mathrm{d} \tau} = 0, ~\tau =t/\epsilon_{2}\label{eq3.5b}\right\} \ .
\end{align}
\end{subequations}
The flows generated from $\sigma_{rsl}$ and $ \sigma_{rfa}$ are called slow flow and fast flow, respectively. They will be utilized to approximate the flows of $\sigma_{sl}$ and $ \sigma_{fa}$ under the condition of sufficiently small $\epsilon_2$. The critical manifold $\mathscr{C}_0$, induced by $\sigma_{rsl}$ according to $\mathscr{C}_{0}=\left \{(\alpha, \beta): F(\alpha, \beta)=0 \right \}$, gives
\begin{equation}\label{eq3.7}
\mathscr{C}_{0}:~~~~~
\begin{aligned}
\epsilon_{1}(p-u)-uv&=0 \ , \\
\epsilon_{1}(x-v)-uv&=0 \ ,
\end{aligned}
\end{equation}
\begin{lemma}\label{le3.7}
The critical manifold $\mathscr{C}_{0}$ given in \cref{eq3.7} is normally hyperbolic and attracting.
\end{lemma}
\begin{proof}
In this case, $\frac{\partial F}{\partial \alpha}(\alpha, \beta)$ is a $2\times 2$ matrix, namely
\begin{equation*}
\frac{\partial F}{\partial \alpha}(\alpha, \beta)=\eta_{1}\begin{pmatrix} -1-v/\epsilon_{1} & -u/\epsilon_{1} \\ -v/\epsilon_{1} & -1-u/\epsilon_{1} \end{pmatrix}\ ,
\end{equation*}
and the condition in \cref{def3.6} correspondingly becomes a constraint on its eigenvalues \cite{fenichel1979geometric}. The eigenvalues are $\lambda_{1}=-\eta_{1}$ and $\lambda_{2}=-\eta_{1}-\eta_{1}(u+v)/\epsilon_{1}$; both are negative for all $u,v\geq 0$, so the result is true.
\end{proof}
By applying the Fenichel Slow Manifold Theorem \cite{fenichel1979geometric} to the above $\mathscr{C}_0$, we have
\begin{remark}\label{slow manifold theorem}
There exists a slow manifold in the $O(\epsilon_2)$-neighborhood of $\mathscr{C}_0$, denoted by $\mathscr{C}_{\epsilon_2}$, satisfying that $\mathscr{C}_{\epsilon_2}$ is also attracting, and moreover, $\mathscr{C}_{\epsilon_2}$ is locally invariant under the flows of $\sigma_{sl}$, i.e., any flow of $\sigma_{sl}$ will remain in motion on the manifold once it enters the neighborhood of $\mathscr{C}_{\epsilon_2}$. $\mathscr{C}_{\epsilon_2}$ can therefore be treated as a perturbation of $\mathscr{C}_0$.
\end{remark}
We can depict the evolution of trajectory of slow-fast system \cref{eq3.3} more concretely through the following theorem, which also implies that the oscillating signals $U$ and $V$ can respond to the driving signal $X$ quickly enough due to the introduction of time scale $\epsilon_2$.
\begin{theorem}\label{thm3.11}
For the slow-fast system \cref{eq3.3}, any of its trajectories originating from the area $\left \{(\alpha,\beta):\alpha \in \mathbb{R}^2_{\geq 0},\ \beta \in \mathbb{R}^2_{>0} \right \}$ will merge instantaneously into the slow manifold $\mathscr{C}_{\epsilon_2}$ approximately along the fast flow, and moreover, will not leave the manifold.
\end{theorem}
\begin{proof}
The critical manifold $\mathscr{C}_{0}$ divides the area $\left \{(\alpha,\beta):\alpha \in \mathbb{R}^2_{\geq 0},\ \beta \in \mathbb{R}^2_{>0} \right \}$ into two parts as $F>0$ and $F<0$. Based on \cref{le3.7}, the two eigenvalues of $\frac{\partial F}{\partial \alpha}(\alpha, \beta)$ are always negative at any point in the mentioned area, so fast flows of \cref{eq3.5b} from both sides of $\mathscr{C}_{0}$ tend to travel towards $\mathscr{C}_{0}$, which approximates the instantaneous behavior of the slow-fast system \cref{eq3.3}. Therefore, the trajectory originating from the area will instantaneously converge towards $\mathscr{C}_{0}$ approximately along the fast flows. According to \cref{slow manifold theorem}, $\mathscr{C}_{\epsilon_2}$ lies in the $O(\epsilon_2)$-neighborhood of $\mathscr{C}_{0}$ and is locally invariant, which means the trajectory will finally merge into the slow manifold $\mathscr{C}_{\epsilon_2}$ and will not leave.
\end{proof}
\cref{thm3.11} means that the long-term dynamical behavior of the slow-fast system \cref{eq3.3} is fully determined by $\mathscr{C}_{\epsilon_{2}}$, which essentially results from the absence of non-hyperbolic points on $\mathscr{C}_{0}$. Because $\mathscr{C}_{\epsilon_{2}}$ can be viewed as a perturbation of $\mathscr{C}_{0}$, we only need to pay attention to the dynamical behavior on $\mathscr{C}_{0}$. The following theorem gives an approximation of $\mathscr{C}_{0}$.
\begin{theorem}\label{thm3.13}
For the critical manifold $\mathscr{C}_{0}$ shown in \cref{eq3.7}, if the initial concentration of catalyst $P$, $p$, is set to be between the high and low amplitude of the driving signal $X$, i.e., $\exists \delta >0$ such that $x-p>\delta$ when $x$ is at the high amplitude and $p-x>\delta $ when $x$ is at the low amplitude, then the concentrations of oscillating signals $U$ and $V$ can be estimated as
\begin{equation}\label{eq3.8}
\begin{cases}
u(x)= 0 \ ,\\
v(x)= x-p \ ,
\end{cases}
\end{equation}
when $x$ is at the high amplitude, and
\begin{equation}\label{eq3.9}
\begin{cases}
u(x)= p-x \ ,\\
v(x)= 0 \ ,
\end{cases}
\end{equation}
when $x$ is at the low amplitude, with each of the estimation errors to be $O(\epsilon_{1})$.
\end{theorem}
\begin{proof}
From \cref{eq3.7}, it is easy to get
\begin{subequations}\label{eq2.10}
\begin{align}
v^2+(p-x+\epsilon_{1})v-\epsilon_{1}x=0 \ , \\
u^2-(p-x-\epsilon_{1})u-\epsilon_{1}p=0 \ .
\end{align}
\end{subequations}
We firstly consider the case that $x$ is at the low amplitude, i.e. $\exists \delta >0$ such that $p-x>\delta$. Let $0<\epsilon_{1} \ll \delta$, then $p-x \pm \epsilon_{1}>0$. Under the constraint of $u,v \geq 0$, the above equations may be solved as
\begin{subequations}\label{eq3.12}
\begin{align}
u(x)=&\frac{(p-x-\epsilon_{1})+ \sqrt{(p-x-\epsilon_{1})^{2}+4\epsilon_{1}p}}{2} \ , \\
v(x)=&\frac{-(p-x+\epsilon_{1})+ \sqrt{(p-x+\epsilon_{1})^{2}+4\epsilon_{1}x}}{2} \ .
\end{align}
\end{subequations}
Hence, the errors of using (\ref{eq3.9}) to estimate them may be calculated as
\begin{equation*}
\begin{aligned}
\left | u(x)-(p-x) \right |=&\left(\frac{2p}{\sqrt{(p-x-\epsilon_{1})^2+4\epsilon_{1}p}+(p-x-\epsilon_{1})}-1\right)\epsilon_{1} \\
<&\left(\frac{p}{p-x-\epsilon_{1}}-1\right)\epsilon_{1}
<\left(\frac{p}{p-x-\delta}-1\right)\epsilon_{1}
=O(\epsilon_{1}) \
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
\left | v(x)-0 \right |=&\frac{2\epsilon_{1}x}{\sqrt{(p-x+\epsilon_{1})^2+4\epsilon_{1}x}+(p-x+\epsilon_{1})} \\
<&\frac{x}{p-x+\epsilon_{1}}\epsilon_{1}
<\frac{x}{p-x}\epsilon_{1}
=O(\epsilon_{1}) \ .
\end{aligned}
\end{equation*}
For the case of $x$ at the high amplitude, the analysis may be performed based on the same logic, which completes the proof.
\end{proof}
\begin{remark}
This theorem indicates that the time-scale parameter $\epsilon_{1}$ appearing in the reaction rate constant of $U+V \overset{\eta_1/\epsilon_1}{\longrightarrow} \varnothing$ plays an important role in generating a pair of oscillating signals with the symmetry required by \cref{def2.4}. It ensures that on the critical manifold $\mathscr{C}_0$ given by (\ref{eq3.7}) there is always one species, either $U$ or $V$, whose concentration is close to $0$ regardless of the concentration of the driving signal $X$, with an approximation error of $O(\epsilon_1)$. This encouraging result also confirms, conversely, that greatly enlarging the reaction rate constant of $U+V \to \varnothing$ compared to the others in (\ref{msubtraction}) is a reasonable modification.
\end{remark}
The above two theorems explain how the flows of slow-fast system \cref{eq3.3} evolve towards the neighborhood of $\mathscr{C}_{0}$ and provide a more intuitive description about $\mathscr{C}_{0}$. Now, we can announce that the constructed chemical relaxation oscillator \cref{eq3.2} is qualified to produce symmetrical clock signals as required.
\begin{theorem}\label{thm:3.15}
For a $4$-dimensional system (\ref{eq3.2}) describing the concentrations evolution of species set $\mathcal{S}=\left \{X,Y,U,V\right \}$, assume the initial concentration $p$ of catalyst $P$ to be taken as \cref{thm3.13} claims, and the positive initial concentration point $(x(0),y(0),u(0),v(0))$ of the system to satisfy that $(x(0),y(0))$ is not the unique unstable equilibrium of the subsystem $\Sigma_{xy}$ (governed only by (\ref{eq3.2a}) and (\ref{eq3.2b})). Then species $U$ and $V$ can act as a pair of symmetrical clock signals as requested by \cref{def2.4}.
\end{theorem}
\begin{proof}
From \Cref{thm3.11}, we know that for the system considered, any trajectory originating from the area $\left \{(x,y,u,v):x,y>0, u,v \geq 0\right \}$ merges instantaneously into the $O(\epsilon_2)$-neighborhood of the critical manifold $\mathscr{C}_{0}$. Further, based on the algebraic equations \cref{eq3.7} defining $\mathscr{C}_0$, the concentrations of $U$ and $V$ then oscillate synchronously with the driving species $X$. Since abrupt transitions exist between the phases of $X$, the same holds for $U$ and $V$, so the first item of \cref{def2.4} is satisfied. \Cref{thm3.13} provides an approximate description of $\mathscr{C}_0$; we can use \cref{eq3.8,eq3.9} as estimates of the concentrations of $U$ and $V$ with error $O(\epsilon_{1}+\epsilon_{2})$. The second and third items of \cref{def2.4} then follow directly from the conclusion of \cref{thm3.13}. Therefore, as long as we avoid taking the unstable equilibrium point as $(x(0),y(0))$, the relaxation oscillator $X$ drives $U$ and $V$ to oscillate synchronously, acting as the desired symmetrical clock signals.
\end{proof}
At the end of this section, we provide an instance of system \cref{eq3.2} to examine the results of our clock signal design.
\begin{example}[standard chemical relaxation oscillator]\label{ex3.16}
We use the modified van der Pol model presented in \cref{ex2.6} as a specific $2$-dimensional relaxation oscillator to produce the driving signal $X$. Through replacing (\ref{eq3.2a}) and (\ref{eq3.2b}) by (\ref{eq:gao4}) in (\ref{eq3.2}), we obtain the $4$-dimensional chemical oscillator model to be
\begin{subequations}\label{eq3.15}
\begin{align}
\epsilon_{1} \frac{\mathrm{d} x}{\mathrm{d} t} &=\eta_{1}(-x^3+9x^2-24x+21-y)x \label{eq3.15a}\ ,\\
\frac{\mathrm{d} y}{\mathrm{d} t} &=\eta_{1}(x-3)y \label{eq3.15b}\ , \\
\epsilon_{1}\epsilon_{2}\frac{\mathrm{d} u}{\mathrm{d} t} &= \eta_{1}(\epsilon_{1}(p-u)-uv)\label{eq3.15c}\ , \\
\epsilon_{1}\epsilon_{2}\frac{\mathrm{d} v}{\mathrm{d} t} &= \eta_{1}(\epsilon_{1}(x-v)-uv)\label{eq3.15d}\ .
\end{align}
\end{subequations}
Notice that the modified van der Pol model adds $\left \{(x,y):x=0 \right \}$ to the critical manifold induced by the original van der Pol model (\ref{eq:gao2}), but this does not affect the evolution of the trajectories as long as the initial point satisfies $(x(0),y(0)) \in \mathbb{R}^2_{>0}$. We refer to this $4$-dimensional model (\ref{eq3.15}) as the standard chemical relaxation oscillator in what follows.
With $\epsilon_{1}=\epsilon_{2}=0.001, \eta_{1}=0.1, p=3$ and initial point $(x(0), y(0), u(0), v(0))=(5,5,0,0)$, we show the simulation result of the oscillating species $U$ and $V$ in \cref{fig4}. Obviously, they satisfy the requirements in \cref{def2.4}.
\end{example}
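A direct numerical simulation of (\ref{eq3.15}) with the parameter values quoted in \cref{ex3.16} can be sketched as follows; the choice of stiff solver, tolerances and time span are incidental to the illustration, and a stiff method is needed because of the two small time-scale parameters.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

eps1, eps2, eta1, p = 1e-3, 1e-3, 0.1, 3.0

def oscillator(t, z):
    x, y, u, v = z
    dx = eta1 * (-x**3 + 9*x**2 - 24*x + 21 - y) * x / eps1
    dy = eta1 * (x - 3.0) * y
    du = eta1 * (eps1 * (p - u) - u * v) / (eps1 * eps2)
    dv = eta1 * (eps1 * (x - v) - u * v) / (eps1 * eps2)
    return [dx, dy, du, dv]

sol = solve_ivp(oscillator, (0.0, 100.0), [5.0, 5.0, 0.0, 0.0],
                method="LSODA", rtol=1e-6, atol=1e-9, max_step=0.05)
u, v = sol.y[2], sol.y[3]
# One of the two signals stays near 0 while the other is at high amplitude.
print("U range: [%.3f, %.3f], V range: [%.3f, %.3f]"
      % (u.min(), u.max(), v.min(), v.max()))
\end{verbatim}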
\begin{figure}[htbp]
\centering
\includegraphics[width=1\linewidth,scale=1.00]{fig4.pdf}
\caption{The symmetrical clock signals $U$ and $V$ suggested in \cref{ex3.16}.}
\label{fig4}
\end{figure}
\section{Period and occurrence order control of symmetrical clock signals}\label{sec:fur}
In this section, based on the standard symmetrical clock signals, we discuss how to control the period and occurrence order of the oscillating species via adjusting oscillator parameters and initial values, respectively.
\subsection{Oscillator parameters to control the period of symmetrical clock signals}
In the standard chemical relaxation oscillator \cref{eq3.15}, species $X$ is the driving signal for the symmetrical clock signals $U$ and $V$, so we are only concerned with the effect on them of the parameters appearing in \cref{eq3.15a} and \cref{eq3.15b}. However, to keep the critical manifold S-shaped and the unique equilibrium located on its repelling part as \cref{fig2} shows, we do not vary the parameters in \cref{eq3.15a} and keep the current cubic nullcline of the modified van der Pol model unchanged. The only parameter in \cref{eq3.15b} that determines the equilibrium is the constant ``$3$"; we thus redefine the subsystem $\Sigma_{xy}$ of \cref{eq3.15a} and \cref{eq3.15b} as
\begin{subequations}\label{eq4.1}
\begin{align}
\epsilon_{1} \frac{\mathrm{d} x}{\mathrm{d} t} &=\eta_{1}(-x^3+9x^2-24x+21-y)x \ ,\\
\frac{\mathrm{d} y}{\mathrm{d} t} &=\eta_{1}(x-\ell)y \label{eq4.1b}\ ,
\end{align}
\end{subequations}
where $\ell\in \mathbb{R}$ replaces the original ``3" and induces an adjustable equilibrium $x^*=\ell$ for species $X$. Based on the work of Krupa and Szmolyan \cite{krupa2001relaxation}, it can be inferred that in the first quadrant (1) when $\ell<2$ or $\ell>4$, the unique equilibrium $(x^*,y^*)=(\ell, -\ell^3+9\ell^2-24\ell+21)$ is asymptotically stable, so trajectories converge to it along the critical manifold and the oscillation disappears; (2) when $\ell=2$ or $\ell=4$, the equilibrium $(x^*,y^*)=(2,1)$ or $(4,5)$ coincides with one of the non-hyperbolic points of the critical manifold $\mathscr{C}_0$; (3) when $2<\ell<4$, there exists a relaxation oscillation and the equilibrium is unstable. Therefore, the third case $2<\ell<4$ is the admissible interval, though in practice $\ell$ should not be taken too close to $2$ or $4$. Note that in the first quadrant a change of $\ell$ only moves the equilibrium of system \cref{eq4.1} along the critical manifold but does not change the manifold itself, and the associated singular trajectory is exactly the same as $\Gamma_0$ given in \cref{Gamma_0}.
\begin{lemma}\label{le4.1}
Given a $2$-dimensional relaxation oscillator in the form of \cref{eq4.1}, for its singular trajectory of the first quadrant, i.e., $\Gamma_0$ in \cref{Gamma_0} with $\varphi(x)=-x^3+9x^2-24x+21$, denote the left and right part of $\Gamma_0$ by $\Gamma_{l,0}=\left \{(x,\varphi(x)):1<x<2 \right \}$ and $\Gamma_{r,0}=\left \{(x,\varphi(x)):4<x<5 \right \}$ separately, and their corresponding relaxation oscillation orbit parts by $\Gamma_{l,\epsilon_{1}}$ and $\Gamma_{r,\epsilon_{1}}$, which can be described as
\begin{subequations}
\begin{align}
\Gamma_{l, \epsilon_{1}}: y=& \chi_{l}(x,\epsilon_{1} ) \ , 1<x<2 \ , \\
\Gamma_{r, \epsilon_{1}}: y=& \chi_{r}(x,\epsilon_{1} ) \ , 4<x<5 \ .
\end{align}
\end{subequations}
Then for sufficiently small $\epsilon_{0}>0$, there exist differentiable mappings $\psi_{1}$ and $\psi_{2}$ defined separately on $(1,2) \times (0,\epsilon_{0})$ and $(4,5) \times (0,\epsilon_{0})$ as
\begin{subequations}
\begin{align}
\psi_{1}&: (x,\epsilon_{1}) \mapsto \varphi(x)-\chi_{l}(x,\epsilon_{1} )\ ,\\
\psi_{2}&: (x,\epsilon_{1}) \mapsto \chi_{r}(x,\epsilon_{1} )-\varphi(x)\ ,
\end{align}
\end{subequations}
and moreover, $\left | \psi _{1}(x,\epsilon_{1} ) \right |< O (\epsilon_{1} )\ ,
\left | \psi _{2}(x,\epsilon_{1} ) \right |< O (\epsilon_{1} )\ $.
\end{lemma}
\begin{proof}
$\Gamma_{l,0}$ and $\Gamma_{r,0}$ are two normally hyperbolic parts of the critical manifold, and $\Gamma_{l,\epsilon_{1}}$ and $\Gamma_{r,\epsilon_{1}}$ describe the corresponding parts of the slow manifold. According to the Fenichel Slow Manifold Theorem, $\Gamma_{l,\epsilon_{1}}$ and $\Gamma_{r,\epsilon_{1}}$ are diffeomorphic to $\Gamma_{l,0}$ and $\Gamma_{r,0}$, respectively, which confirms the existence of the differentiable mappings $\psi_{1}$ and $\psi_{2}$. Moreover, $\psi_{1}$ measures the distance between $\Gamma_{l,\epsilon_{1}}$ and $\Gamma_{l,0}$, which together with the fact that $\Gamma_{l,\epsilon_{1}}$ lies in the $O(\epsilon_1)$-neighborhood of $\Gamma_{l,0}$ gives $\left | \psi _{1}(x,\epsilon_{1} ) \right |< O (\epsilon_{1} )$. By the same logic, we also get $\left | \psi _2(x,\epsilon_{1} ) \right |< O (\epsilon_{1} )$.
\end{proof}
The oscillation period of the driving signal $X$ in \cref{eq4.1} is approximately the time needed to travel along the whole singular trajectory $\Gamma_0$, and it can be controlled by $\ell$ through the following theorem.
\begin{theorem}\label{thm4.2}
For the $2$-dimensional relaxation oscillator \cref{eq4.1}, the oscillating period of $x$ is approximately $T=T_{l}+T_{h}$, where $T_{l}$ and $T_{h}$ refer to the time spent by $x$ at low amplitude and at high amplitude, respectively, given by
\begin{subequations}\label{eq4.2}
\begin{align}
T_{l}&= \int_{1}^{2}\frac{(\varphi'(x)-\frac{\partial \psi _{1}}{\partial x}(x,\epsilon_{1} ))dx}{\eta_{1}(x-\ell)(\varphi(x)-\psi _{1}(x,\epsilon_{1} ))} \label{eq4.2a}\ , \\
T_{h}&= \int_{5}^{4}\frac{(\varphi'(x)+\frac{\partial \psi _{2}}{\partial x}(x,\epsilon_{1} ))dx}{\eta_{1}(x-\ell)(\varphi(x)+\psi _{2}(x,\epsilon_{1} ))} \label{eq4.2b}\
\end{align}
\end{subequations}
with $\varphi(x)$, $\psi_1(x,\epsilon_1)$ and $\psi_2(x,\epsilon_1)$ having the same meanings as in \cref{le4.1}.
\end{theorem}
\begin{proof}
We first confirm the formula for $T_{l}$. According to \cref{le4.1}, we can use $y=\varphi(x)-\psi_{1}(x,\epsilon_{1} )$ to express the orbit when $x$ oscillates at low amplitude, i.e., $\Gamma_{l,\epsilon_{1}}$. Then we get
\begin{equation}\label{eq4.6a}
\frac{\mathrm{d} y}{\mathrm{d} x}= \varphi'(x)-\frac{\partial \psi _{1}(x,\epsilon_{1} )}{\partial x}\ .
\end{equation}
Furthermore, substituting $y=\varphi(x)-\psi_{1}(x,\epsilon_{1} )$ into the right-hand side of \cref{eq4.1b} yields
\begin{equation}\label{eq4.6b}
\frac{\mathrm{d} y}{\mathrm{d} t}=\eta_{1}(x-\ell)(\varphi(x)-\psi _{1}(x,\epsilon_{1} ))\ .
\end{equation}
Hence the time it takes to travel along $\Gamma_{l,\epsilon_{1}}$ is given by
\begin{equation}\label{eq4.7}
T_{l}=\int _{\Gamma_{l,\epsilon_{1}}}dt=\int_{1}^{2}\frac{1}{\frac{\mathrm{d} x}{\mathrm{d} t}}dx=\int_{1}^{2}\frac{\frac{\mathrm{d} y}{\mathrm{d} x}}{\frac{\mathrm{d} y}{\mathrm{d} t}}dx
=\int_{1}^{2}\frac{(\varphi'(x)-\frac{\partial \psi _{1}}{\partial x}(x,\epsilon_{1} ))dx}{\eta_{1}(x-\ell)(\varphi(x)-\psi _{1}(x,\epsilon_{1} ))}\ .
\end{equation}
In the same way, we can prove the formula for $T_{h}$ in \cref{eq4.2b}. Besides $T_l$ and $T_h$, the whole period of $x$ also includes the time needed to travel along the horizontal trajectory parts in the form of fast flow. Since these time costs are extremely short, we ignore them and take $T=T_{l}+T_{h}$ as an approximation of the whole period of $x$.
\end{proof}
Note that \cref{thm4.2} provides an approximation to the oscillating period of the driving signal $X$, but not a practical one, mainly because explicit expressions of $\psi _{i}(x,\epsilon_{1}), i=1,2$, in \cref{eq4.2a,eq4.2b} are not available. A more practical estimate is needed. Clearly, in these two formulas $\left | \psi _{i}(x,\epsilon_{1} ) \right | < O(\epsilon_{1}), i=1,2$, are very small for sufficiently small $\epsilon_{1}$ and can be neglected; likewise, $\frac{\partial \psi _{1}(x,\epsilon_{1} )}{\partial x}$ (or $\frac{\partial \psi _{2}(x,\epsilon_{1} )}{\partial x}$) can be discarded since the relaxation oscillation orbit $\Gamma_{l,\epsilon_1}$ (or $\Gamma_{r,\epsilon_1}$) is nearly ``parallel" to $\Gamma_{l,0}$ (or $\Gamma_{r,0}$). We thus obtain the following simplified but more practical estimates of $T_{l}$ and $T_{h}$:
\begin{subequations}\label{eq4.8}
\begin{align}
T_{l} &\approx \int_{1}^{2}\frac{\varphi'(x)dx}{\eta_{1}(x-\ell)\varphi(x)} \label{eq4.8a}\ , \\
T_{h} &\approx \int_{5}^{4}\frac{\varphi'(x)dx}{\eta_{1}(x-\ell)\varphi(x)} \label{eq4.8b}\ . \end{align}
\end{subequations}
An immediate application of these estimates is to the subsystem $\Sigma_{xy}$ of the standard chemical relaxation oscillator in \cref{ex3.16}; setting $\ell=3$ and $\eta_1=0.1$, we obtain
\begin{subequations}\label{periodEs}
\begin{align}
T_{l} &\approx \int_{1}^{2}\frac{10(-3x^2+18x-24)dx}{(x-3)(-x^3+9x^2-24x+21)}= 10.470\ , \\T_{h} &\approx \int_{5}^{4}\frac{10(-3x^2+18x-24)dx}{(x-3)(-x^3+9x^2-24x+21)}= 9.193\ .
\end{align}
\end{subequations}
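These values are easy to reproduce. As a minimal numerical check of the simplified estimates \cref{eq4.8} (a sketch in Python, assuming NumPy/SciPy are available; it is not part of the oscillator construction itself), the two integrals can be evaluated by direct quadrature:
\begin{verbatim}
# Minimal check of the simplified period estimates T_l and T_h;
# parameter values (eta1 = 0.1, ell = 3) follow the example in the text.
from scipy.integrate import quad

eta1, ell = 0.1, 3.0
phi  = lambda x: -x**3 + 9*x**2 - 24*x + 21      # S-shaped nullcline phi(x)
dphi = lambda x: -3*x**2 + 18*x - 24             # phi'(x)
f = lambda x: dphi(x) / (eta1 * (x - ell) * phi(x))

T_l, _ = quad(f, 1.0, 2.0)    # low-amplitude branch
T_h, _ = quad(f, 5.0, 4.0)    # high-amplitude branch (reversed limits)
print(T_l, T_h)               # approximately 10.47 and 9.19
\end{verbatim}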
In the whole standard chemical relaxation oscillator, the oscillation of the driving signal $X$ stimulates the symmetrical clock signals $U$ and $V$ to oscillate synchronously. Under the parameters of \cref{ex3.16}, when $X$ travels along $\Gamma_{l,\epsilon_1}$, $U$ travels at high amplitude (let $T_1$ denote the corresponding time); when $X$ travels along $\Gamma_{r,\epsilon_1}$, $V$ travels at high amplitude (let $T_2$ denote the corresponding time). Therefore, we have $T_1\approx T_l$ and $T_2\approx T_h$, which is basically consistent with \cref{fig4}. This indicates that the estimates of \cref{eq4.8} are reliable.
\begin{remark}
The estimation formulas \cref{eq4.8} also reveal that, compared to $\ell$, the parameter $\eta_{1}$ controls $T_l$ and $T_h$ more directly. Since $T_l$ and $T_h$ correspond to the maximum time available for the controlled reaction modules, such as $\tilde{\mathcal{M}}_1$ and $\tilde{\mathcal{M}}_2$ in \cref{ex2.3}, to perform computation, adjusting them can prolong or shorten the execution time of the reaction modules quantitatively.
\end{remark}
\subsection{Oscillator initial values to control the occurrence of symmetrical clock signals}
As with the discussion of oscillator parameters, here we only consider the effect of the initial values of the species appearing in subsystem $\Sigma_{xy}$ of the standard chemical relaxation oscillator \cref{eq3.15}, i.e., the effect of $(x(0),y(0))$ on the oscillating behaviors of $U$ and $V$. For subsystem $\Sigma_{xy}$ governed by \cref{eq3.15a,eq3.15b}, the $\omega$-limit set\footnote{The $\omega$-limit set refers to the invariant closed set to which the trajectory converges as time $t$ approaches positive infinity. The $\omega$-limit sets of plane vector fields are usually divided into two categories: closed orbits and equilibrium points.} in the first quadrant only consists of the relaxation oscillation orbit $\Gamma_{\epsilon_{1}}$, i.e., the $O(\epsilon_1)$-neighborhood of $\Gamma_0$ defined in \cref{Gamma_0}, and an unstable equilibrium $(x^*,y^*)=(3,3)$. However, different initial values $(x(0),y(0))$ cause the trajectory of $\Sigma_{xy}$ to merge into different positions of $\Gamma_{\epsilon_{1}}$, which finally affects the behavior of $U$ and $V$.
To depict this effect, we look at the repelling part of the critical manifold in the first quadrant, i.e., $\left \{(x,\varphi(x)):~ 2<x<4 \right \}$ with $\varphi(x)=-x^3+9x^2-24x+21$, denoted by $\varphi_{re}(x)$ to distinguish it from $\varphi(x)$. Rewriting it as $\left \{(\varphi_{re}^{-1}(y),y):~ 1<y<5 \right \}$, where $\varphi_{re}^{-1}(y)$ denotes the inverse of $\varphi_{re}(x)$, we can divide the first quadrant of the phase plane of $\Sigma_{xy}$ into two areas
\begin{align*}
\mathcal{A}_1:\left \{(x,y):y\geq5\right \} \cup \left \{(x,y):x<\varphi^{-1}_{re}(y),1<y<5\right \}
\cup \left \{(\varphi^{-1}_{re}(y),y):3<y<5\right \}\ ,\\
\mathcal{A}_2:\left \{(x,y):y\leq1\right \} \cup \left \{(x,y):x>\varphi^{-1}_{re}(y),1<y<5\right \} \cup \left \{(\varphi^{-1}_{re}(y),y):1<y<3\right \}\ ,
\end{align*}
where only the unstable equilibrium $(x^*,y^*)=(3,3)$ is excluded.
\begin{proposition}\label{pro4.4}
Given the subsystem $\Sigma_{xy}$ governed by \cref{eq3.15a,eq3.15b}, any of its trajectories starting from an initial point $(x(0),y(0))\in \mathcal{A}_1$ merges into the left part of the relaxation oscillation orbit $\Gamma_{\epsilon_{1}}$, the $O(\epsilon_1)$-neighborhood of $\Gamma_0$ given in \cref{Gamma_0}; the trajectory merges into the right part of $\Gamma_{\epsilon_{1}}$ if $(x(0),y(0))\in \mathcal{A}_2$.
\end{proposition}
\begin{proof}
We focus on the statement about $\mathcal{A}_1$. The first part of $\mathcal{A}_1$ corresponds to the area above $\Gamma_{\epsilon_{1}}$; a trajectory originating there can only converge horizontally towards the neighborhood of the left part of the critical manifold and then merge into the left part of $\Gamma_{\epsilon_{1}}$, i.e., $\Gamma_{l,\epsilon_1}$, along the slow manifold. A trajectory originating from the second part merges instantaneously into $\Gamma_{l,\epsilon_1}$ because of the effect of the repelling manifold $\varphi_{re}(x)$. The third part is the upper segment of $\varphi_{re}(x)$, from which the trajectory immediately enters the second part of $\mathcal{A}_1$ since $\frac{\mathrm{d} y}{\mathrm{d} t}>0$ there. The situation for $(x(0),y(0))\in \mathcal{A}_2$ is exactly the reverse.
\end{proof}
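To make \cref{pro4.4} concrete, the following short sketch (in Python; the helper names are ours) classifies an initial point into $\mathcal{A}_1$ or $\mathcal{A}_2$ by numerically inverting $\varphi$ on its repelling branch:
\begin{verbatim}
# Sketch: classify an initial point (x0, y0) into A1 or A2.
from scipy.optimize import brentq

phi = lambda x: -x**3 + 9*x**2 - 24*x + 21

def phi_re_inv(y):
    # inverse of phi restricted to the repelling branch 2 < x < 4 (1 < y < 5)
    return brentq(lambda x: phi(x) - y, 2.0, 4.0)

def region(x0, y0):
    if y0 >= 5.0: return "A1"
    if y0 <= 1.0: return "A2"
    xm = phi_re_inv(y0)
    if x0 < xm:  return "A1"
    if x0 > xm:  return "A2"
    return "A1" if y0 > 3.0 else "A2"   # on the repelling branch itself

for pt in [(6, 6), (2, 2), (0.5, 0.5), (4, 4)]:   # the four points used below
    print(pt, region(*pt))                         # A1, A1, A2, A2
\end{verbatim}
According to \cref{pro4.4}, points classified as $\mathcal{A}_1$ merge into $\Gamma_{l,\epsilon_1}$, while points classified as $\mathcal{A}_2$ merge into $\Gamma_{r,\epsilon_1}$.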
\begin{remark}
\cref{pro4.4} suggests that as long as the initial point of $\Sigma_{xy}$ governed by \cref{eq3.15a,eq3.15b} avoids the unique unstable equilibrium $(3,3)$, oscillation takes place. Furthermore, placing the initial point in $\mathcal{A}_1$ or $\mathcal{A}_2$ determines whether $x$ initially oscillates at low or at high amplitude. This is not a strict requirement on the initial point, in contrast to the one proposed by Arredondo and Lakin \cite{arredondo2022supervised}, who required the initial concentrations of some species to differ from others by a factor of $10^{6}$, which is not easy to achieve in real chemical reaction realizations. The current oscillator model is therefore quite encouraging and competitive.
\end{remark}
\begin{figure}[htbp]
\centering
\includegraphics[width=1\linewidth,scale=1.00]{fig5.pdf}
\caption{Trajectory evolution diagram of subsystem $\Sigma_{xy}$ of \cref{eq3.15} starting from four different initial points $A(6,6),~B(2,2)\in\mathcal{A}_1$ and $C(0.5,0.5),~D(4,4)\in\mathcal{A}_2$, where the green, red and black lines express the same information as given in \cref{fig2}.}
\label{fig5}
\end{figure}
\begin{figure}[tbhp]
\centering
\subfloat[$x(0)=y(0)=6$.]{\label{fig6a}\includegraphics[width=0.5\linewidth]{fig6_1.pdf}}
\subfloat[$x(0)=y(0)=2$.]{\label{fig6b}\includegraphics[width=0.5\linewidth]{fig6_2.pdf}}
\qquad
\subfloat[$x(0)=y(0)=0.5$.]{\label{fig6c}\includegraphics[width=0.5\linewidth]{fig6_3.pdf}}
\subfloat[$x(0)=y(0)=4$.]{\label{fig6d}\includegraphics[width=0.5\linewidth]{fig6_4.pdf}}
\caption{The oscillating behaviors of the standard chemical relaxation oscillator at a fixed initial value of $(u(0),v(0))=(0,0)$ but different initial values of $(x(0),y(0))$.}
\label{fig6}
\end{figure}
\Cref{fig5} displays the trajectory evolution starting from the $4$ initial points $A(6,6)\in\mathcal{A}_1,~ B(2,2)\in\mathcal{A}_1,~ C(0.5,0.5)\in\mathcal{A}_2$ and $D(4,4)\in\mathcal{A}_2$. The trajectories approach the corresponding part of $\Gamma_{\epsilon_{1}}$ as \cref{pro4.4} predicts. We also present the time evolution of the driving signal $X$ from these $4$ initial points in \cref{fig6}, where the time evolution of the $X$-driven symmetrical clock signals $U$ and $V$ is exhibited synchronously for the fixed initial point $(u(0),v(0))=(0,0)$. In these two figures, the other parameters $\epsilon_1,~\eta_1$ and $p$ of the standard chemical relaxation oscillator take the same values as in \cref{ex3.16}. Combining \cref{fig5} and \cref{fig6} suggests the following.
(i) When starting from $A(6,6)$ or $B(2,2)$, the trajectory of $\Sigma_{xy}$ merges into the left part of the relaxation oscillation orbit, i.e., $\Gamma_{l,\epsilon_1}$, and $x$ oscillates at low amplitude first, so that a positive concentration of $U$ appears, i.e., $u$ oscillates at high amplitude. More specifically, the trajectory originating from $A(6,6)$ (and from the whole area $\left \{(x,y):y>5\right \}$) first needs to follow the slow manifold to reach $\Gamma_{l,\epsilon_1}$, which makes the first period of $U$ larger than the subsequent ones, as \cref{fig6a} shows. However, when the initial point is $B(2,2)$ (or, more generally, lies in $\left \{(x,y):x<\varphi^{-1}_{re}(y),1<y<5\right \}$), the trajectory follows the fast manifold to reach $\Gamma_{l,\epsilon_1}$, which makes the first period of $u$ smaller than the subsequent ones, as can be seen in \cref{fig6b}.
(ii) \Cref{fig6c,fig6d} exhibit the corresponding cases where the trajectory merges into $\Gamma_{r,\epsilon_1}$, i.e., the right part of the relaxation oscillation orbit, so that $V$ has positive concentration in the first period. Furthermore, the first period of $v$ also appears larger or smaller than the subsequent ones when the initial point is $C(0.5,0.5)$ or $D(4,4)$, respectively.
\begin{remark}
For the standard chemical relaxation oscillator of \cref{eq3.15}, the position of the initial point $(x(0),y(0))$ determines whether the driving signal $X$ oscillates at high or at low amplitude in the first period. This in turn determines the oscillating positions of the symmetrical clock signals $U$ and $V$ in the first period, and thus controls their occurrence order. Noticeably, whether $U$ (respectively, $V$) is present or not ``turns on" or ``turns off" the module $\tilde{\mathcal{M}}_1$ (respectively, $\tilde{\mathcal{M}}_2$), as explained in \cref{ex2.3}. Thus, the position of $(x(0),y(0))$ can finally control the computation order of $\tilde{\mathcal{M}}_1$ and $\tilde{\mathcal{M}}_2$. For example, to let $\tilde{\mathcal{M}}_1$ execute first, the initial value $(x(0),y(0))$ should be selected so that the trajectory merges into $\Gamma_{l,\epsilon_1}$.
\end{remark}
\section{Loop control and termination of molecular computations}\label{sec:ter}
In this section we apply the standard chemical relaxation oscillator of \cref{eq3.15}
to control molecular computations periodically, and further present a termination strategy for them.
\subsection{Loop control of molecular computations}
We go back to the motivating example in \cref{sec2.3}, in which two reaction modules $\mathcal{M}_1$ and $\mathcal{M}_2$ are designed to implement the loop iteration calculation $s_1=s_1+1$. Here, the control of their calculations is taken as an application case for the standard chemical relaxation oscillator of \cref{eq3.15}. As stated in \cref{sec2.3}, these two modules need to be modified as $\tilde{\mathcal{M}}_1$ and $\tilde{\mathcal{M}}_2$, given in \cref{ex2.3}, to avoid coupling between $s_1$ and $s_2$. We then apply the standard chemical relaxation oscillator of \cref{eq3.15} to control their calculations by combining all related dynamical equations, i.e., combining \cref{eq3.15} with \cref{eq2.4}, which gives
\begin{equation}\label{eq5.1}
\begin{aligned}
\epsilon_{1} \frac{\mathrm{d} x}{\mathrm{d} t} &=\eta_{1}(-x^3+9x^2-24x+21-y)x \ , & \frac{\mathrm{d} s_{1}}{\mathrm{d} t} &= (s_{2} - s_{1})v\ , \\
\frac{\mathrm{d} y}{\mathrm{d} t} &=\eta_{1}(x-3)y \ , & \frac{\mathrm{d} s_{2}}{\mathrm{d} t} &= (s_{1} + s_{3} -s_{2})u\ , \\
\epsilon_{1}\epsilon_{2}\frac{\mathrm{d} u}{\mathrm{d} t} &= \eta_{1}(\epsilon_{1}(p-u)-uv)\ , & \frac{\mathrm{d} s_{3}}{\mathrm{d} t} &= 0\ , \\
\epsilon_{1}\epsilon_{2}\frac{\mathrm{d} v}{\mathrm{d} t} &= \eta_{1}(\epsilon_{1}(x-v)-uv)\ .
\end{aligned}
\end{equation}
The control flow goes as follows: (1) the upper-left two equations produce the periodic signal $x$; (2) $x$ drives the lower-left two equations to generate the symmetrical clock signals $u$ and $v$; (3) $u$ and $v$ control the iteration calculation $s_1^*=s_1^*+s_3^*$ through the three equations on the right, according to the disappearance of either $u$ or $v$.
As an illustration, \cref{fig7a} presents the time evolution of $s_1$ and $s_2$ of system \cref{eq5.1} starting from $(x(0),y(0),u(0),v(0),s_1(0),s_2(0),s_3(0))= (5,5,0,0,0,0,1)$ with model parameters $\epsilon_{1}=\epsilon_{2}=0.001, \eta_{1}=0.1, p=3$, exactly the same as in \cref{ex3.16}. Clearly, the curve of $s_1$ is a staircase-like function, and $s_1$ increases by $1$ periodically as time goes on. This indicates that the standard chemical relaxation oscillator of \cref{eq3.15} is well suited to periodically controlling the execution of the computation modules $\tilde{\mathcal{M}}_1$ and $\tilde{\mathcal{M}}_2$, and finally reaches the target of performing the frequently-used loop iteration calculation $s_1=s_1+1$ encountered in many machine learning algorithms. A further look at \cref{fig7b} (an overlay of \cref{fig7a} and \cref{fig4}) reveals that the calculation times of these two modules are very short, i.e., the time used for the curve of $s_1$ or $s_2$ to climb a stair, compared to the sum of the oscillating periods of $U$ and $V$ estimated by \cref{periodEs}. The main reason is that both $\tilde{\mathcal{M}}_1$ and $\tilde{\mathcal{M}}_2$ complete their calculation instructions at exponential speed, so their time consumption is far lower than $T_1+T_2\approx T_l+T_h\approx 19.6$\,s. There is thus still large room to reduce the oscillating period of the driving signal $X$ in the standard chemical relaxation oscillator of \cref{eq3.15} for the current purpose.
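The staircase behavior described above can be reproduced with any stiff ODE solver. The following sketch (in Python; the solver choice and tolerances are ours and may need tuning) integrates \cref{eq5.1} directly:
\begin{verbatim}
# Sketch: integrate the coupled system above; the small parameters make the
# problem stiff, so an implicit method with tight tolerances is used.
from scipy.integrate import solve_ivp

e1, e2, eta1, p = 1e-3, 1e-3, 0.1, 3.0

def rhs(t, z):
    x, y, u, v, s1, s2, s3 = z
    dx = eta1 * (-x**3 + 9*x**2 - 24*x + 21 - y) * x / e1
    dy = eta1 * (x - 3) * y
    du = eta1 * (e1 * (p - u) - u * v) / (e1 * e2)
    dv = eta1 * (e1 * (x - v) - u * v) / (e1 * e2)
    ds1 = (s2 - s1) * v
    ds2 = (s1 + s3 - s2) * u
    return [dx, dy, du, dv, ds1, ds2, 0.0]

z0 = [5, 5, 0, 0, 0, 0, 1]            # initial values used in the text
sol = solve_ivp(rhs, (0, 100), z0, method="BDF", rtol=1e-8, atol=1e-10)
print(sol.y[4, -1])                    # s1 climbs by about 1 per period
\end{verbatim}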
\begin{figure}[tbhp]
\centering
\subfloat[]{\label{fig7a}\includegraphics[width=0.5\linewidth]{fig7_1.pdf}}
\subfloat[]{\label{fig7b}\includegraphics[width=0.5\linewidth]{fig7_2.pdf}}
\caption{Time evolution of species $S_{1}$ and $S_{2}$ of the system \cref{eq5.1} in response to symmetrical clock signals $U$ and $V$: (a) without and (b) with $U$ and $V$ exhibited.}
\label{fig7}
\end{figure}
Although there are reaction networks corresponding to the dynamical equations \cref{eq3.15c}, \cref{eq3.15d} and \cref{eq2.4}, i.e., network \cref{msubtraction} with $\kappa=\eta_1/\epsilon_2$ and $\tilde{\mathcal{M}}_1$ plus $\tilde{\mathcal{M}}_2$ given in \cref{ex2.3}, respectively, the ODEs of \cref{eq5.1} are not yet sufficient to represent a calculation carried out by chemical molecules, since there is no reaction network corresponding to the equations \cref{eq3.15a,eq3.15b}. One needs to find a mass-action CRN that has the dynamics of \cref{eq3.15a,eq3.15b}, which is called a CRN realization of the kinetic equations. Note that there may be many CRN realizations of the same kinetic equations, and a great number of algorithms \cite{szederkenyi2011finding} have been developed for this purpose. Since realizing a CRN from a group of kinetic equations is not the main topic of this work, we only provide a naive realization obtained by directly transforming each monomial in \cref{eq3.15a,eq3.15b} into an abstract reaction, where the species coefficients on the left and right sides are completely determined by the order of the monomial, and the parameters $\eta_1$ and $\epsilon_1$ appear in the rate constants. The result is as follows
\begin{equation}\label{crn1}
\begin{aligned}
4X&\xrightarrow{\eta_1/\epsilon_1}3X\ , & 3X&\xrightarrow{9\eta_{1}/\epsilon_{1} }4X\ , &
2X&\xrightarrow{24\eta_{1}/\epsilon_{1} }X\ , \\
X&\xrightarrow{21\eta_{1}/\epsilon_{1} }2X\ , &
X+Y&\xrightarrow{\eta_{1}/\epsilon_{1} }Y\ , &
X+Y&\xrightarrow{\eta_{1}}X+2Y\ , ~~~ ~~Y\xrightarrow{3\eta_{1}}\varnothing\ .
\end{aligned}
\end{equation}
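As a quick sanity check (a sketch in Python), the mass-action ODEs induced by \cref{crn1} can be written down directly and compared with \cref{eq3.15a,eq3.15b}; the two right-hand sides coincide identically:
\begin{verbatim}
# Sketch: the mass-action kinetics of the seven reactions above reduce to the
# ODEs of the driving subsystem; checked numerically at random states.
import numpy as np

eta1, e1 = 0.1, 1e-3

def rhs_crn(x, y):
    dx = (eta1/e1) * (-x**4 + 9*x**3 - 24*x**2 + 21*x - x*y)  # first 5 reactions
    dy = eta1*x*y - 3*eta1*y                                   # last 2 reactions
    return dx, dy

def rhs_target(x, y):
    dx = eta1 * (-x**3 + 9*x**2 - 24*x + 21 - y) * x / e1
    dy = eta1 * (x - 3) * y
    return dx, dy

rng = np.random.default_rng(0)
for x, y in rng.uniform(0.1, 6.0, size=(5, 2)):
    assert np.allclose(rhs_crn(x, y), rhs_target(x, y))
\end{verbatim}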
These reaction networks involve some very tiny parameters, such as $\epsilon_1$ and $\epsilon_2$, whose values are difficult to realize precisely. As a result, it seems impossible to control the reaction rate constants exactly as assigned. Actually, these parameters, including $\eta_1$ and even the coefficients of the S-shaped function, need not fit the assigned values perfectly. Their real values only need to ensure that the system generates an oscillation and has a unique equilibrium which lies on the repelling part of the critical manifold and stays away from the two non-hyperbolic points.
\subsection{Loop termination of molecular computations}
We have accomplished the task of controlling the iteration computation $s_1=s_1+1$ through the standard chemical relaxation oscillator of \cref{eq3.15}. As the oscillation of the driving signal $X$ goes on, the equilibrium concentration of species $S_1$ increases by $1$ periodically. This will never stop on its own, yet an upper bound on $s_1^*$ is usually required, e.g., when it models the number of iterations in a machine learning algorithm, so an instruction for iteration termination is necessary. For this reason, we introduce a new species $W$, called the counter species, which, on the one hand, uses its concentration $w$ to measure the difference between a given loop count $l$, an integer representing the concentration of the termination species $L$, and the concentration of $S_1$; and, on the other hand, can control the occurrence or termination of the computation modules.
Towards the first purpose above, we construct the following specific truncated subtraction module to generate the species $W$
\begin{align}\label{eq5.2}
L+W &\overset{\eta_{3} }{\rightarrow} L+2W\ , &
S_{1}+W &\overset{\eta_{3} }{\rightarrow} S_{1}\ , & 2W &\overset{\eta_{3} }{\rightarrow} W\ ,
\end{align}
whose ODEs are expressed as
\begin{subequations}\label{eq5.3}
\begin{align}
\frac{\mathrm{d} w}{\mathrm{d} t}&=\eta_3(l-s_{1}-w)w\ , \\
\frac{\mathrm{d} s_{1}}{\mathrm{d} t}&=\frac{\mathrm{d} l}{\mathrm{d} t}=0\ .
\end{align}
\end{subequations}
\begin{remark}\label{rek5.1}
The analytical solution of the ODEs \cref{eq5.3} is
\begin{equation}
w= \frac{(l-s_{1})w(0)}{w(0)+(l-s_{1}-w(0))e^{-\eta_3(l-s_{1})t}}\
\end{equation}
with $l \neq s_1$. In the case of $l> s_{1}$ and $w(0)>0$, $w$ converges exponentially to $l-s_{1}$; in the case of $l= s_{1}$, $w$ degenerates to the algebraically (no longer exponentially) decaying form
\begin{equation}\label{eq:w}
w=\frac{w(0)}{1+\eta_3w(0)t}\ ;
\end{equation}
and in the case of $l< s_{1}$, the equilibrium is $0$. The reaction rate constant $\eta_3$ plays the role of regulating the convergence speed of $w$.
\end{remark}
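The expression in \cref{rek5.1} is the standard logistic solution of \cref{eq5.3}; a short numerical sketch (in Python; the solver settings and parameter values are only examples) confirms it for the case $l>s_1$:
\begin{verbatim}
# Sketch: compare the numerical solution of dw/dt = eta3*(l - s1 - w)*w with
# the closed-form logistic expression of the remark above (case l > s1).
import numpy as np
from scipy.integrate import solve_ivp

eta3, l, s1, w0 = 1.0, 4.0, 1.0, 0.5
c = l - s1                               # constant, since ds1/dt = dl/dt = 0

sol = solve_ivp(lambda t, w: [eta3 * (c - w[0]) * w[0]], (0, 10), [w0],
                dense_output=True, rtol=1e-10, atol=1e-12)
t = np.linspace(0, 10, 50)
w_exact = c * w0 / (w0 + (c - w0) * np.exp(-eta3 * c * t))
assert np.allclose(sol.sol(t)[0], w_exact, atol=1e-6)
\end{verbatim}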
\cref{rek5.1} suggests that species $W$ vanishes once $s_1$ reaches the prescribed loop count $l$ as the reactions \cref{eq5.2} proceed. Therefore this species can act as a catalyst serving the second purpose above, i.e., controlling the occurrence or termination of the computation modules. We add it as a catalyst to each reaction of the computation modules $\tilde{\mathcal{M}}_1$ and $\tilde{\mathcal{M}}_2$, which yields two new computation modules $\hat{\mathcal{M}}_{1}$ and $\hat{\mathcal{M}}_{2}$:
\begin{align*}
\hat{\mathcal{M}}_{1}:
S_{1} + U +W &\to S_{1} + S_{2} + U +W\ , & \hat{\mathcal{M}}_{2}: S_{2} + V +W &\to S_{1} + S_{2} + V +W \ ,\\
S_{3} + U + W&\to S_{3} + S_{2} + U + W\ , & S_{1} + V + W&\to V + W\ .\\
S_{2} + U + W&\to U + W\ ;
\end{align*}
The dynamics then becomes
\begin{equation}\label{eq5.6}
\begin{aligned}
\frac{\mathrm{d} s_{1}}{\mathrm{d} t} = (s_{2} - s_{1})vw\ ,
~~~~\frac{\mathrm{d} s_{2}}{\mathrm{d} t} = (s_{1} + s_{3} -s_{2})uw\ ,~~~~ \frac{\mathrm{d} s_{3}}{\mathrm{d} t} = 0\ .
\end{aligned}
\end{equation}
Species $W$ thus plays the same role as species $U$ and $V$: it only determines whether the reactions occur or terminate, without changing the positive equilibrium of system \cref{eq5.6}.
Coupling all the ODEs of \cref{eq3.15,eq5.3,eq5.6}, we obtain the whole system
\begin{equation}\label{eq5.5}
\begin{aligned}
\epsilon_{1} \frac{\mathrm{d} x}{\mathrm{d} t} &=\eta_{1}(-x^3+9x^2-24x+21-y)x \ , \ \ \ & \frac{\mathrm{d} s_{1}}{\mathrm{d} t} &= (s_{2} - s_{1})vw\ , \\
\frac{\mathrm{d} y}{\mathrm{d} t} &=\eta_{1}(x-3)y \ ,
& \frac{\mathrm{d} s_{2}}{\mathrm{d} t} &= (s_{1} + s_{3} -s_{2})uw\ , \\
\epsilon_{1}\epsilon_{2}\frac{\mathrm{d} u}{\mathrm{d} t} &= \eta_{1}(\epsilon_{1}(p-u)-uv)\ , & \frac{\mathrm{d} w}{\mathrm{d} t}&=\eta_{3}(l-s_{1}-w)w\ ,\\
\epsilon_{1}\epsilon_{2}\frac{\mathrm{d} v}{\mathrm{d} t} &= \eta_{1}(\epsilon_{1}(x-v)-uv)\ , & \frac{\mathrm{d} s_{3}}{\mathrm{d} t}& = \frac{\mathrm{d} l}{\mathrm{d} t} = 0\ , \\
\end{aligned}
\end{equation}
which automatically performs the loop iteration calculation and terminates it once the number of iterations reaches the desired value. \cref{fig8} shows the loop termination achieved by species $W$, where \cref{fig8a} (respectively, \cref{fig8b}) simulates the case $\eta_{3}=1$ (respectively, $\eta_3=50$) with $w(0)=l(0)=4$ and the other parameters and initial values the same as for \cref{fig7}. When $\eta_{3}=1$, neither $s_1$ nor $s_2$ settles at $4$; both slightly overshoot it. This means there is a lag in terminating $\hat{\mathcal{M}}_{1}$ and $\hat{\mathcal{M}}_{2}$. The reason is that, when $s_1$ increases to be close to $l$, $w$ converges towards $0$ only at an algebraic (rather than exponential) rate, as \cref{eq:w} shows, so $w$ shuts down the two modules later than expected. However, when a larger $\eta_3=50$ is selected to accelerate the convergence of $w$, as shown in \cref{fig8b}, the lag phenomenon is weakened and $s_1$ settles at $4$. In practice, a relatively large $\eta_3$ should thus be selected.
\begin{figure}[tbhp]
\centering
\subfloat[]{\label{fig8a}\includegraphics[width=0.5\linewidth]{fig8_1.pdf}}
\subfloat[]{\label{fig8b}\includegraphics[width=0.5\linewidth]{fig8_2.pdf}}
\caption{Loop termination of iteration computation $s_1=s_1+1$ by counter species $W$ in system \cref{eq5.5} with (a) $\eta_3=1$ and (b) $\eta_3=50$.}
\label{fig8}
\end{figure}
\begin{remark}
Our chemical relaxation oscillator design and loop termination strategy implement the iteration computation $s_1=s_1+1$ well through chemical reactions in the form of \cref{crn1} plus \cref{eq5.2} plus $\hat{\mathcal{M}}_1$ and $\hat{\mathcal{M}}_2$. This is a rather basic operation in many machine learning algorithms, and the whole module can work as a counter CRN to control the alternating calculations of any two target modules, labeled $\mathcal{TM}_1$ and $\mathcal{TM}_2$. As long as the symmetrical clock signals $U$ and $V$ generated by the standard chemical relaxation oscillator are fed in parallel, as catalysts, into $\mathcal{M}_1$, $\mathcal{M}_2$ and $\mathcal{TM}_1$, $\mathcal{TM}_2$, and the counter species $W$ is used to monitor the number of alternations of their computations, the target calculation may be implemented. We exhibit the corresponding flowchart in \cref{fig9}. This application is quite promising and may push forward the development of all kinds of molecular computations.
\end{remark}
\begin{figure}[htbp]
\centering
\includegraphics[width=1\linewidth,scale=1.00]{fig9.pdf}
\caption{A schematic diagram of applying chemical oscillation-based iteration computation modules $\mathcal{M}_1$ plus $\mathcal{M}_2$ to control the alternative calculation and termination of two target modules $\mathcal{TM}_1$ and $\mathcal{TM}_2$.}
\label{fig9}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
In this paper we develop a systematic approach to realize synchronous sequential computation with abstract chemical reactions. Our ultimate goal is to execute complex calculations in biochemical environments so that, after setting the initial values of species and the reaction rates, the biochemical system runs automatically to complete the target calculation task.
For this, we design a $4$-dimensional oscillator model to generate a pair of symmetrical clock signals $U$ and $V$ whose concentrations change periodically. Different from the chemical oscillators used in previous work, we construct the $4$-dimensional oscillator model based on the architecture of a $2$-dimensional relaxation oscillation. We rigorously analyze the dynamical properties of the oscillator model and discuss the conditions on parameters and initial values that control the period and occurrence order of $U$ and $V$. The strength of our model lies in a broad selection of initial values and a clear, easy-to-implement parameter choice.
We demonstrate the process of module regulation with the example of $\mathcal{M}_1$ and $\mathcal{M}_2$, and give a termination strategy for the loop control. Although very little work has paid attention to this topic, we believe that our consideration of loop termination is meaningful for synthesizing autonomously running life.
\par This paper provides guidance for implementing calculation instructions and machine learning algorithms in biochemical environments; the oscillator model we design acts as a hub connecting the various reaction modules corresponding to specific calculation tasks. Oscillation, especially relaxation oscillation, plays a crucial role in this process. Different from modeling and analyzing the oscillation phenomena observed in biochemical experiments, our work takes advantage of oscillation as a means to achieve specific functions. Our $4$-dimensional oscillator model solves the task of two-module regulation well, but is somewhat weak when faced with tasks involving three or more modules. Finding suitable oscillation structures and designing corresponding chemical oscillator models for multi-module regulation will be the focus of our future work.
\bibliographystyle{siamplain}
|
{
"arxiv_id": "2302.14156",
"language": "en",
"timestamp": "2023-03-01T02:02:45",
"url": "https://arxiv.org/abs/2302.14156",
"yymm": "2302"
} | \section{Introduction}
\label{sec:intro}
\subsection{Brinkman Penalization as a Design Parametrization Technique}
The first application of \textbf{t}opology \textbf{o}ptimization (TO) to fluid-dependent problems appeared in the seminal work of \citet{Borrvall2003}, where they addressed a pure fluid problem under Stokes flow conditions. Later, \citet{Gersborg-Hansen2005} extended the work to the Navier-Stokes equations. Although both works utilized the analogy of a 2D channel flow with varying thickness for design parametrization, the latter recognized the similarity between this model and the Brinkman equations of fluid flow in porous media \citep{Brinkman1947}. This similarity was also noted independently by \citet{Evgrafov2005} and \citet{Guest2006}, where the latter directly used Darcy's law - a porous flow model - to introduce fluid flow in porous regions, hence freeing the topology optimization model from its two-dimensional channel assumption. In addition, the use of a porous flow model such as Darcy's law warranted a physical interpretation of porosity for intermediate densities. Consequently, this model could potentially be used to design porous media such as filters, and it is no longer \textit{a mere bridge} to interpolate between fluid and solid with the final goal of reaching only pure discrete designs \citep[p.~463]{Guest2006}. This approach, now termed \textit{Brinkman penalization}, is the de facto method for density-based topology optimization of fluid-dependent problems. In the remainder of this work, `Brinkman penalization' and `inverse permeability' are used interchangeably, and our discussion is limited to finite element discretizations such that each finite element is parameterized using a single fluid design variable $\rho$.
Typically, Brinkman penalization is employed by appending a negative volumetric force to the body force and internal force terms in the Navier-Stokes momentum equation. This volumetric force is basically the Brinkman inverse permeability scalar function multiplied by the velocity vector, such that it has a scalar component in each spatial direction; i.e. $x$ and $y$ in 2D. This Brinkman penalization function is convex and ranges between a maximum and a minimum limit. It usually takes the following form which first appeared in \citep{Borrvall2003}:
\begin{equation}
\alpha (\rho) = \alpha_\text{max} + \rho (\alpha_\text{min} - \alpha_\text{max}) \frac{1 + p_\alpha}{\rho + p_\alpha}
\label{eq:alpha}
\end{equation}
\noindent where $\alpha_\text{max}$ and $\alpha_\text{min}$ are the maximum and minimum inverse permeability limits (also known as Brinkman penalization limits), $\rho$ is the fluid design variable ($\rho = 1$ for fluid and $\rho = 0$ for solid), and $p_\alpha$ is the Brinkman penalization interpolation parameter. This Brinkman penalization function is different from the somewhat \textit{analogous} \textbf{S}olid \textbf{I}sotropic \textbf{M}aterial with \textbf{P}enalization (SIMP) function used with topology optimization of solids in two aspects:
\begin{enumerate}[label=(\alph*).]
\item Unlike SIMP, since the volumetric force term is appended to other non-zero terms, there is no fear of singularities - from a mathematical perspective - if the volumetric force term vanishes in pure fluid elements \citep[p.~469]{Guest2006}. Hence, $\alpha_\text{min}$ may be taken as zero except when there is a physical need for it to be non-zero, as in solving a two-dimensional problem with a finite out-of-plane thickness such as a microfluidic device, cf. \citep[p.~182]{Gersborg-Hansen2005}, \citep[p.~978]{Olesen2006}, and \citep[p.~5]{Alexandersen2023}.
\item A linear SIMP function in terms of the design variable means no penalization is imposed on intermediate elements, while in Brinkman penalization, a linear relation enforces severe penalization on intermediate elements.
\end{enumerate}
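As a small illustration (a sketch in Python; the parameter values are examples only, not recommendations), Eq. \ref{eq:alpha} can be evaluated to see how $p_\alpha$ controls the penalization of intermediate densities:
\begin{verbatim}
# Sketch: behaviour of the Brinkman interpolation function for a few values of
# the interpolation parameter p_alpha (alpha(0) = alpha_max, alpha(1) = alpha_min).
import numpy as np

def alpha(rho, a_max=1e7, a_min=0.0, p_alpha=0.1):
    return a_max + rho * (a_min - a_max) * (1.0 + p_alpha) / (rho + p_alpha)

rho = np.linspace(0.0, 1.0, 5)
for p in (0.01, 0.1, 1.0):
    print(p, alpha(rho, p_alpha=p))
\end{verbatim}
As $p_\alpha$ grows, the curve approaches the linear relation discussed in item (b) above, which penalizes intermediate densities more severely; this is what makes $p_\alpha$ a convenient continuation parameter.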
\subsection{Calculation of $\alpha_\text{max}$}
As for $\alpha_\text{max}$, it is typically selected just high enough to enforce near zero velocity in non-fluid elements, yet small enough not to introduce numerical instabilities. From early on, the significance of this maximum limit was recognized and its effect on the optimized design was discussed. In \citep[p.~102]{Borrvall2003}, the authors recognized the strong dependence of the objective function of power dissipation on this maximum limit, yet the optimized designs were found to be highly independent of that limit. In \citep[p.~184]{Gersborg-Hansen2005}, the authors chose the maximum limit so as to enforce a small flow rate in the solid regions in the range of two orders of magnitude lower than the maximum velocity in pure fluid regions.
In \citep{Guest2006}, the authors studied the effect of different magnitudes of the maximum limit by solving a sample problem through two models; a pure fluid model and their Darcy-Stokes model developed for TO. The resolved fluid velocity and pressure fields from the two models were then compared. They noted a linear relation between the permeability (i.e. $1/\alpha_\text{max}$ in this work) and the maximum velocity in the solid regions. A deterioration in the solution accuracy coincided with the loss of that linear relation, which occurred at low permeability values (i.e. equivalent to high $\alpha_\text{max}$ in our formulation). In our numerical experiments, we noticed a different behavior that is discussed in detail in Section \ref{sec:character}.
\citet{Guest2006} also hinted at the dependency of the Brinkman penalization limits on the mesh size utilized, by calculating a certain permeability value as a function of the mesh size. This value corresponded to equal diagonal terms between the Darcy and Stokes stiffness matrices, and was later used as an initial value for their implemented continuation technique. In contrast to \citep{Borrvall2003, Gersborg-Hansen2005}, which gradually raised $p_\alpha$ in Eq. \ref{eq:alpha} to introduce continuation, \cite{Guest2006} implemented what is analogous to $\alpha(\rho) = \rho \ \alpha_\text{max}$ directly and gradually raised $\alpha_\text{max}$ to introduce continuation.
In \citep{Olesen2006}, the authors calculated the proper maximum inverse permeability limit by looking at the streamlines in the resolved velocity field to estimate how much flow went through the solid structure, and also by looking at the relation of the objective function w.r.t. the maximum limit. They also mentioned that the maximum limit could be of use as a continuation tool in severely nonconvex problems, similarly to \cite{Guest2006}.
\citet{Kreissl2011} noted the independence of the maximum inverse permeability limit from the Reynolds number. They also noted the need for a relatively fine mesh for the pressure fields to match between the Brinkman-penalized Navier-Stokes equations and the original Navier-Stokes equations with a body-fitted mesh.
In recent literature on \textbf{t}opology \textbf{o}ptimization of \textbf{f}luid-\textbf{s}tructure \textbf{i}nteraction (TOFSI) problems, $\alpha_\text{max}$ is calculated by solving an analysis of an initial discrete design of the TO problem using a body-fitted mesh of segregated, non-overlapping fluid/solid domains. A parameter of interest obtained from this analysis is used as a benchmark against the same parameter calculated by analyzing the unified domain formulation with the Brinkman penalization term implemented. The maximum limit $\alpha_\text{max}$ is usually progressively increased by an order of magnitude until the two results match within a certain error margin, cf. \citep[p.~602]{Yoon2010} and \citep[p.~993]{Lundgaard2018}.
While the trial-and-error approach for selecting a proper $\alpha_\text{max}$ is acceptable for a single design problem, sometimes a need arises for calibrating $\alpha_\text{max}$ such that different mesh sizes and flow conditions produce the same behavior. In particular, there is usually a necessity to solve the TO problem on a relatively coarse mesh before committing to the final refined (hence costly) mesh, for instance to calibrate some interpolation and projection parameters. In fact, the motivation for this study arose in the authors' work on density-based TOFSI problems, a multiphysics problem known for its strongly nonlinear and nonconvex behavior. We noticed that after calibrating some interpolation parameters on a relatively coarse mesh that is solvable within a reasonable time frame, the same parameters produced a different behavior when applied to the finer mesh needed for producing the final results.
\textbf{In this work}, we investigate the dependency of the Brinkman penalization term on the mesh size and the flow conditions. Through analyzing the Navier-Stokes equations in their PDE as well as discretized finite element forms, we propose proportionality relations to describe these dependencies. We solve a wide range of numerical experiments and use curve fitting to characterize these dependencies. The rest of this manuscript is organized as follows; in \textbf{Section \ref{sec:govern_eqns}}, we introduce the fluid flow governing equations and boundary conditions. In \textbf{Section \ref{sec:finite}}, we discuss the finite element discretization of the governing equations which provide valuable insights into the dependency of the Brinkman penalization maximum limit on the mesh size. In \textbf{Section \ref{sec:dependency}}, we analyze the fluid flow governing equations to deduce proportionality relations between the Brinkman penalization term and mesh size and flow conditions. In \textbf{Section \ref{sec:character}}, we use numerical experiments to verify and characterize the proportionality relations derived. Finally, in \textbf{Section \ref{sec:conc}}, we summarize our findings and present our concluding remarks.
\section{Governing Equations of Fluid Flow}
\label{sec:govern_eqns}
Before starting our investigation into the dependence of the Brinkman maximum limit on the mesh size and the flow conditions, we should first establish the governing equations of the problem at hand. Consider the Navier-Stokes equations in their incompressible, steady-state form \cite[p.~10]{Reddy2010}. The strong form of the PDEs modified for TO is as follows:
\begin{gather}
\boldsymbol{\nabla} \cdot \mathbf{v} = 0, \label{eq:contin} \\
%
\rho_f \, (\mathbf{v} \cdot \boldsymbol{\nabla}) \mathbf{v} = \boldsymbol{\nabla} \cdot \boldsymbol{\upsigma}^f + \mathbf{f}^f - \alpha(\rho) \mathbf{v} , \label{eq:ns} \\
%
\boldsymbol{\upsigma}^f = - p \mathbf{I} + \mu \left[ \boldsymbol{\nabla} \mathbf{v} + (\boldsymbol{\nabla} \mathbf{v})^T \right], \\
%
\alpha(\rho) = \alpha_\text{max} + \rho \left( \alpha_\text{min} - \alpha_\text{max} \right) \frac{1 + p_\alpha}{ \rho + p_\alpha}. \label{eq:invrs_perm}
\end{gather}
\noindent where $\mathbf{v}$ is the fluid velocity, $\rho_f$ is the fluid density (a subscript $f$ is used to distinguish it from the design variables $\rho$), $\boldsymbol{\upsigma}^f$ is the Cauchy fluid stress tensor, $\mathbf{f}^f$ is the external fluid force (assumed zero and dropped in the remainder of this work), $p$ is the hydrostatic pressure, and $\mu$ is the fluid dynamic viscosity. The fluid momentum equation, Eq. \ref{eq:ns}, is appended with the Brinkman penalization term $- \alpha(\rho) \mathbf{v}$ as a volume force to enforce zero velocity in 0\% fluid elements while allowing for smooth interpolation between the artificial density limits; i.e. between solid and fluid. The Brinkman penalization interpolation parameter $p_\alpha$ is usually selected based on the physics of the problem at hand; e.g. Reynolds number in TOFSI problems \cite[p.~974]{Lundgaard2018}. A continuation scheme may be used with $p_\alpha$ to avoid the optimizer getting stuck in local minima; cf. \citep[p.~96]{Borrvall2003} and \citep[p.~10]{Alexandersen2023}.
The essential boundary conditions are defined as follows:
\begin{alignat}{3}
\text{Fluid No-slip:} & \qquad \mathbf{v} = 0 & \quad \text{on } & \mathit{\Gamma}_{\mathbf{v}_0}, \label{eq:bc_noslip} \\
\text{Fluid Inlet:} & \qquad \mathbf{v} = \mathbf{v}_\text{in} & \quad \text{on } & \mathit{\Gamma}_{\mathbf{v}_\text{in}}, \label{eq:bc_v_in} \\
\text{Fluid Outlet:} & \qquad p = 0 & \quad \text{on } & \mathit{\Gamma}_{\mathbf{v}_\text{out}}. \label{eq:bc_p_out}
\end{alignat}
\noindent where $\mathbf{v}_\text{in}$ is the prescribed inlet velocity at the inlet boundary $\mathit{\Gamma}_{\mathbf{v}_\text{in}}$, and $\mathit{\Gamma}_{\mathbf{v}_\text{out}}$ is the outlet boundary with a prescribed zero pressure applied. Note that the fluid no-slip boundary condition in Eq. \ref{eq:bc_noslip} is only defined on the remaining external domain boundaries $\mathit{\Gamma}_{\mathbf{v}_0}$ aside from the inlet and outlet boundaries such that $ \mathit{\partial \Omega}_f = \mathit{\Gamma}_{\mathbf{v}_0} \cup \mathit{\Gamma}_{\mathbf{v}_\text{in}} \cup \mathit{\Gamma}_{\mathbf{v}_\text{out}} $. The volume force term appended to the fluid momentum in Eq. \ref{eq:ns} automatically enforces a no-slip condition wherever needed within the solid domain and its boundaries.
\section{Finite Element Formulations}
\label{sec:finite}
To study the dependence of the Brinkman penalization upper limit $\alpha_\text{max}$ on the mesh size, we must take a closer look at the discretized weak form of the Navier-Stokes and continuity equations. We implement the \textit{standard Galerkin method of weighted residuals} where the test/weight functions are the same as the interpolation/shape functions. The resulting model is of the \textit{velocity-pressure} (or \textit{mixed}) type where both velocity and pressure are solved for simultaneously. To satisfy the \textit{Ladyzhenskaya-Babuska-Brezzi} condition, cf. \ \cite[p.~176]{Reddy2010}, \textit{P2P1 Lagrangian} finite elements (i.e. 9 velocity nodes and 4 pressure nodes) are used with a low to moderate Reynolds number to avoid using stabilization techniques that artificially - but not necessarily accurately - dampen the discontinuities. We employ regular rectangular meshing of equal size; the mesh size $h$ denotes the side length of a finite element. The continuity equation (Eq. \ref{eq:contin}) is typically weighted by the pressure shape function $\mathbf{\Phi}$ while the momentum equation (Eq. \ref{eq:ns}) is typically weighted by the velocity shape function $\mathbf{\Psi}$. The resulting finite element system in 2D \textit{on the elemental level} is as follows:
\begin{gather}
\underbrace{\begin{bmatrix}
2 \mathbf{K}_{11} + \mathbf{K}_{22} + \mathbf{C(v)} & \mathbf{K}_{12} & - \mathbf{Q}_1 \\[4pt]
\mathbf{K}_{21} & \mathbf{K}_{11} + 2 \mathbf{K}_{22} + \mathbf{C(v)} & - \mathbf{Q}_2 \\[4pt]
- \mathbf{Q}_1^T & - \mathbf{Q}_2^T & \mathbf{0}
\end{bmatrix}}_{\text{Conservation of Momentum and Mass}}
\begin{Bmatrix}
\mathbf{\hat{v}}_1 \\[4pt]
\mathbf{\hat{v}}_2 \\[4pt]
\mathbf{\hat{p}}
\end{Bmatrix} +
\underbrace{\begin{bmatrix}
\mathbf{A} & \mathbf{0} & \mathbf{0} \\[4pt]
\mathbf{0} & \mathbf{A} & \mathbf{0} \\[4pt]
\mathbf{0} & \mathbf{0} & \mathbf{0}
\end{bmatrix}}_{\substack{\text{Brinkman} \\ \text{Penalization}}}
\begin{Bmatrix}
\mathbf{\hat{v}}_1 \\[4pt]
\mathbf{\hat{v}}_2 \\[4pt]
\mathbf{\hat{p}}
\end{Bmatrix} = \mathbf{0}.
\label{eq:main_fea}
\end{gather}
The coefficient matrices in the finite element form are defined as ($\int_{-1}^{+1} \int_{-1}^{+1}$ and $\dd{\xi} \dd{\eta}$ are implied)\footnote{Summation is implied on repeated indices in $\mathbf{C(v)}$ but not in $\mathbf{K}_{ij}$.}:
\begingroup
\allowdisplaybreaks
\begin{gather}
\mathbf{K}_{ij} = \mu \, \pdv{\mathbf{\Psi}}{x_i} \pdv{\mathbf{\Psi}}{x_j} ^T |\mathbf{J}|, \label{eq:2} \\
\mathbf{C(v)} = \rho \, \mathbf{\Psi} \left[ \left( \mathbf{\Psi}^T \mathbf{\hat{v}}_i \right) \pdv{\mathbf{\Psi}}{x_i} ^T \right] |\mathbf{J}|, \\
\mathbf{Q}_i = \pdv{\mathbf{\Psi}}{x_i} \, \mathbf{\Phi}^T |\mathbf{J}|, \\
\mathbf{A} = \alpha(\rho) \mathbf{\Psi} \, \mathbf{\Psi}^T |\mathbf{J}|.
\label{eq:brnk_fea}
\end{gather}
\endgroup
\noindent where $\mathbf{\hat{v}}_1$ and $\mathbf{\hat{v}}_2$ are the nodal velocities in $x$ and $y$, respectively, and $\mathbf{\hat{p}}$ collects the nodal pressures. $|\mathbf{J}|$ is the Jacobian determinant and $\xi$ and $\eta$ are the natural coordinates. No externally applied nodal fluid forces are used in this work as fluidic boundary conditions (Eqs. \ref{eq:bc_noslip} to \ref{eq:bc_p_out}) are implemented directly by setting nodal velocities/pressures to their appropriate values (i.e. strong, point-wise enforcement). Appropriate global assembly of Eq. \ref{eq:main_fea} is implemented and the resulting nonlinear system is solved using the undamped Newton-Raphson method \cite[p.~190]{Reddy2010}.
In the next section, we analytically investigate the dependence (or independence) of $\alpha_\text{max}$ on the mesh size and the flow conditions.
\section{Analytical Derivation of the Dependence of $\alpha_\text{max}$ on Mesh Size and Flow Conditions}
\label{sec:dependency}
In order to establish the basis of our investigation, we take a look at two sets of parameters: namely, the maximum state variable errors in 100\% fluid regions, and the maximum velocity in pure solid regions. We consider two perspectives: \textbf{(i)} the suitability of these parameters in measuring how well the Brinkman-penalized Navier-Stokes equations approximate the pure Navier-Stokes equations, and \textbf{(ii)} the ease of investigating either set of parameters. In the following, we state our argument for the validity of using either set of parameters:
\begin{enumerate}
\item The errors in the velocity and pressure fields resolved in 100\% fluid regions in comparison to those resolved using a pure fluid model. One of the main indicators of the validity of the Brinkman penalization model is that it should produce similar state fields to what is produced from a pure fluid model. This is even more critical in multiphysics problems whose behavior depends on the state variables such as structural compliance in TOFSI.
\item The velocity in the solid regions is a good and direct indication of the validity of the Brinkman penalization model in simulating porous flow in solid media. In fact, it was one of the early parameters used in calibrating $\alpha_\text{max}$ as in \cite{Guest2006}.
\end{enumerate}
Now that we have established the validity of choosing either option from a representation point of view, we next look at the complexity of using either option from a mathematical, equation-based perspective. The \textbf{first option} is a bit tricky to utilize, as $\alpha_\text{max}$ does not have a direct influence on the 100\% fluid regions; instead, the errors in the fluid state variables are reduced by minimizing the flow velocity in the pure solid regions, hence directing the entire flow to the pure fluid regions and increasing the similarity to the results of a pure fluid model. Notice that in these discussions we are looking at a special case, namely the existence of only discrete densities: either $\rho = 1$ in 100\% fluid regions or $\rho = 0$ in 100\% solid regions. The \textbf{second option}, i.e. the velocity in solid regions, can be easily deduced by looking at the diagonal terms in Eq. \ref{eq:main_fea}.
In the following subsections, we discuss the dependency of the maximum inverse permeability limit on mesh size and flow conditions using the maximum velocity in solid regions\footnote{Typically, max$(|v_\mathrm{solid}|)$ should be scaled w.r.t. a nominal velocity characterizing the flow such as the characteristic $v_c$. In this work, $v_c$ is only changed by less than an order of magnitude, hence this scaling is not discussed further.} - designated as max$(|v_\mathrm{solid}|)$ - as a criterion through looking at the diagonal terms in the discretized finite element equations.
\subsection{Dependence of $\alpha_\text{max}$ on Mesh Size}
\label{ssec:alpha_h_anltc}
A closer look at the diagonal matrices in Eq. \ref{eq:main_fea} reveals the dependency of the original Navier-Stokes terms (i.e. $\mathbf{K}_{ij}$ and $\mathbf{C(v)}$) on the mesh size $h$ through the derivatives of the shape functions w.r.t. the global coordinates $\pdv*{\mathbf{\Psi}}{x_i}$. Recall that these derivatives are obtained as follows:
\begin{gather}
\begin{Bdmatrix}
\pdv{\mathbf{\Psi}}{x} \\[10pt] \pdv{\mathbf{\Psi}}{y}
\end{Bdmatrix} = \left[ \mathbf{J} \right]^{-1} \begin{Bdmatrix}
\pdv{\mathbf{\Psi}}{\xi} \\[10pt] \pdv{\mathbf{\Psi}}{\eta}
\end{Bdmatrix}, \\
%
\left[ \mathbf{J} \right]^{-1} = \frac{1}{|\mathbf{J}|} \begin{bdmatrix}
+J_{2,2} & -J_{1,2} \\[8pt] -J_{2,1} & +J_{1,1}
\end{bdmatrix}.
\end{gather}
\noindent where $J_{i,j}$ are the original Jacobian matrix components. Notice that, unlike the derivatives of $\mathbf{\Psi}$ w.r.t. the natural coordinates $\xi$ and $\eta$, $\pdv*{\mathbf{\Psi}}{x_i}$ is dependent on the mesh size $h$ through the components of the Jacobian matrix (in the numerator) and through the Jacobian determinant (in the denominator). This dependency can be characterized in closed form for the special case of regular, square meshing. The Jacobian matrix is calculated as follows:
\begin{equation}
\left[ \mathbf{J} \right] = \begin{bdmatrix}
\pdv{\mathbf{\Psi}}{\xi} ^T \mathbf{\hat{x}} & \quad \pdv{\mathbf{\Psi}}{\xi} ^T \mathbf{\hat{y}} \\
\pdv{\mathbf{\Psi}}{\eta} ^T \mathbf{\hat{x}} & \quad \pdv{\mathbf{\Psi}}{\eta} ^T \mathbf{\hat{y}}
\end{bdmatrix}
\end{equation}
\noindent where the elemental nodal coordinates $\mathbf{\hat{x}}$ and $\mathbf{\hat{y}}$ are linearly proportional to the mesh size $h$ for the special case of regular, square finite elements.
In addition, the Jacobian determinant is typically related to the finite element's area (i.e. related to $h^2$). For a square element, the Jacobian determinant is known to be one fourth the element's area when evaluated anywhere within the element. This means that every $\pdv*{\mathbf{\Psi}}{x_i}$ is linearly proportional to the reciprocal of the mesh size, i.e. $1/h$. Again, the strength and regularity of this dependency depend on how distorted the element is from the ideal square shape.
On the other hand, the Brinkman penalization contribution to Eq. \ref{eq:main_fea} - namely matrix $\mathbf{A}$ - is independent of this parameter as it lacks any terms containing $\pdv*{\mathbf{\Psi}}{x_i}$. Hence, while the original Navier-Stokes terms change with different mesh sizes, the Brinkman penalization term does not.
From Eqs. \ref{eq:main_fea}-\ref{eq:brnk_fea}, it can be noted that the Brinkman matrix $\mathbf{A}$ scales with $h^2$ (through $|\mathbf{J}|$ alone), whereas $\mathbf{K}_{ij}$ is independent of $h$ (its two $\pdv*{\mathbf{\Psi}}{x_i}$ derivatives cancel the $h^2$ of $|\mathbf{J}|$) and $\mathbf{C(v)}$ scales with $h$ (one derivative). For the Brinkman term to retain the same weight relative to the viscous and convective terms under mesh refinement, the inverse permeability $\alpha(\rho)$ should therefore be inversely proportional to $h^2$ (relative to $\mathbf{K}_{ij}$) and inversely proportional to $h$ (relative to $\mathbf{C(v)}$). Hence, the following relations between $\alpha_\text{max}$ and $h$:
\begin{equation}
\alpha_\text{max} \propto \frac{1}{h^2} \qquad \& \qquad \alpha_\text{max} \propto \frac{1}{h}
\end{equation}
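This scaling can be verified directly on the elemental matrices. The following sketch (in Python; for brevity it uses bilinear Q1 shape functions on a square element, but the $h$-scaling is identical for the P2P1 elements employed in this work) evaluates the diagonal entries of the viscous matrix $\mathbf{K}_{11}$ and of the Brinkman matrix $\mathbf{A}$ with $\alpha$ factored out, for two mesh sizes:
\begin{verbatim}
# Sketch: diagonal entries of the elemental viscous matrix K11 and of the
# Brinkman matrix A / alpha for a square element of side h (2x2 Gauss rule).
import numpy as np

gp = np.array([-1.0, 1.0]) / np.sqrt(3.0)      # Gauss points, unit weights

def diag_entries(h, mu=1.0):
    K11 = np.zeros(4); A = np.zeros(4)
    detJ = h * h / 4.0
    for xi in gp:
        for eta in gp:
            N    = 0.25 * np.array([(1-xi)*(1-eta), (1+xi)*(1-eta),
                                    (1+xi)*(1+eta), (1-xi)*(1+eta)])
            dNdx = (2.0/h) * 0.25 * np.array([-(1-eta), (1-eta),
                                               (1+eta), -(1+eta)])
            K11 += mu * dNdx**2 * detJ
            A   += N**2 * detJ
    return K11[0], A[0]

for h in (0.01, 0.005):
    print(h, diag_entries(h))   # K11 entry is h-independent, A entry ~ h^2
\end{verbatim}
Halving $h$ leaves the viscous diagonal unchanged while the Brinkman diagonal drops by a factor of four, so $\alpha_\text{max}$ must be increased accordingly to preserve the same relative penalization.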
\subsection{Dependence of $\alpha_\text{max}$ on Flow Conditions}
\label{ssec:kmax_re_anltc}
The dependence of the Brinkman penalization maximum limit $\alpha_\text{max}$ on the Reynolds number $Re$ can be investigated by looking at the non-dimensionalized form of the Navier-Stokes equations. Following the treatment by \citet[p.~430]{leal2007advanced}, it is possible to non-dimensionalize the Navier-Stokes equations w.r.t. the Reynolds number under the assumptions of incompressible fluid, steady state, and negligible body forces. Consider the following relations:
\begingroup
\allowdisplaybreaks
\begin{gather}
\mathbf{v^*} = \frac{\mathbf{v}}{v_c}, \label{eq:nondmnsl_v} \\
%
\boldsymbol{\nabla^*} = L_c \boldsymbol{\nabla}, \\
%
p^* = \frac{p}{\rho_f \ {v_c}^2}, \\
%
Re = \frac{v_c \ L_c \ \rho_f}{\mu}, \\
%
\alpha^*(\rho) = \frac{\alpha(\rho) \ {L_c}^2}{\mu},
\label{eq:nondmnsl_alpha} \\
%
\alpha^* (\rho) = \alpha^*_\text{max} \left( 1 - \rho \frac{1 + p_\alpha}{\rho + p_\alpha} \right).
\end{gather}
\endgroup
\noindent where the dimensionless form of each variable is designated with an asterisk superscript as in $\square^*$. $v_c$ is a characteristic velocity, taken in this work as the maximum inlet velocity in a parabolic laminar profile. $L_c$ is the characteristic length, taken as the width of the inlet boundary $\mathit{\Gamma}_{\mathbf{v}_\mathrm{in}}$. The relation in Eq. \ref{eq:nondmnsl_alpha} has been mentioned in relevant literature in some form, cf. \citep[p.~183]{Gersborg-Hansen2005}, \citep[p.~978]{Olesen2006}, and \citep[p.~598]{Yoon2010}. In that sense, Darcy's number $Da$ is equivalent to $1/\alpha^*_\text{max}$, both dimensionless. Generally, Darcy's number is related to a characteristic length that is relevant to the porous medium microstructure. It will be shown later in Section \ref{sec:character} that the characteristic length in Eq. \ref{eq:nondmnsl_alpha} should be different from, yet somehow related to, $L_c$.
By implementing Eqs. \ref{eq:nondmnsl_v}-\ref{eq:nondmnsl_alpha}, the Brinkman-penalized Navier-Stokes equations are non-dimensionalized in the following form:
\begin{equation}
\frac{\rho_f \ {v_c}^2}{L_c} \, (\mathbf{v^*} \cdot \boldsymbol{\nabla^*}) \mathbf{v^*} = - \frac{\rho_f \ {v_c}^2}{L_c} \, \boldsymbol{\nabla^*} \ p^* + \frac{\mu \ v_c}{{L_c}^2} {\nabla^*}^2 \ \mathbf{v^*} - \frac{\mu \ v_c}{{L_c}^2} \alpha^*(\rho) \ \mathbf{v^*}
\end{equation}
\noindent which can be rearranged through a multiplication by $L_c / \rho_f \ {v_c}^2$ as follows:
\begin{equation}
(\mathbf{v^*} \cdot \boldsymbol{\nabla^*}) \mathbf{v^*} = - \boldsymbol{\nabla^*} \ p^* + \frac{1}{Re} {\nabla^*}^2 \ \mathbf{v^*} - \frac{1}{Re} \alpha^*(\rho) \ \mathbf{v^*}
\label{eq:nondmnsl_ns}
\end{equation}
Similarly to the discussion in Section \ref{ssec:alpha_h_anltc}, we look at the diagonal terms in the finite element form of Eq. \ref{eq:nondmnsl_ns}. It appears difficult to completely isolate $Re$ and its components, i.e. $v_c, \ \mu, \ \mathrm{and} \ L_c$, in a single term. Hence, although it might appear that $\alpha^*(\rho)$, and hence $\alpha^*_\text{max}$, is independent of $Re$, $\alpha_\text{max}$ has the following dependencies:
\begin{equation}
\alpha_\text{max} \propto \mu \qquad \& \qquad \alpha_\text{max} \propto \frac{1}{{L_c}^2} \qquad \& \qquad \alpha_\text{max} \propto v_c
\label{eq:kmax_mu_lc_vc}
\end{equation}
In Eq. \ref{eq:kmax_mu_lc_vc}, the first two relations come from Eq. \ref{eq:nondmnsl_alpha} while the third one comes from the existence of velocity components in the convective term on the left hand side of Eq. \ref{eq:nondmnsl_ns} ($\mathbf{C(v)}$ in the finite element form). In other words, $\alpha_\text{max}$ is independent of $\rho_f$ but dependent on $v_c, \ \mu, \ \mathrm{and} \ L_c$.
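A practical consequence of Eq. \ref{eq:nondmnsl_alpha} is that a value of $\alpha_\text{max}$ calibrated for one configuration can be transferred to another through its dimensionless counterpart $\alpha^*_\text{max}$. The sketch below (our own helper functions with example numbers; the additional dependence on $v_c$ is quantified in Section \ref{sec:character}) illustrates the conversion:
\begin{verbatim}
# Sketch: convert between alpha_max and its dimensionless counterpart
# alpha*_max = alpha_max * L_c^2 / mu, and transfer a calibrated value to a
# new fluid/geometry (velocity dependence not included here).
def alpha_star(alpha_max, Lc, mu):
    return alpha_max * Lc**2 / mu

def alpha_from_star(alpha_star_max, Lc, mu):
    return alpha_star_max * mu / Lc**2

a_star = alpha_star(alpha_max=1e7, Lc=1.0, mu=1.0)   # reference calibration
print(alpha_from_star(a_star, Lc=0.1, mu=1e-3))      # 1e7 * 1e-3 / 0.01 = 1e6
\end{verbatim}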
In the next section, with the aid of numerical experiments, we focus on verifying the validity of the derived dependencies and on calculating the numerical values of the coefficients of proportionality derived earlier. Note that these coefficients of proportionality are only valid for the design problem discussed in this work. Nonetheless, we show that only a small number of data points is needed to calculate these coefficients for other problems.
\section{Characterizing the Dependency of $\alpha_\text{max}$ on Mesh Size and Flow Conditions}
\label{sec:character}
In this section, through numerical experiments, we aim to prove the validity of the derived dependencies in Section \ref{sec:dependency}; namely the dependency of the Brinkman penalization maximum limit $\alpha_\text{max}$ on the mesh size $h$, the fluid dynamic viscosity $\mu$, the characteristic length $L_c$, and the characteristic velocity $v_c$ as well as its independency of the fluid density $\rho_f$. In addition, we calculate exact numerical relations that describe these dependencies through curve fitting.
To generate the data we use for curve fitting, we solve the governing equations of the Navier-Stokes equations equipped with the Brinkman penalization term. The problem to be solved is an initial design of the \textit{modified} beam in a channel problem described in Fig. \ref{fig:mdfd_bm_in_a_chnl}. The \textit{original} version of this problem was first discussed in a TOFSI context in \ \cite[p.~610]{Yoon2010} and has been used later as a benchmark problem in a number of works on TOFSI. It was later modified by \cite{Lundgaard2018}, hence the \textit{modified} designation, to increase the relative size of the design domain to the whole computational domain, rescale it from the micro to the macro scale, and generally strengthen the fluid-structure dependency.
As detailed in Fig. \ref{fig:mdfd_bm_in_a_chnl}, the problem features a 0.8 $\times$ 1.4 m rectangular design space (light gray) placed inside a 1 $\times$ 2 m rectangular channel. To avoid trivial solutions, a 0.05 $\times$ 0.5 m non-design solid beam (dark gray) is placed within the design space to force the optimizer to reach a more sophisticated solution than a simple bump at the bottom of the channel. The problem is solved for an initial discrete design such that $\rho = 0$ in $\Omega_d$ and $\Omega_{nd}$ and $\rho = 1$ in $\Omega_f \backslash \{\Omega_d \cup \Omega_{nd} \}$. Recall that, in this work, $\rho$ is defined as a fluid, not a solid, design variable.
\begin{figure}[b]
\centering
\includegraphics[width=0.5\textwidth]{0.Figures/beam_problem_sigmund.pdf}
\caption{The \textit{modified} beam in a channel design problem as described in \cite{Lundgaard2018}.}
\label{fig:mdfd_bm_in_a_chnl}
\end{figure}
The top and bottom surfaces of the channel $\mathit{\Gamma}_{\mathbf{v}_0}$ have a no-slip condition applied. A fully-developed, parabolic laminar flow profile of a maximum velocity $v_c$ is applied at the inlet $\mathit{\Gamma}_{\mathbf{v}_\text{in}}$ on the left, and a zero pressure condition is applied at the outlet $\mathit{\Gamma}_{\mathbf{v}_\text{out}}$ on the right. The bottom surface of the design and non-design spaces $\mathit{\Gamma}_{\mathbf{d}_0}$ is fixed to a ground structure. Note that even though this is a TOFSI problem, we are only concerned with the fluid flow analysis in this discussion.
As discussed earlier, the characteristic length $L_c$ is taken as the width of the entry boundary $\mathit{\Gamma}_{\mathbf{v}_\mathrm{in}}$ on the left. All the dimensions shown in Fig. \ref{fig:mdfd_bm_in_a_chnl} are scaled linearly with $L_c$. Unless otherwise noted, these default values are used:
\begingroup
\allowdisplaybreaks
\begin{gather}
v_c = 1 \ \mathrm{m / s}, \\
\rho_f = 1 \ \mathrm{kg / m^3}, \\
\mu = 1 \ \mathrm{Pa \cdot s}, \\
L_c = 1 \ \mathrm{m}, \\
h = 0.01 \ \mathrm{m}, \\
\alpha_\text{min} = 0 \ \mathrm{kg/m \cdot s}.
\end{gather}
\endgroup
First, we need to look at the maximum velocity in solid regions and the maximum state variable errors in fluid regions for a range of $\alpha_\text{max}$. Since such a study is a pure fluid flow analysis, it can be performed in commercial software such as COMSOL Multiphysics \citep{comsol6} by employing the ``parametric sweep" feature. The Brinkman penalization term is easily implemented within the laminar flow model by using a volumetric force node and adding the scalar terms $- \alpha_\text{max} \ u$ and $- \alpha_\text{max} \ v$ in the $x$ and $y$ directions, respectively, where $u$ and $v$ are the $x$ and $y$ velocities as defined in the software.
\begin{figure}[th!]
\subfloat{\includegraphics[width=0.5\textwidth]{0.Figures/v_sld_mtlab_vs_cmsl.pdf}}
\subfloat{\includegraphics[width=0.5\textwidth]{0.Figures/v1_mtlab_vs_cmsl.pdf}} \\
\subfloat{\includegraphics[width=0.5\textwidth]{0.Figures/v2_mtlab_vs_cmsl.pdf}}
\subfloat{\includegraphics[width=0.5\textwidth]{0.Figures/p_mtlab_vs_cmsl.pdf}}
\caption{Effect of $\alpha_\text{max}$ on the state variables in the solid and fluid domains. Results are obtained from our proprietary code in MATLAB as well as the commercial software COMSOL Multiphysics.}
\label{fig:mtlb_vs_cmsl}
\end{figure}
We solved the same problem using COMSOL as well as a proprietary code we wrote in MATLAB; the results are shown in Fig. \ref{fig:mtlb_vs_cmsl}. A mesh size of $h = 0.005$ m is utilized, resulting in a total of 80,000 finite elements. The problem is solved for the following range of $\alpha_\text{max}$ values: 0, $1e1$, $1e2$, ..., $1e40$.
The results display a linear log-log relation between the Brinkman penalization maximum limit $\alpha_\text{max}$ and the maximum velocity in the 100\% solid regions. Compared to the work of \citet[p.~471]{Guest2006}, we note two differences: \textbf{(i)} we observe a linear relation between the logarithms of the values, not the values themselves, and \textbf{(ii)} we do not notice any loss of accuracy at the high end of $\alpha_\text{max}$, even at considerably high values. We conjecture that the first discrepancy arises because \citet{Guest2006} addressed Stokes flow, which neglects the nonlinear convection term, while we solve the full Navier-Stokes equations. As for the second discrepancy, we conjecture that the deterioration in accuracy reported in that work is related to one or a combination of the following: \textbf{(a)} the use of iterative solvers for the governing equations without proper preconditioners and tight convergence criteria, and \textbf{(b)} the use of stabilization techniques not calibrated for their newly-developed Darcy-Stokes model (cf. \citep[p.~1238]{Kreissl2011}). On the other hand, loss of linearity understandably occurs at low $\alpha_\text{max}$ values, as the Brinkman-penalized Navier-Stokes model loses its accurate representation of the impermeable solid domain.
We also notice that the maximum absolute relative percentage errors in the pure fluid state variables maintain a linear log-log relation with $\alpha_\text{max}$ up to a certain limit ($\alpha_\text{max} \approx 1e18$, which corresponds to max$(|v_\mathrm{solid}|) \approx 1e-12$), after which the values plateau at an almost constant level. It can be argued that, beyond this limit, no benefit is gained from using a higher $\alpha_\text{max}$ value. Hence, for the following results, we only run each study up to a maximum value of $\alpha_\text{max} = 1e20$.
In the following subsections, we discuss the dependency of $\alpha_\text{max}$ on each parameter individually.
\subsection{Relation between $\alpha_\text{max}$ and $h$}
\label{ssec:kmax_h_chrctr}
The first set of results concerns the dependency of $\alpha_\text{max}$ on the mesh size $h$. A fluid flow analysis is run for all combinations of the following values: $h = 1/30, 1/50, 1/70, ..., 1/190$ and $\alpha_\text{max} = 0, 1e0, 1e1,$ $..., 1e20$. The extracted results are presented in Fig. \ref{fig:kmax_vs_h}. It can be noted that for max$(|v_\mathrm{solid}|) \approx 1e-2$ and smaller, the log-log relation is linear.
\begin{figure}[t!]
\centering
\subfloat[Original.]{\includegraphics[width=0.5\textwidth]{0.Figures/v_sld_var_h.pdf}}
\subfloat[Zoomed in.]{\includegraphics[width=0.5\textwidth]{0.Figures/v_sld_var_h_zmd.pdf}}
\caption{Maximum velocity in the solid regions vs $\alpha_\text{max}$ for different $h$ values on a log-log scale.}
\label{fig:kmax_vs_h}
\end{figure}
\begin{figure}[t!]
\captionsetup{margin={10pt}}
\centering
\subfloat[A log-log scale is used. From bottom up, max$(|v_\mathrm{solid}|) = 1e-2, 1e-3, ..., 1e-12$.]{\includegraphics[width=0.5\textwidth]{0.Figures/kmax_vs_h_crv_ftng.pdf}}
\subfloat[A single case of max$(|v_\mathrm{solid}|) = 1e-12$ with error bars calculated as abs($\alpha_\text{max}|$\textsubscript{Data} - $\alpha_\text{max}|$\textsubscript{Eq.}).] {\includegraphics[width=0.5\textwidth]{0.Figures/kmax_vs_h_crv_ftng_err.pdf}}
\caption{Comparison of Eq. \ref{eq:kmax_vs_h_crv_ftng} (asterisks) to data points from the numerical experiments (solid lines).}
\label{fig:kmax_vs_h_crv_ftng_chck}
\end{figure}
To characterize the relation between $\alpha_\text{max}$ and $h$, we use curve fitting to obtain an expression for $\alpha_\text{max}$ as a function of $h$ and max$(|v_\mathrm{solid}|)$. For curve fitting, we limit our data to only 6 points (3 points along $h$ and 2 points along max$(|v_\mathrm{solid}|)$): all combinations of $h = 1/30, 1/110, 1/190$ and $\alpha_\text{max} = 1e8, 1e20$. We emphasize that, for the curve fitting to be accurate, the data points used for fitting should span the range of interest of each parameter. We note also that this choice of $\alpha_\text{max}$ ensures that \mvs $\le 1e-2$, hence within the linear portion of the log-log relation as presented in Fig. \ref{fig:kmax_vs_h}. Using curve fitting, the following relation is obtained:
\begin{equation}
\alpha_\text{max} = 10^{-q} \left( \frac{31.32}{h^2} + \frac{7635}{h} - 8.039e04 \right)
\label{eq:kmax_vs_h_crv_ftng}
\end{equation}
\noindent where $q$ is the exponent of the intended maximum velocity in the solid regions; i.e. max$(|v_\mathrm{solid}|) = 10^q$. To check the soundness of this relation, we compare Eq. \ref{eq:kmax_vs_h_crv_ftng} to the original set of data points for $h = 1/30, 1/50, 1/70, ..., 1/190$ and $\alpha_\text{max} = 1e8, 1e9, 1e10, $ $..., 1e20$. The comparison is presented in Fig. \ref{fig:kmax_vs_h_crv_ftng_chck} showing good agreement with a maximum error of 3.4\% for the case of max$(|v_\mathrm{solid}|) = 10^{-12}$.
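To make the fitting procedure concrete, the sketch below shows (in Python with SciPy, not our actual MATLAB implementation) how a relation of the form of Eq. \ref{eq:kmax_vs_h_crv_ftng} can be obtained from the six selected data points; the listed $q$ values are placeholders that would be read off the corresponding flow solutions.
\begin{verbatim}
# Sketch: fit alpha_max = 10^{-q} (a/h^2 + b/h + c) from 6 samples.
import numpy as np
from scipy.optimize import curve_fit

# All combinations of h = 1/30, 1/110, 1/190 and alpha_max = 1e8, 1e20.
h         = np.array([1/30, 1/110, 1/190, 1/30, 1/110, 1/190])
alpha_max = np.array([1e8,  1e8,   1e8,   1e20, 1e20,  1e20])
# q = log10(max|v_solid|) of each run; placeholder values for illustration.
q         = np.array([-2.8, -2.0, -1.6, -14.8, -14.0, -13.6])

def model(h, a, b, c):
    return a / h**2 + b / h + c

# Absorb the 10^{-q} factor into the data, then fit the q-independent part.
y = alpha_max * 10.0**q
(a, b, c), _ = curve_fit(model, h, y)
print(a, b, c)   # compare with the coefficients reported in the text
\end{verbatim}
The same routine, with the model function swapped accordingly, applies to the fits with respect to $\mu$, $L_c$, and $v_c$ in the following subsections.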
\begin{figure}[th!]
\centering
\subfloat[Original.]{\includegraphics[width=0.5\textwidth]{0.Figures/v_sld_var_rho.pdf}}
\subfloat[Zoomed in.]{\includegraphics[width=0.5\textwidth]{0.Figures/v_sld_var_rho_zmd.pdf}}
\caption{Maximum velocity in the solid regions vs $\alpha_\text{max}$ for different $\rho_f$ values on a log-log scale.}
\label{fig:kmax_vs_rho}
\end{figure}
\subsection{Relation between $\alpha_\text{max}$ and $\rho_f$}
\label{ssec:alpha_rho_chrctr}
A fluid flow analysis is run for all combinations of the following values: $\rho_f = 0.5, 1, 1.5, ..., 4$ and $\alpha_\text{max} = 0, 1e0, 1e1,$ $..., 1e20$. The extracted results are presented in Fig. \ref{fig:kmax_vs_rho}, where it is clear that $\alpha_\text{max}$ is ``almost" independent of $\rho_f$. In fact, $\alpha_\text{max}$ is not entirely independent of $\rho_f$ due to the appearance of velocity components in the convective term on the left hand side of Eq. \ref{eq:nondmnsl_ns}. From a physics perspective, altering the value of $\rho_f$ affects the velocity field due to the changing ratio of inertial to viscous forces. In Fig. \ref{fig:v_strmlns_cmprsn}, a comparison is presented between the velocity streamlines for the cases of $\rho_f = 0.5$ kg/m\textsuperscript{3} and $\rho_f = 4$ kg/m\textsuperscript{3}, where the latter case shows a slightly increased fluid inertia. Nonetheless, this effect on the velocity in the solid regions is minimal because, in those regions, the viscous forces (i.e. contributions from Brinkman penalization and fluid viscosity) are much larger than the inertia forces. Hence, unless the change in $\rho_f$ exceeds an order of magnitude or the Reynolds number is generally large, it is safe to ignore its effect on $\alpha_\text{max}$ from a practical perspective.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{0.Figures/v_strmlns_cmpr_rho.pdf}
\caption{Comparison of velocity streamlines for $\rho_f = 0.5$ kg/m\textsuperscript{3} (solid black) vs $\rho_f = 4$ kg/m\textsuperscript{3} (dashed red).}
\label{fig:v_strmlns_cmprsn}
\end{figure}
\subsection{Relation between $\alpha_\text{max}$ and $\mu$}
\label{ssec:kmax_mu_chrctr}
\begin{figure}
\centering
\subfloat[]{\includegraphics[width=0.5\columnwidth]{0.Figures/v_sld_var_mu.pdf}}
\subfloat[]{\includegraphics[width=0.5\columnwidth]{0.Figures/v_sld_var_mu_zmd.pdf}}
\caption{Maximum velocity in the solid regions vs $\alpha_\text{max}$ for different $\mu$ values on a log-log scale.}
\label{fig:kmax_vs_mu}
\end{figure}
\begin{figure}[t!]
\captionsetup{margin={10pt}}
\centering
\subfloat[A log-log scale is used. From bottom up, max$(|v_\mathrm{solid}|) = 1e-2, 1e-3, ..., 1e-12$.]{\includegraphics[width=0.5\textwidth]{0.Figures/kmax_vs_mu_crv_ftng.pdf}}
\subfloat[A single case of max$(|v_\mathrm{solid}|) = 1e-12$ with error bars calculated as abs($\alpha_\text{max}|$\textsubscript{Data} - $\alpha_\text{max}|$\textsubscript{Eq.}).] {\includegraphics[width=0.5\textwidth]{0.Figures/kmax_vs_mu_crv_ftng_err.pdf}}
\caption{Comparison of Eq. \ref{eq:kmax_vs_mu_crv_ftng} (asterisks) to data points from the numerical experiments (solid lines).}
\label{fig:kmax_vs_mu_crv_ftng_chck}
\end{figure}
A fluid flow analysis is run for all combinations of the following values: $\mu = 0.5, 1.0, 1.5, ..., 5.0$ and $\alpha_\text{max} = 0, 1e0, 1e1,$ $..., 1e20$. The extracted results are presented in Fig. \ref{fig:kmax_vs_mu}. Similarly to the approach followed in Section \ref{ssec:kmax_h_chrctr}, we limit the data used for curve fitting to only 4 points (2 points along $\mu$ and 2 points along max$(|v_\mathrm{solid}|)$): all combinations of $\mu = 0.5, 5.0$ and $\alpha_\text{max} = 1e8, 1e20$. The following relation is obtained:
\begin{equation}
\alpha_\text{max} = 10^{-q} \left( 9.857e5 \ \mu + 7331 \right)
\label{eq:kmax_vs_mu_crv_ftng}
\end{equation}
\noindent where $q$ is defined as in Section \ref{ssec:kmax_h_chrctr}. To check the soundness of this relation, we compare Eq. \ref{eq:kmax_vs_mu_crv_ftng} to the original set of data points for $\mu = 0.5, 1.0, 1.5, ..., 5.0$ and $\alpha_\text{max} = 1e8, 1e9, 1e10, $ $..., 1e20$. The comparison is presented in Fig. \ref{fig:kmax_vs_mu_crv_ftng_chck}, showing good agreement. Noticing that the error consistently increases with increasing $\mu$, we conjecture that this error is due to the changing ratio of inertial to viscous forces discussed in Section \ref{ssec:alpha_rho_chrctr}. Nonetheless, the maximum error at $\mu = 5$ Pa$\cdot$s is less than 1\% in the case of max$(|v_\mathrm{solid}|) = 10^{-12}$.
\subsection{Relation between $\alpha_\text{max}$ and $L_c$}
\label{ssec:kmax_lc_chrctr}
\begin{figure}[t!]
\centering
\subfloat[]{\includegraphics[width=0.5\columnwidth]{0.Figures/v_sld_var_lc.pdf}}
\subfloat[]{\includegraphics[width=0.5\columnwidth]{0.Figures/v_sld_var_lc_zmd.pdf}}
\caption{Maximum velocity in the solid regions vs $\alpha_\text{max}$ for different $L_c$ values on a log-log scale.}
\label{fig:kmax_vs_lc}
\end{figure}
\begin{figure}[t!]
\captionsetup{margin={10pt}}
\centering
\subfloat[A log-log scale is used. From bottom up, max$(|v_\mathrm{solid}|) = 1e-2, 1e-3, ..., 1e-12$.]{\includegraphics[width=0.5\textwidth]{0.Figures/kmax_vs_lc_crv_ftng.pdf}}
\subfloat[A single case of max$(|v_\mathrm{solid}|) = 1e-12$ with error bars calculated as abs($\alpha_\text{max}|$\textsubscript{Data} - $\alpha_\text{max}|$\textsubscript{Eq.}).] {\includegraphics[width=0.5\textwidth]{0.Figures/kmax_vs_lc_crv_ftng_err.pdf}}
\caption{Comparison of Eq. \ref{eq:kmax_vs_lc_crv_ftng} (asterisks) to data points from the numerical experiments (solid lines).}
\label{fig:kmax_vs_lc_crv_ftng_chck}
\end{figure}
A study is run for all combinations of the following values: $L_c = 0.5, 1.0, 1.5, ..., 5.0$ and $\alpha_\text{max} = 0, 1e0, 1e1,$ $..., 1e20$. The extracted results are presented in Fig. \ref{fig:kmax_vs_lc}. At first, we attempted to follow an approach similar to the one in Section \ref{ssec:kmax_h_chrctr} by limiting the data used for curve fitting to only 6 points (3 points along $L_c$ and 2 points along max$(|v_\mathrm{solid}|)$): all combinations of $L_c = 0.5, 3.0, 5.0$ and $\alpha_\text{max} = 1e8, 1e20$. However, the fitted equation in the form of $\alpha_\text{max} \propto 1/L_c^2$ showed considerable disagreement with the original data set extracted from the numerical experiments. We then attempted to use all data points in the curve fitting process, but still failed to obtain a satisfying agreement. This led us to believe that the use of $L_c$ in non-dimensionalizing $\alpha(\rho)$ in Eq. \ref{eq:nondmnsl_alpha} is incorrect. To gain some insight into the relation between $\alpha_\text{max}$ and $L_c$, we instead fit an equation of the form $\alpha_\text{max} \propto a_1/{L_c}^{a_2}$, where $a_1$ and $a_2$ are constants. The following relation is obtained:
\begin{equation}
\alpha_\text{max} = 10^{-q} \left( \frac{9.065e5}{{L_c}^{0.6073}} + 8.3e4 \right)
\label{eq:kmax_vs_lc_crv_ftng}
\end{equation}
\noindent where $q$ is defined in Section \ref{ssec:kmax_h_chrctr}. Notice that, in fitting this relation, we only used the 6 points discussed earlier. To check the soundness of this relation, we compare Eq. \ref{eq:kmax_vs_lc_crv_ftng} to the original set of data points for $L_c = 0.5, 1.0, 1.5, ..., 5.0$ and $\alpha_\text{max} = 1e8, 1e9,$ $..., 1e20$. The comparison is presented in Fig. \ref{fig:kmax_vs_lc_crv_ftng_chck}, showing surprisingly good agreement with a maximum error of less than 0.5\% in the case of max$(|v_\mathrm{solid}|) = 10^{-12}$. We conjecture that $\alpha_\text{max}$ is in fact related to a length characteristic of the porous microstructure, and that this length is related to $L_c$ on the macroscale.
\subsection{Relation between $\alpha_\text{max}$ and $v_c$}
\label{ssec:kmax_vc_chrctr}
\begin{figure}
\centering
\subfloat[]{\includegraphics[width=0.5\columnwidth]{0.Figures/v_sld_var_vc.pdf}}
\subfloat[]{\includegraphics[width=0.5\columnwidth]{0.Figures/v_sld_var_vc_zmd.pdf}}
\caption{Maximum velocity in the solid regions vs $\alpha_\text{max}$ for different $v_c$ values on a log-log scale.}
\label{fig:kmax_vs_vc}
\end{figure}
\begin{figure}[t!]
\captionsetup{margin={10pt}}
\centering
\subfloat[A log-log scale is used. From bottom up, max$(|v_\mathrm{solid}|) = 1e-2, 1e-3, ..., 1e-12$.]{\includegraphics[width=0.5\textwidth]{0.Figures/kmax_vs_vc_crv_ftng.pdf}}
\subfloat[A single case of max$(|v_\mathrm{solid}|) = 1e-12$ with error bars calculated as abs($\alpha_\text{max}|$\textsubscript{Data} - $\alpha_\text{max}|$\textsubscript{Eq.}).] {\includegraphics[width=0.5\textwidth]{0.Figures/kmax_vs_vc_crv_ftng_err.pdf}}
\caption{Comparison of Eq. \ref{eq:kmax_vs_vc_crv_ftng} (asterisks) to data points from the numerical experiments (solid lines).}
\label{fig:kmax_vs_vc_crv_ftng_chck}
\end{figure}
A study is run for all combinations of the following values: $v_c = 0.5, 1.0, 1.5, ..., 5.0$ and $\alpha_\text{max} = 0, 1e0, 1e1,$ $..., 1e20$. The extracted results are presented in Fig. \ref{fig:kmax_vs_vc}. Similarly to the approach followed in Section \ref{ssec:kmax_h_chrctr}, we limit the data used for curve fitting to only 4 points (2 points along $v_c$ and 2 points along max$(|v_\mathrm{solid}|)$): all combinations of $v_c = 0.5, 5.0$ and $\alpha_\text{max} = 1e8, 1e20$. The following relation is obtained:
\begin{equation}
\alpha_\text{max} = 10^{-q} \left(1.034e6 \ v_c - 2.253e4 \right)
\label{eq:kmax_vs_vc_crv_ftng}
\end{equation}
\noindent where $q$ is defined as in Section \ref{ssec:kmax_h_chrctr}. To check the soundness of this relation, we compare Eq. \ref{eq:kmax_vs_vc_crv_ftng} to the original set of data points for $v_c = 0.5, 1.0, 1.5, ..., 5.0$ and $\alpha_\text{max} = 1e8, 1e9, 1e10, $ $..., 1e20$. The comparison is presented in Fig. \ref{fig:kmax_vs_vc_crv_ftng_chck}, showing good agreement. Similarly to Section \ref{ssec:kmax_mu_chrctr}, the error appears to decrease with increasing $v_c$; we conjecture that this error is due to the changing ratio of inertial to viscous forces discussed in Section \ref{ssec:alpha_rho_chrctr}. Nonetheless, the maximum error is only 2.1\% in the case of max$(|v_\mathrm{solid}|) = 10^{-12}$.
\section{Conclusions}
\label{sec:conc}
In this work, we investigated the dependency of the inverse permeability maximum limit on the mesh size and flow conditions. The motivation behind this study is the need for mimicking the same behavior of the Brinkman-penalized Navier-Stokes equations for different mesh sizes and flow conditions, which is particularly useful when calibrating the various interpolation and projection parameters common in density-based topology optimization of fluid-dependent problems.
We started by investigating the fluid flow governing equations in their strong as well as discretized finite element forms. We analytically derived proportionality relations between the maximum inverse permeability limit and the mesh size and flow condition parameters. We emphasize that these proportionality relations are not closed-form; rather, they hold within a certain range of flow behavior. In general, these proportionality relations are independent of the design problem, though the proportionality coefficients are problem-dependent.
For a specific design problem common in topology optimization of fluid-structure interactions, we proved these dependency relations numerically for the mesh size, dynamic viscosity, and characteristic velocity. For the characteristic length, a different relation was obtained from curve fitting which we believe is due to the dependency of the maximum inverse permeability limit on a microscale characteristic length that is somehow related to the macroscale one. In the case of the fluid density, it is deduced analytically and proven numerically that the maximum inverse permeability limit is independent of the fluid density when the change is within a reasonable range of Reynolds numbers. We also showed that only a handful of data points are needed to calculate proportionality coefficients for other problems, given that the analytical dependency relations are known a priori.
\bibliographystyle{elsarticle-num-names}
\pdfbookmark[0]{References}{References}
\section{Introduction and related work}
An interesting alternative to the ubiquitous transformer
architecture is the recently introduced structured
state space sequence model (S4) which showed promising
results for modeling long range dependencies on the LRA (Long Range
Arena) benchmark for sequence-level classification of different modalities such as text, images and mathematical expressions~\cite{gu2021efficiently}. The main
idea behind S4 is that the input sequence can be modeled as a linear
RNN obtained by discretizing a continuous state space model. The
physical meaning of a state in S4 is a time-varying vector of linear
expansion coefficients used to approximate the input sequence with
orthogonal polynomials under a given measure and support (weighting function and input window)~\cite{gu2020hippo}. The appeal of these
models is that they can be efficiently implemented as full sequence
convolutions running in ${\cal O}(T\log T)$ instead of the ${\cal O}(T^2)$ complexity for self-attention with $T$ being the input sequence length. Moreover, these models are solidly grounded in function approximation theory and have interpretable parameters in terms of basis functions, measures and time sampling intervals.
In~\cite{gu2021efficiently} the authors
consider a diagonal plus low-rank approximation of the state
transition matrix which simplifies the convolutional kernel
estimation. In~\cite{gupta2022diagonal}, the authors observed that there is no
loss in performance when assuming that the transition matrix is
diagonal with complex eigenvalues which is conceptually simpler and
straightforward to implement compared to~\cite{gu2021efficiently}. Because of this, diagonal state space (DSS) models will be adopted in this paper. In both
works, the authors initialize the diagonal entries of the state
transition matrix with the eigenvalues of a higher-order polynomial
projection operator (HiPPO) matrix such that the input function is
uniformly approximated with Legendre polynomials over a sliding window
of fixed length. In~\cite{mehta2022long} the authors argue that parameterizing the eigenvalues in log-space and initializing them with -exp for the real parts and +exp for the imaginary parts is just as effective and improve the
DSS model further by augmenting it with self-attention to better
capture local dependencies. In~\cite{gu2022parameterization}, the authors revisit the parameterization and initialization of DSS and propose eigenvalue initialization schemes with constant negative real parts with respect to the eigenvalue index and imaginary parts which scale either inversely or linearly with the eigenvalue index. The former results in projecting the input onto the space of Legendre polynomials with uniform weighting from the beginning of the sequence up to the current time whereas the latter amounts to using damped Fourier basis functions as approximators with an exponentially decaying weighted history.
While DSS has been primarily developed as an alternative to
self-attention, the dual RNN/convolutional representation suggests
that it has potential to outperform the depthwise temporal
convolutions in the conformer architecture~\cite{gulati2020conformer}. We echo the
findings of~\cite{mehta2022long} which indicate that self-attention and
DSS exhibit complementary behaviour and do not necessarily subsume each other. Given the popularity and effectiveness of conformers for both hybrid~\cite{zeineldeen2022improving} and end-to-end ASR~\cite{tuske2021limit,sainath2022improving,li2022recent,shi2022streaming,zhang2022bigssl}, several other avenues
have been explored in the literature to either improve the conformer
architecture or the training recipe. In~\cite{burchi2021efficient}, the authors
use grouped self-attention and progressive down-sampling to reduce the
complexity of the self-attention layer. In~\cite{guo2021recent}, the
authors provide training recipes and extensive comparisons between
conformers and transformers on several corpora. In~\cite{wang2020efficient} the
authors replace the transformer layer with performer. In~\cite{li2021efficient},
the authors use linear self-attention layers. In~\cite{zeineldeen2022improving} the
authors use two convolutional layers for each conformer block and layer normalization instead of batch norm. Similar to our work, in~\cite{jiang2022nextformer},
the authors replace the convolutional layers with a more powerful representation called ConvNeXt.
The main contributions of this work are summarized below:
\begin{itemize}
\setlength\itemsep{-1pt}
\item We apply diagonal state space models to speech recognition and report experimental results on three public corpora.
\item We show that DSSformers outperform conformers when used as encoders for neural transducers and achieve state-of-the-art results for single non-AED models on Switchboard telephony speech and MALACH.
\item We study the effect of DSS initialization and provide some insights into what the DSS layers actually learn.
\end{itemize}
The rest of the paper is organized as follows: in section~\ref{dss-sec} we review the DSS formalism; in section~\ref{exp-sec} we present experimental evidence of its utility and in section~\ref{con-sec} we summarize our findings.
\section{DSS formulation}
\label{dss-sec}
We briefly review the main concepts behind the diagonal state spaces framework for readers from the ASR community who may not be familiar with this new sequence-to-sequence modeling approach.\\
\subsection{State space model}
Borrowing some definitions and notations from~\cite{gu2021efficiently,gupta2022diagonal}, a continuous state space model (SSM), sometimes referred to in the literature as a linear time-invariant or a linear dynamical system, is defined by the linear ODE:
\begin{align}
\nonumber x^\prime(t)&={\bf A}x(t)+{\bf B}u(t), &{\bf A}\in\mathbb{R}^{N\times N},~{\bf B}\in\mathbb{R}^{N\times 1} \\
y(t)&={\bf C}x(t),& {\bf C}\in\mathbb{R}^{1\times N}\label{ssm}
\end{align}
~~\\
that maps the continuous 1-dimensional input $u(t)\in\mathbb{R}$ to an $N$-dimensional latent state $x(t)\in\mathbb{R}^{N}$ before projecting it to a 1-dimensional output $y(t)\in\mathbb{R}$. The state space is parameterized by the state transition matrix ${\bf A}$ as well as trainable parameters ${\bf B},{\bf C}$.
\subsection{Discretization and link to linear RNNs} Consider a sampling interval $\Delta>0$ and define $u_k:=u(k\Delta), k=0\ldots L-1$ the sampled input signal. Correspondingly, we have $x_k=x(k\Delta)$ and $y_k=y(k\Delta)$. Equation~(\ref{ssm}) can be turned into a discrete recurrence that maps $(u_0,\ldots,u_{L-1})\mapsto(y_0,\ldots,y_{L-1})$ by integrating over $[(k-1)\Delta,k\Delta]$ under the zero-order hold (ZOH) assumption $u(t)=u_k,~(k-1)\Delta\le t< k\Delta$:
\begin{align}
\nonumber x_k&=\overline{\bf A}x_{k-1}+\overline{\bf B}u_k, &\overline{\bf A}=e^{{\bf A}\Delta},~\overline{\bf B}=(e^{{\bf A}\Delta}-{\bf I}){\bf A}^{-1}{\bf B}\\
y_k&=\overline{\bf C}x_k, &\overline{\bf C}={\bf C}\label{rnn}
\end{align}
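As an illustration (ours, not code from the cited works), the ZOH discretization and the resulting linear recurrence in (\ref{rnn}) can be written in a few lines of Python:
\begin{verbatim}
# Sketch: zero-order-hold discretization and the unrolled linear recurrence.
import numpy as np
from scipy.linalg import expm

def discretize_zoh(A, B, C, dt):
    N = A.shape[0]
    A_bar = expm(A * dt)                                  # e^{A*dt}
    B_bar = (A_bar - np.eye(N)) @ np.linalg.solve(A, B)   # (e^{A*dt}-I) A^{-1} B
    return A_bar, B_bar, C

def run_recurrence(A_bar, B_bar, C_bar, u):
    x, ys = np.zeros((A_bar.shape[0], 1)), []
    for u_k in u:                      # x_k = A_bar x_{k-1} + B_bar u_k
        x = A_bar @ x + B_bar * u_k
        ys.append(float(C_bar @ x))    # y_k = C_bar x_k
    return np.array(ys)
\end{verbatim}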
\subsection{Convolutional representation} With the convention $x_{-1}=0$, the recurrence can be unrolled and rewritten by eliminating the state variables $x_k$:
\begin{equation}
y_k = \sum_{j=0}^k\overline{\bf C}\overline{\bf A}^j\overline{\bf B}u_{k-j},\qquad k=0,\ldots,L-1
\label{unroll}
\end{equation}\\
By grouping the scalar coefficients $\overline{\bf C}\overline{\bf A}^k\overline{\bf B}$ into the SSM kernel $\overline{\bf K}\in\mathbb{R}^L,~
\overline{\bf K}=(\overline{\bf C}\overline{\bf B},\overline{\bf C}\overline{\bf A}\overline{\bf B},\ldots,\overline{\bf C}\overline{\bf A}^{L-1}\overline{\bf B})$, (\ref{unroll}) can be elegantly reformulated as a convolution
\begin{equation}
y=\overline{\bf K}*u
\label{conv}
\end{equation}\\
Computing~(\ref{conv}) naively would require ${\cal O}(L^2)$ operations. Instead, we observe that $y_k$ is the coefficient of $z^k$ of the product $u(z)\cdot\overline{K}(z)$ of two $(L-1)$-degree univariate polynomials $u(z)=\sum_{i=0}^{L-1}u_i z^i$ and $\overline{K}(z)=\sum_{i=0}^{L-1}\overline{K}_i z^i$. By the circular convolution theorem, this product can be computed efficiently in ${\cal O}(L\log L)$ using FFT and its inverse.
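For concreteness, a minimal sketch (ours) of this FFT-based evaluation of (\ref{conv}) is:
\begin{verbatim}
# Sketch: causal convolution y = K * u in O(L log L) via FFT.
import torch

def causal_fft_conv(K, u):
    # K, u: real tensors of shape (..., L); zero-pad to 2L to avoid
    # circular wrap-around, then keep the first L outputs.
    L = u.shape[-1]
    K_f = torch.fft.rfft(K, n=2 * L)
    u_f = torch.fft.rfft(u, n=2 * L)
    return torch.fft.irfft(K_f * u_f, n=2 * L)[..., :L]
\end{verbatim}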
\subsection{Diagonal state spaces}
Based on the above, computing $y$ from $\overline{\bf K}$ and $u$ is easy; the difficult part is computing $\overline{\bf K}$ efficiently. The main result in~\cite{gupta2022diagonal} states that if ${\bf A}\in\mathbb{C}^{N\times N}$ is diagonalizable over $\mathbb{C}$ with eigenvalues $\lambda_1,\ldots,\lambda_N$ such that, $\forall i$, $\lambda_i\ne 0$ and $e^{L\lambda_i\Delta} \ne 1$, then there exists $w \in \mathbb{C}^{1\times N}$ such that
\begin{equation}
\overline{\bf K} = w\cdot\mathbf{\Lambda}^{-1}\cdot\mbox{row-softmax}({\bf P})
\label{dss}
\end{equation}
~~\\
where ${\bf P}\in\mathbb{C}^{N\times L},~p_{ik}=\lambda_i k\Delta$ and $\mathbf{\Lambda}=\mbox{diag}(\lambda_1,\ldots,\lambda_N)$. The proof uses the diagonalization of ${\bf A}={\bf V}\mathbf{\Lambda}{\bf V}^{-1}$ which, from the expression of $\overline{\bf A}$ from~(\ref{rnn}), implies $\overline{\bf A}^k = e^{k{\bf A}\Delta}={\bf V}e^{k\mathbf{\Lambda}\Delta}{\bf V}^{-1}$, and the geometric series identity $\sum_{k=0}^{L-1}z^k=\frac{z^L-1}{z-1}$. We refer the reader to~\cite{gupta2022diagonal} for the complete proof.
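A minimal sketch (ours, written for a single channel for clarity) of assembling the kernel in (\ref{dss}) is shown below; since PyTorch's softmax does not support complex inputs, the row-softmax is written out explicitly, and taking the real part of the kernel at the end is a simplification made for this illustration.
\begin{verbatim}
# Sketch: K = w . Lambda^{-1} . row-softmax(P), with P[i,k] = lambda_i * k * dt.
import torch

def complex_row_softmax(P):
    # exp(p) / sum_k exp(p) over the last dim; shifting by the real row-max
    # improves numerical stability and cancels in the ratio.
    shift = P.real.max(dim=-1, keepdim=True).values
    num = torch.exp(P - shift)
    return num / num.sum(dim=-1, keepdim=True)

def dss_kernel(Lambda, w, dt, L):
    # Lambda, w: complex tensors of shape (N,); dt: float; L: kernel length.
    k = torch.arange(L, device=Lambda.device)
    P = Lambda.unsqueeze(-1) * (dt * k)            # (N, L)
    K = (w / Lambda) @ complex_row_softmax(P)      # (L,)
    return K.real
\end{verbatim}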
\subsection{DSS layer}
A DSS layer operates as follows. It receives an $H\times L$ input sequence and produces an $H\times L$ output sequence where $H$ is the number of channels and $L$ is the sequence length. It does this by applying $H$ DSS kernels to the input (with a shortcut connection) according to (\ref{conv}), one for each coordinate. We apply a Gaussian Error Linear Unit (GELU) nonlinearity to the result followed by an $H\times H$ pointwise linear layer needed to exchange information between the dimensions. After mixing, we apply a Gated Linear Unit (GLU) activation to the output. The implementation of a DSS layer as described so far is publicly available at \url{https://github.com/ag1988/dss}.
For a state space dimension $N$, the trainable parameters of the DSS layer are: $\mathbf{\Lambda}_{re},\mathbf{\Lambda}_{im}\in\mathbb{R}^N$ the diagonal entries of the transition matrix (tied across all channels), $W\in\mathbb{C}^{H\times N}$ from~(\ref{dss}), $\Delta\in\mathbb{R}^H$ the time sampling intervals, and $W_{out}\in\mathbb{R}^{H\times H}$ the output mixing matrix.
Just like the depthwise separable convolution module in the conformer architecture, the DSS layer is sandwiched between two pointwise convolutions which serve to increase the inner dimension (typically by a factor of 2) on which the layer operates as shown in Figure~\ref{dss-arch}.
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{dss-crop}
\caption{Proposed architecture: top DSSformer block, bottom DSS module (non-linearities are omitted for clarity).}
\label{dss-arch}
\end{figure}
\section{Experiments and results}
\label{exp-sec}
We investigate the effectiveness of the proposed model on three public corpora: Switchboard English conversational telephone speech 300 hours, Switchboard+Fisher 2000 hours, and MALACH 176 hours.
\subsection{Experiments on Switchboard 300 hours}
\label{swb300}
The acoustic training data comprises 300 hours of English telephone conversations
between two strangers on a preassigned topic. We follow the Kaldi {\tt s5c} recipe~\cite{povey11} for data preparation and segmentation and report results on the
Hub5 2000 (Switchboard and CallHome), Hub5 2001 and RT'03 test sets which are processed according to the LDC segmentation and scored using Kaldi WER measurement.
\subsubsection{Feature processing}
Our feature extraction and training recipe largely mirrors~\cite{saon2021advancing} with some notable differences. We extract 40-dimensional speaker independent log-Mel features every 10ms with speaker-based mean and variance normalization, augmented with $\Delta$ and $\Delta\Delta$ coefficients. We perform temporal subsampling by a factor of 2 by stacking every two consecutive frames and skipping every second stacked frame, which results in 50 feature vectors of dimension 240 per second. Unlike~\cite{saon2021advancing,cui2022improving,zeineldeen2022improving}, we do not use appended i-vectors as we found them to be less effective with conformer transducers.
We create 4 additional replicas of the training data using speed and tempo perturbation~\cite{ko15} both with values in $\{0.9,1.1\}$ which, together with the original data, amounts to 1500 hours of training data every epoch. We perturb the data in three different ways: (i) sequence noise injection adds, with probability 0.8, a down-scaled spectrum of a random utterance to the current utterance~\cite{saon19a}; (ii) SpecAugment randomly masks blocks in both time and frequency with the settings from~\cite{park19}; (iii) Length perturbation randomly deletes and inserts contiguous frames with probability 0.7~\cite{cui2022improving}.
\subsubsection{Transducer architecture}
We trained neural transducers (or RNN-Ts\footnote{Both terms are used interchangeably in the literature even for models where the encoder is not an RNN.})~\cite{graves12} with either conformer or DSSformer encoders with 12 layers, feed-forward dimension of 384 and 6$\times$96-dimensional attention heads for an inner dimension of 512. All DSS layers use bidirectional kernels with a state space dimension $N$=96. The joint network projects the 384-dim vectors from the last encoder layer to 256 and multiplies the result elementwise~\cite{saon2021advancing,zhang2022improving} with a 256-dim projection of a label embedding computed by a unidirectional 1024-cell LSTM prediction network. After the application of hyperbolic tangent, the output is projected to 46 logits followed by a softmax
layer corresponding to 45 characters plus BLANK. The baseline conformer RNN-T has an optimal size of 63M parameters and the DSSformer RNN-T has 73M parameters.
\subsubsection{Training and decoding}
The models were trained in Pytorch to minimize the RNN-T loss with CTC loss smoothing from the encoder with a weight of 0.1. Training was carried out on single A100 GPUs for 24 epochs with AdamW SGD and a one cycle learning rate policy which ramps up the step size linearly from 5e-5 to 5e-4 for the first 8 epochs followed by a linear annealing phase to 0 for the remaining 16 epochs. All experiments use a batch size of 64 utterances. Decoding was done using alignment-length synchronous beam search~\cite{saon2020alignment}. We also report results with density ratio shallow language model fusion~\cite{mcdermott19} where the target LM is a 12-layer, 512-dimensional transformerXL character LM~\cite{dai2019transformer} trained on the Switchboard+Fisher acoustic transcripts (126M characters) and the source LM has the same configuration as the prediction network and was trained on the 300 hours transcripts only (15M characters).
\subsubsection{DSS layer initialization and recognition results}
In Table~\ref{swb300-tab}, we compare the performance of baseline conformer and DSSformer transducers with different initializations of the $\mathbf{\Lambda}$ matrix. Concretely, $\mathbf{\Lambda}$ HiPPO uses the top $N$ eigenvalues with positive imaginary part from the skew-symmetric $2N\times 2N$ matrix $a_{ij}=\left\{\begin{array}{ll}2(i+1)^{1/2}(2j+1)^{1/2},&i<j\\-1/2,&i=j\\-2(i+1)^{1/2}(2j+1)^{1/2},&i>j\\ \end{array}\right.$~\cite{gupta2022diagonal}. For $\mathbf{\Lambda}$ exp random, $\lambda_n=-e^{a_n}+i\cdot e^{b_n}$ where $a_n,b_n\sim{\cal U}[-1,1]$~\cite{mehta2022long}. For $\mathbf{\Lambda}$ S4D-Inv, $\lambda_n=-\frac{1}{2}+i \frac{N}{\pi}\left(\frac{N}{2n+1}-1\right)$, whereas for $\mathbf{\Lambda}$ S4D-Lin, $\lambda_n=-\frac{1}{2}+i\pi n$~\cite{gu2022parameterization}. For all experiments, $\Delta$ is parameterized in log-space with values drawn from ${\cal U}[\log(0.001),\log(0.1)]$ and the real and imaginary parts for $w$ in~(\ref{dss}) are initialized from ${\cal N}(0,1)$.
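For reference, the non-HiPPO initializations compared in Table~\ref{swb300-tab} can be written compactly as follows (an illustrative sketch with our own naming, not the training code):
\begin{verbatim}
# Sketch of the non-HiPPO Lambda initializations compared in the text.
import math, torch

def init_lambda(N, scheme):
    n = torch.arange(N, dtype=torch.float)
    if scheme == "exp-random":   # real ~ -exp(U[-1,1]), imag ~ +exp(U[-1,1])
        a = torch.empty(N).uniform_(-1, 1)
        b = torch.empty(N).uniform_(-1, 1)
        return torch.complex(-torch.exp(a), torch.exp(b))
    if scheme == "s4d-inv":      # -1/2 + i (N/pi) (N/(2n+1) - 1)
        return torch.complex(-0.5 * torch.ones(N),
                             (N / math.pi) * (N / (2 * n + 1) - 1))
    if scheme == "s4d-lin":      # -1/2 + i pi n
        return torch.complex(-0.5 * torch.ones(N), math.pi * n)
    if scheme == "ours":         # -1 + i n (last row of the table)
        return torch.complex(-torch.ones(N), n)
    raise ValueError(scheme)
\end{verbatim}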
\begin{table}[H]
\begin{center}
\setlength\tabcolsep{6pt}
\begin{tabular}{|l|c|c|c|c|c|} \hline
\multirow{2}{*}{Encoder} & \multicolumn{3}{|c|}{Hub5'00} & \multirow{2}{*}{Hub5'01} & \multirow{2}{*}{RT'03}\\ \cline{2-4}
& swb & ch & avg & & \\ \hline
Conformer & 7.5 & 15.0 & 11.2 & 11.2 & 14.4\\
~~~~~~~-MHSA+DSS & 8.0 & 15.9 & 12.0 & 12.2 & 15.7\\ \hline
$\mathbf{\Lambda}$ HiPPO~\cite{gupta2022diagonal} & 7.2 & 14.2 & 10.7 & 10.7 & 13.5 \\ \hline
$\mathbf{\Lambda}$ exp random~\cite{mehta2022long} & 7.3 & 14.5 & 10.9 & 10.8 & 13.4\\ \hline
$\mathbf{\Lambda}$ S4D-Inv~\cite{gu2022parameterization} & 7.5 & 14.7 & 11.1 & 10.9 & 13.8\\ \hline
$\mathbf{\Lambda}$ S4D-Lin~\cite{gu2022parameterization} & 7.2 & 14.3 & 10.8 & 10.9 & 13.3\\ \hline
$\lambda_n=-1+i\cdot n$ & 7.1 & 13.9 & 10.5 & 10.5 & 13.3\\ \hline
\end{tabular}
\end{center}
\caption{\label{swb300-tab} Recognition results for conformer, conformer with MHSA replaced by DSS, and DSSformer transducers with different $\mathbf{\Lambda}$ initializations on Switchboard 300 hours (Hub5'00, Hub5'01, RT'03 test sets). All models are trained for 24 epochs without length perturbation and decodings are done without external LM.}
\end{table}
The $\mathbf{\Lambda}$ initialization from the last row in Table~\ref{swb300-tab} was motivated by inspecting the converged values of $\lambda_n$ when the DSS layers were initialized with S4D-Lin. Interestingly, the imaginary parts of $\lambda_n$ converge from $\pi n$ to approximately $0.95n$ across all layers as shown in Figure~\ref{im-lambda}. In contrast, in Figure~\ref{re-lambda} the real parts converge to values that are layer-dependent\footnote{The curves have been smoothed with Bezier interpolation for ease of visualization.}. This suggests that the DSS layers learn damped Fourier basis functions $F_n(t)=e^{-\lambda_n t}$ where the attenuation coefficients are layer specific and the frequency coefficients are linearly spaced and common across layers. The benefit of using FFT layers for mixing input sequences has also been shown in the FNet architecture~\cite{lee2021fnet}.
\begin{figure}[H]
\hspace*{-3mm}
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=1.1\textwidth]{re}
\caption{Real parts of trained $\lambda_n$}
\label{re-lambda}
\end{subfigure}
\hspace*{2mm}
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=1.1\textwidth]{im}
\caption{Imaginary parts of trained $\lambda_n$}
\label{im-lambda}
\end{subfigure}
\caption{Converged eigenvalues for S4D-Lin initialization for first, middle and last layers on Switchboard 300 hours.}
\label{lambdas}
\end{figure}
In Table~\ref{swb300-best} we compare the performance of our best single DSSformer model with existing approaches from the literature. Here, the model was trained for 30 epochs with length perturbation with the following settings from~\cite{cui2022improving}: insertion and deletion probabilities of 0.7, 10\% of frames selected as starting points for both, maximum deletion length of 7 frames and maximum insertion length of 3 frames. Length perturbation is lifted after 25 epochs.
\begin{table}[H]
\setlength\tabcolsep{3pt}
\begin{center}
\begin{tabular}{|l|l|l|c|c|c|c|c|} \hline
\multirow{2}{*}{Work} &\multirow{2}{*}{Model} & \multirow{2}{*}{Encoder} & \multirow{2}{*}{LM} & \multicolumn{3}{|c|}{Hub5'00} & \multirow{2}{*}{Hub5'01}\\ \cline{5-7}
& & & & swb & ch & avg & \\ \hline
\cite{guo2021recent} & AED & Conformer & -- & 7.1 & 15.0 & 11.1 & -- \\ \hline
\multirow{3}{*}{\cite{tuske2021limit}} & \multirow{3}{*}{AED} & \multirow{3}{*}{Conformer} & -- & 6.7 & 13.0 & 9.9 & 10.0\\
& & & LSTM$^*$ & 5.7 & 11.4 & 8.6 & 8.5\\
& & &+Trafo$^*$ & 5.5 & 11.2 & 8.4 & 8.5\\ \hline
\multirow{2}{*}{\cite{zeineldeen2022improving}} & \multirow{2}{*}{HMM} & \multirow{2}{*}{Conformer} & n-gram & 7.1 & 13.5 & 10.3 & 10.4\\
& & & Trafo & 6.3 & 12.1 & 9.2 & 9.3\\ \hline
\multirow{2}{*}{\cite{cui2022improving}} & \multirow{2}{*}{RNN-T} & \multirow{2}{*}{LSTM} & -- & 6.9 & 14.5 & 10.7 & 11.2\\
& & & LSTM & 5.9 & 12.5 & 9.2 & 9.4\\ \hline
\multirow{2}{*}{\cite{zhou2022efficient}} & \multirow{2}{*}{RNN-T} & \multirow{2}{*}{Conformer} & n-gram & -- & -- & 10.3 & 10.6\\
& & & Trafo & -- & -- & 9.3 & 9.4\\ \hline
\multirow{2}{*}{Ours} & \multirow{2}{*}{RNN-T} & \multirow{2}{*}{DSSformer} & -- & 6.7 & 13.4 & 10.0 & 10.3\\
& & & Trafo & 5.6 & 12.2 & 8.9 & 9.0 \\ \hline
\end{tabular}
\end{center}
\caption{\label{swb300-best} Performance comparison of DSSformer transducer with other single-model approaches from the literature on Switchboard 300 hours ($^*$ are cross-utterance LMs).}
\end{table}
\subsection{Experiments on Switchboard+Fisher 2000 hours}
\label{swb2000}
The second set of experiments was carried out on 1975 hours (9875 hours after augmentation) comprising 262 hours of
Switchboard 1 audio with segmentations and transcripts provided by Mississippi State
University plus 1698 hours from the Fisher data collection with transcripts provided by LDC plus 15 hours of CallHome audio. We trained neural transducers with either conformer (10 or 12 layers) or DSSformer encoders (10 layers), feed-forward dimension of 512 and 8$\times$64-dimensional attention heads. All DSS layers use bidirectional kernels with a state space dimension $N$=96. Training was carried out on 4 A100 GPUs with an effective batch size of 128 for 20 epochs with a one cycle LR policy with a maximum learning rate of 5e-4. The other settings are the same as in~\ref{swb300}. In Table~\ref{swb2000-tab} we show a comparison of baseline conformer and DSSformer transducers with various $\mathbf{\Lambda}$ initializations. As can be seen, DSSformer encoders outperform the conformer counterparts and the best $\mathbf{\Lambda}$ initialization is the same as in~\ref{swb300}. For contrast, we also compare our results with the single best performing model on this task from~\cite{tuske2021limit} and note that we achieve a comparable performance on two out of three test sets.
\begin{table}[H]
\begin{center}
\setlength\tabcolsep{6pt}
\begin{tabular}{|l|c|c|c|c|c|} \hline
\multirow{2}{*}{Encoder} & \multicolumn{3}{|c|}{Hub5'00} & \multirow{2}{*}{Hub5'01} & \multirow{2}{*}{RT'03}\\ \cline{2-4}
& swb & ch & avg & & \\ \hline
Conformer (10L) & 5.2 & 8.5 & 6.9 & 7.6 & 7.8 \\
Conformer (12L) & 5.4 & 8.5 & 6.9 & 7.6 & 8.2\\ \hline
$\mathbf{\Lambda}$ HiPPO & 5.2 & 8.4 & 6.8 & 7.4 & 7.5\\ \hline
$\mathbf{\Lambda}$ S4D-Lin & 5.3 & 8.4 & 6.8 & 7.6 & 7.5\\ \hline
$\lambda_n=-1+i\cdot n$ & 5.1 & 8.5 & 6.8 & 7.4 & 7.4\\
~~~+length perturb. & 5.2 & 8.2 & 6.7 & 7.2 & 7.5\\ \hline\hline
Conformer AED~\cite{tuske2021limit}& 4.8 & 8.0 & 6.4 & 7.3 & 7.5\\ \hline
\end{tabular}
\end{center}
\caption{\label{swb2000-tab} Recognition results for conformer (10 and 12 layers) and DSSformer transducers (10 layers) with different $\mathbf{\Lambda}$ initializations on Switchboard 2000 hours (Hub5'00, Hub5'01, RT'03 test sets). All decodings are done without external LM.}
\end{table}
\subsection{Experiments on MALACH 176 hours}
\label{malach}
Lastly, we test the proposed models on the public MALACH corpus~\cite{bhuvana03} (released by LDC as LDC2019S11) which consists of Holocaust testimonies collected by the
Survivors of the Shoah Visual History Foundation. The corpus is 16kHz audio broken
down into 674 conversations totaling 176 hours for training (880 hours after augmentation) and 8
conversations of 3.1 hours for testing. The collection consists of
unconstrained, natural speech filled with disfluencies, heavy accents,
age-related coarticulations, un-cued speaker and language switching,
and emotional speech, all of which present significant challenges for
current ASR systems. Because of this, the error rates reported are
significantly higher than for the previous corpora. We trained conformer and DSSformer transducers with the same feature extraction, architecture, DSS layer initialization and training recipe as in~\ref{swb300} without length perturbation and with S4D-Lin $\mathbf{\Lambda}$ initialization. In Table~\ref{malach-tab} we report results with and without external LM fusion where the LM is a 10 layer 512-dimensional transformerXL trained on 7.2M characters. Our results show a 7\% relative improvement in WER over the previous best hybrid LSTM approach.
\begin{table}[H]
\begin{center}
\begin{tabular}{|l|l|l|c|c|} \hline
Work & Model & Encoder & LM & WER\\ \hline
\cite{bhuvana03} & HMM & GMM & n-gram & 32.1\\ \hline
\multirow{2}{*}{\cite{picheny2019challenging}} & \multirow{2}{*}{HMM} & \multirow{2}{*}{LSTM} & n-gram & 23.9\\
& & & LSTM & 21.7 \\ \hline
\multirow{3}{*}{Ours} & \multirow{3}{*}{RNN-T} & Conformer & -- & 21.5\\ \cline{3-5}
& & \multirow{2}{*}{DSSformer} & -- & 20.9\\
& & & Trafo & 20.2\\ \hline
\end{tabular}
\end{center}
\caption{\label{malach-tab} Performance comparison of conformer and DSSformer transducer with other single-model approaches from the literature on MALACH 176 hours.}
\end{table}
\section{Discussion}
\label{con-sec}
Diagonal state space models are a promising alternative to temporal convolutions with fixed-length kernels for ASR when used in a conformer-style architecture. We attribute their success to the connection with function approximation theory and to the interpretability of their parameters. In future work we will investigate better ways of integrating DSS layers with self-attention and feedforward modules as opposed to simply using them as a drop-in replacement for the depthwise convolutions in conformer. For example, the DSS mixing matrix can be combined with the second pointwise convolution which will simplify the overall architecture. Another avenue of research is to improve the initialization for the real parts of the eigenvalues of the state transition matrices and possibly keep the $\mathbf{\Lambda}$s fixed during training which will reduce the number of free parameters. Lastly, we plan to study the effectiveness of DSS for other end-to-end ASR modeling approaches.
\bibliographystyle{IEEEbib}
\section{Introduction}
\label{sec:intro}
Self-supervised speech representation learning (SSL) has achieved great success in a variety of speech processing tasks~\cite{superb, ssl-review, ssl-for-asr, ssl-for-se-ss, slue, espnet-slu, ssl-for-slu}. However, SSL pre-trained models (e.g., wav2vec2~\cite{wav2vec2}, HuBERT~\cite{hubert} and WavLM~\cite{wavlm}) usually require large memory and high computational cost. Hence, it is difficult to deploy them in real-world applications. Recent studies have utilized model compression techniques to reduce the model size and computation without degradation in accuracy. One common approach is knowledge distillation~\cite{hinton2015distilling}, which trains a small student model with a pre-specified architecture to match the soft targets generated by a large pre-trained model. Distillation has shown to be effective in natural language processing (NLP)~\cite{distilbert, tinybert} and speech processing~\cite{shrinking-bigfoot, distilhubert, lighthubert, fithubert}, but it usually performs general distillation using large amounts of unlabeled data before task-specific distillation or fine-tuning. This can make the training procedure computationally expensive.
Another compression technique is pruning, which extracts a compact and accurate subnetwork from the original model. Pruning has been widely used in computer vision (CV)~\cite{han2015pruning, filter-pruning-conv, network-slimming, platon-icml22} and NLP~\cite{platon-icml22, asapp-pruning, cofi, super-tickets-nlp}. For speech models, \cite{structured-prune-rnn, prune-lstm} prune recurrent neural networks (RNNs) for resource-limited applications.
Another work~\cite{deliang-compressing-enhancement} prunes deep neural networks (DNNs) based speech enhancement models using the sparse group lasso regularization~\cite{group-sparse-reg}. These studies do not consider SSL models.
PARP~\cite{parp} is one of the first pruning methods designed for SSL speech models, which prunes individual weights based on magnitudes. Despite its good performance in low-resource automatic speech recognition (ASR), PARP is a type of unstructured pruning and thus cannot achieve an actual speedup without an efficient sparse matrix computation library, which is not usually available in many deployment scenarios.
Another limitation is that PARP only prunes the Transformer layers while keeping the convolutional feature extractor (CNN) fixed. As discussed in~\cite{felix-sew}, although the CNN has much fewer parameters than the Transformer, its computational cost is large and cannot simply be ignored. For example, in wav2vec2-base, the CNN has less than 5\% of the total parameters but its computational cost is nearly 33\% in terms of multiply-accumulate (MAC) operations for a 10-second audio.
In this work, we propose HJ-Pruning\xspace (Heterogeneous Joint Pruning\xspace) where both CNN and Transformer components are pruned jointly.
We consider three variants:
(a) \textit{\method based on the overall model size\xspace} sets a single sparsity for the entire model size.
(b) \textit{\method based on separate model sizes\xspace} introduces a separate sparsity hyperparameter for each component which allows fine-grained tuning to find a trade-off between CNN and Transformer.
(c) \textit{\method based on the overall MACs\xspace} uses multiply–accumulate (MAC) operations as the sparsity criterion to find the best allocation of the computation budget across different components.
We evaluate our methods in the ASR and spoken language understanding (SLU) tasks. Experiments on LibriSpeech and SLURP show that all HJ-Pruning\xspace methods outperform the previous Transformer-only pruning strategy. Our \method-MAC\xspace is more accurate than the original wav2vec2-base with 10\% to 30\% less computation, and is able to reduce the computation by 40\% to 50\% without any degradation.
\section{Background}
\subsection{Self-supervised pre-trained speech models}
\label{subsec:ssl-models}
We evaluate the pruning algorithms mainly using wav2vec2~\cite{wav2vec2}, but our proposed methods can be easily applied to other SSL models with a similar architecture such as HuBERT~\cite{hubert} (see Sec.~\ref{subsec:other-compression}), SEW-D~\cite{felix-sew}, and WavLM~\cite{wavlm}. The wav2vec2-base model (pretrained on Librispeech 960h~\cite{librispeech-corpus}) consists of a convolutional feature extractor (CNN) and a Transformer~\cite{transformer} encoder. The CNN contains seven temporal convolutions with 512 channels and GeLU~\cite{gelu} activations. The Transformer encoder is a stack of 12 Transformer layers with a hidden dimension of 768 and 12 attention heads.
\subsection{Structured pruning using $L_0$ regularization}
\label{subsec:pruning-l0reg}
We follow \cite{asapp-pruning, cofi, l0sparse-iclr18} to formulate the structured pruning task as a regularized learning problem, which aims to learn a sparse model. Let $f(\cdot;\thetav)$ be a model with parameters $\thetav = \{\theta_j\}_{j=1}^n$, where each $\theta_j$ is a group of parameters (e.g., an attention head) and $n$ is the number of groups. The pruning decisions are given by a set of binary variables called \textit{gates}: $\mathbf{z} = \{z_j\}_{j=1}^n$ where $z_j\in \{0, 1\}$. The model parameters after pruning are $ \tilde{\thetav} = \{\tilde{\theta}_j\}_{j=1}^n$ such that $\tilde{\theta}_j = \theta_j z_j$. We usually sample gates from some distributions (e.g., Bernoulli) and update their parameters during training. Suppose the gates follow a distribution $q(\mathbf{z};\alphav)$ with parameters $\alphav = \{\alpha_j\}_{j=1}^n$, then our training objective is:
\begin{align}
\label{eq:total-loss}
\min_{\thetav, \alphav} ~~
\mathbb{E}_{\mathbf{z} \sim q}\left[\frac{1}{D} \sum_{i=1}^D \mathcal{L}(f(\mathbf{x}_i;\tilde{\thetav}),\mathbf{y}_i) + \lambda \norm{\tilde{\thetav}}_0 \right],
\end{align}
where $\{(\mathbf{x}_i, \mathbf{y}_i)\}_{i=1}^D$ is the training data containing $D$ samples, $\mathcal{L}$ is the training loss (i.e., CTC loss for ASR, cross entropy loss for SLU), and $\lambda > 0$ is a hyperparameter to control the sparsity. However, it is intractable to optimize Eq.~\eqref{eq:total-loss} using gradient descent because the gates are discrete. Louizos et al.~\cite{l0sparse-iclr18} propose a reparameterization trick to make the loss differentiable, which has been widely used in sparse model learning. Here we only introduce their final approach. Please refer to \cite{l0sparse-iclr18} for the derivation. Louizos et al. adopt the Hard Concrete Distribution~\cite{l0sparse-iclr18} to model the gates $\mathbf{z}$:
\begin{align}
\label{eq:hard-concrete}
\begin{split}
\mathbf{u} &\sim \mathcal{U}(0,1), ~~
\mathbf{v}(\alphav) = \sigma\left(\frac{1}{\beta}\left(\log\frac{\mathbf{u}}{1-\mathbf{u}} + \log\alphav\right)\right), \\
\bar{\mathbf{v}}(\alphav) &= (r-l)\cdot\mathbf{v}(\alphav) + l, ~~ \mathbf{z} = \min(1,\max(0,\bar{\mathbf{v}}(\alphav))),
\end{split}
\end{align}
where $\mathcal{U}(0,1)$ is a uniform distribution over the interval $[0,1]$, $\sigma(\cdot)$ is the sigmoid function and $\beta$ is a temperature constant. The actual parameters are $\alphav$. $l < 0$ and $r > 0$ are two constants to stretch the output of sigmoid to $[l, r]$, which is finally rectified to $[0,1]$. It is proven that the first term in Eq.~\eqref{eq:total-loss} now becomes differentiable w.r.t. all parameters. We can write the second term in a closed-form expression based on the distribution of $\mathbf{z}$ shown in Eq.~\eqref{eq:hard-concrete}:
\begin{equation}
\label{eq:expected-norm}
\mathbb{E}_{\mathbf{z}}\left[\norm{\tilde{\thetav}}_0\right]
= \sum_{j=1}^n P(z_j \neq 0)
= \sum_{j=1}^n \sigma\left( \log \alpha_j - \beta \log\frac{-l}{r} \right),
\end{equation}
which is also differentiable. $P(\cdot)$ denotes the probability.
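As an illustration of Eqs.~\eqref{eq:hard-concrete} and \eqref{eq:expected-norm}, the gate sampling and the expected $L_0$ count can be implemented as follows (a sketch using constants $\beta=2/3$, $l=-0.1$, $r=1.1$ that are commonly used in the $L_0$-regularization literature; our code, not that of \cite{l0sparse-iclr18}):
\begin{verbatim}
# Sketch: hard-concrete gate sampling and the expected L0 norm.
import math, torch

BETA, L, R = 2.0 / 3.0, -0.1, 1.1   # temperature and stretch constants

def sample_gates(log_alpha):
    u = torch.rand_like(log_alpha).clamp(1e-6, 1 - 1e-6)
    v = torch.sigmoid((torch.log(u) - torch.log(1 - u) + log_alpha) / BETA)
    v_bar = v * (R - L) + L
    return v_bar.clamp(0.0, 1.0)    # z in [0, 1], exactly 0 or 1 with prob > 0

def expected_num_nonzero(log_alpha):
    # sum_j P(z_j != 0) = sum_j sigmoid(log alpha_j - beta * log(-l / r))
    return torch.sigmoid(log_alpha - BETA * math.log(-L / R)).sum()
\end{verbatim}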
Now we can train a sparse model using Eq.~\eqref{eq:total-loss}. However, it is difficult to exactly control the pruned model size~\cite{asapp-pruning, cofi}. Instead of adding a regularizer $\lambda \norm{\tilde{\thetav}}_0$, prior studies~\cite{asapp-pruning, cofi} suggest optimizing the training loss subject to an explicit equality constraint on sparsity:
\begin{align}
\label{eq:min-st-cons}
\min_{\thetav, \alphav} ~~
\mathbb{E}_{\mathbf{z}}\left[\frac{1}{D} \sum_{i=1}^D \mathcal{L}(f(\mathbf{x}_i;\tilde{\thetav}),\mathbf{y}_i) \right]~~\text{s.t.}~~s(\alphav)=t,
\end{align}
where $s(\alphav)$ is the current sparsity and $t$ is a pre-specified target sparsity. The sparsity is defined as the percentage of parameters that are pruned. Similar to Eq.~\eqref{eq:expected-norm}, given the current parameters $\alphav$, we can calculate the expected number of nonzero gates in every module of the model. Recall that each gate is associated with a group of parameters. Hence, we know the expected number of parameters that are still kept, which further gives us the sparsity $s(\alphav)$. Eq.~\eqref{eq:min-st-cons} can be rewritten as an adversarial game according to the augmented Lagrangian method~\cite{asapp-pruning}:
\begin{align}
\label{eq:minimax}
\max_{\lambdav} \min_{\thetav, \alphav} ~~ \mathbb{E}_{\mathbf{z}} \left[\frac{1}{D} \sum_{i=1}^D \mathcal{L}(f(\mathbf{x}_i;\tilde{\thetav}),\mathbf{y}_i) \right] + g(\lambdav, \alphav),\\
\label{eq:lagterm}
g(\lambdav, \alphav) = \lambda_1 (s(\alphav)-t) + \lambda_2 (s(\alphav) - t)^2,
\end{align}
where $\lambda_1, \lambda_2 \in \mathbb{R}$ are two Lagrange multipliers that are jointly trained with other parameters.
Once the game reaches equilibrium, the equality constraint will be satisfied. Hence, we can precisely control the sparsity of the pruned model. To facilitate training, we linearly increase the target sparsity $t$ from zero to the desired value.
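A minimal sketch (ours) of the penalty in Eq.~\eqref{eq:lagterm} and of how the multipliers enter the adversarial game is:
\begin{verbatim}
# Sketch: augmented-Lagrangian sparsity penalty g(lambda, alpha).
import torch

lambda1 = torch.zeros(1, requires_grad=True)   # Lagrange multipliers,
lambda2 = torch.zeros(1, requires_grad=True)   # trained by gradient ascent

def sparsity_penalty(current_sparsity, target_sparsity):
    gap = current_sparsity - target_sparsity
    return lambda1 * gap + lambda2 * gap ** 2

# In training, the model and gate parameters minimize (task loss + penalty),
# while the multipliers maximize it, e.g. via a separate optimizer with the
# sign of their gradients flipped; the target sparsity is ramped up linearly.
\end{verbatim}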
\subsection{Structured pruning of Transformer layers}
\label{subsec:prune-transformer}
A Transformer~\cite{transformer} layer consists of a multi-head self-attention (MHA) block and a position-wise feed-forward network (FFN). We consider three pruning units, i.e., attention heads (12 per layer), intermediate size of FFN (3072 per layer), and the model's hidden size (768).
We define a gate for each pruning unit. Given an input sequence $\mathbf{X}\in \mathbb{R}^{T\times d}$ of length $T$ and feature size $d$, the MHA and FFN at a particular layer are the following:
\begin{align}
\mathrm{MHA}(\mathbf{X}) &= \sum_{k=1}^{h} (z^\text{head}_k \cdot \text{ATT}(\mathbf{X}; \mathbf{W}^\text{att}_k) ), \\
\text{FFN}(\mathbf{X}) &= \text{GeLU}(\mathbf{X}\mathbf{W}^\text{ffn}_1)\cdot \text{diag}(\mathbf{z}^\text{int})\cdot \mathbf{W}^\text{ffn}_2,
\end{align}
where $\text{ATT}(\cdot; \mathbf{W}^\text{att}_k)$ denotes the $k$-th attention head parameterized by $\mathbf{W}^\text{att}_k$, and $z_k^\text{head}$ is a scalar gate. There are $h$ heads in total. $\mathbf{z}^\text{int}$ is a $d^\text{int}$-dimensional gate for the FFN intermediate size. $\text{diag}(\cdot)$ creates a diagonal matrix with its argument vector on the diagonal. GeLU is an activation~\cite{gelu}. FFN has two linear layers $\mathbf{W}^\text{ffn}_1 \in \mathbb{R}^{d\times d^\text{int}}, \mathbf{W}^\text{ffn}_2 \in \mathbb{R}^{d^\text{int}\times d}$. Each Transformer layer has its own gates and their parameters are independent. For the main hidden size, we define a gate $\mathbf{z}^\text{hid}$ of size $d$ and share it across layers as in~\cite{cofi}.
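To make the gating concrete, here is a hedged PyTorch sketch of a gated FFN block corresponding to the FFN equation above; multiplying the hidden activations elementwise by $\mathbf{z}^\text{int}$ is equivalent to inserting $\text{diag}(\mathbf{z}^\text{int})$. The module layout and names are our assumptions rather than the exact implementation. Attention heads are gated analogously by scaling each head's output with its scalar gate before summation.
\begin{verbatim}
import torch
import torch.nn as nn

class GatedFFN(nn.Module):
    """Position-wise FFN with a per-dimension intermediate gate z_int."""
    def __init__(self, d=768, d_int=3072):
        super().__init__()
        self.w1 = nn.Linear(d, d_int)
        self.w2 = nn.Linear(d_int, d)
        self.act = nn.GELU()

    def forward(self, x, z_int):
        # x: (T, d); z_int: (d_int,) sampled from the Hard Concrete distribution
        h = self.act(self.w1(x))   # (T, d_int)
        h = h * z_int              # elementwise gate = diag(z_int)
        return self.w2(h)          # (T, d)
\end{verbatim}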
\section{Proposed Methods}
\subsection{Joint pruning based on the model size}
\label{subsec:joint-size}
As introduced in Sec.~\ref{sec:intro}, the convolutional feature extractor (CNN) in SSL models is small in size but heavy in computation. To optimize the overall computation, we propose to jointly prune the CNN and Transformer. We have introduced the pruning units for Transformer in Sec.~\ref{subsec:prune-transformer}. For CNN, we prune convolution channels by introducing gate variables for every channel in every CNN layer, i.e., each output channel is multiplied with a gate. To train the model using Eq.~\eqref{eq:minimax}, we need to define the model sparsity $s(\alphav)$. Our first proposed method is \textbf{\method-Size\xspace} (\method based on the overall model size\xspace), which can be viewed as a direct extension from prior work~\cite{asapp-pruning, cofi}. Specifically, given the current distribution parameters $\alphav$, we can calculate the probability of each gate being nonzero (i.e., the corresponding module is kept) as in Eq.~\eqref{eq:expected-norm}. We then know the current sizes of all modules, including the model's hidden size, CNN channels, attention heads, and FFN intermediate sizes. Based on these sizes, we can compute the percentage of parameters that are pruned, which is the overall size sparsity $s^\text{all}_\text{size}(\alphav)$.
However, Sec.~\ref{subsec:main-results} shows that this approach does not work well in practice, because the CNN has much fewer parameters than the Transformer. If we simply set an overall sparsity, parameters will be pruned mainly from Transformer. To solve this problem, we propose the second method, i.e., \textbf{\method-SepSize\xspace} (\method based on separate model sizes\xspace). We calculate the size sparsity separately for CNN ($s^\text{cnn}_\text{size}(\alphav)$) and Transformer ($s^\text{trans}_\text{size}(\alphav)$). We also specify separate target sparsities $t^\text{cnn}_\text{size}, t^\text{trans}_\text{size}$ and extend Eq.~\eqref{eq:lagterm} to have two terms:
\begin{align}
\label{eq:prune-sep-size}
\begin{split}
g_\text{size} &= \lambda_1^\text{cnn} (s^\text{cnn}_\text{size}(\alphav) - t^\text{cnn}_\text{size}) + \lambda_2^\text{cnn} (s^\text{cnn}_\text{size}(\alphav) - t^\text{cnn}_\text{size})^2 \\
&+ \lambda_1^\text{trans} (s^\text{trans}_\text{size}(\alphav) - t^\text{trans}_\text{size}) + \lambda_2^\text{trans} (s^\text{trans}_\text{size}(\alphav) - t^\text{trans}_\text{size})^2.
\end{split}
\end{align}
As shown in Sec.~\ref{subsec:main-results}, this method achieves strong performance. However, it requires careful tuning of the separate target sparsities. We always need to search over the two sparsities to meet a particular budget, which is computationally expensive.
\subsection{Joint pruning based on the MACs}
\label{subsec:prune-macs}
The third method we propose is \textbf{\method-MAC\xspace} (\method based on the overall MACs\xspace). Unlike prior methods, we prune the entire model to directly meet a computational budget measured by MACs. We follow the formulas used in the DeepSpeed flops profiler to calculate MACs.~\footnote{\url{https://github.com/microsoft/DeepSpeed}}
For an input sequence of length $T$ and hidden size $d$, the MACs for each MHA and FFN block are as follows:
\begin{align}
\text{MAC}^\text{mha} &= 4Thdd^\text{head} + 2T^2hd^\text{head},\\
\text{MAC}^\text{ffn} &= 2Tdd^\text{int},
\end{align}
where $h$ is the number of attention heads and $d^\text{head}$ is the size per head. $d^\text{int}$ denotes the intermediate size of FFN.
The MACs of a 1-D convolution can be computed by
\begin{align}
\text{MAC}^\text{cnn} = T^\text{out} C^\text{out} C^\text{in} K,
\end{align}
where $T^\text{out}$ is the output length and $K$ is the kernel size. $C^\text{in}$ and $C^\text{out}$ are the input and output channels, respectively. Note that $h, d, d^\text{int}, C^\text{in}, C^\text{out}$ are calculated from the current gate distributions (similar to Eq.~\eqref{eq:expected-norm}). They are differentiable functions of $\alphav$.
We define the percentage of MACs that are pruned as the MACs-based sparsity $s^\text{all}_\text{macs}(\alphav)$.~\footnote{The computation of MACs also depends on the sequence length $T$, because MHA has quadratic complexity w.r.t. $T$. We use 10 seconds to compute MACs in our experiments. This is a ``virtual'' length used only for computing MACs. We do not modify any training utterances.} It is differentiable w.r.t. parameters $\alphav$. Hence, we can still train the model using Eq.~\eqref{eq:minimax}.
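For reference, the MAC formulas above can be computed with a few lines of Python. In the actual objective, the quantities $h$, $d$, $d^\text{int}$, $C^\text{in}$, $C^\text{out}$ would be the differentiable expected values derived from the gate probabilities, whereas here they are taken as plain numbers; the example sequence length is likewise only an assumption for illustration.
\begin{verbatim}
def mac_mha(T, h, d, d_head):
    # 4*T*h*d*d_head for the Q/K/V/output projections, 2*T^2*h*d_head for attention.
    return 4 * T * h * d * d_head + 2 * T * T * h * d_head

def mac_ffn(T, d, d_int):
    return 2 * T * d * d_int

def mac_conv1d(T_out, C_out, C_in, K):
    return T_out * C_out * C_in * K

# Example: one unpruned wav2vec2-base Transformer layer, assuming roughly
# 500 frames for a 10-second input (the exact frame count is an assumption).
layer_macs = mac_mha(500, 12, 768, 64) + mac_ffn(500, 768, 3072)
\end{verbatim}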
\section{Experiments}
\subsection{Experimental setup}
We focus on task-specific structured pruning of SSL speech models. We mainly prune wav2vec2-base, but we also show that our methods can be directly applied to HuBERT-base in Sec.~\ref{subsec:other-compression}. We conduct experiments using PyTorch~\cite{pytorch} and HuggingFace's transformers~\cite{huggingface-transformers}. Our implementation of the basic pruning algorithm is based on prior work in NLP~\cite{cofi}. For each task, we add a linear layer on top of the pre-trained SSL model and fine-tune the entire model to obtain an unpruned model. Then, we prune this fine-tuned model to reach a specific sparsity using Eq.~\eqref{eq:minimax}. We employ an AdamW optimizer and a linear learning rate scheduler for all experiments.
\textbf{ASR}: The 100-hour clean set of LibriSpeech~\cite{librispeech-corpus} is utilized. In Sec.~\ref{subsec:robustness}, the Tedlium~\cite{tedlium-corpus} test set is used as out-of-domain data to demonstrate the robustness of structured pruning. The training loss is CTC~\cite{ctc}. We fine-tune a pre-trained model for 25 epochs and prune for 30 epochs with a learning rate of 1.5e-4 and a batch size of 64. The target sparsity is linearly increased to the desired value during the first 5 epochs. The learning rate of $\alphav$ and $\lambdav$ is selected from $\{0.02, 0.05\}$. The pruned model is fine-tuned for another 10 epochs with a learning rate of 5e-5. The learning rate warmup steps are 3k, 3k, and 1k for training, pruning, and fine-tuning, respectively.
\textbf{SLU}: The SLURP corpus~\cite{slurp-corpus} is used for intent classification. A pre-trained SSL model is fine-tuned for 50 epochs and pruned for 50 epochs with a learning rate of 1e-4 and a batch size of 80. The final fine-tuning has 10 epochs with a learning rate of 1e-5. The learning rate warmup is performed for 4k, 4k, 1k steps for training, pruning, and fine-tuning, respectively. Other configs are the same as ASR.
\input{figures/asr.tex}
\input{figures/slu.tex}
\input{figures/sep-size-combined.tex}
\input{figures/tedlium.tex}
\input{figures/arch-all.tex}
\input{figures/hubert.tex}
\subsection{Main results}
\label{subsec:main-results}
Fig.~\ref{fig:asr-main} compares various pruning methods for LibriSpeech ASR. The unpruned model has good performance (5.77\% WER) but is computationally expensive (74.4 GMACs). At a low sparsity ($>$55 GMACs), all pruned models achieve similar WERs which are even better than the original result, because the pruning target can regularize the training. As the sparsity increases, the baseline method which only prunes Transformer drastically degrades. Our proposed three algorithms which jointly prune CNN and Transformer consistently outperform the baseline by a large margin. We can reduce over 40\% of the total computation without degradation in WER.
\method-MAC\xspace performs similarly to \method-SepSize\xspace, and both outperform \method-Size\xspace. This is because the CNN has far fewer parameters than the Transformer: if we simply set an overall size sparsity, the pruned parameters come mainly from the Transformer, while the CNN still carries a high computational overhead. To prune the two components based on separate sizes (Eq.~\eqref{eq:prune-sep-size}), we have to search for the best combination of the two target sparsities. This model selection procedure is presented in Fig.~\ref{fig:asr-sep-size}, where we perform a grid search and select the Pareto frontiers; it requires much more computation than the other methods. Hence, \method-MAC\xspace is arguably the best method in terms of both performance and complexity.
Fig.~\ref{fig:slu-main} shows the results of intent classification on SLURP. The overall trend is very similar to that of ASR. Our joint pruning methods outperform the baseline by a large margin, especially at a high sparsity (low MACs). \method-SepSize\xspace is comparable with \method-MAC\xspace, but again, it requires a grid search over the two target sparsities as shown in Fig.~\ref{fig:slu-sep-size}. This high complexity hinders its usage in practice. Compared to ASR, we can achieve a higher compression rate (over 55\%) without loss in accuracy. This is probably because the classification task is easier and thus requires less information than the sequence-to-sequence task.
\subsection{Robustness of structured pruning}
\label{subsec:robustness}
To investigate the robustness of the proposed structured pruning methods, we test the ASR models using an \textit{out-of-domain} corpus, TED-LIUM~\cite{tedlium-corpus}. Note that these models are trained only with LibriSpeech data. As shown in Fig.~\ref{fig:robustness}, again, our joint pruning methods consistently outperform the baseline, and the trend is very similar to that of the in-domain evaluation (see Fig.~\ref{fig:asr-main}). This demonstrates that our pruning methods are robust.
\subsection{Architectures of pruned models}
Fig.~\ref{fig:pruned-arch} shows the remaining CNN channels, attention heads and FFN intermediate sizes after \method-MAC\xspace. The target sparsity ranges from 10\% to 40\%. For CNN, the sequence length gradually decreases due to downsampling. The first few layers have higher computational cost, so they tend to be pruned more. For MHA and FFN, the upper layers are pruned the most, indicating that upper layers are more redundant. Prior studies had similar observations by analyzing the self-attention patterns in speech encoders~\cite{diagnality, branchformer, maekaku22interspeech}. The overall trend is also consistent with a prior work in NLP~\cite{cofi}.
\subsection{Comparison with other compression methods}
\label{subsec:other-compression}
As introduced in Sec.~\ref{subsec:ssl-models}, HJ-Pruning\xspace can be directly applied to other SSL models. In Fig.~\ref{fig:hubert}, we prune the HuBERT-base model based on the overall MACs for ASR. The performance is similar to that of the wav2vec2. We also include other compressed models for comparison, including DistilHuBERT~\cite{distilhubert} and LightHuBERT~\cite{lighthubert}.
Note that these results are not directly comparable, for two reasons: (1) their WERs are from SUPERB~\cite{superb}, which combines a frozen SSL model with an additional learnable RNN. We also tried replacing the RNN with a single linear layer and fine-tuning the entire model (the same as our setting), but the performance was clearly worse. (2) Their compressed models are first distilled using the 960h unlabeled LibriSpeech data and then fine-tuned on the 100h labeled data, whereas our task-specific pruning \textit{only} utilizes the 100h data. This comparison shows that our task-specific pruning method is highly effective.
\section{Conclusion}
In this paper, we propose HJ-Pruning\xspace to jointly prune heterogeneous components of SSL speech models, which achieves strong performance-efficiency tradeoffs compared to several baselines.
At a small sparsity (0.1 to 0.3), HJ-Pruning\xspace improves the wav2vec2 baseline while being faster.
Depending on the task, HJ-Pruning\xspace saves 40\% or 50\% MACs while maintaining the performance of wav2vec2.
HJ-Pruning\xspace is a general method that can be applied to most speech SSL models, such as HuBERT.
In the future, we plan to explore the application of HJ-Pruning\xspace on encoder-decoder SSL models~\cite{wu2022wav2seq} and other SLU tasks~\cite{lugosch2019speech,slue}.
|
{
"arxiv_id": "2302.14179",
"language": "en",
"timestamp": "2023-03-01T02:03:55",
"url": "https://arxiv.org/abs/2302.14179",
"yymm": "2302"
} | \section{Introduction}
\IEEEPARstart{B}{eing} able to accurately measure the performance of an algorithm is fundamental to algorithm comparison and thus to scientific progress in algorithm design. For single-objective problems, this is relatively straightforward: one can simply compare the performances of the solutions returned by the algorithms. For multi-objective problems this is more challenging, as the algorithm returns a set of Pareto-optimal solutions, with different trade-offs regarding the underlying objectives. Researchers have developed a range of performance metrics for deterministic multi-objective problems, with the Hypervolume (HV), the Inverted Generational Distance (IGD) and the R2 metric being the most popular ones. These are all unary metrics which aggregate the performance of a set of solutions in a single value, capturing aspects such as closeness to the Pareto frontier, spread, and uniformity of the distribution of solutions along the Pareto frontier.
Measuring performance in a noisy single-objective problem is straightforward - one can simply use the true (noiseless) fitness of the solution returned by the algorithm. This is readily available for most artificial benchmark problems and can be estimated to arbitrary precision by averaging over a sufficiently large number of evaluations otherwise.
\begin{figure}
\centering
\subfloat[Error by exclusion]{
\includegraphics[width=0.3\textwidth]{exclusion.pdf}
}\\
\subfloat[Error by inclusion]{
\includegraphics[width=0.3\textwidth]{inclusion.pdf}
}\\
\subfloat[Error by selection]{
\includegraphics[width=0.3\textwidth]{selection.pdf}
}
\caption{Three possible errors due to noise. Black dots are the true objective function values of the solutions, the diamonds are the perceived objective function values. (a) The orange solution is erroneously excluded because it appears dominated while it is not.
(b) The orange solution is erroneously included as Pareto optimal even though it is not. (c) A decision maker with a utility function represented by the orange line would erroneously select solution A even though the true utility of solution B would be higher.}
\label{fig:errors}
\end{figure}
In multi-objective problems however, noise is more difficult to deal with.
As has already been argued by \cite{branke2007efficient}, in a setting with noise, simply using the true (noiseless) fitness values of the returned solution set would not be sufficient, because it ignores the fact that the decision maker (DM) is presented with a set of noisy and thus mis-estimated objective values for each solution, and thus there is the additional risk that the solution they pick, i.e., the solution that \emph{appears} to be the one with the highest utility, actually has a lower true utility than another solution in the set. In other words, the DM is misguided by the estimation errors and may therefore pick an inferior solution from the returned set. In fact, they might even pick a dominated solution over its dominating solution.
Thus it is clear that in a multi-objective setting under noise, the mis-estimation of the fitness values and the consequential selection errors of the DM need to be taken into account when measuring the performance of an algorithm.
This paper proposes two such metrics.
\section{Background and Literature Review}
Comparing multi-objective optimisation algorithms requires comparing sets of solutions, and various aspects are relevant, such as the closeness of the solutions to the Pareto front, the range of solutions found along each objective, and the uniformity of spread of the solutions. Various performance measures have been proposed, with the most prominent being the Hypervolume (HV) \cite{zitzler2003performance}, the Inverse Generational Distance ($IGD$) \cite{coello2004study} with its enhanced version $IGD+$
\cite{ishibuchi2015modified} and R2 \cite{hansen1994evaluating}.
A metric is called Pareto-compliant if, whenever an approximation set $A$ dominates another approximation set $B$, the metric yields a strictly better quality value for the former than for the latter set. So far, only two performance metrics are known to be Pareto-compliant: HV and $IGD^+$.
While there are many papers on multi-objective optimisation under noise (e.g., \cite{syberfeldt2010evolutionary,goh2007investigation,fieldsend2014rolling,salomon2016toolkit,teich2001pareto}), there is much less discussion on performance metrics for such problems.
Most papers in the area of noisy multi-objective optimisation just use standard performance metrics such as HV or IGD but apply them to the true fitness values of the returned set of solutions, see, e.g., \cite{chen2015performance,eskandari2009evolutionary,fieldsend2014rolling}.
Hunter et al.~\cite{hunter2019introduction} argue that there are two types of error in noisy multi-objective problems: error by inclusion (a dominated solution is falsely included in the Pareto front) and error by exclusion (a non-dominated solution is falsely excluded and not returned to the DM). But even if all non-dominated solutions have been correctly identified, the DM may pick a less preferred solution because they can only make their decision based on the returned noisy objective values and not based on the true objective values. See Figure~\ref{fig:errors} for an illustration of these errors.
\cite{chen2015performance} propose to measure the percentage of returned solutions that are still non-dominated based on their true (undisturbed) objective values. \cite{fieldsend2014rolling} additionally propose to measure the "noise misinformation", which they define as the average distance between the returned solutions' predicted and true fitness values.
In the context of multi-objective ranking and selection, \cite{branke2019identifying} proposed to optimise a metric called hypervolume difference, motivated by the challenge for a DM to pick the correct solution in the end.
\section{Suggested Performance Metrics}
We propose two alternative performance measures, one based on IGD, the other based on the R2 metric. The basic idea is that it is necessary to simulate the DM picking a solution based on the solution set and estimated function values returned by the algorithm, then evaluating the true (noiseless) utility of this solution.
Let us assume the algorithm returns a solution set $S=\{s_1,\ldots,s_n\}$ with true objective values $T=\{t_1,\ldots,t_n\}$ and estimated objective values $R=\{r_1,\ldots,r_n\}$, where $t_i=(f^1(s_i),\ldots, f^D(s_i))$ and $r_i=(\hat f^1(s_i),\ldots, \hat f^D(s_i))$.
\subsection{R2 Metric for Noisy Problems}
The R2 metric requires a parameterised utility function $U(f(x),\lambda)$ and a probability distribution for the parameter $\lambda$. The simplest form is a linear combination of objectives, i.e., $U(f(x),\lambda)=\sum_{i=1}^D\lambda_i f^i(x)$. However, the concept applies equally to other utility functions such as the Tchebycheff utility or the Cobb-Douglas utility.
To calculate the R2 metric for a minimisation problem, for a set of sampled utility functions with parameters $\Lambda=\{\lambda_1,\ldots,\lambda_m\}$, the $R2$ metric
can be calculated as
\begin{eqnarray}
R2(T,\Lambda)=\frac{1}{m}\sum_{i=1}^m \min_j\{U(t_j,\lambda_i)\},
\end{eqnarray}
i.e., for every sampled utility function, we count in the sum the utility of the best solution.
To adapt this to the noisy case, we propose the following new metric
\begin{eqnarray}
nR2(R,\Lambda)=\frac{1}{m}\sum_{i=1}^m U(t_{j(i)},\lambda_i)
\end{eqnarray}
where $j(i)$ is the index of the solution $s_j$ with the smallest perceived utility based on the estimated fitness values, i.e.,
\begin{eqnarray}
j(i)=\argmin_j U(r_j,\lambda_i).
\end{eqnarray}
Note that for 2 objectives and linear utility functions, the R2 metric can be calculated analytically for a continuous probability distribution for $\lambda$, rather than approximating it via Monte Carlo sampling over $\lambda$.
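A small numerical sketch of the proposed $nR2$ metric with linear utilities may help to clarify the definition. The arrays $R$ and $T$ below hold the estimated and true objective vectors of the same solutions (in the same order), $\Lambda$ holds the sampled weight vectors, and minimisation is assumed; the names are ours and this is an illustration rather than reference code.
\begin{verbatim}
import numpy as np

def nR2(R, T, Lambdas):
    """Monte Carlo estimate of nR2 for linear utilities U(f, lam) = lam . f."""
    R, T, Lambdas = np.asarray(R), np.asarray(T), np.asarray(Lambdas)
    total = 0.0
    for lam in Lambdas:
        j = np.argmin(R @ lam)   # solution the DM would pick (perceived best utility)
        total += T[j] @ lam      # ... but scored with its true utility
    return total / len(Lambdas)

# The standard R2 metric on true values is recovered as nR2(T, T, Lambdas).
\end{verbatim}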
\subsection{Inverted Generational Distance for Noisy Problems}
The $IGD$ measure uses an approximation $A=\{a_1,\ldots,a_m\}$ of the target Pareto front and then, for each point $a\in A$, calculates the (usually Euclidean) distance to the closest point in the solution set.
This can be put into the proposed framework where different DMs are represented by the solutions in $A$. For each $a\in A$, the DM would pick the closest solution according to the estimated function values. Then, the distance of this solution's true fitness values to the desired solution $a$ is used in the calculation of $IGD$.
Let $d(a,r)$ denote the distance in objective space between two points $a$ and $r$. Then, the proposed $nIGD$ metric of a returned solution set $S$ is defined as follows:
\begin{eqnarray}
nIGD(R,A)=\sum_{i=1}^m d(a_i,t_{j(i)})
\end{eqnarray}
where $j(i)$ is the index of the solution $s$ whose estimated function values are closest to point $a_i$, i.e.,
\begin{eqnarray}
j(i)=\argmin_j d(a_i,r_j).
\end{eqnarray}
It is straightforward to adapt this from IGD to $IGD^+$ \cite{ishibuchi2015modified}, simply by adjusting the distance calculation and replacing $d(a_i,t_{j(i)})$ by
\begin{eqnarray}
d_{IGD^+}(a_i, t_{j(i)})= \sqrt{\sum_{k=1}^D (\max\{t^k_{j(i)}-a_i^k,0\})^2}
\end{eqnarray}
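Analogously, a short sketch of $nIGD$ and its $IGD^+$ variant is given below; $A$ contains the target points, $R$ and $T$ the estimated and true objective vectors of the returned solutions, the perceived-closest solution is selected using the Euclidean distance as in the definition above, and minimisation is assumed. This is an illustrative implementation under those assumptions, not reference code.
\begin{verbatim}
import numpy as np

def nIGD(A, R, T):
    A, R, T = np.asarray(A), np.asarray(R), np.asarray(T)
    total = 0.0
    for a in A:
        j = np.argmin(np.linalg.norm(R - a, axis=1))  # DM picks perceived-closest solution
        total += np.linalg.norm(T[j] - a)             # ... scored against its true values
    return total

def nIGDplus(A, R, T):
    A, R, T = np.asarray(A), np.asarray(R), np.asarray(T)
    total = 0.0
    for a in A:
        j = np.argmin(np.linalg.norm(R - a, axis=1))
        total += np.sqrt(np.sum(np.maximum(T[j] - a, 0.0) ** 2))  # IGD+ distance
    return total
\end{verbatim}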
\section{Example\label{sec:example}}
In this section, we go through a simple example, demonstrating why the standard $IGD^+$ and R2 metrics are not suitable and explaining the difference to the proposed new metrics $nIGD^+$ and $nR2$.
Figure~\ref{fig:data} shows the underlying data used: The true Pareto front (blue curve), the true quality of the solution set (blue circles), the observed quality of the solution set (red circles) and the target solutions as used by $IGD^+$ and $nIGD^+$. The true and observed qualities of each solution are connected by a line. As can be seen, the two observations at the lower end appear to be better than the true Pareto front.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{data.pdf}
\caption{Observed and true quality of a set of solutions, together with true Pareto front and target values for $IGD$ calculations.}
\label{fig:data}
\end{figure}
\begin{figure}
\centering
\subfloat[R2 metric]{
\includegraphics[width=0.4\textwidth]{R2.pdf}
}\\
\subfloat[nR2 metric]{
\includegraphics[width=0.4\textwidth]{noisyR2.pdf}
}
\caption{Visualisation of (a) the R2 metric and (b) the nR2 metric.}
\label{fig:R2}
\end{figure}
Figure~\ref{fig:R2} (a) visualises how the standard R2 metric would be calculated, using the observed solution qualities. Only four of the observed solutions are on the convex hull and will be taken into account. The short blue lines indicate the slopes of the utility functions that would favour this particular observed solution. The nR2 metric instead uses the utilities of the true values of these very same solutions, see Figure~\ref{fig:R2} (b). Even though solution "A" is clearly non-dominated with respect to the true solution qualities, it has not been observed as such, and thus is not part of the calculation.
\begin{figure}
\centering
\subfloat[R2 metric Chebycheff]{
\includegraphics[width=0.4\textwidth]{chebycheff7.pdf}
}\\
\subfloat[nR2 metric Chebycheff]{
\includegraphics[width=0.4\textwidth]{nchebycheff7.pdf}
}
\caption{Visualisation of (a) the R2 metric and (b) the nR2 metric when using Chebycheff scalarisation.}
\label{fig:chebycheff}
\end{figure}
The $IGD^+$ metric is similarly visualised in Figure~\ref{fig:igd}. Part~(a) highlights which pairs of target and observation are taken into account for the calculation, and Part~(b) shows the same for target and true quality values as in the proposed $nIGD^+$ metric.
Where the target solution is not connected to the nearest true solution quality (clearly visible where lines cross), the DM would pick the wrong solution based on the (noisy) observations.
When using Chebycheff scalarisation, the results are visualised in Figure~\ref{fig:chebycheff}. The rays represent the different weightings for the Chebycheff scalarisation, and each is connected to the solution that the DM would have picked.
\begin{figure}
\centering
\subfloat[$IGD^+$ metric]{
\includegraphics[width=0.4\textwidth]{IGDplus.pdf}
}\\
\subfloat[$nIGD^+$ metric]{
\includegraphics[width=0.4\textwidth]{noisyIGDplus.pdf}
}
\caption{Visualisation of (a) the $IGD^+$ metric and (b) the $nIGD^+$ metric.}
\label{fig:igd}
\end{figure}
Intuitively, an algorithm that is able to reduce the noise and provide more accurate predictions of a solution's true quality should be considered superior to an algorithm that reports back observed solution qualities that are far from their true qualities.
To demonstrate that the new proposed metrics have this property, we run a test where the true solution qualities are disturbed by uniformly distributed noise in $[-\eta,\eta]^2$. The examples above show a case for $\eta=0.1$.
Table~\ref{tab:noiseImpact} reports on the computed metrics for different $\eta$ values. While the standard R2 and $IGD^+$ metric reward noise as some of the solutions appear more favourable than they are, the new nR2 and $nIGD^+$ metrics appropriately evaluate the scenarios with less noise as better.
As the noise approaches zero, the two metrics become identical.
When simply using the true objective function values, as is often done in the literature, the noise has no impact at all, so it does not pay off for an algorithm to improve the accuracy of the solutions' predicted objective values.
\begin{table}[h]
\caption{Different metrics tested with different levels of noise $\eta$. Lower values are better in all cases. R2 is the R2 metric with linear utility functions, R2c is the R2 metric with Chebycheff utility function.\label{tab:noiseImpact}}
\centering
\begin{tabular}{c|cccccc}
$\eta$&R2&nR2&R2c&nR2c&$IGD^+$&$nIGD^+$ \\\hline
0.01&12.7475 & 12.9576 & 1.1083 &1.1148& 6.4231 & 6.4411\\
0.05&11.6563 & 13.0960 & 1.0562 & 1.1689 & 6.1052 & 6.8660\\
0.1&10.1826 &13.1876 & 0.9720 & 1.2282 & 5.5193 & 8.1651\\
0.2& 7.0602 & 13.4119 & 0.8372 & 1.3339 & 4.2270 & 8.3205\\\hline
\end{tabular}
\end{table}
\section{Discussion}
As explained above, in noisy multi-objective optimisation it is important to take into account the selection error of the DM due to wrongly predicted objective function values. This is straightforward for the R2 metric, because this metric samples from DM utility functions and thus the selection process of the DM can be directly modelled. A prerequisite for this metric, however, is that we can identify a probability distribution over the parameters of a representative utility function model. Using a linear utility function also only rewards solutions that are predicted to be on the convex hull.
It is also possible based on the $IGD$ metric, if one assumes that the target approximation of the Pareto optimal front represents different possible DM preferences, and a DM with a particular target solution would select the closest predicted solution. While the DM model (picking the closest predicted solution) seems straightforward and natural, the $IGD$ metric requires an approximation of the true Pareto frontier, which obviously is not available for real-world problems.
It is less obvious but interesting for future work to also find an adaptation of the popular HV metric.
Note that the proposed metrics can compare solution sets of different sizes, so an interesting question for an algorithm designer is how many solutions perceived as Pareto-optimal the algorithm should return. It does not make sense to return a solution that is dominated by another solution in the returned set, as this dominated solution would never enter the calculation. However, it can be beneficial to, for example, remove even predicted non-dominated solutions from the set if their predicted objective function values are very uncertain, as their true values may turn out much worse than predicted, deteriorating the performance metric.
\section{Conclusion}
We have pointed out that when evaluating an algorithm for noisy multi-objective optimisation, one should take into account the utility loss for a DM who may pick an inferior solution because the algorithm didn't accurately predict the quality of the identified solutions. Based on this observation, we proposed modifications of the well-known R2 and $IGD/IGD^+$ metrics and demonstrated with an example that they have the desired property.
As future work, it may be worthwhile to also develop an adaptation of the popular hypervolume metric to the noisy case. Also, it would be helpful to have a performance metric that doesn't rely on knowing the solutions' true objective function values.
\bibliographystyle{IEEEtran}
|
{
"arxiv_id": "2302.14138",
"language": "en",
"timestamp": "2023-03-01T02:02:22",
"url": "https://arxiv.org/abs/2302.14138",
"yymm": "2302"
} |
\section{Introduction}
Self-supervision has demonstrated undoubted power in learning strong visual representation, with two mainstream representative methods: Contrastive learning (CL)~\citep{simclr,moco,mocov2, mocov3,grill2020bootstrap,caron2021emerging}, and Mask Image Modeling (MIM)~\citep{bao2021beit,he2021masked,xie2022simmim,dong2021peco,dong2022bootstrapped}. The two methods follow different mechanisms, and often manifest different strengths. Generally, CL performs the instance-level task that pulls augmented views from the same image to be similar while pushing different images to distribute diversely, making it versatile at learning semantic-aware clustering structures across images. In contrast, MIM draws inspiration from BERT~\citep{devlin2018bert} and performs masked token or pixel reconstruction that facilitates the learning of rich local structures within the same image. In particular, although the latter one, MIM, has recently surpassed CL on the fine-tuning performance of many datasets, CL often remains to be a top competitor in data-scarce, few-shot downstream applications~\citep{simclrv2,mocov2,tian2020rethinking}.
A natural question then follows: \textit{are CL and MIM indeed complementary to each other, and is there a way to best combine their strengths?} One immediate, conceptually simple idea is to resort to multi-task learning (MTL) and jointly optimize the two losses on top of the same backbone. Unfortunately, our preliminary experiment (see Section~\ref{sec:conflict}) shows that such a vanilla combination fails to improve over either baseline, in fact often compromising the single-loss performance. A deeper dive reveals that the two losses, when optimized together, incur increasingly severe conflicts in the gradient directions as the layers go deeper (see Figure~\ref{fig:gradientDist}). This creates considerable hurdles for the (pre-)training to proceed effectively.
We are then inspired to ask: \textit{if the two losses conflict when both are placed at the end, how about placing them differently, such as appending them to different layers?} Based on experimental observations, it appears that lower layers tend to learn better from the MIM loss in order to capture local spatial details, while higher layers tend to benefit more from the CL loss in order to learn semantically-aware grouping and invariance. Motivated by this, we propose a simple MIM$\rightarrow$CL Grafting idea to combine the best of both worlds: (step i) first training the lower layers with the MIM loss and fixing their weights, on top of which (step ii) the higher-layer weights continue to be trained under the CL loss. This simple cascaded training idea neatly separates the MIM and CL losses, avoiding the conflicts that arise when they are placed together; each loss is also strategically placed to pre-train its most suitable portion of the network. Practically, we ``smooth out'' the grafting by allowing the lower layers to be slowly tuned in step ii. Our ablation experiments also find that \textbf{the order of grafting matters}, i.e., reversing the MIM/CL loss locations and performing CL$\rightarrow$MIM considerably damages the performance. The contributions of this paper are summarized as follows:
\begin{itemize}
\item We propose Layer Grafted Pre-training, a principled framework to merge MIM and CL, improving representation learning beyond both, with no bells and whistles.
\item We investigate the different preferences of lower and higher layers towards CL and MIM losses, and show the order of grafting to matter.
\item Despite its embarrassing simplicity, the proposed Layer Grafted Pre-training demonstrates more desirable representation quality, and consequently superior label efficiency in downstream applications,
yielding strong few-shot performance besides linear evaluation. For example, we achieve [65.5\%, {77.8\%}, {77.7\%}] in terms of [1\% few-shot, 10\% few-shot, linear evaluation] performance, improving over MIM and CL baselines by [14.4\%, {4.5\%}, {9.7\%}] and [2.1\%, {2.4\%}, {1.0\%}], respectively.
\end{itemize}
\section{Related Works}
\label{related}
\textbf{Contrastive Learning.}
CL performs instance classification tasks by contrasting positive pairs against negative pairs~\citep{simclr,moco,zhuang2019local,dosovitskiy2014discriminative}. Closely related works also explore learning without negative samples~\citep{grill2020bootstrap,bardes2021vicreg,caron2021emerging,zbontar2021barlow, chen2021exploring} and clustering-based approaches~\citep{caron2020unsupervised}.
One common merit of these methods is that they learn strong representations, which show a clear clustering pattern~\citep{dwibedi2021little} and lead to state-of-the-art few-shot/semi-supervised performance~\citep{simclrv2,tian2020rethinking,li2021improve,jiang2022dna,you2022mine}.
However, they rely on the implicit assumption that features should be invariant to heavy augmentations, which can lead to worse performance when a downstream task violates this assumption~\citep{xiao2020should}. The proposed Layer Grafted Pre-training addresses this by leveraging MIM to process the features of the lower layers.
\textbf{Mask Image Modeling.}
Mask Image Modeling (MIM) is inspired by the success of BERT~\citep{devlin2018bert} in Natural Language Processing (NLP). iGPT~\citep{chen2020generative} begins the exploration of this idea in Computer Vision (CV).
The emergence of ViT~\citep{dosovitskiy2020image} further shrinks the gap of backbones between CV and NLP, motivating more researchers to delve into this direction. Beit~\citep{bao2021beit,peng2022beit}, MaskFeat~\citep{wei2022masked} and Peco~\citep{dong2021peco} focus on predicting tokens. MAE~\citep{he2021masked} and simMIM~\citep{xie2022simmim} further show the possibility of directly reconstructing the original pixels. Following works ~\citep{dong2022bootstrapped,chen2022context,wang2022bevt} continue to improve the performance or extend to other modalities.
However, MIM is most successful in fine-tuning settings with sufficient data. For downstream tasks with limited data, MIM fails to surpass CL, given the lack of linear separability of its representations~\citep{tao2022siamese}. We address this drawback by employing CL to learn the semantic alignment of the higher layers.
\textbf{Bridging Contrastive Learning and Mask Image Modeling.}
Only recently have researchers begun to explore the potential of combining MIM and CL. iBoT~\citep{zhou2021ibot}, one of the pioneers in this direction, proposes to switch from modeling images to modeling features. Some concurrent works also follow this self-distillation paradigm~\citep{tao2022siamese,assran2022masked}. However, just like CL, this paradigm still relies on strong augmentations to avoid collapse, which can over-suppress some features (e.g., color)~\citep{xiao2020should}. In contrast, we employ MAE~\citep{he2021masked}, an image modeling framework that is free from strong augmentations. Besides, previous combination works treat the network as a whole, while we provide a new layer-wise perspective.
\textbf{Comparison to other multi-step pre-training tasks.} One may relate the proposed method to previous multi-step pre-training tasks such as intermediate fine-tuning (e.g., fine-tuning a MIM model on ImageNet-22k and transferring to ImageNet-1k~\citep{bao2021beit}) or self-supervised fine-tuning like~\cite{reed2022self}. The main differences lie in two aspects: (1) the key innovation of the proposed method is to reveal and utilize the layer-wise difference between MIM and CL, whereas intermediate fine-tuning and self-supervised fine-tuning treat the model as a whole; and (2) while intermediate fine-tuning and self-supervised fine-tuning are designed for the \textbf{same pretraining methods} across \textbf{different domains}, the proposed method is devised for \textbf{different pretraining methods} in the \textbf{same domain}.
\section{Method}
\subsection{Preliminary and Overview}
In Contrastive Learning (CL), the learning target is to pull the positive pairs together in the feature space while pushing negative pairs apart. Formally, the loss can be defined as:
\begin{equation}
\label{equ:CL_formula}
\mathcal{M}(v_i,v_i^+,V^-,\tau)=\frac{1}{N} \sum_{i=1}^{N}-\log \frac{\exp \left(v_i \cdot v_i^{+} / \tau\right)}{\exp \left(v_i \cdot v_i^{+} / \tau\right)+\sum_{v_i^{-} \in V^{-}} \exp \left(v_i \cdot v_i^{-} / \tau\right)}
\end{equation}
where $(v_i,v_i^{+})$ represents features of the positive pairs while $(v_i,v_i^{-})$ means features of negative pairs. Also, $V^-$ is the pool of negative features. $\tau$ denotes the temperature. $N$ is the number of samples.
In practice, the positive pairs are often different augmented views of the same image, while the negative pool is composed of all the views from different images~\citep{mocov3}.
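As a concrete reference, Eq.~\eqref{equ:CL_formula} with in-batch negatives reduces to the compact cross-entropy form sketched below; this is a simplified illustration (a single asymmetric term, an illustrative temperature) rather than the exact Moco V3 recipe.
\begin{verbatim}
import torch
import torch.nn.functional as F

def info_nce(feat_a, feat_b, tau=0.2):
    """Contrastive loss for N positive pairs; other samples in the batch are negatives."""
    feat_a = F.normalize(feat_a, dim=1)            # (N, d) features of view 1
    feat_b = F.normalize(feat_b, dim=1)            # (N, d) features of view 2
    logits = feat_a @ feat_b.t() / tau             # (N, N); diagonal entries are positives
    labels = torch.arange(feat_a.size(0), device=feat_a.device)
    return F.cross_entropy(logits, labels)
\end{verbatim}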
On the other hand, Mask Image Modeling (MIM) learns to reconstruct a corrupted image where some parts of the image or feature map are masked out. The learning target can be formulated as:
\begin{equation}
\label{equ:CL_formula_2}
\mathcal{L}(x_i, M)= \frac{1}{N} \sum_{i=1}^{N} D(d(f(Mx_i)), x_i)
\end{equation}
where $x_i$ and $M$ are input images and randomly generated masks, respectively. $f$ and $d$ represent the encoding and decoding functions, respectively. $d(f(Mx_i))$ is the generated image conditioned by masked image $Mx_i$. $D$ measures the difference between $d(f(Mx_i))$ and the original image $x_i$.
\textbf{Overview.} In the following parts of this section, we first introduce our preliminary exploration on the MTL of MIM and CL tasks in Section~\ref{sec:conflict}, which reveals the existence of the conflicting gradient direction. Afterward, in Section~\ref{sec:whySeparate}, we provide a simple separating idea towards mitigating the conflicts, which further leads to the proposed Layer Grafted Pre-training in Section~\ref{sec:howSeparate}.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{images/grad_cos_sim_boxPlot.pdf}
\caption{The box plot of $\bold{C}_{\text{MIM},\text{CL}}(x)$ across different blocks for MTL combination of MIM and CL. This is measured on training datasets when the network is trained to 100 epochs (total 300 epochs). The red dash line indicates the linear regression of median numbers. }
\label{fig:gradientDist}
\end{figure}
\subsection{Conflicts Prevent Multi-Task Learning from Working}
\label{sec:conflict}
Our first step towards answering the question of whether CL and MIM can complement each other is to examine the most straightforward and conceptually simple idea - the Multi-Task Learning (MTL) combination. Specifically, each iteration of MTL is composed of two steps. First, the images are augmented twice for computing the CL loss, following Moco V3~\cite{mocov3}. Then, a minimally augmented image is used for computing the MIM loss, following MAE~\cite{he2021masked}. The two losses share the same encoder and are added together as the final loss.
As summarized in Table~\ref{tab:prelim_compare},
MTL only yields a marginal improvement of 0.4\% on linear evaluation compared to the MIM baseline. However, it is still much lower than the CL baseline (-8.3\%). Moreover, in terms of both 1\% few-shot and fine-tuning performance, MTL is even inferior to both the MIM and CL baselines. Similar observations were also made by~\cite{wang2022repre}.
We further conjecture that the conflicts between these two targets in the MTL combination are the cause of the poor performance. To verify this, we design a gradient surgery experiment by computing the cosine similarity between the gradients of the two tasks, following~\cite{yu2020gradient}. Formally, the cosine similarity is calculated as follows:
\begin{equation}
\bold{C}_{\text{MIM},\text{CL}}(x) =\frac{\nabla_\theta L_\text{MIM}\left(x\right)^T}{\left\|\nabla_\theta L_\text{MIM}\left(x\right)\right\|} \frac{\nabla_\theta L_\text{CL}\left(x\right)}{\left\|\nabla_\theta L_\text{CL}\left(x\right)\right\|}
\end{equation}
where $L_\text{MIM}$ and $L_\text{CL}$ denote the losses for MIM and CL, respectively, and $x$ is a batch of input samples. We measure the distribution of $\bold{C}_{\text{MIM},\text{CL}}(x)$ across different layers of a pre-trained MTL model. As shown in Figure~\ref{fig:gradientDist}, negative values of $\bold{C}_{\text{MIM},\text{CL}}(x)$ always exist, for which MIM and CL are optimized in opposite directions. Moreover, the conflict varies across layers and becomes more severe as the layers go deeper.
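The measurement itself can be reproduced with a few lines of PyTorch. The sketch below assumes that the shared encoder's Transformer blocks can be iterated over and that both scalar losses are computed on the same batch; it is meant as an illustration of the procedure rather than our exact analysis code.
\begin{verbatim}
import torch
import torch.nn.functional as F

def per_block_grad_cosine(blocks, loss_mim, loss_cl):
    """Cosine similarity between MIM and CL gradients, one value per block."""
    sims = []
    for block in blocks:
        params = [p for p in block.parameters() if p.requires_grad]
        g_mim = torch.autograd.grad(loss_mim, params, retain_graph=True, allow_unused=True)
        g_cl = torch.autograd.grad(loss_cl, params, retain_graph=True, allow_unused=True)
        flat_mim = torch.cat([(g if g is not None else torch.zeros_like(p)).flatten()
                              for g, p in zip(g_mim, params)])
        flat_cl = torch.cat([(g if g is not None else torch.zeros_like(p)).flatten()
                             for g, p in zip(g_cl, params)])
        sims.append(F.cosine_similarity(flat_mim, flat_cl, dim=0).item())
    return sims
\end{verbatim}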
The conflicts are also reflected in the contradictory targets the two losses enforce. The MIM loss, for instance, requires that the reconstruction have the same brightness, color distribution, and positions as the input image; the model therefore needs to be sensitive to all these augmentations.
Conversely, CL loss is designed to ensure that the model remains invariant regardless of different augmentations.
\subsection{Addressing the Conflicts via Separating}
\label{sec:whySeparate}
Given the conflicts of the MTL combination, we ask the following question: \textit{if the two losses conflict when both are placed at the end, how about placing them differently, such as appending them to different layers?} Fortunately, recent empirical evidence suggests that lower and higher layers may favor different pre-training methods. For MIM, \cite{wang2022closer} points out that, when only the pre-trained lower layers are retained while the higher layers are reset to random initialization, most of the gain is still preserved for downstream fine-tuning tasks. Based on this observation, the lower layers appear to be a key element in MIM. On the other hand,~\cite{mocov3} finds CL to be ineffective and even unstable for training the projection layer, the earliest layer of ViT~\citep{dosovitskiy2020image}; fixing its weights at random initialization can even yield significantly higher performance. Additionally, CL excels at semantic concepts, which mostly emerge at the higher layers of the network.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\textwidth]{images/pipeline.pdf}
\end{center}
\caption{The pipelines of MIM$\rightarrow$CL Grafting, CL$\rightarrow$MIM Grafting, and Layer Grafted Pre-training. The former two are employed for the preliminary experiments. The latter is the finally adopted pipeline, which is the `smoothed-out' version of MIM$\rightarrow$CL Grafting.}
\label{fig:pipeline}
\end{figure}
\begin{table}[t]
\centering
\caption{Performance of the preliminary-study experiments on ViT-B/16. Linear, 1\% and Fine-tuning denote linear evaluation, 1\% few-shot and fine-tuning performance, respectively. The performance of MIM and CL is from MAE~\citep{he2021masked} and Moco V3~\citep{mocov3}, respectively. MTL combination denotes the Multi-Task Learning (MTL) combination of MIM and CL, which is pre-trained for 300 epochs. For step 1 of MIM$\rightarrow$CL and CL$\rightarrow$MIM Grafting, we directly adopt the pre-trained models of MAE and Moco V3, respectively. Step 2 of MIM$\rightarrow$CL and CL$\rightarrow$MIM Grafting is trained for 100 epochs.}
\label{tab:prelim_compare}
\begin{adjustbox}{max width=\columnwidth}
\begin{tabular}{@{}L{3.0cm}C{2.5cm}C{2.3cm}C{2.3cm}@{}}
\toprule
Method & Linear & 1\% & Fine-tuning \\ \midrule
MIM (MAE) & 68.0 & 51.1 & 83.6 \\
CL (Moco V3) & 76.7 & 63.4 & 83.2 \\
MTL combination & 68.4 & 47.6 & 81.0 \\
CL$\rightarrow$MIM Grafting & 65.5 & 32.5 & 82.5 \\
MIM$\rightarrow$CL Grafting & 74.5 & 56.5 & 83.6 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{table}
Driven by the above analysis, we propose a simple MIM$\rightarrow$CL Grafting framework.
As shown in Figure~\ref{fig:pipeline}, MIM$\rightarrow$CL Grafting can be separated into two steps: (step i) the lower layers are first trained with MIM and then fixed, on the top of which (step ii) higher layers continue to learn with CL. Despite the simplicity of this framework, it yields promising preliminary results as shown in Table~\ref{tab:prelim_compare}, exceeding the MTL combination by [6.1\%, 8.9\%, 2.6\%] for [linear evaluation, 1\% few-shot, Fine-tuning] performance, respectively.
In contrast, when the order of the two tasks is reversed, the resulting CL$\rightarrow$MIM Grafting suffers a dramatic drop in performance, falling below even the MTL combination by [2.9\%, 15.1\%] in terms of [linear evaluation, 1\% few-shot] performance, respectively. The huge gap between CL$\rightarrow$MIM and MIM$\rightarrow$CL Grafting further confirms the preference of MIM and CL towards lower and higher layers, respectively.
The example discussed at the end of Section~\ref{sec:conflict} also explains why this difference in preference arises: the two different types of prior knowledge requested by MIM and CL, while seemingly at odds, may work together at different layers of the model. For example, sensitivity to augmentations can be helpful for recognizing local features with strong color patterns~\citep{xiao2020should} in the lower layers (e.g., the fur of leopards). Meanwhile, for a consistent semantic understanding, the influence of lightness differences should be eliminated when modeling context features at the higher layers.
\subsection{Layer Grafted Pre-training}
\label{sec:howSeparate}
To fully unleash the power of Grafting, we `smooth out' the boundary of MIM$\rightarrow$CL Grafting to avoid a sudden change in the feature space. Specifically, rather than fixing the lower layers, we assign them a small learning rate. The resulting method, termed Layer Grafted Pre-training, is shown in Figure~\ref{fig:pipeline}. In Section~\ref{sec:lr_search_Exp}, we also explore other LR choices, and our results indicate that employing a small LR for the lower layers and a large LR for the higher layers yields the best performance.
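In implementation terms, this `smoothing' amounts to assigning layer-dependent learning rates via optimizer parameter groups, roughly as sketched below; the split index and the concrete LR values are illustrative placeholders rather than the exact schedule used in our experiments (see Section~\ref{sec:lr_search_Exp} for the grid actually searched).
\begin{verbatim}
import torch

def build_param_groups(blocks, split=6, lr_low=1.5e-6, lr_high=1.5e-4):
    """Small LR for the MIM-pre-trained lower blocks, large LR for the higher blocks."""
    groups = []
    for idx, block in enumerate(blocks):
        lr = lr_low if idx < split else lr_high
        groups.append({"params": list(block.parameters()), "lr": lr})
    return groups

# Example: optimizer = torch.optim.AdamW(build_param_groups(model.blocks))
\end{verbatim}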
By effectively capturing the augmentation-sensitive features (e.g., colors) in the lower layers with MIM while learning semantic alignment in the higher layers with CL, the proposed Layer Grafted Pre-training enables the learning of strong visual representations. It not only provides strong inter-class variance that helps clustering, but also preserves intra-class variance by keeping the diversity of samples in the early features.
\section{Experiment}
\subsection{Setting}
\label{sec:exp_setting}
\textbf{General.}
We conduct all the experiments on ImageNet-1k~\citep{deng2009imagenet} with Nvidia V100 GPUs. The code is implemented with Pytorch~\citep{NEURIPS2019_9015}.
\textbf{Backbone.}
We adopt the standard ViT-B and ViT-L architecture~\citep{dosovitskiy2020image} with the token size of 16$\times$16. The ViT-B is by default employed unless specified. When training with CL loss, we employ the projection and prediction head following Moco V3~\citep{mocov3}. The settings for pre-training and evaluation protocols can be found at Appendix~\ref{sec:appendix_setting}.
\subsection{Comparison with State-of-The-Art Methods}
\label{sec:exp_sota}
\begin{table}[b]
\centering
\caption{Comparison with State-of-The-Arts on ViT-B/16 and ViT-L/16. Linear, 1\% and 10\% denote the top-1 accuracy ($\%$) of linear evaluation, 1\% and 10\% few-shot learning, respectively. $\dag$: We employ the result of iBoT without augmentations from~\cite{zhou2021ibot} for fair comparison.}
\label{tab:main_exp}
\begin{adjustbox}{max width=\columnwidth}
\begin{tabular}{@{}L{2cm}C{5cm}C{2cm}C{2cm}C{2cm}@{}}
\toprule
BackBone & Method & Linear & 1\% & 10\% \\ \midrule
\multirow{ 5}{*}{ViT-B/16} & MAE~\citep{he2021masked} & 68.0 & 51.1 & 73.3 \\
& Moco V3~\citep{mocov3} & 76.7 & 63.4 & 75.4 \\
& iBoT$\dag$~\citep{zhou2021ibot} & 76.0 & - & - \\
& SIM~\citep{tao2022siamese} & 76.4 & 65.3 & - \\
& C-MAE~\citep{huang2022contrastive}& 73.9 & 65.3 & 77.3 \\
& MimCo~\citep{zhou2022mimco} & 70.2 & 62.7 & - \\
& Layer Grafted Pre-training (Ours) & \textbf{77.7} & \textbf{65.5} & \textbf{77.8} \\ \midrule
\multirow{ 3}{*}{ViT-L/16} & MAE~\citep{he2021masked} & 75.8 & 55.2 & 78.7 \\
& Moco V3~\citep{mocov3} & 77.6 & - & - \\
& Layer Grafted Pre-training (Ours) & \textbf{81.0} & \textbf{69.3} & \textbf{80.1} \\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{table}
We start by verifying the effectiveness of the proposed Layer Grafted Pre-training by comparing it with state-of-the-art methods. As shown in Table~\ref{tab:main_exp}, in ViT-B/16, compared to the employed MIM and CL baselines, the proposed Layer Grafted Pre-training leads to a consistent improvement. For instance, it improves MAE and Moco V3 by [9.7\%, 14.4\%, 4.5\%] and [1.0\%, 2.1\%, 2.4\%] for [linear evaluation, 1\% few-shot, 10\% few-shot], respectively.
Compared to close competitors that also attempt to combine MIM and CL, the proposed Layer Grafted Pre-training surpasses iBoT by 1.7\% in linear evaluation performance. Compared to SIM, the proposed Layer Grafted Pre-training yields improvements of 1.3\% and 0.2\% in linear evaluation and 1\% few-shot learning performance, respectively.
Our method also demonstrates good scalability toward larger model sizes. For instance, when scaling from ViT-B/16 to ViT-L/16, Layer Grafted Pre-training further improves accuracy by [3.3\%, 3.8\%, 2.3\%] in terms of [linear evaluation, 1\% few-shot, 10\% few-shot], respectively. Remarkably, the gap over Moco V3 in linear evaluation performance also increases from 1.0\% to 3.4\%.
\begin{figure}
\centering
\begin{subfigure}{.49\textwidth}
\centering
\includegraphics[width=.99\textwidth]{images/tsne/TSNE_moco.png}
\caption{Moco V3}
\label{fig:tsne_moco}
\end{subfigure}%
\begin{subfigure}{.49\textwidth}
\centering
\includegraphics[width=.99\textwidth]{images/tsne/TSNE_graft.png}
\caption{Layer Grafted Pre-training}
\label{fig:tsne_layer_graft}
\end{subfigure}%
\caption{t-SNE~\citep{van2008visualizing} visualization for feature distribution of Moco V3 and Layer Grafted Pre-training. Different colors represent different classes. Best viewed in color.}
\label{fig:tsne}
\end{figure}
\begin{figure}[b!]
\centering
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=.99\textwidth]{images/linear_finetune/linear1.pdf}
\caption{Linear, 3rd Stage LR: 1.5e-4 }
\label{fig:linear_m6}
\end{subfigure}%
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=.99\textwidth]{images/linear_finetune/linear2.pdf}
\caption{Linear, 3rd Stage LR: 1.5e-5}
\label{fig:linear_m5}
\end{subfigure}%
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=.99\textwidth]{images/linear_finetune/linear3.pdf}
\caption{Linear, 3rd Stage LR: 1.5e-6}
\label{fig:linear_m4}
\end{subfigure}%
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=.99\textwidth]{images/linear_finetune/finetune1.pdf}
\caption{Tuning, 3rd Stage LR: 1.5e-4}
\label{fig:tune_m6}
\end{subfigure}%
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=.99\textwidth]{images/linear_finetune/finetune2.pdf}
\caption{Tuning, 3rd Stage LR: 1.5e-5}
\label{fig:tune_m5}
\end{subfigure}%
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=.99\textwidth]{images/linear_finetune/finetune3.pdf}
\caption{Tuning, 3rd Stage LR: 1.5e-6}
\label{fig:tune_m4}
\end{subfigure}%
\caption{Illustration of the LR grid search results for different stages in terms of linear evaluation and fine-tuning performance. The grid is [1.5e-6, 1.5e-5, 1.5e-4] for each stage. [(a), (b), (c)] and [(d), (e), (f)] denote the linear evaluation and fine-tuning performance with a third-stage LR of [1.5e-4, 1.5e-5, 1.5e-6], respectively. Panels (a) and (b) share the same color bar as (c); panels (d), (e) and (f) also share a common color bar. The tested points are highlighted with blue dots in each plot. Best viewed in color.}
\label{fig:LR_search}
\end{figure}
We further qualitatively evaluate the representations learned by the proposed Layer Grafted Pre-training using t-SNE~\citep{van2008visualizing}. As shown in Figure~\ref{fig:tsne_layer_graft}, the proposed Layer Grafted Pre-training shows better inter-class variance. For example, the categories represented by the pink ({\color[HTML]{f032e6}$\bullet$}) and light blue ({\color[HTML]{46f0f0}$\bullet$}) points are hard to separate in Figure~\ref{fig:tsne_moco}, given that they are very close to each other. In contrast, in the representation of the proposed Layer Grafted Pre-training, they form two clusters with a clear boundary in Figure~\ref{fig:tsne_layer_graft}. Besides, the proposed Layer Grafted Pre-training also shows better intra-class variance: the red ({\color[HTML]{e6194b}$\bullet$}), green ({\color[HTML]{3cb44b}$\bullet$}) and yellow ({\color[HTML]{ffe119}$\bullet$}) points of Moco V3 collapse into a smaller region than those of the proposed Layer Grafted Pre-training.
\subsection{LR search Layer Grafted Pre-training}
\label{sec:lr_search_Exp}
We further verify whether a small LR for the lower layers and a large LR for the higher layers indeed work best for the proposed Layer Grafted Pre-training. Specifically, we study the performance with different LR settings for the three stages on ViT-B/16 (refer to the fine-tuning part of Appendix~\ref{sec:appendix_setting} for the definition of the stages), where each stage is searched over a grid of [1.5e-6, 1.5e-5, 1.5e-4].
As demonstrated in Figure~\ref{fig:LR_search}, when the LR of the third stage increases from 1.5e-6 to 1.5e-4, both the linear evaluation and fine-tuning performance improve. Taking linear evaluation as an example, as shown in Figures~\ref{fig:linear_m4},~\ref{fig:linear_m5} and~\ref{fig:linear_m6}, when the LR of the third stage increases from 1.5e-6 to 1.5e-5 and 1.5e-4, the performance range improves from 2.5\%-63.0\% to 64.8\%-70.9\% and 73.7\%-75.1\%, respectively.
In contrast, a large LR for the first stage leads to a drop in performance. For instance, in terms of the linear evaluation performance with a third-stage LR of 1.5e-4, as shown in Figure~\ref{fig:linear_m6}, the performance ranges are 74.9\%-75.1\%, 74.8\%-75.0\% and 73.7\%-73.8\% for first-stage LRs of 1.5e-6, 1.5e-5 and 1.5e-4, respectively. The performance is less sensitive to the LR of the second stage than to that of the first or third stage.
\begin{wrapfigure}{r}{0.4\textwidth}
\begin{center}
\includegraphics[width=0.4\textwidth]{images/partialTune.pdf}
\end{center}
\caption{Comparison between the proposed Layer Grafted Pre-training and MAE under different numbers of frozen blocks on ViT-B/16 in terms of fine-tuning performance. The training dataset is the full ImageNet-1k. Best viewed in color.}
\vspace{-12mm}
\label{fig:partialTune}
\end{wrapfigure}
The trend of the fine-tuning performance is similar to that of the linear evaluation performance. The preference for a larger LR in higher layers indicates that they benefit from performing CL. Meanwhile, the preference of lower layers for a smaller LR indicates that keeping MIM features in these layers is helpful.
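As a concrete illustration of this stage-wise LR assignment, the sketch below builds optimizer parameter groups for a 12-block ViT-B split into three 4-block stages. It is only a minimal sketch assuming a timm-style ViT with a \texttt{blocks} attribute; the helper name and the surrounding training loop are hypothetical and not part of our released code.
\begin{verbatim}
# Hypothetical sketch: per-stage learning rates for a 12-block ViT-B.
# Patch embedding and head parameters are omitted for brevity.
import torch

def build_stage_param_groups(model, stage_lrs=(1.5e-5, 1.5e-5, 1.5e-4)):
    """Split model.blocks into equal stages and give each stage its own LR."""
    n_blocks = len(model.blocks)
    per_stage = n_blocks // len(stage_lrs)
    groups = []
    for stage, lr in enumerate(stage_lrs):
        blocks = model.blocks[stage * per_stage:(stage + 1) * per_stage]
        params = [p for blk in blocks for p in blk.parameters()]
        groups.append({"params": params, "lr": lr})
    return groups

# optimizer = torch.optim.AdamW(build_stage_param_groups(vit_b), weight_decay=0.1)
\end{verbatim}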
\subsection{More Ablations}
\label{sec:ablationExp}
\begin{table}[b!]
\centering
\caption{Top 1 Fine-tuning performance comparison.}
\label{tab:finetune}
\begin{adjustbox}{max width=\columnwidth}
\begin{tabular}{@{}L{2cm}C{5cm}C{2cm}@{}}
\toprule
BackBone & Method & Fine-tuning \\ \midrule
\multirow{ 5}{*}{ViT-B/16} &MAE~\citep{he2021masked} & 83.6 \\
&Moco V3~\citep{mocov3} & 83.2 \\
&SIM~\citep{tao2022siamese} & 83.8 \\
&ConMIM~\citep{yi2022masked} & 83.7 \\
&Layer Grafted Pre-training (Ours) & \textbf{83.9} \\ \midrule
\multirow{ 3}{*}{ViT-L/16} &MAE~\citep{he2021masked} & \textbf{85.9} \\
&Moco V3~\citep{mocov3} & 84.1 \\
&Layer Grafted Pre-training (Ours) & \textbf{85.9} \\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{table}
\textbf{Fine-tuning Performance Comparison.} We also compare against state-of-the-art methods in terms of fine-tuning performance. As shown in Table~\ref{tab:finetune}, the proposed Layer Grafted Pre-training yields a competitive fine-tuning performance of $83.9\%$, which is $0.3\%$ and $0.7\%$ higher than the employed MIM (MAE) and CL (Moco V3) baselines, respectively. Moreover, Layer Grafted Pre-training also surpasses SIM by $0.1\%$.
\textbf{Partial Fine-tuning.} Following MAE~\citep{he2021masked}, we evaluate the performance of the proposed Layer Grafted Pre-training with different numbers of frozen blocks. As illustrated in Figure~\ref{fig:partialTune}, Layer Grafted Pre-training consistently yields higher performance than MAE, and this gap continues to grow as more layers are frozen, indicating the superiority of the representations learned by the proposed method.
\begin{figure}[t!]
\centering
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=.99\textwidth]{images/pretrain_VIC/mae_var.pdf}
\caption{Variance}
\label{fig:VIC_V_mae}
\end{subfigure}%
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=.99\textwidth]{images/pretrain_VIC/mae_invar.pdf}
\caption{Invariance}
\label{fig:VIC_I_mae}
\end{subfigure}%
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=.99\textwidth]{images/pretrain_VIC/mae_covar.pdf}
\caption{Covariance}
\label{fig:VIC_C_mae}
\end{subfigure}%
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=.99\textwidth]{images/pretrain_VIC/moco_var.pdf}
\caption{Variance}
\label{fig:VIC_V_moco}
\end{subfigure}%
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=.99\textwidth]{images/pretrain_VIC/moco_invar.pdf}
\caption{Invariance}
\label{fig:VIC_I_moco}
\end{subfigure}%
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=.99\textwidth]{images/pretrain_VIC/moco_covar.pdf}
\caption{Covariance}
\label{fig:VIC_C_moco}
\end{subfigure}%
\caption{The Variance-Invariance-Covariance (VIC) analysis for different methods. VIC statistics are computed on ViT-B/16 following~\cite{bardes2021vicreg}. The input features are first averaged over all tokens and then normalized to remove the effect of magnitude. [(a), (d)], [(b), (e)] and [(c), (f)] study variance, invariance and covariance, respectively. Best viewed in color.}
\vspace{3mm}
\label{fig:VIC}
\end{figure}
\textbf{Variance-Invariance-Covariance Analysis.}
To better understand Layer Grafted Pre-training, we study the Variance-Invariance-Covariance (VIC) pattern of the output of each block. As illustrated in Figure~\ref{fig:VIC}, we find that the VIC pattern of Layer Grafted Pre-training tends to be similar to that of fine-tuning. In the MAE case, the VIC curve of Layer Grafted Pre-training matches that of MAE Fine-tune much more closely than the curve of MAE itself does. The similarity between the proposed Layer Grafted Pre-training and fine-tuning in the VIC pattern also explains the high few-shot performance: the pre-trained weights do not need to change substantially to match the downstream task.
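For reference, the three statistics can be computed on token-averaged, normalized features as in the following minimal sketch, written in the spirit of the VICReg definitions~\citep{bardes2021vicreg}; the exact estimator used to produce Figure~\ref{fig:VIC} may differ in details.
\begin{verbatim}
# Sketch of Variance-Invariance-Covariance statistics on a batch of features.
# z1, z2: [batch, tokens, dim] features of two augmented views.
import torch
import torch.nn.functional as F

def vic_stats(z1, z2, eps=1e-4):
    z1 = F.normalize(z1.mean(dim=1), dim=-1)   # token-average, then normalize
    z2 = F.normalize(z2.mean(dim=1), dim=-1)

    # Invariance: mean squared distance between the two views.
    invariance = (z1 - z2).pow(2).sum(dim=-1).mean()

    # Variance: mean per-dimension standard deviation across the batch.
    variance = torch.sqrt(z1.var(dim=0) + eps).mean()

    # Covariance: mean squared off-diagonal entry of the covariance matrix.
    zc = z1 - z1.mean(dim=0)
    cov = (zc.T @ zc) / (zc.shape[0] - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    covariance = off_diag.pow(2).sum() / z1.shape[1]

    return variance.item(), invariance.item(), covariance.item()
\end{verbatim}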
\textbf{Layer Grafted Pre-training with VICReg.}
We further examine the generalizability of the proposed idea on a different pre-training method - VICReg~\citep{bardes2021vicreg}. As shown in Table~\ref{tab:vic_compare}, when replacing the CL loss with the VICReg loss, the proposed Layer Grafted Pre-training still yields strong performance, surpassing the VICReg baseline by [4.8\%, 2.4\%] in [linear evaluation, fine-tuning] performance, respectively.
\begin{table}[t!]
\centering
\caption{Layer Grafted Pre-training on ViT-B/16 with VICReg~\citep{bardes2021vicreg}. We train ViT-B/16 with VICReg for 100 epochs as the baseline. For Layer Grafted Pre-training - VICReg, the CL loss of stage ii is replaced with VICReg loss.}
\label{tab:vic_compare}
\begin{adjustbox}{max width=\columnwidth}
\begin{tabular}{@{}L{7cm}C{2cm}C{2cm}@{}}
\toprule
Method & Linear & Fine-tuning \\ \midrule
VICReg~\citep{bardes2021vicreg} & 70.1\% & 81.2\% \\
Layer Grafted Pre-training - VICReg (Ours) & 74.9\% & 83.6\% \\
\bottomrule
\end{tabular}
\end{adjustbox}
\vspace{3mm}
\end{table}
\section{Conclusion}
In this work, we propose Layer Grafted Pre-training, a simple yet principled method for understanding and bridging two popular types of self-supervised learning methods - Masked Image Modeling (MIM) and Contrastive Learning (CL). Our work provides a simple remedy to the conflict between MIM and CL and further reveals the different preferences of these two methods toward different parts of the neural network. It advances the quality of self-supervised representations and achieves strong performance on linear evaluation and few-shot learning. Potential future work includes assessing or extending the proposed method to real-world unlabeled data with more challenges such as long-tail or imbalanced distributions~\citep{jiang2021self}.
\section{Appendix}
This appendix contains the following details that we could not include in the main paper due to space restrictions.
\subsection{More Analysis for the Layer-wise Difference Between MIM and CL}
\begin{wrapfigure}{r}{0.5\textwidth}
\centering
\includegraphics[width=.5\textwidth]{images/Average_Attn_Dist_Compare.pdf}
\caption{Demonstration of the average attention distance~\citep{dosovitskiy2020image} for MIM (MAE), CL (Moco V3) and Graft, averaged across all attention heads.}
\label{fig:ave_attn_dist}
\end{wrapfigure}
We provide more analysis to further understand the layer-wise difference between MIM and CL. We start by analyzing the average attention distance across different layers. As shown in Figure~\ref{fig:ave_attn_dist}, on the one hand, for the deep layers (i.e. the 8th to 12th blocks), the average attention distance of CL keeps increasing, which is where the aggregation of local features is likely to happen. In contrast, the average attention distance of MIM stays at the same level for the deep layers. On the other hand, the average attention distance of the shallow layers (i.e. the 1st and 2nd blocks) of CL is much larger than that of MIM, which may distract the model from extracting local features. The proposed method combines the lower layers' pattern of MIM with the higher layers' pattern of CL, forming a gradually increasing attention-distance profile. Remarkably, this pattern echoes the philosophy of a gradually increasing receptive field in network architecture design~\cite{he2016deep,liu2021swin}.
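As a reference for how this quantity can be measured, the sketch below computes the average attention distance of a single attention map over a square grid of patch tokens (class token excluded). This is our reading of the metric of~\cite{dosovitskiy2020image} with hypothetical tensor shapes, not their exact implementation.
\begin{verbatim}
# Sketch: average attention distance for one layer's attention map.
# attn: [heads, tokens, tokens] softmax weights over patch tokens
# (class token removed); grid_size: number of patches per image side.
import torch

def average_attention_distance(attn, grid_size):
    idx = torch.arange(grid_size * grid_size)
    coords = torch.stack([idx // grid_size, idx % grid_size], dim=-1).float()
    dist = torch.cdist(coords, coords)   # pairwise patch distances, patch units
    # Expectation of the distance under the attention distribution,
    # averaged over query tokens and heads.
    return (attn * dist).sum(dim=-1).mean()
\end{verbatim}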
\begin{table}
\centering
\caption{Comparison between MIM (MAE) and CL (Moco V3) for frozen features from blocks [6,9,12], on top of which we employ various numbers of blocks (\#Blocks) for fine-tuning. 0 blocks indicates that only a linear classification head is employed (identical to linear evaluation). Top-1 accuracy on ImageNet-1K is reported and the best performance under each setting is highlighted in bold.}
\label{tab:feature_evaluation}
\begin{adjustbox}{max width=\columnwidth}
\begin{tabular}{@{}L{2cm}C{1.2cm}C{1.2cm}C{1.2cm}C{1.2cm}C{1.2cm}C{1.2cm}@{}}
\toprule
\multirow{ 3}{*}{\#Blocks} & \multicolumn{6}{c}{The block index of the feature } \\ \cmidrule{2-7}
& \multicolumn{2}{c}{6} & \multicolumn{2}{c}{9} & \multicolumn{2}{c}{12} \\ \cmidrule(l{2pt}r{2pt}){2-3} \cmidrule(l{2pt}r{2pt}){4-5} \cmidrule(l{2pt}r{2pt}){6-7}
& MIM & CL & MIM & CL & MIM & CL \\ \midrule
0 & 38.9\% & {\bf 43.6\%} & 59.3\% & {\bf 65.5\%} & 68.0\% & {\bf 76.7\%} \\
1 & 70.1\% & {\bf 72.7\%} & 77.8\% & {\bf 78.1\%} & 78.9\% & {\bf 79.1\%} \\
2 & 76.3\% & {\bf 76.8\%} & {\bf 80.2\%} & 79.2\% & {\bf 80.6\%} & 79.7\% \\
4 & {\bf 78.5\%} & 78.1\% & {\bf 81.2\%} & 79.4\% & {\bf 81.4\%} & 79.2\% \\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{table}
Secondly, we study the different properties of features across different layers. Specifically, on ImageNet-1K, we tune several randomly initialized blocks on top of features from different layers. As demonstrated in Table~\ref{tab:feature_evaluation}, when only fine-tuning the classification head ($\#\text{Block}=0$), the performance of MIM is much lower than that of CL, indicating that the features of CL are closer to semantic representations. By contrast, when increasing the number of tunable blocks, the performance of MIM increases significantly and even surpasses CL, demonstrating that the features of MIM better encode local information.
The potential of these local features can be stimulated by adding enough modules to aggregate them.
The proposed Layer Grafted Pre-training employs MIM for producing high-quality early features while utilizing CL in higher layers for aggregation.
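A minimal sketch of this probing protocol is given below; the module and helper names (in particular \texttt{forward\_until}) are hypothetical placeholders for extracting token features after a chosen block, not functions of an existing library.
\begin{verbatim}
# Sketch (hypothetical names): probe frozen block-k features with N randomly
# initialized tunable transformer blocks plus a linear classifier.
import torch
import torch.nn as nn

class FeatureProbe(nn.Module):
    def __init__(self, backbone, feature_block, new_blocks, dim=768,
                 num_classes=1000):
        super().__init__()
        self.backbone = backbone            # pretrained MIM or CL encoder
        self.feature_block = feature_block  # 6, 9 or 12
        for p in self.backbone.parameters():
            p.requires_grad = False         # backbone features stay frozen
        # 0, 1, 2 or 4 freshly initialized blocks; empty list = linear probe.
        self.new_blocks = nn.Sequential(*new_blocks)
        self.classifier = nn.Sequential(nn.LayerNorm(dim),
                                        nn.Linear(dim, num_classes))

    def forward(self, images):
        with torch.no_grad():
            # Assumed helper: token features after the chosen block.
            tokens = self.backbone.forward_until(images, self.feature_block)
        tokens = self.new_blocks(tokens)
        return self.classifier(tokens.mean(dim=1))   # pool tokens, classify
\end{verbatim}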
\subsection{Discussion and Comparison with Concurrent Works}
\begin{figure}[t]
\centering
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=.8\textwidth]{images/gradient_conflict_analyse/grad_cos_sim_boxPlot_200ep.pdf}
\caption{200 epochs}
\label{fig:gradient_conflict_200ep}
\end{subfigure}%
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=.8\textwidth]{images/gradient_conflict_analyse/grad_cos_sim_boxPlot_300ep.pdf}
\caption{300 epochs}
\label{fig:gradient_conflict_300ep}
\end{subfigure}%
\caption{The box plot of $\bold{C}_{\text{MIM},\text{CL}}(x)$ across different blocks for the MTL combination of MIM and CL. This is measured on the training dataset when the network has been trained for (a) 200 epochs and (b) 300 epochs (out of 300 epochs in total). The red dashed line indicates the linear regression of the median values. }
\label{fig:gradient_conflict_more}
\end{figure}
In this section, we discuss the differences between the proposed method and four concurrent works~\citep{tao2022siamese,huang2022contrastive,zhou2022mimco,yi2022masked} and provide more comparisons. To combine the strengths of MIM and CL, SIM~\citep{tao2022siamese} proposes to predict the dense representations of an augmented view for enforcing semantic alignment and spatial sensitivity simultaneously. CMAE~\citep{huang2022contrastive} proposes two new components: pixel shift and a feature decoder. MimCo~\citep{zhou2022mimco} utilizes the CL pre-trained model as a teacher and performs patch-level and image-level reconstruction tasks. ConMIM~\citep{yi2022masked} utilizes contrastive constraints to produce a dynamic masked prediction target.
The differences between the proposed Layer Grafted Pre-training and these works are as follows:
\begin{itemize}
\item While all the concurrent works treat the network as a whole, we reveal the different preferences of MIM and CL towards different internal layers, which motivates us to design a novel layer-wise method.
\item Our method employs the original designs of MIM and CL, which is not only simple but also enables an apples-to-apples comparison. In contrast, it is not straightforward to tell whether the improvements of the concurrent works come from the newly introduced modules or from the original MIM/CL design.
\item The proposed Layer Grafted Pre-training comes with an in-depth gradient analysis of why MIM and CL cannot be directly combined.
\end{itemize}
Also, we further analyze here why the proposed method fails to surpass the concurrent work CMAE~\citep{huang2022contrastive} in terms of fine-tuning performance. One possible reason lies in whether the masked view is employed for contrastive learning. CMAE contrasts a masked and a full image, while the proposed method contrasts two full images following Moco V3. This difference in design leads to different strengths: on the one hand, empirical results highlight that the masked view can benefit the downstream fine-tuning task~\citep{touvron2022deit,he2021masked}, which may be because it helps to learn correlations between sparse patches that cannot be built under the full view. On the other hand, contrasting full images leads to a smaller gap with downstream tasks and thus benefits the downstream few-shot and linear evaluation tasks.
\subsection{More Gradient analysis results}
We measure the distribution of $\bold{C}_{\text{MIM},\text{CL}}(x)$ across different training epochs and confirm that the statistical pattern persists across the entire training process, rather than only at a few specific epochs. As two examples, Figures~\ref{fig:gradient_conflict_200ep} and~\ref{fig:gradient_conflict_300ep} show that a considerable fraction of the values is negative at 200 and 300 epochs.
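For completeness, the per-block gradient cosine similarity $\bold{C}_{\text{MIM},\text{CL}}(x)$ can be estimated as in the sketch below; the loss handles are hypothetical, and the actual losses are those of MAE and Moco V3 computed on the same input $x$.
\begin{verbatim}
# Sketch: cosine similarity between MIM and CL gradients, per transformer block.
# Assumes a ViT-style model with a `blocks` attribute and two scalar losses
# computed from the same forward graph.
import torch
import torch.nn.functional as F

def block_gradient_cosine(model, mim_loss, cl_loss):
    """Return one cosine similarity per block between the two loss gradients."""
    grads = {}
    for name, loss in [("mim", mim_loss), ("cl", cl_loss)]:
        model.zero_grad(set_to_none=True)
        loss.backward(retain_graph=True)
        grads[name] = [torch.cat([p.grad.flatten() for p in blk.parameters()
                                  if p.grad is not None])
                       for blk in model.blocks]
    return [F.cosine_similarity(g1, g2, dim=0).item()
            for g1, g2 in zip(grads["mim"], grads["cl"])]
\end{verbatim}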
\subsection{Transfer Learning Results}
\begin{table}
\centering
\caption{ADE20K semantic segmentation comparison using UperNet with different pre-training methods.}
\label{tab:segmentation}
\begin{adjustbox}{max width=\columnwidth}
\begin{tabular}{@{}L{5cm}C{2cm}@{}}
\toprule
Method & mIoU \\ \midrule
MAE~\citep{he2021masked} & 48.1 \\
Moco V3~\citep{mocov3} & 47.3 \\
Layer Grafted Pre-training (Ours) & \textbf{48.7} \\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{table}
We further evaluate the proposed method on the standard semantic segmentation transfer task. As shown in Table~\ref{tab:segmentation}, on ADE20K with UperNet, the proposed method achieves higher performance than both MAE and Moco V3.
\subsection{More settings}
\label{sec:appendix_setting}
\textbf{Pre-training.} We by default adopt MAE~\citep{he2021masked} and Moco V3~\citep{mocov3} as our MIM and CL frameworks, respectively. Since the first step of Layer Grafted Pre-training identically follows the original MIM pipeline, we directly adopt the pre-trained model of MAE~\citep{he2021masked} and conduct the second step, where we initialize the network with the MIM pre-trained model and train with Moco V3~\citep{mocov3} for 300 epochs. For the LR, we first split the network into three stages, each containing the same number of blocks (i.e. 4 and 8 blocks for ViT-B and ViT-L, respectively). Then, the base LR of the first and second stages (corresponding to lower layers) is set to 1.5e-5, while that of the third stage is set to 1.5e-4 by default. In the second stage for ViT-L, we further ensure that the early layers of the resultant model stay close to the MIM pre-training by minimizing the $l^2$ distance between the first 12 layers of the two models (refer to Section~\ref{sec:ablationReg} for ablations). Other settings follow Moco V3~\citep{mocov3}.
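The $l^2$ constraint used for ViT-L can be implemented as an auxiliary loss on the first 12 blocks; a minimal sketch (assuming the MIM-pre-trained weights are kept in memory as a reference state dict on the same device) is shown below.
\begin{verbatim}
# Sketch: l2 penalty keeping the first 12 blocks close to the MIM weights.
# Parameter names are assumed to follow the usual "blocks.<idx>.*" pattern.
import torch

def l2_to_reference(model, reference_state, n_blocks=12, weight=1.0):
    """Sum of squared distances between current and MIM-pretrained parameters
    of the first `n_blocks` transformer blocks."""
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, param in model.named_parameters():
        if name.startswith("blocks.") and int(name.split(".")[1]) < n_blocks:
            penalty = penalty + (param - reference_state[name]).pow(2).sum()
    return weight * penalty

# total_loss = contrastive_loss + l2_to_reference(model, mim_state_dict)
\end{verbatim}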
\textbf{Fine-tuning.} For fine-tuning, we train with AdamW~\citep{loshchilov2017decoupled} for 100 epochs following MAE~\citep{he2021masked}. We employ a base LR of 5e-4 with the linear scaling rule~\citep{goyal2017accurate}. The layer-wise LR decay ratio is set to 0.6~\citep{clark2020electra}. For other settings such as data augmentation, LR scheduling and weight decay, we identically follow~\cite{he2021masked}.
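For concreteness, the layer-wise LR decay can be written as follows; this is a generic sketch in the style of the ELECTRA/BEiT recipes, not necessarily identical to our exact grouping of parameters.
\begin{verbatim}
# Sketch: layer-wise learning-rate decay for fine-tuning a 12-block ViT.
def layerwise_lrs(base_lr=5e-4, batch_size=1024, n_blocks=12, decay=0.6):
    lr = base_lr * batch_size / 256          # linear scaling rule
    # The last block (closest to the head) gets the full LR, earlier blocks
    # are scaled down geometrically; the patch embedding gets the smallest LR.
    lrs = {f"blocks.{i}": lr * decay ** (n_blocks - 1 - i)
           for i in range(n_blocks)}
    lrs["patch_embed"] = lr * decay ** n_blocks
    return lrs
\end{verbatim}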
\textbf{Few-shot Learning.} We conduct few-shot learning with 1\% or 10\% available labels. The sub-sampling splits are adopted from~\cite{simclrv2}. For 1\% few-shot evaluation, following~\cite{caron2021emerging}, we first generate frozen features on training images without data augmentation, on top of which a logistic regression classifier is trained for prediction. For 10\% semi-supervised learning, we train from the first layer of the projection head following~\cite{simclrv2}. We train for 400 epochs with an initial base LR of 3e-5. Other settings are identical to the fine-tuning.
\textbf{Linear Evaluation.} For linear evaluation, we train a linear classifier on top of frozen pre-trained features to measure the quality of the visual representations, following common practice~\citep{simclr}. Following Moco V3~\citep{mocov3}, the classifier is trained for 90 epochs with the SGD optimizer and a weight decay of 0. The LR is swept for each case.
\subsection{Ablation study for $l^2$ regularization}
\label{sec:ablationReg}
In this section, we conduct an ablation study on the effectiveness of the $l^2$ regularization mentioned in Section~\ref{sec:appendix_setting}. As shown in Table~\ref{tab:l2_ablation}, employing the $l^2$ regularization helps to preserve the Masked Image Modeling features of the lower layers and yields consistent improvements across multiple benchmarks.
\begin{table}
\centering
\caption{Ablation study for $l^2$ regularization on ViT-L. Top-1 accuracy on ImageNet-1K is reported and the best performance under each setting is highlighted in bold.}
\label{tab:l2_ablation}
\begin{adjustbox}{max width=\columnwidth}
\begin{tabular}{@{}C{3.5cm}C{1.5cm}C{1.5cm}C{1.5cm}C{1.5cm}@{}}
\toprule
$l^2$ regularization & Linear & 1\% & 10\% & Finetune \\ \midrule
\xmark & 80.5 & 68.9 & 79.8 & 85.7 \\
$\checkmark$ & \textbf{81.0} & \textbf{69.3} & \textbf{80.1} & \textbf{85.9} \\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{table}
|
{
"arxiv_id": "2302.14157",
"language": "en",
"timestamp": "2023-03-01T02:03:02",
"url": "https://arxiv.org/abs/2302.14157",
"yymm": "2302"
} | \section*{\refname}}
\bibliographystyle{unsrt}
\usepackage[]{lineno}
\setlength{\parindent}{0cm}
\begin{document}
\title{Structural constraints on the emergence of oscillations in multi-population neural networks}
\author{Jie Zang\begin{CJK}{UTF8}{gbsn} (臧杰)
\end{CJK}}
\author{Shenquan Liu\begin{CJK}{UTF8}{gbsn} (刘深泉)\end{CJK}}
\email{mashqliu@scut.edu.cn}
\affiliation{School of Mathematics, South China University of Technology, Guangdong, China.}
\author{Pascal Helson}
\author{Arvind Kumar}
\email{arvkumar@kth.se}
\affiliation{ Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden}
\begin{abstract}
Oscillations arise in many real-world systems and are associated with both functional and dysfunctional states. Therefore, it is important to determine the causes of oscillations in a network. Whether a network can oscillate can be estimated if we know the strength of the interactions between nodes. But in real-world networks (in particular in biological networks) it is usually not possible to know the exact connection weights. Therefore, it is important to determine the structural properties of a network that are necessary to generate oscillations. Here, we use dynamical systems theory to prove that an odd number of inhibitory nodes and strong enough connections are necessary to generate oscillations in a single-cycle threshold-linear network. We illustrate these analytical results in a biologically plausible network with either firing-rate based or spiking neurons. Our work provides structural properties necessary to generate oscillations in a network. We use this knowledge to reconcile recent experimental findings about oscillations in the basal ganglia with classical findings.
\end{abstract}
\keywords{network structure, oscillations, basal ganglia, neural dynamics}
\maketitle
\section*{Introduction}\label{sec:introduction}
Oscillations are ubiquitous in dynamical systems \cite{Strogatz2018,pikovsky2002synchronization}. They have important functional consequences but can also cause system malfunction. In the brain for instance, oscillations take part in information transfer \cite{Fries2015Neuron,Hahn2019}. However, persistent beta band ($13$-$30$\,Hz) oscillations are associated with the pathological symptoms of Parkinson's disease \cite{Brown2001PD}. Therefore, it is important to determine when and how a system of many interacting nodes (network) oscillates.
This question is usually very difficult to answer analytically. The main tool available is the Poincaré–Bendixson theorem \cite{poincare1880courbes,bendixson_sur_1901}, which is only valid in $2$ dimensions, drastically reducing its applicability. In some cases, when we know the model parameters, it is possible to calculate whether the system will oscillate or not. However, often such parameters cannot be measured experimentally. For example, in most physical, chemical, and biological networks, it is usually not possible to obtain the exact values of the connection strengths. By contrast, it is much easier to know whether two nodes in a system are physically connected and what the sign (positive or negative) of their interaction is. Therefore, it is much more useful to identify necessary structural conditions for the emergence of oscillations. A good example is the conjecture postulated by Thomas \cite{thomas1981relation}: when considering a coupled dynamical system ($\Dot{x} = f(x)$ and $x(0)\in \mathbb{R}^n$) with a Jacobian matrix whose elements have fixed signs, it can exhibit oscillations only if the directed graph obtained from the nodes' connectivity (Jacobian matrix) admits a negative loop of two or more nodes (a loop with an odd number of inhibitory connections). This conjecture has been proven using graph theory for smooth functions $f$ \cite{snoussi1998necessary,gouze1998positive}.
Thomas also conjectured that the assumption of a constant-sign Jacobian matrix may not be needed \cite{thomas2006circular}, i.e. it should be enough for the necessary condition that a negative loop exists in some domain of the phase space. This relaxed condition is more realistic given the ubiquity of non-linearities in biological systems. For example, in the brain, even though neurons are (usually) either excitatory or inhibitory, the transfer function linking the neurons is non-linear and can thus lead to elements of the Jacobian matrix with non-constant sign. To the best of our knowledge, this last conjecture has not been proved yet, but there are many examples of it. For instance, oscillations can emerge from a simple EI network in the Wilson-Cowan model \cite{Ledoux2011}. Our study is an example of this conjecture in the threshold-linear network (TLN) model \cite{hartline1958spatial}, which closely captures neural population dynamics.
Here, we study the long-term behaviour of the TLN model in the case of a single cycle containing all nodes. We show analytically that, regardless of the sign of this loop, the system cannot oscillate when connections are too weak, as it then possesses a unique globally asymptotically stable fixed point. However, when connections are strong enough, the system either possesses two asymptotically stable fixed points (positive loop) or a unique unstable fixed point (negative loop). In addition, the system can be shown to be bounded and thus exhibits one of the following long-term behaviours: a limit cycle, quasi-periodic dynamics or chaos. Interestingly, we can show that such dynamics can be shut down by introducing positive external input to excited nodes.
Based on our analytical results, we used simulations of basal ganglia (BG) network models with either firing rate-based or spiking neurons to explain recent experimental findings about the origin of oscillations in Parkinson's disease (PD). Traditionally, the subthalamic nucleus and globus pallidus (STN-GPe) subnetwork is considered to be the key network underlying the emergence of oscillations in PD \cite{Plenz1996beta, Terman2002, Kumar2011}. However, recent experiments have shown that near-complete inhibition of GPe, but not of STN, is sufficient to quench oscillations \cite{Crompe2020}. This observation contradicts several previous models and even clinical observations in which surgical removal of STN is used to alleviate PD symptoms. Our theory suggests that there are at least $6$ possible cycles in the cortex-BG network that have the potential to oscillate based on the connectivity structure. We show that even if STN is inhibited, other 'cycles' can sustain pathological oscillations. Interestingly, we found that GPe features in $5$ out of $6$ oscillatory cycles and therefore GPe inhibition is likely to affect PD-related oscillations in most cases.
\section*{Results}\label{sec:results}
We study how the emergence of oscillations in a network of excitatory and inhibitory populations depends on the connectivity structure. We first consider a network of nodes whose dynamics represent the average firing rate of a population. We derive structural conditions for the emergence of oscillations when the dynamics of individual nodes are described by the threshold-linear network (TLN) model. Next, we use numerical simulations to test whether these results still hold in two other models: the Wilson-Cowan population rate-based model \cite{WilsonCowan1972} and a network model of the basal ganglia with spiking neurons (see Methods).
\subsection*{Structural conditions to generate oscillations}
\subsubsection*{Intuition behind the analytical results}
There exist many ways to generate oscillations in a network. Oscillations can arise from individual nodes due to their intrinsic dynamics (a spiking neuron can have a periodic behaviour given its ionic channel composition \cite{lee2018critical}) or from the weights' dynamics when considering synaptic plasticity \cite{izhikevich2008large}. Here we assume that the system's ability to oscillate depends only on the connectivity structure: the presence of positive or negative loops and the connection strengths (Jacobian matrix) within them. That is, neither plasticity nor the biophysics of neurons is considered.
Consider a small network of two nodes. If we connect them mutually with excitatory synapses, intuitively we can say that the two-population network will not oscillate. Instead, the two populations will synchronize. The degree of synchrony will, of course, depend on the external input and the strength of mutual connections. If both these nodes are inhibitory, one of the nodes will emerge as a winner and the other will be suppressed \cite{Ermentrout1992WTA}. Hence, a network of two mutually connected inhibitory populations cannot oscillate either. We can extend this argument to three population networks with three connections that form a closed loop or 'cycle' (Fig.\,\ref{fig:fig1}a, top). When all three connections in the cycle are excitatory, the three populations will synchronize. Essentially, we will have a single population. Thus, these two and three population motifs are not capable of oscillations.
The simplest network motif which is capable of oscillating consists of two mutually connected nodes: one excitatory and one inhibitory (EI motif: Fig.\,\ref{fig:fig1}a, bottom) \cite{Ledoux2011}. When three populations are connected with three connections to form a cycle, the potential to oscillate depends on the number of inhibitory connections. A cycle with one inhibitory connection (EEI motif) can be effectively reduced to an EI motif and can therefore oscillate. However, when there are two inhibitory connections (EII motif, Fig.\,\ref{fig:fig1}a, top), the two inhibitory neurons engage in winner-take-all type dynamics and the network is not capable of oscillations. Finally, if there are three inhibitory connections (i.e. all three nodes are inhibitory, III motif) the network enters a winner-less competition \cite{Rabinovich2001dynamical} and can exhibit oscillations (Fig.\,\ref{fig:fig1}a, bottom).
These examples of two or three nodes suggest that a network can generate oscillations if there are one or three inhibitory connections in the cycle. These observations form the basis for the conjecture of Thomas \cite{thomas1981relation}, which gives a necessary condition for oscillations to emerge. This condition is of course not sufficient. In the following we find additional constraints (on inputs and minimum connection strength) needed to determine the emergence of oscillations in a network. To this end we use the TLN model, which captures neural population dynamics to a great extent. After proving the key theorems, we test with simulations whether similar results hold in a more realistic Wilson-Cowan model and in a model of the basal ganglia with spiking neurons.
\begin{figure*}
\centering
\includegraphics[width=1\linewidth]{FigureA.eps}
\caption{\textbf{Structural condition for oscillations: odd inhibitory cycle rule and its illustrations.} \textbf{a}: Examples of oscillating and non-oscillating motifs in the Wilson-Cowan model. Motifs that cannot oscillate show features of winner-take-all dynamics: the winner inhibits the other nodes while keeping a high activity level. Conversely, the oscillatory motifs all show features of winner-less competition, which may contribute to oscillation. \textbf{b}: The odd inhibitory cycle rule for predicting oscillations from the sign structure of a network. \textbf{c}: Illustrations of oscillation in complex networks. Based on the odd inhibitory cycle rule, Network I cannot oscillate, while Network II can, as determined by counting the inhibitory connections in each of their cycles. The red and black arrows indicate inhibition and excitation, respectively. Hollow nodes and solid nodes represent excitatory and inhibitory nodes, respectively.
}
\label{fig:fig1}
\end{figure*}
\subsubsection*{Threshold linear network model}
We consider the TLN($W,b$) in which individual nodes follow the dynamics
\begin{equation}
\dfrac{dx_i}{dt}=-x_i+\left[\sum_{j=1}^nW_{ij}x_j+b_i\right]_+,\quad i=1,\dots,n
\label{eq:TLN}
\end{equation}
where $n$ is the number of nodes, $x_i(t)$ is the activity level of the $i$th node at time $t\geq 0$, $W_{ij}$ is the connection strength from node $j$ to node $i$ and $[\,\cdot\,]_+\overset{\mathrm{def}}{=}\mathrm{max}\{\,\cdot\,, 0\}$ is the threshold non-linearity. For all $i \in [n] \overset{\mathrm{def}}{=} \{1,\dots,n\}$, the external inputs $b_i \in \mathbb{R}$ are assumed to be constant in time. We refer to an $n$-node network with dynamics given by eq. \ref{eq:TLN} as TLN($W,b$).
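For illustration, eq.~\ref{eq:TLN} can be integrated with a simple forward-Euler scheme as in the sketch below; this minimal reference implementation is provided for the reader's convenience and is not the code used for the figures, and the example parameter values are only illustrative.
\begin{verbatim}
# Minimal forward-Euler integration of the threshold-linear network, eq. (1).
import numpy as np

def simulate_tln(W, b, x0, dt=0.01, T=100.0):
    """W: (n, n) weights (W[i, j] = from node j to node i), b: (n,) inputs."""
    steps = int(T / dt)
    x = np.array(x0, dtype=float)
    trace = np.empty((steps, len(x)))
    for t in range(steps):
        drive = np.maximum(W @ x + b, 0.0)   # [.]_+ threshold non-linearity
        x = x + dt * (-x + drive)            # dx/dt = -x + [Wx + b]_+
        trace[t] = x
    return trace

# Example: the III motif (three inhibitory nodes in a cycle), W[i, i-1] = -w.
# With w > 1/cos(pi/3) = 2 the trace oscillates; with w < 1 it converges.
w = 2.5
W = -w * np.roll(np.eye(3), -1, axis=1)
trace = simulate_tln(W, b=np.ones(3), x0=[0.1, 0.2, 0.3])
\end{verbatim}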
In order to help the definition of cycle connectivity matrices, we define
\[
C_n \overset{\mathrm{def}}{=} \{ (i,j) \in [n]^2 | i-j=1 \} \cup \{ (1,n) \}.
\]
We denote by $\delta_{i,I}$ the Kronecker delta which equals $1$ when node $i$ is inhibitory and $0$ otherwise (node $i$ is excitatory).
In the following, we use the convention that node $0$ is node $n$ and node $n+1$ is node $1$.
For a given set of elements $\{y_k\}_{k\in \mathbb{N}}$ in $\mathbb{R}$, we will use the convention:
\begin{align}\label{eq:conv_prod}
\prod_{k=i}^{j} y_k = 1 \text{ when }j<i.
\end{align}
We define $\mathcal{A}=\{a_1,\dots,a_{n_I}\}$ as the ensemble of inhibited nodes ($\{k \in [n] | \delta_{k-1,I} = 1\}$) ordered such that $a_1<\dots<a_{n_I}$. Denoting by $\text{card}(\cdot)$ the cardinality function, we have $\text{card}(\mathcal{A})=n_I$. We also use the cycle convention for $\mathcal{A}$: $a_{n_I+1} = a_1$.
\subsection*{Analytical results}
\begin{theorem}
Let a network of inhibitory and excitatory nodes be connected through a graph $G$ which does not contain any directed cycle. Assume that its nodes follow TLN($W,b$) dynamics (eq. \ref{eq:TLN}) with
\begin{align*}
W_{ij}=\begin{cases}
w_{ij} (-1)^{\delta_{j,I}}&\text{when edge} \ i \leftarrow j \in \text{G} \\
0& \text{otherwise}, \
\end{cases}
\end{align*}
where $w_{ij} \in \mathbb{R}^+$ $\forall~ i,j \in [n]$.
Then, TLN($W,b$) has a unique globally asymptotically stable fixed point.
\label{thm1}
\end{theorem}
\begin{theorem}
Let G be a cyclical graph with $n_I\in \mathbb{N^+}$ inhibitory nodes and $n_E \in \mathbb{N}$ excitatory nodes such that $n_I + n_E \geq 2$ ($\geq 3$ when $n_I = 1$). Assume that the nodes follow the TLN($W,b$) dynamics (eq.\,\ref{eq:TLN}) with for all $i,j \in [n]$, $w_j \in \mathbb{R}^+$,
\begin{align*}
W_{ij}=
\begin{cases}
w_j (-1)^{\delta_{j,I}}&\text{when} \ (i,j) \in C_n \\
0& \text{otherwise},
\end{cases}
\end{align*}
and $b_i = 0$ when the node $i-1$ is excitatory and $b_i > 0$ otherwise.
Moreover, using convention (eq.\,\ref{eq:conv_prod}), assume that the initial state is bounded,
\begin{align}\label{eq:bound}
\forall j \in \{a_k, \dots, a_{k+1}-1\},\quad x_{j}(0) \in [0,b_{a_{k}}\prod_{i=a_{k}}^{j-1}w_i].
\end{align}
Then, the long time behaviour of the network depends on the following conditions,
\begin{align}
\forall k \in [n_I], \ \ \
\prod_{i=a_k}^{a_{k+1}-1}w_i < \frac{b_{a_{k+1}}}{b_{a_k}},\label{eq:cond_w_b}\\
\prod_{i=a_k}^{a_{k+1}-1}w_i > \frac{b_{a_{k+1}}}{b_{a_k}},\label{eq:cond_w_b_bar}
\\
\sqrt[n]{\prod_{i=1}^{n}w_{i}} < \frac{1}{\cos(\pi/n)}\label{eq:cond_w_cos},\\
\sqrt[n]{\prod_{i=1}^{n}w_{i}} > \frac{1}{\cos(\pi/n)}\label{eq:cond_w_cos_bar}.
\end{align}
\noindent If $n_I$ is even and
\begin{itemize}
\item eq.\,\ref{eq:cond_w_b} is satisfied, TLN($W,b$) has a unique globally asymptotically stable fixed point with support $[n]$,
\item eq.\,{\ref{eq:cond_w_b_bar}} is satisfied, TLN($W,b$) has two asymptotically stable fixed points with strict complementary subsets of $[n]$ as supports.
\end{itemize}
If $n_I$ is odd and
\begin{itemize}
\item eq.\,\ref{eq:cond_w_b} is satisfied, TLN($W,b$) has a unique fixed point which is globally asymptotically stable and its support is $[n]$,
\item eq.\,\ref{eq:cond_w_b_bar} \& eq.\,\ref{eq:cond_w_cos} are satisfied, TLN($W,b$) has a unique fixed point which is asymptotically stable (not globally) and its support is $[n]$,
\item eq.\,\ref{eq:cond_w_b_bar} \& eq.\,\ref{eq:cond_w_cos_bar} are satisfied, TLN($W,b$) has a unique fixed point which is unstable and has $[n]$ as support.
\end{itemize}
\label{thm2}
\end{theorem}
\begin{remark}
First, note that eq.\,\ref{eq:cond_w_b} implies
\begin{align}
\sqrt[n]{\prod_{i=1}^{n}w_{i}} < 1, \label{eq:cond_w}
\end{align}
and similarly, eq.\,\ref{eq:cond_w_b_bar} implies
\begin{align}
\sqrt[n]{\prod_{i=1}^{n}w_{i}} > 1 \label{eq:cond_w_bar}.
\end{align}
In addition, the bound on the initial state eq. \ref{eq:bound} can be easily removed. We use it because it eases the proof as we then don't need to introduce technical details that are not interesting for this study.
Then, Theorem \ref{thm2} says that a possible condition for the single-cycle TLN to oscillate is that the number of inhibitory nodes is odd and the connection strengths are strong enough (i.e. eq.\,\ref{eq:cond_w_b_bar} \& eq.\,\ref{eq:cond_w_cos_bar} hold). In that case, the system has no stable fixed point and, from Lemma \ref{lem:boundedness}, it is bounded, so it exhibits a limit cycle, quasi-periodic or chaotic behaviour. In particular, Theorem \ref{thm2} states that an odd number of inhibitory nodes is not sufficient. Indeed, when eq.\,\ref{eq:cond_w_b} holds and $n_I$ is odd, no oscillations are possible as the fixed point is globally stable. This is also the case when $n_I$ is even, which corresponds to Thomas' conjecture.
Finally, there is a gap in between the conditions (between eq.\,\ref{eq:cond_w_b} and eq.\,\ref{eq:cond_w_b_bar} for example) for which the long term behaviour is not determined.
\end{remark}
\begin{remark}
In particular, if for all $i\in[n]$, $w_i=w\in \mathbb{R}_+^*$ and for all $k\in[n_I]$, $b_{a_k}=b \in \mathbb{R}_+^*$, then the dynamics of the system only depends on $w$. When $n_I$ is even: $w<1$ implies that TLN($W,b$) has a unique globally asymptotically stable fixed point; $w>1$ implies that the fixed point obtained for $w<1$ becomes unstable and TLN($W,b$) has two additional asymptotically stable fixed points. If $n_I$ is odd, TLN($W,b$) only has a unique fixed point, which is asymptotically stable when $w<\frac{1}{\cos(\pi/n)}$ (globally when $w<1$) and unstable when $w>\frac{1}{\cos(\pi/n)}$.
\end{remark}
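In this uniform setting the conditions of Theorem~\ref{thm2} reduce to comparisons of $w$ with $1$ and $1/\cos(\pi/n)$; the small helper below summarizes the resulting classification (a sketch that ignores boundary cases and the requirement $n \geq 3$ when $n_I=1$).
\begin{verbatim}
# Sketch: long-term behaviour of a uniform single-cycle TLN (weight w on every
# connection, input b > 0 on every inhibited node), following Theorem 2.
import numpy as np

def classify_uniform_cycle(w, n, n_inhibitory):
    if n_inhibitory % 2 == 0:
        return ("unique globally stable fixed point" if w < 1
                else "two additional stable fixed points (bistability)" if w > 1
                else "undetermined (boundary case)")
    if w < 1:
        return "unique globally asymptotically stable fixed point"
    if w < 1 / np.cos(np.pi / n):
        return "unique asymptotically stable fixed point (not globally)"
    if w > 1 / np.cos(np.pi / n):
        return "unique unstable fixed point: oscillations possible"
    return "undetermined (boundary case)"
\end{verbatim}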
\begin{remark}
In Theorem \ref{thm2}, we assume that the external inputs are absent for excited nodes. Assume that the external input to any excited node, say node $a_k < i < a_{k+1}$, is strictly positive. Then, bounding its dynamics as in Lemma \ref{lem:boundedness}, we know that its activity will be more than $b_i$.
Hence, the next inhibited node $a_{k+1}$ can be silenced forever if
$$
b_i \prod_{j=i}^{a_{k+1}-1}w_{j} > b_{a_{k+1}},
$$
destroying the cycle structure and thus preventing oscillations from emerging.
On the other hand, if the external inputs to excited nodes are strictly negative, the conclusion of Theorem \ref{thm2} will be similar,
but with the condition described in eq.\,\ref{eq:cond_w_b} replaced by
\begin{align*}
b_{a_k}\prod_{i=a_k}^{a_{k+1}-1}w_i-\sum_{j=a_k+1}^{a_{k+1}-1}b_j\prod_{i=j}^{a_{k+1}-1}w_i<b_{a_{k+1}}.
\end{align*}
This means that cycles with an even (odd) number of inhibitory nodes need strong enough connections to generate multi-stability (a limit cycle). We now clarify that the latter condition relates to the weights' strength. With $w=(w_1,\cdots,w_n)$ and using the set of increasing functions $(f_i)_{1 \leq i \leq n}$ such that
\begin{align*}
f_i^{w,b}(x) = w_{i-1}x - b_i
\end{align*}
one can write the last condition as
\begin{align*}
f_{a_{k+1}}^{w,b} \circ \cdots \circ f_{a_{k}+1}^{w,b}(b_{a_k}) < 0.
\end{align*}
Hence, the left-hand term is increasing in every weight.
One should also note that under this negative input assumption to excited nodes, when weights are weak, the support of the fixed point might be different from $[n]$. In particular, some excited nodes might not belong to the support.
\end{remark}
\begin{remark}
When the decay rates are not all the same (here all of them are $-1$), similar results hold but the conditions for stability are more difficult to state precisely. Finally, when considering the EI network (two nodes), the system always admits a unique globally asymptotically stable fixed point with support $\{1,2\}$. Indeed, it is easy to show that the system always reaches the domain where the activity of the inhibitory node is small enough that one can remove the threshold function in eq. \ref{eq:TLN}, and the eigenvalues of the Jacobian matrix are then $\pm i \sqrt{w_1w_2}-1$. No oscillations are thus possible, which is a simple example showing that negative loops are not sufficient to generate oscillations in non-smooth dynamical systems.
\end{remark}
Similar results have been shown by Snoussi \cite{snoussi1998necessary} and Gouze \cite{gouze1998positive}. Considering dynamical systems of the form $\Dot{x} = f(x)$, where $f$ is a continuously differentiable function on a given open convex set with a constant-sign Jacobian matrix, they used graph-theoretic methods to show that a negative loop in this matrix is a necessary condition for oscillations. In our case, $f$ is not continuously differentiable, the elements of the Jacobian matrix can change sign within the state space, and we show that additional constraints are needed for oscillations to arise.
A formal proof of the aforementioned theorems is provided in Appendix \ref{apdx:proof} by using classical dynamical theory tools.
\subsection*{Intuition behind the proof of the theorems}
The idea behind our proof can be explained graphically. We assume that nodes cannot oscillate due to their intrinsic activity and that a fixed external input only drives them to a non-zero activity which does not change over time. Therefore, they need input from their pre-synaptic (upstream) nodes to change their state in a periodic manner and generate oscillations. In such a network, if we perturb node $i$ with a pulse-like input, it is necessary that the perturbation travels through the network and returns to node $i$ with a 180$^{\circ}$ phase shift (i.e. with an inverted sign). Otherwise, the perturbation dies out and each node returns to the state imposed by its external input.
In a network without directed cycles, it is possible to sort the nodes into smaller groups in which nodes do not connect to each other (Fig.\,\ref{fig:proof}a). That is, a network with no directed cycles can be rendered as a feed-forward network in which the network response, by definition, does not return to the node (or group) that was perturbed. Such a network can only oscillate when the intrinsic dynamics of individual nodes allow for oscillatory dynamics.
However, having a directed cycle is no guarantee of oscillations because the network activity must return to the starting node with a 180$^{\circ}$ phase shift. This requirement puts a constraint on the number of inhibitory connections in the cycle. When we assume that there are no delays (or that the delay is constant) in the connections, excitatory connections do not introduce any phase shift, whereas inhibitory connections shift the phase by 180$^{\circ}$ (in the simplest case, they invert the sign of the perturbation). Given this, when a cycle has an even number of inhibitory connections it cannot exhibit oscillations (Fig.\,\ref{fig:proof}b, top). However, replacing an inhibitory connection by an excitatory one can render this cycle able to oscillate (Fig.\,\ref{fig:proof}b, bottom). Therefore, an odd number of inhibitory connections appears to be necessary for oscillations to emerge.
\begin{figure*}
\centering
\includegraphics[width=1\linewidth]{FigureB.eps}
\caption{\textbf{Intuitive explanations of Theorems\,\ref{thm1} and \ref{thm2}.} \textbf{a}: A visual representation of why directed cycles are important for network oscillations. By rearranging all nodes, any network without directed cycles can be seen as a feed-forward network, which makes the system reach a stable fixed point. \textbf{b}: An intuitive explanation of the odd inhibitory cycle rule, showing the activities of two 6-node loops. An odd number of inhibitory connections (bottom) can help the system oscillate, while an even number of inhibitory connections has the opposite effect.}
\label{fig:proof}
\end{figure*}
\subsection*{The effect of network parameters on oscillations}
To test the validity of our theorems in more realistic biological neuronal networks, we numerically simulated the dynamics of the Wilson-Cowan model. Specifically, we investigated the role of synaptic transmission delays, synaptic weights, external inputs and self-connections in shaping the oscillations when the network has directed cycles. In particular, we focused on two networks: the III motif with three inhibitory nodes (odd number of inhibitory links) and the EII motif with one excitatory and two inhibitory nodes (even number of inhibitory links).
Our numerical simulations showed that, for a wide range of parameters (synaptic delays, synaptic weights, external input and self-inhibition), the III network showed oscillations while the EII network did not (Fig.\,\ref{fig:wcnetwork} and \ref{fig:supp_III}). The oscillation frequency, however, depended on the exact values of the synaptic delays, synaptic weights and external inputs. For instance, increasing the synaptic delay reduced the oscillation frequency (Fig.\,\ref{fig:wcnetwork}\,b). Synaptic delays play a more important role in shaping the oscillations in an EI type network (see Supplementary Fig.\,\ref{fig:supp_EI}). The effect of increasing the synaptic strength was contingent on the external inputs. In general, increasing the synaptic strength resulted in a reduction of the oscillation frequency (Fig.\,\ref{fig:wcnetwork}\,c). Next, the oscillation frequency changed in a non-monotonic fashion as a function of the external input, irrespective of the choice of other parameters (Fig.\,\ref{fig:wcnetwork}\,d). Typically, a mid-range input strength resulted in the maximum oscillation frequency. Finally, increasing the self-connection of nodes increased the oscillation frequency, but beyond a certain self-connection strength the node was completely silenced, which changed the network topology and the oscillations disappeared (Fig.\,\ref{fig:wcnetwork}\,e).
Overall, these results are consistent with our rule that an odd number of inhibitory nodes and strong enough connections are necessary to induce oscillations in a directed cycle. The actual frequency of oscillations depends on the specific network parameters.
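The qualitative behaviour reported in Fig.~\ref{fig:wcnetwork} can be explored with a delayed Wilson-Cowan rate model along the lines of the sketch below; the sigmoid transfer function, Euler scheme and parameter values are illustrative choices, not the exact settings used for the figure.
\begin{verbatim}
# Sketch: delayed Wilson-Cowan dynamics for a small motif (Euler integration).
import numpy as np

def simulate_wilson_cowan(W, I, tau=10.0, delay=2.0, dt=0.1, T=2000.0):
    """W[i, j]: weight from node j to node i (negative for inhibition);
    I: external inputs; a common delay (in ms) is applied to all connections."""
    n, steps, d = len(I), int(T / dt), int(delay / dt)
    x = np.zeros((steps, n))
    sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))
    for t in range(1, steps):
        x_delayed = x[max(t - d, 0)]
        dx = (-x[t - 1] + sigmoid(W @ x_delayed + I)) / tau
        x[t] = x[t - 1] + dt * dx
    return x

# Example: III motif (three mutually inhibiting nodes in a ring).
W = -6.0 * np.roll(np.eye(3), -1, axis=1)
rates = simulate_wilson_cowan(W, I=np.full(3, 2.0))
\end{verbatim}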
\begin{figure*}
\centering
\includegraphics[width=0.7\linewidth]{FigureC.eps}
\caption{\textbf{Influence of network properties on the oscillation frequency in motifs III and EII with Wilson-Cowan model.} \textbf{a}: The changed network parameters are shown in the table. Red (green) connections are inhibitory (excitatory) and black arrows are the external inputs. \textbf{b-e}: We systematically varied the synaptic delay time \textbf{b}, synaptic weights \textbf{c}, external input \textbf{d}, and self-connection \textbf{e}. These parameters were varied simultaneously for all the synapses i.e. in each simulation all synapses were homogeneous. Green, orange, red and turquoise respectively show the effect of synaptic delay, synaptic strength, external input and self-inhibition. See the Supplementary Fig.\,\ref{fig:supp_III} and Fig.\,\ref{fig:supp_EI} for more detailed results about III and EI network motifs.}
\label{fig:wcnetwork}
\end{figure*}
\subsection*{Oscillators in the cortex-basal ganglia network}
Next, we use our theorem to explain recent experimental observations about the mechanisms underlying the emergence of oscillations in the basal ganglia. The emergence of 15-30\,Hz (beta band) oscillations in the cortico-basal ganglia (CBG) network is a ubiquitous feature of Parkinson's disease (PD) \cite{raz2000firing, bergman1998physiological,sharott2014activity, neumann2016subthalamic}. Based on their connectivity and activity, the subthalamic nucleus (STN) and globus pallidus externa (GPe) subnetwork has emerged as the most likely generator of beta oscillations \cite{plenz1999basal, bevan2002move}. The STN-GPe subnetwork becomes oscillatory when their mutual connectivity is altered \cite{Terman2002,Holgado2010conditions}, when neurons become bursty \cite{tachibana2011subthalamo, Bahuguna2022}, or when striatal inputs to GPe increase \cite{Kumar2011, Mirzaei2017, Sharott2017, chakravarty2022transient}. However, oscillations might also be generated by the striatum \cite{mccarthy2011striatal}, by the interaction between the direct and hyperdirect pathways \cite{leblois2006competition}, and even by cortical networks that project to the BG \cite{brittain2014oscillations}.
Recently, de la Crompe et al. \cite{Crompe2020} used optogenetic manipulations to shed light on the mechanisms underlying oscillation generation in PD. They showed that GPe is essential to generate beta band oscillations while the motor cortex and STN are not. These experiments force us to rethink the mechanisms by which beta band oscillations are generated in the CBG network.
To better understand when GPe and/or STN are essential for beta band oscillations, we identified the network motifs which fulfill the odd inhibitory cycle rule. For this analysis, we excluded D1 SPNs because they have a very low firing rate in the PD condition \cite{Sharott2017}. In addition, the cortex is treated as a single node in the CBG network.
The CBG network can be partitioned into 238 subnetworks with 2, 3, 4, 5 or 6 nodes (see Supplementary Fig.\,\ref{fig:bg2nodes}-\ref{fig:bg7nodes}). Among these partitions, there are five loops (or cycles) in the CBG network with one or three inhibitory projections: Proto-STN, STN-GPi-Th-Cortex, Proto-Arky-D2, Proto-FSN-D2, and Proto-GPi-Th-Cortex-D2 (Fig.\,\ref{fig:bg_motifs}a). One or more of these 5 loops appeared in 88 (out of 238) subnetworks of the CBG (see Fig.\,\ref{fig:bg_motifs}b, colors indicate different loops). Larger subnetworks consisting of 5 and 6 nodes contain multiple smaller subnetworks (with 2 or 3 nodes) that can generate oscillations (boxes with multiple colors in Fig.\,\ref{fig:bg_motifs}b).
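The counting of potential oscillators follows directly from the odd inhibitory cycle rule; the sketch below lists directed cycles with an odd number of inhibitory edges using networkx. The edge signs and the subset of CBG projections encoded here are illustrative only, not the full connectivity used in our analysis.
\begin{verbatim}
# Sketch: list directed cycles with an odd number of inhibitory edges.
# Only a small, illustrative subset of CBG projections is encoded.
import networkx as nx

edges = {                    # (source, target): +1 excitatory, -1 inhibitory
    ("STN", "Proto"): +1, ("Proto", "STN"): -1,
    ("D2", "Proto"): -1,  ("Proto", "Arky"): -1, ("Arky", "D2"): -1,
    ("Proto", "FSN"): -1, ("FSN", "D2"): -1,
}
G = nx.DiGraph()
G.add_edges_from(edges.keys())

oscillator_candidates = []
for cycle in nx.simple_cycles(G):
    # Count inhibitory edges along the cycle (wrapping around at the end).
    pairs = list(zip(cycle, cycle[1:] + cycle[:1]))
    n_inhibitory = sum(edges[p] < 0 for p in pairs)
    if n_inhibitory % 2 == 1:
        oscillator_candidates.append(cycle)

print(oscillator_candidates)  # e.g. [['STN', 'Proto'], ['Proto', 'Arky', 'D2'], ...]
\end{verbatim}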
Based on our odd inhibitory cycle rule, we found three oscillatory subnetworks in the BG which do not involve the STN (Fig.\,\ref{fig:bg_motifs}a, cyan, green and purple subnetworks). However, each of these oscillatory subnetworks involves prototypical neurons (from the GPe), which receive excitatory input from STN. Therefore, it is not clear whether inhibition of STN can affect oscillations or not. To address this question, we first simulated the dynamics of a four-node motif (Fig.\,\ref{fig:bg_motifs}c, top) using a Wilson-Cowan type model (see Methods). In this subnetwork, there are three cycles: the Proto-STN loop with one inhibitory connection, the Proto-STN-Arky-D2 loop with three inhibitory connections and the Proto-Arky-D2 loop with three inhibitory connections.
We systematically varied the external inputs to the STN and D2-SPNs and measured the frequency of oscillations (see Methods). We found that, for weak inputs to the D2-SPNs, the Proto-STN subnetwork generated oscillations for weak positive input (Fig.\,\ref{fig:bg_motifs}c, bottom). However, as the input to D2-SPNs increased, the oscillation frequency decreased and oscillations were observed even for a very strong drive to STN (Fig.\,\ref{fig:bg_motifs}c, bottom). That is, in this model, the Proto-STN and Proto-D2-Arky subnetworks compete to generate oscillations, and which subnetwork wins depends on their inputs. To disentangle the oscillations of each of these two subnetworks, we performed 'lesion' experiments in our model (see Methods). These experiments also mimicked lesions performed in non-human primates \cite{tachibana2011subthalamo}.
When we removed the D2-SPN to Proto projections, the network could oscillate, but only because of the Proto-STN subnetwork (Fig.\,\ref{fig:bg_motifs}d). In this setting, we obtained relatively high-frequency beta band oscillations, but only for a small range of excitatory inputs to the STN (Fig.\,\ref{fig:bg_motifs}d, bottom), and inhibition of STN would certainly abolish any oscillation. Next, when we removed the STN output (equivalent to inhibition of STN), the Proto-D2-Arky subnetwork generated oscillations for weak positive inputs to the D2-SPNs (Fig.\,\ref{fig:bg_motifs}e, bottom). Note that, unlike in Fig.\,\ref{fig:bg_motifs}c, here we injected additional input to Proto to compensate for the loss of excitatory input from STN and to ensure that it had sufficient baseline activity. The frequency of the Proto-D2-Arky oscillations was smaller than that observed for the Proto-STN subnetwork because the former involves a three-synapse loop. However, as we have shown earlier, the frequency of oscillations can be changed by scaling the connection weights or external inputs (Fig.\,\ref{fig:wcnetwork}). Overall, these results suggest that, in principle, it is possible for the CBG network to oscillate even when STN is removed from the network.
\begin{figure*}
\centering
\includegraphics[width=0.7\linewidth]{FigureD.eps}
\caption{\textbf{Schematic of the CBG network model with potential oscillators and the interaction between two oscillators in the Wilson-Cowan model.} \textbf{a}: CBG structure with red lines denoting inhibition and green lines denoting excitation, along with five potential oscillators based on the odd inhibitory cycle rule. \textbf{b}: Oscillation in all BG motifs from 2 to 6 nodes based on the odd inhibitory cycle rule. Each grid cell represents a separate motif. We use different colors to mark motifs that can oscillate; each color corresponds to an oscillator from panel \textbf{a}. \textbf{c}: The dependence of the oscillation frequency on the external inputs to D2 and STN in a BG subnetwork. External inputs to Proto and Arky are 1 and 3, respectively. \textbf{d}: Same as \textbf{c} but with the connection from D2 to Proto removed. \textbf{e}: Same as \textbf{c} but with the connections from STN removed and the input to Proto increased from 1 to 4.
}
\label{fig:bg_motifs}
\end{figure*}
\subsection*{Oscillations in a model of the basal ganglia with spiking neurons}
Thus far we have only illustrated the validity of our theorems in a firing rate-based model. To be of any practical value to brain science, it is important to check whether our theorems also hold in a network with spiking neurons. To this end, we simulated the two subnetworks with three inhibitory connections: Proto-D2-FSN and Proto-D2-Arky (see Methods). These subnetworks were simulated using a previous model of the BG with spiking neurons \cite{chakravarty2022transient}.
The subnetworks Proto, Arky, D2-SPN and FSN have too little recurrent connectivity to oscillate on their own. We provided Poisson type external input. All neurons in a subnetwork received the same input rate but a different realization of the Poisson process. Both the Proto-D2-FSN (Fig.\,\ref{fig:bgspiking}a) and Proto-D2-Arky (Fig.\,\ref{fig:bgspiking}b) subnetworks showed $\beta$-band oscillations. In the Proto-D2-FSN loop, D2-SPN neurons have a relatively high firing rate. This could be a criterion to exclude this loop as a potential contributor to the beta oscillations.
Next, we mimicked the STN inhibition experiments performed by de la Crompe et al. \cite{Crompe2020} in our model. To this end, we simulated the dynamics of the BG network excluding D1-SPNs (because of their low firing rate in the PD condition) and FSNs (because with FSNs in the oscillation loop, D2-SPNs may have non-physiological firing rates). In this reduced model of the BG, we adjusted the inputs to operate in a mode where either the Proto-D2-Arky (Fig.\,\ref{fig:bgspiking}c) or the Proto-STN (Fig.\,\ref{fig:bgspiking}d) loop was generating the oscillations. In both cases, we systematically increased the inhibition of STN neurons.
In the Proto-D2-Arky mode, as we inhibited STN neurons, the firing rate of the Proto neurons decreased and oscillations in the STN population diminished, but Proto neurons showed clear beta band oscillations (Fig.\,\ref{fig:bgspiking}c). By contrast, and as expected when the STN-Proto loop was generating the oscillations, increasing the STN inhibition abolished the oscillations in both STN and Proto neurons (Fig.\,\ref{fig:bgspiking}d). In the STN-Proto loop, when STN is inhibited there is no cycle left in the network and therefore oscillations diminish, whereas the Proto-D2-Arky loop remains unaffected by the STN inhibition (except for a change in the firing rate of the Proto neurons). As shown in Fig.\,\ref{fig:bg_motifs}c, whether oscillations are generated by the Proto-D2-Arky or the STN-Proto loop depends on the relative input to the D2 or STN neurons. It is thus possible that in rodents D2-SPNs receive stronger input from the cortex than STN and, therefore, oscillations survive despite near-complete inhibition of STN.
\begin{figure*}
\centering
\includegraphics[width=1\linewidth]{FigureE.eps}
\caption{\textbf{Oscillations in a leaky integrate-and-fire (LIF) spiking neuronal network model of specific BG motifs.} \textbf{a-b}: Average peristimulus time histograms (PSTH) of all neurons in the \textbf{a} Proto-FSN-D2 and \textbf{b} Proto-Arky-D2 motifs under the Parkinsonian condition, with the power spectral density (PSD) shown at the top right. \textbf{c}: PSTH of Proto and STN in a BG subnetwork with the Proto-Arky-D2 motif as the oscillator, under different levels of STN inhibition. \textbf{d}: Same as \textbf{c} but with the oscillator changed from Proto-Arky-D2 to Proto-STN. }
\label{fig:bgspiking}
\end{figure*}
\section*{Discussion}\label{sec:discussion}
Here we prove, for a single-cycle TLN model, and illustrate with numerical simulations of biological networks, that when the number of inhibitory nodes in a directed cycle is odd and the connections are strong enough, the system has the potential to oscillate. In 1981, Thomas \cite{thomas1981relation} conjectured that at least one negative feedback loop (i.e., a loop with an odd number of repressors) is needed for gene regulatory networks to have periodic oscillating behavior. This conjecture was proven for smooth dynamical systems by Snoussi \cite{snoussi1998necessary} and Gouze \cite{gouze1998positive}, but their proof required the node transfer function to be differentiable everywhere. We here prove a more complete theorem for the case where the node transfer function is threshold-linear, as is the case for many networks in the brain. Thus, together with the previous results of Snoussi \cite{snoussi1998necessary} and Gouze \cite{gouze1998positive}, we further expand the scope within which we can comment on the potential of a network to generate oscillations based on the connectivity structure alone. In addition, we complement this condition with one on the connection strengths, stating that the latter need to be strong enough for the system to possibly oscillate. Finally, oscillations can be quenched by adding positive external input to excited nodes.
A key assumption of our analysis is that there are no delays in the network. Indeed, delays within and between subnetworks can have a strong effect on oscillations \cite{Kim2020dynamics}. In the numerical simulations of the basal ganglia network, we included biologically realistic synaptic delays (i.e., connection delays shorter than the time constants of the neurons). Our simulations suggest that such delays do not alter our conclusions and only determine the oscillation frequency. However, we cannot comment on how the results may change when delays become longer than the time constant of the nodes.
\subsubsection*{Interactions between input and network structure}
Previous models suggest that when we excite an excitatory node or inhibit an inhibitory node, oscillations can emerge and strengthen \cite{Kumar2011,Ledoux2011}. By contrast, when we inhibit an excitatory node or excite an inhibitory node, oscillations are quenched. This can be summarised as the 'Oscillations Sign Rule'. Let us label the excitatory population as positive and the inhibitory one as negative, and likewise label excitatory inputs as positive and inhibitory inputs as negative. If we multiply the sign of the node by the sign of the stimulation, we can comment on the fate of oscillations in a qualitative manner. For example, inhibition of an inhibitory node gives $-\times - = +$, i.e., oscillations should increase, whereas inhibition of an excitatory node gives $-\times + = -$, i.e., oscillations should decrease. The 'Oscillations Sign Rule' scales to larger networks with more nodes. With the 'Odd Cycle Rule', as we have shown, we can comment on whether a directed cycle will oscillate or not from the count of inhibitory links. Combining the 'Oscillations Sign Rule' with the 'Odd Cycle Rule' gives a more complete qualitative picture of whether stimulating a node in a network will generate oscillations or not.
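As a toy illustration of this bookkeeping, the short Python snippet below (a hypothetical helper, not part of our simulation code) encodes the rule as a product of signs.
\begin{verbatim}
# Toy illustration of the 'Oscillations Sign Rule' (hypothetical helper,
# not part of the simulation code used in this study).
def oscillation_fate(node_sign, stim_sign):
    """node_sign: +1 for an excitatory node, -1 for an inhibitory node.
    stim_sign: +1 for excitatory input, -1 for inhibitory input.
    Returns +1 if oscillations are expected to strengthen, -1 if quenched."""
    return node_sign * stim_sign

# Inhibiting an inhibitory node (-1 * -1 = +1): oscillations strengthen.
assert oscillation_fate(-1, -1) == +1
# Inhibiting an excitatory node (+1 * -1 = -1): oscillations are quenched.
assert oscillation_fate(+1, -1) == -1
\end{verbatim}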
\subsubsection*{Interaction between node properties and network structure}
In our proof we have assumed that nodes follow rather simple dynamics and have a threshold-linear transfer function. In reality, nodes in physical, chemical and biological systems can have more complex dynamics. For instance, biological neurons have the property of spike-frequency adaptation or rebound spiking. Similarly, synapses in the brain can increase or decrease their weights based on the recent history of inputs, which is referred to as short-term facilitation or short-term depression \cite{Stevens1995facilitation}. Such biological properties can be absorbed into the network structure in the form of an extra inhibitory or excitatory connection. When nodes can oscillate given their intrinsic dynamics, the question becomes more about whether a network structure can propagate oscillations to other nodes.
\subsubsection*{Oscillations in the basal ganglia}
We applied our results to understand the mechanisms underlying the emergence of PD-related pathological oscillations in the basal ganglia. Given that there are 8 key neuron populations in the basal ganglia, we enumerated 238 possible directed cycles. From 2-node motifs to 6-node motifs, our odd cycle rule identified 88 potential directed cycles that can generate oscillations. Among these, 81 cycles feature the GPe (either the Proto or the Arky type, or both) and 66 feature the STN. Which specific cycle underlies the oscillations depends on the exact input structure. For instance, when the input to STN is stronger than that to the D2 neurons, the STN-GPe network generates oscillations; when inputs to D2 neurons are stronger, the D2-Proto-Arky cycle can become the oscillator. That is, STN is not necessary to generate oscillations in the basal ganglia. Our results also suggest that, besides focusing on the network connectivity, we should estimate the inputs to different nodes in order to pinpoint the key nodes underlying the PD-related pathological oscillations - that would be a way to reconcile the recent findings of de la Crompe et al. \cite{Crompe2020} with previous results.
\subsubsection*{Beyond neural networks}
In this work we have used the odd cycle rule to study oscillations in the basal ganglia. However, oscillatory dynamics and the odd cycle rule show up in many chemical, biological and even social systems, such as neuronal networks \cite{bel2021periodic}, psychological networks \cite{greenwald2002unified}, social and political networks \cite{leskovec2010signed, milo2004superfamilies, Heider1946attitudes, Cartwright1956attitudes}, resting-state networks in autism \cite{moradimanesh2021altered} and gene networks \cite{farcot2010limit,jafari2021structure}. In fact, Thomas' conjecture \cite{thomas1981relation} about the structural conditions for oscillations was originally made for gene regulatory networks. Therefore, we think that the insights obtained from our analytical work can be extended to many other chemical, biological and social networks. It would be interesting to check to what extent our prediction of quenching oscillations by exciting the excitatory nodes holds in systems other than biological neuronal networks.
\section*{Methods}\label{sec:methods}
To study the emergence of oscillations in the basal ganglia, we used three models: a Threshold-Linear Network (TLN), a Wilson-Cowan model, and a network of spiking neurons. The TLN model (eq. \ref{eq:TLN}) was used to rigorously prove that simple conditions, such as the odd inhibitory cycle rule, can lead to oscillations (Theorems\,\ref{thm1} and \ref{thm2}). The Wilson-Cowan type firing rate-based model was used to find the structural constraints on oscillations and to determine the effect of network properties (such as delays, synaptic weights, external inputs, and self-inhibition) on the emergence of oscillations. Finally, to demonstrate the validity of the odd inhibitory cycle rule in a more realistic model, we used a network of spiking neurons.
\subsection*{Wilson-Cowan dynamics}
In the firing rate-based models, we reduced each cortex-basal ganglia (CBG) subnetwork to a single node. To describe the firing rate dynamics of such a node, we used the classic Wilson-Cowan model \cite{WilsonCowan1972}
\begin{equation}
\tau \frac{d r_i(t)}{d t}=-r_i(t)+F\left(\sum_{j=1}^n w_{ij}r_j+I_i^{ext}\right)
\end{equation}
where $r_i(t)$ is the firing rate of the $i$th node, $\tau$ is the time constant of the population activity, $n$ is the number of nodes (or subnetworks), $w_{ij}$ is the strength of the connection from node $j$ to node $i$, and $I_i^{ext}$ is the external input to node $i$. $F$ is a nonlinear activation function relating output firing rate to input, given by
\begin{equation}
F(x)=\frac{1}{1+e^{-a(x-\theta)}}-\frac{1}{1+e^{a \theta}}
\end{equation}
where the parameter $\theta$ is the position of the inflection point of the sigmoid, and $\frac{a}{4}$ is the slope at $\theta$. Here, $\tau$, $\theta$, and $a$ were set to 20, 1.5, and 3, respectively. Other parameters of the model varied with each simulation. The simulation-specific parameters are shown in Tables\,\ref{tab:IIInetwork} and \ref{tab:EIInetwork} for Fig.\,\ref{fig:wcnetwork}, and in Table\,\ref{tab:bgnetwork} for Fig.\,\ref{fig:bg_motifs}.
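As an illustration, the following Python sketch integrates the Wilson-Cowan equations above with the Euler method used in the Simulation tools section. The three-node all-inhibitory ring, its weights and its external inputs follow the nominal values of Table~\ref{tab:IIInetwork}; the function names, the simulated duration and the use of Python (rather than the MATLAB code actually used) are illustrative assumptions.
\begin{verbatim}
import numpy as np

def F(x, a=3.0, theta=1.5):
    # Sigmoid activation from the Methods:
    # F(x) = 1/(1+exp(-a(x-theta))) - 1/(1+exp(a*theta)), so that F(0) = 0.
    return 1.0 / (1.0 + np.exp(-a * (x - theta))) - 1.0 / (1.0 + np.exp(a * theta))

def simulate_wilson_cowan(W, I_ext, tau=20.0, dt=0.01, T=1000.0):
    # Euler integration of tau dr/dt = -r + F(W r + I_ext) for n coupled nodes.
    n = W.shape[0]
    r = np.zeros(n)
    trace = np.empty((int(T / dt), n))
    for step in range(trace.shape[0]):
        r = r + dt / tau * (-r + F(W @ r + I_ext))
        trace[step] = r
    return trace

# Three-node all-inhibitory ring (the III motif); W[i, j] is the weight
# from node j to node i, following the nominal values of the table above.
W = np.array([[  0.0,   0.0, -15.0],
              [-15.0,   0.0,   0.0],
              [  0.0, -15.0,   0.0]])
rates = simulate_wilson_cowan(W, I_ext=np.full(3, 6.0))
\end{verbatim}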
\subsection*{Network model with spiking neurons}
The basal ganglia network with spiking neurons was taken from a previous model by Chakravarty et al. \cite{chakravarty2022transient}. Here we describe the model briefly and for details we refer the reader to the paper by Chakravarty et al. \cite{chakravarty2022transient}.
\subsubsection*{Spiking neuron model}
Here, we excluded D1-SPNs because they have a rather low firing rate in the PD condition. The striatal D2-type spiny neurons (D2-SPNs), fast-spiking neurons (FSNs) and STN neurons were modelled as standard LIF neurons with conductance-based synapses. The membrane potential $V^x(t)$ of these neurons was given by:
\begin{equation}
C_m^{x} \frac{d V^{x}(t)}{d t}= I_{e}(t) + I_{\text{syn}}(t)-g_L^{x}\left[V^{x}(t)-V_{\text{reset}}^{x}\right]
\end{equation}
where $x \in \{$D2-SPN, FSN, STN$\}$, $I_{e}(t)$ is the external current induced by Poisson-type spiking inputs (see below), and $I_{\text{syn}}(t)$ is the total synaptic input (including both excitatory and inhibitory inputs). When $V^x$ reached the threshold potential $V^x_{th}$, the neuron was clamped to $V^x_{reset}$ for a refractory period $t_{ref}$ = 2 ms. The parameter values and their meaning for D2-SPN, FSN and STN neurons are summarized in Tables\,\ref{tab4}, \ref{tab5} and \ref{tab6}, respectively.
We used the adaptive exponential integrate-and-fire (AdEx) model to simulate Proto and Arky neurons of the globus pallidus externa (GPe), with their dynamics defined as
\begin{align*}
C^{x} \frac{d V^{x}(t)}{dt} &= - g_L^{x}[V^{x}(t)-V^{x}_{\text{reset}}] - w^{x}+I^{x}_{\text{syn}}(t) + I_{e} +g_L^{x} \Delta_{T} \exp \left(\frac{V^{x}(t)-V_{T}^{x}}{\Delta_{T}}\right)\\
\tau_{w} \dot{w}^{x} &= a\left(V^{x}(t)-V^{x}_{\text{reset}}\right)-w^{x}
\end{align*}
where $x \in \{$Proto, Arky$\}$. When $V^{x}(t)$ reaches the threshold potential $V^x_{th}$, a spike is generated and $V^{x}(t)$ and $w^x$ are reset to $V_{\text{reset}}^{x}$ and $w^{x}+b$, respectively, where $b$ denotes the spike-triggered adaptation increment.
The parameter values and their meaning for Proto and Arky are specified in Table\,\ref{tab7}.
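For illustration, a minimal forward-Euler sketch of the AdEx dynamics above is given below. All parameter values are placeholders rather than the values of Table~\ref{tab7}, and the actual simulations used the NEST implementation of this neuron model rather than hand-written code.
\begin{verbatim}
import numpy as np

def simulate_adex(I_e, C=100.0, g_L=10.0, V_reset=-60.0, V_th=-50.0,
                  Delta_T=2.0, V_T=-55.0, a=1.0, b=20.0, tau_w=100.0,
                  dt=0.1, T=500.0):
    # Forward-Euler integration of the AdEx equations written above.
    # Units: pF, nS, mV, pA, ms. All values here are illustrative placeholders.
    V, w = V_reset, 0.0
    spike_times = []
    for step in range(int(T / dt)):
        dV = (-g_L * (V - V_reset) - w + I_e
              + g_L * Delta_T * np.exp((V - V_T) / Delta_T)) / C
        dw = (a * (V - V_reset) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_th:               # spike: reset V and increment adaptation by b
            spike_times.append(step * dt)
            V = V_reset
            w += b
    return spike_times

spikes = simulate_adex(I_e=300.0)   # a constant 300 pA drive elicits spiking
\end{verbatim}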
Neurons were connected by static conductance-based synapses. The transient of each incoming synaptic current is given by:
\begin{equation*}
I_{\text{syn}}^{x}(t)=g_{\text{syn}}^{x}(t)\left[V^{x}(t)-E_{\text{rev}}^{x}\right]
\end{equation*}
where $x \in \{$D2-SPN, FSN, STN, Arky, Proto$\}$, $E^x_{\text{rev}}$ is the synaptic reversal potential, and $g_{\text{syn}}^{x}(t)$ is the time course of the conductance transient, given as follows:
\begin{equation*}
g_{\text{syn}}^{x}(t)=\left\{\begin{array}{ll}
J_{\text{syn}}^{x} \frac{t}{\tau_{\text{syn}}} \exp \left(\frac{-\left(t-\tau_{\text{syn}}\right)}{\tau_{\text{syn}}}\right), & \text{for } t \geq 0 \\
0, & \text{for } t<0
\end{array}\right.
\end{equation*}
where $\mathrm{syn} \in \{$exc, inh$\}$, $J_{\text{syn}}^{x}$ is the peak of the conductance transient and $\tau_{\text{syn}}^{x}$ is the synaptic time constant. The synaptic parameters are listed in Table\,\ref{tab8}.\\
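The sketch below evaluates this conductance transient for illustrative values of $J$ and $\tau$ (the values actually used are listed in Table~\ref{tab8}); as the normalisation implies, the transient peaks at $t=\tau_{\text{syn}}$ with amplitude $J_{\text{syn}}^{x}$.
\begin{verbatim}
import numpy as np

def g_syn(t, J=1.0, tau=5.0):
    # Conductance transient from the Methods:
    # J * (t/tau) * exp(-(t - tau)/tau) for t >= 0, and 0 for t < 0.
    # J (peak conductance) and tau (ms) are illustrative, not Table 8 values.
    t = np.asarray(t, dtype=float)
    g = J * (t / tau) * np.exp(-(t - tau) / tau)
    return np.where(t >= 0.0, g, 0.0)

t = np.arange(-5.0, 50.0, 0.1)               # time axis in ms
g = g_syn(t)
assert np.isclose(g.max(), 1.0, atol=1e-2)   # peak value equals J at t = tau
\end{verbatim}
The synaptic current then follows by multiplying this conductance by the driving force $V^{x}(t)-E_{\text{rev}}^{x}$, as in the equation above.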
Some model parameters were changed to operate the BG model in specific modes dominated by a two- or three-node cycle. Tables\,\ref{table:STNinhibition1} and \ref{table:STNinhibition2} list the parameters used for Fig.\,\ref{fig:bgspiking}c and Fig.\,\ref{fig:bgspiking}d, respectively.
\subsection*{External input}
Each neuron in each sub-network of the BG received external input in the form of excitatory Poisson-type spike trains. This input was provided to achieve a physiological level of spiking activity in the network. For more details please see Chakravarty et al. \cite{chakravarty2022transient}. Briefly, the external input was modelled as the injection of a Poisson spike train for a brief period of time using the inhomogeneous\_poisson\_generator device in NEST. The strength of the input stimulation was controlled by varying the amplitude of the EPSP evoked by the injected spike train.
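As an illustration of this protocol, the minimal NEST (Python) sketch below injects a brief Poisson-rate step into a target population; the neuron model, population size, rate values, time window and synaptic weight are illustrative placeholders rather than the values used in the study.
\begin{verbatim}
import nest

nest.ResetKernel()

# Target population standing in for one BG sub-network (model and size illustrative).
neurons = nest.Create("iaf_cond_exp", 100)

# Brief external drive: the rate steps up to 500 spikes/s between 500 and 700 ms.
stim = nest.Create("inhomogeneous_poisson_generator",
                   params={"rate_times": [1.0, 500.0, 700.0],
                           "rate_values": [0.0, 500.0, 0.0]})

# The EPSP amplitude (and hence the stimulation strength) is set via the weight.
nest.Connect(stim, neurons, syn_spec={"weight": 1.0, "delay": 1.0})

nest.Simulate(1000.0)
\end{verbatim}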
\subsection*{STN inhibition experiment}
We used a subnetwork of the basal ganglia to study how STN inhibition affects oscillations when different motifs dominate the dynamics. The connections and external inputs to each neuron in Fig.\,\ref{fig:bgspiking}c and \,\ref{fig:bgspiking}d are listed in Tables\,\ref{table:STNinhibition1} and \ref{table:STNinhibition2}. To simulate increasing inhibition of STN, the external input to STN was reduced from 1 pA to -99 pA in Fig.\,\ref{fig:bgspiking}c and from 30 pA to -50 pA in Fig.\,\ref{fig:bgspiking}d.
\subsection*{Data analysis}
The oscillation frequency of the firing rate-based model was estimated from the power spectral density computed with the \texttt{pwelch} function of MATLAB. For the spiking network, the spiking activity of all neurons in a sub-population was pooled and binned (rectangular bins, bin width = 0.1\,ms). The spectrum of the spiking activity was then calculated from the binned activity using the \texttt{pwelch} function of MATLAB.
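For readers working in Python, the same analysis can be sketched with \texttt{scipy.signal.welch} as a stand-in for MATLAB's \texttt{pwelch}; the segment length and the example call below are illustrative assumptions rather than the exact analysis parameters we used.
\begin{verbatim}
import numpy as np
from scipy.signal import welch

def population_psd(spike_times_ms, t_stop_ms, bin_ms=0.1, nperseg=4096):
    # Pool the spike times (ms) of all neurons in a sub-population, bin them
    # with rectangular 0.1 ms bins, and estimate the PSD of the binned activity.
    fs = 1000.0 / bin_ms                                 # sampling rate in Hz (10 kHz)
    edges = np.arange(0.0, t_stop_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(spike_times_ms, bins=edges)
    freqs, psd = welch(counts, fs=fs, nperseg=nperseg)   # mean removed by default
    return freqs, psd

# Hypothetical usage with pooled spike times from one sub-population:
# freqs, psd = population_psd(all_spike_times_ms, t_stop_ms=5000.0)
\end{verbatim}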
\subsection*{Simulation tools}
The Wilson-Cowan type firing rate-based model was simulated in MATLAB. All the relevant differential equations were integrated using the Euler method with a time step of 0.01 ms. The network of spiking neurons was simulated in Python 3 with the simulator NEST 2.20 \cite{fardet2020nest}. During the simulation, the differential equations of the BG neurons were integrated using the Runge–Kutta method with a time step of 0.1 ms.
\subsection*{Code availability}
The simulation code will be made available on GitHub upon publication of the manuscript.
\newpage
\begin{table*}[h]
\begin{center}
\begin{minipage}{\textwidth}
\caption{Parameters of III network for Fig.\,\ref{fig:wcnetwork} and \ref{fig:supp_III}}
\label{tab:IIInetwork}
\begin{ruledtabular}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lcccccc@{\extracolsep{\fill}}}
& \multicolumn{3}{@{}c@{}}{Synaptic weights} & \multicolumn{2}{@{}c@{}}{population properties} \\\cmidrule{2-4} \cmidrule{5-6}
Populations & I1 & I2 & I3 &external input & delay \\
\midrule
I1 & 0 (-20 -- 0) & 0 & -15 (-20 -- 0) & 6 (0 -- 20) & 0 (0 -- 10) \\
I2 & -15 (-20 -- 0) &0 (-20 -- 0) & 0 & 6 (0 -- 20) & 0 (0 -- 10)\\
I3 & 0 & -15 (-20 -- 0) & 0 (-20 -- 0) & 6 (0 -- 20) & 0 (0 -- 10)\\
\end{tabular*}
\end{ruledtabular}
\footnotetext{Note: The range in parentheses indicates the values over which the parameter was varied}
\end{minipage}
\end{center}
\end{table*}
\begin{table*}[h!]
\begin{center}
\begin{minipage}{\textwidth}
\caption{Parameters of EII network for Fig.\,\ref{fig:wcnetwork}}
\label{tab:EIInetwork}
\begin{ruledtabular}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lcccccc@{\extracolsep{\fill}}}
& \multicolumn{3}{@{}c@{}}{Synaptic weights} & \multicolumn{2}{@{}c@{}}{population properties} \\\cmidrule{2-4} \cmidrule{5-6}
Populations & E1 & I1 & I2 &external input & delay \\
\midrule
E1 & 0 & 0 & -15 (-20 -- 0) & 6 & 0 (0 -- 10)\\
I1 & 15 (0 -- 20) & 0 (-20 -- 0) & 0 & 6 (0 -- 20) & 0 (0 -- 10) \\
I2 & 0 & -15 (-20 -- 0) & 0 (-20 -- 0) & 6 (0 -- 20) & 0 (0 -- 10)\\
\end{tabular*}
\end{ruledtabular}
\footnotetext{Note: The range in parentheses indicates the values over which the parameter was varied}
\end{minipage}
\end{center}
\end{table*}
\begin{table*}[h!]
\begin{center}
\begin{minipage}{\textwidth}
\caption{Parameters for Fig.\,\ref{fig:bg_motifs} (Wilson-Cowan model)}
\label{tab:bgnetwork}
\begin{ruledtabular}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lcccccc@{\extracolsep{\fill}}}
& \multicolumn{4}{@{}c@{}}{Synaptic weights} & \multicolumn{2}{@{}c@{}}{population properties} \\\cmidrule{2-5} \cmidrule{6-7}
Populations & D2 & Arky & Proto & STN & external input & delay \\
\midrule
D2 & 0 & -15 & 0 & 0 & 4 (0 -- 20) & 2 \\
Arky & 0 & 0 & -15 & 15 & 3 & 2 \\
Proto & -15 & 0 & -8 & 15 & 1/4\footnotemark[1] & 2 \\
STN & 0 & 0 & -15 & 5 & 4 (0 -- 20) & 2 \\
\end{tabular*}
\end{ruledtabular}
\footnotetext{Note: The range in parentheses indicates the values over which the parameter was varied}
\footnotetext[1]{The external input to Proto was 1 in Figs.\,\ref{fig:bg_motifs}c and \ref{fig:bg_motifs}d, and was changed to 4 in Fig.\,\ref{fig:bg_motifs}e to allow the Proto-Arky-D2 motif to oscillate.}
\end{minipage}
\end{center}
\end{table*}
\pagebreak
\section*{Acknowledgments}
We thank Kingshuk Chakravarthy for sharing the code of the basal ganglia network with spiking neurons. We thank Dr. Henri Rihiimaki for helpful comments and suggestions. This work was funded in part by the Swedish Research Council (VR), StratNeuro (to AK), Digital Futures grants (to AK and PH), the Inst. of Advanced Studies, University of Strasbourg, France Fellowship (to AK), and the National Natural Science Foundation of China under Grants No. 11572127 and 11872183 (to SL).
|
{
"arxiv_id": "2302.14182",
"language": "en",
"timestamp": "2023-03-01T02:04:08",
"url": "https://arxiv.org/abs/2302.14182",
"yymm": "2302"
} | \section{Introduction}
Actor-critic algorithms underlie many of the recent successes of deep RL \citep[][]{fujimoto2018addressing,haarnoja2018soft,lillicrap2015continuous,schulman2015high,schulman2015trust,schulman2017proximal,silver2014deterministic,voelcker2022value}.
In these algorithms, the actor provides the control policy while the critic provides estimates of the policy's expected long-term returns \citep[i.e. a value function; ][]{barto1983neuronlike,konda1999actor}.
The critic is typically trained using some form of temporal-difference (TD) update \citep[e.g.][]{lillicrap2015continuous,silver2014deterministic, fujimoto2018addressing,haarnoja2018soft,schulman2017proximal}.
These TD updates need to be computed in expectation over a large distribution of visited states and actions, induced by the policy and the environment dynamics \citep[][]{sutton1988learning,sutton2018reinforcement}.
Since this expectation is analytically intractable, TD updates are typically performed based on individually sampled state-action pairs from real environmental transitions (i.e. \ Monte Carlo (MC) estimates).
However, the variance of (MC) TD updates can be quite large, meaning that we need to average over many TD updates for different initial states and actions to get a good estimate of the expected updates \citep{fairbank2011local}.
Model-based strategies provide a promising candidate to tackle this high variance \citep[][]{kaelbling1996reinforcement}.
For instance, Dyna methods, among the most popular model-based strategies, use a learned model of the environment transitions to generate additional imaginary transitions.
These imaginary transitions can be used as extra training samples for TD methods \citep[e.g.][]{sutton1990integrated, gu2016continuous,feinberg2018model,janner2019trust, d2020learn, buckman2018sample}.
Although the additional (imaginary) transitions help in reducing the variance in the expected TD updates, Dyna methods still rely on the same, potentially high-variance (MC) TD-updates as standard TD-learning.
We address the issue of high-variance TD-updates by formulating an expected TD-update over a small distribution of state-action pairs.
We show this expected update can be analytically estimated with a first-order Taylor expansion, in an approach we call \emph{Taylor TD}.
By analytically estimating this expected update, rather than exclusively relying on MC estimates (as in e.g.\ Dyna), we show both theoretically and empirically that we achieve lower-variance TD updates.
Additionally, we show Taylor TD does not affect the stable learning guarantees of TD-learning under linear function approximation \citep[for a fixed policy as shown by][]{tsitsiklis1996analysis}.
Next, we propose a model-based off-policy algorithm, Taylor TD3 (TaTD3), which uses Taylor TD in combination with the TD3 algorithm \citep[][]{fujimoto2018addressing}.
We show TaTD3 performs as well as, if not better than, several state-of-the-art model-free and model-based baseline algorithms on a set of standard benchmark tasks.
Finally, we compare TaTD3 to its ``Dyna'' equivalent, which exclusively relies on MC TD-updates.
We find that the largest benefits of Taylor TD may appear in high-dimensional state-action spaces.
\section{Related work}
Model-based strategies provide a promising solution to improving the sample complexity of RL algorithms \citep[][]{kaelbling1996reinforcement}.
In Dyna methods, a model of the environment transitions is learned through interactions with the environment and then employed to generate additional imaginary transitions, for instance in the form of model roll-outs \citep[][]{sutton1990integrated}.
These imaginary transitions can be used to enhance existing model-free algorithms, leading to improved sample complexity.
For example, within TD-learning, imaginary transitions can be used to train a Q-function by providing additional training examples \citep[e.g.][]{sutton1990integrated,gu2016continuous,d2020learn}.
Alternatively, imaginary transitions can be used to provide better TD targets for existing data points \citep[e.g.][]{feinberg2018model} or to train the actor and/or critic by generating short-horizon trajectories starting at existing state-action pairs \citep[e.g.][]{janner2019trust,clavera2020model,buckman2018sample}.
These (Dyna) approaches have a clear relation to our approach (Taylor TD), as they attempt to estimate the same expected TD-update in Eq.~\eqref{eq_Expected_TD}.
However, Dyna approaches only use potentially high-variance MC estimates, while Taylor TD exploits analytic results to reduce that variance.
Conceptually, our approach may resemble previous methods that also rely on analytical computations of expected updates to achieve lower-variance critic or policy updates \citep[e.g.][]{ciosek2018expected,whiteson2020expected,van2009theoretical,sutton2018reinforcement,asadi2017mean}.
The most well-known example of this is Expected-SARSA.
Expected-SARSA achieves a lower-variance TD-update (relative to SARSA) by analytically computing the expectation over the distribution of target actions in the TD-update (i.e.\ assuming a stochastic target policy) \citep{van2009theoretical,sutton2018reinforcement}:
\begin{align}
\delta_{\theta}(\mathbf{s},\a) = r(\mathbf{s},\a) + \gamma \E_{a' \sim \pi} \bcket{[}{]}{ Q_{\theta}(\mathbf{s}',\a')} - Q_\theta(\mathbf{s},\a)
\end{align}
This approach can only reduce variance of TD-updates at the level of the target actions, $a'$, induced by a stochastic target policy.
In the case of a deterministic target policy, Expected-SARSA does not provide any benefit.
Conversely, our approach attempts to reduce the variance at the level of the initial state-action pairs, $(s,a)$, at which TD-updates are evaluated. That is, we take the expectation over $(s, a)$ instead of $a'$ (see Eq.~\ref{eq_Expected_TD} and \ref{eqSate}), which yields benefits with both stochastic and deterministic target policies.
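For concreteness, the snippet below sketches the Expected-SARSA target for a discrete-action stochastic target policy and contrasts it with the sampled SARSA target; the helper names are illustrative and, since our setting is continuous control, this is only meant to make the variance argument explicit.
\begin{verbatim}
import numpy as np

def expected_sarsa_target(r, q_next, pi_next, gamma=0.99):
    # Expected-SARSA target: r + gamma * sum_a' pi(a'|s') Q(s', a').
    # q_next: Q(s', a') for each discrete action; pi_next: target-policy probabilities.
    return r + gamma * np.dot(pi_next, q_next)

def sarsa_target(r, q_next_sampled, gamma=0.99):
    # Standard SARSA target, using a single sampled action a' ~ pi(.|s').
    return r + gamma * q_next_sampled

# The expected target averages out the sampling noise over a', which is exactly
# the source of variance that Expected-SARSA removes (and which vanishes for a
# deterministic target policy).
\end{verbatim}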
Other RL approaches exploiting analytical computations of expected updates are Expected Policy Gradients \cite{ciosek2018expected,whiteson2020expected} and Mean Actor Critic \cite{asadi2017mean}.
Both methods attempt to reduce the variance of the stochastic policy gradient update by integrating over the action distribution.
Although similar in principle to our approach, these two methods focus on the policy update instead of the critic update and, similarly to Expected-SARSA only apply to stochastic target policies.
In practice, our approach may relate to value gradient methods, as it explicitly incorporates the gradient of the value function into the update \citep[e.g.][]{heess2015learning, clavera2020model,amos2021model,d2020learn,balduzzi2015compatible,fairbank2012value}.
To our knowledge, the value gradient approach that most relates to our work is MAGE \citep{d2020learn}, which, nonetheless, has a radically different motivation from Taylor TD.
MAGE is motivated by noting that the action-gradients of Q drive deterministic policy updates \citep{silver2014deterministic}, so getting the action-gradients right is critical for policy learning.
In order to encourage the action-gradients of Q to be correct, MAGE explicitly adds a term to the objective consisting of the norm of the action-gradient of the TD-error, which takes it outside of the standard TD-framework.
In contrast, our motivation is to reduce the minibatch gradient variance of standard TD updates by performing some analytic integration.
We do this through a first-order Taylor expansion of the TD update.
This difference in motivation leads to numerous differences in the method and analysis, the least of which is that MAGE uses only the action-gradients, while Taylor TD additionally suggests using the state-gradients, as both the state and action gradients can be used to reduce the variance in the minibatch updates.
\section{Background}
Reinforcement learning aims to learn reward-maximising behaviour by interacting with the surrounding environment.
At each discrete time step, $t$, the agent in state $\mathbf{s} \in \mathcal{S}$, chooses an action $\a \in \mathcal{A}$ based on a policy $\pi : \mathcal{S} \rightarrow \mathcal{A}$, and observes a scalar reward, $r$ and a new state $\mathbf{s}' \in \mathcal{S}$ from the environment.
The agent's goal is to find the policy that maximises the expected sum of rewards (i.e. the expected return), from a distribution of initial states (or state-action pairs).
As such, it is usually necessary to compute the expected return for a state-action pair $(\mathbf{s},\a)$ and a policy $\pi$; which we can do with a value function.
Given a policy $\pi$ and an initial state-action pair $(\mathbf{s},\a)$,
we define the value function $Q^\pi (\mathbf{s},\a) = \E\bcket{[}{]}{R_t \mid S_t =\mathbf{s}, A_t = \a}$, where $R_t = \sum_{i=t}^T \gamma^{i-t} r_i$
is the discounted sum of future rewards from the current time step $t$ until termination $T$, with discount factor $\gamma \in [0, 1]$.
The value function or critic, $Q^\pi$, quantifies how good the policy, $\pi$, is in terms of its expected return when taking action $\a$ in state $\mathbf{s}$ and following the policy $\pi$ thereafter.
To estimate the value function for a policy $\pi$, we must usually interact with the environment.
The most popular approach to do so is temporal difference (TD) learning \citep[][]{sutton1988learning}, which is based on the Bellman equation \citep[][]{bellman1966dynamic}.
Assuming the (true) underlying critic $Q^\pi$ is approximated by a differentiable function approximation $Q_\theta$, we can write the overall TD-update over the entire distribution of visited state-action pairs as:
\begin{align}\label{eq_full_exptTD}
\E\bcket{[}{]}{\Delta \theta} = \ \E_{\mathbf{s} \sim d^\pi ; \a \sim \pi}\bcket{[}{]}{\delta_{\theta}(\mathbf{s},\a) \nabla_\theta Q_\theta(\mathbf{s},\a)}
\intertext{where $d^\pi$ is the state distribution induced by policy $\pi$, and}
\delta_{\theta}(\mathbf{s},\a) = r(\mathbf{s},\a) + \gamma Q_{\theta}(\mathbf{s}',\a') - Q_\theta(\mathbf{s},\a)
\end{align}
Here, $\delta_{\theta}(\mathbf{s},\a)$ represents the TD-error for TD(0), although the same expected update applies for any TD method by adjusting the TD-error term \citep[e.g. TD(n), TD($\lambda)$ ][]{sutton1988learning,van2015empirical}.
Note that in off-policy learning, this expectation is computed over the off-policy state distribution $d^{\pi_\text{b}}$, and the behavioural policy $\pi_\text{b}$ while $\a'$ still comes from the target policy $\pi$
\begin{align}\label{eq_full_exptTD_offPol}
\E\bcket{[}{]}{\Delta \theta} = \ \E_{\mathbf{s} \sim d^{\pi_\text{b}} ; \a \sim \pi_\text{b}}\bcket{[}{]}{\delta_{\theta}(\mathbf{s},\a) \nabla_\theta Q_\theta(\mathbf{s},\a)}
\end{align}
where $Q_\theta$ still approximates the target (policy) Q-function $Q^\pi$.
Conversely, in model-based off-policy learning, the initial action, $\a$, in the TD update may be sampled from a third policy which does not correspond to either the behavioral $\pi_\text{b}$ or the target policy $\pi$.
This enables us to explore the value of additional actions independently from the behavioural policy, thanks to the model generating transitions for these additional actions.
In practice, analytically computing the expected updates in Eq.~\eqref{eq_full_exptTD} and ~\eqref{eq_full_exptTD_offPol} is intractable, due to the complex underlying state-action pair distributions (beyond requiring access to the environment dynamics).
Hence, TD-learning methods typically employ a Monte Carlo (MC) (i.e.\ sample-based) estimate of these expected updates.
For instance, at each time step $t$, a TD-update is computed based on state-action pairs that are sampled from the environment, a replay buffer (i.e. off-policy) or a model (or any combination of those three).
\section{Taylor TD-learning}
As mentioned above, we cannot analytically compute the (expected) TD update over the entire distribution of state-action pairs (e.g.\ Eq.~\ref{eq_full_exptTD}).
However, we can reduce the variance by combining MC estimates with some analytic integration.
For instance, at time step $t$, we can consider an expected TD-update over a distribution over actions.
We could compute this expected update for continuous actions being drawn from a Gaussian distribution with mean $\a_t$ and covariance $\mathbf{\Sigma}_\text{a}$.
In particular, we can do this by re-parametrizing the action, $\a$, at which we evaluate the TD-update, in terms of a (deterministic) action at time $t$ (i.e.\ $\a_t$), plus some zero-mean Gaussian random noise, $\d_\text{a}$, with covariance $\mathbf{\Sigma}_\text{a}$.
\begin{align}\label{eq_reparam_actions}
\a &= \a_t + \d_\text{a}
\end{align}
\begin{align}\label{eq_taylor_expct}
\E_{\d_\text{a}}\bcket{[}{]}{\d_\text{a}} &= {\bf {0}} &
\E_{\d_\text{a}}\bcket{[}{]}{\d_\text{a} \d_\text{a}^T} &= \mathbf{\Sigma}_\text{a}
\end{align}
The expected TD update, averaging over actions from the Gaussian distributed policy is,
\begin{align}\label{eq_Expected_TD}
\E_{\d_\text{a}}[\Delta \theta_t] = \eta \E_{\d_\text{a}}[\delta_{\theta}(\mathbf{s}_t,\a_t + \d_\text{a}) \nabla_\theta Q_\theta(\mathbf{s}_t,\a_t + \d_\text{a})]
\end{align}
Standard TD-learning updates, which sample actions from a (Gaussian) policy, can be understood as computing MC estimates of this expected update.
However, these MC estimates would likely be high variance \citep[e.g.][]{ciosek2018expected}, leading to slow learning.
We should stress that $\mathbf{\Sigma}_\text{a}$, the covariance of initial actions in this expected TD-update does not necessarily need to match the covariance of the behavioural policy.
For instance, $\mathbf{\Sigma}_\text{a}$ could be larger than the behavioural policy covariance, enabling the critic to learn the value of a broader range of actions than those taken in the environment (i.e.\ assuming knowledge of $\delta_{\theta}$ and $Q_\theta$ is available for those actions).
\subsection{Action expansion}\label{sec_action_expans}
Here, we show we can analytically approximate the expected-TD update in Eq.~\eqref{eq_Expected_TD}, using a first-order Taylor expansion.
The full expectation is taken over $\d_a$,
\begin{align}
\E\bcket{[}{]}{\Delta \theta_t} = \ &\eta \E\biggl[\delta_{\theta}(\mathbf{s}_t, \a_t + \d_\text{a}) \dd{\theta} Q(\mathbf{s}_t, \a_t + \d_\text{a})\biggr]
\end{align}
Applying the first-order Taylor expansion,
\begin{align}
\E\bcket{[}{]}{\Delta_\text{Ta} \theta_t} = \ \eta \E\biggl[&\b{\delta_{\theta}(\mathbf{s}_t, \a_t) + \d_\text{a}^T \dd{\a} \delta_{\theta}(\mathbf{s}_t, \a_t)} \\
&\dd{\theta} \b{Q_\theta(\mathbf{s}_t, \a_t) + \d_\text{a}^T \dd{\a} Q_\theta(\mathbf{s}_t, \a_t)}\biggr]
\nonumber
\end{align}
As $\dd{\theta}$ is a linear operator, and $\d_\text{a}$ does not depend on $\theta$,
\begin{align}
\E\bcket{[}{]}{\Delta_\text{Ta} \theta_t} = \ &\eta \E\biggl[\b{\delta_{\theta}(\mathbf{s}_t, \a_t) + \d_\text{a}^T
\dd{\a} \delta_{\theta}(\mathbf{s}_t, \a_t)} \\
&\b{\dd{\theta} Q_\theta(\mathbf{s}_t, \a_t) + \d_\text{a}^T \dd{\a, \theta}^2 Q_\theta(\mathbf{s}_t, \a_t)}\biggr] \nonumber\\
\intertext{where $\dd{\a, \theta}^2 Q_\theta(\mathbf{s}_t, \a_t)$ is a matrix of second derivatives. The expectation of $\d_\text{a}$ is zero, so the terms linear in $\d_\text{a}$ are zero, leading to,}
\E\bcket{[}{]}{\Delta_\text{Ta} \theta_t} = \ &\eta \delta_{\theta}(\mathbf{s}_t, \a_t)\dd{\theta} Q_\theta(\mathbf{s}_t, \a_t) \\
\nonumber
+ &\;\eta \E\bcket{[}{]}{\b{\d_\text{a}^T \dd{\a} \delta_{\theta}(\mathbf{s}_t, \a_t)} \b{\d_\text{a}^T \dd{\a, \theta}^2 Q_\theta(\mathbf{s}_t, \a_t)}}
\end{align}
Swapping the order of the terms in the expectation,
\begin{align}
\E\bcket{[}{]}{\Delta_\text{Ta} \theta_t} = \ &\eta \delta_{\theta}(\mathbf{s}_t, \a_t)\dd{\theta} Q_\theta(\mathbf{s}_t, \a_t) \\
\nonumber
+ &\;\eta \E\bcket{[}{]}{\b{\d_\text{a}^T \dd{\a, \theta}^2 Q_\theta(\mathbf{s}_t, \a_t)} \b{\d_\text{a}^T \dd{\a} \delta_{\theta}(\mathbf{s}_t, \a_t)}}
\end{align}
And transposing the first term in the expectation (which we can do as it is overall a scalar),
\begin{align}
\E\bcket{[}{]}{\Delta_\text{Ta} \theta_t} = \ &\eta \delta_{\theta}(\mathbf{s}_t, \a_t)\dd{\theta} Q_\theta(\mathbf{s}_t, \a_t) \\
\nonumber
+ &\;\eta \E\bcket{[}{]}{\b{\dd{\theta, \a}^2 Q_\theta(\mathbf{s}_t, \a_t)\d_\text{a}} \b{\d_\text{a}^T \dd{\a} \delta_{\theta}(\mathbf{s}_t, \a_t)}}
\end{align}
We can then move the terms independent of $\d_\text{a}$ out of the expectation:
\begin{align}
\E\bcket{[}{]}{\Delta_\text{Ta} \theta_t} = \ &\eta \delta_{\theta}(\mathbf{s}_t, \a_t)\dd{\theta} Q_\theta(\mathbf{s}_t, \a_t) \\
+ &\eta \dd{\theta, \a}^2 Q_\theta(\mathbf{s}_t, \a_t) \E\bcket{[}{]}{\d_\text{a} \d_\text{a}^T} \dd{\a} \delta_{\theta}(\mathbf{s}_t, \a_t) \nonumber \\
\intertext{Finally, we know $\E\bcket{[}{]}{\d_\text{a} \d_\text{a}^T} = \mathbf{\Sigma}_\text{a}$,}
\E\bcket{[}{]}{\Delta_\text{Ta} \theta_t} = \ &\eta \delta_{\theta}(\mathbf{s}_t, \a_t)\dd{\theta} Q_\theta(\mathbf{s}_t, \a_t) \\
+ &\eta \dd{\theta, \a}^2 Q_\theta(\mathbf{s}_t, \a_t) \mathbf{\Sigma}_\text{a} \dd{\a} \delta_{\theta}(\mathbf{s}_t, \a_t) \nonumber
\end{align}
If we assume the action covariance is isotropic, $\mathbf{\Sigma}_\text{a} = \lambda_\text{a} {\bf {I}}$, we get the following (1st-order) Taylor TD-update estimating the expected TD-update formulated in Eq.~\ref{eq_Expected_TD}:
\begin{align}
\label{FirstDirect_ActionUpdate}
\E\bcket{[}{]}{\Delta_\text{Ta} \theta_t} = \ &\eta \delta_{\theta}(\mathbf{s}_t, \a_t)\dd{\theta} Q_\theta(\mathbf{s}_t, \a_t) \\
+ &\eta \lambda_\text{a} \dd{\theta, \a}^2 Q_\theta(\mathbf{s}_t, \a_t) \dd{\a} \delta_{\theta}(\mathbf{s}_t, \a_t) \nonumber
\end{align}
The first term is the standard TD update with state $\mathbf{s}_t$ and action $\a_t$.
The new second term tries to align the action gradient of the critic (Q-function) with the action gradient of the TD target.
Conceptually, this gradient matching should help reduce the variance across TD-updates since it provides a way to estimate the expected update in Eq.~\eqref{eq_Expected_TD}.
In the appendix, we include a proof that at least under linear function approximation, these extra Taylor gradient terms do not affect the stability of TD-learning, assuming $\lambda_a$ and $\eta$ are chosen in a certain way (see Appendix \ref{app.stabilityProof}).
Critically, even with linear function approximation, there are errors in the first-order Taylor expansion, as we use a nonlinear function to transform actions into a feature vector, while the Taylor expansion is taken with respect to the underlying action.
Nevertheless, we provide theoretical and empirical evidence that the first-order Taylor expansion reduces the variance of standard TD-updates and supports efficient learning, even under non-linear function approximation (see Sections~\ref{sec_varAnly}, \ref{sec_varReduct} and \ref{sec_main_results}).
\subsection{State expansion}
We are not limited to formulating an expected TD-update over a distribution of actions, but we can expand this to a distribution of states too.
Namely, instead of performing a TD-update at the single state location, $\mathbf{s}_t$, we perform this update over a distribution of states.
We take this distribution to be Gaussian with mean at $\mathbf{s}_t$ and covariance $\mathbf{\Sigma}_\text{s}$.
To do so, we can re-write the state at time $t$ as:
\begin{align}
\mathbf{s} &= \mathbf{s}_t + \d_\text{s}
\end{align}
where $\d_\text{s}$ is a Gaussian random variable with mean zero and covariance $\mathbf{\Sigma}_\text{s}$,
\begin{align}
\E_{\d_\text{s}}\bcket{[}{]}{\d_\text{s}} &= {\bf {0}} &
\E_{\d_\text{s}}\bcket{[}{]}{\d_\text{s} \d_\text{s}^T} &= \mathbf{\Sigma}_\text{s}
\end{align}
Based on this, we can formulate an expected TD-update, averaging over this Gaussian distribution of states.
\begin{align}
\E_{\d_\text{s}}[\Delta \theta_t] = \eta \E_{\d_\text{s}} [\delta_{\theta}(\mathbf{s}_t + \d_\text{s},\a_t) \nabla_\theta Q_\theta(\mathbf{s}_t + \d_\text{s},\a_t)] \label{eqSate}
\end{align}
Again, we can approximate this expected update with a first-order Taylor approximation, but this time, expanding it around $\mathbf{s}_t$.
Based on a similar derivation to the action expansion, we get the following update (see Appendix \ref{Sate_proof} for the full derivation):
\begin{multline}
\E_{\d_\text{s}}[\Delta_\text{Ta} \theta_t] = \ \eta \delta_{\theta}(\mathbf{s}_t, \a_t)\dd{\theta} Q_\theta(\mathbf{s}_t, \a_t) \\
+ \ \eta \lambda_s \dd{\theta, \mathbf{s}}^2 Q_\theta(\mathbf{s}_t, \a_t) \dd{\mathbf{s}} \delta_{\theta}(\mathbf{s}_t, \a_t) \label{FirstDirect_SateUpdate}
\end{multline}
The rationale behind this update is to tackle some of the TD-update variance induced by the (visited) state distribution, although we expect this to work only for states close to the visited ones (i.e.\ for small values of $\lambda_\text{s}$).
\begin{algorithm}[t]
\caption{Taylor TD-learning}\label{alg_general}
\begin{algorithmic}
\STATE $\text{Initialise model, critic and policy parameters} \ \{w, \theta, \phi\}$
\STATE $\text{Initialise} \ \zeta_\text{a}, \zeta_\text{s} = 0 $
\\
\FOR{each training step}
\STATE $ \text{Collect transition} \ (\mathbf{s},\a,\mathbf{s}') \ \text{acting according to policy} \ \pi_\phi$
\STATE $ w \leftarrow w - \eta_m \nabla_w \mathcal{L}(\mathbf{s},\a,\mathbf{s}')$
\STATE $\hat{\mathbf{s}}' \sim p_w \b{\cdot \mid \mathbf{s},\a} $
\STATE $\delta = r(\mathbf{s},\a) + \gamma Q_\theta(\hat{\mathbf{s}}',\pi_\phi(\hat{\mathbf{s}}')) - Q_\theta(\mathbf{s},\a)$
\\
\IF{Action expansion}
\STATE $ \zeta_\text{a} = \text{CosineSimilarity} (\dd{\a} Q_\theta(\mathbf{s}, \a), \ \dd{\a} \delta(\mathbf{s}, \a))$
\ENDIF
\\
\IF{State expansion}
\STATE $ \zeta_\text{s} = \text{CosineSimilarity} (\dd{\mathbf{s}} Q_\theta(\mathbf{s}, \a), \ \dd{\mathbf{s}} \delta(\mathbf{s}, \a) )$
\ENDIF
\\
\STATE $ \theta \leftarrow \theta + \eta_c \delta \ \nabla_\theta Q_\theta(\mathbf{s},\a) + \eta_c \lambda_\text{a} \nabla_\theta \zeta_\text{a} + \eta_c \lambda_\text{s} \nabla_\theta \zeta_\text{s} $
\STATE $ \phi \leftarrow \phi + \eta_p \nabla_\phi J(\phi)$
\ENDFOR
\end{algorithmic}
\end{algorithm}
\subsection{State-Action expansion}
Finally, we can combine the two Taylor expansions into a single TD-update involving both state and action expansions.
Nevertheless, computing the dot products between $\nabla \delta_{\theta}$ and $\nabla Q_\theta$ terms for both state and action terms may not be optimal.
One reason for this is that dot products are unbounded, increasing the risk of high-variance (TD) updates \citep[e.g.][]{luo2018cosine}.
To tackle this issue, we use the cosine similarity between the gradient terms instead of dot products (see Appendix \ref{app.CosineDist} for the benefits of this choice).
The cosine similarity has the advantage of being bounded.
By putting everything together, we propose a novel TD update, which we express below in terms of a loss:
\begin{multline}
\mathcal{L}_\theta = \ \eta \delta(\mathbf{s}_t, \a_t) Q_\theta(\mathbf{s}_t, \a_t) \\
+ \eta \lambda_\text{a} \ \text{CosineSimilarity} (\dd{\a} Q_\theta(\mathbf{s}_t, \a_t), \ \dd{\a} \delta(\mathbf{s}_t, \a_t)) \\
+ \eta \lambda_\text{s} \ \text{CosineSimilarity} (\dd{\mathbf{s}} Q_\theta(\mathbf{s}_t, \a_t), \ \dd{\mathbf{s}} \delta(\mathbf{s}_t, \a_t) ) \label{FirstDirect_SateActionUpdate}
\end{multline}
Note we used the notation $\delta$ instead of $\delta_\theta$ to indicate we are treating $\delta(\mathbf{s}_t, \a_t)$ as a fixed variable independent of $\theta$.
This ensures when we take the gradient of this loss relative to $\theta$, we do not differentiate through any $\delta$ terms (following the standard implementation of TD-updates in autodiff frameworks such as PyTorch, see Appendix ~\ref{app_TaylorTD_loss}).
It should be noted Taylor TD requires a differentiable model of the environment transitions as well as reward function in order to compute $\dd{\a} \delta_{\theta}(\mathbf{s}_t, \a_t)$ and $\dd{\mathbf{s}} \delta_{\theta}(\mathbf{s}_t, \a_t)$.
In principle, Taylor TD can be used with any actor-critic approach that relies on TD-learning, and even be extended to Monte Carlo returns.
However, in practice, computing $\dd{} \delta_{\theta}(\mathbf{s}_t, \a_t)$ over long horizons of states and actions will suffer from the same exploding/vanishing gradient problem as backpropagating rewards through several transitions \citep[e.g.][]{clavera2020model,xu2022accelerated}.
Therefore, we implement Taylor TD within a TD(0) set-up and expect it to work best with short-horizon TD updates.
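To make this concrete, the PyTorch sketch below computes the objective of Eq.~\ref{FirstDirect_SateActionUpdate} for a batch of state-action pairs, assuming a differentiable dynamics model \texttt{model(s, a)} and a learned reward \texttt{reward\_fn(s, a)}; the function names, the $\lambda$ values and the exact handling of gradient stopping are illustrative assumptions rather than our exact implementation (see Appendix~\ref{app_TaylorTD_loss} for the latter).
\begin{verbatim}
import torch
import torch.nn.functional as F

def taylor_td_objective(critic, actor, model, reward_fn, s, a,
                        lam_a=0.1, lam_s=0.1, gamma=0.99):
    # Sketch of the Taylor TD objective (state-action expansion), assuming a
    # differentiable model s' = model(s, a) and reward r = reward_fn(s, a).
    s = s.clone().requires_grad_(True)
    a = a.clone().requires_grad_(True)

    # One imaginary differentiable transition and the TD(0) error.
    s_next = model(s, a)
    q = critic(s, a)
    delta = reward_fn(s, a) + gamma * critic(s_next, actor(s_next)) - q

    # Gradients of the TD error w.r.t. action and state, used as fixed targets
    # (no second-order graph, so theta is not differentiated through them).
    d_delta_a, d_delta_s = torch.autograd.grad(delta.sum(), (a, s),
                                               retain_graph=True)

    # Gradients of Q w.r.t. action and state, kept in the graph so that the
    # mixed second derivatives w.r.t. theta appear when backpropagating.
    dq_a, dq_s = torch.autograd.grad(q.sum(), (a, s), create_graph=True)

    return (delta.detach() * q).mean() \
        + lam_a * F.cosine_similarity(dq_a, d_delta_a, dim=-1).mean() \
        + lam_s * F.cosine_similarity(dq_s, d_delta_s, dim=-1).mean()
\end{verbatim}
In practice, the critic parameters would be moved along the gradient of this objective (e.g.\ by minimising its negative with a standard optimiser), matching the ascent-style update of Algorithm~\ref{alg_general}.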
\subsection{Variance analysis}\label{sec_varAnly}
Here, we show that the Taylor TD update in Eq.~\eqref{FirstDirect_ActionUpdate} has lower variance than standard (MC) TD-updates over the same distribution of actions.
We only provide this variance analysis for the distribution over actions, because analogous theorems can be derived for the distribution over states (i.e. \ Eq.~\ref{FirstDirect_SateUpdate}).
To begin, we apply the law of total variance, to standard TD-updates,
\begin{align}
\operatorname{Var}\bcket{[}{]}{\Delta \theta_t} &= \E_{\mathbf{s}_t}\bcket{[}{]}{\operatorname{Var}_\pi\bcket{[}{]}{\Delta \theta_t| \mathbf{s}_t}} + \operatorname{Var}_{\mathbf{s}_t}\bcket{[}{]}{\E_\pi\bcket{[}{]}{\Delta \theta_t| \mathbf{s}_t}}
\end{align}
Recall that the updates, $\Delta \theta_t = \delta_{\theta}(\mathbf{s}_t, \a_t) \nabla_\theta Q_\theta(\mathbf{s}_t, \a_t)$, depend on the starting state, $s_t$, and action, $a_t \sim \pi(\cdot | \mathbf{s}_t)$.
The inner expectation and inner variance sample actions from the policy, $\pi$, while the outer expectation and outer variance sample states from the distribution of visited states.
To relate this expression to Taylor TD, recall that Taylor TD updates are motivated as performing analytic integration over actions from the policy, $\pi$, using a first-order Taylor expansion based approximation (i.e.\ assuming $\pi$ corresponds to the re-parameterized actions in Eq.~\ref{eq_reparam_actions}),
\begin{align}
\Delta_\text{Ta} \theta_t &\approx \E_\pi\bcket{[}{]}{\Delta \theta_t| \mathbf{s}_t} = \Delta_\text{Exp} \theta_t
\intertext{Here, we defined $\Delta_\text{Exp} \theta_t$ as the exact expected update, averaging over actions. Thus, the variance of standard (MC) TD-updates is exactly the variance of $\Delta_\text{Exp} \theta_t$, plus an additional term to account for variance induced by sampling actions,}
\operatorname{Var}\bcket{[}{]}{\Delta \theta_t} &= \E_{\mathbf{s}_t}\bcket{[}{]}{\operatorname{Var}_\pi\bcket{[}{]}{\Delta \theta_t| \mathbf{s}_t}} + \operatorname{Var}_{\mathbf{s}_t}\bcket{[}{]}{\Delta_\text{Exp} \theta_t}
\end{align}
This directly gives a theorem,
\begin{theorem}\label{theorem_var_anlysis} The variance for standard (MC) TD-updates is larger than the variance of $\Delta_\text{Exp} \theta_t$ that arises from exact integration over actions (Eq.~\ref{eq_Expected_TD}),
\begin{align}
\operatorname{Var}\bcket{[}{]}{\Delta \theta_t} \geq \operatorname{Var}_{\mathbf{s}_t}\bcket{[}{]}{\Delta_{\operatorname{Exp}} \theta_t}
\end{align}
\end{theorem}
Of course, we ultimately seek to relate the variance of the standard (MC) TD updates, $\Delta \theta_t$ to the Taylor-TD updates, $\Delta_\text{Ta} \theta_t$ which involve some degree of approximation from the first order Taylor expansion.
While at first, it might seem that we are only able to get an approximate relationship,
\begin{align}
\operatorname{Var}\bcket{[}{]}{\Delta \theta_t} &\approx \E_{\mathbf{s}_t}\bcket{[}{]}{\operatorname{Var}_\pi\bcket{[}{]}{\Delta \theta_t| \mathbf{s}_t}} + \operatorname{Var}_{\mathbf{s}_t}\bcket{[}{]}{\Delta_\text{Ta} \theta_t}
\end{align}
we can in fact obtain a formal relationship by considering a differentiable $\Delta \theta_t = \delta_{\theta}(\mathbf{s}_t,\a_t) \nabla_\theta Q_\theta(\mathbf{s}_t,\a_t)$.
If $\Delta \theta_t$ is differentiable then the Taylor series expansion becomes increasingly accurate as we consider smaller regions around the mean action, which correspond to smaller variances, $\lambda_\text{a}$, in the distribution over actions.
\begin{theorem}\label{theorem_var_anlysis2}
If $\Delta \theta_t(\mathbf{s}_t, \a_t) = \delta_{\theta}(\mathbf{s}_t,\a_t) \nabla_\theta Q_\theta(\mathbf{s}_t,\a_t)$ is a continuous and differentiable function of $\a_t$, if for all $\mathbf{s}_t \in S$, $\operatorname{Var}_\pi\bcket{[}{]}{\Delta \theta_t| \mathbf{s}_t} > \epsilon$ for some $\epsilon > 0$, and if we truncate the distribution over actions at some multiple of the standard deviation (e.g.\ sampled actions cannot be more than 10 standard deviations from the mean), then there exists $\lambda_\text{\emph{a}} > 0$ for which
\begin{align}
\operatorname{Var}\bcket{[}{]}{\Delta \theta_t} > \operatorname{Var}_{\mathbf{s}_t}\bcket{[}{]}{\Delta_{\operatorname{Ta}} \theta_t}
\end{align}
\end{theorem}
\section{Experiments}
\subsection{Variance reduction}\label{sec_varReduct}
In this section we empirically test the claim that Taylor TD updates are lower variance than standard (MC) TD-learning updates.
To do so, we compute "batch updates" \citep{sutton2018reinforcement}, where given an approximate value function $Q_\theta$ and a policy $\pi$, several $Q_\theta$ updates are computed across several sampled states and actions, updating $Q_\theta$ only once, based on the sum of all updates.
Batch updates ensure the variance of the updates is estimated based on the same underlying value function.
We compute batch updates for both Taylor TD and standard (MC) TD updates, comparing the variance of the updates between the two approaches (see \ Appendix~\ref{app_var} for more details).
This analysis is based on a benchmark continuous control task (HalfCheetah-v2).
\begin{figure}[ht]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{results/Var_plot.png}}
\caption{Mean update variance of Taylor TD and standard (MC) TD-learning (batch) updates, based on several sampled states and the distribution of actions at those states (i.e.\ the policy). The analysis is based on the continuous control task HalfCheetah-v2.}
\label{fig:VarPlot}
\end{center}
\vskip -0.2in
\end{figure}
Fig.~(\ref{fig:VarPlot}) shows that Taylor TD provides lower-variance updates compared to standard TD-learning.
\subsection{Comparison with baselines}
\subsubsection{Algorithm}
We implement Taylor TD (i.e. Algorithm \ref{alg_general}) with the TD3 algorithm \citep[][]{fujimoto2018addressing} in a model-based off-policy algorithm we denote as Taylor TD3 (TaTD3).
TaTD3 aims to provide a state-of-the-art implementation of Taylor TD for comparison with baseline algorithms.
At each iteration, TaTD3 uses a learned model of the transitions and a learned reward function to generate several differentiable (imaginary) 1-step transitions, starting from real states (sampled from a replay buffer).
These differentiable 1-step transitions are used to train two critics (i.e. TD3) with several Taylor TD updates in a hybrid value gradient and Dyna approach.
The model of the transitions consists of an ensemble of 8 Gaussian models trained by maximum likelihood on the observed environment transitions.
This model ensemble aims to reduce over-fitting and model biases \citep{deisenroth2011pilco}.
The reward function is a Neural Network trained with mean-square error on the observed environment rewards.
Hence, TaTD3 does not require any a priori knowledge of the environment transitions or reward function.
Crucially, we found we could get good performance for TaTD3 across all tested environments without needing to fine tune the value of $\lambda_\text{a}$ and $\lambda_\text{s}$ to each environment (see Appendix~\ref{app_hyperparam}).
Finally, the actor is trained with the deterministic policy gradient \citep[][]{silver2014deterministic} on real states as in standard TD3 \citep{fujimoto2018addressing}.
\subsubsection{Environments}\label{sec_envs}
The first environment consists of a classic problem in control theory used to evaluate RL algorithms \citep[i.e. Pendulum, ][]{brockman2016openai}.
The last 3 environments consist of standard MuJoCo continuous control tasks, also frequently used to evaluate RL algorithms \citep[i.e. HalfCheetah, Walker2d and Humanoid, ][]{todorov2012mujoco}.
All results are reported in terms of average performance across 5 runs, each with a different random seed (shade represents 95\% CI).
\subsubsection{Code}
All the code is available at \url{https://anonymous.4open.science/r/TaylorTD-6F15}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.95\textwidth]{results/Result_diagram.png}
\caption{Top row, performance in terms of average returns for TaTD3 and four state-of-the-art baseline algorithms on four benchmark continuous control tasks. TaTD3 performs as well, if not better, than the four baseline algorithms on all four tasks. Bottom row, performance comparison of TaTD3 with its Monte Carlo equivalent, MC Expected-TD3. All performance are based on 5 runs, with shade representing 95\% c.i.}
\label{fig:MainResults}
\end{figure*}
\subsubsection{Results}\label{sec_main_results}
Here, we report the comparison of TaTD3 with several state-of-the-art model-free and model-based baselines on the four benchmark environments.
These baselines include three model-based algorithms and one model-free algorithm.
The first model-based algorithm is Model-based Policy Optimization (MBPO) \citep[][]{janner2019trust}, which employs the soft actor-critic algorithm (SAC) \citep[][]{haarnoja2018soft} within a model-based Dyna setting.
Plotted performance of MBPO was directly taken from the official algorithm repository on GitHub.
The second model-based algorithm is Model-based Action-Gradient-Estimator Policy Optimization (MAGE) \citep[][]{d2020learn}, which uses a differentiable model of the environment transitions to train the critic by minimising the norm of the action-gradient of the TD-error.
The third model-based algorithm is TD3 combined with a model-based Dyna approach (i.e. Dyna-TD3).
This algorithm was proposed by \citet{d2020learn} and was shown to outperform its model-free counterpart, TD3 \cite{fujimoto2018addressing} on most benchmark tasks.
Dyna-TD3 is conceptually similar to MBPO, with the main difference of MBPO relying on SAC instead of TD3.
Plotted performances of both MAGE and Dyna-TD3 were obtained by re-running these algorithms on the benchmark environments, taking the implementations from the official algorithms' repository.
Finally, we included SAC \citep[][]{haarnoja2018soft} as a model-free baseline.
Plotted performance of SAC was obtained by running the Stable Baselines implementation of this algorithm on the four benchmark environments \citep{stable-baselines}.
Fig.~(\ref{fig:MainResults}, top row) shows TaTD3 performs at least as well as, if not better than, the baseline algorithms on all four benchmark tasks: note the much poorer performance of MAGE on Walker2d-v2 and of MBPO on Humanoid-v2 relative to TaTD3.
\subsection{Taylor vs MC-sampling TD-learning}
Next, we ask whether Taylor TD provides any performance benefit in computing the expected TD updates in Eq.~\eqref{eq_Expected_TD} and \eqref{eqSate} over standard MC estimates.
To do so, we implement a model-based TD3 algorithm analogous to TaTD3, but where the expected TD updates, Eq.~\eqref{eq_Expected_TD} and \eqref{eqSate}, are estimated by sampling several state and action perturbations at each time step (i.e. instead of being analytically computed through the Taylor expansions).
We denote this algorithm MC Expected-TD3 (available at \url{https://anonymous.4open.science/r/TaylorTD-6F15}).
In practice, at each time step, MC Expected-TD3 uses a (learned) model of the transitions to compute multiple TD-updates by sampling several state perturbations of visited states and action perturbations of the current policy (i.e. \ estimating Eq.~\eqref{eq_Expected_TD} and \eqref{eqSate} through MC estimates).
Crucially, we ensure the variance of the state and action perturbations (i.e. \ $\lambda_\text{a}$ and $\lambda_\text{s}$) is matched between TaTD3 and MC Expected-TD3.
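For reference, the sketch below shows the corresponding MC estimator in the spirit of MC Expected-TD3; the number of perturbation samples and the choice to apply state and action perturbations jointly are illustrative simplifications rather than details of the actual implementation.
\begin{verbatim}
import torch

def mc_expected_td_objective(critic, actor, model, reward_fn, s, a,
                             lam_a=0.1, lam_s=0.1, n_samples=10, gamma=0.99):
    # MC estimate of the expected TD objective: sample Gaussian perturbations of
    # the visited states and of the current-policy actions, and average the
    # resulting (semi-gradient) TD objectives. Names and values are illustrative.
    objective = 0.0
    for _ in range(n_samples):
        s_p = s + lam_s ** 0.5 * torch.randn_like(s)  # state perturbation, cov = lam_s * I
        a_p = a + lam_a ** 0.5 * torch.randn_like(a)  # action perturbation, cov = lam_a * I
        s_next = model(s_p, a_p)
        q = critic(s_p, a_p)
        delta = reward_fn(s_p, a_p) + gamma * critic(s_next, actor(s_next)) - q
        objective = objective + (delta.detach() * q).mean()
    return objective / n_samples
\end{verbatim}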
In Fig.~(\ref{fig:MainResults}, bottom row), we can see TaTD3 provides performance benefits over MC Expected-TD3 across the three most difficult environments.
Interestingly, the benefit of the Taylor expansion (i.e.\ TaTD3) over MC sampling (i.e.\ MC Expected-TD3) may be more evident in high-dimensional state-action spaces.
Indeed, the largest performance advantage of TaTD3 is seen in Humanoid-v2, which has by far the highest-dimensional state-action space.
Conversely, the smallest advantage of TaTD3 over MC Expected-TD3 is seen in Pendulum-v1, the task with the lowest-dimensional state-action space.
We further explore this proposal in a toy example comparing MC estimates and Taylor expansions of expected updates.
We perform this comparison across data points of different dimensions, and find the benefits of the Taylor expansion (over MC) seem to increase with the dimension of the data points (see Appendix~\ref{sec_toy}).
Finally, we should point out that MC Expected-TD3 is different from Dyna-TD3, as the latter does not apply any action or state perturbations in the TD-updates.
Hence, unlike MC Expected-TD3, Dyna-TD3 does not compute the expected updates in Eq.~\eqref{eq_Expected_TD} and \eqref{eqSate}, but relies on standard TD-learning (this is also evident in the large performance difference between Dyna-TD3 and MC Expected-TD3; see Fig.~\ref{fig:MainResults}).
\subsection{State expansion ablation}
Here, we ask whether the Taylor state expansion brings any benefit to performance, on top of the Taylor action expansion.
To do so, we compare the TaTD3 algorithm with and without the state expansion on two standard benchmark tasks (i.e.\ the latter is analogous to setting $\lambda_\text{s} = 0$ in the update of Eq.~\ref{FirstDirect_SateActionUpdate}).
Fig.~(\ref{fig:ActionComparis}) shows that including the state expansion is beneficial in both environments.
\begin{figure}[ht]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{results/ActionComparison.png}}
\caption{Performance in terms of average returns for TaTD3 (with both state and action expansions) compared to a version of TaTD3 that uses the action expansion only, on two benchmark continuous control tasks. Including the state expansion in TaTD3 seems to improve performance on both tasks (5 runs, 95\% c.i.).}
\label{fig:ActionComparis}
\end{center}
\vskip -0.2in
\end{figure}
\section{Conclusion}
In this article, we introduce a model-based RL framework, Taylor TD, to help reduce the variance of standard TD-learning updates and speed up the learning of critics.
We theoretically and empirically show Taylor TD updates are lower variance than standard (MC) TD-learning updates.
We show the extra gradient terms used by Taylor TD do not affect the stable learning guarantees of TD-learning under linear function approximation.
Next, we combine Taylor TD with the TD3 algorithm \citep{fujimoto2018addressing} into a model-based off-policy algorithm we denote as TaTD3.
We show TaTD3 performs as well as, if not better than, several state-of-the-art model-free and model-based baseline algorithms on a set of standard benchmark tasks.
Finally, we further analyse the settings in which the Taylor TD approach may be most beneficial to performance relative to standard TD-learning.
\section*{Acknowledgements}
This work made use of the HPC system Blue Pebble at the University of Bristol, UK.
|
{
"arxiv_id": "2302.14111",
"language": "en",
"timestamp": "2023-03-01T02:01:25",
"url": "https://arxiv.org/abs/2302.14111",
"yymm": "2302"
} | \section{\label{sec:Introduction}Introduction}
Exotic neutron-deficient nuclei provide an excellent opportunity to explore new decay modes. Large $\beta$-decay Q-values make it possible to populate proton- or $\alpha$-unbound states in daughter nuclei, paving the way for the observation of $\beta$-delayed charged-particle emissions. Reviews of advances in $\beta$-delayed charged-particle emission studies can be found in Refs. \cite{Pfutzner,Blank_2008}, where $\beta$-delayed one-, two-, and three-proton decays as well as $\alpha$p/p$\alpha$ decays are discussed. Here we report on a new decay mode that has not been observed before, the $\beta$3$\alpha$p decay. Not only do we identify these exotic decays of $^{13}$O, but we were also able to use them to obtain information on the cluster structure of excited states in the daughter nucleus, $^{13}$N.
Clustering phenomena are prevalent in light nuclei and are an excellent test ground for understanding few-body systems that are theoretically accessible. These clustering phenomena have been well-studied in $\alpha$-conjugate nuclei. Much less experimental information is available for N$\ne$Z nuclei. Yet, theoretical studies (e.g. \cite{Seya,Oertzen1,Kanada}) indicate that cluster configurations may be even richer in non-self-conjugate nuclei, opening a window of opportunity to confront the highly-non-trivial theoretical predictions with experimental data. Recent experimental studies of clustering in non-self-conjugate nuclei already produced exciting results, such as hints for linear chain structures stabilized by ``extra" nucleons (e.g. \cite{Oertzen,Milin,Yamaguchi}) and indications for super-radiance \cite{Marina,Volya}.
Of particular interest is the nucleus $^{13}\mathrm{N}$ where three $\alpha$ clusters and an ``extra'' proton can form exotic cluster configurations. Resonant $^{9}\mathrm{B}$+$\alpha$ scattering or $\alpha$-transfer reactions are not possible because $^{9}\mathrm{B}$ is proton unbound with a half life of the order of $10^{-18}$ s. Instead, one may use $\beta$-delayed charged-particle spectroscopy to populate states in $^{13}\mathrm{N}$ via $^{13}\mathrm{O}$ and observe the decays to a final state of 3$\alpha$p. The $\beta$-delayed proton channel has previously been studied for $^{13}\mathrm{O}$ \cite{Knudsen} where limited statistics showed only a very small sensitivity to populating the p+$^{12}\mathrm{C}(0_{2}^{+})$ (Hoyle state) which results in a $3\alpha$+p final state. Utilizing the Texas Active Target (TexAT) Time Projection Chamber to perform one-at-a-time $\beta$-delayed charged-particle spectroscopy, $\alpha$-decays from the near $\alpha$-threshold excited states in $^{13}$N have been observed for the first time, providing insights into the $\alpha$+$^{9}\mathrm{B}$ clustering. Capitalizing on the advantages of TPCs for $\beta$-delayed charged-particle emission studies, unambiguous and background-free identifications of the $\beta$3$\alpha$p events were made. Reconstruction of complete kinematics for these exotic decays allowed for robust decay channel assignments, providing insights into the cluster structure of the $^{13}$N excited states. Evidence for the $\frac{1}{2}^{+}$ first excited state in $^{9}\mathrm{B}$, mirror of the well-known $\frac{1}{2}^{+}$ in $^9$Be, was an unexpected byproduct of these measurements, demonstrating the sensitivity of the technique.\par
\section{\label{sec:setup}Experimental setup}
The $\beta$-delayed charged-particle spectroscopy technique with the TexAT TPC has previously been applied for $\beta$-delayed 3$\alpha$ decay studies of $^{12}\mathrm{N}$ via $^{12}\mathrm{C}^{\star}$ \cite{Hoyle}. A detailed description of the technique is provided in \cite{NIM}. Here, we utilize the same experimental approach to observe the $\beta$-delayed 3$\alpha$p decays of $^{13}\mathrm{O}$ via $^{13}\mathrm{N}^{\star}$. We implant $\beta$-decaying $^{13}$O one-at-a-time into the TexAT TPC by providing a phase shift signal to the K500 Cyclotron at Texas A\&M University when a successful implantation has taken place to halt the primary beam. This phase shift then lasts for three half-lives or until the observation of a $\beta$-delayed charged particle in TexAT, with the DAQ ready to accept the trigger. The phase shift is then reset to allow for the next implantation. A beam of $^{13}\mathrm{O}$ was produced via the $^3$He($^{14}$N,$^{13}$O) reaction at the Momentum Achromat Recoil Separator (MARS) \cite{MARS} with a typical intensity of 5 pps and an energy of 15.1 MeV/u, degraded by an aluminum foil to 2 MeV/u, to stop inside the TexAT sensitive area, filled with 50 Torr of CO$_2$ gas. To measure the correlated implantation/decay events, the 2p trigger mode of the GET electronics \cite{GET} was employed, where the occurrence of two triggers within a 30 ms time window was required for a full event. The first trigger, the L1A (implantation), is generated if the Micromegas pad multiplicity exceeds 10. If, during the 30 ms following the L1A trigger, another trigger occurs with Micromegas pad multiplicity above two, the second L1B (decay) trigger event and the time between the L1A and L1B are recorded. For normalization and beam characterization, all events were recorded, even if the L1B trigger never came.
\section{Analysis}
The complete L1A (implant) + L1B (decay) events were selected with the time between the two triggers in the range of 1-30 ms. The short times ($<$1 ms) were omitted to remove double trigger events due to sudden beam-induced noise. To ensure the implanted ion is $^{13}\mathrm{O}$, the energy deposited by the beam implant event in the Micromegas ``Jr" (MM Jr) beam tracker \cite{MMJr} at the entrance to the TexAT chamber was recorded. The beam contaminants were $^{7}\mathrm{Be}$ and $^{10}\mathrm{C}$, dominated by $^{7}\mathrm{Be}$ at $\approx$ 28\% of the beam intensity.\par
Following the identification of a $^{13}\mathrm{O}$ implant, the stopping position was evaluated event-by-event using the implant tracks, selecting only those which stopped inside the active area of the Micromegas and no closer than 31.5 mm from the edge. The spread of the $^{13}\mathrm{O}$ stopping positions inside TexAT was 67.5 mm due to straggling.\par
Further selection was performed by imposing a tight correlation ($<$5 mm) between the $^{13}$O stopping location and the vertex location of the respective decay event. Events which passed this test were then fit with a single track segment using a randomly-sampled $\chi$-squared minimization algorithm. If a good fit was achieved, these events were identified as single-proton events. The $\beta$-delayed proton spectrum replicates the previous results \cite{Knudsen} well, albeit with decreased resolution. The remaining events were fit with four track segments as candidates for $\beta$3$\alpha$p decay using randomly-sampled $\chi$-squared minimization. They were then inspected visually to evaluate the fits' quality. Given the complexity of the fits, manual modifications of the fit algorithm parameters were required for some events.
\section{3$\alpha$+proton events\label{sec:3ap}}
Overall, 149 $\beta$3$\alpha$p events were identified. Due to the size of the TPC and limitations on reconstruction in parts of the TexAT TPC, only 102 out of 149 of these events allow for complete reconstruction. The ``incomplete'' events are dominated by the $^9$B(g.s.)+$\alpha$ decay as this produces a high-energy $\alpha$-particle that may escape from the active volume of the TexAT TPC. The efficiency for the $\alpha_0$ decay starts to deviate from 100\% at $E_{x}$ = 10 MeV and slowly drops to around 60\% at $E_{x}$ = 14 MeV. The efficiencies for $\alpha_1$ and $\alpha_3$ are less affected and only decrease to 70\% at $E_{x}$ = 14 MeV. In proton decays to the Hoyle state, most of the energy is taken by the proton, and the resulting three $\alpha$-tracks of the pre-selected events are always confined to the active volume of the TPC. Proton tracks were not required in reconstruction as complete kinematics can be recovered from the remaining three $\alpha$-tracks. Therefore, there was no efficiency reduction for the p+$^{12}$C(Hoyle) decays. The yields given in Table~\ref{tab:states} are corrected for these experimental effects.\par
\begin{figure}
\centerline{\includegraphics[width=0.5\textwidth]{8Be.pdf}}
\caption{Relative energy spectrum for pairs of $\alpha$-particles with the smallest relative energy of the three $\alpha$-tracks. The $^{8}\mathrm{Be}$(g.s) at 92 keV is well-reproduced.\label{fig:8Be}}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=0.5\textwidth]{Ex9B_new_inset.pdf}}
\caption{For events that do not decay via the Hoyle state, the relative energy spectrum is shown here, which is generated by selecting the two $\alpha$-particles that produce the $^{8}\mathrm{Be}$(g.s) and then reconstructing the $^{9}\mathrm{B}$ relative energy with the proton. Overlaid in dashed red are simulated data for the ground state contribution and in solid red are the $\frac{1}{2}^{+}$ and $\frac{5}{2}^{+}$ states from single channel R-Matrix calculations convoluted with a Gaussian with $\sigma$ = 0.23 MeV. The $\frac{1}{2}^{+}$ parameters are those obtained by Wheldon \cite{Wheldon}, which show excellent agreement. Inset: projection of an example $\alpha$+$^{9}\mathrm{B}$(g.s) event in the TPC with the color indicating energy deposition. The lower-energy-deposition proton can be seen extending upwards before escaping the TPC active area.\label{fig:9B}}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=0.5\textwidth]{12C.pdf}}
\caption{Invariant mass spectrum for $^{12}\mathrm{C}$ from 3$\alpha$-particles. A peak at 7.65 MeV is seen, well reproducing the Hoyle state energy and a broad peak is seen at higher excitation energies which correspond to events that decay via $^{9}\mathrm{B}+\alpha$.\label{fig:12C}}
\end{figure}
In order to identify the parent state in $^{13}\mathrm{N}^{\star}$, the lowest energy deposition arm was identified as the proton track and the momenta of the three $\alpha$-particles were determined from the lengths and directions of the $\alpha$-tracks in the gas. Protons almost always escape the sensitive volume, and the proton momentum is reconstructed from momentum conservation. The decay energy is then the sum of the three $\alpha$-particle energies and the proton energy. From here, the $^{8}\mathrm{Be}$ (Fig.~\ref{fig:8Be}), $^{9}\mathrm{B}$ (Fig.~\ref{fig:9B}) and $^{12}\mathrm{C}$ (Fig.~\ref{fig:12C}) excitation energies were determined from the invariant mass. This allowed for a selection of events which proceeded to decay via p+$^{12}\mathrm{C}(0_{2}^{+})$ [$p_{2}$], $\alpha$+$^{9}\mathrm{B}$(g.s) [$\alpha_{0}$], $\alpha$+$^{9}\mathrm{B}(\frac{1}{2}^{+})$ [$\alpha_{1}$] and $\alpha$+$^{9}\mathrm{B}(\frac{5}{2}^{+})$ [$\alpha_{3}$]. There is evidence of strength in $^{9}\mathrm{B}$ between 1 and 2.4 MeV excitation energy (Fig.~\ref{fig:9B}). This strength is difficult to explain without the $\frac{1}{2}^{+}$ state in $^{9}\mathrm{B}$ \cite{Wheldon} that is the mirror of the well-known $\frac{1}{2}^{+}$ first excited state in $^9$Be. Attempts to fit the spectrum without the $\frac{1}{2}^{+}$ in $^9$B fail because it is difficult to explain the excess of counts at excitation energies between 1.4 and 2.4 MeV, which is comparable to that in the 2.4--3.5 MeV region where there are known excited states in $^9$B. Contributions from the broad 2.78 MeV $\frac{1}{2}^{-}$ may give a signature similar to that seen, albeit at lower energies (peaking at $E_{rel}$ = 1.3 MeV for a $^{13}\mathrm{N}$($E_{x}$) = 12.4 MeV) when considering the expected yield from a $\frac{1}{2}^{-}$ state in $^{13}\mathrm{N}$. The L=0 $\alpha$-decay to the broad $\frac{1}{2}^{-}$ in $^{9}\mathrm{B}$ will increase the yield at small excitation energies. While this possibility is disfavored from the observed spectrum due to the energy offset, it is mentioned here for completeness. The $\frac{1}{2}^{+}$ state in $^{9}\mathrm{B}$ was selected by taking an excitation energy between 1.4 and 2.4 MeV in $^{9}\mathrm{B}$ (following the centroid and width as observed via $^{9}\mathrm{Be}(^{3}\mathrm{He},t)$ \cite{Wheldon}, which is consistent with our current results) and the $\frac{5}{2}^{+}$ was taken as having an excitation energy above 2.4 MeV. Any contribution from the relatively-narrow 2.345 MeV $\frac{5}{2}^{-}$ is not present in the presented plots as this state decays almost exclusively via $^{5}\mathrm{Li}$ and therefore would not correspond to a peak in the $^{8}$Be spectrum. There were only three events associated with this decay to $^{5}\mathrm{Li}$; hence, the statistics were insufficient to incorporate into the analysis.
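As an illustration of the reconstruction described above, the following minimal sketch recovers the proton momentum from momentum conservation and sums the kinetic energies to obtain the decay energy. It assumes non-relativistic kinematics and a parent $^{13}\mathrm{N}^{\star}$ at rest (i.e., neglecting the small $\beta$-decay recoil); the function name and input format are illustrative only and not part of the analysis code used in this work.
\begin{verbatim}
import numpy as np

M_ALPHA = 3727.38    # MeV/c^2 (approximate)
M_PROTON = 938.27    # MeV/c^2 (approximate)

def reconstruct_decay(p_alphas):
    """p_alphas: (3, 3) array of alpha momentum vectors in MeV/c,
    obtained from the track lengths and directions in the gas."""
    # proton momentum from momentum conservation (13N* assumed at rest)
    p_proton = -p_alphas.sum(axis=0)
    # non-relativistic kinetic energies
    E_alphas = (p_alphas**2).sum(axis=1) / (2.0 * M_ALPHA)
    E_proton = (p_proton**2).sum() / (2.0 * M_PROTON)
    decay_energy = E_alphas.sum() + E_proton
    return p_proton, decay_energy
\end{verbatim}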
\begin{figure}
\centerline{\includegraphics[width=0.5\textwidth]{13N_levelscheme.png}}
\caption{Level scheme of measured 3$\alpha$+p states in $^{13}\mathrm{N}$ in the central column with the proposed spin-parity assignments. The locations of the thresholds for proton and $\alpha$ decay are shown in red with the equivalent excitation energy shown. The corresponding states in the daughter nuclei ($^{12}\mathrm{C}$ and $^{9}\mathrm{B}$) are also shown. \label{fig:levelscheme}}
\end{figure}
\par
Following the channel selection, the excitation energy in $^{13}\mathrm{N}$ was calculated and is shown in Fig.~\ref{fig:3ap}. Despite low statistics, a number of states can be seen and will be discussed individually. A summary of the properties of the observed states is given in Table~\ref{tab:states}. A GEANT4 simulation was performed to test the variation in experimental resolution as a function of excitation energy for the $\alpha_0$ channel, which is typically around $\sigma$ = 200 keV. The $p_2$ channel resolution is almost entirely dominated by discrepancies between the calculated and real stopping powers for the $\alpha$-particles and therefore cannot be accurately determined. For all excitation energies, it is realistically greater than $\sigma$ = 160 keV.
\begin{figure}
\centerline{\includegraphics[width=0.5\textwidth]{Ex13N.pdf}}
\caption{Excitation spectrum in $^{13}\mathrm{N}$ for 3$\alpha+p$ separated by channels. Dashed vertical lines show previously-known states populated by $\beta$-decay in black and new states observed are shown in magenta. A magenta arrow shows a shift in the excitation energy between a suggested state at 13.26(10) MeV to 13.1(1) MeV.\label{fig:3ap}}
\end{figure}
\par
\begin{table*}[!ht]
\setlength\extrarowheight{2.0pt}
\caption{\label{tab:states}Excited states in $^{13}\mathrm{N}$ observed in this work with tentative spin-parity assignments, decay properties of the states, and the efficiency-corrected fractional reduced widths.}
\centering
\begin{tabular}{|l||l||l|l|l||l|l|l||l|l|l|l|l|l|}
\hline
\multicolumn{2}{|c|}{State} & \multicolumn{6}{|c|}{Counts} & \multicolumn{6}{|c|}{Efficiency-corrected $\bar{\gamma}^2$} \\ \hline
$E_{x}$ & $J^{\pi}$ & $\alpha_0$ & $\alpha_1$ & $\alpha_3$ & $p_0$ \cite{Knudsen} & $p_1$\cite{Knudsen} & $p_2$ & $\alpha_0$ & $\alpha_1$ & $\alpha_3$ & $p_0$ & $p_1$ & $p_2$ \\ \hline
11.3(1) & 3/2- & $18(4.4)$ & 0 & 0 & $6(2.6)$ & $<3$ & $7(2.8)$ & 67(21)\% & 0\% & 0\% & 4(2)\% & $<$1\% & 29(13)\% \\ \hline
11.8(1) & 3/2- & $<1.8$ & 0 & 0 & $28(14)$ & $<4$ & $4(2.2)$ & $<$12\% & 0\% & 0\% & 50(30)\% & 0\% & 38(25)\% \\ \hline
12.4(1) & 3/2- & $22(4.8)$ & $4(2.2)$ & 0 & $<3$ & $<10$ & $5(2.5)$ & 6(2)\% & 88(49)\% & 0\% & $<$0.1\% & $<$2\% & 2(1)\% \\ \hline
\multirow{2}{*}{13.1} & 1/2- & \multirow{2}{*}{0} & \multirow{2}{*}{$3(2)$} & \multirow{2}{*}{$5(2.5)$} & \multirow{2}{*}{$21(6)$} & \multirow{2}{*}{$<10$} & \multirow{2}{*}{0} & 0\% & 1(1)\% & 98(48)\%\footnote{ Here the $\alpha_3$ channel is assumed to be the $J^{\pi}=\frac{1}{2}^{-}$ channel rather than the $J^{\pi}=\frac{5}{2}^{+}$ state.} & 0\% & $<$0.4\% & 0\% \\
& 5/2- & & & & & & & 0\% & 10(10)\% & 89(44)\% & 0.7(0.2)\% & $<$0.2\% & 0\% \\ \hline
13.7(1) & 3/2- & $1(1.4)$ & $3(2)$ & $4(2.2)$ & $<3$ & $<10$ & $6(2.7)$ & 1(1)\% & 8(8)\% & 75(42)\% & $<$0.5\% & $<$7\% & 8(3)\% \\ \hline
\end{tabular}
\end{table*}
\subsection{11.3 MeV state}
The first peak in the spectrum corresponds to an excitation energy of 11.3 MeV in $^{13}\mathrm{N}$. The strength is almost entirely dominated by the $^{9}\mathrm{B(g.s)}+\alpha$ channel with a small fraction of $^{12}\mathrm{C}(0_{2}^{+})$+p. The yield in the $p_{0}$ channel from the previous Knudsen data \cite{Knudsen} shows a small, very narrow peak at the energy associated with this state ($E_p$(lab) = 8.64 MeV) and is taken as $6(2.6)$. The yield in the $p_{1}$ channel is harder to estimate due to the larger background from other states in this region but also shows no evidence of a peak and is taken to be negligible. Fitting this peak in conjunction with neighboring peaks, the yield in the $\alpha_0$ channel is $18(4.4)$, with $\sigma$ = 280(80) keV and $E_{x} = 11.3(1)$ MeV. In the $p_2$ channel, the yield is $7(2.8)$ with $\sigma$ = 220(100) keV and $E_{x}$ = 11.0(1) MeV. These widths are commensurate with the experimental resolution; therefore, $\Gamma$ is expected to be relatively small ($\Gamma < 200$ keV). Given that the yields for $\alpha_0$ and $p_2$ are both strong, the spin-parity assignment is favored towards $J^{\pi}=\frac{3}{2}^{-}$, where the angular momentum transfer is L=0 and L=1, respectively. A choice of $J^{\pi}=\frac{1}{2}^{-}$ or $J^{\pi}=\frac{5}{2}^{-}$ would require L=2 for the $\alpha_0$ channel, which should heavily suppress the yield, and $J^{\pi}=\frac{5}{2}^{-}$ would correspond to L=3 for $p_{2}$, so these options are strongly disfavored.
From Table~\ref{tab:states}, when taking the yields of the states and correcting for the different channel penetrabilities, $P_L$, and efficiencies, one can determine the structure of the measured states without a measurement of the width of the state to compare to the Wigner limit. Many of the states in $^{9}\mathrm{B}$ are very broad, and the extreme simplification of calculating the penetrability at the resonance energy is made. In reality, the average penetrability will be higher. The structure is therefore determined by the fractional reduced width, ${\bar{\gamma}^2_{i}} = \frac{\gamma^2_{i}}{\sum_j \gamma^2_{j}}$, where $\gamma^2_{i} = \frac{\Gamma_{i}}{2P_{iL}}$. This variable shows the type of clustering but not the magnitude of the clustering. This state has considerable strength in both $\alpha_0$ and $p_2$, with ${\bar{\gamma}^2_i}$ of 63\% and 35\%, respectively.
Assuming that the total width, $\Gamma$, of the state is $< 200$ keV, one may compare to the Wigner limit, $\gamma_W^2 = \frac{\hbar^2}{\mu a^2}$, which is 0.57 and 2.1 MeV for $\alpha$-decay and $p$-decay, respectively. Correspondingly, the ratios to the Wigner limit are $\theta_W^2 <28\%$ and $<4 \%$ for $\alpha_0$ and $p_2$, respectively. The former of these (while notably only an upper limit) constitutes a well-clustered state.
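For concreteness, the following minimal Python sketch reproduces the numbers quoted above from the formulas for the fractional reduced width and the Wigner limit. The channel radius parametrization $a = r_0(A_1^{1/3}+A_2^{1/3})$ with $r_0 = 1.4$ fm is an assumption made here for illustration; it is not stated explicitly in the text.
\begin{verbatim}
import numpy as np

HBARC = 197.327   # MeV fm
AMU = 931.494     # MeV/c^2

def fractional_reduced_widths(partial_widths, penetrabilities):
    # gamma_i^2 = Gamma_i / (2 P_iL), normalized over all channels
    gamma2 = np.asarray(partial_widths) / (2.0 * np.asarray(penetrabilities))
    return gamma2 / gamma2.sum()

def wigner_limit(a1, a2, r0=1.4):
    # gamma_W^2 = hbar^2 / (mu a^2) with a = r0 (A1^(1/3) + A2^(1/3))
    mu = a1 * a2 / (a1 + a2) * AMU              # reduced mass in MeV/c^2
    a = r0 * (a1**(1.0/3.0) + a2**(1.0/3.0))    # channel radius in fm
    return HBARC**2 / (mu * a**2)

print(wigner_limit(4, 9))    # alpha + 9B : ~0.57 MeV
print(wigner_limit(1, 12))   # p + 12C    : ~2.1 MeV
\end{verbatim}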
\subsection{11.8 MeV state}
In the $p_2$ channel, the yield is $4(2.2)$ with $\sigma$ = 170(110) keV and $E_{x}$ = 11.8(1) MeV. Counts in the $\alpha_1$ channel at this energy are the tails of higher excitation energies extending down, as the penetrability $P_L$ for $\alpha_1$ is extremely suppressed, prohibiting any strength. Due to the strength of the two nearby states in the $\alpha_0$ channel, the yield in the $\alpha_0$ channel has very large uncertainties and can only be limited to less than $1.8$. There are two states previously known at this energy, a $\frac{3}{2}^{-}$ and a $\frac{5}{2}^{-}$, with widths of 115(30) and 530(80) keV, respectively. Our data are more consistent with the narrower $\frac{3}{2}^{-}$, which was also populated in previous work \cite{Knudsen}. Additionally, a $\frac{5}{2}^{-}$ assignment is the least favored from an angular momentum perspective (L=3 vs L=1 for $\frac{1}{2}^{-}$ or $\frac{3}{2}^{-}$), and this state is seen to populate the $p_2$ channel reasonably well. From previous work, the yield in the $p_0$ channel was determined to be 28$(14)$. Making the same corrections for penetrabilities as above, this state shares strength in the $p_0$ and $p_2$ channels with ${\bar{\gamma}^2_i}$ $>51\%$ and $>39\%$, respectively, with the remaining $\alpha_0$ component being $<10\%$. The width for this state is known, and the reduced width for $p_2$ can be compared to the Wigner limit and is $\sim 1 \%$. Therefore, the contribution of the $^{12}\mathrm{C}(0_{2}^{+})\bigotimes p$ configuration is small.
\subsection{12.4 MeV state}
Fitting this peak in conjunction with neighboring peaks, the yield in the $\alpha_0$ channel is $22(4.8)$, with $\sigma$ = 310(90) keV and $E_{x} = 12.4(1)$ MeV. The corresponding yield of $\alpha_{1}$ is $4(2.2)$. In the $p_2$ channel, the yield is $5(2.5)$ with $\sigma$ = 110(70) keV and $E_{x}$ = 12.5(1) MeV. Despite the relatively small yield in the $\alpha_1$ channel, when correcting for penetrability, the $\alpha_1$ channel dominates the strength with ${\bar{\gamma}^{2}_{i}}=91\%$, with $\alpha_0$ and $p_2$ sharing the remainder with 6\% and 3\%, respectively. The strong contribution of the $^{9}\mathrm{B}(\frac{1}{2}^{+}) \bigotimes \alpha$ configuration suggests this is a near-threshold p-wave state.\par
Data for the $^{9}\mathrm{Be}(\alpha,\alpha_0)$ \cite{Be9aa,Ivano} and $^{9}\mathrm{Be}(\alpha,n_0)$ \cite{Obst} reactions are available at this excitation energy and above, and one may look for analogous states in $^{13}\mathrm{C}$. Given that this state is s-wave in the entrance channel (assuming $J^{\pi}=\frac{3}{2}^{-}$) and is expected to be relatively narrow, whereas the previous data have a very large experimental width, it is perhaps possible to explain why such a state has not been observed in $^{13}\mathrm{C}$ in the $^{9}\mathrm{Be}(\alpha,\alpha_0)$ channel. The sole dominant feature in this region is a strong $\frac{5}{2}^{+}$ state at 11.95 MeV. \par
It is worth noting that the $\alpha_1$ channel is sub-threshold in $^{13}\mathrm{C}$ and the $n_{2}$ channel is heavily-suppressed until $^{13}\mathrm{C}$ excitation energies of above 13 MeV \cite{Obst}. There are many states in this region ($E_{\alpha} > 2$ MeV) visible in the $^{9}\mathrm{Be}(\alpha,n_0)$ channel but the resolution is insufficient to provide spin-parity and width assignments.\par
This perhaps motivates a more extensive investigation of near-threshold states in $^{13}\mathrm{C}$ from the $^{9}\mathrm{Be}+\alpha$ channel with higher resolution and angular coverage.
It is also worth noting that in the previous proton data \cite{Knudsen} there is a peak at this corresponding energy for the $p_1$ channel ($E_p$(lab) = 5.55 MeV) where a peak with a yield of $\approx 6$ can be seen above a considerable background. The conservative limit of $<10$ for $p_1$ is therefore taken. The width in this spectrum is also seen to be small which agrees with our results. \par
\subsection{13.1 MeV state}
A relatively strong peak is seen at 13.1 MeV in the $\alpha_3$ channel where decays occur through the 2.75 MeV $\frac{5}{2}^{+}$. There is only a very small contribution from the $\alpha_1$ channel at this excitation energy so this state is almost exclusively $^{9}\mathrm{B}(\frac{5}{2}^{+}) \bigotimes \alpha$. Given the dominance of $\alpha_3$, this suggests a spin-parity of $J^{\pi} = \frac{5}{2}^{-}$ which suppresses the other channels. \par
In $^{9}\mathrm{B}$, there is also the extremely broad 2.78 MeV $\frac{1}{2}^{-}$ with $\Gamma$ = 3.13 MeV, which may actually be the source of the $\alpha_3$ strength. Our data do not have sufficient statistics to exclude this possibility, and the $\frac{1}{2}^{-}$ decays primarily through $^{8}\mathrm{Be}$ via proton decay. In this case, the preferred spin-parity assignment is obviously $J^{\pi} =\frac{1}{2}^{-}$, corresponding to L=0 $\alpha_3$ decay. The results for both spin-parity assignments are included in Table~\ref{tab:states}.\par
As with the 12.4 MeV state, there is evidence of a peak in previous data at the correct energy in the $p_1$ channel ($E_p$(lab) = 6.20 MeV) which is given a similar limit of $<10$.
\subsection{13.7 MeV state}
There is a collection of strength in the $p_2$, $\alpha_0$, $\alpha_1$, and $\alpha_3$ channels. With a yield of $6(2.7)$, the state is dominated by $p_2$ and has parameters of $\sigma$ = 260(70) keV and $E_{x}$ = 13.7(1) MeV. Given the small yield in the $\alpha_3$ channel, this state can be assigned as either $\frac{3}{2}^{-}$ or $\frac{5}{2}^{-}$. A $\frac{5}{2}^{-}$ would correspond to L=3 for the $p_2$ channel, so a $\frac{3}{2}^{-}$ assignment would be more commensurate with the reasonable $p_2$ yield. This state also exhibits a $^{9}\mathrm{B}(\frac{5}{2}^{+}) \bigotimes \alpha$ structure. \par
Examining the previous work for evidence of a peak in the $p_1$ is not possible for this state due to the presence of a strong $p_0$ branch from a lower-lying state at the same energy. A similar limit of $<10$ is therefore placed on this state.
\section{Conclusions}
$\beta$-delayed 3$\alpha$p decay has been observed for the first time. While $\beta$-delayed $\alpha$p has been previously observed in $^{9}$C \cite{9C}, $^{17}\mathrm{Ne}$ \cite{17Ne}, $^{21}\mathrm{Mg}$ \cite{21Mg} and $^{23}\mathrm{Si}$ \cite{23Si}, these decays did not provide any structural insight and instead were mainly seen through isobaric analogue states that were well fed by $\beta$-decay. In this work, $\beta$3$\alpha$p decay was observed from states below the isobaric analog state in $^{13}\mathrm{N}$ at $E_{x}$ = 15 MeV, demonstrating that this is not merely a phase-space effect. The $\beta$-delayed 3$\alpha$p decays observed here are in strong competition with $\beta$-delayed proton decay and therefore the states must have significant clustering. \par
Three new states and a previously-tentative state in $^{13}\mathrm{N}$ have been observed with a strong $3\alpha+p$ nature. The first is a narrow $\frac{3}{2}^{-}$ state at $E_{x}$ = 11.3(1) MeV with mixed $^{9}\mathrm{B}(\mathrm{g.s}) \bigotimes \alpha$ and $p+^{12}\mathrm{C}(0_{2}^{+})$ nature. \\
Another previously-observed $\frac{3}{2}^{-}$ was seen to have mixed $p+^{12}\mathrm{C}(\mathrm{g.s.})$ and $p+^{12}\mathrm{C}(0_{2}^{+})$ nature at 11.8 MeV with around half of the total strength as $p+^{12}\mathrm{C}(\mathrm{g.s.})$. \\
At higher excitation, another strong $\alpha$-decaying state was seen at $E_{x}$ = 12.4(1) MeV although this state has a much stronger $^{9}\mathrm{B}({\frac{1}{2}^{+}}) \bigotimes \alpha$ nature. \\
A revised excitation energy of 13.1(1) MeV is suggested for a previously-seen state at 13.26 MeV. The $^{9}\mathrm{B}({\frac{5}{2}^{+}}) \bigotimes \alpha$ structure dominates in this state and a spin assignment of $J^{\pi}$ = ${\frac{1}{2}^{-}}$ or ${\frac{5}{2}^{-}}$ are therefore preferred. \\
Finally, another $\frac{3}{2}^{-}$ is seen at 13.7 MeV which is also dominated by $^{9}\mathrm{B}({\frac{5}{2}^{+}}) \bigotimes \alpha$. \par
The inability to extract the widths of these narrow states means that the magnitude of clustering cannot be fully evaluated; however, the type (channel) of clustering can be determined without this information. Higher-resolution data focusing on the proton channel may provide further information on the magnitude of this clustering phenomenon. From our current data, one may conclude that the clustered channels are competitive with the single-particle $p_0$ channel, highlighting the importance of cluster configurations in the non-self-conjugate nucleus $^{13}\mathrm{N}$.\par
Evidence for the low-lying $\frac{1}{2}^{+}$ in $^{9}\mathrm{B}$ in these background-free data, matching the parameters of previous observations \cite{Wheldon}, brings us closer to resolving the long-standing problem of searches for this elusive state.
\section{Acknowledgments}
We thank Vlad Goldberg for helpful feedback on this work. This work was supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Science under Award No. DE-FG02-93ER40773 and by the National Nuclear Security Administration through the Center for Excellence in Nuclear Training and University Based
Research (CENTAUR) under Grant No. DE-NA0003841. G.V.R. also acknowledges the support of the Nuclear Solutions Institute. S.A., S.M.C., C.K., D.K., S.K. and C.P. also acknowledge travel support from the IBS grant, funded by the Korean Government under grant number IBS-R031-D1. C.N.K acknowledges travel support from the National Research Foundation of Korea (NRF) grant, funded by the Korea government (MSIT) (No. 2020R1A2C1005981 and 2013M7A1A1075764).
|
{
"arxiv_id": "2302.14119",
"language": "en",
"timestamp": "2023-03-01T02:01:34",
"url": "https://arxiv.org/abs/2302.14119",
"yymm": "2302"
} |
\section{Acknowledgments}
This publication is based upon work supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research (OSR) under Award No.~OSR-2019-CRG8-4033, the Alexander von Humboldt Foundation, the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- 333849990/GRK2379 (IRTG Modern Inverse Problems), and was partially supported by the Flexible Interdisciplinary Research Collaboration Fund at the University of Nottingham Project ID 7466664.
\section{Proof of Remark \ref{rmk:num}}\label{ap:Rmk}
\begin{proof}
We first assume that $N_e=q=1$. From \eqref{eq:likelihood}, the numerator in the logarithm in \eqref{eq:EIG} is given by the following:
\begin{align}\label{eq:num}
&\int_{\bb{R}}\log\left(\frac{1}{\sqrt{2\pi\sigma_{\epsilon}^2}}\exp\left(-\frac{\epsilon^2}{2\sigma_{\epsilon}^2}\right)\right)\frac{1}{\sqrt{2\pi\sigma_{\epsilon}^2}}\exp\left(-\frac{\epsilon^2}{2\sigma_{\epsilon}^2}\right)\di{}\epsilon\nonumber\\
=&-\frac{1}{2}\log\left(2\pi\sigma_{\epsilon}^2\right)-\frac{1}{\sqrt{2\pi\sigma_{\epsilon}^2}}\frac{1}{2\sigma_{\epsilon}^2}\int_{\bb{R}}\epsilon^2\exp\left(-\frac{\epsilon^2}{2\sigma_{\epsilon}^2}\right)\di{}\epsilon.
\end{align}
We can write $s\coloneqq1/(2\sigma_{\epsilon}^2)$ and solve the integral in \eqref{eq:num} by taking the derivative with respect to $s$:
\begin{align}
\int_{\bb{R}}\epsilon^2e^{-s\epsilon^2}\di{}\epsilon&=-\int_{\bb{R}}\frac{\partial}{\partial s}e^{-s\epsilon^2}\di{}\epsilon=-\frac{\partial}{\partial s}\int_{\bb{R}}e^{-s\epsilon^2}\di{}\epsilon=-\sqrt{\pi}\frac{\partial}{\partial s}s^{-\frac{1}{2}}\nonumber\\
&=\sqrt{\frac{\pi}{s}}\frac{1}{2s}=\sqrt{2\pi\sigma_{\epsilon}^2}\sigma_{\epsilon}^2.
\end{align}
Inserting this back into \eqref{eq:num} changes the numerator in \eqref{eq:EIG} to $-(\log(2\pi\sigma_{\epsilon}^2)+1)/2$. The case for $\bs{\epsilon}_i\in\bb{R}^q$, $1\leq i\leq N_e$, follows trivially under the hypothesis.
\end{proof}
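As a quick sanity check of this closed form, the following Python sketch (our own illustration, not part of the derivation) compares a Monte Carlo estimate of the expected log-likelihood of Gaussian noise with $-(\log(2\pi\sigma_{\epsilon}^2)+1)/2$.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
sigma = 0.5
eps = rng.normal(0.0, sigma, size=10**6)

mc_estimate = norm.logpdf(eps, scale=sigma).mean()
closed_form = -0.5 * (np.log(2.0 * np.pi * sigma**2) + 1.0)
print(mc_estimate, closed_form)   # agree up to MC error
\end{verbatim}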
\section{Derivation of the finite element formulation}\label{ap:FEM}
We let $(\Omega,\cl{F},\bb{P})$ be a complete probability space with outcomes $\Omega$, $\sigma$-field $\cl{F}$, and probability measure $\bb{P}$. We define $\cl{H}\coloneqq H^1(\cl{D}\times \cl{D})$ as the space of the solution for the coupled thermomechanical fields $(\vartheta(\omega), \bs{u}(\omega))$ for $\omega\in\Omega$. In addition, $H^1(\cl{D}\times \cl{D})$ is the standard Sobolev space $W^{1,2}(\cl{D}\times \cl{D})$ with corresponding Sobolev norm. Furthermore, we define the Bochner space as follows:
\begin{equation}
V_{\vartheta}\times V_U\equiv L^2_{\bb{P}}(\Omega;\cl{H})\coloneqq \left\{(\vartheta,\bs{u})\colon\Omega\to\cl{H}\quad \text{s.t.}\,\left(\int_{\Omega}\lVert \vartheta(\omega),\bs{u}(\omega)\rVert_{\cl{H}}^2\di{}\bb{P}(\omega)\right)^{1/2}<\infty\right\}.
\end{equation}
We aim to determine $(\vartheta,\bs{u})\in L^2_{\bb{P}}(\Omega;\cl{H})$ such that the weak formulations \eqref{eq:weak.temperature} and \eqref{eq:weak.mechanical} are fulfilled for all $(\hat{\vartheta},\bs{\hat{u}})\in L^2_{\bb{P}}(\Omega;\cl{H})$.
\section{Introduction}
A nested integral is an integral of a usually nonlinear function of another parametric integral. Integrals of this type appear in many fields (e.g., geology \cite{God18}, mathematical finance \cite{Xu20}, medical decision-making \cite{Fan22}, and optimal experimental design (OED) \cite{Rya03}). The Monte Carlo (MC) method is one of the most popular approximation techniques for integrals, especially high-dimensional ones. For nested integrals, both the inner and outer integrals can be approximated using the MC method, resulting in the double-loop MC (DLMC) estimator.
Using MC to approximate a single integral to a specified error tolerance $TOL>0$ requires a sample size of $\cl{O}(TOL^{-2})$, whereas using DLMC results in worse complexity, with an overall number of samples of $\cl{O}(TOL^{-3})$ \cite{Rya03}. The two MC estimators in the DLMC estimator are connected through a nonlinear function; thus, the statistical error of the inner MC estimator causes bias in the DLMC estimator. The sample sizes of both MC estimators must be carefully controlled to guarantee that the error in the DLMC estimator is below $TOL$ with a certain confidence, controlling both the statistical error and bias. Improving the DLMC performance is the goal of intense research, with some approaches proposing the use of Laplace approximation \cite{Lon13}, importance sampling \cite{Bec18}, and multilevel MC (MLMC) \cite{Fan22}.
The randomized quasi-MC (RQMC) method \cite{Caf98, Hic98, Nie92, Owe03, Dic10, Lec18} is a promising technique to improve the efficiency of the basic MC method (i.e., the required number of samples to meet a certain tolerance) while maintaining nonintrusive sampling.
The RQMC estimator uses deterministic points from a low-discrepancy sequence and randomizes the entire sequence while maintaining a low-discrepancy structure.
Randomization allows for the use of either the central limit theorem or Chebyshev inequality to estimate and subsequently bound the error asymptotically \cite{Tuf04, Lec10, Lec18}.
Given appropriate regularity assumptions on the integrand, the RQMC method can improve the order of convergence of the approximation error without introducing additional bias to the MC estimator. Furthermore, if the integrand is sufficiently smooth, this approach can yield an asymptotic rate of 1 as the number of low-discrepancy points increases.
The number of evaluations needed by the RQMC estimator to achieve a tolerance $TOL$ is $\cl{O}(TOL^{-1})$.
Recently, researchers \cite{Gob22} have investigated the optimal error tolerance that can be achieved by RQMC, given a fixed number of samples and a certain confidence level. They demonstrated that combining RQMC with robust estimation improves error tolerances. The RQMC concepts have been applied in the context of OED in \cite{Dro18}, but only to the outer integral of a nested integration problem. In \cite{Dro18}, the inner integral is approximated using the MC method. A reduced sample standard deviation was observed for several numerical experiments for this scheme compared to using the MC method for both integrals, which was demonstrated for a fixed number of outer and inner samples.
In recent work \cite{Fan22}, an RQMC method was used to approximate the outer integral of a nested estimator in medical decision-making. The authors estimated the variance error using the sample variance and the bias error by successively doubling the number of inner samples. They compared this RQMC method with an MLMC approach and standard nested (i.e., double-loop) MC by specifying a target mean squared error and observing the number of samples needed until this target is reached. Both MLMC and RQMC have similar performance results in practice, depending on the number of parameters and other measures of model complexity.
In \cite{God18}, nested integrals are approximated using MLMC techniques. The number of inner samples is increased for each level to reduce the bias induced on the outer approximation by the variance of the inner approximation. The RQMC estimator approximates the inner integral and reduces the variance, requiring a smaller sample size in the inner loop to achieve the error tolerance in the MLMC setting specified in \cite[Theorem 3.1]{Gil08}. This outcome is presented both theoretically and via examples.
A similar approach is followed in \cite{Xu20}. In this study, a discontinuous inner integrand is approximated using a sigmoid (i.e., smooth function), allowing the RQMC method to be applied.
To further reduce the number of samples required to estimate nested integrals up to a specified error tolerance $TOL$, we use RQMC for both integrals to build a double-loop quasi-MC (DLQMC) estimator. Indeed, under suitable regularity conditions, the DLQMC method can significantly reduce the required number of overall samples to $\cl{O}(TOL^{-1.5})$, compared to $\cl{O}(TOL^{-3})$ for the DLMC method. Moreover, we demonstrate that using RQMC for the outer integral has a greater effect on the overall number of samples than using it for the inner integral, but further savings can still be achieved by applying RQMC to both integrals. We also consider the case where the inner integrand is given only approximately in terms of a computational model, resulting in additional bias for the outer approximation. We provide approximate error bounds using suitable RQMC estimators and verify them on numerical examples.
This paper provides a quick overview of the MC and quasi-MC (QMC) methods, including bounds on the absolute error in Section~\ref{sec:MC.QMC}. We introduce the proposed nested RQMC estimator in Section~\ref{sec:nested}. As the main contributions of this work, we derive asymptotic error bounds on the number of inner and outer samples in Propositions~\ref{prop:B.DLQ} and \ref{prop:V.DLQ}, and the optimal setting for this estimator in Proposition~\ref{prop:W.DLQ}. Finally, in Section~\ref{sec:numerical.results}, we present two examples from Bayesian OED, where nested integrals frequently arise. The first example is an algebraic model introduced in \cite{Hua13}, which can be evaluated at a low cost and serves as a toy problem to highlight the effectiveness of the proposed method. The second example demonstrates an application from solid mechanics involving the solution to a partial differential equation (PDE) with favorable regularity properties, demonstrating the practical applicability of the DLQMC estimator.
\section*{Greek alphabet}
\begin{description}
\item[$\alpha$] confidence level
\item[$\beta$] increase in convergence rate of QMC beyond MC for the outer integral
\item[$\gamma$] work rate for the finite element method
\item[$\delta$] increase in convergence rate of QMC beyond MC for the inner integral
\item[$\bs{\epsilon}$] observation noise
\item[$\varepsilon$] central limit theorem error
\item[$\bs{\varepsilon}$] strain tensor
\item[$\eta$] convergence rate of the discretization error
\item[$\bs{\theta}$] parameter of interest
\item[$\bs{\vartheta}$] dummy variable for the parameter of interest
\item[$\vartheta$] absolute temperature
\item[$\kappa$] splitting parameter between bias and variance error
\item[$\lambda$] first Lam\'{e} constant
\item[$\mu$] second Lam\'{e} constant
\item[$\bs{\xi}$] design parameter
\item[$\pi$] probability density function of the parameter of interest
\item[$\bs{\rho}$] randomization
\item[$\rho$] material density
\item[$\sigma$] diagonal elements of the error covariance matrix
\item[$\bs{\Sigma}$] error covariance matrix and approximate negative inverse Hessian of the log-likelihood
\item[$\Phi$] cumulative distribution function of the standard normal distribution
\item[$\omega$] random outcome in the finite element formulation
\item[$\Omega$] space of outcomes in the finite element formulation
\end{description}
\section{Brief overview of Monte Carlo and quasi-Monte Carlo integration}\label{sec:MC.QMC}
Before we address the case of nested integration, which is the focus of this work, we first recall the basic concepts for approximating integrals using MC and RQMC for the reader's convenience.
\subsection{Monte Carlo method}\label{sec:MC}
We can approximate the integral
\begin{equation}\label{eq:integral}
I=\int_{[0,1]^d}g(\bs{x})\di{}\bs{x},
\end{equation}
where $g:[0,1]^d\to\bb{R}$ is square-integrable, and $d$ a positive integer, using the MC estimator:
\begin{equation}\label{eq:monte.carlo}
I\approx I_{\rm{MC}}\coloneqq\frac{1}{N}\sum_{n=1}^{N}g(\bs{x}^{(n)}).
\end{equation}
The MC method uses random points $\bs{x}^{(1)},\ldots,\bs{x}^{(N)}$ that are independent and identically distributed (iid) samples from the uniform distribution $\cl{U}\left([0,1]^d\right)$ to approximate $I$ in \eqref{eq:integral}. Using the central limit theorem (CLT) \cite{Dur19} to analyze the error of the MC estimator, we find that
\begin{equation}\label{eq:CLT.prob}
\bb{P}(|I - I_{\rm{MC}}| \le \varepsilon_{\rm{MC}}) \ge 1 - \alpha
\end{equation}
where
\begin{equation}\label{eq:CLT.error}
\varepsilon_{\rm{MC}} \coloneqq \frac{C_{\alpha} \sqrt{\bb{V}[g]}}{\sqrt{N}},
\end{equation}
as $N\to\infty$, where $C_{\alpha} = \Phi^{-1}(1-\alpha/2)$, $\Phi^{-1}$ is the inverse cumulative distribution function (cdf) of the standard normal distribution, and $\bb{V}[g]$ is the variance of the integrand for $0<\alpha\ll1$. Alternatively, Chebyshev's inequality could be used to obtain an error estimate similar to \eqref{eq:CLT.prob} and \eqref{eq:CLT.error}, although with a potentially larger constant $C_{\alpha}$.
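As a minimal illustration of \eqref{eq:monte.carlo} and the error estimate \eqref{eq:CLT.error}, the following Python sketch returns the MC estimate together with the CLT-based error bound; the integrand is an arbitrary choice for illustration.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def mc_estimate(g, d, N, alpha=0.05, seed=0):
    # plain MC estimate of int_{[0,1]^d} g(x) dx with a CLT error bound
    rng = np.random.default_rng(seed)
    vals = g(rng.uniform(size=(N, d)))
    c_alpha = norm.ppf(1.0 - alpha / 2.0)
    return vals.mean(), c_alpha * vals.std(ddof=1) / np.sqrt(N)

# example: g(x) = exp(x_1 + ... + x_d), exact value (e - 1)^d
est, err = mc_estimate(lambda x: np.exp(x.sum(axis=1)), d=3, N=10**5)
\end{verbatim}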
\subsection{Quasi-Monte Carlo method}\label{sec:QMC}
The MC method always converges to the true value of the integral, even under weak assumptions, but the rate of $N^{-0.5}$ can be improved for certain integrands. We can instead use the RQMC method, which achieves a better convergence rate by exploiting the regularity properties of the integrand. For a square-integrable function $g:[0,1]^d\to\bb{R}$, the RQMC estimator to approximate the integral \eqref{eq:integral} is given by
\begin{equation}\label{eq:quasi.monte.carlo}
I_{\rm{Q}} \coloneqq \frac{1}{N}\sum_{n=1}^Ng(\bs{x}^{(n)}),
\end{equation}
where $\bs{x}^{(1)},\ldots,\bs{x}^{(N)}$ are chosen from a sequence of points consisting of a deterministic component $\bs{\xi}\in[0,1]^d$ and a random component $\bs{\rho}$,
\begin{equation}\label{eq:qmc.points}
\bs{x}^{(n)}=\{\bs{\xi}^{(n)},\bs{\rho}\}, \quad 1\leq n\leq N.
\end{equation}
In particular, choosing $\bs{\xi}^{(1)},\ldots,\bs{\xi}^{(N)}$ from a low-discrepancy sequence \cite{Nie92, Hic98} results in improved convergence for smooth integrands. One example is a lattice rule \cite{Hic98} combined with a random shift $\bs{\rho}\sim\mathcal{U}[0,1]^d$, which provides points of the form
\begin{equation}
\{\bs{\xi}^{(n)},\bs{\rho}\}=\mathfrak{fr}\left(\bs{\xi}^{(n)}+\bs{\rho}\right),
\end{equation}
where $\mathfrak{fr}(\cdot)$ denotes the componentwise fractional part operator. Every one-dimensional projection of this point set is injective. Another common method for selecting a suitable low-discrepancy sequence is to choose the deterministic points $\bs{\xi}^{(1)},\ldots,\bs{\xi}^{(N)}$ from a digital sequence \cite{Nie92, Owe03} and set $\bs{\rho}$ to be random permutations of the digits of $\bs{\xi}^{(1)},\ldots,\bs{\xi}^{(N)}$. When $[0,1]^d$ is split into equally spaced subintervals in each dimension, each subinterval contains the same number of points. We use a digital sequence, the Sobol sequence \cite{Sob67}, throughout this work, as it performed best in our numerical tests. The number of points $N$ should satisfy $\log_2(N)\in\bb{N}$ to achieve the best results for this sequence type. The difference between the estimators \eqref{eq:monte.carlo} and \eqref{eq:quasi.monte.carlo} lies in the points used to evaluate the function to be integrated.
Traditional error estimates for numerical integration based on deterministic low-discrepancy sequences (i.e., $\bs{x}^{(n)}=\{\bs{\xi}^{(n)}\}$ in \eqref{eq:qmc.points}) use the Koksma--Hlawka inequality \cite{Hla61, Nie92} to bound the error by a product of suitable measures for the low-discrepancy sequence and integrand, respectively. This approach can be problematic in practice because, in most instances, sharp estimates of these quantities are exceedingly difficult to obtain, and the resulting quadrature error bound is far from optimal \cite{Lec18}.
To obtain a probabilistic estimate using the CLT, we must use several iid randomizations $\bs{\rho}^{(r)}$, $1\leq r\leq R$ in \eqref{eq:qmc.points}. Then, we find that
\begin{equation}\label{eq:CLT.bound}
\bb{P}(|I - I_{\rm{Q}}| \le \varepsilon_{\rm{Q}}) \ge 1 - \alpha
\end{equation}
approximately holds for
\begin{equation}\label{eq:QMC.error}
\varepsilon_{\rm{Q}} \coloneqq \frac{C_{\alpha} \sqrt{\bb{V}[I_{\rm{Q}}]}}{\sqrt{R}},
\end{equation}
for $0 < \alpha\ll1$, where the variance of the RQMC estimator can be approximated as follows:
\begin{equation}\label{eq:qmc.var}
\bb{V}[I_{\rm{Q}}] \approx
\frac{1}{R-1}\sum_{r=1}^R\left(\frac{1}{N}\sum_{n=1}^Ng(\{\bs{\xi}^{(n)},\bs{\rho}^{(r)}\})-\bar{I}_{\rm{Q}}\right)^2,
\end{equation}
and
\begin{equation}
\bar{I}_{\rm{Q}}\coloneqq\frac{1}{R}\sum_{r=1}^R\frac{1}{N}\sum_{n=1}^Ng(\{\bs{\xi}^{(n)},\bs{\rho}^{(r)}\}).
\end{equation}
For a fixed $R$, the error \eqref{eq:QMC.error} decreases at the rate $\mathcal{O}\left(N^{-\frac{(1+\delta)}{2}}\right)$ for $N\to\infty$ \cite{Gob22, Loh03, Dic13, Lec18}, where $0\leq\delta\leq 1$ depends on the dimension $d$ and may depend on the regularity of the integrand $g$. For $\delta=0$, this provides the usual MC rate of 1/2. For certain functions $g$ with desirable properties such as smoothness and boundedness, more precise statements are possible \cite{Gob22, Owe08}. The CLT-based error estimate \eqref{eq:CLT.bound} only holds asymptotically as $R\to\infty$. It can still be used in practice to obtain a confidence interval of the error \eqref{eq:QMC.error}; however, keeping $R$ fixed and letting $N\to\infty$ is sometimes problematic, as \cite{Tuf04, Lec10} noted. Specifically, the convergence of the distribution of the estimator \eqref{eq:quasi.monte.carlo} to a normal distribution cannot be guaranteed. Chebyshev's inequality can also justify the convergence rate of $(1+\delta)/2$. We employ the CLT for the error analysis and demonstrate that the derived error bounds hold at the specified confidence level for a simple example. However, we remark that it is straightforward to adapt the analysis to the Chebyshev bounds instead.
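The following Python sketch illustrates the randomized estimator \eqref{eq:quasi.monte.carlo} with the variance estimate \eqref{eq:qmc.var}, using independently scrambled Sobol point sets from scipy.stats.qmc as the $R$ randomizations (digit scrambling here plays the role of $\bs{\rho}^{(r)}$); the integrand is again an arbitrary illustrative choice.
\begin{verbatim}
import numpy as np
from scipy.stats import norm, qmc

def rqmc_estimate(g, d, N, R, alpha=0.05, seed=0):
    # R independently scrambled Sobol sets of N points each (N a power of two)
    estimates = np.array([
        g(qmc.Sobol(d=d, scramble=True, seed=seed + r).random(N)).mean()
        for r in range(R)
    ])
    c_alpha = norm.ppf(1.0 - alpha / 2.0)
    # CLT-based error bound, cf. eq. (QMC.error) with the sample variance (qmc.var)
    return estimates.mean(), c_alpha * estimates.std(ddof=1) / np.sqrt(R)

est, err = rqmc_estimate(lambda x: np.exp(x.sum(axis=1)), d=3, N=2**10, R=32)
\end{verbatim}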
\begin{rmk}[Integration over general domains]
The (RQ)-MC method is commonly defined for integration over the unit cube and uniform random variables. For integrals over general domains (e.g., normal random variables), the corresponding inverse cdf can be applied to maintain the general shape of the estimators \eqref{eq:monte.carlo} and \eqref{eq:quasi.monte.carlo}.
\end{rmk}
\section{Nested integration}\label{sec:nested}
After discussing the basics of the RQMC estimator, we address the focus of this work.
This section establishes the DLQMC estimator for nested integration problems, derives asymptotic error bounds in the number of samples, and analyzes the optimal work required for this estimator to meet a tolerance goal.
\begin{set}[Nested integral]
We define a nested integral as
\begin{equation}\label{eq:double.loop}
I = \int_{[0,1]^{d_1}}f\left(\int_{[0,1]^{d_2}}g(\bs{y},\bs{x})\di{}\bs{x}\right)\di{}\bs{y},
\end{equation}
where the square-integrable function $f:\bb{R}\to\bb{R}$ is nonlinear and twice differentiable. In addition, $g:[0,1]^{d_1}\times[0,1]^{d_2}\to\bb{R}$ is square-integrable and defines a nonlinear relation between $\bs{x}$ and $\bs{y}$, where $d_1, d_2$ are positive integers.
\end{set}
\begin{exa}[Nested integral]
The integral
\begin{equation}\label{exa:double.loop.nonl}
I = \int_{[0,1]^{d_1}}\log\left(\int_{[0,1]^{d_2}}\exp(\bs{y}\cdot\bs{G}(\bs{x}))\di{}\bs{x}\right)\di{}\bs{y},
\end{equation}
where $\bs{G}(\bs{x})$ is a nonlinear function, is of the nested type and is typically not solvable in closed form, motivating the use of numerical integration techniques to approximate $I$.
\end{exa}
A standard method to approximate a nested integral \eqref{eq:double.loop} is via the DLMC estimator \cite{Rya03}, defined as
\begin{equation}\label{eq:dlmc}
I_{\rm{DLMC}} \coloneqq \frac{1}{N}\sum_{n=1}^Nf\left(\frac{1}{M}\sum_{m=1}^Mg(\bs{y}^{(n)},\bs{x}^{(n,m)})\right),
\end{equation}
where the points $\bs{y}^{(n)}$, $1\leq n\leq N$, are sampled iid from $\cl{U}\left([0,1]^{d_1}\right)$, and $\bs{x}^{(n,m)}$, $1\leq n\leq N$, $1\leq m\leq M$, are sampled iid from $\cl{U}\left([0,1]^{d_2}\right)$. The standard MC estimator \eqref{eq:monte.carlo} for a single integral is unbiased and has a variance that decreases with the number of samples; this also holds for the inner MC estimator in \eqref{eq:dlmc}, whose variance decreases with the number of inner samples $M$. The outer MC estimator in \eqref{eq:dlmc} has a variance that decreases with $N$ but also carries a bias proportional to the variance of the inner integral estimate. Thus, we typically require many inner and outer samples to keep the bias and variance of this estimator in check, significantly limiting its practical usefulness, particularly for computationally demanding problems.
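A minimal Python sketch of the DLMC estimator \eqref{eq:dlmc} is given below; the particular $f$, $g$, dimensions, and sample sizes are illustrative stand-ins, loosely following the form of the nested example above, rather than the models used later in the paper.
\begin{verbatim}
import numpy as np

def dlmc_estimate(f, g, d1, d2, N, M, seed=0):
    # double-loop MC: outer samples y^(n), inner samples x^(n,m), cf. eq. (dlmc)
    rng = np.random.default_rng(seed)
    outer = np.empty(N)
    for n in range(N):
        y = rng.uniform(size=d1)
        X = rng.uniform(size=(M, d2))
        outer[n] = f(g(y, X).mean())
    return outer.mean()

# illustrative instance: f = log, g(y, x) = exp(y . G(x)) with G(x) = x^2, d1 = d2
g = lambda y, X: np.exp((X**2) @ y)
I_dlmc = dlmc_estimate(np.log, g, d1=2, d2=2, N=1000, M=1000)
\end{verbatim}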
Directly evaluating the function $g$ is often not possible. For example, if evaluating $g$ requires solving a PDE, we may only have access to a finite element method (FEM) approximation $g_h$ with discretization parameter $h$. As $h \to 0$ asymptotically, the convergence order of $g_h$ is given by
\begin{equation}\label{eq:FEM}
\bb{E}[| g(\bs{y},\bs{x})-g_h(\bs{y},\bs{x})|] = C_{\rm{disc}}h^{\eta}+h.o.t.,
\end{equation}
where $\eta>0$ is the $h$-convergence rate and $C_{\rm{disc}}>0$ is a constant. The work of evaluating $g_h$ is assumed to be $\cl{O}(h^{-\gamma})$, for some $\gamma > 0$.
Unless stated otherwise, we assume that a discretized $g_h$ is used in the numerical estimators and omit the subscript for concision. The DLMC estimator has a bias with the following upper bound:
\begin{equation}\label{eq:bias.bound}
|\mathbb{E}[I_{\rm{DLMC}}]-I|\leq C_{\rm{disc}}h^{\eta}+\frac{C_{\mathrm{MC},3}}{M}+o(h^{\eta})+\cl{O}\left(\frac{1}{M^2}\right),
\end{equation}
where $C_{\rm{MC}, 3}>0$ is a constant related to the variance of the inner MC estimation in \eqref{eq:dlmc}, and $C_{\rm{disc}}>0$ might be different from the one introduced in \eqref{eq:FEM}. The DLMC estimator has a variance with the following upper bound:
\begin{equation}
\bb{V}[I_{\rm{DLMC}}]\leq \frac{C_{\mathrm{MC},1}}{N}+\frac{C_{\mathrm{MC},2}}{NM}+\cl{O}\left(\frac{1}{NM^2}\right),
\end{equation}
where $C_{\rm{MC}, 1},C_{\rm{MC}, 2} >0$ are constants \cite{Bec18}.
The optimal work of the DLMC estimator for a specified error tolerance $TOL>0$ is given by
\begin{equation}\label{eq:Opt.Work}
W_{DLMC}^{\ast}\propto TOL^{-(3+\frac{\gamma}{\eta}) }.
\end{equation}
Proofs for the specific case of approximating the expected information gain (EIG) in Bayesian OED are presented in \cite{Bec18}, but adapting them to the general case is straightforward.
To obtain smaller error bounds and, subsequently, smaller optimal work, we replace both MC approximations in \eqref{eq:dlmc} with RQMC approximations to arrive at the DLQMC estimator, which we define below.
\begin{set}[DLQMC estimator]
The DLQMC estimator of a nested integral \eqref{eq:double.loop} is defined as follows:
\begin{equation}\label{eq:dlqmc}
I_{\rm{DLQ}} \coloneqq \frac{1}{N}\sum_{n=1}^Nf\left(\frac{1}{M}\sum_{m=1}^Mg(\bs{y}^{(n)},\bs{x}^{(n,m)})\right),
\end{equation}
where the square-integrable function $f:\bb{R}\to\bb{R}$ is nonlinear, and $g:[0,1]^{d_1}\times[0,1]^{d_2}\to\bb{R}$ is square-integrable. The sample points have the following shape (see \eqref{eq:qmc.points}):
\begin{align}
\bs{y}^{(n)}&=\{\bs{\xi}_{d_1}^{(n)},\bs{\rho}_{d_1}\}, \quad\,\, 1\leq n\leq N\nonumber\\
\bs{x}^{(n,m)}&=\{\bs{\xi}_{d_2}^{(m)},\bs{\rho}_{d_2}^{{(n)}}\}, \quad 1\leq n\leq N, \,1\leq m\leq M,
\end{align}
where $\bs{\xi}_{d_1}\in [0,1]^{d_1}$ and $\bs{\xi}_{d_2}\in [0,1]^{d_2}$.
\end{set}
One instance of the estimator \eqref{eq:dlqmc} requires $N+1$ iid randomizations. The total number of function evaluations required to obtain a probabilistic error estimate using iid randomizations $\bs{\rho}^{(r)}\equiv\{\bs{\rho}_{d_1},\bs{\rho}_{d_2}^{(1)},\ldots,\bs{\rho}_{d_2}^{(n)}\}^{(r)}$, $1\leq r\leq R$, is $N\times M\times R$. The difference between the estimators \eqref{eq:dlmc} and \eqref{eq:dlqmc} lies in the points used to evaluate the function to be integrated.
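A corresponding sketch of the DLQMC estimator \eqref{eq:dlqmc} follows, again with scrambled Sobol points from scipy.stats.qmc: one randomization is used for the outer point set and an independently scrambled copy of the same deterministic inner sequence is drawn for each outer sample, mirroring the definition above. Repeating the whole computation over $R$ independent seeds yields the CLT-based error estimate as in Section~\ref{sec:QMC}; the illustrative $f$ and $g$ are the same placeholders as in the DLMC sketch.
\begin{verbatim}
import numpy as np
from scipy.stats import qmc

def dlqmc_estimate(f, g, d1, d2, N, M, seed=0):
    # one scrambled Sobol set for the outer samples ...
    Y = qmc.Sobol(d=d1, scramble=True, seed=seed).random(N)
    outer = np.empty(N)
    for n in range(N):
        # ... and an independently scrambled inner set for each outer sample
        X = qmc.Sobol(d=d2, scramble=True, seed=seed + 1 + n).random(M)
        outer[n] = f(g(Y[n], X).mean())
    return outer.mean()

# N and M should be powers of two for the Sobol sequence
g = lambda y, X: np.exp((X**2) @ y)
I_dlq = dlqmc_estimate(np.log, g, d1=2, d2=2, N=2**10, M=2**10)
\end{verbatim}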
Next, we analyze the error of DLQMC. First, we split the error into the bias and statistical errors, respectively, and estimate each individually. Specifically, these terms are
\begin{equation}
|I_{\rm{DLQ}}-I|\leq\underbrace{|\mathbb{E}[I_{\rm{DLQ}}]-I|}_{\text{bias error}}+\underbrace{|I_{\rm{DLQ}}-\mathbb{E}[I_{\rm{DLQ}}]|}_{\text{statistical error}}.
\end{equation}
The CLT allows us to bound the statistical error in probability in terms of the variance $\bb{V}[I_{\rm{DLQ}}]$.
\begin{prop}[Bias of the DLQMC estimator]\label{prop:B.DLQ}
The DLQMC estimator for \eqref{eq:double.loop} has a bias with the following upper bound:
\begin{equation}\label{eq:Bias.constraint}
|\mathbb{E}[I_{\rm{DLQ}}]-I|\leq C_{\mathrm{disc}}h^{\eta} + \frac{C_{\mathrm{Q},3}}{M^{(1+\delta)}}+\mathcal{O}\left(h^{\eta+1}\right)+\mathcal{O}\left(\frac{1}{M^{2(1+\delta)}}\right),
\end{equation}
where $C_{\mathrm{Q},3}, C_{\mathrm{disc}}>0$, and $\eta$ is the weak rate defined in \eqref{eq:bias.bound}, which is induced by the approximate function $g_h$. The parameter $0\leq\delta\leq1$ depends on the dimension $d_2$ and may depend on the smoothness of $g$.
\end{prop}
\begin{proof}
We start by introducing $I_{\rm{DLQ}}^{\rm{ex}}=\lim_{h\to 0^{+}}I_{\rm{DLQ}}$, the DLQMC estimator for $g$ evaluated exactly, and further split the bias into
\begin{equation}
|\mathbb{E}[I_{\rm{DLQ}}]-I|\leq \underbrace{|\mathbb{E}[I_{\rm{DLQ}} - I_{\rm{DLQ}}^{\rm{ex}}]|}_{\text{discretization bias}} + \underbrace{|\mathbb{E}[I_{\rm{DLQ}}^{\rm{ex}}]-I|}_{\text{inner sampling bias}}.
\end{equation}
For the discretization bias, we have
\begin{equation}
|\mathbb{E}[I_{\rm{DLQ}} - I_{\rm{DLQ}}^{\rm{ex}}]|\leq C_{\mathrm{disc}}h^{\eta} +\mathcal{O}\left(h^{\eta+1}\right),
\end{equation}
where $\eta$ is the weak rate defined in \eqref{eq:bias.bound}.
For the bias from the inner sampling, we first define the following:
\begin{equation}
I_Q(\{\bs{\xi}_{d_1},\bs{\rho}_{d_1}\})\coloneqq\frac{1}{M}\sum_{m=1}^Mg(\{\bs{\xi}_{d_1},\bs{\rho}_{d_1}\},\{\bs{\xi}_{d_2}^{(m)},\bs{\rho}_{d_2}\}).
\end{equation}
Next, we use the second-order Taylor expansion of $f(X)$ for a random variable $X$ around $\mathbb{E}[X]$,
\begin{equation}\label{eq:Taylor}
f(X) = f(\mathbb{E}[X]) + f'(\mathbb{E}[X])(X-\mathbb{E}[X]) + \frac{1}{2}f''(\mathbb{E}[X])(X-\mathbb{E}[X])^2 + \mathcal{O}\left(|X-\mathbb{E}[X]|^3\right),
\end{equation}
to Taylor expand $f(I_Q(\{\bs{\xi}_{d_1},\bs{\rho}_{d_1}\}))$ around $\mathbb{E}[I_Q(\{\bs{\xi}_{d_1},\bs{\rho}_{d_1}\})|\bs{\rho}_{d_1}]=g(\{\bs{\xi}_{d_1},\bs{\rho}_{d_1}\})$ (with a slight abuse of notation)
\begin{align}\label{eq:Taylor.IM}
f(I_Q(\{\bs{\xi}_{d_1},\bs{\rho}_{d_1}\}))&=f(g(\{\bs{\xi}_{d_1},\bs{\rho}_{d_1}\})) + f'(g(\{\bs{\xi}_{d_1},\bs{\rho}_{d_1}\}))(I_Q(\{\bs{\xi}_{d_1},\bs{\rho}_{d_1}\})-g(\{\bs{\xi}_{d_1},\bs{\rho}_{d_1}\}))\nonumber\\
&+\frac{1}{2}f''(g(\{\bs{\xi}_{d_1},\bs{\rho}_{d_1}\}))\left(I_Q(\{\bs{\xi}_{d_1},\bs{\rho}_{d_1}\})-g(\{\bs{\xi}_{d_1},\bs{\rho}_{d_1}\})\right)^2\nonumber\\
&+ \mathcal{O}\left(|I_Q(\{\bs{\xi}_{d_1},\bs{\rho}_{d_1}\})-g(\{\bs{\xi}_{d_1},\bs{\rho}_{d_1}\})|^3\right).
\end{align}
Taking the expectation conditioned on $\bs{\rho}_{d_1}$, we obtain
\begin{equation}\label{eq:Taylor.ev}
\mathbb{E}[f(I_Q(\{\bs{\xi}_{d_1},\bs{\rho}_{d_1}\}))|\bs{\rho}_{d_1}]=\mathbb{E}[f(g(\{\bs{\xi}_{d_1},\bs{\rho}_{d_1}\}))|\bs{\rho}_{d_1}] +\frac{1}{2}f''(g(\{\bs{\xi}_{d_1},\bs{\rho}_{d_1}\}))\mathbb{V}\left[I_Q(\{\bs{\xi}_{d_1},\bs{\rho}_{d_1}\})|\bs{\rho}_{d_1}\right]+\mathcal{O}\left(\frac{1}{M^{2(1+\delta)}}\right),
\end{equation}
where the higher-order term can be derived using the Bienaym\'{e} formula, resulting in
\begin{align}
\mathbb{E}[I_{\rm{DLQ}}]-I&=\mathbb{E}\Bigg[\mathbb{E}[f(I_Q(\{\bs{\xi}_{d_1},\bs{\rho}_{d_1}\}))|\bs{\rho}_{d_1}]-\bb{E}[f(g(\{\bs{\xi}_{d_1},\bs{\rho}_{d_1}\}))|\bs{\rho}_{d_1}]\Bigg]\nonumber\\
&=\mathbb{E}\Bigg[\frac{1}{2}f''(g(\{\bs{\xi}_{d_1},\bs{\rho}_{d_1}\}))\mathbb{V}\left[I_Q(\{\bs{\xi}_{d_1},\bs{\rho}_{d_1}\})|\bs{\rho}_{d_1}\right]\Bigg]+\mathcal{O}\left(\frac{1}{M^{2(1+\delta)}}\right)\nonumber\\
&\leq\frac{1}{M^{(1+\delta)}}\mathbb{E}\left[\frac{1}{2}f''(g(\{\bs{\xi}_{d_1},\bs{\rho}_{d_1}\}))\tilde{C}_{\mathrm{Q},3}(\bs{\rho}_{d_1})\right]+\mathcal{O}\left(\frac{1}{M^{2(1+\delta)}}\right).
\end{align}
\end{proof}
The parameter $\delta$ can be estimated numerically along with $\tilde{C}_{\mathrm{Q},3}(\bs{\rho}_{d_1})$ for practical applications. In addition, the constant $C_{\mathrm{disc}}$ in \eqref{eq:Bias.constraint} might be different from that in \eqref{eq:bias.bound}.
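One simple way to estimate $1+\delta$ in practice, assuming a variance model of the form $\mathbb{V}[I_Q\,|\,\bs{\rho}_{d_1}]\approx \tilde{C}_{\mathrm{Q},3}\,M^{-(1+\delta)}$, is a log-log regression of the sample variance of the inner RQMC estimator over several values of $M$, as in the following sketch; the fixed outer point $y$, the function $g$, and the sample sizes are placeholders for illustration.
\begin{verbatim}
import numpy as np
from scipy.stats import qmc

def estimate_inner_rate(g, y, d2, Ms=(2**5, 2**6, 2**7, 2**8, 2**9),
                        R=64, seed=0):
    # regress log V[I_Q] on log M over R scrambled Sobol replicates
    log_var = []
    for M in Ms:
        reps = [g(y, qmc.Sobol(d=d2, scramble=True, seed=seed + r).random(M)).mean()
                for r in range(R)]
        log_var.append(np.log(np.var(reps, ddof=1)))
    slope, _ = np.polyfit(np.log(Ms), log_var, 1)
    return -slope   # estimate of 1 + delta
\end{verbatim}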
\begin{prop}[Variance of the DLQMC estimator]\label{prop:V.DLQ}
The DLQMC estimator for \eqref{eq:double.loop} has a variance with the following upper bound:
\begin{equation}\label{eq:variance}
\mathbb{V}[I_{\rm{DLQ}}]\leq \frac{C_{\mathrm{Q},1}}{N^{(1+\beta)}} + \frac{C_{\mathrm{Q},2}}{NM^{(1+\delta)}}+\mathcal{O}\left(\frac{1}{N^{(1+\tilde{\beta})}M^{2(1+\delta)}}\right),
\end{equation}
where $C_{\mathrm{Q},1}, C_{\mathrm{Q},2}>0$ are constants and $0\leq \beta\leq1$ depends on the dimension $d_1$ and may depend on the smoothness of $f$, $0\leq \delta\leq1$ depends on the dimension $d_2$ and may depend on the smoothness of $g$, and $0\leq \tilde{\beta}\leq1$ depends on the dimension $d_2$ and may depend on the smoothness of the higher-order terms.
\end{prop}
\begin{proof}
By the law of total variance, we have
\begin{equation}\label{eq:total.variance}
\mathbb{V}[I_{\rm{DLQ}}]=\mathbb{V}\left[\mathbb{E}\left[\frac{1}{N}\sum_{n=1}^Nf(I_Q(\{\bs{\xi}_{d_1}^{(n)},\bs{\rho}_{d_1}\}))\Bigg|\bs{\rho}_{d_1}\right]\right] + \mathbb{E}\left[\mathbb{V}\left[\frac{1}{N}\sum_{n=1}^Nf(I_Q(\{\bs{\xi}_{d_1}^{(n)},\bs{\rho}_{d_1}\}))\Bigg|\bs{\rho}_{d_1}\right]\right].
\end{equation}
Using \eqref{eq:Taylor.ev} for the first term yields
\begin{align}
&\mathbb{V}\left[\frac{1}{N}\sum_{n=1}^N \Bigg(f(g(\{\bs{\xi}_{d_1}^{(n)},\bs{\rho}_{d_1}\})) +\frac{1}{2}f''(g(\{\bs{\xi}_{d_1}^{(n)},\bs{\rho}_{d_1}\}))\mathbb{V}\left[I_Q(\{\bs{\xi}_{d_1}^{(n)},\bs{\rho}_{d_1}\})\Bigg|\bs{\rho}_{d_1}\right]\Bigg)\right]\nonumber\\
+&\mathcal{O}\left(\frac{1}{N^{(1+\tilde{\beta})}M^{2(1+\delta)}}\right)\nonumber\\
=&\mathbb{V}\left[\frac{1}{N}\sum_{n=1}^N \Bigg(f(g(\{\bs{\xi}_{d_1}^{(n)},\bs{\rho}_{d_1}\})) +\frac{f''(g(\{\bs{\xi}_{d_1}^{(n)},\bs{\rho}_{d_1}\}))\tilde{D_1}(\bs{\rho}_{d_1})}{2M^{(1+\delta)}}\Bigg)\right]+\mathcal{O}\left(\frac{1}{N^{(1+\tilde{\beta})}M^{2(1+\delta)}}\right)\nonumber\\
&=\frac{\tilde{C}_{\mathrm{Q},1}}{N^{(1+\beta)}}+\mathcal{O}\left(\frac{1}{N^{(1+\tilde{\beta})}M^{2(1+\delta)}}\right).
\end{align}
Using \eqref{eq:Taylor.IM} only up to first order for the second term in \eqref{eq:total.variance} results in
\begin{align}
&\mathbb{E}\left[\mathbb{V}\left[\frac{1}{N}\sum_{n=1}^N \Bigg(f(g(\{\bs{\xi}_{d_1}^{(n)},\bs{\rho}_{d_1}\})) + f'(g(\{\bs{\xi}_{d_1}^{(n)},\bs{\rho}_{d_1}\}))\left(I_{Q}(\{\bs{\xi}_{d_1}^{(n)},\bs{\rho}_{d_1}\})-g(\{\bs{\xi}_{d_1}^{(n)},\bs{\rho}_{d_1}\})\right)\Bigg)\Bigg|\bs{\rho}_{d_1}\right]\right] \nonumber\\
&+ \mathcal{O}\left(\frac{1}{NM^{2(1+\delta)}}\right)\nonumber\\
&=\mathbb{E}\left[\mathbb{V}\left[\frac{1}{N}\sum_{n=1}^N\Bigg(f'(g(\{\bs{\xi}_{d_1}^{(n)},\bs{\rho}_{d_1}\}))I_{Q}(\{\bs{\xi}_{d_1}^{(n)},\bs{\rho}_{d_1}\})\Bigg)\Bigg|\bs{\rho}_{d_1}\right]\right]+ \mathcal{O}\left(\frac{1}{NM^{2(1+\delta)}}\right)\nonumber\\
&=\frac{\mathbb{E}\left[f'(g(\{\bs{\xi}_{d_1},\bs{\rho}_{d_1}\}))^2\tilde{C}_{\mathrm{Q},2}(\bs{\rho}_{d_1})\right]}{NM^{(1+\delta)}}+ \mathcal{O}\left(\frac{1}{NM^{2(1+\delta)}}\right).
\end{align}
\end{proof}
We expect $\beta$, $\delta$, and $\tilde{\beta}$ to be different from each other in general, as the approximated integrands might have different smoothness properties and dimensions.
With the error bounds established in Propositions~\eqref{prop:B.DLQ} and \eqref{prop:V.DLQ}, we can analyze the work required for the DLQMC estimator in terms of the number of samples and approximated model. We assume that the work for each model evaluation is $\cl{O}\left(h^{-\gamma}\right)$.
\begin{prop}[Optimal work of the DLQMC estimator]\label{prop:W.DLQ}
The total work of the optimized DLQMC estimator for a specified error tolerance $TOL>0$ is given by
\begin{equation}\label{eq:work.dlq}
W_{\mathrm{DLQ}}^{\ast}\propto TOL^{-\left(\frac{2}{(1+\beta)}+\frac{1}{(1+\delta)}+\frac{\gamma}{\eta}\right)}
\end{equation}
as $TOL\to 0$, where $0\leq \beta\leq1$ depends on the dimension $d_1$ and may depend on the smoothness of $f$, and $0\leq \delta\leq1$ depends on the dimension $d_2$ and may depend on the smoothness of $g$.
\end{prop}
\begin{proof}
The computational work of the DLQMC estimator is
\begin{equation}\label{eq:Work}
W_{\mathrm{DLQ}}\propto NMh^{-\gamma},
\end{equation}
where $h^{-\gamma}$ is proportional to the work required to evaluate $g_h$ for the discretization parameter $h$. The CLT allows us to bound the statistical error from above in probability, using the variance bound \eqref{eq:variance}. We obtain the optimal setting by solving
\begin{equation}\label{eq:Lagrangian}
(N^{\ast},M^{\ast},h^{\ast},\kappa^{\ast})=\argmin_{(N,M,h,\kappa)}NMh^{-\gamma} \quad \text{subject to}\quad \begin{cases}\frac{C_{\rm{Q},1}}{N^{(1+\beta)}}+\frac{C_{\rm{Q},2}}{NM^{(1+\delta)}}\leq\left(\frac{\kappa TOL}{C_{\alpha}}\right)^2\\C_{\rm{disc}}h^{\eta}+\frac{C_{\rm{Q},3}}{M^{(1+\delta)}}\leq(1-\kappa)TOL\end{cases},
\end{equation}
where $C_{\alpha}=\Phi^{-1}(1-\frac{\alpha}{2})$ is the $(1-\frac{\alpha}{2})$-quantile of the standard normal distribution corresponding to the confidence level $1-\alpha$, $TOL>0$ is the allotted error tolerance, and $\kappa\in(0,1)$ is an error-splitting parameter.
We can solve this problem using Lagrange multipliers to derive the optimal $M^{\ast}$ and $h^{\ast}$ in terms of $\kappa$ and $N$. The equation for $\kappa^{\ast}$ is cubic and has a closed-form solution, but it is unwieldy to state explicitly here. The last remaining equation for $N^{\ast}$ unfortunately has no closed-form solution for $0<\beta<1$, so we must solve a simplified version and demonstrate that the resulting solution converges to the true solution as $TOL\to 0$. The optimal values are given by
\begin{equation}\label{eq:M.opt}
M^{\ast}=\left(\frac{C_{\rm{Q},2}}{N\left(\frac{\kappa TOL}{C_{\alpha}}\right)^2-\frac{C_{\rm{Q},1}}{N^{\beta}}}\right)^{1/(1+\delta)},
\end{equation}
\begin{equation}\label{eq:h.opt}
h^{\ast}=\left(\frac{(1-\kappa)TOL-\frac{C_{\rm{Q},3}}{C_{\rm{Q},2}}\left(N\left(\frac{\kappa TOL}{C_{\alpha}}\right)^2-\frac{C_{\rm{Q},1}}{N^{\beta}}\right)}{C_{\rm{disc}}}\right)^{1/\eta}.
\end{equation}
The optimal $\kappa^{\ast}$ is given by the real root of
\begin{align}
&\left[\frac{C_{\rm{Q},3}}{C_{\rm{Q},2}}\left(\frac{NTOL}{C_{\alpha}}\right)^2\left(\eta + \gamma(1 + \delta)\right)\right]\kappa^{\ast 3} + \left[NTOL(\eta+\frac{\gamma(1+\delta)}{2})\right]\kappa^{\ast 2}\nonumber\\
-&\left[N\left(\eta TOL+\frac{C_{\rm{Q},3}}{C_{\rm{Q},2}}\frac{C_{\rm{Q},1}}{N^{\beta}}(\eta+\gamma(1+\delta))\right)\right]\kappa^{\ast}-\frac{\gamma(1+\delta)C_{\rm{Q},1}C_{\alpha}^2}{2N^{\beta}TOL}=0.
\end{align}
Finally, the optimal $N^{\ast}$ is given by the solution to
\begin{equation}\label{eq:N.num}
\left[\frac{C_{\rm{Q},3}}{C_{\rm{Q},2}}\left(\frac{\kappa TOL}{C_{\alpha}}\right)^2\right]N^{(2+\beta)}-\left[TOL\left(1-\kappa\left(1+\frac{\gamma}{2\eta}\right)\right)\right]N^{(1+\beta)}-\frac{C_{\rm{Q},3}}{C_{\rm{Q},2}}C_{\rm{Q},1}N+\frac{\gamma C_{\alpha}^2C_{\rm{Q},1}\beta}{2\eta\kappa TOL}=0.
\end{equation}
It immediately follows from \eqref{eq:Lagrangian} that such an $N^{\ast}$ exists. However, it is only given in closed form for the cases $\beta\in\{0,1\}$.
Assuming that the optimal discretization parameter $h^{\ast}$ is independent of the sampling method (MC or RQMC), we split the bias constraint \eqref{eq:Bias.constraint} as follows:
\begin{align}
C_{\rm{disc}}h^{\eta}&\leq\frac{1}{2}(1-\kappa)TOL,\\
\frac{C_{\rm{Q},3}}{M^{(1+\delta)}}&\leq\frac{1}{2}(1-\kappa)TOL,
\end{align}
to derive asymptotic rates in terms of the tolerance $TOL$. A more elaborate splitting might improve the constants, but this equal split is used here purely for analytical purposes. Together with \eqref{eq:h.opt} and \eqref{eq:M.opt}, this implies that
\begin{equation}\label{eq:h.rate}
h^{\ast}\propto TOL^{\frac{1}{\eta}}
\end{equation}
and
\begin{equation}\label{eq:M.rate}
M^{\ast}\propto TOL^{-\frac{1}{(1+\delta)}}.
\end{equation}
Next, we demonstrate that $N\propto TOL^{-\frac{2}{(1+\beta)}}$. From the variance constraint \eqref{eq:variance}, we obtain
\begin{equation}\label{eq:appr.variance}
\left( \frac{\kappa TOL}{C_{\alpha}} \right)^2N^{(1+\beta)} - (1 - \kappa) TOL N^{\beta} = C_{\rm{Q},1}.
\end{equation}
The term on the right-hand side of \eqref{eq:appr.variance} is constant in $TOL$. If we ignore the second term on the left-hand side, we can solve the equation \eqref{eq:appr.variance} and obtain the approximate solution:
\begin{equation}\label{eq:app.sol.N}
N\approx \left(\frac{C_{\alpha}^2C_{\rm{Q},1}}{\kappa^2}\right)^{\frac{1}{(1+\beta)}} TOL^{-\frac{2}{(1+\beta)}}.
\end{equation}
To verify that this approximation converges to the true solution, we insert \eqref{eq:app.sol.N} into the ignored term in \eqref{eq:appr.variance} and check that it approaches 0 as $TOL\to 0$. For this term, we have
\begin{equation}
(1 - \kappa) TOL N^{\beta}\approx(1-\kappa)\left(\frac{C_{\alpha}^2C_{\rm{Q},1}}{\kappa^2}\right)^{\frac{\beta}{(1+\beta)}}TOL^{1-\frac{2\beta}{(1+\beta)}}=(1-\kappa)\left(\frac{C_{\alpha}^2C_{\rm{Q},1}}{\kappa^2}\right)^{\frac{\beta}{(1+\beta)}}TOL^{\frac{(1-\beta)}{(1+\beta)}},
\end{equation}
where the exponent of $TOL$ is between 0 and 1 because $0<\beta< 1$. Thus, this term approaches 0 as $TOL\to 0$. In contrast, if we ignored the first term in \eqref{eq:appr.variance}, the approximate solution would be $N\approx-(C_{\rm{Q},1}/(1-\kappa))^{1/\beta}TOL^{-1/\beta}$. Inserting this solution, the ignored first term in \eqref{eq:appr.variance} carries a negative exponent of $TOL$ and thus diverges as $TOL\to 0$, so this alternative does not yield a consistent approximation.
\end{proof}
The constants $C_{\rm{Q},1}$, $C_{\rm{Q},2}$, and $C_{\rm{Q},3}$ can be estimated using $R$ random shifts. Rather than using the approximate solution \eqref{eq:app.sol.N}, we can also solve the equation \eqref{eq:N.num} numerically.
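As an illustration of this numerical solution, the following Python sketch (a simplified version, assuming that the constants and rates are available from a pilot run and that the splitting parameter $\kappa$ is given, e.g., from the cubic equation above or simply fixed) solves \eqref{eq:N.num} for $N^{\ast}$ using the approximate solution \eqref{eq:app.sol.N} as the initial guess, evaluates \eqref{eq:M.opt} and \eqref{eq:h.opt}, and applies the rounding recommended in the remark below.
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

def optimal_parameters(TOL, kappa, C_alpha, C_Q1, C_Q2, C_Q3, C_disc,
                       beta, delta, gamma, eta):
    # Left-hand side of the equation for N (eq:N.num)
    def F(N):
        return (C_Q3 / C_Q2 * (kappa * TOL / C_alpha)**2 * N**(2 + beta)
                - TOL * (1 - kappa * (1 + gamma / (2 * eta))) * N**(1 + beta)
                - C_Q3 / C_Q2 * C_Q1 * N
                + gamma * C_alpha**2 * C_Q1 * beta / (2 * eta * kappa * TOL))
    # Approximate solution (eq:app.sol.N) as initial guess
    N0 = (C_alpha**2 * C_Q1 / kappa**2)**(1 / (1 + beta)) * TOL**(-2 / (1 + beta))
    N = float(fsolve(F, N0)[0])
    # Optimal inner samples (eq:M.opt) and discretization (eq:h.opt)
    slack = N * (kappa * TOL / C_alpha)**2 - C_Q1 / N**beta
    M = (C_Q2 / slack)**(1 / (1 + delta))
    h = (((1 - kappa) * TOL - C_Q3 / C_Q2 * slack) / C_disc)**(1 / eta)
    # Round as recommended: powers of two for sample sizes, integer mesh resolution
    N_star = 2**int(np.ceil(np.log2(max(N, 1.0))))
    M_star = 2**int(np.ceil(np.log2(max(M, 1.0))))
    return N_star, M_star, int(np.ceil(1.0 / h))
\end{verbatim}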
\rmk[Borderline settings]{In the worst case, $\beta=\delta=0$, we obtain the optimal MC work $W_{\mathrm{DLMC}}^{\ast}\propto TOL^{-(3+\frac{\gamma}{\eta})}$, as presented in \eqref{eq:Opt.Work}. In the best case, $\beta=\delta=1$, we obtain the optimal DLQMC work $W_{\mathrm{DLQ}}^{\ast}\propto TOL^{-(\frac{3}{2}+\frac{\gamma}{\eta})}$, i.e., the exponent of the work is reduced by $3/2$ compared with the DLMC method.}\label{rmk:bc}
\rmk[Rounding optimal parameters for practical purposes]{Because of the properties of quasi-random sequences, it is beneficial to round the optimal number of samples $N^{\ast}$ and $M^{\ast}$ up to the nearest power of 2. The number of mesh elements is typically obtained by rounding $1/h^{\ast}$ up to the nearest integer.}
The optimal $N^{\ast}$ provided by the numerical solution of \eqref{eq:N.num} displays the predicted behavior for the fixed rates $\beta$, $\delta$, $\gamma$, and $\eta$ and constants $C_{\rm{Q},1}$, $C_{\rm{Q},2}$, $C_{\rm{Q},3}$, and $C_{\rm{disc}}$, even for large tolerances $TOL$ (Figure~\ref{fig:rates}). Besides the DLMC and DLQMC methods (for the best case discussed in Remark~\ref{rmk:bc}), we also consider the case of using the RQMC method only on the inner loop ($\beta=0, \delta=1$). This scenario is based on the setting used in \cite{God18} and \cite{Xu20}, except that these authors also used multilevel techniques and did not consider finite element discretization. Finally, we consider the case of using the RQMC method only on the outer loop ($\beta=1, \delta=0$). This setting was used in \cite{Dro18} and \cite{Fan22}.
\begin{figure}[ht]
\centering
\includegraphics[width=.6\textwidth]{dlqmc_work_tol_05_10.pdf}
\caption{Optimal work vs. tolerance $TOL$ for DLMC ($\beta=\delta=0$), RQMC only on the inner loop ($\beta=0$ and $\delta=1$, based on the setting used in \cite{God18} and \cite{Xu20}), RQMC only on the outer loop ($\beta=1$ and $\delta=0$, this setting is used in \cite{Dro18} and \cite{Fan22}), and full DLQMC ($\beta=\delta=1$), where $\gamma=\eta=3$.}
\label{fig:rates}
\end{figure}
\section{Numerical results}\label{sec:numerical.results}
We demonstrate the effectiveness and applicability of the derived double-loop methods for two OED problems. In particular, in Bayesian OED, the goal is to maximize the EIG of an experiment, a quantity given by the expected Kullback--Leibler divergence of the posterior distribution of the parameters of interest $\bs{\theta}$ with respect to their prior distribution. We assume the data model
\begin{equation}\label{eq:data.model}
\bs{y}_i(\bs{\xi}) = G(\bs{\theta}_t,\bs{\xi}) + \bs{\epsilon}_i, \quad 1\leq i\leq N_e,
\end{equation}
where $\bs{Y}=(\bs{y}_1,\ldots,\bs{y}_{N_e})\in\bb{R}^{d_y\times N_e}$ is noisy data generated from the deterministic model $G:\bb{R}^{d_{\theta}}\times\bb{R}^{d_{\xi}}\to\bb{R}^{d_y}$ (evaluated at the true parameter vector $\bs{\theta}_t\in\bb{R}^{d_{\theta}}$) whose optimal design $\bs{\xi}\in\bb{R}^{d_{\xi}}$ we intend to find. Random observation noise is denoted $\bs{\epsilon}_i\in\bb{R}^{d_y}$ for $N_e$ available independent observations under a consistent experimental setup, where $d_y, d_{\theta}, d_{\xi}$ are positive integers. The noise $\bs{\epsilon}_i$ is assumed to come from a centered normal distribution with a known covariance matrix $\bs{\Sigma}_{\bs{\epsilon}}\in\bb{R}^{d_y\times d_y}$ independent of $\bs{\theta}$ and $\bs{\xi}$. Our knowledge of the parameters of interest $\bs{\theta}$ before conducting the experiment is encompassed in the prior probability density function (pdf) $\pi(\bs{\theta})$. After the experiment, our knowledge is described by the posterior pdf of $\bs{\theta}$, given by Bayes' theorem as follows:
\begin{equation}
\pi(\bs{\theta}|\bs{Y},\bs{\xi})=\frac{p(\bs{Y}|\bs{\theta},\bs{\xi})\pi(\bs{\theta})}{p(\bs{Y}|\bs{\xi})},
\end{equation}
where
\begin{equation}\label{eq:likelihood}
p(\bs{Y}|\bs{\theta},\bs{\xi})\coloneqq \det(2\pi\bs{\Sigma_{\bs{\epsilon}}})^{-\frac{N_e}{2}}\exp\left(-\frac{1}{2}\sum_{i=1}^{N_e}\bs{r}(\bs{y}_i,\bs{\theta},\bs{\xi})\cdot\bs{\Sigma}_{\bs{\epsilon}}^{-1}\bs{r}(\bs{y}_i,\bs{\theta},\bs{\xi})\right)
\end{equation}
is the likelihood function. We omit the design parameter $\bs{\xi}$ for concision and distinguish between $\pi$ for the pdfs of the parameters of interest and $p$ for the pdfs of the data. The data residual is given by \eqref{eq:data.model} as follows:
\begin{equation}\label{eq:residual}
\bs{r}(\bs{y}_i, \bs{\theta},\bs{\xi})\coloneqq \bs{y}_i-G(\bs{\theta},\bs{\xi}), \quad 1\leq i\leq N_e.
\end{equation}
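For concreteness, a minimal Python sketch of the log of the likelihood \eqref{eq:likelihood} based on the residuals \eqref{eq:residual} could read as follows; here \texttt{model} stands for the forward map $G$ and is a placeholder.
\begin{verbatim}
import numpy as np

def log_likelihood(Y, theta, xi, model, Sigma_eps):
    # log p(Y | theta, xi) for Y of shape (N_e, d_y) and noise covariance Sigma_eps
    N_e, d_y = Y.shape
    r = Y - model(theta, xi)                         # residuals r(y_i, theta, xi)
    Sigma_inv = np.linalg.inv(Sigma_eps)
    quad = np.einsum('ij,jk,ik->', r, Sigma_inv, r)  # sum_i r_i . Sigma^-1 r_i
    _, logdet = np.linalg.slogdet(2.0 * np.pi * Sigma_eps)
    return -0.5 * N_e * logdet - 0.5 * quad
\end{verbatim}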
The amount of information regarding $\bs{\theta}$ gained from the experiment is given by
\begin{equation}\label{eq:EIG}
EIG\coloneqq \int_{\bs{\Theta}}\int_{\cl{Y}}\log\left(\frac{p(\bs{Y}|\bs{\theta})}{\int_{\bs{\Theta}}p(\bs{Y}|\bs{\vartheta})\pi(\bs{\vartheta})\di{}\bs{\vartheta}}\right)p(\bs{Y}|\bs{\theta})\di{}\bs{Y}\pi(\bs{\theta})\di{}\bs{\theta},
\end{equation}
where $\bs{\vartheta}$ is a dummy variable used for integration. Specifically, in this setting, the nonlinear outer function $f$ in \eqref{eq:double.loop} is the logarithm, and the inner function $g$ is the likelihood function.
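The following Python sketch (an illustration only, not the implementation used for the results below) shows how the nested structure of \eqref{eq:EIG} maps onto a double-loop RQMC estimator. The callables \texttt{prior\_ppf} and \texttt{noise\_ppf} are placeholders that transform uniform variables into prior and noise samples, and \texttt{log\_like} evaluates the log-likelihood (e.g., the sketch above). For brevity, the same scrambled inner point set is reused for every outer sample.
\begin{verbatim}
import numpy as np
from scipy.stats import qmc

def eig_dlqmc(model, xi, prior_ppf, noise_ppf, log_like,
              N, M, d_theta, d_noise, seed=0):
    outer = qmc.Sobol(d=d_theta + d_noise, scramble=True, seed=seed).random(N)
    inner = qmc.Sobol(d=d_theta, scramble=True, seed=seed + 1).random(M)
    theta_inner = prior_ppf(inner)                   # inner prior samples
    eig = 0.0
    for n in range(N):
        theta = prior_ppf(outer[n, :d_theta][None, :])[0]
        eps = noise_ppf(outer[n, d_theta:])          # shape (N_e, d_y)
        Y = model(theta, xi) + eps                   # synthetic data
        log_num = log_like(Y, theta, xi)
        log_inner = np.array([log_like(Y, t, xi) for t in theta_inner])
        log_den = np.logaddexp.reduce(log_inner) - np.log(M)  # log of inner average
        eig += (log_num - log_den) / N
    return eig
\end{verbatim}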
Alternatively, we can construct an importance sampling distribution for $\bs{\vartheta}$ to reduce the variance of (quasi) MC methods \cite{Lon13, Lon21, Sch20, Bec18, Bec20, Wac17, Sti86, Tie86, Tie89, Kas90, Lon15}. For a thorough discussion on the Bayesian OED formulation, we refer the reader to \cite{Lon13, Bec18, Car20}.
\begin{rmk}[Closed-form expression for the numerator in the EIG]\label{rmk:num}
From \eqref{eq:residual}, we observe that the numerator in the logarithm in \eqref{eq:EIG} only depends on $\bs{\epsilon}$ and can be computed in closed form for $\bs{\epsilon}_i\sim\cl{N}\left(\bs{0},\bs{\Sigma}_{\bs{\epsilon}}\right)$, $1\leq i\leq N_e$, where $\bs{\Sigma}_{\bs{\epsilon}}$ is a diagonal matrix in $\bb{R}^{q\times q}$ with entries $\sigma_{\epsilon\{j,j\}}^2$, $1\leq j\leq q$. It is given by
\begin{equation}
-\frac{N_e}{2}\sum_{j=1}^q \left(\log\left(2\pi\sigma_{\epsilon\{j,j\}}^2\right)+1\right).
\end{equation}
A similar result is presented in \cite{Tsi17}, but we provide a derivation in Appendix~\ref{ap:Rmk} for completeness.
\end{rmk}
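As a quick numerical illustration of this remark (the noise variances below are example values, not those used in Section~\ref{sec:numerical.results}), the closed-form expression can be checked against a Monte Carlo average of the log-numerator:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N_e, q = 10, 3
s2 = np.array([1e-3, 2e-3, 5e-4])            # diagonal of Sigma_eps (example values)
closed_form = -0.5 * N_e * np.sum(np.log(2 * np.pi * s2) + 1.0)

eps = rng.normal(0.0, np.sqrt(s2), size=(200_000, N_e, q))
log_num = (-0.5 * N_e * np.sum(np.log(2 * np.pi * s2))
           - 0.5 * np.sum(eps**2 / s2, axis=(1, 2)))
print(closed_form, log_num.mean())           # the two values agree closely
\end{verbatim}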
The denominator in the logarithm in \eqref{eq:EIG} constitutes the inner integral of a nested integration problem to which we apply the DLQMC estimator. Alternatively, we can estimate the EIG using a single-loop MC method. The inner integral in this setting is estimated instead by a Laplace approximation. In this case, the EIG is approximated as
\begin{equation}\label{eq:LA}
EIG \approx \int_{\bs{\Theta}}\int_{\bs{Y}}\left[\frac{1}{2}\log((2\pi)^{d_{\theta}}|\bs{\Sigma}|)-\frac{d_{\theta}}{2}-\log(\pi(\bs{\hat{\theta}}))-\frac{\mathrm{tr}\mskip2mu(\bs{\Sigma}\nabla_{\bs{\theta}}\nabla_{\bs{\theta}}\log(\pi(\hat{\bs{\theta}})))}{2}\right]p(\bs{Y}|\bs{\theta})\di{}\bs{Y}\pi(\bs{\theta})\di{}\bs{\theta},
\end{equation}
where
\begin{equation}
\hat{\bs{\theta}}\coloneqq\argmin_{\bs{\theta}}\left[\frac{1}{2}\sum_{i=1}^{N_e}\bs{r}(\bs{y}_i,\bs{\theta},\bs{\xi})\cdot\bs{\Sigma}_{\bs{\epsilon}}^{-1}\bs{r}(\bs{y}_i,\bs{\theta},\bs{\xi})-\log(\pi(\bs{\theta}))\right]
\end{equation}
is the maximum a posteriori (MAP) estimate and
\begin{equation}\label{eq:Sigma.mat}
\bs{\Sigma}^{-1} \coloneqq -\nabla_{\bs{\theta}}\nabla_{\bs{\theta}}G(\hat{\bs{\theta}},\bs{\xi})\cdot\bs{\Sigma}_{\bs{\epsilon}}^{-1}\sum_{i=1}^{N_e}\bs{r}(\bs{y}_i,\hat{\bs{\theta}},\bs{\xi}) +N_e\nabla_{\bs{\theta}}G(\hat{\bs{\theta}},\bs{\xi})\cdot\bs{\Sigma}_{\bs{\epsilon}}^{-1}\nabla_{\bs{\theta}}G(\hat{\bs{\theta}},\bs{\xi})-\nabla_{\bs{\theta}}\nabla_{\bs{\theta}}\log(\pi(\hat{\bs{\theta}}))
\end{equation}
is the approximate negative inverse Hessian of the log-likelihood. The MC Laplace (MCLA) and QMC Laplace (QMCLA) estimators based on \eqref{eq:LA} have a bias of $\cl{O}_{\bb{P}}\left(\frac{1}{N_e}\right)$.\footnote{We specify the notation $X_M=\cl{O}_{\bb{P}}(C_M)$ for a sequence of random variables $X_M$ and constants $C_M$ as follows. For any $\epsilon>0$, there exists finite $K(\epsilon),M_0>0$ such that $\bb{P}(|X_M|>K(\epsilon)|C_M|)<\epsilon$ holds for all $M\geq M_0$.} For details, see \cite{Lon13, Bec18}. The pdf
\begin{equation}\label{eq:importance.sampling}
\tilde{\pi}(\bs{\theta}|\bs{Y})=\frac{1}{(2\pi)^{\frac{d_{\bs{\theta}}}{2}}|\bs{\Sigma}|^{\frac{1}{2}}}\exp\left(-\frac{(\bs{\theta}-\hat{\bs{\theta}})\cdot\bs{\Sigma}^{-1}(\bs{\theta}-\hat{\bs{\theta}})}{2}\right)
\end{equation}
can also be used as an importance sampling distribution for the inner integral in \eqref{eq:EIG} (see \cite{Bec18}), which provides two new estimators, the DLMC estimator with importance sampling (DLMCIS) and DLQMC with importance sampling (DLQMCIS). Laplace-based importance sampling can significantly reduce the number of required inner samples for both estimators. For the construction of the QMCLA and the DLQMCIS estimator, we follow the steps in \cite{Lon13} and \cite{Bec18}, respectively, replacing MC samples with RQMC samples.
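A minimal Python sketch of this Laplace-based proposal construction could look as follows. It uses only the Gauss--Newton part of \eqref{eq:Sigma.mat}; that is, the term involving second derivatives of $G$ and the prior Hessian are dropped, which is a simplifying assumption. The callables \texttt{model\_jac} (the Jacobian of $G$ with respect to $\bs{\theta}$) and \texttt{log\_prior} are placeholders.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def laplace_proposal(Y, xi, model, model_jac, log_prior, Sigma_eps, theta0):
    N_e = Y.shape[0]
    Sigma_eps_inv = np.linalg.inv(Sigma_eps)

    def neg_log_post(theta):
        r = Y - model(theta, xi)
        return 0.5 * np.einsum('ij,jk,ik->', r, Sigma_eps_inv, r) - log_prior(theta)

    theta_map = minimize(neg_log_post, theta0, method='Nelder-Mead').x  # MAP estimate
    J = model_jac(theta_map, xi)                          # shape (d_y, d_theta)
    Sigma = np.linalg.inv(N_e * J.T @ Sigma_eps_inv @ J)  # Gauss-Newton covariance
    return theta_map, Sigma
\end{verbatim}
Samples $\bs{\vartheta}\sim\cl{N}(\hat{\bs{\theta}},\bs{\Sigma})$ drawn from this proposal are then reweighted by $\pi(\bs{\vartheta})/\tilde{\pi}(\bs{\vartheta}|\bs{Y})$ when estimating the inner integral.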
\begin{rmk}[Number of randomizations]
To maximize the advantage of RQMC over MC, we only use one randomization in the estimator \eqref{eq:quasi.monte.carlo}. Multiple randomizations $R>1$ are only used to estimate the error \eqref{eq:QMC.error} in a pilot run.
\end{rmk}
\subsection{Example 1: Linear example with exact sampling}
For this example \cite{Hua13}, we assume the following data model, as presented in \eqref{eq:data.model}:
\begin{equation}\label{eq:ex.1}
y_i(\xi)=G(\theta_t,\xi)+\epsilon_i, \quad 1\leq i\leq N_e,
\end{equation}
where $y_i\in\bb{R}$ are noisy observations of the model,
\begin{equation}
G(\theta, \xi)=\theta^3\xi^2+\theta\exp(-|0.2-\xi|),
\end{equation}
$\xi\in[0,1]$ is the design variable, and $\theta\sim\cl{U}([0,1])$ is the parameter of interest. For the observation noise, we assume $\epsilon\sim\cl{N}(0,10^{-3})$. We consider 10 experiments under the same setup; therefore, $N_e=10$. Thus, the outer integration in \eqref{eq:EIG} is $N_e\times d_y+d_{\theta}=11$ dimensional, and the inner integration is $d_\theta=1$ dimensional. The function $G$ can be evaluated exactly; therefore, no discretization is needed.
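For reference, the model of this example can be set up in a few lines. The snippet below is meant purely as a usage illustration: it reuses the hypothetical helpers sketched above and evaluates an arbitrary design point $\xi=0.5$.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def G(theta, xi):
    return np.atleast_1d(theta**3 * xi**2 + theta * np.exp(-np.abs(0.2 - xi)))

sigma_eps2, N_e, xi = 1e-3, 10, 0.5                 # xi = 0.5 is an arbitrary design
Sigma_eps = np.array([[sigma_eps2]])
prior_ppf = lambda u: u                             # theta ~ U([0, 1])
noise_ppf = lambda u: norm.ppf(u).reshape(N_e, 1) * np.sqrt(sigma_eps2)
log_like = lambda Y, th, x: log_likelihood(Y, th, x, G, Sigma_eps)

eig = eig_dlqmc(G, xi, prior_ppf, noise_ppf, log_like,
                N=2**10, M=2**6, d_theta=1, d_noise=N_e, seed=1)
\end{verbatim}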
The EIG for model \eqref{eq:ex.1} is displayed in Figure~\ref{fig:ex1.xi} using the DLMCIS and DLQMCIS estimators with $N=2^{15}$ outer samples and $M=1$ inner sample.
\begin{figure}[ht]
\subfloat[DLMCIS\label{subfig-1:xi.MC}]{%
\includegraphics[width=0.45\textwidth]{ex_1_xi_MC.pdf}
}
\subfloat[DLQMCIS\label{subfig-1:xi,QMC}]{%
\includegraphics[width=0.45\textwidth]{ex_1_xi.pdf}
}
\caption{Example 1: Expected information gain (EIG) as a function of the design $\xi$ estimated using the DLMCIS and DLQMCIS estimators with $N=2^{15}$ outer samples and $M=1$ inner sample.}
\label{fig:ex1.xi}
\end{figure}
We estimate the constants $C_{\rm{Q,1}}, C_{\rm{Q,2}}, C_{\rm{Q,3}}, \beta$, and $\delta$ for DLQMCIS using a pilot run with $N=M=2048$ samples and $R=32$ randomizations. The constants $C_{\rm{QLA},1}$ and $C_{\rm{QLA},2}$, relating to the variance and bias of the QMCLA estimator, respectively, along with $\delta$, are estimated using $N=256$ outer samples and $R=32$ randomizations. Figure~\ref{fig:rates.ex1} presents the rates at which the outer ($N^{\ast}$) and inner ($M^{\ast}$) numbers of samples increase as the allowed tolerance decreases. The estimation reveals that, for this example, $\beta\approx0.84$ and $\delta\approx0.77$. Thus, $N^{\ast}$ increases at a rate of $2/(1+\beta)\approx1.09$ rather than rate 2, as is the case for DLMCIS, and $M^{\ast}$ increases at a rate of $1/(1+\delta)\approx0.57$ rather than a rate of 1. Importance sampling reduces the variance of the inner estimation so that only one sample is needed for large and intermediate tolerances.
We observe that although the constants $C_{\mathrm{Q},2}$ and $C_{\mathrm{Q},3}$ are reduced significantly when using importance sampling, the rate $\delta$ deteriorates. If the importance sampling distribution \eqref{eq:importance.sampling} is optimal, the new integrand is constant. For an increasingly nonlinear $G$, the Laplace approximation error increases \cite{Hel22}; hence, the new integrand becomes less regular. This effect is more strongly observed away from the MAP point. Furthermore, the updated integrand for this example is no longer periodic, rendering RQMC less effective. For the QMCLA estimator, $\delta$ was estimated to be 1, meaning that the number of samples $N$ increases at a rate of 1 rather than a rate of 2 as in the MCLA estimator. The bias of the QMCLA and MCLA estimators can only be reduced by increasing the number of experiments $N_e$; thus, we consider it fixed. As the allowed tolerance approaches this bias, the number of required samples approaches infinity. For the pilot runs of DLMCIS and MCLA, $N=M=2048$ samples were used. The bias from the Laplace approximation is estimated using DLQMCIS as a reference.
\begin{figure}[ht]
\centering
\includegraphics[width=.9\textwidth]{ex_1_opt_sobol_IS_an_15_02.pdf}
\caption{Example 1: Optimal number of the outer ($N^{\ast})$ and inner ($M^{\ast}$) samples vs. tolerance $TOL$ for the DLQMCIS, DLMCIS, QMCLA, and MCLA estimators. Only one inner sample is needed, except for minimal tolerances due to the importance sampling. The number of required samples approaches infinity for MCLA and QMCLA as the tolerance approaches the inherent bias from the Laplace approximation.}
\label{fig:rates.ex1}
\end{figure}
The optimal work given by $N^{\ast}\times M^{\ast}$ is presented in Figure~\ref{fig:work.ex1}. The optimal number of inner samples $M^{\ast}$ is rounded up to 1. Hence, the projected rate is only attained for small tolerances.
\begin{figure}[ht]
\centering
\includegraphics[width=.9\textwidth]{ex_1_work_15_02.pdf}
\caption{Example 1: Optimal work ($N^{\ast}\times M^{\ast}$) vs. tolerance $TOL$ for the DLQMCIS and DLMCIS estimators. Only one inner sample is needed, except for minimal tolerances due to the importance sampling. Hence, the projected rate is only attained for small tolerances.}
\label{fig:work.ex1}
\end{figure}
To demonstrate that the number of required samples indicated by the pilot run for the DLQMCIS estimator is sufficient for the error to be within the allowed tolerance with the specified confidence $\alpha=0.05$, we ran the estimation 100 times for various tolerances $TOL=\{0.01, 0.1, 1\}$ with only one randomization $R=1$. The results are presented in Figure~\ref{fig:evt.ex1}. The error was outside the allowed tolerance only once for $TOL=0.01$ and was always within the allowed tolerance for $TOL=0.1$ and $TOL=1$. The likely reason for this overconfidence is the rounding up to a power of 2 of the outer and inner samples. The mean for all runs for the least tolerance was used as a reference solution.
\begin{figure}[ht]
\centering
\includegraphics[width=.9\textwidth]{ex_1_evt_05_10.pdf}
\caption{Example 1: Error vs. tolerance consistency plot. The DLQMCIS estimator stays within the allowed tolerance $TOL$ with a predefined confidence parameter $C_{\alpha}=1.96$ $(\alpha=0.05$).}
\label{fig:evt.ex1}
\end{figure}
\subsection{Example 2: Thermoelasticity example with inexact sampling}
In this example, we assume that $\bs{G}$ is the solution operator of a PDE; therefore, we work with an appropriate finite element approximation $\bs{G}_h$ with mesh size parameter $h$. The domain is given by $\cl{D}=[0,1]^2\setminus\cl{B}(r)$, where $\cl{B}(r)$ is a ball centered at $\bs{0}$ with radius $r=0.1$. The problem is time-dependent, with observations occurring at times $t\in[0,T]$. We solve the following problem for the unknown absolute temperature $\vartheta$ (not to be confused with the dummy integration variable in \eqref{eq:EIG}) and the unknown displacement $\bs{u}$ for fully coupled thermomechanical fields. For more information on this problem and a staggered solution algorithm, see \cite{Far91}.
A weak form of the heat equation is given by
\begin{equation}\label{eq:weak.temperature}
\int_{\cl{D}}\rho \vartheta_0\dot{s}\hat{\vartheta}\di{}\cl{D}-\int_{\cl{D}}\bs{q}\cdot\nabla\hat{\vartheta}\di{}\cl{D}=-\int_{\partial\cl{D}}\bs{q}\cdot \bs{n} \hat{\vartheta}\di{}S \quad \forall \hat{\vartheta}\in V_{\vartheta},
\end{equation}
where
\begin{equation}
\rho \vartheta_0\dot{s}=\rho \theta_2\dot{\vartheta} + \theta_1(3\lambda + 2\mu) \vartheta_0\mathrm{tr}\mskip2mu(\dot{\bs{\varepsilon}}),
\end{equation}
and
\begin{equation}
\bs{q}=-\theta_3\nabla{\vartheta}.
\end{equation}
We consider $\bs{\theta}=(\theta_1,\theta_2,\theta_3) \in\bb{R}^3$; $\theta_1$, the thermal expansion coefficient; $\theta_2$, the specific heat per unit volume at constant strain; and $\theta_3$, the thermal conductivity, to be the parameters of interest. Furthermore, we consider the material density $\rho=2700\si{\kg}/\si{\m}^3$, the entropy per unit mass in the current configuration $s$, Lam\'{e} constants $\lambda=7000/1.56$ and $\mu=35000/1.3$, and initial temperature $\vartheta_0=293\si{\kelvin}$ to be fixed parameters. Finally, we consider the strain tensor $\bs{\varepsilon}=\nabla^{\rm{s}}\bs{u}$, the symmetric part of the gradient of $\bs{u}$,
\begin{equation}
\nabla^{\rm{s}}\bs{u}=\frac{1}{2}(\nabla\bs{u}+(\nabla\bs{u})^{\scriptscriptstyle\mskip-1mu\top\mskip-2mu}),
\end{equation}
the unit outward normal $\bs{n}$, and the function space for the temperature field $V_{\vartheta}$. For the time derivatives, we use the implicit Euler scheme. A weak form of the momentum equation is given by
\begin{equation}\label{eq:weak.mechanical}
\int_{\cl{D}}(\lambda \mathrm{tr}\mskip2mu(\bs{\varepsilon})\bs{1} + 2\mu\bs{\varepsilon} - \theta_1(3\lambda + 2\mu)(\vartheta-\vartheta_0)\bs{1})\colon\nabla^{\rm{s}} \bs{\hat{u}}\di{}\cl{D}=W_{\rm{ext}}(\bs{\hat{u}}) \quad \forall \bs{\hat{u}}\in V_U,
\end{equation}
where $V_U$ is the function space for the displacement, $\bs{1}$ is the identity tensor, and $W_{\rm{ext}}$ is the linear functional corresponding to the work of external body forces and of tractions on the boundary. We also replace the unknown absolute temperature $\vartheta$ with the temperature variation $\Delta=\vartheta-\vartheta_0$. For this experiment, we consider a temperature increase of $\Delta=10\si{\degreeCelsius}$, applied at the circular exclusion. Stress-free and flux-free conditions are applied at the remaining boundaries, and symmetry conditions are applied on the corresponding symmetry planes. For details on the FEM, see Appendix~\ref{ap:FEM}.
For the discretization parameter $h$, we chose $\lceil 1/ h\rceil$, where $\lceil\cdot\rceil$ is the ceiling operator, as the mesh-resolution parameter, and the number of time steps is assumed to depend quadratically on $1/h$ (i.e., $\lceil 1/ h^2\rceil$). This is because the solution to the heat equation \eqref{eq:weak.temperature} depends linearly on time and quadratically on space. Choosing different discretization parameters $h_{\rm{t}}$ and $h_{\rm{s}}$ for time and space would enable the use of multi-index MC techniques \cite{haji2016multi}. The actual observation times are chosen on a log scale, as the problem can be stiff initially. In addition, as the problem approaches a steady state, the observations become increasingly similar, leading to numerical issues unless the observation times are sufficiently spread apart.
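The following snippet illustrates these discretization choices (the value of $h$, the time horizon and the number of observation times are placeholders):
\begin{verbatim}
import numpy as np

h = 0.13                                    # discretization parameter (example value)
mesh_resolution = int(np.ceil(1.0 / h))     # spatial mesh-resolution parameter
n_time_steps = int(np.ceil(1.0 / h**2))     # time steps scale quadratically in 1/h
t_max, n_obs = 1.0e4, 3                     # maximum observation time (placeholder)
obs_times = np.logspace(1.0, np.log10(t_max), n_obs)   # log-spaced observation times
\end{verbatim}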
For the parameters of interest $\bs{\theta}$, we assume the following prior distributions:
\begin{equation}
\theta_1\sim\cl{U}\left(1.81, 2.81\right)\times 10^{-5}, \quad \theta_2\sim\cl{U}\left(8.6, 9.6\right)\times 10^{-4}, \quad \theta_3\sim\cl{U}\left(1.87, 2.87\right)\times 10^{-4}.
\end{equation}
For the observation noise, we assume $\bs{\epsilon}_i\sim\cl{N}(\bs{0},\bs{\Sigma}_{\bs{\epsilon}})$, $1\leq i\leq N_e$. Moreover, $\bs{\Sigma}_{\bs{\epsilon}}$ is a diagonal matrix in $\bb{R}^{d_y\times d_y}$, where the first $d_y/2$ diagonal entries corresponding to the displacement measurements are $\sigma_{\bs{\epsilon},1}^2=10^{-12}$, and the second $d_y/2$ diagonal entries corresponding to the temperature measurements are $\sigma_{\bs{\epsilon},2}^2=10^{-8}$. The data model, as presented in \eqref{eq:data.model}, is given by
\begin{equation}\label{eq:ex.2}
\bs{y}_i(\bs{\xi})=\bs{G}_h(\bs{\theta}_t,\bs{\xi})+\bs{\epsilon}_i, \quad 1\leq i\leq N_e,
\end{equation}
where $\bs{y}_i\in\bb{R}^{d_y}$ are the noisy measurements of the displacement and temperature, and $\bs{G}_h(\bs{\theta},\bs{\xi})$ is the solution operator of the discretized weak forms \eqref{eq:weak.temperature} and \eqref{eq:weak.mechanical}.
We consider two measurements of the displacement and two measurements of the temperature at three observation times each, yielding $d_y=12$ total observations. For $N_e=1$ experiment, the outer integration in \eqref{eq:EIG} is $N_e\times d_y+d_{\theta}=15$ dimensional, and the inner integration is $d_{\theta}=3$ dimensional. The domain $\cl{D}$ is presented in Figure~\ref{fig:ex2.domain}. There is a circular exclusion in the lower left corner. For the design parameter $\bs{\xi}=(\xi_1, \xi_2)$, we consider $\xi_1$ to be the position of the sensors and $\xi_2$ to be the maximum observation time. The sensors for the displacement are at $(0.0+\xi_1,1)$ and $(0.6+\xi_1,1)$. The sensors for the temperature are at $(0.2+\xi_1,1)$ and $(0.4+\xi_1,1)$, where $\xi_1\in\{0.0, 0.2, 0.4\}$. Figure~\ref{fig:thermoelast} depicts the temperature increase at observation times $t=\{10^3, 10^4\}$ for $\xi_1=0.2$.
\begin{figure}[ht]
\subfloat[$\xi_1=0.0$\label{subfig-1:xi.0}]{%
\includegraphics[width=0.45\textwidth,trim={17cm 5cm 17cm 5cm},clip]{thermo_domain_xi0.png}
}
\subfloat[$\xi_1=0.2$\label{subfig-1:xi.2}]{%
\includegraphics[width=0.45\textwidth,trim={17cm 5cm 17cm 5cm},clip]{thermo_domain_xi02.png}
}\\
\subfloat[$\xi_1=0.4$\label{subfig-2:xi.4}]{%
\includegraphics[width=0.45\textwidth,trim={17cm 5cm 17cm 5cm},clip]{thermo_domain_xi04.png}
}
\caption{Example 2: Rectangular domain with circular exclusion and sensor locations (green marks displacement and red marks temperature).}
\label{fig:ex2.domain}
\end{figure}
\begin{figure}[ht]
\subfloat[$t=1000$\label{subfig-1:comp.small}]{%
\includegraphics[width=0.45\textwidth,trim={17cm 5cm 17cm 5cm},clip]{thermo_t1000.png}
}
\subfloat[$t=10000$\label{subfig-1:comp.int}]{%
\includegraphics[width=0.45\textwidth,trim={17cm 5cm 17cm 5cm},clip]{thermo_t10000.png}
}
\caption{Example 2: Temperature increase at observation times of the experiment for the rectangular domain with circular exclusion. Point-sensor locations are displayed as circles at the top of the domain.}
\label{fig:thermoelast}
\end{figure}
Figure~\ref{fig:rates.ex2} presents the number of required optimal samples for the DLQMCIS, DLMCIS, QMCLA, and MCLA estimators obtained through pilot runs using $N=M=64$ samples and $R=32$ randomizations for the DLQMCIS estimator. To estimate the constants for the QMCLA estimator, $N=256$ samples and $R=32$ randomizations were used. For the pilot run of DLMCIS, $N=M=128$ samples were used. Finally, the pilot for MCLA was run with $N=1024$ samples. The resulting rates for DLQMCIS were $\beta=0.71$ and $\delta=0.44$. Thus, $N^{\ast}$ increases at a rate of $2/(1+\beta)\approx1.17$ rather than rate 2, and $M^{\ast}$ increases at a rate of $1/(1+\delta)\approx0.7$ rather than a rate of 1. Although these rates are worse than those for the previous example, they still reveal that we can exploit the regularity in the physical model and benefit from using RQMC instead of MC. For QMCLA, the estimated rate was $\delta=1$. Thus, the number of samples $N$ increases at a rate of 1. The bias from the Laplace approximation is again estimated using DLQMCIS as a reference.
\begin{figure}[ht]
\centering
\includegraphics[width=.7\textwidth]{ex_2_opt_09_09.pdf}
\caption{Example 2: Optimal number of the outer ($N^{\ast})$ and inner ($M^{\ast}$) samples vs. tolerance $TOL$ for the DLQMCIS, DLMCIS, QMCLA, and MCLA estimators. The number of samples needed to reduce the variance approaches infinity for MCLA and QMCLA as the tolerance approaches the inherent bias from the Laplace approximation.}
\label{fig:rates.ex2}
\end{figure}
As the allowed tolerance $TOL$ approaches the bias from the Laplace approximation, the optimal splitting parameter $\kappa^{\ast}$ approaches 0 for the QMCLA and MCLA estimators. It is almost constant for the DLQMCIS and DLMCIS estimators, where the bias from the inner integration can be reduced by increasing the number of inner samples (Figure~\ref{fig:kappa.ex2}).
\begin{figure}[ht]
\centering
\includegraphics[width=.7\textwidth]{ex_2_kappa_13_09.pdf}
\caption{Example 2: Optimal splitting parameter ($\kappa^{\ast})$ vs. tolerance $TOL$ for the DLQMCIS, DLMCIS, QMCLA, and MCLA estimators. The optimal $\kappa^*$ approaches zero for the MCLA and QMCLA estimators as the tolerance approaches the inherent bias from the Laplace approximation.}
\label{fig:kappa.ex2}
\end{figure}
The optimal work given by $N^{\ast}\times M^{\ast}\times h^{-\gamma}$ is depicted in Figure~\ref{fig:work.ex2}. The optimal number of inner samples $M^{\ast}$ is rounded up to 1. Hence, the projected rate is only attained for small tolerances.
\begin{figure}[ht]
\centering
\includegraphics[width=.7\textwidth]{ex_2_work_13_09.pdf}
\caption{Example 2: Optimal work ($N^{\ast}\times M^{\ast}\times h^{-\gamma}$) vs. tolerance $TOL$ for the DLQMCIS and DLMCIS estimators. Only one inner sample is needed, except for minimal tolerances due to the importance sampling. Hence, the projected rate is only attained for small tolerances.}
\label{fig:work.ex2}
\end{figure}
The EIG for model \eqref{eq:ex.2} is illustrated in Figure~\ref{fig:xi.ex2}, where we evaluate the experiment for various sensor placements $\xi_1$ at observation times $t=\{10\times\xi_2, 100\times\xi_2, 1000\times\xi_2\}$, $\xi_1\in\{0.0, 0.2, 0.4\}$, and $0.5\leq\xi_2\leq 80$. It takes some time until the sensors start gaining sufficient information. Eventually, the process converges to a steady state, at which point the EIG decreases again. However, even for fairly large $\xi_2$, the EIG is still high, which could indicate that various steady states are attained for different input parameters $\bs{\theta}$ and that the decrease in EIG only stems from the observations being almost constant in time. The highest EIG was attained for $\xi_1=0$, where the sensors are closest to the source in the lower left corner. For this setting, the optimal observation time was $\xi_2=20$. At the optimal observation time, the sensor placement has little effect on the EIG, but the sensor placement is relevant for less-than-optimal observation times. The DLQMCIS estimator for the allowed error tolerance $TOL=0.2$ was used with only one randomization $R=1$ and $N^{\ast}$, $M^{\ast}$, and $h^{\ast}$ from the pilot for the EIG approximation. This tolerance necessitates a discretization parameter $h\approx 0.13$. Figure~\ref{fig:xi.3d.ex2} illustrates the EIG for model \eqref{eq:ex.2} as a three-dimensional plot.
\begin{figure}[ht]
\centering
\includegraphics[width=.7\textwidth]{ex_2_xi_25_08.pdf}
\caption{Example 2: Expected information gain (EIG) as a function of the design $\bs{\xi}$, estimated with the DLQMCIS estimator with an allowed tolerance of $TOL=0.2$. The maximum EIG is reached for $\xi_1=0$ and $\xi_2=20$ (highlighted in gold).}
\label{fig:xi.ex2}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=.7\textwidth]{ex_2_xi_3d_25_08.pdf}
\caption{Example 2: Expected information gain (EIG) as a function of the design $\bs{\xi}$, estimated with the DLQMCIS estimator with an allowed tolerance of $TOL=0.2$. The maximum EIG is reached for $\xi_1=0$, $\xi_2=20$.}
\label{fig:xi.3d.ex2}
\end{figure}
\section{Conclusion}
We propose estimating nested integrals using randomized quasi-Monte Carlo (RQMC) approximations for both the outer and inner integrals, furnishing the double-loop quasi-Monte Carlo (DLQMC) estimator. As the main contribution of this work, we derive asymptotic error bounds for this method and obtain the optimal number of samples for each approximation. Application to Bayesian optimal experimental design indicates that this method is superior to using RQMC for only one of the two integral approximations or for neither. Replacing the inner integral with the Laplace approximation yields an even more efficient estimator, but this is only feasible for large error tolerances. For small tolerances, Laplace-based importance sampling for the inner integral further enhances the advantages of the proposed DLQMC estimator.
\section{Methods}
\subsection{\textit{JWST}/NIRSpec spectra}
The NIRSpec \citep{Jakobsen2022A&A...661A..80J} prism/R100 and gratings/R1000 spectra of JADES-GS-z7-01-QU\xspace presented in this work were obtained as part of our JADES GTO programme (PI: N. Lützgendorf, ID:1210) observations in the Great Observatories Origins Deep Survey South (GOODS-S) field between October 21--25, 2022.
The R100 observations were obtained using the disperser/filter configuration
{\sc prism/clear}, which covers the wavelength range between 0.6~µm and 5.3~µm and provides spectra with a wavelength dependent spectral resolution
of $R \ensuremath{\sim\!}\xspace$30--330. The R100 spectrum of JADES-GS-z7-01-QU\xspace is presented in Fig.~\ref{pPXF_fit}.
The medium resolution R1000 observations, with a spectral resolution of $R \ensuremath{\sim\!}\xspace$500--1340 used the disperser/filter configurations {\sc G140M/F070LP}, {\sc G235M/F170LP} and {\sc G395M/F290LP}, which were exposed for 14~h, 7~h
and 7~h. A zoom-in on the R1000 spectrum (into the region with spectral lines best tracing star-formation activity) is shown in Fig.~\ref{pPXF_R1000}. Finally, high-resolution R2700 observations used {\sc G395H/F290LP} and
were exposed for 7~h (like the R1000 spectrum, the R2700 spectrum of JADES-GS-z7-01-QU\xspace contains no detections hence is not shown).
The programme observed a total of 253 galaxies over three dither pointings, with JADES-GS-z7-01-QU\xspace observed in each of the three pointings. Each dither pointing had a different microshutter array (MSA) configuration to place the spectra at different positions on the detector, in order to decrease the impact of detector gaps, mitigate detector artefacts and improve the signal-to-noise ratio for high-priority targets, while increasing the density of observed targets. Within each individual dither pointing, the telescope executed a three-nod pattern (by slightly re-orienting the telescope by the length of one microshutter, keeping the same MSA configuration). In each of the three nodding pointings, three microshutters were opened for each target, with the targets in the central shutter.
Each three-point nodding was executed within 8403 seconds. The nodding pattern was repeated four times in the prism/clear configuration, twice in the G140M/F070LP combination, once in the G235M/F170LP combination and once in the G395M/F290LP combination. This resulted in a total exposure time for JADES-GS-z7-01-QU\xspace of 28 hours in R100, 14 hours in G140M, and 7 hours in each of G235M, G395M and G395H.
The flux-calibrated spectra were extracted using pipelines developed by the ESA NIRSpec Science Operations Team (SOT) and the NIRSpec GTO Team. A detailed description of the pipelines will be presented in a forthcoming NIRSpec/GTO collaboration paper.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figs/8115_R1000.pdf}
\caption{NIRSpec R1000/grating spectrum of the quiescent galaxy JADES-GS-z7-01-QU\xspace at z = 7.3, confirming the absence of emission lines. The blue shaded region shows the 1D noise level. The upper panel shows the signal-to-noise ratio in the 2D grating spectrum. The spectrum is median-smoothed, for visualisation.
}
\label{pPXF_R1000}
\end{figure}
\subsection{\textit{JWST}/NIRCam image and morphology}
A \textit{JWST}/NIRCam F444W-F200W-F090W rgb (red-green-blue) colour image of JADES-GS-z7-01-QU\xspace from our JADES programme (PI: Daniel J. Eisenstein, ID:1180), created from cutouts of the mosaics in each filter, at wavelengths $\lambda \approx 0.8 - 5~\mu$m, is shown in Fig.~\ref{fig:NIRCam}.
For the spectro-photometric modelling of JADES-GS-z7-01-QU\xspace we used the photometry from the JADES and JEMS \citep{Williams2023arXiv230109780W} NIRCam \citep{Rieke2005SPIE.5904....1R, Rieke2022arXiv221212069R} surveys. In particular, the modelling included deep infrared NIRCam observations with the following filters: F090W, F115W, F150W, F182M, F200W, F210M, F277W, F335M, F356W, F410M, F430M, F444W, F460M and F480M. The JADES photometry reduction pipeline made use of the JWST Calibration Pipeline (JWSTCP,
v1.9.2) with the CRDS pmap context 1039. The raw images were transformed into count-rate images, making use of the JWSTCP stage 1, where detector-level corrections and `snowballs' were accounted for. The count-rate images were then flat fielded and flux calibrated with a customised methodology, using JWSTCP stage 2. Finally, the mosaics were created using the stage 3 of the pipeline. For further details on the JADES photometry data reduction pipeline we refer to Robertson et al. 2022 \citep{Robertson2022arXiv221204480R} and Tacchella et al. 2023 \citep{Tacchella2023arXiv230207234T}.
Morphologically, JADES-GS-z7-01-QU\xspace appears as a compact, discy galaxy (half-light
radius \ensuremath{R_\mathrm{e}}\xspace= $36\pm1$~mas $\widehat{=}$ 0.2~kpc $\widehat{=}$ 0.04 arcsec, S\'ersic index $n=0.95\pm0.03$; Fig.~\ref{fig:NIRCam}). The
images also show a distinct, fainter source 0.13~arcsec to the East.
This secondary source could not be deblended in the spectroscopy, but we
obtained deblended photometry using {\sc forcepho}\xspace (Johnson et~al., in~prep.). For more details on the multi-component modelling procedure with {\sc forcepho}\xspace, see Tacchella et al. 2023 \citep{Tacchella2023arXiv230207234T}.
The contribution of the secondary source to the total flux ranges from a maximum of 27~percent
(in the F115W band) to 17~percent (F444W band); its SED is therefore
significantly bluer than that of the main source.
Its photometric redshift $z=7.50\pm0.13$ (1-\textsigma) is consistent with the spectroscopic redshift of the main
source. At a redshift of $z$=$7.3$, this secondary source would lie within
0.7~kpc (or 3~\ensuremath{R_\mathrm{e}}\xspace) from the centre of JADES-GS-z7-01-QU\xspace; its interpretation as a
clump or satellite is unclear.
To attempt removing its contribution from the spectrum of the main source,
we extracted a spectrum from the central three pixels (0.3 arcsec) from the NIRSpec 2-d
spectrum; using this spectrum does not change the interpretation of our
results, i.e., JADES-GS-z7-01-QU\xspace is still quiescent.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figs/tob_gal_newer.pdf}
\caption{\textit{JWST}/NIRCam F444W-F200W-F090W rgb colour image, created from cutouts of the mosaics at wavelengths \textlambda $\approx\!0.8$--5~\textmu m, covering JADES-GS-z7-01-QU\xspace and its nearby projected environment. The five NIRSpec microshutter positions used for this target are overlaid in white.
}\label{fig:NIRCam}
\end{figure}
\subsection{Full spectral fitting}
\subsubsection{{\sc ppxf}\xspace}\label{sec.ppxf}
The red model fit of the stellar continuum in Fig.~\ref{pPXF_fit} was performed with the $\chi^2$-minimization Penalized PiXel-Fitting code\footnote{\url{ https://pypi.org/project/ppxf/}} {\sc ppxf}\xspace \citep{cappellari2017, cappellari2022}, using a library of single stellar-population (SSP) template spectra obtained by combining the synthetic C3K model atmospheres \citep{Conroy2019} with MIST isochrones \citep{Choi2016} and solar abundances. The SSP spectra span a full 2D logarithmic grid of 32 ages and 12 metallicities from age$_{SSP}$ = $10^{7.0}$~yr to $10^{9.2}$~yr (generously older than the age of the Universe at z$=$7.3) and $\ensuremath{\mathrm{log_{10}(Z/Z_{\odot})}}\xspace_{SSP}$ = -2.5 to 0.5. Due to the low resolution of the R100 spectrum, we fix the stellar velocity dispersion to its virial estimate $\sigma_{\rm *} \approx \sigma_{\rm vir} \equiv \sqrt{G M_*/(5R_{\rm e})} = \mathrm{50~km/s}$. To account for dust reddening, the fitted SSPs are multiplicatively coupled to the Calzetti et al. \cite{Calzetti1994} dust attenuation curve. From this, we infer a dust attenuation of the stars in this galaxy of $A_V=0.4 \pm 0.1$. It should be noted that the presence of dust in the {\sc ppxf}\xspace fit is mainly driven by the UV slope. The complex physics of the \hbox{Ly$\alpha$}\xspace drop is not included in the SSP templates. Masking this part of the spectrum returns a nearly dust-free fit with older and more metal-rich stellar populations, which would make JADES-GS-z7-01-QU\xspace even more
quiescent. To infer the stellar population weight grid shown in Fig.~\ref{fig:ppxf_grid}, we perform a residual-based bootstrapping of the initial {\sc ppxf}\xspace best fit without regularization; see \citep[Looser et al., in prep.]{cappellari2017, cappellari2022} for more details. The single SSP weight in the top right corner of Fig.~\ref{fig:ppxf_grid} contributes $<2\%$ to the light-weighted superposition of SSP templates in the fit and is hence spurious. As stated in the main text, we infer an extremely low average stellar metallicity of \ensuremath{\mathrm{log_{10}(Z/Z_{\odot})}}\xspace$\approx-2$ with {\sc ppxf}\xspace. It should be noted that the dominant reconstructed stellar populations lie at \ensuremath{\mathrm{log_{10}(Z/Z_{\odot})}}\xspace$ \approx -2.5$, at the boundary of the available grid of synthetic spectra. This suggests that model SSP spectra of even lower metallicity might be needed in the future to accurately model the stellar populations in galaxies at high redshift.
From {\sc ppxf}\xspace, we infer that the oldest significant population of stars (i.e. indicating the start of the SF) in the galaxy is 100~Myr old, while the youngest is 20~Myr, resulting in an extremely short duration of the star formation of just 80~Myr between the formation of the galaxy and its quenching.
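The residual-based bootstrap can be sketched generically as follows. This is an illustration only, not the exact procedure referenced above; \texttt{fit\_spectrum} stands for a single unregularized {\sc ppxf}\xspace fit returning the best-fit model and the SSP weights, and is a placeholder.
\begin{verbatim}
import numpy as np

def bootstrap_weights(spectrum, noise, fit_spectrum, n_boot=100, seed=0):
    rng = np.random.default_rng(seed)
    bestfit, weights0 = fit_spectrum(spectrum, noise)
    residuals = spectrum - bestfit
    boot_weights = []
    for _ in range(n_boot):
        resampled = rng.choice(residuals, size=residuals.size, replace=True)
        mock = bestfit + resampled        # perturb the best fit with resampled residuals
        boot_weights.append(fit_spectrum(mock, noise)[1])
    return weights0, np.mean(boot_weights, axis=0)   # bootstrap-averaged weight grid
\end{verbatim}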
\subsubsection{{\sc bagpipes}\xspace}\label{sec.bagpipes}
We used the Bayesian Analysis of Galaxies for Physical Inference and Parameter EStimation (\textsc{bagpipes}) code \cite{Carnall2018MNRAS.480.4379C} to simultaneously fit the NIRSpec PRISM measurements and NIRCam photometry. Following Witstok et al. 2023 \cite{2023arXiv230205468W}, we employed the \textsc{bpass} v2.2.1 stellar population synthesis models \cite{Eldridge2017} as underlying stellar models. These include binary stars under the default \textsc{bpass} initial mass function (IMF), having a slope of $-2.35$ (for $M > 0.5 \, \mathrm{M_\odot}$) and ranging in stellar mass from $1 \, \mathrm{M_\odot}$ to $300 \, \mathrm{M_\odot}$.
For the presented {\sc bagpipes}\xspace fit, we assumed two bins of constant SFH, one fixed bin over the last 10~Myr and one variable bin spanning a range beyond 10~Myr (minimum age ranging between 10~Myr and 0.5~Gyr, maximum age between 11~Myr and the age of the Universe). We varied the total stellar mass formed between $0$ and $10^{15} \, \mathrm{M_\odot}$, and the stellar metallicity of the variable SFH bin between $0.01 \, \mathrm{Z_\odot}$ and $1.5 \, \mathrm{Z_\odot}$ (the 10~Myr bin having a fixed metallicity of $0.2 \, \mathrm{Z_\odot}$). Nebular emission is modelled self-consistently with a grid of \textsc{Cloudy} \citep{Ferland2017RMxAA..53..385F} models with the ionisation parameter ($-3 < \log_{10} U < -0.5$) as a free parameter. We included a flexible Charlot \& Fall \cite{Charlot2000ApJ...539..718C} dust attenuation prescription with visual extinction and power-law slope freely varying ($0 < A_V < 7$, $0.4 < n < 1.5$), while fixing the fraction of attenuation from stellar birth clouds to 60\% (the remaining fraction arising in the diffuse ISM; \cite{2019MNRAS.483.2621C}). A first-order correction polynomial \citep{Carnall2019MNRAS.490..417C} is fitted to the spectroscopic data to account for aperture and flux calibration effects. The spectro-photometric fit and the corresponding corner plot are shown in Fig.~\ref{Bagpipes_corner_plot}. We find a very low SFR (consistent with 0) in the last 10~Myr for JADES-GS-z7-01-QU\xspace, noting that other tested SFH parametrisations, namely the double power-law SFH described in Carnall et al. 2023 \cite{carnall+2023c} and a single-bin constant SFH with flexible beginning and end of star formation, agree that the galaxy is quiescent. We infer that the oldest stellar population is 40~Myr old, which is equivalent to a formation redshift of z$=$7.6. The galaxy has been quiescent for 10~Myr, resulting in a short duration of star formation of 30~Myr from the formation of the galaxy to its quenching.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figs/Bagpipes_corner_plot_and_fit.png}
\caption{Bottom left: {\sc bagpipes}\xspace corner plot. Top right: spectro-photometric {\sc bagpipes}\xspace fit of the JADES-GS-z7-01-QU\xspace R100/prism spectrum.}
\label{Bagpipes_corner_plot}
\end{figure}
\subsubsection{{\sc beagle}\xspace}\label{sec.beagle}
{\sc beagle}\xspace is a Bayesian analysis tool to interpret galaxy spectra, incorporating a consistent modelling of stellar radiation and its transfer through the interstellar and intergalactic media \citep{Chevallard2016MNRAS.462.1415C}. A crucial and unique
capability of {\sc beagle}\xspace is the possibility to model a varying \ensuremath{f_{\rm esc}}\xspace.
The corner plot in Fig.~\ref{Beagle_corner_plot} shows the {\sc beagle}\xspace posterior probability distributions of the {\sc beagle}\xspace fit to the R100 spectrum of JADES-GS-z7-01-QU\xspace. The 2D (off-diagonal) and 1D (along the main diagonal) subplots show the posterior distributions on stellar mass \ensuremath{M_\star}\xspace, metallicity $Z$, SFR, maximum age of stars \ensuremath{\mathrm{t_{form}}\xspace}, minimum age of stars \ensuremath{\mathrm{t_{quench}}}\xspace, redshift $z$, effective dust attenuation optical depth in the V-band $A_V$, and the escape fraction of ionising photons $\mathrm{f_{esc}}$. The dark, medium and light blue contours show the extents of the 1-, 2-, and 3-\textsigma credible regions.
{\sc beagle}\xspace gives a current SFR of less than $10^{-1.5}~\ensuremath{{\rm M}_\odot}\xspace\,\mathrm{yr}^{-1}$, a formation time of less than 160~Myr before observation and a quenching time of $\sim$15~Myr before observation.
We also note that {\sc beagle}\xspace, as all other three codes, requires some degree of dust attenuation, which suggests some cold gas is still present, which in turn is incompatible with $\mathrm{f_{esc}}\sim 1$.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figs/8115_BEAGLE_triangle_and_marginal.png}
\caption{Bottom left: {\sc beagle}\xspace corner plot with free $\mathrm{f_{esc}}$. Top right: {\sc beagle}\xspace fit of the R100 spectrum.}
\label{Beagle_corner_plot}
\end{figure}
\subsubsection{{\sc prospector}\xspace}\label{sec.prospector}
We use the Bayesian SED fitting code {\sc prospector}\xspace \citep{Johnson2021ApJS..254...22J} to model the spectro-photometric data of JADES-GS-z7-01-QU\xspace. The posterior corner plot for a few key parameters from {\sc prospector}\xspace is shown in Fig.~\ref{Prospector_corner_plot}. The code uses a flexible spectroscopic calibration model, combined with forward modelling of spectra and photometry, to infer physical properties. Following the setup in Tacchella et al. 2022 \citep{Tacchella2022ApJ...926..134T}, we include a flexible SFH (10 bins with the bursty continuity prior), a flexible attenuation law (diffuse dust optical depth with a power-law modifier to the shape of the Calzetti et al. (2000) attenuation curve for the diffuse dust), and fit for the stellar metallicity. Interestingly, {\sc prospector}\xspace infers a low dust attenuation with $A_V= 0.1^{+0.1}_{-0.0}$ and a rather steep attenuation law ($A_{UV}/A_V = 2.6^{+1.4}_{-0.8}$). This is consistent with the idea that the galaxy has a low gas content, and with the low SFR in the past 30~Myr before observation. {\sc prospector}\xspace infers that the oldest stellar population (as defined by the lookback time when the first 10\% of the stellar mass formed) has an age of $\sim$100~Myr, which means a nominal formation redshift of z$=$8.8. The SFR increases significantly $\sim$80~Myr before observation. After this final burst, lasting $\sim$50~Myr, the galaxy quenched on a short timescale.
We have also experimented with the standard continuity prior \citep{Leja2019ApJ...876....3L}, which weights against sharp transitions in the SFH. The overall shape of the SFH is the same, indicating that the data strongly prefers a decreasing SFH in the past $\sim50$~Myr. Quantitatively, the recent SFR (averaged over the past 10 Myr) increases with this prior to $\log_{10}(\mathrm{SFR}/(M_{\odot}\,\mathrm{yr}^{-1})) = -0.4^{+0.4}_{-0.9}$, still consistent with being quiescent and within the uncertainties of the fiducial value obtained with the bursty continuity prior. The quenching time is slightly more recent ($24_{-9}^{+6}$ Myr), but consistent within the uncertainties quoted in Table~\ref{tab:Basic_outputs}.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figs/Prospector_0_corner_run14.pdf}
\caption{{\sc prospector}\xspace corner plot with stellar mass \ensuremath{M_\star}\xspace, SFR, \ensuremath{\mathrm{t_{form}}\xspace}, \ensuremath{\mathrm{t_{quench}}}\xspace, dust attenuation $A_V$, $A_{UV}/A_V$, and stellar metallicity Z.}
\label{Prospector_corner_plot}
\end{figure}
\subsection{High-SFR, High-\texorpdfstring{\ensuremath{f_{\rm esc}}\xspace}{f esc} interpretation}
It should be noted that the absence of nebular lines always allows, by construction, a solution with high SFR and $\mathrm{f_{esc}}\sim 1$ -- the question is whether this solution is accompanied by the production of ionising photons associated with ongoing star formation. Current observations based on the UV continuum slope
$\beta$ suggest a correlation between more negative $\beta$ values
and high \ensuremath{f_{\rm esc}}\xspace \citep{Topping2022ApJ...941..153T}. The relatively
flat UV slope of $\beta=-1.5$ thus suggests that JADES-GS-z7-01-QU\xspace is not an
extreme LyC leaker.
To assess this scenario quantitatively, we use {\sc beagle}\xspace to compare
two SED models. The model already described (see \ref{sec.beagle}) formally
allows a star-forming solution with high \ensuremath{f_{\rm esc}}\xspace. The alternative
model has a simplified SFH consisting of a constant SFR; in this
way, low-SFR solutions are effectively removed by the constraint
to form sufficient stellar mass of the appropriate age to reproduce
the observed spectrum.
This alternative model gives $\ensuremath{f_{\rm esc}}\xspace = 0.98^{+0.01}_{-0.04}$ and $\mathrm{SFR} = 0.63^{+0.05}_{-0.05}~\ensuremath{{\rm M}_\odot}\xspace~\rm{yr}^{-1}$, a much higher SFR than in the quiescent solution.
To select the preferred model, we use the Bayes ratio, i.e. the
ratio between the evidences of the models. The log-difference between the evidences is $5\pm0.3$; using Jeffreys' criterion,
we conclude this is strong evidence for the quiescent solution and
we adopt it as our fiducial model.
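For orientation, the quoted log-difference translates into a Bayes factor as follows (a worked illustration, assuming the evidences are natural logarithms; this interpretation step is standard and not specific to {\sc beagle}\xspace):
\begin{verbatim}
import numpy as np

delta_ln_Z = 5.0                        # log-difference between the model evidences
bayes_factor = np.exp(delta_ln_Z)       # ~1.5e2 in favour of the quiescent model
log10_bf = delta_ln_Z / np.log(10.0)    # ~2.2 on the scale used by Jeffreys' criterion
\end{verbatim}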
\subsection{Empirical measurements}
To estimate the flux upper limits on \ensuremath{\rm H\upbeta}\xspace and \hbox{[O\,{\sc iii}]$\lambda5007$}\xspace we sum the
formal variance over three pixels. For $\rm EW_{\ensuremath{\rm H\updelta}\xspace_A}$ we use
the bands in the Lick definition \citep{worthey+1994}, but without
any further correction due to spectral resolution. The upper limits
on the SFR were estimated based on the \ensuremath{\rm H\upbeta}\xspace flux limit, using
the \ensuremath{\rm H\upalpha}\xspace calibration \citep{kennicutt1998} and a Balmer decrement
corresponding to $A_V=0.4$~mag, which we derived from {\sc ppxf}\xspace. The
resulting SFR was scaled down to match a Chabrier initial mass function \citep{chabrier2003}.
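The conversion can be sketched as follows. This is an illustration only: the flux limit, the attenuation at the Balmer lines and the Salpeter-to-Chabrier factor below are placeholder or commonly used values, not the exact numbers adopted here.
\begin{verbatim}
import numpy as np
from astropy.cosmology import Planck18
import astropy.units as u

z = 7.3
f_hbeta_lim = 1.0e-19 * u.erg / u.s / u.cm**2   # placeholder Hbeta flux limit
A_hbeta = 0.45                                  # placeholder attenuation at Hbeta for A_V = 0.4

f_halpha = 2.86 * f_hbeta_lim * 10**(0.4 * A_hbeta)   # dust-corrected, intrinsic ratio 2.86
d_L = Planck18.luminosity_distance(z).to(u.cm)
L_halpha = (4.0 * np.pi * d_L**2 * f_halpha).to(u.erg / u.s)
sfr_salpeter = 7.9e-42 * L_halpha.value         # Kennicutt (1998) Halpha calibration
sfr_chabrier = sfr_salpeter / 1.59              # approximate Salpeter-to-Chabrier factor
\end{verbatim}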
\noindent{}
\bmhead{Code availability}
The {\sc ppxf}\xspace, {\sc bagpipes}\xspace and {\sc prospector}\xspace codes are publicly available. {\sc beagle}\xspace is available via a {\sc Docker} image (distributed through {\sc Docker Hub}) upon request at https://iap.fr/beagle.
\bmhead{Data availability}
The data that support the findings of this study are available from the corresponding author upon reasonable request.
\backmatter
\bmhead{Acknowledgements}
{\small TJL, FDE, RM, JW, WB, LS and JS acknowledge support by the Science and Technology Facilities Council (STFC) and by the ERC through Advanced Grant 695671 “QUENCH”. RM also acknowledges funding from a research professorship from the Royal Society. JW further acknowledges support from the Fondation MERAC. This study made use of the Prospero high performance computing facility at Liverpool John Moores University. BDJ, EE, MR and BER acknowledge support from the \textit{JWST}/NIRCam Science Team contract to the
University of Arizona, NAS5-02015. ECL acknowledges support from an STFC Webb Fellowship (ST/W001438/1). SC acknowledges support by European Union’s HE ERC Starting Grant No. 101040227 - WINGS. AJB, JC, AJC, AS, GJC acknowledge funding from the ``FirstGalaxies'' Advanced Grant from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement No. 789056). RS acknowledges support from an STFC Ernest Rutherford Fellowship (ST/S004831/1). SA, BRP and MP acknowledge support from the research project PID2021-127718NB-I00 of the Spanish Ministry of Science and Innovation/State Agency of Research (MICIN/AEI). The Cosmic Dawn Center (DAWN) is funded by the Danish National Research Foundation under grant no.~140. H{\"U} gratefully acknowledges support by the Isaac Newton Trust and by the Kavli Foundation through a Newton-Kavli Junior Fellowship. DJE is supported as a Simons Investigator and by the \textit{JWST}/NIRCam contract to the University of Arizona, NAS5-02015. Funding for this research was provided by the Johns Hopkins University, Institute for Data Intensive Engineering and Science (IDIES). The research of CCW is supported by NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. This research is supported in part by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013.
We thank Christopher Lovell, Annalisa Pillepich, Deborah Sijacki and Nicolas Laporte for helpful comments and discussions.}
\bmhead{Author contributions}
TJL, FDE and RM led the writing of the paper. All authors have contributed to the interpretation of the results.
TJL, JW, LS, ST, FDE, ECL, JC, BDJ, WB and KS led the spectro(-photometric) modelling of JADES-GS-z7-01-QU\xspace and the data visualisation. AB, AD, CNAW, CW, DJE, H-WR, MR, PF, RM, SAl and SAr contributed to the design of the JADES survey. CW contributed to the design of the spectroscopic observations and MSA configurations. KH, ECL, JMH, JL, LW, RE and REH contributed the photometric redshift determination and target selection. AJC, AB, CNAW, ECL, HU, RB and KB contributed to the selection, prioritisation and visual inspection of the targets. SCa, MC, JW, PF, GG, SAr and BRdP contributed to the NIRSpec data reduction and to the development of the NIRSpec pipeline. SAr, SCh, JC, MC, FDE, AdG, ECL, MM, RM, BRdP, TJL, AS, LS, JS, RS and JW contributed to the development of the JADES tools for the spectroscopic data analysis.
BER, ST, BDJ, CNAW, DJE, IS, MR, RE and ZC contributed to the JADES imaging data reduction. CCW, ST, MM, BER, BDJ, CW, DJE, ZJ, JH, AS, KB, AJB, SC, SCh, JC, ECL, AdG, EE, NK, RM, EJN, MJR, LS, IS, RS, KS, HU and KW contributed to the JEMS survey. RHa, BER contributed to the JADES imaging data visualisation. BDJ, ST, AD, DPS, LW, MWT and RE contributed the modelling of galaxy photometry. NB and SAr contributed to the design and optimisation of the MSA configurations. PF, MS, TR, GG, NL, NK and BRdP contributed to the design, construction and commissioning of NIRSpec. MR, CNAW, EE, FS, KH and CCW contributed to the design, construction, and commissioning of NIRCam. BER, CW, DJE, DPS, MR, NL, and RM serve as the JADES Steering Committee.
\section{Methods}
\subsection{JWST/NIRSpec spectra}
The NIRSpec \citep{Jakobsen2022A&A...661A..80J} prism/R100 and gratings/R1000 spectra of JADES-GS-z7-01-QU\xspace presented in this work were obtained as part of GTO program (PI: N. Lützgendorf, ID:1210) observations in the Great Observatories Origins Deep Survey South (GOODS-S) field between October 21-25, 2022.
The R100 observations were obtained using the disperser/filter configuration
{\sc prism/clear}, which covers the wavelength range between 0.6~µm and 5.3~µm and provides spectra with a wavelength-dependent spectral resolution
of $R \sim 30-330$. The R100 spectrum of JADES-GS-z7-01-QU\xspace is presented in Fig.~\ref{pPXF_fit}.
The medium-resolution R1000 observations, with a spectral resolution of $R \sim 500-1340$, used the disperser/filter configurations {\sc G140M/F070LP}, {\sc G235M/F170LP} and {\sc G395M/F290LP}, which were exposed for 14~h, 7~h
and 7~h, respectively. A zoom-in on the region of the R1000 spectrum containing the spectral lines that best trace star-formation activity is shown in Fig.~\ref{pPXF_R1000}. Finally, the high-resolution R2700 observations used {\sc G395H/F290LP} and
were exposed for 7~h (like the R1000 spectrum, the R2700 spectrum of JADES-GS-z7-01-QU\xspace exhibits only noise and is hence not shown).
The program observed a total of 253 galaxies over three dither pointings; JADES-GS-z7-01-QU\xspace was observed in each of the three pointings. Each dither pointing had a different microshutter array (MSA) configuration to place the spectra at different positions on the detector, in order to decrease the impact of detector gaps, mitigate detector artefacts and improve the signal-to-noise ratio for high-priority targets, while increasing the density of observed targets. Within each individual dither pointing the telescope executed a three-nod pattern (slightly re-orienting the telescope by the length of one microshutter while keeping the same MSA configuration). In each of the three nod positions, three microshutters were opened for each target, with the target in the central shutter, to enable background subtraction.
Each three-point nod cycle was executed within 8403 seconds. The nodding pattern was repeated four times in the prism/clear configuration, two times in the G140M/F070LP combination, once in the G235M/F170LP combination and once in the G395M/F290LP combination. This resulted in a total exposure time for JADES-GS-z7-01-QU\xspace of 28 hours in R100, 14 hours in G140M, and 7 hours in each of G235M, G395M and G395H.
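As a consistency check, the quoted exposure totals follow directly from the 8403~s nod cycle, the number of repeats per configuration and the three MSA pointings; the short Python sketch below simply reproduces that arithmetic.
\begin{verbatim}
# Consistency check of the quoted totals: one three-point nod cycle lasts 8403 s
# and is repeated n times per MSA configuration, over three dither pointings.
nod_cycle_s = 8403
pointings = 3
repeats = {"prism/CLEAR": 4, "G140M/F070LP": 2, "G235M/F170LP": 1, "G395M/F290LP": 1}

for config, n in repeats.items():
    print(config, round(nod_cycle_s * n * pointings / 3600.0, 1), "h")
# prism/CLEAR 28.0 h, G140M 14.0 h, G235M and G395M 7.0 h each
\end{verbatim}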
The flux-calibrated spectra were extracted using pipelines developed by the ESA NIRSpec Science Operations Team (SOT) and the NIRSpec GTO Team. A detailed description of the pipelines will be presented in a forthcoming NIRSpec/GTO collaboration paper.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figs/ppxf_fit_008115_R1000.pdf}
\caption{NIRSpec R1000/grating spectrum of the quiescent galaxy JADES-GS-z7-01-QU\xspace at z = 7.3, confirming the complete absence of emission lines.
}
\label{pPXF_R1000}
\end{figure}
\subsection{JWST/NIRCam image and morphology}
A JWST/NIRCam F444W-F200W-F090W colour mosaic at wavelengths $\lambda \approx 0.8 - 5~\mu$m of JADES-GS-z7-01-QU\xspace from the JADES programme is shown in Fig.~\ref{fig:NIRCam}.
\medskip
Morphologically, JADES-GS-z7-01-QU\xspace appears as a compact, disky galaxy (half-light
radius \ensuremath{R_\mathrm{e}}\xspace= $36\pm1$~mas $\widehat{=}$ 0.2~kpc $\widehat{=}$ 0.04 arcsec, S\'ersic index $n=0.95\pm0.03$; Fig.~\ref{fig:NIRCam}). The
images also show a fainter, distinct source 0.13~arcsec to the East.
This secondary source could not be deblended in the spectroscopy, but we
obtained deblended photometry using {\sc forcepho}\xspace (Johnson et~al., in~prep.).
Its contribution to the total flux ranges from a maximum of 27~percent
(in the F115W band) down to 17~percent (in the F444W band); its SED is therefore
significantly bluer than that of the main source.
Its photometric redshift z=7.50$\pm0.13$ (1-$\sigma$) is consistent with the spectroscopic redshift of the main
source. At a redshift of $z$=$7.3$, this secondary source would lie within
0.7~kpc (or 3~\ensuremath{R_\mathrm{e}}\xspace) from the centre of JADES-GS-z7-01-QU\xspace; its interpretation as a
clump or satellite is unclear.
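For reference, the conversion between the quoted angular and physical sizes can be reproduced with a standard flat cosmology; the sketch below uses Astropy's Planck18 parameters, which is an assumption, since the exact cosmology adopted in the analysis is not restated here.
\begin{verbatim}
import astropy.units as u
from astropy.cosmology import Planck18

z = 7.3
kpc_per_arcsec = (1.0 / Planck18.arcsec_per_kpc_proper(z)).to(u.kpc / u.arcsec)

r_e = 36e-3 * u.arcsec    # half-light radius (36 mas)
sep = 0.13 * u.arcsec     # projected separation of the secondary source

print(kpc_per_arcsec)                    # ~5 kpc/arcsec at z = 7.3
print((r_e * kpc_per_arcsec).to(u.kpc))  # ~0.2 kpc
print((sep * kpc_per_arcsec).to(u.kpc))  # ~0.7 kpc, i.e. ~3 R_e
\end{verbatim}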
To attempt to remove its contribution from the spectrum of the main source,
we extracted a spectrum from the central three pixels (0.3 arcsec) of the NIRSpec 2-d
spectrum; using this spectrum does not change the interpretation of our
results, i.e., JADES-GS-z7-01-QU\xspace is still quiescent.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figs/tob_gal_newer.pdf}
\caption{JWST/NIRCam F444W-F200W-F090W colour mosaic at wavelengths $\lambda \approx 0.8-5~\mu m$, covering JADES-GS-z7-01-QU\xspace and nearby projected environment. The five NIRSpec microshutter positions used for this target are overlaid in white.
}\label{fig:NIRCam}
\end{figure}
\subsection{Full spectral fitting}
\subsubsection{{\sc ppxf}\xspace}\label{sec.ppxf}
The red model fit of the stellar continuum in Fig.~\ref{pPXF_fit} was performed with the $\chi^2$-minimization Penalized PiXel-Fitting code\footnote{\url{https://pypi.org/project/ppxf/}} {\sc ppxf}\xspace \citep{cappellari2017, cappellari2022}, using a library of single stellar-population (SSP) template spectra obtained by combining the synthetic C3K model atmospheres \citep{Conroy2019} with MIST isochrones \citep{Choi2016} and solar abundances. The SSP spectra span a full 2D logarithmic grid of 32 ages and 12 metallicities, from age$_{SSP}$ = $10^{7.0}$~yr to $10^{9.2}$~yr (generously older than the age of the Universe at z$=$7.3) and from $\ensuremath{\mathrm{log_{10}(Z/Z_{\odot})}}\xspace_{SSP} = -2.5$ to $0.5$. Due to the low resolution of the R100 spectrum, we fix the stellar velocity dispersion to its virial estimate $\sigma_{\rm *} \approx \sigma_{\rm vir} \equiv \sqrt{G M_*/(5R_{\rm e})} = \mathrm{50~km/s}$. To account for dust reddening, the fitted SSPs are multiplicatively coupled to the Calzetti et al. \cite{Calzetti1994} dust attenuation curve. From this, we infer a dust attenuation of the stars in this galaxy of $A_V=0.4 \pm 0.1$. (It should be noted that the presence of dust in the {\sc ppxf}\xspace fit is mainly driven by the UV slope redwards of the \hbox{Ly$\alpha$}\xspace drop. The complex physics of the \hbox{Ly$\alpha$}\xspace drop is not included in the SSP templates. Masking this part of the spectrum returns a nearly dust-free fit with older and more metal-rich stellar populations, i.e. it biases the fit towards an even more quenched interpretation.) To infer the stellar population weight grid shown in Fig.~\ref{fig:ppxf_grid}, we perform a residual-based bootstrapping of the initial {\sc ppxf}\xspace best fit without regularization; see \citep[Looser et al., in prep.]{cappellari2017, cappellari2022} for more details. The single SSP weight in the top right corner of Fig.~\ref{fig:ppxf_grid} contributes $<2\%$ to the light-weighted superposition of SSP templates in the fit and is hence spurious. As stated in the main text, we infer an extremely low average stellar metallicity of \ensuremath{\mathrm{log_{10}(Z/Z_{\odot})}}\xspace$\approx-2$ with {\sc ppxf}\xspace. It should be noted that the dominant reconstructed stellar populations lie at \ensuremath{\mathrm{log_{10}(Z/Z_{\odot})}}\xspace$ \approx -2.5$, at the boundary of the available grid of synthetic spectra. This suggests that even lower-metallicity SSP model spectra might be needed in the future to accurately model the stellar populations in galaxies at high redshift.
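The virial estimate used to fix the dispersion can be evaluated as in the sketch below; the stellar mass entered there is an assumed, order-of-magnitude value rather than the value fitted for JADES-GS-z7-01-QU\xspace.
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.constants import G

# Illustrative evaluation of the virial estimate sigma_vir = sqrt(G M* / (5 R_e)).
# The stellar mass below is an assumed order-of-magnitude value, not the fitted one.
M_star = 5e8 * u.Msun
R_e = 0.2 * u.kpc

sigma_vir = np.sqrt(G * M_star / (5 * R_e)).to(u.km / u.s)
print(sigma_vir)   # ~46 km/s, of the order of the adopted 50 km/s
\end{verbatim}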
From {\sc ppxf}\xspace, we infer that the oldest significant population of stars (i.e. marking the onset of star formation) in the galaxy is 100~Myr old, while the youngest is 30~Myr old, implying an extremely short star-formation episode of just 70~Myr between the formation of the galaxy and its quenching.
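The residual-based bootstrap mentioned above can be summarised schematically as follows; the fitting routine is left as a placeholder standing in for the actual {\sc ppxf}\xspace call, so this is an outline of the procedure rather than the implementation used.
\begin{verbatim}
import numpy as np

def residual_bootstrap(wave, flux, best_model, fit_function, n_boot=100, seed=None):
    """Schematic residual bootstrap: resample the residuals of the unregularized
    best fit, add them back onto the best-fit model, refit, and collect the
    template weights. `fit_function(wave, flux)` is a placeholder standing in
    for the actual pPXF call and is assumed to return the SSP weight vector."""
    rng = np.random.default_rng(seed)
    residuals = flux - best_model
    weights = []
    for _ in range(n_boot):
        perturbed = best_model + rng.choice(residuals, size=residuals.size, replace=True)
        weights.append(fit_function(wave, perturbed))
    return np.array(weights)   # distribution of SSP weights over realizations
\end{verbatim}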
\subsubsection{{\sc bagpipes}\xspace}\label{sec.bagpipes}
We used the Bayesian Analysis of Galaxies for Physical Inference and Parameter EStimation (\textsc{bagpipes}) code \cite{Carnall2018MNRAS.480.4379C} to simultaneously fit the NIRSpec PRISM measurements and NIRCam photometry. Following Witstok et al. 2023 \cite{2023arXiv230205468W}, we employed the \textsc{bpass} v2.2.1 stellar population synthesis models \cite{Eldridge2017} as underlying stellar models. These include binary stars under the default \textsc{bpass} initial mass function (IMF), having a slope of $-2.35$ (for $M > 0.5 \, \mathrm{M_\odot}$) and ranging in stellar mass from $1 \, \mathrm{M_\odot}$ to $300 \, \mathrm{M_\odot}$.
For the presented {\sc bagpipes}\xspace fit, we assumed two bins of constant SFH: one fixed bin over the last 10~Myr and one variable bin spanning a range beyond 10~Myr (minimum age ranging between 10~Myr and 0.5~Gyr, maximum age between 11~Myr and the age of the Universe). We varied the total stellar mass formed between $0$ and $10^{15} \, \mathrm{M_\odot}$, and the stellar metallicity of the variable SFH bin between $0.01 \, \mathrm{Z_\odot}$ and $1.5 \, \mathrm{Z_\odot}$ (the 10~Myr bin having a fixed metallicity of $0.2 \, \mathrm{Z_\odot}$). Nebular emission is modelled self-consistently with a grid of \textsc{Cloudy} \citep{Ferland2017RMxAA..53..385F} models with the ionisation parameter ($-3 < \log_{10} U < -0.5$) as a free parameter. We included a flexible Charlot \& Fall \cite{Charlot2000ApJ...539..718C} dust attenuation prescription with visual extinction and power-law slope freely varying ($0 < A_V < 7$, $0.4 < n < 1.5$), while fixing the fraction of attenuation arising in stellar birth clouds to 60\% (the remaining fraction arising in the diffuse ISM; e.g. ref. \cite{2019MNRAS.483.2621C}). A first-order correction polynomial \citep{Carnall2019MNRAS.490..417C} is fitted to the spectroscopic data to account for aperture and flux-calibration effects. The spectrophotometric fit and the corresponding corner plot are shown in Fig.~\ref{Bagpipes_corner_plot}. We find a very low SFR (consistent with 0) in the last 10~Myr for JADES-GS-z7-01-QU\xspace, noting that the other SFH parametrisations we tested, namely the double power-law SFH described in Carnall et al. 2023 \cite{carnall+2023c} and a single-bin constant SFH with flexible beginning and end of star formation, agree that the galaxy is quiescent. We infer that the oldest stellar population is 40~Myr old, which is equivalent to a formation redshift of z$=$7.6. The galaxy has been quiescent for 10~Myr, resulting in a short duration of star formation of 30~Myr from the formation of the galaxy to its quenching.
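To make the dust prescription concrete, the schematic function below encodes one reading of the set-up described above (a single power-law attenuation curve, with stars older than the birth-cloud phase seeing only the diffuse-ISM share of the attenuation); it is an illustration under these assumptions, not the exact {\sc bagpipes}\xspace implementation.
\begin{verbatim}
import numpy as np

def attenuation_mag(wave_angstrom, A_V, n, young, f_birth_cloud=0.6):
    """Schematic two-component (Charlot & Fall-like) attenuation in magnitudes:
    a single power law normalised at 5500 A; stars still embedded in their birth
    clouds see the full column, older stars only the diffuse-ISM share
    (1 - f_birth_cloud). One reading of the prior set-up described in the text,
    not the exact BAGPIPES implementation."""
    A_total = A_V * (np.asarray(wave_angstrom) / 5500.0) ** (-n)
    return A_total if young else (1.0 - f_birth_cloud) * A_total

# Example: rest-frame 1500 A attenuation for illustrative A_V = 0.5 mag, n = 0.7.
print(attenuation_mag(1500.0, A_V=0.5, n=0.7, young=True))    # birth-cloud population
print(attenuation_mag(1500.0, A_V=0.5, n=0.7, young=False))   # older populations
\end{verbatim}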
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figs/Bagpipes_corner_plot_and_fit.png}
\caption{Bottom left: {\sc bagpipes}\xspace corner plot. Top right: Spectrophotometric {\sc bagpipes}\xspace fit of the JADES-GS-z7-01-QU\xspace R100/prism spectrum.}
\label{Bagpipes_corner_plot}
\end{figure}
\subsubsection{{\sc beagle}\xspace}\label{sec.beagle}
{\sc beagle}\xspace is a Bayesian analysis tool to interpret galaxy spectra, incorporating a consistent modelling of stellar radiation and its transfer through the interstellar and intergalactic media \citep{Chevallard2016MNRAS.462.1415C}.
The corner plot in Fig.~\ref{Beagle_corner_plot} shows the posterior probability distributions of the {\sc beagle}\xspace fit to the R100 spectrum of JADES-GS-z7-01-QU\xspace. The 2D (off-diagonal) and 1D (along
the main diagonal) subplots show the posterior distributions on stellar mass \ensuremath{M_\star}\xspace, metallicity $Z$, SFR, maximum age of stars \ensuremath{\mathrm{t_{form}}\xspace}, minimum age of stars \ensuremath{\mathrm{t_{quench}}}\xspace, redshift $z$, effective dust attenuation optical depth in the V-band $\hat{\tau}_V$, and the escape fraction of ionizing photons $\mathrm{f_{esc}}$. The dark, medium and light blue contours show the extents of the 1-, 2-, and 3-$\sigma$ credible regions.
It should be noted that the SFR and $\mathrm{f_{esc}}$ are quantities derived from the underlying physical model; the absence of nebular lines always allows, by construction, a solution with $\mathrm{f_{esc}}\sim 1$ -- the question is whether this solution is accompanied by the production of ionizing photons associated with ongoing star formation. The {\sc beagle}\xspace posterior distribution does not highlight a solution with high $\mathrm{f_{esc}}$ and ongoing star formation \citep{Trebitsch2017geat.confE..49T,Zackrisson2017ApJ...836...78Z,Topping2022ApJ...941..153T}. On the contrary, while $\mathrm{f_{esc}}$ is unconstrained, even for values approaching unity the SFR remains below $1~\ensuremath{{\rm M}_\odot}\xspace\,{\rm yr}^{-1}$ at the 3-$\sigma$ level (fifth subplot from the left at the bottom of Fig.~\ref{Beagle_corner_plot}). This confirms quiescence even in the high-\ensuremath{f_{\rm esc}}\xspace scenario.
{\sc beagle}\xspace gives a current SFR of less than $10^{-1.5}~\ensuremath{{\rm M}_\odot}\xspace\,\mathrm{yr}^{-1}$, a formation time of less than 160~Myr before observation and a quenching time of $\sim$15~Myr before observation.
We also note that {\sc beagle}\xspace, like the other three codes, requires some degree of dust attenuation, which is incompatible with $\mathrm{f_{esc}}\sim 1$.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figs/8115_BEAGLE_triangle_and_marginal.png}
\caption{Bottom left: {\sc beagle}\xspace corner plot with free $\mathrm{f_{esc}}$. Top right: {\sc beagle}\xspace fit of the R100 spectrum.}
\label{Beagle_corner_plot}
\end{figure}
\subsubsection{{\sc prospector}\xspace}\label{sec.prospector}
We use the Bayesian SED fitting code {\sc prospector}\xspace \citep{Johnson2021ApJS..254...22J} to model the spectrophotometric data of JADES-GS-z7-01-QU\xspace. The posterior corner plot for {\sc prospector}\xspace is shown in Fig.~\ref{Prospector_corner_plot}. The code uses a flexible spectroscopic calibration model, combined with forward modelling of spectra and photometry, to infer physical properties, including stellar population and nebular emission properties, and to constrain SFHs; maximum-likelihood estimates and posterior probability distributions are obtained using Monte Carlo sampling. For the {\sc prospector}\xspace fit presented in this paper we assume a XXX SFH. Nebular emission lines and nebular continuum are modelled self-consistently as XXX. We included a flexible XXX dust attenuation prescription, with a freely varying UV attenuation. Interestingly, {\sc prospector}\xspace infers dust attenuation only in the UV part of the spectrum ($A_{UV}/A_V = 4.6^{+1.0}_{-1.2}$, while $A_V= 0.1^{+0.1}_{-0.0}$). This is likely due to the clump or satellite in the slit. We find a very low SFR (consistent with 0) in the last 25~Myr before observation for JADES-GS-z7-01-QU\xspace with {\sc prospector}\xspace, noting that the other SFH prescriptions we tested, including one with a continuity prior (which biases the fit towards a star-forming solution), agree that the galaxy is quiescent. We infer that the oldest significant stellar population is 80~Myr old, which is equivalent to a formation redshift of z$=$8.0. The galaxy formed half of its mass by $t_* =59.9^{+3.6}_{-3.7}$~Myr and has been quiescent for 25~Myr, resulting in a relatively short burst of star formation of 55~Myr before the galaxy quenched.
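The quoted formation redshifts follow from subtracting the age of the oldest stellar population from the age of the Universe at $z=7.3$; the sketch below does this with Astropy's Planck18 cosmology, which is an assumption rather than the exact cosmology used in the fits.
\begin{verbatim}
import astropy.units as u
from astropy.cosmology import Planck18, z_at_value

z_obs = 7.3
age_at_obs = Planck18.age(z_obs)   # ~0.7 Gyr

# Formation redshift implied by the age of the oldest stellar population:
# ~80 Myr (Prospector) and ~40 Myr (Bagpipes). The cosmology is an assumption.
for t_oldest in [80 * u.Myr, 40 * u.Myr]:
    z_form = z_at_value(Planck18.age, age_at_obs - t_oldest)
    print(t_oldest, "-> z_form ~", round(float(z_form), 1))
# 80 Myr -> z_form ~ 8.0, 40 Myr -> z_form ~ 7.6, matching the values in the text
\end{verbatim}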