**arXiv:** 2309.07488v2 (http://arxiv.org/abs/2309.07488v2) · **Author:** Michael Preisel · **Published:** 2023-09-14

# Long-term mean-variance optimization under mean-reverting equity returns
###### Abstract.
Being a long-term investor has become an argument by itself to sustain larger allocations to risky assets, but - although intuitively appealing - it is rarely stated exactly why capital markets would provide a better opportunity set to investors with long investment horizons than to other investors.
In this paper, it is shown that if in fact the equity risk-premium is slowly mean-reverting then an investor committing to a long-term deterministic investment strategy would realize a better risk-return trade-off in a mean-variance optimization than investors with shorter investment horizons.
It is well known that the problem of mean-variance optimization cannot be solved by dynamic programming. Instead, the principle of Calculus of Variations is applied to derive an Euler-Lagrange equation characterizing the optimal investment strategy.
It is a main result that the optimization problem is equivalent to a spectral problem by which explicit solutions to the optimal investment strategy can be derived for an equilibrium market of bonds and equity. In this setting, the paper contributes to portfolio choice in continuous time in the tradition of Markowitz.
**JEL Classification:** G11
**Key Words:** Mean-Variance Optimization, Deterministic Investment Strategy, Mean-Reversion, Spectral Method, Calculus of Variations.
Footnote 1: The term ’pension provider’ covers any savings-based pension arrangement, say, pension funds, individual accounts, target date funds, etc.
## 1. Introduction
It is long-standing investment advice that younger people should hold higher proportions of risky assets than other investors given their longer investment horizon. This implies that - on a risk-adjusted basis - it is rewarded simply to hold a risky asset for an extended period of time. It is therefore a statement of - or an assumption about - capital market structure which is supported by the literature on rational portfolio choice only if returns on risky assets display non-trivial behaviour, say, if returns are mean-reverting [1, 5, 15, 17, 18, 22].
Another aspect is whether young people actually would hold risky positions over sustained periods of time - no matter what. One such motive would be to save for retirement. Retirement savings are often organized through specialized financial institutions tasked with converting funds put aside during working life into an income stream during retirement. Common to such pension providers\({}^{1}\) is that funds committed in working life will not be released until after retirement, hence, the savings vehicle institutionalizes long holding periods. Therefore, pension providers are - on behalf of younger people - examples of 'long-term' investors in the sense of this paper as they truly can plan capital and (buy and) hold assets over very long periods of time.
This of course raises the question of what investment strategy a pension provider should apply? By tradition, for reasons of governance or - often - simply by regulation, the current and future portfolio composition is set by a strategic asset allocation, a deterministic investment strategy detailing the allocation to major asset classes, say, stocks and bonds, from present day to all future dates. Since the pension provider - at least formally - pre-commits to a fixed investment strategy the optimization problem is reduced to an optimization over deterministic investment strategies only - see [7, 8] for similar applications to retirement savings.
Since Markowitz [16], the workhorse of the asset management industry remains mean-variance optimization. The original work was a single-period optimization but extension to multiple periods has proven not to be straightforward and the literature on this topic is vast, see [19, 20] and references therein.
In combination, a long-term investor therefore would seek the (deterministic) investment strategy to maximize the expected return on a given horizon for a fixed variance target. If asset-returns mean-revert over time-scales comparable to or longer than the investment horizon then - in support of the claim to sustain larger allocations to risky assets - the risk-return trade-off should improve with investment horizon as a testament to long-term risk being rewarded.
In the case of continuous-time mean-variance optimization, it is well known that the optimization problem cannot be solved by standard dynamic programming techniques. Several authors have addressed this problem: In [9], it is shown that for complete markets the distribution of optimal wealth can be derived by martingale methods, hence, the optimal strategy exists and is unique; [2] extends solutions to incomplete markets by splitting the optimization criterion into a local term and a term taking time-inconsistency into account to apply a dynamic program; [20] provides an exact recursive solution for a multi-period portfolio tree; [4] proposes a game-theoretic approach by which the portfolio is optimized recursively taking into account that each subsequent allocation will do the same; [10] proposes the concept of local mean-variance efficient portfolios by which the portfolio remains mean-variance efficient over any subperiod, and [6, 7] derive a characterizing set of differential equations from the Hamilton-Jacobi-Bellman and Pontryagin minimization principles for deterministic investment strategies for a specialized version of the optimization criterion.
Common to these approaches is that analytical solutions are derived only for Black-Scholes-type capital markets - otherwise only numerical solutions are provided.
The objective of this paper therefore is - for a given investment horizon - to derive explicit optimal investment strategies maximizing horizon expected return for fixed horizon variance in a capital market providing a non-trivial investment opportunity set in bonds and stocks, respectively. Consistent with the practice of Strategic Asset Allocation, investment strategies are allowed to change deterministically over time but not to depend on the state of the capital market. Bond returns follow a Vasicek model and stock returns display mean-reversion in surplus return.
The capital market model was first proposed by [17] where it provides the opportunity set to an investor optimizing a real CRRA utility on a fixed horizon but limited to investing in stocks and nominal bonds only. The capital market model was later extended to include index-linked bonds in [14] where also an algorithm for exact simulation is derived. For the purposes of this paper, the model is reduced to nominal assets only.
The solution of the mean-variance optimization problem follows [13] who recently demonstrated by direct methods that explicit solutions can be derived from the principle of Calculus of Variations in the special case of zero correlation between stocks and bonds.
The main result of this paper is to show that by an exponential transformation of the factor allocations to cash, bonds, and equity, the Euler-Lagrange equation of the full optimization problem is an inhomogeneous second-order matrix differential equation in the transformed variable. Furthermore, from the theory of solvents [11] it is shown that the homogeneous solution is equivalent to a spectral problem and explicit formulas for the eigenvalues are provided. Finally, by careful analysis of boundary conditions explicit solutions to the optimization problem are provided.
## 2. Capital Market Model
We assume the capital market provides investment opportunity in one risky asset, \(S_{u}\), at time \(u\) which - in line with common terminology - we will refer to as 'equity'.
Consistent with the assumption of risk being rewarded in the long term, the equity surplus-return, \(x_{u}\), displays mean-reversion and is (perfectly) negatively correlated with equity returns, that is, equity returns will tend to be higher following a loss and lower following a gain. This allows the model to distinguish between short-term fluctuations and longer-term volatility, which is suppressed by the mean-reversion of the equity surplus-return, hence, holding equity is explicitly rewarded on long investment horizons - see also the discussion of model properties in [14].
To allow full control of horizon variance, the capital market model also explicitly provides investment options in an equilibrium bond market by providing a full nominal term structure of interest rates at all points in time as discussed in Section 2.2.1 below.
### Three Factor Capital Market Model
The capital market model is stated by its real-world dynamics - or P-dynamics - reflecting the risks a holder of securities in the market faces. The equity return, \(S_{u}\), satisfies the stochastic differential equation
\[\frac{dS_{u}}{S_{u}}=(r_{u}+x_{u})du+\sigma_{S}dW_{u}^{S} \tag{1}\]
at time \(u\) where \(r_{u}\) is the (nominal) short rate, \(x_{u}\) a state-dependent equity surplus-return, \(\sigma_{S}\) is equity volatility, and \(W_{u}^{S}\) is a standard Brownian motion.
The equity surplus-return is mean-reverting and assumed to follow an Ornstein-Uhlenbeck process given by
\[dx_{u}=\alpha(\bar{x}-x_{u})du-\sigma_{x}dW_{u}^{S} \tag{2}\]
with \(\alpha\) the mean-reversion strength, \(\bar{x}\) the average surplus-return, and \(\sigma_{x}\) the volatility of the equity surplus-return. The equity surplus-return is assumed to be perfectly negatively correlated with equity returns, hence, it loads on the same Brownian motion driving the stock index, \(W_{u}^{S}\), albeit with a negative sign.
The short rate, \(r_{u}\), is also assumed to follow an Ornstein-Uhlenbeck process and is given by
\[dr_{u}=\kappa(\bar{r}-r_{u})du+\sigma_{r}dW_{u}^{r} \tag{3}\]
where \(\kappa\) is the mean-reversion strength, \(\bar{r}\) is the (long-term) average interest rate, \(\sigma_{r}\) is the interest rate volatility, and \(W_{u}^{r}\) is a standard Brownian motion.
In total, the capital market model defines three state variables driven by a two-dimensional Brownian motion,
\[W_{u}=(W_{u}^{r},W_{u}^{S})^{T},\]
with correlation \(\rho\). We will further assume the model is not degenerate, that is, \(\rho^{2}<1\). The key features of the model are summarized in Appendix A where also explicit solutions for the state variables can be found.
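To make the dynamics concrete, the following is a minimal simulation sketch of (1) to (3) under the \(P\)-dynamics. It is not part of the paper: the function name is ours and the default parameters are the illustrative 'slow mean-reversion' values of Table 1; a log-Euler step is used for equity and Euler steps for the two Ornstein-Uhlenbeck factors.

```python
import numpy as np

def simulate_three_factor(r0=0.02, x0=0.04, S0=1.0,
                          kappa=0.05, r_bar=0.02, sigma_r=0.01,
                          alpha=0.01, x_bar=0.04, sigma_x=0.007,
                          sigma_S=0.15, rho=0.25,
                          T=10.0, n_steps=2500, n_paths=10_000, seed=0):
    """Euler/log-Euler discretization of (1)-(3); a sketch, not the paper's code."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    r = np.full(n_paths, r0)
    x = np.full(n_paths, x0)
    logS = np.full(n_paths, np.log(S0))
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rng.standard_normal(n_paths)
        dW_r = np.sqrt(dt) * z1
        dW_S = np.sqrt(dt) * (rho * z1 + np.sqrt(1.0 - rho**2) * z2)  # corr(dW^r, dW^S) = rho
        logS += (r + x - 0.5 * sigma_S**2) * dt + sigma_S * dW_S      # cf. (1)
        x += alpha * (x_bar - x) * dt - sigma_x * dW_S                # cf. (2): loads on -dW^S
        r += kappa * (r_bar - r) * dt + sigma_r * dW_r                # cf. (3)
    return r, x, np.exp(logS)
```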
### Pricing Consistency and the Risk Premium
It is well known from arbitrage theory [3], that in a complete market, the risk premium, \(\xi_{u}\), is the stochastic process under which the discounted price process, \(p_{u}/b_{u}\), becomes a martingale under a new risk-neutral measure, \(Q\), defined by the Girsanov transformation
\[dW_{u}=dW_{u}^{Q}-\xi_{u}du\]
where \(p_{u}\) is the price process of any security in the market, \(W_{u}^{Q}\) is a Brownian motion under \(Q\), and \(b_{s}=\exp(\int_{t}^{s}r_{u}du)\) is the bank account numeraire at time \(s\) given initial time \(t\). Hence, the risk-neutral - or Q - dynamics of the model is
\[dr_{u} =\big{[}\kappa(\bar{r}-r_{u})-\sigma_{r}\xi_{u}^{r}\big{]}du+\sigma_{r}dW_{u}^{Q(r)} \tag{4a}\] \[dx_{u} =\big{[}\alpha(\bar{x}-x_{u})+\sigma_{x}\xi_{u}^{S}\big{]}du-\sigma_{x}dW_{u}^{Q(S)}\] (4b) \[\frac{dS_{u}}{S_{u}} =(r_{u}+x_{u}-\sigma_{S}\xi_{u}^{S})du+\sigma_{S}dW_{u}^{Q(S)}. \tag{4c}\]
Upon inspection of (4c), it is immediately clear that \(x_{u}/\sigma_{S}\) is the equity risk-premium. Furthermore, it is easily verified that if the risk-premium process is parameterized as
\[\xi_{u}=(\xi_{u}^{r},\xi_{u}^{S})^{T}=\begin{pmatrix}[(a-\kappa)r_{u}+\kappa \bar{r}-ab]/\sigma_{r}\\ x_{u}/\sigma_{S}\end{pmatrix} \tag{5}\]
where \(\xi_{u}^{r}\) and \(\xi_{u}^{S}\) are the interest rate and equity risk-premium, respectively, then the short rate, \(dr_{u}=a(b-r_{u})du+\sigma_{r}dW_{u}^{Q(r)}\), retains its structure as an Ornstein-Uhlenbeck process under Q-measure, hence, it is a Vasicek model and the term structure of interest rates is well known [21].
For future reference, we state the capital market assumptions in the following proposition:
**Proposition 2.1**.: _Assume that the volatilities \(\sigma_{r},\sigma_{x},\sigma_{S}>0\), the mean-reversion strengths \(\kappa\neq\alpha\neq a>0\), and the mean levels \(\bar{r},\bar{x},b\) are scalar constants, and that \(\rho^{2}<1\); then for \(s\geq t\) the risk-premium process, \(\xi_{s}\), can be decomposed into_
\[\xi_{s}=\bar{\xi}+e^{-\boldsymbol{\Gamma}(s-t)}\xi_{t}+\boldsymbol{\xi}\,\int _{t}^{s}e^{-\boldsymbol{\Gamma}(s-u)}dW_{u} \tag{6}\]
_with respect to initial values \(r_{t},x_{t}\) at time \(t\) where_
\[\bar{\xi} =\begin{pmatrix}a(\bar{r}-b)/\sigma_{r}\\ \bar{x}/\sigma_{S}\end{pmatrix}, \xi_{t} =\begin{pmatrix}(r_{t}-\bar{r})(a-\kappa)/\sigma_{r}\\ (x_{t}-\bar{x})/\sigma_{S}\end{pmatrix},\]
_and_
\[\boldsymbol{\xi} =\begin{cases}(a-\kappa)&0\\ 0&-\sigma_{x}/\sigma_{S}\end{cases}, \boldsymbol{\Gamma} =\begin{cases}\kappa&0\\ 0&\alpha\end{cases}\]
_are diagonal matrices of full rank._
_Furthermore, the risk-premium, \(\xi_{s}\), is Normally distributed with conditional mean_
\[\mathbb{E}\,\xi_{s|t}=\bar{\xi}+e^{-\boldsymbol{\Gamma}(s-t)}\xi_{t}\]
_and conditional variance_
\[\mathbb{V}\,\xi_{s|t}=\boldsymbol{\xi}V_{s|t}^{\xi}\boldsymbol{\xi}\]
_where_
\[\boldsymbol{V}_{s|t}^{\xi}=\left\{\begin{matrix}\psi_{2\kappa}(s-t)&\rho\psi_{ \kappa+\alpha}(s-t)\\ \rho\psi_{\kappa+\alpha}(s-t)&\psi_{2\alpha}(s-t)\end{matrix}\right\} \tag{7}\]
_and_
\[\psi_{\alpha}(s-t)=\int_{t}^{s}e^{-\alpha(u-t)}du=\frac{1}{\alpha}\big{(}1-e^{ -\alpha(s-t)}\big{)}. \tag{8}\]
Proof.: See appendix B.
It is clear from (6) that the local (short-term) expected risk-premium is stochastic and in general deviates from the average risk-premium. Similarly, from (7) and the properties of (8), local variance initially grows linearly with investment horizon, \(s\), whereas 'in the long run' the risk-premium is stationary with asymptotic variance, (36), and expected return \(\bar{\xi}\), hence, the model explicitly allows 'short-term' risk-premium characteristics to be distinguishable from 'long-term' ones.
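As a numerical companion to Proposition 2.1, the sketch below (ours, not part of the paper) evaluates the conditional mean and covariance of the risk-premium directly from (6) to (8); since \(\boldsymbol{\Gamma}\) is diagonal, the matrix exponential reduces to scalar exponentials.

```python
import numpy as np

def psi(a, tau):
    """psi_a(tau) = (1 - exp(-a*tau)) / a, cf. (8)."""
    return (1.0 - np.exp(-a * tau)) / a

def risk_premium_moments(r_t, x_t, tau, kappa, r_bar, sigma_r,
                         alpha, x_bar, sigma_x, sigma_S, rho, a, b):
    """Conditional mean and covariance of xi_{t+tau} given (r_t, x_t), cf. Proposition 2.1."""
    xi_bar = np.array([a * (r_bar - b) / sigma_r, x_bar / sigma_S])
    xi_t = np.array([(r_t - r_bar) * (a - kappa) / sigma_r, (x_t - x_bar) / sigma_S])
    Xi = np.diag([a - kappa, -sigma_x / sigma_S])                       # bold xi
    exp_Gamma = np.diag([np.exp(-kappa * tau), np.exp(-alpha * tau)])   # e^{-Gamma*tau}
    mean = xi_bar + exp_Gamma @ xi_t
    V = np.array([[psi(2.0 * kappa, tau), rho * psi(kappa + alpha, tau)],
                  [rho * psi(kappa + alpha, tau), psi(2.0 * alpha, tau)]])  # (7)
    return mean, Xi @ V @ Xi
```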
#### 2.2.1. Bonds
As discussed above, the interest-rate risk-premium process is a Vasicek model under \(Q\)-measure, hence, in addition to the risky asset, the model provides an equilibrium bond market consistently priced by the Vasicek pricing formula.
To simplify the exposition, we will - similar to the equity index - assume the existence of a single zero-coupon bond, \(B_{u}\), with fixed maturity, \(M_{B}\), coinciding with - or beyond - the investment horizon, \(s\). Since the Vasicek model is a one-factor model, it is well-known that the full (bond) opportunity set is covered by a single bond issue. For now, we will therefore consider the choice of \(M_{B}\) to be arbitrary; the relation between investment horizon and bond maturity will be discussed further below.
It is shown in Appendix A, (33), that bond volatility, \(\sigma_{u}^{B}\), is negative and given by
\[\sigma_{u}^{B}=-\psi_{a}(M_{B}-u)\sigma_{r} \tag{9}\]
where \(\psi_{a}(\cdot)\) is given by (8). Negativity follows from the trivial fact that bond prices decrease with increasing interest rates and vice versa, hence, the bond price-dynamics is given by
\[\frac{dB_{u}}{B_{u}}=(r_{u}+\xi_{u}^{r}\sigma_{u}^{B})du+\sigma_{u}^{B}dW_{u}^{r}.\]
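For later use in the factor exposure of Section 3, the bond volatility (9) is a one-line computation; a small helper sketch (ours):

```python
import numpy as np

def psi(a, tau):
    """psi_a(tau) = (1 - exp(-a*tau)) / a, cf. (8)."""
    return (1.0 - np.exp(-a * tau)) / a

def bond_volatility(u, M_B, a, sigma_r):
    """sigma_u^B = -psi_a(M_B - u) * sigma_r, cf. (9); negative by construction."""
    return -psi(a, M_B - u) * sigma_r
```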
## 3. Portfolio Dynamics
The risk-premium dynamics is an expression of the capital market opportunity set, hence, to derive the risk-return characteristics of a portfolio invested in the capital market, the next step is to define an investment strategy.
As discussed in the introduction, we consider investment strategies pre-committed to a specific asset allocation as time unfolds as practiced by pension providers and similar long-term investors, hence, investment strategies depend on time only. We therefore define an investment strategy, \(\phi(u)\), starting at time \(t\) with investment horizon \(s>t\) as a continuous function,
\[\phi(u)=(b(u),x_{B}(u),x_{S}(u))^{T},\]
on the interval \(t\leq u\leq s\) where \(x_{B}(u)\) is the number of bonds, \(x_{S}(u)\) the number of shares, and \(b(u)\) the holding in the bank account, \(b_{u}\), at time \(u\).
Following [12], the value, \(V_{u}(\phi)\), of the investment strategy, \(\phi\), at time \(s>t\) subject to the self-financing conditions is then given as
\[V_{s}(\phi)=V_{t}+\int_{t}^{s}\big{[}b(u)db_{u}+x_{B}(u)dB_{u}+x_{S}(u)dS_{u}\big{]} \tag{10}\]
where \(V_{t}>0\) is the initial portfolio value. The integral is well-defined since \(\phi\) is trivially adapted and square integrable, hence, the investment strategy, \(\phi\), is attainable and the portfolio value, \(V_{s}(\phi)\), is well-defined.
Next, following [3] the value process is given by
\[V_{u}(\phi)=b(u)b_{u}+x_{B}(u)B_{u}+x_{S}(u)S_{u}\]
which - upon combination with the self-financing condition, (10) - yields the following portfolio dynamics
\[dV_{u}=V_{u}rdu+ b(u)db_{u}+x_{B}(u)dB_{u}+x_{S}(u)dS_{u} \tag{11}\] \[-\big{[}b(u)b_{u}+x_{B}(u)B_{u}+x_{S}(u)S_{u}\big{]}rdu\]
where the shorthand \(V_{u}(\cdot)\equiv V_{u}\) was introduced.
At this stage, the model puts no restrictions on leverage or shorting. In portfolio theory, it is customary to apply a non-negativity constraint to the allocation to each asset - one example is the original model by Markowitz [16] - but it is well known that such constraints are difficult to handle analytically and one must therefore often resort to numerical methods.
A less restrictive interpretation, though, is that the rationale for the no-shorting condition is to preclude portfolio loss beyond the initial capital. In this spirit we therefore rewrite (11) as
\[\frac{dV_{u}}{V_{u}} =rdu+\frac{x_{B}(u)B_{u}}{V_{u}}\left(\frac{dB_{u}}{B_{u}}-rdu \right)+\frac{x_{S}(u)S_{u}}{V_{u}}\left(\frac{dS_{u}}{S_{u}}-rdu\right)\] \[=rdu+\sum_{i=r,S}f_{i}(u)\Big{(}\xi^{i}(u)du+dW_{u}^{i}\Big{)}, \quad V_{u}>0,t\leq u\leq s\] \[=(r_{u}+f_{u}^{T}\xi_{u})du+f_{u}^{T}dW_{u} \tag{12}\]
where the exposure, \(f_{u}=(f_{u}^{r},f_{u}^{S})^{T}\), is given by
\[f(u)=V_{u}^{-1}\begin{pmatrix}x_{B}(u)B_{u}\sigma_{u}^{B}\\ x_{S}(u)S_{u}\sigma_{S}\end{pmatrix}\]
and is the volatility of the allocations to bonds and equity, respectively.
The following Proposition shows that these are sufficient conditions to ensure the investment risk never exceeds total loss of the initial capital:
**Proposition 3.1**.: **(Portfolio Dynamics)** _Given the assumptions of Proposition 2.1, given initial values, \(r_{t},x_{t}\), at time \(t\), and given the investment horizon \(s>t\) and further assuming that the factor exposure, \(f_{u}=(f_{u}^{r},f_{u}^{S})^{T}\), for \(t\leq u\leq s\) is deterministic (depends on time only), then given the initial portfolio value, \(V_{t}\), the horizon portfolio value, \(V_{s}\), is given by_
\[V_{s}=V_{t}\exp\left\{\int_{t}^{s}(r_{u}+f_{u}^{T}\xi_{u})du-\frac{1}{2}\int_ {t}^{s}f_{u}^{T}\mathbf{C}f_{u}du+\int_{t}^{s}f_{u}^{T}dW_{u}\right\} \tag{13}\]
_and is log-Normally distributed with conditional mean_
\[\mathbb{E}\log(V_{s}/V_{t})=\int_{t}^{s}\left\{\epsilon_{0}+\left(\epsilon_{1 }+f_{u}\right)^{T}\left(\bar{\xi}+e^{-\mathbf{\Gamma}(u-t)}\xi_{t}\right)-\frac{1} {2}f_{u}^{T}\mathbf{C}f_{u}\right\}du \tag{14}\]
_and conditional variance_
\[\mathbb{V}\log(V_{s}/V_{t})=\int_{t}^{s}h_{u}^{T}\mathbf{C}h_{u}du \tag{15}\]
_where_
\[h_{u}=f_{u}+\boldsymbol{\xi}\int_{u}^{s}e^{-\boldsymbol{\Gamma}(v-u)}(\epsilon_{1 }+f_{v})dv \tag{16}\]
_and_
\[\epsilon_{0}=\frac{ab-\bar{r}\kappa}{a-\kappa},\epsilon_{1}=\left(\frac{ \sigma_{r}}{a-\kappa},0\right)^{T}.\]
Proof.: See appendix C.
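The horizon moments (14) and (15) can be evaluated by straightforward numerical quadrature for any candidate deterministic exposure. The sketch below is ours (grid size, helper names and the dict-based parameter passing are implementation choices, not the paper's); the instantaneous correlation matrix \(\boldsymbol{C}\) is read off from (7).

```python
import numpy as np
from scipy.integrate import trapezoid

def horizon_moments(f, t, s, r_t, x_t, p, n=400):
    """Quadrature evaluation of the horizon mean (14) and variance (15) of log(V_s/V_t)
    for a deterministic exposure f(u) -> (f^r_u, f^S_u); a sketch, not the paper's code."""
    kappa, alpha, a, b = p['kappa'], p['alpha'], p['a'], p['b']
    sigma_r, sigma_x, sigma_S = p['sigma_r'], p['sigma_x'], p['sigma_S']
    r_bar, x_bar, rho = p['r_bar'], p['x_bar'], p['rho']

    C = np.array([[1.0, rho], [rho, 1.0]])           # instantaneous correlation of W
    Gamma = np.array([kappa, alpha])                 # diagonal of bold Gamma
    Xi = np.array([a - kappa, -sigma_x / sigma_S])   # diagonal of bold xi
    eps0 = (a * b - r_bar * kappa) / (a - kappa)
    eps1 = np.array([sigma_r / (a - kappa), 0.0])
    xi_bar = np.array([a * (r_bar - b) / sigma_r, x_bar / sigma_S])
    xi_t = np.array([(r_t - r_bar) * (a - kappa) / sigma_r, (x_t - x_bar) / sigma_S])

    u = np.linspace(t, s, n)
    fu = np.array([f(ui) for ui in u])                          # (n, 2)
    drift = xi_bar + np.exp(-np.outer(u - t, Gamma)) * xi_t     # xi_bar + e^{-Gamma(u-t)} xi_t
    quad = np.einsum('ni,ij,nj->n', fu, C, fu)                  # f_u^T C f_u
    mean = trapezoid(eps0 + np.sum((eps1 + fu) * drift, axis=1) - 0.5 * quad, u)

    h = np.empty_like(fu)                                       # h_u of (16)
    for i in range(n):
        v = u[i:]
        inner = np.exp(-np.outer(v - u[i], Gamma)) * (eps1 + fu[i:])
        h[i] = fu[i] + Xi * trapezoid(inner, v, axis=0)
    var = trapezoid(np.einsum('ni,ij,nj->n', h, C, h), u)
    return mean, var
```

For example, a constant pure-equity loading can be inspected with `f = lambda u: np.array([0.0, 0.10])`.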
## 4. Portfolio Optimization
Given a factor allocation (investment strategy), \(f_{u}\), Proposition 3.1 provides horizon mean, (14), and horizon variance, (15), for any given investment horizon, \(s\), hence, we have the necessary tools to proceed to search for _the_ factor allocation that maximizes horizon expected return for fixed horizon variance.
It is important to stress that the factor allocation is allowed to change over time, hence, the optimal factor allocation for horizon \(s\) cannot be expected to be an aggregate of optimal factor allocations over sub-periods: Each investment horizon provides a different opportunity set, hence, the optimal factor allocation explicitly depends on the investment horizon.
Before we move to the derivation of the optimality condition, it is convenient first to define the exponential transformation, \(y_{u}\), as
\[y_{u}=\int_{u}^{s}e^{-\boldsymbol{\Gamma}(v-u)}f_{v}dv\quad\Rightarrow\quad \dot{y}_{u}=\frac{\partial y}{\partial u}=-f_{u}+\boldsymbol{\Gamma}y_{u} \tag{17}\]
from which it follows that the factor allocation, \(f_{u}\), can be reconstructed as
\[f_{u}=\boldsymbol{\Gamma}y_{u}-\dot{y}_{u}.\]
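The derivative in (17) is simply the Leibniz integral rule: the moving lower limit contributes \(-f_{u}\), while the \(u\)-dependence of the kernel contributes \(+\boldsymbol{\Gamma}y_{u}\),

\[\dot{y}_{u}=\frac{d}{du}\int_{u}^{s}e^{-\boldsymbol{\Gamma}(v-u)}f_{v}\,dv=-e^{-\boldsymbol{\Gamma}\cdot 0}f_{u}+\boldsymbol{\Gamma}\int_{u}^{s}e^{-\boldsymbol{\Gamma}(v-u)}f_{v}\,dv=-f_{u}+\boldsymbol{\Gamma}y_{u}.\]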
With this definition, we can re-parameterize the horizon mean, (14), to define the integral
\[I=\mathbb{E}\log(V_{s}/V_{t})=\int_{t}^{s}p(y_{u},\dot{y}_{u},u)du \\ =\int_{t}^{s}\Big{\{}\epsilon_{0}+\left(\epsilon_{1}+ \boldsymbol{\Gamma}y_{u}-\dot{y}_{u}\right)^{T}\left(\bar{\xi}+e^{-\boldsymbol {\Gamma}(u-t)}\xi_{t}\right)\\ -\frac{1}{2}(\boldsymbol{\Gamma}y_{u}-\dot{y}_{u})^{T}\boldsymbol {C}(\boldsymbol{\Gamma}y_{u}-\dot{y}_{u})\Big{\}}du \tag{18}\]
and similarly, we can re-parameterize horizon variance, (15), by combining (16) and (17) to define the integral
\[J=\mathbb{V}\log(V_{s}/V_{t})=\int_{t}^{s}q(y_{u},\dot{y}_{u},u)du=\int_{t}^{s }h_{u}^{T}\boldsymbol{C}h_{u}du\]
where \(h_{u}\) is now given by
\[h_{u} =f_{u}+\boldsymbol{\xi}\int_{u}^{s}e^{-\boldsymbol{\Gamma}(v-u) }(\epsilon_{1}+f_{v})dv \tag{19}\] \[=\underbrace{\boldsymbol{\Gamma}y_{u}-\dot{y}_{u}}_{f_{u}}+\int_ {u}^{s}e^{-\boldsymbol{\Gamma}(v-u)}\boldsymbol{\xi}\epsilon_{1}dv+ \boldsymbol{\xi}\underbrace{\int_{u}^{s}e^{-\boldsymbol{\Gamma}(v-u)}f_{v}dv }_{y_{u}}\] \[=\boldsymbol{\Gamma}_{\xi}y_{u}-\dot{y}_{u}+\psi_{\kappa}(s-u) \eta^{r}\]
with \(\boldsymbol{\Gamma}_{\xi}=\boldsymbol{\Gamma}+\boldsymbol{\xi}\), and where \(\eta^{r}=(\sigma_{r},0)^{T}\).
### The Euler-Lagrange Equation
Given this new parameterization, mean-variance optimization is equivalent to maximize \(I\) for fixed \(J\) which - following the original idea of [13] - can be solved by classical methods of Calculus of Variations.
By introducing the Lagrange multiplier, \(\nu\), the mean-variance optimization criterion is therefore recast as optimizing
\[I^{*}=I+\nu J\]
where \(I^{*}\) balances the objective of maximizing the expected return, \(I\), at the 'cost' of variance, \(J\), hence, at maximum, the Lagrange multiplier, \(\nu\), determines the ratio of marginal risk to marginal return. For this ratio to be positive, the Lagrange multiplier therefore must be negative corresponding to the upper branch of the efficient frontier in traditional mean-variance optimization, whereas for \(\nu>0\), the marginal gain is negative corresponding to the lower branch of the efficient frontier.
Furthermore, the limit \(\nu\to-\infty\) is the global minimum for horizon variance whereas \(\nu\to 0-\) is the free optimization of horizon return subject to the maximum-loss restriction discussed in Section 3.
Following [23], \(I^{*}\) is optimized by the (re-parameterized) factor allocations \(y_{u}\) satisfying the Euler-Lagrange equation
\[\frac{\partial p_{u}^{*}}{\partial y_{u}}-\frac{d}{du}\left(\frac{\partial p_ {u}^{*}}{\partial\dot{y}_{u}}\right)=0 \tag{20}\]
where \(p^{*}(y_{u},\dot{y}_{u},u)=p(y_{u},\dot{y}_{u},u)+\nu q(y_{u},\dot{y}_{u},u)\) and \(p(\cdot)\) is defined in (18) and \(q(\cdot)\) is defined in (4).
Care must be taken in setting the boundary conditions: At the initial time, \(u=t\), the optimization is free, hence, the lower boundary condition is a transversality - or natural - condition, whereas the upper limit is dictated by the definition of \(y_{u}\), (17), which by construction must be zero at the upper boundary, i.e., \(y_{s}=0\). In summary, the boundary conditions are
\[\frac{\partial p_{u}^{*}}{\partial\dot{y}_{u}}\Big{|}_{u=t} =0 \text{(lower boundary)} \tag{21a}\] \[y_{u}|_{u=s} =0 \text{(upper boundary)} \tag{21b}\]
for \(u=t\) and \(u=s\).
It follows, that the optimal factor allocation is a solution to the Euler-Lagrange equation subject to these boundary conditions. We summarize the result in the following Theorem:
**Theorem 4.1**.: **(Mean-Variance Optimization)** _Given the assumptions and definitions of Propositions 2.1 and 3.1 then the (deterministic) investment strategy, \(f_{u}=(f_{u}^{r},f_{u}^{S})^{T}\), maximizing the expected nominal return, \(\mathbb{E}\log(V_{s}/V_{t})\), for fixed (constant) horizon variance, \(c=\mathbb{V}\log(V_{s}/V_{t})\), is given by_
\[f_{u}=\boldsymbol{\Gamma}y_{u}-\dot{y}_{u},\quad\text{for }t\leq u\leq s\]
_where \(y_{u}\) is the solution to the inhomogeneous second-order (matrix) differential equation_
\[(1-2\nu)\Big{[}\boldsymbol{C}\ddot{y}_{u}+\boldsymbol{B}\dot{y}_{u}- \boldsymbol{A}y_{u}\Big{]}=g_{u} \tag{22}\]
_and_
\[\boldsymbol{A}=\begin{cases}\gamma_{r}^{2}&0\\ 0&\gamma_{S}^{2}\end{cases}+\rho\begin{cases}0&a_{\nu}\\ a_{\nu}&0\end{cases}\qquad\boldsymbol{B}=\rho\begin{cases}0&b_{\nu}\\ -b_{\nu}&0\end{cases}\]
_with_
\[\gamma_{r}^{2} =\frac{\kappa^{2}-2\nu a^{2}}{1-2\nu} \gamma_{S}^{2} =\frac{\alpha^{2}-2\nu(\alpha^{\prime})^{2}}{1-2\nu}\] \[a_{\nu} =\frac{\alpha\kappa-2\nu a\alpha^{\prime}}{1-2\nu} b_{\nu} =\frac{(\kappa-\alpha)-2\nu(a-\alpha^{\prime})}{1-2\nu}\]
_where \(\alpha^{\prime}=\alpha-\sigma_{x}/\sigma_{S}\) and_
\[g_{u}=-\Big{[}\begin{pmatrix}\kappa\bar{\xi}^{r}\\ \alpha\bar{\xi}^{S}\end{pmatrix}-2\nu\sigma_{r}\begin{pmatrix}1-(\kappa+a)\psi_{\kappa}(s-u)\\ \rho[1-(\kappa+\alpha^{\prime})\psi_{\kappa}(s-u)]\end{pmatrix}\Big{]}\]
_and where \(\nu\), \(-\infty<\nu<0\), is a Lagrange multiplier. Furthermore, \(y_{u}\) satisfies the boundary conditions_
\[b_{0}+\big{(}b_{u}-[\mathbf{\Gamma}-2\nu\mathbf{\Gamma}_{\xi}]y_ {u}+(1-2\nu)\dot{y}_{u}\big{)}\big{|}_{u=t}=0 \text{(lower boundary)}\] \[y_{u}|_{u=s}=0 \text{(upper boundary)}\]
_at times \(t\) and \(s\), respectively, where_
\[\mathbf{\Gamma}_{\xi}=\mathbf{\Gamma}+\mathbf{\xi},\qquad\mathbf{C}b_{0}= \bar{\xi},\qquad\mathbf{C}b_{u}=e^{-\mathbf{\Gamma}(u-t)}\xi_{t}+2\nu\sigma_{ r}\psi_{\kappa}(s-u)\begin{pmatrix}1\\ \rho\end{pmatrix}.\]
_The Lagrange multiplier, \(\nu\), is determined from the normalizing condition_
\[\int_{t}^{s}h_{u}^{T}\mathbf{C}h_{u}du=c\]
_where \(h_{u}\) is given by (19)._
Proof.: See appendix D.
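Although Sections 5 and 6 derive the solution in closed form, Theorem 4.1 can also be checked numerically by treating (22) together with its two boundary conditions as a standard two-point boundary-value problem. The sketch below is ours and relies on `scipy.integrate.solve_bvp`; the coefficient matrices and \(g_{u}\) are taken in the matrix form of Appendix D, and the parameter-dict layout and the grid are implementation choices.

```python
import numpy as np
from scipy.integrate import solve_bvp

def psi(a, tau):
    return (1.0 - np.exp(-a * tau)) / a                               # (8)

def solve_euler_lagrange(nu, t, s, r_t, x_t, p):
    """Numerical solution of the BVP (22) with the boundary conditions of Theorem 4.1;
    a cross-check sketch, not the paper's method of solution."""
    kappa, alpha, a, b = p['kappa'], p['alpha'], p['a'], p['b']
    sigma_r, sigma_x, sigma_S = p['sigma_r'], p['sigma_x'], p['sigma_S']
    r_bar, x_bar, rho = p['r_bar'], p['x_bar'], p['rho']

    C = np.array([[1.0, rho], [rho, 1.0]])
    Gamma = np.diag([kappa, alpha])
    Gamma_xi = np.diag([a, alpha - sigma_x / sigma_S])                 # Gamma + bold xi
    eta_r = np.array([sigma_r, 0.0])
    xi_bar = np.array([a * (r_bar - b) / sigma_r, x_bar / sigma_S])
    xi_t = np.array([(r_t - r_bar) * (a - kappa) / sigma_r, (x_t - x_bar) / sigma_S])

    # coefficient matrices in the matrix form of Appendix D
    A = (Gamma @ C @ Gamma - 2 * nu * Gamma_xi @ C @ Gamma_xi) / (1 - 2 * nu)
    B = ((Gamma @ C - C @ Gamma) - 2 * nu * (Gamma_xi @ C - C @ Gamma_xi)) / (1 - 2 * nu)
    Cinv = np.linalg.inv(C)

    def g(u):
        return 2 * nu * (C - (Gamma_xi @ C + C @ Gamma) * psi(kappa, s - u)) @ eta_r - Gamma @ xi_bar

    def ode(u, z):                                    # z = (y1, y2, y1', y2')
        y, dy = z[:2], z[2:]
        gu = np.array([g(ui) for ui in u]).T
        ddy = Cinv @ (gu / (1 - 2 * nu) - B @ dy + A @ y)
        return np.vstack([dy, ddy])

    def bc(za, zb):
        ya, dya = za[:2], za[2:]
        b0 = Cinv @ xi_bar
        bt = Cinv @ xi_t + 2 * nu * psi(kappa, s - t) * eta_r
        lower = b0 + bt - (Gamma - 2 * nu * Gamma_xi) @ ya + (1 - 2 * nu) * dya
        return np.concatenate([lower, zb[:2]])        # upper boundary: y(s) = 0

    u = np.linspace(t, s, 200)
    sol = solve_bvp(ode, bc, u, np.zeros((4, u.size)))
    y, dy = sol.sol(u)[:2], sol.sol(u)[2:]
    return u, Gamma @ y - dy                          # f_u = Gamma y_u - y_u'
```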
## 5. The Spectral Problem
Theorem 4.1 shows that the (transformed) optimal factor allocation is the solution to an inhomogeneous second-order differential equation, hence, the solution - following standard arguments - is the sum of a particular and the homogeneous solution to (22). In our case, we will consider homogeneous solutions of the form \(y_{u}^{h}=e^{\mathbf{S}(s-u)}k\), where \(\mathbf{S}\in\mathbb{R}^{2\times 2}\), \(k\in\mathbb{R}^{2}\), hence, the characteristic equation becomes a matrix equation
\[P(\mathbf{S})=\mathbf{C}\mathbf{S}^{2}-\mathbf{B}\mathbf{S}-\mathbf{A}=0 \tag{23}\]
(notice the change of sign of \(\mathbf{B}\)) where \(P\) is a second order matrix polynomial with coefficients given by Theorem 4.1. Alas, the characteristic equation cannot be solved by standard (scalar) methods. Be also aware that generally, there will be more than two solutions to (23) but, as will be shown below, under mild additional assumptions, unique pairs of solutions exist which will suffice for our purposes. Following [11] we will refer to solutions of (23) as _solvents_ of \(P\).
In order to determine a suitable pair of solvents, we follow [11] and first define the lambda-matrix, \(\boldsymbol{M}_{\lambda}\), as
\[\boldsymbol{M}_{\lambda}=P(\lambda\mathbb{I})=\mathbf{C}\lambda^{2}-\mathbf{B} \lambda-\mathbf{A} \tag{24}\]
where \(\lambda\in\mathbb{C}\) is a scalar in order to determine the _latent_ roots of \(P\), that is, values of \(\lambda\) of which the lambda matrix, \(\boldsymbol{M}_{\lambda}\), is degenerate:
\[\det\boldsymbol{M}_{\lambda}=\det\big{(}\mathbf{C}\lambda^{2}-\mathbf{B} \lambda-\mathbf{A}\big{)}=0. \tag{25}\]
Since \(\mathbf{A},\mathbf{B},\mathbf{C}\in\mathbb{R}^{2\times 2}\) this amounts to determining the roots of a fourth-order scalar polynomial.
The problem of determining the latent roots can be simplified, though: By the properties of the coefficients of \(\boldsymbol{M}\), cf. Theorem 4.1, and by the properties of the determinant, it follows that
\[\det\boldsymbol{M}_{\lambda}=\det\boldsymbol{M}_{\lambda}^{T}=\det\big{(} \boldsymbol{C}(-\lambda)^{2}-\boldsymbol{B}(-\lambda)-\boldsymbol{A}\big{)}=0,\]
hence, if \(\lambda\) is a latent root, then \(-\lambda\) is also a latent root, that is, (25) is a quadratic (scalar) polynomial in \(\lambda^{2}\). The full set of latent roots can therefore be written as \(\lambda=(\lambda_{1},\lambda_{2},-\lambda_{1},-\lambda_{2})\) where we for now will assume the four roots to be distinct (the exact condition is given in Theorem 5.1 below).
Since by definition, \(\boldsymbol{M}_{\lambda}\) is singular at the latent roots, the equation
\[\boldsymbol{M}_{\lambda_{i}}r_{i}=0 \tag{26}\]
has a non-trivial solution for each latent root. The set \(\{r_{1},\ldots,r_{4}\}\) is called the _right latent vectors_ of \(\boldsymbol{M}_{\lambda}\) and cannot - in general - be assumed to be distinct but by Theorem 4.1 of [11] every linearly independent pair of right latent vectors, \(\{r_{i},r_{j}\}\), provides a (right) solvent,
\[\boldsymbol{S}_{ij}=\boldsymbol{Q}_{ij}\boldsymbol{\Lambda}_{ij}\boldsymbol{Q} _{ij}^{-1}, \tag{27}\]
of \(P\) where \(\boldsymbol{\Lambda}_{ij}=\operatorname{diag}(\lambda_{i},\lambda_{j})\) and \(\boldsymbol{Q}=\{r_{i};r_{j}\}\) is a square matrix of full rank with columns \(r_{i}\) and \(r_{j}\), respectively. A set of solvents is called complete if the eigenvalues exactly coincide with the latent roots.
To see the solvent, (27), is a solution, insert (27) into (23):
\[P(\boldsymbol{S}_{ij}) =\boldsymbol{C}\boldsymbol{Q}_{ij}\boldsymbol{\Lambda}_{ij}^{2} \boldsymbol{Q}_{ij}^{-1}-\boldsymbol{B}\boldsymbol{Q}_{ij}\boldsymbol{\Lambda} _{ij}\boldsymbol{Q}_{ij}^{-1}-\boldsymbol{A}\] \[=\Big{[}\boldsymbol{C}\{\lambda_{i}^{2}r_{i};\lambda_{j}^{2}r_{j }\}-\boldsymbol{B}\{\lambda_{i}r_{i};\lambda_{j}r_{j}\}-\boldsymbol{A}\{r_{i} ;r_{j}\}\big{]}\boldsymbol{Q}_{ij}^{-1}\] \[=\Big{\{}\big{[}\boldsymbol{C}\lambda_{i}^{2}-\boldsymbol{B} \lambda_{i}-\boldsymbol{A}\big{]}r_{i};\big{[}\boldsymbol{C}\lambda_{j}^{2}- \boldsymbol{B}\lambda_{j}-\boldsymbol{A}\big{]}r_{j}\Big{\}}Q_{ij}^{-1}\]
which column by column is zero by (26).
Corollary E.1 provides the condition for latent roots to be distinct as well as provides explicit formulas for latent roots, latent vectors, and solvents. We summarize our findings in the following theorem:
**Theorem 5.1**.: **(Spectral Problem)** _Given the assumptions of Theorem 4.1 and if the discriminant_
\[D=(1-\rho^{2})\big{[}\gamma_{r}^{2}-\gamma_{S}^{2}\big{]}^{2}+\rho^{2}\big{[} \gamma_{r}^{2}+\gamma_{S}^{2}-(2a_{\nu}+b_{\nu}^{2})\big{]}^{2}-\rho^{2}(1-\rho ^{2})(4a_{\nu}+b_{\nu}^{2})b_{\nu}^{2}\]
_is non-zero, then there exists a pair of latent roots, \((\lambda_{1},\lambda_{2})\in\mathbb{C}^{2}\), such that the set of solvents, \(\boldsymbol{S}_{1},\boldsymbol{S}_{2}\), of (23) is real and complete and is given by_
\[\boldsymbol{S}_{1}=\boldsymbol{Q}_{1}\boldsymbol{\Lambda}\boldsymbol{Q}_{1}^ {-1},\qquad\boldsymbol{S}_{2}=-\boldsymbol{Q}_{2}\boldsymbol{\Lambda} \boldsymbol{Q}_{2}^{-1},\qquad\boldsymbol{S}_{1},\boldsymbol{S}_{2}\in \mathbb{R}^{2\times 2}\]
_where \(\boldsymbol{\Lambda}=\operatorname{diag}(\lambda_{1},\lambda_{2})\in\mathbb{ C}^{2\times 2}\) and \(\boldsymbol{Q}_{1},\boldsymbol{Q}_{2}\in\mathbb{C}^{2\times 2}\), are square matrices where columns are right-latent vectors of \((\lambda_{1},\lambda_{2})\) and \((-\lambda_{1},-\lambda_{2})\), respectively._
Proof.: This follows directly from Theorems 4.1 and 4.5 in [11] and Corollary E.1.
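Numerically, the construction in Theorem 5.1 can be carried out without the explicit formulas of Corollary E.1 by linearizing the quadratic lambda-matrix problem (25)-(26) into a \(4\times 4\) generalized eigenvalue problem (a standard companion-form device, not taken from the paper) and grouping the latent roots by the sign of their real parts. A sketch (ours), assuming the latent roots have non-zero real parts:

```python
import numpy as np
from scipy.linalg import eig

def solvents(A, B, C):
    """Pair of solvents of P(S) = C S^2 - B S - A = 0, cf. (23), built from the
    latent roots and right latent vectors; a numerical sketch of Theorem 5.1."""
    I2, Z2 = np.eye(2), np.zeros((2, 2))
    # linearization: [[0, I], [A, B]] z = lambda [[I, 0], [0, C]] z with z = (r, lambda*r)
    lam, Z = eig(np.block([[Z2, I2], [A, B]]), np.block([[I2, Z2], [Z2, C]]))
    order = np.argsort(-lam.real)                # roots come in +/- pairs; split by sign
    lam, R = lam[order], Z[:2, order]            # top half of z is the right latent vector
    S1 = (R[:, :2] @ np.diag(lam[:2]) @ np.linalg.inv(R[:, :2])).real
    S2 = (R[:, 2:] @ np.diag(lam[2:]) @ np.linalg.inv(R[:, 2:])).real
    return S1, S2, lam

# self-check: both solvents should satisfy C S^2 - B S - A = 0 up to round-off, e.g.
#   S1, S2, lam = solvents(A, B, C)
#   assert np.allclose(C @ S1 @ S1 - B @ S1 - A, 0.0, atol=1e-8)
```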
## 6. General Solution
By standard arguments, the transformed optimal factor allocation, \(y_{u}\), is the sum,
\[y_{u}=y_{u}^{p}+y_{u}^{h},\]
of a particular, \(y_{u}^{p}\), and the homogeneous, \(y_{u}^{h}\), solutions to (22), respectively. Theorem 5.1 provides the homogeneous solution; the particular solution and the implications of the boundary conditions, (21a) and (21b), are given in Appendix F. The general solution is stated in the following Theorem:
**Theorem 6.1**.: **(General Solution)** _Given the assumptions of Theorem 4.1 and Theorem 5.1 then the optimal factor allocation, \(f_{u}\), is given by_
\[f_{u}=(\boldsymbol{\Gamma}k_{1}+k_{2})+(\boldsymbol{\Gamma}+\boldsymbol{S}_{1 })e^{\boldsymbol{S}_{1}(s-u)}q_{1t}+(\boldsymbol{\Gamma}+\boldsymbol{S}_{2})e ^{\boldsymbol{S}_{2}(s-u)}q_{2t},\quad t\leq u\leq s \tag{28}\]
_where \(k_{1},k_{2}\in\mathbb{R}^{2}\) are given by_
\[k_{2}=\left(\frac{\sigma_{r}}{\kappa-a},0\right)^{T}\qquad(1-2\nu)\boldsymbol{ A}k_{1}=\begin{pmatrix}\kappa\bar{\xi}^{r}\\ \alpha\bar{\xi}^{S}\end{pmatrix}+\frac{\sigma_{r}}{a-\kappa}\begin{pmatrix} \kappa-2\nu a\\ \rho(\alpha-2\nu\alpha^{\prime})\end{pmatrix}\]
_and \(q_{1t},q_{2t}\in\mathbb{R}^{2}\) as the solution to_
\[\begin{cases}\mathbb{I}&\mathbb{I}\\ \boldsymbol{D}_{1}e^{\boldsymbol{S}_{1}(s-t)}&\boldsymbol{D}_{2}e^{\boldsymbol {S}_{2}(s-t)}\end{cases}\begin{pmatrix}q_{1t}\\ q_{2t}\end{pmatrix}=\begin{pmatrix}-k_{1}\\ \boldsymbol{C}^{-1}(\bar{\xi}+\xi_{t})-(\boldsymbol{\Gamma}k_{1}+k_{2})+2\nu( \boldsymbol{\Gamma}_{\xi}k_{1}+k_{2})\end{pmatrix}.\]
_where \(\boldsymbol{D}_{1}=\boldsymbol{\Gamma}-2\nu\boldsymbol{\Gamma}_{\xi}+(1-2\nu )\boldsymbol{S}_{1}\) and \(\boldsymbol{D}_{2}=\boldsymbol{\Gamma}-2\nu\boldsymbol{\Gamma}_{\xi}+(1-2\nu )\boldsymbol{S}_{2}\)._
Proof.: See Appendix F.
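Given the two solvents, the explicit strategy (28) is assembled by solving the \(4\times 4\) linear system of Theorem 6.1 for \(q_{1t},q_{2t}\). A sketch (ours), with the solvents supplied e.g. by the spectral construction above:

```python
import numpy as np
from scipy.linalg import expm

def optimal_exposure(u_grid, nu, t, s, S1, S2, r_t, x_t, p):
    """Assemble the optimal factor allocation f_u of Theorem 6.1 from the solvents S1, S2;
    a sketch - the constants follow the theorem, variable names are ours."""
    kappa, alpha, a, b = p['kappa'], p['alpha'], p['a'], p['b']
    sigma_r, sigma_x, sigma_S = p['sigma_r'], p['sigma_x'], p['sigma_S']
    r_bar, x_bar, rho = p['r_bar'], p['x_bar'], p['rho']
    alpha_p = alpha - sigma_x / sigma_S                              # alpha'

    C = np.array([[1.0, rho], [rho, 1.0]])
    Gamma = np.diag([kappa, alpha])
    Gamma_xi = np.diag([a, alpha_p])
    xi_bar = np.array([a * (r_bar - b) / sigma_r, x_bar / sigma_S])
    xi_t = np.array([(r_t - r_bar) * (a - kappa) / sigma_r, (x_t - x_bar) / sigma_S])
    A = (Gamma @ C @ Gamma - 2 * nu * Gamma_xi @ C @ Gamma_xi) / (1 - 2 * nu)

    k2 = np.array([sigma_r / (kappa - a), 0.0])
    rhs_k1 = (np.array([kappa * xi_bar[0], alpha * xi_bar[1]])
              + sigma_r / (a - kappa) * np.array([kappa - 2 * nu * a,
                                                  rho * (alpha - 2 * nu * alpha_p)]))
    k1 = np.linalg.solve((1 - 2 * nu) * A, rhs_k1)

    D1 = Gamma - 2 * nu * Gamma_xi + (1 - 2 * nu) * S1
    D2 = Gamma - 2 * nu * Gamma_xi + (1 - 2 * nu) * S2
    M = np.block([[np.eye(2), np.eye(2)],
                  [D1 @ expm(S1 * (s - t)), D2 @ expm(S2 * (s - t))]])
    rhs = np.concatenate([-k1,
                          np.linalg.solve(C, xi_bar + xi_t) - (Gamma @ k1 + k2)
                          + 2 * nu * (Gamma_xi @ k1 + k2)])
    q = np.linalg.solve(M, rhs)
    q1, q2 = q[:2], q[2:]

    f = [(Gamma @ k1 + k2)
         + (Gamma + S1) @ expm(S1 * (s - u)) @ q1
         + (Gamma + S2) @ expm(S2 * (s - u)) @ q2 for u in u_grid]
    return np.array(f)                                               # (len(u_grid), 2)
```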
We see that the optimal strategy is a linear combination of exponentials of the two solvents, \(\boldsymbol{S}_{1}\) and \(\boldsymbol{S}_{2}\). To gain a little more insight into the structure of the solution, from Theorem 5.1 we write for an arbitrary term
\[e^{\boldsymbol{S}(s-u)}=\boldsymbol{Q}\begin{bmatrix}e^{\lambda_{i}(s-u)}&0\\ 0&e^{\lambda_{j}(s-u)}\end{bmatrix}\boldsymbol{Q}^{-1},\]
that is, the optimal factor allocation explicitly depends on the investment horizon when the inverse of (at least one of) the latent roots, \(|\lambda_{i}|^{-1}\), is of the order of the investment horizon.
Elaborating on this, consider two investors starting at different times \(t_{2}>t_{1}\), respectively, but with the same investment horizon date, \(s\): The first investor would commit to a deterministic investment strategy at \(t_{1}\) from prevailing market conditions at that time, that is, the state of the market as expressed by \(\xi_{t_{1}}\). When the second investor enters the market at \(t_{2}\), market conditions would have changed and the investor would commit to a different investment strategy than the first investor even if they agree on model and parameters, hence, are solving the same spectral problem.
Finally, a particular property of the model is that the optimization is over a complete bond market. It is therefore an endogenous outcome that in the limit of infinite risk aversion, \(\nu\to-\infty\), the optimal factor strategy is to buy (and hold) the zero-coupon bond maturing at the investment horizon to limit horizon variance to zero, as stated in the following Corollary:
**Corollary 6.2**.: **(Infinite Risk-Aversion)** _Given the assumptions of Theorem 6.1 then the optimal factor allocation, \(f_{u}^{\infty}\), in the limit of infinite risk-aversion, \(\nu\to-\infty\), is given by_
\[f_{u}^{\infty}=\begin{pmatrix}-\psi_{a}(s-u)\sigma_{r}\\ 0\end{pmatrix}\]
_and coincides with buy-and-holding a zero-coupon bond maturing at \(s\) where \(\psi_{a}(\cdot)\) is given by (8)._
Proof.: See Appendix F.1.
## 7. Discussion
Efficient frontiers are illustrated in Figure 1 for two illustrative sets of capital market assumptions, cf. Table 1: The first set of parameters (left) assumes slow mean-reversion in the equity risk-premium, hence, holding equity for longer periods of time is rewarded as the mean-reversion manifests itself. The second set (right) assumes fast (no) mean-reversion in the equity risk-premium, hence, there is no advantage to holding equity over longer periods of time compared to shorter periods.
In either case, the efficient frontiers start at the same point for zero volatility, since zero horizon volatility can only be realized by allocating to a zero-coupon bond maturing at the investment horizon. The variation across investment horizons reflects that the bond risk-premium, as given by the (Vasicek) term structure, differs by maturity, whereas the starting point is independent of the equity risk-premium.
For less risk-averse investors, the risk-return trade-off is markedly different depending on whether the equity risk-premium mean-reverts slowly or not: In the latter case (fast mean-reversion), short-term and long-term investors are essentially offered the same risk-return trade-off, hence, the efficient frontiers converge to the same point.
If the equity risk-premium mean-reverts slowly, long-term investors are rewarded by the ability to hold risky positions with a much better risk-return trade-off than that offered to shorter-term investors: The slope of the efficient frontier is particularly steep at small levels of risk, and it is clear that long-term investors would benefit materially from moving from no risk to just a little risk in the portfolio. This potentially has large implications for pension funds and other asset managers obliged to provide stable income streams, because accepting just a minor increase in risk materially brings down the cost of such products.
### Factor Allocation
The deterministic factor allocation over the investment horizon is illustrated in Figure 2 for three levels of risk aversion set by Lagrange multiplier \(\nu=-10,-1,-0.1\) assuming slow (left) and fast (right) mean-reversion in the equity risk-premium.
Figure 1. Efficient Frontier by Investment Horizon. _Annualized horizon volatility vs annualized horizon mean return for investment horizons 10Y, 20Y, 30Y, and 50Y. Each efficient frontier is parametrized by the Lagrange multiplier, \(\nu\), and is plotted for the interval \(\nu\in(-\infty,0)\). Illustrative parameters are given in Table 1. (Left) Slow mean-reversion in the equity risk-premium (\(\alpha=0.01\)). (Right) Fast mean-reversion in the equity risk-premium (\(\alpha=0.25\)); all other parameters unchanged._
Starting with the case of fast mean-reversion (right) in the equity risk-premium, we see that the allocation to equity is, for all practical purposes, constant over the investment horizon, in support of the well-known practice of re-balancing portfolio risk over time. For bonds, this is only true for low levels of risk-aversion, when terminal volatility is high. For higher levels of risk-aversion, the allocation to bonds decreases in absolute terms with the investment horizon. Bond volatility is proportional to bond duration, hence, the declining volatility is a reflection of some horizon variance being hedged by the matching bond.
For slow mean-reversion (left) in the equity risk-premium, first notice that equity levels are generally higher than in the 'fast' case: This is because the negative serial correlation introduced by the mean-reverting equity risk-premium results in a narrower distribution of equity returns, hence, a higher level of equity can be sustained relative to the alternative case. Moreover, the equity allocation tends to decline over time when the equity risk-premium mean-reverts slowly.
This is in contrast to the bond allocations, which are more or less identical to those in the alternative case. Also in this case, bond volatility declines over time as horizon volatility is hedged more and more by bonds with increasing risk-aversion.
## 8. Conclusion
Being a long-term investor has become an argument in itself for holding larger allocations to risky assets, despite little theoretical support for this claim. In many applications, it is argued that investor preferences depend on investor age, hence, such arguments have more the character of postulates than of a rationale as to why capital markets would provide a different opportunity set to investors with different investment horizons.
In contrast, the objective of this paper is to formulate a theoretical foundation in which such investment beliefs are reflected directly in the capital market opportunity set and to derive proper mean-variance efficient asset allocations from these assumptions. It was shown that long-term investors can indeed justify higher allocations to risky assets if risk-premia are slowly mean-reverting, whereas if risk-premia are not, there is no particular argument that being a long-term investor poses a different allocation problem than that faced by any other investor.
Figure 2. Factor Allocation by Risk-Aversion
## Appendix A Capital Market Model
Given time-\(t\) values of the variables: \((r_{t},x_{t},S_{t})\), then for \(s>t\), we quote select integral representations of (1) to (3) without proof from [14]
\[r_{s} =\bar{r}+e^{-\kappa(s-t)}(r_{t}-\bar{r})+\sigma_{r}\int_{t}^{s}e^{-\kappa(s-u)}dW_{u}^{r}, \tag{29a}\] \[x_{s} =\bar{x}+e^{-\alpha(s-t)}(x_{t}-\bar{x})-\sigma_{x}\int_{t}^{s}e^{-\alpha(s-u)}dW_{u}^{S}, \tag{29b}\]
with mean
\[\mathbb{E}\,r_{s} =\bar{r}+e^{-\kappa(s-t)}(r_{t}-\bar{r}) \tag{30a}\] \[\mathbb{E}\int_{t}^{s}r_{u}du =\bar{r}(s-t)+\psi_{\kappa}(s-t)(r_{t}-\bar{r}) \tag{30b}\]
and variance
\[\mathbb{V}\,r_{s} =\sigma_{r}^{2}\psi_{2\kappa}(s-t) \tag{31a}\] \[\mathbb{V}\int_{t}^{s}r_{u}du =\sigma_{r}^{2}\upsilon_{\kappa}(s-t) \tag{31b}\]
\begin{table}
\begin{tabular}{l|c|c|c} Description & Parameter & \multicolumn{2}{c}{Mean-Reversion} \\ & & Slow & Fast (None) \\ \hline Interest Rate - P-measure & & & \\ \hline Mean-Reversion Strength & \(\kappa\) & 0.05 & 0.05 \\ Mean Level & \(\bar{r}\) & 0.02 & 0.02 \\ Volatility & \(\sigma_{r}\) & 0.01 & 0.01 \\ \hline Interest Rate - Q-measure & & & \\ \hline Mean-Reversion Strength & \(a\) & 0.04 & 0.04 \\ Mean Level & \(b\) & 0.03 & 0.03 \\ \hline Equity Risk-Premium & & & \\ \hline Mean-Reversion Strength & \(\alpha\) & 0.01 & 0.25 \\ Mean Level & \(\bar{x}\) & 0.04 & 0.04 \\ Volatility & \(\sigma_{x}\) & 0.007 & 0.007 \\ \hline Equity & & & \\ \hline Volatility & \(\sigma_{S}\) & 0.15 & 0.15 \\ Correlation With Interest Rate & \(\rho\) & 0.25 & 0.25 \\ \end{tabular} _Illustrative capital market parameters for slow (\(\alpha=0.01\)) and fast (\(\alpha=0.25\)) mean-reversion in the equity risk-premium. Notice that interest-rate parameters are chosen to ensure a negative risk-premium for bonds, cf. (9)._
\end{table}
Table 1. Illustrative Capital Market Parameters
where
\[\psi_{\alpha}(s-t)=\int_{t}^{s}e^{-\alpha(u-t)}du=\frac{1}{\alpha} \big{(}1-e^{-\alpha(s-t)}\big{)} \tag{32}\] \[\upsilon_{\alpha}(s-t)=\int_{t}^{s}\psi_{\alpha}^{2}(u-t)du=\frac{ (s-t)-2\psi_{\alpha}(s-t)+\psi_{2\alpha}(s-t)}{\alpha^{2}}.\]
The full covariance matrix is given in [14].
### Nominal Bonds
Following [14], the price, \(p_{t}(s)\), at time \(t\) of a (nominal) zero-coupon bond maturing at time \(s\) is given by the Vasicek formula
\[p_{t}(s)=e^{-R_{t}(s)(s-t)}\]
where the zero-coupon rate, \(R_{t}(s)\), is given by
\[R_{t}(s)=b+\frac{r_{t}-b}{s-t}\psi_{a}(s-t)-\frac{\sigma_{r}^{2}}{2(s-t)} \upsilon(a,s-t)\]
and
\[\upsilon(a,\tau)=\frac{\tau-2\psi_{a}(\tau)+\psi_{2a}(\tau)}{a^{2}}.\]
By Ito's Lemma the temporal dynamic is
\[dp_{t}(s)=p_{t}(s)\big{\{}[r_{t}-\lambda_{t}^{r}\psi_{a}(s-t) \sigma_{r}]dt-\psi_{a}(s-t)\sigma_{r}dW_{t}^{r}\big{\}},\]
hence, a positive excess return corresponds to a negative risk premium, \(\lambda^{r}\).
If we assume the existence of a nominal constant-maturity bond with maturity \(M_{B}\) or - for most practical applications - the existence of a bond index with an approximately constant maturity, then following [14], the dynamics of the value, \(B_{t}\), of this bond is given by
\[dB_{t}=B_{t}\big{\{}[r-\lambda_{t}^{r}\psi_{a}(M_{B})\sigma_{r}] dt-\psi_{a}(M_{B})\sigma_{r}dW_{t}^{r}\big{\}}. \tag{33}\]
Comparing to Proposition 3.1 reveals the factor exposure, \(f_{u}^{B}\equiv f_{B}\), is constant and given by
\[f_{B}=(-\psi_{a}^{B}\sigma_{r},0)^{T},\]
where we have introduced the shorthand \(\psi_{a}^{B}\equiv\psi_{a}(M_{B})\), hence,
\[\log B_{s}/B_{t} =\int_{t}^{s}(r_{u}+f_{B}^{T}\lambda_{u})du-\frac{1}{2}\int_{t}^{s}f_{B}^{T}\mathbf{C}f_{B}du+\int_{t}^{s}f_{B}^{T}dW_{u} \tag{34}\] \[=\int_{t}^{s}r_{u}du-\psi_{a}^{B}\sigma_{r}\int_{t}^{s}\lambda_{u}^{r}du-\frac{1}{2}(\psi_{a}^{B})^{2}\sigma_{r}^{2}(s-t)-\psi_{a}^{B}\sigma_{r}\int_{t}^{s}dW_{u}^{r}.\]
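A direct transcription (ours) of the Vasicek zero-coupon pricing formula above, e.g. for checking the zero-volatility starting points of the efficient frontiers discussed in Section 7:

```python
import numpy as np

def psi(a, tau):
    return (1.0 - np.exp(-a * tau)) / a                              # (32)

def upsilon(a, tau):
    return (tau - 2.0 * psi(a, tau) + psi(2.0 * a, tau)) / a**2

def vasicek_zero_coupon(r_t, tau, a, b, sigma_r):
    """Zero-coupon rate R_t(t+tau) and price p_t(t+tau) under the Q-dynamics."""
    R = b + (r_t - b) * psi(a, tau) / tau - 0.5 * sigma_r**2 * upsilon(a, tau) / tau
    return R, np.exp(-R * tau)
```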
## Appendix B Proof of Proposition 2.1
_Proof_. By direct insertion of (29a) and (29b) into (5) it follows
\[\xi_{s}=\begin{pmatrix}\big{[}(a-\kappa)\{\bar{r}+e^{-\kappa(s-t)}(r_{t}-\bar {r})+\sigma_{r}\int_{t}^{s}e^{-\kappa(s-u)}dW_{u}^{r}\}+\kappa\bar{r}-ab\big{]} \,/\sigma_{r}\\ \big{\{}\bar{x}+e^{-\alpha(s-t)}(x_{t}-\bar{x})-\sigma_{x}\int_{t}^{s}e^{- \alpha(s-u)}dW_{u}^{S}\big{\}}\,/\sigma_{S}\end{pmatrix} \tag{35}\]
which by defining
\[\mathbf{\Gamma}=\begin{cases}\kappa&0\\ 0&\alpha\end{cases}\]
is rewritten into
\[\xi_{s}=\underbrace{\begin{pmatrix}a(\bar{r}-b)/\sigma_{r}\\ \bar{x}/\sigma_{S}\end{pmatrix}}_{\bar{\xi}}+e^{-\boldsymbol{\Gamma}(s-t)} \underbrace{\begin{pmatrix}(a-\kappa)(r_{t}-\bar{r})/\sigma_{r}\\ (x_{t}-\bar{x})/\sigma_{S}\end{pmatrix}}_{\xi_{t}}\\ +\underbrace{\begin{cases}(a-\kappa)&0\\ 0&-\sigma_{x}/\sigma_{S}\end{cases}}_{\boldsymbol{\xi}}\int_{t}^{s}e^{- \boldsymbol{\Gamma}(s-u)}dW_{u}\]
from which Proposition 2.1 follows.
Since the integrand of the stochastic integral is deterministic, it follows that \(\xi_{s}\) is Normally distributed with conditional mean
\[\mathbb{E}\,\xi_{s|t}=\bar{\xi}+e^{-\boldsymbol{\Gamma}(s-t)}\xi_{t}\]
and conditional variance
\[\mathbb{V}\,\xi_{s|t}=\boldsymbol{\xi}\int_{t}^{s}e^{-\boldsymbol{\Gamma}(s- u)}\boldsymbol{C}e^{-\boldsymbol{\Gamma}(s-u)}du\,\boldsymbol{\xi}=\boldsymbol{ \xi}\boldsymbol{V}_{s|t}^{\xi}\boldsymbol{\xi}\]
where from the definition (32)
\[\boldsymbol{V}_{s|t}^{\xi}=\begin{cases}\psi_{2\kappa}(s-t)&\rho\psi_{\kappa+\alpha}(s-t)\\ \rho\psi_{\kappa+\alpha}(s-t)&\psi_{2\alpha}(s-t)\end{cases}.\]
For \(s\to\infty\), we see that \(\bar{\xi}\) indeed is the asymptotic mean whereas the asymptotic variance, \(\mathbb{V}\,\xi_{\infty}\), is given by \(\mathbb{V}\,\xi_{\infty}=\boldsymbol{\xi}\boldsymbol{V}_{\infty}^{\xi} \boldsymbol{\xi}\) with
\[\boldsymbol{V}_{\infty}^{\xi}=\begin{cases}1/(2\kappa)&\rho/(\kappa+\alpha) \\ \rho/(\kappa+\alpha)&1/(2\alpha)\end{cases}. \tag{36}\]
\(\square\)
## Appendix C Proof of Proposition 3.1
_Proof_. Let \(g(x)=e^{x}\) and for \(s\geq t\) let
\[X_{s}=\int_{t}^{s}(r_{u}+f_{u}^{T}\xi_{u})du-\frac{1}{2}\int_{t}^{s}f_{u}^{T} \boldsymbol{C}f_{u}du+\int_{t}^{s}f_{u}^{T}dW_{u} \tag{37}\]
then it is straightforward by the multi-dimensional Ito's Lemma, cf. Proposition 4.18 of [3], to show that the derivative of \(V_{s}=V_{t}\,g(X_{s})\) is given by
\[dV_{s}=V_{s}\Big{\{}(r_{s}+f_{s}^{T}\xi_{s})ds+f_{s}^{T}dW_{s}\Big{\}}. \tag{38}\]
Since \(V_{s}=V_{t}g(X_{s})\) and since (38) is identical to (12), it follows that \(V_{t}g(X_{s})\) is the solution to (12), hence
\[V_{s}=V_{t}\exp\left\{\int_{t}^{s}(r_{u}+f_{u}^{T}\xi_{u})du-\frac{1}{2}\int_ {t}^{s}f_{u}^{T}\boldsymbol{C}f_{u}du+\int_{t}^{s}f_{u}^{T}dW_{u}\right\}\]
as stated. \(\square\)
### Mean and Variance
From the definition of \(X_{s}\), (37), first notice that the last integral is a (sum of) Ito-integrals over a deterministic integrand. Second, it follows from (29a), (35), and (37), that the first term of \(X_{s}\) is (a sum of) Ito-integrals of the structure
\[I=\int_{t}^{s}g(u)du\int_{t}^{u}e^{-\alpha(u-v)}dW_{v}\]
where \(g(u)\) is a continuous, deterministic function and \(W_{v}\) a Brownian motion. By interchanging the order of integration, we find
\[I=\int_{t}^{s}dW_{v}\int_{v}^{s}g(u)e^{-\alpha(u-v)}du=\int_{t}^{s}G(v)dW_{v}\]
where \(G(v)\) also is a continuous, deterministic function, hence, the first term has the same structure as the last term.
It is well known, [3], that each individual integral is Normally distributed, hence, \(X_{s}\) is also normally distributed with mean
\[\mathbb{E}\left(X_{s}|X_{t}\right)= X_{t}+\int_{t}^{s}(r_{u}+f_{u}^{T}\xi_{u})du-\frac{1}{2}\int_{t}^{s}f_{u }^{T}\mathbf{C}f_{u}du\]
and - by the Ito isometry - variance
\[\mathbb{V}\left(X_{s}|X_{t}\right)= \int_{t}^{s}f_{u}^{T}\mathbf{C}f_{u}du.\]
Furthermore, from Proposition 2.1 and upon defining
\[\epsilon_{0}=\frac{ab-\bar{r}\kappa}{a-\kappa},\epsilon_{1}=\left(\frac{ \sigma_{r}}{a-\kappa},0\right)^{T}\]
we can rewrite (29a) as
\[r_{u} =\bar{r}+e^{-\kappa(u-t)}(r_{t}-\bar{r})+\sigma_{r}\int_{t}^{u}e^ {-\kappa(u-v)}dW_{v}^{r}\] \[=\epsilon_{0}+\epsilon_{1}^{T}\left(\bar{\xi}+e^{-\mathbf{\Gamma}(u- t)}\xi_{t}\right)+\epsilon_{1}^{T}\int_{t}^{u}e^{-\mathbf{\Gamma}(u-v)}\mathbf{\xi} dW_{v}\]
which upon insertion into (37) yields
\[X_{s}= \int_{t}^{s}\Big{\{}\underbrace{\left[\epsilon_{0}+\epsilon_{1} ^{T}\left(\bar{\xi}+e^{-\mathbf{\Gamma}(u-t)}\xi_{t}\right)+\epsilon_{1}^{T}\int_{ t}^{u}e^{-\mathbf{\Gamma}(u-v)}\mathbf{\xi}dW_{v}\right]}_{r_{u}}+f_{u}^{T}\xi_{u}\Big{\}}du\] \[\qquad\qquad-\frac{1}{2}\int_{t}^{s}f_{u}^{T}\mathbf{C}f_{u}du+\int_ {t}^{s}f_{u}^{T}dW_{u}\] \[= \int_{t}^{s}\left\{\epsilon_{0}+\left(\epsilon_{1}+f_{u}\right) ^{T}\left(\bar{\xi}+e^{-\mathbf{\Gamma}(u-t)}\xi_{t}(t)\right)-\frac{1}{2}f_{u}^{ T}\mathbf{C}f_{u}\right\}du\] \[\qquad\qquad+\int_{t}^{s}\left[(\epsilon_{1}+f_{u})^{T}\int_{t}^ {u}e^{-\mathbf{\Gamma}(u-v)}\mathbf{\xi}dW_{v}\right]du+\int_{t}^{s}f_{u}^{T}dW_{u}.\]
By interchanging the order of integration of the double integral
\[\int_{t}^{s}\left[\int_{t}^{u}(\epsilon_{1}+f_{u})^{T}e^{-\mathbf{\Gamma}(u-v)} \mathbf{\xi}dW_{v}\right]du=\int_{t}^{s}\left[\int_{u}^{s}(\epsilon_{1}+f_{v})^{T }e^{-\mathbf{\Gamma}(v-u)}\mathbf{\xi}dv\right]dW_{u}\]
we find
\[X_{s}=\int_{t}^{s}\left\{\epsilon_{0}+(\epsilon_{1}+f_{u})^{T} \left(\bar{\xi}+e^{-\boldsymbol{\Gamma}(u-t)}\xi_{t}(t)\right)-\frac{1}{2}f_{u}^ {T}\boldsymbol{C}f_{u}\right\}du\\ +\int_{t}^{s}\left\{f_{u}^{T}+\int_{u}^{s}(\epsilon_{1}+f_{v})^{ T}e^{-\boldsymbol{\Gamma}(v-u)}\boldsymbol{\xi}dv\right\}dW_{u}.\]
It follows that \(\log(V_{s}/V_{t})=X_{s}\) is normally distributed with mean
\[\mathbb{E}\log(V_{s}/V_{t})=\int_{t}^{s}\left\{\epsilon_{0}+\left(\epsilon_{1} +f_{u}\right)^{T}\left(\bar{\xi}+e^{-\boldsymbol{\Gamma}(u-t)}\xi_{t}(t) \right)-\frac{1}{2}f_{u}^{T}\boldsymbol{C}f_{u}\right\}du\]
and variance
\[\mathbb{V}\log(V_{s}/V_{t})=\int_{t}^{s}h_{u}^{T}\boldsymbol{C}h_{u}du\]
where
\[h_{u}=f_{u}+\boldsymbol{\xi}\int_{u}^{s}e^{-\boldsymbol{\Gamma}(v-u)}( \epsilon_{1}+f_{v})dv\]
where it was utilized that \(\boldsymbol{\Gamma}\) and \(\boldsymbol{\xi}\) commute. \(\square\)
### Cash Only
In the special case of 'cash only', that is, of no investment strategy (\(f_{u}\equiv 0\)), the conditional mean, \(m^{0}_{s|t}\), is given by (C.1):
\[m^{0}_{s|t}=\int_{t}^{s}\left\{\epsilon_{0}+\epsilon_{1}^{T}(\bar{\xi}+e^{-\boldsymbol{\Gamma}(u-t)}\xi_{t})\right\}du=\bar{r}(s-t)+\psi_{\kappa}(s-t)(r_{t}-\bar{r})\]
where \(\psi_{\kappa}(\cdot)\) is given by (8) and we have recovered (30b).
Furthermore, the conditional variance, \(v^{0}_{s|t}\), is given by (C.1) with
\[h_{u}^{0}=(a-\kappa)\int_{u}^{s}e^{-\kappa(v-u)}\epsilon_{1}^{r}dv=\sigma_{r}\psi_{\kappa}(s-u),\]
hence,
\[v^{0}_{s|t}=\sigma_{r}^{2}\int_{t}^{s}\psi_{\kappa}^{2}(s-u)du=\frac{\sigma_{r}^{2}}{\kappa^{2}}\left((s-t)-2\psi_{\kappa}(s-t)+\psi_{2\kappa}(s-t)\right)\]
where we have recovered (31b). \(\square\)
## Appendix D Proof of Theorem 4.1
_Proof_. All vector and matrix manipulations follow the Jacobian - or numerator - formulation. First, notice that from the definition of \(p_{u}\), (18), it follows that
\[\frac{\partial p}{\partial y}=(\boldsymbol{\Gamma}l_{u})^{T}\quad\text{and} \quad\frac{\partial p}{\partial\dot{y}}=-l_{u}^{T} \tag{39}\]
where it was utilized that
\[l_{u}=\bar{\xi}+e^{-\boldsymbol{\Gamma}(u-t)}\xi_{t}-\boldsymbol{C}( \boldsymbol{\Gamma}y_{u}-\dot{y}_{u})\]
satisfies the relation
\[\boldsymbol{\Gamma}l_{u}+\dot{l}_{u}=\boldsymbol{C}\ddot{y}_{u}+(\boldsymbol{ \Gamma}\boldsymbol{C}-\boldsymbol{C}\boldsymbol{\Gamma})\dot{y}_{u}- \boldsymbol{\Gamma}\boldsymbol{C}\boldsymbol{\Gamma}y_{u}+\boldsymbol{\Gamma} \bar{\xi}.\]
Similarly, from the definition of \(q_{u}\), (4), it follows that
\[\frac{\partial q}{\partial y}=2\big{(}\boldsymbol{\Gamma}_{\xi}\boldsymbol{C}h _{u}\big{)}^{T}\quad\text{and}\quad\frac{\partial q}{\partial\dot{y}}=-2( \boldsymbol{C}h_{u})^{T} \tag{40}\]
where \(\boldsymbol{\Gamma}_{\xi}=\boldsymbol{\Gamma}+\boldsymbol{\xi}\) and it was utilized that \(h_{u}\), (19), satisfies the relation
\[\boldsymbol{\Gamma}_{\xi}\boldsymbol{C}h_{u}+\boldsymbol{C}\dot{h}_ {u}=-\boldsymbol{C}\ddot{y}_{u} +(\boldsymbol{C}\boldsymbol{\Gamma}_{\xi}-\boldsymbol{\Gamma}_{ \xi}\boldsymbol{C})\dot{y}_{u}+\boldsymbol{\Gamma}_{\xi}\boldsymbol{C} \boldsymbol{\Gamma}_{\xi}y_{u}\] \[-\boldsymbol{C}\boldsymbol{\xi}\epsilon_{1}+(\boldsymbol{ \Gamma}_{\xi}\boldsymbol{C}+\boldsymbol{C}\boldsymbol{\Gamma})\psi_{\kappa}(s-u )\boldsymbol{\xi}\epsilon_{1}.\]
With these intermediaries, the Euler-Lagrange equation, (20), becomes
\[\Big{\{}\boldsymbol{\Gamma}l_{u}-(-\dot{l}_{u})\Big{\}}+2\nu \Big{\{}\boldsymbol{\Gamma}_{\xi}\boldsymbol{C}h_{u}-(-\boldsymbol{C}\dot{h}_ {u})\Big{\}}=\\ \Big{\{}(1-2\nu)\boldsymbol{C}\ddot{y}_{u}+\big{[}(\boldsymbol{ \Gamma}\boldsymbol{C}-\boldsymbol{C}\boldsymbol{\Gamma})-2\nu(\boldsymbol{ \Gamma}_{\xi}\boldsymbol{C}-\boldsymbol{C}\boldsymbol{\Gamma}_{\xi})\big{]} \dot{y}_{u}+\boldsymbol{\Gamma}\bar{\xi}\\ +\big{[}2\nu\boldsymbol{\Gamma}_{\xi}\boldsymbol{C}\boldsymbol{ \Gamma}_{\xi}-\boldsymbol{\Gamma}\boldsymbol{C}\boldsymbol{\Gamma}\big{]}y_{u }-2\nu\Big{[}\boldsymbol{C}-(\boldsymbol{\Gamma}_{\xi}\boldsymbol{C}+ \boldsymbol{C}\boldsymbol{\Gamma})\psi_{\kappa}(s-u)\ \Big{]}\eta^{r}\Big{\}}=0 \tag{41}\]
where \(\eta^{r}=\boldsymbol{\xi}\epsilon_{1}\), hence, \(y_{u}\) must satisfy the inhomogeneous second-order differential equation
\[(1-2\nu)\Big{[}\boldsymbol{C}\ddot{y}_{u}+\boldsymbol{B}\dot{y}_{u}- \boldsymbol{A}y_{u}\Big{]}=g_{u}\]
where
\[(1-2\nu)\boldsymbol{A} =\boldsymbol{\Gamma}\boldsymbol{C}\boldsymbol{\Gamma}-2\nu \boldsymbol{\Gamma}_{\xi}\boldsymbol{C}\boldsymbol{\Gamma}_{\xi},\] \[(1-2\nu)\boldsymbol{B} =(\boldsymbol{\Gamma}\boldsymbol{C}-\boldsymbol{C}\boldsymbol{ \Gamma})-2\nu(\boldsymbol{\Gamma}_{\xi}\boldsymbol{C}-\boldsymbol{C} \boldsymbol{\Gamma}_{\xi}),\] \[g_{u} =2\nu\left(\boldsymbol{C}-(\boldsymbol{\Gamma}_{\xi}\boldsymbol {C}+\boldsymbol{C}\boldsymbol{\Gamma})\psi_{\kappa}(s-u)\right)\eta^{r}- \boldsymbol{\Gamma}\bar{\xi}.\]
or - by Proposition 2.1 - in the primary parameterization:
\[\boldsymbol{A}=\begin{cases}\gamma_{r}^{2}&0\\ 0&\gamma_{S}^{2}\end{cases}+\rho\begin{cases}0&a_{\nu}\\ a_{\nu}&0\end{cases}\qquad\boldsymbol{B}=\rho\begin{cases}0&b_{\nu}\\ -b_{\nu}&0\end{cases}\]
with
\[\gamma_{r}^{2} =\frac{\kappa^{2}-2\nu a^{2}}{1-2\nu} \gamma_{S}^{2} =\frac{\alpha^{2}-2\nu(\alpha^{\prime})^{2}}{1-2\nu}\] \[a_{\nu} =\frac{\alpha\kappa-2\nu a\alpha^{\prime}}{1-2\nu} b_{\nu} =\frac{(\kappa-\alpha)-2\nu(a-\alpha^{\prime})}{1-2\nu}\]
where \(\alpha^{\prime}=\alpha-\sigma_{x}/\sigma_{S}\) and
\[g_{u}=-\Big{[}\begin{pmatrix}\kappa\bar{\xi}^{r}\\ \alpha\bar{\xi}^{s}\end{pmatrix}-2\nu\sigma_{r}\begin{pmatrix}1-(\kappa+a)\psi _{\kappa}(u)\\ \rho[1-(\kappa+\alpha^{\prime})\psi_{\kappa}(u)]\end{pmatrix}\Big{]}.\]
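To make the parameterization concrete, the following minimal Python sketch (not part of the original derivation; all parameter values are assumed purely for illustration) evaluates \(\gamma_{r}^{2}\), \(\gamma_{S}^{2}\), \(a_{\nu}\), \(b_{\nu}\) and assembles the coefficient matrices \(\boldsymbol{A}\) and \(\boldsymbol{B}\).

```python
# Illustrative sketch: primary parameterization of the Euler-Lagrange coefficients.
# All numbers below (kappa, a, alpha, sigma_x/sigma_S, rho, sigma_r, nu) are assumed.
import numpy as np

kappa, a, alpha = 0.30, 0.05, 0.20      # mean-reversion / loading parameters (assumed)
sx_over_sS, rho = 0.10, -0.20           # sigma_x / sigma_S and correlation (assumed)
sigma_r, nu = 0.01, -5.0                # rate volatility and Lagrange multiplier (nu < 0)

alpha_p = alpha - sx_over_sS            # alpha'
d = 1.0 - 2.0 * nu

gamma_r2 = (kappa**2 - 2.0*nu*a**2) / d
gamma_S2 = (alpha**2 - 2.0*nu*alpha_p**2) / d
a_nu = (alpha*kappa - 2.0*nu*a*alpha_p) / d
b_nu = ((kappa - alpha) - 2.0*nu*(a - alpha_p)) / d

# Coefficient matrices of the Euler-Lagrange ODE in the primary parameterization
A = np.array([[gamma_r2, rho*a_nu],
              [rho*a_nu, gamma_S2]])
B = np.array([[0.0,       rho*b_nu],
              [-rho*b_nu, 0.0     ]])
print(gamma_r2, gamma_S2, a_nu, b_nu)
```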
Furthermore, combining (21a), (39), and (40) it follows that \(y_{u}\) satisfies the lower boundary condition
\[(l_{u}+2\nu\boldsymbol{C}h_{u})^{T}\Big{|}_{u=t}=\\ \big{(}\bar{\xi}+e^{-\boldsymbol{\Gamma}(u-t)}\xi_{t}-\boldsymbol{C}( \boldsymbol{\Gamma}y_{u}-\dot{y}_{u})-2\nu\boldsymbol{C}\{\boldsymbol{\Gamma }_{\xi}y_{u}-\dot{y}_{u}+\psi_{\kappa}(s-u)\eta^{r}\}\big{)}^{T}\Big{|}_{u=t,s }=0,\]
and in combination with the upper boundary condition, (21b), the boundary conditions become
\[b_{0}+\big{(}b_{u}-[\boldsymbol{\Gamma}-2\nu\boldsymbol{\Gamma}_{\xi}]y_{u}+ (1-2\nu)\dot{y}_{u}\big{)}\Big{|}_{u=t}=0 \text{(lower boundary)}\]
\[y_{u}|_{u=s}=0 \text{(upper boundary)}\]
where
\[b_{0}= \boldsymbol{C}^{-1}\bar{\xi},\] \[b_{u}= \boldsymbol{C}^{-1}e^{-\boldsymbol{\Gamma}(u-t)}\xi_{t}+2\nu\psi_{ \kappa}(s-u)\eta^{r}.\]
The Lagrange multiplier is determined from the variance constraint, i.e., by combining (4) and (19)
\[\int_{t}^{s}\Big{\{}\left[\mathbf{\Gamma}_{\xi}y_{u}-\dot{y}_{u}+\psi_{\kappa}(s-u)\eta^{r}\right]^{T}\mathbf{C}\left[\mathbf{\Gamma}_{\xi}y_{u}-\dot{y}_{u}+\psi_{\kappa}(s-u)\eta^{r}\right]\Big{\}}\,du=c.\]
\(\square\)
## Appendix E Latent Roots
**Corollary E.1** (Latent Roots). _Given the assumptions of Theorem 4.1, the Lambda-matrix \(\mathbf{M}_{\lambda}\), (24), is given by_
\[\mathbf{M}_{\lambda}=\begin{Bmatrix}\lambda^{2}-\gamma_{r}^{2}&\rho[(\lambda^{2}- a_{\nu})+b_{\nu}\lambda]\\ \rho[(\lambda^{2}-a_{\nu})-b_{\nu}\lambda]&\lambda^{2}-\gamma_{S}^{2}\end{Bmatrix} \tag{42}\]
_with squared latent roots \(\lambda_{1}^{2},\lambda_{2}^{2}\). Let the discriminant, \(D\), be given by_
\[D=(1-\rho^{2})\big{[}\gamma_{r}^{2}-\gamma_{S}^{2}\big{]}^{2}+\rho^{2}\big{[} \gamma_{r}^{2}+\gamma_{S}^{2}-(2a_{\nu}+b_{\nu}^{2})\big{]}^{2}-\rho^{2}(1- \rho^{2})(4a_{\nu}+b_{\nu}^{2})b_{\nu}^{2}\]
_then (a) iff \(D>0\), the squared latent roots are real, positive, and distinct and are given by_
\[\lambda_{1}^{2}=\frac{\gamma_{r}^{2}+\gamma_{S}^{2}-\rho^{2}(2a_{\nu}+b_{\nu} ^{2})-\sqrt{D}}{2(1-\rho^{2})},\quad\lambda_{2}^{2}=\frac{\gamma_{r}^{2}+ \gamma_{S}^{2}-\rho^{2}(2a_{\nu}+b_{\nu}^{2})+\sqrt{D}}{2(1-\rho^{2})}\]
_and the solvents, \(\mathbf{S}_{1},\mathbf{S}_{2}\in\mathbb{R}^{2\times 2}\), are given by_
\[\mathbf{S}_{1}=\mathbf{Q}_{1}\mathbf{\Lambda}\mathbf{Q}_{1}^{-1},\qquad\mathbf{S}_{2}=\mathbf{Q}_{2}(- \mathbf{\Lambda})\mathbf{Q}_{2}^{-1}\]
_where \(\mathbf{\Lambda}=\mathrm{diag}(\lambda_{1},\lambda_{2})\) and \(\mathbf{Q}_{1},\mathbf{Q}_{2}\in\mathbb{R}^{2\times 2}\), \(\mathbf{Q}_{1}=\{r_{1};r_{2}\}\) and \(\mathbf{Q}_{2}=\{r_{3};r_{4}\}\), with column right-latent vectors, \(r_{i}\), where \(r_{i}\) is the larger of_
\[\bar{r}_{i}=\begin{pmatrix}\rho[\lambda_{i}^{2}-a_{\nu}-b_{\nu}\lambda_{i}]\\ \gamma_{r}^{2}-\lambda_{i}^{2}\end{pmatrix}\qquad\underline{r}_{i}=\begin{pmatrix} \gamma_{S}^{2}-\lambda_{i}^{2}\\ \rho[\lambda_{i}^{2}-a_{\nu}+b_{\nu}\lambda_{i}]\end{pmatrix}\]
_with respect to the Euclidean norm for \(\lambda_{i}=(\lambda_{1},\lambda_{2},-\lambda_{1},-\lambda_{2})\), respectively; and (b) iff \(D<0\), the latent roots are complex, distinct, and each other's complex conjugate, and are given by_
\[\lambda_{1}^{2}=(\lambda_{2}^{2})^{*}=\frac{\gamma_{r}^{2}+\gamma_{S}^{2}- \rho^{2}(2a_{\nu}+b_{\nu}^{2})-i\sqrt{-D}}{2(1-\rho^{2})}\]
_where \((\cdot)^{*}\) denotes the complex conjugate and the solvents, \(\mathbf{S}_{1},\mathbf{S}_{2}\), are real and given by_
\[\mathbf{S}_{i}=\frac{1}{\mathrm{Im}(r_{i1}r_{i2}^{*})}\mathrm{Im}\begin{Bmatrix} \lambda r_{i1}r_{i2}^{*}&\lambda^{*}|r_{i1}|^{2}\\ \lambda|r_{i2}|^{2}&\lambda^{*}r_{i1}r_{i2}^{*}\end{Bmatrix},\]
\(\lambda\) _is the principal (complex) root of_
\[\lambda^{2}=\frac{\gamma_{r}^{2}+\gamma_{S}^{2}-\rho^{2}(2a_{\nu}+b_{\nu}^{2} )-i\sqrt{-D}}{2(1-\rho^{2})},\]
_the latent vectors \(r_{1},r_{2}\in\mathbb{C}^{2}\) are given by_
\[r_{1}=\begin{pmatrix}\rho[\lambda^{2}-a_{\nu}+b_{\nu}\lambda]\\ \gamma_{r}^{2}-\lambda^{2}\end{pmatrix},\qquad r_{2}=\begin{pmatrix}\rho[ \lambda^{2}-a_{\nu}-b_{\nu}\lambda]\\ \gamma_{r}^{2}-\lambda^{2}\end{pmatrix},\]
_respectively._
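As an illustration of case (a), the following sketch (with assumed, illustrative values of \(\gamma_{r}^{2},\gamma_{S}^{2},a_{\nu},b_{\nu},\rho\); not taken from the paper) computes the discriminant and the squared latent roots, and assembles the solvents numerically by extracting latent vectors from the singular lambda-matrix with an SVD rather than from the closed-form expressions.

```python
# Numerical sketch of Corollary E.1, case D > 0 (illustrative, assumed values).
import numpy as np

gamma_r2, gamma_S2, a_nu, b_nu, rho = 0.04, 0.09, 0.025, 0.10, -0.20  # assumed

def M(lam):
    """Lambda-matrix (42) evaluated at a (possibly negative) scalar lam."""
    return np.array([
        [lam**2 - gamma_r2,                 rho*((lam**2 - a_nu) + b_nu*lam)],
        [rho*((lam**2 - a_nu) - b_nu*lam),  lam**2 - gamma_S2               ]])

D = ((1-rho**2)*(gamma_r2-gamma_S2)**2
     + rho**2*(gamma_r2+gamma_S2-(2*a_nu+b_nu**2))**2
     - rho**2*(1-rho**2)*(4*a_nu+b_nu**2)*b_nu**2)
assert D > 0  # real, positive, distinct squared latent roots in this example

num = gamma_r2 + gamma_S2 - rho**2*(2*a_nu + b_nu**2)
lam1 = np.sqrt((num - np.sqrt(D)) / (2*(1-rho**2)))
lam2 = np.sqrt((num + np.sqrt(D)) / (2*(1-rho**2)))

def null_vector(lam):
    """Right latent vector: right singular vector of the (singular) M(lam)
    associated with its smallest singular value."""
    return np.linalg.svd(M(lam))[2][-1]

Q1 = np.column_stack([null_vector(lam1), null_vector(lam2)])
Q2 = np.column_stack([null_vector(-lam1), null_vector(-lam2)])
Lam = np.diag([lam1, lam2])
S1 = Q1 @ Lam @ np.linalg.inv(Q1)       # solvent with eigenvalues +lam_i
S2 = Q2 @ (-Lam) @ np.linalg.inv(Q2)    # solvent with eigenvalues -lam_i
print(np.round([D, lam1**2, lam2**2], 5))
```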
Proof.: \(D>0\): It follows from (42) that the determinant is a quadratic polynomial in \(\lambda^{2}\)
\[\det\boldsymbol{M}_{\lambda}=\underbrace{(1-\rho^{2})}_{A}\lambda^{4}-\underbrace {\left[(\gamma_{r}^{2}+\gamma_{S}^{2})-\rho^{2}(2a_{\nu}+b_{\nu}^{2})\right]}_{B} \lambda^{2}+\underbrace{\left[\gamma_{r}^{2}\gamma_{S}^{2}-\rho^{2}a_{\nu}^{2} \right]}_{C}\]
where all coefficients \(A,B\), and \(C\) are positive:
* _Positivity of \(A\)_ follows directly from the assumption \(\rho^{2}<1\).
* _Positivity of \(B\)_ holds if \(\gamma_{r}^{2}+\gamma_{S}^{2}\geq 2a_{\nu}+b_{\nu}^{2}\). Upon multiplication by \((1-2\nu)^{2}\) we find \[(1-2\nu)^{2}\big{[}\gamma_{r}^{2}+\gamma_{S}^{2}-(2a_{\nu}+b_{\nu}^{2})\big{]}\geq 0 \Leftrightarrow\] \[-2\nu\Big{(}(\kappa-\alpha)-(a-\alpha^{\prime})\Big{)}^{2}\geq 0\] which holds for all parameter choices since \(\nu<0\).
* _Positivity of \(C\)_ holds if \(\gamma_{r}^{2}\gamma_{S}^{2}>\rho^{2}a_{\nu}^{2}\). It follows upon multiplication by \((1-2\nu)^{2}\) that \[\big{[}\kappa^{2}-2\nu a^{2}\big{]}\big{[}\alpha^{2}-2\nu(\alpha^{ \prime})^{2}\big{]}-\rho^{2}\big{[}\alpha\kappa-2\nu a\alpha^{\prime}\big{]}^{ 2}>0 \Leftrightarrow\] \[(1-\rho^{2})(\alpha^{2}\kappa^{2}+4\nu^{2}a^{2}(\alpha^{\prime})^ {2})-2\nu(\kappa\alpha^{\prime}-a\alpha)^{2}>0\] which also holds for all parameter choices since \(\nu<0\) and \(\rho^{2}<1\).
Since \(A\) and \(B\) are positive, the vertex of the parabola lies at a positive value of \(\lambda^{2}\), hence the larger root is positive. Furthermore, since the parabola opens upwards and \(C\) (the intercept) is also positive, the smaller root is positive too. Finally, since \(D>0\), the roots are distinct.
The lambda matrix is degenerate at the latent roots, hence, the top and bottom row of \(\boldsymbol{M}_{\lambda}\) become proportional. The latent (right) vectors, \(r_{i}\), solve
\[\boldsymbol{M}_{\lambda_{i}}r_{i}=0 \tag{43}\]
hence, the right latent vector is given by
\[r_{i}=\begin{pmatrix}\rho[\lambda_{i}^{2}-a_{\nu}-b_{\nu}\lambda_{i}]\\ \gamma_{r}^{2}-\lambda_{i}^{2}\end{pmatrix}\]
or in case this is a zero-vector
\[r_{i}=\begin{pmatrix}\gamma_{S}^{2}-\lambda_{i}^{2}\\ \rho[\lambda_{i}^{2}-a_{\nu}+b_{\nu}\lambda_{i}]\end{pmatrix}.\]
Furthermore, right latent vectors of the pair \(\lambda_{1}\) and \(\lambda_{2}\) are linearly independent.
\(D<0\): The latent roots are complex and given by
\[\lambda_{1}^{2}=(\lambda_{2}^{2})^{*}=\frac{\gamma_{r}^{2}+\gamma_{S}^{2}-\rho ^{2}(2a_{\nu}+b_{\nu}^{2})-i\sqrt{-D}}{2(1-\rho^{2})}\]
hence, there are four distinct (complex) latent roots, \(\{\gamma,\gamma^{*},-\gamma,-\gamma^{*}\}\), where \(\gamma\) is the principal square root of \(\lambda_{1}^{2}\).
Since the latent roots are complex, the latent vectors are given by (43), and from the ordering of the latent roots, \(\lambda_{i}\), \(\boldsymbol{Q}_{1}\) and \(\boldsymbol{Q}_{2}\) are given by
\[\boldsymbol{Q}_{1}=\{r_{1};r_{1}^{*}\},\qquad r_{1}=\begin{pmatrix}\rho[\gamma^{2}-a_{\nu}+b_{\nu}\gamma]\\ \gamma_{r}^{2}-\gamma^{2}\end{pmatrix}\] \[\boldsymbol{Q}_{2}=\{r_{2};r_{2}^{*}\},\qquad r_{2}=\begin{pmatrix}\rho[\gamma^{2}-a_{\nu}-b_{\nu}\gamma]\\ \gamma_{r}^{2}-\gamma^{2}\end{pmatrix}.\]
For either \(\mathbf{Q}_{1}\) or \(\mathbf{Q}_{2}\) we write with a minor abuse of notation
\[\mathbf{Q}=\begin{Bmatrix}r_{1}&r_{1}^{*}\\ r_{2}&r_{2}^{*}\end{Bmatrix}\Rightarrow\mathbf{Q}^{-1}=\frac{-i}{2\text{Im}(r_{1}r_{2}^{*})}\begin{Bmatrix}r_{2}^{*}&-r_{1}^{*}\\ -r_{2}&r_{1}\end{Bmatrix}\]
and therefore
\[\mathbf{S}=\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1}=\frac{1}{\text{Im}(r_{1}r_{2}^{*})}\text{Im }\begin{Bmatrix}\gamma r_{1}r_{2}^{*}&\gamma^{*}|r_{1}|^{2}\\ \gamma|r_{2}|^{2}&\gamma^{*}r_{1}r_{2}^{*}\end{Bmatrix},\]
that is, the _solvents_\(\mathbf{S}_{1},\mathbf{S}_{2}\) are real. \(\square\)
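The reality of the solvents in case (b) is easy to verify numerically; the sketch below (with arbitrary, assumed complex values for the latent root and the latent-vector components) compares \(\mathbf{Q}\boldsymbol{\Lambda}\mathbf{Q}^{-1}\) with the closed-form expression above.

```python
# Numerical check of the D < 0 case (illustrative, assumed values): the solvent
# built from conjugate latent roots and latent vectors is real up to round-off.
import numpy as np

gam = 0.15 + 0.08j                      # assumed principal complex latent root
r1, r2 = 0.30 - 0.10j, 0.50 + 0.20j     # assumed components of the latent vector

Q = np.array([[r1, np.conj(r1)],
              [r2, np.conj(r2)]])
Lam = np.diag([gam, np.conj(gam)])
S = Q @ Lam @ np.linalg.inv(Q)

# Closed-form expression from the proof: S = Im{...} / Im(r1 * conj(r2))
S_closed = (np.imag(np.array([[gam*r1*np.conj(r2), np.conj(gam)*abs(r1)**2],
                              [gam*abs(r2)**2,     np.conj(gam)*r1*np.conj(r2)]]))
            / np.imag(r1*np.conj(r2)))

print(np.max(np.abs(S.imag)))              # ~1e-17: S is real
print(np.max(np.abs(S.real - S_closed)))   # agrees with the closed form
```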
## Appendix F Proof of Theorem 6.1
Writing the solution
\[y_{u}=y_{u}^{p}+y_{u}^{h}\]
as the sum of the particular, \(y_{u}^{p}\) and homogeneous, \(y_{u}^{h}\) parts, respectively, by Theorem 5.1 the homogeneous solution is given by
\[y_{u}^{h}=e^{\mathbf{S}_{1}(s-u)}q_{1}+e^{\mathbf{S}_{2}(s-u)}q_{2} \tag{44}\]
where \(q_{1},q_{2}\in\mathbb{R}^{2}\) are arbitrary integration constants. It will become clear below that \(q_{1},q_{2}\) depend explicitly on the initial time, \(t\), but for the purposes of this proof we will suppress this dependence.
For the particular solution, we assume the following form
\[y_{u}^{p}=k_{1}+\psi_{\kappa}(s-u)k_{2} \tag{45}\]
where \(k_{1},k_{2}\in\mathbb{R}^{2}\) are constants. Upon insertion into (22) and matching factors of \(\psi_{\kappa}(s-u)\), we find
\[(1-2\nu)\Big{[}\kappa^{2}\mathbf{C}+\kappa\mathbf{B}-\mathbf{A}\Big{]}k_{2}=-2\nu\sigma_{ r}\begin{pmatrix}a+\kappa\\ \rho(\alpha^{\prime}+\kappa)\end{pmatrix} \tag{46a}\] \[(1-2\nu)\Big{[}(\kappa\mathbf{C}+\mathbf{B})k_{2}+\mathbf{A}k_{1}\Big{]}= \begin{pmatrix}\kappa\bar{\xi}^{r}\\ \alpha\bar{\xi}^{S}\end{pmatrix}-2\nu\sigma_{r}\begin{pmatrix}1\\ \rho\end{pmatrix} \tag{46b}\]
Rewriting the left-hand side of (46a)
\[(1-2\nu)\big{\{}\kappa^{2}\mathbf{C}+\kappa\mathbf{B}-\mathbf{A}\big{\}}k_{2}=\begin{Bmatrix}2\nu(a+\kappa)(a-\kappa)&\rho[2\kappa(\kappa-\alpha)-2\nu(\kappa-\alpha^{\prime})(\kappa+a)]\\ 2\nu\rho(\kappa+\alpha^{\prime})(a-\kappa)&(1-2\nu)(\kappa^{2}-\gamma_{S}^{2})\end{Bmatrix}k_{2}\]
it is clear that the first column is proportional to the right-hand side of (46a), hence,
\[k_{2}=\Big{(}\frac{\sigma_{r}}{\kappa-a},0\Big{)}^{T}.\]
Insertion of \(k_{2}\) into (46b) yields \(k_{1}\):
\[(1-2\nu)\mathbf{A}k_{1}=\begin{pmatrix}\kappa\bar{\xi}^{r}\\ \alpha\bar{\xi}^{S}\end{pmatrix}+\frac{\sigma_{r}}{a-\kappa}\begin{pmatrix} \kappa-2\nu a\\ \rho(\alpha-2\nu\alpha^{\prime})\end{pmatrix}\]
where the inverse of \(\mathbf{A}\) is given by
\[\mathbf{A}^{-1}=\frac{1}{\gamma_{r}^{2}\gamma_{S}^{2}-\rho^{2}a_{\nu}^{2}}\begin{Bmatrix} \gamma_{S}^{2}&-\rho a_{\nu}\\ -\rho a_{\nu}&\gamma_{r}^{2}\end{Bmatrix}.\]
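Numerically, the particular solution follows directly from these expressions; the sketch below (with assumed, illustrative parameter values) evaluates \(k_{2}\) and solves (46b) for \(k_{1}\).

```python
# Sketch of the particular solution: k2 from (46a) and k1 from (46b).
# All parameter values are assumed for illustration only.
import numpy as np

kappa, a, alpha, alpha_p, rho = 0.30, 0.05, 0.20, 0.10, -0.20
sigma_r, nu = 0.01, -5.0
xi_r_bar, xi_S_bar = 0.02, 0.25          # assumed components of \bar{xi}
d = 1.0 - 2.0*nu

gamma_r2 = (kappa**2 - 2*nu*a**2)/d
gamma_S2 = (alpha**2 - 2*nu*alpha_p**2)/d
a_nu = (alpha*kappa - 2*nu*a*alpha_p)/d
A = np.array([[gamma_r2, rho*a_nu], [rho*a_nu, gamma_S2]])

k2 = np.array([sigma_r/(kappa - a), 0.0])

rhs = (np.array([kappa*xi_r_bar, alpha*xi_S_bar])
       + sigma_r/(a - kappa)*np.array([kappa - 2*nu*a, rho*(alpha - 2*nu*alpha_p)]))
k1 = np.linalg.solve(d*A, rhs)           # (1 - 2 nu) A k1 = rhs
print(k1, k2)
```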
The boundary conditions are stated in Theorem 4.1 which together with (44) and (45) yields the following condition to determine integration constants \(q_{1}\) and \(q_{2}\):
\[\left\{\begin{aligned} & q_{1}+q_{2}+k_{1}=\mathbf{0}\\ &\boldsymbol{C}b_{0}+\boldsymbol{C}b_{t}-\boldsymbol{C}\big{[} \boldsymbol{\Gamma}-2\nu\boldsymbol{\Gamma}_{\xi}\big{]}\Big{(}k_{1}+k_{2 }\psi_{\kappa}(t)+\boldsymbol{Q}_{1}e^{\boldsymbol{\gamma}(s-t)}\boldsymbol{Q }_{1}^{-1}q_{1}+\boldsymbol{Q}_{2}e^{-\boldsymbol{\gamma}(s-t)}\boldsymbol{Q}_{2}^{-1}q_{2}\Big{)}\\ &+(1-2\nu)\boldsymbol{C}\big{[}(\kappa\psi_{\kappa}(t)-1)k_{2}- \boldsymbol{Q}_{1}\boldsymbol{\gamma}e^{\boldsymbol{\gamma}(s-t)}\boldsymbol{Q }_{1}^{-1}q_{1}+\boldsymbol{Q}_{2}\boldsymbol{\gamma}e^{-\boldsymbol{\gamma}( s-t)}\boldsymbol{Q}_{2}^{-1}q_{2}\big{]}=\mathbf{0}\end{aligned}\right.\]
or, in block matrix form,
\[\begin{Bmatrix}\mathbb{I}&\mathbb{I}\\ \boldsymbol{D}_{1}e^{\boldsymbol{S}_{1}(s-t)}&\boldsymbol{D}_{2}e^{\boldsymbol{S}_{2}(s-t)}\end{Bmatrix}\begin{pmatrix}q_{1}\\ q_{2}\end{pmatrix}=\begin{pmatrix}-k_{1}\\ \boldsymbol{C}^{-1}(\bar{\xi}+\xi_{t})-(\boldsymbol{\Gamma}k_{1}+k_{2})+2\nu( \boldsymbol{\Gamma}_{\xi}k_{1}+k_{2})\end{pmatrix},\]
where \(\boldsymbol{D}_{1}=\boldsymbol{\Gamma}-2\nu\boldsymbol{\Gamma}_{\xi}+(1-2\nu )\boldsymbol{S}_{1}\) and \(\boldsymbol{D}_{2}=\boldsymbol{\Gamma}-2\nu\boldsymbol{\Gamma}_{\xi}+(1-2\nu )\boldsymbol{S}_{2}\).
Finally, by Theorem 4.1, (44), and (45) it follows that the optimal factor allocation, \(f_{u}\), is given by
\[f_{u} =\boldsymbol{\Gamma}y_{u}-\dot{y}_{u}\] \[=\boldsymbol{\Gamma}\big{\{}k_{1}+\psi_{\kappa}(u)k_{2}+e^{ \boldsymbol{S}_{1}(s-u)}q_{1}+e^{\boldsymbol{S}_{2}(s-u)}q_{2}\big{\}}\] \[\qquad-(\kappa\psi_{\kappa}(u)-1)k_{2}+\boldsymbol{S}_{1}e^{ \boldsymbol{S}_{1}(s-u)}q_{1}+\boldsymbol{S}_{2}e^{\boldsymbol{S}_{2}(s-u)}q_ {2}\] \[=(\boldsymbol{\Gamma}k_{1}+k_{2})+(\boldsymbol{\Gamma}+ \boldsymbol{S}_{1})e^{\boldsymbol{S}_{1}(s-u)}q_{1}+(\boldsymbol{\Gamma}+ \boldsymbol{S}_{2})e^{\boldsymbol{S}_{2}(s-u)}q_{2}.\]
\(\square\)
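For completeness, the following sketch illustrates the final assembly numerically: the \(4\times 4\) block system is solved for \(q_{1},q_{2}\) and the factor exposure \(f_{u}\) is evaluated on a time grid. All numerical inputs are placeholders standing in for the quantities (\(\boldsymbol{\Gamma},\boldsymbol{\Gamma}_{\xi},\boldsymbol{C},\boldsymbol{S}_{1},\boldsymbol{S}_{2},k_{1},k_{2},\bar{\xi},\xi_{t}\)) produced by a calibration of the model; they are not results from the paper.

```python
# Sketch of Theorem 6.1: solve for the integration constants and evaluate f_u.
import numpy as np
from scipy.linalg import expm

t, s = 0.0, 20.0
Gamma    = np.diag([0.30, 0.20])                  # assumed diag(kappa, alpha)
Gamma_xi = np.diag([0.05, 0.10])                  # placeholder
C        = np.array([[1.0, -0.2], [-0.2, 1.0]])   # placeholder correlation matrix
S1, S2   = np.diag([0.20, 0.31]), np.diag([-0.20, -0.31])    # placeholder solvents
k1, k2   = np.array([-0.0008, 0.0]), np.array([-0.04, 0.0])  # placeholder
xi_bar, xi_t, nu = np.array([0.02, 0.25]), np.array([0.0, 0.0]), -5.0

D1 = Gamma - 2*nu*Gamma_xi + (1 - 2*nu)*S1
D2 = Gamma - 2*nu*Gamma_xi + (1 - 2*nu)*S2

I2 = np.eye(2)
lhs = np.block([[I2, I2],
                [D1 @ expm(S1*(s - t)), D2 @ expm(S2*(s - t))]])
rhs = np.concatenate([-k1,
                      np.linalg.solve(C, xi_bar + xi_t)
                      - (Gamma @ k1 + k2) + 2*nu*(Gamma_xi @ k1 + k2)])
q = np.linalg.solve(lhs, rhs)
q1, q2 = q[:2], q[2:]

def f(u):
    """Optimal factor exposure from Theorem 6.1."""
    return (Gamma @ k1 + k2
            + (Gamma + S1) @ expm(S1*(s - u)) @ q1
            + (Gamma + S2) @ expm(S2*(s - u)) @ q2)

print(f(t), f(s))
```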
### Proof of Corollary 6.2
From Theorem 4.1 we find in the limit \(\nu\to-\infty\) that
\[\gamma_{r}^{2}\to a^{2}\qquad\gamma_{S}^{2}\to(\alpha^{\prime})^{2}\qquad a_ {\nu}\to a\alpha^{\prime}\qquad b_{\nu}\to(a-\alpha^{\prime})\]
and
\[\boldsymbol{A}\to\left\{\begin{matrix}a^{2}&\rho a\alpha^{\prime}\\ \rho a\alpha^{\prime}&(\alpha^{\prime})^{2}\end{matrix}\right\}\qquad\boldsymbol {B}\to\left\{\begin{matrix}0&\rho(a-\alpha^{\prime})\\ -\rho(a-\alpha^{\prime})&0\end{matrix}\right\},\]
hence, the limiting value of the lambda matrix, (24), becomes
\[\boldsymbol{M}_{\lambda}\to\left\{\begin{matrix}\lambda^{2}-a^{2}&\rho[\lambda ^{2}-a\alpha^{\prime}+(a-\alpha^{\prime})\lambda]\\ \rho[\lambda^{2}-a\alpha^{\prime}-(a-\alpha^{\prime})\lambda]&\lambda^{2}-( \alpha^{\prime})^{2}\end{matrix}\right\}\]
with limiting latent roots \(\lambda^{2}=a^{2},(\alpha^{\prime})^{2}\).
From
\[\boldsymbol{A}^{-1}\to\frac{1}{(1-\rho^{2})a^{2}(\alpha^{\prime})^{2}}\left\{ \begin{matrix}(\alpha^{\prime})^{2}&-\rho a\alpha^{\prime}\\ -\rho a\alpha^{\prime}&a^{2}\end{matrix}\right\}\]
it further follows that
\[k_{1}\to\frac{1}{(1-\rho^{2})a^{2}(\alpha^{\prime})^{2}}\left\{\begin{matrix}( \alpha^{\prime})^{2}&-\rho a\alpha^{\prime}\\ -\rho a\alpha^{\prime}&a^{2}\end{matrix}\right\}\frac{\sigma_{r}}{a-\kappa} \left(\begin{matrix}a\\ \rho\alpha^{\prime}\end{matrix}\right)=\frac{\sigma_{r}}{a-\kappa}\left( \begin{matrix}1/a\\ 0\end{matrix}\right),\]
that is, \(k_{1}\to-a^{-1}k_{2}\), hence,
\[\boldsymbol{\Gamma}_{\xi}k_{1}+k_{2}\to 0. \tag{47}\]
From Theorem 6.1, using (47) and retaining only the terms proportional to \(\nu\), we find that the limiting boundary conditions are
\[\left\{\begin{matrix}\mathbb{I}&\mathbb{I}\\ (\boldsymbol{\Gamma}_{\xi}+\boldsymbol{S}_{1})e^{\boldsymbol{S}_{1}(s-t)}&0 \end{matrix}\right\}\begin{pmatrix}q_{1}\\ q_{2}\end{pmatrix}=\begin{pmatrix}-k_{1}\\ 0\end{pmatrix}\]
where it is easy to check that \(\boldsymbol{S}_{2}=-\boldsymbol{\Gamma}_{\xi}\) is a solvent, hence, \(\boldsymbol{D}_{2}\to 0\) and
\[q_{1}=0\qquad q_{2}=-k_{1}.\]
Finally, the limiting optimal asset allocation, \(f_{u}^{\infty}\), is given by insertion in (28)
\[f_{u}^{\infty}=\kappa(-a^{-1})k_{2}+k_{2}+(\mathbf{\Gamma}-\mathbf{\Gamma}_{ \xi})e^{-a(s-u)}(-k_{1})=\begin{pmatrix}-\sigma_{r}\psi_{a}(s-u)\\ 0\end{pmatrix}\]
where \(\psi_{a}(\cdot)\) is given by (8). \(\square\)
| This paper studies the optimal portfolio choice, in continuous time, of an investor who follows a deterministic investment policy, in a market with mean reversion in the risk-free rate and the equity risk premium. Following the tradition of Markowitz, the optimal policy is restricted to a subclass of factor exposures in which losses cannot exceed the initial capital. An Euler-Lagrange equation is derived by the calculus of variations, and this equation is recast into a matrix differential equation through an integral transformation of the factor exposure; as a result, the solution of the characteristic equation is parametrized by the eigenvalues of the associated lambda-matrix, so the optimization problem is equivalent to a spectral problem. Finally, explicit solutions for the optimal policy are derived by applying suitable boundary conditions, and it is further shown that if the equity risk premium is indeed slowly mean-reverting, investors committing to long investment horizons realize better risk-return trade-offs than investors with shorter horizons. |
2305.19563 | Zero-Shot Automatic Pronunciation Assessment | Automatic Pronunciation Assessment (APA) is vital for computer-assisted
language learning. Prior methods rely on annotated speech-text data to train
Automatic Speech Recognition (ASR) models or speech-score data to train
regression models. In this work, we propose a novel zero-shot APA method based
on the pre-trained acoustic model, HuBERT. Our method involves encoding speech
input and corrupting them via a masking module. We then employ the Transformer
encoder and apply k-means clustering to obtain token sequences. Finally, a
scoring module is designed to measure the number of wrongly recovered tokens.
Experimental results on speechocean762 demonstrate that the proposed method
achieves comparable performance to supervised regression baselines and
outperforms non-regression baselines in terms of Pearson Correlation
Coefficient (PCC). Additionally, we analyze how masking strategies affect the
performance of APA. | Hongfu Liu, Mingqian Shi, Ye Wang | 2023-05-31T05:17:17 | http://arxiv.org/abs/2305.19563v1 | # Zero-Shot Automatic Pronunciation Assessment
###### Abstract
Automatic Pronunciation Assessment (APA) is vital for computer-assisted language learning. Prior methods rely on annotated speech-text data to train Automatic Speech Recognition (ASR) models or speech-score data to train regression models. In this work, we propose a novel zero-shot APA method based on the pre-trained acoustic model, HuBERT. Our method involves encoding speech input and corrupting them via a masking module. We then employ the Transformer encoder and apply k-means clustering to obtain token sequences. Finally, a scoring module is designed to measure the number of wrongly recovered tokens. Experimental results on speechocean762 demonstrate that the proposed method achieves comparable performance to supervised regression baselines and outperforms non-regression baselines in terms of Pearson Correlation Coefficient (PCC). Additionally, we analyze how masking strategies affect the performance of APA.
Hongfu Liu, Mingqian Shi, Ye Wang School of Computing, National University of Singapore, Singapore
{hongfu,m-shi,wangye}@comp.nus.edu.sg
**Index Terms**: automatic pronunciation assessment, zero-shot learning, self-supervised learning, HuBERT
## 1 Introduction
Learning a second language (L2) is a common requirement in bilingual or multilingual communities. However, L2 learners often struggle with achieving good proficiency in pronunciation. Computer-assisted pronunciation training (CAPT) is a notable application that enables language learners to effectively learn the pronunciation of new languages [1, 2]. CAPT provides feedback containing evaluation results, which can be automatically generated based on pronunciation, facilitating L2 learners in adjusting their pronunciation for improvement. Therefore, providing an overall assessment of pronunciation automatically is one of the primary objectives of CAPT.
Automatic pronunciation assessment has been extensively investigated over a prolonged period. Existing pronunciation assessment methods are implemented in the supervised setting. These approaches involve the usage of collected speech data with text annotations for training ASR models. Then the evaluation can be conducted based on the recognition results of ASR models. Goodness of Pronunciation (GoP) is one of the most commonly used metrics, aiming to provide phoneme-level scores for a given utterance. GoP requires calculating the log-posterior probability for each reference phoneme based on the contextual information [3, 4, 5]. On the other hand, there is an alternative research line that involves using speech data from non-native speakers with pronunciation scores annotated by domain experts to train regression models. Various features of speech data have been explored in this line, one of which is the phone-level features of speech [6, 7]. To enhance regression performance, [8] propose to use deep features transferred from the acoustic models of ASR. Using speech representations of pre-trained acoustic models such as wav2vec 2.0 or HuBERT also contributes to improving the regression performance by fine-tuning [9, 10]. Furthermore, multi-aspect pronunciation assessment at multiple granularities [11, 12] has been explored with multi-task supervised learning. However, there is a lack of unsupervised assessment approaches in the literature. All current pronunciation assessment methods require supervised signals to obtain the evaluation results.
Resource-efficient methods have been widely investigated for the low-resource scenario in the speech community [13, 14]. Nevertheless, it remains challenging to evaluate the quality of pronunciation using few or no data samples. Recent advances in Self-Supervised Learning (SSL) pre-trained language models (PLMs) have demonstrated strong few-shot and zero-shot learning abilities in the natural language processing community [15, 16] due to the knowledge acquired during the pre-training stage. PLMs are capable of performing downstream tasks via appropriate prompting with limited or even no data samples. However, the zero-shot ability has not been fully explored for SSL pre-trained acoustic models. This is because they learn at the acoustic level and it is challenging to learn linguistic representations from raw audio [17, 18], making it difficult to adapt them to downstream tasks without fine-tuning. While fine-tuning SSL pre-trained acoustic models with supervised data has been shown to be effective in automatic pronunciation assessment [9, 10], zero-shot pronunciation assessment has yet to be explored. Nevertheless, the acoustic-level knowledge acquired by SSL pre-trained acoustic models presents a viable option for zero-shot pronunciation assessment based on the unlabelled speech data observed during pre-training.
In this work, we propose a zero-shot pronunciation assessment approach that requires no annotated speech data. This is achieved by leveraging the SSL pre-trained acoustic model, HuBERT [19], for conducting the masked token prediction task. Our method involves encoding the waveform speech input into frame sequences and transforming them into corrupted sequences via a masking module. We then employ the Transformer Encoder of HuBERT and apply k-means clustering to obtain tokens of frame sequences and recovered tokens of corrupted sequences. Finally, a scoring module is designed to evaluate the pronunciation of a given speech by measuring the number of wrongly recovered tokens. Our proposed method is unsupervised and requires no fine-tuning. We conduct experiments on the speechocean762 dataset [7]. The experimental results demonstrate that the proposed method achieves comparable performance compared to supervised baselines and outperforms non-regression baselines in terms of the Pearson Correlation Coefficient.
## 2 Method
### Overview
An overview of our proposed method is shown in Figure 1. It consists of three main steps. The first step is to input the waveform speech audio to the convolutional neural network (CNN) encoder to get a frame sequence; the Transformer encoder takes the frame sequence as input and k-means clustering is utilized to obtain the token sequences. The second step is to apply a masking module to the frame sequence obtained in Step 1 and feed the masked sequence to the Transformer encoder followed by the k-means clustering to obtain the recovered tokens of the masked spans. Finally, a scoring module is employed to measure the number of wrongly recovered tokens based on the outputs of Step 1 and Step 2. The intuition is that for well-pronounced speech, the recovered tokens would be similar to the tokens at the corresponding positions obtained from the uncorrupted input, whereas for mispronounced speech, the recovered tokens would differ substantially from their counterparts.
### HuBERT Module
The HuBERT Module is adapted from the original HuBERT architecture [19]. This module consists of one CNN encoder, one Transformer encoder, one k-means clustering, and one masking module. Let \(X=\{x_{1},...,x_{T}\}\) denote the output of the CNN encoder with \(T\) frames. Then the Transformer encoder is employed to get latent representations of \(X\) that are further utilized to obtain the token sequences \(Z=\{z_{1},...,z_{T}\}\) through k-means clustering, where \(z_{t}\in[C]\) is a \(C\)-class categorical variable and \(t\in[T]\). \(z_{t}\) is also known as the hidden acoustic unit.
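As a rough sketch of this tokenization step (not the authors' implementation), the snippet below maps frame-level features from an intermediate HuBERT layer to discrete acoustic units with a k-means model with \(C=100\) clusters; the feature matrix and the k-means training data are random placeholders here.

```python
# Sketch: discrete acoustic units from frame-level features via k-means.
import numpy as np
from sklearn.cluster import KMeans

C_CLUSTERS = 100
rng = np.random.default_rng(0)

# Placeholder for HuBERT layer features of shape (T frames, 512 dims);
# in practice these come from the pre-trained Transformer encoder.
features = rng.normal(size=(250, 512)).astype(np.float32)

# Placeholder for the k-means model fitted offline on unlabelled speech features.
kmeans = KMeans(n_clusters=C_CLUSTERS, n_init=4, random_state=0).fit(
    rng.normal(size=(2000, 512)).astype(np.float32))

z = kmeans.predict(features)   # token sequence z_1 .. z_T, each z_t in [C]
print(z[:10])
```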
### Masking Module
To construct the masked token prediction task, we employ a masking strategy \(r\) on \(X\). If the set of indices to be masked is denoted by \(M\subset[T]\) for a sequence \(X\) with \(T\) frames, then the corrupted version is denoted by \(X^{*}=r(X,M)\), where \(x_{m}\) is replaced by a mask embedding \(x^{*}\) for \(m\in M\). Then we feed \(X^{*}\) into the same Transformer encoder and use the same k-means clustering. As a consequence, the output token sequence of masked spans is denoted by \(Z^{*}=\{z_{m}|m\in M\}\).
The masking strategy is of great importance in the proposed method. Basically, we aim to mask mispronounced segments and expect that the SSL pre-trained acoustic model recovers them with correctly pronounced tokens. However, whether the speech is mispronounced and where the mispronunciation occurs are unknown in our unsupervised setting. To address this issue, we propose two strategies for masking out potentially mispronounced segments.
#### 2.3.1 Random Masking
Random masking is a direct approach that is based on the masking strategy employed in pre-training. However, a single instance of random masking may have a lower probability of covering the mispronunciation component. To address this concern, we propose to repeat random masking \(k\) times for a given sequence \(X\). Specifically, we randomly select \(p\%\) of the frames in \(X\) as starting indices, and subsequently mask spans of \(l\) for each start index. These spans are mutually exclusive, with no overlap between them. By increasing the value of \(k\), it is possible to ensure that each frame is masked at least once.
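A possible implementation of this repeated random-span masking is sketched below (our own illustrative code, with a simple greedy rule to keep the spans mutually exclusive); it only returns the masked index sets, which are then applied to the frame sequence.

```python
# Sketch of repeated random-span masking: k rounds, p% of frames as candidate
# start indices, spans of length l, spans kept non-overlapping greedily.
import numpy as np

def random_masks(T, p=0.20, l=5, k=50, seed=0):
    rng = np.random.default_rng(seed)
    masks = []
    for _ in range(k):
        starts = rng.permutation(T)[: max(1, int(p * T))]
        masked, used = set(), np.zeros(T, dtype=bool)
        for s in np.sort(starts):
            span = list(range(s, min(s + l, T)))
            if not used[span].any():        # keep spans mutually exclusive
                used[span] = True
                masked.update(span)
        masks.append(sorted(masked))
    return masks

masks = random_masks(T=250)
print(len(masks), len(masks[0]))
```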
#### 2.3.2 Regular Masking
Regular masking is an alternative approach that masks frames in a rule-based way. This strategy involves segmenting the input into \(k\) slices of equal length. We then proceed to mask one of those segments at a time and perform inference. The process is repeated until every segment has been masked at least once. The number \(k\) of segmented slices determines the granularity of the segmentation.
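A corresponding sketch for regular masking, which simply cuts the \(T\) frames into \(k\) equal slices and masks one slice per inference pass:

```python
# Sketch of regular masking: k equal slices, one slice masked at a time.
import numpy as np

def regular_masks(T, k=10):
    bounds = np.linspace(0, T, k + 1, dtype=int)
    return [list(range(bounds[i], bounds[i + 1])) for i in range(k)]

print([len(m) for m in regular_masks(T=250, k=10)])
```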
### Scoring Module
Figure 1: Overview of the zero-shot automatic pronunciation assessment.
In order to assess the quality of speech pronunciation, we introduce the scoring module, which measures the number of incorrectly recovered tokens based on \(Z\) and \(Z^{*}\). Specifically, the average Mis-Recovered Token (aMRT) is proposed as a metric to measure the performance of pronunciation. Formally,
\[\mathbf{aMRT}=\frac{1}{k}\sum_{j=1}^{k}\sum_{i\in M_{j}}\delta(z_{i},z_{i}^{*})\]
where \(M_{j}\subset[T]\) represents the \(j\)-th set of indices to be masked, and function \(\delta\) is defined as:
\[\delta(z,z^{*})=\begin{cases}0,&z=z^{*}\\ 1,&z\neq z^{*}\end{cases}\]
A higher aMRT value corresponds to a greater number of mis-recovered tokens and thus a lower quality of pronunciation. To obtain the PCC results between our proposed metrics and ground-truth scores, we adopt the negative values of aMRT as our final metrics.
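A minimal sketch of the scoring computation (illustrative code, not the released implementation): given the clean token sequence, the recovered tokens of each masking round, and the masked index sets, aMRT counts the mismatches and averages them over the \(k\) rounds.

```python
# Sketch of the aMRT metric: average number of mis-recovered tokens per round.
import numpy as np

def amrt(Z, Z_star_rounds, masks):
    """Z: (T,) clean tokens; Z_star_rounds[j]: recovered tokens of round j;
    masks[j]: masked indices of round j."""
    k = len(masks)
    total = sum(int(Z[i] != Z_star_rounds[j][i]) for j in range(k) for i in masks[j])
    return total / k

Z = np.array([3, 7, 7, 2, 9, 9, 1, 4])
Z_star = [np.array([3, 7, 5, 2, 9, 0, 1, 4])]   # one masking round, for illustration
print(-amrt(Z, Z_star, masks=[[2, 5]]))         # score is the negative aMRT
```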
## 3 Experiments
### Dataset
We conduct experiments on the dataset speechocean762 [7], which is specifically designed for pronunciation assessment. This open-source speech corpus is composed of 5,000 English utterances collected from 250 non-native speakers, half of whom are children. The corpus provides rich label information including phoneme, word, and sentence levels, and includes assessment scores ranging from 0 to 10 annotated by five experts. Our proposed approach is evaluated at the sentence level on the test set, which contains 2,500 utterances. We choose this public dataset for easy reproduction and comparison.
### Baseline Models
We compare our proposed method with regression-based and non-regression-based baselines. The regression-based baselines include GoP [3, 20], DeepFeature1[8], and the state-of-the-art GOPT [12], all of which are supervised with human-annotated pronunciation scores. The non-regression-based baseline, on the other hand, utilizes the average phoneme-level GoP over the entire sentence as the measurement, and is referred to as non-reg GoP. This method does not require score annotations but instead uses a supervised ASR model.
Footnote 1: DeepFeature refers to the methods in [8] using deep features of the ASR acoustic model
### Experimental Setup
We utilize the HuBERT-Base2 model and adopt the CNN encoder, Transformer encoder, and k-means clustering in the experiments. HuBERT-Base is pre-trained on the LibriSpeech-960 [21], and the k-means with 100 clusters is fitted on LibriSpeech train-clean-100 split as per [22] using intermediate representations from HuBERT-Base. The output of the 7th layer of the Transformer Encoder is chosen as the default feature for clustering, as the resulting acoustic units perform well in discrimination tests [19, 17, 23]. We set masking probability \(p=20\%\), masking length \(l=5\), and repeating times \(k=50\) as the default. Each experiment is repeated three times with three different random seeds \(\{13,21,100\}\), and the mean and standard deviation of the results are reported. Prior to performing the inference steps, all input audios are resampled with 16000 as the sample rate. The non-reg GoP is computed using Kaldi [24] to obtain the average phoneme-level GoP of the entire sentence. The ASR model utilized in this calculation is Librispeech ASR Chain Model3, as per [12].
Footnote 2: [https://github.com/pytorch/fairseq](https://github.com/pytorch/fairseq)
### Main Results
Two comparative studies are conducted to assess the effectiveness of the proposed method. The first study involves PCC performance comparison between our proposed method with regression-based and non-regression-based baselines, while the second study compares the PCC performance of different masking strategies.
The performances of various regression-based baselines and non-regression-based baselines are presented in Table 1. The results indicate that, compared to regression-based baselines, the proposed method lags behind the basic supervised baseline by only a small margin of 0.04 PCC, although a larger gap of 0.14 PCC remains with respect to the state-of-the-art supervised baseline. Notably, this is achieved by leveraging only the acoustic knowledge of HuBERT-Base acquired during pre-training, without the usage of annotated scores.
Furthermore, in comparison with the non-regression-based baseline, our proposed method shows a performance improvement of 0.03 PCC over the non-reg GoP. It is noteworthy that non-reg GoP requires an ASR model, while our method does not, underscoring the effectiveness of our ASR-free approach.
Table 2 presents the performance comparison of two masking strategies employed in this study. The results show that random masking achieves superior performance with an improvement of 0.014 PCC over regular masking. We conjecture that this may be due to the fact that the input distribution with random masking is closer to the input distribution during pretraining, leading to enhanced performance. In addition, the experimental results reveal that random masking exhibits a low variance, indicating the stability of the method.
| **Model** | **PCC** |
| --- | --- |
| **Regression based** | |
| GoP [3] | 0.64 |
| GoP (2BLSTM+MLP) [20] | 0.67 |
| DeepFeature [8] | 0.72 |
| GOPT [12] | 0.74 |
| **Non-regression based** | |
| non-reg GoP | 0.57 |
| Ours | 0.60 |

Table 1: Comparison of our method with regression-based and non-regression-based baselines on speechocean762
| **Masking Strategy** | **PCC** |
| --- | --- |
| Random Masking | \(0.595\pm 0.002\) |
| Regular Masking | 0.581 |

Table 2: Comparison of two masking strategies. The standard deviation of Random Masking is reported.
### Impact of masking hyperparameters
#### 3.5.1 Random Masking
In order to further examine the impact of various hyperparameters of random masking, including masking probability, masking length, and feature layers used for clustering on the final results, three additional experiments are carried out. The results are presented in Figure 2.
Subfigure 2(a) illustrates the impact of the mask probability on the PCC results, with the mask probability ranging from 0.1 to 0.5 with an interval of 0.1. The mask length is set to 5, and the feature layer is set to 7. The results indicate that a mask probability of 0.3 yields the best performance, while both higher and lower mask probabilities produce inferior outcomes. This observation may be attributed to the fact that a high mask probability may discard essential information that is required for reconstruction, whereas a low mask probability may decrease the chance of masking mispronounced parts.
Subfigure 2(b) showcases how the length of each masked span affects the PCC results. The mask length ranges from 2 to 10 with an interval of 2, while the mask probability is set to 0.2, and the feature layer is set to 7. The curve of this figure suggests a linear decrease in performance as the length increases. This phenomenon may stem from the pre-trained HuBERT-Base's limited ability to recover a long masked span given the context.
Apart from the aforementioned factors, this study also investigates the degree to which the features used for clustering contribute to pronunciation assessment. Therefore, the features from the layers of the Transformer encoder ranging from 7 to 12 are examined. The outcomes presented in subfigure 2(c) reveal that using features from the 9th layer results in the best PCC performance. Generally, features from the 7th to 10th layers are useful for pronunciation assessment, whereas deeper features lead to poorer performance.
#### 3.5.2 Regular Masking
For regular masking, we mainly investigate the impact of the slice number, namely how the mask granularity affects the PCC results. The results are presented in Figure 3. Our findings suggest that a more refined granularity of a single mask span does not necessarily lead to improved performance. One potential explanation is that the use of a single mask span causes a shift from the input distribution seen during pre-training, leading to poor performance. In addition, shorter masked spans may fail to cover entire words or even phonemes, which can have an adverse impact on the results.
## 4 Discussion
While our zero-shot method achieves results comparable to supervised methods, it is essential to acknowledge that our method differs from the canonical-text-based pronunciation assessment. Our method draws on the acoustic knowledge obtained during pre-training, and thus, even if the transcription is different from the canonical text, a speech that is accurately pronounced may still receive a high score. Moreover, our method is limited to sentence-level assessment, and the exploration of unsupervised pronunciation assessment at the phoneme and word levels will be left as future work. The objective of this study is to establish a baseline and provide a pilot study of unsupervised pronunciation assessment.
## 5 Conclusion
In this paper, we present a zero-shot automatic pronunciation assessment approach. Instead of training regression models or using ASR models to compute GoP, we directly utilize an SSL pre-trained acoustic model and use the acoustic knowledge acquired from pre-training. To perform the ASR-free pronunciation assessment, we design two masking strategies and a novel evaluation metric to score the pronunciation of a given utterance at the sentence level. Experimental results on speechocean762 show that our method achieves performance comparable to the supervised regression-based baselines and outperforms the non-regression-based baseline. In the future, we hope to extend this research line of unsupervised pronunciation assessment to the phoneme and word levels.
## 6 Acknowledgements
The authors would like to thank anonymous reviewers for their valuable suggestions. This project is funded in part by a research grant MOESOL-2021-0017 from the Ministry of Education in Singapore.
Figure 3: Impact of slice number on PCC results
Figure 2: Impact of (a) mask probability, (b) mask length, and (c) feature layer on PCC results | Automatic Pronunciation Assessment (APA) plays an important role in computer-assisted language learning. Conventional methods train Automatic Speech Recognition (ASR) models on annotated speech-text data, or train regression models on speech-score data. In this work, we propose a novel zero-shot APA method based on the pre-trained acoustic model HuBERT. The method encodes the speech input and corrupts it with a masking module, then employs the Transformer encoder and k-means clustering to obtain token sequences, and finally a scoring module measures the number of wrongly recovered tokens. Experimental results on speechocean762 show that the proposed method achieves performance comparable to supervised regression baselines and outperforms non-regression baselines in terms of the Pearson Correlation Coefficient (PCC).
2308.16874 | D-VAT: End-to-End Visual Active Tracking for Micro Aerial Vehicles | Visual active tracking is a growing research topic in robotics due to its key
role in applications such as human assistance, disaster recovery, and
surveillance. In contrast to passive tracking, active tracking approaches
combine vision and control capabilities to detect and actively track the
target. Most of the work in this area focuses on ground robots, while the very
few contributions on aerial platforms still pose important design constraints
that limit their applicability. To overcome these limitations, in this paper we
propose D-VAT, a novel end-to-end visual active tracking methodology based on
deep reinforcement learning that is tailored to micro aerial vehicle platforms.
The D-VAT agent computes the vehicle thrust and angular velocity commands
needed to track the target by directly processing monocular camera
measurements. We show that the proposed approach allows for precise and
collision-free tracking operations, outperforming different state-of-the-art
baselines on simulated environments which differ significantly from those
encountered during training. Moreover, we demonstrate a smooth real-world
transition to a quadrotor platform with mixed-reality. | Alberto Dionigi, Simone Felicioni, Mirko Leomanni, Gabriele Costante | 2023-08-31T17:21:18 | http://arxiv.org/abs/2308.16874v2 | # D-VAT: End-to-End Visual Active Tracking
###### Abstract
Visual active tracking is a growing research topic in robotics due to its key role in applications such as human assistance, disaster recovery, and surveillance. In contrast to passive tracking, active tracking approaches combine vision and control capabilities to detect and actively track the target. Most of the work in this area focuses on ground robots, while the very few contributions on aerial platforms still pose important design constraints that limit their applicability. To overcome these limitations, in this paper we propose D-VAT, a novel end-to-end visual active tracking methodology based on deep reinforcement learning that is tailored to micro aerial vehicle platforms. The D-VAT agent computes the vehicle thrust and angular velocity commands needed to track the target by directly processing monocular camera measurements. We show that the proposed approach allows for precise and collision-free tracking operations, outperforming different state-of-the-art baselines on simulated environments which differ significantly from those encountered during training.
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
## I Introduction
Micro aerial vehicles (MAVs) are gaining increasing interest thanks to their agility and low cost, which make them suitable for a wide variety of robotic tasks, especially those performed in cluttered or dangerous environments. Applications include transportation, exploration, surveillance, and tracking [1]. In this paper, we focus on the visual active tracking (VAT) task, which requires a _tracker_ vehicle to maintain visual contact with a dynamic _target_. In contrast to passive tracking, where the pose of the camera is fixed, active tracking approaches actively regulate the camera pose by suitably controlling the vehicle, in order to keep the target inside the camera field-of-view (FoV). The VAT problem is far more challenging than passive tracking as it requires to directly map high-dimensional image data into suitable control actions. Previous research on this problem combined a dedicated perception module (_e.g._, an object detector) with a separate closed-loop control module for the vehicle motion [2, 3, 4]. This approach has two fundamental limitations: (i) the two modules are designed separately and not jointly optimized; (ii) their combination requires extra effort for tuning and implementation.
A viable alternative to overcome these drawbacks is to adopt end-to-end deep reinforcement learning (DRL), which has already shown impressive results in many fields of robotics [5, 6, 7, 8]. Recently, this paradigm has been explored for VAT [9, 10]. Most of the related works focus on ground robots and take advantage of the physical characteristics of these platforms (_i.e.,_ low dimensionality of the configuration space and limited number of possible actions) to facilitate the design of VAT policies. However, much less attention has been devoted to more complex platforms such as MAVs, which require a more sophisticated policy to be learned by the DRL agent. State-of-the-art (SotA) works have addressed this issue by relying on some simplifying assumptions, _e.g.,_ by ignoring the vehicle dynamics [11] or by constraining the possible control actions to a predefined subset of the action space [12]. Solutions based on these simplifications are, in general, less robust and performing.
In this paper, we aim to remove these assumptions and propose D-VAT, a novel end-to-end DRL-based continuous control model for visual active tracking that is tailored to MAV systems. D-VAT relies on a monocular setup, _i.e.,_ it requires only an RGB image stream collected by an onboard camera to directly compute the thrust and angular velocity commands needed to track the target with high accuracy (see [13] for a justification of such commands). To the best of our knowledge, this is the first end-to-end approach that solves the VAT problem for MAVs without severely constraining the motion of the target or the tracker vehicle. We compare D-VAT to both model-based and data-driven SotA strategies on photorealistic simulated environments considerably different from those employed during training, where it achieves a much better tracking performance than these methods.
The rest of this work is organized as follows: Section II contains the literature review and details the paper contribution; Section III provides the preliminary definitions; Section IV formalizes the considered tracking problem; Section V describes the experiments and discusses the results; Section VI draws the conclusions and outlines future research directions.

Fig. 1: Overview of the VAT task. The _tracker_ MAV (blue) adjusts its position and orientation so as to keep the _target_ MAV (red) at the center of the camera FoV and at a predefined distance. Our approach exploits an end-to-end DRL-based VAT method that directly maps RGB images into thrust and angular velocity commands that are fed to the tracker.
## II Related Work
In recent years, VAT has become a central research topic in robotics. VAT applications consider either pan-tilt-zoom (PTZ) vision sensors attached to a fixed base or cameras mounted on robotic vehicles to meet the goal of keeping the tracked object in sight. For instance, [14] presents a visual tracking solution that enables a PTZ camera to track the behavior of a moving person in surveillance applications. Along the same line, [15] proposes a two layer architecture for real-time human motion tracking. In the context of mobile robots, VAT takes advantage of the control degrees of freedom of the vehicle to maintain the visibility of the tracked object. Most of the related approaches employ modular architectures that combine passive perception and motion control components [2, 3, 4]. In particular, [16] couples the perception module with a low-level controller based on DRL. The former computes semantic segmentation maps from RGB images to obtain an intermediate representation that facilitates the agent in controlling the vehicle. Despite the significant results achieved by modular approaches such as the above ones, the combination of perception and control components poses, in general, important challenges. First, the modules are designed independently and not jointly optimized, reducing the effectiveness of the overall pipeline. Secondly, their integration is usually based on several tuning parameters whose optimal values are non-trivial to determine. Moreover, a performance drop in one module might cause the overall system to fail.
The aforementioned challenges can be addressed by leveraging DRL techniques [8, 17, 18]. A vast literature is available on DRL-based VAT approaches for ground vehicle systems. [19] proposes an end-to-end deep neural network architecture to train a DRL agent in simulated environments and takes advantage of domain randomization in order to favor generalization to real-world scenarios. [20] develops an asymmetric dueling training procedure employing an adversarial target that stimulates the development of an effective policy. In [10], the assumption of having the target within the camera FoV at the beginning of the maneuver is removed, so that the agent is able to explore an unknown environment, find the target and track it. All these approaches feature a discrete action space and therefore they cannot explore the full performance envelope of the vehicle. In fact, the resulting maneuvers are non-smooth and prone to losing visual contact with the target. An end-to-end architecture that exploits continuous actions is presented in [9].
Compared to ground robots, the design of learning-based policies for MAVs is significantly more challenging. In [21], a multi-layer perceptron is coupled with a low-level PID controller in order to stabilize the MAV hovering configuration. This method employs absolute position measurements provided by motion capture system, and does not address the VAT problem. A VAT solution is proposed in [22] to allow a MAV to fly and track a moving object. In particular, the control system of the MAV is designed to track ground targets by processing down-looking images, which precludes the application of the method to scenarios featuring front-looking cameras and flying targets. [11] presents an active tracking module for MAVs equipped with a pan-tilt camera that is able to track a person in various complex scenes. Nonetheless, the MAV dynamics are not fully exploited in the design of the control policy and the action space is discrete, which poses a hard limit on the achievable performance. A continuous action space is considered in [12], where a RL-based policy is coupled with a low-level PID control layer. However, the positioning of the MAV is constrained to a plane and thus the tracker is not free to move in 3D. Very few studies addressed the VAT problem for MAVs without relying on restrictive assumptions on the motion of the target-tracker pair. The recent work [23] tackles this problem by adopting an image-based visual servoing approach that features a modular design similar to those discussed at the beginning of this section. Nevertheless, such a design leads to position and orientation errors in the order of 1 m and 0.1 rad, respectively, and it requires full attitude information.
### _Contribution_
As highlighted by the previous literature review, an increasing number of studies is focusing on VAT in the context of MAV applications. Model-based techniques (see, _e.g.,_[23]) present design and integration issues that inherently limit their performance and entail tracking errors that may limit their applicability. On the other hand, existing learning-based approaches are affected by different constraints: (i) the target lies on a plane [22]; (ii) the tracker is controlled by discrete actions [11]; (iii) the agent is trained with continuous actions that are confined to a subset of the tracker action space [12]. To overcome these limitations, in this paper we provide the following contributions:
* We propose D-VAT, a novel end-to-end DRL continuous control model for VAT applications involving MAVs.
* The proposed DRL policy directly maps RGB image data into thrust and angular velocity commands, and does not make restrictive assumptions on the trajectories of both the tracker and the target.
* We show the benefits of D-VAT by comparing it against different model-based and data-driven SotA approaches. Our approach outperforms the baselines also in scenarios that differ substantially from the training ones, demonstrating remarkable generalization capabilities.
## III Preliminary Definitions
The optimization of RL models requires a significant number of interactions with the environment and this number becomes massive when deep approximators come into play. In practice, this excludes the possibility of using real MAVs to collect interaction episodes, both for efficiency and safety reasons. To overcome this issue, highly photorealistic simulation frameworks can be used to generate an unlimited amount of episodes and train the DRL models without any physical risk to the vehicle. In this work, we follow this practice and optimize our D-VAT model in simulated environments.
Before detailing the characteristics of D-VAT and its training procedure, in this section we describe the dynamic model which is integrated into the simulation engine to generate realistic motions. In particular, we follow [24] and consider a surrogate model in which the tracker is controlled by thrust and angular velocity inputs. The model is given by:
\[\ddot{p} = \frac{f}{m}R_{3}-g \tag{1}\] \[\dot{R} = R\,[\omega]_{\times}\]
In system (1), \(p\) and \(R\) are the tracker absolute position and orientation, while \(m\) and \(g=[0\ 0\ 9.8]^{\top}\mathrm{m\,s^{-2}}\) are the vehicle mass and the gravity vector, respectively. Moreover, \(f\) and \(\omega\) indicate the collective thrust and the angular velocity inputs. The notation \([\omega]_{\times}\) refers to the skew-symmetric representation of vector \(\omega=[\omega_{x}\,\omega_{y}\,\omega_{z}]^{T}\). Since our DRL optimization framework is discrete-time, we apply a zero-order-hold discretization to system (1) and denote by \(z(k)\) the value taken by a signal \(z(t)\) at the sampling instant \(t=kt_{s}\), where \(t_{s}\) is the sampling time. The motion of the target is modeled by a parameterized class of trajectories denoted by \(p_{r}(k)\), as detailed in Section IV-D. It is important to highlight that D-VAT is trained in a model-free manner and has no explicit information about the dynamics (1). The simulation model is only used to generate realistic MAV trajectories.
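As an illustration only (not the simulator used in this work), the snippet below integrates the surrogate model (1) with a simple first-order update, holding the thrust and angular-velocity inputs constant over each sampling interval; mass, sampling time, and input values are assumed.

```python
# Sketch: one zero-order-hold step of the surrogate tracker model (1).
import numpy as np

def skew(w):
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def step(p, v, R, f, omega, m=1.0, ts=0.02):
    g = np.array([0.0, 0.0, 9.8])
    a = (f / m) * R[:, 2] - g                      # \ddot{p} = (f/m) R_3 - g
    p_next = p + ts * v
    v_next = v + ts * a
    R_next = R @ (np.eye(3) + ts * skew(omega))    # first-order update of Rdot = R [omega]_x
    return p_next, v_next, R_next

p, v, R = np.zeros(3), np.zeros(3), np.eye(3)
p, v, R = step(p, v, R, f=9.8, omega=np.array([0.0, 0.0, 0.1]))  # assumed inputs
print(p, v)
```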
## IV Approach
### _Problem Formulation_
The goal of VAT is to control the motion of a tracker agent equipped with a vision sensor, so as to maintain the target within the FoV of the camera and at a predefined distance. In this paper, we assume that both the tracker and the target are MAVs that are free to move in 3D. The vision sensor is an RGB camera whose reference frame is coincident with the tracker body-fixed frame. In particular, the optical axis is aligned with the \(x\)-axis direction. At the beginning of the VAT task, the target is located ahead of the tracker (within the camera FoV), and starts moving along a time-varying trajectory. The tracker employs only the image stream coming from its front camera as a source of information and computes the thrust and angular velocity commands needed to meet the control goal. Similarly to other complex navigation and control tasks, VAT can be tackled by formulating a suitable reinforcement learning (RL) problem [25]. In particular, we treat the tracker as an RL agent which repeatedly interacts with an environment over a series of independent episodes. For each discrete timestep, the agent receives an observation \(o(k)\), a reward \(r(k)\), and produces an action \(u(k)\). The observation is given by the aforementioned sequence of camera images, while the action is a continuous command that specifies the thrust and the angular velocity of the tracker MAV, _i.e.,_\(u(k)=(f(k),\omega(k))\). The reward is defined in Section IV-C.
### _Deep Reinforcement Learning Strategy_
The proposed end-to-end VAT strategy relies on a monocular setup and requires only an RGB image stream collected by the onboard camera to directly compute the MAV control commands. RGB images are partial observations of the full MAV state and are composed of a large number of pixels that form a huge observation space. For this reason, it is not viable to train the agent using classical RL algorithms, and more advanced solutions based on Deep Neural Network (DNN) approximators must be applied. In particular, we adopt the _asymmetric actor-critic_ formulation [26, 10]. According to this framework [25], we design two different DNN architectures for the _actor_ (A-DNN) and for the _critic_ (C-DNN). The former learns the optimal policy \(u(k)=\pi(o(k))\) with respect to the given task, while the latter aims to evaluate such a policy during the training phase. The asymmetric structure of this framework allows the critic network to be fed with more privileged information than the actor network, thus stimulating the development of an effective policy evaluation. It is worth remarking that the A-DNN is the only agent operating at inference time.
The A-DNN is a convolutional neural network composed of a ResNet18 [27] and three additional hidden layers, each one characterized by 512 neurons and ReLU activations. In order to learn temporal relations, the proposed A-DNN design processes a sequence of \(H\) front-view camera images. This turned out to play a key role in improving the tracking performance. The image sequence is given by
\[o(k)\!=\!\left[\begin{array}{cc}I(k)&I(k-1)&\ldots&I(k-H+1)\end{array} \right]^{T}, \tag{2}\]
where \(I(k)\) is the RGB frame acquired at the \(k\)-th time step. Moreover, the A-DNN extracts 512 visual features from each image through its convolutional block. Subsequently, the \(H\times 512\) features are concatenated and fed to the linear layers to compute the action. The control actions are saturated to be consistent with the physical characteristics of MAV actuators. In particular, a \(\tanh\) saturation is adopted to confine the action values computed by the A-DNN within prescribed limits (see angular rate and thrust limits in Table I).
The C-DNN design consists of a fully connected neural network with three hidden layers, each one composed of 256 neurons and ReLU activations. The correct selection of the inputs to the C-DNN is, in general, nontrivial. In this work, we explored different possibilities and selected the input set that we found to be the most informative without unnecessarily increasing the network complexity. In particular, we define the observation of the C-DNN as a vector \(o_{c}(k)\) representing the relative state as follows:
\[o_{c}(k)\!=\!\left[\begin{array}{c}y(k)\\ v(k)\\ a(k)\end{array}\right]\!=\!\left[\begin{array}{c}R(k)^{T}[p_{r}(k)-p(k)]\\ R(k)^{T}[\dot{p}_{r}(k)-\dot{p}(k)]\\ R(k)^{T}[\ddot{p}_{r}(k)-\ddot{p}(k)]\end{array}\right], \tag{3}\]
where \(y(k)\), \(v(k)\) and \(a(k)\) denote respectively the position, velocity and acceleration of the target relative to the tracker, expressed in the tracker body-fixed frame. The C-DNN output is a scalar representing the estimated _action-value_\(Q_{\pi}(o_{c}(k),u(k))\). The overall design is illustrated in Fig. 2.
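For clarity, a small sketch of how the privileged critic observation (3) can be formed from the absolute states of the two vehicles (placeholder values; the actual states come from the simulator):

```python
# Sketch: critic observation o_c(k) = relative position/velocity/acceleration
# of the target, rotated into the tracker body-fixed frame.
import numpy as np

def critic_observation(R, p, dp, ddp, p_r, dp_r, ddp_r):
    y = R.T @ (p_r - p)
    v = R.T @ (dp_r - dp)
    a = R.T @ (ddp_r - ddp)
    return np.concatenate([y, v, a])   # 9-dimensional o_c(k)

o_c = critic_observation(np.eye(3), np.zeros(3), np.zeros(3), np.zeros(3),
                         np.array([1.5, 0.0, 0.0]), np.zeros(3), np.zeros(3))
print(o_c.shape)
```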
### _Optimization_
The A-DNN and the C-DNN are both trained by using the popular RL-based Soft Actor-Critic (SAC) framework
[28], where the reward signal \(r(k)\) is specifically designed to address the VAT problem in MAV scenarios, taking into account the distinctive characteristics and requirements of the considered control task. In particular, the main control objective is to align the target with the center of the tracker camera FoV while keeping a predefined distance between the two vehicles. To this end, the reward is defined as:
\[r_{e}(k)=(r_{x}(k)\,r_{y}(k)\,r_{z}(k))^{\beta}, \tag{4}\]
where \(\beta>0\) is a suitable exponent and
\[r_{x} = \max(0,1-|y_{x}(k)-d_{r}|),\] \[r_{y} = \max\left(0,1-\left|\frac{2}{A_{\text{FoV}}}\arctan\left(\frac{y_ {y}(k)}{y_{x}(k)}\right)\right|\right), \tag{5}\] \[r_{z} = \max\left(0,1-\left|\frac{2}{A_{\text{FoV}}}\arctan\left(\frac{y _{z}(k)}{y_{x}(k)}\right)\right|\right).\]
In Eq. (5), \(r_{x}\) is maximal when the first entry of \(y(k)=[y_{x}(k)\,y_{y}(k)\,y_{z}(k)]^{T}\) matches the desired distance \(d_{r}\) to the target (\(d_{r}\) is specified along the \(x\)-axis of the body-fixed frame, which is assumed coincident with the optical axis). Moreover, \(r_{y}\) and \(r_{z}\) are functions that encourage the agent to keep the target at the center of the image plane and thus away from the camera FoV limits, being \(A_{\text{FoV}}\) the FoV amplitude in radians. The reward term \(r_{e}(k)\) in (4) is clipped in the interval \([0,\ 1]\) to favor the learning process, and it is maximal (\(r_{e}=1\)) when the VAT goal is achieved.
Two additional reward terms are included in the formulation to also penalize the control effort and the MAV linear velocity. In particular, we define a velocity penalty \(r_{v}\) and a control effort penalty \(r_{u}\) as follows:
\[r_{v}(k)=\frac{\|v(k)\|}{1+\|v(k)\|},\ \ r_{u}(k)=\frac{\|u(k)\|}{1+\|u(k)\|}. \tag{6}\]
Collision avoidance constraints are taken into consideration by penalizing the RL agent whenever \(\|y(k)\|<d_{m}\), where \(d_{m}\) is the minimum distance allowed.
The reward function is obtained by adding up all the above contributions, which results in:
\[r(k)=\begin{cases}r_{e}(k)-k_{v}\,r_{v}(k)-k_{u}\,r_{u}(k),&\|y(k)\|>d_{m}\\ -k_{c},&\text{otherwise}\end{cases} \tag{7}\]
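A minimal sketch of the complete reward computation in Eqs. (4)-(7) is given below; all numerical values (\(d_{r}\), \(A_{\text{FoV}}\), \(\beta\), the gains \(k_{v}\), \(k_{u}\), \(k_{c}\), and \(d_{m}\)) are illustrative placeholders, and arctan2 is used as a numerically safe stand-in for the arctangent of the ratio in Eq. (5).

```python
# A sketch only (not the authors' code) of the reward in Eqs. (4)-(7); the
# parameter values are illustrative placeholders.
import numpy as np

def reward(y, v, u, d_r=4.0, a_fov=1.5, beta=1.0,
           k_v=0.1, k_u=0.1, k_c=10.0, d_m=1.0):
    """y, v: relative position/velocity (body frame); u: control action."""
    if np.linalg.norm(y) <= d_m:                        # collision penalty, Eq. (7)
        return -k_c
    r_x = max(0.0, 1.0 - abs(y[0] - d_r))
    r_y = max(0.0, 1.0 - abs(2.0 / a_fov * np.arctan2(y[1], y[0])))
    r_z = max(0.0, 1.0 - abs(2.0 / a_fov * np.arctan2(y[2], y[0])))
    r_e = np.clip((r_x * r_y * r_z) ** beta, 0.0, 1.0)  # Eqs. (4)-(5)
    r_v = np.linalg.norm(v) / (1.0 + np.linalg.norm(v)) # velocity penalty, Eq. (6)
    r_u = np.linalg.norm(u) / (1.0 + np.linalg.norm(u)) # control-effort penalty, Eq. (6)
    return r_e - k_v * r_v - k_u * r_u

print(reward(np.array([4.0, 0.0, 0.0]), np.zeros(3), np.zeros(4)))  # ideal tracking -> 1.0
```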
### _Experimental Setup_
The A-DNN and C-DNN have been optimized by using the Stable-Baselines3[30] implementation of SAC, which we customize2 to extend it to the _asymmetric actor-critic_ formulation of our approach. The networks have been optimized for approximately 18,000 episodes executed in 6 parallel environments, using the Adam optimizer with a learning rate of 0.0003, a discount factor \(\gamma\) of 0.99, and a batch size of 64. Each training episode has a maximum duration of \(40\) s, and the observation sequence length for the A-DNN is set to \(H=3\). The other hyper-parameters and settings are reported in Table I. The training process is performed on a workstation equipped with 2 x NVIDIA RTX 2080Ti with 11GB of VRAM, an Intel Core processor i7-9800X (3.80GHz x16) and 64 GB of DDR4 RAM.
Footnote 2: The source code will be available upon acceptance.
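For reference, the sketch below shows how an SAC agent with the hyper-parameters quoted above could be instantiated with the off-the-shelf Stable-Baselines3 API. It is not the customized asymmetric actor-critic implementation used in this work, and "DVatEnv-v0" is a hypothetical Gym-style environment identifier that would have to be registered beforehand.

```python
# A sketch only: plain SAC from Stable-Baselines3 with the quoted hyper-parameters;
# "DVatEnv-v0" is a hypothetical environment id.
from stable_baselines3 import SAC
from stable_baselines3.common.env_util import make_vec_env

env = make_vec_env("DVatEnv-v0", n_envs=6)       # 6 parallel training environments
model = SAC(
    "CnnPolicy", env,
    learning_rate=3e-4,                          # Adam step size
    gamma=0.99,                                  # discount factor
    batch_size=64,
)
model.learn(total_timesteps=2_000_000)           # illustrative training budget
model.save("d_vat_sac")
```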
Our approach is tested on two environment classes: the first one contains scenes similar to those used during the training phase, although with different room shapes, objects disposition, and textures (we refer to these scenes as Box Environments). The second is, instead, aimed at testing the generalization capabilities of D-VAT and has more complex and photo-realistic environments, _i.e.,_ an outdoor urban scenario (Urban), an outdoor park environment (Park), and an indoor scene of an office building (Office). These are depicted in Fig. 4 and are significantly different from the ones used to train our model.
We run a total of 20 maneuver realizations for each test environment. In each run, the tracker is spawned at a random initial position, while the target is initially placed in front of the tracker at the optimal distance. To assess the generalization capabilities of our approach, we also test target trajectories that differ from the training ones. In particular, we consider constant setpoints and rectilinear trajectories with different shapes such as ramp-like and cubic. In the following, the D-VAT agent is compared to the SotA baselines described hereafter.
### _Baselines_
**Active Object Tracking (AOT)[19]**. In this approach, the agent is trained to track predefined target trajectories by using discrete actions. To comply with the dynamic model (1), which takes as input the collective thrust and angular velocity of the MAV, we define the action set as follows: \(\{+\Delta\omega_{x},-\Delta\omega_{x},\)\(+\Delta\omega_{y},-\Delta\omega_{y},\)\(+\Delta\omega_{z},-\Delta\omega_{z},\)\(+\Delta f,-\Delta f,\)\(no\_op\) }, where the operator \(\Delta\) indicates a fixed increment of thrust or angular velocity and \(no\_op\) prescribes a zero thrust or angular velocity increment. The size of the \(\Delta\) increments has been manually tuned to meet the task specifications.
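A possible encoding of this discrete action set is sketched below; the increment sizes are placeholders standing in for the manually tuned values.

```python
# A sketch of the discrete action set of the AOT baseline; each entry is an
# increment applied to (omega_x, omega_y, omega_z, f). The increment sizes are
# placeholders for the manually tuned values.
D_W, D_F = 0.1, 0.5                  # angular-rate [rad/s] and thrust [N] increments
AOT_ACTIONS = {
    0: ( D_W, 0.0, 0.0, 0.0),  1: (-D_W, 0.0, 0.0, 0.0),   # +/- omega_x
    2: (0.0,  D_W, 0.0, 0.0),  3: (0.0, -D_W, 0.0, 0.0),   # +/- omega_y
    4: (0.0, 0.0,  D_W, 0.0),  5: (0.0, 0.0, -D_W, 0.0),   # +/- omega_z
    6: (0.0, 0.0, 0.0,  D_F),  7: (0.0, 0.0, 0.0, -D_F),   # +/- thrust
    8: (0.0, 0.0, 0.0, 0.0),                               # no_op
}
```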
**AD-VAT+[20]**. The model policy is learned during the adversarial dueling against the target, which is itself an RL agent. This approach employs the same discrete action space as the AOT baseline.
**C-VAT[9]**. The model is optimized using a target that is randomly spawned in the surroundings of the tracker. In particular, a heuristic trajectory generator (HTG) is combined with a suitable set of auxiliary losses in order to facilitate the convergence of the training process. Herein, we implement the HTG with a Linear Quadratic Gaussian (LQG) controller that exploits ground truth pose information to control the tracker so as to achieve the VAT goal. Moreover, the auxiliary losses in [9] have been extended to a 3D environment.
**SiamRPN++ PID**. This modular baseline combines the object tracker SiamRPN++[31] with a standard MAV control architecture featuring two Proportional-Integral-Derivative (PID) feedback loops. In order to achieve the VAT goal, the outer loop processes the bounding box information provided by SiamRPN++ (_i.e.,_ position and size of the bounding box enclosing the target) to compute roll, pitch, yaw, and thrust signals that are fed to the inner (attitude control) loop. The PID parameters have been tuned using a trial and error approach on relevant scenarios, so as to achieve a suitable trade-off between reactivity to tracking errors and sensitivity to noise. The inner loop needs attitude information and, in our tests, we provide the ground-truth attitude angles returned by the simulator.
Fig. 4: Images from the photo-realistic environments employed to test the generalization capabilities of D-VAT. From left to right: an urban setting (Urban), a park environment (Park), and an office space (Office). It should be noted that the visual appearance of these scenarios differs significantly from the scenes used during training.
Fig. 3: Examples of the training environment randomization. The tracker (blue) and the target (red) MAVs are spawned in a large room with random characteristics including walls height, objects shape and disposition, textures, light conditions, and presence of distracting objects in the background.
This baseline is favored with respect to D-VAT because it has access to privileged information, _i.e.,_ the attitude of the MAV.
**SiamRPN++ LQG**. This modular baseline combines SiamRPN++ with a model-based design that couples feedback linearization and a linear control law (see, _e.g.,_[32]). In particular, we adopt a Linear-Quadratic-Gaussian (LQG) design. The resulting policy uses the bounding box information to regulate directly the thrust and angular velocity of the tracker so as to meet the VAT objective. The LQG weights have been tuned extensively to achieve a fair trade-off between performance and robustness. This baseline requires attitude information (to linearize the MAV dynamics by feedback) and hence it is favored with respect to D-VAT.
### _Metrics_
To evaluate the performance of D-VAT against that of the baselines, we adapted the tracking metrics in [9, 10] to a 3D environment. For convenience, the metrics are defined by expressing the ground-truth position of the target relative to the tracker in a spherical coordinate system, whose axes are aligned with those of the tracker body-fixed frame. The spherical coordinates are denoted by \((\rho,\theta,\varphi)\). The considered metrics are detailed below.
**Distance Score**: measures the ability of the tracker to maintain the desired distance from the target, as follows
\[\tilde{P}_{\rho}(k)=\begin{cases}\max\left(0,1-2|\rho(k)-d_{r}|\right),&\text{if}\ \ |\theta(k)|<\frac{A_{\text{FoV}}}{2}\ \text{and}\ |\varphi(k)|<\frac{A_{\text{FoV}}}{2}\\ 0,&\text{otherwise}\end{cases}\]
**Elevation Score**: measures the ability of the tracker to maintain the target vertically aligned to the center of the FoV, as follows
\[\tilde{P}_{\theta}(k)=\begin{cases}\max\left(0,1-\frac{2|\theta(k)|}{A_{\text{FoV}}}\right),&\text{if}\ \ |\varphi(k)|<\frac{A_{\text{FoV}}}{2}\ \text{and}\ |\rho(k)-d_{r}|<0.5\\ 0,&\text{otherwise}\end{cases}\]
**Azimuth Score**: measures the ability of the tracker to maintain the target horizontally aligned to the center of the FoV, as follows
\[\tilde{P}_{\varphi}(k)=\begin{cases}\max\left(0,1-\frac{2|\varphi(k)|}{A_{\text{FoV}}}\right),&\text{if}\ \ |\theta(k)|<\frac{A_{\text{FoV}}}{2}\ \text{and}\ |\rho(k)-d_{r}|<0.5\\ 0,&\text{otherwise}\end{cases}\]
**Total Score**: it is the arithmetic mean of the above metrics, given by \(\tilde{P}_{c}(k)=(\tilde{P}_{\rho}(k)+\tilde{P}_{\theta}(k)+\tilde{P}_{\varphi }(k))/3\).
Notice that if \(\tilde{P}_{\rho}(k)=1\), then the tracker is at the desired distance from the target. Moreover, if \(\tilde{P}_{\theta}\) and \(\tilde{P}_{\varphi}\) are both equal to \(1\), then the target centroid is at the center of the FoV. Summarizing, \(\tilde{P}_{c}(k)=1\) when perfect visual tracking is achieved at step \(k\).
The metrics are averaged with respect to the episode time and across the 20 runs performed in each scenario, resulting in \(P_{m}=\frac{1}{20N_{c}}\sum_{i=1}^{20}\sum_{k=0}^{N_{c}-1}{{}^{(i)}\tilde{P}_ {m}(k)}\quad,\) where \(m\in\{\rho,\theta,\varphi,c\}\), \({}^{(i)}\tilde{P}_{m}\) indicates that the performance is evaluated on the \(i\)-th run, and \(N_{c}\) is the number of samples within the episode.
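A minimal sketch of how these scores can be computed from the relative position is given below; the spherical convention (range, elevation, azimuth) and the values of \(d_{r}\) and \(A_{\text{FoV}}\) are assumptions.

```python
# A sketch of the tracking metrics defined above; the spherical convention
# (rho: range, theta: elevation, phi: azimuth) and the parameter values are assumptions.
import numpy as np

def tracking_scores(y, d_r=4.0, fov=1.5):
    """y: ground-truth target position relative to the tracker (body frame)."""
    rho = np.linalg.norm(y)
    phi = np.arctan2(y[1], y[0])                            # azimuth
    theta = np.arcsin(y[2] / rho)                           # elevation
    in_fov = abs(theta) < fov / 2 and abs(phi) < fov / 2
    near = abs(rho - d_r) < 0.5
    p_rho = max(0.0, 1.0 - 2.0 * abs(rho - d_r)) if in_fov else 0.0
    p_theta = max(0.0, 1.0 - 2.0 * abs(theta) / fov) if (abs(phi) < fov / 2 and near) else 0.0
    p_phi = max(0.0, 1.0 - 2.0 * abs(phi) / fov) if (abs(theta) < fov / 2 and near) else 0.0
    return p_rho, p_theta, p_phi, (p_rho + p_theta + p_phi) / 3.0

print(tracking_scores(np.array([4.0, 0.0, 0.0])))           # perfect tracking -> all ones
```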
### _Comparison Results_
The results of the experimental campaign are presented in Tables II and III. Our first important finding is that D-VAT outperforms all the baselines with respect to the performance metrics, and it is able to track the target by producing low-level control commands directly from RGB images. A visual inspection of the experiments (see the supplementary videos for qualitative results) shows that D-VAT is able to react promptly and effectively to the target movements. Specifically, it (i) computes fast maneuvers when the target approaches the boundary of camera FoV to avoid losing it, and (ii) provides a smooth control policy that is close to being optimal (_i.e.,_ the target is almost always maintained at the center of the image plane and at the desired distance).
The learning-based approaches AOT, AD-VAT+ and C-VAT fail to converge to a suitable tracking policy. This could be explained by considering the high complexity of the task. AOT and AD-VAT+ are both strategies that rely on a discrete action space. Thus, they generate non-smooth control policies that struggle to maintain the target visibility and might even result in unstable maneuvers that cause the target to disappear outside the FoV. Even C-VAT, despite being designed to provide continuous commands, fails to provide an efficient tracking policy. To explain this result, it is important to notice that the dimension of the MAV action space is doubled with respect to that of a planar ground robot (which is the platform considered in the original C-VAT work [9]). The increased complexity of the quadrotor dynamics makes the model optimization more challenging and, in the case of C-VAT, this entails a large performance degradation.
The baselines that combine two separate modules, _i.e.,_ an object detector and a controller (LQG or PID), are instead able to achieve better results. Nonetheless, the overall tracking performance is inferior to that of D-VAT. This can be attributed to the modular nature of these baselines. As the two components are designed independently, their coupling turns out to be inefficient and can cause the overall system to fail. In practice, this problem emerges since the controller, which has been designed under the assumption that the relative position is accurately known, is fed with position measurements extracted from the bounding box information provided by the object detector. These measurements, due to non-ideal image conditions or aggressive target maneuvers, might violate the design assumptions. This aspect becomes even more critical in realistic environments that are characterized by a high density of distracting objects in the background (_e.g.,_ the photorealistic scenarios Urban and Office in Fig. 4). In this regard, it should be noted that the PID scheme, thanks to its more adaptable design, is more robust to model mismatch than the LQG counterpart.
On the other hand, thanks to the domain randomization strategy we employ, D-VAT has learned a tracking policy that can deal effectively with a wide range of scenarios and at the same time achieve high performance. This holds even when the visual conditions of the environment are very different from those employed in the training phase (see the results
obtained for the Urban, Park and Office scenarios in Table II).
To further study the comparison between D-VAT and the modular baselines, we run additional experiments by varying the maximum velocity of the target. We perform these experiments on a simplified scene with a low amount of texture and no objects. In Table III, it can be seen that for low target velocities, the modular baselines and D-VAT achieve similar performance. However, when the target performs faster and more aggressive trajectories, the performance of both the modular baselines decreases, while the tracking capabilities of D-VAT are almost unaffected. This suggests that the proposed learning-based approach is more robust and responsive in challenging scenarios where the ability of traditional control strategies may be limited.
### _DRL Controller Validation_
To validate the learned controller in a realistic environment, we used a simulation model in which system (1) is augmented with the angular velocity and the thrust dynamics. The former are stabilized by a low-level proportional controller, which is a common setting for embedded MAV autopilots, while the latter are represented by a first order model that is typical for the actuator. Moreover, we included the effect of air drag. The simulation model is then given by
\[\begin{bmatrix}\ddot{p}\\ \dot{R}\\ \dot{\omega}\\ \dot{f}\end{bmatrix}=\begin{bmatrix}\frac{1}{m}(R_{3}f+f_{drag})-g\\ R\left[\omega\right]_{\times}\\ J^{-1}(k_{\omega}(\omega_{cmd}-\omega)-\left[\omega\right]_{\times}J\omega)\\ k_{f}(f_{cmd}-f)\end{bmatrix}, \tag{8}\]
where \(J\) is the inertia matrix of the MAV, \(f_{cmd}\) and \(\omega_{cmd}\) are the commanded total thrust and body rates provided by the DRL controller, \(k_{f}\) and \(k_{\omega}\) are suitable scalar gains, and \(f_{drag}=-K_{v}\dot{p}\) is a linear drag term, being \(K_{v}\) the drag coefficient matrix. The following parameter values have been employed according to [13]: \(J=\text{diag}(0.0025,0.0021,0.0043)\)\(\text{kgm}^{2}\) and \(K_{v}=\text{diag}(0.3,0.3,0.15)\). Moreover, we set \(k_{f}=30\) and \(k_{\omega}=1\), resulting in a thrust settling time of about \(0.1\) s and in a peak control torque in the order of 1 Nm. These figures are compatible with the MAV actuator specifications. Besides the simulation model (8), the validation scenario includes two moving objects that occasionally appear in the tracker FoV. One of them shares the same shape as the target one but has a different color, while the other has a different shape but the same color as the target MAV.
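For illustration, a forward-Euler integration of the augmented model (8) could be sketched as follows; the mass and gravity values are assumptions, and in practice the rotation matrix should be re-orthogonalized after each update.

```python
# A sketch of one Euler step of the validation model in Eq. (8); inertia, drag
# and gain values follow the text, while mass and gravity are assumptions.
import numpy as np

m, g = 1.0, np.array([0.0, 0.0, 9.81])                 # mass [kg] and gravity (assumed)
J = np.diag([0.0025, 0.0021, 0.0043])                  # inertia [kg m^2]
K_v = np.diag([0.3, 0.3, 0.15])                        # linear drag coefficients
k_f, k_omega = 30.0, 1.0                               # actuator and rate-loop gains

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def step(p, v, R, omega, f, f_cmd, omega_cmd, dt=0.002):
    a = (R[:, 2] * f - K_v @ v) / m - g                # translational dynamics with drag
    R_next = R + R @ skew(omega) * dt                  # attitude kinematics (re-orthogonalize in practice)
    domega = np.linalg.solve(J, k_omega * (omega_cmd - omega) - skew(omega) @ (J @ omega))
    f_next = f + k_f * (f_cmd - f) * dt                # first-order thrust dynamics
    return p + v * dt, v + a * dt, R_next, omega + domega * dt, f_next
```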
Table IV compares the results in Table III with those obtained in the validation environment, in the absence (second row) and in the presence (third row) of dynamic distracting objects. It can be seen that the performance drop with respect to the tests with the simplified model (1) is negligible. Moreover, the tracker agent is nearly unaffected by the presence of dynamic distracting objects, which proves that it did not overfit with respect to the object color or its shape individually. This is even more remarkable if we consider that the agent was trained on model (1) and that no moving objects other than the target MAV were included during the training phase. From these results, it can be concluded that our strategy offers good robustness and generalization capabilities against unmodeled dynamics.
Finally, notice that in the definition of the reward function (7) we did not penalize variations of the control command, so as to favor fast maneuvering and to reduce the tracking error as much as possible. As a downside, this choice can lead to nervous tracking behavior. One possibility to mitigate this issue is to low-pass filter the A-DNN output. For instance, we found that using a first order low-pass filter with a cut-off frequency of 2 Hz does indeed result in smoother trajectories (see the attached videos). However, it entails a performance drop of about \(20\%\).
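A sketch of such a filter is given below; the 50 Hz control rate is an assumption.

```python
# A sketch of the first-order low-pass action filter mentioned above (2 Hz cut-off).
import numpy as np

def lowpass(u_new, u_prev, f_c=2.0, dt=0.02):
    alpha = dt / (dt + 1.0 / (2.0 * np.pi * f_c))   # discrete first-order filter gain
    return u_prev + alpha * (u_new - u_prev)
```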
## VI Conclusions
In this work, we proposed D-VAT, an end-to-end visual active tracking approach for MAV systems. The D-VAT agent is trained by exploiting an asymmetric actor-critic DRL formulation. Once optimized, it is capable of computing thrust and angular velocity commands for the tracker MAV directly from input images. Experiments against different baselines show that our approach exhibits superior tracking capabilities and it is capable of generalizing over scenarios that considerably differ from those used during training.
Currently, D-VAT can track vehicles whose appearance is similar to that of the target MAV used for the optimization. Future work will consider methodologies to make the tracker agent independent from the appearance of the target MAV.
| Visual active tracking plays a key role in applications such as human assistance, disaster recovery, and surveillance, and has therefore become a prominent research topic in robotics. Unlike passive tracking, active tracking combines visual and control capabilities to detect a target and actively follow it. Most research in this field has focused on ground robots, and contributions on aerial platforms are very limited, imposing important design constraints that restrict their applicability. To overcome these constraints, this paper proposes D-VAT, a novel end-to-end visual active tracking approach adapted to micro aerial vehicle (MAV) platforms. The D-VAT agent directly processes monocular camera measurements to compute the vehicle's thrust and angular velocity commands in order to track the target |
2309.06796 | Computing solubility and thermodynamics properties of H2O2 in water | Hydrogen peroxide plays a key role in many environmental and industrial
chemical processes. We performed classical Molecular Dynamics and Continuous
Fractional Component Monte Carlo simulations to calculate thermodynamic
properties of H2O2 in aqueous solutions. The quality of the available force
fields for H2O2 developed by Orabi & English, and by Cordeiro was
systematically evaluated. To assess which water force field is suitable for
predicting properties of H2O2 in aqueous solutions, four water force fields
were used, namely the TIP3P, TIP4P/2005, TIP5P-E, and a modified TIP3P force
field. While the computed densities of pure H2O2 in the temperature range of
253-353 K using the force field by Orabi & English are in excellent agreement
with experimental results, the densities using the force field by Cordeiro are
underestimated by 3%. The TIP4P/2005 force field in combination with the H2O2
force field developed by Orabi & English can predict the densities of H2O2
aqueous solution for the whole range of H2O2 mole fractions in very good
agreement with experimental results. The TIP4P/2005 force field in combination
with either of the H2O2 force fields can predict the viscosities of H2O2
aqueous solutions for the whole range of H2O2 mole fractions in good agreement
with experimental results. The diffusion coefficients for H2O2 and water
molecules using the TIP4P/2005 force field with either of the H2O2 force fields
are almost constant for the whole range of H2O2 mole fractions. The Cordeiro
force field for H2O2 in combination with either of the water force fields can
predict the Henry coefficients of H2O2 in water in better agreement with
experimental values than the force field by Orabi & English. | Tijin H. G. Saji, José Manuel Vicent-Luna, Thijs J. H. Vlugt, Sofía Calero, Behnaz Bagheri | 2023-09-13T08:39:12 | http://arxiv.org/abs/2309.06796v1 | # Computing solubility and thermodynamics properties of H\({}_{2}\)O\({}_{2}\) in water1
###### Abstract
Hydrogen peroxide plays a key role in many environmental and industrial chemical processes. We performed classical Molecular Dynamics and Continuous Fractional Component Monte Carlo simulations to calculate thermodynamic properties of H\({}_{2}\)O\({}_{2}\) in aqueous solutions. The quality of the available force fields for H\({}_{2}\)O\({}_{2}\) developed by Orabi & English, and by Cordeiro was systematically evaluated. To assess which water force field is suitable for predicting properties of H\({}_{2}\)O\({}_{2}\) in aqueous solutions, four water force fields were used, namely the TIP3P, TIP4P/2005, TIP5P-E, and a modified TIP3P force field. While the computed densities of pure H\({}_{2}\)O\({}_{2}\) in the temperature range of 253 - 353 K using the force field by Orabi & English are in excellent agreement with experimental results, the densities using the force field by Cordeiro are underestimated by 3%. The TIP4P/2005 force field in combination with the H\({}_{2}\)O\({}_{2}\) force field developed by Orabi & English can predict the densities of H\({}_{2}\)O\({}_{2}\) aqueous solution for the whole range of H\({}_{2}\)O\({}_{2}\) mole fractions in very good agreement with experimental results. The TIP4P/2005 force field in combination with either of the H\({}_{2}\)O\({}_{2}\) force fields can predict the viscosities of H\({}_{2}\)O\({}_{2}\) aqueous solutions for the whole range of H\({}_{2}\)O\({}_{2}\) mole fractions in reasonably good agreement with experimental results. The computed diffusion coefficients for H\({}_{2}\)O\({}_{2}\) and water molecules using the TIP4P/2005 force field with either of the H\({}_{2}\)O\({}_{2}\) force fields are almost constant for the whole range of H\({}_{2}\)O\({}_{2}\) mole fractions. Hydrogen bond analysis showed a steady increase in the number of hydrogen bonds with the solute concentrations in H\({}_{2}\)O\({}_{2}\) aqueous solutions for all combinations except for the Cordeiro-TIP5P-E and Orabi-TIP5P-E systems, which showed a minimum at intermediate concentrations. The Cordeiro force field for H\({}_{2}\)O\({}_{2}\) in combination with either of the water force fields can predict the Henry coefficients of H\({}_{2}\)O\({}_{2}\) in water in better agreement with experimental values than the force field by Orabi & English.
Hydrogen peroxide, Aqueous solution, Molecular Dynamics, Monte Carlo simulations
## I Introduction
Hydrogen peroxide, H\({}_{2}\)O\({}_{2}\), has attracted considerable interest as it plays a key role in the oxidative chemistry of the troposphere. It can be found both in the gas and in the aqueous phase [1; 2], and has several industrial [3], environmental [4], and biological [5] applications. The recombination of hydroperoxyl (HO\({}_{2}\)) radicals is the most important chemical pathway leading to the production of H\({}_{2}\)O\({}_{2}\) in the troposphere [6; 7; 8]. Subsequently, H\({}_{2}\)O\({}_{2}\) can lead to the acidification of clouds, rain, and fog by oxidizing SO\({}_{2}\) and converting it into H\({}_{2}\)SO\({}_{4}\) (and to a lesser extent oxidizing NO\({}_{2}\) and converting it into HNO\({}_{3}\)) [9; 10; 11; 12; 13]. H\({}_{2}\)O\({}_{2}\) also serves as a reservoir of HO\({}_{x}\) radicals that are key oxidants in controlling the self-cleaning of the atmosphere [14; 15; 16].
H\({}_{2}\)O\({}_{2}\) was first synthesized by Thenard [17] by the reaction of barium peroxide with nitric acid in 1818 and is now considered an important reagent of green chemistry since it decomposes to water and oxygen as the only reaction products. This feature makes H\({}_{2}\)O\({}_{2}\) an environmentally friendly oxidizing agent for a wide range of applications such as pulp and paper bleaching, textile applications, detergent applications, disinfectant applications, wastewater treatment, and chemical oxidation processes [18; 19]. It could also serve as a liquid fuel, an alternative to H\({}_{2}\) and O\({}_{2}\), in a fuel cell [20; 21; 22].
H\({}_{2}\)O\({}_{2}\) is currently produced on an industrial scale with the anthraquinone oxidation (AO) process in which hydrogen, atmospheric oxygen, and an anthraquinone derivative (typically 2-alkyl-anthraquinone) are used with the latter acting as a reaction carrier [18; 19]. The ubiquitous AO process involves multiple steps which require significant energy input and generates waste. In addition, the transport, storage, and handling of bulk H\({}_{2}\)O\({}_{2}\) involve hazards as it is irritating to nose and eyes, and high concentration of H\({}_{2}\)O\({}_{2}\) is explosive[23]. Other methods for large-scale production of H\({}_{2}\)O\({}_{2}\) include partial oxidation of primary or secondary alcohols, and electrochemical methods [24]. Novel alternatives are under investigation such as direct synthesis of H\({}_{2}\)O\({}_{2}\) from O\({}_{2}\) and H\({}_{2}\) using a variety of catalysts like alumina, silica, carbon, solvents (e.g., water) [25; 26], photocatalytic reactions over semiconductors where reactive oxygen-based species (e.g., OH\({}^{\bullet}\), O\({}^{2-}\), and H\({}_{2}\)O\({}_{2}\)) are formed at the surface of semiconductor oxides under UV irradiation [27]. An alternative technology to produce H\({}_{2}\)O\({}_{2}\) is to use low
temperature (or non-thermal) plasmas [28; 29] which allows H\({}_{2}\)O\({}_{2}\) production at ambient temperatures and pressures [30; 31; 32; 33; 34]. This enables direct delivery of H\({}_{2}\)O\({}_{2}\) to different substrates; even to heat sensitive substrates such as living tissues. The latter has led to biomedical applications of low temperature plasmas[35]. For such applications, it is important to know which mechanisms determine the uptake of plasma products (e.g., H\({}_{2}\)O\({}_{2}\)) in the liquid around the cells. For this, information on solubility and thermodynamics properties of plasma products are necessary so that this can be leveraged into macroscopic plasma fluid models[36] to predict the final concentration of plasma products in the liquid phase. The motivation of this work is to provide such data for H\({}_{2}\)O\({}_{2}\) as limited data are available.
Due to the pivotal role of H\({}_{2}\)O\({}_{2}\) in many chemical processes, many experimental and computational studies have been conducted to investigate its properties. The crystal structure of H\({}_{2}\)O\({}_{2}\) was investigated using diffraction methods or Raman spectroscopy in Refs. [37; 38; 39; 40; 41]. Other experimental studies have investigated its densities [42], viscosities [43], vibrational spectra [44; 45; 46; 47; 38], vapor pressures [48] and other thermodynamic properties [49]. In addition, densities, freezing points, and vapor pressures of aqueous H\({}_{2}\)O\({}_{2}\) solutions were investigated experimentally in Refs. [50; 51; 52; 42].
Various computational studies have been carried out which shed light on structural properties of H\({}_{2}\)O\({}_{2}\) monomers as well as its clusters, torsional barrier energies, and vibrational-rotational energy levels [53; 54; 55; 56; 57] using quantum mechanical approaches. Structure and dynamics of H\({}_{2}\)O\({}_{2}\) in water were also investigated using quantum mechanical methods in Refs. [58; 59; 60].
In this work, we use force field based Molecular Dynamics (MD) and Continuous Fractional Component Monte Carlo (CFCMC) simulations with the purpose of obtaining solubilities and thermodynamic properties of H\({}_{2}\)O\({}_{2}\) in water, for the first time, in a systematic manner such that the quality of the available force fields for H\({}_{2}\)O\({}_{2}\) is assessed.
Although several force fields are available for H\({}_{2}\)O\({}_{2}\)[61; 62; 63; 64], only a few of them have been parameterized with respect to the interactions between both H\({}_{2}\)O\({}_{2}\) - H\({}_{2}\)O\({}_{2}\) and H\({}_{2}\)O\({}_{2}\) - H\({}_{2}\)O. One is the ABEEM/MM, the atom-bond electronegativity equalization fluctuating charge molecular force field [65; 66], which is computationally very expensive due to its complex potential energy functional form [65; 66]. A simple additive potential model for H\({}_{2}\)O\({}_{2}\) was proposed by Orabi & English [67] which was parameterized to account for interactions of H\({}_{2}\)O\({}_{2}\) with itself and with water. The model was calibrated with regard to the experimental density and heat of vaporization of pure liquid H\({}_{2}\)O\({}_{2}\) at 0\({}^{\circ}\) C, and was able to reproduce the experimental diffusion coefficient at 0\({}^{\circ}\) C and the heat capacity at 25\({}^{\circ}\) C of liquid H\({}_{2}\)O\({}_{2}\). With a combination of the modified TIP3P water force field [68], the H\({}_{2}\)O\({}_{2}\) force field could predict the experimental hydration free energies and densities of aqueous H\({}_{2}\)O\({}_{2}\) solutions [67]. Another force field parametrization is from the work of Cordeiro [69], wherein the bonded interactions were obtained from _ab initio_ quantum calculations [54; 70], and the Lennard-Jones parameters and partial charges were modified to reproduce the properties of pure liquid H\({}_{2}\)O\({}_{2}\) and its hydration free energy. This force field was used to study the distribution, mobility and residence times of H\({}_{2}\)O\({}_{2}\) at the interface of water and phospholipid biomembranes. In addition, there is another parametrization in the paper by Vacha _et al._[64] in which the behaviour of H\({}_{2}\)O\({}_{2}\) at the air-water interface was investigated. The force field by Vacha _et al._[64] is a rigid force field; it only includes electrostatic and van der Waals interactions which were calibrated against the experimental hydration energies of H\({}_{2}\)O\({}_{2}\).
In this manuscript, we evaluate the quality of the force fields which were developed by Cordeiro [69], and Orabi & English [67] for predicting the thermodynamic properties of H\({}_{2}\)O\({}_{2}\) in aqueous solutions. Both force fields by Cordeiro [69], Orabi & English [67] are non-rigid, thereby we exclude the force field by Vacha _et al._[64] from our study as it is a rigid force field. We compute the densities of pure H\({}_{2}\)O\({}_{2}\) for a range of temperatures (253 K to 353 K), and compare the results with experimental values. In addition, we compute densities, viscosities, and diffusion coefficients of H\({}_{2}\)O\({}_{2}\) and water in aqueous solutions of H\({}_{2}\)O\({}_{2}\) for the whole range of H\({}_{2}\)O\({}_{2}\) mole fractions at ambient temperatures and pressures. To evaluate which water force field is suitable for predicting properties of H\({}_{2}\)O\({}_{2}\) aqueous solutions, we use four different water force fields: TIP3P [68; 71] as it performs better in calculating the specific heats of water [72], TIP5P-E as it can capture the thermal conductivities of water [72] and TIP4P/2005 [73] as it can predict the densities and self-diffusion coefficients of water with commendable accuracy [73]. The fourth water force field is a modified version of TIP3P (mTIP3P) [68], which was used in the work by Orabi & English [67]. The results are compared with experimental values. Finally, we compute the Henry coefficients of H\({}_{2}\)O\({}_{2}\) in water at 300 K.
The rest of this manuscript is organized as follows. In section II, details of the force fields which were developed by Cordeiro [69], and Orabi & English [67] are provided, and the MD and CFCMC simulations are described. The results are presented and discussed in section III. Finally, concluding remarks are presented in section IV.
## II Methodology
### Force Fields
Both force fields developed by Cordeiro [69], and Orabi & English [67] for H\({}_{2}\)O\({}_{2}\) consider a non-rigid H\({}_{2}\)O\({}_{2}\) molecule, that is, they incorporate bonds, angles and dihedrals information with the van der Waals (vdW) and electrostatic interactions. The total potential energy
(\(E_{\rm total}\)) is given by
\[E_{\rm total}=E_{\rm bonds}+E_{\rm angles}+E_{\rm dihedrals}+E_{\rm vdW}+E_{\rm electrostatic}, \tag{1}\]
where \(E_{\rm bonds}\), \(E_{\rm angles}\), \(E_{\rm dihedrals}\), \(E_{\rm vdW}\), and \(E_{\rm electrostatic}\) are presented in Table 1 for both force fields. The bonded interaction parameters (\(E_{\rm bonds}\), \(E_{\rm angles}\), and \(E_{\rm dihedrals}\)) listed in Table 1 have the following definitions: \(b\) is the bond distance, \(\theta\) is the bond angle, \(\phi\) is the dihedral angle, \(\delta\) is the multiplicity factor, and \(\psi\) is the supplementary angle of \(\phi\). \(k_{b}\), \(k_{\theta}\), \(k_{\phi}\) are the force constants of the bond stretching, angle vibration, and dihedral potentials. \(b_{0}\) and \(\theta_{0}\) represent the equilibrium bond distance and bond angle, respectively. \(C_{n}\) with \(n\) ranging from 0 to 5 represents the coefficients for the Ryckaert-Bellemans dihedral potential [74]. \(q\) represents the atomic partial charges of the electrostatic energy (\(E_{\rm electrostatic}\)) term. A Lennard-Jones (L-J) potential is used for the long-range van der Waals interactions, in which \(\sigma\) represents the distance at which the particle-particle interaction energy is zero, and \(\epsilon\) represents the depth of the potential well. The mixing rules for the L-J parameters for two dissimilar non-bonded atoms are given by Lorentz-Berthelot [75] [\(\sigma_{ij}=\frac{\sigma_{i}+\sigma_{j}}{2}\), \(\epsilon_{ij}=\sqrt{\epsilon_{i}\epsilon_{j}}\)] for the force field by Orabi & English and geometric average [\(\sigma_{ij}=\sqrt{\sigma_{i}\sigma_{j}}\), \(\epsilon_{ij}=\sqrt{\epsilon_{i}\epsilon_{j}}\)] for the force field by Cordeiro. The values of these parameters are provided (using the GROMACS convention) in Tables S1 and S2 of the Supporting Information (SI) for both force fields. The cutoff radius for Lennard-Jones and Coulombic interactions was set to 9 Å. The Particle-Mesh-Ewald [76; 77] method was used to treat long-range electrostatic interactions. Long-range tail corrections were applied to both energies and pressures [78].
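For illustration, the two combination rules and the L-J pair energy can be written compactly as follows (a sketch, not the simulation code).

```python
# A sketch of the L-J pair energy and the two combination rules quoted above.
import numpy as np

def lorentz_berthelot(sig_i, sig_j, eps_i, eps_j):   # used with the Orabi force field
    return 0.5 * (sig_i + sig_j), np.sqrt(eps_i * eps_j)

def geometric_average(sig_i, sig_j, eps_i, eps_j):   # used with the Cordeiro force field
    return np.sqrt(sig_i * sig_j), np.sqrt(eps_i * eps_j)

def lennard_jones(r, sigma, eps):
    return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
```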
We use three different rigid water force fields in this study, namely TIP3P [68; 71], TIP4P/2005 [73], and TIP5P-E [79; 80]. We also use a modified TIP3P water force field (mTIP3P) [68], which was used in the work by Orabi & English [67]. In the remainder of this manuscript, the force field developed by Cordeiro [69] is referred to as "Cordeiro" and the force field developed by Orabi & English [67] is referred to as "Orabi".
### MD simulations
All-atom Molecular Dynamics (MD) simulations of anhydrous H\({}_{2}\)O\({}_{2}\) for a range of temperatures from 253 K to 353 K, and of H\({}_{2}\)O\({}_{2}\) aqueous solutions for various mole fractions of H\({}_{2}\)O\({}_{2}\) in the range from 0 to 1.0, were performed using the GROningen MAchine for Chemical Simulations (GROMACS) version 2022.4 [81; 82; 83; 84; 85]. Each system was prepared in a simulation box with an initial length of 27.6 Å, containing 500 molecules. A snapshot of a simulation box containing 250 H\({}_{2}\)O\({}_{2}\) molecules and 250 H\({}_{2}\)O molecules is shown in Figure 1.
After energy minimization using the steepest descent algorithm followed by a conjugate gradient algorithm, the MD simulations were run for 100 ps in the constant number of atoms/molecules, volume and temperature (NVT) ensemble. The simulations were then continued in the constant number of atoms/molecules, pressure and temperature (NPT) ensemble for 25 ns. For calculating the viscosities and self-diffusivities, the simulations were continued in the NVT ensemble for another 20 ns. The temperature was kept fixed by the Nosé-Hoover thermostat [86]. The Parrinello-Rahman barostat [87] with a time constant of 1 ps and compressibility of 4.5 \(\times\) 10\({}^{-5}\) bar\({}^{-1}\) was used to keep the pressure at 1 bar. In all simulations, Newton's equations of motion were integrated with a leap-frog [88] algorithm with a time step of 2 fs. Periodic boundary conditions were applied in all Cartesian directions. The parallel linear constraint solver (P-LINCS) [89; 90] was used to constrain bonds involving hydrogen atoms.
Figure 1: A snapshot of a simulation box containing 250 H\({}_{2}\)O\({}_{2}\) (red and white spheres represent oxygen and hydrogen atoms) and 250 H\({}_{2}\)O molecules (green spheres), generated by the Visual Molecular Dynamic (VMD) software [91].
\begin{table}
\begin{tabular}{l l l l l l} \hline Force field & \(E_{\rm bonds}\) & \(E_{\rm angles}\) & \(E_{\rm dihedrals}\) & \(E_{\rm vdW}\) & \(E_{\rm electrostatic}\) \\ \hline Cordeiro [69] & \(\frac{1}{4}k_{b}(b^{2}-b_{0}^{2})^{2}\) & \(\frac{1}{2}k_{\theta}(\cos\theta-\cos\theta_{0})^{2}\) & \(\sum_{n=0}^{5}C_{n}(\cos\psi)^{n}\) & \(4\epsilon_{ij}\left[\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{12}-\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{6}\right]\) & \(\frac{1}{4\pi\epsilon_{0}}\frac{q_{i}q_{j}}{r_{ij}}\) \\ Orabi \& English [67] & \(\frac{1}{2}k_{b}(b-b_{0})^{2}\) & \(\frac{1}{2}k_{\theta}(\theta-\theta_{0})^{2}\) & \(k_{\phi}\left[1+\cos(2\phi-\delta)\right]\) & \(4\epsilon_{ij}\left[\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{12}-\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{6}\right]\) & \(\frac{1}{4\pi\epsilon_{0}}\frac{q_{i}q_{j}}{r_{ij}}\) \\ \hline \end{tabular}
\end{table}
Table 1: Potential energy functions for the force fields developed by Cordeiro [69], and Orabi & English [67]. \(E_{\rm bonds}\), \(E_{\rm angles}\), \(E_{\rm dihedrals}\), \(E_{\rm vdW}\), and \(E_{\rm electrostatic}\) represent the stretching, bending, torsional, van der Waals and electrostatic energies, respectively. The definition of the parameters is explained in the text (see section II.1). The parameters are provided in Tables S2 and S3 of the Supplementary Information.
### MC simulations
Continuous Fractional Component Monte Carlo (CFCMC) simulations [92; 93; 94] using the open-source Brick-CFCMC software [95; 96; 94] were performed in the isothermal-isobaric (NPT) ensemble. In the CFCMC technique, fractional molecules (compared to normal or "whole" molecules) are introduced whose interactions with the rest of the system are scaled with a continuous coupling parameter \(\lambda\) (\(\lambda\in[0,1]\)). The minimum value of \(\lambda\) (\(\lambda=0\)) indicates no interactions between the fractional molecule and the rest of the molecules in the system (i.e., fractional molecules act as ideal gas molecules). \(\lambda=1\) represents full interactions between the fractional molecules and the other molecules in the system (i.e., the fractional molecule acts as whole molecules). The coupling parameter \(\lambda\) is biased with a weight function (\(W(\lambda)\)) using the Wang-Landau algorithm [97] to improve molecule transfers (insertions/deletions). This ensures a smooth observed probability distribution of \(\lambda\). We used 100 bins to construct a histogram for the \(\lambda\) values and its probability of occurrence (p(\(\lambda\))). The Boltzmann average of any property (A) is then computed using [98]
\[\langle A\rangle=\frac{\langle\ A\ \text{exp}[-W(\lambda)]\rangle_{\text{biased}}}{ \langle\text{exp}[-W(\lambda)]\rangle_{\text{biased}}}. \tag{2}\]
The chemical potential of species \(i\) is calculated with respect to its ideal gas chemical potential [95]
\[\mu_{i}=\mu_{i}^{\text{ideal}}+\mu_{i}^{\text{ex}}, \tag{3}\]
where \(\mu_{i}^{\text{ideal}}\) and \(\mu_{i}^{\text{ex}}\) are the ideal gas and excess chemical potential of the species \(i\), respectively. The excess chemical potential can be related to the Boltzmann sampled probability distribution of \(\lambda\) by the following equation [95]
\[\mu_{i}^{\text{ex}}=-k_{\text{B}}T\ \text{ln}\ \frac{p(\lambda=1)}{p(\lambda=0)}, \tag{4}\]
where \(p(\lambda\)=1) and \(p(\lambda\)=0) are the Boltzmann sampled probability distributions of \(\lambda\) at 1 and 0, respectively. \(k_{\text{B}}\) is the Boltzmann constant, and \(T\) is the absolute temperature. The excess chemical potential at infinite dilution (\(\mu^{\text{ex},\infty}\)) can be used to determine the Henry volatility coefficient[99] (\(K_{v}^{\text{px}}\)) by[100]
\[K_{v}^{\text{px}}=\rho\ k_{\text{B}}T\exp\bigg{(}\frac{\mu^{\text{ex},\infty }}{k_{\text{B}}T}\bigg{)}, \tag{5}\]
where \(\rho\) is the number density of the solvent. This yields the Henry volatility coefficient (\(K_{v}^{\text{px}}\)) in units of [Pa]. The Henry coefficient (\(H_{s}^{\text{cp}}\)) in units of [mol/m\({}^{3}\)Pa] can be obtained using the following conversion: \(H_{s}^{\text{cp}}\approx\frac{\rho_{\text{H}_{2}\text{O}}}{M_{\text{H}_{2}\text{O}}\,K_{v}^{\text{px}}}\), in which \(\rho_{\text{H}_{2}\text{O}}\) is the density of water, and \(M_{\text{H}_{2}\text{O}}\) is the molar mass of water [101].
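A minimal sketch of Eqs. (4)-(5) and of this unit conversion is given below; the numerical inputs are placeholders, not simulation results.

```python
# A sketch of Eqs. (4)-(5) and the conversion to H_s^cp; inputs are placeholders.
import numpy as np

k_B = 1.380649e-23                                   # Boltzmann constant [J/K]

def mu_excess(p1, p0, T):
    """Excess chemical potential from the Boltzmann-sampled p(lambda), Eq. (4)."""
    return -k_B * T * np.log(p1 / p0)                # [J]

def henry_coefficients(mu_ex, rho_number, T, rho_water=997.0, M_water=0.018015):
    """rho_number: solvent number density [1/m^3]; returns (K_v [Pa], H_cp [mol/(m^3 Pa)])."""
    K_v = rho_number * k_B * T * np.exp(mu_ex / (k_B * T))   # Eq. (5)
    return K_v, rho_water / (M_water * K_v)
```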
CFCMC simulations contained 300 water molecules in a cubic simulation box with initial dimensions of 21 Å. A single fractional molecule of H\({}_{2}\)O\({}_{2}\) was introduced to calculate its excess chemical potential. The cut-off radius for the intermolecular L-J and Coulombic interactions was set to 9 Å. The Ewald summation [102] method was used for calculating electrostatic interactions. Long-range tail corrections were applied to the L-J potential. Periodic boundary conditions were applied in all directions.
For CFCMC simulations, 1,000 initialization cycles were carried out followed by \(5\times 10^{6}\) equilibration cycles and \(5\times 10^{6}\) production cycles. One cycle consists of \(N\) trial moves, where \(N\) is the total number of molecules. Trial moves were selected with the following probabilities: 32% translation moves, 22% rotation moves, 1% volume changes, 5% each of bending and torsion moves, 25% \(\lambda\) changes and 10% hybrid moves that combined swap and identity change moves [95]. Three independent simulations were performed for each combination of water force field and H\({}_{2}\)O\({}_{2}\) force field to obtain an average value and the standard deviation for the Henry coefficients.
## III Results and discussion
### Densities
The densities of anhydrous H\({}_{2}\)O\({}_{2}\) for a temperature range of 253 K to 353 K (in steps of 20 K) for both the Orabi and Cordeiro force fields are plotted in Figure 2. We used the _gmx density_ tool to compute the average density of each system. The experimental values are shown in black circles [42]. The melting point and boiling point of H\({}_{2}\)O\({}_{2}\) are reported as 272.74 K and 423.15 K, respectively [103]. While the Cordeiro force field underestimates the densities of anhydrous H\({}_{2}\)O\({}_{2}\) by about 3% compared to the experimental values, the densities of anhydrous H\({}_{2}\)O\({}_{2}\) using the Orabi force field are in excellent agreement with the experimental values.
Next, we evaluate which water force field is suitable for predicting the densities of H\({}_{2}\)O\({}_{2}\) aqueous solutions. To this end, we modelled systems of H\({}_{2}\)O\({}_{2}\) aqueous solutions for the whole range of H\({}_{2}\)O\({}_{2}\) mole fractions (0 to 1.0) at \(T=298\) K and 1 bar using the Orabi or Cordeiro force fields in combination with four different water force fields: TIP3P, TIP4P/2005, TIP5P-E, and the modified TIP3P water force field (mTIP3P) which was used in the work by Orabi & English [67]. The choice of temperature at 298 K and pressure at 1 bar was motivated by the availability of experimental results by which we could validate our models. The results as a function of the H\({}_{2}\)O\({}_{2}\) mole fraction are shown in Figure 3. The experimental values [50] are added for comparison.
The densities of pure water (i.e., a mole fraction of zero), using the four water force fields are in good agreement with the reported data at 298 K and 1 bar [73; 104].
The Orabi force field for H\({}_{2}\)O\({}_{2}\) in combination with the TIP4P/2005 water force field or mTIP3P water force field predicts the densities of the aqueous solutions in good agreement (ca. 0.6%) with the experimental values[50]. The predicted values for densities of solutions using the TIP5P-E water force field in combination with the Orabi or Cordeiro force fields at low and high concentrations of H\({}_{2}\)O\({}_{2}\) are in good agreement with the experimental values. At intermediate concentrations (0.4 - 0.6 mole fractions), however, the TIP5P-E in combination with the Orabi or Cordeiro force fields overestimates the densities of solutions by 2% and 5%, respectively. The TIP3P water force field in combination with the Orabi force field underestimates the densities of the solutions by 2%. The Cordeiro-TIP3P and Cordeiro-mTIP3P models underestimate the densities by 3% at intermediate concentrations. The Cordeiro force field in combination with the TIP4P/2005 water force field also underestimates the densities of the solution with a more pronounced effect at higher mole fractions of H\({}_{2}\)O\({}_{2}\) (\(\geq\) 0.5, by 3%).
We conclude that the Orabi force field is a better force field than the Cordeiro force field for predicting the densities of pure H\({}_{2}\)O\({}_{2}\) in the temperature range of 253 - 353 K. In addition, the TIP4P/2005 or the mTIP3P force field in combination with the Orabi force field predicts the densities of H\({}_{2}\)O\({}_{2}\) aqueous solutions for the whole range of mole fractions (0 - 1.0) in very good agreement with the experimental values.
### Viscosities
We used the _gmx energy_ tool to compute the viscosities [105]. This tool uses the Navier-Stokes equation in which an external force is applied to the system. This causes a velocity gradient in the system from which the viscosity can be calculated [105]. The viscosities of H\({}_{2}\)O\({}_{2}\) aqueous solutions for various H\({}_{2}\)O\({}_{2}\) mole fractions (0 to 1.0) at \(T=293\) K were computed. Figure 4 shows the values of viscosity using the Orabi (a) and Cordeiro (b) force fields in combination with the TIP3P, mTIP3P and TIP4P/2005 force fields. The results including the TIP5P-E water force field are shown in Figure S1 of the Supplementary Information. The standard deviation is used to estimate error bars. The experimental values at 293 K are included for comparison [43].
The viscosities of pure water (i.e., mole fraction = 0 ) are in good agreement with the computed values using the TIP3P, mTIP3P, TIP4P/2005, and TIP5P-E force fields [106]. The experimental value of the viscosity of pure H\({}_{2}\)O\({}_{2}\) at 293 K is 1.25 mPa s [43]. The computed value is 1.36 mPa s by using the Orabi force field, and is 1.34 mPa s by using the Cordeiro force field.
The combination of the Orabi force field with the mTIP3P or TIP4P/2005 underestimates the viscosities of H\({}_{2}\)O\({}_{2}\) aqueous solutions for mole fractions up to 0.9. The Orabi-TIP3P model underestimates the values up to a mole fraction of 0.8, above which it slightly overestimates.
Figure 3: Densities of H\({}_{2}\)O\({}_{2}\) aqueous solution for various mole fractions of H\({}_{2}\)O\({}_{2}\) at \(T=298\) K and 1 bar using the (a) Orabi [67] and (b) Cordeiro [69] force fields in combination with the TIP3P [71], mTIP3P [68], TIP4P/2005 [73] and TIP5P-E [79; 80] water force fields. Experimental values [50] are added for comparison. Error bars are estimated based on the standard deviation and are much smaller than the markers used in the figure.
Figure 2: Densities of anhydrous H\({}_{2}\)O\({}_{2}\) at various temperatures using the Cordeiro [69] and Orabi [67] force fields at 1 bar with the experimental values [42]. Error bars are estimated based on the standard deviation and are much smaller than the markers used in the figure.
The combination of the Cordeiro force field with the mTIP3P, TIP3P, or TIP4P/2005 water force fields follows a similar trend. The Cordeiro-mTIP3P and Cordeiro-TIP3P models underestimate the viscosities up to a mole fraction of 0.8, above which they slightly overestimate. The Cordeiro-TIP4P/2005 model, however, underestimates the values of viscosities by 7% up to a mole fraction of 0.5 while it overestimates by ca. 5% for H\({}_{2}\)O\({}_{2}\) mole fractions higher than 0.8. Contrary to the other water force fields, the TIP5P-E water force field in combination with either the Orabi or Cordeiro force fields predicts a relatively high peak in viscosity at the intermediate mole fractions (mole fraction of 0.5), see Figure S1 of the Supplementary Information. This may be due to structural changes which the TIP5P-E water force field induces in the system. This is addressed in section III.4 using radial distribution functions.
We conclude that the TIP4P/2005 water force field in combination with the Orabi force field or the Cordeiro force field predicts the viscosities of H\({}_{2}\)O\({}_{2}\) aqueous solutions in better agreement with the experimental values.
### Self-diffusion coefficients
Diffusion coefficients were calculated from the mean-squared displacements (MSD), and were corrected for finite-size effects with the Yeh-Hummer equation [107, 108]
\[D=D_{\mathrm{MD}}+\frac{k_{\mathrm{B}}T\xi}{6\pi\eta L}, \tag{6}\]
where \(D\) and \(D_{\mathrm{MD}}\) denote the diffusion coefficient calculated with and without the finite-size effects corrections, respectively. \(k_{\mathrm{B}}\) is the Boltzmann constant, \(T\) is the absolute temperature (in K), \(\xi\) is a dimensionless number which for a cubic simulation box is equal to 2.837, \(L\) is the length of the cubic simulation box, and \(\eta\) is the viscosity of the system. We used the _gmx msd_ tool to obtain the MSD as a function of time. \(D_{\mathrm{MD}}\) is obtained by fitting the MSD to
\[\langle r^{2}\rangle_{\mathrm{MSD}}=2dD_{\mathrm{MD}}t, \tag{7}\]
where \(d=3\) is the dimension of the system. The self-diffusion coefficients were calculated from 1 ns to 20 ns NVT trajectories. Figure S2 of the SI shows an example of MSD versus time on a logarithmic scale. Figure 5 shows the self-diffusion coefficients of H\({}_{2}\)O\({}_{2}\) and water in aqueous H\({}_{2}\)O\({}_{2}\) solutions for the whole range of hydrogen peroxide mole fractions (0 to 1.0). The self-diffusion coefficients of pure water (i.e., mole fraction=0) for the four water force fields are in good agreement with the values reported in Ref. [79, 73, 109]. The TIP5P-E and TIP4P/2005 water force fields predict the value of the self-diffusion coefficient in better agreement with the experimental value (2.3\(\times 10^{-9}\)m\({}^{2}\)/s [110]).
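For illustration, the finite-size-corrected diffusion coefficient can be obtained from an MSD time series as sketched below (a simplified example; in practice only the linear, long-time part of the MSD is fitted).

```python
# A sketch of Eqs. (6)-(7): the MSD slope gives D_MD, to which the Yeh-Hummer
# finite-size correction is added; inputs are placeholders in SI units.
import numpy as np

k_B, xi = 1.380649e-23, 2.837297

def diffusion_coefficient(t, msd, T, eta, L):
    """t [s], msd [m^2], temperature T [K], viscosity eta [Pa s], box edge L [m]."""
    slope = np.polyfit(t, msd, 1)[0]                 # linear fit of the MSD, Eq. (7)
    D_md = slope / 6.0                               # d = 3
    return D_md + k_B * T * xi / (6.0 * np.pi * eta * L)   # Eq. (6)
```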
The self-diffusion coefficients of both the water and the H\({}_{2}\)O\({}_{2}\) molecules decrease monotonically with increasing mole fraction of H\({}_{2}\)O\({}_{2}\) using the Orabi-TIP3P or Orabi-mTIP3P models. There is a similar trend for the Cordeiro-TIP3P and Cordeiro-mTIP3P models. The TIP4P/2005 water force field in combination with either the Orabi force field or the Cordeiro force field predicts a relatively constant self-diffusion coefficient for both water and H\({}_{2}\)O\({}_{2}\) for the whole range of mole fractions. This is in agreement with a recent experimental study [33] where it was concluded that the self-diffusion coefficients of H\({}_{2}\)O\({}_{2}\) in solutions are insensitive to its concentration. The TIP5P-E water force field in combination with either the Orabi or Cordeiro force fields predicts a minimum at a mole fraction of 0.5 for the self-diffusion coefficients of both water and H\({}_{2}\)O\({}_{2}\). This is correlated with its very high value of viscosities (see Figure S1 of the Supplementary Information).
Figure 4: Viscosities of H\({}_{2}\)O\({}_{2}\) aqueous solution for various mole fractions of H\({}_{2}\)O\({}_{2}\) at 293 K for the (a) Orabi [67] and (b) Cordeiro [69] force fields in combination with the TIP3P [71], mTIP3P [68] and TIP4P/2005 [73]. The results including the TIP5P-E water force field are shown in Figure S1 of the Supplementary Information. Error bars are estimated based on the standard deviation. The experimental values [43] at 293 K are added for comparison.
### Radial Distribution Functions
Structural properties of H\({}_{2}\)O\({}_{2}\) aqueous solution at various mole fractions were investigated using the radial distribution functions (RDF). Note that there are 4 atom types in each system: hydrogen of water (H\({}_{\text{w}}\)), oxygen of water (O\({}_{\text{w}}\)), hydrogen of H\({}_{2}\)O\({}_{2}\) (H\({}_{\text{p}}\)), and oxygen of H\({}_{2}\)O\({}_{2}\) (O\({}_{\text{p}}\)).
The RDFs for O\({}_{\text{w}}\) and H\({}_{\text{w}}\) in pure water using the mTIP3P, TIP5P-E and TIP4P/2005 water force fields are shown in Figure 6 (a). The results show a first peak at approximately 0.18 nm, and a second peak at approximately 0.32 nm. The peak heights slightly differ between the water force fields with the TIP4P/2005 predicting a higher value followed by the TIP5P-E, and then the mTIP3P predicting a smaller value.
The respective RDFs in H\({}_{2}\)O\({}_{2}\) aqueous solutions using the Orabi force field in combination with the three water force fields for systems with mole fractions of 0.1, 0.5 and 0.9 are also shown in Figure 6 (b, c, d). The RDFs for O\({}_{\text{w}}\) and H\({}_{\text{w}}\) for systems using the Cordeiro force field in combination with the mTIP3P, TIP5P-E and TIP4P/2005 are shown in Figure S3 of the SI. In the system with a mole fraction of 0.1, the position of the peaks is independent of the water force fields. As the mole fraction of H\({}_{2}\)O\({}_{2}\) is increased to 0.5 and 0.9, the position of the peaks does not change in systems where the mTIP3P or TIP4P/2005 force field is used. In the systems where the TIP5P-E water force field is used, however, the RDF changes: the heights of the peaks become smaller and an additional structural correlation appears in the system with a mole fraction of 0.5.
Figure 5: Self-diffusion coefficients of water molecules and H\({}_{2}\)O\({}_{2}\) molecules in H\({}_{2}\)O\({}_{2}\) aqueous solutions for different mole fractions of H\({}_{2}\)O\({}_{2}\) at 298 K using the (a) Orabi [67] and (b) Cordeiro [69] force fields in combination with the TIP3P [71], mTIP3P [68], TIP4P/2005 [73] and TIP5P-E [79; 80] water force fields. Error bars are estimated based on the standard deviation and are much smaller than the markers used in the figure.
Figure 6: Radial distribution functions (RDFs) as a function of radial distance, \(r\) [nm], for O\({}_{\text{w}}\) (O of water) - H\({}_{\text{w}}\) (H of water) for H\({}_{2}\)O\({}_{2}\) aqueous solutions with \(x=0.1\) (b), \(x=0.5\) (c), and \(x=0.9\) (d) at 298 K and 1 bar using the Orabi force field in combination with the mTIP3P [68], TIP5P-E [79; 80], and TIP4P/2005 [73] water force fields, where \(x\) is the mole fraction of H\({}_{2}\)O\({}_{2}\). The RDF for pure water is plotted in (a).
This additional correlation between the water molecules persists up to 0.8 nm, whereas for the systems in which the mTIP3P or TIP4P/2005 is used, the structural correlation persists only up to 0.6 nm. In the Orabi-TIP5P-E system with a mole fraction of 0.9, the first two peaks disappear. A similar trend can be observed for the combinations involving the Cordeiro force field with the mTIP3P, TIP5P-E and TIP4P/2005 water force fields (see Figure S3 of SI). In the Cordeiro-TIP5P-E combination with a mole fraction of 0.5, however, the structural correlation between the water molecules is stronger than that of the corresponding Orabi-TIP5P-E combination. This can be seen from a more prominent peak after 0.4 nm compared to the other models.
The RDFs for O\({}_{\text{p}}\) and H\({}_{\text{p}}\) in H\({}_{2}\)O\({}_{2}\) aqueous solution with mole fractions of 0.1, 0.5 and 0.9 using the Orabi force field in combination with the three water force fields are shown in Figure 7 (a, b, c). Similarly, the respective RDFs using the Cordeiro force field in combination with the three water force fields are shown in Figure S4 of SI. The RDF of O\({}_{\text{p}}\) and H\({}_{\text{p}}\) in pure H\({}_{2}\)O\({}_{2}\) using the Orabi and the Cordeiro force fields is also shown in Figure 7 (d). The first peak in the RDF has a large amplitude, therefore we removed it to be able to distinguish the differences between the systems more clearly (see Figure S5 of SI). RDFs of the system at a mole fraction of 0.9 are almost identical using the three different water force fields with a second peak at 0.37 nm. By decreasing the mole fraction of H\({}_{2}\)O\({}_{2}\) to 0.5, and 0.1, the RDFs remain the same for the systems in which the mTIP3P or the TIP4P/2005 is used. For the system in which the TIP5P-E water force field is used, however, the RDF changes drastically. This is also the case with the Cordeiro-TIP5P-E model. The RDF for pure H\({}_{2}\)O\({}_{2}\) using the Orabi force field is almost identical to that using the Cordeiro force field.
The RDFs for O\({}_{\text{p}}\) and H\({}_{\text{w}}\), O\({}_{\text{p}}\) and O\({}_{\text{w}}\), and O\({}_{\text{w}}\) and H\({}_{\text{p}}\) using the Orabi force field with the three water force fields (mTIP3P, TIP4P/2005, and TIP5P-E) for solutions with mole fractions of 0.1, 0.5 and 0.9 are shown in Figure S6. Likewise, RDFs with the Cordeiro force field and the water force fields are shown in Figure S6 of SI. In systems where the mTIP3P and TIP4P/2005 water force fields were used, the RDFs have the same structure, in which the position of the first peak is in good agreement with X-ray measurements on crystals of H\({}_{2}\)O\({}_{2}\)\(\cdot\)2H\({}_{2}\)O [111] and simulation results [67]. On the contrary, the structural properties in systems where the TIP5P-E force field was used have changed. A comparable effect can be seen in Figure S6 of SI where the Cordeiro-TIP5P-E combination is used.
The numbers of water molecules in the micro and first solvation shells of the H\({}_{2}\)O\({}_{2}\) molecule were obtained by integrating up to the first and second minima of the RDF for O\({}_{\text{p}}\) - O\({}_{\text{w}}\), respectively. The results are shown in Tables S3 and S4 of the SI. Orabi & English [67] reported 3.0 and 9.4 water molecules in the micro and first solvation shells, respectively, for a single peroxide molecule in 500 water molecules using the mTIP3P water force field. The authors of Ref. [58] report 6.0 water molecules in the first solvation shell of H\({}_{2}\)O\({}_{2}\) using a hybrid quantum-classical simulation. According to our results, the number of water molecules in the first solvation shell at mole fractions of 0.1 and 0.9 is lower for systems where the TIP5P-E water force field is used. At a mole fraction of 0.5, however, the number of water molecules in the first solvation shell slightly increases.
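For reference, the solvation-shell occupancies quoted above follow from a standard integration of the O\({}_{\text{p}}\) - O\({}_{\text{w}}\) RDF. A minimal Python sketch is given below; it assumes that \(g(r)\) is available on a radial grid and that \(\rho\) denotes the number density of water oxygens, both of which are inputs and not values taken from this work.

```python
import numpy as np

def coordination_number(r, g, rho, r_cut):
    """n(r_cut) = 4*pi*rho * integral_0^{r_cut} g(r) r^2 dr.

    r     : radial grid [nm]
    g     : O_p-O_w radial distribution function on that grid
    rho   : number density of water oxygens [nm^-3]
    r_cut : first (micro-solvation shell) or second (first solvation shell)
            minimum of the RDF [nm]
    """
    mask = r <= r_cut
    return 4.0 * np.pi * rho * np.trapz(g[mask] * r[mask] ** 2, r[mask])
```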
Our results suggest that the addition of the TIP5P-E water force field to the H\({}_{2}\)O\({}_{2}\) Orabi or Cordeiro models disturbs the structural properties of the systems such that there is a stronger interaction between the water molecules and H\({}_{2}\)O\({}_{2}\). This effect is more pronounced at a mole fraction of 0.5 which is correlated with the
Figure 7: Radial distribution functions (RDFs) as a function of radial distance, \(r\) [nm], for O\({}_{\text{p}}\) (O of H\({}_{2}\)O\({}_{2}\)) - H\({}_{\text{p}}\) (H of H\({}_{2}\)O\({}_{2}\)) for H\({}_{2}\)O\({}_{2}\) aqueous solutions with \(x=0.1\) (b), \(x=0.5\) (c), and \(x=0.9\) (d) at 298 K and 1 bar using the Orabi [67] force field in combination with the mTIP3P [68], TIP5P-E [79; 80], and TIP4P/2005 [73] water force fields, where \(x\) is the mole fraction of H\({}_{2}\)O\({}_{2}\). The first peak (at ca. 0.19 nm) was removed to distinguish the differences between the combinations clearly. The RDF for pure H\({}_{2}\)O\({}_{2}\) is plotted in (d) using the Orabi force field and the Cordeiro force field.
prediction of the Orabi-TIP5P-E and Cordeiro-TIP5P-E models for densities, viscosities, and self-diffusion coefficients (see Figures 3, S1, and 5).
### Hydrogen Bond analysis
The number of hydrogen bonds (calculated as the summation of hydrogen bonds between H\({}_{2}\)O\({}_{2}\) - H\({}_{2}\)O\({}_{2}\), H\({}_{2}\)O\({}_{2}\) - water and water - water) per H\({}_{2}\)O\({}_{2}\) molecule for combinations of the Orabi and Cordeiro force fields with the water force fields is shown in Figure 9. We used the geometric criterion for hydrogen bonds proposed in Ref. [112]. The number of hydrogen bonds for both the Orabi and Cordeiro force fields in combination with the TIP4P/2005, TIP3P and mTIP3P water force fields exhibits a steady increase up to a mole fraction of 0.9. Existing literature indicates the existence of about 4 hydrogen bonds between hydrogen peroxide and water molecules [58; 60]. For pure H\({}_{2}\)O\({}_{2}\) systems, the number of hydrogen bonds sharply decreases to about 5 for both the Orabi and Cordeiro systems. The Orabi-TIP5P-E and Cordeiro-TIP5P-E combinations, however, behave differently from the others: they exhibit a minimum in the number of hydrogen bonds at a mole fraction of 0.4 (around 3 hydrogen bonds for Orabi-TIP5P-E and 2 for Cordeiro-TIP5P-E). These minima at the intermediate concentrations coincide with the high viscosities and low diffusion coefficients seen earlier. Analysis of the RDFs, solvation shells and the number of hydrogen bonds of the TIP5P-E systems indicates a variation in the arrangement of water molecules around a H\({}_{2}\)O\({}_{2}\) molecule. The effect of this structural difference is reflected in the deviation of the viscosities and self-diffusion coefficients of the TIP5P-E-based systems from those of the systems based on the other water force fields.
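As an illustration of the counting procedure, the sketch below applies a generic distance-angle criterion to a single donor-hydrogen-acceptor triple. The specific cutoff values (0.35 nm, 30 degrees) are common defaults and only stand in for the criterion of Ref. [112] used in this work; periodic-boundary handling is omitted for brevity.

```python
import numpy as np

def is_hbond(donor, hydrogen, acceptor, r_max=0.35, angle_max=30.0):
    """Geometric hydrogen-bond test for one D-H...A triple (positions in nm).

    Accepts the bond if the donor-acceptor distance is below r_max and the
    H-D-A angle is below angle_max (degrees).
    """
    r_da = np.linalg.norm(acceptor - donor)
    v_dh = hydrogen - donor
    v_da = acceptor - donor
    cos_theta = np.dot(v_dh, v_da) / (np.linalg.norm(v_dh) * np.linalg.norm(v_da))
    theta = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    return (r_da <= r_max) and (theta <= angle_max)
```

The per-molecule numbers in Figure 9 then follow by summing the accepted H\({}_{2}\)O\({}_{2}\) - H\({}_{2}\)O\({}_{2}\), H\({}_{2}\)O\({}_{2}\) - water and water - water bonds over all frames and dividing by the number of H\({}_{2}\)O\({}_{2}\) molecules.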
Figure 8: Radial distribution functions (RDFs) as a function of radial distance, \(r\) [nm], for O\({}_{\rm p}\) (O of H\({}_{2}\)O\({}_{2}\)) and H\({}_{\rm w}\) (H of water) (a - c), O\({}_{\rm w}\) (O of water) and O\({}_{\rm p}\) (O of H\({}_{2}\)O\({}_{2}\)) (d - f), and O\({}_{\rm w}\) (O of water) and H\({}_{\rm p}\) (H of H\({}_{2}\)O\({}_{2}\)) (g - i) for systems using the mTIP3P [68], TIP5P-E [79; 80], and TIP4P/2005 [73] water force fields in combination with the Orabi force field for \(x=0.1\), \(x=0.5\), and \(x=0.9\) at 298 K and 1 bar, where \(x\) is the mole fraction of H\({}_{2}\)O\({}_{2}\).
### Henry coefficients
The Henry coefficients were computed for H\({}_{2}\)O\({}_{2}\) in water using the Orabi and Cordeiro force fields in combination with the various water force fields. It should be noted that we have not further considered the TIP5P-E water force field for the solubility calculations, as it was ascertained in the earlier sections that neither the Cordeiro nor the Orabi force field in combination with the TIP5P-E water force field could accurately predict the densities, viscosities and self-diffusion coefficients of H\({}_{2}\)O\({}_{2}\) aqueous systems. The results of the solubility calculations are provided in Table 2. The reported experimental values range from 670 to 1400 mol/(m\({}^{3}\) Pa) [113; 114; 115; 116]. It is evident that the Cordeiro force field in combination with the TIP3P or mTIP3P water force fields predicts Henry coefficients within the range of the reported experimental values.
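The conversion from the computed excess chemical potentials to the tabulated Henry coefficients can be sketched as follows. The relations below are the standard infinite-dilution expressions used in CFCMC studies, with the solvent density supplied as an input, so the exact conventions of this work may differ slightly.

```python
import numpy as np

kB = 1.380649e-23       # Boltzmann constant [J/K]
NA = 6.02214076e23      # Avogadro number [1/mol]

def henry_from_mu_ex(mu_ex_K, T, rho_solvent_molar):
    """Henry volatility K_v [Pa] and Henry solubility H [mol m^-3 Pa^-1]
    from an excess chemical potential mu_ex expressed in K (i.e. mu_ex/kB).

    K_v = rho_N * kB * T * exp(mu_ex/(kB*T)),   H = rho_molar / K_v,
    with rho_N the solvent number density [m^-3].
    """
    rho_N = rho_solvent_molar * NA
    K_v = rho_N * kB * T * np.exp(mu_ex_K / T)
    H = rho_solvent_molar / K_v
    return K_v, H

# With the Orabi-TIP4P/2005 value from Table 2 (mu_ex ~ -3734 K) and liquid
# water at 298 K (~5.5e4 mol/m^3), this gives K_v and H of the same order as
# the tabulated 512 Pa and 109 mol/(m^3 Pa).
K_v, H = henry_from_mu_ex(-3734.0, 298.0, 5.53e4)
```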
## IV Conclusions
We performed MD and CFCMC simulations to study thermodynamic properties of aqueous solutions of H\({}_{2}\)O\({}_{2}\). The quality of the available force fields of H\({}_{2}\)O\({}_{2}\), Cordeiro [69] and Orabi [67], was evaluated by comparing the results with experiments. The densities of pure H\({}_{2}\)O\({}_{2}\) computed using the Orabi force field are in excellent agreement with the experimental values for the temperature range of 253 K to 353 K. The Cordeiro force field underestimates the densities of pure H\({}_{2}\)O\({}_{2}\) by 3%. We computed densities, viscosities, and self-diffusion coefficients of H\({}_{2}\)O\({}_{2}\) in aqueous solutions for the whole range of mole fractions of H\({}_{2}\)O\({}_{2}\) (0 to 1.0) at ambient temperatures and pressures using four water force fields: TIP3P, mTIP3P, TIP4P/2005, and TIP5P-E. The results show that the TIP4P/2005 water force field in combination with the Orabi force field can predict the densities of H\({}_{2}\)O\({}_{2}\) aqueous solutions in excellent agreement with experimental values. Both the Orabi and Cordeiro force fields in combination with the TIP4P/2005 water force field predict the viscosities of H\({}_{2}\)O\({}_{2}\) in reasonable agreement with experimental results. The TIP5P-E water force field leads to a very high value (maximum) for the viscosity of H\({}_{2}\)O\({}_{2}\) aqueous solutions at a mole fraction of 0.5, and thereby a very small value (minimum) for the self-diffusion coefficients of H\({}_{2}\)O\({}_{2}\) and water. The TIP4P/2005 force field in combination with either the Orabi or Cordeiro force field predicts a relatively constant diffusion coefficient for the whole range of H\({}_{2}\)O\({}_{2}\) mole fractions, in agreement with a recent experimental study [33]. We studied the structural properties of H\({}_{2}\)O\({}_{2}\) aqueous solutions using radial distribution functions. These results suggest that the use of the TIP5P-E water force field in combination with either the Orabi or Cordeiro force field predicts a stronger interaction between water molecules and H\({}_{2}\)O\({}_{2}\) molecules. Hydrogen bond analysis indicates a steady increase in the number of hydrogen bonds per H\({}_{2}\)O\({}_{2}\) molecule with increasing solute concentration for H\({}_{2}\)O\({}_{2}\) aqueous solutions. The Cordeiro-TIP5P-E and Orabi-TIP5P-E systems exhibited a minimum at the intermediate solute concentrations. This is in line with the deviation in the dynamic properties (viscosities and self-diffusion coefficients) of these systems. Finally, we
\begin{table}
\begin{tabular}{c c c c} \hline Model & \(\mu^{\text{ex}}/[\text{K}]\) & \(K^{\text{px}}_{\text{v}}/[\text{Pa}]\) & \(H^{\text{cp}}_{s}/[\text{mol}/(\text{m}^{3}\,\text{Pa})]\) \\ \hline Orabi - TIP4P/2005 & \(-3734\pm 30\) & \(512\pm 52\) & \(109\pm 11\) \\ Orabi - TIP3P & \(-3836\pm 27\) & \(365\pm 32\) & \(153\pm 14\) \\ Orabi - mTIP3P & \(-3963\pm 14\) & \(239\pm 11\) & \(232\pm 11\) \\ Cordeiro - TIP4P/2005 & \(-4142\pm 32\) & \(132\pm 14\) & \(424\pm 46\) \\ Cordeiro - TIP3P & \(-4392\pm 58\) & \(58\pm 11\) & \(989\pm 190\) \\ Cordeiro - mTIP3P & \(-4487\pm 57\) & \(42\pm 7\) & \(1357\pm 275\) \\ \hline \end{tabular}
\end{table}
Table 2: Excess chemical potentials (\(\mu^{\text{ex}}\)), the Henry volatility coefficient (\(K^{\text{px}}_{\text{v}}\)), and the Henry coefficient (\(H^{\text{cp}}_{s}\)), using the Orabi and the Cordeiro force fields in combination with the TIP4P/2005, TIP3P and mTIP3P water force fields. Errors are estimated using standard deviations of independent simulations.
Figure 9: Number of hydrogen bonds per H\({}_{2}\)O\({}_{2}\) molecule for systems with various mole fractions of H\({}_{2}\)O\({}_{2}\) at \(T=298\) K and 1 bar using the (a) Orabi [67] and (b) Cordeiro [69] force fields in combination with the TIP3P [71], mTIP3P [68], TIP4P/2005 [73] and TIP5P-E [80; 79] water force fields.
computed the Henry coefficients of H\({}_{2}\)O\({}_{2}\) in water. The values using the Cordeiro force field in combination with either the TIP3P or mTIP3P water force field are within the range of experimental values. The quantitative data presented in this work can be used by macroscopic plasma fluid models to determine the uptake of H\({}_{2}\)O\({}_{2}\) from the gas-phase plasma by the liquid [36] or to interpret and complement experimental findings [117].
## Author contributions
TS carried out the simulations and data analysis. All authors provided critical feedback on the interpretation of data analysis. BB conceived and supervised the project. TS and BB wrote the manuscript in collaboration with all the authors.
## Supplementary information
Force field parameters for Cordeiro and Orabi & English using the GROMACS convention are provided in Tables S1 and S2, respectively; Figure S1 shows viscosities of H\({}_{2}\)O\({}_{2}\) aqueous solutions for various mole fractions of H\({}_{2}\)O\({}_{2}\); Figure S2 shows an example of the MSD versus time on a logarithmic scale; Radial distribution functions are illustrated in Figures S3-S6; The number of water molecules in the micro solvation shell, and the first solvation shell of H\({}_{2}\)O\({}_{2}\) in aqueous solution for various mole fractions of H\({}_{2}\)O\({}_{2}\) is shown in Tables S3 and S4, respectively.
## Conflicts of interest
There are no conflicts to declare.
## Acknowledgements
BB thanks the strategic alliance between TU/e, Utrecht University, and University Medical Center Utrecht, and TS thanks the Institute for Complex Molecular Systems for financial support.
Hydrogen peroxide plays an important role in many processes in environmental and industrial chemistry. We performed classical molecular dynamics and continuous fractional component Monte Carlo simulations to compute thermodynamic properties of aqueous solutions of H2O2. We systematically evaluated the quality of the H2O2 force fields by Orabi & English and by Cordeiro. To assess their suitability in combination with water, four water force fields were used: TIP3P, TIP4P/2005, TIP5P-E, and the TIP3P force field as modified by Orabi & English. Densities of pure H2O2 computed with the Orabi & English force field are in very good agreement with experimental results, whereas densities computed with the Cordeiro force field are about 3% lower. TIP4P/200 |
2309.16585 | Text-to-3D using Gaussian Splatting | Automatic text-to-3D generation that combines Score Distillation Sampling
(SDS) with the optimization of volume rendering has achieved remarkable
progress in synthesizing realistic 3D objects. Yet most existing text-to-3D
methods by SDS and volume rendering suffer from inaccurate geometry, e.g., the
Janus issue, since it is hard to explicitly integrate 3D priors into implicit
3D representations. Besides, it is usually time-consuming for them to generate
elaborate 3D models with rich colors. In response, this paper proposes GSGEN, a
novel method that adopts Gaussian Splatting, a recent state-of-the-art
representation, to text-to-3D generation. GSGEN aims at generating high-quality
3D objects and addressing existing shortcomings by exploiting the explicit
nature of Gaussian Splatting that enables the incorporation of 3D prior.
Specifically, our method adopts a progressive optimization strategy, which
includes a geometry optimization stage and an appearance refinement stage. In
geometry optimization, a coarse representation is established under 3D point
cloud diffusion prior along with the ordinary 2D SDS optimization, ensuring a
sensible and 3D-consistent rough shape. Subsequently, the obtained Gaussians
undergo an iterative appearance refinement to enrich texture details. In this
stage, we increase the number of Gaussians by compactness-based densification
to enhance continuity and improve fidelity. With these designs, our approach
can generate 3D assets with delicate details and accurate geometry. Extensive
evaluations demonstrate the effectiveness of our method, especially for
capturing high-frequency components. Our code is available at
https://github.com/gsgen3d/gsgen | Zilong Chen, Feng Wang, Yikai Wang, Huaping Liu | 2023-09-28T16:44:31 | http://arxiv.org/abs/2309.16585v4 | # Text-to-3D using Gaussian Splatting
###### Abstract
In this paper, we present Gaussian Splatting based text-to-3D generation (Gsgen), a novel approach for generating high-quality 3D objects. Previous methods suffer from inaccurate geometry and limited fidelity due to the absence of 3D prior and proper representation. We leverage 3D Gaussian Splatting, a recent state-of-the-art representation, to address existing shortcomings by exploiting the explicit nature that enables the incorporation of 3D prior. Specifically, our method adopts a progressive optimization strategy, which includes a geometry optimization stage and an appearance refinement stage. In geometry optimization, a coarse representation is established under a 3D geometry prior along with the ordinary 2D SDS loss, ensuring a sensible and 3D-consistent rough shape. Subsequently, the obtained Gaussians undergo an iterative refinement to enrich details. In this stage, we increase the number of Gaussians by compactness-based densification to enhance continuity and improve fidelity. With these designs, our approach can generate 3D content with delicate details and more accurate geometry. Extensive evaluations demonstrate the effectiveness of our method, especially for capturing high-frequency components. Our code is available at [https://github.com/gsgen3d/gsgen/](https://github.com/gsgen3d/gsgen/).
## 1 Introduction
Diffusion model based text-to-image generation (Saharia et al., 2022; Rombach et al., 2022; Ramesh et al., 2022; Alex et al., 2023) has achieved remarkable success in synthesizing photo-realistic images from textual prompts. Nevertheless, for high-quality text-to-3D content generation, the advancements lag behind that of image generation due to the inherent complexity of real-world 3D scenes. Recently, DreamFusion (Poole et al., 2023) has made great progress in generating delicate assets by utilizing score distillation sampling with a pre-trained text-to-image diffusion prior. Its follow-up works further improve this paradigm in quality (Wang et al., 2023; Chen et al., 2023), training speed (Lin et al., 2023; Metzer et al., 2022), and generating more reasonable geometry (Armandpour et al., 2023; Zhu
Figure 1: Delicate 3D assets generated using the proposed Gsgen. See our project page gsgen3d.github.io for videos of these images.
& Zhuang, 2023; Seo et al., 2023). However, most existing text-to-3D methods still suffer greatly from collapsed geometry and limited fidelity, and it is difficult to incorporate 3D priors into them due to the implicit nature of NeRF (Mildenhall et al., 2020) and DMTet (Shen et al., 2021).
Recently, 3D Gaussian Splatting (Kerbl et al., 2023) has garnered significant attention in the field of 3D reconstruction, primarily due to its remarkable ability to represent intricate scenes and its capability for real-time rendering. By modeling a scene using a set of 3D Gaussians, Kerbl et al. (2023) adopt an explicit and object-centric approach that fundamentally diverges from implicit representations like NeRF and DMTet. This distinctive approach paves the way for the integration of explicit 3D priors into text-to-3D generation. Building upon this insight, instead of a straightforward replacement of NeRFs with Gaussians, we propose to guide the generation with an additional 3D point-cloud diffusion prior in order to enhance geometric coherence. By adopting this strategy, we can better harness the inherent advantages of 3D Gaussians in the creation of complex and 3D-consistent assets.
Specifically, we propose to represent the generated 3D content with a set of Gaussians and optimize them progressively in two stages, namely geometry optimization and appearance refinement. In the geometry optimization stage, we optimize the Gaussians under the guidance of a 3D point cloud diffusion prior along with the ordinary 2D image prior. The incorporation of this extra 3D SDS loss ensures a 3D-consistent rough geometry. In the subsequent refinement stage, the Gaussians undergo an iterative enhancement to enrich the delicate details. Due to the sub-optimal performance of the
Figure 2: Compared to previous methods, Gsgen alleviates the Janus problem by representing the 3D scene using 3D Gaussian Splatting, which is capable of applying direct 3D geometry guidance and expressing content with delicate details. Note that the results of DreamFusion and Magic3D are obtained using Stable DreamFusion (Tang, 2022) and threestudio (Guo et al., 2023) since the official implementations are not publicly available due to the utilization of private diffusion models. All the results are obtained using StableDiffusion (Rombach et al., 2022) on checkpoint _runwayml/stable-diffusion-v1-5_ for a fair comparison. Videos of these images are provided in the supplemental video.
original adaptive control under SDS loss, we introduce an additional compactness-based densification technique to enhance appearance and fidelity. Besides, to prevent potential degeneration and break the symmetry in the early stage, the Gaussians are initialized with a coarse point cloud generated by a text-to-point-cloud diffusion model. As a result of these techniques, our approach can generate 3D assets with accurate geometry and exceptional fidelity. Fig.2 illustrates a comparison between Gsgen and previous state-of-the-art methods on generating assets with asymmetric geometry.
In summary, our contributions are:
* We propose Gsgen, the first text-to-3D generation method using 3D Gaussians as the representation. By incorporating geometric priors, we highlight the distinctive advantages of Gaussian Splatting in text-to-3D generation.
* We introduce a two-stage optimization strategy that first exploits the joint guidance of 2D and 3D diffusion priors to shape a coherent rough structure in geometry optimization, and then enriches the details with compactness-based densification in appearance refinement.
* We validate Gsgen on various textual prompts. Experiments show that our method can generate 3D assets with more accurate geometry and enhanced fidelity than previous methods. In particular, Gsgen demonstrates superior performance in capturing _high-frequency components_ in objects, such as feathers, surfaces with intricate textures, animal fur, etc.
## 2 Related Work
### 3D Scene Representations
Representing 3D scenes in a differentiable way has achieved remarkable success in recent years. NeRF (Mildenhall et al., 2020) demonstrates outstanding performance in novel view synthesis by representing 3D scenes with a coordinate-based neural network. Follow-up works have emerged to improve NeRF in reconstruction quality (Barron et al., 2021, 2023; Wang et al., 2022c), handling large-scale (Tancik et al., 2022; Zhang et al., 2020; Martin-Brualla et al., 2021; Chen et al., 2022b) and dynamic scenes (Park et al., 2021; Attal et al., 2023; Wang et al., 2022b; Sara Fridovich-Keil and Giacomo Meanti et al., 2023; Pumarola et al., 2021), and improving training (Yu et al., 2021a; Chen et al., 2022a; Sun et al., 2022; Muller et al., 2022) and rendering (Reiser et al., 2023; Hedman et al., 2021; Yu et al., 2021b) speed. Although great progress has been made, NeRF-based methods still suffer from low rendering speed and high training-time memory usage due to their implicit nature. To tackle these challenges, Kerbl et al. (2023) propose to represent the 3D scene as a set of anisotropic Gaussians and render novel views using a GPU-optimized tile-based rasterization technique. 3D Gaussian Splatting achieves comparable reconstruction quality while being capable of real-time rendering. Our research highlights the distinctive advantages of Gaussian Splatting within text-to-3D generation by incorporating an explicit 3D prior, generating 3D-consistent and highly detailed assets.
### Diffusion Models
Diffusion models have arisen as a promising paradigm for learning and sampling from a complex distribution. Inspired by the diffusion process in physics, these models involve a forward process to gradually add noise and an inverse process to denoise a noisy sample with a trained neural network. After DDPM (Ho et al., 2020; Song et al., 2021b) highlighted the effectiveness of diffusion models in capturing real-world image data, a plethora of research has emerged to address the inherent challenges, including fast sampling (Lu et al., 2022; Bao et al., 2022; Song et al., 2021a) and backbone architectural improvements (Bao et al., 2023; Podell et al., 2023; Liu et al., 2023b; Dhariwal and Nichol, 2021; Hoogeboom et al., 2023; Peebles and Xie, 2022). One of the most successful applications of diffusion models lies in text-to-image generation, where they have shown remarkable progress in generating realistic images from text prompts (Ho and Salimans, 2022; Ramesh et al., 2022; Alex et al., 2023). To generate high-resolution images, current solutions either adopt a cascaded structure that consists of a low-resolution diffusion model and several super-resolution models (Saharia et al., 2022; Balaji et al., 2022; Alex et al., 2023) or train the diffusion model in latent space with an auto-encoder (Rombach et al., 2022; Gu et al., 2022). Our proposed Gsgen is built upon StableDiffusion (Rombach et al., 2022), an open-source latent diffusion model that provides fine-grained guidance for high-quality 3D content generation.
### Text-to-3D generation
Early efforts in text-to-3D generation, including CLIP-forge (Sanghi et al., 2021), Dream Fields (Jain et al., 2022), Text2Mesh (Michel et al., 2022), TANGO (Chen et al., 2022c), CLIPNeRF (Wang et al., 2022a), and CLIP-Mesh (Khalid et al., 2022), harness CLIP (Radford et al., 2021) guidance to create 3D assets. To leverage the stronger diffusion prior, DreamFusion (Poole et al., 2023) introduces the score distillation sampling loss that optimizes the 3D content by minimizing the difference between rendered images and the diffusion prior. This development sparked a surge of interest in text-to-3D generation through image diffusion prior (Wang et al., 2023a; Raj et al., 2023; Lorraine et al., 2023; Zhu and Zhuang, 2023; Qian et al., 2023). Magic3D (Lin et al., 2023) employs a coarse-to-fine strategy, optimizing a NeRF with a low-resolution diffusion prior and then enhancing texture under latent diffusion prior with a DMTet initialized with the coarse NeRF. Latent-NeRF (Metzer et al., 2022) trains a NeRF within the latent space of StableDiffusion and introduces the Sketch-Shape method to guide the generation process. Fantasia3D (Chen et al., 2023) disentangles the learning of geometry and material, harnessing physics-based rendering techniques to achieve high-fidelity mesh generation. ProlificDreamer (Wang et al., 2023c) introduces variational score distillation to improve SDS and facilitate the generation of high-quality and diverse 3D assets, whose contribution is orthogonal to ours since we focus on incorporating 3D prior with more advanced representation. Another line of work lies in generating 3D assets directly through a 3D diffusion model based on NeRF or other differentiable representations (Wang et al., 2023b; Jun and Nichol, 2023; Liu et al., 2023a; Cheng et al., 2023). Our approach builds upon Point-E (Nichol et al., 2022), a text-to-point-cloud diffusion model trained on millions of 3D models, which offers valuable 3D guidance and coarse initialization.
## 3 Preliminary
### Score Distillation Sampling
Instead of directly generating 3D models, recent studies have achieved notable success by optimizing 3D representation with a 2D pre-trained image diffusion prior based on score distillation sampling, as proposed by Poole et al. (2023). In this paradigm, the scene is represented as a differentiable image parameterization (DIP) denoted as \(\theta\), where the image can be differentially rendered based on the given camera parameters through a transformation function \(g\). The DIP \(\theta\) is iteratively refined to ensure that, for any given camera pose, the rendered image \(\mathbf{x}=g(\theta)\) closely resembles a plausible sample derived from the guidance diffusion model. DreamFusion achieves this by leveraging Imagen (Saharia et al., 2022) to provide a score estimation function denoted as \(\epsilon_{\phi}(x_{t};y,t)\), where \(x_{t}\), \(y\), and \(t\) represent the noisy image, text embedding, and timestep, respectively. This estimated score plays a pivotal role in guiding the gradient update, as expressed by the following equation:
\[\nabla_{\theta}\mathcal{L}_{\text{SDS}}=\mathbb{E}_{\epsilon,t}\left[w(t)( \epsilon_{\phi}(x_{t};y,t)-\epsilon)\frac{\partial\mathbf{x}}{\partial\theta}\right] \tag{1}\]
where \(\epsilon\) is Gaussian noise and \(w(t)\) is a weighting function. Our approach combines score distillation sampling with 3D Gaussian Splatting at both the 2D and 3D levels with different diffusion models to generate 3D assets with both detailed appearance and 3D-consistent geometry.
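A minimal PyTorch-style sketch of Eq. (1) is given below; `eps_model` stands in for the noise predictor of the guidance diffusion model (for StableDiffusion, the UNet applied to rendered latents), and its signature, the choice \(w(t)=1-\bar{\alpha}_t\), and the classifier-free-guidance wiring are assumptions for illustration rather than the paper's actual implementation.

```python
import torch

def sds_grad(x, t, text_emb, uncond_emb, eps_model, alphas_cumprod, guidance_scale=100.0):
    """Score distillation sampling gradient w.r.t. a rendered image/latent x.

    alphas_cumprod: 1-D tensor of cumulative noise-schedule products, indexed by t.
    """
    noise = torch.randn_like(x)
    a_t = alphas_cumprod[t]
    x_t = a_t.sqrt() * x + (1.0 - a_t).sqrt() * noise            # diffuse x to timestep t
    with torch.no_grad():                                        # no backprop through the diffusion model
        eps_uncond = eps_model(x_t, t, uncond_emb)
        eps_text = eps_model(x_t, t, text_emb)
    eps = eps_uncond + guidance_scale * (eps_text - eps_uncond)  # classifier-free guidance
    w_t = 1.0 - a_t                                              # one common choice of w(t)
    return w_t * (eps - noise)

# The gradient is injected directly, e.g.:
#   g = sds_grad(rendered, t, c_text, c_null, unet, a_bar)
#   rendered.backward(gradient=g)   # flows back into the scene parameters
```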
### 3D Gaussian Splatting
Gaussian Splatting, as introduced in Kerbl et al. (2023), presents a pioneering method for novel view synthesis and 3D reconstruction from multi-view images. Unlike NeRF, 3D Gaussian Splatting adopts a distinctive approach, where the underlying scene is represented through a set of anisotropic 3D Gaussians parameterized by their positions, covariances, colors, and opacities. When rendering, the 3D Gaussians are projected onto the camera's imaging plane (Zwicker et al., 2001). Subsequently, the projected 2D Gaussians are assigned to individual tiles. The color of \(\mathbf{p}\) on the image plane is rendered sequentially with point-based volume rendering technique (Zwicker et al., 2001):
\[C(\mathbf{p})=\sum_{i\in\mathcal{N}}c_{i}\alpha_{i}\prod_{j=1}^{i-1}(1-\alpha _{j})\quad\text{ where, }\alpha_{i}=o_{i}e^{-\frac{1}{2}(\mathbf{p}-\mu_{i})^{T}\Sigma_{i}^{-1}( \mathbf{p}-\mu_{i})}, \tag{2}\]
where \(c_{i}\), \(o_{i}\), \(\mu_{i}\), and \(\Sigma_{i}\) represent the color, opacity, position, and covariance of the \(i\)-th Gaussian respectively, and \(\mathcal{N}\) denotes the Gaussians in this tile. To maximize the utilization of shared memory,
Gaussian Splatting further designs a GPU-friendly rasterization process where each thread block is assigned to render an image tile. These advancements enable Gaussian Splatting to achieve more detailed scene reconstruction, significantly faster rendering speed, and reduction of memory usage during training compared to NeRF-based methods. In this study, we expand the application of Gaussian Splatting into text-to-3D generation and introduce a novel approach that leverages the explicit nature of Gaussian Splatting by integrating 3D diffusion priors, highlighting the potential of 3D Gaussians as a fundamental representation for generative tasks.
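For intuition, the per-pixel compositing of Eq. (2) can be written in a few lines of Python; the projected means, 2D inverse covariances, colors, and opacities are assumed to be given and already depth-sorted (the actual renderer performs this per tile on the GPU).

```python
import numpy as np

def composite_pixel(p, means2d, inv_covs2d, colors, opacities):
    """Front-to-back alpha compositing of depth-sorted projected Gaussians at pixel p (Eq. 2)."""
    C = np.zeros(3)
    T = 1.0                                   # accumulated transmittance prod_j (1 - alpha_j)
    for mu, inv_cov, c, o in zip(means2d, inv_covs2d, colors, opacities):
        d = p - mu
        alpha = o * np.exp(-0.5 * d @ inv_cov @ d)
        C += T * alpha * c
        T *= 1.0 - alpha
        if T < 1e-4:                          # early stop once the pixel is nearly opaque
            break
    return C
```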
## 4 Approach
Our goal is to generate 3D content with accurate geometry and delicate detail. To accomplish this, Gsgen exploits 3D Gaussians as the representation due to their flexibility in incorporating geometry priors and their capability to represent high-frequency details. Based on the observation that a point cloud can be seen as a set of isotropic Gaussians, we propose to integrate a 3D SDS loss with a pre-trained point cloud diffusion model to shape a 3D-consistent geometry. With this additional geometry prior, our approach can mitigate the Janus problem and generate more sensible geometry. Subsequently, in appearance refinement, the Gaussians undergo an iterative optimization to gradually improve fine-grained details with a compactness-based densification strategy, while preserving the fundamental geometric information. The detailed Gsgen methodology is presented as follows.
### Geometry Optimization
Many text-to-3D methods encounter the significant challenge of overfitting to several views, resulting in assets with multiple faces and collapsed geometry (Poole et al., 2023; Lin et al., 2023; Chen et al., 2023). This issue, known as the Janus problem (Armandpour et al., 2023; Seo et al., 2023), has posed a persistent hurdle in the development of such methodologies. In our early experiments, we faced a similar challenge: relying solely on 2D guidance frequently led to collapsed results. However, we noticed that the geometry of 3D Gaussians can be directly rectified with a point cloud prior, which is not feasible for previous text-to-3D methods using NeRFs and DMTet. Recognizing this distinctive advantage, we introduce a geometry optimization process to shape a reasonable structure. Concretely, in addition to the ordinary 2D image diffusion prior, we further optimize the positions of the Gaussians using Point-E (Nichol et al., 2022), a pre-trained text-to-point-cloud diffusion model. Instead of directly aligning the Gaussians with a Point-E generated point cloud, we apply a 3D SDS loss, inspired by the image-diffusion SDS, to guide the positions, which avoids challenges including registration, scaling, and potential degeneration. Notably, we only apply the Point-E SDS gradients to positions, as empirical observations suggest that Point-E may generate relatively simple color patterns. We summarize the loss in the geometry optimization stage as the following equation:
\[\nabla_{\theta}\mathcal{L}_{\text{geometry}}=\mathbb{E}_{\epsilon_{I},t}\left[w _{I}(t)(\epsilon_{\phi}(x_{t};y,t)-\epsilon_{I})\frac{\partial\mathbf{x}}{ \partial\theta}\right]+\lambda_{\text{3D}}\cdot\mathbb{E}_{\epsilon_{P},t} \left[w_{P}(t)(\epsilon_{\psi}(p_{t};y,t)-\epsilon_{P})\right], \tag{3}\]
Figure 3: **Overview of the proposed Gsgen.** Our approach aims at generating 3D assets with accurate geometry and delicate appearance. Gsgen starts by utilizing Point-E to initialize the positions of the Gaussians (Sec 4.3). The optimization is grouped into geometry optimization (Sec 4.1) and appearance refinement (Sec 4.2) to strike a balance between a coherent geometric structure and highly detailed texture.
where \(p_{t}\) and \(x_{t}\) represent the noisy Gaussian positions and the rendered image, \(w_{*}\) and \(\epsilon_{*}\) refer to the corresponding weighting function and Gaussian noise.
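In code, the geometry stage simply adds a second SDS term on the Gaussian centers. The sketch below assumes `sds_grad`-style helpers for both the 2D image prior and the Point-E point-cloud prior; the function and attribute names are illustrative placeholders, not the released implementation.

```python
# Geometry-optimization step (illustrative): 2D SDS on the rendered image plus
# 3D SDS on the Gaussian positions, the latter applied to positions only (Eq. 3).
def geometry_step(gaussians, renderer, image_sds, point_sds, lambda_3d=0.01):
    image = renderer(gaussians)                        # differentiable splatting
    g_img = image_sds(image)                           # 2D prior gradient, Eq. (1)
    image.backward(gradient=g_img, retain_graph=True)  # updates all Gaussian parameters

    g_pts = point_sds(gaussians.positions)             # Point-E prior gradient
    gaussians.positions.backward(gradient=lambda_3d * g_pts)  # positions only
```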
### Appearance Refinement
While the introduction of the 3D prior does help in learning a more reasonable geometry, we experimentally find that it also disturbs the learning of appearance, resulting in insufficiently detailed assets. Based on this observation, Gsgen employs a further appearance refinement stage that iteratively refines and densifies the Gaussians utilizing only the 2D image prior. To densify the Gaussians, Kerbl et al. (2023) propose to split Gaussians with a large view-space spatial gradient. However, we encountered challenges in determining the appropriate threshold for this spatial gradient under score distillation sampling. Due to the stochastic nature of the SDS loss, a small threshold is prone to being misled by occasional large stochastic gradients, generating an excessive number of Gaussians, whereas a large threshold leads to a blurry appearance, as illustrated in Fig.8. To tackle this, we propose compactness-based densification as a supplement to the positional gradient-based split with a larger threshold. Specifically, for each Gaussian, we first obtain its K nearest neighbors with a KD-Tree. Then, for each of the neighbors, if the distance between the Gaussian and its neighbor is smaller than the sum of their radii, a Gaussian will be added between them with a radius equal to the residual. As illustrated in Fig.4, compactness-based densification can "fill the holes", resulting in a more complete geometric structure. To prune unnecessary Gaussians, we add an extra loss to regularize opacity with a weight proportional to the distance to the center, and periodically remove Gaussians with opacity smaller than a threshold \(\alpha_{min}\). Furthermore, we recognize the importance of ensuring the geometric consistency of the Gaussians throughout the refinement phase. With this concern, we penalize Gaussians that deviate significantly from the positions obtained during the preceding geometry optimization. The loss function in the appearance refinement stage is summarized as follows:
\[\nabla_{\theta}\mathcal{L}_{\text{refine}}=\lambda_{\text{SDS}}\mathbb{E}_{ \epsilon_{t},t}\left[w_{I}(t)(\epsilon_{\phi}(x_{t};y,t)-\epsilon_{I})\frac{ \partial\mathbf{x}}{\partial\theta}\right]+\lambda_{\text{mean}}\nabla_{ \theta}\sum_{i}||\mathbf{p}_{i}||+\lambda_{\text{opacity}}\nabla_{\theta}\sum _{i}\mathbf{sg}(||\mathbf{p}_{i}||)-o_{i}, \tag{4}\]
where \(\mathbf{sg}(\cdot)\) refers to the stop-gradient operation, and \(\mathbf{p}_{i}\) and \(o_{i}\) represent the position and opacity of the \(i\)-th Gaussian, respectively. \(\lambda_{\text{SDS}}\), \(\lambda_{\text{mean}}\) and \(\lambda_{\text{opacity}}\) are loss weights.
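A minimal sketch of the compactness-based densification described above is given below, using a KD-tree over the Gaussian centers. The per-Gaussian scalar `radii` (e.g. derived from the scales) and the choice of K are assumptions, and the distance condition follows the wording above; new Gaussians would still need their remaining attributes (color, opacity, rotation) initialized, e.g. from their two parents.

```python
import numpy as np
from scipy.spatial import cKDTree

def compactness_densify(positions, radii, k=3):
    """For each Gaussian and each of its k nearest neighbours, insert a new
    Gaussian at the midpoint when the centre distance is smaller than the sum
    of the two radii; the new radius is the residual |r_i + r_j - d|."""
    tree = cKDTree(positions)
    dists, idx = tree.query(positions, k=k + 1)       # column 0 is the point itself
    new_pos, new_rad = [], []
    for i in range(len(positions)):
        for d, j in zip(dists[i, 1:], idx[i, 1:]):
            if i < j and d < radii[i] + radii[j]:     # i < j: handle each pair once
                new_pos.append(0.5 * (positions[i] + positions[j]))
                new_rad.append(abs(radii[i] + radii[j] - d))
    return np.asarray(new_pos), np.asarray(new_rad)
```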
### Initialization with Geometry Prior
Previous studies (Chen et al., 2023; Lin et al., 2023; Metzer et al., 2022) have demonstrated the critical importance of starting with a reasonable geometry initialization. In our early experiments, we also found that initializing with a simple pattern could potentially lead to a degenerate 3D object. To overcome this, we opt for initializing the positions of the Gaussians either with a generated point cloud or with a 3D shape provided by the user (either a mesh or a point cloud). In the context of general text-to-3D generation, we employ a text-to-point-cloud diffusion model, _Point-E_ (Nichol et al., 2022), to generate a rough geometry according to the text prompt. While Point-E can produce colored point clouds, we opt for random color initialization
Figure 4: An illustration of the proposed compactness-based densification. For two Gaussians, if the distance between them (\(r_{12}\)) is smaller than the sum of their radii (\(r_{1}+r_{2}\)), a Gaussian is added between them to achieve a more complete geometry.
Figure 6: The impact of adopting Point-E generated color.
based on empirical observations, as direct utilization of the generated colors was found to have detrimental effects in early experiments (shown in Fig.6). The scales and opacities of the Gaussians are assigned fixed values, and the rotation matrix is set to the identity matrix. For user-guided generation, we convert the preferred shape to a point cloud. To avoid too many vertices in the provided shape, we use farthest point sampling (Eldar et al., 1997) for point clouds and uniform surface sampling for meshes to extract a subset of the original shape instead of directly using all the vertices or points.
## 5 Experiments
In this section, we present our experiments on validating the effectiveness of the proposed approach. Specifically, we compare Gsgen with previous state-of-the-art methods in general text-to-3D generation. Additionally, we conduct several ablation studies to evaluate the importance of initialization, 3D guidance, and densification strategy. The detailed results are shown as follows.
Figure 5: Qualitative comparison between the proposed Gsgen and state-of-the-art generation methods, including DreamFusion (Poole et al., 2023), Magic3D (Lin et al., 2023), Fantasia3D (Chen et al., 2023). Our approach achieves better visual quality, especially in high-frequency details, such as the hatched roof and the surface of the strawberry. The prompts are provided under the images. For more qualitative comparison results, please refer to Appendix B.3. Videos of these images are provided in the supplemental video.
### Implementation Details
**Guidance model setup.** We implement the guidance model based on the publicly available diffusion model, StableDiffusion (Rombach et al., 2022; von Platen et al., 2022). For the guidance scale, we adopt 100 for _StableDiffusion_ as suggested in DreamFusion and other works. We also exploit the view-dependent prompt technique proposed by DreamFusion. All the assets demonstrated in this section are obtained with StableDiffusion checkpoint _runwayml/stable-diffusion-v1-5_.
**3D Gaussian Splatting setup.** We implement the 3D Gaussian Splatting with a PyTorch CUDA extension, and further add learnable background support to facilitate our application. For densification, we split the Gaussians by view-space position gradient every 500 iterations with a threshold \(T_{pos}=0.02\), as suggested by the original implementation (Kerbl et al., 2023), and perform compactness-based densification every 1000 iterations, which we empirically found effective for achieving a complete geometry. For pruning, we remove Gaussians with opacity lower than \(\alpha_{min}=0.05\) or with an excessively large world-space or view-space radius every 200 iterations.
**Training setup.** We use the same focal length, elevation, and azimuth range as those of DreamFusion (Poole et al., 2023). To sample more uniformly in the camera position, we employ a stratified sampling on azimuth. We choose the loss weight hyperparameters \(\lambda_{\text{SDS}}=0.1\) and \(\lambda_{\text{3D}}=0.01\) in geometry optimization stage, and \(\lambda_{\text{SDS}}=0.1\), \(\lambda_{\text{mean}}=1.0\) and \(\lambda_{\text{opacity}}=100.0\) in appearance refinement.
### Text-to-3D Generation
We evaluate the performance of the proposed Gsgen in the context of general text-to-3D generation and present qualitative comparison results against state-of-the-art methods. As illustrated in Fig.2, our approach produces delicate 3D assets with more accurate geometry and intricate details. In contrast, previous state-of-the-art methods (Tang, 2022; Poole et al., 2023; Lin et al., 2023; Guo et al., 2023; Chen et al., 2023) produce collapsed geometry under the same guidance and prompts, which underscores the effectiveness of our approach. We present more qualitative comparison results in Fig.5, where we compare the 3D assets generated by Gsgen with those generated by Magic3D (Lin et al., 2023) and Fantasia3D (Chen et al., 2023). Our approach showcases notable enhancements in preserving high-frequency details such as the intricate patterns on sushi, the feathers of the peacock, and the thatched roof. In contrast, Magic3D and Fantasia3D yield over-smoothed geometry due to the
Figure 7: Ablation study results on initialization and 3D prior. _Coarse Model_ here refers to the rough assets obtained after geometry optimization. We can observe that the contents generated with random initialization suffer from degeneration with completely inconsistent geometry (in the first column). Although the Point-E initialized assets have a slightly better geometry, they still suffer from the Janus problem. The proposed Gsgen utilizes Point-E initialization and 3D guidance to generate shapes with better multi-view consistency.
limitation of mesh-based methods, making the generated assets less realistic. For more one-to-one qualitative comparisons, please refer to the supplemental material for the video results and appendix B.3 for multi-view image comparison.
### Ablation Study
**Initialization.** To assess the impact of initialization, we introduce a variant that initializes the positions of the Gaussians with an origin-centered Gaussian distribution, which emulates the initialization adopted in DreamFusion (Poole et al., 2023). The qualitative comparisons are shown in Fig.7a. It is evident that assets generated with DreamFusion-like initialization encounter severe degeneration issues, especially for prompts depicting asymmetric scenes, resulting in collapsed geometry. In contrast, Point-E initialization breaks the symmetry by providing an anisotropic geometry prior, leading to the creation of more 3D-consistent objects.
**3D Prior.** We evaluate the necessity of incorporating the 3D prior by generating assets without point cloud guidance during geometry optimization. The qualitative comparisons of multi-view images are visualized in Fig.7b. Although it achieves better geometric consistency than random initialization, relying solely on the image diffusion prior still suffers from the Janus problem, which is particularly evident in cases with asymmetric geometries, such as the dog and the panda. In contrast, our approach effectively addresses this issue with the introduction of the 3D prior, rectifying potentially collapsed structures in the geometry optimization stage and resulting in a 3D-consistent rough shape.
**Densification Strategy.** To validate the effectiveness of the proposed densification strategy, we propose two variants for comparison: (1) The original densification strategy that splits Gaussians with an average view-space gradient larger than \(T_{pos}=0.0002\). (2) The same strategy with a larger threshold \(T_{pos}=0.02\) that avoids creating too many new Gaussians. While effective in 3D reconstruction, the original densification strategy that relies only on the view-space gradient encounters a dilemma in the context of score distillation sampling: within a limited number of densification steps, a large threshold tends to generate an over-smoothed appearance while a small threshold is easily affected by unstable gradients. As shown in Fig.8, the proposed compactness-based densification is an effective supplement to the original densification strategy under SDS guidance.
## 6 Limitations and Conclusion
**Limitations.** Gsgen tends to generate unsatisfactory results when the provided text prompt contains a complex scene description or complicated logic, due to the limited language understanding ability of Point-E and the CLIP text encoder used in _StableDiffusion_. Moreover, although incorporating the 3D prior mitigates the Janus problem, it is far from eliminating the potential degenerations, especially when the textual prompt is extremely biased in the guidance diffusion models. Concrete failure cases and corresponding analyses are illustrated in Appendix C.
**Conclusion.** In this paper, we propose Gsgen, a novel method for generating highly detailed and 3D consistent assets using Gaussian Splatting. In particular, we adopt a two-stage optimization strategy including geometry optimization and appearance refinement. In the geometry optimization stage, a rough shape is established under the joint guidance of a point cloud diffusion prior along with the common image SDS loss. In the subsequent appearance refinement, the Gaussians are further optimized to enrich details and densified to achieve better continuity and fidelity with compactness-based densification. We conduct comprehensive experiments to validate the effectiveness of the proposed method, demonstrating its ability to generate 3D consistent assets and superior performance in capturing high-frequency components. We hope our method can serve as an efficient and powerful approach
Figure 8: Ablation study on densification strategy. The textual prompt used in this figure is _A mug of hot chocolate with whipped cream and marshmallows_.
for high-quality text-to-3D generation and could pave the way for more extensive applications of Gaussian Splatting and the direct incorporation of 3D priors.
Automatic text-to-3D generation, which combines Score Distillation Sampling (SDS) with the optimization of volume rendering, has achieved remarkable progress in synthesizing realistic 3D objects. However, most existing text-to-3D methods based on SDS and volume rendering suffer from inaccurate geometry, e.g., the Janus problem, because it is difficult to integrate 3D priors into implicit 3D representations. In addition, these methods are usually slow at generating detailed 3D models. This paper therefore proposes GSGEN, a novel method that adopts Gaussian Splatting, a recent state-of-the-art representation, for text-to-3D generation. GSGEN generates high-quality 3D objects and addresses existing |
2309.08963 | Struc-Bench: Are Large Language Models Really Good at Generating Complex
Structured Data? | Despite the remarkable capabilities of Large Language Models (LLMs) like
GPT-4, producing complex, structured tabular data remains challenging. Our
study assesses LLMs' proficiency in structuring tables and introduces a novel
fine-tuning method, cognizant of data structures, to bolster their performance.
We unveil Struc-Bench, a comprehensive benchmark featuring prominent LLMs
(GPT-NeoX-20B, GPT-3.5, GPT-4, and Vicuna), which spans text tables, HTML, and
LaTeX formats. Our proposed FormatCoT aids in crafting format-specific
instructions from the intended outputs to populate this benchmark. Addressing
the gap in task-centered evaluation, we propose two innovative metrics, P-Score
(Prompting Score) and H-Score (Heuristical Score), to more accurately gauge LLM
performance. Our experiments show that applying our structure-aware fine-tuning
to LLaMA-7B leads to substantial performance gains, outshining its LLM
counterparts across most measures. In-depth error analysis and creating an
ability map across six dimensions -- coverage, formatting, reasoning,
comprehension, pragmatics, and hallucination -- highlight areas for future
enhancements and suggest forthcoming research trajectories. Our code and models
can be found at https://github.com/gersteinlab/Struc-Bench. | Xiangru Tang, Yiming Zong, Jason Phang, Yilun Zhao, Wangchunshu Zhou, Arman Cohan, Mark Gerstein | 2023-09-16T11:31:58 | http://arxiv.org/abs/2309.08963v3 | # Struc-Bench: Are Large Language Models Really Good at Generating Complex Structured Data?
###### Abstract
Despite the power of Large Language Models (LLMs) like GPT-4, they still struggle with tasks that require generating complex, structured outputs. In this study, we assess the capability of current LLMs in generating complex structured data and propose a structure-aware fine-tuning approach as a solution to improve this ability. To perform a comprehensive evaluation, we propose Struc-Bench, which includes representative LLMs (i.e., GPT-NeoX-20B, GPT-3.5, GPT-4, and Vicuna) and evaluates them on our carefully constructed datasets spanning raw text, HTML, and LaTeX tables. Based on our analysis of current model performance, we identify specific common formatting errors and areas of potential improvement. To address complex formatting requirements, we utilize a FormatCoT (Chain-of-Thought) to generate format instructions from target outputs. Our experiments show that our structure-aware fine-tuning method, when applied to LLaMA-7B, significantly improves adherence to natural language constraints, outperforming other evaluated LLMs. Based on these results, we present an ability map of model capabilities from six dimensions (i.e., coverage, formatting, reasoning, comprehension, pragmatics, and hallucination). This map highlights the weaknesses of LLMs in handling complex structured outputs and suggests promising directions for future work. Our code and models can be found at [https://github.com/gersteinlab/Struc-Bench](https://github.com/gersteinlab/Struc-Bench).
## 1 Introduction
Significant advancements have been made in various natural language processing tasks by Large Language Models (LLMs) Brown et al. (2020); Scao et al. (2022); Ouyang et al. (2022); Muennighoff et al. (2022); OpenAI (2023); Zhao et al. (2023), especially in text generation tasks Qin et al. (2023). The ability to output structured data, one of the key aspects of generative capability, has also attracted great interest in previous studies Wu et al. (2022); Zhao et al. (2023).
However, LLMs still underperform in generating complex structured outputs, a critical ability for various applications ranging from coding assistance to automated report writing. Furthermore, most evaluation of LLMs has been on natural text or code generation, and relatively less research has been conducted to evaluate LLMs on their ability to generate structured output. This leaves it unclear _whether LLMs can generate complex structured data effectively_. We aim to address these unanswered questions and deliver an in-depth examination in our research.
_First, there is a lack of systematic analysis_ of the ability of LLMs to output complex structured data. Previous efforts on evaluating LLMs Qin et al. (2023); Ma et al. (2023) on structured data primarily centered around simple Information Extraction (IE) tasks: recognizing named entities, extracting relations, and detecting events. Here the goal of IE tasks is to gather the extracted data in a highly structured form Zhong and Chen (2020). Much earlier work was considerably more task-centric as opposed to LLM-centric.
Figure 1: A system for describing complex structured formats and learning to follow this format in human language. We use zero-shot for inference.
The focus was predominantly on generating structured data from text (text-to-data) tasks with pre-trained models (He et al., 2023; Rossiello et al., 2022; Whitehouse et al., 2023; Pietruszka et al., 2022) like BART (Lewis et al., 2019) and T5 (Raffel et al., 2020).
_Second, there is a lack of fine-grained evaluation and comprehensive benchmarks_ of LLMs performance. Existing benchmarks often rely on rudimentary objective metrics such as word overlap to measure the accuracy of the content generated by the model Li et al. (2023); Wu et al. (2022); Pietruszka et al. (2022). This may be insufficient for evaluating whether LLMs can generate structured output, as an ideal evaluation metric ought to also consider the format of generated content.
_Third_, is there potential for enhancing the performance of current LLMs to better follow human natural language inputs, thereby generating outputs with accurate formatting and error-free content?
This work aims to fill in these gaps in the literature and expand on both the evaluation metrics and training datasets for LLMs generating structured output. Our contributions are summarized as:
(1) We develop a benchmark, called Struc-Bench, focusing on generating structured texts in raw text, HTML, and LaTeX formats, and thoroughly examine the capabilities of popular LLMs, uncovering key issues in content accuracy, formatting, numerical reasoning, and handling long tables.
(2) Incorporating prominent datasets and expanding to diverse domains, we conduct empirical evaluations of popular LLMs on our structured text generation benchmark, providing a deeper understanding of the prevalent error types and dimensions of shortcomings. Our findings suggest that both GPT-3.5 and GPT-4 struggle to produce outputs that are exactly correct, with issues primarily stemming from erroneous content, inaccurate formatting, inadequate numerical reasoning abilities, and their inability to handle long tables. (3) To address these issues, we introduce structure-aware instruction tuning, using ChatGPT to generate format instructions and then training the LLaMA model to follow these formats. The promising results on both seen and unseen data indicate that it could greatly enhance the ability of LLMs to generate structured outputs.
## 2 Problem Analysis and Benchmark
### Preliminary
The task of generating complex structured data presents a notable challenge that tests the capabilities of LLMs in producing intricate, format-specific outputs. This task moves beyond conventional text generation. The complexity lies not only in the need to generate accurate and coherent content but also in maintaining a strict and specific data structure or format. For example, text-to-table is a task that aims to convert unstructured textual data into structured tabular data, by extracting necessary contents from text and following the required structure or format.
### Problem Analysis
In our study, we have identified a significant limitation of GPT-3.5 and GPT-4 in handling complex structured output. Despite being state-of-the-art LLMs developed by OpenAI, both models have demonstrated certain limitations in generating output in more intricate formats; examples can be found in Appendix A.
This shortcoming becomes evident when the model is tasked with producing data that adhere to specific structural formats or templates, such as tables. We find that only 3% of the outputs of GPT-3.5\({}^{1}\) are completely correct, while for GPT-4 the figure is only 9%. This could be attributed to the inherent design of the GPT family, which, while excelling at capturing the statistical patterns of human language, does not specifically account for structured outputs that require maintaining a state across a longer span of tokens. Here, we select Rotowire for this investigation, as shown in Appendix B. We utilized the crowdsourcing approach on MTurk (see Appendix C) to examine the error types in 100 example instances. Figure 2 presents the proportions of errors and each error type: Element Errors, Element Format Errors, Structure Errors, and Structure Naming Errors.
Footnote 1: In all our scenarios we are using Azure OpenAI Service models. GPT-3.5 means gpt-35-turbo. We noticed that the results of the Azure deployed gpt-35-turbo-v0301 model diverge substantially from OpenAI gpt-3.5-turbo-0301.
### Benchmark
In our investigation, we incorporate four prominent data-to-text datasets: Rotowire (Wiseman et al.,
Figure 2: Error analysis by human annotation. Some error types are explained in Appendix A.
2017), E2E (Novikova et al., 2017), WikiTableText (Bao et al., 2018), and WikiBio (Lebret et al., 2016); we specifically selected tables with dimensions greater than 3x3 to ensure a sufficient level of complexity. Concurrently, we construct more diverse datasets drawn from broader domains, encompassing tables from LaTeX and HTML data sourced from GitHub. Each of these table types comes with its unique nuances, complexities, and levels of structuration, providing extensive coverage for our experiments. Table 1 gives statistics for the Rotowire dataset and our constructed datasets. Through empirical testing, we evaluate the capacity of popular LLMs, including GPT-NeoX-20B (Black et al., 2022), GPT-3.5 (Ouyang et al., 2022), GPT-4 (OpenAI, 2023) and Vicuna-13B (Chiang et al., 2023), on our Struc-Bench (see Section 4.2). For LaTeX and HTML tables without paired text, we use GPT-3.5 to construct synthetic descriptions as input for our benchmark.
Raw text tables are more informal, unstandardized, and often need manual interpretation. In contrast, LaTeX tables are used for scientific documents and demand high precision in their structure and syntax. HTML tables, widely used on the web, carry their own tags and structure, aligning with the rules of HTML language.
## 3 Methodology
### Data Generation
As shown in Figure 1, we propose FormatCoT and use self-instruct with GPT-3.5 to generate (data, instruction) pairs. Inspired by Gorilla (Patil et al., 2023), we provide three demos with in-context learning and task the model with generating instructions that describe the format of the given structure. We specifically instruct the model to use natural language. We have structured 6 demos for each of the three data formats, all of which are hand-written or manually modified.
### Finetuning LLaMA-7B
Here we propose a structure-aware instruction tuning method to bolster the capability of LLMs in generating structured text. We employ the standard instruction tuning method to fine-tune LLaMA-7B (Touvron et al., 2023). Our ultimate goal is to enable LLaMA to comprehend the task at hand and deliver the output in a conversational mode. This is akin to engaging in a dialogue with the user, culminating in the successful completion of our defined task. The entire pipeline can be found in Figure 1.
### Evaluation Metrics
Evaluating the similarity of generated tables to the ground-truth tables is non-trivial: for instance, the same table can be formatted in many different ways in HTML or LaTeX. Hence, our evaluation metric should ideally capture meaningful differences in the data presented, while being invariant to insignificant differences in formatting.
We propose to break down the similarity of two tables into two coarse components: _content_ and _structure_. In scoring content similarity, we attempt to parse out the data within the table cells and compute the similarity between the generated and ground-truth cells with commonly used similarity metrics. In scoring structure similarity, we place higher emphasis on components such as the number of columns and rows, cell alignment, and the table caption. The two similarity scores do overlap (e.g., a table with the wrong number of rows or columns would likely also score poorly on content), but we find that these two scoring categories allow us to perform more involved analysis of where predicted and ground-truth tables differ.
#### 3.3.1 GPTscore
We further take two approaches to score each metric. First, we perform model-based evaluation, querying GPT-3.5 with both tables and having it score the similarity of content and structure separately. Following Wang et al. (2023), we prompt the model to perform Chain-of-Thought (Wei et al., 2023) reasoning before outputting its scores, and we query the model with the predicted and ground-truth tables in both orders and average the scores. We report these as the _GPTscore_. The prompt of GPTscore can be found in Appendix D.
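A minimal sketch of this two-order scoring protocol is shown below; the `judge` helper stands in for the GPT-3.5 call, and the inline prompt is only a stand-in for the actual GPTscore prompt listed in Appendix D.

```python
import re

def judge(prompt: str) -> str:
    """Placeholder for a GPT-3.5 call; the real prompt template is given in Appendix D."""
    raise NotImplementedError

def gpt_score(pred_table: str, gold_table: str, aspect: str = "structure") -> float:
    """Query the judge with the two tables in both orders and average the 0-10 scores."""
    def one_direction(a: str, b: str) -> float:
        prompt = (
            f"Compare the {aspect} of the two tables below. Reason step by step, "
            "then output a single similarity score between 0 and 10 on the last line.\n\n"
            f"Table A:\n{a}\n\nTable B:\n{b}\n"
        )
        reply = judge(prompt)
        return float(re.findall(r"\d+(?:\.\d+)?", reply)[-1])  # take the last number emitted
    return 0.5 * (one_direction(pred_table, gold_table) + one_direction(gold_table, pred_table))
```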
#### 3.3.2 H-Score
In addition to model-based evaluation, we also implement hand-crafted scoring functions to score the similarity of the tables. Because of the many ways tables can be presented in the different data formats, we implement several heuristics to normalize the tables and to compute their similarity.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Dataset** & **\# Train** & **\# Test** & **Format** & **Rows \& Columns** \\ \hline Rotowire (Wiseman et al., 2017) & 3.4k & 728 & Raw text & 7.26 \& 8.75 \\ Struc-Bench LaTeX & 5.3k & 500 & LaTeX & 2.75 \& 4.47 \\ Struc-Bench HTML & 5.4k & 499 & HTML & 5.0 \& 3.54 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Struc-Bench data statistics. The number of Rows & Columns has been averaged.
The specific implementation of the scoring functions for the different formats can be found in Appendix D. Where similarities between strings or data structures are computed, we use an average of the Levenshtein distance and the Ratcliff/Obershelp similarity metric. We report these heuristically normalized metrics as the _H-Score_.
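As an illustration, the string-similarity building block of the H-Score can be written in a few lines of plain Python; the per-format table-normalization heuristics of Appendix D are not reproduced here.

```python
from difflib import SequenceMatcher  # SequenceMatcher implements Ratcliff/Obershelp

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def string_similarity(a: str, b: str) -> float:
    """Average of a normalized Levenshtein similarity and the Ratcliff/Obershelp ratio."""
    lev = 1.0 - levenshtein(a, b) / max(len(a), len(b), 1)
    ro = SequenceMatcher(None, a, b).ratio()
    return 0.5 * (lev + ro)
```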
## 4 Experiments
### Basic Settings
For metrics, we use SacreBLEU, ROUGE-L, BERTScore, BARTScore and BLEURT, as they are classical metrics for evaluating text similarity, which is also relevant in this task. In addition, we use our two proposed metrics, GPTscore and H-Score. We evaluate the following models: GPT-NeoX-20B, GPT-3.5, GPT-4, Vicuna-13B, our structure-aware fine-tuned LLaMA-7B, and the original LLaMA-7B. GPT-NeoX-20B, GPT-3.5 and GPT-4 represent the state-of-the-art performance of current LLMs, and Vicuna-13B is another LLaMA-based fine-tuned model, reported to reach about 90% of the quality of ChatGPT; we consider these models strong enough to make the comparison meaningful. For the first four models, we simply call their APIs from OpenAI or HuggingFace to generate results without further fine-tuning. In our dataset, each item consists of three parts: instruction, input, and output. When generating results, we concatenate each item's instruction and input as the final input to the models.
During inference, we provide the model with a natural language prompt that describes the form and content of our task, as well as the expected response (e.g., "please generate a table from the following information and format").
### Results
Table 2 provides a comparative analysis of different language models based on several performance metrics. For 'Tables from Raw Text', our fine-tuned model (Ours-7B) outperforms the other models in every metric. Interestingly, without fine-tuning, the performance drops significantly, particularly in SacreBLEU, ROUGE-L, and BERTScore. The results for 'LaTeX' reveal a similar trend: we again achieve the best results across all metrics, except for BLEURT, where GPT-4 takes the lead. In the 'HTML' category, GPT-4 scores the highest in SacreBLEU and BERTScore, but our model comes out on top for the rest of the metrics.
Considering the inconsistencies observed across the different metrics, we also carried out a human evaluation on 100 examples using MTurk. Evaluators rated each example on a scale from 0 to 10, assessing both format consistency and content consistency. Although we cannot enumerate the details due to space constraints, we found that the Content GPTscore and Content H-Score are closely aligned with existing metrics, whereas our proposed Format GPTscore and Format H-Score significantly surpass the other metrics, particularly in terms of instance-level Spearman correlation for format accuracy. These human evaluations underscore the efficacy
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline Model & SacreBLEU & ROUGE-L & BERTScore & BARTScore & BLEURT & Content GPTscore & Format GPTscore & Content H-Score & Format H-Score \\ \hline \multicolumn{10}{c}{_Tables from Raw Text_} \\ GPT-NeoX-20B & 35.24 & 55.78 & 68.91 & -2.34 & 33.51 & 3.86 & 6.10 & 0.50 & -1.32 \\ GPT-3.5 & 56.92 & 70.97 & 91.35 & -1.68 & 36.85 & 6.19 & 8.16 & 0.52 & -1.27 \\ GPT-4 & 68.13 & 75.44 & 94.89 & -0.99 & 55.24 & 6.88 & 8.30 & 0.85 & 0.53 \\ Vicuna-13B & 40.12 & 50.77 & 75.21 & -2.05 & 40.02 & 4.07 & 6.33 & 0.55 & -1.38 \\ Ours-7B & **90.6** & **89.85** & **98.54** & **-0.69** & **66.07** & **7.69** & **8.60** & **1.65** & **3.61** \\ \(w.o.finetune\) & 9.9 & 36.56 & 81.63 & -2.50 & 70.24 & 4.58 & 6.00 & 0.51 & -1.01 \\ \hline \multicolumn{10}{c}{_LaTeX_} \\ \hline GPT-NeoX-20B & 45.92 & 65.10 & 76.09 & -2.05 & 40.87 & 7.23 & 7.02 & 0.56 & 0.72 \\ GPT-3.5 & 56.94 & 75.99 & 86.25 & -1.30 & 42.89 & 8.22 & 8.41 & 0.99 & 1.27 \\ GPT-4 & 78.15 & 85.34 & 88.07 & -1.09 & **67.11** & 8.78 & 8.81 & 1.10 & 1.35 \\ Vicuna-13B & 50.80 & 69.48 & 80.44 & -1.07 & 36.74 & 7.70 & 8.10 & 0.78 & 1.06 \\ Ours-7B & **89.13** & **88.99** & **98.55** & **-0.69** & 66.07 & **8.94** & **9.45** & **1.14** & **1.52** \\ \(w.o.finetune\) & 47.24 & 70.89 & 73.27 & -2.13 & 38.13 & 7.10 & 6.98 & 0.51 & 0.69 \\ \hline \multicolumn{10}{c}{_HTML_} \\ \hline GPT-NeoX-20B & 60.36 & 72.13 & 86.88 & -1.59 & 30.06 & 8.42 & 8.94 & 0.81 & 0.92 \\ GPT-3.5 & 73.80 & 85.19 & 96.76 & -1.46 & 34.81 & 9.11 & 9.35 & 1.10 & 2.15 \\ GPT-4 & **79.25** & 85.95 & **97.22** & -1.31 & 41.59 & 9.17 & 9.62 & 1.15 & 2.29 \\ Vicuna-13B & 58.75 & 70.37 & 88.65 & -1.58 & 31.11 & 8.55 & 8.88 & 0.79 & 0.93 \\ Ours-7B & 77.50 & **86.08** & 96.25 & **-1.30** & **42.89** & **9.20** & **9.70** & **1.18** & **2.49** \\ \(w.o.finetune\) & 65.30 & 78.24 & 88.12 & -1.57 & 32.78 & 8.22 & 8.81 & 0.92 & 0.96 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Automated evaluation results on the test set, involving five types of previous metrics and four proposed ones. \(w.o.finetune\) means that we also compared the performance of our model without structure-aware finetuning as an ablation study.
of our proposed metrics. However, larger-scale human evaluations are needed to further explore and substantiate these findings.
Moreover, we delve into an in-depth analysis, attributing the observed shortcomings to several error types spanning Content Selection, Format Planning, and the Reasoning Process; see details in Appendix G. Based on these, we present an ability map of model capabilities along six dimensions.
## 5 Conclusion
In conclusion, this research offers a comprehensive exploration of the structured text generation limitations inherent in Large Language Models (LLMs) like ChatGPT and GPT-4. Through developing a benchmark specifically designed for structured text generation and integrating a wide range of datasets, we have been able to thoroughly assess the capabilities of prevalent LLMs. Our analysis has identified several areas of concern, particularly in regard to content accuracy, formatting, numerical reasoning, and the handling of long tables.
## 6 Limitations
Although we present an in-depth and comprehensive analysis, the exploration of LLMs in structured text generation presented in this paper has several limitations:
Domain-Specific Benchmark Development: While we've made strides in constructing benchmarks for structured text generation, it may be beneficial to develop benchmarks that cater to specific domains. Different fields might have unique structural requirements and understanding these nuances can significantly improve the models' applicability across diverse contexts.
Expand the Range of Datasets: There are endless data types and sources that can be explored. Incorporating a broader variety of datasets could expose the models to an even wider range of structural formats, ultimately enhancing their overall performance.
Enhancing Numerical Reasoning Capabilities: Our study identified inadequate numerical reasoning as one of the challenges faced by LLMs. Investigating techniques to bolster numerical reasoning in these models could lead to significant improvements in their performance.
Developing Advanced Methods: While our structure-aware instruction tuning method showed promising results, more sophisticated techniques could be developed. For instance, future work could explore ways of incorporating more explicit structural information into the model or developing methods that allow the model to learn structural patterns more effectively.
Exploring Multimodal LLMs: As LLMs continue to evolve, there are opportunities to explore multimodal models that can process and generate both text and other forms of data, such as sound or images (Kamigaito et al., 2023), in a structured manner.
| Despite the remarkable capabilities of Large Language Models (LLMs) likeGPT-4, complex, structured tabular data production remains challenging. Our study assesses LLMs' proficiency in structuring tables and introduces a novel fine-tuning method, cognizant of data structures, to bolster their performance. We unveil Struc-Bench, a comprehensive benchmark featuring prominent LLMs (GPT-NeoX-20B, GPT-3.5, GPT-4, and Vicuna), which spans text tables, HTML, and LaTeX formats. Our proposed FormatCoT aids in crafting format-specific instructions from the intended outputs to populate this benchmark. Addressing the gap in task-centered evaluation, we propose two innovative metrics, P-Score(Prompting Score) and H-Score (Heuristical Score), to more accurately gauge LLM performance. Our experiments show that applying our structure-aware fine-tuning to LLaMA-7B leads to substantial performance gains, outshining its L |
2309.12913 | A matter of attitude: Focusing on positive and active gradients to boost
saliency maps | Saliency maps have become one of the most widely used interpretability
techniques for convolutional neural networks (CNN) due to their simplicity and
the quality of the insights they provide. However, there are still some doubts
about whether these insights are a trustworthy representation of what CNNs use
to come up with their predictions. This paper explores how rescuing the sign of
the gradients from the saliency map can lead to a deeper understanding of
multi-class classification problems. Using both pretrained and trained from
scratch CNNs we unveil that considering the sign and the effect not only of the
correct class, but also the influence of the other classes, allows to better
identify the pixels of the image that the network is really focusing on.
Furthermore, how occluding or altering those pixels is expected to affect the
outcome also becomes clearer. | Oscar Llorente, Jaime Boal, Eugenio F. Sánchez-Úbeda | 2023-09-22T15:00:00 | http://arxiv.org/abs/2309.12913v1 | # A matter of attitude: Focusing on positive and active gradients to boost saliency maps
###### Abstract
Saliency maps have become one of the most widely used interpretability techniques for convolutional neural networks (CNN) due to their simplicity and the quality of the insights they provide. However, there are still some doubts about whether these insights are a trustworthy representation of what CNNs use to come up with their predictions. This paper explores how rescuing the sign of the gradients from the saliency map can lead to a deeper understanding of multi-class classification problems. Using both pretrained and trained from scratch CNNs we unveil that considering the sign and the effect not only of the correct class, but also the influence of the other classes, allows to better identify the pixels of the image that the network is really focusing on. Furthermore, how occluding or altering those pixels is expected to affect the outcome also becomes clearer1.
Footnote 1: All code to replicate our findings will be available here: [https://github.com/OscarLlorente/positive_active_saliency_maps](https://github.com/OscarLlorente/positive_active_saliency_maps)
keywords: Interpretability, convolutional neural networks, saliency maps, visualization, gradient signs
Footnote †: journal: Neural Networks
## 1 Introduction
The overwhelming mediatic interest that generative artificial intelligence is drawing lately is fostering the adoption of deep learning models in almost every area of our lives. Unfortunately, the outstanding advances brought by these massive models have yet to be accompanied by an equivalent effort to make them more interpretable. Blindly using complex models without worrying about how their outputs are generated entails risks that must be mitigated if we strive to adopt them in sensitive sectors, as many authorized voices and legislators are already pointing out.
There are basically two approaches to address this issue: building models that are easier to understand by design and at the same time match the performance of their black box counterparts, or developing techniques to disclose what is going on inside the black boxes. This paper concentrates on the field of computer vision, where there are indeed some attempts
to construct interpretable models for object classification [1] and medical applications [2]. However, due to their great feature extraction capabilities, far better than renowned traditional engineered features such as SIFT [3], many modern computer vision solutions still rely on regular convolutional neural networks (CNNs) [4] as part of their pipeline, whose inner workings are hard to interpret and understand.
Over the past decade, the research community has produced several techniques that seek to shed some light on how CNNs come up with their predictions. Leaving the visual inspection of the convolutional filters aside, most of the proposals consist in studying the effect of exciting the model with known stimuli and projecting the result back into the input image.
One family of techniques attempts to approximate the trained network with simpler models. Zhou et al. [5] remove information from the input images to obtain a minimal representation that preserves as little visual information as possible without significantly impacting the classification score. This is done by segmenting the image and iteratively discarding those regions that contribute the least. Ribeiro et al. [6] propose LIME, an algorithm that approximates any classifier or regressor locally with interpretable models. To deal with images they extract \(K\) superpixels and treat them as input binary features for a linear model. Similarly, Frosst and Hinton [7] suggest building a binary soft decision tree from the learned filters.
Another popular approach relies on backpropagation. Deconvolutional Networks or DeconvNets [8] invert the order of the CNN layers to discover the input pixels responsible for the information encoded in every feature map. DeconvNets allow gathering evidence about the type of features every CNN layer is able to extract, from basic geometries like corners and edges at the beginning to class-specific features as one proceeds deeper into the network.
Due to their simplicity, and perhaps for being one of the seminal methods in this category, saliency maps [9] have become one of the most popular local interpretability methods for CNNs. They compute the absolute gradient of the target output with respect to every input feature (i.e., pixels on images) and are commonly employed in multi-class classification problems:
\[\hat{c}=\operatorname*{argmax}_{c\in C}S_{c}(\boldsymbol{I}) \tag{1}\]
where \(\boldsymbol{I}\) is the input image, \(C\) the set of all possible classes, and \(S_{c}\) corresponds to the classification score for a given class \(c\). Since images are usually encoded in the RGB color space, it is important to bear in mind that \(\boldsymbol{I}\in\mathbb{R}^{\text{channels}\times\text{height}\times\text{ width}}\) is a tensor. The original saliency map method is mathematically expressed as
\[\boldsymbol{M}_{i\,j}=\max_{k}\left|\frac{\partial S_{\hat{c}}}{\partial \boldsymbol{I}_{k\,i\,j}}\right| \tag{2}\]
\(\boldsymbol{M}\) is a 2D map, \(k\) indicates a specific channel and, \(i\) and \(j\), are the row and column of every pixel, respectively. According to [9], the brightest points in a saliency map (i.e., the derivatives with a larger absolute value) are the pixels that, with a minimal change, should affect the class score the most. As shown in (2), the maximum over the three channels is computed to obtain a single value per pixel. Even though this decision may seem arbitrary, it is the convention followed in almost every subsequent paper on saliency maps.
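Assuming a standard PyTorch image classifier, Eq. (2) can be computed with a few lines such as the following sketch; the variable names are ours and the snippet is only meant to make the convention explicit.

```python
import torch

def saliency_map(model: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    """image: (3, H, W) tensor; returns the (H, W) saliency map of Eq. (2)."""
    model.eval()
    x = image.detach().clone().unsqueeze(0).requires_grad_(True)  # (1, 3, H, W)
    scores = model(x)                                             # (1, num_classes)
    pred = scores.argmax(dim=1).item()                            # predicted class c_hat
    scores[0, pred].backward()                                    # d S_c_hat / d I
    return x.grad[0].abs().max(dim=0).values                      # max over the channel axis
```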
There have been several attempts to reduce the inherent noise of saliency maps like that of Shrikumar et al. [10], who suggest multiplying element-wise the input and the gradient to
produce sharper representations. However, the fact that on visual inspection the representation resembles the input image is no guarantee of its correctness as put forward in the sanity checks proposed by Adebayo et al. [11]. Apparently, the same happens to varying degrees in other similar methods such as \(\epsilon\)-LRP, DeepLIFT [12], or integrated gradients [13]. The technique does not highlight what the neural network has learned to pay attention to, but rather tends to augment features from the input image such as edges that may or may not be relevant to the prediction.
This paper proposes several improvements over the regular saliency maps to increase the insights that can be extracted. The contributions can be summarized in three main points:
* Instead of taking the absolute value of the gradients, and thus neglecting their sign, we prove that preserving this information enhances interpretability in multi-class classification problems by better identifying which pixels assist or deceive the network.
* The network would want pixels with a positive gradient to have higher intensities and those with negative gradients to be dimmed towards zero. This fact makes occlusion experiments more self-explanatory, since it is easier to understand the meaning of replacing a given pixel with a brighter or darker one, ultimately with white or black.
* Typically, only the class of interest is considered when analyzing saliency maps. Based on the gradient sign, a set of metrics have been defined to quantitatively compare the impact on the prediction of a particular class caused by the rest of the classes.
The remainder of the document is structured as follows. It starts with a brief discussion of the implications of ignoring the sign of the gradients (Section 2). Using the information provided by the sign, Section 3 explores the effect that modifying a pixel value has on a multi-class classification problem. Finally, Section 4 presents the experiments conducted to support the conclusions derived in Section 5.
## 2 The importance of the gradient sign
To the best of our knowledge, there is little research about the meaning or impact of the sign in saliency maps. The only article that briefly discusses this topic is [14], which explains that the raw value of the gradients (without taking the absolute value) is commonly used on the MNIST dataset [15], but not on other datasets like ImageNet [16]. Apparently, experimental results suggest that on MNIST raw gradients produce clearer saliency maps and, at the same time, worse representations on ImageNet. Since the latter is the _de facto_ standard dataset for CNNs, in general saliency maps are implemented with the absolute value.
However, taking the absolute value of every pixel in the saliency map comes at a cost and some enlightening information is lost. In terms of explainability, the opportunity of knowing which regions of the image should be brighter or darker to improve the classification accuracy is disregarded. Moreover, if both pixels with positive and negative gradients are combined in the same image without any distinctions, the representation can become confusing. Sometimes it may seem as if the model is not able to tell apart regions that should be brighter or darker like, for instance, an object (positive gradient) on an uninformative background (negative). Therefore, two sets of pixels can be distinguished in the image:
* Pixels that improve the classification score of the predicted class if their value is _increased_, since they have a positive value (gradient) in the saliency map.
* Pixels that improve the classification score of the predicted class if their value is _decreased_, because they have a negative gradient.
The advantage of this separation, compared to taking the raw gradients and normalizing their values onto a single image as in [14], is that in the latter case zero gradients shine at medium intensity after scaling, conveying a misleading idea of importance for those pixels. Instead, we propose creating two different visualizations before taking the absolute value (or, equivalently, applying the ReLU function, which naturally yields the same result once positive and negative gradients are handled separately):
* Positive saliency maps: \[\boldsymbol{M}_{\,ij}=\max_{k}\left(\text{ReLU}\bigg{(}\frac{\partial S_{ \hat{c}}}{\partial\boldsymbol{I}_{k\,i\,j}}\bigg{)}\right)\] (3)
* Negative saliency maps: \[\boldsymbol{M}_{\,i\,j}=\max_{k}\left(\text{ReLU}\bigg{(}-\frac{\partial S_{ \hat{c}}}{\partial\boldsymbol{I}_{k\,i\,j}}\bigg{)}\right)\] (4)
## 3 Multi-class saliency maps
All saliency map techniques use the actual class to compute the derivatives. In multi-class classification problems this approach disregards the effect of the rest of the classes. Although a pixel may have a larger gradient with respect to an incorrect class, current techniques do not draw attention to this fact. Whenever this happens, the interpretation of the saliency map changes: if the value of this pixel is increased, the classification score of the incorrect class will improve more than that of the true class, worsening the prediction.
Taking the absolute value makes things even more undecipherable. Once you lose the sign information, you can no longer determine whether increasing the intensity of a pixel is bound to increase or decrease the score of a given class. In order to extend the scope of positive and negative saliency maps to consider the effect of all the classes, the definitions put forward in the previous section can be restated:
* _Active_ pixels are those for which an _increase_ in value improves the classification score of the predicted class more than that of any other class considered.
* Analogously, _inactive_ pixels are those for which a _decrease_ in value improves the score of the predicted class the most. Their gradient with respect to the actual class is therefore the lowest (the most negative) among all the classes. These pixels are the ones that cause the most confusion to the classifier.
Based on these definitions, two additional saliency map visualizations can be derived (a short code sketch follows the definitions):
* _Active saliency maps_ highlight the pixels that should be increased to improve the classification score of the true class: \[\boldsymbol{M}_{i\,j}=\max_{k}\begin{cases}\frac{\partial S_{\hat{c}}}{\partial\boldsymbol{I}_{k\,i\,j}}&\text{if }\frac{\partial S_{\hat{c}}}{\partial\boldsymbol{I}_{k\,i\,j}}=\max_{c\in C}\frac{\partial S_{c}}{\partial\boldsymbol{I}_{k\,i\,j}}\\ 0&\text{otherwise}\end{cases}\] (5)
* _Inactive saliency maps_ depict the pixels that should be dimmed to enhance the classification score of the correct class: \[\boldsymbol{M}_{i\,j}=\max_{k}\begin{cases}\frac{\partial S_{\hat{c}}}{\partial\boldsymbol{I}_{k\,i\,j}}&\text{if }\frac{\partial S_{\hat{c}}}{\partial\boldsymbol{I}_{k\,i\,j}}=\min_{c\in C}\frac{\partial S_{c}}{\partial\boldsymbol{I}_{k\,i\,j}}\\ 0&\text{otherwise}\end{cases}\] (6)
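A minimal PyTorch sketch of these definitions is given below; it loops over the classes for clarity (the per-class gradients could equally be obtained from a vectorized Jacobian) and assumes a small number of classes, as in CIFAR-10 or Imagenette.

```python
import torch

def active_inactive_maps(model, image, num_classes):
    """Masked per-class gradients underlying Eqs. (5)-(6)."""
    x = image.detach().clone().unsqueeze(0).requires_grad_(True)
    scores = model(x)                                        # (1, num_classes)
    pred = scores[0].argmax().item()
    grads = torch.stack([
        torch.autograd.grad(scores[0, c], x, retain_graph=True)[0][0]
        for c in range(num_classes)                          # d S_c / d I, one (3, H, W) map each
    ])                                                       # (num_classes, 3, H, W)
    g_pred = grads[pred]
    zero = torch.zeros_like(g_pred)
    active = torch.where(grads.argmax(dim=0) == pred, g_pred, zero)
    inactive = torch.where(grads.argmin(dim=0) == pred, g_pred, zero)
    # Positive/negative maps (Eqs. (3)-(4)) are simply relu(g_pred) and relu(-g_pred).
    # Eqs. (5)-(6) then reduce `active` and `inactive` over the channel axis; note that the
    # nonzero entries of `inactive` are negative, so one may plot their magnitude.
    return active, inactive
```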
In conclusion, where positive and negative saliency maps provide information about whether increasing or decreasing the value of particular pixels improves the score of the correct class, active and inactive saliency maps go a step further and identify those pixels that should be altered to increase the confidence of the model in the prediction of the true class.
## 4 Experiments
This section evaluates the proposed new saliency map representations both qualitatively and quantitatively on two different datasets: CIFAR-10 [17] and Imagenette [18]. The former is commonly used in image classification and interpretability papers. The latter is a subset of ten ImageNet classes that allows drawing grounded conclusions without requiring an immense computational effort.
The new saliency maps have been tested against both trained-from-scratch and pretrained models. Two models have been trained from scratch using CIFAR-10. The first is a basic CNN with several convolutional blocks with either max- or average-pooling and a final linear layer. The second uses the standard ResNet-18 architecture [19]. For Imagenette, in addition to the previous two models, pre-trained versions of ResNet-18 and ConvNeXt [20] have also been evaluated. In all cases, the networks were trained for 50 epochs with a learning rate of 0.001 using the AdamW optimizer. Table 1 shows the accuracies obtained.
\begin{table}
\begin{tabular}{l c c} \hline \hline & CIFAR-10 & Imagenette \\ \hline Basic CNN & 0.6307 & 0.6205 \\ ResNet-18 & 0.7569 & 0.8357 \\ ResNet-18 pre-trained & - & 0.9733 \\ ConvNeXt pre-trained & - & 0.9932 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Test set accuracy.
### Qualitative evaluation
Following the same approach found in the literature of saliency maps [9; 21; 14; 13], first a visual inspection is carried out to compare the proposed visualizations with the standard saliency map. It is common practice to show only a few examples of the visualizations to perform the comparisons. However, none of the aforementioned articles explain how these examples are selected. Therefore, it could be the case that a particular visualization is better for a certain class or example. To prevent this problem, in this paper a correctly classified example has been randomly selected from the test set. To enhance readability, only two examples are shown in this section (Figures 1 and 2). Refer to Appendix A to check the rest of the images.
It is noticeable that the proposed techniques reduce the noise and produce sharper visualizations than the original saliency map. The shape of the objects that the neural network is classifying is more defined and exhibits a higher level of detail. However, even though this evaluation is typical in the literature, it does not by itself prove the new maps better. It could be the case that a noisier visualization is more faithful to what the neural network has learned to focus on. This is why a quantitative evaluation is required.
Figure 1: Comparison of saliency maps for the basic CNN on CIFAR-10.
Figure 2: Comparison of saliency maps for ResNet-18 on Imagenette.
### Quantitative evaluation
There have been some efforts in the literature to formulate metrics that measure the effectiveness of local interpretability techniques. While Ancona et al. [12] elaborate on the desirable properties of interpretability methods, [22] and [23] actually propose a metric to compare techniques. Specifically, [22] suggests using a metric called deletion that removes pixels in descending order of importance (according to the technique under evaluation) and recomputes the probability of the correct output for each fraction of deleted pixels. Deleted pixels are either replaced with a constant value (e.g., black or gray) or random noise. Hooker et al. [23] claim that it is necessary to retrain the model after deleting pixels to maintain the same distribution in the training and the test sets. However, retraining affects the network's weights, and the metric then no longer provides a good estimate of how the original model behaves if some pixels are occluded.
The main drawback of the deletion metric is that the value of a pixel cannot be actually deleted. No matter what value pixels are replaced with, they will still affect the internal computations of the network. The replacement value introduces unknown biases unless pixels are separated in two different sets: the ones that should be brighter to improve the classification score of the original predictions (i.e., those identified by positive or active saliency maps) and the ones that should be darker (i.e., pixels in negative or inactive saliency maps). Thanks to this distinction, the meaning of replacing a pixel with white (white-deletion) or black (black-deletion) becomes instantly clear. For positive or active pixels, using white would tend to improve the classification score of the predicted class, whereas zeroing them out should severely harm the original classification. The opposite is expected to happen with negative and inactive pixels.
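As an illustration, a minimal sketch of such a deletion loop is given below, assuming PyTorch, inputs scaled to [0, 1], and a pixel ranking taken from one of the saliency maps above; it simply tracks whether the originally predicted class survives the occlusion.

```python
import torch

@torch.no_grad()
def allegiance_curve(model, image, sal_map, fractions, fill_value):
    """Occlude the top-ranked pixels of `sal_map` and check if the prediction is unchanged.

    image: (3, H, W) tensor scaled to [0, 1]; fill_value: 0.0 (black) or 1.0 (white).
    """
    pred0 = model(image.unsqueeze(0)).argmax(dim=1)
    order = sal_map.flatten().argsort(descending=True)        # most important pixels first
    kept = []
    for f in fractions:
        k = int(f * order.numel())
        mask = torch.zeros(order.numel(), dtype=torch.bool, device=image.device)
        mask[order[:k]] = True
        mask = mask.view(sal_map.shape)
        occluded = image.clone()
        occluded[:, mask] = fill_value                        # occlude all three channels
        kept.append(bool(model(occluded.unsqueeze(0)).argmax(dim=1) == pred0))
    return kept
```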
Both black- (Figure 3) and white-deletion (Figure 4) measure the change in the predicted classes with respect to the original classification, which we have decided to coin _allegiance_. Using the test set (the results for the training set can be found in Appendix B), pixels are removed in descending order of importance in blocks of 10% as suggested in [23], except for the initial interval, in which we deem it necessary to study the response in more detail. The behavior observed in the graphs corresponds to what we expected. The decrease in allegiance for black-deletion is greater for active and positive saliency maps than for the standard implementation. Likewise, the decrease in allegiance for white-deletion is greater for inactive and negative saliency maps. Apparently, this confirms the hypothesis that the pixels identified are more important to the network when the sign of the gradients is taken into account.
It is important to note that for active and inactive saliency maps the allegiance stops decreasing after around 50% of the pixels have been deleted.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{CIFAR-10} & \multicolumn{4}{c}{Imagenette} \\ \cline{2-7} & CNN & ResNet-18 & CNN & ResNet-18 & ResNet-18 pre-trained & ConvNeXt \\ \hline Original & 0.23 & 0.23 & 0.24 & 0.26 & 0.28 & 0.49 \\ Positive & 0.14 & 0.11 & 0.12 & 0.14 & 0.21 & 0.37 \\ Active & 0.15 & 0.11 & 0.16 & 0.14 & 0.24 & 0.39 \\ \hline \hline \end{tabular}
\end{table}
Table 2: AUC for black deletions in saliency maps.
Figure 3: Black-Deletion Benchmark.
Figure 4: White-Deletion Benchmark.
The reason is that many pixels have a value of zero because their derivative with respect to the original predicted class is not the largest (for the active saliency map) or the smallest (for the inactive). Hence, after all the non-zero pixels from the active and inactive saliency maps have been deleted, there is nothing else to remove. The same happens for the positive and negative saliency maps at approximately 80%.
To provide concrete numbers, the area under the curve is shown in Table 2 for black-deletion and in Table 3 for white-deletion. The results support the hypothesis that the proposed saliency maps better identify those pixels that, when made brighter or darker as appropriate, increase the confidence of the originally predicted class. Interestingly, although the improvement over the standard saliency map is clear, it is surprising how positive and negative saliency maps sometimes work better than active and inactive. It could still be due to the use of the extremes (i.e., either black or white) as replacement values, instead of slightly darker or brighter variants of the original pixel colors. Nevertheless, the results are still more interpretable than those provided by other metrics proposed in the literature because the effect of the alteration on the image is now known.
## 5 Conclusion and future work
There is more information hidden in the gradients of a saliency map than is usually exploited, both in the sign of the individual pixels and in the gradients with respect to the incorrect classes. Separating pixels according to these dimensions could pave the way to improving the quality of the insights extracted, not only from saliency maps but also from other local interpretability techniques based on gradients.
Furthermore, instead of arbitrarily choosing black to occlude pixels as it is typically done, the proposed approach allows to better understand the effect of replacing pixels with black or white, which can positively or negatively contribute to the classification score depending on the gradient sign. Analyzing the faithfulness of the different variations of the saliency map from this point of view is left as future work.
| |
2309.06966 | Ionization by electron impacts and ionization potential depression | We calculate the cross-section of ionization by free-electron impacts in high
or moderate density plasmas. We show that the so-called ionization potential
depression (IPD) strongly affects the magnitude of the cross-section in the
high-density domain. We use the well-known IPD formulas of Stewart-Pyatt and
Ecker-Kr\"oll. A more recent approach based on classical molecular dynamics
simulation is also investigated. The latter provides an alternative way to
calculate IPD values. At near-solid densities the effects of the free-electron
degeneracy should be investigated. The rates are then calculated within the
Fermi-Dirac statistics. We first use the semi-empirical formula of Lotz for
ionization cross-section. The results may differ significantly from measured
cross-sections or calculations with reliable atomic codes. Then, in a second
step, we propose a new formula that combines the Lotz formula and a polynomial
expansion in terms of the ratio of the energy of the incident electron and the
ionization energy. The coefficients of the polynomial expansion are adjusted to
fit the cross-section provided by robust atomic codes. A great advantage of the
new formula is that it allows a fully analytical calculation of the ionization
rate. Our results are compared to experiments measuring IPDs, cross-sections
and rate coefficients on aluminum at high and moderate densities and on Be-like
CNO ions. | Djamel Benredjem, Jean-Christophe Pain, Annette Calisti, Sandrine Ferri | 2023-09-13T14:01:40 | http://arxiv.org/abs/2309.06966v1 | # Ionization by electron impacts and ionization potential depression
###### Abstract
We calculate the cross-section of ionization by free-electron impacts in high or moderate density plasmas. We show that the so-called ionization potential depression (IPD) strongly affects the magnitude of the cross-section in the high-density domain. We use the well-known IPD formulas of Stewart-Pyatt and Ecker-Kroll. A more recent approach based on classical molecular dynamics simulation is also investigated. The latter provides an alternative way to calculate IPD values. At near-solid densities the effects of the free-electron degeneracy should be investigated. The rates are then calculated within the Fermi-Dirac statistics.
We first use the semi-empirical formula of Lotz for ionization cross-section. The results may differ significantly from measured cross-sections or calculations with reliable atomic codes. Then, in a second step, we propose a new formula that combines the Lotz formula and a polynomial expansion in terms of the ratio of the energy of the incident electron and the ionization energy. The coefficients of the polynomial expansion are adjusted to fit the cross-section provided by robust atomic codes. A great advantage of the new formula is that it allows a fully analytical calculation of the ionization rate.
Our results are compared to experiments measuring IPDs, cross-sections and rate coefficients on aluminum at high and moderate densities and on Be-like CNO ions.
## 1 Introduction
The radiative properties of hot and dense plasmas are well described in collisional-radiative calculations only if the rates of the involved processes are reliable and rapidly estimated in order to allow extensive calculations. Among these processes, the electron-impact ionization (EII) is investigated in this work because it plays an important role at high densities.
Different analytical formulas of the EII cross-section have been used in the past. Among them, the semi-empirical formulas of Drawin [1], Lotz [2, 3, 4] and Younger [5] have been widely used. More recently Bernshtam _et al._[6] proposed an empirical formula for the direct ionization cross-section, which is similar to the formula of Lotz. It involves two parameters that depend on the orbital quantum number of the initial state. The two parameters are adjusted to fit experimental results. Calculations of the total cross-section corresponding to direct EII channels of argon and iron show that the Bernshtam _et al._ empirical formula is more satisfactory than the formula of Lotz. Other authors utilize different empirical formulas. Let us mention the additional work of Lotz [7] which involves three parameters that are determined from experimental data. Rudge and Schwartz [8] also use a formula with three parameters which could be evaluated by fitting experimental data or numerical results. Llovet _et al._[9] described the essentials of
classical, semi-classical and quantum models, and made an extensive comparison of measured K-, L-, and M-shell ionization cross-sections of all elements from hydrogen to einsteinium (see also the references therein).
As they depend on the ionization energy, the cross-section as well as the rate are strongly affected by the so-called ionization potential depression (IPD) in the high-density domain. To estimate the IPD, two models were developed five decades ago by Stewart and Pyatt [10] and Ecker and Kroll [11]. In experiments performed at LCLS (Stanford) on aluminum [12, 13] the observation of the K-\(\alpha\) fluorescence and the measurement of the position of the K-edge of ions show that the formula of Ecker and Kroll is more adequate than the formula of Stewart and Pyatt. Nevertheless, the agreement is not satisfactory for the highest ion charges, _i.e._, from O-like to Be-like aluminum (see Fig. 4 in Ref. [12]). On the other hand, in an experiment performed at the Orion laser system (UK) [14], with a plasma at higher temperatures [500-700] eV and densities in the range [1-10] g/cm\({}^{3}\), the aluminum K-shell spectrum shows a better agreement with calculations if one uses the Stewart-Pyatt IPD rather than the Ecker-Kroll one. These two main experiments have stimulated many theoretical investigations of IPD (see for instance Refs. [15, 16, 17, 18]), in particular using Density Functional Theory.
Recent calculations on continuum lowering in plasmas under solar-interior conditions [19] showed that the silicon IPD presents a good agreement with the measurements of Ciricosta _et al._[12] for low ion charges, \(z=4-6\), but disagrees for \(z=7-10\).
A model based on classical molecular dynamics (CMD) was developed at Aix-Marseille University [20]. It is designed to deal with neutral mixtures composed of ions of the same atom with different charge states and electrons. Thanks to the choice of the soft ion-electron potential, it has been possible to implement an ionization/recombination protocol to control the plasma ion charge distribution and the trapping of electrons in the ion wells. The ionization/recombination process allows an instantaneous knowledge of the potential energy of the valence electron of an ion with a given charge which takes into account the effects of the whole surrounding plasma. A statistical average of these data leads to a straightforward definition of the IPD. At the density and temperature of the experiments at LCLS, the IPD obtained within this approach shows a better agreement with the Ecker-Kroll IPD than with the Stewart-Pyatt IPD [21].
In Section 2, we calculate the IPD in an aluminum plasma at a mass density of 2.7 g/cm\({}^{3}\) and an electron thermal energy of 50 eV, and compare the approaches of Stewart-Pyatt, Ecker-Kroll and the one based on molecular dynamics.
In Section 3, we calculate the EII cross-section, restricting ourselves to direct transitions. Other mechanisms such as those involving a collisional excitation followed by an auto-ionization or by a collisional ionization will not be considered in this work. We use the semi-empirical formula proposed by Lotz [2]. Our calculated cross-sections involve the IPD.
In Section 4, we investigate the degeneracy effect of the free electrons on the EII by calculating the rate coefficient within the Fermi-Dirac statistics and comparing the results to the coefficient obtained within the Maxwell-Boltzmann statistics.
Because the Lotz formula is not always satisfactory, we propose in Sec. 5 a new cross-section expressed as a product of the Lotz formula and a polynomial expansion in terms of the ratio of the free-electron kinetic energy to the ionization energy. The coefficients of the polynomial expansion are adjusted so that the new cross-section fits the results of two atomic codes, namely FAC [22] and HULLAC [23].
## 2 Ionization potential depression
The two most widely used approaches to the lowering of the ionization potential are briefly presented. In the approach of Stewart and Pyatt the IPD of an ion of net charge \(ze\) is expressed as an expansion with respect to the ratio of the Debye length \(\lambda_{D}\) and the ion radius \(R_{0}\), _i.e._
\[I_{\rm SP}(z)=\frac{3(z+1)e^{2}}{2R_{0}}\left\{\left[1+\left(\frac{\lambda_{D }}{R_{0}}\right)^{3}\right]^{2/3}-\left(\frac{\lambda_{D}}{R_{0}}\right)^{2} \right\}, \tag{1}\]
where
\[\lambda_{D}=\sqrt{\frac{kT}{4\pi(N_{e}+\sum_{z}N_{z}z^{2})e^{2}}},\]
with \(N_{e}\) and \(N_{z}\) representing the electron density and the density of ions of charge \(ze\), respectively. The ion radius is given by \(R_{0}=3/(4\pi N_{i})^{1/3}\) where \(N_{i}\) is the ion density: \(N_{i}=\sum_{z}N_{z}\).
We define \(\overline{Z^{q}}\) for a positive integer \(q\):
\[\overline{Z^{q}}=\sum_{z}p_{z}z^{q},\]
where \(p_{z}\) is the fraction of ions of charge \(ze\). The value \(q=1\) defines the average ion charge. The rhs member may be written \(\sum_{z}N_{z}z^{q}/N_{i}\) giving
\[\sum_{z}N_{z}z^{q}=N_{i}\overline{Z^{q}}.\]
When \(q=2\) the above relation gives the contribution of ions to the Debye length.
The high-density limit of the Stewart-Pyatt formula:
\[I_{\rm SP-HD}(z)=\frac{3(z+1)e^{2}}{2R_{0}}, \tag{2}\]
is widely used in the literature.
The Ecker-Kroll model provides the following IPD:
\[I_{\rm EK}(z)=\frac{(z+1)e^{2}}{R_{0}}\left\{\begin{array}{l l}R_{0}/\lambda _{D}&\quad\mbox{if}\quad N_{\rm cr}\geq N_{i}(1+\overline{Z})\\ C(1+\overline{Z})^{1/3}&\quad\mbox{if}\quad N_{\rm cr}<N_{i}(1+\overline{Z}), \end{array}\right. \tag{3}\]
where
\[N_{\rm cr}=\frac{3}{4\pi}\left(\frac{kT}{Z^{2}e^{2}}\right)^{3}\]
is the critical density, with \(Z\) the atomic number. The constant \(C\) is determined by imposing the continuity of the IPD at the critical density, giving
\[C=\left(\frac{R_{0}}{(1+\overline{Z})^{1/3}\lambda_{D}}\right)_{N_{\rm cr}}.\]
While \(I_{\rm SP-HD}\) depends only on the density, \(I_{\rm EK}\) also depends on the temperature through the average ion charge, but the variation is not important in our study.
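For reference, a minimal numerical sketch of Eqs. (1)-(3) is given below. Densities are in cm\({}^{-3}\), temperatures in eV, and the Gaussian-units value \(e^{2}\simeq 1.44\times 10^{-7}\) eV cm is assumed; the constant \(C\) is left as an input.

```python
# Sketch of the Stewart-Pyatt and Ecker-Kroll IPDs of Eqs. (1)-(3), in eV.
import math

E2 = 1.44e-7  # e^2 in eV*cm (Gaussian units)

def debye_length(kT, n_e, n_i, z2_bar):
    return math.sqrt(kT / (4.0 * math.pi * (n_e + n_i * z2_bar) * E2))   # cm

def ion_radius(n_i):
    return (3.0 / (4.0 * math.pi * n_i)) ** (1.0 / 3.0)                  # cm

def ipd_sp(z, kT, n_e, n_i, z2_bar):
    """Stewart-Pyatt IPD, Eq. (1)."""
    x = debye_length(kT, n_e, n_i, z2_bar) / ion_radius(n_i)
    return 1.5 * (z + 1) * E2 / ion_radius(n_i) * ((1.0 + x**3) ** (2.0 / 3.0) - x**2)

def ipd_sp_hd(z, n_i):
    """High-density limit of Stewart-Pyatt, Eq. (2)."""
    return 1.5 * (z + 1) * E2 / ion_radius(n_i)

def ipd_ek(z, kT, n_e, n_i, z_bar, z2_bar, Z_atomic, C=1.0):
    """Ecker-Kroll IPD, Eq. (3); C = 1 is the experimentally motivated choice used below."""
    r0 = ion_radius(n_i)
    n_cr = 3.0 / (4.0 * math.pi) * (kT / (Z_atomic**2 * E2)) ** 3        # critical density
    if n_cr >= n_i * (1.0 + z_bar):
        return (z + 1) * E2 / debye_length(kT, n_e, n_i, z2_bar)
    return C * (z + 1) * E2 / r0 * (1.0 + z_bar) ** (1.0 / 3.0)
```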
The IPD measured at LCLS [13] showed a better agreement with the formula of Ecker and Kroll provided \(C=1\), than with the formula of Stewart and Pyatt. On the other hand, the experiment performed at the Orion laser showed that the Stewart and Pyatt IPD is more satisfactory when used in the FLYCHK code [24] to predict the X-ray emission of an aluminum plasma at mass densities reaching 10 g/cm\({}^{3}\).
The recent CMD approach consists in simulating the motion of interacting atoms or molecules treated as classical, non-relativistic, point-like particles (for more details see for example Ref. [25] and references therein). The Bingo-TCP code, used in the present study, has been designed to deal with neutral mixtures composed of ions of various charges and electrons and to allow the ion charges to change from one to another according to the density-temperature conditions. For that purpose, a regularized electron-ion potential, depending on the ion charge \(ze\), is defined as:
\[V_{ie}(r)=-ze^{2}e^{-r/\lambda}(1-e^{-r/\delta(z)})/r, \tag{4}\]
where the regularization distance \(\delta(z)\) is chosen to reproduce the ionization energy \(E_{z}\) of the unperturbed ion of charge \(ze\) in the ground state when the electron is located at the ion (\(r=0\)). Note that \(\delta(z)\) is also used to define an exclusion sphere around ions and is referred to as the ion-stage radius.
\[\delta(z)=ze^{2}/E_{z}. \tag{5}\]
The screening factor, \(e^{-r/\lambda}\), where \(\lambda\) is half the simulation box size, helps to smooth the small fluctuations of forces arising with the periodic boundary conditions. It has been checked here that the results do not depend on the choice of \(\lambda\) provided that the box size is large enough (a few times the natural plasma screening length). The choice of this regularized ion-electron potential allows the implementation of an ionization/recombination protocol to control the plasma ion charge distribution and the trapping of electrons in the ion wells. The main idea of the model is to extract, from the simulated particle positions and velocities, a local characterization of the plasma around an ion "A" in order to infer whether the conditions are favorable to an ionization or a recombination of this ion. For that purpose, the mutual nearest-neighbor, \(\rm{NN_{A}}\), and next-nearest-neighbor, \(\rm{NNN_{A}}\), electrons of "A" are identified and traced. Their total energy is calculated accounting for the whole complexity of the potential energy surface around "A", including the ionization energy lowering at a local level due to the surrounding charges. A shell, noted \(S_{A}\), formed with \(\rm{NN_{A}}\) and \(\rm{NNN_{A}}\), is defined as the nearest environment of "A" if \(\rm{NN_{A}}\) is localized at a distance \(d_{A}\) from "A" such that \(\delta(z_{A})<d_{A}<\sqrt{2}\,\delta(z_{A})\). Depending on the total energy of the two neighbor electrons, the shell is labeled hot (positive energy, favorable to ionization) or cold (negative energy, favorable to recombination). A hot or cold shell around an ion results, respectively, in a pre-ionization, _i.e._, an increase by 1 of the ion charge and the appearance of one electron localized at the ion, or a recombination, _i.e._, a decrease by 1 of the ion charge and the removal of the nearest-neighbor electron with a transfer of the kinetic energy difference to the \(\rm{NNN_{A}}\). This local discontinuity over one time step is then accounted for by the whole system through a normal evolution. The pre-ionized state, _i.e._, an ion with a trapped electron, can then be converted into an ionized state through multiple collisions. In this approach the ionization is considered as completed when a new hot shell surrounds the ion, opening the way to a further pre-ionization. In the meantime the ion is considered as excited or multi-excited if there is more than one trapped electron in the ion potential. It is important to note that the coupling of electrons with radiation is ignored in our model and that the notion of discrete energy for the ionic excited states is replaced here by its continuous equivalent. During the initial equilibration step, the system is driven toward equilibrium using a thermostat and is not supposed to be used for any measurements. Once the system has reached an equilibrium state, ionization/recombination events become far less frequent than during the equilibration step, giving one the ability to go further in the description of the charge interactions in the plasma, accounting for mixtures of ions undergoing changes of their charge states. In particular, taking advantage of the particular design of the ionization protocol, it is possible, when the ion is in a pre-ionization state, to measure the energy necessary to free an electron in the ground state of a given ion, while accounting for all the interactions with the surrounding plasma.
Due to the fluctuating local environment of the ions, the ionization energy is then characterized by a distribution function. If one compares the mean energies deduced from these distribution functions (see Fig. 1) with the corresponding energies for the isolated ions, it is possible to infer the IPD due to the interactions with the environment. Iglesias and Sterne [26] investigated the fluctuations of the number of free electrons (and consequently of the ion-sphere radius) and proposed simple analytical IPDs within the models of Stewart-Pyatt and Ecker-Kroll.
We investigate an aluminum plasma at a mass density 2.7 g/cm\({}^{3}\) and an electronic temperature \(T_{e}=50\) eV. A calculation of the average ion charge with FLYCHK code [24] gives \(\overline{Z}=5.77\) and \(\overline{Z^{2}}=34.1\). We have \(\lambda_{D}/R_{0}=0.215\) which means that the high density limit of the Stewart-Pyatt IPD is a good approximation. Following experimental considerations [13] we assume \(C=1\) and use the modified Ecker-Kroll IPD,
\[I_{\rm{mEK}}=\frac{2}{3}(1+\overline{Z})^{1/3}I_{\rm{SP-HD}}. \tag{6}\]
Figure 2 shows the calculated IPDs of aluminum ions having \(z\in[4-10]\). The experimental IPD [13] is also represented. The CMD calculation with an ion temperature \(T_{i}=300\) K (CMD2) shows a satisfactory
agreement with the Ecker-Kroll IPD. Calculations (CMD2 and EK) agree with experimental results for the lowest ion charges only.
As said above, a recent calculation on silicon under solar-interior conditions [19] shows the same divergence from experimental IPD for increasing \(z\). We can see that the Stewart-Pyatt model as well as CMD1 calculation (\(T_{i}=T_{e}\)) are not satisfactory.
The choice of a room temperature for \(T_{i}\) is explained by the experimental conditions. In fact, the electrons in the target are heated within 80 fs to temperature up to 180 eV depending on the photon energy of the irradiation. Moreover, the K-shell fluorescence, on which the interpretation of the experiment is based, only occurs while the target is irradiated. On this time scale, the ion motion is negligible and emission occurs in a plasma at solid density. To get closer to these conditions one can use CMD to simulate a two-component plasma of ions at room temperature and solid density, and electrons in pseudo equilibrium with the cold-ion population.
As the continuum lowering is important at this density, it will be taken into account in cross-section calculations.
## 3 Ionization cross-section
We first use the empirical formula of the cross-section proposed by Lotz [2]. Other authors used similar formulas (see for example the work of Bernshtam and co-workers [6]). The cross-section of the direct ionization between the ground levels \(g\) and \(g^{\prime}\) of ions of charges \(ze\) and \((z+1)e\), respectively, reads
\[\sigma(E)=A\,\xi\frac{\ln(E/E_{z,g})}{EE_{z,g}}, \tag{7}\]
where \(E\) is the incident electron energy, \(A\) a constant (in the range \(2.9-4.5\times 10^{-14}\) cm\({}^{2}\cdot\) eV\({}^{2}\), see Ref. [2]), \(E_{z,g}\) the ionization energy and \(\xi\) the number of electrons in the subshell from which the ionization occurs.
Figure 1: Normalized CMD distribution of the ionization energy for Li-, Be- and B-like aluminum. Density=2.7 g/cm\({}^{3}\), \(kT_{e}=50\) eV and \(kT_{i}=300\) K.
The ionization energy accounts for the continuum lowering, resulting in an increase of the cross-section.
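As a simple illustration, Eq. (7) with a continuum-lowered threshold can be evaluated as in the sketch below; the value of \(A\) and the IPD are passed as inputs, and the lowered ionization energy could equally be supplied directly (for instance the CMD average discussed next).

```python
# Sketch of the Lotz cross-section, Eq. (7), with an IPD-lowered ionization energy.
import math

def lotz_cross_section(E, E_ion, ipd, xi, A=4.0e-14):
    """E, E_ion and ipd in eV; A in cm^2 eV^2 (Lotz quotes 2.9-4.5e-14); returns cm^2."""
    E_eff = E_ion - ipd              # continuum-lowered threshold E_{z,g}
    if E <= E_eff:
        return 0.0                   # below threshold, no direct ionization
    return A * xi * math.log(E / E_eff) / (E * E_eff)
```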
In fact, owing to a fluctuating environment of the ions, the classical molecular dynamics approach provides a distribution of ionization energy. As a consequence, \(E_{z,g}\) is taken to be the average ionization energy. Therefore, this enables one to determine, not only the average ionization energy, but also the standard deviation of the latter, and all of its moments. For instance, the third- (skewness) and fourth- (kurtosis) order moments provide respectively information on the asymmetry and sharpness of the distribution. This opens the way to a statistical modeling of the ionization-energy distribution. Moreover, using the Bingo ionization-energy distribution, it is possible to average the whole cross-section itself, which could yield different results, compared to the procedure used in the present work consisting in including the average ionization energy in the cross-section.
In Fig. 3 we show the EII cross-section of O-like aluminum. As can be seen from Fig. 2 the EK and CMD2 IPDs are very close to 100 eV, while the SP IPD is around 80 eV. Then, as expected, when we take EK or CMD IPDs the cross-sections are very close. By comparing with the isolated ion case it is clear that the IPD is responsible for a large increase of the cross-section.
In Fig. 4 we represent the cross-section of Be-like aluminum. The values are smaller than those of the O-like ion by an order of magnitude. Moreover the difference between the various cross-sections is smaller than in the previous case.
The Lotz formula allows us to derive an analytical form of the rate coefficients for both the Fermi-Dirac statistics and the Boltzmann distribution. The resulting rates are thus better suited to collisional-radiative calculations, in which extensive rate evaluations are required.
Figure 2: Ionization potential depression in aluminum ions. CMD1 and CMD2: classical molecular dynamics with \(T_{i}=50\) eV and 300 K, respectively ; mEK (C=1): modified Ecker-Kröll formula, with \(C=1\) ; SP-HD: high density limit of the Stewart-Pyatt formula. Density=2.7 g/cm\({}^{3}\), \(kT_{e}=50\) eV.
Figure 4: Same as in Fig. 3 for Be-like aluminum.
Figure 3: EII cross-section of O-like aluminum as a function of the energy of the incident electron. Legend, density and electron temperature: as in Fig. 2. Blue curve: isolated ion.
## 4 Ionization rate
The ionization rate coefficient reads
\[q=\int_{E_{z,g}}^{\infty}\sigma(E)\sqrt{\frac{2E}{m}}\,\rho(E)\,dE, \tag{8}\]
where \(\sqrt{2E/m}\) is the electron velocity with \(m\) the electron mass. The probability density \(\rho(E)\) is expressed within the Fermi-Dirac statistics in order to account for the free-electron degeneracy.
### Fermi-Dirac statistics
Let us introduce the Fermi-Dirac integral of integer and half-integer order \(p\):
\[F_{p}(\eta,\chi)=\frac{1}{\Gamma(p+1)}\int_{\chi}^{\infty}\frac{\epsilon^{p}}{ e^{\epsilon-\eta}+1}d\epsilon, \tag{9}\]
where \(\Gamma\) is the Gamma function. \(\epsilon\) and \(\eta\) are the reduced energy and chemical potential, respectively: \(\epsilon=E/(kT)\) and \(\eta=\mu/(kT)\). If \(\chi\neq 0\), the integral is known as the incomplete Fermi-Dirac integral.
The normalized probability density is given by
\[\rho(E)=\frac{1}{D}\frac{\sqrt{E}}{e^{(E-\mu)/kT}+1}, \tag{10}\]
where \(D=(kT)^{3/2}\Gamma(3/2)F_{1/2}(\eta,0)\). The second factor in the rhs is due to the normalization of \(\rho\). We set \(\chi=E_{z,g}/(kT)\). Equations (7) and (8) then become:
\[\sigma=A\,\xi\frac{\ln(\epsilon/\chi)}{E\,E_{z,g}} \tag{11}\]
and
\[q=\sqrt{\frac{2}{m}}\frac{A\,\xi}{D}\frac{1}{\chi}\int_{\chi}^{\infty}\frac{\ln(\epsilon/\chi)}{e^{\epsilon-\eta}+1}d\epsilon. \tag{12}\]
Let us focus on the integral in the equation above. We can write
\[\int_{\chi}^{\infty}\frac{\ln(\epsilon/\chi)}{e^{\epsilon-\eta}+1}d\epsilon= \ln\left(\frac{1}{\chi}\right)\Gamma(1)\,F_{0}(\eta,\chi)+\int_{\chi}^{\infty }\frac{\ln(\epsilon)}{e^{\epsilon-\eta}+1}d\epsilon,\]
where \(\Gamma(1)=1\). The incomplete Fermi-Dirac integral, \(F_{0}\), can be expressed analytically [27], giving the first term in the rhs of the equation above:
\[I_{1}=\ln\left(\frac{1}{\chi}\right)F_{0}(\eta,\chi)=\ln\left(\frac{1}{\chi} \right)\{\ln\left[e^{\chi-\eta}+1\right]-(\chi-\eta)\}.\]
The second integral
\[I_{2}=\int_{\chi}^{\infty}\frac{\ln(\epsilon)}{e^{\epsilon-\eta}+1}d\epsilon,\]
is calculated numerically. Such an integral can also be estimated using a Sommerfeld-type expansion (see Appendix A). Thus the ionization rate coefficient becomes
\[q=\sqrt{\frac{2}{m}}\frac{A}{D}\frac{\xi}{\chi}(I_{1}+I_{2}) \tag{13}\]
or more explicitly
\[q=5.935\times 10^{7}\,A\,\xi\,\frac{1}{(kT)^{3/2}\,\Gamma(3/2)F_{1/2}(\eta,0)} \frac{1}{\chi}(I_{1}+I_{2}), \tag{14}\]
where the numerical factor is in \(\mathrm{cm}\cdot\mathrm{s}^{-1}\cdot\mathrm{eV}^{-1/2}\) and \(\Gamma(3/2)=\sqrt{\pi}/2\).
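For illustration, Eq. (14) can be transcribed directly into a short Python/SciPy routine, with \(I_{1}\) taken from its closed form and \(I_{2}\) obtained by quadrature; the values of \(A\), \(\xi\), \(\eta\) and the ionization energy used in the example are placeholders (\(\eta\) is obtained from Eq. (15), discussed next).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expit

def rate_fermi_dirac(A, xi, kT, E_ion, eta):
    """EII rate coefficient of Eq. (14): Lotz-type cross-section, Fermi-Dirac electrons.

    A in cm^2 eV^2, xi = number of equivalent electrons, kT and E_ion in eV.
    """
    chi = E_ion / kT
    # I1 from the closed form quoted above (incomplete Fermi-Dirac integral F_0).
    I1 = np.log(1.0 / chi) * (np.log(np.exp(chi - eta) + 1.0) - (chi - eta))
    # I2 by numerical quadrature of ln(eps)/(exp(eps - eta) + 1).
    I2 = quad(lambda eps: np.log(eps) * expit(eta - eps), chi, np.inf)[0]
    # Gamma(3/2)*F_{1/2}(eta,0) = integral of sqrt(eps)/(exp(eps-eta)+1) over [0, inf).
    D_red = quad(lambda eps: np.sqrt(eps) * expit(eta - eps), 0.0, np.inf)[0]
    return 5.935e7 * A * xi * (I1 + I2) / (kT**1.5 * D_red * chi)

# Placeholder parameters (A of the order of the usual Lotz constant, in cm^2 eV^2):
print(rate_fermi_dirac(A=4.5e-14, xi=6, kT=50.0, E_ion=100.0, eta=-1.6))
```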
The chemical potential is obtained by
\[F_{1/2}(\eta,0)=\frac{(4\pi)^{3/2}}{2}\left(\frac{E_{I}}{kT}\right)^{3/2}N_{e}\,a _{0}^{3}, \tag{15}\]
where \(a_{0}\) is the Bohr radius and \(E_{I}\) the Rydberg energy. Knowing the electron density \(N_{e}\) and temperature \(T_{e}\) we calculate \(F_{1/2}(\eta,0)\). It is then easy to derive the chemical potential.
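A sketch of this inversion in Python/SciPy is given below: the right-hand side of Eq. (15) is formed from \(N_{e}\) and \(kT\), and \(\eta\) is found by a root search on \(F_{1/2}(\eta,0)\). The electron density and temperature in the example are placeholders.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import expit, gamma

A0 = 0.529177e-8   # Bohr radius (cm)
E_RYD = 13.6057    # Rydberg energy (eV)

def f_half(eta):
    """Complete Fermi-Dirac integral F_{1/2}(eta, 0)."""
    return quad(lambda eps: np.sqrt(eps) * expit(eta - eps), 0.0, np.inf)[0] / gamma(1.5)

def reduced_chemical_potential(Ne_cm3, kT_eV):
    """Solve Eq. (15) for eta = mu/kT."""
    rhs = 0.5 * (4.0 * np.pi) ** 1.5 * (E_RYD / kT_eV) ** 1.5 * Ne_cm3 * A0**3
    return brentq(lambda eta: f_half(eta) - rhs, -60.0, 60.0)

# Placeholder conditions of the order of those studied here (near-solid Al, kT_e = 50 eV):
print(reduced_chemical_potential(Ne_cm3=5e23, kT_eV=50.0))
```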
In Fig. 5 we show the EII rate coefficient of C-, N- and O-like aluminum. The rate coefficients calculated with the CMD2 IPD (\(T_{i}=300\) K) show a good agreement with those obtained by using the EK IPD. When \(T_{i}=T_{e}=50\) eV, the CMD1 IPD yields rates that are too low, even lower than the rates obtained with the high-density limit of the Stewart-Pyatt formula.

In Fig. 6 we represent the EII rate coefficient of C- to Li-like aluminum. We notice a larger discrepancy between the CMD and EK results. However, we have a good agreement between the rates using the CMD1 and Stewart-Pyatt IPDs.

Now we show the variation of the rate coefficient with electron temperature. Here the mass density is fixed to 0.34 g/cm\({}^{3}\) and three electron temperatures are investigated. The ion temperature is identical to \(T_{e}\). At 70 eV, the CMD and Ecker-Kröll IPDs are close to each other. As a consequence, the rates calculated within the Fermi-Dirac statistics with these two IPDs are very close (see Fig. 7). When the temperature is increased (see Figs. 8-9), the rates calculated with the CMD IPD tend to the rates obtained with the high-density limit of the Stewart-Pyatt IPD.
We know that the electron degeneracy effect is small when the electron temperature is well above the Fermi temperature. A direct consequence is that the Fermi-Dirac distribution tends towards the Boltzmann distribution. In the following, we show that the Lotz formula, coupled to a Boltzmann energy distribution of the free electrons, provides an analytical rate coefficient.
Figure 5: EII rate coefficient of aluminum ions, calculated within the Fermi-Dirac statistics for \(z=5-7\). Legend, density and temperature, as in Fig. 2.
### Boltzmann energy distribution
At the Boltzmann limit, the factor \(\sqrt{E}/\left(e^{(E-\mu)/kT}+1\right)\) in Eq. (10) becomes \(\sqrt{E}\,e^{-(E-\mu)/kT}\). As a result, the product \(\Gamma(3/2)F_{1/2}(\eta,0)\) is easily calculated:
\[\Gamma(3/2)F_{1/2}(\eta,0)\to e^{\eta}\int_{0}^{\infty}\epsilon^{1/2}e^{- \epsilon}d\epsilon=\frac{\sqrt{\pi}}{2}e^{\eta}.\]
The Boltzmann distribution then reads
\[\rho^{\prime}(E)=\frac{\sqrt{E}\,e^{-E/kT}}{\frac{\sqrt{\pi}}{2}(kT)^{3/2}}. \tag{16}\]
The term \(D\) in Eq. (10) becomes
\[D^{\prime}=\frac{\sqrt{\pi}}{2}(kT)^{3/2}e^{\eta} \tag{17}\]
while \(I_{1}\) and \(I_{2}\) are replaced by
\[I_{1}^{\prime}=\ln\left(\frac{1}{\chi}\right)e^{\eta}\int_{\chi}^{\infty}e^{- \epsilon}d\epsilon=\ln\left(\frac{1}{\chi}\right)e^{\eta-\chi}\]
and
\[I_{2}^{\prime} = e^{\eta}\int_{\chi}^{\infty}\ln(\epsilon)e^{-\epsilon}d\epsilon =e^{\eta}\left[e^{-\chi}\ln(\chi)+\int_{\chi}^{\infty}\frac{e^{-\epsilon}}{ \epsilon}d\epsilon\right]\] \[= e^{\eta}\left[e^{-\chi}\ln(\chi)+E_{1}(\chi)\right],\]
Figure 6: Same as in Fig. 5 for \(z=7-10\).
where \(E_{1}\) is the exponential integral [28]:
\[E_{1}(\chi)=\int_{\chi}^{\infty}\frac{e^{-\epsilon}}{\epsilon}d\epsilon.\]
We then obtain
\[I_{1}^{\prime}+I_{2}^{\prime}=e^{\eta}\,E_{1}(\chi)\]
and the ionization rate coefficient becomes
\[q^{\prime}=\sqrt{\frac{2}{m}}\frac{A\,\xi}{D^{\prime}}\frac{1}{\chi}(I_{1}^{ \prime}+I_{2}^{\prime}),\]
and finally
\[q^{\prime}=\frac{4}{\sqrt{2\pi m}}\frac{A\,\xi}{(kT)^{3/2}}\frac{E_{1}(\chi)}{\chi} \tag{18}\]
or
\[q^{\prime}=6.697\times 10^{7}\frac{A\,\xi}{(kT)^{3/2}}\frac{E_{1}(\chi)}{\chi},\]
where the numerical constant is given in \(\rm{cm}\cdot\rm{s}^{-1}\cdot\rm{eV}^{-1/2}\).
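In this limit the rate reduces to a single exponential-integral evaluation, so Eq. (18) and its numerical form above can be reproduced with a few lines of Python/SciPy; the parameters in the example are placeholders.

```python
from scipy.special import exp1

def rate_boltzmann(A, xi, kT, E_ion):
    """EII rate coefficient in the Maxwell-Boltzmann limit (Eq. (18) and its numerical form)."""
    chi = E_ion / kT
    return 6.697e7 * A * xi * exp1(chi) / (kT**1.5 * chi)

# Same placeholder parameters as in the Fermi-Dirac sketch above:
print(rate_boltzmann(A=4.5e-14, xi=6, kT=50.0, E_ion=100.0))
```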
### Comparison
We compared the rates calculated with Eq. (14) (Fermi-Dirac statistics) and Eq. (18) (Maxwell-Boltzmann statistics). The IPD value is given by a molecular-dynamics calculation. The difference between the two rates is very small, less than 7 %, showing that the degeneracy of the free electrons plays a small role. In fact, at the density \(\rho=2.7\) g/cm\({}^{3}\) and thermal energy \(kT_{e}=50\) eV, the Fermi
Figure 7: EII rate coefficient of aluminum ions. \(\rho\)=0.34 g/cm\({}^{3}\), \(T_{e}=T_{i}=70\) eV. Legend: as in Fig. 2.
energy \(E_{\rm F}=20.745\) eV while the chemical potential \(\mu=-81.45\) eV. We then have \(e^{-\mu/kT}=5.1\). The Maxwell-Boltzmann limit is then relevant. Figures 1 and 2 of Ref. [29] confirm that our plasma is in the classical regime and consequently that the free-electron energy distributions (Eqs. (10) and (16)) are very close. Free-electron degeneracy therefore plays a very small role on the ionization at the above conditions.
On the other hand, a recent work [30] shows that the degeneracy plays an important role during the interaction of an EUV free-electron laser with solid creating a warm density plasma (\(kT_{e}<10\) eV).
## 5 Numerical calculations
The Lotz formula allows an easy analytical calculation of the rates. Unfortunately, sometimes, the results are very different from measured cross-sections. The alternative to the Lotz formula is a numerical calculation with a reliable atomic code. In this section, we rely on the two codes, FAC and HULLAC, to obtain accurate cross-sections. The resulting cross-sections are then compared to experimental results.
FAC is an integrated software package capable of investigating the atomic structure as well as most processes occurring in plasmas. It provides the level energies and the rate of the following processes: radiative transitions, collisional excitation, electron-impact ionization, photoionization, autoionization, radiative recombination and dielectronic capture. In this work, the EII cross-sections are computed in the distorted wave (DW) approximation. Bound and free states are determined via a self-consistent field model, and a local term for exchange is added to the potential (Dirac-Fock-Slater approach). The code also includes a collisional radiative model to construct synthetic spectra for plasmas under different physical conditions.
HULLAC is also an integrated computer package for atomic processes in plasmas. Like FAC it enables one to calculate atomic structure, cross-sections and rates for collisional and radiative atomic processes. The code is based on relativistic quantum-mechanical calculations including configuration interaction. The collisional cross-sections are calculated within the DW approach. The parametric
Figure 8: Same as in Fig. 7, with \(T_{e}=T_{i}=100\) eV.
potential method is used for both bound and free orbitals. The factorization-interpolation method is applied to the derivation of collisional rates. The continuum orbitals are computed in the framework of the phase-amplitude approach. The NJGRAF graphical method is used in the calculation of the angular momentum part of the matrix elements. Physics and code descriptions can be found in Ref. [23].
In the following, the densities of the investigated plasmas are much lower than in the experiments at LCLS and Orion laser. The ionization potential depression is then negligible. As a consequence, the cross-section and the rate involve the ionization energy of isolated ions.
### EII cross-section in Be-like CNO
We have calculated the cross-section of ionization from the ground level of Be-like ions forming Li-like ions, with FAC and HULLAC codes. Our calculations are compared to measurements utilizing the crossed electron and ion beams technique at Oak Ridge National Laboratory [31].
In Figs. 10-12 we show the EII cross-section in Be-like carbon, nitrogen and oxygen, respectively. We consider only the ionization from the ground level to the Li-like levels \(2s\) and \(2p\) (\(J=1/2,3/2\)). As clearly seen, the Lotz formula and the numerical (FAC and HULLAC) results have the same behaviour with respect to energy. However, the Lotz and HULLAC cross-sections differ by a significant amount from measurements for all ions, in almost the entire energy range. Our calculations with FAC show a better agreement with experimental results. This gives us confidence in the FAC code.
The ionization from the metastable states \(2s2p\) (\(J=0\), 1, 2) is not taken into account in this work. However, a preliminary calculation with FAC, and the fractions of the ground vs metastable states estimated by Fogle _et al._[31], shows a small difference with ionization from the ground state only. The study of the metastable states is in progress and their contribution to the total cross-section will be addressed in a future work.
In the following, we present a new approach that gives the EII rate coefficient. As above, the rate coefficient is calculated within the Fermi-Dirac statistics and at the Boltzmann limit. In contrast to Sec. 4, this approach combines the accuracy of the FAC code with an analytical calculation. The
Figure 9: Same as in Fig. 7, with \(T_{e}=T_{i}=190\) eV.
calculated rate is then compared to measurements.
### New rate coefficient. Comparison with experiment
In this section we first compare the semi-empirical formula, Eq. (11), used by Lotz [2] and Bernshtam _et al._[6] to cross-sections obtained with the FAC code [22]. When the difference is significant we propose, in a second step, the following procedure: we replace the semi-empirical formula by a new formula that reproduces the FAC cross-sections. This procedure has the advantage of allowing an analytical calculation of the rate. To be specific, the semi-empirical cross-section is multiplied by a polynomial expansion in which the coefficients are adjusted to fit the FAC cross-section, _i.e._
\[\sigma=A\,\xi\frac{\ln(\epsilon/\chi)}{E\,E_{z,g}}\times\sum_{p=0}^{N}a_{p} \left(\frac{\epsilon}{\chi}\right)^{p}, \tag{19}\]
where \(N\) is the degree of the polynomial, \(\epsilon/\chi=E/E_{z,g}\), _i.e._ the ratio of the incident electron energy and the ionization energy. The fit of the FAC cross-section yields the \(a_{p}\) coefficients. In the following, we show that the new rate coefficient can be expressed in terms of the \(a_{p}\)s.
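The fit itself can be performed as an ordinary least-squares polynomial fit of the ratio between the code cross-section and the bare Lotz-type factor, for instance as sketched below in Python/NumPy; the tabulated energies and cross-sections stand in for actual FAC or HULLAC output.

```python
import numpy as np

def fit_correction(E, sigma_code, A, xi, E_ion, degree=5):
    """Fit the coefficients a_p of Eq. (19) so that the corrected form reproduces sigma_code."""
    u = E / E_ion                                   # epsilon/chi = E/E_z,g
    sigma_lotz = A * xi * np.log(u) / (E * E_ion)   # bare Lotz-type factor, Eq. (11)
    ratio = sigma_code / sigma_lotz                 # polynomial that the a_p must reproduce
    return np.polyfit(u, ratio, degree)[::-1]       # reversed so that a[p] multiplies u**p

def sigma_corrected(E, a, A, xi, E_ion):
    """Corrected cross-section of Eq. (19)."""
    u = E / E_ion
    return A * xi * np.log(u) / (E * E_ion) * np.polyval(a[::-1], u)

# Placeholder "code-like" data (cm^2) above an assumed 100 eV ionization threshold:
E_ion = 100.0
E = np.linspace(1.2, 10.0, 40) * E_ion
sigma_fac = 1e-19 * np.log(E / E_ion) / (E / E_ion) ** 1.2
a = fit_correction(E, sigma_fac, A=4.5e-14, xi=4, E_ion=E_ion)
print(np.round(a, 4))
```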
Figure 10: Cross-section of EII from the ground level of Be-like carbon. The gray surface represents the experimental error.
Figure 11: Same as in Fig. 10 for nitrogen.
Figure 12: Same as in Fig. 10 for oxygen.
#### 5.2.1 Fermi-Dirac distribution
If the free electrons evolve according to the Fermi-Dirac distribution, Eq. (12) giving the rate reads
\[q = \sqrt{\frac{2}{m}}\frac{A}{D}\frac{\xi}{\chi}\int_{\chi}^{\infty}\frac{\ln(\epsilon/\chi)}{e^{\epsilon-\eta}+1}\sum_{p=0}^{N}a_{p}\left(\frac{\epsilon}{\chi}\right)^{p}\,d\epsilon \tag{20}\] \[= \sqrt{\frac{2}{m}}\frac{A}{D}\frac{\xi}{\chi}\sum_{p=0}^{N}\frac{a_{p}}{\chi^{p}}\int_{\chi}^{\infty}\frac{\ln(\epsilon/\chi)}{e^{\epsilon-\eta}+1}\epsilon^{p}\,d\epsilon\] \[= \sqrt{\frac{2}{m}}\frac{A}{D}\frac{\xi}{\chi}\sum_{p=0}^{N}\frac{a_{p}}{\chi^{p}}\left[I_{1}^{(p)}+I_{2}^{(p)}\right],\]
where \(I_{1}^{(p)}\) can be expressed in terms of the incomplete Fermi-Dirac integral
\[I_{1}^{(p)}=-\ln(\chi)\int_{\chi}^{\infty}\frac{\epsilon^{p}}{e^{\epsilon-\eta}+1}d\epsilon=-\ln(\chi)\,\Gamma(p+1)\,F_{p}(\eta,\chi)\]
and
\[I_{2}^{(p)}=\int_{\chi}^{\infty}\frac{\epsilon^{p}\,\ln\epsilon}{e^{\epsilon- \eta}+1}d\epsilon.\]
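Both quantities can be evaluated by direct quadrature, in the same way as before; a Python/SciPy sketch, with placeholder values of \(\eta\) and \(\chi\):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expit

def I1_p(p, eta, chi):
    """I_1^(p): -ln(chi) times the integral of eps^p/(exp(eps-eta)+1) over [chi, inf)."""
    F = quad(lambda eps: eps**p * expit(eta - eps), chi, np.inf)[0]
    return -np.log(chi) * F

def I2_p(p, eta, chi):
    """I_2^(p): integral of eps^p ln(eps)/(exp(eps-eta)+1) over [chi, inf)."""
    return quad(lambda eps: eps**p * np.log(eps) * expit(eta - eps), chi, np.inf)[0]

eta, chi = -1.6, 2.0   # placeholder reduced chemical potential and ionization energy
print([I1_p(p, eta, chi) + I2_p(p, eta, chi) for p in range(4)])
```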
#### 5.2.2 Boltzmann distribution
In the case of the Maxwell-Boltzmann statistics, we have \(I_{1}^{(p)}\to I_{1}^{\prime(p)}\) and \(I_{2}^{(p)}\to I_{2}^{\prime(p)}\), with
\[I_{1}^{\prime(p)}=-e^{\eta}\ln(\chi)\int_{\chi}^{\infty}\epsilon^{p}\,e^{- \epsilon}\,d\epsilon=-e^{\eta}\ln(\chi)\Gamma(p+1,\chi),\]
where \(\Gamma(p+1,\chi)\) is the incomplete Gamma function, and
\[I_{2}^{\prime(p)}=e^{\eta}\int_{\chi}^{\infty}\epsilon^{p}\,e^{-\epsilon}\, \ln(\epsilon)\,d\epsilon.\]
The \(\Gamma\) functions can be calculated by using the relation \(\Gamma(p+1,\chi)=p\Gamma(p,\chi)+\chi^{p}e^{-\chi}\), and by knowing that \(\Gamma(1,\chi)=e^{-\chi}\). It is then easy to obtain
\[I_{1}^{\prime(0)} = -e^{\eta-\chi}\ln(\chi)\] \[I_{1}^{\prime(1)} = -e^{\eta-\chi}\ln(\chi)[\chi+1]\] \[I_{1}^{\prime(2)} = -e^{\eta-\chi}\ln(\chi)[\chi^{2}+2\chi+2]\] \[I_{1}^{\prime(3)} = -e^{\eta-\chi}\ln(\chi)[\chi^{3}+3\chi^{2}+6\chi+6]\] \[\vdots\]
The \(I_{2}^{\prime(p)}\) are integrated by parts. We have
\[I_{2}^{\prime(p)} = e^{\eta}\left\{\left[\frac{\epsilon^{p+1}}{p+1}e^{-\epsilon}\ln(\epsilon)\right]_{\chi}^{\infty}-\frac{1}{p+1}\int_{\chi}^{\infty}\epsilon^{p+1}\left[-e^{-\epsilon}\ln(\epsilon)+\frac{e^{-\epsilon}}{\epsilon}\right]d\epsilon\right\}\] \[= \frac{e^{\eta}}{p+1}\left[-\chi^{p+1}e^{-\chi}\ln(\chi)+\int_{\chi}^{\infty}\epsilon^{p+1}e^{-\epsilon}\ln(\epsilon)d\epsilon-\int_{\chi}^{\infty}\epsilon^{p}e^{-\epsilon}d\epsilon\right]\] \[= \frac{1}{p+1}\left[-\chi^{p+1}e^{\eta-\chi}\ln(\chi)+I_{2}^{\prime(p+1)}-e^{\eta}\Gamma(p+1,\chi)\right].\]
It is then easy to write \(I_{2}^{\prime(p+1)}\) in terms of \(I_{2}^{\prime(p)}\):
\[I_{2}^{\prime(p+1)}=(p+1)I_{2}^{\prime(p)}+e^{\eta}\Gamma(p+1,\chi)+\chi^{p+1}e^ {\eta-\chi}\ln(\chi). \tag{21}\]
We calculate the \(p=0\) integral and deduce the higher-order ones. We have
\[I_{2}^{\prime(0)} = e^{\eta}\int_{\chi}^{\infty}e^{-\epsilon}\ln(\epsilon)d\epsilon =e^{\eta}\left(\left[-e^{-\epsilon}\ln(\epsilon)\right]_{\chi}^{\infty}+\int_ {\chi}^{\infty}\frac{e^{-\epsilon}}{\epsilon}d\epsilon\right)\] \[= e^{\eta}E_{1}(\chi)+e^{\eta-\chi}\ln(\chi)\]
and
\[I_{2}^{\prime(1)} = e^{\eta}E_{1}(\chi)+e^{\eta-\chi}[1+(1+\chi)\ln(\chi)]\] \[I_{2}^{\prime(2)} = 2e^{\eta}E_{1}(\chi)+e^{\eta-\chi}[\chi+3+(\chi^{2}+2\chi+2)\ln (\chi)]\] \[I_{2}^{\prime(3)} = 6e^{\eta}E_{1}(\chi)+e^{\eta-\chi}[\chi^{2}+5\chi+11+(\chi^{3}+ 3\chi^{2}+6\chi+6)\ln(\chi)].\] \[\vdots\]
The first \(I_{1}^{\prime(p)}+I_{2}^{\prime(p)}\) sums are then
\[I_{1}^{\prime(0)}+I_{2}^{\prime(0)} = e^{\eta}E_{1}(\chi)\] \[I_{1}^{\prime(1)}+I_{2}^{\prime(1)} = e^{\eta}[E_{1}(\chi)+e^{-\chi}]\] \[I_{1}^{\prime(2)}+I_{2}^{\prime(2)} = e^{\eta}[2E_{1}(\chi)+e^{-\chi}(3+\chi)]\] \[I_{1}^{\prime(3)}+I_{2}^{\prime(3)} = e^{\eta}[6E_{1}(\chi)+e^{-\chi}(\chi^{2}+5\chi+11)]\] \[\vdots\]
The rate coefficient is then given by Eq. (20), replacing \(I_{1}^{(p)}+I_{2}^{(p)}\) by \(I_{1}^{\prime(p)}+I_{2}^{\prime(p)}\) and \(D\) by \(D^{\prime}\) (see Eq. (17)). We then have
\[q^{\prime}=6.697\times 10^{7}\frac{A\,\xi}{(kT)^{3/2}}\frac{e^{-\eta}}{\chi} \sum_{p=0}^{N}\frac{a_{p}}{\chi^{p}}[I_{1}^{\prime(p)}+I_{2}^{\prime(p)}]. \tag{22}\]
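A Python/SciPy sketch of Eq. (22) is given below: the \(p=0\) term is taken from the closed forms above and the higher orders are generated with the recursion of Eq. (21). The fit coefficients \(a_{p}\) and the plasma parameters in the example are placeholders, not values taken from Table 1.

```python
import numpy as np
from scipy.special import exp1, gamma, gammaincc

def upper_gamma(s, x):
    """Upper incomplete Gamma function Gamma(s, x)."""
    return gamma(s) * gammaincc(s, x)

def rate_boltzmann_corrected(a, A, xi, kT, E_ion, eta):
    """EII rate of Eq. (22): Boltzmann limit with the polynomial-corrected cross-section (19)."""
    chi = E_ion / kT
    # I_2'(0) from its closed form, then Eq. (21) to climb to higher p.
    I2 = np.exp(eta) * exp1(chi) + np.exp(eta - chi) * np.log(chi)
    total = 0.0
    for p, ap in enumerate(a):
        I1 = -np.exp(eta) * np.log(chi) * upper_gamma(p + 1, chi)   # I_1'(p)
        total += ap / chi**p * (I1 + I2)
        # advance I_2'(p) -> I_2'(p+1) with the recursion of Eq. (21)
        I2 = (p + 1) * I2 + np.exp(eta) * upper_gamma(p + 1, chi) \
             + chi ** (p + 1) * np.exp(eta - chi) * np.log(chi)
    return 6.697e7 * A * xi * np.exp(-eta) * total / (kT**1.5 * chi)

# Placeholder fit coefficients a_0..a_5 and plasma conditions:
a = [1.0, -0.05, 0.01, 0.0, 0.0, 0.0]
print(rate_boltzmann_corrected(a, A=4.5e-14, xi=4, kT=175.0, E_ion=285.0, eta=-20.5))
```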
#### 5.2.3 Comparison with experiment
In the following, we compare our calculations on aluminum ions to the measurements of Greve _et al._[32]. In this experiment, aluminum and silicon ions were introduced in a well-diagnosed theta-pinch discharge by CO\({}_{2}\) laser driven ablation from solid targets. The authors interpreted the time histories of spectral lines from several ionization stages of these impurities, produced in the hot transient pinch plasma, in terms of effective ionization rate coefficients.
The measured electron densities and temperatures are reported in Table 1. We also give the reduced chemical potential (obtained from Eq. (15)) and the reduced ionization energy, as well as the ratio of the SP and EK IPD values to the ionization energy. We can see that \(\chi\) ranges in the interval \([1.5,2]\) and that \(\eta\simeq-20\), showing that the Boltzmann distribution describes the free electrons very well. The average ion charge is given by the FLYCHK code [24]. We have \(\overline{Z}\simeq 11\) for most ions. Due to the low density values, the IPD is negligible with respect to the ionization energy. As a result, the wavefunctions and cross-sections are not affected by plasma density effects.
The cross-sections are given by Eq. (19) where the \(a_{p}\) coefficients are obtained by a fit with the FAC or HULLAC cross-sections. In Fig. 13 we show the cross-section of ionization from the ground state of C-like aluminum to the \(2p\) states of B-like aluminum. FAC and HULLAC give similar cross-sections. Nevertheless, the two codes show a significant difference with the Lotz cross-section. Our fit of the FAC and HULLAC cross-sections is satisfactory. The obtained \(a_{p}\) values are then expected to provide rates in agreement with experimental results.
Similarly, Fig. 14 shows the cross-section of ionization from N-like aluminum. The difference between FAC and HULLAC is larger than in the C-like aluminum case. More interesting, the HULLAC results show a better agreement with the Lotz formula than with the FAC results.
The fit procedure provides the \(a_{p}\) coefficients and we are then able to calculate the rate coefficients within the Fermi-Dirac statistics (Eq. (20)) or Maxwell-Boltzmann approximation (Eq. (22)). In our cases, a suitable fit is obtained with polynomials of order 5.
Figure 15 represents the rate coefficients of different aluminum ions. Our calculations using HULLAC cross-sections are in good agreement with experimental results. This is also the case with the FAC code, except for N-like aluminum. The two calculations yield close rate coefficients. The difference between the rate deduced from the Lotz formula and the experimental value is small.
## 6 Conclusion and prospective
This work is devoted to the calculation of the ionization potential depression and the EII rates in plasmas. We focused our attention on aluminum plasmas at high density and astrophysical plasmas (CNO). Our calculations are compared to experimental results.
The Bingo code uses the robust classical molecular dynamics method. It allows one to calculate the ionization potential depression accounting for all charge-charge interactions in the particle motion, within the limits of classical mechanics. The choice of a regularized ion-electron potential, which removes the Coulomb divergence at short distances and accounts for some quantum effects, enables one to implement an ionization/recombination protocol to control the plasma ion charge distribution and the trapping of electrons in the ion wells. Contrary to widely used methods (EK, SP, FLYCHK...), it gives access to the ionization-energy distribution function, which accounts for the plasma perturbation and its fluctuating nature. Based on a statistical analysis of rare collisional events, the numerical determination of the ionization-energy distribution function is very expensive and can be done mainly for the most probable populations of ion charges. The simulation accounts for the ion dynamics but ignores the excited states and the radiative properties.
The IPD calculated within classical molecular dynamics (CMD) is compared to the models of Ecker-Kröll and Stewart-Pyatt. At high density the CMD and Ecker-Kröll IPDs are close. However, both calculations show a substantial discrepancy with experiment for the highest ion charge (\(\simeq\) 30 eV). At low temperatures the CMD approach agrees with the formula of Ecker-Kröll. When the temperature is increased the CMD IPD is closer to the formula of Stewart-Pyatt. It seems that the Ecker-Kröll and
| Ion | \(kT_{e}\) (eV) | \(N_{e}\) (cm\({}^{-3}\)) | \(\eta\) | \(\chi\) | \(\bar{Z}\) | SP-IPD/\(E_{z,g}\) | EK-IPD/\(E_{z,g}\) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Li-like | 225 | 3.2\(\times 10^{16}\) | -20.15 | 1.97 | 10.9 | 1.24\(\times 10^{-3}\) | 3.43\(\times 10^{-3}\) |
| Be-like | 235 | 2.7\(\times 10^{16}\) | -20.39 | 1.69 | 10.9 | 1.19\(\times 10^{-3}\) | 3.83\(\times 10^{-3}\) |
| B-like | 220 | 2.1\(\times 10^{16}\) | -20.54 | 1.5 | 10.9 | 1.18\(\times 10^{-3}\) | 4.61\(\times 10^{-3}\) |
| C-like | 175 | 1.5\(\times 10^{16}\) | -20.53 | 1.62 | 10.6 | 1.10\(\times 10^{-3}\) | 5.32\(\times 10^{-3}\) |
| N-like | 160 | 1.3\(\times 10^{16}\) | -20.54 | 1.50 | 10.4 | 1.09\(\times 10^{-3}\) | 6.25\(\times 10^{-3}\) |

Table 1: Plasma conditions in the experiment of Greve [32]. \(N_{e}\) in cm\({}^{-3}\), \(kT_{e}\) in eV. \(\eta\) and \(\chi\) are the reduced chemical potential and ionization energy, _i.e._, \(\eta=\mu/(kT_{e})\) and \(\chi=E_{z,g}/(kT_{e})\). \(\bar{Z}\) is the average ion charge. SP-IPD/\(E_{z,g}\) (EK-IPD/\(E_{z,g}\)) is the ratio of the IPD calculated with the formula of Stewart-Pyatt (Ecker-Kröll) to the ionization energy.
Stewart-Pyatt formulas are two limits of the CMD model.
We have calculated the cross-sections and the rate coefficients in plasmas at near-solid density by using the Lotz formula where the IPD is taken into account. It is clear that the continuum lowering has an important effect on ionization by electron impacts.
In our plasma conditions (temperature and density) the free electrons degeneracy has a small effect on the ionization rate, which means that the Maxwell-Boltzmann approximation is satisfactory.
In a second work we investigated plasmas at lower densities. For such plasmas, the IPD is negligible. Because the Lotz formula sometimes overestimates the cross-section, we introduce a new cross-section consisting in the product of the Lotz cross-section and a polynomial expansion whose variable is the ratio of the free electron energy to the ionization energy. The coefficients of the expansion are then adjusted in order to reach a good fit of the accurate cross-section given by two efficient atomic codes (FAC and HULLAC), in the DW approach. This new definition provides rate coefficients that are in better agreement with experimental values than is the Lotz formula.
FAC and HULLAC are integrated software packages giving the atomic structure and cross-sections for collisional and radiative processes. Both are well-adapted to describe multicharged ion plasmas with configurations involving many open subshells giving rise to complex atomic structure. The main difference between FAC and HULLAC codes is that the first one uses a self-consistent potential and the second one a parametric potential. Other methods (R-matrix and close coupling for instance) can provide accurate cross-sections. However they are not applicable to our cases due to large atomic level sets and to wide incident electron energy ranges.
In the case of the CNO ions the measured cross-sections lie between the FAC and HULLAC results. The largest difference with experiment is of the order of 16 % (for HULLAC, Oxygen case). The agreement
Figure 13: EII cross-section of C-like aluminum as a function of the energy of the incident electron. Density and temperature: 1.5\(\times\)10\({}^{16}\) cm\({}^{-3}\) and 175 eV (see Table 1). FAC and HULLAC curves: numerical results given by FAC and HULLAC codes, respectively; FAC-f and HULLAC-f: our calculations, with the \(a_{p}\) coefficients obtained by a fit of the new cross-section (Eq. (19)) with FAC or HULLAC cross-sections ; Lotz1 and Lotz2: Lotz cross-section with ionization energy given by FAC and HULLAC codes, respectively.
between the experiment and the FAC code is better (less than 10 %) than with HULLAC. The Lotz formula overestimates the CNO cross-sections.
Calculations of more accurate cross-sections, suitable for a comparison with experiments, are in progress. We are considering a larger set of initial states from which electron ionization occurs. The agreement between our calculations and experimental results should be improved if we take into account the ionization from the metastable states. In fact, the populations of these states are of the same order of magnitude as the population of the ground level, as shown in the experiment of Fogle _et al._[31]. The contribution of the metastable states will be taken into account in a forthcoming publication.
Our calculated ionization rates of aluminum at low density (\(\simeq 10^{16}\) electrons per cm\({}^{3}\)) are compared to measurements. We used the new cross-section defined above but restricted ourselves to ionization from the ground state to all allowed excited states of the final ion. Our results show a better agreement with experiment than the Lotz formula. The agreement with experiment would be better if the ionization from excited states was also taken into account. In this case the population fractions of these levels are needed.
## Appendix A: Alternative method for the calculation of the integral \(\int_{\chi}^{\infty}\frac{\ln(\epsilon)}{e^{\epsilon-\eta}+1}d\epsilon\)
Let \(H(\epsilon)\) be any function varying smoothly with energy \(\epsilon\) and \(f(\epsilon)\) the Fermi-Dirac distribution
\[f(\epsilon)=\frac{1}{e^{\epsilon-\eta}+1}. \tag{23}\]
Figure 14: EII cross-section of N-like aluminum as a function of the energy of the incident electron. Density and temperature: 1.3\(\times 10^{16}\) cm\({}^{-3}\) and 160 eV (see Ref. [32]). Legend, as in Fig. 13.
One has (see Ref. [33]):
\[\int_{\chi}^{\infty}H(\epsilon)f(\epsilon)d\epsilon= \int_{\chi}^{\eta}H(\epsilon)d\epsilon\] \[+ \sum_{m=1}^{\infty}(-1)^{m}\left[\int_{\chi}^{\eta}H(\epsilon)e^{m (\epsilon-\eta)}d\epsilon-\int_{\eta}^{\infty}H(\epsilon)e^{-m(\epsilon-\eta) }d\epsilon\right],\]
_i.e._
\[\int_{\chi}^{\infty}\frac{\ln(\epsilon)}{e^{\epsilon-\eta}+1}d \epsilon= \int_{\chi}^{\eta}\ln\epsilon d\epsilon\] \[+ \sum_{m=1}^{\infty}(-1)^{m}\left[\int_{\chi}^{\eta}\ln\epsilon~{} e^{m(\epsilon-\eta)}d\epsilon-\int_{\eta}^{\infty}\ln\epsilon~{}e^{-m(\epsilon-\eta) }d\epsilon\right],\]
which gives the result
\[\int_{\chi}^{\infty}\frac{\ln(\epsilon)}{e^{\epsilon-\eta}+1}d\epsilon=\int_{ \chi}^{\eta}\ln\epsilon d\epsilon+\sum_{m=1}^{\infty}(-1)^{m}\left[A_{m}-B_{ m}\right],\]
with
\[A_{m} = \int_{\chi}^{\eta}\ln\epsilon~{}e^{m(\epsilon-\eta)}d\epsilon\] \[= \frac{e^{-m\eta}}{m}\left[e^{\eta m}\ln\eta-e^{\chi m}\ln(\chi)+ E_{1}(-\eta m)-E_{1}(-\chi m)\right]\]
and
\[B_{m}=\int_{\eta}^{\infty}\ln\epsilon~{}e^{-m(\epsilon-\eta)}d\epsilon=\frac{ e^{\eta m}}{m}\left[E_{1}(\eta m)+e^{-\eta m}\ln\eta\right].\]
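As a quick numerical cross-check of this expansion (which assumes \(\eta>\chi\), i.e. a degenerate plasma), one can truncate the sum at finite order and compare with direct quadrature. In the Python/SciPy sketch below, \(A_{m}\) and \(B_{m}\) are evaluated by quadrature rather than with the closed forms, which avoids the principal-value subtleties of \(E_{1}\) at negative arguments; the values of \(\eta\) and \(\chi\) are placeholders.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expit

def lhs(eta, chi):
    """Direct quadrature of the integral of ln(eps)/(exp(eps-eta)+1) over [chi, inf)."""
    return quad(lambda e: np.log(e) * expit(eta - e), chi, np.inf)[0]

def rhs(eta, chi, M=30):
    """Expansion of Appendix A for H(eps) = ln(eps), truncated at order M (needs eta > chi > 0)."""
    total = quad(np.log, chi, eta)[0]
    for m in range(1, M + 1):
        A_m = quad(lambda e: np.log(e) * np.exp(m * (e - eta)), chi, eta)[0]
        B_m = quad(lambda e: np.log(e) * np.exp(-m * (e - eta)), eta, np.inf)[0]
        total += (-1) ** m * (A_m - B_m)
    return total

eta, chi = 8.0, 1.5   # placeholder degenerate-regime values
print(lhs(eta, chi), rhs(eta, chi))
```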
Figure 15: EII rate coefficient of aluminum ions versus ion charge. Density and temperature: see Table 1. | We calculate the cross-section of ionization by free-electron impacts in high-density plasmas. We show that the so-called ionization potential depression (IPD) strongly affects the magnitude of the cross-section in the high-density domain. We use the well-known IPD formulas of Stewart-Pyatt and Ecker-Kr\"oll. A more recent approach based on classical molecular dynamics simulation is also investigated. The latter provides an alternative way to calculate IPD values. At near-solid densities the effects of the free-electron degeneracy should be investigated. The rates are then calculated within the Fermi-Dirac statistics. We first use the semi-empirical formula of Lotz for ionization cross-section. The results may differ significantly from measured cross-sections or calculations with reliable atomic codes. Then, in a second step, we propose a new formula that combines the Lotz formula and a polynomial expansion in terms of the ratio of the energy of the incident electron and the |
2309.08793 | Fin-Fact: A Benchmark Dataset for Multimodal Financial Fact Checking and
Explanation Generation | Fact-checking in financial domain is under explored, and there is a shortage
of quality dataset in this domain. In this paper, we propose Fin-Fact, a
benchmark dataset for multimodal fact-checking within the financial domain.
Notably, it includes professional fact-checker annotations and justifications,
providing expertise and credibility. With its multimodal nature encompassing
both textual and visual content, Fin-Fact provides complementary information
sources to enhance factuality analysis. Its primary objective is combating
misinformation in finance, fostering transparency, and building trust in
financial reporting and news dissemination. By offering insightful
explanations, Fin-Fact empowers users, including domain experts and end-users,
to understand the reasoning behind fact-checking decisions, validating claim
credibility, and fostering trust in the fact-checking process. The Fin-Fact
dataset, along with our experimental codes is available at
https://github.com/IIT-DM/Fin-Fact/. | Aman Rangapur, Haoran Wang, Ling Jian, Kai Shu | 2023-09-15T22:24:00 | http://arxiv.org/abs/2309.08793v2 | # Fin-Fact: A Benchmark Dataset for Multimodal Financial Fact Checking and Explanation Generation
###### Abstract
Fact-checking in financial domain is under explored, and there is a shortage of quality dataset in this domain. In this paper, we propose Fin-Fact, a benchmark dataset for multimodal fact-checking within the financial domain. Notably, it includes professional fact-checker annotations and justifications, providing expertise and credibility. With its multimodal nature encompassing both textual and visual content, Fin-Fact provides complementary information sources to enhance factuality analysis. Its primary objective is combating misinformation in finance, fostering transparency, and building trust in financial reporting and news dissemination. By offering insightful explanations, Fin-Fact empowers users, including domain experts and end-users, to understand the reasoning behind fact-checking decisions, validating claim credibility, and fostering trust in the fact-checking process. The Fin-Fact dataset, along with our experimental codes is available at [https://github.com/IIT-DM/Fin-Fact/](https://github.com/IIT-DM/Fin-Fact/).
## 1 Introduction
In an era characterized by the rapid spread of misinformation and the proliferation of fake news, fact-checking has emerged as a critical tool for ensuring the accuracy and reliability of information (Saakyan et al., 2021; Wadden et al., 2020; Sarrouti et al., 2021). The emergence of social media platforms and the wide accessibility of multimodal content have intensified the complexities linked with verifying the accuracy of assertions (Mishra et al., 2022). Notably, the financial sector introduces its distinctive array of difficulties, given that precise and timely data plays a pivotal role in enabling well-informed investment choices and upholding market stability. Additionally, financial fact-checking encounters specific hurdles, such as the need for customized data to address unique requirements and nuances. Furthermore, the manipulation of images to exploit visualization bias presents another significant challenge in the verification process (Mansoor and Harrison, 2018).
The rise of misinformation in the financial domain has become a pressing concern, with potential impacts on public trust, investor decisions, and overall market stability (Kogan et al., 2019; Clarke et al., Forthcoming; Zhi et al., 2021; Liu and Moss, 2022). To counter the spread of misleading information, fact-checking methods have gained importance in financial reporting and analysis (Zhi et al., 2021; Mohankumar et al., 2023). However, the development of effective and dependable models in this domain has been hindered by the lack of suitable benchmark datasets that accurately represent the intricacies of financial information and context.
In recent years, there has been notable progress in creating various datasets for fact-checking (Wadden et al., 2020; Wadden and Lo, 2021; Wadden et al., 2022; Sarrouti et al., 2021; Saakyan et al., 2021). However, there is a noticeable gap in addressing the unique demands of fact-checking within the financial domain. Financial fact-checking faces several significant challenges.
Figure 1: A demonstration of comprehensive multimodal fact-checking and the creation of explanations.
**Firstly**, it requires meticulously curated data that can encompass the intricate nuances of financial discourse. Financial documents and journalistic pieces often employ specialized language that differs from conventional structures. However, existing datasets frequently lack comprehensive coverage of financial news articles, and the absence of expert annotations diminishes the reliability of the data. **Secondly**, financial data is highly context-sensitive and constantly evolving, emphasizing the need for a dataset that can accurately capture the dynamic nature of financial markets. **Lastly**, the landscape of financial fact-checking introduces the challenge of visualization bias, where deliberate manipulation of visual content can shape perception and distort the accuracy of claims.
In this paper, we tackle the challenge of compiling, annotating, and refining a comprehensive corpus of financial texts that faithfully represents financial reporting, accounting methodologies, and market fluctuations. The realms of financial fact-checking and explanation generation present distinct obstacles that require specialized approaches. The necessity for tailored data capable of navigating financial terminology and intricate visual elements underscores the interdisciplinary nature inherent in this research endeavor. Figure 1 illustrates comprehensive multimodal fact-checking and the creation of explanations, while Table 1 displays an example instance from the corpus.
Presenting Fin-Fact, a novel benchmark dataset specifically designed for multimodal financial fact-checking and explanation generation. Our key contributions and discoveries are as follows:
* We introduce Fin-Fact, the inaugural benchmark dataset designed for verifying claims within the financial domain. This dataset encompasses 3,562 claims, each accompanied by expert fact-checkers' justifications.
* Fin-Fact holds the potential for explanation generation, facilitated by the inclusion of authoritative ruling comments from skilled fact-checking professionals.
* Our investigation reveals performance challenges in applying the state-of-the-art models to Fin-Fact in open-domain contexts, underlining the need for improved generalization.
## 2 Related Work
**Fact Checking.** Significant efforts have been dedicated to creating fact-checking datasets for automated fact-checking systems (Wadden et al., 2020, 2022; Thorne et al., 2018; Saakyan et al., 2021). Previous studies have predominantly focused on predicting the accuracy of claims from diverse sources. While large-scale datasets from various domains have been utilized (Gupta and Srikumar, 2021), they might not be suitable for identifying misinformation related to financial matters due to domain-specific disparities.
Although general-content misinformation datasets are readily accessible, only a limited number of datasets pertain to online financial misinformation (Clarke et al., 2020; Kogan et al., 2019; Zhi et al., 2021; Liu and Moss, 2022; Zhang et al., 2022; Boehm and Kroner, 2023). Current financial misinformation datasets lack clear labeling and justifications, raising concerns about result reliability. In contrast, the Fin-Fact dataset is distinct with genuine data and a multimodal structure, combining text and images to encompass a wide range of financial information. Additionally, it includes expert fact-checker comments, enabling comprehensive explanations by models.
**Explanation Generation.** Explanation generation plays a pivotal role in facilitating human comprehension of claim credibility. It involves leveraging external knowledge graphs to create semantic traces originating from the claim itself (Gad-Elrab et al., 2019; Li et al., 2020; Sarrouti et al., 2021). These semantic traces function as explanations that substantiate the veracity of claims. This approach offers valuable insights into the rationale behind the model's decision-making, thereby fostering trust. Moreover, the process of explanation generation relies on drawing evidence from diverse sources (Thorne et al., 2018; Hanselowski et al., 2019; Fan et al., 2020) to validate claims. However,
| Field | Value |
| --- | --- |
| **Claim** | Student debt relief is taxable by the Internal Revenue Service. |
| Author | Jeff Cercone |
| Posted | 08/30/2022 |
| **Justification** | The up to $20,000 of federal student loan forgiveness announced by Joe Biden won't... |
| **Evidence** | The post shows a screenshot of an IRS web page about canceled debt, with this sentence... |
| **Image** | https://politifact.com/photos/2a603743e.jpg |
| Caption | President Joe Biden speaks about student loan debt forgiveness... |
| Visual Bias | 0 |
| Issues | Debt, Taxes |
| **Label** | True |

Table 1: An example from Fin-Fact
this evidence is frequently comprised of isolated sentences extracted from extensive collections of documents, which can make it challenging for humans to interpret the broader context. To effectively generate explanations, a high-quality dataset annotated by humans is essential. This paper addresses the need for such a dataset.
## 3 The Fin-Fact Dataset
The Fin-Fact dataset presents a diverse array of labels that enhance the depth of analysis when evaluating financial claims. These labels contribute a multifaceted perspective to the fact-checking process, augmenting its analytical capabilities.
At the core of the dataset are the 'Claim' and 'Author' labels, which respectively denote the primary assertion and its originating source. The inclusion of the 'Posted Date' attribute introduces temporal context, while the 'Sci-digest' label provides the summary of the claim. Further contextualization is achieved through the 'Justification' label, elucidating the accuracy of the claim, and the 'Evidence' label, which presents corroborative information connected through 'Evidence link.' The dataset also acknowledges the visual dimension through 'Image link' and 'Image Caption.' Critically, the 'Visualisation Bias Label' evaluates potential biases linked to images. Complexities inherent in the claims are highlighted by the 'Issues' label, while the ultimate assessment is provided by the "Claim Label", offering a definitive classification of "True", "False", or "NEI (Not Enough Information)".
By amalgamating these labels, the dataset establishes a comprehensive and multidimensional resource. This resource accommodates textual, temporal, evidentiary, and visual components, all of which are imperative for a thorough evaluation of claims during the fact-checking process.
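For illustration, a single record can be pictured as the JSON-like structure below; the key names are only indicative of the labels described above and may differ from the exact field names used in the released files.

```python
import json

# Hypothetical record layout mirroring the labels described above; the actual
# key names in the released Fin-Fact files may differ.
record = {
    "claim": "Student debt relief is taxable by the Internal Revenue Service.",
    "author": "Jeff Cercone",
    "posted_date": "08/30/2022",
    "sci_digest": "Short summary of the claim.",
    "justification": "Why the fact-checkers ruled the way they did.",
    "evidence": ["Sentence-level evidence supporting the verdict."],
    "evidence_link": ["https://example.org/source"],          # placeholder URL
    "image_link": "https://politifact.com/photos/2a603743e.jpg",
    "image_caption": "President Joe Biden speaks about student loan debt forgiveness.",
    "visualisation_bias_label": 0,
    "issues": ["Debt", "Taxes"],
    "claim_label": "True",   # one of "True", "False", "NEI"
}
print(json.dumps(record, indent=2))
```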
### Data Collection
PolitiFact1 and FactCheck2 are prominent online platforms dedicated to countering the spread of false information. These platforms engage skilled fact-checkers to meticulously analyze and verify individual claims, subsequently producing articles that offer their conclusions supported by relevant evidence. In our study, we leveraged these platforms as our primary sources of data. Specifically, we devised a comprehensive process to extract essential information from these platforms.
Footnote 1: [http://politifact.com/](http://politifact.com/)
Footnote 2: [http://factcheck.org](http://factcheck.org)
To elaborate, we devised a systematic process to gather essential information from PolitiFact and FactCheck. This encompassed the extraction of text-based claims and the assignment of corresponding truthfulness labels. Moreover, we retrieved both textual and visual evidence, along with their associated links, which contributed substantively to the assessment of claim accuracy.
It's noteworthy that the initial claims were collected by journalists affiliated with these platforms. These claims originated from diverse sources, including online speeches, public statements, news articles, and social media posts. Importantly, the fact-checkers from these platforms played a pivotal role by providing truthfulness labels, pertinent evidence, references to corroborating sources, and the articles delivering their final verdict. This comprehensive approach ensured the thorough and reliable collection of data, reinforcing the credibility of our assessment of claim accuracy.
### Dataset Statistics
The Fin-Fact dataset is an encompassing compilation of claims within the financial domain, spanning diverse sectors such as Income, Finance, Economy, Budget, Taxes, and Debt, as visualized in Figure 2. This dataset is meticulously crafted, comprising a total of 3,562 claims, curated to encapsulate the intricacies inherent in financial discourse.
In the Fin-Fact dataset, claims are categorized into three labels: "True", "False", and "NEI (Not Enough Information)" representing the veracity of each claim in the financial domain. The dataset contains 1,807 'True' claims that are verified as accurate, 1,315 'False' claims that have been proven inaccurate through fact-checking procedures, and 440 'NEI' instances where there is insufficient ev
Figure 2: Diverse sectors within the Fin-Fact dataset.
idence to make a determination. With its comprehensive span across a variety of claims, diverse sectors, and an equitable distribution of labels, the Fin-Fact dataset stands as a robust cornerstone for the development, assessment, and progression of fact-checking models in the domain of finance.
## 4 Experimental Results
In this series of experiments, the primary focus revolved around evaluating the accuracy of Natural Language Inference (NLI) models for fact-checking tasks. The assessment encompassed a range of prominent models, including ELECTRA Clark et al. (2020), BART Lewis et al. (2020), RoBERTa Liu et al. (2019), and GPT-2 Radford et al. (2019). Each model underwent scrutiny using the Fin-Fact dataset, enabling an assessment of their effectiveness in distinguishing financial claims. The outcomes of these fact-checking endeavors provided thought-provoking insights: ELECTRA demonstrated 29%, BART-Large achieved 33%, RoBERTa-Large showcased 32%, and GPT-2 emerged as the leader with an accuracy of 43% as shown in Table 2. These findings underscore the intricate challenges posed by financial fact-checking, with models displaying varying degrees of performance within this domain. Figure 3 illustrates the graph that compares the performance of ANLI models on Fin-Fact.
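One way to reproduce this kind of evaluation is to run an off-the-shelf MNLI checkpoint on (evidence, claim) pairs and map entailment/contradiction/neutral onto True/False/NEI. The sketch below uses the public roberta-large-mnli checkpoint purely as an example; it is not the exact pipeline behind Table 2, and the label mapping should be verified against model.config.id2label.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "roberta-large-mnli"   # any MNLI-finetuned checkpoint can be substituted
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()

def verify(evidence: str, claim: str) -> str:
    inputs = tok(evidence, claim, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    label = model.config.id2label[logits.argmax(-1).item()].upper()
    return {"ENTAILMENT": "True", "CONTRADICTION": "False"}.get(label, "NEI")

print(verify("The forgiven federal student loans will not be taxed by the IRS.",
             "Student debt relief is taxable by the Internal Revenue Service."))
```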
The final phase of experimentation delved into the intricate realm of generating explanations for the claims. For each claim in the dataset, we employed the BART model to generate explanations, extracting insights that highlight the key factors contributing to the determination of claim accuracy. These explanations were obtained using the justifications provided with the claim. To quantitatively evaluate the quality of these explanations, we leveraged the GLUE and ROUGE metrics, as shown in Table 3. The Evidence label in the dataset served as the ground truth, enabling us to assess the alignment between the generated explanations and the human-provided justifications.
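The ROUGE side of this evaluation can be reproduced with the standard rouge-score package, scoring each generated explanation against the evidence text used as the reference; a minimal sketch with placeholder strings:

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rouge3"], use_stemmer=True)

reference = "The forgiven federal student loans will not be taxed by the IRS."       # Evidence
generated = "Federal student loan forgiveness is not taxable according to the IRS."  # BART output

for key, score in scorer.score(reference, generated).items():
    print(key, round(score.fmeasure, 3))
```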
While our primary focus was on evaluating NLI models for fact-checking and explanation generation, the Fin-Fact dataset offers versatility for various applications. Researchers in multimodal machine learning can leverage it as a valuable benchmark. With its unique combination of textual and visual financial data, Fin-Fact provides an ideal testbed for experimenting with state-of-the-art multimodal models. Users are encouraged to assess and enhance these models, contributing to advancements in this critical area.
## 5 Conclusion and Future Work
The emergence of Fin-Fact signifies a pivotal advancement in the quest to counter misinformation within the financial domain. By providing expert annotations, comprehensive claims, and the potential for explanatory insights, Fin-Fact empowers fact-checking systems to achieve heightened levels of precision and transparency. Its interdisciplinary design tackles financial language intricacies and evolving contextual complexities. This construct serves as a robust cornerstone for more effective and reliable fact-checking processes.
In the realm of finance, Fin-Fact fosters trust, bolsters credibility, and aids fact-checking. We're committed to addressing visualization bias, a prominent challenge, and exploring the impact of manipulated images on claim interpretation. We plan to extract quantitative insights from media, enhancing the dataset with tables and charts. Our goal is to release an improved dataset, offering a more comprehensive resource for researchers, fact-checkers, and decision-makers.
| Model | Precision | Recall | F1 Score | Accuracy |
| --- | --- | --- | --- | --- |
| BART | 0.369 | 0.344 | 0.300 | 0.331 |
| RoBERTa | 0.393 | 0.346 | 0.285 | 0.318 |
| ELECTRA | 0.351 | 0.330 | 0.276 | 0.287 |
| GPT-2 | 0.347 | 0.337 | 0.312 | 0.430 |

Table 2: Performance of Fin-fact on ANLI models.
Figure 3: Comparison of Scores for ANLI Models.
| Model | ROUGE-1 | ROUGE-2 | ROUGE-3 | GLUE |
| --- | --- | --- | --- | --- |
| BART | 0.84 | 0.63 | 0.46 | 0.062 |
Table 3: Performance scores of Fin-fact on BART model for explaination generation. | 金融分野における事実確認はまだ未調査であり、この分野の質の高いデータ不足があります。この論文では、Fin-Factという、金融分野における多様な事実確認のためのベンチマークデータセットを提案します。特に、専門の事実確認者による注釈と説明が含まれており、専門的な知識と信頼性を提供しています。Fin-Factは、テキストと視覚コンテンツを含む多様な性質を持ち、事実確認分析を支援する包括的な情報源を提供します。本データセットの主な目的は、金融分野における誤情報に対処すること、透明性向上、そして金融報告やニュースの配信における信頼性を構築することです。Fin-Factは、ユーザー、特に専門家やエンドユーザーに対して、事実確認の判断の理由を理解し、主張の信頼性を検証し、事実確認プロセスにおける信頼を高めるための説明を提供します。Fin-Factデータセットと実験コードは、https://github.com/IIT-DM/Fin- |
2308.16684 | Everyone Can Attack: Repurpose Lossy Compression as a Natural Backdoor
Attack | The vulnerabilities to backdoor attacks have recently threatened the
trustworthiness of machine learning models in practical applications.
Conventional wisdom suggests that not everyone can be an attacker since the
process of designing the trigger generation algorithm often involves
significant effort and extensive experimentation to ensure the attack's
stealthiness and effectiveness. Alternatively, this paper shows that there
exists a more severe backdoor threat: anyone can exploit an easily-accessible
algorithm for silent backdoor attacks. Specifically, this attacker can employ
the widely-used lossy image compression from a plethora of compression tools to
effortlessly inject a trigger pattern into an image without leaving any
noticeable trace; i.e., the generated triggers are natural artifacts. One does
not require extensive knowledge to click on the "convert" or "save as" button
while using tools for lossy image compression. Via this attack, the adversary
does not need to design a trigger generator as seen in prior works and only
requires poisoning the data. Empirically, the proposed attack consistently
achieves 100% attack success rate in several benchmark datasets such as MNIST,
CIFAR-10, GTSRB and CelebA. More significantly, the proposed attack can still
achieve almost 100% attack success rate with very small (approximately 10%)
poisoning rates in the clean label setting. The generated trigger of the
proposed attack using one lossy compression algorithm is also transferable
across other related compression algorithms, exacerbating the severity of this
backdoor threat. This work takes another crucial step toward understanding the
extensive risks of backdoor attacks in practice, urging practitioners to
investigate similar attacks and relevant backdoor mitigation methods. | Sze Jue Yang, Quang Nguyen, Chee Seng Chan, Khoa D. Doan | 2023-08-31T12:38:29 | http://arxiv.org/abs/2308.16684v2 | # Everyone Can Attack: Repurpose Lossy Compression as a Natural Backdoor Attack
###### Abstract
The vulnerabilities to backdoor attacks have recently threatened the trustworthiness of machine learning models in practical applications. Conventional wisdom suggests that not everyone can be an attacker since the process of designing the trigger generation algorithm often involves significant effort and extensive experimentation to ensure the attack's stealthiness and effectiveness. Alternatively, this paper shows that there exists a more severe backdoor threat: anyone can exploit an easily-accessible algorithm for silent backdoor attacks. Specifically, this attacker can employ the widely-used lossy image compression from a plethora of compression tools to effortlessly inject a trigger pattern into an image without leaving any noticeable trace; i.e., the generated triggers are natural artifacts. One does not require extensive knowledge to click on the "convert" or "save as" button while using tools for lossy image compression. Via this attack, the adversary does not need to design a trigger generator as seen in prior works and only requires poisoning the data. Empirically, the proposed attack consistently achieves 100% attack success rate in several benchmark datasets such as MNIST, CIFAR-10, GTSRB and CelebA. More significantly, the proposed attack can still achieve almost 100% attack success rate with very small (approximately 10%) poisoning rates in the clean label setting. The generated trigger of the proposed attack using one lossy compression algorithm is also transferable across other related compression algorithms, exacerbating the severity of this backdoor threat. This work takes another crucial step toward understanding the extensive risks of backdoor attacks in practice, urging practitioners to investigate similar attacks and relevant backdoor mitigation methods.
## 1 Introduction
Machine learning, especially deep neural networks (DNNs), has gained popularity due to their superior performance in various applications and tasks such as computer vision Krizhevsky et al. (2012); He et al. (2016), natural language processing Devlin et al. (2019) and healthcare Nwadike et al. (2021). The emergence of DNNs in high-stakes applications has raised security concerns about their vulnerabilities to malicious attacks. Prior research has shown that DNNs are vulnerable to a wide range of attacks, including adversarial examples Carlini and Wagner (2017); Madry et al. (2018), poisoning attacks Munoz-Gonzalez et al. (2017); Shafahi et al. (2018) and backdoor attacks Bagdasaryan et al. (2020); Gu et al. (2019). Backdoor attacks impose serious security threats by injecting a stealthy and malicious trigger onto a DNN by poisoning the data or manipulating the training process Liu et al. (2017, 2018). The backdoored model will behave normally with clean inputs but behave maliciously whenever the trigger is present in the input. For example, an autonomous vehicle system will normally stop when it encounters a "stop" sign, but when a trigger (i.e., a yellow sticker) is present on the sign, the system will misclassify it as "speed limit of 110" (the attack target), causing the vehicle to speed up instead of stopping. This scenario demonstrates the severity of backdoor attacks in autonomous vehicle systems, for this malicious behavior may lead to serious accidents.
Early works expose the security vulnerability of backdoor attacks via poisoning the training data with hand-crafted but visible triggers Gu et al. (2019); Liu et al. (2020); Barni et al. (2019); when
a user trains a model with the poisoned datasets, the backdoor is successfully implanted and the model is under the control of the adversary. Since then, backdoor research has shown various threat models where the adversary can secretly launch an attack. For example, the adversary can design imperceptible triggers to fool both human and machine inspections Nguyen and Tran (2021); Saha et al. (2020), can arbitrarily control the attack target class during inference Doan et al. (2022); Shokri et al. (2020), can hide the backdoor inside model architectures instead of training data Bober-Irizar et al. (2023); Clifford et al. (2022), or can modify the pre-trained weights to craft the attack Rakin et al. (2020); Dumford and Scheirer (2020). While the existing works show undeniably harmful types of backdoor attacks, these attacks also demand significant effort and extensive experimentation from the adversary to ensure the attack's stealthiness and effectiveness. For example, WaNet Nguyen and Tran (2021) requires a special "noise"-mode training to ensure its effectiveness, while MAB Bober-Irizar et al. (2023) requires a special design of a trigger-detecting layer for its attack. Even the classical patch-based Gu et al. (2019) or blending-based Liu et al. (2020) triggers require sufficient knowledge of image editing and access to image-editing tools.
This paper continues the quest of unveiling zero-day vulnerabilities in backdoor-attack research by showing that everyone, not only the competent adversary, can launch backdoor attacks. Specifically, this adversary can re-purpose the widely used algorithm, lossy image compression, and its natural artifacts, a byproduct of the objective to retain the visual similarity between the compressed and original images, to mount an imperceptible and highly-effective backdoor attack for both the dirty-label and clean-label settings. Lossy image compression algorithms are ubiquitous and easily-accessible tools to transfer multimedia resources on the internet or display graphical content in browsers or smart-device applications. Consequently, crafting such an attack requires little or no effort or knowledge of machine learning. An overview of our method is shown in Figure 1. Our **contributions** are summarized below:
* We make the case that everyone can mount a powerful and stealthy backdoor attack. This sounds the alarm of yet another previously-undiscovered security threat of backdoor attack in practice.
* We repurpose a widely-used algorithm, i.e., lossy image compression, to launch this easily-accessible backdoor attack process. The proposed attack process requires little effort to craft the imperceptible trigger and a low poisoning rate to implant the backdoor even in the clean-label setting (which often necessitates a conspicuously higher data-poisoning rate).
* We empirically demonstrate the effectiveness of the proposed attack process on several benchmark datasets and backdoor settings. We show that the proposed method can achieve
Figure 1: An overview of our attack. In the **data poisoning** stage, we apply lossy image compression to the original images to create poisoned images. During the **training** stage, the clean images and poisoned images are the inputs to the model. At **inference**, the model behaves normally when we input a clean image, but the backdoor is triggered when the input is a lossy-compressed image.
high attack success rates in both dirty-label and clean-label settings, can bypass most existing defensive measures, and can transfer across related compression algorithms.
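To make the data-poisoning step of Figure 1 concrete, the sketch below shows one possible instantiation of the trigger, using aggressive JPEG compression through Pillow; the quality setting, the poisoning rate and the overall structure are illustrative only and are not the exact configuration used in our experiments.

```python
import io
import random
from PIL import Image

def compress_trigger(img: Image.Image, quality: int = 10) -> Image.Image:
    """Re-encode the image with lossy JPEG; the compression artifacts act as the trigger."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def poison(dataset, target_class, rate=0.1, clean_label=False, quality=10):
    """Return a poisoned copy of a list of (PIL image, label) pairs."""
    out = []
    for img, label in dataset:
        eligible = (label == target_class) if clean_label else True
        if eligible and random.random() < rate:
            new_label = label if clean_label else target_class   # clean-label keeps the label
            out.append((compress_trigger(img, quality), new_label))
        else:
            out.append((img, label))
    return out
```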
## 2 Related Works
### Backdoor Attacks
Previous works have formulated backdoor attacks as a process of introducing malicious behaviors to a model, \(f_{\theta}\), parameterized by \(\theta\), trained on a dataset, \(\mathcal{D}\). This process involves a transformation function, denoted as \(T(\cdot)\) that injects malicious trigger onto the input, \(x\) and form an association with the desired model output, namely the target class, \(y_{t}\)Liu et al. (2018); Gu et al. (2019); Bagdasaryan and Shmatikov (2021). Currently, the main methodologies to inject this malicious behavior into the model are contaminating the training data Chen et al. (2017); Liu et al. (2018), altering the training algorithm Bagdasaryan and Shmatikov (2021), or overwriting/retraining the model parameters Kurita et al. (2020); Dumford and Scheirer (2020).
Discovering zero-day vulnerabilities in DNNs has always been a focus. For instance, BadNet Gu et al. (2019) showed that image recognition systems are susceptible to backdoor attacks, by injecting a malicious patch onto the image and changing its label to a predefined target class. Then, HTBA Saha et al. (2020) showed that without changing the labels of a dataset, an adversary can still attack by forming a strong association between a trigger patch and the ground-truth class. SIG Barni et al. (2019) proposed to superimpose sinusoidal waves onto images, whereas ReFool Liu et al. (2020) proposed to create a trigger pattern from the reflection of an image, which is mostly imperceptible to human vision. Besides, WaNet Nguyen and Tran (2021) and LIRA Doan et al. (2021) showed that a trigger pattern could be made almost invisible to human perception by optimizing the trigger generation function, shedding light on yet another way of attacking. To further elaborate the severity of backdoor attacks, MAB Bober-Irizar et al. (2023) exploits the model's architecture by adding a pooling layer that is activated by a trigger pattern, showing a new paradigm of attacking DNNs.
While these methods are effective, extensive experiments and manual interventions are required to design the triggers and verify their effectiveness. In addition, due to the complex nature of the trigger generation process, they are often only available to adversaries with a certain degree of understanding of backdoor attacks. These constraints limit the practicality and severity of backdoor attacks, as they are only applicable to a small pool of adversaries. In contrast, we expose a zero-day vulnerability in backdoor attacks where everyone, not only knowledgeable adversaries, could cause severe damage to DNNs with little to no effort needed to design the trigger generator.
### Backdoor Defense
Due to the emergence of backdoor attacks, another line of research focusing on preventing and mitigating backdoor attacks has also gained attention. Several works have been developed to counter backdoor attacks such as backdoor detection Chen et al. (2018); Tran et al. (2018); Gao et al. (2019), input mitigation Liu et al. (2017); Li et al. (2020) and model mitigation Liu et al. (2018); Wang et al. (2019).
Detection-based backdoor defense methods aim to detect backdoored samples by analyzing the model's behavior. For example, Activation Clustering Chen et al. (2018) detects the model's malicious behavior by analyzing the activation values in the latent space and STRIP Gao et al. (2019) analyzes the entropy of the model's output on perturbed inputs. Input mitigation methods attempt to remove the trigger of inputs by altering or filtering the image such that the model will retain its normal behavior even when it is backdoored (i.e. it suppresses and deactivates the backdoor).
In contrast, model mitigation methods mitigate the backdoor attacks before deployment. Fine pruning Liu et al. (2018) combines both fine-tuning and pruning to eliminate redundant weights or neurons in DNNs with a training set, hoping to mitigate the injected backdoor. Besides, Neural Cleanse Wang et al. (2019) detects whether a trained model has been backdoored by searching for potential patterns that could trigger the backdoor.
## 3 Threat Model
Ultimately, a threat model that has all the following five characteristics is the worst nightmare of a backdoor attack.
**Dataset Access**: It is defined as the ability to apply modifications onto the model's training dataset. We consider dataset access hazardous, since deep learning projects usually involve data labeling and annotations, creating opportunities for adversaries to act maliciously on the data.
**Black-box Model Access**: It assumes that adversaries are only required to access/poison the data without involvement in the model's training process. In practice, this assumption holds as it is usually impossible for an adversary to be involved in the model's training process, since the training recipe and the model's architecture are usually trade secrets. On the contrary, since most DNNs are pre-trained using publicly available datasets, it is much easier to poison a public dataset than to access the model.
**Accessible**: It refers to the fact that everyone could become an adversary, even without prior knowledge of machine learning. Specifically, this is the deadly sin of backdoor attacks, as it means that anyone can re-purpose generic tools to launch a silent backdoor attack. To the best of our knowledge, most if not all of the prior works have low accessibility, where professional knowledge is a must to drive the complicated trigger generation process needed to launch an effective attack.
**Natural**: This refers to initiating an attack with a mechanism that is not meant for malicious purposes. The goal of "natural" is to reduce the suspicion of backdoor attacks, as the attack objective hides behind the original intent of the mechanism.
**Stealthy**: It refers to the ability to hide the trigger pattern and backdoor attack from human inspection (i.e. poisoned images have high visual similarity to the clean images). Also, it is desirable for an attack to be stealthy against machine inspection, i.e., defensive algorithms. However, bypassing all existing defensive algorithms is extremely challenging and even impossible in the backdoor domain Li et al. (2020).
On the whole, fortunately or unfortunately, Table 1 shows that this "exorcist" threat model had not been discovered until now. In the next section, we show that the widely-used lossy image compression can be re-purposed as a natural backdoor attack. Also, since image compression is common and countless compression tools are available online/offline, everyone can easily launch a backdoor attack. This discovery takes another crucial step toward understanding the extensive risks of backdoor attacks in practice.
## 4 Methodology
### Problem Formulation
Consider a supervised image classification task, where a model, \(f_{\theta}\) maps the input domain, \(\mathcal{X}\) onto the target classes, \(\mathcal{C}\), where \(\theta\) is the trainable parameters: \(f_{\theta}:\mathcal{X}\rightarrow\mathcal{C}\). The main objective is to learn \(\theta\) from the training dataset \(\mathcal{D}=\{(x_{i},y_{i}):x_{i}\in\mathcal{X},y_{i}\in\mathcal{C},i=1,\cdots,N\}\).
\begin{table}
\begin{tabular}{l||c|c|c|c|c} \hline \hline Method & Dataset Access & Black-box Model Access & Accessible & Natural & Stealthy \\ \hline \hline BadNet & ✓ & ✓ & ✗ & ✗ & ✗ \\ WaNet & ✓ & ✗ & ✗ & ✗ & ✓ \\ LIRA & ✓ & ✗ & ✗ & ✗ & ✓ \\ \hline \hline
**Ours** & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison to other methods. “✓” indicates the attribute is present, while “✗” indicates the attribute is missing.

In backdoor attacks, a model, \(f_{\theta}\), is trained with the combination of both clean and poisoned subsets of \(\mathcal{D}\). Technically, in order to create a poisoned subset from the dataset, \(\mathcal{D}\), a clean sample with corresponding label, \((x,y)\), is transformed into a backdoor sample \((T(x),\eta(y))\), where \(T\) is a backdoor transformation function that converts a benign input, \(x\), into a poisoned input, \(\hat{x}\), and \(\eta\) is a target transform function, which converts an original class to other target classes. When training \(f_{\theta}\) with clean and poisoned samples, the behavior of \(f_{\theta}\) is altered such that:
\[f_{\theta}(x)=y,\quad f_{\theta}(T(x))=\eta(y), \tag{1}\]
for every pair of clean data \(x\in\mathcal{X}\) and its corresponding label, \(y\in\mathcal{C}\).
Generally, there are three commonly studied backdoor attacks: (i) _all-to-one_, (ii) _all-to-all_ and (iii) _clean label_. In **all-to-one attack**, the true label is changed to a constant target, \(\eta(y)=c,c\in\mathcal{C}\); while in **all-to-all attack**, the true label is one-shifted, \(\eta(y)=(y+1)\,mod\,|\mathcal{C}|,y\in\mathcal{C}\). In contrast, **clean label attacks** do not change the true label, and only apply \(T\) onto the target class' images, with the hope that the trigger pattern will create a strong association to the true label, causing misclassification when the trigger is applied to images from other classes, i.e. \(f_{\theta}(T(x_{c_{1}}))=y_{1}\), \(f_{\theta}(T(x_{c_{2}}))=y_{1}\) where \(c_{1}\neq c_{2};c_{1},c_{2}\in\mathcal{C}\).
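To make the three settings concrete, the short sketch below spells out the target transform \(\eta\) in each case; the function names and the Python rendering are our own illustration rather than code from any of the cited works.

```python
from typing import Callable

def all_to_one(target_class: int) -> Callable[[int], int]:
    """All-to-one: every poisoned sample is relabeled to one fixed target class."""
    return lambda y: target_class

def all_to_all(num_classes: int) -> Callable[[int], int]:
    """All-to-all: the true label is one-shifted modulo the number of classes."""
    return lambda y: (y + 1) % num_classes

def clean_label(y: int) -> int:
    """Clean-label: labels are untouched; only images of the target class receive the trigger."""
    return y
```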
Existing works have focused on engineering an effective trigger transformation function to achieve a high attack success rate and stealthiness. However, this process involves extensive design and engineering, and is therefore very time-consuming and resource-intensive. Moreover, prior works also assume that the adversaries have access to the data and the model's training process. This assumption places constraints on the threat model, as it does not reflect real-world scenarios. For instance, the adversaries usually have little to no access to the model's training process.
### Trigger Generation
This paper shows a zero-day vulnerability where anyone can exploit any easily-accessible algorithm for potential silent backdoor attacks. As such, an attacker does not need to engineer a trigger generator as in prior works Gu et al. (2019); Nguyen and Tran (2021); Doan et al. (2021) and only requires poisoning the data. Specifically, we show that lossy image compression can be re-purposed as the transformation function, \(T\) as it fulfills the following criteria: (i) _accessible_, (ii) _natural_ and (iii) _stealthy_, as shown in Table 1.
**Accessible:** We show that by re-purposing the lossy image compression algorithm, everyone could now become an adversary as easily as clicking a "convert" or "save as" button. This is because lossy image compression is widely accessible via a plethora of compression tools on the Internet, such as PNG to JPEG converter, or in a local machine (e.g. MS Paint or Adobe Photoshop). As such, this shows that little or no effort or knowledge of machine learning is required to launch a backdoor attack successfully. To the best of our knowledge, this is the first work that investigates and attains the accessibility of backdoor attacks.
**Natural:** The idea of lossy image compression algorithms is to compress the image information in the chrominance channel where human visual systems are naturally less sensitive Bull (2014). As such, we could view the byproduct of lossy image compression as a natural backdoor trigger. That is, to launch a backdoor attack, an adversary can maliciously inject the natural artifacts as a trigger pattern into an image without leaving any noticeable trace.
Figure 2: Residuals of WaNet, LIRA and our method. Our method has a larger magnitude of residuals compared to both WaNet and LIRA. The lower the quality of the compressed image, the stronger the magnitude of the residuals and the higher the compression rate. Note that the residuals of our method are _natural artifacts_ generated by lossy image compression algorithms.
**Stealthy**: The original goal of lossy image compression is to reduce image size while preserving the image content visually. Naturally, this ensures the trigger's imperceptibility; hence, we can guarantee the stealthiness of our trigger visually. In terms of machine inspection, we show in Sec. 5.3 that our proposal is resilient against some popular backdoor defensive algorithms.
Accordingly, our embarrassingly simple, but deadly threat model can be formulated as follows:
\[T(x)=C_{p}(x) \tag{2}\]
where \(C_{p}(\cdot)\) could be any publicly available lossy image compression algorithm, indexed by \(p\in\{1,\cdots,n\}\). We choose JPEG compression Lam & Goodman (2000) and WEBP Google (2010) in this paper, as they are among the most widely used.
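As a minimal sketch of Eq. (2) (not the authors' released code), the trigger can be generated by a single encode-decode round trip through a lossy codec; Pillow is one of the libraries used later in Sec. 5.1, and the quality value here is only an illustrative default.

```python
import io
from PIL import Image

def compression_trigger(x: Image.Image, codec: str = "JPEG", quality: int = 75) -> Image.Image:
    """T(x) = C_p(x): re-encode the image with a lossy codec (JPEG or WEBP) and decode it again.
    The compression artifacts left behind act as the imperceptible trigger."""
    buffer = io.BytesIO()
    x.convert("RGB").save(buffer, format=codec, quality=quality)  # lossy encode
    buffer.seek(0)
    return Image.open(buffer).convert("RGB")                      # decode back to pixels
```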
### Trigger Injection
Consider the empirical risk minimization setting where one hopes to minimize the following loss function:
\[\theta^{*}=\operatorname*{arg\,min}_{\theta}\sum_{i=1}^{N}\mathcal{L}(f_{ \theta}(x_{i}),y_{i}) \tag{3}\]
Our goal is to minimize the risks, to yield an optimal classifier, \(f_{\theta}\) that could map \(x_{i}\) to \(y_{i}\) correctly. As our transformation function is non-trainable, we apply our transformation function directly to the data, and create a poisoned subset. We formulate the empirical risk minimization objective with backdoor samples as follows:
\[\theta^{*}=\operatorname*{arg\,min}_{\theta}\sum_{i=1}^{N}\mathcal{L}(f_{ \theta}(x_{i}),y_{i})+\sum_{j=1}^{M}\mathcal{L}(f_{\theta}(T(x_{j})),\eta(y_{ j})) \tag{4}\]
where \(N\) is the total number of clean images and \(M\) is the total number of poisoned images. By optimizing Eq. (4), we are able to jointly optimize the model, \(f_{\theta}\), for both benign and backdoor samples.
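A hedged sketch of how Eq. (4) can be optimized in practice: the poisoned subset is built offline with \(T\) and \(\eta\), merged with the clean data, and the model is trained with an ordinary cross-entropy loop. The hyperparameters mirror those reported in Sec. 5.1; all other names are illustrative.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import ConcatDataset, DataLoader

def train_backdoored(model, clean_set, poisoned_set, epochs=300, lr=5e-4):
    """Jointly minimize the clean and backdoor risks of Eq. (4) with standard training."""
    loader = DataLoader(ConcatDataset([clean_set, poisoned_set]), batch_size=1024, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            loss = F.cross_entropy(model(x), y)  # same loss for clean and poisoned pairs
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```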
## 5 Experimental Results
### Experimental Setup
We choose four widely used datasets for backdoor attack studies: **MNIST** Deng (2012), **CIFAR-10** Krizhevsky et al. (2010), **GTSRB** Stallkamp et al. (2012) and **CelebA** Liu et al. (2015). For the classifier, \(f\), we follow WaNet and LIRA, where we use the same CNN for MNIST, Pre-Activation ResNet-18 He et al. (2016) for both CIFAR-10 and GTSRB, and ResNet-18 for CelebA.
For attack experiments, we compare our results against WaNet and LIRA, as they achieved state-of-the-art results. For hyperparameters, our initial learning rate is 1e-6, and we increase the learning rate to 5e-4 after 5 epochs. We use AdamW as our optimizer and follow a cosine learning rate schedule. We train our classifier for 300 epochs. For the batch size, we use 1024 for all the datasets. We use a **lower poisoning rate of 5%** across our experiments. For the augmentation settings, we follow WaNet and LIRA. We retain the same settings across all experiments, and conduct the experiments in PyTorch Paszke et al. (2019).

Figure 3: Compressed image quality vs. CA/ASR under a 0.1% poisoning rate. At a compression quality of 30, the poisoned image still retains high visual similarity with the original, yet it is able to achieve near 100% ASR.
For the lossy image compressions, we used two libraries: Pillow Clark (2015) and OpenCV Bradski (2000). OpenCV allows specifying compression quality of the image. By default, we use Pillow for our experiments. We found that Pillow's compression is equivalent to OpenCV's compression quality of 75. We conduct experiments on JPEG and WEBP compressions, as they are commonly used. We use JPEG compression by default.
MNIST and CIFAR-10 are naturally stored without compression; therefore, we treat them as being in .PNG format. GTSRB is stored in .PPM format, which is a lossless format. For CelebA, the images are stored with the .JPEG extension. We clarify that even if the images are already in a lossy compression format, it is possible to re-compress them with lossy image compression to create the triggers.
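Since OpenCV exposes the compression quality directly, a poisoned copy of an image at any quality level can be produced with a few lines; this is an illustrative sketch, with quality 75 roughly matching Pillow's default as noted above.

```python
import cv2

def compress_with_quality(image_bgr, quality: int = 75, ext: str = ".jpg"):
    """Encode and decode an image in memory with OpenCV at a given quality (0-100)."""
    flag = cv2.IMWRITE_JPEG_QUALITY if ext == ".jpg" else cv2.IMWRITE_WEBP_QUALITY
    ok, encoded = cv2.imencode(ext, image_bgr, [int(flag), quality])
    assert ok, "encoding failed"
    return cv2.imdecode(encoded, cv2.IMREAD_COLOR)
```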
### Attack Experiments
In this experiment, we demonstrate that everyone can launch a backdoor attack, while achieving both attack effectiveness and stealthiness. First, we poison the classifier for each compared backdoor method and calculate its Clean Accuracy (CA) and Attack Success Rate (ASR). CA measures the model's accuracy without any trigger, while ASR measures how often poisoned images are classified as the target label. We train and evaluate the backdoor models in three different settings: _all-to-one_, _all-to-all_ and _clean label_. We assume that the adversary only has access to the data, but is not involved in the model training phase. As such, our threat model is relatively relaxed compared to both WaNet and LIRA. Overall, we observe stronger residuals generated by our method compared to both WaNet and LIRA in Figure 2. Even though our method creates residuals of larger magnitude, the trigger remains stealthy. We study and analyze the relationship between trigger strength (i.e. compression quality) and CA/ASR.
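The two metrics can be computed as sketched below, where `trigger` applies the lossy compression of Eq. (2) and `target_transform` is the label mapping \(\eta\); this is a generic illustration rather than the evaluation script used in the paper.

```python
import torch

@torch.no_grad()
def clean_accuracy_and_asr(model, loader, trigger, target_transform):
    """CA: accuracy on clean inputs. ASR: fraction of triggered inputs predicted as eta(y)."""
    model.eval()
    correct = fooled = total = 0
    for x, y in loader:
        correct += (model(x).argmax(dim=1) == y).sum().item()
        fooled += (model(trigger(x)).argmax(dim=1) == target_transform(y)).sum().item()
        total += y.size(0)
    return correct / total, fooled / total
```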
\begin{table}
\begin{tabular}{l||c||c|c|c|c} \hline \hline \multirow{2}{*}{AttackDataset} & \multicolumn{2}{c|}{MNIST} & CIFAR-10 & GTSRB & CelebA \\ \cline{2-6} & \multicolumn{1}{c|}{(PNG)} & \multicolumn{1}{c|}{(PNG)} & \multicolumn{1}{c|}{(PPM)} & \multicolumn{1}{c}{(JPEG)} \\ \hline \hline \multirow{2}{*}{WaNet} & All-to-One & 0.99 / 0.99 & 0.94 / 0.99 & 0.99 / 0.98 & 0.79 / 0.99 \\ \cline{2-6} & All-to-All & 0.99 / 0.95 & 0.94 / 0.93 & 0.99 / 0.98 & - \\ \hline \multirow{2}{*}{LIRA} & All-to-One & 0.99 / 1.00 & 0.94 / 1.00 & 0.99 / 1.00 & - \\ \cline{2-6} & All-to-All & 0.99 / 0.99 & 0.94 / 0.94 & 0.99 / 1.00 & - \\ \hline \hline \multirow{2}{*}{**Ours @ 5%**} & All-to-One & 0.98 / 0.99 & 0.96 / 1.00 & 0.97 / 1.00 & 0.80 / 1.00 \\ \cline{2-6} & All-to-All & 0.98 / 0.95 & 0.96 / 0.87 & 0.97 / 0.91 & 0.80 / 0.76 \\ \hline \multirow{2}{*}{**Ours @ 1%**} & All-to-One & 0.98 / 0.88 & 0.96 / 1.00 & 0.97 / 0.99 & 0.80 / 1.00 \\ \cline{2-6} & All-to-All & 0.99 / 0.01 & 0.96 / 0.81 & 0.97 / 0.73 & 0.80 / 0.74 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Attack Results. Blue denotes the Clean Accuracy (CA), while red denotes the Attack Success Rate (ASR). “-” denotes that the result is not available from the original paper. Note that both WaNet and LIRA use 10% as their poisoning rate; while we achieved comparable results with much lower poisoning rates (i.e. 5% and 1%, respectively).
\begin{table}
\begin{tabular}{l||c|c||c|c} \hline \hline \multirow{2}{*}{Dataset} & \multicolumn{2}{c||}{JPEG to WEBP} & \multicolumn{2}{c}{WEBP to JPEG} \\ \cline{2-5} & Clean & Attack & Clean & Attack \\ \hline \hline MNIST & 0.98 & 0.43 & 0.98 & 1.00 \\ CIFAR-10 & 0.96 & 0.41 & 0.96 & 0.92 \\ GTSRB & 0.97 & 0.97 & 0.97 & 1.00 \\ CelebA & 0.80 & 0.42 & 0.80 & 0.86 \\ \hline \hline \end{tabular}
\end{table}
Table 3: All-to-One Attack Transferability
#### 5.2.1 All-to-One Attack
We evaluate our method by setting the target class to a constant class, which is 0. Table 2 shows that our method can achieve performance comparable to WaNet and LIRA, even though we use _much lower poisoning rates of 1% and 5%_. We further investigate the effectiveness of our method by selecting a significantly lower poisoning rate, 0.1% (\(\sim\)50 samples), and evaluating with varying compression quality on CIFAR-10. The results are shown in Figure 3. We observe that the ASR is inversely proportional to the compression quality, showing that a lower compression quality produces a stronger trigger pattern (i.e. a larger magnitude of artifacts), leading to a higher ASR. Given the freedom to vary the compression quality, adversaries can select the desired compression quality, causing different magnitudes of damage to DNNs.
Therefore, our method can achieve comparable ASR even with a significantly lower poisoning rate (**10x lower**). The trigger generated through lossy image compression algorithms has greater magnitudes of residuals compared to both WaNet and LIRA, as shown in Figure 2, but the trigger remains stealthy to humans.
#### 5.2.2 All-to-All Attack
Under this setting, the true label is converted to the target label by one-shifting. This attack aims to introduce a malicious behavior to a model where a backdoor trigger leads to the prediction of different classes, instead of a fixed target class. It resembles forming a one-to-many relationship between a trigger and multiple classes. Although _our poisoning rate is lower (5%)_, we observe results comparable to WaNet and LIRA on all datasets, as shown in Table 2.
#### 5.2.3 Clean Label Attack
In a clean label attack, the labels remain the same. Instead, we only poison the images of the target class. By poisoning the target class's images, we create a malicious association between the trigger and the target class. Therefore, when the trigger is applied to other classes' samples, the malicious association will mislead the model's predictions toward the target class. We poison _only 10% of the target class 0 in CIFAR-10_.
In Figure 4, we show that our method can achieve nearly 100% ASR when the compression quality is 30, while remaining stealthy, as shown in Figure 2. Similarly, we evaluate GTSRB (see Figure 5), with 50% and 100% poisoning rates of class 1, respectively. We observe a similar trend where our method achieves near 100% ASR on GTSRB, while preserving the stealthiness of our trigger. The effectiveness of our attack is due to the spread of our trigger across the entire image; i.e. lossy image compression algorithms create artifacts over the whole image, disrupting the originally embedded features and linking the samples to the targeted class.

\begin{table}
\begin{tabular}{c||c|c||c|c} \hline \hline \multirow{2}{*}{Dataset} & \multicolumn{2}{c||}{JPEG to WEBP} & \multicolumn{2}{c}{WEBP to JPEG} \\ \cline{2-5} & Clean & Attack & Clean & Attack \\ \hline \hline MNIST & 0.98 & 0.19 & 0.99 & 0.00 \\ CIFAR-10 & 0.96 & 0.26 & 0.96 & 0.81 \\ GTSRB & 0.97 & 0.65 & 0.97 & 0.90 \\ CelebA & 0.81 & 0.02 & 0.80 & 0.45 \\ \hline \hline \end{tabular}
\end{table}
Table 4: All-to-All Attack Transferability

Figure 4: Clean Label Attack on CIFAR-10. We show that a compression quality of 30 is able to achieve near 100% ASR, while remaining stealthy.
On top of that, our approach offers an additional option, i.e. setting the image's compression quality. This gives adversaries a certain degree of freedom to trade off ASR against stealthiness. We observe that ASR is inversely proportional to the compression quality, i.e. the trigger strength increases as the compression quality decreases (see Figures 4-5).
#### 5.2.4 Transferability of Attack
We evaluate the transferability of our attack across different lossy image compression algorithms (see Tables 3-4). We train a model with the JPEG trigger, and then evaluate it with the WEBP trigger and vice versa. We show that even if a model is trained with JPEG compression, it is still susceptible to an attack with WEBP compression, revealing another possible attack mechanism.
The results are presented in Tables 3-4, respectively. We observe that the WEBP trigger has better transferability than the JPEG trigger. This is because WEBP is a better lossy image compression algorithm Google (2010), in that it preserves more of an image's information content (i.e. smaller artifacts) while reducing the image's file size.
Therefore, WEBP-compressed images have better stealthiness and the artifacts created are harder to learn by the model. Once the model learns the artifacts created by WEBP, it could be attacked even with JPEG-compressed images. We do not conduct experiments on transferability from lossy to lossless image compression algorithms as the nature of lossless image compression algorithms is to preserve all information content. Therefore, lossless image compression algorithms will not generate any artifacts that could be used as a trigger pattern.
Figure 5: Clean Label attack on GTSRB. We show that when the poisoning rate is high (100%), even a high compression quality can achieve a high ASR, while if the poisoning rate is lower (50%), a lower compression quality (20) is required to achieve near 100% ASR.
Figure 6: STRIP. We observe a similar entropy range between the backdoored model and the clean model, as our trigger combines with the features of the images, which are disrupted by perturbations.
### Defense Experiments
In this section, we evaluate the backdoor-injected classifiers against several popular backdoor defense mechanisms, such as STRIP Gao et al. (2019), Fine Pruning Liu et al. (2018), Neural Cleanse Wang et al. (2019) and Grad-CAM Selvaraju et al. (2019).
#### 5.3.1 Strip
We first evaluate our method against STRIP, which perturbs a small subset of clean images and calculates the entropy of the model's prediction. A high entropy in STRIP indicates a low possibility of backdoor attacks, and vice versa. STRIP assumes that a poisoned image creates a strong association between the trigger and the target. Therefore, if the model does not change its prediction on perturbed inputs, the model has a high chance of being backdoored. Figure 6 shows that our method has comparable entropy to the clean model across all datasets. With our method, the trigger is implanted across the entire image and "combined" with the original image content; hence, perturbations by STRIP will break the trigger along with the original image content. Therefore, our method behaves like a genuine model with a similar entropy range.
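For reference, a simplified sketch of the STRIP statistic discussed above: each test input is blended with randomly drawn clean images and the entropy of the resulting predictions is recorded, with low entropy under perturbation indicating a backdoor. The blending weight and sample count are illustrative choices.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def strip_entropy(model, x, clean_images, num_perturbations=64, alpha=0.5):
    """Average prediction entropy of one input superimposed with random clean images (STRIP)."""
    idx = torch.randint(len(clean_images), (num_perturbations,))
    blended = alpha * x.unsqueeze(0) + (1 - alpha) * clean_images[idx]  # perturbed copies
    probs = F.softmax(model(blended), dim=1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1)            # entropy per copy
    return entropy.mean().item()
```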
#### 5.3.2 Fine Pruning
Fine Pruning analyzes the response of neurons at a specific layer. Given a set of clean images, Fine Pruning feeds the clean images into the network. It identifies the less active neurons, assuming that they are closely related to the trigger, and will not be activated by clean images. To mitigate the backdoor attack, this method works by pruning these dormant neurons, to reduce the effect of backdoor attacks on the network. In Figure 7, we found that our method is resilient to Fine Pruning in all datasets. This is because our trigger pattern will activate the neurons evenly, causing the backdoor neurons and benign neurons to have indistinguishable responses towards clean images.
#### 5.3.3 Neural Cleanse
Neural Cleanse is a model-defense method based on the pattern optimization approach. Neural Cleanse computes the optimal patch pattern that converts the prediction of a clean input to a target class. Then, it checks if any label has a significantly smaller pattern, which is a sign of a backdoor. Neural Cleanse classifies whether a model is backdoored by using the Anomaly Index metric, \(\tau\), with a threshold of \(2\). A model is considered backdoored when \(\tau>2\), and vice versa. We ran Neural Cleanse across all datasets and collected the Anomaly Index as shown in Figure 8. We observe that our attack is not detected on any dataset. We also observe that GTSRB has an abnormally high Anomaly Index even for a clean model. On CIFAR-10 and CelebA, our method has an Anomaly Index almost identical to that of the clean model.

Figure 7: Fine Pruning. We observe that even when a high number of neurons are pruned, our ASR remains high, depicting the resistance of our backdoor trigger to pruning.

Figure 8: Neural Cleanse. This shows that our method is not detectable, as the Anomaly Index is \(<2\) for all backdoored models.
#### 5.3.4 Grad-CAM
To further show the effectiveness of our backdoor attack, we employ visualization from Grad-CAM Selvaraju et al. (2019) to understand the backdoored network behavior on both clean and poisoned images. A patch-based trigger could be exposed by Grad-CAM easily as it occupies a small region in the image. In contrast, our method will create a trigger pattern across the image, making it undetectable by Grad-CAM. As shown in Figure 9, the heatmaps of our method look similar to the clean model.
## 6 Conclusion
This paper shows that everyone could become an adversary against deep learning systems by re-purposing lossy image compression, which is easily accessible and widely available. As such, our method requires minimal to no effort or knowledge in designing an effective trigger generator. Besides, we demonstrated with extensive experiments that the accessibility of backdoor attacks seriously threatens all deep learning systems. Specifically, we show that not only does the attack remain stealthy, but everyone can now launch such an attack easily. To the best of our knowledge, we are the first to investigate the accessibility of backdoor attacks. We urge researchers and practitioners to investigate this new line of attack, given the potentially serious consequences it could bring to deep learning systems.
The vulnerability to backdoor attacks threatens the reliability of machine learning models in practical applications. The conventional wisdom is that not everyone can be an attacker, because the process of designing a trigger generation algorithm requires significant effort and extensive experiments to ensure the stealthiness and effectiveness of the attack. Instead, this paper shows that a more severe backdoor threat exists: anyone can exploit it to attack. For example, this adversary can use an easily-accessible algorithm, available in countless compression tools, to inject a trigger pattern into images without leaving any noticeable trace; that is, the generated triggers are natural artifacts. No knowledge is required beyond using a compression tool and clicking the convert or save-as button. With this attack, the adversary does not need to design a trigger generator as in prior works, and poisoning the data alone is sufficient. Empirically...
2303.12948 | FTSO: Effective NAS via First Topology Second Operator | Existing one-shot neural architecture search (NAS) methods have to conduct a
search over a giant super-net, which leads to the huge computational cost. To
reduce such cost, in this paper, we propose a method, called FTSO, to divide
the whole architecture search into two sub-steps. Specifically, in the first
step, we only search for the topology, and in the second step, we search for
the operators. FTSO not only reduces NAS's search time from days to 0.68
seconds, but also significantly improves the found architecture's accuracy. Our
extensive experiments on ImageNet show that within 18 seconds, FTSO can achieve
a 76.4% testing accuracy, 1.5% higher than the SOTA, PC-DARTS. In addition,
FTSO can reach a 97.77% testing accuracy, 0.27% higher than the SOTA, with
nearly 100% (99.8%) search time saved, when searching on CIFAR10. | Likang Wang, Lei Chen | 2023-02-28T17:34:26 | http://arxiv.org/abs/2303.12948v1 | # FTSO: Effective NAS via First Topology Second Operator
###### Abstract
Existing one-shot neural architecture search (NAS) methods have to conduct a search over a giant super-net, which leads to the huge computational cost. To reduce such cost, in this paper, we propose a method, called FTSO, to divide the whole architecture search into two sub-steps. Specifically, in the first step, we only search for the topology, and in the second step, we search for the operators. FTSO not only reduces NAS's search time from days to \(0.68\) seconds, but also significantly improves the found architecture's accuracy. Our extensive experiments on ImageNet show that within \(18\) seconds, FTSO can achieve a \(76.4\%\) testing accuracy, \(1.5\%\) higher than the SOTA, PC-DARTS. In addition, FTSO can reach a \(97.77\%\) testing accuracy, \(0.27\%\) higher than the SOTA, with nearly \(100\%\) (\(99.8\%\)) search time saved, when searching on CIFAR10.
## 1 Introduction
Since the great success of the AlexNet (Krizhevsky et al., 2012) in image classification, most modern machine learning models (Wang, 2023; Wang et al., 2022; 2023) have been developed based on deep neural networks. For neural networks, their performance is greatly determined by the architectures. Thus, in the past decade, a tremendous amount of work (Simonyan and Zisserman, 2015; Szegedy et al., 2015; He et al., 2016) has been done to investigate proper network architecture design. However, as the network size has grown larger and larger, it has gradually become unaffordable to manually search for better network architectures due to the expensive time and resource overheads. To ease this problem, a new technique called neural architecture search (NAS) was introduced. It allows computers to search for better network architectures automatically instead of relying on human experts.
Early-proposed reinforcement learning-based NAS methods (Zoph and Le, 2017; Baker et al., 2017; Zoph et al., 2018) typically have an RNN-based controller to sample candidate network architectures from the search space. Although these algorithms can provide promising accuracy, their computation cost is usually unaffordable, for instance, 1800 GPU-days are required for NASNet to find an image classification network on CIFAR10.
To ease the search efficiency problem, one-shot approaches (Pham et al., 2018; Cai et al., 2019; Liu et al., 2019) with parameter sharing have been proposed. These methods first create a huge directed acyclic graph (DAG) super-net, containing the whole search space. Then, the kernel weights are shared among all the sampled architectures via the super-net. This strategy makes it possible to measure the candidate architecture's performance without repeatedly retraining it from scratch. However, these algorithms suffer from the super-nets' computational overheads. This problem is particularly severe for differentiable models (Liu et al., 2019; Xu et al., 2020).
Limited by current NAS algorithms' inefficiency, it is rather challenging to find satisfying network architectures on large-scale datasets and complex tasks. For instance, current speed-oriented NAS approaches generally require days to accomplish one search trial on ImageNet, for example, \(8.3\) GPU-days for ProxylessNAS (Cai et al., 2019) and \(3.8\) GPU-days for PC-DARTS (Xu et al., 2020). Therefore, we argue that it is essential to propose a new well-defined search space, which is not only expressive enough to cover the most powerful architectures, but also compact enough to filter out the poor architectures.
We are motivated by Shu et al. (2020), who demonstrate that randomly replacing operators in a found architecture does not harm the accuracy much. As such, we believe that there would be no reduction in the test accuracy if we omit the influence of operators and cluster architectures according to the topology. Thus, in this paper, we propose to separately search for the network topology and the operators. We name this new method Effective NAS via First Topology Second Operator (FTSO).
In this paper, we first mathematically prove that FTSO reduces the number of network parameters by \(10^{8}\), decreases the FLOPs per iteration by \(10^{5}\) and lowers the operator's
complexity in magnitude. We then empirically reveal that FTSO shortens the required search period from 50 epochs to one iteration. Besides the great improvement in efficiency, FTSO also significantly promotes effectiveness by easing the over-fitting phenomenon and the Matthew effect (Shu et al., 2020). To be specific, each architecture in DARTS has only one iteration to tune its kernel weights, and within one iteration, only the operators with few parameters may converge. The result is that the simpler operators outperform the more powerful ones in the super-net and then receive larger gradients, which further enlarge their advantage. In this way, the found architectures tend to contain only the simplest operators and perform poorly on both the training and testing sets. Such a phenomenon is called the Matthew effect.
```
Input: a set of nodes: \(n_{k}\)
Output: the pruned architecture: \(A_{p}\)
1. Create a directed edge \(e_{i,j}\) with weight \(\beta_{i,j}\) between each pair of nodes \(n_{i}\) and \(n_{j}\) (\(i<j\))
2. Assign each edge \(e_{i,j}\) a skip-connection operator \(o_{i,j}\) with kernel weights \(w_{i,j}\)
while still in the first epoch do
    1. Forward-propagate following \(n_{j}=\sum_{i<j}o(n_{i})\beta_{i,j}\)
    2. Update architecture \(\beta\) by descending \(\nabla_{\beta}\mathcal{L}_{val}(w,\beta)\)
    3. Update weights \(w\) by descending \(\nabla_{w}\mathcal{L}_{train}(w,\beta)\)
end while
for each node \(n_{j}\in A_{p}\) do
    \(T_{j}\leftarrow\) the second largest \(\beta_{i,j}\)
    for each node \(n_{i}\) do
        if \(\beta_{i,j}<T_{j}\) then
            Prune edge \(e_{i,j}\)
        end if
    end for
end for
Derive the pruned architecture \(A_{p}\).
```
**Algorithm 1** topology search
Our extensive experiments show that FTSO can accomplish the whole architecture search in \(0.68\) seconds. On ImageNet, FTSO achieves \(76.4\%\) testing accuracy, \(1.5\%\) higher than the SOTA, within a mere 18 seconds. More importantly, when we only search for one iteration, FTSO consumes less than \(0.68\) seconds, while reaching \(75.64\%\) testing accuracy, \(0.74\%\) higher than the SOTA. Moreover, if we allow FTSO to search for \(19\) minutes, \(76.42\%\) Top1 and \(93.2\%\) Top5 testing accuracy can be achieved. In addition, FTSO can reach \(97.77\%\) testing accuracy, \(0.27\%\) higher than the SOTA, with nearly \(100\%\) (\(99.8\%\)) search time saved, when searching on CIFAR10. Although in this paper we have implemented FTSO within a continuous search space, we illustrate in Section 5 that FTSO can be seamlessly transferred to other NAS algorithms.
```
Input: the pruned architecture produced by the topology search: \(A_{p}\)
Output: the found architecture: \(A_{f}\)
if replace with convolutions then
    Replace all the retained operators \(o_{i,j}\) in \(A_{p}\) with convolutions
else
    Each node \(n_{j}\leftarrow\sum_{i<j}\sum_{o\in\mathcal{O}}\frac{\exp\alpha_{i,j}^{o}}{\sum_{o^{\prime}\in\mathcal{O}}\exp\alpha_{i,j}^{o^{\prime}}}o(n_{i})\)
    while not converged do
        Update architecture \(\alpha\) by descending \(\nabla_{\alpha}\mathcal{L}_{val}(w,\alpha)\)
        Update weights \(w\) by descending \(\nabla_{w}\mathcal{L}_{train}(w,\alpha)\)
    end while
end if
for each edge \(e_{i,j}\in A_{p}\) do
    Assign edge \(e_{i,j}\) the operator \(o^{\prime}\in\mathcal{O}\) with the highest \(\alpha_{i,j}^{o^{\prime}}\)
end for
Derive the found architecture \(A_{f}\gets A_{p}\).
```
**Algorithm 2** operator search
Figure 1: The main structure of FTSO.
## 2 Related Work
In general, existing NAS algorithms can be divided into three categories, namely, reinforcement learning-based, evolution-based and differentiable. Early-proposed reinforcement learning-based methods (Zoph and Le, 2017; Zoph et al., 2018) generally suffer from high computational cost and low-efficiency sampling. Instead of sampling a discrete architecture and then evaluating it, DARTS (Liu et al., 2019) treats the whole search space as a continuous super-net. It assigns every operator a real-number weight and treats every node as the linear combination of all its transformed predecessors. To be specific, DARTS's search space is a directed acyclic graph (DAG) containing two input nodes inherited from previous cells, four intermediate nodes and one output node. Each node denotes one latent representation and each edge denotes an operator. Every intermediate node \(\mathbf{x}_{j}\) is calculated from all its predecessors \(\mathbf{x}_{i}\), i.e., \(\mathbf{x}_{j}=\sum_{i<j}\sum_{o\in\mathcal{O}}\frac{\exp\alpha_{i,j}^{o}}{\sum_{o^{\prime}\in\mathcal{O}}\exp\alpha_{i,j}^{o^{\prime}}}o(\mathbf{x}_{i})\), where \(\mathcal{O}\) denotes the collection of all candidate operators and \(\alpha_{i,j}^{o}\) denotes the weight for operator \(o\) from node \(i\) to \(j\). This strategy allows DARTS to directly use gradients to optimize the whole super-net. After the super-net converges, DARTS only retains the operators with the largest weights. In this way, the final discrete architecture is derived. The main defect of DARTS is that it needs to maintain and perform all calculations on a giant super-net, which inevitably leads to heavy computational overheads and over-fitting.
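As a concrete rendering of the node update above, the mixed operator on one DARTS edge can be written in a few lines of PyTorch; in the reference implementation the architecture weights live outside the module, so this is a simplified sketch rather than DARTS's actual code.

```python
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    """Softmax-weighted sum of all candidate operators on a single edge (DARTS)."""
    def __init__(self, candidate_ops):
        super().__init__()
        self.ops = nn.ModuleList(candidate_ops)
        self.alpha = nn.Parameter(torch.zeros(len(candidate_ops)))  # architecture weights

    def forward(self, x):
        weights = torch.softmax(self.alpha, dim=0)                  # softmax over operators
        return sum(w * op(x) for w, op in zip(weights, self.ops))
```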
To relieve the computational overhead of DARTS, DARTS-ES (Zela et al., 2020) reduces the number of searching epochs via early stopping, according to the Hessian matrix's max eigenvalue. PC-DARTS (Xu et al., 2020) decreases the FLOPs per iteration by only calculating a proportion of the input channels and keeping the remainder unchanged, and normalizes the edge weights to stabilize the search. To be specific, in PC-DARTS, every intermediate node \(\mathbf{x}_{j}\) is computed from all its predecessors \(\mathbf{x}_{i}\), i.e., \(\mathbf{x}_{j}=\sum_{i<j}\frac{\exp\beta_{i,j}}{\sum_{i^{\prime}<j}\exp\beta_ {i^{\prime},j}}f_{i,j}(\mathbf{x}_{i})\), where \(\beta_{i,j}\) describes the input node \(i\)'s importance to the node \(j\), and \(f_{i,j}\) is the weighted sum of all the candidate operators' outputs between node \(i\) and \(j\). Specifically, \(f_{i,j}(\mathbf{x}_{i},\mathbf{S}_{i,j})=\sum_{o\in\mathcal{O}}\frac{e^{\alpha _{i,j}^{o}}}{\sum_{o^{\prime}\in\mathcal{O}}e^{\alpha_{i,j}^{o^{\prime}}}}o( \mathbf{S}_{i,j}*\mathbf{x}_{i})+(1-\mathbf{S}_{i,j})*\mathbf{x}_{i}\), where \(\mathbf{S}_{i,j}\) denotes a binary vector, in which only \(1/K\) elements are \(1\).
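Similarly, a simplified sketch of PC-DARTS's partial-channel trick: only a \(1/K\) fraction of the channels passes through the mixed operator while the rest bypass it (the channel-shuffle step of the original method is omitted here for brevity).

```python
import torch
import torch.nn as nn

class PartialChannelEdge(nn.Module):
    """Apply the mixed operator to 1/K of the channels and pass the remainder through unchanged."""
    def __init__(self, mixed_op, channels, k=4):
        super().__init__()
        self.mixed_op = mixed_op
        self.split = channels // k                                   # channels selected by S_{i,j}

    def forward(self, x):
        x_active, x_bypass = x[:, :self.split], x[:, self.split:]
        return torch.cat([self.mixed_op(x_active), x_bypass], dim=1)
```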
## 3 FTSO: Effective NAS via First Topology Second Operator
Existing NAS approaches generally suffer from a huge computational overhead and an unsatisfying testing accuracy caused by the huge search space. Such problems are especially severe in one-shot and differentiable methods because these algorithms need to maintain, and even perform all the calculations directly on, the search space.
Figure 2: FTSO’s found architectures. (a) and (b): Normal and reduction cells found on CIFAR10 after one epoch’s search; (c) and (d): Normal and reduction cells found on the entire ImageNet after one epoch’s search; (e) and (f): Normal and reduction cells found on CIFAR10, where we adopt the operator search, and use the \(3\times 3\)_separable convolution_ to search for the topology; (g): FTSO’s cell found on NATS-Bench; (h): DARTS’s cell found on NATS-Bench.
To ease such problems, it is highly desirable to investigate the correlations among different architectures and to shrink the search space according to prior knowledge. We notice an important observation in Shu et al. (2020): randomly substituting the operators in a found architecture does not observably influence the testing accuracy. Therefore, it is natural to cluster the architectures according to their connection topologies. To be specific, suppose we find an architecture containing only the simplest operators that achieves high accuracy on the testing set; if we replace all the connections in this architecture with powerful operators, the converted architecture can be expected to perform well on the testing set with high confidence.
In this paper, we first propose to find the most effective network topology with simple operators. We then fix the topology, and search for the most suitable operators for the given topology. In this way, the testing accuracy can still be guaranteed, while the search space is shrunk in magnitude. We name this new NAS algorithm Effective NAS via First Topology Second Operator (FTSO).
We summarize the symbols used in this section in Table 2. As shown in Figure 1, we inherit the differentiable framework of PC-DARTS, and divide the architecture search into two phases. We name the two phases topology search and operator search, and illustrate how they work in Algorithms 1 and 2, respectively. In the first phase, we form a super-net only containing the simplest operator, _skip connection_. Since the _skip connection_ operator contains no kernel weights, we only need to optimize the architecture parameters \(\beta_{i,j}\). In fact, as shown in Table 1, the _max pooling_ operator also delivers satisfying results for the topology search. There are two reasons we use _skip connection_. The first is that the _skip connection_ operator not only requires zero parameters, but also demands the minimum computational cost. The second reason is that _max pooling_ may lead to the loss of useful information if the network is deep. Furthermore, as the only difference between our topology search and the vanilla DARTS is the number of candidate operators, the pruned architecture's connectivity can be guaranteed.
Similar to DARTS and PC-DARTS, after the topology search, for every intermediate node \(j\), we only retain its connections to the predecessors \(i^{*}\) with the highest two \(\beta_{i,j}\). In the second phase, we search for the operators suitable for the pruned topology with two strategies. The first strategy is similar to the vanilla DARTS. It replaces each operator in the pruned topology with a mix-operator \(f_{i,j}\). After that, we optimize the architecture parameters, \(\alpha^{o}_{i,j}\), \(\beta_{i,j}\), and the kernel weights \(\omega^{o}_{i,j}\) alternately. After the super-net converges, we only retain the one operator \(o^{*}\) with the highest \(\alpha^{o}_{i,j}\) for every two connected nodes \(i\) and \(j\). The second strategy directly replaces all the operators in the pruned topology with one single operator owning the highest model capacity, e.g., a convolution operator.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**FTSO’s Configuration**} & \multicolumn{3}{c}{**CIFAR10 Error (\%)**} & \multicolumn{3}{c}{**ImageNet Error (\%)**} & \multicolumn{3}{c}{**Search Cost (GPU-days)**} \\ \cline{2-7} & **600 Epoch** & **1200 Epoch** & **Top1** & **Top5** & **CIFAR** & **ImageNet** \\ \hline CIF. Topo(skip,1it.) & \(2.68\) & \(2.54\) & \(24.36\) & \(7.27\) & \(7.87\times 10^{-6}\) & - \\ CIF. Topo(skip,1ep.) & \(2.48\) & \(2.23\) & \(23.60\) & \(7.01\) & \(2\times 10^{-4}\) & - \\ CIF. Topo(skip,50ep.) & \(2.77\) & \(2.52\) & - & - & \(0.01\) & - \\ CIF. Topo(skip,1ep.)+Op(18ep.) & \(2.85\) & \(2.59\) & \(23.97\) & \(7.20\) & \(0.01\) & - \\ CIF. Topo(skip,50ep.)+Op(50ep.) & \(2.59\) & \(2.36\) & \(23.97\) & \(7.12\) & \(0.05\) & - \\ CIF. Topo(m.p.,50ep.)+Op(50ep.) & \(2.83\) & \(2.48\) & - & - & \(0.05\) & - \\ CIF. Topo(sep3,50ep.) & 2.63 & \(2.48\) & - & - & \(0.02\) & - \\ CIF. Topo(sep3,50ep.)+Op(50ep.) & \(2.56\) & \(2.52\) & \(24.73\) & \(7.60\) & \(0.06\) & - \\ CIF. Topo(30p,50ep.)+Op(50ep.) & \(2.59\) & \(2.50\) & - & - & \(0.05\) & - \\ CIF. Topo(40p,50ep.)+Op(50ep.) & \(2.68\) & \(2.59\) & - & - & \(0.07\) & - \\ Part ImageNet Topo(skip,1it.) & - & - & \(24.03\) & \(7.07\) & - & \(0.0002\) \\ Part ImageNet Topo(skip,1ep.) & - & - & \(23.94\) & \(7.05\) & - & \(0.0017\) \\ Part ImageNet Topo(skip,6ep.) & - & - & \(24.59\) & \(7.38\) & - & \(0.009\) \\ Full ImageNet Topo(skip,1ep.) & \(2.35\) & \(2.26\) & \(23.58\) & \(6.80\) & - & \(0.01\) \\ \hline \hline \end{tabular}
* ‘3op’ means _max pool 3x3_, _skip connect_ and _none_; ‘4op’ means _sep conv 3x3_, _max pool 3x3_, _skip connect_ and _none_; ‘m.p.’ means _max pool 3x3_; ‘sep3’ means _sep conv 3x3_.
* ‘CIF.’ means CIFAR10; ‘Topo(skip,1it.)’ means to search for the topology with only skip connections for 1 iteration; ‘1ep.’ means 1 epoch; ‘Part ImageNet’ means to search on part of ImageNet.
\end{table}
Table 1: Different configurations’ impact on FTSO
In this paper, we take the second strategy as the default configuration because it is not only much more efficient, but also avoids the over-fitting phenomenon and the Matthew effect in DARTS. To be specific, in DARTS-like methods, suppose the found network perfectly fits the data; then the super-net must severely over-fit, since the super-net is much larger than the found network. As we know, an over-fitting model can hardly generalize well. Thus, the generated sub-graph is unlikely to be the best architecture. In the second strategy, by contrast, since no super-net is adopted and all the simple operators in the sub-graph are replaced with powerful operators, the final architecture's model capacity is promoted. Additional empirical comparisons between these two strategies can be found in Section 4.4.
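To summarize the default pipeline in code form, the sketch below shows the pruning step shared by both strategies and the direct operator replacement of the second strategy: for every intermediate node we keep the two incoming edges with the largest \(\beta\) and assign each kept edge one fixed operator (e.g., a \(3\times 3\) separable convolution). The dictionary-based representation is our own simplification.

```python
def prune_topology(beta, num_nodes, keep=2, operator="sep_conv_3x3"):
    """beta[(i, j)] is the learned weight of the edge from node i to node j (i < j).
    Retain the top-`keep` predecessors per intermediate node and fix one operator per kept edge."""
    architecture = []
    for j in range(2, num_nodes):                                   # skip the two input nodes
        predecessors = sorted(range(j), key=lambda i: beta[(i, j)], reverse=True)[:keep]
        architecture += [(i, j, operator) for i in predecessors]
    return architecture
```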
In DARTS, the network topology and operators are jointly searched, which makes both the size and the computational cost of the super-net extremely high. We use \(n\) to denote the number of nodes and \(p\) to denote the number of candidate operators. Since we have two input nodes, one output node and \(n-3\) intermediate nodes, the super-net contains a total of \(\frac{1}{2}(n^{2}-3n)\) edges. At the same time, every edge keeps \(p\) operators; thus, the total number of operators in DARTS is \(\frac{1}{2}(n^{2}-3n)p\). By comparison, there are only \(\frac{1}{2}n(n-3)\) operations in our topology search, and \(2(n-3)p\) operations in our operator search. This is because in the topology search, every edge contains only one operator, and in the operator search, every intermediate node only connects to two predecessors. Since \(n\) is usually close to \(p\), FTSO reduces the number of operations from \(O(n^{3})\) to \(O(n^{2})\).
**Theorem 1**.: _The total number of FLOPs and parameters of DARTS are \(\frac{1}{2}pn(n-3)H_{out}W_{out}C_{out}(k^{2}C_{in}+1)\) and \(\frac{1}{2}n(n-3)p(k^{2}C_{in}+1)C_{out}\) respectively._
Proof.: Each vanilla convolutional operator needs \(k^{2}C_{in}H_{out}W_{out}C_{out}\) FLOPs and \((k^{2}C_{in}+1)C_{out}\) parameters, where \(k\) is the kernel size, \(C_{in}\) is the input tensor's channel number and \(H_{out}\), \(W_{out}\) and \(C_{out}\) are the output tensor's height, width and channel number respectively. For simplicity, assume all the candidate operators are convolutions. Since DARTS has \(\frac{1}{2}pn(n-3)\) operators, it needs to compute \(\frac{1}{2}pn(n-3)\) convolutions and \(\frac{1}{2}pn(n-3)\) tensor summations. Owing to each tensor summation consuming \(H_{out}W_{out}C_{out}\) FLOPs, DARTS requires a total of \(\frac{1}{2}pk^{2}n(n-3)C_{in}H_{out}W_{out}C_{out}\) convolution FLOPs and \(\frac{1}{2}pn(n-3)H_{out}W_{out}C_{out}\) summation FLOPs. Thus, the overall FLOPs and parameter counts are \(\frac{1}{2}pn(n-3)H_{out}W_{out}C_{out}(k^{2}C_{in}+1)\) and \(\frac{1}{2}n(n-3)p(k^{2}C_{in}+1)C_{out}\), respectively.
**Theorem 2**.: _The total number of FLOPs and parameters of FTSO are \(\frac{1}{2}n(n-3)H_{in}W_{in}C_{in}\) and \(\frac{1}{2}n(n-3)\) respectively._
Proof.: Each _skip connection_ operator needs \(0\) parameters and \(0\) FLOPs. If we first search for the topology and then directly substitute the operators, only \(\frac{1}{2}n(n-3)\) tensor summations need to be calculated, since FTSO has \(\frac{1}{2}n(n-3)\) operators.
In addition to the reduction in the number of operations, FTSO also dramatically decreases the internal cost of the operations. This is because during the topology search all the powerful operators are replaced by the simple operators. We summarize the main properties of DARTS and FTSO in Theorems 1 and 2. As a typical configuration, let \(k=5\), \(C_{in}=C_{out}=512\), \(n=7\), \(p=8\). Then, our algorithm requires only \(\frac{1}{p(k^{2}C_{in}+1)C_{out}}=1.9\times 10^{-8}\) times the parameters and \(\frac{1}{p(k^{2}C_{in}+1)}=9.8\times 10^{-6}\) times the forward-propagation FLOPs per iteration compared to those of DARTS.
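The quoted ratios follow directly from Theorems 1 and 2; the short calculation below reproduces them for the stated configuration.

```python
# Parameter and per-iteration FLOP ratios of FTSO relative to DARTS (Theorems 1 and 2).
k, c_in, c_out, n, p = 5, 512, 512, 7, 8

param_ratio = 1.0 / (p * (k ** 2 * c_in + 1) * c_out)  # ~1.9e-8
flop_ratio = 1.0 / (p * (k ** 2 * c_in + 1))           # ~9.8e-6
print(f"parameters: {param_ratio:.1e}, forward FLOPs per iteration: {flop_ratio:.1e}")
```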
FTSO's huge reduction in the number of parameters provides us with a large number of benefits. As mentioned above, it allows the algorithm to converge in only a few iterations and prevents over-fitting. This is because when extracting the discrete sub-graph from the super-net, many architecture parameters are set to 0. The introduced disturbance impacts the over-fitting super-nets more, since they prefer sharper local minima. Furthermore, since FTSO only contains one operator with 0 parameters, the Matthew effect is eliminated.
Figure 3: Ablation study Part 1. (a): CIFAR10: Accuracy - Search epochs (in the same run); (b): CIFAR10: Accuracy - Search epochs (multiple runs); (c): CIFAR10: Accuracy - Search iterations; (d): Accuracy on CIFAR10: 1 vs 2 search iterations (multiple runs).
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{**Architecture**} & \multicolumn{3}{c}{**Search on CIFAR10**} & \multicolumn{3}{c}{**Search on CIFAR100**} & \multicolumn{3}{c}{**Search on ImageNet**} \\ \cline{2-10} & **CF10\({}^{*}\)** & **CF100** & **ImageNet** & **CF10** & **CF100** & **ImageNet** & **CF10** & **CF100** & **ImageNet** \\ \hline DARTS (1st) & \(54.30\) & \(15.61\) & \(16.32\)† & \(86.57\) & \(58.13\) & \(28.50\) & \(89.58\) & \(63.89\) & \(33.77\) \\ DARTS (2nd) & \(86.88\) & \(58.61\) & \(28.91\) & \(91.96\) & \(67.27\) & \(39.47\) & \(84.64\) & \(55.15\) & \(26.06\) \\ \hline FTSO & \(93.98\) & \(70.22\) & \(45.57\) & \(93.98\) & \(70.22\) & \(45.57\) & \(93.98\) & \(70.22\) & \(45.57\) \\ \hline \hline \end{tabular}
* This means that within NATS-Bench’s search space, when we use the 1st order DARTS to search for architectures on the CIFAR10 dataset, the found architecture can achieve \(16.32\%\) testing accuracy on ImageNet.
* CF10 means testing accuracy (%) on CIFAR10; CF100 means testing accuracy (%) on CIFAR100
\end{table}
Table 4: Comparison with existing state-of-the-art image classification architectures (NATS-Bench)
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Architecture**} & \multicolumn{3}{c}{**CIFAR Err. (\%)**} & \multicolumn{3}{c}{**ImageNet Err. (\%)**} & \multicolumn{3}{c}{**Search Cost (GPU-days)**} \\ \cline{2-7} & **600 Ep.** & **1200 Ep.** & **Top1** & **Top5** & **CIFAR** & **ImageNet** \\ \hline NASNet-A†Zoph et al. (2018) & \(2.65\) & - & \(26.0\) & \(8.4\) & \(1800\) & - \\ AmoebaNet†Real et al. (2019) & \(2.55\) & - & \(24.3\) & \(7.6\) & \(3150\) & - \\ PNASLiu et al. (2018) & \(3.41\) & - & \(25.8\) & \(8.1\) & \(225\) & - \\ ENAS†Pham et al. (2018) & \(2.89\) & - & - & - & \(0.5\) & - \\ \hline DARTS (2nd)†Liu et al. (2019) & \(2.76\) & - & \(26.7\) & \(8.7\) & \(1\) & \(4.0\) \\ SNA†Xie et al. (2019) & \(2.85\) & - & \(27.3\) & \(9.2\) & \(1.5\) & \\ ProxylessNAS†Cai et al. (2019) & \(2.08\) & - & \(24.9\) & \(7.5\) & \(4.0\) & \(8.3\) \\ P-DARTS†Chen et al. (2019) & \(2.50\) & - & \(24.4\) & \(7.4\) & \(0.3\) & - \\ BayesNAS†Zhou et al. (2019) & \(2.81\) & - & \(26.5\) & \(8.9\) & \(0.2\) & - \\ \hline PC-DARTS(CIFAR10)†Xu et al. (2020) & \(2.57\) & \(2.50\) & \(25.1\) & \(7.8\) & \(0.1\) & - \\ PC-DARTS(ImageNet)†Xu et al. (2020) & - & - & \(24.2\) & \(7.3\) & - & \(3.8\) \\ \hline FTSO (CIFAR10 + 1 epoch)† & 2.48 & **2.23** & \(23.60\) & \(7.01\) & \(2\times 10^{-4}\) & - \\ FTSO (CIFAR10 + 1 iteration)† & 2.68 & 2.54 & \(24.36\) & \(7.27\) & \(\mathbf{7.87\times 10^{-6}}\) & - \\ FTSO (Full ImageNet + 1epoch)† & **2.35** & \(2.26\) & **23.58** & **6.80** & - & \(\mathbf{0.01}\) \\ \hline \hline \end{tabular}
* When testing on CIFAR10, these models adopt cut-out.
* These models are directly searched on ImageNet.
\end{table}
Table 3: Comparison with existing state-of-the-art image classification architectures
## 4 Experiments
Our search algorithm is evaluated on the three most widely-used datasets in NAS papers, namely, CIFAR10 (Krizhevsky et al., 2009), ImageNet (Russakovsky et al., 2015) and NATS-Bench (Dong et al., 2021). Following DARTS, our search space contains a total of eight candidate operators: \(3\times 3\) and \(5\times 5\) _separable convolutions_, \(3\times 3\) and \(5\times 5\) _dilated separable convolutions_, \(3\times 3\) _max_ and _average pooling_, _skip connection_ (i.e., \(output=input\)) and _zero_ (i.e., \(output=0\)). When searching for the topology, we pick only one operator from the candidate set. As mentioned in Section 3, we have two strategies to determine the operators, including the one based on gradients and the one directly replacing operators. In Sections 4.1 and 4.2, we focus more on the second configuration. In Section 4.4, we provide a comprehensive comparison of these two strategies. All detailed configurations are shown in the Appendices. In addition, most of our experiments only search for one epoch or one iteration because of the benefits of FTSO's huge reduction in the number of parameters. For more experimental support, please refer to Section 4.4. Note that it is almost impossible for existing models to obtain satisfying results by searching for only one epoch or one iteration because their super-nets contain a large number of parameters, which require a long period to tune.
### Results on CIFAR10
We compare FTSO to existing state-of-the-art NAS methods in Table 3. In the experiment, we only search for the topology with skip-connections and then replace them all with \(3\times 3\) _separable convolutions_. The reason that we do not adopt \(5\times 5\) _separable convolutions_ is that the pre-processed input images do not have enough resolution, and the network is rather deep. After a few layers, the convolution's receptive field becomes larger than the whole image. At that point, larger convolutional kernels may not bring benefits. Instead, the extra parameters brought by the larger kernel size may lead to over-fitting.
On the other hand, if both the image's resolution and the dataset's scale are large enough and the evaluation period is adequate, the \(5\times 5\) _separable convolution_ might be a better choice. The found architecture after one epoch's search is shown in Figure 2. Because FTSO contains only a few trainable parameters, it can achieve accuracy comparable to PC-DARTS with only a one-time gradient update. Under this configuration, a mere \(0.68\) seconds are required and \(99.993\%\) of the search time is saved. In addition, as shown in Table 1, when the topology is searched with powerful operators for a long period, an additional operator search usually helps. However, when we search for the topology with simple operators for a short period, omitting the operator search may lead to better results. This is because with simple operators and very few updates, the found topology can already generalize quite well.
### Results on ImageNet
On ImageNet, we use similar configurations to those on CIFAR10. When searching, we have two configurations. The detailed configurations are shown in the Appendices. Our experiments in Table 3 show that FTSO is significantly superior to existing methods in both efficiency and effectiveness. The found architectures after one epoch's search on CIFAR10 and the entire ImageNet are shown in Figure 2.
It is surprising that the best architecture we found on ImageNet is the shallowest and widest one. Compared to the much more 'reasonable' architectures shown in Figures 2(e) and 2(f), which were found with the topology search only containing \(3\times 3\) separable convolutions and an additional operator search on CIFAR10, the 'abnormal' architecture, containing the same amount of FLOPs and parameters, can achieve \(0.78\%\) higher testing accuracy. We think this is because the whole model is stacked with many cells. If the depth of each cell is too high, it leads to a very deep neural network. In that case, because all the operators in our found architecture are convolutions, we cannot use skip connections to facilitate gradient propagation in ResNet's manner. In this way, both gradient vanishing and gradient explosion may prevent the deeper models from achieving higher performance.
Figure 4: Ablation study Part 2. (a): CIFAR10: Accuracy - Operator replacing skip-connections; (b): CIFAR10: Accuracy - Training epochs of the found architecture; (c): Epoch-wise on CIFAR100: Accuracy - Max eigenvalue \(\nabla^{2}_{Arch}\mathcal{L}_{val}\); (d): Iteration-wise on CIFAR100: Accuracy - Max eigenvalue \(\nabla^{2}_{Arch}\mathcal{L}_{val}\).
### Results on NATS-Bench
In the search space of NATS-Bench, there is one input node, three intermediate nodes and one output node, and each intermediate node connects to all its predecessors. Here we implement FTSO based on DARTS instead of PC-DARTS, and we compare FTSO's performance to other NAS algorithms in Table 4. It is shown that FTSO dominates DARTS in all configurations. Coinciding with our analysis in Section 3, the architectures found by DARTS tend to only contain simple operators, thus cannot achieve satisfying accuracy. For example, when searching on CIFAR10, the architecture found by DARTS is full of _skip connections_ as shown in Figure 2(h). By comparison, as shown in Figure 2(g), the architecture found by FTSO is much more powerful.
### Ablation Study
In terms of a topology-only search, one epoch is just enough thanks to the far fewer kernel weights contained in FTSO, and more search epochs bring obvious disadvantages because of over-fitting. Since one epoch performs better than more epochs, this raises the question of whether one iteration is also superior to more iterations. We find that the found architecture's performance generally first drops and then increases within the first epoch, and then always decreases after the first epoch. In Figure 3(c) we show that although one iteration cannot surpass one epoch, it is better than a few iterations. This is because when we search for only one iteration, the model does not over-fit the data and thus generalizes well. When we search for only a few iterations, the number of different images seen by the model is not large enough; however, since the super-net only contains skip connections, such a number of gradient updates is enough for the architecture parameters to become over-fitted. This is the reason that a few iterations perform worse than one iteration. After we have searched for one whole epoch, the super-net has seen an enormous number of different images, which helps it generalize better on the testing set, and this is why one epoch performs best. In terms of whether we should search for one iteration or two, Figure 3(d) shows that both choices work well. When we do not search for the operators after the topology search, we assign all the remaining edges a fixed operator; thus, which operator to choose becomes a critical question. Figure 4(a) shows that a \(3\times 3\) _separable convolution_ can indeed outperform all other operators in terms of accuracy.
As shown in Figure 4(b), we find that under a different number of evaluation epochs, both the absolute value and the relative ranking of the same architecture's testing accuracy may vary; i.e., some architectures which perform well within 600 epochs perform poorly after 1200 epochs. However, in general, they still obey a positive correlation, with a Pearson correlation coefficient of \(0.77\), as shown in Figure 3(f). In terms of the generalization ability from CIFAR10 to ImageNet, Figure 5(d) reveals that the architectures which perform well after long-term evaluation on CIFAR10 can usually generalize better on ImageNet, with a correlation coefficient of \(0.7\); yet, as shown in Figure 5(c), there is no guarantee that those working well on CIFAR10 within limited evaluation epochs can also dominate on ImageNet. This is because it only proves that they can converge quickly, not that they can converge to a global optimum.
In Figures 4(c) and 4(d), we show that one epoch is not only an optimal choice on CIFAR10 but also enough for the topology-only search on CIFAR100. In addition, as the search epochs and iterations increase, the max eigenvalue of the loss's Hessian matrix on the validation set increases. At the same time, the testing accuracy generally decreases because the model's generalization ability is dropping. This phenomenon is particularly obvious epoch-wise because, after just a few iterations, the model can already reach a comparable accuracy on the training set; from then on, the model's performance on the testing set starts to relate to its generalization ability.

Figure 5: Ablation study Part 3. (a): CIFAR10: Operator search's epoch-wise impact on testing accuracy (0 epochs means only searching for the topology); (b): CIFAR10: The found architecture's training epochs' impact on the testing accuracy (same architecture); (c): Testing accuracy: CIFAR10 600 evaluation epochs - ImageNet 250 evaluation epochs (evaluating after training the same architecture on different datasets); (d): Testing accuracy: CIFAR10 1200 evaluation epochs - ImageNet 250 evaluation epochs (evaluating after training the same architecture on different datasets)
## 5 Generalization to other tasks and search spaces
As shown in Section 4, FTSO works well under different search spaces and node numbers. Theoretically, FTSO's advantages over DARTS grow as the search space and node number increase, because FTSO reduces the computational cost from \(O(n^{3})\) to \(O(n^{2})\) and avoids over-fitting. Based on these considerations, as future work we plan to apply FTSO to higher-level tasks, for example, instance segmentation and multi-view stereo. Although in this paper we establish FTSO within differentiable search spaces, the first-topology-second-operator strategy is not limited to any specific search space or task. Whether the search space is discrete or continuous, and whether the search algorithm is gradient-based or reinforcement-learning-based, we first shrink the candidate operator set and retain only the simplest operator, which in language modeling might be a _skip connection_ or a _pooling layer_. After this, the size of the whole search space is greatly reduced. Then, we search for the best topology with any available search algorithm, and a promising topology can be found. Finally, we can either directly assign each edge a powerful operator, which in language modeling might be an _LSTM unit_ or an _attention layer_, or use gradients to search for the operators. Generally, the direct replacement strategy leads to higher accuracy, and the gradient-based strategy reduces the model complexity.
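As a summary of the recipe above, the following minimal sketch expresses FTSO independently of the search space; `search_topology` and `search_operators` stand for whatever topology and operator search routines are available, and the default operator names are placeholders rather than identifiers from any released code.

```python
def ftso(search_topology, search_operators=None,
         simple_op="skip_connect", strong_op="sep_conv_3x3"):
    """First Topology Second Operator (FTSO) as a two-phase procedure.

    Phase 1: shrink the candidate set to one simple operator and search
             only for the edge topology (cheap, very few parameters).
    Phase 2: fix the topology and assign operators, either directly or
             with a gradient-based operator search.
    """
    # Phase 1: topology-only search with the simplest operator.
    topology = search_topology(candidate_ops=[simple_op])

    # Phase 2: operator assignment on the fixed topology.
    if search_operators is None:
        # Direct replacement: usually higher accuracy.
        return {edge: strong_op for edge in topology}
    # Gradient-based operator search: usually lower model complexity.
    return search_operators(topology)
```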
## 6 Conclusion
In this paper, we propose an ultra computationally efficient neural architecture search method named FTSO, which reduces NAS's search time cost from days to less than 0.68 seconds, while achieving 1.5% and 0.27% accuracy improvement on ImageNet and CIFAR10, respectively. Our key idea is to divide the search procedure into two sub-phases. In the first phase, we only search for the network's topology with simple operators. Then, in the second phase, we fix the topology and only consider which operators we should choose.
Our strategy is concise in both theory and implementation, and our promising experimental results show that current NAS methods contain too much redundancy, which heavily impacts the efficiency and becomes a barrier to higher accuracy. What is more, as mentioned in Section 5, our method is not bound by differentiable search spaces as it can also cooperate well with existing NAS approaches.
| Existing one-shot neural network architecture search (NAS) methods need to search a huge super-network, which makes the computational cost large. In this paper, we propose a method named FTSO that reduces this cost by dividing the architecture search into two sub-steps. Specifically, in the first step we search only for the topology, and in the second step we search for the operators. FTSO not only shortens the NAS search time from several days to 0.68 seconds, but also greatly improves the accuracy of the found architectures. In extensive experiments on ImageNet, FTSO achieves 76.4% testing accuracy within 18 seconds, a 1.5% improvement over the SOTA, PC-DARTS. Furthermore, FTSO achieves 97.77% testing accuracy on CIFAR10, a 0.27% improvement over the SOTA, PC-DARTS.
2309.10862 | Young nearby open clusters and their luminosity functions | Context. Open clusters are groups of coeval stars sharing properties such as
distance and metallicity, and they are key to understanding stellar evolution.
Aims. Our main goal is to study the evolution of open clusters with a special
focus on the universality of the luminosity function. Methods. We applied an
upgraded version of the convergent point technique on about 50 open clusters.
The selection of cluster members was based purely on the exquisite astrometry
of the Gaia DR3 and Hipparcos catalogues in the five-dimensional or full
six-dimensional space. Results. We present updated lists of bona fide members
of ~50 open clusters within 500 pc and younger than 1 Gyr, exploiting the full
depth of the third Gaia data release complemented by Hipparcos at the bright
end, excluding regions in the Galactic plane. Our catalogues also are
complemented by optical and infrared photometry from the major large-scale
public surveys. All the data will be made available on a dedicated webpage with
interactive plots and a direct link to Aladin and Vizier hosted at the Centre
de Données de Strasbourg. We derived luminosity functions for all bound
clusters and compared them in three age groups of ~50 Myr, ~150 Myr, and ~600
Myr, discussing similarities and differences to constrain their dynamical
evolution. Conclusions. Luminosity functions of clusters at 50 Myr are more
likely similar to each other and show a greater degree of similarity than older
clusters. We explain this observation with the universal luminosity function
within the volume of our sample (500 pc). Luminosity functions of clusters with
ages similar to the Pleiades or Hyades are more diverse, perhaps due to
internal dynamical evolution, but more work is needed to provide additional
evidence. | M. Žerjal, N. Lodieu, A. Pérez-Garrido, J. Olivares, V. J. S. Béjar, E. L. Martín | 2023-09-19T18:15:52 | http://arxiv.org/abs/2309.10862v1 | # Young nearby open clusters and their luminosity functions
###### Abstract
Context:Open clusters are groups of coeval stars sharing properties such as distance and metallicity, and they are key to understanding stellar evolution.
Aims:Our main goal is to study the evolution of open clusters with a special focus on the universality of the luminosity function.
Methods:We applied an upgraded version of the convergent point technique on about 50 open clusters. The selection of cluster members was based purely on the exquisite astrometry of the _Gaia_ DR3 and Hipparcos catalogues in the five-dimensional or full six-dimensional space.
Results:We present updated lists of bona fide members of \(\sim\)50 open clusters within 500 pc and younger than 1 Gyr, exploiting the full depth of the third _Gaia_ data release complemented by Hipparcos at the bright end, excluding regions in the Galactic plane. Our catalogues also are complemented by optical and infrared photometry from the major large-scale public surveys. All the data will be made available on a dedicated webpage with interactive plots and a direct link to Aladin and Vizier hosted at the Centre de Donnees de Strasbourg. We derived luminosity functions for all bound clusters and compared them in three age groups of \(\sim\)50 Myr, \(\sim\)150 Myr, and \(\sim\)600 Myr, discussing similarities and differences to constrain their dynamical evolution.
Conclusions:Luminosity functions of clusters at 50 Myr are more likely similar to each other and show a greater degree of similarity than older clusters. We explain this observation with the universal luminosity function within the volume of our sample (500 pc). Luminosity functions of clusters with ages similar to the Pleiades or Hyades are more diverse, perhaps due to internal dynamical evolution, but more work is needed to provide additional evidence.
## 1 Introduction
Open clusters are gravitationally bound groups of a few tens to over a thousand stars that formed from the same cloud of dust and gas. The similar distance, age, and metallicity of the cluster members mean that a group of clusters at different ages can be viewed as snapshots of stellar evolution across time. This makes them an ideal laboratory to study the universality of the initial luminosity and mass functions (Salpeter, 1955; Miller & Scalo, 1979; Kroupa, 1998; Scalo et al., 1998; Chabrier, 2001; Kroupa, 2001a; Bastian et al., 2010a), nuclear timescales, and the evolution of stellar multiplicity rates.
Over 1100 open clusters had already been discovered in our Galaxy (Mermilliod, 1995; Dias et al., 2002; Kharchenko et al., 2013) before the advent of the _Gaia_ astrometric mission (Gaia Collaboration et al., 2016). The _Gaia_ data releases (Gaia Collaboration et al., 2017, 2018, 2021, 2022) have resulted in improved lists of cluster members as well as the discovery of new clusters (e.g. Cantat-Gaudin et al., 2018; Castro-Ginard et al., 2019; Kounkel & Covey, 2019; Sim et al., 2019; Jaehnig et al., 2021), stellar streams (Borasto et al., 2020), tails, and coronae around clusters (Meingast et al., 2021; Moranta et al., 2022).
Precise parallaxes and proper motions from _Gaia_ have also improved the search for members in the kinematic parameter space. Different clustering techniques, in mostly five-dimensional space (i.e., missing radial velocities), have been used to evaluate membership in open clusters. Each of these techniques has their own biases and data quality cuts. For example, hierarchical density-based spatial clustering of applications with noise (HDBSCAN; McInnes et al., 2017) helped reveal a number of new clusters, associations, and comoving groups within 1 kpc (Kounkel & Covey, 2019). Tarricq et al. (2022) applied the same method to a search for cluster members up to 50 pc around their centres in _Gaia_ EDR3 data and detected vast coronae around almost all the clusters, as well as tidal tails in 71 open clusters. Castro-Ginard et al. (2018, 2019, 2020) used DBSCAN in _Gaia_ DR2 and He et al. (2022) used it in _Gaia_ EDR3 to discover numerous new clusters in the Milky Way. Unsupervised photometric membership assignment in stellar clusters (UPMASK; Krone-Martins & Moitinho, 2014) was applied to the proper motions and parallaxes in the TGAS catalogue (Michalik et al., 2015) by Cantat-Gaudin et al. (2018) to characterise 128 known open clusters in the solar neighbourhood. Cantat-Gaudin et al. (2018) used the same method on a compiled list of thousands of known or putative clusters from the literature in the _Gaia_ DR2 catalogue (Gaia Collaboration et al., 2018) to obtain
a list of members and cluster parameters for 1229 clusters. Recently, Perren et al. (2022) used the same algorithm for the analysis of the 25 most distant (\(>\)9 kpc) catalogued open clusters. In contrast with (H)DBSCAN and UPMASK, Extreme Deconvolution Gaussian Mixture Models (Bovy et al., 2011) take astrometric uncertainties into account. With this method, Olivares et al. (2019) analysed the oldest cluster in the solar neighbourhood, Ruprecht 147; Price-Whelan et al. (2019) discovered a disrupting young open cluster in the halo of the Milky Way; and (Jaehnig et al., 2021) provided membership lists for 431 open clusters. Perryman et al. (1998) developed a convergent point technique to retrieve the Hyades members in the Hipparcos data (Perryman et al., 1997). The membership of this cluster was later improved with the same method using the TGAS catalogue (Reino et al., 2018) and _Gaia_ DR2 (Lodieu et al., 2019). Lodieu et al. (2019) improved memberships of the \(\alpha\) Per, Pleiades, and Praesepe clusters with the same algorithm.
While stellar mass is the main parameter of the evolution of a star, its determination usually depends on models and their assumptions, except in the case of spectroscopic or eclipsing binaries. Observations have shown that the shape of the initial mass function appears universal among star-forming regions and open clusters, at least formally within the error bars (Bastian et al., 2010). Likewise, the luminosity function constructed from direct observables - stellar magnitudes that are model-free (unlike stellar masses) - reflects the cluster's initial mass function, the history of its dynamical evolution, and its age. For example, when stars exhaust their hydrogen fuel and become giants, their luminosity increases. Similarly, dynamical evolution causes mass segregation and the gradual evaporation of low-mass members through interactions with external massive objects as the cluster moves around the Galaxy.
The main goal of this study is to produce catalogues of members for a volume-limited (distances less than 500 pc) and age-limited (younger than 1 Gyr) sample of open clusters outside the Galactic plane, taking advantage of the latest _Gaia_ Data Release 3 (DR3; Gaia Collaboration et al., 2022). We do not make any magnitude cuts to reach the faintest member objects, and we include bright Hipparcos stars that are missing parallaxes and proper motions in _Gaia_ in order to reach maximal completeness. Our aim is to study the evolution of the luminosity functions, dividing our sample of clusters into three age groups. We will make all the catalogues and plots available publicly through an interactive website for the scientific community.
The paper is organised as follows. Section 2 introduces the main source of data and the clusters under scrutiny. Section 3 describes the procedure to infer membership of any source to a given cluster. We describe the catalogues of members in Section 4 and provide details about the website with interactive plots in Section 5. We analyse the luminosity functions of the clusters in Section 6 and conclude in Section 7.
## 2 Data
In this section, we describe the selection procedure to compile a list of clusters, which is the main focus of this study. Then, we continue with the summary of the data to evaluate individual stellar membership for each cluster. This includes stellar positions, distances, proper motions, and radial velocities as well as their mass estimates and supplementary photometric measurements.
### Selection of clusters
Our preliminary list of clusters is based on the known clusters from Kharchenko et al. (2013), Cantat-Gaudin et al. (2018), Tarricq et al. (2021), and Dias et al. (2021) as well as the WEBDA website.1 To avoid crowded and highly reddened regions of the Milky Way, we only selected clusters that are located outside the Galactic plane (\(|b|\)\(>\) 10 deg). We excluded clusters beyond 500 pc due to the increasing parallax uncertainties of their members. We set an upper literature age limit of 1 Gyr since our objective is to focus on young clusters, and there are not many older open clusters within 500 pc. For example, in the list of Friel (1995), we find one such cluster, NGC 752, at \(\sim 430\) pc with an age of 1.61 \(\pm\) 0.03 \(\pm\) 0.05 Gyr (Sandquist et al., 2023). However, we added well-known clusters such as IC 2602, IC 2391, and \(\alpha\) Persei. These three clusters lie within \(|b|\) \(<\) 10 deg but have been included because they are nearby and young and have been studied in detail in the literature (e.g. Randich et al., 2001; Lodieu et al., 2005; Dobbie et al., 2010; Lodieu et al., 2012; Nisak et al., 2022; Lodieu et al., 2019). The following clusters/regions were removed because they are not isolated: Sigma Orionis (Garrison, 1967; Barrado y Navascues et al., 2001; Caballero, 2018), IC 348 and NGC 1333 (Luhman et al., 2016; Olivares et al., 2023), Feigelson 1 (also known as the \(\alpha\) Cha Cluster; Dickson-Vandervelde et al., 2021), Platais 2 and 5 (Platais et al., 1998), UBC 19 (Castro-Ginard et al., 2018), Melotte 227 (Epstein, 1968), Collinder 65 (Yen et al., 2018), Collinder 70 (Caballero & Solano, 2008), and ASCC 100 (Kharchenko et al., 2005). Many of them are found in very young complex regions (e.g. Taurus, Orion, Upper Scorpius, and Rho Ophiuchi) that display an extended structure that is often non-trivial to isolate from the surrounding environment. This selection left us with 49 clusters, which are listed in Table 1 with their equatorial and Galactic coordinates, distances, and numbers of initial candidates (in total and with radial velocities). All clusters are unique, and a few of them are located relatively close to each other; in rare cases, their members might partially overlap. We explain this issue in Section 4.1.
Footnote 1: [https://webda.physics.muni.cz/navigation.html](https://webda.physics.muni.cz/navigation.html)
### Stellar positions and velocities
We downloaded the catalogue with astrometric measurements of the Milky Way stars from the _Gaia_ DR3 database (Gaia Collaboration et al., 2016, 2022). We limited the selection to stars with parallax uncertainties better than 20% but applied no constraints on any other parameters except for parallax to encompass a parameter space large enough to include all possible candidate members for all clusters. Due to the fact that stars with 1/parallax just beyond 500 pc have relatively large parallax uncertainties (typical parallax error of 12% as opposed to 9% for stars between 400 and 500 pc and 4% for stars between 100 and 200 pc) and can thus still be consistent with the distance of a cluster at 500 pc, we set the parallax limit to 1.6 mas, which corresponds to distances up to 625 pc. Our input catalogue therefore includes right ascension, declination, parallax, proper motions, and associated uncertainties for 44 million stars.
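For illustration, a selection of this kind can be expressed as an ADQL query against the _Gaia_ archive through the astroquery interface; the cuts below mirror the ones just described (parallax \(>\) 1.6 mas and parallax uncertainties better than 20%, i.e. parallax_over_error \(>\) 5), but the snippet is a sketch of our selection rather than the literal query we ran.

```python
from astroquery.gaia import Gaia

# Sketch of the Gaia DR3 selection: parallax > 1.6 mas (< ~625 pc) and
# parallax uncertainties better than 20% (parallax_over_error > 5).
query = """
SELECT source_id, ra, dec, parallax, parallax_error,
       pmra, pmdec, radial_velocity,
       phot_g_mean_mag, phot_rp_mean_mag
FROM gaiadr3.gaia_source
WHERE parallax > 1.6
  AND parallax_over_error > 5
"""

job = Gaia.launch_job_async(query)   # asynchronous job for a large result set
stars = job.get_results()            # astropy Table with ~44 million rows
```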
To mitigate the risk of negative parallaxes for our most distant stars, parallax uncertainties needed to be taken into account in the distance determination (e.g. see Figure 3 in Luri et al., 2018). Stellar distances were obtained from Bailer-Jones et al. (2021), who used parallaxes from _Gaia_ EDR3 (Gaia Collaboration et al. 2021) with a prior constructed from a 3D model of the Galaxy. In particular, we worked with their geometric distances r_med_geo and their lower and upper limits r_lo_geo and r_hi_geo, respectively. However, the scale length of their Galactic prior varies across the sky. If this scale distance is far away and the parallax uncertainty is high, the mode of the distance posterior may lie considerably away from the true distance. For parallaxes with an error of 20%, this translates to a distance error of \(10-20\%\) for a cluster at 500 pc, as shown in Fig. 1 of Olivares et al. (2020). Despite the exclusion of the clusters in the prior of
\begin{table}
\begin{tabular}{l r r r r r r r r} \hline Cluster & RA & DEC & Distance & 1 & b & N(init) & N(RV) & Ref \\ \hline & deg & deg & \multicolumn{2}{c}{pc} & \multicolumn{2}{c}{deg} & \multicolumn{2}{c}{deg} & \multicolumn{1}{c}{} \\ \hline ASCC 101 & 288.3179 & 36.3917 & 392.4 \(\pm\) 3.6 & 68.0206 & 11.6767 & 93 & 38 & 1 \\ ASCC 41 & 116.7218 & 0.0704 & 293.1 \(\pm\) 2.5 & 219.2884 & 12.3447 & 162 & 54 & 1 \\ Alessi 10 & 301.2260 & \(-\)10.4884 & 433.7 \(\pm\) 3.6 & 31.6592 & \(-\)21.0234 & 112 & 28 & 2 \\ Alessi 13 & 51.4891 & \(-\)35.8355 & 103.5 \(\pm\) 0.9 & 237.5839 & \(-\)56.1610 & 57 & 27 & 1 \\ Alessi 24 & 260.8358 & \(-\)62.7086 & 477.8 \(\pm\) 4.8 & 328.9885 & \(-\)14.6079 & 189 & 52 & 3 \\ Alessi 3 & 109.2165 & \(-\)46.3786 & 275.2 \(\pm\) 2.6 & 257.6573 & \(-\)15.1952 & 88 & 47 & 1 \\ Aveni-Hunter 1 & 354.3071 & 48.3612 & 416.4 \(\pm\) 5.6 & 110.4164 & \(-\)21.6989 & 113 & 29 & 2 \\ Blanco 1 & 0.8825 & \(-\)29.9150 & 234.3 \(\pm\) 0.8 & 15.2846 & \(-\)79.1200 & 519 & 117 & 1 \\ Collinder 135 & 109.3361 & \(-\)36.7609 & 293.9 \(\pm\) 1.8 & 248.7192 & \(-\)11.0970 & 212 & 66 & 1 \\ Collinder 350 & 267.0184 & 1.4074 & 366.0 \(\pm\) 3.3 & 26.8441 & 14.7183 & 155 & 76 & 1 \\ Gulliver 20 & 273.6913 & 11.1062 & 419.8 \(\pm\) 5.0 & 38.8340 & 13.1195 & 80 & 28 & 3 \\ IC 2391 & 130.2486 & \(-\)53.0266 & 150.5 \(\pm\) 0.5 & 270.3989 & \(-\)6.7794 & 198 & 69 & 3 \\ IC 2602 & 160.7801 & \(-\)64.3706 & 150.4 \(\pm\) 0.7 & 289.6021 & \(-\)4.8724 & 249 & 88 & 3 \\ IC 4665 & 266.3973 & 5.6721 & 340.9 \(\pm\) 3.1 & 30.4945 & 17.2198 & 179 & 52 & 1 \\ Mamajek 1 & 130.6553 & \(-\)78.3476 & 100.9 \(\pm\) 1.8 & 291.8782 & \(-\)21.2700 & 15 & 10 & 1 \\ Mamajek 2 & 264.3584 & \(-\)7.9914 & 195.1 \(\pm\) 2.3 & 17.0597 & 12.4905 & 62 & 20 & 1 \\ Mamajek 3 & 81.8939 & 6.6251 & 98.5 \(\pm\) 2.3 & 197.0021 & \(-\)15.2903 & 24 & 10 & 1 \\ Mamajek 4 & 276.5463 & \(-\)50.9501 & 441.2 \(\pm\) 3.1 & 343.8369 & \(-\)17.0053 & 90 & 48 & 1 \\ Melotte 111 & 185.8656 & 26.0623 & 85.5 \(\pm\) 0.6 & 221.0263 & 83.6569 & 99 & 62 & 1 \\ Melotte 20 & 51.5358 & 48.9839 & 173.3 \(\pm\) 0.5 & 147.3072 & \(-\)6.4265 & 523 & 173 & 5 \\ Melotte 22 & 56.6517 & 24.0765 & 135.0 \(\pm\) 0.3 & 166.5277 & \(-\)23.6113 & 996 & 358 & 1 \\ Melotte 25 & 66.9682 & 16.5421 & 46.5 \(\pm\) 0.1 & 179.6565 & \(-\)21.7361 & 101 & 73 & 2 \\ NGC 1039 & 40.5334 & 42.7536 & 491.6 \(\pm\) 1.9 & 143.6702 & \(-\)15.6167 & 580 & 219 & 2 \\ NGC 1662 & 72.1526 & 10.8962 & 402.6 \(\pm\) 2.6 & 187.7547 & \(-\)21.1051 & 233 & 97 & 1 \\ NGC 1901 & 79.8301 & \(-\)68.3214 & 415.2 \(\pm\) 4.1 & 278.8643 & \(-\)33.5280 & 74 & 42 & 1 \\ NGC 2516 & 119.5561 & \(-\)60.7628 & 407.3 \(\pm\) 0.6 & 273.8338 & \(-\)15.8432 & 2211 & 626 & 1 \\ NGC 2632 & 129.9961 & 19.6375 & 183.4 \(\pm\) 0.4 & 205.9118 & 32.3824 & 732 & 324 & 1 \\ Platais 3 & 68.8478 & 71.2787 & 178.0 \(\pm\) 1.9 & 138.9824 & 15.7877 & 69 & 32 & 1 \\ Platais 4 & 77.1983 & 22.3997 & 313.5 \(\pm\) 26.9 & 180.8842 & \(-\)10.5194 & 74 & 20 & 1 \\ Stephenson 1 & 283.5958 & 36.8612 & 354.6 \(\pm\) 2.0 & 66.8801 & 15.3303 & 321 & 63 & 1 \\ Turner 5 & 142.9664 & \(-\)36.4349 & 416.7 \(\pm\) 3.9 & 264.1163 & 10.9797 & 41 & 22 & 1 \\ UBC 12 & 287.9726 & 56.8465 & 233.2 \(\pm\) 1.8 & 87.5620 & 19.7780 & 116 & 43 & 4 \\ UBC 12 & 126.0640 & \(-\)8.5525 & 437.6 \(\pm\) 9.4 & 231.7665 & 16.1980 & 24 & 16 & 4 \\ UBC 159 & 339.0372 & 39.6582 & 489.7 \(\pm\) 2.7 & 96.4360 & \(-\)16.1532 & 78 & 23 & 4 \\ UBC 31 & 61.0929 & 32.8044 & 356.2 \(\pm\) 1.6 & 163.2998 & \(-\)14.5508 & 487 & 110 & 3 \\ UBC 7 & 106.6258 & 
\(-\)37.6535 & 273.9 \(\pm\) 1.7 & 248.6182 & \(-\)13.4307 & 167 & 34 & 4 \\ UBC 8 & 84.4439 & 57.1315 & 471.9 \(\pm\) 2.7 & 154.8780 & 13.3166 & 144 & 60 & 4 \\ UBC 9 & 276.7157 & 26.4682 & 34 \\ \hline \end{tabular}
\end{table}
Table 1: Equatorial and Galactic coordinates, distances, and numbers of initial candidate members (total, N(init), and with radial velocities, N(RV)) for the clusters in our sample.
Bailer-Jones et al. (2021), the effect on the derived distances is negligible. The difference with the inverted parallax distance is minimal, typically less than 1 pc for stars closer than 400 pc and 4 pc for stars beyond this limit.
The latest _Gaia_ DR3 catalogue (Katz et al. 2022) contains radial velocities for 33 million stars. The overall radial velocity completeness ratio in our sample is 14%, and 44% of the stars with radial velocity have uncertainty better than 10%. The completeness varies over spectral types, from 92% for G stars to 2% for M stars, with typical uncertainties of 1.44 km s\({}^{-1}\) for G and 4.60 km s\({}^{-1}\) for M stars, respectively.
While the majority of our clusters are composed of relatively faint stars, the nearest ones contain stars visible to the naked eye. Due to the saturation limit at the bright end, 20% of stars brighter than magnitude three do not have an entry in _Gaia_ EDR3 (Fabricius et al. 2021). Additionally, a fraction of bright sources, for example Alcyone, the brightest star in the Pleiades cluster, lack measurements of parallaxes and proper motions. Fortunately, the saturation limit in _Gaia_'s predecessor, the _Hipparcos_ space telescope, was much brighter. We therefore took parallaxes, proper motions, and their uncertainties from the new reduction of the _Hipparcos_ measurements (van Leeuwen 2007). We used the cross-match with _Gaia_ DR3 that is available in the _Gaia_ archives and added the measurements to our list in the following way: If a _Gaia_ source brighter than \(G\) = 6 mag lacked parallax, we inserted the _Hipparcos_ data for this star. Furthermore, there is a small number of _Hipparcos_ stars that do not have a counterpart in _Gaia_. We appended such stars brighter than \(H_{p}\) = 6 mag to our list and noted the underlying assumption of a perfect cross-match. This might not be true in case of binary stars unresolved in _Hipparcos_. We checked the offsets of the Hipparcos catalogue with respect to _Gaia_ for the entire overlapping sample (100,000 stars) and provided the median values. There is an offset of 0.1087 mas in the parallax. This is below the typical parallax uncertainty of 0.3600 mas in Hipparcos but greater than a typical _Gaia_ parallax error of 0.0662 mas in this magnitude range. The offsets of proper motions are 0.0972 and \(-0.0165\) mas yr\({}^{-1}\) in right ascension and declination, respectively. These values are not only smaller than Hipparcos uncertainties (0.3400 and 0.2900 mas yr\({}^{-1}\), respectively) but also smaller or comparable with the _Gaia_ uncertainties (0.0658 and 0.0606 mas yr\({}^{-1}\), respectively).
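The bright-end completion can be summarised by the sketch below; the data-frame and column names (other than the standard _Gaia_ columns) are illustrative, and the rule is exactly the one described above: _Hipparcos_ astrometry is inserted for bright _Gaia_ sources without a parallax, and bright _Hipparcos_-only stars are appended.

```python
import pandas as pd

def merge_hipparcos(gaia: pd.DataFrame, hip: pd.DataFrame) -> pd.DataFrame:
    """Sketch of the bright-end completion with Hipparcos (van Leeuwen 2007).

    `gaia` is assumed to carry source_id, phot_g_mean_mag, parallax, pmra, pmdec;
    `hip` is assumed to carry the same astrometry, an Hp magnitude (hp_mag) and
    the Gaia source_id of its cross-match (gaia_source_id). Names illustrative.
    """
    merged = gaia.copy()

    # 1) Gaia sources brighter than G = 6 mag without a parallax:
    #    take parallax and proper motions from the Hipparcos cross-match.
    bright_missing = (merged["phot_g_mean_mag"] < 6) & merged["parallax"].isna()
    hip_by_id = hip.set_index("gaia_source_id")
    for col in ["parallax", "pmra", "pmdec"]:
        merged.loc[bright_missing, col] = (
            merged.loc[bright_missing, "source_id"].map(hip_by_id[col])
        )

    # 2) Hipparcos stars brighter than Hp = 6 mag with no Gaia counterpart:
    #    append them, assuming a perfect cross-match for the rest.
    no_gaia = hip[(~hip["gaia_source_id"].isin(gaia["source_id"]))
                  & (hip["hp_mag"] < 6)]
    return pd.concat([merged, no_gaia], ignore_index=True)
```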
### Stellar masses
One of the steps in our membership estimation algorithm described below in Section 3.2 is the determination of the systemic velocity for each cluster (i.e. the mean velocity of the centre of mass). This means that we needed to estimate individual stellar masses. Approximate mass computation is sufficient for our needs, assuming a spherical symmetry of the clusters and the number of members large enough to provide a robust estimate. This presumption is supported by Perryman et al. (1998) and Reino et al. (2018), who concluded that the chosen weighting scheme in mass determination does not have a large influence on the resulting barycentre. Similarly, assuming an isotropic distribution of binaries in clusters, their mass determination technique does not have an impact on the barycentre (Lodieu et al. 2019b). Unresolved binary stars in our catalogue thus have masses determined in the same way as single objects. We note that this does, however, skew the total mass of the cluster.
Mass estimation using the luminosity-mass or magnitude-mass relations requires a particular model for each cluster to meet its age and metallicity. To simplify the process, we adopted a colour-mass relation that is valid for a wide range of masses and ages below 1 Gyr. A comparison of different isochrones with ages between 50 Myr and 1 Gyr in Figure 1 shows no large variation that would exceed our requirements of rough mass estimates. We thus chose the models at 200 Myr.
Our mass estimation is based on the colour-mass relation from two models that are joined together at the point of their best overlap at \(G-R_{\rm P}=0.8\) mag. The vast majority of cluster member candidates have positive \(G-R_{\rm P}\) colours. We took the isochrones with solar metallicity from Baraffe et al. (2015) for \(G-R_{\rm P}\geq 0.8\) mag and PARSEC isochrones (the PAdova and TRieste Stellar Evolution Code; Bressan et al. 2012; Chen et al. 2014, 2015; Tang et al. 2014; Marigo et al. 2017; Pastorelli et al. 2019, 2020) for stars with \(G-R_{\rm P}<0.8\) mag. We fit an 11th order polynomial to these two isochrones to provide a relation that can be applied to all stars in our list. To avoid the extrapolation problems outside the models, we set the lower mass limit to 0.05 \(M_{\odot}\). For dwarf stars with \(G-R_{\rm P}<0\), we followed a similar procedure but took the 50 Myr isochrones.
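In practice, the joint fit can be carried out with a single polynomial over the concatenated isochrone tables, for instance as in the sketch below; the isochrone arrays are assumed to have been read beforehand as (\(G-R_{\rm P}\), mass) pairs from the 200 Myr models, and the function names are ours.

```python
import numpy as np

def build_colour_mass_relation(parsec_colour, parsec_mass,
                               bhac15_colour, bhac15_mass,
                               join_colour=0.8, degree=11, mass_floor=0.05):
    """Fit one polynomial colour-mass relation to two isochrone tables.

    PARSEC is used blueward of the join colour and BHAC15 redward of it,
    mirroring the procedure described in the text.
    """
    blue = parsec_colour < join_colour
    red = bhac15_colour >= join_colour
    colour = np.concatenate([parsec_colour[blue], bhac15_colour[red]])
    mass = np.concatenate([parsec_mass[blue], bhac15_mass[red]])
    coeffs = np.polyfit(colour, mass, deg=degree)

    def colour_to_mass(g_rp):
        # Evaluate the polynomial and floor the result at 0.05 Msun.
        return np.maximum(np.polyval(coeffs, np.asarray(g_rp)), mass_floor)

    return colour_to_mass
```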
The masses of _Hipparcos_ stars are based on the \(B-V\)-mass relation from an updated table3 originally published by Pecaut & Mamajek (2013, hereafter cited as P13). We fit a 21st order polynomial to this relation and estimated the masses of _Hipparcos_ stars using their \(B-V\) colours. Again, we enforced all masses to be greater than 0.05 \(M_{\odot}\). In the mass determination with both colours, we manually set the masses of a few individual stars above 5 \(M_{\odot}\) using Tetzlaff et al. (2011).
Footnote 3: [http://www.pas.rochester.edu/~emamajek/EEM_dwarf_UBVIJHK_colors_Teff.txt](http://www.pas.rochester.edu/~emamajek/EEM_dwarf_UBVIJHK_colors_Teff.txt), v2021.03.02.
Some of our older clusters contain white dwarfs. Typically, about 4% of objects in the input catalogue are found in the white dwarf region of the colour-magnitude diagrams. We defined all stars fainter than \(M_{\rm G}=10(G-R_{\rm p})+5\) as white dwarfs and assigned them a typical mass of a DA white dwarf (hydrogen lines), 0.59 M\({}_{\odot}\) (Kepler et al. 2007), which represent 80% of all white dwarfs. This sample partially overlaps with a list of white dwarfs identified in _Gaia_ EDR3 (Gentile Fusillo et al. 2021).
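In terms of the _Gaia_ photometry, the white-dwarf treatment amounts to a simple cut and mass assignment, sketched here with illustrative array names.

```python
import numpy as np

def apply_white_dwarf_masses(g_mag, rp_mag, dist_pc, colour_based_mass):
    """Flag white dwarfs with the cut M_G > 10 (G - RP) + 5 and assign them
    the typical DA mass of 0.59 Msun; inputs are NumPy arrays."""
    abs_g = g_mag - 5.0 * np.log10(dist_pc) + 5.0       # absolute G magnitude
    is_wd = abs_g > 10.0 * (g_mag - rp_mag) + 5.0        # fainter than the cut line
    return np.where(is_wd, 0.59, colour_based_mass), is_wd
```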
Figure 1: Comparison of isochrones for a range of ages from 50 Myr to 1 Gyr for the BHAC15 (dashed lines) and PARSEC models (solid lines). The polynomial fit used as a colour-mass relation is shown in magenta. While we only plot the relation for \(G-R_{\rm P}>-0.2\), the fit extends to bluer colours in order to cover the most massive stars in our input catalogue.
### Field population
We compared the luminosity functions and mass distributions of the clusters with the field population. We thus prepared a sample of field stars in the following way: We selected stars from the _Gaia_ DR3 catalogue within 25 pc with small parallax errors by requiring parallax \(>\) 40 mas and parallax_error \(<\) 8 mas in the _Gaia_ query. We excluded stars with either missing or noisy photometry in the \(G\) and \(R_{p}\) filters. In particular, we required phot_rp_mean_flux_over_error and phot_g_mean_flux_over_error to be more than 100. These selection criteria resulted in 4760 stars that we proclaim as the field population. We computed the cumulative mass distribution - the total mass within the given distance to the Sun - which reflects a uniform distribution with no over-densities. We tested this claim by taking the same sample and simulating a uniform distribution of the stars within a sphere with the same volume. In practice, we kept the original locations of the stars but shuffled their masses. The cumulative mass distributions of both samples are practically the same and correspond to a mass density of 0.033 M\({}_{\odot}\)pc\({}^{-3}\) (see Figure 2). This sample was assumed to be complete down to L4 objects, while the completeness for the most distant clusters in our list does not exceed mid-M dwarfs. For comparison, we expected it to be complete to M9/L0 in the Hyades (see our reasoning in Section 2.3) and \(\sim\)M7 in the Pleiades (Lodieu et al. 2019a,b). This means that membership lists for most of the clusters in this work lack brown dwarfs. We note that the comparison with the field is not ideal for more distant clusters, since these samples might not come from the same environment and are affected by completeness and contamination issues.
### Photometric data
We cross-matched our catalogues of cluster members (see Section 4.1) with the photometric surveys described in Table 2 to complement the _Gaia_ astrometry and photometry of all the stars. We used cross-matches provided in the _Gaia_ database for 2MASS, AllWISE, and Pan-STARRS. For SDSS, UKIDSS, and VISTA, we performed a coordinate cross-match in TopCat within the radius of 3 arcsec.
## 3 Membership determination
The membership determination technique used in this work exploits the fact that stars in open clusters display a common location and velocities, as opposed to the uniform distribution of field stars in a Cartesian parameter space. The method takes advantage of the full 6D space (i.e. stellar positions and velocities) and the 5D space, when radial velocities are missing. It is based on the comparison of the transverse and radial velocities of a candidate star with the values expected for a member of a cluster at the location of a star. The expected values were estimated from the cluster's systemic velocity and the position of the candidate star with respect to the cluster. This approach was developed by Perryman et al. (1998) to retrieve the members of the Hyades cluster from the _Hipparcos_ data with a later application to Gaia DR1 (Reino et al. 2018). Lodieu et al. (2019b) and Lodieu et al. (2019a) utilised the same method on the _Gaia_ DR2 data to improve the memberships of the Hyades, \(\alpha\) Persei, the Pleiades, and Praesepe clusters. The specifics of this technique are described in the aforementioned papers. In this section, we describe the input data for each cluster and follow it with a summary of our membership selection algorithm, which is an improved version of the one used by Lodieu et al. (2019b).
The terms we use to describe different samples of stars in the next sections are the following: 'Preliminary members' are used only in the membership determination method but are not part of the results (see Section 3.1.1). 'Bona fide members' are the most reliable members of the cluster, as these are stars within 1 tidal radius of the cluster centre (see Section 4). Finally, 'cluster members' include stars up to 3 tidal radii.
### Input catalogues for each cluster
The input catalogues for each cluster consist of two parts, the list of preliminary candidate members and the list of objects in the cluster region. The purpose of the catalogue with the preliminary candidate members is the computation of the initial systemic velocity and barycentre of the cluster, which were later refined in the iterative loop described in Section 3.2. As for the list of objects in the cluster region, it encompasses a large volume around the cluster in order to include all potential candidate members. This list is a simple subset of our entire _Gaia_ catalogue with 44 million stars, outlined in Sect. 2.2, and its purpose is to speed up the membership selection algorithm. We added Cartesian coordinates for stellar velocities and positions as required in some steps of the membership determination algorithm. In the following subsections, we describe the preparation process for both datasets and the coordinate transformations.
#### 3.1.1 Preliminary candidate members
The catalogue of preliminary candidate members was prepared manually for each cluster to ensure a coherent catalogue. The goal was to find enough bona fide members to estimate the initial systemic velocity and barycentre of a cluster in a robust way. This means that our goal was to have a low contamination rate and that we were not concerned with the completeness level.
In the first step, we limited the data to the region of the sky centred on a cluster with coordinates reported in the literature. This region typically spans an angle in the sky that corresponds to a diameter of 60 pc at the distance of the cluster centre. Since the typical radius of a bound cluster is less than 10 pc, this diameter should include all bona fide members. We note that not all clusters appear as clear over-densities in the sky. Therefore, we identified each of them as an overdensity in the proper motion space with the help of the literature values. We noted an occasionally complex and crowded nature for the proper motion space with asymmetric and sometimes partially overlapping over-densities, due to the rather large selected area in the sky. We refer the reader to Appendix D for a description of clusters with peculiarities. The cut in the proper motion space is circular with a typical radius of 0.7 mas yr\({}^{-1}\). We manually adjusted this radius after a visual inspection to cover the obvious overdensity but stayed conservative in order to limit the contamination to the minimum.
To further help eliminate interlopers, we removed outliers in the distance versus total proper motion space. This step revealed a clear, narrow sequence in the colour-magnitude diagram that represents a coeval population of stars.
The selection of initial candidate members was an iterative process because some individual and less populated clusters appeared as somewhat sparse groups of stars in the sky with only a small number of members. We modified the parameters of the cuts as needed to optimise the selection. Finally, we manually re
moved the outliers in the colour-magnitude diagram, taking out all white dwarfs and (sub)giants as well as any stars with radial velocities significantly different from the average value in the sample (e.g. more than 20 km s\({}^{-1}\)). This left us with a sample of stars that are very likely members of the cluster. The number of initial candidate members and the number of stars with a radial velocity in the initial sample is listed in Table 1. The fraction of stars with a radial velocity in the initial sample has increased thanks to _Gaia_ DR3 and varies from 20-70%, depending on the spectral type, with a typical value of 30%. These numbers enable good statistics for determining a robust initial systemic velocity of the cluster.
#### 3.1.2 Objects in the cluster region
To ensure the volume of each of the input catalogues was large enough to enclose all potential candidate members, we performed wide and simple cuts to our fully downloaded _Gaia_ catalogue (Section 2.2). In particular, we only made cuts in the right ascension and declination in order to limit the catalogue to the cluster centre and its vicinity in the sky. We took the right ascension and declination from the literature and kept all stars located within an angular span with a radius of approximately three times the estimated radius of the cluster. We visually compared the distribution of this input catalogue and the initial list of candidate members in the sky to make sure the volume is large enough. We did not make any additional cuts in the distance in order to mitigate the risk of missing potential members with large distance uncertainties (except for the distance cuts explained in Section 2.2). The typical number of objects in the cluster region varies from a few tens of thousands to a few hundred thousand stars, depending on the part of the sky and distance from the Galactic plane.
#### 3.1.3 Coordinate transformations
We transformed the spatial observables \(\alpha\) and \(\delta\) as well as the distance \(d\) (in parsecs) from the input catalogue of 44 million stars to the Galactic Cartesian system of physical distances with the centre in the Sun. The position of the \(i\)th star is described by \(\mathbf{b}_{i}=(b_{xi},\ b_{yi},\ b_{zi})\). The velocity vector \(\mathbf{v}_{i}=(v_{xi},\ v_{yi},\ v_{zi})\) was computed by the transformation of the observed transverse and radial velocities into the equatorial coordinate frame with
\[\begin{pmatrix}v_{xi}\\ v_{yi}\\ v_{zi}\end{pmatrix}=\mathbf{R}_{i}\begin{pmatrix}V_{\alpha^{*}i}\\ V_{\delta i}\\ V_{Ri}\end{pmatrix}. \tag{1}\]
The radial velocity is denoted by \(V_{Ri}\). The transverse velocities \(V_{\alpha^{*}i}\) and \(V_{\delta i}\) are defined as

\[V_{\alpha^{*}i}=\mu_{\alpha^{*}i}A_{v}/\hat{\pi}_{i},\quad\text{and} \tag{2}\] \[V_{\delta i}=\mu_{\delta i}A_{v}/\hat{\pi}_{i}, \tag{3}\]

where \(\mu_{\delta i}\) and \(\mu_{\alpha^{*}i}=\mu_{\alpha i}\cos\delta_{i}\) are proper motions and \(\hat{\pi}_{i}\) is the parallax computed from the distance r_med_geo. The constant \(A_{v}=4.74047\) km yr s\({}^{-1}\) was used to convert the units and express \(V_{\alpha^{*}i}\) and \(V_{\delta i}\) in kilometres per second. The rotation matrix \(\mathbf{R}_{i}\) is given by
\[\mathbf{R}_{i}=\begin{pmatrix}-\sin\alpha_{i}&-\sin\delta_{i}\cos\alpha_{i}& \cos\delta_{i}\cos\alpha_{i}\\ \cos\alpha_{i}&-\sin\delta_{i}\sin\alpha_{i}&\cos\delta_{i}\sin\alpha_{i}\\ 0&\cos\delta_{i}&\sin\delta_{i}\end{pmatrix}. \tag{4}\]
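Equations (1)-(4) translate directly into a few lines of NumPy; the sketch below takes coordinates in degrees, proper motions in mas yr\({}^{-1}\) (with pmra including the \(\cos\delta\) factor), parallax in mas, and radial velocity in km s\({}^{-1}\), and returns the equatorial Cartesian velocity vector of one star.

```python
import numpy as np

A_V = 4.74047  # km yr s^-1, converts (mas/yr) / mas to km/s

def cartesian_velocity(ra_deg, dec_deg, parallax_mas, pmra_masyr, pmdec_masyr, rv_kms):
    """Observed (pmra*, pmdec, RV) -> Cartesian velocity, following Eqs. (1)-(4)."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)

    # Transverse velocities (Eqs. 2-3); pmra is assumed to include cos(dec).
    v_alpha = pmra_masyr * A_V / parallax_mas
    v_delta = pmdec_masyr * A_V / parallax_mas

    # Rotation matrix R_i (Eq. 4).
    R = np.array([
        [-np.sin(ra), -np.sin(dec) * np.cos(ra), np.cos(dec) * np.cos(ra)],
        [ np.cos(ra), -np.sin(dec) * np.sin(ra), np.cos(dec) * np.sin(ra)],
        [        0.0,               np.cos(dec),              np.sin(dec)],
    ])

    # Eq. (1): (v_x, v_y, v_z) = R (V_alpha*, V_delta, V_R).
    return R @ np.array([v_alpha, v_delta, rv_kms])
```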
### Membership determination algorithm
Stellar clusters appear as over-densities in the spatial and kinematic parameter space. The process of identifying cluster members thus operates in the full 6D parameter space. The algorithm takes the kinematic properties of the stars in the cluster region and compares them with the systemic velocity of the group. However, the members of a cluster must be known in order to determine its systemic properties. To break this cyclic dependency, we used a two-step procedure that iteratively improves both the membership list and the systemic cluster properties. In the first iteration, we predefined the systemic cluster properties from the initial list of bona fide members and searched for the candidate members. In the next iteration, the new membership list was used to improve the cluster properties and in turn update the catalogue of candidate members. This iteration was repeated until convergence. We proceed next with the details of each iterative step, the convergence criterion, and the treatment of stars with missing radial velocities.
#### 3.2.1 Systemic properties of the cluster
The systemic properties include the barycentre, systemic velocity, and half-mass radius of the cluster. The computation of the barycentre and velocity was done in the Cartesian coordinate system by
\[\mathbf{b}_{c}=\frac{\Sigma m_{i}\mathbf{b}_{i}}{\Sigma m_{i}},\quad\mathbf{ v}_{c}=\frac{\Sigma m_{i}\mathbf{v}_{i}}{\Sigma m_{i}}, \tag{5}\]
where \(\mathbf{b}_{c}\) denotes the barycentre and \(\mathbf{v}_{c}\) the systemic velocity vector of the cluster. The weights \(m_{i}\) are stellar masses (see Section 2.3). While the assumption that all stars are single keeps the total cluster mass at its lower limit, it does not affect the barycentre significantly, because the contributions of the binary stars cancel out due to their expected isotropic distribution.
We note that radial velocity measurements are not available for all the stars. Therefore, the list of stars used to compute the systemic velocity is a subsample of the catalogue employed in the determination of the barycentre.
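A sketch of Equation (5): both quantities are mass-weighted means, with the systemic velocity computed only from the subsample of stars that have radial velocities; the array names are illustrative.

```python
import numpy as np

def systemic_properties(positions, velocities, masses, has_rv):
    """Mass-weighted barycentre and systemic velocity (Eq. 5).

    positions, velocities : (N, 3) Cartesian coordinates (pc) and velocities (km/s)
    masses                : (N,) stellar mass estimates (Msun)
    has_rv                : (N,) boolean mask of stars with a radial velocity,
                            used for the velocity average only
    """
    barycentre = np.average(positions, axis=0, weights=masses)
    v_systemic = np.average(velocities[has_rv], axis=0, weights=masses[has_rv])
    return barycentre, v_systemic
```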
The initial cluster systemic velocity and barycentre were determined from the preliminary list of cluster members (Sec
\begin{table}
\begin{tabular}{l l l} \hline Photometric survey & Photometric bands & Reference \\ \hline Two Micron All-Sky Survey (2MASS) & \(J,H,K_{s}\) & Cutri et al. (2003); Skrutskie et al. (2006) \\ Wide Field All Sky Explorer (AllWISE) & \(W1,W2,W3,W4\) & (Wright et al., 2010) \\ UKIRT Infrared Deep Sky Survey (UKIDSS) & \(Y,J,H,K\) & (Lawrence et al., 2007) \\ VISTA Hemisphere Survey (VHS) & \(Y,J,H,K_{s}\) & (McMahon, 2012) \\ Sloan Digital Sky Survey (SDSS) DR13 & \(u,\,g,\,r,\,i,\,z\) & (York et al., 2000; Gunn et al., 2006; Albareti et al., 2017) \\ Panoramic Survey Telescope and Rapid Response & \(u,\,g,\,r,\,i,\,z\) & (Chambers et al., 2016; Flewelling et al., 2020) \\ System (Pan-STARRS) DR1 & \(u,\,g,\,r,\,i,\,z\) & (Chambers et al., 2016; Flewelling et al., 2020) \\ \hline \end{tabular}
\end{table}
Table 2: Photometric surveys to complement _Gaia_ data.
tion 3.1.1). In the later iterative steps, we took candidate members within the 3\(\sigma\) confidence interval (see Section 3.2.2 for definition) and the half-mass radius of the cluster to refine the systemic cluster parameters.
#### 3.2.2 Search for cluster members
Once the systemic cluster properties were determined, we compared the kinematic properties of each object in the input catalogue with those of the cluster. In particular, we computed the expected transverse and radial velocities for a candidate member of a cluster at the location of the candidate star:
\[\begin{pmatrix}V_{\alpha^{*}i}^{e}\\ V_{\delta i}^{e}\\ V_{Ri}^{e}\end{pmatrix}=\mathbf{R}_{i}^{-1}\begin{pmatrix}v_{x}\\ v_{y}\\ v_{z}\end{pmatrix}. \tag{6}\]
The matrix \(\mathbf{R}_{i}\) is defined in Equation 4. The velocity vector on the right-hand side corresponds to the cluster systemic velocity \(\mathbf{v}_{c}=(v_{x},\,v_{y},\,v_{z})\).
To evaluate the membership of a star, we first computed the difference between its observed and expected transverse and radial velocities and denoted it with a vector \(\mathbf{z}_{i}\). To account for the measurement uncertainties and the correlations between the observables, we followed Perryman et al. (1998) and prepared a covariance matrix for the observed values and another matrix for the expected values of the star. The sum of both matrices \(\Sigma\) represents the confidence region and was used to compute the scaled distance between the observed and expected velocity vector by
\[c=\mathbf{z}^{T}\Sigma^{-1}\mathbf{z}. \tag{7}\]
Assuming a normal distribution, \(c\) follows a \(\chi^{2}\) distribution for a given number of degrees of freedom. For three degrees of freedom (radial velocity and 2D proper motion), a 3\(\sigma\) confidence interval corresponds to \(c=14.16\). For two degrees of freedom, when radial velocity is missing, \(c\) equals 11.83. In other words, stars with a \(c\) value equal to or lower than the one corresponding to the selected 3\(\sigma\) confidence interval are considered bona fide members of the cluster. We note that this part of the algorithm does not take into account over-densities in the positions with respect to the cluster centre.
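A sketch of the membership statistic: the thresholds are the 3\(\sigma\) quantiles of the \(\chi^{2}\) distribution quoted above, and the function evaluates Equation (7) for one star; the covariance matrices are assumed to have been propagated beforehand from the observed and expected velocities.

```python
import numpy as np
from scipy.stats import chi2

# 3-sigma (99.73%) thresholds on c for 3 and 2 degrees of freedom,
# matching the values quoted in the text (14.16 and 11.83).
P_3SIGMA = 0.9973
C_MAX_3DOF = chi2.ppf(P_3SIGMA, df=3)   # ~14.16: proper motions + radial velocity
C_MAX_2DOF = chi2.ppf(P_3SIGMA, df=2)   # ~11.83: proper motions only

def membership_c(observed, expected, cov_obs, cov_exp):
    """Scaled distance c = z^T Sigma^-1 z between observed and expected velocities (Eq. 7)."""
    z = np.asarray(observed) - np.asarray(expected)
    sigma = np.asarray(cov_obs) + np.asarray(cov_exp)
    return z @ np.linalg.solve(sigma, z)
```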
#### 3.2.3 Iterations
The algorithm iterates between the computation of the systemic cluster properties and membership determination until the convergence criterion is achieved. This happens when the absolute difference between the distance of the cluster's barycentre to the Sun in the previous and the new step drops below 0.01 pc, which typically occurred after the second iteration.
Our definition of cluster membership consists of two criteria. The first criterion is that a star must be within a 3\(\sigma\) confidence interval in the \(c\) value. The second criterion is that only stars that are gravitationally bound to the cluster can be included, that is, stars within the tidal radius of the cluster by definition. We describe the determination of the tidal radius in Section 3.2.4.
The tidal radius is always larger than the half-mass radius. To make the estimate of the systemic cluster properties more robust, we recomputed the barycentre and cluster velocity with the objects within the tidal radius. Finally, we took the new cluster properties and used them to reiterate the tidal radius.
#### 3.2.4 Tidal radius
Cluster members are subject to the gravitational fields of the cluster itself and the Galaxy. In a simplified view, stars with sufficient energy are able to escape the cluster at the point where the gravitational pulls between the cluster and the Galaxy balance out (i.e. the first Lagrangian point). We defined the distance of the Lagrangian point from the cluster centre as a tidal radius, assuming that stars are in a circular orbit. We took Equation 3 from Roser et al. (2011), which describes the relation between the tidal radius \(r\) and the mass of the cluster within the tidal radius \(M_{\rm cluster}\)
\[M_{\rm cluster}=\frac{4A(A-B)}{G}r^{3}. \tag{8}\]
Here, \(G\) is the gravitational constant, and \(A=14.5\) km s\({}^{-1}\) kpc\({}^{-1}\) and \(B=-13.0\) km s\({}^{-1}\) kpc\({}^{-1}\) are the Oort constants from Piskunov et al. (2006), determined from 581 clusters within 2.5 kpc.
We solved the equation numerically by comparing the relation from Equation 8 with the cumulative radial mass distribution of the cluster. The intersection of both curves corresponds to the tidal radius. The example of the Pleiades is shown in Figure 2, where we only use stars with a high membership probability (i.e. 3\(\sigma\) confidence). The figure reveals that the slope of the cumulative radial mass distribution is steeper within the tidal radius, indicating a larger spatial density of the stars. For comparison, we added the distribution of the field stars in the solar neighbourhood (see Section 2.4) that reflects a uniform distribution of stars with a density of \(0.033\) M\({}_{\odot}\)pc\({}^{-3}\).
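The numerical solution can be sketched as follows, with the Oort constants converted to km s\({}^{-1}\) pc\({}^{-1}\) and the members assumed to be sorted by distance to the barycentre; the first crossing of the two curves is taken as the tidal radius.

```python
import numpy as np

G = 4.30091e-3            # gravitational constant in pc Msun^-1 (km/s)^2
A, B = 14.5e-3, -13.0e-3  # Oort constants converted to km s^-1 pc^-1

def tidal_radius(r_pc, masses):
    """Tidal radius from the intersection of the cumulative radial mass
    profile with M(r) = 4 A (A - B) r^3 / G (Eq. 8).

    r_pc, masses: member distances to the barycentre (pc) and masses (Msun),
    both sorted by increasing distance.
    """
    cumulative_mass = np.cumsum(masses)
    tidal_mass = 4.0 * A * (A - B) * r_pc**3 / G
    # First radius at which the r^3 curve overtakes the cluster's cumulative mass.
    crossing = np.argmax(tidal_mass >= cumulative_mass)
    return r_pc[crossing]
```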
The cumulative radial mass distribution of the 3\(\sigma\) candidate members gives a lower limit of the tidal radius because our mass determination method assumes all stars are single (Section 2.3).
Figure 2: Tidal radius determined by a cross-section between the cumulative radial mass distribution of the cluster and M(\(r\)) from equation 8 (blue line). The tidal radius for single stars (black) represents a minimal value, while a synthetic sample where all binaries have an equal mass (red) indicates the upper limit. The difference between these tidal radii is about 10%. The field distribution was added for comparison (purple and green for the actual and simulated field, respectively; see Section 2.4).
If we account for binaries, the total mass and thus the tidal radius would be larger. To estimate the upper limit of the tidal radius, we took the members of the Pleiades, manually selected all stars seen by eye above the single cluster sequence (Figure 3), and proclaimed them as equal mass binaries by doubling their mass. This gave a tidal radius of 12.5 pc that is 11% larger than the value from the sample with single stars (Figure 2). However, the true impact of the binaries is likely smaller because not all binaries are equal mass objects. Additionally, clusters appear elongated along the line of sight due to the parallax uncertainties, and this affects the determination of their tidal radius. Consequently, not all bound stars are actually included as bona fide members by our selection criteria. However, we chose to keep this procedure because we only use members within 1 tidal radius to estimate the global cluster properties (e.g. cluster centre in the physical and velocity space). In our studies of the luminosity functions, we included members up to 3 tidal radii. At such distances to the cluster centre, the stellar density drops, and the distance-to-centre cut does not have such a large dependence on the accuracy of the tidal radius.
## 4 Clusters and their members
Our membership determination algorithm (Section 3) produced catalogues of bona fide members within 1 tidal radius and within a 3\(\sigma\) confidence interval in \(c\) for all the clusters. We also report candidates within 3 tidal radii for all the clusters in our tables (Section 4.1) in order to be as inclusive as possible and not reject evaporated objects and sources belonging to the corona and tidal tails.
We divided the list of clusters into two subsamples (bound and unbound) because the tidal radius computation was not possible for the clusters in the unbound sample. Indeed, they appear too sparse and do not reach the minimum total mass that is necessary for a system to be gravitationally bound (and not be disrupted by the Galactic forces). We manually set their radii to a value that approximately includes most of the obvious members, based on the visual inspection of their overdensity in the proper motion space. We note that their radii are only rough estimates and should be refined for a more detailed analysis.
We further split the bound clusters into three age categories, each of them being represented by one of three benchmark clusters IC 2602, the Pleiades, and the Hyades. These three age classes cover different stages in the nuclear and dynamical evolution of the clusters. At the age of \(46^{+6}_{-6}\) Myr (Dobbie et al. 2010), IC 2602 still contains pre-main-sequence stars in the low-mass end. Stars in the Pleiades at the age of 112\(\pm\)5 Myr (Dahm 2015) have already settled into the main sequence, while the low-mass members of the older Hyades (\(640^{+67}_{-49}\) Myr, Lodieu et al. 2019b) might have already evaporated. The age ranges of the groups are \(<\)50 Myr for the IC 2602, 100-200 Myr for the Pleiades, and \(\geq\)500 Myr for the Hyades group. The age classification of the clusters into three groups is based on the comparison of their colour-magnitude diagrams with those of the three benchmark clusters. We present the results in Table 1. In total, there are seven clusters in the IC 2602 age group (Alessi 13, IC 2391, UBC 7, Collinder 135, UBC 9, and Stephenson 1), ten clusters similar to the Pleiades (\(\alpha\) Persei, Platais 3, Blanco 1, ASCC 41, ASCC 101, NGC 2516, Alessi 24, and NGC 1039), and six clusters comparable to the Hyades (Coma Berenices, NGC 2632, Alessi 3, NGC 1901, and Mamajek 4).
In Section 4.1, we describe the catalogues and evaluate the membership classification algorithm by comparing our members of the three benchmark clusters with the literature. In Section 4.2, we outline the systemic cluster properties, provide their coordinates in the position and velocity spaces, and estimate their velocity dispersions. We comment on completeness and contamination rates in Section 4.3.
### Catalogues of cluster members
A schema of the membership catalogue is presented in Table 1. Full membership lists for all clusters are available as supplementary material in the Vizier database4 and on our website, together with the visualisation tools (Section 5). We present all members in one catalogue and note that a small percentage of stars (0.8%) appear in the catalogue twice. This duplication is due to their membership in close cluster pairs, resulting in their simultaneous assignment to two clusters. (For more details, see Appendix D.) We expanded the catalogues with the photometric data from the external databases as outlined in Section 2.5. Below, we briefly describe the three benchmark clusters, IC 2602, Pleiades and Hyades, and compare their members from this work with the catalogues from the literature. We note that while other works performed more strict data quality cuts - Cantat-Gaudin et al. (2018a), for example, used stars brighter than _Gaia_\(G=18\) mag - we merely applied filters on the parallax quality and thus reached the detection limit of _Gaia_. Additionally, with the aim of comparing the luminosity functions in mind, we made sure that all our clusters were analysed homogeneously (i.e. all with the same technique). We chose the convergent point method (Perryman et al. 1998) that was designed for astronomy, while most of the literature is based on machine learning techniques that were not originally created for astrophysics.
Footnote 4: [https://vizier.cds.unistra.fr/](https://vizier.cds.unistra.fr/)
#### 4.1.1 Pleiades
The Pleiades is one of the most prominent and best-studied clusters. Due to its proximity, large number of members, and high proper motion, it stands out from background stars. Lodieu et al. (2019a) provide a comprehensive list of works related to a historical debate about the distance and age of the Pleiades. The same paper used _Gaia_ DR2 data to retrieve 1248 sources within the tidal radius of 11.6 pc. The authors estimated a distance of 135.2 \(\pm\) 0.4 pc and based its proper motion on 342 stars with radial velocities. They used white dwarfs to estimate the age of the Pleiades to \(132^{+26}_{-27}\) Myr, in agreement with the lithium depletion boundary age from Stauffer et al. (1998); Barrado y Navascues et al. (2004); and Dahm (2015).
In this work, we found 1355 bona fide members within a tidal radius of 11.28\(\pm\)0.03 pc. While our tidal radius is slightly smaller than the value from Lodieu et al. (2019a), we report about 9% more members (within the respective tidal radii). This is likely due to more resolved binaries thanks to the longer _Gaia_ baseline. Our distance of \(135.0\pm 0.3\) pc is in excellent agreement with the aforementioned result.
To make a fair comparison, we took stars within 3 \(\sigma\) in the \(c\) value and with a distance to the cluster centre of less than 11.6 pc in both samples. The overlap between the samples is 1156 stars; 77 stars are found in the Lodieu et al. (2019a) list but are not members in this work. Additionally, this work increases the sample by 223 stars, which is likely due to now-resolved binary stars.
We compared our catalogue with the members from Cantat-Gaudin & Anders (2020) and found 1045 objects in common. There are 43 stars found only in their catalogue and 334 only in our list. The reason for a larger number of bona fide members in
our catalogue is twofold. On one hand, _Gaia_ DR3 resolved more binaries, and on the other hand, Cantat-Gaudin & Anders (2020) only considered stars with a five-parameter astrometric solution in the \(G=5\)-18 mag range.
The narrow sequence in the colour-magnitude diagram in Figure 3 - independent of the kinematics used in classification - reinforces the credibility of the result and is consistent with the coeval nature of the sample typical for clusters. The contamination rate is higher for cooler stars due to their intrinsically low luminosities, binarity, and lack of radial velocity measurements, but the overall contamination rate remains small. Low mass outliers above the main sequence in the \(G-R_{\rm p}\) colour fall on the main sequence in other colour spaces.
The binary sequence in the cool part of the colour-magnitude diagram is clearly visible in Figure 3. These stars, together with other outliers, typically have RUWE \(>\) 1.4, indicating either binarity or problems with the astrometric solution.
The significant increase in the number of sources with radial velocity measurements in _Gaia_ DR3 enables a robust statistical study of clusters in the velocity space. Except for a few of the brightest stars, practically all stars earlier than \(\sim\)M4 have a radial velocity measurement. This accounts for about one third of the Pleiades members (480 objects). We inferred a systemic cluster velocity of 32.21\(\pm\)0.27 km s\({}^{-1}\) in the Galactic reference frame.
#### 4.1.2 Hyades
The nearest star cluster to the Sun is the Hyades, at the distance of 47.03\(\pm\)0.20 pc (Lodieu et al. 2019b). Its age is estimated to be 640\({}^{+67}_{-69}\) Myr based on white dwarfs (Lodieu et al. 2019b) and 650\(\pm\)70Myr from its lithium depletion boundary (Lodieu et al. 2018; Martin et al. 2018). There are 385 stars within its tidal radius of 9 pc (Lodieu et al. 2019b), but many more are found in its tidal tails. Indeed, Roser et al. (2019) reports 501 members to be within two tidal radii, and 529 stars to be in its asymmetric tidal tails extending up to 170 pc. Similarly, N-body simulation of the Hyades has shown that the total extent of its tails might reach almost 1 kpc and might thus contain more mass (Jerabkova et al. 2021).
We inferred a distance of 46.47\(\pm\)0.14 pc, which is slightly lower than Lodieu et al. (2019b) but in agreement within 2 \(\sigma\). Our tidal radius (8.4\(\pm\)0.02 pc) is also slightly lower than previous estimates (panel a in Figure 6), but the total number of members (427) is slightly larger. We note that our input candidate list is centred on \(\alpha=67.2149\) deg and \(\delta=16.5569\) deg and spans 70deg in diameter (three times its tidal diameter), while the outskirts of the Hyades extend beyond this limit (Jerabkova et al. 2021). We therefore focussed only on the cluster core and its immediate vicinity. Figure 4 shows the colour-magnitude diagram of the Hyades and reveals seven white dwarfs that are known members of the cluster.
To compare our sample with the literature, we prepared a catalogue from Lodieu et al. (2019b) by limiting the data to 3 \(\sigma\) in the \(c\) value. We took stars with a distance to the cluster centre within 9 pc in both cases. There are 346 stars in common between the samples. Additionally, 14 stars were found only in the Lodieu et al. (2019b) catalogue and 108 stars only in this work. We attribute the increase of the sample in this work to a larger fraction of resolved binaries. We found a similar but slightly smaller overlap with other works. For example, the membership lists from Roser et al. (2019) and Jerabkova et al. (2021) have 333 and 318 stars in common with our work, respectively. On the other hand, we found 121 and 136 stars from this work that are not in these two catalogues, respectively.
#### 4.1.3 Ic 2602
The youngest benchmark cluster in our list is IC 2602, also known as the Southern Pleiades. Its lithium depletion age is estimated to be 46\({}^{+6}_{-5}\) Myr (Dobbie et al. 2010) at a distance of 151.6\({}^{+1.9}_{-1.8}\) pc (Nisak et al. 2022). We estimated its tidal radius at 7.2 pc (Figure 6) and its distance at 150.39\(\pm\)0.72 pc, which are in agreement with the literature. Its location in the Galactic plane (\(b=-5\) deg) makes membership selection difficult. Thus, our membership algorithm results in many interlopers with large photometric uncertainties (\(\sigma_{G}\geq 0.15\)) at the faint end, which we removed. Since these interlopers have larger relative parallax uncertainties than the more reliable members, we excluded all stars fainter than _Gaia_ \(G=8\) mag and with parallax_error/parallax\(>\)0.075.

Figure 3: Colour-magnitude diagram for the Pleiades cluster with bona fide and candidate members within 3 tidal radii. The top axis indicates approximate spectral types from the relation in P13. Stars with high RUWE values (i.e. potential multiples) are marked with green circles. Triangles denote white dwarfs from Gentile Fusillo et al. (2021). We found one white dwarf among the bona fide members (EGGR 25).
Our catalogue contains 308 members within 1 tidal radius and 619 stars within 3 tidal radii, while Cantat-Gaudin & Anders (2020) and Nisak et al. (2022) report 311 and 451 members, respectively; both are based on _Gaia_ DR2. The overlap between Nisak et al. (2022) and our sample is 416 stars, while their catalogue contains 44 additional stars that are not in our list. In our comparison with Cantat-Gaudin & Anders (2020), we found 295 objects in common, 324 stars in our list that are not in theirs, and 20 stars in their list that are not in ours.
### Catalogue of cluster characteristics
We present the summary of the cluster properties in Table 1 for bound clusters and Table 2 for unbound clusters. The tables include the cluster position in the physical and velocity field, expressed in both observable and Cartesian space. We added tidal radius, the number of bona fide cluster members \(N\), tidal mass, and velocity dispersion in all three directions. We computed the velocity dispersions for each cluster. Table 1 lists them separately for transversal and radial velocities due to large uncertainties of the latter. We determined a 2D dispersion \(\sigma_{\rm 2D}\) from transversal velocities.
To check if cluster properties evolve with time, we computed the median values for clusters in each age bin, but we observed no clear trends in the median tidal radius and their dispersion, the minimal and maximal tidal radius, nor in 2D dispersion in velocity and the number of bona fide members. This might be due to our age bins being too wide and the fact that while all clusters lose mass with time, the surviving clusters are those with an initially larger mass.
### Completeness and contamination rate
In this section, we address the completeness and contamination rate in our catalogues of cluster members. The completeness of the _Gaia_ DR3 catalogue is affected by the saturation limitations in the bright end and the intrinsic faintness of the low-mass stars. Bright stars missing in _Gaia_ are assumed to be covered in our list by the insertion of the _Hipparcos_ catalogue, as described in Section 2. The completeness ratio at _Gaia_ G magnitude 20 is 92.2% for a five- and six-parameter solution (Lindegren et al., 2021). Additionally, our input catalogues are affected by the parallax quality cut that correlates with the magnitude and the fact that radial velocities are only available to \(G=16\) mag.
In _Gaia_, 'relatively few sources are found with separations less than about 0.6 arcsec' (Lindegren et al., 2021). We checked the crowded regions in the core of the most populated cluster in this work, NGC 2516, for a possible source of incompleteness. Stellar pairs with a separation of less than 5 arcsec (less than 1% of the bona fide members) show a rather uniform distribution in the sky. We concluded that stellar crowding does not contribute to the incompleteness.
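Such a crowding check is straightforward to script. The sketch below is a minimal illustration, assuming a table of member coordinates with `ra` and `dec` columns in degrees; the column names and the use of astropy here are our own illustrative choices, not the actual pipeline.

```python
# Minimal sketch: count close pairs (< 5 arcsec) among cluster members.
# Assumes a pandas DataFrame `members` with 'ra' and 'dec' columns in degrees;
# the column names are illustrative, not the actual catalogue schema.
import astropy.units as u
from astropy.coordinates import SkyCoord

def close_pair_fraction(members, seplimit=5 * u.arcsec):
    coords = SkyCoord(ra=members['ra'].values * u.deg,
                      dec=members['dec'].values * u.deg)
    # Match the catalogue against itself; keep each pair once (idx1 < idx2)
    # and discard the trivial self-matches.
    idx1, idx2, sep2d, _ = coords.search_around_sky(coords, seplimit)
    mask = idx1 < idx2
    n_pairs = int(mask.sum())
    return n_pairs, n_pairs / len(members)
```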
The adopted membership determination algorithm in this work assumes that members have the same motion as the cluster. While the measurement uncertainties in proper motions are small, relatively large errors in parallaxes and radial velocity smear out the internal kinematics, for example, the rotation of the cluster. Our estimate is that this algorithm works well for stars within the tidal radius and for those that are still found in the vicinity of the cluster and evaporated in the direction of the cluster motion. However, the technique might not be adequate to identify tidal tails in all directions and at large distances from the cluster centre. Stars that escaped in a direction perpendicular to the cluster motion will not be detected. Additionally, the results become less trivial to interpret for clusters that have neighbours in both velocity and physical space, for example, Platais 4 (see Appendix D.3). In this case, the algorithm cannot distinguish between the two clusters and detects stars from both clusters as if there was only one large cluster.

Figure 4: Colour-magnitude diagram for the Hyades cluster with bona fide and candidate members within 3 tidal radii. The top axis indicates approximate spectral types from the relation in P13. Stars with high RUWE values (i.e. potential multiples) are marked with green circles. Triangles denote white dwarfs from Gentile Fusillo et al. (2021). We found seven known white dwarfs among the bona fide members.
Binary stars orbit their common centre of mass and cause periodic oscillations in their proper motions and radial velocities. Ideally, the velocity of their centre of mass should be compared with that of the cluster in our algorithms, but we avoided this step due to the large number of stars in our sample. Instead, we tested the membership detection of binaries in the Pleiades by analysing the distribution of their \(c\) values. While _Gaia_ resolves wide binary stars with a separation \(>0.4\) arcsec, RUWE\(>\)1.4 is a good proxy for close binary objects. Stars with RUWE\(>\)1.4 have a slightly broader distribution of the \(c\) values, with the main peak at \(c\) = 1 and a smaller peak at around \(c\) = 8. This is still well within our 3\(\sigma\) confidence interval (\(c\)\(\approx\)14 for three degrees of freedom). Spectroscopic binaries should be detected by _Gaia_, but if the common centre of mass is significantly displaced (e.g. in non-equal mass objects), we may miss them. We concluded that the 3\(\sigma\) criterion is good enough to incorporate the majority of binary stars, with a possible exception of non-equal mass binaries.
The contamination rate remains small for bound clusters. We provide a quantitative estimate from the number of outliers in the colour-magnitude diagram. Contamination in our most distant cluster (NGC 1039 at \(b=-15.6\) deg) is up to 2% within 1 tidal radius and \(9-15\)% within 3 tidal radii, depending on how conservative we are in estimating the intrinsic spread in the lower main sequence. We note that this is only an approximate estimate, as it is often non-trivial to distinguish between, for example, subdwarfs or overluminous low-mass members and true outliers that might otherwise be reliable kinematic members with problematic photometry. We automated the contamination estimation procedure and applied it to all clusters in the following way: We fit an 11th order polynomial to the cluster sequence within 1 tidal radius and between \(0<G-R_{\rm P}<1.5\) and computed the root-mean-square (RMS) scatter about the sequence. We counted stars that are more than 3 RMS away from the sequence as contaminants and computed the contamination rate as the number of outliers over the total number of stars in this colour range. This again remains an approximation, as the polynomial fit is not an ideal description of the sequence. The contamination rates are below 5% for most of the clusters, and we found no obvious correlation with the Galactic latitude. Most of our clusters are within 20 degrees from the Galactic plane; those further away have, on average, a contamination rate of less than 2%. Further, we checked crowdedness in the cluster cores as a possible source of incompleteness and concluded that it is negligible, since the pairs with a separation of less than 5 arcsec show a uniform distribution in the sky and are not concentrated in the cluster cores.
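The automated estimate described above can be expressed compactly in code. The following is a minimal sketch under our own assumptions about the array names; it is not the exact script used to derive the quoted percentages.

```python
# Sketch of the contamination estimate: fit a polynomial to the cluster
# sequence in the colour-magnitude diagram and count >3 RMS outliers.
# `colour` (G - Rp) and `gmag` are assumed 1D arrays for members within
# 1 tidal radius; the names are illustrative only.
import numpy as np

def contamination_rate(colour, gmag, deg=11, colour_range=(0.0, 1.5), nsigma=3.0):
    sel = (colour > colour_range[0]) & (colour < colour_range[1])
    x, y = colour[sel], gmag[sel]
    coeffs = np.polyfit(x, y, deg)           # fit the cluster sequence
    resid = y - np.polyval(coeffs, x)
    rms = np.sqrt(np.mean(resid ** 2))       # scatter about the sequence
    outliers = np.abs(resid) > nsigma * rms
    return outliers.sum() / sel.sum()        # fraction of contaminants
```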
## 5 The website
Visualisation of the multidimensional parameter space is challenging, especially if the catalogues are large and the phenomena of interest are observed on various scales. To present our results in an adequate way for all the clusters, we designed a website.5 It includes interactive and 3D visualisation tools and allows the user to select the quantities on the axes of the plot for the cluster of interest. The list of available columns includes _Gaia_ data and the external photometric measurements described in Section 2.5. We added a database of PARSEC isochrones to overplot over the colour-magnitude diagrams.
Footnote 5: [http://research.iac.es/proyecto/gaiaclusters](http://research.iac.es/proyecto/gaiaclusters)
One of the highlights is the interactive link between our plots and the Aladin sky atlas (Bonnarel et al. 2000; Boch & Fernique 2014)6. In particular, a click on a given star in our plot centres the Aladin plugin on the chosen object. This feature can be useful in many cases, for example, to inspect the astrometric aspect of close binaries and of faint or bright objects. We linked every cluster page with the catalogue of its members to the Vizier database and enabled a direct download of the data for further analysis.

Figure 5: Colour-magnitude diagram for the IC 2602 cluster with bona fide and candidate members within 3 tidal radii. The top axis indicates approximate spectral types from the relation in P13. Stars with high RUWE values (i.e. potential multiples) are marked with green circles.

Figure 6: Cumulative radial mass distributions for all clusters. These plots were used to determine tidal radii. The term M(\(r\)) is a theoretical curve that describes the location of the first Lagrange point. The dashed lines represent clusters whose tidal radii we could not determine with our technique, so we set their characteristic radii manually.
The website was built to enable the professional community to have a quick look at well-known clusters. At the same time, we hope to bring the latest scientific results to the public and inspire new citizen science and exciting student projects.
## 6 Luminosity functions
The luminosity function represents the number of stars per magnitude bin within a given cluster. It is subject to temporal variation due to nuclear and dynamical processes of the stellar population. For example, the low-mass stars in the young cluster IC 2602 are still found in the pre-main-sequence stage and thus appear more luminous than their older counterparts. In the later phases of a cluster's life, its orbital motion in the Galaxy creates interactions and results in the gradual evaporation of weakly bound and low-mass members. For example, the Hyades lost a significant fraction of M dwarfs from its core. In fact, a significant number of its members very likely drifted away to its tidal tails (Roser et al. 2019; Jerabkova et al. 2021). In this work, we also studied the timescale of dynamical evolution and estimated the percentage of M dwarfs lost from clusters at different stages of their life.
To address this question of mass loss, we first re-evaluated the ages of the clusters in the bound subsample from Table 1 whose ages span \(\sim\)50 to \(\sim\)650 Myr in order to cover the most significant period in the evolution of the clusters (Section 4). We discuss their constructed spectral type distributions in Section 6.1 and their luminosity functions in Section 6.2. For comparison, we added the field population from the immediate Solar neighbourhood described in Section 2.4 but manually removed white dwarfs because they correspond to objects with larger initial masses.
### Spectral type distributions
The low-resolution version of the luminosity function is a spectral type distribution. We divided stars into spectral types based on their \(G-R_{P}\) colour from the _Gaia_ DR3 catalogue. The spectral type classification is based on P13. To test the reliability of this conversion, we took the sample of field stars within 8 pc from Kirkpatrick et al. (2012) and compared their spectral types with the classification from the colour conversion. Out of 141 dwarf stars that were cross-matched with the _Gaia_ DR3 catalogue, we found eight cases with discrepant spectral types. This is a relatively small error (6%); thus, we concluded that the classification using the \(G-R_{P}\) colour relation is sufficient for our purpose.
Figure 11 from Kirkpatrick et al. (2012) presents a sample of all objects within 8 pc in three different ways: the total mass per spectral type, the number of stars per spectral type, and the spatial mass density per spectral type. To reproduce and compare our results with their figure, we took our mass estimates from Section 2.3 and counted the number of stars per spectral type. We note that binary stars in our work are treated as single objects in the mass determination and thus represent a lower mass limit. We show the results for our field sample within 25 pc (described in Section 2.4) and the benchmark clusters in Figure 7. In order to compare the clusters with the field population, we overplot the field sample scaled to the total mass and number of late-G stars. To trace the change in the distribution in greater detail, we split the spectral types into early- and late-type classes (i.e. from 0-4 and 5-9 subtype bins).
As expected, the number of objects and the total mass grow towards the late spectral types, with a peak around M3-M4 dwarfs. This corresponds to the peak of the present day mass function in the solar neighbourhood and open clusters (Kroupa 1998, 2001b; Chabrier 2001; Bastian et al. 2010a). We quantified this observation by computing the fraction of M dwarfs in the entire sample. We did this for masses and the number of stars and present the results in Table 3.
The mass and number ratio of M dwarfs in IC 2602 and the Pleiades match the field population (\(\sim\)40% of the total cluster mass and 80% in terms of a star count). In contrast, only 65% of stars in the Hyades are M dwarfs, representing 30% of the total cluster mass. Assuming the same initial mass function as for IC 2602 and the Pleiades, it is clear that the Hyades lost its M dwarfs because of evaporation during its multiple rotations around the Galaxy and the dynamical processes that lead to the formation of the tidal tails that stretch up to nearly 1 kpc (Jerabkova et al. 2021). The number ratio between the early- and late-M dwarfs in the Hyades is twice as large as in IC 2602 and the Pleiades. This means that the late-type M dwarfs are missing in the Hyades. Their absence cannot be attributed to _Gaia_'s incompleteness, since the Hyades is the closest of all clusters (Lodieu et al. 2019c).
The low-mass stars in the youngest clusters are in their pre-main-sequence phase. Clusters with the Pleiades age can already contain white dwarfs, while clusters in the oldest group, with the Hyades age, can produce (sub)giants. The number of A-type stars in a cluster decreases with age, as expected from stellar evolution models. On the low-mass end, we observed that the number of M dwarf members is similar between the field, the Pleiades, and IC 2602 within the Poisson error bars, while the Hyades shows a clear deficit of low-mass and very low-mass stars. We quantified this fact by calculating the ratio of early-M dwarfs to G-type stars in the three benchmark clusters (Table 3). This substantial loss of M dwarfs is the result of interaction during the motion of the cluster in our Galaxy, which yielded the dynamical evaporation of the lower mass members. This appears to be natural in the evolution of a young cluster into an older one, as simulated by Kroupa et al. (2001).
### Luminosity functions
A luminosity function is defined as the distribution of the luminosity in a sample (i.e. a distribution of stellar magnitudes). Its profile is subject to the intrinsic properties of a population, such as its initial mass function, age, and star formation history, as well as the dynamical processes that cause gradual evaporation of the low-mass stars in the population.
We determined absolute _Gaia_ \(G\) magnitudes using the Bailer-Jones et al. (2021) distances and assumed a negligible extinction since the clusters are located close to the Sun. To check our assumption, we compiled \(E(B-V)\) values from the literature for the bound clusters in Appendix A. Clusters beyond 400 pc (e.g. Alessi 10, NGC 2516, and NGC 1039) are the most reddened clusters in our sample, with \(E(B-V)\) reaching 0.2 mag, and we found no correlation with their age. Three clusters are missing a literature value, and two of them (UBC 7: \(40-50\) Myr, Kovaleva et al. 2020; and UBC 9, classified as young in this work) are estimated to be very young. We thus provide our own estimate of the upper limit of reddening and extinction based on dustmaps (Green 2018). The large model reddening values exceed those measured in other works. The upper limit for \(E(B-V)\) is 0.21 for UBC 7, but it is based on other clusters, and we deduced that the value is likely much smaller. Assuming \(R_{\rm V}=3.1\), the typical extinction \(A_{\rm V}\) is much less than one magnitude, and we thus concluded that it does not significantly affect our luminosity functions. A possible exception is Alessi 10, which has an extinction of \(A_{\rm V}=0.6\) that is approximately half of the magnitude bin in our luminosity function, as described below. We computed the luminosity functions in the range from 0 to 15 magnitudes with a fixed step of one magnitude and normalised them by the area under the curve after counting the number of objects per magnitude bin.

| Cluster | N(M) | N(eM)/N(lM) | N(G) | N(G)/N(M) |
| --- | --- | --- | --- | --- |
| Field | 0.81 | 3.09 | 0.05 | 0.06 |
| IC 2602 | 0.82 | 1.84 | 0.04 | 0.04 |
| Pleiades | 0.81 | 1.75 | 0.03 | 0.04 |
| Hyades | 0.72 | 2.27 | 0.05 | 0.07 |

Table 3: Number ratios of stars in the field and benchmark clusters. **Notes.** N(M) and N(G) are the ratios of M and G dwarfs in the entire cluster, while 'e' and 'l' denote early- and late-type M dwarfs (M0–M4, M5–M9).
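In code, the construction of each luminosity function as described above reduces to a simple histogram. The sketch below is a minimal illustration, assuming an array `abs_g` of absolute \(G\) magnitudes for the members of one cluster (the array name is ours, not part of the catalogue schema).

```python
# Minimal sketch of the luminosity-function binning described in the text:
# 1-mag bins from 0 to 15 mag, normalised by the area under the curve.
import numpy as np

def luminosity_function(abs_g, m_min=0.0, m_max=15.0, step=1.0):
    edges = np.arange(m_min, m_max + step, step)
    counts, _ = np.histogram(abs_g, bins=edges)
    centres = edges[:-1] + step / 2.0
    norm = np.trapz(counts, centres)         # area under the curve
    return centres, counts / norm
```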
Figure 7: Spectral type and total mass distribution of the field dwarfs within 25 pc and three benchmark clusters, IC 2602, the Pleiades, and the Hyades. Spectral types G, K, and M are split into early and late subtypes. The field population was used to compute the expected number of stars for each cluster with respect to the number of late-G dwarfs (grey bars).
Figure 8 shows the luminosity functions of the clusters in each age group together with the benchmark clusters for comparison. In the figure, we plot all members within 3 tidal radii. Within each age group, the luminosity functions seem to be similar to each other, as they practically overlap with the one of the benchmark cluster. To quantify this comparison, we used a Kolmogorov-Smirnov test for two samples from SciPy. The test was performed pair-wise within the common magnitude completeness limits of each cluster pair. We present the p-values for all pairs in Figure 9, where we sort clusters according to their age group. The p-values indicate the probability that the two samples, in our case the luminosity functions of the two clusters in question, come from the same distribution. We note that off the diagonal (the diagonal compares clusters to themselves), the values are generally lower than 0.5, with a few exceptions. Because it is not trivial to interpret the absolute values and the test might sometimes be affected by the low-number statistics in the bright end of the luminosity functions, we focussed on the groups as a whole. We most often found pairs with a relatively high p-value in the youngest age group. This means that clusters in this age group are more similar to each other than in the intermediate and old age groups. At the same time, it is clear that there is no significant similarity between the young and older clusters from the Pleiades and Hyades age groups, with an exception being cluster Alessi 10. On the other hand, there is no significant difference between the Pleiades and Hyades age groups. Clusters NGC 1901, NGC 1039, and Platais 3 are affected by incompleteness and low-number statistics. The average p-values for both age groups, the Pleiades and the Hyades, appear to be lower, and one is less likely to find a pair from the same age group with a high degree of similarity.
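The pair-wise comparison can be scripted as follows; this is a minimal sketch assuming a dictionary of absolute-magnitude arrays and per-cluster completeness limits (an illustrative data layout of our own), not the exact analysis code.

```python
# Sketch of the pair-wise two-sample KS comparison.  `mags` is assumed to map
# cluster name -> numpy array of absolute G magnitudes of its members, and
# `limits` maps cluster name -> faint-end completeness limit.
import itertools
from scipy.stats import ks_2samp

def pairwise_ks(mags, limits):
    pvalues = {}
    for a, b in itertools.combinations(sorted(mags), 2):
        faint = min(limits[a], limits[b])            # common completeness limit
        sample_a = mags[a][mags[a] < faint]
        sample_b = mags[b][mags[b] < faint]
        pvalues[(a, b)] = ks_2samp(sample_a, sample_b).pvalue
    return pvalues
```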
Moreover, there are seven clusters that bear similarities to clusters from another age group. These clusters are Alessi 24, ASCC 101, and Platais 3 from the intermediate age group and Melotte 111, Mamajek 4, Melotte 25, and Alessi 3 from the old age group.
To summarise, clusters in the youngest age group are the most similar to each other, and the degree of similarity later decreases. These observations led us to draw a parallel with the mass functions in star-forming regions, open clusters, and globular clusters, as they are universal within current observational uncertainties (see review by Bastian et al. 2010b). Based on our data, the luminosity function appears to be universal at least within 500 pc from the Sun. We reiterate that our cluster selection function only depends on the distance (500 pc), upper age limit of 1 Gyr, and the distance from the Galactic plane (\(|b|>10\) deg), and the membership selection bias was minimised by the fact that we used the same method on all of the clusters. Our results are consistent with the earlier study of Piskunov et al. (2008), who observed a strong similarity between the luminosity functions in nearby clusters and other galaxies, and agree with the findings in young regions, as Galli et al. (2020) state that their _Gaia_ members in the Taurus, Upper Scorpius, and Lupus regions show a similar shape for the distribution of spectral types at the faint end of the initial mass function.
If the luminosity function is universal at least up to the Hyades age, then all our clusters must have been born with similar luminosity functions. However, the luminosity functions of our intermediate and old age clusters differ from the young clusters, and this might be due to many reasons. One reason is stellar evolution that, for example, causes massive stars to increase their luminosity when they turn into giants. On the other hand, since single stars of the same mass and metallicity evolve at the same rate, clusters of similar advanced age (e.g. the Hyades age) should still have similar luminosity functions. Our clusters are single (with the exception being the pair Collinder 135 - UBC 7, see Appendix D.1), and the reason for the increased dissimilarity with age might be in their internal dynamical evolution, which depends on their density. However, further analysis with models and simulations would be needed to provide more evidence to support this claim.
## 7 Conclusions
We selected 49 open clusters younger than 1 Gyr and closer than 500 pc and reassessed their membership using _Gaia_ DR3 and Hipparcos 6D astrometric information (positions, parallaxes, proper motions, and radial velocities) over the full range of validity of both missions. We discarded some known clusters in the Galactic plane. We provide the physical parameters of each cluster (3D position and velocity, distance, proper motion, age, tidal radius and mass, number of members) and an updated list of bona fide members, up to 1 tidal radius, and candidates, up to 3 tidal radii. We will make the new list of members public in a dedicated webpage7 as well as on Vizier (the Centre de Donnees de Strasbourg) for use by professional and amateur astronomers and educators. We studied the evolution of the luminosity function as a function of age and found that luminosity functions of the youngest clusters differ from the older populations, are more likely similar to each other, and show a greater degree of similarity than older clusters. We explained this observation with the universal luminosity function within the volume of our sample (500 pc). Clusters with ages similar to the Pleiades or Hyades show a lesser degree of similarity than younger clusters. If the luminosity function is universal, at least up to the age of the Hyades, one of the reasons for increased diversity of these single clusters could be internal dynamical evolution, but future work is needed to support this claim.
Footnote 7: [http://research.iac.es/proyecto/gaialclusters](http://research.iac.es/proyecto/gaialclusters)
###### Acknowledgements.
We thank the anonymous reviewer and acknowledge their valuable contribution that helped to significantly improve this paper. MZ and NL acknowledge support from the Consejeria de Economia, Conocimiento y Empleo del Gobierno de Canarias and the European Regional Development Fund (ERDF) under grant with reference PROD20200010052. MZ, NL, JO, VB and ELM acknowledge support from the Agencia Estatal de Investigacion del Ministerio de Ciencia e Innovacion (AEI-MICINN) under grant PID2019-109522GB-C53. APG acknowledges support from the grant PID2020-120052GB-I00 financed by MCIN/AEI/10.13039/501100011033. Co-funded by the European Union (ERC, SUBSTELLAR, project number 101054354). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. This research has made use of the Simbad and Vizier databases, and the Aladin sky atlas operated at the Centre de Donnees Astronomiques de Strasbourg (CDS), and of NASA's Astrophysics Data System Bibliographic Services (ADS). This research has made use of the "Aladin sky atlas" developed at CDS, Strasbourg Observatory, France. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. Based on observations obtained as part of the VISTA Hemisphere Survey, ESO Programme 179.A-2010 (PI: McMahon). The UKIDSS project is defined in Lawrence et al. (2007). UKIDSS uses the UKIRT Wide Field Camera (WFCAM; Casali et al. 2007). The photometric system is described in Hewett et al. (2006), and the calibration is described in Hodgkin et al. (2009). The pipeline processing and science archive are described in Irwin et al. (2009, in prep) and Hambly et al. (2008).
This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, and NEOWISE, which is a project of the Jet Propulsion Laboratory/California Institute of Technology, WISE and NEOWISE are funded by the National Aeronautics and Space Administration.
The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX008AR2202 issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation.
Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III website is [http://www.sdss3.org/](http://www.sdss3.org/). SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University. Software: astropy (Price-Whelan et al. 2018), numpy (Harris et al. 2020), IPython (Perez & Granger 2007), topcat (Taylor 2005), and Matplotlib (Hunter 2007).
| Context.
Open clusters are coeval groups of stars that share properties such as distance and metallicity, and they are indispensable for understanding stellar mass evolution.
Aims.
Our main goal is to study the evolution of open clusters, in particular with regard to the universality of the luminosity function.
Methods.
We used an improved version of the convergent point method on 50 open clusters.
The selection of cluster members was based on the exquisite astrometry of the Gaia DR3 and Hipparcos data, which were treated in five- or six-dimensional space.
Results.
We updated the lists of bona fide members of 50 open clusters within 500 pc and younger than 1 Gyr. This combines the depth of Gaia's third data release with the bright end of Hipparcos, and the Galactic plane's |
2307.16767 | Infection-induced Cascading Failures -- Impact and Mitigation | In the context of epidemic spreading, many intricate dynamical patterns can
emerge due to the cooperation of different types of pathogens or the
interaction between the disease spread and other failure propagation mechanism.
To unravel such patterns, simulation frameworks are usually adopted, but they
are computationally demanding on big networks and subject to large statistical
uncertainty. Here, we study the two-layer spreading processes on
unidirectionally dependent networks, where the spreading infection of diseases
or malware in one layer can trigger cascading failures in another layer and
lead to secondary disasters, e.g., disrupting public services, supply chains,
or power distribution. We utilize a dynamic message-passing method to devise
efficient algorithms for inferring the system states, which allows one to
investigate systematically the nature of complex intertwined spreading
processes and evaluate their impact. Based on such dynamic message-passing
framework and optimal control, we further develop an effective optimization
algorithm for mitigating network failures. | Bo Li, David Saad | 2023-07-31T15:36:01 | http://arxiv.org/abs/2307.16767v4 | # Infection-induced Cascading Failures - Impact and Mitigation
###### Abstract
Coupled spreading processes on interdependent networks are studied, where the spreading infection of diseases or malware in one layer can trigger cascading failures in another layer and lead to secondary disasters, e.g., disrupting public services, supply chains, or power distribution. We utilize the dynamic message-passing method to devise efficient algorithms for inferring the system states, which allows one to investigate systematically the nature of such complex intertwined spreading processes and evaluate their impact. Based on the dynamic message-passing framework and optimal control, we further develop an effective optimization algorithm for mitigating network failures.
## I Introduction
Epidemic outbreaks do not only pose a direct threat to public health but also, indirectly, impact other sectors [1; 2; 3]. For instance, when many infected individuals have to rest, be hospitalized, or quarantined in order to slow down the epidemic spread, public services can be severely disrupted, causing disutility even to those who are not infected. Likewise, highly interdependent supply chains can easily be disrupted by epidemic outbreaks [4; 5]. Similar concerns apply to cyber security. The spread of malware is not merely detrimental to computer networks, but can also cause failures in power grids or urban transportation networks which rely on modern communication systems [6; 7]. What is even worse is that the failures of certain components of technological networks can by themselves trigger a cascade of secondary failures, which can eventually lead to large-scale outages [8]. Therefore, it is vital to understand the nature of epidemic (or malware) spreading and failure propagation on interdependent networks, based on which further mitigation and control measures can be devised.
A number of previous papers address the scenario of coupled spreading processes. In the context of epidemic spreading, two types of pathogens can cooperate or compete with each other, creating many intricate patterns of disease propagation [9; 10; 11; 12]. For interdependent technological networks (e.g., a communication network coupled with a power network), the failure of components in one network will not only affect neighboring parts within the same network, but will also influence the adjoint network through the interdependent connections. Macroscopic analyses based on simplified models show that such a spreading mechanism can easily result in a catastrophic breakdown of the whole system [13; 14].
In this work, we study a scenario where the epidemic or malware spreading on one network can trigger cascading failures on another. This is highly relevant in the above-mentioned cases where epidemic outbreaks cause disruption in public services or economic activities. Similarly, it can also be applied to study the effect of malware spread on computer networks causing the breakdown of other technological networks such as the power grid. The latter phenomenon is gaining more and more attention due to the increasing interdependency among various engineering networks [7].
Most existing research in the area of multi-layer spreading processes employs macroscopic approaches, such as the degree-distribution-based mean-field methods and asymptotic percolation analysis, in order to obtain the global picture of the models' behavior [15]. Such methods typically do not consider specific network instances and lack the ability to treat the interplay between the spreading dynamics and the fine-grained network topology [15]. For stochastic spreading processes with specific system conditions (e.g., topology, initial conditions, and individual node properties), it is common to apply extensive Monte Carlo (MC) simulations to observe the evolution of the spread, based on which important policy decisions are made [16]. However, such simulations are computationally demanding on big networks and can be subject to large statistical uncertainty; as a result, they are difficult to use for downstream analysis or optimization tasks. Therefore, researchers have been pursuing tractable and accurate theoretical methods to tackle the complex stochastic dynamics on networks [15; 17].
Among the various developed theoretical approaches used, dynamic message-passing (DMP) is based on ideas from statistical physics offering a desirable algorithmic framework for approximate inference while it remains computationally efficient [18; 19; 20]. Notably, the DMP method has been shown to be much more accurate than the widely adopted individual-based mean-field method, especially in sparse networks [21; 22]. Moreover, the DMP approach yields a set of closed-form equations, which is very convenient for additional parameter estimation and optimization tasks [12; 23; 24]. In this work, we will leverage the DMP method to study the nature of infection-induced cascading failures, evaluate their impacts, and devise optimization algorithms for mitigating the failures. The remainder of the paper is organized as follows. We introduce the model and derive its DMP
equations in Sec. II and Sec. III. We then investigate the impact of the spreading processes in Sec. IV, and devise optimization algorithms for mitigating the network failures in Sec. V. Finally, we summarize our findings and conclude the paper in Sec. VI.
## II Model and Framework
### The Model
To study the impact of infection spread of diseases or malware and their secondary effects, we consider multiplex networks comprising two layers [25], which are denoted as layers \(a\) and \(b\), and are represented by two graphs \(G_{a}(V_{a},E_{a})\) and \(G_{b}(V_{b},E_{b})\). For convenience, we assume that the nodes in both layers correspond to the same set of individuals, denoted as \(V=V_{a}=V_{b}\). This can be extended to more general settings. Denote \(\partial_{i}^{a}\) and \(\partial_{i}^{b}\) as the sets of nodes adjacent to node \(i\) in layers \(a\) and \(b\), respectively. We also define \(\partial_{i}=\partial_{i}^{a}\cup\partial_{i}^{b}\). See Fig. 1 for an example of the network model under consideration.
Every individual has two states on layers \(a\) and \(b\), respectively. In layer \(a\), each node assumes one of four states, susceptible (\(S\)), infected (\(I\)), recovered (\(R\)), and protected (\(P\)) at any particular time step. The infection spreading process occurs in layer \(a\), which is modeled by the stochastic discrete-time SIR model [15], augmented with a protection mechanism, which we term the SIRP model
\[\begin{split} S(i)+I(j)&\xrightarrow{\beta_{ji}}I(i)+I(j),\\ I(i)&\xrightarrow{\mu_{i}}R(i),\\ S(i)&\xrightarrow{\gamma_{i}}P(i),\end{split} \tag{1}\]
where \(\beta_{ji}\) is the probability that node \(j\) being in the infected state transmits the infection to its susceptible neighboring node \(i\) at a certain time step. At each time step, an existing infected node \(i\) recovers with probability \(\mu_{i}\); the recovery process is assumed to occur after possible transmission activities. At time \(t\), an existing susceptible node \(i\) turns into state \(P\) if it receives protection at time \(t-1\), which occurs with probability \(\gamma_{i}(t-1)\). The protection can be achieved by vaccination in the epidemic setting or special protection measures in the malware spread setting, which is usually subject to certain budget constraints. The protection probabilities \(\{\gamma_{i}(t)\}\) will be the major control variables for mitigating the outbreaks. At initial time \(t=0\), we assume that node \(i\) has a probability \(P_{S}^{i}(0)\) to be in state \(S\), and probability \(P_{I}^{i}(0)=1-P_{S}^{i}(0)\) to be in state \(I\).
In layer \(b\), each node \(i\) can either be in the normal state (\(N\)) or the failed state (\(F\)), indicated by a binary state variable \(x_{i}\) where \(x_{i}=1\) (\(0\)) denotes the 'fail' ('normal') state at a particular time step. A node \(i\) in layer \(b\) fails if (i) it has been infected, i.e., node \(i\) is in state \(I\) or \(R\) in layer \(a\); (ii) there exists certain neighboring failed nodes such that \(\sum_{j\in\partial_{i}^{b}}b_{ij}x_{j}\geq\Theta_{i}\), where \(\Theta_{i}\) is a threshold and the influence parameter \(b_{ij}\) measures the importance of the failure of node \(j\) on node \(i\). The latter case indicates that node \(i\) can fail due to the failures of its neighbors which it relies on, even though node \(i\) itself is not infected. In summary, the failure propagation process in layer \(b\) can be expressed as
\[x_{i}=\left\{\begin{aligned} & 1,\ \ \text{either (i) node $i$ in state $I$ or $R$ in layer $a$},\\ &\ \ \ \ \ \text{or (ii) }\textstyle\sum_{j\in\partial_{i}^{b}}b_{ij}x_{j}\geq\Theta_{i},\\ & 0,\ \ \text{otherwise}.\end{aligned}\right. \tag{2}\]
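To make the dynamics concrete, a minimal Monte Carlo sketch of a single synchronous update of this coupled SIRP/LTM process is given below; the data structures (dictionaries keyed by node or by directed edge) and parameter names are our own illustrative assumptions rather than part of the model specification.

```python
# Minimal Monte Carlo sketch of one step of the coupled SIRP/LTM dynamics.
# adj_a, adj_b: dicts mapping node -> list of neighbours in layers a and b;
# beta[(j, i)], mu[i], b[(j, i)], theta[i] follow the notation above, and
# gamma[i] is a callable returning the protection probability at time t.
# All data structures here are illustrative assumptions.
import random

S, I, R, P = 'S', 'I', 'R', 'P'

def step(state_a, failed, adj_a, adj_b, beta, mu, gamma, b, theta, t):
    new_a = dict(state_a)
    # SIRP transmission in layer a (infected nodes at time t try to infect)
    for j, s in state_a.items():
        if s == I:
            for i in adj_a[j]:
                if state_a[i] == S and random.random() < beta[(j, i)]:
                    new_a[i] = I
    # recovery (after transmission) and protection of still-susceptible nodes
    for i, s in state_a.items():
        if s == I and random.random() < mu[i]:
            new_a[i] = R
        elif s == S and new_a[i] == S and random.random() < gamma[i](t):
            new_a[i] = P
    # threshold failure propagation in layer b (failures are permanent)
    new_failed = dict(failed)
    for i in adj_b:
        load = sum(b[(j, i)] for j in adj_b[i] if failed[j])
        if new_a[i] in (I, R) or load >= theta[i]:
            new_failed[i] = True
    return new_a, new_failed
```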
### The DMP Framework
We aim to use the DMP approach to investigate the coupled spreading processes described above. The DMP equations of the usual SIR and the LTM model have been derived, based on the microscopic dynamic belief propagation equations [20; 29]. As in generic belief propagation methods [30], the DMP method is exact on tree graphs, while it can constitute a good approximation in loopy graphs when short loops are scarce.
The coupled spreading processes combining the SIR and LTM model appear more involved, where approximations relying on uncorrelated multiplex networks were used [28]. Such approximations become less adequate when the two network layers are correlated, e.g., both layers share the same network topology.
### Dynamic Belief Propagation
To devise more accurate DMP equations for general network models and accommodate the protection mechanism for mitigation, we start from the principled dynamic belief propagation equations of the coupled process, instead of considering each process separately. One important characteristic of our model is that state transition is unidirectional, which can only take the direction \(S\to I\to R\) or \(S\to P\) in layer \(a\), and \(N\to F\) in layer \(b\). In this case, the DMP formalism is much more tractable [20].
Following Refs. [20; 29], we parametrize the dynamical trajectory of each node by its state transition times. In layer \(a\), we denote \(\tau_{i}^{a},\omega_{i}^{a}\) and \(\varepsilon_{i}^{a}\) as the first time at which node \(i\) turns into state \(I\), \(R\) and \(P\), respectively. In layer \(b\), we denote \(\tau_{i}^{b}\) as the first time at which node \(i\) turns into state \(F\). The cavity probability of the trajectory of node \(i\) in the absence of node \(j\), denoted as \(m^{i\to j}(\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a},\tau_{i}^{b})\), is computed by the following dynamic belief propagation equations
\[m^{i\to j}(\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a},\tau_{i}^{b})\] \[= \sum_{\{\tau_{k}^{a},\omega_{k}^{a},\varepsilon_{k}^{a},\tau_{k}^{b}\}}W_{\text{SIRP}}^{i}(\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a}||\{\tau_{k}^{a},\omega_{k}^{a},\varepsilon_{k}^{a}\}_{k\in\partial_{i}^{a}})\] \[\times W_{\text{LTM}}^{i}(\tau_{i}^{b}||\tau_{i}^{a},\varepsilon_{i}^{a},\{\tau_{k}^{b}\}_{k\in\partial_{i}^{b}})\] \[\times \prod_{k\in\partial_{i}\setminus j}m^{k\to i}(\tau_{k}^{a},\omega_{k}^{a},\varepsilon_{k}^{a},\tau_{k}^{b}), \tag{3}\]
where \(W_{\text{SIRP}}^{i}(\cdot)\) and \(W_{\text{LTM}}^{i}(\cdot)\) are the transition kernels dictated by the dynamical rules of the SIRP and LTM model, respectively (for details see Appendix A). The marginal probability of the trajectory of node \(i\), denoted as \(m^{i}(\cdot)\), can be computed in a similar way as Eq. (3), by replacing the product \(\prod_{k\in\partial_{i}\setminus j}\) in the last line of Eq. (3) by \(\prod_{k\in\partial_{i}}\).
The node-level probability of node \(i\) in a certain state can be computed by summing the trajectory-level probability, which will be described in the next section.
## III Node-level DMP equations
Consider the cavity probability of node \(i\) being in state \(S\) in layer \(a\) at time \(t\) (assuming node \(j\) is absent - the cavity); it is obtained by tracing over the corresponding probabilities of trajectories \(m^{i\to j}(\cdot)\) in the cavity graph (with node \(j\) removed)
\[P_{S}^{i\to j}(t)=\sum_{\{\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a},\tau_{i}^{b}\}}\mathbb{I}(t<\tau_{i}^{a}<\omega_{i}^{a})\,\mathbb{I}(t<\varepsilon_{i}^{a})\,m^{i\to j}(\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a},\tau_{i}^{b}), \tag{4}\]
where \(\mathbb{I}(\cdot)\) is the indicator function enforcing the order of state transitions. Similarly, we denote the cavity probability of node \(i\) in state \(F\) in layer \(b\) (in the absence of node \(j\)) as \(P_{F}^{i\to j}(t)\); it is obtained by
\[P_{F}^{i\to j}(t)=\sum_{\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a},\tau_ {i}^{b}}\mathbb{I}(\tau_{i}^{b}\leq t)m^{i\to j}(\tau_{i}^{a},\omega_{i}^{a}, \varepsilon_{i}^{a},\tau_{i}^{b}). \tag{5}\]
The marginal probabilities \(P_{S}^{i}(t)\) and \(P_{F}^{i}(t)\) can be computed in a similar manner, by replacing \(m^{i\to j}(\cdot)\) in Eq. (4) and Eq. (5) with \(m^{i}(\cdot)\).
### DMP Equations in Layer \(a\)
We note that infection spread in layer \(a\) is not influenced by cascades in layer \(b\), while the failure time in layer \(b\) depends on the infection time and the protection time of the corresponding node in layer \(a\). Hence, we can decompose the message \(m^{i\to j}(\cdot)\) to the respective components as
\[m^{i\to j}(\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a},\tau_{i}^{b})= \ m_{a}^{i\to j}(\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a})\] \[\times m_{b}^{i\to j}(\tau_{i}^{b}\mid\tau_{i}^{a},\varepsilon_{i}^{a}). \tag{6}\]
where \(m_{a}^{i\to j}(\cdot)\) and \(m_{b}^{i\to j}(\cdot)\) denote the trajectory-level probabilities of the processes in layer \(a\) and \(b\), respectively.
Summing \(m_{a}^{i\to j}(\cdot)\) over \(\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a}\) up to a certain time yields the normal DMP equations of node-level probabilities for the infection spread in layer \(a\) (see details in Appendix A). They admit the following expressions for \(t>0\)
\[P_{S}^{i\to j}(t)=P_{S}^{i}(0)\prod_{t^{\prime}=0}^{t-1}\big{[}1-\gamma_{i}(t^{\prime})\big{]}\prod_{k\in\partial i\setminus j}\theta^{k\to i}(t),\] (7) \[\theta^{k\to i}(t)=\theta^{k\to i}(t-1)-\beta_{ki}\phi^{k\to i}(t-1),\] (8) \[\phi^{k\to i}(t)=\big{(}1-\beta_{ki}\big{)}\big{(}1-\mu_{k}\big{)}\phi^{k\to i}(t-1)+\big{[}1-\gamma_{k}(t-1)\big{]}P_{S}^{k\to i}(t-1)-P_{S}^{k\to i}(t),\] (9)

where \(\theta^{k\to i}(t)\) is the cavity probability that node \(k\) has not transmitted the infection signal to node \(i\) up to time \(t\), and \(\phi^{k\to i}(t)\) is the cavity probability that \(k\) is in state \(I\) but has not transmitted the infection signal to node \(i\) up to time \(t\).
At time \(t=0\), as we consider that each node \(i\) is either in state \(S\) with probability \(P_{S}^{i}(0)\) or in state \(I\) with probability \(1-P_{S}^{i}(0)\), we have the following initial conditions for the messages
\[P_{S}^{i\to j}(0) =P_{S}^{i}(0),\] \[\phi^{i\to j}(0) =1-P_{S}^{i}(0),\] \[\theta^{i\to j}(0) =1. \tag{10}\]
Upon iterating the above messages (7)-(9) starting from the initial conditions (10), the node-level marginal probabilities can be computed as
\[P_{S}^{i}(t) =P_{S}^{i}(0)\prod_{t^{\prime}=0}^{t-1}\big{[}1-\gamma_{i}(t^{ \prime})\big{]}\prod_{k\in\partial i}\theta^{k\to i}(t), \tag{11}\] \[P_{R}^{i}(t) =P_{R}^{i}(t-1)+\mu_{i}P_{I}^{i}(t-1),\] (12) \[P_{P}^{i}(t) =P_{P}^{i}(t-1)+\gamma_{i}(t-1)P_{S}^{i}(t-1),\] (13) \[P_{I}^{i}(t) =1-P_{S}^{i}(t)-P_{R}^{i}(t)-P_{P}^{i}(t). \tag{14}\]
The above DMP equations (11)-(14) bear similarity to those of the SIR model [19], except for the protection mechanism with control parameters \(\{\gamma_{i}(t)\}\).
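For illustration, the iteration of Eqs. (7)-(14) can be transcribed almost literally into code. The sketch below is a minimal, unoptimized implementation under our own assumptions about the data layout (messages stored per directed edge, \(\gamma_{i}\) passed as callables of time); it is not the authors' reference implementation.

```python
# Sketch of the layer-a DMP iteration, Eqs. (7)-(14).  `adj` maps each node to
# its layer-a neighbours (both directions present); messages are stored per
# directed edge (k, i).  Data layout is an illustrative assumption.
import numpy as np

def dmp_layer_a(adj, beta, mu, gamma, P_S0, T):
    nodes = list(adj)
    edges = [(k, i) for k in adj for i in adj[k]]
    theta = {e: 1.0 for e in edges}                      # Eq. (10)
    phi = {(k, i): 1.0 - P_S0[k] for (k, i) in edges}
    P_S_cav = {(k, i): P_S0[k] for (k, i) in edges}
    P_S = {i: [P_S0[i]] for i in nodes}
    P_I = {i: [1.0 - P_S0[i]] for i in nodes}
    P_R = {i: [0.0] for i in nodes}
    P_P = {i: [0.0] for i in nodes}
    for t in range(1, T + 1):
        theta_new, phi_new, P_S_cav_new = {}, {}, {}
        for (k, i) in edges:
            theta_new[(k, i)] = theta[(k, i)] - beta[(k, i)] * phi[(k, i)]    # Eq. (8)
        for (k, i) in edges:
            prod = np.prod([theta_new[(l, k)] for l in adj[k] if l != i])
            no_prot = np.prod([1.0 - gamma[k](tp) for tp in range(t)])
            P_S_cav_new[(k, i)] = P_S0[k] * no_prot * prod                    # Eq. (7)
            phi_new[(k, i)] = ((1.0 - beta[(k, i)]) * (1.0 - mu[k]) * phi[(k, i)]
                               + (1.0 - gamma[k](t - 1)) * P_S_cav[(k, i)]
                               - P_S_cav_new[(k, i)])                          # Eq. (9)
        theta, phi, P_S_cav = theta_new, phi_new, P_S_cav_new
        for i in nodes:
            prod = np.prod([theta[(k, i)] for k in adj[i]])
            no_prot = np.prod([1.0 - gamma[i](tp) for tp in range(t)])
            P_S[i].append(P_S0[i] * no_prot * prod)                           # Eq. (11)
            P_R[i].append(P_R[i][-1] + mu[i] * P_I[i][-1])                    # Eq. (12)
            P_P[i].append(P_P[i][-1] + gamma[i](t - 1) * P_S[i][-2])          # Eq. (13)
            P_I[i].append(1.0 - P_S[i][-1] - P_R[i][-1] - P_P[i][-1])         # Eq. (14)
    return P_S, P_I, P_R, P_P
```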
### DMP Equations in Layer \(b\)
As for the cascade process in layer \(b\), whether node \(i\) will turn into state \(F\) (fail) also depends on the state in layer \(a\), making it more challenging to derive the corresponding DMP equations. The key to obtaining node-level equations for \(P_{F}^{i\to j}(t)\) in Eq. (5) (and the corresponding marginal probability \(P_{F}^{i}(t)\)) is to introduce several intermediate quantities to facilitate the calculation; the details are outlined in Appendix A.
To summarize, the node-level failure probability \(P_{F}^{i}(t)\) can be decomposed as
\[P_{F}^{i}(t)=P_{I}^{i}(t)+P_{R}^{i}(t)+P_{SF}^{i}(t)+P_{PF}^{i}(t), \tag{15}\]
where \(P_{SF}^{i}(t)\) and \(P_{PF}^{i}(t)\) are the probabilities that node \(i\) is in state \(F\) in layer \(b\), while it is in state \(S\) or state \(P\) in layer \(a\), respectively. For these two cases, the failure of node \(i\) is triggered by the failure propagation of its neighbors from layer \(b\). A similar relation holds for the cavity probability \(P_{F}^{i\to j}(t)\).
The probability \(P_{SF}^{i}(t)\) admits the following iteration
\[P_{SF}^{i}(t)=P_{S}^{i}(0)\prod_{t^{\prime}=0}^{t-1}\big{[}1-\gamma_{i}(t^{\prime})\big{]}\prod_{k\in\partial_{i}^{a}\setminus\partial_{i}^{a}\cap\partial_{i}^{b}}\theta^{k\to i}(t)\sum_{\{x_{k}\}_{k\in\partial_{i}^{b}}}\mathbb{I}\Big{(}\sum_{k\in\partial_{i}^{b}}b_{ki}x_{k}\geq\Theta_{i}\Big{)}\] \[\times\prod_{k\in\partial_{i}^{b}\setminus\partial_{i}^{a}\cap\partial_{i}^{b},\,x_{k}=1}P_{F}^{k\to i}(t-1)\prod_{k\in\partial_{i}^{b}\setminus\partial_{i}^{a}\cap\partial_{i}^{b},\,x_{k}=0}\big{[}1-P_{F}^{k\to i}(t-1)\big{]}\] \[\times\prod_{k\in\partial_{i}^{a}\cap\partial_{i}^{b},\,x_{k}=1}\chi^{k\to i}(t)\prod_{k\in\partial_{i}^{a}\cap\partial_{i}^{b},\,x_{k}=0}\big{[}\theta^{k\to i}(t)-\chi^{k\to i}(t)\big{]}, \tag{16}\]
where \(\chi^{k\to i}(t)\) is the cavity probability that node \(k\) is in state \(F\) at time \(t-1\), and it has not sent the infection signal to node \(i\) up to time \(t\).
The cavity probability \(\chi^{k\to i}(t)\) can be decomposed into
\[\chi^{k\to i}(t)=\psi^{k\to i}(t)+P_{SF}^{k\to i}(t-1)+P_{PF}^{k\to i}(t-1), \tag{17}\]
where \(\psi^{k\to i}(t)\) is the cavity probability that node \(k\) is in state \(I\) or \(R\) at time \(t-1\), but has not transmitted the infection signal to node \(i\) up to time \(t\). The cavity probability \(\psi^{k\to i}(t)\) can be computed as
\[\psi^{k\to i}(t)=\psi^{k\to i}(t-1)-\beta_{ki}\phi^{k\to i}(t-1)\] \[\qquad+\big{[}1-\gamma_{k}(t-2)\big{]}P_{S}^{k\to i}(t-2)-P_{S}^{k \to i}(t-1). \tag{18}\]
Similarly, the probability \(P_{PF}^{i}(t)\) admits the following iteration
\[P_{PF}^{i}(t)=P_{S}^{i}(0)\sum_{\varepsilon=1}^{t}\gamma_{i}(\varepsilon-1)\prod_{t^{\prime}=0}^{\varepsilon-2}\big{[}1-\gamma_{i}(t^{\prime})\big{]}\prod_{k\in\partial_{i}^{a}\setminus\partial_{i}^{a}\cap\partial_{i}^{b}}\theta^{k\to i}(\varepsilon-1)\sum_{\{x_{k}\}_{k\in\partial_{i}^{b}}}\mathbb{I}\Big{(}\sum_{k\in\partial_{i}^{b}}b_{ki}x_{k}\geq\Theta_{i}\Big{)}\] \[\times\prod_{k\in\partial_{i}^{b}\setminus\partial_{i}^{a}\cap\partial_{i}^{b},\,x_{k}=1}P_{F}^{k\to i}(t-1)\prod_{k\in\partial_{i}^{b}\setminus\partial_{i}^{a}\cap\partial_{i}^{b},\,x_{k}=0}\big{[}1-P_{F}^{k\to i}(t-1)\big{]}\] \[\times\prod_{k\in\partial_{i}^{a}\cap\partial_{i}^{b},\,x_{k}=1}\tilde{\chi}^{k\to i}(t,\varepsilon)\prod_{k\in\partial_{i}^{a}\cap\partial_{i}^{b},\,x_{k}=0}\big{[}\theta^{k\to i}(\varepsilon-1)-\tilde{\chi}^{k\to i}(t,\varepsilon)\big{]}, \tag{19}\]
where the dummy variable \(\varepsilon\) indicates the time at which node \(i\) receives the protection signal.
In Eq. (19), \(\tilde{\chi}^{k\to i}(t,\varepsilon)\) is the cavity probability that node \(k\) is in state \(F\) at time \(t-1\), but has not transmitted the infection signal to node \(i\) up to time \(\varepsilon\). It can be decomposed into
\[\tilde{\chi}^{k\to i}(t,\varepsilon)=\tilde{\psi}^{k\to i}(t, \varepsilon)+P_{SF}^{k\to i}(t-1)+P_{PF}^{k\to i}(t-1), \tag{20}\]
where \(\tilde{\psi}^{k\to i}(t,\varepsilon)\) is the cavity probability that node \(k\) is in state \(I\) or \(R\) at time \(t-1\), but has not transmitted the
infection signal to node \(i\) up to time \(\varepsilon-1\). The cavity probability \(\tilde{\psi}^{k\to i}(t,\varepsilon)\) can be computed as
\[\tilde{\psi}^{k\to i}(t,\varepsilon) =\psi^{k\to i}(\varepsilon-1)+P_{I}^{k\to i}(t-1)+P_{R}^{k\to i}(t-1)\] \[\quad-\left[P_{I}^{k\to i}(\varepsilon-2)+P_{R}^{k\to i}( \varepsilon-2)\right]. \tag{21}\]
Note that the cavity probabilities \(P_{SF}^{i\to j}(t)\) and \(P_{PF}^{i\to j}(t)\) are computed using formulas similar to Eq. (16) and Eq. (19), but on the cavity graph where node \(j\) is removed. This closes the loop for the DMP equations in layer \(b\).
The initial conditions for the corresponding messages are given by
\[P_{F}^{k}(0)=P_{F}^{k\to i}(0)=P_{I}^{k}(0), \tag{22}\] \[P_{SF}^{k}(0)=P_{SF}^{k\to i}(0)=0,\] (23) \[P_{PF}^{k}(0)=P_{PF}^{k\to i}(0)=0,\] (24) \[\psi^{k\to i}(1)=\chi^{k\to i}(1)=(1-\beta_{ki})P_{I}^{k}(0),\] (25) \[\tilde{\psi}^{k\to i}(1,1)=\tilde{\chi}^{k\to i}(1,1)=P_{I}^{k}(0). \tag{26}\]
For \(t\geq 2,\varepsilon=1\), we have
\[\tilde{\psi}^{k\to i}(t,\varepsilon=1) =P_{I}^{k\to i}(t-1)+P_{R}^{k\to i}(t-1), \tag{27}\] \[\tilde{\chi}^{k\to i}(t,\varepsilon=1) =P_{I}^{k\to i}(t-1)+P_{R}^{k\to i}(t-1)\] \[\quad+P_{SF}^{k\to i}(t-1)+P_{PF}^{k\to i}(t-1). \tag{28}\]
We remark that for a total time \(T\), the computational complexity of the node-level DMP equations is \(O(|E|T^{2})\), unlike the \(O(|E|T)\) complexity of the SIRP model. This is due to the coupling of the two-layer processes and the protection mechanism. The summation over the dummy states \(\{x_{k}\}_{k\in\partial_{i}^{b}}\) in Eq. (16) and Eq. (19) also implies a high computational demand for networks with high-degree nodes. One way to alleviate this complexity is to use the dynamic programming techniques introduced in Ref. [31].
These DMP equations are exact if both layers are tree networks, while they are approximate solutions when there are loops in the underlying networks.
### Simplification in the Absence of Neighbor Overlap
If there are no overlaps between the neighbors of node \(i\) in layer \(a\) and those in layer \(b\), i.e., \(\partial_{i}^{a}\cap\partial_{i}^{b}=\varnothing\), the messages \(\chi^{k\to i},\psi^{k\to i},\tilde{\chi}^{k\to i}\) and \(\tilde{\psi}^{k\to i}\) are not needed, and the node-level probabilities \(P_{SF}^{i}(t)\) and \(P_{PF}^{i}(t)\) can be much simplified as
\[P_{SF}^{i}(t)=P_{S}^{i}(t)\sum_{\{x_{k}\}_{k\in\partial_{i}^{b} }}\mathbb{I}\bigg{(}\sum_{k\in\partial_{i}^{b}}b_{ki}x_{k}\geq\Theta_{i}\bigg{)} \tag{29}\] \[\quad\times\prod_{k\in\partial_{i}^{b},\,x_{k}=1}P_{F}^{k\to i }(t-1)\prod_{k\in\partial_{i}^{b},\,x_{k}=0}\big{[}1-P_{F}^{k\to i}(t-1)\big{]},\] \[P_{PF}^{i}(t)=P_{P}^{i}(t)\sum_{\{x_{k}\}_{k\in\partial_{i}^{b} }}\mathbb{I}\bigg{(}\sum_{k\in\partial_{i}^{b}}b_{ki}x_{k}\geq\Theta_{i}\bigg{)}\] (30) \[\quad\times\prod_{k\in\partial_{i}^{b},\,x_{k}=1}P_{F}^{k\to i }(t-1)\prod_{k\in\partial_{i}^{b},\,x_{k}=0}\big{[}1-P_{F}^{k\to i}(t-1)\big{]}.\]
This is also a reasonable approximation if the two layers \(a\) and \(b\) have little correlation, which has been exploited in Ref. [28]. In this work, we will employ this approximation when we consider the dynamics in the large time limit and devise an optimization algorithm for mitigating the cascading failures, in order to reduce the computational complexity.
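To make the structure of Eqs. (29)-(30) concrete, the following minimal Python sketch (not the authors' code; all function and variable names are illustrative) computes the probability that the weighted number of failed \(b\)-neighbors reaches the threshold by brute-force enumeration, which is only feasible for small degrees; the dynamic programming of Ref. [31] would replace the enumeration for high-degree nodes.

```python
# Minimal sketch of the simplified messages in Eqs. (29)-(30): with no
# neighbor overlap, the probability that node i has failed through layer b
# is the probability that the weighted number of failed b-neighbors reaches
# the threshold Theta_i.
from itertools import product

def threshold_exceed_prob(pf_msgs, weights, theta):
    """P[ sum_k b_ki * x_k >= theta ] with x_k ~ Bernoulli(pf_msgs[k])."""
    prob = 0.0
    for config in product([0, 1], repeat=len(pf_msgs)):
        if sum(w * x for w, x in zip(weights, config)) < theta:
            continue
        p = 1.0
        for pf, x in zip(pf_msgs, config):
            p *= pf if x == 1 else (1.0 - pf)
        prob += p
    return prob

def p_sf_and_pf(p_s_i, p_p_i, pf_msgs, weights, theta):
    """Simplified Eqs. (29)-(30): scale the exceedance probability by the
    marginal probabilities of node i being susceptible or protected."""
    exceed = threshold_exceed_prob(pf_msgs, weights, theta)
    return p_s_i * exceed, p_p_i * exceed

# toy usage: node with three b-neighbors, unit weights, threshold 2
print(p_sf_and_pf(0.7, 0.1, pf_msgs=[0.3, 0.5, 0.2], weights=[1, 1, 1], theta=2))
```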
### Effectiveness of the DMP Method
We test the efficacy of the DMP equations derived in Sec. III.1 and Sec. III.2, by comparing the node-level probabilities \(P_{S}^{i}(t)\) and \(P_{F}^{i}(t)\) to those obtained by Monte Carlo (MC) simulations. The DMP theory produces exact marginal probabilities for node activities in coupled tree networks, as demonstrated in Fig. 2(a) and (b). For random regular graphs where there are many loops, the DMP method also yields reasonably accurate solutions, as demonstrated in Fig. 2(c) and (d).
## IV Impact of Infection-Induced Cascades
The obtained DMP equations of the coupled SIRP and LTM models allow us to examine the impact of the infection-induced cascading failures, on either a specific instance of a multiplex network or an ensemble of networks following a certain degree distribution. In this section, we do not consider the protection of nodes, i.e., we set \(\gamma_{i}(t)=0\), in which case the process in layer \(a\) is essentially a discrete-time SIR model.
### Impact on A Specific Network
For the process in layer \(a\), we define the outbreak size at time \(t\) as the fraction of nodes that have been infected at that time
\[\rho_{I}+\rho_{R}=\frac{1}{N}\sum_{i\in V_{a}}P_{I}^{i}(t)+\frac{1}{N}\sum_{i \in V_{a}}P_{R}^{i}(t). \tag{31}\]
For the process in layer \(b\), we define the cascade size at time \(t\) as the fraction of nodes that have failed at that time
\[\rho_{F}=\frac{1}{N}\sum_{i\in V_{b}}P_{F}^{i}(t). \tag{32}\]
By definition, we have \(\rho_{F}\geq\rho_{I}+\rho_{R}\).
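As a small illustration (with assumed array names), the outbreak and cascade sizes of Eqs. (31)-(32) are simple averages of the node-level marginals returned by the DMP iteration:

```python
# Sketch of Eqs. (31)-(32): p_i, p_r, p_f are length-N arrays of node-level
# marginals at a given time t, as produced by the DMP iteration.
import numpy as np

def outbreak_and_cascade_size(p_i, p_r, p_f):
    rho_outbreak = np.mean(p_i) + np.mean(p_r)   # rho_I + rho_R, Eq. (31)
    rho_cascade = np.mean(p_f)                   # rho_F, Eq. (32)
    assert rho_cascade >= rho_outbreak - 1e-9    # sanity check: rho_F >= rho_I + rho_R
    return rho_outbreak, rho_cascade
```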
In Fig. 3, we demonstrate the time evolution of the infection outbreak size and the cascade size in a multiplex network where both layers are random regular graphs with size \(N=1600\). It can be observed that \(\rho_{F}\) is much larger than \(\rho_{I}+\rho_{R}\) asymptotically, which suggests that the failure propagation mechanism in layer \(b\) significantly amplifies the impact of the infection outbreaks in layer \(a\). In particular, the failure can eventually propagate to the whole network even though less than 70% of the population gets infected when the spread of the infection saturates. Compared with MC simulations, the DMP method systematically overestimates the outbreak sizes due to the effect of mutual infection, but it has been shown to offer a significant improvement over the individual-based mean-field method [21, 22, 32].
### Asymptotic Properties
In the above example, the system converges to a steady state in the large time limit. The DMP approach allows us to systematically investigate the asymptotic behavior of the coupled spreading processes.
For the process in layer \(a\), we define an auxiliary probability
\[p_{ij}:=\frac{\beta_{ij}}{\beta_{ij}+\mu_{i}-\beta_{ij}\mu_{i}}. \tag{33}\]
Then the messages in layer \(a\) admit the following expressions in the limit \(T\rightarrow\infty\)
\[\phi^{i\to j}(\infty) =0,\] \[\theta^{i\to j}(\infty) =1-p_{ij}+p_{ij}P_{S}^{i\to j}(\infty),\] \[P_{S}^{i\to j}(\infty) =P_{S}^{i}(0)\prod_{k\in\partial_{i}^{a}\setminus j}\theta^{k\to i}(\infty),\] \[P_{S}^{i}(\infty) =P_{S}^{i}(0)\prod_{k\in\partial_{i}^{a}}\theta^{k\to i}(\infty). \tag{34}\]
Details of the derivation can be found in Appendix B. The above asymptotic equations (34) suggest a well-known relationship between epidemic spreading and bond percolation [18, 15, 33]. The quantity \(p_{ij}\) defined in Eq. (33) can be interpreted as a bond occupation probability of edge \((i,j)\), which differs from the continuous-time counterpart [18, 33] by an additional term \(\beta_{ij}\mu_{i}\) in the denominator. The term \(\beta_{ij}\mu_{i}\) accounts for the simultaneous events that node \(i\) infects node \(j\) and recovers within the same time step [21].
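A minimal sketch (assuming adjacency lists, edge-wise infection probabilities and node-wise recovery probabilities stored in plain Python dictionaries; not the paper's implementation) of solving the asymptotic message equations (34) by fixed-point iteration reads:

```python
# Fixed-point iteration for the asymptotic messages theta^{i->j}(inf) of
# Eq. (34) on a given layer-a graph.  adj_a[i] is the list of layer-a
# neighbors of node i, beta[(i, j)] the infection probability along edge
# i -> j, mu[i] the recovery probability, p_s0[i] the initial susceptibility.
import math

def solve_theta_infinity(adj_a, beta, mu, p_s0, n_iter=200):
    # Bond occupation probabilities, Eq. (33); messages live on directed edges.
    p = {(i, j): beta[(i, j)] / (beta[(i, j)] + mu[i] - beta[(i, j)] * mu[i])
         for i in adj_a for j in adj_a[i]}
    theta = {e: 1.0 for e in p}
    for _ in range(n_iter):
        # Eq. (34): theta(inf) = 1 - p + p * P_S^{i->j}(inf)
        theta = {(i, j): 1.0 - p[(i, j)]
                 + p[(i, j)] * p_s0[i]
                 * math.prod(theta[(k, i)] for k in adj_a[i] if k != j)
                 for (i, j) in p}
    p_s_inf = {i: p_s0[i] * math.prod(theta[(k, i)] for k in adj_a[i])
               for i in adj_a}
    return theta, p_s_inf
```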
Figure 2: Comparison of node-level probabilities \(P_{S}^{i}(t)\) and \(P_{F}^{i}(t)\) obtained by the DMP theory and Monte Carlo simulation (averaged over \(10^{5}\) realizations). Panels (a) and (b) correspond to a binary tree network of size \(N=63\) for both layers. Panels (c) and (d) correspond to a random regular graph (RRG) of size \(N=100\) and degree \(K=5\) for both layers. The system parameters are \(T=50,\beta_{ij}=0.2,\mu_{i}=0.5,b_{ij}=1,\Theta_{i}=0.6|\partial_{i}^{b}|,\gamma_{i}(t)=0\).

Figure 3: Evolution of the sizes of the infection outbreak in layer \(a\) (measured by \(\rho_{I}+\rho_{R}\)) and total failures in layer \(b\) (measured by \(\rho_{F}\)). Layer \(a\) and layer \(b\) have different network topologies, but both are realizations of random regular graphs of size \(N=1600\) and degree \(K=5\). At time \(t=0\), there are 5 infected nodes. The system parameters are \(\beta_{ij}=0.2,\mu_{i}=0.5,b_{ij}=1,\Theta_{i}=0.6|\partial_{i}^{b}|,\gamma_{i}(t)=0\).

For the process in layer \(b\), we assume that layers \(a\) and \(b\) are weakly correlated due to their different topologies and adopt the approximation made in Sec. III.3. As no protection is applied, we have \(P_{PF}^{i}(t)=0\). Then the messages in layer \(b\) admit the following expression in the limit \(T\to\infty\)
\[P_{F}^{i\to j}(\infty)=1-P_{S}^{i}(\infty) \tag{35}\] \[\quad+P_{S}^{i}(\infty)\sum_{\{x_{k}\}_{k\in\partial_{i}^{b}\setminus j }}\mathbb{I}\bigg{(}\sum_{k\in\partial_{i}^{b}\setminus j}b_{ki}x_{k}\geq\Theta _{i}\bigg{)}\] \[\qquad\times\prod_{k\in\partial_{i}^{b}\setminus j,x_{k}=1}P_{F}^ {k\to i}(\infty)\prod_{k\in\partial_{i}^{b}\setminus j,x_{k}=0}\big{[}1-P_{F}^ {k\to i}(\infty)\big{]},\]
where a similar expression holds for \(P_{F}^{i}(\infty)\) by replacing \(\partial_{i}^{b}\setminus j\) with \(\partial_{i}^{b}\) in Eq. (35). The asymptotic equations for layer \(b\) suggest a relationship between the LTM model and bootstrap percolation [29].
### Coupled Percolation in Large Homogeneous Networks
The large-time behaviors of the two processes correspond to two types of percolation problems. To further examine the macroscopic critical behaviors of the coupled percolation models, it is convenient to consider large-size random regular graphs of degree \(K\) (which have a homogeneous network topology), and homogeneous system parameters with \(\beta_{ij}=\beta,\mu_{i}=\mu,b_{ij}=b,\Theta_{i}=\Theta\). We further assume that each node \(i\) has a vanishingly small probability of being infected at time \(t=0\) with \(P_{I}^{i}(0)=1-P_{S}^{i}(0)\propto 1/N\). In the large size limit \(N\to\infty\), we have \(P_{S}^{i}(0)\to 1\).
Due to the homogeneity of the system, one can assume that all messages and marginal probabilities are identical,
\[\theta^{i\to j}(\infty)=\theta^{\infty}, \tag{36}\] \[P_{F}^{i\to j}(\infty)=P_{F}^{\infty},\] (37) \[P_{S}^{i}(\infty)=\rho_{S}^{\infty},\] (38) \[P_{F}^{i}(\infty)=\rho_{F}^{\infty}. \tag{39}\]
It leads to the self-consistent equations in the large size limit (\(N\to\infty\)),
\[\theta^{\infty} =1-p+p\cdot(\theta^{\infty})^{K-1}, \tag{40}\] \[\rho_{S}^{\infty} =(\theta^{\infty})^{K},\] (41) \[P_{F}^{\infty} =1-\rho_{S}^{\infty}\] (42) \[\quad+\rho_{S}^{\infty}\sum_{n=\lceil\Theta\rceil}^{K-1}{K-1 \choose n}(P_{F}^{\infty})^{n}(1-P_{F}^{\infty})^{K-1-n},\] \[\rho_{F}^{\infty} =1-\rho_{S}^{\infty}\] (43) \[\quad+\rho_{S}^{\infty}\sum_{n=\lceil\Theta\rceil}^{K}{K\choose n }(P_{F}^{\infty})^{n}(1-P_{F}^{\infty})^{K-n},\]
where \(p=\frac{\beta}{\beta+\mu-\beta\mu}\) and \(\lceil x\rceil\) is the smallest integer greater than or equal to \(x\).
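For illustration, the self-consistent equations (40)-(43) can be solved by a simple fixed-point iteration; the sketch below (illustrative, not the authors' code) uses the parameters of the example discussed next (\(K=5\), \(\mu=0.5\), \(b=1\), \(\Theta=3\)):

```python
# Fixed-point iteration for Eqs. (40)-(43) on a large random regular graph.
# theta is perturbed away from the trivial fixed point theta = 1 so that the
# nontrivial solution is reached when it exists.
from math import comb, ceil

def solve_homogeneous(beta, mu=0.5, K=5, theta_thr=3, n_iter=5000):
    p = beta / (beta + mu - beta * mu)
    theta, pf = 1.0 - 1e-6, 1e-6
    for _ in range(n_iter):
        theta = 1.0 - p + p * theta ** (K - 1)                     # Eq. (40)
        rho_s = theta ** K                                         # Eq. (41)
        pf = 1.0 - rho_s + rho_s * sum(
            comb(K - 1, n) * pf ** n * (1 - pf) ** (K - 1 - n)
            for n in range(ceil(theta_thr), K))                    # Eq. (42)
    rho_f = 1.0 - rho_s + rho_s * sum(
        comb(K, n) * pf ** n * (1 - pf) ** (K - n)
        for n in range(ceil(theta_thr), K + 1))                    # Eq. (43)
    return 1.0 - rho_s, rho_f      # outbreak size (rho_I + rho_R), failure size

# below the epidemic threshold beta_c^a = 1/7 (reported below) both sizes vanish
print(solve_homogeneous(beta=0.10))   # ~ (0, 0)
print(solve_homogeneous(beta=0.20))   # finite outbreak and failure sizes
```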
We observe that \(\theta^{\infty}=1,\rho_{S}^{\infty}=1,P_{F}^{\infty}=0,\rho_{F}^{\infty}=0\) is always a fixed point to Eqs. (40)-(43), which corresponds to vanishing outbreak sizes. When the infection probability \(\beta\) is larger than a critical point \(\beta_{c}^{a}\), this fixed point solution becomes unstable and another fixed point with finite outbreak sizes develops.
As a concrete example, we consider random regular graphs of degree \(K=5\) and fix \(\mu=0.5,b=1,\Theta=3\). By solving Eqs. (40)-(43) for different \(\beta\), we obtain outbreak sizes for both layers \(a\) and \(b\) under different infection strengths. The result is shown in Fig. 4, where the asymptotic theory accurately predicts the behavior of a large-size system (\(N=1600\)) in the large-time limit. It is also observed that the outbreak sizes in both layers become non-zero when \(\beta\) is larger than a critical point \(\beta_{c}^{a}=\frac{1}{7}\). Furthermore, the outbreak size \(\rho_{F}^{\infty}\) in layer \(b\) exhibits a discontinuous jump to a complete breakdown (\(\rho_{F}^{\infty}=1\)) when \(\beta\) increases and surpasses another transition point \(\beta_{c}^{b}\approx 0.159\). However, at the transition point \(\beta_{c}^{b}\), only about \(28.6\%\) of the population has been infected in layer \(a\).
This example again indicates that the cascading failure propagation in layer \(b\) can drastically amplify the impact of the epidemic outbreaks in layer \(a\). Lastly, we remark that whether layer \(b\) will exhibit a discontinuous transition or not depends on the values of \(K\) and \(\Theta\)[29], as predicted by the bootstrap percolation theory [34].
## V Mitigation of Infection-Induced Cascades
### The Optimization Framework
The catastrophic breakdown can be mitigated if timely protections are provided to stop the infection's spread. In our model, this is implemented by assigning a non-zero protection probability \(\gamma_{i}(t)\) to node \(i\), after which it is immune from infection from layer \(a\). To minimize the size of final failures, it would be more effective to take into account the spreading processes in both layers \(a\) and \(b\) when deciding which nodes to prioritize for protection.
Here, we develop mitigation strategies by solving the following constrained optimization problems
\[\min_{\gamma} \mathcal{O}(\gamma):=\rho_{F}(T)=\frac{1}{N}\sum_{i\in V_{b}}P_{F} ^{i}(T),\] (44) s. t. \[0\leq\gamma_{i}(t)\leq 1\quad\forall i,t, \tag{45}\] \[\sum_{i\in V_{b}}\sum_{t=0}^{T-1}\gamma_{i}(t)\leq\gamma^{\, \mathrm{tot}}, \tag{46}\]
where the constraint in Eq. (45) ensures that \(\gamma_{i}(t)\) is a probability, and Eq. (46) represents the global budget constraint on the protection resources. As the objective function \(\mathcal{O}(\gamma)\) (the size of final failures) depends on the evolution of the coupled spreading processes, the optimization problem is challenging. Ref. [24] introduced the optimal control framework to tackle similar problems, by estimating the marginal probabilities of individuals with
the DMP methods. The success of the optimal control approach highlights another advantage of the theoretical methods over numerical simulations [12; 35; 24].
In this work, we adopt a similar strategy to solve the optimization problem defined in Eqs. (44)-(46), where \(P_{F}^{i}(T)\) is estimated by the DMP equations derived in Sec. III. We also adopt the approximation made in Sec. III.3 for simplicity. As the expressions of the DMP equations have been explicitly given and only involve elementary arithmetic operations, we leverage tools of automatic differentiation to compute the gradient of the objective function \(\nabla_{\gamma}\mathcal{O}(\gamma)\) in a back-propagation fashion [36]. It allows us to derive gradient-based algorithms for solving the optimization problem. We remark that such a back-propagation algorithm is equivalent to optimal control with gradient descent update on the control parameters [37].
To handle the box constraint in Eq. (45), we adopt the mirror descent method, which performs the gradient-based update in the dual (or mirror) space rather than the primal space where \(\{\gamma_{i}(t)\}\) live [38, 39]. In our case, we use the logit function \(\Psi(x)=\log(\frac{x}{1-x})\) to map the primal control variable \(\gamma_{i}(t)\) to the dual space as \(h_{i}(t)=\Psi(\gamma_{i}(t))\in\mathbb{R}\), where the gradient descent updates are performed. The primal variable can be recovered through the inverse mapping of \(\Psi(\cdot)\), which is \(\Psi^{-1}(h)=\frac{1}{1+\exp(-h)}\). The elementary mirror descent update step is
\[g^{n} \leftarrow\nabla_{\gamma}\mathcal{O}(\gamma^{n}), \tag{47}\] \[\gamma^{n+1} \leftarrow\Psi^{-1}\big{(}\Psi(\gamma^{n})-sg^{n}\big{)}, \tag{48}\]
where \(n\) is an index keeping track of the optimization process and \(s\) is the step size of the gradient update.
In general, the above optimization process tends to increase the total resources \(\sum_{i,t}\gamma_{i}(t)\). To prevent the violation of the constraint in Eq. (46) during the updates, we suppress the gradient component which increases the total resources when \(\sum_{i,t}\gamma_{i}(t)\geq(1-\epsilon)\gamma^{\rm tot}\), by shifting the gradient \(g^{n}\) in Eq. (48) with a magnitude \(b^{n}\)
\[b^{n} \leftarrow\frac{\sum_{t,i}\gamma_{i}^{n}(t)(1-\gamma_{i}^{n}(t)) \frac{\partial}{\partial\gamma_{i}(t)}\mathcal{O}(\gamma^{n})}{\sum_{t,i} \gamma_{i}^{n}(t)(1-\gamma_{i}^{n}(t))}, \tag{49}\] \[g^{n} \leftarrow\nabla_{\gamma}\mathcal{O}(\gamma^{n})-b^{n}. \tag{50}\]
The rationale for the choice of \(b^{n}\) is explained in Appendix C. In our implementation of the algorithm, we choose \(\epsilon=0.02\). Even though the shifted gradient method is used, it does not strictly forbid the violation of the constraint in Eq. (46). If the resource capacity constraint is violated, we project the control variables to the feasible region through the simple rescaling
\[\gamma^{n}\leftarrow\frac{\gamma^{\rm tot}}{\sum_{t,i}\gamma_{i}^{n}(t)}\gamma ^{n}. \tag{51}\]
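A schematic sketch of one update of Eqs. (47)-(51) is given below (illustrative; `grad_fn` stands for the gradient \(\nabla_{\gamma}\mathcal{O}\) obtained, e.g., by automatic differentiation through the DMP equations, and all array shapes are assumed):

```python
# One mirror-descent step in logit space with the gradient shift b^n of
# Eq. (49) near the budget, and the rescaling projection of Eq. (51).
import numpy as np

def sigmoid(h):
    return 1.0 / (1.0 + np.exp(-h))

def logit(x):
    return np.log(x / (1.0 - x))

def mirror_descent_step(gamma, grad_fn, gamma_tot, step=0.1, eps=0.02):
    g = grad_fn(gamma)                                  # Eq. (47)
    if gamma.sum() >= (1.0 - eps) * gamma_tot:          # close to the budget
        w = gamma * (1.0 - gamma)
        b = (w * g).sum() / w.sum()                     # Eq. (49)
        g = g - b                                       # Eq. (50)
    gamma_new = sigmoid(logit(gamma) - step * g)        # Eq. (48)
    if gamma_new.sum() > gamma_tot:                     # Eq. (51): project back
        gamma_new *= gamma_tot / gamma_new.sum()
    return gamma_new
```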
Finally, the resource capacity constraint Eq. (46) implies that a \(\gamma^{\rm tot}\) amount of protection resources can be distributed in different time steps. In some scenarios, the resources arrive in an online fashion, e.g., a limited number of vaccines can be produced every day. In these cases, there is a resource capacity constraint at each time step. Some results of such a scenario are discussed in Appendix D.
### Case Study on a Tree Network
We first verify the effectiveness of the optimization method by considering a simple problem on a binary tree network of size \(N=63\). Three individuals are chosen to be infected at time \(t=0\), and the outbreak is simulated for \(T=50\) time steps. The system parameters are set as \(\beta_{ij}=0.5,\mu_{i}=0.5,b_{ij}=1,\Theta_{i}=0.6|\partial_{i}^{b}|\). Without any mitigation strategy, _more than half of the population fail_ at the end of the process.
Figure 4: Size of infection outbreak in layer \(a\) (measured by \(\rho_{I}+\rho_{R}\)) and total failures in layer \(b\) (measured by \(\rho_{F}\)) as a function of the infection probability \(\beta\) in the large-time limit. (a) Random regular graphs with \(N=1600,K=5\) are considered. The spreading processes are iterated for \(T=100\) steps, where stationary states are attained. (b) Random regular graphs with \(K=5\) in the asymptotic limit \(T\rightarrow\infty,N\rightarrow\infty\) are considered by analyzing the large-time behaviors of the DMP equations. The triangle and the square markers indicate the locations of the two transition points \(\beta_{c}^{a}\) and \(\beta_{c}^{b}\), respectively. The system parameters are homogeneous, with \(\mu=0.5,b=1,\Theta=3,\gamma_{i}(t)=0\).

We then protect some vital nodes to mitigate the system failure, by using the optimization method proposed in Sec. V.1. On the left column of Fig. 5, we restrict the total resource to be \(\gamma^{\rm tot}=5\). Fig. 5(a) shows that the optimization algorithm successfully reduces the final failure rate, which demonstrates the effectiveness of the method. We found that the optimal protection resource distribution \(\{\gamma_{i}^{*}(t)\}\) mostly concentrates on a few nodes at a certain time step (as shown in Fig. 5(c)), which implies that we can confidently select which nodes to protect. All the nodes with high \(\gamma_{i}^{*}(t)\) receive protection at time \(t=0\), which implies that the best mitigation strategy in this example is to distribute all \(\gamma^{\rm tot}\) resources as early as possible to stop the infection spread. Fig. 5(e) shows the optimal placement of resources, which can completely block the infection spread, hence minimizing the network failure. In this example, both layers \(a\) and \(b\) have the same network structure, which is depicted in Fig. 5(e).
Similar phenomena are observed in the case with \(\gamma^{\rm tot}=4\) as shown in the right column of Fig. 5, except that the protections are not sufficient to completely block the infection spread. The optimization algorithm sacrifices only two nodes in the vicinity of the infected node in the lower right corner of Fig. 5(f) (indicated by a black arrow), leaving other parts of the network in the normal state.
The good performance of the optimization relies on there being enough protection resources (i.e., a large \(\gamma^{\rm tot}\)) and on knowing the origins of the outbreak. In some cases, whether a node was infected at the initial time is not fully determined but follows a probability distribution. Such cases can be easily accommodated in the DMP framework, which is intrinsically probabilistic. We investigated such a scenario with probabilistic seeding in Appendix E, and found that the optimization method can still effectively reduce the sizes of network failures.
### Case Study in a Synthetic Network
To further showcase the applicability of the optimization algorithm for failure mitigation, we consider a synthetic technological multiplex network where layer \(a\) represents a communication network and layer \(b\) represents a power network. We consider the scenario that the communication network can be attacked by malware but can also be protected by technicians, which is modeled by the proposed SIRP model. The infection of a node in the communication network causes the breakdown of the corresponding node in the power network. The breakdown of components in a power network can trigger further failures and form a cascade, which is modeled by the proposed LTM model. We have neglected the details of the power flow dynamics in order to obtain a tractable model and an insightful simple example.
Here, we extract the network topology from the IEEE 118-bus test case to form layer \(b\)[40], which has \(N=118\) nodes. We then obtain layer \(a\) by rewiring a regular graph of the same size with degree \(K=4\) using a rewiring probability \(p_{\rm rewire}=0.3\), which creates a Watts-Strogatz small-world network and mimics the topology of communication networks [41]. The resulting multiplex network is plotted in Fig. 6.
Figure 5: Mitigation of the network failures in a binary tree network of size \(N=63\) for both layers. Panels (a)(c)(e) correspond to the case with \(\gamma^{\rm tot}=5\), while Panels (b)(d)(f) correspond to the case with \(\gamma^{\rm tot}=4\). Panels (a) and (b) depict how the final failure size changes during the optimization process. Specifically, the control parameters \(\{\gamma_{i}^{n}(t)\}\) for each optimization step \(n\) were recorded, which were fed to the DMP equations for computing \(\rho_{F}(T)\) at step \(n\). Panels (c) and (d) plot the histogram of the optimal decision variables \(\{\gamma_{i}^{*}(t)\}\). Panels (e) and (f) show the optimal placement of resources on layer \(a\), where green square nodes receive protection (having a high \(\gamma_{i}^{*}(t)\) at time \(t=0\)). The three red triangle nodes are the initially infected individuals.

As the failures in layer \(b\) are initially induced by the infections in layer \(a\), one may wonder whether deploying the protection resource by minimizing the size of infections, i.e., minimizing \(\rho_{I}(T)+\rho_{R}(T)\) instead of minimizing \(\rho_{F}(T)\), is already sufficient to mitigate the final failures. To investigate this effect, we replace the objective function in Eq. (44) by \(\mathcal{O}^{a}(\gamma)=\rho_{I}(T)+\rho_{R}(T)\) and solve the optimization problem using the same techniques in Sec. V.1. The result is shown in Fig. 7(a), which suggests that blocking the infection is as good as minimizing the original objective function in Eq. (44) for the purpose of minimizing the total failure size. Minimizing either objective function constitutes a much better improvement over the random deployment of the same amount of protection resources in this case.
The results in Fig. 7(a) point to the conventional wisdom that one should try best to stop the epidemic or malware spread (in layer \(a\)) for mitigating system failure. The situation will be different if there are vital components in layer \(b\), which should be protected to prevent the failure cascade. This is typically manifested in the heterogeneity of the network connectivity or the system parameters. To showcase this effect, we manually plant a vulnerable connected cluster in layer \(b\) by setting the influence parameters \(b_{ji}\) for an edge \((i,j)\) in this cluster as \(b_{ji}=\Theta_{i}\), so that the failure of node \(i\) itself is already sufficient to trigger the failure of node \(j\). In this case, we found that minimizing \(\rho_{F}(T)\) yields a much better improvement over minimizing \(\rho_{I}(T)+\rho_{R}(T)\) for the purpose of mitigating the system failure, as shown in Fig. 7(b).
## VI Conclusion and Discussion
We investigate the nature of a type of coupled spreading processes in interdependent networks, comprising two interacting layers \(a\) and \(b\). Disease or malware spreads in layer \(a\), which can trigger cascading failures in layer \(b\), leading to secondary disasters. The spreading processes in the two layers are modeled by the SIRP and LTM models, respectively. To tackle the complex stochastic dynamics in interdependent networks, we utilized the dynamic message-passing method by working out the dynamic belief propagation equations. The resulting DMP algorithms have low computational complexity and allow us to perform accurate and efficient inference of the system states.
Based on the DMP method, we systematically studied and evaluated the impact of the infection-induced cascading failures. The cascade process in layer \(b\) can lead to large-scale network failures, even when the infection rate in layer \(a\) remains at a relatively low level. By considering a homogeneous network topology and homogeneous system parameters, we derive the asymptotic and large-size limits of the DMP equations. The asymptotic limit of the coupled spreading processes corresponds to the coupling between a bond percolation model and a bootstrap percolation model, which can be analytically solved. The infection outbreak size in layer \(a\) changes continuously from zero to non-zero as the infection probability \(\beta\) surpasses a transition point \(\beta_{c}^{a}\), while the failure size in layer \(b\) can exhibit a discontinuous jump to the completely failed state when \(\beta\) surpasses another transition point \(\beta_{c}^{b}\) under certain conditions. All these results highlight the observation that cascading failure propagation in layer \(b\) can drastically amplify the impact of the epidemic outbreaks in layer \(a\), which requires special attention.

Figure 6: An artificial two-layer network, where each layer has \(N=118\) nodes. Layer \(a\) is a Watts-Strogatz small-world network, which mimics the topology of communication networks; it is obtained by rewiring a regular graph of degree 4 with rewiring probability \(p_{\text{rewire}}=0.3\). Layer \(b\) is a power network extracted from the IEEE 118-bus test case.

Figure 7: Evolution of the failure rate \(\rho_{F}(t)\) of the synthetic network shown in Fig. 6 under various mitigation strategies. The curve labeled by “random \(\gamma\)” corresponds to the random deployment of a \(\gamma^{\text{tot}}\) amount of protection resources at time \(t=0\); 20 different random realizations are considered and the error bar indicates one standard deviation of the sample fluctuations. The time window is set as \(T=50\). (a) Most system parameters are homogeneous with \(\beta_{ji}=0.2,\mu=0.5,b_{ji}=1\), while \(\Theta_{i}=0.6|\partial_{i}^{b}|\). Five nodes are randomly chosen as the initially infected individuals, and \(\gamma^{\text{tot}}=10\) is considered. (b) The system parameters are \(\beta_{ji}=0.17,\mu=0.5,\Theta_{i}=0.6|\partial_{i}^{b}|\). Planted influence parameters \(\{b_{ji}\}\) are considered. Three nodes are randomly chosen as the initially infected individuals, and \(\gamma^{\text{tot}}=9\) is considered.
Another advantage of the DMP method is that it yields a set of closed-form equations, which can be very useful for other downstream analyses and tasks. We exploited this property to devise optimization algorithms for mitigating network failure. The optimization method works by back-propagating the impact at the final time to adjust the control parameters (i.e., the protection probabilities). The mirror descent method and a heuristic gradient shift method were also used to handle the constraints on the control parameters. We show that the resulting algorithm can effectively minimize the size of system failures. We believe that our dedicated analyses provide valuable insights and a deeper understanding of the impact of the infection-induced cascading failures on networks, and the obtained optimization algorithms will be useful for practical applications in systems of this kind.
###### Acknowledgements.
B.L. and D.S. acknowledge support from European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No. 835913. B.L. acknowledges support from the startup funding from Harbin Institute of Technology, Shenzhen (Grant No. 20210134), and the National Natural Science Foundation of China (Grant No. 12205066). D.S. acknowledges support from the Leverhulme Trust (RPG-2018-092) and the EPSRC programme grant TRANSNET (EP/R035342/1).
## Appendix A Deriving the DMP Equations From Dynamic Belief Propagation
In this Appendix, we supplement some technical details of the DMP equations based on dynamic belief propagation.
We assume that at the initial time \(t=0\), node \(i\) is either in state \(S\) or state \(I\), occurring with probabilities \(P_{S}^{i}(0)\) and \(P_{I}^{i}(0)\) (with \(P_{S}^{i}(0)+P_{I}^{i}(0)=1\)), respectively.
According to dynamical rule Eq. (1) of the SIRP model, the transition kernel \(W_{\text{SIRP}}^{i}(\cdot)\) of the spreading process in layer \(a\) admits the following form
\[W_{\text{SIRP}}^{i}(\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{ a}||\{\tau_{k}^{a},\omega_{k}^{a},\varepsilon_{k}^{a}\}_{k\in\partial_{i}^{a}})\] \[= \mathbb{I}(\tau_{i}^{a}<\varepsilon_{i}^{a})\bigg{\{}P_{I}^{i}(0 )\mathbb{I}(\tau_{i}^{a}=0)+P_{S}^{i}(0)\mathbb{I}(\tau_{i}^{a}>0)\prod_{t^{ \prime}=0}^{\tau_{i}^{a}-2}\prod_{k\in\partial_{i}^{a}}\mathbb{I}\big{[}1- \beta_{ki}\mathbb{I}(\omega_{k}^{a}\geq t^{\prime}+1)\mathbb{I}(\tau_{k}^{a} \leq t^{\prime})\big{]}\] \[\times\bigg{[}1-\prod_{k\in\partial_{i}^{a}}\big{[}1-\beta_{ki} \mathbb{I}(\omega_{k}^{a}\geq\tau_{i}^{a})\mathbb{I}(\tau_{k}^{a}\leq\tau_{i} ^{a}-1)\big{]}\bigg{]}\times\bigg{(}\prod_{t^{\prime\prime}=\tau_{i}^{a}}^{ \omega_{i}^{a}-2}(1-\mu_{i})\bigg{)}\mu_{i}\times\prod_{t^{\prime\prime\prime }=0}^{\tau_{i}^{a}-1}(1-\gamma_{i}(t^{\prime\prime\prime}))\bigg{\}}\] \[+\mathbb{I}(\tau_{i}^{a}\geq\varepsilon_{i}^{a})\bigg{\{}P_{S}^{ i}(0)\mathbb{I}(\tau_{i}^{a}>0)\prod_{t^{\prime}=0}^{\tau_{i}^{a}-2}\prod_{k\in \partial_{i}^{a}}\mathbb{I}\big{[}1-\beta_{ki}\mathbb{I}(\omega_{k}^{a}\geq t ^{\prime}+1)\mathbb{I}(\tau_{k}^{a}\leq t^{\prime})\big{]}\times\big{[}\prod _{t^{\prime\prime\prime}=0}^{\varepsilon_{i}^{a}-2}(1-\gamma_{i}(t^{\prime \prime\prime}))\big{]}\gamma_{i}(\varepsilon_{i}^{a}-1)\bigg{\}}.\]
The transition kernel \(W_{\text{LTM}}^{i}(\cdot)\) of the cascade process in layer \(b\) admits the following form
\[W_{\text{LTM}}^{i}(\tau_{i}^{b}||\tau_{i}^{a},\varepsilon_{i}^{ a},\{\tau_{k}^{b}\}_{k\in\partial_{i}^{b}})\] \[= \mathbb{I}\bigg{[}\sum_{k\in\partial_{i}^{b}}b_{ki}\mathbb{I}( \tau_{k}^{b}\leq\tau_{i}^{b}-2)<\Theta_{i}\bigg{]}\delta_{\tau_{i}^{b},\tau_{i} ^{a}}+\mathbb{I}(\tau_{i}^{b}<\tau_{i}^{a})\mathbb{I}\bigg{[}\sum_{k\in \partial_{i}^{b}}b_{ki}\mathbb{I}(\tau_{k}^{b}\leq\tau_{i}^{b}-2)<\Theta_{i} \bigg{]}\mathbb{I}\bigg{[}\sum_{k\in\partial_{i}^{b}}b_{ki}\mathbb{I}(\tau_{k}^ {b}\leq\tau_{i}^{b}-1)\geq\Theta_{i}\bigg{]},\]
where the first term corresponds to the case where node \(i\) fails (in layer \(b\)) due to infection (from layer \(a\)), while the second term corresponds to the case where node \(i\) fails due to losing support from its neighboring nodes \(\partial_{i}^{b}\) in layer \(b\).
For infection spread in layer \(a\), the node-level probability of node \(i\) in a certain state is computed by tracing over the corresponding probabilities of trajectories \(m_{a}^{i\to j}(\cdot)\) (note that the process in layer \(b\) does not have a feedback influence
on layer \(a\))
\[P_{S}^{i\to j}(t) =\sum_{\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a}}m_{a}^{i\to j}( \tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a})\;\mathbb{I}(t<\tau_{i}^{a}< \omega_{i}^{a})\;\mathbb{I}(t<\varepsilon_{i}^{a}), \tag{10}\] \[P_{I}^{i\to j}(t) =\sum_{\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a}}m_{a}^{i \to j}(\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a})\;\mathbb{I}(\tau_{i}^{ a}\leq t<\omega_{i}^{a})\;\delta_{\varepsilon_{i}^{a},\infty},\] (11) \[P_{R}^{i\to j}(t) =\sum_{\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a}}m_{a}^{i \to j}(\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a})\;\mathbb{I}(\tau_{i}^ {a}<\omega_{i}^{a}\leq t)\;\delta_{\varepsilon_{i}^{a},\infty},\] (12) \[P_{P}^{i\to j}(t) =\sum_{\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a}}m_{a}^{i \to j}(\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a})\;\mathbb{I}( \varepsilon_{i}^{a}\leq t)\;\delta_{\tau_{i}^{a},\infty}. \tag{13}\]
The computation of these probabilities is similar to that of the SIR model, where we refer readers to Ref. [20] for the details.
For activities in layer \(b\), the node-level probability of node \(i\) in state \(F\) (failed) is computed as
\[P_{F}^{i\to j}(t) =\sum_{\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a},\tau_{i}^ {b}}\mathbb{I}(\tau_{i}^{b}\leq t)m^{i\to j}(\tau_{i}^{a},\omega_{i}^{a}, \varepsilon_{i}^{a},\tau_{i}^{b})\] \[=\sum_{\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a},\tau_{i}^ {b}}\mathbb{I}(\tau_{i}^{b}\leq t)m_{a}^{i\to j}(\tau_{i}^{a},\omega_{i}^{a}, \varepsilon_{i}^{a})m_{b}^{i\to j}(\tau_{i}^{b}\mid\tau_{i}^{a}, \varepsilon_{i}^{a}) \tag{14}\]
which appears much more difficult to treat due to the dependence on the activities in layer \(a\). In particular, the failure of node \(i\) can be attributed to the infection from one of its neighbors from layer \(a\), or to the failures of its neighbors from layer \(b\). For the latter case, node \(i\) can be either in state \(S\) or in state \(P\) at time \(t\), which depends on the infection time \(\tau_{i}^{a}\) and protection time \(\varepsilon_{i}^{a}\). For this reason, we introduce the following conditional failure probability
\[P_{F}^{i\to j}(t|\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a}) =\sum_{\tau_{i}^{b}}\mathbb{I}(\tau_{i}^{b}\leq t)m_{b}^{i\to j}( \tau_{i}^{b}|\tau_{i}^{a},\varepsilon_{i}^{a})=\sum_{\tau_{i}^{b}}\mathbb{I}( \tau_{i}^{b}\leq t)\frac{m^{i\to j}(\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i} ^{a},\tau_{i}^{b})}{m_{a}^{i\to j}(\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i }^{a})}\] \[=\mathbb{I}(\tau_{i}^{a}\leq t)+\mathbb{I}(\tau_{i}^{a}>t)\frac{ \xi^{i\to j}(t|\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a})}{m_{a}^{i\to j }(\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a})}, \tag{15}\]
where we have defined
\[\xi^{i\to j}(t|\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a}):= \mathbb{I}(\tau_{i}^{a}>t)\sum_{\tau_{i}^{b}}\mathbb{I}(\tau_{i}^{b}\leq t)m^{ i\to j}(\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a},\tau_{i}^{b}), \tag{16}\]
which is the cavity probability of node \(i\) not being in state \(I\) or \(R\) but having failed at time \(t\) due to the failures of neighbors from layer \(b\), while it follows the specific trajectory \(\{\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a}\}\) in layer \(a\).
The marginal probability of node \(i\) in state \(F\) at time \(t\) is obtained by tracing over all the possible trajectories of layer \(a\) as
\[P_{F}^{i\to j}(t) =\!\!\!\sum_{\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a}} \mathbb{I}(\omega_{i}^{a}>\tau_{i}^{a})m_{a}^{i\to j}(\tau_{i}^{a},\omega_{i}^{ a},\varepsilon_{i}^{a})P_{F}^{i\to j}(t|\tau_{i}^{a},\omega_{i}^{a}, \varepsilon_{i}^{a})\] \[=\!\!P_{I}^{i\to j}(t)+P_{R}^{i\to j}(t)+\sum_{\tau_{i}^{a}, \omega_{i}^{a},\varepsilon_{i}^{a}}\mathbb{I}(\tau_{i}^{a}>t)\mathbb{I}( \omega_{i}^{a}>\tau_{i}^{a})\xi^{i\to j}(t|\tau_{i}^{a},\omega_{i}^{a}, \varepsilon_{i}^{a}), \tag{17}\]
where the summation in the last term can be further decomposed into \(P_{SF}^{i\to j}(t)\) and \(P_{PF}^{i\to j}(t)\), depending on whether the protection on node \(i\) (given at time \(\varepsilon_{i}^{a}\)) occurs after time \(t\) or before time \(t\)
\[P_{SF}^{i\to j}(t) =\sum_{\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a}} \mathbb{I}(\varepsilon_{i}^{a}>t)\mathbb{I}(\tau_{i}^{a}>t)\mathbb{I}(\omega_{i}^{a }>\tau_{i}^{a})\xi^{i\to j}(t|\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a}), \tag{18}\] \[P_{PF}^{i\to j}(t) =\sum_{\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a}}\mathbb{I}( \varepsilon_{i}^{a}\leq t)\mathbb{I}(\tau_{i}^{a}>t)\mathbb{I}(\omega_{i}^{a}> \tau_{i}^{a})\xi^{i\to j}(t|\tau_{i}^{a},\omega_{i}^{a},\varepsilon_{i}^{a}). \tag{19}\]
In summary, we can decompose \(P_{F}^{i\to j}(t)\) into four terms
\[P_{F}^{i\to j}(t)=P_{I}^{i\to j}(t)+P_{R}^{i\to j}(t)+P_{SF}^{i\to j}(t)+P_{ PF}^{i\to j}(t), \tag{20}\]
where a similar form holds for \(P_{F}^{i}(t)\) as stated in the main text.
To obtain node-level iteration of \(P_{SF}^{i\to j}(t)\) and \(P_{PF}^{i\to j}(t)\), the key is to further introduce the auxiliary probabilities \(\chi^{k\to i}(t)\), \(\psi^{k\to i}(t)\), \(\tilde{\chi}^{k\to i}(t,\varepsilon)\) and \(\tilde{\psi}^{k\to i}(t,\varepsilon)\), defined as
\[\chi^{k\to i}(t)=\sum_{\tau_{k}^{a},\omega_{k}^{a},\varepsilon_{k}^{a}}\mathbb{I}(\omega_{k}^{a}>\tau_{k}^{a})\prod_{t^{\prime}=0}^{t-1}\big[1-\beta_{ki}\mathbb{I}(\omega_{k}^{a}\geq t^{\prime}+1)\mathbb{I}(\tau_{k}^{a}\leq t^{\prime})\big]m_{a}^{k\to i}(\tau_{k}^{a},\omega_{k}^{a},\varepsilon_{k}^{a})P_{F}^{k\to i}(t-1|\tau_{k}^{a},\omega_{k}^{a},\varepsilon_{k}^{a}), \tag{101}\] \[\psi^{k\to i}(t)=\sum_{\tau_{k}^{a},\omega_{k}^{a},\varepsilon_{k}^{a}}\mathbb{I}(\omega_{k}^{a}>\tau_{k}^{a})\prod_{t^{\prime}=0}^{t-1}\big[1-\beta_{ki}\mathbb{I}(\omega_{k}^{a}\geq t^{\prime}+1)\mathbb{I}(\tau_{k}^{a}\leq t^{\prime})\big]m_{a}^{k\to i}(\tau_{k}^{a},\omega_{k}^{a},\varepsilon_{k}^{a})\mathbb{I}(\tau_{k}^{a}\leq t-1), \tag{102}\] \[\tilde{\chi}^{k\to i}(t,\varepsilon)=\sum_{\tau_{k}^{a},\omega_{k}^{a},\varepsilon_{k}^{a}}\mathbb{I}(\omega_{k}^{a}>\tau_{k}^{a})\prod_{t^{\prime}=0}^{\varepsilon-2}\big[1-\beta_{ki}\mathbb{I}(\omega_{k}^{a}\geq t^{\prime}+1)\mathbb{I}(\tau_{k}^{a}\leq t^{\prime})\big]m_{a}^{k\to i}(\tau_{k}^{a},\omega_{k}^{a},\varepsilon_{k}^{a})P_{F}^{k\to i}(t-1|\tau_{k}^{a},\omega_{k}^{a},\varepsilon_{k}^{a}), \tag{103}\] \[\tilde{\psi}^{k\to i}(t,\varepsilon)=\sum_{\tau_{k}^{a},\omega_{k}^{a},\varepsilon_{k}^{a}}\mathbb{I}(\omega_{k}^{a}>\tau_{k}^{a})\prod_{t^{\prime}=0}^{\varepsilon-2}\big[1-\beta_{ki}\mathbb{I}(\omega_{k}^{a}\geq t^{\prime}+1)\mathbb{I}(\tau_{k}^{a}\leq t^{\prime})\big]m_{a}^{k\to i}(\tau_{k}^{a},\omega_{k}^{a},\varepsilon_{k}^{a})\mathbb{I}(\tau_{k}^{a}\leq t-1). \tag{104}\]
These auxiliary probabilities are linked to \(P_{SF}^{i\to j}(t)\) and \(P_{PF}^{i\to j}(t)\) through the transition kernel \(W_{\text{SIRP}}^{i}\) and \(W_{\text{LTM}}^{i}\), respectively, and their iteration equations can be mechanistically derived (e.g., by relating \(\chi^{k\to i}(t)\) to \(\chi^{k\to i}(t-1)\)). The explicit forms of the iteration equations and the physical interpretation of the auxiliary probabilities are stated in the main text.
## Appendix B Deriving the Large-time Limit of the Discrete-time SIR Model
Here, we derive the DMP equations of the discrete-time SIR model in the large-time limit, which differs from the continuous-time counterpart [18].
We consider \(\gamma_{i}(t)=0\) in Sec. III.1 and re-write the DMP equations for \(\theta^{i\to j}\) and \(\phi^{i\to j}\) as
\[\theta^{i\to j}(t+1)-\theta^{i\to j}(t)= -\beta_{ij}\phi^{i\to j}(t), \tag{105}\] \[\phi^{i\to j}(t+1)-\phi^{i\to j}(t)= -\big{(}\beta_{ij}+\mu_{i}-\beta_{ij}\mu_{i}\big{)}\phi^{i\to j }(t)-\big{[}P_{S}^{i\to j}(t+1)-P_{S}^{i\to j}(t)\big{]}. \tag{106}\]
Summing both sides of the above equations from \(t=0\) to \(t=T-1\) and canceling the term \(\sum_{t=0}^{T-1}\phi^{i\to j}(t)\) yields
\[\phi^{i\to j}(T)-\phi^{i\to j}(0)=\frac{\beta_{ij}+\mu_{i}-\beta_{ij}\mu_{i}}{ \beta_{ij}}\big{[}\theta^{i\to j}(T)-\theta^{i\to j}(0)\big{]}-\big{[}P_{S}^{i \to j}(T)-P_{S}^{i\to j}(0)\big{]}. \tag{107}\]
Define \(p_{ij}=\beta_{ij}/(\beta_{ij}+\mu_{i}-\beta_{ij}\mu_{i})\) and recall that the initial conditions for the messages are \(P_{S}^{i\to j}(0)=P_{S}^{i}(0),\phi^{i\to j}(0)=1-P_{S}^{i}(0),\theta^{i\to j}(0)=1\), which leads to
\[\theta^{i\to j}(T)=1-p_{ij}+p_{ij}P_{S}^{i\to j}(T)+p_{ij}\phi^{i\to j}(T). \tag{108}\]
When \(T\to\infty\), all infected nodes will recover, which implies that \(\phi^{i\to j}(\infty)=0\) and leads to the self-consistent equations for the messages \(\theta^{i\to j}\) as
\[\theta^{i\to j}(\infty) =1-p_{ij}+p_{ij}P_{S}^{i\to j}(\infty)\] \[=1-p_{ij}+p_{ij}P_{S}^{i}(0)\prod_{k\in\partial_{i}^{a}\setminus j}\theta^{k\to i}(\infty). \tag{109}\]
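A minimal sketch (with assumed dictionary-based data structures, not the paper's code) of iterating the finite-time recursions for \(\theta^{i\to j}\) and \(\phi^{i\to j}\) above, together with the closure \(P_{S}^{i\to j}(t)=P_{S}^{i}(0)\prod_{k\in\partial_{i}^{a}\setminus j}\theta^{k\to i}(t)\) (the finite-\(t\) analogue of the relation appearing in Eq. (34)), is given below; for large \(t\) the messages approach the fixed point derived in this appendix.

```python
# Discrete-time SIR DMP for layer a: theta(t+1) = theta(t) - beta * phi(t),
# phi(t+1) = (1-beta)(1-mu) phi(t) + P_S(t) - P_S(t+1), with
# theta(0) = 1, phi(0) = 1 - P_S^i(0), P_S^{i->j}(0) = P_S^i(0).
import math

def dmp_sir_layer_a(adj, beta, mu, p_s0, T):
    edges = [(i, j) for i in adj for j in adj[i]]
    theta = {e: 1.0 for e in edges}
    p_s = {(i, j): p_s0[i] for (i, j) in edges}
    phi = {(i, j): 1.0 - p_s0[i] for (i, j) in edges}
    for _ in range(T):
        theta = {(i, j): theta[(i, j)] - beta[(i, j)] * phi[(i, j)]
                 for (i, j) in edges}
        p_s_new = {(i, j): p_s0[i] * math.prod(theta[(k, i)]
                                               for k in adj[i] if k != j)
                   for (i, j) in edges}
        phi = {(i, j): (1 - beta[(i, j)]) * (1 - mu[i]) * phi[(i, j)]
               + p_s[(i, j)] - p_s_new[(i, j)]
               for (i, j) in edges}
        p_s = p_s_new
    # node-level susceptibility marginals at time T
    return {i: p_s0[i] * math.prod(theta[(k, i)] for k in adj[i]) for i in adj}
```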
## Appendix C Resource Capacity Constraint in Mirror Descent
When designing algorithms for mitigating network failure, we impose the realistic resource capacity constraint in the form of
\[\sum_{i\in V_{b}}\sum_{t=0}^{T-1}\gamma_{i}(t)\leq\gamma^{\text{tot}}. \tag{110}\]
Such a linear inequality constraint can generally be handled by introducing a Lagrange multiplier and deriving the corresponding Karush-Kuhn-Tucker condition (KKT condition), or by introducing a barrier function as in the interior point method. Consider the latter approach by augmenting the objective function with a log barrier function as
\[f(\gamma) :=\gamma^{\text{tot}}-\sum_{i\in V_{b}}\sum_{t=0}^{T-1}\gamma_{i}(t), \tag{10}\] \[\mathcal{O}_{\text{aug}}(\gamma) :=\mathcal{O}(\gamma)-\lambda\cdot\log\big{(}f(\gamma)\big{)}, \tag{11}\]
where \(\lambda>0\) is a tunable parameter. The log barrier function strongly penalizes \(\mathcal{O}_{\text{aug}}(\gamma)\) when \(f(\gamma)\) is close to zero, which encourages \(\gamma\) to stay in the interior of the feasible region of Eq. (10).
The gradient of the augmented objective function reads
\[\nabla_{\gamma}\mathcal{O}_{\text{aug}}(\gamma)=\nabla_{\gamma}\mathcal{O}( \gamma)+\frac{\lambda}{f(\gamma)}\mathbf{1}, \tag{12}\]
where \(\mathbf{1}\) is the all-one vector. Eq. (12) suggests a global shift of the gradient to encourage the satisfaction of the capacity constraint. This is the motivation for considering a shifted gradient \(g^{n}=\nabla_{\gamma}\mathcal{O}(\gamma^{n})-b^{n}\) in the mirror descent algorithm in the main text. The gradient shift \(-b^{n}\) is toggled on when \(\sum_{t,i}\gamma_{i}^{n}(t)\lessapprox\gamma^{\text{tot}}\), where the shift magnitude \(b^{n}\) is chosen based on the following arguments
\[\gamma^{n+1} =\Psi^{-1}\big{(}\Psi(\gamma^{n})-s\cdot\big{[}\nabla_{\gamma} \mathcal{O}(\gamma^{n})-b^{n}\big{]}\big{)}, \tag{13}\] \[\sum_{t,i}\gamma_{i}^{n+1}(t) \approx\sum_{t,i}\Psi^{-1}\bigg{(}\Psi(\gamma_{i}^{n}(t))\bigg{)} -s\sum_{t,i}\Psi^{-1\prime}\bigg{(}\Psi(\gamma_{i}^{n}(t))\bigg{)}\bigg{[}\frac {\partial}{\partial\gamma_{i}(t)}\mathcal{O}(\gamma^{n})-b^{n}\bigg{]}\] \[=\sum_{t,i}\gamma_{i}^{n}(t)-s\sum_{t,i}\gamma_{i}^{n}(t)(1-\gamma _{i}^{n}(t))\bigg{[}\frac{\partial}{\partial\gamma_{i}(t)}\mathcal{O}(\gamma^ {n})-b^{n}\bigg{]}\lessapprox\gamma^{\text{tot}}, \tag{14}\]
where a small step size \(s\) is assumed. It requires that
\[\sum_{t,i}\gamma_{i}^{n}(t)(1-\gamma_{i}^{n}(t))\bigg{[}\frac{ \partial}{\partial\gamma_{i}(t)}\mathcal{O}(\gamma^{n})-b^{n}\bigg{]}\approx 0, \tag{15}\] \[\Longleftrightarrow\quad b^{n}\approx\frac{\sum_{t,i}\gamma_{i} ^{n}(t)(1-\gamma_{i}^{n}(t))\frac{\partial}{\partial\gamma_{i}(t)}\mathcal{O} (\gamma^{n})}{\sum_{t,i}\gamma_{i}^{n}(t)(1-\gamma_{i}^{n}(t))}, \tag{16}\]
which explains the choice of \(b^{n}\) in the main text.
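As a quick numerical check (with illustrative random values), the chosen shift makes the weighted first-order term vanish identically, so a small mirror-descent step leaves the total resource approximately unchanged:

```python
# Verify that the shift b^n cancels the weighted first-order change of the
# total resources under a small mirror-descent step.
import numpy as np

rng = np.random.default_rng(0)
gamma = rng.uniform(0.05, 0.95, size=100)     # current control variables
grad = rng.normal(size=100)                   # a stand-in gradient
w = gamma * (1.0 - gamma)
b = (w * grad).sum() / w.sum()
print(np.allclose((w * (grad - b)).sum(), 0.0))   # True
```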
## Appendix D Online Supply of Resources
In this appendix, we consider the cases where the resources arrive in an online fashion, e.g., a limited number of vaccines can be produced every day. In these cases, there is a resource capacity constraint \(\gamma_{t}^{\text{tot}}\) at each time step \(t\), yielding the following constrained optimization problems
\[\min_{\gamma} \mathcal{O}(\gamma)=\frac{1}{N}\sum_{i\in V_{b}}P_{F}^{i}(T),\] (17) s. t. \[0\leq\gamma_{i}(t)\leq 1\quad\forall i,t, \tag{18}\] \[\sum_{i\in V_{b}}\gamma_{i}(t)\leq\gamma_{t}^{\text{tot}}\quad \forall t. \tag{19}\]
The mirror descent algorithm introduced in Sec. V.1 can be readily applied to solve this optimization problem, except that the shifted gradient \(g_{t}^{n}=\nabla_{\gamma(t)}\mathcal{O}(\gamma^{n}(t))-b_{t}^{n}\) has a time-dependent shift \(b_{t}^{n}\) in this case. The shift magnitude \(b_{t}^{n}\) at time \(t\) can be derived in the same spirit as in Appendix C, which admits the following expression
\[b_{t}^{n}\approx\frac{\sum_{i}\gamma_{i}^{n}(t)(1-\gamma_{i}^{n}(t))\frac{ \partial}{\partial\gamma_{i}(t)}\mathcal{O}(\gamma^{n})}{\sum_{i}\gamma_{i}^{n }(t)(1-\gamma_{i}^{n}(t))}. \tag{20}\]
If the resource capacity constraint for \(\gamma_{t}^{\rm tot}\) is still violated, we project the control variables at time \(t\) to the feasible region through a simple rescaling
\[\gamma^{n}(t)\leftarrow\frac{\gamma_{t}^{\rm tot}}{\sum_{i}\gamma_{i}^{n}(t)} \gamma^{n}(t). \tag{49}\]
As a concrete example, we consider the synthetic two-layer network in Sec. V.3 using the planted influence parameters \(\{b_{ji}\}\) as in Fig. 7(b). The results are shown in Fig. 8, which illustrates the effectiveness of the optimization algorithms for the scenario with the online supply of resources.
## Appendix E Probabilistic Seeding
There are some cases where the initial infection status of a node \(i\) is not fully determined but follows a probability distribution \(P_{I}^{i}(0)\), which may be obtained after some inference. In the DMP framework, we simply use the available \(\{P_{I}^{i}(0)\}\) as the initial condition to iterate the DMP equations, and further optimize the evolution by deploying the protection resources.
We consider a setting similar to that in Sec. V.2. In Fig. 5, there are 3 seeds, each of which has \(P_{I}^{i}(0)=1\). In this section, we consider 6 seeds, each of which has \(P_{I}^{i}(0)=\frac{1}{2}\). The results of optimization are shown in Fig. 9. It can be observed that the optimization algorithm can still successfully reduce the final failure size, as in the cases with deterministic seeding. Interestingly, the optimal protection resource distribution \(\{\gamma_{i}^{*}(t)\}\) is also concentrated among a few nodes at a certain time step (as shown in Fig. 9(c) and (d)), even though one cannot be sure about which nodes are initially infected. This may be due to the simple network topology, where there exists a clear optimal deployment strategy to block the infections coming from the 6 probabilistic seeds altogether, as shown in Fig. 9(e) and (f).
| In the context of epidemic spreading, complex dynamical patterns can emerge from the cooperation of different types of pathogens or from the interaction between disease spreading and other failure-propagation mechanisms. Simulation frameworks are often used to uncover such patterns, but they are computationally expensive on large networks and carry large statistical uncertainty. In this work, we therefore study a two-layer spreading process on unidirectionally dependent networks, in which the spread of disease or malware in one layer can trigger cascading failures in the other layer, causing secondary disasters, e.g., in public services, supply chains, or power distribution. We develop efficient algorithms based on the dynamic message-passing method to estimate the states of the system. This allows the complex interacting
2301.13591 | Zero3D: Semantic-Driven Multi-Category 3D Shape Generation | Semantic-driven 3D shape generation aims to generate 3D objects conditioned
on text. Previous works face problems with single-category generation,
low-frequency 3D details, and requiring a large number of paired datasets for
training. To tackle these challenges, we propose a multi-category conditional
diffusion model. Specifically, 1) to alleviate the problem of lack of
large-scale paired data, we bridge the text, 2D image and 3D shape based on the
pre-trained CLIP model, and 2) to obtain the multi-category 3D shape feature,
we apply the conditional flow model to generate 3D shape vector conditioned on
CLIP embedding. 3) to generate multi-category 3D shape, we employ the
hidden-layer diffusion model conditioned on the multi-category shape vector,
which greatly reduces the training time and memory consumption. | Bo Han, Yitong Fu, Yixuan Shen | 2023-01-31T12:43:54 | http://arxiv.org/abs/2301.13591v5 | # Zero3D: Semantic-Driven 3D Shape Generation for Zero-Shot Learning
###### Abstract
Semantic-driven 3D shape generation aims to generate 3D shapes conditioned on text. Previous works face problems with single-category generation, low-frequency details, and the need for large amounts of paired data. To tackle these challenges, we propose a multi-category diffusion model. Specifically, 1) to alleviate the problem of the lack of large-scale paired data, we establish a bridge between text, 2D images, and 3D shapes through the pre-trained CLIP model, thus realizing zero-shot learning; 2) to obtain the 3D shape feature, we apply a conditional flow model to generate the shape vector conditioned on the CLIP embedding; 3) to generate multi-category 3D shapes, we employ a hidden-layer diffusion model conditioned on the multi-category shape vector, which greatly reduces training time and memory consumption. We evaluate the generated results of our framework and demonstrate that our method outperforms existing methods.
Bo Han\({}^{\star}\) Yitong Fu\({}^{\star}\) Yixuan Shen\({}^{\dagger}\)\({}^{\star}\)Zhejiang University, Hangzhou, China
\({}^{\dagger}\) National University of Singapore, Singapore, Singapore
3D shape generation, diffusion model
## 1 Introduction
As core elements of the Metaverse [1], 3D objects play a vital role in enhancing people's interactive experience. With the rapid development of AIGC technology [2, 3], people can easily create images, audio, video, etc. through text prompts. However, 3D objects are currently designed manually with modeling software like Blender and Maya3D, which requires a great deal of time and expertise. Therefore, how to generate high-quality 3D objects from semantic information becomes a practical task.
Unlike 2D images, which can be viewed as arrays of pixel values, 3D objects have diverse and complex representations, such as voxels, point clouds, meshes, and implicit representations. Each representation has its own advantages and limitations, and different representations require different processing methods, which makes 3D generation challenging [14].
Text-to-shape generation is also challenging [15, 16] since it is hard to understand 3D shapes and text jointly, which makes it difficult to represent them in a common space. Besides, unlike text-to-image generation, where paired data is abundant, text-to-shape generation lacks large-scale paired text and 3D shape data.
Recently, much work has been done on 3D shape generation [4, 5, 6, 11]. DreamFusion [5] transforms the diffusion and denoising process in the pixel space into operations in the NeRF parameter space. Since the supervision signal in DreamFusion operates on very low-resolution images (64 x 64), it cannot synthesize high-frequency 3D geometric and texture details. DPM [11] trains an encoder to generate a shape vector representing the point cloud shape, which is then used to train a flow model. After that, the pre-trained flow model can turn noise into a shape vector. Subsequently, the diffusion model part utilizes this shape vector as a condition for 3D shape generation. Since DPM is trained on a specific category, it can only generate point cloud data of one type.
To tackle these challenges, we first adopt a pre-trained CLIP model, which establishes a strong correspondence between text and 2D images. At the same time, we can obtain a large number of high-resolution 2D images corresponding to 3D objects through Blender. Therefore, the CLIP model bridges text, 2D images, and 3D objects, alleviating the problem of the lack of large-scale paired text-3D shape data. Thereafter, we apply a conditional flow model to generate the multi-category shape vector conditioned on the CLIP embedding. Subsequently, we employ a conditional diffusion model to generate the 3D shape conditioned on the multi-category shape vector. Specifically, during training, the CLIP model is used to encode the 2D image as the condition, so the correspondence between the 2D image and the 3D shape can be learned. During inference, the CLIP model is used to encode the semantic information as the condition, so that the 3D shape corresponding to the semantic information can be generated. Besides, in view of the high time and memory consumption of the diffusion model itself, we perform the diffusion and denoising operations in the hidden layer.
To summarize, our main contributions are as follows:
* Considering the superior correspondence between 2D images and texts in the CLIP model, we establish a connection between texts and 3D shapes by using 2D images as a medium, thereby enabling zero-shot learning.
* We propose a multi-category diffusion framework, which is capable of generating 3D point cloud data of multiple categories with just a single network model.
## 2 Background
**3D Shape Generation**. 3D-GAN [9] uses a 3D-CNN to gradually map a high-dimensional hidden vector into a 3D object represented by voxels. The r-GAN [13] utilizes a GCN as the generator, effectively exploiting local information within point cloud data. However, due to the instability of Generative Adversarial Networks, the results are not ideal. PointFlow [10] introduces a flow model to generate the shape distribution of point clouds. It uses the hidden vector representing the shape distribution as a condition to guide the point cloud generation. Since point clouds are usually distributed on a two-dimensional manifold, it is difficult to obtain good results with a flow model that assumes the point cloud obeys a three-dimensional prior distribution. In the 3D domain, DPM [11] and PVD [12] use diffusion models to generate point cloud data. Although they can generate satisfactory results, they are all trained on a specific category.
**Semantics-Driven 3D Shape Generation**. Text2Shape [22] proposes an end-to-end association learning framework. It encodes text and 3D shapes separately into the same latent space. However, large-scale text-3D shape data are still difficult to obtain, so ClipForge [17] bypasses this problem with the aid of the CLIP model on text-image matching. CLIP-Mesh [18] also uses the CLIP model to measure the matching degree between the image rendered by the grid model and the text, so as to optimize the entire model parameters. Dreamfields [4], DreamFusion [5] and Magic3D [6] all use NeRF [19] as an implicit representation of 3D objects, and render images through differentiable renderers. They utilize the matching degree between images and text to optimize the entire network and finally adopt the optimized implicit neural field representation to extract the 3D mesh model.
## 3 Method
A schematic overview of the proposed architecture is illustrated in Fig. 1. The left part shows the training architecture. It mainly consists of four components: a shape encoder, the CLIP model, a conditional flow model, and a conditional diffusion model. We use the DPM model as our backbone, which samples noise from a Gaussian distribution and generates point cloud data through a conditional denoising process. Specifically, we separate training into two tasks. First, we render the 3D objects to obtain high-resolution 2D images, which are used as the input of the pre-trained CLIP model; the conditional flow model is then trained to establish the relationship between the CLIP embedding and the shape vector \(s\). Next, we adopt the shape vector as the condition to guide the 3D shape generation. During inference, the text is used as the CLIP model input. Based on the bi-directionality of the flow model, we can obtain the shape vector \(s\) guided by the CLIP model output. Subsequently, the shape vector \(s\) guides the diffusion model to generate the multi-category point cloud.
**Shape Encoder:** It maps the point cloud data to a distribution over shape vectors, namely a shape mean and a shape variance, and then samples a shape vector from this distribution. The network consists of a feature extraction layer and a distribution mapping layer. For feature extraction, we use a series of 1D convolutional layers to lift the dimension of the point cloud data and then take the maximum value of each feature dimension (max pooling) to obtain a global feature. For the distribution mapping layer, this global feature is mapped to the shape mean and variance, respectively, which represent the distribution of the point cloud shape vector. A random offset \(\varepsilon\) is then drawn to sample a shape vector \(s\) as defined in Eq. (1).
\[s=\mu+\epsilon*\exp\left(0.5*\log\left(\sigma^{2}\right)\right) \tag{1}\]
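For concreteness, the following minimal PyTorch-style sketch illustrates the reparameterized sampling of Eq. (1); the module name `ShapeEncoderHead` and the feature/latent dimensions are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class ShapeEncoderHead(nn.Module):
    """Distribution-mapping layer of the shape encoder: maps a pooled point-cloud feature to
    (mu, log sigma^2) and draws a shape vector via the reparameterization of Eq. (1).
    Feature and latent dimensions are illustrative assumptions."""
    def __init__(self, feat_dim: int = 512, latent_dim: int = 256):
        super().__init__()
        self.to_mu = nn.Linear(feat_dim, latent_dim)
        self.to_logvar = nn.Linear(feat_dim, latent_dim)

    def forward(self, pooled_feature: torch.Tensor) -> torch.Tensor:
        mu = self.to_mu(pooled_feature)
        logvar = self.to_logvar(pooled_feature)
        eps = torch.randn_like(mu)                 # random offset epsilon ~ N(0, I)
        return mu + eps * torch.exp(0.5 * logvar)  # Eq. (1): s = mu + eps * sigma

# usage: a batch of pooled features -> a batch of shape vectors s
s = ShapeEncoderHead()(torch.randn(4, 512))
```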
**CLIP Model:** It encodes texts and 2D images into the same latent space, i.e., it matches images and texts. Therefore, based on the CLIP model, we learn the correspondence between 3D point clouds and texts using images as the intermediary. The CLIP model is based on the Vision Transformer [2]; we use the ViT-B/32 variant to match images and texts. Images or texts are passed through the corresponding CLIP encoders to obtain a one-dimensional vector of length 256, which is normalized and input into the conditional flow model as the condition.
**Conditional Flow Model:** A traditional VAE encodes data into a standard normal distribution, whereas a flow model can learn a more flexible distribution. The shape vector is fed into the conditional flow model to learn the transformation from a Gaussian noise distribution to the distribution of \(s\), with the CLIP embedding as the condition. During inference, data are sampled directly from the Gaussian distribution, and the corresponding shape vector is obtained through the inverse transformation of the flow model; this vector is then passed to the diffusion model as its condition. We build the flow model from the affine coupling layers of the RealNVP architecture [20]. Each affine coupling layer splits its input into two parts: the first part is kept unchanged, while the second part is transformed with a scale coefficient and an offset coefficient.
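As an illustration of the coupling mechanism described above, the sketch below implements one conditional affine coupling layer in the spirit of RealNVP; the layer sizes, the conditioning-by-concatenation choice, and all module names are assumptions made for exposition rather than the authors' exact design.

```python
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """One RealNVP-style affine coupling layer: the first half of the input passes through
    unchanged, the second half is scaled and shifted by networks that see the first half
    concatenated with the CLIP condition. All dimensions are illustrative assumptions."""
    def __init__(self, dim: int = 256, cond_dim: int = 256, hidden: int = 512):
        super().__init__()
        self.half = dim // 2
        out = dim - self.half
        self.scale_net = nn.Sequential(nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, out), nn.Tanh())
        self.shift_net = nn.Sequential(nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, out))

    def forward(self, x, cond):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        h = torch.cat([x1, cond], dim=-1)
        log_s, t = self.scale_net(h), self.shift_net(h)
        y2 = x2 * torch.exp(log_s) + t                          # affine transform of the second part
        return torch.cat([x1, y2], dim=-1), log_s.sum(dim=-1)   # output and log-determinant term

    def inverse(self, y, cond):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        h = torch.cat([y1, cond], dim=-1)
        x2 = (y2 - self.shift_net(h)) * torch.exp(-self.scale_net(h))  # invert the transform
        return torch.cat([y1, x2], dim=-1)
```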
**Point Cloud Autoencoder:** To reduce the computational time and memory consumption of the diffusion model itself, we first train a point cloud autoencoder, and the encoded hidden vector is used as the input of the subsequent diffusion model for 3D shape generation. The output of the denoising process is decoded back into point cloud data by the point cloud decoder. The autoencoder consists of an encoder and a decoder: the encoder is mainly based on the PointNet architecture [7] and a graph-based max pooling layer [8], while the decoder is mainly based on FoldingNet [21]. Fig. 2 shows the network architecture of the point cloud autoencoder.
**Conditional Diffusion Model:** The diffusion model comprises a diffusion (forward) process and a denoising (reverse) process. The diffusion process gradually adds noise to the point cloud hidden vector, thereby converting the point cloud distribution of a specific shape into a random noise distribution. The denoising process transforms noisy data back into point cloud data, conditioned on the shape vector \(s\). The diffusion process can be expressed as follows:
\[q\left(x_{i}^{(t)}\mid x_{i}^{(t-1)}\right)=\mathcal{N}\left(x^{(t)}\mid\sqrt{1 -\beta_{t}}x^{(t-1)},\beta_{t}\mathbf{I}\right) \tag{2}\]
\[q\left(x_{i}^{1:T}\mid x_{i}^{(0)}\right)=\prod_{t=1}^{T}q\left(x_{i}^{(t)} \mid x_{i}^{(t-1)}\right) \tag{3}\]
where \(\beta_{1}...\beta_{T}\) are hyperparameters at each time step that control the noise addition process.
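A minimal sketch of the forward process of Eqs. (2)-(3), applied step by step to a batch of latent vectors, is given below; the linear \(\beta\) schedule and the tensor shapes are illustrative assumptions.

```python
import torch

def forward_diffuse(x0, betas):
    """Apply the forward kernel of Eq. (2) step by step, turning a latent vector x^(0) into
    (approximately) Gaussian noise x^(T)."""
    x = x0
    for beta_t in betas:
        noise = torch.randn_like(x)
        x = torch.sqrt(1.0 - beta_t) * x + torch.sqrt(beta_t) * noise   # q(x^(t) | x^(t-1))
    return x

# usage on a batch of 8 latent vectors of (assumed) dimension 256
betas = torch.linspace(1e-4, 0.02, steps=200)
x_T = forward_diffuse(torch.randn(8, 256), betas)
```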
The denoising process recovers the original point cloud hidden vector from the noise. First, the point cloud hidden vector is sampled from the noise distribution, and the noise is then gradually removed through the reverse Markov chain. Conditioned on the shape vector \(s\), the denoising process can be expressed as follows:
\[p_{\theta}\left(x^{(t-1)}\mid x^{(t)},s\right)=\mathcal{N}\left(x^{(t-1)} \mid\mu_{\theta}\left(x^{(t)},t,s\right),\beta_{t}\mathbf{I}\right) \tag{4}\]
\[p_{\theta}\left(x^{(0:T)}\mid s\right)=p\left(x^{(T)}\right)\prod_{t=1}^{T}p _{\theta}\left(x^{(t-1)}\mid x^{(t)},s\right) \tag{5}\]
Here, \(\mu_{\theta}\) is the mean estimated by the neural network, \(s\) is the shape vector, and the initial sample of the reverse diffusion follows the standard normal distribution \(N(0,I)\).
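The reverse process of Eqs. (4)-(5) can be sketched as the ancestral-sampling loop below; the interface of the mean predictor `mu_net` and the latent dimension are assumptions made only for illustration.

```python
import torch

@torch.no_grad()
def sample_latent(mu_net, s, betas, dim=256):
    """Ancestral sampling of Eqs. (4)-(5): start from x^(T) ~ N(0, I), then repeatedly draw
    x^(t-1) ~ N(mu_theta(x^(t), t, s), beta_t I). mu_net is any callable that predicts the
    mean from x, the step index t, and the shape vector s (an assumed interface)."""
    T = len(betas)
    x = torch.randn(s.shape[0], dim)                       # x^(T) from the standard normal prior
    for t in range(T, 0, -1):
        mean = mu_net(x, torch.full((s.shape[0],), t), s)  # mu_theta(x^(t), t, s)
        noise = torch.randn_like(x) if t > 1 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t - 1]) * noise        # one reverse Markov step of Eq. (4)
    return x                                               # latent vector for the point cloud decoder

# usage with a dummy mean predictor (illustrative only)
z0 = sample_latent(lambda x, t, s: 0.99 * x, s=torch.randn(4, 256),
                   betas=torch.linspace(1e-4, 0.02, 200))
```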
The training objective is to maximize the likelihood function of the generated point cloud data \(E\left[\log p_{\theta}\left(\mathbf{X}^{(0)}\right)\right]\). Similar to the VAE model, the specific optimization goal is still to maximize its variational lower bound (ELBO).
\[\begin{split}\mathbb{E}[\log p_{\theta}(\mathbf{X}^{(0)})]& \geq\mathbb{E}\big{[}\log\frac{p_{\theta}(\mathbf{X}^{(0:T)},s)}{q( \mathbf{X}^{(1:T)},s|\mathbf{X}^{(0)})}\big{]}\\ &=\mathbb{E}\big{[}\log p(\mathbf{X}^{T})\\ &\quad+\sum_{t=1}^{T}\log\frac{p_{\theta}(\mathbf{X}^{(t-1)}| \mathbf{X}^{(t)},s)}{q(\mathbf{X}^{(t)}|\mathbf{X}^{(t-1)})}\\ &\quad-\log\frac{q_{\phi}(s|\mathbf{X}^{(0)})}{p(s|c)}\big{]} \end{split} \tag{6}\]
where \(c\) is the condition of the flow model, i.e., the vector encoded by the CLIP model, and \(s\) is the condition of the diffusion model, i.e., the shape vector. To simplify the above variational bound, DPM proposes training on pairs \((x_{t},x_{0})\) to parameterize this process with a simple squared L2 loss. The following objective is simpler to train, resembles denoising score matching, and was found to yield higher-quality samples:
\[L(\theta)=\left\|\epsilon-\epsilon_{\theta}\left(x_{i}^{(t)},t,s\right) \right\|^{2},\epsilon\sim\mathcal{N}(0,\mathbf{I}) \tag{7}\]
where \(t\) is sampled uniformly between 1 and \(T\), and \(\epsilon_{\theta}\) is the learned diffusion model.
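A sketch of this simplified objective is given below; it uses the standard closed-form expression for the marginal \(q(x^{(t)}\mid x^{(0)})\) implied by Eqs. (2)-(3), and the network interface and \(\beta\) schedule are assumptions.

```python
import torch

def diffusion_training_loss(eps_net, x0, s, betas):
    """Simplified objective of Eq. (7): draw a random step t, form the noisy latent, and
    regress the injected noise. The closed form x^(t) = sqrt(abar_t) x^(0) + sqrt(1-abar_t) eps
    is the standard DDPM marginal implied by Eqs. (2)-(3)."""
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)
    t = torch.randint(0, len(betas), (x0.shape[0],))        # uniform step index per sample
    a = alpha_bar[t].unsqueeze(-1)
    eps = torch.randn_like(x0)                              # the noise to be predicted
    x_t = torch.sqrt(a) * x0 + torch.sqrt(1.0 - a) * eps    # noisy latent at step t
    return ((eps - eps_net(x_t, t, s)) ** 2).mean()         # squared L2 loss of Eq. (7)

# usage with a dummy noise predictor (illustrative only)
loss = diffusion_training_loss(lambda x, t, s: torch.zeros_like(x),
                               x0=torch.randn(8, 256), s=torch.randn(8, 256),
                               betas=torch.linspace(1e-4, 0.02, 200))
```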
Figure 1: An overview of our proposed model
Figure 2: Point Cloud Autoencoder network architecture
## 4 Experiments
### Dataset
We use the processed ShapeNet (v2) dataset [23], which contains 13 categories of data; each sample contains the point cloud and the corresponding 2D rendered images of a 3D object. To ensure a fair comparison, we adopt the same dataset split as prior work.
### Evaluation Metrics
**Chamfer Distance (CD):** It calculates the average distance between each generated point and its closest point in the ground-truth point cloud.
**Earth Mover's Distance (EMD):** It measures the dissimilarity between two distributions, taking into account both their shape and their relative position.
**CLIP R-Precision [4]:** It evaluates generation quality from composite text by ranking textual descriptions against images of the generated shapes; the higher the rank of the true text, the higher the quality of the generated data.
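For concreteness, a minimal implementation of the Chamfer Distance defined above is sketched below in its symmetric form; whether squared or unsquared distances are averaged is an implementation choice that varies across papers.

```python
import torch

def chamfer_distance(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer Distance between point clouds of shape (N, 3) and (M, 3): the mean
    squared distance from each point to its nearest neighbour in the other cloud."""
    d = torch.cdist(pred, gt) ** 2                       # (N, M) pairwise squared distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

# usage on two random clouds of 2048 points
cd = chamfer_distance(torch.rand(2048, 3), torch.rand(2048, 3))
```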
### Results
The point cloud data generated using single words are shown in Fig. 4 and Table 1. Evaluation results demonstrate that the proposed approach outperforms the current state-of-the-art (SOTA) work in terms of the EMD metric, and our scheme supports the generation of multiple categories of data using a single model, offering better scalability. The point cloud data generated using composite text are shown in Fig. 3 and Table 2. As there are currently no available open-source codes for comparison, we compare with DreamFusion (represented by NeRF). | semantic-driven 3D形状生成は、テキストに基づいて3Dオブジェクトを生成することを目指します。過去の研究では、単一カテゴリ生成、低頻度3D詳細、そしてペアデータセットの大量の取得が課題となっています。これらの課題に対処するため、私たちは多様なカテゴリ条件付き拡散モデルを提案します。具体的には、1) 大規模なペアデータセットの欠如を緩和するため、テキスト、2D画像、3D形状を事前学習済みCLIPモデルに基づいて接続します。2) 多様なカテゴリ3D形状の特徴を抽出するため、条件付きフローモデルを用いてCLIPエンベディングに基づいて3D形状ベクトルを生成します。3) 多様なカテゴリ3D形状を生成するためには、多様なカテゴリ形状ベクトルに基づいて条件付きディフュージョンモデルを使用し、これにより、学習時間の短縮とメモリ消費量の削減を実現しました。 |
2306.17750 | TD Convergence: An Optimization Perspective | We study the convergence behavior of the celebrated temporal-difference (TD)
learning algorithm. By looking at the algorithm through the lens of
optimization, we first argue that TD can be viewed as an iterative optimization
algorithm where the function to be minimized changes per iteration. By
carefully investigating the divergence displayed by TD on a classical counter
example, we identify two forces that determine the convergent or divergent
behavior of the algorithm. We next formalize our discovery in the linear TD
setting with quadratic loss and prove that convergence of TD hinges on the
interplay between these two forces. We extend this optimization perspective to
prove convergence of TD in a much broader setting than just linear
approximation and squared loss. Our results provide a theoretical explanation
for the successful application of TD in reinforcement learning. | Kavosh Asadi, Shoham Sabach, Yao Liu, Omer Gottesman, Rasool Fakoor | 2023-06-30T16:01:04 | http://arxiv.org/abs/2306.17750v2 | # TD Convergence: An Optimization Perspective
###### Abstract
We study the convergence behavior of the celebrated temporal-difference (TD) learning algorithm. By looking at the algorithm through the lens of optimization, we first argue that TD can be viewed as an iterative optimization algorithm where the function to be minimized changes per iteration. By carefully investigating the divergence displayed by TD on a classical counter example, we identify two forces that determine the convergent or divergent behavior of the algorithm. We next formalize our discovery in the linear TD setting with quadratic loss and prove that convergence of TD hinges on the interplay between these two forces. We extend this optimization perspective to prove convergence of TD in a much broader setting than just linear approximation and squared loss. Our results provide a theoretical explanation for the successful application of TD in reinforcement learning.
## 1 Introduction
Temporal-difference (TD) learning is arguably one of the most important algorithms in reinforcement learning (RL), and many RL algorithms are based on the principles that TD embodies. TD is at the epicenter of some of the recent success examples of RL [1; 2], and has influenced many areas of science such as AI, economics, and neuroscience. Despite the remarkable success of TD in numerous settings, the algorithm is shown to display divergent behavior in contrived examples [3; 4; 5]. In practice, however, divergence rarely manifests itself even in situations where TD is used in conjunction with complicated loss functions and function approximators. Thus, it is worthwhile to obtain a deeper understanding of TD behavior, and to generalize existing convergence results to explain the practical success of this algorithm.
In this paper, our desire is to study the TD algorithm through the lens of optimization. We argue that TD could best be thought of as an iterative optimization algorithm, which proceeds as follows:
\[\theta^{t+1}\approx\arg\min_{w}H(\theta^{t},w). \tag{1}\]
This process involves two different parameters, namely the target parameter \(\theta^{t}\) that remains fixed at each iteration \(t\), and the optimization parameter \(w\) that is adjusted during each iteration to minimize the corresponding loss function. Using the more familiar deep-RL terminology, \(\theta^{t}\) corresponds to the parameters of the target network, whereas \(w\) corresponds to the parameters of the online network. Many RL algorithms can be described by this iterative process with the main difference being the approach taken to (approximately) perform the minimization. For example, the online TD algorithm [6] takes a single gradient step to crudely approximate the minimization problem (1), whereas Fitted Value Iteration [7] lies at the other extreme and solves each iteration exactly. Therefore, a deeper understanding of the iterative optimization process (1) can facilitate a better understanding of TD and related RL algorithms.
In order to build more intuition about the iterative process (1), we start by taking a deeper dive into one of the classical examples where TD displays divergent behavior [3]. By doing so, we identify two key forces, namely a target force and an optimization force, whose interplay dictates whether TD is guaranteed to converge or that a divergent behavior may manifest itself. If the optimization force can dominate the target force, then we show that the process is convergent even when we operate in the famous deadly triad [5], namely in the presence of bootstrapping, function approximation, and off-policy updates.
To better situate the interplay between the two forces, we then consider the setting where the function \(H\) is constructed using linear function approximation and quadratic loss. The most notable result in this setting is due to the seminal work of Tsitsiklis and Van Roy [3] that proved convergence using the operator perspective of TD. Our optimization perspective enables us to leverage the interplay between the two forces and show an alternative convergence guarantee for TD in this setting of linear function approximation and squared loss.
The main contribution of the paper is to show that our proposed optimization perspective of TD can naturally be extended to a much broader setting. More specifically, this optimization perspective enables us to prove TD convergence in settings where the function \(H\) is constructed with alternative function approximators and loss functions. This highlights the potency of the optimization view, and stands in sharp contrast to the more classical operator view of TD, which is limited to the linear setting and squared loss. The key ingredient of our analysis is to show again, this time in a more general setting, that TD will be convergent in situations where the optimization force dominates the target force. Overall, our results demonstrate that TD is a sound algorithm in a much broader setting than understood in previous work.
## 2 Problem Setting
Reinforcement learning (RL) is the study of artificial agents that can learn through trial and error [5]. In this paper, we focus on the more specific setting where the agent is interested in predicting the long-term goodness or the value of its states. Referred to as the prediction setting, this problem is mathematically formulated by the Markov reward process (MRP) [8]. In this paper, we consider the discounted infinite-horizon case of MRPs, which is specified by the tuple \(\langle\mathcal{S},\mathcal{R},\mathcal{P},\gamma\rangle\), where \(\mathcal{S}\) is the set of states. The function \(R:\mathcal{S}\rightarrow\ \mathbb{R}\) denotes the reward when transitioning out of a state. For any set, we denote the space of probability distributions over it by \(\Delta\). The transition \(\mathcal{P}:\mathcal{S}\rightarrow\ \Delta(\mathcal{S})\) defines the conditional probabilities over the next states given the current state, and is denoted by \(\mathcal{P}(s^{\prime}\mid s)\). Finally, the scalar \(\gamma\in(0,1)\) geometrically discounts rewards that are received in the future steps.
The primary goal of this paper is to understand the behavior of RL algorithms that learn to approximate the state value function defined as \(v(s):=\mathbb{E}\big{[}\sum_{t=0}^{\infty}\gamma^{t}r_{t}\big{|}s_{0}=s\big{]}\). To this end, we define the Bellman operator \(\mathcal{T}\) as follows:
\[\big{[}\mathcal{T}v\big{]}(s):=\mathcal{R}(s)+\sum_{s^{\prime}\in\mathcal{S} }\gamma\ \mathcal{P}(s^{\prime}\mid s)v(s^{\prime})\;,\]
which we can write compactly as: \(\mathcal{T}v:=R+\gamma Pv\). In large-scale RL problems the number of states \(|\mathcal{S}|\) is enormous, which makes it infeasible to use tabular approaches. We are interested in the setting where we have a parameterized function approximator, and our desire is to find a parameter \(\theta\) for which the learned value function \(v(s;\theta)\) results in a good approximation of the true value function \(v(s)\).
A fundamental and quite popular approach to finding a good approximation of the value function is known as temporal difference (TD) learning [6]. Suppose that a sample \(\langle s,r,s^{\prime}\rangle\) is given where \(s^{\prime}\sim\mathcal{P}(\cdot|s)\). In this case, TD learning algorithm updates the parameters of the approximate value function as follows:
\[\theta^{t+1}\leftarrow\theta^{t}+\alpha\big{(}r+\gamma v(s^{\prime};\theta^{t })-v(s;\theta^{t})\big{)}\nabla_{\theta}v(s;\theta^{t})\;, \tag{2}\]
where \(\theta^{t}\) denotes the parameters of our function approximator at iteration \(t\). Also, by \(\nabla_{\theta}v(s;\theta^{t})\) we are denoting the partial gradient of \(v(s;\theta)\) with respect to the parameter \(\theta\). Note that TD uses the value estimate obtained by one-step lookahead \(\big{(}r+\gamma v(s^{\prime};\theta^{t})\big{)}\) to update its approximate value function \(v(s;\theta^{t})\) in the previous step. This one-step look-ahead of TD could be thought of as a sample of the right-hand-side of the Bellman equation. Our focus on TD is due to the fact that many
of the more modern RL algorithms are designed based on the principles that TD embodies. We explain this connection more comprehensively in section 3.
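For concreteness, the following is a minimal NumPy sketch of update (2) with linear function approximation; the i.i.d. sampling of states from an update distribution \(d\) and all numeric inputs are illustrative simplifications, not part of the paper's analysis.

```python
import numpy as np

def linear_td0(phi, P, R, gamma, alpha, d, steps, seed=0):
    """Online TD(0), update (2), with a linear value function v(s; theta) = phi[s] @ theta.
    States are drawn i.i.d. from the update distribution d and successors from P."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(phi.shape[1])
    n = phi.shape[0]
    for _ in range(steps):
        s = rng.choice(n, p=d)
        s_next = rng.choice(n, p=P[s])
        td_error = R[s] + gamma * phi[s_next] @ theta - phi[s] @ theta
        theta += alpha * td_error * phi[s]          # semi-gradient step of update (2)
    return theta

# toy 3-state MRP; the uniform distribution happens to be its stationary distribution
P = np.array([[0.1, 0.9, 0.0], [0.0, 0.1, 0.9], [0.9, 0.0, 0.1]])
R = np.array([0.0, 0.0, 1.0])
phi = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
theta = linear_td0(phi, P, R, gamma=0.9, alpha=0.05, d=np.full(3, 1 / 3), steps=20000)
print("approximate values:", phi @ theta)
```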
To understand the existing results on TD convergence, we define the Markov chain's stationary state distribution \(d(\cdot)\) as the unique distribution with the following property: \(\forall s^{\prime}\sum_{s\in\mathcal{S}}d(s)\mathcal{P}(s^{\prime}\mid s)=d(s^{ \prime})\). Then, Tsitsiklis and Van Roy show in [3] that, under linear function approximation, TD will be convergent if the states \(s\) are sampled from the stationary-state distribution. We will discuss this result in more detail later on in section 5.
However, it is well-known that linear TD can display divergent behavior if states are sampled from an arbitrary distribution rather than the stationary-state distribution of the Markov chain. A simple yet classical counter example of divergence of linear TD is shown in Figure 1 and investigated in section 4. First identified in [3], this example is a very simple Markov chain with two non-terminal states and zero rewards. Moreover, little is known about convergence guarantees of TD under alternative function approximators, or even when the update rule of TD is modified slightly.
In this paper, we focus on the Markov reward process (MRP) setting, which could be thought of as a Markov decision process (MDP) with a single action. TD can display divergence even in the MRP setting [3; 5], which indicates that the presence of multiple actions is not the core reason behind TD misbehavior [5] -- divergence can manifest itself even in the more specific case of a single action. Our results can naturally be extended to the MDP setting (multiple actions) with off-policy updates, where in update (2) of TD actions are sampled according to a target policy but states are sampled according to a behavior policy. That said, akin to Tsitsiklis and Van Roy [3], as well as chapter 11 in the book of Sutton and Barto [5], we focus on MRPs to study the root cause of TD misbehavior in the clearest setting.
## 3 TD Learning as Iterative Optimization
In this section, we argue that common approaches to value function prediction can be viewed as iterative optimization algorithms where the function to be minimized changes per iteration. To this end, we recall that for a given experience tuple \(\langle s,r,s^{\prime}\rangle\), TD performs the update presented in (2). A common augmentation of TD is to decouple the parameters to target (\(\theta\)) and optimization (\(w\)) parameters [1] and update \(\theta\) less frequently. In this case, at a given iteration \(t\), the algorithm performs multiple (\(K\)) gradient steps as follows:
\[w^{t,k+1}\gets w^{t,k}+\alpha\big{(}r+\gamma v(s^{\prime};\theta^{t})-v( s;w^{t,k})\big{)}\nabla_{\theta}v(s;w^{t,k})\;, \tag{3}\]
and then updates the target parameter \(\theta^{t+1}\gets w^{t,K}\) before moving to the next iteration \(t+1\). Here \(K\) is a hyper-parameter, where \(K=1\) takes us back to the original TD update (2). Observe that the dependence of \(v(s^{\prime};\theta^{t})\) to our optimization parameter \(w\) is ignored in this update, despite the fact that an implicit dependence is present due to the final step \(\theta^{t+1}\gets w^{t,K}\). This means that the objective function being optimized is made up of two separate input variables2. We now define:
Footnote 2: In fact, [9] shows that there cannot exist any objective function \(J(\theta)\) with a single input variable whose gradient would take the form of the TD update.
\[H(\theta,w)=\sum_{s}d(s)\big{(}\textbf{E}_{r,s^{\prime}}[r+\gamma v(s^{ \prime};\theta)]-v(s;w)\big{)}^{2}\;,\]
where we allow \(d(\cdot)\) to be an arbitrary distribution, not just the stationary-state distribution of the Markov chain. Observe that the partial gradient of \(H\) with respect to the optimization parameters \(w\) is equivalent to the expectation of the update in (3). Therefore, TD, DQN, and similar algorithms could best be thought of as learning algorithms that proceed by approximately solving for the following sequence of optimization problems:
\[\theta^{t+1}\approx\arg\min_{w}H(\theta^{t},w)\;,\]
using first-order optimization techniques. This optimization perspective is useful conceptually because it accentuates the unusual property of this iterative process, namely that the first argument of the objective \(H\) hinges on the output of the previous iteration. Moreover, the general form of this optimization process allows for using alternative forms of loss functions such as the Huber loss [1], the logistic loss [10], or the entropy [11], as well as various forms of function approximation such as linear functions or deep neural networks. Each combination of loss functions and function approximators yields a different \(H\), but one that is always comprised of a function \(H\) with two inputs.
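The sketch below makes this iterative process concrete for the linear/quadratic instance of \(H\): the target parameter \(\theta\) is frozen, \(K\) gradient steps of the expected update (3) are taken on \(w\), and the result becomes the next target. The toy MRP, feature matrix, and hyperparameters are placeholders chosen only for illustration.

```python
import numpy as np

def iterative_td(phi, P, R, gamma, d, K, alpha, iters):
    """Freeze the target parameter theta, take K gradient steps of the expected update (3)
    on w against H(theta, w), then set theta <- w; K = 1 recovers (expected) TD."""
    D = np.diag(d)
    theta = np.zeros(phi.shape[1])
    for _ in range(iters):
        target = R + gamma * P @ phi @ theta       # bootstrapped targets with theta frozen
        w = theta.copy()
        for _ in range(K):
            grad = phi.T @ D @ (phi @ w - target)  # partial gradient of H with respect to w
            w -= alpha * grad
        theta = w                                  # target update at the end of the iteration
    return theta

# toy 3-state MRP with 2 features; d is set to the stationary distribution of P
rng = np.random.default_rng(0)
P = rng.random((3, 3)); P /= P.sum(axis=1, keepdims=True)
phi, R = rng.random((3, 2)), rng.random(3)
evals, evecs = np.linalg.eig(P.T)
d = np.real(evecs[:, np.argmax(np.real(evals))]); d /= d.sum()
print(iterative_td(phi, P, R, gamma=0.9, d=d, K=10, alpha=0.1, iters=500))
```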
A closely related optimization process is one where each iteration is solved exactly:
\[\theta^{t+1}\leftarrow\arg\min_{w}H(\theta^{t},w)\;, \tag{4}\]
akin to Fitted Value Iteration [7; 12]. Exact optimization is doable in problems where the model of the environment is available and that the solution takes a closed form. A pseudo-code of both algorithms is presented in Algorithms 1 and 2.
```
Input: \(\theta^{0},\ T\)
for \(t=0\) to \(T-1\) do
    \(\theta^{t+1}\leftarrow\arg\min_{w}H(\theta^{t},w)\)
end for
Return \(\theta^{T}\)
```
**Algorithm 1** Value Function Optimization with Exact Updates
In terms of the difference between the two algorithms, notice that in Algorithm 1 we assume that we have the luxury of somehow solving each iteration exactly. This stands in contrast to Algorithm 2 where we may not have this luxury, and resort to gradient updates to find a rough approximation of the actual solution. Thus, Algorithm 1 is more difficult to apply but easier to understand, whereas Algorithm 2 is easier to apply but more involved in terms of obtaining a theoretical understanding.
Note that if convergence manifests itself in each of the two algorithms, the convergence point denoted by \(\theta^{\star}\) must have the following property:
\[\nabla_{w}H(\theta^{\star},\theta^{\star})=\mathbf{0}\;. \tag{5}\]
This fixed-point characterization of TD has been explored in previous work [13; 14]. Whenever it exists, we refer to \(\theta^{\star}\) as the fixed-point of these iterative algorithms. However, convergence to the fixed-point is not always guaranteed [5] even when we have the luxury of performing exact minimization akin to Algorithm 1. In this paper, we study both the exact and inexact version of the optimization process. In doing so, we identify two forces that primarily influence convergence. To begin our investigation, we study a counter example to build intuition on why divergence can manifest itself. We present a formal analysis of the convergence of the two algorithms in section 6.
## 4 Revisiting the Divergence Example of TD
In this section, we focus on a simple counter example where TD is known to exhibit divergence. First identified by [3], investigating this simple example enables us to build some intuition about the root cause of divergence in the most clear setting.
Shown in Figure 1, this example is a Markov chain with two non-terminal states and zero rewards: state \(s_{1}\) transitions to \(s_{2}\) with probability one, while \(s_{2}\) transitions to itself with probability \(1-\epsilon\) and to the terminal state with probability \(\epsilon\). A linear function approximation, \(v(s;\theta)=\phi(s)\theta\), is employed with a single feature where \(\phi(s_{1})=1\) and \(\phi(s_{2})=2\). The third state is a terminal one whose value is always zero. The true value function (0 in all states) is realizable with \(\theta=0\).
To build some intuition about the root cause of divergence, we discuss the convergence of exact TD with a few state distributions in this example. We desire to update all but the terminal state with non-zero probability. However, to begin with, we focus on a particular extreme case where we put all of our update probability behind the second state:
\[\theta^{t+1}\leftarrow\arg\min_{w}H(\theta^{t},w)=\arg\min_{w}\frac{1}{2}\big{(} (1-\epsilon)(\gamma 2\theta^{t})-2w\big{)}^{2}\;.\]
We thus have \(\nabla_{w}H(\theta^{t},w)=2\big{(}2w-(1-\epsilon)\gamma 2\theta^{t}\big{)}\), and since \(\nabla_{w}H(\theta^{t},\theta^{t+1})=0\), we can write: \(\theta^{t+1}\leftarrow(2)^{-1}(1-\epsilon)\gamma 2\theta^{t}\). The process converges to the fixed-point \(\theta=0\) for all values of \(\gamma<1\) and \(\epsilon\in[0,1]\). This is because the target force of \(\theta^{t}\) (namely \(\big{(}(1-\epsilon)\gamma 2\big{)}\)) is always dominated by the optimization force of \(w\) (namely 2). Note that updating this state is thus not problematic at all, and that the update becomes even more conducive to convergence when \(\gamma\) is smaller and \(\epsilon\) is larger.

Figure 1: Divergence example of TD [3].
We now juxtapose the first extreme case with the second one where we put all of our update probability behind the first state, in which case at a given iteration \(t\) we have:
\[\theta^{t+1}\leftarrow\arg\min_{w}H(\theta^{t},w)=\arg\min_{w}\ \frac{1}{2}( \gamma 2\theta^{t}-w)^{2}\;.\]
We thus have \(\nabla_{w}H(\theta^{t},w)=(w-\gamma 2\theta^{t})\) and so \(\theta^{t+1}\leftarrow(1)^{-1}\gamma 2\theta^{t}\). Unlike the first extreme case, convergence is no longer guaranteed. Concretely, to ensure convergence, we require that the target force of \(\theta^{t}\) (namely \(\gamma 2\)) be smaller than the optimization force of \(w\) (namely 1).
To better understand why divergence manifests itself, note that the two states \(s_{1}\) and \(s_{2}\) have very similar representations (in the form of a single feature). Note also that the value of the feature is larger for the target state than it is for the source state, \(\phi(s_{2})>\phi(s_{1})\). This means that any change in the value of the source state \(s_{1}\) would have the external effect of changing the value of the target state \(s_{2}\). Further, the change is in the same direction (due to the positive sign of the feature in both states) and it is larger by a factor of exactly \(2\gamma\) for the target state relative to the source state. Therefore, when \(\gamma>1/2\), the function \(H\) will be more sensitive to the target parameter \(\theta\) relative to the optimization parameter \(w\). This is the root cause of divergence.
Moving to the case where we update both states with non-zero probability, first note that if the two updates were individually convergent, then the combination of the two updates would also have been convergent in a probability-agnostic way. Stated differently, convergence would have been guaranteed regardless of the probability distribution \(d\) and for all off-policy updates. However, in light of the fact that the optimization force of \(s_{1}\) does not dominate its target force, we need to choose \(d(s_{1})\) small enough so as to contain the harmful effect of updating this state.
We compute the overall update (under the case where we update the two states with equal probability) by computing the sum of the two gradients in the two extreme cases above:
\[(w-\gamma 2\theta^{t})+2(2w-\gamma 2(1-\epsilon)\theta^{t})=0\;,\]
and so \(\theta^{t+1}\leftarrow(5)^{-1}\gamma(6-4\epsilon)\theta^{t}\). Note that even with a uniform update distribution, the two states are contributing non-uniformly to the overall update, and the update in the state \(s_{2}\) is more influential because of the higher magnitude of the feature in this state (\(\phi(s_{2})=2\) against \(\phi(s_{1})=1\) in the first state).
To ensure convergence, we need to have that \(\gamma<\frac{5}{6-4\epsilon}\). Notice, again, that we are combining the update in a problematic state with the update in the state that is conducive for convergence. We can contain the negative effect of updating the first state by ensuring that our update in the second state is even more conducive for convergence (corresponding to a larger \(\epsilon\)). In this case, the update of \(s_{2}\) can serve as a mitigating factor.
We can further characterize the update with a general distribution. In this case, we have:
\[d(s_{1})\big{(}w-\gamma 2\theta^{t}\big{)}+\big{(}1-d(s_{1})\big{)}\big{(}2( 2w-\gamma 2(1-\epsilon)\theta^{t})\big{)}=0\;,\]
which gives us a convergent update if: \(d(s_{1})<\frac{4-4\gamma(1-\epsilon)}{3+\gamma 2-4\gamma(1-\epsilon)}\;.\)
Both the denominator and the numerator are always positive, therefore regardless of the values of \(\epsilon\) and \(\gamma\), there always exists a convergent off-policy update, but one that needs to assign less probability to the problematic state as we increase \(\gamma\) and decrease \(\epsilon\).
This means that using some carefully chosen off-policy distributions is not only safe, but that doing so can even speed up convergence. This will be the case if the chosen distribution makes the objective function \(H\) more sensitive to changes in its first argument (target parameter \(\theta\)) than its second argument (the optimization parameter \(w\)). We next formalize this intuition.
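The derivation above can be checked numerically: under the exact update, \(\theta\) is multiplied at every iteration by \(2\gamma\big{(}d(s_{1})+2(1-\epsilon)(1-d(s_{1}))\big{)}/\big{(}d(s_{1})+4(1-d(s_{1}))\big{)}\), so iterating this multiplier directly exhibits the convergent and divergent regimes. The particular \(\gamma\) and \(\epsilon\) values in the sketch below are arbitrary choices.

```python
import numpy as np

def exact_multiplier(d1, gamma=0.99, eps=0.05):
    """theta^{t+1} = exact_multiplier(d1) * theta^t for the two-state example above."""
    return 2 * gamma * (d1 + 2 * (1 - eps) * (1 - d1)) / (d1 + 4 * (1 - d1))

bound = (4 - 4 * 0.99 * (1 - 0.05)) / (3 + 2 * 0.99 - 4 * 0.99 * (1 - 0.05))
print(f"convergence requires d(s1) < {bound:.3f}")
for d1 in (0.1, 0.5):
    theta = 1.0
    for _ in range(50):
        theta = exact_multiplier(d1) * theta     # 50 exact iterations of Algorithm 1
    print(f"d(s1) = {d1}: multiplier = {exact_multiplier(d1):.4f}, "
          f"theta after 50 iterations = {theta:.2e}")
```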
## 5 Convergence of Linear TD with Quadratic Loss
We now focus on the case of TD with linear function approximation. In this case, the expected TD update (2) could be written as the composition of two separate operators (with \(v_{t}=\Phi\theta_{t}\)):
\[v_{t+1}\leftarrow\Pi_{D}\big{(}\mathcal{T}(v_{t})\big{)}\;,\]
where the projection operator \(\Pi_{D}\) and the Bellman operator \(\mathcal{T}\) are defined as follows:
\[\Pi_{D}=\Phi(\Phi^{\top}D\Phi)^{-1}\Phi^{\top}D,\qquad\text{and}\qquad\mathcal{T }(v)=R+\gamma Pv\;.\]
Here, \(D\) is a diagonal matrix with diagonal entries \(d(s_{1}),...,d(s_{n})\), where \(n\) is the number of states. The projection operator \(\Pi_{D}\) is non-expansive under any distribution \(d\). However, the Bellman operator is a \(\gamma\)-contraction under a specific \(d\), namely the stationary-state distribution of the Markov chain specified by the transition matrix \(P\) as shown by Tsitsiklis and Van Roy [3]. Therefore, the composition of the two operators is a \(\gamma\)-contraction when the distribution \(d\) is the stationary-state distribution of the Markov chain.
In light of this result, one may think that TD is convergent in a very narrow sense, specifically when the updates are performed using the stationary-state distribution. But, as we saw with the counter example, in general there may exist many other distributions \(d\) that are in fact very conducive for TD convergence. In these cases, the proof technique above cannot provide a tool to ensure convergence because of the reliance of the non-expansive and contraction properties of these operators. Is this a limitation of the TD algorithm itself, or a limitation of this specific operator perspective of TD? Further, if this operator perspective is limited, is there a different perspective that can give rise to a more general understanding of TD convergence?
Our goal for the rest of the paper is to develop an alternative optimization perspective that can be applied in a broader setting relative to the operator perspective. To this end, our first task is to show that the optimization perspective can give us new insights even in the linear case and with squared loss functions. We generalize our optimization view of TD in the counter example by defining the objective function \(H\) for this case:
\[H(\theta,w)=\frac{1}{2}\|R+\gamma P\Phi\theta-\Phi w\|_{D}^{2}\;, \tag{6}\]
where \(\left\|x\right\|_{D}=\sqrt{x^{\top}Dx}\). Recall that in the exact case, the TD algorithm could succinctly be written as: \(\theta^{t+1}\leftarrow\arg\min_{w}H(\theta^{t},w)\). Following the steps taken with the counter example, we first compute the gradient to derive the target and optimization forces:
\[\nabla_{w}H(\theta,w)=\Phi^{\top}D(\Phi w-R-\gamma P\Phi\theta)=\underbrace{ \Phi^{\top}D\Phi}_{\mathbf{M}_{w}}w-\underbrace{\gamma\Phi^{\top}DP\Phi}_{ \mathbf{M}_{\theta}}\theta-\Phi^{\top}DR\;. \tag{7}\]
Here we have similar dynamics between \(\theta\) and \(w\) except that in this more general case, rather than scalar forces as in the counter example, we now have the two matrices \(\mathbf{M}_{w}\) and \(\mathbf{M}_{\theta}\). Note that \(\mathbf{M}_{w}\) is a positive definite matrix, \(\lambda_{\min}(\mathbf{M}_{w})=\min_{x}x^{\top}\mathbf{M}_{w}x/||x||^{2}>0\), if \(\Phi\) is full rank. Below we can conveniently derive the update rule of linear TD by using these two matrices.
**Proposition 1**.: Let \(\{\theta^{t}\}_{t\in\mathbb{N}}\) be a sequence of parameters generated by Algorithm 1. Then, for the fixed-point \(\theta^{\star}\) and any \(t\in\mathbb{N}\), we have
\[\theta^{t+1}-\theta^{\star}=\mathbf{M}_{w}^{-1}\mathbf{M}_{\theta}\left(\theta ^{t}-\theta^{\star}\right).\]
Proof.: Since \(\theta^{t+1}\leftarrow\arg\min_{w}H(\theta^{t},w)\), we have \(\nabla_{w}H(\theta^{t},\theta^{t+1})=\mathbf{0}\). Using (7) we have:
\[\mathbf{M}_{w}\theta^{t+1}=\Phi^{\top}DR+\mathbf{M}_{\theta}\theta^{t}=( \mathbf{M}_{w}-\mathbf{M}_{\theta})\theta^{\star}+\mathbf{M}_{\theta}\theta^{ t},\]
where the last equality follows from \(\theta^{\star}\) being the fixed-point and therefore \(\nabla_{w}H(\theta^{\star},\theta^{\star})=\mathbf{0}\), which in light of (7) translates to \(\Phi^{\top}DR=(\mathbf{M}_{w}-\mathbf{M}_{\theta})\theta^{\star}\). Multiplying both sides by \(\mathbf{M}_{w}^{-1}\), we get:
\[\theta^{t+1}=(I-\mathbf{M}_{w}^{-1}\mathbf{M}_{\theta})\theta^{\star}+ \mathbf{M}_{w}^{-1}\mathbf{M}_{\theta}\theta^{t}\;.\]
Rearranging the equality leads into the desired result.
This proposition characterizes the evolution of the difference between the parameter \(\theta^{t}\) and the fixed-point \(\theta^{\star}\). Similarly to our approach with the counter example, we desire to ensure that this difference converges to \(\mathbf{0}\). The following corollary gives us a condition for convergence [15].
**Corollary 2**.: Let \(\{\theta^{t}\}_{t\in\mathbb{N}}\) be a sequence of parameters generated by Algorithm 1. Then, \(\{\theta^{t}\}_{t\in\mathbb{N}}\) converges to the fixed-point \(\theta^{\star}\) if and only if the spectral radius of \(\mathbf{M}_{w}^{-1}\mathbf{M}_{\theta}\) satisfies \(\rho(\mathbf{M}_{w}^{-1}\mathbf{M}_{\theta})<1\).
We can employ this Corollary to characterize the convergence of TD in the counter example from the previous section. In this case, we have \(\mathbf{M}_{w}=5\) and \(\mathbf{M}_{\theta}=\gamma(6-4\epsilon)\), which give us \((5)^{-1}\gamma(6-4\epsilon)<1\). This is exactly the condition obtained in the previous section.
Notice that if \(d\) is the stationary-state distribution of the Markov chain then \(\rho(\mathbf{M}_{w}^{-1}\mathbf{M}_{\theta})<1\)[3], and so the algorithm is convergent. However, as demonstrated in the counter example, the condition can also hold for many other distributions. The key insight is to ensure that the distribution puts more weight behind states where the optimization force of \(w\) is dominating the target force due to \(\theta\).
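A small sketch of this check is given below for the counter example of Section 4, using the transition structure read off from the derivations there (\(s_{1}\) moves to \(s_{2}\), and \(s_{2}\) self-loops with probability \(1-\epsilon\)); the specific \(\gamma\), \(\epsilon\), and distributions are illustrative.

```python
import numpy as np

def td_spectral_radius(Phi, P, d, gamma):
    """Corollary 2 check for exact linear TD: build M_w = Phi^T D Phi and
    M_theta = gamma * Phi^T D P Phi, and return the spectral radius of M_w^{-1} M_theta."""
    D = np.diag(d)
    M_w = Phi.T @ D @ Phi
    M_theta = gamma * Phi.T @ D @ P @ Phi
    return np.max(np.abs(np.linalg.eigvals(np.linalg.solve(M_w, M_theta))))

# two-state example: s1 -> s2 with prob. 1; s2 -> s2 with prob. 1-eps (else terminate, value 0)
eps, gamma = 0.05, 0.99
Phi = np.array([[1.0], [2.0]])
P = np.array([[0.0, 1.0], [0.0, 1.0 - eps]])
for d in ([0.1, 0.9], [0.5, 0.5], [0.9, 0.1]):
    rho = td_spectral_radius(Phi, P, np.array(d), gamma)
    print(f"d = {d}: spectral radius = {rho:.3f} ->",
          "convergent" if rho < 1 else "divergent")
```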
Note that the operator view of TD becomes inapplicable as we make modifications to \(H\), because it will be unclear how to write the corresponding RL algorithm as a composition of operators. Can we demonstrate that in these cases the optimization perspective is still well-equipped to provide us with new insights about TD convergence? We next answer this question affirmatively by showing that the optimization perspective of TD convergence naturally extends to a much broader setting than considered in this section, and therefore, is a more powerful perspective than the classical operator perspective.
## 6 Convergence of TD with General \(H\)
Our desire now is to show convergence of TD without limiting the scope of our results to linear approximation and squared loss. We would like our theoretical results to support alternative ways of constructing \(H\) than the one studied in existing work as well as our previous section. In doing so, we again show that convergence of TD hinges on the interplay between the two identified forces.
Before presenting our main theorem, we discuss important concepts from optimization that will be used at the core of our proofs. We study convergence of Algorithms 1 and 2 with a general function \(H:\mathbb{R}^{n}\times\mathbb{R}^{n}\rightarrow\mathbb{R}\) that satisfies the following two assumptions:
1. The partial gradient \(\nabla_{w}H\) is \(F_{\theta}\)-Lipschitz: \[\forall\theta_{1},\forall\theta_{2}\qquad\|\nabla_{w}H(\theta_{1},w)-\nabla_{w}H(\theta_{2},w)\|\leq F_{\theta}\|\theta_{1}-\theta_{2}\|\;.\]
2. The function \(H(\theta,w)\) is \(F_{w}\)-strongly convex in \(w\): \[\forall w_{1},\forall w_{2}\qquad\big{(}\nabla_{w}H(\theta,w_{1})-\nabla_{w}H( \theta,w_{2})\big{)}^{\top}(w_{1}-w_{2})\geq F_{w}\|w_{1}-w_{2}\|^{2}\;.\]
Note that, in the specific linear case and quadratic loss (our setting in the previous section) these assumptions are easily satisfied [16]. More specifically, in that case \(F_{\theta}=\lambda_{max}(\mathbf{M}_{\theta})\) and \(F_{w}=\lambda_{min}(\mathbf{M}_{w})\). But the assumptions are also satisfied in a much broader setting than before. For example, the loss function being used could be the Huber loss used as in DQN [1], logistic loss [10], or the entropy [11]. Also the function approximator can have an alternative form such as quadratic [17], or the more general input-convex neural networks [18]. Any combination of these choices can conveniently co-exist under our assumptions.
We are now ready to present the main result of our paper:
**Theorem 3**.: Let \(\{\theta^{t}\}_{t\in\mathbb{N}}\) be a sequence generated by either Algorithm 1 or 2. If \(F_{\theta}<F_{w}\), then the sequence \(\{\theta^{t}\}_{t\in\mathbb{N}}\) converges to the fixed-point \(\theta^{\star}\).
In order to prove the result, we tackle the two cases of Algorithm 1 and 2 separately. We first start by showing the result for Algorithm 1 where we can solve each iteration exactly. This is an easier case to tackle because we have the luxury of performing exact minimization, which is more stringent and difficult to implement but easier to analyze and understand. This would be more pertinent to Fitted Value Iteration and similar algorithms. We then move to the case where we approximately solve each iteration (Algorithm 2) akin to TD and similar algorithms. The proof in this case is more involved, and partly relies on choosing small steps when performing gradient descent.
### Exact Optimization (Algorithm 1)
In this case, convergence to the fixed-point \(\theta^{\star}\) can be obtained as a corollary of the following result:
**Proposition 4**.: Let \(\{\theta^{t}\}_{t\in\mathbb{N}}\) be a sequence generated by Algorithm 1. Then, we have:
\[\|\theta^{t+1}-\theta^{\star}\|\leq F_{w}^{-1}F_{\theta}\|\theta^{t}-\theta^{ \star}\|\;.\]
From this result, we immediately obtain that the relative strength of the two forces, namely the conducive force \(F_{w}\) due to optimization and the detrimental target force \(F_{\theta}\), determines if the algorithm is well-behaved. The proof of Theorem 3 in this case follows immediately from Proposition 4. We now present the proof of this Proposition.
Proof.: First notice that \(\theta^{t+1}\leftarrow\operatorname*{arg\,min}_{w}H(\theta^{t},w)\), so we have \(\nabla_{w}H(\theta^{t},\theta^{t+1})=\mathbf{0}\). Now, using the \(F_{w}\)-strong convexity of \(w\to H(\theta^{t},w)\), we get that
\[F_{w}\|\theta^{t+1}-\theta^{\star}\|^{2} \leq \langle\theta^{\star}-\theta^{t+1},\nabla_{w}H(\theta^{t},\theta ^{\star})-\nabla_{w}H(\theta^{t},\theta^{t+1})\rangle\] \[= \langle\theta^{\star}-\theta^{t+1},\nabla_{w}H(\theta^{t},\theta ^{\star})\rangle\qquad(\text{from }\nabla_{w}H(\theta^{t},\theta^{t+1})=\mathbf{0}).\]
Now, since \(\theta^{\star}\) is a fixed-point, it follows that \(\nabla_{w}H(\theta^{\star},\theta^{\star})=\mathbf{0}\). Therefore, we have:
\[F_{w}\|\theta^{t+1}-\theta^{\star}\|^{2} \leq\langle\theta^{\star}-\theta^{t+1},\nabla_{w}H(\theta^{t}, \theta^{\star})-\nabla_{w}H(\theta^{\star},\theta^{\star})\rangle\] \[\leq\|\theta^{t+1}-\theta^{\star}\|\cdot\|\nabla_{w}H(\theta^{t}, \theta^{\star})-\nabla_{w}H(\theta^{\star},\theta^{\star})\|\] \[\leq F_{\theta}\|\theta^{t+1}-\theta^{\star}\|\cdot\|\theta^{t}- \theta^{\star}\|,\]
where in the second line we used the Cauchy-Schwarz inequality, and the last inequality follows from the \(F_{\theta}\)-Lipschitz property of \(\nabla_{w}H(\cdot,\theta^{\star})\). Since this inequality holds true for any \(t\in\mathbb{N}\), we get that if \(\theta^{t}=\theta^{\star}\), then we also have that \(\theta^{t+1}=\theta^{\star}\). Thus, if \(\theta^{T}=\theta^{\star}\) for some \(T\in\mathbb{N}\), then \(\theta^{t}=\theta^{\star}\) for any \(t\geq T\) and so the algorithm has converged. On the other hand, if \(\theta^{t}\neq\theta^{\star}\) for all \(t\in\mathbb{N}\), the desired result follows after dividing both sides by \(\|\theta^{t+1}-\theta^{\star}\|\).
### Inexact Optimization (Algorithm 2)
So far we have shown that the optimization process is convergent in the presence of exact optimization. This result supports the soundness of algorithms such as Fitted Value Iteration, but not TD yet, because in the case of TD we only roughly approximate the minimization step. Can this desirable convergence result be extended to the more general setting of TD-like algorithms where we inexactly solve each iteration by a few gradient steps, or is exact optimization necessary for obtaining convergence? Answering this question is very important because in many settings it would be a stringent requirement to have to solve the optimization problem exactly.
We now show that indeed convergence manifests itself in the inexact case as well. In the extreme case, we can show that all we need is merely one single gradient update at each iteration. This means that even the purely online TD algorithm, presented in update (2), is convergent with general \(H\) if the optimization force can dominate the target force.
However, it is important to note that because we are now using gradient information to crudely approximate each iteration, we need to ensure that the step-size parameter \(\alpha\) is chosen reasonably. More concretely, in this setting we need an additional assumption, namely that there exists an \(L>0\) such that:
\[\forall w_{1},\forall w_{2}\qquad\|\nabla_{w}H(\theta,w_{1})-\nabla_{w}H( \theta,w_{2})\|\leq L\|w_{1}-w_{2}\|\.\]
Notice that such an assumption is quite common in the optimization literature (see, for instance, [19]). Moreover, it is common to choose \(\alpha=1/L\), which we also employ in the context of Algorithm 2. We formalize this in the proposition presented below:
**Proposition 5**.: Let \(\{\theta^{t}\}_{t\in\mathbb{N}}\) be a sequence generated by Algorithm 2 with the step-size \(\alpha=1/L\). Then, we have:
\[\|\theta^{t+1}-\theta^{\star}\|\leq\sigma_{K}\|\theta^{t}-\theta^{\star}\|\,\]
where:
\[\sigma_{K}^{2}\equiv(1-\kappa)^{K}\left(1-\eta^{2}\right)+\eta^{2}\.\]
with \(\kappa\equiv L^{-1}F_{w}\) and \(\eta\equiv F_{w}^{-1}F_{\theta}\).
Notice that \(\kappa\), which is sometimes referred to as the condition number in the optimization literature, is always smaller than 1. Therefore, we immediately conclude Theorem 3. Indeed, since the optimization force dominates the target force (meaning \(\eta<1\)), Algorithm 2 is convergent. Notice that a surprisingly positive consequence of this theorem is that we get convergent updates even if we only perform one gradient step per iteration (\(K=1\)). In deep-RL terminology, this corresponds to the
case where we basically have no frozen target network, and we immediately use the new target parameter \(\theta\) for the subsequent update.
To further situate this result, notice that as \(K\) approaches \(\infty\) then \(\sigma_{K}\) approaches \(\eta\), which is exactly the contraction factor from Proposition 4 where we assumed exact optimization. So this proposition should be thought of as a tight generalization of Proposition 4 for the exact case. With a finite \(K\) we are paying a price for the crudeness of our approximation.
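A quick numerical illustration of Proposition 5: with arbitrary example constants, \(\sigma_{K}\) decreases monotonically in \(K\) toward the exact-optimization factor \(\eta\).

```python
import numpy as np

# Contraction factor of Proposition 5: sigma_K^2 = (1 - kappa)^K (1 - eta^2) + eta^2,
# with kappa = F_w / L and eta = F_theta / F_w. The constants below are arbitrary examples.
F_w, F_theta, L = 1.0, 0.6, 5.0
kappa, eta = F_w / L, F_theta / F_w
for K in (1, 5, 20, 100):
    sigma_K = np.sqrt((1 - kappa) ** K * (1 - eta ** 2) + eta ** 2)
    print(f"K = {K:3d}: sigma_K = {sigma_K:.4f}  (limit eta = {eta})")
```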
Moreover, another interesting reduction of our result is to the case where the target force due to bootstrapping in RL is absent, meaning that \(F_{\theta}\equiv 0\). In this case, the contraction factor \(\sigma_{K}\) reduces to \((1-\kappa)^{K/2}\), which is exactly the known convergence rate for the gradient-descent algorithm in the strongly convex setting [19].
## 7 Related Work
In this paper, we studied convergence of TD through the lens of optimization. The underlying principles of TD are so central to RL that a large number of RL algorithms can be thought of as versions of TD, and the availability of convergence results varies between different types of algorithms. We chose a setting we believe is as elementary as possible to highlight key principles of TD convergence. Closest to our work is the work of Tsitsiklis and Van Roy [3] who proved convergence of TD with linear function approximation, squared loss, and the stationary-state distribution of the Markov chain. Later work further analyzed the finite-time convergence of TD with linear function approximation and squared loss [20; 21], and also generalized it to the control setting [22; 23].
Modifications of TD are presented in prior work that are more conducive to convergence analysis [24; 25; 26; 27; 28]. They have had varying degrees of success both in terms of empirical performance [29], and in terms of producing convergent algorithms [30]. TD is also studied under over-parameterization [31; 32], with learned representations [33], proximal updates [34], and auxiliary tasks [35]. Also, the quality of TD fixed-point has been studied in previous work [14].
A large body of literature focuses on finding TD-like approaches that are in fact true gradient-descent approaches in that they follow the gradient of a stationary objective function [36; 13; 37; 38; 39; 40]. In these works, the optimization problem is formulated in such a way that the minimizer of the loss will be the fixed-point of the standard TD. Whereas TD has been extended to large-scale settings, these algorithms have not been as successful as TD in terms of applications.
A closely related algorithm to TD is that of Baird, namely the residual gradient algorithm [4; 41]. This algorithm has a double-sampling issue that needs to be addressed either by assuming a model of the environment or by learning the variance of the value function [41; 42]. However, even with deterministic MDPs, in which the double sampling issue is not present [43], the algorithm often finds a fixed-point that has a lower quality than that of the TD algorithm [44]. This is attributed to the fact that the MDP might still look stochastic in light of the use of function approximation [36].
TD could be thought of as an incremental approach to approximate dynamic programming and Fitted Value Iteration [7], for which various convergence results based on the operator view exist [12; 45; 46]. Also, these algorithms are well-studied in terms of their error-propagation behavior [47; 48; 49].
Many asymptotic or finite-sample results on Q-learning (the control version of TD) with function approximation make additional assumptions on the problem structure, with a focus on exploration problems in MDPs [50; 51; 52; 53; 54]. As mentioned before, our focus was on the prediction setting where exploration is not relevant.
## 8 Conclusion
In this paper, we argued that the optimization perspective of TD is more powerful than the well-explored operator perspective of TD. To demonstrate this, we generalized previous convergence results of TD beyond the linear setting and squared loss functions. We believe that further exploring this optimization perspective can be a promising direction to design convergent RL algorithms.
Our general result on the convergent nature of TD is consistent with the empirical success and the attention that this algorithm has deservedly received. The key factor that governs the convergence of TD is to ensure that the optimization force of the algorithm is well-equipped to dominate the more harmful target force. This analogy is one that can be employed to explain the convergent nature of TD even in the presence of the three pillars of the deadly triad. | 私たちは、celebrated temporal-difference (TD)学習アルゴリズムの収束挙動を研究しています。最適化の視点からアルゴリズムを検討することで、TDは逐次最適化アルゴリズムとみなせることを主張します。各イテレーションで最小化すべき関数が変化する特性を持つ。TDの収束と非収束を示す古典的な反例を通して、TDが収束するか非収束するかを決定する2つの力を見出します。この発見を線形TD設定で二次損失と定式化し、TDの収束はこれらの2つの力が相互作用することで決定されることを証明します。この最適化の視点から、線形近似と平方損失を超えた幅広い設定におけるTDの収束を証明します。私たちの成果は、TDが強化学習に成功する理論的な説明を提供します。 |
2307.00105 | Sensitivity Analysis and Uncertainty Quantification on Point Defect
Kinetics Equations with Perturbation Analysis | The concentration of radiation-induced point defects in general materials
under irradiation is commonly described by the point defect kinetics equations
based on rate theory. However, the parametric uncertainty in describing the
rate constants of competing physical processes such as recombination and loss
to sinks can lead to a large uncertainty in predicting the time-evolving point
defect concentrations. Here, based on the perturbation theory, we derived up to
the third order correction to the solution of point defect kinetics equations.
This new set of equations enable a full description of continuously changing
rate constants, and can accurately predict the solution up to $50\%$ deviation
in these rate constants. These analyses can also be applied to reveal the
sensitivity of solution to input parameters and aggregated uncertainty from
multiple rate constants. | Miaomiao Jin, Jilang Miao | 2023-06-30T19:45:56 | http://arxiv.org/abs/2307.00105v1 | # Sensitivity Analysis and Uncertainty Quantification on Point Defect Kinetics Equations with Perturbation Analysis
###### Abstract
The concentration of radiation-induced point defects in general materials under irradiation is commonly described by the point defect kinetics equations based on rate theory. However, the parametric uncertainty in describing the rate constants of competing physical processes such as recombination and loss to sinks can lead to a large uncertainty in predicting the time-evolving point defect concentrations. Here, based on the perturbation theory, we derived up to the third order correction to the solution of point defect kinetics equations. This new set of equations enable a full description of continuously changing rate constants, and can accurately predict the solution up to \(50\%\) deviation in these rate constants. These analyses can also be applied to reveal the sensitivity of solution to input parameters and aggregated uncertainty from multiple rate constants.
Point defect kinetics, sensitivity analysis, uncertainty quantification, perturbation
## 1 Introduction
Radiation-induced defects are key to the degradation of material properties such as segregation, swelling, and embrittlement [1]. Compared to the thermal equilibrium condition, a much higher concentration of crystalline defects can be created by high-energy radiation particles colliding with lattice atoms. Since defects can significantly accelerate the rates of diffusion and reaction, a description of defect concentrations in materials under irradiation constitutes the basis for predictive modeling of radiation effects. In particular, point defects (vacancies and interstitials) are commonly described by the point defect kinetics equations via chemical rate theory [1]. Under mean-field rate theory, where the spatial dependence is neglected, the change in defect concentration can be described in terms of several competing processes, including direct defect production from irradiation, vacancy-interstitial recombination, and defect loss to sinks such as dislocations and grain boundaries. Mathematically,
\[\frac{\mathrm{d}}{\mathrm{d}t}\left(\begin{array}{c}C_{\mathrm{v}}\\ C_{\mathrm{i}}\end{array}\right)=K_{0}-K_{\mathrm{iv}}C_{\mathrm{i}}C_{\mathrm{ v}}-C_{\mathrm{s}}\left(\begin{array}{c}K_{\mathrm{vs}}C_{\mathrm{v}}\\ K_{\mathrm{is}}C_{\mathrm{i}}\end{array}\right) \tag{1}\]
where \(C_{\mathrm{v}}\) and \(C_{\mathrm{i}}\) are vacancy and interstitial concentration, respectively, \(K_{0}\) is defect production rate, \(K_{\mathrm{iv}}\) is vacancy-interstitial recombination rate constant, \(K_{\mathrm{vs}}\) and \(K_{\mathrm{is}}\) are vacancy-sink and interstitial-sink reaction rate constant, respectively. The values of these rate constants are hence
of significance to solving for the concentrations. Physically, such rates can be derived via diffusion- or reaction-limited analysis, which yields a formulation depending on the defect interactions and mobilities [1]. As a typical methodology, lower-length-scale computational methods (e.g., density functional theory (DFT) and molecular dynamics) are used to determine fundamental quantities such as interaction strengths and diffusion energy barriers. Such treatment inevitably introduces uncertainty in the rate parameters for several reasons: i) lower-length-scale methods have their own accuracy limits due to calculation settings and potential choices; ii) only a limited set of defect migration pathways is considered due to complexity; and iii) the effect of pre-existing damage on the defect energetics is rarely captured. Short et al. demonstrated that a slight change in the vacancy migration energy barrier (0.03 eV) can cause drastic changes in the point defect concentration profile in self-ion irradiated alpha-Fe [2]. It is thus of significance to perform parametric sensitivity analysis and uncertainty quantification given the unavoidable uncertainties in the input parameters. Although one may re-solve the point defect kinetics with small increments or decrements of a parameter, this is generally time-consuming and cannot exhibit a full picture of the uncertainty variation in the parameter region around the values in use. To tackle this problem, we use perturbation theory [3] to derive a new set of equations, which can be solved concurrently with Eq. 1, so that the uncertainty of the defect concentration is nicely captured by the correction terms. These analyses can be combined to yield a multi-parameter uncertainty quantification considering the joint distribution of the rate parameters.
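As a concrete illustration of Eq. (1), the following sketch integrates the point defect kinetics equations with SciPy; the rate constants, sink density, and time span are arbitrary illustrative values, not parameters taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def point_defect_rhs(t, C, K0, Kiv, Kvs, Kis, Cs):
    """Right-hand side of Eq. (1): production, vacancy-interstitial recombination, and loss to sinks."""
    Cv, Ci = C
    recomb = Kiv * Ci * Cv
    return [K0 - recomb - Kvs * Cs * Cv,
            K0 - recomb - Kis * Cs * Ci]

# illustrative rate constants and sink density only (not values from the paper); units arbitrary
K0, Kiv, Kvs, Kis, Cs = 1e-6, 1e2, 1e0, 1e1, 1e-4
sol = solve_ivp(point_defect_rhs, (0.0, 1e4), [0.0, 0.0], args=(K0, Kiv, Kvs, Kis, Cs),
                method="LSODA", rtol=1e-8, atol=1e-12)
print(f"C_v(t_end) = {sol.y[0, -1]:.3e}, C_i(t_end) = {sol.y[1, -1]:.3e}")
```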
## 2 Perturbation Analysis
Consider a perturbation to an input parameter \(K\) (e.g. \(K_{0}\), \(K_{\mathrm{iv}}\)) in Eq 1 in the form of
\[K\to K(1+\epsilon) \tag{2}\]
then the solution can be expressed in the form of perturbative expansion,
\[\left(\begin{array}{c}C_{\mathrm{v}}\\ C_{\mathrm{i}}\end{array}\right)=\left(\begin{array}{c}C_{\mathrm{v}}^{(0)}\\ C_{\mathrm{i}}^{(0)}\end{array}\right)+\epsilon\left(\begin{array}{c}C_{ \mathrm{v}}^{(1)}\\ C_{\mathrm{i}}^{(1)}\end{array}\right)+\epsilon^{2}\left(\begin{array}{c}C_{ \mathrm{v}}^{(2)}\\ C_{\mathrm{i}}^{(2)}\end{array}\right)+\epsilon^{3}\left(\begin{array}{c}C_{ \mathrm{v}}^{(3)}\\ C_{\mathrm{i}}^{(3)}\end{array}\right)+\cdot\cdot\cdot \tag{3}\]
where \(C_{\mathrm{v}}^{(0)}\) and \(C_{\mathrm{i}}^{(0)}\) are the solution of the unperturbed Eq 1. The correction terms (\(C_{\mathrm{v}}^{(1)}\), \(C_{\mathrm{i}}^{(1)}\), \(C_{\mathrm{v}}^{(2)}\), \(C_{\mathrm{i}}^{(2)}\), etc.) can be found by substituting Eq 2 and Eq 3 into Eq 1 and matching the coefficients of the perturbation \(\epsilon\). The equations needed to find results up to the third order are listed below.
### 2.1 Differential equations of higher order solution corrections
#### 2.1.1 Perturbation on \(K_{0}\)
The equations to solve for the change due to \(K_{0}\) are given in Eq 4 to Eq 6.
\[\frac{\mathrm{d}}{\mathrm{d}t}\left(\begin{array}{c}C_{\mathrm{v}}^{(1)}\\ C_{\mathrm{i}}^{(1)}\end{array}\right)=K_{0}-K_{\mathrm{iv}}\left(C_{\mathrm{i} }^{(0)}C_{\mathrm{v}}^{(1)}+C_{\mathrm{v}}^{(0)}C_{\mathrm{i}}^{(1)}\right)- C_{\mathrm{s}}\left(\begin{array}{c}K_{\mathrm{vs}}C_{\mathrm{v}}^{(1)}\\ K_{\mathrm{is}}C_{\mathrm{i}}^{(1)}\end{array}\right) \tag{4}\]
\[\frac{\mathrm{d}}{\mathrm{d}t}\left(\begin{array}{c}C_{\mathrm{v}}^{(2)}\\ C_{\mathrm{i}}^{(2)}\end{array}\right)=-K_{\mathrm{iv}}\left(C_{\mathrm{i}}^{(0 )}C_{\mathrm{v}}^{(2)}+C_{\mathrm{v}}^{(0)}C_{\mathrm{i}}^{(2)}+C_{\mathrm{i} }^{(1)}C_{\mathrm{v}}^{(1)}\right)-C_{\mathrm{s}}\left(\begin{array}{c}K_{ \mathrm{vs}}C_{\mathrm{v}}^{(2)}\\ K_{\mathrm{is}}C_{\mathrm{i}}^{(2)}\end{array}\right) \tag{5}\]
\[\frac{\mathrm{d}}{\mathrm{d}t}\left(\begin{array}{c}C_{\mathrm{v}}^{(3)}\\ C_{\mathrm{i}}^{(3)}\end{array}\right)=-K_{\mathrm{iv}}\left(C_{\mathrm{i}}^{(0 )}C_{\mathrm{v}}^{(3)}+C_{\mathrm{v}}^{(0)}C_{\mathrm{i}}^{(3)}+C_{\mathrm{i} }^{(1)}C_{\mathrm{v}}^{(2)}+C_{\mathrm{v}}^{(1)}C_{\mathrm{i}}^{(2)}\right)- C_{\mathrm{s}}\left(\begin{array}{c}K_{\mathrm{vs}}C_{\mathrm{v}}^{(3)}\\ K_{\mathrm{is}}C_{\mathrm{i}}^{(3)}\end{array}\right) \tag{6}\]
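The correction equations are ordinary differential equations of the same form as Eq 1 and can be integrated alongside it. The following is a minimal numerical sketch (not part of the original analysis; all rate constants below are hypothetical placeholders, see Table 1 for the values used in Section 3) of integrating Eq 1 together with the \(K_{0}\)-perturbation corrections, Eqs 4 to 6:

```python
# Minimal sketch: integrate Eq. (1) and the K0-perturbation corrections (Eqs. 4-6).
# All numerical values are placeholders, not the parameters of Table 1.
import numpy as np
from scipy.integrate import solve_ivp

K0, Kiv, Kvs, Kis, Cs = 1e-3, 1e2, 1.0, 1.0, 1e-4   # hypothetical rate constants

def rhs(t, y):
    Cv0, Ci0, Cv1, Ci1, Cv2, Ci2, Cv3, Ci3 = y
    recomb0 = Kiv * Ci0 * Cv0
    dCv0 = K0 - recomb0 - Cs * Kvs * Cv0                               # Eq. (1)
    dCi0 = K0 - recomb0 - Cs * Kis * Ci0
    mix1 = Kiv * (Ci0 * Cv1 + Cv0 * Ci1)
    dCv1 = K0 - mix1 - Cs * Kvs * Cv1                                  # Eq. (4)
    dCi1 = K0 - mix1 - Cs * Kis * Ci1
    mix2 = Kiv * (Ci0 * Cv2 + Cv0 * Ci2 + Ci1 * Cv1)
    dCv2 = -mix2 - Cs * Kvs * Cv2                                      # Eq. (5)
    dCi2 = -mix2 - Cs * Kis * Ci2
    mix3 = Kiv * (Ci0 * Cv3 + Cv0 * Ci3 + Ci1 * Cv2 + Cv1 * Ci2)
    dCv3 = -mix3 - Cs * Kvs * Cv3                                      # Eq. (6)
    dCi3 = -mix3 - Cs * Kis * Ci3
    return [dCv0, dCi0, dCv1, dCi1, dCv2, dCi2, dCv3, dCi3]

sol = solve_ivp(rhs, (0.0, 1e4), np.zeros(8), method="LSODA", rtol=1e-8, atol=1e-15)
Cv0, Ci0, Cv1, Ci1, Cv2, Ci2, Cv3, Ci3 = sol.y      # base solution and corrections on sol.t
```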
#### 2.1.2 Perturbation on \(K_{\rm vs}\)
The equations to solve for the change due to \(K_{\rm vs}\) are given in Eq 7 to Eq 9.
\[\frac{{\rm d}}{{\rm d}t}\left(\begin{array}{c}C_{\rm v}^{(1)}\\ C_{\rm i}^{(1)}\end{array}\right)=-K_{\rm iv}\left(C_{\rm i}^{(0)}C_{\rm v}^{(1) }+C_{\rm v}^{(0)}C_{\rm i}^{(1)}\right)-C_{\rm s}\left(\begin{array}{c}K_{ \rm vs}C_{\rm v}^{(1)}\\ K_{\rm is}C_{\rm i}^{(1)}\end{array}\right)-C_{\rm s}\left(\begin{array}{c}K _{\rm vs}C_{\rm v}^{(0)}\\ 0\end{array}\right) \tag{7}\]
\[\frac{{\rm d}}{{\rm d}t}\left(\begin{array}{c}C_{\rm v}^{(2)}\\ C_{\rm i}^{(2)}\end{array}\right) =-K_{\rm iv}\left(C_{\rm i}^{(0)}C_{\rm v}^{(2)}+C_{\rm v}^{(0)}C_{ \rm i}^{(2)}+C_{\rm i}^{(1)}C_{\rm v}^{(1)}\right) \tag{8}\] \[-C_{\rm s}\left(\begin{array}{c}K_{\rm vs}C_{\rm v}^{(2)}\\ K_{\rm is}C_{\rm i}^{(2)}\end{array}\right)-C_{\rm s}\left(\begin{array}{c}K _{\rm vs}C_{\rm v}^{(1)}\\ 0\end{array}\right)\] \[\frac{{\rm d}}{{\rm d}t}\left(\begin{array}{c}C_{\rm v}^{(3)}\\ C_{\rm i}^{(3)}\end{array}\right) =-K_{\rm iv}\left(C_{\rm i}^{(0)}C_{\rm v}^{(3)}+C_{\rm v}^{(0)}C_ {\rm i}^{(3)}+C_{\rm i}^{(1)}C_{\rm v}^{(2)}+C_{\rm v}^{(1)}C_{\rm i}^{(2)}\right)\] (9) \[-C_{\rm s}\left(\begin{array}{c}K_{\rm vs}C_{\rm v}^{(3)}\\ K_{\rm is}C_{\rm i}^{(3)}\end{array}\right)-C_{\rm s}\left(\begin{array}{c}K _{\rm vs}C_{\rm v}^{(2)}\\ 0\end{array}\right)\]
#### 2.1.3 Perturbation on \(K_{\rm is}\)
The equations to solve for the change due to \(K_{is}\) are given in Eq 10 to Eq 12.
\[\frac{{\rm d}}{{\rm d}t}\left(\begin{array}{c}C_{\rm v}^{(1)}\\ C_{\rm i}^{(1)}\end{array}\right) =-K_{\rm iv}\left(C_{\rm i}^{(0)}C_{\rm v}^{(1)}+C_{\rm v}^{(0)}C _{\rm i}^{(1)}\right)-C_{\rm s}\left(\begin{array}{c}K_{\rm vs}C_{\rm v}^{ (1)}\\ K_{\rm is}C_{\rm i}^{(1)}\end{array}\right)-C_{\rm s}\left(\begin{array}{c}0 \\ K_{\rm is}C_{\rm i}^{(0)}\end{array}\right) \tag{10}\]
\[\frac{{\rm d}}{{\rm d}t}\left(\begin{array}{c}C_{\rm v}^{(2)}\\ C_{\rm i}^{(2)}\end{array}\right) =-K_{\rm iv}\left(C_{\rm i}^{(0)}C_{\rm v}^{(2)}+C_{\rm v}^{(0)}C _{\rm i}^{(2)}+C_{\rm i}^{(1)}C_{\rm v}^{(1)}\right) \tag{11}\] \[-C_{\rm s}\left(\begin{array}{c}K_{\rm vs}C_{\rm v}^{(2)}\\ K_{\rm is}C_{\rm i}^{(2)}\end{array}\right)-C_{\rm s}\left(\begin{array}{c}0 \\ K_{\rm is}C_{\rm i}^{(1)}\end{array}\right)\]
\[\frac{{\rm d}}{{\rm d}t}\left(\begin{array}{c}C_{\rm v}^{(3)}\\ C_{\rm i}^{(3)}\end{array}\right) =-K_{\rm iv}\left(C_{\rm i}^{(0)}C_{\rm v}^{(3)}+C_{\rm v}^{(0)}C_ {\rm i}^{(3)}+C_{\rm i}^{(1)}C_{\rm v}^{(2)}+C_{\rm v}^{(1)}C_{\rm i}^{(2)} \right) \tag{12}\] \[-C_{\rm s}\left(\begin{array}{c}K_{\rm vs}C_{\rm v}^{(3)}\\ K_{\rm is}C_{\rm i}^{(3)}\end{array}\right)-C_{\rm s}\left(\begin{array}{c}0 \\ K_{\rm is}C_{\rm i}^{(2)}\end{array}\right)\]
The corresponding equations for \(K_{\rm is}\) are identical to those of \(K_{\rm vs}\) (Eqs 7, 8 and 9) except the last term in each equation, where the \(K_{\rm vs}\) changes are replaced with \(K_{\rm is}\) changes accordingly.
#### 2.1.4 Perturbation on \(C_{\rm s}\)
The equations to solve for the change due to \(C_{\rm s}\) are given in Eq 13 to Eq 15.
\[\frac{\mathrm{d}}{\mathrm{d}t}\left(\begin{array}{c}C_{\mathrm{v}}^{(1)}\\ C_{\mathrm{i}}^{(1)}\end{array}\right)=-K_{\mathrm{iv}}\left(C_{\mathrm{i}}^{(0 )}C_{\mathrm{v}}^{(1)}+C_{\mathrm{v}}^{(0)}C_{\mathrm{i}}^{(1)}\right)-C_{ \mathrm{s}}\left(\begin{array}{c}K_{\mathrm{vs}}C_{\mathrm{v}}^{(1)}\\ K_{\mathrm{is}}C_{\mathrm{i}}^{(1)}\end{array}\right)-C_{\mathrm{s}}\left( \begin{array}{c}K_{\mathrm{vs}}C_{\mathrm{v}}^{(0)}\\ K_{\mathrm{is}}C_{\mathrm{i}}^{(0)}\end{array}\right) \tag{13}\]
\[\frac{\mathrm{d}}{\mathrm{d}t}\left(\begin{array}{c}C_{\mathrm{v}}^{(2)}\\ C_{\mathrm{i}}^{(2)}\end{array}\right)=-K_{\mathrm{iv}}\left(C_{\mathrm{i}}^{(0)}C_{\mathrm{v}}^{(2)}+C_{\mathrm{v}}^{(0)}C_{\mathrm{i}}^{(2)}+C_{\mathrm{i}}^{(1)}C_{\mathrm{v}}^{(1)}\right)-C_{\mathrm{s}}\left(\begin{array}{c}K_{\mathrm{vs}}C_{\mathrm{v}}^{(2)}\\ K_{\mathrm{is}}C_{\mathrm{i}}^{(2)}\end{array}\right)-C_{\mathrm{s}}\left(\begin{array}{c}K_{\mathrm{vs}}C_{\mathrm{v}}^{(1)}\\ K_{\mathrm{is}}C_{\mathrm{i}}^{(1)}\end{array}\right) \tag{14}\]

\[\frac{\mathrm{d}}{\mathrm{d}t}\left(\begin{array}{c}C_{\mathrm{v}}^{(3)}\\ C_{\mathrm{i}}^{(3)}\end{array}\right)=-K_{\mathrm{iv}}\left(C_{\mathrm{i}}^{(0)}C_{\mathrm{v}}^{(3)}+C_{\mathrm{v}}^{(0)}C_{\mathrm{i}}^{(3)}+C_{\mathrm{i}}^{(1)}C_{\mathrm{v}}^{(2)}+C_{\mathrm{v}}^{(1)}C_{\mathrm{i}}^{(2)}\right)-C_{\mathrm{s}}\left(\begin{array}{c}K_{\mathrm{vs}}C_{\mathrm{v}}^{(3)}\\ K_{\mathrm{is}}C_{\mathrm{i}}^{(3)}\end{array}\right)-C_{\mathrm{s}}\left(\begin{array}{c}K_{\mathrm{vs}}C_{\mathrm{v}}^{(2)}\\ K_{\mathrm{is}}C_{\mathrm{i}}^{(2)}\end{array}\right) \tag{15}\]
The corresponding equations for \(C_{\mathrm{s}}\) are identical to those of \(K_{\mathrm{vs}}\) and \(K_{\mathrm{is}}\) except that the last term in each equation is the sum of those of \(K_{\mathrm{vs}}\) and \(K_{\mathrm{is}}\).
#### 2.1.5 Perturbation on \(K_{\mathrm{iv}}\)
The equations to solve for the change due to \(K_{iv}\) are given in Eq 16 to Eq 18.
\[\frac{\mathrm{d}}{\mathrm{d}t}\left(\begin{array}{c}C_{\mathrm{v}}^{(1)}\\ C_{\mathrm{i}}^{(1)}\end{array}\right)=-K_{\mathrm{iv}}\left(C_{\mathrm{i}}^{(0 )}C_{\mathrm{v}}^{(1)}+C_{\mathrm{v}}^{(0)}C_{\mathrm{i}}^{(1)}+C_{\mathrm{i} }^{(0)}C_{\mathrm{v}}^{(0)}\right)-C_{\mathrm{s}}\left(\begin{array}{c}K_{ \mathrm{vs}}C_{\mathrm{v}}^{(1)}\\ K_{\mathrm{is}}C_{\mathrm{i}}^{(1)}\end{array}\right) \tag{16}\]
\[\frac{\mathrm{d}}{\mathrm{d}t}\left(\begin{array}{c}C_{\mathrm{v}}^{(2)}\\ C_{\mathrm{i}}^{(2)}\end{array}\right)=-K_{\mathrm{iv}}\left(C_{\mathrm{i}}^{(0)}C_{\mathrm{v}}^{(2)}+C_{\mathrm{v}}^{(0)}C_{\mathrm{i}}^{(2)}+C_{\mathrm{i}}^{(1)}C_{\mathrm{v}}^{(1)}+C_{\mathrm{i}}^{(0)}C_{\mathrm{v}}^{(1)}+C_{\mathrm{v}}^{(0)}C_{\mathrm{i}}^{(1)}\right)-C_{\mathrm{s}}\left(\begin{array}{c}K_{\mathrm{vs}}C_{\mathrm{v}}^{(2)}\\ K_{\mathrm{is}}C_{\mathrm{i}}^{(2)}\end{array}\right) \tag{17}\]

\[\frac{\mathrm{d}}{\mathrm{d}t}\left(\begin{array}{c}C_{\mathrm{v}}^{(3)}\\ C_{\mathrm{i}}^{(3)}\end{array}\right)=-K_{\mathrm{iv}}\left(C_{\mathrm{i}}^{(0)}C_{\mathrm{v}}^{(3)}+C_{\mathrm{v}}^{(0)}C_{\mathrm{i}}^{(3)}+C_{\mathrm{i}}^{(1)}C_{\mathrm{v}}^{(2)}+C_{\mathrm{v}}^{(1)}C_{\mathrm{i}}^{(2)}+C_{\mathrm{i}}^{(0)}C_{\mathrm{v}}^{(2)}+C_{\mathrm{v}}^{(0)}C_{\mathrm{i}}^{(2)}+C_{\mathrm{i}}^{(1)}C_{\mathrm{v}}^{(1)}\right)-C_{\mathrm{s}}\left(\begin{array}{c}K_{\mathrm{vs}}C_{\mathrm{v}}^{(3)}\\ K_{\mathrm{is}}C_{\mathrm{i}}^{(3)}\end{array}\right) \tag{18}\]
### Sensitivity analysis
The results above in Eq 4 to Eq 18 can be used to predict how the solution changes on fine grids of perturbations. For each input parameter, we only need to solve a few extra equations, and the deviation from the unperturbed solution can then be calculated for as many values of \(\epsilon\) as desired. Section 3 below shows results on the sensitivity of the solution to changes of \(K_{\mathrm{vs}}\) and \(K_{\mathrm{iv}}\). It verifies that the \(3^{rd}\) order perturbation captures the response to \(50\%\) input changes very well. It can be shown that the results in section 2.1 can be extended to any finite order. Convergence criteria can then be implemented to adjust the number of correction terms automatically.
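As a concrete illustration of this workflow (a sketch under assumed placeholder values, not results from Section 3), once the correction terms at a given time are known, the perturbed concentration is evaluated on a fine grid of \(\epsilon\) by summing the truncated series of Eq 3, without re-solving Eq 1:

```python
# Sketch: evaluate the truncated expansion of Eq. (3) on a fine grid of perturbations.
# The values below are placeholders for C_v^(0..3) at a fixed time t.
import numpy as np

Cv0, Cv1, Cv2, Cv3 = 2.0e-7, -1.5e-8, 2.1e-9, -3.0e-10
eps = np.linspace(-0.5, 0.5, 201)                    # -50% ... +50% change of a rate constant
Cv_eps = Cv0 + eps*Cv1 + eps**2*Cv2 + eps**3*Cv3     # third-order truncation of Eq. (3)
rel_change = 100.0 * (Cv_eps - Cv0) / Cv0            # percentage change, cf. Figs. 2-3
```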
### Uncertainty quantification
In addition to the sensitivity analysis with respect to each individual input parameter, the perturbation expansion can be applied to obtain the aggregated uncertainty due to all the parameters. Denoting the input parameters as \(\{K_{\alpha}\}_{\alpha=1,2,3\cdots}\), the deviations from the unperturbed solution due to each parameter, as in Eq 3, can be summed as
\[\begin{split}&\left(\begin{array}{c}C_{\mathrm{v}}\\ C_{\mathrm{i}}\end{array}\right)-\left(\begin{array}{c}C_{\mathrm{v}}^{(0)} \\ C_{\mathrm{i}}^{(0)}\end{array}\right)=\\ &\sum_{K_{\alpha}}\left[\epsilon_{K_{\alpha}}\left(\begin{array}{c}C_{ \mathrm{v,}K_{\alpha}}^{(1)}\\ C_{\mathrm{i,}K_{\alpha}}^{(1)}\end{array}\right)+\epsilon_{K_{\alpha}}^{2} \left(\begin{array}{c}C_{\mathrm{v,}K_{\alpha}}^{(2)}\\ C_{\mathrm{i,}K_{\alpha}}^{(2)}\end{array}\right)+\epsilon_{K_{\alpha}}^{3} \left(\begin{array}{c}C_{\mathrm{v,}K_{\alpha}}^{(3)}\\ C_{\mathrm{i,}K_{\alpha}}^{(3)}\end{array}\right)+\cdots\right]\end{split} \tag{19}\]
The perturbations \(\{\epsilon_{K_{\alpha}}\}_{\alpha=1,2,3\cdots}\) can be viewed as random variables with a given joint distribution. Applying the variance operator to Eq 19, the aggregated uncertainty of \(C_{\mathrm{v}}\) and \(C_{\mathrm{i}}\) can be expressed as a function of the uncertainties of the individual input parameters and of higher-order correlations, if any.
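A simple way to evaluate this in practice (a Monte-Carlo sketch under an assumed joint distribution; the correction values and the 10% spread below are placeholders, not quantities from this work) is to sample the perturbations and propagate them through the truncated expansion of Eq 19:

```python
# Sketch: Monte-Carlo propagation of rate-constant uncertainty through Eq. (19).
# The correction terms per parameter are placeholder contributions to Delta C_v at a
# fixed time; the 10% relative spread is an assumed 1-sigma uncertainty.
import numpy as np

corrections = {                       # parameter -> (C^(1), C^(2), C^(3)) for C_v
    "Kiv": (-1.5e-8, 2.1e-9, -3.0e-10),
    "Kvs": (-4.0e-9, 5.0e-10, -6.0e-11),
}
rng = np.random.default_rng(0)
n_samples = 100_000
eps = {name: rng.normal(0.0, 0.10, n_samples) for name in corrections}

dCv = np.zeros(n_samples)
for name, (c1, c2, c3) in corrections.items():
    e = eps[name]
    dCv += e*c1 + e**2*c2 + e**3*c3   # truncated series, summed over parameters as in Eq. (19)

print("aggregated std of Delta C_v:", dCv.std())
```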
## 3 Application
We apply the above analyses to pure alpha-Fe under electron irradiation. It is reasonable to assume that only point defects are directly produced during irradiation due to the limited energy transfer between electrons and lattice atoms. In addition, we assume that no defect clustering would occur, since it necessitates a more sophisticated treatment beyond point defect kinetics. To solve Eq. 1, we use the irradiation condition and materials parameters shown in TABLE 1, where \(D_{\mathrm{v0}}\) and \(D_{\mathrm{i0}}\) are the diffusion coefficient prefactors, and \(E_{mv}\) and \(E_{mi}\) are the migration barriers, respectively, and \(R\) is the defect interaction distance. For simplicity, only dislocations are considered as the sinks to point defects (i.e., \(C_{\mathrm{s}}\equiv\rho_{d}\), where \(\rho_{d}\) is the dislocation density). The defect sink rate to dislocations is written as [1],
\[K_{\mathrm{(v,i)}d}=\frac{2\pi D_{\mathrm{(v,i)}}}{\ln\left(\frac{d/2}{R_{ \mathrm{(v,i)}d}}\right)},\ \ \text{with dislocation distance}\ d=\frac{2}{\sqrt{\pi\rho_{d}}} \tag{20}\]
The recombination rate is written as [1],
\[K_{\mathrm{iv}}=4\pi R_{\mathrm{v,i}}\left(D_{\mathrm{v}}+D_{\mathrm{i}} \right),\ \ \text{where}\ D_{\mathrm{(v,i)}}=D_{\mathrm{(v,i)0}}\mathrm{exp}\left(\frac{-E_ {m\mathrm{(v,i)}}}{k_{B}T}\right) \tag{21}\]
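For reference, the rate constants entering Eq. 1 can be evaluated directly from Eqs. 20 and 21 with the Table 1 values; the short sketch below does this at 300 K (it only restates the formulas above and is not part of the original workflow):

```python
# Sketch: evaluate the diffusion coefficients and rate constants (Eqs. 20-21) at 300 K
# using the Table 1 parameters.
import numpy as np

kB, T = 8.617e-5, 300.0                    # Boltzmann constant (eV/K), temperature (K)
Dv0, Di0 = 8.016e-7, 2.09e-7               # diffusion prefactors (m^2/s)
Emv, Emi = 0.86, 0.17                      # migration barriers (eV)
Riv, Rid, Rvd = 0.65e-9, 3.6e-9, 1.2e-9    # interaction distances (m)
rho_d = 1e15                               # dislocation density (1/m^2)

Dv = Dv0 * np.exp(-Emv / (kB * T))
Di = Di0 * np.exp(-Emi / (kB * T))
Kiv = 4 * np.pi * Riv * (Dv + Di)                   # recombination rate, Eq. (21)
d = 2 / np.sqrt(np.pi * rho_d)                      # dislocation spacing
Kvd = 2 * np.pi * Dv / np.log((d / 2) / Rvd)        # vacancy-dislocation rate, Eq. (20)
Kid = 2 * np.pi * Di / np.log((d / 2) / Rid)        # interstitial-dislocation rate, Eq. (20)
```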
The solution of Eq. 1 is shown in Figure 1, which falls into the low-temperature, high-sink-density regime [1], and steady state has not yet been reached by 10,000 s. To validate our analyses, we consider the variations in \(K_{\mathrm{iv}}\) and \(K_{\mathrm{vs}}\), which depend on fundamental defect properties. Uncertainty can be introduced by various factors, such as impurities, local stress, and computation accuracy [5, 6]. Here, the uncertainty range is considered to be up to 50%. This choice is based on the observation in Short et al.'s work [2], where it was shown that a slight change
(0.01 eV) in vacancy migration energy can cause a significant change in the vacancy concentration profile. Here, we translate this variation into the rate constants, by estimating the change in the vacancy diffusion coefficient given the Arrhenius form. Given the formulations provided in Eqs. 20 and 21, it leads to \(\exp(0.01\mathrm{eV}/0.025851\mathrm{eV})=1.47\) or 47% change in \(K_{\mathbf{vs}}\) and \(K_{\mathbf{iv}}\) at 300 K. As another example, typical DFT convergence inaccuracy around 5 meV would translate to \(\exp(0.005\mathrm{eV}/0.0253\mathrm{eV})=1.21\) or 21% change in \(K_{\mathbf{vs}}\) and \(K_{\mathbf{iv}}\) at 300 K. Hence, in the following demonstrations, we show the results for variations of \(K_{\mathbf{vs}}\) and \(K_{\mathbf{iv}}\) at \(\pm 20\%\) and \(\pm 50\%\) changes. To simplify notation, \(\alpha\equiv\Delta K_{\mathbf{iv}}/K_{\mathbf{iv}}\) and \(\beta\equiv\Delta K_{\mathbf{vs}}/K_{\mathbf{vs}}\).
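The arithmetic behind these percentages is a one-line Arrhenius estimate (a quick check, assuming the scaling of \(D_{\mathrm{v}}\) given in Eq. 21):

```python
# Quick check: translate a migration-energy shift into a relative rate-constant change at 300 K.
import numpy as np

kT = 8.617e-5 * 300.0               # ~0.025851 eV
for dE in (0.01, 0.005):            # eV: the Short et al. shift, and a typical DFT tolerance
    print(dE, np.exp(dE / kT))      # -> ~1.47 and ~1.21, i.e. ~47% and ~21% changes
```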
First, we consider the independent uncertainty in \(K_{\mathbf{iv}}\) and \(K_{\mathbf{vs}}\). Figures 2 and 3 plot the real and percentage changes in \(C_{\mathrm{v}}\) and \(C_{\mathrm{i}}\) obtained by directly solving Eq. 1 with changed \(K_{\mathbf{iv}}\) and by applying the perturbation analysis up to third-order correction. Since \(K_{\mathbf{iv}}\) indicates the loss due to recombination, a more negative \(\alpha\) leads to a higher defect concentration (\(\Delta C>0\)), and vice versa (Figures 2a and 3a). Note that positive and negative \(\alpha\) do not exhibit symmetry in \(\Delta C_{\mathrm{i}}\) vs. time due to the nonlinear nature of the two coupled equations. It can also be seen that the absolute discrepancies in \(C_{\mathrm{i}}\) and \(C_{\mathrm{v}}\) exhibit opposite trends, where the former decreases (Figure 2a) while the latter increases (Figure 3a) with time. However, the relative discrepancy increases for both interstitial
| Parameter | Value | Parameter | Value |
|---|---|---|---|
| Displacement rate | \(4.6\times 10^{-3}\) dpa/s | Dislocation density (\(\rho_{d}\)) | \(10^{15}/\mathrm{m}^{2}\) |
| Lattice parameter | 0.286 nm | \(R_{\mathrm{id}}\) (dislocation-interstitial) | 3.6 nm [2] |
| \(D_{\mathrm{v0}}\) | \(8.016\times 10^{-7}\,\mathrm{m}^{2}/\mathrm{s}\) [2] | \(R_{\mathrm{vd}}\) (dislocation-vacancy) | 1.2 nm [2] |
| \(D_{\mathrm{i0}}\) | \(2.09\times 10^{-7}\,\mathrm{m}^{2}/\mathrm{s}\) [2] | \(R_{\mathrm{iv}}\) (interstitial-vacancy) | 0.65 nm [4] |
| \(E_{mv}\) | 0.86 eV [2] | Temperature (\(T\)) | 300 K |
| \(E_{mi}\) | 0.17 eV [2] | | |

Table 1: Materials parameters and radiation condition used in solving Eq. 1.
and vacancy concentrations (Figures 2b and 3b). In all cases considered, the third-order correction agrees well with the direct solution, up to 50% variation in \(K_{\text{iv}}\). For even larger variations, higher-order corrections are needed, as the third order exhibits a noticeable difference from the direct solution. To reveal how the order of correction affects the prediction accuracy, Figure 4 displays \(\Delta C_{\text{v}}\) at the first, second and third order corrections, under \(\alpha=-20\%\). A large deviation exists for the first-order correction; however, the second order already captures the solution very well, and the third order agrees perfectly with the solution.
The effect of varying \(K_{\text{vs}}\) on \(C_{\text{i}}\) and \(C_{\text{v}}\) is shown in Figures 5 and 6, which show a smaller impact than that of \(K_{\text{iv}}\). Both the absolute and relative discrepancies in defect concentrations due to uncertainty in \(K_{\text{vs}}\) increase with time. Note that \(\Delta C_{\text{i}}\) and \(\Delta C_{\text{v}}\) exhibit opposite signs for a given \(\beta\). For example, with 50% increased \(K_{\text{vs}}\) (\(\beta=50\%\)), \(\Delta C_{\text{v}}\) becomes increasingly negative with time. Due to the reduction in \(C_{\text{v}}\), less recombination causes an increase in \(C_{\text{i}}\); hence, \(\Delta C_{\text{i}}\)
Figure 3: (a) \(\Delta C_{\text{v}}\) vs. time under \(\alpha=\pm 20\%\) and \(\alpha=\pm 50\%\). Solid lines represent the solution from solving Eq. 1. Dotted lines represent the results up to third-order correction based on Eqs 16,17,18. (b) shows the corresponding percentage changes. The color coding is the same as Fig 2.
Figure 2: (a) \(\Delta C_{\text{i}}\) vs. time under \(\alpha=\pm 20\%\) and \(\alpha=\pm 50\%\). Solid lines represent the solution from solving Eq. 1. Dotted lines represent the results up to third-order correction based on Eqs 16,17,18. (b) shows the corresponding percentage changes.
becomes increasingly positive with time. The relative changes exhibit the same trend as the real changes. In these cases, third-order corrections fully predict the direct solution. Figure 7, with \(\beta=-20\%\) (\(\beta=\pm 50\%\) were also evaluated, exhibiting the same behavior), suggests that even the first-order correction is capable of capturing all discrepancies.
Finally, we evaluate the simultaneous variation in both \(K_{\mathbf{iv}}\) and \(K_{\mathbf{vs}}\) given the same dependence on the vacancy diffusion coefficient. Figure 8 shows the relative changes in \(C_{\mathrm{i}}\) and \(C_{\mathrm{v}}\) under \(\alpha=-20\%\) and \(\beta=-20\%\). Third-order prediction overlaps the direct solution at all times, indicating the strong efficacy of this perturbation methodology for uncertainty in multiple parameters.
Figure 4: \(\Delta C_{\mathrm{v}}(\%)\) vs. time under \(\alpha=-20\%\). The solid line represents the solution from solving Eq 1. The dashed line represents the results up to first-order correction based on Eq 16. The dashdot line represents the results up to second-order correction based on Eq 17. The dotted line represents the results up to third-order correction based on Eq 18.
Figure 5: (a) \(\Delta C_{\mathrm{i}}\) vs. time under \(\beta=\pm 20\%\) and \(\beta=\pm 50\%\). Solid lines represent the solution from solving Eq. 1. Dotted lines represent the results up to third-order correction based on Eqs 7,8,9. (b) shows the corresponding percentage changes.
Note that the other two rates (\(K_{0}\) and \(K_{\text{is}}\)) can be treated similarly for relevant scenarios involving uncertainty in the dose rate and in the interstitial-dislocation interactions.
## 4 Conclusion
We derived the perturbation expansion to analyze the response of the point defect kinetics equation to the uncertainty and sensitivity in the input parameters. The results were numerically verified on parameters \(K_{\text{vs}}\) and \(K_{\text{iv}}\) up to 50 % variations, considering the case of electron irradiated pure \(\alpha\)-Fe. The method has the advantage that by solving a few extra equations, the sensitivity analysis can be performed on continuously changing parameters, instead of solving the original equation
Figure 6: (a) \(\Delta C_{\text{v}}\) vs. time under \(\beta=\pm 20\%\) and \(\beta=\pm 50\%\). Solid lines represent the solution from solving Eq. 1. Dotted lines represent the results up to third-order correction based on Eqs 7,8,9. (b) shows the corresponding percentage changes. The color coding is the same as Fig 5.
Figure 7: \(\Delta C_{\text{v}}(\%)\) vs. time under \(\beta=-20\%\). The solid line represents the solution from solving Eq 1. The dashed line represents the results up to first-order correction based on Eq 7. The dashdot line represents the results up to second-order correction based on Eq 8. The dotted line represents the results up to third-order correction based on Eq 9.
repeatedly on all the cases. We also discussed the capability of the analyses to generate aggregated uncertainty due to uncertainty in multiple rate constants. This method can be extended to add higher orders adaptively if substantial uncertainty exists in those rate constants.
**ACKNOWLEDGEMENTS**: We acknowledge the support from the Department of Nuclear Engineering at Penn State University.
| The evolution of radiation-induced point defect concentrations in a material under irradiation is commonly described by rate-theory-based point defect kinetics equations. However, uncertainty in the rate constants of the competing physical processes (e.g., recombination and loss) can produce large uncertainty when predicting point defect concentrations. In this paper, we therefore derive, based on perturbation theory, corrections up to third order to the solution of the point defect kinetics equations. These new equations provide a complete description over continuously varying rate constants and accurately predict the solution for rate-constant changes of up to 50%. These analyses can reveal the sensitivity of the solution to the input parameters as well as the aggregated uncertainty from multiple rate constants.
2309.04817 | Normal coactions extend to the C*-envelope | We show that a normal coaction of a discrete group on an operator algebra
extends to a normal coaction on the C*-envelope. This resolves an open problem
considered by Kakariadis, Katsoulis, Laca, and X. Li, and provides an
elementary proof of a prominent result of Sehnem. As an application, we resolve
a question of Li by identifying the C*-envelope of the operator algebra arising
from a groupoid-embeddable category and of cancellative right LCM monoids. This
latter class includes many examples of monoids that are not group-embeddable. | Kevin Aguyar Brix, Chris Bruce, Adam Dor-On | 2023-09-09T14:59:03 | http://arxiv.org/abs/2309.04817v2 | # Normal coactions extend to the C*-envelope
###### Abstract.
We show that a normal coaction of a discrete group on an operator algebra extends to a normal coaction on the C*-envelope. This resolves an open problem considered by Kakariadis, Katsoulis, Laca, and X. Li, and provides an elementary proof of a prominent result of Sehnem. As an application, we resolve a question of Li by identifying the C*-envelope of the operator algebra arising from a groupoid-embeddable category.
Key words and phrases:C*-envelope, coaction, small category, groupoid, C*-algebra, boundary quotient 2020 Mathematics Subject Classification: Primary: 46K50, 46L05, 47L55; Secondary: 46L55, 47L65 K.A. Brix was supported by a DFF-international postdoc grant 1025-00004B. C. Bruce has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 101022531. A. Dor-On was partially supported by EU grant HORIZON-MSCA-SE-2021 Project 101086394 and by Banff International Research Station for the 2020 Focused Research Group 248. This research was partially supported by a European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme 817597 and the London Mathematical Society through a Research in Pairs Grant 42204.
## 1. Introduction
**Theorem A** (Theorem 3.5).: _Let \(\mathcal{A}\) be an operator algebra with a contractive approximate identity. Then, any normal coaction of a discrete group \(G\) on \(\mathcal{A}\) has a (necessarily unique) extension to a normal coaction of \(G\) on \(C^{*}_{\mathrm{env}}(\mathcal{A})\)._
By the co-universal property of the C*-envelope, every _action_ of a discrete group on an operator algebra extends to an action on the C*-envelope. However, it is not at all clear whether a _coaction_ extends to the C*-envelope. In order to overcome this problem, we extend Katayama duality [10] to normal coactions of discrete groups on operator algebras (Theorem 3.3). This allows us to embed the C*-envelope in a reduced double crossed product in such a way that the restriction of the canonical coaction on the double crossed product gives us the desired coaction. This approach is concise, and leads to new avenues of research. For instance, Theorem A may lead to new applications in dilation theory, and has important consequences for Arveson's program of computing C*-envelopes.
First, in the special case of product systems over group-embeddable monoids, Theorem A leads to a short, conceptually simpler proof of Sehnem's main result, [11, Theorem 5.1] (thus also recovering results from [10, 10]). Second, for an operator algebra arising from a cancellative small category, we use Theorem A to provide a sufficient condition for its C*-envelope to coincide with the boundary quotient C*-algebra of the category. We verify this sufficient condition when the small category is groupoid-embeddable, which then yields a description of the C*-envelope, see Theorem B. We emphasize that examples of operator algebras arising from cancellative small categories do not naturally fit into Fowler's context. On the other hand, examples of operator algebras that do fit in that context now enjoy a significantly more conceptual proof for computing their C*-envelope, avoiding the technical language of product systems. As this application makes novel use of etale groupoid theory in the setting of non-selfadjoint operator algebras, we next explain the context and consequences of our application.
### Application to operator algebras generated by partial isometries
Left cancellative small categories are studied as natural generalizations of monoids [14], and have recently been used by Li to resolve open problems on finiteness properties for topological full groups [11]. C*-algebras of left cancellative small categories [15, 16, 17] unify the theories of several classes of C*-algebras. For instance, semigroup C*-algebras [18, 19, 20, 17, 14], which have connections to, e.g., algebraic number theory [1, 2, 16, 18] and (higher-rank) graph C*-algebras [21, 22, 23, 24], which led to the general theory of topological full groups [25, 26] and have important connections to symbolic dynamics [25, 27, 28, 29].
All C*-algebras mentioned above are generated by families of partial isometries such that the composition law (as operators on Hilbert space) can be interpreted as composing morphisms in a left cancellative small category. Such a category then provides an _orientation_ inside the C*-algebra, in the sense that it determines an operator algebra of partial isometries that need _not_ be closed under taking adjoints. We think of this operator algebra as an "irreversible" operator algebra inside the ambient C*-algebra. This orientation corresponds to the "one-sided" or "irreversible" nature of the underlying dynamics, the direction in a directed graph, or a notion of positivity in a group. It was Spielberg who recently discovered this vastly general and unifying framework of left cancellative small categories [15, 16]. See also [14, 17] for further developments.
A left cancellative small category \(\mathfrak{C}\) determines a Toeplitz-type C*-algebra \(C^{*}_{\lambda}(\mathfrak{C})\) (Definition 2.8), a boundary quotient C*-algebra \(\partial C^{*}_{\lambda}(\mathfrak{C})\) (Definition 2.11), and an operator algebra \(\mathcal{A}_{\lambda}(\mathfrak{C})\subseteq C^{*}_{\lambda}(\mathfrak{C})\) (Definition 2.9). At the masterclass "Dilation and classification in operator algebra theory" at the University of Copenhagen in October 2022, Xin Li posed the following natural question.
**Question B**.: _Given a left cancellative small category \(\mathfrak{C}\), does the C*-envelope of \(\mathcal{A}_{\lambda}(\mathfrak{C})\) coincide with the boundary quotient C*-algebra \(\partial C^{*}_{\lambda}(\mathfrak{C})\)?_
This question is known to have an affirmative answer for certain subclasses of left cancellative small categories, e.g., when \(\mathfrak{C}\) is a higher-rank graph or a submonoid of a group, see [14, 15].
However, new techniques are necessary to answer it for more general categories. Our main application of Theorem A is a sufficient condition for answering Question B that we verify for all groupoid-embeddable categories.
**Theorem B** (Theorem 4.17).: _Let \(\mathfrak{C}\) be a subcategory of a groupoid \(\mathfrak{G}\). Then, the C*-envelope of \(\mathcal{A}_{\lambda}(\mathfrak{C})\) is canonically *-isomorphic to the boundary quotient C*-algebra \(\partial C^{*}_{\lambda}(\mathfrak{C})\)._
The proof of this application uses novel etale groupoid-theoretic techniques and consists of two parts. The first step is to show that \(\partial C^{*}_{\lambda}(\mathfrak{C})\) is a C*-cover of \(\mathcal{A}_{\lambda}(\mathfrak{C})\) (Theorem 4.3). Here, we rely heavily on inverse semigroups and etale groupoids underlying the C*-algebras [1, 2, 3, 4, 5], and this marks a new interaction between the theory of non-selfadjoint operator algebras and etale groupoids. Interestingly, this part requires only the much weaker assumption that the category be cancellative with a Hausdorff boundary groupoid. Our approach also provides new proofs of several established results from [10] and [4]. We give an example of this by providing a short direct computation of the C*-envelope for finitely aligned higher-rank graphs and \(P\)-graphs for group-embeddable \(P\) (Theorem 4.12). In the second step, we show injectivity of the canonical map from \(\partial C^{*}_{\lambda}(\mathfrak{C})\) onto \(C^{*}_{\mathrm{env}}(\mathcal{A}_{\lambda}(\mathfrak{C}))\) that arises from the co-universal property of the C*-envelope. This requires Theorem A together with a careful analysis of the etale groupoid model underlying \(\partial C^{*}_{\lambda}(\mathfrak{C})\).
As a byproduct of our analysis, we prove that when \(\mathfrak{C}\) is a subcategory of a discrete groupoid, then \(\partial C^{*}_{\lambda}(\mathfrak{C})\) can be realized as a crossed product C*-algebra for a canonical partial action (Corollary 4.16). This is a nontrivial generalization of [16, Proposition 3.10] and is of independent interest. We believe our approach using etale groupoids will have important consequences for future research, e.g., into new example classes of categories that are not groupoid-embeddable where the associated etale groupoids are not Hausdorff.
**Acknowledgements.** We are grateful to Xin Li for discussions on non-abelian duality that led to simplified proofs, and for bringing the reference [13] to our attention. The third-named author acknowledges the March 2020 FRG 248 BIRS meeting in Banff.
## 2. Preliminaries
### Coactions on operator algebras
We first discuss operator algebras and their C*-envelopes. The reader is referred to [1, 2, 13] for the unital theory, and to [14, SS 2.2] for additional details on the general theory.
By an operator algebra \(\mathcal{A}\), we mean a norm-closed subalgebra of the bounded operators \(\mathbb{B}(\mathcal{H})\) on a Hilbert space \(\mathcal{H}\). A _C*-cover_ of an operator algebra \(\mathcal{A}\) is a pair \((\iota,\mathcal{B})\), where \(\mathcal{B}\) is a C*-algebra and \(\iota\colon\mathcal{A}\to\mathcal{B}\) is a completely isometric homomorphism such that \(\mathcal{B}=C^{*}(\iota(\mathcal{A}))\). The _C*-envelope_ of \(\mathcal{A}\) is a C*-cover \((\kappa,C^{*}_{\mathrm{env}}(\mathcal{A}))\) of \(\mathcal{A}\) with the _co-universal property_ that for every other C*-cover \((\iota,\mathcal{B})\) of \(\mathcal{A}\), there is a surjective *-homomorphism \(q_{e}\colon\mathcal{B}\to C^{*}_{\mathrm{env}}(\mathcal{A})\) with \(q_{e}\circ\iota=\kappa\) on \(\mathcal{A}\). By this co-universal property, the C*-algebra \(C^{*}_{\mathrm{env}}(\mathcal{A})\) is unique up to canonical *-isomorphism.
Throughout this paper \(\otimes\) denotes the minimal (spatial) tensor product of operator algebras, \(G\) will denote a discrete group, and \(u_{g}\in C^{*}(G)\) will denote the canonical unitary corresponding to \(g\in G\). Let \(\lambda\colon C^{*}(G)\to C^{*}_{\lambda}(G)\) denote the canonical quotient map of \(C^{*}(G)\) onto the reduced group C*-algebra \(C^{*}_{\lambda}(G)\). We also use \(\lambda\) to denote the regular representation of \(G\) in \(\mathcal{U}(\ell^{2}(G))\), so that \(\lambda_{g}=\lambda(u_{g})\), for all \(g\in G\). We let
\[\Delta\colon C^{*}(G)\to C^{*}(G)\otimes C^{*}(G),\quad u_{g}\mapsto u_{g} \otimes u_{g}\]
and
\[\Delta_{\lambda}\colon C^{*}_{\lambda}(G)\to C^{*}_{\lambda}(G)\otimes C^{*}_ {\lambda}(G),\quad\lambda_{g}\mapsto\lambda_{g}\otimes\lambda_{g}\]
denote the comultiplications on \(C^{*}(G)\) and \(C^{*}_{\lambda}(G)\), respectively, where the latter exists by Fell's absorption principle.
**Definition 2.1**.: _A coaction of a discrete group \(G\) on an operator algebra \(\mathcal{A}\) is a completely contractive homomorphism \(\delta\colon\mathcal{A}\to\mathcal{A}\otimes C^{*}(G)\) such that_
1. \((\delta\otimes\mathrm{id}_{C^{*}(G)})\circ\delta=(\mathrm{id}_{\mathcal{A}} \otimes\Delta)\circ\delta\) _(coaction identity);_
2. \(\overline{\delta(\mathcal{A})(I_{\mathcal{A}}\otimes C^{*}(G))}=\mathcal{A} \otimes C^{*}(G)\) _(nondegeneracy)._
_The coaction \(\delta\) is normal if the map \((\operatorname{id}_{\mathcal{A}}\otimes\lambda)\circ\delta\colon\mathcal{A} \to\mathcal{A}\otimes C^{*}_{\lambda}(G)\) is completely isometric. We write \(\delta\colon G\curvearrowright\mathcal{A}\) to denote a coaction of \(G\) on \(\mathcal{A}\)._
**Remark 2.2**.: _When \(\mathcal{A}\) is a C*-algebra, the map \(\delta\) in Definition 2.1 is automatically *-preserving, and is therefore a *-homomorphism. Indeed, if \(\mathcal{A}\) is unital we do nothing, and if \(\mathcal{A}\) is nonunital, then by [12, SS 3], \(\delta\) extends to a unital completely contractive homomorphism on its unitization. Either way, a unital complete contraction is automatically positive by [13, Proposition 2.12], and therefore preserves adjoints. Thus, when \(\mathcal{A}\) is a C*-algebra, we get the original definition of a discrete group coaction on a C*-algebra in the literature, cf. [1, Definition A.21]._
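For instance (a standard example, not specific to this paper), take \(\mathcal{A}=C^{*}_{\lambda}(G)\). By Fell's absorption principle, the assignment \(\lambda_{g}\mapsto\lambda_{g}\otimes u_{g}\) extends to a *-homomorphism
\[\delta_{G}\colon C^{*}_{\lambda}(G)\to C^{*}_{\lambda}(G)\otimes C^{*}(G),\quad\lambda_{g}\mapsto\lambda_{g}\otimes u_{g},\]
which is a coaction of \(G\) on \(C^{*}_{\lambda}(G)\). Since \((\operatorname{id}\otimes\lambda)\circ\delta_{G}=\Delta_{\lambda}\) is a faithful *-homomorphism, \(\delta_{G}\) is normal, and its spectral subspaces (in the sense of Proposition 2.3 below) are \(\mathbb{C}\lambda_{g}\), \(g\in G\).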
Whenever we have a coaction \(\delta\colon G\curvearrowright\mathcal{A}\), we may define analogues of Fourier coefficients \(\mathbb{E}_{g}\colon\mathcal{A}\to\mathcal{A}\otimes\mathbb{C}u_{g}\) by setting \(\mathbb{E}_{g}(a)=(\operatorname{id}_{\mathcal{A}}\otimes\Phi_{g})\circ\delta(a)\), where \(\Phi_{g}\colon C^{*}(G)\to\mathbb{C}u_{g}\) is the standard \(g\)-th Fourier coefficient map on the full group C*-algebra. By the coaction identity, it follows that \(\mathbb{E}_{g}(a)(I\otimes u_{g^{-1}})\in\mathcal{A}\otimes\mathbb{C}I\), identified as an element \(b\) in \(\mathcal{A}\), satisfies \(\delta(b)=b\otimes u_{g}\). We then have the following result, which shows that [1, Definition 3.1] coincides with Definition 2.1.
**Proposition 2.3**.: _Let \(\mathcal{A}\) be an operator algebra, let \(G\) be a discrete group, and suppose \(\delta\colon\mathcal{A}\to\mathcal{A}\otimes C^{*}(G)\) is a completely contractive homomorphism. Then, \(\delta\) is a coaction on \(\mathcal{A}\) if and only if \(\sum_{g}\mathcal{A}_{g}^{\delta}\) is norm-dense in \(\mathcal{A}\), where \(\mathcal{A}_{g}^{\delta}\) is the spectral subspace given by \(\mathcal{A}_{g}^{\delta}\coloneqq\{a\in\mathcal{A}:\delta(a)=a\otimes u_{g}\}\) for \(g\in G\)._
Proof.: Suppose first that \(\sum_{g}\mathcal{A}_{g}^{\delta}\) is dense in \(\mathcal{A}\). It is easy to verify the coaction identity on elements of each spectral subspace, so it holds on all of \(\mathcal{A}\). Next, we show that \(\delta\) is nondegenerate. Clearly we have the containment \(\overline{\delta(\mathcal{A})(I_{\mathcal{H}}\otimes C^{*}(G))}\subseteq \mathcal{A}\otimes C^{*}(G)\), so we need only prove the converse. For \(g\in G\) and \(a\in\mathcal{A}_{g}^{\delta}\), we have \(\delta(a)=a\otimes u_{g}\in\delta(\mathcal{A})\), and since \(I_{\mathcal{H}}\otimes u_{g^{-1}}\in I_{\mathcal{H}}\otimes C^{*}(G)\), we have that \(a\otimes I=(a\otimes u_{g})(I_{\mathcal{H}}\otimes u_{g^{-1}})\in\delta( \mathcal{A})(I_{\mathcal{H}}\otimes C^{*}(G))\). From density of \(\sum_{g}\mathcal{A}_{g}^{\delta}\) in \(\mathcal{A}\), we get that \(\mathcal{A}\otimes\mathbb{C}I\subseteq\overline{\delta(\mathcal{A})(I_{ \mathcal{H}}\otimes C^{*}(G))}\), so that \(\overline{\delta(\mathcal{A})(I_{\mathcal{H}}\otimes C^{*}(G))}=\mathcal{A} \otimes C^{*}(G)\).
Suppose now that \(\delta\) is a coaction in the sense of Definition 2.1. We will show that \(\sum_{g}\mathcal{A}_{g}\) is dense in \(\mathcal{A}\). Let \(a\in\mathcal{A}\) be some element, and \(\epsilon>0\). Then, by nondegeneracy there are elements \(a_{g}\in\mathcal{A}\), finitely many of which are nonzero, such that
\[\left\|a\otimes I-\sum_{g}\delta(a_{g})(I_{\mathcal{H}}\otimes u_{g^{-1}}) \right\|<\epsilon.\]
Since \(\operatorname{id}_{\mathcal{A}}\otimes\Phi_{e}\) is contractive, we get in \(\mathcal{A}\otimes\mathbb{C}u_{e}\) that
\[\left\|a\otimes I-\sum_{g}\mathbb{E}_{g}(a_{g})(I_{\mathcal{H}}\otimes u_{g^{-1}}) \right\|<\epsilon.\]
By the coaction identity, we have \(\mathbb{E}_{g}(a_{g})(I_{\mathcal{H}}\otimes u_{g^{-1}})\in\mathcal{A}_{g} \otimes\mathbb{C}I\), so \(a\) can be approximated up to arbitrary tolerance by an element in \(\sum_{g}\mathcal{A}_{g}\).
**Remark 2.4**.: _It follows from Proposition 2.3 that any coaction \(\delta\) is automatically completely isometric. Indeed, if we let \(1\colon G\to\mathbb{C}\) be the trivial representation, then we have \((\operatorname{id}_{\mathcal{A}}\otimes 1)\circ\delta=\operatorname{id}_{ \mathcal{A}}\), as can be verified on spectral subspaces._
**Definition 2.5**.: _A reduced coaction of a discrete group \(G\) on an operator algebra \(\mathcal{A}\) is a completely isometric homomorphism \(\varepsilon\colon\mathcal{A}\to\mathcal{A}\otimes C^{*}_{\lambda}(G)\) such that_
1. \((\varepsilon\otimes\operatorname{id}_{C^{*}_{\lambda}(G)})\circ \varepsilon=(\operatorname{id}_{\mathcal{A}}\otimes\Delta_{\lambda})\circ\varepsilon\) _(coaction identity);_
2. \(\overline{\varepsilon(\mathcal{A})(I_{\mathcal{A}}\otimes C^{*}_{\lambda}(G))}= \mathcal{A}\otimes C^{*}_{\lambda}(G)\) _(nondegeneracy)._
**Remark 2.6**.: _As in Remark 2.2, when \(\mathcal{A}\) is a C*-algebra, a reduced coaction is automatically *-preserving. Therefore, Definition 2.5 coincides with the notion of a reduced coaction on a C*-algebra in the literature, cf. [1, Definition A.72]._
If \(\delta\colon G\curvearrowright\mathcal{A}\) is a normal coaction, then \(\delta_{\lambda}\coloneqq(\operatorname{id}_{\mathcal{A}}\otimes\lambda) \circ\delta\colon\mathcal{A}\to\mathcal{A}\otimes C^{*}_{\lambda}(G)\) is a reduced coaction, as it automatically satisfies (i) and (ii) from Definition 2.5. On the other hand, if \(\varepsilon\colon G\curvearrowright\mathcal{A}\) is a
reduced coaction, a similar proof as the one of Proposition 2.3 shows that \(\sum_{g}\mathcal{A}_{g}^{\varepsilon}\) is norm-dense in \(\mathcal{A}\), where \(\mathcal{A}_{g}^{\varepsilon}\) are the spectral subspaces given by
\[\mathcal{A}_{g}^{\varepsilon}\coloneqq\{a\in\mathcal{A}:\varepsilon(a)=a \otimes\lambda_{g}\},\]
for \(g\in G\).

### C*-algebras of left cancellative small categories

Let \(\mathfrak{C}\) be a left cancellative small category, let \(\mathfrak{C}^{0}\subseteq\mathfrak{C}\) denote its set of identity morphisms (objects), and write \(\mathfrak{d}(c)\) for the source of a morphism \(c\in\mathfrak{C}\). The left regular representation \(\lambda_{\mathfrak{C}}\) of \(\mathfrak{C}\) on \(\ell^{2}(\mathfrak{C})\) is given by

\[\lambda_{\mathfrak{C}}(c)e_{x}\coloneqq\begin{cases}e_{cx}&\text{if }x\in\mathfrak{d}(c)\mathfrak{C},\\ 0&\text{otherwise},\end{cases}\]
for all \(c,x\in\mathfrak{C}\), where \(\{e_{x}:x\in\mathfrak{C}\}\) is the canonical orthonormal basis for \(\ell^{2}(\mathfrak{C})\). We see that for \(c\in\mathfrak{C}\), the operator \(\lambda_{\mathfrak{C}}(c)\) is a partial isometry with initial space \(\ell^{2}(\mathfrak{d}(c)\mathfrak{C})\) and final space \(\ell^{2}(c\mathfrak{C})\). The following C*-algebra was defined by Spielberg [14].
**Definition 2.8** ([14, Definition 11.2]).: _The reduced Toeplitz algebra of \(\mathfrak{C}\) is the C*-algebra_
\[C^{*}_{\lambda}(\mathfrak{C})\coloneqq C^{*}(\{\lambda_{\mathfrak{C}}(c):c \in\mathfrak{C}\})\subseteq\mathbb{B}(\ell^{2}(\mathfrak{C})).\]
The C*-algebra \(C^{*}_{\lambda}(\mathfrak{C})\) is called the left reduced C*-algebra of \(\mathfrak{C}\) in [13, Definition 2.2].
**Definition 2.9**.: _The operator algebra of \(\mathfrak{C}\) is_
\[\mathcal{A}_{\lambda}(\mathfrak{C})\coloneqq\overline{\operatorname{alg}}( \{\lambda_{\mathfrak{C}}(c):c\in\mathfrak{C}\})\subseteq C^{*}_{\lambda}( \mathfrak{C}).\]
The operator algebra \(\mathcal{A}_{\lambda}(\mathfrak{C})\) is unital whenever \(\mathfrak{C}^{0}\) is finite and, in general, it has a contractive approximate identity consisting of projections. Indeed, the net of projections \(\sum_{u\in F}\lambda_{\mathfrak{C}}(u)\), as \(F\) ranges over the finite subsets of \(\mathfrak{C}^{0}\), is a contractive approximate identity of projections for the dense algebra \(\operatorname{alg}(\{\lambda_{\mathfrak{C}}(c):c\in\mathfrak{C}\})\), so it is also a contractive approximate identity for \(\mathcal{A}_{\lambda}(\mathfrak{C})\).
A priori, we have the following description of \(C^{*}_{\lambda}(\mathfrak{C})\) with *-monomials:
\[C^{*}_{\lambda}(\mathfrak{C})=\overline{\operatorname{span}}(\{\lambda_{ \mathfrak{C}}(c_{1})^{*}\lambda_{\mathfrak{C}}(d_{1})\cdots\lambda_{\mathfrak{ C}}(c_{n})^{*}\lambda_{\mathfrak{C}}(d_{n}):c_{i},d_{i}\in\mathfrak{C},n \geqslant 1\}). \tag{2.1}\]
The language of inverse semigroups is then very convenient for understanding how *-monomials interact.
Recall that a _partial bijection of \(\mathfrak{C}\)_ is a bijection between two subsets of \(\mathfrak{C}\), i.e., a bijection \(f\colon\operatorname{dom}(f)\to\operatorname{im}(f)\), where \(\operatorname{dom}(f)\) and \(\operatorname{im}(f)\) are subsets of \(\mathfrak{C}\) called the _domain_ and _image_ of \(f\), respectively. The symmetric inverse monoid \(\mathcal{I}(\mathfrak{C})\) consists of all partial bijections of \(\mathfrak{C}\) with composition and inversion of partial bijections. Due to left cancellation in \(\mathfrak{C}\), each \(c\in\mathfrak{C}\) defines a partial bijection
\[\mathfrak{d}(c)\mathfrak{C}\to c\mathfrak{C},\quad x\mapsto cx. \tag{2.2}\]
Following [13], we shall use \(c\) also to denote the partial bijection defined in (2.2), so that \(\operatorname{dom}(c)=\mathfrak{d}(c)\mathfrak{C}\), \(\operatorname{im}(c)=c\mathfrak{C}\), and \(c(x)\coloneqq cx\) for all \(x\in\mathfrak{d}(c)\mathfrak{C}\). The inverse \(c^{-1}\) is the partial bijection of \(\mathfrak{C}\) given by \(c\mathfrak{C}\to\mathfrak{d}(c)\mathfrak{C}\), \(c^{-1}(cx)\coloneqq x\) for all \(cx\in c\mathfrak{C}\), though it may not make sense as an element of \(\mathfrak{C}\).
There is a canonical faithful representation of the inverse monoid \(\mathcal{I}(\mathfrak{C})\) on \(\ell^{2}(\mathfrak{C})\) by partial isometries defined as follows: given \(f\in\mathcal{I}(\mathfrak{C})\), there is a partial isometry \(\Lambda_{f}\in\mathbb{B}(\ell^{2}(\mathfrak{C}))\) given by
\[\Lambda_{f}e_{x}\coloneqq\begin{cases}e_{f(x)}&\text{if }x\in\operatorname{ dom}(f),\\ 0&\text{if }x\notin\operatorname{dom}(f),\end{cases} \tag{2.3}\]
for all \(x\in\mathfrak{C}\), and the map \(f\mapsto\Lambda_{f}\) is injective and a homomorphism of inverse monoids in the sense that \(\Lambda_{fg}=\Lambda_{f}\Lambda_{g}\) and \(\Lambda_{f}^{*}=\Lambda_{f^{-1}}\) for all partial bijections \(f,g\in\mathcal{I}(\mathfrak{C})\). In the specific case where the partial bijections come from elements of \(\mathfrak{C}\), we have:
* \(\lambda_{\mathfrak{C}}(c)=\Lambda_{c}\) and \(\lambda_{\mathfrak{C}}(c)^{*}=\Lambda_{c^{-1}}\) for all \(c\in\mathfrak{C}\);
* \(\Lambda_{c}^{*}\Lambda_{d}=\Lambda_{c^{-1}d}\) for all \(c,d\in\mathfrak{C}\), where \(c^{-1}d\) is the composition of \(c^{-1}\) and \(d\) in \(\mathcal{I}(\mathfrak{C})\).
This allows us to describe a general *-monomial as a partial isometry. Specifically, we have
\[\lambda_{\mathfrak{C}}(c_{1})^{*}\lambda_{\mathfrak{C}}(d_{1})\cdots\lambda_{ \mathfrak{C}}(c_{n})^{*}\lambda_{\mathfrak{C}}(d_{n})=\Lambda_{c_{1}^{-1}d_{1 }\cdots c_{n}^{-1}d_{n}}, \tag{2.4}\]
for all \(c_{i},d_{i}\in\mathfrak{C}\) and \(n\geqslant 1\). The _left inverse hull_\(I_{l}=I_{l}(\mathfrak{C})\) of \(\mathfrak{C}\) is the inverse semigroup generated by the collection \(\{c:c\in\mathfrak{C}\}\) of partial bijections from (2.2), and we have
\[I_{l}=\{c_{1}^{-1}d_{1}\cdots c_{n}^{-1}d_{n}:c_{i},d_{i}\in\mathfrak{C},n \geqslant 1\}\]
where the product \(c_{1}^{-1}d_{1}\cdots c_{n}^{-1}d_{n}\) is taken inside \(\mathcal{I}(\mathfrak{C})\). We denote by \(0\) the empty function on \(\mathfrak{C}\). Using [10, Lemma 5.6.43], we see that \(I_{l}\) does not contain \(0\) if and only if \(\mathfrak{C}\) is a monoid such that \(x\mathfrak{C}\cap y\mathfrak{C}\neq\emptyset\) for all \(x,y\in\mathfrak{C}\). Combining (2.1) and (2.4), we get that
\[C^{*}_{\lambda}(\mathfrak{C})=\overline{\operatorname{span}}(\{\Lambda_{s}:s \in I_{l}\}). \tag{2.5}\]
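For instance (a standard example, not specific to this paper), if \(\mathfrak{C}\) is the free monoid on two generators \(a\) and \(b\), then \(a\mathfrak{C}\cap b\mathfrak{C}=\emptyset\), so the composition \(a^{-1}b\) is the empty function and \(0\in I_{l}\). In \(C^{*}_{\lambda}(\mathfrak{C})\) this reads
\[\Lambda_{a}^{*}\Lambda_{b}=\Lambda_{a^{-1}b}=0,\]
i.e., the generating isometries \(\Lambda_{a}\) and \(\Lambda_{b}\) have orthogonal ranges; the nonzero elements of \(I_{l}\) are of the form \(wv^{-1}\) for words \(w,v\), so the spanning set in (2.5) consists of the familiar Toeplitz--Cuntz monomials \(\Lambda_{w}\Lambda_{v}^{*}\).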
Moreover, the map \(\Lambda\colon I_{l}\to C^{*}_{\lambda}(\mathfrak{C})\) is a faithful representation of the inverse semigroup \(I_{l}\) by partial isometries in \(C^{*}_{\lambda}(\mathfrak{C})\). The description in (2.5) has several important consequences. First, the composition law in \(I_{l}\) tells us how to take products of spanning elements in \(C^{*}_{\lambda}(\mathfrak{C})\). Second, because the idempotents in an inverse semigroup form a semilattice (i.e., a commutative idempotent semigroup), we see that \(C^{*}_{\lambda}(\mathfrak{C})\) contains a canonical commutative C*-subalgebra
\[D_{\lambda}(\mathfrak{C})\coloneqq\overline{\operatorname{span}}(\{1_{X}:X \in\mathcal{J}\})\subseteq\ell^{\infty}(\mathfrak{C}),\]
where \(\mathcal{J}\coloneqq\{\operatorname{dom}(s):s\in I_{l}\}\) is the semilattice of constructible right ideals of \(\mathfrak{C}\). Since the map \(X\mapsto\operatorname{id}_{X}\) is a semilattice isomorphism from \(\mathcal{J}\) onto the idempotent semilattice of \(I_{l}\), we will often treat these interchangeably. We let \(\mathcal{J}^{\times}\) denote the nonempty constructible right ideals of \(\mathfrak{C}\).
A _character_ on \(\mathcal{J}\) is a nonzero map \(\chi\colon\mathcal{J}\to\{0,1\}\) such that \(\chi(X\cap Y)=\chi(X)\chi(Y)\) for all \(X,Y\in\mathcal{J}\) and \(\chi(\emptyset)=0\) if \(\emptyset\in\mathcal{J}\). The Gelfand spectrum of \(D_{\lambda}(\mathfrak{C})\) is canonically identified with the subspace \(\Omega\subseteq\{0,1\}^{\mathcal{J}}\) consisting of the characters \(\chi\) on \(\mathcal{J}\) with the property that whenever \(X,X_{1},\ldots,X_{n}\in\mathcal{J}^{\times}\) satisfy \(X=\bigcup_{i=1}^{n}X_{i}\), then \(\chi(X)=1\) implies \(\chi(X_{i})=1\) for some \(i=1,\ldots,n\). The space \(\Omega\) is a locally compact totally disconnected Hausdorff space with a basis given by compact open subsets of the form
\[\Omega(X;\mathfrak{f})\coloneqq\{\chi\in\Omega:\chi(X)=1,\chi(Y)=0\text{ for all }Y\in\mathfrak{f}\},\]
where \(X\in\mathcal{J}\) and \(\mathfrak{f}\subseteq\mathcal{J}^{\times}\) is a finite (possibly empty) subset such that \(\bigcup_{Y\in\mathfrak{f}}Y\subseteq X\). For each \(X\in\mathcal{J}\), we put \(\Omega(X)\coloneqq\Omega(X;\emptyset)\).
The inverse semigroup \(I_{l}\) acts on \(\Omega\): each \(s\in I_{l}\) defines a partial homeomorphism
\[\Omega(\operatorname{dom}(s))\to\Omega(\operatorname{im}(s)),\quad\chi \mapsto s.\chi,\]
where \(s.\chi(X)\coloneqq\chi(s^{-1}(X\cap\operatorname{im}(s)))\) for all \(X\in\mathcal{J}\). Let
\[I_{l}\ast\Omega\coloneqq\{(s,\chi)\in I_{l}\times\Omega:\chi(\operatorname{ dom}(s))=1\}.\]
Define an equivalence relation on \(I_{l}\ast\Omega\) by
\[(s,\chi)\sim(t,\chi)\]
if there exists \(X\in\mathcal{J}\) such that \(\chi(X)=1\) and \(s(x)=t(x)\) for all \(x\in X\). We let \([s,\chi]\) denote the equivalence class of \((s,\chi)\) with respect to \(\sim\). The transformation groupoid \(I_{l}\ltimes\Omega\) is the quotient space \((I_{l}\ast\Omega)/\sim\) with groupoid operations determined by
\[[s,t.\chi][t,\chi]\coloneqq[st,\chi]\quad\text{ and }\quad[s,\chi]^{-1} \coloneqq[s^{-1},s.\chi]\]
for all \(s,t\in I_{l}\) and \(\chi\in\Omega\). The range and source maps are given by \(\mathfrak{r}([s,\chi])\coloneqq s.\chi\) and \(\mathfrak{s}([s,\chi])\coloneqq\chi\), respectively, for all \([s,\chi]\in I_{l}\ltimes\Omega\). We tacitly identify \(\Omega\) with the unit space of \(I_{l}\ltimes\Omega\) via \(\chi\mapsto[\operatorname{id}_{X},\chi]\) for all \(\chi\in\Omega\), where \(X\in\mathcal{J}\) is any constructible right ideal with \(\chi(X)=1\). Subsets of the form \([s,U]\coloneqq\{[s,\chi]:\chi\in U\}\) for all \(s\in I_{l}\) and compact open subsets \(U\) of \(\Omega(\operatorname{dom}(s))\) generate a topology on the groupoid \(I_{l}\ltimes\Omega\) which is locally compact, etale (both range and source maps are local homeomorphisms), and ample (there is a basis consisting of compact open bisections) though not necessarily Hausdorff. A bisection is a subset of \(I_{l}\ltimes\Omega\) on which both the range and source maps are injective, and any basic open subset \([s,U]\) as above is a compact open bisection.
A character \(\chi\) on \(\mathcal{J}\) is said to be _maximal_ if \(\chi^{-1}(1)\) is maximal with respect to set inclusion in the collection \(\{\gamma^{-1}(1):\gamma\text{ a character of }\mathcal{J}\}\). Every maximal character lies in \(\Omega\), and we denote by \(\Omega_{\max}\) the subset of \(\Omega\) consisting of maximal characters. The _boundary_ of \(\Omega\) is the closure \(\partial\Omega\coloneqq\overline{\Omega_{\max}}\), which is a closed and invariant subset of \(\Omega\). Given \(X\in\mathcal{J}\) and a finite subset \(\mathfrak{f}\subseteq\mathcal{J}^{\times}\) with \(\bigcup_{Y\in\mathfrak{f}}Y\subseteq X\), we put \(\partial\Omega(X;\mathfrak{f})=\partial\Omega\cap\Omega(X;\mathfrak{f})\) and \(\partial\Omega(X)\coloneqq\partial\Omega(X;\emptyset)\). Compact open subsets of this form are a basis for the topology on \(\partial\Omega\).
**Remark 2.10**.: _If \(0\notin I_{l}\), then \(\partial\Omega=\Omega_{\max}=\{\chi_{\infty}\}\) is a single point, where \(\chi_{\infty}\colon\mathcal{J}\to\{0,1\}\) is the unique maximal character defined by \(\chi(X)=1\) for all \(X\in\mathcal{J}\)._
Let us recall some terminology from [10, Definition 11.5]. A subset \(F\subseteq\mathcal{J}\) of constructible right ideals is said to be a _cover_ for \(X\in\mathcal{J}\) if \(Z\subseteq X\) for all \(Z\in F\) and for every \(Y\in\mathcal{J}\) with \(Y\subseteq X\), there exists \(Z\in F\) such that \(Z\cap Y\neq\emptyset\). If \(0\in I_{l}\), then \(\partial\Omega\) is precisely the set of tight characters of
\(\mathcal{J}\) in the sense of Exel, see [10, Theorem 12.9]. Precisely, this means that \(\chi\in\Omega\) lies in \(\partial\Omega\) if and only if whenever \(X\in\mathcal{J}^{\times}\) and \(F\) is a finite cover for \(X\), we have \(\chi(Y)=1\) for some \(Y\in F\). This implies that \(\partial\Omega(X;\mathfrak{f})=\emptyset\) whenever \(\mathfrak{f}\) is a cover for \(X\).
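For instance (a standard example, not specific to this paper), for the free monoid \(\mathfrak{C}\) on two generators \(a\) and \(b\), the nonempty constructible right ideals are the principal ideals \(w\mathfrak{C}\), and \(\{wa\mathfrak{C},wb\mathfrak{C}\}\) is a finite cover for \(w\mathfrak{C}\). A character in \(\Omega\) corresponds to the chain of words \(w\) with \(\chi(w\mathfrak{C})=1\), i.e., to a finite or infinite word, and the covering condition forces a boundary character to satisfy \(\chi(wa\mathfrak{C})=1\) or \(\chi(wb\mathfrak{C})=1\) whenever \(\chi(w\mathfrak{C})=1\). Thus \(\partial\Omega\) is identified with the Cantor set of infinite words in \(a\) and \(b\), and the boundary quotient of Definition 2.11 below recovers the Cuntz algebra \(\mathcal{O}_{2}\).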
Next, we explain how each character \(\chi\in\Omega\) gives rise to an analogue of a left regular representation on those groupoid elements whose source equals \(\chi\). This makes sense for any etale groupoid. Since we shall not need to consider such representations for non-Hausdorff groupoids, we shall assume that \(I_{l}\ltimes\Omega\) is Hausdorff. Note that \((I_{l}\ltimes\Omega)_{\chi}\coloneqq\{[s,\chi]\in I_{l}\ltimes\Omega:s\in I_{l },\chi(\operatorname{dom}(s))=1\}\) is discrete since the groupoid is etale, and define the left regular representation of \(C_{c}(I_{l}\ltimes\Omega)\) associated with \(\chi\) as
\[\rho_{\chi}\colon C_{c}(I_{l}\ltimes\Omega)\to\mathbb{B}(\ell^{2}((I_{l} \ltimes\Omega)_{\chi})),\]
given by
\[\rho_{\chi}(f)\delta_{[t,\chi]}=\sum_{[s,t,\chi]\in(I_{l}\ltimes\Omega)_{t, \chi}}f([s,t.\chi])\delta_{[st,\chi]},\]
for all \(f\in C_{c}(I_{l}\ltimes\Omega)\) and \([t,\chi]\in(I_{l}\ltimes\Omega)_{\chi}\). Similarly, given \(\chi\in\partial\Omega\), we let
\[\partial\rho_{\chi}\colon C_{c}(I_{l}\ltimes\partial\Omega)\to\mathbb{B}(\ell ^{2}((I_{l}\ltimes\partial\Omega)_{\chi}))\]
denote the left regular representation of \(C_{c}(I_{l}\ltimes\partial\Omega)\) associated with \(\chi\). We then define \(C_{r}^{*}(I_{l}\ltimes\Omega)\) as the completion of \(\bigoplus_{\chi\in\Omega}\rho_{\chi}(C_{c}(I_{l}\ltimes\Omega))\) in \(\mathbb{B}(\bigoplus_{\chi\in\Omega}\ell^{2}((I_{l}\ltimes\Omega)_{\chi}))\), and, similarly, \(C_{r}^{*}(I_{l}\ltimes\partial\Omega)\) is the completion of \(\bigoplus_{\chi\in\partial\Omega}\partial\rho_{\chi}(C_{c}(I_{l}\ltimes\partial\Omega))\) in \(\mathbb{B}(\bigoplus_{\chi\in\partial\Omega}\ell^{2}((I_{l}\ltimes\partial \Omega)_{\chi}))\). Both \(\rho_{\chi}\) and \(\partial\rho_{\chi}\) extend to *-homomorphisms (still denoted \(\rho_{\chi}\) and \(\partial\rho_{\chi}\)) of \(C_{r}^{*}(I_{l}\ltimes\Omega)\) and \(C_{r}^{*}(I_{l}\ltimes\partial\Omega)\), respectively. From the analysis in [10, SS 3] (in particular, [10, Corollary 3.4]) and following [22, Proposition 11.4], we see that if the groupoid \(I_{l}\ltimes\Omega\) is Hausdorff, then there is a *-isomorphism
\[\mathfrak{j}\colon C_{r}^{*}(I_{l}\ltimes\Omega)\to C_{\lambda}^{*}(\mathfrak{ C}),\quad 1_{[s,\Omega(\operatorname{dom}(s))]}\mapsto\Lambda_{s}, \tag{2.6}\]
for all \(s\in I_{l}\).
By [10, Lemma 4.1(i)], we know that \(I_{l}\ltimes\Omega\) is Hausdorff if and only if for all \(s\in I_{l}\), there exists a finite (possibly empty) set \(F\subseteq\mathcal{J}^{\times}\) such that \(\{x\in\operatorname{dom}(s):s(x)=x\}=\bigcup_{X\in F}X\). For instance, this means that \(I_{l}\ltimes\Omega\) is Hausdorff whenever \(\mathfrak{C}\) is cancellative and finitely aligned in the sense of [22, Definition 3.2]. Thus, in order to use the identification of C*-algebras in (2.6), we assume \(I_{l}\ltimes\Omega\) is Hausdorff.
**Definition 2.11**.: _Let \(\mathfrak{C}\) be a cancellative small category, and suppose that \(I_{l}\ltimes\Omega\) is Hausdorff. The (reduced) boundary quotient of \(C_{\lambda}^{*}(\mathfrak{C})\) is the C*-algebra \(\partial C_{\lambda}^{*}(\mathfrak{C})\coloneqq C_{r}^{*}(I_{l}\ltimes \partial\Omega)\)._
The _boundary quotient map_ is the surjective *-homomorphism
\[q_{\partial}\colon C_{r}^{*}(I_{l}\ltimes\Omega)\to C_{r}^{*}(I_{l}\ltimes \partial\Omega) \tag{2.7}\]
determined by \(q_{\partial}(f)=f|_{I_{l}\ltimes\partial\Omega}\) for all \(f\in C_{c}(I_{l}\ltimes\Omega)\).
When \(I_{l}\ltimes\Omega\) is Hausdorff, the isomorphism in (2.6) justifies Definition 2.11. In the non-Hausdorff setting, there is another candidate for a groupoid model for \(C_{\lambda}^{*}(\mathfrak{C})\) (see [10, SS 3]), and we do not know if the two groupoids are different (see [10, Question 3.6]).
## 3. Existence of coactions on C*-envelopes
In this section, we extend Katayama duality for normal coactions of discrete groups on C*-algebras [11, Theorem 8] to normal coactions of discrete groups on operator algebras. The proof is a straightforward generalization of the C*-algebra version. We then use this Katayama duality to prove our main result (Theorem 3.5) that a normal coaction on an operator algebra extends to a coaction on the C*-envelope.
Let \(\mathcal{A}\subseteq\mathbb{B}(\mathcal{H})\) be an operator algebra, and suppose \(\delta\colon G\curvearrowright\mathcal{A}\) is a coaction by a discrete group \(G\). Let \(M\colon c_{0}(G)\to\mathbb{B}(\ell^{2}(G))\) be the canonical representation by diagonal multiplication operators, and define \(j_{c_{0}(G)}\colon c_{0}(G)\to\mathbb{B}(\mathcal{H}\otimes\ell^{2}(G))\) by \(j_{c_{0}(G)}(f)\coloneqq I\otimes M_{f}\) for all \(f\in c_{0}(G)\). We view \(\delta_{\lambda}\coloneqq(\operatorname{id}_{\mathcal{A}}\otimes\lambda)\circ\delta\) as a homomorphism \(\mathcal{A}\to\mathbb{B}(\mathcal{H}\otimes\ell^{2}(G))\).
**Definition 3.1**.: _The (reduced) crossed product of \(\mathcal{A}\) by the coaction \(\delta\) of \(G\) is the operator algebra_
\[\mathcal{A}\rtimes_{\delta}G\coloneqq\overline{\operatorname{alg}}(\{\delta_{ \lambda}(a)j_{c_{0}(G)}(f):a\in\mathcal{A},f\in c_{0}(G)\})\subseteq\mathbb{B} (\mathcal{H}\otimes\ell^{2}(G)). \tag{3.1}\]
For \(g\in G\), there is a completely isometric automorphism \(\hat{\delta}_{g}\colon\mathcal{A}\rtimes_{\delta}G\to\mathcal{A}\rtimes_{ \delta}G\) such that \(\hat{\delta}_{g}(x)=(I\otimes\rho_{g}^{*})x(I\otimes\rho_{g})\) for all \(x\in\mathcal{A}\rtimes_{\delta}G\), where \(\rho\colon G\to\mathcal{U}(\ell^{2}(G))\) is the right regular representation of \(G\). We call \(\hat{\delta}\) the _dual action_ of \(G\) on \(\mathcal{A}\rtimes_{\delta}G\). On generators, we have
\[\hat{\delta}_{g}(\delta_{\lambda}(a)j_{c_{0}(G)}(f))=\delta_{\lambda}(a)j_{c_ {0}(G)}(\sigma_{g}(f)),\]
for all \(a\in\mathcal{A}\) and \(f\in c_{0}(G)\), where \(\sigma_{g}\colon c_{0}(G)\to c_{0}(G)\) is given by \(\sigma_{g}(f)(h)=f(hg)\) for all \(g,h\in G\) and \(f\in c_{0}(G)\).
Consider the maps
* \(k_{\mathcal{A}}(a)\coloneqq\delta_{\lambda}(a)\otimes I_{\ell^{2}(G)}\) for all \(a\in\mathcal{A}\);
* \(k_{c_{0}(G)}(f)\coloneqq I\otimes((M\otimes M)\circ\nu(f))\) for all \(f\in c_{0}(G)\);
* and \(k_{G}(g)\coloneqq I\otimes I\otimes\lambda_{g}\) for all \(g\in G\),
where \(\nu\colon c_{0}(G)\to\mathcal{M}(c_{0}(G)\otimes c_{0}(G))\) is given by \(\nu(f)(g,h)\coloneqq f(gh^{-1})\) for all \(g,h\in G\) and \(f\in c_{0}(G)\). We define the _reduced double crossed product_\(\mathcal{A}\rtimes_{\delta}G\rtimes_{\delta}^{r}G\) to be the operator algebra
\[\mathcal{A}\rtimes_{\delta}G\rtimes_{\delta}^{r}G\coloneqq\overline{ \operatorname{alg}}(\{k_{\mathcal{A}}(\mathcal{A})k_{c_{0}(G)}(c_{0}(G))k_{G} (G)\})\subseteq\mathbb{B}(\mathcal{H}\otimes\ell^{2}(G)\otimes\ell^{2}(G)).\]
**Remark 3.2**.: _The operator algebra \(\mathcal{A}\rtimes_{\delta}G\rtimes_{\hat{\delta}}^{r}G\) coincides with the relative crossed product of \(\mathcal{A}\rtimes_{\delta}G\) by the action \(\hat{\delta}\) of \(G\) with respect to the \(\hat{\delta}\)-admissible cover \(C^{*}(\mathcal{A}\rtimes_{\delta}G)\subseteq\mathbb{B}(\mathcal{H}\otimes \ell^{2}(G))\) of \(\mathcal{A}\rtimes_{\delta}G\), as defined in [16, Definition 3.2]._
Next, define unitary operators \(U,S\in\mathcal{U}(\ell^{2}(G)\otimes\ell^{2}(G))\) by setting
\[Ue_{g}\otimes e_{h}\coloneqq e_{g}\otimes e_{gh}\quad\text{ and }\quad Se_{g} \otimes e_{h}\coloneqq e_{g}\otimes e_{h^{-1}}\]
for all \(g,h\in G\), where \(\{e_{g}:g\in G\}\) is the canonical orthonormal basis for \(\ell^{2}(G)\). There is a reduced coaction \(\hat{\hat{\delta}}\) of \(G\) on \(\mathcal{A}\rtimes_{\delta}G\rtimes_{\delta}^{r}G\) given by
\[\hat{\hat{\delta}}(x)\coloneqq(I_{\mathcal{H}}\otimes I_{\ell^{2}(G)} \otimes U)(x\otimes I_{\ell^{2}(G)})(I_{\mathcal{H}}\otimes I_{\ell^{2}(G)} \otimes U^{*}) \tag{3.2}\]
for all \(x\in\mathcal{A}\rtimes_{\delta}G\rtimes_{\hat{\delta}}^{r}G\). For \(a\in\mathcal{A}\), \(f\in c_{0}(G)\), and \(g\in G\), a straightforward computation yields the formula
\[\hat{\hat{\delta}}(k_{\mathcal{A}}(a)k_{c_{0}(G)}(f)k_{G}(g))=k_{\mathcal{A}} (a)k_{c_{0}(G)}(f)k_{G}(g)\otimes\lambda_{g}.\]
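To indicate where this formula comes from (a sketch of the computation, using only the definition of \(U\) above), note that for all \(g,h,k\in G\),
\[U(\lambda_{g}\otimes I)U^{*}(e_{h}\otimes e_{k})=U(\lambda_{g}\otimes I)(e_{h}\otimes e_{h^{-1}k})=U(e_{gh}\otimes e_{h^{-1}k})=e_{gh}\otimes e_{gk},\]
so \(U(\lambda_{g}\otimes I)U^{*}=\lambda_{g}\otimes\lambda_{g}\), while a similar check shows that \(U(M_{f}\otimes I)U^{*}=M_{f}\otimes I\) for every bounded function \(f\) on \(G\). Since \(k_{\mathcal{A}}(a)\otimes I_{\ell^{2}(G)}\) acts as the identity on the last two tensor factors and \(k_{c_{0}(G)}(f)\otimes I_{\ell^{2}(G)}\) acts on them by diagonal multiplication operators tensored with the identity, conjugation by \(I_{\mathcal{H}}\otimes I_{\ell^{2}(G)}\otimes U\) fixes both, and the displayed formula follows by applying the first identity to \(k_{G}(g)\otimes I_{\ell^{2}(G)}\).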
We are now ready to extend Katayama duality [10, Theorem 8] to the non-selfadjoint setting. Recall that when \(\delta\) is normal, \(\delta_{\lambda}\) is a completely isometric isomorphism from \(\mathcal{A}\) onto \(\delta_{\lambda}(\mathcal{A})\).
**Theorem 3.3** (Katayama duality for operator algebras).: _Let \(\mathcal{A}\subseteq\mathbb{B}(\mathcal{H})\) be an operator algebra, let \(G\) be a discrete group, and suppose \(\delta\colon G\not\subset\mathcal{A}\) is a normal coaction. Then, there is a completely isometric isomorphism_
\[\Psi\colon\mathcal{A}\rtimes_{\delta}G\rtimes_{\delta}^{r}G\to\delta_{ \lambda}(\mathcal{A})\otimes\mathbb{K}\]
_such that the reduced coaction \(\hat{\hat{\delta}}\) from (3.2) on the double crossed product is conjugated to the reduced coaction \(\tilde{\delta}\) of \(G\) on \(\delta_{\lambda}(\mathcal{A})\otimes\mathbb{K}\) given by_
\[\tilde{\delta}(x)=(I_{\mathcal{H}}\otimes I_{\ell^{2}(G)}\otimes U^{*})[(\delta _{\lambda}\otimes\Sigma)\circ(\delta_{\lambda}\circ(\delta_{\lambda}^{-1} \otimes\operatorname{id}))(x)](I_{\mathcal{H}}\otimes I_{\ell^{2}(G)}\otimes U), \tag{3.3}\]
_for all \(x\in\delta_{\lambda}(\mathcal{A})\otimes\mathbb{K}\), where \(\Sigma\colon C_{\lambda}^{*}(G)\otimes\mathbb{K}\to\mathbb{K}\otimes C_{ \lambda}^{*}(G)\) is the flip map._
Proof.: The proof is essentially the same as the proof of Katayama duality for C*-algebras as given in [10, Lemma A.70 and Theorem A.69], so we give only a sketch. Consider the unitary \(V\coloneqq I_{\mathcal{H}}\otimes US\in\mathcal{U}(\mathcal{H}\otimes\ell^{2}(G )\otimes\ell^{2}(G))\). Direct calculations yield the following formulas:
* \(\operatorname{Ad}(V)(k_{\mathcal{A}}(a))=\delta_{\lambda}(a)\otimes\lambda_{g}\) for \(g\in G\) and \(a\in\mathcal{A}_{g}\);
* \(\operatorname{Ad}(V)(k_{c_{0}(G)}(f))=I\otimes I\otimes M_{f}\) for \(f\in c_{0}(G)\);
* \(\operatorname{Ad}(V)(k_{G}(g))=I\otimes I\otimes\rho_{g}\) for \(g\in G\).
Define \(\Psi\) to be the restriction to the double crossed product of conjugation by the unitary \(V\). It follows from (i), (ii), and (iii) that \(\Psi\) carries \(\mathcal{A}\rtimes_{\delta}G\rtimes_{\delta}^{r}G\) completely isometrically into \(\delta_{\lambda}(\mathcal{A})\otimes\mathbb{K}\).
Next, we show that \(\Psi\) is surjective. Using (i), (ii), and (iii) again, together with the fact that \(\overline{c_{0}(G)C_{\rho}^{*}(G)}=\mathbb{K}\), we have
\[\operatorname{Ad}(V)(k_{c_{0}(G)}(c_{0}(G)))\operatorname{Ad}(V)(k_{G}(G))= \mathbb{C}I_{\mathcal{H}}\otimes\mathbb{C}I_{\ell^{2}(G)}\otimes\overline{c_{0 }(G)C_{\rho}^{*}(G)}=\mathbb{C}I_{\mathcal{H}}\otimes\mathbb{C}I_{\ell^{2}(G) }\otimes\mathbb{K},\]
so it suffices to show that
\[\overline{\operatorname{Ad}(V)(k_{\mathcal{A}}(\mathcal{A}))[\mathbb{C}I_{ \mathcal{H}}\otimes\mathbb{C}I_{\ell^{2}(G)}\otimes\mathbb{K}]}=\delta_{ \lambda}(\mathcal{A})\otimes\mathbb{K}.\]
Since \(\delta_{\lambda}\) is nondegenerate and \(\mathbb{K}=\overline{C_{\lambda}^{*}(G)c_{0}(G)}\), we see that
\[\overline{\delta_{\lambda}(\mathcal{A})(\mathbb{C}I_{\mathcal{H}}\otimes \mathbb{K})}=\overline{\delta_{\lambda}(\mathcal{A})(\mathbb{C}I_{\mathcal{H }}\otimes C_{\lambda}^{*}(G))(\mathbb{C}I_{\mathcal{H}}\otimes c_{0}(G))}= \overline{(\mathcal{A}\otimes C_{\lambda}^{*}(G))(\mathbb{C}I_{\mathcal{H}} \otimes c_{0}(G))}=\mathcal{A}\otimes\mathbb{K}.\]
A straightforward (but lengthy) computation then shows that
\[\operatorname{Ad}(V)(k_{\mathcal{A}}(\mathcal{A}))[\mathbb{C}I_{\mathcal{H}} \otimes\mathbb{C}I_{\ell^{2}(G)}\otimes\mathbb{K}]=(\delta_{\lambda}\otimes \operatorname{id}_{\mathbb{K}})[\delta_{\lambda}(\mathcal{A})(\mathbb{C}I_{ \mathcal{H}}\otimes\mathbb{K})],\]
so by taking closures and using what we established above, we obtain
\[\overline{\operatorname{Ad}(V)(k_{\mathcal{A}}(\mathcal{A}))[ \mathbb{C}I_{\mathcal{H}}\otimes\mathbb{C}I_{\ell^{2}(G)}\otimes\mathbb{K}]} =\overline{(\delta_{\lambda}\otimes\operatorname{id}_{\mathbb{K }})[\delta_{\lambda}(\mathcal{A})(\mathbb{C}I_{\mathcal{H}}\otimes\mathbb{K})]}\] \[=(\delta_{\lambda}\otimes\operatorname{id}_{\mathbb{K}})[ \overline{\delta_{\lambda}(\mathcal{A})(\mathbb{C}I_{\mathcal{H}}\otimes \mathbb{K})}]\] \[=\delta_{\lambda}(\mathcal{A})\otimes\mathbb{K}.\]
This proves that \(\Psi\) is surjective, so \(\Psi\) is a completely isometric isomorphism.
Finally, the double crossed product is invariant for the induced double-dual coaction \(\hat{\hat{\delta}}\), so the fact that \(\Psi\) conjugates \(\hat{\hat{\delta}}\) into \(\tilde{\delta}\) given in formula (3.3) follows from a direct calculation.
Theorem 3.3 extends Katayama's original result [12, Theorem 8] by Remark 2.2 and Proposition 2.3. Before turning to our main result, we need a technical lemma.
**Lemma 3.4**.: _Suppose \(\mathcal{A}\) has a contractive approximate identity, and assume that \(\delta\colon G\not\subset\mathcal{A}\) is a coaction by a discrete group \(G\). Then, the coaction crossed product \(\mathcal{A}\rtimes_{\delta}G\) has a contractive approximate identity._
Proof.: First, suppose we have a contractive approximate identity \(\{a_{\alpha}\}_{\alpha\in I}\) of \(\mathcal{A}\) and let \(\mathbb{E}_{e}\) be the canonical conditional expectation onto \(\mathcal{A}_{e}\), as described in SS 2.1. We will show that \(\{\mathbb{E}_{e}(a_{\alpha})\}_{\alpha\in I}\), which is a net in \(\mathcal{A}_{e}\), is a contractive approximate identity for \(\mathcal{A}\). It is clearly contractive, because \(\mathbb{E}_{e}\) is a contractive idempotent. Now, for \(a\in\mathcal{A}_{g}\) and \(g\in G\) we first show that \(a\mathbb{E}_{e}(a_{\alpha})\to a\). Indeed,
\[a=\mathbb{E}_{g}(a)=\lim_{\alpha}\mathbb{E}_{g}(aa_{\alpha})=\lim_{\alpha} \mathbb{E}_{g}(a\mathbb{E}_{e}(a_{\alpha}))=\mathbb{E}_{g}(a)\lim_{\alpha} \mathbb{E}_{e}(a_{\alpha})=a\lim_{\alpha}\mathbb{E}_{e}(a_{\alpha}).\]
Now let \(a\in\mathcal{A}\) and \(\epsilon>0\). As the linear span of the spectral subspaces is dense, there are a finite subset \(F\subseteq G\) and elements \(a_{g}\in\mathcal{A}_{g}\) for \(g\in F\) such that \(b\coloneqq\sum_{g\in F}a_{g}\) is \(\epsilon\)-close to \(a\). Since \(\|\mathbb{E}_{e}(a_{\alpha})\|\leq 1\) for all \(\alpha\), we get \(\|a\mathbb{E}_{e}(a_{\alpha})-a\|\leq 2\epsilon+\|b\mathbb{E}_{e}(a_{\alpha})-b\|\), and the last term tends to \(0\) by the first part of the proof applied to each \(a_{g}\). As \(\epsilon>0\) was arbitrary, \(\{\mathbb{E}_{e}(a_{\alpha})\}_{\alpha\in I}\) is also an approximate identity for \(a\).
Hence, we may assume without loss of generality that \(\{a_{\alpha}\}_{\alpha\in I}\) is a contractive approximate unit for \(\mathcal{A}\) in \(\mathcal{A}_{e}\). Let \(\{\chi_{F}\}_{F}\) be the net of characteristic functions supported on finite subsets \(F\subseteq G\), which we denote by \(\operatorname{Fin}(G)\). We will show that the net \(\{a_{\alpha}\chi_{F}\}_{(\alpha,F)\in I\times\operatorname{Fin}(G)}\) is a contractive approximate identity for \(\mathcal{A}\rtimes_{\delta}G\).
It suffices to show that \(\{a_{\alpha}\chi_{F}\}_{(\alpha,F)\in I\times\operatorname{Fin}(G)}\) is an approximate identity on generators of the form \(\delta_{\lambda}(a)j_{c_{0}(G)}(\chi_{E})\), where \(a\in\mathcal{A}_{g}\) for \(g\in G\), and \(E\subseteq G\) is finite. In this case, if we take \(F\) finite such that \(gE,E\subseteq F\), and by multiplying on the right we see that
\[\delta_{\lambda}(a)j_{c_{0}(G)}(\chi_{E})\delta_{\lambda}(a_{\alpha})j_{c_{0} (G)}(\chi_{F})=\delta_{\lambda}(a)(I\otimes M_{\chi_{E}})(a_{\alpha}\otimes I )(I\otimes M_{\chi_{F}})=\delta_{\lambda}(aa_{\alpha})j_{c_{0}(G)}(\chi_{E})\]
converges to \(\delta_{\lambda}(a)j_{c_{0}(G)}(\chi_{E})\). By multiplying on the left,
\[\delta_{\lambda}(a_{\alpha})j_{c_{0}(G)}(\chi_{F})\delta_{\lambda}( a)j_{c_{0}(G)}(\chi_{E}) =(a_{\alpha}\otimes I)(I\otimes M_{\chi_{F}})(a\otimes\lambda_{g})( I\otimes M_{\chi_{E}})\] \[=(a_{\alpha}\otimes I)(I\otimes M_{\chi_{F}})(I\otimes M_{\chi_{ gE}})(a\otimes\lambda_{g})\] \[=(a_{\alpha}\otimes I)(I\otimes M_{\chi_{gE}})(a\otimes\lambda_{g})\] \[=\delta_{\lambda}(a_{\alpha}a)j_{c_{0}(G)}(\chi_{E})\]
converges to \(\delta_{\lambda}(a)j_{c_{0}(G)}(\chi_{E})\). Hence, we see that \(\{a_{\alpha}\chi_{F}\}_{(\alpha,F)\in I\times\operatorname{Fin}(G)}\) is an approximate identity for linear generators of \(\mathcal{A}\rtimes_{\delta}G\), and therefore for their closure as well.
We are now ready for the main result of this paper.
**Theorem 3.5**.: _Let \(\mathcal{A}\) be an operator algebra with a contractive approximate identity, and let \(\kappa_{\mathcal{A}}\colon\mathcal{A}\to C^{*}_{\operatorname{env}}( \mathcal{A})\) be the canonical completely isometric inclusion. Suppose \(\delta\colon G\not\subset\mathcal{A}\) is a normal coaction of a discrete group \(G\). Then, there exists a normal coaction \(\delta_{\operatorname{env}}\colon G\not\subset C^{*}_{\operatorname{env}}( \mathcal{A})\) such that \(\delta_{\operatorname{env}}\circ\kappa_{\mathcal{A}}=(\kappa_{\mathcal{A}} \otimes\operatorname{id}_{C^{*}(G)})\circ\delta\)._
Proof.: Let \(\delta_{\lambda}=(\operatorname{id}\otimes\lambda)\circ\delta\) be the reduced coaction of \(G\) on \(\mathcal{A}\) associated with \(\delta\). By Proposition 2.7, it suffices to find a reduced coaction \(\delta_{\operatorname{env},r}\) on the C*-envelope \(C^{*}_{\operatorname{env}}(\mathcal{A})\) such that \(\kappa_{\mathcal{A}}\) is \(\delta_{\lambda}-\delta_{\operatorname{env},r}\)-equivariant.
Let \(\Psi\colon\mathcal{A}\rtimes_{\delta}G\rtimes_{\delta}^{r}G\to\delta_{ \lambda}(\mathcal{A})\otimes\mathbb{K}\) be the Katayama isomorphism from Theorem 3.3, and let \(\Psi^{-1}_{\operatorname{env}}\colon C^{*}_{\operatorname{env}}(\delta_{ \lambda}(\mathcal{A})\otimes\mathbb{K})\to C^{*}_{\operatorname{env}}( \mathcal{A}\rtimes_{\delta}G\rtimes_{\delta}^{r}G)\) be the *-isomorphism induced from \(\Psi^{-1}\) between the C*-envelopes. By the co-universal property of the C*-envelope, the dual action \(\hat{\delta}\) of \(G\) on \(\mathcal{A}\rtimes_{\delta}G\) extends to an action, also denoted \(\hat{\delta}\), of \(G\) on \(C^{*}_{\operatorname{env}}(\mathcal{A}\rtimes_{\delta}G)\) (see [11, Lemma 3.4]). By Lemma 3.4, \(\mathcal{A}\rtimes_{\delta}G\) has a contractive approximate identity, so [11, Theorem 2.5] gives us a *-isomorphism \(\theta\colon C^{*}_{\operatorname{env}}(\mathcal{A}\rtimes_{\delta}G\rtimes_ {\delta}^{r}G)\to C^{*}_{\operatorname{env}}(\mathcal{A}\rtimes_{\delta}G) \rtimes_{\delta}^{r}G\) that maps generators to generators. We summarize the above discussion in the following commutative diagram.
(3.4)
Here, \(\kappa_{\delta_{\lambda}(\mathcal{A})\otimes\mathbb{K}}\) and \(\hat{\kappa}\) are the canonical completely isometric inclusions into the C*-envelopes, and \(\varphi\coloneqq\theta\circ\hat{\kappa}\).
Let \(\hat{\hat{\delta}}\colon G\not\subset\mathcal{A}\rtimes_{\delta}G\rtimes_{ \delta}^{r}G\) denote the canonical reduced coaction of \(G\) on the double crossed product, and let \(\varepsilon\colon G\not\subset C^{*}_{\operatorname{env}}(\mathcal{A} \rtimes_{\delta}G)\rtimes_{\delta}^{r}G\) denote the canonical reduced coaction on the C*-crossed product. As \(\varphi\) maps generators to generators, it is \(\hat{\hat{\delta}}-\varepsilon\)-equivariant. Let \(\tilde{\delta}\) denote the reduced coaction of \(G\) on \(\delta_{\lambda}(\mathcal{A})\otimes\mathbb{K}\) from (3.3) in Theorem 3.3. Define \(\tilde{\delta}_{\operatorname{env}}\colon G\not\subset C^{*}_{\operatorname{ env}}(\delta_{\lambda}(\mathcal{A})\otimes\mathbb{K})\) as the reduced coaction given by \(\tilde{\delta}_{\operatorname{env}}\coloneqq(\Psi_{\operatorname{env}}\circ \theta^{-1}\otimes\operatorname{id}_{C^{*}_{\lambda}(G)})\circ\varepsilon \circ\theta\circ(\Psi^{-1})_{\operatorname{env}}\). Here, we used that \((\Psi^{-1}_{\operatorname{env}})^{-1}=\Psi_{\operatorname{env}}\). By commutativity of the diagram (3.4) and equivariance of \(\varphi\) and \(\Psi\), we have
\[\varepsilon\circ\theta\circ\Psi^{-1}_{\operatorname{env}}\circ\kappa_{\delta_{ \lambda}(\mathcal{A})\otimes\mathbb{K}}=([\theta\circ(\Psi^{-1})_{\operatorname {env}}\circ\kappa_{\delta_{\lambda}(\mathcal{A})\otimes\mathbb{K}}]\otimes \operatorname{id}_{C^{*}_{\lambda}(G)})\circ\tilde{\delta},\]
so \(\tilde{\delta}_{\operatorname{env}}\circ\kappa_{\delta_{\lambda}(\mathcal{A}) \otimes\mathbb{K}}=(\kappa_{\delta_{\lambda}(\mathcal{A})\otimes\mathbb{K}} \otimes\operatorname{id}_{C^{*}_{\lambda}(G)})\circ\tilde{\delta}\), i.e., \(\kappa_{\delta_{\lambda}(\mathcal{A})\otimes\mathbb{K}}\) is \(\tilde{\delta}-\tilde{\delta}_{\operatorname{env}}\)-equivariant.
Next, let \(P_{e}\) denote the rank-one projection in \(\mathbb{K}\) onto the subspace spanned by the point-mass function at the identity element \(e\) of \(G\). We now show that \(\delta_{\lambda}(\mathcal{A})\otimes P_{e}\) is invariant under \(\tilde{\delta}\). For this, it suffices to check that \(\tilde{\delta}(a\otimes P_{e})\in\delta_{\lambda}(\mathcal{A})\otimes P_{e} \otimes C^{*}_{\lambda}(G)\) for all \(a\in\mathcal{A}_{g}\) and \(g\in G\) (recall the explicit definition of \(\tilde{\delta}\) from the statement of Theorem 3.3). For all \(a\in\mathcal{A}_{g}\) and \(g\in G\), we have
\[\tilde{\delta}(\delta_{\lambda}(a)\otimes P_{e}) =(I_{\mathcal{H}}\otimes I_{\ell^{2}(G)}\otimes U^{*})[\delta_{ \lambda}(a)\otimes P_{e}\otimes\lambda_{g}](I_{\mathcal{H}}\otimes I_{\ell^{2}(G )}\otimes U)\] \[=\delta_{\lambda}(a)\otimes U^{*}(P_{e}\otimes\lambda_{g})U\] \[=\delta_{\lambda}(a)\otimes P_{e}\otimes\lambda_{g}.\]
Let \(\tilde{\delta}^{\prime}\) denote the restriction of \(\tilde{\delta}\) to \(\delta_{\lambda}(\mathcal{A})\otimes P_{e}\).
By [13, Corollary 2.7], there is a *-isomorphism \(C^{*}_{\mathrm{env}}(\delta_{\lambda}(\mathcal{A})\otimes\mathbb{K})\to C^{*}_{\mathrm{env}}(\delta_{\lambda}(\mathcal{A}))\otimes\mathbb{K}\) given by \(\kappa_{\delta_{\lambda}(\mathcal{A})}(\delta_{\lambda}(a))\otimes K\mapsto \kappa_{\delta_{\lambda}(\mathcal{A})\otimes\mathbb{K}}(\delta_{\lambda}(a)\otimes K)\) for all \(a\in\mathcal{A}\) and \(K\in\mathbb{K}\). An argument similar to the one above now shows that the C*-subalgebra \(C^{*}_{\mathrm{env}}(\delta_{\lambda}(\mathcal{A}))\otimes P_{e}\) of \(C^{*}_{\mathrm{env}}(\delta_{\lambda}(\mathcal{A}))\otimes\mathbb{K}\cong C^{*}_{\mathrm{env}}(\delta_{\lambda}(\mathcal{A})\otimes\mathbb{K})\) is \(\tilde{\delta}_{\mathrm{env}}\)-invariant. Let \(\tilde{\delta}^{\prime}_{\mathrm{env}}\) denote the restriction of \(\tilde{\delta}_{\mathrm{env}}\) to \(C^{*}_{\mathrm{env}}(\delta_{\lambda}(\mathcal{A}))\otimes P_{e}\). It now follows that the inclusion \(\kappa_{\delta_{\lambda}(\mathcal{A})\otimes\mathrm{id}}\) is \(\tilde{\delta}^{\prime}-\tilde{\delta}^{\prime}_{\mathrm{env}}\)-equivariant. This is summarized in the right-most part of the diagram below.
(3.5)
The map \(\alpha\colon\mathcal{A}\to\delta_{\lambda}(\mathcal{A})\otimes P_{e}\) given by \(\alpha(a)=\delta_{\lambda}(a)\otimes P_{e}\) for all \(a\in\mathcal{A}\) is a completely isometric isomorphism. For \(g\in G\) and \(a\in A_{g}\), we have
\[\tilde{\delta}^{\prime}\circ\alpha(a)=\tilde{\delta}^{\prime}(\delta_{\lambda }(a)\otimes P_{e})=\delta_{\lambda}(a)\otimes P_{e}\otimes\lambda_{g}=(\alpha \otimes\mathrm{id})(a\otimes\lambda_{g})=(\alpha\otimes\mathrm{id})\delta_{ \lambda}(a)\]
which shows that \(\alpha\) is \(\delta_{\lambda}-\tilde{\delta}^{\prime}\)-equivariant.
Finally, let \(\alpha_{\mathrm{env}}\colon C^{*}_{\mathrm{env}}(\mathcal{A})\to C^{*}_{ \mathrm{env}}(\delta_{\lambda}(\mathcal{A}))\otimes P_{e}\cong C^{*}_{\mathrm{ env}}(\delta_{\lambda}(\mathcal{A})\otimes P_{e})\) be the composition of the canonical *-isomorphism \(C^{*}_{\mathrm{env}}(\mathcal{A})\cong C^{*}_{\mathrm{env}}(\delta_{\lambda}( \mathcal{A}))\otimes P_{e}\) and the *-isomorphism between the C*-envelopes induced from \(\alpha\), and consider the reduced coaction \(\delta_{\mathrm{env},r}\coloneqq(\alpha_{\mathrm{env}}^{-1}\otimes\mathrm{id })\circ\tilde{\delta}^{\prime}\circ\alpha_{\mathrm{env}}\) of \(G\) on \(C^{*}_{\mathrm{env}}(\mathcal{A})\). Using commutativity of the diagram (3.5) and that \(\kappa_{\delta_{\lambda}(\mathcal{A})\otimes\mathrm{id}}\) and \(\alpha\) are equivariant, it follows that
\[\delta_{\mathrm{env},r}\circ\kappa_{\mathcal{A}}=(\alpha_{\mathrm{env}}^{-1} \otimes\mathrm{id})\circ\tilde{\delta}^{\prime}\circ\kappa_{\delta_{\lambda}( \mathcal{A})\otimes\mathrm{id}}\circ\alpha=(\kappa_{\mathcal{A}}\circ\alpha^{- 1}\otimes\mathrm{id})\circ(\alpha\otimes\mathrm{id})\circ\delta_{\lambda}=( \kappa_{\mathcal{A}}\otimes\mathrm{id})\circ\delta_{\lambda},\]
i.e., \(\kappa_{\mathcal{A}}\) is \(\delta_{\lambda}-\delta_{\mathrm{env},r}\)-equivariant. This means that \(C^{*}_{\mathrm{env}}(\mathcal{A})\) admits a reduced coaction \(\delta_{\mathrm{env},r}\) which extends the reduced coaction \(\delta_{\lambda}\) on \(\mathcal{A}\).
Below, \(C^{*}_{\mathrm{env}}(\mathcal{A},\delta)\) denotes the equivariant C*-envelope for the coaction \(\delta\colon G\not\subset\mathcal{A}\).
### The boundary quotient is a C*-cover
Throughout this subsection, we assume that \(\mathfrak{C}\) is a _cancellative_ small category and that \(I_{l}\ltimes\Omega\) is Hausdorff. Inside the groupoid C*-algebra \(C^{*}_{r}(I_{l}\ltimes\Omega)\) there is a natural operator subalgebra
\[\mathcal{A}_{r}(\mathfrak{C})\coloneqq\overline{\mathrm{alg}}(\{1_{[c,\Omega(\mathfrak{d}(c)\mathfrak{C})]}:c\in\mathfrak{C}\})\subseteq C^{*}_{r}(I_{l}\ltimes\Omega).\]
When \(I_{l}\ltimes\Omega\) is Hausdorff, the *-isomorphism from (2.6) restricts to a completely isometric isomorphism between \(\mathcal{A}_{r}(\mathfrak{C})\) and the operator algebra \(\mathcal{A}_{\lambda}(\mathfrak{C})\) from Definition 2.9. We will show that the boundary quotient map \(q_{\partial}\colon C^{*}_{r}(I_{l}\ltimes\Omega)\to C^{*}_{r}(I_{l}\ltimes \partial\Omega)\) from (2.7) is completely isometric on \(\mathcal{A}_{r}(\mathfrak{C})\). Our strategy is inspired by the proof of [1, Lemma 3.2]. For each \(\chi\in\partial\Omega\) and each subset \(X\subseteq\mathfrak{C}\), put
\[[X,\chi]=\{[c,\chi]:c\in X,\chi(\mathfrak{d}(c)\mathfrak{C})=1\}\subseteq(I_ {l}\ltimes\partial\Omega)_{\chi}.\]
Since \(\mathfrak{C}\) is cancellative, we have \([c,\chi]=[d,\chi]\) if and only if \(c=d\). Indeed, the equality \([c,\chi]=[d,\chi]\) implies that there exists \(Y\in\chi^{-1}(1)\) such that \(cy=dy\) for \(y\in Y\) and, by right cancellation, this implies that \(c=d\).
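In particular, for each \(\chi\in\partial\Omega\) the map \(c\mapsto[c,\chi]\) is a bijection from \(\{c\in\mathfrak{C}:\chi(\mathfrak{d}(c)\mathfrak{C})=1\}\) onto \([\mathfrak{C},\chi]\); this is the identification underlying the unitary \(U\) appearing in the proof of Lemma 4.1 below.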
**Lemma 4.1**.: _For each \(\chi\in\partial\Omega\), there is a *-homomorphism_
\[\vartheta_{\chi}\colon C^{*}_{r}(I_{l}\ltimes\Omega)\to\mathbb{B}(\ell^{2}([ \mathfrak{C},\chi]))\]
_determined by_
\[\vartheta_{\chi}(1_{[s,\Omega(\mathrm{dom}(s))]})e_{[d,\chi]}=\begin{cases}e_{ [s(d),\chi]}&\text{ if }d\in\mathrm{dom}(s),\\ 0&\text{ if }d\notin\mathrm{dom}(s),\end{cases} \tag{4.1}\]
_for all \(s\in I^{\times}_{l}\) and \([d,\chi]\in[\mathfrak{C},\chi]\)._
Proof.: For \(\chi\in\partial\Omega\), let \(\mathfrak{C}_{\chi}\coloneqq\{c\in\mathfrak{C}:\chi(\mathfrak{d}(c) \mathfrak{C})=1\}\). It is easy to verify directly that \(\mathfrak{C}_{\chi}\) is an \(I_{l}\)-invariant subset of \(\mathfrak{C}\), so that \(\ell^{2}(\mathfrak{C}_{\chi})\subseteq\ell^{2}(\mathfrak{C})\) is a \(C^{*}_{\lambda}(\mathfrak{C})\)-reducing subspace. Denote by \(\mu\) the associated representation of \(C^{*}_{\lambda}(\mathfrak{C})\) on \(\ell^{2}(\mathfrak{C}_{\chi})\), and let \(U\colon\ell^{2}(\mathfrak{C}_{\chi})\to\ell^{2}([\mathfrak{C},\chi])\) be the unitary given by \(Ue_{c}=e_{[c,\chi]}\) for all \(c\in\mathfrak{C}_{\chi}\). Define \(\vartheta_{\chi}\colon C^{*}_{r}(I_{l}\ltimes\Omega)\to\mathbb{B}(\ell^{2}([ \mathfrak{C},\chi]))\) by \(\vartheta_{\chi}(a)\coloneqq U\mu(\mathfrak{j}(a))U^{*}\), where \(\mathfrak{j}\) is the *-isomorphism from (2.6). A straightforward computation shows that \(\vartheta_{\chi}\) satisfies (4.1).
If \(\mathfrak{C}\) is a cancellative _monoid_, then \(\vartheta_{\chi}\) is unitarily equivalent to the map \(\mathfrak{j}\) from (2.6), and is therefore injective. In general, we only get injectivity by taking the sum of all such representations.
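One way to see the first claim (a sketch, using that in the monoid case every \(\chi\in\Omega\) satisfies \(\chi(\mathfrak{C})=1\)): if \(\mathfrak{C}\) is a monoid, then \(\mathfrak{d}(c)\mathfrak{C}=\mathfrak{C}\) for every \(c\in\mathfrak{C}\), so the invariant set in the proof of Lemma 4.1 is
\[\mathfrak{C}_{\chi}=\{c\in\mathfrak{C}:\chi(\mathfrak{d}(c)\mathfrak{C})=1\}=\mathfrak{C};\]
hence \(\ell^{2}(\mathfrak{C}_{\chi})=\ell^{2}(\mathfrak{C})\), the representation \(\mu\) is \(\mathfrak{j}\) itself, and \(\vartheta_{\chi}=U\mathfrak{j}(\,\cdot\,)U^{*}\).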
**Lemma 4.2**.: _The *-homomorphism_
\[\vartheta\coloneqq\oplus_{\chi\in\partial\Omega}\vartheta_{\chi}\colon C^{*} _{r}(I_{l}\ltimes\Omega)\to\mathbb{B}\left(\bigoplus_{\chi\in\partial\Omega} \ell^{2}([\mathfrak{C},\chi])\right)\]
_is injective._
Proof.: Consider the diagram
(4.2)
where \(E_{r}\colon C^{*}_{r}(I_{l}\ltimes\Omega)\to C_{0}(\Omega)\) and \(\Phi_{\chi}\colon\mathbb{B}(\ell^{2}([\mathfrak{C},\chi]))\to\ell^{\infty}([ \mathfrak{C},\chi])\) for all \(\chi\in\partial\Omega\) are the canonical faithful conditional expectations. We claim that diagram (4.2) commutes. It is enough to check this on the spanning elements \(1_{[s,\Omega(\mathrm{dom}(s))]}\) for all \(s\in I^{\times}_{l}\). To show that, fix \(\chi\in\partial\Omega\), and note that we have
\[\Phi_{\chi}(\vartheta_{\chi}(1_{[s,\Omega(\mathrm{dom}(s))]}))=1_{[\mathrm{fix }(s),\chi]},\]
where \(\mathrm{fix}(s)\coloneqq\{x\in\mathrm{dom}(s):s(x)=x\}\). On the other hand, we have that \(E_{r}(1_{[s,\Omega(\mathrm{dom}(s))]})=1_{[s,\Omega(\mathrm{dom}(s))]\cap\Omega}\). By [1, Proposition 3.14], we get \([s,\Omega(\mathrm{dom}(s))]\cap\Omega=\mathcal{F}_{s}\) where we define \(\mathcal{F}_{s}\coloneqq\bigcup_{X\in\mathcal{J},X\subseteq\mathrm{fix}(s)}\Omega(X)\). Since \(I_{l}\ltimes\Omega\) is Hausdorff, [1, Lemma 4.1(i)] implies that there exists
a finite (possibly empty) subset \(F\subseteq\mathcal{J}^{\times}\) such that \(\operatorname{fix}(s)=\bigcup_{X\in F}X\), so that \(\mathcal{F}_{s}=\bigcup_{X\in F}\Omega(X)\). By the inclusion-exclusion principle, we have that
\[1_{\mathcal{F}_{s}}=1_{\bigcup_{X\in F}\Omega(X)}=\sum_{\emptyset\neq A\subseteq F }(-1)^{|A|-1}\prod_{X\in A}1_{\Omega(X)}.\]
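For illustration (not needed for the argument), in the simplest nontrivial case \(F=\{X_{1},X_{2}\}\) this identity reads
\[1_{\Omega(X_{1})\cup\Omega(X_{2})}=1_{\Omega(X_{1})}+1_{\Omega(X_{2})}-1_{\Omega(X_{1})}1_{\Omega(X_{2})},\]
using that \(1_{\Omega(X_{1})\cap\Omega(X_{2})}=1_{\Omega(X_{1})}1_{\Omega(X_{2})}\).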
Since \(\vartheta_{\chi}(1_{\Omega(X)})=1_{[X,\chi]}\), this gives us
\[\vartheta_{\chi}(1_{\mathcal{F}_{s}})=\sum_{\emptyset\neq A\subseteq F}(-1)^{ |A|-1}\prod_{X\in A}\vartheta_{\chi}(1_{\Omega(X)})=1_{\bigcup_{X\in F}[X, \chi]}=1_{[\operatorname{fix}(s),\chi]},\]
so that \(\vartheta_{\chi}(E_{r}(1_{[s,\Omega(\operatorname{dom}(s))]}))=1_{[ \operatorname{fix}(s),\chi]}\). Thus, diagram (4.2) commutes.
Since both \(E_{r}\) and \(\prod_{\chi}\Phi_{\chi}\) are faithful, in order to prove \(\vartheta\) is injective, it will suffice to prove that \(\vartheta|_{C_{0}(\Omega)}\) is injective. By [1, Proposition 5.6.21], the kernel of \(\vartheta|_{C_{0}(\Omega)}\) is generated by the projections \(1_{\Omega(X)}-1_{\bigcup_{i=1}^{n}\Omega(X_{i})}\), where \(X,X_{1},\ldots,X_{n}\in\mathcal{J}\) with \(X_{i}\subseteq X\) are such that \([X,\chi]=\bigcup_{i=1}^{n}[X_{i},\chi]\) for all \(\chi\in\partial\Omega\). Thus, it will suffice to show that \(\vartheta_{\chi}\) is nonzero on all such projections. Fix such a projection \(1_{\Omega(X)}-1_{\bigcup_{i=1}^{n}\Omega(X_{i})}\) and note that it is equal to zero if and only if \(X=\bigcup_{i=1}^{n}X_{i}\). So we need only prove that \(X=\bigcup_{i=1}^{n}X_{i}\) whenever we have \([X,\chi]=\bigcup_{i=1}^{n}[X_{i},\chi]\) for every \(\chi\in\partial\Omega\). Take \(x\in X\), and choose \(\chi\in\Omega_{\max}\) such that \(\chi(\mathfrak{d}(x)\mathfrak{C})=1\) (such a \(\chi\) exists by [1, Lemma 2.21(ii)]). Then, since \([x,\chi]\in[X,\chi]\) and we have the equality \([X,\chi]=\bigcup_{i=1}^{n}[X_{i},\chi]\), we get that \([x,\chi]\in[X_{i},\chi]\) for some \(i\), so \(x\in X_{i}\) by right cancellation. We conclude that \(\vartheta|_{C_{0}(\Omega)}\) is injective.
We are now ready to prove that \(C_{r}^{*}(I_{l}\ltimes\partial\Omega)\) is a C*-cover of \(\mathcal{A}_{r}(\mathfrak{C})\).
**Theorem 4.3**.: _Let \(\mathfrak{C}\) be a cancellative small category, and assume \(I_{l}\ltimes\Omega\) is Hausdorff. Then, the quotient map \(q_{\partial}\colon C_{r}^{*}(I_{l}\ltimes\Omega)\to C_{r}^{*}(I_{l}\ltimes \partial\Omega)\) from (2.7) is completely isometric on \(\mathcal{A}_{r}(\mathfrak{C})\)._
Proof.: Since \((I_{l}\ltimes\Omega)_{\chi}=(I_{l}\ltimes\partial\Omega)_{\chi}\) for all \(\chi\in\partial\Omega\), we have \(q_{\partial}=\oplus_{\chi\in\partial\Omega}\rho_{\chi}\). For each \(\chi\in\partial\Omega\), let \(P_{\chi}\colon\ell^{2}((I_{l}\ltimes\Omega)_{\chi})\to\ell^{2}([\mathfrak{C},\chi])\) be the orthogonal projection associated with the inclusion \([\mathfrak{C},\chi]\subseteq(I_{l}\ltimes\Omega)_{\chi}\). A straightforward calculation shows that
\[\rho_{\chi}(1_{[c,\Omega(\mathfrak{d}(c)\mathfrak{C})]})e_{[d,\chi]}=\begin{cases}e_{[cd,\chi]}&\text{ if }\mathfrak{d}(c)=\mathfrak{t}(d),\\ 0&\text{ if }\mathfrak{d}(c)\neq\mathfrak{t}(d),\end{cases} \tag{4.3}\]
for all \(c\in\mathfrak{C}\) and \([d,\chi]\in[\mathfrak{C},\chi]\). Hence, the subspace \(\ell^{2}([\mathfrak{C},\chi])\) is invariant under \(\rho_{\chi}(1_{[c,\Omega(\mathfrak{d}(c)\mathfrak{C})]})\), and \(P_{\chi}\rho_{\chi}(1_{[c,\Omega(\mathfrak{d}(c)\mathfrak{C})]})P_{\chi}= \vartheta_{\chi}(1_{[c,\Omega(\mathfrak{d}(c)\mathfrak{C})]})\). It follows that \(\bigoplus_{\chi\in\partial\Omega}\ell^{2}([\mathfrak{C},\chi])\) is invariant under \(\mathcal{A}_{r}(\mathfrak{C})\), so we obtain an algebra homomorphism \(\psi\colon\mathcal{A}_{r}(\mathfrak{C})\to\mathbb{B}\left(\bigoplus_{\chi \in\partial\Omega}\ell^{2}([\mathfrak{C},\chi])\right)\) given by
\[1_{[c,\Omega(\mathfrak{d}(c)\mathfrak{C})]}\mapsto\bigoplus_{\chi\in\partial \Omega}P_{\chi}\rho_{\chi}(1_{[c,\Omega(\mathfrak{d}(c)\mathfrak{C})]})P_{\chi},\]
for all \(c\in\mathfrak{C}\). The injective *-homomorphism \(\vartheta\) from Lemma 4.2 satisfies \(\vartheta(1_{[c,\Omega(\mathfrak{d}(c)\mathfrak{C})]})=\psi(1_{[c,\Omega( \mathfrak{d}(c)\mathfrak{C})]})\) for all \(c\in\mathfrak{C}\), and this implies that \(\psi\) is completely isometric. Since \(\psi\) is the compression of \(q_{\partial}\) by the projection \(\oplus_{\chi\in\partial\Omega}P_{\chi}\), we see that \(q_{\partial}\) is also completely isometric.
The general strategy for the proof of Theorem 4.3 is inspired by [1, Lemma 3.2], but the proof differs significantly at the technical level and is in the language of groupoids rather than product systems. Additionally, we do not have an analogue of Fowler's Theorem ([1, Theorem 7.2]), which played a crucial role in the proof of [1, Lemma 3.2]. This is why we needed to establish the faithfulness result in Lemma 4.2.
For submonoids of groups, Theorem 4.3 provides a new approach to proving that the boundary quotient C*-algebra is a C*-cover. Moreover, Theorem 4.3 covers new examples of monoids, e.g., every cancellative monoid whose boundary groupoid is Hausdorff. This class includes, e.g., all cancellative finitely aligned monoids. Even the class of singly aligned (right LCM) cancellative monoids contains interesting examples that are not group-embeddable ([1, Proposition 4.3]).
The co-universal property of the C*-envelope now provides us with the following corollary.
**Corollary 4.4**.: _Let \(\mathfrak{C}\) be a cancellative small category, and assume \(I_{l}\ltimes\Omega\) is Hausdorff. Then, there exists a surjective *-homomorphism_
\[\pi_{\operatorname{env}}\colon C_{r}^{*}(I_{l}\ltimes\partial\Omega)\to C_{ \operatorname{env}}^{*}(\mathcal{A}_{r}(\mathfrak{C})) \tag{4.4}\]
_such that \(\pi_{\mathrm{env}}(1_{[c,\partial\Omega(\mathfrak{d}(c)\mathfrak{C})]})=1_{[c,\Omega(\mathfrak{d}(c)\mathfrak{C})]}\) for all \(c\in\mathfrak{C}\)._
We observe that \(\pi_{\mathrm{env}}\) from (4.4) is always injective on the canonical diagonal subalgebra.
**Lemma 4.5**.: _Assume \(\mathfrak{C}\) is a cancellative small category and that \(I_{l}\ltimes\Omega\) is Hausdorff. Then, the map \(\pi_{\mathrm{env}}\) from (4.4) is injective on \(C_{0}(\partial\Omega)\)._
Proof.: Let \(K\subseteq\partial\Omega\) be a closed \(I_{l}\)-invariant subset with \(\ker(\pi_{\mathrm{env}}|_{C_{0}(\partial\Omega)})=C_{0}(\partial\Omega\backslash K)\). We need to show that \(K=\partial\Omega\). For each \(c\in\mathfrak{C}\), we have
\[\pi_{\mathrm{env}}(1_{\partial\Omega(c\mathfrak{C})})=\pi_{\mathrm{env}}(1_{[c,\partial\Omega(\mathfrak{d}(c)\mathfrak{C})]}1_{[c,\partial\Omega(\mathfrak{d}(c)\mathfrak{C})]}^{\ast})=1_{[c,\Omega(\mathfrak{d}(c)\mathfrak{C})]}1_{[c,\Omega(\mathfrak{d}(c)\mathfrak{C})]}^{\ast}\neq 0,\]
so that \(K\cap\partial\Omega(c\mathfrak{C})\neq\emptyset\), and we may choose some \(\gamma_{c}\in K\cap\partial\Omega(c\mathfrak{C})\) for every \(c\in\mathfrak{C}\).
If \(0\notin I_{l}\), then \(\partial\Omega=\{\chi_{\infty}\}\) by Remark 2.10, and we are done. So assume \(0\in I_{l}\). Since \(K\) is closed, it suffices to show that \(\{\gamma_{c}:c\in\mathfrak{C}\}\) is dense in \(\partial\Omega\). Let \(\partial\Omega(X;\mathfrak{f})\) be a nonempty basic open subset of \(\partial\Omega\) as in SS 2.2. Since \(0\in I_{l}\), there exists a nonempty constructible right ideal \(Z\) of \(X\) with \(Z\cap\bigcup_{Y\in\mathfrak{f}}Y=\emptyset\) (see the discussion after Remark 2.10). Take \(c\in Z\), and note that \(c\mathfrak{C}\subseteq Z\subseteq X\) and \(c\mathfrak{C}\cap Y=\emptyset\) for all \(Y\in\mathfrak{f}\). Thus, \(\gamma_{c}(X)=1\) and \(\gamma_{c}(Y)=0\) for all \(Y\in\mathfrak{f}\), so that \(\gamma_{c}\in\partial\Omega(X;\mathfrak{f})\). Thus, we have shown that \(\{\gamma_{c}:c\in\mathfrak{C}\}\) is dense in \(\partial\Omega\), which implies that \(K=\partial\Omega\).
Recall that an inclusion of C*-algebras \(\mathcal{D}\subseteq\mathcal{B}\)_detects ideals_ if every nontrivial ideal \(\mathcal{I}\subseteq\mathcal{B}\) intersects \(\mathcal{D}\) nontrivially. See [13, Theorem 7.2] for a characterization of when the diagonal detects ideals in a groupoid C*-algebra. In the setting of Corollary 4.4, if \(C_{0}(\partial\Omega)\) detects ideals in \(C_{r}^{\ast}(I_{l}\ltimes\partial\Omega)\) (e.g., if the groupoid \(I_{l}\ltimes\partial\Omega\) is effective, see [12, Corollary 5.13] for a characterization), then \(\pi_{\mathrm{env}}\) is injective by Lemma 4.5. Moreover, if \(I_{l}\ltimes\Omega\) is second countable, Hausdorff, and \(0\in I_{l}\), then \(\pi_{\mathrm{env}}\) is injective if and only if its restriction to the C*-subalgebra of the subgroupoid of the interior of the isotropy is injective, see [1, Theorem 3.1(a)]. An explicit description of this subgroupoid can be derived from [12, SS 5].
### From functors to coactions
In this subsection, we use coaction techniques and Theorem 3.5 to find significantly broader sufficient conditions for the map \(\pi_{\mathrm{env}}\) from (4.4) to be injective.
A (discrete) groupoid \(\mathfrak{G}\) admits a universal group \(\mathcal{U}(\mathfrak{G})\) which comes with a functor (i.e., a groupoid homomorphism) \(j_{\mathfrak{G}}\colon\mathfrak{G}\to\mathcal{U}(\mathfrak{G})\) that is characterized by the following universal property: whenever \(\mu\colon\mathfrak{G}\to H\) is a functor to a group, there exists a group homomorphism \(\mu^{\prime}\colon\mathcal{U}(\mathfrak{G})\to H\) such that \(\mu^{\prime}\circ j_{\mathfrak{G}}=\mu\). Our first task is to construct normal coactions of \(\mathcal{U}(\mathfrak{G})\) on the C*-algebras attached to \(\mathfrak{C}\). Let \(I_{l}^{\times}=I_{l}\backslash\{0\}\) and \(\mathcal{J}^{\times}=\mathcal{J}\backslash\{\emptyset\}\).
There always exists a functor \(\rho\colon\mathfrak{C}\to\mathfrak{G}\) to a groupoid \(\mathfrak{G}\). For instance, one may take \(\mathfrak{G}\) to be the enveloping groupoid associated to \(\mathfrak{C}\) (see [1, SS II.3.1]). Note that \(\rho\) need not be injective, and that in some cases \(\mathfrak{G}\) may be a group.
**Lemma 4.6**.: _Suppose \(\mathfrak{C}\) is a left cancellative small category and that \(\rho\colon\mathfrak{C}\to\mathfrak{G}\) is a functor to a groupoid \(\mathfrak{G}\). Then, there exists a map \(\tilde{\rho}\colon I_{l}^{\times}\to\mathfrak{G}\) such that \(\tilde{\rho}(st)=\tilde{\rho}(s)\tilde{\rho}(t)\) for all \(s,t\in I_{l}\) with \(st\neq 0\) and \(\rho(s(x))=\tilde{\rho}(s)\rho(x)\) for all \(s\in I_{l}^{\times}\) and \(x\in\mathrm{dom}(s)\). In particular, we have \(\tilde{\rho}(c)=\rho(c)\) for all \(c\in\mathfrak{C}\)._
Proof.: Let \(d\in\mathfrak{C}\). For every \(x\in d\mathfrak{C}\), we can write \(x=dd^{\prime}\) for some \(d^{\prime}\in\mathfrak{d}(d)\mathfrak{C}\), and then \(d^{-1}(x)=d^{\prime}\). Thus,
\[\rho(d^{-1}(x))=\rho(d^{\prime})=\rho(d)^{-1}\rho(d)\rho(d^{\prime})=\rho(d)^{- 1}\rho(dd^{\prime})=\rho(d)^{-1}\rho(x).\]
We also have \(\rho(cx)=\rho(c)\rho(x)\) for all \(c\in\mathfrak{C}\) and \(x\in\mathfrak{d}(c)\mathfrak{C}\). Thus, given \(s=d_{1}^{-1}c_{1}\cdots d_{n}^{-1}c_{n}\in I_{l}^{\times}\), where \(c_{i},d_{i}\in\mathfrak{C}\), we have by induction
\[\rho(s(x))=\rho(d_{1})^{-1}\rho(c_{1})\cdots\rho(d_{n})^{-1}\rho(c_{n})\rho(x)\]
for every \(x\in\mathrm{dom}(s)\). By right cancellation in the groupoid \(\mathfrak{G}\), this gives us a well-defined map \(\tilde{\rho}\colon I_{l}^{\times}\to\mathfrak{G}\) such that
\[\tilde{\rho}(d_{1}^{-1}c_{1}\cdots d_{n}^{-1}c_{n})=\rho(d_{1})^{-1}\rho(c_{1}) \cdots\rho(d_{n})^{-1}\rho(c_{n})\]
for all \(c_{i},d_{i}\in\mathfrak{C}\) with \(d_{1}^{-1}c_{1}\cdots d_{n}^{-1}c_{n}\in I_{l}^{\times}\). It is clear that \(\tilde{\rho}\) satisfies the stated conditions.
Given \(\tilde{\rho}\) as in Lemma 4.6, we put \(\bar{\rho}\coloneqq j_{\mathfrak{G}}\circ\tilde{\rho}\colon I_{l}^{\times} \to\mathcal{U}(\mathfrak{G})\). Then, \(\bar{\rho}\) is a partial homomorphism in the sense that \(\bar{\rho}(st)=\bar{\rho}(s)\bar{\rho}(t)\) for all \(s,t\in I_{l}^{\times}\) with \(st\neq 0\).
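We also record a small observation that is used implicitly in the proof of Lemma 4.7 below: \(\bar{\rho}(\mathrm{id}_{X})=e\) for every \(X\in\mathcal{J}^{\times}\). Indeed, applying Lemma 4.6 to \(s=\mathrm{id}_{X}\) gives
\[\rho(x)=\rho(\mathrm{id}_{X}(x))=\tilde{\rho}(\mathrm{id}_{X})\rho(x)\quad\text{for all }x\in X,\]
so \(\tilde{\rho}(\mathrm{id}_{X})=\rho(x)\rho(x)^{-1}\in\mathfrak{G}^{0}\), and \(j_{\mathfrak{G}}\), being a functor into a group, sends units of \(\mathfrak{G}\) to the identity of \(\mathcal{U}(\mathfrak{G})\).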
**Lemma 4.7**.: _Suppose \(\mathfrak{C}\) is a left cancellative small category and that \(\rho\colon\mathfrak{C}\to\mathfrak{G}\) is a functor to a groupoid \(\mathfrak{G}\). Then, there is a continuous groupoid homomorphism_
\[\kappa_{\rho}\colon I_{l}\ltimes\Omega\to\mathcal{U}(\mathfrak{G}) \tag{4.5}\]
_such that \(\kappa_{\rho}([s,\chi])=\bar{\rho}(s)\) for all \([s,\chi]\in I_{l}\ltimes\Omega\)._
Proof.: If \([s,\chi]=[t,\chi]\) in \(I_{l}\ltimes\Omega\), then there exists \(X\in\chi^{-1}(1)\) with \(s\circ\mathrm{id}_{X}=t\circ\mathrm{id}_{X}\neq 0\), so that \(\bar{\rho}(s)=\bar{\rho}(s)\bar{\rho}(\mathrm{id}_{X})=\bar{\rho}(s\circ \mathrm{id}_{X})=\bar{\rho}(t\circ\mathrm{id}_{X})=\bar{\rho}(t)\). Thus, there is a well-defined map \(\kappa_{\rho}\colon I_{l}\ltimes\Omega\to\mathcal{U}(\mathfrak{G})\) such that \(\kappa_{\rho}([s,\chi])=\bar{\rho}(s)\) for all \([s,\chi]\in I_{l}\ltimes\Omega\). If \(\chi\in\Omega\) and \(t,s\in I_{l}\) with \([t,s\cdot\chi][s,\chi]=[ts,\chi]\), then \(ts\neq 0\) and
\[\kappa_{\rho}([t,s\cdot\chi])\kappa_{\rho}([s,\chi])=\bar{\rho}(t)\bar{\rho}(s )=\bar{\rho}(ts)=\kappa_{\rho}([ts,\chi]).\]
This shows that \(\kappa_{\rho}\) is a groupoid homomorphism. Given \(g\in\mathcal{U}(\mathfrak{G})\), we have
\[\kappa_{\rho}^{-1}(g)=\bigcup\{[s,U]:s\in\bar{\rho}^{-1}(g),U\subseteq\Omega( \mathrm{dom}(s))\text{ open}\},\]
which is open, so \(\kappa_{\rho}\) is continuous.
Denote by \(\partial\kappa_{\rho}\) the restriction of \(\kappa_{\rho}\) to \(I_{l}\ltimes\partial\Omega\). Next, we use an observation from [1, Lemma 6.1].
**Proposition 4.8**.: _Suppose \(\mathfrak{C}\) is a left cancellative small category and that \(\rho\colon\mathfrak{C}\to\mathfrak{G}\) is a functor to a groupoid \(\mathfrak{G}\). Then, there are reduced coactions \(\delta_{\rho,\lambda}\colon\mathcal{U}(\mathfrak{G})\not\subset C_{r}^{*}(I_{ l}\ltimes\Omega)\) and \(\partial\delta_{\rho,\lambda}\colon\mathcal{U}(\mathfrak{G})\not\subset C_{r}^{*} (I_{l}\ltimes\partial\Omega)\) determined on generators by \(1_{[s,\Omega(\mathrm{dom}(s))]}\mapsto 1_{[s,\Omega(\mathrm{dom}(s))]}\otimes \lambda_{\bar{\rho}(s)}\) and \(1_{[s,\partial\Omega(\mathrm{dom}(s))]}\mapsto 1_{[s,\partial\Omega(\mathrm{dom}(s))]}\otimes \lambda_{\bar{\rho}(s)}\) for all \(s\in I_{l}^{\times}\)._
Proof.: By [1, Lemma 6.1], there is a *-homomorphism \(\delta_{\rho,\lambda}\colon C_{r}^{*}(I_{l}\ltimes\Omega)\to C_{r}^{*}(I_{l}\ltimes\Omega)\otimes C_{\lambda}^{*}(\mathcal{U}(\mathfrak{G}))\), which is nondegenerate, satisfies the coaction identity, and such that \(\delta_{\rho,\lambda}(f)=f\otimes\lambda_{g}\) for \(f\in C_{c}(I_{l}\ltimes\Omega)\) with \(\mathrm{supp}(f)\subseteq\kappa_{\rho}^{-1}(g)\) and \(g\in\mathcal{U}(\mathfrak{G})\). Moreover, from the proof of [1, Lemma 6.1], we see that \(\delta_{\rho,\lambda}\) is injective, so that \(\delta_{\rho,\lambda}\) is a reduced coaction. Existence of \(\partial\delta_{\rho,\lambda}\) is proven the same way by using \(\partial\kappa_{\rho}\) on \(I_{l}\ltimes\partial\Omega\) instead of \(\kappa_{\rho}\).
**Corollary 4.9**.: _Suppose \(\mathfrak{C}\) is a left cancellative small category and that \(\rho\colon\mathfrak{C}\to\mathfrak{G}\) is a functor to a groupoid \(\mathfrak{G}\). Then, there is a normal coaction \(\delta_{\rho}\colon\mathcal{U}(\mathfrak{G})\not\subset\mathcal{A}_{r}(\mathfrak{C})\) which extends to a normal coaction \(\delta_{\rho,\mathrm{env}}\) on \(C_{\mathrm{env}}^{*}(\mathcal{A}_{r}(\mathfrak{C}))\) such that \(\delta_{\rho,\mathrm{env}}(1_{[c,\Omega(\mathfrak{d}(c)\mathfrak{C})]})=1_{[c,\Omega(\mathfrak{d}(c)\mathfrak{C})]}\otimes u_{\bar{\rho}(c)}\) for all \(c\in\mathfrak{C}\)._
Proof.: From [1, Proposition 3.4], the reduced coaction \(\delta_{\rho,\lambda}\) of Proposition 4.8 induces a normal coaction \(\delta_{\rho}\colon\mathcal{U}(\mathfrak{G})\not\subset C_{r}^{*}(I_{l}\ltimes\Omega)\) such that \(\delta_{\rho,\lambda}=(\mathrm{id}\otimes\lambda)\circ\delta_{\rho}\). Since \(\delta_{\rho}\) is normal, the restriction of \(\delta_{\rho}\) to \(\mathcal{A}_{r}(\mathfrak{C})\) is also a normal coaction. The existence of \(\delta_{\rho,\mathrm{env}}\) now follows from Theorem 3.5.
Suppose \(\rho\colon\mathfrak{C}\to\mathfrak{G}\) is a functor to a groupoid \(\mathfrak{G}\), and recall the terminology from SS 4.2. By [1, Lemma 6.3], there is a canonical isomorphism \(C_{r}^{*}(I_{l}\ltimes\partial\Omega)^{\partial\delta_{\rho}}\cong C_{r}^{*}(\ker(\partial\kappa_{\rho}))\), and we view this as a C*-subalgebra of \(C_{r}^{*}(I_{l}\ltimes\partial\Omega)\). Theorem 3.5 now gives us a characterization of injectivity of \(\pi_{\mathrm{env}}\) in terms of \(C_{r}^{*}(\ker(\partial\kappa_{\rho}))\).
**Proposition 4.10**.: _Assume \(\mathfrak{C}\) is a cancellative small category and that \(I_{l}\ltimes\Omega\) is Hausdorff. Let \(\rho\colon\mathfrak{C}\to\mathfrak{G}\) be a functor to a groupoid \(\mathfrak{G}\). Then, the map \(\pi_{\mathrm{env}}\) from (4.4) is injective if and only if its restriction to \(C_{r}^{*}(\ker(\partial\kappa_{\rho}))\) is injective._
Proof.: Consider the diagram
\[\begin{CD}C_{r}^{*}(I_{l}\ltimes\partial\Omega)@>{\pi_{\mathrm{env}}}>{}>C_{\mathrm{env}}^{*}(\mathcal{A}_{r}(\mathfrak{C}))\\ @V{}V{\partial E}V@V{}V{\Psi}V\\ C_{r}^{*}(\ker(\partial\kappa_{\rho}))@>{\pi_{\mathrm{env}}|_{C_{r}^{*}(\ker(\partial\kappa_{\rho}))}}>{}>C_{\mathrm{env}}^{*}(\mathcal{A}_{r}(\mathfrak{C}))_{e}\,,\end{CD}\]
where \(\partial E\) is the faithful conditional expectation associated with the coaction \(\partial\delta_{\rho}\), and \(\Psi\) is the faithful conditional expectation onto the unit fibre associated with the normal coaction \(\delta_{\rho,\mathrm{env}}\) from Corollary 4.9. This diagram commutes, so \(\pi_{\mathrm{env}}\) is injective if and only if \(\pi_{\mathrm{env}}|_{C^{*}_{r}(\ker(\partial\kappa_{\rho}))}\) is injective.
If \(\ker(\partial\kappa_{\rho})\) is second countable, then [1, Theorem 3.1(a)] implies that \(\pi_{\mathrm{env}}\) is injective on \(C^{*}_{r}(\ker(\partial\kappa_{\rho}))\) if and only if it is injective on the C*-subalgebra of the interior of the isotropy subgroupoid of \(\ker(\partial\kappa_{\rho})\). Since \(\pi_{\mathrm{env}}\) is always injective on \(C_{0}(\partial\Omega)\) by Lemma 4.5, we obtain the following sufficient condition for \(\pi_{\mathrm{env}}\) to be injective.
**Corollary 4.11**.: _Assume \(\mathfrak{C}\) is a cancellative small category and that \(I_{l}\ltimes\Omega\) is Hausdorff. If \(C_{0}(\partial\Omega)\) detects ideals in \(C^{*}_{r}(\ker(\partial\kappa_{\rho}))\) (e.g. if \(\ker(\partial\kappa_{\rho})\) is effective), then the map \(\pi_{\mathrm{env}}\) from (4.4) is injective._
Let us show how Corollary 4.11 can be used to compute C*-envelopes for operator algebras arising from finitely aligned higher-rank graphs, and even \(P\)-graphs. We include this to show the breadth of our results, though we expect Theorem 4.12 to be covered (using different methods) by Sehnem's work [1, Theorem 5.1].
Let \(P\) be a submonoid of a group \(G\) such that \(P\cap P^{-1}=\{e\}\). A \(P\)_-graph_ in the sense of [1, Definition 8.1] (cf. [13, Definition 6.1] and [1, Definition 2.1]) is a pair \((\mathfrak{C},\deg)\), where \(\mathfrak{C}\) is a finitely aligned countable category, and \(\deg\colon\mathfrak{C}\to P\) is a functor satisfying the unique factorization property: for every \(c\in\mathfrak{C}\) and \(p_{1},p_{2}\in P\) with \(\deg(c)=p_{1}p_{2}\), there exist unique \(c_{1},c_{2}\in\mathfrak{C}\) with \(\mathfrak{d}(c_{1})=\mathfrak{t}(c_{2})\) such that \(\deg(c_{1})=p_{1}\), \(\deg(c_{2})=p_{2}\), and \(c=c_{1}c_{2}\). If \((\mathfrak{C},\deg)\) is a \(P\)-graph, then \(\mathfrak{C}\) is cancellative and the units \(\mathfrak{C}^{0}\) are the only invertible elements in \(\mathfrak{C}\). Thus, if \((\mathfrak{C},\deg)\) is a finitely aligned \(P\)-graph, then \(I_{l}\ltimes\Omega\) is Hausdorff by [11, Corollary 4.2].
Given a category \(\mathfrak{C}\), we let \(\mathrm{Env}(\mathfrak{C})\) be the enveloping groupoid of \(\mathfrak{C}\) in the sense of [1, Definition II.3.3] (note that \(\mathrm{Env}(\mathfrak{C})\) is called the fundamental groupoid in [1]), and let \(\rho_{u}\colon\mathfrak{C}\to\mathrm{Env}(\mathfrak{C})\) be the canonical functor.
**Theorem 4.12**.: _Let \((\mathfrak{C},\deg)\) be a finitely aligned \(P\)-graph, where \(P\) is a group-embeddable monoid. Then, the map \(\pi_{\mathrm{env}}\colon C^{*}_{r}(I_{l}\ltimes\partial\Omega)\to C^{*}_{ \mathrm{env}}(\mathcal{A}_{r}(\mathfrak{C}))\) from (4.4) is a *-isomorphism._
Proof.: We show that \(\ker(\partial\kappa_{\rho_{u}})\) is a principal groupoid (i.e., every element whose range and source coincide is a unit). It follows that \(C_{0}(\partial\Omega)\) detects ideals in \(C_{r}^{*}(\ker(\partial\kappa_{\rho_{u}}))\), so the conclusion then follows from Corollary 4.11.
Put \(\partial\kappa\coloneqq\partial\kappa_{\rho_{u}}\) and suppose \(P\) embeds into a group \(G\) with identity element \(e\). Viewing \(\deg\) as a functor from \(\mathfrak{C}\) to \(G\), the universal property of \(\mathrm{Env}(\mathfrak{C})\) implies the existence of a functor \(\rho^{\prime}_{u}\colon\mathrm{Env}(\mathfrak{C})\to G\) such that the following diagram commutes:
(4.6)
Every element of \(I_{l}\ltimes\partial\Omega\) can be written as \([cd^{-1},\chi]\) for some \(c,d\in\mathfrak{C}\) with \(\mathfrak{d}(c)=\mathfrak{d}(d)\) and \(\chi\in\partial\Omega\) with \(\chi(d\mathfrak{C})=1\) (see, e.g., [11, Lemma 2.19]). Take such an element \([cd^{-1},\chi]\) and suppose \(\partial\kappa([cd^{-1},\chi])=e\). This means that \(\rho_{u}(c)=\rho_{u}(d)\), so by commutativity of (4.6), we have \(\deg(c)=\deg(d)\). Assuming that \([cd^{-1},\chi]\) is isotropy, i.e., that \(cd^{-1}.\chi=\chi\), we have \(1=\chi(d\mathfrak{C})=cd^{-1}.\chi(d\mathfrak{C})=\chi(dc^{-1}(c\mathfrak{C} \cap d\mathfrak{C}))\). In particular, \(c\mathfrak{C}\cap d\mathfrak{C}\neq\emptyset\), and [1, Lemma 8.2] implies that \(c=d\). Therefore, \([cd^{-1},\chi]\) is a unit, and this shows that \(\ker(\partial\kappa)\) is principal.
This result covers all finitely aligned higher-rank graphs. It is interesting to note that even among \(2\)-graphs with a single vertex there are examples of cancellative monoids that are not group-embeddable. See, e.g., [1, Example 7.1].
In the next subsection, we showcase another natural class of examples where \(\pi_{\mathrm{env}}\) is injective, which are not covered by the class of product systems over group-embeddable monoids as in [10].
### Groupoid-embeddable categories
Let \(\mathfrak{G}\) be a discrete groupoid with range and source maps \(\mathfrak{r}\) and \(\mathfrak{s}\), respectively. By [1, Theorem 6.10], the universal group \(\mathcal{U}(\mathfrak{G})\) of \(\mathfrak{G}\) can be described as follows. Let \([u]\coloneqq\mathfrak{r}(\mathfrak{s}^{-1}(u))\) denote the orbit of a unit \(u\in\mathfrak{G}^{0}\) and let \(\mathcal{R}\) be a complete set of representatives for the set of orbits \(\mathfrak{G}^{0}/\mathfrak{G}\coloneqq\{[u]:u\in\mathfrak{G}^{0}\}\), so that \(\mathfrak{G}=\bigsqcup_{u\in\mathcal{R}}\mathfrak{G}_{[u]}\), where \(\mathfrak{G}_{[u]}\coloneqq\{g\in\mathfrak{G}:\mathfrak{s}(g)\in[u]\}\). For \(u,v\in\mathfrak{G}^{0}\), we put \(\mathfrak{G}^{u}_{u}\coloneqq\{g\in\mathfrak{G}:\mathfrak{s}(g)=u,\mathfrak{ r}(g)=v\}\), and for each \(u\in\mathcal{R}\), we let \(\mathcal{X}_{u}\coloneqq[u]\backslash\{u\}\). Then, [1, Theorem 6.10] provides us with a group isomorphism
\[\mathcal{U}(\mathfrak{G})\cong\mathfrak{*}_{u\in\mathcal{R}}\mathcal{U}(\mathfrak{G}_{[u]})\cong\mathfrak{*}_{u\in\mathcal{R}}(\mathbb{F}(\mathcal{X}_{u})\rtimes\mathfrak{G}^{u}_{u}),\]
where \(\mathbb{F}(\mathcal{X}_{u})\) is the free group on \(\mathcal{X}_{u}\). Moreover, the homomorphism \(j_{\mathfrak{G}}\colon\mathfrak{G}\to\mathfrak{*}_{u\in\mathcal{R}}(\mathbb{F} (\mathcal{X}_{u})\rtimes\mathfrak{G}^{u}_{u})\) can be described as follows (see [1, Proposition 6.8]). For each \(u\in\mathcal{R}\) and \(v\in[u]\), choose \(\gamma_{v}\in\mathfrak{G}^{v}_{u}\). Then,
\[j_{\mathfrak{G}}(g)=\bar{z}(\gamma_{z}^{-1}g\gamma_{y})\bar{y}^{-1}, \tag{4.7}\]
for all \(g\in\mathfrak{G}^{z}_{y}\), where \(\bar{z},\bar{y}\) are the images of \(z,y\) in \(\mathbb{F}(\mathcal{X}_{u})\) for the unique element \(u\) in \(\mathcal{R}\) such that \(z,y\in[u]\). Note that \(j_{\mathfrak{G}}\) generally depends on the various choices made above. We shall identify \(\mathcal{U}(\mathfrak{G})\) with \(\mathfrak{*}_{u\in\mathcal{R}}(\mathbb{F}(\mathcal{X}_{u})\rtimes\mathfrak{G} ^{u}_{u})\) via the isomorphism in equation (4.7). Let \(e\) be the identity element in \(\mathcal{U}(\mathfrak{G})\).
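As a simple illustration (our own example, not needed in the sequel), consider the pair groupoid \(\mathfrak{G}=\{1,\ldots,n\}\times\{1,\ldots,n\}\) with \(\mathfrak{r}(i,j)=i\), \(\mathfrak{s}(i,j)=j\), and \((i,j)(j,k)=(i,k)\). There is a single orbit; taking \(u=1\) as its representative and \(\gamma_{v}=(v,1)\) for \(v\in[u]\), the isotropy group \(\mathfrak{G}^{u}_{u}\) is trivial and \(\mathcal{X}_{u}=\{2,\ldots,n\}\), so \(\mathcal{U}(\mathfrak{G})\cong\mathbb{F}(\{2,\ldots,n\})\) is the free group on \(n-1\) generators, and (4.7) becomes
\[j_{\mathfrak{G}}(i,j)=\bar{i}\,\bar{j}^{-1},\]
read with the convention \(\bar{1}=e\). In particular, \(j_{\mathfrak{G}}^{-1}(e)=\{(i,i):1\leq i\leq n\}=\mathfrak{G}^{0}\), as predicted by Lemma 4.13 below.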
**Lemma 4.13**.: _Let \(\mathfrak{G}\) be a discrete groupoid. Then, \(j_{\mathfrak{G}}^{-1}(e)=\mathfrak{G}^{0}\), and \(j_{\mathfrak{G}}\) is injective on \(\mathfrak{G}\backslash\mathfrak{G}^{0}\)._
Proof.: Let \(y,z\in\mathfrak{G}^{0}\) and suppose \(j_{\mathfrak{G}}(g)=e\) for some \(g\in\mathfrak{G}^{z}_{y}\). Then, \(\gamma_{z}^{-1}g\gamma_{y}=\bar{z}^{-1}\bar{y}\) in the group \(\mathbb{F}(\mathcal{X}_{u})\rtimes\mathfrak{G}^{u}_{u}\), where \(u\) is the unique element in \(\mathcal{R}\) such that \(z,y\in[u]\). Since \(\gamma_{z}^{-1}g\gamma_{y}\in\mathfrak{G}^{u}_{u}\) and \(\bar{z}^{-1}\bar{y}\in\mathbb{F}(\mathcal{X}_{u})\), and the subgroups \(\mathfrak{G}^{u}_{u}\) and \(\mathbb{F}(\mathcal{X}_{u})\) intersect trivially in \(\mathbb{F}(\mathcal{X}_{u})\rtimes\mathfrak{G}^{u}_{u}\), we deduce that \(\bar{z}^{-1}\bar{y}=e=\gamma_{z}^{-1}g\gamma_{y}\). In particular, \(\bar{y}=\bar{z}\), so that \(y=z\) and \(g=\gamma_{y}\gamma_{y}^{-1}=y\) is a unit. This shows that \(j_{\mathfrak{G}}^{-1}(e)\subseteq\mathfrak{G}^{0}\). The reverse containment follows from the definition of \(j_{\mathfrak{G}}\).
Next, we show that \(j_{\mathfrak{G}}\) is injective on \(\mathfrak{G}\backslash\mathfrak{G}^{0}\). Suppose \(g,h\in\mathfrak{G}\backslash\mathfrak{G}^{0}\) are such that \(j_{\mathfrak{G}}(g)=j_{\mathfrak{G}}(h)\). Then, \(g\in\mathfrak{G}^{z}_{y}\) and \(h\in\mathfrak{G}^{w}_{x}\) for some \(x,y,w,z\in\mathfrak{G}^{0}\), and we have \(j_{\mathfrak{G}}(g)=\bar{z}(\gamma_{z}^{-1}g\gamma_{y})\bar{y}^{-1}\) and \(j_{\mathfrak{G}}(h)=\bar{w}(\gamma_{w}^{-1}h\gamma_{x})\bar{x}^{-1}\).
If \(\gamma_{z}^{-1}g\gamma_{y}\) is a unit, so that \(g=\gamma_{z}\gamma_{y}^{-1}\), then \(j_{\mathfrak{G}}(g)=\bar{z}\bar{y}^{-1}\) is in \(\mathbb{F}(\mathcal{X}_{u})\). By uniqueness of reduced words in free products, \(\bar{w}(\gamma_{w}^{-1}h\gamma_{x})\bar{x}^{-1}=j_{\mathfrak{G}}(h)=j_{\mathfrak{G}}(g)=\bar{z}\bar{y}^{-1}\) implies that \(\gamma_{w}^{-1}h\gamma_{x}\) is a unit, so that \(h=\gamma_{w}\gamma_{x}^{-1}\). Now \(j_{\mathfrak{G}}(g)=j_{\mathfrak{G}}(h)\) simplifies to \(\bar{z}\bar{y}^{-1}=\bar{w}\bar{x}^{-1}\). If \(z=y\), then also \(w=x\), so \(g=\gamma_{y}\gamma_{y}^{-1}\) and \(h=\gamma_{x}\gamma_{x}^{-1}\) are units, which contradicts our assumptions on \(g,h\). Hence \(z\neq y\) and \(w\neq x\), and uniqueness of reduced words gives \(z=w\) and \(y=x\), so that \(g=\gamma_{z}\gamma_{y}^{-1}=\gamma_{w}\gamma_{x}^{-1}=h\).
In the remaining case, neither \(\gamma_{z}^{-1}g\gamma_{y}\) nor \(\gamma_{w}^{-1}h\gamma_{x}\) is a unit (if \(\gamma_{w}^{-1}h\gamma_{x}\) were a unit, the previous argument applies with the roles of \(g\) and \(h\) interchanged). Applying the uniqueness of reduced words in free products to the equality \(\bar{z}(\gamma_{z}^{-1}g\gamma_{y})\bar{y}^{-1}=\bar{w}(\gamma_{w}^{-1}h\gamma_{x})\bar{x}^{-1}\) gives \(\bar{z}=\bar{w}\), \(\overline{y}=\overline{x}\), and \(\gamma_{z}^{-1}g\gamma_{y}=\gamma_{w}^{-1}h\gamma_{x}\). The first two equalities force \(z=w\) and \(y=x\), so we actually get that \(\gamma_{z}^{-1}g\gamma_{y}=\gamma_{z}^{-1}h\gamma_{y}\). Multiplying this last equation on the left by \(\gamma_{z}\) and on the right by \(\gamma_{y}^{-1}\) yields \(g=h\).
Recall the definition of \(\bar{\rho}\colon I_{l}^{\times}\to\mathcal{U}(\mathfrak{G})\) from the discussion after the proof of Lemma 4.6, as well as our identification of \(\mathcal{J}^{\times}\) with the nonzero idempotents in \(I_{l}^{\times}\).
**Lemma 4.14**.: _Let \(\mathfrak{C}\) be a cancellative small category and suppose \(\rho\colon\mathfrak{C}\to\mathfrak{G}\) is an embedding into a discrete groupoid \(\mathfrak{G}\). The partial homomorphism \(\bar{\rho}\) is idempotent pure in the sense that \(\bar{\rho}^{-1}(e)=\mathcal{J}^{\times}\)._
Proof.: Let \(s\in I_{l}^{\times}\) with \(\bar{\rho}(s)=e\). Then, \(j_{\mathfrak{G}}\circ\bar{\rho}(s)=e\), and by Lemma 4.13 we get that \(\tilde{\rho}(s)\in\mathfrak{G}^{0}\). Using Lemma 4.6, we have that \(\rho(s(x))=\tilde{\rho}(s)\rho(x)=\rho(x)\) for all \(x\in\operatorname{dom}(s)\). Since \(\rho\) is injective, it follows that \(s(x)=x\) for all \(x\in\operatorname{dom}(s)\), so that \(s\in\mathcal{J}^{\times}\).
Applying [1, Lemma 5.5.22] yields the following consequence. Note that a groupoid-embeddable category is automatically small and cancellative.
**Proposition 4.15**.: _Let \(\mathfrak{C}\) be a category that embeds in a groupoid \(\mathfrak{G}\). Then, there is a canonical partial action of \(\mathcal{U}(\mathfrak{G})\) on \(\Omega\) and an isomorphism of topological groupoids_
\[I_{l}\ltimes\Omega\cong\mathcal{U}(\mathfrak{G})\ltimes\Omega,\quad[s,\chi] \mapsto(\bar{\rho}(s),\chi). \tag{4.8}\]
_In particular, \(I_{l}\ltimes\Omega\) is Hausdorff._
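For orientation (a special case of the proposition, not needed later): when \(\mathfrak{C}=P\) is a submonoid of a group \(G\) and \(\rho\) is the inclusion, the groupoid \(\mathfrak{G}=G\) has a single unit, so \(\mathcal{U}(G)=G\) and (4.8) reduces to a description of \(I_{l}\ltimes\Omega\) as a partial transformation groupoid \(G\ltimes\Omega\); this is the setting of the submonoid results referred to after Corollary 4.16 below.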
Note that the isomorphism \(I_{l}\ltimes\Omega\cong\mathcal{U}(\mathfrak{G})\ltimes\Omega\) from (4.8) restricts to an isomorphism \(I_{l}\ltimes\partial\Omega\cong\mathcal{U}(\mathfrak{G})\ltimes\partial\Omega\), so we obtain the following corollary, which extends [12, Proposition 3.10] from submonoids of groups to subcategories of groupoids.
**Corollary 4.16**.: _Let \(\mathfrak{C}\) be a category that embeds in a groupoid \(\mathfrak{G}\). Then, there is a canonical partial action of \(\mathcal{U}(\mathfrak{G})\) on \(\Omega\) and canonical *-isomorphisms_
\[C^{*}_{r}(I_{l}\ltimes\Omega)\cong C_{0}(\Omega)\rtimes^{r}\mathcal{U}( \mathfrak{G})\quad\text{ and }\quad C^{*}_{r}(I_{l}\ltimes\partial\Omega)\cong C_{0}( \partial\Omega)\rtimes^{r}\mathcal{U}(\mathfrak{G}).\]
The following is the main result of this subsection, and answers Li's Question B in the affirmative for the class of groupoid-embeddable categories.
**Theorem 4.17**.: _Let \(\mathfrak{C}\) be a category that embeds in a groupoid \(\mathfrak{G}\). Then, the associated groupoid \(I_{l}\ltimes\Omega\) is Hausdorff, and the map \(\pi_{\rm env}\colon C^{*}_{r}(I_{l}\ltimes\partial\Omega)\to C^{*}_{\rm env}( \mathcal{A}_{r}(\mathfrak{C}))\) from (4.4) is a *-isomorphism, i.e., the C*-envelope \(C^{*}_{\rm env}(\mathcal{A}_{r}(\mathfrak{C}))\) coincides with the boundary quotient C*-algebra \(C^{*}_{r}(I_{l}\ltimes\partial\Omega)\)._
Proof.: Let \(\rho\colon\mathfrak{C}\to\mathfrak{G}\) be an embedding into a groupoid \(\mathfrak{G}\) so that \(\mathfrak{C}\) is cancellative. By Proposition 4.15, \(I_{l}\ltimes\partial\Omega\) is Hausdorff, and by the discussion preceding Corollary 4.16 we have an isomorphism of topological groupoids \(I_{l}\ltimes\partial\Omega\cong\mathcal{U}(\mathfrak{G})\ltimes\partial\Omega\) that carries \(\partial\kappa_{\rho}\) to the homomorphism on \(\mathcal{U}(\mathfrak{G})\ltimes\partial\Omega\) given by projecting onto \(\mathcal{U}(\mathfrak{G})\). Under this isomorphism, the coaction of \(\mathcal{U}(\mathfrak{G})\) on \(C^{*}_{r}(I_{l}\ltimes\partial\Omega)\) from Proposition 4.8 is carried to the canonical coaction of \(\mathcal{U}(\mathfrak{G})\) on \(C^{*}_{r}(\mathcal{U}(\mathfrak{G})\ltimes\partial\Omega)\cong C_{0}(\partial\Omega)\rtimes^{r}\mathcal{U}(\mathfrak{G})\), and the fixed point algebra of this latter coaction is \(C_{0}(\partial\Omega)\) (see, e.g., [13, Lemma 6.3]). By Lemma 4.5, \(\pi_{\rm env}\) is injective on \(C_{0}(\partial\Omega)\), so by Corollary 4.11 we conclude that \(\pi_{\rm env}\) is injective.
**Examples 4.18**.: _By Ore's theorem, every left Ore category can be embedded into a groupoid, see [14, Proposition II.3.11]. Special classes of Ore categories have been studied recently in the setting of Thompson's groups, see [15], and non-Ore groupoid-embeddable categories associated with graphs of groups have been studied in [16] and [17]. See, e.g., [11] for more on groupoid-embeddability of small categories._
| discrete group on an operator algebra extends to a normal coaction on the C*-envelope.
Kakariadis, Katsoulis, Laca, and X. Li, and provides an elementary proof of a prominent result of Sehnem.
As an application, we resolve a question of Li by identifying the C*-envelope of the operator algebra arising from a groupoid-embeddable category and of cancellative right LCM monoids.
This latter class includes many examples of monoids that are not group-embeddable. |
2309.04803 | Towards Real-World Burst Image Super-Resolution: Benchmark and Method | Despite substantial advances, single-image super-resolution (SISR) is always
in a dilemma to reconstruct high-quality images with limited information from
one input image, especially in realistic scenarios. In this paper, we establish
a large-scale real-world burst super-resolution dataset, i.e., RealBSR, to
explore the faithful reconstruction of image details from multiple frames.
Furthermore, we introduce a Federated Burst Affinity network (FBAnet) to
investigate non-trivial pixel-wise displacements among images under real-world
image degradation. Specifically, rather than using pixel-wise alignment, our
FBAnet employs a simple homography alignment from a structural geometry aspect
and a Federated Affinity Fusion (FAF) strategy to aggregate the complementary
information among frames. Those fused informative representations are fed to a
Transformer-based module of burst representation decoding. Besides, we have
conducted extensive experiments on two versions of our datasets, i.e.,
RealBSR-RAW and RealBSR-RGB. Experimental results demonstrate that our FBAnet
outperforms existing state-of-the-art burst SR methods and also achieves
visually-pleasant SR image predictions with model details. Our dataset, codes,
and models are publicly available at https://github.com/yjsunnn/FBANet. | Pengxu Wei, Yujing Sun, Xingbei Guo, Chang Liu, Jie Chen, Xiangyang Ji, Liang Lin | 2023-09-09T14:11:37 | http://arxiv.org/abs/2309.04803v1 | # Towards Real-World Burst Image Super-Resolution: Benchmark and Method
###### Abstract
Despite substantial advances, single-image super-resolution (SISR) is always in a dilemma to reconstruct high-quality images with limited information from one input image, especially in realistic scenarios. In this paper, we establish a large-scale real-world burst super-resolution dataset, i.e., RealBSR, to explore the faithful reconstruction of image details from multiple frames. Furthermore, we introduce a Federated Burst Affinity network (FBAnet) to investigate non-trivial pixel-wise displacements among images under real-world image degradation. Specifically, rather than using pixel-wise alignment, our FBAnet employs a simple homography alignment from a structural geometry aspect and a Federated Affinity Fusion (FAF) strategy to aggregate the complementary information among frames. Those fused informative representations are fed to a Transformer-based module of burst representation decoding. Besides, we have conducted extensive experiments on two versions of our datasets, i.e., RealBSR-RAW and RealBSR-RGB. Experimental results demonstrate that our FBAnet outperforms existing state-of-the-art burst SR methods and also achieves visually-pleasant SR image predictions with model details. Our dataset, codes, and models are publicly available at [https://github.com/yjsunnu/FBANet](https://github.com/yjsunnu/FBANet).
Footnote †: *Corresponding author: Liang Lin, Jie Chen
## 1 Introduction
As a fundamental research topic, Super-Resolution (SR) attracts long-standing substantial interest, which targets high-resolution (HR) image reconstruction from a single or a sequence of low-resolution (LR) observations. In recent years, we have witnessed the prosperity of Single Image Super-Resolution (SISR), _e.g_., SRCNN [7], EDSR [19], SRGAN [16], RDN [33] and ESRGAN [28]. Nevertheless, SISR intrinsically suffers from a limited capacity of restoring details from only one LR image, typically yielding over-smooth LR predictions, especially for large-scale factors. With real detailed sub-pixel displacement information, Multi-Frame Super-Resolution (MFSR) [31, 1, 2, 21, 20] provides a promising potential to reconstruct the high-quality image from multiple LR counterparts, which is valuable for many sensitive realistic applications, _e.g_., medical imaging, and remote satellite sensing.
After the pioneering work [25] of Tsai and Huang in 1984, the research on MFSR has not achieved as tremendous progress as SISR. Typically, they are overwhelmed by two challenges: 1) the difficulty of fusing multiple LR inputs, which especially is aggravated for real-world data; 2) the limitation of artificially-synthesized data, accounting for a poor generalization for real-world scenarios; To address those challenges, a recent work [1] has made seminal contributions to the first real-world burst SR dataset benchmark, BurstSR, and a novel architecture, DBSR. Subsequently, MFIR proposes a deep reparametrization to reformulate the classical MAP objective in a deep feature space [2]. BIPNet [8]
Figure 1: SR predictions with different numbers of burst image inputs in our RealBSR dataset, where more burst inputs facilitate more accurate reconstruction of image details.
introduces a set of pseudo-burst features for information exchange among multiple burst frames. BSRT [20] employs a pyramid flow-guided deformable convolution network in a Transformer architecture.
Despite great progress achieved, two aspects still need to be revisited. _1) Method:_ Align-fusion-reconstruction paradigm-based methods usually fuse multiple burst images according to their similarity to a reference image, following their alignment via the optical flow or deformable convolution. However, this fusion strategy largely relies on the reference image and is limited to exploring more information among burst images. _2) Dataset:_ BurstSR captures multiple LR images with a smartphone in burst mode and a corresponding HR image with a DSLR camera. Thus, several unexpected issues are nontrivial: _a)_ data misalignment (even distortion) among burst LRs and their HR counterparts; _b)_ cross-device gap between LRs and HR captured by different cameras; and _c)_ unfair model evaluation on warped SR predictions by introducing GT HR. Moreover, BurstSR can be cast as a coupled task of burst image SR and enhancement.
To address these issues, we propose the Federated Burst Affinity Network (**FBAnet**), and make an attempt to build a new real-world burst image SR dataset, named **RealBSR**. For RealBSR, a sequence of LR images and one HR image are captured in quick succession under a continuous shooting mode with the optical zoom strategy, like RealSR [3]. It provides a real-world benchmark for image detail reconstruction in burst SR applications while avoiding color style changes with respect to the original LR data (in particular, the burst RAW inputs undergo no ISP processing), which supports faithful high-resolution image predictions, especially for sensitive applications, _e.g._, medical imaging.
Our FBAnet employs a simple-yet-effective alignment algorithm via a homography matrix from a structural and global aspect. Then, a Federated Affinity Fusion (FAF) module is introduced to aggregate inter- and intra-frame information through affinity difference maps, aiming to not only focus on pixels consistent with the reference frame for global content reconstruction but also highlight the distinction among frames to absorb complementary information. The fused representations pass through the burst representation decoding module to integrate local features extracted by convolutions with the global long-range dependencies of self-attentions for HR image reconstruction.
In a nutshell, our contributions are summarized below:
* We make an effort to establish a Real-world Burst Super-Resolution benchmark, _i.e._, RealBSR, which has two versions consisting of RAW and RGB images. RealBSR has great potential to inspire further research on realistic burst SR applications.
* We propose a Federated Burst Affinity network to address real-world burst image super-resolution, which derives the affinity difference maps of burst images to federate inter- and intra-frame complementary information for reconstructing more image details.
* We have conducted extensive experiments on RAW and RGB versions of RealBSR to benchmark existing state-of-the-art methods. Empirically, the efficacy of our FBAnet has been justified with superior SR performances from quantitative and qualitative aspects.
## 2 Related Work
### Single Image Super-Resolution
SRCNN [7] pioneers CNN to image SR, inspiring numerous follow-ups. Fueled by the evolving of deep neural networks [13, 10, 14, 26], a series of seminal SISR methods have been built to achieve significant advances, _e.g._, VDSR [15], EDSR [21], SRResNet [17], ESRGAN [28], DRN [11], SwinIR [18], _etc._ Nevertheless, considering the over-cost collection of real-world LR-HR image pairs, those methods turn to map synthetic LR images to their HR counterparts, which is constantly criticized for poor model generalization in practical scenarios. To facilitate the exploration of real-world image SR, great efforts have been made on building functional benchmarks, _e.g._, SRRAW [32], RealSR [3], and DRealSR [30], following the optical zoom manner to capture paired LR-HR images. Meanwhile, LP-KPN [3] has been proposed to employ a Laplacian-based network for non-uniform kernel estimation. Encountering heterogeneous image degradation, CDC [30] proposes a gradient-weighted loss to adapt to diverse challenges in reconstructing different regions.
### Multi-Frame Super-Resolution
With great potential to remedy the intrinsic ill-posed SISR problem, MFSR pursues absorbing authentic sub-pixel details contained in the image sequences towards real-world applications. In the early times of MFSR, Tsai and Huang [25] contribute the first fair solution. Afterward, taking advantage of deep learning, TDAN [24] introduces deformable convolutions to mitigate the misalignment problem between neighboring frames and the reference frames. Similarly, EDVR [27] and EBSR [21] build a pyramid structure facilitating the motion compensation during the alignment procedure. MFIR [2] presents a deep reparametrization algorithm that transforms Maximum A Posteriori (MAP) formulation to the latent space for better reconstruction. BIPNet [8] introduces a pseudo-burst feature fusion method to allow flexible information exchange among frames. In addition, BSRT [20] builds the reconstruction module based on Swin Transformer, which further improves the performance.
For real-world burst image SR, Bhat et al. [1] establish a dataset consisting of LR burst images captured from a smartphone and HR counterparts from a DSLR camera, and introduce an encoder-decoder-based model to deal with unknown pixel-wise displacement with optical flow estimation and merge aligned frames with an attention mechanism.
One typical challenge of burst SR lies in the fusion strategy. It is common to use the affinity maps between one frame and the base frame as fusion weights. However, this strategy runs into a dilemma: for every other frame it focuses only on what is similar to the base frame and uses those similar pixels as complements to the base frame, so it falls short of comprehensively exploiting the complementary relations among frames. Besides, when examining the line of research works conducted on the BurstSR dataset, three issues also deserve serious consideration (Sec. 3), _i.e_., data misalignment among LRs and HRs, cross-device distribution because of different imaging cameras for capturing LRs and HRs, and unfair model evaluation with warped SR predictions via their ground-truth HRs.
In this work, we propose to leverage the federated affinity fusion strategy in our FBAnet model, comprehensively investigating complementary pixel displacements among a sequence of burst images. Meanwhile, we build a real-world burst image super-resolution dataset, named RealBSR, aiming to facilitate further exploration of real-world burst SR.
## 3 RealBSR: A new Benchmark
BurstSR is the only existing dataset for real-world burst image super-resolution and enhancement, which has three typical issues. _1) Data Misalignment._ The distortion between LRs and their HR counterparts is distinct. It possibly results in a severe misalignment between paired LR and HR images. Such serious mismatches yield counterproductive super-resolution results with few details reconstructed from burst LR images. _2) Cross-Device Distribution._ Since LR sequences and HR counterparts are captured by a smartphone and a DSLR, respectively, the difference of imaging devices would inevitably lead to a cross-device gap between them. Therefore, it has to cast this task on the BurstSR dataset as a combination of burst image super-resolution and enhancement. _3) Evaluation Deficiency._ The evaluation routine for BurstSR is that a generated final SR image is warped with the reference of its ground-truth HR and then this warped SR image is used to compute the evaluation metrics with the same ground-truth HR. This is rather problematic and even not fair to truly evaluate the model performance with the aid of GTs. Besides, the calculated metric values (_e.g_., PSNR) cannot well reflect the visual quality, which means pursuing a higher PSNR on the BurstSR dataset is not positively related to better reconstruction quality. This evaluation strategy greatly attributes to data misalignment and cross-device distribution, inviting a great challenge for evaluation.
In this work, we build a real-world burst super-resolution dataset, named RealBSR. It consists of 579 groups (RAW version) and 639 groups (RGB version) for the scale factor 4. Each group has 14 burst LR images and a GT HR image.
### Collection and Processing
We use the optical zoom strategy for data collection, similar to RealSR [4] and DRealSR [30]. With a Sony DSLR camera (Alpha 7R), we capture a sequence of 14 LR images by pressing the camera shutter and optically zoom the camera to shoot an HR image. Those images are collected in various scenes, _e.g_., buildings (museum, church, office building, tower, _etc_.), posters, plants/trees, sculptures, and ships. Our indoor and outdoor images have 21 and 618 groups, respectively. For each group of burst data, since LR and HR images have different fields of view, we employ SIFT to crop LR sequences under the reference of collected HR counterpart.
Considering the distortion of RAW images is not addressed by the camera, their center regions are cropped into our RAW version dataset, named RealBSR-RAW. Besides, the RGB version of RealBSR, termed RealBSR-RGB, is also provided. Since the collected RGB images are processed by the camera ISP, it also needs color and luminance correction between LRs and their HR counterparts.
To facilitate the model training, we crop the inputs into 160\(\times\)160 patches, similar to RealSR. Accordingly, the RealBSR-RAW dataset has 20,842 groups of paired patches for training and 2,377 groups for testing. Similarly, the RealBSR-RGB dataset has 19,509 groups of paired patches for training and 2,224 groups for testing.
### Characteristic Analysis
**Pixel Shift** is computed between the base frame (the first LR frame) and the other 13 frames. In Fig. 1(a), 50\(\%\) of the offsets between frames are under 1 pixel, but 25\(\%\) lie in the range of (1,2) and 25\(\%\) are larger than 2 pixels, indicating the model still needs an alignment module to eliminate large offsets. Rather than external factors like moving objects and inconsistent colors, the large pixel shifts in RealBSR are caused by intense hand tremors. In Fig. 1(b), the sub-pixel shifts are distributed evenly in the range of (0,1), which provides abundant information for SR improvement.
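These offset statistics can be approximated with a simple frequency-domain estimator. The sketch below measures the global translation of each frame against the base frame using OpenCV's phase correlation and buckets the magnitudes; the choice of estimator is an assumption, since the paper does not state how the shifts were measured.

```python
import cv2
import numpy as np

def shift_statistics(frames):
    """frames: list of single-channel images; frames[0] is the base frame."""
    base = np.float32(frames[0])
    buckets = {"<1px": 0, "1-2px": 0, ">2px": 0}
    for f in frames[1:]:
        # phaseCorrelate returns ((dx, dy), response) for same-size float arrays
        (dx, dy), _ = cv2.phaseCorrelate(base, np.float32(f))
        mag = np.hypot(dx, dy)
        if mag < 1:
            buckets["<1px"] += 1
        elif mag < 2:
            buckets["1-2px"] += 1
        else:
            buckets[">2px"] += 1
    return buckets
```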
**Image Diversity.** We employ grey-level co-occurrence matrix (GLCM), which is widely used to measure image textures [12], to analyze the image diversity. With GLCM, we derive five second-order statistic features from all the training images, _i.e_., Haralick features [12], including image _contrast_, _entropy_, _dissimilarity_, _correlation_ and _energy_, Fig. 1(c). _Contrast_ measures the intense changes between contiguous pixels, and _dissimilarity_ is similar to _contrast_ but increasing linearly. _Energy_ measures texture uniformity, and _entropy_ measures the disorder of the image, which is negatively correlated with energy. _Correlation_ measures the linear dependency in the image.
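The GLCM statistics above can be computed with scikit-image (recent versions spell the functions `graycomatrix`/`graycoprops`). The quantization level, pixel offset, and the entropy formula in the sketch below are assumptions, since the paper does not specify them.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def haralick_features(gray_uint8):
    """Contrast, dissimilarity, correlation, energy via graycoprops; entropy from the GLCM itself."""
    glcm = graycomatrix(gray_uint8, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    feats = {p: float(graycoprops(glcm, p)[0, 0])
             for p in ("contrast", "dissimilarity", "correlation", "energy")}
    p = glcm[:, :, 0, 0]
    feats["entropy"] = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))
    return feats
```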
## 4 FBAnet: A New Method
### Overview
In comparison with SISR, MFSR pursues favorable pixel-wise displacements to facilitate realistic detail reconstruction. Since it is not easy to exactly figure out the displacement association among different burst LR images, how to fuse burst images remains intractable. What's worse, physical camera shake during imaging introduces additional unexpected and non-uniform pixel shifts. To address this issue, we propose a federated burst affinity network to move towards real-world burst SR by effectively integrating the informative sub-pixel details contained in multiple frames.
Our FBAnet follows a conventional alignment-to-fusion paradigm, Fig. 4. Formally, given an LR image sequence \(\left\{x_{i}\right\}_{i=1}^{N}\) of \(N\) burst observations as the input, our model will yield a high-resolution image prediction \(\hat{y}_{i}\) for a scale factor \(s\), where their ground-truth (GT) HR counterpart is denoted as \(y_{i}\). With the randomness of pixel-wise shift among different burst images, the fused features would not be perfect enough to directly support the reconstruction of image details. Without loss of generality, the first frame \(x_{1}\) is regarded as the reference frame to align the other images in the sequence by their homography matrix \(H\). Then, FBAnet employs a federated affinity fusion strategy to aggregate multiple frames and utilizes two hourglass Transformer blocks to take over the fused features for the final decoding phase of high-resolution image prediction.
2) _FAF_: Although a higher affinity of \(x_{i}\) (\(\forall i\neq 1\)) indicates a higher similarity to the base frame \(x_{1}\), two adverse effects also arise, especially for real-world burst images: (a) easy-to-reconstruct regions (_e.g_., flat areas) would also have large affinity values for \(x_{i}\), \(\forall i\neq 1\), which would unexpectedly drive the model to pay more attention to those regions, resulting in over-fitting; (b) owing to imperfect alignment and pixel shift, even the key pixels in detail-rich regions may not have large affinity values and thus may not be highlighted for fusion.
To address these issues, our FAF additionally considers the affinity difference maps to distinguish the specific differences of one frame from the other frames. Consequently, our FAF pays attention to those complementary details that do not appear in the base frame, _e.g_., _Pixel-B_ in Fig. 5(a). The affinity difference map of the \(i\)-\(th\) frame can be expressed as
\[D_{1}=A_{1,1};\quad D_{i}=d(A_{1,i},A_{1,1}),\ \text{when}\ i\neq 1, \tag{1}\]
where \(d(\cdot)\) is the difference function.
The final fused feature can be defined as,
\[\begin{split} M&=\sum\nolimits_{i=1}^{N}D_{i} \circ F_{i}\\ &=\underbrace{A_{1,1}\circ F_{1}}_{\text{self-affinity feature}}+\underbrace{\sum\nolimits_{i=2}^{N}d(A_{1,i},A_{1,1})\circ F_{i}}_{ \text{frame-specific feature}}.\end{split} \tag{2}\]
**Analysis:** As Eq. (2) indicates, the fused feature map consists of two components, _i.e_., (i) attentive features of the base frame based on its self-affinity, (ii) frame-specific features that are relatively independent of the base frame providing more complement from other frames. Given \(D_{i}\) computed in the Euclidean space, Eq. (2) can be derived as
\[M=A_{1,1}\circ F_{1}+\sum\nolimits_{i=2}^{N}\left(F_{i}-F_{1}\right)\cdot F_ {1}\circ F_{i}. \tag{3}\]
The second terms in Eq. (2) & (3) can be regarded as the combination of difference maps (\(F_{i}\)-\(F_{1}\)) and correlation maps \(F_{1}\circ F_{i}\). The former would alleviate the issue that VAF encourages the fusion of redundant information similar to the base frame too much, which is too easy for reconstruction, _e.g_., the flat. The latter would alleviate the adverse
Figure 4: Workflow illustration of the proposed FBAnet, which contains three main components, including homography alignment, federated affinity fusion (_cf_. Eq. (2) and Eq. (1)), and burst representation decoding.
Figure 5: Intuitive illustration of FAF. FAF considers more complementary details from other frames (_e.g_., _Pixel-B_ in (a)), besides those similar to base frame (_e.g_., _Pixel-A_). (b) FAF can rectify the fusion information, avoiding negative effects from easy reconstruction regions with very high similarity and encouraging those subpixels for fusion.
effects derived from the misalignment due to large motions. Thus, FAF rectifies the fusion information to alleviate the adverse effects resulting from the large misalignment and the overfitting to the easy reconstruction of regions with high affinity, as illustrated in Fig. 5(b).
3) _FAF*:_ Following the similar federating spirit, we can further extend this design of FAF. That is, the affinity maps and their different maps can take more complex federated interactions of frames into consideration, rather than only taking the base frame as a reference. Specifically, for \(t\)-\(th\) frame, its affinity difference map can be compared to any other frame. Thus, \(D_{i}=d(A_{1,i},A_{1,m}),i,m\neq 1\) and the fused features is computed similar to Eq. (2),
\[M=\sum\nolimits_{k=1}^{N}\left(A_{k,k}\circ F_{k}+\sum\nolimits_{i=1,i\neq k }d(A_{k,i},A_{k,k})\circ F_{i}\right). \tag{4}\]
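A compact PyTorch-style sketch of the fusion in Eq. (1)-(2) is given below; extending it to the FAF* variant of Eq. (4) is analogous. The affinity computation and the difference function \(d(\cdot)\) are assumptions (channel-wise cosine similarity and an absolute difference, respectively), since only their roles, not their exact forms, survive in the text above.

```python
import torch
import torch.nn.functional as F_nn

def affinity(feat_ref, feat_i):
    # Pixel-wise affinity between reference features and frame-i features.
    # Assumption: channel-wise cosine similarity, one value per spatial location.
    ref = F_nn.normalize(feat_ref, dim=1)
    fi = F_nn.normalize(feat_i, dim=1)
    return (ref * fi).sum(dim=1, keepdim=True)  # (B, 1, H, W)

def faf_fuse(feats):
    """Federated Affinity Fusion following Eq. (1)-(2).

    feats: list of N feature maps of shape (B, C, H, W); feats[0] is the base frame F_1.
    """
    f1 = feats[0]
    a11 = affinity(f1, f1)                 # self-affinity A_{1,1}
    fused = a11 * f1                       # self-affinity feature term
    for fi in feats[1:]:
        a1i = affinity(f1, fi)             # A_{1,i}
        d_i = torch.abs(a1i - a11)         # D_i = d(A_{1,i}, A_{1,1}); abs-difference assumed
        fused = fused + d_i * fi           # frame-specific feature term
    return fused
```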
### Burst Representation Decoding
To aggregate global information for finer high-frequency detail reconstruction, we utilize the self-attention mechanism to model long-range pixel relations. Specifically, our FBAnet adopts a burst representation decoding module to explicitly model inter-dependencies among channels. This module has two cascaded blocks, shown in Fig. 4. A block has an encoder and a decoder, both of which cascade three Locally-enhanced Window (LeWin) Transformer blocks [29]. Each block has a LayerNorm, multi-head self-attention, a LayerNorm, and a Locally-Enhanced Feed-Forward (LeFF) layer [29]. The module is followed by pixelshuffle [22] for producing the final HR predictions.
Our training objective includes a Mean Absolute Error (MAE) loss for SR image reconstruction. In addition, on the RAW-version dataset, to mitigate the negative effects brought by a slight misalignment of the RAW-version dataset, we also introduce the CoBi loss [32] to ease the training and enhance the visual quality of final results. While on the RGB-version dataset, we adopt the Gradient Weighted (GW) loss [30] for high-frequency detail reconstruction.
## 5 Experiments
### Experimental Settings
**Datasets.** We conduct experiments on the two versions (RAW and RGB) of the proposed RealBSR benchmark at scale factor 4, real-world BurstSR [1] and a synthetic burst SR dataset, SyntheticBurst [1, 8], with fair comparisons.
**Implementation Details.** We align frames in a burst sequence using OpenCV to estimate homography matrixes, before training. Input images are augmented using flip and rotation in the training stage. The AdamW optimizer is employed and the initial learning rate is set to be 1e-4. Besides, we adopt the cosine annealing schedule to set the learning rate of each parameter group.
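A minimal sketch of this pre-training homography alignment is shown below. The paper only states that OpenCV is used to estimate the homography matrices, so the keypoint detector (SIFT), the matcher, and the RANSAC threshold are assumptions.

```python
import cv2
import numpy as np

def align_to_base(base_gray, frame_gray):
    """Warp one burst frame onto the base frame via an estimated homography."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(base_gray, None)
    k2, d2 = sift.detectAndCompute(frame_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d2, d1)
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # frame -> base
    h, w = base_gray.shape[:2]
    return cv2.warpPerspective(frame_gray, H, (w, h))
```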
**Evaluation Metric.** On RealBSR-RAW, we adopt four evaluation metrics, _i.e_., PSNR, SSIM, LPIPS, and PSNR-Linear [1]. The first three metrics are computed in the RGB space, and the last one is in the linear sensor space. On RealBSR-RGB, it follows the evaluation routine in the RGB image space and thus three metrics (PSNR, SSIM, LPIPS) are adopted. On BurstSR, the predicted SR images have to be warped by taking GT HRs as a reference before computing metrics [1], while without post-processing on our RealBSR.
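For reference, PSNR and SSIM in the RGB space can be computed with scikit-image as sketched below (LPIPS requires a pretrained network and is omitted). This is an illustrative snippet rather than the authors' evaluation code.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(sr, hr):
    """sr, hr: uint8 arrays of shape (H, W, 3); channel_axis needs scikit-image >= 0.19."""
    psnr = peak_signal_noise_ratio(hr, sr, data_range=255)
    ssim = structural_similarity(hr, sr, channel_axis=-1, data_range=255)
    return psnr, ssim
```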
**Comparison with burst SR methods:** We compare our FBAnet with existing burst SR methods on the RealBSR-RGB and RealBSR-RAW datasets. On RealBSR-RGB, it is clear that the state-of-the-art burst SR methods are prone to generate realistic but blurry textures, _e.g_., the building in Fig. 6. On RealBSR-RAW, the SR predictions of DBSR, MFIR and BIPNet have differences in image color, compared with that of our FBAnet. Moreover, on SyntheticBurst and real-world BurstSR, all the evaluated methods are trained from scratch in Tab. 3 and performance gains are also achieved by our FBAnet over existing methods.
**Comparison with video SR methods:** To further evaluate the results of video SR methods in the real-world burst SR task, we also introduce three state-of-the-art video SR methods (_i.e_. EDVR, BasicVSR and BasicVSR++) for comparison. Since the video SR algorithms are always based on RGB dataset, we train all these methods from scratch on RealBSR-RGB dataset. In Tab. 2, our FBAnet outperforms the state-of-the-art video SR algorithms by \(\sim\)0.5dB gains (_vs. BasicVSR++_) at least and 1.489dB gains (_vs. EDVR_) at most. In Fig. 5(a), it is clearly observed that visualization results of EDVR, BasicVSR and BasicVSR++ produce blurry details of the building, while our proposed FBAnet reconstructs realistic and sharp textures.
**SISR vs. Burst SR:** To verify the benefits brought by burst SR data, we provide comparisons under the real-world SISR task. The compared methods are two representative real-world SISR methods (_i.e_., LP-KPN [4] and CDC [30]) and a Transformer-based SISR method (_i.e_., SwinIR [18]). Those SISR methods only take the base frame of burst sequences as input. Compared to MFSR methods, SISR methods are characterized by generating relatively sharp and clean outputs, which could be observed from Fig. 5(a), while suffering from the absence of informative details.
### Evaluation and Analysis
**Alignment**: We have ablatively investigated the homography alignment module and also compared it with other different alignment methods including flow-based alignment [1, 2] and deformable-based alignment [27, 24]. As shown in Tab. 4, compared with optical flow alignment [1] and deformable convolutional alignment [27, 20], our approach outperforms them with performance improvements by 0.155dB and 0.230dB in PSNR, respectively. This demonstrates that our method is effective to align the pixel shifts in real-world
\begin{table}
\begin{tabular}{|l c c|c c c|} \hline Alignment & Fusion & Decoding & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) \\ \multicolumn{6}{c}{_Alignment_} \\ \hline No alignment & FAF (ours) & ours & 30.223 & 0.878 & 0.125 \\ Optical flow & FAF (ours) & ours & 30.857 & 0.889 & 0.117 \\ Deformable & FAF (ours) & ours & 30.782 & 0.891 & 0.111 \\ Homography & FAF (ours) & ours & **31.012** & **0.898** & **0.102** \\ \hline \multicolumn{6}{c}{_Fusion_} \\ \hline Homography & VFA/TSA [27] & ours & 30.724 & 0.896 & 0.107 \\ Homography & FAF (ours) & ours & 31.012 & 0.898 & 0.102 \\ Homography & FAF* (ours) & ours & **31.197** & **0.901** & **0.101** \\ \hline \multicolumn{6}{c}{_Decoding_} \\ \hline Homography & FAF (ours) & BSRT [20] & 30.890 & 0.895 & 0.106 \\ Homography & FAF (ours) & ours & **31.012** & **0.898** & **0.102** \\ \hline \end{tabular}
\end{table}
Table 4: Evaluation about alignment, fusion, and decoding.
Figure 6: Result visualization of existing methods including Real-world SISR (blue), video SR (green), and burst SR (red).
burst frames, even though it is simple.
To verify our solution, we analyze motion patterns among a sequence of burst images. Taking the base frame as reference, pixel shifts of each image in a sequence are image-dependent and global-structural, Fig. 7. Namely, they present a relatively consistent displacement of pixels. This evidences that it is reasonable and effective to align images via homography matrix in the real-world burst super-resolution task. This is different from many existing burst algorithms that usually adopt pixel-wise alignment methods (_e.g._, optical flow and deformable convolutions), which rarely consider the image-wise structural motion pattern prior of the frame.
**Federated Affinity Fusion**: In Tab. 4, we have evaluated the proposed federated affinity fusion module. In comparison with VAF using only affinity maps, our FAF introduces affinity difference maps and achieves the performance gains by 0.288dB in PSNR. And our FAF* further improves the performance by 0.185dB gains in PSNR. This indicates that our federated affinity fusion provides complementary information to the subsequent module.
To further analyze FAF, Fig. 8 provides the visualization of affinity maps and affinity difference maps. As discussed in Sec. 4.3, the affinity values in the flat region with few details would be rather large. Since VAF takes the affinity of one frame to base frame as fusion weight, this encourages the model to pay more attention to those easy reconstruction regions. Instead, FAF uses the affinity difference maps to lower their weights to alleviate this negative effect.
Besides, for the difference map of Frame2 in Fig. 8, it could be seen that the highlighted attention is different from that of Frame1 and Frame3, indicating that Frame2 also provides additional details to the fusion process. This can be further validated through the presented residual between HR predictions of FAF and VAF, which demonstrates that our FAF achieves better detail reconstruction than VAF, as highlighted in the prediction difference image.
**Burst Representation Decoding.** In Tab. 4, we compare our decoding module to that of BSRT with a Transformer design, under a similar architecture with the same alignment and FAF modules. Our decoding has achieved gains by 0.122dB.
**The Number of Burst Image Inputs.** We investigate the impact of different numbers of burst images in a sequence and compare it with a single-frame baseline. And all the training processes are based on our proposed architecture. Tab. 5 reveals that there has been a giant gap in the performance between the single-image baseline and multi-frame
\begin{table}
\begin{tabular}{|c|c|c c c|} \hline Method & Burst Inputs Data & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) \\ \hline \multirow{2}{*}{DBSR [1]} & (base frame)\(\times\)14 & 29.389 & 0.867 & 0.150 \\ & 14 burst images & 30.715 & 0.899 & 0.101 \\ \hline \multirow{2}{*}{MFIR [2]} & (base frame)\(\times\)14 & 29.325 & 0.865 & 0.151 \\ & 14 burst images & 30.895 & 0.901 & **0.098** \\ \hline \multirow{2}{*}{BIPNet [8]} & (base frame)\(\times\)14 & 30.001 & 0.878 & 0.136 \\ & 14 burst images & 30.665 & 0.892 & 0.111 \\ \hline \multirow{2}{*}{BSRT [20]} & (base frame)\(\times\)14 & 29.501 & 0.869 & 0.151 \\ & 14 burst images & 30.695 & 0.897 & 0.105 \\ \hline \multirow{2}{*}{FBAnet (Ours)} & (base frame)\(\times\)14 & 30.086 & 0.868 & 0.152 \\ & 14 burst images & **31.012** & **0.898** & 0.102 \\ \hline \end{tabular}
\end{table}
Table 6: Evaluating the burst inputs’ complementary content.
Figure 8: Visualization of affinity maps and affinity difference maps.
\begin{table}
\begin{tabular}{|c|c c c|} \hline Burst Inputs Number & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) \\ \hline
1 & 30.139 & 0.879 & 0.132 \\
2 & 30.616 & 0.891 & 0.113 \\
4 & 30.818 & 0.894 & 0.108 \\
8 & 30.945 & 0.898 & 0.101 \\
10 & 30.980 & 0.899 & **0.098** \\
14 & **31.012** & **0.898** & 0.102 \\ \hline \end{tabular}
\end{table}
Table 5: Evaluation on the number of burst inputs.
Figure 7: Three examples on pixel shift among a sequence of 14 burst images before and after our homography alignment. Each color indicates one frame in a sequence.
restoration results. Specifically, with the burst size increasing from 2 to 14, the performance also experiences a marked rise from 30.616dB to 31.197dB, which tends to be relatively saturated as the input number gets close to 14.
**The Complementary Content of Burst Image Inputs.** To verify the influence of contents in burst frames, we compare models trained on 14 shifted frames with models trained on 14 identical images (_i.e_. the base frame and its 13 copies), the results of which are reported in Tab. 6. For the five models (_i.e_. DBSR, MFIR, BIPNet, BSRT, and Ours) adopted, the performance gains between (base frame)\(\times\)14 and 14 burst images range from 0.664dB to 1.194dB, which proves the necessity and effectiveness of the complementary sub-pixel information among shifted frames.
## 6 Conclusions, Limitations, and Future Work
We release a real-world burst image super-resolution dataset, named RealBSR, which is expected to facilitate exploring the reconstruction of more image details from multiple frames for realistic applications, and a Federated Burst Affinity network (FBAnet), targeting addressing the fusing issue of burst images. Specifically, our FBAnet employs simple homography alignment from a structural geometry aspect, evidenced by the relatively consistent pixel shift for a sequence of burst images. Then, a Federated Affinity Fusion (FAF) strategy is proposed to aggregate the complementary information among frames. Extensive experiments on RealBSR-RAW and RealBSR-RGB datasets with improved performance have justified the superiority of our FBAnet.
**Limitations and future work**: Our FBAnet employs a simple homography alignment. But it is not easy to extend to the video SR task with large motions, which will be addressed in our future work. Since noise is inevitable, addressing real-world burst super-resolution and denoising at the same time is more practical. We will be devoted to this real-world benchmark and the solutions in future work.
## Acknowledgements
This work was supported in part by National Natural Science Foundation of China (NSFC) under Grant No. U21A20470, and National Key R&D Program of China under Grant No. 2021ZD0111601. Thanks a lot for the valuable help from Xiaoxiao Sun.
| Despite substantial advances, single-image super-resolution (SISR) always faces the dilemma of reconstructing high-quality images from limited information, which becomes even more complicated in realistic scenarios. In this paper, we build a large-scale real-world burst super-resolution dataset (RealBSR) to explore the faithful reconstruction of image details from multiple frames. Furthermore, we introduce the Federated Burst Affinity network (FBAnet) to investigate non-trivial pixel-wise displacements among images under real-world image degradation. Specifically, rather than using pixel-wise alignment, FBAnet employs a simple homography alignment from a structural geometry aspect and aggregates complementary information among frames via a Federated Affinity Fusion (FAF) strategy. These fused informative |
2307.16773 | AsdKB: A Chinese Knowledge Base for the Early Screening and Diagnosis of
Autism Spectrum Disorder | To easily obtain the knowledge about autism spectrum disorder and help its
early screening and diagnosis, we create AsdKB, a Chinese knowledge base on
autism spectrum disorder. The knowledge base is built on top of various
sources, including 1) the disease knowledge from SNOMED CT and ICD-10 clinical
descriptions on mental and behavioural disorders, 2) the diagnostic knowledge
from DSM-5 and different screening tools recommended by social organizations
and medical institutes, and 3) the expert knowledge on professional physicians
and hospitals from the Web. AsdKB contains both ontological and factual
knowledge, and is accessible as Linked Data at https://w3id.org/asdkb/. The
potential applications of AsdKB are question answering, auxiliary diagnosis,
and expert recommendation, and we illustrate them with a prototype which can be
accessed at http://asdkb.org.cn/. | Tianxing Wu, Xudong Cao, Yipeng Zhu, Feiyue Wu, Tianling Gong, Yuxiang Wang, Shenqi Jing | 2023-07-31T15:40:45 | http://arxiv.org/abs/2307.16773v2 | # AsdKB: A Chinese Knowledge Base for the Early Screening and Diagnosis of Autism Spectrum Disorder
###### Abstract
To easily obtain the knowledge about autism spectrum disorder and help its early screening and diagnosis, we create AsdKB, a Chinese knowledge base on autism spectrum disorder. The knowledge base is built on top of various sources, including 1) the disease knowledge from SNOMED CT and ICD-10 clinical descriptions on mental and behavioural disorders, 2) the diagnostic knowledge from DSM-5 and different screening tools recommended by social organizations and medical institutes, and 3) the expert knowledge on professional physicians and hospitals from the Web. AsdKB contains both ontological and factual knowledge, and is accessible as Linked Data at [https://w3id.org/asdkb/](https://w3id.org/asdkb/). The potential applications of AsdKB are question answering, auxiliary diagnosis, and expert recommendation, and we illustrate them with a prototype which can be accessed at [http://asdkb.org.cn/](http://asdkb.org.cn/).
Keywords: Autism Spectrum Disorder, Knowledge Base, Ontology.
## 1 Introduction
Autism spectrum disorder (ASD) is a kind of neurodevelopmental disability which begins before the age of 3 years and can last throughout a person's whole life. People with ASD have problems in social communication and interaction, and may have stereotypic or repetitive behaviors (or interests). According to the most recent statistics [17] published by the Centers for Disease Control and Prevention (CDC), about 1 in 36 children aged 8 years has been identified with ASD, and this proportion is quite high. However, there is no quantitative medical test to diagnose such a disorder, and professional physicians only use screening tools and look at the behaviors for some time to make a diagnosis. In
this way, many children cannot receive a final diagnosis until much older, which causes the children with ASD might not get the early help they need. In China, the situation on screening and diagnosing the children with ASD maybe much worse compared with western developed countries. The 2020 China rehabilitation report of children developmental disorder6 points out that the ASD incidence in China is around 1% and the number of ASD children is more than three million, but the number of professional physicians who can diagnose ASD is only about 500, let alone the number of board certified behavior analysts. This does hinder the timely diagnosis on ASD, which inspires us to think about if we can apply artificial intelligence techniques to solve the early screening and diagnosis of ASD. The key problem is how to extract and integrate ASD relevant knowledge from heterogeneous sources to support upper-level intelligent applications.
Footnote 6: [http://pkucarenjk.com/news-family/2303.html](http://pkucarenjk.com/news-family/2303.html)
To solve this problem, we build AsdKB, a Chinese knowledge base for the early screening and diagnosis of ASD, from various sources (see Figure 1), such as SNOMED CT [5] (a large collection of medical terms), ICD-107 (the 10th revision of the classification system of diseases published by WHO) clinical descriptions on mental and behavioural disorders [21], DSM-5 [1] (the 5th edition of diagnostic and statistical manual of mental disorders), the screening tools recommended by CDC and so on. Specifically, we first build an ontology covering important concepts about the screening and diagnosis of ASD from DSM-5, ICD-10 clinical descriptions on mental and behavioural disorders, SNOMED CT, CDC materials, and other Web sources. Using this ontology as the schema, we then extract and integrate factual knowledge on diseases, diagnosis, experts, and
Figure 1: The data sources for building AsdKB.
others. Besides, we use and develop Web crawler and natural language processing (NLP) tools for data extraction, keyword extraction, knowledge extraction, machine translation, and etc., over various formats of data, including text, tables, and structured knowledge. All classes, properties, and instances in AsdKB are identified by permanent dereferenceable URIs in w3id8. All data are available as RDF dump files on Zenodo9, and the basic information of the AsdKB project can be accessed at Github10. All the resources are published under CC BY-SA 4.0. The main contributions of this paper are summarized as follows:
Footnote 8: [https://w3id.org/asdkb/](https://w3id.org/asdkb/)
Footnote 9: [https://zenodo.org/record/8199698](https://zenodo.org/record/8199698)
Footnote 10: [https://github.com/SilenceSnake/ASDKB](https://github.com/SilenceSnake/ASDKB)
* We first build a Chinese knowledge base for the early screening and diagnosis of ASD, i.e., AsdKB, which contains both ontological and factual knowledge, and publish it following Linked Data best practices.
* We present a prototype system on question answering, auxiliary diagnosis, and expert recommendation with AsdKB, and discuss how to support the early screening and diagnosis of ASD with this system.
The rest of this paper is organized as follows. Section 2 introduces the process of ontology building. Section 3 describes the extraction of factual knowledge. Section 4 presents the potential applications of AsdKB. Section 5 outlines related work, and we conclude in the last section.
## 2 Ontology Building
This section introduces the process of building the AsdKB ontology as the schema which is used to guide extracting and integrating factual knowledge from various sources. We follow Ontology Development 101 [20] to build the ontology (Figure 2 shows a part of it) as follows.
**Step 1: Determine the domain and scope of the ontology.** AsdKB is expected to cover the ASD relevant knowledge on the early screening and diagnosis, so the ontology needs to cover important concepts in widely recognized materials about the screening and diagnosis of ASD. Here, we select relevant materials from CDC, DSM-5, ICD-10, SNOMED CT, and other Web sources.
**Step 2: Consider reusing existing ontologies.** In this part, we reuse the standard RDF, RDFS, and OWL vocabularies, including rdf:type linking from instances to classes, rdfs:label recording the Chinese (or English) labels of classes and properties, rdfs:comment providing textual descriptions to clarify meanings of classes, rdfs:subClassOf describing the class hierarchy, owl:equivalentClass linking equivalent classes from the AsdKB ontology to other ontologies, and rdfs:domain and rdfs:range specifying that the resources and values of a property are instances of one or more classes, respectively.
**Step 3: Enumerate important terms in the ontology.** We read the ASD materials from CDC, DSM-5, ICD-10, SNOMED CT and other Web sources mentioned in the first step, to manually identify a list of important concept-level terms. For example, important symptom concepts in disease knowledge include "Impairments in Social Interaction" and " Restrictive, Repetitive and Stereotyped Behaviors". Important concepts in expert knowledge include "Physician" and "Hospital". Besides, "Screening Tool" and "Diagnostic Standard" are related to screening and diagnosis.
**Step 4: Define the classes and the class hierarchy.** Based on the previous identified important terms, we start to create disease classes (e.g., "Autism Spectrum Disorder" and "Asperger's Syndrome"), diagnosis classes (e.g., "Screening Tool" and "Screening Question"), expert classes (e.g., "Physician" and "Hospital"), and others. For the class hierarchy, we consider the hierarchies within disease classes, symptom classes, and diagnosis classes, respectively. For example, as shown in Figure 2, we have " Asperger's Syndrome rdfs:subClassOf Autism Spectrum Disorder" and "Standard of Social Interaction rdfs:subClassOf Diagnostic Standard". Specifically, we have created a class "Screening Question" in the diagnosis classes to facilitate the exploration of the association between instances of "Screening Question" and "Diagnostic Standard".
Figure 2: A part of the AsdKB ontology.
**Step 5: Define the properties of classes.** After selecting classes from the list of terms, we start to attach properties to classes using rdfs:domain. We distinguish datatype properties and object properties. For example, for the class "Physician", we have the object property workAt and datatype properties Name, Title, Specialty, Hospital Department, and etc.
**Step 6: Define the facets of the properties.** We specify the value type of each property by defining rdfs:range. The range of a datatype property is an XML Schema datatype. For example, the ranges of properties Address (attached to the class "Hospital") and Score (attached to the class "Option") are xsd:string and xsd:float, respectively. Besides, the range of an object property is a class. For example, the range of hasSymptom is the "Symptom".
**Step 7: Create instances.** We do not define instances in the ontology but only use it as the schema of AsdKB. The creation of instances belongs to factual knowledge extraction, and it will be described in Section 3.
**Statistics about the AsdKB ontology.** The built ontology is currently online: [https://w3id.org/asdkb/ontology/](https://w3id.org/asdkb/ontology/). It contains 32 classes, 25 datatype properties, and 16 object properties. The maximum depth of a class in the class hierarchy is 4. Note that we apply google translate11 to translate the English labels of all elements in the ontology into Chinese ones, and also perform careful manual proofreading and correction.
Footnote 11: [https://translate.google.com/](https://translate.google.com/)
**Mapping to other ontologies.** To facilitate schema knowledge sharing across different ontologies, We map our AsdKB ontology to the Unified Medical Language System [2] (UMLS) and the Autism DSM-ADI-R (ADAR) ontology [19]. UMLS is the largest integrated biomedical ontology covering the vocabularies from around 200 sources including SNOMED CT, DSM-5, FMA [23], and etc. ADAR is built from Autism Diagnostic Interview-Revised [16] (ADI-R), which is a structured interview used for autism diagnosis. ADAR focuses on autism symptom classification, and it constructs many fine-grained symptom classes, e.g., "First walked unaided" and "Daily spontaneous and meaningful speech", which are taken as instances in AsdKB. This is why we do not directly re-use ADAR in AsdKB.
Since the disease classes in the AsdKB ontology are extracted from SNOMED CT, which is also a part of UMLS, such classes are naturally linked to UMLS (see the Example 1 in Figure 3). For the rest eighteen classes in AsdKB, we submit each of their labels to UMLS Metathesaurus Browser12 and manually make comparisons between the returned classes and submitted ones to decide whether there exist owl:equivalentClass or rdfs:subClassOf relations (the Example 2 in Figure 3 gives a mapping result). Besides, we apply Agreement-MakerLight [6] to mapping the AsdKB ontology to ADAR, and the Example 3 in Figure 3 also shows a mapping result.
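Such cross-ontology links can be materialized as standard OWL/RDFS triples. The sketch below uses rdflib; the specific class IRIs on both sides are placeholders, since the exact local names in AsdKB and in the target vocabularies are not spelled out here.

```python
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import OWL, RDFS

ASDKB = Namespace("https://w3id.org/asdkb/ontology/class/")
g = Graph()
# Placeholder IRIs: the real local names and target concept identifiers differ.
g.add((ASDKB["AutismSpectrumDisorder"], OWL.equivalentClass,
       URIRef("http://example.org/umls/CONCEPT_ID")))
g.add((ASDKB["ScreeningTool"], RDFS.subClassOf,
       URIRef("http://example.org/adar/SomeClass")))
g.serialize(destination="asdkb-mappings.ttl", format="turtle")
```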
## 3 Factual Knowledge Extraction
This section presents the extraction of factual knowledge of ASD. Due to limited spaces, we do not explain every detail but focus on the main content.
### Disease Knowledge
For disease knowledge, we need to extract the factual knowledge about disease and symptom instances according to the AsdKB ontology. Disease instances (e.g., "Atypical Rett syndrome") are derived from SNOMED CT, and they are actually the leaf nodes in the disease taxonomy in SNOMED CT. For each disease instance, we extract the values of the properties: Label (instance name), SCTID (the term ID in SNOMED CT), ICD-10 code (the corresponding ICD-10 category), and Synonym from SNOMED CT, respectively. We also manually design templates (i.e., regular expressions) to extract the values of properties Introduction (a brief description of the given disease instance), Patient Groups (e.g., "children" or "female children"), and Pathogen (e.g., "genetic and environmental factors") from ICD-10 clinical descriptions on mental and behavioural disorders [21], respectively. Besides, for the values of properties Label, Synonym, Introduction, Patient Groups, and Pathogen, we obtain the corresponding Chinese versions by Google Translate and manual proofreading. We collect 49 disease instances relevant to ASD in total, and their corresponding property information.
Symptom instances are also extracted from ICD-10 clinical descriptions on mental and behavioural disorders. We model the symptom instance extraction as the task of sequence labeling. We first take each paragraph as a document, and apply Term Frequency-Inverse Document Frequency [15] (TF-IDF) to identify keywords. Based on this, we then label a small amount of symptom instances in the corpus to train an extraction model. Here, we use BioBERT [14], a pre-trained biomedical language representation model for biomedical text mining, to encode each word as an embedding. Afterwards, we utilize BiLSTM [7] to capture textual context features for each word. Finally, we apply conditional
Figure 3: Examples of mapping the AsdKB ontology to UMLS and ADAR.
random fields [13] to finish sequence labeling, which naturally classifies symptom instances into the pre-defined symptom classes, i.e., "Impairments in Social Interaction", "Restrictive, Repetitive and Stereotyped Behaviors", and "Other Symptoms". High-quality results of sequence labeling obtained by the trained model are added to the labeled data to train a new model. We repeat this process until the maximum number of iterations is reached. Google Translate is also used to get the Chinese description of each symptom instance. Figure 4 shows the triples of the symptom <[https://w3id.org/asdkb/instance/symptom64](https://w3id.org/asdkb/instance/symptom64)>. Finally, we collect 65 symptom instances in total.
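A schematic PyTorch implementation of the tagger described above is sketched below. The BioBERT checkpoint name, the hidden size, and the tag inventory are assumptions, and the CRF layer comes from the third-party pytorch-crf package rather than anything specified in the paper.

```python
import torch.nn as nn
from transformers import AutoModel
from torchcrf import CRF  # pip install pytorch-crf

class SymptomTagger(nn.Module):
    """BioBERT -> BiLSTM -> CRF sequence labeler (a sketch under the stated assumptions)."""
    def __init__(self, num_tags, encoder="dmis-lab/biobert-base-cased-v1.1", hidden=256):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder)
        self.lstm = nn.LSTM(self.encoder.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        x = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        x, _ = self.lstm(x)
        emissions = self.proj(x)
        mask = attention_mask.bool()
        if tags is not None:                              # training: negative log-likelihood
            return -self.crf(emissions, tags, mask=mask)
        return self.crf.decode(emissions, mask=mask)      # inference: best tag paths
```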
### Diagnostic Knowledge
For diagnostic knowledge, we extract the factual knowledge on the instances of diagnostic standards, screening tools, screening questions, and the corresponding options. Instances of diagnostic standards are acquired from the Chinese edition13 of DSM-5 [1], so we only have Chinese descriptions for the instances of diagnostic standards. We follow a similar process used for extracting symptom instances, and only replace the pre-trained model BioBERT with a more general model BERT [11] because BioBERT does not support Chinese but BERT does. Different from symptom instances, instances of diagnostic standards do not refer to specific behaviors or activities of the people with ASD; they actually present textual summarizations for specific classes of diagnostic standards (i.e., "Standard of Repetitive Behavior" and "Standard of Impairments in Social Interaction"). For example, an instance of diagnostic standards in AsdKB is a Chinese sentence summarizing one of these standard classes. Instances of screening tools are obtained from the screening tools recommended by social
organizations and medical institutes, including CDC14, ALSOLIFE15 (China ASD Evaluation and Intervention Platform), Autism Canada16, and OCALI17 (The Ohio Center for Autism and Low Incidence). Instances of screening tools in AsdKB are actually screening scales, which have the properties Introduction (basic information and instructions), Author, User (the one filling in the scale, e.g., parents or teacher), Age (applicable ages of screening targets), Time (the time it takes to fill in the scale), Rule (screening principles and details), and Screening Boundary (the score of screening boundary after finishing the scale). After careful selection, we extract twenty instances of screening tools, containing fifteen English screening scales and five Chinese ones, such as ABC [12], CARS2 [24], and M-CHAT [27]. Google Translate is used here to translate English scales into Chinese ones, and manual proofreading is also conducted.
Footnote 14: [https://www.cdc.gov/ncbdd/autism/hcp-screening.html#Tools](https://www.cdc.gov/ncbdd/autism/hcp-screening.html#Tools)
Footnote 15: [https://www.alsolife.com/autism/screen/](https://www.alsolife.com/autism/screen/)
Footnote 16: [https://autismcanada.org/autism-explained/screening-tools/](https://autismcanada.org/autism-explained/screening-tools/)
Footnote 17: [https://www.ocali.org/project/assessment_measures](https://www.ocali.org/project/assessment_measures)
Footnote 18: [https://www.haodf.com/](https://www.haodf.com/)
Footnote 19: [https://www.familydoctor.com.cn/](https://www.familydoctor.com.cn/)
Instances of screening questions and options can be directly obtained from screening scales through table extraction. Besides keeping their textual content as the property, we also establish correspondingSymptom relationships between instances of screening questions and symptom instances, and matchStandard relationships between option instances and instances of diagnostic standards. These two kinds of relationships (i.e., object properties) benefit the interpretability of screening results. For example, as shown in Figure 5, AsdKB can tell users which specific symptoms the current question investigates and whether the current option matches some diagnostic standard or not, in order to help users better understand screening questions and provide explanations to screening results. To identify correspondingSymptom relationships, we first use FNLP [22] to perform Chinese word segmentation on the instances of screening questions and symptom instances. After removing stopwords, we then compare string similarities between two word sequences to decide whether the correspondingSymptom relationship exists. The method of extracting matchStandard relationships is similar to that of correspondingSymptom relationships, and the only difference is to additionally consider the property Score of each option. If an option has the highest score or lowest score, it means the result of the current screening question is abnormal or normal (it depends on the design of the screening scales), and abnormal results could help identify matchStandard relationships.
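The correspondingSymptom matching can be sketched as follows. FNLP is a Java toolkit, so the snippet substitutes the jieba segmenter, and the Jaccard overlap with a fixed threshold stands in for the unspecified string-similarity measure and cut-off; the stopword list is also illustrative.

```python
import jieba  # substitute segmenter; the paper uses FNLP (a Java toolkit)

STOPWORDS = {"的", "了", "和", "是", "在"}  # illustrative, not the actual list

def tokens(text):
    # Segment Chinese text and drop stopwords and whitespace tokens.
    return {t for t in jieba.cut(text) if t.strip() and t not in STOPWORDS}

def should_link(question_text, symptom_text, threshold=0.3):
    """Decide whether a screening question and a symptom instance get a
    correspondingSymptom link; overlap measure and threshold are assumptions."""
    a, b = tokens(question_text), tokens(symptom_text)
    if not a or not b:
        return False
    return len(a & b) / len(a | b) >= threshold
```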
### Expert Knowledge
For expert knowledge, we extract factual knowledge of the instances of professional physicians, and the hospitals they work at, from the Web. We select two famous Chinese healthcare websites The Good Doctor18 and Family Doctor19
as the data sources for extraction, so the string values of some datatype properties are only presented in Chinese. We first submit the following Chinese keywords, "ASD", "pervasive developmental disorders", "childhood autism", and "Asperger's syndrome", to the search engines of the selected websites, which locates the Web pages of professional physicians on ASD. Faced with structures like infobox tables in Wikipedia, we then extract physician instances and the values of properties Name, Title (e.g., "chief physician" and "attending physician"), Specialty (e.g., "various types of mental disorders in childhood"), Hospital Department (e.g., "child healthcare department" and "psychiatry department"), and workAt (i.e., hospital instances). We collect 499 physician instances in total.
According to the values of the property workAt, we locate the Web pages of hospital instances. Similar to the extraction on physician instances and the corresponding property information, we extract the hospital instances and the values of properties Name, Address, Contact Details, and Hospital Level (e.g., "Grade-A tertiary hospital"). We collect 270 hospital instances in total.
Since physician and hospital instances are extracted from different sources, we perform instance matching using heuristics. Given two hospital instances, if their values for at least one of the properties Address and Contact Details are the same, they are treated as equivalent. Given two physician instances, if their values for the property workAt are equivalent, and the values for the properties Name and Title are the same respectively, these two instances are determined as equivalent. Equivalent instances are fused as one instance in AsdKB.
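The matching heuristics just described amount to a few equality tests; a small sketch is given below, where the dictionary keys for the extracted records are assumptions.

```python
def hospitals_equivalent(h1, h2):
    """Hospitals match if they share an Address or Contact Details value."""
    same_address = bool(h1.get("address")) and h1.get("address") == h2.get("address")
    same_contact = bool(h1.get("contact")) and h1.get("contact") == h2.get("contact")
    return same_address or same_contact

def physicians_equivalent(p1, p2):
    """Physicians match if their hospitals match and Name and Title agree."""
    return (hospitals_equivalent(p1["work_at"], p2["work_at"])
            and p1["name"] == p2["name"]
            and p1["title"] == p2["title"])
```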
Figure 5: An example to show the benefits of corresponding Symptom and matchStandard relationships.
### Other Knowledge
In this part, we extract factual knowledge on the instances of intervention methods, and the information of China administrative divisions. Instances of intervention methods are obtained from The National Clearinghouse on Autism Evidence and Practice20 (NCAEP), and such instances are all evidence-based practices, including "Discrete Trial Training", "Social Skills Training", "Peer-Based Instruction and Intervention", and etc. For each instance of intervention methods, we extract the values of properties Label (instance name) and Introduction (a brief description on the instance information). English string values are translated to Chinese by Google Translate, and we also conduct careful proofreading.
Footnote 20: [https://ncaep.fpg.unc.edu/](https://ncaep.fpg.unc.edu/)
With the expert knowledge introduced in Section 3.3, a potential application is to seek professional help from physicians for diagnosis. In order to find professional physicians in the target districts, cities, and provinces, we extract instances of China administrative divisions from the National Bureau of Statistics21. The extracted instances are specific districts, cities, and provinces, and we also build locateAt relationships among them. To link each hospital to the corresponding administrative divisions, we first use the Amap (a leading digital map provider in China) API22 to obtain the latitude and longitude of each hospital from the value of the property Address. With this latitude and longitude information, the Amap API can return the corresponding district, city, and province of each hospital. Besides, we record the Population of each instance of China administrative divisions, which could help regional analyses on ASD.
Footnote 21: [http://www.stats.gov.cn/](http://www.stats.gov.cn/)
Footnote 22: [https://github.com/amapapi](https://github.com/amapapi)
### Quality of AsdKB
AsdKB contains 6,166 entities (including conceptual entities, i.e., classes, and individual entities, i.e., instances) and 69,290 triples in total. All class URIs in the namespace [http://w3id.org/asdkb/ontology/class/](http://w3id.org/asdkb/ontology/class/) and instance URIs in the namespace [http://w3id.org/asdkb/instance/](http://w3id.org/asdkb/instance/) are dereferenceable. To evaluate the quality of AsdKB, we design two evaluation methods: accuracy evaluation, and task evaluation.
**Accuracy Evaluation.** There is no ground truth available, and it is impossible to evaluate all triples manually. Therefore, we apply a random evaluation strategy. We first randomly select 100 entities distributed across classes and instances, and obtain 732 triples. These samples can reflect the distribution of triples in the entire knowledge base. We then conduct manual labeling to evaluate the accuracy of the samples. The accuracy of the entire AsdKB is estimated by evaluating the accuracy of the samples.
Five graduate students participate in the labeling process. We provide three choices, _correct_, _incorrect_, and _unknown_, to label each sample. After each student labels all the samples, we calculate the average accuracy. Finally,
similar to YAGO [10], Zhishi.me [28], and Linked Open Schema [29], we use the Wilson interval [3] with \(\alpha=5\%\) to extend our findings on the sample to the entire knowledge base. The Wilson interval is a binomial proportion confidence interval calculated from the results of a series of Bernoulli trials, and \(\alpha\) is the significance level. For the randomly selected 732 triples, the average number of _correct_ votes is 712, so the accuracy is 97.02% \(\pm\) 1.21%, which demonstrates the high quality of AsdKB.
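As a worked example of the interval reported above, the Wilson score interval for 712 correct votes out of 732 samples at \(\alpha=5\%\) (z ≈ 1.96) can be computed as follows; up to rounding it reproduces the 97.02% ± 1.21% figure.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (alpha = 5% -> z ~ 1.96)."""
    p_hat = successes / n
    denom = 1.0 + z ** 2 / n
    centre = (p_hat + z ** 2 / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z ** 2 / (4 * n ** 2))
    return centre, half_width

centre, half_width = wilson_interval(712, 732)
print(f"{100 * centre:.2f}% +/- {100 * half_width:.2f}%")  # ~97.02% +/- 1.2%
```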
**Task Evaluation.** Besides the accuracy of the triples in AsdKB, we also evaluate the effectiveness of AsdKB in answering real-world ASD-relevant questions. Thus, we collect 100 frequently asked questions (e.g., "What are the clinical symptoms of autism?" and "Which interventions are effective?") on ASD from the Chinese healthcare websites The Good Doctor and Family Doctor (introduced in Section 3.3), which are also the data sources of the expert knowledge in AsdKB. We store AsdKB in the graph database Neo4j [26], and invite five graduate students to manually write Cypher (Neo4j's graph query language) queries for the collected questions so as to check whether the returned query results can answer the questions. According to this evaluation, AsdKB can answer 81 questions, i.e., the coverage reaches 81%, which reflects the practicality of AsdKB.
## 4 Application of AsdKB
To illustrate the potential application of AsdKB, this section describes the implementation of a prototype system23 for the early screening and diagnosis of ASD based on AsdKB. This system has three main applications, including question answering, auxiliary diagnosis, and expert recommendation. Users of this system are parents, teachers, and caregivers.
Footnote 23: [http://asdkb.org.cn/](http://asdkb.org.cn/)
### Question Answering
We implement a natural language question answering (QA) system based on AsdKB, and expect that the QA system can answer various common-sense and factual questions on ASD. As mentioned in Section 3.5, AsdKB is stored in Neo4j, so we aim to translate each natural language question into a Cypher query, in order to query the graph database and return the answer. We use two strategies to design the QA system. The first one is to manually write common ASD-relevant natural language query patterns (i.e., regular expressions) according to the AsdKB ontology, together with the corresponding Cypher query templates. If a user query matches one of our patterns, we construct and execute the corresponding Cypher query based on the pre-defined Cypher query template to get the answer. If the user query does not match any of our patterns, we use the second strategy, which applies the idea of translating natural language questions into formal queries with semantic query graph modeling [31] to generate the Cypher query.
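The first strategy can be sketched as a list of (pattern, Cypher template) pairs evaluated against the user question, as below. The node labels, relationship name, and property names in the template are assumptions made for illustration; the real AsdKB schema and patterns differ, and the Neo4j connection details are placeholders.

```python
import re
from neo4j import GraphDatabase

# (question pattern, Cypher template) pairs; the schema names here are assumed.
PATTERNS = [
    (re.compile(r"(.+?)(有哪些症状|的症状)"),   # "what are the symptoms of ..."
     "MATCH (d:Disease {name: $name})-[:hasSymptom]->(s:Symptom) RETURN s.name"),
]

def answer(question, driver):
    for pattern, cypher in PATTERNS:
        match = pattern.search(question)
        if match:
            with driver.session() as session:
                return [record[0] for record in session.run(cypher, name=match.group(1))]
    return None  # fall back to the semantic-query-graph translation [31]

# Placeholder connection details for a local Neo4j instance:
# driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
```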
Figure 6 shows the interface of our QA system. We also detect the intention of each question to check whether the user would like to further fill in screening scales. If so, the system will directly give the link to the auxiliary diagnosis system to help choose screening scales (see Figure 6). The intention identification is modeled as a binary classification task, where we use BERT to encode the questions in the labeled data, and then train an SVM [9] classifier to predict whether users are willing to conduct screening or not.
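A sketch of this intention classifier is shown below. The specific checkpoint ("bert-base-chinese"), the mean-pooling choice, and the SVM kernel are assumptions made for illustration; the paper only states that BERT encodings are fed to an SVM.

```python
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.svm import SVC

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")  # assumed checkpoint
encoder = AutoModel.from_pretrained("bert-base-chinese")

def embed(sentences):
    """Mean-pooled BERT sentence embeddings."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state            # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

classifier = SVC(kernel="rbf")
# classifier.fit(embed(train_questions), train_labels)          # hypothetical labeled data
# wants_screening = classifier.predict(embed([user_question]))[0]
```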
### Auxiliary Diagnosis
We have developed an auxiliary diagnosis system based on AsdKB. This system provides users with screening scales to assess the risk of ASD. As long as the screening result of a screening scale shows a risk, the system will prompt the user to seek professional medical evaluation and recommend experts using our expert recommendation system (introduced in Section 4.3).
As shown in Figure 7(a), before filling in the screening scales, users can select appropriate screening conditions based on their situations, such as the child's age and existing symptoms, and the system will return the corresponding screening scales with a brief introduction (see Figure 7(b)). Figure 7(c) shows the questions and options when filling in the ABC screening scale. After completing a screening scale, the system gives the screening result (i.e., risky or not) based on the total score of all options and the screening boundary.
When users are filling in screening scales, they can check what specific symptoms the current question investigates to better understand the question, so as to help make a choice more precisely. Besides, after completing screening scales, this system can also analyze which option matches some diagnostic standard, to provide explanations of the screening results. More details have already been introduced in Section 3.2 and Figure 5.
Figure 6: The interface of the QA system.
### Expert Recommendation
If our auxiliary diagnosis system reports a risk of ASD, users may need to find experts on diagnosing ASD in the target administrative divisions. Thus, we design an expert recommendation system with faceted search on AsdKB. Users can choose the target province, city, and district by selecting a checkbox or by directly clicking their locations on the map (see Figure 8). The recommendation result is a list of professional physicians with their names, titles, hospital departments, hospitals, hospital addresses, and specialties.
The recommendation has two steps: candidate physician generation and candidate physician ranking. In candidate physician generation, we use the location information of hospitals in AsdKB to match the user-selected administrative divisions, and the physicians in AsdKB working at such hospitals become candidates. Note that if no candidate physician is returned, we consider more hospitals in the surrounding administrative divisions through distance calculations with latitudes and longitudes. In candidate physician ranking, three aspects are taken into consideration. Firstly, the higher the title, the higher the ranking. Secondly, the higher the hospital level, the higher the ranking. Finally, the higher the number of thumbs up minus the number of thumbs down (Figure 8 gives an example), the higher the ranking.
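The ranking step can be sketched as a lexicographic sort, as below; the numeric ranks assigned to titles and hospital levels, and the field names, are illustrative assumptions.

```python
TITLE_RANK = {"chief physician": 3, "associate chief physician": 2, "attending physician": 1}
LEVEL_RANK = {"Grade-A tertiary": 3, "Grade-B tertiary": 2, "secondary": 1}

def rank_candidates(physicians):
    """Sort candidates by title, then hospital level, then net thumbs up."""
    return sorted(
        physicians,
        key=lambda p: (TITLE_RANK.get(p["title"], 0),
                       LEVEL_RANK.get(p["hospital_level"], 0),
                       p["thumbs_up"] - p["thumbs_down"]),
        reverse=True,
    )
```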
Figure 7: An illustration of the auxiliary diagnosis system.
## 5 Related Work
Tu et al. [25] first proposed an autism ontology with domain terms and relationships relevant to autism phenotypes. Its main target is to enable user queries and inferences about such phenotypes using data in the NDAR repository, but it does not include DSM criteria, so it does not support the diagnosis of ASD. McCray et al. [18] also developed an ASD-phenotype ontology for assessing and comparing different ASD diagnostic instruments, but it does not include DSM-IV or DSM-5 criteria phenotypes either. ADAR [19] extends the ontology proposed by Tu et al. [25] with additional SWRL rules to infer phenotypes from ADI-R [16] items, and it covers various symptoms and features of the DSM-IV and DSM-5 diagnostic criteria, such as difficulties with social interaction, language and communication issues, and stereotyped and repetitive behaviors. However, many fine-grained classes are actually instances in the generic sense.
The most recent work is AutismOnt [8], an ontology for autism diagnosis and treatment, which covers various autism research directions. AutismOnt includes the classes: Diagnosis, Risk Factors, Treatments, Strength and Weakness, Services, Lifespan Issues, Profile, and Family Relationships. However, although the authors claim that AutismOnt is available in the NCBO BioPortal, it cannot be found in that repository.
Some large-scale medical knowledge bases also contain ASD knowledge. For example, SNOMED CT [5] contains a large number of medical terms, and the disease classes in AsdKB also come from SNOMED CT, but it does not cover other kinds of knowledge, such as diagnostic knowledge and expert knowledge. Yuan et al. [30] proposed a method for constructing knowledge graphs with minimal supervision based on unstructured biomedical domain-specific contexts.
Figure 8: An illustration of the expert recommendation system.
They collected 24,687 abstracts of articles related to ASD from PubMed24, and constructed a knowledge graph on ASD. However, they did not design an ontology, and the knowledge graph is not publicly available. CMeKG [4] is a Chinese medical knowledge graph developed using natural language processing and text mining techniques on a large amount of medical text data. CMeKG mistakenly lists drugs as the treatment for ASD, whereas in fact drugs are only used to alleviate the complications of ASD.
Footnote 24: [https://pubmed.ncbi.nlm.nih.gov/](https://pubmed.ncbi.nlm.nih.gov/)
Compared with all existing works, AsdKB is the first publicly available Chinese knowledge base on ASD, and it contains both ontological and factual knowledge about diseases, diagnosis, experts, and others. AsdKB has been applied in developing applications of the early screening and diagnosis of ASD.
## 6 Conclusions and Future Work
We develop and publish a Chinese knowledge base on ASD called AsdKB by extracting and integrating knowledge from various data sources with different formats. To the best of our knowledge, AsdKB is the most comprehensive ASD knowledge base on the Web, and it supports different applications for the early screening and diagnosis of ASD, such as question answering, auxiliary diagnosis, and expert recommendation. However, there are still some limitations to our work that we plan to address in the future.
#### 6.0.1 Quality of AsdKB.
During our preliminary evaluations of AsdKB, we discovered that the entities contained within the knowledge base are of high quality. However, errors do exist during the automatic extraction process. These errors stem from a variety of factors such as the quality of the original data sources, differences in data formats, and our integration methods. To address this issue, we plan to introduce crowd-sourcing techniques to fix the existing errors in AsdKB and study automatic error detection methods to ensure the accuracy of knowledge in the process of knowledge update.
#### 6.0.2 Applications of AsdKB.
We have explored various applications for AsdKB, including QA, auxiliary diagnosis, and expert recommendation. The integrated prototype system has demonstrated the potential for AsdKB to play a critical role in early ASD screening and diagnosis. To further improve the accuracy of QA and auxiliary diagnosis, we will incorporate data-driven machine learning models on more user log data in our prototype system. In addition to this, we plan to analyze electronic medical records if possible using AsdKB to assist physicians in ASD diagnosis. By analyzing medical histories, symptoms, and other relevant information using AsdKB, physicians can make more accurate diagnosis and give appropriate and personalised treatment suggestions to the people with ASD.
#### Acknowledgements
This work is supported by the NSFC (Grant No. 62006040, 62072149), the Project for the Doctor of Entrepreneurship and Innovation in Jiangsu Province (Grant No. JSSCBS20210126), the Fundamental Research Funds for the Central Universities, and ZhiShan Young Scholar Program of Southeast University.
| autism spectrum disorderの知識を得やすく、早期診断と screening を支援するため、私たちは、ASDKBという、中国におけるautism spectrum disorderの知識ベースを作成しました。知識ベースは、SNOMED CTの疾病知識と、精神・行動障害についてのICD-10の臨床記述から、1) 医学的知識を、2) DSM-5の診断知識と、社会組織や医療機関が推奨する様々なスクリーニングツールから、3) 専門医や病院の専門知識をWebから構築しました。ASDKBには、論理的知識と事実的知識が含まれており、https://w3id.org/asdkb/ でアクセス可能なリンクデータです。AsdKBの潜在的な応用としては、質問応答、補助的な診断、そして専門家の推薦があります。これらの応用を、http://asdkb.org.cn/ にアクセスできるプロトタイプで実証しました。 |
2301.13585 | Naive imputation implicitly regularizes high-dimensional linear models | Two different approaches exist to handle missing values for prediction:
either imputation, prior to fitting any predictive algorithms, or dedicated
methods able to natively incorporate missing values. While imputation is widely
(and easily) used, it is unfortunately biased when low-capacity predictors (such
as linear models) are applied afterward. However, in practice, naive imputation
exhibits good predictive performance. In this paper, we study the impact of
imputation in a high-dimensional linear model with MCAR missing data. We prove
that zero imputation performs an implicit regularization closely related to the
ridge method, often used in high-dimensional problems. Leveraging on this
connection, we establish that the imputation bias is controlled by a ridge
bias, which vanishes in high dimension. As a predictor, we argue in favor of
the averaged SGD strategy, applied to zero-imputed data. We establish an upper
bound on its generalization error, highlighting that imputation is benign in
the $d \gg \sqrt{n}$ regime. Experiments illustrate our findings. | Alexis Ayme, Claire Boyer, Aymeric Dieuleveut, Erwan Scornet | 2023-01-31T12:34:10 | http://arxiv.org/abs/2301.13585v1 |
###### Abstract
Two different approaches exist to handle missing values for prediction: either imputation, prior to fitting any predictive algorithms, or dedicated methods able to natively incorporate missing values. While imputation is widely (and easily) use, it is unfortunately biased when low-capacity predictors (such as linear models) are applied afterward. However, in practice, naive imputation exhibits good predictive performance. In this paper, we study the impact of imputation in a high-dimensional linear model with MCAR missing data. We prove that zero imputation performs an implicit regularization closely related to the ridge method, often used in high-dimensional problems. Leveraging on this connection, we establish that the imputation bias is controlled by a ridge bias, which vanishes in high dimension. As a predictor, we argue in favor of the averaged SGD strategy, applied to zero-imputed data. We establish an upper bound on its generalization error, highlighting that imputation is benign in the \(d\gg\sqrt{n}\) regime. Experiments illustrate our findings.
## 1 Introduction
Missing data has become an inherent problem in modern data science. Indeed, most real-world data sets contain missing entries due to a variety of reasons: merging different data sources, sensor failures, difficulty to collect/access data in sensitive fields (e.g., health), just to name a few. The simple, yet quite extreme, solution of throwing partial observations away can drastically reduce the data set size and thereby hinder further statistical analysis. Specific methods should be therefore developed to handle missing values. Most of them are dedicated to model estimation, aiming at inferring the underlying model parameters despite missing values (see, e.g., Rubin, 1976). In this paper, we take a different route and consider a supervised machine learning (ML) problem with missing values in the training and test inputs, for which our aim is to build a prediction function (_and not_ to estimate accurately the true model parameters).
Prediction with NAA common practice to perform supervised learning with missing data is to simply impute the data set first, and then train any predictor on the completed/imputed data set. The imputation technique can be simple (e.g., using mean
We consider a zero-imputation strategy consisting in replacing input missing entries by zero, and we formalize the induced bias on a regression task (Section 2). When the missing values are said Missing Completely At Random (MCAR), we prove that zero imputation, used prior to training a linear model, introduces an implicit regularization closely related to that of ridge regression (Section 3). This bias is exemplified to be negligible in settings commonly encountered in high-dimensional regimes, e.g., when the inputs admit a low-rank covariance matrix. We then advocate for the choice of an averaged stochastic gradient algorithm (SGD) applied on zero-imputed data (Section 4). Indeed, such a predictor, being computationally efficient, remains particularly relevant for high-dimensional learning. For such a strategy, we establish a generalization bound valid for all \(d,n\), in which the impact of imputation on MCAR data is soothed when \(d\gg\sqrt{n}\).
These theoretical results legitimate the widespread imputation approach, adopted by most practitioners, and are corroborated by numerical experiments in Section 5. All proofs are to be found in the Appendix.
## 2 Background and motivation
### General setting and notations
In the context of supervised learning, consider \(n\in\mathbb{N}\) input/output observations \(((X_{i},Y_{i}))_{i\in[n]}\), i.i.d. copies of a generic pair \((X,Y)\in\mathbb{R}^{d}\times\mathbb{R}\). By some abuse of notation, we always use \(X_{i}\) with \(i\in[n]\) to denote the \(i\)-th observation living in \(\mathbb{R}^{d}\), and \(X_{j}\) (or \(X_{k}\)) with \(j\in[d]\) (or \(k\in[d]\)) to denote the \(j\)-th (or \(k\)-th) coordinate of the generic input \(X\) (see Section A for notations).
**Missing values.** In real data sets, the input covariates \((X_{i})_{i\in[n]}\) are often only partially observed. To code for this missing information, we introduce the random vector \(P\in\{0,1\}^{d}\), referred to as the mask or missing pattern, such that \(P_{j}=0\) if the \(j\)-th coordinate of \(X\), \(X_{j}\), is missing and \(P_{j}=1\) otherwise. The random vectors \(P_{1},\ldots,P_{n}\) are the missing patterns of \(X_{1},\ldots,X_{n}\) and are assumed to be i.i.d. copies of a generic random variable \(P\in\{0,1\}^{d}\). Note that we assume that the output is always observed and that only entries of the input vectors can be missing. Missing data are usually classified into three types, initially introduced by Rubin (1976). In this paper, we focus on the MCAR assumption, where missing patterns and (underlying) inputs are independent.
**Assumption 1** (Missing Completely At Random - MCAR).: The pair \((X,Y)\) and the missing pattern \(P\) associated to \(X\) are independent.
For \(j\in[d]\), we define \(\rho_{j}:=\mathbb{P}(P_{j}=1)\), i.e., \(1-\rho_{j}\) is the expected proportion of missing values on the \(j\)-th feature. A particular case of MCAR data requires, not only the independence of the mask and the data, but also the independence between all mask components, as follows.
**Assumption 1'** (Ho-MCAR: MCAR pattern with independent homogeneous components).: The pair \((X,Y)\) and the missing pattern \(P\) associated to \(X\) are independent, and the distribution of \(P\) satisfies \(P\sim\mathcal{B}(\rho)^{\otimes d}\) for \(0<\rho\leq 1\), with \(1-\rho\) the expected proportion of missing values, and \(\mathcal{B}\) the Bernoulli distribution.
**Naive imputation of covariates.** A common way to handle missing values for any learning task is to first impute missing data, to obtain a complete dataset, to which standard ML algorithms can then be applied. In particular, constant imputation (using the empirical mean or an oracle constant provided by experts) is very common among practitioners. In this paper, we consider, even for noncentered distributions, the naive imputation by zero, so that the imputed-by-0 observation \((X_{\text{imp}})_{i}\), for \(i\in[n]\), is given by
\[(X_{\text{imp}})_{i}=P_{i}\odot X_{i}. \tag{1}\]
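As a small illustration of Eq. (1) and of the Ho-MCAR mask of Assumption 1', the following NumPy sketch builds an imputed-by-0 design matrix; the mask generator assumes independent Bernoulli(\(\rho\)) components.

```python
import numpy as np

def ho_mcar_mask(n, d, rho, rng=None):
    """Ho-MCAR missing patterns: each entry observed independently with prob. rho."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.binomial(1, rho, size=(n, d))

def zero_impute(X, P):
    """Imputed-by-0 observations (X_imp)_i = P_i * X_i, cf. Eq. (1)."""
    return P * np.nan_to_num(X, nan=0.0)  # NaN entries (if any) are also set to 0
```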
**Risk.** Let \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}\) be a measurable prediction function, based on a complete \(d\)-dimensional input. Its predictive performance can be measured through its quadratic risk,
\[R(f):=\mathbb{E}\left[\left(Y-f\left(X\right)\right)^{2}\right]. \tag{2}\]
Accordingly, we let \(f^{\star}(X)=\mathbb{E}[Y|X]\) be the Bayes predictor for the complete case and \(R^{\star}\) the associated risk.
In the presence of missing data, one can still use the predictor function \(f\), applied to the imputed-by-0 input \(X_{\mathrm{imp}}\), resulting in the prediction \(f(X_{\mathrm{imp}})\). In such a setting, the risk of \(f\), acting on the imputed data, is defined by
\[R_{\mathrm{imp}}(f):=\mathbb{E}\left[\left(Y-f(X_{\mathrm{imp}})\right)^{2} \right]. \tag{3}\]
For the class \(\mathcal{F}\) of linear prediction functions from \(\mathbb{R}^{d}\) to \(\mathbb{R}\), we respectively define
\[R^{\star}(\mathcal{F})=\inf_{f\in\mathcal{F}}R(f), \tag{4}\]
and
\[R^{\star}_{\mathrm{imp}}(\mathcal{F})=\inf_{f\in\mathcal{F}}R_{\mathrm{imp}}( f), \tag{5}\]
as the infimum over the class \(\mathcal{F}\) with respectively complete and imputed-by-0 input data.
For any linear prediction function defined by \(f_{\theta}(x)=\theta^{\top}x\) for any \(x\in\mathbb{R}^{d}\) and a fixed \(\theta\in\mathbb{R}^{d}\), as \(f_{\theta}\) is completely determined by the parameter \(\theta\), we make the abuse of notation of \(R(\theta)\) to designate \(R(f_{\theta})\) (and \(R_{\mathrm{imp}}(\theta)\) for \(R_{\mathrm{imp}}(f_{\theta})\)). We also let \(\theta^{\star}\in\mathbb{R}^{d}\) (resp. \(\theta^{\star}_{\mathrm{imp}}\)) be a parameter achieving the best risk on the class of linear functions, i.e., such that \(R^{\star}(\mathcal{F})=R(\theta^{\star})\) (resp. \(R^{\star}_{\mathrm{imp}}(\mathcal{F})=R_{\mathrm{imp}}(\theta^{\star}_{ \mathrm{imp}})\)).
**Imputation bias.** Even if the preprocessing step consisting of imputing the missing data by 0 is often used in practice, this imputation technique can introduce a bias in the prediction. We formalize this _imputation bias_ as
\[B_{\mathrm{imp}}(\mathcal{F}):=R^{\star}_{\mathrm{imp}}(\mathcal{F})-R^{ \star}(\mathcal{F}). \tag{6}\]
This quantity represents the difference in predictive performance between the best predictor on complete data and that on imputed-by-0 inputs. In particular, if this quantity is small, the risk of the best predictor on imputed data is close to that of the best predictor when all data are available. Note that, in presence of missing values, one might be interested in the Bayes predictor
\[f^{\star}_{\mathrm{mis}}(X_{\mathrm{imp}},P)=\mathbb{E}[Y|X_{\mathrm{imp}},P]. \tag{7}\]
and its associated risk \(R^{\star}_{\mathrm{mis}}\).
**Lemma 2.1**.: _Assume that the regression model \(Y=f^{\star}(X)+\epsilon\) is such that \(\epsilon\) and \(P\) are independent. Then \(R^{\star}\leq R^{\star}_{\mathrm{mis}}\)._
Intuitively, under the classical assumption \(\varepsilon\perp\!\!\!\perp P\) (see Josse et al., 2019), which is verified under Assumption 1, missing data ineluctably deteriorates the original prediction problem. As a direct consequence, for a well-specified linear model on the complete case \(f^{\star}\in\mathcal{F}\),
\[R^{\star}_{\mathrm{imp}}(\mathcal{F})-R^{\star}_{\mathrm{mis}}\leq B_{\mathrm{imp}}(\mathcal{F}). \tag{8}\]
Consequently, in this paper, we focus our analysis on the bias (and excess risk) associated to impute-then-regress strategies with respect to the complete-case problem (right-hand side term of (8)) thus controlling the excess risk of imputation with respect to the missing data scenario (left-hand side term of (8)).
In a nutshell, the quantity \(B_{\mathrm{imp}}(\mathcal{F})\) thus represents how missing values, handled with zero imputation, increase the difficulty of the learning problem. This effect can be tempered in a high-dimensional regime, as rigorously studied in Section 3. To give some intuition, let us now study the following toy example.
_Example 2.2_.: Assume an extremely redundant setting in which all covariates are equal, that is, for all \(j\in[d]\), \(X_{j}=X_{1}\) with \(\mathbb{E}\left[X_{1}^{2}\right]=1\). Also assume that the output is such that \(Y=X_{1}\) and that Assumption 1' holds with \(\rho=1/2\). In this scenario, due to the input redundancy, all \(\theta\) satisfying \(\sum_{j=1}^{d}\theta_{j}=1\) minimize \(\theta\mapsto R(\theta)\). Letting, for example, \(\theta_{1}=\left(1,0,...,0\right)^{\top}\), we have \(R^{\star}=R(\theta_{1})=0\) but
\[R_{\mathrm{imp}}(\theta_{1})=\mathbb{E}\left[(X_{1}-P_{1}X_{1})^{2}\right]= \frac{1}{2}.\]
This choice of \(\theta_{1}\) introduces an irreducible discrepancy between the risk computed on the imputed data and the Bayes risk \(R^{\star}=0\). Another choice of parameter could actually help to close this gap. Indeed, by exploiting the redundancy in covariates, the parameter \(\theta_{2}=\left(2/d,2/d,...,2/d\right)^{\top}\) (which is not a minimizer of the initial risk anymore) gives
\[R_{\mathrm{imp}}(\theta_{2})=\mathbb{E}\bigg{[}\Big{(}X_{1}-\frac{2}{d}\sum_{ j=1}^{d}P_{j}X_{j}\Big{)}^{2}\bigg{]}=\frac{1}{d},\]
so that the imputation bias \(B_{\mathrm{imp}}(\mathcal{F})\) is bounded by \(1/d\), tending to zero as the dimension increases. Two other important observations on this example follow. First, this bound is still valid if \(\mathbb{E}X_{1}\neq 0\), thus the imputation by \(0\) is still relevant even for non-centered data. Second, we remark that \(\|\theta_{2}\|_{2}^{2}=4/d\), thus good candidates to predict with imputation seem to be of small norm in high dimension. This will be proved for more general settings, in Section 4.
The purpose of this paper is to generalize the phenomenon described in Example 2.2 to less stringent settings. In light of this example, we focus our analysis on scenarios for which some information is shared across input variables: for linear models, correlation plays such a role.
**Covariance matrix.** For a generic complete input \(X\in\mathbb{R}^{d}\), call \(\Sigma:=\mathbb{E}\left[XX^{\top}\right]\) the associated covariance matrix, admitting the following singular value decomposition
\[\Sigma=\sum_{j=1}^{d}\lambda_{j}v_{j}v_{j}^{\top}, \tag{9}\]
where \(\lambda_{j}\) (resp. \(v_{j}\)) are singular values (resp. singular vectors) of \(\Sigma\) and such that \(\lambda_{1}\geq...\geq\lambda_{d}\). The associated pseudo-norm is given by, for all \(\theta\in\mathbb{R}^{d}\),
\[\|\theta\|_{\Sigma}^{2}:=\theta^{\top}\Sigma\theta=\sum_{j=1}^{d}\lambda_{j}(v _{j}^{\top}\theta)^{2}.\]
For the best linear prediction, we write \(Y=X^{\top}\theta^{\star}+\epsilon\), where the noise satisfies \(\mathbb{E}[\epsilon X]=0\) (first-order condition). Denoting \(\mathbb{E}[\epsilon^{2}]=\sigma^{2}\), we have
\[\mathbb{E}Y^{2}=\|\theta^{\star}\|_{\Sigma}^{2}+\sigma^{2}=\sum_{j=1}^{d} \lambda_{j}(v_{j}^{\top}\theta^{\star})^{2}+\sigma^{2}. \tag{10}\]
The quantity \(\lambda_{j}(v_{j}^{\top}\theta^{\star})^{2}\) can be therefore interpreted as the part of the variance explained by the singular direction \(v_{j}\).
_Remark 2.3_.: Note that, in the setting of Example 2.2, \(\Sigma\) has a unique positive singular value \(\lambda_{1}=d\), that is to say, all of the variance is concentrated on the first singular direction. Actually, our analysis will stress that a proper decay of the singular values leads to a low imputation bias.
Furthermore, for the rest of our analysis, we need the following assumptions on the second-order moments of \(X\).
**Assumption 2**.: \(\exists L<\infty\) such that, \(\forall j\in[d]\), \(\mathbb{E}[X_{j}^{2}]\leq L^{2}\).
**Assumption 3**.: \(\exists\ell>0\) such that, \(\forall j\in[d]\), \(\mathbb{E}[X_{j}^{2}]\geq\ell^{2}\).
For example, Assumption 2 and 3 hold with \(L^{2}=\ell^{2}=1\) with normalized data.
## 3 Imputation bias for linear models
### Implicit regularization of imputation
Ridge regression, widely used in high-dimensional settings, and notably for its computational purposes, amounts to form an \(\ell_{2}\)-penalized version of the least square estimator:
\[\hat{\theta}_{\lambda}\in\operatorname*{arg\,min}_{\theta\in\mathbb{R}^{d}} \left\{\frac{1}{n}\sum_{i=1}^{n}\left(Y_{i}-f_{\theta}(X_{i})\right)^{2}+ \lambda\left\|\theta\right\|_{2}^{2}\right\},\]
where \(\lambda>0\) is the penalization parameter. The associated generalization risk can be written as
\[R_{\lambda}(\theta):=R(\theta)+\lambda\left\|\theta\right\|_{2}^{2}.\]
Proposition 3.1 establishes a link between imputation and ridge penalization.
**Proposition 3.1**.: _Under Assumption 1, let \(V\) be the covariance matrix of \(P\) (\(V_{ij}=\operatorname{Cov}(P_{i},P_{j})\)) and \(H=\operatorname{diag}(\rho_{1},\ldots,\rho_{d})\), with \(\rho_{j}=\mathbb{P}(P_{j}=1)\). Then, for all \(\theta\),_
\[R_{\mathrm{imp}}(\theta)=R\left(H\theta\right)+\left\|\theta\right\|_{V\odot \Sigma}^{2}.\]
_In particular, under Assumptions 1', 2 and 3 when \(L^{2}=\ell^{2}\),_
\[R_{\mathrm{imp}}(\theta)=R\left(\rho\theta\right)+L^{2}\rho(1-\rho)\left\| \theta\right\|_{2}^{2}. \tag{11}\]
This result highlights the implicit \(\ell^{2}\)-regularization at work: performing standard regression on zero-imputed ho-MCAR data can be seen as performing a ridge regression on complete data, whose strength \(\lambda\) depends on the missing values proportion. More precisely, using Equation (11), the optimal predictor \(\theta_{\mathrm{imp}}^{\star}\) working with imputed samples verifies
\[\theta_{\mathrm{imp}}^{\star}=\frac{1}{L^{2}\rho}\operatorname*{arg\,min}_{ \theta\in\mathbb{R}^{d}}\left\{R\left(\theta\right)+\lambda_{\mathrm{imp}} \left\|\theta\right\|_{2}^{2}\right\},\]
with \(\lambda_{\mathrm{imp}}:=L^{2}\left(\frac{1-\rho}{\rho}\right)\). We exploit this correspondence in Section 3.2 and 3.3 to control the imputation bias.
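The identity (11) is easy to check numerically. The sketch below draws normalized Gaussian data (so that \(L^{2}=\ell^{2}=1\)), applies a Ho-MCAR mask, and compares the empirical imputed risk with \(R(\rho\theta)+\rho(1-\rho)\|\theta\|_{2}^{2}\); the distributional choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, rho = 10, 500_000, 0.7

A = rng.normal(size=(d, d))
Sigma = A @ A.T
Dinv = np.diag(1.0 / np.sqrt(np.diag(Sigma)))
Sigma = Dinv @ Sigma @ Dinv                      # unit variances: L^2 = l^2 = 1
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
theta_star = rng.normal(size=d)
Y = X @ theta_star + rng.normal(scale=0.5, size=n)

P = rng.binomial(1, rho, size=(n, d))            # Ho-MCAR mask (Assumption 1')
X_imp = P * X                                    # zero imputation
theta = rng.normal(size=d)                       # an arbitrary linear predictor

risk = lambda pred: np.mean((Y - pred) ** 2)
lhs = risk(X_imp @ theta)                                        # R_imp(theta)
rhs = risk(X @ (rho * theta)) + rho * (1 - rho) * np.sum(theta ** 2)
print(lhs, rhs)  # the two values agree up to Monte Carlo error
```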
### Imputation bias for linear models with ho-MCAR missing inputs
When the inputs admit ho-MCAR missing patterns (Assumption 1'), the zero-imputation bias \(B_{\mathrm{imp}}(\mathcal{F})\) induced in the linear model is controlled by a particular instance of the ridge regression bias (see, e.g., Hsu et al., 2012; Dieuleveut et al., 2017; Mourtada, 2019), defined in general by
\[B_{\mathrm{ridge},\lambda}(\mathcal{F}) :=\inf_{\theta\in\mathbb{R}^{d}}\left\{R_{\lambda}(\theta)-R^{ \star}(\mathcal{F})\right\} \tag{12}\] \[=\lambda\left\|\theta^{\star}\right\|_{\Sigma(\Sigma+\lambda I)^{ -1}}^{2}. \tag{13}\]
**Theorem 3.2**.: _Under Assumption 1', 2, and 3, one has_
\[B_{\mathrm{ridge},\lambda_{\mathrm{imp}}^{\prime}}(\mathcal{F})\leq B_{\mathrm{ imp}}(\mathcal{F})\leq B_{\mathrm{ridge},\lambda_{\mathrm{imp}}}(\mathcal{F}),\]
_with \(\lambda_{\mathrm{imp}}^{\prime}:=\ell^{2}\left(\frac{1-\rho}{\rho}\right)\) and \(\lambda_{\mathrm{imp}}=L^{2}\left(\frac{1-\rho}{\rho}\right)\)._
As could be expected from Proposition 3.1, the zero-imputation bias is lower and upper-bounded by the ridge bias, with a penalization constant depending on the fraction of missing values. In the specific case where \(\ell^{2}=L^{2}\) (same second-order moment), the imputation bias exactly equals a ridge bias with a constant \(L^{2}(1-\rho)/\rho\). Besides, in the extreme case where there is no missing data (\(\rho=1\)) then \(\lambda_{\mathrm{imp}}=0\), and the bias vanishes. On the contrary, if there is a large percentage of missing values (\(\rho\to 0\)) then \(\lambda_{\mathrm{imp}}^{\prime}\to+\infty\) and the imputation bias amounts to the excess risk of the naive predictor, i.e., \(B_{\mathrm{imp}}(\mathcal{F})=R(0_{\mathbb{R}^{d}})-R^{\star}(\mathcal{F})\). For the intermediate case where half of the data is likely to be missing (\(\rho=1/2\)), we obtain \(\lambda_{\mathrm{imp}}=L^{2}\).
Thus, in terms of statistical guarantees, performing linear regression on imputed inputs suffers from a bias comparable to that of a ridge penalization, but with a fixed hyperparameter \(\lambda_{\mathrm{imp}}\). Note that, when performing standard ridge regression in a high-dimensional setting, the best theoretical choice of the penalization parameter usually scales as \(d/n\)(see Sridharan et al., 2008; Hsu et al., 2012; Mourtada and Rosasco, 2022, for details). If \(\rho\gtrsim L^{2}\frac{n}{d+n}\) (which is equivalent to \(\lambda_{\mathrm{imp}}\lesssim\frac{d}{n}\)), the imputation bias remains smaller than that of the ridge regression with the optimal hyperparameter \(\lambda=d/n\) (which is commonly accepted in applications). In this context, performing zero-imputation prior to applying a ridge regression allows handling easily missing data without drastically increasing the overall bias.
It turns out that the bias of the ridge regression in random designs, and thus the imputation bias, can be controlled under classical assumptions about low-rank covariance structures (Caponnetto and De Vito, 2007; Hsu et al., 2012; Dieuleveut et al., 2017). In all the following examples, we consider that \(\mathrm{Tr}(\Sigma)=d\), which holds in particular for normalized data.
_Example 3.3_ (Low-rank covariance matrix with equal singular values).: Consider a covariance matrix with a low rank \(r\ll d\) and constant eigenvalues (\(\lambda_{1}=\dots=\lambda_{r}=\frac{d}{r}\)). Then \(\Sigma(\Sigma+\lambda_{\mathrm{imp}}I)^{-1}\preceq\lambda_{r}^{-1}\Sigma= \frac{r}{d}\Sigma\) and Theorem 3.2 leads to
\[B_{\mathrm{imp}}(\mathcal{F})\leq\lambda_{\mathrm{imp}}\frac{r}{d}\left\| \theta^{\star}\right\|_{\Sigma}^{2}.\]
Hence, the imputation bias is small when \(r\ll d\) (low-rank setting). Indeed, for a fixed dimension, when the covariance is low-rank, there is a lot of redundancy across variables, which helps counterbalance the missing information in the input variables, thereby reducing the prediction bias.
Note that Example 3.3 (\(r\ll d\)) is a generalization of Example 2.2 (in which \(r=1\)), and is rotation-invariant contrary to the latter.
_Remark 3.4_.: A first order condition (see equation (29)) implies that \(\|\theta^{\star}\|_{\Sigma}^{2}+\sigma^{2}=\mathbb{E}Y^{2}=R\left(0_{\mathbb{R}^{d}}\right)\), which is independent of the dimension \(d\). Thus, in all our upper bounds, \(\|\theta^{\star}\|_{\Sigma}^{2}\) can be replaced by \(\mathbb{E}Y^{2}\), which is dimension-free. Consequently, the upper bound of Example 3.3 (and of the following examples) can be interpreted as follows: if \(r\ll d\), then the risk of the naive predictor is divided by \(d/r\gg 1\). As a consequence, \(B_{\mathrm{imp}}\) tends to zero when the dimension increases and the rank is fixed.
_Example 3.5_ (Low-rank covariance matrix compatible with \(\theta^{\star}\) ).: Consider a covariance matrix with a low rank \(r\ll d\) and assume that \(\langle\theta^{\star},v_{1}\rangle^{2}\geq\cdots\geq\langle\theta^{\star},v_{d} \rangle^{2}\) (meaning that \(\theta^{\star}\) is well represented with the first eigendirections of \(\Sigma\)), Theorem 3.2 leads to
\[B_{\mathrm{imp}}(\mathcal{F})\lesssim\lambda_{\mathrm{imp}}\frac{r(\log(r)+1)} {d}\left\|\theta^{\star}\right\|_{\Sigma}^{2}.\]
This result is similar to Example 3.3 (up to a log factor), except that assumptions on the eigenvalues of \(\Sigma\) have been replaced by a condition on the compatibility between the covariance structure and \(\theta^{\star}\). If \(\theta^{\star}\) is well explained by the largest eigenvalues then the imputation bias remains low. This underlines that imputation bias does not only depend on the spectral structure of \(\Sigma\) but also on \(\theta^{\star}\).
_Example 3.6_ (Spiked model, Johnstone (2001)).: In this model, the covariance matrix can be decomposed as \(\Sigma=\Sigma_{\leq r}+\Sigma_{>r}\) where \(\Sigma_{\leq r}\) corresponds to the low-rank part of the data with large eigenvalues and \(\Sigma_{>r}\) to the residual high-dimensional data. Suppose that \(\Sigma_{>r}\preceq\eta I\) (small operator norm) and that all non-zero eigenvalues of \(\Sigma_{\leq r}\) are equal, then Theorem 3.2 gives
\[B_{\mathrm{imp}}(\mathcal{F})\leq\frac{\lambda_{\mathrm{imp}}}{1-\eta}\frac{r }{d}\left\|\theta^{\star}\right\|_{\Sigma}^{2}+\eta\left\|\theta^{\star}_{>r} \right\|_{2}^{2},\]
where \(\theta^{\star}_{>r}\) is the projection of \(\theta^{\star}\) on the range of \(\Sigma_{>r}\). Contrary to Example 3.3, \(\Sigma\) is only _approximately_ low rank, and one can refer to \(r\) as the "effective rank" of \(\Sigma\)(see Bartlett et al., 2020). The above upper bound admits a term in \(O(r/d)\) (as in Example 3.3), but also suffers from a non-compressible part \(\eta\left\|\theta^{\star}_{>r}\right\|_{2}^{2}\), due to the presence of residual (potentially noisy) high-dimensional data. Note that, if \(\theta^{\star}_{>r}=0\) (only the low-dimensional part of the data is informative) then we retrieve the same rate as in Example 3.3.
### Imputation bias for linear models and general MCAR settings
Theorem 3.2 holds only for Ho-MCAR settings, which excludes the case of dependence between mask components. To cover the case of dependent variables \(P_{1},\ldots,P_{d}\) under Assumption 1, recall \(\rho_{j}:=\mathbb{P}(P_{j}=1)\) the probability that the component \(j\) is not missing, and define the matrix \(C\in\mathbb{R}^{d\times d}\) associated to \(P\), given by:
\[C_{kj}:=\frac{V_{k,j}}{\rho_{k}\rho_{j}},\quad(k,j)\in[d]\times[d]. \tag{14}\]
Furthermore, under Assumption 2, define
\[\Lambda_{\mathrm{imp}}:=L^{2}\lambda_{\mathrm{max}}(C). \tag{15}\]
The following result establishes an upper bound on the imputation bias for general MCAR settings.
**Proposition 3.7**.: _Under Assumption 1 and 2, we have_
\[B_{\mathrm{imp}}(\mathcal{F})\leq B_{\mathrm{ridge},\Lambda_{\mathrm{imp}}}( \mathcal{F}).\]
The bound on the bias is similar to that of Theorem 3.2 but relies on \(\lambda=\Lambda_{\mathrm{imp}}\), which takes into account the correlations between the components of missing patterns. Remark that, under Assumption 1', there are no correlations and \(\Lambda_{\mathrm{imp}}=L^{2}\frac{1-\rho}{\rho}\), thus matching the result in Theorem 3.2. The following examples highlight generic scenarios in which an explicit control on \(\Lambda_{\mathrm{imp}}\) is obtained.
_Example 3.8_ (Limited number of correlations).: If each missing pattern component is correlated with at most \(k-1\) other components then \(\Lambda_{\mathrm{imp}}\leq L^{2}k\max_{j\in[d]}\left\{\frac{1-\rho_{j}}{\rho_{ j}}\right\}\).
_Example 3.9_ (Sampling without replacement).: Missing pattern components are sampled as \(k\) components without replacement in \([d]\), then \(\Lambda_{\mathrm{imp}}=L^{2}\frac{k+1}{d-k}\). In particular, if one half of data is missing (\(k=\frac{d}{2}\)) then \(\Lambda_{\mathrm{imp}}\leq 3L^{2}\).
In conclusion, we proved that the imputation bias is controlled by the ridge bias, with a penalization constant \(\Lambda_{\mathrm{imp}}\), under any MCAR settings. More precisely, all examples of the previous section (Examples 3.3, 3.5 and 3.6), relying on a specific structure of the covariance matrix \(\Sigma\) and the best predictor \(\theta^{\star}\), are still valid, replacing \(\lambda_{\mathrm{imp}}\) by \(\Lambda_{\mathrm{imp}}\). Additionally, specifying the missing data generation (as in Examples 3.8 and 3.9) allows us to control the imputation bias, which is then proved to be small in high dimension, for all the above examples.
## 4 SGD on zero-imputed data
Since the imputation bias is only a part of the story, we need to propose a proper estimation strategy for \(\theta^{\star}_{\mathrm{imp}}\). To this aim, we choose to train a linear predictor on imputed samples, using an averaged stochastic gradient algorithm (Polyak and Juditsky, 1992), described below. We then establish generalization bounds on the excess risk of this estimation strategy.
### Algorithm
Given an initialization \(\theta_{0}\in\mathbb{R}^{d}\) and a constant learning rate \(\gamma>0\), the iterates of the averaged SGD algorithm are given at iteration \(t\) by
\[\theta_{\mathrm{imp},t}=\left[I-\gamma X_{\mathrm{imp},t}X_{\mathrm{imp},t}^{ \top}\right]\theta_{\mathrm{imp},t-1}+\gamma Y_{t}X_{\mathrm{imp},t}, \tag{16}\]
so that after one pass over the data (early stopping), the final estimator \(\bar{\theta}_{\mathrm{imp},n}\) is given by the Polyak-Ruppert average \(\bar{\theta}_{\mathrm{imp},n}=\frac{1}{n+1}\sum_{t=1}^{n}\theta_{\mathrm{imp},t}\). Such recursive procedures are suitable for high-dimensional settings, and indicated for model misspecification (induced here by missing entries), as studied in Bach and Moulines (2013). Besides, they are very competitive for large-scale datasets, since one pass over the data requires \(O(dn)\) operations.
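A direct NumPy implementation of the recursion (16) with Polyak-Ruppert averaging reads as follows; the learning-rate choice mirrors the one used in Theorem 4.1, with \(\mathrm{Tr}(\Sigma)\) replaced by an empirical estimate (an assumption made for illustration).

```python
import numpy as np

def averaged_sgd(X_imp, Y, gamma, theta0=None):
    """One pass of constant-step SGD on zero-imputed data, returning the
    Polyak-Ruppert average of the iterates (cf. Eq. (16))."""
    n, d = X_imp.shape
    theta = np.zeros(d) if theta0 is None else theta0.astype(float).copy()
    theta_sum = np.zeros(d)
    for t in range(n):
        x, y = X_imp[t], Y[t]
        theta = theta - gamma * x * (x @ theta) + gamma * y * x   # theta_t
        theta_sum += theta
    return theta_sum / (n + 1)

# Learning rate gamma = 1 / (kappa * Tr(Sigma) * sqrt(n)), with Tr(Sigma)
# estimated here by the mean squared norm of the inputs (an approximation).
# kappa = 3.0
# gamma = 1.0 / (kappa * np.mean(np.sum(X_imp ** 2, axis=1)) * np.sqrt(len(Y)))
# theta_bar = averaged_sgd(X_imp, Y, gamma)
```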
### Generalization bound
Our aim is to derive a generalization bound on the predictive performance of the above algorithm, trained on zero-imputed data. To do this, we require the following extra assumptions on the complete data.
**Assumption 4**.: There exist \(\sigma>0\) and \(\kappa>0\) such that \(\mathbb{E}[XX^{\top}\left\|X\right\|_{2}^{2}]\preceq\kappa\mathrm{Tr}(\Sigma)\Sigma\) and \(\mathbb{E}[\epsilon^{2}\left\|X\right\|_{2}^{2}]\leq\sigma^{2}\kappa\mathrm{Tr} (\Sigma)\), where \(\epsilon=Y-X^{\top}\theta^{\star}\).
Assumption 4 is a classical fourth-moment assumption in stochastic optimization (see Bach and Moulines, 2013; Dieuleveut et al., 2017, for details). Indeed, the first statement in Assumption 4 holds, for example, if \(X\) is a Gaussian vector (with \(\kappa=3\)) or when \(X\) satisfies \(\left\|X\right\|_{2}\leq\kappa\mathrm{Tr}(\Sigma)\) almost surely. The second statement in Assumption 4 holds, for example, if the model is well specified or when the noise \(\varepsilon\) is almost surely bounded. Note that if the first part holds then the second part holds with \(\sigma^{2}\leq 2\mathbb{E}[Y^{2}]+2\mathbb{E}[Y^{4}]^{1/2}\).
Our main result, establishing an upper bound on the risk of SGD applied to zero-imputed data, follows.
**Theorem 4.1**.: _Under Assumption 4, choosing a constant learning rate \(\gamma=\frac{1}{\kappa\mathrm{Tr}(\Sigma)\sqrt{n}}\) leads to_
\[\mathbb{E}\left[R_{\mathrm{imp}}\left(\bar{\theta}_{\mathrm{imp},n}\right) \right]-R^{\star}(\mathcal{F})\lesssim\frac{\kappa\mathrm{Tr}(\Sigma)}{\sqrt{ n}}\left\|\theta_{\mathrm{imp}}^{\star}-\theta_{0}\right\|_{2}^{2}+\frac{ \sigma^{2}+\|\theta^{\star}\|_{\Sigma}^{2}}{\sqrt{n}}+B_{\mathrm{imp}}( \mathcal{F}),\]
_where \(\theta^{\star}\) (resp. \(\theta_{\mathrm{imp}}^{\star}\)) is the best linear predictor for complete (resp. with imputed missing values) case._
Theorem 4.1 gives an upper bound on the difference between the averaged risk \(\mathbb{E}[R_{\mathrm{imp}}\left(\bar{\theta}_{\mathrm{imp},n}\right)]\) of the estimated linear predictor with imputed missing values (in both train and test samples) and \(R^{\star}(\mathcal{F})\), the risk of the best linear predictor on the complete case. Interestingly, by Lemma 2.1 and under a well-specified linear model, the latter also holds for \(\mathbb{E}\left[R_{\mathrm{imp}}\left(\bar{\theta}_{\mathrm{imp},n}\right) \right]-R_{\mathrm{mis}}^{\star}\). The generalization bound in Theorem 4.1 takes into account the statistical error of the method as well as the optimization error. More precisely, the upper bound can be decomposed into \((i)\) a bias associated to the initial condition, \((ii)\) a variance term of the considered method, and \((iii)\) the aforementioned imputation bias.
The variance term \((ii)\) depends on the second moment of \(Y\) (as \(\left\|\theta^{\star}\right\|_{\Sigma}^{2}\leq\mathbb{E}Y^{2}\)) and decreases with a slow rate \(1/\sqrt{n}\). As seen in Section 3, the imputation bias is upper-bounded by the ridge bias with penalization parameter \(\lambda_{\mathrm{imp}}\), which is controlled in high dimension for low-rank data (see examples in Section 3.2).
The bias \((i)\) due to the initial condition is the most critical. Indeed, \(\mathrm{Tr}(\Sigma)=\mathbb{E}[\|X\|_{2}^{2}]\) is likely to increase with \(d\), e.g., under Assumption 2, \(\mathrm{Tr}(\Sigma)\leq dL^{2}\). Besides, the starting point \(\theta_{0}\) may be far from \(\theta_{\mathrm{imp}}^{\star}\). Fortunately, Lemma 4.2 establishes some properties of \(\theta_{\mathrm{imp}}^{\star}\).
**Lemma 4.2**.: _Under Assumptions 1 and 3, let \(V\) be the covariance matrix of \(P\) defined in Proposition 3.1. If \(V\) is invertible, then_
\[\left\|\theta_{\mathrm{imp}}^{\star}\right\|_{2}^{2}\leq\frac{B_{\mathrm{imp} }(\mathcal{F})}{\ell^{2}\lambda_{\mathrm{min}}(V)}. \tag{17}\]
_In particular, under Assumption 1',_
\[\left\|\theta_{\mathrm{imp}}^{\star}\right\|_{2}^{2}\leq\frac{B_{\mathrm{imp} }(\mathcal{F})}{\ell^{2}\rho(1-\rho)}. \tag{18}\]
Lemma 4.2 controls the norm of the optimal predictor \(\theta_{\mathrm{imp}}^{\star}\) by the imputation bias: if the imputation bias is small, then the optimal predictor on zero-imputed data is of low norm. According to Section 3, this holds in particular for high-dimensional settings. Thus, choosing \(\theta_{0}=0\) permits us to exploit the upper bound provided by Lemma 4.2 in Theorem 4.1. With such an initialization, the bias due to this initial condition is upper bounded by \(\frac{\kappa\mathrm{Tr}(\Sigma)}{\sqrt{n}}\|\theta_{\mathrm{imp}}^{\star}\|_{2}^ {2}\). Intuitively, as \(\theta_{\mathrm{imp}}^{\star}\) is in an \(\ell^{2}\)-ball of small radius, choosing \(\theta_{0}\) within that ball, e.g. \(\theta_{0}=0\) is a good choice.
Taking into account Lemma 4.2, Proposition 4.3 establishes our final upper bound on SGD on zero-imputed data.
**Proposition 4.3**.: _Under Assumptions 1', 2, 3 and 4, the predictor \(\bar{\theta}_{\mathrm{imp},n}\) resulting from the SGD strategy, defined in Section 4.1, with starting point \(\theta_{0}=0\) and learning rate \(\gamma=\frac{1}{d\kappa L^{2}\sqrt{n}}\), satisfies_
\[\mathbb{E}\left[R_{\mathrm{imp}}\left(\bar{\theta}_{\mathrm{imp},n}\right) \right]-R^{\star}(\mathcal{F})\lesssim\left(\frac{L^{2}}{\ell^{2}}\frac{\kappa d }{\rho(1-\rho)\sqrt{n}}+1\right)B_{\mathrm{imp}}(\mathcal{F})+\frac{\sigma^{2 }+\|\theta^{\star}\|_{\Sigma}^{2}}{\sqrt{n}}.\]
In this upper bound, the first term encapsulates the imputation bias and the one due to the initial condition, whilst the second one corresponds to the variance of the training procedure. As soon as \(d\gg\frac{\ell^{2}}{L^{2}}\frac{\rho(1-\rho)\sqrt{n}}{\kappa}\) then the imputation bias is negligible compared to that of the initial condition.
### Examples
According to Examples 3.3 and 3.6, \(B_{\mathrm{imp}}(\mathcal{F})\) decreases with the dimension, provided that \(\Sigma\) or \(\theta^{\star}\) are structured. Strikingly, Corollary 4.4 highlights cases where the upper bound of Proposition 4.3 is actually dimension-free.
**Corollary 4.4**.: _Suppose that assumptions of Proposition 4.3 hold. Recall that \(\lambda_{1}\geq\ldots\geq\lambda_{d}\) are the eigenvalues of \(\Sigma\) associated with the eigenvectors \(v_{1},\ldots,v_{d}\)._
1. (Example 3.3 - Low-rank \(\Sigma\)). _If_ \(\Sigma\) _has a low rank_ \(r\ll d\) _and equal non-zero singular values, then_ \[\mathbb{E}\left[R_{\mathrm{imp}}\left(\bar{\theta}_{\mathrm{imp},n}\right) \right]-R^{\star}(\mathcal{F})\lesssim\frac{L^{2}}{\ell^{2}}\left(\frac{L^{2}} {\ell^{2}}\frac{\kappa}{\rho\sqrt{n}}+\frac{1-\rho}{d}\right)\frac{r\,\| \theta^{\star}\|_{\Sigma}^{2}}{\rho}+\frac{\sigma^{2}}{\sqrt{n}}.\]
2. (Example 3.6 - Spiked model). _If_ \(\Sigma=\Sigma_{\leq r}+\Sigma_{>r}\) _with_ \(\Sigma_{>r}\preceq\ell^{2}\eta I\)_,_ \(\Sigma_{\leq r}\) _has a low rank_ \(r\ll d\) _with equal non-zero singular values, and the projection of_ \(\theta^{\star}\) _on the range of_ \(\Sigma_{>r}\) _satisfies_ \(\theta_{>r}^{\star}=0\)_, then_ \[\mathbb{E}\left[R_{\mathrm{imp}}\left(\bar{\theta}_{\mathrm{imp},n}\right) \right]-R^{\star}(\mathcal{F})\lesssim\frac{L^{2}}{\ell^{2}}\left(\frac{L^{2}} {\ell^{2}}\frac{\kappa}{\rho\sqrt{n}}+\frac{1-\rho}{d}\right)\frac{r\,\| \theta^{\star}\|_{\Sigma}^{2}}{\rho(1-\eta)}+\frac{\sigma^{2}}{\sqrt{n}}.\]
Corollary 4.4 establishes upper bounds on the risk of SGD applied on zero-imputed data, for some particular structures on \(\Sigma\) and \(\theta^{\star}\). These bounds take into account the statistical error as well as the optimization one, and are expressed as function of \(d\) and \(n\). Since \(\left\lVert\theta^{\star}\right\rVert_{\Sigma}^{2}\) is upper bounded by \(\mathbb{E}Y^{2}\) (a dimension-free term), the risks in Corollary 4.4 can also be upper bounded by dimension-free quantities, provided \(d>\frac{\ell^{2}}{L^{2}}\frac{\rho(1-\rho)\sqrt{n}}{\kappa}\).
Besides, Corollary 4.4 shows that, for \(d\gg\frac{\ell^{2}}{L^{2}}\frac{\rho(1-\rho)\sqrt{n}}{\kappa}\), the imputation bias is negligible with respect to the stochastic error of SGD. Therefore, for structured problems in high-dimensional settings for which \(d\gg\frac{\ell^{2}}{L^{2}}\frac{\rho(1-\rho)\sqrt{n}}{\kappa}\), the zero-imputation strategy is consistent, with a slow rate of order \(1/\sqrt{n}\).
_Remark 4.5_ (Discussion about slow rates).: An important limitation of coupling naive imputation with SGD is that fast convergence rates cannot be reached. Indeed, in large dimensions, the classical fast rate is given by \(\operatorname{Tr}(\Sigma(\Sigma+\lambda I)^{-1})/n\) with \(\lambda\) the penalization hyper-parameter. The quantity \(\operatorname{Tr}(\Sigma(\Sigma+\lambda I)^{-1})\), often called degrees of freedom, can be negligible w.r.t. \(d\) (for instance when \(\Sigma\) has a fast eigenvalue decay). However, when working with an imputed dataset, the covariance matrix of the data is not \(\Sigma\) anymore, but \(\Sigma_{\text{imp}}=\mathbb{E}X_{\text{imp}}X_{\text{imp}}^{\top}\). Therefore, in the case of Assumption 1' (Ho-MCAR), all the eigenvalues of \(\Sigma_{\text{imp}}\) are larger than \(\rho(1-\rho)\) (preventing the eigenvalues decay obtained when working with complete inputs). By concavity of the degrees of freedom (on positive semi-definite matrix), we can show that \(\operatorname{Tr}(\Sigma_{\text{imp}}(\Sigma_{\text{imp}}+\lambda I)^{-1}) \geq\frac{d\rho(1-\rho)}{1+\lambda}\), hindering traditional fast rates.
**Link with dropout.** Dropout is a classical regularization technique used in deep learning, consisting in randomly discarding some neurons at each SGD iteration (Srivastava et al., 2014). Regularization properties of dropout have attracted a lot of attention (e.g., Gal and Ghahramani, 2016). Interestingly, setting a neuron to \(0\) on the input layer is equivalent to masking the corresponding feature. Running SGD (as in Section 4) on a stream of zero-imputed data is thus equivalent to training a neural network with no hidden layer, a single output neuron, and dropout on the input layer. Our theoretical analysis describes the implicit regularization impact of dropout in that very particular case. Interestingly, this can also be applied to the fine-tuning of the last layer of any regression network structure.
## 5 Numerical experiments
**Data simulation.** We generate \(n=500\) complete input data according to a normal distribution with two different covariance structures. First, in the **low-rank** setting (Ex. 3.3 and 3.5), the output is formed as \(Y=\beta^{\top}Z+\epsilon\), with \(\beta\in\mathbb{R}^{r}\), \(Z\sim\mathcal{N}(0,I_{r})\) and \(\epsilon\sim\mathcal{N}(0,2)\), and the inputs are given by \(X=AZ+\mu\), with a full rank matrix \(A\in\mathbb{R}^{d\times r}\) and a mean vector \(\mu\in\mathbb{R}^{d}\). Note that the dimension \(d\) varies in the experiments, while \(r=5\) is kept fixed. Besides, the full model can be rewritten as \(Y=X^{\top}\theta^{\star}+\epsilon\) with \(\theta^{\star}=(A^{\dagger})^{\top}\beta\) where \(A^{\dagger}\) is the Moore-Penrose inverse of \(A\). Secondly, in the **spiked model** (Ex. 3.6), the input and the output are decomposed as \(X=(X_{1},X_{2})\in\mathbb{R}^{d/2}\times\mathbb{R}^{d/2}\) and \(Y=Y_{1}+Y_{2}\), where \((X_{1},Y_{1})\)
is generated according to the low-rank model above and \((X_{2},Y_{2})\) is given by a linear model \(Y_{2}=\theta_{2}^{\top}X_{2}\) and \(X_{2}\sim\mathcal{N}(0,I_{d/2})\), choosing \(\|\theta_{2}\|=0.2\).
Two missing data scenarios, with a proportion \(\rho\) of observed entries equal to \(50\%\), are simulated according to (i) the Ho-MCAR setting (Assumption 1'); and to (ii) the self-masking MNAR setting, which departs significantly from the MCAR case as the presence of missing data depends on the underlying value itself. More precisely, set \(\alpha\in\mathbb{R}^{d}\) such that, for all \(j\in[d]\), \(\mathbb{P}(P_{j}=1|X)=(1+e^{-\alpha_{j}X_{j}})^{-1}\) and \(\mathbb{E}[P_{j}]=0.5\) (\(50\%\) of missing data on average per component).
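The following sketch reproduces the low-rank design and the two missingness mechanisms described above; the choices of \(\beta\), \(A\), \(\mu\), and \(\alpha\) are random placeholders, and in particular \(\alpha\) is not calibrated here to give exactly 50% missing entries as in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, r = 500, 100, 5

# Low-rank setting: X = A Z + mu and Y = beta^T Z + eps.
beta = rng.normal(size=r)
Z = rng.normal(size=(n, r))
A = rng.normal(size=(d, r))                   # full column rank almost surely
mu = rng.normal(size=d)
X = Z @ A.T + mu
Y = Z @ beta + rng.normal(scale=np.sqrt(2), size=n)

# (i) Ho-MCAR: each entry observed independently with probability rho = 0.5.
rho = 0.5
P_mcar = rng.binomial(1, rho, size=(n, d))

# (ii) Self-masking MNAR: P(P_j = 1 | X) = sigmoid(alpha_j * X_j).
alpha = rng.normal(size=d)
P_mnar = rng.binomial(1, 1.0 / (1.0 + np.exp(-alpha * X)))

X_imp_mcar, X_imp_mnar = P_mcar * X, P_mnar * X   # imputed-by-0 designs
```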
**Regressors.** For two-step strategies, different imputation
## 6 Discussion and conclusion
In this paper, we study the impact of zero imputation in high-dimensional linear models. We demystify this widespread technique, by exposing its implicit regularization mechanism when dealing with MCAR data. We prove that, in high-dimensional regimes, the induced bias is similar to that of ridge regression, commonly accepted by practitioners. By providing generalization bounds on SGD trained on zero-imputed data, we establish that such two-step procedures are statistically sound, while being computationally appealing.
Theoretical results remain to be established beyond the MCAR case, to properly analyze and compare the different strategies for dealing with missing data in MNAR settings (see Figure 1 (c)). Extending our results to a broader class of functions (beyond linear functions), or even to a classification framework, would be valuable to fully understand the properties of imputation.
| 予測のための欠損値を扱うための2つの異なるアプローチが存在します。それは、予測アルゴリズムを適用する前に欠損値を補完するか、欠損値を natively 統合できる専用のメソッドです。補完は広く(そして容易に)使用されていますが、低容量の予測モデル(例えば線形モデル)を適用すると、不幸にしてバイアスが生じます。しかし、実際には、無作為の補完は予測性能が良好です。この論文では、M.C.A.R.欠損データを持つ高次元線形モデルにおける補完の影響を調査します。ゼロ補完は、高次元問題でよく使われるリッジメソッドに関連する、無意識の正規化を遂行します。この関係を利用して、補完のバイアスがリッジバイアスによって制御されることを示し、高次元では消失します。予測モデルとして、ゼロ補完データに対して平均化されたSGD戦略を |
2305.01476 | Deep Learning Based Multimodal with Two-phase Training Strategy for
Daily Life Video Classification | In this paper, we present a deep learning based multimodal system for
classifying daily life videos. To train the system, we propose a two-phase
training strategy. In the first training phase (Phase I), we extract the audio
and visual (image) data from the original video. We then train the audio data
and the visual data with independent deep learning based models. After the
training processes, we obtain audio embeddings and visual embeddings by
extracting feature maps from the pre-trained deep learning models. In the
second training phase (Phase II), we train a fusion layer to combine the
audio/visual embeddings and a dense layer to classify the combined embedding
into target daily scenes. Our extensive experiments, which were conducted on
the benchmark dataset of DCASE (IEEE AASP Challenge on Detection and
Classification of Acoustic Scenes and Events) 2021 Task 1B Development,
achieved the best classification accuracy of 80.5%, 91.8%, and 95.3% with only
audio data, with only visual data, both audio and visual data, respectively.
The highest classification accuracy of 95.3% presents an improvement of 17.9%
compared with DCASE baseline and shows very competitive to the state-of-the-art
systems. | Lam Pham, Trang Le, Cam Le, Dat Ngo, Weissenfeld Axel, Alexander Schindler | 2023-04-30T19:12:34 | http://arxiv.org/abs/2305.01476v1 | # Deep Learning Based Multimodal with Two-phase Training Strategy for Daily Life Video Classification
###### Abstract
In this paper, we present a deep learning based multimodal system for classifying daily life videos. To train the system, we propose a two-phase training strategy. In the first training phase (Phase I), we extract the audio and visual (image) data from the original video. We then train the audio data and the visual data with independent deep learning based models. After the training processes, we obtain audio embeddings and visual embeddings by extracting feature maps from the pre-trained deep learning models. In the second training phase (Phase II), we train a fusion layer to combine the audio/visual embeddings and a dense layer to classify the combined embedding into target daily scenes. Our extensive experiments, which were conducted on the benchmark dataset of DCASE (IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events) 2021 Task 1B Development, achieved the best classification accuracy of 80.5%, 91.8%, and 95.3% with only audio data, only visual data, and both audio and visual data, respectively. The highest classification accuracy of 95.3% represents an improvement of 17.9% over the DCASE baseline and is very competitive with state-of-the-art systems.
## I Introduction
Recently, applying deep learning techniques to analyze videos has achieved many successes and opened a variety of real-life applications. Indeed, a wide range of deep learning based systems have been proposed for various human-related tasks such as emotion recognition [1], lip-reading [2], or detecting human activities [3, 4, 5]. More recently, a dataset of daily-scene videos [6], which was proposed by the DCASE challenge [7] for a new task of audio-visual scene classification (AVSC), was published and attracted attention from the video research community. Similar to the systems proposed for analyzing videos of human activities [5, 1], the state-of-the-art systems proposed for the AVSC task also leveraged deep learning based models and performed joint audio-visual analysis. For instance, the proposed systems in [8, 9] used convolutional based models to extract audio embeddings from audio data and leveraged pre-trained deep learning models for extracting visual embeddings from visual data. Then, the audio embeddings and the visual embeddings are concatenated and fed into dense layers for classification. To further enhance the audio/visual embeddings, the authors in [10] proposed a graph based model which was used to learn the audio/visual feature maps extracted from middle layers of deep learning backbone models. The graph based model then generates a graph based audio/visual embedding. The graph based audio/visual embeddings are finally fused with the audio/visual embeddings before going to dense layers for classification. Meanwhile, the authors in [11] improved the audio/visual embeddings by proposing a contrastive event-object alignment layer. The contrastive event-object alignment layer, which is based on the contrastive learning technique, helps to explore the relationship between audio and visual information by learning relative distances of event-object pairs occurring in both audio and visual scenes.
In this paper, we also leverage deep learning techniques and propose a deep learning based multimodal system for the task of AVSC. We present our main contributions: (a) We propose a mechanism, which combines multi-model fusion and a two-phase training strategy, to generate an audio-visual embedding representing one video input. (b) We evaluate our proposed deep learning based multimodal system on the DCASE 2021 Task 1B Development set, which is the largest benchmark dataset for the task of audio-visual daily scene classification. Results reveal that our proposed system is very competitive with the state-of-the-art systems.
## II Proposed Deep Learning Based Multimodal for AVSC task
As Figure 1 shows, the high-level architecture of our proposed deep learning based multimodal system for audio-visual scene classification (AVSC) comprises two individual branches: the audio branch and the visual branch, which focus on either the audio or the visual data extracted from the input video. Regarding the audio branch, the audio is first transformed into spectrograms, which are then fed into three Audio Backbones to extract audio embeddings.
Fig. 1: The high-level architecture of the proposed deep learning based multimodal for AVSC task
Meanwhile, images are fed into two Visual Backbones to extract image embeddings. The audio and image embeddings are then combined by a Fusion Layer to generate an audio-visual embedding (the Fusion Layer is denoted by the function \(f\)). The audio-visual embedding is finally classified into target categories by a Dense Layer. From the results reported in recent papers [8, 9, 11], we can see that the visual data contributes to the AVSC performance more significantly than the audio data. If we trained our proposed AVSC system with an end-to-end training process, it could cause overfitting on the visual branch and reduce the role of the audio branch. We therefore propose a two-phase training strategy to train our proposed AVSC system. While the first training phase (Phase I) is used to train and obtain the individual Audio and Visual Backbones, the Fusion Layer and the Dense Layer are trained in the second phase (Phase II).
_Phase I: Train deep learning models on individual audio or visual data to achieve audio and visual backbones_
In Phase I, we aim to obtain individual high-performance Audio and Visual Backbones as shown in Figure 1. To this end, we consider the AVSC task as a combination of two independent tasks: Acoustic Scene Classification (ASC) and Visual Scene Classification (VSC). To deal with the ASC task, we leverage multiple input spectrograms, which has proven effective in improving ASC performance [12, 13]. In particular, we propose the deep learning based systems shown in Figure 2 to train the audio data. The audio is first re-sampled to 32,000 Hz, then transformed into three types of spectrogram: Mel spectrogram (MEL), Gammatone (GAM), and Constant-Q-Transform (CQT), where both temporal and spectral features are presented. By using two channels and setting the filter number, the window size, and the hop size to 128, 80 ms, and 14 ms, respectively, we generate MEL, GAM, and CQT spectrograms of \(128{\times}309{\times}2\) from one 10-second audio segment. Delta and delta-delta are then applied to the three-dimensional spectrograms to obtain six-dimensional spectrograms of \(128{\times}305{\times}6\). Next, the Mixup [14] data augmentation method is applied to the six-dimensional spectrograms before they are fed into a residual-inception based network for classification. The residual-inception based network for training audio spectrograms is separated into two main parts: a Residual-Inception block and a Dense block. The Residual-Inception block in this paper is the CNN-based backbone of the novel residual-inception deep neural network architecture, which is reused from our previous work in [15]. Meanwhile, the Dense block comprises two dense layers with the detailed configuration shown in the lower part of Figure 2. As we apply three types of spectrogram transformation (i.e. MEL, GAM, and CQT), we obtain three individual deep learning based models for audio input, referred to as the Aud-MEL, Aud-GAM, and Aud-CQT models, respectively.
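As a rough illustration of this front end, the librosa-based sketch below builds the MEL branch with its delta and delta-delta channels; the GAM and CQT branches follow the same pattern with a gammatone filterbank and a constant-Q transform, and the paper's two-channel handling is omitted here. The function name and the exact librosa settings are our own choices, not the authors' code.

```python
import numpy as np
import librosa

SR, N_MELS = 32000, 128
WIN, HOP = int(0.080 * SR), int(0.014 * SR)   # 80 ms window, 14 ms hop

def mel_with_deltas(wav_path):
    """Return a (128, frames, 3) MEL / delta / delta-delta stack for one clip."""
    y, _ = librosa.load(wav_path, sr=SR, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=SR, n_fft=WIN,
                                         hop_length=HOP, n_mels=N_MELS)
    mel = librosa.power_to_db(mel)
    return np.stack([mel,
                     librosa.feature.delta(mel),
                     librosa.feature.delta(mel, order=2)], axis=-1)
```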
As regards the visual data, we propose the deep learning models shown in Figure 3. The original images (i.e. two images from each second) extracted from scene videos are first scaled into tensors of \(224{\times}224{\times}3\) in RGB format. Then, the Mixup [14] data augmentation method is applied to the scaled images before they are fed into the classification models. To construct the classification models, we are inspired by [16], which shows that a combination of Inception based and ConvNet based models proves effective in improving the performance of VSC tasks. We therefore select the InceptionV3 and ConvNeXtTiny networks from the Keras library [17] for evaluating the VSC task in this paper. As both the InceptionV3 and ConvNeXtTiny networks were trained on ImageNet [18] in advance, we reuse the trainable parameters from the first layer to the global pooling layer of these networks. We then connect these pre-trained layers with two dense layers, as shown in the lower part of Figure 3, to form the InceptionV3 and ConvNeXtTiny based classification models for the VSC task in this paper. The InceptionV3 and ConvNeXtTiny based classifiers, which are finetuned on the downstream VSC task, are referred to as Vis-CONV and Vis-INC models, respectively.
Given the individual pre-trained models Aud-MEL, Aud-GAM, and Aud-CQT for audio input and Vis-CONV and Vis-INC for visual input, we remove the header layers of these pre-trained models (i.e. either the softmax layer or the final dense layer) to form the Audio and Visual Backbones shown in Figure 1. In other words, when we feed audio or visual data into the pre-trained models Aud-MEL, Aud-GAM, Aud-CQT, Vis-CONV, and Vis-INC, the feature maps extracted at the first fully connected layer FC(1024) or at the second fully connected layer FC(10) are considered as the audio and visual embeddings shown in Figure 1.
representing one video input. In this paper, we propose three combination methods for the Fusion Layer. Additionally, we have two types of audio/visual embeddings: the first type is extracted from the first fully connected layer FC(1024) of the pre-trained deep learning based models Aud-MEL, Aud-GAM, Aud-CQT, Vis-CONV, and Vis-INC; the second type is extracted from the second fully connected layer FC(10) of these pre-trained models. As a result, we evaluate six types of Fusion Layer in total, referred to as \(f_{1},f_{2},f_{3},f_{4},f_{5}\), and \(f_{6}\). While \(f_{1},f_{2},f_{3}\) are three types of combinations for the first type of audio/visual embeddings, \(f_{4},f_{5},f_{6}\) are for the second type. Let us consider \(\{\mathbf{ae_{g}},\mathbf{ae_{m}},\mathbf{ae_{c}},\mathbf{ve_{i}},\mathbf{ve_{c}}\}\in R^{1024}\) as the first type of audio and visual embeddings extracted from the first fully connected layer FC(1024) of the Audio and Visual Backbones; the fusion functions \(f_{1},f_{2},f_{3}\) representing the Fusion Layer are then described by
\[f_{1}=\mathbf{a}\mathbf{e}_{\mathbf{g}}.\mathbf{w}_{\mathbf{1}}+\mathbf{a} \mathbf{e}_{\mathbf{m}}.\mathbf{w}_{\mathbf{2}}+\mathbf{a}\mathbf{e}_{\mathbf{ c}}.\mathbf{w}_{\mathbf{3}}+\mathbf{v}\mathbf{e}_{\mathbf{i}}.\mathbf{w}_{ \mathbf{4}}+\mathbf{v}\mathbf{e}_{\mathbf{c}}.\mathbf{w}_{\mathbf{5}}+\mathbf{ b}, \tag{1}\]
\[f_{2}=(\mathbf{a}\mathbf{e}_{\mathbf{g}}.\mathbf{w}_{\mathbf{1}}+\mathbf{a} \mathbf{e}_{\mathbf{m}}.\mathbf{w}_{\mathbf{2}}+\mathbf{a}\mathbf{e}_{ \mathbf{c}}.\mathbf{w}_{\mathbf{3}}).\mathbf{w}_{\mathbf{a}}+(\mathbf{v} \mathbf{e}_{\mathbf{i}}.\mathbf{w}_{\mathbf{4}}+\mathbf{v}\mathbf{e}_{\mathbf{ c}}.\mathbf{w}_{\mathbf{5}}).\mathbf{w}_{\mathbf{v}}+\mathbf{b}, \tag{2}\]
\[f_{3}=concat[(\mathbf{ae_{g}}.\mathbf{w_{1}}+\mathbf{ae_{m}}.\mathbf{w_{2}}+\mathbf{ae_{c}}.\mathbf{w_{3}}),(\mathbf{ve_{i}}.\mathbf{w_{4}}+\mathbf{ve_{c}}.\mathbf{w_{5}})], \tag{3}\]
where \(\{\mathbf{w}_{\mathbf{1}},\mathbf{w}_{\mathbf{2}},\mathbf{w}_{\mathbf{3}}, \mathbf{w}_{\mathbf{4}},\mathbf{w}_{\mathbf{5}},\mathbf{w}_{\mathbf{a}}, \mathbf{w}_{\mathbf{v}},\mathbf{b}\}\in R^{1024}\) are trainable parameters.
Regarding the fusion function \(f_{1}\), we assume that individual audio/visual embeddings have a linear relation across each dimension. Therefore, we apply the element-wise product between each trainable vector of \(\mathbf{w}_{\mathbf{1}},\mathbf{w}_{\mathbf{2}},\mathbf{w}_{\mathbf{3}}, \mathbf{w}_{\mathbf{4}},\mathbf{w}_{\mathbf{5}}\) and each individual embedding before adding a bias \(\mathbf{b}\). By this way, a linear function, which helps to learn the relation of audio/visual embeddings across 1024 dimension, is established. Meanwhile, in the fusion function \(f_{2}\), we first apply the linear combination for only audio embeddings and for visual embeddings independently. Then, we again apply the linear combination for both audio and visual embeddings using trainable vectors of \(\mathbf{w}_{\mathbf{a}},\mathbf{w}_{\mathbf{v}}\) and \(\mathbf{b}\). For the fusion function \(f3\), we also first apply the linear combination for only audio embeddings and only visual embeddings independently. We then concatenate two audio and visual embeddings to perform one audio-visual embedding. The fusion functions \(f_{4},f_{5},f_{6}\) share the same equation as \(f_{1},f_{2},f_{3}\) respectively with the second type of audio/visual input embeddings of \(\{\mathbf{a}\mathbf{e}_{\mathbf{g}},\mathbf{a}\mathbf{e}_{\mathbf{m}},\mathbf{a }\mathbf{e}_{\mathbf{c}},\mathbf{v}\mathbf{e}_{\mathbf{i}},\mathbf{v}\mathbf{e }_{\mathbf{c}}\}\in R^{10}\) and the trainable parameters of \(\{\mathbf{w}_{\mathbf{1}},\mathbf{w}_{\mathbf{2}},\mathbf{w}_{\mathbf{3}}, \mathbf{w}_{\mathbf{4}},\mathbf{w}_{\mathbf{5}},\mathbf{w}_{\mathbf{a}}, \mathbf{w}_{\mathbf{v}},\mathbf{b}\}\in R^{10}\).
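A minimal PyTorch sketch of the fusion functions \(f_{1}\) and \(f_{3}\) is given below (the \(f_{4}\)/\(f_{6}\) variants are identical with `dim=10`). The module names and initializations are our own; they are not taken from the authors' implementation.

```python
import torch
import torch.nn as nn

class FusionF1(nn.Module):
    """f1: element-wise weighted sum of the five embeddings plus a bias."""
    def __init__(self, dim=1024):
        super().__init__()
        self.w = nn.Parameter(torch.ones(5, dim))   # w1..w5
        self.b = nn.Parameter(torch.zeros(dim))

    def forward(self, ae_g, ae_m, ae_c, ve_i, ve_c):
        embs = torch.stack([ae_g, ae_m, ae_c, ve_i, ve_c], dim=1)  # (B, 5, dim)
        return (embs * self.w).sum(dim=1) + self.b                 # (B, dim)

class FusionF3(nn.Module):
    """f3: weighted audio sum and weighted visual sum, concatenated."""
    def __init__(self, dim=1024):
        super().__init__()
        self.wa = nn.Parameter(torch.ones(3, dim))                 # w1..w3
        self.wv = nn.Parameter(torch.ones(2, dim))                 # w4, w5

    def forward(self, ae_g, ae_m, ae_c, ve_i, ve_c):
        a = (torch.stack([ae_g, ae_m, ae_c], dim=1) * self.wa).sum(dim=1)
        v = (torch.stack([ve_i, ve_c], dim=1) * self.wv).sum(dim=1)
        return torch.cat([a, v], dim=-1)                           # (B, 2 * dim)
```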
The output of the Fusion Layer, known as the audio-visual embedding, is finally classified by a Dense Layer performed by a fully connected layer FC(10) and a Softmax layer as shown in the Figure 1. Notably, as we freeze the Audio and Visual Backbones in the Phase II training process, the model is forced to learn the Fusion Layer and the Dense Layer.
## III Experimental Results and Discussions
### _Implementation of deep learning models_
We use the TensorFlow framework to implement all deep learning based models in this paper. As mixup [14] data augmentation is applied to audio spectrograms, image frames, and audio/visual embeddings to strengthen the classifiers, the labels of the augmented data are no longer one-hot. We therefore use the Kullback-Leibler (KL) divergence loss to train the back-end classification models:
\[LOSS_{KL}(\Theta)=\sum_{n=1}^{N}\mathbf{y}_{n}\log\left(\frac{\mathbf{y}_{n}}{ \mathbf{\hat{y}}_{n}}\right)+\frac{\lambda}{2}||\Theta||_{2}^{2}, \tag{4}\]
where \(N\) is the number of training samples, \(\Theta\) denotes the trainable network parameters, and \(\lambda\) denotes the \(\ell_{2}\)-norm regularization coefficient. \(\mathbf{y}_{n}\) and \(\mathbf{\hat{y}}_{n}\) denote the ground truth and the network output, respectively. All training processes in this paper are run on two GeForce RTX 2080 Titan GPUs using the Adam method [19] for optimization.
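A hedged PyTorch sketch of this objective is shown below: soft labels are produced by mixup and plugged into Eq. (4). The batch averaging, the mixup \(\alpha\), and the value of the weight-decay coefficient are our own choices, not settings from the paper.

```python
import torch
import torch.nn.functional as F

def mixup(x, y_onehot, alpha=0.4):
    """Mix a batch with a shuffled copy of itself; labels become soft."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[idx], lam * y_onehot + (1 - lam) * y_onehot[idx]

def kl_loss(logits, soft_targets, params, lam_l2=1e-4, eps=1e-8):
    """Eq. (4): sum_n y_n log(y_n / y_hat_n) + (lambda / 2) * ||Theta||^2."""
    log_probs = F.log_softmax(logits, dim=-1)
    kl = (soft_targets * (torch.log(soft_targets + eps) - log_probs)).sum(-1).mean()
    l2 = sum(p.pow(2).sum() for p in params)
    return kl + 0.5 * lam_l2 * l2
```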
### _Datasets and evaluation metric_
This dataset is referred to as DCASE Task 1B Development and was proposed for the DCASE 2021 challenge [7]. The dataset is slightly imbalanced and contains both acoustic and visual information, recorded from 12 large European cities: Amsterdam, Barcelona, Helsinki, Lisbon, London, Lyon, Madrid, Milan, Prague, Paris, Stockholm, and Vienna. It consists of 10 scene classes: airport, shopping mall (indoor), metro station (underground), pedestrian street, public square, street (traffic), traveling by tram, bus and metro (underground), and urban park, which can be categorized into three meta-classes of indoor, outdoor, and transportation. The dataset was recorded by four recording devices simultaneously with the same settings of a 48,000 Hz sampling rate and 24-bit resolution. We follow the DCASE 2021 Task 1B challenge [7] and separate this dataset into a training (Train.) subset for the training process and an evaluation (Eval.) subset for inference. As regards the evaluation metric, the Accuracy (Acc.%), which is commonly applied in classification tasks [7] and is also proposed for the DCASE Task 1B challenge, is used to evaluate the AVSC task in this paper.
### _Experimental results and discussion_
We first evaluate the performance of our proposed systems with the different types of fusion methods mentioned in Section II-B. As the results in Table I show, the fusion methods \(f_{4},f_{5},f_{6}\) outperform \(f_{1},f_{2},f_{3}\), respectively. In other words, fusions of audio/visual embeddings extracted from the second fully connected layer FC(10) are more effective than fusions of audio/visual embeddings from the first fully connected layer FC(1024). We also see that the best accuracy score of 95.3% is achieved by the \(f_{4}\) method, which presents a linear combination of all five audio/visual embeddings.
We then compare the performance of the proposed systems using the \(f_{4}\) fusion with only audio data, only visual data, and both audio and visual data. As Figure 4 shows, the proposed AVSC system using only visual data (91.8%) outperforms the system with only audio data (80.5%) over almost all categories, except 'Tram' and 'Park'. When both audio and visual data are used, the performance improves in all categories (most categories achieve an accuracy of more than 90%, except 'Airport' with 88.0%).
We compare our best systems (i.e. using \(f_{4}\) fusion) with the state-of-the-art systems. As Table II shows, our proposed systems using only audio or only visual data outperform the state-of-the-art systems, recording accuracies of 80.5% and 91.8%, respectively. Our proposed system using both audio and visual data achieves the second-best result, after the system from [20]. However, the top-1 system [20] presented an intensive ensemble of nine large deep learning models (EfficientNet, ResNeSt, and RegNet for directly training audio data; ResNet-6.4F, ResNetSt-50d, and HRNet-W18 for directly training visual data; CLIP based networks of ResNet-101, ResNet-50x4, and ViT-B32 for extracting visual embeddings), which requires nine individual processes as well as a post-processing method for inference. Meanwhile, our proposed system combines five lighter models (three residual-inception based models for audio data (36 M trainable parameters), and InceptionV3 and ConvNeXtTiny based models for visual data (69.4 M trainable parameters)) and presents an end-to-end inference process.
## IV Conclusion
We have proposed a deep learning based multimodal system with a two-phase training strategy for classifying daily life videos. Our proposed model, which makes use of a multi-spectrogram approach for audio data (i.e. MEL, GAM, and CQT) and multiple networks for visual data (InceptionV3 and ConvNeXtTiny), achieves the best performance of 95.3% on the benchmark dataset of DCASE 2021 Task 1B. The experimental results show that our proposed AVSC system is very competitive with the state-of-the-art systems and has potential for real-life applications.
| In this paper, we present a deep learning based multimodal system for classifying daily life videos. To train this system, we propose a two-phase training strategy. In the first training phase (Phase I), we extract the audio and visual (image) data from the original video and train the audio data and the visual data with independent deep learning based models. After training, we obtain audio embeddings and visual embeddings by extracting feature maps from the pre-trained deep learning models. In the second training phase (Phase II), we train a fusion layer to combine the audio/visual embeddings and a dense layer to classify the combined embedding into the target daily scenes. The results of this paper, obtained through experiments on the benchmark dataset of DCASE (IEEE AASP Challenge) 2021 Task 1B Development, show that with only audio ... |
2306.00238 | Bytes Are All You Need: Transformers Operating Directly On File Bytes | Modern deep learning approaches usually utilize modality-specific processing.
For example, the most common deep learning approach to image classification
involves decoding image file bytes into an RGB tensor which is passed into a
neural network. Instead, we investigate modality-independent representation
learning by performing classification directly on file bytes, without the need
for decoding files at inference time. This enables models to operate on various
modalities without any hand-designed, modality-specific processing. Our model,
ByteFormer, improves ImageNet Top-1 classification accuracy by $5\%$ (from
$72.2\%$ to $77.33\%$) relative to DeIT models of similar size. Compared to
Perceiver IO, our model requires absolutely no modality-specific processing at
inference time, and uses an order of magnitude fewer parameters at equivalent
accuracy on ImageNet. We demonstrate that the same ByteFormer architecture can
perform audio classification without modifications or modality-specific
preprocessing. We achieve $95.42\%$ classification accuracy on the Speech
Commands V2 dataset (comparable to the state-of-the-art accuracy of $98.7\%$).
Additionally, we demonstrate that ByteFormer can operate jointly on images and
audio, handling joint classification without explicit knowledge of the input
modality. We release our code at
https://github.com/apple/corenet/tree/main/projects/byteformer. | Maxwell Horton, Sachin Mehta, Ali Farhadi, Mohammad Rastegari | 2023-05-31T23:18:21 | http://arxiv.org/abs/2306.00238v2 | # Bytes Are All You Need: Transformers Operating Directly On File Bytes
###### Abstract
Modern deep learning approaches usually transform inputs into a modality-specific form. For example, the most common deep learning approach to image classification involves decoding image file bytes into an RGB tensor which is passed into a neural network. Instead, we investigate performing classification directly on file bytes, without the need for decoding files at inference time. Using file bytes as model inputs enables the development of models which can operate on multiple input modalities. Our model, _ByteFormer_, achieves an ImageNet Top-1 classification accuracy of \(77.33\%\) when training and testing directly on TIFF file bytes using a transformer backbone with configuration similar to DeiT-Ti (\(72.2\%\) accuracy when operating on RGB images). Without modifications or hyperparameter tuning, ByteFormer achieves \(95.42\%\) classification accuracy when operating on WAV files from the Speech Commands v2 dataset (compared to state-of-the-art accuracy of \(98.7\%\)). Additionally, we demonstrate that ByteFormer has applications in privacy-preserving inference. ByteFormer is capable of performing inference on particular obfuscated input representations with no loss of accuracy. We also demonstrate ByteFormer's ability to perform inference with a hypothetical privacy-preserving camera which avoids forming full images by consistently masking \(90\%\) of pixel channels, while still achieving \(71.35\%\) accuracy on ImageNet. Our code will be made available at [https://github.com/apple/ml-cvnets/tree/main/examples/byteformer](https://github.com/apple/ml-cvnets/tree/main/examples/byteformer).
## 1 Introduction
Deep learning inference usually involves explicit modeling of the input modality. For example, Vision Transformers (ViTs) [7] explicitly model the 2D spatial structure of images by encoding image patches into vectors. Similarly, audio inference often involves computing spectral features (such as MFCCs [25]) to pass into a network [10, 18]. When a user wants to perform inference on a file stored on disk (e.g. a JPEG image file or an MP3 audio file), the user must first decode the file into a modality-specific representation (e.g. an RGB tensor or MFCCs), as in Figure 0(a).
The practice of decoding inputs into a modality-specific representation has two main drawbacks. First, it requires hand-crafting an input representation and a model stem for each input modality. Recent works such as PerceiverIO [14] and UnifiedIO [24] have shown that Transformer backbones can be used for a variety of different tasks. However, these methods still require modality-specific input preprocessing. For instance, PerceiverIO decodes image files into \([H\times W,C]\) tensors before passing them into the network. Other modalities input to PerceiverIO are processed into different forms. We hypothesize that it's possible to remove all modality-specific input preprocessing by performing inference directly on file bytes.
The second drawback of decoding inputs into a modality-specific representation is that it reduces privacy by exposing the data being analyzed. Consider the case of a smart-home device that performs inference on RGB images. If an adversary accesses this model input, the user's privacy might be compromised. We hypothesize that inference can instead be performed on privacy-preserving inputs.
To address these drawbacks, we note that a common property of many input modalities is that they can be stored as file bytes. Thus, we use file bytes (without any decoding) as inputs to our model at inference time (Figure 0(b)). We use a modified Transformer [39] architecture for our model, given their ability to handle a variety of modalities [14, 24] and variable-length inputs. We call our model ByteFormer.
We demonstrate the efficacy of ByteFormer on ImageNet [6] classification, achieving \(77.33\%\) accuracy on files stored in the TIFF format. Our model uses transformer backbone hyperparameters chosen in DeiT-Ti [38] (which achieves \(72.2\%\) accuracy on RGB inputs). We also demonstrate strong results on PNG and JPEG files. Additionally, we demonstrate that our classification model can achieve \(95.8\%\) accuracy on Speech Commands v2 [42], comparable to state-of-the-art (\(98.7\%\)) [18], _without any architecture changes or hyperparameter tuning_.
Because ByteFormer can handle a variety of input representations, we can also use it to operate on privacy-preserving inputs. We demonstrate that we can remap input byte values using a permutation function \(\phi:[0,255]\rightarrow[0,255]\) (Figure 1c) to obfuscate inputs without losing accuracy. Although this does not guarantee cryptography-level security, we demonstrate how this method can be used as a building block for obfuscating inputs to a learning system.
Stronger privacy can be obtained by performing inference with ByteFormer on a partially-formed image (Figure 1d). We demonstrate that ByteFormer is capable of training on images with \(90\%\) of the pixels masked while still achieving \(71.35\%\) accuracy on ImageNet [6]. ByteFormer does not require information about the specific location of unmasked pixels. Our representation passed to our model maintains privacy by avoiding a standard image capture.
In summary, our contributions are: (1) we develop ByteFormer, a model capable of performing inference on file bytes. (2) We show that ByteFormer achieves strong performance on a variety of image and audio file encodings, without the need for architectural changes or hyperparameter tuning. (3) We demonstrate application of ByteFormer to privacy-preserving inputs. (4) We analyze properties of ByteFormers trained to classify images and audio directly from file bytes. (5) We will release our code at [https://github.com/apple/ml-cvnets/tree/main/examples/byteformer](https://github.com/apple/ml-cvnets/tree/main/examples/byteformer).
## 2 Related Work
**Architectures With Multimodal Inputs:** A few methods have explored the idea of feeding different input modalities into the same network for processing. Perceiver IO [14] demonstrates that a Transformer [39] architecture with cross-attention input can be used for a variety of different tasks. Their method differs from ours because their inputs are processed with modality-specific preprocessing.
Figure 1: An overview of our ByteFormer (BF) compared to standard inference with DeiT [38]. (A): File bytes are read from disk and converted to an RGB tensor using a standard image decoder. A patch embedding creates tokens from the RGB representation. (B): File bytes on disk are directly used as tokens, and projected into learned embeddings. (C): Similar to (B), but we apply an obfuscation function \(\phi\). (D): We capture a privacy-preserving representation with a custom camera, and perform token embedding from this representation.
For example, images are loaded into a \([H\times W,C]\) buffer, which differs from the model's treatment of text. By contrast, our method can classify images directly from file bytes. To our knowledge, we are the first to explore models that directly consume file bytes without modality-specific processing.
Other recent works that model multiple modalities with a single model or a single embedding space (but still require input-specific processing) include Unified IO [24], CLIP [33], and LQAE [21].
**Alternate Image Input Representations:** Previous works have explored using alternate input representations for images. In [11], the authors perform partial JPEG decoding, stopping when Discrete Cosine Transform [26] coefficients are formed. They modify CNNs [12] to ingest this new representation. In [31], a similar method is used with Transformers. Our work differs in that we perform no decoding of file bytes at inference time.
**Privacy-Preserving Inference:** We demonstrate applications of our model to privacy-preserving inference. A few works have examined secure inference in Multi-Party Communication (MPC) settings [41, 19, 35, 37, 16]. The focus of these works is to perform inference securely on a remote machine using decrypted data on a local machine. We differ from these methods in that our privacy-preserving systems add a layer of privacy to the data on a local machine, and inference is performed locally. Thus, our approach is complementary to theirs.
**Compressive Sensing:** Our privacy-preserving camera is inspired by works in compressive sensing [8]. Related works use image input masking with a single-pixel camera to capture an image over multiple exposures with different masks [9, 13]. Instead, we experiment with a single masked capture on a multi-pixel camera.
## 3 Overview of Common File Encodings
When performing inference with a standard model, the choice of file encoding is irrelevant. For example, it doesn't matter whether an image is stored as a JPEG or PNG file if it will be decoded into an RGB tensor. By contrast, ByteFormer performs inference on file bytes. In this case, the choice of file encoding matters. This section provides an overview of common file encodings for images (subsection 3.1) and audio (subsection 3.2). File encoding methods typically contain a large number of optional settings that influence the resulting file bytes. We use the default settings provided by the PIL [4] or scipy [40] software packages unless otherwise stated.
### Image File Encodings
**fHWC:** We use fHWC as an abbreviation for "flattened tensors in height, width, channel order." It refers to uint8 image bytes stored in HWC order without any file headers. It is not common to store images in this way, since they cannot be decoded without pre-existing knowledge of their height and width. This serves as a strong baseline.
**fCHW:** This format is similar to fHWC, but images are stored in "CHW" order.
**TIFF:** The TIFF file encoding [32] allows for many custom configurations. For our experimentation, we use the default settings provided by PIL. This results in a format similar to fHWC, but with the addition of TIFF image headers describing configuration options and the image size.
**PNG:** The PNG format [2] contains headers describing PNG configuration options, followed by rows of image data stored in "IDAT" chunks. Each IDAT chunk contains a byte describing the filtering method used for that row's data. The filtering method applies an offset to the row's data based on neighboring pixel values. Thus, our PNG file contains rows of RGB data, with offsets applied, interrupted by occasional bytes that contain file encoding settings. We do not use the optional zlib compression that PNG allows.
**JPEG:** JPEG [43] encodes images by applying a series of transformations to compress the image before serialization. The RGB image is converted to YCbCr, then downsampled in the chroma channels, then passed through a Discrete Cosine Transform [26], then quantized using coefficients determined by the JPEG quality factor. The quality factor determines the level of compression, with \(100\) denoting no compression due to quantization, and lower values indicating stronger compression. After quantization, the coefficients are encoded via a run-length encoding, followed by a Huffman encoding [34]. Note that Huffman codes are not byte-aligned, e.g. they can cross byte boundaries. We expect this to make our modeling task more difficult.
### Audio File Encodings
**WAV:** The WAV file encoding [17] stores audio signals represented as a sequence of amplitudes. We use single-channel (mono) audio files. The most common configuration options are the bit depth and the frequency. The bit depth corresponds to the precision with which amplitude values are stored. We experiment with a variety of bit depths, storing audio with 8-bit unsigned integers, 16-bit integers, 32-bit integers, and 32-bit floating-point values. The frequency corresponds to how often amplitude values are chosen. We use 16 kHz, a standard choice for audio [42].
**MP3:** MP3 [30] uses a perceptual compression method that removes portions of audio that are difficult for humans to detect. The remaining portions are recorded in frequency space. An MP3 file contains a series of frames. Each frame contains a header with encoding settings, followed by the encoded signal in frequency space. We use standard settings for MP3 provided by the pydub[36] software package. We expect MP3 encodings to be more difficult to handle than WAV files due to the compression applied.
## 4 Methods
First, we discuss our method for performing inference on file bytes (subsection 4.1). Then, we discuss how to use our method with image obfuscation techniques to enable privacy-preserving inference (subsection 4.2). Finally, we discuss how to use our method with a privacy-preserving camera to perform inference without constructing full images (subsection 4.3).
### Inference on File Bytes
#### 4.1.1 Preprocessing
Some of our file encodings such as TIFF are not frequently used in machine learning datasets. To allow for comparisons on a single dataset across a variety of file encodings, we must re-encode files with different file encodings.
At training time, we decode the file (e.g. read the contents into an RGB tensor in the case of images), then perform standard training augmentation (e.g. random cropping in the case of images), then save the result in the desired file encoding. We find that standard training augmentation is important for model accuracy. Thus, our _training_ method is implicitly dependent on the input modality due to our augmentation.
At _inference_ time, we do not need knowledge of the input modality. We only need to ensure that our model inputs use the correct file encoding. For example, for TIFF experiments on ImageNet, we precompute \(224\times 224\) crops of the validation images and save them in the TIFF format. Such preprocessing is only necessary because the ImageNet validation set is not already stored in the desired format.
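As an illustration of this re-encoding step, the sketch below serializes an (already augmented) RGB crop into raw file bytes with PIL; the resulting byte array is what ByteFormer consumes as token ids. The helper name and defaults are ours, and this is not the released preprocessing code.

```python
import io
import numpy as np
from PIL import Image

def encode_as_bytes(rgb_crop, file_format="TIFF", **save_kwargs):
    """Serialize an HxWx3 uint8 crop and return its raw file bytes as uint8 ids."""
    buf = io.BytesIO()
    Image.fromarray(rgb_crop.astype(np.uint8)).save(buf, format=file_format,
                                                    **save_kwargs)
    return np.frombuffer(buf.getvalue(), dtype=np.uint8)   # values in [0, 255]

# Example: a 224x224 crop stored as TIFF bytes (use format="JPEG", quality=100 for JPEG)
crop = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
byte_tokens = encode_as_bytes(crop, "TIFF")
```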
#### 4.1.2 ByteFormer
We describe our ByteFormer model for inference on file bytes. An overview of our model is given in Figure 2. The main challenge in using file bytes as inputs is the long sequence lengths. In Table 1, we observe that input sizes \(\mathbb{E}[S]\) for various file encodings can exceed \(150,000\) tokens. As described below, we use strided Conv1D and shifted window attention [22] to handle long sequence lengths.
The first step of our model is to use a learned token embedding with a vocabulary size of \(256\) (corresponding to \(2^{8}\) unique byte values) to produce embeddings. This choice allows our model to handle a variety of input modalities.
The next step of our model is to perform a Conv1D to reduce the sequence length. Our intuition for choosing Conv1D is that neighboring file bytes often contain related information. Reducing our sequence length with Conv1D greatly improves memory usage. In Table 1, \(\mathbb{E}[L_{t}]\) denotes the input size to our Transformer, which is significantly smaller than \(\mathbb{E}[S]\). Typically, we set our kernel size \(k=32\). Our stride is always \(k/2\).
Next, we add positional embeddings to the token embeddings, then pass our embeddings to a Transformer. We choose Transformer size parameters to match the \(12\)-layer DeiT-Ti [38] architecture with embedding dimension \(192\). We call this particular version of our architecture ByteFormer Tiny (BF-Ti). To compensate for our long sequence length (\(9417\) for TIFF, compared to \(196\) for DeiT-Ti), we use shifted window attention [22] to limit the attention window size \(w\), alleviating the quadratic complexity of attention layers on sequence length. We also add down-sampling layers to halve the sequence length, as in [22]. We add them after transformer blocks 0, 1, 3, 5, 7, and 9. After passing our tokens through the transformer, we average the embeddings across the sequence dimension.
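The sketch below mirrors this stem in PyTorch: a 256-entry byte embedding, a strided Conv1D, and a transformer encoder. It is a simplified stand-in, not the released model: shifted window attention, the down-sampling layers, and the positional embeddings are omitted, and a vanilla `nn.TransformerEncoder` replaces the DeiT-Ti-sized backbone.

```python
import torch
import torch.nn as nn

class ByteFormerSketch(nn.Module):
    def __init__(self, dim=192, kernel=32, n_layers=12, n_heads=3, n_classes=1000):
        super().__init__()
        self.embed = nn.Embedding(256, dim)          # one learned vector per byte value
        self.conv = nn.Conv1d(dim, dim, kernel_size=kernel, stride=kernel // 2)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, byte_ids):                     # (B, S) int64 values in [0, 255]
        x = self.embed(byte_ids).transpose(1, 2)     # (B, dim, S)
        x = self.conv(x).transpose(1, 2)             # (B, S', dim), S' ~ 2S / kernel
        x = self.backbone(x)
        return self.head(x.mean(dim=1))              # average over the sequence

logits = ByteFormerSketch()(torch.randint(0, 256, (2, 4096)))
```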
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Model** & **Data format** & \(\mathbb{E}[S]\) & \(\mathbb{E}[L_{t}]\) & **Top-1** \\ \hline DeiT-Ti & RGB Tensor & \(3\times 224\times 224\) & 196 & 72.2 \\ DeiT-Ti\({}^{*}\) & RGB Tensor & \(3\times 224\times 224\) & 196 & 74.35 \\ \hline \multirow{4}{*}{BF-Ti (Ours)} & fHWC & 150528 & 9407 & 77.06 \\ & fCHW & 150528 & 9407 & 74.65 \\ \cline{1-1} & TIFF & 150668 & 9415 & 77.33 \\ \cline{1-1} & PNG & 150864 & 9428 & 74.94 \\ \cline{1-1} & JPEG & 48564 & 12140 & 65.92 \\ \hline \hline \end{tabular}
\end{table}
Table 1: ImageNet Top-1 accuracy of ByteFormer Tiny (BF-Ti) using various file encodings, compared to DeiT-Ti. \(\mathbb{E}[S]\) denotes the input shape, and \(\mathbb{E}[L_{t}]\) denotes the token length passed to the transformer backbone. (\({}^{*}\)) denotes our implementation of DeiT-Ti. We set BF-Ti’s Conv1D kernel size to \(k=32\) for all experiments except JPEG (\(k=8\)).
Figure 2: An overview of ByteFormer. We map byte values to learned vectors using a learned token embedding. Next, we apply a Conv1D to reduce the token dimension. Finally, we apply a transformer with shifted window attention and downsampling.
### Inference on Obfuscated Inputs
ByteFormer is designed to perform inference on file encodings without converting them into a standard input representation (e.g. an RGB tensor in the case of images). Therefore, we explore whether ByteFormer can be used for inference on privacy-preserving representations that obfuscate information about the underlying data (Figure 0(c)).
Consider a permutation \(\phi:\{0,1,2,\ldots,255\}\rightarrow\{0,1,2,\ldots,255\}\). Let \(\tau\) denote a token embedding, and let \(f_{\theta}\) denote the subsequent transformer. It's easy to see that, for a given \(\phi\), there exists a \(\tau_{\phi^{-1}}\) such that \(\tau_{\phi^{-1}}(\phi(x))=\tau(x)\). \(\tau_{\phi^{-1}}\) is simply a copy of \(\tau\) with embedding vectors reassigned to different indices. Thus, \(f(\tau_{\phi^{-1}}(\phi(x)))=f(\tau(x))\). The implication of this statement is that our network \(f_{\theta}\) can operate on re-encoded inputs \(\phi(x)\)_without requiring any retraining_ as long as the network's token embedding \(\tau\) is reordered to \(\tau_{\phi^{-1}}\).
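The sketch below checks this reordering argument numerically in PyTorch: permuting the rows of the token-embedding table with \(\phi^{-1}\) makes the embeddings of \(\phi(x)\) identical to the original embeddings of \(x\). Variable names are ours.

```python
import torch

vocab, dim = 256, 192
phi = torch.randperm(vocab)                    # obfuscation map phi
phi_inv = torch.argsort(phi)                   # its inverse permutation

tau = torch.nn.Embedding(vocab, dim)           # original token embedding
tau_phi = torch.nn.Embedding(vocab, dim)       # reordered copy tau_{phi^{-1}}
with torch.no_grad():
    tau_phi.weight.copy_(tau.weight[phi_inv])  # row j holds tau's row phi^{-1}(j)

x = torch.randint(0, vocab, (4, 1000))         # raw byte tokens
assert torch.equal(tau_phi(phi[x]), tau(x))    # tau_{phi^{-1}}(phi(x)) == tau(x)
```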
To take advantage of this property, we choose a permutation \(\phi\) at random before training. All training and inference inputs are remapped using \(\phi\). We optionally apply uniform noise before applying \(\phi\). Without uniform noise, \(\phi\) can be applied to a standard ByteFormer without retraining (as explained above). However, we find uniform noise helpful in obfuscating regions of constant color in our experiments.
More generally, we can use more sophisticated methods for altering input representations. As our method can handle highly nonlinear JPEG encodings, we expect it to perform well on a variety of alternative encodings that an outside observer might not be able to easily guess. How secure are such methods against an adversary? This analysis depends on the threat model used. For example, if an adversary has access to a large number of encoded samples \(\phi(x)\), analysis of byte statistics might suggest that strings of common bits correspond to patches of blue sky in images. The adversary's task is made more difficult given certain file encodings (e.g. the highly nonlinear JPEG encoding). We do not make strong claims regarding the level of security provided by different choices of \(\phi\). Secure systems should be designed and analyzed by security researchers. Instead, we simply suggest that decoupling the input representation from the model can lead to new possibilities for building more secure systems.
### Privacy-Preserving Camera
We describe another application of ByteFormer to privacy-preserving inference (Figure 0(d)). In this scenario, a custom camera captures a non-standard, privacy-preserving representation to allow for inference without building a full RGB image. This custom representation could take a variety of forms. In our experimentation, we consider a hypothetical camera that masks out a large fraction of its pixel channels. The camera stores the remaining unmasked pixel channels in an array without retaining the coordinates of pixel channels on the image sensor. In this scenario, an adversary could not obtain a faithful reconstruction of the input image. Even if the adversary could guess pixel channel locations, the low resolution of captured data prevents the adversary from recovering a high-fidelity image.
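The hypothetical capture process can be emulated in a few lines of NumPy, as sketched below: a random 10% of the pixel channels survive, and only their values (not their coordinates) are kept. The function name and the uniform random mask are our own modeling choices.

```python
import numpy as np

def masked_capture(image, keep_fraction=0.10, rng=np.random.default_rng(0)):
    """Keep ~10% of the H*W*C channel values in a flat buffer; coordinates discarded."""
    flat = image.reshape(-1)
    keep = rng.random(flat.shape[0]) < keep_fraction
    return flat[keep]

image = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
buffer = masked_capture(image)      # ~15,000 of the 150,528 channel values survive
```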
## 5 Experiments
We evaluate ByteFormer on 1000-way classification on ImageNet [6]. We also evaluate 12-way audio keyword classification (including "background" and "unknown" classes) of 1-second audio clips sampled at \(16\) kHz using Speech Commands v2 [42]. For all experiments, ByteFormer's backbone uses hyperparameters that match DeiT-Ti [38]. We refer to this architecture as BF-Ti.
We train using CVNets [27]. For ImageNet, we use batch size \(48\) on 8 NVIDIA A100 GPU machines. At training time, we use random resized cropping, random horizontal flipping, RandAugment [5], and RandomErase [45] before storing the image in the desired file encoding (subsubsection 4.1.1). We train with AdamW [23] with weight decay \(0.05\), and a cosine annealing learning rate schedule from \(0.001\) to \(0.00002\), with \(7500\) warmup iterations.
We train our Speech Commands v2 with MixUp [44], noise augmentation, and time shifting augmentation, as in [29]. Our training and architecture hyperparameters match our ImageNet experiments. We train these models on 4 NVidia A100 GPU machines.
For ImageNet experiments, we report Top-1 accuracy of models trained with exponential moving average of weights with momentum \(0.0001\), which typically increased accuracy by roughly \(0.25\%\). For Speech Commands V2, we found EMA to sometimes increase and sometimes decrease accuracy, so we omit it.
### ImageNet File Encodings
Table 1 summarizes results for a variety of file encodings on the ImageNet dataset. For BF-Ti, we use \(w=128\) and \(k=32\) for all models except JPEG, for which we find \(k=8\) to perform better. Our method surpasses DeiT-Ti accuracies for TIFF, PNG, fCHW, and fHWC encodings.
We find training on JPEG to be more difficult. This is likely due to the highly nonlinear and variable-length JPEG encoding. We investigate the influence of our model's kernel size \(k\) on JPEG accuracy in Table 2. We find that reducing \(k\) from its default value of \(32\) increases accuracy. Since JPEG images have a smaller token length than TIFF or PNG, they are likely less compressible. To further explore this, we investigate two settings for JPEG quality factor in Table 2. We find that lower quality factors result in lower token lengths, thus reducing \(k\) improves accuracy. We also try reducing \(w\), but accuracy does not improve.
We present our method's computational efficiency compared to related works in Appendix A.
### Speech Commands v2 File Encodings
Results for audio classification on Speech Commands v2 [42] are given in Table 3. BF-Ti achieves accuracies of up to \(95.51\%\) on WAV files, comparable to the state-of-the-art method BC-ResNet-8 [18]. Note that BC-ResNet-8 is specifically designed for audio processing. By contrast, we performed no parameter tuning relative to our ImageNet training recipe (besides ablating choices of \(w\) and \(k\)). Our best-performing model has \(w=128\) and \(k=32\). Our model performs best on floating-point values. In this case, since each 32-bit floating-point value in the audio signal will be encoded as 4 file bytes, each audio sample will be represented by 4 neighboring tokens before our Conv1D.
We investigate the influence of \(k\) on model accuracy. In general, the optimal \(k\) decreases when the expected number of input tokens decreases. This matches our observations in ImageNet JPEG experiments. For MP3 files, we observed that \(k=32\) resulted in unstable models due to the drastic reduction in token length. For MP3, we additionally experiment with \(w=32\), but it does not improve results.
### Image Obfuscation
Results for our image obfuscation method (subsection 4.2) on ImageNet are summarized in Table 4. After obtaining our fHWC encoding, we apply a randomly chosen obfuscation function \(\phi\).
Examples of obfuscated images are shown in Figure 3. We observe that byte remapping retains shape information. A region of the image that is dominated by a single pixel value will continue to be dominated by a new (remapped) pixel value. To alleviate this, we add noise from a uniform distribution \(\mathbb{U}[-a,a]\) sampled from \(-a\) to \(a\) (inclusive) to each pixel channel independently, then compute the result modulo \(256\). Afterwards, we apply \(\phi\). This prevents regions of constant pixel value from being remapped to a single value. As shown in Figure 3, the upper right corner of
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{**Noise level**} & \multicolumn{3}{c}{**Model**} \\ \cline{2-4} & **DeiT-Ti** & **BF-Ti** \\ \hline None & 51.61 & **77.39** \\ \(\mathbb{U}[-5,5]\) & 50.77 & **77.27** \\ \(\mathbb{U}[-10,10]\) & 49.50 & **77.17** \\ \(\mathbb{U}[-20,20]\) & 43.84 & **76.31** \\ \hline \hline \end{tabular}
\end{table}
Table 4: ImageNet Top-1 results for obfuscation with \(\phi\). We show results with no noise, and with uniform noise in \([-a,a]\) added. We use the fHWC encoding.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{\(q\)} & \multirow{2}{*}{\(w\)} & \multirow{2}{*}{\(k\)} & \(\mathbb{E}[S]\) & **Top-1** \\ \hline
100 & 128 & 32 & 48564 & 60.86 \\
100 & 128 & 16 & 48564 & 64.86 \\
100 & 128 & 8 & 48564 & 65.92 \\ \hline
60 & 128 & 32 & 8436 & 31.8 \\
60 & 128 & 16 & 8436 & 50.11 \\
60 & 128 & 8 & 8436 & 56.26 \\
60 & 128 & 4 & 8436 & 62.52 \\ \hline
60 & 32 & 32 & 8436 & 37.23 \\
60 & 32 & 16 & 8436 & 50.24 \\
60 & 32 & 8 & 8436 & 56.74 \\
60 & 32 & 4 & 8436 & 59.52 \\ \hline \hline \end{tabular}
\end{table}
Table 2: ImageNet Top-1 accuracy for ByteFormer Tiny (BF-Ti) for different JPEG quality factors \(q\), window sizes \(w\), and convolutional kernel sizes \(k\). \(\mathbb{E}[S]\) denotes the expected shape of the inputs during validation.
Figure 3: A sample image from the ImageNet validation set, with uniform noise applied (top row), and with byte remapping \(\phi\) additionally applied (bottom row).
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Model** & **Input** & \(w\) & \(k\) & \(\mathbb{E}[S]\) & **Top-1** \\ \hline BC-ResNet-8 & log Mel & - & - & \(40\times 98\) & 98.70 \\ \hline BF-Ti (Ours) & W-FP32 & 128 & 32 & 64058 & 95.80 \\ & 128 & 16 & 64058 & 95.51 \\ \hline BF-Ti (Ours) & W-INT32 & 128 & 32 & 64044 & 94.90 \\ & 128 & 16 & 64044 & 95.27 \\ \hline BF-Ti (Ours) & W-INT16 & 128 & 32 & 32044 & 94.81 \\ & 128 & 16 & 32044 & 95.51 \\ & 128 & 8 & 32044 & 95.13 \\ \hline BF-Ti (Ours) & W-UINT8 & 128 & 32 & 16044 & 92.28 \\ & 128 & 16 & 16044 & 94.39 \\ & 128 & 8 & 16044 & 94.81 \\ & 128 & 4 & 16044 & 93.99 \\ \hline BF-Ti (Ours) & MP3 & 128 & 8 & 3465 & 88.39 \\ & 32 & 4 & 3465 & 88.00 \\ & 32 & 8 & 3465 & 88.69 \\ & 32 & 4 & 3465 & 89.19 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results for audio classification with BF-Ti on the Speech Commands v2 dataset. “W-” denotes WAV files with the given bit width. \(\mathbb{E}[S]\) denotes the shape of network inputs.
the image becomes less recognizable as noise from progressively larger ranges is used. In Table 4, we observe that our method is resilient to this transformation, but DeiT is not.
### Privacy Preserving Camera
Table 5 summarizes our results for our privacy-preserving camera (subsection 4.3). We emulate the camera setup by masking pixel channels of ImageNet images at random, then storing unmasked pixels in a buffer (in fWPC order) and passing that buffer into our network. For these experiments, we cannot provide DeiT-Ti baselines because DeiT-Ti is not capable of ingesting pixel values without any indication of their placement in the image.
In Figure 4, we show masked inputs before the unmasked pixels are rasterized. At \(10\%\) pixel retention, the content of the image is hard to visually perceive _even though active pixels are placed side-by-side in a new buffer_. Even if an adversary correctly guessed the positions of unmasked pixel channels in the original image, the adversary could not form a high-fidelity image. As shown in Table 5, our accuracy at \(10\%\) pixel retention is \(71.35\%\), comparable to the original DeiT-Ti model operating on non-privacy-preserving (unmasked) images.
Note that this privacy-preserving technique can be combined with the byte remapping technique (subsection 4.2) to further obfuscate network inputs.
used shifted window attention.
**Effect of Byte Ordering:** To better understand ByteFormer's behavior, we ask, _does ByteFormer simply learn byte frequencies, or is the byte ordering relevant?_ In Table 7, we apply a series of augmentations during training and validation. We focus on the case of JPEG compression at quality factor \(100\) with our standard kernel size \(k=32\). Each augmentation modifies the byte order of the inputs in some way. In random shuffle, we randomly reorder the bytes during training and validation. The order is redrawn every iteration. This severely degrades accuracy. Next, we perform a strided sampling with stride size \(1024\) (e.g. \([0,1024,2048,\ldots,1,1025,2049,\ldots]\)). This slightly improves accuracy over the previous method by improving byte order consistency. Next, we experiment with window shuffle, in which the bytes from each window of size \(1024\) are consistently permuted. This increases accuracy to \(18.14\%\). Next we experiment with a cyclic shift in which the second half of the image bytes are moved to the beginning. Accuracy matches the baseline (unaltered JPEG bytes) closely. Similarly, reverse, in which the byte order is reversed, preserves locality well and matches the baseline. We find that our model is sensitive to locality, and does not only learn byte frequencies.
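A sketch of these reorderings is given below; the stride and window size of 1024 and the handling of the trailing partial window are our assumptions about the setup.

```python
import numpy as np

def reorder(byte_arr, mode, win=1024, rng=np.random.default_rng(0)):
    """Byte-order perturbations used in the locality ablation (sketch)."""
    n = len(byte_arr)
    if mode == "random_shuffle":          # new permutation every call
        return byte_arr[rng.permutation(n)]
    if mode == "strided":                 # [0, 1024, 2048, ..., 1, 1025, ...]
        return byte_arr[np.argsort(np.arange(n) % win, kind="stable")]
    if mode == "window_shuffle":          # one fixed permutation inside each window
        perm, idx = rng.permutation(win), np.arange(n)
        full = (n // win) * win           # trailing partial window left untouched
        idx[:full] = idx[:full].reshape(-1, win)[:, perm].reshape(-1)
        return byte_arr[idx]
    if mode == "cyclic_shift":            # second half moved to the front
        return np.roll(byte_arr, n // 2)
    if mode == "reverse":
        return byte_arr[::-1]
    return byte_arr
```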
**Learned Token Embeddings:** We study the token embeddings learned by ByteFormer. These embeddings are used to project file bytes into vector representations. In Figure 6 (top row), we observe the absolute value of the cosine distance \(|x\cdot y|/(||x||\cdot||y||)\) between each pair of token embeddings \(x\), \(y\) on a variety of file encodings. We choose this metric to highlight the difference between (anti-)correlated embeddings (bright patches) and uncorrelated embeddings (dark patches). The pattern varies substantially across input encodings and tasks. In TIFF, PNG, and fCHW, we observe a bright band off of the diagonal, corresponding to high correlation between bytes and their neighbors. This matches our expectations, since replacing a byte with its neighbor only slightly alters the image. This does not hold for JPEG due to the Huffman encoding step. We also observe that the correlation between token embeddings in the float32 encoding of Speech Commands is generally weak. We believe this occurs because the float32 audio amplitude value is split across four bytes in the file encoding, weakening the association between byte values and amplitudes.
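The similarity maps in Figure 6 can be reproduced with the short helper below, applied to either the 256 learned byte-token embeddings or the positional embeddings (a sketch with our own naming).

```python
import torch

def abs_cosine_matrix(emb):
    """|x . y| / (||x|| * ||y||) for every pair of rows of an embedding table."""
    normed = emb / emb.norm(dim=1, keepdim=True)
    return (normed @ normed.T).abs()

sim = abs_cosine_matrix(torch.randn(256, 192))   # e.g. the learned byte embeddings
```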
Learned position embeddingsWe visualize the absolute value of the cosine distance between the first 256 positional embeddings learned by ByteFormer in Figure 6 (bottom row). For JPEG, we see a strong band of highly uncorrelated values at early positions, corresponding to the file header. Later positions demonstrate interesting patterns that may arise due to the Huffman encodings crossing byte boundaries. In TIFF, a small band of highly uncorrelated values is visible early on, corresponding to the header (which is shorter than in the JPEG case).
## 7 Limitations
The accuracy of ByteFormer depends on the file encoding chosen. As shown in section 5, choosing JPEG over TIFF results in a reduction of accuracy on ImageNet. Adding invariance to file encodings is future work.
As discussed in subsection 4.2, our choice of \(\phi\) for our obfuscation method does not provide cryptography-level security against an attacker with access to a large set of model inputs. We view this method as a building block for security experts to design thoroughly analyzed, secure systems.
Finally, our method has only been evaluated on classification for images and audio. Experimenting with other domains (video, text) and tasks that require fine-grained localization (detection, segmentation) is exciting future work.
Figure 6: \(|x\cdot y|/(||x||\cdot||y||)\) for pairs \(x,y\) of token embeddings (top row) and positional embeddings (bottom row) learned by BF-Ti. We show results for various file encodings on ImageNet (IN) and Speech Commands (SC).
## 8 Conclusion
We present ByteFormer, a model that consumes only bytes and does not explicitly model the input modality. We show that it achieves strong performance on image and audio classification without hyperparameter tuning or architecture modifications. We show how ByteFormer can be used in conjunction with image obfuscation techniques with little or no loss in accuracy. We also demonstrate how ByteFormer can be incorporated into a privacy-preserving camera to enable inference without forming a full image at capture time.
| Modern deep learning approaches usually employ modality-specific processing. For example, the most common deep learning approach to image classification decodes image file bytes into an RGB tensor and passes it to a neural network. Instead, we investigate modality-independent representation learning that classifies file bytes directly, without requiring file decoding at inference time. This enables models to operate on various modalities without modality-specific processing. Our model, ByteFormer, improves ImageNet Top-1 classification accuracy by 5% (from 72.2% to 77.33%) relative to DeiT models of similar size. Compared to Perceiver IO, our model requires no modality-specific processing at inference time and uses an order of magnitude fewer parameters at equivalent accuracy on ImageNet. The same ByteFormer ... |
2306.17744 | Zespol: A Lightweight Environment for Training Swarming Agents | Agent-based modeling (ABM) and simulation have emerged as important tools for
studying emergent behaviors, especially in the context of swarming algorithms
for robotic systems. Despite significant research in this area, there is a lack
of standardized simulation environments, which hinders the development and
deployment of real-world robotic swarms. To address this issue, we present
Zespol, a modular, Python-based simulation environment that enables the
development and testing of multi-agent control algorithms. Zespol provides a
flexible and extensible sandbox for initial research, with the potential for
scaling to real-world applications. We provide a topological overview of the
system and detailed descriptions of its plug-and-play elements. We demonstrate
the fidelity of Zespol in simulated and real-word robotics by replicating
existing works highlighting the simulation to real gap with the milling
behavior. We plan to leverage Zespol's plug-and-play feature for neuromorphic
computing in swarming scenarios, which involves using the modules in Zespol to
simulate the behavior of neurons and their connections as synapses. This will
enable optimizing and studying the emergent behavior of swarm systems in
complex environments. Our goal is to gain a better understanding of the
interplay between environmental factors and neural-like computations in
swarming systems. | Shay Snyder, Kevin Zhu, Ricardo Vega, Cameron Nowzari, Maryam Parsa | 2023-06-30T15:52:18 | http://arxiv.org/abs/2306.17744v1 | # Zespol: A Lightweight Environment for Training Swarming Agents
###### Abstract.
Agent-based modeling (ABM) and simulation have emerged as important tools for studying emergent behaviors, especially in the context of swarming algorithms for robotic systems. Despite significant research in this area, there is a lack of standardized simulation environments, which hinders the development and deployment of real-world robotic swarms. To address this issue, we present Zespol, a modular, Python-based simulation environment that enables the development and testing of multi-agent control algorithms. Zespol provides a flexible and extensible sandbox for initial research, with the potential for scaling to real-world applications. We provide a topological overview of the system and detailed descriptions of its plug-and-play elements. We demonstrate the fidelity of Zespol in simulated and real-world robotics by replicating existing works highlighting the simulation-to-real gap with the milling behavior. We plan to leverage Zespol's plug-and-play feature for neuromorphic computing in swarming scenarios, which involves using the modules in Zespol to simulate the behavior of neurons and their connections as synapses. This will enable optimizing and studying the emergent behavior of swarm systems in complex environments. Our goal is to gain a better understanding of the interplay between environmental factors and neural-like computations in swarming systems.
multi-agent systems, swarm intelligence, modeling and simulation, applied neuromorphic computing
frameworks presents additional difficulties. There are also notable performance issues associated with this framework that make running large-scale simulations a computationally expensive task.
MASON (Zespol et al., 2016) is an agent-based simulation library designed from the ground up to support custom Java-based (Bahdan et al., 2017) simulations. There are many similarities between MASON and Zespol, such as the separation between the environment and visualization systems and the compartmentalized nature of individual simulations. Both MASON and Zespol allow agents to be given arbitrary dynamics. The major limitation of MASON is its reliance on fairly advanced Java, which raises the barrier to entry for new users. This issue is compounded by the lack of a mature, low-barrier system for distributing these simulations among heterogeneous computing systems. Addressing these issues is one of the major goals of Zespol.
OpenAI Gym, introduced in 2016, was a pioneering platform in the field of single-agent reinforcement learning (Beng et al., 2017). Out of the box, it supports a wide variety of classic control problems as well as Box2D (Brock et al., 2018) and Atari (Auri et al., 2018) environments. Compared to Zespol, Gym has two major limitations: it is designed primarily for reinforcement learning, and its programmatic architecture is focused purely on single-agent simulations, which severely limits its applicability to multi-agent robotics (Zespol et al., 2016; Snojkovic et al., 2017).
NetLogo (Zespol et al., 2016) is another multi-agent simulation environment. It is primarily designed for educational use, as evidenced by its integrated IDE with a drag-and-drop GUI. This makes programming behaviors easy, but the NetLogo language is limited. It is possible to run Python and R code from within NetLogo, as well as to invoke a NetLogo simulation from a Java environment, but the interfaces are clunky and limited; thus NetLogo is largely incompatible with current means of distributing computation and simulation environments among heterogeneous computing systems and modern learning frameworks. NetLogo's simulation speed is, at best, equal to that of MASON (Zespol et al., 2016), and it struggles with anything beyond two-dimensional environments.
In (Snojkovic et al., 2017), the authors conducted an interactive simulation-in-the-design-loop study in which simulated experiments were tightly coupled with real-world experiments. The study was broken up into four distinct portions to minimize the simulation-to-reality gap: 1) characterizing the salient capabilities of the real robot, 2) building a minimally viable simulation environment that reflects the measured capabilities of the physical robots, 3) developing and exploring potential emergent behaviors in simulation, and 4) deploying real robots based on simulation-driven hypotheses and evaluating the performance penalties associated with the domain shift. They used a binary controller (Beng et al., 2017) built on the salient capabilities of the real robots and created stable milling behaviors in NetLogo that carried over to physical robots. Beyond minimizing the simulation-to-reality gap, we are interested in deploying low-power, scalable neuromorphic computing platforms and in exploring novel methods of arriving at emergent behaviors. Zespol is designed as a simulation framework compatible with existing neuromorphic frameworks (Zespol et al., 2016; Snojkovic et al., 2017; Snojkovic et al., 2017) and hardware (Zespol et al., 2016).
Some of the key differences between prior works and Zespol are summarized in Table 1. Zespol is the only simulator that is written in user-friendly and well documented Python code, provides native capability for distributed (dist) simulation environments, and allows for arbitrary agent states and dynamics.
## 3. Programmatic Architecture
Zespol's underlying architecture is designed with modularity in mind where each fundamental building block has a plug and play interface. This design philosophy allows users to develop their own blocks such as sensor modules, controllers, and physical dynamics. All simulations are designed to minimize inter-object dependencies to reduce the chance of segmentation faults and minimize communication latency by only passing critical information between blocks. Each interface is thoroughly documented with the provided examples showing how users can extend the framework to support their needs. More formally, each building block is represented by two data structures that form an **object-state** relationship. We have provided three fundamental object-state pairs, Agent-AgentState, Swarm-SwarmState, and World-WorldState. A more detailed description of these pairs is given in the following.
Zespol objects are data structures responsible for containing all elements required for the object to function. For example, a robot object would contain the robot's current location, all of its sensor objects, controller objects, and control the interaction between these elements at every simulation time step.
The "_Agent_" object base class should be extended to support the specific requirements of a user's application. For example, The base class defines position and orientation vectors along three dimensions, a unique identifier, and the simulation time step fidelity. However, the _tick_ method must be updated based on user requirements to control the interactions between sensors, controllers, and physical dynamics.
| **Simulator** | **Lang** | **Dist** | **State** | **Dynamics** |
| --- | --- | --- | --- | --- |
| VMAS (Beng et al., 2017) | Python | No | Arbitrary | Holonomic |
| Swarm-Sim (Snojkovic et al., 2017) | Python | No | Discrete | Holonomic |
| SwarmLab (Snojkovic et al., 2017) | MATLAB | No | Continuous | Drone |
| MASON (Zespol et al., 2016) | Java | No | Arbitrary | Arbitrary |
| NetLogo (Zespol et al., 2016) | NetLogo | No | Arbitrary | Arbitrary |
| Gym (Beng et al., 2017) | Python | No | Arbitrary | Arbitrary |
| **Zespol** | **Python** | **Yes** | **Arbitrary** | **Arbitrary** |

Table 1. Comparison of multi-agent simulators
Figure 1. A flowchart presenting the critical programmatic flow within and between Zespol’s main components.
The _"Swarm"_ class contains references to all agents within the swarm and controls the interactions between agents at every simulation time step. This is where the distributed nature of Zespol is highlighted because the memory and process spaces for all agents are separated, the processing of individual agent updates at every time step can be distributed across heterogeneous compute clusters with tools such as Dask (Dask, 2015).
Bringing everything together, we have the _"World"_ class, which contains every object and actionable element within the simulation environment. This object maintains references to all swarms, visualization systems, and environmental objects such as world boundaries and obstacles. The last major responsibility of World objects is to mediate the interactions between all swarms and environmental objects, driving the programmatic flow at every simulation time step.
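The kind of per-agent parallelism described above can be expressed with Dask roughly as follows. This is a sketch of the idea, not Zespol's actual scheduler code, and the function names are placeholders.

```python
from dask import delayed, compute

def step_agent(agent_state, world_state):
    """Pure function computing one agent's next state from a world snapshot."""
    # ... sensing, control, and dynamics would go here ...
    return agent_state

def step_swarm(agent_states, world_state):
    """Fan the per-agent updates out to the Dask scheduler and gather the results."""
    tasks = [delayed(step_agent)(s, world_state) for s in agent_states]
    return list(compute(*tasks))
```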
For every agent, swarm, and world object there is an associated state that contains a holistic view of the object; the central idea is a shareable data structure holding only fundamental information. This avoids repeatedly passing redundant information between objects. For example, AgentStates contain an Agent's location and orientation but should not contain a copy of the Agent's sensor or controller.
_"AgentsStates"_ are defined by a snapshot of the given Agent's current location and heading, the change in these values from the previous step, along with their unique identifier. _"SwarmStates"_ are represented by a collection of states from all member agents along with a variety of metrics such as angular momentum, center of mass, scatter, and radial variance. Lastly, the _"WorldState"_ encompasses the states of all swarms along with all polygons that define the boundaries of the environment.
Besides the three predefined object-state pairs, there are three other notable objects within the system: _Sensors_, _Controllers_, and _Visualizers_. Each _"Sensor"_ represents a real-world sensor, such as an RGB camera or LIDAR scanner, and uses information within the WorldState to recreate a synthetic version of the perspective an Agent would see from its location in the world. _"Controllers"_ accept input from Sensors and modify the location, orientation, and heading of an agent based on its physical dynamics. These dynamics are arbitrary, so they can be modified to fit a user's specific application. _"Visualizers"_ in _Zespol_ are separable, optional components of the simulator. They take a WorldState at every time step and generate visual output. We include a visualization system based around Matplotlib (Matplotlib, 2017) to provide users with an example to follow when extending these utilities to support their specific application.
The overall algorithmic flow starts at (1) initializing all Swarms and Agents within the World. (2) The WorldState object is constructed by querying all Swarms and Agents for their SwarmStates and AgentStates, respectively. (3) For every Agent within every Swarm, an artificial sensory perception is calculated in the Agent's Sensor based on its location relative to all other elements in the environment. (4) This perception is then passed to the associated Controller where the AgentState is modified. (5) Once every Agent in every Swarm has calculated their new states, any visualizations and logs can be created. (6) Lastly, the newly accumulated WorldState is used to progress through the next simulation time-step. Figure 1 provides a visual representation for the algorithmic flow between the fundamental Zespol elements.
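The six-step flow can be condensed into a short driver loop. The sketch below assumes the illustrative agent interface and state classes from the previous snippets, not Zespol's exact method names.

```python
def run_simulation(world, n_ticks, visualizer=None):
    """Illustrative driver loop following steps (1)-(6) described above."""
    for tick in range(n_ticks):
        world_state = world.get_state()                 # (2) snapshot all swarms and agents
        for swarm in world.swarms:
            for agent in swarm.agents:
                agent.tick(world_state)                 # (3)-(4) sense, control, update state
        if visualizer is not None:
            visualizer.render(world.get_state(), tick)  # (5) optional visualization/logging
        # (6) the newly accumulated WorldState is used on the next iteration
```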
## 4. Initial Results & Discussion
Our initial use case for Zespol was recreating the circular milling behavior from (Zegol, 2017; Zegol, 2017), in which agents move in a uniform circle. Using knowledge gained from (Zegol, 2017) and Zespol's modular framework, we set up a simulation environment consisting of 9 Flockbots (Zegol, 2017), each equipped with a front-facing infrared proximity sensor and a differential drive system. An image of a real-world Flockbot can be seen in Figure 2.
To fully implement this environment in Zespol, we extended the Agent class with a FlockbotAgent class, a BinarySensor class, and a DifferentialDriveController class. As shown in Figure 3, the entire process starts with the WorldState being passed to the BinarySensor, where a synthetic binary output is calculated based on the Agent's current location and orientation with respect to the rest of the world. Next, the binary sense is passed to the DifferentialDriveController, where the agent turns left if it senses something and turns right if it senses nothing.
The numerous parameters of the Flockbot milling behavior were selected based on the results of (Zegol, 2017): the World ticks at 30 ticks per second, the Swarm contains 9 agents, and each Sensor has a view distance of 3 meters and the same asymmetric field of view found in (Zegol, 2017), with a left bound of 11.5 degrees left of center and a right bound of 4 degrees left of center. A minimal sketch of this sensor-controller pair is given below.
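The following sketch captures the logic just described. The view distance and field-of-view bounds are the values quoted above; the forward speed and turn rate are unspecified in the text and are left as user-supplied arguments, and the `all_agents` accessor on the world state is an assumed convenience method.

```python
import numpy as np

VIEW_DISTANCE = 3.0              # metres, from the text
LEFT_BOUND = np.deg2rad(11.5)    # field-of-view bounds, both measured left of centre
RIGHT_BOUND = np.deg2rad(4.0)

class BinarySensor:
    """True if any other agent falls inside the asymmetric field of view."""
    def sense(self, world_state, me):
        for other in world_state.all_agents():   # assumed accessor over all AgentStates
            if other.uid == me.id:
                continue
            offset = other.position - me.position
            dist = np.linalg.norm(offset)
            bearing = np.arctan2(offset[1], offset[0]) - me.heading
            bearing = (bearing + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi]
            # positive bearings are to the left; the sensor sees 4 to 11.5 degrees left
            if dist <= VIEW_DISTANCE and RIGHT_BOUND <= bearing <= LEFT_BOUND:
                return True
        return False

class DifferentialDriveController:
    """Turn left on a detection, right otherwise, at a constant forward speed."""
    def __init__(self, speed, turn_rate):
        self.speed = speed
        self.turn_rate = turn_rate

    def act(self, detected):
        return self.speed, (self.turn_rate if detected else -self.turn_rate)
```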
Zespol successfully models the complex coordination between multiple agents that results in a stable milling behavior. A visualization of the resulting formation is shown in Figure 4. This highlights the ability of Zespol to recreate emergent behaviors from other simulated environments and experimental results that have been validated on real-world robotic systems.
Figure 3. A flowchart presenting a detailed view of the interprocess communication within our example Zespol application using a Flockbot robot with a binary sensor
Figure 2. Image of a real-world Flockbot (Zegol, 2017)
## 5. Conclusion and Future Work
In conclusion, the field of agent-based modeling and simulation for studying emergent behaviors has witnessed substantial growth in parallel with the demand for robotic systems that can perform collective tasks. However, the lack of standardization in simulation environments makes it challenging to compare and contrast newfound research ideas with existing methods. The Zespol environment is introduced to serve as a lightweight, modular, Python-based simulation environment for developing multi-agent control algorithms. It offers ample opportunities for adoption and expansion by the broader research community. Moreover, the fidelity of Zespol is evaluated against previously published results in simulated and real-world robotics, demonstrating its ability to replicate existing swarming algorithms through a comparison of Zespol, NetLogo, and real Flockbot robots performing the milling behavior. With Zespol, users can develop and standardize swarming algorithms before transitioning to real-world experiments or higher-fidelity simulations. Zespol also provides native support for distributed parallelization across compute clusters and is compatible with neuromorphic computing platforms. As a result, it is a promising solution to issues slowing the advancement of emergent behaviors in robotic swarms of low-powered and individually incapable robotic systems.
Although Zespol is already demonstrating promising results, there is still room for improvement to make it a solid foundation for research on the application of neuromorphic computing in swarming robotics. Our plans include developing formal interfaces for common neuromorphic computing frameworks such as Lava (Lava, 2018) and Nengo (Nengo, 2020). We will also incorporate formal support for evolutionary algorithms (Nengo, 2020) and Bayesian optimization learning schemes (Zespol, 2020). To simplify the distributed nature of Zespol, we will create a user-friendly interface that minimizes the hassle of dealing with Dask (Dask, 2020) and multiprocessing (Zespol, 2020). Additionally, we will incorporate a vectorized simulation module to run simulations on multiple GPUs across heterogeneous systems. Finally, we will leverage spiking controllers to discover novel swarming behaviors.
## Acknowledgement
This work was supported in part by the Department of the Navy, Office of Naval Research (ONR), under federal grant N00014-22-1-2207.
| Agent-based modeling (ABM) and simulation have emerged as important tools, especially for studying swarming algorithms for robotic systems. Despite significant research in this area, there is a lack of standardized simulation environments, which hinders the development and deployment of real-world robotic swarms. To address this, we propose Zespol, a modular, Python-based simulation environment for developing and testing multi-agent control algorithms. Zespol provides a flexible and extensible sandbox for initial research, with the potential to scale to real-world applications. We provide a topological overview of the system and detailed descriptions of its plug-and-play elements. To assess the fidelity of Zespol in simulated and real-world robotics, we recreate existing work, highlighting the simulation
2309.06972 | Gravitational bremsstrahlung in plasmas and clusters | We study the gravitational bremsstrahlung owing to collisions mediated by a
$1/r$ potential. We combine classical and first order Born approximation
results in order to construct an approximate gravitational `Gaunt factor' for
the total emitted energy. We also obtain the cross-section with an angular
momentum cut-off, and hence the cross-section for emission via close hyperbolic
encounters in a gravitating cluster. These effects are the dominant source of
very high frequency gravitational noise in the solar system. The total
gravitational wave power of the Sun is $76\pm 20\,$MW. | A. M. Steane | 2023-09-13T14:05:48 | http://arxiv.org/abs/2309.06972v2 | # Gravitational bremsstrahlung in plasmas and clusters
###### Abstract
We study the gravitational bremsstrahlung owing to collisions mediated by a \(1/r\) potential. We combine classical and first order Born approximation results in order to construct an approximate gravitational 'Gaunt factor' for the total emitted energy. We also obtain the cross-section with an angular momentum cut-off, and hence the cross-section for emission via close hyperbolic encounters in a gravitating cluster. These effects are the dominant source of very high frequency gravitational noise in the solar system. The total gravitational wave power of the Sun is \(76\pm 20\,\)MW.
The aim of this paper is to review and extend the understanding of gravitational bremsstrahlung during collisions in a \(1/r\) potential. In practice this is Coulomb collisions and gravitational collisions where the potential is well-approximated as \(1/r\). Such processes take place in plasmas such as stellar interiors, and in gravitating clusters such as those of black holes believed to be present in many galactic nuclei, or in the early universe. However the motivation to study these processes is mainly their innate interest. They involve a combination of quantum theory and dynamic gravitation. For Coulomb collisions in the Sun the resulting gravitational wave amplitude is small and undetectable on Earth using any technology liable to be realised in the near future, but in principle it contributes to the limits on coherence of matter-wave interferometry owing to gravitational field noise.[1; 21; 8; 22]
Introductory material is set out in the first two sections below. Section I provides a brief historical survey of work related to gravitational wave (GW) emission during collisions in a \(1/r\) potential at low (non-relativistic) speeds. Section II introduces notation and methods of the present work. Section III obtains the total cross-section for the GW energy emission after integrating over impact parameter. This consists in first reporting existing work treating classical and quantum (first order Born approximation) limits of the motion, and then providing approximate formulae for the intermediate regime. Section IV considers the power and energy emission during a single hyperbolic encounter. Section V presents the cross-section obtained if one imposes a cut-off on the angular momentum. This is useful for treating emission in the case of attractive forces, where it makes sense to separate the collisions into those leading to capture and those where the bodies escape to infinity. Section VI applies the results of the previous sections so as to obtain the GW energy emission cross-section for close hyperbolic encounters in a gravitating cluster. Section VII uses the formulae of the paper to estimate the total GW power of the Sun. Section VIII concludes.
## I Historical survey
Early work on graviton emission during scattering of fundamental particles was carried out by Ivanenko and Sokolov (1947, 1952). [18; 19]. In 1965 Weinberg published an account of gravitational bremsstrahlung during Coulomb collisions, using quantum field theory in the limit where the gravitons are'soft', meaning they have negligible impact on the energy-momentum in external legs of the relevant Feynman diagrams.[32] The following year Carmeli confirmed this and also provided a classical calculation, for the case of a repulsive potential, which gives the total emitted energy after integration over impact parameters.[5] His clever method of calculation did not require an expression for the emitted energy in each hyperbolic encounter. Boccaletti (1972) extended this method to the Yukawa potential, and estimated emission from neutron stars.[3] Meanwhile Barker _et al._ 1969 gave the Born approximation calculation for graviton emission during collisions in a \(1/r\) potential, among other results.[2]. Emission from binary stars on Keplerian orbits had also been calculated, pioneered by Peters and Matthews (1963). [25; 27].
The above all concern low velocities and Euclidean geometry. Pioneering calculations for the case of a Schwarzschild-Droste metric and arbitrary velocity were provided by Peters (1970).[26] In the present survey we will not pursue the high-velocity or non-Newtonian cases. We are interested in cases where the velocity of the participating masses are small compared to \(c\) and the quadrupole approximation applies.
Gal'Tsov and Grats 1974 carried out Born approximation calculations, giving some further information not included in Barker _et al._.[12] They subsequently (1983) extended their study towards a more complete kinetic theory of a plasma such as that of the Sun.[13]
The first person to have correctly reported the total GW energy emitted during a hyperbolic encounter in a \(1/r\) potential, according to classical (not quantum) physics, appears to be Turner (1977), correcting a minor error in a
previous calculation by Hansen.[17; 29] This work was duly noted in a comprehensive review by Kovacs and Thorne in 1978, who comment: "Such computations are straightforward and simple," but in view of the fact that errors exist in the literature (we will point out some more in the next paragraphs) such computations are clearly not straightforward for ordinary mortals.[20]
Dehnen and Ghaboussi 1985 treated a general central potential and report some useful results for that case.[10; 11] They apply their methods to the \(1/r\) potential as an example and obtain the total scattered energy. Their formula agrees with that of Turner. They did not cite Turner, presumably an indication that they were not aware of his work. (Different authors report the formula in terms of different parameters so the agreement is not self-evident; we shall display both versions in section IV.)
Further reviews of astrophysical sources of gravitational waves are provided by Papini and Valluri 1977, Cutler and Thorne 2002 and Aggarwal _et al._ 2021.[1; 8; 24] Whereas Papini and Valluri discuss bremsstrahlung inside stars along with other processes, Cutler and Thorne do not because their review is focussed on signals that may be detectable now or in the near future.
Recently a further case has gained interest: the emission from clusters of black holes which may have been produced in the early universe or in the centres of galaxies. [4; 9; 14; 15; 23] The emission is partly from pairs (or larger numbers) of masses in bound orbits, and partly from a background of close hyperbolic encounters. Capozziello _et al._ (2008) calculated the emitted power and total emitted energy per encounter. Their results reproduce those of Turner and of Dehnen and Ghaboussi though they cite neither; they cite the review by Kovacs and Thorne which includes Turner but they do not make the comparison. De Vittori _et al._ (2012) follow the method of Capozziello explicitly but their eqn (6) has a sign error in the last term and their eqn (8) has the total power too large by a factor 4. Garcia-Bellido and Nesseris, and also Grobner _et al._, point out further mistakes. In view of these discrepancies a new calculation may be useful and we provide one.
The spectrum of the emitted radiation was treated by various authors, with noteworthy contributions from Turner, O'Leary _et al._, De Vittori _et al._, Garcia-Bellido and Nesseris, and Grobner _et al._. (Grobner _et al._'s opening statement that De Vittori _et al._ constitutes 'the first calculation of the frequency spectrum' understates the contribution of Turner, who gave explicit formulae for the cases of eccentricity \(e=1\) and \(e\gg 1\) and much of the analysis for general \(e\); subsequent authors completed the Fourier integrals for all \(e\)). Some mistakes in [9] are corrected in [14; 16].
The overall picture of work to date is one in which the calculations presented for electrical plasmas and those presented for gravitating clusters appear to be unaware of one another although they are often calculating the same things (i.e. emission during scattering in a \(1/r\) potential). The present work makes the following contributions: (i) bring together the two communities or histories just outlined; (ii) present the work of Gal'tsov and Grats afresh; (iii) estimate the case, intermediate between classical and quantum, which is amenable to neither classical nor Born approximations, obtaining an approximate 'Gaunt factor' for the total emitted power; (iv) obtain an emission cross-section by using an angular momentum cut-off; (v) show how the above can be applied to calculate the emission from gravitating clusters and from a stellar plasma.
## II Notation and general approach
For two colliding partners of masses \(m_{1}\), \(m_{2}\) we define the total mass \(M=m_{1}+m_{2}\) and the reduced mass \(\mu=m_{1}m_{2}/M\). We shall also occasionally use the unadorned \(m\) (with no subscript) as an alternative notation for reduced mass; thus \(m\equiv\mu\). A given binary collision is described in the COM frame, such that it consists in a particle of mass \(\mu\) moving in a fixed central potential of the form either \(V(r)=Z_{1}Z_{2}e^{2}/r\) or \(V(r)=-Gm_{1}m_{2}/r\). It is only necessary to treat one of these two cases since the other is then given by making the replacement \(Z_{1}Z_{2}e^{2}\leftrightarrow-Gm_{1}m_{2}\). In the following we mostly present the former (Coulomb scattering) case since it includes both attractive and repulsive collisions, and also preserves in the notation the distinction between the role of the potential and the role of \(G\) in the emission of gravitational waves. For a slightly more succinct notation we define \(e_{1}e_{2}\equiv Z_{1}Z_{2}e^{2}\). We adopt electromagnetic units such that the Coulomb force between electrons is \(e^{2}/r^{2}\). In order to convert expressions involving \(e^{2}\) into SI units it suffices to replace \(e^{2}\) by \(e^{2}/(4\pi\epsilon_{0})\).
For a collision with the masses initially far apart, \(v_{0}\) is the initial velocity and \(b\) is the impact parameter. The collision energy is \(E=(1/2)\mu v_{0}^{2}\) and angular momentum \(L=\mu bv_{0}\).
If a flux \(n_{2}v\) is incident on a single collision centre, then the rate of collisions is \(n_{2}v\sigma\) where \(\sigma\) is the cross section (this defines \(\sigma\)). If there is a density \(n_{1}\) of collision centres, then the collision rate per unit volume is \(n_{1}n_{2}v\sigma\) if the particle types 1 and 2 are distinct, and it is \((1/2)n_{1}^{2}v\sigma\) if the particle types are not distinct. In this paper we shall write \(n_{1}n_{2}v\sigma\) and expect the reader to understand that in the case of identical particles the factor half must be introduced.
Our discussion is entirely non-relativistic. This is a good approximation for conditions in the core of the Sun, where \(\gamma-1\) (the difference of the Lorentz factor from 1) is 0.004 for electrons at the r.m.s velocity.
The gravitational bremsstrahlung process has some features in common with electromagnetic bremsstrahlung, which
has been studied extensively owing to its importance in astrophysics. For the electromagnetic case, for an otherwise free electron moving in the Coulomb field of an atomic ion of charge \(Z\), the emitted power per photon solid angle and per photon frequency range at frequency \(\nu\) from an electron of impact velocity \(v\) is written
\[j(\nu,v)=\frac{8\pi Z^{2}e^{6}n}{3\sqrt{3}c^{3}m_{e}^{2}v}g_{\rm ff}(\nu,v)\]
where the first part of the expression is the result of approximate classical electrodynamics and the factor \(g_{\rm ff}\) is called the "free-free _Gaunt factor_" which incorporates quantum and other corrections. Complicated expressions exist for \(g_{\rm ff}\) but for many purposes it is useful to have a simpler formula of reasonable accuracy. For the electromagnetic case this has recently been provided by Weinberg [33].
For an approximate classical calculation, one way to proceed is to integrate the emitted power at each moment (obtained from the acceleration) for an electron moving on the trajectory it would follow if no radiation were emitted. For the electromagnetic case this approximation is not always good, but for the GW emission it holds very well for particle collisions and we shall adopt it.
Whether in the electromagnetic or GW case, there are two significant energy scales in the collision dynamics: the collision kinetic energy and the potential energy at a distance of order a de-Broglie wavelength. The former is \((1/2)mv^{2}\) where \(v\) can be taken as the speed at infinity for a repulsive potential, or as the speed at the distance of closest approach for an attractive potential. It is important to note that for low angular momentum the speed and acceleration have very different behaviours for attractive and repulsive cases, leading to different formulae for GW emission even though the differential cross section of the collision may be independent of the sign of the potential.
For Coulomb collisions between particles of charges \(Z_{1}e\), \(Z_{2}e\) we define the dimensionless parameter \(n_{\rm B}\) called the _Born parameter_ by Gal'tsov and Grats (and called \(\xi\) by Weinberg [33]):
\[n_{\rm B}\equiv\frac{|Z_{1}Z_{2}e^{2}|}{\hbar v}=|Z_{1}Z_{2}|\alpha\frac{c}{v} \tag{1}\]
where \(\alpha\) is the fine structure constant. The Born parameter can be read as a statement either about energy or about angular momentum. It is the ratio of the Coulomb energy at a separation \(2\bar{\lambda}\) (where \(\bar{\lambda}=\hbar/\mu v\) is the reduced de Broglie wavelength) to the collision energy. It is also approximately equal to the angular momentum in units of \(\hbar\). For a repulsive potential the distance of closest approach is \(2n_{\rm B}\bar{\lambda}\) according to classical mechanics. The case \(n_{\rm B}\lesssim 1\) is the quantum limit; the Born approximation for the scattering holds when \(n_{\rm B}\ll 1\). The case \(n_{\rm B}\gg 1\) is the classical limit. Thus low temperatures give classical trajectories. The ground state of hydrogen has \(n_{\rm B}\approx 1\).
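For numerical work the Born parameter is trivially evaluated; a helper in SI units (using scipy's fine-structure constant) might read:

```python
from scipy.constants import alpha, c

def born_parameter(Z1, Z2, v):
    """Born parameter n_B = |Z1 Z2| * alpha * c / v of eqn (1); v in m/s."""
    return abs(Z1 * Z2) * alpha * c / v
```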
A further relevant energy is that of the emitted photons or gravitons, \(h\nu\). We say the photons or gravitons are'soft' when \(h\nu\ll(1/2)mv^{2}\) and 'hard' otherwise. The maximum possible emitted photon or graviton energy is equal to the entire kinetic energy of the incident particle, \((1/2)mv^{2}\). More generally if a single photon or graviton is emitted then the initial and final momenta of the scattered particle (e.g. electron) in the COM frame are related by
\[\frac{p_{i}^{2}}{2m}-\frac{p_{f}^{2}}{2m}=h\nu. \tag{2}\]
In bremsstrahlung the collision process itself has a timescale \(\tau\approx r_{0}/v\) where \(r_{0}\) is the distance of closest approach. Classical mechanics predicts that the emitted spectral power extends up to the angular frequency range near \(1/\tau\), but quantum mechanics insists there is the hard cut-off at \(\omega=(1/2)mv^{2}/\hbar\). The question arises, then, whether the classically 'preferred' frequency is available, or whether it is not because it is beyond the cut-off. The condition that \(1/\tau\) is less than the cut-off is \(2\hbar<mvr_{0}\), i.e. \(n_{\rm B}>1\).
### Methods of calculation
In the compact source approximation in linearized gravity, the luminosity (i.e. the emitted power) of a source is given by
\[L_{\rm GW}=\frac{G}{5c^{5}}\left<\dddot{Q}_{ij}\,\dddot{Q}^{ij}\right> \tag{3}\]
where
\[Q^{ij}=\frac{1}{c^{2}}\int T^{00}(x^{i}x^{j}-\frac{1}{3}\delta_{ij}x^{k}x_{k}) \,{\rm d}^{3}{\bf x} \tag{4}\]
is the quadrupole moment of the mass distribution and the angle bracket indicates an average over a small region of spacetime.
For given collision partners, a collision is parametrised by two quantities: the initial velocity \(v\) and the impact parameter \(b\). We can express the total power generated in some small volume \(V\) of a plasma, as a result of collisions between particles of types 1 and 2, as
\[P=Vn_{1}n_{2}\left\langle v\Sigma\right\rangle \tag{5}\]
where \(n_{1}\) and \(n_{2}\) are number densities of two species (\(n_{1}n_{2}\) should be replaced by \((1/2)n_{1}^{2}\) if the species are identical, as already remarked) and \(\Sigma\) is a cross section (to be calculated) with the physical dimensions of energy times area.
We shall obtain \(\Sigma\) by calculating the total GW energy emitted during a single collision, integrated over impact parameter \(b\). We adopt and compare four methods of calculation, as follows.
1. **Purely classical**. A good classical approximation is to take it that the emission does not significantly change the trajectory of the colliding partners. We calculate that trajectory in the COM frame and then the total emitted energy is \(\int L_{\rm GW}{\rm d}t\) per collision, with \(\tilde{Q}_{ij}\) obtained from the trajectory. The GW emission cross section is \[\Sigma=\int_{-\infty}^{\infty}{\rm d}t\int_{0}^{\infty}2\pi b\,{\rm d}b\,L_{ \rm GW}\] (6) The integral over time is conveniently done by using the particle separation \(r\) as a parameter, and exploiting the symmetry of the inward and outward motion. Thus one finds \[\Sigma=2\,\int_{r_{0}}^{\infty}\frac{{\rm d}r}{|\dot{r}|}\int_{0}^{b_{\rm max }}2\pi b\,{\rm d}b\,L_{\rm GW}\] (7) where \(r_{0}\) is the smallest distance of closest approach and \(b_{\rm max}\) is the largest impact parameter whose associated trajectory can reach a given \(r\); see Fig. 1 for an elucidation of this.
2. **Born approximation**. For a calculation of GW scattering in first order Born approximation in the non-relativistic limit we adopt results obtained by Barker _et al._; and by Gal'tsov and Grats (GG).[2; 12]
3. **Soft photon theorem**. Weinberg has obtained a very general expression for the emission of soft massless particles in any collision. In the non-relativistic limit his'soft photon theorem' applied to gravitons yields an expression for the power in the radiated spectrum up to frequency \(\Lambda/\hbar\): \[P_{<\Lambda}\simeq V\frac{8G}{5\pi c^{5}}m^{2}v^{5}n_{1}n_{2}\frac{\Lambda}{ \hbar}\int\frac{{\rm d}\sigma}{{\rm d}\Omega}\sin^{2}\theta\,{\rm d}\Omega\] (8) where \(\Lambda\) is an energy cut-off which has to be taken low enough so that it is small compared to relevant kinetic energies in the problem, and \({\rm d}\sigma/{\rm d}\Omega\) is the differential cross section for the collision in the absence of radiant emission. The term'soft' here means the graviton's energy-momentum is small compared to that of the particle emitting it.
Figure 1: The region of integration of (7) and (55). \(b\) is the impact parameter, \(r\) is the distance from the origin in the COM frame. At any given impact parameter \(b\), the trajectory does not reach values of \(r\) below \(r_{\rm min}\) given by (7) and therefore at any given \(r\) it does not reach values of \(b\) above \(b_{\rm max}\).
Note that Weinberg's result does not give the whole emitted power, only the part owing to soft gravitons, and only that part up the frequency cut-off \(\Lambda/\hbar\). Therefore we should not expect it to reproduce in full the result of a calculation of the whole power. Nonetheless it offers a useful consistency check on other calculations. The presence of the fifth power (\(v^{5}\)) in this result can be recognised as one from the particle flux and 4 from \(Q_{ij}^{2}\). Expressed as a cross-section we have \[\Sigma_{<\Lambda}\simeq\frac{8G}{5\pi c^{5}}m^{2}v^{4}\frac{\Lambda}{\hbar} \int\frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}\sin^{2}\theta\,\mathrm{d}\Omega\,.\] (9) The soft photon (or graviton) theorem concerns gravitons attached to external legs of a Feynman diagram and which do not significantly change the momentum. Here 'external' means lines for which the 4-momentum is near the mass shell. This is a useful method for repulsive potentials where the particles have their highest momentum in the initial and final states. For an attractive potential, however, Eqn (8) is less useful in the classical limit, as we shall see. The above formula implies that the emitted spectrum is uniform over frequency, and this is indeed the prediction for soft gravitons at low particle energies. For general particle energies the theorem gives the low-frequency part of the spectrum as \(\omega^{B}\) where \(B\) is a function of velocity which is of order \(G\bar{Q}_{ij}^{2}/\hbar c^{5}\); this is very small (\(<10^{-38}\)) for collisions of fundamental particles.
4. **Modified classical**. With a view to gaining intuition about the quantum limit, and to obtain formulae which are approximately valid for any initial velocity, we explore the effect of modifying the classical formula (7). This is not a modification to the equation of motion; it is merely a rough method to gain reasonable insight and approximate formulae. The idea is that the quantum behaviour can be modelled roughly by using a classical mass distribution with mass density equal to \(m\left|\psi\right|^{2}\) where \(\psi\) is a wavefunction in position space, and we suppose this distribution has a peaked (e.g. Gaussian) form with a standard deviation to be discovered and a mean which follows the classical trajectory. We then suppose that, to sufficient approximation, the result of such a model can be estimated by some simple adjustment to the integrand in (7). One idea, for example, is to replace \(r\) in the integrand of (7) with some simple function such as \((r^{2}+\Delta^{2})^{1/2}\) where \(\Delta\) is a parameter to be set so as to reproduce the known behaviour in the limits of small and large Born parameter. One would expect this \(\Delta\) to be of the order of the de Broglie wavelength. This was explored, and so were other possibilities. In particular, one might leave the integrand unchanged and simply adjust the lower limit of the integral, whether over \(b\) or \(r\) or both. It was found that this simpler approach gives a good approximation. This is presented in sections III.5, V.
## III Total emission cross section
### Order-of-magnitude estimate
In order to get some general insight into the results to be discussed, we first present a simple order-of-magnitude estimate of GW radiation during repulsive Coulomb collisions.
From (3) we have
\[L_{\mathrm{GW}}\approx\frac{G}{5c^{5}}\left(\frac{\overline{Mx^{2}}}{\tau^{3 }}\right)^{2}\approx\frac{4G}{5c^{5}}\left(\frac{E_{Q}}{\tau}\right)^{2} \tag{10}\]
where \(\tau\) is the timescale and \(E_{Q}\) is the part of the kinetic energy associated with non-spherical (i.e. quadrupolar) movements. The timescale of the changing quadrupole moment is \(\tau\simeq 0.5\,b_{E}/v\) where \(b_{E}\) is a characteristic distance scale for a collision at energy \(E\) and \(v\) is the relative speed of the colliding partners. In the case of Coulomb collisions of particles, the timescale \(\tau\) is very much smaller for electrons than protons so it is the electron collisions which dominate \(L_{\mathrm{GW}}\). We take as characteristic distance
\[b_{E}=2e_{1}e_{2}/E \tag{11}\]
where \(E\) is the collision energy in the COM frame. This \(b_{E}\) is equal to the impact parameter for Rutherford scattering through 90 degrees (and this is twice the distance of closest approach of a head-on collision.)
The duration of each collision is about \(2\tau\) so the emitted energy per collision is \((8G/5c^{5})E^{2}/\tau\). Multiplying this by the collision rate \(n_{2}\sigma v\) and the number density \(n_{1}\), and using \(\sigma=\pi b_{E}^{2}\), we obtain the power per unit volume of the gravitational wave production:
\[\frac{P}{V}\approx n_{1}n_{2}e_{1}e_{2}\frac{64\pi G}{5c^{5}}\frac{E^{2}}{\mu} \tag{12}\]
where \(\mu\) is the reduced mass of the colliding partners and \(E=(1/2)\mu v^{2}\).
Eqn (12) is compared with the result of a precise calculation in the next section. We there find that it captures correctly the scaling with parameters of the classical result for a repulsive potential, and gets the numerical factor about right.
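As a numerical illustration, eqn (12) can be evaluated directly; the sketch below works in SI units (so \(e^{2}\) is replaced by \(e^{2}/4\pi\epsilon_{0}\)) and leaves the plasma parameters to be supplied by the reader rather than asserting particular values.

```python
from scipy.constants import G, c, e, epsilon_0, pi

def gw_power_density_estimate(n1, n2, Z1, Z2, mu, E):
    """Order-of-magnitude P/V of eqn (12), SI units.

    n1, n2 : number densities of the colliding species [m^-3]
    Z1, Z2 : charge numbers
    mu     : reduced mass [kg]
    E      : collision energy (1/2) mu v^2 [J]
    """
    e1e2 = Z1 * Z2 * e**2 / (4 * pi * epsilon_0)
    return n1 * n2 * e1e2 * (64 * pi * G / (5 * c**5)) * E**2 / mu
```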
### Classical treatment
We treat the two-body dynamics as a single-body motion of a particle of mass \(\mu\) moving in a static potential centred on the origin. Let \(D_{ij}\equiv 3Q_{ij}\), then \(D_{ik}=\mu(3x_{i}x_{k}-x^{j}x_{j}\delta_{ik})\) and
\[\tilde{D}_{ik}=6\mu v_{i}v_{k}-6\frac{\mathrm{d}V}{\mathrm{d}r}\frac{1}{r}x_{ i}x_{k}+\left[-2\mu v_{j}v^{j}+2\frac{\mathrm{d}V}{\mathrm{d}r}\frac{1}{r}x^{j}x_ {j}\right]\delta_{ik}. \tag{13}\]
The calculation of \(\stackrel{{\cdot}}{{D}}_{ik}\stackrel{{\cdot}}{{D}} ^{ik}\) is straightforward and the result is given by Boccaletti. [3] For Coulomb collisions one finds
\[L_{\mathrm{GW}}=\frac{8G}{15c^{5}}\frac{(e_{1}e_{2})^{2}}{r^{4}}\left(v^{2}+11 v_{\perp}^{2}\right) \tag{14}\]
where \(v_{\perp}^{2}=v^{2}-\dot{r}^{2}\) and in this expression \(v\), \(v_{\perp}\) and \(r\) are all functions of time.
The case of gravitational scattering can be treated by the replacement \(e_{1}e_{2}\rightarrow-Gm_{1}m_{2}\).
The potential is
\[V(r)=e_{1}e_{2}/r\,, \tag{15}\]
which may be positive or negative, depending on the signs of the charges. Let
\[r_{0}\equiv\frac{2e_{1}e_{2}}{mv_{0}^{2}} \tag{16}\]
where \(v_{0}\) is the initial velocity. In the case of a repulsive force (potential hill) \(r_{0}\) is a positive number equal to the minimum distance attained in a head-on collision. In the case of an attractive force (potential well) \(r_{0}\) has no such interpretation but we retain the formula as a definition, and then \(r_{0}<0\).
From conservation of energy and angular momentum we have
\[v^{2} = v_{0}^{2}(1-r_{0}/r) \tag{17}\] \[v_{\perp} = v_{0}b/r \tag{18}\]
where \(v_{0}\) is the initial velocity and \(b\) is the impact parameter. Hence
\[\dot{r}=v_{0}\sqrt{1-r_{0}/r-b^{2}/r^{2}}. \tag{19}\]
Using (7) and the above definitions, we have
\[\Sigma=\frac{32\pi G}{15c^{5}}(e_{1}e_{2})^{2}v_{0}\int_{r_{\mathrm{min}}}^{ \infty}\int_{0}^{\sqrt{r^{2}-rr_{0}}}\frac{(1-r_{0}/r)+11b^{2}/r^{2}}{r^{4} \sqrt{(1-r_{0}/r)-b^{2}/r^{2}}}\,b\mathrm{d}r\mathrm{d}b. \tag{20}\]
Taking the integration with respect to \(b\) first, we note that, for constants \(A,B,C,D\),
\[\int\frac{C+Db^{2}}{\sqrt{A-Bb^{2}}}b\mathrm{d}b=-\frac{1}{3B^{2}}\sqrt{A-Bb ^{2}}\left(3BC+2AD+BDb^{2}\right). \tag{21}\]
Therefore
\[\Sigma=\frac{64\pi G}{9c^{5}}\frac{(e_{1}e_{2})^{2}v_{0}}{|r_{0}|}\chi \tag{22}\]
where
\[\chi=\frac{5|r_{0}|}{2}\int_{r_{\rm min}}^{\infty}\frac{1}{r^{2}}\left(1-\frac{r _{0}}{r}\right)^{3/2}\,{\rm d}r\;=\;\frac{5}{2}\int_{x_{\rm min}}^{\infty} \frac{1}{x^{2}}\left(1\pm\frac{1}{x}\right)^{3/2}\,{\rm d}x \tag{23}\]
where the plus(minus) sign corresponds to an attractive(repulsive) potential. The lower limit on the integral with respect to \(r\) is the smallest \(r\) attained in the motion. This is zero for an attractive collision and \(r_{0}\) for a repulsive one. It follows that \(x_{\rm min}=0\) for an attractive collision and \(x_{\rm min}=1\) for a repulsive one. Consequently \(\chi\) diverges for an attractive collision and one obtains \(\chi=1\) for a repulsive collision. Hence the classical calculation (with no adjustment for quantum effects) yields a divergent result for an attractive collision (owing to infinite acceleration in a head-on collision), and for a repulsive collision yields
\[\Sigma_{\rm r}=\frac{32\pi G}{9c^{5}}Z_{1}Z_{2}e^{2}mv^{3} \tag{24}\]
where we now use \(v\) to indicate \(v_{0}\) in order to make the comparison with other results more transparent. This is the equation first obtained by Carmeli ([5], eqn (4.4)). We observe that when substituted into (5) it confirms our rough estimate (12).
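Eqn (24) translates directly into code; a transcription in SI units (again with \(e^{2}\to e^{2}/4\pi\epsilon_{0}\)) is:

```python
from scipy.constants import G, c, e, epsilon_0, pi

def sigma_repulsive(Z1, Z2, m, v):
    """Classical GW emission cross-section for repulsive Coulomb collisions, eqn (24).

    m is the reduced mass [kg] and v the initial relative speed [m/s];
    the result has units of energy times area [J m^2].
    """
    coulomb = Z1 * Z2 * e**2 / (4 * pi * epsilon_0)
    return 32 * pi * G / (9 * c**5) * coulomb * m * v**3
```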
### Quantum treatment
We now review results of quantum scattering theory for this problem, obtained by previous authors. Both Barker _et al._ and GG treat the Born approximation and also give some higher-order results; they differ in their choices of which further results to consider. We shall present the results for the Born approximation, and some further observations by GG.
Equation (8) of GG is the same as eqn (10) of Barker _et al._ after the replacement \((GMm/\hbar c)\rightarrow(e^{2}/\hbar c)\). (This replacement is the one Barker _et al._ point out after their eqn (15), except that they adopt rationalised electromagnetic units.) In our units, Barker _et al._, and also GG, find that the contribution to \(\Sigma\) of the graviton frequency range \({\rm d}\omega\), in the case of Coulomb scattering, is:
\[{\rm d}\Sigma=\frac{64G\hbar}{15c^{3}}\left(\frac{e_{1}e_{2}}{\hbar c}\right) ^{2}\left(5x+\frac{3}{2}(1+x^{2})\ln\frac{1+x}{1-x}\right)\hbar{\rm d}\omega \tag{25}\]
where \(x=p^{\prime}/p\) is the ratio of final to initial momentum of a particle scattering off a fixed potential. For single graviton emission (i.e. Born approximation) we have, by conservation of energy, \(\hbar\omega=(p^{2}-p^{\prime 2})/2m=(1-x^{2})p^{2}/2m\) so \(\hbar\,{\rm d}\omega=-(xp^{2}/m)\,{\rm d}x\). When \(\omega\) ranges from 0 to the hard cut-off, \(x\) ranges from 1 to 0, so
\[\Sigma_{\rm B} = \frac{64G}{15c^{5}\hbar}(e_{1}e_{2})^{2}\frac{p^{2}}{m}\int_{0}^ {1}\left(5x^{2}+\frac{3}{2}x(1+x^{2})\ln\frac{1+x}{1-x}\right){\rm d}x \tag{26}\] \[= (160G/9\hbar c^{5})(e_{1}e_{2})^{2}mv^{2}. \tag{27}\]
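As a quick numerical check, the dimensionless integral in (26) evaluates to \(25/6\), which reproduces the coefficient \(160/9=(64/15)\times(25/6)\) in (27); for instance:

```python
import numpy as np
from scipy.integrate import quad

integrand = lambda x: 5 * x**2 + 1.5 * x * (1 + x**2) * np.log((1 + x) / (1 - x))
value, _ = quad(integrand, 0.0, 1.0)   # ~4.1667 = 25/6; times 64/15 gives 160/9
```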
However one should keep in mind that the Born approximation is only valid when \(n_{\rm B}\ll 1\) for both the initial and final momenta. At the hard end of the spectrum \(p^{\prime}\to 0\) so \(n_{\rm B}\rightarrow\infty\). Therefore the above formula has to be corrected at the hard end. This is the region where \(x\to 0\). GG obtain
\[{\rm d}\Sigma\rightarrow\pm\frac{1024\pi G}{15c^{5}}(e_{1}e_{2})^{2}\frac{ \tilde{\alpha}c}{v}\frac{{\rm d}\omega}{(e^{\pm 2\pi\tilde{\alpha}c/xv}-1)} \tag{28}\]
where the \(+\) sign is for repulsion and the \(-\) sign is for attraction, and \(\tilde{\alpha}\equiv Z_{1}Z_{2}\alpha\). Since \(xv\) is the final speed, the corrected formula should match the uncorrected one when the final Born parameter \(\alpha c/xv\ll 1\), as indeed it does. But at the hard end, \(x\to 0\), the spectrum is different in the two cases:
\[{\rm d}\Sigma\rightarrow\frac{1024\pi G}{15c^{5}}(e_{1}e_{2})^{2}\frac{\tilde {\alpha}c}{v}{\rm d}\omega\left\{\begin{array}{cc}e^{-2\pi\tilde{\alpha}c/xv }&\mbox{repulsion}\\ 1&\mbox{attraction}\end{array}\right. \tag{29}\]
It follows that (27) overestimates the power in the repulsive case, and underestimates it in the attractive case; c.f. Fig. 2. Note also that \(\mathrm{d}\Sigma\) scales as \((Z_{1}Z_{2})^{3}\).
The above Born approximation results apply when \(n_{\mathrm{{B}}}\ll 1\). Closed formulae are also available in the other limit, \(n_{\mathrm{{B}}}\gg 1\). For repulsion one then regains the classical result (24). For attraction the classical result (with no angular momentum cut-off) diverges; the quantum treatment derived by GG (their eqn (17)) gives
\[\Sigma_{\mathrm{a}}=\frac{8G}{5c^{5}}12^{1/3}\Gamma^{2}(2/3)\,Z_{1}Z_{2}e^{2} mv^{4/3}(\tilde{\alpha}c)^{5/3}. \tag{30}\]
where the subscript 'a' stands for 'attractive'.
In order to compare the various results, let us define in each case
\[\chi\equiv\Sigma/\Sigma_{\mathrm{r}} \tag{31}\]
where \(\Sigma_{\mathrm{r}}\) is given by (24). From (27) one obtains
\[\chi_{\mathrm{B}}\equiv\frac{\Sigma_{\mathrm{B}}}{\Sigma_{\mathrm{r}}}=\frac{ 9}{2\pi}n_{\mathrm{{B}}}. \tag{32}\]
Thus quantum effects here act to suppress the power by a factor \(9n_{\mathrm{{B}}}/2\pi\) compared to what would be expected classically. (Roughly speaking, the spread-out nature of the wavefunction results in a less-rapidly-changing quadrupole moment.)
Comparing now attraction and repulsion in the low-velocity limit, we have
\[\chi_{\mathrm{a}}\equiv\frac{\Sigma_{\mathrm{a}}}{\Sigma_{\mathrm{r}}}\simeq 0.6013(\tilde{\alpha}c/v)^{5/3}=0.6013\,n_{\mathrm{{B}}}^{5/3}. \tag{33}\]
The power in the attractive case greatly exceeds that in the repulsive case for low \(v\). This is because the relevant speed for the attractive case is not the incident speed but the speed at closest approach. For a classical trajectory at angular momentum \(L\), the speed at closest approach is approximately \(n_{\mathrm{{B}}}v\hbar/L=\tilde{\alpha}c\hbar/L\) in the limit \(n_{\mathrm{{B}}}\gg L/\hbar\). The scaling \(v^{4/3}\) exhibited in (30) can be interpreted as the cube of a velocity which makes a compromise (roughly a geometric mean) between \(v\) and \(n_{\mathrm{{B}}}v\).
The predictions of (24), (32) and (33) are plotted as dashed lines on figure 3.
Figure 2: Spectrum of GW emission in a Coulomb collision in the first order Born approximation for the collision (\(n_{\mathrm{{B}}}\ll 1\)), as given by (25). The dashed lines show the corrected spectrum near the hard end, eqn (28). Blue dashed: \(v=0.1c\), red dash-dot: \(v=0.3c\).
### Soft photon theorem
The soft photon theorem has to be applied with caution in the case of Coulomb collisions owing to the divergence of the collision cross-section term in (9). That is, the quantity
\[\tilde{\sigma}\equiv\int\frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}\sin^{2}\theta \mathrm{d}\Omega \tag{34}\]
diverges. Therefore the approximations invoked in the theorem do not hold in the case of the Coulomb potential. The problem is connected to the long-range nature of \(1/r\); similar problems arise in other scattering problems associated with this potential. In practice in a plasma there will be Debye screening so the potential is not well modelled by a \(1/r\) form at large \(r\), and is better modelled by a Yukawa potential. For the Yukawa potential one finds that \(\tilde{\sigma}\sim v^{-4}\ln v\) in the limit where the exponential term in the potential has a large length scale.
The soft photon/graviton theorem does not give the whole emitted power and one only expects order-of-magnitude agreement with the full \(\Sigma\) in general. However by judicious choice of the cut-off \(\Lambda\) one may expect to reproduce the full \(\Sigma\) to within a factor 2 in cases where the emission is mostly soft.
For Coulomb collisions there are two relevant frequency scales: the inverse of the collision time, and the hard cut-off at \(K/\hbar\) where \(K=(1/2)mv^{2}\) is the collision energy. The collision time is of order \(|r_{0}|/v\) where \(v\) is the maximum speed, which is \(v_{0}\) for repulsive collisions; for attractive collisions a suitable value is given by the case \(r=\bar{\lambda}_{\rm dB}=\hbar/mv\) with \(v^{2}=v_{0}^{2}+2|e_{1}e_{2}|/mr\) from energy conservation. One finds \(v=|\tilde{\alpha}|c+\sqrt{\tilde{\alpha}^{2}c^{2}+v_{0}^{2}}\). Hence the characteristic frequency is
\[\omega\simeq\frac{v}{r}=\left\{\begin{array}{ll}\mu v_{0}^{3}/2e_{1}e_{2}& \mbox{repulsion}\\ (\mu c^{2}/\hbar)\left(|\tilde{\alpha}|+\sqrt{\tilde{\alpha}^{2}+v_{0}^{2}/c^ {2}}\right)^{2}&\mbox{attraction}\end{array}\right. \tag{35}\]
For the attractive case this \(\omega\) is above the hard cut-off so to good approximation one has simply \(\Lambda\simeq K\) for that case. If we take \(\tilde{\sigma}\propto v^{-4}\) and use as \(\Lambda\) the smaller of \(\hbar\omega\) and \(K\) then the behaviour shown in figure 3 for repulsive collisions is reproduced by the formula (9) in both limits of low and high \(v_{0}\). To be specific, this is the case for
\[\tilde{\sigma}\simeq 32\pi\alpha^{2}\left(\hbar/\mu c\right)^{2}(c/v)^{4}. \tag{36}\]
The quantity in the squared bracket here is the Compton wavelength of the reduced particle.
For attractive collisions the soft photon theorem is less successful, but gives a good estimate at high \(v_{0}\) (low Born parameter).
Figure 3: Predictions for GW radiation in Coulomb collisions. The dashed lines show the limiting cases as described by (24) and (33) (low \(v\)) and (32) (high \(v\)). The full (dotted) line shows the predictions of the modified classical method described in section III.5 (eqns (37), (38)). The horizontal axis is \(\tilde{\alpha}/n_{\rm B}\); this is equal to \(v/c\) in the case of electron collisions.
### Modified classical
As noted in section II, our proposed modified classical method of calculation is merely an adjustment of the classical integrals so as to yield a reasonable approximation. In this section we consider the effect of adjusting the lower limit \(x_{\rm min}\) of the integral in (23). We have
\[\chi_{r}(\lambda) = \frac{5}{2}\int_{1+\lambda}^{\infty}\frac{1}{x^{2}}\left(1-\frac{ 1}{x}\right)^{3/2}\,{\rm d}x=1-\left(1+1/\lambda\right)^{-5/2}, \tag{37}\] \[\chi_{a}(\lambda) = \frac{5}{2}\int_{\lambda}^{\infty}\frac{1}{x^{2}}\left(1+\frac{ 1}{x}\right)^{3/2}\,{\rm d}x=-1+\left(1+1/\lambda\right)^{5/2} \tag{38}\]
where \(\lambda\) is a parameter which one would expect to be of the order of the de Broglie wavelength divided by a relevant distance scale such as \(|r_{0}|\).
Defining \(\lambda_{\rm dB}\equiv 2\pi\hbar/\mu v_{0}\), one finds \(\lambda_{\rm dB}/|r_{0}|=\pi/n_{\rm B}.\) By setting the parameter value
\[\lambda=0.5515\pi/n_{\rm B} \tag{39}\]
one finds that (37) reproduces the known results in both classical and quantum limits, and gives reasonable behaviour at all \(v<c\), see Fig. 3.
For attractive collisions the distance scale where quantum effects must be allowed-for is not simply \(|r_{0}|\) but may be considerably smaller. By solving for \(r\) the equation \(r=h/\mu v\) with \(v=v_{0}(1+|r_{0}|/r)^{1/2}\) one finds \(r\simeq\pi\lambda_{\rm C}/\alpha\) where \(\lambda_{\rm C}=h/\mu c\) is the Compton wavelength. We mention this merely to indicate that the attractive case is less straightforward. We shall choose the parameter \(\lambda\) so as to match (33) in the low-velocity limit and (32) in the high-velocity limit. We also have a further piece of information provided by (29), namely that \(\chi\) approaches the asymptote from above at small Born parameter in the attractive case. These constraints are achieved by adopting (for example)
\[\lambda=\left(5.20+1.84\,n_{\rm B}\right)^{1/3}/n_{\rm B}. \tag{40}\]
The result is shown in Fig. 3.
Eqns (37)-(40) together provide a formula for \(\chi\) which is approximately valid at all collision speeds \(v\). This \(\chi\) is the "Gaunt factor" for the total (i.e. integrated over frequency) emission. It allows one to obtain \(\Sigma\) by taking \(\Sigma_{\rm r}\) given by (24) and multiplying by a correction factor.
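The fitting formulae (37)-(40) are straightforward to evaluate numerically; a minimal transcription is given below, and multiplying the result by \(\Sigma_{\rm r}\) of eqn (24) gives \(\Sigma\) at any collision speed.

```python
from scipy.constants import pi

def gaunt_factor(n_B, attractive=False):
    """Approximate total-emission Gaunt factor chi of eqns (37)-(40)."""
    if attractive:
        lam = (5.20 + 1.84 * n_B) ** (1.0 / 3.0) / n_B  # eqn (40)
        return -1.0 + (1.0 + 1.0 / lam) ** 2.5          # eqn (38)
    lam = 0.5515 * pi / n_B                             # eqn (39)
    return 1.0 - (1.0 + 1.0 / lam) ** -2.5              # eqn (37)
```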
## IV Power and energy for a given scattering event
So far we have not treated the motion during individual scattering events, because it was convenient to integrate over impact parameter. We now treat individual events of given \(b\), \(v_{0}\). We shall present the gravitational (Keplerian), i.e. attractive case.
The orbit can be described by the parameters \(b\), \(v_{0}\) or by a number of other pairs, including \(E,L\) (energy and angular momentum, both conserved) and \(a,e\) where \(a\equiv GM/v_{0}^{2}=-r_{0}/2\) and \(e\) is the eccentricity defined by
\[e=\sqrt{1+b^{2}/a^{2}}\,. \tag{41}\]
For a hyperbolic orbit one then finds that the distance of closest approach is
\[r_{\rm min}=a(e-1)=b\sqrt{\frac{e-1}{e+1}} \tag{42}\]
and
\[e=-1/\cos\phi_{0} \tag{43}\]
where \(\phi_{0}\) is half the total change in azimuthal angle during the encounter (the deflection angle is \(2\phi_{0}-\pi\)).
On a classical model under the adopted assumptions (i.e. motion in a \(1/r\) potential), the GW power during the scattering process is given by (14), which, after using the conservation laws (18), gives an expression in terms of \(r\) and constants.
Turner gives the following formula (eqn (24) of [29]):
\[P=\frac{8G^{4}}{15c^{5}}\frac{Mm_{1}^{2}m_{2}^{2}}{[(1+e)r_{\rm min}]^{5}}(1+e \cos\phi)^{4}[e^{2}\sin^{2}\phi+12(1+e\cos\phi)^{2}] \tag{44}\]
where \(\phi\) is the azimuthal angle taken from \(\phi=0\) at periastron (the point where \(r=r_{\rm min}\)). Thus \(\phi\) goes from \(-\phi_{0}\) initially to \(\phi_{0}\) finally.
Capozziello _et al._ give (eqn (21) of [4]):
\[P=\frac{32GL^{6}\mu^{2}}{45c^{5}b^{8}}f(\phi_{0},\psi) \tag{45}\]
where
\[f(\phi_{0},\psi)=\frac{\sin^{4}\left(\phi_{0}-\psi/2\right)\sin^{4}\left(\psi/2\right)}{\tan^{2}\phi_{0}\sin^{6}\phi_{0}}\left[150+72\cos 2\phi_{0}+66\cos 2(\phi_{0}-\psi)-144\cos(2\phi_{0}-\psi)-144\cos\psi\right]. \tag{46}\]
(This formula is quoted incorrectly in [9] where there is a sign error in the last term). Here \(\psi\equiv\phi+\phi_{0}\) (thus \(\psi\) goes from \(0\) initially to \(2\phi_{0}\) finally). If we express \(f\) in terms of \(\phi\) rather than \(\psi\), it simplifies a little:
\[f=\frac{3}{8}\frac{\left(\cos\phi_{0}-\cos\phi\right)^{4}}{\tan^{2}\phi_{0} \sin^{6}\phi_{0}}\left[25+12\cos 2\phi_{0}-48\cos\phi_{0}\cos\phi+11\cos 2\phi \right]. \tag{47}\]
Equations (14), (44) and (45) give three ways of expressing the same result. They are all equivalent, which one may confirm by employing
\[r=\frac{b\sin\phi_{0}}{\cos\phi-\cos\phi_{0}} \tag{48}\]
(a standard result of orbital mechanics).
The integral of \(P\) over time is conveniently done by converting to an integral over \(\phi\). The result was first obtained by Turner:
\[\Delta E=\frac{8G^{7/2}}{15c^{5}}\frac{M^{1/2}m_{1}^{2}m_{2}^{2}}{r_{\rm min}^ {7/2}}g(e) \tag{49}\]
with
\[g(e)=\frac{\phi_{0}(24+73e^{2}+37e^{4}/4)+\sqrt{e^{2}-1}(602+673e^{2})/12}{(1 +e)^{7/2}} \tag{50}\]
(correcting an earlier calculation of Hansen). In order to bring out the comparison with (24), note that
\[\frac{8G^{7/2}}{15c^{5}}\frac{M^{1/2}m_{1}^{2}m_{2}^{2}}{((e+1)r_{\rm min})^{ 7/2}}=\frac{8G}{15c^{5}}\frac{GM\mu^{2}v_{0}^{3}}{b^{2}(e^{2}-1)^{5/2}}. \tag{51}\]
Dehnen and Ghaboussi's result (eqn (7) of [10]) is
\[\Delta E=\frac{8G(e_{1}e_{2})^{2}}{15c^{5}}\frac{\mu E^{2}}{L^{3}}\left[(37+3 66z^{2}+425z^{4})\phi_{0}+(673/3+425z^{2})z\right] \tag{52}\]
where
\[z\equiv-\cot\phi_{0}=\frac{1}{\sqrt{e^{2}-1}}. \tag{53}\]
This agrees with Turner after one makes the substitution \(e_{1}e_{2}\rightarrow-Gm_{1}m_{2}\).
The total scattered energy was also obtained by Capozziello _et al._ Their expression is consistent with Turner's if one handles the term \(\sqrt{e^{2}-1}\) correctly. It must be taken positive, which means it is equal to \(-\tan\phi_{0}\) not \(\tan\phi_{0}\) when \(e>1\). Also, [9] give a result a factor 4 larger than that of [4]. In view of these issues a further check is useful. We completed the calculation independently and agree with Turner (and therefore also Dehnen and Ghaboussi) and with Capozziello _et al._ as long as the correct sign is taken, as just noted. Our result (equivalent to [4] eqns (27), (28)) is
\[\Delta E=\frac{G^{2}Mm^{2}v_{0}^{3}}{90c^{5}b^{2}}\frac{(\phi_{0}[2628+2328 \cos 2\phi_{0}+144\cos 4\phi_{0}]-1948\sin 2\phi_{0}-301\sin 4\phi_{0})}{| \tan\phi_{0}|\sin^{4}\phi_{0}}. \tag{54}\]
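For a given encounter, eqns (49)-(50) can be evaluated directly once the orbital elements are known. A minimal numerical sketch (our own; SI units, illustrative masses and impact parameter):

```python
import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30

def delta_E(b, v0, m1, m2):
    """Total GW energy radiated in one hyperbolic encounter, per eqns (49)-(50).

    sqrt(e^2 - 1) is taken positive and phi0 = arccos(-1/e), as discussed in the text.
    """
    M = m1 + m2
    a = G * M / v0**2
    e = np.sqrt(1.0 + (b / a)**2)
    r_min = a * (e - 1.0)
    phi0 = np.arccos(-1.0 / e)
    g = (phi0 * (24 + 73 * e**2 + 37 * e**4 / 4)
         + np.sqrt(e**2 - 1.0) * (602 + 673 * e**2) / 12) / (1 + e)**3.5
    return 8 * G**3.5 / (15 * c**5) * np.sqrt(M) * m1**2 * m2**2 / r_min**3.5 * g

# illustrative close fly-by of two 10-solar-mass black holes
print(delta_E(b=1.0e9, v0=1.0e6, m1=10 * Msun, m2=10 * Msun), "J")
```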
## V Classical collisions with angular momentum cut-off
So far we have surveyed or confirmed existing work, and contributed a small extension in the modified classical method. The remainder of our discussion is mostly new.
Rather than taking the integral (7) over all impact parameters, we now place a lower limit on \(b\). This will be useful for two purposes. First, the influence of quantum mechanics on collision cross-sections can sometimes be estimated by imposing a low angular momentum cut-off, at a value of order \(\hbar\), on a classical collision integral. Secondly, for attractive collisions the low angular momentum limit has to be considered separately in any case. This is because the approximation that the orbit is almost unaffected by the radiation breaks down.
In place of eqn (7) we introduce
\[\Sigma(L,v_{0})\equiv 2\int_{r_{\rm min}}^{\infty}\frac{{\rm d}r}{|\dot{r}|} \int_{L/mv}^{b_{\rm max}}2\pi b\,{\rm d}b\,L_{\rm GW} \tag{55}\]
where \(L\) is the cut-off and the notation on the left hand side is to indicate explicitly that the result is a function of the cut-off angular momentum \(L\) as well as \(v_{0}\). Then in place of (20) we have
\[\Sigma(L,v_{0})=\frac{32\pi G}{15c^{5}}(e_{1}e_{2})^{2}v_{0}\int_{r_{\rm min} }^{\infty}\int_{b_{0}}^{\sqrt{r^{2}-rr_{0}}}\frac{(1-r_{0}/r)+11b^{2}/r^{2}}{ r^{4}\sqrt{(1-r_{0}/r)-b^{2}/r^{2}}}\,b\,{\rm d}r{\rm d}b \tag{56}\]
where \(b_{0}=L/mv_{0}\), and \(r_{\rm min}\) is given by (42) (and by (59)). After using (21) we obtain
\[\Sigma(L,v_{0})=\frac{64\pi G}{9c^{5}}\frac{(e_{1}e_{2})^{2}v_{0}}{|r_{0}|} \chi(L,v_{0}) \tag{57}\]
where
\[\chi(L,v_{0})=\frac{|r_{0}|}{10}\int_{r_{\rm min}}^{\infty}\frac{1}{r^{2}} \left(25\left(1-\frac{r_{0}}{r}\right)+11\frac{b_{0}^{2}}{r^{2}}\right)\left(1 -\frac{r_{0}}{r}-\frac{b_{0}^{2}}{r^{2}}\right)^{1/2}\,{\rm d}r. \tag{58}\]
The lower limit on this integral is the smallest \(r\) attained in the motion when the impact parameter is \(b_{0}\). This is
\[r_{\rm min}=\frac{1}{2}\left(r_{0}+\sqrt{r_{0}^{2}+4b_{0}^{2}}\right) \tag{59}\]
where the positive square root should be taken. (For \(L=0\) this gives \(r_{\rm min}=r_{0}\) for a repulsive collision and \(r_{\rm min}=0\) for an attractive collision.) The integral is doable; one finds
\[\chi(L,v_{0})=\frac{1}{80|y^{5}|}\left[6(1+y^{2})(85+37y^{2})\left(\frac{\pi} {2}-\cot^{-1}y\right)-510y-562y^{3}\right] \tag{60}\]
Figure 4: \(\chi(L,v_{0})\) given by (60) for attractive (upper line, dashed) and repulsive (lower line, full) collisions.
where
\[y\equiv\frac{Lv_{0}}{e_{1}e_{2}}=\pm\sqrt{e^{2}-1},\qquad|y|=\frac{L}{\hbar}\frac{ v}{c}\frac{1}{|Z_{1}Z_{2}|\alpha}=\frac{L/\hbar}{n_{\mbox{\tiny B}}} \tag{61}\]
where the negative square root is taken for the attractive case. \(\chi(L,v_{0})\) is plotted as a function of \(y\) in figure 4. It is remarkable that this \(\chi\) is a function of eccentricity alone.
One finds
\[\chi(L,v_{0})\rightarrow\left\{\begin{array}{cc}1&y\ll 1,\;y>0\\ (51\pi/8)|y|^{-5}&|y|\ll 1,\;y<0\\ (111\pi/80)|y|^{-1}&|y|\gg 1\end{array}\right. \tag{62}\]
Positive \(y\) means the potential is repulsive. At small \(y\) the result is then independent of \(L\) and reproduces the classical calculation without any angular momentum cut-off. This is because at small initial velocities the particles do not approach closely in a repulsive potential. At large \(y\) the result exactly reproduces the first order Born approximation (27) in the limit if we take
\[L=\frac{37\pi^{2}}{120}\hbar\simeq 3.043\,\hbar. \tag{63}\]
It follows that \(\Sigma(3.04\hbar,v_{0})\) can be taken as a reasonable approximation to the exact result (i.e. a quantum scattering calculation to all orders) for GW scattering during Coulomb collisions on a repulsive potential, for any collision energy in the non-relativistic limit. In other words, _for repulsive Coulomb collisions the complete quantum scattering prediction (summed over all orders of Feynman diagrams) closely matches a classical prediction in which low angular momentum states do not contribute at all_. The phrase 'closely matches' here signifies exact agreement in the limits of large or small \(n_{\mbox{\tiny B}}\), and agreement at some unknown accuracy of order 10% in the case \(n_{\mbox{\tiny B}}\sim 1\).
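The limiting forms in (62) can be checked directly against (60). In the sketch below (our own), \(\cot^{-1}y\) is evaluated as \(\arctan(1/y)\), which is the branch that reproduces the quoted limits for both signs of \(y\):

```python
import numpy as np

def chi(y):
    """chi(L, v0) of eqn (60) as a function of y = L v0/(e1 e2) (negative if attractive)."""
    acot = np.arctan(1.0 / y)
    return (6 * (1 + y**2) * (85 + 37 * y**2) * (np.pi / 2 - acot)
            - 510 * y - 562 * y**3) / (80 * np.abs(y)**5)

print(chi(1e-3))                                  # repulsive, small |y|: -> 1
print(chi(-1e-3) / (51 * np.pi / 8 * 1e3**5))     # attractive, small |y|: -> 1
print(chi(300.0) / (111 * np.pi / 80 / 300.0))    # either sign, large |y|: -> 1
```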
For an attractive potential the situation is less simple. In this case \(\Sigma(3.04\hbar,v_{0})\) produces the correct cross-section at high \(|y|\) but not at low \(|y|\). In other words, for an attractive Coulomb collision it is not sufficient merely to place a lower bound on the angular momentum in order to approximate the quantum physics of a collision at low energy.
## VI Gravitating clusters
In section III we discussed the total emission cross section, integrating over all impact parameters. For emission from a plasma this is a useful quantity, but for gravitational scattering in general it is not. This is because for an attractive potential the approximations break down at low angular momentum. Various situations can arise. Astrophysical bodies are generally not point-like and can crash into each other or otherwise merge. Also, even on a point-like model there can be radiative capture. This happens when
\[\Delta E>\frac{1}{2}\mu v_{0}^{2}. \tag{64}\]
That is, the emitted energy is larger than the initial energy in the binary system, with the result that an initially unbound pair finishes in a bound state. In a bound state the pair subsequently follows an almost periodic, almost elliptical orbit, gradually losing more energy until the bodies coalesce.
In order to treat a gravitating cluster, one way to proceed is to separate the scattering events into those where the bodies emerge to infinity, and those where there is gravitational capture owing to the gravitational radiation. We will employ the condition (64) to separate the two cases, which is valid at a low density of pairs but not at higher density where three-body effects tend to reduce the capture rate. [30]
Using (49) on the left hand side of (64) we find that the limiting case (where \(\Delta E=E\)) is given by
\[e-1=\left(\frac{16}{15}\frac{\mu}{M}\frac{v_{0}^{5}}{c^{5}}g(e)\right)^{2/7}. \tag{65}\]
This method of calculation is approximate since for such collisions the outgoing value of \(e\) will not be equal to the initial value, but it gives a reasonable estimate. Eqn (65) has \(g(e)\) on the right hand side so it is an implicit equation for \(e\) with no analytical solution. But we observe that for \(v_{0}\ll c\) one has \(e-1\ll 1\) as one would expect: \(e=1\) is the parabolic orbit where \(E=0\). In this case we can use \(g(1)\) on the right hand side, obtaining
\[e-1\simeq\left(\frac{85\pi}{6\sqrt{2}}\frac{\mu}{M}\frac{v_{0}^{5}}{c^{5}} \right)^{2/7}. \tag{66}\]
This agrees with eqn (17) of [23]. Non-captured orbits have \(e-1\) larger than this. We should now note two consistency checks. For the Newtonian potential to be valid we require \(r_{\rm min}\gg R_{s}=2GM/c^{2}\) (the Schwarzschild radius). This yields the condition
\[e-1\gg 2v_{0}^{2}/c^{2}. \tag{67}\]
This is comfortably satisfied by (66) for \(v_{0}\ll c\). Also for non-relativistic mechanics we require \(v_{\rm max}\ll c\). Conservation of angular momentum gives \(r_{\rm min}v_{\rm max}=bv_{0}\) and one obtains
\[\frac{e-1}{e+1}\gg\frac{v_{0}^{2}}{c^{2}}. \tag{68}\]
Since \(e+1>2\) this is a stronger condition than the previous one, but still comfortably satisfied.
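The implicit condition (65) is easily solved numerically; the following sketch (our own, with assumed mass ratio and velocity) compares a simple fixed-point iteration with the closed-form estimate (66):

```python
import numpy as np

def g_of_e(e):
    """g(e) of eqn (50), with sqrt(e^2 - 1) taken positive and phi0 = arccos(-1/e)."""
    phi0 = np.arccos(-1.0 / e)
    return (phi0 * (24 + 73 * e**2 + 37 * e**4 / 4)
            + np.sqrt(e**2 - 1.0) * (602 + 673 * e**2) / 12) / (1 + e)**3.5

def capture_eccentricity(mu_over_M, v0_over_c, iters=50):
    """Limiting eccentricity separating captured from non-captured orbits, eqn (65)."""
    e = 1.0 + 1e-8
    for _ in range(iters):
        e = 1.0 + (16.0 / 15.0 * mu_over_M * v0_over_c**5 * g_of_e(e))**(2.0 / 7.0)
    return e

mu_over_M, v0_over_c = 0.25, 1e-3   # equal-mass pair at v0 = c/1000 (illustrative)
e_numeric = capture_eccentricity(mu_over_M, v0_over_c)
e_parabolic = 1.0 + (85 * np.pi / (6 * np.sqrt(2)) * mu_over_M * v0_over_c**5)**(2.0 / 7.0)
print(e_numeric - 1.0, e_parabolic - 1.0)   # the two estimates agree closely for v0 << c
```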
We have in (66) an expression for the minimum eccentricity, at any given \(v_{0}\), for non-captured orbits. Since \(e-1\ll 1\) we can use \(y\equiv-\sqrt{e^{2}-1}\simeq-\sqrt{2}(e-1)^{1/2}\), and since this is small we can use the small \(|y|\) limit of eqn (60), giving
\[\chi(y)\simeq\frac{51\pi}{32\sqrt{2}}\left(\frac{6\sqrt{2}}{85\pi}\frac{M}{ \mu}\frac{c^{5}}{v_{0}^{5}}\right)^{5/7}. \tag{69}\]
Hence the total cross-section for emission of gravitational wave energy during hyperbolic (i.e. non-captured) encounters, in a low-density, low-velocity gravitating cluster is
\[\Sigma=\frac{\pi}{5}\left(\frac{340\pi}{3\sqrt{2}}\right)^{2/7}\frac{GM}{c^{2} }Gm_{1}m_{2}\left(\frac{\mu}{M}\right)^{2/7}\left(\frac{c}{v}\right)^{4/7}\,. \tag{70}\]
As an example, consider information furnished by O'Leary _et al._. They remark, "20,000 BHs are expected to have segregated into the inner \(\sim\)1 pc of the Milky Way".[23] The number density distributions in their figure 1 give \(n\simeq n_{0}(r_{0}/r)^{2}\) for \(r_{0}<r<0.3\,\)pc, where \(r\) is the distance from the centre of the galaxy, \(n_{0}\simeq 10^{10}\,\)pc\({}^{-3}\) and \(r_{0}=3\times 10^{-4}\,\)pc. They propose black holes in the mass range 5 to 15 \(M_{\odot}\) and encounters with initial relative speeds of order \(v\sim 1000\,\)km/s. Putting these values into (70) and (5) we obtain a total power from close hyperbolic encounters of black holes in the galactic centre of order \(10^{25}\,\)watt after averaging over times long enough for many encounters.
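For orientation, the cross-section (70) itself is straightforward to evaluate for parameters of this kind (converting it into a total power additionally requires the density profile and eqn (5), which we do not repeat here); a minimal sketch with assumed values:

```python
import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30

def sigma_hyperbolic(m1, m2, v):
    """Energy cross-section of eqn (70) for close hyperbolic (non-captured) encounters."""
    M, mu = m1 + m2, m1 * m2 / (m1 + m2)
    return (np.pi / 5.0) * (340 * np.pi / (3 * np.sqrt(2)))**(2.0 / 7.0) \
        * (G * M / c**2) * G * m1 * m2 * (mu / M)**(2.0 / 7.0) * (c / v)**(4.0 / 7.0)

# 10-solar-mass black holes at a relative speed of 1000 km/s, as in the example above
print(sigma_hyperbolic(10 * Msun, 10 * Msun, 1.0e6), "J m^2")
```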
## VII The gravitational radiation of the Sun
Consider now a plasma in thermal equilibrium at the density and temperature of the core of the Sun--c.f. table 1. The thermal energy \(k_{\rm B}T_{\rm core}\simeq 1.35\,\)keV is about twice the Fermi energy of the electrons, and therefore the electron gas is non-degenerate to reasonable approximation. Each electron or proton has a kinetic energy of order \(k_{\rm B}T\) and the r.m.s. energy is approximately \(E_{Q}\simeq 2k_{\rm B}T\).
Gravitational bremsstrahlung in the Sun arises mainly from collisions among electrons, protons and \({}^{4}\)He nuclei. We shall present the result of integrating the emission over the Sun, treating the collisions as Coulomb collisions. This ignores the effect of Debye screening and therefore cannot be taken as an accurate value for the actual situation. But the Debye screening is not expected to change the overall result by as much as an order of magnitude. Therefore a calculation using the unscreened potential is a useful indicator, and also serves to establish which regime of behaviour (low or high Born parameter, attractive or repulsive collisions) dominates.
\begin{table}
\begin{tabular}{l l l}
\(T_{\rm core}\) & \(1.57\times 10^{7}\) K & \\
\((3/2)k_{\rm B}T_{\rm core}\) & \(2.03\) keV & \\
Coulomb distance \(b_{E}\) & \(1.4\) pm & \\
plasma wavelength \(\lambda\) & \(640\) pm & \\
Debye (screening) length \(\lambda_{D}\) & \(12\) pm & \\
 & electrons & protons \\
mean separation & \(25\) pm & \(32\) pm \\
\(\lambda_{\rm th}=\hbar\sqrt{2\pi/mk_{\rm B}T}\) & \(18.8\) pm & \(0.43\) pm \\
\(\bar{\lambda}_{\rm dB}=\hbar/\sqrt{2mE}\) & \(4.3\) pm & \(0.10\) pm \\
\end{tabular}
\end{table}
Table 1: Some properties of the solar core. pm = picometre. \(\lambda_{\rm th}\) is defined such that \(n\lambda_{\rm th}^{3}\) is the onset of degeneracy. \(\bar{\lambda}_{\rm dB}\) is the distance over which a de Broglie wave acquires a phase of one radian, for a particle of energy \(E=(3/2)k_{\rm B}T_{\rm core}\).
In the solar core we have \(|n_{\rm B}|\simeq 0.06\) for collisions involving electrons. It was remarked by GG that the emission is therefore substantially reduced below the value predicted by the classical calculation (24) (we find one order of magnitude below, not two as suggested by GG). We observe also that it is important to include the attractive (ep and eHe) collisions as well as the repulsive ones.
The total power is obtained by adding the contributions from the various types of collision, integrated over the temperature and density distribution of the Sun. In order to perform such an integral, we adopted the distributions given by the Standard Solar Model.[28; 31] The result of the numerical integration is indicated in table 2. We find that the total power is 76 MW (in the absence of Debye screening). This is the first time this power has been calculated with better than order of magnitude accuracy. (The previous best estimate was that of GG who estimated the order of magnitude as 10 MW). It follows that the GW power of the Sun is \(76\pm 20\,\)MW, where the uncertainty is mostly owing to the as-yet-uncalculated impact of Debye screening.
It is noteworthy that ee, ep and eHe collisions make almost equal contributions. If it were not for the quantum effects, it would not be so. For if we simply set \(\chi=1\) for all the processes, then one finds the ee collisions dominate owing to their smaller reduced mass, leading to higher velocities. The value \(\chi=1\) also leads to a total power 10 times larger, indicating that the quantum effects are important for the conditions of the Sun. Note also that the increased emission for attractive, as compared with repulsive, collisions also raises the contribution of ep and eHe collisions a little, compared with ee.
From the above one may deduce that there is gravitational noise in the Sun with an rms strain amplitude of order \(10^{-41}\) at \(10^{18}\,\)Hz owing to Coulomb collisions. This is the dominant source of gravitational noise in the solar system at this frequency. The energy density of this radiation arriving at Earth is of order \(10^{-24}\,\)J m\({}^{-3}\). This is similar to the energy density of relic gravitational waves in the frequency band up to GHz thought to be present owing to early-universe processes.[6; 7; 22] Owing to their lower frequency, the latter will have larger observable effects.
## VIII Conclusion
In conclusion, we have achieved the five aims set out at the end of section I. We have reviewed studies of gravitational bremsstrahlung during Coulomb collisions and presented a formula, based on semi-classical physical reasoning, which is able to reproduce, approximately, the predictions of a full (i.e. quantum) treatment of the total emitted power at any value of the Born parameter, in the non-relativistic limit. Equations (37)-(40) allow one to calculate the energy cross-section with high accuracy in certain limits and with \(\sim\!10\%\) accuracy in general. One can thus obtain the power averaged over many collisions in a homogeneous fluid. As an example, we have applied these equations to a treatment of the Sun, obtaining the total emitted power in the approximation where Debye screening is neglected.
Eqn (60) (combined with (24)) gives the energy cross-section in the classical (high Born parameter) limit for collisions at a given initial velocity after integrating over impact parameters above a lower limit set by a given angular momentum. This has not previously been calculated. We have used it to obtain, in eqn (70), the total cross section for emission of GW energy during close hyperbolic encounters where capture does not occur. This can be used to calculate, for example, the time-averaged emission from galactic nuclei by this process.
It has recently been suggested that black hole collisions in the early universe made a non-negligible contribution to the stochastic gravitational background in the present. One may ask whether Coulomb collisions in the very early universe made a further non-negligible contribution. I have attempted an estimate of this (unpublished); the estimate suggests that the contribution is negligible but it would be interesting nonetheless to look into this more fully.
| We study gravitational bremsstrahlung, the radiation produced in collisions mediated by a \(1/r\) potential. By combining classical results with first-order Born-approximation results, we construct an approximate gravitational "Gaunt factor" for the total emitted energy. We also obtain the cross-section with an angular-momentum cut-off, and hence the cross-section for emission during close encounters in a gravitating cluster. These effects are the main source of very-high-frequency gravitational noise in the solar system. The total gravitational-wave power of the Sun is \(76\pm 20\) MW.
2309.08787 | Beyond Labels: Leveraging Deep Learning and LLMs for Content Metadata | Content metadata plays a very important role in movie recommender systems as
it provides valuable information about various aspects of a movie such as
genre, cast, plot synopsis, box office summary, etc. Analyzing the metadata can
help understand the user preferences to generate personalized recommendations
and item cold starting. In this talk, we will focus on one particular type of
metadata - \textit{genre} labels. Genre labels associated with a movie or a TV
series help categorize a collection of titles into different themes and
correspondingly setting up the audience expectation. We present some of the
challenges associated with using genre label information and propose a new way
of examining the genre information that we call as the \textit{Genre Spectrum}.
The Genre Spectrum helps capture the various nuanced genres in a title and our
offline and online experiments corroborate the effectiveness of the approach.
Furthermore, we also talk about applications of LLMs in augmenting content
metadata which could eventually be used to achieve effective organization of
recommendations in user's 2-D home-grid. | Saurabh Agrawal, John Trenkle, Jaya Kawale | 2023-09-15T22:11:29 | http://arxiv.org/abs/2309.08787v1 | # Beyond Labels: Leveraging Deep Learning and LLMs for Content Metadata
###### Abstract.
Content metadata plays a very important role in movie recommender systems as it provides valuable information about various aspects of a movie such as genre, cast, plot synopsis, box office summary, etc. Analyzing the metadata can help understand the user preferences to generate personalized recommendations and item cold starting. In this talk, we will focus on one particular type of metadata - _genre_ labels. Genre labels associated with a movie or a TV series help categorize a collection of titles into different themes and correspondingly setting up the audience expectation. We present some of the challenges associated with using genre label information and propose a new way of examining the genre information that we call as the _Genre Spectrum_. The Genre Spectrum helps capture the various nuanced genres in a title and our offline and online experiments corroborate the effectiveness of the approach. Furthermore, we also talk about applications of LLMs in augmenting content metadata which could eventually be used to achieve effective organization of recommendations in user's 2-D home-grid.
adds an additional layer of complexity in accurately labeling movies within specific genres. Furthermore, genre labels do not capture the degree or intensity of a genre within a video. For instance, a movie like Jurassic Park can be classified as a science fiction and adventure film, but it also contains elements of horror and thriller. The genre labels alone fail to convey the nuanced blend of genres present in the movie. Moreover, movies within the same genre can still exhibit substantial differences. For example, consider two movies namely 'Gladiator' and 'Die Hard', both categorized as action films. However, the flavor of action in these movies diverges significantly due to distinct contextual factors. Gladiator is an epic historical action film set in ancient Rome, showcasing thrilling battles and action sequences within the Colosseum. On the other hand, Die Hard centers around intense action scenes taking place in a modern skyscraper during a terrorist siege.
## 2. Genre Spectrum
We propose an alternative approach to examining the genres which we refer to as the _Genre Spectrum_. Our hypothesis is that every title consists of a spectrum of genres and we transform the discrete genre label data into a latent space where each dimension could be considered as an abstract concept/characteristic of a movie. Every genre will then manifest itself in this latent space as a subspace defined by a range of combinations of all the latent dimensions. We hypothesize that the continuum nature of genre spectrum embeddings enhances their expressive power in comparison to the discrete genre labels.
### Methodology
We use neural network based supervised machine learning to learn genre-spectrum embeddings. The underlying intuition is that the textual metadata of movies (e.g. genre, language, year of release, plot synopsis and summary, ratings, Rotten Tomatoes scores, user reviews, box office information etc.) have rich information to classify a movie into one or more genre labels. We collect textual metadata of about 1.1M movies from various sources and apply language modeling techniques to learn textual embedding of every movie in a text-embedding space. We then formulate a multi-label classification problem that aims to predict the genre labels using learned textual embeddings as input features. In particular, we train a multi-layer feedforward dense neural network that ingests textual embeddings as inputs and emits the probabilities of every genre class as the output. The model is trained using cross-entropy loss averaged over all the genre classes. Both the components of the neural net, the textual-to-genre-spectrum transformer and the genre classifier are trained jointly on the multi-label cross-entropy loss function. Thus, once the model is trained, we simply obtain genre-spectrum embeddings, by doing a forward pass on the transformer component of the neural net, i.e. collect the output from the penultimate layer of the neural net (as shown in Figure 1).
Figure 1. Neural net architecture for learning Genre Spectrum Embeddings
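A minimal sketch of this two-component network is given below. It is our own illustration: the layer sizes, the genre count, and the use of a per-class sigmoid with binary cross-entropy (the usual reading of "cross-entropy loss averaged over all the genre classes" for multi-label targets) are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class GenreSpectrumNet(nn.Module):
    """Textual embedding -> genre-spectrum embedding -> multi-label genre logits."""

    def __init__(self, text_dim=96, spectrum_dim=32, n_genres=25):
        super().__init__()
        # textual-to-genre-spectrum transformer component
        self.to_spectrum = nn.Sequential(
            nn.Linear(text_dim, 128), nn.ReLU(),
            nn.Linear(128, spectrum_dim), nn.ReLU(),
        )
        # genre classifier head (final layer of the joint network)
        self.classifier = nn.Linear(spectrum_dim, n_genres)

    def forward(self, text_emb):
        z = self.to_spectrum(text_emb)   # penultimate-layer output = genre-spectrum embedding
        return self.classifier(z), z

model = GenreSpectrumNet()
loss_fn = nn.BCEWithLogitsLoss()                  # per-class cross-entropy, averaged

text_emb = torch.randn(8, 96)                     # batch of textual embeddings
labels = torch.randint(0, 2, (8, 25)).float()     # multi-hot genre labels
logits, spectrum = model(text_emb)
loss_fn(logits, labels).backward()                # both components are trained jointly
print(spectrum.shape)                             # embeddings are read off after training
```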
**Data Augmentation:** To further improve the quality of embeddings particularly on less-popular movies that have poor quality of metadata, we applied data augmentation technique proposed in (Dosov et al., 2017) on the training data. This technique randomly samples two training samples and take their random convex combination (on both features and labels) to generate a new synthetic data sample. We applied this technique (with small modifications to increase representation of rarer classes) and increased the training data by a factor of 10.
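A sketch of the augmentation step in its basic form (our own; the class-rebalancing modifications mentioned above are omitted, and the mixing weight is drawn uniformly):

```python
import numpy as np

def mixup_augment(X, Y, factor=10, seed=0):
    """Random convex combinations of training pairs, as described above.

    X: (n, d) textual-embedding features; Y: (n, k) multi-hot genre labels.
    Returns factor * n synthetic samples; labels become soft targets.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    i = rng.integers(0, n, size=factor * n)
    j = rng.integers(0, n, size=factor * n)
    lam = rng.uniform(0.0, 1.0, size=(factor * n, 1))
    return lam * X[i] + (1 - lam) * X[j], lam * Y[i] + (1 - lam) * Y[j]

X = np.random.randn(100, 96)
Y = (np.random.rand(100, 25) < 0.1).astype(float)
X_aug, Y_aug = mixup_augment(X, Y)
print(X_aug.shape, Y_aug.shape)   # (1000, 96) (1000, 25)
```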
## 3. Experiments & Results
### Data and Setup
We collected textual metadata and genre labels of about 1.1M movies and tv series from three sources of movie metadata, namely: IMDb, Rotten Tomatoes, and Gracenote. We used 60-20-20 split to generate training, validation, and test set.
### Offline evaluation
For qualitative evaluation, we generated the \(2\)-D plot of genre spectrum embeddings produced using the UMAP (Uniform Manifold Approximation and Projection) technique. As can be seen, the genres appear to be cohesive colored clusters in
\begin{table}
\begin{tabular}{l|l|l|l|l|l|l|l} \hline \hline
**Embeddings** & \multicolumn{7}{c|}{**Top-100 nb genre-similarity score (\%) on popularity-based groups**} \\ \hline
 & IMDb votes & IMDb votes & IMDb votes & IMDb votes & IMDb votes & IMDb votes & IMDb votes \\
 & \(\in[0,10]\) & \(\in[10,100]\) & \(\in[100,10^{3}]\) & \(\in[10^{3},10^{4}]\) & \(\in[10^{4},10^{5}]\) & \(\in[10^{5},10^{6}]\) & \(\in[10^{6},10^{7}]\) \\ \hline
Doc2Vec (textual) & 65.60 & 68.96 & 76.66 & 84.07 & 88.08 & 88.54 & 92.30 \\
BERT (textual) & 40.27 & 44.64 & 53.07 & 60.89 & 65.26 & 64.74 & 72.44 \\
GPT-4 (textual) & 56.46 & 63.03 & 67.23 & 74.28 & 77.68 & 77.36 & 85.33 \\
GS on Doc2Vec & 78.50 & 83.28 & 89.84 & 94.43 & 96.48 & 96.45 & 97.89 \\
GS on BERT & 60 & 63.03 & 67.23 & 74.28 & 77.68 & 77.36 & 85.33 \\
GS on GPT-4 & 78.80 & 80.60 & 83.61 & 87.55 & 90.32 & 92.29 & 96.11 \\
GS-Augmented on Doc2Vec & 80.94 & 84.82 & 91.34 & 95.51 & 97.34 & 97.3 & 98.17 \\ \hline \hline
\end{tabular}
\end{table}
Table 1. Comparison of latent feature spaces on genre similarity in top-100 neighborhoods
Figure 2. A 2-D plot of genre spectrum embeddings generated using the Uniform Manifold Approximation and Projection (UMAP) technique
the latent space. Next, we evaluate different variants of Genre Spectrum (GS) embeddings based on the genre similarity in the neighborhood of every movie. Specifically, for each movie, we compute genre similarity in the top-\(k\) neighborhood as the fraction of the top \(k\) nearest neighbors in genre-spectrum space that share one or more primary genres with the given movie. A primary genre is defined as the one that is assigned to a movie by the majority of the labeling sources. To gain deeper insights into the relationship between the metric and popularity, the genre similarity score is calculated as an average across different subsets of movies, which are grouped based on their IMDB votes as shown in Table 1. The first six rows in the table correspond to six variants of embeddings: the first three are textual embeddings generated using a variety of NLP models including Doc2Vec (96 dimensions) (Dong et al., 2019), a pretrained BERT model trained on a web corpus (Cheng et al., 2019) (768 dimensions), and OpenAI GPT-4 (1536 dimensions) (Dong et al., 2019), the latest LLM released by OpenAI. The next three rows correspond to Genre Spectrum embeddings learnt using genre label supervision on each one of the aforementioned textual embeddings. The last row corresponds to another variant of _GS on Doc2Vec_ where we applied the data augmentation step described in Section 2.1. We make several insightful observations from the table: i) All the variants of Genre Spectrum embeddings perform better than their corresponding textual embedding variants in all the popularity buckets, validating the effectiveness of our approach. In particular, the effectiveness of our proposed methodology also applies in the context of LLMs. ii) Further, it can be seen that the improvement in genre-similarity is higher on the lower popularity buckets. This could potentially be attributed to the fact that the quality of metadata (e.g. terse synopses, fewer tags) degrades on non-popular movies. Consequently, the textual embeddings tend to be more unreliable in classifying genres for such movies. However, the noise is considerably reduced in Genre Spectrum embeddings as they are trained using genre labels. iii) _GS-Augmented on Doc2Vec_ beats _GS on Doc2Vec_ consistently in genre similarity scores for all the popularity segments, justifying the utility of the data augmentation step. Further in Figure 3, we present an anecdotal example of a popular movie called _Life of Pi_ to compare top-10 neighbors in textual and genre spectrum embedding spaces. In comparison to textual embedding space, neighbors in genre-spectrum latent space are much better aligned with the query movie on genre similarity.
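The neighborhood metric used above can be written compactly; the sketch below is our own and assumes cosine similarity in the embedding space (the paper does not state the distance used):

```python
import numpy as np

def topk_genre_similarity(emb, primary_genres, k=100):
    """Average fraction of each title's top-k neighbours sharing >= 1 primary genre."""
    z = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = z @ z.T
    np.fill_diagonal(sims, -np.inf)          # exclude the title itself
    scores = []
    for i in range(len(emb)):
        nbrs = np.argsort(-sims[i])[:k]
        scores.append(np.mean([bool(primary_genres[i] & primary_genres[j]) for j in nbrs]))
    return float(np.mean(scores))

rng = np.random.default_rng(0)
emb = rng.normal(size=(500, 32))                                  # stand-in embeddings
genres = [set(rng.choice(25, size=2, replace=False)) for _ in range(500)]
print(topk_genre_similarity(emb, genres, k=10))
```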
### Online evaluation
To evaluate genre-spectrum embeddings in our online Tubi recommender system, we introduced a retrieval model in our production system. This model retrieves nearest neighbors of movies the user previously watched. Through an A/B test, we compared it to the control variant, which utilized binary genre labels. The test resulted in a statistically significant 0.6 % improvement in our primary user-engagement metric, 'tvt-capped' (total view time capped at 4 hours per day). This improvement validates the effectiveness of genre spectrum embeddings in enhancing personalization and user engagement.
## 4. Conclusion & Future work
We presented a case study on various challenges in incorporating genre label information in movie recommendation systems and how to address those challenges by learning meaningful embeddings to capture genre label information in a video-recommendation problem setting. An evident expansion of our work involves broadening the scope of content metadata to encompass other manually annotated movie datasets that offer a more extensive range of tags. Nevertheless, a common hurdle with such datasets lies in their limited coverage. Given the powerful capabilities of LLMs, one of the potential future directions could be to apply LLMs on textual metadata and generate more specific annotations for every movie in the form of _micro-genres_. Such micro-genres could then be used along with genre labels to learn more precise representation vectors of movies. Additionally, micro-genres could also be very useful in optimal organization of movie
recommendations on user's home screen. In particular, movie recommendations on prominent Video on Demand (VOD) platforms such as Tubi, Netflix, and Amazon Prime are typically presented in a 2-D grid layout using a set of 'carousels.' Each carousel groups together movies with a common theme, such as genre, language, or year, as reflected in its title. Conventional methods often use limited themes (e.g., standard genres or 90's classics) for carousel generation, which might result in sub-optimal personalization of the home-grid. By incorporating LLM-generated micro-genres, we can enrich the pool of carousel themes, leading to more effective personalization. During the presentation, we will also share preliminary results from our explorations in this direction.
## 5. Biographies
**Saurabh Agrawal** is a Senior Machine Learning Engineer at Tubi since August 2022 where he leads deep learning projects for Search and Recommendation Systems at Tubi. Prior to Tubi, he completed his PhD in Computer Science from University of Minnesota before he worked at Amazon for more than three years as an Applied Scientist.
**John Trenkle** is an experienced professional in AI/ML. John's work at Tubi includes significant contributions in Recommendation Systems, AdTech, Natural Language Processing (NLP), and Big Data management, showcasing his adaptable approach to the evolving field of machine learning.
**Jaya Kawale** is the VP of Engineering at Tubi leading all the machine learning efforts at Tubi. She did her PhD in Computer Science from the University of Minnesota and has published 15+ papers at top-tier machine learning conferences. Prior to Tubi, she has worked at Netflix, Adobe Research, Yahoo Research and Microsoft Research.
Figure 3. Comparison of top-10 neighbors of movie _Life of Pi_ in textual embeddings (Doc2Vec) space and the genre spectrum embeddings space (trained on Doc2Vec) in the top and bottom panel respectively. | Content metadata plays a very important role in movie recommender systems because it provides valuable information about various aspects of a movie, such as genre, cast, plot synopsis, and box-office summary. Analyzing the metadata helps in understanding user preferences in order to generate personalized recommendations and to handle item cold-starting. In this talk we focus on one type of metadata, the genre label. Genre labels associated with a movie or TV series help categorize a collection of titles into different themes and correspondingly set the audience expectation. We describe the challenges associated with using genre label information and propose a new way of examining genre information, which we call the Genre Spectrum. The Genre Spectrum helps capture the various nuanced genres in a title, and our offline and online experiments corroborate the effectiveness of the approach.
2308.16848 | Accurate Computation of Quantum Excited States with Neural Networks | We present a variational Monte Carlo algorithm for estimating the lowest
excited states of a quantum system which is a natural generalization of the
estimation of ground states. The method has no free parameters and requires no
explicit orthogonalization of the different states, instead transforming the
problem of finding excited states of a given system into that of finding the
ground state of an expanded system. Expected values of arbitrary observables
can be calculated, including off-diagonal expectations between different states
such as the transition dipole moment. Although the method is entirely general,
it works particularly well in conjunction with recent work on using neural
networks as variational Ans\"atze for many-electron systems, and we show that
by combining this method with the FermiNet and Psiformer Ans\"atze we can
accurately recover vertical excitation energies and oscillator strengths on a
range of molecules. Our method is the first deep learning approach to achieve
accurate vertical excitation energies, including challenging double
excitations, on benzene-scale molecules. Beyond the chemistry examples here, we
expect this technique will be of great interest for applications to atomic,
nuclear and condensed matter physics. | David Pfau, Simon Axelrod, Halvard Sutterud, Ingrid von Glehn, James S. Spencer | 2023-08-31T16:27:08 | http://arxiv.org/abs/2308.16848v3 | # Natural Quantum Monte Carlo Computation of Excited States
###### Abstract
We present a variational Monte Carlo algorithm for estimating the lowest excited states of a quantum system which is a natural generalization of the estimation of ground states. The method has no free parameters and requires no explicit orthogonalization of the different states, instead transforming the problem of finding excited states of a given system into that of finding the ground state of an expanded system. Expected values of arbitrary observables can be calculated, including off-diagonal expectations between different states such as the transition dipole moment. Although the method is entirely general, it works particularly well in conjunction with recent work on using neural networks as variational Ansatze for many-electron systems, and we show that by combining this method with the FermiNet and Posiformer Ansatze we can accurately recover vertical excitation energies and oscillator strengths on molecules as large as benzene. Beyond the examples on molecules presented here, we expect this technique will be of great interest for applications of variational quantum Monte Carlo to atomic, nuclear and condensed matter physics.
## I Introduction
The computation of excited states properties of quantum systems is a fundamental challenge in chemistry and many branches of physics. Understanding electronic excitations is critical for predicting photochemical phenomena such as fluorescence and conformational changes in the presence of light [1; 2]. In condensed matter physics, excitations determine the optical band gap of semiconductors, which is critical for predicting the behavior of solar cells, photosensors, LEDs and lasers [3]. Excited states are also relevant to understanding nuclear phenomena like metastable isomers and electron capture [4]. Ultimately, the dynamics of quantum systems when stimulated cannot be understood without taking excited states into account. Despite the importance of excited states for quantum phenomena, a full computational account of excited states remains challenging.
Quantum Monte Carlo (QMC) methods [5; 6] are an appealing class of algorithms for computing the behavior of quantum systems due to the favorable scaling with the number of particles, typically \(\mathcal{O}(N^{3})-\mathcal{O}(N^{4})\), and wide applicability. Variational quantum Monte Carlo (VMC) in particular is quite conceptually simple, and consists of finding an explicit functional form for a wavefunction which minimizes a variational bound, but historically was not considered accurate enough on its own for many demanding applications. Recent work using neural networks as a wavefunction Ansatz has reinvigorated interest in VMC [7; 8], and has demonstrated that VMC can be competitive with state-of-the-art methods for ground state calculations.
In this paper, we focus on computing excited states of quantum systems by VMC. When used to optimize ground states, there are only two variational principles for QMC - energy minimization and variance minimization. Innovations in ground state VMC primarily focus on the choice of trial wavefunction [9; 10], or optimization method used to achieve the variational bound [11; 12], but the choice of objective to optimize is well-established. The same cannot be said for variational optimization of excited states.
Approaches for computing excited states by VMC can be broken down into several categories. Most methods are either state-_targeting_, in that they aim to find a single excited state, or state-_averaging_, in that they aim to find the lowest-lying exciting states by minimizing the total weighted energy of many states simultaneously. Among state-targeting methods, there are methods which target specific energy ranges [13; 14], specific symmetries of the system [15], or a specific ordering of the roots (i.e. the \(k\)-th lowest state) [16]. For state-averaging approaches, the different states must be kept orthogonal, which can be achieved by including a penalty term in the variational bound which pushes the states apart [17; 15; 18], or by explicitly constructing orthogonal Ansatze, sometimes repeatedly re-orthogonalizing during optimization [19; 20; 21; 22].
All of these approaches have drawbacks and limitations. Targeting specific symmetries or energy ranges requires prior knowledge about the states of interest which may not be available, and state-targeting by variance minimization can lose track of the desired state [21]. Root-targeting methods are prone to root-flipping, whether they are used for QMC or other computational paradigms [23; 24]. Some methods require solving a generalized eigenvalue problem from stochastic estimates of the Hamiltonian and overlap matrices, which introduces biases into the gradients [16; 25]. Penalty methods often have problems with multiple Ansatze collapsing onto the same state, or have biased gradients [18], and the
strength of the penalty term is a free parameter which must be chosen. Constructing orthogonal Ansatze is usually only possible when the Ansatz is a linear combination of basis set functions [26; 27], which rules out many recently-developed Ansatze based on deep neural networks [28; 29; 30; 7]. Heuristics such as variance matching may be required to achieve good numerical results for all approaches. Despite almost four decades of work on QMC methods for excited states [26; 31], no single variational principle has emerged which has no free parameters, has convergence guarantees when optimizing with noisy Monte Carlo estimates, and is applicable to all possible Ansatze and all excited states, regardless of symmetry.
Here we present a new variational principle for computing the lowest excited states of a quantum system by Monte Carlo which does not suffer from any of these limitations. Our method can be seen as a state-averaging approach with a particular choice of sampling distribution which does not require the states to be orthogonal. This choice of sampling distribution is equivalent to reformulating the problem of finding \(K\) excited states of an \(N\) particle system into the problem of finding the ground state of a \(K\)-fermion system where each fermion is equivalent to \(N\) particles in the original system. Instead of orthogonalizing the states, the local energy is promoted from a scalar to a matrix, which gives unbiased estimates of a matrix whose eigenvalues are the energies of orthogonal states. Because wavefunction optimization can be done by stochastic gradient descent from unbiased noisy estimates of the total energy, the procedure is guaranteed to converge to a local minimum of the total energy over states. Due to the many desirable mathematical properties which follow from the choice of sampling distribution, we refer to our proposed approach as _natural excited states_ for VMC (NES-VMC).
## II Method
### Variational Monte Carlo
First we briefly review ground-state VMC and establish some notation. We will stick to the notation of first quantization and consider a system of \(N\) particles with states \(\mathbf{x}=\mathbf{x}_{1},\ldots,\mathbf{x}_{N}\), although everything we discuss could be applied to variational Ansatze represented in second quantization as well. We aim to find the lowest eigenfunction of a Hamiltonian operator \(\hat{H}\). This can be done by reformulating the eigenfunction problem in variational form, as one of finding the minimum of the Rayleigh quotient:
\[\psi^{*}=\arg\min_{\psi}\frac{\langle\psi\hat{H}\psi\rangle}{\langle\psi^{2} \rangle} \tag{1}\]
where the Ansatz \(\psi\) is not necessarily normalized. Computing this quotient involves taking high-dimensional integrals over all possible particle states \(\mathbf{x}\), and can be approximated by Monte Carlo integration. Many choices of Monte Carlo sampling distribution \(p(\mathbf{x})\) are possible, but if \(p(\mathbf{x})\propto\psi^{2}(\mathbf{x})\), then the Rayleigh quotient takes a simple form that allows for unbiased empirical estimation of the energy and gradients of the energy:
\[\frac{\langle\psi\hat{H}\psi\rangle}{\langle\psi^{2}\rangle}=\mathbb{E}_{ \mathbf{x}\sim\psi^{2}}\left[\psi^{-1}(\mathbf{x})\hat{H}\psi(\mathbf{x})\right] \tag{2}\]
For this reason, \(\psi^{2}\) is the natural choice of sampling distribution for ground state estimation. The scalar \(E_{L}(\mathbf{x})\mathbf{\triangleq}\psi^{-1}(\mathbf{x})\hat{H}\psi(\mathbf{ x})\) that appears inside the expectation is the _local energy_, and at any eigenfunction of \(\hat{H}\) it will be constant if \(\hat{H}\) is a local operator.
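As a concrete toy illustration of the estimator in (2) (our own, not taken from the paper): for the 1D harmonic oscillator \(\hat{H}=-\tfrac{1}{2}\partial_{x}^{2}+\tfrac{1}{2}x^{2}\) and the trial wavefunction \(\psi(x)=e^{-\alpha x^{2}}\), the local energy is \(E_{L}(x)=\alpha+(\tfrac{1}{2}-2\alpha^{2})x^{2}\) and \(\psi^{2}\) is a Gaussian that can be sampled directly, so the variational principle can be checked in a few lines.

```python
import numpy as np

def vmc_energy(alpha, n_samples=200_000, seed=0):
    """Monte Carlo estimate of <psi H psi>/<psi^2> for psi = exp(-alpha x^2)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, np.sqrt(1.0 / (4.0 * alpha)), size=n_samples)  # x ~ psi^2
    e_loc = alpha + (0.5 - 2.0 * alpha**2) * x**2                      # local energy
    return e_loc.mean(), e_loc.std() / np.sqrt(n_samples)

for alpha in (0.3, 0.5, 0.8):
    print(alpha, vmc_energy(alpha))   # energy is minimal (and E_L constant) at alpha = 0.5
```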
### Natural Excited States
Going from ground states to excited states, we aim to find the lowest \(K\) eigenfunctions of \(\hat{H}\). We refer to a single set of \(N\) particle states as a _particle set_, and denote different particle sets with an upper index, so that \(\mathbf{x}^{i}\) denotes a set of \(N\) particles \(\mathbf{x}^{i}_{1},\ldots,\mathbf{x}^{i}_{N}\). For the remainder of the article, we will use \(\mathbf{x}\) to denote the complete state of all particle sets \(\mathbf{x}^{1},\ldots,\mathbf{x}^{K}\). Let \(\psi_{i}\) denote a (possibly unnormalized) N-particle wavefunction, then we are trying to find wavefunctions \(\psi_{1},\ldots,\psi_{K}\) which approximate the lowest excited states. Let \(\mathbf{\Psi}(\mathbf{x})\in\mathbb{R}^{K\times K}\) denote the matrix combining all electron sets with all wavefunctions:
\[\mathbf{\Psi}(\mathbf{x})\overset{\triangle}{\equiv}\begin{pmatrix}\psi_{1}( \mathbf{x}^{1})&\ldots&\psi_{K}(\mathbf{x}^{1})\\ \vdots&&\vdots\\ \psi_{1}(\mathbf{x}^{K})&\ldots&\psi_{K}(\mathbf{x}^{K})\end{pmatrix} \tag{3}\]
The determinant of this matrix \(\Psi(\mathbf{x})=\det(\mathbf{\Psi}(\mathbf{x}))\) can be thought of as an unnormalized Slater determinant, except that instead of single-particle orbitals, it is made up of N-particle wavefunctions. We call \(\Psi(\mathbf{x})=\det(\mathbf{\Psi}(\mathbf{x}))\) the _total Ansatz_, while the individual \(\psi_{i}\) are the _single-state Ansatze_.
Rather than optimizing the single-state Ansatze in order from lowest to highest energy, we will only optimize the total Ansatz to minimize the total energy of all states. This is conceptually quite similar to state-averaging approaches in VMC, except that we will not explicitly enforce the orthogonality of the different single-state Ansatze. Note that taking any linear combination of single-state Ansatze \(\psi^{\prime}{}_{i}=\sum_{j}a_{ij}\psi_{j}\) only changes the total Ansatz by a constant factor. Also note that if two single-state Ansatze are the same, the total Ansatz becomes zero. Thus, by representing the total Ansatz as a determinant of single-state Ansatze, we can prevent the collapse of different Ansatze onto the same state, without requiring them to be orthogonal.
For an arbitrary operator \(\hat{\mathcal{O}}\) that acts on \(N\)-particle wavefunctions, let \(\hat{\mathcal{O}}\mathbf{\Psi}(\mathbf{x})\) denote the matrix of all values of this operator applied to all single-state Ansatze and particle sets:

\[\hat{\mathcal{O}}\mathbf{\Psi}(\mathbf{x})\triangleq\begin{pmatrix}\hat{\mathcal{O}}\psi_{1}(\mathbf{x}^{1})&\ldots&\hat{\mathcal{O}}\psi_{K}(\mathbf{x}^{1})\\ \vdots&&\vdots\\ \hat{\mathcal{O}}\psi_{1}(\mathbf{x}^{K})&\ldots&\hat{\mathcal{O}}\psi_{K}(\mathbf{x}^{K})\end{pmatrix} \tag{4}\]
Not only is diagonalizing \(\mathbb{E}_{\Psi^{2}}[\mathbf{E}_{L}(\mathbf{x})]\) sufficient to recover the energies - it also provides us with the necessary change of basis to evaluate other observables \(\hat{\mathcal{O}}\), even off-diagonal observables \(\langle\psi_{i}\hat{\mathcal{O}}\psi_{j}\rangle\) between states. This can be seen due to the identity \(\mathbb{E}_{\Psi^{2}}[\mathbf{\Psi}^{-1}\hat{\mathcal{O}}\mathbf{\Psi}]= \mathbf{S}^{-1}\hat{\mathbf{O}}\), and for single-state Ansatze which are a linear combination of eigenfunctions, \(\mathbf{S}^{-1}\hat{\mathbf{O}}=\mathbf{A}^{-1}\hat{\mathbf{O}}^{*}\mathbf{A}\). So if we accumulate and diagonalize \(\mathbb{E}_{\Psi^{2}}[\mathbf{E}_{L}(\mathbf{x})]\) and use the resulting eigenvectors to compute \(\mathbf{U}^{-1}\mathbb{E}_{\Psi^{2}}[\mathbf{\Psi}^{-1}\hat{\mathcal{O}} \mathbf{\Psi}]\mathbf{U}\), then in the vicinity of the true ground state of the total Ansatz the result will be approximately \(\mathbf{\Sigma}^{-1}\hat{\mathbf{O}}^{*}\mathbf{\Sigma}\). Along the diagonal, this gives exactly the expectations \(\langle\psi_{i}^{*}\hat{\mathcal{O}}\psi_{i}^{*}\rangle\). Off the diagonal, this yields \(\frac{\sigma_{i}}{\sigma_{j}}\langle\psi_{i}^{*}\hat{\mathcal{O}}\psi_{j}^{*}\rangle\). If we multiply the matrix elementwise by its transpose, the \(\sigma_{i}\) terms cancel out, and we recover \(\langle\psi_{i}^{*}\hat{\mathcal{O}}\psi_{j}^{*}\rangle^{2}\), which gives the expectation up to a sign factor. This sign factor is not physically observable however, and in practice for computing quantities like the oscillator strength, only the expectation squared is needed.
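These identities are easy to verify numerically in a case where the exact spectrum is known. The toy sketch below (our own; a 1D harmonic oscillator with \(K=2\), one coordinate per "particle set") builds two deliberately non-orthogonal single-state Ansatze from the two lowest eigenfunctions; because they span the exact eigenspace, the local energy matrix \(\mathbf{\Psi}^{-1}\hat{H}\mathbf{\Psi}\) is constant in \(\mathbf{x}\) and its eigenvalues are the exact energies, with no averaging or explicit orthogonalization needed.

```python
import numpy as np

def phi(x):
    """Two lowest harmonic-oscillator eigenfunctions (unnormalised), energies 0.5 and 1.5."""
    return np.stack([np.exp(-0.5 * x**2), x * np.exp(-0.5 * x**2)], axis=-1)

E_exact = np.array([0.5, 1.5])
A = np.array([[1.0, 0.3],     # psi_k = sum_j A_kj phi_j : non-orthogonal mixtures
              [0.4, 1.0]])

rng = np.random.default_rng(0)
for _ in range(3):
    x = rng.normal(size=2)                       # one coordinate per particle set (K = 2)
    Psi = phi(x) @ A.T                           # Psi_ik = psi_k(x^i), as in eqn (3)
    HPsi = (phi(x) * E_exact) @ A.T              # (H psi_k)(x^i), since H phi_j = E_j phi_j
    E_L = np.linalg.solve(Psi, HPsi)             # local energy matrix Psi^{-1} H Psi
    print(np.sort(np.linalg.eigvals(E_L).real))  # [0.5, 1.5] for every sampled x
```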
### Neural Network Ansatze
Variational Monte Carlo for ground state calculations was typically used to find a trial wavefunction for more accurate projector QMC methods like diffusion Monte Carlo [5] or auxiliary field Monte Carlo [33]. However, in recent years, advances in deep neural networks have led to their use as accurate Ansatze for studying spin systems [7], electronic structure [8] and nuclear systems [34], often reaching levels of accuracy rivaling projector QMC methods. This has led to a renewed interest in VMC as a standalone method. While a variety of different neural network architectures can be used depending on the problem, such as restricted Boltzmann machines [7], convolutional neural networks [35], and autoregressive models [36], a number of custom architectures have been developed specifically for many-body electronic structure problems in first quantization [28; 29; 30; 37; 38; 39; 40; 41]. Most of these Ansatze start from a linear combination of Slater determinants:
\[\psi(\mathbf{x})=\sum_{k}\omega_{k}\text{det}\begin{pmatrix}\phi_{1}^{k}( \mathbf{x}_{1})&\ldots&\phi_{N}^{k}(\mathbf{x}_{1})\\ \vdots&&\vdots\\ \phi_{1}^{k}(\mathbf{x}_{N})&\ldots&\phi_{N}^{k}(\mathbf{x}_{N})\end{pmatrix} \tag{11}\]
It has long been recognized [42] that the single-particle orbitals in a Slater determinant can be generalized to depend on _all_ particles, so long as they depend on all but one in a permutation-independent manner:
\[\psi(\mathbf{x})=\sum_{k}\omega_{k}\text{det}\begin{pmatrix}\phi_{1}^{k}( \mathbf{x}_{1};\{\mathbf{x}_{/1}\})&\ldots&\phi_{N}^{k}(\mathbf{x}_{1};\{ \mathbf{x}_{/1}\})\\ \vdots&&\vdots\\ \phi_{1}^{k}(\mathbf{x}_{N};\{\mathbf{x}_{/N}\})&\ldots&\phi_{N}^{k}(\mathbf{x }_{N};\{\mathbf{x}_{/N}\})\end{pmatrix} \tag{12}\]
where \(\{\mathbf{x}_{/i}\}\) denotes the set of all particles _except_\(\mathbf{x}_{i}\). In the event that the particles are spin-assigned, the orbitals can also be expressed as \(\phi_{i}^{k}(\mathbf{x}_{j}^{\dagger};\{\mathbf{x}_{/j}^{\dagger}\},\{\mathbf{ x}^{\dagger}\})\) where the function is only invariant to changing the order of particles of the same spin. Most neural network Ansatze for electrons in real space implement this idea by using permutation-equivariant deep neural networks to represent the orbitals, sometimes with a multiplicative Jastrow factor to account for pairwise interactions [29; 30; 38].
Extending these Ansatze to represent multiple states is quite straightforward. Each state is still expressed as a sum of determinants of generalized neural network orbitals, there are simply more orbitals:
\[\psi_{i}(\mathbf{x})=\sum_{ik}\omega_{ik}\text{det}\begin{pmatrix}\phi_{1}^{ ik}(\mathbf{x}_{1};\{\mathbf{x}_{/1}\})&\ldots&\phi_{N}^{ik}(\mathbf{x}_{1};\{ \mathbf{x}_{/1}\})\\ \vdots&&\vdots\\ \phi_{1}^{ik}(\mathbf{x}_{N};\{\mathbf{x}_{/N}\})&\ldots&\phi_{N}^{ik}(\mathbf{x }_{N};\{\mathbf{x}_{/N}\})\end{pmatrix} \tag{13}\]
Nothing is changed about the neural network architecture itself, just the number of orbitals is increased proportionally to the number of states.
Neural network Ansatze differ from classic Ansatze like the Slater-Jastrow-backflow Ansatz [9] in important ways which make it difficult to apply existing excited state methods. Many methods assume that the Ansatz is a linear combination of orthogonal basis functions like Slater determinants, a necessary assumption for maintaining the orthogonality of states, either through explicit construction or a diagonalization step [19]. Classic Ansatze are usually optimized through a small number of gradient steps, where each gradient step is accumulated over a large number of MCMC steps, so that the gradients are nearly deterministic. Most modern deep neural networks, by contrast, are optimized by stochastic gradient descent using a large number of small, noisy steps [43]. This means bias in the gradients becomes a more significant concern.
Existing work on excited state calculations with neural networks has focused on penalty methods [18; 15], but these still require choosing a free parameter trading off total energy and penalty strength, and may not exactly satisfy orthogonality in the states. Some of these methods also have biased gradients in the penalty term [18] due to nonlinearities meant to push states apart more strongly. By contrast, the NES-VMC method has no free parameters to tune, can be optimized by unbiased gradients that have the same form as for ground state calculations, does not require the states to be orthogonal, and makes no assumption on the functional form of the Ansatz. Thus, while NES-VMC is generally applicable to _all_ excited state VMC calculations, it is particularly well-tailored for use with recently developed neural network Ansatze.
## III Results
While the natural excited states method is fully general and can be applied to any quantum Hamiltonian, our experimental validation is focused on electronic structure in atoms and molecules, due to the abundant experimental and computational literature to compare against. For
all experiments, we are solving the Schrodinger equation in the Born-Oppenheimer approximation [44]:
\[\hat{H}= -\frac{1}{2}\sum_{i}\nabla_{i}^{2}+\sum_{i>j}\frac{1}{|\mathbf{r}_ {i}-\mathbf{r}_{j}|}\] \[-\sum_{iI}\frac{Z_{I}}{|\mathbf{r}_{i}-\mathbf{R}_{I}|}+\sum_{I>J }\frac{Z_{I}Z_{J}}{|\mathbf{R}_{I}-\mathbf{R}_{J}|} \tag{14}\]
where the indices \(i\) and \(j\) are over electrons and \(I\) and \(J\) are over atomic nuclei with fixed locations.
To try to disentangle the effect that the choice of Ansatz has on performance, we investigated two different neural network architectures: the Fermionic Neural Network (FermiNet) [29] and the Wavefunction Transformer (Psiformer) [38]. While the Psiformer has generally been found to be more accurate on large systems, it is also slower, and for ground state calculations up to approximately 15 electrons, no appreciable difference in accuracy between the two has been found.
### Atomic Spectra
As an initial check of the correctness of our method, we investigate the excited states of first-row atoms, from lithium to neon. Atomic spectral lines have been the subject of some of the highest-precision measurements in all of science, and while we do not aim to reach spectroscopic accuracy, we can have high confidence in accuracy of the measurements, and do not need to worry about effects such as adiabatic relaxation and zero-point vibrational energy which affect molecular measurements. All experimental data was taken from the energy level tables in the NIST Handbook of Basic Atomic Spectroscopic Data [32]. Because we are working with the nonrelativistic Schrodinger equation without spin-orbit corrections, we are not able to compute fine or hyperfine structure. To remove the fine structure, experimental energy levels with different total angular momenta are averaged together weighted by the degeneracy \(m_{J}=2J+1\) and treated as a single level. The hyperfine structure is too small to be of concern here. To investigate the effect of the choice of Ansatz as well as the choice of number of states \(k\) to compute, we ran calculations with the FermiNet with both 5 and 10 states, as well as the Psiformer with 10 states. Results are given in Fig. 1, with numerical results (including error bars) in the Appendix in Table 2.
For all atoms, NES-VMC gives results closely matching experiment. From lithium up to oxygen, the error relative to experiment is far less than 1 mHa (27.2 meV) for all but the highest excited state, and is often less than 0.1 mHa, an exceedingly high level of accuracy for a deep neural network Ansatz. On lithium, all Ansatze correctly converge to the \({}^{2}S\) and \({}^{2}P^{\circ}\) states, which are missed by the PauliNet penalty method [18]. The method struggles in some cases to get the highest energy state correct, but this seems to be improved by simply computing more states - for instance, the error in the \({}^{4}P\) states of fluorine is cut in half by increasing the number of states from 5 to 10. In rare cases, the highest state
Figure 1: Excited state energies for first row atoms from lithium to neon. Results from natural excited state VMC applied to the FermiNet (10 states, blue, 5 states, red) are shown on top of experimental results [32]. Spectral lines which match computed states are labeled with electron configurations and atomic term symbols (except for the highest levels of F and Ne, where term symbols are omitted for clarity). For all but the largest systems and highest excited states, there is excellent agreement with experiment. The discrepancy between 5 and 10 excited states is minimal except for the highest excited states of F and Ne, where computing more states increases the accuracy of a given state. Complete numerical results are given in Table 2.
seems to converge to the incorrect state, such as boron with the Psiformer, which seems to converge to the \({}^{2}P^{\circ}\) state rather than the last \({}^{2}D\) state. Fluorine and neon both have relatively large errors on the order of 1-2 mHa for low-lying states, but going from the FermiNet to the more accurate Psiformer Ansatz seems to reduce this error in all cases. The largest errors are in the highest states of fluorine and neon, where the error is significant. In this case we suspect the difficulty is due to the large number of different states with similar electron configurations and energies, and hope that by computing even more states or by using even more expressive Ansatze, the effects of individual states can be disentangled. The excellent performance on low-lying states gives us confidence that NES-VMC is mathematically sound.
### Oscillator Strengths
Going beyond results on single atoms and vertical excitation energies, we are interested in the performance of NES-VMC on more complicated molecular systems, as well as observable quantities other than the energy. The QUEST database [45; 46; 48; 49; 50; 51; 52; 53; 54] is an excellent source of well-controlled benchmark vertical excited states calculations using coupled cluster methods on molecules of various sizes, with consistent geometries and basis set extrapolations. Of particular interest is the subset of QUEST for which oscillator strengths have been computed [45], as oscillator strengths provide a strong test of how well an excited state method can perform on experimentally-observable quantities, and especially as oscillator strength and transition probability calculations are known to be highly sensitive to choices of basis set [55].
Oscillator strengths are a measure of the probability of transition between different states occurring as a result of photon emission or absorption. Under the assumption that the wavelength of the photon is much longer than the system under consideration, so the interaction can be approximated by a constant electric field, the transition dipole moment between two states gives a measure of how that transition will interact with light:
\[\mathbf{d}_{ij}=\left\langle\psi_{i}^{\dagger}\sum_{k}q_{k}\mathbf{r}_{k}\psi_ {j}\right\rangle \tag{15}\]
where the sum over \(k\) is taken over all particles in the system with charge \(q_{k}\) and position \(\mathbf{r}_{k}\). For electrons, \(q_{k}=-e\). The transition dipole moments are vector-valued quantities which include a complex phase factor, and are not directly observable. The oscillator strength
Figure 2: Vertical excitation energies and oscillator strengths for small molecules from Chrayteh _et al._[45]. Singlet states are in blue and triplet states are in gray. NES-VMC results are indicated by markers while theoretical best estimates from Chrayteh _et al._[45] or directly from QUEST [46] are given by the lines. When no data from QUEST is available, no TBE is given. Experimental results from Chrayteh _et al._[45] and references therein are given by the dashed lines in green. Where available, energies and oscillator strengths from Entwistle _et al._[18] are provided by the black triangles for comparison, with (pointing left) and without (pointing right) variance matching. In most cases, our results on both energies and oscillator strengths agree closely with theoretical best estimates. Complete numerical results are given in Table 3.
of a particular transition can be computed from the transition dipole moment:
\[f_{ij}=\frac{2}{3}\frac{m}{\hbar^{2}}\left(E_{i}-E_{j}\right)|\mathbf{d}_{ij}|^{2} \tag{16}\]
which reduces the transition dipole moment to a dimensionless positive scalar. In natural excited states, we can compute expectations of operators between different states up to an arbitrary sign factor, and that sign factor goes away in the oscillator strength. Computational details are discussed in more detail in Sec. C.3.
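For concreteness, a small numerical sketch of Eq. (16) (ours, working in Hartree atomic units so that the prefactor \(m/\hbar^{2}\) is unity; the energies and dipole vector are made-up placeholders, whereas in practice \(\mathbf{d}_{ij}\) would be the Monte Carlo estimate of Eq. (15)):

```python
import numpy as np

def oscillator_strength(E_i, E_j, d_ij):
    """Eq. (16) in Hartree atomic units: f = (2/3) * (E_i - E_j) * |d_ij|^2.
    d_ij may be complex; only its squared magnitude enters."""
    d_ij = np.asarray(d_ij)
    return (2.0 / 3.0) * (E_i - E_j) * float(np.vdot(d_ij, d_ij).real)

# Placeholder transition: upper state at -75.00 Ha, lower state at -75.25 Ha.
print(oscillator_strength(-75.00, -75.25, [0.3, 0.0, 0.1]))
```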
We applied NES-VMC to all of the small molecules investigated in Chrayteh _et al._[45], computing the 5 lowest energy states with both the FermiNet and Psiformer. Results are presented in Fig. 2 and Table 3. Wherever possible, we take results from QUEST [45, 46] to be theoretical best estimates (TBEs) for comparison, though for many of the states we converged to, especially triplets, no results exist in QUEST. For molecules with heavier atoms (HCl, H\({}_{2}\)S, H\({}_{2}\)CSi), we found that using pseudopotentials for the heaviest atoms significantly improved the accuracy of the results, likely because the total energy scale was reduced by ignoring core electrons. Where applicable, we also include a comparison against the VMC penalty method of Entwistle _et al._[18]. We omit N\({}_{2}\) because the lowest-lying excited states are all triplets. For all diatomic systems, the \({}^{1}\Pi\) state is doubly-degenerate, and so the baseline oscillator strengths are divided by two to match the computed results.
In almost all cases, both the vertical excitation energies and the oscillator strengths are in excellent agreement with the TBE. The vertical excitation energies are almost all within chemical accuracy (1.6 mHa or 0.04 eV) of the TBE while the oscillator strengths usually diverge from the TBE by at most an amount on the order of 0.001, comparable to the uncertainty in the calculations. The results of Entwistle _et al._, in contrast, often differ noticeably from other theoretical results, even when corrections using variance matching are applied. This is particularly noticeable for the oscillator strengths. We note that we do not use variance matching for any of the NES-VMC calculations.
There are a few cases where NES-VMC behaves oddly. While the FermiNet and Psiformer find nearly identical vertical excitation energies for the \({}^{1}\Pi\) state of HCl, and the FermiNet accurately predicts the oscillator strength, the Psiformer mistakenly finds this to be a dark state. On formaldehyde (CH\({}_{2}\)O), both the FermiNet and Psiformer fail to find the \({}^{3}A_{1}\) state at all, and the oscillator strength for the \({}^{1}B_{2}\) state diverges from the TBE by a significant margin, although the Psiformer halves that margin relative to the FermiNet. Vertical excitation energies for systems with heavier atoms, such as H\({}_{2}\)S, and the highest state of thioformaldehyde (CH\({}_{2}\)S), are not quite
Figure 3: Excited states of the carbon dimer (C\({}_{2}\)). (a) The symmetries of the different states can be identified by evaluating each single state Ansatz at location \(\mathbf{r}\) and \(-\mathbf{r}\) for parity symmetry (u/g, blue) or by flipping \(\mathbf{r}\) across the x-axis for reflection symmetry (+/–, orange). (b) The vertical and adiabatic energies of excited states of C\({}_{2}\). The green line indicates experimental energies [47] and the red line indicates the energy of the \(B^{1}\Delta_{g}\) state from QUEST [48]. Bright transitions are labelled with their oscillator strength and, when available, their names. (c) Visualization of the 8 lowest natural orbitals of C\({}_{2}\). (d) The occupancy of the different natural orbitals for the different excited states of C\({}_{2}\), identified from the density matrix of each state. The \(a^{3}\Pi_{u}\) through \(A^{1}\Pi_{u}\) states are single excitations while the last two states are double excitations. Complete numerical results are given in Table 4.
as accurate as other results, though in the case of thioformaldehyde we are hopeful that, consistent with the atomic results in the previous section, computing more states will reduce the error in the \({}^{3}B_{2}\) state. For nitroxyl (HNO), the FermiNet fails to converge to the \({}^{1}A^{\prime}\) state, but the Psiformer finds it correctly, albeit with a relatively large error in the vertical excitation energy. This suggests that there are occasional difficulties in getting NES-VMC to converge to all low-lying states, but we are hopeful that improvements in optimization methods can improve this in the future. What is clear is that NES-VMC works well in the large majority of cases, and is far more accurate than alternative methods which have been proposed for neural network Ansatze.
Other QMC methods have also been applied to some of these systems. In particular, the QMC-CIPSI method has been successfully applied to computing the vertical excitation energies of the \({}^{1}A_{2}\) state in formaldehyde and thioformaldehyde to within chemical accuracy, using a conventional Slater-Jastrow Ansatz [56]. While the QMC-CIPSI method cannot be applied to neural network Ansatze, this suggests that good results can still be achieved with VMC with a simple Ansatz, and that the benefit of using NES-VMC relative to the penalty method in Entwistle _et al._ is due to the method rather than the choice of Ansatz.
### Carbon Dimer
In addition to computing observable quantities, it is also desirable to be able to say something about the _nature_ of different excited states - whether a state is a valence or Rydberg or charge transfer excitation, what its symmetries are, whether it is a single or double excitation, etc. As a benchmark system for demonstrating the ability of NES-VMC to characterize different states, we study the carbon dimer (C\({}_{2}\)). Despite its small size, the carbon dimer has a complicated electronic structure with a large number of low-lying excited states [60; 47; 59]. Due to the existence of very strong bands in the visible spectrum, the carbon dimer is frequently detected in astrophysical measurements, and can be observed in comets rich in organic materials [61]. The exact bond order of C\({}_{2}\) is still a subject of some controversy - while molecular orbital theory would classify it as a double bond, valence bond calculations suggest it may be better described as a quadruple bond [62]. And the carbon dimer is one of the smallest molecules to have low-lying double excitations, a class of excited state which other methods often struggle with [48]. Correctly reconstructing the potential energy curves for different low-lying states requires correctly disentangling and characterizing these different states at different geometries.
We compute the 8 lowest-lying states of the carbon dimer at several different bond lengths using NES-VMC and the Psiformer Ansatz, and present the results in Figs. 3 and 4. At equilibrium (1.244 Å), we classify the different states by computing their spin magnitude and their symmetries - both parity symmetry (u/g) where the electron positions \(\mathbf{r}\) are replaced by \(-\mathbf{r}\) and reflection symmetry (+/-) where the positions are flipped on the x-axis. We do not compute the orbital angular momentum operator, but confirm that we see the expected degeneracy, for instance \(\Pi\) states are doubly degenerate (one of each reflection symmetry). The oscillator strengths show several bright transitions, which we show in Fig. 3b. Due to the degeneracy of the \(\Pi\) states, we add the oscillator strengths together to give the total strength. We correctly identify the Phillips and Ballik-Ramsay systems [63; 64], as well as the unnamed \(B^{1}\Delta_{g}\to A^{1}\Pi_{u}\) transition. We also find that the energy of the \(B^{1}\Delta_{g}\) state closely matches the TBE in QUEST [48]. The \(A^{1}\Pi_{u}\), \(c^{3}\Sigma_{u}^{+}\) and \(b^{3}\Sigma_{g}^{-}\) states all have nearly the same energy, so correctly identifying the oscillator strengths for these transitions is very challenging.
To better understand the nature of each excited state, we compute the occupancy of the different natural orbitals. We first compute the one-electron reduced density matrix (1-RDM) for each single-state Ansatz in a large basis set and then diagonalize these matrices to find the natural orbitals, as described in more detail in Sec. C.2. In this case, using the Hartree-Fock orbitals as the basis set, we find that all 1-RDMs are nearly diagonal, that is, the natural orbitals closely match the Hartree-Fock molecular orbitals. We see in Fig. 3d that all states above the ground state involve excitation of electrons into the \(2p_{z}\sigma_{g}\) orbital. The \(\Pi\) states are well-described by single excitations from one of the \(2p\pi_{u}\) orbitals while the \(c^{3}\Sigma_{u}^{+}\) state promotes an electron from the \(2s\sigma_{u}^{*}\) orbital. Finally, both the \(b^{3}\Sigma_{g}^{-}\) and \(B^{1}\Delta_{g}\) states are double excitations of the \(2p\pi_{u}\) electrons into the \(2p_{z}\sigma_{g}\) orbital, as expected. Not only is NES-VMC able to predict double excitation energies correctly, but by having an explicit functional form
Figure 4: Potential energy curves of the low-lying excited states of C\({}_{2}\) which can be uniquely identified from their symmetries. Complete numerical results are given in
for the wavefunction Ansatz, we can compute quantities such as the reduced density matrices which allow us to derive insight about the nature of electronic excitations.
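The occupancy analysis used here boils down to diagonalising each state's 1-RDM; the following sketch (our own, with a small made-up density matrix standing in for the Monte Carlo estimate in the Hartree-Fock basis) shows the essential step:

```python
import numpy as np

def natural_occupations(rdm1):
    """Diagonalise a (Hermitian) one-electron reduced density matrix and
    return occupation numbers in descending order together with the
    natural orbitals as columns in the basis in which rdm1 is expressed."""
    rdm1 = 0.5 * (rdm1 + rdm1.conj().T)   # symmetrise against sampling noise
    occ, orbitals = np.linalg.eigh(rdm1)
    order = np.argsort(occ)[::-1]
    return occ[order], orbitals[:, order]

# Placeholder 1-RDM in a 3-orbital basis; its trace is the electron count (3).
rdm1 = np.array([[1.98, 0.02, 0.00],
                 [0.02, 1.00, 0.05],
                 [0.00, 0.05, 0.02]])
occ, orbs = natural_occupations(rdm1)
print(occ, occ.sum())
```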
Predicting experimental excitation energies requires computing the energy difference between different states in their respective lowest energy configurations, the so-called _adiabatic_ excitation energy. To compute this for C\({}_{2}\), we repeated the equilibrium calculations at a number of different bond lengths. Wherever possible, we matched the energy levels at different geometries to the appropriate states based on the same symmetries as in Fig. 3a, and for five states we were able to reconstruct enough of the potential energy curve to identify the minimum energy for each. The results are shown in Fig. 4, smoothed by cubic interpolation. Taking the difference between the minimum energies of each state gives an estimate of the adiabatic excitation energy, which we show in purple in Fig. 3b, and in 3 out of 4 cases we matched the experimental energy [47] to within roughly 0.01 eV. We did not estimate the zero-point vibrational energies, but believe this may explain the discrepancy in the \(c^{3}\Sigma_{u}^{+}\) state. This shows that not only can NES-VMC match other theoretical calculations of vertical excitation energies, but it can also predict experimental results to high accuracy.
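A rough sketch of how such adiabatic energies can be extracted (ours; the bond lengths and energies below are placeholders rather than the computed values, and scipy's cubic splines stand in for whatever interpolation was actually used):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

def curve_minimum(bond_lengths, energies):
    """Interpolate one potential energy curve and return its minimum energy."""
    spline = CubicSpline(bond_lengths, energies)
    res = minimize_scalar(spline, bounds=(bond_lengths[0], bond_lengths[-1]),
                          method="bounded")
    return float(res.fun)

# Placeholder curves for two states of a diatomic (lengths in Angstrom, energies in Ha).
r = np.array([1.10, 1.20, 1.25, 1.30, 1.40])
e_ground = np.array([-75.80, -75.86, -75.87, -75.86, -75.83])
e_excited = np.array([-75.72, -75.77, -75.79, -75.78, -75.74])
print("adiabatic excitation energy (Ha):",
      curve_minimum(r, e_excited) - curve_minimum(r, e_ground))
```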
### Twisted Ethylene
The excited states of ethylene (C\({}_{2}\)H\({}_{4}\)) across its potential energy surface present a challenging benchmark problem for many methods. As the torsion of the carbon double bond is increased, an avoided crossing occurs when the torsion angle is \(90^{\circ}\). Even for ground state calculations, DFT and single-reference coupled cluster calculations predict an unphysical cusp at this location[67]. Starting from the \(90^{\circ}\) torsion and bending the hydrogen atoms on one side inward (so-called "pyramidalization"), ethylene undergoes a conical intersection where the ground state transitions from a \(\pi\) to \(\pi^{*}\) highest occupied orbital (the \(N\) and \(V\) states, with term symbols \({}^{1}A_{g}\) and \({}^{1}B_{1u}\)). Modeling this conical intersection requires fundamentally multireference methods, and while time-dependent density functional theory (TD-DFT) struggles with this system[68], multireference configuration interaction (MR-CI) methods describe it well[58].
We compute the excited states of ethylene as the torsion angle is varied from \(0^{\circ}\) to \(90^{\circ}\), followed by variation of the pyramidalization angle from \(0^{\circ}\) to \(120^{\circ}\), enough to include the conical intersection of the \(N\) and \(V\) states. We try to match the geometry from previous MR-CI studies[58] as closely as possible. Results are shown in Fig. 5. There are also several low-lying triplet states of ethene, the \({}^{3}B_{1u}\) and \({}^{3}B_{3u}\) states, and so we calculated \(K=3\) excited states for all geometries, which we found was enough to find two singlet states for all geometries except at equilibrium, where we used \(K=5\) and took the highest state, as the \({}^{1}B_{3u}\) state has lower energy exclusively at equilibrium. We tried both the FermiNet and Psiformer, and did not find any significant difference in the results, so we show the Psiformer results here (though FermiNet results are included in Table 6). For comparison, in addition to TD-DFT[57] and MR-CI, we also compare against the PauliNet penalty method[18]. For consistency, we show the PauliNet penalty method without variance matching, though the difference is not large. All results are normalized so that the ground state energy at
Figure 5: Excited states and conical intersection of ethylene (C\({}_{2}\)H\({}_{4}\)). Our results (blue) are compared against TD-DFT[57] (purple), MR-CI[58] (green) and a penalty method used with the PauliNet[18] (red). The best estimate of the location of the conical intersection of the V and N states for each method is given by the vertical line in Fig. 5b. Our method is in close agreement with MR-CI up to a constant shift, and agrees with the location of the conical intersection better than the PauliNet penalty method. Note that the \(\phi=0\) geometry in Fig. 5b differs slightly from the \(\tau=90\) geometry in Fig. 5a, as in Barbatti _et al._[58]. Complete numerical results are given in Table 6.
the equilibrium geometry is 0.
Qualitatively, the results from NES-VMC closely match MR-CI. The spurious cusp when the torsion angle is 90\({}^{\circ}\) is avoided, and the error in the ground state relative to MR-CI is smaller than for the PauliNet penalty method across torsion angles. The non-parallelity error in the \(V\) state relative to MR-CI is lower for our method than the PauliNet penalty method, and our predicted location for the conical intersection (\(\sim\)97.5 degrees) is closer to the MR-CI value (\(\sim\)96 degrees) than the predicted PauliNet penalty method value (\(\sim\)100 degrees). There is a nearly constant shift in the energy of the \(V\) state on the order of several tenths of an eV relative to MR-CI, and a shift in the energy of the \(N\) state which grows as the pyramidal angle grows. Increasing the number of excited states and using a different Ansatz did not seem to make a difference. We note that when using the equilibrium geometry for ethylene from QUEST in Sec. III.2 as opposed to the geometry from MR-CI, our results agreed with the theoretical best estimates to within chemical accuracy. Experimentally relevant quantities such as the location of the conical intersection are in excellent agreement with other highly accurate theoretical studies, and so we are confident that NES-VMC is able to capture the important behavior of this system across the potential energy surface.
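One simple way to estimate the location of the intersection from scan data is to interpolate the two states' energies and find where their signed difference vanishes; a sketch of that idea follows (ours, with placeholder angles and energies, and assuming the two curves are labelled by state character so that the gap actually changes sign across the crossing):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import brentq

def crossing_angle(angles, e_lower, e_upper):
    """Locate the angle at which two interpolated energy curves cross."""
    gap = CubicSpline(angles, np.asarray(e_upper) - np.asarray(e_lower))
    return brentq(gap, angles[0], angles[-1])

# Placeholder pyramidalisation scan (degrees) for the N and V states (energies in Ha).
phi = np.array([80.0, 90.0, 95.0, 100.0, 110.0])
e_N = np.array([-78.30, -78.28, -78.27, -78.26, -78.25])
e_V = np.array([-78.20, -78.24, -78.26, -78.27, -78.29])
print("estimated crossing angle (degrees):", crossing_angle(phi, e_N, e_V))
```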
### Benzene
Finally, as a demonstration of the ability of our method to scale to larger systems, we applied NES-VMC with both the FermiNet and Psiformer to benzene. Benzene is a common benchmark for excited state methods for medium-sized molecules, so there is abundant data for us to compare against. For VMC, in addition to the penalty method of Entwistle _et al._[18], there is also the penalty method of Pathak _et al._[17], which is used in combination with a traditional Slater-Jastrow Ansatz, and uses a different form of the penalty function which allows for unbiased gradients. On top of VMC results and coupled-cluster-based TBEs from QUEST, we also compare against CASPT2[66] and TD-DFT with the PBE0 functional[65]. Results are shown in Fig. 6, with complete numerical results in Table 7. For our calculations, we used the same geometry as in QUEST[50].
As can be seen in Fig. 6a, NES-VMC with the Psiformer comes very close to reaching the TBE for all computed states. The FermiNet is not quite as accurate, and struggles with the highest energy \({}^{3}B_{2u}\) state. Inspection of the spin magnitude reveals that the highest excited state of the FermiNet converges to a mixture of a triplet and singlet state, which suggests that contamination from the \({}^{1}B_{1u}\) state is affecting the performance. The Psiformer is known to be much more accurate for ground state calculations on systems as large as benzene[38], so it is not surprising that the Psiformer is also better suited for computing the relative energy between states at this
Figure 6: Excited states of benzene. The NES-VMC results (green and blue) are compared against theoretical best estimates from QUEST[50; 54] alongside TD-DFT-PBE0[65], CASPT2[66], DMC with a Slater-Jastrow Ansatz and penalty method[17], and the PauliNet with a penalty method[18]. NES-VMC with the Psiformer Ansatz is competitive with state-of-the-art methods. All excitations are \(\pi\rightarrow\pi^{*}\) excitations, and the orbitals involved are visualized in Fig. 6b. Complete numerical results are given in Table 7.
scale. CASPT2 and TD-DFT methods are less accurate across the board, though this is not surprising as density functional methods are generally less accurate than wavefunction methods, and CASPT2 is generally intermediate in accuracy between DFT and coupled cluster. The penalty method of Entwistle _et al._ in combination with the PauliNet was only applied to the lowest excited state, and even on that, it only reaches CASPT2-level accuracy, even with variance matching (which we do not use in NES-VMC). The penalty method of Pathak _et al._, however, is much more accurate, generally reaching comparable levels of accuracy to NES-VMC with the Psiformer. We suspect this is due to the unbiased gradients in the method of Pathak _et al._. Additionally, the results reported in Pathak _et al._ include a diffusion Monte Carlo correction, which we omit, though this only reduces the root mean squared error by \(\sim\)0.1 eV. We note that NES-VMC with a sufficiently expressive Ansatz not only reaches levels of accuracy near coupled cluster, but does so without any free parameters to tune, unlike penalty methods.
To better understand the nature of the excitations computed, we inspected the density matrices of the respective states, similarly to the analysis of the carbon dimer in Sec III.3 and Fig. 3c. The natural orbitals are well described by the Hartree-Fock orbitals, and so the density matrices in the Hartree-Fock basis are all nearly diagonal. All five excited states for benzene we computed are single excitations from a \(\pi\) to \(\pi^{*}\) orbital, but interestingly, in the natural orbital picture, they are best described by exciting half an electron from two distinct \(\pi_{g}\) orbitals into two distinct \(\pi_{u}^{*}\) orbitals. These orbitals are visualized in Fig 6b. The ability to easily evaluate and analyze properties of wavefunctions other than just the energy is one of the advantages of explicitly computing the functional form of the wavefunction in VMC. Overall, our results on benzene show that NES-VMC can be usefully applied to larger systems and still produce accurate results, so long as the correct Ansatz is used.
## IV Discussion
We have presented a novel method for calculating excited state properties of quantum systems by variational quantum Monte Carlo (VMC), the Natural Excited States method (NES-VMC). NES-VMC has no free parameters to tune, and allows for unbiased estimation of energies and gradients, by reformulating a state-averaging approach as the problem of finding the ground state of an extended system. In much the same way that sampling from \(\psi^{2}\) is the natural way to compute ground state properties by VMC, we believe that NES-VMC is the natural variational principle for computing excited state properties. Additionally, it dovetails well with recent work on neural network Ansatze for many-body systems.
We have demonstrated the effectiveness of NES-VMC on a number of benchmark problems ranging from small atoms and molecules up to the benzene molecule. In all cases, NES-VMC is competitive with theoretical best estimates using coupled cluster methods for estimating energies and oscillator strengths, and can capture the behavior of double excitations and conical intersections. The optimized Ansatz can be used in downstream analyses to characterize the nature of the electronic structure of different excited states. We are confident that NES-VMC is as effective as any other method for computing excited states with QMC, with the added benefit of simplicity and generality.
Neural network Ansatze can be quite computationally expensive, which puts an upper limit on the scale of systems we considered. We believe that recent work on scaling and accelerating neural network Ansatze for many-electron problems [41] can be usefully applied to NES-VMC as well, which could allow these methods to be applied to problems for which no satisfactory solution exists today. While we focused on applications using neural network Ansatze, classic Ansatze like the Slater-Jastrow Ansatz can be scaled to much larger systems [25; 69]. Although our results suggest that more accurate Ansatze are quite important for achieving good performance, we look forward to finding out how well NES-VMC works in conjunction with these classic Ansatze on large problems.
Finally, while our experiments in this paper focused on molecular systems, that is, many-electron problems with open boundary conditions, NES-VMC is fully general and can be applied to _any_ quantum Hamiltonian. Excited state calculations with QMC are an important tool for studying nuclear physics [6], optical band gaps in condensed matter physics [13; 70], many properties of spin systems, as well as time dynamics and finite temperature phenomena. We are excited to see how NES-VMC can be applied to many of the most challenging open problems in many-body quantum mechanics in the future.
###### Acknowledgements.
The authors would like to thank Matthew Foulkes, Denis Jacquemin, Michael Bearpark, Aron Cohen and Alex Gaunt for helpful discussions, and James Kirkpatrick, Annette Obika, Ali Eslami and Pushmeet Kohli for support.
We estimate the low-lying excited states of quantum systems using a variational Monte Carlo algorithm, a natural generalisation of ground-state estimation. The method has no free parameters and requires no orthogonalisation of the different states; instead, the problem of finding the excited states of a given system is transformed into that of finding the ground state of an extended system. Expectation values of arbitrary observables can be computed, including off-diagonal expectations between different states, such as transition dipole moments. Although the method is entirely general, it works especially well in combination with recent work on using neural networks as variational Ansätze for many-electron systems, and by combining it with the FermiNet and Psiformer Ansätze we were able to accurately reproduce vertical excitation energies and oscillator strengths in molecules. |
2309.09499 | Weak Operator Continuity for Evolutionary Equations | Considering evolutionary equations in the sense of Picard, we identify a
certain topology for material laws rendering the solution operator continuous
if considered as a mapping from the material laws into the set of bounded
linear operators, where the latter are endowed with the weak operator topology.
The topology is a topology of vector-valued holomorphic functions and provides
a lift of the previously introduced nonlocal $\mathrm{H}$-topology to
particular holomorphic functions. The main area of applications are nonlocal
homogenisation results for coupled systems of time-dependent partial
differential equations. A continuous dependence result for a nonlocal model for
cell migration is also provided. | Andreas Buchinger, Nathanael Skrepek, Marcus Waurick | 2023-09-18T05:46:41 | http://arxiv.org/abs/2309.09499v1 | # Weak operator continuity for evolutionary equations
###### Abstract.
Considering evolutionary equations in the sense of Picard, we identify a certain topology for material laws rendering the solution operator continuous if considered as a mapping from the material laws into the set of bounded linear operators, where the latter are endowed with the weak operator topology. The topology is a topology of vector-valued holomorphic functions and provides a lift of the previously introduced nonlocal H-topology to particular holomorphic functions. The main area of applications are nonlocal homogenisation results for coupled systems of time-dependent partial differential equations. A continuous dependence result for a nonlocal model for cell migration is also provided.
Key words and phrases: Homogenisation, H-convergence, nonlocal H-convergence, Evolutionary equations, piezo-electricity. 2020 Mathematics Subject Classification: 32C18, 35B27, 74Q10, 78M40
## 1. Introduction
Evolutionary Equations in the sense of Picard provide a unified Hilbert space framework to address well-posedness, regularity, long-time behaviour and other qualitative as well as quantitative properties of predominantly time-dependent partial differential equations. The origins date back to the seminal papers [14, 15]; several examples can be found in [15]. We particularly refer to the monograph [16] for a self-contained round-up of the theory. In the linear and autonomous case, evolutionary equations take the form
\[(\partial_{t}M(\partial_{t})+A)U=F,\]
formulated in some space-time Hilbert space of functions \(\mathbb{R}\to\mathcal{H}\), where \(\mathcal{H}\) is a Hilbert space modelling the spatial dependence. \(F\) is a given right-hand side, \(U\) is the unknown, \(A\) is a skew-selfadjoint linear operator, containing the spatial derivatives, and \(\partial_{t}\) is an operator-realisation of the time-derivative. The main conceptual difference to other more traditional approaches of addressing time-dependent partial differential equations is the _material law operator_
\[M(\partial_{t}),\]
which in turn is a bounded, operator-valued holomorphic function \(M\) defined on a certain right-half plane of the complex numbers applied to the time-derivative in a sense of a functional calculus. As the name suggests, the material law operator encodes the underlying material properties, i.e., the constitutive relations and their corresponding material coefficients. As a consequence, the complexity of the material is _not_ contained in \(A\) but rather in \(M(\partial_{t})\). This has certain advantages. For instance, domain issues or interface phenomena can be more aptly dealt with in the framework of evolutionary equations. Since the material properties are encoded in \(M(\partial_{t})\), it is of no surprise that homogenisation problems can be reformulated in
the framework of evolutionary equations. The main question for these problems is: given a sequence of material laws \((M_{n})_{n}\) does there exist a material law \(M\) such that
\[(\partial_{t}M_{n}(\partial_{t})+A)^{-1}\to(\partial_{t}M(\partial_{t})+A)^{-1}\]
in a suitable (operator) topology as \(n\to\infty\). In this general situation, it is possible to answer the question in the affirmative, see [1]. Indeed, based on a result for (abstract) Friedrichs systems [1, 1], one can show that such a material law \(M\) exists if the above convergence is assumed in the weak operator topology, arguably a very weak assumption. In applications, this result might not be precise enough for the identification of \(M\).
Asking more of the operator \(A\), we will define a topology on the set of material laws such that if \(M_{n}\to M\) in this topology, then the solution operators converge in the weak operator topology. Since it is possible to show that suitably bounded material laws are compact under this topology, one can also recover a special case of the main result in [1] in this case. However, the upshot of the present article is the identification of the topology. The topology introduced is an appropriate generalisation of the Schur topology or nonlocal H-topology as introduced in [20]. Since, in applications, this topology is the precise operator-theoretic description of the topology induced by H-convergence (see [19, 21] for H-convergence and [20] for its connections to the nonlocal case) known homogenisation results can be used to find the limit of \(M_{n}\) and, immediately, homogenisation results for time-dependent problems can be obtained. Moreover, note that, in a certain way, the compactness result of the present article can also be viewed as a variant of [19, Lemma 10.10] as it asserts that (nonlocal) H-convergence preserves holomorphic dependence on a parameter in the limit.
The idea of finding an appropriate topology on the set of operator-valued holomorphic functions to describe homogenisation problems goes back to the PhD-Thesis [20]. It has since then been applied to ordinary differential equations (with infinite-dimensional state spaces) [20, 21], to partial differential equations [20, 22], or to systems of partial differential equations [20]. The main assumptions imposed on \(A\) were either \(A=0\), \(A\) being invertible with compact resolvent or, more generally, \(\operatorname{dom}(A)\cap\ker(A)^{\perp}\) being compactly embedded in \(\mathcal{H}\). An even more general assumption on \(A\) was treated in [20] with severe technical challenges and seemingly undue restrictions on the admissible set of material laws. Here, we may entirely drop all these additional assumptions on the material law imposed in [20] and thus provide the recently most general form of continuous dependence results for solution operators of evolutionary equations depending on their material laws. We emphasise that this theorem also supersedes the findings for the autonomous case in [20]. The present result is also easier to apply compared to previous versions. The approach particularly works for systems of partial differential equations and nonlocal material laws.
We quickly outline how the paper is organised. In Section 2 we briefly recall the concept of nonlocal H-convergence and discuss its connection to the classical notion of H-convergence. The concept of evolutionary equations is introduced in Section 3. In this section, the notion of material laws and material law operators is properly defined and the fundamental result - Picard's theorem - is recalled. Section 4 describes the topology of compact convergence for operator-valued holomorphic functions with respect to the weak operator topology. An analogue of the Banach-Alaoglu theorem (Corollary 4.7) is presented and proved in a comprehensive manner filling some details missing in the (sketch) proofs in [20, Theorem 4.3] or [20, Theorem 3.4]. The developed results are used to define the desired topology that characterises nonlocal H-convergence for material laws in Section 5, where we
also establish a compactness result of said topology. The main theorem of the present contribution is contained in Section 6, where we provide the continuity statement for the solution operator as a mapping from material laws taking values in the bounded linear operators in space-time endowed with the weak operator topology. Section 7 is devoted to two examples: one is about a nonlocal model for cell migration and the other one is concerned with a (nonlocal) homogenisation problem for (scalar) piezo-electricity. We conclude our findings in Section 8 and recall some known results in the Appendix A.
All scalar products considered are linear in the first component and antilinear in the second.
## 2. Nonlocal H-Convergence
The present section is devoted to summarise the rationale introduced in [11], exemplified in [11], and further studied in [11]. More precisely, we let \(\mathcal{H}\) be a Hilbert space and let \(\mathcal{H}_{0}\subseteq\mathcal{H}\) be a closed subspace; \(\mathcal{H}_{1}\coloneqq\mathcal{H}_{0}^{\perp}\). Then, any bounded linear operator \(M\in\mathcal{L}_{\mathrm{b}}(\mathcal{H})\) can be represented as a \(2\)-by-\(2\) operator matrix
\[M=\begin{pmatrix}M_{00}&M_{01}\\ M_{10}&M_{11}\end{pmatrix},\]
where \(M_{ij}\in\mathcal{L}_{\mathrm{b}}(\mathcal{H}_{j},\mathcal{H}_{i})\), \(i,j\in\{0,1\}\) (cf. Lemma A.2). We define
\[\mathcal{M}(\mathcal{H}_{0},\mathcal{H}_{1}) \coloneqq\left\{M\in\mathcal{L}_{\mathrm{b}}(\mathcal{H})\, \big{|}\,M_{00}^{-1}\in\mathcal{L}_{\mathrm{b}}(\mathcal{H}_{0}),M^{-1}\in \mathcal{L}_{\mathrm{b}}(\mathcal{H})\right\}\] \[\mathcal{M}(\alpha) \coloneqq\left\{M\in\mathcal{M}(\mathcal{H}_{0},\mathcal{H}_{1}) \,\bigg{|}\,\operatorname{Re}M_{00}\geq\alpha_{00},\operatorname{Re}M_{00}^{- 1}\geq\frac{1}{\alpha_{11}},\right.\] \[\left.\left\|M_{10}M_{00}^{-1}\right\|\leq\alpha_{10},\|M_{00}^{- 1}M_{01}\|\leq\alpha_{01},\right.\] \[\left.\operatorname{Re}(M_{11}-M_{10}M_{00}^{-1}M_{01})^{-1} \geq\frac{1}{\alpha_{11}},\right.\] \[\left.\operatorname{Re}(M_{11}-M_{10}M_{00}^{-1}M_{01})\geq \alpha_{00}\right\},\]
where \(\alpha=(\alpha_{ij})_{i,j\in\{0,1\}}\in(0,\infty)^{2\times 2}\). In applications, see [11, 11, 11], Example 2.3 or Section 7.2 below, the decomposition assumed for \(\mathcal{H}\) is drawn from the Helmholtz decomposition. This then leads to an appropriate generalisation of H-convergence (or G-convergence) to a general operator-theoretic and possibly but not necessarily nonlocal setting (see Theorem 2.4). The definition of nonlocal H-convergence reads as follows.
**Definition 2.1**.: The _nonlocal_ H_-topology_ or _Schur topology_, \(\tau(\mathcal{H}_{0},\mathcal{H}_{1})\), on \(\mathcal{M}(\mathcal{H}_{0},\mathcal{H}_{1})\) is given as the initial topology given by the mappings
\[\mathcal{M}(\mathcal{H}_{0},\mathcal{H}_{1})\ni M \mapsto M_{00}^{-1}\in\mathcal{L}_{\mathrm{b}}^{\mathrm{w}}( \mathcal{H}_{0})\] \[\mathcal{M}(\mathcal{H}_{0},\mathcal{H}_{1})\ni M \mapsto M_{10}M_{00}^{-1}\in\mathcal{L}_{\mathrm{b}}^{\mathrm{w}}( \mathcal{H}_{0},\mathcal{H}_{1})\] \[\mathcal{M}(\mathcal{H}_{0},\mathcal{H}_{1})\ni M \mapsto M_{00}^{-1}M_{01}\in\mathcal{L}_{\mathrm{b}}^{\mathrm{w}}( \mathcal{H}_{1},\mathcal{H}_{0})\] \[\mathcal{M}(\mathcal{H}_{0},\mathcal{H}_{1})\ni M \mapsto M_{11}-M_{10}M_{00}^{-1}M_{01}\in\mathcal{L}_{\mathrm{b}}^{ \mathrm{w}}(\mathcal{H}_{1}),\]
where \(\mathcal{L}_{\mathrm{b}}^{\mathrm{w}}(\mathcal{X},\mathcal{Y})\) denotes the space \(\mathcal{L}_{\mathrm{b}}(\mathcal{X},\mathcal{Y})\) endowed with the weak operator topology.
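For orientation, we note (a standard algebraic identity, not quoted from the references above) that the four mappings in Definition 2.1 are exactly the blocks of the triangular (Schur) factorisation of \(M\):

\[M=\begin{pmatrix}1&0\\ M_{10}M_{00}^{-1}&1\end{pmatrix}\begin{pmatrix}M_{00}&0\\ 0&M_{11}-M_{10}M_{00}^{-1}M_{01}\end{pmatrix}\begin{pmatrix}1&M_{00}^{-1}M_{01}\\ 0&1\end{pmatrix}.\]

In particular, an element of \(\mathcal{M}(\mathcal{H}_{0},\mathcal{H}_{1})\) is uniquely determined by the four mappings, so the initial topology \(\tau(\mathcal{H}_{0},\mathcal{H}_{1})\) is Hausdorff.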
For later use and illustrational purposes, we quickly provide a sufficient condition for convergence in the topology just introduced. The main tool for the proof is a modification of the proof of [10, Prop. 13.1.4] (see also Lemma A.10).
**Lemma 2.2**.: _Let \((M_{n})_{n\in\mathbb{N}}\) be a sequence in \(\mathcal{M}(\alpha)\) that converges to \(M\in\mathcal{L}_{\mathrm{b}}(\mathcal{H})\) in the strong operator topology. Then, \(M\in\mathcal{M}(\alpha)\) and \((M_{n})_{n\in\mathbb{N}}\) converges to \(M\) in \(\tau(\mathcal{H}_{0},\mathcal{H}_{1})\)._
Proof.: First, note that the Pythagorean theorem implies \(M_{n,00}\varphi\to M_{00}\varphi\) and \(M_{n,10}\varphi\to M_{10}\varphi\) for all \(\varphi\in\mathcal{H}_{0}\), as well as \(M_{n,01}\psi\to M_{01}\psi\) and \(M_{n,11}\psi\to M_{11}\psi\) for all \(\psi\in\mathcal{H}_{1}\).
Since \(M_{n}\in\mathcal{M}(\alpha)\) holds, we have \(\operatorname{Re}M_{n,00}\geq\alpha_{00}\) for \(n\in\mathbb{N}\). Together with \(M_{n,00}\varphi\to M_{00}\varphi\) for \(\varphi\in\mathcal{H}_{0}\), this shows
\[\operatorname{Re}M_{00}\geq\alpha_{00}.\]
Considering Lemma A.8, we infer that \(M_{00}\) is boundedly invertible and that the sequence \((M_{n,00}^{-1})_{n\in\mathbb{N}}\) is uniformly bounded in the operator norm. Hence, we can apply Lemma A.10 to prove
\[M_{n,00}^{-1}\varphi\to M_{00}^{-1}\varphi\]
for all \(\varphi\in\mathcal{H}_{0}\). We also know that \(\operatorname{Re}M_{n,00}^{-1}\geq 1/\alpha_{11}\) for all \(n\in\mathbb{N}\), which now implies
\[\operatorname{Re}M_{00}^{-1}\geq 1/\alpha_{11}.\]
Addition and multiplication of operators are sequentially continuous w.r.t. the strong operator topology. Thus, we immediately obtain
\[M_{n,10}M_{n,00}^{-1}\varphi \to M_{10}M_{00}^{-1}\varphi,\] \[M_{n,00}^{-1}M_{n,01}\psi \to M_{00}^{-1}M_{01}\psi\text{ and }\] \[M_{n,11}\psi-M_{n,10}M_{n,00}^{-1}M_{n,01}\psi \to M_{11}\psi-M_{10}M_{00}^{-1}M_{01}\psi\]
for all \(\varphi\in\mathcal{H}_{0}\) and all \(\psi\in\mathcal{H}_{1}\). Once again, Lemma A.8 and Lemma A.10 yield
\[\operatorname{Re}(M_{11}-M_{10}M_{00}^{-1}M_{01}) \geq\alpha_{00}\text{ and } \tag{1}\] \[\operatorname{Re}(M_{11}-M_{10}M_{00}^{-1}M_{01})^{-1} \geq 1/\alpha_{11}.\]
Hence, Lemma A.8 even allows us to explicitly write down the inverse of \(M\), which we will omit here (cf., however, Lemma 6.3). This means \(M^{-1}\in\mathcal{L}_{\mathrm{b}}(\mathcal{H})\).
Clearly, operator norm balls are closed in the strong operator topology. Hence, we lastly infer
\[\|M_{10}M_{00}^{-1}\|\leq\alpha_{10}\quad\text{ and }\quad\|M_{00}^{-1}M_{01}\| \leq\alpha_{01}.\qed\]
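For the reader's convenience we record the inverse omitted in the preceding proof; it has the standard Schur-complement form (our own restatement of a well-known formula, presumably the content of the Lemma 6.3 referred to above). Abbreviating \(W\coloneqq M_{11}-M_{10}M_{00}^{-1}M_{01}\),

\[M^{-1}=\begin{pmatrix}M_{00}^{-1}+M_{00}^{-1}M_{01}W^{-1}M_{10}M_{00}^{-1}&-M_{00}^{-1}M_{01}W^{-1}\\ -W^{-1}M_{10}M_{00}^{-1}&W^{-1}\end{pmatrix},\]

which is bounded on \(\mathcal{H}=\mathcal{H}_{0}\oplus\mathcal{H}_{1}\) as soon as \(M_{00}\) and \(W\) are boundedly invertible.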
The reason for having introduced the nonlocal H-topology is the consideration of homogenisation problems in an operator-theoretic setting. For this, we quickly recall a standard situation for choices of \(\mathcal{H}_{0}\) and \(\mathcal{H}_{1}\), respectively.
**Example 2.3**.: Let \(\Omega\subseteq\mathbb{R}^{3}\) be a bounded weak Lipschitz domain with continuous boundary. Then,
\[\mathfrak{g}_{0} \coloneqq\{\operatorname{grad}u\,|\,u\in\mathrm{H}_{0}^{1}( \Omega)\},\] \[\mathfrak{c} \coloneqq\{\operatorname{curl}E\,|\,E\in\mathrm{L}_{2}(\Omega)^{ 3},\operatorname{curl}E\in\mathrm{L}_{2}(\Omega)^{3}\},\] \[\mathfrak{g} \coloneqq\{\operatorname{grad}u\,|\,u\in\mathrm{H}^{1}(\Omega) \},\quad\text{and}\] \[\mathfrak{c}_{0} \coloneqq\{\operatorname{curl}E\,|\,\exists(E_{n})_{n}\in \mathrm{C}_{c}^{\infty}(\Omega)^{3}\colon E_{n}\to E\in\mathrm{L}_{2}(\Omega)^{ 3},\operatorname{curl}E_{n}\to\operatorname{curl}E\in\mathrm{L}_{2}(\Omega)^ {3}\}\]
are closed subspaces of \(\mathrm{L}_{2}(\Omega)^{3}\). Indeed, see [11] for the FA-toolbox establishing closed range results based on selection theorems (see also [1, Lemma 4.1] for the technique) and [1, 15] for the Picard-Weber-Weck selection theorem needed.
Then, the orthogonal decompositions
\[\mathrm{L}_{2}(\Omega)^{3}=\mathfrak{g}_{0}\oplus\mathfrak{c}\oplus\mathcal{H} _{D}(\Omega)=\mathfrak{g}\oplus\mathfrak{c}_{0}\oplus\mathcal{H}_{N}(\Omega),\]
hold, see, e.g., [13], where \(\mathcal{H}_{D}(\Omega)\) and \(\mathcal{H}_{N}(\Omega)\) are finite-dimensional subspaces, whose dimensions can be found to be the number of bounded connected components and the number of handles respectively, see [10, 13] for the details. \(\blacklozenge\)
One of the main results of [12] is the identification of the nonlocal H-topology as the precise topology describing homogenisation problems. For this, we introduce for \(\Omega\subseteq\mathbb{R}^{3}\) open and \(0<\alpha<\beta\)
\[M(\alpha,\beta;\Omega)\coloneqq\big{\{}a\in\mathrm{L}_{\infty}(\Omega)^{3 \times 3}\,\big{|}\,\mathrm{Re}\,a(x)\geq\alpha,\mathrm{Re}\,a(x)^{-1}\geq 1/ \beta\ (\text{a.e. }x\in\Omega)\big{\}}.\]
Next, we can identify \(M(\alpha,\beta;\Omega)\) as a subspace of \(\mathcal{L}_{\mathrm{b}}(\mathrm{L}_{2}(\Omega)^{3})\) and, thus, endow it with the trace topology induced by the nonlocal H-topology subject to the decompositions exemplified in Example 2.3. This straightforwardly works for \(\Omega\) being _topologically trivial_; that is, for \(\mathcal{H}_{D}(\Omega)=\mathcal{H}_{N}(\Omega)=\{0\}\). For topologically non-trivial domains, the situation is more involved. This is dealt with in [12, Section 5]. The main result of [12] shows that nonlocal H-convergence indeed generalises H-convergence appropriately and reads as follows.
**Theorem 2.4**.: _Let \(\Omega\subseteq\mathbb{R}^{3}\) be a bounded weak Lipschitz domain with continuous boundary. Additionally, assume \(\Omega\) to be topologically trivial. Let \(0<\alpha<\beta\) and \((a_{n})_{n}\) in \(M(\alpha,\beta;\Omega)\), \(a\in\mathcal{L}_{\mathrm{b}}(\mathrm{L}_{2}(\Omega)^{3})\). Then, the following conditions are equivalent:_
1. \(a\in M(\alpha,\beta;\Omega)\) _and_ \((a_{n})_{n}\)__H-converges to_ \(a\)_; that is, for all_ \(f\in\mathrm{H}^{-1}(\Omega)\) _and_ \(u_{n}\in\mathrm{H}^{1}_{0}(\Omega)\) _satisfying_ \[\langle a_{n}\operatorname{grad}u_{n},\operatorname{grad}\varphi\rangle=f( \varphi)\quad(\varphi\in\mathrm{H}^{1}_{0}(\Omega)),\] _we have_ \(u_{n}\rightharpoonup u\in\mathrm{H}^{1}_{0}(\Omega)\) _and_ \(a_{n}\operatorname{grad}u_{n}\rightharpoonup a\operatorname{grad}u\in \mathrm{L}_{2}(\Omega)^{3}\)_, where_ \[\langle a\operatorname{grad}u,\operatorname{grad}\varphi\rangle=f(\varphi) \quad(\varphi\in\mathrm{H}^{1}_{0}(\Omega));\]
2. \((a_{n})_{n}\) _converges to_ \(a\) _in_ \(\tau(\mathfrak{g}_{0},\mathfrak{c})\)_;_
3. \((a_{n})_{n}\) _converges to_ \(a\) _in_ \(\tau(\mathfrak{g},\mathfrak{c}_{0})\)_._
## 3. Evolutionary Equations
In this section, we briefly summarise the concept of evolutionary equations as introduced by Picard, see [10]; we particularly refer to [11] for a recent monograph on the subject matter. For \(\nu>0\) and a Hilbert space \(\mathcal{H}\) we define
\[\mathrm{L}_{2,\nu}(\mathbb{R};\mathcal{H})\coloneqq\bigg{\{}f\in\mathrm{L}_{1, \mathrm{loc}}(\mathbb{R};\mathcal{H})\,\bigg{|}\int_{\mathbb{R}}\lVert f(s) \rVert_{\mathcal{H}}^{2}\exp(-2\nu s)\,\mathrm{d}s<\infty\bigg{\}}.\]
The distributional derivative, \(\partial_{t}\), realised as an operator in \(\mathrm{L}_{2,\nu}(\mathbb{R};\mathcal{H})\) endowed with the maximal domain is continuously invertible, with explicit inverse given by
\[\partial_{t}^{-1}f(t)=\int_{-\infty}^{t}f(s)\,\mathrm{d}s\quad(f\in\mathrm{L}_ {2,\nu}(\mathbb{R};\mathcal{H})).\]
We have \(\lVert\partial_{t}^{-1}\rVert_{\mathcal{L}_{\mathrm{b}}(\mathrm{L}_{2,\nu}(\mathbb{R};\mathcal{H}))}\leq 1/\nu\). The (unitary) Fourier-Laplace transformation, \(\mathcal{L}_{\nu}\colon\mathrm{L}_{2,\nu}(\mathbb{R};\mathcal{H})\to\mathrm{L}_{2}(\mathbb{R};\mathcal{H})\), provides an explicit spectral theorem for \(\partial_{t}\). Indeed, using \(\mathrm{m}\) to denote the multiplication-by-argument operator with maximal domain in \(\mathrm{L}_{2}(\mathbb{R};\mathcal{H})\), we have
\[\partial_{t}=\mathcal{L}_{\nu}^{*}(\mathrm{im}+\nu)\mathcal{L}_{\nu},\]
where, for compactly supported, continuous \(\varphi\colon\mathbb{R}\to\mathcal{H}\),
\[\mathcal{L}_{\nu}\varphi(\xi)=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\mathrm{e} ^{-(\mathrm{i}\xi+\nu)s}\varphi(s)\,\mathrm{d}s\quad(\xi\in\mathbb{R}).\]
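As a quick sanity check (a one-line computation of ours), the spectral representation immediately confirms the bound \(\|\partial_{t}^{-1}\|\leq 1/\nu\) mentioned above: since \(\mathcal{L}_{\nu}\) is unitary,

\[\big\|\partial_{t}^{-1}\big\|_{\mathcal{L}_{\mathrm{b}}(\mathrm{L}_{2,\nu}(\mathbb{R};\mathcal{H}))}=\big\|(\mathrm{im}+\nu)^{-1}\big\|_{\mathcal{L}_{\mathrm{b}}(\mathrm{L}_{2}(\mathbb{R};\mathcal{H}))}=\sup_{\xi\in\mathbb{R}}\frac{1}{|\mathrm{i}\xi+\nu|}=\frac{1}{\nu}.\]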
The spectral representation for \(\partial_{t}\) gives rise to a functional calculus. It is enough to consider bounded, holomorphic functions on some right-half planes of \(\mathbb{C}\). This leads us to define the notion of material laws and corresponding material law operators:
**Definition 3.1**.: We call \(M\colon\operatorname{dom}(M)\subseteq\mathbb{C}\to\mathcal{L}_{\mathrm{b}}( \mathcal{H})\) a _material law_ if \(\operatorname{dom}(M)\) is open, \(M\) is holomorphic and there exists \(\nu_{0}>0\) such that \(\mathbb{C}_{\mathrm{Re}>\nu_{0}}\subseteq\operatorname{dom}(M)\) with
\[\sup_{z\in\mathbb{C}_{\mathrm{Re}>\nu_{0}}}\|M(z)\|<\infty.\]
The infimum over all such \(\nu_{0}\) is denoted by \(\mathrm{s}_{\mathrm{b}}(M)\), the _abscissa of boundedness_. If \(M\) is a material law and \(\nu>\mathrm{s}_{\mathrm{b}}(M)\), we define \(M(\partial_{t})\in\mathcal{L}_{\mathrm{b}}(\mathrm{L}_{2,\nu}(\mathbb{R}; \mathcal{H}))\), the _(corresponding) material law operator_, via
\[M(\partial_{t})\coloneqq\mathcal{L}_{\nu}^{*}M(\mathrm{im}+\nu)\mathcal{L}_{ \nu},\]
where for \(\varphi\in\mathrm{L}_{2}(\mathbb{R};\mathcal{H})\) we put
\[(M(\mathrm{im}+\nu)\varphi)(\xi)\coloneqq M(\mathrm{i}\xi+\nu)\varphi(\xi) \quad\text{(a.e. $\xi\in\mathbb{R}$)}.\qed\]
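As a minimal illustration of this definition (a standard example in the evolutionary equations literature, not spelled out in this excerpt), take \(M_{0},M_{1}\in\mathcal{L}_{\mathrm{b}}(\mathcal{H})\) and set

\[M(z)\coloneqq M_{0}+z^{-1}M_{1}\quad(z\in\mathbb{C}\setminus\{0\}).\]

Then \(\sup_{z\in\mathbb{C}_{\mathrm{Re}>\nu_{0}}}\|M(z)\|\leq\|M_{0}\|+\|M_{1}\|/\nu_{0}<\infty\) for every \(\nu_{0}>0\), so \(M\) is a material law for which every \(\nu>0\) is admissible, and the corresponding material law operator is \(M(\partial_{t})=M_{0}+\partial_{t}^{-1}M_{1}\).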
The well-posedness theorem for evolutionary equations reads as follows. It is applicable both to classical examples like the heat equation and Maxwell's equations and to non-standard ones like time-nonlocal examples from elasticity theory, mixed type equations or equations with coefficients that are nonlocal in space. A closed and densely defined operator in \(\mathcal{H}\) can be (canonically) lifted to \(\mathrm{L}_{2,\nu}(\mathbb{R};\mathcal{H})\) by pointwise application; this (abstract) multiplication operator will be denoted by the same name. Note that if the initial operator is skew-selfadjoint, so is the lifted one.
**Theorem 3.2** (Picard's Theorem, see, e.g., [5, Theorem 6.2.1]).: _Let \(\nu>0\), \(\mathcal{H}\) a Hilbert space, \(A\colon\operatorname{dom}(A)\subseteq\mathcal{H}\to\mathcal{H}\) a skew-selfadjoint operator. Let \(M\colon\operatorname{dom}(M)\subseteq\mathbb{C}\to\mathcal{L}_{\mathrm{b}}( \mathcal{H})\) be a material law with \(\nu>\mathrm{s}_{\mathrm{b}}(M)\). If, there exists \(c>0\) such that for all \(z\in\mathbb{C}_{\mathrm{Re}>\nu}\) we have_
\[\operatorname{Re}zM(z)\geq c.\]
_Then, the operator sum_
\[(\partial_{t}M(\partial_{t})+A)\colon\operatorname{dom}(\partial_{t}M( \partial_{t}))\cap\operatorname{dom}(A)\subseteq\mathrm{L}_{2,\nu}(\mathbb{R };\mathcal{H})\to\mathrm{L}_{2,\nu}(\mathbb{R};\mathcal{H})\]
_is closable. Its closure is continuously invertible with_
\[\|\overline{\partial_{t}M(\partial_{t})+A}^{-1}\|\leq\tfrac{1}{c}.\]
_Moreover, \(\mathcal{S}\coloneqq\overline{\partial_{t}M(\partial_{t})+A}^{-1}\) is the material law operator corresponding to the material law_
\[z\mapsto(zM(z)+A)^{-1},\]
\(\mathcal{S}\) _is causal; that is,_
\[\forall a\in\mathbb{R}:\mathds{1}_{(-\infty,a)}\mathcal{S}\mathds{1}_{(- \infty,a)}=\mathds{1}_{(-\infty,a)}\mathcal{S};\]
_and, if \(f\in\operatorname{dom}(\partial_{t})\), then \(\mathcal{S}f\in\operatorname{dom}(\partial_{t})\cap\operatorname{dom}(A) \subseteq\operatorname{dom}(\partial_{t}M(\partial_{t}))\cap\operatorname{ dom}(A)\)._
## 4. Compactness in the Locally Uniform Weak Operator Topology
In this section we consider the space of holomorphic functions that map into the space of bounded operators between two Hilbert spaces. We endow this space with an initial topology that is motivated by the weak operator topology. The new aspect is that we additionally have holomorphic dependence on a complex parameter.
**Lemma 4.1**.: _Let \(U\subseteq\mathbb{C}\) be open and let \(\mathcal{X}\), \(\mathcal{Y}\) be Hilbert spaces. We regard \(\operatorname{Hol}(U,\mathcal{L}_{\mathrm{b}}(\mathcal{X},\mathcal{Y}))\) and \(\operatorname{Hol}(U,\operatorname{Ses}_{\mathrm{b}}(\mathcal{X},\mathcal{Y}))\), where \(\mathcal{L}_{\mathrm{b}}(\mathcal{X},\mathcal{Y})\) is endowed with the operator norm and the bounded sesquilinear forms \(\operatorname{Ses}_{\mathrm{b}}(\mathcal{X},\mathcal{Y})\) are endowed with the sesquilinear norm, i.e., \(\|\sigma\|\coloneqq\sup_{\|\varphi\|=1,\|\psi\|=1}[\sigma(\varphi,\psi)|\) for \(\sigma\in\operatorname{Ses}_{\mathrm{b}}(\mathcal{X},\mathcal{Y})\). Then, the mapping_
\[G\colon\left\{\begin{array}{rcl}\operatorname{Hol}(U,\mathcal{L}_{\mathrm{b} }(\mathcal{X},\mathcal{Y}))&\to&\operatorname{Hol}(U,\operatorname{Ses}_{ \mathrm{b}}(\mathcal{X},\mathcal{Y})),\\ f&\mapsto&z\mapsto\langle f(z)\cdot,\cdot\rangle_{\mathcal{Y}}.\end{array}\right. \tag{2}\]
_is bijective and linear._
Proof.: It is well-known that mapping an operator to the corresponding sesquilinear form is a linear and isometric bijection from \(\mathcal{L}_{\mathrm{b}}(\mathcal{X},\mathcal{Y})\) to \(\mathrm{Ses}_{\mathrm{b}}(\mathcal{X},\mathcal{Y})\). Hence, \(G\) is a linear bijection from \(\mathcal{L}_{\mathrm{b}}(\mathcal{X},\mathcal{Y})^{U}\) to \(\mathrm{Ses}_{\mathrm{b}}(\mathcal{X},\mathcal{Y})^{U}\).
Let \(f\in\mathrm{Hol}(U,\mathcal{L}_{\mathrm{b}}(\mathcal{X},\mathcal{Y}))\). Then the complex derivative \(f^{\prime}\) exists. Thus,
\[\lim_{w\to z}\Bigl{\|}\frac{G(f)(z)-G(f)(w)}{z-w}-G(f^{\prime})(z) \Bigr{\|}_{\mathrm{Ses}_{\mathrm{b}}(\mathcal{X},\mathcal{Y})}\] \[=\lim_{w\to z}\Bigl{\|}\frac{G(f)(z)-G(f)(w)-G(f^{\prime})(z)(z-w)}{ z-w}\Bigr{\|}_{\mathrm{Ses}_{\mathrm{b}}(\mathcal{X},\mathcal{Y})}\] \[=\lim_{w\to z}\Bigl{\|}G\Bigl{(}\frac{f(z)-f(w)-f^{\prime}(z)(z-w) }{z-w}\Bigr{)}\Bigr{\|}_{\mathrm{Ses}_{\mathrm{b}}(\mathcal{X},\mathcal{Y})}\] \[=\lim_{w\to z}\Bigl{\|}\frac{f(z)-f(w)}{z-w}-f^{\prime}(z)\Bigr{\|} _{\mathcal{L}_{\mathrm{b}}(\mathcal{X},\mathcal{Y})}=0,\]
which implies \(G(f)\in\mathrm{Hol}(U,\mathrm{Ses}_{\mathrm{b}}(\mathcal{X},\mathcal{Y}))\) with \(G(f)^{\prime}(z)=G(f^{\prime})(z)\). For \(\sigma\in\mathrm{Hol}(U,\mathrm{Ses}_{\mathrm{b}}(\mathcal{X},\mathcal{Y}))\) we can analogously prove \(G^{-1}(\sigma)\in\mathrm{Hol}(U,\mathcal{L}_{\mathrm{b}}(\mathcal{X},\mathcal{ Y}))\) with \(G^{-1}(\sigma)^{\prime}(z)=G^{-1}(\sigma^{\prime})(z)\).
**Definition 4.2**.: Let \(U\subseteq\mathbb{C}\) be an open set and \(\mathcal{X}\), \(\mathcal{Y}\) Hilbert spaces. For every \(\varphi\in\mathcal{X}\) and \(\psi\in\mathcal{Y}\) we define (cf. Remark A.1)
\[\Lambda_{\varphi,\psi}\colon\left\{\begin{array}{rcl}\mathrm{Hol}(U,\mathcal{ L}_{\mathrm{b}}(\mathcal{X},\mathcal{Y}))&\to&\mathrm{Hol}(U,\mathbb{C}),\\ f&\mapsto&\langle f(\cdot)\varphi,\psi\rangle_{\mathcal{Y}},\end{array}\right.\]
and
\[\tilde{\Lambda}_{\varphi,\psi}\colon\left\{\begin{array}{rcl}\mathrm{Hol}(U, \mathrm{Ses}_{\mathrm{b}}(\mathcal{X},\mathcal{Y}))&\to&\mathrm{Hol}(U, \mathbb{C}),\\ \sigma&\mapsto&\sigma(\cdot)(\varphi,\psi).\end{array}\right.\]
Moreover, we define the _locally uniform weak operator topology_ on \(\mathrm{Hol}(U,\mathcal{L}_{\mathrm{b}}(\mathcal{X},\mathcal{Y}))\) as the initial topology w.r.t. the mappings \(\Lambda_{\varphi,\psi}\) for \(\varphi\in\mathcal{X}\) and \(\psi\in\mathcal{Y}\) and denote it by \(\mathcal{T}_{\Lambda}\). Analogously, we denote the initial topology on \(\mathrm{Hol}(U,\mathrm{Ses}_{\mathrm{b}}(\mathcal{X},\mathcal{Y}))\) w.r.t. the mappings \(\tilde{\Lambda}_{\varphi,\psi}\) for all \(\varphi\in\mathcal{X}\) and \(\psi\in\mathcal{Y}\) by \(\mathcal{T}_{\tilde{\Lambda}}\).
_Remark 4.3_.: The mappings \(\Lambda_{\varphi,\psi}\) are linear and \(\bigcap\ker\Lambda_{\varphi,\psi}=\{0\}\). Therefore, the corresponding initial topology \(\mathcal{T}_{\Lambda}\) is Hausdorff and makes \(\mathrm{Hol}(U,\mathcal{L}_{\mathrm{b}}(\mathcal{X},\mathcal{Y}))\) a topological vector space.
**Lemma 4.4**.: _Let \(G\colon\mathrm{Hol}(U,\mathcal{L}_{\mathrm{b}}(\mathcal{X},\mathcal{Y})) \to\mathrm{Hol}(U,\mathrm{Ses}_{\mathrm{b}}(\mathcal{X},\mathcal{Y}))\) be the linear bijection from (2). Then \(G\colon(\mathrm{Hol}(U,\mathcal{L}_{\mathrm{b}}(\mathcal{X},\mathcal{Y})), \mathcal{T}_{\Lambda})\to(\mathrm{Hol}(U,\mathrm{Ses}_{\mathrm{b}}(\mathcal{X}, \mathcal{Y})),\mathcal{T}_{\tilde{\Lambda}})\) is a linear homeomorphism._
Figure 1. Initial topology on \(\mathrm{Hol}(U,\mathcal{L}_{\mathrm{b}}(\mathcal{X},\mathcal{Y}))\) and \(\mathrm{Hol}(U,\mathrm{Ses}_{\mathrm{b}}(\mathcal{X},\mathcal{Y}))\)
Proof.: By definition of \(G\), \(\Lambda_{\varphi,\psi}\) and \(\tilde{\Lambda}_{\varphi,\psi}\) we can immediately see that \(\tilde{\Lambda}_{\varphi,\psi}\circ G=\Lambda_{\varphi,\psi}\) and since \(G\) is invertible we also have \(\tilde{\Lambda}_{\varphi,\psi}=\Lambda_{\varphi,\psi}\circ G^{-1}\). Hence, the diagram in Figure 1 commutes. Since initial topologies are transitive, we conclude that \(G\) is a homeomorphism.
**Lemma 4.5**.: _A function \(\sigma\colon U\to\operatorname{Ses}_{\mathrm{b}}(\mathcal{X},\mathcal{Y})\) is holomorphic if and only if \(\sigma(\cdot)(\varphi,\psi)\) is holomorphic for all \(\varphi\in\mathcal{X}\) and \(\psi\in\mathcal{Y}\)._
Proof.: In view of Lemma 4.1 and Theorem A.7, it suffices to prove that a set of operators \(\mathcal{B}\subseteq\mathcal{L}_{\mathrm{b}}(\mathcal{X},\mathcal{Y})\) is bounded (w.r.t. the operator norm) if and only if
\[\forall\varphi\in\mathcal{X}\forall\psi\in\mathcal{Y}:\sup_{A\in\mathcal{B}} \lvert\langle A\varphi,\psi\rangle_{\mathcal{Y}}\rvert<\infty. \tag{3}\]
Obviously, boundedness of \(\mathcal{B}\) implies (3).
Conversely, we can keep \(\varphi\in\mathcal{X}\) fixed and write \(\iota_{A\varphi}\in\mathcal{L}_{\mathrm{b}}(\mathcal{Y},\mathbb{C})\) for the functional with Riesz representation \(A\varphi\in\mathcal{Y}\), i.e., \(\psi\mapsto\langle\psi,A\varphi\rangle_{\mathcal{Y}}\). Then, (3) and the uniform boundedness principle yield
\[\forall\psi\in\mathcal{Y}:\sup_{A\in\mathcal{B}}\lvert\iota_{A\varphi}(\psi) \rvert<\infty\implies\sup_{A\in\mathcal{B}}\lVert A\varphi\rVert_{\mathcal{Y} }<\infty.\]
Since this holds true for every \(\varphi\in\mathcal{X}\), another iteration of the uniform boundedness principle shows that \(\mathcal{B}\) is bounded.
Similarly to Banach-Alaoglu's theorem (for the weak operator topology) we show that
\[K\coloneqq\{\sigma\in\operatorname{Hol}(U,\operatorname{Ses}_{\mathrm{b}}( \mathcal{X},\mathcal{Y}))\,|\,\forall z\in U:\lVert\sigma(z)\rVert\leq 1\}\]
(the analogue of the closed ball in our setting) is compact.
**Theorem 4.6**.: \(K\) _is a \(\mathcal{T}_{\bar{\Lambda}}\)-compact subset of \(\operatorname{Hol}(U,\operatorname{Ses}_{\mathrm{b}}(\mathcal{X},\mathcal{Y}))\). If both \(\mathcal{X}\) and \(\mathcal{Y}\) are separable, then \((K,\mathcal{T}_{\bar{\Lambda}})\) is metrisable._
Proof.: We define the following mappings
\[\iota_{\Pi}\colon\left\{\begin{array}{rcl}\operatorname{Hol}(U, \operatorname{Ses}_{\mathrm{b}}(\mathcal{X},\mathcal{Y}))&\to&\prod_{( \varphi,\psi)\in\mathcal{X}\times\mathcal{Y}}\operatorname{Hol}(U, \mathbb{C}),\\ \sigma&\mapsto&\big{(}\sigma(\cdot)(\varphi,\psi)\big{)}_{(\varphi,\psi)\in \mathcal{X}\times\mathcal{Y}},\end{array}\right.\]
and
\[\pi_{\zeta,\eta}\colon\left\{\begin{array}{rcl}\prod_{(\varphi, \psi)\in\mathcal{X}\times\mathcal{Y}}\operatorname{Hol}(U,\mathbb{C})&\to& \operatorname{Hol}(U,\mathbb{C}),\\ \big{(}f(\cdot)(\varphi,\psi)\big{)}_{(\varphi,\psi)\in\mathcal{X}\times \mathcal{Y}}&\mapsto&f(\cdot)(\zeta,\eta).\end{array}\right.\]
We can immediately see that \(\tilde{\Lambda}_{\varphi,\psi}=\pi_{\varphi,\psi}\circ\iota_{\Pi}\). Hence, the diagram in Figure 2 is commutative. If we endow \(\prod_{(\varphi,\psi)}\operatorname{Hol}(U,\mathbb{C})\) with the product topology, i.e., the
initial topology w.r.t. the \(\pi_{\zeta,\eta}\) and \(\iota_{\Pi}(\operatorname{Hol}(U,\operatorname{Ses_{b}}(\mathcal{X},\mathcal{Y}))) \subseteq\prod_{(\varphi,\psi)}\operatorname{Hol}(U,\mathbb{C})\) with the trace topology, the transitivity of initial topologies and the commutativity of the diagram imply that \(\iota_{\Pi}\) is a homeomorphism onto its range.
Therefore, \(K\) is compact in \(\operatorname{Hol}(U,\operatorname{Ses_{b}}(\mathcal{X},\mathcal{Y}))\), if and only if \(\iota_{\Pi}(K)\) is compact in \(\prod_{(\varphi,\psi)}\operatorname{Hol}(U,\mathbb{C})\).
First, we show that \(\iota_{\Pi}\big{(}\operatorname{Hol}(U,\operatorname{Ses_{b}}(\mathcal{X}, \mathcal{Y}))\big{)}\) is closed in \(\prod_{(\varphi,\psi)}\operatorname{Hol}(U,\mathbb{C})\): Let \((\sigma_{i})_{i\in I}\) be a net in \(\operatorname{Hol}(U,\operatorname{Ses_{b}}(\mathcal{X},\mathcal{Y}))\) such that \((\iota_{\Pi}\sigma_{i})_{i\in I}\) converges to \(f\in\prod_{(\varphi,\psi)}\operatorname{Hol}(U,\mathbb{C})\) (w.r.t. the product topology). The sesquilinearity of the \(\sigma_{i}\) exactly means
\[\sigma_{i}(\cdot)(\varphi_{1}+\alpha\varphi_{2},\psi_{1}+\beta \psi_{2})\\ =\sigma_{i}(\cdot)(\varphi_{1},\psi_{1})+\alpha\sigma_{i}(\cdot) (\varphi_{2},\psi_{1})+\overline{\beta}\sigma_{i}(\cdot)(\varphi_{1},\psi_{2} )+\alpha\overline{\beta}\sigma_{i}(\cdot)(\varphi_{2},\psi_{2})\]
or equivalently
\[\pi_{\varphi_{1}+\alpha\varphi_{2},\psi_{1}+\beta\psi_{2}}\iota_{\Pi}\sigma_{ i}=\pi_{\varphi_{1},\psi_{1}}\iota_{\Pi}\sigma_{i}+\alpha\pi_{\varphi_{2},\psi_{1}} \iota_{\Pi}\sigma_{i}+\overline{\beta}\pi_{\varphi_{1},\psi_{2}}\iota_{\Pi} \sigma_{i}+\alpha\overline{\beta}\pi_{\varphi_{2},\psi_{2}}\iota_{\Pi}\sigma_ {i}\]
for all \(\varphi_{1},\varphi_{2}\in\mathcal{X}\), \(\psi_{1},\psi_{2}\in\mathcal{Y}\) and \(\alpha,\beta\in\mathbb{C}\). By continuity of the projections \(\pi_{\zeta,\eta}\) we conclude that the last identity also holds if we replace \(\iota_{\Pi}\sigma_{i}\) by its limit \(f\), i.e., \(f(z)\) is sesquilinear for all \(z\in U\). Therefore, Lemma 4.5 implies that \(\iota_{\Pi}^{-1}f\in\operatorname{Hol}(U,\operatorname{Ses_{b}}(\mathcal{X}, \mathcal{Y}))\) exists. Hence, \(\iota_{\Pi}\big{(}\operatorname{Hol}(U,\operatorname{Ses_{b}}(\mathcal{X}, \mathcal{Y}))\big{)}\) is closed in \(\prod_{(\varphi,\psi)}\operatorname{Hol}(U,\mathbb{C})\).
It is straightforward to show
\[\iota_{\Pi}(K)=\prod_{(\varphi,\psi)\in\mathcal{X}\times\mathcal{Y}}\operatorname{Hol}(U,\overline{\mathbf{B}}_{\|\varphi\|_{\mathcal{X}}\|\psi\|_{\mathcal{Y}}}(0))\cap\iota_{\Pi}\big{(}\operatorname{Hol}(U,\operatorname{Ses_{b}}(\mathcal{X},\mathcal{Y}))\big{)}.\]
Note that by Corollary A.6, \(\operatorname{Hol}(U,\overline{\mathbf{B}}_{\|\varphi\|\|\psi\|}(0))\) is compact for every \(\varphi\in\mathcal{X}\) and \(\psi\in\mathcal{Y}\). By Tychonoff's theorem, \(\prod_{(\varphi,\psi)}\operatorname{Hol}(U,\overline{\mathbf{B}}_{\|\varphi\|\|\psi\|}(0))\) is compact and therefore \(\iota_{\Pi}(K)\) is compact as the intersection of a compact and a closed set. This finally implies the compactness of \(K\).
In the separable case, let \(X\subseteq\mathcal{X}\) and \(Y\subseteq\mathcal{Y}\) be countable and dense. Then, we replace
\[\prod_{(\varphi,\psi)\in\mathcal{X}\times\mathcal{Y}}\operatorname{Hol}(U, \mathbb{C})\quad\text{ with }\quad\prod_{(\varphi,\psi)\in X\times Y} \operatorname{Hol}(U,\mathbb{C})\]
in the definition of \(\iota_{\Pi}\), and we only consider \(\pi_{\zeta,\eta}\) and \(\tilde{\Lambda}_{\varphi,\psi}\) with \(\zeta\in X\) and \(\eta\in Y\) in Figure 2. This gives rise to a new topology \(\mathcal{T}_{\aleph_{0}}\) on \(\operatorname{Hol}(U,\operatorname{Ses_{b}}(\mathcal{X},\mathcal{Y}))\). Since two bounded sesquilinear forms that coincide on \(X\times Y\) also coincide on \(\mathcal{X}\times\mathcal{Y}\), we still have that \(\iota_{\Pi}\) is a homeomorphism onto its range w.r.t. \(\mathcal{T}_{\aleph_{0}}\). Thus, \(\prod_{(\varphi,\psi)}\operatorname{Hol}(U,\mathbb{C})\) being metrisable as the countable product of metrisable spaces (cf. Remark A.1) implies that \((\operatorname{Hol}(U,\operatorname{Ses_{b}}(\mathcal{X},\mathcal{Y})), \mathcal{T}_{\aleph_{0}})\) is metrisable. The identity operator
\[\operatorname{id}\colon(K,\mathcal{T}_{\tilde{\Lambda}})\to(K,\mathcal{T}_{ \aleph_{0}})\]
clearly is continuous and bijective. Moreover, we have just shown that its domain is compact and that its codomain is metrisable and hence Hausdorff. Therefore, \(\operatorname{id}\) is a homeomorphism implying that \((K,\mathcal{T}_{\tilde{\Lambda}})\) is metrisable.
Lemma 4.4 now immediately yields:
**Corollary 4.7**.: _The set \(R\coloneqq\{f\in\operatorname{Hol}(U,\mathcal{L}_{\mathrm{b}}(\mathcal{X}, \mathcal{Y}))\,|\,\forall z\in U:\|f(z)\|\leq 1\}\) is a \(\mathcal{T}_{\Lambda}\)-compact subset of \(\operatorname{Hol}(U,\mathcal{L}_{\mathrm{b}}(\mathcal{X},\mathcal{Y}))\). If both \(\mathcal{X}\) and \(\mathcal{Y}\) are separable, then \((R,\mathcal{T}_{\Lambda})\) is metrisable._
## 5. The Parameterised Nonlocal H-Topology
We now come to the main part of this paper, the introduction and discussion of the parameterised nonlocal H-topology, or parameterised Schur topology.
From now on, we regard a Hilbert space \(\mathcal{H}\) that can be orthogonally decomposed into two closed subspaces
\[\mathcal{H}=\mathcal{H}_{0}\oplus\mathcal{H}_{1},\]
and an open subset \(U\) of \(\mathbb{C}\). Furthermore, let \(\alpha\in(0,\infty)^{2\times 2}\) be the matrix
\[\alpha=\begin{pmatrix}\alpha_{00}&\alpha_{01}\\ \alpha_{10}&\alpha_{11}\end{pmatrix}.\]
We regard the following space of holomorphic functions
\[\mathfrak{M}(U)\coloneqq\{M\in\operatorname{Hol}(U,\mathcal{L}_{\mathrm{b}}( \mathcal{H}))\,|\,\forall z\in U:M(z)\in\mathcal{M}(\mathcal{H}_{0},\mathcal{H }_{1})\}. \tag{4}\]
Similarly to [23, Sec. 5], we will introduce the topology on \(\mathfrak{M}(U)\) as an initial topology w.r.t. the "projections" onto the components of the Schur complement and expect to obtain analogous properties. Instead of the weak operator topology we will use the locally uniform weak operator topology, see Definition 4.2. To wit, we regard the following "projections"
\[\Lambda_{00}\colon\left\{\begin{array}{rcl}\mathfrak{M}(U)& \rightarrow&\operatorname{Hol}(U,\mathcal{L}_{\mathrm{b}}(\mathcal{H}_{0})) \\ M&\mapsto&M_{00}(\cdot)^{-1}\end{array}\right. \tag{5a}\] \[\Lambda_{01}\colon\left\{\begin{array}{rcl}\mathfrak{M}(U)& \rightarrow&\operatorname{Hol}(U,\mathcal{L}_{\mathrm{b}}(\mathcal{H}_{1}, \mathcal{H}_{0}))\\ M&\mapsto&M_{00}(\cdot)^{-1}M_{01}\end{array}\right.\] (5b) \[\Lambda_{10}\colon\left\{\begin{array}{rcl}\mathfrak{M}(U)& \rightarrow&\operatorname{Hol}(U,\mathcal{L}_{\mathrm{b}}(\mathcal{H}_{0}, \mathcal{H}_{1}))\\ M&\mapsto&M_{10}M_{00}(\cdot)^{-1}\end{array}\right.\] (5c) \[\Lambda_{11}\colon\left\{\begin{array}{rcl}\mathfrak{M}(U)& \rightarrow&\operatorname{Hol}(U,\mathcal{L}_{\mathrm{b}}(\mathcal{H}_{1}))\\ M&\mapsto&M_{11}-M_{10}M_{00}(\cdot)^{-1}M_{01}\end{array}\right. \tag{5d}\]
Note that these mappings are well-defined by the definition of \(\mathfrak{M}(U)\) and Lemmas A.2, A.3 and A.4.
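For illustration only (and not as part of the argument), the following minimal NumPy sketch evaluates the four components in (5) for a finite-dimensional toy example; the block sizes, the matrix \(M_{0}\) and the sample point \(z_{0}\) are hypothetical choices.

```python
import numpy as np

# Hypothetical finite-dimensional illustration: H0 = H1 = C^2 and
# M(z) = M_0 + z*I, holomorphic in z with M_00(z) invertible near z0.
M_0 = np.array([[4.0, 0.5, 1.0, 0.0],
                [0.2, 3.0, 0.0, 1.0],
                [1.0, 0.0, 5.0, 0.3],
                [0.0, 1.0, 0.1, 4.0]], dtype=complex)

def M(z):
    return M_0 + z * np.eye(4)

def schur_components(Mz, k=2):
    """Return (Lambda00, Lambda01, Lambda10, Lambda11) of a 2x2-block matrix."""
    M00, M01, M10, M11 = Mz[:k, :k], Mz[:k, k:], Mz[k:, :k], Mz[k:, k:]
    M00_inv = np.linalg.inv(M00)
    return (M00_inv,                       # Lambda00: M00^{-1}
            M00_inv @ M01,                 # Lambda01: M00^{-1} M01
            M10 @ M00_inv,                 # Lambda10: M10 M00^{-1}
            M11 - M10 @ M00_inv @ M01)     # Lambda11: M11 - M10 M00^{-1} M01

z0 = 0.3 + 0.7j                            # an arbitrary sample point
L00, L01, L10, L11 = schur_components(M(z0))
print(L11)                                 # the Schur complement at z0
```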
**Definition 5.1**.: Let \(\mathfrak{M}(U)\) be the set defined in (4) and \(\Lambda_{00}\), \(\Lambda_{01}\), \(\Lambda_{10}\), \(\Lambda_{11}\) the mappings defined in (5). Then we define the _parameterised nonlocal_ H_-topology_ or _parameterised Schur topology_, \(\tau_{\operatorname{Hol}}(\mathcal{H}_{0},\mathcal{H}_{1})\), as the initial topology on \(\mathfrak{M}(U)\) w.r.t. the mappings \(\Lambda_{00}\), \(\Lambda_{01}\), \(\Lambda_{10}\), \(\Lambda_{11}\), where the codomains are each endowed with the corresponding locally uniform weak operator topology \(\mathcal{T}_{\Lambda}\) (Definition 4.2).
_Remark 5.2_.: Comparing Definition 2.1 and Definition 5.1, we can deduce that if a net \((M_{i})_{i\in I}\) in \(\mathfrak{M}(U)\)\(\tau_{\operatorname{Hol}}(\mathcal{H}_{0},\mathcal{H}_{1})\)-converges to \(M\in\mathfrak{M}(U)\), then for each \(z\in U\) the net \((M_{i}(z))_{i\in I}\) in \(\mathcal{M}(\mathcal{H}_{0},\mathcal{H}_{1})\) converges to \(M(z)\in\mathcal{M}(\mathcal{H}_{0},\mathcal{H}_{1})\) w.r.t. \(\tau(\mathcal{H}_{0},\mathcal{H}_{1})\).
_Remark 5.3_.: Evidently, we cannot expect a statement similar to Remark 4.3 to hold. Neither is \(\mathfrak{M}(U)\) a vector space, nor are the mappings \(\Lambda_{00}\), \(\Lambda_{01}\), \(\Lambda_{10}\), \(\Lambda_{11}\) linear. Just considering \(\Lambda_{00}\) and the fact that in general \(1/z+1/w\neq 1/(z+w)\) for \(z,w\in\mathbb{C}\), we see that addition is not even continuous when staying within \(\mathfrak{M}(U)\). Scalar multiplication, however, is continuous as a mapping from \((\mathbb{C}\setminus\{0\})\times\mathfrak{M}(U)\) to \(\mathfrak{M}(U)\), as one can show using nets and the definition of \(\mathcal{T}_{\Lambda}\). Moreover, \(\Lambda_{00}\), \(\Lambda_{01}\), \(\Lambda_{10}\), \(\Lambda_{11}\) separate points, i.e., \(\mathfrak{M}(U)\) is Hausdorff.
We now regard the space
\[\mathfrak{M}(U,\alpha)\coloneqq\{f\in\mathfrak{M}(U)\,|\,\forall z\in U:f(z) \in\mathcal{M}(\alpha)\}\]
equipped with the trace topology of \((\mathfrak{M}(U),\tau_{\operatorname{Hol}}(\mathcal{H}_{0},\mathcal{H}_{1}))\) and the spaces
\[\mathcal{A}(U,\alpha_{00},\alpha_{11}) \coloneqq\bigg{\{}A\in\operatorname{Hol}(U,\mathcal{L}_{\mathrm{b}}( \mathcal{H}_{0}))\,\bigg{|}\,\forall z\in U:\operatorname{Re}A(z)^{-1}\geq \alpha_{00},\operatorname{Re}A(z)\geq\frac{1}{\alpha_{11}}\bigg{\}},\] \[\mathcal{B}(U,\alpha_{01}) \coloneqq\bigg{\{}B\in\operatorname{Hol}(U,\mathcal{L}_{\mathrm{ b}}(\mathcal{H}_{0},\mathcal{H}_{1}))\,\bigg{|}\,\forall z\in U:\|B(z)\|\leq \alpha_{01}\bigg{\}},\] \[\mathcal{C}(U,\alpha_{10}) \coloneqq\bigg{\{}C\in\operatorname{Hol}(U,\mathcal{L}_{\mathrm{ b}}(\mathcal{H}_{1},\mathcal{H}_{0}))\,\bigg{|}\,\forall z\in U:\|C(z)\|\leq \alpha_{10}\bigg{\}},\] \[\mathcal{D}(U,\alpha_{00},\alpha_{11}) \coloneqq\bigg{\{}D\in\operatorname{Hol}(U,\mathcal{L}_{\mathrm{ b}}(\mathcal{H}_{1}))\,\bigg{|}\,\forall z\in U:\operatorname{Re}D(z)\geq \alpha_{00},\operatorname{Re}D(z)^{-1}\geq\frac{1}{\alpha_{11}}\bigg{\}}\]
equipped with the traces of the respective locally uniform weak operator topology.
_Remark 5.4_.: Let \(\Lambda_{00}\), \(\Lambda_{01}\), \(\Lambda_{10}\) and \(\Lambda_{11}\) be the mappings from (5). Then their following restrictions to \(\mathfrak{M}(U,\alpha)\) are well-defined and continuous:
\[\Lambda_{00}\colon\left\{\begin{array}{rcl}\mathfrak{M}(U,\alpha)&\to& \mathcal{A}(U,\alpha_{00},\alpha_{11})\\ M&\mapsto&M_{00}(\cdot)^{-1}\\ \end{array}\right.\] \[\Lambda_{01}\colon\left\{\begin{array}{rcl}\mathfrak{M}(U,\alpha)& \to&\mathcal{B}(U,\alpha_{01})\\ M&\mapsto&M_{00}(\cdot)^{-1}M_{01}\end{array}\right.\] \[\Lambda_{10}\colon\left\{\begin{array}{rcl}\mathfrak{M}(U,\alpha) &\to&\mathcal{C}(U,\alpha_{10})\\ M&\mapsto&M_{10}M_{00}(\cdot)^{-1}\end{array}\right.\] \[\Lambda_{11}\colon\left\{\begin{array}{rcl}\mathfrak{M}(U,\alpha) &\to&\mathcal{D}(U,\alpha_{00},\alpha_{11})\\ M&\mapsto&M_{11}-M_{10}M_{00}(\cdot)^{-1}M_{01}\end{array}\right.\]
In fact, they even induce the topology on \(\mathfrak{M}(U,\alpha)\) as their initial topology (the diagram in Figure 4 is commutative and initial topologies are transitive).
**Lemma 5.5**.: \(\mathcal{A}(U,\alpha_{00},\alpha_{11})\)_, \(\mathcal{B}(U,\alpha_{01})\), \(\mathcal{C}(U,\alpha_{10})\) and \(\mathcal{D}(U,\alpha_{00},\alpha_{11})\) are compact. If \(\mathcal{H}\) is separable, then these sets are metrisable._
Proof.: Note that
\[\mathcal{A}(U,\alpha_{00},\alpha_{11})\subseteq\{f\in\operatorname{Hol}(U, \mathcal{L}_{\mathrm{b}}(\mathcal{H}_{0}))\,|\,\forall z\in U:\|f(z)\|\leq C\}\]
for a suitable \(C\geq 0\) (see Lemma A.8). Hence by Corollary 4.7, the set \(\mathcal{A}(U,\alpha_{00},\alpha_{11})\) is contained in a compact set and is therefore relatively compact.
For compactness, we will show that \(\mathcal{A}(U,\alpha_{00},\alpha_{11})\) even is closed: Let \((A_{i})_{i\in I}\) be a net in \(\mathcal{A}(U,\alpha_{00},\alpha_{11})\) converging to \(A\in\operatorname{Hol}(U,\mathcal{L}_{\mathrm{b}}(\mathcal{H}_{0}))\). Then for each \(z\in U\), \(i\in I\) and \(\varphi\in\mathcal{H}_{0}\),
\[\frac{1}{\alpha_{11}}\|\varphi\|_{\mathcal{H}_{0}}^{2}\leq\operatorname{Re} \langle A_{i}(z)\varphi,\varphi\rangle_{\mathcal{H}_{0}}\]
holds and taking the limit in \(i\in I\) implies
\[\frac{1}{\alpha_{11}}\leq\operatorname{Re}A(z).\]
These last two inequalities also show that \(A_{i}(z)^{-1}\in\mathcal{L}_{\mathrm{b}}(\mathcal{H}_{0}),i\in I\) and \(A(z)^{-1}\in\mathcal{L}_{\mathrm{b}}(\mathcal{H}_{0})\) exist for every \(z\in U\) (Lemma A.8). Next, we have
\[\alpha_{00}\|\varphi\|_{\mathcal{H}_{0}}^{2}\leq\operatorname{Re}\langle A_{i}(z)^{-1}\varphi,\varphi\rangle_{\mathcal{H}_{0}} \tag{6a}\]
for each \(z\in U\), \(i\in I\) and \(\varphi\in\mathcal{H}_{0}\). Using the substitution \(\psi=A_{i}(z)^{-1}\varphi\), we get
\[\alpha_{00}\|A_{i}(z)\psi\|_{\mathcal{H}_{0}}^{2}\leq\operatorname{Re}\langle\psi,A_{i}(z)\psi\rangle_{\mathcal{H}_{0}}=\operatorname{Re}\langle A_{i}(z)\psi,\psi\rangle_{\mathcal{H}_{0}} \tag{6b}\]
for each \(z\in U\), \(i\in I\) and \(\psi\in\mathcal{H}_{0}\). The Cauchy-Schwarz inequality yields
\[\alpha_{00}\frac{|\langle A_{i}(z)\psi,A(z)\psi\rangle_{\mathcal{H}_{0}}|^{2}}{\|A(z)\psi\|_{\mathcal{H}_{0}}^{2}}\leq\alpha_{00}\|A_{i}(z)\psi\|_{\mathcal{H}_{0}}^{2}\leq\operatorname{Re}\langle A_{i}(z)\psi,\psi\rangle_{\mathcal{H}_{0}} \tag{7}\]
in case \(A(z)\psi\neq 0\). Taking the limits in the two scalar products, we obtain
\[\alpha_{00}\|A(z)\psi\|_{\mathcal{H}_{0}}^{2}\leq\operatorname{Re}\langle A(z)\psi,\psi\rangle_{\mathcal{H}_{0}}\]
for each \(z\in U\) and \(\psi\in\mathcal{H}_{0}\) (the case \(A(z)\psi=0\) is trivial). The substitution \(\psi=A(z)^{-1}\varphi\) then implies \(\operatorname{Re}A(z)^{-1}\geq\alpha_{00}\).
The same proof also shows that \(\mathcal{D}(U,\alpha_{00},\alpha_{11})\) is compact. The sets \(\mathcal{B}(U,\alpha_{01})\) and \(\mathcal{C}(U,\alpha_{10})\) are already of a form that allows us to directly employ Corollary 4.7.
If \(\mathcal{H}\) is separable, both \(\mathcal{H}_{0}\) and \(\mathcal{H}_{1}\) are separable too. Hence, Corollary 4.7 yields the desired metrisability.
**Theorem 5.6**.: \(\mathfrak{M}(U,\alpha)\) _equipped with the trace topology of \(\mathfrak{M}(U)\) is compact. If \(\mathcal{H}\) is separable, then \(\mathfrak{M}(U,\alpha)\) is metrisable and thus sequentially compact._
Proof.: If we endow
\[S\coloneqq\mathcal{A}(U,\alpha_{00},\alpha_{11})\times\mathcal{B}(U,\alpha_{0 1})\times\mathcal{C}(U,\alpha_{10})\times\mathcal{D}(U,\alpha_{00},\alpha_{11})\]
with the product topology, \(S\) is compact (metrisable) as the product of finitely many compact (metrisable) spaces (see Lemma 5.5). Moreover, the mapping
\[(\Lambda_{00},\Lambda_{01},\Lambda_{10},\Lambda_{11})\colon\mathfrak{M}(U,\alpha)\to S\]
is continuous.

Figure 4. Two equivalent ways of defining the initial topology on \(\mathfrak{M}(U,\alpha)\)

For \((A,B,C,D)\in S\) we define (note that \(A(z)\) and \(D(z)\) are bounded and invertible for \(z\in U\) by Lemma A.8, and note Lemmas A.2, A.3 and A.4) the block operator
\[z\mapsto M(z)\coloneqq\begin{pmatrix}A(z)^{-1}&A(z)^{-1}B(z)\\ C(z)A(z)^{-1}&D(z)+C(z)A(z)^{-1}B(z)\end{pmatrix}\in\operatorname{Hol}(U, \mathcal{L}_{\mathrm{b}}(\mathcal{H})).\]
Its inverse operator is given by
\[z\mapsto M(z)^{-1}=\begin{pmatrix}A(z)+B(z)D(z)^{-1}C(z)&-B(z)D(z)^{-1}\\ -D(z)^{-1}C(z)&D(z)^{-1}\end{pmatrix}\in\operatorname{Hol}(U,\mathcal{L}_{ \mathrm{b}}(\mathcal{H})).\]
In other words, we obtain \(M\in\mathfrak{M}(U)\). Moreover, \(\Lambda_{00}(M)=A\), \(\Lambda_{01}(M)=B\), \(\Lambda_{10}(M)=C\) and \(\Lambda_{11}(M)=D\), i.e., \(M\in\mathfrak{M}(U,\alpha)\) is a pre-image of \((A,B,C,D)\) under \((\Lambda_{00},\Lambda_{01},\Lambda_{10},\Lambda_{11})\). Clearly, it is the only one, which implies that
\[(\Lambda_{00},\Lambda_{01},\Lambda_{10},\Lambda_{11})\colon\mathfrak{M}(U, \alpha)\to S\]
is a continuous bijection. Since the diagram in Figure 5 is commutative and initial topologies are transitive, \((\Lambda_{00},\Lambda_{01},\Lambda_{10},\Lambda_{11})\) even is a homeomorphism, which finishes the proof.
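As a purely numerical sanity check of the block formulas used in this proof (with randomly generated finite-dimensional stand-ins for \(A\), \(B\), \(C\), \(D\); not part of the argument), one may verify that the displayed expression for \(M(z)^{-1}\) indeed inverts \(M(z)\) and that the four \(\Lambda\)-maps recover the quadruple:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
# Hypothetical quadruple (A, B, C, D); A and D are small perturbations of the
# identity and hence invertible, B and C are arbitrary bounded blocks.
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
D = np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = 0.2 * rng.standard_normal((n, n))
C = 0.2 * rng.standard_normal((n, n))

Ainv, Dinv = np.linalg.inv(A), np.linalg.inv(D)
M = np.block([[Ainv,     Ainv @ B],
              [C @ Ainv, D + C @ Ainv @ B]])
M_inv = np.block([[A + B @ Dinv @ C, -B @ Dinv],
                  [-Dinv @ C,         Dinv]])

assert np.allclose(M @ M_inv, np.eye(2 * n))     # the displayed inverse formula

# The four Lambda-maps recover the original quadruple (A, B, C, D):
M00, M01, M10, M11 = M[:n, :n], M[:n, n:], M[n:, :n], M[n:, n:]
M00_inv = np.linalg.inv(M00)
assert np.allclose(M00_inv, A)
assert np.allclose(M00_inv @ M01, B)
assert np.allclose(M10 @ M00_inv, C)
assert np.allclose(M11 - M10 @ M00_inv @ M01, D)
```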
**Lemma 5.7**.: _Let \((M_{i})_{i\in I}\) be a net in \(\mathfrak{M}(U,\alpha)\) and \(M\colon U\to\mathcal{M}(\mathcal{H}_{0},\mathcal{H}_{1})\). Then, \(M\in\mathfrak{M}(U)\) and \((M_{i})_{i\in I}\) converges to \(M\) w.r.t. \(\tau_{\operatorname{Hol}}(\mathcal{H}_{0},\mathcal{H}_{1})\) if and only if \((M_{i}(z))_{i\in I}\) converges to \(M(z)\) w.r.t. \(\tau(\mathcal{H}_{0},\mathcal{H}_{1})\) for every \(z\in U\). In either case, we have \(M\in\mathfrak{M}(U,\alpha)\)._
Proof.: As discussed in Remark 5.2, we know that parameterised implies pointwise convergence to the same limit.
Conversely, assume that \((M_{i}(z))_{i\in I}\) converges to \(M(z)\) w.r.t. \(\tau(\mathcal{H}_{0},\mathcal{H}_{1})\) for every \(z\in U\) and consider any subnet of \((M_{i})_{i\in I}\). By virtue of Theorem 5.6, this subnet has a further subnet converging to some \(N\in\mathfrak{M}(U,\alpha)\) w.r.t. \(\tau_{\operatorname{Hol}}(\mathcal{H}_{0},\mathcal{H}_{1})\). Since we also have pointwise convergence to \(M\), Remark 5.2 implies \(N=M\in\mathfrak{M}(U,\alpha)\). So, every subnet has a further subnet converging to \(M\), which finishes the proof.
Combining Lemma 5.7 and Lemma 2.2, we immediately obtain:
**Corollary 5.8**.: _Let \((M_{n})_{n\in\mathbb{N}}\) be a sequence in \(\mathfrak{M}(U,\alpha)\) and \(M\colon U\to\mathcal{L}_{\mathrm{b}}(\mathcal{H})\) such that \(M_{n}(z)\) converges to \(M(z)\) in the strong operator topology for every \(z\in U\). Then, \(M\in\mathfrak{M}(U,\alpha)\) and \((M_{n})_{n\in\mathbb{N}}\) converges to \(M\) w.r.t. \(\tau_{\operatorname{Hol}}(\mathcal{H}_{0},\mathcal{H}_{1})\)._
We stress that the statement of Corollary 5.8 is independent of the decomposition considered for \(\mathcal{H}\). In the next section, we establish the announced continuity result for solution operators for abstract time-dependent partial differential equations, that is, for evolutionary equations.

Figure 5. Topology on \(\mathfrak{M}(U,\alpha)\) as a product topology
## 6. Applications to Evolutionary Equations
Finally, we establish the connection between evolutionary equations and the parameterised nonlocal H-topology.
The following lemma is based on ideas obtained from [1, 1].
**Lemma 6.1**.: _Let \(\mathcal{H}\) be a separable Hilbert space and \((T_{n})_{n\in\mathbb{N}}\) in \(\mathcal{L}_{\mathrm{b}}(\mathcal{H})\) with_
\[\operatorname{Re}T_{n}\geq c>0\quad\text{ and }\quad\operatorname{Re}T_{n}^{-1} \geq d>0\]
_for \(n\in\mathbb{N}\). Moreover, assume that \(A\colon\operatorname{dom}(A)\subseteq\mathcal{H}\to\mathcal{H}\) is a skew-selfadjoint operator and that \(T\in\mathcal{L}_{\mathrm{b}}(\mathcal{H})\) such that \(0\in\rho(T+A)\). If \((T_{n}+A)^{-1}\) converges to \((T+A)^{-1}\) in the weak operator topology, then we obtain_
\[\operatorname{Re}T\geq c\quad\text{ and }\quad\operatorname{Re}T^{-1}\geq d.\]
Proof.: \(T_{n}+A\) is closed with adjoint \(T_{n}^{*}-A\) and
\[\operatorname{Re}\langle(T_{n}+A)\varphi,\varphi\rangle_{\mathcal{H}}= \operatorname{Re}\langle T_{n}\varphi,\varphi\rangle_{\mathcal{H}}\geq c\| \varphi\|_{\mathcal{H}}^{2} \tag{8}\]
for all \(\varphi\in\operatorname{dom}(A)\) and \(n\in\mathbb{N}\). By virtue of Lemma A.11, we infer \((T_{n}+A)^{-1}\in\mathcal{L}_{\mathrm{b}}(\mathcal{H})\) with \(\|(T_{n}+A)^{-1}\|\leq 1/c\). Lemma A.8 yields \(\|T_{n}\|\leq 1/d\). Hence, we also get
\[\|A(T_{n}+A)^{-1}\|=\|I-T_{n}(T_{n}+A)^{-1}\|\leq 1+\frac{1}{cd} \tag{9}\]
for \(n\in\mathbb{N}\). Thus, for any subsequence of \(((T_{n}+A)^{-1})_{n\in\mathbb{N}}\), sequential compactness of operator norm balls in the weak operator topology gives us a further subsequence that converges in the weak operator topology on \(\mathcal{L}_{\mathrm{b}}(\mathcal{H},\operatorname{dom}(A))\), where the Hilbert space \(\operatorname{dom}(A)\) is endowed with the graph inner product. Since the weak operator limits in \(\mathcal{L}_{\mathrm{b}}(\mathcal{H},\operatorname{dom}(A))\) and \(\mathcal{L}_{\mathrm{b}}(\mathcal{H})\) have to coincide ( \(\operatorname{dom}(A)\) as a Hilbert space is continuously embedded in \(\mathcal{H}\)), we conclude that every subsequence of \(((T_{n}+A)^{-1})_{n\in\mathbb{N}}\) has a further subsequence converging to \((T+A)^{-1}\) in the weak operator topology on \(\mathcal{L}_{\mathrm{b}}(\mathcal{H},\operatorname{dom}(A))\). In other words, \((T_{n}+A)^{-1}\) converges to \((T+A)^{-1}\) in the weak operator topology on \(\mathcal{L}_{\mathrm{b}}(\mathcal{H},\operatorname{dom}(A))\).
We have \(T_{n}(T_{n}+A)^{-1}=I-A(T_{n}+A)^{-1}\) for \(n\in\mathbb{N}\). Therefore, \(T_{n}(T_{n}+A)^{-1}\) converges to \(I-A(T+A)^{-1}=T(T+A)^{-1}\) in the weak operator topology on \(\mathcal{L}_{\mathrm{b}}(\mathcal{H})\). Furthermore, the skew-selfadjointness of \(A\) implies
\[\operatorname{Re}\langle A(T_{n}+A)^{-1}\varphi,(T_{n}+A)^{-1}\varphi\rangle_ {\mathcal{H}}=0\text{ and }\operatorname{Re}\langle A(T+A)^{-1}\varphi,(T+A)^{-1}\varphi\rangle_{ \mathcal{H}}=0\]
for all \(\varphi\in\mathcal{H}\) and \(n\in\mathbb{N}\). Thus, from
\[T_{n}(T_{n}+A)^{-1}+A(T_{n}+A)^{-1}=I=T(T+A)^{-1}+A(T+A)^{-1}\]
for all \(n\in\mathbb{N}\), it follows
\[\lim_{n\to\infty}\operatorname{Re}\langle T_{n}(T_{n}+A)^{-1} \varphi,(T_{n}+A)^{-1}\varphi\rangle_{\mathcal{H}} =\lim_{n\to\infty}\operatorname{Re}\langle\varphi,(T_{n}+A)^{-1} \varphi\rangle_{\mathcal{H}}\] \[=\operatorname{Re}\langle\varphi,(T+A)^{-1}\varphi\rangle_{ \mathcal{H}} \tag{10}\] \[=\operatorname{Re}\langle T(T+A)^{-1}\varphi,(T+A)^{-1}\varphi \rangle_{\mathcal{H}}\]
for all \(\varphi\in\mathcal{H}\). Reusing the methods employed in (7) and (6), we obtain
\[c\|(T+A)^{-1}\varphi\|_{\mathcal{H}}^{2} \leq\lim_{n\to\infty}\operatorname{Re}\langle T_{n}(T_{n}+A)^{-1 }\varphi,(T_{n}+A)^{-1}\varphi\rangle_{\mathcal{H}}\] \[=\operatorname{Re}\langle T(T+A)^{-1}\varphi,(T+A)^{-1}\varphi \rangle_{\mathcal{H}}\]
as well as
\[d\|T(T+A)^{-1}\varphi\|_{\mathcal{H}}^{2} \leq\lim_{n\to\infty}\operatorname{Re}\langle T_{n}(T_{n}+A)^{-1} \varphi,(T_{n}+A)^{-1}\varphi\rangle_{\mathcal{H}} \tag{11}\] \[=\operatorname{Re}\langle T(T+A)^{-1}\varphi,(T+A)^{-1}\varphi \rangle_{\mathcal{H}}\]
for all \(\varphi\in\mathcal{H}\). As \((T+A)^{-1}\varphi\) ranges over the dense subspace \(\operatorname{dom}(A)\) of \(\mathcal{H}\) and as both \(T\) and \(T^{-1}\) are bounded on \(\mathcal{H}\), we conclude \(\operatorname{Re}T\geq c\) and with (6) also \(\operatorname{Re}T^{-1}\geq d\).
From now on, let \(\mathcal{H}\) be a separable Hilbert space, let \(A\colon\operatorname{dom}(A)\subseteq\mathcal{H}\to\mathcal{H}\) be skew-selfadjoint and let \(\operatorname{dom}A\cap(\ker A)^{\perp}\) endowed with the graph scalar product of \(A\) be compactly embedded into \(\mathcal{H}\). Recall that this compact embedding implies closedness of \(\operatorname{ran}A\) by a standard argument (see [1, Lemma 4.1] or the FA-Toolbox in [13]) and hence \((\ker A)^{\perp}=\operatorname{ran}A\). Thus, we obtain the following decomposition:
\[\mathcal{H}=\underbrace{\ker A}_{=\mathcal{H}_{0}}\oplus\underbrace{ \operatorname{ran}A}_{=\mathcal{H}_{1}}, \tag{12}\]
and \(\operatorname{dom}(A)\cap\mathcal{H}_{1}\) is compactly embedded in \(\mathcal{H}_{1}\).
_Remark 6.2_.: Clearly, \(A\) itself has the form
\[\begin{pmatrix}0&0\\ 0&\tilde{A}\end{pmatrix}:\,\ker A\oplus(\operatorname{dom}A\cap\operatorname {ran}A)\subseteq\mathcal{H}_{0}\oplus\mathcal{H}_{1}\to\mathcal{H}_{0}\oplus \mathcal{H}_{1},\]
where \(\tilde{A}\colon(\operatorname{dom}A\cap\operatorname{ran}A)\subseteq\mathcal{H}_{1}\to\mathcal{H}_{1}\) is the restriction of \(A\). With this \(\tilde{A}\), the space \(\operatorname{dom}\tilde{A}\), regarded as a Hilbert space with the graph norm, is compactly embedded in \(\mathcal{H}_{1}\). One can immediately verify that \(\tilde{A}\) is still skew-selfadjoint by showing that \((\tilde{A})^{*}=\widetilde{A^{*}}=-\tilde{A}\).
**Lemma 6.3**.: _Consider an operator \(T\in\mathcal{L}_{\mathrm{b}}(\mathcal{H})\) and assume \(T_{00}^{-1}\in\mathcal{L}_{\mathrm{b}}(\mathcal{H}_{0})\) as well as \(\operatorname{Re}(T_{11}-T_{10}T_{00}^{-1}T_{01})\geq c>0\). Then \((T+A)^{-1}\in\mathcal{L}_{\mathrm{b}}(\mathcal{H})\) and this inverse reads_
\[\begin{pmatrix}T_{00}^{-1}+T_{00}^{-1}T_{01}T_{A}^{-1}T_{10}T_{00}^{-1}&-T_{00}^ {-1}T_{01}T_{A}^{-1}\\ -T_{A}^{-1}T_{10}T_{00}^{-1}&T_{A}^{-1}\end{pmatrix}, \tag{13}\]
_where \(T_{A}\coloneqq(T_{11}-T_{10}T_{00}^{-1}T_{01}+\tilde{A})\). Moreover, we have_
\[\|T_{A}^{-1}\|\leq\frac{1}{c}\quad\text{and}\quad\|\tilde{A}T_{A}^{-1}\|\leq 1 +\frac{\|T_{11}-T_{10}T_{00}^{-1}T_{01}\|}{c}.\]
Proof.: Using the decomposition (12) and Lemma A.2, we can write
\[T+A=\begin{pmatrix}T_{00}&T_{01}\\ T_{10}&T_{11}+\tilde{A}\end{pmatrix}:\,\mathcal{H}_{0}\oplus(\operatorname{ dom}A\cap\mathcal{H}_{1})\subseteq\mathcal{H}_{0}\oplus\mathcal{H}_{1}\to \mathcal{H}_{0}\oplus\mathcal{H}_{1}\]
with all the components of \(T\) being bounded by \(\|T\|\). Due to Lemma A.11 and the conditions imposed on \(T\) and \(A\), \(T_{11}-T_{10}T_{00}^{-1}T_{01}+\tilde{A}\) is boundedly invertible on \(\mathcal{H}_{1}\) with \(\|(T_{11}-T_{10}T_{00}^{-1}T_{01}+\tilde{A})^{-1}\|_{\mathcal{H}_{1}}\leq 1/c\) (cf. (8)). Therefore, (13) is an element of \(\mathcal{L}_{\mathrm{b}}(\mathcal{H})\). Furthermore, (13) maps from \(\mathcal{H}_{0}\oplus\mathcal{H}_{1}\) to \(\mathcal{H}_{0}\oplus(\operatorname{dom}A\cap\mathcal{H}_{1})\). It remains to verify that applying (13) to \(T+A\) from the right yields the identity on \(\mathcal{H}\), and that applying (13) to \(T+A\) from the left yields the identity on \(\mathcal{H}_{0}\oplus(\operatorname{dom}A\cap\mathcal{H}_{1})\). These are two short and straightforward calculations. The remaining inequality follows similarly to (9).
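A finite-dimensional sketch of the inverse formula (13) is given below (illustrative only; \(\tilde{A}\) is replaced by a skew-symmetric matrix, and \(T\) by a random accretive matrix, which are hypothetical stand-ins for the operators above):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2                                     # dim of H0 and of H1 (illustrative)
T = 2 * np.eye(2 * n) + 0.3 * rng.standard_normal((2 * n, 2 * n))
G = rng.standard_normal((n, n))
A_tilde = G - G.T                         # skew-symmetric, i.e. skew-selfadjoint
A = np.block([[np.zeros((n, n)), np.zeros((n, n))],
              [np.zeros((n, n)), A_tilde]])

T00, T01, T10, T11 = T[:n, :n], T[:n, n:], T[n:, :n], T[n:, n:]
T00_inv = np.linalg.inv(T00)
TA_inv = np.linalg.inv(T11 - T10 @ T00_inv @ T01 + A_tilde)

# Block formula (13) for (T + A)^{-1}:
inv_13 = np.block([
    [T00_inv + T00_inv @ T01 @ TA_inv @ T10 @ T00_inv, -T00_inv @ T01 @ TA_inv],
    [-TA_inv @ T10 @ T00_inv,                            TA_inv]])

assert np.allclose(inv_13, np.linalg.inv(T + A))
```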
The combination of the definition of the Schur topology together with the compactness assumption on \(A\) leads to the following fundamental convergence statement underlying our main result on evolutionary equations.
**Lemma 6.4**.: _Let \((T_{n})_{n\in\mathbb{N}}\) be a sequence in \(\mathcal{M}(\alpha)\) converging to \(T\in\mathcal{M}(\alpha)\) w.r.t. \(\tau(\mathcal{H}_{0},\mathcal{H}_{1})=\tau(\ker(A),\operatorname{ran}(A))\). Then, \((T_{n}+A)^{-1}\) converges to \((T+A)^{-1}\) in the weak operator topology on \(\mathcal{L}_{\mathrm{b}}(\mathcal{H})\)._
Proof.: In view of Lemma A.8, we can write both \((T_{n}+A)^{-1}\) and \((T+A)^{-1}\) in the form of (13).
Consider any subsequence of \((T_{n}+A)^{-1}\). We will not introduce a new index for this subsequence. For \(\varphi_{0}+\varphi_{1}\in\mathcal{H}_{0}\oplus\mathcal{H}_{1}\) and \(n\in\mathbb{N}\), we have
\[\|\varphi_{1}-T_{n,10}T_{n,00}^{-1}\varphi_{0}\|_{\mathcal{H}_{1}}\leq\| \varphi_{1}\|_{\mathcal{H}_{1}}+\alpha_{10}\|\varphi_{0}\|_{\mathcal{H}_{0}}.\]
Moreover, Lemma 6.3 yields
\[\|(T_{n,11}-T_{n,10}T_{n,00}^{-1}T_{n,01}+\tilde{A})^{-1}\|\leq \frac{1}{\alpha_{00}}\] \[\text{and}\quad\|\tilde{A}(T_{n,11}-T_{n,10}T_{n,00}^{-1}T_{n,01 }+\tilde{A})^{-1}\|\leq 1+\frac{\alpha_{11}}{\alpha_{00}}.\]
Thus, denoting
\[(u_{n,1})_{n\in\mathbb{N}}:=\big{(}(T_{n,11}-T_{n,10}T_{n,00}^{-1}T_{n,01}+ \tilde{A})^{-1}(\varphi_{1}-T_{n,10}T_{n,00}^{-1}\varphi_{0})\big{)}_{n\in \mathbb{N}}\]
and using
\[\|T_{n,11}-T_{n,10}T_{n,00}^{-1}T_{n,01}\|\leq\alpha_{11}\]
as well as
\[\|T_{n,10}T_{n,00}^{-1}\|\leq\alpha_{10}\]
for \(n\in\mathbb{N}\), we deduce that both \((u_{n,1})_{n\in\mathbb{N}}\) and
\[(\tilde{A}u_{n,1})_{n\in\mathbb{N}}=(\varphi_{1}-T_{n,10}T_{n,00}^{-1}\varphi_ {0}-(T_{n,11}-T_{n,10}T_{n,00}^{-1}T_{n,01})u_{n,1})_{n\in\mathbb{N}}\]
are bounded sequences in \(\mathcal{H}_{1}\). Hence, we may choose a subsequence (not relabelled) such that \((u_{n,1})_{n\in\mathbb{N}}\) weakly converges to some \(u_{1}\) in \(\operatorname{dom}(\tilde{A})\) endowed with the graph inner product. Since the continuity of \(\tilde{A}\colon\operatorname{dom}(\tilde{A})\to\mathcal{H}_{1}\) (w.r.t. the graph norm) implies its weak continuity, the sequence \(\tilde{A}u_{n,1}\) weakly converges to \(\tilde{A}u_{1}\). The compact embedding of \(\operatorname{dom}(\tilde{A})\) into \(\mathcal{H}_{1}\) yields \(\mathcal{H}_{1}\)-convergence of (a subsequence of) \((u_{n,1})_{n\in\mathbb{N}}\) to \(u_{1}\in\mathcal{H}_{1}\). Next, consider \(((T_{n,11}-T_{n,10}T_{n,00}^{-1}T_{n,01})u_{n,1})_{n\in\mathbb{N}}\). As this is a uniformly bounded sequence of operators converging in the weak operator topology \((\tau(\mathcal{H}_{0},\mathcal{H}_{1})\)-convergence) applied to a convergent sequence in \(\mathcal{H}_{1}\), the sequence altogether weakly converges to \((T_{11}-T_{10}T_{00}^{-1}T_{01})u_{1}\). All in all, we have proven
\[\tilde{A}u_{1}=\varphi_{1}-T_{10}T_{00}^{-1}\varphi_{0}-(T_{11}-T_{10}T_{00}^{ -1}T_{01})u_{1},\]
i.e.,
\[u_{1}=(T_{11}-T_{10}T_{00}^{-1}T_{01}+\tilde{A})^{-1}(\varphi_{1}-T_{10}T_{00}^ {-1}\varphi_{0}).\]
In other words, the \(\mathcal{H}_{1}\)-component of \((T_{n}+A)^{-1}(\varphi_{0}+\varphi_{1})\) converges to the \(\mathcal{H}_{1}\)-component of \((T+A)^{-1}(\varphi_{0}+\varphi_{1})\).
The convergence of \((u_{n,1})_{n\in\mathbb{N}}\), \(\tau(\mathcal{H}_{0},\mathcal{H}_{1})\)-convergence and the uniform bound
\[\|T_{n,00}^{-1}T_{n,01}\|\leq\alpha_{01}\]
for \(n\in\mathbb{N}\) yield weak convergence of \(T_{n,00}^{-1}\varphi_{0}-T_{n,00}^{-1}T_{n,01}u_{n,1}\) to \(T_{00}^{-1}\varphi_{0}-T_{00}^{-1}T_{01}u_{1}\). In other words, the \(\mathcal{H}_{0}\)-component of \((T_{n}+A)^{-1}(\varphi_{0}+\varphi_{1})\) weakly converges to the \(\mathcal{H}_{0}\)-component of \((T+A)^{-1}(\varphi_{0}+\varphi_{1})\).
To sum up, we have shown that every subsequence of \((T_{n}+A)^{-1}\) has a further subsequence that converges to \((T+A)^{-1}\) in the weak operator topology.
We are now in the position to state and prove the main result of this section.
**Theorem 6.5**.: _Consider \(\nu_{0}>0\) and a sequence of material laws \((M_{n})_{n\in\mathbb{N}}\) with \(\mathbb{C}_{\mathrm{Re}>\nu_{0}}\) in their domain. Furthermore, assume there exist \(c,d>0\) with_
\[\mathrm{Re}\,zM_{n}(z)\geq c\quad\text{and}\quad\|M_{n}(z)\|\leq d\]
_for all \(z\in\mathbb{C}_{\mathrm{Re}>\nu_{0}}\) and all \(n\in\mathbb{N}\). This implies \(\mathrm{s}_{\mathrm{b}}(M_{n})\leq\nu_{0}\) for all \(n\in\mathbb{N}\). If there exists an \(M\colon\mathbb{C}_{\mathrm{Re}>\nu_{0}}\to\mathcal{M}(\ker(A),\mathrm{ran}(A))\) with \(\|M(z)\|\leq d\) for \(z\in\mathbb{C}_{\mathrm{Re}>\nu_{0}}\) and \((M_{n})_{n\in\mathbb{N}}\) converges to \(M\) pointwise in \(\tau(\ker(A),\mathrm{ran}(A))\), then \(M\) is a material law with_
\[\mathrm{Re}\,zM(z)\geq c \tag{14}\]
_for all \(z\in\mathbb{C}_{\mathrm{Re}>\nu_{0}}\) and \(\mathrm{s}_{\mathrm{b}}(M)\leq\nu_{0}\). Moreover, we have_
\[\overline{\partial_{t}M_{n}(\partial_{t})+A}^{-1}\to\overline{\partial_{t}M( \partial_{t})+A}^{-1} \tag{15}\]
_in the weak operator topology on \(\mathcal{L}_{\mathrm{b}}(\mathrm{L}_{2,\nu}(\mathbb{R},\mathcal{H}))\) for every \(\nu>\nu_{0}\)._
Proof.: Lemma A.8 yields
\[\mathrm{Re}(zM_{n}(z))^{-1}\geq c\|zM_{n}(z)\|^{-2}\geq cd^{-2}|z|^{-2}\]
for all \(z\in\mathbb{C}_{\mathrm{Re}>\nu_{0}}\) and all \(n\in\mathbb{N}\). Fix any \(\mu>\nu_{0}\). Then by easy calculations (Lemmas A.2, A.8 and A.9), we find an \(\alpha\in(0,\infty)^{2\times 2}\) such that \((z\mapsto zM_{n}(z))\in\mathfrak{M}(\mathbb{C}_{\mu>\mathrm{Re}>\nu_{0}}^{| \mathrm{Im}|<\mu},\alpha)\) for \(n\in\mathbb{N}\). Lemma 5.7 yields holomorphicity of \(M\) and \(zM(z)\in\mathcal{M}(\alpha)\) on \(\mathbb{C}_{\mu>\mathrm{Re}>\nu_{0}}^{|\mathrm{Im}|<\mu}\). Since \(\mu>\nu_{0}\) was arbitrary, we obtain holomorphicity of \(M\) on \(\mathbb{C}_{\mathrm{Re}>\nu_{0}}\).
In particular, we have proven \(zM_{n}(z)\in\mathcal{M}(\alpha)\) for all \(n\in\mathbb{N}\) and \(zM(z)\in\mathcal{M}(\alpha)\) for each \(z\in\mathbb{C}_{\mathrm{Re}>\nu_{0}}\) (with the \(\alpha\) only depending on \(z\)). Thus, Lemma 6.4 yields
\[(zM_{n}(z)+A)^{-1}\to(zM(z)+A)^{-1} \tag{16}\]
in the weak operator topology for each \(z\in\mathbb{C}_{\mathrm{Re}>\nu_{0}}\), and Lemma 6.1 proves (14). This means, Theorem 3.2 is applicable to both \(M_{n}\) for \(n\in\mathbb{N}\) and to \(M\). Fourier-Laplace transforming (16), we get (15).
_Remark 6.6_.: It is possible to replace the uniform boundedness condition imposed on \((M_{n})_{n\in\mathbb{N}}\) and its limit \(M\) in Theorem 6.5 with
\[\mathrm{Re}\langle M_{n}(z)\varphi,\varphi\rangle_{\mathcal{H}}\geq\frac{1}{d }\|M_{n}(z)\varphi\|_{\mathcal{H}}^{2} \tag{17}\]
for all \(z\in\mathbb{C}_{\mathrm{Re}>\nu_{0}}\) and \(n\in\mathbb{N}\):
First, note that \(\mathrm{Re}\,zM_{n}(z)\geq c\) and Lemma A.8 show that \(M_{n}(z)\) is boundedly invertible for all \(z\in\mathbb{C}_{\mathrm{Re}>\nu_{0}}\) and \(n\in\mathbb{N}\). Hence looking at (6), we see that (17) is equivalent to \(\mathrm{Re}(M_{n}(z))^{-1}\geq 1/d\) and with Lemma A.8 we even obtain \(\|M_{n}(z)\|\leq d\) for all \(z\in\mathbb{C}_{\mathrm{Re}>\nu_{0}}\) and \(n\in\mathbb{N}\). Therefore, we can apply the proof of Theorem 6.5 until we get (16) and (14).
In order to obtain (15), we need to apply Theorem 3.2, which means, it remains to prove the uniform boundedness of \(M\). Since we now have a compactness condition on \(A\), we can refine the argument (10). We have
\[\big{\langle}zM_{n}(z)(zM_{n}(z)+A)^{-1}\varphi,(zM_{n}(z)+A)^{-1} \varphi\big{\rangle}_{\mathcal{H}} \tag{18a}\] \[+\big{\langle}A(zM_{n}(z)+A)^{-1}\varphi,(zM_{n}(z)+A)^{-1} \varphi\big{\rangle}_{\mathcal{H}}\] (18b) \[=\big{\langle}zM(z)(zM(z)+A)^{-1}\varphi,(zM_{n}(z)+A)^{-1} \varphi\big{\rangle}_{\mathcal{H}}\] (18c) \[+\big{\langle}A(zM(z)+A)^{-1}\varphi,(zM_{n}(z)+A)^{-1}\varphi \big{\rangle}_{\mathcal{H}} \tag{18d}\]
for \(\varphi\in\mathcal{H}\). (18c) and (18d) converge due to (16).
For (18b), we recall that we have proven weak convergence of \(A(zM_{n}(z)+A)^{-1}\varphi\) to \(A(zM(z)+A)^{-1}\varphi\) in the first paragraph of the proof of Lemma 6.1. Moreover, \((zM_{n}(z)+A)^{-1}\varphi\) in the second entry of the inner product can be replaced with
its projection onto \(\mathcal{H}_{1}=\operatorname{ran}A\) as it is multiplied with \(A(zM_{n}(z)+A)^{-1}\varphi\in\operatorname{ran}A\). Obviously, this projected sequence converges weakly in the Hilbert space \(\operatorname{dom}(A)\cap\mathcal{H}_{1}\). As a consequence, we get strong convergence to the projection of \((zM(z)+A)^{-1}\varphi\) onto \(\mathcal{H}_{1}\) by the compact embedding of \(\operatorname{dom}(A)\cap\mathcal{H}_{1}\) into \(\mathcal{H}\). Altogether, this means convergence of (18b) to \(\langle A(zM(z)+A)^{-1}\varphi,(zM(z)+A)^{-1}\varphi\rangle_{\mathcal{H}}\) and thus (18a) converges to
\[\langle zM(z)(zM(z)+A)^{-1}\varphi,(zM(z)+A)^{-1}\varphi\rangle_{\mathcal{H}}.\]
Dividing by \(z\) and repeating the argument (11), we conclude \(\operatorname{Re}(M(z))^{-1}\geq 1/d\) and with Lemma A.8 even \(\|M(z)\|\leq d\) for all \(z\in\mathbb{C}_{\operatorname{Re}>\nu_{0}}\).
## 7. Examples
### On a model for cell migration
In [1], the authors introduce and analyse a nonlocal model for cell migration. Here, we are interested in exemplifying our previous findings. Hence, we focus only on an autonomous, linear variant of the equation in [1]. However, we may allow for matrix-valued coefficients here. For this, let \(\Omega\subseteq\mathbb{R}^{n}\) throughout be a bounded, weak Lipschitz domain with continuous boundary, and introduce, for \(r\geq 0\) and \(q\in\operatorname{L}_{2}(\Omega)^{n}\), the linear operator \(\mathcal{S}_{r}\) given by
\[\mathcal{S}_{r}q(x)\coloneqq n\int_{0}^{1}\frac{1}{|S_{1}|}\int_{S_{1}} \langle q(x+rsy),y\rangle_{\mathbb{R}^{n}}y\,\mathrm{d}\sigma(y)\,\mathrm{d}s \quad(x\in\Omega),\]
where \(q\) is extended to \(\mathbb{R}^{n}\) via \(0\), \(S_{1}\) denotes the sphere with radius \(1\) and \(\sigma\) its surface measure. According to [1], we have \(\mathcal{S}_{r}\in\mathcal{L}_{\mathrm{b}}(\operatorname{L}_{2}(\Omega)^{n})\) for all \(r\geq 0\). Moreover, the operator family \((\mathcal{S}_{r})_{0\leq r\leq 1}\) is a special case of an approximation of unity.
**Definition 7.1**.: We call \((\mathcal{R}_{r})_{0\leq r\leq 1}\) in \(\mathcal{L}_{\mathrm{b}}(\operatorname{L}_{2}(\Omega)^{n})\) an _approximation of unity_, if \(\sup_{0\leq r\leq 1}\|\mathcal{R}_{r}\|<\infty\) and \(\mathcal{R}_{r}\to 1\) in the strong operator topology as \(r\to 0\).
Note that the example \((\mathcal{T}_{r})_{r}\) treated in [1] is also an approximation of unity.
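The case \(r=0\) can be made plausible numerically: since \(\frac{n}{|S_{1}|}\int_{S_{1}}yy^{\top}\,\mathrm{d}\sigma(y)\) is the identity matrix, \(\mathcal{S}_{0}\) acts as the identity, consistent with \(\mathcal{S}_{r}\to 1\) strongly as \(r\to 0\). The following Monte Carlo sketch (with \(n=3\); sample size and seed are arbitrary and only serve illustration) checks this moment identity.

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 3, 200_000
Y = rng.standard_normal((N, n))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)      # uniform samples on S_1

# Monte Carlo estimate of (n / |S_1|) * int_{S_1} y y^T dsigma(y) = n * E[y y^T];
# it should be (close to) the identity matrix.
moment = n * np.einsum('ki,kj->ij', Y, Y) / N
print(np.round(moment, 3))
```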
In the following, let \((\mathcal{R}_{r})_{r}\) be an approximation of unity. Then consider \(a_{1},a_{2},a_{3}\in M(\alpha,\beta;\Omega)\) for some \(0<\alpha<\beta\) and consider, for \(0\leq r\leq 1\), the following equation
\[\partial_{t}c_{r}-\operatorname{div}(a_{1}-a_{2}\mathcal{R}_{r}a_{3})\operatorname {grad}c_{r}=f\in\operatorname{L}_{2,\nu}(\mathbb{R};\operatorname{L}_{2}( \Omega)),\]
with \(f\) and \(\nu>0\) fixed. Introducing \(q_{r}\coloneqq-A_{r}\operatorname{grad}c_{r}\) with \(A_{r}\coloneqq(a_{1}-a_{2}\mathcal{R}_{r}a_{3})\) and assuming homogeneous Neumann boundary conditions for \(q_{r}\), we rewrite the system as an evolutionary equation. For this, we impose the standing assumption that there exists \(c>0\) such that for all \(0\leq r\leq 1\), we have
\[\operatorname{Re}(a_{1}-a_{2}\mathcal{R}_{r}a_{3})\geq c\]
in the sense of positive definiteness in \(\mathcal{L}_{\mathrm{b}}(\operatorname{L}_{2}(\Omega)^{n})\). This assumption is slightly weaker than the one imposed in [1]. By Lemma A.8, it implies that \(A_{r}\) is boundedly invertible with \(\|A_{r}^{-1}\|\leq 1/c\). Then, we may equivalently consider
\[\begin{bmatrix}\partial_{t}\begin{pmatrix}1&0\\ 0&0\end{pmatrix}+\begin{pmatrix}0&0\\ 0&A_{r}^{-1}\end{pmatrix}+\begin{pmatrix}0&\dot{\operatorname{div}}\\ \operatorname{grad}&0\end{pmatrix}\end{bmatrix}\begin{pmatrix}c_{r}\\ q_{r}\end{pmatrix}=\begin{pmatrix}f\\ 0\end{pmatrix}\]
where \(\dot{\operatorname{div}}\coloneqq\overline{\operatorname{div}|_{\mathcal{C}_{\mathrm{c}}^{\infty}(\Omega)^{n}}}\) is the closure of \(\operatorname{div}\) as an operator in \(\operatorname{L}_{2}\) on smooth compactly supported vector fields. This models homogeneous Neumann boundary conditions.
Note that
\[M_{r}\colon z\mapsto\begin{pmatrix}1&0\\ 0&0\end{pmatrix}+z^{-1}\begin{pmatrix}0&0\\ 0&A_{r}^{-1}\end{pmatrix}\]
defines material laws for \(0\leq r\leq 1\) with \(\operatorname{s_{b}}(M_{r})=0\) and the following properties:
\[\operatorname{Re}zM_{r}(z)\geq\min\{\operatorname{Re}z,\operatorname{Re}A_{r}^{ -1}\}\]
and
\[\|M_{r}(z)\|\leq 1+\|z^{-1}A_{r}^{-1}\|\leq 1+\frac{1}{c|z|}\]
for \(|z|>0\). We obtain
\[\operatorname{Re}A_{r}^{-1} =\operatorname{Re}(a_{1}-a_{2}\mathcal{R}_{r}a_{3})^{-1}\] \[\geq c\|(a_{1}-a_{2}\mathcal{R}_{r}a_{3})\|^{-2}\geq c(\beta+ \beta^{2}\sup_{0\leq r\leq 1}\|\mathcal{R}_{r}\|)^{-2}\]
by Lemma A.8.
Since \((\mathcal{R}_{r})_{r}\) is an approximation of unity, it follows from Lemma A.10 that \(A_{r}^{-1}\to A_{0}^{-1}\) in the strong operator topology as \(r\to 0\).
**Theorem 7.2**.: _For all \(\nu>0\), we have_
\[\overline{\left[\partial_{t}\begin{pmatrix}1&0\\ 0&0\end{pmatrix}+\begin{pmatrix}0&0\\ 0&A_{r}^{-1}\end{pmatrix}+\begin{pmatrix}0&\dot{\operatorname{div}}\\ \operatorname{grad}&0\end{pmatrix}\right]}^{-1}\\ \to\overline{\left[\partial_{t}\begin{pmatrix}1&0\\ 0&0\end{pmatrix}+\begin{pmatrix}0&0\\ 0&A_{0}^{-1}\end{pmatrix}+\begin{pmatrix}0&\dot{\operatorname{div}}\\ \operatorname{grad}&0\end{pmatrix}\right]}^{-1}\]
_as \(r\to 0\) in the weak operator topology of \(\mathcal{L}_{\operatorname{b}}\big{(}\mathrm{L}_{2,\nu}(\mathbb{R};\mathrm{L}_ {2}(\Omega)^{n+1})\big{)}\)._
Proof.: Considering the Rellich-Kondrachov theorem and the above discussion, this immediately follows from Lemmas A.2, A.8 and A.9, Corollary 5.8 and Theorem 6.5.
_Remark 7.3_.: By [23, Theorem 5.1.3], one can show, even in the non-autonomous case, that the solution operators converge in the strong operator topology. The example is merely presented to have a nonlocal example at hand.
In the case \(n=3\), note that the convergence assumptions of the above theorem can be weakened. We particularly refer to the example in [23] showing that if, additionally, \(a_{1}\) is replaced by an H-converging sequence \((a_{1,k})\) with limit \(a_{1}\), the resulting sequence
\[(a_{1,k}-a_{2}\mathcal{R}_{1/k}a_{3})_{k}^{-1}\]
converges to \((a_{1}-a_{2}\mathcal{R}_{0}a_{3})^{-1}\) in \(\tau(\mathfrak{g},\mathfrak{c}_{0})\).
### A homogenisation problem for scalar piezo-electricity
In this section, we consider a classical homogenisation problem in order to showcase the applicability to rapidly oscillating, albeit local, coefficients. Again, we refer to the example in [23] for more sophisticated situations. Here, we follow the model description of piezo-electro-magnetism from [19]. Note that we treat homogeneous Dirichlet boundary conditions throughout and, for ease of readability, we restrict the setting to scalar elastic waves. The case of \(3\)-dimensional elastic waves can be dealt with similarly. Let \(\Omega\subseteq\mathbb{R}^{3}\) be a bounded, weak Lipschitz domain with continuous boundary. Additionally, assume that \(\Omega\) is topologically trivial (recall Example 2.3 for the Helmholtz decomposition). We adopt the notation rolled out in [19] and consider the evolutionary equation \((\partial_{t}M_{0}+M_{1}+A)U=F\) with the following
setting
\[M_{0} \coloneqq\begin{pmatrix}1&0&0&0\\ 0&C^{-1}&C^{-1}e&0\\ 0&e^{*}C^{-1}&\varepsilon+e^{*}C^{-1}e&0\\ 0&0&0&\mu\\ \end{pmatrix}, M_{1} \coloneqq\begin{pmatrix}0&0&0&0\\ 0&0&0&0\\ 0&0&\sigma&0\\ 0&0&0&0\\ \end{pmatrix},\] \[A \coloneqq\begin{pmatrix}0&-\operatorname{div}&0&0\\ -\operatorname{grad}&0&0&0\\ 0&0&0&-\operatorname{curl}\\ 0&0&\operatorname{curl}&0\\ \end{pmatrix},\]
where \(C\), \(e\), \(\mu\), \(\varepsilon\), \(\sigma\) are operators in \(\mathcal{L}_{\mathrm{b}}(\mathrm{L}_{2}(\Omega)^{3})\), of which \(C\), \(\mu\) and \(\varepsilon\) are self-adjoint and non-negative, and \(\operatorname{curl}:=\overline{\operatorname{curl}|_{\mathcal{C}_{\mathrm{c}}^{\infty}(\Omega)^{3}}}\) is defined similarly to \(\dot{\operatorname{div}}\) before. Well-posedness in \(\mathrm{L}_{2,\nu}(\mathbb{R};\mathrm{L}_{2}(\Omega)^{10})\) can be guaranteed by [17], if for some \(c,d>0\) and \(\nu_{0}\geq 0\) we have
\[C\geq 1/d,\quad\mu\geq c\quad\text{and}\quad\nu\varepsilon+\operatorname{Re} \sigma\geq c\]
for all \(\nu>\nu_{0}\). Additionally asking for
\[C^{-1}\geq c,\quad\mu^{-1}\geq 1/d\quad\text{and}\quad\operatorname{Re}\left(( \varepsilon+\sigma/z)^{-1}\right)\geq 1/d\]
for \(\operatorname{Re}z>\nu_{0}\), we analogously obtain (17). In order to address the homogenisation problem, we consider bounded sequences \((C_{n})_{n}\), \((e_{n})_{n}\), \((\mu_{n})_{n}\), \((\varepsilon_{n})_{n}\), \((\sigma_{n})_{n}\) where we assume the same self-adjointness, non-negativity and positive-definiteness conditions as before for \(C,\varepsilon,\mu,\sigma\). The positive-definiteness constants \(c,d\) and \(\nu_{0}\) are supposed to be independent of \(n\).
The operator \(A\) induces the following decomposition (see Example 2.3) of the space \(\mathcal{H}=\mathrm{L}_{2}(\Omega)^{10}\):
\[\mathcal{H}=\mathrm{L}_{2}(\Omega)^{10}=\ker A\oplus\operatorname{ran}A,\]
where the \(\mathrm{L}_{2}(\Omega)^{3}\)-valued components of \(\ker A\) and \(\operatorname{ran}A\) are characterised via the Helmholtz decomposition of Example 2.3.
Proof.: Considering the Rellich-Kondrachov theorem, the compact embeddings of \(\operatorname{dom}(\operatorname{curl})\cap\mathfrak{c}_{0}\) and \(\operatorname{dom}(\operatorname{curl})\cap\mathfrak{c}\) as Hilbert spaces into \(\operatorname{L}_{2}(\Omega)^{3}\) and the above discussion, the claim follows from Theorem 6.5 in combination with Remark 6.6.
_Remark 7.5_.: It is desirable to obtain a more explicit formula for the limit expression in Theorem 7.4 if more structural assumptions on the coefficients and the couplings are at hand. In fact, in a slightly different situation this is done in [10]. Thus, at least for periodic, highly oscillatory coefficients one can anticipate the existence of a limit; its particular computation, however, is left to future research.
## 8. Conclusion
In this paper, we have defined a topology on holomorphic, operator-valued functions (Definition 5.1) and provided a compactness result (Theorem 5.6). Moreover, we have identified a continuity statement related to the resolvent of a skew-selfadjoint operator with compact resolvent outside its kernel that, together with the introduced topology, yields a convergence result (Theorem 6.5) that has applications to (abstract, nonlocal) homogenisation problems for evolutionary equations and can be easily applied to a class of nonlocal equations as well as to homogenisation problems for systems of time-dependent partial differential equations.
## Appendix A
_Remark A.1_.: Whenever we consider the holomorphic functions \(\operatorname{Hol}(U,\mathbb{C})\) for an open \(U\subseteq\mathbb{C}\), we endow this space with the topology of compact convergence. That means, a sequence in \(\operatorname{Hol}(U,\mathbb{C})\) converges if and only if it converges uniformly on every compact subset of \(U\). One can (cf. [1] and [12, Thm. 3.4.16]) explicitly construct a complete and separable metric that induces this topology, i.e., \(\operatorname{Hol}(U,\mathbb{C})\) is Polish. Note that (obviously) both addition and scalar multiplication are continuous w.r.t. this topology, i.e., \(\operatorname{Hol}(U,\mathbb{C})\) is a topological vector space.
**Lemma A.2**.: _Let \(U\subseteq\mathbb{C}\) be an open set and let \(\mathcal{H}\) be a Hilbert space that can be orthogonally decomposed into \(\mathcal{H}=\mathcal{H}_{0}\oplus\mathcal{H}_{1}\). Then, the block operator_
\[M=\begin{pmatrix}M_{00}&M_{01}\\ M_{10}&M_{11}\end{pmatrix}:U\to\mathcal{H}^{\mathcal{H}}\]
_maps to \(\mathcal{L}_{\mathrm{b}}(\mathcal{H})\) if and only if each block entry \(M_{ij}\colon U\to\mathcal{H}_{i}^{\mathcal{H}_{j}}\) maps to \(\mathcal{L}_{\mathrm{b}}(\mathcal{H}_{j},\mathcal{H}_{i})\). In that case, \(M\colon U\to\mathcal{L}_{\mathrm{b}}(\mathcal{H})\) is holomorphic if and only if each block entry \(M_{ij}\colon U\to\mathcal{L}_{\mathrm{b}}(\mathcal{H}_{j},\mathcal{H}_{i})\) is holomorphic._
Proof.: For \(v\in\mathcal{H}\) with \(\|v\|_{\mathcal{H}}\leq 1\) and its unique decomposition \(v=v_{0}+v_{1}\), the Pythagorean theorem yields \(\|v_{0}\|_{\mathcal{H}},\|v_{1}\|_{\mathcal{H}}\leq 1\). Once again applying the Pythagorean theorem, we obtain
\[\|M(z)v\|_{\mathcal{H}}^{2} =\|M_{00}(z)v_{0}+M_{01}(z)v_{1}\|_{\mathcal{H}}^{2}+\|M_{10}(z)v_ {0}+M_{11}(z)v_{1}\|_{\mathcal{H}}^{2}\] \[\leq(\|M_{00}(z)\|+\|M_{01}(z)\|)^{2}+(\|M_{10}(z)\|+\|M_{11}(z) \|)^{2}\]
for \(z\in U\). This shows
\[\|M(z)\|\leq\|M_{00}(z)\|+\|M_{01}(z)\|+\|M_{10}(z)\|+\|M_{11}(z)\|. \tag{19}\]
Conversely, assume \(v_{0}\in\mathcal{H}_{0}\) with \(\|v_{0}\|\leq 1\). Then, the Pythagorean theorem yields
\[\|M_{00}(z)v_{0}\|_{\mathcal{H}}^{2}\leq\|M_{00}(z)v_{0}\|_{\mathcal{H}}^{2}+ \|M_{10}(z)v_{0}\|_{\mathcal{H}}^{2}=\|M(z)v_{0}\|_{\mathcal{H}}^{2}\leq\|M(z )\|^{2}\]
for \(z\in U\). After similar calculations for the other block entries, we get
\[\max\big{(}\|M_{00}(z)\|,\|M_{01}(z)\|,\|M_{10}(z)\|,\|M_{11}(z)\|\big{)}\leq\|M (z)\|. \tag{20}\]
Inequalities (19) and (20) immediately prove the claimed statements.
**Lemma A.3**.: _Let \(U\subseteq\mathbb{C}\) be an open set and let \(\mathcal{H}\) be a Hilbert space. If \(M,N\colon U\to\mathcal{L}_{\mathrm{b}}(\mathcal{H})\) are holomorphic, then also the product \(MN\) is holomorphic with derivative \(MN^{\prime}+M^{\prime}N\)._
Proof.: Note that the multiplication in \(\mathcal{L}_{\mathrm{b}}(\mathcal{H})\) is a continuous operation. Hence,
\[\lim_{w\to z}\frac{M(z)N(z)-M(w)N(w)}{z-w}\] \[\qquad\qquad=\lim_{w\to z}\frac{M(z)N(z)-M(z)N(w)+M(z)N(w)-M(w)N(w)}{ z-w}\] \[\qquad\qquad=\lim_{w\to z}\frac{M(z)N(z)-M(z)N(w)}{z-w}+\lim_{w \to z}\frac{M(z)N(w)-M(w)N(w)}{z-w}\] \[\qquad\qquad=M(z)\lim_{w\to z}\frac{N(z)-N(w)}{z-w}+\lim_{w\to z} \frac{M(z)-M(w)}{z-w}N(w)\] \[\qquad\qquad=M(z)N^{\prime}(z)+\lim_{w\to z}\frac{M(z)-M(w)}{z-w} \lim_{w\to z}N(w)\] \[\qquad\qquad=M(z)N^{\prime}(z)+M^{\prime}(z)N(z),\]
which implies that \(MN\) is complex differentiable.
**Lemma A.4**.: _Let \(U\subseteq\mathbb{C}\) be an open set and let \(\mathcal{H}\) be a Hilbert space. If a holomorphic function \(M\colon U\to\mathcal{L}_{\mathrm{b}}(\mathcal{H})\) is such that \(M(z)\) is invertible for every \(z\in U\), then \(M(\cdot)^{-1}\colon U\to\mathcal{L}_{\mathrm{b}}(\mathcal{H})\) is also holomorphic with derivative \(-M(\cdot)^{-1}M^{\prime}M(\cdot)^{-1}\)._
Proof.: Note that \(A\mapsto A^{-1}\) is continuous on \(\{A\in\mathcal{L}_{\mathrm{b}}(\mathcal{H})\,|\,A\text{ is invertible}\}\) by [12, Thm. 10.12] and the multiplication in \(\mathcal{L}_{\mathrm{b}}(\mathcal{H})\) is a continuous operation. Hence,
\[\lim_{w\to z}\frac{M(z)^{-1}-M(w)^{-1}}{z-w} =\lim_{w\to z}\frac{M(z)^{-1}(M(w)-M(z))M(w)^{-1}}{z-w}\] \[=M(z)^{-1}\lim_{w\to z}\frac{-(M(z)-M(w))}{z-w}\lim_{w\to z}M(w)^{-1}\] \[=-M(z)^{-1}M^{\prime}(z)M(z)^{-1},\]
which implies that \(M(\cdot)^{-1}\) is complex differentiable.
**Theorem A.5** (Montel's theorem [12, Thm. 14.6]).: _Let \(U\subseteq\mathbb{C}\) be open and \(S\subseteq\operatorname{Hol}(U,\mathbb{C})\). Then \(S\) is relatively compact (also called normal), if and only if, \(S\) is locally uniformly bounded, i.e., for all \(K\subseteq U\) compact there exists a \(C_{K}>0\) such that_
\[\sup_{z\in K,f\in S}|f(z)|\leq C_{K}.\]
**Corollary A.6**.: \(\operatorname{Hol}(U,\overline{\mathrm{B}}_{r}(0))\) _is compact, where \(\overline{\mathrm{B}}_{r}(0)\) is the closed ball with radius \(r\geq 0\) in \(\mathbb{C}\)._
Proof.: By Theorem A.5 (Montel's theorem) we conclude that \(\operatorname{Hol}(U,\overline{\mathrm{B}}_{r}(0))\) is relatively compact in \(\operatorname{Hol}(U,\mathbb{C})\). We finish the proof by showing the closedness of \(\operatorname{Hol}(U,\overline{\mathrm{B}}_{r}(0))\): Let \((f_{n})_{n\in\mathbb{N}}\) be a sequence in \(\operatorname{Hol}(U,\overline{\mathrm{B}}_{r}(0))\) that converges to \(f\in\operatorname{Hol}(U,\mathbb{C})\), i.e., for all \(K\subseteq U\) compact we have
\[\sup_{z\in K}|f_{n}(z)-f(z)|\to 0.\]
In particular, \(f_{n}(z)\) converges to \(f(z)\) in \(\mathbb{C}\). Since \(|f_{n}(z)|\leq r\) and limits preserve inequalities, we conclude \(|f(z)|\leq r\).
The following theorem is a small adaptation of [1, Prop. 6.1]. We just regard \(U\subseteq\mathbb{C}\) instead of the more general case \(U\subseteq\mathcal{Y}\) for a normed vector space \(\mathcal{Y}\).
**Theorem A.7**.: _Let \(\mathcal{X}\) be a Banach space and \(U\subseteq\mathbb{C}\) open. If \(\Psi\subseteq\mathcal{X}^{\prime}\) has the following property_
\[W\subseteq\mathcal{X}\text{ is bounded}\quad\Leftrightarrow\quad\psi(W) \subseteq\mathbb{C}\text{ is bounded }\forall\psi\in\Psi,\]
_then the following statements are equivalent:_
1. \(f\in\operatorname{Hol}(U,\mathcal{X})\)_,_
2. \(\psi\circ f\in\operatorname{Hol}(U,\mathbb{C})\) _for all_ \(\psi\in\Psi\)_._
**Lemma A.8** ([2, Prop. 6.2.3]).: _Let \(\mathcal{H}\) be a Hilbert space and \(A\in\mathcal{L}_{\mathrm{b}}(\mathcal{H})\) such that \(\operatorname{Re}A\geq c>0\). Then, \(A^{-1}\in\mathcal{L}_{\mathrm{b}}(\mathcal{H})\) with \(\|A^{-1}\|\leq\frac{1}{c}\) and \(\operatorname{Re}A^{-1}\geq c\|A\|^{-2}\)._
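A quick numerical illustration of Lemma A.8 (finite-dimensional, with randomly generated data; the construction of the accretive matrix below is only one possible choice) is:

```python
import numpy as np

rng = np.random.default_rng(3)
n, c = 4, 0.5
G = rng.standard_normal((n, n))
K = rng.standard_normal((n, n))
A = (c * np.eye(n) + G.T @ G) + (K - K.T)   # Re A = c*I + G^T G >= c

A_inv = np.linalg.inv(A)
min_re = lambda B: np.linalg.eigvalsh((B + B.conj().T) / 2).min()

print(np.linalg.norm(A_inv, 2) <= 1 / c)               # ||A^{-1}|| <= 1/c
print(min_re(A_inv) >= c / np.linalg.norm(A, 2) ** 2)  # Re A^{-1} >= c ||A||^{-2}
```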
**Lemma A.9** ([2, Lemma 3.9]).: _Let \(\mathcal{H}\) be a Hilbert space that can be orthogonally decomposed into \(\mathcal{H}=\mathcal{H}_{0}\oplus\mathcal{H}_{1}\). Consider an operator \(T\in\mathcal{L}_{\mathrm{b}}(\mathcal{H})\) in its block form \((T_{ij})_{i,j\in\{0,1\}}\) (cf. Lemma A.2). If we have \(\operatorname{Re}T\geq d\) for some \(d>0\), then \(\operatorname{Re}T_{11}\geq d\) and \(\operatorname{Re}(T_{00}-T_{01}T_{11}^{-1}T_{10})\geq d\) follow._
Proof.: Let \(\varphi_{1}\in\mathcal{H}_{1}\). Then,
\[\operatorname{Re}\langle T_{11}\varphi_{1},\varphi_{1}\rangle_{\mathcal{H}_{1 }}=\operatorname{Re}\biggl{\langle}T\begin{pmatrix}0\\ \varphi_{1}\end{pmatrix},\begin{pmatrix}0\\ \varphi_{1}\end{pmatrix}\biggr{\rangle}_{\mathcal{H}}\geq d\biggl{\langle} \begin{pmatrix}0\\ \varphi_{1}\end{pmatrix},\begin{pmatrix}0\\ \varphi_{1}\end{pmatrix}\biggr{\rangle}_{\mathcal{H}}=d\langle\varphi_{1}, \varphi_{1}\rangle_{\mathcal{H}_{1}}.\]
By Lemma A.8 it follows that \(T_{11}\) is invertible. For the accretivity of the second expression, one quickly checks the relation \(R=Q^{*}TQ\), where
\[Q\coloneqq\begin{pmatrix}1&0\\ -(T_{01}T_{11}^{-1})^{*}&1\end{pmatrix}\quad\text{and}\quad R\coloneqq\begin{pmatrix}T_{00}-T_{01}T_{11}^{-1}T_{10}&0\\ T_{10}-T_{11}(T_{11}^{-1})^{*}T_{01}^{*}&T_{11}\end{pmatrix}.\]
Next, we let \(\varphi_{0}\in\mathcal{H}_{0}\), \(S\coloneqq T_{00}-T_{01}T_{11}^{-1}T_{10}\) and compute
\[\operatorname{Re}\langle S\varphi_{0},\varphi_{0}\rangle_{\mathcal{H}_{0}}=\operatorname{Re}\biggl{\langle}R\begin{pmatrix}\varphi_{0}\\ 0\end{pmatrix},\begin{pmatrix}\varphi_{0}\\ 0\end{pmatrix}\biggr{\rangle}_{\mathcal{H}}=\operatorname{Re}\biggl{\langle}TQ\begin{pmatrix}\varphi_{0}\\ 0\end{pmatrix},Q\begin{pmatrix}\varphi_{0}\\ 0\end{pmatrix}\biggr{\rangle}_{\mathcal{H}}\geq d\biggl{\|}Q\begin{pmatrix}\varphi_{0}\\ 0\end{pmatrix}\biggr{\|}_{\mathcal{H}}^{2}\geq d\|\varphi_{0}\|_{\mathcal{H}_{0}}^{2},\]
where the last estimate uses that the first component of \(Q\begin{pmatrix}\varphi_{0}\\ 0\end{pmatrix}\) equals \(\varphi_{0}\). Hence, \(\operatorname{Re}(T_{00}-T_{01}T_{11}^{-1}T_{10})\geq d\).
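The conclusion of Lemma A.9 can likewise be illustrated numerically (random finite-dimensional data, chosen only for demonstration):

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 3, 0.7
G = rng.standard_normal((2 * n, 2 * n))
K = rng.standard_normal((2 * n, 2 * n))
T = d * np.eye(2 * n) + G.T @ G + (K - K.T)     # Re T = d*I + G^T G >= d

T00, T01, T10, T11 = T[:n, :n], T[:n, n:], T[n:, :n], T[n:, n:]
S = T00 - T01 @ np.linalg.inv(T11) @ T10        # Schur complement w.r.t. T11

min_re = lambda B: np.linalg.eigvalsh((B + B.conj().T) / 2).min()
print(min_re(T11) >= d - 1e-12)                 # Re T11 >= d
print(min_re(S) >= d - 1e-12)                   # Re(T00 - T01 T11^{-1} T10) >= d
```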
**Lemma A.10** ([2, Prop. 13.1.4]).: _Let \(\mathcal{H}\) be a Hilbert space and \((T_{n})_{n\in\mathbb{N}}\) a sequence of boundedly invertible operators in \(\mathcal{L}_{\mathrm{b}}(\mathcal{H})\) such that \(\sup_{n\in\mathbb{N}}\|T_{n}^{-1}\|<\infty\). If \((T_{n})_{n\in\mathbb{N}}\) converges to a \(T\in\mathcal{L}_{\mathrm{b}}(\mathcal{H})\) w.r.t. the strong operator topology and if \(T\) has dense range, then \(T\) is boundedly invertible and \((T_{n}^{-1})_{n\in\mathbb{N}}\) converges to \(T^{-1}\) w.r.t. the strong operator topology._
**Lemma A.11** ([2, Prop. 6.3.1]).: _Let \(\mathcal{H}\) be a Hilbert space and \(A\colon\operatorname{dom}(A)\subseteq\mathcal{H}\to\mathcal{H}\) densely defined and closed with \(\operatorname{dom}(A^{*})\subseteq\operatorname{dom}(A)\). If \(\operatorname{Re}\langle A\varphi,\varphi\rangle_{\mathcal{H}}\geq c>0\) holds for all \(\varphi\in\operatorname{dom}(A)\), then \(A^{-1}\in\mathcal{L}_{\mathrm{b}}(\mathcal{H})\) and \(\|A^{-1}\|\leq\frac{1}{c}\)._
2310.20168 | Understanding and Visualizing Droplet Distributions in Simulations of
Shallow Clouds | Thorough analysis of local droplet-level interactions is crucial to better
understand the microphysical processes in clouds and their effect on the global
climate. High-accuracy simulations of relevant droplet size distributions from
Large Eddy Simulations (LES) of bin microphysics challenge current analysis
techniques due to their high dimensionality involving three spatial dimensions,
time, and a continuous range of droplet sizes. Utilizing the compact latent
representations from Variational Autoencoders (VAEs), we produce novel and
intuitive visualizations for the organization of droplet sizes and their
evolution over time beyond what is possible with clustering techniques. This
greatly improves interpretation and allows us to examine aerosol-cloud
interactions by contrasting simulations with different aerosol concentrations.
We find that the evolution of the droplet spectrum is similar across aerosol
levels but occurs at different paces. This similarity suggests that
precipitation initiation processes are alike despite variations in onset times. | Justus C. Will, Andrea M. Jenney, Kara D. Lamb, Michael S. Pritchard, Colleen Kaul, Po-Lun Ma, Kyle Pressel, Jacob Shpund, Marcus van Lier-Walqui, Stephan Mandt | 2023-10-31T04:25:00 | http://arxiv.org/abs/2310.20168v1 |

# Understanding and Visualizing Droplet Distributions in Simulations of Shallow Clouds
###### Abstract
Thorough analysis of local droplet-level interactions is crucial to better understand the microphysical processes in clouds and their effect on the global climate. High-accuracy simulations of relevant droplet size distributions from Large Eddy Simulations (LES) of bin microphysics challenge current analysis techniques due to their high dimensionality involving three spatial dimensions, time, and a continuous range of droplet sizes. Utilizing the compact latent representations from Variational Autoencoders (VAEs), we produce novel and intuitive visualizations for the organization of droplet sizes and their evolution over time beyond what is possible with clustering techniques. This greatly improves interpretation and allows us to examine aerosol-cloud interactions by contrasting simulations with different aerosol concentrations. We find that the evolution of the droplet spectrum is similar across aerosol levels but occurs at different paces. This similarity suggests that precipitation initiation processes are alike despite variations in onset times.
## 1 Introduction
Understanding and accurately representing cloud processes in numerical models is crucial for improving weather and climate predictions. Cloud droplets and their size distributions play a significant role in various atmospheric phenomena, such as radiation and precipitation initiation, making their characterization essential. However, complete simulation of these processes remains prohibitive in numerical models of the atmosphere due to their high complexity and small physical scale. Instead, cloud physics are represented through parameterizations, greatly simplified processes that often rely on assumptions about the shape of cloud droplet distributions over volumes and the size of a numerical model's grid cell, which remain largely under-verified using observations.
Numerical simulations with more sophisticated cloud microphysics parameterizations (i.e., relying on fewer or no assumptions about the shape of droplet distributions) are used to inform the next generation of cloud physics models. The motivation for this study arises from the need to efficiently and effectively summarize the simulated droplet distributions from a pioneering set of Large Eddy Simulations of shallow clouds. These simulations provide droplet bin masses for every grid cell at a relatively high temporal frequency. While previous studies have employed clustering algorithms on observed droplets for this task (e.g., Allwayin et al. [1]), these methods pose challenges for our data due to their sheer size. We are inspired by recent advancements in machine learning, particularly Variational Autoencoders (VAEs), which have shown promise in capturing patterns in complex climate datasets while preserving physical interpretability [2; 3; 4]. Our main contributions include:
* We propose a new way to visualize high-dimensional, spatio-temporal droplet size distributions by a VAE-based approach, representing droplet distributions through color spectra.
* We characterize the transition of droplet distributions from ambient to precipitating.
* Our analysis confirms that aerosol concentrations may delay precipitation onset.
### LES Simulations
Several LES simulations were run under different meteorological conditions using the PINACLES model [5]. For the sake of brevity, we focus our analysis on simulations of warm clouds in the _trade cumuli_ regime based on the _ATEX_ campaign [6]. We note that our methodology is applicable to a broader set of simulations which also includes, for example, _nocturnal stratocumulus_ based on the _DYCOMS_ campaign [7]. In all simulations, microphysical processes are resolved using the _Fast Spectral Bin Microphysics version 2_[8], which defines cloud droplet size distributions (DSDs) using \(33\) mass-doubling bins up to a maximum diameter of \(6.5\) mm. They are run for 8 hours of simulated time with an internal timestep of roughly 1 second. Three-dimensional snapshots of the \(25.6\times 25.6\times 3\) km doubly periodic domain (with a grid resolution of \(40\)m) are taken every \(10\) minutes. Three separate simulations are run at half, base, and double the published aerosol concentrations as prescribed for the RICO study [9], allowing us to isolate and analyze cloud-aerosol interactions. For computational efficiency, we discard all DSDs associated with clear air, i.e. whose summed mixing ratio (mass of liquid per unit of dry air) falls below a threshold of \(10^{-5}\). Furthermore, whenever DSDs are used as input to neural networks, their summed mixing ratio is normalized to \(1\). This allows for faster and more stable learning and avoids giving less importance to DSDs with less mass.
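As a minimal sketch of this preprocessing step (assuming the per-bin mixing ratios are available as a NumPy array; the array layout, threshold variable, and function name are illustrative and not taken from the PINACLES code):

```python
import numpy as np

def preprocess_dsds(dsd_bins, threshold=1e-5):
    """Drop clear-air DSDs and normalize the remaining ones.

    dsd_bins: array of shape (n_cells, 33) with per-bin mixing ratios
    (mass of liquid per unit of dry air) for each grid cell.
    """
    total_mixing_ratio = dsd_bins.sum(axis=1)   # summed mixing ratio per cell
    cloudy = total_mixing_ratio > threshold     # discard "clear air" cells
    dsd_cloudy = dsd_bins[cloudy]
    # Normalize each retained DSD so that its summed mixing ratio equals 1.
    dsd_normalized = dsd_cloudy / dsd_cloudy.sum(axis=1, keepdims=True)
    return dsd_normalized, cloudy

# Example with random stand-in data for the 33 mass-doubling bins.
rng = np.random.default_rng(0)
normalized, mask = preprocess_dsds(rng.random((1000, 33)) * 1e-4)
```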
## 2 Variational Autoencoder and Learned Representation
A recent study by Lamb et al. [10] suggests that droplet collision-coalescence, which is the most important processes governing the time evolution of DSDs, has an inherent dimensionality of \(3\). This motivates the use of a learned \(3\)-dimensional representation, which empirically captures all important characteristics, even in our more complex setting including spatial interaction and aerosols. Specifically, we use Variational Autoencoders (VAEs) [11], which are generative latent variable models that can be fit to data \(\mathcal{D}=\{\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\}\), learning both a low-dimensional representation of data samples and enabling controlled generation of new data. In contrast to non-stochastic autoencoders this allows us to find more robust representations that better generalize to new samples and to quantify data variability and model uncertainty. To this end, we define a joint likelihood over data \(\mathbf{x}\) and a lower-dimensional latent variable \(\mathbf{z}\), where \(\mathbf{z}\) informs a complex conditional distribution \(p_{\theta}(\mathbf{x}|\mathbf{z})\) over the data domain - in our case, a Gaussian distribution whose mean is parameterized by a feed-forward neural network (MLP) \(\mu_{\theta}(\mathbf{z})\) (the variational decoder). To fit this model to data, we use amortized variational inference to minimize the negative evidence lower bound (NELBO) \(\mathcal{L}_{\theta}(q)\), which uses a Gaussian approximation \(q_{\psi}\) (with mean \(g_{\psi}(\mathbf{x}_{i})\) and \(h_{\psi}(\mathbf{x}_{i})\) parameterized using MLPs) to the posterior \(p_{\theta}(\mathbf{z}|\mathbf{x})\) to tightly bound the intractable negative marginal likelihood from above. This is equivalent to minimization with the loss
\[\mathcal{L}_{\theta,\psi}(\mathbf{x})=\mathbb{E}_{\mathbf{z}\sim q_{\psi}} \left[\frac{1}{2}\left\|\mathbf{x}-\mu_{\theta}(\mathbf{z})\right\|_{2}^{2} \right]+\beta\operatorname{KL}(q_{\psi}(\mathbf{z})\left\|\,p(\mathbf{z}))\]
so that suitable parameters \((\theta,\psi)\) can be found with stochastic gradient-based optimization techniques.
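The following is a minimal PyTorch sketch of such a VAE on normalized 33-bin DSDs; the layer widths, the \(\beta\) value, and all names are illustrative assumptions rather than the configuration used to produce the results reported here.

```python
import torch
import torch.nn as nn

class DSDVAE(nn.Module):
    def __init__(self, n_bins=33, latent_dim=3, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_bins, hidden), nn.ReLU())
        self.enc_mu = nn.Linear(hidden, latent_dim)      # mean of q_psi(z|x)
        self.enc_logvar = nn.Linear(hidden, latent_dim)  # log-variance of q_psi(z|x)
        self.decoder = nn.Sequential(                    # mu_theta(z)
            nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_bins))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.decoder(z), mu, logvar

def nelbo(x, x_rec, mu, logvar, beta=1.0):
    rec = 0.5 * ((x - x_rec) ** 2).sum(dim=1)                      # Gaussian reconstruction term
    kl = 0.5 * (mu ** 2 + logvar.exp() - 1.0 - logvar).sum(dim=1)  # KL(q_psi || N(0, I))
    return (rec + beta * kl).mean()
```

Training then amounts to minimizing this objective over mini-batches of normalized DSDs with a stochastic optimizer such as Adam.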
Figure 1: The time evolution of the spatial organization of droplet size distributions in simulations of shallow clouds at a base aerosol level. Color represents latent space location as defined in Section 3 and thus indicates distribution characteristics. Precipitating regions appear only at later times.
Figure 2a illustrates the latent space of our final model showing the point cloud of encoded latent representations for all DSDs across all time steps and aerosol levels. Encoded points close in this latent space directly correspond to DSDs with similar characteristics so that the spatial organization in the latent space meaningfully represents the inherent structure present in the data. Specifically, we observe that the regions with high point density form a highly connected continuum, indicating the presence of a very continuous transition between DSDs of different characteristics, even in distribution space. We identify a large, homogeneous, and roughly spherical region centered at zero that smoothly transitions into a separate narrow filament structure that traces a path with a sharp bend.
## 3 Visualization and Insights
Representing the 3D latent space location with colors, we can assign continuous labels to different regions to permit a 1D _interpretation_ of latent space "neighborhoods" without any clustering or information loss. Specifically, we make use of the fact that color itself can be described using a three-dimensional spectrum and map the latent variable \(z\) onto a color where the value in the first, second, and third latent dimension linearly corresponds to the amount of red, green, and blue in the color. Figure 2a shows each data point colored using this RGB representation.
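A sketch of this color mapping is given below; rescaling each latent dimension to the unit interval before reading it as a color channel is an assumption, since the exact normalization is not specified above.

```python
import numpy as np

def latent_to_rgb(z):
    """Map 3-D latent coordinates to RGB colors in [0, 1].

    z: array of shape (n_points, 3). Each latent dimension is rescaled
    linearly to [0, 1] and read as the red, green, and blue channel.
    """
    z = np.asarray(z, dtype=float)
    z_min, z_max = z.min(axis=0), z.max(axis=0)
    return (z - z_min) / (z_max - z_min + 1e-12)  # usable as matplotlib's `c=` argument
```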
Figure 1 shows the time evolution of DSDs in a simulation of clouds from the _ATEX_ meteorological case at the base aerosol level. Looking at the spatial organization allows us to better understand the role DSDs of different characteristics play. We note that even as early as \(2\) hours, well in advance of the occurrence of large cohesive shafts of large droplets extending down to the surface (i.e., precipitation), small pockets of yellow-to-green DSDs form, which later become more closely associated with the precipitating regions, where rain seems to form a yellow-green-blue transition as the droplets get bigger and start to fall to lower altitudes.
The emergence of precipitation regions is also clearly visible in the latent space, where the associated filament structure only appears at later time steps, when mass starts moving along the path as indicated in Figure 2a. By tracing the retrieved path in the latent space and relating it back to associated distributions, we can gain valuable insight into distribution transitions along the path of precipitation. Specifically, for each point on the latent space path, we average the \(1000\) observed DSDs whose encoded representations are closest, in a Euclidean sense, to the point of interest. The obtained distribution evolution is shown in Figure 2b and characterized by a steady increase in droplet size, again confirming the close association with rainfall.
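A sketch of this averaging step, assuming the encoded latents, the sampled path points, and the DSDs are NumPy arrays (the value \(k=1000\) follows the text; all names are illustrative):

```python
import numpy as np

def average_dsds_along_path(path_points, latents, dsds, k=1000):
    """For each point on the latent-space path, average the k DSDs whose
    encodings are closest to it in the Euclidean sense."""
    profiles = []
    for p in path_points:                      # path_points: (n_path, 3)
        dist = np.linalg.norm(latents - p, axis=1)
        nearest = np.argsort(dist)[:k]         # indices of the k closest encodings
        profiles.append(dsds[nearest].mean(axis=0))
    return np.stack(profiles)                  # (n_path, n_bins) averaged distributions
```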
Figure 2: a) The joint VAE latent space over all time steps and aerosol levels. Color represents latent space location as defined in Section 3. The red arrow marks the pathway of precipitation, retrieved based on the latent space evolution through time. b) Evolution of the droplet size distributions along the pathway, from ambient to precipitating distributions.
Finally, we can summarize the state of the simulation at each time step by looking at what proportion of DSDs follow specific characteristics/colors. This allows us to pinpoint precipitation initiation to the moment when the presence of DSDs in precipitating regions, associated with green and blue colors, significantly increases at later simulation times. This analysis is similar to tracking distribution across clusters over time, only for the case of continuous labels using a full spectrum of colors instead. Figure 3 shows, for each altitude level of the model, the relative occurrence of specific droplet size distributions at \(2\), \(4\), and \(7\) hours for each aerosol concentration. Information about the horizontal position and vertical _structure_ is discarded. Specifically, we sort the set of all DSDs in the horizontal plane (for fixed aerosol, time step, and height) by hue. After normalization of saturation and brightness, this leaves us with a smooth color transition from violet/pink to blue colors that show proportionality while roughly following the transition from ambient DSDs with mainly small droplet sizes to DSDs associated with precipitation.
The composition plots in Figure 3 enable the fast summary of the state of the simulation and allow for insightful comparisons across different simulation conditions. For aerosol concentration in particular, we note that an increase in aerosol concentration causes a delay in the onset of precipitation. At a base aerosol level, green and blue DSDs appear in significant amounts only after roughly \(4\) hours. With less aerosols, this happens \(2\) hours earlier and with more aerosols \(3\) hours later. Delayed onset is likely a consequence of the higher number of smaller droplets that form with more aerosols, suppressing rain formation. The above insights highlight the utility of our proposed visualization techniques in the analysis of LES simulation data. In the future, we aim to extend this work with further visualization tools that will enable new applications and give us the ability to answer a broader range of questions relating to, for example, entrainment, mass transport, temperature, updraft, and horizontal winds.
## 4 Conclusion
In this study, we have introduced a novel approach to understanding and visualizing droplet size distributions in simulations of warm clouds using Variational Autoencoders (VAEs). By encoding droplet distributions into a compact latent space and representing them through color spectra, we gain valuable insights into the organization and evolution of droplet sizes over time and across different aerosol concentrations. Our findings reveal that while increased aerosol levels delay the onset of precipitation, the evolution of droplet distributions follows a similar pattern. The visualization techniques presented offer powerful tools for efficient and effective analysis of Large Eddy Simulation (LES) data and permit a deeper understanding of cloud microphysics and its impact on weather and climate predictions. Future work will explore additional visualizations to address a broader range of questions related to cloud dynamics and processes.
Figure 3: The relative occurrence of specific droplet size distributions per height level. Color is normalized and represents latent space location as discussed in Section 3. Increasing levels of aerosols delay the precipitation onset.
## Acknowledgements
The study was supported as part of the Enabling Aerosol-cloud interactions at GLobal convection-permitting scalES (EAGLES) project (project no. 74358) sponsored by the United States Department of Energy (DOE), Office of Science, Office of Biological and Environmental Research (BER), Earth System Model Development (ESMD) program area. The Pacific Northwest National Laboratory (PNNL) is operated for the DOE by the Battelle Memorial Institute under Contract DE-AC05-76RL01830. The research used high-performance computing resources from the PNNL Research Computing, the BER Earth System Modeling program's Compy computing cluster located at PNNL, and resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231, using NERSC awards ALCC-ERCAP0025938 and BER-ERCAP0024471.
Thorough analysis of local droplet-level interactions is essential for a deeper understanding of the microphysical processes in clouds and their global-scale effects. High-accuracy simulations of the relevant droplet size distributions from Large Eddy Simulations (LES) of bin microphysics pose challenges for current analysis techniques because of their complex, high-dimensional structure involving three spatial dimensions, time, and a continuous range of droplet sizes. Using the compact latent representations from Variational Autoencoders (VAEs), we produce novel and intuitive visualizations of the organization of droplet sizes and their evolution over time, beyond what is possible with conventional clustering techniques. This greatly improves interpretation and allows aerosol-cloud interactions to be examined by comparing simulations with different aerosol concentrations.
2303.17824 | Implementation and (Inverse Modified) Error Analysis for
implicitly-templated ODE-nets | We focus on learning unknown dynamics from data using ODE-nets templated on
implicit numerical initial value problem solvers. First, we perform Inverse
Modified error analysis of the ODE-nets using unrolled implicit schemes for
ease of interpretation. It is shown that training an ODE-net using an unrolled
implicit scheme returns a close approximation of an Inverse Modified
Differential Equation (IMDE). In addition, we establish a theoretical basis for
hyper-parameter selection when training such ODE-nets, whereas current
strategies usually treat numerical integration of ODE-nets as a black box. We
thus formulate an adaptive algorithm which monitors the level of error and
adapts the number of (unrolled) implicit solution iterations during the
training process, so that the error of the unrolled approximation is less than
the current learning loss. This helps accelerate training, while maintaining
accuracy. Several numerical experiments are performed to demonstrate the
advantages of the proposed algorithm compared to nonadaptive unrollings, and
validate the theoretical analysis. We also note that this approach naturally
allows for incorporating partially known physical terms in the equations,
giving rise to what is termed ``gray box" identification. | Aiqing Zhu, Tom Bertalan, Beibei Zhu, Yifa Tang, Ioannis G. Kevrekidis | 2023-03-31T06:47:02 | http://arxiv.org/abs/2303.17824v2 | # Implementation and (Inverse Modified) Error Analysis
###### Abstract
We focus on learning unknown dynamics from data using ODE-nets templated on implicit numerical initial value problem solvers. First, we perform Inverse Modified error analysis of the ODE-nets using unrolled implicit schemes for ease of interpretation. It is shown that training an ODE-net using an unrolled implicit scheme returns a close approximation of an Inverse Modified Differential Equation (IMDE). In addition, we establish a theoretical basis for hyper-parameter selection when training such ODE-nets, whereas current strategies usually treat numerical integration of ODE-nets as a black box. We thus formulate an adaptive algorithm which monitors the level of error and adapts the number of (unrolled) implicit solution iterations during the training process, so that the error of the unrolled approximation is less than the current learning loss. This helps accelerate training, while maintaining accuracy. Several numerical experiments are performed to demonstrate the advantages of the proposed algorithm compared to nonadaptive unrollings, and validate the theoretical analysis. We also note that this approach naturally allows for incorporating partially known physical terms in the equations, giving rise to what is termed "gray box" identification.
MSC codes: 37M10, 65L06, 65L09, 65P99
## 1 Introduction
Discovering unknown dynamical systems from observed dynamical data is an established system-identification task in which machine learning has been shown to be remarkably effective. Neural networks \(f_{\theta}\), coined "ODE-nets", are used to parameterize the unknown governing differential equations; their parameters \(\theta\) are obtained by minimizing the difference between the observed state time series and the outputs evaluated by numerically solving the ODE governed by the right-hand-side \(f_{\theta}\). Original publications along this line date back to the 1990s [2, 21, 42, 43]. Recently, Neural ODEs [11] substantially revisited these ideas using modern computational tools, and are being applied to more challenging tasks beyond modeling dynamical systems. Here, the adjoint reverse-time equations, introduced as a continuous-time analogue of backpropagation, are employed for computation of gradients. In addition, various related architectures have been proposed [40, 27, 51], and research interest in this direction has been growing to include coupling machine learning with prior knowledge of (some) physics of the underlying systems [6, 7, 10, 29, 30, 50, 53] (see also [42] for a discussion
on gray-box modeling for incorporating known physics into such learned models).
However, even assuming the best-case convergence of the optimizer and accuracy of the data, the numerical integration of the network used to fit the data can itself introduce a bias into the equations extracted. In this paper, we propose to analyse the influence of the numerical integration scheme template in such learning models.
In the last few decades, modified differential equations (MDEs) and backward error analysis [16; 18; 19; 24; 41; 46; 52] have become well-established tools for analyzing the numerical solution of evolution equations (where we produce approximate trajectories from a true ODE). The main idea of MDEs is to interpret the numerical solution as the exact solution of a perturbed differential equation expressed by a formal series. We can then analyze the MDE, which is easier than the analysis of the discrete numerical solution.
Recently, inspired by MDEs and backward error analysis, Inverse Modified Differential Equations (IMDEs) [55] have been proposed; they allow the efficient analysis of numerical schemes applied to the discovery of dynamics (_where we produce an approximate ODE from true trajectories_). By analogy with the MDE (see Figure 1), the IMDE is a perturbed differential equation whose numerical solution matches the exact observed solution (the data). It was shown in [55] that training an ODE-net returns a close approximation of the IMDE, and that some known analysis results of solving ODEs, such as order of convergence, have natural extensions to the field of discovery of dynamics.
Other analysis results exist for the discovery of dynamics by combining numerical integrators and deep learning. In [31], a refined framework is established to derive the convergence and stability of Linear Multistep Neural Networks (LMNets) [40] via the characteristic polynomial of classic numerical linear multistep methods (LMM). In addition, an augmented loss function was introduced, based on auxiliary conditions that serve a purpose analogous to the explicit starting step used when performing forward integration with a LMM. It has been shown that the grid error of the global minimizer is bounded by the sum of the discretization error and the approximation error [15].
These analyses concentrate on LMM in LMNets, where all LMM discretization (typically implicit) can be exactly employed, and directly quantify the error between the true governing function and its neural network approximation. The existence of an associated IMDE implies uniqueness of the solution to the learning task (in a concrete sense), and also allows us to analyze the numerical error in ODE-nets. However, the results in [55] only hold when the
Figure 1: Schematic depiction of the relation between the true model, the Modified and the Inverse Modified Differential Equations (MDE and IMDE respectively). Forward Error Analysis studies the difference between the true and the numerical solution of the model, while Backward, and Inverse Backward Error Analysis examine the difference between the true model and the MDE/IMDE respectively.
numerical integration is _exactly_ evaluated, whereas the implementation of implicit integration in ODE-nets requires a root-finding algorithm, i.e. by unrolling the iterations, so as to obtain an accurate approximate solution. The mutual differences between these existing theoretical analyses and our main results are schematically visualized in Figure 2.
In this paper, we extend the analysis proposed in [55] and perform IMDE analysis for ODE-nets in which we unroll (and truncate) the iterations for solving the implicit scheme within the network architecture. To begin with, we search for a perturbed differential equation, i.e., the IMDE, such that its unrolled implicit integration matches observations of the exact solution of the true system. It is noted that this IMDE now depends on the number of unrolled stages (iteration number) of the unrolled implicit scheme. In addition, we prove that, under reasonable assumptions, training an ODE-net using an unrolled implicit scheme returns an approximation of the corresponding IMDE. As a direct consequence, increasing the iteration number results in a more accurate recovery of the IMDE. Finally, the rate of convergence of ODE-nets using unrolled implicit schemes is also presented. Several experiments are performed to validate the analysis, and the numerical results are in agreement with the theoretical findings.
The numerical integration of ODE-nets is typically treated as a black box in current strategies. Here, an unrolling approach to implicit integration requires recurrent calculations; augmenting computational cost and, in particular, memory demands. Based on the analysis results, we establish a theoretical basis for hyper-parameter selection when training ODE-nets. We formulate an adaptive algorithm that monitors the level of error and adapts the iteration number in the training process to accelerate training while maintaining accuracy. In the initial stage of training, a rough approximation target, i.e., a smaller iteration number, is accurate enough for optimization. As learning loss decreases, we increase the iteration number so as
Figure 2: _Relationships between different ODEs considered in the backwards analysis literature. A schematic diagram showing existing theoretical analyses and our main results with explicit Euler (ee) and implicit Euler (ie) schemes as examples. The residual of explicit Euler is \(r_{\mathrm{ee}}=||\phi_{\Delta t}(x)-(x+\Delta t\cdot f_{\theta}(x))||_{2}^{2}\) where \(\{x,\phi_{\Delta t}(x)\}\) are the data. The residual of the LMNets approach to implicit Euler is \(r_{\mathrm{LMNets}}=||\phi_{\Delta t}(x)-(x+\Delta t\cdot f_{\theta}(\phi_{\Delta t}(x)))||_{2}^{2}\), given the data. The residual of our implicit Euler is \(r_{\mathrm{ie}}=||\phi_{\Delta t}(x)-\mathrm{argsoln}_{z}\{z=x+\Delta t\cdot f_{\theta}(z)\}||_{2}^{2}\) where \(z\) is the network prediction obtained by a root-finding algorithm._
to achieve a more accurate target. Numerical experiments show that the proposed algorithm leads to a 2-3\(\times\) speedup in training without any degradation in accuracy.
### Related works
There have been extensive attempts to determine unknown dynamics using various approaches including symbolic regression [47], Gaussian processes [39], sparse regression [8], statistical learning [34], etc. Among various models, the ODE-nets [44, 38, 11, 43] have been established as powerful tools to model complicated physical phenomena from time series data, and have achieved numerous successes [2, 7, 21, 29, 40, 42, 43]. Recently, researchers have focused on leveraging a continuous-time representation to incorporate physical inductive biases such as symplectic structure [6, 22, 50], the Onsager principle [53], the GENERIC formalism [54] and time-reversal symmetry [29], to name a few, into the learning model.
The implementation of ODE-nets and their variants is inevitably linked with numerical integration. Several libraries such as _torchdiffeq_, _diffrax_ and _torchdyn_ have been developed to provide standardized differentiable implementations of ODE solvers. Many learning models use the Euler discretization method (e.g. [6, 22]) or higher-order explicit Runge-Kutta methods (e.g. [53]), while some models encoding symplecticity use a symplectic integrator to preserve the special Hamiltonian form (e.g., [12, 48]). The work in [35] proposed a novel stiffness regularization for ODE-nets based on the internal cost of numerical integration. The interplay between learning Neural ODEs and numerical integration is explored in [37], where so-called hypersolvers are introduced for fast inference. A comprehensive study of gradient-based learning with implicit integration was explored in previous work [2], considering unrolling as well as Pineda's and Almeida's Recurrent Back-Propagation [36, 1]. In this paper, we focus on the implementation of _unrolled_ implicit numerical integration within ODE-nets, its numerical analysis, and the adaptation of the iteration number for the solution of the implicit problem to reduce computational cost.
Recent works [3, 5, 28, 17] proposed various versions of implicit models and demonstrated their empirical success; they directly exploit root-finding algorithms (e.g., fixed-point iteration, Newton-Raphson iteration and its variants, Broyden's method and Anderson acceleration) to solve for the output in the forward pass. In [4] an auxiliary network was introduced to provide the initial value and to perform iterative updates, improving inference efficiency. In [20] a novel gradient estimate was proposed to circumvent computing the exact gradient by implicit differentiation. Although the precise formulations and motivations of these implicit models are quite different, the application of our adaptive algorithm to these implicit models is a promising avenue for future work.
## 2 Problem setup
Consider the dynamical system
\[\frac{d}{dt}\mathbf{y}(t)=f(\mathbf{y}(t)),\quad\mathbf{y}(0)=\mathbf{x} \tag{1}\]
where \(\mathbf{y}(t)\in\mathbb{R}^{D}\) is the state vector, evolving over time according to the governing function \(f\). Let \(\phi_{t}(\mathbf{x})\) be the exact solution and \(\Phi_{h}(\mathbf{x})\) be the numerical solution (by some initial value problem solving algorithm) with discrete step \(h\). In order to emphasize a specific differential equation, we will add the subscript \(f\) and denote \(\phi_{t}\) as \(\phi_{t,f}\) and \(\Phi_{h}\) as \(\Phi_{h,f}\).
If \(f\) and the initial state \(\mathbf{x}\) are known, the future states can be predicted by solving the equation (1). On the other hand, if the exact governing equation is unknown, but some trajectories are given, ODE-nets model the dynamical system by neural networks and then predict future states via the learned model.
Mathematically, an ODE-net identified right-hand-side leads to the ODE
\[\frac{d}{dt}\tilde{\mathbf{y}}(t)=f_{\theta}(\tilde{\mathbf{y}}(t)),\quad\tilde{\mathbf{y}} (0)=\mathbf{x}, \tag{2}\]
where \(f_{\theta}\) is the neural network approximating the unknown vector field \(f\). With initial condition \(\mathbf{x}\), an ODE-net predicts the output by solving (2) numerically. Given \(N\) observed trajectories \(\mathbf{x}_{n},\phi_{\Delta t}(\mathbf{x}_{n}),\cdots,\phi_{M\Delta t}(\mathbf{x}_{n})\), \(n=1,\cdots,N\) with time step \(\Delta t\), the network parameters are determined by minimizing the loss function
\[\mathcal{L}_{exact}=\sum_{n=1}^{N}\sum_{m=1}^{M}\|\text{ODESolve}(\mathbf{x}_{n},f_ {\theta},m\Delta t)-\phi_{m\Delta t}(\mathbf{x}_{n})\|_{2}^{2}/(m\Delta t)^{2}. \tag{3}\]
Note that variable data steps \(\Delta t\) are also possible. \(M=1\) is the classical "teacher forcing"; an excessively large \(M\) can be computationally costly while offering limited benefit, especially early in training, when long-time predictions are still poor. So, if the training data is in the form of long trajectories, we often divide them into smaller sub-episodes, leading to an \(M\)-step teacher forcing scheme [49]. We used \(M=1\) for all of the numerical experiments in this paper except the last, for which we used \(M=10\).
In this paper, the choice for the ODE solver consists of \(s\) compositions of a numerical scheme, i.e.,
\[\text{ODESolve}(\mathbf{x},f_{\theta},m\Delta t)=\underbrace{\Phi_{h,f_{\theta}} \circ\cdots\circ\Phi_{h,f_{\theta}}}_{ms\text{ compositions}}(\mathbf{x})=\left(\Phi_{h,f_{\theta}}\right)^{ms}(\mathbf{x}),\]
where \(h=\Delta t/s\) is the discrete step. A common choice of numerical scheme \(\Phi_{h}\) is the Runge-Kutta method, which is formulated as
\[\mathbf{v}_{i}=\mathbf{x}+h\sum_{j=1}^{I}a_{ij}f_{\theta}(\mathbf{v}_{j})\quad i =1,\cdots,I \tag{4a}\] \[\Phi_{h,f_{\theta}}(\mathbf{x})=\mathbf{x}+h\sum_{i=1}^{I}b_{i}f_{\theta} (\mathbf{v}_{i}). \tag{4b}\]
A Runge-Kutta method (4) is explicit only if \(a_{ij}=0\) for \(i\leq j\). Otherwise it is implicit, and the output has to be computed iteratively. For example, we could use fixed-point iteration (successive substitution) with fixed iteration number \(L\), in which case the approximation of
(2.4), denoted by \(\Phi^{L}_{h,f_{\theta}}(\mathbf{x})\), is given by
\[\mathbf{v}^{0}_{i}=\mathbf{x}\quad i=1,\cdots,I, \tag{2.5}\] \[\mathbf{v}^{l}_{i}=\mathbf{x}+h\sum_{j=1}^{I}a_{ij}f_{\theta}(\mathbf{v}^{l-1 }_{j})\quad i=1,\cdots,I,\ l=1,\cdots,L.\] \[\Phi^{L}_{h,f_{\theta}}(\mathbf{x})=\mathbf{x}+h\sum_{i=1}^{I}b_{i}f_{ \theta}(\mathbf{v}^{L}_{i}).\]
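A minimal PyTorch sketch of the unrolled fixed-point approximation (2.5) for a general Butcher tableau \((a_{ij},b_{i})\) is given below; the implicit midpoint tableau in the last line is only an illustrative choice, and all names are assumptions rather than the implementation used for this paper.

```python
import torch

def unrolled_fixed_point_step(f, x, h, A, b, L):
    """One step of Phi^L_{h,f}: fixed-point unrolling of an implicit Runge-Kutta scheme.

    f: callable mapping states of shape (batch, D) to vector-field values.
    x: tensor of shape (batch, D), the current state.
    A: (I, I) tensor of Runge-Kutta coefficients a_ij;  b: (I,) tensor of weights b_i.
    L: number of unrolled fixed-point iterations.
    """
    I = A.shape[0]
    v = [x.clone() for _ in range(I)]                  # v_i^0 = x
    for _ in range(L):
        fv = torch.stack([f(vi) for vi in v])          # (I, batch, D)
        v = [x + h * torch.einsum('j,jbd->bd', A[i], fv) for i in range(I)]
    fv = torch.stack([f(vi) for vi in v])
    return x + h * torch.einsum('i,ibd->bd', b, fv)

# Illustrative tableau: the implicit midpoint rule (a_11 = 1/2, b_1 = 1).
A_mid, b_mid = torch.tensor([[0.5]]), torch.tensor([1.0])
```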
Newton-Raphson iteration is available as an alternative approach for solving the implicit equation (2.4a), where the approximation using \(L\) iterations of (2.4), denoted as \(\Phi^{L}_{h,f_{\theta}}(\mathbf{x})\), is given by
\[\mathbf{v}^{0}_{i}=\mathbf{x}\quad i=1,\cdots,I, \tag{2.6}\] \[\mathbf{v}^{l}_{i}=\mathbf{x}+h\sum_{j=1}^{I}a_{ij}\big{(}f_{\theta}(\bm {v}^{l-1}_{j})+f^{\prime}_{\theta}(\mathbf{v}^{l-1}_{j})(\mathbf{v}^{l}_{j}-\mathbf{v}^{l- 1}_{j})\big{)}\quad i=1,\cdots,I,\ l=1,\cdots,L.\] \[\Phi^{L}_{h,f_{\theta}}(\mathbf{x})=\mathbf{x}+h\sum_{i=1}^{I}b_{i}f_{ \theta}(\mathbf{v}^{L}_{i}).\]
The second of the equations in (2.6) is equivalent to the \(l^{\text{th}}\) Newton step, where we know \(\mathbf{v}^{l-1}_{j}\) for \(j=1,\ldots,I\), and we solve (for each \(l\)) \(D\times I\) linear equations \(F^{\prime}(\mathbf{V}^{l-1})\cdot(\mathbf{V}^{l-1}-\mathbf{V}^{l})=F(\mathbf{V}^{l-1})\) to obtain \(\mathbf{v}^{l}_{j}\) for \(j=1,\ldots,I\). Specifically,
\[\mathbf{V}^{l-1}=\begin{pmatrix}\mathbf{v}^{l-1}_{1}\\ \mathbf{v}^{l-1}_{2}\\ \vdots\\ \mathbf{v}^{l-1}_{I}\end{pmatrix},\quad\mathbf{V}^{l}=\begin{pmatrix}\mathbf{v}^{l}_{1}\\ \mathbf{v}^{l}_{2}\\ \vdots\\ \mathbf{v}^{l}_{I}\end{pmatrix},\quad F(\mathbf{V}^{l-1})=\begin{pmatrix}\mathbf{v}^{l-1}_ {1}-\mathbf{x}-h\sum_{j=1}^{I}a_{1j}f_{\theta}(\mathbf{v}^{l-1}_{j})\\ \mathbf{v}^{l-1}_{2}-\mathbf{x}-h\sum_{j=1}^{I}a_{2j}f_{\theta}(\mathbf{v}^{l-1}_{j})\\ \vdots\\ \mathbf{v}^{l-1}_{I}-\mathbf{x}-h\sum_{j=1}^{I}a_{Ij}f_{\theta}(\mathbf{v}^{l-1}_{j}) \end{pmatrix},\]
and
\[F^{\prime}(\mathbf{V}^{l-1})=\begin{pmatrix}\mathbf{I}_{D\times D}-ha_{11}f^{\prime}_{\theta}(\mathbf{v}^{l-1}_{1})&-ha_{12}f^{\prime}_{\theta}(\mathbf{v}^{l-1}_{2})&\cdots&-ha_{1I}f^{\prime}_{\theta}(\mathbf{v}^{l-1}_{I})\\ -ha_{21}f^{\prime}_{\theta}(\mathbf{v}^{l-1}_{1})&\mathbf{I}_{D\times D}-ha_{22}f^{\prime}_{\theta}(\mathbf{v}^{l-1}_{2})&\cdots&-ha_{2I}f^{\prime}_{\theta}(\mathbf{v}^{l-1}_{I})\\ \vdots&\vdots&\ddots&\vdots\\ -ha_{I1}f^{\prime}_{\theta}(\mathbf{v}^{l-1}_{1})&-ha_{I2}f^{\prime}_{\theta}(\mathbf{v}^{l-1}_{2})&\cdots&\mathbf{I}_{D\times D}-ha_{II}f^{\prime}_{\theta}(\mathbf{v}^{l-1}_{I})\end{pmatrix}.\]
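For the single-stage case of the implicit (backward) Euler method, the Newton-Raphson unrolling (2.6) reduces to solving one \(D\times D\) linear system per iteration; a minimal, unbatched PyTorch sketch (the single-stage specialization and all names are illustrative, not the implementation used here) is:

```python
import torch

def newton_unrolled_implicit_euler(f, x, h, L):
    """Unrolled Newton-Raphson approximation of the implicit Euler step z = x + h f(z),
    for a single (unbatched) state x of shape (D,)."""
    v = x.clone()
    eye = torch.eye(x.shape[0])
    for _ in range(L):
        # Pass create_graph=True here if gradients must flow through the Jacobian during training.
        J = torch.autograd.functional.jacobian(f, v)   # (D, D)
        rhs = x + h * (f(v) - J @ v)
        v = torch.linalg.solve(eye - h * J, rhs)       # (I - h J) v^l = x + h (f(v^{l-1}) - J v^{l-1})
    return x + h * f(v)
```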
In either case, ((2.5) or (2.6)), all of the operations (including any Jacobian evaluations or inversions) have forward and backward implementations in established automatic differentiation packages, and the practical loss function we optimize is given as
\[\mathcal{L}_{unrolled}:=\sum_{n=1}^{N}\sum_{m=1}^{M}\|\big{(}\Phi^{L}_{h,f_{ \theta}}\big{)}^{ms}\left(\mathbf{x}_{n}\right)-\phi_{m\Delta t}(\mathbf{x}_{n})\|_{2}^ {2}/(m\Delta t)^{2}. \tag{2.7}\]
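A sketch of this loss for the teacher-forcing case \(M=1\), reusing the `unrolled_fixed_point_step` sketch from above (taking the mean over samples rather than the plain sum only rescales the objective):

```python
def rollout(f, x, dt, s, A, b, L):
    """Apply s compositions of the unrolled step Phi^L with inner step h = dt / s."""
    h = dt / s
    for _ in range(s):
        x = unrolled_fixed_point_step(f, x, h, A, b, L)
    return x

def unrolled_loss(f, x0, x1, dt, s, A, b, L):
    """Mean-per-sample version of (2.7) with M = 1; x0, x1 are (N, D) snapshot pairs."""
    pred = rollout(f, x0, dt, s, A, b, L)
    return ((pred - x1) ** 2).sum(dim=1).div(dt ** 2).mean()
```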
## 3 Inverse Modified Error Analysis
The discovery of dynamics using ODE-nets is essentially an inverse process. As (direct) Modified Differential Equations (MDEs) are well-established tools for the numerical analysis of differential equations, their formal extension to Inverse Modified Differential Equations (IMDEs) should prove particularly useful for the error analysis of ODE-nets [55]. In this section, we will extend the results in [55] to unrolled implicit schemes.
### Inverse Modified Differential Equations of unrolled implicit schemes
An IMDE is a perturbed differential equation of the form
\[\frac{d}{dt}\mathbf{\tilde{y}}(t)=f_{h}(\mathbf{\tilde{y}}(t))=f_{0}(\mathbf{\tilde{y}})+ hf_{1}(\mathbf{\tilde{y}})+h^{2}f_{2}(\mathbf{\tilde{y}})+\cdots,\]
such that formally
\[\Phi_{h,f_{h}}(\mathbf{x})=\phi_{h,f}(\mathbf{x}), \tag{10}\]
where the identity is understood in the sense of the formal power series in \(h\). To obtain \(f_{h}\) of an unrolled implicit scheme (5), we can expand both sides of (10) into the corresponding Taylor series around \(h=0\). First,
\[\begin{split}\phi_{h,f}(\mathbf{x})&=\mathbf{x}+hf(\mathbf{x}) +\frac{h^{2}}{2}f^{\prime}f(\mathbf{x})+\frac{h^{3}}{6}(f^{\prime\prime}(f,f)(\mathbf{x })+f^{\prime}f^{\prime}f(\mathbf{x}))\\ &+\frac{h^{4}}{24}(f^{\prime\prime\prime}(f,f,f)(\mathbf{x})+3f^{ \prime\prime}(f^{\prime}f,f)(\mathbf{x})+f^{\prime}f^{\prime\prime}(f,f)(\mathbf{x})+ f^{\prime}f^{\prime}f^{\prime}f(\mathbf{x}))+\cdots.\end{split} \tag{11}\]
Here, \(f^{\prime}(\mathbf{x})\) is a linear map (the Jacobian); the second order derivative \(f^{\prime\prime}(\mathbf{x})\) is a symmetric bilinear map; and so on for higher order derivatives described as tensors. We remark that a general expansion (11) can be obtained by Lie derivatives. Next, we expand the unrolled implicit scheme (5) as
\[\Phi^{L}_{h,f_{h}}(\mathbf{x})=\mathbf{x}+hd_{1,f_{h}}(\mathbf{x})+h^{2}d_{2,f_{h}}(\mathbf{x })+h^{3}d_{3,f_{h}}(\mathbf{x})+\cdots, \tag{12}\]
where the functions \(d_{j,f_{h}}\) are given (typically composed of \(f_{h}\) and its derivatives) and can be calculated by applying B-series [9] to equation (5). For consistent integrators, we have
\[d_{1,f_{h}}(\mathbf{x})=f_{h}(\mathbf{x})=f_{0}(\mathbf{x})+hf_{1}(\mathbf{x})+h^{2}f_{2}(\mathbf{ x})+\cdots.\]
Furthermore, in \(h^{i}d_{i,f_{h}}(\mathbf{x})\), the power of \(h\) of any term containing \(f_{k}\) is at least \(k+i\). Thus, the coefficient of \(h^{k+1}\) in (12) is
\[f_{k}+\cdots,\]
where the "\(\cdots\)" indicates residual terms composed of \(f_{j}\) with \(j<k\) and their derivatives. A comparison of equal powers of \(h\) in (11) and (12) then yields recursively the functions \(f_{k}\) in
terms of \(f\) and its derivatives. Some examples are included in Appendix A to illustrate this process. Here, we denote the truncation as
\[f_{h}^{K}(\mathbf{y})=\sum_{k=0}^{K}h^{k}f_{k}(\mathbf{y}).\]
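As a minimal illustration of this recursion (a standard computation, not reproduced from Appendix A), take \(L=0\), for which the unrolled scheme reduces to the forward Euler step \(\Phi^{0}_{h,f_{h}}(\mathbf{x})=\mathbf{x}+hf_{h}(\mathbf{x})\). Matching powers of \(h\) against (11) gives

\[\mathbf{x}+hf_{0}(\mathbf{x})+h^{2}f_{1}(\mathbf{x})+\cdots=\mathbf{x}+hf(\mathbf{x})+\frac{h^{2}}{2}f^{\prime}f(\mathbf{x})+\cdots\quad\Longrightarrow\quad f_{0}=f,\qquad f_{1}=\tfrac{1}{2}f^{\prime}f,\]

so the first-order truncation is \(f_{h}^{1}=f+\tfrac{h}{2}f^{\prime}f\), independently of the underlying implicit tableau; by Theorem 3.3 below, increasing \(L\) alters the IMDE only at orders beyond \(h^{L^{*}}\), driving it towards the IMDE of the exact implicit scheme.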
The IMDE is obtained by paper-and-pencil formal expansion given \(f\) and a numerical scheme of choice, and thus is inaccessible in practice due to the unknown true governing function. Nevertheless, we will be able to conclude the uniqueness of the solution of the learning task and analyse the numerical integration in ODE-nets.
### Main results
We now show that, under reasonable assumptions, training an ODE-net using an unrolled implicit scheme returns a close approximation of the IMDE for the underlying numerical method.
We first set some notation: For a compact subset \(\mathcal{K}\subset\mathbb{C}^{D}\) and the complex ball \(\mathcal{B}(\mathbf{x},r)\subset\mathbb{C}^{D}\) of radius \(r>0\) centered at \(\mathbf{x}\in\mathbb{C}^{D}\), we define the \(r\)-dilation of \(\mathcal{K}\) as \(\mathcal{B}(\mathcal{K},r)=\bigcup_{x\in\mathcal{K}}\mathcal{B}(\mathbf{x},r)\). We will work with the \(l_{\infty}\)-norm on \(\mathbb{C}^{D}\), denoting \(\|\cdot\|=\|\cdot\|_{\infty}\), and for an analytic vector field \(f\), define
\[\|f\|_{\mathcal{K}}=\sup_{x\in\mathcal{K}}\|f(\mathbf{x})\|.\]
Now we present the main result, which implies that the unrolled implicit ODE-net approximates the IMDE.
**Theorem 3.1** (The unrolled approximation approaches the IMDE).: _Consider the dynamical system (2.1), a consistent implicit Runge-Kutta scheme \(\Phi_{h}\) (2.4), and its unrolled approximation \(\Phi_{h}^{L}\) ((2.5) or (2.6)). Let \(f_{\theta}\) be the network learned by optimizing (2.7). For \(\mathbf{x}\in\mathbb{R}^{D}\), \(r_{1},r_{2}>0\), we denote_
\[\mathcal{L}=\|\big{(}\Phi_{h,f_{\theta}}^{L}\big{)}^{s}-\phi_{sh,f}\|_{ \mathcal{B}(\mathbf{x},r_{1})}/\Delta t, \tag{3.4}\]
_and suppose the true vector field \(f\) and the learned vector field \(f_{\theta}\) are analytic and satisfy \(\|f\|_{\mathcal{B}(\mathbf{x},r_{1}+r_{2})}\leq m,\|f_{\theta}\|_{\mathcal{B}(\mathbf{x},r_{1}+r_{2})}\leq m\). Then, there exists a uniquely defined vector field \(f_{h}^{K}\), i.e., the truncated IMDE of \(\Phi_{h}^{L}\), such that, if \(0<\Delta t<\Delta t_{0}\),_
\[\|f_{\theta}(\mathbf{x})-f_{h}^{K}(\mathbf{x})\|\leq c_{1}me^{-\gamma_{1}/\Delta t^{ 1/q}}+\frac{e}{e-1}\mathcal{L}, \tag{3.5}\]
_where the integer \(K=K(h)\) and the constants \(\Delta t_{0}\), \(q\), \(\gamma_{1}\), \(c_{1}\) depend only on \(m/r_{1}\), \(m/r_{2}\), \(s\), \(\Phi_{h}\) and the implicit solver1_
Footnote 1: The constants here depend on the choice of solver (specifically, on the constants \(b_{1},b_{2},b_{3}\) in Assumption B.1). However, since the first term in (3.5) is very small, the constants contained have little effect on the results.
Proof.: The proof can be found in Appendix B.1.
Here, the first term on the right hand side of (3.5) is sub-exponentially small. The \(\mathcal{L}\) defined in (3.4) can be regarded as a generalization of the learning loss (2.3) with \(M=1\) (for different \(M\) we have equivalent convergence due to the following Lemma 3.2). In this paper we mainly focus on numerical schemes, and thus we will not further quantify \(\mathcal{L}\). Provided we make the additional assumptions that there are sufficiently many data points, that the network is sufficiently large, and that the training finds a neural network with perfect performance, the learning loss converges to zero and the difference between the learned ODE and the truncated IMDE converges to near-zero (as per (3.5)). We therefore claim that \(f_{\theta}\) is a close approximation of \(f_{h}^{K}\).
Next, we show that the teacher-forcing loss (i.e., setting \(M=1\) in \(\mathcal{L}_{unrolled}\) of (2.7)) is bounded by the \(M\)-step shooting loss on the same data, and thus these two have the equivalent convergence.
**Lemma 3.2** (**The \(M\)-step shooting loss and the teacher-forcing loss have equivalent convergence**). Let \(\mathcal{T}=\{\phi_{m\Delta t}(\mathbf{x}_{n})\}_{1\leq n\leq N,0\leq m\leq M-1}\) be the total observed data, then, there exist constants \(C_{1}\), \(C_{2}\), such that
\[C_{1}\cdot\mathcal{L}_{unrolled}\leq\sum_{x\in\mathcal{T}}\bigl{\|}\bigl{(} \Phi_{h,f_{\theta}}^{L}\bigr{)}^{s}\left(\mathbf{x}\right)-\phi_{sh,f}(\mathbf{x}) \|_{2}^{2}/\Delta t^{2}\leq C_{2}\cdot\mathcal{L}_{unrolled}. \tag{3.6}\]
Proof.: The proof can be found in Appendix B.2.
Since we consider variable \(M\) in section 2, we perform the analysis that follows for \(M=1\), and use Lemma 3.2 to extend to different choices of \(M\).
Next, we have the following Theorem 3.3, which indicates that increasing the iteration number \(L\) is equivalent to adjusting the approximation target to gradually approach the true target with the help of Theorem 3.1.
**Theorem 3.3** (**Increasing the iteration number \(L\) is equivalent to adjusting the approximation target to gradually approach the true target**).: _Consider a consistent implicit Runge-Kutta scheme \(\Phi_{h}\) (2.4) and denote the IMDE2 of \(\Phi_{h}\) as \(\hat{f}_{h}=\sum_{k=0}^{\infty}h^{k}\hat{f}_{k}\), and the corresponding IMDE via unrolled approximation \(\Phi_{h}^{L}\) ((2.5) or (2.6)) as \(f_{h}=\sum_{k=0}^{\infty}h^{k}f_{k}\), respectively. Then_
Footnote 2: If we suspect that this sum does not converge with \(L\to\infty\), we can still study this sum formally, truncating it according to Theorem 3.1.
\[\hat{f}_{h}-f_{h}=\mathcal{O}(h^{L^{*}+1}),\text{ i.e.},\hat{f}_{k}=f_{k}\text{ for }k=0,\cdots,L^{*},\]
_where \(L^{*}=L\) for the unrolled approximation using fixed-point iteration (2.5) and \(L^{*}=2^{L+1}-2\) for the unrolled approximation using Newton-Raphson iteration (2.6)._
Proof.: The proof can be found in Appendix B.3.
Additionally, with the tools of IMDEs, we can obtain the order of convergence for learning ODEs with unrolled implicit integration:
**Theorem 3.4** (**Order of convergence for learning ODEs**).: _With the notation and under the conditions of Theorem 3.1 and Theorem 3.3, if \(\Phi_{h}\) is of order \(p\), i.e., \(\Phi_{h}(\mathbf{x})=\phi_{h}(\mathbf{x})+\mathcal{O}(h^{p+1})\), and \(L^{*}+1\geq p\), then,_
\[\|f_{\theta}(\mathbf{x})-f(\mathbf{x})\|\leq c_{2}mh^{p}+\frac{e}{e-1}\mathcal{L},\]
_where the constant \(c_{2}\) depends only on \(m/r_{1}\), \(m/r_{2}\), \(s\), \(\Phi_{h}\) and the implicit solver._
Proof.: The proof can be found in Appendix B.4.
## 4 Implementation of implicit scheme
As discussed in section 2, one has to exploit a root-finding algorithm to solve the implicit equation (4a) for an implementation of (4). However, a drawback is that the iteration number, or stopping criterion, should usually be determined in advance and fixed during training. According to Theorem 3.1 and Theorem 3.3, different iteration numbers lead to different approximation targets and increasing the iteration number results in a more accurate target. Therefore, our goal is to provide _an adaptive algorithm_ that increases the iteration number \(L\), such that the error of the unrolled approximation is less than the current learning loss, thereby increasing computational efficiency while preserving accuracy.
```
1: Initialization: \(L\), and a neural network \(f_{\theta}\) with trainable parameters.
2: for each training epoch do
3:     Compute \(\text{Loss}=\frac{1}{D\cdot N\cdot M}\mathcal{L}_{unrolled}\), where \(D\) is the dimension.
4:     Let \(\theta\leftarrow\text{optimizer}(\theta,\text{lr},\frac{\partial\text{Loss}}{\partial\theta})\) to update neural network parameters, where lr is the learning rate.
5:     if adjust iteration number then
6:         \(\delta=\frac{1}{D\cdot N\cdot M}\sum_{n=1}^{N}\sum_{m=1}^{M}\|\big{(}\Phi_{h,f_{\theta}}^{L+1}\big{)}^{ms}(\mathbf{x}_{n})-\big{(}\Phi_{h,f_{\theta}}^{L}\big{)}^{ms}(\mathbf{x}_{n})\|_{2}^{2}/(m\Delta t)^{2}\).
7:         if \(\text{Loss}<c\delta\) then
8:             Increase the iteration number \(L\).
9:         end if
10:     end if
11: end for
```
**Algorithm 1**Training with adaptive iteration
Next, we present the error quantification for an ODE-net using an unrolled implicit scheme, which will form the cornerstone for the following adaptive algorithm.
**Lemma 1** (**Convergence of the ("inner") implicit iteration**).: _Consider a consistent implicit Runge-Kutta scheme \(\Phi_{h}\) (4) and its approximation \(\Phi_{h}^{L}\) using fixed-point iteration (5) or Newton-Raphson iteration (6). Then,_
\[\mathcal{L}_{exact}^{\frac{1}{2}}:= \left(\sum_{n=1}^{N}\sum_{m=1}^{M}\|(\Phi_{h,f_{\theta}})^{ms} \left(\mathbf{x}_{n}\right)-\phi_{m\Delta t}(\mathbf{x}_{n})\|_{2}^{2}/(m\Delta t)^{2 }\right)^{\frac{1}{2}}\] \[\leq \mathcal{L}_{unrolled}^{\frac{1}{2}}+\left(\sum_{n=1}^{N}\sum_{m =1}^{M}\|\big{(}\Phi_{h,f_{\theta}}^{L+1}\big{)}^{ms}(\mathbf{x}_{n})-\big{(}\Phi _{h,f_{\theta}}^{L}\big{)}^{ms}(\mathbf{x}_{n})\|_{2}^{2}/(m\Delta t)^{2}\right)^{ \frac{1}{2}}+\mathcal{O}(h^{(L+1)^{*}+1}),\]
_where \(L^{*}=L\) for the unrolled approximation using fixed-point iteration (5) and \(L^{*}=2^{L+1}-2\) for the unrolled approximation using Newton-Raphson iteration (6)._
Proof.: The proof can be found in Appendix B.5.
According to this inequality, we formulate our adaptive Algorithm 1. The core idea is to monitor the level of error, and adapt the iteration number in the training process according to Lemma 1. Essentially, Algorithm 1 adjusts the approximation target, i.e., the IMDE of \(\Phi_{h}^{L}\), to gradually approach the true target, i.e., the IMDE of \(\Phi_{h}\), see Figure 3 for an illustration.
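A compact sketch of the adjustment step (lines 5-10 of Algorithm 1) is given below; the callable `rollout` stands for the \(s\)-fold composition of the unrolled scheme, e.g. built from the fixed-point sketch in section 2, and the constant \(c\) and all names are illustrative.

```python
import torch

def adapt_iteration_number(loss_value, rollout, x0, dt, L, c=1.0):
    """Return a possibly increased iteration number L (cf. lines 5-10 of Algorithm 1).

    rollout(x0, L) must return the s-fold composition (Phi^L_{h, f_theta})^s applied to x0.
    """
    with torch.no_grad():                     # delta is only monitored, not differentiated
        pred_L = rollout(x0, L)
        pred_L_plus_1 = rollout(x0, L + 1)
        delta = ((pred_L_plus_1 - pred_L) ** 2).sum(dim=1).div(dt ** 2).mean()
    if loss_value < c * delta:                # the unrolling error dominates the learning loss
        L += 1
    return L
```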
## 5 Numerical examples
In this section, several examples are used to demonstrate the performance of the proposed algorithm and verify the theoretical analysis. We use the PyTorch library to implement Algorithm 1 and train our neural networks. For a given implicit solver (e.g., fixed-point iteration or Newton-Raphson iteration), we can store and backpropagate through all the iterations to obtain exact gradients for optimization. For all experiments except the last one, we generate the state data by numerically solving the dynamical system using a high-order integrator with a tiny adaptive step. The trajectories are "split" so that their length \(M\) is \(1\). The last of our experiments uses real-world data [47]. In this last case, due to measurement errors and other non-ideal effects, we set the length of the divided trajectories to \(M=10\). After training, we simulate the learned system using a high-resolution numerical solver and compare it against the true system solution. Specifically, the numerical solver for generating data and solving the learned system is the fourth-order Runge-Kutta method with a finer time step of size \(0.01\cdot\Delta t\).
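For completeness, a sketch of such a refined classical Runge-Kutta integration follows (a NumPy illustration under assumed names; the data themselves were generated with a high-order adaptive integrator as stated above):

```python
import numpy as np

def rk4_step(f, x, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(f, x0, dt, n_steps, refine=100):
    """Integrate with inner step dt / refine and return states at the coarse times,
    mirroring the 0.01 * dt solver step used for comparison in this section."""
    h = dt / refine
    traj = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        x = traj[-1]
        for _ in range(refine):
            x = rk4_step(f, x, h)
        traj.append(x)
    return np.stack(traj)
```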
### Linear ODEs
We first present some numerical results for two-dimensional _linear_ ODEs, to verify that training an ODE-net using an unrolled implicit scheme returns an approximation of the IMDE. All examples are taken from [51].
For each test in this subsection, the training data is composed of \(100\) points generated from a uniform distribution over a computational domain \(\mathcal{D}\), each paired with its time-\(\Delta t\) flow, i.e., \(\{\mathbf{x}_{n},\phi_{\Delta t}(\mathbf{x}_{n})\}_{n=1}^{100}\), with \(\mathbf{x}_{n}\sim\text{Uniform}(\mathcal{D})\). The ODE solver used to learn the ODE was
Figure 3: _Illustration of the proposed adaptive algorithm. We initially employ a smaller iteration number \(L\) to train the neural network, and we gradually increase \(L\) as the learning error decreases, to obtain a more precise approximation target._
\begin{table}
\begin{tabular}{c c c c l l} \hline \hline Phase portrait & True system & Learned system & IMDE & Settings \\ \hline \multirow{6}{*}{Saddle point} & \(\dfrac{d}{dt}p=\) & \(1.1035p\) & \(\dfrac{d}{dt}p=\) & \(1.1035p\) & \(\mathcal{D}=[0,2]^{2}\) \\ & & \(+\,1.0033q\) & \(+\,1.0033q\) & \(\Delta t=0.1\) \\ & \(\dfrac{d}{dt}p=\) & \(p+q-2\) & \(-\,2.1068\) & \(-\,2.1068\) & \(\Delta t=0\) \\ & \(\dfrac{d}{dt}q=\) & \(p-q\) & \(\dfrac{d}{dt}q=\) & \(1.0033p\) & \(\dfrac{d}{dt}q=\) & \(1.0033p\) \\ & & \(-\,0.9032q\) & \(-\,0.9032q\) & \(\dfrac{d}{dt}q=\) & \(1.0033p\) \\ & & \(-\,0.1002\) & \(-\,0.1002\) & \(\dfrac{d}{dt}q=\) & \(1.0033p\) \\ & & \(-\,0.9032q\) & \(-\,0.9032q\) & \(\dfrac{d}{dt}q=\) & \(1.0033p\) \\ & & \(-\,0.1002\) & \(-\,0.1002\) & \(\dfrac{d}{dt}p=\) & \(1.002\) \\ \hline \multirow{6}{*}{Saddle point} & \(\dfrac{d}{dt}p=\) & \(0.9651p\) & \(\dfrac{d}{dt}p=\) & \(0.9609p\) & \(\mathcal{D}=[-1,1]^{2}\) \\ & \(\dfrac{d}{dt}p=\) & \(+\,1.9607q\) & \(+\,1.9568q\) & \(\Delta t=0.12\) \\ & \(\dfrac{d}{dt}p=\) & \(+\,0.0000\) & \(+\,0.0000\) & \(s=1\) \\ & \(\dfrac{d}{dt}q=\) & \(-\,5p-q\) & \(\dfrac{d}{dt}q=\) & \(-\,4.9017p\) & \(\dfrac{d}{dt}q=\) & \(-\,4.8920p\) \\ & & \(-\,0.9956q\) & \(-\,0.9959q\) & \(-\,0.9959q\) \\ & & \(+\,0.0000\) & \(+\,0.0000\) & \\ \hline \multirow{6}{*}{Speak point} & \(\dfrac{d}{dt}p=\) & \(0.8222p\) & \(\dfrac{d}{dt}p=\) & \(0.8270p\) & \(\mathcal{D}=[-1,1]^{2}\) \\ & Improper node & \(-\,3.7709q\) & \(-\,3.7771q\) & \(\Delta t=0.12\) \\ & \(\dfrac{d}{dt}p=\) & \(p-4q\) & \(+\,0.0000\) & \(+\,0.0000\) & \(s=1\) \\ & \(\dfrac{d}{dt}q=\) & \(4p-7q\) & \(\dfrac{d}{dt}q=\) & \(3.7709p\) & \(\dfrac{d}{dt}q=\) & \(3.7771p\) \\ & & \(-\,6.7197q\) & \(-\,6.7272q\) & \(-\,6.7272q\) \\ & & \(+\,0.0000\) & \(+\,0.0000\) & \\ \hline \multirow{6}{*}{Speak point} & \(\dfrac{d}{dt}p=\) & \(-\,0.9729p\) & \(\dfrac{d}{dt}p=\) & \(-\,0.9729p\) & \(\mathcal{D}=[-3,-1]\times[0,2]\) \\ & Spiral point & \(-\,1.0503q\) & \(-\,1.0504q\) & \(\Delta t=0.05\) \\ & \(\dfrac{d}{dt}p=\) & \(-\,p-q-1\) & \(-\,0.8955\) & \(-\,0.8954\) & \(s=1\) \\ & \(\dfrac{d}{dt}q=\) & \(2p-q+5\) & \(\dfrac{d}{dt}q=\) & \(2.1006p\) & \(\dfrac{d}{dt}q=\) & \(2.1008p\) \\ & & \(-\,0.9729q\) & \(-\,0.9729q\) & \(-\,0.9729q\) \\ & & \(+\,5.1742\) & \(+\,5.1745\) & & \\ \hline \multirow{6}{*}{Speak point} & \(\dfrac{d}{dt}p=\) & \(-\,2.3368p\) & \(\dfrac{d}{dt}p=\) & \(-\,2.3366p\) & \(\mathcal{D}=[-2,0]\times[-1,1]\) \\ & Nodal sink & \(+\,1.2743q\) & \(+\,1.2741q\) & \(\Delta t=0.12\) \\ \cline{1-1} & \(\dfrac{d}{dt}p=\) & \(-\,2+q-2\) & \(-\,2.3368\) & \(-\,2.3366\) & \(s=1\) \\ \cline{1-1} & \(\dfrac{d}{dt}q=\) & \(p-2q+1\) & \(\dfrac{d}{dt}q=\) & \(1.2743p\) & \(\dfrac{d}{dt}q=\) & \(1.2741p\) \\ \cline{1-1} & & \(-\,2.3368q\) & \(-\,2.3366q\) & \(\dfrac{d}{dt}q=\) & \(1.2741p\) \\ \cline{1-1} & & \(-\,2.3368q\) & \(-\,2.3366q\) & \(\dfrac{d}{dt}q=\) & \(1.2741\) \\ \cline{1-1} & & \(+\,1.2743\) & \(+\,1.2741\) & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Discovery of linear systems. The leftmost column shows the phase portraits of the true, learned and modified systems. The four columns on the right give the corresponding ODEs as well as the experiment details. Here, \(L\) is the fixed iteration number, and \(L=0\) means that there is no iteration; i.e., the scheme reduces to forward Euler \(\Phi^{0}_{h,f}(\mathbf{x})=\mathbf{x}+hf(\mathbf{x})\).
chosen to be a single composition (\(s=1\)) of a chosen unrolled implicit scheme with a fixed iteration number \(L\). For the linear case, the employed neural networks have a single linear layer, i.e., we learn an affine transformation \(f_{\theta}(\mathbf{x})=\mathbf{W}\mathbf{x}+\mathbf{b}\), where \(\mathbf{W}\in\mathbb{R}^{D\times D},\mathbf{b}\in\mathbb{R}^{D}\) are the \(D^{2}+D\) learnable parameters. We use full-batch Adam optimization [32] with a learning rate of \(0.01\) to update the parameters \(10^{4}\) times.
The detailed computational settings, descriptions of the systems and the corresponding numerical and analysis results are presented in Table 1. Note that Newton-Raphson iteration with \(L=1\) can exactly solve the implicit linear equation and thus higher iterations are not discussed. As shown in the phase portraits, the ODE-nets accurately capture the evolution of the corresponding IMDE. In addition, the trajectories of the learned systems are closer to those of the Modified systems than to those of the true ODE, which confirms that training an ODE-net returns an approximation of the IMDE.
### Damped pendulum problem
We now consider the damped pendulum problem,
\[\frac{d}{dt}p= -\alpha p-\beta\sin q,\] \[\frac{d}{dt}q= p,\]
where \(\alpha=0.2\) and \(\beta=8.91\).
We generate \(90\) and \(10\) trajectories from \(t=0\) to \(t=4\) for the training data and test data respectively, with initial points randomly sampled from a uniform distribution on \(\mathcal{D}=[-1.5,0]\times[-4,0]\). For each trajectory, \(4/\Delta t+1\) data points at equidistant time steps \(\Delta t\) are selected and grouped in \(4/\Delta t\) successive \(M=1\) pairs.
Figure 4: **Convergence rate test with respect to \(h=\Delta t/s\) for learning the damped pendulum system.** Here, \(\Delta t\) is the data step size, \(h\) is the numerical scheme step size, and therefore \(s=\Delta t/h\) is the number of scheme compositions used. Since error \(\sim h^{p}=(\Delta t/s)^{p}\), we set \(1/s\) and \(\Delta t\) as the horizontal coordinates to show convergence with respect to \(h\). As we sweep \(\Delta t\), we keep \(s\) constant, and vice-versa. We see that \(\text{Error}(f_{\theta},f)\) is larger than \(\text{Error}(f_{\theta},f_{h})\). The order of \(\text{Error}(f_{\theta},f)\) with respect to \(h\) is consistent with the order of the employed numerical schemes. The results are obtained by taking the mean of \(5\) independent experiments, and the shaded region represents one standard deviation.
Here we employ fixed-point iteration to solve the implicit equation. We use a feedforward neural network with two hidden layers to represent the unknown vector field, i.e.,
\[f_{\theta}(\mathbf{x})=\mathbf{W}_{3}\texttt{tanh}(\mathbf{W}_{2}\texttt{tanh}( \mathbf{W}_{1}\mathbf{x}+\mathbf{b}_{1})+\mathbf{b}_{2})+\mathbf{b}_{3},\]
where \(\mathbf{W}_{1}\in\mathbb{R}^{128\times D}\), \(\mathbf{W}_{2}\in\mathbb{R}^{128\times 128}\), \(\mathbf{W}_{3}\in\mathbb{R}^{D\times 128}\), \(\mathbf{b}_{1},\mathbf{b}_{2}\in\mathbb{R}^{128}\), \(\mathbf{b}_{3}\in\mathbb{R}^{D}\) are the \(256D+128^{2}+256+D\) learnable parameters and \(D=2\) is the state dimension. Results are collected after \(10^{5}\) parameter updates using full-batch Adam optimization; the learning rate is set to decay exponentially, with linearly decreasing power, from \(10^{-2}\) to \(10^{-4}\). We also include comparisons with the fixed iteration number setting, where we apply \(L=5\) iterations.
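A PyTorch sketch matching this architecture (two hidden \(\tanh\) layers of width 128; the helper name is illustrative):

```python
import torch.nn as nn

def make_vector_field(D=2, width=128):
    """f_theta(x) = W3 tanh(W2 tanh(W1 x + b1) + b2) + b3."""
    return nn.Sequential(
        nn.Linear(D, width), nn.Tanh(),
        nn.Linear(width, width), nn.Tanh(),
        nn.Linear(width, D))
```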
We first verify the convergence rate with respect to the step size \(h\). Here, we evaluate the average error between \(f_{\theta}\) and \(f\) and between \(f_{\theta}\) and \(f_{h}\) in the \(l_{\infty}\)-norm, i.e.,
\[\begin{split}\text{Error}(f_{\theta},f)=\frac{1}{|\mathcal{T}_{ test}|}\sum_{\mathbf{x}_{n}\in\mathcal{T}_{test}}\|f_{\theta}(\mathbf{x}_{n})-f(\mathbf{x}_{n}) \|,\\ \text{Error}(f_{\theta},f_{h})=\frac{1}{|\mathcal{T}_{test}|} \sum_{\mathbf{x}_{n}\in\mathcal{T}_{test}}\|f_{\theta}(\mathbf{x}_{n})-f_{h}(\mathbf{x}_{n })\|,\end{split} \tag{5.1}\]
where \(\mathcal{T}_{test}=\left\{\phi_{m\Delta t}(\mathbf{x}_{n})\right\}_{1\leq n\leq 10,\ 0\leq m\leq 4/\Delta t,\Delta t=0.01}\) is the total test data when \(\Delta t=0.01\). We assign various step sizes \(\Delta t\) as \(\Delta t=0.01\cdot 2^{k},\ k=0,\cdots,4\) with fixed composition number \(s=1\); we also use several composition numbers \(s=2^{k},\ k=0,\cdots,4\) with fixed horizon \(\Delta t=0.16\). The errors are recorded in Figure 4. It can be seen that the
Figure 5: _Results for learning damped pendulum problem. The integration of the learned system always matches that of the derived IMDE more than the truth, but the adaptive algorithm extracts this IMDE more quickly._
\(\text{Error}(f_{\theta},f)\) is markedly larger than \(\text{Error}(f_{\theta},f_{h})\), indicating that the learned ODE-net returns an approximation of the particular IMDE rather than of the true ODE. In addition, the order of \(\text{Error}(f_{\theta},f)\) with respect to \(h\) is consistent with the order of the employed numerical schemes when \(h\) is relatively large; for an accurate solver (small \(h\)), the learning error dominates the overall error and masks this rate.
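A sketch of the evaluation in (5.1) is given below; `f_theta` and `f_ref` are assumed to be callables mapping a state array to the field value, with the reference being either the true \(f\) or a truncation of the IMDE field \(f_{h}\).

```python
import numpy as np

def field_error(f_theta, f_ref, test_states):
    """Average l_inf discrepancy between two vector fields over test states,
    as in (5.1)."""
    errs = [np.linalg.norm(f_theta(x) - f_ref(x), ord=np.inf)
            for x in test_states]
    return float(np.mean(errs))
```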
Next, we simulate the exact solution from \(t=0\) to \(t=8\) using the initial condition \(y_{0}=(-3.876,-1.193)\). We show in Figure 5 the exact trajectories of the true system, the corresponding IMDE, and the right-hand-sides learned by ODE-net for different schemes, where \(\Delta t=0.01,s=2,4,8\). For all integrations, the ODE-net accurately captures the evolution of the corresponding IMDE, which again implies that the learned ODE-net returns an approximation of the IMDE. When using a small learning time step \(h=\Delta t/s\), the difference between the IMDE and the original equation is reduced, and thus the ODE-net tends to learn the true system. In addition, we record the error and the training time on the right side of Figure 5. It is observed that the proposed adaptive iteration algorithm is remarkably faster than the non-adaptive, direct implementation, and requires less training wall-clock time to reach similar accuracy.
### Glycolytic oscillator
As an example of an initial value solver employing the Newton-Raphson iteration, we consider a model of oscillations in yeast glycolysis [13]. The model describes the concentrations of seven biochemical species and is defined by
\[\frac{d}{dt}S_{1}= J_{0}-\frac{k_{1}S_{1}S_{6}}{1+(S_{6}/K_{1})^{q}},\] \[\frac{d}{dt}S_{2}= 2\frac{k_{1}S_{1}S_{6}}{1+(S_{6}/K_{1})^{q}}-k_{2}S_{2}(N-S_{5})-k_{6}S_{2}S_{5},\] \[\frac{d}{dt}S_{3}= k_{2}S_{2}(N-S_{5})-k_{3}S_{3}(A-S_{6}),\] \[\frac{d}{dt}S_{4}= k_{3}S_{3}(A-S_{6})-k_{4}S_{4}S_{5}-\kappa(S_{4}-S_{7}),\] \[\frac{d}{dt}S_{5}= k_{2}S_{2}(N-S_{5})-k_{4}S_{4}S_{5}-k_{6}S_{2}S_{5},\] \[\frac{d}{dt}S_{6}= -2\frac{k_{1}S_{1}S_{6}}{1+(S_{6}/K_{1})^{q}}+2k_{3}S_{3}(A-S_{6})-k_{5}S_{6},\] \[\frac{d}{dt}S_{7}= \psi\kappa(S_{4}-S_{7})-kS_{7},\]
where the ground truth parameters are taken from Table 1 in [13].
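For reference, a sketch of this right-hand side is given below; the parameter values shown are commonly used values that we assume here, and should ultimately be taken from Table 1 of [13].

```python
import numpy as np

# Sketch of the glycolytic-oscillator right-hand side above; the parameter
# values are assumed stand-ins for those of Table 1 in [13].
params = dict(J0=2.5, k1=100.0, k2=6.0, k3=16.0, k4=100.0, k5=1.28, k6=12.0,
              k=1.8, kappa=13.0, q=4.0, K1=0.52, psi=0.1, N=1.0, A=4.0)

def glycolysis(t, S, p=params):
    S1, S2, S3, S4, S5, S6, S7 = S
    v1 = p["k1"] * S1 * S6 / (1.0 + (S6 / p["K1"]) ** p["q"])
    return np.array([
        p["J0"] - v1,
        2.0 * v1 - p["k2"] * S2 * (p["N"] - S5) - p["k6"] * S2 * S5,
        p["k2"] * S2 * (p["N"] - S5) - p["k3"] * S3 * (p["A"] - S6),
        p["k3"] * S3 * (p["A"] - S6) - p["k4"] * S4 * S5
            - p["kappa"] * (S4 - S7),
        p["k2"] * S2 * (p["N"] - S5) - p["k4"] * S4 * S5 - p["k6"] * S2 * S5,
        -2.0 * v1 + 2.0 * p["k3"] * S3 * (p["A"] - S6) - p["k5"] * S6,
        p["psi"] * p["kappa"] * (S4 - S7) - p["k"] * S7,
    ])
```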
In this example, training data consists of 20 simulations which start at \((1+\delta)\cdot\mathbf{x}_{0}\), where \(\mathbf{x}_{0}=(1.125,0.95,0.075,0.16,0.265,0.7,0.092)^{\top}=(S_{1},\dots,S_{7})^{\top}\) and \(\delta\) is uniformly sampled from \([-0.2,0.2]\). On each trajectory, 500 pairs of snapshots at \((i\Delta t,(i+1)\Delta t)\), \(i=0,\cdots,499\), \(\Delta t=0.01\) are used as training data. Newton-Raphson iteration is used and \(s\) is fixed to 2; otherwise, the model architecture and hyperparameters are the same as in subsection 5.2.
After training, we record the training time and the error in Table 2; the error between \(f_{\theta}\) and \(f\) is evaluated via (5.1) with \(\mathcal{T}_{test}=\left\{\phi_{m\Delta t}(\mathbf{x}_{0})\right\}_{0\leq m\leq 500}\), while the error between the
learned and exact trajectories is evaluated by
\[\text{Error}(\phi_{T,f_{\theta}},\phi_{T,f})=\frac{\Delta t}{T}\sum_{m=1}^{T/ \Delta t}\lVert\phi_{m\Delta t,f_{\theta}}(\mathbf{x}_{0})-\phi_{m\Delta t,f}(\mathbf{x }_{0})\rVert.\]
As can be seen from Table 2, the proposed algorithm leads to a 2-3\(\times\) speedup in training without noticeable degradation in accuracy.
In addition, we use an implicit midpoint scheme to learn the system from initial condition \(\mathbf{x}_{0}\) and depict the learned and exact dynamics in Figure 6. It can be seen that the system learned using the proposed algorithm correctly captures the form of the dynamics, indicating that the performance of our approach is still promising for moderately high-dimensional equation discovery.
### Learning real-world dynamics
Finally, we use real-world data [47] to verify that the proposed algorithm can learn accurate dynamics and predict future behavior in real-world
\begin{table}
\begin{tabular}{l|c c c c} \hline \multirow{2}{*}{Methods} & Implicit Midpoint & Implicit Midpoint & Implicit Trapezoidal & Implicit Trapezoidal \\ & (adaptive) & (fixed) & (adaptive) & (fixed) \\ \hline Training time & \(1763\pm 87\) & \(5879\pm 224\) & \(2264\pm 92\) & \(6357\pm 167\) \\ \hline Error(\(f_{\theta}\), \(f\)) & 2.91e-2 \(\pm\) 2.37e-3 & 2.91e-2 \(\pm\) 2.76e-3 & 4.05e-2 \(\pm\) 3.14e-3 & 4.09e-2 \(\pm\) 3.21e-3 \\ \hline Error(\(\phi_{T,f_{\theta}}\), \(\phi_{T,f}\)) & 7.62e-3 \(\pm\) 4.10e-3 & 7.63e-3 \(\pm\) 3.70e-3 & 1.19e-2 \(\pm\) 5.30e-3 & 8.74e-3 \(\pm\) 5.46e-3 \\ \hline \end{tabular}
\end{table}
Table 2: The training time (in seconds) and global error for learning the glycolytic oscillator. The results are recorded in the form of mean \(\pm\) standard deviation based on 10 independent training runs. The proposed adaptive algorithm markedly decreases training time with no compromise in accuracy.
Figure 6: Exact and learned dynamics of glycolytic oscillator. For this evaluation experiment, we keep \(\Delta t=0.01\) as in training, but set \(M=500\) rather than \(M=1\) to show that the long-term dynamics are accurate. Additionally, the initial condition used is not itself present in the training dataset.
problems. This data consists of about 500 points along a single trajectory of two coupled oscillators. We use the first \(3/4\) of the trajectory for training, and the remainder for testing. Here we assign \(s=1\), and set the length of divided trajectories to \(M=10\) rather than 1 due to measurement errors and other non-ideal effects. We train models using full-batch Adam with a learning rate of \(10^{-4}\) over 5000 epochs.
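The splitting into longer training segments can be sketched as follows; the random array is only a stand-in for the measured data of [47].

```python
import numpy as np

# Sketch: split one measured trajectory into overlapping length-(M+1)
# segments, giving M-step training targets as described above.
def make_segments(trajectory, M):
    """trajectory: array of shape (T, D), sampled at a uniform time step."""
    T = len(trajectory)
    return [trajectory[i:i + M + 1] for i in range(T - M)]

data = np.random.randn(500, 2)        # stand-in for the measured data of [47]
split = int(0.75 * len(data))
train_segments = make_segments(data[:split], M=10)
test_trajectory = data[split:]
```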
We use the last point in the training data as the initial point to simulate the learned system, and depict the learned dynamics and test trajectory in Figure 7. Despite the measurement errors and other non-ideal effects, we see that the proposed algorithm still performs robustly. In addition, while all test schemes are of order 2, the use of implicit schemes for identification preserves the phase portrait more accurately and the implicit midpoint method achieves the lowest prediction error. The use of implicit schemes also permits the incorporation of geometric properties such as symplecticity, symmetry and reversibility. Although their necessity has not been mathematically proven, the results in Figure 7 show empirically better results with implicit training schemes.
Figure 7: _Results of learning the real-world dynamics. We find that the fine details of the trajectory around position \(0.0\), momentum \(0.1\) are preserved more faithfully by the two implicit methods than by the one explicit. This effect is conserved across multiple experiments._
## Summary
Machine learning via ODE-nets provides data-driven approaches to model and predict the dynamics of physical systems from data. Since the models are typically trained on discrete data, we have to perform a numerical integration to evaluate a loss for training. In this paper we extend previous work [55], in which we defined the inverse modified differential equation (IMDE). We prove that training an ODE-net templated on an unrolled implicit scheme returns an approximation of a particular IMDE. In addition, we show that the convergence with discrete step \(h\) is of order \(p\), where \(p\) is the order of the numerical integrator. Numerical experiments support the theoretical findings.
In addition, for learning with neural networks templated on implicit numerical integration, we propose and implement an adaptive algorithm that adjusts the iteration number of unrolled implicit integration during the training process to accelerate training. Instead of treating numerical integration of ODE-nets as a black box, our algorithm allows for finding the cheapest iteration number via monitoring the errors of the implicit solver and the learning loss. Numerical experiments show that the proposed algorithm leads to a \(2-3\times\) speedup in training without any degradation in accuracy. Finally, we remark that our method naturally applies to the approaches based on ODE-net incorporating partially known physical terms (i.e., "gray box" identification). [42, 33]
Several challenges remain to be addressed in future work. First, the Newton-Raphson iteration (2.6) requires solving a linear equation, which makes scaling to high-dimensional equation discovery expensive. One possible direction is to perform Newton-Raphson steps with an iterative algorithm such as GMRES [45].
Second, our algorithm uses the interplay between the training and numerical integration to adapt the stopping criterion. Such an idea can also be extended to efficient adaptive time-step methods, where the IMDE for adaptive steps still remains open.
Third, in classical initial value solvers, it is well known that implicit schemes have better stability [26], and allow for geometric properties such as symplecticity, symmetry and reversibility [25]. We would like to further explore in future work how these well-known forward-integration properties of implicit methods produce benefits in implicitly-templated ODE-nets over the merely explicitly-templated ones.
Finally, while we provide a rigorous grounding for the proposed adaptive Algorithm 1, this suggests a family of further adaptive methods for accelerating the neural identification of ODEs from data; these should better exploit existing intuition and experience about the tradeoffs inherent in the methods available in the literature. For instance, in the same way that the current learning loss sets a ceiling on the useful iteration number, we might find that switching from unrolled ODE-nets to adjoint-differentiation NODEs does make sense, but possibly only later in the training process. This program could be taken further towards a meta-learning approach, where an agent is trained to make such hyperparameter decisions online, during the training of the target ODE network. [23, 14]
## Appendix A Calculation of IMDE
### Linear ODEs
Consider a linear IVP
\[\frac{d}{dt}\mathbf{y}(t)=f(\mathbf{y}(t))=\mathbf{A}\mathbf{y}(t),\quad\mathbf{y}(0)=\mathbf{x}, \tag{A.1}\]
where \(\mathbf{y}(t),\mathbf{b}\in\mathbb{R}^{D}\) and \(\mathbf{A}\in\mathbb{R}^{D\times D}\) is invertible. Its solution at time \(h\) is given by
\[\phi_{h,f}(\mathbf{x})=e^{\mathbf{A}h}\mathbf{x}=\sum_{k=0}^{\infty}\frac{h^{k}}{k!}\mathbf{A}^{k}\mathbf{x}=(\sum_{k=0}^{\infty}\frac{(-h)^{k}}{k!}\mathbf{A}^{k})^{-1}\mathbf{x}. \tag{A.2}\]
We now consider learning with the implicit Euler scheme,
\[\mathbf{v}_{1}=\mathbf{x}+hf_{h}(\mathbf{v}_{1}),\quad\Phi_{h,f_{h}}(\mathbf{x})=\mathbf{x}+hf_{h}(\mathbf{v}_{1})\]
we have
\[\Phi_{h,f_{h}}(\mathbf{x})=(\mathbf{I}_{D}-hf_{h})^{-1}(\mathbf{x})\]
where \(\mathbf{I}_{D}\) is the identity map. By \(\Phi_{h,f_{h}}=\phi_{h,f}\), we deduce that
\[f_{h}(\mathbf{x})=\sum_{k=0}^{\infty}\frac{(-1)^{k}h^{k}}{(k+1)!}\mathbf{A}^{k+1} \mathbf{x}=\mathbf{A}_{h}\mathbf{x},\]
Additionally, if \(f(\mathbf{y})=\mathbf{A}\mathbf{y}+\mathbf{b}\), we have \(f_{h}(\mathbf{x})=\mathbf{A}_{h}\mathbf{x}+\mathbf{A}_{h}\mathbf{A}^{-1}\mathbf{b}\) by linear transformation \(\mathbf{\hat{y}}=\mathbf{y}+\mathbf{A}^{-1}\mathbf{b}\).
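The identity behind (A.2) can be checked numerically. The sketch below uses an arbitrary test matrix and verifies that one implicit Euler step for the field \(f_{h}(\mathbf{x})=\mathbf{A}_{h}\mathbf{x}\) reproduces the exact flow, i.e., \((\mathbf{I}-h\mathbf{A}_{h})^{-1}=e^{\mathbf{A}h}\).

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

# Numerical check (a sketch) of the linear IMDE: with
# A_h = sum_k (-1)^k h^k A^{k+1} / (k+1)!, an implicit Euler step for
# dy/dt = A_h y equals the exact flow exp(hA).
h = 0.1
A = np.array([[0.0, 1.0], [-4.0, -0.3]])   # arbitrary test matrix

A_h = sum((-1) ** k * h ** k * np.linalg.matrix_power(A, k + 1)
          / factorial(k + 1) for k in range(30))

implicit_euler_map = np.linalg.inv(np.eye(2) - h * A_h)
print(np.allclose(implicit_euler_map, expm(h * A)))   # True up to round-off
```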
### General nonlinear ODEs
For a nonlinear ODE,
\[\frac{d}{dt}\mathbf{y}(t)=f(\mathbf{y}(t)),\quad\mathbf{y}(0)=\mathbf{x},\]
we first expand the exact solution:
\[\phi_{h,f}(\mathbf{x}) =\mathbf{x}+hf(\mathbf{x})+\frac{h^{2}}{2}f^{\prime}f(\mathbf{x})+\frac{h^{3}}{6}(f^{\prime\prime}(f,f)(\mathbf{x})+f^{\prime}f^{\prime}f(\mathbf{x}))\] \[+\frac{h^{4}}{24}(f^{\prime\prime\prime}(f,f,f)(\mathbf{x})+3f^{\prime\prime}(f^{\prime}f,f)(\mathbf{x})+f^{\prime}f^{\prime\prime}(f,f)(\mathbf{x})+f^{\prime}f^{\prime}f^{\prime}f(\mathbf{x}))+\cdots. \tag{A.3}\]
As an example, the numerical scheme is chosen to be the implicit Euler scheme, unrolled using Newton-Raphson iteration with \(L=1\),
\[\mathbf{v}_{1}^{0}\equiv\mathbf{x},\quad\mathbf{v}_{1}^{1}=\mathbf{x}+hf_{h}(\mathbf{v}_{1}^{0})+ hf_{h}^{\prime}(\mathbf{v}_{1}^{0})(\mathbf{v}_{1}^{1}-\mathbf{v}_{1}^{0}),\quad\Phi_{h,f_{h }}^{1}(\mathbf{x})\equiv\mathbf{x}+hf_{h}(\mathbf{v}_{1}^{1}).\]
We expand it as
\[\Phi_{h,f_{h}}^{1}(\mathbf{x})= \mathbf{x}+hf_{h}(\mathbf{x})+h^{2}f_{h}^{\prime}f_{h}(\mathbf{x})+\frac{h^{3} }{2}f_{h}^{\prime\prime}(f_{h},f_{h})(\mathbf{x})+h^{3}f_{h}^{\prime}f_{h}^{\prime} f_{h}(\mathbf{x})\] \[+\frac{h^{4}}{6}f_{h}^{\prime\prime\prime}(f_{h},f_{h},f_{h})(\bm {x})+h^{4}f_{h}^{\prime\prime}(f_{h}^{\prime}f_{h},f_{h})(\mathbf{x})+h^{4}f_{h}^{ \prime}f_{h}^{\prime}f_{h}^{\prime}f_{h}(\mathbf{x})+\cdots.\]
Substituting
\[f_{h}=f_{0}+hf_{1}+h^{2}f_{2}+h^{3}f_{3}\cdots\]
yields
\[\Phi^{1}_{h,f_{h}}(\mathbf{x})= \mathbf{x}+hf_{0}+h^{2}(f_{1}(\mathbf{x})+f^{\prime}_{0}f_{0}(\mathbf{x}))\] \[+h^{3}(f_{2}(\mathbf{x})+f^{\prime}_{1}f_{0}(\mathbf{x})+f^{\prime}_{0}f_{1}(\mathbf{x})+\frac{1}{2}f^{\prime\prime}_{0}(f_{0},f_{0})(\mathbf{x})+f^{\prime}_{0}f^{\prime}_{0}f_{0}(\mathbf{x}))\] \[+h^{4}\big{(}f_{3}(\mathbf{x})+f^{\prime}_{1}f_{1}(\mathbf{x})+f^{\prime}_{0}f_{2}(\mathbf{x})+f^{\prime}_{2}f_{0}(\mathbf{x})+\frac{1}{2}f^{\prime\prime}_{1}(f_{0},f_{0})(\mathbf{x})+f^{\prime\prime}_{0}(f_{1},f_{0})(\mathbf{x})\] \[+f^{\prime}_{1}f^{\prime}_{0}f_{0}(\mathbf{x})+f^{\prime}_{0}f^{\prime}_{1}f_{0}(\mathbf{x})+f^{\prime}_{0}f^{\prime}_{0}f_{1}(\mathbf{x})\] \[+\frac{1}{6}f^{\prime\prime\prime}_{0}(f_{0},f_{0},f_{0})(\mathbf{x})+f^{\prime\prime}_{0}(f^{\prime}_{0}f_{0},f_{0})(\mathbf{x})+f^{\prime}_{0}f^{\prime}_{0}f^{\prime}_{0}f_{0}(\mathbf{x})\big{)}+\cdots.\]
Comparing like powers of \(h\) with expression (A.3) yields recurrence relations for functions \(f_{k}\), i.e.,
\[f_{0}(\mathbf{y}) =f(\mathbf{y}),\] \[f_{1}(\mathbf{y}) =\frac{1}{2}f^{\prime}f(\mathbf{y})-f^{\prime}_{0}f_{0}(\mathbf{y})=- \frac{1}{2}f^{\prime}f(\mathbf{y}),\] \[f_{2}(\mathbf{y}) =\frac{1}{6}(f^{\prime\prime}(f,f)(\mathbf{y})+f^{\prime}f^{\prime}f( \mathbf{y}))-(f^{\prime}_{1}f_{0}(\mathbf{y})+f^{\prime}_{0}f_{1}(\mathbf{y})+\frac{1}{2} f^{\prime\prime}_{0}(f_{0},f_{0})(\mathbf{y})+f^{\prime}_{0}f^{\prime}_{0}f_{0}(\mathbf{y}))\] \[=\frac{1}{6}f^{\prime\prime}(f,f)(\mathbf{y})+\frac{1}{6}f^{\prime}f^ {\prime}f(\mathbf{y}),\] \[f_{3}(\mathbf{y}) =\frac{1}{24}(f^{\prime\prime\prime}(f,f,f)(\mathbf{y})+3f^{\prime \prime}(f^{\prime}f,f)(\mathbf{y})+f^{\prime}f^{\prime\prime}(f,f)(\mathbf{y})+f^{ \prime}f^{\prime}f^{\prime}f(\mathbf{y}))\] \[-\big{(}f^{\prime}_{1}f_{1}(\mathbf{y})+f^{\prime}_{0}f_{2}(\mathbf{y})+f ^{\prime}_{2}f_{0}(\mathbf{y})+\frac{1}{2}f^{\prime\prime}_{1}(f_{0},f_{0})(\mathbf{y} )+f^{\prime\prime}_{0}(f_{1},f_{0})(\mathbf{y})\] \[+f^{\prime}_{1}f^{\prime}_{0}f_{0}(\mathbf{y})+f^{\prime}_{0}f^{ \prime}_{1}f_{0}(\mathbf{y})+f^{\prime}_{0}f^{\prime}_{0}f_{1}(\mathbf{y})\] \[+\frac{1}{6}f^{\prime\prime\prime}_{0}(f_{0},f_{0},f_{0})(\mathbf{y})+ f^{\prime\prime}_{0}(f^{\prime}_{0}f_{0},f_{0})(\mathbf{y})+f^{\prime}_{0}f^{\prime}_{0}f ^{\prime}_{0}f_{0}(\mathbf{y})\big{)}\] \[=-\frac{1}{24}f^{\prime\prime\prime}(f,f,f)(\mathbf{y})-\frac{1}{8}f^ {\prime\prime}(f^{\prime}f,f)(\mathbf{y})+\frac{11}{24}f^{\prime}f^{\prime\prime}( f,f)(\mathbf{y})-\frac{1}{24}f^{\prime}f^{\prime}f^{\prime}f(\mathbf{y}),\] \[\vdots\]
Note that Newton-Raphson iteration with \(L=1\) can exactly solve the implicit linear equation. In the linear case (A.1), \(f^{\prime}=\mathbf{A}\) and all terms involving \(f^{\prime\prime}\) and all higher-order derivatives are \(0\), giving us
\[f_{0}(\mathbf{y})=\mathbf{A}\cdot\mathbf{y},\quad f_{1}(\mathbf{y})=-\frac{1}{2}\mathbf{A}^{2}\cdot\mathbf{y},\quad f_{2}(\mathbf{y})=\frac{1}{6}\mathbf{A}^{3}\cdot\mathbf{y},\quad f_{3}(\mathbf{y})=-\frac{1}{24}\mathbf{A}^{4}\cdot\mathbf{y}, \tag{A.4}\]
so we recover approximately (A.2) via
\[\begin{split} f_{h}(\mathbf{y})&=f_{0}(\mathbf{y})+hf_{1}(\mathbf{y} )+h^{2}f_{2}(\mathbf{y})+h^{3}f_{3}(\mathbf{y})+\cdots\\ &=\left(\frac{h^{0}}{1}\mathbf{A}+\frac{-h}{2}\mathbf{A}^{2}+ \frac{h^{2}}{6}\mathbf{A}^{3}-\frac{1h^{3}}{24}\mathbf{A}^{4}+\cdots\right) \cdot\mathbf{y}\\ &\approx\sum_{k=0}^{\infty}\frac{(-1)^{k}h^{k}}{(k+1)!}\mathbf{A }^{k+1}\cdot\mathbf{y}.\end{split}\] (A.5)
We next present an example employing fixed-point iteration with \(L=2\),
\[\mathbf{v}_{1}^{0}\equiv\mathbf{x},\quad\mathbf{v}_{1}^{1}=\mathbf{x}+hf(\mathbf{x}),\quad\mathbf{v}_{1 }^{2}=\mathbf{x}+hf(\mathbf{v}_{1}^{1}),\quad\Phi_{h,f_{h}}^{2}(\mathbf{x})\equiv\mathbf{x}+hf_ {h}(\mathbf{v}_{1}^{2}).\] (A.6)
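In code, the map (A.6) takes the following form; this is a sketch with a generic field \(g\) standing in for \(f_{h}\), and the pendulum field and numbers serve only as an illustration.

```python
import numpy as np

# Sketch of the unrolled map (A.6): implicit Euler for a field g, with the
# implicit stage approximated by L fixed-point iterations started at x.
def unrolled_implicit_euler(g, x, h, L):
    v = x
    for _ in range(L):
        v = x + h * g(v)        # v^{l} = x + h g(v^{l-1}),  v^{0} = x
    return x + h * g(v)         # Phi^L_{h,g}(x) = x + h g(v^{L})

# e.g. for the damped pendulum field with L = 2:
alpha, beta = 0.2, 8.91
g = lambda y: np.array([-alpha * y[0] - beta * np.sin(y[1]), y[0]])
print(unrolled_implicit_euler(g, np.array([-1.0, -2.0]), 0.01, L=2))
```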
We then use B-series to expand \(\Phi_{h,f_{h}}^{2}(\mathbf{x})\):
\[\begin{split}\Phi_{h,f_{h}}^{2}(\mathbf{x})=&\mathbf{x}+hf_ {h}(\mathbf{x})+h^{2}f_{h}^{\prime}f_{h}(\mathbf{x})+\frac{h^{3}}{2}f_{h}^{\prime \prime}(f_{h},f_{h})(\mathbf{x})+h^{3}f_{h}^{\prime}f_{h}^{\prime}f_{h}(\mathbf{x})\\ &+\frac{h^{4}}{6}f_{h}^{\prime\prime\prime}(f_{h},f_{h},f_{h})( \mathbf{x})+h^{4}f_{h}^{\prime\prime}(f_{h}^{\prime}f_{h},f_{h})(\mathbf{x})+\frac{h ^{4}}{2}f_{h}^{\prime}f_{h}^{\prime\prime}(f_{h},f_{h})(\mathbf{x})+\cdots.\end{split}\]
And similarly we have
\[\begin{split}\Phi_{h,f_{h}}^{2}(\mathbf{x})=&\mathbf{x}+hf_ {0}+h^{2}(f_{1}(\mathbf{x})+f_{0}^{\prime}f_{0}(\mathbf{x}))\\ &+h^{3}(f_{2}(\mathbf{x})+f_{1}^{\prime}f_{0}(\mathbf{x})+f_{0}^{\prime} f_{1}(\mathbf{x})+\frac{1}{2}f_{0}^{\prime\prime}(f_{0},f_{0})(\mathbf{x})+f_{0}^{ \prime}f_{0}^{\prime}f_{0}(\mathbf{x}))\\ &+h^{4}\big{(}f_{3}(\mathbf{x})+f_{1}^{\prime}f_{1}(\mathbf{x})+f_{0}^{ \prime}f_{2}(\mathbf{x})+f_{2}^{\prime}f_{0}(\mathbf{x})+\frac{1}{2}f_{1}^{\prime \prime}(f_{0},f_{0})(\mathbf{x})+f_{0}^{\prime\prime}(f_{1},f_{0})(\mathbf{x})\\ &+f_{1}^{\prime}f_{0}^{\prime}f_{0}(\mathbf{x})+f_{0}^{\prime}f_{1}^ {\prime}f_{0}(\mathbf{x})+f_{0}^{\prime}f_{0}^{\prime}f_{1}(\mathbf{x})\\ &+\frac{1}{6}f_{0}^{\prime\prime\prime}(f_{0},f_{0},f_{0})(\mathbf{x })+f_{0}^{\prime\prime}(f_{0}^{\prime}f_{0},f_{0})(\mathbf{x})+\frac{1}{2}f_{0}^{ \prime}f_{0}^{\prime\prime}(f_{0},f_{0})(\mathbf{x})\big{)}+\cdots.\end{split}\]
Comparing like powers of \(h\) with expression (A.3), we obtain that
\[f_{0}(\mathbf{y}) =f(\mathbf{y}),\] \[f_{1}(\mathbf{y}) =\frac{1}{2}f^{\prime}f(\mathbf{y})-f_{0}^{\prime}f_{0}(\mathbf{y})=-\frac{ 1}{2}f^{\prime}f(\mathbf{y}),\] \[f_{2}(\mathbf{y}) =\frac{1}{6}(f^{\prime\prime}(f,f)(\mathbf{y})+f^{\prime}f^{\prime}f (\mathbf{y}))-(f_{1}^{\prime}f_{0}(\mathbf{y})+f_{0}^{\prime}f_{1}(\mathbf{y})+\frac{1}{2}f _{0}^{\prime\prime}(f_{0},f_{0})(\mathbf{y})+f_{0}^{\prime}f_{0}^{\prime}f_{0}(\bm {y}))\] \[=\frac{1}{6}f^{\prime\prime}(f,f)(\mathbf{y})+\frac{1}{6}f^{\prime}f^ {\prime}f(\mathbf{y}),\] \[f_{3}(\mathbf{y}) =\frac{1}{24}(f^{\prime\prime\prime}(f,f,f)(\mathbf{y})+3f^{\prime \prime}(f^{\prime}f,f)(\mathbf{y})+f^{\prime}f^{\prime\prime}(f,f)(\mathbf{y})+f^{ \prime}f^{\prime}f^{\prime}f(\mathbf{y}))\] \[-\big{(}f_{1}^{\prime}f_{1}(\mathbf{y})+f_{0}^{\prime}f_{2}(\mathbf{y})+ f_{2}^{\prime}f_{0}(\mathbf{y})+\frac{1}{2}f_{1}^{\prime\prime}(f_{0},f_{0})(\mathbf{y})+f _{0}^{\prime\prime}(f_{1},f_{0})(\mathbf{y})\] \[+f_{1}^{\prime}f_{0}^{\prime}f_{0}(\mathbf{y})+f_{0}^{\prime}f_{1}^{ \prime}f_{0}(\mathbf{y})+f_{0}^{\prime}f_{0}^{\prime}f_{1}(\mathbf{y})\] \[+\frac{1}{6}f_{0}^{\prime\prime\prime}(f_{0},f_{0},f_{0})(\mathbf{y}) +f_{0}^{\prime\prime}(f_{0}^{\prime}f_{0},f_{0})(\mathbf{y})+\frac{1}{2}f_{0}^{ \prime}f_{0}^{\prime\prime}(f_{0},f_{0})(\mathbf{y})\big{)}\] \[=-\frac{1}{24}f^{\prime\prime\prime}(f,f,f)(\mathbf{y})-\frac{1}{8}f^ {\prime\prime}(f^{\prime}f,f)(\mathbf{y})-\frac{1}{24}f^{\prime}f^{\prime\prime}( f,f)(\mathbf{y})+\frac{23}{24}f^{\prime}f^{\prime}f^{\prime}f(\mathbf{y}),\] \[\vdots\]
## Appendix B Proofs
### Proof of Theorem 3.1 (The unrolled approximation approaches the IMDE)
The proof of Theorem 3.1 is obtained as in the proof of Theorem 3.1 in [55], under the following Assumption B.1. Here we sketch the main idea in the notation used there, and then show that Assumption B.1 holds.
**Assumption B.1** (Assumptions for numerical schemes).: _For analytic \(g\), \(\hat{g}\) satisfying \(\|g\|_{\mathcal{B}(\mathcal{K},r)}\leq m\), \(\|\hat{g}\|_{\mathcal{B}(\mathcal{K},r)}\leq m\), there exist constants \(b_{1},b_{2},b_{3}\) that depend only on the scheme \(\Phi_{h}\) and composition number \(S\) such that the unrolled approximation \(\Phi_{h}^{L}\) satisfies_
* _for_ \(|h|\leq h_{0}=b_{1}r/m\) _,_ \((\Phi_{h,\hat{g}}^{L})^{S}\)_,_ \((\Phi_{h,g}^{L})^{S}\) _are analytic on_ \(\mathcal{K}\)_._
* _for_ \(|h|\leq h_{0}\)_,_ \[\|\big{(}\Phi_{h,\hat{g}}^{L}\big{)}^{S}-\big{(}\Phi_{h,g}^{L}\big{)}^{S}\|_{ \mathcal{K}}\leq b_{2}S|h|\|\hat{g}-g\|_{\mathcal{B}(\mathcal{K},r)}.\]
* _for_ \(|h|<h_{1}<h_{0}\)_,_ \[\|\hat{g}-g\|_{\mathcal{K}}\leq \frac{1}{S|h|}\|\big{(}\Phi_{h,\hat{g}}^{L}\big{)}^{S}-\big{(}\Phi _{h,g}^{L}\big{)}^{S}\|_{\mathcal{K}}+\frac{b_{2}|h|}{h_{1}-|h|}\|\hat{g}-g\|_ {\mathcal{B}(\mathcal{K},b_{3}Sh_{1}m)}.\]
**Lemma B.2** (Choice of truncation and estimation of error for IMDE).: _Let \(f(\mathbf{y})\) be analytic in \(\mathcal{B}(\mathcal{K},r)\) and satisfy \(\|f\|_{\mathcal{B}(\mathcal{K},r)}\leq m\). Suppose the numerical scheme \(\Phi_{h}\) and its approximation \(\Phi_{h}^{L}\) satisfy Assumption B.1. Take \(\eta=\max\{6,\frac{b_{2}+1}{29}+1\}\), \(\zeta=10(\eta-1)\), \(q=-\ln(2b_{2})/\ln 0.912\) and let \(K\) be the largest integer satisfying_
\[\frac{\zeta(K-p+2)^{q}|h|m}{\eta r}\leq e^{-q}.\]
_If \(|h|\) is small enough such that \(K\geq p\), then the truncated IMDE satisfies_
\[\|(\Phi^{L}_{h,f^{K}_{h}})^{S}-\phi_{Sh,f}\|_{\mathcal{K}}\leq b_{2 }\eta me^{2q-qp}|Sh|e^{-\gamma/|Sh|^{1/q}},\] \[\|\mbox{$\sum$}_{k=p}^{K}h^{k}f_{k}\|_{\mathcal{K}}\leq b_{2}\eta m \left(\frac{\zeta m}{b_{1}r}\right)^{p}(1+1.38^{q}d_{p})|h|^{p},\] \[\|f^{K}_{h}\|_{\mathcal{K}}\leq(\eta-1)m,\]
_where \(\gamma=\frac{q}{e}\left(\frac{b_{1}r}{\zeta m}\right)^{1/q}\), \(d_{p}=p^{qp}e^{-q(p-1)}\)._
Proof.: According to Lemma B.1 in [55], the IMDE of \((\Phi^{L}_{h})^{S}\) coincides with the IMDE of \(\Phi^{L}_{h}\). In addition, by regarding the composition \((\Phi^{L}_{h})^{S}\) as a one-step integrator, the estimates are obtained as in the proof of Lemma B.4 in [55]. The special constants, including \(0.912\) and \(1.38\), are also explained in [55].
_Proof of Theorem 3.1_.: According to Lemma B.2, we have that
(B.1) \[\delta:=\frac{1}{\Delta t}\|(\Phi_{h,f_{\theta}})^{S}-\left(\Phi_{h,f^{K}_{h} }\right)^{S}\|_{\mathcal{B}(\mathbf{x},r_{1})}\leq\mathcal{L}+cme^{-\gamma/\Delta t ^{1/q}},\quad\|f^{K}_{h}\|_{\mathcal{B}(\mathbf{x},r_{1})}<(\eta-1)m,\]
where \(c=b_{2}\eta e^{q}\). Let
\[h_{1}=(eb_{2}+1)h,\ M=(\eta-1)m.\]
By the third term of Assumption B.1, we deduce that for \(0\leq j\leq r_{1}/b_{3}Sh_{1}M\),
\[\|f_{\theta}-f^{K}_{h}\|_{\mathcal{B}(\mathbf{x},jb_{3}Sh_{1}M)}\leq\delta+e^{-1} \|f_{\theta}-f^{K}_{h}\|_{\mathcal{B}(\mathbf{x},(j+1)b_{3}Sh_{1}M)}.\]
As in the proof of Theorem 3.1 in [55], we obtain that
\[\|f_{\theta}(\mathbf{x})-f^{K}_{h}(\mathbf{x})\|\leq e^{-\hat{\gamma}/\Delta t}\|f_{ \theta}-f^{K}_{h}\|_{\mathcal{B}(\mathbf{x},r_{1})}+\frac{\delta}{1-\lambda},\]
where \(\hat{\gamma}=\frac{r_{1}}{(eb_{2}+1)b_{3}M}\). And thus we conclude that
\[\|f_{\theta}(\mathbf{x})-f^{K}_{h}(\mathbf{x})\|\leq c_{1}me^{-\gamma/\Delta t^{1/q}}+ C\mathcal{L},\]
where \(C=e/(e-1)\) and \(c_{1}\) is a constant satisfying \(c_{1}\geq C\cdot c+\eta e^{\gamma/\Delta t^{1/q}-\hat{\gamma}/\Delta t}\).
Next, to complete the proof of Theorem 3.1, it suffices to show that the unrolled implicit Runge-Kutta scheme \(\Phi_{h}\) (2.4), using either fixed-point iteration (2.5) or Newton-Raphson iteration (2.6), satisfies Assumption B.1.
**Lemma B.3** (Fixed-point iteration obeys Assumption B.1).: _Consider a consistent implicit Runge-Kutta scheme \(\Phi_{h}\) (2.4) and its approximation via fixed-point iteration \(\Phi^{L}_{h}\) (2.5), denote_
\[\mu=\sum_{i=1}^{I}|b_{i}|,\quad\kappa=\max_{1\leq i\leq I}\sum_{j=1}^{I}|a_{ij}|.\]
_Let \(g,\hat{g}\) be analytic in \(\mathcal{B}(\mathcal{K},r)\) and satisfy \(\|g\|_{\mathcal{B}(\mathcal{K},r)}\leq m\), \(\|\hat{g}\|_{\mathcal{B}(\mathcal{K},r)}\leq m\). Then, for \(|h|\leq h_{0}=r/(2(S\mu+\kappa)m)\) and \(\mathbf{x}\in\mathcal{K}\), the compositions \((\Phi^{L}_{h,g})^{S}(\mathbf{x})\), \((\Phi^{L}_{h,\hat{g}})^{S}(\mathbf{x})\) are analytic and_
\[\|(\Phi^{L}_{h,\hat{g}})^{S}-(\Phi^{L}_{h,g})^{S}\|_{\mathcal{K}}\leq(e-1)(S \mu+\kappa)|h|\|\hat{g}-g\|_{\mathcal{B}(\mathcal{K},r)}.\]
_In addition, for \(|h|<h_{1}\leq h_{0}\),_
\[\|\hat{g}-g\|_{\mathcal{K}}\leq\frac{\|(\Phi^{L}_{h,\hat{g}})^{S}-(\Phi^{L}_{ h,g})^{S}\|_{\mathcal{K}}}{S|h|}+\frac{(e-1)(\mu+\kappa/S)|h|\|\hat{g}-g\|_{ \mathcal{B}(\mathcal{K},(S\mu+\kappa)h_{1}m)}}{h_{1}-|h|}.\]
Proof.: For \(\mathbf{y}\in\mathcal{B}(\mathcal{K},r/2)\) and \(\|\Delta\mathbf{y}\|\leq 1\), the function \(\alpha(z)=g(\mathbf{y}+z\Delta\mathbf{y})\) is analytic for \(|z|\leq r/2\) and bounded by \(m\). By Cauchy's estimate, we obtain
\[\|g^{\prime}(\mathbf{y})\Delta\mathbf{y}\|=\|\alpha^{\prime}(0)\|\leq 2m/r,\]
and \(\|g^{\prime}(\mathbf{y})\|\leq 2m/r\) for \(y\in\mathcal{B}(\mathcal{K},r/2)\) in the operator norm. Similarly, \(\|\hat{g}^{\prime}(y)\|\leq 2m/r\) for \(y\in\mathcal{B}(\mathcal{K},r/2)\).
As in (A.6), when using fixed-point iteration (2.5) to unroll the Runge-Kutta method, the solutions are recursively obtained by
\[\begin{cases}\mathbf{v}_{i}^{0,s}=\mathbf{x}^{L,s},\quad\mathbf{x}^{L,0}=\mathbf{x}\\ \mathbf{v}_{i}^{l,s}=\mathbf{x}^{L,s}+h\sum_{j=1}^{I}a_{ij}g(\mathbf{v}_{j}^{l-1,s}),\\ \mathbf{x}^{L,s+1}=\mathbf{x}^{L,s}+h\sum_{i=1}^{I}b_{i}g(\mathbf{v}_{i}^{L,s}),\end{cases} \begin{cases}\mathbf{\hat{v}}_{i}^{0,s}=\mathbf{\hat{x}}^{L,s},\quad\mathbf{\hat{x}}^{L,0}=\mathbf{x}\\ \mathbf{\hat{v}}_{i}^{l,s}=\mathbf{\hat{x}}^{L,s}+h\sum_{j=1}^{I}a_{ij}\hat{g}(\mathbf{\hat {v}}_{j}^{l-1,s}),\\ \mathbf{\hat{x}}^{L,s+1}=\mathbf{\hat{x}}^{L,s}+h\sum_{i=1}^{I}b_{i}\hat{g}(\mathbf{\hat {v}}_{i}^{L,s}),\end{cases}\]
where \(l=1,\cdots,L\), \(s=0,\cdots,S-1\). For \(|h|\leq h_{0}=r/(2(S\mu+\kappa)m)\) and \(\mathbf{x}\in\mathcal{K}\), we can readily check that
\[\mathbf{x}^{L,s},\mathbf{\hat{x}}^{L,s}\in\mathcal{B}(\mathcal{K},s\mu m |h|)\text{ for }s=0,\cdots,S,\] \[\mathbf{v}_{i}^{l,s},\mathbf{\hat{v}}_{i}^{l,s}\in\mathcal{B}(\mathcal{K},(s\mu+\kappa)m|h|)\text{ for }s=0,\cdots,S-1,\ l=1,\cdots,L,\ i=1,\cdots,I.\]
Denote \(V^{l,s}=\max_{1\leq i\leq I}\|\mathbf{v}_{i}^{l,s}-\mathbf{\hat{v}}_{i}^{l,s}\|\), \(X^{s}=\|\mathbf{x}^{L,s}-\mathbf{\hat{x}}^{L,s}\|\), we have
\[\|\mathbf{v}_{i}^{l,s}-\mathbf{\hat{v}}_{i}^{l,s}\|\leq |h|\sum_{j=1}^{I}|a_{ij}|(\|g(\mathbf{v}_{j}^{l-1,s})-g(\mathbf{\hat{v}}_{j}^{l-1,s})\|+\|g(\mathbf{\hat{v}}_{j}^{l-1,s})-\hat{g}(\mathbf{\hat{v}}_{j}^{l-1,s})\|)+X^{s}\] \[\leq |h|\kappa\frac{2m}{r}V^{l-1,s}+|h|\kappa\|\hat{g}-g\|_{\mathcal{B}(\mathcal{K},(S\mu+\kappa)m|h|)}+X^{s}.\]
Thus we obtain
\[V^{l,s}\leq|h|\kappa\frac{2m}{r}V^{l-1,s}+|h|\kappa\|\hat{g}-g\|_{\mathcal{B} (\mathcal{K},(S\mu+\kappa)m|h|)}+X^{s}.\]
As a result, we have
(B.2) \[\begin{split} V^{L,s}\leq&(|h|\kappa\frac{2m}{r})^{L}V^{0, s}+\frac{1-(|h|\kappa\frac{2m}{r})^{L}}{1-|h|\kappa\frac{2m}{r}}X^{s}+\frac{1-(|h| \kappa\frac{2m}{r})^{L}}{1-|h|\kappa\frac{2m}{r}}|h|\kappa\|\hat{g}-g\|_{ \mathcal{B}(\mathcal{K},(S\mu+\kappa)m|h|)}\\ \leq&\frac{1}{1-|h|\kappa\frac{2m}{r}}X^{s}+\frac{1} {1-|h|\kappa\frac{2m}{r}}|h|\kappa\|\hat{g}-g\|_{\mathcal{B}(\mathcal{K},(S\mu +\kappa)m|h|)}.\end{split}\]
In addition,
(B.3) \[\begin{split}\|\mathbf{x}^{L,s+1}-\hat{\mathbf{x}}^{L,s+1}\|\leq& X^{s}+|h|\sum_{i=1}^{I}|b_{i}|(\|g(\mathbf{v}_{i}^{L,s})-g(\hat{\mathbf{v}}_{i}^{L,s})\|+\|g(\hat{\mathbf{v}}_{i}^{L,s})-\hat{g}(\hat{\mathbf{v}}_{i}^{L,s})\|)\\ \leq& X^{s}+|h|\mu\frac{2m}{r}V^{L,s}+|h|\mu\|\hat{g}-g\|_{\mathcal{B}(\mathcal{K},(S\mu+\kappa)m|h|)}.\end{split}\]
These estimates, together with (B.2), indicate that
(B.4) \[X^{s+1}\leq(1+\frac{|h|\mu\frac{2m}{r}}{1-|h|\kappa\frac{2m}{r}})X^{s}+(\frac{ |h|\mu\frac{2m}{r}}{1-|h|\kappa\frac{2m}{r}}\kappa+\mu)|h|\|\hat{g}-g\|_{ \mathcal{B}(\mathcal{K},(S\mu+\kappa)m|h|)},\]
Therefore, we deduce that
\[\begin{split} X^{S}\leq&\frac{(1+\frac{|h|\mu\frac {2m}{r}}{1-|h|\kappa\frac{2m}{r}})^{S}-1}{\frac{|h|\mu\frac{2m}{r}}{1-|h|\kappa \frac{2m}{r}}}(\frac{|h|\mu\frac{2m}{r}}{1-|h|\kappa\frac{2m}{r}}\kappa+\mu)|h| \|\hat{g}-g\|_{\mathcal{B}(\mathcal{K},(S\mu+\kappa)m|h|)}\\ \leq&(e-1)(S\mu+\kappa)|h|\|\hat{g}-g\|_{\mathcal{B} (\mathcal{K},(S\mu+\kappa)m|h|)}.\end{split}\]
where we have used the fact \(|h|\mu\frac{2m}{r}/(1-|h|\kappa\frac{2m}{r})\leq 1/S\).
Finally, using Cauchy's estimate, we deduce that for \(h_{1}\leq h_{0}\),
\[\|\frac{d^{i}}{dh^{i}}\left((\Phi_{h,\hat{g}}^{L})^{S}(\mathbf{x})-(\Phi_{h,g}^{L} )^{S}(\mathbf{x})\right)\big{|}_{h=0}\|\leq\frac{i!\cdot(e-1)(S\mu+\kappa)\|\hat{g }-g\|_{\mathcal{B}(\mathcal{K},(S\mu+\kappa)h_{1}m)}}{h_{1}^{i-1}}.\]
By the analyticity and triangle inequality, we obtain that for \(|h|<h_{1}\),
\[\begin{split}&\|(\Phi_{h,\hat{g}}^{L})^{S}(\mathbf{x})-(\Phi_{h,g}^{L} )^{S}(\mathbf{x})\|\\ \geq& S|h|\|\hat{g}(\mathbf{x})-g(\mathbf{x})\|-\sum_{i=2}^{ \infty}\|\frac{h^{i}}{i!}\frac{d^{j}}{dh^{j}}\left((\Phi_{h,\hat{g}}^{L})^{S}( \mathbf{x})-(\Phi_{h,g}^{L})^{S}(\mathbf{x})\right)\big{|}_{h=0}\|\\ \geq& S|h|\|\hat{g}(\mathbf{x})-g(\mathbf{x})\|-(e-1)(S\mu+ \kappa)|h|\|\hat{g}-g\|_{\mathcal{B}(\mathcal{K},(S\mu+\kappa)h_{1}m)}\sum_{i=2 }^{\infty}\left(\frac{|h|}{h_{1}}\right)^{i-1}.\end{split}\]
Therefore, we have
(B.5) \[\|\hat{g}-g\|_{\mathcal{K}}\leq\frac{\|(\Phi_{h,\hat{g}}^{L})^{S}-(\Phi_{h,g}^{ L})^{S}\|_{\mathcal{K}}}{S|h|}+\frac{(e-1)(\mu+\kappa/S)|h|\|\hat{g}-g\|_{\mathcal{B} (\mathcal{K},(S\mu+\kappa)h_{1}m)}}{h_{1}-|h|},\]
which concludes the proof.
**Lemma B.4** (Newton-Raphson iteration obeys Assumption B.1).: _Consider a consistent implicit Runge-Kutta scheme \(\Phi_{h}\) (2.4), and its approximation using Newton-Raphson iteration \(\Phi_{h}^{L}\) (2.6), denote_
\[\mu=\sum_{i=1}^{I}|b_{i}|,\quad\kappa=\max_{1\leq i\leq I}\sum_{j=1}^{I}|a_{ij}|.\]
_Let \(g,\hat{g}\) be analytic in \(\mathcal{B}(\mathcal{K},r)\) and satisfy \(\|g\|_{\mathcal{B}(\mathcal{K},r)}\leq m\), \(\|\hat{g}\|_{\mathcal{B}(\mathcal{K},r)}\leq m\). Then, for \(|h|\leq h_{0}=r/(2(S\mu+3.5\kappa)m)\) and \(\mathbf{x}\in\mathcal{K}\), the compositions \((\Phi_{h,g}^{L})^{S}(\mathbf{x})\), \((\Phi_{h,\hat{g}}^{L})^{S}(\mathbf{x})\) are analytic and_
\[\|(\Phi_{h,\hat{g}}^{L})^{S}-(\Phi_{h,g}^{L})^{S}\|_{\mathcal{K}}\leq(e-1)\mu S |h|\|\hat{g}-g\|_{\mathcal{B}(\mathcal{K},r)}.\]
_In addition, for \(|h|<h_{1}\leq h_{0}\),_
\[\|\hat{g}-g\|_{\mathcal{K}}\leq\frac{\|(\Phi_{h,\hat{g}}^{L})^{S}-(\Phi_{h,g}^ {L})^{S}\|_{\mathcal{K}}}{S|h|}+\frac{(e-1)(\mu+\kappa/S)|h|\|\hat{g}-g\|_{ \mathcal{B}(\mathcal{K},(S\mu+3\kappa)h_{1}m)}}{h_{1}-|h|}.\] (B.6)
We note that (B.5) and (B.6) differ only in the constants 3.5 and 3 used (which are both 1 in the FP case).
Proof.: For \(\mathbf{y}\in\mathcal{B}(\mathcal{K},r/2)\) and \(\|\Delta\mathbf{y}\|\leq 1\), the function \(\alpha(z)=g(\mathbf{y}+z\Delta\mathbf{y})\) is analytic for \(|z|\leq r/2\) and bounded by \(m\). By Cauchy's estimate, we obtain
\[\|g^{\prime}(\mathbf{y})\Delta\mathbf{y}\|=\|\alpha^{\prime}(0)\|\leq 2m/r,\quad\|g^{ \prime\prime}(\mathbf{y})(\Delta\mathbf{y},\Delta\mathbf{y})\|=\|\alpha^{\prime\prime}(0) \|\leq 4m/r^{2}\]
and thus \(\|g^{\prime}(\mathbf{y})\|\leq 2m/r\), \(\|g^{\prime\prime}(\mathbf{y})\|\leq 4m/r^{2}\) for \(\mathbf{y}\in\mathcal{B}(\mathcal{K},r/2)\) in the operator norm. Similar estimates hold for \(\hat{g}\).
When using Newton-Raphson iteration (2.6) to unroll the Runge-Kutta method, the solutions are recursively obtained by
\[\begin{cases}\mathbf{v}_{i}^{0,s}=\mathbf{x}^{L,s},\quad\mathbf{x}^{L,0}=\mathbf{x}\\ \mathbf{v}_{i}^{l,s}=\mathbf{x}^{L,s}+h\sum_{j=1}^{I}a_{ij}\big{(}g(\mathbf{v}_{j}^{l-1,s} )+g^{\prime}(\mathbf{v}_{j}^{l-1,s})(\mathbf{v}_{j}^{l,s}-\mathbf{v}_{j}^{l-1,s})\big{)}, \\ \mathbf{x}^{L,s+1}=\mathbf{x}^{L,s}+h\sum_{i=1}^{I}b_{i}g(\mathbf{v}_{i}^{L,s}), \end{cases}\]
\[\begin{cases}\mathbf{\hat{v}}_{i}^{0,s}=\mathbf{\hat{x}}^{L,s},\quad\mathbf{\hat{x}}^{L,0 }=\mathbf{x}\\ \mathbf{\hat{v}}_{i}^{l,s}=\mathbf{\hat{x}}^{L,s}+h\sum_{j=1}^{I}a_{ij}\big{(}\hat{g} (\mathbf{\hat{v}}_{j}^{l-1,s})+\hat{g}^{\prime}(\mathbf{\hat{v}}_{j}^{l-1,s})(\mathbf{ \hat{v}}_{j}^{l,s}-\mathbf{\hat{v}}_{j}^{l-1,s})\big{)},\\ \mathbf{\hat{x}}^{L,s+1}=\mathbf{\hat{x}}^{L,s}+h\sum_{i=1}^{I}b_{i}\hat{g}(\mathbf{\hat{v }}_{i}^{L,s}),\end{cases}\]
where \(l=1,\cdots,L\), \(s=0,\cdots,S-1\). Denote \(U^{l,s}=\max_{1\leq i\leq I}\|\mathbf{v}_{i}^{l,s}-\mathbf{v}_{i}^{l-1,s}\|\), we have
\[\|\mathbf{v}_{i}^{l,s}-\mathbf{v}_{i}^{l-1,s}\|\] \[\leq h\sum_{j=1}^{I}|a_{ij}|\|g^{\prime}(\mathbf{v}_{j}^{l-1,s})(\mathbf{v}_{i }^{l,s}-\mathbf{v}_{j}^{l-1,s})+g(\mathbf{v}_{j}^{l-1,s})-g(\mathbf{v}_{j}^{l-2,s})-g^{ \prime}(\mathbf{v}_{j}^{l-2,s})(\mathbf{v}_{i}^{l-1,s}-\mathbf{v}_{j}^{l-2,s})\|\] \[\leq m_{1}\kappa hU^{l,s}+m_{2}\kappa h(U^{l-1,s})^{2}\]
where \(m_{1}=\max_{s,l,i}\|g^{\prime}(\mathbf{v}_{j}^{l-1,s})\|\), \(m_{2}=\max_{s,l,i}\sup_{\theta\in[0,1]}\|g^{\prime\prime}(\theta\mathbf{v}_{j}^{l -1,s}+(1-\theta)\mathbf{v}_{j}^{l-2,s})\|\). Let \(m_{0}=\max_{s,i}|g(\mathbf{x}^{L,s})|\), we have
\[U^{l,s}\leq\frac{m_{2}\kappa h}{1-m_{1}\kappa h}(U^{l-1,s})^{2} \leq(\frac{m_{2}\kappa h}{1-m_{1}\kappa h})^{2^{l-1}-1}(U^{1,s})^{2^{l-1}} \leq(\frac{m_{2}\kappa h}{1-m_{1}\kappa h})^{2^{l-1}-1}(\frac{m_{0}\kappa h}{ 1-m_{1}\kappa h})^{2^{l-1}}.\]
Therefore, we can inductively check that for \(s=0,\cdots,S-1\), \(l=1,\cdots,L\), \(i=1,\cdots,I\),
\[\mathbf{x}^{L,s},\hat{\mathbf{x}}^{L,s}\in\mathcal{B}(\mathcal{K},(s-1)|h |\mu m),\quad\mathbf{v}_{i}^{l,s},\hat{\mathbf{v}}_{i}^{l,s}\in\mathcal{B}(\mathcal{K},(s-1)|h|\mu m+|h|\kappa m/(1-m_{1}\kappa h)),\] \[m_{0}\leq m,\quad m_{1}\leq 2m/r.\]
Denote \(V^{l,s}=\max_{1\leq i\leq I}\|\mathbf{v}_{i}^{l,s}-\hat{\mathbf{v}}_{i}^{l,s}\|\), \(X^{s}=\|\mathbf{x}^{L,s}-\mathbf{\hat{x}}^{s}\|\), we have
\[\|g^{\prime}(\mathbf{v}_{j}^{l-1,s})(\mathbf{v}_{j}^{l,s}-\mathbf{v}_{j}^{l-1,s})-\hat{g}^{\prime}(\hat{\mathbf{v}}_{j}^{l-1,s})(\hat{\mathbf{v}}_{j}^{l,s}-\hat{ \mathbf{v}}_{j}^{l-1,s})\|\] \[\leq \|g^{\prime}(\mathbf{v}_{j}^{l-1,s})(\mathbf{v}_{j}^{l,s}-\mathbf{v}_{j}^{l-1,s})-\hat{g}^{\prime}(\hat{\mathbf{v}}_{j}^{l-1,s})(\mathbf{v}_{j}^{l,s}-\mathbf{v}_{j}^{l -1,s})\|\] \[+\|\hat{g}^{\prime}(\mathbf{v}_{j}^{l-1,s})(\mathbf{v}_{j}^{l,s}-\mathbf{v}_{j }^{l-1,s})-\hat{g}^{\prime}(\hat{\mathbf{v}}_{j}^{l-1,s})(\mathbf{v}_{j}^{l,s}-\mathbf{v}_{ j}^{l-1,s})\|\] \[+\|\hat{g}^{\prime}(\hat{\mathbf{v}}_{j}^{l-1,s})(\mathbf{v}_{j}^{l,s}-\bm {v}_{j}^{l-1,s})-\hat{g}^{\prime}(\hat{\mathbf{v}}_{j}^{l-1,s})(\hat{\mathbf{v}}_{j}^{ l,s}-\hat{\mathbf{v}}_{j}^{l-1,s})\|\] \[\leq \|g-\hat{g}\|_{\mathcal{B}(\mathcal{K},(s-1)|h|\mu m+2|h|\kappa m /(1-m_{1}\kappa|h|))}+\frac{4|h|\kappa m^{2}}{r^{2}(1-m_{1}\kappa|h|)}V^{l-1,s}+\frac{2m}{r}(V^{l-1,s}+V^{l,s})\] \[\leq \|g-\hat{g}\|_{\mathcal{B}(\mathcal{K},(s-1)|h|\mu m+3|h|\kappa m )}+\frac{m}{r}V^{l-1,s}+\frac{2m}{r}(V^{l-1,s}+V^{l,s}),\]
where the last inequality holds by the fact that \(3\kappa m|h|\leq r/2\).
Subsequently, we deduce that
\[\|\mathbf{v}_{i}^{l,s}-\hat{\mathbf{v}}_{i}^{l,s}\|\] \[\leq X^{s}+|h|\sum_{j=1}^{s}|a_{ij}|(\|g(\mathbf{v}_{j}^{l-1,s})-\hat{g}( \hat{\mathbf{v}}_{j}^{l-1,s})\|+\|g^{\prime}(\mathbf{v}_{j}^{l-1,s})(\mathbf{v}_{j}^{l,s}- \mathbf{v}_{j}^{l-1,s})-\hat{g}^{\prime}(\hat{\mathbf{v}}_{j}^{l-1,s})(\hat{\mathbf{v}}_{j} ^{l,s}-\hat{\mathbf{v}}_{j}^{l-1,s})\|)\] \[\leq |h|\kappa\frac{5m}{r}V^{l-1,s}+|h|\kappa\frac{2m}{r}V^{l,s}+2|h |\kappa\|\hat{g}-g\|_{\mathcal{B}(\mathcal{K},(s-1)|h|\mu m+3|h|\kappa m)}+X^{s}.\]
Thus we obtain
\[V^{l,s}\leq\frac{|h|\kappa\frac{5m}{r}}{1-|h|\kappa\frac{2m}{r}}V^{l-1,s}+ \frac{|h|\kappa\|\hat{g}-g\|_{\mathcal{B}(\mathcal{K},(s-1)|h|\mu m+3|h|\kappa m )}+X^{s}}{1-|h|\kappa\frac{2m}{r}}.\]
As a result, we have
\[V^{L,s}\leq \frac{1}{1-|h|\kappa\frac{7m}{r}}X^{s}+\frac{1}{1-|h|\kappa\frac{7m} {r}}|h|\kappa\|\hat{g}-g\|_{\mathcal{B}(\mathcal{K},(S\mu+3\kappa)m|h|)}.\] (B.7)
Due to the similarity of estimates (B.2) and (B.7), it is now possible to carry over the results of fixed-point iteration to Newton-Raphson iteration. Here, the analogous estimates are given as
\[X^{S}\leq (e-1)(S\mu+\kappa)|h|\|\hat{g}-g\|_{\mathcal{B}(\mathcal{K},(S\mu +3\kappa)m|h|)},\] \[\|\hat{g}-g\|_{\mathcal{K}}\leq \frac{\|(\Phi_{h,\hat{g}}^{L})^{S}-(\Phi_{h,g}^{L})^{S}\|_{ \mathcal{K}}}{S|h|}+\frac{(e-1)(\mu+\kappa/S)|h|\|\hat{g}-g\|_{\mathcal{B}( \mathcal{K},(S\mu+3\kappa)h_{1}m)}}{h_{1}-|h|}.\]
The proof is completed.
### Proof of Lemma 3.2 (The \(M\)-step shooting loss and the teacher-forcing loss have equivalent convergence)
In the following, we seek to prove a double inequality of the form \(c_{1}A\leq B\leq c_{2}A\), and, broadly speaking, do this by showing (1) that \(B\leq c_{2}A\), and (2) that \(A\leq c_{3}B\) with \(c_{1}=1/c_{3}\).
Proof.: Denote by \(C_{L}\) the Lipschitz constant of \(\left(\Phi_{h,f_{\theta}}^{L}\right)^{s}\), we have
\[\sum_{x\in\mathcal{T}}\|\big{(}\Phi_{h,f_{\theta}}^{L}\big{)}^{s} (\mathbf{x})-\phi_{sh,f}(\mathbf{x})\|_{2}^{2}=\sum_{n=1}^{N}\sum_{m=1}^{M}\|\big{(} \Phi_{h,f_{\theta}}^{L}\big{)}^{s}\circ\phi_{(m-1)\Delta t,f}(\mathbf{x}_{n})-\phi _{m\Delta t,f}(\mathbf{x}_{n})\|_{2}^{2}\] \[\leq \sum_{n=1}^{N}\sum_{m=1}^{M}2\|\big{(}\Phi_{h,f_{\theta}}^{L} \big{)}^{s}\circ\phi_{(m-1)\Delta t,f}(\mathbf{x}_{n})-\big{(}\Phi_{h,f_{\theta}} ^{L}\big{)}^{ms}\|_{2}^{2}+2\|\big{(}\Phi_{h,f_{\theta}}^{L}\big{)}^{ms}\,( \mathbf{x}_{n})-\phi_{m\Delta t,f}(\mathbf{x}_{n})\|_{2}^{2}\] \[\leq 2(C_{L}^{2}+1)\cdot M^{2}\cdot\sum_{n=1}^{N}\sum_{m=1}^{M}\| \big{(}\Phi_{h,f_{\theta}}^{L}\big{)}^{ms}\,(\mathbf{x}_{n})-\phi_{m\Delta t,f}( \mathbf{x}_{n})\|_{2}^{2}/m^{2}.\]
In addition,
\[\|\big{(}\Phi_{h,f_{\theta}}^{L}\big{)}^{ms}\,(\mathbf{x}_{n})-\phi_ {m\Delta t,f}(\mathbf{x}_{n})\|_{2}^{2}/m^{2}\] \[\leq \sum_{i=0}^{m-1}\|\big{(}\Phi_{h,f_{\theta}}^{L}\big{)}^{(m-i)s} \circ\phi_{i\Delta t,f}(\mathbf{x}_{n})-\big{(}\Phi_{h,f_{\theta}}^{L}\big{)}^{(m-i -1)s}\circ\phi_{(i+1)\Delta t,f}(\mathbf{x}_{n})\|_{2}^{2}\] \[\leq \sum_{i=1}^{m-1}C_{L}^{2(m-i-1)}\|\big{(}\Phi_{h,f_{\theta}}^{L} \big{)}^{s}\circ\phi_{i\Delta t,f}(\mathbf{x}_{n})-\phi_{(i+1)\Delta t,f}(\mathbf{x}_{ n})\|_{2}^{2}\] \[\leq \sum_{i=1}^{M}C_{L}^{2(M-1)}\|\big{(}\Phi_{h,f_{\theta}}^{L} \big{)}^{s}\circ\phi_{(i-1)\Delta t,f}(\mathbf{x}_{n})-\phi_{i\Delta t,f}(\mathbf{x}_{ n})\|_{2}^{2}.\]
Therefore, we conclude that
\[\begin{split}&\sum_{n=1}^{N}\sum_{m=1}^{M}\|\big{(}\Phi_{h,f_{\theta}}^ {L}\big{)}^{ms}\left(\mathbf{x}_{n}\right)-\phi_{m\Delta t,f}(\mathbf{x}_{n})\|_{2}^{2} /m^{2}\\ \leq&\sum_{n=1}^{N}\sum_{m=1}^{M}\sum_{i=1}^{M}C_{L} ^{2(M-1)}\|\big{(}\Phi_{h,f_{\theta}}^{L}\big{)}^{s}\circ\phi_{(i-1)\Delta t,f} (\mathbf{x}_{n})-\phi_{i\Delta t,f}(\mathbf{x}_{n})\|_{2}^{2}\\ \leq& C_{L}^{2(M-1)}\cdot M\cdot\sum_{x\in\mathcal{T }}\|\big{(}\Phi_{h,f_{\theta}}^{L}\big{)}^{s}\left(\mathbf{x}\right)-\phi_{sh,f}( \mathbf{x})\|_{2}^{2}.\end{split} \tag{10}\]
The proof is completed.
### Proof of Theorem 3.3 (Increasing the iteration number \(L\) is equivalent to adjusting the approximation target to gradually approach the true target)
We first demonstrate the convergence of both fixed-point iteration (2.5) and Newton-Raphson iteration (2.6) for multiple compositions, which will also be used for the proof of Lemma 4.1.
**Lemma B.5** (**Multiple compositions of fixed-point iteration converge**.).: _Consider a consistent implicit Runge-Kutta scheme \(\Phi_{h}\) (2.4) and its approximation using fixed-point iteration \(\Phi_{h}^{L}\) (2.5). Denote_
\[\mu=\sum_{i=1}^{I}|b_{i}|,\quad\kappa=\max_{1\leq i\leq I}\sum_{j=1}^{I}|a_{ij}|.\]
_Then, for any continuously differentiable \(g\) and initial value \(\mathbf{x}\), there exists a remainder term \(R=\mathcal{O}(h^{L+3})\) such that_
\[\|\big{(}\Phi_{h,g}^{L}\big{)}^{S}(\mathbf{x})-\big{(}\Phi_{h,g}\big{)}^{S}(\mathbf{x} )\|_{\infty}\leq S\|g(\mathbf{x})\|\|g^{\prime}(\mathbf{x})\|^{L+1}\mu\kappa^{L+1}h^{L +2}+R.\]
Proof.: The solutions of \(\big{(}\Phi_{h,g}\big{)}^{S}(\mathbf{x})\) and \(\big{(}\Phi_{h,g}^{L}\big{)}^{S}(\mathbf{x})\) with initial value \(\mathbf{x}\) are respectively given by
\[\begin{cases}\mathbf{x}^{0}=\mathbf{x},\\ \mathbf{v}_{i}^{s}=\mathbf{x}^{s}+h\sum_{j=1}^{I}a_{ij}g(\mathbf{v}_{j}^{s}),\\ \mathbf{x}^{s+1}=\mathbf{x}^{s}+h\sum_{i=1}^{I}b_{i}g(\mathbf{v}_{i}^{s}),\end{cases}\quad \begin{cases}\mathbf{x}^{L,0}=\mathbf{x},\quad\mathbf{v}_{i}^{0,s}=\mathbf{x}^{L,s},\\ \mathbf{v}_{i}^{l,s}=\mathbf{x}^{L,s}+h\sum_{j=1}^{I}a_{ij}g(\mathbf{v}_{j}^{l-1,s}),\\ \mathbf{x}^{L,s+1}=\mathbf{x}^{L,s}+h\sum_{i=1}^{I}b_{i}g(\mathbf{v}_{i}^{L,s}),\end{cases}\]
where \(s=0,\cdots,S-1\), \(l=1,\cdots,L\), \(i=1,\cdots,I\) and \(\big{(}\Phi_{h,g}\big{)}^{S}(\mathbf{x})=\mathbf{x}^{S}\), \(\big{(}\Phi_{h,g}^{L}\big{)}^{S}(\mathbf{x})=\mathbf{x}^{L,S}\). Denote \(V^{l,s}=\max_{i}\|\mathbf{v}_{i}^{l,s}-\mathbf{v}_{i}^{s}\|\), we have
\[\|\mathbf{v}_{i}^{l,s}-\mathbf{v}_{i}^{s}\|\leq m_{1}\kappa h\cdot V^{l-1,s}+\|\mathbf{x}^{L,s}-\mathbf{x}^{s}\|,\]
where \(m_{1}=\max_{s,l,i}\|g(\mathbf{v}_{i}^{s})-g(\mathbf{v}_{i}^{l,s})\|/\|\mathbf{v}_{i}^{s}-\mathbf{v }_{i}^{l,s}\|\). As a result,
\[V^{L,s}\leq (m_{1}\kappa h)^{L}\cdot V^{0,s}+\frac{1-(m_{1}\kappa h)^{L}}{1-m_ {1}\kappa h}\|\mathbf{x}^{L,s}-\mathbf{x}^{s}\|\] \[\leq (m_{1}\kappa h)^{L}\cdot m_{0}\kappa h+\frac{1}{1-m_{1}\kappa h} \|\mathbf{x}^{L,s}-\mathbf{x}^{s}\|,\]
where \(m_{0}=\max_{s,i}|g(\mathbf{v}_{i}^{s})|\). In addition, we deduce that
\[\|\mathbf{x}^{L,s+1}-\mathbf{x}^{s+1}\|\leq \|\mathbf{x}^{L,s}-\mathbf{x}^{s}\|+m_{1}\mu h\cdot V^{L,s}\] \[\leq (1+\frac{m_{1}\mu h}{1-m_{1}\kappa h})\|\mathbf{x}^{L,s}-\mathbf{x}^{s}\| +m_{0}m_{1}^{L+1}\mu\kappa^{L+1}h^{L+2}.\]
Finally, we obtain that
\[\|\mathbf{x}^{L,S}-\mathbf{x}^{S}\|\leq (1+\frac{m_{1}\mu h}{1-m_{1}\kappa h})^{S}\|\mathbf{x}^{L,0}-\mathbf{x}^{ 0}\|+\frac{(1+\frac{m_{1}\mu h}{1-m_{1}\kappa h})^{S}-1}{\frac{m_{1}\mu h}{1- m_{1}\kappa h}}m_{1}^{L+1}m_{0}\mu\kappa^{L+1}h^{L+2}\] \[\leq \frac{(1+\frac{m_{1}\mu h}{1-m_{1}\kappa h})^{S}-1}{\frac{m_{1} \mu h}{1-m_{1}\kappa h}}\cdot m_{0}m_{1}^{L+1}\mu\kappa^{L+1}h^{L+2}\] \[= S\|g(\mathbf{x})\|\|g^{\prime}(\mathbf{x})\|^{L+1}\mu\kappa^{L+1}h^{L+2}+ \mathcal{O}(h^{L+3}).\]
The proof is complete.
**Lemma B.6** (**Multiple compositions of Newton-Raphson iteration converge**.).: _Consider a consistent implicit Runge-Kutta scheme \(\Phi_{h}\) (2.4) and its approximation using Newton-Raphson iteration \(\Phi_{h}^{L}\) (2.6). Then, for any twice continuously differentiable \(g\) and initial value \(\mathbf{x}\), there exists a remainder term \(R=\mathcal{O}(h^{2^{L+1}+1})\) such that_
\[\|\big{(}\Phi_{h,g}^{L}\big{)}^{S}(\mathbf{x})-\big{(}\Phi_{h,g}\big{)}^{S}(\mathbf{x} )\|_{\infty}\leq S\|g^{\prime}(\mathbf{x})\|(\frac{\|g^{\prime\prime}(\mathbf{x})\|}{1- \|g^{\prime}(\mathbf{x})\|\kappa h})^{2^{L}-1}\|g(\mathbf{x})\|^{2^{L}}\mu\kappa^{2^{L +1}-1}h^{2^{L+1}}+R,\]
_where \(\mu\) and \(\kappa\) are constants defined in Lemma B.5._
Proof.: The solutions of \(\big{(}\Phi_{h,g}\big{)}^{S}(\mathbf{x})\) and \(\big{(}\Phi_{h,g}^{L}\big{)}^{S}(\mathbf{x})\) with initial value \(\mathbf{x}\) are respectively given by
\[\begin{cases}\mathbf{x}^{0}=\mathbf{x},\\ \mathbf{v}_{i}^{s}=\mathbf{x}^{s}+h\sum_{j=1}^{I}a_{ij}g(\mathbf{v}_{j}^{s}),\\ \mathbf{x}^{s+1}=\mathbf{x}^{s}+h\sum_{i=1}^{I}b_{i}g(\mathbf{v}_{i}^{s}),\end{cases}\quad \begin{cases}\mathbf{x}^{L,0}=\mathbf{x},\quad\mathbf{v}_{i}^{0,s}=\mathbf{x}^{L,s},\\ \mathbf{v}_{i}^{l,s}=\mathbf{x}^{L,s}+h\sum_{j=1}^{I}a_{ij}\big{(}g(\mathbf{v}_{j}^{l-1,s} )+g^{\prime}(\mathbf{v}_{j}^{l-1,s})(\mathbf{v}_{j}^{l,s}-\mathbf{v}_{j}^{l-1,s})\big{)},\\ \mathbf{x}^{L,s+1}=\mathbf{x}^{L,s}+h\sum_{i=1}^{I}b_{i}g(\mathbf{v}_{i}^{L,s}),\end{cases}\]
where \(s=0,\cdots,S-1\), \(l=1,\cdots,L\), \(i=1,\cdots,I\) and \(\big{(}\Phi_{h,g}\big{)}^{S}(\mathbf{x})=\mathbf{x}^{S}\), \(\big{(}\Phi_{h,g}^{L}\big{)}^{S}(\mathbf{x})=\mathbf{x}^{L,S}\).
Let
\[\hat{\mathbf{v}}_{i}^{s}=\mathbf{x}^{L,s}+h\sum_{j=1}^{I}a_{ij}g(\hat{\mathbf{v}}_{j}^{s}),\ \text{for}\ i=1,\cdots,I,\ s=0,\cdots,S-1,\]
and \(m_{0}=\max_{s,i}|g(\hat{\mathbf{v}}_{i}^{s})|\), \(m_{1}=\max_{s,l,i}\|g^{\prime}(\mathbf{v}_{j}^{l-1,s})\|\), \(m_{2}=\max_{s,l,i}\sup_{\theta\in[0,1]}\|g^{\prime\prime}(\theta\mathbf{v}_{j}^{l-1,s}+(1-\theta)\hat{\mathbf{v}}_{j}^{s})\|/2\), \(\hat{V}^{l,s}=\max_{i}\|\mathbf{v}_{i}^{l,s}-\hat{\mathbf{v}}_{i}^{s}\|\), we have that
\[\|\mathbf{v}_{i}^{l,s}-\hat{\mathbf{v}}_{i}^{s}\|\leq h\sum_{j=1}^{I}|a_{ij}|\|g(\mathbf{v}_{j}^{l-1,s})+g^{\prime}(\mathbf{v}_{j}^{ l-1,s})(\hat{\mathbf{v}}_{j}^{s}-\mathbf{v}_{j}^{l-1,s})-g(\hat{\mathbf{v}}_{j}^{s})+g^{ \prime}(\mathbf{v}_{j}^{l-1,s})(\mathbf{v}_{j}^{l,s}-\hat{\mathbf{v}}_{j}^{s})\|\] \[\leq m_{2}\kappa h(\hat{V}^{l-1,s})^{2}+m_{1}\kappa h\hat{V}^{l,s},\]
which implies that \(\hat{V}^{l,s}\leq(\frac{m_{2}\kappa h}{1-m_{1}\kappa h})^{2^{l}-1}(m_{0}\kappa h )^{2^{l}}\). Let \(\tilde{m}_{1}=\max_{s,i}\|g(\hat{\mathbf{v}}_{i}^{s})-g(\mathbf{v}_{i}^{s})\|/\|\hat{ \mathbf{v}}_{i}^{s}-\mathbf{v}_{i}^{s}\|\) and \(\tilde{V}^{s}=\max_{i}\|\hat{\mathbf{v}}_{i}^{s}-\mathbf{v}_{i}^{s}\|\), we have that
\[\|\hat{\mathbf{v}}_{i}^{s}-\mathbf{v}_{i}^{s}\|\leq\|\mathbf{x}^{L,s}-\mathbf{x}^{s}\|+\tilde{ m}_{1}\kappa h\tilde{V}^{s},\]
which implies that \(\tilde{V}^{s}\leq\|\mathbf{x}^{L,s}-\mathbf{x}^{s}\|/(1-\tilde{m}_{1}\kappa h)\). Therefore, we conclude that
\[V^{l,s}\leq\hat{V}^{l,s}+\tilde{V}^{s}\leq(\frac{m_{2}\kappa h}{1-m_{1}\kappa h })^{2^{l}-1}(m_{0}\kappa h)^{2^{l}}+\frac{1}{1-\tilde{m}_{1}\kappa h}\|\mathbf{x} ^{L,s}-\mathbf{x}^{s}\|.\]
In addition, similarly to Lemma B.5, we have that
\[\|\mathbf{x}^{L,s+1}-\mathbf{x}^{s+1}\|\leq \|\mathbf{x}^{L,s}-\mathbf{x}^{s}\|+m_{1}\mu h\cdot V^{L,s}\] \[\leq (1+\frac{m_{1}\mu h}{1-\tilde{m}_{1}\kappa h})\|\mathbf{x}^{L,s}-\bm {x}^{s}\|+m_{1}(\frac{m_{2}}{1-m_{1}\kappa h})^{2^{L}-1}m_{0}^{2^{L}}\mu\kappa ^{2^{L+1}-1}h^{2^{L+1}}.\]
and thus
\[\|\mathbf{x}^{L,S}-\mathbf{x}^{S}\|\] \[\leq (1+\frac{m_{1}\mu h}{1-\tilde{m}_{1}\kappa h})^{S}\|\mathbf{x}^{L,0}- \mathbf{x}^{0}\|+\frac{(1+\frac{m_{1}\mu h}{1-\tilde{m}_{1}\kappa h})^{S}-1}{\frac {m_{1}\mu h}{1-\tilde{m}_{1}\kappa h}}m_{1}(\frac{m_{2}}{1-m_{1}\kappa h})^{2 ^{L}-1}m_{0}^{2^{L}}\mu\kappa^{2^{L+1}-1}h^{2^{L+1}}\] \[\leq S\|g^{\prime}(\mathbf{x})\|(\frac{\|g^{\prime\prime}(\mathbf{x})\|}{1-\| g^{\prime}(\mathbf{x})\|\kappa h})^{2^{L}-1}\|g(\mathbf{x})\|^{2^{L}}\mu\kappa^{2^{L+1}-1}h^{2 ^{L+1}}+\mathcal{O}(h^{2^{L+1}+1}),\]
which completes the proof.
We next present the proof of Theorem 3.3.
Proof of Theorem 3.3.: We first prove that the statement holds for fixed-point iteration by induction. First, the case when \(k=0\) is obvious since \(f=\hat{f}_{0}=f_{0}\). Suppose now that \(\hat{f}_{k}=f_{k}\) for \(0\leq k\leq K\leq L-1\), then
\[\hat{f}_{h}^{K}=\sum_{k=0}^{K}h^{k}\hat{f}_{k}=\sum_{k=0}^{K}h^{k}f_{k}=f_{h}^{K}\]
By Lemma B.5, we have
\[\Phi_{h,\hat{f}_{h}^{K}}-\Phi_{h,\hat{f}_{h}^{K}}^{L}=\mathcal{O}(h^{L+2}).\] (B.10)
We rewrite the calculation procedure of IMDE as
\[\phi_{h,f}-\Phi_{h,\hat{f}_{h}^{K}}= h^{K+2}\hat{f}_{K+1}+\mathcal{O}(h^{K+3}),\] \[\phi_{h,f}-\Phi_{h,f_{h}^{K}}^{L}= h^{K+2}f_{K+1}+\mathcal{O}(h^{K+3}).\]
Subtracting the above two equations and substituting (B.10), we conclude that \(\hat{f}_{K+1}=f_{K+1}\), which completes the induction.
In addition, for Newton-Raphson iteration, by Lemma B.6, repeating the above induction implies \(\hat{f}_{k}=f_{k}\) for \(0\leq k\leq 2^{L+1}-2\). The proof is completed.
### Proof of Theorem 3.4 (Order of convergence for learning ODEs)
_Proof._ The proof is a direct consequence of Theorem 3.1, Theorem 3.3 and the following Lemma.
**Lemma B.7** (IMDE power series for a \(p^{\text{th}}\) order integrator has first error term of order \(h^{p}\).).: _Suppose that the integrator \(\Phi_{h}(\mathbf{x})\) with discrete step \(h\) is of order \(p\geq 1\), then, the IMDE obeys_
\[\frac{d}{dt}\mathbf{\tilde{y}}=f_{h}(\mathbf{\tilde{y}})=f(\mathbf{\tilde{y}})+h^{p}f_{p} (\mathbf{\tilde{y}})+\cdots.\]
_Proof._ The proof can be found in [55].
### Proof of Lemma 4.1 (Convergence of the ("inner") implicit iteration)
_Proof._ By Minkowski's inequality, we obtain that for neural network \(f_{\theta}\),
\[\mathcal{L}_{exact}^{\frac{1}{2}}\leq \mathcal{L}_{unrolled}^{\frac{1}{2}}+\mathcal{R}_{L},\] \[\mathcal{L}_{exact}^{\frac{1}{2}}\leq \mathcal{L}_{unrolled}^{\frac{1}{2}}+\left(\sum_{n=1}^{N}\sum_{m=1 }^{M}\|\big{(}\Phi_{h,f_{\theta}}^{L}(\mathbf{x}_{n})\big{)}^{ms}-\Big{(}\Phi_{h,f _{\theta}}^{L+1}(\mathbf{x}_{n})\Big{)}^{ms}\|_{2}^{2}/(m\Delta t)^{2}\right)^{ \frac{1}{2}}+\mathcal{R}_{L+1},\]
where
\[\mathcal{R}_{L}=\left(\sum_{n=1}^{N}\sum_{m=1}^{M}\|\big{(}\Phi_{h,f_{\theta}} ^{L}(\mathbf{x}_{n})\big{)}^{ms}-(\Phi_{h,f_{\theta}}(\mathbf{x}_{n}))^{ms}\|_{2}^{2} /(m\Delta t)^{2}\right)^{\frac{1}{2}}.\]
According to Lemma B.5 and Lemma B.6, we have \(\mathcal{R}_{L}=\mathcal{O}(h^{L^{*}+1})\), where \(L^{*}=L\) for the unrolled approximation using fixed-point iteration (2.5) and \(L^{*}=2^{L+1}-2\) for the unrolled approximation using Newton-Raphson iteration (2.6), which completes the proof.
データを用いてODE-netを、非直交型数値初期値問題ソルバーでテンプレート化した上で未知のダイナmiquesを学習することに重点を置きます。まず、ODE-netの逆修正エラー分析を行い、解釈の容易さのために、未展開非直交型スキームを用いて実行します。この結果、未展開非直交型スキームを用いてODE-netを学習することは、逆修正差分方程式の近似値を返します。さらに、このODE-netの学習時にパラメータの選択の理論的な基礎を構築し、現在の戦略は通常、ODE-netの非直交型数値積分をブラックボックスとして扱います。そこで、この論文では、誤差レベルを監視し、トレーニング中に未展開の非直交型解の回数調整を行い、未展開の近似の誤差を学習損失よりも小さく保ちます。これにより、トレーニング |
2309.12907 | Certifying the Topology of Quantum Networks: Theory and Experiment | Distributed quantum information in networks is paramount for global secure
quantum communication. Moreover, it finds applications as a resource for
relevant tasks, such as clock synchronization, magnetic field sensing, and
blind quantum computation. For quantum network analysis and benchmarking of
implementations, however, it is crucial to characterize the topology of
networks in a way that reveals the nodes between which entanglement can be
reliably distributed. Here, we demonstrate an efficient scheme for this
topology certification. Our scheme allows for distinguishing, in a scalable
manner, different networks consisting of bipartite and multipartite
entanglement sources. It can be applied to semi-device independent scenarios
also, where the measurement devices and network nodes are not well
characterized and trusted. We experimentally demonstrate our approach by
certifying the topology of different six-qubit networks generated with
polarized photons, employing active feed-forward and time multiplexing. Our
methods can be used for general simultaneous tests of multiple hypotheses with
few measurements, being useful for other certification scenarios in quantum
technologies. | Lisa T. Weinbrenner, Nidhin Prasannan, Kiara Hansenne, Sophia Denker, Jan Sperling, Benjamin Brecht, Christine Silberhorn, Otfried Gühne | 2023-09-22T14:50:38 | http://arxiv.org/abs/2309.12907v2 | # Certifying the topology of quantum networks: theory and experiment
###### Abstract
Distributed quantum information in networks is paramount for global secure quantum communication. Moreover, it finds applications as a resource for relevant tasks, such as clock synchronization, magnetic field sensing, and blind quantum computation. For quantum network analysis and benchmarking of implementations, however, it is crucial to characterize the topology of networks in a way that reveals the nodes between which entanglement can be reliably distributed. Here, we demonstrate an efficient scheme for this topology certification. Our scheme allows for distinguishing, in a scalable manner, different networks consisting of bipartite and multipartite entanglement sources, for different levels of trust in the measurement devices and network nodes. We experimentally demonstrate our approach by certifying the topology of different six-qubit networks generated with polarized photons, employing active feed-forward and time multiplexing. Our methods can be used for general simultaneous tests of multiple hypotheses with few measurements, being useful for other certification scenarios in quantum technologies.
## I Introduction
A key hallmark of modern information technology is the ability to establish multi-user communication channels. In the field of quantum information processing, such multilateral communication channels gain further significance as they can be enhanced by providing entanglement as a quantum resource between multiple nodes of a quantum network [1; 2; 3]. Indeed, such quantum networks can serve as useful structures for secure communication [4], clock synchronization [5], distributed field sensing [6; 7], and even blind quantum computation [8]. Consequently, many experimental groups pursue their implementation by demonstrating basic network structures [9; 10; 11]. In any case, real quantum networks are fragile and stochastic effects caused by, e.g., probabilistic entanglement generation or failure of nodes and links [12; 13; 14], detrimentally affect the usefulness and connectivity of a network. Similarly, eavesdropping events as well as corrupted nodes may affect the network structure, necessitating a probing and monitoring of the shared quantum resources in an easily accessible manner.
In all aforementioned cases, the characterization of the distributed entanglement across the network is indispensable. So far, however, the tools with which this goal can be achieved have been limited mainly to analyzing which quantum states and correlations can and cannot be established in a given network structure [15; 16; 17; 18; 19; 20; 21]. In order to understand the properties and limitations of a given quantum network, however, it is crucial to certify its topology. This refers to the probing of a set of targeted quantum network configurations (see Fig. 1) and goes beyond the characterization of single distributed quantum states [22; 23; 20]. First approaches to this problem have recently been given [24; 25]. But, although the proposed methods can distinguish between certain topologies, they assume the distribution of pure states or specific noise models and do not allow for certifying the quantum nature of the distributed states.
Conceptually speaking, the scenario of topology certification amounts to the joint test of several mutually exclusive hypotheses. This is significantly different from many
Figure 1: (a,b) For a quantum network of eight parties, two possible network configurations are shown: (a) two two-qubit sources and one four-qubit source and (b) one two-qubit source and two three-qubit sources are used to distribute eight qubits. In this paper, we design and implement an efficient method to discriminate these and all other configurations. (c) For the experimental implementation, we consider a six-qubit network, where four different configurations shall be distinguished. Here, the sources distribute two-, four- or six-qubit GHZ states, which are depicted by the fully connected graphs.
existing works in quantum information science, where typically only one hypothesis is compared with one null hypothesis [26; 27; 28; 29; 30]. We also add that the related problem of community detection in classical networks has been intensively discussed [31; 32; 33], but these approaches do not directly translate to a quantum mechanical formulation.
In this work, we explore in theory and experiment the resource-efficient hypothesis testing of distinct quantum network configurations. For this purpose, we devise and implement a protocol that allows us to statistically certify (or falsify) which hypothesis is consistent with the multipartite quantum state of a network. Importantly, our different tests are based on a common set of local measurements and are easily implementable. In the experiment, we generate six-qubit quantum networks with different multipartite entanglement structures from a flexible, engineered source based on time-multiplexing and feed-forward; different network topologies correspond to different feed-forward sequences in the same physical source. Our measurements then allow us to determine the generated entanglement configuration with high confidence. Finally, we present methods for the topology certification of networks which can also be applied if some nodes are not trusted or some measurement devices are not certified.
## II Statement of the problem
The problem we consider is best explained by an example; see Fig. 1. Consider eight parties connected through a quantum network. Then, entanglement is distributed via the network across eight qubits. As depicted in Fig. 1, this may be done in different ways: In network (a), the entanglement is distributed by two Bell pair sources together with one four-qubit source, whereas network (b) consists of one Bell pair and two three-qubit sources. Clearly, other topologies are also possible yet omitted here for the sake of clarity and simplicity. Here and in the following, we always assume that the sources distribute (potentially noisy) Greenberger-Horne-Zeilinger (GHZ) states,
\[|GHZ_{n}\rangle=\frac{1}{\sqrt{2}}(|0\rangle^{\otimes n}+|1\rangle^{\otimes n }), \tag{1}\]
consisting of \(n\) qubits. For the two-qubit case, GHZ states are simply the maximally entangled Bell states. For more particles, they are also, in some sense, maximally entangled [34], and form valuable resources for multiparticle cryptography [35; 4] and quantum metrology [36; 37]. The key questions are now: How can the eight parties identify, in a simple manner, which of the different configurations the network qubits are currently sharing? How can they find out which types of sources have been used (e.g., how many qubits were entangled) and between which of their qubits entanglement has successfully been generated?
These questions, with appropriate modifications, arise in several situations. For instance, it may be that the parties are connected via an intricate network of qubits, where the network provider promises to generate maximally entangled states in different configurations. In this case, the parties may be interested in verifying the provider's claims with minimum effort. Alternatively, consider a network with some dishonest participants. Then, some other participants may want to certify that they share an entangled state, while ensuring that this state is not shared with any potentially malicious party.
In the general case, the problem can be considered for \(N\) nodes, corresponding to \(N\) qubits from different sources. The aim is then to certify the topology of the network from which the qubits originate. Here, one may additionally assume that only GHZ states of maximally \(M<N\) qubits can be generated by the sources, effectively reducing the set of possible configurations. In the following, we present an efficient scheme to measure all fidelities \(F_{I}=\operatorname{tr}(|GHZ_{I}\rangle\,\langle GHZ_{I}|\,\varrho_{I})\) for all possible configurations in a unified manner. The index \(I\) denotes here the set of \(|I|=n\) qubits on which the fidelity depends, and the state \(\varrho_{I}\) is the reduced state on the qubits labeled by \(I\); in Fig. 1(a), the set \(I\) could be \(\{1,2,3,4\}\), \(\{5,6\}\) or \(\{7,8\}\). This now allows us to derive statistically rigorous tests for the different hypotheses about the topology directly from the measurement data. Our approach can be formulated for well-characterized measurements on the qubits as well as for the device-independent scenario, where some parties are not trusted and the measurements are potentially misaligned. We stress that our approach is fundamentally different from the task of state discrimination for a set of states [38] as we are not assuming that the quantum state comes from a fixed collection of states. In addition, we do not make any assumptions about the kind of noise.
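To illustrate the quantities \(F_{I}\), the following minimal numpy sketch (our own illustration, not part of the protocol; the helper names are ours) computes reduced-state GHZ fidelities for a four-qubit toy example by tracing out the complementary qubits.

```python
import numpy as np

def ghz(n):
    """|GHZ_n> = (|0...0> + |1...1>)/sqrt(2)."""
    v = np.zeros(2**n, dtype=complex)
    v[0] = v[-1] = 1 / np.sqrt(2)
    return v

def reduced_fidelity(rho, keep, n_total):
    """F_I = tr(|GHZ_I><GHZ_I| rho_I) for the sorted, 0-based qubit labels in `keep`."""
    rho = rho.reshape([2] * (2 * n_total))
    # trace out the complement of `keep`, largest label first
    for q in sorted(set(range(n_total)) - set(keep), reverse=True):
        rho = np.trace(rho, axis1=q, axis2=q + rho.ndim // 2)
    rho = rho.reshape(2 ** len(keep), 2 ** len(keep))
    g = ghz(len(keep))
    return float(np.real(g.conj() @ rho @ g))

# Example: four qubits prepared as |GHZ_2> on {0,1} times |GHZ_2> on {2,3}
psi = np.kron(ghz(2), ghz(2))
rho = np.outer(psi, psi.conj())
print(reduced_fidelity(rho, [0, 1], 4))        # 1.0: a GHZ pair is present on {0,1}
print(reduced_fidelity(rho, [0, 1, 2, 3], 4))  # 0.5: no genuine four-qubit GHZ entanglement
```

In this example the biseparable state reaches exactly \(1/2\) for the four-qubit GHZ fidelity, illustrating why a value strictly above \(1/2\) is needed to certify genuine multipartite entanglement.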
## III Simultaneous fidelity estimation and hypothesis testing
To start, we recall how the fidelity of an \(N\)-qubit GHZ state can be determined [39]. This state can be decomposed into a diagonal term \(\mathcal{D}_{N}\) and an anti-diagonal term \(\mathcal{A}_{N}\), i.e.,
\[|GHZ_{N}\rangle\!\langle GHZ_{N}|= \frac{1}{2}\big{(}\,|0\rangle\!\langle 0|^{\otimes N}+|1 \rangle\!\langle 1|^{\otimes N}+|0\rangle\!\langle 1|^{\otimes N}\] \[+|1\rangle\!\langle 0|^{\otimes N}\,\big{)}=\frac{1}{2}( \mathcal{D}_{N}+\mathcal{A}_{N}). \tag{2}\]
The diagonal term can be determined by performing the Pauli measurement \(\sigma_{z}^{\otimes N}\) on all qubits. Concerning the anti-diagonal term, it has been shown that it can be written as \(\mathcal{A}_{N}=\frac{1}{N}\sum_{k=0}^{N-1}(-1)^{k}\mathcal{M}_{k}^{\otimes N}\), where the observables \(\mathcal{M}_{k}\) are given by measurements in the \(x\)-\(y\) plane of the Bloch sphere,
\[\mathcal{M}_{k}=\Big{[}\cos\left(\frac{k\pi}{N}\right)\sigma_{x}+\sin\left(\frac {k\pi}{N}\right)\sigma_{y}\Big{]}. \tag{3}\]
This means that the fidelity of an \(N\)-qubit GHZ state can be determined by in total \(N+1\) local measurements. Note that the measurements \(\mathcal{M}_{k}\) also depend on the number \(N\) of qubits.
The key observation in our approach is that the decomposition of \(\mathcal{A}_{N}\) is not unique. Indeed, other sets of measurements in the \(x\)-\(y\) plane of the Bloch sphere also allow us to determine \(\mathcal{A}_{N}\), as long as the measurements form a basis in the space of operators spanned by products of \(\sigma_{x}\) and \(\sigma_{y}\), with an even number of \(\sigma_{y}\)[39]. This paves the way for the simultaneous estimation of several GHZ fidelities: From the measurement data of \(\mathcal{M}_{k}^{\otimes N}\), \(\mathcal{A}_{N}\) can be determined using the formula above. Furthermore, for any subset of \(m<N\) qubits, the expectation values \(\langle\mathcal{M}_{k}^{\otimes m}\rangle\) can be obtained from the same set of data, which allows for the computation of the fidelity of the \(m\)-qubit GHZ states with respect to the reduced state on these \(m\) particles. Explicit formulas for the \(m\)-qubit fidelities are provided in Appendix A.
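The following minimal numpy sketch (ours, for illustration only) verifies the decomposition of Eq. (2) together with the identity \(\mathcal{A}_{N}=\frac{1}{N}\sum_{k}(-1)^{k}\mathcal{M}_{k}^{\otimes N}\) for a small number of qubits; the very same measurement settings would also give access to the reduced \(m\)-qubit fidelities discussed above.

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def kron_all(ops):
    return reduce(np.kron, ops)

def M(k, N):
    # measurement direction cos(k*pi/N) sigma_x + sin(k*pi/N) sigma_y, cf. Eq. (3)
    return np.cos(k * np.pi / N) * sx + np.sin(k * np.pi / N) * sy

N = 3
ghz = np.zeros(2**N, dtype=complex)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)

D_N = np.zeros((2**N, 2**N), dtype=complex)
D_N[0, 0] = D_N[-1, -1] = 1                                      # |0><0|^N + |1><1|^N
A_N = sum((-1)**k * kron_all([M(k, N)] * N) for k in range(N)) / N
assert np.allclose((D_N + A_N) / 2, np.outer(ghz, ghz.conj()))   # Eq. (2)
print("GHZ projector decomposition verified for N =", N)
```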
Combining the above observations leads to the following scheme for testing the topology of a network: For a given \(N\), the parties perform the \(N+1\) local measurements \(\sigma_{z}^{\otimes N}\) and \(\mathcal{M}_{k}^{\otimes N}\). They then use this data to determine the set of fidelities \(\{F_{I}\}\) for each considered network configuration. This allows them to identify the actual configuration and, at the same time, to characterize the quality of the sources. If the parties know that the sources are at most \(M\)-partite, then it suffices to perform the \(M\) measurements \(\mathcal{M}_{k}^{\otimes N}\) with angles \(k\pi/M\) in the \(x\)-\(y\) plane and the measurement \(\sigma_{z}^{\otimes N}\).
It remains to discuss how to formulate a proper hypothesis test in the space of potential fidelities that can be used to make a decision based on the observed data. Here, the task is to formulate a set of exclusive hypotheses which correspond to the different topologies, are physically motivated, and, at the same time, allow for a direct estimation of a \(p\)-value. The \(p\)-value of a hypothesis describes the probability of observing the experimental data given that the hypothesis \(H\) holds true, i.e., \(p=\Pr[\text{data}\mid H]\). From a physical point of view, it is important to certify that a working source delivers GHZ states with a fidelity \(F>1/2\), as this guarantees the presence of genuine multiparticle entanglement [40; 41]. Moreover, there are intricate dependencies between the fidelities of different GHZ states. If a state on \(n\) qubits has a high GHZ fidelity, then the reduced state on a subset of \(m<n\) qubits has also a non-vanishing fidelity with a GHZ state (with potentially adjusted phases); indeed, we have \(F_{m}>F_{n}/2\) because of the common entries on the diagonal.
The above considerations motivate the following strategy to formulate exclusive hypotheses in the space of all fidelities. Any topology \(T\) is characterized by a set of fidelities \(\{F_{I}^{T}\}\) of the included GHZ states. The hypothesis corresponding to \(T\) is then given by a set of conditions of the type
\[F_{I}^{T}-\max_{G\supset I}\Big{\{}F_{G}^{T}\Big{\}}>\frac{1}{2}, \tag{4}\]
where \(G\supset I\) denotes the relevant supersets of the qubits in the \(n\)-qubit set \(I\). For instance, in order to distinguish the distinct topologies in Fig. 1, the hypothesis for configuration (a) should contain the conditions \(F_{\{1,2,3,4\}}>1/2\), \(F_{\{5,6\}}>1/2\) and \(F_{\{7,8\}}-F_{\{6,7,8\}}>1/2\), and the hypothesis for (b) the conditions \(F_{\{1,2,3\}}-F_{\{1,2,3,4\}}>1/2\), \(F_{\{4,5\}}>1/2\) and \(F_{\{6,7,8\}}>1/2\), rendering these hypotheses mutually exclusive. Taking the differences of the fidelities, e.g., \(F_{\{7,8\}}-F_{\{6,7,8\}}>1/2\), is necessary to distinguish between tripartite entanglement on \(\{6,7,8\}\), which leads to high fidelities \(F_{\{7,8\}}\)_and_\(F_{\{6,7,8\}}\), and bipartite entanglement on \(\{7,8\}\), which only results in a high fidelity \(F_{\{7,8\}}\). Finally, one always has to consider the null hypothesis, where the fidelities are small, making it impossible to certify the network structure. Given such hypotheses, the \(p\)-values can directly be calculated from the data, using large deviation bounds, like the Hoeffding inequality; see also Appendix B and C for details.
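As a toy illustration of the conditions of Eq. (4), the hypothesis for configuration (a) of Fig. 1 could be evaluated as follows; the fidelity values are invented for demonstration purposes and are not data.

```python
# Hypothetical fidelity estimates for the eight-qubit example of Fig. 1(a)
fidelities = {
    frozenset({1, 2, 3, 4}): 0.81,
    frozenset({5, 6}): 0.93,
    frozenset({7, 8}): 0.90,
    frozenset({6, 7, 8}): 0.28,   # relevant superset appearing in configuration (b)
}

def condition(I, supersets):
    """F_I minus the largest fidelity of the listed supersets must exceed 1/2, cf. Eq. (4)."""
    penalty = max((fidelities[frozenset(G)] for G in supersets), default=0.0)
    return fidelities[frozenset(I)] - penalty > 0.5

hypothesis_a = (
    condition({1, 2, 3, 4}, [])           # F_{1234} > 1/2
    and condition({5, 6}, [])             # F_{56} > 1/2
    and condition({7, 8}, [{6, 7, 8}])    # F_{78} - F_{678} > 1/2
)
print("hypothesis for configuration (a) satisfied:", hypothesis_a)
```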
## IV Experimental generation of network states
The experimental implementation of our scheme demands a flexible state-generation circuit for the different entanglement configurations; see also Fig. 2 (a). We generate polarization-entangled Bell states with a dispersion-engineered parametric down-conversion source in a periodically poled potassium titanyl phosphate waveguide [42]. Larger entangled states are created with the help of a polarization qubit memory, based on an all-optical storage loop, which doubles as a time-multiplexing device to increase generation rates and as a beam splitter to interfere successive Bell pairs [43]. The qubit memory's operation mode--swap or interfere--is triggered by fast feed-forward based on the detection of one qubit from each Bell pair. This allows us to generate four- and six-photon GHZ states at increased rates. Here, we make use of the specific programming capabilities of our system to generate four different six-photon states by changing the feed-forward sequence of the memory, without any physical changes to the experimental setup; see Fig. 2 (b).
The first line schematically depicts the feed-forward sequence for the generation of a \(|GHZ_{4}\rangle\otimes|GHZ_{2}\rangle\) network topology. Upon detection of a photon from the first Bell pair, its partner is stored in the memory by means of a swap operation. It is then interfered with a photon from the successive Bell pair to generate a \(\left|GHZ_{4}\right\rangle\) state by means of the interfere operation. The stored photon is then exchanged for a photon from the third Bell pair via another swap operation. Note that we did not depict a final swap operation, which serves to read out the final photon from the memory. The interfere operation is essentially a fusion of two polarization qubits from two Bell pairs by interfering them on a polarizing beam splitter and post-selecting on a specific measurement pattern to create a graph state [44; 45].
Two consecutive interfere operations generate a \(\left|GHZ_{6}\right\rangle\) state, while a swap operation followed by an interfere operation generates the state \(\left|GHZ_{2}\right\rangle\otimes\left|GHZ_{4}\right\rangle\). Finally, two swap operations yield the state \(\left|GHZ_{2}\right\rangle\otimes\left|GHZ_{2}\right\rangle\otimes\left|GHZ_{2}\right\rangle\), where photons from each Bell pair only share entanglement with each other. Note that in our setup fixing the phase of the six-photon GHZ state as \(\left|GHZ_{6}\right\rangle=(\left|0\right\rangle^{\otimes 6}+\left|1\right\rangle^{\otimes 6})/\sqrt{2}\) implicitly fixes the phase of the four-photon state to \(\left|GHZ_{4}^{-}\right\rangle=(\left|0\right\rangle^{\otimes 4}-\left|1\right\rangle^{\otimes 4})/\sqrt{2}\), so we formulate the hypotheses for this four-photon source.
Our source generates Bell states with entanglement visibility exceeding 93%. In total, we multiplex up to seven pump pulses to create the three Bell states required for this work. This yields a final six-photon event rate of approximately 0.3 Hz; more details are given in Appendix D.
Figure 2: Operation principle of the experiment. (a) A dispersion-engineered, integrated Bell-state source probabilistically generates polarization-entangled Bell states. A successful polarization-resolved detection of one pair-photon generates a feed-forward signal to the quantum memory that stores its sibling. Retrieved light from the memory is detected in another polarization-resolved detection stage. The zoom-out shows the two operation modes of the memory: swap stores a new photon while releasing a stored photon without them interacting; interfere realizes a balanced interference between a new photon and a stored photon to increase the size of the entangled state. HWP: half-wave plate; QWP: quarter-wave plate; PBS: polarizing beam splitter. (b) Depending on the desired final network topology, different feed-forward sequences are implemented, which exchange swap and interfere operations of the quantum memory. Lines 2-4 show a smaller version of the quantum memory pictogram. (c) Measured average coherence terms \(\langle\mathcal{M}_{k}^{\otimes 6}\rangle\) for the four different network topologies (orange markers), in the same order as in (b). The terms show an oscillatory dependence on the phase, which correlates with the number of entangled photons in each state. The blue lines are theory and serve as a guide to the eye. Error bars are smaller than the symbol size.
Note that higher rates can be achieved by multiplexing more pump pulses [43]. This, however, comes at the cost of a decreased state fidelity. As described above, for any network topology we perform the Pauli measurement \(\sigma_{z}^{\otimes 6}\) as well as the measurements of the \(\mathcal{M}_{k}^{\otimes 6}\). For the latter, we set the corresponding wave plate angles in front of the detection; see Fig. 2. We record around a thousand successful events for every measurement setting to ensure good statistics. Our data yields H/V populations of \(\mathcal{D}_{6}=(74\pm 1.7)\%\) and a total coherence value of \(\mathcal{A}_{6}=(60\pm 0.9)\%\), resulting in a total fidelity of \(F_{6}=0.67\pm 0.01\) for the \(|GHZ_{6}\rangle\) state [46]. A plot of all coherence terms for the different topologies is shown in Fig. 2(c).
## V Statistical model selection
All that remains to be done is the formulation of exclusive hypotheses and the calculation of the corresponding \(p\)-values for our specific experiment. Naturally, we look for four distinct hypotheses according to the four possible configurations, and a fifth null hypothesis, which accounts for the case that none of the desired states was prepared. To exploit the fact that a fidelity higher than \(1/2\) guarantees the presence of multipartite entanglement, it is reasonable to start with the hypothesis \(H_{1}:F_{\{1,2,3,4,5,6\}}>1/2\), which describes the successful detection of a 6-particle GHZ state. The next hypothesis \(H_{2}\), which certifies the generation of \(|GHZ_{4}\rangle\otimes|GHZ_{2}\rangle\), consists of two parts for the respective GHZ states: \(F_{\{1,2,3,4\}}-F_{\{1,2,3,4,5,6\}}>1/2\) and \(F_{\{5,6\}}-\max\{F_{\{3,4,5,6\}},F_{\{1,2,3,4,5,6\}}\}>1/2\). The hypotheses \(H_{3}\) and \(H_{4}\), associated with the states \(|GHZ_{2}\rangle\otimes|GHZ_{4}\rangle\) and \(|GHZ_{2}\rangle\otimes|GHZ_{2}\rangle\otimes|GHZ_{2}\rangle\), respectively, are constructed in the same fashion and given explicitly in Appendix C. In Fig. 3, the different measured fidelities for the four different states are depicted together with the differences of the fidelities considered in the hypotheses.
The \(p\)-values of the hypotheses are given by the probability to observe the experimental data given that the respective hypothesis \(H_{i}\) holds true, i.e., \(p=\Pr[\text{data}\mid H_{i}]\). The data in our case is given by the differences of the fidelities we compute, e.g., \(d_{\{5,6\}}=F_{\{5,6\}}-\max\{F_{\{3,4,5,6\}},F_{\{1,2,3,4,5,6\}}\}\). We resort to calculating an upper bound on \(p\), using the Hoeffding inequality [47] as in Refs. [26; 48]. The Hoeffding inequality is a large deviation bound for a sum of independent and bounded random variables. It states that the probability for this sum to differ from the mean value by a certain amount decreases exponentially in the number of variables. The measured difference of fidelities can be seen as a sum of the measurement results from the runs of the experiment, weighted with certain coefficients. Each measurement result in turn is an independent random event which can be modeled as a random variable so that we can apply Hoeffding's inequality to the sum of these variables to bound the \(p\)-value. An exact analytical presentation of this bound and its proof can be found in Appendix C.
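The resulting bound takes a simple closed form, derived in Appendix C (Lemma 7); the sketch below (ours) evaluates it for placeholder numbers, interpreting \(M\) as the minimal number of experimental runs per measurement setting.

```python
import numpy as np

def p_value_bound(d, runs_per_setting, N=6):
    """Upper bound on the p-value of a hypothesis whose smallest observed
    fidelity difference is d (cf. Lemma 7 in Appendix C)."""
    d = min(d, 0.5)   # observed differences above 1/2 only give the trivial bound 1
    return float(np.exp(-2 * (0.5 - d)**2 * runs_per_setting / (1 + 8 / N)))

# Placeholder numbers: ~1000 events per setting, an observed difference of 0.15
print(p_value_bound(d=0.15, runs_per_setting=1000))   # of order 1e-46
```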
For each of the four generated states, we computed upper bounds on the \(p\)-values corresponding to the five hypotheses \(H_{1},\ldots,H_{4}\) and \(H_{\emptyset}\). The upper bounds concerning the null hypothesis are smaller than \(9.5\times 10^{-8}\) for all states, i.e., the probability that the fidelities are too small to certify one of the network states is at most \(9.5\times 10^{-8}\). For the four hypotheses \(H_{1}\), \(H_{2}\), \(H_{3}\) and \(H_{4}\), there is for each state exactly one hypothesis whose \(p\)-value is trivially upper bounded by one, while the other three \(p\)-values are upper bounded by at most \(1.2\times 10^{-32}\). The exact values of the bounds are given in Table 1 of Appendix C.
Figure 3: Graphical depiction of the fidelities (blue curves) and the differences of the fidelities considered in the hypotheses (red curves) for the four different measured datasets. The different directions \(D_{I}\) denote either the fidelity \(F_{I}\) (blue) or the difference \(F_{I}-\max_{G\supset I}F_{G}\) (red). The differences allow for a clear separation between the states in the sense that only the desired terms are larger than \(1/2\) (outside the dark area). The hypothesis test leads to the following results: dataset a) belongs to the state \(|GHZ_{6}\rangle\), b) to \(|GHZ_{4}\rangle\otimes|GHZ_{2}\rangle\), c) to \(|GHZ_{2}\rangle\otimes|GHZ_{4}\rangle\) and d) to \(|GHZ_{2}\rangle\otimes|GHZ_{2}\rangle\otimes|GHZ_{2}\rangle\).
## VI Device-independent approach
So far, we have assumed that the performed measurements are well calibrated and that the nodes are trusted. This, however, may not necessarily be the case. The key to discussing the device-independent scenario is to consider the Bell operator from the Mermin inequality [49], which reads for three particles as
\[\mathcal{B}_{3}=X\otimes X\otimes X-X\otimes Y\otimes Y-Y\otimes X\otimes Y-Y \otimes Y\otimes X. \tag{5}\]
Here, \(X\) and \(Y\) are general dichotomic observables, but not necessarily Pauli matrices. For local realistic models, \(\langle\mathcal{B}_{3}\rangle\leq 2\) has to hold while the GHZ state reaches \(\langle\mathcal{B}_{3}\rangle=4\) if the measurements \(X=\sigma_{x}\) and \(Y=\sigma_{y}\) are performed. The Mermin inequality can be generalized to more particles; it consists of a combination of \(X\) and \(Y\) measurements with an even number of \(Y\)s and alternating signs. Formally, this can efficiently be written as \(\mathcal{B}_{N}=[(X+iY)^{\otimes N}+(X-iY)^{\otimes N}]/2\), and, with the identification \(X=\sigma_{x},Y=\sigma_{y}\), one finds \(\mathcal{B}_{N}=2^{N-1}\mathcal{A}_{N}\).
The essential point is that several results connecting the expectation value \(\langle\mathcal{B}_{N}\rangle\) with the GHZ-state fidelity are known. First, if nothing is assumed about the measurements, and if the value \(\langle\mathcal{B}_{N}\rangle\) is close to the algebraic maximum \(2^{N-1}\), a high GHZ fidelity up to some local rotations is certified [50]. Second, as we show in Appendix E, if one assumes that \(X\) and \(Y\) are (potentially misaligned) measurements on qubits, one can directly formulate a lower bound on the GHZ fidelity. Third, it has been shown that, even in the presence of collaborating dishonest nodes, the global state must be close to a GHZ state if the honest nodes choose \(X=\sigma_{x}\) and \(Y=\sigma_{y}\) as measurements [20]. Finally, if all parties choose \(X=\sigma_{x}\) and \(Y=\sigma_{y}\), then \(F_{N}\geq\langle\mathcal{B}_{N}\rangle/2^{N-1}\) since for quantum states \(\langle\mathcal{D}_{N}\rangle\geq\langle\mathcal{A}_{N}\rangle\).
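A quick numerical check of these relations (a numpy sketch of ours) confirms that the Mermin operator of Eq. (5) coincides with the compact form given above and reaches the value \(4=2^{N-1}\) on the three-qubit GHZ state.

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
kron = lambda ops: reduce(np.kron, ops)

N = 3
B3 = (kron([sx, sx, sx]) - kron([sx, sy, sy])
      - kron([sy, sx, sy]) - kron([sy, sy, sx]))             # Eq. (5)
plus, minus = sx + 1j * sy, sx - 1j * sy
B3_compact = (kron([plus] * N) + kron([minus] * N)) / 2      # compact form
assert np.allclose(B3, B3_compact)

ghz = np.zeros(2**N, dtype=complex)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
value = np.real(ghz.conj() @ B3 @ ghz)
print(value, value / 2**(N - 1))   # 4.0 and 1.0, the latter being <A_3> on the GHZ state
```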
The above comments suggest the following scheme for the device-independent scenario. All parties measure all combinations of \(X\) and \(Y\), leading to \(2^{N}\) measurements in total. Then, they can evaluate the Bell operator of the Mermin inequality on each subset and characterize the fidelity for all sources with three or more qubits, depending on the assumptions they can justify. For the case that the maximal size of the GHZ states is known to be \(M<N\), not all \(2^{N}\) measurements need to be performed. Rather, a smartly chosen subset of these measurements suffices, as the scheme of overlapping tomography allows one to evaluate all combinations of \(X\) and \(Y\) on \(M\)-particle subsets with an effort increasing only logarithmically in \(N\) [51]. Finally, one may also consider advanced measurement schemes based on continuous stabilizers [21], or the Svetlichny inequality [22], which may be more efficient in the presence of dishonest parties.
## VII Discussion
We devised and directly realized a resource-efficient scheme for probing the entanglement topology of quantum networks. Our method is based on a small set of local measurements, is easily implementable and scalable, and can be generalized to the device-independent scenario. Employing our flexibly programmable experimental setup based on feed-forward and time multiplexing, we were able to generate six-photon, polarization-encoded qubit states with different multipartite entanglement structures and discriminate them with high confidence.
Our work opens an avenue to several new research directions in the field of quantum information science. First, our methods can be extended to characterize other network scenarios. For instance, other quantum states besides GHZ states, such as cluster and graph states [52; 53], may be distributed and certified. Also, one may include the effects of classical communication, imperfect quantum memories, and probabilistic entanglement generation to certify the topology of the classical and quantum layer of a network. Second, our approach is an example of a multiple hypothesis test, a concept which has natural applications to other problems. An example is the joint estimation of several incompatible measurements [54] from simpler ones. Here, shadow-like techniques based on randomized measurements may also be a fruitful tool [55].
We thank Jef Pauwels for discussions. This work has been supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, project numbers 447948357 and 440958198, and the Collaborative Research Center TRR 142 (Project No. 231447078, project C10)), the Sino-German Center for Research Promotion (Project M-0294), and the German Ministry of Education and Research (Project QuKuK, BMBF Grant No. 16KIS1618K). The work was further funded through the Ministerium fur Kultur und Wissenschaft des Landes Nordrhein-Westfalen through the project PhoQC: Photonisches Quantencomputing. LTW, KH, and SD acknowledge support by the House of Young Talents of the University of Siegen.
## Appendix A: Coefficients for the fidelity calculation
Given an \(N\)-qubit GHZ state, it is known, as mentioned in the main text, that its fidelity \(F_{N}=\mathrm{Tr}\{\varrho\left|GHZ_{N}\right\rangle\left\langle GHZ_{N}\right|\}\) with some state \(\varrho\) can be obtained by measuring only \(N+1\) different measurement settings [39]. These consist of the Pauli-\(Z\) measurement \(\sigma_{z}^{\otimes N}\) and the \(N\) measurements
\[\mathcal{M}_{k}^{\otimes N}=\left[\cos\left(\frac{k\pi}{N}\right)\sigma_{x}+\sin \left(\frac{k\pi}{N}\right)\sigma_{y}\right]^{\otimes N},\quad k=0,\dots,N-1. \tag{6}\]
Since these are all local measurements, all measurement results of \(\sigma_{z}^{\otimes n}\) and \(\mathcal{M}_{k}^{\otimes n}\) for \(n\leq N\) can be deduced from the same measurement data. We now show that, by using the results of these measurements, we can also compute the fidelity of any \(n\)-qubit reduced state of \(\varrho\) (denoted \(\varrho^{(n)}\)) with an \(n\)-party GHZ-state for \(n<N\). For simplicity, we just write \(F_{n}\) for this fidelity and omit the notation on which \(n\) parties the fidelity is calculated.
First, we note that the \(GHZ_{n}\) state can be written as
\[\left|GHZ_{n}\right\rangle\left\langle GHZ_{n}\right|=\frac{1}{2}\left(\left| 0\right\rangle\left\langle 0\right|^{\otimes n}+\left|1\right\rangle\left\langle 1 \right|^{\otimes n}\right)+\frac{1}{2}\left(\left|0\right\rangle\left\langle 1 \right|^{\otimes n}+\left|1\right\rangle\left\langle 0\right|^{\otimes n} \right)=:\frac{1}{2}(\mathcal{D}_{n}+\mathcal{A}_{n}) \tag{7}\]
such that the fidelity \(F_{n}\) reads as
\[F_{n}=\mathrm{Tr}(\varrho\left|GHZ_{n}\right\rangle\left\langle GHZ_{n}\right| \otimes\mathds{1}_{N-n})=\frac{1}{2}\left[\mathrm{Tr}\Big{(}\varrho^{(n)} \mathcal{D}_{n}\Big{)}+\mathrm{Tr}\Big{(}\varrho^{(n)}\mathcal{A}_{n}\Big{)} \right]. \tag{8}\]
The expectation values of \(\left|0\right\rangle\left\langle 0\right|^{\otimes n}\) and \(\left|1\right\rangle\left\langle 1\right|^{\otimes n}\), and thus of \(\mathcal{D}_{n}\), can be directly recovered from the measurement results of \(\sigma_{z}^{\otimes N}\) for all \(n\leq N\). The calculation of the expectation value of \(\mathcal{A}_{n}\) is not as trivial, and we show that there exist real coefficients \(a_{k}\) (\(k=0,\dots,N-1\)) such that
\[\sum_{k=0}^{N-1}a_{k}\mathcal{M}_{k}^{\otimes n}=\mathcal{A}_{n}. \tag{9}\]
This means that the expectation value of \(\mathcal{A}_{n}\) can be obtained from the measurement results of \(\mathcal{M}_{k}^{\otimes n}\). Note that the fidelity of \(\varrho\) with the \(\left|GHZ_{n}^{-}\right\rangle\) state, which is given by
\[\left|GHZ_{n}^{-}\right\rangle=\frac{1}{\sqrt{2}}(\left|0\right\rangle^{\otimes n }-\left|1\right\rangle^{\otimes n}), \tag{10}\]
can also be calculated directly from the same data by just switching the sign from \(\mathcal{A}_{n}\) to \(-\mathcal{A}_{n}\).
**Theorem 1**.: _The \(N\)-dimensional vector of coefficients \(\vec{a}\) in Eq. (9) is given by_
\[\vec{a}=\mathrm{diag}(e^{-i\pi\frac{kn}{N}})\mathcal{F}^{-1}(\vec{c}), \tag{11}\]
_where \(\mathrm{diag}(e^{-i\pi\frac{kn}{N}})\) denotes the diagonal matrix with entries \(e^{-i\pi\frac{kn}{N}}\) (\(k=0,\dots,N-1\)) and \(\mathcal{F}^{-1}\) denotes the inverse discrete Fourier transform (DFT) of \(\vec{c}=(1,0,\dots,0,1,c_{n+1},\dots,c_{N-1})\in\mathbb{C}^{N}\)._
Proof.: Let us first rewrite \(\mathcal{M}_{k}\) as
\[\mathcal{M}_{k} =\cos\left(\frac{k\pi}{N}\right)\left[\left|0\right\rangle \left\langle 1\right|+\left|1\right\rangle\left\langle 0\right|\right]+\sin \left(\frac{k\pi}{N}\right)\left[-i\left|0\right\rangle\left\langle 1\right|+i\left|1 \right\rangle\left\langle 0\right|\right] \tag{12}\] \[=e^{-i\frac{k\pi}{N}}\left|0\right\rangle\left\langle 1\right|+e^{i \frac{k\pi}{N}}\left|1\right\rangle\left\langle 0\right|,\quad k=0,\dots,N-1, \tag{13}\]
and thus
\[\sum_{k=0}^{N-1}a_{k}\mathcal{M}_{k}^{\otimes n} =\sum_{j=0}^{n}\left[\sum_{k=0}^{N-1}e^{-i\pi\frac{k}{N}j}e^{i\pi \frac{k}{N}(n-j)}a_{k}\right]\sum_{\pi}\left[\bigotimes_{l=0}^{j}\left|0 \right\rangle\left\langle 1\right|\bigotimes_{l=j+1}^{n}\left|1\right\rangle \left\langle 0\right|\right] \tag{14}\] \[=:\sum_{j=0}^{n}c_{j}\sum_{\pi}\left[\bigotimes_{l=0}^{j}\left|0 \right\rangle\left\langle 1\right|\bigotimes_{l=j+1}^{n}\left|1\right\rangle \left\langle 0\right|\right], \tag{15}\]
where \(\sum_{\pi}\dots\) denotes the sum over all permutations leading to different terms. It now follows directly that we must have \(c_{0}=c_{n}=1\) and \(c_{1}=\dots=c_{n-1}=0\) with
\[c_{j}=\sum_{k=0}^{N-1}e^{-i\pi\frac{k}{N}j}e^{i\pi\frac{k}{N}(n-j)}a_{k}=\sum_{k=0}^{N-1}e^{-2\pi i\frac{kj}{N}}e^{i\pi\frac{kn}{N}}a_{k} \tag{16}\]
for \(j=0,\ldots,n\). These are the first \(n+1\) entries of the DFT of \(\mathrm{diag}(e^{i\pi\frac{kn}{N}})\vec{a}\). By defining \(c_{n+1},\ldots,c_{N-1}\) through Eq. (16), we can consider the extended vector \(\vec{c}=(1,0,\ldots,0,1,c_{n+1},\ldots,c_{N-1})\) as the entire DFT \(\vec{c}=\mathcal{F}(\mathrm{diag}(e^{i\pi\frac{kn}{N}})\vec{a})\). Using the inverse of the DFT, we arrive at
\[\vec{a}=\mathrm{diag}(e^{-i\pi\frac{kn}{N}})\mathcal{F}^{-1}(\vec{c}), \tag{17}\]
or, alternatively,
\[a_{k}=\frac{1}{N}e^{-i\pi\frac{kn}{N}}\sum_{j=0}^{N-1}e^{2\pi i\frac{kj}{N}}c_{j}, \tag{18}\]
with \(c_{n+1},\ldots,c_{N-1}\in\mathbb{C}\).
Note that the coefficients \(a_{k}\) are not necessarily real for an arbitrary choice of \(c_{n+1},\ldots,c_{N-1}\). In that case, one can simply take the real part \(\mathrm{Re}(\vec{a})\) only without affecting the validity of Eq. (9) as \(\mathcal{M}_{k}\) and \(\mathcal{A}_{n}\) are hermitian. However, we show in the following Corollary that, from requiring a minimal norm of the coefficients \(\left\|\vec{a}\right\|_{2}\), it already follows that the vector \(\vec{a}\) is real-valued. We will see in the following Appendix B that minimizing the norm \(\left\|\vec{a}\right\|_{2}\) leads to the best bounds on the accuracy of our fidelity estimate.
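A short numerical check of Theorem 1 (our own sketch; numpy's FFT uses the same sign convention as the DFT above) confirms that Eq. (9) holds for an arbitrary admissible choice of the free components and that taking the real part of the coefficients does not affect the identity.

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
M = lambda k, N: np.cos(k * np.pi / N) * sx + np.sin(k * np.pi / N) * sy

def coefficients(n, N, c_free=()):
    """a = diag(exp(-i pi k n / N)) IDFT(c) with c = (1, 0, ..., 0, 1, c_{n+1}, ..., c_{N-1}),
    cf. Eq. (11); valid for n < N."""
    c = np.zeros(N, dtype=complex)
    c[0] = c[n] = 1.0
    for j, val in enumerate(c_free, start=n + 1):
        c[j] = val
    k = np.arange(N)
    return np.exp(-1j * np.pi * k * n / N) * np.fft.ifft(c)

N, n = 5, 3
A_n = np.zeros((2**n, 2**n), dtype=complex)
A_n[0, -1] = A_n[-1, 0] = 1

a = coefficients(n, N, c_free=(0.3 - 0.2j,))   # an arbitrary admissible (complex) choice
lhs = sum(a[k] * reduce(np.kron, [M(k, N)] * n) for k in range(N))
assert np.allclose(lhs, A_n)                    # Eq. (9) holds even for complex coefficients
lhs_re = sum(a.real[k] * reduce(np.kron, [M(k, N)] * n) for k in range(N))
assert np.allclose(lhs_re, A_n)                 # ... and also after taking the real part
print("Theorem 1 verified; a =", np.round(a, 3))
```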
For the next part, we recall that \(\left\|\mathcal{F}(\vec{a})\right\|_{2}^{2}=N\|\vec{a}\|_{2}^{2}\) holds true in the case of the DFT. Since \(\vec{c}\) is the DFT of \(\vec{a}\), up to a complex phase, it directly follows that \(\left\|\vec{c}\right\|_{2}^{2}=N\|\vec{a}\|_{2}^{2}\). Keeping this in mind, we can directly calculate the coefficients \(a_{k}\) with minimal norm \(\left\|\vec{a}\right\|_{2}\) from the above Theorem.
**Corollary 2**.: _The coefficients \(a_{k}\) minimizing the norm \(\left\|\vec{a}\right\|_{2}\) for fixed \(n\) and \(N\) are given by_
\[a_{k}=\frac{1}{N}\begin{cases}(-1)^{k}&\text{ for }n=N,\\ 2\cos\bigl{(}\pi\frac{kn}{N}\bigr{)}&\text{ for }n<N.\end{cases} \tag{19}\]
Proof.: Since the vectors \(\vec{a}\) and \(\vec{c}\) fulfill \(\left\|\vec{c}\right\|_{2}^{2}=N\|\vec{a}\|_{2}^{2}\), the norm of \(\vec{a}\) is minimal if and only if the norm of \(\vec{c}\) is minimal. According to Theorem 1, the norm of the vector \(\vec{c}\) reads as
\[\left\|\vec{c}\right\|_{2}^{2}=1+0+\cdots+0+1+|c_{n+1}|^{2}+\cdots+|c_{N-1}|^ {2}, \tag{20}\]
which is minimal for \(c_{n+1}=\cdots=c_{N-1}=0\). Thus, the ideal choice for the vector \(\vec{c}\) only contains up to two entries equal to one and has therefore a norm of \(\left\|\vec{c}\right\|_{2}=\sqrt{2}\) for \(n<N\) and \(\left\|\vec{c}\right\|_{2}=1\) for \(n=N\). In the case of \(n<N\), this leads to
\[Na_{k}=e^{-i\pi\frac{kn}{N}}\left[e^{0}+e^{2\pi i\frac{kn}{N}}\right]=e^{-i\pi \frac{kn}{N}}+e^{\pi i\frac{kn}{N}}=2\cos\biggl{(}\pi\frac{kn}{N}\biggr{)} \tag{21}\]
and, in the case of \(n=N\), to
\[Na_{k}=e^{-i\pi\frac{kN}{N}}=e^{-i\pi k}=(-1)^{k}, \tag{22}\]
which proves the statement.
Note that in the case \(n=N\) these are exactly the coefficients given in the main text. It follows directly from the above Corollary that the norm of the minimal coefficient vector \(\vec{a}\) only depends on the number of qubits \(\left\|\vec{a}\right\|_{2}^{2}=\frac{1}{N}\left\|c\right\|_{2}^{2}\leq\frac{2}{N}\). We will use this result in Appendix B to characterize the accuracy of the fidelities obtained from the experimental data.
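For the six-qubit setting of the experiment, the minimal-norm coefficients of Corollary 2 and the bound \(\|\vec{a}\|_{2}^{2}\leq 2/N\) can be checked with a few lines of numpy (a sketch of ours):

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
M = lambda k, N: np.cos(k * np.pi / N) * sx + np.sin(k * np.pi / N) * sy

def a_min(n, N):
    """Minimal-norm coefficients of Corollary 2."""
    k = np.arange(N)
    return (-1.0)**k / N if n == N else 2 * np.cos(np.pi * k * n / N) / N

N = 6
for n in (2, 4, 6):
    a = a_min(n, N)
    A_n = np.zeros((2**n, 2**n), dtype=complex)
    A_n[0, -1] = A_n[-1, 0] = 1
    assert np.allclose(sum(a[k] * reduce(np.kron, [M(k, N)] * n) for k in range(N)), A_n)
    print(f"n = {n}: ||a||^2 = {np.sum(a**2):.4f} (bound 2/N = {2/N:.4f})")
```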
## Appendix B: Confidence region for fidelity estimation
In the asymptotic limit, the experimental data for a measurement \(M\) ought to reproduce exactly the expectation value \(\langle M\rangle\). However, since every experiment is restricted to a finite number of measurements, one can only calculate estimates of the expectation values from the measurement results. Here we describe a way to bound the accuracy of the calculated estimate using Hoeffding's inequality [26; 47].
The setting is the following: We want to calculate the fidelity \(F\), which is a linear combination of some observables \(M_{i}\),
\[F=\sum_{i=0}^{N}d_{i}\langle M_{i}\rangle. \tag{23}\]
In our case, the coefficients and measurements will be the same as described in Appendix A. For now, however, we will look at the general case described above. In the experiment, each of the measurements \(M_{i}\) will be measured for a finite number \(m_{i}\) where each of these single measurements leads to a measurement result \(A_{ij}\) (\(j=1,\ldots,m_{i}\)). For example, measuring \(m_{i}=10\) times the measurement \(M_{i}=\sigma_{z}\) leads to the ten measurement results \(A_{ij}=+1\) or \(A_{ij}=-1\) for \(j=1,\ldots,10\). The estimate for the expectation value of \(M_{i}\) is then calculated by
\[\widehat{\langle M_{i}\rangle}=\frac{1}{m_{i}}\sum_{j=1}^{m_{i}}A_{ij}, \tag{24}\]
and the estimate for the fidelity \(F\) is obtained accordingly by
\[\hat{F}=\sum_{i=0}^{N}d_{i}\widehat{\langle M_{i}\rangle}=\sum_{i=0}^{N} \frac{d_{i}}{m_{i}}\sum_{j=1}^{m_{i}}A_{ij}. \tag{25}\]
We now want to characterize the accuracy of this estimate by using Hoeffding's inequality [47], which we briefly recall:
**Lemma 3** (Hoeffding's inequality [47]).: _Let \(X_{1},\ldots,X_{m}\) be independent, bounded random variables such that there exist \(s_{i}\) and \(t_{i}\) with \(s_{i}\leq X_{i}\leq t_{i}\). Then, the sum \(S_{m}=\sum_{i=1}^{m}X_{i}\) fulfills_

\[\Pr[(S_{m}-\langle S_{m}\rangle)>\epsilon]<\exp\biggl\{-\frac{2\epsilon^{2}}{C}\biggr\}, \tag{26}\]
_with \(C=\sum_{i=1}^{m}(t_{i}-s_{i})^{2}\)._
For a proof, see [47].
Using this, we can now calculate the accuracy of the estimate \(\hat{F}\).
**Lemma 4**.: _Defining the fidelities, coefficients, and measurements in the same way as in Appendix A, and denoting the minimal number of measurements by \(\mu=\min_{i}(m_{i})\), the estimate of the fidelity \(\hat{F}\) obeys \(\Pr\Bigl{[}(\hat{F}_{n}-F_{n})>\epsilon\Bigr{]}<\delta\) for \(\delta(\epsilon)=\exp\Bigl{\{}-\frac{8\mu\epsilon^{2}}{1+8/N}\Bigr{\}}\Leftrightarrow \epsilon(\delta)=\sqrt{-\frac{1+8/N}{8\mu}\ln\delta}\)._
Proof.: Following the notation in Appendix A, the fidelity reads
\[F_{n}=\sum_{k=0}^{N-1}\frac{a_{k}}{2}\langle\mathcal{M}_{k}^{\otimes n}\rangle +\frac{1}{2}\langle\mathcal{D}_{n}\rangle=:\sum_{k=0}^{N}\frac{a_{k}}{2} \langle M_{k}\rangle, \tag{27}\]
for \(M_{k}=\mathcal{M}_{k}^{\otimes n}\) for \(k=0,\ldots,N-1\), \(M_{N}=\mathcal{D}_{n}\) and \(a_{N}=1\). Note that each measurement \(M_{k}\) has only two possible measurement results: the measurements \(\mathcal{M}_{k}^{\otimes n}\) can only result in either \(+1\) or \(-1\), and the measurement \(\mathcal{D}_{n}\) only in either \(0\) or \(1\). Denoting the number of times each measurement \(M_{i}\) is performed by \(m_{i}\), and the result of the \(j\)-th measurement of \(M_{i}\) by \(A_{ij}\), the estimate of the fidelity is then
\[\hat{F}_{n}=\sum_{i=0}^{N}\frac{a_{i}}{2m_{i}}\sum_{j=1}^{m_{i}}A_{ij}. \tag{28}\]
Note that this is a sum of the independent random variables \(\frac{a_{i}}{2m_{i}}A_{ij}\), whose expectation value yields the true fidelity,
\[\langle\hat{F}_{n}\rangle=\sum_{i=0}^{N}\frac{a_{i}}{2m_{i}}\sum_{j=1}^{m_{i} }\langle A_{ij}\rangle=\sum_{i=0}^{N}\frac{a_{i}}{2m_{i}}\sum_{j=1}^{m_{i}} \langle M_{i}\rangle=\sum_{i=0}^{N}\frac{a_{i}}{2}\langle M_{i}\rangle=F_{n}. \tag{29}\]
The random variables \(A_{ij}\) themselves can only take two different values; for \(i=N\), it holds
\[\frac{a_{N}}{2m_{N}}A_{Nj}\in\left\{\frac{a_{N}}{2m_{N}}\times 0,\frac{a_{N}}{2m _{N}}\times 1\right\}=\left\{0,\frac{1}{2m_{N}}\right\} \tag{30}\]
and, for \(i=0,\ldots,N-1\),
\[\frac{a_{i}}{2m_{i}}A_{ij}\in\left\{\frac{a_{i}}{2m_{i}}\times(-1),\frac{a_{i }}{2m_{i}}\times 1\right\}. \tag{31}\]
Using Hoeffding's inequality, we therefore arrive at
\[\Pr\Bigl{[}\Bigl{(}\hat{F}_{n}-F_{n}\Bigr{)}>\epsilon\Bigr{]}=\Pr\Bigl{(}\hat {F}_{n}-\langle\hat{F}_{n}\rangle>\epsilon\Bigr{)}\leq\exp\biggl{\{}-\frac{2 \epsilon^{2}}{C}\biggr{\}} \tag{32}\]
with
\[C = \sum_{i=0}^{N-1}\sum_{j=1}^{m_{i}}\left(\frac{a_{i}}{2m_{i}}-\frac {-a_{i}}{2m_{i}}\right)^{2}+\sum_{j=1}^{m_{N}}\left(\frac{1}{2m_{N}}\right)^{2} \tag{33}\] \[= \sum_{i=0}^{N-1}\frac{a_{i}^{2}}{m_{i}}+\frac{1}{4m_{N}}\] (34) \[\leq \frac{1}{4\mu}\left(4\|\vec{a}\|_{2}^{2}+1\right)\] (35) \[\leq \frac{1}{4\mu}\left(8/N+1\right), \tag{36}\]
which proves the statement.
The last inequality follows directly from Corollary 2 and explains the need to minimize the norm of the coefficient vector \(\vec{a}\).
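For orientation, the accuracy \(\epsilon(\delta)\) of Lemma 4 can be evaluated numerically; the run numbers below are placeholders and not the experimental values.

```python
import numpy as np

def accuracy(delta, mu, N):
    """epsilon(delta) from Lemma 4: with probability at least 1 - delta the estimate
    does not exceed the true fidelity by more than epsilon."""
    return float(np.sqrt(-(1 + 8 / N) / (8 * mu) * np.log(delta)))

# Placeholder: roughly 1000 runs per setting for a six-qubit network
for delta in (1e-2, 1e-3, 1e-6):
    print(f"delta = {delta:g}:  epsilon = {accuracy(delta, mu=1000, N=6):.4f}")
```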
In Appendix C, we want to test hypotheses which compare fidelities on different subsets. Let us for now denote two of these different fidelities by \(F_{1}\) and \(F_{2}\). Then, we can bound in exactly the same way as described above the probability by
\[\Pr\Bigl{[}\bigl{(}\hat{F}_{1}-\hat{F}_{2}\bigr{)}-\left(F_{1}-F_{2}\right)> \epsilon\Bigr{]}<\exp\biggl{\{}-\frac{2\epsilon^{2}\mu}{1+8/N}\biggr{\}}. \tag{37}\]
Intuitively, the factor of \(4\) in the constant \(C\) appears because the length of the intervals in which the summed random variables assume their values approximately doubles. Mathematically, the random variables \(\frac{a_{i}^{(1)}}{2m_{i}}A_{ij}^{(1)}-\frac{a_{i}^{(2)}}{2m_{i}}A_{ij}^{(2)}\) arising in the calculation of the fidelity can now take four values, which are bounded by sums of the different coefficients \(a_{i}^{(k)}\). Using the subadditivity of the norm \(\left\|\cdot\right\|_{2}\) in the calculation of the constant \(C\) then yields the expression given above.
## Appendix C: Hypothesis testing
Recall that our goal is to distinguish between different topologies of distributed states. Using the calculated fidelities for subsets of \(n\) of the \(N\) qubits, we develop here a hypothesis test to decide which layout describes the measured data best. We do that by first formulating the different topologies as partitions of the set of all qubits. Then, we derive a hypothesis test for different partitions and calculate the \(p\)-value of each hypothesis using the results from Appendix B.
We start by noticing that the different layouts of a network can be seen as different partitions of the set \(\{1,\ldots,N\}\). We recall that a partition \(P\) of \(\{1,\ldots,N\}\) is a set of subsets \(\{I_{1},\ldots,I_{k}\}\) of \(\{1,\ldots,N\}\) such that \(I_{i}\neq\emptyset\) and \(I_{i}\cap I_{j}=\emptyset\) for all \(i\neq j\) and the union of all subsets covers again \(\{1,\ldots,N\}\),
\[\bigcup_{j=1}^{k}I_{j}=\{1,\ldots,N\}. \tag{38}\]
For instance, the configurations (a) and (b) from Fig. 1 would correspond to the partitions \(P_{a}=\{\{1,2,3,4\},\{5,6\},\{7,8\}\}\) and \(P_{b}=\{\{1,2,3\},\{4,5\},\{6,7,8\}\}\), respectively.
As seen in the section before, all the fidelities \(F_{n}=\operatorname{Tr}\{\varrho\left|GHZ_{n}\right\rangle\left\langle GHZ_{n}\right|\otimes\mathds{1}_{N-n}\}\) for \(n\leq N\) can be computed from only \(N+1\) measurements. We now refine the notation to keep track of the \(n\) parties on which the fidelity is calculated. We denote the \(n\)-party GHZ state on the \(n\) qubits \(I\subseteq\{1,\ldots,N\}\), \(\left|I\right|=n\), by \(\left|GHZ_{I}\right\rangle\) and the respective fidelity by \(F_{I}=\operatorname{Tr}\{\varrho\left|GHZ_{I}\right\rangle\left\langle GHZ_{I}\right|\otimes\mathds{1}_{\overline{I}}\}\). The overline \(\overline{I}\) denotes the complement of \(I\) in \(\{1,\ldots,N\}\).
Now, let \(\mathcal{P}(\{1,\ldots,N\})\) be the set of all partitions of \(\{1,\ldots,N\}\). Physically, we interpret a given partition \(P=\{I_{1},\ldots,I_{k}\}\) as the subsets of parties \(I_{j}\) on which a GHZ-state was distributed. This means we expect the global state of the network to be
\[\bigotimes_{j=1}^{k}\left|GHZ_{I_{j}}\right\rangle\left\langle GHZ_{I_{j}} \right|=\bigotimes_{I\in P}\left|GHZ_{I}\right\rangle\left\langle GHZ_{I} \right|. \tag{39}\]
In our case, we have \(N=6\) photons which can be entangled in four different ways:
\[\left|\Psi_{1}\right\rangle= \left|GHZ_{6}\right\rangle, \tag{40a}\] \[\left|\Psi_{2}\right\rangle= \left|GHZ_{4}\right\rangle\otimes\left|GHZ_{2}\right\rangle,\] (40b) \[\left|\Psi_{3}\right\rangle= \left|GHZ_{2}\right\rangle\otimes\left|GHZ_{4}\right\rangle,\quad \text{and}\] (40c) \[\left|\Psi_{4}\right\rangle= \left|GHZ_{2}\right\rangle\otimes\left|GHZ_{2}\right\rangle \otimes\left|GHZ_{2}\right\rangle. \tag{40d}\]
These four cases correspond to the four partitions
\[P_{1}= \{\{1,2,3,4,5,6\}\}, \tag{41a}\] \[P_{2}= \{\{1,2,3,4\},\{5,6\}\},\] (41b) \[P_{3}= \{\{1,2\},\{3,4,5,6\}\},\quad\text{and}\] (41c) \[P_{4}= \{\{1,2\},\{3,4\},\{5,6\}\}, \tag{41d}\]
respectively. To discriminate these different possible configurations from the measured data, we now develop a hypothesis test in which the hypotheses exclude each other, ensuring that only one hypothesis can be accepted.
**Lemma 5**.: _Let \(P_{1},\ldots,P_{K}\in\mathcal{P}(\{1,\ldots,N\})\) be \(K\) different partitions such that for every pair of partitions \(P_{i}\) and \(P_{j}\), there exists at least one subset in each partition \(\tilde{I}\in P_{i}\) and \(\tilde{J}\in P_{j}\) with \(\tilde{I}\subsetneq\tilde{J}\) or \(\tilde{I}\supsetneq\tilde{J}\). Then, the \(K+1\) hypotheses \(H_{1},\ldots,H_{K}\) and \(H_{\emptyset}\) given by_
\[H_{i} :\ \forall I\in P_{i}\ :\ F_{I}-\max_{G\supset I}F_{G}>\frac{1}{2} \quad\forall i\in\{1,\ldots,K\}, \tag{42}\] \[H_{\emptyset} :\ \text{otherwise} \tag{43}\]
_are pairwise exclusive to each other, i.e. only one of them can be accepted._
Proof.: Clearly, the different hypotheses \(H_{i}\) and the hypothesis \(H_{\emptyset}\) are mutually exclusive by definition. So, it is only left to show that any two hypotheses \(H_{i}\) and \(H_{j}\), with \(i\neq j\), exclude each other. From the assumptions, there exist two sets \(\tilde{I}\in P_{i}\) and \(\tilde{J}\in P_{j}\) which fulfill without loss of generality \(\tilde{I}\subsetneq\tilde{J}\). Let us now assume that both hypotheses could be accepted at the same time. It follows that
\[F_{\tilde{I}}>\frac{1}{2}+\max_{G\supset\tilde{I}}F_{G}\geq\frac {1}{2}+F_{\tilde{J}}\quad\text{and} \tag{44}\] \[F_{\tilde{J}}>\frac{1}{2}+\max_{G\supset\tilde{J}}F_{G}\geq\frac {1}{2}. \tag{45}\]
But then we have
\[F_{\tilde{I}}>\frac{1}{2}+F_{\tilde{J}}>\frac{1}{2}+\frac{1}{2}=1, \tag{46}\]
which contradicts the fact that fidelities can be at most one, i.e. \(F_{\tilde{I}}\leq 1\), and completes the proof.
The different partitions of Eq. (41) obviously fulfill the condition in Lemma 5. So, in our case, we have the five following different hypotheses:
\[H_{1} :\ F_{123456}>\nicefrac{{1}}{{2}}; \tag{47a}\] \[H_{2} :\ \begin{cases}h_{21}&:\ F_{1234}-F_{123456}>\nicefrac{{1}}{{2}} \quad\text{and}\\ h_{22}&:\ F_{56}-\max\{F_{3456},F_{123456}\}>\nicefrac{{1}}{{2}};\end{cases}\] (47b) \[H_{3} :\ \begin{cases}h_{31}&:\ F_{12}-\max\{F_{1234},F_{123456}\}> \nicefrac{{1}}{{2}}\quad\text{and}\\ h_{32}&:\ F_{3456}-F_{123456}>\nicefrac{{1}}{{2}};\end{cases}\] (47c) \[H_{4} :\ \begin{cases}h_{41}&:\ F_{12}-\max\{F_{1234},F_{123456}\}> \nicefrac{{1}}{{2}},\\ h_{42}&:\ F_{34}-\max\{F_{1234},F_{3456},F_{123456}\}>\nicefrac{{1}}{{2}}\quad \text{and}\\ h_{43}&:\ F_{56}-\max\{F_{3456},F_{123456}\}>\nicefrac{{1}}{{2}},\end{cases}\] (47d) and \[H_{\emptyset} :\ \text{otherwise}, \tag{47e}\]
respectively. Note that, for the sake of simplicity, we wrote, e.g., 12 for the set \(\{1,2\}\).
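The construction of these sub-hypotheses from the partitions of Eq. (41) can be automated; the short sketch below (ours, with hypothetical helper names) reproduces the conditions of Eq. (47) by listing, for every set \(I\) of a partition, the strict supersets \(G\) that appear in any of the candidate partitions.

```python
from itertools import chain

# Candidate partitions of Eq. (41); qubit labels 1..6
partitions = {
    "H1": [{1, 2, 3, 4, 5, 6}],
    "H2": [{1, 2, 3, 4}, {5, 6}],
    "H3": [{1, 2}, {3, 4, 5, 6}],
    "H4": [{1, 2}, {3, 4}, {5, 6}],
}
all_sets = set(map(frozenset, chain.from_iterable(partitions.values())))

def sub_hypotheses(name):
    """For every I in the partition, Eq. (42) demands F_I - max F_G > 1/2,
    with G running over the strict supersets of I found in any candidate partition."""
    conditions = []
    for I in partitions[name]:
        supersets = sorted(sorted(G) for G in all_sets if set(G) > I)
        conditions.append((sorted(I), supersets))
    return conditions

for name in partitions:
    print(name, sub_hypotheses(name))
```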
So, if we have the true fidelities \(F_{I}\) on every one of these specific subsets, we can unambiguously decide which hypothesis or configuration is true and reject the other possibilities. However, as we only have access to finite statistics, and therefore to estimates of the fidelities, we now have to calculate the \(p\)-values of every single hypothesis. The \(p\)-value describes the probability to get a certain experimental result, in our case the calculated estimate of the fidelity, given that a hypothesis is true.
Since the hypotheses \(H_{i}\) of Eq. (42) are all of the same form, we concentrate on a single one of these, consisting of \(k\) terms of the type \(F_{I}-\max_{J\supset I}F_{J}>\frac{1}{2}\). For the sake of readability, we neglect the index \(I\) in the following calculations and denote the hypothesis by \(H\), consisting of the sub-hypotheses \(h_{i}\ :\ F^{(i)}-\max_{J}F_{J}^{(i)}>\frac{1}{2}\ (i=1,\ldots,k)\). Additionally, we introduce the shorthand notation \(X_{J}^{(i)}:=F^{(i)}-F_{J}^{(i)}\), or equivalently \(\min_{J}X_{J}^{(i)}:=F^{(i)}-\max_{J}F_{J}^{(i)}\). With this notation, the \(p\)-value of the hypothesis for the minimum of the estimates
\[d_{i}:=\min_{J}\widehat{X_{J}^{(i)}}, \tag{48}\]
which are calculated from the experimentally observed data, reads as
\[p=\Pr\biggl{[}\min_{J}\widehat{X_{J}^{(i)}}\leq d_{i}\ \forall i\in\{1,\ldots,k\}\ |\ H\biggr{]}. \tag{49}\]
First, we prove a short and later helpful Lemma.
**Lemma 6**.: _With the notation introduced above and the fidelities defined as in Appendix A and B, it holds_
\[\Pr\biggl{[}\biggl{(}\min_{J}\widehat{X_{J}^{(i)}}-\min_{J}X_{J}^{(i)}\biggr{)} >\epsilon\biggr{]}\leq\exp\biggl{\{}-\frac{2\epsilon^{2}\mu}{1+8/N}\biggr{\}}. \tag{50}\]
Proof.: To be able to use the results from Appendix B, we note that the minimum \(\min_{J}X_{J}^{(i)}=:X_{\tilde{J}}^{(i)}\) is assumed for at least one \(\tilde{J}\). Additionally, the probability for the minimum of the estimates \(\min_{J}\widehat{X_{J}^{(i)}}\) being larger than some value is upper bounded by the probability for any of the \(\widehat{X_{J}^{(i)}}\) to be larger than that same value. We therefore arrive at
\[\Pr\biggl[\Bigl(\min_{J}\widehat{X_{J}^{(i)}}-\min_{J}X_{J}^{(i)}\Bigr)>\epsilon\biggr]=\Pr\biggl[\Bigl(\min_{J}\widehat{X_{J}^{(i)}}-X_{\tilde{J}}^{(i)}\Bigr)>\epsilon\biggr]\leq\Pr\biggl[\Bigl(\widehat{X_{\tilde{J}}^{(i)}}-X_{\tilde{J}}^{(i)}\Bigr)>\epsilon\biggr]\leq\exp\biggl\{-\frac{2\epsilon^{2}\mu}{1+8/N}\biggr\}, \tag{51}\]

where the last inequality is the bound of Eq. (37) for the difference of two fidelity estimates, which proves the statement.
**Lemma 7**.: _For the hypothesis \(H\) described above, the \(p\)-value in Eq. (49) is bounded by_
\[p\leq\delta(d) \tag{52}\]
_where \(d\) and \(\delta(d)\) are defined by \(d=\min\{d_{1},\ldots,d_{k},1/2\}\) and \(\delta(d)=\exp\Bigl{\{}-\frac{2(1/2-d)^{2}M}{1+8/N}\Bigr{\}}\)._
Proof.: We use the fact that the random variables \(\min_{J}X_{J}^{(i)}\) and their estimates are resulting from quantum states \(\varrho\). We define regions in the state space where the different hypotheses are fulfilled,
\[R_{i}:= \{\varrho\mid h_{i}\ :\ \min_{J}X_{J}^{(i)}>\nicefrac{{1}}{{2}} \text{ is true}\},\quad i\in\{1,\ldots,k\}\quad\text{and} \tag{53}\] \[R:= \{\varrho\mid h_{i}\ :\ \min_{J}X_{J}^{(i)}>\nicefrac{{1}}{{2}} \text{ is true }\forall i\in\{1,\ldots,k\}\}. \tag{54}\]
Obviously, it holds \(R\subseteq R_{i}\). We now can calculate a bound on the \(p\)-value by maximizing over the possible quantum states which lie in \(R\):
\[p=\Pr\Bigl{[}\min_{J}\widehat{X_{J}^{(j)}}\leq d_{j}\ \forall j\mid H\Bigr{]}\leq \max_{\varrho\in R}\Pr\Bigl{[}\min_{J}\widehat{X_{J}^{(j)}}\leq d_{j}\ \forall j\mid\varrho\Bigr{]}\leq\max_{\varrho\in R_{i}}\Pr \Bigl{[}\min_{J}\widehat{X_{J}^{(j)}}\leq d_{j}\ \forall j\mid\varrho\Bigr{]}\quad\forall i. \tag{55}\]
Since this holds for every \(R_{i}\), it also holds for the minimum over \(i\). Additionally, the joint probability for the different events \(\min_{J}\widehat{X_{J}^{(j)}}\leq d_{j}\) is bounded by the probability of one of these events. We therefore arrive at
\[p\leq\min_{i}\max_{\varrho\in R_{i}}\Pr\Bigl{[}\min_{J}\widehat{X_{J}^{(j)}} \leq d_{j}\ \forall j\mid\varrho\Bigr{]}\leq\min_{i}\max_{\varrho\in R_{i}}\Pr \Bigl{[}\min_{J}\widehat{X_{J}^{(i)}}\leq d_{i}\ \mid\varrho\Bigr{]}. \tag{56}\]
However, the probability in the last line can be bounded using Lemma 6. Indeed, for the case \(d_{i}<1/2\), it holds
\[\max_{\varrho\in R_{i}}\Pr\Bigl[\min_{J}\widehat{X_{J}^{(i)}}\leq d_{i}\mid\varrho\Bigr]\leq\Pr\biggl[\Bigl(\min_{J}X_{J}^{(i)}-\min_{J}\widehat{X_{J}^{(i)}}\Bigr)\geq 1/2-d_{i}\biggr]\leq\exp\biggl\{-\frac{2(1/2-d_{i})^{2}M}{1+8/N}\biggr\}=\delta(d_{i}), \tag{57}\]

where the last inequality follows in the same way as Lemma 6, with the roles of the estimate and the true value exchanged. For \(d_{i}\geq 1/2\), the probability is trivially bounded by one. Since \(\delta(d_{i})\) is monotonically increasing in \(d_{i}\) for \(d_{i}\leq 1/2\), taking the minimum over \(i\) yields \(p\leq\min_{i}\min\{\delta(d_{i}),1\}=\delta(d)\) with \(d=\min\{d_{1},\ldots,d_{k},1/2\}\), which proves the statement.

It remains to bound the \(p\)-value of the null hypothesis \(H_{\emptyset}\). If \(H_{\emptyset}\) holds, then for every one of the partitions \(P_{j}\) there is at least one subset \(I^{(j)}\in P_{j}\) with \(\min_{G}(F_{I^{(j)}}-F_{G})\leq 1/2\). For such a subset, the same Hoeffding argument as above gives, whenever \(d_{j}>1/2\), the bound \(\Pr\bigl[\min_{G}(\widehat{F_{I^{(j)}}}-\widehat{F_{G}})\geq d_{j}\mid H_{\emptyset}\bigr]\leq\delta(d_{j})\),
with \(\delta(d_{j})=\exp\Bigl\{-\frac{2(1/2-d_{j})^{2}M}{1+8/N}\Bigr\}\). Thus, we have
\[\Pr\Bigl{[}\min_{G}(\widehat{F_{I^{(j)}}}-\widehat{F_{G}})\geq d_{j}\ \forall j\mid H_{\emptyset}\Bigr{]}\leq\min_{j}\{\delta(d_{j}),1\}. \tag{63}\]
The \(p\)-value itself is then bounded by the maximum over the different combinations of the \(I^{(i)}\)
\[p \leq \max_{\{I^{(i)}\}}\Pr\Bigl{[}\min_{G}(\widehat{F_{I^{(i)}}}- \widehat{F_{G}})\geq d_{i}\ \forall i\mid H_{\emptyset}\Bigr{]} \tag{64}\] \[\leq \max_{\{I^{(i)}\}}\min_{i}\{\delta(d_{i}),1\}\] (65) \[\leq \min\{\delta(d),1\}, \tag{66}\]
where \(d=\min_{\{i|d_{i}>1/2\}}d_{i}\).
Intuitively, this can again be understood in the following way: The probability for the observed data given the hypothesis \(H_{\emptyset}\) is upper bounded by the probability of the least probable event. The least probable event in this case is the one farthest away from \(1/2\), namely \(\max_{i}d_{i}>1/2\). Since the exact combination of the \(I^{(i)}\) and therefore the \(d_{i}\) one has to consider in the maximization are unknown, one can only upper bound the \(p\)-value by the worst case \(d_{i}\), namely the one nearest to but still larger than \(1/2\); that is \(d=\min_{\{i|d_{i}>1/2\}}d_{i}\).
## Appendix D: Additional data and experimental details
An in-depth description and characterization of our experiment can be found in Ref. [43]. A dispersion-engineered, spectrally decorrelated guided-wave parametric down-conversion source in a periodically poled potassium titanyl phosphate waveguide is pumped with ultrafast pump pulses with a central wavelength of \(\lambda_{p}=775\,\)nm. The source is arranged in a Sagnac configuration to generate polarization-entangled Bell states with a central wavelength of \(\lambda_{\text{Bell}}=1550\,\)nm [42]. A set of tomographic wave plates (half-wave plate, quarter-wave plate) and a polarizing beam splitter are used in one arm of the source to implement a polarization-resolving heralding measurement. We use superconducting nanowire single-photon detectors with an efficiency of more than 70% and a recovery time of 11 ns for photon detection. These allow us to operate the source at the full laser repetition rate of 76 MHz, corresponding to a time difference between successive laser pulses of 13.6 ns. A successful heralding detection event is fed forward to a high-speed electro-optic switch inside the quantum memory. The switch is, effectively, a Pockels cell that rotates the polarization of an incoming light pulse when a high voltage is applied. It has a response time (rise and/or fall time) of less than 5 ns. Depending on when the switching occurs, we can realize the swap and interfere operations described in the main text. Again, more details are found in Ref. [43].
The quantum memory itself is an all-optical, free-space storage loop with a memory bandwidth beyond 1 THz, a memory efficiency of 91%, and a lifetime of 131 ns. Its operation state is controlled by a field-programmable gate array (FPGA), which converts the heralding detection events into switching time lists for the Pockels cell as well as a time gate for the photon detection behind the memory. The latency of the feed-forward is compensated for by sending the photons through a 300 m long single-mode fiber. After retrieval from the memory, photons are sent to another polarization-resolving detection stage. A successful datum is registered when six photons are detected in total, three herald photons in different time bins together with three photons from the memory in the corresponding time bins. Hence, a complete experiment consists of the following steps: Firstly, define the network topology and the corresponding switching sequence; secondly, perform a test measurement of the H/V populations to assess system performance; thirdly, measure all relevant observables by automatically setting the corresponding wave-plate angles and collecting data until a predetermined number of successful events (in our case around a thousand) has been measured.
| | \(H_{1}\) | \(H_{2}\) | \(H_{3}\) | \(H_{4}\) | \(H_{\emptyset}\) |
| --- | --- | --- | --- | --- | --- |
| Dataset 1 | 1 | \(2.3724\times 10^{-139}\) | \(9.2153\times 10^{-146}\) | \(3.0256\times 10^{-116}\) | \(9.4765\times 10^{-8}\) |
| Dataset 2 | \(8.3274\times 10^{-92}\) | 1 | \(1.6231\times 10^{-258}\) | \(1.6231\times 10^{-258}\) | \(1.4347\times 10^{-13}\) |
| Dataset 3 | \(6.1789\times 10^{-93}\) | \(6.8294\times 10^{-259}\) | 1 | \(8.8629\times 10^{-272}\) | \(1.3010\times 10^{-23}\) |
| Dataset 4 | \(1.1810\times 10^{-32}\) | \(3.7960\times 10^{-174}\) | \(4.9169\times 10^{-188}\) | 1 | \(2.3295\times 10^{-22}\) |
Table 1: Upper bound on the \(p\)-value of the different hypotheses for the four different states. We recall that the hypotheses are explicitly given in Eq. (47), and that \(H_{1}\) corresponds to \(F_{123456}\) being large, \(H_{2}\) to \(F_{1234}\) and \(F_{56}\) being large, \(H_{3}\) to \(F_{12}\) and \(F_{3456}\) being large, \(H_{4}\) to \(F_{12}\), \(F_{34}\) and \(F_{56}\) being large and, finally, \(H_{\emptyset}\) is the null hypothesis.
wave-plate angles and collecting data until a predetermined number of successful events (in our case around one thousand) has been measured.
As an example, Figure 4 shows the click-correlation matrices for the H/V populations of the four network topologies considered in the main text (top row) and for the \(\langle\mathcal{M}_{3}^{\otimes 6}\rangle\) coherence term (bottom row). Note that the same data sets have been used to calculate the observables as described in the main text.
## Appendix E Semi-device-independent estimation of the fidelity
In a device-independent scenario, there are no assumptions made on the measurements. As mentioned in the main text, there are already results connecting the violation of a Mermin inequality by some state with its GHZ-state fidelity [50]. In the following, using the experimentally observed violation of the Mermin inequality \(S_{N}=\langle\mathcal{B}_{N}\rangle\), we will give a lower bound of the GHZ-state fidelity under the assumption that one performs (misaligned) measurements \(X\) and \(Y\) on qubits.
Firstly, recall that the Bell operator of the Mermin inequality for \(N\) qubits is given by
\[\mathcal{B}_{N}=[(X+iY)^{\otimes N}+(X-iY)^{\otimes N}]/2. \tag{67}\]
We then specify the observables as
\[X =\sigma_{x}, \tag{68}\] \[Y =\sin(\theta_{k})\sigma_{x}+\cos(\theta_{k})\sigma_{y}, \tag{69}\]
where \(\theta_{k}\) is the angle denoting the misalignment from a perfect measurement (\(X=\sigma_{x}\) and \(Y=\sigma_{y}\)), which can be different for each qubit \(k=1,...,N\). Note that we can assume \(X=\sigma_{x}\) since we are only interested in the GHZ-state fidelity modulo local unitaries. For the same reason, it is no further restriction to assume that the second measurement lies in the \(x\)-\(y\) plane of the Bloch sphere.
Figure 4: (Top row) Population of six-fold clicks for the four different topologies (as named in figure) measured in the H/V basis. Direct population analysis shows \(\mathcal{D}_{N}\) values of \((74\pm 1.7)\%\), \((83.7\pm 1.1)\%\), \((82.3\pm 1.1)\%\), and \((92.6\pm 0.8)\%\), respectively, for the states from left to right. A Poissonian error on the counts is included in the data analysis. As expected, fidelities decrease for larger-size GHZ states. (Bottom row) Six-fold click probabilities measured in a specific coherence term (\(\langle\mathcal{M}_{3}^{\otimes 6}\rangle\)) setting.
The goal is now to estimate the lower bound on the GHZ-state fidelity, given a violation \(S_{N}\) of the \(N\)-qubit Mermin inequality. For two qubits, this has previously been done using the spectral decomposition of the Clauser-Horne-Shimony-Holt (CHSH) operator [56]. We now extend this method to \(N>2\) qubits by using the spectral decomposition of the Mermin operator \(\mathcal{B}_{N}\) and the fact that, for any Bell operator with two observables per qubit, its eigenstates (if not degenerate) are given by the \(N\)-qubit GHZ states, such that its eigendecomposition is \(\mathcal{B}_{N}=\sum_{i}\lambda_{i}\left|GHZ_{i}\right\rangle\left\langle GHZ_{i}\right|\)[34].
The idea is now the following: If the violation exceeds a certain value \(S_{N}\), we want to be able to state that the fidelity for some GHZ state \(\left|GHZ_{i}\right\rangle\) is at least \(F\). Thus, we start by fixing a fidelity \(F\) and maximize the violation of the Bell inequality.
Consider the expectation value of the Bell operator for some state \(\varrho\),
\[S_{N} =\mathrm{Tr}(\mathcal{B}_{N}\varrho)=\sum_{i}\lambda_{i}\left\langle GHZ _{i}\right|\varrho\left|GHZ_{i}\right\rangle \tag{70}\] \[=\sum_{i}\lambda_{i}F(\varrho,\left|GHZ_{i}\right\rangle)\leq \lambda_{1}F+\lambda_{2}(1-F), \tag{71}\]
where we fixed the largest fidelity \(F\coloneqq\max_{i}\{F(\varrho,\left|GHZ_{i}\right\rangle)\}\) and sorted the eigenvalues \(\lambda_{i}\) in decreasing order.
As mentioned before, the measurements might be misaligned with respect to some angles \(\theta_{k}\) and therefore the eigenvalues \(\lambda_{i}\) are functions of the angles \(\theta_{k}\). Since we are interested in the largest violation one can achieve for a fixed fidelity \(F\), we have to consider all possible misalignments \(\{\theta_{k}\}\). Then, Eq. (71) reads
\[S_{N}\leq\max_{\{\theta_{k}\}}[\lambda_{1}(\{\theta_{k}\})F+\lambda_{2}(\{ \theta_{k}\})(1-F)]. \tag{72}\]
In order to find the maximum of this expression, consider
\[\mathcal{B}_{N}^{2}=[((X+iY)^{2})^{\otimes N}+((X-iY)^{2})^{\otimes N}+((X+iY )(X-iY))^{\otimes N}+((X-iY)(X+iY))^{\otimes N}]/4. \tag{73}\]
Using \(\mathrm{Tr}(\sigma_{i}\sigma_{j})=2\delta_{ij}\), it follows that \(\mathrm{Tr}(XY)=2\sin(\theta_{k})\), and further that
\[\mathrm{Tr}\big{(}\mathcal{B}_{N}^{2}\big{)} =[\prod_{k=1}^{N}4i\sin(\theta_{k})+\prod_{k=1}^{N}(-4i)\sin( \theta_{k})+4^{N}+4^{N}]/4 \tag{74}\] \[=[(4i)^{N}\prod_{k=1}^{N}\sin(\theta_{k})+(-4i)^{N}\prod_{k=1}^{N }\sin(\theta_{k})+2\cdot 4^{N}]/4. \tag{75}\]
For an odd number of qubits \(N\), this reduces to
\[\mathrm{Tr}\big{(}\mathcal{B}_{N}^{2}\big{)}=[2\cdot 4^{N}]/4=2^{-1}2^{2N}=2^{N}2 ^{N-1}. \tag{76}\]
Following Ref. [34], where similar formulas were derived, we note that
\[\sum_{i=1}^{2^{N}}\lambda_{i}^{2}=\mathrm{Tr}\big{(}\mathcal{B}_{N}^{2}\big{)} =2^{N}2^{N-1}. \tag{77}\]
Further, it is known that the eigenvalues of the Mermin operator \(\mathcal{B}_{N}\) appear pairwise with alternating signs; therefore, summing up only the squared positive eigenvalues yields
\[\sum_{i=1}^{2^{N-1}}(\lambda_{i}^{+})^{2}=2^{N}2^{N-1}/2=2^{2(N-1)}. \tag{78}\]
Thus, the two largest eigenvalues must fulfill
\[\lambda_{1}^{2}+\lambda_{2}^{2}\leq 2^{2(N-1)}. \tag{79}\]
We can use this to parameterize \(\lambda_{1}=2^{N-1}\cos(\alpha)\) and \(\lambda_{2}=2^{N-1}\sin(\alpha)\), as done in Ref. [56]. Note that \(\alpha\) is not directly connected to the misalignment \(\theta_{k}\) anymore, but it simply represents a parameter used to express all possible configurations for \(\lambda_{1}\) and \(\lambda_{2}\) saturating Eq. (79). Then, Eq. (72) yields
\[S_{N} \leq\max_{\alpha}[2^{N-1}\cos(\alpha)F+2^{N-1}\sin(\alpha)(1-F)] \tag{80}\] \[\leq 2^{N-1}\sqrt{F^{2}+(1-F)^{2}}, \tag{81}\]
using that \(\max_{\alpha}(a\sin(\alpha)+b\cos(\alpha))=\sqrt{a^{2}+b^{2}}\), and it follows that
\[F\geq\frac{1}{2}+\frac{1}{\sqrt{2}}\sqrt{\left(\frac{S_{N}}{2^{N-1}}\right)^{2}- \frac{1}{2}} \tag{82}\]
for an odd number of qubits. Note that this expression is only defined if the violation is large enough, which shows that, in the semi-device-independent scenario, a violation of at least \(S_{N}=2^{N-1}/\sqrt{2}\) is needed to certify entanglement. Furthermore, if the maximal violation \(2^{N-1}\) is achieved, the GHZ-state fidelity is \(F=1\).
Lastly, we note that this ansatz for finding the analytical expression only works for an odd number of qubits. However, when numerically maximizing Eq. (72) for up to 10 qubits and scaling the violation with a factor \(2^{-(N-1)}\), we obtain the same curve for all \(N\), as shown in Fig. 5. Thus, we conjecture that Eq. (82) holds for all \(N\).
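A minimal numerical sketch of the maximization in Eq. (72) is given below (this is not the authors' code; numpy and scipy, the chosen qubit number \(N=3\), the fidelity \(F=0.8\), and the restart count are illustrative assumptions). It builds the Mermin operator \(\mathcal{B}_{N}\) for misaligned measurements, extracts its two largest eigenvalues, and maximizes \(\lambda_{1}F+\lambda_{2}(1-F)\) over the angles \(\theta_{k}\) with an off-the-shelf optimizer, so the result can be compared with the analytic bound of Eq. (81).

```python
import numpy as np
from functools import reduce
from scipy.optimize import minimize

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def mermin_operator(thetas):
    """B_N = [(X+iY)^{\otimes N} + (X-iY)^{\otimes N}]/2 with X = sigma_x and
    Y = sin(theta_k) sigma_x + cos(theta_k) sigma_y on qubit k."""
    plus, minus = [], []
    for th in thetas:
        Y = np.sin(th) * sx + np.cos(th) * sy
        plus.append(sx + 1j * Y)
        minus.append(sx - 1j * Y)
    return (reduce(np.kron, plus) + reduce(np.kron, minus)) / 2

def max_violation(F, N, restarts=20, seed=0):
    """Numerically maximize lambda_1*F + lambda_2*(1-F) over the N misalignment angles."""
    rng = np.random.default_rng(seed)

    def neg_objective(thetas):
        ev = np.linalg.eigvalsh(mermin_operator(thetas))[::-1]  # descending eigenvalues
        return -(ev[0] * F + ev[1] * (1 - F))

    best = -np.inf
    for _ in range(restarts):
        x0 = rng.uniform(-np.pi / 2, np.pi / 2, size=N)
        res = minimize(neg_objective, x0, method="Nelder-Mead")
        best = max(best, -res.fun)
    return best

if __name__ == "__main__":
    N, F = 3, 0.8                                              # illustrative values
    numeric = max_violation(F, N)
    analytic = 2 ** (N - 1) * np.sqrt(F ** 2 + (1 - F) ** 2)   # upper bound of Eq. (81)
    print(f"numerical max of Eq. (72): {numeric:.4f}, analytic bound of Eq. (81): {analytic:.4f}")
```

Sweeping \(F\) and rescaling by \(2^{-(N-1)}\) gives a curve that can be compared with the analytic expression discussed above.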
Distributed quantum information in quantum networks is a key ingredient of global quantum communication and can moreover be used for related tasks such as clock synchronization, magnetic-field sensing, and blind quantum computation. For analyzing and benchmarking implementations of quantum networks, however, it is crucial to characterize the network topology between nodes across which entanglement can be reliably distributed. Here, we present an efficient method for certifying this topology. The method allows different networks composed of bipartite and multipartite entanglement sources to be distinguished in a scalable way. It also applies to the semi-device-independent scenario in which the measurement devices and network nodes are uncharacterized and untrusted. We experimentally demonstrate the method by certifying the topology of a six-qubit network generated using polarization photons, active feed-forward, and time multiplexing. With these methods, the simultaneous testing of multiple hypotheses with few measurements |
2309.12640 | MEV Makes Everyone Happy under Greedy Sequencing Rule | Trading through decentralized exchanges (DEXs) has become crucial in today's
blockchain ecosystem, enabling users to swap tokens efficiently and
automatically. However, the capacity of miners to strategically order
transactions has led to exploitative practices (e.g., front-running attacks,
sandwich attacks) and to substantial gains of Maximal Extractable Value (MEV) for
their own advantage. To mitigate such manipulation, Ferreira and Parkes
recently proposed a greedy sequencing rule such that the execution price of
transactions in a block moves back and forth around the starting price.
Utilizing this sequencing rule makes it impossible for miners to conduct
sandwich attacks, consequently mitigating the MEV problem.
However, no sequencing rule can prevent miners from obtaining risk-free
profits. This paper systematically studies the computation of a miner's optimal
strategy for maximizing MEV under the greedy sequencing rule, where the utility
of miners is measured by the overall value of their token holdings. Our results
unveil a dichotomy between the no trading fee scenario, which can be optimally
strategized in polynomial time, and the scenario with a constant fraction of
trading fee, where finding the optimal strategy is proven NP-hard. The latter
represents a significant challenge for miners seeking optimal MEV.
Following the computation results, we further show a remarkable phenomenon:
Miner's optimal MEV also benefits users. Precisely, in the scenarios without
trading fees, when miners adopt the optimal strategy given by our algorithm,
all users' transactions will be executed, and each user will receive profits that
match or surpass their expectations. This outcome provides
further support for the study and design of sequencing rules in decentralized
exchanges. | Yuhao Li, Mengqian Zhang, Jichen Li, Elynn Chen, Xi Chen, Xiaotie Deng | 2023-09-22T06:12:19 | http://arxiv.org/abs/2309.12640v1 | # MEV Makes Everyone Happy under Greedy Sequencing Rule
###### Abstract
Trading through decentralized exchanges (DEXs) has become crucial in today's blockchain ecosystem, enabling users to swap tokens efficiently and automatically. However, the capacity of miners to strategically order transactions has led to exploitative practices (_e.g._, front-running attacks, sandwich attacks) and to substantial gains of Maximal Extractable Value (MEV) for their own advantage. To mitigate such manipulation, Ferreira and Parkes recently proposed a greedy sequencing rule such that the execution price of transactions in a block moves back and forth around the starting price. Utilizing this sequencing rule makes it impossible for miners to conduct sandwich attacks, consequently mitigating the MEV problem.
However, no sequencing rule can prevent miners from obtaining risk-free profits. This paper systematically studies the computation of a miner's optimal strategy for maximizing MEV under the greedy sequencing rule, where the utility of miners is measured by the overall value of their token holdings. Our results unveil a dichotomy between the no trading fee scenario, which can be optimally strategized in polynomial time, and the scenario with a constant fraction of trading fee, where finding the optimal strategy is proven NP-hard. The latter represents a significant challenge for miners seeking optimal MEV.
Following the computation results, we further show a remarkable phenomenon: Miner's optimal MEV also benefits users. Precisely, in the scenarios without trading fees, when miners adopt the optimal strategy given by our algorithm, all users' transactions will be executed, and each user will receive profits that match or surpass their expectations. This outcome provides further support for the study and design of sequencing rules in decentralized exchanges.
**Keywords:** Decentralized Finance, Maximal Extractable Value, Sequencing Rule
Introduction
Decentralized finance (also known as DeFi), as the main application of blockchain and smart contracts, has grown incredibly popular and attracted more than 40 billion dollars [1]. Within the DeFi ecosystem, decentralized exchange (DEX) becomes a fundamental service that allows users to trade cryptocurrency directly without any centralized authority. Nowadays, the daily volume of these DEXs has reached billions of US dollars [2].
Most DEXs (_e.g._, Uniswap, SushiSwap, Curve Finance, and Balancer) are organized as constant function market makers (CFMMs). Uniswap [3], for example, utilizes a constant product formula to make sure that the product of the quantity of two tokens remains constant before and after a swap. The exchange rate, or say the price that the swap executes at, is automatically determined by the reserves of the pair. So the outcome of each trade is sensitively influenced by system status at execution time.
In the blockchain, it is the block builders (also referred to as miners or validators) that select pending transactions and specify their execution order. This gives an exploitable chance for miners to extract profit by strategically including, excluding, and reordering transactions in a block. This is known as _Maximal Extractable Value_ (MEV) [4]. A prevalent MEV example is the _sandwich attack_[5] on DEX transactions: the attacker "sandwiches" a profitable victim transaction by front-running and back-running it and earns from the spread between buying and selling prices.
To mitigate this market manipulation by miners, Ferreira and Parkes [6] recently introduced a _greedy sequencing rule_. Simply put, when dealing with a bunch of transactions from the liquidity pool of tokens \(\mathcal{X}\) and \(\mathcal{Y}\), this sequencing rule requires miners to take the starting price as a benchmark. Then at any point during the execution in the block, if the current price of \(\mathcal{Y}\) is higher than the benchmark, the priority should be given to the transactions selling token \(\mathcal{Y}\). Conversely, the transactions selling token \(\mathcal{X}\) should be executed next. This sequencing rule structurally makes the sandwich attack impossible. It restricts miners from manipulating transaction orders, thus mitigating the impact of MEV. More importantly, it introduces verifiability by allowing users to efficiently verify whether the execution order of transactions complies with the rule.
### Our Contributions
As mentioned in [6], miners can always obtain risk-free profits in some cases under arbitrary sequencing rule. In this paper, we systematically study the computation of miner's optimal MEV strategy under the greedy sequencing rule. The study is based on the utility model where the worth of miners is the overall _value_ of all their tokens. Like the similar work [7] aiming to maximize extractable value without rules or limits, the value of a token is measured by its price, which is exogenous, given by an oracle, and fixed throughout the attack. It was explicitly emphasized by Ferreira and Parkes [6] to also consider miner's utility as a real-valued function when studying sequencing rules. The monetary function we considered is arguably the most natural choice.
We highlight our results on the computation of miners' optimal strategies, as well as their surprising consequences. We give a computational dichotomy, supported by our two main theorems (Theorem 1 and Theorem 3). For the scenario where there is no trading fee, a polynomial time algorithm for a miner to compute an optimal strategy is given (Theorem 1); in contrast, when the fraction of trading fees is any constant larger than \(0\) (_e.g._, \(f=0.3\%\) in most Uniswap pools), we prove it is NP-hard to find an optimal strategy (Theorem 3).
The computational intractability implies hardness for a miner to hope for optimal MEV. More surprisingly, in the \(f=0\) regime, when miners adopt the optimal strategy provided by our algorithm
(Algorithm 1), users will also benefit in the following sense: all users' transactions will be executed (Corollary 1), and every user gets at least as good as if their transaction was the only transaction in the block (Corollary 2). The latter was one of the main motivations to propose the greedy sequencing rule, even though it is generally not true when the miner _truthfully_ follows the greedy sequencing rule.
We conclude this paper by discussing many interesting future directions and open problems in the last section (Section 5).
### Related Work
#### 1.2.1 Sequencing Rules
Typically, miners organize transactions based on their gas prices. In order to protect users from order manipulation, Kelkar _et al._[8] investigate the notion of _fair transaction ordering_ for Byzantine consensus, which is further extended to the permissionless setting in [9]. Cachin _et al._[10] introduce a new _differential order-fairness property_ and present the quick order-fair atomic broadcast protocol which is much more efficient than previous solutions. The general idea of these approaches is to rely on a committee rather than a single miner to order transactions. A main threat to fair transaction ordering is the _Condorcet attack_[11]. Vafadar and Khabbazian [11] show that an attacker can undermine fairness by imposing Condorcet cycles even when all others in the system behave honestly.
Another category is _content-oblivious ordering_[12, 13] which guarantees that the transaction data is not accessible to the committee responsible for sequencing them until an order has been determined. This could be achieved using methods like threshold public key encryption schemes.
#### 1.2.2 MEV Mitigation
It has long been discovered that miners could exploit transaction ordering for their own benefit [14]. The term Maximal Extractable Value (MEV) was introduced in [4], formally defined in [15], and its growth has resulted in network congestion and high gas prices [4, 16]. Besides the sequencing rules, some other approaches have also been explored to mitigate the impact of MEV. To avoid sandwich attacks, users are suggested to reduce the trading volume by splitting transactions [17] and to restrict the slippage tolerance [18]. This method, however, may also increase the transaction costs for users. Zhou _et al._[19] propose a new DEX design called A\({}^{2}\)MM, which helps users to immediately execute an arbitrage following their swap transactions. It also allows users to benefit from MEV atomically. Another popular way is to rely on the service from trusted third parties like flashbots [20], Eden [21], and OpenMEV [22]. They can help to order transactions without broadcasting them to the whole network, thus protecting users from front-running and sandwich attacks.
## 2 Preliminaries
### Constant Function Market Makers
Let \(A\) be an AMM for trading between token \(\mathcal{X}\) and token \(\mathcal{Y}\). The exchange has _state_\(s=(x,y)\), where \(x\) and \(y\) are current reserves of token \(\mathcal{X}\) and \(\mathcal{Y}\), respectively. When \(A\) is a CFMM, the trading invariant can be modeled by a constant function with two variables \(F(x,y)=C\). We will
focus on CFMMs that satisfy Axiom 1 and Axiom 2, which are defined as follows. We note that all currently known CFMMs are consistent with these two properties.1
Footnote 1: This also includes Uniswap v3, which is less trivial.
**Axiom 1**.: _For different pairs \((x,y)\) and \((x^{\prime},y^{\prime})\) such that \(F(x,y)=F(x^{\prime},y^{\prime})=C\), we have \(x<x^{\prime}\) if and only if \(y>y^{\prime}\)._
By this axiom, we know that for any \(x\) (reserves of token \(\mathcal{X}\)), there is a unique \(y\) such that \(F(x,y)=C\) and vice versa. So we will use \(F_{y}(x)\) to denote the \(y\) such that \(F(x,y)=C\) and similarly define \(F_{x}(y)\).
**Axiom 2**.: \(F_{y}(x)\) _is differentiable and the marginal exchange rate \(|dF_{y}(x)/dx|\) is decreasing with respect to \(x\)._
In the rest of the paper, we use \(r(x)\) to denote the _marginal exchange rate_ of swapping tokens \(\mathcal{X}\) for \(\mathcal{Y}\), _i.e._, \(r(x)\coloneqq|dF_{y}(x)/dx|\).
### Execution of Transactions
Users can submit a transaction of the following two types: \(\mathtt{Sell}(\mathcal{X},\cdot)\) and \(\mathtt{Sell}(\mathcal{Y},\cdot)\), where \(\cdot\) is a real parameter representing how many units of token the user wants to trade.
To be more concrete, suppose that the current state of CFMM \(A\) is \(s=(x,y)\). For each swap, part of tokens are charged as fees and we use \(f\in[0,1)\) to denote the fraction of this trading fee. When executing a transaction \(\mathtt{Sell}(\mathcal{X},q)\), the user will pay \(q\) units of token \(\mathcal{X}\) and get \(y-F_{y}(x+(1-f)q)\) units of token \(\mathcal{Y}\). Similarly, when executing a transaction \(\mathtt{Sell}(\mathcal{Y},q)\), the user will pay \(q\) units of token \(\mathcal{Y}\) and get \(x-F_{x}(y+(1-f)q)\) units of token \(\mathcal{X}\).
The execution of multiple transactions \(\{\mathtt{TX}^{i}\}_{i\in[n]}\) will be well-defined if an order among them is determined. In particular, suppose that \(\tau:[n]\to[n]\) is a permutation. Then the execution will work as follows: Let \(s_{0}=(x_{0},y_{0})\) be the initial state and iteratively execute each transaction \(\mathtt{TX}^{\tau(i)}\). For the \(i\)-th iteration, if \(\mathtt{TX}^{\tau(i)}=\mathtt{Sell}(\mathcal{X},q)\), then \(s_{i}=(x_{i},y_{i})\) where \(x_{i}=x_{i-1}+(1-f)q\) and \(y_{i}=F_{y}(x_{i})\); if \(\mathtt{TX}^{\tau(i)}=\mathtt{Sell}(\mathcal{Y},q)\), then \(s_{i}=(x_{i},y_{i})\) where \(y_{i}=y_{i-1}+(1-f)q\) and \(x_{i}=F_{x}(y_{i})\).
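As a concrete illustration (not part of the paper), the short sketch below implements this execution model for the constant-product curve \(F(x,y):xy=k\); the reserves, fee, transaction amounts, and function names are our own made-up choices.

```python
def f_y(x, k):
    """F_y(x): the unique y with x*y = k."""
    return k / x

def f_x(y, k):
    """F_x(y): the unique x with x*y = k."""
    return k / y

def execute(state, tx, k, f=0.0):
    """Apply tx = ('X', q) or ('Y', q) to state = (x, y); return (new_state, amount_received)."""
    x, y = state
    token, q = tx
    if token == 'X':                 # Sell(X, q): pay q units of X, receive Y
        x_new = x + (1 - f) * q
        y_new = f_y(x_new, k)
        return (x_new, y_new), y - y_new
    else:                            # Sell(Y, q): pay q units of Y, receive X
        y_new = y + (1 - f) * q
        x_new = f_x(y_new, k)
        return (x_new, y_new), x - x_new

if __name__ == "__main__":
    x0, y0 = 100.0, 100.0            # initial reserves (made-up)
    k, f = x0 * y0, 0.003            # constant product and a Uniswap-like 0.3% fee
    state = (x0, y0)
    for tx in [('X', 10.0), ('Y', 5.0)]:      # one possible execution order tau
        state, received = execute(state, tx, k, f)
        print(tx, "received:", round(received, 4), "new state:", state)
```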
It is easy to see that the order in which the transactions are executed crucially influences the trade outcomes. However, for the same reason, it is also well-known that decentralized exchange systems suffer from _order manipulation_, where an anonymous miner can manipulate the content of a block, even including inserting their own attacking transactions. Ferreira and Parkes [6] considered the notion of _verifiable sequencing rules_ and proposed a greedy sequencing rule to limit miners' ability to manipulate (therefore in general it also benefits users). We recap their definitions below.
### Sequencing Rules
We start with the definition of the verifiable sequencing rule.
**Definition 1** (Verifiable sequencing rule, [6]).: _A sequencing rule \(R\) is a map from initial state \(s_{0}\) and a set of transactions \(\{\,\mathcal{T}\!\mathsf{X}^{i}\}_{i\in[n]}\) to a set of permutations \(\{\tau:[n]\to[n]\}\), where each permutation is a valid order to execute these transactions under this sequencing rule._
_A sequencing rule is efficiently computable, if there is a polynomial time algorithm that can compute a permutation \(\tau:[n]\to[n]\) that satisfies \(R\) (i.e., \(\tau\in R(s_{0},\{\,\mathcal{T}\!\mathsf{X}^{i}\}_{i\in[n]})\)) for any initial state \(s_{0}\) and transactions \(\{\,\mathcal{T}\!\mathsf{X}^{i}\}_{i\in[n]}\)._
_A sequencing rule is efficiently verifiable, if there is a polynomial time algorithm such that for any permutation \(\tau:[n]\to[n]\), the algorithm accepts \(\tau\) if and only if \(\tau\in R(s_{0},\{\,\textsf{TX}^{i}\}_{i\in[n]})\)._
Along this way, Ferreira and Parkes [6] proposed a greedy sequencing rule (we use GSR to denote it), which is efficiently computable and verifiable.
**Definition 2** (Greedy sequencing rule, [6]).: _A permutation \(\tau\) satisfies the greedy sequencing rule (\(\tau\in\textsf{GSR}(s_{0},\{\,\textsf{TX}^{i}\}_{i\in[n]})\)) if the following conditions hold for all \(i\in[n]\):_
* \(\textsf{TX}^{\tau(i)}\) _is a_ \(\textsf{Sell}(\mathcal{X},\cdot)\) _transaction only if either_ \(x_{i-1}\leq x_{0}\) _or_ \(\textsf{TX}^{\tau(j)}\) _is_ \(\textsf{Sell}(\mathcal{X},\cdot)\) _for all_ \(i<j\leq n\)_; and_
* \(\textsf{TX}^{\tau(i)}\) _is a_ \(\textsf{Sell}(\mathcal{Y},\cdot)\) _transaction only if either_ \(y_{i-1}\leq y_{0}\) _or_ \(\textsf{TX}^{\tau(j)}\) _is_ \(\textsf{Sell}(\mathcal{Y},\cdot)\) _for all_ \(i<j\leq n\)_,_
_where \(s_{i-1}=(x_{i-1},y_{i-1})\) is the state before executing \(\textsf{TX}^{\tau(i)}\)._
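A possible way to realize the "efficiently verifiable" part of this definition is sketched below (our own illustration, not code from the paper), again specialized to a constant-product pool; the function name and the example transactions are assumptions. It simulates an already-ordered transaction list and checks the two conditions step by step.

```python
def satisfies_gsr(x0, y0, ordered_txs, f=0.0):
    """Check Definition 2 for a constant-product pool x*y = k.

    ordered_txs: transactions ('X', q) / ('Y', q) in the order they are executed."""
    k = x0 * y0
    x, y = x0, y0
    for i, (token, q) in enumerate(ordered_txs):
        tail_same_type = all(t == token for t, _ in ordered_txs[i:])
        if token == 'X':
            if not (x <= x0 or tail_same_type):   # Sell(X,.) needs x_{i-1} <= x_0 or an all-X tail
                return False
            x = x + (1 - f) * q
            y = k / x
        else:
            if not (y <= y0 or tail_same_type):   # Sell(Y,.) needs y_{i-1} <= y_0 or an all-Y tail
                return False
            y = y + (1 - f) * q
            x = k / y
    return True

if __name__ == "__main__":
    print(satisfies_gsr(100.0, 100.0, [('X', 5.0), ('Y', 4.0), ('X', 3.0)]))  # True
    print(satisfies_gsr(100.0, 100.0, [('X', 5.0), ('X', 3.0), ('Y', 4.0)]))  # False
```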
Besides efficiency, the greedy sequencing rule enjoys the property that for every transaction, _either_ the amount it receives is as good as if it were the only transaction in the block _or_ it does not suffer from a sandwich attack.
However, it is totally possible for a miner to gain profits by manipulating the content of the block, even if it follows some given sequencing rule (e.g., the greedy sequencing rule). In the rest of the paper, we study the computation of miners' optimal strategies.
## 3 Miner's Strategy Space
We define the miner's strategy space in the most general way. To make the profits of the miner comparable, we assume that there are exogenous prices of \(\mathcal{X}\) (denoted by \(p_{x}\)) and \(\mathcal{Y}\) (denoted by \(p_{y}\)) and the miner wants to collect as much money as possible. Like previous work [7], \(p_{x}\) and \(p_{y}\) are assumed to remain the same during the attack (usually the timeslot for a block, _e.g._, about 12 seconds in Ethereum).
**Definition 3** (Strategy Space).: _Given a sequencing rule \(R\), an initial state \(s_{0}=(x_{0},y_{0})\), and a set of users' transactions \(\{\,\textsf{TX}^{i}\}_{i\in[n]}\), a miner could create \(m\) number of its own transactions \(\{\,\textsf{TX}^{i}\}_{i\in[n+1:n+m]}\), select a subset of all these \(n+m\) transactions \(S\subseteq[n+m]\), compute an order \(\tau\in R(s_{0},\{\,\textsf{TX}^{i}\}_{i\in S})\) (here instead of permutation, \(\tau\) should be a one-to-one mapping from \([|S|]\) to \(S\)) that satisfies the sequencing rule, and execute them under the order \(\tau\)._
_The miner's profit \(U(\{\,\textsf{TX}^{i}\}_{i\in[n+1:n+m]},S,\tau)\) is defined as_
\[\sum_{i\in[|S|],\tau(i)\in[n+1:n+m]}\frac{x_{i-1}-x_{i}}{1-f\cdot 1_{\{x_{i}>x_{i -1}\}}}\cdot p_{x}+\frac{y_{i-1}-y_{i}}{1-f\cdot 1_{\{y_{i}>y_{i-1}\}}}\cdot p_{y},\]
_where \(f\in[0,1)\) is the fraction of trading fees._
Here, \(1_{\{x_{i}>x_{i-1}\}}\) indicates that \(\textsf{TX}^{\tau(i)}\) is a \(\textsf{Sell}(\mathcal{X},\cdot)\) transaction and \(1_{\{y_{i}>y_{i-1}\}}\) indicates that \(\textsf{TX}^{\tau(i)}\) is a \(\textsf{Sell}(\mathcal{Y},\cdot)\) transaction. These two events will not happen simultaneously.
### Arbitrage-Free Interval
In this subsection, we present a clean lemma that characterizes what we call the arbitrage-free interval, which provides the first intuition behind the later proofs. It may also serve as a first step in other decentralized-exchange scenarios concerning miners' strategies, _e.g._, the optimal sandwich attack of a miner who wants to collect money.
Before we state and prove the lemma, we first introduce a notation, which is also used in the subsequent sections. We use \(L_{x}\) to denote the \(x\) such that the marginal exchange rate \(r(L_{x})=\frac{1}{1-f}\frac{p_{x}}{p_{y}}\) and \(R_{x}\) to denote the \(x\) such that \(r(R_{x})=(1-f)\frac{p_{x}}{p_{y}}\).
**Lemma 1**.: _Given the exogenous prices \(p_{x}\) and \(p_{y}\), and the current state \(s^{*}=(x^{*},y^{*})\), miner's optimal profit is positive if and only if \(x^{*}\not\in[L_{x},R_{x}]\). Furthermore, when \(x^{*}<L_{x}\), miner's optimal strategy is to execute \(\texttt{Sell}(\mathcal{X},(L_{x}-x^{*})/(1-f))\); when \(x^{*}>R_{x}\), miner's optimal strategy is to execute \(\texttt{Sell}(\mathcal{Y},(F_{y}(R_{x})-y^{*})/(1-f))\)._
Proof.: We first argue that it suffices for the miner to execute at most one transaction. This is because if miner executes two transactions with the same type (say \(\texttt{Sell}(\mathcal{X},q_{1})\) and \(\texttt{Sell}(\mathcal{X},q_{2})\)), then it is equivalent to execute \(\texttt{Sell}(\mathcal{X},q_{1}+q_{2})\); if miner executes two transactions with different types (say \(\texttt{Sell}(\mathcal{X},q_{1})\) and \(\texttt{Sell}(\mathcal{Y},q_{2})\)), then it is _better_ to replace them by one single transaction since miner can avoid additional cost of trading fees.
So next we consider the case where the miner executes one of its transactions \(\mathsf{TX}\). Suppose that \(\mathsf{TX}=\texttt{Sell}(\mathcal{X},q)\), then miner's profit is
\[U(\mathcal{X},q)=\left(\int_{x^{*}}^{x^{*}+(1-f)q}r(x)dx\right)\cdot p_{y}-q \cdot p_{x}.\]
We show below that when \(x^{*}\geq L_{x}\), \(U(\mathcal{X},q)\leq 0\) for all \(q\geq 0\).
\[U(\mathcal{X},q) =\left(\int_{x^{*}}^{x^{*}+(1-f)q}r(x)dx\right)\cdot p_{y}-q\cdot p _{x}\] \[\leq r(x^{*})(1-f)q\cdot p_{y}-q\cdot p_{x}\] \[\leq r(L_{x})(1-f)q\cdot p_{y}-q\cdot p_{x}\] \[=\frac{1}{1-f}\frac{p_{x}}{p_{y}}(1-f)q\cdot p_{y}-q\cdot p_{x}\] \[=0.\]
Symmetrically we can define \(U(\mathcal{Y},q)\) when miner executes \(\texttt{Sell}(\mathcal{Y},q)\) and conclude that when \(x^{*}\leq R_{x}\), \(U(\mathcal{Y},q)\leq 0\) for all \(q\geq 0\). This finishes the proof that when \(x^{*}\in[L_{x},R_{x}]\), miners cannot obtain positive profits.
Then we consider what is an optimal attack when \(x^{*}\not\in[L_{x},R_{x}]\). Suppose that \(x^{*}<L_{x}\), then by previous argument, the miner should not execute \(\texttt{Sell}(\mathcal{Y},\cdot)\) (as \(x^{*}<L_{x}\leq R_{x}\)). So let's focus on the case where the miner executes \(\texttt{Sell}(\mathcal{X},q)\).
Letting \(x^{\prime}=x^{*}+(1-f)q\), note that
\[U(\mathcal{X},q) =\left(\int_{x^{*}}^{x^{*}+(1-f)q}r(x)dx\right)\cdot p_{y}-q\cdot p _{x}\] \[=\left(\int_{x^{*}}^{L_{x}}r(x)dx+\int_{L_{x}}^{x^{\prime}}r(x)dx \right)\cdot p_{y}-q\cdot p_{x},\]
where \(\left(\int_{x^{*}}^{L_{x}}r(x)dx\right)\cdot p_{y}-(L_{x}-x^{*})/(1-f)\cdot p_{x}\) is the profits that miner can get by executing \(\texttt{Sell}(\mathcal{X},(L_{x}-x^{*})/(1-f))\) as states in the lemma. Next we show that
\[g(x^{\prime})=\left(\int_{L_{x}}^{x^{\prime}}r(x)dx\right)\cdot p_{y}-(x^{\prime }-L_{x})/(1-f)\cdot p_{x}\leq 0\]
for all \(x^{\prime}\).
Note that
\[g(x^{\prime})=\left(F_{y}(L_{x})-F_{y}(x^{\prime})\right)\cdot p_{y}-(x^{ \prime}-L_{x})/(1-f)\cdot p_{x}.\]
So we have
\[g^{\prime}(x^{\prime})=-F_{y}^{\prime}(x^{\prime})p_{y}-p_{x}/(1-f)=r(x^{ \prime})p_{y}-p_{x}/(1-f),\]
which is a decreasing function as \(r(x^{\prime})\) is decreasing. Since \(g^{\prime}(L_{x})=0\), we have the maximal value of \(g\) is at \(L_{x}\), which is \(0\).
This finishes the proof.
## 4 Strategies under Greedy Sequencing Rule
In this section, we systematically analyze the strategic behaviors of the miners who _follow_ the greedy sequencing rule.
We specifically focus on the case that the initial state \(s_{0}=(x_{0},y_{0})\) satisfies \(r(x_{0})=p_{x}/p_{y}\). Note that this is without loss of generality in our context: On the one hand, when \(f=0\), \(L_{x}=R_{x}\) (_i.e._, the arbitrage-free interval becomes an arbitrage-free point). Supported by Lemma 1, if the current \(\mathcal{X}\) reserves are not \(L_{x}\) (\(R_{x}\)), anyone can make money by a single arbitrage transaction, namely, by selling \(\mathcal{X}\) or \(\mathcal{Y}\) to reach the arbitrage-free point. Thus, it is reasonable to think the last transaction ends up with the state \(s_{0}=(x_{0},y_{0})\) satisfying \(r(x_{0})=p_{x}/p_{y}\), which is also the initial state of this attack; On the other hand, when \(f>0\), we show that the NP-hardness holds even if \(r(x_{0})=p_{x}/p_{y}\), let alone the more general case. It is still interesting to consider the case \(r(x_{0})\neq p_{x}/p_{y}\), and we discuss it in the last section (Section 5).
In Section 4.2, we show a polynomial time algorithm to compute an optimal attack in the regime that the fraction of trading fee \(f=0\). Interestingly, it will also _benefit_ the users if the miner follows such a strategy compared to truthfully following the greedy sequencing rule.
In contrast, Section 4.3 shows that when the fraction of trading fee \(f\) is any constant larger than \(0\) (say \(f=0.3\%\) as being used in most Uniswap pools), it is NP-hard to find an optimal strategy.
### Upper Bounds of Optimal Profits
Our main results in this section (Theorem 1 and Theorem 3) will be crucially based on the following lemma, which provides an upper bound of miner's optimal profit (using arbitrary strategy) under the greedy sequencing rule.
Before presenting the lemma, we first define the arbitragable profit for one transaction, inspired by Lemma 1.
**Definition 4** (Arbitragable Profit).: _Given an initial state \(s_{0}=(x_{0},y_{0})\) and a user's transaction \(\mathsf{TX}\), we define the arbitragable profit \(\mathsf{AP}(s_{0},\,\mathsf{TX})\) as follows:_
* _If_ \(\mathsf{TX}=\texttt{Sell}(X,q)\)_, let_ \(x^{\prime}=\max\left\{x_{0}+(1-f)q,R_{x}\right\}\)_. Then_ \(\mathsf{AP}(s_{0},\,\mathsf{TX})\coloneqq(x^{\prime}-R_{x})\cdot p_{x}-\left(F _{y}(R_{x})-F_{y}(x^{\prime})\right)/(1-f)\cdot p_{y}\)_;_
* _If_ \(\mathsf{TX}=\mathsf{Sell}\,\mathcal{U}(\mathcal{Y},q)\)_, let_ \(x^{\prime}=\min\left\{F_{x}(y_{0}+(1-f)q),L_{x}\right\}\)_. Then_ \(\mathsf{AP}(s_{0},\,\mathsf{TX})\coloneqq(F_{y}(x^{\prime})-F_{y}(L_{x}))\cdot p _{y}-(L_{x}-x^{\prime})/(1-f)\cdot p_{x}\)_._
Figure 1 illustrates the intuition behind Arbitragable Profit.
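For a constant-product pool, Definition 4 can also be evaluated in closed form. The snippet below is our own illustrative sketch (the function name, prices, and transactions are assumptions, and the example pool satisfies \(r(x_{0})=p_{x}/p_{y}\)); summing the AP values of a batch of transactions gives the quantity \(M\) used in the lemma below.

```python
from math import sqrt

def arbitragable_profit(x0, y0, tx, px, py, f):
    """AP(s_0, TX) from Definition 4 for the constant-product curve x*y = k."""
    k = x0 * y0
    Lx = sqrt(k * (1 - f) * py / px)
    Rx = sqrt(k * py / ((1 - f) * px))
    token, q = tx
    if token == 'X':
        xp = max(x0 + (1 - f) * q, Rx)
        return (xp - Rx) * px - (k / Rx - k / xp) / (1 - f) * py
    else:
        xp = min(k / (y0 + (1 - f) * q), Lx)
        return (k / xp - k / Lx) * py - (Lx - xp) / (1 - f) * px

if __name__ == "__main__":
    x0 = y0 = 100.0                              # r(x0) = y0/x0 = 1 = px/py
    px, py, f = 1.0, 1.0, 0.003
    txs = [('X', 5.0), ('Y', 2.0), ('X', 0.1)]   # the tiny trade stays inside [L_x, R_x]
    aps = [arbitragable_profit(x0, y0, tx, px, py, f) for tx in txs]
    print(aps, "sum =", sum(aps))                # the sum is the upper bound M of Lemma 2
```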
The lemma below shows that the miner's optimal profit is upper-bounded by the sum arbitragable profits of all users' transactions.
**Lemma 2**.: _Given an initial state \(s_{0}=(x_{0},y_{0})\) with \(r(x_{0})=p_{x}/p_{y}\), a set of users' transactions \(\{\,\mathsf{TX}^{i}\}_{i\in[n]}\), the miner's profit (using arbitrary strategy) under the greedy sequencing rule is upper bounded by \(M(s_{0},\{\,\mathsf{TX}^{i}\}_{i\in[n]})\), where_
\[M(s_{0},\{\,\mathsf{TX}^{i}\}_{i\in[n]})\coloneqq\sum_{i=1}^{n}\mathsf{AP}(s_ {0},\,\mathsf{TX}^{i}).\]
Proof.: Fix arbitrary sequence of (users' and miner's) transactions \((\mathsf{TX}^{\tau(1)},\cdots,\mathsf{TX}^{\tau(k)})\), where \(\mathsf{TX}^{\tau(i)}\) is a user's transaction if \(\tau(i)\in[n]\) and it is the miner's transaction otherwise. Let \(s_{i}=(x_{i},y_{i})\) be the state after executing \(\mathsf{TX}^{\tau(i)}\). Without loss of generality, we assume that \(\mathsf{TX}^{\tau(i)}=\mathsf{Sell}(\mathcal{X},\cdot)\) if and only if \(x_{i-1}\leq x_{0}\) and \(\mathsf{TX}^{\tau(i)}=\mathsf{Sell}(\mathcal{Y},\cdot)\) if and only if \(y_{i-1}\leq y_{0}\) for all \(i\in\{2,\cdots,k\}\). To see it, suppose that for \(k^{\prime}<k\) we have \(\mathsf{TX}^{\tau(i)}=\mathsf{Sell}(\mathcal{X},\cdot)\) and \(x_{i-1}>x_{0}\) for all \(i\in\{k^{\prime}+1,\cdots,k\}\). Then by Lemma 1, we know that miner's profit obtained from \(\mathsf{TX}^{\tau(k^{\prime}+1)},\cdots\mathsf{TX}^{\tau(k)}\) is at most \(0\) (and possibly negative). It means the miner can always choose not to execute these transactions and the profit is as good as before.
We will inductively show that after executing the first \(i\) transactions, the miner's profit \(U_{i}\leq V_{i}\coloneqq\sum_{j\in[i],\tau(j)\in[n]}\mathsf{AP}(s_{0},\, \mathsf{TX}^{\tau(j)})\). This will imply that after executing all \(k\) transactions, miner's profit is upper bounded by \(\sum_{i\in[n]}\mathsf{AP}(s_{0},\mathsf{TX}^{i})\).
We define \(\phi_{i}\) as follows:
\[\phi_{i}=\left\{\begin{array}{ll}(x_{i}-R_{x})\cdot p_{x}+\frac{F_{y}(x_{i} )-F_{y}(R_{x})}{1-f}\cdot p_{y},&x_{i}>R_{x};\\ \frac{x_{i}-L_{x}}{1-f}\cdot p_{x}+(F_{y}(x_{i})-F_{y}(L_{x}))\cdot p_{y},&x_{ i}<L_{x};\\ 0,&x_{i}\in[L_{x},R_{x}].\end{array}\right.\]
Figure 1: Illustration of Arbitrage-Free Interval and the intuition behind Arbitragable Profit.
We will show that \((U_{i}+\phi_{i})-(U_{i-1}+\phi_{i-1})\leq V_{i}-V_{i-1}=\mathtt{AP}(s_{0}, \mathtt{TX}^{\tau(i)})\) for all \(i\in[k]\), which will imply our desired statement \(U_{i}\leq V_{i}\) as \(\phi_{i}\geq 0\) for all \(i\in[k]\). (Here we define \(\mathtt{AP}(s_{0},\mathtt{TX}^{\tau(i)})=0\) if it is a miner's transaction.)
The basis of the induction is trivial as \(U_{0}+\phi_{0}=0\). For the induction step, let's consider arbitrary \(i\in[k]\).
**Case 1:** \(\mathtt{TX}^{\tau(i)}\) is a user's transaction. Then we have \(U_{i}=U_{i-1}\). So it suffices for us to show \(\phi_{i}-\phi_{i-1}\leq\mathtt{AP}(s_{0},\mathtt{TX}^{\tau(i)})\). Suppose that \(\mathtt{TX}^{\tau(i)}=\mathtt{Sell}(\mathcal{X},q)\). Then it must be the case that \(x_{i-1}\leq x_{0}\) due to the greedy sequencing rule. (The other case \(\mathtt{TX}^{\tau(i)}=\mathtt{Sell}(\mathcal{Y},q)\) will be symmetric.) If \(x_{i}\leq x_{0}\), then \(\phi\) in fact did not increase, which means \(\phi_{i}-\phi_{i-1}\leq 0\leq\mathtt{AP}(s_{0},\mathtt{TX}^{\tau(i)})\). If \(x_{i}>x_{0}\), then since \(x_{i-1}\leq x_{0}\), we have \(x_{i}\leq\max\left\{x_{0}+(1-f)q,R_{x}\right\}\). It follows that \(\phi_{i}\leq\mathtt{AP}(s_{0},\mathtt{TX}^{\tau(i)})\), concluding the first case.
**Case 2:** \(\mathtt{TX}^{\tau(i)}\) is a miner's transaction. Then we have \(V_{i}=V_{i-1}\). So it suffices for us to show \(U_{i}-U_{i-1}+\phi_{i}-\phi_{i-1}\leq 0\). Suppose that \(\mathtt{TX}^{\tau(i)}=\mathtt{Sell}(\mathcal{X},q)\); then it must be the case that \(x_{i-1}\leq x_{0}\) due to the greedy sequencing rule. (Again, the other case \(\mathtt{TX}^{\tau(i)}=\mathtt{Sell}(\mathcal{Y},q)\) will be symmetric.)
If \(x_{i-1}\leq x_{i}\leq L_{x}\), then we have in fact \(U_{i}-U_{i-1}+\phi_{i}-\phi_{i-1}=0\) since \(U_{i}-U_{i-1}=\phi_{i-1}-\phi_{i}=-(x_{i}-x_{i-1})/(1-f)\cdot p_{x}+(F_{y}(x_{ i-1})-F_{y}(x_{i}))\cdot p_{y}\).
Now, let's consider the case \(L_{x}\leq x_{i}\). To simplify the analysis, we consider an intermediate state \(s^{\prime}\) with \(U^{\prime}\) and \(\phi^{\prime}\). If \(x_{i-1}\geq L_{x}\), then we just set \(s^{\prime}=s_{i-1}\) with \(U^{\prime}=U_{i-1}\) and \(\phi^{\prime}=\phi_{i-1}\). If \(x_{i-1}<L_{x}\), we split \(\mathtt{TX}^{\tau(i)}\) into two transactions: \(\mathtt{TX}^{\prime}=\mathtt{Sell}(\mathcal{X},(L_{x}-x_{i-1})/(1-f))\) and \(\mathtt{TX}^{\prime\prime}=\mathtt{Sell}(\mathcal{X},(x_{i}-L_{x})/(1-f))\), and we define \(s^{\prime}\), \(U^{\prime}\) and \(\phi^{\prime}\) as that after executing \(\mathtt{TX}^{\prime}\).
Note that we have \(U^{\prime}-U_{i-1}=\phi_{i-1}-\phi^{\prime}\). So we only need to show \(U_{i}-U^{\prime}\leq\phi^{\prime}-\phi_{i}\). Note that in fact \(\phi^{\prime}=0\).
If \(x_{i}\leq R_{x}\), then \(\phi_{i}=\phi^{\prime}=0\). In addition, by Lemma 1, we know that \(U_{i}-U^{\prime}\leq 0\). So we conclude \(U_{i}-U^{\prime}\leq\phi^{\prime}-\phi_{i}\) as desired.
The last possibility is that \(x_{i}>R_{x}\), where we have
\[\phi_{i}=(x_{i}-R_{x})\cdot p_{x}+\frac{F_{y}(x_{i})-F_{y}(R_{x})}{1-f}\cdot p_ {y}.\]
Moreover, by Lemma 1, we know that \(U_{i}-U^{\prime}\leq(R_{x}-x_{i})/(1-f)\cdot p_{x}+(F_{y}(R_{x})-F_{y}(x_{i})) \cdot p_{y}<-\phi_{i}\).
This finishes the proof.
### Polynomial Time Algorithm When \(f=0\)
In this subsection, we show a polynomial time algorithm to find an optimal strategy for the miner when \(f=0\). Interestingly, when adopting our algorithm, users will also benefit in the following sense: all users' transactions will be executed (_i.e._, they will be included in the block), and every user does at least as well as if their transaction were the only one in the block. The latter is generally not true if the miner truthfully follows the greedy sequencing rule.
**Theorem 1**.: _When the fraction of trading fee \(f=0\), Algorithm 1 finds an optimal strategy under the greedy sequencing rule in polynomial time, and the optimal profit is equal to the upper bound \(M(s_{0},\{\,\mathtt{TX}^{i}\}_{i\in[n]})\)._
**Remark 1**.: _Before going into details of the proof, we note that our algorithm can obtain the optimal profit \(M(s_{0},\{\,\mathtt{TX}^{i}\}_{i\in[n]})\) under arbitrary order of users' transactions \(\{\,\mathtt{TX}^{i}\}_{i\in[n]}\). So it still works even if there are some constraints on the execution order of certain transactions (e.g., a user may create two transactions \(\{\,\mathtt{TX}^{1},\,\mathtt{TX}^{2}\}\) and specify that \(\mathtt{TX}^{1}\) must be executed before \(\mathtt{TX}^{2}\))._
Proof of Theorem 1.: We first show that the sequence given by Algorithm 1 satisfies the greedy sequencing rule. Note that after executing each user's transaction \(\mathsf{TX}^{\tau(i)}\), we always execute a miner's transaction with the opposite direction, shown between line 4 and 7. Besides, at the end of \(i\)-th iteration, we have the state \(s_{2i}=s_{0}\) (we use \(2i\) because we execute two transactions in each iteration). So our sequence satisfies the greedy sequencing rule. Furthermore, during the \(i\)-th iteration, we obtain exactly \(\mathsf{AP}(s_{0},\mathsf{TX}^{\tau(i)})\) profits by executing the transaction on line 5 or 7. Then the optimality follows from the same upper bound provided by Lemma 2.
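The listing of Algorithm 1 itself does not appear in this text; the sketch below is our reconstruction from the proof above (an assumption, not the authors' code), specialized to a constant-product pool with \(f=0\) and \(r(x_{0})=p_{x}/p_{y}\). Each user transaction is immediately back-run by a miner transaction of the opposite type that returns the reserves to \(s_{0}\), and the miner pockets exactly \(\mathsf{AP}(s_{0},\mathsf{TX})\) per user transaction, so the total profit matches the upper bound \(M\) of Lemma 2.

```python
def optimal_strategy_f0(x0, y0, user_txs, px, py):
    """Reconstruction (assumed) of Algorithm 1 for x*y = k, f = 0, r(x0) = px/py.

    Returns (sequence, miner_profit); the sequence alternates user tx / miner back-run."""
    k = x0 * y0
    sequence, profit = [], 0.0
    for token, q in user_txs:                 # any order of the users' transactions works
        if token == 'X':                      # user sells X: reserves move to (x0+q, k/(x0+q))
            x, y = x0 + q, k / (x0 + q)
            miner_tx = ('Y', y0 - y)          # miner sells Y, restoring (x0, y0)
            profit += q * px - (y0 - y) * py  # = AP(s_0, TX) when f = 0
        else:                                 # user sells Y: reserves move to (k/(y0+q), y0+q)
            y, x = y0 + q, k / (y0 + q)
            miner_tx = ('X', x0 - x)          # miner sells X, restoring (x0, y0)
            profit += q * py - (x0 - x) * px  # = AP(s_0, TX) when f = 0
        sequence += [('user', (token, q)), ('miner', miner_tx)]
        # after each pair the state s_{2i} equals s_0, so the greedy rule keeps holding
    return sequence, profit

if __name__ == "__main__":
    x0 = y0 = 100.0                           # made-up pool with r(x0) = 1 = px/py
    seq, u = optimal_strategy_f0(x0, y0, [('X', 5.0), ('Y', 3.0), ('X', 2.0)], 1.0, 1.0)
    print(u)                                  # equals the upper bound M of Lemma 2 for f = 0
```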
Now we turn to the _positive_ effects on users when a miner launches an optimal strategy given by Algorithm 1. We summarize them as the following two corollaries and omit the proofs as they are relatively straightforward from the proof of Theorem 1.
**Corollary 1**.: _When a miner launches an optimal strategy given by Algorithm 1, all users' transactions \(\{\,\mathsf{TX}^{i}\}_{i\in[n]}\) will be executed._
**Corollary 2**.: _When a miner launches an optimal strategy given by Algorithm 1, each user's profit is as good as if their transaction was the only transaction in the block._
As shown in Theorem 1, Corollary 1, and Corollary 2, both the miner and the users are satisfied when the miner adopts our Algorithm 1.
### NP-hardness When \(f>0\)
In this subsection, we show the computational hardness of finding an optimal strategy when the fraction of trading fees is any constant larger than \(0\) (say \(f=0.3\%\)).
We will mainly focus on the proof of the NP-completeness of the following decision problem, then Theorem 3 will follow directly.
**Theorem 2**.: _Let \(f\in(0,1)\) be any universal constant. It is NP-complete to decide if there is a strategy that can obtain profits \(M(s_{0},\{\,\mathsf{TX}^{i}\}_{i\in[n]})\) for any initial state \(s_{0}=(x_{0},y_{0})\) and users' transactions \(\{\,\mathsf{TX}^{i}\}_{i\in[n]}\)._
Proof.: The NP-membership is easy. Given any strategy, we can efficiently simulate the execution of the sequence of transactions and check if the final profit is \(M(s_{0},\{\,\mathsf{TX}^{i}\}_{i\in[n]})\) or not.
For the NP-hardness, we reduce the Partition problem to our problem. Recall that an instance of the Partition problem contains \(n\) positive integers and asks whether they can be partitioned into two subsets \(S_{1}\) and \(S_{2}\) such that the sum of the numbers in \(S_{1}\) equals that in \(S_{2}\).
Suppose we are given arbitrary \(n\) positive integers \(\{a_{1},\cdots,a_{n}\}\). Let \(t\) be half of the sum of these integers, _i.e._, \(\frac{1}{2}\sum_{i=1}^{n}a_{i}\). Without loss of generality, we assume that \(a_{i}\leq t\) for all \(i\in[n]\) otherwise the answer to the decision problem will directly be "no".
We first construct a CFMM \(A\) and initial state \(s_{0}\). Concretely, we can consider the constant curve of \(A\) as \(F(x,y):xy=k\), and our goal is to choose parameters such that \(x_{0}-L_{x}=(1-f)t\). Precisely, we know that \(L_{x}=\sqrt{1-f}x_{0}\), since \(r(L_{x})=\frac{1}{1-f}r(x_{0})\). This means \(x_{0}-L_{x}=(1-\sqrt{1-f})x_{0}\). So choosing \(x_{0}=\frac{1-f}{1-\sqrt{1-f}}t\) would suffice.
Next, we construct users' transactions. For each integer \(a_{i}\), we construct \(\mathsf{TX}^{i}=\mathsf{Sell}(\mathcal{X},a_{i})\). Clearly, we have \(\mathsf{AP}(s_{0},\mathsf{TX}^{i})=0\) as \((1-f)a_{i}\leq(1-f)t=x_{0}-L_{x}\leq R_{x}-x_{0}\). Then we construct two \(\mathsf{Sell}(\mathcal{Y},\cdot)\) transactions. Precisely, we construct \(\mathsf{TX}^{n+1}=\mathsf{TX}^{n+2}=\mathsf{Sell}(\mathcal{Y},q^{*})\) where \(q^{*}\) is large enough such that \(F_{x}(y_{0}+(1-f)q^{*})<L_{x}\). Then we know \(\mathsf{AP}(s_{0},\mathsf{TX}^{n+1})=\mathsf{AP}(s_{0},\mathsf{TX}^{n+2})>0\). This finishes the construction. And we know \(M(s_{0},\{\mathsf{TX}^{i}\}_{i\in[n+2]})=2\mathsf{AP}(s_{0},\mathsf{TX}^{n+1})\).
Finally, we argue that there exists a strategy obtaining profits \(M(s_{0},\{\mathsf{TX}^{i}\}_{i\in[n+2]})\) if and only if there exists a subset \(S\subseteq[n]\) such that the sum of the numbers in \(S\) equals \(t\). And this will conclude the theorem.
One direction is easy: if there exists \(S\subseteq[n]\) such that the sum of the numbers in \(S\) equals \(t\), then we execute transactions as follows:
1. Execute user's transaction \(\mathsf{TX}^{n+1}\); Execute miner's transaction \(\mathsf{Sell}(\mathcal{X},\frac{L_{x}-F_{x}(y_{0}+(1-f)q^{*})}{1-f})\);
2. Execute \(\mathsf{TX}^{i}\) for all \(i\in S\);
3. Repeat item (1) except replacing \(\mathsf{TX}^{n+1}\) by \(\mathsf{TX}^{n+2}\).
It is easy to verify that this sequence satisfies the greedy sequencing rule, and the miner can obtain \(M(s_{0},\{\mathsf{TX}^{i}\}_{i\in[n+2]})\).
For the other direction, we show that the sequence of transactions constructed above is essentially the only way to obtain \(M(s_{0},\{\mathsf{TX}^{i}\}_{i\in[n+2]})\). So a miner can obtain \(M(s_{0},\{\mathsf{TX}^{i}\}_{i\in[n+2]})\) only if the answer to the given Partition problem is "yes".
We adopt a proof scheme similar to that of Lemma 2. Fix a sequence of (users' and miner's) transactions \((\mathsf{TX}^{\tau(1)},\cdots,\mathsf{TX}^{\tau(k)})\) such that miner's profit \(U=2\mathsf{AP}(s_{0},\mathsf{TX}^{n+1})\). Recall that in the proof of Lemma 2, we defined \(\phi_{i}\) and showed \(U_{i}+\phi_{i}-(U_{i-1}+\phi_{i-1})\leq\mathsf{AP}(s_{0},\mathsf{TX}^{i})\) for all \(i\in[k]\). Since \(U_{k}=2\mathsf{AP}(s_{0},\mathsf{TX}^{n+1})\) at the end, it must be the case \(U_{i}+\phi_{i}=V_{i}\) for all \(i\in[k]\) and \(\phi_{k}=0\). As a result, the sequence of transactions must satisfy that
* The miner does not lose profit for any transaction; otherwise the loss of the profit is strictly larger than the gain of the \(\phi\) function, and this will result in \(U_{i}+\phi_{i}<V_{i}\) for some \(i\).
* There are \(i_{1}\neq i_{2}\in[k]\) such that \(\phi_{i_{1}}=\phi_{i_{2}}=\mathsf{AP}(s_{0},\mathsf{TX}^{n+1})\). This means when execute \(\mathsf{TX}^{n+1}\) and \(\mathsf{TX}^{n+2}\), the corresponding state must be \((x_{0},y_{0})\).
To achieve both items simultaneously, it must be \((x_{i_{1}-1},y_{i_{1}-1})=(x_{0},y_{0})\) and \(\mathsf{TX}^{n+1}\) is executed as \(\mathsf{TX}^{\tau(i_{1})}\). To get the first \(\mathsf{AP}(s_{0},\mathsf{TX}^{n+1})\) profit, the miner executes \(\mathsf{Sell}(\mathcal{X},\frac{L_{x}-F_{x}(y_{0}+(1-f)q^{*})}{1-f})\) in the \((i_{1}+1)\)-th iteration. To make sure that \((x_{i_{2}-1},y_{i_{2}-1})=(x_{0},y_{0})\) (and \(\mathsf{TX}^{n+2}\) is executed as \(\mathsf{TX}^{\tau(i_{2})}\)) while the miner does not lose any profit in this process, we must use users' transactions to change
the state from \(x_{i_{1}}=L_{x}\) to \(x_{i_{2}-1}=x_{0}\), which means we need a subset \(S\) of users' transactions such that the sum of numbers in \(S\) is exactly \(t\).
This finishes the proof.
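To make the "yes" direction concrete, the following small numerical sketch is our own illustration (not code from the paper) with a hypothetical Partition instance: it builds the pool as in the construction above and replays the three-step sequence, checking that the miner's profit equals \(2\mathsf{AP}(s_{0},\mathsf{TX}^{n+1})=M\).

```python
from math import sqrt

f = 0.003
a = [3.0, 1.0, 4.0, 2.0, 5.0, 3.0]            # hypothetical Partition instance
t = sum(a) / 2                                # target half-sum, here 9
S = [3.0, 4.0, 2.0]                           # a balanced subset with sum(S) == t

x0 = (1 - f) / (1 - sqrt(1 - f)) * t          # chosen so that x0 - L_x = (1-f) * t
y0, px, py = x0, 1.0, 1.0                     # then r(x0) = y0/x0 = 1 = px/py
k = x0 * y0
Lx = sqrt(1 - f) * x0                         # r(L_x) = r(x0)/(1-f)
qstar = (k / (0.9 * Lx) - y0) / (1 - f)       # large enough that F_x(y0 + (1-f) q*) < L_x

def sell(x, y, token, q):
    """Execute Sell(token, q) on the constant-product pool with fee f."""
    if token == 'X':
        xn = x + (1 - f) * q
        return xn, k / xn
    yn = y + (1 - f) * q
    return k / yn, yn

profit, (x, y) = 0.0, (x0, y0)
for step in range(2):                         # items (1) and (3): TX^{n+1}, then TX^{n+2}
    x, y = sell(x, y, 'Y', qstar)             # user's large Sell(Y, q*)
    qm = (Lx - x) / (1 - f)                   # miner back-runs up to L_x ...
    x2, y2 = sell(x, y, 'X', qm)
    profit += (y - y2) * py - qm * px         # ... collecting AP(s_0, TX^{n+1})
    x, y = x2, y2
    if step == 0:
        for ai in S:                          # item (2): the subset S moves x from L_x to x0
            x, y = sell(x, y, 'X', ai)

ap = (y0 + (1 - f) * qstar - k / Lx) * py - (Lx - k / (y0 + (1 - f) * qstar)) / (1 - f) * px
print(profit, 2 * ap, abs(profit - 2 * ap) < 1e-6)   # profit equals M = 2 * AP up to rounding
```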
Theorem 3 follows directly by simulating any algorithm that computes an optimal strategy and calculates the profits to solve the decision problem.
**Theorem 3**.: _Let \(f\in(0,1)\) be any universal constant. It is NP-hard to compute the strategy that can obtain the optimal profits._
## 5 Discussion and Open Problems
**Refined Sequencing Rule.** Our first question is related to mechanism design, motivated by a revisit of our polynomial time algorithm when \(f=0\). Recall that our algorithm can always obtain the upper bound profits, even if the miner is asked to follow the greedy sequencing rule such that the sequence is additionally under a descending order. Thus, we would like to ask if there is some sequencing rule (that is computationally efficient and verifiable) that can further mitigate the miner's incentive to manipulate. We propose the following way to build a theoretical foundation when considering real-world applications. We could consider the case where users' transactions are drawn from a certain distribution \(\mathcal{D}\) (witnessed by real-world DeFi scenarios), and show that under the refined greedy sequencing rule, miners cannot obtain large profits with high probability. We leave it as a promising open question.
**Approximation Algorithm for Miners.** It is also worthwhile to study approximation algorithm design for miners. Our NP-hardness rules out the possibility for a miner to have a polynomial time algorithm for an optimal strategy (assuming P\(\neq\)NP). However, it remains possible to design a polynomial time algorithm with a good approximation guarantee. This strategy exploration allows miners to develop efficient algorithms that can yield MEV close to that of the optimal strategy. As the optimal MEV problem shares a similar spirit with the Knapsack problem, one promising direction is to apply the classic approximation algorithms to our setting.
**User's Strategies.** The third question is about strategic analysis from the perspective of users. In this work, we systematically studied the optimal strategies of miners. We also note that there is fruitful space for a user to adopt strategies. For example, a user who wants to sell a large amount of \(\mathcal{X}\) tokens may have an incentive to split it into several smaller transactions, and this may lead them to a higher profit under the greedy sequencing rule. Generally speaking, we wonder what is an optimal strategy for a user under certain sequencing rules. Different from the miner's incentive, multiple users are making decisions simultaneously, which forms a multi-agent system. One step further than one user's optimization, we ask what the equilibrium is when all users behave strategically. The game theory problem between users and miners under specific sequencing rules is also an intriguing question.
**Other Scenarios where MEV Makes Everyone Happy.** Finally, recall our exciting journey about the positive effects of MEV: when a miner extracts MEV (optimally), users also benefit in a reasonable sense (Corollary 1 and Corollary 2). The intuition behind this phenomenon is that although the existence of MEV incentivizes miners to engage in attacking behaviors, when a good sequencing rule restricts miners' actions and prevents them from harming users' profits, the presence of MEV itself can benefit users. In this case, MEV not only does not harm users but can expedite the execution of user transactions, as miners have the motivation to execute more
transactions (to obtain MEV). We expect and are eager to know a wider range of scenarios where the same conceptual result also holds. We leave this as the most important future work.
| Trading through decentralized exchanges (DEXs) plays a crucial role in today's blockchain ecosystem, enabling users to swap tokens efficiently and automatically. However, the ability of miners to strategically order transactions can lead to exploitative practices (e.g., front-running attacks, sandwich attacks) and lets them capture substantial Maximal Extractable Value (MEV) for their own advantage. To avoid such manipulation, Ferreira and Parkes recently proposed a greedy sequencing rule, under which the execution price of transactions in a block moves back and forth around the starting price. Adopting this sequencing rule makes it impossible for miners to conduct sandwich attacks and thus mitigates the MEV problem. However, no sequencing rule can prevent miners from obtaining risk-free profits. This paper studies, under the greedy sequencing rule, the miner's optimal
2309.03995 | First-principle Study of Multiple Metastable Charge Ordering States in
La$_{1/3}$Sr$_{2/3}$FeO$_{3}$ | La doped SrFeO$_{3}$, La$_{1/3}$Sr$_{2/3}$FeO$_{3}$, exhibits a
metal-to-insulator transition accompanied by both antiferromagnetic and charge
ordering states along with the Fe-O bond disproportionation below a critical
temperature near 200K. Unconventionally slow charge dynamics measured in this
material near the critical temperature shows that its excited charge ordering
states can exhibit novel electronic structures with nontrivial energy profiles.
Here, we reveal possible metastable states of charge ordering structures in
La$_{1/3}$Sr$_{2/3}$FeO$_{3}$ using the first-principle and climbing image
nudged elastic band methods. In the strong correlation regime,
La$_{1/3}$Sr$_{2/3}$FeO$_{3}$ is an antiferromagnetic insulator with a charge
ordering state of the big-small-big pattern, consistent with the experimental
measurement of this material at the low temperature. As the correlation effect
becomes weak, we find at least two possible metastable charge ordering states
with the distinct Fe-O bond disproportionation. Remarkably, a ferroelectric
metallic state emerges with the small energy barrier of $\sim$7 meV, driven by
a metastable CO state of the small-medium-big pattern. The electronic
structures of these metastable charge ordering states are noticeably different
from those of the ground-state. Our results can provide an insightful
explanation to multiple metastable charge ordering states and the slow charge
dynamics of this and related oxide materials. | Nam Nguyen, Alex Taekyung Lee, Vijay Singh, Anh T. Ngo, Hyowon Park | 2023-09-07T19:58:28 | http://arxiv.org/abs/2309.03995v1 | First-principle study of multiple metastable charge ordering states in La\({}_{1/3}\)Sr\({}_{2/3}\)FeO\({}_{3}\)
###### Abstract
La doped SrFeO\({}_{3}\), La\({}_{1/3}\)Sr\({}_{2/3}\)FeO\({}_{3}\), exhibits a metal-to-insulator transition accompanied by both antiferromagnetic and charge ordering states along with the Fe-O bond disproportionation below a critical temperature near 200K. Unconventionally slow charge dynamics measured in this material near the critical temperature [Nature Communications, **9** 1799 (2018)] shows that its excited charge ordering states can exhibit novel electronic structures with nontrivial energy profiles. Here, we reveal possible metastable states of charge ordering structures in La\({}_{1/3}\)Sr\({}_{2/3}\)FeO\({}_{3}\) using the first-principle and climbing image nudged elastic band methods. In the strong correlation regime, La\({}_{1/3}\)Sr\({}_{2/3}\)FeO\({}_{3}\) is an antiferromagnetic insulator with a charge ordering state of the big-small-big pattern, consistent with the experimental measurement of this material at the low temperature. As the correlation effect becomes weak, we find at least two possible metastable charge ordering states with the distinct Fe-O bond disproportionation. Remarkably, a ferroelectric metallic state emerges with the small energy barrier of \(\sim\)7meV, driven by a metastable CO state of the small-medium-big pattern. The electronic structures of these metastable charge ordering states are noticeably different from those of the ground-state. Our results can provide an insightful explanation to multiple metastable charge ordering states and the slow charge dynamics of this and related oxide materials.
## I Introduction
Charge ordering (CO) or charge density wave (CDW) is an intriguing material property driven by a spontaneous symmetry breaking of the periodicity in crystals. In strongly correlated materials, the charge degree of freedom is typically coupled to other degrees of freedom including spin, orbital, or lattice. While the origin of CDW can be purely electronic and the electronic correlation plays an important role, it is often accompanied by structural distortions such as the bond-order or the Peierls transition, possibly leading to ferroelectricity. Indeed, the combination of CDW, spin density wave (SDW), and the bond order has been proposed as the mechanism of ferroelectricity [1; 2; 3].
La\({}_{1/3}\)Sr\({}_{2/3}\)FeO\({}_{3}\) (LSFO) is a transition metal oxide with a perovskite structure undergoing a weakly first-order transition at a temperature \(T\)=200 K, from a paramagnetic metallic state with the average valence state of Fe\({}^{3.67+}(d^{4.3})\) at a high temperature to an antiferromagnetic (AFM) insulating state with a CO state of Fe\({}^{3+}(d^{5})\):Fe\({}^{5+}(d^{3})\)=2:1 at a low temperature [4]. Structural properties of LSFO with or without CO phases have been characterized experimentally using X-ray diffraction, neutron diffraction, and electron microscopy. The studies of X-ray and neutron diffraction [5; 6; 7; 8; 9; 10; 11] showed that bulk LSFO forms a rhombohedral structure in the space group of \(R\bar{3}c\) (see Fig. 1) with the lattice constants \(a=5.47\) Å and \(c=13.35\) Å. A sign of the CDW spanning the periodicity of three Fe ions accompanied by SDW with a periodicity of six Fe ions was measured along the pseudocubic [111] direction, but there was no clear evidence of structural distortions. Later, the electron microscopy study by Li _et al._[12] revealed structural distortions along the pseudocubic [111] direction in real space upon the CDW transition. Finally, the neutron diffraction studies by Sabyasachi _et al._[6] and Yang _et al._[8] also showed the possibility of a meta-stable CO state due to multiple neutron peaks below the critical temperature.
Electronic properties of LSFO in the low-temperature CO phase have been characterized by various experiments. The study of optical spectroscopy by Ishikawa _et al._[13] showed that the optical gap of LSFO was about 0.13 eV at low temperature. The studies of Mössbauer spectroscopy [5; 6; 7; 8; 9; 11; 14; 15] captured two kinds of Fe ions with different hyperfine fields, confirming the charge disproportionation below the critical temperature. A recent ultrafast X-ray measurement in LSFO by Zhu _et al._ has shown that a noticeable slowdown occurs during the relaxation of CO near the critical temperature [16]. They argued that the photoexcitation due to an ultrafast pump can drive the ground state of La\({}_{1/3}\)Sr\({}_{2/3}\)FeO\({}_{3}\) into metastable states with different spin/charge orderings, which can be the origin of the slowdown in the relaxation process. According to Yamamoto _et al._, [17] these metastable or transient states are the CO in the sequence Fe\({}^{4+}\)Fe\({}^{3+}\)Fe\({}^{4+}\). However, the magnetic moments as well as the spin states, i.e. high spin (HS) or low spin (LS), of these Fe\({}^{4+}\) and Fe\({}^{3+}\) ions were unknown. In general, the slow dynamics of CO can originate from multiple meta-stable CO states accessible during the relaxation process.
Unlike these various experimental characterizations, theoretical studies of LSFO have been rather limited. The Hartree-Fock study by Matsuno _et al._ [18] captured an energy gap of 0.14 eV, in good agreement with the experimental gap at low temperature. The first-principles studies of density functional theory plus the Hubbard \(U\) (DFT+\(U\)) by Zhu _et al._ [16] and Saha-Dasgupta _et al._ [19] verified the presence of structural modulation or oxygen breathing distortions accompanied by CO of Fe ions in a sequence of Fe\({}^{3+}\)Fe\({}^{5+}\)Fe\({}^{3+}\). They also found that another sequence of CO is possible, namely Fe\({}^{4+}\)Fe\({}^{3+}\)Fe\({}^{4+}\). These CO states are strongly coupled to the spin states, as the Fe ion with a larger charge state shows the high-spin state with the Fe-O bond elongation. Finally, the possibility of ferroelectricity in La\({}_{1/3}\)Sr\({}_{2/3}\)FeO\({}_{3}\) was pointed out by Park _et al._ [20] by rearranging the La/Sr layers. Nevertheless, the effect of electronic correlations on the stability of CO states and the emergence of novel metastable states, such as ferroelectricity, which can be accessible from photo-excitation experiments, has not been studied from first principles.
In this work, we study the effect of electron correlations on structural and electronic properties of LSFO, which has strong charge-spin-lattice coupling, by adopting the first-principles DFT+\(U\) method. In particular, we explore possible meta-stable CO phases driven by a new pattern of structural distortions by adopting the climbing image nudged elastic band (CINEB) method along with DFT+\(U\). Remarkably, we find a new electronic phase in LSFO exhibiting ferroelectricity, driven by a small-medium-big CO pattern and a distinct Fe-O bond disproportionation with small-medium-big magnetic moments. This new meta-stable phase is almost degenerate in energy with the previously known CO phases, with a small energy barrier of \(5\sim 7\) meV, implying promising tunability of this material for future electronic devices.
## II Methods
### First-principle calculation
To perform the structural relaxation and the band structure calculations of LSFO, we adopt DFT+\(U\)[21] based on the projected-augmented wave (PAW) method[22] as implemented in the Vienna _ab initio_ simulation package (VASP)[23; 24]. The exchange-correlation energy functional was treated using the generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) functional[25]. The cutoff energy for the plane-wave basis was set to 600 eV, and a Gamma-centered 8\(\times\)8\(\times\)2 \(k\)-point mesh[26] was used for all calculations. For structural relaxations, the Hellmann-Feynman force[27] on each atom was required to be smaller than 0.01 eV/Å. To treat the correlation effect of the Fe \(d\) orbitals, we impose the Hubbard \(U\) and the Hund's coupling \(J\) within DFT+\(U\).
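As an illustration of the computational setup just described, the following is a minimal sketch assuming the ASE interface to VASP; the structure file, the AFM magnetic initialization, and any input tags not mentioned in the text are placeholders rather than the authors' actual input.

```python
# Minimal DFT+U setup sketch, assuming ASE's VASP calculator (ase.calculators.vasp.Vasp).
# The POSCAR path is a placeholder; MAGMOM for the AFM order is omitted for brevity.
from ase.io import read
from ase.calculators.vasp import Vasp

lsfo = read("POSCAR")  # 30-atom rhombohedral La1/3Sr2/3FeO3 cell (placeholder file)

calc = Vasp(
    xc="pbe",              # PBE exchange-correlation functional
    encut=600,             # plane-wave cutoff, eV
    kpts=(8, 8, 2),        # Gamma-centered k-mesh
    gamma=True,
    ispin=2,               # collinear spin polarization
    ldau=True,             # DFT+U on Fe d orbitals
    ldau_luj={"Fe": {"L": 2, "U": 5.0, "J": 1.0},   # CO1 regime; use U=3.0, J=0.6 for CO3
              "La": {"L": -1, "U": 0.0, "J": 0.0},
              "Sr": {"L": -1, "U": 0.0, "J": 0.0},
              "O":  {"L": -1, "U": 0.0, "J": 0.0}},
    ibrion=2, isif=4,      # relax ions and cell shape at fixed volume
    ediffg=-0.01,          # stop when forces are below 0.01 eV/Angstrom
)
lsfo.calc = calc
energy = lsfo.get_potential_energy()  # triggers the relaxation run
```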
As noted from the previous study of Ref. [16], two distinct CO structures (CO1 and CO3, see Fig. 2) can be obtained in LSFO by relaxing the crystal structure imposing different \(U\) values on the Fe ions. For the ground-state CO1 structure, we used \(U\)=5eV and \(J\)=1eV, while a distinct CO3 structure is obtained using \(U\)=3eV and \(J\)=0.6eV. Both the crystal shape and ionic positions were relaxed during the structural relaxation, while the crystal volume of LSFO was fixed to 329.24 A\({}^{3}\). To obtain a meta-stable CO phase (CO2, see Fig. 2), we adopt the CINEB method along with the DFT+\(U\) using \(U\)=3.62eV (see Sec. III.2). We also explore the effect of \(U\) values (\(J=0.2U\)) on the stability of different CO phases (see Sec. III.3).
### Energy calculation along a structural path
To obtain the minimum energy curve along a structural path and explore possible metastable structures, we adopt the CINEB method along with DFT+\(U\). The nudged elastic band (NEB) method is an efficient tool for finding the minimum energy path between two stable structures, i.e. a given initial (reactant) and final (product) state[28; 29]. The CINEB method is a small modification of the NEB method that adds no significant computational cost[30]. The CINEB method yields rigorous convergence to a saddle point, which has a maximum energy along the band but a minimum in all other directions. The other images in the band serve to define the one degree of freedom along which the energy of the climbing image is maximized. In this work, we adopt the CINEB method to explore metastable CO states with distinct structural distortions following a computed structural path and to compute the energy barrier along the path. We obtain the structural path by defining two stable CO structures relaxed with different initial conditions and constructing an energy path between the two structures using the CINEB method.
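A minimal sketch of such a CINEB run, assuming ASE's NEB implementation with the climbing-image option; the endpoint files, the number of images, and the `attach_dft_calculator` helper are hypothetical placeholders, not the authors' workflow.

```python
# Sketch of a climbing-image NEB run between two relaxed CO endpoint structures.
from ase.io import read
from ase.neb import NEB
from ase.optimize import FIRE

initial = read("CO1/CONTCAR")          # relaxed CO1 endpoint (placeholder path)
final = read("CO3/CONTCAR")            # relaxed CO3 endpoint (placeholder path)
n_images = 9

images = [initial] + [initial.copy() for _ in range(n_images)] + [final]
neb = NEB(images, climb=True)          # climb=True activates the climbing-image variant
neb.interpolate()                      # start from linearly interpolated positions

for image in images[1:-1]:
    attach_dft_calculator(image)       # hypothetical helper: DFT+U at U=3.62 eV, J=0.724 eV

FIRE(neb).run(fmax=0.05)               # relax the band; the climbing image gives the barrier
```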
### Order Parameter
While ferroelectricity is a phenomenon driven by the spontaneous polarization of materials, the polarization calculation in a periodic system requires a careful treatment of the formula[31]. At the same time, the inversion symmetry breaking of a structure is a clear indication of the spontaneous polarization. While we are not interested in obtaining the quantitative value of the polarization in this work, we will investigate the displacements of Fe and O planes in the rhombohedral unit cell along the [111]\({}_{c}\) direction (Figure 1), where the Fe plane distortion occurs below the critical temperature.
The displacements of the Fe and O planes are investigated in the following way. First, we confirm that the CO1 and CO3 structures are centrosymmetric and define the central plane \(C\) as the midpoint between the Fe1 and Fe6 planes. Next, we generate the reference (dashed-line) planes, which are equidistant and correspond to the Fe and O planes of the undistorted high-temperature structure (see Figure 2). Then, we can quantify how much the Fe and O planes are displaced from the dashed lines. We define the total displacement per unit cell (\(\Delta_{tot}\)) for these Fe and O planes (see Table 1). For the CO2 structure, \(\Delta_{tot}\) is finite due to the inversion symmetry breaking, also implying the emergence of ferroelectricity.
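The following is a small sketch of how this plane-displacement order parameter could be evaluated from fractional coordinates; the reference-plane construction used here is a simplifying assumption, not the authors' exact definition relative to the central plane \(C\).

```python
# Sketch: displacements of Fe/O planes along the rhombohedral c-axis ([111]_c)
# from equidistant reference planes, summed into Delta_tot.
import numpy as np

def plane_displacements(z_frac, c_length):
    """z_frac: fractional c-coordinates of the plane centers; c_length: c parameter (Angstrom)."""
    z = np.sort(np.asarray(z_frac, dtype=float))
    n = len(z)
    # equidistant reference planes of the undistorted high-temperature structure,
    # anchored so that their mean coincides with the mean of the actual planes
    ref = np.arange(n) / n
    ref = ref - ref.mean() + z.mean()
    return (z - ref) * c_length                    # displacements in Angstrom

def delta_tot(fe_z, o_z, c_length):
    return plane_displacements(fe_z, c_length).sum() + plane_displacements(o_z, c_length).sum()

# toy usage with made-up fractional coordinates (zero only for centrosymmetric structures)
print(delta_tot([0.01, 0.17, 0.33, 0.50, 0.67, 0.83],
                [0.09, 0.25, 0.41, 0.58, 0.74, 0.91], c_length=13.19))
```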
## III Results and Discussions
### Structural Relaxations
The study of the neutron diffraction measurement by Battle _et al._ [5] at room temperature showed that bulk LSFO forms a rhombohedral structure of the space group \(R\bar{3}c\) (Figure 1) with lattice constants of \(a=5.47\) Å and \(c=13.35\) Å. The rhombohedral unit cell of LSFO has 30 atoms including two La, four Sr, six Fe, and eighteen O ions. The c-axis of the rhombohedral unit cell is equivalent to the \([111]_{c}\) direction of the pseudocubic one, which is conventionally adopted in the literature. Thus, the cubic \([111]_{c}\) direction will also be adopted in this paper. Above the CO critical temperature, all Fe ions are equivalent and have the same Fe-O bond lengths. As the temperature is lowered below T\({}_{CO}\), both SDW and CDW orders develop along the \([111]_{c}\) direction. While the CDW order spans the periodicity of three Fe ions, the antiferromagnetic SDW repeats in the unit cell of six Fe ions, which is commensurate with the crystal lattice periodicity. As a result, the space group of the crystal structure is lowered to \(P\bar{3}m1\) (trigonal, No. 164) with the point group symmetry of \(D_{3d}\), while the crystal remains centrosymmetric. Here, we find that three distinct CO phases can be stable in LSFO with the same commensurate modulations of the SDW and CDW, and the stabilities of these CO structures depend on the electronic correlation effect (the Hubbard \(U\) values).
As already noted from Ref. [16], two distinct centrosymmetric CO structures (CO1 and CO3, see Fig. 2) can be obtained by relaxing them with different Hubbard \(U\) values in DFT+\(U\). To explore other metastable CO phases and the energy barriers, these two CO1 and CO3 structures will be used as the two reference structures in the CINEB method. The first stable structure, CO1 (charge ordering 1), was obtained in a strongly correlated regime with \(U\)=5 eV and \(J\)=1 eV, values which have been used for LSFO in the literature [16; 19; 20].
Figure 2: The schemetics of Fe magnetic moments and the displacements of Fe/O planes for different CO1, CO2, and CO3 phases along the \([111]_{c}\) direction. The displacements are the changes of atomic positions from their undistorted structures (grey dash lines). The central plane C (grey solid line) is midway between Fe1 and Fe6 planes.
Figure 1: The crystal structure of La\({}_{1/3}\)Sr\({}_{2/3}\)FeO\({}_{3}\) in a rhombohedral unit cell. The rhombohedral c-axis is equivalent to the \([111]_{c}\) direction of the cubic unit cell.
The other stable structure, CO3, was obtained in a weakly correlated regime [19] with \(U\)=3 eV and \(J\)=0.6 eV, which was also used by Zhu _et al._ [16]. Both CO1 and CO3 structures exhibit a sixfold (six Fe ions) spin density wave (SDW) along the cubic [111]\({}_{c}\) direction, such that Fe1(\(\uparrow\))Fe2(\(\uparrow\))Fe3(\(\uparrow\))Fe4(\(\downarrow\))Fe5(\(\downarrow\))Fe6(\(\downarrow\)) (Figure 2).
We find that the \(<\)Fe-O\(>\) mean bond-lengths are closely related to the magnetic moments. In the CO1 structure, the magnetic moments of Fe1 (Fe4) and Fe3 (Fe6) are larger than that of Fe2 (Fe5), so the charge states of Fe1 and Fe3 should be larger than that of Fe2 (see Section III.4). As a result, the Fe-O bonds of the high-spin (HS) ions expand and the bond disproportionation occurs. In particular, in the CO1 structure the \(<\)Fe-O\(>\) mean bond-lengths of Fe1-Fe2-Fe3 (similar to Fe4-Fe5-Fe6) show the big-small-big pattern, coupled to the big-small-big magnetic moments of the Fe1, Fe2, and Fe3 ions, respectively (Fig. 2). The \(<\)Fe-O\(>\) mean bond-lengths of CO1 are 1.92 Å for Fe1 and 1.86 Å for Fe2.
In the case of CO3, the magnetic moments of the Fe1 (Fe4) and Fe3 (Fe6) ions become smaller than that of the Fe2 (Fe5) ion, and the bond-length pattern changes to small-big-small. The bond-lengths are 1.88 Å for Fe1 and 1.94 Å for Fe2. As a result of the Fe-O bond-length disproportionation, the displacements of the Fe and O planes are also non-uniform, as shown in Fig. 2.
In Table 1, we list the relaxed unit-cell parameters, the space group, the displacements of the Fe planes, and the total displacement of the Fe and O planes (\(\Delta_{tot}\)) along the [111]\({}_{c}\) direction. Both CO1 and CO3 have the space group \(P\bar{3}m1\), implying that CO1 and CO3 are centrosymmetric. Also, based on the displacements of the Fe and O planes of CO1 and CO3, Figure 2 shows that reflecting the supercells of CO1 and CO3 (including the Fe1O\({}_{6}\)-Fe6O\({}_{6}\) cells) about the central plane C yields the same supercells. Finally, the total displacements of the Fe and O planes in CO1 and CO3 are zero (Table 1), meaning that no polarization is induced in CO1 and CO3.
### Energy Path along Multiple Charge Orderings
In this section, we compute the energy path between the two energetically degenerate CO phases (CO1 and CO3) to explore possible meta-stable CO states along the structural path. We first tune \(U\) (\(J\)=0.2\(U\)) for the CO1 and CO3 phases while relaxing the crystal structures at a fixed volume to investigate their stability, and plot the relative energy \(\Delta E_{CO1-CO3}\) (\(=E[\mathrm{CO1}]-E[\mathrm{CO3}]\)) between them as a function of \(U\) in Fig. 3(a). Here, we find that the low-temperature experimental ground-state structure (CO1) is stable when the \(U\) value becomes larger than 3.7 eV (see Fig. 5(a)). While DFT+\(U\) is a zero-temperature theory, we find that the energetics of the different CO phases can be tuned by changing the onsite Coulomb repulsion \(U\), which can mimic the effect of temperature, pressure, or photoinduced excitation. In principle, laser excitation in experiment can modulate the electronic structure away from the ground state by affecting the exchange interaction [32], and may eventually trigger a phenomenon called "photoinduced structural phase transition" [33].
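A short sketch of how the crossover value \(U_{c}\) can be located from such a scan of relaxed total energies; the energy values below are illustrative placeholders, not the published data.

```python
# Sketch: find U_c where E[CO1] - E[CO3] changes sign, by linear interpolation
# over a grid of U values (J = 0.2 U). Placeholder energies, not the real data.
import numpy as np

u_grid = np.array([3.0, 3.2, 3.4, 3.6, 3.8, 4.0])            # eV
dE = np.array([0.035, 0.022, 0.010, 0.001, -0.009, -0.020])   # E[CO1]-E[CO3], eV/f.u. (placeholders)

# locate the sign change and interpolate linearly for the crossing point
i = np.where(np.diff(np.sign(dE)) != 0)[0][0]
u_c = u_grid[i] - dE[i] * (u_grid[i + 1] - u_grid[i]) / (dE[i + 1] - dE[i])
print(f"estimated U_c ~ {u_c:.2f} eV")
```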
Fig. 3(a) shows that the energy difference between the CO1 and CO3 structures becomes almost zero at \(U_{c}=3.62\) eV (\(J=0.724\) eV). This means that other meta-stable CO structures could be found in the CINEB calculation near \(U=3.62\) eV. At \(U\)=3.62 eV, CO1 (CO3) still has the big-small-big (small-big-small) bond-order (see Fig. 5). Here, we perform a CINEB calculation at \(U=3.62\) eV using both CO1 and CO3 as the two reference structures. Remarkably, Fig. 3(b) shows that the CINEB energy curve calculated with \(U_{c}\)=3.62 eV captures a meta-stable structure, CO2, whose energy is only 3 meV above the CO1 or CO3 structure, with an energy barrier of \(\sim\)7 meV.
| CO phase | c (Å) | a (Å) | \(\Delta_{Fe1}\)/\(\Delta_{Fe3}\)/\(\Delta_{Fe2}\) [Å] | \(\Delta_{tot}\) [Å] |
| --- | --- | --- | --- | --- |
| CO1 | 13.12 | 5.38 | -0.01 / 0.01 / 0.00 | 0.00 |
| CO3 | 13.22 | 5.36 | 0.03 / -0.03 / 0.00 | 0.00 |
| CO2 | 13.19 | 5.37 | 0.01 / -0.04 / -0.03 | -0.22 |
| \(\overline{\mathrm{CO2}}\) | 13.19 | 5.37 | 0.04 / -0.01 / 0.03 | 0.22 |

Table 1: The relaxed cell parameters (\(a\) and \(c\)), the space group, the Fe plane displacements, and the total displacement (\(\Delta_{tot}\)) of LSFO for each CO phase. Both the CO1 and CO3 structures (space group: \(P\bar{3}m1\)) have the inversion symmetry, while the CO2 and \(\overline{\mathrm{CO2}}\) structures (space group: \(P3m1\)) do not.
Figure 3: (a) The relative energies per formula unit of CO1, CO2, and CO3 phases as a function of the Hubbard \(U\). (b) Comparison of CINEB and Linear Interpolation energies vs Image structures calculated with DFT+U at \(U\)=3.62 eV and \(J\)=0.724 eV.
This CO2 structure is obtained by the spontaneous displacement of the Fe plane and cannot be captured by the linear interpolation method, where the image structures along the path are obtained by linearly interpolating atomic positions between CO1 and CO3.
The obtained CO2 structure has the small-medium-big FeO\({}_{6}\) octahedra (Fe-O bond order) (see Fig. 5(b)), coupled to magnetic moments of 2.2 (small), 3.0 (medium), and 3.4 \(\mu_{B}\) (big). Unlike CO1 and CO3, CO2 has the space group \(P3m1\) (trigonal, No. 156) with the point group symmetry \(C_{3v}\), which is a polar point group [34]. The reflection of the CO2 supercell about the C plane does not yield the same supercell (see Fig. 2), implying broken inversion symmetry in CO2. Also, the total displacement (\(\Delta_{tot}\)) of the Fe and O planes in CO2 is not zero (see Table 1), resulting in a spontaneous polarization. Remarkably, the CO2 phase is metallic, which is not common for ferroelectric materials due to the screening of the polarization [35; 36]. We also find that another meta-stable structure, \(\overline{\mathrm{CO2}}\), can be obtained by applying the inversion operation to the CO2 structure about the central plane in Fig. 2; it shows the opposite polarization compared to the CO2 case (see Table 1). The polarizations of CO2 and \(\overline{\mathrm{CO2}}\) are easily switchable due to the very small energy barrier of \(\sim\)7 meV between them. The existence of ferroelectricity in LSFO was first pointed out by Park _et al._ [20], but its structure was obtained manually by exchanging the La and Sr layers.
To address the difference between the CINEB and Linear Interpolation results, we compare the displacement (\(\Delta_{Fe2}\)) of the Fe2 plane, the total displacement (\(\Delta_{tot}\)) of the Fe and O planes, and the bond angle along O1-Fe2-O2 (\(\angle\)O1-Fe2-O2). Figure 4(a) shows that along the Linear Interpolation path the Fe2 plane displacement \(\Delta_{Fe2}^{LI}\) and the total displacement of the Fe and O planes \(\Delta_{tot}^{LI}\) remain zero, while along the CINEB path an abrupt change of the Fe2 plane displacement \(\Delta_{Fe2}^{CINEB}\) and the total displacement \(\Delta_{tot}^{CINEB}\) occurs at image number 5 and reaches a minimum at image 9, where CO2 is captured. This change of the Fe2 displacement along the CINEB path is also accompanied by a sudden change of the bond angle (\(\angle\)O1-Fe2-O2)\({}_{CINEB}\), while the one along the Linear Interpolation path, (\(\angle\)O1-Fe2-O2)\({}_{LI}\), remains 180\({}^{\circ}\) (Figure 4(b)). The existence of this CO2 phase might have been captured by Sabyasachi _et al._ and Yang _et al._, where the neutron diffraction shows multiple \(Q\)-plane magnetic reflections with equivalent intensities [6; 8].
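For comparison, the Linear Interpolation reference path can be generated as in the following sketch, assuming fractional coordinates with a consistent atom ordering between the two endpoints.

```python
# Sketch: image structures along the Linear Interpolation path are obtained by
# linearly interpolating atomic positions (and the cell) between CO1 and CO3.
import numpy as np

def linear_images(frac_pos_co1, frac_pos_co3, cell_co1, cell_co3, n_images=11):
    images = []
    for t in np.linspace(0.0, 1.0, n_images):
        pos = (1.0 - t) * frac_pos_co1 + t * frac_pos_co3   # fractional coordinates
        cell = (1.0 - t) * cell_co1 + t * cell_co3          # lattice vectors
        images.append((pos, cell))
    return images
```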
### The dependence of structural parameters on \(U\)
Our structural relaxation results show that the stability of the different CO phases depends sensitively on the Hubbard \(U\) values (\(J=0.2U\)). In general, the correlation effect of the Hubbard \(U\) is important to stabilize the bond/charge disproportionation in many oxides including nickelates [37; 38], cobaltates [39], ferrites [40], and manganites [41].
Figure 4: Comparison between the CINEB and the Linear Interpolation paths: (a) The displacement of Fe2 plane and the total displacement of Fe and O planes. (b) The bond angle \(\angle\)O1-Fe2-O2.
Figure 5: The \(<\)Fe-O\(>\) mean bond lengths vs the Hubbard \(U\) (\(J=0.2U\)) obtained for (a) CO1, (b) CO2, and (c) CO3 phases. The vertical dash lines represent the \(U\) values used for stabilizing each CO phase, namely CO1 (\(U=5.0\) eV), CO2 (\(U=3.62\) eV), and CO3 (\(U=3.0\) eV) as shown in Table 1. The white (shaded) regions represent metallic (insulating) phases.
This is because only part of the \(M\) sites undergo the spin-state transition to the HS state with the \(M\)-O bond elongation, and more HS sites are populated at stronger \(U\) values. Our calculation confirms that the DFT-relaxed structure of LSFO shows no Fe-O bond disproportionation, consistent with the experimental high-temperature structure, and that the increase of \(U\) energetically favors the structures with more HS states in a non-trivial way.
Fig. 5(a) shows that the CO1 structure as shown in Table 1 can be stable only when \(U>3.7\) eV (\(J\)=0.74 eV) and the structural transition to CO3 occurs along with the insulator-metal transition. The CO2 structure as shown in Table 1 is meta-stable in a narrow \(U\) range of \(3.55\leq U\leq 3.7\) eV and evolves into a distorted CO3 phase as \(U\) becomes lower than 3.55 eV. The CO3 structure can be stable in a wide-range of \(U\) values although this phase is energetically lower than CO1 or CO2 phases when \(U\leq 3.62\) eV. Both of CO2 and CO3 structures converge to the high-temperature structure without the Fe-O disproportionation as \(U\) becomes smaller than 2eV. We find that the insulating phase in LSFO occurs only in the CO1 structure with \(U>3.7\)eV.
### Electronic Structure and magnetism in LSFO
Here, we investigate electronic structures of LSFO at different CO states computed using DFT+U. Due to the AFM structure, the Fe1/Fe2/Fe3 density of states (DOS) is equivalent to the Fe4/Fe5/Fe6 one in LSFO once their spins are flipped. For CO1 and CO3, the crystal structures are centrosymmetric and we show only Fe1 and Fe2 DOS since Fe1 (Fe4) and Fe3 (Fe6) are equivalent. To distinguish the importance of electronic correlations from the structure effect, we compare \(U=3.62\) eV (\(J=0.724\) eV) and \(U=4\) eV (\(J=0.8\) eV) DOS at the fixed structure of each CO phase.
At \(U\)=4 eV, the CO1 phase is an insulating state with the spectral gap size of \(\sim\)120 meV (see Fig. 6(a)), consistent with the optical gap measurement in LSFO at a low temperature [13]. In the Fe1 ion, both e\({}_{g}\) and t\({}_{2g}\) bands are half-filled with the gap size comparable to \(U\) behaving as a typical Mott insulator. However, only the t\({}_{2g}\) bands of Fe2 are half-filled, while the e\({}_{g}\) bands are almost empty (see Fig. 6(a)). This is consistent with the high-spin picture of the charge-ordering state between Fe1 (\(d^{5}\); t\({}_{2g\uparrow}^{3}\)e\({}_{g\uparrow}^{2}\)) and Fe2 (\(d^{3}\); t\({}_{2g\uparrow}^{3}\)e\({}_{g\uparrow}^{0}\)) ions. As the correlation becomes weaker (\(U\)=3.62 eV), the DOS for CO1 becomes metallic as the Fe1 e\({}_{g}\) (Fe2 t\({}_{2g}\)) state is less (more) occupied and the spectral gap at the Fermi energy is closed.
In CO3 at \(U\)=4eV, the charge-ordering pattern changes for Fe1 (\(d^{4}\); t\({}_{2g\uparrow}^{3}\)t\({}_{2g\downarrow}^{1}\)e\({}_{g\uparrow}^{0}\)) and Fe2 (\(d^{5}\); t\({}_{2g\uparrow}^{3}\)t\({}_{2g\downarrow}^{1}\)e\({}_{g\uparrow}^{1}\)). The spin state for Fe1 changes to the low-spin, while the Fe2 spin is close to the intermediate one. This is because the crystal field splitting of the Fe1 ion becomes larger due to the smaller octahedron size compared to Fe1 in the CO1 phase. As a result, both Fe1 t\({}_{2g}\) and Fe2 t\({}_{2g}\) states are partially filled and the DOS becomes metallic (see Fig. 6(a)). As the correlation becomes weak (\(U\)=3.62 eV), the CO3 phase remains metallic.
Similar to CO3, CO2 is metallic at both \(U\)=4 eV and 3.62 eV. As the Fe1 \(d\) DOS of CO2 is similar to the Fe1 \(d\) DOS of CO3 and the Fe1-O bond-lengths of CO2 and CO3 are similar to each other as well, we expect that the local electronic configuration of Fe1 should be similarly given as the low-spin \(d^{4}\) (t\({}_{2g\uparrow}^{3}\)t\({}_{2g\downarrow}^{1}\)e\({}_{g\uparrow}^{0}\)). Moreover, the Fe-O bond-length of the Fe1 ion is the smallest, while those of the Fe2 and Fe3 ions are close to each other, implying a similar electronic structure between Fe2 and Fe3. Nevertheless, the evidence of the CO can be found near \(E\approx-1\) eV, where the occupied Fe3 e\({}_{g}\) states have slightly more DOS than the Fe2 ones, while their t\({}_{2g}\) DOS are similar. This implies that the local electronic configurations of the Fe2 and Fe3 ions should be Fe\({}^{(3.5+\delta)+}\) (t\({}_{2g\uparrow}^{3}\)t\({}_{2g\downarrow}^{1}\)e\({}_{g\uparrow}^{0.5-\delta}\)) and Fe\({}^{(3.5-\delta)+}\) (t\({}_{2g\uparrow}^{3}\)t\({}_{2g\downarrow}^{1}\)e\({}_{g\uparrow}^{0.5+\delta}\)), respectively.
The calculated magnetic moments of the Fe ions (\(m_{Fe1}\), \(m_{Fe2}\), and \(m_{Fe3}\)) are coupled to the above valence states, and these values for CO1, CO2, and CO3 are shown in Table 2. The magnetic moments in CO1 calculated with DFT+\(U\) (\(U\)=4 eV, \(J\)=0.8 eV) are in good agreement with the experimental ones recently obtained by Li _et al._ [11] at low temperature. The calculated value of \(m_{Fe1}\) is rather reduced (screened) compared to the estimate based on the electronic configuration from the DOS, since we expect \(m_{Fe1}\) (t\({}_{2g\uparrow}^{3}\)e\({}_{g\uparrow}^{2}\)) = 5\(\mu_{B}\), while the \(m_{Fe2}\) value is consistent (t\({}_{2g\uparrow}^{3}\)e\({}_{g\uparrow}^{0}\) = 3\(\mu_{B}\)). The expected moments of the Fe1 and Fe2 ions in CO3 are \(m_{Fe1}\) (t\({}_{2g\uparrow}^{3}\)t\({}_{2g\downarrow}^{1}\)e\({}_{g\uparrow}^{0}\)) = 2\(\mu_{B}\) and \(m_{Fe2}\) (t\({}_{2g\uparrow}^{3}\)t\({}_{2g\downarrow}^{1}\)e\({}_{g\uparrow}^{1}\)) = 3\(\mu_{B}\), respectively. However, since CO3 is metallic and the magnetic moments calculated with DFT+\(U\) also depend on \(U\) and \(J\), our calculated moments of 2.42\(\mu_{B}\) and 3.52\(\mu_{B}\) at \(U\)=4 eV are larger than these expected values. We confirmed that the magnetic moments of Fe1 and Fe2 are reduced to 2.08 and 3.14 \(\mu_{B}\) at \(U\)=3 eV, closer to the expected values.
Similarly, the magnetic moment of Fe1 in CO2 calculated with \(U\)=4 eV is 2.70 \(\mu_{B}\), which is still large for a LS state of Fe\({}^{4+}\) (2.0 \(\mu_{B}\)). The moment computed using \(U\)=3.62 eV, 2.18 \(\mu_{B}\), is more consistent with this expectation. For CO2, the magnetic moments \(m_{Fe1}\), \(m_{Fe2}\), and \(m_{Fe3}\) show the small-medium-big pattern, which is consistent with the charge-ordering pattern of Fe\({}^{4+}\)-Fe\({}^{(3.5+\delta)+}\)-Fe\({}^{(3.5-\delta)+}\).
## IV Conclusion
In conclusion, we studied the structural and electronic properties of charge-ordered La doped SrFeO\({}_{3}\), La\({}_{1/3}\)Sr\({}_{2/3}\)FeO\({}_{3}\) (LSFO) systematically using DFT+U along with the antiferromagnetic order. We find that metastable structures with distinct CO phases in LSFO can be obtained by relaxing the structures with the different \(U\) values varying the correlation effect. The DFT+U calculation of LSFO with \(U\)=5eV can capture the low temperature CO phase (CO1 in the main text) of the big-small-big pattern, where the enhanced charge density is accompanied by the large magnetic moment with the Fe-O bond elongation. The ground-state is insulating as the spectral function at the Fermi energy opens a Mott gap driven by the high-spin states of Fe ions.
As the correlation effect becomes weak by reducing the \(U\) value in DFT+\(U\), we can capture other metastable CO phases with distinct Fe-O bond patterns. One CO phase (CO3 in the main text) shows a crystal structure with the same space group as the CO1 phase, while the CO pattern changes to small-big-small. The other metastable CO phase (CO2 in the main text) can be obtained by interpolating the structural path between the CO1 and CO3 phases using the CINEB calculation. Remarkably, the CO2 phase stabilizes a lower-symmetry crystal structure along with the inversion symmetry breaking, and it shows a ferroelectric metallic state driven by the small-medium-big CO pattern. This CO2 phase cannot be captured by the linear interpolation method, as it requires the spontaneous displacement of Fe ions from their symmetric positions. The electronic structures of these metastable CO states are notably changed, as both the CO2 and CO3 phases are metallic while the ground-state CO1 phase is insulating. The energy barrier of this CO2 phase along the structural path is only \(\sim\)7 meV, which should be easily accessible in experiments by applying pressure or optical excitation.
Our results suggest that the strong correlation effect plays an important role in stabilizing the multiple CO phases of transition metal oxides accompanied by mixed valence and metal-oxygen bond disproportionation. The CINEB method combined with first-principles energy and force calculations can capture such metastable CO phases, whose electronic structures are distinct from that of the ground state. While DFT+\(U\) is an efficient static method to incorporate the correlation effect, it can generally suffer from convergence problems in systems with multiple correlated states [39; 42]. More advanced first-principles methods such as dynamical mean field theory (DMFT) can be a promising way to study metastable phases in strongly correlated materials driven by both structural distortions and strong correlations, especially when the CINEB method is combined with energy and force calculations within DMFT [43; 44].
## Acknowledgement
We thank Yue Cao for fruitful discussions. NN, AL, AN, and HP acknowledge financial support from the US
Figure 6: The DOS plots of CO1, CO2, and CO3 phases calculated with DFT+\(U\). (a) \(U=4.0\) eV and \(J=0.8\) eV and (b) \(U=3.62\) eV and \(J=0.724\) eV. Schematic energy diagrams of Fe t\({}_{2g}\) and e\({}_{g}\) orbitals are also shown in the insets.
Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Science and Engineering Division. VS was supported by NSF SI2-SSE Grant 1740112. We gratefully acknowledge the computing resources provided on Bebop, a high-performance computing cluster operated by the Laboratory Computing Resource Center at Argonne National Laboratory.
| La doped SrFeO$_{3}$, La$_{1/3}$Sr$_{2/3}$FeO$_{3}$, exhibits a metal-to-insulator transition accompanied by antiferromagnetic and charge ordering states and Fe-O bond disproportionation below a critical temperature near 200 K. Because the charge dynamics measured in this material near the critical temperature is unconventionally slow, its excited charge ordering states can exhibit novel electronic structures with nontrivial energy profiles. In this paper, we reveal possible metastable charge ordering structures of La$_{1/3}$Sr$_{2/3}$FeO$_{3}$ using first-principles calculations and the climbing-image nudged elastic band method. In the strong correlation regime, La$_{1/3}$Sr$_{2/3}$FeO$_{3}$ is an antiferromagnetic insulator with a charge ordering state of the big-small-big pattern.
2303.17763 | Two-component model description of Bose-Einstein correlations in pp
collisions at 13 TeV measured by the CMS Collaboration at the LHC | Using the two-component model, we analyze Bose-Einstein correlations in pp
collisions at the center-of-mass energy of 13 TeV, measured by the CMS
Collaboration at the LHC, and compare results with the $\tau$-model. We utilize
data described by the double ratios with an average pair transverse momentum
$0\le k_T\le 1.0$ GeV and six intervals described by the reconstructed charged
particle multiplicity as $N_{\rm trk}^{\rm offline}$. The estimated ranges are
1-4 fm for the magnitude of extension of the emitting source expressed by the
exponential function $\exp(-RQ)$ and 0.4-0.5 fm for that by the Gaussian
distribution $\exp(-(RQ)^2)$, respectively. Moreover, we estimate the upper
limits of the 3-pion BEC to test the two-component model and investigate the
role of the long-range correlation. | Takuya Mizoguchi, Seiji Matsumoto, Minoru Biyajima | 2023-03-31T01:40:58 | http://arxiv.org/abs/2303.17763v2 | # Analysis of CMS collaboration Bose-Einstein correlations at 13 TeV using the two-component model
###### Abstract
Using the two-component model, we analyze Bose-Einstein correlations (BEC) at 13 TeV, measured by the CMS collaboration, and compare results with the \(\tau\)-model. We utilize data described by the double ratios with [\(0\leq k_{T}\leq 1.0\) GeV and six intervals for \(N_{\rm trk}^{\rm offline}\)]. The estimated range is 1-4 fm for the exponential form \(R_{1}\) and 0.4-0.5 fm for the Gaussian form \(R_{2}\). We estimate the upper limits of the 3-pion BEC to test the two-component model and investigate the role of the long-range correlation.
## 1 Introduction
This article investigates the Bose-Einstein correlations (BEC) described by double ratios (DRs) at 13 TeV, obtained by the CMS collaboration [1]. However, CMS only reports \(\chi^{2}\)/ndf values obtained using the \(\tau\)-model. Here, we analyze the DRs [\(0\leq k_{T}\leq 1.0\) GeV, and six intervals \(a\leq N_{\rm trk}^{\rm offline}\leq b\)] with the \(\tau\)-model, as illustrated in Fig. 1. The formula used by CMS in their analysis [2] is
\[F_{\tau}=C\left[1+\lambda\cos\!\left((r_{0}Q)^{2}+\tan(\alpha_{\tau}\pi/4)(Qr)^{\alpha_{\tau}}\right)e^{-(Qr)^{\alpha_{\tau}}}\right]\cdot(1+\delta Q) \tag{1}\]
where \(\lambda\), \(r_{0}\), \(r\), and \(\alpha_{\tau}\) are parameters introduced in the stable distribution based on stochastic theory, namely the degree of coherence, two interaction ranges, and the characteristic index, respectively (see, also Refs. [3, 4]). \(Q=\sqrt{-(k_{1}-k_{2})^{2}}\) is the magnitude of the 4-momentum transfer between two pions.
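A minimal fitting sketch for Eq. (1) using scipy's `curve_fit`; the data arrays and starting values are synthetic placeholders, and the \(\hbar c\) conversion between fm and GeV\(^{-1}\) is omitted for brevity.

```python
# Sketch: least-squares fit of the tau-model double ratio, Eq. (1), to placeholder data.
import numpy as np
from scipy.optimize import curve_fit

def f_tau(Q, C, lam, r0, r, alpha, delta):
    phase = (r0 * Q) ** 2 + np.tan(alpha * np.pi / 4.0) * (Q * r) ** alpha
    return C * (1.0 + lam * np.cos(phase) * np.exp(-(Q * r) ** alpha)) * (1.0 + delta * Q)

q = np.linspace(0.02, 2.0, 100)                       # GeV (placeholder binning)
dr = f_tau(q, 1.0, 0.7, 0.5, 2.0, 1.0, 0.01) + 0.01 * np.random.randn(q.size)  # synthetic data

p0 = [1.0, 0.7, 0.5, 2.0, 1.0, 0.0]                   # C, lambda, r0, r, alpha_tau, delta
popt, pcov = curve_fit(f_tau, q, dr, p0=p0)
print(dict(zip(["C", "lambda", "r0", "r", "alpha_tau", "delta"], popt)))
```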
Our estimated values are presented in Table 1.
Figure 1: Analysis of the BEC at 13 TeV by Eq. (1).
Table 1 shows that the \(\chi^{2}\)/ndf values obtained from our analysis are consistent with those reported by the CMS collaboration [1].
As indicated in Table 1, the interaction ranges of the Levy-type form \((e^{-(Qr)^{\alpha}})\) increase as the interval containing \(N_{\rm trk}^{\rm offline}\) increases. The estimated values \(r=20\sim 50\) fm appear large for \(p+p\) collisions at 13 TeV.
This paper investigates this issue from a different perspective, focusing on the collision mechanism. Three processes occur in collisions at the LHC [5, 6, 7]: the non-diffractive dissociation, the single-diffractive dissociation, and the double-diffractive dissociation (DD). BEC are related to the chaotic components of particle production. Since the contribution from the DD is Poissonian [7], it has no effect on the BEC. We therefore adopt the following two-component model [7, 8] (see also the empirical Refs. [9, 10, 11]),
\[{\rm CF}_{\rm II}=1+\lambda_{1}E_{\rm BE_{1}}+\lambda_{2}E_{\rm BE_{2}}. \tag{2}\]
For the exchange functions \(E_{\rm BE_{1}}\) and \(E_{\rm BE_{2}}\), we assign the following two functions [12],
\[\exp(-RQ)\quad\mbox{and}\quad\exp\left(-(RQ)^{2}\right). \tag{3}\]
LRC refers to the long-range correlation. To express the LRC of BEC at 13 TeV, we use the Gaussian form, similar to the analysis of the denominator of the single ratio, \(N_{\rm BG}^{(+-)}\) (see [1]).
\[{\rm LRC}_{\rm(Gauss)}=\frac{C}{1.0+\alpha\exp(-\beta Q^{2})}. \tag{4}\]
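A short sketch of the fit function built from Eqs. (2)-(4); by analogy with Eq. (7), the long-range-correlation factor is assumed here to multiply the BEC part, and unit conversions between fm and GeV\(^{-1}\) are again omitted.

```python
# Sketch: two-component correlation function with exponential and Gaussian
# exchange functions, Eqs. (2)-(3), and the Gaussian LRC of Eq. (4).
import numpy as np

def cf_two_component(Q, lam1, R1, lam2, R2, C, alpha, beta):
    e_be1 = np.exp(-R1 * Q)             # exponential exchange function
    e_be2 = np.exp(-(R2 * Q) ** 2)      # Gaussian exchange function
    bec = 1.0 + lam1 * e_be1 + lam2 * e_be2
    lrc = C / (1.0 + alpha * np.exp(-beta * Q ** 2))
    return bec * lrc                    # assumed multiplicative combination
```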
In the second section, we analyze the BEC at 13 TeV using Eqs. (2)-(4). In the third section, we present our predictions for 3-pion BEC using the two-component model. In the final section, we provide concluding remarks. Appendix A presents an analysis of BEC at 13 TeV using the \(\tau\)-model with Eq. (4). In Appendix B, we reanalyze the CMS BEC at 0.9 and 7 TeV utilizing Eq. (4), because in previous works [7, 8], we used \({\rm LRC}_{\rm(linear)}=C(1+\delta Q)\).
## 2 Analysis of BEC at 13 TeV using Eqs. (2)-(4).
Considering the results of the CMS BEC at 7 TeV in Ref. [7], we assume a combination of an exponential function and a Gaussian distribution, as this combination has been shown to play an important role in the analysis of the CMS BEC at 0.9 and 7 TeV [7]. Note, however, that Shimoda et al. [12] investigated a number of other distributions. Our results are presented in Fig. 2 and Table 2. We observe extraordinary behaviors of the LRC in the two intervals [\(0\leq N_{\rm trk}^{\rm offline}\leq 4\) and \(10\leq N_{\rm trk}^{\rm offline}\leq 12\)], as shown in Fig. 3.
As indicated by Fig. 2 and Table 2, the two-component model with Eqs. (2)-(4) effectively characterizes three intervals: \(31\leq N_{\rm trk}^{\rm offline}\leq 33\), \(80\leq N_{\rm trk}^{\rm offline}\leq 84\), and \(105\leq N_{\rm trk}^{\rm offline}\leq 109\).
Figure 2: Analysis of the BEC at 13 TeV by Eqs. (2)–(4).
Among the six intervals shown in Fig. 3, the red (solid) line and green (dashed) line appear to be exceptional.
## 3 Test of the two-component model for 3-pion BEC
Here, we investigate the 3-pion BEC using the two-component model. Since there is currently no information from CMS on the multiplicity distribution \(P(n)\) at 13 TeV, it is challenging to determine the ratio between the contributions of the first and the second components. We use the diagrams in Fig. 4.
Figure 3: Our LRCs for six intervals are shown.
The formula that corresponds to the diagrams in Fig. 4[13, 14, 15] is expressed as
\[F_{i}^{(3)}=1.0+3\lambda_{i}E_{\rm BE_{i}}+2(\lambda_{i}E_{\rm BE_{i}})^{3/2} \tag{5}\]
By assuming an equal weight for the first and the second components, \(F_{1}^{(3)}\) and \(F_{2}^{(3)}\), we obtain the following normalized expression
\[E^{(3+:3-)}=1.0+\frac{1}{2}\left(3\lambda_{1}E_{\rm BE_{1}}+2(\lambda_{1}E_{ \rm BE_{1}})^{3/2}\right)+\frac{1}{2}\left(3\lambda_{2}E_{\rm BE_{2}}+2(\lambda _{2}E_{\rm BE_{2}})^{3/2}\right), \tag{6}\]
where \(\lambda_{1}\), \(\lambda_{2}\), \(R_{1}\), and \(R_{2}\) are fixed by using the numerical values in Table 2. Typical figures are presented in Fig. 5. We could calculate the ratio if the CMS collaboration reported the multiplicity distributions \(P(n)\) [2], as this would allow us to understand the ensemble property of the BEC through the multiplicity distribution. It is worth noting that the ATLAS collaboration has already measured the multiplicity distributions \(P(n)\) and the BEC at 13 TeV [16; 17; 18].
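A small sketch of evaluating the prediction of Eq. (6) from 2-pion parameters; the parameter values below are illustrative, not the entries of Table 2, and the momentum variable is treated as a single scalar for simplicity.

```python
# Sketch: upper limit of the 3-pion BEC from Eq. (6), with equal weights for
# the two components and parameters fixed from a 2-pion fit.
import numpy as np

def e3(lam, e_be):
    return 3.0 * lam * e_be + 2.0 * (lam * e_be) ** 1.5

def three_pion_bec(Q3, lam1, R1, lam2, R2):
    e_be1 = np.exp(-R1 * Q3)
    e_be2 = np.exp(-(R2 * Q3) ** 2)
    return 1.0 + 0.5 * e3(lam1, e_be1) + 0.5 * e3(lam2, e_be2)

q3 = np.linspace(0.0, 1.0, 50)
print(three_pion_bec(q3, lam1=0.6, R1=2.0, lam2=0.3, R2=0.5)[:3])   # intercept near Q3=0
```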
Figure 4: Diagrams for the third-order BEC. The matrix indicates the exchange of identical pions.
Figure 5: Prediction of upper limit of the \(3\pi\) BEC at 13 TeV by means of Eq. (6) with Eqs. (2)-(4).
In the near future, we may be able to further test the two-component model when the CMS collaboration analyzes the 3-\(\pi\) BEC. If we observe the same interaction ranges as in Fig. 2, we could conclude that the two-component model is a viable approach.
## 4 Concluding remarks
* Our analysis of CMS BEC at 13 TeV using the \(\tau\)-model with Eq. (1) confirms the applicability of this model. This is evidenced by the values of \(\chi^{2}\) in Table 1.
* As portrayed in Table 1, the interaction ranges \(r\) in the Levy-type expression \(e^{-(Qr)^{\alpha_{\tau}}}\) increase as the range of the interval \(N_{\rm trk}^{\rm offline}\) increases. However, it appears that the interaction ranges from 30 to 50 fm are large in \(pp\) collisions at 13 TeV.
* To gain a better understanding of the results obtained from the \(\tau\)-model, we have analyzed the BEC using the \(\tau\)-model with Eq. (4). This has led to improved estimations, as shown in Appendix A.
* We look forward to future analyses by the CMS collaboration of the multiplicity distributions and the third-order BEC at 13 TeV
Hereafter, we summarize the results of the two-component model using Eqs. (2)-(4).
* To investigate the remarks mentioned in C2) above using the two-component model, we utilized Eqs. (2)-(4). Our results are presented in Table 2. The large interaction ranges are approximately 4 fm, and they appear to be reasonable.
* Furthermore, to test the applicability of the two-component model, we calculated the 3-pion BEC by making use of the estimated values and the diagrams presented in Fig. 4. Interestingly, as \(N_{\rm trk}^{\rm offline}\) increases, the 3-pion BEC decreases rapidly, due to the changes in the interaction range \(R_{1}\) (1 fm to 4 fm). Moreover, the intercepts at \(Q=0.0\) GeV are about 3.0, reflecting the equal weights assumed in Eq. (6).
* To investigate the role of the \({\rm LRC_{(Gauss)}}\), i.e., Eq. (4), we reanalyzed the BEC at 0.9 and 7 TeV, with the results presented in Appendix B. The estimated \(\chi^{2}\) values became smaller than those obtained with \({\rm LRC_{(linear)}}\) [7].
* As portrayed in Table 2, the BEC in the intervals \(0\leq N_{\rm trk}^{\rm offline}\leq 4\) and \(10\leq N_{\rm trk}^{\rm offline}\leq 12\) cannot be analyzed with better \(\chi^{2}\) values. A more complicated model may be necessary.
_Acknowledgments._ One of the authors (M.B.) would like to thank his colleagues at the Department of Physics, Shinshu University.
## Appendix A Analysis of BEC at 13 TeV using the \(\tau\)-model with Eq. (4)
We are interested in the influence of Eq. (4) on the \(\tau\)-model. To investigate this, we reanalyzed the BEC using the following formula
\[E^{(2+:2-)}/N_{\rm BG}=\left[1+\lambda\cos\!\left((r_{0}Q)^{2}+\tan(\alpha_{\tau}\pi/4)(Qr)^{\alpha_{\tau}}\right)e^{-(Qr)^{\alpha_{\tau}}}\right]\cdot{\rm LRC_{(Gauss)}} \tag{7}\]
Our findings are presented in Fig. 6 and Table 3. It can be seen that the interaction range \(r\) values are smaller than 10 fm. Compare ours with those in Table 1.
As illustrated in Fig. 7, the LRC for \(0\leq N_{\rm trk}^{\rm offline}\leq 4\) appears to be singular. The LRC\({}_{\rm(Gauss)}\) is not favored in combination with Eq. (1) of the \(\tau\)-model.
## Appendix B Reanalysis of CMS BEC at 0.9 and 7 TeV [2] by LRC, expressed by Eq. (4)
We examined the changes in the values of \(\chi^{2}\) when LRC\({}_{\rm(linear)}\) was replaced with Eq. (4) in the reanalysis of BEC at 0.9 and 7 TeV [2]. Our new results obtained using Eq. (4) are presented in Fig. 8 and Table 4 and compared with those obtained elsewhere [7], where the linear form for the LRC \(=C(1+\delta Q)\) was used. These results are also shown in Table 4. We show the LRCs in Fig. 9.
It can be said that the Gaussian distribution of the LRC in the two-component model is better than that of the linear form because the LRC\({}_{\rm(Gauss)}\) converges to 1.0 in the region \(Q\geq 2.0\) GeV.
| Using the two-component model, we analyze Bose-Einstein correlations in pp collisions at a center-of-mass energy of 13 TeV measured by the CMS Collaboration at the LHC, and compare the results with the $\tau$-model. We use data described by the double ratios, with an average pair transverse momentum $0\le k_T\le 1.0$ GeV and six intervals of the reconstructed charged-particle multiplicity $N_{\rm trk}^{\rm offline}$. The estimated ranges are 1-4 fm for the magnitude of extension of the emitting source expressed by the exponential function $\exp(-RQ)$ and 0.4-0.5 fm for that by the Gaussian distribution $\exp(-(RQ)^2)$. Moreover, we estimate the upper limits of the 3-pion BEC to test the two-component model and investigate the role of the long-range correlation. |
2302.00136 | Learning Topology-Preserving Data Representations | We propose a method for learning topology-preserving data representations
(dimensionality reduction). The method aims to provide topological similarity
between the data manifold and its latent representation via enforcing the
similarity in topological features (clusters, loops, 2D voids, etc.) and their
localization. The core of the method is the minimization of the Representation
Topology Divergence (RTD) between original high-dimensional data and
low-dimensional representation in latent space. RTD minimization provides
closeness in topological features with strong theoretical guarantees. We
develop a scheme for RTD differentiation and apply it as a loss term for the
autoencoder. The proposed method "RTD-AE" better preserves the global structure
and topology of the data manifold than state-of-the-art competitors as measured
by linear correlation, triplet distance ranking accuracy, and Wasserstein
distance between persistence barcodes. | Ilya Trofimov, Daniil Cherniavskii, Eduard Tulchinskii, Nikita Balabin, Evgeny Burnaev, Serguei Barannikov | 2023-01-31T22:55:04 | http://arxiv.org/abs/2302.00136v2 | # Learning Topology-Preserving Data Representations
###### Abstract
We propose a method for learning topology-preserving data representations (dimensionality reduction). The method aims to provide topological similarity between the data manifold and its latent representation via enforcing the similarity in topological features (clusters, loops, 2D voids, etc.) and their localization. The core of the method is the minimization of the Representation Topology Divergence (RTD) between original high-dimensional data and low-dimensional representation in latent space. RTD minimization provides closeness in topological features with strong theoretical guarantees. We develop a scheme for RTD differentiation and apply it as a loss term for the autoencoder. The proposed method "RTD-AE" better preserves the global structure and topology of the data manifold than state-of-the-art competitors as measured by linear correlation, triplet distance ranking accuracy, and Wasserstein distance between persistence barcodes.
## 1 Introduction
Dimensionality reduction is a useful tool for data visualization, preprocessing, and exploratory data analysis. Clearly, immersion of high-dimensional data into 2D or 3D space is impossible without distortions, which vary across popular methods. Dimensionality reduction methods can be broadly classified into global and local methods. Classical global methods (PCA, MDS) tend to preserve the global structure of a manifold. However, in many practical applications, the produced visualizations are non-informative since they don't capture complex non-linear structures. Local methods (UMAP (McInnes et al., 2018), PaCMAP (Wang et al., 2021), t-SNE (Van der Maaten & Hinton, 2008), Laplacian Eigenmaps (Belkin & Niyogi, 2001), ISOMAP (Tenenbaum et al., 2000)) focus on preserving neighborhood data and local structure at the cost of sacrificing the global structure. The most popular methods like t-SNE and UMAP are a good choice for inferring cluster structures but often fail to describe correctly the data manifold's topology. t-SNE and UMAP have hyperparameters that control the neighborhood size taken into account when building the representation. Different values of these hyperparameters lead to significantly different visualizations, and neither of them is the "canonical" one that correctly represents the high-dimensional data.
We take a different perspective on dimensionality reduction. We propose the approach based on _Topological Data Analysis (TDA)_. Topological Data Analysis (Barannikov, 1994; Zomorodian, 2001; Chazal & Michel, 2017) is a field devoted to the numerical description of multi-scale topological properties of data distributions by analyzing point clouds sampled from them. TDA methods naturally capture properties of data manifolds on multiple distance scales and are arguably a good trade-off between local and global approaches.
The state-of-the-art TDA approach of this kind is TopoAE (Moor et al., 2020). However, it has several weaknesses: 1) the loss term is not continuous; 2) the nullity of the loss term is only a necessary, but not a sufficient, condition for the coincidence of topology as measured by persistence barcodes; see Appendix J for more details.
In our paper, we suggest using the Representation Topology Divergence (RTD) (Barannikov et al., 2022) to produce topology-aware dimensionality reduction. RTD measures the topological discrepancy between two point clouds with one-to-one correspondence between clouds and enjoys nice theoretical properties (Section 3.2). The major obstacle to incorporating RTD into deep learning is its differentiation. There exist approaches to the differentiation of barcodes and of generic barcode-based functions with respect to deformations of the filtration (Carriere et al., 2021), and to TDA differentiation in special cases (Hofer et al., 2019; Poulenard et al., 2018).
In this paper, we make the following contributions:
1. We develop an approach for RTD differentiation. Topological metrics are difficult to differentiate; the differentiability of RTD and its implementation on GPU is a valuable step forward in the TDA context which opens novel possibilities in topological optimizations;
2. We propose a new method for topology-aware dimensionality reduction: an autoencoder enhanced with the differentiable RTD loss: "RTD-AE". Minimization of RTD loss between real and latent spaces forces closeness in topological features and their localization with strong theoretical guarantees;
3. By doing computational experiments, we show that the proposed RTD-AE outperforms state-of-the-art methods of dimensionality reduction and the vanilla autoencoder in terms of preserving the global structure and topology of a data manifold; we measure this by the linear correlation, the triplet distance ranking accuracy, the Wasserstein distance between persistence barcodes, and RTD. In some cases, the proposed RTD-AE produces more faithful and visually appealing low-dimensional embeddings than state-of-the-art algorithms. We release the RTD-AE source code (github.com/danchern97/RTD_AE).
## 2 Related work
Various dimensionality reduction methods have been proposed to obtain 2D/3D visualization of high-dimensional data (Tenenbaum et al., 2000; Belkin and Niyogi, 2001; Van der Maaten and Hinton, 2008; McInnes et al., 2018). Natural science researchers often use dimensionality reduction methods for exploratory data analysis or even to guide further experiments (Becht et al., 2019; Kobak and Berens, 2019; Karlov et al., 2019; Andronov et al., 2021; Szubert et al., 2019). The main problem with these methods is inevitable distortions (Chari et al., 2021; Batson et al., 2021; Wang et al., 2021) and incoherent results for different hyperparameters. These distortions can largely affect the global representation structure, such as inter-cluster relationships and pairwise distances. As the interpretation of these quantities in domains such as physics or biology can lead to incorrect conclusions, it is of high importance to preserve them as much as possible. UMAP and t-SNE visualizations are frequently sporadic and cannot be considered the "canonical" representation of high-dimensional data. An often overlooked issue is the initialization, which significantly contributes to the performance of dimensionality reduction methods (Kobak and Linderman, 2021; Wang et al., 2021). Damrich and Hamprecht (2021) revealed that UMAP's true loss function is different from the one purported by its theory because of negative sampling. There are a number of works that try to tackle the distortion problem and preserve as much of the inter-data relationships as possible. The authors of PHATE (Moon et al., 2019) and ivis (Szubert et al., 2019) claim that their methods are able to capture local as well as global features, but provide no theoretical guarantees for this.
Figure 1: Dimensionality reduction (3D \(\rightarrow\) 2D) on the “Mammoth” dataset. The proposed RTD-AE method better captures both global and local structure.
Wagner et al. (2021) propose DIPOLE, an approach to dimensionality reduction combining techniques of metric geometry and distributed persistent homology.
From a broader view, deep representation learning is also dedicated to obtaining low-dimensional representations of data. The Autoencoder (Hinton & Salakhutdinov, 2006) and the Variational Autoencoder (Kingma & Welling, 2013) are mostly used to learn representations of objects useful for solving downstream tasks or for data generation. They are not designed for data visualization and fail to simultaneously preserve local and global structure in 2D/3D spaces. However, their parametric nature makes them scalable and applicable to large datasets, which is why they are used in methods such as parametric UMAP (Sainburg et al., 2021), ivis (Szubert et al., 2019), and ours.
Moor et al. (2020) proposed TopoAE, including an additional loss for the autoencoder to preserve topological structures of the input space in latent representations. The topological similarity is achieved by retaining similarity in the multi-scale connectivity information. Our approach has a stronger theoretical foundation and outperforms TopoAE in computational experiments.
An approach for differentiation of persistent homology-based functions was proposed by Carriere et al. (2021). Leygonie et al. (2021) systematize different approaches to the regularisation of persistence-diagram-based functions and define notions of differentiability for maps to and from the space of persistence barcodes. Luo et al. (2021) proposed a topology-preserving dimensionality reduction method based on a graph autoencoder. Kim et al. (2020) proposed a differentiable topological layer for general deep learning models based on persistence landscapes.
## 3 Preliminaries
### Topological data analysis, persistent homology
Topology is often considered to describe the "shape of data", that is, multi-scale properties of the datasets. Topological information was generally recognized to be important for various data analysis problems. In the perspective of the commonly assumed manifold hypothesis (Goodfellow et al., 2016), datasets are concentrated near low-dimensional manifolds located in high-dimensional ambient spaces. The standard direction is to study topological features of the underlying manifold. The common approach is to cover the manifold via simplices. Given the threshold \(\alpha\), we take sets of the points from the dataset \(X\) which are pairwise closer than \(\alpha\). The family of such sets is called the Vietoris-Rips simplicial complex. For further convenience, we introduce the fully-connected weighted graph \(\mathcal{G}\) whose vertices are the points from \(X\) and whose edges have weights given by the distances between the points. Then, the Vietoris-Rips simplicial complex is defined as:
\[\text{VR}_{\alpha}(\mathcal{G})=\left\{\{i_{0},\ldots,i_{k}\}\mid i_{m}\in\text{Vert}(\mathcal{G}),\;m_{i_{l},i_{m}}\leq\alpha\ \text{for all}\ l,m\right\},\]
where \(m_{i,j}\) is the distance between points, \(\text{Vert}(\mathcal{G})=\{1,\ldots,|X|\}\) is the vertices set of the graph \(\mathcal{G}\).
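A brute-force sketch of enumerating the Vietoris-Rips complex at a fixed scale from a pairwise distance matrix, for illustration only (practical TDA libraries use far more efficient constructions).

```python
# Sketch: a subset of vertices forms a simplex of VR_alpha iff all its pairwise
# distances are at most alpha.
import numpy as np
from itertools import combinations

def vietoris_rips(dist, alpha, max_dim=2):
    n = dist.shape[0]
    simplices = [[(i,) for i in range(n)]]                 # 0-simplices: the points
    for k in range(1, max_dim + 1):
        level = [s for s in combinations(range(n), k + 1)
                 if all(dist[i, j] <= alpha for i, j in combinations(s, 2))]
        simplices.append(level)
    return simplices

pts = np.random.rand(20, 3)
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
complex_at_alpha = vietoris_rips(d, alpha=0.5)
print([len(level) for level in complex_at_alpha])          # simplex counts per dimension
```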
For each \(\text{VR}_{\alpha}(\mathcal{G})\), we define the vector space \(C_{k}\), which consists of formal linear combinations of all \(k\)-dimensional simplices from \(\text{VR}_{\alpha}(\mathcal{G})\) with modulo 2 arithmetic. The boundary operator \(\partial_{k}:C_{k}\to C_{k-1}\) maps every simplex to the sum of its facets. One can show that \(\partial_{k}\circ\partial_{k+1}=0\), so the chain complex can be formed:
\[\ldots\to C_{k+1}\stackrel{{\partial_{k+1}}}{{\to}}C_{k} \stackrel{{\partial_{k}}}{{\to}}C_{k-1}\to\ldots.\]
The quotient vector space \(H_{k}=ker(\partial_{k})/im(\partial_{k+1})\) is called the \(k\)-th homology group, elements of \(H_{k}\) are called homology classes. The dimension \(\beta_{k}=dim(H_{k})\) is called the \(k\)-th Betti number and it approximates the number of basic topological features of the manifold represented by the point cloud \(X\).
The immediate problem here is the selection of an appropriate \(\alpha\), which is not known beforehand. The standard solution is to analyze all \(\alpha>0\). Obviously, if \(\alpha_{1}\leq\alpha_{2}\leq\ldots\leq\alpha_{m}\), then \(\text{VR}_{\alpha_{1}}(\mathcal{G})\subseteq\text{VR}_{\alpha_{2}}(\mathcal{G})\subseteq\ldots\subseteq\text{VR}_{\alpha_{m}}(\mathcal{G})\); the nested sequence is called the filtration. The evolution of cycles across the nested family of simplicial complexes \(S_{\alpha_{i}}\) is canonically decomposed into "birth" and "death" of basic topological features, so that a basic feature \(c\) appears in \(H_{k}(S_{\alpha})\) at a specific threshold \(\alpha_{c}\) and disappears at a specific threshold \(\beta_{c}\); the difference \(\beta_{c}-\alpha_{c}\) describes the "lifespan" or persistence of the homology class. The set of the corresponding intervals \([\alpha_{c},\beta_{c}]\) for the basic homology classes from \(H_{k}\) is called the _persistence barcode_; the whole theory is dubbed _persistent homology_ (Chazal & Michel, 2017; Barannikov, 1994; Zomorodian, 2001).
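As a concrete illustration of the filtration picture for degree zero, the following sketch computes the \(H_{0}\) persistence barcode of a Vietoris-Rips filtration with a union-find structure (the single-linkage / minimum-spanning-tree view); higher-degree barcodes require full boundary-matrix reduction and are not covered here.

```python
# Sketch: every point is born at alpha=0; a connected component dies at the
# threshold of the edge that first merges it into another component.
import numpy as np

def h0_barcode(dist):
    n = dist.shape[0]
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    edges = sorted((dist[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    bars = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            bars.append((0.0, w))           # a component born at 0 dies at threshold w
    bars.append((0.0, np.inf))              # the surviving component never dies
    return bars
```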
### Representation Topology Divergence (RTD)
The classic persistent homology is dedicated to the analysis of a single point cloud \(X\). Recently, Representation Topology Divergence (RTD) (Barannikov et al., 2022) was proposed to measure the dissimilarity in the multi-scale topology between two point clouds \(X,\tilde{X}\) of equal size \(N\) with a one-to-one correspondence between clouds. Let \(\mathcal{G}^{w}\), \(\mathcal{G}^{\bar{w}}\) be graphs with weights on edges equal to pairwise distances of \(X,\tilde{X}\). To provide the comparison, the auxiliary graph \(\mathcal{G}^{w,\bar{w}}\) with doubled set of vertices and edge weights matrix \(m(w,\tilde{w})\), see details in Appendix B, is created. The persistence barcode of the graph \(\mathcal{G}^{w,\bar{w}}\) is called the _R-Cross-Barcode_ and it tracks the differences in the multi-scale topology of the two point clouds by comparing their \(\alpha\)-neighborhood graphs for all \(\alpha\).
Here we give a simple example of an R-Cross-Barcode; see also (Cherniavskii et al., 2022). Suppose we have two point clouds \(A\) and \(B\), of seven points each, with distances between points as shown in the top row of Figure 2. Consider the R-Cross-Barcode\({}_{1}\)(A, B), which consists of 4 intervals (the bottom row of the figure). The 4 intervals describe the topological discrepancies between connected components of the \(\alpha\)-neighborhood graphs of \(A\) and \(B\).
An interval is opened, i.e. a topological discrepancy appears, at threshold \(\alpha=\tilde{w}_{uv}^{B}\) when in the union of \(\alpha\)-neighborhood graph of \(A\) and \(B\), two vertex sets \(C_{1}\) and \(C_{2}\) disjoint at smaller thresholds, are joined into one connected component by the edge \((uv)\) from \(B\). This interval is closed at threshold \(\alpha=w_{uv^{\prime}}^{A}\), when the two vertex sets \(C_{1}\) and \(C_{2}\) are joined into one connected component in the \(\alpha\)-neighborhood graph of \(A\).
For example, a discrepancy appears at the threshold \(\alpha=0.53\) when the vertex sets \(\{4\}\) and \(\{3,6,7\}\) are joined into one connected component in the union of neighborhood graphs of \(A\) and \(B\) by the edge \((4,7)\). We identify the "death" of this R-Cross-Barcode feature at \(\alpha=0.57\), when these two sets are joined into one connected component in the neighborhood graph of cloud A (via the edge \((4,7)\) in Figure 2 becoming grey).
By definition, \(\text{RTD}_{k}(X,\tilde{X})\) is the sum of intervals' lengths in the _R-Cross-Barcode\({}_{k}(X,\tilde{X})\)_ and measures its closeness to an empty set.
**Proposition 1** (Barannikov et al. (2022)).: _If \(\text{RTD}_{k}(X,\tilde{X})=\text{RTD}_{k}(\tilde{X},X)=0\) for all \(k\geq 1\), then the barcodes of the weighted graphs \(\mathcal{G}^{w}\) and \(\mathcal{G}^{\bar{w}}\) are the same in any degree. Moreover, in this case the topological features are located in the same places: the inclusions \(\text{VR}_{\alpha}(\mathcal{G}^{w})\subseteq\text{VR}_{\alpha}(\mathcal{G}^{ \min(w,\bar{w})})\), \(\text{VR}_{\alpha}(\mathcal{G}^{\bar{w}})\subseteq\text{VR}_{\alpha}(\mathcal{ G}^{\min(w,\bar{w})})\) induce homology isomorphisms for any threshold \(\alpha\)._
Proposition 1 is a strong basis for topology comparison and optimization. Given a fixed data representation \(X\), how can one find \(\tilde{X}\), lying in a different space, with a topology similar to that of \(X\), in particular, similar persistence barcodes? Proposition 1 states that it is sufficient to minimize
Figure 2: A graphical representation of an R-Cross-Barcode\({}_{1}(A,B)\) for the point clouds \(A\) and \(B\). The pairwise distance matrices for \(A\) and \(B\) are shown in the top raw. Edges present in the \(\alpha\)-neighborhood graphs for \(B\) but not for \(A\) are colored in red. Edges present in the \(\alpha\)-neighborhood graph for \(A\) are colored in grey. The timeline for appearance-disappearance of topological features distinguishing the two graphs is shown. The appearance-disappearance process is illustrated by the underlying bars, connecting the corresponding thresholds.
\(\sum_{i\geq 1}\left(\text{RTD}_{i}(X,\tilde{X})+\text{RTD}_{i}(\tilde{X},X)\right)\). In most of our experiments we minimized \(\text{RTD}_{1}(X,\tilde{X})+\text{RTD}_{1}(\tilde{X},X)\). \(\text{RTD}_{1}\) can be calculated faster than \(\text{RTD}_{2+}\), also \(\text{RTD}_{2+}\) are often close to zero. To simplify notation, we denote \(\text{RTD}(X,\tilde{X}):=\nicefrac{{1}}{{2}}(\text{RTD}_{1}(X,\tilde{X})+ \text{RTD}_{1}(\tilde{X},X))\).
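As a concrete illustration, the following sketch computes this symmetrized RTD for two finite point clouds. It assumes the auxiliary matrix \(m(w,\tilde{w})\) has the block structure implied by the four quadrant cases discussed in Section 4.1 (a constant upper-left block, taken to be zero here, the distances \(w\) in the cross blocks, and \(\min(w,\tilde{w})\) in the lower-right block); the exact construction is given in Appendix B, so this is a simplified stand-in rather than a reference implementation. The `ripser` package is used as an off-the-shelf persistent homology backend, and all function names are ours.

```python
import numpy as np
from ripser import ripser
from scipy.spatial.distance import cdist


def rtd_one_way(X, X_tilde):
    """Sum of interval lengths of R-Cross-Barcode_1(X, X_tilde)."""
    w = cdist(X, X)                      # pairwise distances in the space of X
    w_tilde = cdist(X_tilde, X_tilde)    # pairwise distances in the space of X_tilde
    N = len(X)
    # Auxiliary matrix m(w, w_tilde) on the doubled vertex set, following the
    # quadrant structure of Section 4.1 (upper-left block taken to be zero).
    m = np.zeros((2 * N, 2 * N))
    m[:N, N:] = w
    m[N:, :N] = w
    m[N:, N:] = np.minimum(w, w_tilde)
    # Degree-1 barcode of the Vietoris-Rips filtration of the auxiliary graph.
    dgm1 = ripser(m, distance_matrix=True, maxdim=1)["dgms"][1]
    lengths = dgm1[:, 1] - dgm1[:, 0]
    return float(np.sum(lengths[np.isfinite(lengths)]))


def rtd(X, X_tilde):
    """RTD(X, X_tilde) = 1/2 (RTD_1(X, X_tilde) + RTD_1(X_tilde, X))."""
    return 0.5 * (rtd_one_way(X, X_tilde) + rtd_one_way(X_tilde, X))
```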
**Comparison with TopoAE loss**. TopoAE (Moor et al., 2020) is the state-of-the-art algorithm for topology-preserving dimensionality reduction. The TopoAE topological loss is based on comparison of minimum spanning trees in \(X\) and \(\tilde{X}\) spaces. However, it has several weak spots. First, when the TopoAE loss is zero there is no guarantee that persistence barcodes of \(X\) and \(\tilde{X}\) coincide. Second, the TopoAE loss can be discontinuous in rather standard situations, see Appendix J. At the same time, RTD loss is continuous, and its nullity guarantees the coincidence of persistence barcodes of \(X\) and \(\tilde{X}\). The continuity of the RTD loss follows from the stability of the R-Cross-Barcode\({}_{k}\) (Proposition 2).
**Proposition 2**.: _(a) For any quadruple of edge weights sets \(w_{ij}\), \(\tilde{w}_{ij}\), \(v_{ij}\), \(\tilde{v}_{ij}\) on \(\mathcal{G}\):_
\(d_{B}(\text{R-Cross-Barcode}_{k}(w,\tilde{w}),\text{R-Cross-Barcode}_{k}(v, \tilde{v}))\leq\max(\max_{ij}\lvert v_{ij}-w_{ij}\rvert,\max_{ij}\lvert\tilde {v}_{ij}-\tilde{w}_{ij}\rvert).\)__
_(b) For any pair of edge weights sets \(w_{ij}\), \(\tilde{w}_{ij}\) on \(\mathcal{G}\):_
\[\lVert\text{R-Cross-Barcode}_{k}(w,\tilde{w})\rVert_{B}\leq\max_{ij}\lvert w_{ ij}-\tilde{w}_{ij}\rvert.\]
_(c) The expectation for the bottleneck distance between R-Cross-Barcode\({}_{k}(w,\tilde{w})\) and R-Cross-Barcode\({}_{k}(w^{\prime},\tilde{w})\), where \(w_{ij}=w(x_{i},x_{j})\), \(w^{\prime}_{ij}=w^{\prime}(x_{i},x_{j})\), \(\tilde{w}_{ij}=\tilde{w}(x_{i},x_{j})\), \(w,w^{\prime},\tilde{w}\) is a triple of metrics on a measure space \((\mathcal{X},\mu)\), and \(\tilde{X}=\{x_{1},\ldots,x_{n}\}\), \(x_{i}\in\mathcal{X}\) is a sample from \((\mathcal{X},\mu)\), is upper bounded by Gromov-Wasserstein distance between \(w\) and \(w^{\prime}\):_
\[\int_{\mathcal{X}\times\ldots\times\mathcal{X}}d_{B}(\text{R-Cross-Barcode}_{k }(w,\tilde{w}),\text{R-Cross-Barcode}_{k}(w^{\prime},\tilde{w}))d\mu^{\otimes n }\leq n\,GW(w,w^{\prime}).\]
_(d) The expectation for the bottleneck norm of R-Cross-Barcode\({}_{k}(w,\tilde{w})\) for two weighted graphs with edge weights \(w_{ij}=w(x_{i},x_{j})\), \(\tilde{w}_{ij}=\tilde{w}(x_{i},x_{j})\), where \(w,\tilde{w}\) is a pair of metrics on a measure space \((\mathcal{X},\mu)\), and \(X=\{x_{1},\ldots,x_{n}\}\), \(x_{i}\in\mathcal{X}\) is a sample from \((\mathcal{X},\mu)\), is upper bounded by Gromov-Wasserstein distance between \(w\) and \(\tilde{w}\):_
\[\int_{\mathcal{X}\times\ldots\times\mathcal{X}}\lVert\text{R-Cross-Barcode}_{k }(w,\tilde{w})\rVert_{B}d\mu^{\otimes n}\leq n\,GW(w,\tilde{w}).\]
The proofs are given in Appendix K.
## 4 Method
### Differentiation of RTD
We propose to use RTD as a loss in neural networks. Here we describe our approach to RTD differentiation. Denote by \(\Sigma_{k}\) the set of all \(k-\)simplices in the Vietoris-Rips complex of the graph \(\hat{\mathcal{G}}^{w,\tilde{w}}\), and by \(\mathcal{T}_{k}\) the set of all intervals in the _R-Cross-Barcode\({}_{k}(X,\tilde{X})\)_. Fix (an arbitrary) strict order on \(\mathcal{T}_{k}\). There exists a function \(f_{k}:\ \cup_{(b_{i},d_{i})\in\mathcal{T}_{k}}\{b_{i},d_{i}\}\to\Sigma_{k}\) that maps \(b_{i}\) (or \(d_{i}\)) to a simplex \(\sigma\) whose appearance leads to "birth" (or "death") of the corresponding homological class. Let
\[m_{\sigma}=\max_{i,j\in\sigma}m_{i,j}\]
denote the function of \(m_{ij}\) equal to the filtration value at which the simplex \(\sigma\) joins the filtration. Since \(\frac{\partial\text{ RTD}_{k}(X,\tilde{X})}{\partial d_{i}}=-\frac{\partial \text{ RTD}_{k}(X,\tilde{X})}{\partial b_{i}}=1\), we obtain the following equation for the subgradient
\[\frac{\partial\text{ RTD}_{k}(X,\tilde{X})}{\partial m_{\sigma}}=\sum_{i\in \mathcal{T}_{k}}\mathbb{I}\{f_{k}(d_{i})=\sigma\}-\sum_{i\in\mathcal{T}_{k}} \mathbb{I}\{f_{k}(b_{i})=\sigma\}.\]
Here, for any \(\sigma\) no more than one term has non-zero indicator. Then
\[\frac{\partial\text{ RTD}_{k}(X,\tilde{X})}{\partial m_{i,j}}=\sum_{\sigma\in \Sigma_{k}}\frac{\partial\text{ RTD}_{k}(X,\tilde{X})}{\partial m_{\sigma}}\frac{\partial m_{\sigma}}{ \partial m_{i,j}}\]
Figure 3: RTD Autoencoder
The only thing that is left is to obtain subgradients of \(\text{RTD}(X,\tilde{X})\) by points from \(X\) and \(\tilde{X}\). Consider (an arbitrary) element \(m_{i,j}\) of matrix \(m\). There are 4 possible scenarios:
1. \(i,j\leq N\), in other words \(m_{i,j}\) is from the upper-left quadrant of \(m\). Its length is constant and thus \(\forall l:\ \frac{\partial m_{i,j}}{\partial X_{l}}=\frac{\partial m_{i,j}}{\partial\tilde{X}_{l}}=0\).
2. \(i\leq N<j\), in other words \(m_{i,j}\) is from the upper-right quadrant of \(m\). Its length is computed as a Euclidean distance and thus \(\frac{\partial m_{i,j}}{\partial X_{i}}=\frac{X_{i}-X_{j-N}}{||X_{i}-X_{j-N}||_{2}}\) (similar for \(X_{j-N}\)).
3. \(j\leq N<i\), similar to the previous case.
4. \(N<i,j\), in other words \(m_{i,j}\) is from the bottom-right quadrant of \(m\). Here we have subgradients like \[\frac{\partial m_{i,j}}{\partial X_{i-N}}=\frac{X_{i-N}-X_{j-N}}{||X_{i-N}-X_{j-N}||_{2}}\mathbb{I}\{w_{i-N,j-N}<\tilde{w}_{i-N,j-N}\}\] Similar for \(X_{j-N},\tilde{X}_{i-N}\) and \(\tilde{X}_{j-N}\).
The subgradients \(\frac{\partial\text{RTD}(X,\tilde{X})}{\partial X_{i}}\) and \(\frac{\partial\text{RTD}(X,\tilde{X})}{\partial\tilde{X}_{i}}\) can be derived from the above using the chain rule and the formula for the full (sub)gradient. We are now able to minimize \(\text{RTD}(X,\tilde{X})\) by (sub)gradient optimization methods. We discuss some possible tricks for improving RTD differentiation in Appendix I.
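In practice, the chain rule above can be delegated to an automatic-differentiation framework: once the (non-differentiable) barcode computation has identified, for every interval, the edges of the auxiliary graph realizing its birth and death (the map \(f_{k}\)), the loss is just a differentiable sum of filtration values. The PyTorch sketch below illustrates this; `critical_edges` is a hypothetical placeholder for that lookup and not part of any library.

```python
import torch


def edge_value(edge, block, w, m_lr):
    # Filtration value of an edge of the auxiliary graph: "w" refers to the
    # cross block of original-space distances, "min" to the lower-right block
    # min(w, w_tilde); gradients flow into X and Z through these entries.
    i, j = edge
    return w[i, j] if block == "w" else m_lr[i, j]


def rtd1_loss(X, Z, critical_edges):
    """Differentiable surrogate for RTD_1(X, Z).

    `critical_edges(X, Z)` is a hypothetical placeholder for the map f_1:
    it returns, for every interval (b_i, d_i) of the R-Cross-Barcode_1,
    the edges (and blocks) whose filtration values realize the birth and
    the death, as obtained from the barcode computation.
    """
    w = torch.cdist(X, X)                        # distances in the original space
    m_lr = torch.minimum(w, torch.cdist(Z, Z))   # lower-right block min(w, w~)
    loss = X.new_zeros(())
    for (b_edge, b_block), (d_edge, d_block) in critical_edges(X, Z):
        # interval length d_i - b_i, written in terms of matrix entries
        loss = loss + edge_value(d_edge, d_block, w, m_lr) \
                    - edge_value(b_edge, b_block, w, m_lr)
    return loss
```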
### RTD Autoencoder
Given the data \(X=\{x_{i}\}_{i=1}^{n}\), \(x_{i}\in\mathbb{R}^{d}\), in a high-dimensional space, our goal is to find a representation \(Z=\{z_{i}\}\), \(z_{i}\in\mathbb{R}^{p}\), in a low-dimensional space. For visualization purposes, \(p=2,3\). Our idea is to find a representation \(Z\) which preserves _persistence barcodes_, that is, multi-scale topological properties of the point clouds, as much as possible. The straightforward approach is to solve \(\min_{Z}\text{RTD}(X,Z)\), where the optimization is performed over the \(n\) vectors \(z_{i}\in\mathbb{R}^{p}\), in a flavor similar to UMAP and t-SNE. This approach is workable albeit very time-consuming and can be applied only to small datasets, see Appendix F. A practical solution is to learn representations via the encoder network \(E(w,x):X\to Z\), see Figure 3.
**Algorithm**. Initially, we train the autoencoder for \(E_{1}\) epochs with the reconstruction loss \(\frac{1}{2}||X-X_{rec}||^{2}\) only. Then, we train for \(E_{2}\) epochs with the loss \(\frac{1}{2}||X-X_{rec}||^{2}+\text{RTD}(X,Z)\). Both losses are calculated on mini-batches. The two-step procedure speeds up training, since calculating \(\text{RTD}(X,Z)\) for the untrained network takes a long time.
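A minimal sketch of this two-stage schedule is given below, assuming a generic MLP encoder/decoder and a differentiable `rtd_loss` callable (for instance the surrogate sketched in Section 4.1). The layer sizes, optimizer and learning rate are illustrative placeholders, not the settings of Appendix H.

```python
import torch
import torch.nn as nn


def train_rtd_ae(loader, d_in, d_latent=16, E1=10, E2=50, rtd_loss=None, device="cpu"):
    """Two-stage schedule: E1 epochs with the reconstruction loss only,
    then E2 epochs with reconstruction + RTD(X, Z) on every mini-batch."""
    encoder = nn.Sequential(nn.Linear(d_in, 256), nn.ReLU(), nn.Linear(256, d_latent)).to(device)
    decoder = nn.Sequential(nn.Linear(d_latent, 256), nn.ReLU(), nn.Linear(256, d_in)).to(device)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

    for epoch in range(E1 + E2):
        for (x,) in loader:                      # loader yields mini-batches of inputs
            x = x.to(device)
            z = encoder(x)
            x_rec = decoder(z)
            loss = 0.5 * ((x - x_rec) ** 2).sum(dim=1).mean()   # reconstruction term
            if epoch >= E1 and rtd_loss is not None:
                loss = loss + rtd_loss(x, z)                    # topological term
            opt.zero_grad()
            loss.backward()
            opt.step()
    return encoder, decoder
```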
Figure 4: Results on dimensionality reduction to 3D-space
## 5 Experiments
In computational experiments, we perform dimensionality reduction to high-dimensional and 2D/3D space for ease of visualization. We compare original data with latent representations by (1) linear correlation of pairwise distances, (2) Wasserstein distance (W.D.) between \(H_{0}\) persistence barcodes (Chazal & Michel, 2017), (3) triplet distance ranking accuracy (Wang et al., 2021) (4) RTD. All of the quality measures are tailored to evaluate how the manifold's global structure and topology are preserved. We note that RTD, as a quality measure, provides a more precise comparison of topology than the W.D. between \(H_{0}\) persistence barcodes. First, RTD takes into account the localization of topological features, while W.D. does not. Second, W.D. is invariant to permutations of points, but we are interested in comparison between original data and latent representation where natural one-to-one correspondence holds.
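For concreteness, the sketch below shows simplified versions of two of these measures, the linear correlation of pairwise distances and the triplet ranking accuracy; the exact evaluation protocols follow the cited works, so these implementations are only illustrative.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr


def pairwise_distance_correlation(X, Z):
    """(1) Linear (Pearson) correlation between pairwise distances in the
    original space and in the latent space."""
    return pearsonr(pdist(X), pdist(Z))[0]


def triplet_accuracy(X, Z, n_triplets=10_000, seed=0):
    """(3) Fraction of random triplets (i, j, k) for which the latent space
    preserves the ordering of d(x_i, x_j) versus d(x_i, x_k)."""
    rng = np.random.default_rng(seed)
    i, j, k = rng.integers(0, len(X), size=(3, n_triplets))
    closer_x = np.linalg.norm(X[i] - X[j], axis=1) < np.linalg.norm(X[i] - X[k], axis=1)
    closer_z = np.linalg.norm(Z[i] - Z[j], axis=1) < np.linalg.norm(Z[i] - Z[k], axis=1)
    return float(np.mean(closer_x == closer_z))
```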
We compare the proposed RTD-AE with t-SNE (Van der Maaten & Hinton, 2008), UMAP (McInnes et al., 2018), TopoAE (Moor et al., 2020), vanilla autoencoder (AE), PHATE (Moon et al., 2019), Ivis (Szubert & Drozdov, 2019), PacMAP (Wang et al., 2021). The complete description of all the used datasets can be found in Appendix L. See hyperparameters in Appendix H.
### Synthetic datasets
We start with the synthetic dataset "Spheres": eleven 100D spheres in 101D space; no two of them intersect, and one of the spheres contains all the others inside. For visualization, we perform dimensionality reduction to 3D space. Figure 4 shows the results: RTD-AE is the best at preserving the nestedness for the "Spheres" dataset. Also, RTD-AE outperforms the other methods in the quality measures, see Table 1. We were unable to run MDS on the "Spheres" dataset because it was too large for that method. See more results in Appendix M.
### Real world datasets
We performed experiments with a number of real-world datasets: MNIST (LeCun et al., 1998), F-MNIST (Xiao et al., 2017), COIL-20 (Nene et al., 1996), scRNA mice (Yuan et al., 2017), and scRNA melanoma (Tirosh et al., 2016), with latent dimensions of 16 and 2, see Tables 2, 5. The choice of scRNA datasets was motivated by the increased importance of dimensionality reduction methods in the natural sciences, as was previously mentioned. RTD-AE is consistently better than its competitors; moreover, the gap in the metrics for latent dimension 16 is larger than that for latent dimension 2 (see Appendix D). For latent dimension 2, RTD-AE is the first or second among the methods by the quality measures (see Table 5, Figure 7 in Appendix D). We conclude that the proposed RTD-AE does a good job of preserving the global structure of data manifolds.
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline & & \multicolumn{4}{c}{Quality measure} \\ \cline{3-6} Dataset & Method & L. C. & W. D. \(H_{0}\) & T. A. & RTD \\ \hline Spheres 3D & t-SNE & 0.087 & 47.89 \(\pm\) 2.59 & 0.206 \(\pm\) 0.01 & 37.32 \(\pm\) 1.44 \\ & UMAP & 0.049 & 48.31 \(\pm\) 1.83 & 0.313 \(\pm\) 0.03 & 44.70 \(\pm\) 1.47 \\ & PaCMAP & 0.394 & 46.48 \(\pm\) 1.61 & 0.156 \(\pm\) 0.02 & 45.88 \(\pm\) 1.51 \\ & PHATE & 0.302 & 48.78 \(\pm\) 1.65 & 0.207 \(\pm\) 0.02 & 44.05 \(\pm\) 1.42 \\ & PCA & 0.155 & 47.15 \(\pm\) 1.89 & 0.174 \(\pm\) 0.02 & 38.96 \(\pm\) 1.25 \\ & MDS & N.A. & N.A. & N.A. & N.A. \\ & Ivis & 0.257 & 46.32 \(\pm\) 2.04 & 0.130 \(\pm\) 0.01 & 41.15 \(\pm\) 1.28 \\ & AE & 0.441 & **45.07 \(\pm\) 2.27** & 0.333 \(\pm\) 0.02 & 39.64 \(\pm\) 1.45 \\ & TopoAE & 0.424 & 45.89 \(\pm\) 2.35 & 0.274 \(\pm\) 0.02 & 38.49 \(\pm\) 1.59 \\ & RTD-AE & **0.633** & **45.02 \(\pm\) 2.69** & **0.346 \(\pm\) 0.02** & **35.80 \(\pm\) 1.63** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quality of data manifold global structure preservation at projection from 101D into 3D space.
For the "Mammoth" (Coenen and Pearce, 2019b) dataset (Figure 1) we did dimensionality reduction 3D \(\rightarrow\) 2D. Besides good quality measures, RTD-AE produced an appealing 2D visualization: both large-scale (shape) and low-scale (chest bones, toes, tussk) features are preserved.
### Analysis of distortions
Next, to study the distortions produced by various dimensionality reduction methods, we learn a transformation from 2D to 2D space, see Figure 5. Here, we observe that RTD-AE in general recovers the global structure for all of the datasets. RTD-AE typically does not suffer from the squeezing (or bottleneck) issue, unlike AE, which is noticeable in "Random", "3 Clusters" and "Circle". Whereas t-SNE and UMAP struggle to preserve cluster densities and intercluster distances, RTD-AE manages to do so in every case. Unlike t-SNE, it does not cluster random points together. Finally, the overall shape of the representations produced by RTD-AE is consistent: it does not tear apart close points, which is something UMAP does in some cases, as shown in the "Circle" dataset. The metrics, presented in Table 6 in Appendix E, also confirm the statements above. RTD-AE typically has higher pairwise distance linear correlation and triplet accuracy, which accounts for good multi-scale properties, while having a lower Wasserstein distance between persistence barcodes.
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline & & \multicolumn{4}{c}{Quality measure} \\ \cline{3-6} Dataset & Method & L. C. & W. D. \(H_{0}\) & T. A. & RTD \\ \hline F-MNIST & UMAP & 0.602 & 592.0 \(\pm\) 3.9 & 0.741 \(\pm\) 0.018 & 12.31 \(\pm\) 0.44 \\ & PaCMAP & 0.600 & 585.9 \(\pm\) 3.2 & 0.741 \(\pm\) 0.013 & 12.72 \(\pm\) 0.48 \\ & Ivis & 0.582 & 552.6 \(\pm\) 3.5 & 0.718 \(\pm\) 0.014 & 10.76 \(\pm\) 0.30 \\ & PHATE & 0.603 & 576.4 \(\pm\) 4.4 & 0.756 \(\pm\) 0.016 & 10.72 \(\pm\) 0.15 \\ & AE & 0.879 & 320.5 \(\pm\) 1.9 & 0.850 \(\pm\) 0.004 & 5.52 \(\pm\) 0.17 \\ & TopoAE & 0.905 & 190.7 \(\pm\) 1.2 & 0.867 \(\pm\) 0.006 & 3.69 \(\pm\) 0.24 \\ & RTD-AE & **0.960** & **181.2 \(\pm\) 0.8** & **0.907 \(\pm\) 0.004** & **3.01 \(\pm\) 0.13** \\ \hline MNIST & UMAP & 0.427 & 879.1 \(\pm\) 5.6 & 0.625 \(\pm\) 0.016 & 17.62 \(\pm\) 0.73 \\ & PaCMAP & 0.410 & 887.5 \(\pm\) 6.1 & 0.644 \(\pm\) 0.012 & 20.07 \(\pm\) 0.70 \\ & Ivis & 0.423 & 712.6 \(\pm\) 5.0 & 0.668 \(\pm\) 0.013 & 12.40 \(\pm\) 0.32 \\ & PHATE & 0.358 & 819.5 \(\pm\) 4.0 & 0.626 \(\pm\) 0.018 & 15.01 \(\pm\) 0.25 \\ & AE & 0.773 & 391.0 \(\pm\) 2.9 & 0.771 \(\pm\) 0.010 & 7.22 \(\pm\) 0.14 \\ & TopoAE & 0.801 & 367.5 \(\pm\) 1.9 & 0.796 \(\pm\) 0.014 & 5.84 \(\pm\) 0.19 \\ & RTD-AE & **0.879** & **329.6 \(\pm\) 2.6** & **0.833 \(\pm\) 0.006** & **4.15 \(\pm\) 0.18** \\ \hline COIL-20 & UMAP & 0.301 & 274.7 \(\pm\) 0.0 & 0.574 \(\pm\) 0.011 & 15.99 \(\pm\) 0.52 \\ & PaCMAP & 0.230 & 273.5 \(\pm\) 0.0 & 0.548 \(\pm\) 0.012 & 15.18 \(\pm\) 0.35 \\ & Ivis & N.A. & N.A. & N.A. \\ & PHATE & 0.396 & 250.7 \(\pm\) 0.000 & 0.575 \(\pm\) 0.014 & 13.76 \(\pm\) 0.78 \\ & AE & 0.834 & 183.6 \(\pm\) 0.0 & 0.809 \(\pm\) 0.008 & 8.35 \(\pm\) 0.15 \\ & TopoAE & 0.910 & 148.0 \(\pm\) 0.0 & 0.822 \(\pm\) 0.020 & 6.90 \(\pm\) 0.19 \\ & RTD-AE & **0.944** & **88.9 \(\pm\) 0.0** & **0.892 \(\pm\) 0.007** & **5.78 \(\pm\) 0.10** \\ \hline scRNA mice & UMAP & 0.560 & 1141.0 \(\pm\) 0.0 & 0.712 \(\pm\) 0.010 & 21.30 \(\pm\) 0.17 \\ & PaCMAP & 0.496 & 1161.3 \(\pm\) 0.0 & 0.674 \(\pm\) 0.016 & 21.89 \(\pm\) 0.13 \\ & Ivis & 0.401 & 1082.6 \(\pm\) 0.0 & 0.636 \(\pm\) 0.007 & 22.56 \(\pm\) 1.13 \\ & PHATE & 0.489 & 1134.6 \(\pm\) 0.0 & 0.722 \(\pm\) 0.013 & 21.34 \(\pm\) 0.32 \\ & AE & 0.710 & 1109.2 \(\pm\) 0.0 & 0.788 \(\pm\) 0.013 & 20.80 \(\pm\) 0.16 \\ & TopoAE & 0.634 & **826.0 \(\pm\) 0.0** & 0.748 \(\pm\) 0.010 & **15.37 \(\pm\) 0.22** \\ & RTD-AE & **0.777** & 932.9 \(\pm\) 0.0 & **0.802 \(\pm\) 0.006** & 17.03 \(\pm\) 0.15 \\ \hline scRNA melanoma & UMAP & 0.474 & 1416.9 \(\pm\) 9.2 & 0.682 \(\pm\) 0.013 & 20.02 \(\pm\) 0.35 \\ & PaCMAP & 0.357 & 1441.8 \(\pm\) 9.1 & 0.681 \(\pm\) 0.014 & 20.53 \(\pm\) 0.36 \\ & Ivis & 0.465 & 1168.0 \(\pm\) 11.4 & 0.653 \(\pm\) 0.016 & 16.31 \(\pm\) 0.28 \\ & PHATE & 0.427 & 1427.5 \(\pm\) 9.1 & 0.687 \(\pm\) 0.018 & 20.18 \(\pm\) 0.41 \\ & AE & 0.458 & 1345.9 \(\pm\) 11.3 & 0.708 \(\pm\) 0.016 & 19.50 \(\pm\) 0.37 \\ & TopoAE & 0.544 & 973.7 \(\pm\) 11.1 & 0.709 \(\pm\) 0.011 & 13.41 \(\pm\) 0.35 \\ & RTD-AE & **0.684** & **769.5 \(\pm\) 11.5** & **0.728 \(\pm\) 0.017** & **10.35 \(\pm\) 0.33** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Quality of data manifold global structure preservation at projection into 16D space.
### Limitations and computational complexity
The main source of complexity is the RTD computation. For batch size \(b\), object dimensionality \(d\) and latent dimensionality \(k\), the complexity is \(O(b^{2}(d+k))\) operations, since all the pairwise distances have to be calculated. The R-Cross-Barcode computation is at worst cubic in the number of simplices involved. However, the computation is often quite fast for batch sizes \(\leq 256\) since the boundary matrix is typically sparse for real datasets. The selection of simplices whose addition leads to the "birth" or "death" of the corresponding homological class doesn't take extra time. For RTD calculation and differentiation, we used GPU-optimized software. As the calculation relies heavily on the batch size, the training time of RTD-AE ranges from 1.5x the time of the basic autoencoder at batch size 8 to 4-6x the time at batch size 512. For COIL-20, it took \(\sim\)10 minutes to train a basic AE and \(\sim\)20 minutes for RTD-AE. Overall, the computation of an R-Cross-Barcode takes a similar amount of time as the previous step even on datasets of high dimensionality.
### Discussion
Experimental results show that RTD-AE better preserves the data manifold global structure than its competitors. The most interesting comparison is with TopoAE, the state-of-the-art, which uses an alternative topology-preserving loss. The measures of interest for topology comparison are the Wasserstein distances between persistence barcodes. Tables 2, 6, 5 show that RTD-AE is better than TopoAE. RTD minimization has a stronger theoretical foundation than the loss from TopoAE (see Section 3.2).
## 6 Conclusions
In this paper, we have proposed an approach for topology-preserving representation learning (dimensionality reduction). The topological similarity between data points in original and latent spaces is achieved by minimizing the Representation Topology Divergence (RTD) between original data and latent representations. Our approach is theoretically sound: RTD=0 means that persistence barcodes of any degree coincide and the topological features are located in the same places. We proposed how to make RTD differentiable and implemented it as an additional loss to the autoencoder, constructing RTD-autoencoder (RTD-AE). Computational experiments show that the proposed RTD-AE better preserves the global structure of the data manifold (as measured by linear correlation, triplet distance ranking accuracy, Wasserstein distance between persistence barcodes) than popular methods t-SNE and UMAP. Also, we achieve higher topological similarity than the alternative TopoAE method. Of course, the application of RTD loss is not limited to autoencoders and we expect more deep learning applications involving one-to-one correspondence between points. The main limitation is that calculation of persistence barcodes and RTD, in particular, is computationally demanding. We see here another opportunity for further research.
Figure 5: Results on synthetic 2D data. First column: original data. Other columns: results of dimensionality reduction methods.
## Acknowledgements
The work was supported by the Analytical center under the RF Government (subsidy agreement 000000D730321P5Q0002, Grant No. 70-2021-00145 02.11.2021)
## Reproducibility Statement
To ensure reproducibility, we release the source code of the proposed RTD-AE (see Section 1); for hyperparameters see Appendix H. For the other methods, we used either official implementations or implementations from scikit-learn with default hyperparameters. We used public datasets (see Section 5, Appendix L). We generated several synthetic datasets and made the generating code available.
| We propose a method for learning topology-preserving data representations (dimensionality reduction). The method aims to provide topological similarity between the data manifold and its latent representation via enforcing the similarity in topological features (clusters, loops, 2D voids, etc.) and their localization. The core of the method is the minimization of the Representation Topology Divergence (RTD) between original high-dimensional data and low-dimensional representation in latent space. RTD minimization provides closeness in topological features with strong theoretical guarantees. We develop a scheme for RTD differentiation and apply it as a loss term for the autoencoder. The proposed method "RTD-AE" better preserves the global structure and topology of the data manifold than state-of-the-art competitors as measured by linear correlation, triplet distance ranking accuracy, and Wasserstein distance between persistence barcodes. |
2310.00327 | Memorization with neural nets: going beyond the worst case | In practice, deep neural networks are often able to easily interpolate their
training data. To understand this phenomenon, many works have aimed to quantify
the memorization capacity of a neural network architecture: the largest number
of points such that the architecture can interpolate any placement of these
points with any assignment of labels. For real-world data, however, one
intuitively expects the presence of a benign structure so that interpolation
already occurs at a smaller network size than suggested by memorization
capacity. In this paper, we investigate interpolation by adopting an
instance-specific viewpoint. We introduce a simple randomized algorithm that,
given a fixed finite dataset with two classes, with high probability constructs
an interpolating three-layer neural network in polynomial time. The required
number of parameters is linked to geometric properties of the two classes and
their mutual arrangement. As a result, we obtain guarantees that are
independent of the number of samples and hence move beyond worst-case
memorization capacity bounds. We illustrate the effectiveness of the algorithm
in non-pathological situations with extensive numerical experiments and link
the insights back to the theoretical results. | Sjoerd Dirksen, Patrick Finke, Martin Genzel | 2023-09-30T10:06:05 | http://arxiv.org/abs/2310.00327v2 | # Memorization with neural nets:
###### Abstract
In practice, deep neural networks are often able to easily interpolate their training data. To understand this phenomenon, many works have aimed to quantify the memorization capacity of a neural network architecture: the largest number of points such that the architecture can interpolate any placement of these points with any assignment of labels. For real-world data, however, one intuitively expects the presence of a benign structure so that interpolation already occurs at a smaller network size than suggested by memorization capacity. In this paper, we investigate interpolation by adopting an instance-specific viewpoint. We introduce a simple randomized algorithm that, given a fixed finite dataset with two classes, with high probability constructs an interpolating three-layer neural network in polynomial time. The required number of parameters is linked to geometric properties of the two classes and their mutual arrangement. As a result, we obtain guarantees that are independent of the number of samples and hence move beyond worst-case memorization capacity bounds. We illustrate the effectiveness of the algorithm in non-pathological situations with extensive numerical experiments and link the insights back to the theoretical results.
## 1 Introduction
The _bias-variance tradeoff_[1, 1] has been a cornerstone of classical machine learning theory that illustrates the relationship between the bias of a model and its variance, and how they affect its generalization performance. It states that if the model is too simple (high bias), it may underfit as it does not capture the underlying patterns in the data. However, if it is too complex (high variance), it may overfit noise in the training data and fail to generalize well. The resulting conventional wisdom was to adjust the model complexity to achieve a balance between underfitting and overfitting, which would then lead to good generalization.
This classical viewpoint has been uprooted by modern practice in deep learning, where it is common to use heavily overparameterized neural networks that fit the used training data (almost) perfectly. In spite of this (near-)perfect fit, these models can generalize well to new data. In fact, it can be observed that as the model complexity increases, the test error first decreases, then increases (as predicted by the bias-variance trade-off), and then decreases again. This phenomenon, coined the _double descent phenomenon_[1], is well documented not only for deep neural networks but for a wide range of machine learning methods, see e.g. [1, 1, 2, 1, 10]. The second descent of the test error is observed at the _interpolation threshold_, where the model has become complex enough to interpolate the training samples. Thus, to gain a deeper understanding of double descent it is important to identify at which size a neural network can interpolate finitely many samples.
To determine the interpolation threshold, we may look at the literature on the _memorization capacity_ of neural networks, which quantifies the number of parameters and neurons necessary for a network to be able to interpolate _any_\(N\) data points with _arbitrary_ labels. Thus, memorization capacity offers a worst-case quantitative analysis of the interpolation threshold. In this analysis, 'the network architecture comes first and the data comes later'. As a result, the required network complexity for memorization scales in terms of the number of training data (see Section 1.3 for more details). In practical applications, however, 'the data comes first and the network architecture comes later': the neural network architecture and size are tuned to given training data via cross-validation. Intuitively, one expects that the training data possesses some 'nice' structure so that interpolation is achievable with a smaller network complexity than suggested by memorization capacity - which assumes arbitrary data and arbitrary labels.
In this paper, we investigate interpolation by adopting an instance-specific viewpoint. We introduce a simple randomized algorithm that, given a fixed finite dataset with two classes, with high probability constructs an interpolating neural network in polynomial time, see Theorem 1.4. We then link the required number of parameters to the _mutual complexity_ of the dataset, which depends on both the geometric properties of two data classes as well as their mutual arrangement. As a result, we obtain guarantees that are independent of the number of samples and instead yield a 'problem-adaptive' bound on the interpolation threshold. Finally, we carry out a series of numerical experiments to demonstrate the practical effectiveness of our algorithm and to show that it can produce small interpolating networks in non-pathological situations.
### Summary of results
Let us first formalize the concept of interpolation in a classification setting with two classes. The setting with binary labels is considered for simplicity - our results can be readily extended to multiple classes by a one-versus-many approach (see Section 4.3 for details). In the following, \(\mathcal{X}^{-},\mathcal{X}^{+}\subset R\mathbb{B}_{2}^{d}\) denote disjoint and finite sets, representing two classes of objects.
**Definition 1.1** (Interpolation).: We say that a classification function \(F\colon\mathbb{R}^{d}\to\{\pm 1\}\)_interpolates_\(\mathcal{X}^{-}\) and \(\mathcal{X}^{+}\) if, for all \(\boldsymbol{x}^{-}\in\mathcal{X}^{-}\) and \(\boldsymbol{x}^{+}\in\mathcal{X}^{+}\),
\[F(\boldsymbol{x}^{-})=-1\quad\text{and}\quad F(\boldsymbol{x}^{+})=+1.\]
In this work, we will formulate a concrete, randomized algorithm that takes \(\mathcal{X}^{-}\) and \(\mathcal{X}^{+}\) as inputs and produces an interpolating neural net as an output (Algorithm 1). As the statement of the algorithm requires some technical preparation, we postpone its discussion to Section 2. Our main result, informally stated as Theorem 1.4 below and developed in full detail in Section 2, shows that this algorithm succeeds with high probability in polynomial time and provides bounds on the size of the interpolating network. The bounds are phrased in terms of two structural assumptions on the data, that together quantify the difficulty of the interpolation problem.
First, we will assume that the classes are \(\delta\)_-separated_. This assumption is also common in a number of works on memorization capacity, e.g. [23, 11, 12]. Below we will write, for any sets \(\mathcal{A},\mathcal{B}\subset\mathbb{R}^{d}\),
\[\mathrm{d}(\boldsymbol{a},\mathcal{B})=\inf_{\boldsymbol{b}\in\mathcal{B}}\| \boldsymbol{a}-\boldsymbol{b}\|_{2},\qquad\mathrm{d}(\mathcal{A},\mathcal{B}) =\inf_{\boldsymbol{a}\in\mathcal{A}}\mathrm{d}(\boldsymbol{a},\mathcal{B}).\]
**Definition 1.2** (\(\delta\)-separation).: \(\mathcal{X}^{-}\) and \(\mathcal{X}^{+}\) are \(\delta\)_-separated_ if \(\mathrm{d}(\mathcal{X}^{-},\mathcal{X}^{+})\geq\delta\).
Second, we will quantify the problem difficulty using the following notion that was first introduced in [10] (in a slightly different form).
**Definition 1.3** (Mutual covering).: We call
\[\mathcal{C}^{-} =\{\boldsymbol{c}_{1}^{-},\ldots,\boldsymbol{c}_{M^{-}}^{-}\} \subset\mathcal{X}^{-}, r_{1}^{-},\ldots,r_{M^{-}}^{-}\geq 0,\] \[\mathcal{C}^{+} =\{\boldsymbol{c}_{1}^{+},\ldots,\boldsymbol{c}_{M^{+}}^{+}\} \subset\mathcal{X}^{+}, r_{1}^{+},\ldots,r_{M^{+}}^{+}\geq 0\]
a _mutual covering_ for \(\mathcal{X}^{-}\) and \(\mathcal{X}^{+}\) if the sets
\[\mathcal{X}^{-}_{\ell}\coloneqq\mathcal{X}^{-}\cap\mathbb{B}_{2}^{d}( \boldsymbol{c}_{\ell}^{-},r_{\ell}^{-})\quad\text{and}\quad\mathcal{X}^{+}_{j} \coloneqq\mathcal{X}^{+}\cap\mathbb{B}_{2}^{d}(\boldsymbol{c}_{j}^{+},r_{j}^ {+}),\]
for \(\ell\in[M^{-}]\) and \(j\in[M^{+}]\), cover \(\mathcal{X}^{-}\) and \(\mathcal{X}^{+}\), respectively. We call these sets the _components_ of the mutual covering and call \(M^{-}\) and \(M^{+}\) the _mutual covering numbers_.
As we have only finitely many inputs, clearly a mutual covering always exists. However, if the arrangement of the classes is benign, the mutual covering numbers can be much smaller than the number of samples.
To see how the notion of mutual covering allows us to quantify the difficulty of a (binary) interpolation problem, we turn to our main result. In Theorem 1.4 we require the existence of a mutual covering with radii
\[r_{\ell}^{-}\lesssim\frac{\mathrm{d}(\boldsymbol{c}_{\ell}^{-},\mathcal{C}^{+ })}{\log^{1/2}(eR/\mathrm{d}(\boldsymbol{c}_{\ell}^{-},\mathcal{C}^{+}))} \quad\text{and}\quad r_{j}^{+}\lesssim\frac{\mathrm{d}(\boldsymbol{c}_{j}^{+},\mathcal{C}^{-})}{\log^{1/2}(eR/\mathrm{d}(\boldsymbol{c}_{j}^{+},\mathcal{C} ^{-}))}. \tag{1}\]
Geometrically, this means that the components covering \(\mathcal{X}^{-}\) and \(\mathcal{X}^{+}\) cannot intersect (more precisely, need to be slightly separated from) the ideal decision boundary between the two sets, as is illustrated in Figure 1. In particular, components with a small radius are only needed close to the ideal decision boundary, while parts that are far away from this boundary can be crudely covered with large components. Compared to classical coverings with balls of a fixed radius (as used in the classical notion of the Euclidean covering number of a set, see e.g. [20]), this can drastically reduce the required number of components.
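To make the notion concrete, the sketch below is a naive greedy heuristic (ours, not part of the analysis) for constructing one half of a mutual covering whose radii respect a conservative version of (1): since \(\mathcal{C}^{+}\subset\mathcal{X}^{+}\), we have \(\mathrm{d}(\boldsymbol{c},\mathcal{X}^{+})\leq\mathrm{d}(\boldsymbol{c},\mathcal{C}^{+})\), so basing the radius on \(\mathrm{d}(\boldsymbol{c},\mathcal{X}^{+})\) can only shrink it; the constant `c0` stands in for the unspecified absolute constant in (1).

```python
import numpy as np
from scipy.spatial.distance import cdist


def greedy_half_cover(X_minus, X_plus, R, c0=0.1):
    """Greedily pick centers from X^- and assign each the largest radius allowed
    by a conservative version of condition (1), based on d(c, X^+) <= d(c, C^+).
    Returns the indices of the chosen centers and their radii."""
    dist_to_plus = cdist(X_minus, X_plus).min(axis=1)   # d(x, X^+) for each x in X^-
    uncovered = np.ones(len(X_minus), dtype=bool)
    centers, radii = [], []
    while uncovered.any():
        c = int(np.argmax(uncovered))                   # first still-uncovered point
        dist = dist_to_plus[c]
        r = c0 * dist / np.sqrt(np.log(np.e * R / dist))
        centers.append(c)
        radii.append(r)
        # every point within radius r of the new center is now covered
        uncovered &= np.linalg.norm(X_minus - X_minus[c], axis=1) > r
    return np.array(centers), np.array(radii)
```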
While the mutual covering numbers \(M^{-}\) and \(M^{+}\) can be viewed as a measure of the _global complexity_ of the data, our result also involves the _local complexity_, measured by the 'sizes' of the components. Specifically, define \(\omega\coloneqq\max\{\omega^{-},\omega^{+}\}\) where
\[\omega^{-}\coloneqq\max_{\ell\in[M^{-}]}\frac{w^{2}(\mathcal{X}^{-}_{\ell}- \boldsymbol{c}^{-}_{\ell})}{\mathrm{d}^{3}(\boldsymbol{c}^{-}_{\ell}, \mathcal{C}^{+})}\quad\text{and}\quad\omega^{+}\coloneqq\max_{j\in[M^{+}]} \frac{w^{2}(\mathcal{X}^{+}_{j}-\boldsymbol{c}^{+}_{j})}{\mathrm{d}^{3}( \boldsymbol{c}^{+}_{j},\mathcal{C}^{-})}. \tag{2}\]
The quantities \(\omega^{-}\) and \(\omega^{+}\) measure a scaled version of the 'size' of the largest (centered) component of \(\mathcal{X}^{-}\) and \(\mathcal{X}^{+}\), respectively. Here, the _Gaussian mean width_ of a set \(\mathcal{A}\subset\mathbb{R}^{d}\) is defined as
\[w(\mathcal{A})\coloneqq\mathbb{E}\sup_{\boldsymbol{x}\in\mathcal{A}}|\langle \boldsymbol{g},\boldsymbol{x}\rangle|,\]
Figure 1: **The mutual covering is ‘problem-adaptive’.** Condition (1) on the radii in Theorem 1.4 allows a covering ‘adapted to’ the mutual arrangement of the data: only the parts of the data that lie close to the ideal decision boundary need to be covered using balls with small diameters – other parts can be crudely covered using larger balls.
where \(\mathbf{g}\sim N(\mathbf{0},\mathbf{I}_{d})\) denotes a standard Gaussian random vector. The mean width is a well-established complexity measure in high-dimensional statistics and geometry which is sensitive to low-dimensional structures such as sparsity, unions of low-dimensional subspaces, or manifolds, see, e.g., [20] for a detailed discussion and examples. We refer to Remark 2.7 for straightforward estimates of \(\omega\).
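For a finite set, the mean width can be estimated directly by Monte Carlo, as in the following sketch (the number of Gaussian draws is an arbitrary choice):

```python
import numpy as np


def gaussian_mean_width(A, n_draws=2000, seed=0):
    """Monte Carlo estimate of w(A) = E sup_{x in A} |<g, x>| for a finite set
    A, given as an array with one point per row."""
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((n_draws, A.shape[1]))    # standard Gaussian directions
    return float(np.abs(G @ A.T).max(axis=1).mean())  # average of the suprema
```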
We are now ready to present the informal version of our main result, which we state for the case of threshold activations. Intuitively, this should be the most challenging because the signal amplitude is lost. It is possible to prove analogous results for other activations. In fact, Algorithm 1 only requires an activation function \(\sigma\) such that \(\sigma(t)=0\) for \(t\leq 0\) and \(\sigma(t)>0\) for \(t>0\), which e.g. includes the ReLU. Note, however, that the bounds on the network size may change for different activations.
**Theorem 1.4** (Informal).: Let \(\mathcal{X}^{-},\mathcal{X}^{+}\subset R\mathbb{B}_{2}^{d}\) be finite and disjoint. Suppose that there is a mutual covering with \(\delta\)-separated centers and radii satisfying (1). Then, with high probability, Algorithm 1 terminates in polynomial time and outputs a 2-hidden-layer fully-connected neural network with threshold activations,
\[\mathcal{O}\left(M^{-}+R\delta^{-1}\log(2M^{-}M^{+})+R\omega\right)\]
neurons and
\[\mathcal{O}\left(R(d+M^{-})(\delta^{-1}\log(2M^{-}M^{+})+\omega)\right)\]
parameters, that interpolates \(\mathcal{X}^{-}\) and \(\mathcal{X}^{+}\).
A first interesting feature of this result is the asymmetric dependence on the complexities of the classes: the network size depends linearly on \(M^{-}\) but only logarithmically on \(M^{+}\). Second, our bounds are independent of the number of samples. This is a fundamental difference between memorization capacity and our instance-specific approach to interpolation, see the discussion in Section 1.3. To highlight this second point further, we deduce an interpolation result for infinite sets from our analysis. In contrast to Theorem 1.4, the proof is nonconstructive.
**Corollary 1.5**.: Let \(\mathcal{X}^{-},\mathcal{X}^{+}\subset R\mathbb{B}_{2}^{d}\) be (possibly infinite) sets. Suppose that there is a mutual covering with \(\delta\)-separated centers and radii satisfying (1). Then, there exists a neural network of the same size as in Theorem 1.4 that interpolates \(\mathcal{X}^{-}\) and \(\mathcal{X}^{+}\).
### Organization
The rest of the paper is organized as follows. In Section 1.3 we discuss related works, then introduce notation in Section 1.4. In Section 2 we present Algorithm 1 and give intuition on how it works. We also state the formal counterpart of our main result in Theorem 2.6. All proofs are contained in Section 3. Finally, in Section 4 we demonstrate the performance of our algorithm through numerical experiments.
### Related Works
Memorization capacity. Neural network architectures used in practice are powerful memorizers: it has been observed that various popular architectures for image classification do not only interpolate their training data, but can even interpolate this data when the labels are replaced by random labels (after retraining) [10]. To understand this phenomenon, an extensive literature has studied the memorization capacity of neural networks, by quantifying how large a network needs to be to interpolate any \(N\) points with _arbitrary labels_. In this case, we will say that the network can memorize \(N\) points. In practice, memorization results often include some assumptions on the inputs. Here we will summarize relevant memorization literature that makes similar structural assumptions on the inputs, such as \(\delta\)-separation or a bound on the norm. Other works consider randomized samples or samples drawn from a distribution, see e.g. [1, 1, 10].
The study of the memorization capacity of neural networks with threshold activations has a rich history. Assuming that the points are in general position1, [1] showed that a 1-hidden-layer threshold network with \(\mathcal{O}(N+d)\) parameters and \(\mathcal{O}(\lceil N/d\rceil)\) neurons is enough to memorize binary labels of \(N\) points in \(\mathbb{R}^{d}\). In [11] it was shown that \(\mathcal{O}(Nd)\) parameters and \(\mathcal{O}(N)\) neurons are enough to memorize real labels, without placing any additional constraints on the points. Assuming that the points are \(\delta\)-separated and lie on the unit sphere, [20] proved that a deep threshold (or ReLU) network can memorize binary labels using \(\widetilde{\mathcal{O}}(e^{1/\delta^{2}}(d+\sqrt{N})+N)\) parameters and \(\widetilde{\mathcal{O}}(e^{1/\delta^{2}}+\sqrt{N})\) neurons. The exponential dependence on \(\delta\) was improved by [14], who proved that \(\widetilde{\mathcal{O}}(d/\delta+N)\) parameters and \(\widetilde{\mathcal{O}}(1/\delta+\sqrt{N})\) neurons are enough for memorization of binary labels, while further only requiring bounded norm instead of unit norm. The constructions of both [20] and [14] are probabilistic, while the ones of [1] and [11] are purely deterministic.
Footnote 1: A set of \(N\) points in \(\mathbb{R}^{d}\) is said to be in general position if any subset of \(d\) vectors is linearly independent.
There have been a number of works on the memorization capacity of networks with other activations. We will only summarize the results for ReLU activations due to its popularity in practice, and refer to e.g. [15, 16, 17] and the references therein for other activations. The work [1] extended the result of [1] to the case of real-valued labels using a network with ReLU activation with a size of the same order. Using weight sharing in the first layer, [10] showed that a 1-hidden-layer ReLU network could memorize real-valued labels using \(\mathcal{O}(N+d)\) parameters and \(\mathcal{O}(N)\) neurons, with no further assumptions on the points. [21] proved that both multi-class and real-valued labels can be memorized by a ReLU net with two and three hidden layers, respectively, using \(\mathcal{O}(d\sqrt{N}+N)\) parameters and \(\mathcal{O}(\sqrt{N})\) neurons. [16] achieved the first result on memorization with a sublinear number of parameters: assuming that the points are separated, they showed that ReLU (or hard-tanh) nets can memorize multiple classes using \(\widetilde{\mathcal{O}}(d+N^{2/3})\) parameters,
constant width and \(\widetilde{\mathcal{O}}(N^{2/3})\) layers. [20] improved the above dependence on \(N\) from \(N^{2/3}\) to \(\sqrt{N}\), which is optimal. Specifically, assuming that the points are \(\delta\)-separated and have bounded norm, they show that a ReLU net with \(\widetilde{\mathcal{O}}(d+\sqrt{N})\) parameters, constant width and \(\widetilde{\mathcal{O}}(\sqrt{N})\) layers is enough to memorize multi-class labels.
To directly compare the above with our results, we consider a _trivial_ mutual covering that always 'works' regardless of the labels of the points: we cover each point by its own component with a radius of zero. Thus, \(M^{-}=N^{-}\coloneqq|\mathcal{X}^{-}|\), \(M^{+}=N^{+}\coloneqq|\mathcal{X}^{+}|\) and \(\omega=0\). Thus, in the worst case Theorem 1.4 yields a network with \(\mathcal{O}(R(d+N^{-})\delta^{-1}\log(2N^{-}N^{+}))\) parameters and \(\mathcal{O}\left(N^{-}+R\delta^{-1}\log(2N^{-}N^{+})\right)\) neurons. If \(N^{-}\simeq N^{+}\), the number of neurons scales (slightly worse than) linear in the number of points, which is worse than the best result on memorization capacity for networks using the threshold activation. In Proposition 2.8 we show that the linear scaling in terms of \(M^{-}\) in Theorem 1.4 is not a proof artifact. Hence, our method cannot recover optimal performance in the worst case. It is an interesting open question whether our method can be modified to achieve this.
Nevertheless, in practical situations one expects that typically a much better mutual covering exists, due to intrinsic low-dimensional structure of the input data and/or a more benign label assignment than arbitrary labelling. In such cases Theorem 1.4 can guarantee a much smaller interpolating network. In particular, since our bounds are independent of the number of samples we can derive interpolation results for infinite sets (Corollary 1.5). In contrast, results on memorization capacity cannot have this feature. The VC-dimension2 of feed-forward neural networks with threshold activation is \(\mathcal{O}(W\log W)\)[1], where \(W\) denotes the total number of parameters, i.e., the sum of the number of weights and biases over all layers. Hence, to memorize more samples than this upper bound, one would necessarily need to add more parameters to the network. Similar results hold for arbitrary piecewise linear activations such as the ReLU [1] or analytic definable activation functions [14].
Footnote 2: The VC-dimension is the maximal \(N\) for which there exist points \(\mathbf{x}_{1},\ldots,\mathbf{x}_{N}\in\mathbb{R}^{d}\) such that for every assignment of labels \(y_{1},\ldots,y_{N}\in\{\pm 1\}\) there exists a set of parameters \(\theta\) such that the network interpolates the samples, i.e. \(F_{\theta}(x_{i})=y_{i}\) for all \(i\in[N]\).
Separation capacity. Related to interpolation is the question of _separation capacity_ of a neural network: under what conditions can a neural network make two (not necessarily finite) classes linearly separable? Obviously, a network with separation capacity can be extended to an interpolating network by adding the separating hyperplane as an additional layer.
In [1] it was shown that any two disjoint sets can be made linearly separable using a deterministic two-layer ReLU neural net. However, their proof is non-constructive and they provided no estimates on the size of the network. Inspired by this work, [1] showed that a wide enough two-layer random ReLU network can make any two \(\delta\)-separated sets linearly separable if the weights and biases are chosen from appropriate distributions. Unlike the existence result in [1], they provided bounds linking the number of required neurons to geometric properties of the classes and their mutual arrangement via a notion of mutual covering similar to Definition 1.3. This instance-specific viewpoint allows them to overcome the curse of dimensionality if the data carries a low-complexity structure. Following up on this, [10] showed that even a wide enough one-layer ReLU net is enough to accomplish separation. They introduced a deterministic memorization algorithm which is then 'implemented' by a random neural network. Like [11], they also used a mutual covering to capture the complexity of the data.
While the above results could be applied to interpolation, the required number of parameters would be larger than what we require in Theorem 1.4. Both [11] and [10] yield networks scaling polynomially in terms of the mutual covering numbers, while our network scales only linearly.
The present paper is strongly influenced by [11] - we adopt an instance-specific viewpoint and the notion of mutual covering. However, instead of separation, we directly focus on interpolation. Together with our only partially randomized approach, this allows us to prove better bounds for this case.
Random hyperplane tessellations. As will become apparent below, our technical analysis is linked to tessellations created by random hyperplanes with Gaussian directions and uniformly distributed shifts, which were recently intensively studied in [12, 13]. In particular, [13] derived a sharp bound on the number of hyperplanes needed to induce a uniform tessellation of a given set, meaning that the Euclidean distance between any two points in the set corresponds to the fraction of hyperplanes separating them up to a prespecified error. We will use some insights from these works, see in particular Lemma 3.5.
### Setup and Notation
For any \(1\leq p\leq\infty\) we let \(\left\lVert\cdot\right\rVert_{p}\) denote the \(\ell_{p}\) norm. We use \(\mathbb{B}_{2}^{d}(\boldsymbol{c},r)\) to denote the Euclidean ball in \(\mathbb{R}^{d}\) with center \(\boldsymbol{c}\in\mathbb{R}^{d}\) and radius \(r\geq 0\) and we denote the unit ball by \(\mathbb{B}_{2}^{d}\). For \(n\in\mathbb{N}\), we set \([n]\coloneqq\{1,\ldots,n\}\). For any set \(\mathcal{A}\) we use \(|\mathcal{A}|\) to denote its cardinality and let \(\mathds{1}_{\mathcal{A}}\) denote its indicator. We let sign denote the function
\[\operatorname{sign}(x)=\begin{cases}+1&\text{if }x\geq 0,\\ -1&\text{else}.\end{cases}\]
For a function \(\sigma\colon\mathbb{R}\to\mathbb{R}\) and a vector \(\boldsymbol{x}\in\mathbb{R}^{d}\) we denote the element-wise application by \(\sigma(\boldsymbol{x})=(\sigma(x_{i}))_{i=1}^{d}\). If an inequality holds up to an absolute constant \(C\), we write \(A\gtrsim B\) instead of \(A\geq C\cdot B\). We write \(A\simeq B\) if \(A\gtrsim B\gtrsim A\). We use \(\mathcal{O}(\,\cdot\,)\) to omit constant terms and \(\widetilde{\mathcal{O}}(\,\cdot\,)\) to additionally omit logarithmic terms. We define the distance between any point \(\boldsymbol{x}\in\mathbb{R}^{d}\) and a set \(\mathcal{X}\subset\mathbb{R}^{d}\) as \(\operatorname{d}(\boldsymbol{x},\mathcal{X})\coloneqq\inf\{\left\lVert\boldsymbol{x}-\boldsymbol{y}\right\rVert_{2}:\boldsymbol{y}\in\mathcal{X}\}\). For \(\boldsymbol{x},\boldsymbol{y}\in\mathbb{R}^{n}\) we define \(\mathds{1}[\boldsymbol{x}=\boldsymbol{y}]\in\{0,1\}^{n}\)
by
\[(\mathds{1}[\mathbf{x}=\mathbf{y}])_{i}=\begin{cases}1&\text{if }x_{i}=y_{i},\\ 0&\text{else}.\end{cases}\]
We denote by \(\mathbf{0},\mathbf{1}\in\mathbb{R}^{d}\) the vector with entries all equal to \(0\) and all equal to \(1\), respectively. We denote the standard multivariate normal distribution in \(d\) dimensions by \(N(\mathbf{0},\mathbf{I}_{d})\) and the uniform distribution on \(\mathcal{A}\subset\mathbb{R}^{d}\) by \(\text{Unif}(\mathcal{A})\).
## 2 Interpolation algorithm and main results
Consider any disjoint \(\mathcal{X}^{-},\mathcal{X}^{+}\subset R\mathbb{B}_{2}^{d}\) with \(N^{-}\coloneqq|\mathcal{X}^{-}|\) and \(N^{+}\coloneqq|\mathcal{X}^{+}|\). Let \(\sigma\colon\mathbb{R}\to\mathbb{R}\) satisfy \(\sigma(t)=0\) for \(t\leq 0\) and \(\sigma(t)>0\) for \(t>0\). Let us outline our method to construct an interpolating three-layer neural network:
1. To build the first layer \(\Phi\colon\mathbb{R}^{d}\to\mathbb{R}^{n}\), we iteratively sample i.i.d. random hyperplanes \(H[\mathbf{w}_{i},b_{i}]\) until any \(\mathbf{x}^{-}\in\mathcal{X}^{-}\) is separated from any \(\mathbf{x}^{+}\in\mathcal{X}^{+}\) by at least one of them (see Figure 2 and Definition 3.1). Each hyperplane includes a shift \(b_{i}\) so that it is able to separate points located on a ray emanating from the origin. In the worst case, one could have points with opposite labels close to the boundary of \(R\mathbb{B}_{2}^{d}\), hence one needs the maximal shift to scale at least like \(R\). We let \(\mathbf{W}\) be the matrix containing the \(\mathbf{w}_{i}\) as its rows and let \(\mathbf{b}\) be the vector having the \(b_{i}\) as its coordinates. We define the first, random layer \(\Phi\) of the network by \(\Phi(\mathbf{x})=\sigma(\mathbf{W}\mathbf{x}+\mathbf{b})\)
Figure 2: **Random hyperplanes in the input domain \(\mathbb{R}^{d}\).** In Algorithm 1 we iteratively sample random hyperplanes \(H[\mathbf{w}_{i},b_{i}]\) until every pair of points with opposite labels is separated by at least one of them. This tessellates the space into multiple cells, where each cell is only populated with points of the same label. Each hyperplane can be associated with one of the neurons of the first layer \(\Phi\).
Since all pairs of points with opposite labels are separated by at least one hyperplane, \(\Phi\) has the following property: for any \((\mathbf{x}^{-},\mathbf{x}^{+})\in\mathcal{X}^{-}\times\mathcal{X}^{+}\) there exists at least one \(i\in[n]\) with \[\Phi_{i}(\mathbf{x}^{-})=0\quad\text{and}\quad\Phi_{i}(\mathbf{x}^{+})>0.\] (3) This enables us to distinguish between points of different labels.
2. We then exploit (3) in the following way. For \(\mathbf{x}^{-}\in\mathcal{X}^{-}\) consider the mask \(\mathbf{u}_{\mathbf{x}^{-}}=\mathds{1}[\Phi(\mathbf{x}^{-})=\mathbf{0}]\). By (3), \[\langle\mathbf{u}_{\mathbf{x}^{-}},\Phi(\mathbf{x}^{-})\rangle=0\quad\text{and}\quad \langle\mathbf{u}_{\mathbf{x}^{-}},\Phi(\mathbf{x}^{+})\rangle>0\quad\text{for all }\mathbf{x}^{+}\in\mathcal{X}^{+}.\] Geometrically, this means that the hyperplane \(H[-\mathbf{u}_{\mathbf{x}^{-}},m_{\mathbf{x}^{-}}]\), where \[m_{\mathbf{x}^{-}}=\min_{\mathbf{x}^{+}\in\mathcal{X}^{+}}\langle\mathbf{u}_{\mathbf{x}^{-}},\Phi(\mathbf{x}^{+})\rangle,\] separates \(\Phi(\mathcal{X}^{+})\) from \(\Phi(\mathbf{x}^{-})\) (see Figure 3). Let \(\mathbf{U}\in\mathbb{R}^{N^{-}\times n}\) be the matrix with rows \(\mathbf{u}_{\mathbf{x}^{-}}\) and let \(\mathbf{m}\in\mathbb{R}^{N^{-}}\) be the vector with coordinates \(m_{\mathbf{x}^{-}}\). We then define the second layer \(\hat{\Phi}\colon\mathbb{R}^{n}\to\mathbb{R}^{\hat{n}}\) of the network by \(\hat{\Phi}(\mathbf{z})=\sigma(-\mathbf{U}\mathbf{z}+\mathbf{m})\). This layer satisfies, for every \(\mathbf{x}^{-}\in\mathcal{X}^{-}\), \[[\hat{\Phi}(\Phi(\mathbf{x}^{-}))]_{\mathbf{x}^{-}}>0\quad\text{and}\quad[\hat{\Phi} (\Phi(\mathbf{x}^{+}))]_{\mathbf{x}^{-}}=0\quad\text{for all }\mathbf{x}^{+}\in\mathcal{X}^{+}.\] (4) Thus, in the second hidden layer, there is a dedicated neuron to detect each point of \(\mathcal{X}^{-}\), but none of them activates on \(\mathcal{X}^{+}\).
Figure 3: **The effect of the first layer \(\Phi\).** After transforming the data with the first layer \(\Phi\) we can, for each \(\mathbf{x}^{-}\in\mathcal{X}^{-}\), construct a hyperplane \(H[-\mathbf{u}_{\mathbf{x}^{-}},m_{\mathbf{x}^{-}}]\) that separates \(\Phi(\mathcal{X}^{+})\) from \(\Phi(\mathbf{x}^{-})\). Each hyperplane can be associated with one of the neurons in the second layer.
3. In the output layer, we simply sum the output from the second layer \(\hat{\Phi}\). By (4), for all \(\mathbf{x}^{-}\in\mathcal{X}^{-}\) and \(\mathbf{x}^{+}\in\mathcal{X}^{+}\), \[\langle\mathbf{1},\hat{\Phi}(\Phi(\mathbf{x}^{-}))\rangle>0\quad\text{and}\quad \langle\mathbf{1},\hat{\Phi}(\Phi(\mathbf{x}^{+}))\rangle=0\] and hence \(\text{sign}(-\cdot)\) outputs the correct label.
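For reference, the forward pass of the resulting network takes only a few lines; the NumPy sketch below assumes the threshold activation and parameters \((\boldsymbol{W},\boldsymbol{b},\boldsymbol{U},\boldsymbol{m})\) as produced by Algorithm 1 below.

```python
import numpy as np


def threshold(t):
    # sigma(t) = 1 for t > 0 and 0 otherwise (threshold activation)
    return (t > 0).astype(float)


def predict(x, W, b, U, m):
    """Forward pass F(x) = sign(-<1, hat_Phi(Phi(x))>) of the constructed network."""
    z = threshold(W @ x + b)      # first (random) layer Phi
    h = threshold(-U @ z + m)     # second layer hat_Phi: one neuron per selected x^-
    return 1 if -h.sum() >= 0 else -1
```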
The second step of this method is rather naive: for _every_\(\mathbf{x}^{-}\in\mathcal{X}^{-}\), we construct a dedicated neuron
\[\hat{\varphi}_{\mathbf{x}^{-}}(\mathbf{z})=\sigma(-\langle\mathbf{u}_{\mathbf{x}^{-}},\mathbf{z} \rangle+m_{\mathbf{x}^{-}}) \tag{5}\]
that distinguishes \(\Phi(\mathbf{x}^{-})\) and \(\Phi(\mathcal{X}^{+})\), i.e., \(\hat{\varphi}_{\mathbf{x}^{-}}(\Phi(\mathbf{x}^{-}))>0\) and \(\hat{\varphi}_{\mathbf{x}^{-}}(\Phi(\mathbf{x}^{+}))=0\) for all \(\mathbf{x}^{+}\in\mathcal{X}^{+}\). This potentially leads to redundancy, since to get an interpolating net at the third step, it suffices if for each \(\mathbf{x}^{-}\) there is _some_\(\mathbf{x}^{-}_{*}\) such that \(\hat{\varphi}_{\mathbf{x}^{-}_{*}}\) distinguishes \(\Phi(\mathbf{x}^{-})\) and \(\Phi(\mathcal{X}^{+})\). We can especially hope for this to be true if \(\mathbf{x}^{-}\) is 'close enough to' \(\mathbf{x}^{-}_{*}\) in a suitable sense. This is illustrated in Figure 4. Thus we can improve the second step by forward selection: we iteratively select elements \(\mathbf{x}^{-}_{*}\) from \(\mathcal{X}^{-}\) and construct the associated neuron \(\hat{\varphi}_{\mathbf{x}^{-}_{*}}\) until there is a distinguishing neuron for each element in \(\mathcal{X}^{-}\).
These considerations lead to our interpolation algorithm formalized in Algorithm 1.
Figure 4: **Motivation for forward selection.** While each \(\Phi(\mathbf{x}^{-})\) is separated by a corresponding ‘dedicated’ hyperplane from \(\Phi(\mathcal{X}^{+})\) (depicted in dashed grey), we can identify a single hyperplane \(H[-\mathbf{u}_{\mathbf{x}^{-}_{*}},m_{\mathbf{x}^{-}_{*}}]\) (depicted in grey) that separates several \(\Phi(\mathbf{x}^{-})\) from \(\Phi(\mathcal{X}^{+})\) simultaneously. The other hyperplanes are redundant and the corresponding neurons do not need to be included in the second layer \(\hat{\Phi}\).
```
1:Disjoint and finite \(\mathcal{X}^{-},\mathcal{X}^{+}\subset\mathbb{R}^{d}\), activation \(\sigma\colon\mathbb{R}\to\mathbb{R}\) satisfying \(\sigma(t)=0\) for \(t\leq 0\) and \(\sigma(t)>0\) for \(t>0\), (minimal) width of the first layer \(n_{\min}\geq 0\).
2:A three-layer fully-connected neural network \(F\colon\mathbb{R}^{d}\to\{\pm 1\}\) that interpolates \(\mathcal{X}^{-}\) and \(\mathcal{X}^{+}\).
```
(_First layer \(\Phi\)_)
```
1:Calculate \(R\leftarrow\max_{\boldsymbol{x}\in\mathcal{X}^{-}\cup\mathcal{X}^{+}}\left\| \boldsymbol{x}\right\|_{2}\) and choose \(\lambda\gtrsim R\).
2:Initialize \(\mathcal{S}\leftarrow\emptyset\) and \(n\gets 0\).
3:while\(\mathcal{S}\neq\mathcal{X}^{-}\times\mathcal{X}^{+}\) or \(n<n_{\min}\)do
4: Update \(n\gets n+1\).
5: Sample \[\boldsymbol{w}_{n}\sim N(\boldsymbol{0},\boldsymbol{I}_{d}),\quad b_{n}\sim \operatorname{Unif}([-\lambda,\lambda]).\]
6: Update \(\mathcal{S}\) according to \[\mathcal{S}\leftarrow\mathcal{S}\cup\{(\boldsymbol{x}^{-},\boldsymbol{x}^{+}) \in\mathcal{X}^{-}\times\mathcal{X}^{+}:\langle\boldsymbol{w}_{n},\boldsymbol {x}^{-}\rangle\leq-b_{n}<\langle\boldsymbol{w}_{n},\boldsymbol{x}^{+}\rangle\}.\]
7:endwhile
8:Define \(\Phi(\boldsymbol{x})=\sigma(\boldsymbol{W}\boldsymbol{x}+\boldsymbol{b})\) with \(\boldsymbol{W}\in\mathbb{R}^{n\times d}\) and \(\boldsymbol{b}\in\mathbb{R}^{n}\) where \[\boldsymbol{W}\leftarrow\begin{bmatrix}\boldsymbol{w}_{1}^{\top}\\ \vdots\\ \boldsymbol{w}_{n}^{\top}\end{bmatrix}\quad\text{and}\quad\boldsymbol{b} \leftarrow\begin{bmatrix}b_{1}\\ \vdots\\ b_{n}\end{bmatrix}.\] (_Second layer \(\hat{\Phi}\)_)
9:Initialize \(\mathcal{C}\leftarrow\mathcal{X}^{-}\) and \(\hat{n}\gets 0\).
10:while\(\mathcal{C}\neq\emptyset\)do
11: Update \(\hat{n}\leftarrow\hat{n}+1\).
12: Select \(\boldsymbol{x}_{\hat{n}}^{-}\in\mathcal{C}\) uniformly at random from \(\mathcal{C}\) and calculate \[\boldsymbol{u}_{\hat{n}}\leftarrow\mathds{1}[\Phi(\boldsymbol{x}_{\hat{n}}^{- })=\boldsymbol{0}],\quad m_{\hat{n}}\leftarrow\min_{\boldsymbol{x}^{+}\in \mathcal{X}^{+}}\langle\boldsymbol{u}_{\hat{n}},\Phi(\boldsymbol{x}^{+})\rangle.\]
13: Update \(\mathcal{C}\) according to \[\mathcal{C}\leftarrow\mathcal{C}\setminus\{\boldsymbol{x}^{-}\in\mathcal{C}:\langle\boldsymbol{u}_{\hat{n}},\Phi(\boldsymbol{x}^{-})\rangle<m_{\hat{n}}\}.\]
14:endwhile
15:Define \(\hat{\Phi}(\boldsymbol{z})=\sigma(-\boldsymbol{U}\boldsymbol{z}+\boldsymbol{ m})\) with \(\boldsymbol{U}\in\mathbb{R}^{\hat{n}\times n}\) and \(\boldsymbol{m}\in\mathbb{R}^{\hat{n}}\) where \[\boldsymbol{U}\leftarrow\begin{bmatrix}\boldsymbol{u}_{1}^{\top}\\ \vdots\\ \boldsymbol{u}_{\hat{n}}^{\top}\end{bmatrix}\quad\text{and}\quad\boldsymbol{m} \leftarrow\begin{bmatrix}m_{1}\\ \vdots\\ m_{\hat{n}}\end{bmatrix}.\]
16:Return \(F(\boldsymbol{x})=\operatorname{sign}(-\langle\boldsymbol{1},\hat{\Phi}( \boldsymbol{x})\rangle)\). (_Output network \(F\)_)
_Remark 2.1_.: First, let us briefly comment on the parameter \(n_{\min}\) in the first loop of Algorithm 1, which is the minimal width of the first layer \(\Phi\). In (the proof of) Proposition 2.2 we will see that the first loop (and hence, the algorithm) terminates with probability 1. In Theorem 2.6, we will derive a lower bound on \(n_{\min}\) that ensures that the algorithm terminates with high probability and derive an upper bound on the total size of the output net \(F\). The first condition in line 1 of the algorithm will in this case be redundant. We only include this condition to ensure that the second loop of the algorithm is always guaranteed to terminate.
Second, we comment on the parameter \(\lambda\), which is the maximal shift of the hyperplanes in the first layer. The condition \(\lambda\gtrsim R\) in the first line of Algorithm 1 is used to guarantee that every pair of samples with different labels is separated by at least one of the hyperplanes (even if they are on a line through the origin, Proposition 3.2), and that the hyperplanes induce a uniform tessellation, allowing us to relate the fraction of hyperplanes between points to their Euclidean distance (Lemmas 3.4 and 3.5). As this condition involves an unknown constant, for a practical application \(\lambda\) can be treated like a hyperparameter. In Section 4 we will see that \(\lambda\geq R\) is typically sufficient and, depending on the dataset, smaller values might also work.
Let us now state our main results.
**Proposition 2.2** (Termination and correctness).: Let \(\mathcal{X}^{-},\mathcal{X}^{+}\subset\mathbb{R}^{d}\) be disjoint and finite. Then Algorithm 1 terminates with probability 1 and its output \(F\) interpolates \(\mathcal{X}^{-}\) and \(\mathcal{X}^{+}\).
From the discussion at the start of this section, it is clear that Algorithm 1 produces an interpolating network _if_ the first loop of the algorithm terminates. We will prove termination in Section 3.1.
Additionally, the following gives an estimate of the run time of Algorithm 1.
**Proposition 2.3** (Run time).: Let \(\mathcal{X}^{-},\mathcal{X}^{+}\subset\mathbb{R}^{d}\) be finite and \(\delta\)-separated. Let \(N^{-}\coloneqq|\mathcal{X}^{-}|\), \(N^{+}\coloneqq|\mathcal{X}^{+}|\) and denote \(N\coloneqq N^{-}+N^{+}\). Assume that \(N^{-}\simeq N^{+}\), the input dimension \(d\) is constant and the activation function \(\sigma\) is computable in constant time. Then Algorithm 1 has a run time of at most
\[\mathcal{O}(\delta^{-1}\lambda\log(N/\eta)N^{2}),\]
with probability at least \(1-\eta\).
_Remark 2.4_.: The run time of Algorithm 1 has a bottleneck of \(\mathcal{O}(N^{2})\) in terms of the number of samples, which may be serious for large datasets. This bottleneck already occurs in the first loop. In Section 4 we will consider a variation of the algorithm in which the number of hyperplanes drawn in the first layer is a hyperparameter. As we will see in Theorem 2.6, this algorithm is guaranteed to succeed with high probability if the number of draws is chosen large enough. In this case, the run time of the algorithm is dictated by the construction of the second layer, which takes time \(\mathcal{O}(M^{-}N^{+})\).
To complement Proposition 2.2 we derive a high probability bound on the size of the network produced by Algorithm 1. This bound will (at least in our proof) depend on the choice of the activation function \(\sigma\). We focus on the setting with threshold activations, i.e., we consider
\[\sigma(t)=\mathrm{Thres}(t)=\begin{cases}1&\text{if }t>0,\\ 0&\text{else}.\end{cases}\]
Let us first observe that in the limit, the shape of the activation region of every neuron in the second layer is a Euclidean ball of a 'maximal radius', i.e., one that touches the closest point in the set \(\mathcal{X}^{+}\). This gives geometric intuition on why the size of the second layer is naturally connected with the mutual covering numbers.
**Proposition 2.5** (Limit shape of activation regions - threshold activations).: Consider any \(\mathbf{x}_{*}^{-}\in\mathcal{X}^{-}\) and let \(\mathcal{A}_{\mathbf{x}_{*}^{-}}\) be the activation region of \(\hat{\varphi}_{\mathbf{x}_{*}^{-}}\). Then, for any \(\mathbf{x}\in\mathbb{R}^{d}\setminus\partial\mathcal{B}_{\mathbf{x}_{*}^{-}}\),
\[\lim_{\lambda\to\infty}\lim_{n\to\infty}\mathds{1}_{\mathcal{A}_{\mathbf{x}_{*}^{ -}}}(\mathbf{x})=\mathds{1}_{\mathcal{B}_{\mathbf{x}_{*}^{-}}}(\mathbf{x})\]
almost surely, where \(\mathcal{B}_{\mathbf{x}_{*}^{-}}=\mathbb{B}_{2}^{d}(\mathbf{x}_{*}^{-};\mathrm{d}(\bm {x}_{*}^{-},\mathcal{X}^{+}))\).
Let us now state the main result of our work.
**Theorem 2.6** (Size of interpolating net - threshold activations).: Let \(\mathcal{X}^{-},\mathcal{X}^{+}\subset R\mathbb{B}_{2}^{d}\) be finite and disjoint. Let \(\sigma\) be the threshold activation and \(\lambda\gtrsim R\). Suppose that there is a mutual covering of \(\mathcal{X}^{-}\) and \(\mathcal{X}^{+}\) such that the centers \(\mathcal{C}^{-}\) and \(\mathcal{C}^{+}\) are \(\delta\)-separated and the radii satisfy
\[r_{\ell}^{-}\lesssim\frac{d(\mathbf{c}_{\ell}^{-},\mathcal{C}^{+})}{\log^{1/2}(e \lambda/d(\mathbf{c}_{\ell}^{-},\mathcal{C}^{+}))}\quad\text{and}\quad r_{j}^{+} \lesssim\frac{d(\mathbf{c}_{j}^{+},\mathcal{C}^{-})}{\log^{1/2}(e\lambda/d(\mathbf{c} _{j}^{+},\mathcal{C}^{-}))}\]
for all \(\ell\in[M^{-}]\) and \(j\in[M^{+}]\). Set \(\omega\coloneqq\max\{\omega^{-},\omega^{+}\}\) where
\[\omega^{-}\coloneqq\max_{\ell\in[M^{-}]}\frac{w^{2}(\mathcal{X}_{\ell}^{-}- \mathbf{c}_{\ell}^{-})}{\mathrm{d}^{3}(\mathbf{c}_{\ell}^{-},\mathcal{C}^{+})}\quad \text{and}\quad\omega^{+}\coloneqq\max_{j\in[M^{+}]}\frac{w^{2}(\mathcal{X}_{j }^{+}-\mathbf{c}_{j}^{+})}{\mathrm{d}^{3}(\mathbf{c}_{j}^{+},\mathcal{C}^{-})}.\]
Suppose that
\[n_{\min}\gtrsim\lambda\delta^{-1}\log(2M^{-}M^{+}/\eta)+\lambda\omega. \tag{6}\]
Then, with probability at least \(1-\eta\), the neural network computed by Algorithm 1 has layer widths \(n=n_{\min}\) and \(\hat{n}\leq M^{-}\).
_Remark 2.7_.: We give a few examples of estimates of the Gaussian mean width (see, e.g., [21] for further details) to highlight some special cases of the condition (6).
1. For a finite set \(\mathcal{A}\subset\mathbb{B}_{2}^{d}\) we have \(w(\mathcal{A})\lesssim\sqrt{\log(|\mathcal{A}|)}\). As Algorithm 1 requires a finite number \(N\) of input samples, \(\omega\lesssim\delta^{-1}\log(N)\).
2. If \(\mathcal{A}\subset\mathbb{B}_{2}^{d}\) lies in a \(k\)-dimensional subspace, then \(w(\mathcal{A})\lesssim\sqrt{k}\). Hence, for samples in a \(k\)-dimensional subspace, \(\omega\lesssim\delta^{-1}k\).
3. The set \(\Sigma_{s}^{d}\coloneqq\{\mathbf{x}\in\mathbb{B}_{2}^{d}:\left\|\mathbf{x}\right\|_{0} \leq s\}\) of \(s\)-sparse vectors in the unit ball, where \(\left\|\mathbf{x}\right\|_{0}\) counts the number of non-zero coordinates in \(\mathbf{x}\), satisfies \(w(\Sigma_{s}^{d})\lesssim\sqrt{s\log(ed/s)}\). Hence, if the input samples are \(s\)-sparse, \(\omega\lesssim\delta^{-1}s\log(ed/s)\).
Notice that the latter two estimates are independent of the number of samples.
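For intuition, the Gaussian mean width of a finite set, here taken as \(w(\mathcal{A})=\mathbb{E}\sup_{\boldsymbol{a}\in\mathcal{A}}\langle\boldsymbol{g},\boldsymbol{a}\rangle\), can be estimated directly by Monte Carlo. The snippet below is our own illustration (not from the paper) and simply checks the \(\sqrt{\log|\mathcal{A}|}\) scaling of the first example above, up to constants.

```python
import numpy as np

def gaussian_mean_width(A, n_draws=500, rng=None):
    """Monte Carlo estimate of E sup_{a in A} <g, a> for a finite set A given by its rows."""
    rng = np.random.default_rng(rng)
    G = rng.standard_normal((n_draws, A.shape[1]))
    return (G @ A.T).max(axis=1).mean()

rng = np.random.default_rng(0)
d = 200
for size in (10, 100, 1_000, 10_000):
    A = rng.standard_normal((size, d))
    A /= np.linalg.norm(A, axis=1, keepdims=True)  # random points on the unit sphere
    print(size, round(gaussian_mean_width(A, rng=rng), 2), round(np.sqrt(np.log(size)), 2))
```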
The idea of the proof of Theorem 2.6 is to show that if \(\Phi\) is wide enough, then the neuron \(\hat{\varphi}_{\mathbf{x}_{*}^{-}}\) associated with \(\mathbf{x}_{*}^{-}\) (defined in (5)) not only separates \(\Phi(\mathbf{x}_{*}^{-})\) and \(\Phi(\mathcal{X}^{+})\), but in fact acts as a _robust separator_: it will also separate \(\Phi(\mathbf{x}^{-})\) and \(\Phi(\mathcal{X}^{+})\) for all points \(\mathbf{x}^{-}\) 'close enough to' \(\mathbf{x}_{*}^{-}\). The key formal observation is stated below in Lemma 3.6. Intuitively, the notion of 'close enough' should be relative to the distance of \(\mathbf{x}_{*}^{-}\) to the decision boundary. As a result, the size of the interpolating neural net is related to the 'complexity' of a mutual covering of \(\mathcal{X}^{-}\) and \(\mathcal{X}^{+}\) in which only the parts of \(\mathcal{X}^{-}\) and \(\mathcal{X}^{+}\) that lie close to the decision boundary need to be covered using components with small diameter - other parts can be crudely covered using large components (see Figure 1).
Finally, we prove that the statement of Theorem 2.6 cannot be improved in a certain sense. Proposition 2.8 below shows that the upper bound on the size of the second layer \(\hat{\Phi}\), as stated in Theorem 2.6, cannot be improved in general, assuming that, in addition, \(\sigma\) is non-decreasing. Note that this assumption is satisfied by many popular activations, including the ReLU. In the proof, we construct a one-dimensional dataset of points with alternating labels, which one could however embed (e.g. by appending zeros) into \(\mathbb{R}^{d}\) for an arbitrary dimension \(d\geq 1\). Note that the result holds independently of the random sampling of the first layer, so one cannot even find a benign choice of hyperplanes to improve the situation described below.
**Proposition 2.8**.: Assume that \(\sigma\) is non-decreasing, \(\sigma(t)=0\) for \(t\leq 0\) and \(\sigma(t)>0\) for \(t>0\). Let \(M^{-}\geq 2\) and \(M^{+}\coloneqq M^{-}-1\). Then, for all \(N^{-}\geq M^{-}\) and \(N^{+}\geq M^{+}\), there exists \(\mathcal{X}^{-},\mathcal{X}^{+}\subset[0,1]\) with \(N^{-}=|\mathcal{X}^{-}|\) and \(N^{+}=|\mathcal{X}^{+}|\), and a mutual covering \(\mathcal{C}^{-}=\{c_{1}^{-},\ldots,c_{M^{-}}^{-}\}\subset\mathcal{X}^{-}\) and \(\mathcal{C}^{+}=\{c_{1}^{+},\ldots,c_{M^{+}}^{+}\}\subset\mathcal{X}^{+}\) such that the output \(F\) of Algorithm 1 has at least \(M^{-}\) neurons in its second layer.
## 3 Proofs
### Proof of Proposition 2.2
We use the following terminology.
**Definition 3.1**.: Let \(\mathbf{v}\in\mathbb{R}^{d}\setminus\{\mathbf{0}\}\), \(\tau\in\mathbb{R}\) and \(t\geq 0\). A hyperplane \(H[\mathbf{v},\tau]\)\(t\)-separates \(\mathcal{X}^{-}\) from \(\mathcal{X}^{+}\) if
\[\langle\mathbf{v},\mathbf{x}^{-}\rangle+\tau\leq-t \text{ for all }\mathbf{x}^{-}\in\mathcal{X}^{-},\] \[\langle\mathbf{v},\mathbf{x}^{+}\rangle+\tau>+t \text{ for all }\mathbf{x}^{+}\in\mathcal{X}^{+}.\]
If \(t=0\), we simply say that \(H[\mathbf{v},\tau]\)_separates_\(\mathcal{X}^{-}\)_from_\(\mathcal{X}^{+}\).
To prove Proposition 2.2 it suffices to prove the following statement.
**Proposition 3.2**.: Let \(\mathcal{X}^{-},\mathcal{X}^{+}\subset R\mathbb{B}_{2}^{d}\) be finite and \(\delta\)-separated with \(N^{-}\coloneqq|\mathcal{X}^{-}|\) and \(N^{+}\coloneqq|\mathcal{X}^{+}|\). Let \(\lambda\gtrsim R\). Assume that the loop in Algorithm 1 ran for at least \(n\) iterations, where
\[n\gtrsim\delta^{-1}\lambda\cdot\log(N^{-}N^{+}/\eta). \tag{7}\]
Then, the exit condition of the loop is satisfied with probability at least \(1-\eta\).
In the proof, we will use the following lower bound on the probability that a random hyperplane from Algorithm 1 separates a fixed pair of points.
**Lemma 3.3**.: [10, Theorem 18] There is an absolute constant \(c>0\) such that the following holds. Let \(\mathbf{x}^{-},\mathbf{x}^{+}\in R\mathbb{B}_{2}^{d}\). Let \(\mathbf{g}\in\mathbb{R}^{d}\) denote a standard Gaussian random vector and let \(\tau\in[-\lambda,\lambda]\) be uniformly distributed. If \(\lambda\gtrsim R\), then with probability at least \(c\|\mathbf{x}^{+}-\mathbf{x}^{-}\|_{2}/\lambda\), the hyperplane \(H[\mathbf{g},\tau]\)\(\|\mathbf{x}^{+}-\mathbf{x}^{-}\|_{2}\)-separates \(\mathbf{x}^{-}\) from \(\mathbf{x}^{+}\).
Proof of Proposition 3.2.: Fix \(\mathbf{x}^{-}\in\mathcal{X}^{-}\) and \(\mathbf{x}^{+}\in\mathcal{X}^{+}\). We consider i.i.d. copies \(H_{1},\ldots,H_{n}\) of a hyperplane \(H=H[\mathbf{w},b]\), where \(\mathbf{w}\sim N(\mathbf{0},\mathbf{I}_{d})\) and \(b\sim\mathrm{Unif}([-\lambda,\lambda])\) are independent. By Lemma 3.3, the probability that \(\mathbf{x}^{-}\) and \(\mathbf{x}^{+}\) are not separated by any of these hyperplanes is at most \((1-c\delta/\lambda)^{n}\). By taking a union bound over all \(N^{-}N^{+}\) pairs of points, we see that the probability that at least one pair has no separating hyperplane is at most
\[N^{-}N^{+}\left(1-c\frac{\delta}{\lambda}\right)^{n}\leq N^{-}N^{+}e^{-c\frac {\delta}{\lambda}n}\leq\eta,\]
where we used that \(1+x\leq e^{x}\) for \(x\in\mathbb{R}\) and the last inequality follows from (7).
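As a quick empirical sanity check of Lemma 3.3, the separation probability can be estimated by sampling; the snippet below is our own illustration (the two points and the values of \(\lambda\) are arbitrary), with \(t=\|\mathbf{x}^{+}-\mathbf{x}^{-}\|_{2}\) as in the lemma. The rescaled quantity in the last column should stay roughly constant as \(\lambda\) grows, in line with the \(c\|\mathbf{x}^{+}-\mathbf{x}^{-}\|_{2}/\lambda\) lower bound.

```python
import numpy as np

def separation_probability(x_neg, x_pos, lam, trials=500_000, rng=None):
    """Estimate P( <g, x-> + tau <= -t  and  <g, x+> + tau > t ) with t = ||x+ - x-||_2."""
    rng = np.random.default_rng(rng)
    t = np.linalg.norm(x_pos - x_neg)
    g = rng.standard_normal((trials, len(x_neg)))
    tau = rng.uniform(-lam, lam, size=trials)
    return np.mean((g @ x_neg + tau <= -t) & (g @ x_pos + tau > t))

rng = np.random.default_rng(1)
x_neg, x_pos = np.array([0.3, -0.2, 0.1]), np.array([0.5, 0.4, -0.3])
for lam in (2.0, 4.0, 8.0):
    p = separation_probability(x_neg, x_pos, lam, rng=rng)
    # p * lam / ||x+ - x-|| should be roughly constant in lam
    print(lam, p, p * lam / np.linalg.norm(x_pos - x_neg))
```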
### Proof of Proposition 2.3
The calculation of the radius takes time \(\mathcal{O}(N^{-}+N^{+})\). The loops run for \(n\) and \(\hat{n}\) iterations where each iteration takes time \(\mathcal{O}(N^{-}N^{+})\) and \(\mathcal{O}(n(N^{-}+N^{+}))\), respectively. Transforming all samples once with the first layer (which is needed to compute the second loop) takes time \(\mathcal{O}(n(N^{-}+N^{+}))\). This totals \(\mathcal{O}(n(N^{-}N^{+}+\hat{n}N^{-}+\hat{n}N^{+}))=\mathcal{O}(nN^{2})\), where we used that \(\hat{n}\leq N\). Applying Proposition 3.2 completes the proof.
### Proof of Proposition 2.5
Recall that the neuron \(\hat{\varphi}_{\mathbf{x}_{*}^{-}}\) activates on \(\mathbf{x}\in\mathbb{R}^{d}\) if and only if
\[\langle\mathbf{u}_{\mathbf{x}_{*}^{-}},\Phi(\mathbf{x})\rangle<m_{\mathbf{x}_{*}^{-}}=\min_{ \mathbf{x}^{+}\in\mathcal{X}^{+}}\langle\mathbf{u}_{\mathbf{x}_{*}^{-}},\Phi(\mathbf{x}^{+})\rangle.\]
We make two observations. First, for any \(\mathbf{x}\in\mathbb{R}^{d}\),
\[\langle\mathbf{u}_{\mathbf{x}_{*}^{-}},\Phi(\mathbf{x})\rangle=\sum_{i=1}^{n}\mathds{1}_{\{ \Phi(\mathbf{x}_{*}^{-})_{i}=0\}}\Phi(\mathbf{x})_{i}=\sum_{i=1}^{n}\mathds{1}_{\{\langle \mathbf{w}_{i},\mathbf{x}_{*}^{-}\rangle+b_{i}\leq 0<\langle\mathbf{w}_{i},\mathbf{x} \rangle+b_{i}\}},\]
and hence, by the law of large numbers and by symmetry,
\[\lim_{n\to\infty}\frac{1}{n}\langle\mathbf{u}_{\mathbf{x}_{*}^{-}},\Phi(\mathbf{x})\rangle =\frac{1}{2}\mathbb{P}(\operatorname{sign}(\langle\mathbf{w},\mathbf{x}_{*}^{-} \rangle+b)\neq\operatorname{sign}(\langle\mathbf{w},\mathbf{x}\rangle+b)) \tag{8}\]
almost surely, where \(\mathbf{w}\sim N(\mathbf{0},\mathbf{I}_{d})\) and \(b\sim\operatorname{Unif}([-\lambda,\lambda])\) are independent. Second, by [DMS22, Lemma A.1], for any \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{d}\),
\[2\lambda\mathbb{P}_{b}(\operatorname{sign}(\langle\mathbf{w},\mathbf{x }\rangle+b)\neq\operatorname{sign}(\langle\mathbf{w},\mathbf{y}\rangle+b))\] \[\quad=|\langle\mathbf{w},\mathbf{x}-\mathbf{y}\rangle|1_{\{|\langle\mathbf{w}, \mathbf{x}\rangle|\leq\lambda,|\langle\mathbf{w},\mathbf{y}\rangle|\leq\lambda\}}\] \[\quad+2\lambda(1_{\{\langle\mathbf{w},\mathbf{x}\rangle>\lambda,\langle \mathbf{w},\mathbf{y}\rangle<-\lambda\}}+1_{\{\langle\mathbf{w},\mathbf{x}\rangle<-\lambda, \langle\mathbf{w},\mathbf{y}\rangle>\lambda\}})\] \[\quad+(\lambda-\langle\mathbf{w},\mathbf{x}\rangle)1_{\{\langle\mathbf{w}, \mathbf{y}\rangle>\lambda,|\langle\mathbf{w},\mathbf{x}\rangle|\leq\lambda\}}+(\lambda- \langle\mathbf{w},\mathbf{y}\rangle)1_{\{\langle\mathbf{w},\mathbf{x}\rangle>\lambda,|\langle \mathbf{w},\mathbf{y}\rangle|\leq\lambda\}}\] \[\quad+(\lambda+\langle\mathbf{w},\mathbf{x}\rangle)1_{\{\langle\mathbf{w}, \mathbf{y}\rangle<-\lambda,|\langle\mathbf{w},\mathbf{x}\rangle|\leq\lambda\}}+(\lambda+ \langle\mathbf{w},\mathbf{y}\rangle)1_{\{\langle\mathbf{w},\mathbf{x}\rangle<-\lambda,|\langle \mathbf{w},\mathbf{y}\rangle|\leq\lambda\}},\]
where \(\mathbb{P}_{b}\) is the probability with respect to \(b\). As \(\mathbb{P}(|\langle\mathbf{w},\mathbf{z}\rangle|>\lambda)\leq 2e^{-c\lambda^{2}/\|\mathbf{z} \|_{2}^{2}}\) for any \(\mathbf{z}\in\mathbb{R}^{d}\), we find by taking expectations with respect to \(\mathbf{w}\), taking the limit for \(\lambda\to\infty\), and using monotone convergence that
\[\lim_{\lambda\to\infty}2\lambda\mathbb{P}(\operatorname{sign}(\langle\mathbf{w}, \mathbf{x}\rangle+b)\neq\operatorname{sign}(\langle\mathbf{w},\mathbf{y}\rangle+b))= \mathbb{E}|\langle\mathbf{w},\mathbf{x}-\mathbf{y}\rangle|=\sqrt{2/\pi}\|\mathbf{x}-\mathbf{y}\|_{2}. \tag{9}\]
We proceed with the proof by distinguishing two cases. Let \(\mathbf{x}\in\mathbb{R}^{d}\), assume \(\|\mathbf{x}_{*}^{-}-\mathbf{x}\|<\min_{\mathbf{x}^{+}\in\mathcal{X}^{+}}\|\mathbf{x}_{*}^{-}- \mathbf{x}^{+}\|\) and define
\[\varepsilon\coloneqq\frac{\min_{\mathbf{x}^{+}\in\mathcal{X}^{+}}\|\mathbf{x}_{*}^{-} -\mathbf{x}^{+}\|-\|\mathbf{x}_{*}^{-}-\mathbf{x}\|}{2}>0.\]
By (9), there exists \(\Lambda>0\) such that for \(\lambda>\Lambda\),
\[\sqrt{2\pi} \lambda\mathbb{P}(\operatorname{sign}(\langle\mathbf{w},\mathbf{x}_{*}^{ -}\rangle+b)\neq\operatorname{sign}(\langle\mathbf{w},\mathbf{x}\rangle+b))\] \[<\big{\|}\mathbf{x}_{*}^{-}-\mathbf{x}\big{\|}+\varepsilon\] \[=\min_{\mathbf{x}^{+}\in\mathcal{X}^{+}}\big{\|}\mathbf{x}_{*}^{-}-\mathbf{x}^ {+}\big{\|}-\varepsilon\] \[<\min_{\mathbf{x}^{+}\in\mathcal{X}^{+}}\sqrt{2\pi}\lambda\mathbb{P}( \operatorname{sign}(\langle\mathbf{w},\mathbf{x}_{*}^{-}\rangle+b)\neq\operatorname{ sign}(\langle\mathbf{w},\mathbf{x}^{+}\rangle+b)),\]
and hence
\[\mathbb{P}(\operatorname{sign}(\langle\mathbf{w},\mathbf{x}_{*}^{-} \rangle+b)\neq\operatorname{sign}(\langle\mathbf{w},\mathbf{x}\rangle+b))\] \[<\min_{\mathbf{x}^{+}\in\mathcal{X}^{+}}\mathbb{P}(\operatorname{ sign}(\langle\mathbf{w},\mathbf{x}_{*}^{-}\rangle+b)\neq\operatorname{sign}(\langle\mathbf{w}, \mathbf{x}^{+}\rangle+b)).\]
Further, define
\[\delta\coloneqq\frac{1}{2}\big{(}\min_{\mathbf{x}^{+}\in\mathcal{X}^{ +}}\mathbb{P}(\operatorname{sign}(\langle\mathbf{w},\mathbf{x}_{*}^{-}\rangle+b)\neq \operatorname{sign}(\langle\mathbf{w},\mathbf{x}^{+}\rangle+b))\] \[-\mathbb{P}(\operatorname{sign}(\langle\mathbf{w},\mathbf{x}_{*}^{-} \rangle+b)\neq\operatorname{sign}(\langle\mathbf{w},\mathbf{x}\rangle+b))\big{)}>0.\]
By (8), almost surely, there exists \(N\in\mathbb{N}\) such that for \(n>N\),
\[\frac{2}{n}\langle\mathbf{u}_{\mathbf{x}_{*}^{-}},\Phi(\mathbf{x})\rangle <\mathbb{P}(\operatorname{sign}(\langle\mathbf{w},\mathbf{x}_{*}^{-}\rangle +b)\neq\operatorname{sign}(\langle\mathbf{w},\mathbf{x}\rangle+b))+\delta\] \[=\min_{\mathbf{x}^{+}\in\mathcal{X}^{+}}\mathbb{P}(\operatorname{ sign}(\langle\mathbf{w},\mathbf{x}_{*}^{-}\rangle+b)\neq\operatorname{sign}(\langle\mathbf{w}, \mathbf{x}^{+}\rangle+b))-\delta\] \[<\min_{\mathbf{x}^{+}\in\mathcal{X}^{+}}\frac{2}{n}\langle\mathbf{u}_{\bm {x}_{*}^{-}},\Phi(\mathbf{x}^{+})\rangle,\]
and hence
\[\langle\mathbf{u}_{\mathbf{x}_{*}^{-}},\Phi(\mathbf{x})\rangle<\min_{\mathbf{x}^{+}\in \mathcal{X}^{+}}\langle\mathbf{u}_{\mathbf{x}_{*}^{-}},\Phi(\mathbf{x}^{+})\rangle.\]
This shows that
\[\lim_{\lambda\to\infty}\lim_{n\to\infty}\mathds{1}_{\mathcal{A}_{\mathbf{x}_{*}^{ -}}}(\mathbf{x})=\mathds{1}_{\mathcal{B}_{\mathbf{x}_{*}^{-}}}(\mathbf{x})\]
almost surely if \(\|\mathbf{x}_{*}^{-}-\mathbf{x}\|<\min_{\mathbf{x}^{+}\in\mathcal{X}^{+}}\|\mathbf{x}_{*}^{-}- \mathbf{x}^{+}\|\). The case \(\|\mathbf{x}_{*}^{-}-\mathbf{x}\|>\min_{\mathbf{x}^{+}\in\mathcal{X}^{+}}\|\mathbf{x}_{*}^{-}- \mathbf{x}^{+}\|\) can be proved with only minor changes and is omitted.
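Incidentally, the Gaussian identity \(\mathbb{E}|\langle\mathbf{w},\mathbf{x}-\mathbf{y}\rangle|=\sqrt{2/\pi}\,\|\mathbf{x}-\mathbf{y}\|_{2}\) used in (9) is just the mean of a folded Gaussian and is easy to verify numerically; the following is our own short check with arbitrary vectors.

```python
import numpy as np

rng = np.random.default_rng(2)
x, y = np.array([1.0, -2.0, 0.5]), np.array([0.0, 1.0, 2.0])
w = rng.standard_normal((1_000_000, 3))
# empirical mean of |<w, x - y>| versus the closed form sqrt(2/pi) * ||x - y||_2
print(np.abs(w @ (x - y)).mean(), np.sqrt(2 / np.pi) * np.linalg.norm(x - y))
```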
### Proof of Theorem 2.6
The key observation to prove Theorem 2.6 is stated in Lemma 3.6. To prove it we will need two ingredients.
**Lemma 3.4**.: There exists an absolute constant \(c>0\) such that the following holds. Let \(\mathcal{X}^{-},\mathcal{X}^{+}\subset R\mathbb{B}_{2}^{d}\) be \(\delta\)-separated sets with \(N^{-}:=|\mathcal{X}^{-}|\), \(N^{+}:=|\mathcal{X}^{+}|\). Let \(\mathbf{W}\in\mathbb{R}^{n\times d}\) be a matrix with standard Gaussian entries, \(\mathbf{b}\in\mathbb{R}^{n}\) be uniformly distributed in \([-\lambda,\lambda]^{n}\) and let \(\mathbf{W}\) and \(\mathbf{b}\) be independent. Consider the associated random threshold layer \(\Phi\colon\mathbb{R}^{d}\to\mathbb{R}^{n}\)
\[\Phi(\mathbf{x})=\operatorname{Thres}(\mathbf{W}\mathbf{x}+\mathbf{b}),\quad\mathbf{x}\in\mathbb{ R}^{d}.\]
Suppose that \(\lambda\gtrsim R\) and
\[n\gtrsim\delta^{-1}\lambda\cdot\log(2N^{-}N^{+}/\eta). \tag{10}\]
Then with probability at least \(1-\eta\), the following event occurs: For every \(\mathbf{x}^{-}\in\mathcal{X}^{-}\), the vector \(\mathbf{u}_{\mathbf{x}^{-}}\in\{0,1\}^{n}\)
\[(\mathbf{u}_{\mathbf{x}^{-}})_{i}=\begin{cases}1,&(\Phi(\mathbf{x}^{-}))_{i}=0,\\ 0,&\text{otherwise},\end{cases} \tag{11}\]
satisfies \(\langle\mathbf{u}_{\mathbf{x}^{-}},\Phi(\mathbf{x}^{-})\rangle=0\) and
\[\langle\mathbf{u}_{\mathbf{x}^{-}},\Phi(\mathbf{x}^{+})\rangle\geq c\|\mathbf{x}^{+}-\mathbf{x}^{- }\|_{2}\cdot\lambda^{-1}n\qquad\text{for all }\mathbf{x}^{+}\in\mathcal{X}^{+}.\]
Geometrically, Lemma 3.4 states that with high probability the hyperplane \(H[\mathbf{u}_{\mathbf{x}^{-}},0]\) linearly separates \(\Phi(\mathbf{x}^{-})\) from \(\Phi(\mathcal{X}^{+})\) and the separation margin increases with both \(n\) and the distance between \(\mathbf{x}^{-}\) and \(\mathcal{X}^{+}\).
Proof.: By (11) it is clear that \(\left\langle\mathbf{u}_{\mathbf{x}^{-}},\Phi(\mathbf{x}^{-})\right\rangle=0\). Let \(\mathbf{W}=[\mathbf{w}_{1},\ldots,\mathbf{w}_{n}]^{\top}\in\mathbb{R}^{n\times d}\) and \(\mathbf{b}=(b_{1},\ldots,b_{n})^{\top}\in\mathbb{R}^{n}\) be the weight matrix and bias vector of \(\Phi\), respectively. For \(\mathbf{x}^{-}\in\mathcal{X}^{-}\) and \(\mathbf{x}^{+}\in\mathcal{X}^{+}\) define
\[\mathcal{I}_{\mathbf{x}^{-},\mathbf{x}^{+}}=\{i\in[n]:\left\langle\mathbf{w}_{i},\mathbf{x}^{ -}\right\rangle\leq-b_{i}<\left\langle\mathbf{w}_{i},\mathbf{x}^{+}\right\rangle\},\]
and define the events
\[B^{i}_{\mathbf{x}^{-},\mathbf{x}^{+}}=\{H[\mathbf{w}_{i},b_{i}]\ \left\|\mathbf{x}^{+}-\mathbf{x}^{-} \right\|_{2}\text{-separates }\mathbf{x}^{-}\text{ from }\mathbf{x}^{+}\}.\]
For \(n^{\prime}(\mathbf{x}^{-},\mathbf{x}^{+})\in[n]\) to be specified later, set
\[B_{\mathbf{x}^{-},\mathbf{x}^{+},n^{\prime}(\mathbf{x}^{-},\mathbf{x}^{+})} =\Big{\{}\sum_{i=1}^{n}\mathds{1}_{B^{i}_{\mathbf{x}^{-},\mathbf{x}^{+}}} \geq n^{\prime}(\mathbf{x}^{-},\mathbf{x}^{+})\Big{\}},\] \[B =\bigcap_{(\mathbf{x}^{-},\mathbf{x}^{+})\in\mathcal{X}^{-}\times \mathcal{X}^{+}}B_{\mathbf{x}^{-},\mathbf{x}^{+},n^{\prime}(\mathbf{x}^{-},\mathbf{x}^{+})}.\]
On the event \(B\) the following holds for every \(\mathbf{x}^{-}\in\mathcal{X}^{-}\): For all \(\mathbf{x}^{+}\in\mathcal{X}^{+}\) there exist at least \(n^{\prime}(\mathbf{x}^{-},\mathbf{x}^{+})\geq 1\) hyperplanes that \(\left\|\mathbf{x}^{+}-\mathbf{x}^{-}\right\|_{2}\)-separate \(\mathbf{x}^{-}\) from \(\mathbf{x}^{+}\). In particular, for any \(\mathbf{x}^{+}\in\mathcal{X}^{+}\), \(\left|\mathcal{I}_{\mathbf{x}^{-},\mathbf{x}^{+}}\right|\geq n^{\prime}(\mathbf{x}^{-},\mathbf{x}^{+})\) and hence
\[\left\langle\mathbf{u}_{\mathbf{x}^{-}},\Phi(\mathbf{x}^{+})\right\rangle\geq\sum_{i\in \mathcal{I}_{\mathbf{x}^{-},\mathbf{x}^{+}}}\text{Thres}(\left\langle\mathbf{w}_{i},\mathbf{x }^{+}\right\rangle+b_{i})=\left|\mathcal{I}_{\mathbf{x}^{-},\mathbf{x}^{+}}\right|\geq n ^{\prime}(\mathbf{x}^{-},\mathbf{x}^{+}).\]
Fix \(i\in[n]\). Lemma 3.3 implies that \(\mathbb{P}(B^{i}_{\mathbf{x}^{-},\mathbf{x}^{+}})\geq c\left\|\mathbf{x}^{+}-\mathbf{x}^{-} \right\|_{2}\lambda^{-1}\) for an absolute constant \(c>0\) if \(\lambda\gtrsim R\). Therefore, the Chernoff bound implies that
\[\mathbb{P}\left(\sum_{i=1}^{n}\mathds{1}_{B^{i}_{\mathbf{x}^{-},\mathbf{x}^{+}}}\geq \frac{c}{2}\lambda^{-1}\left\|\mathbf{x}^{+}-\mathbf{x}^{-}\right\|_{2}n\right)\geq 1- \exp(-c^{\prime}\lambda^{-1}\left\|\mathbf{x}^{+}-\mathbf{x}^{-}\right\|_{2}n).\]
Setting \(n^{\prime}(\mathbf{x}^{-},\mathbf{x}^{+})=\left\lceil\frac{c}{2}\left\|\mathbf{x}^{+}-\bm {x}^{-}\right\|_{2}\lambda^{-1}n\right\rceil\), we obtain
\[\mathbb{P}(B^{c}_{\mathbf{x}^{-},\mathbf{x}^{+},n^{\prime}(\mathbf{x}^{-},\mathbf{x}^{+})}) \leq\exp(-c^{\prime}\lambda^{-1}\left\|\mathbf{x}^{+}-\mathbf{x}^{-}\right\|_{2}n) \leq\exp(-c^{\prime}\lambda^{-1}\delta n).\]
Hence, by the union bound and (10),
\[\mathbb{P}(B^{c})\leq N^{-}N^{+}\exp(-c^{\prime}\lambda^{-1}\delta n)\leq\eta,\] which completes the proof.
Our second proof ingredient is the following lemma. It is an immediate consequence of [13, Theorem 2.9].
**Lemma 3.5**.: Consider \(\mathbf{c}_{1},\ldots,\mathbf{c}_{M}\subset\mathbb{R}^{d}\) and \(\mathcal{X}_{1},\ldots,\mathcal{X}_{M}\subset\mathbb{R}^{d}\) such that \(\mathcal{X}_{j}\subset B(\mathbf{c}_{j},r_{j})\subset R\mathbb{B}_{2}^{d}\) for all \(j\in[M]\). Let
\[r_{j}\lesssim\frac{r_{j}^{\prime}}{\sqrt{\log(e\lambda/r_{j}^{\prime})}}, \qquad r^{\prime}=\min_{j\in[M]}r_{j}^{\prime}.\]
Let further \(\mathbf{w}_{1},\dots,\mathbf{w}_{n}\sim N(\mathbf{0},\mathbf{I}_{d})\) and \(b_{1},\dots,b_{n}\sim\mathrm{Unif}([-\lambda,\lambda])\) all be independent. If \(\lambda\gtrsim R\) and
\[n\gtrsim\frac{\lambda}{r^{\prime}}\log(2M/\eta)+\max_{j\in[M]}\frac{\lambda}{( r^{\prime}_{j})^{3}}w^{2}(\mathcal{X}_{j}-\mathbf{c}_{j}),\]
then with probability at least \(1-\eta\), for all \(j\in[M]\) and \(\mathbf{x}\in\mathcal{X}_{j}\),
\[|\{i\in[n]\ :\ \mathrm{Thres}(\langle\mathbf{w}_{i},\mathbf{c}_{j}\rangle+b_{i})\neq \mathrm{Thres}(\langle\mathbf{w}_{i},\mathbf{x}\rangle+b_{i})\}|\lesssim\frac{r^{ \prime}_{j}n}{\lambda}.\]
The following result shows that the 'dedicated' neuron \(\hat{\varphi}_{\mathbf{x}_{*}^{-}}\) associated with \(\mathbf{x}_{*}^{-}\) (defined in (5)) not only separates \(\Phi(\mathbf{x}_{*}^{-})\) and \(\Phi(\mathcal{X}^{+})\), but in fact acts as a robust separator: it also separates \(\Phi(\mathbf{x}^{-})\) and \(\Phi(\mathcal{X}^{+})\) for all points \(\mathbf{x}^{-}\) in the component of the mutual covering in which \(\mathbf{x}_{*}^{-}\) resides.
**Lemma 3.6**.: Consider the setting of Theorem 2.6. For \(\mathbf{x}_{*}^{-}\in\mathcal{X}^{-}\) we define the associated neuron \(\hat{\varphi}_{\mathbf{x}_{*}^{-}}\colon\mathbb{R}^{n}\to\{0,1\}\) by
\[\hat{\varphi}_{\mathbf{x}_{*}^{-}}(\mathbf{z})=\mathrm{Thres}(-\langle\mathbf{u}_{\mathbf{x}_{ *}^{-}},\mathbf{z}\rangle+m_{\mathbf{x}_{*}^{-}}),\]
where
\[\mathbf{u}_{\mathbf{x}_{*}^{-}}=\mathds{1}[\Phi(\mathbf{x}_{*}^{-})=\mathbf{0}]\quad\text{and }\quad m_{\mathbf{x}_{*}^{-}}=\min_{\mathbf{x}^{+}\in\mathcal{X}^{+}}\langle\mathbf{u}_{ \mathbf{x}_{*}^{-}},\Phi(\mathbf{x}^{+})\rangle.\]
Then, with probability at least \(1-\eta\), for all \(\ell\in[M^{-}]\) and \(\mathbf{x}_{*}^{-}\in\mathcal{X}_{\ell}^{-}\),
\[\hat{\varphi}_{\mathbf{x}_{*}^{-}}(\Phi(\mathbf{x}^{-}))>0\quad\text{for all }\mathbf{x}^{-}\in \mathcal{X}_{\ell}^{-}, \tag{12}\]
and
\[\hat{\varphi}_{\mathbf{x}_{*}^{-}}(\Phi(\mathbf{x}^{+}))=0\quad\text{for all }\mathbf{x}^{+}\in \mathcal{X}^{+}. \tag{13}\]
Proof.: Clearly, the choice of \(m_{\mathbf{x}_{*}^{-}}\) ensures that (13) holds. It remains to show that, with probability \(1-\eta\),
\[\langle\mathbf{u}_{\mathbf{x}_{*}^{-}},\Phi(\mathbf{x}^{-})\rangle<m_{\mathbf{x}_{*}^{-}}=\min _{\mathbf{x}^{+}\in\mathcal{X}^{+}}\langle\mathbf{u}_{\mathbf{x}_{*}^{-}},\Phi(\mathbf{x}^{+})\rangle\]
for all \(\ell\in[M^{-}]\) and \(\mathbf{x}_{*}^{-},\mathbf{x}^{-}\in\mathcal{X}_{\ell}^{-}\). Let \(A\) be the event where, for every \(\ell\in[M^{-}]\) and \(j\in[M^{+}]\),
\[\langle\mathbf{u}_{\mathbf{c}_{\ell}^{-}},\Phi(\mathbf{c}_{j}^{+})\rangle\geq c_{1} \lambda^{-1}\left\|\mathbf{c}_{\ell}^{-}-\mathbf{c}_{j}^{+}\right\|_{2}n.\]
By Lemma 3.4, \(\mathbb{P}(A)\geq 1-\eta\) under our assumptions. Let \(B\) be the event where for all \(\ell\in[M^{-}]\) and \(\mathbf{x}^{-}\in\mathcal{X}_{\ell}^{-}\)
\[\left\|\Phi(\mathbf{c}_{\ell}^{-})-\Phi(\mathbf{x}^{-})\right\|_{1} =|\{i\in[n]:\mathrm{Thres}(\langle\mathbf{w}_{i},\mathbf{c}_{\ell}^{-} \rangle+b_{i})\neq\mathrm{Thres}(\langle\mathbf{w}_{i},\mathbf{x}^{-}\rangle+b_{i})\}|\] \[\leq c_{2}\frac{(r^{\prime}_{\ell})^{-}n}{\lambda},\]
and all \(j\in[M^{+}]\) and \(\mathbf{x}^{+}\in\mathcal{X}_{j}^{+}\)
\[\left\|\Phi(\mathbf{c}_{j}^{+})-\Phi(\mathbf{x}^{+})\right\|_{1} =|\{i\in[n]:\text{Thres}(\langle\mathbf{w}_{i},\mathbf{c}_{j}^{+}\rangle+b _{i})\neq\text{Thres}(\langle\mathbf{w}_{i},\mathbf{x}^{+}\rangle+b_{i})\}|\] \[\leq c_{2}\frac{(r_{j}^{\prime})^{+}n}{\lambda},\]
where
\[(r_{\ell}^{\prime})^{-}=\frac{c_{1}}{12c_{2}}d(\mathbf{c}_{\ell}^{-},\mathcal{C}^{ +}),\qquad(r_{j}^{\prime})^{+}=\frac{c_{1}}{4c_{2}}d(\mathbf{c}_{j}^{+},\mathcal{C }^{-}).\]
By Lemma 3.5, \(\mathbb{P}(B)\geq 1-\eta\) under the stated assumptions. For the remainder of the proof, we condition on the event \(A\cap B\).
By using \(B\), we find
\[|\langle\mathbf{u}_{\mathbf{x}_{*}^{-}},\Phi(\mathbf{x}^{-})\rangle|=|\langle\mathbf{u}_{\mathbf{x}_{*}^{-}},\Phi(\mathbf{x}^{-})-\Phi(\mathbf{x}_{*}^{-})\rangle|\leq\left\|\Phi(\mathbf{x}^{-})-\Phi(\mathbf{c}_{\ell}^{-})\right\|_{1}+\left\|\Phi(\mathbf{c}_{\ell}^{-})-\Phi(\mathbf{x}_{*}^{-})\right\|_{1}\leq 2c_{2}\frac{(r_{\ell}^{\prime})^{-}}{\lambda}n.\]
Now pick \(j\in[M^{+}]\) and \(\mathbf{x}^{+}\in\mathcal{X}_{j}^{+}\). Using \(A\) and \(B\),
\[\langle\mathbf{u}_{\mathbf{x}_{*}^{-}},\Phi(\mathbf{x}^{+})\rangle\] \[\qquad=\langle\mathbf{u}_{\mathbf{c}_{\ell}^{-}},\Phi(\mathbf{c}_{j}^{+})\rangle+\langle\mathbf{u}_{\mathbf{x}_{*}^{-}}-\mathbf{u}_{\mathbf{c}_{\ell}^{-}},\Phi(\mathbf{c}_{j}^{+})\rangle+\langle\mathbf{u}_{\mathbf{x}_{*}^{-}},\Phi(\mathbf{x}^{+})-\Phi(\mathbf{c}_{j}^{+})\rangle\] \[\qquad\geq\langle\mathbf{u}_{\mathbf{c}_{\ell}^{-}},\Phi(\mathbf{c}_{j}^{+})\rangle-|\langle\Phi(\mathbf{x}_{*}^{-})-\Phi(\mathbf{c}_{\ell}^{-}),\Phi(\mathbf{c}_{j}^{+})\rangle|-|\langle\mathbf{u}_{\mathbf{x}_{*}^{-}},\Phi(\mathbf{x}^{+})-\Phi(\mathbf{c}_{j}^{+})\rangle|\] \[\qquad\geq c_{1}\lambda^{-1}\left\|\mathbf{c}_{\ell}^{-}-\mathbf{c}_{j}^{+}\right\|_{2}n-c_{2}\frac{(r_{\ell}^{\prime})^{-}}{\lambda}n-c_{2}\frac{(r_{j}^{\prime})^{+}}{\lambda}n,\]
where in the second step we used that \(\mathbf{u}_{\mathbf{x}}=\mathbf{1}-\Phi(\mathbf{x})\) due to the threshold activation.
Combining the above we see that, for all \(\ell\in[M^{-}]\), \(\mathbf{x}_{*}^{-},\mathbf{x}^{-}\in\mathcal{X}_{\ell}^{-}\), \(j\in[M^{+}]\), and \(\mathbf{x}^{+}\in\mathcal{X}_{j}^{+}\),
\[\langle\mathbf{u}_{\mathbf{x}_{*}^{-}},\Phi(\mathbf{x}^{-})\rangle<\langle\mathbf{u}_{\mathbf{x}_{ *}^{-}},\Phi(\mathbf{x}^{+})\rangle,\]
where we have used that
\[(r_{\ell}^{\prime})^{-}<\frac{c_{1}}{6c_{2}}\left\|\mathbf{c}_{\ell}^{-}-\mathbf{c}_{j }^{+}\right\|_{2},\qquad(r_{j}^{\prime})^{+}<\frac{c_{1}}{2c_{2}}\left\|\mathbf{c }_{\ell}^{-}-\mathbf{c}_{j}^{+}\right\|_{2}.\]
Since for any \(\mathbf{x}^{+}\in\mathcal{X}^{+}\) there is some \(j\in[M^{+}]\) such that \(\mathbf{x}^{+}\in\mathcal{X}_{j}^{+}\), we find for all \(\ell\in[M^{-}]\) and \(\mathbf{x}_{*}^{-},\mathbf{x}^{-}\in\mathcal{X}_{\ell}^{-}\),
\[\langle\mathbf{u}_{\mathbf{x}_{*}^{-}},\Phi(\mathbf{x}^{-})\rangle<\min_{\mathbf{x}^{+}\in \mathcal{X}^{+}}\langle\mathbf{u}_{\mathbf{x}_{*}^{-}},\Phi(\mathbf{x}^{+})\rangle=m_{\mathbf{ x}_{*}^{-}},\]
as desired.
We can now complete the proof.
Proof of Theorem 2.6.: Throughout, we condition on the event from Lemma 3.6. Let us first observe that the first loop of Algorithm 1 terminates after \(n_{\min}\) iterations and hence the first layer \(\Phi\) of \(F\) has width \(n_{\min}\). Indeed, taking \(\mathbf{x}_{*}^{-}=\mathbf{x}^{-}\) in (12), we see that \(\hat{\varphi}_{\mathbf{x}^{-}}(\Phi(\mathbf{x}^{-}))>0\) for any \(\mathbf{x}^{-}\in\mathcal{X}^{-}\) and hence
\[0=\langle\mathbf{u}_{\mathbf{x}^{-}},\Phi(\mathbf{x}^{-})\rangle<m_{\mathbf{x}^{-}}=\min_{\bm {x}^{+}\in\mathcal{X}^{+}}\langle\mathbf{u}_{\mathbf{x}^{-}},\Phi(\mathbf{x}^{+})\rangle.\]
This estimate implies that for all \(\mathbf{x}^{+}\in\mathcal{X}^{+}\), there must be a hyperplane that separates \(\mathbf{x}^{-}\) from \(\mathbf{x}^{+}\).
Next, using induction we show that the second loop terminates after at most \(M^{-}\) steps, thus \(\hat{n}\leq M^{-}\). In the first iteration, we select \(\mathbf{x}_{1}^{-}\in\mathcal{C}=\mathcal{X}^{-}\) which is part of at least one component of the mutual covering, say \(\mathcal{X}_{i_{1}}^{-}\). By Lemma 3.6, the associated neuron \(\hat{\varphi}_{\mathbf{x}_{1}^{-}}\) activates on all of \(\mathcal{X}_{i_{1}}^{-}\) and hence \(\mathcal{C}\cap\mathcal{X}_{i_{1}}^{-}=\emptyset\) after the update. Suppose that the \(p\)-th iteration finished, thus \(\mathcal{C}\cap\mathcal{X}_{i_{j}}^{-}=\emptyset\) for all \(j\in[p]\). We select \(\mathbf{x}_{p+1}^{-}\in\mathcal{C}\subset\mathcal{X}^{-}\setminus(\mathcal{X}_{i_ {1}}^{-}\cup\cdots\cup\mathcal{X}_{i_{p}}^{-})\) which must be part of a new component, say \(\mathcal{X}_{i_{p+1}}^{-}\). Again, by the lemma the associated neuron activates on all of the component, and thus, after the update \(\mathcal{C}\cap\mathcal{X}_{i_{j}}^{-}=\emptyset\) for all \(j\in[p+1]\). By induction, after at most \(M^{-}\) iterations \(\mathcal{C}=\emptyset\) and hence the algorithm terminates with \(\hat{n}\leq M^{-}\).
### Proof of Corollary 1.5
Let \(\mathcal{C}^{-}\) and \(\mathcal{C}^{+}\) denote the centers of the mutual covering. We apply Algorithm 1 to \(\mathcal{C}^{-}\) and \(\mathcal{C}^{+}\) (with \(\lambda\approx R\) and \(n_{\min}\) from Theorem 2.6) with the following change: when computing the biases in the second layer, instead of taking the minimum only over \(\mathcal{C}^{+}\) we set, for all \(\ell\in[M^{-}]\),
\[m_{\mathbf{c}_{\ell}^{-}}=\min_{\mathbf{x}^{+}\in\mathcal{X}^{+}}\langle\mathbf{u}_{\mathbf{c }_{\ell}^{-}},\Phi(\mathbf{x}^{+})\rangle.\]
Inspecting the proof of Theorem 2.6, we see that with positive probability this network has the asserted size and, moreover, interpolates \(\mathcal{X}^{-}\) and \(\mathcal{X}^{+}\).
### Proof of Proposition 2.8
Before we construct a dataset that satisfies the properties of the proposition, we make some preliminary observations. Consider any \(\mathcal{X}^{-},\mathcal{X}^{+}\subset\mathbb{R}^{d}\). Let \(\Phi\) denote the first layer of the output \(F\) of Algorithm 1. For a given \(\mathbf{x}_{*}^{-}\in\mathcal{X}^{-}\), consider its associated neuron \(\hat{\varphi}_{\mathbf{x}_{*}^{-}}\) defined in (5). Consider \(\mathbf{x}_{*}^{-}\neq\mathbf{x}^{-}\in\mathcal{X}^{-}\) and for \(t\geq 0\) set
\[\mathbf{x}_{t}=\mathbf{x}_{*}^{-}+t(\mathbf{x}^{-}-\mathbf{x}_{*}^{-}),\]
so that \(\{\mathbf{x}_{t}\ :\ t\geq 0\}\) is the ray originating from \(\mathbf{x}_{*}^{-}\) and passing through \(\mathbf{x}^{-}\).
First, we claim that \(t\mapsto\langle\mathbf{u}_{\mathbf{x}_{*}^{-}},\Phi(\mathbf{x}_{t})\rangle\) is non-decreasing. This is an immediate consequence of the fact that \(\Phi_{i}(\mathbf{x}_{t})\leq\Phi_{i}(\mathbf{x}_{s})\) for all \(i\in[n]\) such that
\(\Phi_{i}(\mathbf{x}_{*}^{-})=0\) and for all \(0\leq t\leq s\). This is clear in the case \(\Phi_{i}(\mathbf{x}_{t})=0\). Assuming \(\Phi_{i}(\mathbf{x}_{t})>0\) (and hence \(t>0\)), the assumptions imposed on \(\sigma\) imply that
\[0<\langle\mathbf{w}_{i},\mathbf{x}_{t}\rangle+b_{i}=\langle\mathbf{w}_{i},\mathbf{x}_{*}^{-} \rangle+b_{i}+t\langle\mathbf{w}_{i},\mathbf{x}^{-}-\mathbf{x}_{*}^{-}\rangle.\]
As \(t>0\) and \(\langle\mathbf{w}_{i},\mathbf{x}_{*}^{-}\rangle+b_{i}\leq 0\) due to \(\Phi_{i}(\mathbf{x}_{*}^{-})=0\), it follows that
\[\langle\mathbf{w}_{i},\mathbf{x}^{-}-\mathbf{x}_{*}^{-}\rangle>0.\]
Finally, since \(\sigma\) is non-decreasing,
\[\Phi_{i}(\mathbf{x}_{t}) =\sigma(\langle\mathbf{w}_{i},\mathbf{x}_{t}\rangle+b_{i})\] \[\leq\sigma(\langle\mathbf{w}_{i},\mathbf{x}_{t}\rangle+b_{i}+(s-t)\langle \mathbf{w}_{i},\mathbf{x}^{-}-\mathbf{x}_{*}^{-}\rangle)\] \[=\sigma(\langle\mathbf{w}_{i},\mathbf{x}_{s}\rangle+b_{i})=\Phi_{i}(\mathbf{ x}_{s}),\]
proving our claim.
Now let us make the following observation: suppose there is \(\mathbf{x}^{+}\in\mathcal{X}^{+}\) which lies between \(\mathbf{x}_{*}^{-}\) and \(\mathbf{x}^{-}\) in the sense that there exists \(t^{+}\in(0,1)\) such that \(\mathbf{x}_{t^{+}}=\mathbf{x}^{+}\). Then, the neuron \(\hat{\varphi}_{\mathbf{x}_{*}^{-}}\) does not activate on \(\Phi(\mathbf{x}^{-})\). To see this, we simply invoke the above claim, which yields
\[m_{\mathbf{x}_{*}^{-}}\leq\langle\mathbf{u}_{\mathbf{x}_{*}^{-}},\Phi(\mathbf{x}^{+})\rangle \leq\langle\mathbf{u}_{\mathbf{x}_{*}^{-}},\Phi(\mathbf{x}^{-})\rangle,\]
and directly implies \(\hat{\varphi}_{\mathbf{x}_{*}^{-}}(\Phi(\mathbf{x}^{-}))=0\).
With these observations, we can now prove the statement of the proposition. Consider the interval \([0,1]\) and place points \(\mathbf{c}_{\ell}^{-}\) and \(\mathbf{c}_{j}^{+}\) in an alternating fashion on an equispaced grid: formally, for \(\ell\in[M^{-}]\) and \(j\in[M^{+}]\) we set
\[\mathbf{c}_{\ell}^{-}=\frac{\ell-1}{M^{-}-1}\quad\text{and}\quad\mathbf{c}_{j}^{+}= \frac{j-1/2}{M^{-}-1}.\]
Let \(r_{\ell}^{-}\) and \(r_{j}^{+}\) be as in Theorem 2.6. Choose the remaining \(N^{-}-M^{-}\) points \(\mathbf{x}^{-}\in\mathcal{X}^{-}\) and \(N^{+}-M^{+}\) points \(\mathbf{x}^{+}\in\mathcal{X}^{+}\) such that for each of them there exists \(\ell\in[M^{-}]\) with \(\left\|\mathbf{x}^{-}-\mathbf{c}_{\ell}^{-}\right\|_{2}\leq r_{\ell}^{-}\) and \(j\in[M^{+}]\) with \(\left\|\mathbf{x}^{+}-\mathbf{c}_{j}^{+}\right\|_{2}\leq r_{j}^{+}\), respectively. Then, \(\mathcal{C}^{-}=\{\mathbf{c}_{1}^{-},\dots,\mathbf{c}_{M^{-}}^{-}\}\) and \(\mathcal{C}^{+}=\{\mathbf{c}_{1}^{+},\dots,\mathbf{c}_{M^{+}}^{+}\}\) form a mutual covering of \(\mathcal{X}^{-}\) and \(\mathcal{X}^{+}\) as required by Theorem 2.6.
Let \(\ell\in[M^{-}]\) be fixed. By our earlier observation, for each \(\mathbf{x}^{-}\in\mathcal{X}^{-}\setminus\mathcal{X}^{-}_{\ell}\), \(\hat{\varphi}_{\mathbf{x}^{-}}(\Phi(\mathbf{c}_{\ell}^{-}))=0\), as there is a point \(\mathbf{c}_{j}^{+}\in\mathcal{X}^{+}\) between \(\mathbf{x}^{-}\) and \(\mathbf{c}_{\ell}^{-}\). Thus, to classify \(\mathbf{c}_{\ell}^{-}\) correctly, we need to choose (at least) one neuron corresponding to a point in \(\mathcal{X}^{-}_{\ell}\). As we need to classify the points \(\mathbf{c}_{\ell}^{-}\) for all \(\ell\in[M^{-}]\) correctly, we cannot include fewer than \(M^{-}\) neurons in the second layer.
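For concreteness, the alternating one-dimensional configuration used in this construction is easy to generate; the helper below is our own illustration (the function name, the choice of radii and the way the extra points are placed are ours, chosen so that the covering components around the centers stay well separated).

```python
import numpy as np

def alternating_1d_dataset(M_neg, N_neg, N_pos, radius_frac=0.1, rng=None):
    """Centers c_l^- = (l-1)/(M^- - 1) and c_j^+ = (j-1/2)/(M^- - 1) on [0, 1],
    padded with extra points placed in small intervals around randomly chosen centers."""
    rng = np.random.default_rng(rng)
    M_pos = M_neg - 1
    c_neg = np.arange(M_neg) / (M_neg - 1)
    c_pos = (np.arange(M_pos) + 0.5) / (M_neg - 1)
    r = radius_frac * 0.5 / (M_neg - 1)  # well below half the spacing between opposite centers
    extra_neg = rng.choice(c_neg, N_neg - M_neg) + rng.uniform(-r, r, N_neg - M_neg)
    extra_pos = rng.choice(c_pos, N_pos - M_pos) + rng.uniform(-r, r, N_pos - M_pos)
    X_neg = np.clip(np.concatenate([c_neg, extra_neg]), 0.0, 1.0).reshape(-1, 1)
    X_pos = np.clip(np.concatenate([c_pos, extra_pos]), 0.0, 1.0).reshape(-1, 1)
    return X_neg, X_pos

# Example: 5 negative centers, 4 positive centers, 20 and 15 points in total.
X_neg, X_pos = alternating_1d_dataset(M_neg=5, N_neg=20, N_pos=15, rng=0)
```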
## 4 Numerical Experiments
In this section, we study the performance of Algorithm 1 through numerical simulations on different datasets. In particular, we want to investigate how
the interpolation probability (approximated as the fraction of a fixed amount of runs that produce an interpolating network) and the width of the second layer respond to changes in the width of the first layer \(n\) and the maximal bias \(\lambda\). Recall that the algorithm was designed in such a way that it adapts the width of the first layer to guarantee interpolation on the input data. To have free control over this parameter we adapt the algorithm slightly for the experiments.
Hence, we formulate Algorithm 2 which has both \(n\) and \(\lambda\) as hyperparameters. As the first layer might be such that not every pair of samples with different labels is separated by at least one hyperplane, we have to adjust the construction of the second layer. We keep track of the set \(\mathcal{C}\) of candidate samples whose associated neurons might be accepted into the second layer, the set \(\mathcal{U}\) of samples that have yet to be correctly classified by a neuron (the universe), and the set \(\mathcal{A}\) of samples whose associated neurons have been accepted into the second layer. Note that \(\mathcal{C}\subset\mathcal{U}\) but there might not be equality. The algorithm stops if we either run out of candidates or all points are classified correctly. In every iteration, we draw a candidate sample at random and compute the associated neuron. If the neuron at least correctly classifies the candidate itself, we accept it into the second layer and remove every point that the neuron classifies correctly from both \(\mathcal{C}\) and \(\mathcal{U}\). This check could be omitted in Algorithm 1 due to the construction of the first layer which also guaranteed that \(\mathcal{C}=\mathcal{U}\).
In the following, we present four experiments: In Section 4.1, we apply Algorithm 2 to the Two Moons dataset (see Figure 4(a)). In Section 4.2 we observe how our method responds to an increasing number of samples drawn from a fixed distribution. We introduce an extension to multiclass classification in Section 4.3, which we then apply to the MNIST dataset. Finally, we present a worst-case example in Section 4.4. In all experiments, we let \(\sigma\) be the threshold activation.
### Binary classification on Two Moons
In this section, we apply Algorithm 2 to the 2D Two Moons4 dataset (Figure 4(a)), allowing us to easily visualize the output of the algorithm in the input domain. While this is only a synthetic toy dataset, it provides a clear geometric structure with well-separated classes. At the same time, the data is not linearly separable, and not all pairs of samples with different labels can be efficiently separated by hyperplanes that pass through the origin, making it a good first testing ground for the effect of the parameter \(\lambda\).
Footnote 4: See [https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_moons.html](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_moons.html).
**Interpolation probability.** In Figure 4(b) we observe a clear phase transition in the interpolation probability which is in line with the prediction of Theorem 2.6, where we treat all complexity terms depending on the dataset as constant. As can be seen from the contour lines, for \(\lambda\) larger than the data radius, \(n\gtrsim\lambda\) is enough to guarantee interpolation with any fixed probability.
**Algorithm 2** Interpolation (experiments)
```
1:Disjoint and finite \(\mathcal{X}^{-},\mathcal{X}^{+}\subset\mathbb{R}^{d}\) with \(N^{-}\coloneqq|\mathcal{X}^{-}|\), \(N^{+}\coloneqq|\mathcal{X}^{+}|\), activation \(\sigma\colon\mathbb{R}\to\mathbb{R}\) satisfying \(\sigma(t)=0\) for \(t\leq 0\) and \(\sigma(t)>0\) for \(t>0\), width of first layer \(n\geq 1\), maximal bias \(\lambda\geq 0\).
2:A three-layer fully-connected neural network \(F\colon\mathbb{R}^{d}\to\{\pm 1\}\). (First layer \(\Phi\))
3:Randomly sample \(\mathbf{W}\in\mathbb{R}^{n\times d}\) and \(\mathbf{b}\in\mathbb{R}^{n}\), where \[\mathbf{W}_{i}\sim N(\mathbf{0},\mathbf{I}_{d})\quad\text{and}\quad b_{i}\sim\text{Unif}([-\lambda,\lambda])\] are all independent, and define the first layer \(\Phi(\mathbf{x})=\sigma(\mathbf{W}\mathbf{x}+\mathbf{b})\). (Second layer \(\hat{\Phi}\))
4:Initialize \(\mathcal{C}\leftarrow\mathcal{X}^{-}\), \(\mathcal{U}\leftarrow\mathcal{X}^{-}\) and \(\mathcal{A}\leftarrow\emptyset\).
5:while\(\mathcal{C}\neq\emptyset\)and\(\mathcal{U}\neq\emptyset\)do
6: Select a candidate \(\mathbf{x}_{*}^{-}\in\mathcal{C}\) at random and update \(\mathcal{C}\leftarrow\mathcal{C}\setminus\{\mathbf{x}_{*}^{-}\}\).
7: Calculate \(\mathbf{u}_{\mathbf{x}_{*}^{-}}\in\{0,1\}^{n}\) and \(m_{\mathbf{x}_{*}^{-}}\geq 0\) according to \[\mathbf{u}_{\mathbf{x}_{*}^{-}}\leftarrow\mathds{1}[\Phi(\mathbf{x}_{*}^{-})=\mathbf{0}]\quad \text{and}\quad m_{\mathbf{x}_{*}^{-}}\leftarrow\min_{\mathbf{x}^{+}\in\mathcal{X}^{+ }}\langle\mathbf{u}_{\mathbf{x}_{*}^{-}},\Phi(\mathbf{x}^{+})\rangle.\]
8:if\(m_{\mathbf{x}_{*}^{-}}>0\)then
9: Calculate \(\mathcal{T}\leftarrow\{\mathbf{x}^{-}\in\mathcal{U}:\langle\mathbf{u}_{\mathbf{x}_{*}^{-} },\Phi(\mathbf{x}^{-})\rangle<m_{\mathbf{x}_{*}^{-}}\}\).
10: Update \(\mathcal{C}\), \(\mathcal{U}\) and \(\mathcal{A}\) according to \[\mathcal{C}\leftarrow\mathcal{C}\setminus\mathcal{T},\quad\mathcal{U} \leftarrow\mathcal{U}\setminus\mathcal{T}\quad\text{and}\quad\mathcal{A} \leftarrow\mathcal{A}\cup\{\mathbf{x}_{*}^{-}\}.\]
11:endif
12:endwhile
13:Define \(\hat{\Phi}(\mathbf{z})=\sigma(-\mathbf{U}\mathbf{z}+\mathbf{m})\) with \(\mathbf{U}\in\mathbb{R}^{|\mathcal{A}|\times n}\) and \(\mathbf{m}\in\mathbb{R}^{|\mathcal{A}|}\) where \[\mathbf{U}\leftarrow\left[\mathbf{u}_{\mathbf{x}_{*}^{-}}^{\top}\right]_{\mathbf{x}_{*}^{-} \in\mathcal{A}}\quad\text{and}\quad\mathbf{m}\leftarrow\left[m_{\mathbf{x}_{*}^{-}} \right]_{\mathbf{x}_{*}^{-}\in\mathcal{A}}.\]
14:Return \(F(\mathbf{x})=\text{sign}(-\langle\mathbf{1},\hat{\Phi}(\Phi(\mathbf{x}))\rangle)\). (Output network \(F\))
```
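A NumPy transcription of Algorithm 2 could look as follows. This is our own sketch (the name `fixed_width_interpolator` is not from the paper): the first-layer width \(n\) and the maximal bias \(\lambda\) are exposed as hyperparameters, and the acceptance check \(m_{\mathbf{x}_{*}^{-}}>0\) of line 8 is included. Unlike Algorithm 1, interpolation is not guaranteed here; the experiments below estimate how often it succeeds.

```python
import numpy as np

def fixed_width_interpolator(X_neg, X_pos, n, lam, rng=None):
    """Sketch of Algorithm 2: fixed first-layer width n, maximal bias lam, threshold activations."""
    rng = np.random.default_rng(rng)
    thres = lambda t: (t > 0).astype(float)

    # First layer: n random affine hyperplanes.
    W = rng.standard_normal((n, X_neg.shape[1]))
    b = rng.uniform(-lam, lam, size=n)
    Phi_neg = thres(X_neg @ W.T + b)
    Phi_pos = thres(X_pos @ W.T + b)

    # Second layer: candidate set C, universe U of not-yet-classified points, accepted neurons A.
    cand = set(range(len(X_neg)))
    universe = set(range(len(X_neg)))
    u_rows, m_rows = [], []
    while cand and universe:
        i = int(rng.choice(sorted(cand)))
        cand.discard(i)
        u = (Phi_neg[i] == 0).astype(float)
        m = (Phi_pos @ u).min()
        if m > 0:  # the candidate's own neuron classifies it correctly, so accept the neuron
            covered = {j for j in universe if Phi_neg[j] @ u < m}
            cand -= covered
            universe -= covered
            u_rows.append(u)
            m_rows.append(m)
    U = np.array(u_rows) if u_rows else np.zeros((0, n))
    m = np.array(m_rows)

    def F(x):
        z = thres(W @ x + b)
        fires = thres(-U @ z + m) if len(m) else np.zeros(0)
        return -1 if fires.sum() > 0 else +1
    return F
```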
On the other hand, one can observe that a large enough \(\lambda\) is also necessary for efficient interpolation, as for \(\lambda=0\) interpolation does not happen for any value of \(n\).
It is noteworthy that the optimal value of \(\lambda\) is smaller than the data radius. This is intuitive here, as a maximal bias exceeding the radius of \(\mathcal{X}^{+}\) already guarantees the efficient separation of pairs of opposite labels in the first layer.
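The phase diagrams can be reproduced in spirit with a simple sweep; the sketch below is our own illustration, reuses the `fixed_width_interpolator` routine from the Algorithm 2 sketch above, and uses an arbitrary grid of widths and biases.

```python
from sklearn.datasets import make_moons

def interpolation_probability(n, lam, runs=20, N=2_000, seed=0):
    """Fraction of runs in which the fixed-width network interpolates a Two Moons sample."""
    X, y = make_moons(n_samples=N, noise=0.05, random_state=seed)
    X_neg, X_pos = X[y == 0], X[y == 1]
    hits = 0
    for run in range(runs):
        # fixed_width_interpolator is the Algorithm 2 sketch given above
        F = fixed_width_interpolator(X_neg, X_pos, n=n, lam=lam, rng=seed + run)
        ok = all(F(x) == -1 for x in X_neg) and all(F(x) == +1 for x in X_pos)
        hits += int(ok)
    return hits / runs

for n in (100, 250, 500):
    for lam in (0.0, 0.5, 1.0):
        print(n, lam, interpolation_probability(n, lam))
```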
**Width of the second layer \(\hat{\Phi}\).** As can be seen in Figure 5(c), the width of the second layer becomes much smaller than the number of points. We are mainly interested in the part of the parameter space where the interpolation probability is close to one. In this region, the width attains its minimum and is essentially constant.
Due to the two-dimensionality of the data, it is possible to visualize the decision boundary of our method in input space, see Figure 6. Neurons of the second layer have (approximately) circular activation regions that are centered at their corresponding candidate points and which extend all the way to the other class. The third layer takes a union of these regions - the boundary of this union is the decision boundary. We can repeat this visualization for different values of the hyperparameters, see Figure 7. For \(\lambda=0\) the method fails to separate pairs of samples with opposite labels because all hyperplanes pass through the origin. If \(\lambda\) is large enough and as \(n\) grows, the method begins to succeed. The activation regions of the individual neurons become more circular as \(n\) increases, which can be best seen in the rightmost column of Figure 7.
### Behaviour in the sample size limit
In Theorem 2.6, the size of the interpolating network is independent of the number of samples and only dictated by the parameters of the mutual covering. To illustrate this numerically, we consider a scenario where we sample points from a distribution whose support consists of two disjoint, compact sets representing two classes. We expect that as we iteratively sample points from the distribution, the size of the interpolating network should saturate and be bounded by the parameters of the mutual covering of the support of the distribution (satisfying the restrictions in Theorem 2.6).
To verify this, we return to the Two Moons dataset from Section 4.1. We fix the maximal bias \(\lambda=1\) and vary the number of points \(N\) by drawing samples from the data distribution.5
Footnote 5: We use sklearn.datasets.make_moons(n_samples=N, noise=0.05) from the scikit-learn Python package to generate the samples.
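A minimal version of this experiment could look as follows; this is our own sketch (it again relies on the `fixed_width_interpolator` routine from above, and the grids of sample sizes and widths are illustrative rather than the ones used for the figures).

```python
from sklearn.datasets import make_moons

lam = 1.0
for N in (1_000, 5_000, 20_000, 50_000):
    X, y = make_moons(n_samples=N, noise=0.05, random_state=0)
    X_neg, X_pos = X[y == 0], X[y == 1]
    for n in (250, 500, 1_000, 2_000):
        # fixed_width_interpolator is the Algorithm 2 sketch given earlier
        F = fixed_width_interpolator(X_neg, X_pos, n=n, lam=lam, rng=0)
        if all(F(x) == -1 for x in X_neg) and all(F(x) == +1 for x in X_pos):
            print(f"N={N}: interpolated with first-layer width n={n}")
            break
```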
Figure 5: Binary classification on the Two Moons dataset.

Figure 6: **Decision boundary.** Each star marks an accepted point and the region of the same color is the activation region of its associated neuron. The decision boundary of the network is the boundary of the union of these regions. Here, we used \(n=2\,000\) and \(\lambda=1\).

Figure 7: **Decision boundaries for different choices of hyperparameters.** Similar to Figure 6 but includes all combinations of hyperparameters \(n\in\{100,250,500\}\) (rows) and \(\lambda\in\{0,0.5,1\}\) (columns). Plots in which the network interpolates the data are marked with a thick dashed frame.

**Interpolation probability.** The contour lines in the heatmap in Figure 8(b) show which width of the first layer is required to achieve interpolation with a fixed probability for a certain number of samples. We can observe that there is an increase in the required width up to around \(50\,000\) samples. After this threshold, however, a constant width of the first layer is enough to interpolate any number of samples.
**Width of the second layer \(\hat{\Phi}\).** As in the other experiments we are interested in the part of the parameter space where the interpolation probability is almost one. Similar to the contour lines of the interpolation probability we observe that to obtain a fixed width of the second layer there is an increase in the required width of the first layer only up to a certain threshold (around \(80\,000\) samples). After this threshold, a constant width of the first layer is enough to obtain a fixed width of the second layer.
Combining the above observations we note the following: there is a threshold in the number of samples such that for larger sample sizes there is a width of the first layer for which the network interpolates with probability close to one and the width of the second layer stays constant. Hence, as the width of the second layer is only lower for smaller sample sizes, a neural network of constant size (whose parameters can be computed via our algorithm) suffices to interpolate any number of samples.
### Multi-class classification on MNIST
Recall that our method is designed for binary problems. One-versus-many is a common strategy to extend binary classification methods to multi-class problems: for each class, train a binary classifier to distinguish between this class and all other classes. At inference time, query all classifiers and output the class label corresponding to the classifier with the highest confidence score.
We extend Algorithm 2 to multi-class problems in a similar manner. However, as the first layer is obtained in an identical way for every execution of our method, we reuse it across all classes. One can use a simple union bound argument to prove high success probability for this case. Let \(K\geq 2\) denote the total number of classes and \(\mathcal{X}_{k}\) the set of samples of class \(k\in[K]\). Sample the first layer \(\Phi\) at random as in Algorithm 2. Then, for each class \(k\in[K]\) compute the second and third layer while using \(\Phi\) as the first layer and \(\mathcal{X}^{-}=\mathcal{X}_{k}\) and \(\mathcal{X}^{+}=\bigcup_{\ell\neq k}\mathcal{X}_{\ell}\) as input data. It is convenient to modify the third layer to map samples of \(\mathcal{X}^{-}\) to \(1\) and samples of \(\mathcal{X}^{+}\) to \(0\). Denote the concatenation of the second and third layers by \(F_{k}\). Define the final classifier \(F\colon(\mathcal{X}_{1}\cup\dots\cup\mathcal{X}_{K})\to\{0,1\}^{K}\) by
\[F(\boldsymbol{x})=(F_{1}(\Phi(\boldsymbol{x})),\dots,F_{K}(\Phi(\boldsymbol{x })))\]
which outputs the class label as a one-hot encoding. We apply this method to the MNIST dataset [10].
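In code, this one-versus-rest construction with a shared first layer could be organized as follows; this is our own sketch (all names are ours), which repeats the second-layer construction of Algorithm 2 once per class on top of a single random first layer and returns the one-hot output described above.

```python
import numpy as np

def multiclass_interpolator(X_by_class, n, lam, rng=None):
    """One-vs-rest wrapper: one shared random first layer, one Algorithm-2-style head per class."""
    rng = np.random.default_rng(rng)
    thres = lambda t: (t > 0).astype(float)
    W = rng.standard_normal((n, X_by_class[0].shape[1]))
    b = rng.uniform(-lam, lam, size=n)

    heads = []
    for k, X_k in enumerate(X_by_class):
        X_rest = np.vstack([X for j, X in enumerate(X_by_class) if j != k])
        Phi_k, Phi_rest = thres(X_k @ W.T + b), thres(X_rest @ W.T + b)
        cand, universe, u_rows, m_rows = set(range(len(X_k))), set(range(len(X_k))), [], []
        while cand and universe:
            i = int(rng.choice(sorted(cand)))
            cand.discard(i)
            u = (Phi_k[i] == 0).astype(float)
            m = (Phi_rest @ u).min()
            if m > 0:
                covered = {j for j in universe if Phi_k[j] @ u < m}
                cand -= covered
                universe -= covered
                u_rows.append(u)
                m_rows.append(m)
        heads.append((np.array(u_rows), np.array(m_rows)))

    def F(x):
        z = thres(W @ x + b)
        # one-hot output: entry k is 1 iff some dedicated neuron of class k fires on x
        return np.array([int(len(m) > 0 and thres(-U @ z + m).sum() > 0) for U, m in heads])
    return F
```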
Figure 8: Sample size limit on the Two Moons dataset.

**Interpolation probability.** In Figure 8(b) we again observe a clear phase transition in the interpolation probability. As in the case of Two Moons, this behaves as predicted by Theorem 2.6, as for \(\lambda\) larger than the radius of the data, \(n\gtrsim\lambda\) is enough to guarantee interpolation with any fixed probability. For \(\lambda=0\) the method not only interpolates but it does so with the narrowest first layer. That this works can be intuitively explained by the angular separation of MNIST. The minimal angle between two samples from MNIST is around \(0.17\) (in contrast to about \(2.44\cdot 10^{-6}\) for Two Moons). Hence, it is possible to efficiently separate pairs of samples with hyperplanes through the origin.
**Width of the second layer \(\hat{\Phi}\).** Again we are interested in the part of the parameter space where the interpolation probability is close to one. In Figure 9(c) we observe that, while \(\lambda=0\) seems to be the optimal choice (for the interpolation probability), increasing \(n\) may still lead to a reduction of the width of the second layer. Figure 10 reveals that the width does decrease well after interpolation is possible, and in fact, \(\lambda\approx 0.5\) yields an even lower value. This might be due to the effect that can be seen in Figure 7, where for \(\lambda=0\) the activation regions of the neurons of the second layer are 'wedges' and become more circular for larger \(\lambda\), which then might prove beneficial to the width of the second layer. Compared to the binary classification experiments in the previous sections, the width of the second layer is relatively large. For the most part, this is due to the larger number of classes: due to our one-versus-many approach, the width of the second layer scales as \(\sum_{i=1}^{K}M_{i}^{-}\), where \(K\) is the number of classes and \(M_{i}^{-}\) is the mutual covering number for the one-versus-rest problem for class \(i\).
### A worst-case example
We conclude with a constructed example that demonstrates that our algorithm can in certain cases fail to produce a small interpolating net. Figure 11 shows samples drawn from two parallel lines, where the distances of samples between classes are smaller than the distances of samples within each class. This forces the components of the mutual covering (and the activation regions of the neurons in the second layer) to be so small that they only cover a single point. Hence, the width of the second layer scales as the number of samples, which is the worst case. This example shows that, although our algorithm is guaranteed to produce small interpolating neural networks on data with a small mutual covering number, it may not take advantage of alternative benign structures (linear separability in this constructed example).
In practice, deep neural networks are often able to interpolate their training data with ease, and many works have tried to quantify the memorization capacity of neural network architectures, that is, the largest number of points that can be interpolated for an arbitrary placement of the points with an arbitrary assignment of labels. For real-world data, however, there is the intuitive expectation of a benign structure under which interpolation occurs, and it often happens at network sizes smaller than the memorization capacity. In this paper we investigate interpolation from an instance-specific viewpoint: given a fixed, finite dataset with two classes, the algorithm constructs, with high probability and in polynomial time, an interpolating three-layer neural network. The required number of parameters is linked to geometric properties of the two classes and their mutual arrangement. As a result, guarantees independent of the number of samples |
2309.12898 | Earthquake-like dynamics in ultrathin magnetic film | We study the motion of a domain wall on an ultrathin magnetic film using the
magneto-optical Kerr effect (MOKE). At tiny magnetic fields, the wall creeps
only via thermal activation over the pinning centers present in the sample. Our
results show that this creep dynamics is highly intermittent and correlated. A
localized instability triggers a cascade, akin to aftershocks following a large
earthquake, where the pinned wall undergoes large reorganizations in a compact
active region for a few seconds. Surprisingly, the size and shape of these
reorganizations display the same scale-free statistics of the depinning
avalanches in agreement with the quenched Kardar-Parisi-Zhang universality
class. | Gianfranco Durin, Vincenzo Maria Schimmenti, Marco Baiesi, Arianna Casiraghi, Alessandro Magni, Liza Herrera-Diez, Dafiné Ravelosona, Laura Foini, Alberto Rosso | 2023-09-22T14:36:55 | http://arxiv.org/abs/2309.12898v1 | # Earthquake-like dynamics in ultrathin magnetic film
###### Abstract
We study the motion of a domain wall on an ultrathin magnetic film using the magneto-optical Kerr effect (MOKE). At tiny magnetic fields, the wall creeps only via thermal activation over the pinning centers present in the sample. Our results show that this creep dynamics is highly intermittent and correlated. A localized instability triggers a cascade, akin to aftershocks following a large earthquake, where the pinned wall undergoes large reorganizations in a compact active region for a few seconds. Surprisingly, the size and shape of these reorganizations display the same scale-free statistics of the depinning avalanches in agreement with the quenched Kardar-Parisi-Zhang universality class.
An important class of future spintronic nanoelectronic devices is based on fully controlling magnetic domain walls in ultrathin films [1; 2]. When used as memory devices, for instance, it is fundamental to control their position stability and understand their dynamics under a small perturbation. It is well known that defects naturally present in the nanostructure can pin the domain wall. Consequently, the wall creeps at a finite temperature, with a velocity strongly vanishing with the applied magnetic field. In ultrathin magnetic films, the creep regime holds up to room temperature and well below the depinning field \(H_{\rm dep}\). After an initial transient, the wall moves with a small, steady velocity given by the celebrated _creep formula_:
\[\ln v(H)=-\left(\frac{H}{H_{0}}\right)^{-1/4}+\ln v_{0} \tag{1}\]
Here, \(H_{0}\) and \(v_{0}\) are material- and temperature-dependent parameters. The exponent \(1/4\) is instead universal and is the true hallmark of the creep dynamics. It was first predicted in [3], measured in [4] over several decades of velocity, and then confirmed in many experiments [5; 6]. Despite this success, the nature of the creep dynamics remains controversial. In particular, several hypotheses have been made on the length scales involved and on the shape of the wall.
The original derivation of the creep formula assumes that the small magnetic field tilts the minima but leaves the landscape locally at thermal equilibrium. Within this picture, one finds a single length \(L_{\rm opt}\), the scale of the reorganization needed to overcome the optimal energy barriers. Under this assumption, one can estimate that \(L_{\rm opt}\sim H^{-3/4}\) and the corresponding energy barriers grow as \(H^{-1/4}\)[3; 7; 8; 9]. Below the scale \(L_{\rm opt}\), the dynamics is thus purely thermal, characterized by an incoherent back-and-forth motion. Above \(L_{\rm opt}\) instead, the wall never comes back and undergoes a novel slow reorganization of size \(L_{\rm opt}\) in a different location [3].
Further studies, based on functional renormalization group (FRG) [10] and numerical simulations at infinitesimal temperature \(T\to 0^{+}\)[11; 12; 9; 13], have proposed a different scenario. The activation over a size \(L_{\rm opt}\) destabilizes the local energy landscape and reduces the size of the energy barriers. Similarly to what is observed in earthquakes, the jump of size \(L_{\rm opt}\) acts as the mainshock that produces a cascade of aftershocks of smaller size [14; 15; 16; 17]. Hence, the region undergoes a much larger reorganization and mimics the depinning avalanches belonging to the quenched Edwards-Wilkinson (qEW) universality class [18; 19]. The scale-free statistics of these avalanches is valid up to a length \(L_{\rm av}\) much more extensive than \(L_{\rm opt}\) and controlled by the finite values of the temperature and the field.
Interestingly, this scenario has strong connections with the thermal facilitation proposed to justify the dynamical heterogeneity in glass-forming liquids [20; 21]. The mechanism is similar: the slow relaxation time is dominated by localized slow events that nucleate large responses on a much larger scale. However, experimental evidence of these large reorganizations is still lacking.
In this paper, we report the full dynamical evolution of a domain wall using the magneto-optical Kerr effect (MOKE) on an ultrathin magnetic thin film. As it is clear from the movie in [22], the dynamics is intermittent and correlated. Our analysis demonstrates that the correlations are on scales much larger than \(L_{\rm opt}\) and that the
destabilization and reorganization are governed by the depinning critical point, displaying scale-free statistics with exponents in agreement with the quenched Kardar-Parisi-Zhang (qKPZ) universality class.
_Experimental setting._ -- Field-driven domain wall dynamics is investigated in a Ta(5)/CoFeB(1)/MgO(2)/Ta(3) (thickness in nm) thin film with perpendicular magnetic anisotropy (PMA) [22]. This material is typically very soft, exhibiting a depinning field of the order of 10 mT. The low density of pinning defects with respect to other PMA systems, such as Co/Pt and Co/Ni multilayers, makes it a good candidate to study domain wall dynamics [23]. The competition between domain wall elasticity and the local disorder results in a thermally activated creep motion for driving fields up to the depinning field [24]. A magnetic bubble domain is initially nucleated with a \(\sim 30\)\(\mu\)m radius in the pre-saturated film through a short field pulse. The subsequent slow expansion occurs under a small continuous perpendicular applied field. Here we use \(H=0.13,0.14,0.15,0.16\) mT, corresponding to \(<2\,\%\) of \(H_{\text{dep}}\). This ultra-slow creep dynamics is captured through MOKE microscopy. MOKE images with a spatial resolution of 400 nm are acquired every 200 ms until the bubble radius has increased to about 100 \(\mu\)m. Even at the lowest applied field, the bubble domain conserves its circular shape and boundary smoothness upon expansion, indicating weak random pinning. The limitations in the spatial resolution and in the acquisition rate do not allow us to detect the fast dynamics of the domain wall at the nanoscale, but we can resolve the motion of the wall by estimating the time at which each pixel changes its gray level (see section 1 of [22] for a detailed description of the procedure). Remarkably, the set of switched pixels between two consecutive images is always connected in space, and we define it as a single _frame event_.
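The per-pixel switching times and the frame events described above can be extracted with a few array operations. The following sketch is not the authors' code: `images` and `threshold` are illustrative placeholders, and the actual calibration procedure is described in [22].

```python
import numpy as np
from scipy import ndimage

def switching_analysis(images, threshold):
    """Sketch of the frame-event extraction described in the text.

    images: (T, H, W) stack of MOKE gray levels; threshold separates the
    reversed domain from the unswitched background (both are assumptions).
    """
    switched = images > threshold                  # True once a pixel has reversed
    # Frame index at which each pixel first switches (0 also means "never switched").
    switch_time = switched.argmax(axis=0)
    events = []
    for t in range(1, switched.shape[0]):
        new_pixels = switched[t] & ~switched[t - 1]    # pixels that flipped during frame t
        if new_pixels.any():
            labels, n = ndimage.label(new_pixels)      # connected sets of switched pixels
            events.append((t, labels, n))
    return switch_time, events
```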
_Analysis of experimental spatiotemporal patterns._ -- The dynamics of the domain observed frame by frame displays two important features:
* The bubble always expands and never comes back. As shown in Fig. 1 (b), the position of the interface \(R(\theta,t)\) along any direction is a non-decreasing function of time. Moreover, after an initial transient, the velocity of the wall decays to its steady value \(\bar{v}\). However, the local velocity (inset) displays strong intermittency in time.
* The motion presents spatial correlations well above the pixel size. Indeed Fig. 1 (a) shows that each event frame corresponds to a compact spatial region, and events of subsequent frames tend to cluster in space. See the movie in [22] to visualize the full dynamics of the bubble.
Figure 1: (a): The evolution of a portion (in pink) of the wall during 8 s. The sequence of the frame events is organized in two distinct clusters (in blue and red). The color gradient represents the progression of time; the reorganization is much faster than the steady velocity \(\bar{v}\). (b): Time evolution of the wall position \(R(\theta,t)\) along three directions (the gray shadow indicates the total spreading). After an initial transient of \(t\sim 300\) s, the velocity of the wall decays to its steady value (e.g. \(\bar{v}\sim 0.018\,\mu m/s\) for \(H=0.14\) mT). Inset: Local velocity along three directions. For a given direction \(\theta\) we find the times \(t_{1},t_{2},\dots t_{i}\dots\) where \(R(\theta,t)\) changes values. The velocity \(v(\theta,t)\) for \(t\in[t_{i},t_{i+1}]\) is obtained as \((R(\theta,t_{i+1})-R(\theta,t_{i}))/(t_{i+1}-t_{i})\). The dashed line corresponds to the average steady velocity \(\bar{v}\). Each signal displays intermittency, with instantaneous velocities 100 times larger than \(\bar{v}\).
These two features support the second scenario, in which the initial reorganization of a region of size \(L_{\rm opt}\) is followed by a cascade of frame events on much larger scales. Indeed, simple thermal activation is characterized by an incoherent back-and-forth motion representing the attempts to overcome the energy barrier. Here, instead, we observe a coherent forward motion on time scales much faster than the steady velocity. This conclusion is also consistent with the estimation of \(L_{\rm opt}\) given in [5; 10]:
\[L_{\rm opt}\sim L_{C}(H_{\rm dep}/H)^{3/4} \tag{2}\]
with \(L_{C}\) the microscopic Larkin length at which the wall fluctuations become of the order of its thickness. In the materials used in this work, the Larkin length is approximately \(L_{C}\sim 100\) nm. Hence, \(L_{\rm opt}\) is \(\sim 380-400\) nm. This scale is just below the single pixel size of \(400\) nm and is too small to be experimentally accessible.
To quantify the spatial correlations observed beyond \(L_{\rm opt}\), we construct clusters of frame events close in space and time via a simple algorithm that depends on two parameters \(\Delta t\) and \(\Delta s\). In practice, we start from an initial frame event (the epicenter of the cluster) and include all frame events within a time window \(\Delta t\) and a distance \(\Delta s\). Section 3 of [22] shows that our analysis is robust upon variations of \(\Delta t\) and \(\Delta s\). Fig. 2 shows the clusters obtained using this procedure. Each cluster can be characterized by two quantities, namely the size \(S\) (the colored areas in Fig. 2) and the longitudinal length \(\ell\) (see section 4 of [22]). Both quantities display scale-free statistics (Fig. 3 (a) and (b)), with exponents that are incompatible with the equilibrium exponents used to characterize the barriers of the energy landscape up to the scale \(L_{\rm opt}\). It is thus tempting to interpret these clusters as avalanches at the depinning transition, as suggested by the numerical simulations on directed interfaces in [12]. In those simulations, however, avalanches are very fat in the growth direction (i.e., the direction of propagation of the interface), consistent with quenched Edwards-Wilkinson (qEW) depinning. Here, clusters are instead elongated objects, as shown in Figs. 2 and 3 (c), where \(S\sim\ell^{1+\zeta}\) results in a roughness exponent \(\zeta\sim 0.63\). This exponent excludes the possibility of qEW depinning but is consistent with qKPZ depinning.
We corroborate this conclusion with an independent study of the roughness of the whole interface. Following the method proposed in [25], we compute the structure factor \(S(q)\) that, as discussed in Ref. [12], displays a \(q^{-(1+2\zeta)}\) dependence at small values of the wave number \(q\). Fig. 4 shows that the interface's roughness exponent \(\zeta\) is consistent with the one characterizing the elongated shape of the clusters. Our results thus prove that the spatial correlations observed beyond the scale \(L_{\rm opt}\) are governed by qKPZ depinning. The qKPZ universality class reveals the presence of anisotropic disorder in our experiment. This feature was not included in previous numerical simulations.
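To make the clustering procedure concrete, the sketch below groups frame events that lie within \(\Delta t\) and \(\Delta s\) of a growing cluster, starting from an epicenter. The growth rule and the data layout are assumptions made for illustration and may differ in detail from the implementation documented in [22].

```python
import numpy as np

def build_clusters(events, dt, ds):
    """Greedy space-time clustering of frame events (sketch, not the code of [22]).

    events: list of (t, pixels) pairs, with pixels an (N, 2) array of pixel coordinates;
    dt is measured in frames and ds in pixels.
    """
    order = sorted(range(len(events)), key=lambda i: events[i][0])
    unassigned = set(order)
    clusters = []
    for i in order:                                   # earliest unassigned event = epicenter
        if i not in unassigned:
            continue
        unassigned.discard(i)
        cluster = [i]
        grown = True
        while grown:                                  # absorb events close to the growing cluster
            grown = False
            for j in sorted(unassigned):
                tj, pj = events[j]
                near = any(
                    abs(tj - events[k][0]) <= dt and
                    np.min(np.linalg.norm(pj[:, None, :] - events[k][1][None, :, :],
                                          axis=-1)) <= ds
                    for k in cluster
                )
                if near:
                    cluster.append(j)
                    unassigned.discard(j)
                    grown = True
        size = sum(len(events[k][1]) for k in cluster)    # cluster size S in pixels
        clusters.append((cluster, size))
    return clusters
```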
## Conclusions
The celebrated creep formula (1) rests on the hypothesis that the key feature determining the wall motion is the optimal excitation of size \(L_{\rm opt}\). Our work focuses on intermittently occurring rapid movements along a magnetic wall and unveils their spatial organization, which extends over scales much larger than \(L_{\rm opt}\). Their size and shape display the same statistics as the avalanches recorded at the depinning magnetic field, but with a much slower evolution. In contrast with previous theoretical and experimental studies [26; 9], our experiment shows that the exponents are compatible with the qKPZ instead of the qEW universality class. The emergence of KPZ dynamics at depinning must be sustained by anisotropy in the material, and its origin calls for further understanding.
Figure 2: Sequences of clusters at different applied fields, which start from the initial bubble (the central black sector at the inner corner of the images) and grow until the radius is about \(100\,\mu\)m. See also the movie in [22].
The scenario emerging from our results should be tested in other examples of elastic disordered systems such as ferroelectric domain walls [27; 28; 29] or crack propagation [30]. Interestingly, a similar scenario was recently reported for a different class of disordered systems, such as amorphous solids or glass-forming liquids. Simulations on elastoplastic models have shown how localized excitations can trigger cascades of faster events [20; 21]. Hence, thermally-facilitated avalanches can be pretty generic in disordered systems. They reveal the complex nature of disordered energy landscapes that cannot be described simply by a sequence of uncorrelated elementary excitations.
The results reported here can also have significant consequences in the field of spintronics. The creep dynamics of a bubble domain is, in fact, at the base of one of the most used methods to determine the interfacial Dzyaloshinskii-Moriya interaction (DMI). This is a chiral interaction responsible for the occurrence of topological spin structures, such as chiral domain walls and skyrmions, considered the most promising information carriers in future spintronics technologies [31]. The determination of the DMI constant is based on the asymmetric expansion of the bubble under an in-plane magnetic field, with the domain wall velocity measured by dividing the displacement between two MOKE snapshots over their time interval. Fig. 1 (b) actually suggests that the velocity is constant only at large times/displacements, and thus that this procedure could be misleading. In addition, theoretical expressions to evaluate the DMI field from the velocity curve are primarily phenomenological, and a more accurate description of the domain wall dynamics, such as the qKPZ reported here, could highly improve the fits of the data. We hope these considerations shed some light on a more accurate determination of DMI value and solve the contradictions with other popular methods, such as the Brillouin light scattering.
_Acknowledgements._ -- V.M.S. acknowledges 80Prime CNRS support for the project CorrQuake. M.B. is supported by research grant BAIE_BIRD2021_01 of the University of Padova.
Figure 3: (a) Cluster size \(S\) and (b) longitudinal length \(\ell\) distributions for different magnetic fields. (c) Cluster size versus their longitudinal length. The clusters have been obtained for \(\Delta t=8\) frames and \(\Delta s=2\) pixels. The first two panels are compatible with qEW and qKPZ universality classes but not with the equilibrium exponents. The value of the roughness exponents from (c) is computed using the power law scaling \(S\sim\ell^{1+\zeta}\). The measured value is compatible with both \(\zeta_{\text{qKPZ}}=0.63\) and \(\zeta_{\text{equilibrium}}=2/3\), but exclude the qEW universality class \(\zeta_{\text{qEW}}=1.25\). Combining these findings leaves the qKPZ universality class as the sole possible candidate for describing the creep motion in our experiment. | 磁気光学ケール効果(MOKE)を用いて、超薄磁性膜上にドメイン壁の運動を研究しています。非常に小さな磁界では、壁は、試料に存在する固定中心を介してのみ熱活性を伴って動きます。私たちの研究の結果は、このクリープダイナミクスは非常に間歇的で相関的です。局部不安定性は、大地震の後のように、ピンの壁が活性領域で数秒間大きな再編成を行うことで、一連の cascade を引き起こします。驚くべきことに、この再編成のサイズと形状は、デピンニング avalanches のスケールフリー統計と一致し、ケードラ・パリジ・ Zhang の普遍的なクラスに属しています。 |
2309.14231 | Combined sizing and layout optimization of truss structures via update
Monte Carlo tree search (UMCTS) algorithm | The main concern of this study is to find the optimal design of truss
structures considering sizing and layout variables simultaneously. As compared
to purely sizing optimization problems, this problem is more challenging since
the two types of variables involved are fundamentally different in nature. In
this paper, a reinforcement learning method combining the update process and
Monte Carlo tree search called the update Monte Carlo tree search (UMCTS) for
sizing optimization problems is applied to solve combined sizing and layout
optimization for truss structures. This study proposes a novel update process
for nodal coordinates with two features. (1) The allowed range of each
coordinate varies in each round. (2) Accelerators for the number of entries in
the allowed range and iteration numbers are introduced to reduce the
computation time. Furthermore, nodal coordinates and member areas are
determined at the same time with only one search tree in each round. The
validation and efficiency of the UMCTS are tested on benchmark problems of
planar and spatial trusses with discrete sizing variables and continuous layout
variables. It is shown that the CPU time of the UMCTS is two times faster than
the branch and bound method. The numerical results demonstrate that the
proposed method stably achieves a better solution than other traditional
methods. | Fu-Yao Ko, Katsuyuki Suzuki, Kazuo Yonekura | 2023-09-25T15:42:52 | http://arxiv.org/abs/2309.14231v1 | **Combined sizing and layout optimization of truss structures via**
###### Abstract
The main concern of this study is to find the optimal design of truss structures considering sizing and layout variables simultaneously. As compared to purely sizing optimization problems, this problem is more challenging since the two types of variables involved are fundamentally different in nature. In this paper, a reinforcement learning method combining the update process and Monte Carlo tree search called the update Monte Carlo tree search (UMCTS) for sizing optimization problems is applied to solve combined sizing and layout optimization for truss structures. This study proposes a novel update process for nodal coordinates with two features. (1) The allowed range of each coordinate varies in each round. (2) Accelerators for the number of entries in the allowed range and iteration numbers are introduced to reduce the computation time. Furthermore, nodal coordinates and member areas are determined at the same time with only one search tree in each round. The validation and efficiency of the UMCTS are tested on benchmark problems of planar and spatial trusses with discrete sizing variables and continuous layout variables. It is shown that the CPU time of the UMCTS is two times faster than the branch and bound method. The numerical results demonstrate that the proposed method stably achieves a better solution than other traditional methods.
## 1 Introduction
Structural optimization has become an important tool in the design process over the past decades owing to limited material resources [1, 2]. For truss structures with fixed topology, optimization tasks focus on the structural layout or geometry and on member sizing, i.e., determining nodal coordinates and cross-sectional areas of members simultaneously [3, 4]. The design of structures should satisfy various constraints, such as stress, displacement, and buckling limits. From a weight-saving aspect, sizing and layout optimization is recognized as an important task because it can provide more reduction in weight than purely sizing optimization [5]. However, this problem is more challenging since it is an ill-conditioned problem [6, 7] due to the fact that the two types of variables involved are fundamentally different in nature and produce different rates of convergence.
Originally, many traditional approaches based on approximation method formulated by Hansen and Vanderplaats [8], Kirsch [9], Zhou and Xia [10], and Salajegheh and
Vanderplaats [11] and optimality criteria proposed by Gil and Andreu [12] and Wang et al. [13] have been used to solve truss optimization problems with sizing and layout variables. These kinds of methods are gradient-based solution strategies that require the relationship between design variables and objective function to determine the path toward an optimal solution. Gradient-based optimization tools result in solutions far from the global optimum or even in infeasible solutions unless the problem is convex. Moreover, these algorithms are not applicable when treating discrete variables because the gradients may become singular across the boundary of discontinuity.
In the past three decades, structural optimization problems have been investigated using different categories of metaheuristic algorithms inspired by the natural phenomena, animal strategies and behaviors, and biological sciences. The most extensively applied metaheuristic algorithms are genetic algorithm (GA) by Wu and Chow [14], Hasancebi and Erbatur [15], Kaveh and Kalatjari [16], Tang et al. [17], Hwang and He [18], and Rahami et al. [19], particle swarm optimization (PSO) by Fourie and Groenwold [20], Gholizadeh [21], and Shojaee et al. [22], firefly algorithm (FA) by Miguel et al. [23], teaching-learning-based optimization (TLBO) by Dede and Ayvaz [24], differential evolution (DE) by Ho-Huu et al. [25], and artificial bee colony algorithm (ABC) by Jawad et al. [26]. Metaheuristic algorithm is a computational intelligence paradigm based on random search in the problem-solving space [27]. The quality of the solution strongly depends on the initial solution and parameters. Therefore, metaheuristic algorithms require appropriate tuning of the parameters [28].
Reinforcement learning (RL) is an area of machine learning which trains an intelligent agent to perform tasks by interacting with an unknown dynamic environment. RL maximizes the expected cumulative reward by learning to map different states in a sequential decision process to optimal actions. RL is usually formulated as a Markov decision process (MDP) in which the next state and reward are determined solely by the current state [29, 30]. Because RL does not need direct knowledge or a model of the environment, RL is suitable for structural optimization problems where it is difficult to determine the desired optimal solutions beforehand [31, 32, 33]. Monte Carlo tree search (MCTS) is a best-first search method for finding the optimal decisions in a given domain that uses random samples to explore the search space. Initially, only the root node exists in the search tree, which is not changed during the process. Henceforward, the search tree is built incrementally over time and further develops the search tree by finding the most promising moves until the search time is terminated [34].
Recently, an RL-based method called the update Monte Carlo tree search (UMCTS) was proposed by Ko et al. [35] to solve sizing optimization problems for truss structures. This algorithm combined the update process and MCTS with the upper confidence bound (UCB) [36]. An accelerator for the number of choices for member area and
iteration number was introduced to reduce the computation time. Furthermore, the best reward collected during the simulation process was employed to determine the optimal action sequence. The performance of UMCTS was compared with that of other metaheuristic algorithms. The authors indicated that UMCTS stably attained optimal solutions lighter than the other metaheuristic methods. Moreover, the CPU time of the UMCTS was much faster than the branch and bound (BB) method.
Given the validity and efficiency of the UMCTS approach in solving sizing optimization of truss structures, the aim of this paper is to apply the algorithm to combined sizing and layout optimization problems for truss structures. To the best of the authors' knowledge, RL-based methods have not been utilized to solve sizing and layout optimization simultaneously. In this research, a novel update process for nodal coordinates is proposed to determine the best layout of truss structures. This update process is then combined, in a single search tree, with the update process for member areas recently introduced by Ko et al. [35], so that the geometry and the cross-sectional areas are determined at the same time. The details are shown in Section 3.
The rest of the paper is arranged as follows. Section 2 briefly presents the characteristics of combined sizing and layout optimization problems. The UMCTS algorithm for sizing and layout optimization is introduced in Section 3. UMCTS is validated by several well-studied truss structures from the literature, and the numerical results are shown in Section 4. Finally, the conclusions of this study are provided in Section 5.
## 2 Formulation of combined sizing and layout truss optimization problems
The goal of this study is to find the optimal layout of the structure. Therefore, design variables include the coordinates of certain nodes and the cross-sectional areas of members. This problem is constructed to minimize the weight of the structure under stress and displacement constraints. The sizing and layout problem for truss structures can be mathematically formulated as
\[\text{Minimize}\qquad W(\textbf{A},\boldsymbol{\Delta})=\rho\sum_{i=1}^{n_{g} }\Bigg{(}a_{i}\sum_{j=1}^{\psi_{i}}l_{ij}\left(\delta_{w}\right)\Bigg{)}. \tag{1}\]
In Eq. (1), \(W(\textbf{A},\boldsymbol{\Delta})\) is the objective function which is the total weight of the truss. \(\textbf{A}=\left(a_{1},a_{2},...,a_{i},...,a_{n_{g}}\right)\) is the sizing variable vector which includes a cross-sectional area \(a_{i}\) chosen from a sorted set **D** with available discrete values in section type \(i\). All members in section type \(i\) have the same member area \(a_{i}\). For brevity, group
is utilized to indicate section type in this paper. \(\mathbf{D}\) includes all available discrete values arranged in ascending sequences and can be expressed as
\[\mathbf{D}=\left\{d_{1},d_{2},...,d_{h},...,d_{n_{b}}\right\},\ 1\leq h\leq n_{b}, \tag{2}\]
where \(h\) is the index of available discrete values; \(n_{b}\) is the total number of available sections. \(\mathbf{\Delta}=\left(\delta_{1},\delta_{2},...,\delta_{w},...,\delta_{n_{ \sigma}}\right)\) is the layout variable vector including nodal coordinate \(\delta_{w}\) for joint \(w\) to be determined and can be any continuous value between \(\vartheta_{w}^{m}\) and \(\vartheta_{w}^{M}\). \(\vartheta_{w}^{m}\) and \(\vartheta_{w}^{M}\) are the predefined lower and upper bounds for the location of the joints. \(\rho\) is the material density. \(n_{g}\) is the total number of design variables for member area. \(n_{\sigma}\) is the total number of design variables for nodal coordinate. \(\mathbf{\Psi}=\left\{\psi_{1},\psi_{2},...,\psi_{i},...,\psi_{n_{g}}\right\}\) is a sorted set including elements \(\psi_{i}\) which represents the total number of the members in group \(i\). \(l_{ij}(\delta_{w})\) is the length of the member \(j\) in group \(i\) based on nodal coordinate \(\delta_{w}\) for design. \(j\) is the index of the member in group \(i\). The constraint functions are formulated as
\[\begin{array}{l}\mathbf{K(A,\Delta)U=F},\\ \varsigma_{ij}=\mathbf{B_{ij}U_{ij}},\\ s_{ij}=\frac{E}{l_{ij}}\varsigma_{ij},\\ s_{\mathrm{m}}\leq s_{ij}\leq s_{\mathrm{M}},\ i=1,2,...,n_{g},j=1,2,...,\psi_ {i},\\ u_{\mathrm{m}}\leq u_{k}\leq u_{\mathrm{M}},\ k=1,2,...,n_{c},\end{array} \tag{3}\]
where \(\mathbf{K}\) is the stiffness matrix of the truss structure; \(\mathbf{U}=\left(u_{1},u_{2},...,u_{k},...,u_{n_{c}}\right)^{T}\) is the vector of nodal displacements of the truss structure; \(\mathbf{F}=\left(f_{1},f_{2},...,f_{k},...,f_{n_{c}}\right)^{T}\) is the vector of applied nodal forces; \(k\) denotes the index of a structural joint; \(n_{c}\) represents the total number of structural joints; \(E\) is the modulus of elasticity; \(\varsigma_{ij}\) is the elongation of the member; \(\mathbf{B_{ij}}\) is written as \(\mathbf{B_{ij}}=(-\mathbf{e_{ij}^{T}}\quad\mathbf{e_{ij}^{T}})\); \(\mathbf{e_{ij}}\) is a unit vector along the member pointing from local node 1 to local node 2 and is expressed as \(\mathbf{e_{ij}}=(\cos\theta_{ij}\quad\sin\theta_{ij})^{T}\); \(\theta_{ij}\) is the angle from the X-axis to \(\mathbf{e_{ij}}\). \(\mathbf{U_{ij}}=(\mathbf{U_{ij,1}}\quad\mathbf{U_{ij,2}})^{T}\) is a vector with the displacements at the ends of the member; \(\mathbf{U_{ij,1}}\) and \(\mathbf{U_{ij,2}}\) are represented as \(\mathbf{U_{ij,1}}=(u_{ij,1x}\quad u_{ij,1y})^{T}\) and \(\mathbf{U_{ij,2}}=(u_{ij,2x}\quad u_{ij,2y})^{T}\). A member with local nodes 1 and 2 at each end is shown in Fig. 1. In Eq. (3), the normal stresses \(s_{ij}\) are compared with the allowable stresses \(s_{\mathrm{m}}\) and \(s_{\mathrm{M}}\). Also, the nodal displacements \(u_{k}\) are compared with the allowable displacements \(u_{\mathrm{m}}\) and \(u_{\mathrm{M}}\).
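The paper states that the structural analysis behind Eqs. (1)-(3) is a direct stiffness computation coded in Python. The following minimal 2D sketch (not the authors' code; all names and the data layout are illustrative) shows how the weight \(W\), the nodal displacements \(\mathbf{U}\), and the member stresses \(s_{ij}\) entering the constraints can be evaluated for a candidate design.

```python
import numpy as np

def analyze_truss(nodes, members, areas, loads, fixed_dofs, E, rho):
    """Minimal 2D direct-stiffness analysis (illustrative sketch).

    nodes:      (n_c, 2) array of joint coordinates
    members:    list of (node_a, node_b, group) index triples
    areas:      cross-sectional area of each group
    loads:      (2 * n_c,) global load vector F
    fixed_dofs: indices of constrained degrees of freedom
    """
    n_dof = 2 * len(nodes)
    K = np.zeros((n_dof, n_dof))
    weight = 0.0
    for a, b, g in members:
        d = nodes[b] - nodes[a]
        L = np.linalg.norm(d)
        c, s = d / L                                  # direction cosines of e_ij
        weight += rho * areas[g] * L                  # contribution to W in Eq. (1)
        k = E * areas[g] / L * np.outer([-c, -s, c, s], [-c, -s, c, s])
        dofs = [2 * a, 2 * a + 1, 2 * b, 2 * b + 1]
        K[np.ix_(dofs, dofs)] += k
    free = [i for i in range(n_dof) if i not in fixed_dofs]
    U = np.zeros(n_dof)
    U[free] = np.linalg.solve(K[np.ix_(free, free)], loads[free])   # K U = F on free DOFs
    stresses = []
    for a, b, g in members:
        d = nodes[b] - nodes[a]
        L = np.linalg.norm(d)
        c, s = d / L
        elong = np.dot([-c, -s, c, s], U[[2 * a, 2 * a + 1, 2 * b, 2 * b + 1]])  # B_ij U_ij
        stresses.append(E / L * elong)                # s_ij = (E / l_ij) * elongation
    return weight, U, np.array(stresses)
```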
## 3 UMCTS algorithm for sizing and layout optimization of truss structures
For sizing and layout optimization of truss structures, there are two kinds of variables to be determined: the nodal coordinates \(\delta_{w}\) and the member areas \(a_{i}\). In Sections 3.1 and 3.2, a novel update process for the nodal coordinates is presented, and the update process for the member areas is briefly reviewed. Then, the approach to combining sizing and layout variables in the update process is proposed in Section 3.3. Finally, the elements of the MCTS and the procedure for search tree creation are described in Section 3.4.
### Update process for layout variables
For a round \(p\), the initial state includes the layout vector \(\mathbf{\Lambda^{p}}=\left(\lambda_{1}^{p},\lambda_{2}^{p},...,\lambda_{w}^{p},...,\lambda_{n_{\sigma}}^{p}\right)\) containing the nodal coordinates \(\lambda_{w}^{p}\) to be decided. To determine the best layout vector in the continuous space, the design domain of the nodal coordinate is uniformly discretized by choosing a certain number of samples, as described in Fig. 2. A nodal coordinate \(\lambda_{w}^{p}\) is selected from a sorted set \(\mathbf{\mathcal{H}_{w}^{p}}\) containing elements \(\left(\mathbf{\mathcal{h}_{w}^{p}}\right)_{1}\), \(\left(\mathbf{\mathcal{h}_{w}^{p}}\right)_{2}\),..., \(\left(\mathbf{\mathcal{h}_{w}^{p}}\right)_{\chi}\),..., \(\left(\mathbf{\mathcal{h}_{w}^{p}}\right)_{\epsilon}\). \(\chi\) is the index within the sorted set \(\mathbf{\mathcal{H}_{w}^{p}}\). \(\epsilon\) is the total number of entries in the design domain. \(\left(\mathbf{\mathcal{h}_{w}^{p}}\right)_{1}\) and \(\left(\mathbf{\mathcal{h}_{w}^{p}}\right)_{\epsilon}\) are equal to \(\lambda_{w}^{p}-\frac{1}{2}R_{w}^{p}\) and \(\lambda_{w}^{p}+\frac{1}{2}R_{w}^{p}\), respectively. \(R_{w}^{p}\) is the allowed range of the nodal coordinate for node \(w\) in round \(p\), and the relation between \(R_{w}^{p}\) and \(p\) is expressed as
\[\begin{split} R_{w}^{1}&=\vartheta_{w}^{M}-\vartheta_ {w}^{m},\\ R_{w}^{p}&=R_{w}^{1}\times 0.5^{\left[\frac{p-1}{n_{ \sigma}}\right]}\left(p>1\right),\end{split} \tag{4}\]
where \(\left\lceil\frac{p-1}{n_{\sigma}}\right\rceil\) is the least integer greater than or equal to \(\frac{p-1}{n_{\sigma}}\). The final state for the layout vector, \(\overline{\mathbf{\Lambda^{p}}}=\left(\overline{\lambda_{1}^{p}},\overline{\lambda_{2}^{p}},...,\overline{\lambda_{w}^{p}},...,\overline{\lambda_{n_{\sigma}}^{p}}\right)\), is decided by the search tree and used as the initial state for the layout vector \(\mathbf{\Lambda^{p+1}}=\left(\lambda_{1}^{p+1},\lambda_{2}^{p+1},...,\lambda_{w}^{p+1},...,\lambda_{n_{\sigma}}^{p+1}\right)\) of round \((p+1)\). It is worth mentioning that in round \(1\), the initial state is \(\mathbf{\Lambda^{1}}=\left(\lambda_{1}^{1},\lambda_{2}^{1},...,\lambda_{w}^{1},...,\lambda_{n_{\sigma}}^{1}\right)=\left(\widetilde{\delta_{1}},\widetilde{\delta_{2}},...,\widetilde{\delta_{w}},...,\widetilde{\delta_{n_{\sigma}}}\right)\). Each element \(\lambda_{w}^{1}\) in \(\mathbf{\Lambda^{1}}\) is equal to the initial coordinate \(\widetilde{\delta_{w}}\) of the truss layout.
Figure 1: A general member with local nodes 1 and 2 at each end [1,2].
To accelerate the UMCTS algorithm, an accelerator for the total number of entries \(\epsilon\) for the nodal coordinate \(\delta_{w}\) in the allowed range \(R\) is proposed. The total number of entries \(\epsilon\) for the nodal coordinate \(\delta_{w}\) in round \(p\) is formulated as
\[\begin{array}{l}\epsilon^{1}=\left|\mathbf{\mathcal{H}_{1}^{1}}\right|=\cdots=\left|\mathbf{\mathcal{H}_{w}^{1}}\right|=\cdots=\left|\mathbf{\mathcal{H}_{n_{\sigma}}^{1}}\right|=n_{\sigma}+n_{g}=\varpi,\\ \iota^{p}=\varpi\times 0.5^{\left\lfloor\frac{p-1}{n_{\sigma}}\right\rfloor}\ \left(p>1\right),\\ \epsilon^{p}=\left|\mathbf{\mathcal{H}_{1}^{p}}\right|=\cdots=\left|\mathbf{\mathcal{H}_{w}^{p}}\right|=\cdots=\left|\mathbf{\mathcal{H}_{n_{\sigma}}^{p}}\right|=\max(3,\iota^{p})\ (p>1),\end{array} \tag{5}\]
where \(\left|\mathbf{\mathcal{H}_{w}^{p}}\right|\) is the total number of elements in \(\mathbf{\mathcal{H}_{w}^{p}}\); \(\left\lfloor\frac{p-1}{n_{\sigma}}\right\rfloor\) is the greatest integer less than or equal to \(\frac{p-1}{n_{\sigma}}\); \(\max(3,\iota^{p})\) is the larger of \(3\) and \(\iota^{p}\).
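A compact way to read Eqs. (4)-(5) is as a schedule that halves both the search window and the number of grid entries every \(n_{\sigma}\) rounds. The sketch below builds the candidate set \(\mathbf{\mathcal{H}_{w}^{p}}\) for one coordinate; the integer truncation of \(\iota^{p}\) is an assumption, since Eq. (5) leaves it unrounded.

```python
import math

def layout_candidates(lam_w, R1_w, p, n_sigma, n_g):
    """Candidate grid H_w^p for one nodal coordinate in round p (sketch of Eqs. (4)-(5))."""
    # Allowed range halves every n_sigma rounds (Eq. (4)).
    R = R1_w if p == 1 else R1_w * 0.5 ** math.ceil((p - 1) / n_sigma)
    # Number of grid entries shrinks on a similar schedule but never drops below 3 (Eq. (5)).
    if p == 1:
        eps = n_sigma + n_g
    else:
        eps = max(3, int((n_sigma + n_g) * 0.5 ** math.floor((p - 1) / n_sigma)))
    lo, hi = lam_w - R / 2.0, lam_w + R / 2.0         # (h)_1 and (h)_eps of the sorted set
    step = (hi - lo) / (eps - 1)
    return [lo + chi * step for chi in range(eps)]
```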
Figure 2: Discretization of the continuous space for the nodal coordinate to be decided.
### Update process for sizing variables
In this section, the update process for the member area is briefly introduced. In a round \(p\), the initial state is a vector \(\mathbf{\Gamma^{p}}=\left(\gamma_{1}^{p},\gamma_{2}^{p},...,\gamma_{i}^{p},...,\gamma_{n_{g}}^{p}\right)\) with member areas \(\gamma_{i}^{p}\). The member area \(\gamma_{i}^{p}\) is chosen from a sorted set \(\mathbf{D_{i}^{p}}=\left\{\left(d_{i}^{p}\right)_{m},...,\left(d_{i}^{p}\right)_{h},...,\left(d_{i}^{p}\right)_{\mu},...,\left(d_{i}^{p}\right)_{M}\right\}\). \((d_{i}^{p})_{m}\), \((d_{i}^{p})_{\mu}\), and \((d_{i}^{p})_{M}\) are the minimum, median, and maximum values in \(\mathbf{D_{i}^{p}}\). \((d_{i}^{p})_{\mu}\) is equal to \(\gamma_{i}^{p}\). The member area selection among the available discrete values for round \(p\) is shown in Fig. 3. The final state \(\overline{\mathbf{\Gamma^{p}}}=\left(\overline{\gamma_{1}^{p}},\overline{\gamma_{2}^{p}},...,\overline{\gamma_{i}^{p}},...,\overline{\gamma_{n_{g}}^{p}}\right)\) is found by the search tree and utilized as the initial state \(\mathbf{\Gamma^{p+1}}=\left(\gamma_{1}^{p+1},\gamma_{2}^{p+1},...,\gamma_{i}^{p+1},...,\gamma_{n_{g}}^{p+1}\right)\) for round \((p+1)\). All elements in \(\mathbf{\Gamma^{1}}\) are equal to the maximum value in \(\mathbf{D}\). The total number of selections for the member area \(\tau\) is expressed as
\[\begin{array}{l}\tau^{1}=\left|\mathbf{D_{1}^{1}}\right|=\cdots=\left|\mathbf{D_{i}^{1}}\right|=\cdots=\left|\mathbf{D_{n_{g}}^{1}}\right|=\left\{\begin{array}{l}\left|\mathbf{D}\right|\text{ if }\left|\mathbf{D}\right|\text{ is odd number,}\\ \left|\mathbf{D}\right|+1\text{ if }\left|\mathbf{D}\right|\text{ is even number,}\end{array}\right.\\ \zeta^{p}=\tau^{1}\times 0.5^{\left\lceil\frac{p-1}{3}\right\rceil},\\ \varrho^{p}=\left\lfloor\zeta^{p}\right\rfloor,\\ \kappa^{p}=\begin{cases}\varrho^{p}\text{ if }\varrho^{p}\text{ is odd number,}\\ \varrho^{p}+1\text{ if }\varrho^{p}\text{ is even number,}\end{cases}\\ \tau^{p}=\left|\mathbf{D_{1}^{p}}\right|=\cdots=\left|\mathbf{D_{i}^{p}}\right|=\cdots=\left|\mathbf{D_{n_{g}}^{p}}\right|=\max(3,\kappa^{p})\ (p>1),\end{array} \tag{6}\]
where \(\left|\mathbf{D_{i}^{p}}\right|\) is the total number of elements in \(\mathbf{D_{i}^{p}}\); \(\left\lceil\frac{p-1}{3}\right\rceil\) is the least integer greater than or equal to \(\frac{p-1}{3}\); \(\left\lfloor\zeta^{p}\right\rfloor\) is the greatest integer less than or equal to \(\zeta^{p}\); \(\max(3,\kappa^{p})\) is the larger of \(3\) and \(\kappa^{p}\).
Figure 3: Member area selection in the available discrete values.
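For completeness, the sketch below shows one way Eq. (6) can be turned into the sublist \(\mathbf{D_{i}^{p}}\) of candidate sections centred on the current area; the clamping at the ends of the profile list is an assumption made for illustration and is not specified in the paper.

```python
import math

def area_candidates(D, gamma_i, p):
    """Sublist D_i^p of available sections around the current area gamma_i (sketch of Eq. (6))."""
    tau1 = len(D) if len(D) % 2 == 1 else len(D) + 1          # tau^1 is forced to be odd
    if p == 1:
        tau = tau1
    else:
        rho_p = math.floor(tau1 * 0.5 ** math.ceil((p - 1) / 3))
        kappa = rho_p if rho_p % 2 == 1 else rho_p + 1        # keep tau^p odd: gamma_i stays the median
        tau = max(3, kappa)
    mid = D.index(gamma_i)
    half = (tau - 1) // 2
    lo, hi = max(0, mid - half), min(len(D), mid + half + 1)  # clamp at the list boundaries (assumption)
    return D[lo:hi]
```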
### 3.3 Combination of update process for sizing and layout variables
To determine the nodal coordinates and member areas at the same time, the update processes for nodal coordinates and member areas are integrated. The update process for combined sizing and layout optimization problems is shown in Fig. 4. The initial state in a round \(p\) is \(\mathbf{\Lambda^{p}}\) for the nodal coordinates and \(\mathbf{\Gamma^{p}}\) for the member areas. Then, a search tree is created to decide the final states \(\overline{\mathbf{\Lambda^{p}}}\) and \(\overline{\mathbf{\Gamma^{p}}}\). In the search tree, a nodal coordinate is determined in \(\mathbf{\mathcal{H}_{w}^{p}}\) first, and then a member area is decided in \(\mathbf{D_{i}^{p}}\). The update process is conducted for multiple rounds. The algorithm continues until the relative error \(\eta\) between the minimum weight \(W_{m}^{1\sim\varepsilon-1}=\min\left\{\overline{W^{1}},\overline{W^{2}},...,\overline{W^{\varepsilon-1}}\right\}\) of the final states from round 1 to \((\varepsilon-1)\) and the weight \(\overline{W^{\varepsilon}}\) of the final state in round \(\varepsilon\) falls below 0.1%. The final state for the layout vector, \(\overline{\mathbf{\Lambda^{\varepsilon}}}\), and the sizing vector, \(\overline{\mathbf{\Gamma^{\varepsilon}}}\), is the optimal solution for the nodal coordinates \(\mathbf{\Delta}\) and member areas \(\mathbf{A}\) of the optimization problem in Section 2.
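A sketch of this outer loop is given below; `run_one_round` is a placeholder for the search-tree construction of Section 3.4, not an interface defined in the paper.

```python
def umcts_optimize(initial_layout, initial_areas, run_one_round, tol=1e-3):
    """Outer update loop of the combined sizing/layout UMCTS (illustrative sketch).

    run_one_round(layout, areas, p) must return the final layout, areas and weight
    of round p; tol = 1e-3 corresponds to the 0.1% stopping criterion.
    """
    layout, areas = initial_layout, initial_areas
    best_weight = float("inf")
    p = 0
    while True:
        p += 1
        layout, areas, weight = run_one_round(layout, areas, p)
        # Compare the current round against the best weight of all previous rounds.
        if p > 1 and abs(best_weight - weight) / best_weight < tol:
            break
        best_weight = min(best_weight, weight)
    return layout, areas, weight
```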
### 3.4 MCTS search for combining sizing and layout variables
To apply MCTS search to the sizing and layout optimization of truss structures, it is required to define the problem as an MDP. A state is formulated by a tuple \(\left(\mathbf{\Omega},\mathbf{\Lambda},\mathbf{\Phi},\mathbf{\Gamma}\right)\), which is expressed as
\[\begin{array}{ll}\text{Minimize}&W(\mathbf{\Lambda},\mathbf{\Gamma})=\rho\sum_{i=1}^{n_{g}}\left(\gamma_{i}\sum_{j=1}^{\psi_{i}}l_{ij}(\lambda_{w})\right),\\ \text{Subject to}&\begin{array}{l}\mathbf{K}(\mathbf{\Lambda},\mathbf{\Gamma})\mathbf{U}=\mathbf{F},\\ \varsigma_{ij}=\mathbf{B_{ij}}\mathbf{U_{ij}},\\ s_{ij}=\frac{E}{l_{ij}}\varsigma_{ij},\\ s_{\text{m}}\leq s_{ij}\leq s_{\text{M}},\ i=1,2,...,n_{g},\ j=1,2,...,\psi_{i},\\ u_{\text{m}}\leq u_{k}\leq u_{\text{M}},\ k=1,2,...,n_{c},\\ \boldsymbol{\Omega}=\{\omega_{1},\omega_{2},...,\omega_{w},...,\omega_{n_{g}}\},\\ \boldsymbol{\Phi}=\{\varphi_{1},\varphi_{2},...,\varphi_{l},...,\varphi_{n_{g}}\},\end{array}\end{array} \tag{7}\]
Figure 4: Update process in UMCTS for sizing and layout optimization problems.
\(\boldsymbol{\Omega}=\big{\{}\omega_{1},\omega_{2},...,\omega_{w},...,\omega_{n _{g}}\big{\}}\) is a sorted set to check the modification for nodal coordinates. \(\boldsymbol{\Phi}=\Big{\{}\varphi_{1},\varphi_{2},...,\varphi_{l},...,\varphi_ {n_{g}}\Big{\}}\) is a sorted set to confirm the modification for member areas. \(\boldsymbol{\Omega}\) and \(\boldsymbol{\Phi}\) only contain elements 0 or 1. 0 indicates that the coordinate of that node or cross-sectional area of that group is modified, and 1 means that the nodal coordinate or member area is unmodified. Two kinds of actions are to select a nodal coordinate in \(\boldsymbol{\mathcal{H}_{\text{w}}^{\text{p}}}\) and to choose a cross-sectional area in \(\boldsymbol{\mathcal{D}_{\text{i}}^{\text{p}}}\). At first, the nodal coordinate is selected in \(\boldsymbol{\mathcal{H}_{\text{w}}^{\text{p}}}\) until all elements in \(\boldsymbol{\Omega}\) are zero. Then, the member area is chosen in \(\boldsymbol{\mathcal{D}_{\text{i}}^{\text{p}}}\). When all elements in \(\boldsymbol{\Omega}\) and \(\boldsymbol{\Phi}\) are zero, no action is needed for this state (Fig. 5). After taking an action, a state \((\boldsymbol{\Omega},\boldsymbol{\Lambda},\boldsymbol{\Phi},\boldsymbol{ \Gamma})\) is changed depending on the consequence of the selected action. The reward \(r\) is given based on the results from the structural simulation. The terminal state \((\boldsymbol{\Omega_{\text{T}}},\boldsymbol{\Lambda_{\text{T}}},\boldsymbol{ \Phi_{\text{T}}},\boldsymbol{\Gamma_{\text{T}}})\) travels through stress and displacement constraints. When there are constraint violations, \(r_{T}\) is equal to 0. If this state passes all the constraints, \(r_{T}\) is \(\alpha/(W_{T})^{2}\). \(\alpha\) is the minimum weight, which is the weight of the truss based on initial layout with the member area equal to the minimum value in \(\boldsymbol{\mathcal{D}}\). \(W_{T}\) is the weight of the truss when reaching the terminal state \(T\).
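The terminal reward defined above translates directly into a small function; the sketch below assumes the member stresses and nodal displacements have already been obtained from the structural analysis.

```python
def terminal_reward(weight, stresses, displacements, s_min, s_max, u_min, u_max, alpha):
    """Terminal reward r_T of Section 3.4: zero if any constraint is violated,
    otherwise alpha / W_T**2, where alpha is the weight of the initial layout
    built entirely from the smallest available section."""
    feasible = all(s_min <= s <= s_max for s in stresses) and \
               all(u_min <= u <= u_max for u in displacements)
    return alpha / weight ** 2 if feasible else 0.0
```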
In a round \(p\), search tree shown in Fig. 6 is created which follows the four-step repetition [34] and then finds the optimal action sequence by UCB (Fig. 7). Originally, this tree includes only one root node \(Y\). The state of the root node is \(\boldsymbol{\Omega_{\text{Y}}^{\text{p}}}=\{1,1,...,1,...,1\}\)\((\boldsymbol{\quad\big{|}\boldsymbol{\Omega_{\text{Y}}^{\text{p}}}\big{|}=n_{ \sigma}\quad),\quad\boldsymbol{\Lambda_{\text{Y}}^{\text{p}}}=\boldsymbol{ \Lambda^{\text{p}}}=\big{(}\lambda_{1}^{p},\lambda_{2}^{p},...,\lambda_{w}^{p},...,\lambda_{n_{g}}^{p}\big{)}\quad,\quad\boldsymbol{\Phi_{\text{Y}}^{\text{ p}}}=\{1,1,...,1,...,1\}\)\((\boldsymbol{\quad\big{|}\boldsymbol{\Phi_{\text{Y}}^{\text{p}}}\big{|}=n_{g}\ ),\ \text{and}\ \ \boldsymbol{\Gamma_{\text{Y}}^{\text{p}}}= \boldsymbol{\Gamma^{\text{p}}}=\Big{(}\gamma_{1}^{p},\gamma_{2}^{p},..., \gamma_{i}^{p},...,\gamma_{n_{g}}^{p}\Big{)}\.\) In each iteration, the best child node is chosen using UCB. Then, the algorithm expands the search tree and conducts a simulation. At last, MCTS method utilizes the received reward \(r\) to update the information from the leaf node to root node. Eq. (8) uses the maximum value of reward \((r_{q+1}^{p})_{M}\) to calculate UCB. \(q\) is the layer number. The reason for choosing \((r_{q+1}^{p})_{M}\) is that the aim of this problem is to determine optimal solution, and the average of reward \(\sum r_{q+1}^{p}/N_{q+1}^{p}\) should not be considered.
\[\mathcal{U}_{q+1}^{p}=(r_{q+1}^{p})_{M}+\sqrt{\frac{2lnN_{\text{Y}}^{p}}{N_{q+ 1}^{p}}} \tag{8}\]
After performing a certain number of iterations, \((r_{q+1}^{p})_{M}\) is employed to determine
the optimal action for current state \(\mathbf{(\Omega^{p},\Lambda^{p},\Phi^{p},\Gamma^{p})}\) by \(\max\left\{\left(r_{q+1}^{p}\right)_{1},\left(r_{q+1}^{p}\right)_{2},...,\left( r_{q+1}^{p}\right)_{v},...,\left(r_{q+1}^{p}\right)_{\epsilon^{p}}\right\}\) when choosing nodal coordinate and \(\max\left\{\left(r_{q+1}^{p}\right)_{1},\left(r_{q+1}^{p}\right)_{2},..., \left(r_{q+1}^{p}\right)_{v},...,\left(r_{q+1}^{p}\right)_{v}\right\}\) when selecting member area. After state transition, state \(\mathbf{(\Omega^{p}_{\Xi},\Lambda^{p}_{\Xi},\Phi^{p}_{Y},\Gamma^{p}_{Y})}\) is converted into a new state \(\mathbf{(\Omega^{p}_{\Xi+1},\Lambda^{p}_{\Xi+1},\Phi^{p}_{Y},\Gamma^{p}_{Y})}\) when the action is to select a nodal coordinate in \(\mathbf{\mathcal{H}^{p}_{w}}\), and state \(\mathbf{(\Omega^{p}_{n_{\sigma}},\Lambda^{p}_{n_{\sigma}},\Phi^{p}_{\xi},\Gamma^{ p}_{\xi})}\) is converted into a new state \(\mathbf{(\Omega^{p}_{n_{\sigma}},\Lambda^{p}_{n_{\sigma}},\Phi^{p}_{\xi+1},\Gamma^{ p}_{\xi+1})}\) when the action is to choose a member area in \(\mathbf{D^{p}_{i}}\). When state \(\mathbf{(\Omega^{p}_{n_{\sigma}},\Lambda^{p}_{n_{\sigma}},\Phi^{p}_{n_{\sigma}}, \Gamma^{p}_{n_{\sigma}})}\) is reached, the algorithm completes a round. Moreover, the final state for nodal coordinate \(\mathbf{\overline{\Lambda^{p}}}=\mathbf{\Lambda^{p}_{n_{\sigma}}}\) and member area \(\mathbf{\overline{\Gamma^{p}}}=\mathbf{\Gamma^{p}_{n_{g}}}\) for round \(p\) is determined.
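Eq. (8) differs from the standard UCT rule only in using the best reward collected through a child instead of the average. A minimal selection routine consistent with it could look as follows; the `(r_max, visits)` data layout is illustrative, not taken from the paper.

```python
import math

def ucb_select(children, N_root):
    """Pick the child maximising Eq. (8): best reward seen so far plus an exploration bonus."""
    def ucb(stats):
        r_max, visits = stats
        if visits == 0:                 # unvisited children are explored first (assumption)
            return float("inf")
        return r_max + math.sqrt(2 * math.log(N_root) / visits)
    return max(range(len(children)), key=lambda i: ucb(children[i]))
```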
With more iterations, MCTS can give a better MDP decision. However, the efficiency of UMCTS is also important. To accelerate the UMCTS, Eq. (9) is proposed to determine the number of iterations \(I^{p}_{q}\) for layer \(q\) in round \(p\). \(\Pi\) is the product notation to indicate repeated multiplication.
\[\begin{array}{ll}\text{For root node }\Upsilon&I^{p}_{Y}=6\varpi\times \left[\log_{10}(\left\lceil\prod_{\Xi=1}^{n_{\sigma}}\left|\mathbf{\mathcal{H}^{ p}_{\Xi}}\right|\times\prod\nolimits_{\xi=1}^{n_{g}}\left|\mathbf{D^{p}_{\xi}} \right|)\right]\\ \text{For layer }q\text{ }(q<n_{\sigma})&I^{p}_{q}=3\varpi\times\left[\log_{10}( \left\lceil\prod_{\Xi=q+1}^{n_{c}}\left|\mathbf{\mathcal{H}^{p}_{\Xi}}\right| \times\prod\nolimits_{\xi=1}^{n_{g}}\left|\mathbf{D^{p}_{\xi}}\right|)\right]\\ \text{For layer }q\text{ }(q=n_{\sigma})&I^{p}_{q}=3\varpi\times\left[\log_{10}( \left\lceil\prod_{\xi=1}^{n_{g}}\left|\mathbf{D^{p}_{\xi}}\right|)\right\right]\\ \text{For layer }q\\ (n_{\sigma}<q<\varpi)&I^{p}_{q}=3\varpi\times\left[\log_{10}(\left\lceil\prod _{\xi=q+1-n_{\sigma}}^{n_{g}}\left|\mathbf{D^{p}_{\xi}}\right|)\right]\end{array} \tag{9}\]
Figure 5: Criterion of action selection for given state in the UMCTS.
Figure 6: Search tree for sizing and layout optimization problem.
## 4 Illustrative examples
To demonstrate the validity and efficiency of UMCTS in sizing and layout optimization of truss structures, four classical examples documented in the literature are solved in this study. The time efficiency is tested against a cantilever planar truss structure. The results are presented and compared to the Basic Open-source Nonlinear Mixed Integer
Figure 7: Determination of optimal action for sizing and layout optimization problem.
programming (BONMIN) method, which is BB algorithm using interior point method and COIN-OR branch and cut [37]. Moreover, 15-bar planar truss, 25-bar spatial truss, and 39-bar spatial truss are used to investigate convergence history, solution accuracy, and solution stability of the proposed method. The UMCTS algorithm and the direct stiffness method for the analysis of truss structures are coded in Python programming software. The computations were carried out with Intel Core i7 2.50 GHz processor and 40 GB memory.
### Problem description
The four test problems, including a cantilever planar truss structure, a 15-bar planar truss, a 25-bar spatial truss, and a 39-bar spatial truss are introduced in this part.
The schematic diagram of cantilever planar truss structure is shown in Fig. 8. The total number of sizing variables \(n_{g}\) are 10, 15, 20, 25, and 30. The total number of layout variables \(n_{c}\) are 8, 12, 16, 20, and 24. The material density \(\rho\) is 0.1 lb/in\({}^{3}\). The modulus of elasticity \(E\) is 10,000 ksi. A concentrated load of 50 kips is applied on the bottom-right corner of the truss in the negative Y-direction. The truss members are subjected to stress limitations of \(\pm\)25 ksi. The relationship of the allowable displacement \(u_{\text{m,M}}\) and the total number of sizing variables \(n_{g}\) is given as
\[u_{\text{M}}=\begin{cases}n_{g}/5&n_{g}=10,15,20,25\\ 2n_{g}/5&n_{g}=30\end{cases},u_{\text{m}}=-u_{\text{M}}. \tag{10}\]
The cross-sectional areas must be selected from the discrete profile list \(\mathbf{D=}\{2,\,4,\) 6,..., 16, 18, 20\(\}\) in\({}^{2}\). Side constraints for layout variables are \(\ 100\) in \(\leq x_{2}\leq 140\) in, \(220\) in \(\leq x_{3}\leq 260\) in,..., \(\ (24n_{g}-20)\) in \(\leq x_{n_{g}/5+1}\leq(24n_{g}+20)\) in, \(\ 100\) in \(\leq x_{n_{g}/5+3}\leq 140\) in, \(\ 220\) in \(\leq x_{n_{g}/5+4}\leq 260\) in,...., \(\ (24n_{g}-20)\) in \(\leq x_{2n_{g}/5+2}\leq(24n_{g}+20)\) in, \(\ \ \ \ \ \ \ \ \ \ \ -20\) in \(\leq y_{1},\ y_{2},...,y_{n_{g}/5+1}\leq 20\) in, \(\ \ \ \ \ \ \ \ \ \ 100\) in \(\leq y_{n_{g}/5+3}\), \(y_{n_{g}/5+4}\),...,\(y_{2n_{g}/5+2}\leq 140\) in
The geometry and loading condition of the 15-bar planar truss is shown in Fig. 9. The material density \(\rho\) and the modulus of elasticity \(E\) of this example are 0.1 lb/in\({}^{3}\) and 10,000 ksi, respectively. A concentrated load of 10 kips acting in the negative Y-direction at node 8 is applied. The truss members are subjected to stress limitations of \(\pm\)25 ksi. The \(x\) and \(y\) coordinates of joints 2, 3, 6, and 7 are allowed to vary, and joints 6 and 7 being constrained to have the same \(x\) coordinates as joints 2 and 3. Joints 4 and 8 are allowed to move only in Y-direction. This optimization problem includes 15 sizing variables (cross-sectional area of members) and 8 layout variables (\(x_{2}=x_{6}\); \(x_{3}=x_{7}\); \(y_{2}\); \(y_{3}\); \(y_{4}\); \(y_{6}\); \(y_{7}\); \(y_{8}\)). The cross-sectional areas must be chosen from the discrete profile list \(\mathbf{D=}\{0.111,\,0.141,\,0.174,\,0.220,\,0.270,\,0.287,\,0.347,\,0.440,\,0.539, 0.954,\,1.081,\,1.174,\,1.333,\,1.488,\,1.764,\,2.142,\,2.697,\,2.800,\,3.131, \,3.565,\,3.813,\,4.805,\,5.952,\,
6.572, 7.192, 8.525, 9.300, 10.850, 13.330, 14.290, 17.170, 19.180\(\mathrm{\SIUnitSymbolMicro}\) in\(\mathrm{\SIUnitSymbolDegree}\). Side constraints for layout variables are 100 in \(\leq x_{2}\leq 140\) in \(\leq x_{3}\leq 260\) in \(\leq y_{2}\leq 140\) in \(\leq y_{3}\leq 140\) in \(\leq y_{3}\leq 140\) in \(\leq y_{4}\leq 90\) in \(\leq y_{6}\leq 20\) in, \(-20\) in \(\leq y_{7}\leq 20\) in, and \(20\) in \(\leq y_{8}\leq 60\) in.
A 25-bar spatial truss is considered as shown in Fig. 10. The material density \(\rho\) is 0.1 lb/in\(\mathrm{\SIUnitSymbolDegree}\). The modulus of elasticity \(E\) is 10,000 ksi. The loading applied to the structure are listed in Table 1. All members in the truss are constrained to a stress limit of 40 ksi in both tension and compression. Moreover, nodal displacements in any direction for each node must not exceed 0.35 in. The 25 truss members are linked into eight groups, as follows: (1) \(A_{1}\), (2) \(A_{2-}\)\(A_{5}\), (3) \(A_{6-}\)\(A_{9}\), (4) \(A_{10-}\)\(A_{11}\), (5) \(A_{12-}\)\(A_{13}\), (6) \(A_{14-}\)\(A_{17}\), (7) \(A_{18-}\)\(A_{21}\), and (8) \(A_{22-}\)\(A_{25}\). The \(x\), \(y\), and \(z\) coordinates of joints 3, 4, 5, and 6 are allowed to vary, while the position of joints 1 and 2 remains unchanged. There are 13 independent design variables in this optimization problem, including 8 sizing variables and 5 layout variables (\(x_{4}=x_{5}=-x_{3}=-x_{6}\); \(y_{3}=y_{4}=-y_{5}=-y_{6}\); \(z_{3}=z_{4}=z_{5}=z_{6}\); \(x_{8}=x_{9}=-x_{7}=-x_{10}\); \(y_{7}=y_{8}=-y_{9}=-y_{10}\) ). Cross-sectional areas must be selected from the discrete profile list \(\mathbf{D=}\{0.1,\,0.2,\,0.3,\ldots,\,2.4,\,2.5,\,2.6,\,2.8,\,3.0,\,3.2,\,3.4\}\) in\(\mathrm{\SIUnitSymbolDegree}\). Side constraints for layout variables are 20 in \(\leq x_{4}\leq 60\) in, 40 in \(\leq y_{3}\leq 80\) in, 90 in \(\leq z_{3}\leq 130\) in, 40 in \(\leq x_{8}\leq 80\) in, and 100 in \(\leq y_{7}\leq 140\) in.
A triangular tower with 39 members is considered as shown in Fig. 11. The coordinates of three bottom nodes 1, 2, and 3 and three top nodes 13, 14, and 15 are shown in Table 2. The material density \(\rho\) is 0.28 lb/in\(\mathrm{\SIUnitSymbolDegree}\). The modulus of elasticity \(E\) is 30457.92 ksi. A concentrated force of 2.248 kips acting in the negative Y-direction at the three top nodes is applied. The maximum stress in all members must not exceed 34.81 ksi. The maximum displacements of all the nodes in X, Y, and Z directions must not exceed 0.16 in. The member grouping is as follows: (1) \(A_{1-}\)\(A_{3}\), (2) \(A_{4-}\)\(A_{6}\), (3) \(A_{7-}\)\(A_{9}\), (4) \(A_{10}\)- \(A_{12}\), and (5) \(A_{13-}\)\(A_{39}\). This optimization problem includes 5 sizing variables and 6 layout variables (\(y_{4};y_{7};y_{10};z_{4};z_{7};z_{10}\)). Sizing variables are chosen from the discrete profile list \(\mathbf{D=}\{0.02,\,0.04,\,0.06,\,\ldots,\,1.98,\,2.00,\,2.0\}\) in\(\mathrm{\SIUnitSymbolDegree}\). Side constraints for layout variables are \(\mathbf{11.02\,\,in\,\leq y_{4}\leq 39.37\,\,in\,\,\,,\,\,\,\,11.02\,\,\,in\,\leq y_{7}\leq 39.37\,\,in\,\,\,,\,\,\,\,\,11.02\,\,\,in\,\leq y_{10}\leq 39.37\,\,in\,\,,\,\,\,\,\,0\,\,\,in\leq z_{4}\leq 78.74\,\,\,in\,\,,\,\,\,\,39.37\,\,in\,\leq z_{7}\leq 118.11\,\,\,in\,\,\leq y_{7}\leq 1 8.74\,\,\,in\,\leq z_{10}\leq 157.48\,\,\,in\,\,\,\,\,\,15.748\,\,\,in\,\,\,\,\,\,15.748\,\,\,in\,\,\,\,\, 15.748\,\,\,in\,\,\,\,\,15.748\,\,\,in\,\,\,\,\,15.748\,\,\,in\,\,\,\,\,15.748\,\,\, \,\,15.748\,\,\,in\,\,\,\,\,15.748\,\,\,\,in\,\,\,\,15.748\,\,\,\,in\,\,\,\,\,15.748\,\,\, \,\,15.748\,\,\,\,in\,\,\,\,\,15.748\,\,\,\,in\,\,\,\,15.748\,\,\,\,\,15.748\,\,\, \,\,15.748\,\,\,\,\,15.748\,\,\,\,\,15.748\,\,\,\,\,15.748\,\,\,\,\,15.748\,\,\,\, \,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\, \,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\, \,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\, \,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\, \,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\, \,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\, \,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\, \,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\, \,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,\,15.748\,\,\,15.748\,\,\,\,15.
Figure 8: Schematic of the cantilever planar truss structure.
Figure 10: Schematic of the 25-bar spatial truss structure.
Figure 9: Schematic of the 15-bar planar truss structure.
### Comparison the validity between UMCTS and AlphaTruss: 25-bar spatial truss
In this section, 25-bar spatial truss shown in Fig. 10 is employed to compare the validity between UMCTS and AlphaTruss proposed by Luo et al. [38]. AlphaTruss is a two-stage MCTS-based RL method. In Stage I, this algorithm manages these continuous variables by uniformly discretizing these variables. The purpose of Stage 2 is to adjust
\begin{table}
\begin{tabular}{r r r r r} \hline \hline Node & \multicolumn{3}{c}{Loads (kips)} \\ \cline{2-5} & \(F_{X}\) & \(F_{Y}\) & \(F_{Z}\) \\ \hline
1 & 1.0 & \(-10.0\) & \(-10.0\) \\
2 & 0.0 & \(-10.0\) & \(-10.0\) \\
3 & 0.5 & \(0.0\) & \(0.0\) \\
6 & 0.6 & \(0.0\) & \(0.0\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Load case for the 25-bar spatial truss.
\begin{table}
\begin{tabular}{r r r r r r r} \hline \hline Bottom nodes & \multicolumn{6}{c}{Top nodes} \\ \hline Number & \(x\) (in) & \(y\) (in) & \(z\) (in) & Number & \(x\) (in) & \(y\) (in) & \(z\) (in) \\ \hline
1 & 0 & 39.37 & 0 & 13 & 0 & 11.02 & 157.48 \\
2 & \(-34.1\) & \(-19.69\) & 0 & 14 & \(-9.55\) & \(-5.51\) & 157.48 \\
3 & 34.1 & \(-19.69\) & 0 & 15 & 9.55 & \(-5.51\) & 157.48 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Nodal coordinates of bottom and top nodes of the 39-bar spatial truss structure.
Figure 11: Schematic of the 39-bar spatial truss structure.
the node locations and cross-sectional areas of the member generated in Stage 1 to improve the layout design. For sizing and layout optimization problems, nodal coordinate can be any continuous value between predefined lower and upper bounds. Therefore, nodal coordinate calculated by UMCTS and AlphaTruss for each round is compared to prove the accuracy of the UMCTS algorithm. Fig. 12 and Fig. 13 is the nodal coordinate in each round for UMCTS and AlphaTruss. It can be seen that AlphaTruss can be trapped into local optimum. However, UMCTS method can reach a global optimum or near-global optimum.
times faster than BONMIN.
### Solution accuracy
Three benchmark truss structures in sizing and layout optimization problems, including 15-bar planar truss, 25-bar spatial truss, and 39-bar spatial truss are employed to validate the solution accuracy by comparing the results which have been previously investigated by other researchers.
Tables 4-6 demonstrate the comparison of optimal design results. It is shown that the UMCTS outperforms all metaheuristic algorithms in terms of the lightest weight. Optimal layout after the optimization with the UMCTS for 15-bar planar truss, 25-bar spatial truss, and 39-bar spatial truss are given in Figs. 14-16. Figs. 17-19 indicate that the optimal solution assessed by the UMCTS does not violate both normal stresses and nodal displacement constraints.
\begin{table}
\begin{tabular}{r r r r r r r} \hline No & Design & Traditional & PSO & TLBO & DE & This study \\ & Variable & method & & & (UMCTS) \\ \hline
1 & \(y_{4}\) & 31.89 & 31.10 & 35.43 & 36.22 & 38.58 \\
2 & \(z_{4}\) & 46.85 & 52.36 & 53.15 & 21.26 & 27.17 \\
3 & \(y_{7}\) & 25.59 & 16.54 & 27.17 & 31.50 & 37.80 \\
4 & \(z_{7}\) & 86.61 & 114.96 & 90.94 & 85.04 & 85.04 \\
5 & \(y_{10}\) & 18.50 & 14.17 & 18.90 & 20.08 & 31.10 \\
6 & \(z_{10}\) & 121.65 & 125.59 & 129.92 & 134.25 & 118.90 \\
7 & \(A_{1}\) – \(A_{3}\) & 1.70 & 2.02 & 1.86 & 2.02 & 1.96 \\
8 & \(A_{4}\) – \(A_{6}\) & 1.34 & 1.48 & 1.72 & 2.00 & 1.76 \\
9 & \(A_{7}\) – \(A_{9}\) & 1.04 & 1.28 & 1.22 & 1.40 & 1.06 \\
10 & \(A_{10}\) – \(A_{12}\) & 0.64 & 0.46 & 0.42 & 0.42 & 0.40 \\
11 & \(A_{13}\) – \(A_{39}\) & 0.68 & 2.02 & 0.38 & 0.24 & 0.28 \\ Weight (lb) & & 447.94 & 376.13 & 339.80 & 309.42 & 299.23 \\ \hline \end{tabular}
\end{table}
Table 6: Comparison of optimal designs for the 39-bar spatial truss structure.
\begin{table}
\begin{tabular}{r r r r r r r r} \hline No & Design & GA & PSO & FA & DE & ABC & This study \\ & Variable & & & & & (UMCTS) \\ \hline
1 & \(x_{4}\) & 41.07 & 27.62 & 37.32 & 36.83 & 36.21 & 37.25 \\
2 & \(y_{3}\) & 53.47 & 51.62 & 55.74 & 58.53 & 54.64 & 54.92 \\
3 & \(z_{3}\) & 124.60 & 129.91 & 126.62 & 122.67 & 129.96 & 129.38 \\
4 & \(x_{8}\) & 50.80 & 42.55 & 50.14 & 49.21 & 52.07 & 51.25 \\
5 & \(y_{7}\) & 131.48 & 132.72 & 136.40 & 136.74 & 139.98 & 137.50 \\
6 & \(A_{1}\) & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 \\
7 & \(A_{2}\) – \(A_{5}\) & 0.2 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 \\
8 & \(A_{6}\) – \(A_{9}\) & 1.1 & 1.1 & 0.9 & 0.9 & 1.0 & 1.0 \\
9 & \(A_{10}\) – \(A_{11}\) & 0.2 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 \\
10 & \(A_{12}\) – \(A_{13}\) & 0.3 & 0.4 & 0.1 & 0.1 & 0.1 & 0.1 \\
11 & \(A_{14}\) – \(A_{17}\) & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 \\
12 & \(A_{18}\) – \(A_{21}\) & 0.2 & 0.4 & 0.1 & 0.1 & 0.1 & 0.1 \\
13 & \(A_{22}\) – \(A_{25}\) & 0.9 & 0.7 & 1.0 & 1.0 & 0.9 & 0.9 \\ Weight (lb) & & 136.20 & 129.21 & 118.83 & 118.76 & 117.33 & 116.75 \\ \hline \end{tabular}
\end{table}
Table 5: Comparison of optimal designs for the 25-bar spatial truss structure.
Figure 16: 39-bar spatial truss structure: comparison of the optimized layout with the initial configuration of the truss.
Figure 14: 15-bar planar truss structure: comparison of the optimized layout with the initial configuration of the truss.
Figure 15: 25-bar spatial truss structure: comparison of the optimized layout with the initial configuration of the truss.
initial configuration of the truss.
### Solution stability
The statistical results for 15-bar truss, 25-bar truss, and 39-bar truss are obtained through 10 independent sampling to test the stability of this algorithm. The results are shown in Table 7, including the best, the worst, average, and standard deviation. For
Figure 19: Constraints boundaries assessed at the optimal design of 39-bar spatial truss structure by the UMCTS for (a) displacement constraints (b) stress constraints.
Figure 17: Constraints boundaries assessed at the optimal design of 15-bar planar truss structure by the UMCTS.
Figure 18: Constraints boundaries assessed at the optimal design of 25-bar spatial truss structure by the UMCTS for (a) displacement constraints (b) stress constraints.
25-bar truss, the best, the worst, average, and standard deviation are 75.83 lb, 80.23 lb, 78.62 lb, and 1.31. For 25-bar truss, the best, the worst, average, and standard deviation are 117.20 lb, 125.46 lb, 119.57 lb, and 2.25. For 39-bar truss, the best, the worst, average, and standard deviation are 299.43 lb, 305.08 lb, 300.38 lb, and 1.37.
## 5 Conclusion
For the accuracy and efficiency of UMCTS algorithm for sizing optimization problems, the attention of this paper is to apply this method to combined sizing and layout optimization for truss structures. In this paper, the novel update process for nodal coordinate is proposed, and there are two characteristics of this method: (1) The allowed range of each coordinate is narrowed down in each round. (2) Accelerators for the number of entries in allowed range and iteration number are presented to reduce computing time. Moreover, nodal coordinate and member area is determined simultaneously using only one search tree for each round.
Various benchmark examples including cantilever planar truss, 15-bar planar truss, 25-bar spatial truss, and 39-bar spatial truss are employed to investigate the proposed methodology. These examples indicate that this method can successfully minimize the weight of the truss structures while satisfying design constraints. The CPU time of the UMCTS is two times faster than the BB method. Moreover, the numerical results derived from this study are compared with other metaheuristic optimization methods. It is obviously seen that the UMCTS can stably attain the optimal solutions lighter than the other metaheuristic algorithms. The proposed algorithm is not only limited to optimizing truss structures, but it can also be used in other structural optimization problems including shell, plate, frame, and solid structures.
| この研究の主な目的は、サイズと配置の変数を同時に考慮したトラス構造の最適な設計を見つけることです。純粋なサイズ最適化問題と比較して、この問題の方がより複雑なため、2種類の変数に関わる性質が根本的に異なります。この論文では、サイズ最適化問題を解決するために、更新プロセスとモンテカルロ木探索を組み合わせた強化学習法である更新モンテカルロ木探索 (UMCTS) を適用しました。この研究では、節点座標に対して新しい更新プロセスを提案します。この更新プロセスには、以下の2つの特徴があります。 (1) それぞれの座標の許容範囲は、各ラウンドで異なります。 (2) 許容範囲の入力数とイテレーション数を増やすための加速器を導入することで、計算時間を削減します。さらに、各ラウンドで1つの検索木を用いて、節点座標とメンバー面積を同時に決定します |
2309.04022 | Improving the Accuracy of Beauty Product Recommendations by Assessing
Face Illumination Quality | We focus on addressing the challenges in responsible beauty product
recommendation, particularly when it involves comparing the product's color
with a person's skin tone, such as for foundation and concealer products. To
make accurate recommendations, it is crucial to infer both the product
attributes and the product specific facial features such as skin conditions or
tone. However, while many product photos are taken under good light conditions,
face photos are taken from a wide range of conditions. The features extracted
using the photos from ill-illuminated environment can be highly misleading or
even be incompatible to be compared with the product attributes. Hence bad
illumination condition can severely degrade quality of the recommendation.
We introduce a machine learning framework for illumination assessment which
classifies images into having either good or bad illumination condition. We
then build an automatic user guidance tool which informs a user holding their
camera if their illumination condition is good or bad. This way, the user is
provided with rapid feedback and can interactively control how the photo is
taken for their recommendation. Only a few studies are dedicated to this
problem, mostly due to the lack of dataset that is large, labeled, and diverse
both in terms of skin tones and light patterns. Lack of such dataset leads to
neglecting skin tone diversity. Therefore, We begin by constructing a diverse
synthetic dataset that simulates various skin tones and light patterns in
addition to an existing facial image dataset. Next, we train a Convolutional
Neural Network (CNN) for illumination assessment that outperforms the existing
solutions using the synthetic dataset. Finally, we analyze how the our work
improves the shade recommendation for various foundation products. | Parnian Afshar, Jenny Yeon, Andriy Levitskyy, Rahul Suresh, Amin Banitalebi-Dehkordi | 2023-09-07T21:29:21 | http://arxiv.org/abs/2309.04022v1 | # Improving the Accuracy of Beauty Product Recommendations by Assessing Face Illumination Quality
###### Abstract.
We focus on addressing the challenges in responsible beauty product recommendation, particularly when it involves comparing a product's color with a person's skin tone, as for foundation and concealer products. To make accurate recommendations, it is crucial to infer both the product attributes and the product-specific facial features such as skin conditions or tone. However, while many product photos are taken under good light conditions, face photos are taken under a wide range of conditions. The features extracted from photos taken in an ill-illuminated environment can be highly misleading or even incompatible with the product attributes. Hence, a bad illumination condition can severely degrade the quality of the recommendation.
We introduce a machine learning framework for **illumination assessment** which classifies images as having either a good or a bad illumination condition. We then build an automatic user guidance tool which informs a user holding their camera whether their illumination condition is good or bad. This way, the user is provided with rapid feedback and can interactively control how the photo is taken for their recommendation. Only a few studies are dedicated to this problem, mostly due to the lack of a dataset that is large, labeled, and diverse both in terms of skin tones and light patterns. The lack of such a dataset leads to neglecting skin tone diversity. Therefore, we begin by constructing a diverse synthetic dataset that simulates various skin tones and light patterns in addition to an existing facial image dataset. Next, we train a Convolutional Neural Network (CNN) for illumination assessment that outperforms the existing solutions using the synthetic dataset. Finally, we analyze how our work improves the **shade recommendation** for various foundation products.
## 1. Introduction
Many beauty product recommendation systems require information on both the products and customers' preferences (Sutton et al., 2016). Traditionally, customers' preferences are collected through series of questions and answers. However, many customers do not know how to answer such specialized beauty-related questions, and the recommendation produced may not feel worth the time they spent (Krishnan et al., 2017). On the other hand, predictive models to infer information about the customers may not lead to accurate results. An alternative route to reduce the barrier for beauty product recommendation is to use camera images as input and provide skin care or makeup recommendations based on the facial features estimated from the images. However, it is challenging to extract facial features from photos unless they are taken under a good illumination (Sutton et al., 2016) condition. The user may not be paying attention to the illumination condition while taking photos, but the photos taken under poor illumination conditions highly impact how face features such as skin conditions, makeup, tone, etc. are inferred. See Figure 1. The products recommended based on conflicting or incorrect feature values resulted from a set of poorly illuminated face photos may even provide a recommendation that is worse than recommending random products.
While tracking the face illumination condition is highly important for improving recommendations, only a few studies have addressed this problem, resulting in few automatic models being available for this purpose (Sutton et al., 2016). Our approach is to apply an automatic model that directly informs the user holding a camera whether the light condition is good or bad, which allows the user to be selective about which photo is used for the recommendation. Specifically, the tool displays the outcome of the illumination assessment model (either good or bad) to the user holding their camera, which encourages them to interactively change the camera position or environment until the light condition becomes good. One alternative approach is to take any image from the user and attempt to correctly infer features (Sutton et al., 2016). However, some studies do not consider images taken under poor illumination conditions or require images that meet special requirements, such as holding a color calibration card. It is also unclear how much feature inference varies if differently illuminated photos of the same person are used by the method. While these existing solutions may reduce the time needed for the user to interact with their camera, our approach gives the user
Figure 1. Effect of using photos under different illumination conditions on shade recommendation for foundation and concealer products. Only well-illuminated face photo (middle) results in recommending medium range shades as intended. The same user is given conflicting recommendations in the other two cases. Synthetic face images are used because of privacy reasons and having more control over illumination/ skin tone variations.
direct control over how their photo is illuminated. Moreover, showing the output of the model and allowing interactivity may help increase trustworthiness [1; 14].
### Related Work
One of the studies on face illumination quality assessment is conducted by Sellahewa and Jassim [29]. Their method, however, is limited by requiring a reference image which reduces the applicability of their approach. Another method by Truong _et al._[33] approaches illumination assessment by averaging over image partitions. This technique is too simple to account for diverse illumination scenarios. Similar pixel averaging is also used in a recent study by Hernandez-Ortega _et al._[13]. In another study by Terhorst _et al._[32], pixel-level face quality is considered rather than image-level, which is considered in our study.
Many other prior studies have focused on face quality assessment in general, i.e., considering different factors such as head pose, noise, sharpness, and illumination. One such study was performed by Chen _et al._[5], to select images of high quality for the downstream task of face recognition. This group of studies, however, is not able to provide an explicit signal about a bad illumination condition, which is the main goal of this study.
One of the challenges that has prevented the development of an automatic illumination assessment model is the lack of a large dataset of faces under various lighting conditions. Existing face datasets are mostly under neutral illumination, or captured in-the-wild with uncontrolled illumination. The ones that include images of various lighting conditions are either not labeled or limited to a small number of subjects. To fill this gap, Zhang _et al._[35] constructed a large-scale dataset of face images with synthetic illumination patterns, labeled based on their quality. To construct the dataset, one subject took photos at various lighting conditions. These patterns were then transferred to existing face datasets such as YaleB [11], PIE [31], and FERET [27]. To transfer light patterns, an illumination transfer algorithm through edge-preserving filters proposed in [6] was used. This technique however works under the assumption of similar face shape between source and target, it requires frontal faces, and fails under intense lighting conditions. Another drawback of the study by Zhang _et al._[35] is that their illumination assessment model fails when fed with dark skin tone faces and classifies most of them as bad lighting. As discussed in a study by Babnik and Struc, face quality assessment models highly favor light-colored faces [2]. Disentangling skin tone and illumination is still an open problem [9], and machine learning models are not able to assess illumination quality unless they are either trained on an inclusive and representative dataset, or fed with background information [9]. Asking the users to include background in their photos, however, results in low resolution faces, for most of the mobile devices, reducing the performance of the consequent face processing applications.
### Contributions
Achieving a representative dataset is costly and time-consuming. For this reason, in this work, we propose to first increase the diversity of a face dataset by simulating light to dark skin tones. Simulating skin tones has the benefit of increasing the diversity without collecting such data. Consequently, 200 illumination patterns, including good and bad ones, are defined and transferred to the dataset using the Deep Single-Image Portrait Relighting (DPR) model [36]. This model is able to transfer a target illumination to a source portrait image. Unlike Reference [6], DPR is not restricted to frontal and same-shape source and reference faces, and it can handle harsh lighting. Its limitation, however, is that light and skin tone are not perfectly disentangled and some skin color is transferred along with the illumination. Therefore, for each skin tone, we transfer illumination defined on the same tone. The constructed dataset is then used to train a MobileNet-v2 [28] model to classify faces as good/bad illuminated.
The proposed illumination assessment model is used in improving beauty product recommendation based on user's face photo. We illustrate how the retrieved product shade is significantly different than the shade we should retrieve based on the real face features such as skin tone, under proper illumination.
The rest of this paper is organized as follows: our proposed framework, including illumination assessment and beauty product recommendation, is described in Section 2, followed by the experiments in Section 3. Finally, Section 4 concludes the study.
## 2. The Proposed Framework
Our proposed framework, explained in the following sections, includes face illumination assessment, and using the properly illuminated face, for beauty product recommendation.
### Face Illumination Assessment
Our face illumination assessment workflow is illustrated in Fig. 2. At the first step, a 3D modelling software is used to create models of different skin tone and illumination patterns, which are labeled based on their quality for the downstream tasks such as extracting face features. The skin tones, light patterns and their associated labels are consequently transferred to an in-house dataset of 1000 frontal faces, mostly under neutral illumination. Finally a MobileNet-v2 is trained on the constructed dataset to predict two classes of good and bad illumination. These steps are described in more detail in the following subsections.
#### 2.1.1. Generating synthetic examples of skin tones and illumination patterns
We used a 3D modelling software to generate models of light and dark skin tones as well as 200 illumination patterns, labeled based on their quality. The skin tones and patterns are all transferred to the in-house dataset of 1000 faces. The labels for the constructed dataset directly come from the pattern labels, which means that the only manual labeling required for this study was to label the 200 light patterns generated by a 3D modelling software. Fig. 3 illustrates some examples for both light and dark skin tones.
The light patterns were generated by moving two spot light sources among the grid of coordinate values relative to a model of human. The grid covers situations where the light sources illuminate face from the side and/or are located above, below and in front of the person. The different skin-tones were obtained by linearly combining textures of light and dark-skinned models.
It is also worth noting that the 3D modelling software used in this study generates mostly unrealistic cartoon images as shown in Fig. 3. Therefore, these images cannot be directly used for training, and the consequent color matching and light transfer to a real face dataset is necessary. There are 3D modelling software available that can generate more realistic face images. Those software, however,
require more rendering time and computational resources, are limited to a few hundred human models, and acquiring their high-quality models is expensive.
#### 2.1.2. Color matching to simulate skin tones
To transfer skin tone from a source to a target image, we first segmented the skin area of the face, using the BiSeNet model introduced in Reference [34]. BiSeNet is a real time semantic segmentation architecture with two branches, the first of which captures low-level details, and the second captures high-level context. Segmenting the face skin area is followed by a histogram matching technique [8; 17]. To perform histogram matching, first the cumulative color histogram of the source and reference images, denoted by \(C_{\text{s}}\) and \(C_{r}\) are calculated, where source and reference refer to the face image and the skin tone model, respectively. Having the cumulative histogram functions, the output image \(\mathbf{I}_{o}\) for pixel \(p\) is derived from the input image \(\mathbf{I}\), as:
\[\mathbf{I}_{o}(p)=v_{r}\left(C_{r}^{-1}\left(C_{\text{s}}\left(\frac{\mathbf{I }(p)-\min(\mathbf{I})+1}{V}\right)\right)\right), \tag{1}\]
where \(C_{r}^{-1}\) acts as a reverse lookup on the histogram, \(V\) is the histogram bin width, and function \(v\) is defined as:
\[v(i)=\min(\mathbf{I})+(i-1)V. \tag{2}\]
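A minimal NumPy sketch of this per-channel histogram matching is given below; it uses empirical CDFs as a stand-in for the \(C_{\text{s}}\) and \(C_{r}^{-1}\) lookups and omits the skin segmentation step, so it is an approximation of the procedure rather than the exact implementation.

```python
import numpy as np

def match_channel(source, reference):
    """Map `source` values so that their empirical CDF follows the reference's
    (a simplified stand-in for the C_s / C_r^{-1} lookups of Eqs. (1)-(2))."""
    s_values, s_counts = np.unique(source.ravel(), return_counts=True)
    r_values, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    mapped = np.interp(s_cdf, r_cdf, r_values)      # reverse lookup on the reference CDF
    return np.interp(source.ravel(), s_values, mapped).reshape(source.shape)

rng = np.random.default_rng(0)
src = rng.integers(0, 256, size=(64, 64)).astype(float)   # e.g. one channel of the face skin pixels
ref = rng.normal(120, 30, size=(64, 64)).clip(0, 255)     # same channel of the skin-tone model
out = match_channel(src, ref)
```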
#### 2.1.3. DPR model to transfer illumination patterns
DPR is an hour-glass [24] architecture that takes a Spherical Harmonic (SH) lighting as the target illumination, and a portrait image as the source image. Consequently the target lighting is applied to the source image to generate the output. Besides the output image, DPR also generates the SH lighting parameters of the input portrait. Since in this study we did not have the SH lighting of the reference illumination patterns, we first fed all the reference images to DPR to get the associated SH lighting. These parameters are then used as target illumination to be applied to all the source images.
One limitation of the DPR model is that it cannot completely disentangle lighting and skin tone: in general, what we see in a face image is influenced by both the inherent skin tone and the environment illumination, and it is difficult to disentangle the two. In other words, using a light-tone reference for a dark-tone source image results in a non-realistic face. To tackle this, for light skin tone faces, illumination patterns are transferred from the light skin model, and for dark skin faces, patterns are transferred from the dark skin model.
#### 2.1.4. MediaPipe face detection and MobileNet-v2
In this study, we have used MediaPipe [20] solution for face detection, in order to omit irrelevant background information. MediaPipe face detection is built upon BlazeFace [3] which is a light-weight face detection architecture.
Finally, the constructed dataset is fed to a MobileNet-v2 model to classify faces as well or ill-illuminated. This architecture is selected for its efficiency to run on edge devices. To handle the class imbalance caused by there being more bad illumination patterns than good ones (in general, good illumination is essentially a single pattern, i.e., no shadow or over-exposure on the face), more weight is given to errors on well-illuminated faces in the cross-entropy loss function.
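A rough PyTorch sketch of this setup is shown below; the class-weight ratio, optimizer, and learning rate are illustrative assumptions rather than the values used in the study.

```python
import torch
import torch.nn as nn
from torchvision import models

# Two-class head on an ImageNet-pretrained MobileNet-v2.
model = models.mobilenet_v2(weights="IMAGENET1K_V1")
model.classifier[1] = nn.Linear(model.last_channel, 2)   # 0 = bad, 1 = good illumination

# Heavier penalty on errors for the minority "good illumination" class.
class_weights = torch.tensor([1.0, 3.0])                 # illustrative ratio
criterion = nn.CrossEntropyLoss(weight=class_weights)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```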
### Using Face Photos for Beauty Product Recommendation
While some beauty product recommendation does not require knowing facial features such as skin tone, it is difficult to recommend base
Figure 3. Examples of light and dark skin tone images with different illumination patterns.
Figure 2. Proposed framework. Light and dark skin tones are transferred to detected faces, followed by applying different illumination patterns. Original face photo from [4], used for illustration purposes only.
makeup products such as foundation and concealers without knowing such features. Foundation products in particular have an attribute called _shade_ which denotes the color of the product. A foundation product is offered in multiple shades, and _shade recommendation_ comprises of selecting the shade that best matches a customer's need. See Figure 1. For example, some customers may prefer a shade that is closest to their skin tone, while others prefer a shade that is either lighter/darker or warmer/cooler than their skin tone. Regardless of different preferences, the color difference between each shade and the customer's skin tone needs to be assessed in order to proceed with the shade recommendation. Many customers feel overwhelmed in selecting the shade that matches their preference, especially if they have to shop online. In this section, we will first describe how we assign an RGB value for each foundation shade, and how we compute _estimated skin tone_ from face photos. Lastly, we will introduce how we compare the shade color with the estimated skin tone. In Section 3.2, we will show how our illumination assessment improves the quality of the shade recommendation.
#### 2.2.1. Product Color and Facial Feature Estimation
Historically, the foundation shade range has not adequately reflected skin tone diversity, and only in recent years has this issue been actively corrected by the industry [15, 16, 30]. This also means that the shade distribution tends to be heavy on the lighter color spectrum, and a color inference model trained on an unmitigated training dataset can be prone to bias. Therefore, we applied the product color extraction framework shown in Fig. 4, using an unsupervised approach for each component. Specifically, we applied threshold-based background removal, extracted colors in a certain fixed range, and applied K-means to extract the most prominent brown color shown in the foundation product images. Using this approach, we obtained about 2000 shades of foundation products. Any error cases were examined and removed from the dataset. A similar color extraction framework was adapted for face images in estimating skin tones. That is, we remove the background and extract the color using the relevant segment of the photo, as shown in Fig. 4. We call this resulting color value the estimated skin tone. It is important to note that we do not claim that the estimated skin tone is the person's actual skin tone. We only infer the skin tone as it is represented by the RGB values in the photos in order to compare it with the product colors and produce a meaningful shade recommendation. Poor illumination conditions such as partial shadows can affect the value of the estimated skin tone, and different photos might even point to a wide range of estimated skin tones. Illumination assessment addresses this issue by guiding the user toward taking a photo under optimal illumination. In Section 3.2, we will show how the shade recommendation improves with our illumination assessment.
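The dominant-color step can be approximated with a K-means clustering as in the sketch below; the cluster count is an assumption, and the brown-range filtering and background removal described above are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def dominant_color(pixels, n_clusters=5):
    """Centroid of the largest K-means cluster among foreground RGB pixels.

    `pixels` is an (N, 3) array of RGB values remaining after background removal
    (and, for faces, skin segmentation); `n_clusters` is an illustrative choice."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    largest_cluster = np.bincount(km.labels_).argmax()
    return km.cluster_centers_[largest_cluster]
```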
#### 2.2.2. Color Comparison Using CIEDE2000 distance
To compare the RGB value representation of a foundation shade with estimated skin tone, _CIEDE2000 color difference_ was used [21]. This number assigns a distance between two colors in the scale of 0 to 100 and was developed to capture the perceptual color difference. CIEDE2000 had been tested via studies involving human observers [10, 18]. Heuristically, it is accepted that the distance less than 2 means two colors are very close to each other, and between 2 and 5 means the colors are similar, while greater than 10 means the colors start to be quite different to human eyes. We will use this metric to evaluate the color variance among the estimated skin tone from the set of all images and compare that with the variance among the images that the model classifies as good illuminated. Since the foundation shade range (brown hue) tends to span a small color space, the increase in variation results in recommendation that is not specific and may even be worse than a random guess. The details will be given in Section 3.2.
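As an illustration, the comparison can be computed with scikit-image's CIEDE2000 implementation; the RGB triples in the example are made up.

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def shade_distance(rgb_a, rgb_b):
    """CIEDE2000 distance between two sRGB colors given as 0-255 triples."""
    lab_a = rgb2lab(np.array(rgb_a, dtype=float).reshape(1, 1, 3) / 255.0)
    lab_b = rgb2lab(np.array(rgb_b, dtype=float).reshape(1, 1, 3) / 255.0)
    return float(deltaE_ciede2000(lab_a, lab_b)[0, 0])

# Distances below ~2 are near-identical, 2-5 similar, above ~10 clearly different.
print(shade_distance((200, 160, 130), (198, 158, 131)))
```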
## 3. Experiments
In this section, we first analyze the performance of the face illumination assessment model. Then, in Section 3.2, we show how it improves the accuracy of beauty product recommendation. Section 3.3 is dedicated to investigate impact of this study on customers.
Figure 4. Color Extraction Method. (a) Overview of how to extract representative color from images. (b) Product color is extracted by taking dominant brown color shown in the image after the color range extraction. The estimated skin tone is computed as the average value of the pixels in the face and the front neck area. (c) Examples of the colors extracted from the product images (the first two rows) and the synthetic face photos. Each row of the face photos is the same person but under different illumination patterns.
### Face illumination assessment
To train the proposed illumination assessment model, a synthetic dataset is constructed by transferring four skin tones (two light and two dark) to an in-house dataset of 1000 faces, followed by applying 200 illumination patterns to all the obtained face images. Therefore, the dataset consists of \(1000\times 4\times 200\) images, whose labels come from the labels defined for the 200 light patterns. Consequently 20% of the constructed dataset is set aside for validation.
We have trained our proposed model using PyTorch (Krizhevsky et al., 2014), with 20 epochs, a batch size of 64, and a cross-entropy loss function in which a higher penalty is given to the minority class, i.e., the well-illuminated case. MobileNet-v2 is pre-trained on ImageNet (Deng et al., 2015), and all the layers are fine-tuned using the constructed dataset, as we observed degraded performance when fine-tuning the classification layer only.
To test the proposed framework, we collected 674 face images at neutral lighting and bad lighting (where there is a visible shadow on the face). Having a dataset of real faces as the test set is of high importance as the model itself is trained on synthetic data, and it is crucial to validate its performance on real data.
In order to show the necessity of having illumination patterns defined on both light and dark skin people, we further designed another experiment. To this end, we asked a light skin tone subject to take photos at different locations (such as a living room, an office, and inside a car), light sources (lamp and window), weather conditions (cloudy and sunny), patterns (top, right, left, front, behind, and down), and times (morning, noon, and evening), resulting in 130 light patterns. These patterns are then transferred to the 1000 in-house faces to train a MobileNet-v2 model. Results of this model (referred to as the light skin model) are shown in Table 1, along with the results of the proposed model. Note that the higher specificity of the light skin model is due to the fact that this model classifies most of the dark skin tones as bad illuminated, which in turn results in a low sensitivity.
As shown in Table 1, in the second experiment, we compare the proposed framework with Reference (Krizhevsky et al., 2014), which trained a ResNet-50 (He et al., 2015) using its synthetic dataset. As can be observed from this table, the proposed framework outperforms the other two models. More importantly, while the two other models completely fail on dark skin tones (classifying most of them as bad-illuminated), ours is able to classify illumination on all skin tones.
### Color Analysis for Beauty Recommendation
In this section, we present two different color analyses to assess the significance of using well-illuminated photos.
#### 3.2.1. Variation among Estimated Skin Tones
We computed the estimated skin tone for 6 different synthetically generated models representing different parts of the Monk Skin Tone scale (Krizhevsky et al., 2014). Then the distance between the estimated skin tone of the best-illuminated photo and that of each of the remaining photos was computed. This computation measures how much the colors deviate from the color extracted under the ideal illumination condition. Table 2 summarizes the findings. The average distance is smaller for all 6 models if we only use the set of well-illuminated photos. The average color difference was greater than 10 for 5 out of 6 models if we only use the ill-illuminated images, which implies that the estimated skin tones from these have a perceptually noticeable difference from the color estimated from the best-illuminated photo. The examples in Figures 1 and 4 verify this: the same model can have significantly different estimated skin tones. The smaller difference for the well-illuminated photos shows that there are many ways for the illumination condition to be bad, while good illumination tends to introduce similar color patterns.
#### 3.2.2. Shade Recommendation Using Face Photos
In this experiment, we compute the distance between the shade and the estimated skin tone to simulate shade recommendation. The shade range varies by the product, and there is no industry standard on the shade range or the number of shades being offered. Therefore, we selected three products that represent different shade distributions. Product A has 39 shades of varying lightness with many medium range shades, Product B focuses on 12 deeper shades, and Product C focuses on 17 light to medium shades. To simulate the shade recommendation, we assumed that the user preference is to find the shade that is the closest to their skin tone, i.e. the color distance is minimized. Table 3 shows the number of shades within the distance threshold of 2 or 5. We observed that using ill-illuminated photos impacts the shade recommendation in number of ways. First, the number of shades being recommended can drastically change. For Model 4, there is no shade within distance 5 for Product B if we only use the good images. However, 11 out of 12 shades are less than distance 5 if we use the bad illuminated images for the feature estimation. Recommending 11 out of 12 shades would not provide any discriminative power, and may not be any better than randomly recommending one. In addition, if the goal is to recommend a product that has the model's shade, ill-illuminated photos may recommend products that are not intended for the the model's skin tone (Figure 5(a)). Next, there is a disagreement between the shade recommended by the well vs. ill-illuminated images. For Model 3, good images generate 3 distance \(<\) 5 matches for Product A while the bad images generate 4 such. However, only 1 shade overlaps between these, meaning the user may receive a conflicting recommendation depending on the illumination condition (Figure 5(b)). This implies that in the absence of illumination assessment - the input face photo can be either well or ill-illuminated - the variability in shade recommendation will be widened. This is reflected in the larger numbers for "All Photos" column in Table 3. It is also important to note that the product images tend to be taken under well-illuminated environment. Therefore, comparing the product color with the skin tone estimated
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Model \#} & \multicolumn{3}{c}{Metric} \\ \cline{2-4} & Accuracy & Sensitivity & Specificity \\ \hline Proposed model & **79.4\%** & **86.5\%** & 72\% \\ Reference (Krizhevsky et al., 2014) & 58.5\% & 18.1\% & **100\%** \\ Light skin model & 69.4\% & 60\% & 78.9\% \\ \hline \hline \end{tabular}
\end{table}
Table 1. Results of the illumination assessment experiment.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Model \# & Labeled "Good" & Labeled "Bad" & All Photos \\ \hline
1 & 8.34\(\pm\)4.4 & 17.93\(\pm\)4.8 & 16.63\(\pm\)3.7 \\
2 & 4.03\(\pm\)2.1 & 8.63\(\pm\)3.0 & 8.01\(\pm\)4.3 \\
3 & 7.03\(\pm\)4.4 & 18.09\(\pm\)8.9 & 16.60\(\pm\)9.2 \\
4 & 7.17\(\pm\)4.5 & 18.41\(\pm\)8.5 & 16.89\(\pm\)8.9 \\
5 & 8.44\(\pm\)5.0 & 17.39\(\pm\)6.7 & 16.19\(\pm\)7.2 \\
6 & 8.11\(\pm\)4.9 & 16.90\(\pm\)6.4 & 15.72\(\pm\)6.9 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Comparison of the color differences among the estimated skin tones. Only using the well-illuminated photos significantly lowers the color difference average.
from an ill-illuminated photo may lead to an incorrect shade recommendation altogether. Lastly, the illumination condition impacts the number of distance \(<2\) matches. In Table 3, we see that there are more matches if we use bad-illuminated images. This is another reflection of the different color variation patterns introduced by varying light conditions, while the color variation is more confined for the well-illuminated images (Figure 5(c)). Note that this may not be true if we relax the distance threshold to 5. That is, the ill-illuminated photos do not always increase the number of shade matches and may even decrease it as the threshold increases. This is because over- or under-exposed pictures produce very light or very dark browns, and shadows can lower the color saturation. A typical foundation shade range may not cover these values.
### Customer Problem Statement
The beauty e-commerce industry is witnessing a growing trend of applications that provide customers with personalized recommendations based on their face features. Given the diversity in mobile devices and customer skin tones and the variability of light conditions, the models using the photos may behave unexpectedly. The models are often trained under the assumption that the input data is a faithful representation of the customer's face and does not factor possible ill-illuminated environments. Reduced performance of the model that is a part of a recommendation framework can decrease customer satisfaction. A user guidance module can solve this problem by encouraging the users to take photos under a good lighting environment. In this work, we propose a framework that can detect the quality of the illumination in face photos. We ensure the good performance of the model across various skin tones which is often neglected in previous studies. In our experiments, we have found that our proposed framework has a higher probability of producing successful outcomes compared to its counterparts. This indicates that the application using the face images will be more satisfactory if they leverage our approach. For example, our approach may be used to reject poorly illuminated input photos, so the customers can continue taking photos until a good illumination is detected. The customer may also upload a photo, and our model informs them of any possible issue of bad illumination. The customer then may decide if they still want to proceed or upload a new one.
## 4. Conclusion
In this work, we approached fair and responsible beauty product recommendation by proposing a light assessment model that classifies input face images as well or ill-illuminated. The motivation behind developing such a model and tool is that beauty product recommendation often requires the input face photo to have good lighting. The pivotal piece in enhancing the model was to develop a synthetic dataset with a wide range of skin tones and illumination patterns. This dataset synthesis also meant only 1,000 real face images were needed for our method, which provides the benefit of minimizing the manual data collection and labeling effort, which can be expensive and time consuming. The synthetic dataset was used to train a MobileNet-v2 for the final illumination condition classification. Our experiments, evaluated on a real dataset, show that the proposed approach outperforms its counterparts. It also improves the quality of the shade recommendation and eliminates the need for color correction, which can have biased performance and make the result unexplainable to the user. The live feedback and the interactivity of our tool may provide a more trustworthy experience for the users.
Figure 5. Impact of illumination conditions on shade recommendation. (a) Under a good illumination condition, the model would match with light to medium shades, but this product has no such offering. However, underexposed photos result in matching dark to deep shades, and the model ends up matching with nearly all shades offered in this product. (b) Only one recommended shade overlaps between the well- and ill-illuminated photos. Shadows can lower color saturation, which may lead to such conflicting recommendations. (c) The estimated skin tones have less variance when only well-illuminated photos are used. This leads to exactly one match with the distance threshold of 2. However, if ill-illuminated photos are used, then using a low threshold does not necessarily lead to a more specific recommendation. Here, both warmer and cooler brown shades are recommended.
\begin{table}
\begin{tabular}{c c c c c c c c c c}
\hline
 & Total \# of & \multicolumn{6}{c}{Number of Shades Within the Distance Threshold} & \multicolumn{2}{c}{\# of Overlapping Shades} \\
Model \# & Available Shades & \multicolumn{2}{c}{(Labeled as "Good")} & \multicolumn{2}{c}{(Labeled as "Bad")} & \multicolumn{2}{c}{(All photos)} & \multicolumn{2}{c}{("Good" vs. "Bad" Photos)} \\
 & in Product (A,B,C) & dist.\(<2\) & dist.\(<5\) & dist.\(<2\) & dist.\(<5\) & dist.\(<2\) & dist.\(<5\) & dist.\(<2\) & dist.\(<5\) \\
\hline
1 & & (2.0) & (6.70) & (2.60) & (4,10.0) & (2.60) & (7,10.0) & (2.00) & (3,7.0) \\
2 & & (0.00) & (1,0.0) & (0.00) & (0.00) & (0.00) & (1,0.0) & (0.00) & (0.00) & (0.00) \\
3 & (39,12,17) & (0.00) & (3,0.0) & (1,1.0) & (4,7.0) & (1,1.0) & (6,7.0) & (0.00) & (1,0.0) & (5,0.0) \\
4 & & (1,0.0) & (12,0.0) & (3,6.0) & (10,10.1) & (3,6.0) & (17,11.0) & (1,0.0) & (5,0.0) \\
5 & & (1,0.0) & (12,5.0) & (2,5.0) & (10,10.1) & (2,5.0) & (13,10.1) & (1,0.0) & (9,5.0) \\
6 & & (2,0.0) & (132,0) & (2,2.0) & (10,8.2) & (2,3.0) & (15,8.2) & (1,0.0) & (8,2.0) \\
\hline
\end{tabular}
\end{table}
Table 3. Number of Shades Recommended by Color Difference Threshold. Three products with three distinct shade range distributions are used to recommend shades using the closest match criteria, i.e. recommend the shades with the smallest distance to the skin tone. Note that the number of shades being recommended can change as well as the type of shades being recommended. | 私たちは、責任ある美容製品の推奨において、特に、製品の色を肌の色と比較する際に、課題に取り組んでいます。例えば、コンシーラーやベースカラーなどです。正確な推奨を行うためには、製品の特徴と、肌の状態や肌の色などの製品に特化した顔の特性を推定する必要があります。しかし、多くの製品写真は良い光条件で撮影されていますが、顔写真は様々な状況で撮影されます。照明環境が劣っていると、抽出された特徴は非常に誤解的であり、製品の特徴と比較することができません。そのため、悪質な照明条件は、推奨の品質を著しく低下させる可能性があります。私たちは、照明の評価のための機械学習フレームワークを導入し、画像を良好な照明と悪質照明のいずれかに分類しています。次に、ユーザー向けガイダンスツールを構築し、ユーザーがカメラを保持しているときに、照明条件が良好か悪いかを通知します。この方法では、ユーザーに迅速なフィ |
2309.10057 | Hierarchy Builder: Organizing Textual Spans into a Hierarchy to
Facilitate Navigation | Information extraction systems often produce hundreds to thousands of strings
on a specific topic. We present a method that facilitates better consumption of
these strings, in an exploratory setting in which a user wants to both get a
broad overview of what's available, and a chance to dive deeper on some
aspects. The system works by grouping similar items together and arranging the
remaining items into a hierarchical navigable DAG structure. We apply the
method to medical information extraction. | Itay Yair, Hillel Taub-Tabib, Yoav Goldberg | 2023-09-18T18:11:24 | http://arxiv.org/abs/2309.10057v1 | # Hierarchy Builder:
###### Abstract
Information extraction systems often produce hundreds to thousands of strings on a specific topic. We present a method that facilitates better consumption of these strings, in an exploratory setting in which a user wants to both get a broad overview of what's available, and a chance to dive deeper on some aspects. The system works by grouping similar items together, and arranging the remaining items into a hierarchical navigable DAG structure. We apply the method to medical information extraction.
## 1 Introduction
We are dealing with the question of organizing and displaying a large collection of related textual strings. The need arises, for example, in information extraction or text mining applications that extract strings from text. Consider a system that scans the scientific literature and extracts possible causes for a given medical condition. Such a system may extract thousands of different strings, some of which relate to each other in various ways,1 and some of which are distinct. Users consume the list in an exploratory mode (Agarwal and Sahu, 2021; White and Roth, 2008), in which they do not have a clear picture of what they are looking for, and would like to get an overview of the different facets in the results, as well as to dig deeper into some of them.
Footnote 1: Figure 1 lists the kinds of relations between strings.
For example, distinct strings extracted as causes for sciatica include "_herniated disc_", "_herniated disk_", "_lumbar disk herniation_", "_posterior intervertebral disc herniation_" and "_endometriosis_", among hundreds of others. The user of this system would like to go over the returned list to learn about possible causes, but going over hundreds to thousands of results is mentally taxing, and we would like to reduce this effort. In the current case, we would certainly like to treat the first two items (_herniated disc_ and _herniated disk_) as equivalent and show them as one unified entry. But we would also like to induce an additional hierarchy. For example, it could be useful to separate all the _herniated disc_ related items (or even all the _disc_ related items) into one branch, and the _endometriosis_ case into another. This will allow the user to more efficiently get an overview of the represented high-level topics (_disc herniation_ and _endometriosis_) and to navigate the results and focus on the cases that interest them in the context of the query (for example, they may feel they know a lot about disc-related causes, and choose to ignore this branch).
An additional complication is that the hierarchy we are considering is often not a tree: a single item may have two different parents, resulting in a direct acyclic graph (DAG). For example, arguably a condition like _leg pain_ should be indexed both under _leg_ (together with other leg related items) and under _pain_ (together with pain related items). The hierarchy structure is contextual, and depends on the data: if there are not many other leg related items, it may not be beneficial to introduce this category into the hierarchy.
Additionally, note that some items in the hierarchy may not directly correspond to input strings: first, for the "_leg pain_" example above, if the input list does not include stand-alone _leg_ or _pain_ items, we may still introduce them in our hierarchy. We may also introduce additional abstraction, for example we may want to group "_heart disease_", "_ischemia_", "_hypotension_", and "_bleeding_" under "_cardiovascular disease_".
In this work we introduce a system that takes such a flat list of related strings, and arranges them in a navigable DAG structure, allowing users to get a high level overview as well as to navigate from general topics or concepts to more specific content by drilling down through the graph. Ideally, the graph would allow the user to:
(1) get a comprehensive overview of the various facets reflected in the results;
(2) quickly get an overview of the main aspects of the results;
(3) efficiently navigate the results, finding items in the sub-graph in which they expect to find them.
At a high level, the system works by finding lexically equivalent terms, arranging them in a DAG structure reflecting the specificity relation between terms, further merging equivalent nodes based on a neural similarity model, add additional potential intermediary hierarchy nodes based on taxonomic information and other heuristics, and then pruning it back into a smaller sub-DAG that contains all the initial nodes (input strings) but only a subset of the additional hierarchy nodes. Finally, we select the top-k "entry points" to this graph: high level nodes that span as many of the input nodes as possible. This process is described in section SS3. While the DAG extended with potential hierarchies is very permissive and contains a lot of potentially redundant information, the DAG pruning stage aims to ensure the final graph is as compact and informative as possible.
We focus on causes-for-medical-conditions queries and provide a demo in which a user can select a medical condition, and browse its causes in a compact DAG structure.
To evaluate the resulting DAGs, we perform automatic and manual evaluation. The automatic evaluation is based on measuring various graph metrics. The human evaluation is performed by human domain experts. Our results show that the DAG structure is significantly more informative and effective than a frequency-ranked flat list of results.
## 2 Requirements
As discussed in the introduction, our input is a list of strings that reflect answers to a particular question, as extracted from a large text collection (in this paper we focus on the biomedical domain, and more specifically on causes for medical conditions). This list can be the output of an Open-IE system (Fader et al., 2011; Stanovsky et al., 2015; Kolluru et al., 2020), the results of running extractive QA (Rajpurkar et al., 2016) with the same question over many paragraphs, or extracted using an extractive query in a system like SPIKE (Shlain et al., 2020; Taub Tabib et al., 2020; Ravfogel et al., 2021). The lists we consider typically contain from hundreds to thousands of unique items. We identified a set of relations that can hold between strings in our inputs, which are summarized in Figure 1. We would like to arrange these items in a hierarchical structure to facilitate exploration of the result list by a user, and allow them to effectively consume the results. Concretely, the user needs to:
_a. not see redundant information._
_b. be able to get a high-level overview of the various answers that are reflected in the results._
_c. be able to get quick access to the main answers._
_d. be able to dig-in into a specific phenomenon or concept that is of interest to them._
Figure 1: Kinds of possible relations between input strings
_e. be able to locate concepts they suspect exist._
This suggests a hierarchy that respects the following conditions:
_Paraphrased_ spans should be combined into a single group, and _close-meaning_ spans should be combined into the same group; _Elaboration_ relations should be expressed hierarchically; _Co-mention_ spans should be both descendants of the shared concept; _Taxonomic relations_ should (in some cases) be descendants of the taxonomical parent.
Additionally, we would like each node in the hierarchy to have relatively few children (to reduce the need to scan irrelevant items), yet keep the hierarchy relatively shallow (to save expansion clicks if possible). The hierarchical structure should also be informative: we should be able to guess from a given node which kinds of items to expect to find under it, and which kinds of items _not_ to expect to find under it. This means a single item should be locatable in different ways, in case it can be categorized under different keys (we would sometimes like "_brain tumor_" to be listed under _brain_ and sometimes under _tumors_).2
Footnote 2: Arranging information as graphs to facilitate navigation and exploration is, of course, not a novel concept. A notable examples is entailment graphs (Kotlerman et al., 2015; Adler et al., 2012).
## 3 Method
Expanding the initial list.We assume that the strings in the initial list are _maximal_, meaning that the string captures the extracted noun-phrase including all of its possible modifiers. We further expand the list by considering also potential sub-strings of each maximal string, reflecting different granularities. For example, from the string "severe pain in the lower right leg" we would extract "pain", "severe pain", "severe pain in the leg", "severe pain in the lower right leg", among others.3 We then consider the union of the initial set of input strings and the set of additional sub-strings. Different users would be interested in different granularities depending on their information need. We rely on the DAG-pruning stage to properly organize these strings and prune away non-informative ones in the context of the entire set.
Footnote 3: This is done using a rules-based algorithm that operated on the parse tree, which extracted all the distinct modification spans derived from the head token.
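A rough spaCy approximation of this expansion is sketched below (it assumes the `en_core_web_sm` model is installed); it keeps the head word together with subsets of its complete modifier subtrees, so it does not reproduce every granularity that the authors' rules produce.

```python
from itertools import combinations
import spacy

nlp = spacy.load("en_core_web_sm")   # assumed available

def candidate_subspans(phrase):
    doc = nlp(phrase)
    head = next(tok for tok in doc if tok.dep_ == "ROOT")
    modifiers = list(head.children)
    spans = {head.text}
    for r in range(1, len(modifiers) + 1):
        for combo in combinations(modifiers, r):
            toks = sorted({head} | {t for m in combo for t in m.subtree}, key=lambda t: t.i)
            spans.add(" ".join(t.text for t in toks))
    return spans

print(candidate_subspans("severe pain in the lower right leg"))
```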
Initial grouping into equivalence sets.The input of this stage is a set of strings (the union of the input set and the extended set), and the output is a list of sets, such that the sets are distinct, and their union covers the initial set. For example, after this stage, the items "_herniated disk_", "_herniated disc_", "_disc herniation_", "_herniation of the disc_" will be in the same equivalence set.
The grouping in this stage is inspired by (Gashteovski et al., 2017) and is based on considering each string as a bag of lemmas, discarding stop words, modal words, and quantity words, and considering items as equivalent if their bags are equivalent. The lemma matching is relaxed, and allows, beyond exact string match, also matches with small edit distance and matches based on UMLS (Bodenreider, 2004) and WordNet (Miller, 1992) spelling variants and synonyms.
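A minimal sketch of the exact-bag part of this grouping is given below; the relaxed matching through edit distance and UMLS/WordNet variants is left out, so spelling variants such as "disc"/"disk" would not merge in this simplified version.

```python
from collections import defaultdict
import spacy

nlp = spacy.load("en_core_web_sm")   # assumed available

def lemma_bag(span):
    return frozenset(tok.lemma_.lower() for tok in nlp(span)
                     if tok.is_alpha and not tok.is_stop)

def group_equivalent(spans):
    groups = defaultdict(list)
    for s in spans:
        groups[lemma_bag(s)].append(s)
    return list(groups.values())

print(group_equivalent(["disc herniation", "herniation of the disc", "endometriosis"]))
```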
Initial DAG construction.We now take the list of sets from the previous stage, and arrange them into a DAG, where each set is a DAG node. We add a directed edge between two nodes A and B if B _is more specific than_ A, and no other node C is more specific than A and less specific than B.
The _specificity relation_ at this stage is determined based on the bags of lemmas that were used to create the equivalence sets: a set B is more specific than a set A if A and B are not equivalent and the bag of B contains the bag of A.
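This containment-based construction can be sketched with networkx as below: an edge is added whenever one bag strictly contains another, and a transitive reduction keeps only the direct specificity edges. Names and the toy input are illustrative.

```python
import networkx as nx

def build_specificity_dag(bags):
    """bags: dict mapping node id -> frozenset of lemmas."""
    g = nx.DiGraph()
    g.add_nodes_from(bags)
    for a, bag_a in bags.items():
        for b, bag_b in bags.items():
            if a != b and bag_a < bag_b:       # strict containment: b is more specific than a
                g.add_edge(a, b)
    return nx.transitive_reduction(g)          # keep only direct edges

bags = {"pain": frozenset({"pain"}),
        "leg pain": frozenset({"pain", "leg"}),
        "severe leg pain": frozenset({"pain", "leg", "severe"})}
print(list(build_specificity_dag(bags).edges))
```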
Adding heads as nodes.For all spans, we take their head-word (either a single adjective or a single noun) and add them as roots of the DAG. We then add an additional root node above them, so that the DAG has a single root. This handles the co-mention relation.
Merging semantically equivalent graph nodes.We now take the DAG and merge equivalent nodes, as determined by a trained statistical model (we use SAP-BERT (Liu et al., 2020))4. For example, this stage will merge "_administration of streptozotocin_" and "_streptomycin injection_". When merging two graph nodes, we handle the corresponding edges in the expected way (the children of the two individual nodes become children of the merged node, and the parents of the individual nodes become the parents of the merged node).5
For a pair of graph nodes A and B, we encode each string in A and in B into a vector using SAP-BERT, and represent each node as the average vector of the strings within it. We go over the nodes in the DAG in DFS order starting from the root nodes, and for each node consider all of its children for potential merging among them. We merge two nodes if the cosine similarity score between their vectors passes the threshold \(t_{1}=0.9\) and their merging does not create a cycle. We then do another pass and merge nodes to direct child nodes if their similarity score is above \(t_{2}=0.95\), again avoiding creating circles.
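The neural similarity test alone can be sketched as follows; the DFS traversal, cycle checks, and the UMLS-based merging described next are omitted, and the Hugging Face checkpoint named here is the publicly released SapBERT model, which may differ from the exact version used by the authors.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

NAME = "cambridgeltl/SapBERT-from-PubMedBERT-fulltext"   # public SapBERT checkpoint (assumed)
tok = AutoTokenizer.from_pretrained(NAME)
enc = AutoModel.from_pretrained(NAME)

def node_vector(strings):
    """Average [CLS] embedding of all strings belonging to one DAG node."""
    batch = tok(strings, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        cls = enc(**batch).last_hidden_state[:, 0]
    return cls.mean(dim=0)

def should_merge(node_a, node_b, threshold=0.9):
    sim = F.cosine_similarity(node_vector(node_a), node_vector(node_b), dim=0)
    return float(sim) >= threshold

print(should_merge(["administration of streptozotocin"], ["streptozotocin injection"]))
```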
After this stage, we attempt to further merge nodes based on the UMLS ontology (Bodenreider, 2004). Two nodes A and B are considered UMLS-equivalent, if there is at least one string in node A that is listed in UMLS as a synonym of at least one string in node B. Such cases are merged.6
Footnote 6: If this merging creates a cycle, this cycle is removed.
Adding taxonomic nodes.So far the relationships between nodes in the DAG were solely based on lexical relations. In order to enrich the graph, we introduce additional nodes based on taxonomical relations, which are not reliant on lexical information. For instance, "heart disease", "ischemia", "hypotension", and "bleeding" are under the broader term "cardiovascular disease". We add many nodes here, relying on many of them to be pruned in the next stage.
We map each node to the UMLS hierarchy, and look for UMLS concepts that govern at least two DAG nodes ("descendent DAG nodes"). These are potential abstractions over graph nodes. For each such UMLS concept that is already part of the DAG, it is connected by an edge to all its descendant DAG nodes that do not already have a path to them, if adding such an edge does not create a cycle. For UMLS concepts that are not already in the DAG, they are added as new nodes governing the descendant graph nodes. UMLS concepts have multiple synonyms. When adding them as nodes, we choose the synonym with the highest SAP-BERT cosine similarity to the descendent DAG nodes this concept governs.
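A simplified sketch of attaching such taxonomic parents is shown below; `node_to_cuis` and `cui_ancestors` are hypothetical lookups against a local UMLS copy, and the path and cycle checks described above are omitted.

```python
from collections import defaultdict

def add_taxonomic_parents(dag_children, node_to_cuis, cui_ancestors):
    """dag_children: dict node -> set of child nodes (edges point parent -> child).
    Attach every UMLS concept that governs at least two DAG nodes as their parent."""
    governed = defaultdict(set)
    for node, cuis in node_to_cuis.items():
        for cui in cuis:
            for ancestor in cui_ancestors.get(cui, set()):
                governed[ancestor].add(node)
    for concept, nodes in governed.items():
        if len(nodes) >= 2:
            dag_children.setdefault(concept, set()).update(nodes)
    return dag_children
```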
DAG Pruning.The DAG at this stage is quite large and messy, containing both nodes containing input strings, as well as additional hierarchy nodes based on linguistically motivated substrings of the input strings, and on taxonomic relations. We prune it to create a smaller graph which is more amenable to navigation. The smaller DAG should contain all the nodes corresponding to input strings, and an effective set of additional hierarchy nodes. Some of the hierarchy nodes are more important than others, as they provide a better differential diagnosis among the answers. Our goal is to highlight these and filter out the less important ones. Operatively, we would like for each node in the graph to have the minimal number of children, such that all the input strings that were reachable from it, remain reachable from it. This focuses on hierarchy nodes that are shared among many input concepts.
We first prune graph edges according to this criterion. This process results in nodes that have a single child. Such nodes are removed, and their children are attached to their parent.7
Footnote 7: Selecting the smallest group of concepts at each hierarchy level is important for user navigation, who quickly become overwhelmed by too many nodes, making it difficult to orient themselves within the DAG.
Selecting the minimal number of children according to this criterion is NP-hard. As an alternative, we use an approximation algorithm, the greedy set cover algorithm (Johnson, 1973), which works by selecting in each step the node covering the highest number of not-yet-covered answers, covering them, and proceeding. This helps in choosing the most important concepts, those with the highest differential-diagnosis value.
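The greedy step can be sketched as follows: among a node's hierarchy children, repeatedly keep the child that covers the most still-uncovered input strings. Variable names and the toy example are illustrative.

```python
def greedy_child_cover(children, targets):
    """children: dict child -> set of input strings reachable from it.
    Return a small subset of children that still covers every string in `targets`."""
    uncovered = set(targets)
    kept = []
    while uncovered:
        best = max(children, key=lambda c: len(children[c] & uncovered))
        gained = children[best] & uncovered
        if not gained:            # remaining targets are unreachable from any child
            break
        kept.append(best)
        uncovered -= gained
    return kept

children = {"disc herniation": {"herniated disc", "lumbar disc herniation"},
            "pain": {"leg pain", "back pain"},
            "leg": {"leg pain"}}
print(greedy_child_cover(children, {"herniated disc", "lumbar disc herniation",
                                    "leg pain", "back pain"}))
```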
Entry-point selection.Finally, we seek \(k\) nodes that will serve as the "entry nodes" to the graph. These should be \(k\) nodes that fulfill the following criteria:
a. allow reaching as many input strings as possible.
b. the semantic affinity between a node and the input string reachable by it, is high.
The users will initially see these nodes as well as an additional "other" node, from which all the other input strings can be reached. The entry node labels provide an overview of the \(k\) main concepts in the list, and allow the user both to get an overview of the results and to drill down into parts that interest them. Criterion (b) is important to ensure that the user not only can reach the input string by navigating from an entry point, but also that they will _expect_ to find this input string there.
This selection is done by a heuristic algorithm which we adapted from the Greedy+ DAG-node-selection algorithm in (Zhu et al., 2020). It first assigns each node C with a score that combines the
number of the input nodes reachable from it, and the semantic affinity (based on SAP-BERT cosine similarity) of C to each of these reachable nodes. It then iteratively adds the highest scoring candidate C to the set of entry points, and adjusts the scores of each remaining node N by subtracting from the score of N the affinity scores between C and the input nodes reachable from N. We do this until we reach \(k\) entry points.
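A rough sketch of this greedy loop is given below; the exact scoring of Greedy+ (Zhu et al., 2020) differs in its details, and `reachable` and `affinity` are assumed inputs.

```python
def select_entry_points(reachable, affinity, k):
    """reachable: dict node -> set of input strings reachable from it.
    affinity[node][s]: semantic affinity between `node` and reachable string s."""
    credit = {n: dict(affinity[n]) for n in reachable}
    chosen = []
    while len(chosen) < k and credit:
        best = max(credit, key=lambda n: sum(credit[n].values()))
        chosen.append(best)
        claimed, best_aff = reachable[best], affinity[best]
        del credit[best]
        for n in credit:                     # discount strings already covered by `best`
            for s in claimed & reachable[n]:
                credit[n][s] = max(0.0, credit[n][s] - best_aff.get(s, 0.0))
    return chosen
```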
Visualization.We are now finally ready to show the DAG to the user. For nodes that correspond to multiple (semantically equivalent but lexically different) input strings, we choose one of them as the representative for display purposes.
## 4 Input-output Example
We demonstrate with a minified example. Given the set of spans in Figure (2a), representing causes of chest pain, Hierarchy Builder expands the set by adding the spans "rib fracture" (this is a substring of two existing spans) and "respiratory diseases" (a new taxonomic node). Based on the expanded set of spans in Figure (2b), Hierarchy builder identifies synonymous spans and merges them into the concepts. In Figure (2c) we see these concepts, where each concept includes aliases in parenthesis where applicable. Hierarchy Builder then places the entries in a DAG based on a hierarchy of specificity, as depicted in Figure (2d).
## 5 Experiments and Evaluation
ScopeWe focus on the medical domain and evaluate our system on etiologies (causes) of two medical symptoms ("_jaundice_" and "_chest pain_"). These symptoms were chosen because they are common and each has many different etiologies mentioned in the literature.
The input lists for the system were the result of running a set of 33 syntactic patterns over PubMed abstracts, looking for patterns such as "COND due to ___" or "patients with COND after ___" where COND is either _jaundice_ or _chest pain_. The results were extracted using the SPIKE system (Shlain et al., 2020; Taub Tabib et al., 2020) and each matched head-word was expanded to the entire syntactic subgraph below it. This resulted in 3389 overall extracted strings and 2623 unique strings for _jaundice_ and 2464 overall and 2037 unique for _chest pain_. After merging strings into synonym sets as described in SS3, we remain with 2227 concepts for _jaundice_ and 1783 for _chest pain_.
For each of the symptoms there are established and widely accepted lists of common etiologies, which we rely on in our evaluation.8 We take 38 established etiologies for jaundice and 33 for chest pain, and check their accessibility in the flat list of extracted symptoms, as well as in the hierarchical DAG we create.

Figure 2: Input-Output Example. See section §4.
Coverage and Entry-point SelectionFor _jaundice_, our input list contains 28 out of the 38 known etiologies, and for _chest pain_ 26/33. With \(k=50\), 25 of 28 concepts are reachable from an entry point for _jaundice_ and 21/26 for _chest pain_. With \(k=100\) the numbers are 28/28 (_jaundice_) and 24/26 (_chest pain_).
Assessing the contribution of the different componentsThe different components in our algorithm contribute by adding nodes, combining nodes, adding edges, and removing edges. Table 1 describes the kind of contribution of each component and quantifies its impact, for each of the two tested conditions.
We now look at the case where we select 50 entry-point nodes, and focus on the effect on the top-level nodes. We see that for Chest-pain, a total of 20 of the 50 selected entry-points were not in the original input, but were added by the various components (12 from expanding the initial list, 5 from adding head words, and 3 from taxonomic words). Similarly, for Jaundice, these components added a total of 29 root nodes (out of the selected 50) that were not in the original input (17 from expanding initial list, 5 from head words and 6 from taxonomic nodes).
The "Expanding the initial list" component plays a significant role in shaping the DAG structure. In Chest Pain, 161 out of 224 internal nodes originate from the expanded list (146 from Expanding the initial list and 15 from co-mention). In Jaundice, 347 out of 423 internal nodes stem from the expanded list (333 from Expanding the initial list and 14 from co-mention). This highlights the substantial impact of this component on the DAG's structure.
The number of merges performed indicates the usefulness of the employed merging methods.
Furthermore, the set cover pruning algorithm effectively reduces the number of edges in the DAG.
Qualitative MeasuresFor _jaundice_, our final DAG contains 2620 nodes overall and has a maximum depth of 11. With \(k=50\), the average number of leaves per entry point is 22.68 (min 0, max 600), and the average depth is 2.86 (min 0, max 9). Most importantly, each internal node has an average of 9.12 children (min 1, max 56, variance 34.91), making them highly browsable.

For _chest pain_, the trends are overall similar: our final DAG contains 2124 nodes overall and has a maximum depth of 9. With \(k=50\), the average number of leaves per entry point is 14.14 (min 1, max 175), and the average depth is 2.8 (min 0, max 7). Each internal node has an average of 4.94 children (min 1, max 53, variance 27.53).
Human evaluationOur main evaluation centers around the effort for an expert9 to locate the known etiologies in the resulting DAG, compared to a flat list sorted by frequency. For each of the etiologies, we ask how many entries need to be considered before finding the etiology. For the flat list, this means how many items are read when scanning the list in order before reaching the etiology. For the DAG, we count the number of clicks (expansions of a node) starting from \(k=50\) entry points (a quantity that aligns with a reasonable number of entry nodes perceivable by a user), while also summing the number of items before the expanded node in each level. Note that since we look for common etiologies rather than rare ones, we would assume a frequency-ranked list based on literature mentions would compare favorably on these measures. Nonetheless, we see a clear benefit of the DAG. We compare two conditions: an ideal condition where the user knows exactly which nodes to expand (blue in the graph), and a realistic scenario, in which the user searches for the etiologies by expanding nodes (gray in the graph).
Footnote 9: We use two experts, each evaluating a different condition. The expert evaluating _jaundice_ is an expert MD specializing in children’s medicine. The expert evaluating _chest pain_ is a PhD in biology with 38 years of biomedical research.
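To make the effort measure concrete, the following is a simplified sketch of how the two effort counts could be computed, assuming an ordered flat list and an oracle path through the DAG; it is an illustration of the metric, not the evaluation script itself:

```python
def flat_list_effort(ranked_items, target):
    """Items read when scanning a frequency-ranked list until the target is found."""
    return ranked_items.index(target) + 1

def dag_oracle_effort(path, children_of):
    """Clicks plus items scanned before each expanded node, given the oracle path.

    path:        list of nodes from an entry point down to the target etiology.
    children_of: dict node -> ordered list of its children; the entry level is
                 stored under the key None.
    """
    effort = 0
    parent = None
    for node in path:
        siblings = children_of[parent]
        effort += siblings.index(node) + 1  # items considered at this level
        parent = node
    return effort
```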
We also perform another evaluation in which we ask the experts to rank each path to an etiology based on its quality, given the question "to what extent is this a logical path to follow in order to find the etiology", on a scale of 1 (very bad) to 5 (very good).
ResultsFigure 3 shows the main results for the two conditions. Despite the frequency-based ranking, many of the etiologies appear relatively low in the flat list, making them very hard to come by in this condition (orange). On the other hand, when considering the DAG, the vast majority of items are significantly easier to locate, requiring scanning significantly fewer items. Only 3 items for jaundice and 2 for chest pain were significantly harder to locate in the DAG than in the flat list. In terms of the quality of the DAG paths associated with each
etiology, the jaundice annotator ranked 23 out of 25 as 5, 1 as a 2, and 1 as a 1. For chest pain, the numbers are 19 out of 21 ranked as 5, 1 as 2, and 1 as 1. Overall, our hierarchy building algorithm works well for the vast majority of the cases, and offers significant benefits over the flat list.
## 6 Conclusions
We presented an automatic method to organize large lists of extracted terms (here, of medical etiologies) into a navigable, DAG-based hierarchy, where the initial layer provides a good overview of the different facets in the data, and each internal node has relatively few items. The code together with a video and an online demonstration are available at [https://github.com/itayair/hierarchybuilder](https://github.com/itayair/hierarchybuilder).
## 7 Limitations
While our method is aimed at organizing any flat list of extractions, we evaluated it here only on the medical domain, only on a single kind of information need (etiologies), and only for common conditions (jaundice and chest pain). More extensive evaluation over additional conditions is needed in order to establish general-purpose utility. However, we do find the system useful for navigating automatically-extracted etiology lists, and encourage readers to experiment with the system also on other conditions, to assess its utility.
There are also some candidates for improving the method in the biomedical domain, which are not currently handled: (a) abstraction over sub-strings, e.g., for the spans "_administration of penicillin_", "_administration of aspirin_", "_administration of augmentin_", it could be useful to introduce a shared parent level of "_administration of antibiotic/drug_". Our system can currently identify _penicillin_, _augmentin_, _aspirin_ as an _antibiotic/drug_, but cannot handle abstraction over sub-strings. (b) Linking to UMLS currently relies on exact lexical matches, and can be improved.
## 8 Ethical Considerations
We present a system for organizing large result lists into a browsable hierarchy. In general, consuming a hierarchy is more effective than consuming a very long list.
| **Component** | **Contribution** | **Chest-pain** | **Jaundice** |
| --- | --- | --- | --- |
| Expanding the initial list (for full DAG) | Add nodes | 504 | 893 |
| Expanding the initial list (for DAG with 50 entry nodes) | Add nodes | 158 (12 top level) | 350 (17 top level) |
| Adding heads as nodes (Full DAG) | Add nodes | 457 | 379 |
| Adding heads as nodes (50 entry nodes) | Add nodes | 20 (5 top level) | 19 (6 top level) |
| Merging semantically equivalent nodes | Merge nodes | 93 (out of 2556) | 266 (out of 3330) |
| UMLS merging of synonym nodes | Merge nodes | 62 (out of 2504) | 99 (out of 3167) |
| UMLS taxonomic nodes (full DAG) | Add nodes | 113 | 169 |
| UMLS taxonomic nodes (50 entry nodes) | Add nodes | 3 | 6 |
| UMLS taxonomic edges | Add edges | 140 (5 top level) | 153 (3 top level) |
| DAG Pruning | Remove edges | 2363 | 3209 |

Table 1: Quantifying the contribution of the different components.
Figure 3: Effort to reach a set of common etiology items using our created DAG vs. a frequency ranked list. X axes coordinates correspond to different etiologies sorted by their frequency in the input list, and Y axes correspond to the effort. Orange: frequency-ranked flat list. Blue: DAG + oracle locating of items. Gray: DAG + human locating of items.
However, hierarchies can hide items, especially if the items are misplaced in an unexpected branch, which our system sometimes does (albeit rarely). In situations where consuming all of the information is crucial and the cost of missing an item is prohibitive or dangerous, a flat list would be the safer choice.
AcknowledgementsThis project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT).
| 情報抽出システムは、特定のトピックに関する数百から数千の文字列を生成することがあります。私たちは、ユーザーが全体的な概要を把握しつつ、特定の領域を掘り下げたいというニーズに応える方法を提案します。この方法では、類似したアイテムをグループ化し、残りのアイテムを階層構造でナビゲーション可能なDAG構造に整理します。この方法を医療情報抽出に応用しました。
|
2309.16426 | QwenGrasp: A Usage of Large Vision-Language Model for Target-Oriented
Grasping | Target-oriented grasping in unstructured scenes with language control is
essential for intelligent robot arm grasping. The ability for the robot arm to
understand the human language and execute corresponding grasping actions is a
pivotal challenge. In this paper, we propose a combination model called
QwenGrasp which combines a large vision-language model with a 6-DoF grasp
neural network. QwenGrasp is able to conduct a 6-DoF grasping task on the
target object with textual language instruction. We design a complete
experiment with six-dimension instructions to test the QwenGrasp when facing
with different cases. The results show that QwenGrasp has a superior ability to
comprehend the human intention. Even in the face of vague instructions with
descriptive words or instructions with direction information, the target object
can be grasped accurately. When QwenGrasp receives an instruction that is not
feasible or not relevant to the grasping task, our approach is able to
suspend task execution and provide proper feedback to humans, improving
safety. In conclusion, with the great power of the large vision-language model,
QwenGrasp can be applied in the open language environment to conduct the
target-oriented grasping task with freely input instructions. | Xinyu Chen, Jian Yang, Zonghan He, Haobin Yang, Qi Zhao, Yuhui Shi | 2023-09-28T13:23:23 | http://arxiv.org/abs/2309.16426v3 | # QwenGrasp: A Usage of Large Vision-Language Model for Target-Oriented Grasping
###### Abstract
Target-oriented grasping in unstructured scenes with language control is essential for intelligent robot arm grasping. The ability of the robot arm to understand human language and execute corresponding grasping actions is a pivotal challenge. In this paper, we propose a combination model called QwenGrasp which combines a large vision-language model with a 6-DoF grasp neural network. QwenGrasp is able to conduct a 6-DoF grasping task on the target object given a textual language instruction. We design a complete experiment with six dimensions of instructions to test QwenGrasp when facing different cases. The results show that QwenGrasp has a superior ability to comprehend human intention. Even in the face of vague instructions with descriptive words or instructions with direction information, the target object can be grasped accurately. When QwenGrasp receives an instruction that is not feasible or not relevant to the grasping task, our approach is able to suspend task execution and provide proper feedback to humans, improving safety. In conclusion, with the great power of the large vision-language model, QwenGrasp can be applied in an open language environment to conduct target-oriented grasping tasks with freely input instructions.
## I Introduction
Target-oriented grasping enables a robot arm to grasp target objects in accordance with human intention, thereby enhancing the efficiency and safety of grasping tasks [1]. In an open world where humans and robots coexist, the robot arm is expected to conduct target-oriented grasping tasks from textual instructions given by humans. Moreover, since humans make mistakes, robots should have the ability to autonomously determine the feasibility of a given task, avoiding blind execution of erroneous instructions. The target-oriented grasping task encompasses two primary abilities: the capability to understand human instructions [2] and the ability to ensure a high success rate in grasping [3]. Both of these aspects are crucial for the successful execution of the target-oriented grasping task.
Previous works [1, 4, 5] achieve target-oriented grasping by providing an image of the target object. They [6, 7] have focused on specifying target objects through a given target object image, matching human intention to the target object according to the template image. However, these approaches are not user friendly and often demand high-quality image information; they lack generality and are impractical in the many situations where no target image is available.
Other works [1, 2, 8] address target-oriented grasping with textual instructions, aiming to bridge the gap between human language and the process of grasping target objects. Recent studies have harnessed the capabilities of the vision-language model Contrastive Language Image Pre-training (CLIP) [9] to facilitate the matching of text and images, thereby establishing a mapping from textual human instructions to the corresponding target objects for robot arm manipulation. However, CLIP lacks the ability to comprehensively understand the scene, including its spatial context, which introduces a limitation. Additionally, CLIP lacks the flexibility to handle complex human instructions in real-world environments. These CLIP-based methods may execute grasping tasks blindly when confronted with infeasible or non-grasping instructions. The grasp systems of these methods are limited to specific instruction templates, and they are prone to unexpected problems when applied in a daily natural language environment with a large variety of instructions.
Ensuring a high success rate in grasping is also a crucial aspect of target-oriented grasping. Previous works [3, 11, 14] have achieved high success rates in 6-DoF grasping in unstructured scenes. They [10, 11, 12] often extract point cloud data from the scene and employ PointNet [15] as a backbone neural network to generate candidate grasp poses for objects in the scene. Furthermore, they [3, 15] also use PointNet [15] to evaluate the generated grasp poses and then pick out the best pose. However, these methods primarily focus on the grasping success rate and overlook target selection. For these methods, the final grasping target may be any object in the scene, which means their grasping target is unknown and they cannot be directly used in target-oriented grasping tasks.

Fig. 1: Two common cases of target-oriented grasping with right or erroneous instruction. In case A with a right instruction, at the top half of this picture, our combination model QwenGrasp conducts grasping of the target object according to the human's intention. In case B with an erroneous instruction, at the bottom half of this picture, QwenGrasp recognizes the infeasible task and suspends the grasp mission.
Recent research [16, 17] has increasingly used multi-modal large language models, specifically Large Vision-Language Models (LVLMs), to control robots through simple language instructions. These works have demonstrated a remarkable understanding of human intention and achieved impressive results, endowing robots with a level of comprehension comparable to that of humans. The LVLMs help robots easily understand human intention and autonomously control robotic systems to accomplish the corresponding tasks, with astonishing outcomes. However, while these achievements have proven successful in relatively straightforward grasping tasks, their effectiveness in more complex settings, such as target-oriented 6-DoF grasping in unstructured scenes, remains to be rigorously tested and evaluated.
In this paper, we propose a novel combination model called QwenGrasp for target-oriented grasping tasks. QwenGrasp is composed of the large vision-language model Qwen-VL [18] and the grasping neural network REGNet [3]. Specifically, we use the pre-trained large vision-language model Qwen-VL [18] to encode the RGB image of the workspace and the textual human instruction; it outputs a detection bounding box on the input image for the detected target object. REGNet [3] is a grasping neural network specialized for 6-DoF grasping in unstructured scenes. We then employ the pre-trained REGNet to generate the final grasp pose for the target object. Since we directly use the pre-trained large vision-language model and the pre-trained grasping neural network, our combination model QwenGrasp requires no additional training.
Compared to previous works, not only can QwenGrasp complete the target-oriented grasping task from textual instructions, but it also has the ability to determine the feasibility of the given instruction. QwenGrasp demonstrates effectiveness in the two key abilities of target-oriented grasping tasks: understanding textual human language instructions and ensuring a high grasp success rate. In particular, Qwen-VL [18] is one of the leading large vision-language models and offers a strong ability to understand human textual instructions and workspace spatial information. As a result, our proposed method can handle complex and varied language instructions, achieving language generalization close to everyday human conversation. Besides, as shown in Fig. 1 case B, when facing an erroneous instruction, our method can suspend the task and ask the human for confirmation, greatly enhancing the safety of robot arm grasping.
To summarize, our main contributions are as follows:
* We propose a combination model called QwenGrasp that combines a large vision-language model with a professional grasp network for target-oriented 6-DoF grasping in unstructured scenes. To the best of our knowledge, we are the first to use a multi-modal large language model in target-oriented 6-DoF grasping with textual instructions, and also the first to use Qwen-VL in grasping.
* We design a comprehensive set of prompts to preload into the large vision-language model, strengthening the robot arm's ability to autonomously assess task feasibility and independently plan and execute grasping tasks.
* Our proposed methodology is evaluated across a range of real-world scenes encompassing common objects coupled with natural language instructions. The outcomes validate its effectiveness and generalizability.
The remainder of the paper is organized as follows. Section 2 summarizes related work on target-oriented grasping, 6-DoF grasping in unstructured scenes, and multi-modal large language models for robotic grasping. Section 3 introduces our methodology and implementation details. Section 4 shows the experimental results of our method with six kinds of instructions. Finally, conclusions and future work are given in Section 5.
## II Related Work
### _Target-oriented grasping_
Target-oriented grasping by robot arms with human textual instructions has always been a significant task, offering substantial benefits in both industrial production and daily life [2]. Recent years have witnessed numerous explorations in this area [4, 5, 6, 7]. Recent works [2, 8] have leveraged CLIP [9] to achieve target-oriented grasping with textual language. Vision-Language-Action [2] can grasp blocked targets by grasping away the obstacles. SeeAsk [8] can help the robot identify a target object by answering its questions. They all use preset language templates for training or questioning. CLIP [9] is a multi-modal large-scale vision-language model for matching text and images. By directly using pre-trained CLIP, these works achieve matching of texts and objects without additional training. By combining CLIP with a grasping network, they realize an efficient process for translating human instructions into target-oriented grasping, yielding favorable results in specialized grasping environments. However, CLIP lacks the ability to comprehend spatial relationships among objects in the workspace and falls short of the language understanding capabilities of large language models (LLMs) such as ChatGPT [19] and others [20]. Consequently, CLIP-based target-oriented grasping methods may not deliver satisfactory results when dealing with complex and flexible instructions, which are frequently encountered in real-world applications.
### _6-DoF Grasping in unstructured scenes_
The existing methods [14, 3, 13] for 6-DoF grasping in unstructured scenes achieve outstanding grasping success rates. 6-DoF grasping allows the robot arm to grasp objects in the workspace from any height and direction. The robot arm gripper pose contains the 3D gripper position and the 3D gripper orientation. Compared with 2D planar grasping, 6-DoF grasping has a higher grasping success rate in unstructured scenes with unseen objects. Except for the work [14] using an RGB image as input, most works [10, 11, 12] take the 3D point cloud data of the workspace as input. They generate candidate grasp poses based on the point cloud and analyse the grasp stability and success rate by assigning a score. Finally, the grasp pose with the highest score becomes the grasp pose for the robot arm to execute. REGNet [3] has three stages: the Score Network (SN), the Grasp Region Network (GRN), and the Refine Network (RN). SN selects the 3D gripper positions, GRN generates the 3D gripper orientations, and RN refines those grasp poses and gives the final result. GraspNet-1Billion [11] provides a grasp dataset with 1 billion labeled grasp poses covering a wide range of general objects. These 6-DoF grasping methods can be used to grasp novel objects and operate in unstructured scenes.
### _Multi-modal Large Language Model_
The development of large models has greatly benefited people's work and daily life. Recent works [16, 17] have combined these large models with robots, enabling robots to carry out various tasks in response to human commands.
PaLM-E [16] is a large model composed of a vision transformer and the large language model PaLM [21]. It is an embodied multi-modal language model. It accepts multi-modal inputs including images, 3D models, and text, automatically analyzes the given queries, and generates subtask sequences. While maintaining the effective communication capabilities of large language models, it enables robots to perform a wide range of tasks guided by natural language instructions. VoxPoser [17] is also based on a large language model. It is able to synthesize robot trajectories according to the intention of natural language instructions, and it can directly generate the corresponding code to control robots for a diverse set of manipulation tasks. However, these works have not been specifically applied to grasping tasks.
In this paper, we use the large vision-language model Qwen-VL [18] to help robots understand human instructions and locate target objects. Qwen-VL [18] is a multi-modal large language model based on the large language model QwenLM [18]. It can accept both images and text as input. Besides its abilities in image captioning, question answering, and flexible interaction, it has a strong visual localization ability and can locate objects in images by understanding textual instructions. The main part of our approach QwenGrasp is to use Qwen-VL to understand human instructions and generate the detection bounding box that captures the target object. This overcomes one of the most important obstacles of the target-oriented grasping problem, namely freely understanding human instructions. By incorporating Qwen-VL, our method can be applied in people's daily-life language environment and help people grasp target objects. Even in the face of erroneous or non-grasping instructions, it can recognize the human mistake and autonomously give an intelligent response, rather than blindly performing the grasp task.
## III Method
### _QwenGrasp Overview_
As shown in Fig. 2, a human gives an instruction to the QwenGrasp system specifying the desired object. The system automatically calls an RGB-D camera to get the RGB image and point cloud data of the workspace. After confirming the correctness and feasibility of the instruction, the QwenGrasp system outputs the final grasp pose of the target object. Our proposed QwenGrasp is mainly composed of Qwen-VL [18] and REGNet [3]. Qwen-VL accepts human instructions and RGB images of the workspace as inputs. It can understand textual human language and images, match objects in the image with the human instruction, and output a bounding box on the input RGB image to represent the location of the target object. The preloaded prompts are the prompts input to Qwen-VL before we use QwenGrasp, aiming to make Qwen-VL suitable for the grasping task. REGNet [3] is a 6-DoF grasping network. By inputting the 3D point cloud data of the scene to REGNet, candidate grasp poses are generated for the workspace. Finally, the grasp pose filter uses the detection box to remove the grasp poses that do not grasp the target object, and outputs the final grasp pose.
### _Preloaded Prompts_
Since Qwen-VL [18] is a versatile vision-language model, we input prompts designed in advance to the model in order to make it more suitable for target-oriented grasping tasks. Our preloaded prompts have two main purposes: 1) guide the versatile large model to match the given instruction and the image, and locate the target object; 2) limit the output of the large model so that it focuses on the grasping task. We do not set any restrictions on human instructions, and allow the large model to flexibly respond to a wide variety of human inputs. With the preloaded prompts, the model classifies human instructions into three categories: 1) the instruction is a target-oriented grasp instruction and QwenGrasp can find the target; 2) the instruction is an erroneous target-oriented grasp instruction and there is no corresponding target object in the workspace; 3) the instruction is irrelevant to the target-oriented grasping task. With the guidance of the preloaded prompts, Qwen-VL directly outputs the detected bounding box of the target object when it receives a correct task instruction. In the face of erroneous instructions or instructions irrelevant to the grasping task, no detection bounding box is output. Instead, QwenGrasp reminds the human to check whether the input is correct.

Fig. 2: **QwenGrasp Overview**. Given the textual instruction, our system acquires the RGB image and the point cloud data of the workspace. The pre-trained Qwen-VL [18] generates the bounding box of the target object, and REGNet [3] generates the candidate grasp poses of the scene. The grasp pose filter is used to crop the infeasible poses and output the final grasp pose.
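The exact wording of our preloaded prompts is not reproduced here; the following is only a hypothetical sketch of the kind of system prompt that could be preloaded into a vision-language model to obtain the three-way classification and the bounding-box behavior described above (the wording and the box format are assumptions, not the actual prompt):

```python
# Hypothetical preloaded prompt; the actual prompt used in QwenGrasp may differ.
PRELOADED_PROMPT = """
You are the language front-end of a robot-arm grasping system.
You will receive an image of the workspace and a human instruction.
Classify the instruction into exactly one category:
1) GRASP: a target-oriented grasp instruction whose target is visible in the image.
   Respond only with the target label and its bounding box, e.g. <box>(x1,y1),(x2,y2)</box>.
2) ERRONEOUS: a grasp instruction whose target is not present in the workspace.
   Do not output a bounding box; ask the human to check the instruction.
3) IRRELEVANT: an instruction unrelated to grasping.
   Do not output a bounding box; reply helpfully and remind the human of your role.
Never invent a bounding box for an object that is not in the image.
"""
```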
### _Pre-trained Models_
Qwen-VL [18] is a large vision-language model, and the pre-trained Qwen-VL is used in our method to obtain the bounding box of the target object. Qwen-VL is based on the large language model Qwen-7B. Qwen-VL has a total of 9.6B parameters, which is the sum of the parameters of Qwen-7B, the vision transformer, and the cross-attention model. Qwen-VL accepts text and images as input. It shows remarkable performance in image captioning, question answering, visual localization, and flexible interaction tasks. Our QwenGrasp takes advantage of Qwen-VL's powerful visual localization capability to locate the target object. By matching the instruction and the specified grasping target on the input RGB image of the workspace, QwenGrasp obtains the 2D spatial position of the target object. This solves one of the most serious problems in target-oriented grasping, namely locating the target object in the image given a textual instruction. Meanwhile, as a large vision-language model, Qwen-VL has strong dialogue and image understanding abilities. It also responds well to complex or flexible instructions and can accept any human instruction and respond to it. Instead of blindly executing tasks given erroneous instructions, QwenGrasp reminds humans of the error and gives suggestions.
REGNet [3] is a 6-DoF grasping network, and we use the pre-trained REGNet to guide robot arm grasping. It can be used in unstructured scenes and handles novel objects. REGNet [3] has a three-level structure, comprising the Score Network (SN), the Grasp Region Network (GRN), and the Refine Network (RN). This structure ensures the diversity and stability of the candidate grasp poses, and thereby the grasping success rate. Our QwenGrasp utilizes REGNet to generate a large number of candidate grasp poses. Each candidate grasp pose has a corresponding score representing the grasp quality, which greatly facilitates the final grasp pose selection.

Fig. 3: **Six-Dimension Experiments. On the left side of the whole picture, it shows the experiment equipment and the workspace. We use a COBOTTA robot arm with an Intel RealSense D435 RGB-D camera. Some objects commonly seen in daily life are placed on the workspace. On the right side of the picture, it shows examples of the six-dimension experiments, each showing a different kind of instruction. We use a red five-pointed star to mark the human's intended target. The response from Qwen-VL is shown by drawing the bounding box on the image. The final grasp pose selected is shown on the point cloud data as a blue grasp pose. For the special cases of Erroneous Instruction and Irrelevant Instruction, where there is no target object in the image, we show the textual response of Qwen-VL.**
### _Grasp Pose Filter_
The grasp pose filter is used to select the best grasp pose on the target object. Its inputs are the bounding box from Qwen-VL and the candidate grasp poses from REGNet, and its output is the final grasp pose of the target object. After that, our robot arm performs the grasping task according to this final grasp pose. The grasp pose filter converts the 2D bounding box on the workspace image into the 3D point cloud space and eliminates the candidate grasp poses outside the bounding box. Finally, according to the scores of the remaining candidate grasp poses, the one with the maximum score is selected as the final grasp pose.
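A minimal sketch of this filtering step is given below. For simplicity, it checks the candidates by projecting their 3D positions into the image with a pinhole model instead of lifting the box into 3D; camera intrinsics and extrinsics are assumed to be known, and all names are illustrative:

```python
import numpy as np

def filter_grasps_by_box(grasp_poses, scores, K, T_cam_world, box_2d):
    """Keep grasp candidates whose 3D position projects inside the 2D bounding box,
    then return the highest-scoring one.

    grasp_poses: (N, 3) array of candidate grasp positions in world coordinates.
    scores:      (N,) array of grasp quality scores from the grasp network.
    K:           (3, 3) camera intrinsic matrix.
    T_cam_world: (4, 4) transform from the world frame to the camera frame.
    box_2d:      (x_min, y_min, x_max, y_max) target bounding box in pixels.
    """
    pts_h = np.hstack([grasp_poses, np.ones((len(grasp_poses), 1))])
    pts_cam = (T_cam_world @ pts_h.T)[:3]           # 3 x N points in the camera frame
    uv = K @ pts_cam                                # pinhole projection
    uv = uv[:2] / uv[2]                             # normalize by depth
    x_min, y_min, x_max, y_max = box_2d
    inside = (uv[0] >= x_min) & (uv[0] <= x_max) & (uv[1] >= y_min) & (uv[1] <= y_max)
    inside &= pts_cam[2] > 0                        # keep points in front of the camera
    if not inside.any():
        return None                                 # no feasible grasp on the target
    idx = np.argmax(np.where(inside, scores, -np.inf))
    return grasp_poses[idx], scores[idx]
```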
## IV Experiments
We evaluate QwenGrasp in the real world. As shown in Fig. 3, our real-world experiment platform involves a COBOTTA robot arm with a 2-finger parallel-jaw gripper to grasp objects and an Intel RealSense D435 RGB-D camera fixed on the robot arm to acquire workspace RGB-D images at a resolution of 640 x 480. We prepared different objects that are commonly seen in household and office scenarios to test how our approach performs in an environment familiar to most humans. In order to better demonstrate the effect of our QwenGrasp, we designed a six-dimension experiment to test it comprehensively: 1) Common Instruction; 2) Complex Instruction; 3) Vague Instruction; 4) Direction Perception; 5) Erroneous Instruction; 6) Irrelevant Instruction. We already have some preliminary results.
### _Common Instruction_
The purpose of the experiment with common instructions is to test the basic functionality of QwenGrasp. In target-oriented tasks, QwenGrasp needs to understand both the human instruction and the image of the workspace, completing the task of matching the instruction text to the target object in the image. In this experiment, we assume that humans know that QwenGrasp is a target-oriented grasping combination model based on a large vision-language model, so the input instructions are simple and straightforward, for example, "Give me the mug." or "please grasp me that black pen.". In these experiments, we demonstrate the ability to match images and instructions in a grasping environment. QwenGrasp has remarkable object recognition ability and easily copes with common objects in daily life.
### _Complex Instruction_
The purpose of this experiment is to test whether QwenGrasp can understand hidden human intention when given complex instructions. In the face of long sentences, instructions that contain misleading information, or complex instructions where only a small amount of valid information is hidden in a large number of irrelevant statements, we test whether QwenGrasp can recognize the human intention and correctly find the target object to grasp. Complex instructions are typically long texts, for example, "If you look 7 objects, give me the case at most left. Else, give me the case at most right." In such experiments, the results demonstrate the ability of QwenGrasp to understand complex sentences and extract human intentions in the grasping environment, and to accurately match the target object given a complex instruction.
### _Vague Instruction_
In many cases, we do not know the names of all the items. We may not even be on-site at the workspace, and can only control the robot to grasp objects through language. In such cases, we can only describe the target object through descriptive words, for example, "Give me object which could hold water." or "give me the longest object in that table." We call such instructions vague instructions. The purpose of this experiment is to test how QwenGrasp chooses in a grasping environment where the object is not explicitly specified. Our results show that QwenGrasp has remarkable understanding ability and always chooses the most suitable object.
### _Direction Perception_
Qwen-VL has the ability to understand images, so we also test this feature. In some scenes, we may only know the names of some objects, while other objects are difficult to describe in words. In these cases, if the target object is next to a known object, people can express their intention by coupling the information about the known object with location information, for example, "Give me the object between the mug and the bottle." or "give me the object at the lower right corner of the scene.". This experiment shows that QwenGrasp has great direction perception ability in the grasping environment, and it can grasp items just by combining direction information with object information.
### _Erroneous Instruction_
Sometimes, humans make the mistake of specifying a target object that does not exist in the workspace. In such cases, forcing the system to find the most similar item and blindly performing the grasp task will only lead to fatal errors. The correct approach is to suspend grasping and explain to the human that there is no such target object in the workspace and that the target should be changed. In this experiment, diverse items are placed in the workspace, but the instruction specifies a different item, i.e., we give erroneous instructions in an attempt to mislead QwenGrasp. As expected, the results show that when faced with erroneous instructions, QwenGrasp understands that there is no target object in the scene that the human wants. It does not output a grasp bounding box for the target item and suspends the grasping task.
### _Irrelevant Instruction_
Similar to the case of erroneous commands, the robot arm should not blindly perform the grasping task when the human gives a command that is irrelevant to grasping. However, such instructions may be questions related to QwenGrasp itself, for example, "who are you?" or "what can you do?". Faced with these questions, the expected behavior is to give an answer that helps people use QwenGrasp. There is another kind of irrelevant instruction that is simply sent to the robot by mistake. Faced with these cases, QwenGrasp should not execute the grasp task but reply with a proper response. The results of our experiments show that when faced with irrelevant instructions, QwenGrasp suspends the grasping task and communicates with humans according to the instructions to answer their questions.
## V Conclusion
In this work, we propose a combination model called QwenGrasp for target-oriented grasping in the open world. QwenGrasp shows remarkable performance in all six dimensions of the experiments. Not only can it understand human instructions, find target objects, and handle complex and vague instructions, but it can also perceive direction information of the workspace. Furthermore, in the face of erroneous and irrelevant instructions, QwenGrasp is aware of them and intelligently suspends the grasping task. Besides, QwenGrasp can communicate with humans, confirm the correctness of instructions, and help humans solve problems, preventing blind execution of the grasping task. The experimental results show that with the help of the large vision-language model, QwenGrasp can easily detect and grasp objects in target-oriented grasping tasks. By combining the grasping task with the large vision-language model, people can freely use QwenGrasp, which significantly improves the safety and versatility of grasping.
In future work, we will improve QwenGrasp with the aim of operating in stacked scenes, and will expand its functionality to find hidden objects. By improving the interaction between humans and robots, we want to guide the robot arm to better conduct the grasping task through communication. Furthermore, we will also try to use large models to guide the work of multiple robot arms, exploring the possibility of multi-robot collaboration.
|
構造物に存在する不確定な環境において、言語制御を伴うターゲット指向の grasping は、知的ロボットアームの grasping に必須です。ロボットアームが人間言語を理解し、それに応じた grasping を実行できる能力は、重要な課題です。この論文では、大規模な視覚言語モデルと 6自由度 graspingニューラルネットワークを組み合わせたモデル QwenGrasp を提案しました。QwenGrasp は、テキスト言語指示で対象物を6自由度で grasping することができます。六自由度指示に基づいた完全な実験を行い、QwenGrasp が異なるケースに対してどのように機能するかをテストしました。結果表明、QwenGrasp は人間意図を理解する能力が優れている。曖昧な指示や指示の記述語を用いた指示、方向情報を用いた指示に対しても、対象物を正確に grasping することができます。QwenGrasp が grasping のタスクに適さない指示を受け取ると |
2309.06635 | Collaborative Dynamic 3D Scene Graphs for Automated Driving | Maps have played an indispensable role in enabling safe and automated
driving. Although there have been many advances on different fronts ranging
from SLAM to semantics, building an actionable hierarchical semantic
representation of urban dynamic scenes and processing information from multiple
agents are still challenging problems. In this work, we present Collaborative
URBan Scene Graphs (CURB-SG) that enable higher-order reasoning and efficient
querying for many functions of automated driving. CURB-SG leverages panoptic
LiDAR data from multiple agents to build large-scale maps using an effective
graph-based collaborative SLAM approach that detects inter-agent loop closures.
To semantically decompose the obtained 3D map, we build a lane graph from the
paths of ego agents and their panoptic observations of other vehicles. Based on
the connectivity of the lane graph, we segregate the environment into
intersecting and non-intersecting road areas. Subsequently, we construct a
multi-layered scene graph that includes lane information, the position of
static landmarks and their assignment to certain map sections, other vehicles
observed by the ego agents, and the pose graph from SLAM including 3D panoptic
point clouds. We extensively evaluate CURB-SG in urban scenarios using a
photorealistic simulator. We release our code at
http://curb.cs.uni-freiburg.de. | Elias Greve, Martin Büchner, Niclas Vödisch, Wolfram Burgard, Abhinav Valada | 2023-09-12T22:54:30 | http://arxiv.org/abs/2309.06635v3 | # Collaborative Dynamic 3D Scene Graphs for Automated Driving
###### Abstract
Maps have played an indispensable role in enabling safe and automated driving. Although there have been many advances on different fronts ranging from SLAM to semantics, building an actionable hierarchical semantic representation of urban dynamic scenes from multiple agents is still a challenging problem. In this work, we present Collaborative URBan Scene Graphs (CURB-SG) that enable higher-order reasoning and efficient querying for many functions of automated driving. CURB-SG leverages panoptic LiDAR data from multiple agents to build large-scale maps using an effective graph-based collaborative SLAM approach that detects inter-agent loop closures. To semantically decompose the obtained 3D map, we build a lane graph from the paths of ego agents and their panoptic observations of other vehicles. Based on the connectivity of the lane graph, we segregate the environment into intersecting and non-intersecting road areas. Subsequently, we construct a multi-layered scene graph that includes lane information, the position of static landmarks and their assignment to certain map sections, other vehicles observed by the ego agents, and the pose graph from SLAM including 3D panoptic point clouds. We extensively evaluate CURB-SG in urban scenarios using a photorealistic simulator. We release our code at [http://curb.cs.uni-freiburg.de](http://curb.cs.uni-freiburg.de).
## I Introduction
Spatial and semantic understanding of the environment is crucial for the safe and autonomous navigation of mobile robots and self-driving cars. Recent autonomy systems leverage high-definition (HD) map information as effective priors for several downstream tasks in automated driving (AD) including perception [1], localization [2], planning [3], and control [4]. HD maps are often constructed and maintained in a top-down manner [5], i.e., relying on traffic authorities or via arduous labeling efforts. In contrast, automatic bottom-up AD mapping approaches show high accuracy [6, 7] while being limited to occupancy or semantic mapping using, e.g., dense voxel grid manifolds. With respect to AD, map representations should ideally fulfill the following requirements [8]: 1) completeness and accuracy while scaling to large areas; 2) frequent updates to capture structural changes; 3) higher-level topological information grounded in rich sensor data; 4) efficient access and information querying. Given these requirements, typical SLAM maps only enable classical spatial or point-level semantic querying. We envision that modern AD mapping approaches should provide the means to process vision and language queries, e.g., from foundation models [9]. Enabling such demands can only become feasible by abstracting from given maps using sparse representations.
In this work, we propose _Collaborative URBan Scene Graphs_ (CURB-SG) that effectively address the aforementioned requirements by constructing a hierarchical graph structure of the environment as shown in Fig. 1. 3D scene graphs enable efficient data storage of large environments while being queryable and preserving spatial information. Previous works on 3D scene graphs [10, 11, 12] focus on indoor environments, whose taxonomy cannot be directly transferred to large-scale urban domains. To close this gap, we introduce the following analogy to indoor variants: Cities (buildings) can be separated into intersections and roads (rooms), which contain static landmarks such as traffic signs (furniture) as well as dynamic objects such as vehicles (humans). We enable this partitioning by generating an online lane graph that serves as a common link among multiple graph layers. Addressing frequent updates and multi-agent cooperation, our method leverages a centralized collaborative SLAM approach that combines panoptic LiDAR data and local odometry estimates into a single 3D map while optimizing a global pose graph that benefits from inter-agent loop closures. Following the spirit of previous works on scene graphs [10, 11, 12], we extensively evaluate our proposed method on simulated data using the CARLA simulator [13].

Fig. 1: For our proposed collaborative urban scene graphs (CURB-SG), multiple agents send keyframe packages with their local odometry estimates and panoptic LiDAR scans to a central server that performs global graph optimization. We subsequently partition the environment based on a lane graph from agent paths and other detected cars. Together with the 3D map, the lane graph forms the base of the large-scale hierarchical scene graph.
To summarize, the main contributions are as follows:
1. We introduce a novel algorithm for representing urban driving environments as dynamic 3D scene graphs that are constructed from multi-agent observations to efficiently cover large areas.
2. We demonstrate an effective partitioning of urban environments using lane graphs constructed on the fly from panoptic LiDAR observations in a cooperative manner.
3. We present an efficient collaborative graph SLAM method to continuously update semantic maps while addressing scalability via edge contraction.
4. We provide extensive evaluations of the building blocks of our proposed framework.
5. We make our code and sample data publicly available at [http://curb.cs.uni-freiburg.de](http://curb.cs.uni-freiburg.de).
## II Related Work
In this section, we first present a summary of LiDAR-based odometry and mapping, followed by an overview of multi-agent SLAM, and scene graphs in automated driving (AD).
_LiDAR SLAM:_ LiDAR-based mapping has been pioneered by LOAM [14] that estimates robot motion from scan registration via ICP between subsequent point clouds. To address the full SLAM problem, HDL Graph SLAM [7] combines LiDAR odometry with local loop closure detection and performs joint pose graph optimization. Leveraging semantic segmentation, SUMA++ [6] masks dynamic classes during the mapping stage and proposes a semantic-aided variant of ICP. PADLoC [15] exploits panoptic segmentation during training to stabilize both loop closure detection and registration. In this work, we use panoptic point clouds to generate a large-scale semantic 3D map forming the base layer of our scene graph.
_Collaborative SLAM:_ To cover large environments and to increase mapping speed, SLAM research begins to shift towards multi-agent methods [16]. Generally, collaborative SLAM can be realized in a centralized or distributed manner. Initial works such as C\({}^{2}\)TAM [17] belong to the centralized category, performing global bundle adjustment on a server and localization on the clients. A similar paradigm is adopted by CVI-SLAM [18] and COVINS [19], proposing visual-inertial (VI) SLAM systems for a fleet of UAVs. While the robots run local VI odometry, a central server collects this information, searches for inter-agent loop closures to perform global optimization, and removes redundant data. With respect to LiDAR SLAM, LAMP 2.0 [20] allows collaboration between different types of robots to map large-scale underground environments. A similar use case is addressed by Swarm-SLAM [21], which supports further sensor modalities. Following a distributed paradigm, information is directly shared between the agents using peer-to-peer communication. Kimera-Multi [22] is a VI SLAM method that includes semantic information in the generated 3D mesh. For data fusion, it employs distributed pose graph optimization (PGO). Finally, DisCo-SLAM [23] proposes a LiDAR-based approach addressing the initially unknown relative position of the agents. For this, they use Scan Context [24] descriptors for global loop closure detection without spatial priors. In this work, we follow the centralized paradigm since we leverage collaborative SLAM to generate a single consistent scene graph that can be made available to other traffic participants to query information.
_Scene Graphs for Automated Driving:_ 3D scene graphs constitute an effective interface unifying pose graphs from large-scale mapping and local information [25] such as frame-wise object detections [26], topological mapping [27, 28, 29], or semantic segmentation [30, 31]. Additionally, graphs enable the structural disassembly of large-scale scenes into objects and their relationships and facilitate higher-level reasoning, e.g., in the vision and language domain [32]. This further allows for efficient hierarchical abstraction in both spatial and semantic regimes [12, 33]. So far, 3D scene graphs for environment representation have only been applied in indoor domains. The first work in this field [12] proposes an offline, multi-layered hierarchical representation based on RGB images. Kim _et al._[34] were the first to generate 3D scene graphs from RGB-D images for visual question answering [35] and task planning. Using a learning-based pipeline, Wald _et al._[36] construct a 3D scene graph from an instance-segmented point cloud while predicting node and edge semantics in an offline manner. Rosinol _et al._[33] present an offline framework capable of generating hierarchical scene graphs from dynamic indoor scenes that are divided into buildings, rooms, places, objects, and agents, as well as a metric-semantic mesh. Different from the aforementioned frameworks, Hydra [11], SceneGraphFusion [37], and S-Graphs [25] present real time-capable approaches. While Hydra does not tightly couple the optimized pose graph with the 3D scene graph, the non-hierarchical S-Graphs [25] close this gap. The follow-up work S-Graphs+ [10] also encodes hierarchies. In this work, we combine collaborative SLAM and 3D scene graphs to build hierarchical maps for AD. To the best of our knowledge, our work constitutes the first approach to 3D scene graph construction of urban driving scenes with a tightly coupled integration of inter-agent loop closures. Furthermore, we show how multi-agent cooperation facilitates frequent map updates and completeness.
## III Technical Approach
In this section, we present our CURB-SG approach for collaborative urban scene graphs. As illustrated in Fig. 2, CURB-SG is comprised of several components. In Sec. III-A, we describe our approach for collaborative SLAM to effectively combine panoptic information. Here, multiple agents transmit their onboard LiDAR odometry estimates along with panoptic point clouds to a central compute unit. This server combines the data by detecting intra- and inter-agent loop closures and performs pose graph optimization (PGO) to generate a globally consistent 3D map. In Sec. III-B, we propose to further aggregate the paths of the agents
and other observed vehicles to extract an online lane graph allowing for partitioning the city into intersections and roads. Finally, the server registers dynamic traffic participants on the lane graph and generates a hierarchical scene graph by assigning static landmarks to the closest intersection or road.
### _Collaborative SLAM_
We leverage collaborative LiDAR SLAM as the backend in our proposed CURB-SG. Due to its reliable performance and well-maintained code base, we build on top of HDL Graph SLAM [7] and extend it to a multi-agent scenario following a centralized approach as described in Sec. II. In this section, we describe the steps performed by each agent, followed by the centralized PGO as depicted in Fig. 2. Finally, we provide further details on how CURB-SG explicitly addresses both long-term and large-scale mapping.
_Agents:_ Each agent is equipped with a LiDAR sensor to capture sparse 3D point clouds, which contain spatial information as well as point-wise panoptic segmentation labels. Initially, a point cloud is separated into its static and dynamic components following the conventional categorization of "stuff" and "thing" classes [38]. Similar to SUMA++ [6], we use only the static points for constructing the map. In contrast to HDL Graph SLAM [7], we utilize different voxel grid sizes for the various semantic classes. This approach retains more dense information where required, e.g., poles and traffic signs are being processed at a more fine-grained level than roads or buildings. Next, we perform point cloud registration via FAST-GICP [39] between subsequent LiDAR scans to estimate the motion of an agent. Following the common methodology and to reduce the required bandwidth between the agents and the server, we generate keyframes after a specified traveled distance based on LiDAR odometry. Each keyframe is sent to the server and contains an estimated pose and the static LiDAR point cloud with semantic labels, i.e., the "stuff" points. Since car instances contribute to the online construction of a lane graph (see Sec. III-B), the "thing" points from all the LiDAR scans are transformed relative to the pose of the previous keyframe and sent separately.
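A minimal sketch of the class-dependent voxelization described above is given below; the per-class voxel sizes and class names are illustrative placeholders, not the values used in CURB-SG:

```python
import numpy as np

# Illustrative per-class voxel edge lengths in meters (placeholders, not CURB-SG's values).
VOXEL_SIZE = {"road": 0.5, "building": 0.3, "pole": 0.05, "traffic_sign": 0.05}

def semantic_voxel_downsample(points, labels):
    """Downsample a labeled point cloud with a class-dependent voxel grid.

    points: (N, 3) array of static ("stuff") points.
    labels: length-N sequence of semantic class names, one per point.
    """
    kept = []
    labels = np.asarray(labels)
    for cls in np.unique(labels):
        cls_pts = points[labels == cls]
        size = VOXEL_SIZE.get(cls, 0.2)
        # Keep one representative point (the first encountered) per occupied voxel.
        voxel_idx = np.floor(cls_pts / size).astype(np.int64)
        _, first = np.unique(voxel_idx, axis=0, return_index=True)
        kept.append(cls_pts[first])
    return np.vstack(kept)
```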
_Server:_ The centralized server receives keyframes from all the agents and processes them in the following manner: First, upon receiving the first keyframe sent by an agent, the server registers this agent to the global pose graph. Second, the server searches for loop closure candidates between the added keyframe and the existing nodes in the pose graph to find both intra- and inter-agent loop closures. We rely on the original loop closure detection technique of HDL Graph SLAM [7], i.e., all nodes within a local search radius are considered to be candidates. If the fitness score of the ICP algorithm is below a threshold, a loop closure edge is added to the pose graph. Due to relying on an initial guess, we utilize the absolute ground truth value for the registration of a new agent. In practice, this could either be solved with GNSS measurements or by conducting an efficient global search for loop closure candidates leveraging point cloud descriptors [23]. Third, the server performs PGO using g2o [40] to integrate the newly added keyframes and detected loop closures. To address scalability, we employ edge contraction as detailed in the following paragraph. Finally, we apply the same semantics-based voxelization to the entire 3D map as performed by the agents on their local LiDAR scans.
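The radius-based candidate search and fitness check on the server side can be sketched as follows; `register` stands in for the FAST-GICP scan matcher, and the thresholds and field names are illustrative assumptions rather than the actual configuration:

```python
import numpy as np

def find_loop_closures(new_kf, pose_graph, register,
                       search_radius=15.0, fitness_thresh=0.3, min_travel=30.0):
    """Propose loop-closure edges between a new keyframe and nearby graph nodes.

    new_kf:     keyframe with fields .position (3,), .cloud, .agent_id, .accum_dist
    pose_graph: iterable of existing keyframes with the same fields.
    register:   scan matcher returning (relative_pose, fitness); FAST-GICP in CURB-SG.
    """
    edges = []
    for kf in pose_graph:
        close = np.linalg.norm(kf.position - new_kf.position) < search_radius
        # Avoid trivially matching a keyframe against its immediate predecessors.
        far_in_travel = (kf.agent_id != new_kf.agent_id or
                         abs(new_kf.accum_dist - kf.accum_dist) > min_travel)
        if close and far_in_travel:
            relative_pose, fitness = register(new_kf.cloud, kf.cloud)
            if fitness < fitness_thresh:   # low ICP fitness score = good alignment
                edges.append((new_kf, kf, relative_pose))
    return edges
```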
_Long-Term and Large-Scale Mapping:_ If not handled explicitly, the pose graph would continue to grow while the mapping progresses. Since every keyframe contains a 3D point cloud, this not only significantly slows down the PGO but also increases memory consumption and disk storage. To address both problems, we remove the nodes and edges from the graph that carry redundant information. In Fig. 3, two agents have driven along the same road yielding multiple loop closures. Using a heuristic-driven approach, the loop closure edges that carry redundant information are being contracted by merging nodes. By redirecting the edges of the omitted to the remaining node, we ensure the legal connectivity of the pose graph. Notably, this is done after the PGO step. Consequently, the final pose graph becomes easier to maintain and more efficient to query when searching for new loop closures. The point cloud data associated with a removed node is combined with the data of the persisting node while omitting older data to guarantee up-to-date map information. In contrast, the dynamic observations linked to a node are completely transferred as they contribute towards the construction of the lane graph explained in Sec. III-B. For the same reason, each
Fig. 2: Overview of CURB-SG: Multiple agents obtain panoptically segmented LiDAR data and provide an odometry estimate based on the static parts of the point cloud. A centralized server instance then performs pose graph optimization (PGO) including inter-agent loop closure detection and edge contraction based on the agents’ inputs. Tightly coupled to the pose graph, we aggregate a lane graph from panoptic observations of other vehicles as well as the agent’s trajectories. Next, the lane graph is partitioned to retrieve a topological separation that allows for the hierarchical abstraction of larger environments.
removed node is turned into a passive observation that stores the driven path of an ego agent.
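The edge-contraction step can be illustrated with a toy pose-graph structure as below. This is a schematic sketch of the merge-and-redirect idea, not the g2o-based implementation used on the server.

```python
from collections import defaultdict

class PoseGraph:
    """Toy pose graph: nodes hold point clouds, edges are constraints."""
    def __init__(self):
        self.points = {}                 # node id -> aggregated point cloud
        self.edges = defaultdict(set)    # node id -> set of neighbour ids

    def contract(self, old: int, new: int):
        """Merge `old` into the more recently added node `new` after PGO.

        Edges of the removed node are redirected to the remaining node so the
        graph stays connected; the older point data is dropped in favour of
        the newer node's data to keep the map up to date.
        """
        for nb in self.edges.pop(old, set()):
            self.edges[nb].discard(old)
            if nb != new:
                self.edges[nb].add(new)
                self.edges[new].add(nb)
        self.points.pop(old, None)  # keep only the newer observation
```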
### _Scene Graph Generation_
The second key component of CURB-SG is a scalable environment representation of urban outdoor scenes for AD. Besides the aforementioned 3D semantic map, CURB-SG constructs a tightly-coupled hierarchical abstraction of the environment as shown in Fig. 2. By analogy with the separation of indoor scenes into buildings, rooms, and places [33, 12, 11], we decompose a constructed lane graph into intersecting and non-intersecting road areas allowing for spatial and semantic abstraction.
The root of our CURB-SG representation is given by _Layer A_ that holds environment/city-level information. This environment is then spatially divided into intersections and their connecting roads (_Layer B_), which serve as the categorical counterparts to rooms and corridors in indoor scenes. Since the partitioning of our environment is based on a lane graph (presented in _Layer E_), the connectivity of _Layer B_ is implicitly given by the connectivity of the lane graph (colored segments, Fig. 2). Next, we map static landmarks such as traffic signs and poles contained in _Layer C_ including their bounding box to their corresponding spatial area defined by _Layer B_. These landmarks can serve as priors for localization or object detection. _Layer D_ holds all currently observed dynamic vehicles. We map dynamic vehicles to their closest respective lane graph node, as defined in _Layer E_, to provide efficient access for downstream tasks, e.g., trajectory prediction. Central to this approach, _Layer E_ is a directed lane graph to encode the low-level topology for vehicle navigation and is inferred from the paths of the ego agents as well as other perceived vehicles. We provide further details in the next paragraph. The lane graph defines the connectivity of the different spatial regions in the urban environment, comparable to edges among rooms in indoor scene graph variants. Finally, _Layer F_ contains the pose graph from our SLAM backend and encodes LiDAR data in the form of semantic point clouds. As discussed in Sec. III-A, this layer is subject to continuous optimization and dynamic restructuring, e.g., due to loop closure detection and edge contraction. Based on the edges between the keyframes in this layer and spatial areas (_Layer B_), 3D map information is easily accessible given a rough road-level position estimate.
_Lane Graph Generation:_ We generate a lane graph of the environment leveraging the trajectories of the ego agents as well as observations of surrounding vehicles. As the LiDAR point clouds of the agents contain instance IDs, we are able to differentiate between multiple observed vehicle instances in the agents' surroundings. For each observed vehicle, we extract the centroid of its partial point cloud. The position of a centroid is stored relative to the most recent keyframe. After transmitting the data to the server, the position of this dynamic observation can be retrieved given the link to its corresponding keyframe. Consequently, the positions of all the dynamic observations benefit from continuous keyframe updates due to PGO as depicted in Fig. 2. To evenly sample paths, we further filter the observations using both hand-crafted heuristics and DBSCAN [41] based on timestamps, angles, and relative displacements. This is particularly important for stationary and occluded objects as well as outliers caused by odometry noise. Following an iterative yaw-respective aggregation scheme [27], we convert all trajectories into directed graphs, apply Laplacian smoothing, and merge them to build a complete lane graph. Employing the same processing scheme, we add agent trajectories to this graph. Since CURB-SG maintains a connection between the lane graph and the keyframes used in SLAM, we can continuously propagate refinements from PGO to the lane graph.
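The sketch below illustrates two of the steps described above: centroid extraction for an observed vehicle and outlier removal with DBSCAN. It is a simplified, spatial-only variant with assumed clustering parameters; the full system additionally filters on timestamps, angles, and relative displacements.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def vehicle_centroid(points: np.ndarray) -> np.ndarray:
    """Approximate a vehicle's position by the centroid of its partial cloud."""
    return points.mean(axis=0)

def filter_observations(xy: np.ndarray, eps: float = 1.0, min_samples: int = 3):
    """Drop isolated outliers (e.g., caused by odometry noise) before the
    observed trajectories are aggregated; eps and min_samples are illustrative."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(xy)
    return xy[labels != -1]
```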
_Spatial Partitioning:_ Urban outdoor driving scenes exhibit a vastly different topology compared to indoor environments that have been represented using scene graphs so far. We found that classical methods such as wall dilation for retrieving disjoint environment graphs [11] are not directly applicable to urban environments. In our work, we propose to separate outdoor environments into intersecting and non-intersecting areas using the obtained lane graph (see above). Ultimately, this gives rise to the hierarchical environment abstraction introduced in CURB-SG enabling efficient querying for downstream tasks such as trajectory prediction. In particular, we detect intersections based on the following heuristics: First, we cluster high-degree lane graph nodes to find agglomerations of graph splits and merges. Second, we detect lane graph edges that intersect. These two approaches can be applied to various environments to efficiently handle challenging conditions such as multi-lane roads or non-trivial intersections. After identifying intersection nodes, the remaining disconnected sub-graphs fall into non-intersecting road areas. To assign components from other layers of the scene graph to the extracted partitions, we extend these areas beyond the lane node surroundings as illustrated in Fig. 2.
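A simplified version of the intersection heuristics could look like the following NetworkX sketch, where high-degree lane nodes are treated as intersection candidates and their removal yields the non-intersecting road areas; the degree threshold is an assumed value, and the edge-intersection test is omitted.

```python
import networkx as nx

def intersection_nodes(lane_graph: nx.DiGraph, min_degree: int = 3):
    """Heuristic 1: nodes where lanes split or merge have a high total degree."""
    return {n for n in lane_graph.nodes
            if lane_graph.in_degree(n) + lane_graph.out_degree(n) >= min_degree}

def road_partitions(lane_graph: nx.DiGraph, intersections):
    """Removing the intersection nodes leaves connected components that
    correspond to the non-intersecting road areas."""
    remaining = lane_graph.to_undirected()
    remaining.remove_nodes_from(intersections)
    return list(nx.connected_components(remaining))
```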
## IV Experimental Evaluation
In this section, we evaluate CURB-SG with respect to the collaborative SLAM backend, the constructed lane graph, and the proposed partitioning based on road intersections.
### _Experimental Setup_
We evaluate CURB-SG on various urban driving scenarios using the CARLA simulator [13] due to a lack of real-world
Fig. 3: In this example, two agents drive along the same road while passing each other at the dashed line. The detected loop closures yield additional edges in the pose graph. After optimization, the edges that carry redundant information are contracted by merging the older node into the more recently added node to update the map information.
multi-agent datasets providing LiDAR scans. In particular, we perform experiments on a set of four diverse environments including _town01_, _town02_, _town07_, and _town10_. Following previous works [11], we use the panoptic annotations with temporally consistent instance IDs provided by the simulator. Where applicable, we demonstrate the efficacy of CURB-SG for one, two, and three agents and average results over ten randomly initialized runs. Due to the semantics-based voxelization on the server, the total number of map points of a fully explored town is relatively stable. As the path planning of the agents is randomized, it can take a long time until this number is reached. Therefore, we approximate full exploration by using \(85\,\%\) as the termination criterion.
### _Collaborative SLAM_
In this section, we evaluate the collaborative SLAM backend of our proposed CURB-SG with respect to both accuracy and cooperative gain in long-term scenarios.
_Mapping and Localization_: In Tab. I, we present the root mean squared errors (RMSE) of the agents' keyframes and the estimated position of the street signs to represent localization and mapping accuracy, respectively. We compute the position of a street sign as the geometric center of the corresponding bounding box that is inferred from the 3D map. We observe that both errors are reduced when more agents contribute towards the collaborative pose graph. Except for the case of two agents in _town07_, this holds true for the mean as well as the standard deviation across all environments. We further illustrate the robustness of our approach against noisy sensor data by imposing realistic metric Gaussian noise \(\mathcal{N}(0,0.02)\) on the LiDAR scans [42] of the agents in _town01_ and _town02_. As shown in Tab. I, the noise does not significantly alter the errors indicating that downstream tasks such as lane graph estimation do not degrade either.
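For reference, the keyframe and street-sign errors reported in Tab. I are standard RMSE values over estimated versus ground-truth positions; a minimal helper, assuming matched position arrays, is:

```python
import numpy as np

def rmse(estimated: np.ndarray, ground_truth: np.ndarray) -> float:
    """Root mean squared error between N estimated and ground-truth positions."""
    err = np.linalg.norm(estimated - ground_truth, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))
```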
_Long-Term Mapping:_ We demonstrate the efficacy of our proposed adaptations of HDL Graph SLAM [7] (see Sec. III-A) to address long-term mapping of large areas. In the rightmost column of Tab. I, we report the time required to map a town when using one, two, or three agents. Generally, the higher the number of contributing agents, the shorter the time required to explore the map. Similarly, in Fig. 4, we illustrate the mapping progress measured by the number of 3D points versus the simulation steps. While the results confirm the aforementioned general trend towards faster exploration in a multi-agent setup, the pure mapping speed will reach an upper bound above which additional agents will not further increase the speed. However, even afterward, these agents will keep sending measurements and vehicle observations contributing towards frequent map updates and enhancing the lane graph (see Sec. IV-C). We present further results for _town01_ and _town10_ in the suppl. material Sec. S.3.
Finally, we demonstrate that our proposed edge contraction successfully limits the number of nodes contained in the pose graph. In Fig. 5, we show the example of three agents operating in _town02_ and compare the number of optimizable graph nodes with the total number of keyframes sent by the agents. We observe that without edge contraction, the pose graph continuously grows with the number of keyframes sent, rendering frequent optimization infeasible.
### _Lane Graph_
We evaluate our proposed online lane graph generation approach from the paths of the ego agents and their observations of other vehicles (Sec. III-B). We present qualitative results in Fig. 6 for two scenarios simulated in _town02_ with 30 additional non-agent vehicles: the left figure visualizes the lane graph in a single-agent scenario terminated as soon as
Fig. 4: The mapping progress in _town02_ (top) and _town07_ (bottom) for one, two, and three agents. Our collaborative SLAM method benefits from receiving inputs from multiple agents.
Fig. 5: Our proposed edge contraction mechanism effectively reduces the number of nodes in the pose graph to maintain the capability of frequent graph optimization. This plot shows three agents operating in _town02_.
the agent starts to repeatedly revisit intersections. Although the path of the agent, shown in blue, does not cover all the lanes, including the paths of the observed vehicles allows for a substantial extension of the lane graph. The right figure depicts a long-term scenario with three agents demonstrating that collaboration further boosts performance. Our method yields an almost complete lane graph even though several lanes have only been driven by the agents in the opposite direction.
We quantify these findings in Tab. II following previous works on lane graphs: precision and recall of the TOPO and GEO metrics [29], APLS [43], \(\text{SDA}_{\text{R}}\)[27] with the subscript denoting the search radius in meters, and the graph IoU [27]. For more details, please refer to the respective reference. We observe that except for the TOPO/GEO precision and the APLS in the 3-agent scenario, all the metrics show an improvement when using not only the paths of the ego agents but also of the observed vehicles. We attribute the decrease in precision to the noise in the estimated position of the other vehicles. Since we approximate the center of a vehicle by the geometric mean of the respective 3D points, there is a bias towards the center line of a road for all oncoming cars. We further observe that increasing the number of agents does have a positive impact on all the metrics except for the TOPO/GEO precision and the \(\text{SDA}_{4.5}\) demonstrating the efficacy of our method.
### _Environment Partitioning_
We evaluate our approach for environment partitioning (Sec. III-B) by comparing it against the ground-truth intersection points of the underlying map. Throughout exploring the environment, the recall is normalized using the point cloud of the road surface obtained thus far. Our proposed lane graph-based method (LG) is compared against a morphological image skeletonization baseline (SK) that uses medial axes of the bird's-eye-view projected point cloud of the road surface. Kernelized smoothing and dilation followed by thresholding the obtained bird's-eye-view image helps in filtering false positive points and noise. In order to further increase precision, the SK baseline includes clustering accumulations of intersection points in local areas that originate from artifacts in the skeleton graph. We report the precision and recall values across ten exploration runs on _town02_ in Fig. 7. We observe that our approach (LG) achieves at least \(20\,\%\) greater precision while showing comparable or higher recall scores. As our approach relies on observed vehicle trajectories, we attribute the lower initial recall of the LG method to a small number of initially seen trajectories while the point cloud-based baseline already processes a larger extent of the surroundings at this stage. Nonetheless, we observe that the SK baseline yields vastly different partitioning solutions throughout exploration as it is not robust to artifacts such as occlusions due to vehicles or sparse LiDAR readings of distant road surfaces. We believe that a conservative, high-precision classifier is beneficial as over-segmentation increases the number of roads and intersections unnecessarily. Further explanations are provided in suppl. material Sec. S.4. Additionally, we observe that simply extracting intersections from the pose graph produces low recall as every path has to be traversed by the agents instead of relying on more descriptive observations.
## V Conclusion
In this work, we introduced CURB-SG as a novel approach to building large-scale hierarchical dynamic 3D urban scene graphs from multi-agent observations. We furthermore demonstrated how our collaborative SLAM approach facilitates frequent map updates and rapid exploration while scaling to large environments. To foster further research in this direction, we made our code publicly available. In future work, we will address the reliance on simulated panoptic labels and known initial poses of the agents. Orthogonal to that, follow-up work could address a decentralized variant that operates under real-time constraints. Furthermore, we plan to include pedestrian information as well as additional topological elements such as road boundaries.
Fig. 6: Visualization of the constructed lane graph of _town02_ when using one or three agents. Lanes marked in blue have been traversed by an ego agent. Others are reconstructed from observing surrounding vehicles.
Fig. 7: Intersection detection quality of our lane graph-based detection of intersections (LG) and an image-based skeletonization baseline of the road surface (SK). Average precision (P) and recall (R) of both approaches across 10 runs with 3 agents and 40 vehicles on _town02_ as well as the size of the investigated road surface point cloud are shown. | 地図は、安全かつ自動運転を可能にする不可欠な役割を果たしてきました。SLAMから semantika などのさまざまな分野で多くの進歩がありますが、多様な代理からの情報処理と、都市動的なシーンの可行動的な階層的な semanti的表現を構築することはまだ課題です。この論文では、自動運転の多くの機能に対して、高階的な論理を可能にし、効率的な問い合わせを可能にする、協調的な都市シーングラフ(CURB-SG)を提案します。CURB-SGは、複数の代理からのパノラマ LiDAR データを活用し、効果的なグラフベースの協調的な SLAM 方法を用いて、スケールアップされたマップを作成します。これは、代理間ループの閉塞を検出するものです。取得した3Dマップを semantiカルに分解するために、エゴの代理と他の車両の視覚観測に基づいて、レーングラフを構築します。レーングラフの接続 |
2309.09506 | LayoutNUWA: Revealing the Hidden Layout Expertise of Large Language
Models | Graphic layout generation, a growing research field, plays a significant role
in user engagement and information perception. Existing methods primarily treat
layout generation as a numerical optimization task, focusing on quantitative
aspects while overlooking the semantic information of layout, such as the
relationship between each layout element. In this paper, we propose LayoutNUWA,
the first model that treats layout generation as a code generation task to
enhance semantic information and harness the hidden layout expertise of large
language models~(LLMs). More concretely, we develop a Code Instruct Tuning
(CIT) approach comprising three interconnected modules: 1) the Code
Initialization (CI) module quantifies the numerical conditions and initializes
them as HTML code with strategically placed masks; 2) the Code Completion (CC)
module employs the formatting knowledge of LLMs to fill in the masked portions
within the HTML code; 3) the Code Rendering (CR) module transforms the
completed code into the final layout output, ensuring a highly interpretable
and transparent layout generation procedure that directly maps code to a
visualized layout. We attain significant state-of-the-art performance (even
over 50\% improvements) on multiple datasets, showcasing the strong
capabilities of LayoutNUWA. Our code is available at
https://github.com/ProjectNUWA/LayoutNUWA. | Zecheng Tang, Chenfei Wu, Juntao Li, Nan Duan | 2023-09-18T06:35:10 | http://arxiv.org/abs/2309.09506v2 | # LayoutNUWA: Revealing the Hidden Layout Expertise of Large Language Models
###### Abstract
Graphic layout generation, a growing research field, plays a significant role in user engagement and information perception. Existing methods primarily treat layout generation as a numerical optimization task, focusing on quantitative aspects while overlooking the semantic information of layout, such as the relationship between each layout element. In this paper, we propose LayoutNUWA, the first model that treats layout generation as a code generation task to enhance semantic information and harnesses the hidden layout expertise of large language models (LLMs). More concretely, we develop a Code Instruct Tuning (CIT) approach comprising three interconnected modules: 1) the Code Initialization (CI) module quantifies the numerical conditions and initializes them as HTML code with strategically placed masks; 2) the Code Completion (CC) module employs the formatting knowledge of LLMs to fill in the masked portions within the HTML code; 3) the Code Rendering (CR) module transforms the completed code into the final layout output, ensuring a highly interpretable and transparent layout generation procedure that directly maps code to a visualized layout. We attain significant state-of-the-art performance (even over 50% improvements) on multiple datasets, showcasing the strong capabilities of LayoutNUWA. Our code is available at [https://github.com/ProjectNUWA/LayoutNUWA](https://github.com/ProjectNUWA/LayoutNUWA).
Figure 1: Overview of LayoutNUWA, in which we view layout generation as a code generation task to enhance the semantic information in layouts as well as naturally harness the hidden layout expertise of large language models. In detail, we propose a Code Instruct Tuning (CIT) approach that consists of three modules: 1) the Code Initialization (CI) module quantifies the numerical conditions and initializes them as an HTML code with masks; 2) the Code Completion (CC) module utilizes the knowledge of large language models to complete the masked portions within the HTML code; 3) the Code Rendering (CR) module directly renders the completed code into the final graphic layout.
## 1 Introduction
Graphic layout, which refers to the organization and positioning of design elements, significantly influences the way users engage with and perceive the presented information (Lee et al., 2020). As a growing research field, layout generation (Li et al., 2019; Yang et al., 2020) aims to create diverse and realistic layouts that streamline the design process and cater to various applications, such as user interfaces (Deka et al., 2017; Jiang et al., 2022), indoor scenes (Di and Yu, 2021; Feng et al., 2023), document layouts (Zheng et al., 2019; Yamaguchi, 2021), presentation slides (Fu et al., 2022), etc.
Current approaches (Jyothi et al., 2019; Li et al., 2019; Arroyo et al., 2021; Zhang et al., 2023a) regard each element in the layout as numerical tuples \((c,x,y,w,h)\), in which \(c\) indicates the element category, \(x\) and \(y\) represent coordinates, \(w\) and \(h\) correspond to width and height. For example, autoregressive-based methods (Yang et al., 2020; Jiang et al., 2022) view the tuple as a sequence and predict their values sequentially, while diffusion-based methods (Chai et al., 2023; Inoue et al., 2023) consider the tuple as a whole and predict their values through a denoising approach. Despite adopting different generative models, all of these methods fundamentally consider layout generation as a numerical tuple optimization task. However, representing layouts as numerical tuples have its limitations, as it primarily focuses on capturing the quantitative aspects of the layout, such as positions and sizes, while lacking semantic information, e.g., the attribute of each numerical value, which may limit the model's ability to capture more complex and rich layout information.
An insightful question emerges from the limitations of existing methods in layout generation: can we integrate semantic information into the layout generation process to enrich the overall representation and enhance the quality of the generated layouts? Addressing this question brings forth two major benefits: firstly, it bolsters the understanding of relationships among various layout elements, and secondly, it enables us to tap into the semantic capabilities of LLMs (Tang et al., 2023), resulting in more intricate and contextually relevant layouts for a wide range of applications (Jiang et al., 2022). Considering the inherent logical nature of layouts, which involve dependency relationships among layout elements, and the fact that each graphic layout can be represented with a fixed structure sequence, code languages emerge as a promising alternative. Code languages can encompass numerical and semantic information while possessing a strong logical foundation (Chen et al., 2022), which can thus bridge the gap between existing methods and the desired enriched representation.
Based on the above observations, we propose LayoutNUWA, a groundbreaking model that revolutionizes the layout generation task by treating it as a code generation task. Our innovative approach is designed to not only enhance the semantic information within layouts but also seamlessly leverage the expertise of LLMs in the layout generation process. To achieve this, we design a Code Instruct Tuning (CIT) approach comprising three interconnected modules: 1) firstly, the Code Initialization (CI) module quantifies the numerical conditions and initializes them as HTML code with strategically placed masks, paving the way for more meaningful and coherent layouts; 2) secondly, the Code Completion (CC) module employs the formatting knowledge of LLMs to fill in the masked portions within the HTML code, thereby harnessing the power of LLMs to improve the accuracy and consistency of the generated layouts; 3) lastly, the Code Rendering (CR) module transforms the completed code into the final layout output, ensuring a highly interpretable and transparent layout generation procedure that directly maps code to a visualized layout.
Experiments across a variety of conditional layout generation tasks on three datasets, i.e., Rico (Deka et al., 2017), PubLayNet (Zhong et al., 2019) and Magazine (Zheng et al., 2019), highlight the superiority of our method, in which LayoutNUWA can significantly outperform all the baselines and shows comparable results with the task-specific models. Furthermore, LayoutNUWA can achieve at least a 50% improvement in performance compared to the best baseline on the low-resource datasets, e.g., the Magazine dataset. In a nutshell, our contributions can be outlined as follows:
* We introduce LayoutNUWA, the first model that treats the layout generation task as a code generation task, effectively harnessing the hidden layout expertise of LLMs.
* We propose Code Instruct Tuning, which empowers the model to adhere to instructions and enriches the semantic information of layout, resulting in precise and standardized code.
* We attain significant state-of-the-art performance on multiple datasets, showcasing the robust capabilities of LayoutNUWA.
## 2 Related Work
### Layout Generation
Automatic layout generation, an important task for automatic graphical design for various scenarios such as document layouts (Zheng et al., 2019; Zhong et al., 2019; Yamaguchi, 2021; Fu et al., 2022), posters (Yang et al., 2016; Guo et al., 2021; Li et al., 2023) and user interfaces (Deka et al., 2017), has recently been extensively researched. Early approaches for layout generation involve embedding design rules into manually-defined energy functions (O'Donovan et al., 2014; O'Donovan et al., 2015), while other methods have explored generative models such as GANs and VAEs for generating numerical graphic and scene layouts, including LayoutGAN (Li et al., 2019), LayoutVAE (Jyothi et al., 2019), LayoutGAN++ (Kikuchi et al., 2021), NDN (Lee et al., 2020) and READ (Patil et al., 2020). Apart from them, transformer-based approaches utilize self-attention mechanisms to learn numerical contextual relationships between elements and achieve layout completion based on partial layout inputs (Yang et al., 2020; Kong et al., 2022; Feng et al., 2023). Recently, with the prevalence of diffusion models, several works also adopted diffusion models to tackle a broader range of conditional layout generation (Chai et al., 2023; Inoue et al., 2023; Zhang et al., 2023; Hui et al., 2023; Cheng et al., 2023). However, existing methods primarily treat layout generation as a numerical optimization task, focusing on quantitative aspects while overlooking the semantic information of layout, such as the relationship between each layout element. Different from previous works, we convert the layout generation task into the code generation task to directly generate the layout in code language and thus utilize the rich knowledge from LLMs, which can significantly improve the FID by 50% on the Magazine dataset in Sec. 4.2.
### Instruction Tuning
Instruction tuning represents the process of fine-tuning LLMs on the instruction dataset in a supervised fashion, which narrows the gap between the next-word prediction manner of LLMs and the users' objective of having LLMs adhere to human instructions (Zhang et al., 2023c). Early attempts at instruction tuning involve multi-task training with manually-written descriptions about different tasks (Mishra et al., 2021; Wei et al., 2021; Sanh et al., 2021; Xu et al., 2022; Muenninghoff et al., 2022; Iyer et al., 2022) or automatically generated instructions (Wang et al., 2022; Gu et al., 2022; Zhang et al., 2023b; Honovich et al., 2022a;b). Apart from controlling the LLMs through input instruction, Nye et al. (2021) show that LLMs can handle more complex tasks by generating the intermediate steps, and Wei et al. (2022) propose the chain-of-thought technique by enriching the instruction with intermediate reasoning step descriptions, which endows LLMs with better performance (Wang et al., 2022; Zelikman et al., 2022; Wu et al., 2023; Xu et al., 2023). However, the instruction tuning methods mentioned above are primarily intended for text generation tasks and not ideal for layout generation tasks, which involve numerical optimization. Thus, we propose a code instruction tuning method that is specially designed for the layout generation task. Experiments in Sec. 5.1 indicate that the performance significantly drops if the code instruction tuning is not adopted.
## 3 Methodology
### Problem Formulation
The layout generation task aims to generate a well-organized layout \(\mathcal{S}=\{s_{i}\}_{i=1}^{N}\), with \(N\) representing the number of elements in the layout. Each element, \(s_{i}=(c_{i},x_{i},y_{i},w_{i},h_{i})\), consists of the following components: \(c_{i}\) is the category, \(x_{i},y_{i}\) indicate the center location, and \(w_{i},h_{i}\) represent the width and height, respectively. In this study, we focus on the conditional layout generation task, wherein partial components in \(s_{i}\) are masked with \(M\), and the complete layout \(S\) should be predicted by model \(f_{\theta}\) conditioned on the remaining components \(S_{\backslash M}\):
\[\mathcal{S}=f_{\theta}(\mathcal{S}_{\backslash M}) \tag{1}\]
Previous works (Jyothi et al., 2019; Yang et al., 2020; Inoue et al., 2023) regard each element \(s_{i}\) as a sequence of numerical values, e.g., (0, 10, 20, 25, 30), and train a model to directly generate these values. However, this approach overlooks the semantic information of the components, thus limiting the model's understanding of the layout semantics. Based on this observation, we propose
a new problem definition, where we convert the input \(S_{\backslash M}\) and output \(S\) into a code language and view the layout generation task as a code generation task:
\[\mathrm{CODE}(\mathcal{S})=f_{\theta}(\mathrm{CODE}(\mathcal{S}_{\backslash M})) \tag{2}\]
Eq. 2 has the following 3 advantages compared with Eq. 1:
* **Semantic Insights**: By converting the numerical values into code language, the model can better capture the semantic relationships between different components of the layout.
* **LLM Utilization**: By using code language, the model can further leverage the knowledge of Large Language Models (LLMs) and thus enhance the quality of the generated layouts.
* **Model Scalability**: The code language has a stronger expressive capability compared to numerical values, which allows the addition of more attributes for layout elements.
### Code Instruct Tuning
As shown in Fig. 1, we propose Code Instruct Tuning (CIT) with three modules: (1) _Code Initialization_ module converts layout into masked code language with dynamic templates; (2) _Code Completion_ module inputs the masked code to LLMs to generate complete code; (3) _Code Rendering_ module directly renders code to the final graphic layout. We illustrate these modules below.
#### 3.2.1 Code Initialization
Element Quantization. We quantify the numerical values of the \(i\)-th element position \(\{x_{i},y_{i}\}\) and size \(\{w_{i},h_{i}\}\) in the layout with the Adaptive Quantization method (Inoue et al., 2023) that applies the \(k\)-Means algorithm (MacQueen et al., 1967) to cluster the position and size information of each element, addressing the highly imbalanced distribution of these values, e.g., elements may overlap or cluster together. Different from the previous works (Chai et al., 2023; Zhang et al., 2023a; Inoue et al., 2023), we use absolute positions to represent the coordinates rather than relative positions. This aligns with code language and allows direct rendering of layouts without necessitating coordinate conversion, thereby preventing potential information loss. We maintain precision up to one decimal place and directly convert the clustered results into strings.
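As a sketch of this quantization step, one could cluster each coordinate axis with k-Means and snap values to the cluster centers, rounded to one decimal place; the number of clusters below is an assumption for illustration, not the paper's setting.

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize_axis(values: np.ndarray, n_clusters: int = 128) -> np.ndarray:
    """Cluster one coordinate axis (e.g., all x positions in the training set)
    and snap each value to its cluster center, rounded to one decimal place."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(values.reshape(-1, 1))
    centers = km.cluster_centers_.ravel()
    return np.round(centers[km.labels_], 1)
```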
Figure 2: The training process of LayoutNUWA, which converts layout generation task to code generation task and utilizes a code instruct tuning to leverage LLM’s capability for layout generation.
Template Construction. The overview of template construction is shown in Fig. 2. We construct the templates based on the most common web page layout code, HTML, which contains a wealth of information and is easily accessed by LLMs during the pre-training process (Touvron et al., 2023; Roziere et al., 2023). Specifically, in HTML code, each element is described with a tag that provides information about the content or the element structure. Since the elements in the layout are regular squares, we chose the \(<\)rect\(>\) tag as the content tag to describe each element:
```
<rect data-category={\(c_{i}\)} x={\(x_{i}\)} y={\(y_{i}\)} width={\(w_{i}\)} height={\(h_{i}\)}>
```
where \(c_{i}\) is the element category in textual format and \(\{x_{i},y_{i},w_{i},h_{i}\}\) are the quantified position and size of the \(i\)-th element. Then, to combine all the elements into a unified structure, we used an opening tag and a closing tag to define the boundaries of each layout, which can be written as:
```
<html><body><svg width={W} height={H}> ... </svg></body></html>
```
where \(W\) and \(H\) are the background width and height of the layout.
In order to facilitate better learning of layout in various domains and tasks and leverage the instruction-following capabilities of LLMs, we design the following prompts:
```
I want to generate layout in {Domain} style. Please generate the layout according to the {Task Condition} I provide:
```
where the {Domain} and the {Task Condition} will vary according to different domains and tasks. For instance, for the RICO dataset, we set Domain as "mobile UI", and for the layout completion task, we set Task Condition as "remaining values". Afterwards, we prepend the task instruction before the layout code.
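Putting the pieces together, a minimal helper for serializing a (partially masked) layout into the HTML-style template might look as follows. The exact attribute quoting and the mask token are illustrative choices rather than the paper's verbatim format.

```python
RECT = '<rect data-category="{c}" x="{x}" y="{y}" width="{w}" height="{h}">'

def build_layout_code(elements, W, H, instruction, mask_token="<M>"):
    """Render a (possibly partially masked) layout as HTML-like code.

    `elements` is a list of dicts with keys c, x, y, w, h; masked values are
    given as None and replaced by the LLM's mask token.
    """
    rects = []
    for e in elements:
        filled = {k: (mask_token if e[k] is None else e[k]) for k in "cxywh"}
        rects.append(RECT.format(**filled))
    body = "\n".join(rects)
    code = f'<html><body><svg width="{W}" height="{H}">\n{body}\n</svg></body></html>'
    return instruction + "\n" + code
```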
#### 3.2.2 Code Completion
To construct the conditional input of the layout generation task, we utilize the mask tokens of LLMs to represent the masked values \(M\) and let the model predict the masked values within the HTML code. Different from previous works (Chai et al., 2023; Zhang et al., 2023; Inoue et al., 2023) that applied a customized numerical vocabulary, we employ the LLM's token vocabulary directly. By doing so, we can leverage the knowledge of the numerical tokens inherent in the LLMs. Almost all LLMs generate in an auto-regressive manner, which poses a significant limitation for layout generation since the model should predict the same layout under different element orders, even though a layout does not have a naturally defined order (Yang et al., 2020). Thus, we design a self-consistency strategy that randomly permutes the order of the input elements in the layout within a mini-batch. Meanwhile, in order to adapt LLMs to different conditional layout generation tasks, we perform multi-task modeling on the same layout, utilizing various conditions and implementing a joint loss for these tasks. Given the permutation times \(K\) and task numbers \(T\), the joint loss for each layout \(\mathcal{S}\) can be written as:
\[L(\mathcal{S}\mid\theta)=\sum_{t=1}^{T}\sum_{j=1}^{N}\sum_{k=1}^{K}L(s_{j}^{(k) }\backslash M_{j}^{(t)}\mid\theta), \tag{3}\]
where \(\theta\) is the model parameters and \(s_{j}\) denote the \(j\)-th element in the layout \(\mathcal{S}\).
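The masking and self-consistency permutation can be sketched as below. The field sets per task follow the task definitions used later (C → S+P and C+S → P), while the completion task, which masks a random subset of values up to 80%, is omitted for brevity.

```python
import random

# Which fields of each element are masked (set to None) per task condition.
MASKED_FIELDS = {
    "C->S+P": ("x", "y", "w", "h"),   # category given, predict size and position
    "C+S->P": ("x", "y"),             # category and size given, predict position
}

def mask_for_task(elements, task):
    fields = MASKED_FIELDS[task]
    return [{k: (None if k in fields else v) for k, v in e.items()}
            for e in elements]

def permuted_copies(elements, K=10):
    """Self-consistency: K random element orderings of the same layout."""
    return [random.sample(elements, len(elements)) for _ in range(K)]
```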
#### 3.2.3 Code Rendering
Most existing works require an extra conversion step to render the graphic layouts (Yang et al., 2020; Chai et al., 2023; Zhang et al., 2023), e.g., converting relative positions to absolute positions, causing information loss. Different from previous work, LayoutNUWA allows for immediate rendering as it generates absolute positions directly. Besides, considering potential output issues such as boundary overflow (Inoue et al., 2023) and format errors, we employ regular expressions to remove mismatched formats and implement clipping operations for elements that exceed the background size.
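A rough sketch of this rendering step, parsing generated tags with a regular expression and clipping out-of-bounds elements, is shown below. It assumes the illustrative quoted-attribute template from the earlier sketch and a simple clamping scheme, not the paper's exact post-processing rules.

```python
import re

RECT_RE = re.compile(
    r'<rect data-category="([^"]+)" x="([\d.]+)" y="([\d.]+)" '
    r'width="([\d.]+)" height="([\d.]+)">')

def render(code: str, W: float, H: float):
    """Parse generated code back into (c, x, y, w, h) tuples, dropping
    malformed tags and clipping elements that exceed the background size."""
    elements = []
    for c, x, y, w, h in RECT_RE.findall(code):
        x, y, w, h = map(float, (x, y, w, h))
        w, h = min(w, W), min(h, H)                      # clip oversized boxes
        x, y = min(max(x, 0.0), W), min(max(y, 0.0), H)  # clamp centers
        elements.append((c, x, y, w, h))
    return elements
```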
## 4 Experiment
### Experimental Settings
Dataset. We evaluate the model performance on three widely used public datasets. RICO (Deka et al., 2017) is a user interface design dataset for mobile applications containing 25 element categories and 6K+ UI layouts. PubLayNet (Zhong et al., 2019) consists of 360K+ layouts for documents with 5 element categories. Magazine (Zheng et al., 2019) is a low-resource magazine layout dataset containing around 4K annotated layouts and 6 element categories. We follow LayoutDM (Inoue et al., 2023) to view the original validation data as the testing set and pre-process all three datasets by discarding the layouts containing more than 25 elements as well as splitting the filtered data into the training and new validation sets by 95% and 5%.
Evaluation Metrics. We employ four metrics to evaluate the generation results comprehensively, including Frechet Inception Distance (FID), Maximum Intersection over Union (mIoU), Alignment (Align.), and Overlap. Among them, FID compares the distribution of generated and real layouts. Similar to the previous work (Inoue et al., 2023), we utilize an enhanced feature extraction model for layouts (Kikuchi et al., 2021) to compute the FID score. We measure the conditional similarity between generated and real layouts using mIoU, which is done by calculating the maximum IoU between bounding boxes of generated and real layouts with the same type set. Alignment and Overlap scores are calculated following the previous work (Li et al., 2019) to evaluate proper element alignment and overlapping in a generated layout, and it is worth noting that we ignore normal overlaps, e.g., elements on top of the background, and discard the layouts that failed to generate. For reference, we show the evaluation results between the validation set and test set as Real data.
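The mIoU and Overlap metrics build on pairwise box IoU; a minimal helper, assuming the (center, width, height) box parameterization from the problem formulation, is:

```python
def iou(a, b):
    """IoU of two boxes given as (x_center, y_center, w, h)."""
    ax0, ay0, ax1, ay1 = a[0] - a[2] / 2, a[1] - a[3] / 2, a[0] + a[2] / 2, a[1] + a[3] / 2
    bx0, by0, bx1, by1 = b[0] - b[2] / 2, b[1] - b[3] / 2, b[0] + b[2] / 2, b[1] + b[3] / 2
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0
```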
Tasks and Baselines. We evaluate LayoutNUWA on three conditional layout generation tasks. These include the Category to Size and Position (C \(\rightarrow\) S+P) task, the Category and Size to Position (C+S \(\rightarrow\) P) task, and the Completion task. More concretely, the C \(\rightarrow\) S+P task requires the model to predict the position and size of the element based on its category. For the C+S \(\rightarrow\) P task, the model predicts the position of the element based on both its size and category. Finally, in the completion task, the element's size and position values are randomly masked up to 80%, and the model predicts the entire layout using the remaining values. We compare LayoutNUWA with six strong baselines, including LayoutTrans (Yang et al., 2020), BLT (Kong et al., 2022), LayoutGAN++ (Kikuchi et al., 2021), MaskGIT (Chang et al., 2022), DiffusionLM (Li et al., 2022) and LayoutDM (Inoue et al., 2023).
Implementation Details. We implement LayoutNUWA with two 7B LLMs: LLaMA2 (L2) (Touvron et al., 2023) and CodeLLaMA3 (CL) (Roziere et al., 2023). We train LayoutNUWA with two settings: (1) the Domain-Specific (DS) setting, where the model is trained on distinct datasets, and (2) the Domain-Agnostic (DA) setting, where the model is trained on all three datasets, including RICO, PubLayNet, and Magazine. The default configuration of LayoutNUWA utilizes CodeLLaMA (CL) and the Domain-Agnostic (DA) setting, i.e., the variant reported simply as LayoutNUWA in the result tables. We set permutation times \(K=10\) and task numbers \(T=3\). For model training, we use the DeepSpeed Library (Rajbhandari et al., 2020) to run all experiments on 64 NVIDIA V100 GPUs. We apply Top-\(p\) sampling (Holtzman et al., 2019) for inference, where \(p=0.9\) and the temperature is \(0.6\), and set the maximum generation length as 512.
Footnote 2: [https://huggingface.co/meta-llama/Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b)
### Quantitative Evaluation
We report the model performance on three datasets: the Magazine dataset in Tab. 1, RICO, and PubLayNet datasets in Tab. 2. For the Magazine dataset, LayoutNUWA demonstrates a remarkable performance by significantly surpassing all baseline measures across all tasks. Moreover, it outperforms the strong baseline LayoutDM by more than 50% when assessed with the FID metric.
The significant improvements in Tab. 1 are due to three aspects: 1) previous approaches generated numerical values, while LayoutNUWA generates code with labels, which greatly benefits the model by utilizing the semantic information of layout attributes such as width, height, position, and category; 2) none of the previous methods used LLMs. However, we have introduced LLMs for the first
time, which has resulted in significant performance enhancements, i.e., performance has improved from \(19.206\) to \(9.741\). Furthermore, when we use CodeLLaMA, which is tuned on code language, the performance improves even further to \(8.985\); 3) since different domains require distinct layout formats, early numerical-based methods could only be trained in a domain-specific manner. However, LayoutNUWA is based on code structure, which can be trained in a domain-agnostic manner, allowing for complementary among data from various domains, thus further improving FID to \(8.791\).
We have also conducted extensive experiments on two other datasets: RICO and PubLayNet, as shown in Tab. 2. The LayoutNUWA notably surpasses all baseline methods in the majority of tasks. Although it does not achieve the best performance in two specific tasks, it still secures at least the second-highest performance in those instances. This shows the strong generalization of the LayoutNUWA. It is worth mentioning that our model also achieves closer Align. and Overlap scores to the Real Data compared to the baselines. Although previous work has suggested that refinement and discriminator processes can contribute to improving the Align. and Overlap (Inoue et al., 2023; Li et al., 2019) scores, our method attains better results without employing these steps.
\begin{table}
\begin{tabular}{l l c c c c c c c} \hline \hline \multirow{2}{*}{**Task**} & \multirow{2}{*}{**Models**} & \multicolumn{3}{c}{**RICO**} & \multicolumn{3}{c}{**PubLayNet**} \\ & & & & mIoU (\(\uparrow\)) & Align (\(\rightarrow\)) & Overlap (\(\rightarrow\)) & FID (\(\downarrow\)) & mIoU (\(\uparrow\)) & Align (\(\rightarrow\)) & Overlap (\(\rightarrow\)) & FID (\(\downarrow\)) \\ \hline \multirow{8}{*}{**C \(\rightarrow\) S \(\rightarrow\) P**} & LayoutTrans & 0.219 & 0.014 & 13.012 & 11.237 & 0.271 & 0.016 & 3.229 & 38.910 \\ & BLT & 0.203 & 0.013 & 11.743 & 14.260 & 0.232 & 0.009 & 16.742 & 76.499 \\ & LayoutGAN++ & 0.263 & 0.016 & 3.544 & 6.824 & 0.354 & 0.011 & 1.713 & 10.129 \\ & MatchGT & 0.267 & 0.001 & 26.665 & 27.470 & 0.320 & 0.004 & 1.857 & 16.988 \\
**Condition** & DiffusionLM & 0.299 & 0.018 & 17.665 & 31.644 & 0.262 & 0.027 & 3.532 & 20.021 \\
**C \(\rightarrow\) S \(\rightarrow\) P** & LayoutMD & 0.275 & 0.010 & 11.938 & 3.576 & 0.310 & 0.010 & 0.024 & 7.915 \\ \cline{2-11} & LayoutNUA-L2-DS (ours) & 0.351 & 0.002 & 10.109 & 3.728 & 0.337 & 0.009 & 0.028 & 6.986 \\ & LayoutNUA-L2-DA (ours) & 0.386 & 0.011 & 10.214 & 3.010 & 0.324 & 0.011 & 0.077 & 6.890 \\ & LayoutNUA-L2-DA (ours) & 0.377 & 0.009 & 10.263 & 3.706 & 0.376 & 0.278 & 0.083 & 6.715 \\ & LayoutNUA (ours) & **0.445** & **0.004** & **7.943** & **2.524** & **0.385** & **0.001** & 0.086 & **6.975** \\ \hline \multirow{8}{*}{**C \(\rightarrow\) S \(\rightarrow\) P**} & LayoutTrans & 0.311 & 0.011 & 11.902 & 9.368 & 0.315 & 0.013 & 2.531 & 31.627 \\ & BLT & 0.341 & 0.008 & 13.470 & 4.487 & 0.336 & 0.006 & 5.469 & 8.831 \\ & LayoutGAN++ & 0.349 & 0.011 & 26.62 & 6.219 & 0.346 & 0.008 & 2.746 & 9.936 \\ & MatchGT & 0.331 & **0.003** & 26.369 & 12.988 & 0.384 & 0.005 & 1.950 & 5.453 \\ & DiffusionLM & 0.278 & 0.020 & 11.884 & 15.931 & 0.324 & 0.014 & 3.990 & 16.407 \\
**C \(\rightarrow\) S \(\rightarrow\) P** & LayoutMD & 0.391 & 0.009 & 12.072 & **2.285** & 0.381 & 0.010 & 2.041 & 4.175 \\ \cline{2-11} & LayoutNUA-L2-DS (ours) & 0.462 & 0.008 & 10.456 & 30.05 & 0.426 & 0.010 & 1.752 & 4.105 \\ & LayoutNUA-L2-DA (ours) & 0.464 & 0.007 & 10.117 & 27.037 & 0.464 & 0.009 & 1.984 & 3.993 \\ & LayoutNUA-CL-DS (ours) & 0.469 & 0.007 & 9.856 & 2.984 & **0.466** & 0.009 & 1.610 & 4.012 \\ \cline{2-11} & LayoutNUA (ours) & **0.564** & 0.007 & **7.968** & 2.820 & **0.433** & **0.002** & **0.106** & **3.697** \\ \hline \multirow{8}{*}{**C \(\rightarrow\) S**} & LayoutTrans & 0.561 & 0.008 & 10.800 & 3.733 & 0.499 & 0.012 & 2.053 & 8.689 \\ & BLT & 0.471 & **0.007** & 53.658 & 12.110 & 0.157 & **0.002** & 109.483 & 155.157 \\ \cline{1-1} & MatchGT & 0.537 & 0.024 & 9.242 & 33.463 & 0.349 & 0.011 & 4.768 & 120.193 \\ \cline{1-1} & DiffusionLM & 0.218 & 0.021 & **8.681** & 22.220 & 0.332 & 0.012 & 4.406 & 16.576 \\ \cline{1-1} & LayoutNUA-L2-DA (ours) & 0.580 & 0.002 & 15.676 & 3.924 & 0.377 & 0.011 & 1.891 & 7.570 \\ \cline{1-1} \cline{2-11} & LayoutNUA-L2-DS (ours) & 0.610 & 0.009 & 7.239 & 8.875 & 0.407 & 0.010 & 1.337 & 7.337 \\ \cline{1-1} & LayoutNUA-L2-DA (ours) & 0.624 & **0.007** & 10.457 & 8.724 & 0.477 & 0.012 & 1.383 & 7.149 \\ \cline{1-1} & LayoutNUA-CL-DS (ours) & **0.641** & **0.007** & 7.529 & 8.734 & 0.473 & 0.012 & 1.311 & 7.253 \\ \cline{1-1} & LayoutNUA (ours) & 0.616 & **0.007** & 8.123 & **7.542** & **0.481** & 0.009 & **1.292** & **6.929** \\ \hline
**Real Data** & - & 0.438 & 0.004 & 8.706 & 6.25 & 0.691 & 0.001 & 0.039 & 1.85 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative comparison on Magazine dataset, where the bold font denotes the best result and underline represents the second-best performance.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Layout**} & \multirow{2}{*}{**LLM**} & \multirow{2}{*}{**Domain**} & \multicolumn{2}{c}{C \(\rightarrow\) S + P**} & \multicolumn{2}{c}{**C + S \(\rightarrow\) P**} & \multicolumn{2}{c}{**Completion**} \\ & & & & mIoU (\(\uparrow\)) & FID (\(\downarrow\)) & mIoU (\(\uparrow\)) & FID (\(\downarrow\)) & mIoU (\(\uparrow\)) & FID (\(\downarrow\)) \\ \hline LayoutTrans & Numerical & - & Specific & 0.116 & 36.207 & 0.153 & 33.931 & 0.228 & 25.804 \\ BiT & Numerical & - & Specific & 0.087 & 65.372 & 0.126 & 41.089 & 0.103 & 97.142 \\ LayoutGAN++ & Numerical & - & Specific & 0.259 & 16.952 & 0.293 & 11.56
### Qualitative Evaluation
We render the generated layout code with the Code Rendering (CR) method, and Fig. 3 shows the sampled rendering results on the PubLayNet dataset. By comparing with other baselines, we can observe that the layouts generated by LayoutNUWA exhibit excellent element alignment, and the proportion of overlap between elements is minimal. Additionally, our results are the most consistent with the Real Design data, i.e., the size and position of the generated elements are essentially consistent with the real design, indicating that, by treating the layout generation task as a code generation task, LayoutNUWA has successfully learned the distribution of document layouts, thus resulting in more precise and realistic layouts. More sampled cases are shown in Fig. 5.
## 5 Ablation Study
We investigate the effectiveness of the CIT tuning method in Sec. 5.1 and compare the impact of different output formats and fine-tuning in Sec. 5.2. More concretely, we set the LayoutNUWA-L2-DS model as the basic setting and conduct the ablation studies on the Magazine dataset.
### Effect of Tuning Methods
We progressively reduce the modules in CIT and fine-tune the model using the corresponding constructed data. Specifically, we first exclude the code template and directly convert the element information into an ordered sequence \(\mathbf{S}\) with a task instruction before it, i.e., the instruction tuning method. Then, we further remove the task instruction and directly fine-tune the model using data from different tasks separately, i.e., the numerical tuning method. As shown in Tab. 3, we can observe that the model performance declines significantly without the code template, and it only works in the DS setting since, in the DA setting, the model simply generates repetitive and out-of-order results that are inconsistent with the element sequence. Furthermore, the numerical tuning method can only support the DS setting as there is no task instruction for the model to distinguish between different tasks, and the model performance is far inferior compared to that of CIT, as such an approach overlooks the rich semantic information among the elements and cannot exploit the prior code knowledge of LLMs.
Figure 3: Samples generated by LayoutNUWA on the PubLayNet dataset.
### Effect of Output Format and Finetuning
We compare the effects of the model output in code format and numerical format. For the numerical output format, we design a Code Infilling task, which involves making the LLM predict only the masked values rather than the entire code sequence. As shown in Tab. 4, generating in numerical format increases the failure ratio of model generations, e.g., the model generates repetitive results, and significantly decreases model performance. This is because the layout produced by a conditional layout generation task should be logically coherent, while predicting only the masked parts can lead to discrete values that lack this coherence. Besides, due to the autoregressive manner, where the content generated in the next step depends on the previous history, predicting layouts with more masked values can result in a higher failure probability. We also conduct a comparison between LayoutNUWA and GPT-4 (Bubeck et al., 2023). Specifically, we allow GPT-4 to perform inference by constructing the input using the CIT method. Tab. 5 shows that code instruct tuning is necessary, as using an LLM in a zero-shot manner leads to a high fail rate (100% fail rate for LLaMA2 and around 30% for GPT-4).
## 6 Conclusion
In this paper, we propose LayoutNUWA, a groundbreaking approach that treats layout generation as a code generation task, effectively enriching the semantic information of layouts and leveraging the hidden expertise of LLMs. Extensive experiments on multiple datasets have demonstrated the superiority of our method. This research has the potential to revolutionize the field of layout generation and pave the way for further exploration and development of semantic-aware layout generation approaches in various applications.
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline \multirow{2}{*}{**Task**} & \multirow{2}{*}{**Model**} & **Layout Format** & **mIoU (\(\uparrow\))** & **Align. (\(\rightarrow\))** & **Overlap (\(\rightarrow\))** & **FID (\(\downarrow\))** & **Fail (\(\downarrow\))** \\ \hline \multirow{2}{*}{**Condition**} & LayoutNUWA-N & Numerical & 0.000 & 0.000 & 0.867 & - & 78.030 \% \\ C \(\rightarrow\) S + P & LayoutNUWA-L2-DS & Code & **0.260** & **0.021** & **2.898** & **9.741** & **0.000 \%** \\ \hline \multirow{2}{*}{**Condition**} & LayoutNUWA-N & Numerical & 0.000 & 0.000 & 24.959 & 349.231 & 21.717 \% \\
**C + S \(\rightarrow\) P** & LayoutNUWA-L2-DS & Code & **0.358** & **0.020** & **2.483** & **4.682** & **0.000 \%** \\ \hline \multirow{2}{*}{**Completion**} & LayoutNUWA-N & Numerical & 0.000 & 0.000 & 16.602 & - & 29.293 \% \\ & LayoutNUWA-L2-DS & Code & **0.418** & **0.020** & **2.309** & **7.257** & **0.253 \%** \\ \hline
**Real Data** & - & - & 0.348 & 0.016 & 1.521 & 6.695 & - \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison among different output formats.
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline \multirow{2}{*}{**Task**} & \multirow{2}{*}{**Models**} & **Tuning Method** & **mIoU (\(\uparrow\))** & **Align. (\(\rightarrow\))** & **Overlap (\(\rightarrow\))** & **FID (\(\downarrow\))** & **Fail (\(\downarrow\))** \\ \hline \multirow{2}{*}{**Condition**} & LayoutNUWA-L2-DS & CTT & **0.260** & **0.021** & **2.898** & **9.741** & **0.000 \%** \\
**Condition** & w/o template & Instruct Tuning (DS) & 0.124 & 0.049 & 3.221 & 16.324 & 1.020 \% \\ C \(\rightarrow\) S + P & w/o template & Instruct Tuning (DA) & - & - & - & - & 0.000 \% \\ & w/o template/instruct & Numerical Tuning & 0.126 & 0.053 & 3.581 & 17.982 & 3.571 \% \\ \hline \multirow{2}{*}{**Condition**} & LayoutNUWA-L2-DS & CTT & **0.358** & **0.020** & **2.483** & **4.682** & **0.000 \%** \\
**Condition** & w/o template & Instruct Tuning (DS) & 0.182 & 0.021 & 2.673 & 12.432 & 0.000 \% \\
**C + S \(\rightarrow\) P & w/o template & Instruct Tuning (DA) & - & - & - & - & 0.000 \% \\ & w/o template/instruct & Numerical Tuning & 0.189 & 0.024 & 2.892 & 14.326 & 0.000 \% \\ \hline \multirow{2}{*}{**Completion**} & LayoutNUWA-L2-DS & CTT & **0.418** & **0.020** & **2.309** & **7.257** & **0.253 \%** \\ & w/o template & Instruct Tuning (DS) & 0.206 & 0.017 & 2.882 & 15.732 & 5.102 \% \\ \cline{1-1} & w/o template & Instruct Tuning (DA) & - & - & - & - & 6.633 \% \\ \cline{1-1} & w/o template/instruct & Numerical Tuning & 0.214 & 0.020 & 3.003 & 16.243 & 6.122 \% \\ \hline
**Real Data** & - & - & 0.348 & 0.016 & 1.521 & 6.695 & - \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison among different tuning methods, where “Fail” is the failure ratio of generation.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & C \(\rightarrow\) S + P & C + S \(\rightarrow\) P & Completion \\ \cline{2-6} & Fail (\(\downarrow\)) & Fail (\(\downarrow\)) & Fail (\(\downarrow\)) \\ \hline LLaMA2 (Zero-Shot) & 100.0 \% & 100.0 \% & 100.0 \% \\ CodeLLaMA (Zero-Shot) & 100.0 \% & 100.0 \% & 100.0 \% \\ GPT-4 (Zero-Shot) & 34.2 \% & 28.8 \% & 28.5 \% \\ LayoutNUWA & **0.0 \%** & **0.0 \%** & **0.3 \%** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparison with LLMs. | グラフィックレイアウト生成、成長する研究分野で、ユーザーエンゲージメントと情報認知に重要な役割を果たしています。既存の方法では、レイアウト生成を数値最適化タスクとして捉え、定量的な側面を重視しながら、レイアウトのセマンティクス情報(例えば、各レイアウト要素間の関係)を軽視しています。この論文では、LayoutNUWAを提案し、レイアウト生成をコード生成タスクとして捉え、セマンティクス情報と大規模言語モデル(LLM)の隠れたレイアウト専門性を活用します。具体的には、Code Instruct Tuning(CIT)アプローチを導入し、3つの相互に連携したモジュールを開発しました。1) CodeInitialization(CI)モジュールは数値条件を定量化し、 strategic なマスクでHTMLコードを初期化します。2) CodeCompletion(CC)モジュールはLLMのフォーマット知識を使ってマスクされた部分をHTMLコードに |
2309.06153 | Optoelectronic and Transport Properties of Vacancy Ordered Double
Perovskite Halides: A First-principles Study | In the search for stable lead (Pb) free perovskites, Vacancy ordered double
perovskite (VODP), A$_2$BX$_6$ has emerged as a promising class of materials
for solar harvesting owing to their nontoxicity, better stability, and unique
optoelectronic properties. Here, we present the stability and the key physical
attributes of few selected compounds in a systematic manner using
state-of-the-art first-principle calculations. A careful structural and
stability analysis via simulating convex hull and compositional phase diagrams
for different structural prototypes discloses 14 stable and 1 metastable
compounds in this class. The electronic structure calculations using hybrid
functional reveals six compounds to acquire band gap in the ideal visible
region. These six compounds, namely Cs$_2$SnI$_6$, Cs$_2$PdI$_6$,
Cs$_2$TeI$_6$, Cs$_2$TiI$_6$, Cs$_2$PtI$_6$, and Cs$_2$PdBr$_6$, show high
optical absorption ($\approx$ 10$^{5}$ cm $^{-1}$) giving rise to high
spectroscopic limited maximum efficiency, SLME (15-23\%) in the thin-film
thickness range. Close inspection of transport properties reveals polar optical
phonon scattering to be the dominant mechanism limiting the overall mobility.
Further analysis of the polaron excitations discloses the possibility of large
polaron formation at low to moderate defect concentrations. At high defect
concentrations, ionized impurity scattering takes over. This suggests that a
simulation-based guided control of defect concentrations during synthesis can
yield a desired candidate for promising device applications. Additionally, few
selected compounds show moderate to high electron mobility values ($\sim$13-63
cm$^2$V$^{-1}$ s$^{-1}$) at room temperature. Overall, the present study paves
an important path to help design VODP as Pb-free potential candidates for
future optoelectronic applications. | Supriti Ghorui, Jiban Kangsabanik, M. Aslam, Aftab Alam | 2023-09-12T11:53:03 | http://arxiv.org/abs/2309.06153v1 | Optoelectronic and Transport Properties of Vacancy Ordered Double Perovskite Halides: A First-principles Study
###### Abstract
In the search for stable lead (Pb) free perovskites, vacancy ordered double perovskite (VODP), A\({}_{2}\)BX\({}_{6}\), has emerged as a promising class of materials for solar harvesting owing to their nontoxicity, better stability, and unique optoelectronic properties. Recently, this class has been explored for a wide range of applications such as photovoltaics, photodetectors, photocatalysis, and light-emitting diodes. Here, we present the stability and the key physical attributes of a few selected compounds in a systematic manner using state-of-the-art first-principles calculations. A careful structural and stability analysis via simulating convex hull and compositional phase diagrams for different structural prototypes discloses 14 stable and 1 metastable compounds in this class. The electronic structure calculations using the hybrid functional reveal six compounds to acquire band gaps in the ideal visible region. These six compounds, namely Cs\({}_{2}\)SnI\({}_{6}\), Cs\({}_{2}\)PdI\({}_{6}\), Cs\({}_{2}\)TeI\({}_{6}\), Cs\({}_{2}\)TiI\({}_{6}\), Cs\({}_{2}\)PtI\({}_{6}\), and Cs\({}_{2}\)PdBr\({}_{6}\), show high optical absorption (\(\approx\) 10\({}^{5}\) cm\({}^{-1}\)) giving rise to high spectroscopic limited maximum efficiency, SLME (15-23%), in the thin-film thickness range. Close inspection of transport properties reveals polar optical phonon scattering to be the dominant mechanism limiting the overall mobility. Further analysis of the polaron excitations discloses the possibility of large polaron formation at low to moderate defect concentrations. At high defect concentrations, ionized impurity scattering takes over. This suggests that a simulation-based guided control of defect concentrations during synthesis can yield a desired candidate for promising device application. Additionally, a few selected compounds show moderate to high electron mobility values (\(\sim\)13-63 cm\({}^{2}\)V\({}^{-1}\) s\({}^{-1}\)) at room temperature. Overall, the present study paves an important path to help design VODP as Pb-free potential candidates for future optoelectronic applications.
## I I. Introduction
Lead halide perovskites(LHP) have reignited immense research interest in the Photovoltaics (PV) community due to their remarkable power conversion efficiency(PCE) of 25.6% [1] (till date) and affordable device processability. The rapid rise in PCE (3.8% to 25.6%) in a short period of time (2009-2021) is attributed to its high absorption coefficient, high charge carrier mobility, defect tolerance, and cost-effective flexible synthesis. Because of their suitable optoelectronic properties, they have also been explored as photodetectors (PD)[2; 3], photocatalysts(PC) [4; 5], and light emitting diodes (LED) [6; 7]. Yet, there remains two major challenges in their large scale scalability: (1) Lead (Pb) toxicity and (2) stability in the ambient environment. At present, major research efforts at laboratory scale have been devoted to overcome these issues without losing their original PV performance. [8; 9; 10; 11; 12] This has led to a detailed exploration of the diverse chemical space of halides perovskites (ABX\({}_{3}\))[13] and their derivatives.[14; 15; 16] Among these perovskite derivatives, three major stoichiometric classes have garnered immense research interest. One of the classes namely double perovskites (DP) with stoichiometry A\({}_{2}\)BB\({}^{2}\)X\({}_{6}\) is mainly generated via transmutation of a combination of trivalent and monovalent elements at B-sites.[17] For example, Cs\({}_{2}\)BiAgBr\({}_{6}\),[17; 18; 19] Cs\({}_{2}\)InAgCl\({}_{6}\),[20] etc. belong to DP class which have been extensively explored for various optoelectronic applications. Similarly, A\({}_{3}\)B\({}_{2}\)X\({}_{9}\) (e.g. Cs\({}_{3}\)Bi\({}_{2}\)I\({}_{9}\)[21], Cs\({}_{3}\)Sb\({}_{2}\)I\({}_{9}\)[22] etc.) and A\({}_{2}\)BX\({}_{6}\) (e.g. Cs\({}_{2}\)SnI\({}_{6}\)[23], Cs\({}_{2}\)TiI\({}_{6}\)[24] etc.) structures are constructed by replacing with trivalent and tetravalent atoms respectively and leaving a vacant B-site. Here, A\({}_{2}\)BX\({}_{6}\) is also called vacancy ordered double perovskite where corner shared alternate BX\({}_{6}\) octahedras are removed along all three directions from the unit cell as shown in Figure 1(a).
In the past few years, vacancy-ordered double perovskite family (A\({}_{2}\)BX\({}_{6}\)) has gradually drawn ample attention in a wide range of optoelectronic applications owing to their better environmental durability, tunable optical and electronic properties. For example, Cs\({}_{2}\)SnI\({}_{6}\) has been studied as a potential candidate in PV[25], LED, PD[3; 26], and PC applications due to its direct band gap nature in the visible range( 1.1-1.62 eV), a high absorption coefficient (\(\approx\)10\({}^{5}\) cm\({}^{-1}\))[23], a low to high carrier mobility (\(\approx\)2-510 cm\({}^{2}\) V\({}^{-1}\) s\({}^{-1}\))[27; 28; 29; 30; 31]. The wide range of measured mobilities of Cs\({}_{2}\)SnI\({}_{6}\) can be attributed to variations resulting from different synthesis and characterization methodologies. Additionally, significant discrepancies have been observed between theoretical and experimental results regarding the transport properties of this material.[16; 28; 32; 33] The intrinsic limitations to mobility in Cs\({}_{2}\)SnI\({}_{6}\) are still not fully understood, and the underlying scattering mechanisms governing carrier transport remain elusive. Therefore, a comprehensive and systematic study encompassing both theoretical and experimental investigations is highly desired to unravel the mobility ambiguity in Cs\({}_{2}\)SnI\({}_{6}\) and shed light on its transport characteristics. As of now, this compound
exhibits a PCE of only 2.1%.[25] In contrast, substitutional alloying in Cs\({}_{2}\)SnCl\({}_{6}\) yields high photoluminescence quantum yield (PLQY) of 95.4% making it promising for further exploration in LED applications.[34] Despite considerable investigation into its structural, electronic, and optical properties, the elucidation of charge-carrier dynamics in Cs\({}_{2}\)SnI\({}_{6}\) still poses challenges that hinder the optimization of conversion efficiencies.
Similarly, Cs\({}_{2}\)TiBr\({}_{6}\), Cs\({}_{2}\)TeI\({}_{6}\), and Cs\({}_{2}\)PtI\({}_{6}\) are also studied experimentally for PV absorbers with their band gaps in the ideal visible range: 1.8, 1.5, and 1.4 eV respectively, along with high absorption coefficients (\(\sim\)10\({}^{5}\) cm\({}^{-1}\)).[35; 36] Here, device efficiency for Cs\({}_{2}\)TiBr\({}_{6}\) as PV absorber is reported to be 3.3%.[37] Indirect band gap and material instability are reported to be responsible for poor PCE in this case. In another report, PV device with Cs\({}_{2}\)PtI\({}_{6}\) shows PCE of 13.88%, which is a remarkable improvement on the reported efficiencies among all the materials belonging to this class till date.[36] Contribution of larger carrier lifetimes along with direct band gap in ideal visible range and robust stability help Cs\({}_{2}\)PtI\({}_{6}\) to attain the high PCE. There are reports of synthesizing Pd[38], and Zr[39] based nanomaterials experimentally but not much has been explored in the direction of optoelectronics. These background clearly indicates that A\({}_{2}\)BX\({}_{6}\) class is extremely interesting and fertile from the application perspective, yet a detailed systematic study on their optoelectronic, carrier transport and phonon properties connecting these observations is lacking. Moreover, it is also noticed that substitutional alloying/doping of pure material is an important strategy to improve optoelectronic properties, which again necessitates an in-depth understanding of the pure materials themselves.
In this communication, we present a detailed and systematic study on the A\({}_{2}\)BX\({}_{6}\) class of materials by using highly accurate ab-initio calculations. First, we have performed a thorough stability analysis which includes choice of different structural prototypes, thermodynamical stability via chemical potential phase diagram and convex hull analysis, and lattice dynamics simulation. Next, we have studied the electronic properties of the stable compounds using hybrid (HSE06) functional, which is known to predict reasonably accurate electronic structure information. Optical absorption and PV device parameters are calculated on the promising set of systems showing band gaps in the ideal visible region. Finally, carrier transport properties of these compounds are studied by considering the important scattering mechanisms. The importance of electron-phonon interactions, calculated within the temperature-dependent Feynman polaron model, is also discussed in some detail. We believe that such an in-depth study not only provides a solid physical basis on these class of semiconductors but will also be immensely beneficial for researchers working on their device application in the field of PV, LED, PD, and PC.
## II II. Structural properties and stability
Vacancy ordered double perovskite, A\({}_{2}\)BX\({}_{6}\) is a class of compounds where alternate BX\({}_{6}\) octahedra are removed from the ABX\({}_{3}\) unit cell as shown in Figure 1(a). In other words, 50% B cations are missing compared to the closed-packed A\({}_{2}\)BB\({}^{\prime}\)X\({}_{6}\) perovskite structure. Here, A possesses +1 oxidation state, B has +4 oxidation state and X is halide anion with -1 oxidation state. In general, for perovskites, different crystal structures are possible depending on the ionic radii of the constituent elements. These structures are roughly dictated by few important geometrical factors as defined below,
* Goldschmidt's tolerance factor: \(t=\left(\mathrm{r_{A}}+\mathrm{r_{X}}\right)/\left[\sqrt{2}\left(\mathrm{r_{B}}+\mathrm{r_{X}}\right)\right]\)
* Octahedral factor : \(\mu=\mathrm{r_{B}}/\mathrm{r_{x}}\)
* Radius ratio : \(\mathrm{r_{A}}/\left(\mathrm{D_{XX}}-\mathrm{r_{X}}\right)\)
In the above expressions, \(\mathrm{r_{A}}\), \(\mathrm{r_{B}}\), \(\mathrm{r_{X}}\) and \(\mathrm{D_{XX}}\) are the empirical ionic radii of the constituent elements A, B, X and the nearest neighbour X-X bond length, respectively in the A\({}_{2}\)BX\({}_{6}\) structure. All the calculated parameters are tabulated in Table S1 of the supplementary information (SI).[40] The calculated Goldschmidt's tolerance factor predicts formation of cubic structures, which is also consistent with our stability analysis (discussed later) and experimental observations for few of the compounds reported in the literature.[24; 28; 29; 36; 38; 41]
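As a quick illustration of these descriptors, the sketch below evaluates them for a Cs\({}_{2}\)SnI\({}_{6}\)-like composition. The Shannon-type ionic radii and the X-X distance used here are approximate placeholder values for illustration only, not the tabulated numbers of Table S1.

```python
# Illustrative evaluation of the three geometrical factors for an A2BX6
# composition. All radii (in Angstrom) are approximate placeholders.
from math import sqrt

r_A = 1.88   # Cs+ (assumed 12-fold coordination, Shannon radius)
r_B = 0.69   # Sn4+ (assumed 6-fold coordination, Shannon radius)
r_X = 2.20   # I-
d_XX = 4.20  # nearest-neighbour X-X distance between octahedra, placeholder

t = (r_A + r_X) / (sqrt(2.0) * (r_B + r_X))   # Goldschmidt tolerance factor
mu = r_B / r_X                                # octahedral factor
rr = r_A / (d_XX - r_X)                       # radius ratio

print(f"t = {t:.2f}, mu = {mu:.2f}, r_A/(D_XX - r_X) = {rr:.2f}")
```

A tolerance factor close to 1, as obtained here, is consistent with the cubic (Fm-3m) structures predicted above.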
In this work, we have investigated the following A\({}_{2}\)BX\({}_{6}\) compounds: A= Cs; B= Ge, Te, Se, Sn, Pd, Pt, Ti, Zr, Hf; X=I, Br. For each compound, we have considered seven most common structural prototypes (as reported in International Crystal Structure Database (ICSD))[42; 43] for A\({}_{2}\)BX\({}_{6}\) class of compounds. Space group of these seven structures are Fm-3m (cubic), I4/m (tetragonal), I4/mmm (tetragonal), P-3m1 (hexagonal), Pnma (orthorhombic), P4/mnc (monoclinic), and P12\({}_{1}\)/c1 (monoclinic). These crystal structures are shown in Fig. S1 of SL[40] Most of these structures are very similar in symmetry and differ in energy only within a few meV (3-4 meV). Post structural optimization, the lowest energy structure for most of the above set of compounds turns out to be cubic (Fm-3m). It has been observed experimentally that several Cs based iodide and bromide compounds indeed crystallize in the cubic space group.[24; 28; 29; 36; 38; 41]
To further assess the chemical stability, we have calculated the convex hull energies (E\({}_{\mathrm{hull}}\)) of these compounds with respect to possible secondary phases available in ICSD, open quantum materials database (OQMD)[44; 45] and materials project (MP) database[46]. As evident from Fig. 1(b), most of the compounds lie on the convex hull i.e. E\({}_{\mathrm{hull}}\) = 0, except Cs\({}_{2}\)GeI\({}_{6}\), Cs\({}_{2}\)SeI\({}_{6}\)
Cs\({}_{2}\)PdI\({}_{6}\), and Cs\({}_{2}\)GeBr\({}_{6}\), confirming the stability of the former. For the remaining four compounds E\({}_{\rm hull}\) ranges between 5-60 meV/atom, indicating likelihood of chemical (meta/in)stability.
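A minimal sketch of how such an E\({}_{\rm hull}\) value can be obtained with pymatgen's phase-diagram tools is shown below. The compositions are real, but the total energies are placeholders standing in for the DFT energies of the target phase and the competing phases from ICSD/OQMD/Materials Project.

```python
# Hedged sketch of a convex-hull (E_hull) analysis with pymatgen.
# Total energies (eV) below are placeholders, not DFT results.
from pymatgen.core import Composition
from pymatgen.analysis.phase_diagram import PhaseDiagram, PDEntry

entries = [
    PDEntry(Composition("Cs"), -0.9),
    PDEntry(Composition("Sn"), -3.8),
    PDEntry(Composition("I"), -1.5),
    PDEntry(Composition("CsI"), -12.2),
    PDEntry(Composition("SnI4"), -13.0),
    PDEntry(Composition("Cs2SnI6"), -38.0),
]

phase_diagram = PhaseDiagram(entries)
target = entries[-1]
e_hull = phase_diagram.get_e_above_hull(target)  # eV/atom above the hull
print(f"E_hull(Cs2SnI6) = {e_hull * 1000:.1f} meV/atom")
```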
Next, in order to explore the most probable secondary phases during synthesis of Cs\({}_{2}\)BX\({}_{6}\) materials, we have calculated the compositional phase diagrams (chemical potential) for materials on the convex hull. More details about the phase diagram calculations/analysis are given in the Sec. S1(A) of SI.[40] Figure 1(c-g) shows the phase diagrams for those materials which can be potential for optoelectronic applications (based on their band gap, optical and transport properties, as discussed later). The phase diagrams for remaining compounds are displayed in Figure S2 of SI.[40] The green shaded portion shows the stability regions of these materials. The extent of the stability region directly correlates with the ease/difficulty of experimental synthesis.The theoret
Figure 1: (a) Crystal structure of A\({}_{2}\)BX\({}_{6}\) compounds in the Fm-3m space group. (b) Convex energy hull (E\({}_{\rm hull}\)) for Cs\({}_{2}\)BX\({}_{6}\) (B= Pd, Pt, Ti, Hf, Zr, Ge, Te, Se, Sn; X=I, Br). Red circle denotes the top of the bar. Compositional chemical potential phase diagrams of (c) Cs\({}_{2}\)TiI\({}_{6}\), (d) Cs\({}_{2}\)PdBr\({}_{6}\), (e) Cs\({}_{2}\)PtI\({}_{6}\), (f) Cs\({}_{2}\)SnI\({}_{6}\), and (g) Cs\({}_{2}\)TeI\({}_{6}\) with respect to competitive secondary phases. The green shaded regions show the stable regions of the corresponding materials.
ically optimized lattice parameters and bond lengths (B-X) of all the stable compounds in their cubic structure are displayed in Table S2 of SI.[40]
Further, we have checked the dynamical stability of these compounds by calculating phonon dispersions, as shown in Figure S3 of SI.[40] The absence of any visible imaginary phonon modes indicates the dynamical stability of these compounds. For Cs\({}_{2}\)SnI\({}_{6}\) and Cs\({}_{2}\)TiI\({}_{6}\), one can observe small negative phonon frequencies, the magnitude of which decreases with increasing supercell size. This is because the latter captures the effect of higher-order interatomic force constants more accurately. This is also evident in the previously reported phonon dispersion for Cs\({}_{2}\)SnI\({}_{6}\).[47, 48] Nevertheless, these compounds are already experimentally synthesized, and hence naturally stable.[28, 49]
Following stability analysis, we have further studied the electronic structures of 15 (14 stable and 1 metastable) compounds in the next section.
## III III. Electronic structure
Band structure calculations for all the compounds are initially performed using the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional[50]. As the PBE functional is well known to underestimate the band gap, we also employ the hybrid Heyd-Scuseria-Ernzerhof (HSE06)[51] functional, which gives a more accurate estimate of the band gap in comparison to experiment. The spin-orbit coupling (soc) effect is included in all the calculations. Band structures for five potential compounds calculated using the PBE+soc functional (band gaps are scissor shifted to HSE+soc values) are shown in Figure 2. In the A\({}_{2}\)BX\({}_{6}\) class of compounds, the topology of the band structure calculated using the HSE+soc functional is very similar to that calculated using the PBE+soc functional except for the enlargement of the band gap in the former (see Figure S4 of SI for a few representative cases).
Figure 2 also shows the optical transition probability (square of dipole transition matrix elements, p\({}^{2}\)) and the total/orbital projected density of states (PDOS) of Cs\({}_{2}\)TiI\({}_{6}\), Cs\({}_{2}\)PdBr\({}_{6}\), Cs\({}_{2}\)PtI\({}_{6}\), Cs\({}_{2}\)SnI\({}_{6}\), and Cs\({}_{2}\)TeI\({}_{6}\) respectively. The HSE06+soc band gap values for the respective compounds are provided in Table 1. The band structure, PDOS and respective band gap values for the other compounds are provided in Figures S5-S7 and Tables S3 and S4 of SI.[40] In the Fm-3m phase, the estimated band gap values lie between 0.72 eV and 4.31 eV for the different compounds. Optical transitions at the fundamental direct gaps are dipole forbidden for all the compounds, as confirmed by the calculated optical transition probability (p\({}^{2}\)). Here the presence of inversion symmetry plays the key role in inducing parity-forbidden transitions for these compounds, effectively increasing the optical band gap.
In the present study, we considered 9 different elements at the B site, belonging to 4 distinct groups in the periodic table. Despite all elements having a +4 oxidation state, their valence electron orbital configurations differ, resulting in distinct electronic structures, including variations in band structure and band gap types among the compounds. In the following, we shall discuss the electronic structure of representative compounds from each group and compare them with the electronic structures of other compounds within the same group, including different halides.
For Cs\({}_{2}\)TiI\({}_{6}\), band gap is indirect in nature with conduction band minimum (CBM) at X and valence band maximum (VBM) at \(\Gamma\). But the direct band gap at \(\Gamma\) is very close to indirect band gap value (\(\sim\)50 meV) (Table 1). From the orbital projected density of states (PDOS), we observe that CBM is comprised of Ti-d and I-p i.e. B-d and X-p (see Figure 2)(a,b)). The electronic band gap value is 1.77 eV which is overestimated by 0.75 eV with respect to experimental value (1.02 eV)[24]. The calculated optical band gap lies within 100 meV from the fundamental direct gap. Apart from that, the large difference between the calculated electronic band gap and optically measured experimental band gap can be attributed to the excitonic effect (not taken into account here) and the defects present in the measured sample, as discussed by B.Cucco et.al.[52]. All the electronic structure information for the rest of the compounds can be found in Figure S5-S7 and Table S3 and S4 of SI.[40] It is clearly evident that the band gap increases from Ti \(\rightarrow\) Zr \(\rightarrow\) Hf and also with I \(\rightarrow\) Br. In this group, only Cs\({}_{2}\)TiI\({}_{6}\) shows band gap in the ideal visible region.
Cs\({}_{2}\)PdI\({}_{6}\) shows indirect band gap in both the space groups with CBM at X and VBM at \(\Gamma\) (see Fig. S5 of SI). The optically allowed direct band gap (0.88 eV) is very close to the indirect band gap values (0.72 eV) (shown in Table 1 ). Experimentally, Cs\({}_{2}\)PdI\({}_{6}\) nanocrystals[41] are synthesized and and a band gap of 0.69 eV is reported. The reason behind the overestimation might be similar to what is explained for the case of Cs\({}_{2}\)TiI\({}_{6}\). In this case, the CBM is comprised of Pd-d, I-p orbitals while VBM is composed of only I-p orbital (see Fig. S5 of SI). Like Cs\({}_{2}\)PdI\({}_{6}\), Cs\({}_{2}\)PtI\({}_{6}\), Cs\({}_{2}\)PtBr\({}_{6}\), and Cs\({}_{2}\)PdBr\({}_{6}\) show similar orbitals contribution at both CBM and VBM giving rise to indirect nature of band gap. Their band gap values along with the formation energetics and different between direct and indirect band gaps are presented in SI (see Tables S3 and S4 and Fig. S5 and Fig. 2(c,e). For Cs\({}_{2}\)PtI\({}_{6}\) and Cs\({}_{2}\)PdBr\({}_{6}\), the calculated band gap is close to experimentally reported values of Cs\({}_{2}\)PtI\({}_{6}\) powder [53, 54] and Cs\({}_{2}\)PdBr\({}_{6}\) nanocrystals [38, 41] respectively. Here, we observe an increase in band gap going from Pd \(\rightarrow\) Pt and also from I \(\rightarrow\) Br. In this case, Cs\({}_{2}\)PdI\({}_{6}\), Cs\({}_{2}\)PtI\({}_{6}\), and Cs\({}_{2}\)PdBr\({}_{6}\) compounds show band gaps within the ideal visible range.
The band structure analysis of Cs\({}_{2}\)TeI\({}_{6}\) reveals that it has indirect band gap with a value of 1.85 eV, consistent
with the study by Maughan et.al.[28]. From PDOS analysis, we observe that the CBM is comprised of Te-p and I-p orbitals whereas VBM is made up of I-p orbital (see Figure 2(i) and (j)). The calculated electronic band gap value of 1.85 eV is 0.26 eV higher than the experimentally reported value [28]. For Cs\({}_{2}\)TeBr\({}_{6}\), the band gap nature and orbital contribution at both CBM and VBM is similar to that of Cs\({}_{2}\)TeI\({}_{6}\). The related electronic properties can be found in SI (see Table. S4 and Fig. S6 ). All the electronic structure information for Cs\({}_{2}\)SeBr\({}_{6}\) can be found in SI (see Table S4 and Figure S6 (c,d) ), which shows similar orbital characteristics.
For Cs\({}_{2}\)SnI\({}_{6}\), the calculated band gap value is 0.85 eV which is direct in nature with band edges (at \(\Gamma\)) in agreement with the values reported by Maughan et.al.[28] This is 0.38 eV higher than the experimentally reported value [28]. From orbital analysis, we observe that the CBM is made up of Sn-s and I-p orbitals, and VBM is comprised of I-p orbital (see Fig. 2(g,h)). For Cs\({}_{2}\)SnBr\({}_{6}\), the band gap nature and orbital contribution at both CBM and VBM remains similar to that of Cs\({}_{2}\)SnI\({}_{6}\). The related electronic properties can be found in SI (see Table S4 and Fig. S6(a,b)).
To summarize the electronic properties, one should note that the calculated electronic band gaps are always overestimated as compared to the experimentally reported optical band gap which is a well-known fact.[52] Contrary to previous reports, the optical band gaps are close to the lowest direct band gaps, confirmed by our calculation of optical transition probability. We believe that the most probable reasons for the theoretical overestimation of band gaps can be attributed to the excitonic effects (not included in the present calculations) and defects present in the experimental samples, as discussed by B.Cucco et.al.[52].
From the electronic structure analysis, we notice that
Figure 2: Band structures and the square of dipole transition matrix elements (p\({}^{2}\)) between VBM and CBM, for (a) Cs\({}_{2}\)TiI\({}_{6}\), (c) Cs\({}_{2}\)PdBr\({}_{6}\), (e) Cs\({}_{2}\)PtI\({}_{6}\), (g) Cs\({}_{2}\)SnI\({}_{6}\), and (i) Cs\({}_{2}\)TeI\({}_{6}\) respectively. (b), (d), (f), (h) and (j) show the projected density of states (PDOS) for the same set of compounds respectively. All the calculations are done using the PBE functional including the spin-orbit coupling (soc) effect, while the band gap is scissor shifted to the HSE+soc calculated values. In the band structure plots, the VBM and CBM are indicated via green and red circles respectively.
the band gaps of Cs\({}_{2}\)TeI\({}_{6}\), Cs\({}_{2}\)SnI\({}_{6}\), Cs\({}_{2}\)PdI\({}_{6}\) (Pnma), Cs\({}_{2}\)PtI\({}_{6}\), Cs\({}_{2}\)TiI\({}_{6}\), and Cs\({}_{2}\)PdBr\({}_{6}\) lie in the ideal visible range for photovoltaic application. Therefore, we shall now focus on the optical properties of these six compounds along with the well-known descriptor 'Spectroscopic limited maximum efficiency' (SLME) (as proposed by Yu et al.[55]) to better understand their potential as solar absorbers.
## IV IV. Optical Properties
Figure 3(a) shows the absorption coefficients for the above mentioned six promising compounds. All these compounds can act as potential solar absorber as their absorption coefficients are as high as 10\({}^{4}\)-10\({}^{5}\) cm\({}^{-1}\) in the visible range. The optical absorption is contributed by two factors: (1) optical joint density of states (JDOS) and (2) optical transition strength. As we can see in Figure S5(a-d),[40] the square of dipole transition matrix elements (aka transition strength) for Cs\({}_{2}\)PdI\({}_{6}\) is pretty high contributing to better optical absorption. This can be attributed to Cs-p, I-p to Pd-d transition. Apart from that, the JDOS is also likely to be high as the bands near CBM and VBM show flat nature. They are comprised of 'p' and 'd' orbitals, showing more localized nature as compared to the other compounds. In addition to Cs\({}_{2}\)PdI\({}_{6}\), Cs\({}_{2}\)SnI\({}_{6}\), and Cs\({}_{2}\)TeI\({}_{6}\) also show similar absorption coefficient spectrum. The absorption coefficient is directly related to the frequency dependent dielectric function of a semiconductor via following equation
\[\alpha(E)=\frac{2\omega}{c}\sqrt{\frac{\sqrt{\epsilon_{re}^{2}+\epsilon_{im}^ {2}}-\epsilon_{re}}{2}} \tag{1}\]
where \(E\) is the incident photon energy, \(\omega\) is the angular frequency related to \(E\) via \(E=\hbar\omega\), \(c\) is the velocity of light, and \(\epsilon_{re}\) and \(\epsilon_{im}\) are the real and imaginary parts of the dielectric function respectively. Figure 3(b) shows the thin-film thickness dependence of the spectroscopic limited maximum efficiency (SLME), which turns out to be at least 15% for all six compounds. Interestingly, we can see a higher SLME for the Cs\({}_{2}\)PdI\({}_{6}\), Cs\({}_{2}\)PtI\({}_{6}\) and Cs\({}_{2}\)TiI\({}_{6}\) compounds as compared to Cs\({}_{2}\)SnI\({}_{6}\). Such increased SLME is essentially attributed to the increased absorption spectra due to the I-p to Pd/Pt/Ti-d orbital transition as well as suitable band gaps. In Table 1, we present the simulated device parameter values for the six compounds: short-circuit photo-current density (J\({}_{\rm sc}\)), open circuit voltage (V\({}_{\rm oc}\)), fill factor (\(FF\)), maximum current density (J\({}_{\rm max}\)), and maximum voltage (V\({}_{\rm max}\)) obtained from the SLME calculation. The detailed description and method of calculation of these parameters can be found in the SI.[40] As expected, materials with higher band gaps exhibit higher V\({}_{oc}\) values, while materials with lower band gaps acquire higher J\({}_{sc}\) values. The other compounds do not have SLME as high as these six materials owing to higher band gaps. Their absorption coefficients and SLME values are shown in Figures S8 and S9 of SI respectively.[40] Furthermore, it is worth noting that the band gaps of the other compounds are distributed within the visible range (larger than 1.8 eV), which makes them suitable for LED and photocatalytic water splitting applications. Alloying at the B and X sites is another avenue to tune the optoelectronic properties of these systems and hence make them suitable for different applications.
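The sketch below illustrates Eq. (1) for a placeholder dielectric function and cross-checks the device parameters of Table 1 for Cs\({}_{2}\)TiI\({}_{6}\), assuming the standard definitions FF = J\({}_{max}\)V\({}_{max}\)/(J\({}_{SC}\)V\({}_{OC}\)) and \(\eta\) = J\({}_{max}\)V\({}_{max}\)/P\({}_{in}\) with P\({}_{in}\approx\) 100 mW cm\({}^{-2}\) for the AM1.5G spectrum; these relations are assumed here rather than quoted from Ref. [55].

```python
# Sketch of Eq. (1) plus a consistency check of the Table 1 device parameters.
import numpy as np

HBAR_EV_S = 6.582e-16   # reduced Planck constant in eV*s
C_CM_S = 2.998e10       # speed of light in cm/s

def absorption_coefficient(E_eV, eps_re, eps_im):
    """Absorption coefficient alpha(E) in cm^-1 following Eq. (1)."""
    omega = E_eV / HBAR_EV_S
    return (2.0 * omega / C_CM_S) * np.sqrt(
        (np.sqrt(eps_re**2 + eps_im**2) - eps_re) / 2.0
    )

# placeholder dielectric function at 2 eV, of the order typical for these halides
print(f"alpha(2 eV) ~ {absorption_coefficient(2.0, 5.0, 3.0):.2e} cm^-1")

# Cs2TiI6 row of Table 1: J_sc, J_max (mA/cm^2) and V_oc, V_max (V)
J_sc, J_max, V_oc, V_max = 16.64, 16.34, 1.50, 1.39
FF = (J_max * V_max) / (J_sc * V_oc)
eta = (J_max * V_max) / 100.0          # P_in = 100 mW/cm^2 (AM1.5G)
print(f"FF = {FF:.2f}, SLME = {100 * eta:.1f} %")
```

With the tabulated values this reproduces FF \(\approx\) 0.91 and SLME \(\approx\) 23%, consistent with the Cs\({}_{2}\)TiI\({}_{6}\) entries of Table 1.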
## V V. Transport properties
Detailed analysis of optoelectronic properties and calculation of solar efficiency (SLME) reveals six compounds to be promising. This can be attributed to their optimal band gaps falling within the ideal visible region of the solar spectrum coupled with their excellent absorption coefficients. However, in a practical photovoltaic device, extraction of charge carriers is one of the key component determining its power conversion efficiency. As such, mobility of the charge carriers is an integral quantity dictating the promise of a semiconductor for solar harvesting. Most of the past theoretical studies on photovoltaic materials rely on calculation of transport properties based on constant relaxation time approximation (RTA). Within this approximation, all the scattering mechanisms are averaged out via a single relaxation time (chosen to be approximately 10 fs). This practice, however, can be misleading as the carrier relaxation time is a complex parameter which sensitively depends on a number of physical properties and can be significantly different for different materials belonging to the same class (as illustrated in this study). In this section, we perform a thorough analysis of the carrier mobilities of these compounds considering three relevant scattering mechanisms, namely, acoustic phonons (ADP), ionized impurities (IMP), and polar optical phonons (POP) scattering. We have excluded piezoelectric scattering due to the inherent centro-symmetry present in these compounds. In Figure 4 and 5, we show the temperature and defect concentration dependence of electron and hole mobilities (\(\mu_{e}\) and \(\mu_{h}\)) for these compounds. Contribution of individual scattering mechanisms on these mobilities for the six compounds are provided in Figure S10 to S15 of SI.[40]
Figure S16 of SI[40] displays the total relaxation times of six compounds at varying defect concentrations, ranging from 10\({}^{10}\) cm\({}^{-3}\) to 10\({}^{20}\) cm\({}^{-3}\) at three different representative temperatures (100 K, 300 K, and 500 K) for both hole and electron transport. For defect concentrations in the low to moderate range, the relaxation times remains almost constant. However, as the defect concentration increases, relaxation times vary in an irregular manner. To comprehend the cause of this anomalous behavior, a more in-depth analysis was conducted. The relaxation times for all three scattering mechanisms were calculated for each compound, and plotted in Figures S17 to S22.[40] A close inspection of these data confirms that in the low to moderate concentration range, the primary scattering mechanism is due to POP scattering. In contrast, as the concentration increases into the higher range, the dominant scattering mechanism shifts to IMP scattering, resulting in the emergence of anomalous behavior. Such unusual behavior is also reflected in the mobility shown in Fig. 4.
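For orientation, the sketch below shows a Matthiessen-type combination of the three scattering channels into a total relaxation time, followed by a simple Drude-like mobility estimate. This only illustrates how the channels compete; it is not the full momentum-relaxation-time BTE solution used by AMSET, and all numbers are placeholders.

```python
# Matthiessen-type combination of scattering rates (illustrative only).
from scipy import constants as const

tau_pop = 5e-15   # s, placeholder polar-optical-phonon relaxation time
tau_imp = 8e-14   # s, placeholder ionized-impurity relaxation time
tau_adp = 1e-12   # s, placeholder acoustic-phonon relaxation time

tau_tot = 1.0 / (1.0 / tau_pop + 1.0 / tau_imp + 1.0 / tau_adp)

m_eff = 0.5 * const.m_e                 # placeholder carrier effective mass
mu = const.e * tau_tot / m_eff          # Drude-like mobility, m^2 V^-1 s^-1
print(f"mu ~ {mu * 1e4:.1f} cm^2 V^-1 s^-1")
```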
Speaking about the behavior of mobilities for each individual compounds, one can notice that at low temperature (100 K), the hole mobility (\(\mu_{h}\)) is highest for Cs\({}_{2}\)TiI\({}_{6}\) (\(\sim\)20.9 cm\({}^{2}\)V\({}^{-1}\) s\({}^{-1}\)). With increasing temperature, \(\mu_{h}\) decreases slowly to reach a shallow minimum and increases again with increasing defect concentration. At
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline Compound & \(E_{g}^{(expj)}\) & \(E_{g}\)(HSE+soc) & \(\Delta E_{g}^{da}\) & J\({}_{SC}\) & J\({}_{max}\) & V\({}_{OC}\) & V\({}_{max}\) & FF & SLME \\ & (eV) & (eV) & (meV) & (mA cm\({}^{-2}\)) & (mA cm\({}^{-2}\)) & (V) & (V) & & (\(\eta\%\)) \\ \hline \hline Cs\({}_{2}\)TiI\({}_{6}\) & 1.02[24] & 1.77 (ID) & 72 & 16.64 & 16.34 & 1.50 & 1.39 & 0.91 & 22.78 \\ \hline Cs\({}_{2}\)PdBr\({}_{6}\) & 1.6[38], 1.69 [41] & 1.61 (ID) & 110 & 17.62 & 17.26 & 1.35 & 1.25 & 0.91 & 21.63 \\ \hline Cs\({}_{2}\)PtI\({}_{6}\) & 1.25[53], 1.37 [54], 1.4 [36] & 1.31 (ID) & 149 & 22.23 & 21.66 & 1.08 & 0.98 & 0.89 & 21.26 \\ \hline Cs\({}_{2}\)SnI\({}_{6}\) & 1.25 [28], 1.62 [29] & 0.87 (D) & 41 & 32.58 & 31.33 & 0.72 & 0.64 & 0.85 & 20.07 \\ \hline Cs\({}_{2}\)TeI\({}_{6}\) & 1.59 [28] & 1.85 (ID) & 190 & 12.95 & 12.72 & 1.55 & 1.45 & 0.92 & 18.44 \\ \hline Cs\({}_{2}\)PdI\({}_{6}\) & 1.41 [41] & 0.72 (ID) & 166 & 43.20 & 40.94 & 0.54 & 0.46 & 0.81 & 18.97 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Simulated band gap (\(E_{g}\)), difference between the electronic and optically allowed direct band gap (\(\Delta E_{g}^{da}\)), short-circuit current density (J\({}_{SC}\)), open-circuit voltage (V\({}_{OC}\)), current density (J\({}_{max}\)) and voltage (V\({}_{max}\)) at maximum power, spectroscopic limited maximum efficiency (SLME), and fill factor (FF) for the Cs\({}_{2}\)TiI\({}_{6}\), Cs\({}_{2}\)PdBr\({}_{6}\), Cs\({}_{2}\)PtI\({}_{6}\), Cs\({}_{2}\)SnI\({}_{6}\), Cs\({}_{2}\)TeI\({}_{6}\), and Cs\({}_{2}\)PdI\({}_{6}\) compounds. ID and D indicate the indirect and direct nature of the band gaps respectively. All the device-related parameters are shown for 500 nm thickness at 298 K. Experimental band gaps (\(E_{g}^{(expt)}\)) are also listed for comparison.
higher temperature, Cs\({}_{2}\)PdI\({}_{6}\) shows the highest hole mobility (\(\sim\)5.9 cm\({}^{2}\)V\({}^{-1}\) s\({}^{-1}\) @300 K and \(\sim\)4.2 cm\({}^{2}\)V\({}^{-1}\) s\({}^{-1}\) @500 K ) among all the compounds. This compound also shows the highest electron mobility (\(\sim\)183 cm\({}^{2}\)V\({}^{-1}\) s\({}^{-1}\) @ 100 K, \(\sim\)51 cm\({}^{2}\)V\({}^{-1}\) s\({}^{-1}\) @ 300 K, and \(\sim\)32 cm\({}^{2}\)V\({}^{-1}\) s\({}^{-1}\) @ 500 K) throughout the temperature range. At room temperature, the hole mobilities remain relatively low but except Cs\({}_{2}\)TiI\({}_{6}\), electron mobilities show moderate to high values (\(\sim\)13-63 cm\({}^{2}\)V\({}^{-1}\) s\({}^{-1}\)). This is commensurate with the electronic band structures of these compounds where the VBM shows flat bands whereas the CBM is more dispersive. Consequently, n-type doping could prove advantageous for efficient charge carrier collection in photovoltaic devices, aligning with the experimental findings for this class of compounds.[27; 29; 56] A closer look at the individual contributions from the different scattering mechanisms show that at low to moderate defect concentrations (\(<10^{18}\) cm\({}^{-3}\)), POP scattering is the dominant scattering mechanism limiting the mobilities. With increasing temperatures, the number of activated polar optical phonons increase, and as a result we see a decrease in overall mobility going from 100 K \(\rightarrow\) 300 K \(\rightarrow\) 500 K. At higher defect concentrations( 10\({}^{18}\)-10\({}^{20}\) cm\({}^{-3}\)), we see ionized impurity scattering begins to dominate as can be seen from Figures S10(a,b,c)-S15(a,b,c) of SI.[40] At these concentrations, there is one more mechanism that starts to impact the carrier mobility, which is the screening of polar optical phonons by free carriers. This in effect reduces the POP scattering, effectively increasing the overall mobility in some cases. Now, the temperature has also an effect on this screening mechanism. At higher temperatures, there are more activate polar optical phonons, which require a higher density of free carriers to effectively screen the Coulomb field created by these phonons. This is clearly evident from our SI plots (see Figures S10(a,b,c)-S15(a,b,c)).[40] In all the cases, ADP scattering remains low which is common in hybrid perovskites arising out of small deformation potentials.[57; 58]
In Figure 5(a-f), we show average hole and electron mobilities with respect to temperatures ranging from 100 K to 500 K for three different defect concentrations, low (10\({}^{10}\) cm\({}^{-3}\)), moderate (10\({}^{15}\) cm\({}^{-3}\)) and high (10\({}^{20}\) cm\({}^{-3}\)). Due to weak dependence on IMP scattering in low to moderate defect concentrations, we see the carrier mobility remains similar in these two defect concentrations. But we can see that at higher concentrations, IMP starts to dominate. As such, controlling the defect concentrations can impact device efficiencies, not only because at higher defect concentrations, IMP becomes the dominant scattering mechanism, but also because the prevalence of free carriers will start to screen the POP scattering effect. As expected, the overall mobility has a strong temperature dependence for most of the compounds and remains high to moderate for the electrons whereas the hole mobility values remain consistently low.
Figure 4: (a,b,c) Hole mobility (\(\mu_{h}\)) and (d,e,f) electron mobility (\(\mu_{e}\)) for Cs\({}_{2}\)TiI\({}_{6}\), Cs\({}_{2}\)PdBr\({}_{6}\), Cs\({}_{2}\)PtI\({}_{6}\), Cs\({}_{2}\)SnI\({}_{6}\), Cs\({}_{2}\)PdI\({}_{6}\), and Cs\({}_{2}\)TeI\({}_{6}\) compounds as a function of defect concentrations at three different temperatures, T= 100K, T=300 K and T= 500K respectively.
The above analysis reveals that in the A\({}_{2}\)BX\({}_{6}\) class, polar optical phonons play a dominant role at the realistic defect concentrations relevant for photovoltaic application. As such, next we study the properties of the polaronic states via calculating the Fröhlich interactions under the temperature-dependent Feynman polaron model.[59] In polar semiconductors, for example, halide perovskites and their derivatives, the interaction between charge carriers and the macroscopic electric field generated by longitudinal optical (LO) phonons is well known to be the dominant scattering mechanism near room temperature, which is expected to be the case for our studied materials as well.[56; 57; 60; 61; 32] To investigate the same, we studied the influence of changing the B-site in A\({}_{2}\)BX\({}_{6}\) on the electron-phonon coupling (EPC). Within the Fröhlich interaction model, the interaction strength (\(\alpha\)) is defined as
\[\alpha=\frac{1}{4\pi\epsilon_{0}}\frac{1}{2}\left(\frac{1}{\epsilon_{\infty}}- \frac{1}{\epsilon_{static}}\right)\frac{e^{2}}{\hbar\omega_{LO}}\left(\frac{2 m^{*}\omega_{LO}}{\hbar}\right)^{1/2} \tag{2}\]
where \(\epsilon_{0}\) is the dielectric constant of vacuum, \(\epsilon_{\infty}\) and \(\epsilon_{static}\) are the high-frequency and static dielectric constants of the semiconductor, \(\hbar\) is the reduced Planck constant, \(\omega_{LO}\) is the characteristic angular LO frequency where all the infrared active optical phonon branches are taken into account via a spectral average,[60] and \(m^{*}\) is the carrier effective mass. Table 2 displays all the associated values related to the Fröhlich interaction for electrons for the six compounds. The corresponding list of parameters for the holes of these six compounds is reported in Table S5 of SI.[40] To validate our simulation, we compare the calculated value of \(\alpha\) for Cs\({}_{2}\)SnI\({}_{6}\) with recent literature and observe a fair agreement.[32] In the case of the A\({}_{2}\)BX\({}_{6}\) class, the calculated \(\alpha\)-values lie in the moderate range (1\(<\alpha<\) 6). Estimated values of the polaron radius (l\({}_{p}\)) indicate the formation of large polarons, similar to what is observed for hybrid halide perovskites and double perovskites.[60; 61; 62] The \(\alpha_{e}\) value is highest for Cs\({}_{2}\)TiI\({}_{6}\), mainly due to the higher electron effective mass compared to the other compounds. Additionally, taking an inference from the electronic structure of these materials, we see that the CBM in Cs\({}_{2}\)TiI\({}_{6}\) has a contribution from Ti-d and I-p orbitals whereas for Cs\({}_{2}\)SnI\({}_{6}\), it is Sn-s and I-p orbitals. Now, Ti-d orbitals are more localized, arising out of the flat band, and hence give a higher effective mass.
For other compounds, we see more dispersive bands (see Figure 2) at the CBM, and the corresponding \(\alpha\) values are close to that of Cs\({}_{2}\)SnI\({}_{6}\). Interestingly, the hole mobility turns out to be significantly lower than the electron mobility. To conclude, large polarons are the main carriers behind the moderate mobility of our studied compounds. These crucial observations clearly indicate the importance of studying charge carrier behavior in the A\({}_{2}\)BX\({}_{6}\) class of compounds and its implications for future applications.
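A hedged numerical sketch of Eq. (2) is given below. The dielectric constants, effective LO frequency and carrier effective mass are placeholder values of the order of magnitude expected for this material class, not the entries of Table 2.

```python
# Frohlich coupling constant alpha of Eq. (2), with placeholder inputs.
import numpy as np
from scipy import constants as const

eps_inf = 5.0                  # high-frequency relative permittivity (placeholder)
eps_static = 10.0              # static relative permittivity (placeholder)
omega_lo = 2 * np.pi * 4e12    # effective LO angular frequency in rad/s (placeholder)
m_star = 0.5 * const.m_e       # carrier effective mass (placeholder)

alpha = (
    (1.0 / (4 * np.pi * const.epsilon_0))
    * 0.5 * (1.0 / eps_inf - 1.0 / eps_static)
    * const.e**2 / (const.hbar * omega_lo)
    * np.sqrt(2 * m_star * omega_lo / const.hbar)
)
print(f"Frohlich coupling alpha ~ {alpha:.2f}")
```

For these placeholder inputs \(\alpha\approx 2\), i.e. within the moderate-coupling window \(1<\alpha<6\) discussed above.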
## VI VI. Conclusion
In summary, we performed an accurate and systematic investigation of Pb-free vacancy ordered double perovskites (A\({}_{2}\)BX\({}_{6}\)) from the optoelectronic application perspective. We carried out a thorough stability analysis considering different structural prototypes and carefully simulating the convex hull energy diagram including all possible secondary phases. We found 14 compounds to be stable and 1 in the metastable phase. For stable compounds, we further simulated the compositional phase diagrams to assist the experimentalists identifying the most probable secondary phases which might emerge during synthesis. Next, a careful electronic structure analysis reveals six compounds, namely Cs\({}_{2}\)TeI\({}_{6}\), Cs\({}_{2}\)SnI\({}_{6}\), Cs\({}_{2}\)PdI\({}_{6}\), Cs\({}_{2}\)PtI\({}_{6}\), Cs\({}_{2}\)TiI\({}_{6}\), and Cs\({}_{2}\)PdBr\({}_{6}\) to possess optically allowed band gaps in the ideal visible range (0.8-1.85 eV). The detailed investigation of optical properties confirms that few of these compounds possess favorable optoelectronic properties facilitating better efficiency than some of the existing ones. A close inspection of transport properties reveals that Cs\({}_{2}\)PdBr\({}_{6}\), Cs\({}_{2}\)PtI\({}_{6}\), Cs\({}_{2}\)SnI\({}_{6}\), Cs\({}_{2}\)PdI\({}_{6}\), and Cs\({}_{2}\)TeI\({}_{6}\) compounds acquire moderate to high electron mobilities (\(\sim\)13 - 63 cm\({}^{2}\)V\({}^{-1}\) s\({}^{-1}\)). In all the cases, polar optical phonons (POP) remain the dominant scattering mechanism at low to moderate defect concentrations. At high defect concentrations, ionized impurity scattering starts to dominate while accumulation of free carriers shows a screening effect on the POP scattering. This study is expected to facilitate the necessary base and guidance for future experimental synthesis of some of these compounds to achieve desired features for promising device applications.
## VII VII. Computational details
First-principles calculations are carried out using density functional theory (DFT)[63] with the projector augmented wave (PAW)[64] basis set as implemented in the Vienna Ab-initio Simulation Package (VASP).[65; 66; 67; 68; 69] A plane wave energy cutoff of 520 eV, a \(\Gamma\)-centered 4\(\times\)4\(\times\)4 k-mesh, and the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional[50] were employed to perform the geometry optimization. The crystal structure was relaxed with a force tolerance criterion of 0.001 eV Å\({}^{-1}\). The spin-orbit coupling (soc) effect is included while simulating the electronic and optical properties. The hybrid (HSE06) functional[51] is used to calculate the band gap and band edges, which is known to provide a more accurate estimate for the same. Optical absorption spectra are simulated within the independent particle approximation and then the absorption onset value is scissor shifted to the HSE06 band gap values. This method makes it possible to accurately assess the SLME for the materials under consideration. The chemical phase diagrams are drawn using the Chesta software package.[70] Phonon dispersion is calculated using density functional perturbation theory (DFPT) with a \(\Gamma\)-centered 4\(\times\)4\(\times\)4 k-mesh under the supercell method. The 2nd order force constant is calculated using 2\(\times\)2\(\times\)2 supercells of primitive cells for cubic structures and in similar proportion for other structures. Next, the rotational sum rule is applied using the hiphive package[71] to renormalize the phonon frequencies. Transport calculations are performed using the AMSET code,[57] where we have considered three different scattering mechanisms, namely scattering due to acoustic phonons (ADP), ionized impurities (IMP), and polar optical phonons (POP). Piezoelectric scattering is not included due to the centro-symmetric crystal structure of A\({}_{2}\)BX\({}_{6}\), whereas screening due to free carriers at high defect concentrations is included. This program uses the momentum relaxation time approximation (MRTA) of the Boltzmann transport equation (BTE) to determine scattering rates and carrier mobilities. Polaron-related parameters were simulated via implementing a temperature-dependent Feynman polaron model.[60; 72] Born effective charges and the static and high-frequency dielectric tensors were calculated using density functional perturbation theory (DFPT) as implemented in VASP.
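For reference, a minimal pymatgen sketch of the relaxation inputs quoted above is given below. Only the 520 eV cutoff, the \(\Gamma\)-centered 4\(\times\)4\(\times\)4 mesh and the 0.001 eV Å\({}^{-1}\) force criterion are taken from the text; the remaining tags are assumed, typical choices rather than the settings actually used.

```python
# Sketch of VASP relaxation inputs via pymatgen (assumed tags are marked).
from pymatgen.io.vasp.inputs import Incar, Kpoints

incar = Incar({
    "ENCUT": 520,        # plane-wave cutoff in eV (from the text)
    "PREC": "Accurate",  # assumed precision setting
    "IBRION": 2,         # assumed relaxation algorithm (conjugate gradient)
    "ISIF": 3,           # assumed: relax cell shape, volume and ions
    "EDIFFG": -0.001,    # force convergence of 0.001 eV/A (from the text)
})
kpoints = Kpoints.gamma_automatic((4, 4, 4))  # Gamma-centered 4x4x4 mesh

print(incar)
print(kpoints)
```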
The effective mass has been calculated using the following equation:
\[m^{*}=3\left[\frac{1}{m_{xx}^{*}}+\frac{1}{m_{yy}^{*}}+\frac{1}{m_{zz}^{*}}\right]^{-1} \tag{3}\]
where, \(m_{ii}^{*}\) is the effective mass in the \(i\)-th direction (\(i\)=x,y,z).[73; 74; 75; 76]
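A short worked example of this harmonic-mean average, with placeholder tensor components, is given below.

```python
# Direction-averaged effective mass from Eq. (3); components are placeholders.
m_xx, m_yy, m_zz = 0.4, 0.4, 1.2   # in units of the electron mass

m_avg = 3.0 / (1.0 / m_xx + 1.0 / m_yy + 1.0 / m_zz)
print(f"m* = {m_avg:.2f} m_e")      # ~0.51 m_e for these values
```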
## VIII Acknowledgments
SG acknowledges financial support from IIT Bombay for a research fellowship. AA and MA acknowledge the National Center for Photovoltaic Research and Education (NCPRE), funded by the Ministry of New and Renewable Energy (MNRE), Government of India, and IIT Bombay for possible funding to support this research.
| Pb-無含有 Perovskite の探索において、空位配列の双 Perovskite (VODP)、A$_2$BX$_6$は、その非毒性、安定性、ユニークな光電特性により太陽エネルギー harvesting に寄与する、有望な材料として注目を集めています。ここでは、先端の第一原理計算を用いて、これらの材料の安定性と主要な物理的特性を体系的に説明します。異なる構造プロトタイプに対するConvex hull と組成相ダイアグラムをシミュレーションした構造と安定性解析により、このクラスには 14 の安定な化合物と 1 の metastable 化合物があります。ハイブリッド機能を用いた電子構造計算により、6 つの化合物にバンドギャップを有することが明らかになりました。これらの 6 つの化合物、つまり Cs$_2$SnI$_6$, Cs$_2$PdI$_6$, Cs$_2$TeI$_6$, |
2309.09260 | Visualizing the Zhang-Rice singlet, molecular orbitals and pair
formation in cuprate | The parent compound of cuprates is a charge-transfer-type Mott insulator with
strong hybridization between the Cu $3d_{\mathrm x^2-y^2}$ and O $2p$ orbitals.
A key question concerning the pairing mechanism is the behavior of doped holes
in the antiferromagnetic (AF) Mott insulator background, which is a
prototypical quantum many-body problem. It was proposed that doped hole on the
O site tends to form a singlet, known as Zhang-Rice singlet (ZRS), with the
unpaired Cu spin. But experimentally little is known about the properties of a
single hole and the interplay between them that leads to superconductivity.
Here we use scanning tunneling microscopy to visualize the electronic states in
hole-doped $\mathrm{Ca_2CuO_2Cl_2}$, aiming to establish the atomic-scale local
basis for pair formation. A single doped hole is shown to have an in-gap state
and a clover-shaped spatial distribution that can be attributed to a localized
ZRS. When the dopants are close enough, they develop delocalized molecular
orbitals with characteristic stripe- and ladder-shaped patterns, accompanied by
the opening of a small gap around the Fermi level ($E_{\mathrm F}$). With
increasing doping, the molecular orbitals proliferate in space and gradually
form densely packed plaquettes, but the stripe and ladder patterns remain
nearly the same. The low-energy electronic states of the molecular orbitals are
intimately related to the local pairing properties, thus play a vitally
important role in the emergence of superconductivity. We propose that the
Cooper pair is formed by two holes occupying the stripe-like molecular orbital,
while the attractive interaction is mediated by the AF spin background. | Shusen Ye, Jianfa Zhao, Zhiheng Yao, Sixuan Chen, Zehao Dong, Xintong Li, Luchuan Shi, Qingqing Liu, Changqing Jin, Yayu Wang | 2023-09-17T12:52:02 | http://arxiv.org/abs/2309.09260v1 | # Visualizing the Zhang-Rice singlet, molecular orbitals and pair formation in cuprate
###### Abstract
We present a new method for the formation of a pair formation in cuprate. We show that the formation of a pair formation in cuprate is a key ingredient for the formation of a pair formation in cuprate. | ```
カップレート(銅酸化物高温超伝導体)の親化合物は、Cu $3d_{x^2-y^2}$ 軌道とO $2p$ 軌道の強い混成を持つ電荷移動型Mott絶縁体です。ペアリング機構に関する鍵となる問題は、反強磁性(AF)Mott絶縁体を背景としたドープされたホールの振る舞いであり、これは量子多体問題の典型例です。Oサイトにドープされたホールは、対を成さないCuスピンとZhang-Rice一重項(ZRS)と呼ばれる一重項を形成すると提案されています。しかし、実験的には、単一ホールの性質や、それらが超伝導につながる相互作用についての知識は限られています。この論文では、ホールドープされた $\mathrm{Ca_2CuO_2Cl_2}$ を対象とした走査トンネル顕微鏡を用いて電子状態を調べました。 |
2306.17803 | A reduction of the separability problem to SPC states in the filter
normal form | It was recently suggested that a solution to the separability problem for
states that remain positive under partial transpose composed with realignment
(the so-called symmetric with positive coefficients states or simply SPC
states) could shed light on entanglement in general. Here we show that such a
solution would solve the problem completely. Given a state in $
\mathcal{M}_k\otimes\mathcal{M}_m$, we build a SPC state in $
\mathcal{M}_{k+m}\otimes\mathcal{M}_{k+m}$ with the same Schmidt number. It is
known that this type of state can be put in the filter normal form retaining
its type. A solution to the separability problem in
$\mathcal{M}_k\otimes\mathcal{M}_m$ could be obtained by solving the same
problem for SPC states in the filter normal form within
$\mathcal{M}_{k+m}\otimes\mathcal{M}_{k+m}$. This SPC state can be built
arbitrarily close to the projection on the symmetric subspace of $
\mathbb{C}^{k+m}\otimes\mathbb{C}^{k+m}$. All the information required to
understand entanglement in $ \mathcal{M}_s\otimes\mathcal{M}_t$ $(s+t\leq k+m)$
lies inside an arbitrarily small ball around that projection. We also show that
the Schmidt number of any state $\gamma\in\mathcal{M}_n\otimes\mathcal{M}_n$
which commutes with the flip operator and lies inside a small ball around that
projection cannot exceed $\lfloor\frac{n}{2}\rfloor$. | Daniel Cariello | 2023-06-30T17:04:36 | http://arxiv.org/abs/2306.17803v2 | # A reduction of the separability problem to SPC states in the filter normal form
###### Abstract.
It was recently suggested that a solution to the separability problem for states that remain positive under partial transpose composed with realignment (the so-called symmetric with positive coefficients states or simply SPC states) could shed light on entanglement in general. Here we show that such a solution would solve the problem completely. Given a state in \(\mathcal{M}_{k}\otimes\mathcal{M}_{m}\), we build a SPC state in \(\mathcal{M}_{k+m}\otimes\mathcal{M}_{k+m}\) with the same Schmidt number. It is known that this type of state can be put in the filter normal form retaining its type. A solution to the separability problem in \(\mathcal{M}_{k}\otimes\mathcal{M}_{m}\) could be obtained by solving the same problem for SPC states in the filter normal form within \(\mathcal{M}_{k+m}\otimes\mathcal{M}_{k+m}\). This SPC state can be built arbitrarily close to the projection on the symmetric subspace of \(\mathbb{C}^{k+m}\otimes\mathbb{C}^{k+m}\). All the information required to understand entanglement in \(\mathcal{M}_{s}\otimes\mathcal{M}_{t}\) (\(s+t\leq k+m\)) lies inside an arbitrarily small ball around that projection.
## 1. Introduction
A discussion on a series of coincidences regarding a triad of quantum states was presented through the references [1, 2, 4, 5]. The triad mentioned in these articles is formed by the states that remain positive under partial transpose (PPT states), the states that remain positive under partial transpose composed with realignment (SPC states) and the states that remain the same under realignment (invariant under realignment states).
These coincidences ultimately led to the claim that there is a triality pattern in entanglement theory [5], where every proven result for one type of such states has counterparts for the other two. It was also claimed that a solution to the separability problem [9, 10] for SPC states or invariant under realignment states could provide insights for adapting to the most important type: the positive under partial transpose states.
In this note, we prove that SPC states are indeed extremely important to entanglement theory, since we can embed the entire set of states into SPC states and, therefore, reduce the separability problem to them.
We begin with the notion of the Schmidt number of a bipartite mixed state \(\gamma\in\mathcal{M}_{k}\otimes\mathcal{M}_{m}\)[13, 14]. Consider all the ways to express the state \(\gamma\) as \(\sum_{i=1}^{n}v_{i}v_{i}^{*}\), where \(v_{i}\in\mathbb{C}^{k}\otimes\mathbb{C}^{m}\), and define its Schmidt number by the following minimum over all these expressions
\[SN(\gamma)=\min\left\{\max_{i\in\{1,\ldots,n\}}\left\{SR(v_{i})\right\}, \gamma=\sum_{i=1}^{n}v_{i}v_{i}^{*}\right\},\]
where \(SR(v_{i})\) stands for the Schmidt rank of \(v_{i}\in\mathbb{C}^{k}\otimes\mathbb{C}^{m}\). Recall that \(\gamma\) is separable if and only if \(SN(\gamma)=1\).
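For readers who want to experiment with these notions, the Schmidt rank of a pure vector \(v\in\mathbb{C}^{k}\otimes\mathbb{C}^{m}\) is the rank of its \(k\times m\) reshaping, as the following minimal numpy sketch illustrates (this is our own illustration, not code from the paper; the Schmidt number of a mixed state, being a minimum over all decompositions, is much harder to compute and is not attempted here).

```python
import numpy as np

def schmidt_rank(v, k, m, tol=1e-10):
    """Schmidt rank of v in C^k (x) C^m: the rank of its k x m reshaping."""
    return np.linalg.matrix_rank(v.reshape(k, m), tol=tol)

k, m = 2, 3
a = np.random.randn(k) + 1j * np.random.randn(k)
b = np.random.randn(m) + 1j * np.random.randn(m)
print(schmidt_rank(np.kron(a, b), k, m))       # 1: product vectors have Schmidt rank 1
v = np.kron(a, b) + np.kron(np.roll(a, 1), np.roll(b, 1))
print(schmidt_rank(v, k, m))                   # generically 2
```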
Then we recall another result proved in item \(b)\) of [4, Theorem 2]. It says that if \(\beta\in\mathcal{M}_{k}\otimes\mathcal{M}_{k}\) is a state supported on the anti-symmetric subspace of \(\mathbb{C}^{k}\otimes\mathbb{C}^{k}\) (denoted here by \(\mathbb{C}^{k}\wedge\mathbb{C}^{k}\)), then
\[SN\left(P^{k,2}_{sym}+\epsilon\frac{\beta}{tr(\beta)}\right)=\frac{1}{2}SN(\beta),\]
for every \(\epsilon\in\ \left]0,\frac{1}{6}\right]\), where \(P^{k,2}_{sym}\) stands for the orthogonal projection on the symmetric subspace of \(\mathbb{C}^{k}\otimes\mathbb{C}^{k}\) and \(tr(\beta)\) is the trace of \(\beta\).
The idea is quite simple now: given any state \(\gamma\in\mathcal{M}_{k}\otimes\mathcal{M}_{m}\), we must create a state \(\widetilde{\gamma}\in\mathcal{M}_{k+m}\otimes\mathcal{M}_{k+m}\) such that
\[\mathrm{Im}(\widetilde{\gamma})\subset\mathbb{C}^{k+m}\wedge\mathbb{C}^{k+m} \text{ and }SN(\widetilde{\gamma})=2SN(\gamma).\]
Therefore, for every \(\epsilon\in\ \left]0,\frac{1}{6}\right]\),
\[SN\left(P^{k+m,2}_{sym}+\epsilon\frac{\widetilde{\gamma}}{tr(\widetilde{ \gamma})}\right)=\frac{1}{2}SN(\widetilde{\gamma})=SN(\gamma).\]
Now, \(\gamma\) is separable (\(SN(\gamma)=1\)) if and only if
\[P^{k+m,2}_{sym}+\epsilon\frac{\widetilde{\gamma}}{tr(\widetilde{\gamma})} \text{ is separable }\Big{(}SN\left(P^{k+m,2}_{sym}+\epsilon\frac{\widetilde{\gamma}}{tr( \widetilde{\gamma})}\right)=1\Big{)}.\]
We show that the partial transpose of \(\delta=P^{k+m,2}_{sym}+\epsilon\frac{\widetilde{\gamma}}{tr(\widetilde{\gamma})}\) is positive definite, i.e. \(\delta^{\Gamma}>0\); therefore \(\delta\) can be put in the filter normal form (see Theorem 3.1). Actually, in the same theorem, we obtain a stronger result: we prove that the partial transpose composed with realignment of this state is positive definite, i.e. \(\mathcal{R}(\delta^{\Gamma})>0\). Hence \(\delta\) is a SPC state for any state \(\gamma\in\mathcal{M}_{k}\otimes\mathcal{M}_{m}\).
Notice that we embed the entire set of states of \(\mathcal{M}_{k}\otimes\mathcal{M}_{m}\) into the SPC states within a ball of radius arbitrarily small around \(P^{k+m,2}_{sym}\in\mathcal{M}_{k+m}\otimes\mathcal{M}_{k+m}\).
In addition, it was proved in [5, Corollary 4.6] that there is an invertible matrix \(U\in\mathcal{M}_{k+m}\) such that \((U\otimes U)\delta(U^{*}\otimes U^{*})\) is in the filter normal form. Now, \(\mathcal{R}((U\otimes U)\delta(U^{*}\otimes U^{*})^{\Gamma})\) remains positive definite, as explained in our corollary 3.3. Hence \((U\otimes U)\delta(U^{*}\otimes U^{*})\) is a SPC state in the filter normal form. Thus, we reduce the separability problem to the SPC case in the filter normal form.
This filter normal form has been used to improve some separability criteria [6, 7]. It was not clear until now whether studying states in the filter normal form would suffice to fully understand entanglement. We have finally solved this matter: it is enough to study only SPC states in the filter normal form.
This note is organized as follows: In section 2, we construct \(\widetilde{\gamma}\in\mathcal{M}_{k+m}\otimes\mathcal{M}_{k+m}\) described above and in section 3, we reduce the separability problem to the SPC case.
## 2. Preliminaries
In this section we construct \(\widetilde{\gamma}\in\mathcal{M}_{k+m}\otimes\mathcal{M}_{k+m}\) supported on \(\mathbb{C}^{k+m}\wedge\mathbb{C}^{k+m}\) such that \(SN(\widetilde{\gamma})=2SN(\gamma)\) (See lemma 2.3), given a state \(\gamma\in\mathcal{M}_{k}\otimes\mathcal{M}_{m}\), but first let us fix some notation.
Let \(P^{k,2}_{anti}\in\mathcal{M}_{k}\otimes\mathcal{M}_{k}\), \(P^{k,2}_{sym}\in\mathcal{M}_{k}\otimes\mathcal{M}_{k}\) be the orthogonal projections onto the anti-symmetric and symmetric subspaces of \(\mathbb{C}^{k}\otimes\mathbb{C}^{k}\). In addition, let \(V,W\) be subspaces of \(\mathbb{C}^{k}\) and consider \(V\wedge W\) as the subspace of \(\mathbb{C}^{k}\otimes\mathbb{C}^{k}\) generated by all \(v\wedge w=v\otimes w-w\otimes v\), where \(v\in V\) and \(w\in W\).
**Definition 2.1**.: _Let \(C=\begin{pmatrix}Id_{k\times k}\\ 0_{m\times k}\end{pmatrix}\otimes\begin{pmatrix}0_{k\times m}\\ Id_{m\times m}\end{pmatrix}\) and \(Q=P^{k+m,2}_{anti}C\)._
**Remark 2.2**.: _Notice that \(SR(Cv)=SR(v)\) and \(SR(Qv)=2SR(v)\), for every \(v\in\mathbb{C}^{k}\otimes\mathbb{C}^{m}\)._
The next lemma fills in the details needed to construct the aforementioned \(\widetilde{\gamma}\in\mathcal{M}_{k+m}\otimes\mathcal{M}_{k+m}\).
**Lemma 2.3**.: _Let \(C,Q\) be as in definition 2.1. Then_
1. \(C^{*}Q=\frac{1}{2}Id\in\mathcal{M}_{k}\otimes\mathcal{M}_{m}\)__
2. \(SR(C^{*}v)=\frac{1}{2}SR(v)\)_, for every_ \(v\in(\mathbb{C}^{k}\times\vec{0}_{m})\wedge(\vec{0}_{k}\times\mathbb{C}^{m})\)_._
3. \(SN(Q\gamma Q^{*})=2SN(\gamma)\) _and_ \(\mathrm{Im}(Q\gamma Q^{*})\subset\mathbb{C}^{k+m}\wedge\mathbb{C}^{k+m}\)_, for every state_ \(\gamma\in\mathcal{M}_{k}\otimes\mathcal{M}_{m}\)_._
Proof.: 1) Let \(F\in\mathcal{M}_{k+m}\otimes\mathcal{M}_{k+m}\) be the flip operator and recall that
\[P^{k+m,2}_{anti}=\frac{1}{2}(Id-F)\in\mathcal{M}_{k+m}\otimes\mathcal{M}_{k+m}.\]
Now, let \(a\otimes b\in\mathbb{C}^{k}\otimes\mathbb{C}^{m}\) and consider
\[C^{*}FC(a\otimes b)=C^{*}F(a\times\vec{0}_{m})\otimes(\vec{0}_{k}\times b)=C^ {*}(\vec{0}_{k}\times b)\otimes(a\times\vec{0}_{m})=\vec{0}_{k}\otimes\vec{0} _{m}.\]
Hence \(C^{*}FC=0\) and
\[C^{*}P^{k+m,2}_{anti}C=\frac{1}{2}(C^{*}C+C^{*}FC)=\frac{1}{2}C^{*}C=\frac{1}{ 2}Id\ \ \in\mathcal{M}_{k}\otimes\mathcal{M}_{m}.\]
2) If \(v\in(\mathbb{C}^{k}\times\vec{0}_{m})\wedge(\vec{0}_{k}\times\mathbb{C}^{m})\) then there are linearly independent vectors \(a_{1},\ldots,a_{n}\) of \(\mathbb{C}^{k}\) and linearly independent vectors \(b_{1},\ldots,b_{n}\) of \(\mathbb{C}^{m}\) such that
\[v=\sum_{i=1}^{n}(a_{i}\times\vec{0}_{m})\wedge(\vec{0}_{k}\times b_{i}).\]
Hence \((a_{1}\times\vec{0}_{m}),\ldots,(a_{n}\times\vec{0}_{m}),(\vec{0}_{k}\times b_ {1}),\ldots,(\vec{0}_{k}\times b_{n})\) are linearly independent and \(SR(v)=2n\).
Finally, notice that \(C^{*}v=\sum_{i=1}^{n}a_{i}\otimes b_{i}\) and its Schmidt rank is \(n=\frac{1}{2}SR(v)\).
3) First, by remark 2.2, \(SN(Q\gamma Q^{*})\leq 2SN(\gamma)\).
Next, by item 1), \(C^{*}Q\gamma Q^{*}C=\frac{1}{4}\gamma\) and, by item 2), \(SN(C^{*}Q\gamma Q^{*}C)\leq\frac{1}{2}SN(Q\gamma Q^{*})\).
These three pieces of information together imply
\[SN(\gamma)=SN(C^{*}Q\gamma Q^{*}C)\leq\frac{1}{2}SN(Q\gamma Q^{*})\leq SN( \gamma).\]
Finally, since \(\mathrm{Im}(Q)\subset\mathbb{C}^{k+m}\wedge\mathbb{C}^{k+m}\), we get
\[\mathrm{Im}(Q\gamma Q^{*})\subset\mathbb{C}^{k+m}\wedge\mathbb{C}^{k+m}.\]
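The construction above is easy to spot-check numerically. The sketch below is ours, not the author's (small dimensions chosen purely for illustration); it builds \(C\) and \(Q\) from Definition 2.1 and verifies item 1) of Lemma 2.3 together with Remark 2.2.

```python
import numpy as np

def flip(d):
    """Flip (swap) operator F on C^d (x) C^d: F (x (x) y) = y (x) x."""
    F = np.zeros((d * d, d * d))
    for i in range(d):
        for j in range(d):
            F[i * d + j, j * d + i] = 1.0
    return F

def schmidt_rank(v, p, q, tol=1e-10):
    return np.linalg.matrix_rank(v.reshape(p, q), tol=tol)

k, m = 2, 3
d = k + m
A = np.vstack([np.eye(k), np.zeros((m, k))])        # (k+m) x k block of Definition 2.1
B = np.vstack([np.zeros((k, m)), np.eye(m)])        # (k+m) x m block of Definition 2.1
C = np.kron(A, B)
P_anti = (np.eye(d * d) - flip(d)) / 2
Q = P_anti @ C

# Lemma 2.3, item 1): C* Q = (1/2) Id on C^k (x) C^m
assert np.allclose(C.conj().T @ Q, 0.5 * np.eye(k * m))

# Remark 2.2: SR(Cv) = SR(v) and SR(Qv) = 2 SR(v)
v = np.random.randn(k * m) + 1j * np.random.randn(k * m)
print(schmidt_rank(v, k, m), schmidt_rank(C @ v, d, d), schmidt_rank(Q @ v, d, d))  # typically 2 2 4
```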
## 3. The Embedding and The Reduction
The next theorem is the key to reduce the separability problem to SPC states in the filter normal form (See corollary 3.3).
In order to properly explain it, we need the following notation: \(\mathcal{R}(\delta)\) and \(\delta^{\Gamma}\) stand for the realignment map [11, 12] and the partial transpose of \(\delta\in\mathcal{M}_{k}\otimes\mathcal{M}_{k}\) [9, 10], respectively. In addition, if \(\delta=\sum_{i=1}^{n}A_{i}\otimes B_{i}\) then \(\delta_{A}=\sum_{i=1}^{n}A_{i}tr(B_{i})\) and \(\delta_{B}=\sum_{i=1}^{n}B_{i}tr(A_{i})\).
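For concreteness, both maps can be implemented in a few lines of numpy under one common index convention (the references [11, 12] may use an equivalent variant differing by a transpose); the assertions reproduce the identities \(F^{\Gamma}=uu^{t}\), \(\mathcal{R}(Id+uu^{t})=uu^{t}+Id\) and \(\mathcal{R}(\delta^{\Gamma})=(\delta F)^{\Gamma}\) used in the proofs below. This is our own illustrative sketch, not code from the paper.

```python
import numpy as np

def partial_transpose(M, d):
    """Partial transpose on the second factor of M acting on C^d (x) C^d."""
    return M.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d * d, d * d)

def realign(M, d):
    """Realignment map R with R(A (x) B) = vec(A) vec(B)^T (row-major vec)."""
    return M.reshape(d, d, d, d).transpose(0, 2, 1, 3).reshape(d * d, d * d)

d = 4
F = np.eye(d * d).reshape(d, d, d, d).transpose(0, 1, 3, 2).reshape(d * d, d * d)  # flip operator
u = np.eye(d).reshape(-1)                                   # u = sum_i e_i (x) e_i
assert np.allclose(partial_transpose(F, d), np.outer(u, u))                  # F^Gamma = u u^t
assert np.allclose(realign(np.eye(d * d) + np.outer(u, u), d),
                   np.outer(u, u) + np.eye(d * d))                           # R(Id + u u^t) = u u^t + Id
delta = np.random.randn(d * d, d * d)
assert np.allclose(realign(partial_transpose(delta, d), d),
                   partial_transpose(delta @ F, d))                          # R(delta^Gamma) = (delta F)^Gamma
```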
**Theorem 3.1**.: _Given \(\epsilon\in\ \ \big{]}0,\frac{1}{6}\big{]}\) and \(Q\) as in definition 2.1, consider the positive map \(T:\mathcal{M}_{k}\otimes\mathcal{M}_{m}\rightarrow\mathcal{M}_{k+m}\otimes \mathcal{M}_{k+m}\) defined by_
\[T(\gamma)=tr(Q\gamma Q^{*})P_{sym}^{k+m,2}+\epsilon Q\gamma Q^{*}.\]
_This linear map possesses the following properties:_
1. \(SN(T(\gamma))=SN(\gamma)\) _for every state_ \(\gamma\in\mathcal{M}_{k}\otimes\mathcal{M}_{m}\)_._
2. \(T(\gamma)^{\Gamma}\) _and_ \(\mathcal{R}(T(\gamma)^{\Gamma})\) _are positive definite. Hence_ \(T(\gamma)\) _is a PPT/SPC state._
Proof.: First, by lemma 2.3, \(SN(Q\gamma Q^{*})=2SN(\gamma)\).
Notice that \(tr(Q\gamma Q^{*})>0\) and, by [4, Theorem 2], we have \(SN(T(\gamma))=\)
\[=SN\left(\frac{1}{tr(Q\gamma Q^{*})}T(\gamma)\right)=SN\left(P_{sym}^{k+m,2}+ \frac{\epsilon}{tr(Q\gamma Q^{*})}Q\gamma Q^{*}\right)=\frac{1}{2}SN(Q\gamma Q ^{*})=SN(\gamma).\]
This completes the proof of item (1). Let us prove item (2).
Recall that \(P_{sym}^{k+m,2}=\frac{1}{2}(Id+F)\), where \(F\in\mathcal{M}_{k+m}\otimes\mathcal{M}_{k+m}\) is the flip operator.
Since \(F^{\Gamma}=uu^{t}\), where \(u=\sum_{i=1}^{k+m}e_{i}\otimes e_{i}\) and \(\{e_{1},\ldots,e_{k+m}\}\) is the canonical basis of \(\mathbb{C}^{k+m}\),
\[(P_{sym}^{k+m})^{\Gamma}=\frac{1}{2}(Id+uu^{t})\]
and its smallest eigenvalue is \(\frac{1}{2}\).
It is known, by [5, Lemma 3.1], that
\[\frac{\epsilon}{tr(Q\gamma Q^{*})}\|(Q\gamma Q^{*})^{\Gamma}\|_{\infty}\leq \frac{\epsilon}{tr(Q\gamma Q^{*})}\|(Q\gamma Q^{*})_{A}\|_{\infty}=\frac{ \epsilon}{tr((Q\gamma Q^{*})_{A})}\|(Q\gamma Q^{*})_{A}\|_{\infty}\leq\epsilon. \tag{3.1}\]
Hence
\[T(\gamma)^{\Gamma}=tr(Q\gamma Q^{*}).\left(\frac{1}{2}(Id+uu^{t})+\frac{ \epsilon}{tr(Q\gamma Q^{*})}(Q\gamma Q^{*})^{\Gamma}\right)\]
is positive definite. Now let us prove the second assertion of item (2).
Since \(\mathcal{R}(Id+uu^{t})=uu^{t}+Id\),
\[\mathcal{R}\left(\left(P_{sym}^{k+m}+\frac{\epsilon\;Q\gamma Q^{*}}{tr(Q \gamma Q^{*})}\right)^{\Gamma}\right)=\frac{1}{2}(Id+uu^{t})+\mathcal{R} \left(\left(\frac{\epsilon\;Q\gamma Q^{*}}{tr(Q\gamma Q^{*})}\right)^{\Gamma }\right).\]
Finally, for every state \(\delta\) such that \(\delta F=-\delta\), we have
\[-\delta^{\Gamma}=(\delta F)^{\Gamma}=\mathcal{R}(\delta^{\Gamma}), \tag{3.2}\]
by item (7) of [5, Lemma 2.3].
By item 3) of lemma 2.3, \(\mathrm{Im}(Q\gamma Q^{*})\subset\mathbb{C}^{k+m}\wedge\mathbb{C}^{k+m}\), hence
\[\frac{\epsilon\ Q\gamma Q^{*}}{tr(Q\gamma Q^{*})}F=-\frac{\epsilon\ Q\gamma Q ^{*}}{tr(Q\gamma Q^{*})}.\]
Thus, by equation (3.2),
\[\mathcal{R}\left(\left(P_{sym}^{k+m}+\frac{\epsilon\ Q\gamma Q^{*}}{tr(Q \gamma Q^{*})}\right)^{\Gamma}\right)=\frac{1}{2}(Id+uu^{t})-\left(\epsilon\ \frac{Q\gamma Q^{*}}{tr(Q\gamma Q^{*})}\right)^{\Gamma}.\]
By inequality (3.1), this matrix is positive definite.
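Continuing the numpy sketches above (with flip, partial_transpose, realign and the matrices C, Q and dimensions k, m, d as defined there), the conclusion of the theorem can be spot-checked on a random state. The check below is ours and purely illustrative; it is not a substitute for the proof.

```python
import numpy as np
# assumes flip, partial_transpose, realign, C, Q, k, m, d from the previous sketches

def random_state(n):
    X = np.random.randn(n, n) + 1j * np.random.randn(n, n)
    rho = X @ X.conj().T
    return rho / np.trace(rho)

eps = 1 / 6                                    # any eps in ]0, 1/6] is allowed by the theorem
P_sym = (np.eye(d * d) + flip(d)) / 2
gamma = random_state(k * m)
QgQ = Q @ gamma @ Q.conj().T
T_gamma = np.trace(QgQ).real * P_sym + eps * QgQ          # the map T of Theorem 3.1

pt = partial_transpose(T_gamma, d)
for name, X in [("T(gamma)^Gamma", pt), ("R(T(gamma)^Gamma)", realign(pt, d))]:
    lam_min = np.linalg.eigvalsh((X + X.conj().T) / 2).min()
    print(name, lam_min > 0)                   # both should print True
```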
**Remark 3.2**.: _Notice that \(\frac{1}{tr(Q\gamma Q^{*})}T(\gamma)\) belongs to a small ball around \(P_{sym}^{k+m,2}\) for every state \(\gamma\in\mathcal{M}_{k}\otimes\mathcal{M}_{m}\). It is an easy task to modify our matrices \(C\) and \(Q\) in order to embed every state of \(\mathcal{M}_{s}\otimes\mathcal{M}_{t}\ (s+t\leq k+m)\) inside the same ball. Hence all the information required to understand entanglement in \(\mathcal{M}_{s}\otimes\mathcal{M}_{t}\ (s+t\leq k+m)\) lies inside this small ball around \(P_{sym}^{k+m,2}\)._
The next corollary is the reduction.
**Corollary 3.3**.: _The separability problem in \(\mathcal{M}_{k}\otimes\mathcal{M}_{m}\) can be reduced to states in the filter normal form, more specifically to SPC states in the filter normal form._
Proof.: By the previous theorem, it is obvious that \(\gamma\) is separable (\(SN(\gamma)=1\)), if and only, if \(T(\gamma)\) is separable (\(SN(T(\gamma))=1\)).
We also noticed there that \(T(\gamma)^{\Gamma}\) is positive definite. So there are invertible matrices \(R,S\in\mathcal{M}_{k+m}\) such that \(\phi=(R\otimes S)T(\gamma)^{\Gamma}(R\otimes S)^{*}\) is in the filter normal form, i.e., \(\phi_{A}=\phi_{B}=Id_{k+m\times k+m}\). Hence \((R\otimes\overline{S})T(\gamma)(R^{*}\otimes S^{t})\) is in the filter normal form as well.
Actually, since \(\mathcal{R}(T(\gamma)^{\Gamma})\) is positive definite, we can do better. By [5, Corollary 4.6], we can find an invertible matrix \(O\in\mathcal{M}_{k+m}\) such that \(\delta=(O\otimes O)T(\gamma)(O^{*}\otimes O^{*})\) is in the filter normal form. Notice that
\[\mathcal{R}(\delta^{\Gamma})=(O\otimes\overline{O})\mathcal{R}(T(\gamma)^{ \Gamma})(O^{*}\otimes O^{t}),\]
by item 3 of [5, Lemma 2.3]. Therefore, \(\mathcal{R}(\delta^{\Gamma})\) is also positive definite. Hence \(\delta\) is a SPC state in the filter normal form.
So we can solve the separability in \(\mathcal{M}_{k}\otimes\mathcal{M}_{m}\) by solving the separability problem for SPC states in the filter normal form in \(\mathcal{M}_{k+m}\otimes\mathcal{M}_{k+m}\).
The reader may wonder if it is really necessary to add \(tr(Q\gamma Q^{*})P_{sym}^{k+m}\) to \(Q\gamma Q^{*}\) in order to obtain a state that can be put in the filter normal form. The answer to this question is given in the next proposition, but it requires one simple definition: If \(\delta=\sum_{i=1}^{n}A_{i}\otimes B_{i}\in\mathcal{M}_{k}\otimes\mathcal{M}_{k}\), define \(G_{\delta}:\mathcal{M}_{k}\rightarrow\mathcal{M}_{k}\) as \(G_{\delta}(X)=\sum_{i=1}^{n}tr(A_{i}X)B_{i}\).
**Proposition 3.4**.: _If \(\gamma\in\mathcal{M}_{k}\otimes\mathcal{M}_{m}\) is a state such that \(k\neq m\) then \(Q\gamma Q^{*}\in\mathcal{M}_{k+m}\otimes\mathcal{M}_{k+m}\) cannot be put in the filter normal form._
Proof.: Let \(C\) and \(Q\) be as in definition 2.1 and define \(P=P_{sym}^{k+m}C\). Assume without loss of generality that \(k>m\).
Notice that
\[\delta=\frac{1}{2}(C\gamma C^{*}+F(C\gamma C^{*})F)=P\gamma P^{*}+Q\gamma Q^{ *},\]
where \(F\in\mathcal{M}_{k+m}\otimes\mathcal{M}_{k+m}\) is the flip operator.
Now, \(G_{\delta}:\mathcal{M}_{k+m}\rightarrow\mathcal{M}_{k+m}\), as defined just before this proposition, satisfies
\[G_{\delta}\left(\begin{pmatrix}Id_{k\times k}&0\\ 0&0\end{pmatrix}\right)=\frac{1}{2}G_{C\gamma C^{*}}\left(\begin{pmatrix}Id_{ k\times k}&0\\ 0&0\end{pmatrix}\right)+\frac{1}{2}\overbrace{G_{FC\gamma C^{*}F}\left( \begin{pmatrix}Id_{k\times k}&0\\ 0&0\end{pmatrix}\right)}^{=\ 0}=\frac{1}{2}\begin{pmatrix}0&0\\ 0&\gamma_{B}\end{pmatrix}.\]
Since \(\delta-Q\gamma Q^{*}\) is positive semidefinite, the map \(G_{\delta}-G_{Q\gamma Q^{*}}=G_{\delta-Q\gamma Q^{*}}\) is positive. Hence
\[\text{rank}\left(G_{Q\gamma Q^{*}}\left(\begin{pmatrix}Id_{k\times k}&0\\ 0&0\end{pmatrix}\right)\right)\leq\text{rank}\left(G_{\delta}\left(\begin{pmatrix} Id_{k\times k}&0\\ 0&0\end{pmatrix}\right)\right)=\text{rank}(\gamma_{B})\leq m<k.\]
Notice that the image of a rank \(k\) positive semidefinite Hermitian matrix by \(G_{Q\gamma Q^{*}}\) has rank smaller than \(k\). So \(G_{Q\gamma Q^{*}}\) is not a rank non-decreasing map. The rank non-decreasing property for \(G_{Q\gamma Q^{*}}\) is necessary for the state \(Q\gamma Q^{*}\) to be put in the filter normal form [3, 8]. Hence \(Q\gamma Q^{*}\) cannot be put in the filter normal form
## Summary and Conclusion
In this work we reduced the separability problem to SPC states in the filter normal form. Given a state in \(\mathcal{M}_{k}\otimes\mathcal{M}_{m}\), we built a SPC state in \(\mathcal{M}_{k+m}\otimes\mathcal{M}_{k+m}\) with the same Schmidt number without any knowledge of its value. Therefore, in order to solve the separability problem in \(\mathcal{M}_{k}\otimes\mathcal{M}_{m}\) for any kind of state, we can do it by solving the problem for SPC states in \(\mathcal{M}_{k+m}\otimes\mathcal{M}_{k+m}\). It is known that such states can be put in the filter normal form preserving their SPC structure.
## 4. Disclosure Statement
No potential conflict of interest was reported by the author.
## 5. Data Availability Statement
Data sharing not applicable to this article as no datasets were generated or analysed during the current study. | |
2302.14625 | mmSense: Detecting Concealed Weapons with a Miniature Radar Sensor | For widespread adoption, public security and surveillance systems must be
accurate, portable, compact, and real-time, without impeding the privacy of the
individuals being observed. Current systems broadly fall into two categories --
image-based which are accurate, but lack privacy, and RF signal-based, which
preserve privacy but lack portability, compactness and accuracy. Our paper
proposes mmSense, an end-to-end portable miniaturised real-time system that can
accurately detect the presence of concealed metallic objects on persons in a
discrete, privacy-preserving modality. mmSense features millimeter wave radar
technology, provided by Google's Soli sensor for its data acquisition, and
TransDope, our real-time neural network, capable of processing a single radar
data frame in 19 ms. mmSense achieves high recognition rates on a diverse set
of challenging scenes while running on standard laptop hardware, demonstrating
a significant advancement towards creating portable, cost-effective real-time
radar based surveillance systems. | Kevin Mitchell, Khaled Kassem, Chaitanya Kaul, Valentin Kapitany, Philip Binner, Andrew Ramsay, Roderick Murray-Smith, Daniele Faccio | 2023-02-28T15:06:03 | http://arxiv.org/abs/2302.14625v1 | # MMSENSE: DETECTING CONCEALED WEAPONS WITH A MINIATURE RADAR SENSOR
###### Abstract
For widespread adoption, public security and surveillance systems must be accurate, portable, compact, and real-time, without impeding the privacy of the individuals being observed. Current systems broadly fall into two categories - image-based which are accurate, but lack privacy, and RF signal-based, which preserve privacy but lack portability, compactness and accuracy. Our paper proposes _mmSense_, an end-to-end portable miniaturised real-time system that can accurately detect the presence of concealed metallic objects on persons in a discrete, privacy-preserving modality. mmSense features millimeter wave radar technology, provided by Google's Soli sensor for its data acquisition, and TransDope, our real-time neural network, capable of processing a single radar data frame in 19 ms. mmSense achieves high recognition rates on a diverse set of challenging scenes while running on standard laptop hardware, demonstrating a significant advancement towards creating portable, cost-effective real-time radar based surveillance systems.
Kevin Mitchell \({}^{\dagger}\)\({}^{\S}\), Khaled Kassem \({}^{\dagger}\)\({}^{\S}\), Chaitanya Kaul \({}^{\ddagger}\), Valentin Kapitany \({}^{\dagger}\), Philip Binner \({}^{\dagger}\), Andrew Ramsay \({}^{\ddagger}\)
Roderick Murray-Smith \({}^{\ddagger}\), Daniele Faccio \({}^{\dagger}\)
Footnote †: Equal Contribution
Real-time signal processing, mmWave radars, Vision Transformer
## 1 Introduction
Radar solutions developed on Frequency Modulated Continuous Wave (FMCW) technology have shown promising success through their ability to serve as a capable and versatile basis for computational sensing and short range wireless communication systems [1]. Such radars, operating at millimeter wave (mmWave) frequency, can be used for robust gesture generation and recognition [2, 3], and even measure distances with _mm_ accuracy [4]. Furthermore, mmWave radars have the potential to serve as a basis for concealed metallic object detection (e.g. knives, guns etc) which presents a novel and most importantly, privacy-preserving manner of real-time surveillance. The principles of mmWave metal detection rely on the underlying physics of RF waves- radio frequency (RF) waves that fall in the 30-300 GHz range between microwaves and terahertz waves. This frequency band corresponds to wavelengths of 1-10 mm. Within various forms of spectral imaging (e.g. IR, UV), one chooses the waveband that interacts with the object/scene to be imaged, whilst ignoring any obfuscating features. The same is true when detecting metals concealed on humans, where the mmWave waveband is appropriate because the waves from a mmWave RF source pass through the thin layers of clothes, and are reflected highly by the body, plus any hidden objects between the two [5]. Figure 1 depicts this concept empirically. We believe there is a niche in the security field for a portable technology that can screen for illegal metallic weapons, whilst still allowing people to maintain their freedom and privacy in public without security bottlenecks and conventional image-capturing cameras.
Figure 1: The RF waves from a Google Soli reflect from a person standing 1.5m away from the radar with and without a knife. To mitigate the variations in specular reflection, the subject rotated through 90\({}^{\circ}\) and an average over 60s was used in each scene. The reflected signal received via receiver 2 (of 3) of the Soli is converted into a range profile by computing an FFT over it, and plotted here. The difference between the two range profiles shows the potential of using mmWave radars like the Soli for detecting metallic objects.
This work is not the first to propose mmWave sensing for metal detection. Fusion of mmWave radar and vision data has already helped create accurate detection systems for autonomous driving applications [6]. The comparison in performance of Convolutional and Recurrent Neural Networks (CNNs and RNNs) for metal detection using magnetic impedance sensors has been extensively evaluated in [7]. Their system is compact; however, they need to scan the complete scene, and restrict themselves to large metal sheets that are visible in the Line-of-Sight (LOS) of their sensor. [8] use mmWave radar sensors to alert robots to the presence of humans. Of most relevance to our study are [9, 10]: the former, [9], provides a comprehensive guide on the metal detection capabilities of a \(77-81\) GHz radar chip, but does so by comparing intensities of the reflected signal with the intensities of their own model of a body without the presence of the metallic object. Their work does not look at concealed metallic objects but ones that are already visible. [10] created an AI powered metal detection system capable of working in real-time. The system, however, processes its data differently, is prohibitively expensive, and is considerably bulkier than ours. One fundamental advantage our system has over all existing mmWave systems proposed for similar applications is the use of a radar sensor that has the widest Field of View (FOV), smallest form factor and least power consumption amongst its competitors. The Soli is capable of illuminating its surroundings with a \(150^{\circ}\) FOV radar pulse. The lower power consumption of the Soli is due to the fact that it transmits 16 chirps at a pulse repetition frequency of 2000 Hz in a single burst (each burst is transmitted at 25Hz), and then stops transmitting until the next burst to save power [3]. This saves considerable power compared to mmWave radars used in existing works that continuously transmit chirps. Compared to current radar based surveillance systems, our technology does not need to sweep a scene to work, but provides inference on-the-fly by illuminating the scene with RF waves and processing the received signal.
Our work is intended to disrupt the trend of specialised surveillance and imaging systems which are becoming increasingly expensive to install and operate, by using an inexpensive, compact device capable of being mounted in various locations throughout open spaces, which can function in real time. To this end, we present the use of a commercial mmWave radar transceiver to detect the presence of concealed objects on people in real time, in a privacy preserving manner. We focus on high frequency (60GHz), short range (up to 3m) sensing using Google's Soli sensor, primarily due its miniature form factor, low power consumption, portability, and novel sensing characteristics. The Soli is designed for Human-Computer Interaction applications, and has shown success in'macro' radar-based computational imaging tasks (detecting motion, gestures etc). Its application to detecting objects within the movement is unexplored and challenging. The Soli captures a superposition of reflected energy from different parts of a scene using high frequency RF waves: this results in poor lateral spatial resolution, while detecting even the smallest amount of motion. This makes metal detection challenging when there is plenty of movement in the scene i.e., in all practical real world scenarios. To mitigate this challenge, we propose a novel, real-time Vision Transformer model that can exploit semantic sequential relations in the preprocessed radar data and recognize the presence of a concealed metallic object on a person in a scene while ignoring objects such as wallets, keys, belts and mobile phones.
The following are our main contributions: (1) We present mmSense - a novel, end-to-end framework capable of detecting concealed metallic objects on people in a scene using only raw radar data without any human intervention. (2) mmSense is real-time, and can potentially use off-the-shelf commercially available hardware making it easy to replicate and deploy. (3) We open source _mmSense_ including all our datasets and models with the hopes of facilitating further research in this novel field of Artificial Intelligence powered concealed metal detection with mmWave RF signals.
## 2 MmSense
Our mmSense pipeline comprises of three components - a Google Soli radar for data acquisition, an Intel RealSense D435 Time-of-Flight (TOF) camera for visualizing results, and an API capable of acquiring and processing radar data streams in real-time. A single burst of the Soli's transmitted signal is received across 3 antennas. For each RF illumination of the scene by the radar, it receives back a signal \(I\in\mathbb{R}^{P\times C}\) where \(I\) is the imaged scene as perceived by the radar and \(P\) is the number of chirps received by the radar across its \(C(=3)\) antennas. We operate the Soli at a frequency of 60GHz with 1GHz, the maximum permitted bandwidth \(BW\). This gives us a range resolution \(R_{r}=\frac{c}{2BW}=15\)cm, where \(c\) refers to the speed of light. This is the minimum distance that the radar can separate in its LOS between two distinct points. The Soli transmits and receives a series of chirps that are grouped into bursts- we define the number of chirps for our system to be 16, and collect bursts at 25Hz, giving us 25 bursts of Soli data in one second. In this configuration, we can detect up to a maximum range of 9.6m.
The Soli hardware has a corresponding API provided by Google and implemented in C++. We built a C++ application around this API which allows us to interface with the Soli radar in real-time (e.g. selecting a range profile) and receiving the bursts generated from the radar. Our application supports streaming the Soli bursts directly to a Python script using the ZeroMQ1 messaging framework. The bursts are relayed immediately upon being received by the device with no additional buffering, and are ultimately parsed by a Python module which extracts both the parameters associated with the burst (e.g. a hardware-supplied timestamp) and the raw radar data.
Footnote 1: [https://github.com/zeromq/cppzmq](https://github.com/zeromq/cppzmq)
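As a rough illustration of the receiving side described above, a minimal pyzmq subscriber might look as follows. The endpoint, socket pattern, dtype and burst layout are all assumptions on our part, since the paper does not document the wire format.

```python
import numpy as np
import zmq

NUM_CHANNELS, NUM_CHIRPS, SAMPLES_PER_CHIRP = 3, 16, 64   # burst layout (assumed)

context = zmq.Context()
socket = context.socket(zmq.SUB)               # assuming a PUB/SUB pattern
socket.connect("tcp://localhost:5555")         # assumed endpoint of the C++ bridge
socket.setsockopt_string(zmq.SUBSCRIBE, "")

for _ in range(25):                            # roughly one second of bursts at 25 Hz
    msg = socket.recv()                        # one burst per message (assumption)
    burst = np.frombuffer(msg, dtype=np.float32)           # raw chirp samples are real-valued
    burst = burst.reshape(NUM_CHANNELS, NUM_CHIRPS, SAMPLES_PER_CHIRP)
    # hand the burst to the range-Doppler pipeline sketched below
```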
After parsing the raw chirps and their timestamps, we create a Range Doppler [11] transformation of the signal. This is done via a series of Fast Fourier Transforms (FFT) applied to the data. First, we calculate the complex value range profile (RP) of the radar signal. This is done via an FFT of the radar chirps received by the 3 antennas. As the Soli's signal is a superposition of reflections from a scene, the RP data can be interpreted as how well the separate contributions of the RF scatters in the scene are resolved. This gives us an estimate of the geometry of the scene. A Complex Range Doppler (CRD) plot is then calculated as an FFT over each channel of the radar's complex value range profile. Here, the range represents the distance of the object in a scene from the Soli, and the Doppler corresponds to the radial velocity of the object towards the Soli. We use the magnitude of the CRD for our experiments, which is obtained in the following way, \(\texttt{ARD}(r,d)=|\texttt{CRD}(r,d)|\), where \(\texttt{ARD}\) refers to the Absolute
Range Doppler, \(r\) and \(d\) are the range and doppler bins, and \(|\cdot|\) is the absolute value.
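A compact numpy sketch of this processing chain (our illustration, with an assumed number of fast-time samples per chirp) is given below; the 15 cm range resolution and the 9.6 m maximum range quoted above are consistent with 64 range bins.

```python
import numpy as np

def absolute_range_doppler(burst):
    """burst: (channels, chirps, samples) real-valued chirp data from one Soli burst."""
    rp = np.fft.fft(burst, axis=-1)                          # complex range profile per chirp
    crd = np.fft.fftshift(np.fft.fft(rp, axis=1), axes=1)    # Doppler FFT across the 16 chirps
    return np.abs(crd)                                       # ARD(r, d) = |CRD(r, d)|

# Range resolution c / (2 * BW) = 3e8 / (2 * 1e9) = 0.15 m; 64 range bins x 0.15 m
# matches the 9.6 m maximum range quoted above (the 64-sample count is an assumption).
burst = np.random.randn(3, 16, 64).astype(np.float32)
print(absolute_range_doppler(burst).shape)                   # (3, 16, 64)
```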
The ARD plots are processed using our novel Deep Neural Network, TransDope (Doppler Transformer, figure 3), capable of accurate real time classification of the input data stream. The input is a sequence of 8 ARD frames. TransDope contains two Time Convolution layers pretrained on a large collected dataset of ARD plots, an embedding layer to create a transformer compatible embedding of the convolution features, and transformer layers to learn long range time-dependent semantic features in the data. We first collect a large dataset of ARDs from various scenes with and without concealed metallic objects on actors. We then train a model with two Time Convolution and Max Pooling layers, the output of which is flattened and fed to a classification layer.
Following training, we discard the output layer, and use the two time convolution layers with the pre-trained weights as an initialization. Unlike standard convolutions that apply a weight kernel across all 8 ARDs concurrently, we apply convolutions sequentially to the 8 ARD frames to extract time-dependent features from them, and hence call them time convolutions. We then reshape the output of the last Max Pooling layer to create an embedding of the features. We also add positional encoding to each of the 8 ARD frames to preserve their sequence. Following this, we pass the embedding through 3 TransDope layers that extract semantic dependencies within the ARD's feature representation. These layers are the same as ViTs [12] encoder layers with the exception of having a convolutional layer following the multi head attention layer, instead of the dense layers, to reduce parameter size. We use global average pooling to reduce the transformer layer's features to a vector, which are then passed into the output layer. Our Time Convolution layers have 32 filters and a kernel size of \(3\times 3\). Our transformer layer has an embedding size of 128 and uses 2 attention heads. TransDope contains 0.8 million parameters, and can process a single ARD frame in 19 milliseconds on an Intel i9 8-core CPU. We train our model in TensorFlow 2 for 50 epochs, with a batch size of 8, and a learning rate of \(1e-2\) which inversely decays after every 10 epochs. During inference, we feed 1 batch of 8 ARD frames through the model to get a classification.
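The description above can be turned into a rough tf.keras sketch as follows. This is our reading of the architecture, not the released model: the ARD frame size, activations, pooling sizes and the exact placement of normalisation layers are assumptions, while the stated hyper-parameters (8 frames, two 32-filter \(3\times 3\) time convolutions, a 128-dimensional embedding, 3 transformer layers with 2 attention heads, a convolution in place of the dense feed-forward layers, and global average pooling) follow the text.

```python
import tensorflow as tf
from tensorflow.keras import layers

SEQ_LEN, H, W, CH = 8, 16, 64, 3     # 8 ARD frames; frame size and channel count are assumptions
EMBED_DIM, NUM_HEADS = 128, 2

class PositionalEncoding(layers.Layer):
    """Learned positional embedding added to the 8-frame sequence."""
    def __init__(self, seq_len, dim, **kwargs):
        super().__init__(**kwargs)
        self.pos_emb = layers.Embedding(seq_len, dim)
        self.seq_len = seq_len
    def call(self, x):
        return x + self.pos_emb(tf.range(self.seq_len))

def transdope_block(x):
    # Multi-head self-attention followed by a convolution in place of the usual dense layers.
    attn = layers.MultiHeadAttention(num_heads=NUM_HEADS, key_dim=EMBED_DIM // NUM_HEADS)(x, x)
    x = layers.LayerNormalization()(x + attn)
    conv = layers.Conv1D(EMBED_DIM, 3, padding="same", activation="relu")(x)
    return layers.LayerNormalization()(x + conv)

inputs = layers.Input(shape=(SEQ_LEN, H, W, CH))
# "Time convolutions": the same 32-filter 3x3 conv applied to each ARD frame in the sequence.
x = layers.TimeDistributed(layers.Conv2D(32, 3, padding="same", activation="relu"))(inputs)
x = layers.TimeDistributed(layers.MaxPooling2D())(x)
x = layers.TimeDistributed(layers.Conv2D(32, 3, padding="same", activation="relu"))(x)
x = layers.TimeDistributed(layers.MaxPooling2D())(x)
x = layers.TimeDistributed(layers.Flatten())(x)
x = layers.Dense(EMBED_DIM)(x)                 # per-frame embedding
x = PositionalEncoding(SEQ_LEN, EMBED_DIM)(x)
for _ in range(3):                             # 3 TransDope layers
    x = transdope_block(x)
x = layers.GlobalAveragePooling1D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)   # concealed metal present / absent
model = tf.keras.Model(inputs, outputs)
```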
Figure 3: TransDope (shown here) processes the radar data stream while preserving time-dependent information throughout.
Figure 2: The mmSense set up is shown in (a). The individual components of the set up, and exemplar visualizations of the imaged scenes are shown for the Soli, and the Intel RealSense D435 in (b) and (c). The knife, gun, and metal sheet, used for the experiments are shown in (d).
## 3 Experiments
To test the accuracy and flexibility of our technique, we collected 6 different scenes with varying characteristics, as depicted in table 1. Data was acquired in 4 instances for each scene: 2 with a metallic object hidden on a person, and 2 without. Each acquisition contains approximately 1500 frames of data. Before training our machine learning model, we collected roughly 15,000 frames of Soli data equally split into the two classes in various scenes, to pre-train the TransDope time convolutions. For each scene, we then trained TransDope, to predict a binary class for each ARD.
We carefully curated different scenes to portray real world situations where our system can be deployed. Scene A was an initial proof of concept where we used the metal sheet as the hidden object to verify the capabilities of our system. We were able to predict the presence of the sheet with 95.1% accuracy. Scene B replicates a crowded expansive scenario, such as an airport terminal. Here we crowded the scene with 5 people walking up to 2m away in radius from the setup. Each person in the scene was carrying everyday objects such as phones, keys, wallets, and belts; only one of these individuals had a knife on their person. Even in such a challenging setting, our system detected the presence of the knife with up to 86.9% accuracy. In Scenes C and D, we observed the effects of changing the hidden object from a knife to a gun. This is important as different objects have different characteristic specular reflections. As seen from our results, the performance of our system held when switching the metallic object from the knife to the gun. Scenes A to D were all open scenes, i.e. the data was not acquired with constricting walls. This results in no multipath RF signals received by the Soli receiving antennas. In Scenes E and F, we tested the effects of keeping our set up in a closed setting and noticed performance decreased. Our results are summarized in table 1 and visualized in figure 4.
**Ablations.** Table 5 shows the effect of varying the amount of sequential information provided to TransDope, as well as varying the various blocks of TransDope. The experiments show that each individual component in our model contributes to performance boosts in terms of metal detection accuracy. We chose 8 ARD frames per sequence as the input to TransDope due to it providing the best accuracy versus execution time. Having multiple sequences of ARDs does further boost performance, but it also doubles (for 2 ARD sequences of 8 ARD frames - 2*8), and quadruples (for \(4*8\)) the execution time for only minor gains in performance.
radar and TOF data to add spatial awareness to the 'sensing' ability of _mmSense_ may help to alleviate these drawbacks, and would be a fitting next step to extend this technology.
| 普遍的な採用のためには、公共のセキュリティと監視システムは、正確性、携帯性、コンパクトさとリアルタイム性を備え、監視対象者のプライバシーを阻害しないことが必要です。現在のシステムは、大きく分けて、精度が高くプライバシーを侵害する画像ベースと、プライバシーを保護しながら携帯性、コンパクトさ、正確性に欠けるRF信号ベースの2つのカテゴリに分けられます。私たちの論文では、mmSenseという、エンドツーエンドの携帯性と小型化されたリアルタイムシステムを提案しています。このシステムは、人体の隠された金属製のものを正確に検出することができ、プライバシーを保護しながら、 discreetな方法で行われます。mmSenseは、GoogleのSoliセンサーのデータ取得を利用したミリメートル波レーダー技術を採用しています。さらに、TransDopeという、リアルタイムのニューラルネットワークを使用して、19 msで1つのレーダーデータフレームを処理することができ、mmSense |
2309.13238 | How to Differentiate between Near Field and Far Field: Revisiting the
Rayleigh Distance | Future wireless systems are likely to adopt extremely large aperture arrays
to achieve higher throughput, wider coverage, and higher spatial resolution.
Conventional wireless systems predominantly operate in the far field (FF) of
the radiation source. However, as the array size increases and the carrier
wavelength decreases, the near field (NF) becomes nonnegligible. Since the NF
and FF differ in many aspects, it is critical to identify their corresponding
regions. In this article, we first provide a comprehensive overview of the
existing NF-FF boundaries, then introduce a novel NF-FF demarcation method
based on effective degrees of freedom (EDoF) of the channel. Since EDoF is
intimately related to channel capacity, the EDoF-based border is able to
characterize key channel performance more accurately than the classic Rayleigh
distance and other representative benchmarks. Furthermore, we analyze the main
features of the EDoF-based NF-FF boundary, provide insights into system design,
and outline the associated challenges and research opportunities. | Shu Sun, Renwang Li, Chong Han, Xingchen Liu, Liuxun Xue, Meixia Tao | 2023-09-23T02:43:28 | http://arxiv.org/abs/2309.13238v2 | # How to Differentiate between Near Field and Far Field: Revisiting the Rayleigh Distance
###### Abstract
Future wireless communication systems are likely to adopt extremely large aperture arrays and millimeter-wave/sub-THz frequency bands to achieve higher throughput, lower latency, and higher energy efficiency. Conventional wireless systems predominantly operate in the far field (FF) of the radiation source of signals. As the array size increases and the carrier wavelength shrinks, however, the near field (NF) becomes non-negligible. Since the NF and FF differ in many aspects, it is essential to distinguish their corresponding regions. In this article, we first provide a comprehensive overview of the existing NF-FF boundaries, then introduce a novel NF-FF demarcation method based on effective degrees of freedom (EDoF) of the channel. Since EDoF is intimately related to spectral efficiency, the EDoF-based border is able to characterize key channel performance more accurately, as compared with the classic Rayleigh distance. Furthermore, we analyze the main features of the EDoF-based NF-FF boundary and provide insights into wireless system design.
## I Introduction
The deployment of the fifth-generation (5G) networks on a global scale has prompted both academia and industry to turn their attention to the development of the sixth-generation (6G) technologies. The major application scenarios of 5G include enhanced mobile broadband, massive machine type communications, and ultra-reliable and low latency communications, which are expected to support 20 Gbps peak data rate, \(10^{6}\) devices/km\({}^{2}\), and 1 ms end-to-end latency, among others. However, 6G is poised to meet the demands of new and emerging applications, such as immersive cloud extended reality, holographic communications, sensory interconnection, and intelligent interaction. The burgeoning proliferation of wireless devices and their increasingly diverse applications has placed substantially higher demands on 6G in comparison to 5G. These demands encompass key performance metrics such as rate, reliability, latency, mobility, and energy consumption, with expectations typically being 10 to 100 times higher than those of 5G [1]. To achieve these ambitious vision and requirements for 6G, numerous key technologies have been proposed and are actively under development.
Notably, 6G is anticipated to leverage new spectrum resources, including the millimeter-wave (mmWave) and Terahertz (THz) bands. The shorter wavelengths in these bands, compared to the conventional microwave spectrum, allow for the integration of more antennas within a confined space. Consequently, the number of antennas continues to increase, evolving from the standard 64 antennas [2] to the scale of hundreds or even thousands [3]. This technological advancement has given rise to the concept of extremely large aperture arrays (ELAAs), which offer the potential for high beamforming gain, exceptional spatial resolution, and remarkable spectral efficiency [4].
The deployment of ELAA, in conjunction with the smaller wavelengths of mmWave/THz bands, accentuates the near field (NF) effect. Generally, the electromagnetic (EM) field can be categorized into three regions: the reactive NF, the radiative NF, and the far field (FF). Since EM waves in the reactive NF are usually localized around the source and do not propagate, the radiative NF is more relevant to wireless communications, hence this article focuses on the radiative NF and refers to this as NF for simplicity. In the NF zone, which occurs when the communication distance is relatively small, the EM wavefronts are spherical. Here, factors such as the non-linear signal phase variations and differences in the traveling distance among different antennas in an ELAA need to be taken into account. The spherical wavefront model (SWM) considers the factors above, and is thus highly accurate, but involves high computational complexity. Conversely, when the communication distance extends to the FF regime, the SWM can be approximated by the planar wavefront model (PWM), since the angles at which the EM wave is radiated from or received by each antenna in an array can be regarded as nearly identical. The PWM is a simpler model, relying primarily on angle information and the number of antennas, making it highly convenient for channel modeling, performance analysis, and system design. As a result, the
Fig. 1: Illustration of wireless communications in the near field and far field.
PWM has been widely studied and implemented in 5G and earlier networks.
Due to the expansive array aperture of an ELAA, the NF region extends significantly, expanding from just a few meters for a conventional antenna array to hundreds of meters based on the classic Rayleigh distance [5] (to be introduced in more detail later on). Consequently, communication devices are more likely to be situated in the NF region, as depicted in Fig. 1, where a user and a vehicle are located in the NF and FF regions, respectively. Importantly, channel modeling in the NF and FF regions differs significantly. Specifically, the direct and reflection/scattering link in the NF should be characterized by the SWM, while the links in the FF can be reasonably approximated by the PWM. The reflectors/scatterers may be located in the NF and/or FF, which makes the propagation environment intricate. The Rayleigh distance, however, may not be the most proper NF-FF boundary in all wireless communication scenarios [6], and other criteria for demarcating the NF and FF are worth exploring.
## II Impact of NF-FF boundary on important aspects of wireless communications
In this section, we discuss the impact of the NF-FF boundary on various aspects of wireless communications systems, including antenna array characterization, propagation channel, and sensing as examples, which underscores the importance of investigating the NF-FF boundary.
### _Impact on Antenna Array Characterization_
Accurate evaluation of the NF and FF regimes of an antenna array is pivotal when conducting characterization and measurements of the array. Most antenna arrays are used in the FF (e.g., communication, radar, etc.). Hence, conducting array calibration in the FF ensures that the array radiates or receives as intended over long distances. NF array calibration is performed to ensure accurate array behavior when interacting with close-by objects or when the target application is within the NF zone. In the NF, interactions between the antenna elements can be much more intricate. These interactions, especially in large and closely spaced arrays, can lead to unpredictable effects like grating lobes or unwanted side lobes. Knowledge of the NF area helps in accurately characterizing these interactions. If characterization measurements are taken in the NF when they are assumed to be in the FF, significant errors can arise.
### _Impact on Propagation Channel_
The impact of the NF-FF boundary can also be observed in channel properties and characteristics such as path loss and spatial non-stationarity. Existing literature has unveiled that channel gain scales differently with distance in the NF and FF, thus it is critical to have accurate knowledge of the NF and FF regions so as to apply the corresponding power scaling laws. Furthermore, spatial non-stationarity, referring to the variability of the wireless channel characteristics over space, is another main difference in the channel. Spatial non-stationarity can occur in both the FF and NF. In the FF, spatial non-stationarity is usually caused by large aperture of the antenna array at the BS, such that users at distinct locations may see different parts of the array and/or that the signals encounter different scatterers. While in the NF, besides the large array aperture, the non-linear phase shifts across array elements caused by the spherical wavefront also contribute to spatial non-stationarity, further complicating the situation. Therefore, it is imperative to have an accurate evaluation of the NF-FF boundary to ease the analysis.
### _Impact on Sensing_
Integrated sensing and communication (ISAC) is a paradigm where sensing and communication functionalities coexist. The NF-FF boundary remains central to shaping ISAC systems. In the NF, the spherical nature of the wavefronts facilitates the capture of fine details in sensing applications, given the wavefront's inherent ability to conform closely to intricate surfaces and interfaces. In the FF, the wavefronts tend to become planar over extended propagation distances. This transformation implies broader-scale sensing and a propensity for long-range, but potentially lower bandwidth, communication. Meanwhile, FF designs typically aim to achieve robust long-range propagation and broader sensing coverage.
## III Existing Research Results on the Boundary between NF and Far Field
As mentioned in the previous sections, reasonable demarcation of the NF and FF is momentous for wireless communication systems with ELAAs. In this section, we review existing research results on NF-FF boundaries, and classify them into two broad categories: multiple-input-single-output (MISO) or equivalently single-input-multiple-output (SIMO) system where only one communication link end is equipped with multiple antennas, and multiple-input-multiple-output (MIMO) system where both communication link ends possess multiple antennas.
### _MISO/SIMO System_
#### Iii-A1 Rayleigh Distance
The classic border between the NF and FF of an antenna is called the _Rayleigh distance_ or _Fraunhofer distance_, \(2D^{2}/\lambda\), where \(D\) denotes the maximum aperture of the antenna and \(\lambda\) the carrier wavelength. The Rayleigh distance is defined from the perspective of the phase error: If the distance between a user and a base station (BS) is larger than \(2D^{2}/\lambda\), then the maximum phase error across the antenna aperture between using the PWM and the SWM is no more than \(\pi/8\). This definition is easily extendable to an antenna array, where \(D\) represents the array aperture. The Rayleigh distance reveals that the boundary between the NF and far field is proportional to the antenna/array aperture length squared, while inversely proportional to the carrier wavelength. It is commonly used to distinguish the near- and far-field regions since it can well capture the phase variations caused by the SWM and has a succinct mathematical expression. However, the Rayleigh distance has weak association with channel performance metrics, such as channel gain and channel capacity, which are important in practical wireless communications systems.
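To get a feel for the numbers, the snippet below evaluates \(2D^{2}/\lambda\) for a few illustrative aperture/carrier combinations (the values are ours, not from the article).

```python
c = 3e8  # speed of light (m/s)
for D, f in [(0.1, 3.5e9), (0.5, 28e9), (1.0, 100e9)]:        # aperture (m), carrier (Hz)
    wavelength = c / f
    print(f"D = {D:>4} m, f = {f/1e9:>5.1f} GHz -> 2D^2/lambda = {2 * D**2 / wavelength:8.2f} m")
# e.g. a 0.5 m aperture at 28 GHz already gives about 46.7 m, and a 1 m aperture at
# 100 GHz gives about 666.7 m, which is why the near field can no longer be ignored.
```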
#### Ii-A2 Critical Distance
The Rayleigh distance primarily pertains to the maximum acceptable phase difference among array elements. However, when employing the optimal maximum ratio combining (MRC), the signal phases can be perfectly aligned, eliminating their impact on the received power. In this context, the received power relies solely on the amplitude response of the channel. Consequently, the authors in [7] propound a critical distance \(r_{\text{Critical}}\). This critical distance ensures that the power ratio between the weakest and strongest array elements remains above a specified threshold, effectively leading to an approximation of \(r_{\text{Critical}}\approx 9D\), with \(D\) representing the antenna aperture. Denote the Rayleigh distance as \(r_{\text{Ray}}\), the communication distance \(r\) can be divided into three intervals: 1) For \(r\geq r_{\text{Ray}}\), both the amplitude and phase differences across the antenna array are small, allowing for a safe approximation using the PWM; 2) For \(r_{\text{Critical}}\leq r<r_{\text{Ray}}\), while amplitude differences are relatively small and thus negligible, significant phase differences persist across the antenna array and cannot be disregarded; 3) For \(r<r_{\text{Critical}}\), both amplitude and phase variations are substantial. Consequently, the channel should be modeled utilizing the SWM.
#### Ii-A3 Uniform-Power Distance
The critical distance in [7] is extended to the uniform-power distance (UPD) in [8] by additional considerations of the uniform planar array (UPA) structure and the variation of the projected aperture across the antenna array. The UPD maintains the same definition as in [7, 8], defining a distance beyond which the power ratio between the weakest and strongest array elements is no smaller than a certain threshold \(\Gamma_{\text{th}}\). When the Rx is positioned at the center of and aligned perpendicularly to the UPA, the UPD can be explicitly expressed as \(\sqrt{\frac{\Gamma_{\text{th}}^{2/3}}{1-\alpha^{2/3}}}\frac{L_{d}}{2}\), where \(L_{d}\) denotes the diagonal dimension of the UPA. For other incident angles and positions, the UPD can be obtained numerically.
#### Ii-A4 Effective Rayleigh Distance
The authors in [7] and [8] adopt the optimal MRC to eliminate the influence of the signal phases. However, due to the inherent challenges in achieving perfect channel estimation, the MRC may not completely cancel out the phases. Therefore, from a beamforming perspective, the authors in [9] propose an effective Rayleigh distance \(R_{\text{eff}}\), beyond which the normalized beamforming gain under the far field assumption is no less than 95%. The effective Rayleigh distance is given by \(R_{\text{eff}}=0.367\cos^{2}(\theta)\frac{2D^{2}}{\lambda}\), where \(\theta\) is the incident angle, \(D\) is the antenna aperture, and \(\lambda\) is the carrier wavelength. The effective Rayleigh distance can be seen as a correction for the Rayleigh distance to ensure the beamforming gain when adopting the PWM.
#### Ii-A5 Bjornson Distance
The authors in [10] consider a UPA structure consisting of \(N\) identical antennas, each with an area denoted as \(A\). From a strict electromagnetic perspective, they introduce a normalized antenna array gain, representing the received power relative to the power obtained in the far field under the PWM. Under this modeling, the Rayleigh distance can be re-expressed as \(d_{\text{Ray}}=2NL^{2}/\lambda\), where \(L=\sqrt{2A}\) denotes the diagonal length of each antenna element. Then, they propose the Bjornson distance as \(d_{b}=2L\sqrt{N}\). Notably, the Bjornson distance exhibits growth proportional to the square root of \(N\), in contrast to the linear growth with \(N\) seen in the Rayleigh distance. At least 95% of the maximum gain can be achieved when the communication distance is no less than the Bjornson distance.
#### Ii-A6 Equi-Power Line
Considering a uniform linear array (ULA) structure with only a line-of-sight (LoS) path, the received power under the MRC from a point source is solely determined by the amplitude response and is independent of the signal phase. The authors in [11] define a ratio of the received power obtained with the SWM to that obtained with the PWM. When the communication distance goes to infinity, the ratio tends towards one. They propose an equi-power line, on which this ratio reaches a pre-defined threshold. The closed-form analytical expression for this ratio leads to an equi-power line located at approximately \(2.86D\) with \(D\) representing the array aperture, when the source aligns perpendicularly with the middle of the ULA. For other source angles, numerical methods are employed to determine the equi-power line. Interestingly, the study reveals that within the incident angle range of \([-\pi/6,\pi/6]\), the received power under PWM consistently serves as an upper bound for that under SWM. Beyond this range, as the distance increases, the power corresponding to PWM initially functions as an upper bound but subsequently transforms into a lower bound for the power under SWM.
#### II-A7 Equi-Power Surface
Considering a uniform circular planar array (UCPA) structure with only an LoS path, the received power under the MRC from a point source only relies on the amplitude response. The authors in [11] define a normalized received power, which signifies the relative received power obtained by the SWM compared to the PWM. Subsequently, they construct an equi-power surface at which the normalized received power attains a predefined threshold. Utilizing a derived closed-form expression, the equi-power surface is situated at approximately \(3.96D\) with \(D\) representing the length of the side of the UCPA, when the source is perpendicular to the center of the UCPA. When the source is located at other angles, the equi-power surface can be obtained numerically.
### _MIMO System_
For the MIMO system, a widely recognized NF-FF boundary is the extended version of the Rayleigh distance defined as \(2\left(D_{\text{T}}+D_{\text{R}}\right)^{2}/\lambda\), where \(D_{\text{T}}\) and \(D_{\text{R}}\) denote the maximum array aperture at the transmitter (Tx) and receiver (Rx), respectively.
#### II-B1 Threshold Distance in [12]
The authors in [12] propose a threshold distance below which the capacity of the SWM surpasses that of the PWM by at least a factor of 1.5, given a specific array size. In practical terms, this threshold distance marks a point where the capacity underestimation error when using the PWM is 50%. By using the empirical fitting techniques, they derive the threshold distance as \(4L_{\text{T}}L_{\text{R}}\cos(\theta_{\text{T}})\cos(\theta_{\text{R}})\), where \(L_{\text{T}}\) (\(L_{\text{R}}\)) is the array size at the Tx (Rx) in units of wavelength, and \(\theta_{\text{T}}\) (\(\theta_{\text{R}}\)) is the rotated angle at the Tx (Rx). Note that this threshold distance is in units of wavelength. This threshold distance can also be regarded as the 0.225 dB-down beamwidth distance of the array. When considering the half-power beamwidth (3 dB) of the array, the threshold distance can be calculated as \(1.13L_{\text{T}}L_{\text{R}}\cos(\theta_{\text{T}})\cos(\theta_{\text{R}})\).
#### II-B2 Threshold Distance in [13]
Building upon an approximation of the largest eigenvalue of LoS MIMO channels employing uniform linear arrays (ULA) structure, the authors in [13] obtain a threshold distance at which the ratio of the largest eigenvalues, as given by SWM and PWM, reaches a predefined threshold. This threshold distance is given by \(\tau_{g}d_{\text{T}}d_{\text{R}}\cos(\theta_{\text{T}})\cos(\theta_{\text{R}})/\lambda\), where \(d_{\text{T}}\) (\(d_{\text{R}}\)) is the antenna spacing at the Tx (Rx), \(\theta_{\text{T}}\) (\(\theta_{\text{R}}\)) is the rotated angle at the Tx (Rx), and \(\tau_{g}\) is an auxiliary variable which is dependent on the antenna number of the Tx and Rx. The exact value of \(\tau_{g}\) can be found in [13]. This approach is relatively accurate when the antenna number is small. However, when the antenna number is large, the exact largest eigenvalue cannot be obtained. Moreover, this approach solely considers the largest eigenvalue, which imposes limitations on the accuracy of the threshold distance.
#### II-B3 Effective Multiplexing Distance
Considering spatial multiplexing, the authors in [14] propose an effective multiplexing distance \(D_{\text{max}}^{(m)}\). This distance represents the farthest range at which the channel can efficiently accommodate \(m\) independent spatial streams simultaneously at a specific signal-to-noise ratio (SNR). The effective multiplexing distance can be approximated by \(D_{\max}^{(m)}\simeq D_{\mathrm{T}}D_{\mathrm{R}}/(\lambda\widetilde{S}_{\mathrm{asy},m}^{*}(\gamma))\), where \(D_{\mathrm{T}}\) (\(D_{\mathrm{R}}\)) is the array aperture at the Tx (Rx), \(\lambda\) is the carrier wavelength, \(\gamma\) is the SNR requirement, and \(\widetilde{S}_{\mathrm{asy},m}^{*}(\gamma)\) is an auxiliary variable that accounts for the threshold at which the MIMO system can support \(m\) independent spatial streams under the given SNR \(\gamma\). The specific values of \(\widetilde{S}_{\mathrm{asy},m}^{*}(\gamma)\) can be determined through numerical simulations. Denoting the Rayleigh distance as \(r_{\mathrm{Ray}}\), the communication distance \(D\) can be categorized into three intervals: 1) For \(D\in(0,r_{\mathrm{Ray}}]\), the MIMO system can consistently achieve the full multiplexing gain by adjusting the orientations of the antennas; 2) For \(D\in(r_{\mathrm{Ray}},D_{\max}^{(m)})\), the MIMO system can effectively support \(m\) streams at the specified SNR \(\gamma\); 3) For \(D\in[D_{\max}^{(m)},\infty)\), the channel exhibits only one dominant eigenvalue, thus supporting a single stream.
## IV Proposed Demarcation Based on Effective Degrees of Freedom
In this section, we first introduce a new NF-FF demarcation criterion focusing on the effective degrees of freedom (EDoF) of the channel, then investigate the characteristics of the EDoF-based boundary in typical application scenarios and its implications for system performance.
### _Demarcation Criterion_
For a MIMO system, channel capacity is a commonly utilized criterion to evaluate the system performance, which represents the information-transmission ability of a communication system. Channel capacity is closely related to the EDoF of the channel, where the EDoF represents the equivalent number of single-input-single-output sub-channels1. An information-theory-originated definition of the EDoF is \(\left(\frac{\mathrm{tr}(\mathbf{R})}{\|\mathbf{R}\|_{\mathrm{F}}}\right)^{2}\)[15], where \(\mathbf{R}\) denotes the spatial correlation matrix of the MIMO channel matrix, while \(\mathrm{tr}(\cdot)\) and \(\|\cdot\|_{\mathrm{F}}\) denote the trace and Frobenius norm of a matrix, respectively. In order to characterize the channel performance (e.g., channel capacity) more accurately, we introduce a novel NF-FF demarcation approach based upon the EDoF. Specifically, the boundary \(r_{\mathrm{Th}}\) between the NF and FF is defined such that the EDoF
Fig. 2: Antenna array configurations considered in this article. (a) Both the BS and user are equipped with a ULA, where \(\alpha\) denotes the angle between the center of the BS ULA and the center of the user ULA, and \(\beta\) is the angle of the user ULA with respect to the positive Z-direction within the YZ-plane. (b) The BS and user are equipped with a URA and a ULA, respectively, where \(\theta\) represents the angle of the user ULA with respect to the positive Z-direction and parallel with the XZ-plane. (c) The BS and user are equipped with an arched “URA” and a ULA, respectively, where the edge of the URA along the X-direction is arched according to a certain curvature radius \(R\), while \(\theta\) represents the angle of the user ULA with respect to the positive Z-direction and parallel to the XZ-plane. It is further postulated that the horizontal edge of the URA forms part of a semicircular arc with a curvature radius of \(R\), hence the semicircular cylindrical surface can be regarded as a special case of the arched URA architecture with the horizontal edge being a complete semicircular arc.
of the MIMO system equals a pre-defined threshold \(\eta\) at \(r_{\text{Th}}\) under the SWM. Since the EDoF is 1 under the PWM when only an LoS path exists between the Tx and Rx, the threshold value of EDoF \(\eta\) can be set to a value slightly larger than 1. This EDoF demarcation criterion can capture the differential spatial characteristics of the MIMO system between the SWM and PWM, and is more explicitly related to key performance indicators, such as the capacity and multiplexing capability, of the channel.
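As an illustration of the criterion, the following Python sketch evaluates the EDoF of a spherical-wave LoS channel between two broadside, parallel ULAs and takes the boundary as the largest distance at which the EDoF still reaches \(\eta\). The geometry and array sizes are assumptions loosely following the settings of Fig. 3(c); this is a brute-force numerical sketch, not the closed-form derivation summarized in Table II.

```python
import numpy as np

# EDoF-based demarcation sketch for a ULA-to-ULA LoS link: build the spherical-
# wave channel H at each candidate distance, compute EDoF = (tr(R)/||R||_F)^2
# with R = H^H H, and take the boundary r_Th as the largest distance where the
# EDoF still reaches the threshold eta.  Geometry and array sizes are assumed.
lam = 3e8 / 100e9                                # wavelength at 100 GHz
k = 2 * np.pi / lam

def ula(n, length, z0):
    """Element coordinates of an n-element ULA along y, at range z0 from the Tx."""
    y = np.linspace(-length / 2, length / 2, n)
    return np.stack([np.zeros(n), y, np.full(n, z0)], axis=1)

def edof(tx, rx):
    d = np.linalg.norm(rx[:, None, :] - tx[None, :, :], axis=2)   # pairwise distances
    H = np.exp(-1j * k * d) / d                                   # spherical-wave LoS channel
    R = H.conj().T @ H
    return float(np.trace(R).real ** 2 / np.sum(np.abs(R) ** 2))

tx = ula(128, 1.0, 0.0)                          # 128-element, 1 m Tx ULA
eta = 1.01
rs = np.logspace(-1, 3, 400)                     # candidate distances: 0.1 m to 1 km
ed = np.array([edof(tx, ula(2, 0.1, r)) for r in rs])   # 2-element, 0.1 m Rx ULA
above = np.where(ed >= eta)[0]
print("EDoF-based boundary r_Th ≈",
      f"{rs[above].max():.1f} m" if above.size else "not reached in this sweep")
```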
### _Case Studies_
In the sequel, we will unveil the main behaviors of the proposed EDoF-based NF-FF boundary and how it differs from the classic Rayleigh distance through some examples. We consider point-to-point MIMO systems, as illustrated in Fig. 2, where the user is equipped with a ULA, and the antenna array at the BS can be a ULA or a URA. It is noteworthy that besides the conventional ULA and URA, we also take into account an arched "URA" architecture (shown in Fig. 2(c)), in which one of the edges of the URA is bent according to a certain curvature radius. This is motivated by advanced conformal antenna arrays, whose surfaces can be flexibly bent based on the shape of the object on which they are situated. Detailed settings are described in the figure caption.
Based upon the aforementioned EDoF formula \(\left(\frac{\text{tr}(\mathbf{R})}{\|\mathbf{R}\|_{\text{F}}}\right)^{2}\), we conduct mathematical derivations (not shown in this article) for the three types of MIMO systems in Fig. 2 to obtain accurate or approximate expressions of the NF-FF boundary \(r_{\text{Th}}\), and provide the results in Table II2. Note that these results are obtained under the paraxial approximation in optics, i.e., the distance between the Tx and Rx arrays is significantly larger than their apertures, which is practical in many real-world communication systems. For the ULA-to-ULA scenario, it is evident from the expression of \(r_{\text{Th}}\) that unlike the
Rayleigh distance, which relies on the square of the sum of the Tx array aperture \(L_{\mathrm{T}}\) and the Rx array aperture \(L_{\mathrm{R}}\), the EDoF-based boundary \(r_{\mathrm{Th}}\) is related to the product of \(L_{\mathrm{T}}\) and \(L_{\mathrm{R}}\), as well as to the numbers of array elements at the Tx and Rx, \(N_{\mathrm{T}}\) and \(N_{\mathrm{R}}\), respectively. More specifically, \(r_{\mathrm{Th}}\) is proportional to \(\pi L_{\mathrm{T}}L_{\mathrm{R}}\) and inversely proportional to \((N_{\mathrm{T}}-1)(N_{\mathrm{R}}-1)\lambda\), indicating that \(r_{\mathrm{Th}}\) will alter if the number of array elements at either the Tx or Rx changes, even if both array lengths remain the same. Furthermore, fewer array elements correspond to a larger boundary distance, thus the boundary for an array with two elements serves as an upper bound while that for an infinite number of array elements acts as a lower bound. For the URA-to-ULA scenario, assuming the ULA is equipped at the Rx, \(r_{\mathrm{Th}}\) is still proportional to \(\pi L_{\mathrm{R}}\) and inversely proportional to \((N_{\mathrm{R}}-1)\lambda\), whereas its relation with the size and number of elements of the URA is more complicated. When fixing the horizontal (vertical) side length \(L_{\mathrm{T}_{x}}\) (\(L_{\mathrm{T}_{z}}\)) of the URA, \(r_{\mathrm{Th}}\) is proportional to the vertical (horizontal) side length \(L_{\mathrm{T}_{z}}\) (\(L_{\mathrm{T}_{x}}\)), and similar rules apply to the numbers of elements \(N_{\mathrm{T}_{x}}\) and \(N_{\mathrm{T}_{z}}\) along the horizontal and vertical directions; when both side lengths of the URA vary, \(r_{\mathrm{Th}}\) depends on the product of the sinusoidal functions containing \(\frac{L_{\mathrm{T}_{x}}}{N_{\mathrm{T}_{x}}-1}\) and \(\frac{L_{\mathrm{T}_{z}}}{N_{\mathrm{T}_{z}}-1}\). In addition, it is worth noting that when \(\theta=0^{\circ}\) (\(90^{\circ}\)), \(r_{\mathrm{Th}}\) becomes independent of the side length and number of elements along the horizontal (vertical) direction of the URA. In other words, \(r_{\mathrm{Th}}\) is proportional to the effective URA aperture \(\sqrt{L_{\mathrm{T}_{x}}^{2}\sin^{2}\left(\theta\right)+L_{\mathrm{T}_{z}}^{2}\cos^{2}\left(\theta\right)}\) projected onto the direction in which the ULA lies. For the arched-URA-to-ULA case, the mathematical derivation shows that \(r_{\mathrm{Th}}\) is approximately identical to that for the URA-to-ULA setting if the curvature radius \(R\) is considerably larger than the arc length.
Next, we provide quantitative analysis on the EDoF-based
Fig. 3: Characterization of the EDoF-based NF-FF boundary under various circumstances. (a) Comparison of different NF-FF boundaries, where the Rx is equipped with a two-element ULA with a length of 0.05 m, the horizontal side length of the Tx URA is fixed to 0.049 m, the orientation angle \(\theta\) in Fig. 2(b) is set to \(10^{\circ}\), and the X-axis values denote the maximum aperture of the Tx ULA or URA. (b) EDoF-based boundary for the arched URA in Fig. 2(c), in which the approximated values are calculated according to the URA results in Table II, and the simulated values are obtained via numerical simulation using the actual arched array architecture. The vertical length of the arched URA is \(0.2\) m, the length of the ULA is \(0.05\) m, and the orientation angle \(\theta\) of the ULA is \(45^{\circ}\). (c) Variation of the EDoF-based boundary with the position and orientation angles \(\alpha\) and \(\beta\) depicted in Fig. 2(a). The lengths of the Tx and Rx ULAs are 1 m and 0.1 m, respectively, and the numbers of elements of the Tx and Rx ULAs are 128 and 2, respectively. (d) Variation of the EDoF-based boundary with the orientation angle \(\theta\) and the vertical length of the URA depicted in Fig. 2(b). The aperture of the URA is 0.5 m, and the numbers of elements along the horizontal and vertical directions of the URA are both eight. The ULA has two elements and its length is 0.1 m.
NF-FF boundary values to gain more intuitive insights. In all the simulations, the EDoF threshold \(\eta\) is set to 1.01, and the carrier frequency is 100 GHz. Fig. 3 demonstrates EDoF-based NF-FF boundary values under a variety of circumstances, where the simulation settings are detailed in the figure caption. Several relevant observations can be drawn from Fig. 3: First, the Rayleigh distance can be smaller or larger than the EDoF-based boundary, as delineated in Fig. 3(a), implying that it may underestimate or overestimate the NF region from the viewpoint of supportable spatial streams. Second, since the length of the Rx ULA is fixed, the EDoF-based boundary grows linearly with the Tx array aperture for the ULA-to-ULA scenario, whereas the increasing trend is non-linear with respect to the URA aperture for the URA-to-ULA case, which is consistent with the results in Table II. Third, Fig. 3(b) reveals that the analytical expression of the EDoF-based boundary for the arched URA is sufficiently accurate when the curvature radius \(R\) is no smaller than \(3L_{\mathrm{T}_{x}}/\pi\), or roughly \(L_{\mathrm{T}_{x}}\), i.e., the curvature radius is no smaller than the arc length. Moreover, it is seen from Fig. 3(c) that the boundary distance reaches its maximum when the two ULAs are normally facing each other with the line of centers perpendicular to them, and arrives at its minimum when the line of centers of the two ULAs is aligned with one of the ULAs. Additionally, the boundary distance increases with the orientation angle \(\theta\) in the URA-to-ULA scenario when the vertical length of the URA is small, and behaves oppositely for a large vertical length, due to the fact that the boundary is proportional to the effective URA aperture \(\sqrt{L_{\mathrm{T}_{x}}^{2}\sin^{2}\left(\theta\right)+L_{\mathrm{T}_{z}}^{2}\cos^{2}\left(\theta\right)}\) as analyzed above.
### _Implications for System Design_
#### IV-C1 Channel Capacity
The NF-FF boundary has a crucial impact on channel capacity, since it determines the applicable regions of different electromagnetic wavefronts which in turn dictate the channel model. It has been shown in Fig. 3(a) that the classic Rayleigh distance may underestimate or overestimate the NF zone depending on the antenna array configurations. Overestimation of the NF range can give rise to unnecessary computational complexity using the SWM, while underestimation may cause prediction error of the channel capacity. For instance, Fig. 4 depicts the estimation error of the channel capacity for a ULA-to-ULA wireless communication scenario, where the estimation error is computed by comparing the channel capacity using the PWM with that using the SWM at the corresponding boundary distance. As evident from Fig. 4, the Rayleigh distance leads to large capacity errors for a wide range of SNR values, and the capacity error can reach over 35%, which is non-negligible and will seriously affect the system deployment strategies.
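The back-of-the-envelope sketch below reproduces the flavor of this comparison for two assumed 2-element, 0.05 m ULAs (the setup of the Fig. 4 caption, whose Rayleigh distance is 6.67 m): it builds the spherical- and plane-wave LoS channels at that distance, computes the equal-power-allocation capacity for each, and reports the relative error. The exact figures depend on normalization choices and are only indicative.

```python
import numpy as np

# Capacity underestimation of the PWM at the Rayleigh distance for two broadside,
# parallel 2-element ULAs.  Path loss is normalized out at the link distance and
# capacity uses equal power allocation: C = log2 det(I + (snr/Nt) H H^H).
lam = 3e8 / 100e9
k = 2 * np.pi / lam
L = 0.05                                         # ULA length at both ends [m]

def positions(n, length, z0):
    y = np.linspace(-length / 2, length / 2, n)
    return np.stack([np.zeros(n), y, np.full(n, z0)], axis=1)

def capacity(H, snr):
    n_r, n_t = H.shape
    G = np.eye(n_r) + (snr / n_t) * H @ H.conj().T
    return float(np.log2(np.linalg.det(G).real))

def cap_error(dist, snr, n=2):
    tx, rx = positions(n, L, 0.0), positions(n, L, dist)
    d = np.linalg.norm(rx[:, None, :] - tx[None, :, :], axis=2)
    H_swm = np.exp(-1j * k * d) * (dist / d)     # spherical wave, normalized path loss
    H_pwm = np.ones((n, n), dtype=complex)       # broadside plane wave: equal phases
    c_s, c_p = capacity(H_swm, snr), capacity(H_pwm, snr)
    return (c_s - c_p) / c_s

for snr_db in (10, 20, 30):
    err = cap_error(dist=6.67, snr=10 ** (snr_db / 10))
    print(f"SNR {snr_db} dB: PWM capacity error at the Rayleigh distance ≈ {100 * err:.0f}%")
```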
#### IV-C2 Multiple Access
The FF channel is dependent on the angle, making it unable to distinguish users located in the same direction. In contrast, the NF channel can discern both the angle and distance, allowing it to focus on a specific location even when multiple users share the same direction. To gain more insights into how the NF-FF boundary influences multiple access, we can regard the two-element ULA in the aforementioned simulations as two users each with a single antenna. In this sense, the boundary distance in Fig. 3(c) and Fig. 3(d) implies the distance at which the spatial channels between the two users are approximately fully correlated. As illustrated by Fig. 3(c) and interpreted in the previous subsection, the boundary distance reaches its minimum when the two users are situated in the same direction from the viewpoint of the center of the BS antenna array, which is as expected since they share the same direction. Nevertheless, the boundary distance is non-zero, indicating that their spatial channels are still distinguishable up to a certain distance from the BS. On the other hand, the two users are most easily discernible when their relative directions are the farthest apart (corresponding to the largest boundary distance in Fig. 3(c)) given a fixed distance between them. Furthermore, it can be inferred from Fig. 3(d) that the spatial channels between two users are less correlated when the line connecting them is parallel to the longer side of a URA. Consequently, if solely aiming to serve more users in the NF, it is beneficial to place the URA at the BS such that its longer side is parallel to the direction with the most users. In practice, of course, other factors also need to be taken into account in system design. It is apparent from the foregoing examples that knowledge of how the NF-FF boundary varies with the azimuth and elevation angles is helpful in designing adaptive algorithms for channel estimation, beamforming/beam-focusing, and beam management, so as to sense and serve different users more efficiently based on their locations and relative orientations.
## V Conclusion
In this article, we discussed the importance of identifying the NF regime for ELAAs, summarized existing NF-FF boundaries, and propounded a novel NF-FF demarcation scheme based on the EDoF of the MIMO channel. We investigated the key influencing factors and behaviors of the EDoF-based boundary for various antenna array configurations, including conformal antenna arrays, and analyzed the implications for
Fig. 4: Estimation error of the channel capacity at the Rayleigh distance and the EDoF-based boundary. Both the Tx and Rx are equipped with a two-element ULA, whose lengths are both 0.05 m, and the Rayleigh distance and the EDoF-based boundary herein are 6.67 m and 18.54 m, respectively.
system design. The proposed NF-FF boundary is able to more accurately characterize system performance indicators such as channel capacity, as compared to the classic Rayleigh distance, and can provide more insights into wireless system deployment.
| 未来の無線システムは、より高いスループット、広範な coverage、そしてより高い空間解像度を実現するために、極めて大型の開口数を持つ配列を採用する可能性が高い。従来の無線システムは、放射源の遠域(FF)において主に動作している。しかし、配列のサイズが大きくなり、キャリア波長が短くなるにつれて、近接域(NF)は無視できない領域となる。NFとFFは多くの点で異なるため、その対応する領域を特定することが重要である。この論文では、まずNF-FFの境界に関する包括的な概要を提供し、チャンネルの有効自由度(EDoF)に基づいた新しいNF-FF境界方法を紹介する。EDoFは、チャネル容量に密接に関連しているため、EDoFに基づく境界は、レイリー距離などの代表的な基準よりも、チャネル性能をより正確に特徴付けることができる。さらに、EDoF |
2309.04744 | A General Approach to Fully Linearize the Power Amplifiers in mMIMO with
Less Complexity | A radio frequency (RF) power amplifier (PA) plays an important role to
amplify the message signal at higher power to transmit it to a distant
receiver. Due to a typical nonlinear behavior of the PA at high power
transmission, a digital predistortion (DPD), exploiting the preinversion of the
nonlinearity, is used to linearize the PA. However, in a massive MIMO (mMIMO)
transmitter, a single DPD is not sufficient to fully linearize the hundreds of
PAs. Further, for the full linearization, assigning a separate DPD to each PA
is complex and not economical. In this work, we address these challenges via
the proposed low-complexity DPD (LC-DPD) scheme. Initially, we describe the
fully-featured DPD (FF-DPD) scheme to linearize the multiple PAs and examine
its complexity. Thereafter, using it, we derive the LC-DPD scheme that can
adaptively linearize the PAs as per the requirement. The coefficients in the
two schemes are learned using the algorithms that adopt indirect learning
architecture based recursive prediction error method (ILA-RPEM) due to its
adaptive nature and freedom from matrix inversion operations. Furthermore, for the LC-DPD
structure, we have proposed three algorithms based on correlation of its common
coefficients with the distinct coefficients. Lastly, the performance of the
algorithms are quantified using the obtained numerical results. | Ganesh Prasad, Håkan Johansson, Rabul Hussain Laskar | 2023-09-09T10:20:42 | http://arxiv.org/abs/2309.04744v1 | # A General Approach to Fully Linearize the Power Amplifiers in mMIMO with Less Complexity
###### Abstract
A radio frequency (RF) power amplifier (PA) plays an important role to amplify the message signal at higher power to transmit it to a distant receiver. Due to a typical nonlinear behavior of the PA at high power transmission, a digital predistortion (DPD), exploiting the preinversion of the nonlinearity, is used to linearize the PA. However, in a massive MIMO (mMIMO) transmitter, a single DPD is not sufficient to fully linearize the hundreds of PAs. Further, for the full linearization, assigning a separate DPD to each PA is complex and not economical. In this work, we address these challenges via the proposed low-complexity DPD (LC-DPD) scheme. Initially, we describe the fully-featured DPD (FF-DPD) scheme to linearize the multiple PAs and examine its complexity. Thereafter, using it, we derive the LC-DPD scheme that can adaptively linearize the PAs as per the requirement. The coefficients in the two schemes are learned using the algorithms that adopt indirect learning architecture based recursive prediction error method (ILA-RPEM) due to its adaptive nature and freedom from matrix inversion operations. Furthermore, for the LC-DPD structure, we have proposed three algorithms based on correlation of its common coefficients with the distinct coefficients. Lastly, the performance of the algorithms is quantified using the obtained numerical results.
Digital predistortion, massive MIMO, direct learning architecture, indirect learning architecture, recursive prediction error method.
## I Introduction
In wireless transmitters, radio frequency (RF) power amplifiers (PAs) are used to amplify the modulated signals for distant transmission. However, in-band and out-of-band nonlinear distortions affect signals amplified near the saturation region of the PAs [1]. This can be reduced by applying some back-off to the peak power of the signals, but doing so reduces the efficiency of the PAs. Therefore, pre-processing of the transmit signals before the PAs, such as digital predistortion (DPD), is required to linearize the resultant signals towards the saturation region. Over the past decade, many works have focused on the linearization of multiple power amplifiers in transmitters such as massive MIMO (mMIMO) transmitters. However, they have focused on linearization in a particular beamforming direction instead of linearizing all the PAs, because the linearization of each PA requires a separate DPD block along with the driving RF chain. Thus, due to the high complexity, this is not suitable for an economical mMIMO transmitter. To deal with this, in this work, we propose a general approach to fully linearize all the PAs with less complexity. We also discuss in detail the fundamentals behind the challenges and the procedure to tackle them.
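To make the role of the DPD concrete, the sketch below illustrates the standard single-PA baseline: a memory-polynomial predistorter identified by least squares through an indirect learning architecture (ILA). The PA model, signal statistics, and polynomial orders are toy assumptions, and this is not the FF-DPD or LC-DPD scheme proposed in this work (which relies on ILA-RPEM rather than batch LS).

```python
import numpy as np

# Toy single-PA baseline: a memory-polynomial DPD identified by least squares
# through an indirect learning architecture (ILA).  All models and parameters
# below are illustrative assumptions.
rng = np.random.default_rng(0)

def mp_basis(x, K=5, M=3):
    """Memory-polynomial basis: columns x[n-m] * |x[n-m]|^(k-1) for odd k <= K."""
    cols = [np.roll(x, m) * np.abs(np.roll(x, m)) ** (k - 1)
            for m in range(M) for k in range(1, K + 1, 2)]
    return np.stack(cols, axis=1)

def pa(x):
    """Mildly nonlinear, memoryless PA stand-in with linear gain 2."""
    return 2.0 * x - 0.35 * x * np.abs(x) ** 2 + 0.08 * x * np.abs(x) ** 4

x = 0.35 * (rng.standard_normal(20000) + 1j * rng.standard_normal(20000))  # baseband signal
G = 2.0                                                                    # target linear gain

# ILA step: fit a postdistorter mapping the normalized PA output back to its
# input by LS, then copy it in front of the PA as the predistorter.
w = np.linalg.lstsq(mp_basis(pa(x) / G), x, rcond=None)[0]
x_dpd = mp_basis(x) @ w

nmse = lambda a, b: 10 * np.log10(np.mean(np.abs(a - b) ** 2) / np.mean(np.abs(b) ** 2))
print(f"NMSE without DPD: {nmse(pa(x), G * x):6.1f} dB")
print(f"NMSE with DPD:    {nmse(pa(x_dpd), G * x):6.1f} dB")
```

The point of the example is the scaling problem discussed above: in an mMIMO transmitter this identification and basis computation would have to be replicated for every PA, which motivates the low-complexity, adaptive structure developed in the remainder of the paper.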
### _Related Works_
The preprocessing using DPD has a property inverse to that of the nonlinear PA in order to mitigate the nonlinearities in the desired transmit signal [2]. In the state of the art, mostly linear parametric models have been used for the DPD [3]. One of the methods to identify the DPD coefficients is least squares (LS), owing to its fast convergence [4, 5, 6]. However, despite its mathematical simplicity, its computational complexity is high due to the inversion of large matrices associated with the estimation of a large number of DPD coefficients. Many works have therefore proposed algorithms to reduce the complexity of LS-based DPD identification [7, 8, 9, 10]. For example, the size of the matrix is reduced by normalization of the DPD basis functions (BFs) followed by their pruning [7]. Also, assuming a stationary random process, the time-varying matrix associated with the DPD coefficients is replaced by a constant covariance matrix [8]. Further, in an iterative LS-based algorithm, the number of samples of the DPD coefficients (or the size of the matrix) can be reduced by considering the correlation in the observation errors between two iterations [9]. Besides, the matrix size can also be reduced using eigenvalue decomposition and principal component analysis (PCA), which decrease the order of the memory polynomial model of the DPD [10, 11]. In eigenvalue decomposition, the number of DPD coefficients can be reduced by considering only the dominant eigenvectors, whereas in PCA, the reduction is achieved by converting the correlated BFs of the DPD into uncorrelated BFs.

Although the above techniques help reduce the size of the matrices, the required number of DPD coefficients is still large for time-varying and highly nonlinear PAs, which leads to undesirably large matrix operations. Therefore, recursive algorithms such as least mean squares (LMS) [12], recursive least squares (RLS) [13, 14], and the recursive prediction error method (RPEM) [15] are computationally more reliable, at the cost of slower convergence to the desired optimal values of the variables. Using LMS, the DPD adjusts its coefficients to minimize the mean square error (MSE) between the PA output and the desired signal. The coefficients are updated using a stochastic gradient descent method that minimizes the instantaneous error in each iteration. However, LMS is quite unstable and very sensitive to the step size of the update [16]. In conventional LS estimation, a batch of input and output data samples of the PA are used
Although, the above techniques help in reduction of the size of the matrices, but, for time varying and highly nonlinear PAs, still, the required number of DPD coefficients is large. Thus, it leads to an undesirable large matrix operations. Therefore, the recursive based algorithms like least mean square (LMS) [12], recursive least squares (RLS) [13, 14], and recursive prediction error method (RPEM) [15] are computationally more reliable at the cost of their slow convergence to the desired optimal value of the variables. Using LMS, the DPD adjusts its coefficients to minimize the mean square error (MSE) between the PA output and the desired signal. The coefficients are updated using stochastic gradient decent method that minimizes the instantaneous error in each iteration. However, LMS is quite unstable and it is very sensitive in the step size for the update [16]. In conventional LS estimation, a batch of input and output data samples of the PA are used | 無線電周波数(RF)電力増幅器(PA)は、メッセージ信号を高出力で増幅して遠くの受信機へ送信するために重要な役割を果たします。PAの高出力での非線形な動作により、デジタルプリdistortion(DPD)が、非線形性を逆転させて線形化を行うため使用されます。しかし、Massive MIMO(mMIMO)送信機では、単一 DPD は数百の PA を完全に線形化させることができません。さらに、各 PA に個別の DPD を割り当てることは複雑で経済的ではありません。本研究では、これらの課題に対処するために、低複雑性の DPD (LC-DPD) SCHEME を提案しました。まず、複数 PA を線形化するための完全な DPD (FF-DPD) SCHEMA を説明し、その複雑さを調べます。その後、FF-DPD SCHEME を用いて、LC-DPD SCHEME を導き出し、PA の |
2309.15891 | Phonon Pumping by Modulating the Ultrastrong Vacuum | The vacuum (i.e., the ground state) of a system in ultrastrong light-matter
coupling contains particles that cannot be emitted without any dynamical
perturbation and is thus called virtual. We propose a protocol for inducing and
observing real mechanical excitations of a mirror enabled by the virtual
photons in the ground state of a tripartite system, where a resonant optical
cavity is ultrastrongly coupled to a two-level system (qubit) and, at the same
time, optomechanically coupled to a mechanical resonator. Real phonons are
coherently emitted when the frequency of the two-level system is modulated at a
frequency comparable to that of the mechanical resonator and, therefore much
lower than the optical frequency. We demonstrate that this hybrid effect is a
direct consequence of the virtual photon population in the ground state. Within
a classical physics analogy, attaching a weight to a spring only changes its
resting position, whereas dynamically modulating the weight makes the system
oscillate. In our case, however, the weight is the vacuum itself. We propose
and accurately characterize a hybrid superconducting-optomechanical setup based
on available state-of-the-art technology, where this effect can be
experimentally observed. | Fabrizio Minganti, Alberto Mercurio, Fabio Mauceri, Marco Scigliuzzo, Salvatore Savasta, Vincenzo Savona | 2023-09-27T17:11:40 | http://arxiv.org/abs/2309.15891v2 | # Phonon Pumping by Modulating the Ultrastrong Vacuum
###### Abstract
The vacuum (i.e., ground state) of a system in ultrastrong light-matter coupling contains particles that cannot be emitted without any dynamical perturbation, and thus called virtual. We propose a protocol for inducing and observing real mechanical excitations of a mirror enabled by the virtual photons in the ground state of a tripartite system, where a resonant optical cavity is ultrastrongly coupled to a two-level system (qubit) and, at the same time, optomechanically coupled to a mechanical resonator. Real phonons are coherently emitted when the frequency of the two-level system is modulated at a frequency comparable to that of the mechanical resonator, therefore much lower than the optical frequency. We demonstrate that this hybrid effect is a direct consequence of the virtual photon population in the ground state. We propose and accurately characterize a hybrid superconducting-optomechanical setup based on available state-of-the-art technology, where this effect can be experimentally observed.
_Introduction--_. The ultrastrong coupling (USC) regime between light and matter occurs when the coupling connecting the two is a significant fraction of their quantized resonance frequencies [1]. In the USC regime of the quantum Rabi model, counter-rotating coupling terms, which do not conserve the number of particles, lead to an entangled ground state with nonzero particles [2; 3]. Similar to zero-point energy, these particles cannot be converted into real excitations that could be emitted or detected, unless the system is dynamically perturbed over a timescale comparable to the period of optical oscillations [2; 4; 5; 6]. In this sense, the _vacuum_ (i.e., ground state) of a USC system contains _virtual_ particles. USC regime has been achieved in various platforms like superconducting circuits, intersubband polaritons, and magnonic systems [7; 8; 9; 10; 11; 12; 13; 14; 15; 16].
In an optomechanical system, the radiation pressure of the electromagnetic field displaces one of the mirrors of the cavity. This displacement, in turn, modulates the cavity's resonance frequency [17]. Optomechanical coupling has found numerous applications [17; 18], such as ground-state cooling of the mechanical mode [19; 20; 21], generation of nonclassical states [22], and macroscopic entanglement [23; 24]. Vacuum fluctuations of the quantum electromagnetic field can also displace the mirror, leading to, e.g., the Casimir effect [25; 26; 27]. Dynamical perturbations of the mirror can convert virtual photons into real photons, resulting in the dynamical Casimir effect [28], which has been quantum simulated using a superconducting circuit architecture [29; 30]. There is an increasing interest in achieving larger optomechanical couplings [31; 32; 33], enabling the possibility of directly observing the dynamical Casimir effect and other peculiar effects arising from it [34; 35; 36]. Systems combining a USC part and an optomechanical one, involving virtual and optomechanical transitions, have been recently proposed [37; 38]. Whether virtual photons in a hybrid USC-optomechanical system can give rise to real mechanical excitations, however, remains an open question.
In this article, we answer this question by showing that virtual photons in a tripartite USC-optomechanical system's ground state influence the mechanical degree of freedom. Our proposed architecture includes a cavity in USC with a two-level system (qubit). The cavity is additionally an optomechanical system, as depicted in Fig. 1. We assume that the frequency of the qubit is periodically modulated, with a period much longer than that characterizing the oscillations of the USC components, but coinciding with that of the mechanical oscillation. While the USC subsystem adiabatically remains in the ground state, which does not emit photons into the environment, the oscillations in the number of _virtual_ ground-state photons create the _real_ (i.e., detectable) mechanical oscillations of the mirror. We propose an experimental protocol to observe this virtual-to-real transduction in advanced hybrid superconducting optomechanical systems.
Model--.Let \(\hat{a}\) (\(\hat{a}^{\dagger}\)) be the annihilation (creation) operators of the cavity mode, \(\hat{b}\) (\(\hat{b}^{\dagger}\)) of the mirror vibration mode, and \(\hat{\sigma}_{-}\) and \(\hat{\sigma}_{+}\) the Pauli operators associated with the qubit. The system is described by the Hamiltonian
(\(\hbar=1\)) [39]:
\[\begin{split}\hat{H}(t)&=\hat{H}_{\text{R}}+\hat{H}_{ \text{opt}}+\hat{H}_{\text{M}}(t)\\ \hat{H}_{\text{R}}&=\omega_{a}\hat{a}^{\dagger}\hat{a }+\omega_{\sigma}\hat{\sigma}_{+}\hat{\sigma}_{-}+\lambda(\hat{a}+\hat{a}^{ \dagger})(\hat{\sigma}_{-}+\hat{\sigma}_{+}),\\ \hat{H}_{\text{opt}}&=\omega_{b}\hat{b}^{\dagger} \hat{b}+\frac{g}{2}(\hat{a}+\hat{a}^{\dagger})^{2}(\hat{b}^{\dagger}+\hat{b}).\end{split} \tag{1}\]
\(\hat{H}_{\text{R}}\) is the Rabi Hamiltonian giving rise to the USC interaction, and \(\hat{H}_{\text{opt}}\) is the optomechanical coupling, up to a constant displacement of the phononic field. \(\hat{H}_{\text{opt}}\) is derived from first principles both in the case of an electromagnetic field coupled to a vibrating mirror [40] and for circuital analogs [41]. Notice that \(\hat{H}_{\text{opt}}\) includes the rapidly rotating terms (\(\hat{a}^{2}+\hat{a}^{\dagger 2}\)) [40; 41] which, as will emerge from our analysis, cannot be neglected in the present protocol [39]. We assume a modulation of the qubit resonance frequency of the form
\[\hat{H}_{\text{M}}(t)=\frac{1}{2}\Delta_{\omega}\left[1+\cos(\omega_{d}t) \right]\hat{\sigma}_{+}\hat{\sigma}_{-}=\Omega_{\sigma}(t)\hat{\sigma}_{+}\hat {\sigma}_{-}\,. \tag{2}\]
The regime of interest is one where \(g\ll\omega_{d}\simeq\omega_{b}\ll\omega_{a}\simeq\omega_{\sigma}\). In this regime, entanglement between the mechanical motion and the USC subsystem is negligible, and the state of the system can be factored as \(\ket{\Psi(t)}\simeq\ket{\psi(t)}\otimes\ket{\phi_{b}(t)}\), where \(\ket{\psi(t)}\) describes the USC state, and \(\ket{\phi_{b}(t)}\) is the one of the mirror. A further approximation, holding because \(\omega_{d}\ll\omega_{a}\simeq\omega_{\sigma}\), is that the USC subsystem adiabatically remains in its vacuum, i.e., the ground state of \(\hat{H}(t)\); namely, \(\ket{\psi(t)}=\ket{\psi_{\text{GS}}(t)}\).
Under these approximations, the time-dependent Hamiltonian governing the motion of the mirror is
\[\begin{split}\hat{H}_{b}(t)&=\bra{\psi_{\text{GS} }(t)}\hat{H}_{\text{opt}}\ket{\psi_{\text{GS}}(t)}\\ &=\omega_{b}\hat{b}^{\dagger}\hat{b}+\frac{g}{2}\mathcal{N}(t) \left(\hat{b}+\hat{b}^{\dagger}\right)\,\end{split} \tag{3}\]
where \(\mathcal{N}(t)\equiv\bra{\psi_{\text{GS}}(t)}2\hat{a}^{\dagger}\hat{a}+\hat{ a}^{2}+\hat{a}^{\dagger 2}\ket{\psi_{\text{GS}}(t)}\) is the time-dependent radiation pressure, acting as a drive on the mirror and generating real phonons (i.e., detectable). The full system dynamics is then governed by the Hamiltonian \(\hat{H}_{\text{eff}}(t)=\hat{H}_{R}+\hat{H}_{M}(t)+\hat{H}_{b}(t)\). Notice the importance of the counter-rotating terms \(\hat{a}\hat{\sigma}_{-}\) and \(\hat{a}^{\dagger}\hat{\sigma}_{+}\) in \(\hat{H}_{\text{R}}\): if they are neglected, one wrongly predicts \(\mathcal{N}(t)=0\). This shows that the mirror oscillates only if the ground state contains virtual photons.
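This statement can be checked numerically with a few lines of plain NumPy: diagonalize the quantum Rabi Hamiltonian in a truncated photon basis, take its ground state, and evaluate \(\mathcal{N}=\langle 2\hat{a}^{\dagger}\hat{a}+\hat{a}^{2}+\hat{a}^{\dagger 2}\rangle\) for a few qubit frequencies spanned by the modulation. The truncation and parameter values below are illustrative assumptions.

```python
import numpy as np

# Plain-NumPy check that the Rabi ground state carries virtual photons and that
# changing the qubit frequency changes N = <2 a†a + a² + a†²>.  Truncation and
# parameter values are illustrative (in units of the cavity frequency).
n_ph = 20                                        # photon-space truncation
a  = np.diag(np.sqrt(np.arange(1, n_ph)), 1)     # cavity annihilation operator
sm = np.array([[0.0, 1.0], [0.0, 0.0]])          # qubit lowering operator

A  = np.kron(a, np.eye(2))                       # operators on the joint space
Sm = np.kron(np.eye(n_ph), sm)
X  = A + A.T
N_op = 2 * A.T @ A + A @ A + A.T @ A.T

omega_a, lam = 1.0, 0.5                          # cavity frequency and coupling

def ground_state_N(omega_sigma):
    H = omega_a * A.T @ A + omega_sigma * Sm.T @ Sm + lam * X @ (Sm + Sm.T)
    _, vecs = np.linalg.eigh(H)
    gs = vecs[:, 0]                              # Rabi-model ground state
    return float(gs @ N_op @ gs)

for omega_sigma in (1.0, 1.5, 2.0):              # qubit frequencies along the modulation
    print(f"omega_sigma = {omega_sigma:.1f}:  N_GS = {ground_state_N(omega_sigma):.4f}")
```

Dropping the counter-rotating part of the coupling (the Jaynes-Cummings limit) makes the ground state the bare vacuum, and the same calculation returns \(\mathcal{N}=0\), consistent with the discussion above.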
The validity of these approximations is assessed in Fig. 2 by simulating the system dynamics both under the full Hamiltonian \(\hat{H}(t)\) in Eq. (1) and the effective Hamiltonian \(\hat{H}_{\text{eff}}(t)\). The quantity \(\mathcal{N}(t)\) is plotted in Fig. 2(a). The number of phonons is shown in Fig. 2(b). As the full and the effective dynamics are in excellent agreement, we conclude that our interpretation holds, and that the phonon number increases in time due to the radiation pressure originating from \(\mathcal{N}(t)\).
Open-system dynamics--Experimental devices are always subject to the influence of the environment, which has a finite temperature and generally induces loss and dephasing. For the parameters we consider, the finite temperature of the environment in, e.g., a dilution refrigerator (\(T\approx 10\) mK) leads to thermal noise in the phononic part (\(n_{\text{th}}\approx 200\)) but not in the photonic one (\(n_{\text{th}}\approx 5\times 10^{-9}\)) [43]. The open system dynamics, when
Figure 2: (a) Time evolution of \(\mathcal{N}(t)\) when the USC part is described by Eq. (1). (b) Number of phonons as a function of time according to the full Hamiltonian in Eq. (1) (blue solid line) and the effective Hamiltonian in Eq. (3) (orange dashed line). The two curves are in excellent agreement, thus validating the approximations detailed in the main text. If we artificially remove the cavity-qubit counter-rotating terms, we have no creation of mechanical excitations. (c) Fourier components \(\mathcal{N}_{\kappa}\) of \(\mathcal{N}(t)\). Parameters: \(\omega_{a}=\omega_{\sigma}=2\pi\times 4\)GHz, \(\omega_{b}=\omega_{d}=2\pi\times 1\)MHz, \(\lambda=0.5\omega_{a}\), and \(g=2\pi\times 15\)Hz, comparable to Ref. [42]. The system is initialized in the ground state of \(\hat{H}(t=0)\).
Figure 1: Schematic depiction of the system and proposed experiment. A cavity at frequency \(\omega_{a}\) is ultrastrongly coupled to a qubit of bare frequency \(\omega_{\sigma}\), with coupling strength \(\lambda\). The cavity also interacts with a mirror, whose vibration frequency is \(\omega_{b}\), through an optomechanical coupling of intensity \(g\). The frequency of the qubit is adiabatically modulated through \(\Omega_{\sigma}(t)\), and the virtual photon population oscillates in time. This causes the mirror to oscillate. If we now collect the emission of both the USC systems and of the vibrating mirror, only the latter will produce a signal.
assuming a Markovian environment, is governed by the Lindblad master equation
\[\dot{\hat{\rho}}=-i\Big{[}\hat{H}(t),\hat{\rho}\Big{]}+(1+n_{\rm th})\gamma_{b}{\cal D}\left[\hat{b}\right]\hat{\rho}+n_{\rm th}\gamma_{b}{\cal D}\left[\hat{b}^{\dagger}\right]\hat{\rho}+\gamma_{\rm D}{\cal D}\left[\hat{b}^{\dagger}\hat{b}\right]\hat{\rho}\;, \tag{4}\]
where \(\hat{\rho}\) is the density matrix of the system and \({\cal D}[\hat{O}]\hat{\rho}=1/2(2\hat{O}\hat{\rho}\hat{O}^{\dagger}-\hat{\rho}\hat{O}^{\dagger}\hat{O}-\hat{O}^{\dagger}\hat{O}\hat{\rho})\) is the Lindblad dissipator. The phonon loss rate is \(\gamma_{b}(1+n_{\rm th})\), the gain rate \(\gamma_{b}n_{\rm th}\), and the dephasing rate \(\gamma_{\rm D}\), with \(n_{\rm th}\) the thermal population [43]. As we have verified, the USC subsystem remains in its ground state, and thus dissipation processes are absent, despite the finite number of virtual photons. Indeed, when describing an open USC system, dissipation must result in the exchange of _real_ excitations between the system and the environment, rather than virtual ones [44, 6]. At \(T=0\) in particular, the system can only lose energy to the environment, through the emission of real photons. In an ideal setup, never detecting photons but observing the vibration of the mirror is thus the signature that virtual photons are generating a radiation pressure (see Fig. 1). We therefore do not include photon loss terms in Eq. (4). As for the mechanical part, all dissipators can be expressed in terms of the bare phonon operators \(\hat{b}\) and \(\hat{b}^{\dagger}\), as the excitations of the mechanical mode are real, and not virtual. Indeed, the ground state of the mechanical mode is almost empty (\(\langle\,\Psi_{\rm GS}|\hat{b}^{\dagger}\hat{b}|\Psi_{\rm GS}\rangle<10^{-12}\)). Furthermore, the effective Hamiltonian in Eq. (3) has been numerically verified to be valid also in the presence of dissipation [39].
Main features of the model--.As \(\hat{H}_{\rm M}(t)\) has period \(2\pi/\omega_{d}\), we decompose \({\cal N}(t)\) in its Fourier components as \({\cal N}(t)=\sum_{k=-\infty}^{+\infty}{\cal N}_{k}\exp[i\,k\omega_{d}\,t]\). The effective drive resonance condition then occurs for \(\omega_{d}=\omega_{b}/\bar{k}\) with \(\bar{k}>0\in\mathbb{N}\), as confirmed by the numerical simulation reported in Fig. 2(c). If we now assume that the system is close to resonance with the \(\bar{k}\)th component, so that the "pump-to-cavity detuning" \(\Delta_{\bar{k}}\equiv\bar{k}\omega_{d}-\omega_{b}\simeq 0\), and move to the frame rotating at \(\omega_{d}\), we can discard fast rotating terms [45] and obtain [39]
\[\Big{\langle}\hat{b}^{\dagger}\hat{b}\Big{\rangle}_{\rm ss} = \frac{\gamma+\gamma_{\rm D}}{\gamma}\left|\left\langle\hat{b} \right\rangle_{\rm ss}\right|^{2}+n_{\rm th} \tag{5}\] \[\Big{\langle}\hat{b}\Big{\rangle}_{\rm ss} = \frac{g{\cal N}_{\bar{k}}}{2\Delta_{\bar{k}}+i(\gamma+\gamma_{D})}\,. \tag{6}\]
In experimental implementations, the optomechanical coupling \(g\) is a limiting factor in reaching large \(\langle\hat{b}^{\dagger}\hat{b}\rangle_{\rm ss}\). Choosing \(\omega_{d}\approx\omega_{b}\) achieves the largest value of \({\cal N}_{k}\), thus enhancing the driving effect. Furthermore, the low loss rate in optomechanical systems, and the large values of \(\lambda\) (and thus of \({\cal N}_{k}\)) realized in superconducting circuit architectures [2], make the phenomenon detectable according to our estimates.
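Plugging the Fig. 3 parameters into Eqs. (5)-(6) gives a feel for the magnitudes involved; the value of \(\mathcal{N}_{\bar{k}}\) below is an assumed number of the order suggested by Fig. 2(c).

```python
import numpy as np

# Order-of-magnitude evaluation of Eqs. (5)-(6) with the Fig. 3 parameters;
# N_k is an assumed Fourier component of N(t).
g       = 2 * np.pi * 15      # optomechanical coupling [rad/s]
gamma   = 2 * np.pi * 0.4     # rate entering Eqs. (5)-(6) [rad/s]
gamma_D = 2 * np.pi * 0.2     # dephasing rate [rad/s]
n_th    = 200                 # thermal phonon population
N_k     = 0.1                 # assumed Fourier component of N(t)

for delta in (0.0, gamma, 5 * gamma):            # detuning k*w_d - w_b
    b_ss = g * N_k / (2 * delta + 1j * (gamma + gamma_D))
    n_ss = (gamma + gamma_D) / gamma * abs(b_ss) ** 2 + n_th
    print(f"detuning/gamma = {delta / gamma:3.0f}:  |<b>| = {abs(b_ss):5.2f},  <b^dag b> = {n_ss:6.1f}")
```

The coherent contribution sits on top of the thermal background \(n_{\rm th}\) and falls off with a Lorentzian dependence on the detuning, as analyzed below.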
Results--.Fig. 3 shows the creation of mechanical excitations by modulating the properties of the USC vacuum (i.e., ground state) in a dissipative environment. The mechanical part of the system reaches a periodic steady regime in a timescale of the order \(1/\gamma_{b}\), the details depending on the specific choice of parameters. This Floquet steady state was numerically obtained by using the Arnoldi-Lindblad algorithm [46] for Eq. (4) using the approximation in Eq. (3). Fig. 3(a) shows the steady state population as a function of the frequency of the modulation \(\omega_{d}\) at the resonance condition \(\omega_{d}=\omega_{b}/\bar{k}\) and for different values of \(\bar{k}\). The validity of Eq. (5), and the profound impact of the dissipation rate, is shown in
Figure 3: Steady state phonon population: (a) With different drive frequencies: \(\omega_{d}\simeq\omega_{b}/\bar{k}\). The number of phonons follows the magnitude of the Fourier coefficients \({\cal N}_{\bar{k}}\), which are shown in Fig. 2(c); (b) Varying the detuning \(\Delta_{\bar{k}}\) with \(\bar{k}=1\) and with three different values of the mechanical damping \(\gamma_{b}\). The purple \(\times\) points represent the analytical steady state population, following Eq. (5), which is in perfect agreement with the numerical simulations; (c) As a function of the thermal population \(n_{\rm th}\) with \(\gamma_{b}=2\pi\times 0.04\) Hz, showing both the cases \(\Delta_{\omega}=0\) (green line) and \(\Delta_{\omega}=4\) GHz (blue line). The effect of \(n_{\rm th}\) is to linearly increase the steady state population as in Eq. (5). If not specified, the used parameters are: \(\omega_{a}=\omega_{\sigma}=2\pi\times 4\) GHz, \(\omega_{b}=\omega_{d}=2\pi\times 1\) MHz, \(\lambda=0.5\omega_{a}\), \(g=2\pi\times 15\) Hz, \(\gamma=2\pi\times 400\) mHz and \(\gamma_{D}=2\pi\times 200\) mHz. For this choice, the steady state is reached in a time \(1/\gamma_{b}\approx 1\)s.
Fig. 3(b), where we plot the steady-state population as a function of the frequency of the modulation \(\omega_{d}\). Both the analytical prediction and the numerical result have a Lorentzian profile, and they perfectly match (shown only for one curve). The impact of the thermal population \(n_{\text{th}}\) on both the coherence and the total number of phonons is shown in Fig. 3(c). As predicted by Eq. (5), thermal phonons do not modify the coherent emission, but they result in a background phonon occupation that can be subtracted in the experimental analysis.
We have thus shown that virtual photons can pump the mechanical vibrations.
Two linear photonic cavities--.Above we considered a two-level system in interaction with a linear cavity. Experimentally, two-level systems are realized by means of large nonlinearities, which may prove difficult to realize in actual hybrid optomechanical architectures. For this reason, here we demonstrate that the same phonon pumping by virtual photons can be obtained even if we assume that the USC part consists in two coupled harmonic cavities. This model is described by replacing the two-level system with a bosonic mode (\(\hat{\sigma}_{-}\rightarrow\hat{c}\) and \(\hat{\sigma}_{+}\rightarrow\hat{c}^{\dagger}\), where \(\hat{c}\) is the bosonic field) [47, 48].
The same analysis reported in Fig. 3 is repeated for this linear model in Fig. 4. All the results lead to the same conclusion as in the nonlinear case, and are in agreement with the generalization of Eq. (5) to linear models. Considerations about the largest frequency shift that can be induced in current experimental systems lead to the conclusions that the maximal occupation of the phononic mode is smaller than in the nonlinear case.
Design and simulation of an experimental device--This model can be realized in superconducting circuit architectures. For instance, the qubit is implemented by a flux-tunable transmon capacitively coupled to a lumped-element LC resonator, which constitutes the cavity. The latter is formed by shunting an inductance and a mechanically compliant parallel plate capacitor (the vibrating mirror) [29, 49, 50]. The proposed schematic is shown in Fig. 5(a). We model the transmon as a bosonic cavity of initial frequency \(\omega_{\sigma}\) characterized by a Kerr interaction of the form \(\chi(\hat{c}^{\dagger})^{2}\hat{c}^{2}\). A periodic modulation of the magnetic flux threaded in the transmon SQUID loop by an on-chip flux line results in \(\hat{H}_{\text{M}}(t)=\Delta_{\omega}\sin(\omega_{d}t)\hat{c}^{\dagger}\hat{c}\). Such a modulation would also change the coupling strength \(\lambda(t)=\lambda(0)\sqrt{1+\Delta_{\omega}\sin(\omega_{d}t)/\omega_{\sigma}}\). We provide a detailed derivation in the Supplementary material [39].
Based on these target parameters we design the device shown in Fig. 5(b). By simulating the system with the SONNET® software, we obtain: \(\omega_{a}=2\pi\times 9.2\,\text{GHz}\); \(\omega_{\sigma}=2\pi\times 9.2\,\text{GHz}\) for a \(4\,nH\) lumped element inductor that is used to simulate the SQUID. This corresponds to \(C_{\text{q}}+C_{g}\)=75 fF, i.e. \(\chi=2\pi\times e^{2}/[2h(C_{\text{q}}+C_{g})]=2\pi\times 270\) MHz; \(\lambda_{0}=0.26\omega_{a}\). From the drum diameter, we estimate \(\omega_{b}=2\pi\times 3.8\,\text{MHz}\) and the optomechanical coupling \(g=2\pi\times 15\) Hz; \(\Delta_{\omega}=2\pi\times 7\) GHz. For these parameters we have: \(|\langle\hat{b}\rangle|\simeq 1.2\) and \(\langle\hat{b}^{\dagger}\hat{b}\rangle\simeq 8.4+n_{\text{th}}\).
Conclusions--We have considered a USC system optomechanically coupled to a mechanical mirror. We demonstrate both numerically and semi-analytically how the presence of modulated virtual photons - i.e., photons that cannot be emitted into the environment - enables a _real_ mechanical vibration on the mirror. We have demon
Figure 4: As in Fig. 3(b), the phonon population at the steady state for several values of the mechanical damping \(\gamma_{b}\), but when the USC part is described by two interacting harmonic resonators. The markers represent the analytical steady state population, obtained by generalizing Eq. (5). Inset: the time evolution of \(\mathcal{N}(t)\). The used parameters are the same as Fig. 3, except for \(g=2\pi\times 30\) Hz, \(\Delta_{\omega}=2\pi\times 2\) GHz, and \(\lambda=0.3\omega_{a}\).
Figure 5: (a) Lumped element circuit of an LC resonator composed of a linear inductor \(L_{\text{r}}\) and a mechanically compliant vacuum-gap capacitor (\(C_{\text{r}}\)), capacitively coupled (through \(C_{g}\)) to a frequency-tunable transmon qubit, realized by a capacitor \(C_{\text{q}}\) in parallel to a SQUID (with Josephson junctions of identical critical current \(I_{\text{c}}\)). (b) Design simulated in SONNET® of a \(60\,\mu m\) mechanical drum with \(200\,nm\) vacuum gap to the bottom electrode. The circuit parameters are extracted by the signal transmission between port 1 and port 2.
strated that this effect can be realized using current experimental platforms, and we show an explicit example of a hybrid superconducting circuit implementation.
The key features of this system are: (i) the mirror vibrates when the frequency of the modulation matches that of the phononic mode (or integer fractions of it); (ii) despite the fact that the mirror vibrates, and these vibrations can be detected, no photons are emitted by the USC subsystem (see the sketch in Fig. 1); (iii) although thermal population contributes to the total number of phonons, the only coherent contribution comes from the effective drive induced by the virtual photons.
The remarkable conclusion of our proposal is that virtual photons can drive real mechanical excitations. The phenomenon presented here bears clear similarities to the dynamical Casimir effect predicted for USC systems. The important difference, however, is that the external periodic modulation here needs to match the mechanical frequency rather than the optical one. We plan to investigate in the future the reverse effect, where, by externally driving the mechanical mirror, optical excitations in the USC system can be generated. On the experimental level, an implementation following the proposed schematic is within reach.
_Acknowledgments--._ We acknowledge useful discussions with Filippo Ferrari, Luca Gravina, Vincenzo Macri, and Kilian Seibold. This work was supported by the Swiss National Science Foundation through Projects No. 200020.185015 and 200020.215172, and was conducted with the financial support of the EPFL Science Seed Fund 2021. MS acknowledges support from the European Research Council (ERC) under the EU H2020 research and innovation programme, grant agreement No. 835329 (ExCOM-cCEO). S.S. acknowledges support by the Army Research Office (ARO) through grant No. W911NF1910065.
| Systemの超強い光物質結合状態における真空(すなわち、基底状態)には、どんな動的擾乱なしに放出できない粒子が含まれており、そのため仮想粒子と呼ばれています。私たちは、三体系(量子力学的な2レベル系と光学共振器)の基底状態の仮想光子を使用して、鏡の現実的な機械的励起を誘導および観察するためのプロトコルを提案しました。このプロトコルでは、超強い光物質結合状態の光学共振器と2レベル系(クビット)が、同時に光学的に結合され、また機械的共振器にオプトメカニック的に結合されています。2レベル系の周波数が機械的共振器の周波数に相当する時、実のPhononsがcoherently放出されます。したがって、光学周波数よりもはるかに低い。私たちは、この混合効果が真空 |
2309.14557 | Disruption Detection for a Cognitive Digital Supply Chain Twin Using
Hybrid Deep Learning | Purpose: Recent disruptive events, such as COVID-19 and Russia-Ukraine
conflict, had a significant impact of global supply chains. Digital supply
chain twins have been proposed in order to provide decision makers with an
effective and efficient tool to mitigate disruption impact. Methods: This paper
introduces a hybrid deep learning approach for disruption detection within a
cognitive digital supply chain twin framework to enhance supply chain
resilience. The proposed disruption detection module utilises a deep
autoencoder neural network combined with a one-class support vector machine
algorithm. In addition, long-short term memory neural network models are
developed to identify the disrupted echelon and predict time-to-recovery from
the disruption effect. Results: The obtained information from the proposed
approach will help decision-makers and supply chain practitioners make
appropriate decisions aiming at minimizing negative impact of disruptive events
based on real-time disruption detection data. The results demonstrate the
trade-off between disruption detection model sensitivity, encountered delay in
disruption detection, and false alarms. This approach has seldom been used in
recent literature addressing this issue. | Mahmoud Ashraf, Amr Eltawil, Islam Ali | 2023-09-25T22:03:09 | http://arxiv.org/abs/2309.14557v1 | # Disruption Detection for a Cognitive Digital Supply Chain Twin Using Hybrid Deep Learning
###### Abstract
**Purpose:** Recent disruptive events, such as COVID-19 and the Russia-Ukraine conflict, have had a significant impact on global supply chains. Digital supply chain twins have been proposed in order to provide decision makers with an effective and efficient tool to mitigate disruption impact.
**Methods:** This paper introduces a hybrid deep learning approach for disruption detection within a cognitive digital supply chain twin framework to enhance supply chain resilience. The proposed disruption detection module utilises a deep autoencoder neural network combined with a one-class support vector machine algorithm. In addition, long-short term memory neural network models are developed to identify the disrupted echelon and predict time-to-recovery from the disruption effect.
**Results:** The obtained information from the proposed approach will help decision-makers and supply chain practitioners make appropriate decisions aiming at minimizing negative impact of disruptive events based on real-time disruption detection data. The results demonstrate the trade-off between disruption detection model sensitivity, encountered delay in disruption detection, and false alarms. This approach has seldom been used in recent literature addressing this issue.
**Keywords: Digital Twin, Deep Learning, Machine Learning, Supply Chain Management, Supply Chain Resilience, Disruption Detection**
## 1 Introduction
Local and global crises severely impact global supply chains. Hurricane Katrina in 2005, the Japanese tsunami in 2011, COVID-19 in late 2019, and the Suez Canal blockage in 2021 disrupted the flow of goods and materials in global supply chains. Recent power outages and industrial shutdowns in China have affected many supply chains with limited supply and long delays (Feng, 2021). Furthermore, climate change risks may evolve and disrupt global supply chains through natural disasters, resulting in plant shutdowns and disruptions to mining operations and logistics (Ghadge, Wurttmann, & Seuring, 2019). Finally, the Russia-Ukraine conflict is expected to adversely impact many supply chains worldwide and global logistics (Eshkenazi, 2022).
In 2021, 68% of supply chain executives reported constantly facing disruptive events since 2019 (Gartner, 2022). Therefore, proper disruption management is vital to minimise negative disruption impacts and avoid supply chain collapse. Supply chain disruption management refers to the approaches and policies adopted to recover from unexpected disruptive events which cause a high adverse impact on supply chain performance and are characterised by low occurrence frequency (Ivanov, 2021). Some disruptive events, such as supplier unavailability, can have a prolonged impact during the post-disruption period due to delayed orders and backlogs. Supply Chain Resilience (SCR) refers to the supply chain's ability to withstand, adapt, and recover from disruptions to fulfil customer demand and maintain target performance (Hosseini, Ivanov, & Dolgui, 2019). For dynamic systems, SCR is a performance-controlled systemic property and goal-directed. In other words, disruption absorption allows for maintaining the intended performance in the event of a disruption. At the same time, the feedback control embodied in recovery control policies makes SCR self-adaptable (Ivanov, 2021).
SCR considers disturbances in the supply chain, such as supplier unavailability and disruption impact on supply chain performance. Moreover, SCR seeks to restore normal operations by adopting recovery policies. As a result, SCR guarantees the firm's survival after severe adverse events. Resilience may be realised by (1) redundancies, such as subcontracting capabilities and risk mitigation stocks, (2) recovery flexibility to restore regular performance, and (3) end-to-end supply chain visibility (Ivanov, 2021).
With the evolution of Industry 4.0, many businesses were encouraged to carry out the transition towards digitalisation. Gartner (2018) predicted that by 2023, at least half of the world's largest corporations would be employing Artificial Intelligence (AI), advanced analytics, and the Internet of Things (IoT) in supply chain operations. Big Data Analytics (BDA) advancements
and real-time data availability offered by IoT technologies resulted in the emergence of Digital Twins (DTs). A DT is a digital representation of a real-world physical system (Qamsane et al., 2019).
A Digital Supply Chain Twin (DSCT), as defined by Ivanov, Dolgui, Das, and Sokolov (2019), is "a computerised model of the physical system representing the network state for any given moment in real-time". The DSCT imitates the supply chain, including any vulnerability, in real-time. This real-time representation helps improve SCR through an extensive end-to-end supply chain visibility based upon logistics, inventory, capacity, and demand data (Ivanov and Dolgui, 2020).
DSCTs can improve SCR, minimise risks, optimise operations, and boost performance (Pernici et al., 2020). DTs provide up-to-date real-time data which reflects the most recent supply chain state. Real-time data allows for the early detection of supply chain disruptions and rapid response through recovery plans. Moreover, optimisation engines integration with DTs enable making the most cost-effective operational decisions (Frazzon, Freitag, and Ivanov, 2020).
The concept of Cognitive Digital Twins (CDTs) has emerged during the past few years, referring to DTs that possess additional capabilities, such as communication, analytics, and cognition (Zheng, Lu, and Kiritsis, 2021). CDTs were first introduced in the industry sector in 2016, followed by several attempts to provide a formal definition (Zheng et al., 2021). For instance, Lu (2020) defined CDTs as "DTs with augmented semantic capabilities for identifying the dynamics of virtual model evolution, promoting the understanding of inter-relationships between virtual models and enhancing the decision-making". CDTs which utilise machine learning can sense and detect complex and unpredictable behaviours. Therefore, a Cognitive Digital Supply Chain Twin (CDSCT) permits disruption detection in the supply chain and quick deployment of recovery plans in real-time upon disruption detection.
Motivated by recent global supply chain disruptions, digital transformation efforts, and the absence in the literature of operational frameworks that utilise CDSCTs for disruption detection and time-to-recovery prediction, this paper introduces a framework to help enhance Supply Chain Resilience (SCR) through decision support by adopting Digital Supply Chain Twins (DSCTs), building upon the conceptual framework introduced by Ivanov and Dolgui (2020). Additionally, the adoption of data-driven AI models in DSCTs enables monitoring the supply chain state, which helps detect supply chain disruptions in real-time and optimise recovery policies to recover from these disruptions. Real-time disruption detection enables the decision-makers to respond quickly to disruptions through early and efficient deployment of recovery policies. AI models play an important role in discovering abnormal patterns in data. As a result, this paper introduces a hybrid deep learning approach for disruption detection in a make-to-order three-echelon supply chain. The proposed approach is presented within a CDSCT framework to improve SCR through
real-time disruption detection. The introduced approach allows the decision-makers to identify the disrupted echelon and obtain an estimate of the Time-To-Recovery (TTR) from a disruptive event upon disruption detection.
The remainder of this paper is organised as follows. Section 2 reviews the relevant literature. Then, section 3 introduces and describes the problem at hand. Afterwards, section 4 demonstrates pertinent machine learning concepts, followed by section 5, demonstrating the development steps. The results are shown in section 6 followed by section 7, demonstrating the managerial implications. Finally, section 8 provides concluding remarks, current research limitations, and directions for future work.
## 2 Review of literature
### Supply chain resilience
Many scholars proposed several signal-based approaches to evaluate SCR (Chen & Miller-Hooks, 2012; Falasca, Zobel, & Cook, 2008; Melnyk, Zobel, Macdonald, & Griffis, 2013; V.L.M. Spiegler, Naim, & Wikner, 2012; Torabi, Baghersad, & Mansouri, 2015). The proposed approaches involved simple models, such as simple aggregation models, and sophisticated models, such as deep learning. An aggregation-based approach was introduced to evaluate operational SCR (Munoz & Dunbar, 2015). A single evaluation metric across multiple tiers in a multi-echelon supply chain was developed by aggregating several transient response measures. The transient response represents the change in supply chain performance due to a disruptive event. The transient response measures evaluated supply chain performance across multiple dimensions. These dimensions were (1) TTR, (2) disruption impact on performance, (3) performance loss due to disruption, and (4) a weighted-sum metric to capture the speed and shape of the transient response. This approach could explain the performance response to supply chain disruptions better than individual dimensions of resilience at the single-firm level.
A system dynamics-based approach was proposed to quantify SCR at a grocery retailer (V. Spiegler, Potter, Naim, & Towill, 2015). SCR was evaluated based on the supply chain response to the dynamic behaviour of stock and shipment in a distribution centre replenishment system. Considering the inherent non-linear system behaviour eliminates preliminary analysis of non-linearity effects which helps simulate complex supply chains (Ivanov, Sethi, Dolgui, & Sokolov, 2018).
A hierarchical Markov model was introduced to integrate advance supply signals with procurement and selling decisions (Gao, Yang, Zhang, & Luo, 2017). The proposed model captured essential features of advance supply signals for dynamic risk management. In addition, the model could be used to make a signal-based dynamic forecast. The strategic relationship between signal-based forecast, multi-sourcing, and discretionary selling was revealed. However, future supply volatility and variability are expected to affect the future supply forecast. The findings revealed a counter-intuitive insight. A
model that disregards both volatility and variability of the uncertain future supply might outperform the one that considers the variability of the uncertain future supply. Finally, a signal-based dynamic supply forecast was recommended under considerable supply uncertainty and a moderate supply-demand ratio.
Deep learning models for enhancing SCR could outperform the classical models. A deep learning approach was introduced based on Artificial Neural Networks (ANNs) (Radosavljevic, Lucanin, Ruger, & Golubovic, 2021). This approach aims at identifying disruptions related to temperature anomalies in the cold supply chain during transport. The ANN-based model was compared to another approach based on BDA and mathematical modelling. Based on a simulation model and a real-world case, the ANN-based model outperformed the other model based on BDA and mathematical modelling.
Moreover, hybrid deep learning models could outperform deep learning models for anomaly detection. A hybrid deep learning approach was presented to detect anomalies in a fashion retail supply chain (Nguyen, Tran, Thomassey, & Hamad, 2021). The hybrid deep learning model involved a deep Long Short-Term Memory (LSTM) autoencoder and classic machine learning to extract meaningful information from the data. Then, semi-supervised machine learning was applied in the form of a One-Class Support Vector Machine (OCSVM) algorithm to detect sales anomalies. Based on a real case of a company in France, the results showed that hybrid approaches could perform better than deep learning-based approaches.
### Digital supply chain twins for enhancing supply chain resilience
Several studies extended the application of DSCTs in many aspects to support decision-making and enhance SCR. A machine learning approach was introduced to improve SCR through resilient supplier selection in a DT-enabled supply chain (Cavalcante, Frazzon, Forcellini, & Ivanov, 2019). The introduced approach could analyse the supplier performance risk profiles under uncertainty through data-driven simulation for a virtual two-echelon supply chain. The results revealed that combining machine learning-based methods with DSCT could enhance SCR, especially when redesigning the supply network.
A notion of DSCT to support decision-making and improve SCR was explained in (Ivanov et al., 2019). The interrelationships between supply chain digital technology and disruption risk effects in the supply chain were investigated. Then, a framework for risk management in supply chain management was introduced. The results indicated that future decision support systems would utilise DSCTs and digital technologies, such as IoT and BDA. As a result, the available real-time data could provide information regarding the scope and impact of disruptions. The feedback from DSCTs could be used to restore the pre-disruption performance by testing different policies. The integration between BDA and a DT for an automotive supply chain was introduced
to support decision-making and adapt to new scenarios in real-time (Vieira, Dias, Santos, Pereira, & Oliveira, 2019).
Another framework based on real-time disruption detection was presented to support decision-making for a DSCT for disruption risk management (Ivanov & Dolgui, 2020). This framework would enable efficient deployment of recovery policies, reliable disruption scenarios creation for supply chain risk analysis, and revealing the connections between risk data, disruption modelling, and performance evaluation.
The weaknesses in SCR modelling were highlighted in the face of foreseeable disruptions (Golan, Trump, Cegan, & Linkov, 2021). The findings showed that DSCTs could better allow decision-makers to evaluate efficiency/resilience trade-offs. Furthermore, during the post-disruption phase, DTs can help optimise system performance.
Corresponding to the COVID-19 impact on global supply chains, DSCTs were used to examine the effect of a real-life pandemic disruption scenario on SCR for a food retail supply chain (Burgos & Ivanov, 2021). The results uncovered the underlying factors that affect supply chain performance, such as pandemic intensity and customer behaviour. The findings assured the importance of DSCTs for building resilient supply chains.
### Cognitive digital twins
Many scholars introduced different architectures and implementations for CDTs in various fields, such as condition monitoring of assets, real-time monitoring of finished products for operational efficiency, and supporting demand forecasting and production planning (Zheng, Lu, & Kiritsis, 2021). In the field of manufacturing and supply chains, introduced architectures focused on detecting anomalous behaviour in manufacturing systems, improving operations, and minimizing cost across the supply chain (Qamsane et al., 2019; Raleanu, Borangiu, Ivanescu, Morariu, & Anton, 2019). A CDT architecture was proposed for real-time monitoring and evaluation for a manufacturing flow-shop system (Qamsane et al., 2019). The CDT platform could forecast and identify abnormalities using the available data from interconnected cyber and physical spaces. In addition, another architecture was introduced for a shop floor transportation system to predict and identify anomalous pallet transportation times between workstations (Raleanu et al., 2019). Based on two different showcases, both architectures showed that CDTs could improve operations through optimal scheduling in real-time and enhanced resource allocation.
A CDT framework was introduced for logistics in a modular construction supply chain (Lee & Lee, 2021). The proposed CDT could predict logistics-related risks and arrival times to reduce costs using IoT and Building Information Modeling (BIM). Furthermore, an approach for a CDT was proposed in agile and resilient supply chains (Kalaboukas, Rozanec, Kosmerlj, Kiritsis, & Arampatzis, 2021). The CDT could predict trends in dynamic environments to guarantee optimal operational performance. This approach was
elaborated through a connected and agile supply chain. The deployed model considers collaboration among different actors as enablers for information exchange, processing, and actuation.
In addition, a deep learning-based approach has been introduced to predict TTR in a three-echelon supply chain (Ashraf, Eltawil, & Ali, 2022). The introduced approach was presented within a theoretically proposed CDSCT framework to enhance SCR. Obtained results showed that predicted TTR values tend to be relatively lower than the actual values at early disruption stages, then improve throughout the progression of the disruption effect on the supply chain network.
It has been observed from the literature that many recent contributions were directed towards SCR in response to the COVID-19 pandemic impact on global supply chains. Many scholars were concerned with quantifying SCR and deploying DSCT frameworks. On the one hand, deep learning-based models outperformed the classic ones for enhancing SCR. On the other hand, few contributions concerned with enhancing SCR through deep learning-based techniques in a CDSCT environment have been observed. In addition, the literature emphasised the role of CDSCTs in the field of supply chain disruption management. However, few contributions on the implementation of different CDSCT modules for disruption detection were observed. Therefore, this paper contributes to the literature by developing the CDSCT enabling modules for disruption detection.
This paper extends the proposed framework by Ashraf et al. (2022) through incorporating an additional layer for disrupted echelon identification. Furthermore, this paper extends their work by introducing: (1) a hybrid deep learning approach for disruption detection and (2) deep learning-based model for disrupted echelon identification. The introduced approaches are presented as sub-modules of CDSCT for a make-to-order virtual supply chain. In addition, this paper reconsiders inputs for the TTR prediction modules with the aim of obtaining better TTR estimates.
This study tries to answer two research questions. The main research question is "Is there a way to exploit the benefit of cognitive digital twins in the field of supply chain disruption management?" The second research question is "How to validate the introduced framework for incorporating cognitive digital twins into supply chain disruption management?" The first research question is addressed by introducing a CDSCT framework that allows early disruption detection in a CDT-enabled make-to-order virtual supply chain. Early disruption detection is enabled through a hybrid deep learning-based approach using a deep autoencoder neural network and the OCSVM algorithm. In addition to early disruption detection, the CDSCT permits disrupted echelon identification and TTR prediction. The first research question is addressed through the introduced framework, while the second research question is addressed through the system implementation.
## 3 Problem statement
This paper introduces a hybrid deep learning approach for disruption detection within a CDSCT framework to enhance SCR. This approach involves (1) a training phase and (2) an operational phase. The _training phase_ involves training the disruption detection module and models for disrupted echelon identification and TTR prediction. After the training phase, the CDSCT can detect supply chain disruptions, identify disrupted echelons, and predict TTR from disruptions. Figure 1(a) demonstrates the CDSCT during the _operational phase_. Supply chain disruptions are detected based on a real-time data stream from an existing supply chain. The literature indicated that real-time data, enabled by IoT, is collected by multiple means, such as sensors and RFID tags (Ivanov and Dolgui, 2020). Then, the disrupted echelon is identified upon disruption detection, and TTR estimates are obtained. In addition, future supply chain states can be forecasted due to the disruption impact.
Needed supply chain data for training the anomaly (disruption) detection module and TTR prediction model can be obtained from multiple sources. These sources include historical records, real-time data from an IoT-enabled supply chain, or a simulation model depicting a real or a virtual system.
Figure 1: The cognitive digital supply chain twin framework.
Figure 1(b) demonstrates the framework during the training phase. This phase involves training based on a historical data feed representing the supply chain performance in normal and disrupted states. The disrupted echelon is identified upon disruption detection. Then, a TTR estimate is obtained after feeding the labelled training data to the CDT. In practice, sufficient historical records of disruptions for training purposes may be unavailable due to the unpredictability and low occurrence frequency of disruptive events. In such cases, simulation modelling becomes the most convenient tool for augmenting the training data required for the development of machine learning models. This paper uses simulation modelling to simulate different disruption scenarios. In addition, the developed simulation model is used to generate the required data for training the disruption detection module for a make-to-order virtual three-echelon supply chain.
## 4 Methodology
### Deep autoencoders
An autoencoder is a special type of feedforward neural network trained to copy its input to its output by representing its input as a coding (Goodfellow, Bengio, & Courville, 2016). An autoencoder consists of three main components, other than the input and output: (1) an encoder, (2) a coding, and (3) a decoder, Figure 2. Input data compression and decompression through the encoder and decoder, respectively, make autoencoders ideal for applications involving dimensionality reduction and feature extraction. The coding, \(z\), is the compressed representation of the input vector \(x\), which contains the most representative information of the input.
Figure 2: Autoencoder architecture.
An autoencoder with three or more hidden layers in the encoder or the decoder network is considered a deep one (Subasi, 2020). The autoencoder is trained to minimise the reconstruction error between the input \(x\) and the output \(\hat{x}\). It is expected that an autoencoder trained on normal (non-disrupted) data will result in a high reconstruction error when given anomalous (disrupted) data (Malhotra et al., 2016). Therefore, an autoencoder neural network is used for the problem at hand of disruption detection.
### The one-class support vector machine algorithm
OCSVM is a machine learning algorithm used for binary classification and anomaly detection. Anomaly detection refers to discovering outliers or abnormalities embedded in a large amount of normal data (Ma and Perkins, 2003). OCSVM works in a semi-supervised manner when considering anomaly detection as the application area. During the training of an OCSVM, it learns to construct the boundary that separates observations under normal conditions from abnormal observations. The works of Ma and Perkins (2003) and Scholkopf, Williamson, Smola, Shawe-Taylor, and Platt (1999) describe the inherent mechanism of OCSVM for anomaly detection in more detail. OCSVM is usually trained using normal points representing the positive class because full consideration of all disruption scenarios is quite impossible. Then, during operation, the OCSVM checks whether new data points belong to the normal class or not. Suppose an input data point is considered anomalous. In that case, it lies outside the boundary and belongs to the other class, usually referred to as the negative class (anomalous).
The OCSVM algorithm is applied for automatic disruption (anomaly) detection in a three-echelon supply chain. As a binary classification and anomaly detection algorithm, OCSVM was chosen as it enables disruption detection without a prohibitively extensive study of all potential disruption scenarios. The first principal component of the reconstruction error obtained from the autoencoder is used as the input to the OCSVM algorithm. The OCSVM algorithm eliminates the need for statistical analyses to set a threshold above which a data point is considered anomalous. In addition, the OCSVM algorithm does not necessitate any specific assumptions about the data, i.e., reconstruction error is normally distributed (Nguyen et al., 2021).
### Long-short term memory neural networks
The LSTM neural network is an important class of Recurrent Neural Networks (RNNs). LSTM neural networks were proposed by Hochreiter and Schmidhuber (1997). They provide memory to retain long-term dependencies among input data without suffering from the vanishing gradient problem (Li, Li, Wang, & Wang, 2019). Therefore, LSTM networks are suitable for representing sequential data, i.e., time series. A simple LSTM neural network of one neuron, Figure 3(a), receives an input \(x_{i}\), produces an output \(y_{i}\), and resends that output to itself. When the LSTM neural network is unfolded through time, it has the form of a chain of repeated modules, Figure 3(b). At each time step (frame) \(i,i\in\{1,2,...,t\}\), the recurrent neuron receives the input \(x_{i}\) and its output from the previous time step \(h_{i-1}\) to produce an output \(y_{i}\).
## 5 System implementation
This section lays out the implementation steps for developing the proposed approach on a desktop computer with a 2.9 GHz Intel Core i7 processor and 8 GB RAM. A virtual supply chain is modelled as a discrete event simulation model using AnyLogic 8.7 simulation software. Machine learning models are developed using Python 3.8, Scikit-learn 0.24, and Keras 2.6. The training time for different models ranged between two and five hours.
### The virtual supply chain structure
The three-stage flow line model with limited buffer capacity introduced by Buzacott, Shanthikumar, and George (1993) is used to develop a make-to-order virtual three-echelon supply chain. It is assumed that there is a single product under consideration, and alternatives to any echelon are not available. Hence, the service protocol permits backlogging. Figure 4 shows the main components of the virtual supply chain with potential sources of disruption.
A single supplier, manufacturer, and distributor constitute the three-echelon virtual supply chain. An additional component, _demand_, corresponds to the initiated customer order quantity. After a customer order is generated, it enters a First-Come-First-Served (FCFS) queue waiting to be fulfilled. The customer order generation rate follows a Poisson distribution with a mean value of \(\lambda\).
Figure 3: A long-short term memory neural network architecture.
The _supplier_ provides the required raw material with a mean rate \(\mu_{1}\). The supplier is assumed to have unlimited buffer capacity. In contrast, the remaining two echelons are assumed to have a limited buffer capacity of ten units. After the raw material is prepared and delivered to the _manufacturer_, the products are manufactured with a processing rate \(\mu_{2}\). Then, the customer order is ready to be fulfilled through the _distributor_ after being processed with a processing rate \(\mu_{3}\). The processing rates at the supplier, manufacturer, and distributor are assumed to follow an exponential distribution.
Different scenarios are considered to account for the supply chain performance under normal and disrupted circumstances. The normal scenario is denoted by \(S_{0}\), while potential disruption scenarios include unexpected failures at any single echelon \(i\) and are denoted by \(S_{i},i\in\{1,2,3\}\), where echelons 1, 2, and 3 correspond to the supplier, manufacturer, and distributor, respectively. In addition, the surge in demand scenario is considered and denoted by \(S_{4}\). The assumed values of the simulation model parameters are shown in Table 1.
Several parameters and metrics reflecting the supply chain state and performance are monitored. The parameters include (1) the interarrival time, \(T_{a}\), and (2) the processing time at echelon \(i\), \(T_{pi},i\in\{1,2,3\}\). Monitored metrics include
\begin{table}
\begin{tabular}{l l} Parameter & Value \\ \hline Number of replications, \(N\) & 300 replications \\ Replication length, \(RL\) & 1095 days \\ Warm-up period length, \(WL\) & 180 days \\ Arrival rate, Poisson(\(\lambda\)) & 15 units/day \\ Number of orders per arrival, \(Q_{a}\) & 1 unit \\ Supplier service rate, Poisson(\(\mu_{1}\)) & 18 units/day \\ Supplier buffer capacity, \(q_{1}\) & \(\infty\) \\ Supplier server capacity, \(c_{1}\) & 1 unit \\ Manufacturer service rate, Poisson(\(\mu_{2}\)) & 19 units/day \\ Manufacturer buffer capacity, \(q_{2}\) & 15 units \\ Manufacturer server capacity, \(c_{2}\) & 1 unit \\ Distributor service rate, Poisson(\(\mu_{3}\)) & 20 units/day \\ Distributor buffer capacity, \(q_{3}\) & 10 units \\ Distributor server capacity, \(c_{3}\) & 1 unit \\ Disruption duration, \(D_{d}\) & \(D_{d}\in[30,60]\) days \\ Disruption occurrence, \(D_{t}\) & \(D_{t}\in[300,600]\) days \\ Disrupted arrival rate, Poisson(\(\lambda_{d}\)) & 30 units/day \\ Disrupted processing rate, Poisson(\(\mu_{di}\)) & \(\mu_{di}\)\(=\) 0 units/day \(\forall i\in\{1,2,3\}\) \\ \hline \end{tabular}
\end{table}
Table 1: Simulation model parameters.
Figure 4: Virtual supply chain components with potential sources of disruptions.
(1) units in the system \(WIP\), (2) queue length at echelon \(i\), \(L_{qi},i\in\{1,2,3\}\), (3) lead time \(LT\), (4) flow time \(FT\), and (5) the daily output \(K\) in units. _Lead time_ refers to the total time between customer order generation and fulfilment. The _flow time_ refers to the elapsed time from the order beginning of processing by the supplier until fulfilment. Daily records are averaged throughout the day. The \(WIP\) and \(L_{qi}\) are recorded on an hourly basis, while the remaining parameters and metrics are recorded upon order fulfilment.
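The authors implement this virtual supply chain in AnyLogic; as a rough illustration of the same queueing logic, the sketch below uses the open-source SimPy library (a substitution made here, not the tool used in the paper) with Poisson arrivals and exponential service rates taken from Table 1. Finite buffer capacities, the warm-up period, disruptions, and most of the monitored metrics are omitted for brevity.

```python
# Minimal SimPy sketch of the make-to-order line (supplier -> manufacturer ->
# distributor). This is an illustrative substitute for the AnyLogic model:
# finite buffers, warm-up, and disruptions are not modelled here.
import random
import simpy

LAMBDA = 15.0                      # order arrival rate (orders/day)
MU = (18.0, 19.0, 20.0)            # service rates per echelon (units/day)

def order(env, echelons, lead_times):
    t0 = env.now
    for stage, mu in zip(echelons, MU):
        with stage.request() as req:                   # FCFS queue at each echelon
            yield req
            yield env.timeout(random.expovariate(mu))  # exponential processing time
    lead_times.append(env.now - t0)                    # lead time of this order

def source(env, echelons, lead_times):
    while True:
        yield env.timeout(random.expovariate(LAMBDA))  # Poisson arrival process
        env.process(order(env, echelons, lead_times))

env = simpy.Environment()
echelons = [simpy.Resource(env, capacity=1) for _ in range(3)]
lead_times = []
env.process(source(env, echelons, lead_times))
env.run(until=1095)                                    # replication length in days
print(len(lead_times), "orders fulfilled, mean lead time:",
      sum(lead_times) / len(lead_times), "days")
```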
#### 5.1.1 Simulation model validation
The simulation model is validated using the closed-form model given by Buzacott et al. (1993) for a particular system configuration. That configuration assumes an infinite number of orders in front of the supplier. In addition, buffer capacity is not allowed at either the manufacturer or the distributor. The calculated rate at which orders leave the system (output rate) for that configuration, using the closed-form model, is compared to the estimated rate from the simulation model.
The simulation model is validated before generating the required data sets to verify the introduced approach. Therefore, a total of 916 single day replications are used for validation. The calculated output rate from the closed-form model is 10.69 units per day. The estimated output rate was \(10.48\pm 0.221\) units per day with a 99% confidence level. Moreover, a comparison between the calculated and estimated rates using a Z-test shows no significant difference with a 0.01 significance level.
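For illustration, the sketch below reproduces this comparison with a one-sample Z-test using the reported figures; backing the sample standard deviation out of the reported 99% confidence half-width is an assumption made purely for this example.

```python
# One-sample Z-test comparing the simulated output rate with the closed-form
# value; the standard deviation is backed out of the reported 99% half-width.
import math
from scipy import stats

mu_0 = 10.69      # closed-form output rate (units/day)
x_bar = 10.48     # simulated mean output rate (units/day)
n = 916           # single-day replications
half_width = 0.221
z_crit = stats.norm.ppf(0.995)                 # two-sided test, alpha = 0.01
sigma = half_width * math.sqrt(n) / z_crit     # implied sample standard deviation

z = (x_bar - mu_0) / (sigma / math.sqrt(n))
p_value = 2 * (1 - stats.norm.cdf(abs(z)))
print(f"z = {z:.2f}, p = {p_value:.3f}")       # |z| < z_crit: no significant difference
```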
#### 5.1.2 Data sets generation
Five data sets are generated, corresponding to the five scenarios \(S_{i},i\in\{0,1,2,3,4\}\). These scenarios consider both normal and disrupted circumstances. The generated data sets for each scenario represent a multivariate time series that consists of 916 time records per replication. Each time step includes thirteen parameters (features). These features are (1) interarrival time, (2) supplier processing time, (3) manufacturer processing time, (4) distributor processing time, (5) supplier queue length, (6) manufacturer queue length, (7) distributor queue length, (8) work in process, (9) lead time, (10) flow time, (11) waiting time, (12) processing time, and (13) daily output.
Each disruptive event has a direct impact on some input features in the generated data sets. The surge in demand is represented by a decrease in feature (1), which consequently results in an increase in features (5), (9), and (11). The second type of disruptive event, capacity loss at any echelon, disrupts the whole system and affects features (2)-(13). For example, considering the capacity loss at the supplier, some of the affected features are impacted directly, such as feature (2), while others are impacted indirectly, such as feature (6), because of the discontinuity of incoming material flow from the supplier due to the disruptive event.
### The disruption detection module
A semi-supervised hybrid deep learning approach is adopted to detect disruptions in the above-mentioned virtual supply chain, as depicted in Figure 5. The monitored supply chain parameters and performance metrics produce a multivariate time series with multiple time-dependent variables. Consequently, each variable may depend on other variables besides time dependency, making building an accurate model for disruption detection and TTR prediction a complex task. Therefore, a hybrid deep learning-based approach is adopted to tackle this challenge by using automatic feature extraction and learning of the underlying patterns in the input data.
#### 5.2.1 Data preprocessing
The input time series data are split into train, validation, and test sets using a split ratio of 60%, 20%, and 20%, respectively, for all scenarios. Due to the different scales on which the input variables are measured, data preprocessing is carried out by normalising the inputs using a min-max scaler, Equation 1.
\[x_{norm}^{i}=\frac{x^{i}-x_{min}^{i}}{x_{max}^{i}-x_{min}^{i}},i\in\{1,2,...,k\} \tag{1}\]
where \(x_{norm}^{i}\) denotes the normalised vector of a time-varying variable, \(x_{min}^{i}\) and \(x_{max}^{i}\) are the minimum and maximum values of vector \(x^{i}\), and \(k\) is the number of variables in the time series. Due to the relatively long time series, a sliding window of size 14 is applied as a preprocessing step. Afterwards, the deep autoencoder and the OCSVM algorithm detect disruptions based on the first principal component of the reconstruction error. Moreover, two LSTM neural networks are used to identify the disrupted echelon and predict TTR.
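A minimal sketch of this preprocessing chain using scikit-learn is shown below; the random placeholder array and the exact training-split index are illustrative stand-ins for the real replication data.

```python
# Min-max scaling (Equation 1) followed by a sliding window of length 14.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

def make_windows(series, window=14):
    """(T, n_features) -> (T - window + 1, window, n_features)."""
    return np.stack([series[i:i + window] for i in range(len(series) - window + 1)])

raw = np.random.rand(916, 13)              # placeholder for one replication (916 x 13)
scaler = MinMaxScaler().fit(raw[:550])     # fit on the (approx. 60%) training split only
windows = make_windows(scaler.transform(raw), window=14)
flat = windows.reshape(len(windows), -1)   # 14 x 13 = 182-element vectors for the autoencoder
print(windows.shape, flat.shape)
```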
Figure 5: The proposed approach for the disruption detection module in a cognitive digital supply chain twin environment.
#### 5.2.2 Disruption detection
A deep autoencoder of three encoder-decoder pairs is developed to reconstruct the inputs. The hidden and coding layers have a size of 256, 128, 64, and 32, respectively. The learning rate and batch size are set to \(10^{-4}\) and 128, respectively. The autoencoder is trained for 1000 epochs using input data considering normal circumstances generated from the scenario \(S_{0}\). An epoch refers to a complete pass made by the model on the input data set during training.
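A possible Keras realisation of this architecture is sketched below; the choice of activation functions and of the Adam optimiser are assumptions, since only the layer sizes, learning rate, batch size, and number of epochs are stated above.

```python
# Deep autoencoder with encoder layers 256/128/64, a 32-unit coding layer, and
# a mirrored decoder; activations and optimiser are assumptions.
from tensorflow import keras
from tensorflow.keras import layers

n_inputs = 14 * 13                                      # flattened window: 182 elements
inputs = keras.Input(shape=(n_inputs,))
x = inputs
for units in (256, 128, 64):                            # encoder
    x = layers.Dense(units, activation="relu")(x)
coding = layers.Dense(32, activation="relu")(x)         # coding layer
x = coding
for units in (64, 128, 256):                            # decoder
    x = layers.Dense(units, activation="relu")(x)
outputs = layers.Dense(n_inputs, activation="sigmoid")(x)  # inputs are min-max scaled to [0, 1]

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4), loss="mae")
# autoencoder.fit(x_train, x_train, validation_data=(x_val, x_val),
#                 epochs=1000, batch_size=128)
```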
In the beginning, the OCSVM algorithm is trained using the first principal component of the obtained absolute error vectors for the test set only under normal circumstances, considering the scenario \(S_{0}\). Then, the OCSVM algorithm is tested using the test sets under disrupted circumstances under scenarios \(S_{i}\)\(\forall i\in\{1,2,3,4\}\). Model hyperparameters \(\nu\) and \(\gamma\) are set to 0.025 and 100, respectively. The first hyperparameter, \(\nu\), controls the sensitivity of the support vectors, while the latter, \(\gamma\), controls the boundary shape. High values of \(\nu\) lead to a more sensitive model, while high values of \(\gamma\) result in an overfit to the training data. At the end of this section, balancing model sensitivity with other performance metrics is discussed.
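The sketch below illustrates this second stage with scikit-learn, using synthetic reconstruction-error vectors as stand-ins for the autoencoder output; only the PCA-plus-OCSVM wiring and the stated hyperparameters are taken from the text.

```python
# First principal component of the absolute reconstruction error + OCSVM with
# nu = 0.025 and gamma = 100; the error arrays below are synthetic stand-ins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
err_normal = np.abs(rng.normal(0.02, 0.01, size=(500, 182)))   # normal scenario S0
err_test = np.abs(rng.normal(0.15, 0.05, size=(200, 182)))     # disrupted test windows

pca = PCA(n_components=1).fit(err_normal)                      # first principal component
ocsvm = OneClassSVM(nu=0.025, gamma=100).fit(pca.transform(err_normal))

labels = ocsvm.predict(pca.transform(err_test))                # +1 = normal, -1 = disruption
print("flagged as disruption:", float((labels == -1).mean()))
```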
The OCSVM model results are mapped to a labelled data set for further performance evaluation. The selected performance metrics for the disruption detection model include (1) accuracy, (2) precision, (3) recall, and (4) F1-score. The accuracy, Equation 2, describes the overall model performance by calculating the ratio of correctly identified observations to the total observations. The precision, Equation 3, determines the ratio of correctly identified normal observations to the total number of observations identified as normal. The recall, Equation 4, defines the model sensitivity as the ratio of correctly identified normal observations to the total number of actual normal observations. Finally, the F1-score, Equation 5, is the harmonic mean of precision and recall.
\[\text{Accuracy}=\frac{\text{TP}+\text{TN}}{\text{TP}+\text{FP}+\text{FN}+ \text{TN}} \tag{2}\]
\[\text{Precision}=\frac{\text{TP}}{\text{TP}+\text{FP}} \tag{3}\]
\[\text{Recall}=\frac{\text{TP}}{\text{TP}+\text{FN}} \tag{4}\]
\[\text{F1-score}=2\times\frac{\text{Precision}\times\text{Recall}}{\text{ Precision}+\text{Recall}} \tag{5}\]
where TP, FP, FN, and TN are true positive, false positive, false negative, and true negative, respectively, with the normal class taken as the positive class. The true positive refers to the number of normal observations correctly identified as normal. In contrast, the false positive represents the number of abnormal observations incorrectly identified as normal. The false negative defines the number of normal observations incorrectly identified as abnormal. The true negative represents the number of abnormal observations that are correctly identified.
In order to provide the decision-maker with more relevant measures, two additional performance measures, (1) lag and (2) false-positive percentage, are introduced. The _lag_ describes the encountered delay in disruption detection. On the other hand, the ratio of incorrectly classified observations prior to disruption occurrence defines the _false-positive percentage_. These additional performance measures provide a better understanding of the impact of changing model hyperparameters on model performance.
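The two measures can be computed directly from the sequence of predicted labels once the true disruption onset is known; the sketch below assumes one label per day and a hypothetical onset index.

```python
# Detection lag and false-positive percentage from a sequence of daily labels
# (+1 normal, -1 anomaly) and a hypothetical disruption onset index.
import numpy as np

def detection_lag(y_pred, onset):
    alarms = np.where(y_pred[onset:] == -1)[0]
    return int(alarms[0]) if alarms.size else None   # days from onset to first alarm

def false_positive_pct(y_pred, onset):
    return 100.0 * float(np.mean(y_pred[:onset] == -1))   # alarms raised before onset

y_pred = np.array([1] * 300 + [1, 1, -1] + [-1] * 57)    # toy replication, onset at day 300
print(detection_lag(y_pred, 300), false_positive_pct(y_pred, 300))   # -> 2 0.0
```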
The OCSVM-based disruption detection model hyperparameters are selected by adopting a grid search approach. The main objective is to find the best performing combination of hyperparameter values based on different performance measures. Figure 6 summarises the results from the grid search concerning the effect of changing \(\nu\) and \(\gamma\) values on different performance measures. The x-axis represents \(\nu\) on a linear scale, while the y-axis represents \(\gamma\) using a log scale. A good model performance can be represented by a combination of high values of accuracy and F1-score in addition to low false alarm percentage. Evidently, better performance is realised at \(\nu\) in the range below 0.1 and relatively moderate values of \(\gamma\) between 0.1 and 100.
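A simplified version of this grid search is sketched below; the synthetic one-dimensional data stand in for the first principal component, and plain accuracy replaces the combination of accuracy, F1-score, lag, and false alarms weighed in the paper.

```python
# Grid search over nu and gamma; synthetic 1-D data stand in for the first
# principal component, and plain accuracy stands in for the combined criteria.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
pc1_train = rng.normal(-0.3, 0.05, size=(500, 1))                 # normal scenario only
pc1_test = np.vstack([rng.normal(-0.3, 0.05, size=(300, 1)),      # normal
                      rng.normal(1.5, 0.8, size=(200, 1))])       # disrupted
y_true = np.array([1] * 300 + [-1] * 200)

scores = {}
for nu in (0.01, 0.025, 0.05, 0.1, 0.25):
    for gamma in (0.01, 0.1, 1.0, 10.0, 100.0):
        labels = OneClassSVM(nu=nu, gamma=gamma).fit(pc1_train).predict(pc1_test)
        scores[(nu, gamma)] = float(np.mean(labels == y_true))
print(max(scores, key=scores.get), max(scores.values()))
```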
Figure 6: Grid search results for one-class support vector machine hyperparameter selection.
Further analysis is conducted to examine the individual effect of each hyperparameter on the mean lag and false alarms, while the other hyperparameter is fixed, within the range where good model performance has been observed. Figures 7(a) and 7(b) show the effect of changing \(\nu\) at different fixed values of \(\gamma\). The x-axis represents \(\nu\) while the y-axis represents the performance measure value. As indicated by the graphs, \(\gamma\) barely affects the performance measures. On the contrary, \(\nu\) significantly affects the model's performance at \(\nu\leq 0.1\).
Figures 7(c) and 7(d) examine the effect of changing \(\gamma\) while \(\nu\) is fixed at \(\nu\leq 0.1\). The x-axis represents \(\gamma\) on a log scale while the y-axis represents the performance measure value. As per the shown graphs, \(\gamma\) does not have a significant effect on the performance measures at \(\gamma\in[0.01,1000]\) when compared to \(\nu\). On the contrary, \(\nu\) significantly affects the model's performance: an increase in \(\nu\) results in a significant improvement in the mean lag, but more false alarms arise. Therefore, the values of \(\nu\) and \(\gamma\) are chosen to achieve as short a lag as possible with the fewest false alarms.
Figure 7: Effect of changing \(\nu\) and \(\gamma\) values.
#### 5.2.3 Disrupted echelon identification
An LSTM neural network classifier is developed to identify the disrupted echelon upon disruption detection. The LSTM classifier is trained in a fully supervised manner. Therefore, the input class sequence is converted from a class vector to a binary class matrix using one-hot encoding. Then, the train and validation sequences are used to train the classifier with a learning rate of \(10^{-4}\) and a batch size of 32 for 20 epochs. Finally, the classifier is tested using the test set. The LSTM neural network classification model consists of two LSTM layers. Each layer has 16 units and a dropout rate of 0.1.
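A Keras sketch of this classifier is given below; the input window shape, the softmax output, and the Adam optimiser are assumptions beyond the stated layer sizes, dropout, learning rate, and batch size.

```python
# LSTM classifier for disrupted-echelon identification: two 16-unit LSTM layers
# with dropout 0.1 and a softmax over the six classes of Table 3.
from tensorflow import keras
from tensorflow.keras import layers

n_classes = 6   # normal, demand surge, capacity loss at each of 3 echelons, recovery
clf = keras.Sequential([
    keras.Input(shape=(14, 13)),                 # assumed window of 14 steps x 13 features
    layers.LSTM(16, dropout=0.1, return_sequences=True),
    layers.LSTM(16, dropout=0.1),
    layers.Dense(n_classes, activation="softmax"),
])
clf.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),
            loss="categorical_crossentropy", metrics=["accuracy"])
# y_train_onehot = keras.utils.to_categorical(y_train, n_classes)   # one-hot encoding
# clf.fit(x_train, y_train_onehot, validation_data=(x_val, y_val_onehot),
#         epochs=20, batch_size=32)
```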
#### 5.2.4 Time-to-recovery prediction
An LSTM neural network-based model is developed to predict TTR using the incoming signal from the simulation model for different parameters and metrics. Different hyperparameter values are tested, and the best-performing set is chosen to predict the TTR based on the minimum validation loss. These hyperparameters are used to develop four TTR prediction models by considering a single disruption scenario at a time. The four TTR prediction models correspond to the four potential disruption scenarios \(S_{i},i\in\{1,2,3,4\}\).
Each model has two LSTM layers with 64 LSTM units each. The learning rate is set to \(10^{-4}\) and the dropout rate is 0.1 for each layer. An \(l1\) regularisation is applied to the first layer with a regularisation factor of \(10^{-3}\). Each model is trained with a batch size of 16 for twenty epochs. Each model is evaluated based on (1) Mean Absolute Error (MAE), (2) Mean Squared Error (MSE), (3) Root Mean Squared Error (RMSE), and (4) Mean Absolute Percentage Error (MAPE). These performance measures are given by Equations 6, 7, 8, and 9, respectively.
\[\text{MAE}=\frac{\sum\lvert y-\hat{y}\rvert}{N} \tag{6}\]
\[\text{MSE}=\frac{\sum\left(y-\hat{y}\right)^{2}}{N} \tag{7}\]
\[\text{RMSE}=\sqrt{\frac{\sum\left(y-\hat{y}\right)^{2}}{N}} \tag{8}\]
\[\text{MAPE}=\frac{\sum\frac{\lvert y-\hat{y}\rvert}{y}}{\text{N}} \tag{9}\]
where \(N\) is the number of TTR observations, while \(y\) and \(\hat{y}\) represent the actual and predicted TTR vectors.
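The sketch below shows one possible Keras implementation of a single TTR regressor together with the four error measures of Equations 6-9; the input window shape and the output layer are assumptions, and MAPE is returned as a fraction rather than a percentage.

```python
# One TTR regressor (two 64-unit LSTM layers, dropout 0.1, l1 regularisation on
# the first layer) and the error measures of Equations 6-9.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers, regularizers

ttr_model = keras.Sequential([
    keras.Input(shape=(14, 13)),                              # assumed window shape
    layers.LSTM(64, dropout=0.1, return_sequences=True,
                kernel_regularizer=regularizers.l1(1e-3)),
    layers.LSTM(64, dropout=0.1),
    layers.Dense(1),                                          # predicted TTR in days
])
ttr_model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4), loss="mae")
# ttr_model.fit(x_train, ttr_train, validation_data=(x_val, ttr_val),
#               epochs=20, batch_size=16)

def error_measures(y, y_hat):
    e = y - y_hat
    return {"MAE": float(np.mean(np.abs(e))), "MSE": float(np.mean(e ** 2)),
            "RMSE": float(np.sqrt(np.mean(e ** 2))), "MAPE": float(np.mean(np.abs(e) / y))}
```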
## 6 Results
The generated data for the virtual supply chain are used to verify the proposed approach. This section evaluates the performance of different modules, mainly the disruption detection module, disrupted echelon identification, and time-to-recovery prediction.
### Simulation-generated data sets
After the simulation model is validated, a single data set for each scenario is generated. Then, each data set was labelled and normalised. Finally, each data set was split into train, validation, and test sets. The train and validation sets for scenario \(S_{0}\) were used to train the deep autoencoder model. Then, the test set for the \(S_{0}\) scenario was used for testing the deep autoencoder model and the OCSVM algorithm. In addition, the test sets for scenarios \(S_{i}\)\(\forall i\in\{1,2,3,4\}\) were used for testing the deep autoencoder model, evaluating OCSVM algorithm performance, testing the disrupted echelon classification model, and TTR prediction models. The disrupted echelon classification model and TTR prediction models were trained and validated using the train and validation sets for scenarios \(S_{i}\)\(\forall i\in\{1,2,3,4\}\).
### Disruption detection using deep autoencoders and one-class support vector machine algorithm
The deep autoencoder is trained using sequences of \(14\) timesteps \(\times\) \(13\) features. These sequences are generated by applying a sliding window of size \(14\). Input sequences are converted to a one-dimensional vector due to the inability of the autoencoder to process two-dimensional data as input. The flattened vector has a length of \(182\) elements. The MAE function is used to evaluate the autoencoder model loss. The model loss represents the differences between the actual values and the estimates from the model. A learning curve compares the model loss on the training and validation data sets. The obtained learning curve demonstrates a slight difference between both data sets, which indicates a good fit of the autoencoder model, Figure 8. A significant model loss decrease is noted during the first \(100\) epochs, followed by a gradual decrease until stability at epoch number \(900\).
Figure 8: The learning curve during training the autoencoder.
After training the autoencoder model, it is used to obtain the absolute reconstruction error using the test sets under normal and disrupted circumstances. The total absolute reconstruction error \(e_{t}^{a}\) at time \(t\), summed over all \(k\) features, is given by Equation 10.
\[e_{t}^{a}=\sum_{i=1}^{k}\lvert x_{t}^{i}-\hat{x}_{t}^{i}\rvert \tag{10}\]
where \(x_{t}^{i}\) and \(\hat{x}_{t}^{i}\) are the actual and estimated values of the test set for feature \(i\) at time \(t\), respectively. A significant difference between the normal and abnormal circumstances was realised due to the low values of the first principal component under normal circumstances. The vast majority of the first principal component values under normal circumstances fall below \(-0.2\), much lower than those under disruption and recovery, which fall between \(-0.5\) and \(3.5\).
Then, the OCSVM algorithm is trained using the first principal component vector of the obtained reconstruction error under normal circumstances, which defines the positive class. The first principal component explains 92.39% of the overall variability in the absolute reconstruction error across input features. Finally, the first principal component vector under disrupted circumstances is used for disruption detection using the trained OCSVM disruption detection model. Table 2 shows the performance evaluation results for the disruption detection model.
There is a considerable difference between the model performance on the two data sets. However, the disruption detection model achieved good performance under disrupted circumstances. The high recall value implies that 97.6% of all normal observations are correctly identified.
Incorrectly classified observations under normal circumstances exist due to the model's sensitivity to outliers in the train data. The reconstruction error is affected by noise, representing instantaneous disruptions (operational variability). That variability produces extremely low or high values for the principal component of the reconstruction errors under normal circumstances, affecting the OCSVM algorithm performance. Consequently, model sensitivity to such variability is a matter which requires further investigation.
The false-positive percentage, reflecting the percentage of false alarms prior to disruption, is 2.5%. The false alarm count is 1530, corresponding to approximately seven incorrect observations per replication. The average delay in disruption detection (lag) is 7.1 days. The lag distribution is shown in Figure 9. The maximum and median lag values are 23 and 4 days, respectively.
\begin{table}
\begin{tabular}{l c c c c} Data set & Accuracy & Precision & Recall & F1-score \\ \hline Test-\(S_{0}\) & 97.5\% & 100.0\% & 97.5\% & 98.73\% \\ Test-\(S_{i}\)\(\forall i\in\{1,2,3,4\}\) & 87.28\% & 84.25\% & 97.6\% & 90.43\% \\ \hline \end{tabular}
\end{table}
Table 2: Performance measures after applying one-class support vector machine algorithm.
Despite the apparently good model performance, the realised lag is a matter of concern depending on the anticipated speed in detecting disruptions. The trade-off between achieving shorter delays and reducing false alarms depends on the model sensitivity, controlled by the hyperparameter \(\nu\). Although small hyperparameter values are recommended to achieve few false alarms, the disruption detection model becomes less sensitive to disruptions (anomalies). Thus, a significant increase in the maximum lag (delay) is encountered. Large \(\nu\) values can achieve an efficient disruption detection model through delay minimisation. However, the model becomes too sensitive, leading to many false alarms and poor performance in terms of accuracy, precision, recall, and F1-score. Therefore, the decision-maker should find an acceptable compromise between these performance measures. A suggested solution is to maintain shorter delays; the false alarms can then be handled using the proposed LSTM neural network classification model.
The first principal component of the obtained absolute error for a single replication and different scenarios is plotted against time, Figure 10. The left y-axis represents the first principal component, while the right y-axis represents the corresponding metric/performance measure for each scenario in days. The first principal component for all disrupted scenarios is notably higher than the scenario under normal circumstances. The red dots refer to the anomalous points. Some points before the estimated recovery are normal points, affecting the model performance measures since the data are labelled based on a predefined threshold.
Figure 9: Disruption detection delay distribution.
Figure 10: The one-class support vector machine algorithm results.
### Disrupted echelon identification using long-short term memory neural network model
The LSTM model for disrupted echelon identification is trained to learn the multivariate time series pattern. The model is trained using the train and validation data sets for scenarios \(S_{i}\)\(\forall i\in\{1,2,3,4\}\). The model should predict the most likely class to which a given sequence belongs. Input data are labelled to consider the disrupted echelon, recovery phase, and normal circumstances during pre-disruption and post-recovery phases. The categorical cross-entropy function \(J\), Equation 11, is used for model evaluation during training (Geron, 2019).
\[J=-\sum_{k=1}^{N}y_{i,k}\log\left(p_{i,k}\right) \tag{11}\]
where \(N\) is the number of classes, \(y_{i,k}\in\{0,1\}\) is a binary indicator if class label k is the correct classification for observation \(i\), and \(p_{i,k}\in[0,1]\) is the predicted probability observation \(i\) is of class \(k\). Lower cross-entropy values indicate better convergence of predicted sample probability towards the actual value. The learning curve shows a significant loss decrease after a few epochs, Figure 11.
Once the LSTM neural network model for disrupted echelon identification is trained, it is tested using the test data. The model performance is evaluated using precision, recall, and F1-score. Overall, the model performs well except for identifying the recovery phase, Table 3. The precision during recovery is highly affected by the incorrectly classified observations that belong to the normal class, as depicted by the confusion matrix, Figure 12. The confusion matrix summarises the LSTM model classification results by showing the count values for each class.
Figure 11: The learning curve for long-short term memory classification model.
### Time-to-recovery prediction using long-short term memory neural network models
The TTR is predicted based on an LSTM neural network prediction model. The model is trained to predict TTR based on multivariate inputs considering a single disruption scenario at a time. Therefore, four prediction models are developed to correspond to each disruption scenario \(S_{i},i\in\{1,2,3,4\}\). Training and validation data sets are used to train the proposed models. The MAE
\begin{table}
\begin{tabular}{l c c c} Disruption class & Precision & Recall & F1-score \\ \hline Normal & 98\% & 97\% & 98\% \\ Surge in demand & 96\% & 98\% & 97\% \\ Capacity loss at the supplier & 100\% & 98\% & 99\% \\ Capacity loss at the manufacturer & 100\% & 100\% & 100\% \\ Capacity loss at the distributor & 100\% & 99\% & 99\% \\ Recovery & 95\% & 96\% & 96\% \\ \hline \end{tabular}
\end{table}
Table 3: Performance measures for long-short term memory classification model.
Figure 12: Confusion matrix.
function monitors the loss for each model. The four models possess a rapid loss decrease after a few epochs, and stability is realised after the eighth epoch, Figure 13.
The TTR prediction models are tested using the test sets considering different disruption scenarios \(S_{i},i\in\{1,2,3,4\}\). It is evident from the performance evaluation results, Table 4, that the proposed models perform much better than those obtained by Ashraf et al. (2022) for all disruption scenarios. Reducing the number of input features has significantly improved the TTR prediction models' performance.
Figure 13: Obtained learning curves for time-to-recovery prediction models.
After the TTR prediction models are tested, the actual and predicted TTR are compared at different replications. The TTR values at a randomly selected time step, \(t\), are sketched in Figure 14. The predicted TTR values tend to be slightly lower than the actual ones. However, minor variations exist in many cases. The TTR prediction error is obtained by calculating the difference between actual and predicted TTR values. Figure 15 shows the prediction error corresponding to the data used in Figure 14. Significant positive deviations pertain to the early disruption stages.
The progression of predicted TTR values is further examined for a single replication considering different disruption scenarios, Figure 16. A short delay in TTR prediction is observed at early disruption stages. That delay is followed by a higher TTR prediction than the actual. By the end of the disruption, the predicted TTR values tend to be close to the actual ones.
## 7 Managerial implications
Data-driven digital supply chain twins offer better end-to-end supply chain visibility and, consequently, enhanced supply chain resilience. DSCTs can monitor and determine the supply chain state as well as provide useful information and insights for decision-making support. Integrating the models proposed in this paper into a cognitive digital supply chain twin helps decision-makers make appropriate decisions based on real-time disruption detection
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{Scenario} & \multicolumn{4}{c}{Obtained error measures} & \multicolumn{3}{c}{Ashraf et al. (2022)} \\ \cline{2-8} & MAE & MSE & RMSE & MAPE & MAE & MSE & RMSE \\ \hline \(S_{1}\) & 15.32 & 1658.48 & 40.72 & 0.21 & 33.08 & 4142.2 & 64.36 \\ \(S_{2}\) & 17.25 & 1796.42 & 42.38 & 0.235 & 30.52 & 2259.71 & 47.54 \\ \(S_{3}\) & 13.36 & 1193.82 & 34.55 & 0.212 & 43.68 & 5975.85 & 77.3 \\ \(S_{4}\) & 12.8 & 1291.31 & 35.93 & 0.259 & 30.58 & 1867.69 & 43.22 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Selected error metrics for time-to-recovery prediction models on the test sets.
Figure 14: Time-to-recovery predictions versus actual values.
data. Early disruption detection allows for early deployment of recovery policies, minimising negative impact due to disruption, leading to quicker recovery and improved supply chain resilience. In addition, the disrupted echelon identification at early disruption stages allows the decision-makers to find other alternatives that mitigate disruption impact. Furthermore, obtaining the predicted Time-To-Recovery (TTR) at early stages provides an estimate for the duration of contractual agreements, if they exist, when considering different options.
Figure 15: Time-to-recovery prediction errors.
Figure 16: Time-to-recovery prediction evolution along with disruption progression.
## 8 Conclusion
This paper introduced a new hybrid deep learning-based approach for disruption detection within a data-driven cognitive digital supply chain twin framework. Referring to the first research question, "Is there a way to exploit the benefit of cognitive digital twins in the field of supply chain disruption management?", the presented approach mainly contributes to the field of supply chain disruption management by offering better end-to-end supply chain visibility, which enhances supply chain resilience by enabling real-time disruption detection, disrupted echelon identification, and time-to-recovery prediction. The developed modules permit the CDSCT to detect disruption occurrence through combining a deep autoencoder neural network with a one-class support vector machine classification algorithm. Then, if a disruption is detected, long-short term memory neural network models identify the disrupted echelon and predict time-to-recovery from the disruption. Referring to the second research question, "How to validate the introduced framework for incorporating cognitive digital twins into supply chain disruption management?", the presented framework is validated under several potential disruption scenarios in a virtual three-echelon supply chain. The disruption scenarios accounted for the surge in demand and unexpected failures at any echelon.
The obtained results indicated a trade-off between disruption detection model sensitivity, encountered delay until disruption detection, and false alarm count. Based on the excellent performance of the proposed model for disrupted echelon identification, that model may be suggested to replace the former approach for disruption detection based on deep autoencoder and one-class support vector machine algorithm. However, the OCSVM algorithm-based anomaly detection model is indispensable because it does not require an extensive definition of all possible disruption scenarios. Developed models for time-to-recovery prediction revealed that predicted time-to-recovery values tend to be lower than the actual ones at early disruption stages. Then, these predictions improve throughout disruption progression with slight variation.
Current research limitations include (1) the difficulty in accurately identifying the transition of the system from a disrupted state to a fully recovered one, (2) considering a single type of disruption at a time, and (3) as a first initiative, the introduced approach has only been tested on simulation-generated data set. Future work directions may include (1) investigating the concurrent occurrence of more than one disruption type, (2) developing a dynamic forecast model to forecast possible supply chain states upon disruption detection, (3) integrating the cognitive digital supply chain twin with an optimization engine to optimize operational decisions to enhance supply chain resilience,
(4) examining the performance of other machine learning algorithms, and (5) applying the introduced framework to a real-world case.
## Acknowledgement
Thanks to Prof. Amin Shoukry (Department of Computer Science Engineering, Egypt-Japan University of Science and Technology, New Borg El-Arab City, Alexandria, Egypt) for his valuable guidance.
## Declarations
### Funding
This work was supported by the Egyptian Ministry of Higher Education (Grant number 10.13039/501100004532) and the Japanese International Cooperation Agency (Grant number 10.13039/501100002385).
### Conflict of interest/Competing interests
The authors have no competing interests to declare that are relevant to the content of this article.
### Availability of data and materials
The datasets generated during and analysed during the current study are available from the corresponding author on reasonable request.
### Authors' contributions
All authors contributed to the study conception and design. The first draft of the manuscript was written by Mahmoud Ashraf and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
| Purpose: Recent disruptive events, such as COVID-19 and the Russia-Ukraine conflict, have had a major impact on global supply chains. Digital supply chain twins have been proposed to provide decision-makers with an effective and efficient tool.
Methods: This paper introduces a hybrid deep learning approach for disruption detection within a cognitive digital supply chain twin framework. The proposed disruption detection module combines a deep autoencoder neural network with a one-class support vector machine algorithm. In addition, long short-term memory neural network models are developed to identify the disrupted echelon and to predict the time-to-recovery from a disruption.
Results: The information obtained from the proposed approach helps decision-makers and supply chain practitioners make appropriate decisions to minimise the negative impact of disruptive events. Real-time disruption detection |
2309.15649 | Generative Speech Recognition Error Correction with Large Language
Models and Task-Activating Prompting | We explore the ability of large language models (LLMs) to act as speech
recognition post-processors that perform rescoring and error correction. Our
first focus is on instruction prompting to let LLMs perform these task without
fine-tuning, for which we evaluate different prompting schemes, both zero- and
few-shot in-context learning, and a novel task activation prompting method that
combines causal instructions and demonstration to increase its context windows.
Next, we show that rescoring only by in-context learning with frozen LLMs
achieves results that are competitive with rescoring by domain-tuned LMs, using
a pretrained first-pass recognition system and rescoring output on two
out-of-domain tasks (ATIS and WSJ). By combining prompting techniques with
fine-tuning we achieve error rates below the N-best oracle level, showcasing
the generalization power of the LLMs. | Chao-Han Huck Yang, Yile Gu, Yi-Chieh Liu, Shalini Ghosh, Ivan Bulyko, Andreas Stolcke | 2023-09-27T13:36:03 | http://arxiv.org/abs/2309.15649v2 | Generative Speech Recognition Error Correction with Large Language Models and Task-Activating Prompting
###### Abstract
We explore the ability of large language models (LLMs) to act as speech recognition post-processors that perform rescoring and error correction. Our first focus is on instruction prompting to let LLMs perform these tasks without fine-tuning, for which we evaluate different prompting schemes, both zero- and few-shot in-context learning, and a novel "task activation" prompting method that combines causal instructions and demonstration to increase its context windows. Next, we show that rescoring only by in-context learning with frozen LLMs achieves results that are competitive with rescoring by domain-tuned LMs, using a pretrained first-pass recognition system and rescoring output on two out-of-domain tasks (ATIS and WSJ). By combining prompting techniques with fine-tuning we achieve error rates below the N-best oracle level, showcasing the generalization power of the LLMs.
Chao-Han Huck Yang, Yile Gu, Yi-Chieh Liu, Shalini Ghosh, Ivan Bulyko, Andreas Stolcke Amazon, USA
**Index Terms**: large language model, N-best rescoring, instruction prompting, few-shot learning, in-context learning.
## 1 Introduction
Large-scale language models (LLMs) have exhibited outstanding performance on downstream tasks by conditioning on input information, including task descriptions (e.g., performing mathematical calculations) or a limited number of input-output pairs obtained from training text (e.g., goal-oriented demonstrations). This new capability of task-specific inference from contextual information has been referred to as "_in-context learning_" in Brown _et al._[1]. More specifically, the ability to learn in-context has been reported in previous studies [2] of pretrained LLMs with over \(100\)B parameters trained with an unsupervised auto-regressive objective. Although recent advances in in-context learning have consistently demonstrated excellent performance on a wide range of tasks [3], there have been limited studies on the interaction or benefits of in-context learning on automatic speech recognition (ASR) tasks. As an example, contextual information [4] has been shown to play a vital role on ASR applications in complex domains, such as recognizing utterances referring to trending news.
One open question in the development of robust ASR applications is _how_ recent in-context learning frameworks can utilize their zero-shot learning capability to enhance ASR systems. Meanwhile, scaling ASR model sizes up to \(10\)B parameters [5] by itself has not proven adequate for achieving high performance on challenging (e.g., conversational) speech tasks from domain-specific data. The challenge to obtain better generalization of neural ASR models has motivated proposals to incorporate external knowledge from textual data [6]. For instance, one way to improve the RNN-transducer is to incorporate an external LM [7] for domain-aware adaptation in streaming-based applications. However, the external LM size is often limited to a range of \(10\)M to \(100\)M for on-device deployment. Given these limitations, cloud-based second-pass rescoring with LLMs may be a promising approach that leverages frozen pretrained models and leverages in-context learning.
Toward this end, in this work we explore novel ASR post-processing pipelines that utilize frozen LLMs by exploiting in-context learning. We consider two ASR second-pass pipelines, as shown in Figure 1:
\(\mathcal{P}\)**ipeline 1:** a standard rescoring system takes in N-best output from a first ASR pass, and is trained to minimize the word error rate (MWER) by reranking the hypotheses. As illustrated in Figure 1(a), an LLM in-context learning process is inserted into the pipeline to post-process first-pass hypotheses to apply error correction.
\(\mathcal{P}\)**ipeline 2:** a new task-activating prompting method is used to initialize the frozen LLM with task-oriented instructions. A list of N-best ASR hypotheses is formatted as input to the LLM, thus allowing "in-context learning initialization" and/or "in-domain fine-tuning" (e.g., using adapters for parameter-efficient model update) that results in an improved speech transcription.

Figure 1: Two ASR post-processing frameworks using LLMs: (a) correct errors (e.g., grammar [8]) before applying a standard rescoring model, or (b) perform zero/few-shot rescoring; with optional task-activating prompting (Section 3.2).
In the remaining sections we present a first exploration of this novel way to utilize LLMs for the ASR task, demonstrate its surprising effectiveness, and compare results with different in-context learning schemes, as well as those of standard rescoring methods.
## 2 Related Work
**LLM-based post-processing to improve hypotheses.** Error correction post-processing [9, 10] aims to fix grammar or deletion errors in output sentences and has been shown to improve the first-pass hypotheses generated from end-to-end ASR. A key characteristic of correction techniques is their reliance on pretrained LLMs, which benefit from rich contextual information. Liao _et al._[9] propose ASR post-processing for readability, by extracting semantic expressions and generating readable text from ASR transcriptions. N-best T5 [11] used the T5 encoder-decoder architecture for rescoring with discriminative training.
**Zero-shot learning for acoustic and language modeling.** Prior work has demonstrated that language modeling can generalize to multiple tasks in a zero-shot manner without exemplars [3, 12, 13]. However, zero-shot and few-shot language modeling techniques often rely on fine-tuning, which requires redeployment of the pretrained models.
**In-context learning based on information prompting.** In-context learning (ICL) [1, 14] induces a single model to perform domain-agnostic inference without fine-tuning by providing a single prompt or a few prompts, thus addressing the aforementioned limitations. A prior study [2] has shown that ground-truth demonstrations have a smaller effect than expected and that significant zero-shot performance improvements are possible under the ICL framework. This implies that additional information gain can be extracted from frozen pretrained LLMs themselves if the right prompting strategy is selected. However, ICL has its own shortcomings on reasoning tasks. Chain-of-thought (CoT) prompting [15] decomposes reasoning tasks by providing models with a sequence of questions or prompts that gradually guide the model toward predictions for a target task. While CoT prompting is usually employed in a few-shot setup, LLMs have also been shown to be zero-shot reasoners given a single, specific prompt [16]. In this work, we apply these ICL techniques to ASR rescoring for the first time and empirically evaluate their performance individually against the baseline.
## 3 Method
We now review some recent advances in in-context learning techniques [15, 1, 16] and describe how they can be incorporated into second-pass rescoring applications.
### In-Context Learning Background and Techniques
In-context learning [1] can emerge from modeling long-range coherence in the pretraining data. Based on a recent theoretical justification [17] by Bayesian inference, LLM would have implicitly learned to infer a latent concept during its pretraining stage. As an empirical result, in-context learning occurs if the LM can still infer the shared concept across examples (e.g., task instruction or prompts) to perform a target task. To model the in-context learning process, we can formulate its distribution over token \(o\) within the vocabulary \(O\) by sampling a latent _confounding variable_[18]\(\theta\) from its population \(\Theta\).
The prediction over the pretraining distribution could be inferred by marginalizing over the confounding variable \(\theta\):
\[p_{\text{prompt}}=p(o_{1},...o_{T})=\int_{\theta\in\Theta}p(o_{1},...o_{T}| \theta)p(\theta)\,d\theta. \tag{1}\]
Under the in-context learning framework, prediction sequence \(O_{i}\) is inferred from the pretraining distribution conditioned on a prompt variable \(\theta^{*}\), test-time sample (questions we would like to answer) \(x_{\text{test}}\) and its in-context predictor \(p_{\text{prompt}}(y|x)\):
\[y_{\text{test}}\sim p_{\text{prompt}}(y|x_{\text{test}},\theta^{*}). \tag{2}\]
For instance, a simple prompt to empower in-context learning is to directly provide a _"task-oriented question"_ to the pretrained LLM, as shown in Figure 2(a). We further illustrate more in-context learning setups in the following subsections.
#### 3.1.1 Zero-shot domain-hint prompting
Figure 2: Four LLM in-context learning uses for ASR 2nd pass

In the zero-shot setting, given a prompt template function \(r()\) and \(\theta^{*}\) as the domain-specific confounding variable (e.g., airline travel), a pretrained LLM maps the conditional probability of the original input \(x\) and target \(y\), even if they were never seen during training, into their template functions \(r_{x}(x)\) and \(r_{y}(y)\).
\[r_{y}(y_{\text{test}})\sim p_{\text{prompt}}(r_{y}(y)|r_{x}(x_{\text{test}}), \theta^{*}). \tag{3}\]
In this work, we consider two general acoustic domains, using a hard-coded template input of "_airline information_" or "_financial market_", as shown in Figure 2(b).
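For illustration, the minimal sketch below assembles such a domain-hinted zero-shot rescoring prompt from an N-best list; the template wording and the helper name are our own assumptions, not the exact prompts used in the experiments.

```python
# Illustrative sketch: a hard-coded domain hint (theta*) wrapped around the
# N-best list, acting as the template function r_x(.).
def build_domain_hint_prompt(nbest, domain_hint="airline information"):
    lines = [f"The utterance below is about {domain_hint}.",
             "N-best ASR hypotheses:"]
    lines += [f"{i}. {hyp}" for i, hyp in enumerate(nbest, start=1)]
    lines.append("Report the most likely true transcription.")
    return "\n".join(lines)

print(build_domain_hint_prompt(
    ["recognize speech with artificial intelligence",
     "recognize peach with artificial intelligence"]))
```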
#### 3.1.2 Zero-shot reasoning
Zero-shot reasoning [16] employs chain-of-thought prompting [15] in a zero-shot setting with only two prompts: (i) reasoning extraction and (ii) answer extraction. Based on [16], the reasoning extraction step uses a fixed and canonical prompt: _Let's think step by step_, as shown in Figure 2(c).
In our experiments, we noticed that the reasoning extraction prompt is essential to boost the performance of zero-shot LLM rescoring. Under this zero-shot self-reasoning setup, the LLM output will first explain the task it is working on, then produce the actual task output. In the case of zero-shot rescoring, the LLM will first define an ASR-LM rescoring task and then provide LM scores for each N-best hypothesis.
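A minimal sketch of this two-prompt structure (reasoning extraction followed by answer extraction) is given below; `query_llm` is a placeholder for whichever LLM API is used, and the prompt wording is an illustrative assumption.

```python
# Illustrative two-stage zero-shot reasoning for rescoring: (i) elicit the
# model's reasoning with the canonical trigger phrase, then (ii) extract the
# final answer conditioned on that reasoning.
def zero_shot_reasoning_rescore(nbest, query_llm):
    hyps = "\n".join(f"{i}. {h}" for i, h in enumerate(nbest, start=1))
    reasoning = query_llm(
        f"Here are N-best ASR hypotheses:\n{hyps}\nLet's think step by step.")
    answer = query_llm(
        f"{reasoning}\nTherefore, the most likely transcription is:")
    return answer
```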
#### 3.1.3 Few-shot and one-shot in-context learning
A standard few-shot in-context learning process uses pairs of demonstrations [1] "questions and targeted tasks," retrieved from training data, to inform frozen LLMs for performing the target output, as illustrated in Figure 2(d). One-shot in-context learning takes place by using a single demonstration as an input prompt for the frozen LLMs. Note that demonstrations (from an unseen training set) are distinct from test examples, and that the unsupervised-trained LLM has been reported to have a memory bottleneck based on term frequencies [19], which avoids potential data leakage issues for its few-shot learning evaluation reported in previous work [2, 1, 19].
#### 3.1.4 N-best hypotheses to transcription fine-tuning
We introduce a hypotheses-to-transcription (H2T) mapping loss function: \(\mathcal{L}_{\text{H2T}}=\sum_{i=1}^{N}-\{\log P(y^{*}|x_{i},\mathbf{\Theta})+\lambda\cdot\text{MSE}(s_{i},P(y^{*}|x_{i},\mathbf{\Theta}))\}\), where \(P(y^{*}|x_{i},\mathbf{\Theta})\) represents the probability of the true transcription (\(y^{*}\)) given the _i_-th hypothesis (\(x_{i}\)) and the model parameters (\(\mathbf{\Theta}\)), and \(s_{i}\) is the posterior probability score of the _i_-th hypothesis from the first pass. To integrate acoustic information, a regularization term using mean squared error (MSE) is applied to penalize the model when there is a significant discrepancy between the predicted probabilities and the posterior probability scores, with a \(\lambda\) coefficient of \(0.01\).
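A minimal PyTorch-style sketch of this objective, following the textual description above (a likelihood term plus an MSE regularizer against the first-pass posterior scores \(s_{i}\), with \(\lambda=0.01\)); tensor shapes and helper names are assumptions, not the implementation used in the experiments.

```python
import torch
import torch.nn.functional as F

# Sketch of the H2T objective for one utterance with N hypotheses.
# log_probs_ref[i] = log P(y* | x_i, Theta); first_pass_scores[i] = s_i.
def h2t_loss(log_probs_ref: torch.Tensor,
             first_pass_scores: torch.Tensor,
             lam: float = 0.01) -> torch.Tensor:
    nll = -log_probs_ref.sum()                  # likelihood term
    reg = F.mse_loss(log_probs_ref.exp(),       # penalize discrepancy with the
                     first_pass_scores,         # acoustic posterior scores s_i
                     reduction="sum")
    return nll + lam * reg
```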
Furthermore, we also consider parameter-efficient fine-tuning methods as in [6], which only update a small subset \(\theta\subset\mathbf{\Theta}\) of total trainable parameters to avoid potential overfitting that would hurt the generalization of the LLM [20, 21].
### Task-activating Prompting (TAP) Framework
We now introduce a new in-context learning strategy that triggers the necessary sequential concepts for the ASR rescoring task, by utilizing multiple-round contextual sequences [22]. This technique is referred to as "task-activating prompting" (TAP). In this configuration, the LLM is given leading questions to clarify the task it needs to perform. Following this, the model is instructed to provide an example and, ultimately, it is presented with the top-N hypotheses from which to generate the actual output for the task. In our experiments, we noted that LLMs are capable of producing lists of the top N predictions, made up of utterances with analogous pronunciations. This demonstrates that LLMs assimilate acoustic (e.g., lattice-level) information during their pretraining phase. We illustrate the queries and responses used for task-activating prompting in Figure 3.
We observed that the responses from InstructGPT vary slightly as a function of the random seed. In our experiments, we utilize the API key from OpenAI, and the prompts are fixed, except for the final query that contains the sample and test N-best lists. Recent work on ICL [23] focuses on the selection strategy for in-domain demonstration samples, which clearly could affect inference results. However, we leave demonstration selection for the N-best task to future work, and in our _few-shot_ learning experiments manually select longer demonstration utterances, following [2].
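The sketch below (an illustration only, not the exact API calls used in our experiments) assembles the TAP conversation of Figure 3 as an ordered list of user turns; in practice each question is sent in turn and the model's response is appended to the history before the next question.

```python
# Illustrative construction of the multi-round TAP prompt; the bracketed
# placeholders follow Figure 3.
def build_tap_turns(domain, demo_nbest, demo_transcript, test_nbest):
    return [
        "Do you know speech recognition?",
        "Do you know language model for speech recognition?",
        "Could you give a possible example of language model rescoring "
        "with some hypotheses?",
        (f"Nice job, I will provide some examples as a demonstration from "
         f"{domain}. The 10-best hypothesis is: {demo_nbest}, and I would "
         f"expect your output is: {demo_transcript}. Following this example, "
         f"could you report the true transcription from the following "
         f"10-best hypotheses? {test_nbest}"),
    ]
```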
## 4 Experiments and Results
We conduct our experiments by evaluating the final word error rate (WER) of a second-pass rescoring ASR system. First, we employ pretrained ASR models to decode audio from a target domain and collect the top-10 hypotheses. Then, we compare different in-context learning-based rescoring pipelines, including \(\mathcal{P}_{1}\), the LLM-based error correction to enhance a standard ASR rescoring process, and \(\mathcal{P}_{2}\), a prompting scheme that enables the LLM to directly perform ASR rescoring, as shown in Figure 1.
### Pretrained ASR and Rescoring Model Training
We select a conformer-based RNN-transducer [24, 25] (760M parameters) pretrained with \(960\) hours of Librispeech [26], 10k hours of the GigaSpeech [27], 24k hours of English data in VoxPopuli [28], and wav2vec2-based Libri-Light [29, 30] pretraining as the first-pass model to generate the hypothesis list. We follow the conformer architecture presented in [25]. The pretrained model achieves a WER of \(2.45\)% (test-clean) and \(5.45\)% (test-other) on LibriSpeech without external LM.
For \(\mathcal{P}_{1}\), to obtain a standard neural LM for rescoring, we employ an ASR-LM training procedure similar to RescoreBERT [31], which involves a two-stage process. The first stage is adaptation using a first-stage masked language model (MLM) [32] loss, and the second stage is to use minimum word error rate (MWER) training.
**Q**: _Do you know speech recognition?_
**R**: _Yes, I am familiar with speech recognition. Speech recognition, also known as automatic speech recognition (ASR) or speech-to-text, is the process of converting spoken language into text. This technology involves using algorithms and machine learning models to analyze and transcribe the acoustic features of spoken words and phrases. Speech recognition has many applications, including voice-controlled assistants, automated phone systems, and transcription services._
**Q**: _Do you know language model for speech recognition?_
**R**: _Yes, I can provide you with some information about language models used in speech recognition. Language models are an essential component of Automatic Speech Recognition (ASR) systems. They help to predict the likelihood of a sequence of words, given the context. This helps the ASR system to better understand and transcribe spoken language into text. This separate model is typically more complex and powerful than the initial language model used for the transcription, and it is used to re-score the transcription based on the probability of the words occurring in the given context. The rescoring process involves taking the output of the initial language model, which is usually based on statistical methods such as Hidden Markov Models, and then applying a more advanced language model, such as a neural network-based language model, to generate a more accurate transcription. This is accomplished by re-ranking the possible transcriptions based on the probabilities assigned by the more advanced language model. Language model rescoring has been shown to significantly improve the accuracy of speech recognition systems, particularly in noisy or challenging environments where the initial language model may not perform well._
**Q**: _Could you give a possible example of language model rescoring with some hypotheses?_
**R**: _Sure, here is an example of language model rescoring for ASR with 5-best hypotheses:_
_1. recognize speech with artificial intelligence._
_2. recognized speech with artificial intelligence._
_3. recognize speech with artificial intelligent._
_4. recon eyes speech with artificial intelligence._
_5. recognize peach with artificial intelligence._
_After rescoring, the model assigns a higher probability to the first hypothesis, which is the correct transcription of the input. The other hypotheses receive lower probabilities, reflecting their decreased likelihood given the improved language model. In this case, the top-ranked hypothesis after rescoring would be chosen as the final output: recognize speech with artificial intelligence_
**Q**: _Nice job, I will provide some examples as a demonstration from [target domain]. The 10-best hypothesis is: [hypothesis list from training set], and I would expect your output is: [corresponding transcription]. Following this example, could you report the true transcription from the following 10-best hypotheses? [hypotheses list for inference]_
### Pretrained LLM Configurations
**GPT-2** (\(1.5\)B): From the Generative Pretrained Transformer (GPT) family of causal models we selected this one [1, 37] as our basic LLM for the in-context prompting setup. This version of GPT-2 (see Footnote 1) is 100x smaller than very large teacher models, such as BLOOM [38], making it much more suitable for real-world deployment. GPT-2 is trained primarily using Wikipedia [39] and Common Crawl [40].
Footnote 1: The pretrained model is publicly accessible under MIT License [https://github.com/openai/gpt-2](https://github.com/openai/gpt-2)
**OpenLLaMA** (13B): an open collection of transformer-based, decoder-only, causal language models ranging from 1B to 13B parameters [41]. It is trained exclusively on the RedPajama [42] dataset; we have confirmed that our Linguistic Data Consortium eval sets are not included.
**BLOOM** (\(176\)B): the first open-source LLM trained on the public supercomputer provided by the French government [38], BLOOM is available for reproducible studies of LLMs over \(100\)B. The model is pretrained using a large public collection of \(498\) HuggingFace datasets [43], comprising \(1.61\) TB of text spanning \(46\) natural languages and \(13\) programming languages (our evaluation datasets are not included.)
**InstructGPT** (\(175\)B): an LLM created by training a GPT-3 [1] model through reinforcement learning from human feedback (RLHF) [44]. InstructGPT demonstrates improved zero-shot learning performance, benefiting from human knowledge transferred through RLHF, a process similar to student-teacher learning. InstructGPT is trained using human feedback without using open-domain data for evaluation.2
Footnote 2: InstructGPT is an earlier version of ChatGPT. We did **not** use ChatGPT in our experiments due to its frequent revisions and unclear technical documentation.
### Target-Domain Datasets
We use the pretrained ASR model introduced in Section 4.1 to decode two public datasets, both in \(\mathcal{P}_{1}\) (post-processing error correction) and \(\mathcal{P}_{2}\) (pretrained LLM-based rescoring).
**Airline Travel Information System (ATIS)**[45] contains \(4978\) training and \(893\) utterances. ATIS comprises spoken queries for air travel information, such as flight times and availability.
**Wall Street Journal (WSJ)**[46] consists of transcribed audio recordings of read news articles from the Wall Street Journal, covering a wide range of topics and featuring a diverse set of speakers. We adapt the pretrained conformer model on the development set of _train-si284_ and test on _93dev_.

Figure 3: Queries (Q) and responses (R) for N-best evaluation and correction by task-activating prompting (TAP) of LLMs
### \(\mathcal{P}\)ipeline 1 Results
As shown in Table 1, we first use a pretrained LLM for error correction, using the setup in [9] to improve hypothesis quality as measured by oracle (minimum achievable) N-best error rates.
Figure 4 shows how error correction by LLMs complements existing rescoring with adapted LMs, such as RescoreBERT [31]. We observe that the existing rescoring pipeline [31] reduces WER from \(11.3\%\) to \(8.7\%\) compared to its fine-tuning-only baseline. Furthermore, \(\mathcal{P}\)ipeline 1 employed the frozen pretrained language models to achieve an additional performance boost.
### \(\mathcal{P}\)ipeline 2 Results
**Case 1: Zero-shot learning.** Table 3 shows results for rescoring with in-context learning using different LLMs, as well as various baselines. Note that for in-context rescoring, we extract LM scores from the model output responses, which differs from the standard approach of using the softmax outputs from the rescoring LM, for which we also report results. The best \(\mathcal{P}_{2}\) rescoring setup is the one using InstructGPT (Table 3, last row), achieving \(19.7\)% relative WER reduction compared to rescoring with a fine-tuned GPT-2. Note that the frozen GPT-2 failed to give improvements over a 4-gram baseline, showing that a certain model size is required for generative error correction. For LLMs with over \(100\) billion parameters, the use of prompting information showed better results compared to using the softmax scores directly.
Next, we tested some popular prompting variants for context learning to possibly improve the performance of \(\mathcal{P}_{2}\). As shown in Table 2, \(\mathcal{P}_{2}\) and prompting with _"step-by-step"_ (known as zero-shot reasoning in [16]) achieved the best results for both LLMs (fourth row). It outperforms the one-shot learning variant (prompting with one input-output example; fifth row) by \(1.8\%\) relative WER difference. It is worth noting that the standard rescoring training \(\mathcal{P}_{1}\) (first row) still outperforms zero-shot \(\mathcal{P}_{2}\) by \(14.6\)% relative.
**Case 2: Few-shot learning.** We gain some insight into the effects of in-context learning by considering few-shot learning in the form of conversational prompting. We feed InstructGPT examples drawn from the training portions of the two datasets, and report the results on the unseen test sets. As shown in Figure 5, we see that frozen InstructGPT improved its rescoring performance as the number of training samples is increased from \(1\) to \(12\). It is better to let the model history accumulate (green plot) than to reset it after each utterance (red plot), thereby compounding the effect of demonstrations.
**Case 3: In-domain fine-tuning.** We also obtained in-domain fine-tuning results, where we use the training portions of the speech datasets to fine-tune the LLMs and then evaluate performance on the test sets; note that InstructGPT, being an API-only model, could not be fine-tuned. For prompting we use TAP (Section 3.2); however, we observed that after fine-tuning the exact method of prompting makes very little difference. As shown in Table 4, fine-tuning with low-rank adapters (LoRA) outperforms full fine-tuning in the generative error correction case, as do residual adapters.
\begin{table}
\begin{tabular}{l c c|c|c} \hline \hline \(\mathcal{P}_{1}\): correction setup & WSJ & ATIS & WER\({}_{avg}\) & M-Size \\ \hline (a) \(N\)-best & 9.78 & 6.43 & 8.11 & - \\ \hline (a) + corrected by GPT-2 & 9.91 & 6.11 & 8.01 & 1.5B \\ (a) + corrected by OpenLLaMA & 9.95 & 5.73 & 7.43 & 13B \\ (a) + corrected by BLOOM & 9.21 & 5.64 & 7.42 & 176B \\ (a) + corrected by InstructGPT & **8.41** & **5.43** & **6.92** & 175B \\ \hline \hline \end{tabular}
\end{table}
Table 1: Oracle WERs for original and error-corrected N-best output, using \(\mathcal{P}_{1}\) processing as shown in Figure 1(a). The oracle error rates show the improvement in hypothesis quality as a result of post-processing using different sizes of LLMs.
Figure 4: \(\mathcal{P}_{1}\) ASR rescoring (RS) training using hypotheses corrected by LLM. The dashed red line marks the \(N\)-best WER. The WER gradually decreases in the three stages of rescoring using our \(\mathcal{P}_{1}\) processing: Stage 0, \(N\)-best hypothesis with LLM correction (\(N\)C); Stage 1, fine-tuned RescoreBERT [31] using the masked language modeling (MLM) loss; and Stage 2, MWER training.
Figure 5: WER results on ATIS and WSJ with few-shot learning based on InstructGPT, for increasing numbers of demonstration samples. “One-by-one prompting” resets the model history after each utterance, “in-context prompting” lets the history (and thus the examples provided) accumulate.
One reason would be that adapters avoid modifying the parameters of a pretrained model (by inserting a neural module with a small number of additional trainable parameters that approximate the full parameter updates), allowing for efficient learning of the task without affecting the pretrained parameters of the LLM. _LoRA_-based generative error correction introduces trainable low-rank decomposition matrices into the pretrained LLM layers, enabling the model to adapt to new data while keeping the original LLMs fixed to retain the pretrained knowledge. Specifically, LoRA performs a reparameterization of each model layer expressed as a matrix multiplication by inserting low-rank decomposition matrices. As a result, the representations (generated by the LLM) are not distorted due to task-specific tuning, while the adapter module acquires the ability to perform the error correction task.
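For illustration, a minimal sketch of this reparameterization on a single linear layer (rank and scaling values are assumptions; this is not the implementation used in our experiments):

```python
import torch
import torch.nn as nn

# A frozen pretrained linear layer augmented with a trainable low-rank update:
# y = W x + (alpha / r) * B A x, with W fixed and only A, B trained.
class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # keep pretrained weights fixed
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```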
Compared to previous state-of-the-art results with the Universal Speech Model (USM) [47] with text-injection training over 2B parameters, our best fine-tuning results improve upon their WSJ results (WER of 3.2%). This is noteworthy considering that our generative error correction method is based on a smaller underlying conformer-RNN-T ASR system.
Another parameter-efficient form of fine-tuning is prefix tuning [50], where a continuous prompt prefix is inserted into the input and tuned to optimize task performance of the LLM. However, this method gave worse results than the full or adapter-based fine-tuning methods for the larger LLMs.
## 5 Conclusions
We have explored how in-context learning can be applied to pretrained large language models for improving first-pass ASR N-best output _without fine-tuning_. For this task setting, we introduce two post-processing pipelines utilizing in-context learning. The first one uses a pretrained LLM for error correction prior to standard rescoring with a fine-tuned LM. The second pipeline uses in-context learning by prompting, to instruct the frozen pretrained LLM to perform the rescoring task by itself. The latter method shows substantial gains over the first-pass ASR output and can be further enhanced with chain-of-thought and example prompting, as well as a new prompting scheme we call task-activating prompting. The best methods show 31% (on ATIS) to 38% (on WSJ) WER reduction over first-pass ASR, using a frozen InstructGPT, and better than with a fine-tuned GPT-2 LM. Substantial additional gains are achieved by fine-tuning the LLM for the ASR output-correction task. Post-processing with OpenLLaMA and LoRA fine-tuning achieves 86% and 80% WER reduction on ATIS and WSJ, respectively. These results are **below** the N-best oracle error rate, showing the LLM's ability to utilize prelearned knowledge to correct ASR output errors. Possible future work can look at how to integrate extra acoustic representations into pretrained LLMs for further enhancing generative ASR error correction.
\begin{table}
\begin{tabular}{l c c c|c c c} \hline \hline & \multicolumn{3}{c|}{WSJ} & \multicolumn{3}{c}{ATIS} \\ Method & GPT-2 & OpenLLaMA & BLOOM & GPT-2 & OpenLLaMA & BLOOM \\ \hline FT + ranking [11] & 9.93 & 8.09 & 8.42 & 6.34 & 3.71 & 3.75 \\ \hline Full fine-tune & 9.94 & 7.71 & 6.91 & 5.98 & 2.23 & 2.49 \\ Res. adapter [48] & **7.24** & 5.94 & 4.57 & **4.45** & 2.48 & 2.12 \\ LoRA [49] & 7.52 & **2.11** & **2.81** & 4.57 & **1.69** & **1.97** \\ Prefix tuning [50] & 9.32 & 6.99 & 7.43 & 5.32 & 2.63 & 2.74 \\ \hline \hline \end{tabular}
\end{table}
Table 4: WERs on ATIS and WSJ, using fine-tuning (FT) and parameter-efficient adaptation to enhance the \(\mathcal{P}_{2\text{-}\text{TAP}}\) pipeline
\begin{table}
\begin{tabular}{l c c|c c} \hline \hline & \multicolumn{2}{c|}{WSJ} & \multicolumn{2}{c}{ATIS} \\ \hline In-context learning variant & InstructGPT & BLOOM & InstructGPT & BLOOM \\ \hline \(\mathcal{P}_{1}\): LLM-corrected \(N\)-best w/ RescoreBERT [31] & 10.13 & 10.46 & 7.13 & 8.46 \\ \hline \(\mathcal{P}_{2}\): (c) Zero-shot scoring & 10.43 & 11.23 & 7.95 & 8.45 \\ \(\mathcal{P}_{2}\): (c) + zero-shot reasoning [16] & 10.20 & 11.88 & 7.77 & 8.53 \\ \(\mathcal{P}_{2}\): (c) + domain-hint prompting [2] & 10.98 & 11.45 & 7.59 & 8.49 \\ \hline \(\mathcal{P}_{2}\): (d) Scoring with one example-pair & 9.42 & 9.45 & 6.34 & 7.30 \\ \(\mathcal{P}_{2}\): (d) + zero-shot reasoning [16] & 9.87 & 11.46 & 7.25 & 8.64 \\ \(\mathcal{P}_{2}\): (d) + domain-hint prompting [2] & 9.70 & 10.99 & 6.19 & 7.12 \\ \(\mathcal{P}_{2}\): (d) + task-activating prompting (TAP) & **8.84** & **8.99** & **5.79** & **6.72** \\ \hline \hline \end{tabular}
\end{table}
Table 2: WERs on ATIS and WSJ using prompting variants to enhance the \(\mathcal{P}_{2}\) in-context learning pipeline. We report the results of InstructGPT and BLOOM as LLMs over 100B; GPT-2 and OpenLLaMA do not perform consistently in this setting.
\begin{table}
\begin{tabular}{l c c} \hline \hline \(\mathcal{P}_{2}\): zero-shot rescoring setup & WSJ & ATIS \\ \hline (a) Oracle & 9.78 & 6.43 \\ (b) First pass & 11.87 & 8.82 \\ \hline (b) + \(4\)-gram LM & 11.21 & 8.57 \\ \hline (b) + frozen GPT-2 & 29.56 & 27.51 \\ (b) + frozen GPT-2 _w/ TAP_ & 27.37 & 27.59 \\ \hline (b) + frozen OpenLLaMA & 13.32 & 9.27 \\ (b) + frozen OpenLLaMA _w/ TAP_ & 11.53 & 8.61 \\ \hline (b) + frozen BLOOM & 12.59 & 9.21 \\ (b) + frozen BLOOM _w/ TAP_ & **10.82** & **8.42** \\ \hline (b) + frozen InstructGPT & 9.97 & 7.15 \\ (b) + frozen InstructGPT _w/ TAP_ & **8.72** & **6.39** \\ \hline \hline \end{tabular}
\end{table}
Table 3: WERs with \(\mathcal{P}_{2}\) pipeline, using LLM in-context learning (ICL) for rescoring by zero-shot prompts illustrated in Figure 2(a). Where indicated we use task-activating prompting (TAP) to condition the LLM. | We explore the ability of large language models (LLMs) to act as post-processors for speech recognition, performing rescoring and error correction. Our first focus is on instruction prompting to let LLMs perform these tasks without fine-tuning; for this we evaluate zero- and few-shot in-context learning as well as a novel task-activating prompting method that combines causal instructions and demonstrations to increase the context window. Next, rescoring by in-context learning with frozen LLMs achieves results competitive with rescoring by domain-tuned LMs, using a pretrained first-pass recognition system and rescoring the outputs on two out-of-domain tasks (ATIS and WSJ). Furthermore, by combining prompting techniques with fine-tuning, we achieve error rates below the N
2310.00441 | Exploiting Human Color Discrimination for Memory- and Energy-Efficient
Image Encoding in Virtual Reality | Virtual Reality (VR) has the potential of becoming the next ubiquitous
computing platform. Continued progress in the burgeoning field of VR depends
critically on an efficient computing substrate. In particular, DRAM access
energy is known to contribute to a significant portion of system energy.
Today's framebuffer compression system alleviates the DRAM traffic by using a
numerically lossless compression algorithm. Being numerically lossless,
however, is unnecessary to preserve perceptual quality for humans. This paper
proposes a perceptually lossless, but numerically lossy, system to compress
DRAM traffic. Our idea builds on top of long-established psychophysical studies
that show that humans cannot discriminate colors that are close to each other.
The discrimination ability becomes even weaker (i.e., more colors are
perceptually indistinguishable) in our peripheral vision. Leveraging the color
discrimination (in)ability, we propose an algorithm that adjusts pixel colors
to minimize the bit encoding cost without introducing visible artifacts. The
algorithm is coupled with lightweight architectural support that, in real-time,
reduces the DRAM traffic by 66.9\% and outperforms existing framebuffer
compression mechanisms by up to 20.4\%. Psychophysical studies on human
participants show that our system introduces little to no perceptual fidelity
degradation. | Nisarg Ujjainkar, Ethan Shahan, Kenneth Chen, Budmonde Duinkharjav, Qi Sun, Yuhao Zhu | 2023-09-30T17:28:59 | http://arxiv.org/abs/2310.00441v1 | Exploiting Human Color Discrimination for Memory- and Energy-Efficient Image Encoding in Virtual Reality
###### Abstract.
Virtual Reality (VR) has the potential of becoming the next ubiquitous computing platform. Continued progress in the burgeoning field of VR depends critically on an efficient computing substrate. In particular, DRAM access energy is known to contribute to a significant portion of system energy. Today's framebuffer compression system alleviates the DRAM traffic by using a numerically lossless compression algorithm. Being numerically lossless, however, is unnecessary to preserve perceptual quality for humans. This paper proposes a perceptually lossless, but numerically lossy, system to compress DRAM traffic. Our idea builds on top of long-established psychophysical studies that show that humans cannot discriminate colors that are close to each other. The discrimination ability becomes even weaker (i.e., more colors are perceptually indistinguishable) in our peripheral vision. Leveraging the color discrimination (in)ability, we propose an algorithm that adjusts pixel colors to minimize the bit encoding cost without introducing visible artifacts. The algorithm is coupled with lightweight architectural support that, in real-time, reduces the DRAM traffic by 66.9% and outperforms existing framebuffer compression mechanisms by up to 20.4%. Psychophysical studies on human participants show that our system introduces little to no perceptual fidelity degradation.
+
Footnote †: _ASPLOS ’24, April 27-May 1, 2024, La Jolla, CA, USA_
© 2024 Copyright held by the owner/author(s).
ACM ISBN 979-8-4007-0372-0/24/04.
[https://doi.org/10.1145/3617232.3624860](https://doi.org/10.1145/3617232.3624860)
## 1. Introduction
Virtual Reality (VR) has the potential of becoming the next ubiquitous computing platform, after PCs and smartphones, revolutionizing a wide variety of domains such as healthcare (Cheng et al., 2017), education (Cheng et al., 2017), remote communication (Zhu et al., 2018; Zhu et al., 2018), professional training (Zhu et al., 2018), and industrial design (Zhu et al., 2018).
Continued progress in the burgeoning field of VR depends critically on an efficient computing substrate, driven by the ever-growing requirement of immersive user experience and the miniaturization of device form factors. DRAM communication energy is known to contribute significantly to the system energy consumption. Recent studies show that DRAM energy alone can consume upward of 30% of the total system energy consumption during VR video rendering (Zhu et al., 2018; Zhu et al., 2018). The DRAM bottleneck will only become worse in the future with users' demands for higher resolution and frame rate.
An effective approach to reduce DRAM traffic is framebuffer compression, which is universally implemented in modern mobile SoCs for compressing any traffic in and out of the DRAM. A classic example is the Arm Frame Buffer Compressions (AFBC) technology, which is now in almost all of Arm's GPU, Video Codec, and Display Controller IPs (Cheng et al., 2017).
**Idea.** Today's framebuffer compression algorithm is numerically lossless. Being numerically lossless is, however, unnecessary to preserve perceptual fidelity: more compression opportunities arise when we turn our attention to _perceptual lossless_. Long-established psychophysical studies show that humans cannot discriminate colors that are close to each other (Zhu et al., 2018; Zhu et al., 2018). Informally, this means that many colors, while differing in RGB values, are perceptually indistinguishable and thus can be encoded together -- a previously under-exploited opportunity for real-time image encoding.
Critically, the discrimination ability becomes even weaker (i.e., more colors are indistinguishable) in our _peripheral_ vision as objects move away from fixation (Zhu et al., 2018; Zhu et al., 2018; Zhu et al., 2018). The
eccentricity-dependent weakening of color discrimination provides further opportunities for DRAM traffic compression: VR displays, to provide immersive experiences, have a wide Field-of-View (FoV) of about \(100^{\circ}\); above 90% of a frame's pixels are in the peripheral vision (outside \(20^{\circ}\)) (Kang et al., 2019; Wang et al., 2020).
**Design.** Leveraging the unique color discrimination (in)ability of human visual system, we propose a new image compression algorithm for immersive VR systems. We precisely formulate the color perception-aware encoding as a constraint optimization problem. The formulation is non-convex and requires iterative solvers that are not amenable to real-time execution. Leveraging empirical observations of human color discrimination abilities, we introduce a set of principled relaxations, which transform the compression problem into a convex optimization with an analytical solution.
The analytical solution, while avoiding iterative solvers, is still compute intensive and slow to execute in real-time. Implemented as a GPU shader executing on the Adreno 650 GPU in Oculus Quest 2, a widely used mobile VR headset, the compression algorithm runs in a mere 2 FPS. We propose lightweight hardware extensions for our encoding and decoding algorithms. The new hardware exploits the inherent task-level and pipeline-level parallelisms in the algorithms and can be readily combined with existing Base-Delta (BD) encoding without changing the decoding hardware at all.
**Results.** We implement our architectural extensions in RTL and synthesize the design using a TSMC 7 nm process node. The compression algorithm reduces the memory traffic by 66.9% compared to uncompressed images and by up to 20.4% compared to the state-of-the-art real-time frame-buffer compression (Wang et al., 2020). We conduct an IRB-approved human subject study with 11 participants. Results suggest that our compression algorithm introduces few visible artifacts. In summary, this paper makes the following contributions:
* We propose an image encoding scheme to reduce DRAM traffic in mobile VR systems. The scheme leverages the eccentricity-dependent color discrimination (in)ability of human visual systems.
* We show that the new encoding scheme can be formulated as a convex optimization problem with an analytical solution.
* We propose lightweight and modular hardware support to enable real-time encoding.
* ASIC synthesis and human subject studies show that the new encoding scheme reduces the DRAM traffic by 66.9% with little to no subjective perceptual quality degradation.
The rest of the paper is organized as follows. Sec. 2 introduces the background. Sec. 3 describes our key compression algorithm. Sec. 4 introduces the co-designed hardware architecture. Sec. 5 discusses the experimental methodology, followed by the evaluation results in Sec. 6. We relate our work to prior art in Sec. 7 and conclude in Sec. 8.
## 2. Background and Motivation
We first introduce the background of human color perception and its eccentricity dependence, which form the psychophysical basis for our compression algorithm (Sec. 2.1). We then describe today's real-time frame compression algorithm, which forms an important baseline for our algorithm (Sec. 2.2).
### Eccentricity-Dependent Color Perception
**Colors and Color Spaces**. In a typical rendering pipeline, a color is usually described in the linear RGB space with three channels; each channel is a floating point number between 0 and 1. For output encoding, each channel in the linear RGB color space is transformed to the common sRGB color space, where each channel is an 8-bit integer between 0 and 255. This transformation is non-linear, called gamma encoding, and is described by the following function \(f_{s2r}\), where \(x\in[0,1]\) represents a linear RGB channel value (Kang et al., 2019; Wang et al., 2020):
\[f_{s2r}(x):=\begin{cases}\lfloor 12.92x\rfloor&x\leq 0.0031308\\ \lfloor 1.055x^{1/2.4}-0.055\rfloor&x>0.0031308\end{cases} \tag{1}\]
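For reference, a small sketch of this gamma encoding (a restatement of the standard sRGB transfer function; the scaling to the 0-255 integer range is written out explicitly, since the text states each sRGB channel is an 8-bit integer):

```python
import math

# Linear-RGB channel in [0, 1] -> 8-bit sRGB channel in [0, 255].
def linear_to_srgb(x: float) -> int:
    v = 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055
    return math.floor(255 * v)
```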
Psychophysical studies on color discrimination commonly operate in the DKL color space (Kang et al., 2019; Wang et al., 2020), mainly because the DKL space models the opponent process in the human visual system. The DKL space is a linear transformation away from the linear RGB color space:
\[[R,G,B]^{T}=\text{M}_{\text{RGB2DKL}}[K_{1},K_{2},K_{3}]^{T} \tag{2}\]
where \([R,G,B]\) is the color in the linear RGB space, \([K_{1},K_{2},K_{3}]\) is the color in the DKL space, and \(\text{M}_{\text{RGB2DKL}}\) is a \(3\times 3\) constant matrix (with the same coefficients, \([[0.14,0.17,0.00],[-0.21,-0.71,-0.07],[0.21,0.72,0.07]]\), as in Duinkharjav et al. (2020)).
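A small sketch of this linear map using the coefficients quoted above; following Equ. 2 as written, multiplying \(\text{M}_{\text{RGB2DKL}}\) by a DKL color yields the linear-RGB color, and the inverse map goes the other way (note the matrix is close to singular, so the inverse is numerically delicate):

```python
import numpy as np

# Linear transform between the DKL and linear RGB color spaces (Equ. 2),
# with the coefficients quoted in the text.
M_RGB2DKL = np.array([[ 0.14,  0.17,  0.00],
                      [-0.21, -0.71, -0.07],
                      [ 0.21,  0.72,  0.07]])

def dkl_to_rgb(dkl):
    return M_RGB2DKL @ np.asarray(dkl, dtype=float)

def rgb_to_dkl(rgb):
    return np.linalg.solve(M_RGB2DKL, np.asarray(rgb, dtype=float))
```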
**Color Discrimination.** It is well-established that humans can not discriminate between colors that are close to each other (Wang et al., 2020; Wang et al., 2020). For instance, Fig. 1 shows four colors that have different sRGB values but appear to be the same.
More formally, given a reference color \(\kappa\), there exists a _set_ of colors \(\mathcal{E}_{\kappa}\), in which all the colors are perceptually indistinguishable from \(\kappa\). In a linear color space such as DKL and RGB, the set of equi-appearance colors in \(\mathcal{E}_{\kappa}\) form an _ellipsoid_, whose center is \(\kappa\)(Kang et al., 2019). In the literature, such an ellipsoid is called a _discrimination ellipsoid_(Kang et al., 2019).
**Eccentricity Dependence.** Critically, human color discrimination ability is weaker in the peripheral vision (Kang et al., 2019; Wang et al., 2020).
Figure 1. The human visual system cannot discriminate colors that are close to each other. These four colors differ in tristimulus values, but appear to be the same color.
That is, for a color \(\kappa\), its discrimination ellipsoid \(\mathcal{E}_{\kappa}\) is larger, i.e., includes more indistinguishable colors, as \(\kappa\) moves away from one's fixation. Fig. 2 shows two figures that plot the discrimination ellipsoids under a \(5^{\circ}\) and a \(25^{\circ}\) eccentricity, respectively, in the linear RGB color space. Eccentricity is the angle from the center of the retina, a.k.a., current fixation or "fovea". The ellipsoids in the \(25^{\circ}\) plot are larger than those in the \(5^{\circ}\) plot, suggesting that the color discrimination ability is weaker in peripheral vision.
Color discrimination becomes weaker in the visual periphery for three reasons. First, the receptive field (RF) sizes of Retinal Ganglion Cells (RGCs) increase with eccentricity, a result of larger dendritic fields (Gall et al., 2017; Ganglion et al., 2018) and sparser RGC density in the periphery (Gall et al., 2017). A large RF means that an RGC integrates signals from a larger spatial area, leading to more blurring in the (spatial) frequency domain. Second, cone cells (which are photoreceptors responsible for vision under normal daylight) become larger in size as eccentricity increases (Gall et al., 2017), also contributing to blurring in spatial frequency. Finally, the distribution of cone cells on our retina is extremely non-uniform: over \(95\%\) of the cone cells are located in the central region of the retina (i.e., fovea) with an eccentricity of below \(5^{\circ}\) (Gall et al., 2017; Gall et al., 2017). The density of the cone cells decreases drastically in the visual periphery, which is, thus, significantly under-sampled spatially.
The full color discrimination function \(\Phi\), expressed below, is thus parameterized by both the reference color \(\kappa\) and the eccentricity \(\mathbf{e}\):
\[\Phi:(\kappa,\mathbf{e})\mapsto(a,b,c) \tag{3}\]
where \((a,b,c)\) represents the semi-axes lengths of the discrimination ellipsoid belonging to color \(\kappa\) at an eccentricity \(\mathbf{e}\) in the DKL color space (Gall et al., 2017), a common color space for color perception experiments. Given \((a,b,c)\), \(\mathcal{E}_{\kappa}\), the discrimination ellipsoid of color \(\kappa\) in the DKL space, is given by:
\[\frac{(x-\kappa_{1})^{2}}{a^{2}}+\frac{(y-\kappa_{2})^{2}}{b^{2}}+\frac{(z- \kappa_{3})^{2}}{c^{2}}=1 \tag{4}\]
where \((\kappa_{1},\kappa_{2},\kappa_{3})\) represent the three channels of the color \(\kappa\).
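As a small sketch (for illustration only), the ellipsoid test of Equ. 4 can be used to check whether an adjusted color is still perceptually indistinguishable from the original, with all quantities expressed in the DKL space:

```python
# Check whether `color` lies inside the discrimination ellipsoid of `center`,
# whose semi-axes (a, b, c) come from the discrimination function Phi.
def inside_discrimination_ellipsoid(color, center, semi_axes) -> bool:
    (x, y, z), (k1, k2, k3), (a, b, c) = color, center, semi_axes
    return ((x - k1) / a) ** 2 + ((y - k2) / b) ** 2 + ((z - k3) / c) ** 2 <= 1.0
```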
The function \(\Phi\) can be implemented using a Radial Basis Function (RBF) network (Kang et al., 2017), which is extremely efficient to implement on GPUs in real time. In our measurement on the Oculus Quest 2 VR headset using Oculus' OVR Metrics Tool (Oculus, 2018), evaluating the RBF network runs at 72 FPS, matching the display refresh rate while consuming sub-1 mW power.
AR and VR headsets, in providing an immersive experience, usually have a wide FoV that is above \(100^{\circ}\). Therefore, the vast majority of the pixel colors will fall in the peripheral vision. The eccentricity-dependent color discrimination (in)abilities of the human visual system create opportunities for better image compression, which this paper exploits.
### Real-Time Frame Compression
**DRAM Traffic.** A variety of data communication traffics occur on a VR system, as illustrated in Fig. 3, such as the traffic through DRAM, the display interface, and the wireless communications with a remote rendering server. This paper focuses on reducing the DRAM traffic, which occurs when the different Intellectual Property (IP) blocks in the SoC communicate with each other during rendering.
Each frame, the GPU writes the frame data to the frame buffer in the GPU, which are then read by the display controller. It is these DRAM traffics (i.e., GPU \(\leftrightarrow\) frame buffer \(\leftrightarrow\) DRAM controller) that this paper focuses on reducing. When rendering a VR (\(360^{\circ}\)) video, additional DRAM traffics occur between the network interface controller, the video codec, and the GPU (Gall et al., 2017). While not explicitly targeted in this paper, these traffics can also potentially be reduced by our compression algorithm, especially in scenarios where remotely rendered frames are transmitted one by one (rather than as a video) (Shanhan et al., 2017; Wang et al., 2018).

Figure 2. Color discrimination is eccentricity dependent. The discriminative ability is weaker as the eccentricity increases. As a result, the sizes of the discrimination ellipsoids increase with the eccentricity. The two plots on the right show the discrimination ellipsoids under a \(5^{\circ}\) and a \(25^{\circ}\) eccentricity, respectively, in the linear RGB color space (i.e., sRGB normalized to \([0,1]\) without gamma (Kang et al., 2017; Ganglion et al., 2018)). The discrimination ellipsoids in each plot are shown for 27 colors uniformly sampled in the linear RGB color space between [0.2, 0.2, 0.2] and [0.8, 0.8, 0.8].
Reducing DRAM traffic is critical. It is well-established that data transfer and memory access energy far outweighs the energy consumption of computation. For instance, compared to a Multiply-Accumulate (MAC) operation on 1-Byte fixed-point data, transferring one Byte of information through DRAM consumes 800\(\times\) higher energy (Shanhan et al., 2017; Wang et al., 2018). Reducing DRAM traffic in a visual computing system has been a main research focus in recent years (Kang et al., 2018; Wang et al., 2018; Wang et al., 2018).
**Framebuffer Compression Algorithms.** An effective and commonly used approach to reduce DRAM traffic in a rendering system is framebuffer compression, which compresses and uncompresses every frame in and out of the DRAM. To ensure a low _per-frame_ latency, compression in VR must be done on a per-frame basis, precluding video compression methods such as H.265/VP5, which necessarily require buffering a sequence of frames before compression (Wang et al., 2018; Wang et al., 2018). Offline image compression methods such as JPEG and PNG are rarely used in framebuffer compression as they are too compute-intensive. For instance, JPEG requires chroma subsampling, transforming images to a frequency space followed by quantization and Huffman encoding (Wang et al., 2018).
Today's framebuffer compression methods universally use a much faster base+delta (BD) strategy. Fig. 4 uses a simple example to illustrate the basic idea behind BD, which compresses each color channel and each pixel tile individually. The tile size in Fig. 4 is 4\(\times\)4. In each tile, BD chooses a base pixel and then calculates the \(\Delta\)s/offsets between all other pixels and the base pixel. In the example of Fig. 4, the base pixel is the first pixel. The \(\Delta\)s will necessarily have smaller magnitudes compared to the original pixel values and, thus, require fewer bits to encode.
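A minimal sketch of BD encoding for one channel of one tile (illustration only, with made-up pixel values; metadata such as the per-tile delta width is ignored, and the base is taken as the tile minimum so that all deltas are non-negative and the delta field is sized so every delta in the range fits):

```python
import math

# Base + Delta encoding of one tile of 8-bit sRGB values: one 8-bit base plus
# a fixed-width delta for each remaining pixel, sized by the tile's range.
def bd_encode_tile(tile):
    base = min(tile)
    deltas = [p - base for p in tile]
    delta_bits = max(1, math.ceil(math.log2(max(tile) - base + 1)))
    total_bits = 8 + (len(tile) - 1) * delta_bits
    return base, deltas, total_bits

tile = [95, 98, 97, 96, 99, 95, 100, 97, 96, 98, 95, 99, 97, 96, 98, 100]
_, _, bits = bd_encode_tile(tile)
print(bits)   # 8 + 15*3 = 53 bits, versus 16*8 = 128 bits uncompressed
```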
The BD compression algorithm is lightweight: it works completely in the image space, as opposed to the frequency domain which requires an additional, compute-intensive transformation (e.g. Fast Fourier Transform or Discrete Cosine Transformation); it requires only fixed-point addition arithmetics; it is also embarrassingly parallel. Therefore, the basic BD strategy is universally implemented in today's mobile SoCs for compressing any traffic in and out of the DRAM. A classic example is the Arm Frame Buffer Compressions (AFBC) technology, which is now in almost all of Arm's GPU, Video Codec, and Display Controller IPs (Bradbury et al., 2018).
## 3. Color Perception-Aware Compression
This section introduces a color perception-aware image encoding and decoding algorithm. We start by describing the high-level ideas (Sec. 3.1), followed by a precise problem formulation in the form of constraint optimization (Sec. 3.2). We then show how this optimization problem has an analytical solution when relaxed to a convex problem (Sec. 3.3). We then describe the full compression algorithm (Sec. 3.4).
### Key Ideas
The basic BD algorithm is numerically lossless. Our observation is that numerically lossless compression is unnecessary to preserve perceptual equivalence -- because of the inherent the color discrimination (in)ability of human visual system.
**Intuition.** The basic BD algorithm encodes all the \(\Delta\)s in a tile (off of a base pixel) rather than the original pixel values. Thus, to improve the compression ratio over BD we must reduce the magnitude of the \(\Delta\)s, which, intuitively, requires bringing pixels _more similar_ to each other.
Under a numerically lossless constraint, however, the \(\Delta\)s between pixels are fixed. Our idea is to relax the constraint from numerical lossless to _perceptually lossless_. In this way, we could adjust pixel color values, as long as each pixel color does not go beyond its discrimination ellipsoid, to minimize the total number of bits required to encode the \(\Delta\)s. This encoding is numerically lossy as we intentionally change the color values, but will preserve the perceptual quality.
**An Example.** More concretely, consider the example in Fig. 5, which shows 16 pixels in a tile on an axis. The number of bits required to encode the entire tile is (ignoring any metadata for now):
\[B=B_{0}+N\times B_{D} \tag{5}\] \[B_{0}=8,N=15,B_{D}=\lfloor log_{2}(Max-Min+1)\rfloor \tag{6}\]
Figure 4. Base + Delta (BD) compression, which works in the sRGB color space. For each pixel tile (4\(\times\)4 here), we find a base pixel (95 here), and calculate the \(\Delta\) of all other pixels from the base pixel. The \(\Delta\) are smaller in magnitude and thus require fewer bits to encode. The same compression strategy is applied to all three color channels.
Figure 3. Different types of data communication traffic in a VR system. This paper focuses on reducing DRAM traffic.
where \(B_{0}\) being 8 denotes that we need 8 bits to encode a base pixel (assuming the common 8-bit per-channel encoding), and \(N\) being 15 denotes that there are 15 other pixels. \(B_{D}\) denotes the number of bits required to encode the \(\Delta\) of each of the 15 non-base pixels.
The minimum value of \(B_{D}\) occurs when the base pixel is chosen to be within \([Min,\,Max]\), in which case \(B_{D}=\lfloor log_{2}(Max-Min+1)\rfloor\). This is because the number of bits to encode each \(\Delta\) must be the same (see Footnote 1), so we must accommodate the _largest possible_ \(\Delta\), which is the difference between the maximum and minimum pixels in the tile. Therefore, to improve the compression ratio we must reduce \((Max-Min)\).
Footnote 1: It is possible, but uncommon, to vary the number of bits to encode the \(\Delta\)s in a tile with more hardware overhead. Following prior work (Sutton et al., 2017), this paper assumes that one single bit-length is used to encode all \(\Delta\)s in a tile. We consider variable bit-length an orthogonal idea to this paper.
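As a hypothetical worked example of Equ. 5 and Equ. 6: if the largest and smallest pixel values in a tile were 110 and 95, then \(B_{D}=\lfloor log_{2}(110-95+1)\rfloor=4\) and \(B=8+15\times 4=68\) bits, compared to \(16\times 8=128\) bits for the uncompressed tile.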
The bottom example in Fig. 5 illustrates what would happen when we relax the compression constraint to be perceptually lossless. The adjusted pixel values deviate from the original values, but as long as they still within the respective ellipsoids, \((Max-Min)\) is reduced without affecting perceptual quality.
It is worth noting that to obtain the highest compression rate it is necessary to adjust interior pixels, as is the case in this example. The central challenge we address in this paper is how to design a principled algorithm that maximizes the bit reduction while being lightweight to execute in real time.
### Problem Formulation
Our compression algorithm works on top of the baseline BD algorithm. Our goal is to adjust pixel colors to minimize the bit-length required to encode the \(\Delta\)s in a tile. The adjusted pixel tile then goes through any existing BD compression method. Critically, color adjustment must not violate the perceptual constraints. Therefore, we formulate our compression as a constraint optimization problem:
\[\operatorname*{argmin}_{\mathbf{p}}\ \sum_{\mathrm{C}\in\{R,G,B\}}log_{2}\lfloor max\{f_{s2r}(\mathbf{p}^{\mathrm{C}})\}-min\{f_{s2r}(\mathbf{p}^{\mathrm{C}})\}+1\rfloor, \tag{7a}\]
\[\text{where }\mathbf{p}\coloneqq[p_{0},\ p_{1},\ \cdots,\ p_{N-1}], \tag{7b}\]
\[\mathbf{p}^{\mathrm{C}}\coloneqq[p_{0}^{\mathrm{C}},\ p_{1}^{\mathrm{C}},\ \cdots,\ p_{N-1}^{\mathrm{C}}],\ \mathrm{C}\in\{R,G,B\} \tag{7c}\]
\[s.t.\ \forall p_{i}\in\mathbf{p}:\ p_{i}\in\mathcal{E}_{p_{i}} \tag{7d}\]
where \(\mathbf{p}\) is the optimization variable, which is the collection of \(N\) pixels in a tile (Equ. 7b); \(p_{i}^{\mathrm{C}}\) denotes channel C (R, G, or B) of \(i\)-th pixel in the linear RGB space.
The constraints (Equ. 7d) provide the (convex) ellipsoid boundary for each pixel to move while maintaining perception quality. \(f_{s2r}(\cdot)\) is the non-linear transformation from RGB to sRGB space (Sec. 2.1), which is ultimately where bit encoding takes place. The objective function (Equ. 7a) minimizes the bit cost for encoding the \(\Delta\)s across all channels (it is a constant cost to encode the base pixel, e.g., 8 in the common sRGB encoding). This optimization formulation is applied to each pixel tile independently.
Unfortunately, this optimization problem is impractical to be solved in real-time, because the objective function is non-convex due to the non-linearity of min, max, floor, and \(f_{s2r}(\cdot)\). Empirically, we also find that the popular solvers in Matlab spend hours while still being stuck in local optima.
**Relaxation.** We introduce two relaxations that turn the problem into a convex optimization. Critically, while general convex optimization requires iterative solvers (e.g., gradient descent or Newton's method (Kolmogorov, 1954)), our relaxed problem is one such that it has an analytical solution. The relaxations keep the same constraints as before (Equ. 7d) and, thus, still enforce the perceptual quality.
The first relaxation is based on the empirical observation that most discrimination ellipsoids are elongated along the either the Red or the Blue axis. See the discrimination ellipsoids in Fig. 2 for an illustration. This makes sense as human visual perception is most sensitive to green lights (Sutton et al., 2017; Sutton et al., 2017) and, thus, has the least "wiggle room" along the Green axis.
Our idea, thus, is to minimize the bit cost along _only_ the Red or the Blue axis instead of across _all_ three axes (while still having the flexibility of adjusting all the channels of all the pixels in a tile). Using the Blue axis as an example, this relaxation yields the following new objective function in Equ. 8a:
\[\operatorname*{argmin}_{\mathbf{p}}\ \log_{2}\left\lfloor\max\{f_{s2r}(\mathbf{p}^{B})\}-\min\{f_{s2r}(\mathbf{p}^{B})\}+1\right\rfloor \tag{8a}\]
\[\Rightarrow\ \operatorname*{argmin}_{\mathbf{p}}\ \max\{f_{s2r}(\mathbf{p}^{B})\}-\min\{f_{s2r}(\mathbf{p}^{B})\} \tag{8b}\]
\[\Rightarrow\ \operatorname*{argmin}_{\mathbf{p}}\ \max\{\mathbf{p}^{B}\}-\min\{\mathbf{p}^{B}\} \tag{8c}\]
Figure 5. An intuitive illustration of our perceptually-aware compression, where pixel values are adjusted to be more similar to each other by leveraging the inherent human color discrimination thresholds.
Second, the objective function in Equ. 8a can be transformed into Equ. 8b without sacrificing solution optimality, because \(\log_{2}\lfloor\cdot\rfloor\) is monotonically non-decreasing. We then remove the non-linear RGB to sRGB transformation function \(f_{s2r}(\cdot)\). This removal does not preserve solution optimality, but gives us a convex objective function in Equ. 8c.
**Proof of Convexity.** Let the objective function \(max\{\mathbf{x}\}-min\{\mathbf{x}\}\) be \(g(\mathbf{x}):\mathbb{R}^{N}\rightarrow\mathbb{R}\). To prove \(g(\mathbf{x})\) is convex, we must show: \(\forall\mathbf{x}_{1},\mathbf{x}_{2}\in\mathbb{R}^{N}\) and \(t\in[0,1]\), \(g(t\mathbf{x}_{1}+(1-t)\mathbf{x}_{2})\leq tg(\mathbf{x}_{1})+(1-t)g(\mathbf{ x}_{2})\).
Proof.: Observe that: \(g(t\mathbf{x}_{1}+(1-t)\mathbf{x}_{2})\coloneqq max(t\mathbf{x}_{1}+(1-t) \mathbf{x}_{2})-min(t\mathbf{x}_{1}+(1-t)\mathbf{x}_{2})\).
We know \(max(t\mathbf{x}_{1}+(1-t)\mathbf{x}_{2})\leq max(t\mathbf{x}_{1})+max((1-t) \mathbf{x}_{2})=t\ max(\mathbf{x}_{1})+(1-t)\ max(\mathbf{x}_{2})\). Similarly we can derive: \(min(t\mathbf{x}_{1}+(1-t)\mathbf{x}_{2})\geq t\ min(\mathbf{x}_{1})+(1-t)\ min( \mathbf{x}_{2})\).
Therefore, \(g(t\mathbf{x}_{1}+(1-t)\mathbf{x}_{2})\leq(t\ max(\mathbf{x}_{1})+(1-t)\ max( \mathbf{x}_{2}))-(t\ min(\mathbf{x}_{1})+(1-t)\ min(\mathbf{x}_{2}))=tg( \mathbf{x}_{1})+(1-t)g(\mathbf{x}_{2})\).
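A quick numerical spot-check of this inequality on random vectors (not a substitute for the proof above) can be scripted as follows:

```python
import random

def g(x):                                    # g(x) = max(x) - min(x)
    return max(x) - min(x)

random.seed(0)
for _ in range(10000):
    x1 = [random.uniform(-1.0, 1.0) for _ in range(16)]
    x2 = [random.uniform(-1.0, 1.0) for _ in range(16)]
    t = random.random()
    mix = [t * a + (1.0 - t) * b for a, b in zip(x1, x2)]
    assert g(mix) <= t * g(x1) + (1.0 - t) * g(x2) + 1e-12
print("convexity inequality held on all sampled points")
```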
### Analytical Solution Intuition
The relaxations introduced before lead to an analytical solution without requiring iterative solvers. Observe that the objective function in Equ. 8c minimizes the difference between the maximum and minimum values along the Blue axis. To achieve that, the intuition is that we must move the colors closer to each other along the Blue axis while making sure the adjusted colors stay within the respective discrimination ellipsoids.
Exactly how to move the colors falls into two cases. Fig. 6 illustrates the two cases using two examples. Without loss of generality, we choose to optimize along the Blue axis in these examples (the case along the Red axis is in principle the same), and we plot the projection of the ellipsoids onto the B-G plane for better visualization.
In the first case (Fig. 6a), there is no single plane that cuts across all ellipsoids. This is because the Lowest of the Highest points of all ellipsoids (LH) is lower than the Highest of the Lowest points of all ellipsoids (HL). The optimal strategy is to move all the colors higher than HL toward HL and move all the colors lower than LH toward LH. The movement is necessarily executed along the _extrema vector_, which is the vector that connects the highest and the lowest point of an ellipsoid. After the adjustment, the Blue channels across all the pixels are either HL or LH. That is, the maximum \(\Delta\) along the Blue axis is now HL - LH, which is the smallest gap the Blue channels can reach without going outside the ellipsoid boundaries.
In the second case (Fig. 6b), there is a common plane (\(P\)) that cuts across all four ellipsoids. In fact, there are infinitely many such planes, because LH is higher than HL; thus, any plane between LH and HL will cut across all ellipsoids. In this case, we can simply pick any such plane and move all the colors to that plane. For simplicity of implementation, we choose the average of the LH and the HL planes as the common plane and move colors along the extrema vectors. In this way, the Blue channel value is exactly the same for all pixels, requiring no \(\Delta\) bit for the Blue channel.
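The two cases can be summarized in a short Python sketch (illustrative only; the per-pixel ellipsoid extrema H and L are assumed to be precomputed as in Sec. 3.4, and colors already lying between LH and HL are simply left unchanged in case 1):

```python
def adjust_blue(colors, extrema):
    """Minimize the Blue-channel spread of a tile (Sec. 3.3), sketched.

    colors  : list of (R, G, B) tuples in linear RGB.
    extrema : list of (H, L) pairs per pixel: the highest and lowest points of the
              pixel's discrimination ellipsoid along the Blue axis (RGB points).
    """
    B = 2                                              # index of the Blue channel
    lh = min(h[B] for h, _ in extrema)                 # Lowest of the Highest points (LH)
    hl = max(l[B] for _, l in extrema)                 # Highest of the Lowest points (HL)

    def move_along_extrema(h, l, target_b):
        # Land on the extrema vector (the segment from L to H) at Blue = target_b.
        t = (target_b - l[B]) / (h[B] - l[B]) if h[B] != l[B] else 0.0
        return tuple(l[c] + t * (h[c] - l[c]) for c in range(3))

    adjusted = []
    if lh < hl:
        # Case 1 (Fig. 6a): no common plane; clamp Blue values into [LH, HL].
        for color, (h, l) in zip(colors, extrema):
            if color[B] > hl:
                adjusted.append(move_along_extrema(h, l, hl))
            elif color[B] < lh:
                adjusted.append(move_along_extrema(h, l, lh))
            else:
                adjusted.append(color)                 # already inside the [LH, HL] band
    else:
        # Case 2 (Fig. 6b): a common plane exists; collapse to the average plane.
        plane = 0.5 * (lh + hl)
        adjusted = [move_along_extrema(h, l, plane) for (h, l) in extrema]
    return adjusted
```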
### Overall Compression Algorithm
We illustrate how our color adjustment algorithm fits in the overall rendering and compression pipeline in Fig. 7. Our adjustment algorithm takes as inputs a tile of pixels (each with three channels) and the parameters of their corresponding discrimination ellipsoids. The algorithm generates the perceptually-adjusted pixel tile as the output. We apply the same color adjustment strategy along both the Blue and the Red axis for each tile, and pick the better one in the end.
It is worth noting that our algorithm does not itself perform compression; it simply adjusts pixel colors so that the (numerically lossless) BD encoding later can achieve a higher compression rate. Specifically, the adjusted pixel tile is first transformed from the linear RGB to the sRGB space and then goes through the usual BD compression.
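As a reminder of what the downstream encoder does with the adjusted tile, here is a toy Base+Delta encoder for a single channel (the actual BD bitstream of Zhang et al. [76] differs in format and metadata; this only illustrates the base + per-pixel \(\Delta\) idea):

```python
def bd_encode_channel(values):
    """Toy Base+Delta encoding of one sRGB channel of a tile.

    values : list of 8-bit channel values. Returns (base, delta_bitlength, deltas).
    """
    base = min(values)                                # base value for the tile
    deltas = [v - base for v in values]               # non-negative Deltas
    bitlen = max(d.bit_length() for d in deltas)      # one Delta bit-length per tile
    return base, bitlen, deltas

print(bd_encode_channel([120, 121, 119, 120]))        # -> (119, 2, [1, 2, 0, 1])
```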
Figure 6. The two cases in adjusting color values to minimize the \(\Delta\) along the Blue axis. For simplicity, we draw the ellipsoids in the B-G plane. The empty markers \((C_{0},C_{1},C_{2},C_{3})\) denote the original colors and the solid markers \((C^{\prime}_{0},C^{\prime}_{1},C^{\prime}_{2},C^{\prime}_{3})\) denote the adjusted colors. In both cases, colors are adjusted along the extrema vector \(\mathbf{V}\).

**Ellipsoid Transformation.** The first step in our algorithm is to transform the discrimination ellipsoids from the DKL space to the linear RGB space, which is where color adjustment takes place (Sec. 3.3). While ellipsoids are axis-aligned in the DKL color space [22], they will not be axis-aligned after the linear transformation from the DKL to the RGB color space. Therefore, an ellipsoid in the linear RGB space has to take the form of a general quadric surface:
\[Ax^{2}+By^{2}+Cz^{2}+Dx+Ey+Fz+Gxy+Hyz+Izx+1=0 \tag{9}\]
Transforming an axis-aligned ellipsoid in the DKL space to an ellipsoid in the linear RGB space amounts to the following matrix multiplication:
\[\begin{bmatrix}A\\ B\\ C\\ D\\ E\\ F\\ G\\ H\\ I\end{bmatrix}=\begin{bmatrix}(T\odot T)^{\top}&\mathbf{0}\\ \mathbf{0}&T^{\top}\\ \begin{bmatrix}2T_{00}T_{01}&2T_{10}T_{11}&2T_{20}T_{21}\\ 2T_{01}T_{02}&2T_{11}T_{12}&2T_{21}T_{22}\\ 2T_{00}T_{02}&2T_{10}T_{12}&2T_{20}T_{22}\end{bmatrix}&\mathbf{0}\end{bmatrix}\times\begin{bmatrix}1/(a^{2}t)\\ 1/(b^{2}t)\\ 1/(c^{2}t)\\ -2\kappa_{1}/(a^{2}t)\\ -2\kappa_{2}/(b^{2}t)\\ -2\kappa_{3}/(c^{2}t)\end{bmatrix},\]
\[t=1-\left(\frac{\kappa_{1}^{2}}{a^{2}}+\frac{\kappa_{2}^{2}}{b^{2}}+\frac{ \kappa_{3}^{2}}{c^{2}}\right) \tag{10}\]
where \(T=\begin{bmatrix}T_{00}&T_{01}&T_{02}\\ T_{10}&T_{11}&T_{12}\\ T_{20}&T_{21}&T_{22}\end{bmatrix}\) is the constant \(\mathrm{M}_{\mathrm{RGB2DKL}}\) matrix in Sec. 2.1, \(\odot\) is the element-wise product, \((\kappa_{1},\kappa_{2},\kappa_{3})\) is the color in DKL space, and \((a,b,c)\) are the semi-axis lengths of \(\kappa\)'s discrimination ellipsoid. The derivation uses basic linear transformations and is omitted here due to space constraints.
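A compact way to carry out this transformation numerically is to expand the transformed ellipsoid directly, which is mathematically equivalent to the matrix form above up to normalization conventions. The sketch below (placeholder numbers only, not the real \(\mathrm{M}_{\mathrm{RGB2DKL}}\) or ellipsoid parameters) returns the quadratic and linear coefficients of Equ. 9 normalized so that the constant term is 1:

```python
import numpy as np

def dkl_ellipsoid_to_rgb_quadric(M, kappa, axes):
    """Express an axis-aligned DKL ellipsoid as a quadric in linear RGB (cf. Equ. 9-10).

    M     : 3x3 RGB->DKL matrix (M_RGB2DKL), i.e. dkl = M @ rgb.
    kappa : ellipsoid center in DKL space.
    axes  : semi-axis lengths (a, b, c) along the DKL axes.
    Returns (Q, L) with  rgb^T Q rgb + L . rgb + 1 = 0  on the ellipsoid surface.
    """
    M = np.asarray(M, dtype=float)
    kappa = np.asarray(kappa, dtype=float)
    D = np.diag(1.0 / np.asarray(axes, dtype=float) ** 2)

    # Expand (M rgb - kappa)^T D (M rgb - kappa) = 1 in rgb:
    Q = M.T @ D @ M                                   # quadratic part
    L = -2.0 * (M.T @ D @ kappa)                      # linear part
    c0 = float(kappa @ D @ kappa - 1.0)               # constant part (nonzero in practice)
    return Q / c0, L / c0                             # normalize the constant term to +1

# Placeholder numbers; the real M_RGB2DKL and ellipsoid parameters come from Sec. 2.1.
M = np.array([[0.4, 0.3, 0.3], [0.2, -0.5, 0.3], [0.1, 0.2, -0.3]])
Q, L = dkl_ellipsoid_to_rgb_quadric(M, kappa=[0.5, 0.1, -0.2], axes=(0.05, 0.02, 0.08))
```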
**Color Adjustment.** Once we have the ellipsoids in the linear RGB space, we can perform color adjustment, which, as illustrated in Fig. 6 and described in Sec. 3.3, is done in three steps: 1) compute the extrema, i.e., the highest and the lowest point, of each ellipsoid; 2) compute \(\mathtt{LH}\) and \(\mathtt{HL}\) based on the extrema of all ellipsoids; 3) compare \(\mathtt{LH}\) and \(\mathtt{HL}\) and move colors along the extrema vectors accordingly. Steps 2 and 3 are relatively straightforward, so here we focus on the mathematical details of Step 1.
Extrema along the Blue axis can be computed by taking the partial derivatives of the ellipsoid equation along the Red and Green axes:
\[\frac{dz}{dx} =2Ax+Gy+Iz+D=0 \tag{11a}\] \[\frac{dz}{dy} =Gx+2By+Hz+E=0 \tag{11b}\]
These partial derivatives give us two planes, the intersection of which is a line along the vector \(\mathbf{v}\) that connects the two extrema. The extrema vector \(\mathbf{v}\) is calculated by taking the cross product of the normal vectors of the two planes:
\[\mathbf{v}=(2A,G,I)\times(G,2B,H) \tag{12}\]
The two extrema points \(H\) and \(L\) are then calculated by finding the intersection of \(\mathbf{v}\) and the ellipsoid:
\[\mathbf{x}:=(x_{1},x_{2},x_{3})=\mathrm{M}_{\mathrm{RGB2DKL}}\times\mathbf{v}^{T} \tag{13a}\]
\[t=1/\sqrt{\frac{x_{1}^{2}}{a^{2}}+\frac{x_{2}^{2}}{b^{2}}+\frac{x_{3}^{2}}{c^{2}}} \tag{13b}\]
\[H=\mathrm{M}_{\mathrm{RGB2DKL}}^{-1}\times(\kappa_{1}+x_{1}t,\ \kappa_{2}+x_{2}t,\ \kappa_{3}+x_{3}t)^{T},\qquad L=\mathrm{M}_{\mathrm{RGB2DKL}}^{-1}\times(\kappa_{1}-x_{1}t,\ \kappa_{2}-x_{2}t,\ \kappa_{3}-x_{3}t)^{T} \tag{13c}\]
where \(\kappa\) is the pixel color in the DKL space, \((a,b,c)\) are DKL ellipsoid parameters, and \(\mathrm{M}_{\mathrm{RGB2DKL}}\) is the RGB to DKL transformation matrix (Sec. 2.1). We omit the derivation details due to space constraints, but the derivation amounts to a simple application of line-ellipsoid intersection and linear transformations between RGB and DKL space.
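Putting Equ. 11-13 together, a sketch of the extrema computation (illustrative only; it reuses the quadratic part \(Q\) from the previous sketch, together with the DKL-space parameters) looks as follows:

```python
import numpy as np

def blue_extrema(Q, M, kappa, axes):
    """Highest/lowest points of the ellipsoid along the Blue axis (Equ. 11-13), sketched."""
    # Recover the Equ. 9 coefficients from the symmetric matrix Q.
    A, B, G = Q[0, 0], Q[1, 1], 2.0 * Q[0, 1]
    I, H = 2.0 * Q[0, 2], 2.0 * Q[1, 2]
    # Normals of the planes dF/dx = 0 and dF/dy = 0 (Equ. 11); their cross product
    # is the extrema vector (Equ. 12).
    v = np.cross([2.0 * A, G, I], [G, 2.0 * B, H])

    x = np.asarray(M, dtype=float) @ v                 # direction in DKL space (Equ. 13a)
    a, b, c = axes
    t = 1.0 / np.sqrt((x[0] / a) ** 2 + (x[1] / b) ** 2 + (x[2] / c) ** 2)   # Equ. 13b
    Minv = np.linalg.inv(np.asarray(M, dtype=float))
    hi = Minv @ (np.asarray(kappa, dtype=float) + x * t)                     # Equ. 13c
    lo = Minv @ (np.asarray(kappa, dtype=float) - x * t)
    if hi[2] < lo[2]:                                  # make `hi` the larger Blue value
        hi, lo = lo, hi
    return hi, lo
```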
**Remarks on Decoding.** One desired byproduct of our algorithm is that it requires _no_ change to the existing frame-buffer decoding scheme -- our color adjustment algorithm simply changes the input to BD. During decoding (e.g., by the display controller), the existing BD decoder will construct the sRGB values from the BD-encoded data, which are then sent to the display. The exact BD encoding format varies across implementations and is not our focus. We assume the encoding format described in Zhang et al. [76].
Figure 7: Overview of our algorithm and how it fits in the existing rendering and compression pipeline. Our algorithm takes a tile of pixels and their corresponding discrimination ellipsoid parameters, and generates an adjusted pixel tile, which then goes through existing BD encoding.
## 4. Hardware Architecture
The analytical compression algorithm, while avoiding iterative solvers, is still compute-intensive and too slow to execute in real time. We implement it as a GPU shader executing on the Adreno 650 GPU in Oculus Quest 2, a widely used mobile VR headset; the compression algorithm runs at a mere 2 FPS. This section describes a lightweight hardware design that accelerates the compression algorithm. Sec. 4.1 describes how our custom hardware fits into the overall system and Sec. 4.2 describes the hardware details.
### Hardware Overview
Fig. 8 provides an overview of our architectural extension, dubbed the Color Adjustment Unit (CAU), and how CAU fits into existing mobile SoCs. The CAU executes the pixel adjustment algorithm described in Sec. 3. The CAU reads its input from an on-chip buffer, which stores the pixels and the discrimination ellipsoid parameters generated by the GPU. Following prior work (Krizhevsky et al., 2015), we assume that the GPU is responsible for generating the per-pixel discrimination ellipsoids. The generation algorithm is a lightweight RBF network (Sec. 2.1). In our measurement, the ellipsoid generation algorithm on Oculus Quest 2 runs at the maximum display refresh rate (72 FPS) while consuming less than 1 mW measured using Oculus' OVR Metrics Tool (Corba et al., 2018).
The output of the CAU enters the existing BD framebuffer encoder, which writes the encoded data to the DRAM. Any frame read out from the DRAM, e.g., by the Display Controller IP block when sending the frame to the display, will enter the BD decoder, which reconstructs the sRGB pixels. The figure provides a visual confirmation that our algorithm 1) works on top of, rather than replaces, BD encoding, and 2) does not change the decoding architecture.
### Color Adjustment Unit
Internally, the CAU consists of an array of Processing Elements (PEs), each of which is designed to adjust colors for _one tile_ of pixels, which in our current design is assumed to be \(4\times 4\). Each PE interfaces with a dedicated Pending Buffer, which holds all the information of the pixel tiles generated from the GPU. Having more PEs allows the system to compress multiple tiles simultaneously.
**Pipelining.** The PE is fully pipelined to accept a new tile every cycle. Fig. 8 illustrates the detailed architecture, which has three main phases, each of which is internally pipelined. The first phase computes the extrema. The second phase uses reduction trees to calculate HL and LH from the extrema. The final phase moves the colors along the extrema vectors.
**Compute Extrema Blocks.** This component calculates the extrema of all the pixels in a tile, which is naturally parallelizable across pixels and, thus, has multiple parallel units, each of which is responsible for one pixel. The top-right box in Fig. 8 illustrates the microarchitecture. This is the most compute-intensive block in the CAU, since it involves multiple divisions and square root operations. The division and square root hardware implements Equ. 13b, and the adder and subtractor circuit implements Equ. 13c. The DKL-RGB transformations in Equ. 13c and Equ. 13a are implemented through matrix-vector multiplication executed on a \(3\times 3\) MAC array.
**Compute Planes Blocks.** The extrema calculated before enter this unit, which finds the channel value for the HL plane (maximum of the minima) and the LH plane (minimum of the maxima). We implement this stage using two reduction (comparator) trees to generate both planes simultaneously.
Figure 8. Illustration of the hardware support for our image encoding, which we dub the Color Adjustment Unit (CAU), and how the CAU interfaces with the rest of the SoC. Internally, the CAU uses an array of PEs, each of which adjusts colors for one tile of pixels. The CAU is fully pipelined to accept a new tile every cycle from the Pending Buffer, which receives the rendered pixels and their discrimination ellipsoids from the GPU.

**Color Shift Blocks.** This block takes the original color values and the two planes as input and outputs the modified color values. This phase is control-flow heavy, as it involves multiple condition checks, e.g., testing the relationship between a point and a plane. A custom datapath in the CAU avoids much of the inefficiency surrounding control flow that is detrimental to GPU performance. This hardware is a relatively straightforward mapping from the algorithm.
**Pending Buffer.** The Pending Buffers store intermediate pixels and their discrimination ellipsoids from the GPU before they are consumed by the CAU. Each buffer is interfaced with a dedicated PE and, thus, contains the data for all the pixel tiles to be consumed by the PE.
The buffers must be properly sized so as not to stall or starve the CAU pipeline. In order to be independent of the exact GPU microarchitecture details, we make a conservative estimation of the buffer size. In particular, we allocate enough space in the buffer such that it can hold all the pixels generated by the GPU in each CAU cycle _even if_ the GPU is fully utilized, in which case each shader core in the GPU generates 1 pixel/GPU cycle. Note that the GPU and CAU cycle times need not be the same. The number of PEs in a CAU must be properly decided so as not to stall either the GPU or the CAU, as we will discuss in Sec. 6.1.
## 5. Experimental Methodology
### Setup
**Hardware.** We implement our encoder and decoder units in SystemVerilog and use an EDA flow consisting of Synopsys and Cadence tools with the TSMC 7 nm FinFET technology to obtain latency and area. We use Synopsys DesignWare library for a variety of RTL implementations such as the pipelined divider. Power is estimated using Synopsys PrimePX with fully annotated switching activity.
The DRAM energy is calculated using Micron's System Power Calculators (Castro et al., 2017), assuming a typical 8 Gb, 32-bit LPDDR4. On average, the DRAM access energy per pixel is estimated to be 3,477 pJ/pixel, matching prior work (Srivastava et al., 2017; Wang et al., 2018).
**Dataset and Software.** We evaluate our compression algorithm with 6 different VR scenes used in VR color perception studies (Wang et al., 2018). In each scene, each frame is rendered with two sub-frames, one for each eye. All the frames are dynamically rendered (on the GPU) at run time, i.e., the frames are neither loaded from the disk nor streamed over the network. Following the common practice in color perception studies (Wang et al., 2018; Wang et al., 2018), we keep pixels in the central 10° FoV unchanged, and apply the compression algorithm only to the remaining (peripheral) pixels.
As discussed in Sec. 3.4, our algorithm works in conjunction with existing BD compression. In this paper, we assume a recent, state-of-the-art, BD algorithm described by Zhang et al. (Zhang et al., 2019), from which we obtain the final compression rate.
### Human Subject Studies
We also evaluate the perceptual quality of our compression algorithm on actual participants. We recruit 11 participants (3 female; ages between 19 and 40). None of the participants were aware of the research, the number of conditions, or the hypothesis before taking the experiments, which were approved by an Internal Review Board.
We face a dilemma in the user study: the compression algorithm implemented as a GPU shader is too slow on today's mobile VR headsets (e.g., 2 FPS on Oculus Quest 2, as discussed in Sec. 4), which is precisely the motivation behind our architectural support, but it also means we cannot use a mobile VR headset for the user study. Our approach is to run the user study on a tethered VR headset, HTC Vive Pro Eye, which is connected to a PC with a powerful Nvidia RTX A2000 GPU, which runs the compression algorithm at 90 FPS, sufficient for the user study.
Each participant was shown the six VR scenes (20 seconds each) used in a prior study (Wang et al., 2018) in random order. To encourage and ensure that the participants actively and freely explored the scene, each participant was asked to perform a scene-specific task, such as counting the number of birds in the scene. At the end of each video, we asked the participant whether they noticed any visual artifacts.
In order for participants to isolate potential artifacts introduced by our compression from other irrelevant artifacts (e.g., low resolution, aliasing in rendering), at the beginning of each test we show the participant two images on a computer display, one with and the other without our perceptual compression; see examples in Fig. 9. When participants viewed the images on the computer display, the entire frames were in their foveal vision so the color adjustment was clearly visible. In this way, we make sure the artifacts reported by users resulted from compression. This is a common practice in subjective color perception studies (Wang et al., 2018). The user study results should be seen as the _lower bound_ of the quality of the compression algorithm, because the participants were made aware of the artifacts beforehand and were thus better at identifying them.
Figure 9. A pair of images without (left) and with (right) our color adjustment. The two images when viewed on a conventional computer display are visibly different, because the entirety of the images will be in the viewer’s foveal vision.
### Baselines
We compare against four baselines:
* NoCom: no compression;
* BD: existing BD compression based on Zhang et al. (Zhang et al., 2019);
* PNG: lossless compression based on the popular Portable Network Graphics (PNG) format, which is unsuitable for real-time DRAM compression because of its high run-time overhead even with dedicated hardware acceleration (Beng et al., 2019; Zhang et al., 2019). For instance, the commercial IPB-PNG-E FPGA-based IP core compresses an \(800\times 600\) image at only 20 FPS (Beng et al., 2019).
* SCC: an alternative strategy to exploit color discrimination based on the Set Cover formulation, which we describe next.
SCC uses a look-up table to map each 24-bit sRGB color to a more compact encoding. This can be formulated as a set cover problem (Zhu et al., 2019): find the smallest subset of sRGB colors C \(\subset\) sRGB whose discrimination ellipsoids union-ed together cover all the \(2^{24}\) sRGB colors. Each new color is then encoded with only \(log_{2}\lceil|\text{C}|\rceil\) bits, where \(|\cdot|\) denotes the set cardinality.
The set cover problem is a classic NP-complete problem (Zhu et al., 2019), where the optimal solution requires combinatorial search. We use a common greedy heuristic (Han et al., 2017) to construct the mapping tables. The encoding table consumes 30 MB and the decoding table consumes 96 KB, too large for SCC to be used for DRAM traffic compression in mobile SoCs.
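For illustration, the greedy construction can be sketched on a toy one-dimensional "color" universe (integers standing in for sRGB values; the real problem operates on the \(2^{24}\) sRGB colors and their discrimination ellipsoids):

```python
def greedy_color_cover(universe, covers):
    """Greedy set cover: pick colors whose discrimination sets cover everything.

    universe : iterable of all colors (toy integers here).
    covers   : dict color -> set of colors indistinguishable from it.
    """
    uncovered, chosen = set(universe), []
    while uncovered:
        # Pick the candidate covering the most still-uncovered colors.
        best = max(covers, key=lambda c: len(covers[c] & uncovered))
        chosen.append(best)
        uncovered -= covers[best]
    return chosen

# Toy universe of 16 "colors"; each color covers itself and its two neighbors.
universe = range(16)
covers = {c: {c - 1, c, c + 1} & set(universe) for c in universe}
C = greedy_color_cover(universe, covers)
print(len(C), "colors suffice; encode each color with", (len(C) - 1).bit_length(), "bits")
```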
## 6. Evaluation
We first show that the area and power overhead of our compression scheme is negligible while ensuring real-time compression (Sec. 6.1). We then present the benefits of our compression scheme in DRAM traffic reduction and power savings, and analyze the sources of the savings (Sec. 6.2). We then present our human subject studies, which show that our compression scheme introduces little visible artifacts (Sec. 6.3). We present a sensitivity study of the key parameters in our compression scheme (Sec. 6.4). Finally, we discuss how we can accommodate a diverse range of users (Sec. 6.5).
### Area and Power Overhead
**Performance.** Our algorithm along with the hardware support achieves real-time compression. The CAU operates with a cycle time of about 6 _ns_, which translates to a frequency of about 166.7 MHz. The Adreno 650 GPU used in Oculus Quest 2 operates at a nominal frequency of 441 MHz, which means during each CAU cycle (at most) three pixels are generated by a shader core in the GPU. Given that the Adreno 650 GPU has 512 shader cores, each CAU cycle \(512\times 3\) pixels (i.e., 96 tiles) are generated. Therefore, we configure our CAU to have 96 PEs, which are able to process 96 tiles simultaneously, matching peak throughput of the GPU.
Thus, when compressing a \(5408\times 2736\) image (the highest rendering resolution on Oculus Quest 2), compression adds a delay of 173.4 \(\mu\)s, negligible in a rendering pipeline that operates at, say, 72 FPS with a frame time budget of 13.9 _ms_.
**Area and Power.** Our compression hardware extension introduces little area overhead, which consists of that of the Pending Buffers and the PEs. Each PE has an area of 0.022 _mm_\({}^{2}\), resulting in a total PE size of 2.1 _mm_\({}^{2}\). Each Pending Buffer holds data for two tiles (double buffering); the total buffer size is 36 KB, resulting in a total area of 0.03 _mm_\({}^{2}\).
The area overhead is negligible compared to the size of a typical mobile SoC. For instance, the Xavier SoC has an area of 350 _mm_\({}^{2}\) (12 nm) (Beng et al., 2019), Qualcomm Snapdragon 865 SoC has a die area of 83.54 _mm_\({}^{2}\) (7 nm) (Beng et al., 2019), and Apple A14 SoC has a die area of 88 _mm_\({}^{2}\) (5 nm) (Beng et al., 2019). The power consumption of each PE and its buffer is about 2.1 \(\mu W\), resulting in a total CAU power consumption of about 201.6 \(\mu W\), which we faithfully account for in the power saving analyses later.
### Results
**Compression Rate.** Fig. 10 shows the bandwidth reduction of our algorithm compared to the baselines. Our algorithm achieves a compression rate of 66.9%, 50.3%, and 15.6% over NoCom, SCC, and BD, respectively. Unsurprisingly, the highest gains are against NoCom, which is the original frames and uses 3 Bytes (24 bits) to store each pixel.
SCC (Sec. 5.3) is able to map all the \(2^{24}\) (about 16.8 million) sRGB colors to a small subset of only 32,274 colors.
Figure 11. Distribution of bits per pixel across the three components: base, metadata, and \(\Delta\). Left: BD; Right: our algorithm.
Figure 10. Bandwidth reduction over baselines.
SCC thus uses 15 bits to represent a color, reducing the storage cost compared to the original frames, but is still much less efficient than BD, which is the canonical Base+Delta approach to compressing DRAM traffic in today's mobile SoCs. Compared to BD, we show a 15.6% (up to 20.4%) higher compression rate, because of our ability to exploit human color discrimination to reduce the magnitudes of \(\Delta\)s.
We get the least improvement over PNG. In two scenes, PNG actually has a higher compression rate. This matches prior results on BD (Zhao et al., 2017) and is not surprising -- to get a high compression rate PNG is computationally intensive and is meant to be used offline; see discussion in Sec. 5.3.
**Understanding Results.** Our improvement over BD comes from the fact that we require fewer bits to store the \(\Delta\)s. Fig. 11 shows the average number of bits per pixel required to store the base, metadata, and \(\Delta\)s in a tile. We compare the statistics between BD (left bars) and our scheme (right bars). It is clear that the space reduction comes from reducing the number of bits required to store the \(\Delta\)s.
To dissect how our scheme reduces the magnitude of \(\Delta\)s, Fig. 12 shows the distribution of tiles across the two cases in Fig. 6: \(\mathsf{HL}>\mathsf{LH}\) (c1) and \(\mathsf{HL}<\mathsf{LH}\) (c2). We observe that c2 is the more common case: 78.92% of tiles fall into it. In c2, there exists a common plane to which all the color values can collapse. We can reduce the \(\Delta\) to 0 in these tiles, essentially eliminating the need to store \(\Delta\).
**Power Reduction.** We evaluate the power reduction under different resolutions and frame rates available on Oculus Quest 2. Fig. 13 shows the power savings under each combination over BD. Across all configurations, we reduce the power consumption by 307.2 \(mW\) on average. The power saving accounts for both the reduced DRAM traffic and the power overhead of the CAU encoding (201.6 \(\mu W\)).
Even on the lowest resolution and frame rate combination on Oculus Quest 2, we reduce the power consumption by 180.3 \(mW\), which translates to about 29.9% of the total power measured (using Oculus' OVR Metrics Tool (Cowley et al., 2017)) when rendering without compression. Under the highest resolution and frame rate combination, the power saving increases to 514.2 \(mW\). As resolution and frame rate will likely increase in future VR devices, the power benefits of our compression scheme will only increase.
### User Studies and Analyses
Fig. 14 shows the number of participants who did _not_ notice any artifact in each scene. On average, 2.8 participants (standard deviation 1.5) out of 11 total participants observed artifacts. This rate is on par with prior color perception studies (Dong et al., 2018; Wang et al., 2018). We further interviewed the participants and identified three reasons why certain participants noticed artifacts, all of which are orthogonal to the fundamental idea of this paper and actually point to optimization opportunities in _other_ parts of the system, which, when exploited, can be readily integrated into our work.
One participant who noticed subtle artifacts in three out of the six scenes was a visual artist with "color-sensitive eyes." Observer variation has been a known phenomenon in vision science since the early days of color science research (Zhao et al., 2017; Zhao et al., 2017; Zhao et al., 2017). Given that color discrimination models in the literature all target the average case in the population, the results indicate that customizing/calibrating the model for individual users is likely a promising approach to reduce the artifacts.
Another set of participants noticed artifacts only during rapid eye/head movement but not with a steady pose. This is likely attributed to external factors such as rendering lag or slow gaze detection, which is independent of our algorithm.
Finally, we found that no participant noticed any artifact in the fornite scene, which is a bright scene with a large amount of green. Since our compression algorithm generally yields green-hue shifts (see examples in Fig. 9), artifacts are less noticeable in scenes that are green to begin with. In contrast, dumbo and monkey, both dark scenes, have the most noticeable artifacts. The results suggest, to the vision science community, the need for improving the color discrimination models in low-luminance conditions.

Figure 14. Number of participants (out of 11) who did _not_ notice any artifacts in each scene in our user study.

Figure 12. Distribution of the two cases c1 and c2.

Figure 13. Power saving over BD under the lowest and highest resolutions and four different frame rates on Oculus Quest 2.
**Objective Image Quality.** To show that subjective experience, which is the focus of our work, is _not_ equivalent to objective quality, we evaluate the Peak-Signal-to-Noise-Ratio (PSNR), a common objective quality metric, of all the compressed images. On average, the PSNR of the compressed videos is 46.0 dB (standard deviation 19.5); all but two scenes have a PSNR below 37. A PSNR value in this range usually indicates noticeable visual artifacts (Dosov et al., 2016), which is confirmed by our participants when they view the compressed images on a conventional display. This result accentuates the crux of our work: use human color perception in VR to guide a numerically lossy scheme (hence low PSNR) for higher compression rate with little subjective quality degradation.
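For reference, PSNR is computed here in the standard way over 8-bit frames; a minimal sketch of the generic definition (not the exact evaluation script used above):

```python
import numpy as np

def psnr(reference, compressed, peak=255.0):
    """Peak signal-to-noise ratio between two 8-bit frames (higher = numerically closer)."""
    reference = np.asarray(reference, dtype=float)
    compressed = np.asarray(compressed, dtype=float)
    mse = np.mean((reference - compressed) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```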
### Sensitivity Studies
Our evaluation so far assumes a tile size of \(4\times 4\). We also evaluate our compression algorithm across different tile sizes; the results are shown in Fig. 15 along with BD. We observe that the compression rate drops once the tile size increases beyond \(4\times 4\) and can be worse than BD when the tile size is larger than \(8\times 8\).
The trend is the result of two opposing effects. On one hand, as we increase the tile size we can amortize the cost of storing the base pixels. On the other hand, larger tiles also present less opportunity to bring pixels together, because we have to accommodate the worst-case (largest) difference between two pixels in a tile (Sec. 3.1).
### Discussions
To accommodate individual color perception in actual system deployments, one can perform a per-user color calibration procedure to build a per-user ellipsoid model. Such a procedure is laid out in prior work (Shi et al., 2017), and is readily doable. Such user-specific calibrations are routinely done when a user first uses an AR/VR product, e.g., adjusting the pair of displays to accommodate different inter-pupil distances among individuals. When a per-user ellipsoid model is available, our fundamental idea readily applies.
It is worth noting that we can, if need be, easily turn off our compression algorithm, which is intentionally designed as a plug-and-play stage between normal GPU rendering and existing BD compression (see Fig. 7). One scenario where one might want to turn off our compression is when a user has color vision deficiency (CVD). The color discrimination model that underlies our compression algorithm does not consider individuals with CVD. When such models for CVD become available, our fundamental idea readily applies.
## 7. Related Work
**Perception-Aware Rendering.** A host of recent work has focused on leveraging human perception to optimize AR/VR systems. Weier et al. provide a relatively recent survey (Weier et al., 2017). The most studied approach is foveated rendering, which reduces rendering resolution in the visual periphery (Shi et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017). Foveated rendering has been theoretically studied to reduce data transmission traffic in cloud rendering (Shi et al., 2017; Wang et al., 2017), but the decoding (reconstruction) cost is prohibitively high (e.g., need a complicated DNN). Our approach is orthogonal to foveated rendering in that we focus on adjusting colors rather than the spatial frequency, and works directly on top of the existing (BD-based) framebuffer compression framework without adding decoding cost.
**Color Perception in Systems Optimizations.** Color perception is most often leveraged to reduce display energy. To our best knowledge, this is the first paper that leverages color perception to reduce data communication energy.
Dong et al. (Dong et al., 2017), Crayon (Crayon et al., 2017), Dong and Zhong (Dong and Zhong, 2017) all leverage the human color discrimination to reduce OLED power, which is known to strongly correlate with color. Duinkharjav et al. (Duninkharjav et al., 2017) extend this approach to VR by quantifying the eccentricity dependent color discrimination. Recent work by Dash and Hu (Dash and Hu, 2017) builds an accurate color-vs-display power model. None focused on reducing data traffic. Shye et al. (Shye et al., 2017) and Yan et al. (Yan et al., 2017) leverage dark adaptation to reduce display power. Dark adaptation will likely weaken the color discrimination even more, potentially further improving the compression rate -- an interesting future direction.
Figure 15. Bandwidth reduction over NoCom under BD and our scheme with different tile sizes denoted by \(T_{n}\), where \(n\) is the tile size.

**Data Traffic Optimizations in VR.** Data traffic reduction in VR has mostly been studied in the context of client-cloud collaborative rendering, i.e., reducing wireless transmission traffic. The pioneering Furion (Furion, 2017) system and later developments and variants such as Coterie (Coterie, 2017) and Q-VR (Dash and Hu, 2017) cleverly decide what to render locally vs. remotely. For instance, one could offload background/far-object rendering to the server and render foreground/near-object interactions locally. EVR (Dash and Hu, 2017; Wang et al., 2017) predicts the user's FoV trajectory and pre-renders VR videos in the cloud. Our proposal is orthogonal to the client-remote collaborative rendering, in that we focus on reducing the DRAM traffic that occurs within a local device.
Zhang et al. [76] describe a BD design in encoding frame-buffer traffic. We directly compare against this approach and show up to 20% bandwidth savings. Zhang et al. [75] propose a content cache that exploits value equality in video decoding, which does not apply to encoding where strict equality is rare. Rhythmic Pixel Regions [34] drops pixel tiles to reduce DRAM traffic in a machine vision pipeline, whereas our focus is human visual perception in VR.
Any compression algorithm, ours included, exploits data similarities. Recent work leverages data similarities to speed-up rendering [77, 78, 66, 74] by eliding redundant computations (that compute same/similar data). These methods, however, do not reduce data traffic, which we do.
**General Memory Compression.** Exploiting value similarities to compress data traffic is a long-standing technique in architecture [47, 48]. Recent work in approximate computing extends compression to tasks that can tolerate slight data infidelity, such as image processing [43, 53] and texture mapping in rendering [61, 70]. In comparison, this paper performs a principled "approximate compression" by 1) using a rigorous human perception model derived from psychophysical experiments and 2) formulating compression as a constraint optimization with an optimal solution (under necessary relaxations). Finally, we specifically target VR and, thus, exploit the eccentricity dependency that prior work does not consider.
## 8. Conclusion
Aggressively lossy compression in the numerical domain can achieve significant data traffic reduction with little perceptual quality loss in VR. The key is to leverage the (in)ability of human color discrimination to make pixel values more similar to each other. The resulting images, thus, permit more aggressive compression over the classic Base+Delta scheme to reduce DRAM traffic in a mobile SoC. We show that our compression algorithm has an analytical form, which, when accelerated by dedicated hardware, can achieve real-time compression. Future VR systems design must actively integrate human perception into the optimization loop.
| Virtual Reality (VR) has the potential to become the next-generation ubiquitous computing platform, and progress in VR depends heavily on an efficient computing infrastructure. In particular, DRAM access energy accounts for a large share of total system energy consumption. Today's framebuffer compression systems rely on numerically lossless compression algorithms to reduce DRAM traffic. Numerical losslessness, however, is unnecessary for preserving visually perceived quality. This paper proposes a system that is perceptually lossless while being numerically lossy. The idea builds on long-standing studies of human visual capability, which show that human vision cannot distinguish colors that are sufficiently close to each other. In particular, color discrimination in peripheral vision is even weaker (that is, even more colors become visually indistinguishable). |
2309.12126 | Study of the upgraded EUSO-TA performance via simulations | The EUSO-TA ground-based fluorescence detector of the JEM-EUSO program, which
operates at the Telescope Array (TA) site in Utah (USA), is being upgraded. In
the previous data acquisition campaigns, it detected the first nine ultra-high
energy cosmic ray events with the external trigger provided by the Black Rock
Mesa fluorescence detectors of the Telescope Array (TA-BRM-FDs). Among the
upgrades, there is the installation of a trigger algorithm for the independent
detection of cosmic ray air showers and upgraded electronics. A simulation
study was developed to understand the performance of EUSO-TA in the new setup
and different background conditions. This study allowed us to estimate the
detection limit of the ground-based detector, which can be used to extrapolate
the detection limit for a balloon-based detector. Moreover, it provided
estimations of the expected trigger rates for ultra-high energy cosmic rays. In
this work, the description of the simulation setup, the method developed to
rescale the energy of the cosmic rays to account for the portion of air shower
actually observed rather than the whole one, and the results in terms of
detection limit and trigger rates, are reported. | Francesca Bisconti | 2023-09-21T14:46:02 | http://arxiv.org/abs/2309.12126v1 | # Study of the upgraded EUSO-TA performance via simulations
###### Abstract:
The EUSO-TA ground-based fluorescence detector of the JEM-EUSO program, which operates at the Telescope Array (TA) site in Utah (USA), is being upgraded. In the previous data acquisition campaigns, it detected the first nine ultra-high energy cosmic ray events with the external trigger provided by the Black Rock Mesa fluorescence detectors of the Telescope Array (TA-BRM-FDs). Among the upgrades, there is the installation of a trigger algorithm for the independent detection of cosmic ray air showers and upgraded electronics. A simulation study was developed to understand the performance of EUSO-TA in the new setup and different background conditions. This study allowed us to estimate the detection limit of the ground-based detector, which can be used to extrapolate the detection limit for a balloon-based detector. Moreover, it provided estimations of the expected trigger rates for ultra-high energy cosmic rays. In this work, the description of the simulation setup, the method developed to rescale the energy of the cosmic rays to account for the portion of air shower actually observed rather than the whole one, and the results in terms of detection limit and trigger rates, are reported.
## 1 The EUSO-TA detector and observations of UHECRs
EUSO-TA [1] is an experiment of the JEM-EUSO program [3] developed and operated to validate the observation principle and the design of a particular kind of detector, capable of observing extensive air showers (hereafter "showers") induced by Ultra-High Energy Cosmic Rays (UHECRs) and laser pulses. UHECRs can be detected by observing, at nighttime, the UV fluorescence and Cherenkov light emitted when showers cross the atmosphere. EUSO-TA is installed at the Telescope Array (TA) [4] site in Utah (USA), in front of the Black Rock Mesa Fluorescence Detectors (TA-BRM-FDs).
The EUSO-TA optics consists of two Fresnel lenses of 1 m diameter and 8 mm thickness, made of Poly(methyl methacrylate) - PMMA [5]. The focal surface consists of one Photo-Detector Module (PDM), composed of \(6\times 6\) Hamamatsu Multi-Anode Photo-Multiplier Tubes (MAPMTs, model R11265-M64) [6] with \(8\times 8\) pixels of 2.88 mm side each. The field of view (FOV) of one pixel is \(0.2^{\circ}\times 0.2^{\circ}\), making a total FOV of \(\sim\)\(10.6^{\circ}\times 10.6^{\circ}\). A 2 mm thick UV band-pass filter (in the range \(290-430\) nm), is glued on top of each MAPMT.
Data are sampled in \(2.5\,\mu\)s windows (called Gate Time Units - GTUs). Prior to the ongoing upgrade, the readout was performed by one 64-channel SPACIROC1 ASIC chip [7] per MAPMT, with a dead time at the beginning of each GTU of 200 ns and 30 ns double pulse resolution. The elevation of EUSO-TA can be set from \(0^{\circ}\) to \(30^{\circ}\) above the horizon, whereas its azimuth is fixed at \(53^{\circ}\) from North counterclockwise, pointing to the Central Laser Facility (CLF) of TA.
The location of EUSO-TA in front of the TA-BRM-FDs makes it possible to use the external trigger provided by the TA-BRM-FDs to record cosmic ray events. Four data acquisition sessions in 2015 (\(\sim 123\) hours) and one in 2016 (for a total of \(\sim 140\) hours) were performed using the TA-BRM-FDs trigger. Nine showers were found in the EUSO-TA data collected in 2015, while several tens that crossed the FOV were not, as the EUSO-TA spatial and temporal resolutions were optimized for observations from space.
To improve the data acquisition, an upgrade of EUSO-TA is ongoing to allow for remote operations with self-trigger capability, defining the so-called EUSO-TA2 [2]. The electronics will be updated with SPACIROC3 boards [8], with a dead time at the beginning of the GTU of 50 ns and 5 ns double pulse resolution. Moreover, the detector will have self-trigger capabilities (level-1 trigger logic) [9]. The data read-out will be possible on three timescales: 2.5 \(\mu\)s for the observation of showers; 320 \(\mu\)s to follow the evolution of fast atmospheric events; and 40.96 ms for slow events such as meteors and strange quark matter (strangelets).
In this work, we describe the methods used to estimate the detection limits, in terms of energy and distance to shower, of EUSO-TA with the internal level-1 trigger. Moreover, the expected UHECR trigger rates are reported.
## 2 Correction of the energy
It is important to understand the EUSO-TA detection limit in terms of shower energy and the distance to the shower along the telescope optical axis. However, the relatively small FOV of EUSO-TA allows observation of only a portion of the shower that in most cases does not include the portion of the shower with the maximum number of particles, resulting in a smaller signal than
would be observed if the shower maximum was in the FOV. To account for this, we define a so-called equivalent energy relating the energy deposit of the partially-observed shower to the energy deposit at the shower maximum.
To compute the equivalent energies, we determine conversion factors to apply to the true (simulated) energies using a collection of shower simulations performed using CONEX [10]. The simulated showers have several energies (\(\mathrm{E}=10^{17}-10^{20}\) eV, with steps of \(\mathrm{log}\,\mathrm{E}(\mathrm{eV})=0.1\)) and zenith angles (\(\theta=0^{\circ}-65^{\circ}\), with steps of \(5^{\circ}\)). Several elevation angles of EUSO-TA (\(10^{\circ}\), \(15^{\circ}\), \(20^{\circ}\), and \(25^{\circ}\)) and distances from the detector (D = 1 - 50 km, in steps of 1 km) were considered. The energy conversion factor, \(\mathrm{f_{eq}}=(\mathrm{dE}/\mathrm{dX})_{\mathrm{obs}}/(\mathrm{dE}/\mathrm{ dX})_{\mathrm{max}}\), is defined as the ratio of the energy deposit per unit altitude of the shower at the observed point and that at the maximum. To reduce fluctuations in the results, 20 showers were simulated with each combination of energy and zenith angle, and for each one, the mean conversion factors were calculated for different distances and elevation angles of EUSO-TA.
Values of the energy conversion factors are displayed in Figure 1 as a function of the shower energy and distance from the detector. The plots refer to elevation angles of \(15^{\circ}\), and zenith angles of \(0^{\circ}\), \(30^{\circ}\), and \(60^{\circ}\). The conversion factors are near 1 in cases where the shower maximum is within the FOV, and become smaller as the maximum moves farther from the FOV. As expected, by increasing the energy and decreasing the zenith angle, the shower maximum is closer to the detector, i.e. at lower altitudes.
To estimate the energy of the partially-observed shower, the atmospheric transmission has to be taken into account, as for a given elevation angle, the slant depth along the EUSO-TA optical axis that intersects the shower axis may be different from that pointing to the shower maximum. The Linsley parametrization used in the CORSIKA simulation software [11] was used to retrieve the atmospheric depths at given altitudes. For a given atmospheric depth X, the atmospheric transmission is given by \(\mathrm{T}=\mathrm{exp}(-\mathrm{X}/\Lambda)\), where \(\Lambda\) is the mean free path for Rayleigh scattering, which, in the near-UV, is \(\Lambda(350\mathrm{~{}nm})=1700\mathrm{~{}g/cm^{2}}\). With the atmospheric slant depth up to the observed point of the shower \(\mathrm{X_{obs}^{slant}}\) and the one up to the shower maximum \(\mathrm{X_{max}^{slant}}\), it is possible to calculate the corresponding atmospheric transmission \(\mathrm{T_{obs}}\) and \(\mathrm{T_{max}}\). The atmospheric transmission is corrected for the fact that the number of photons arriving at the detector is inversely proportional to the square of the distance. The atmospheric correction factor \(\mathrm{f_{atm}}=\mathrm{T_{obs}}/\mathrm{T_{max}}\cdot\mathrm{D_{max}^{2}}/ \mathrm{D_{obs}^{2}}\) is the ratio between the atmospheric transmission to the observation point and the shower maximum, both corrected for the corresponding distances \(\mathrm{D_{obs}}\) and \(\mathrm{D_{max}}\). The conversion factor may be equal to 1 when the shower maximum is in the FOV, but in general, it is greater or less than 1, depending on
Figure 1: Energy conversion factors as a function of the shower energy and distance from the detector. The plots refer to \(15^{\circ}\) elevation angle and \(0^{\circ}\), \(30^{\circ}\), and \(60^{\circ}\) zenith angles.
the shower direction. The equivalent energy corrected for the atmospheric transmission becomes \(\rm E_{eq,atm}=f_{eq}\cdot f_{atm}\cdot E_{sim}\).
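As an illustration of how the two correction factors combine, the equivalent energy can be computed as in the following sketch (made-up numbers, not the actual analysis code):

```python
import math

def equivalent_energy(E_sim, dEdX_obs, dEdX_max,
                      X_obs_slant, X_max_slant, D_obs, D_max,
                      mfp=1700.0):
    """Equivalent energy corrected for atmospheric transmission (Sec. 2), sketched.

    dEdX_obs / dEdX_max : energy deposit at the observed point / at shower maximum.
    X_*_slant           : slant depths (g/cm^2) to the observed point / to the maximum.
    D_obs, D_max        : distances to the observed point / to the maximum (same units).
    mfp                 : Rayleigh-scattering mean free path at 350 nm (g/cm^2).
    """
    f_eq = dEdX_obs / dEdX_max
    T_obs = math.exp(-X_obs_slant / mfp)
    T_max = math.exp(-X_max_slant / mfp)
    f_atm = (T_obs / T_max) * (D_max ** 2 / D_obs ** 2)
    return f_eq * f_atm * E_sim

# Made-up illustrative numbers:
print(equivalent_energy(1e18, dEdX_obs=0.4, dEdX_max=1.0,
                        X_obs_slant=600.0, X_max_slant=750.0,
                        D_obs=20.0, D_max=24.0))
```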
## 3 Simulation set
For this analysis, a set of 10000 shower simulations was performed with CONEX, using the QGSJETII-04 hadronic interaction model, protons as primary particles, zenith angles in the range \(0^{\circ}-90^{\circ}\) drawn from the isotropic flux on a flat surface, i.e. \(\rm dN/d\cos(\theta)\sim\cos(\theta)\) and random azimuth angles in the range \(0^{\circ}-360^{\circ}\). The energy range was \(1\times 10^{17}-1\times 10^{20}\) eV with spectral index \(-1\) in logarithmic scale in order to have statistics also at the highest energies. The CONEX showers were then processed with the \(\overline{\rm Offline}\) framework [12] to perform the production and propagation of fluorescence and Cherenkov photons from the shower to the detector, and to simulate the detector response with the level-1 trigger. The showers were distributed over a large area of \(36\times 28\)\(\rm km^{2}=1008\)\(\rm km^{2}\) centered at half the distance between EUSO-TA and the CLF. The elevation angles chosen for EUSO-TA were \(10^{\circ}\), \(15^{\circ}\), \(20^{\circ}\) and \(25^{\circ}\), while the simulated background levels were 1 and 1.5 counts/pixel/GTU.
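As a sketch of the event generation (not the actual simulation chain; the energy sampling assumes that the spectral index \(-1\) corresponds to a flux uniform in \(\log_{10}E\)):

```python
import math, random

def sample_shower(rng):
    """Draw one shower with the simulation-set distributions of Sec. 3 (sketch).

    Zenith : dN/dcos(theta) ~ cos(theta)  ->  cos(theta) = sqrt(u),  theta in [0, 90] deg.
    Energy : spectral index -1, here taken as uniform in log10(E) over 10^17-10^20 eV.
    Azimuth: uniform in [0, 360) degrees.
    """
    cos_theta = math.sqrt(rng.random())
    theta = math.degrees(math.acos(cos_theta))
    log10_E = 17.0 + 3.0 * rng.random()
    phi = 360.0 * rng.random()
    return 10.0 ** log10_E, theta, phi

rng = random.Random(42)
showers = [sample_shower(rng) for _ in range(10000)]
```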
## 4 Detection limit
The simulation set described in Section 3 was used to study the detection limit. In Figure 2 the simulated showers are plotted with their distance vs. simulated energy and equivalent energy corrected for the atmospheric transmission. Plots are for an elevation of \(15^{\circ}\), background level of 1 count/pixel/GTU, and SPACIROC3 electronics board. The same analysis was performed also for background level of 1.5 count/pixel/GTU, elevation angles \(10^{\circ}\), \(20^{\circ}\), and \(25^{\circ}\), and SPACIROC1. The distance vs. \(\rm E_{sim}\) plot (left) indicates that at greater distances, higher energies are needed to trigger the showers. After the calculation of the equivalent energy corrected for the atmospheric transmission, in general, the energies decrease, as visible in the distance vs. \(\rm E_{eq,atm}\) plot (right).
The detection limit should separate the triggered and the non-triggered showers that partially overlap, due to the variety of simulated showers, and to the fact that in the computation of the
Figure 2: Triggered and non-triggered showers with their distance vs. the simulated energy (left) and the equivalent energy corrected for the atmospheric transmission (right), for elevation angle \(15^{\circ}\), background 1 count/pixel/GTU, and SPACIROC3 board. The detection limits are drawn, too.
equivalent energy, only the fluorescence emission is taken into account. The Cherenkov emission, which is not considered in this analysis, can contribute to the signal and increase the detection probability. The method defined to estimate the detection limit assumes that the triggered showers should lie on the right side of the detection limit (at higher energies), and the non-triggered ones on the left side. Therefore, an efficiency can be defined as \(\epsilon=(\rm{N}_{T}^{right}+\rm{N}_{NT}^{left})/(\rm{N}_{T}^{tot}+\rm{N}_{NT}^{ tot})\), where \(\rm{N}_{T}^{right}\) and \(\rm{N}_{T}^{tot}\) are the number of triggered showers on the right side of the line and the total number of triggered showers, respectively, and \(\rm{N}_{NT}^{left}\) and \(\rm{N}_{NT}^{tot}\) are the number of non-triggered showers on the left side of the line and the total number of non-triggered showers. The detection limit is defined by lines that maximize \(\epsilon\). As functions representing the limit, both the equations \(\rm{D}=aE+b\) and \(\rm{D}=a\sqrt{E}+b\) were considered, where \(\rm{D}\) and \(\rm{E}\) are the distance and the energy limit, respectively. The first equation takes into account that at greater distances, the signal integration time in a given pixel is greater. This reasoning leads to an energy limit \(\rm{E}\propto D\), and vice-versa \(\rm{D}\propto E\). The second equation is based on the fact that the registered counts decrease as \(\rm{1/D^{2}}\) and that, therefore, the energy limit to register the minimum number of counts in order to trigger the shower is \(\rm{E}\propto D^{2}\), and vice-versa \(\rm{D}\propto\sqrt{E}\). Iterations over different values of the angular coefficient a and of the intercept b led to the combination of parameters that maximize \(\epsilon\) and defined the detection limit. Detection limits for these two distance-to-energy relationships are drawn in the plot on the right of Figure 2.
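The line-fitting step can be sketched as a plain grid search over the parameters a and b (illustrative; the grids, units and binning are left to the user):

```python
import math

def detection_limit(events, a_grid, b_grid, form="linear"):
    """Grid search for the limit D = a*g(E) + b maximizing epsilon (Sec. 4), sketched.

    events : list of (E, D, triggered) tuples (energy, distance, bool).
    form   : "linear" uses g(E) = E, "sqrt" uses g(E) = sqrt(E).
    Triggered showers should lie on the high-energy side of the limit, i.e. at
    distances not larger than the limit evaluated at their energy.
    """
    g = (lambda E: E) if form == "linear" else math.sqrt
    best_eps, best_a, best_b = -1.0, None, None
    for a in a_grid:
        for b in b_grid:
            correct = sum(1 for E, D, trig in events if trig == (D <= a * g(E) + b))
            eps = correct / len(events)
            if eps > best_eps:
                best_eps, best_a, best_b = eps, a, b
    return best_eps, best_a, best_b
```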
The detection limits were calculated for different backgrounds (1 and 1.5 counts/pixel/GTU), elevation angles (\(\rm{10^{\circ}}\), \(\rm{15^{\circ}}\), \(\rm{20^{\circ}}\) and \(\rm{25^{\circ}}\)), and with SPACIROC1 and SPACIROC3 electronics boards. No clear improvement in the energy threshold was visible when using SPACIROC3 boards instead of SPACIROC1 boards, but all the results together define a range of energy varying with the distance.
## 5 Estimation of the level-1 trigger rates
The foreseen upgrade of EUSO-TA includes the self-trigger capability, to operate independently from the TA-BRM-FDs. The level-1 trigger logic for the observation of showers was designed and implemented in the \(\overline{\rm{Off}}\)\(\overline{\rm{line}}\) framework, in order to test it through simulations and evaluate the UHECR trigger rate. The simulation sets discussed in Section 3 were used also in this analysis.
In Figure 3, the energy distributions of the simulated showers with spectral index \(-1\) in logarithmic scale are shown (top). The plots are for elevation angle \(\rm{15^{\circ}}\) and for both the SPACIROC1 (left) and SPACIROC3 (right). The distributions represent the simulated showers (black), the showers in the FOV (blue), and the triggered showers (red). The energy distributions must be rescaled based on measured fluxes of UHECRs, to retrieve the natural spectral index \(-2\) in logarithmic scale. For this purpose, both the energy fluxes measured with the Telescope Array (TA) [13] and the Pierre Auger Observatory (PAO) [14] were used. The flux measured by TA included showers with energy \(\rm{E}>10^{17.2}\) eV and zenith angles \(\rm{\theta}<65^{\circ}\), while that measured by the PAO included showers with zenith angles \(\rm{\theta}<40^{\circ}\). In the plots, solid lines are for all showers with no cuts; dashed lines show the distributions with the cuts used to measure the TA flux; the dotted lines show those with the cuts used for the PAO flux. It is visible that showers start to generate triggers at energies \(\rm{\simeq 10^{18}}\) eV.
In the first step, the energy distributions with the same cuts used in the TA and PAO spectra were rescaled. For each \(i\)-th bin, the number of expected triggered events \(\rm n_{trig,exp,i}^{cut}\) with cuts and in the time interval of 123 h (observation time during the data acquisitions in 2015, used as a reference), is calculated as \(\rm n_{trig,exp,i}^{cut}=n_{trig,i}^{cut}/n_{sim,i}^{cut}\cdot S_{EUSO-TA}^{cut}\cdot F_{i}^{cut}\cdot dE_{i}\), where \(\rm n_{trig,i}^{cut}\) and \(\rm n_{sim,i}^{cut}\) are
the number of triggered and simulated events with cuts, respectively; \(\rm S^{cut}_{\rm EUSO-TA}\) is the exposure of EUSO-TA; \(\rm F^{cut}_{i}\) is the flux measured with the PAO or the TA; \(\rm dE_{i}\) is the differential energy. The rescaled distributions are visible in the bottom plots of Figure 3.
In the second step, the expected trigger rates with cuts were rescaled to retrieve those without cuts on the zenith angle, \(\rm n_{trig,exp,i}\). This was done by multiplying the former by the ratio between the total number of events \(\rm N_{trig}\) and the total number of events with cuts \(\rm N^{cut}_{trig}\): \(\rm n_{trig,exp,i}=N_{trig}/N^{cut}_{trig}\cdot n^{cut}_{trig,exp,i}\). The corresponding distributions are plotted in the bottom plots of Figure 3 with magenta and green dashed lines, considering the TA and PAO spectrum, respectively. The cut on the zenith angle used for the PAO spectrum (\(\theta<40^{\circ}\)) in some cases considerably reduced the statistics of events, as can be seen in the distribution of triggered events with this cut in the bottom plots of Figure 3, where one bin is empty. As the flux per bin measured by the PAO with \(\theta<40^{\circ}\) differs only by a few percent from that measured with the same experiment with \(\theta<60^{\circ}\)[15], the flux with \(\theta<40^{\circ}\) was applied to the larger set of events with \(\theta<60^{\circ}\)1. The final distributions of triggered events rescaled with the TA and PAO fluxes are presented in thicker lines, in magenta and green, respectively. The potential trigger rate in 123 hours was calculated as the
Figure 3: Energy distribution of the simulated showers with spectral index \(-1\) in logarithmic scale (top), and rescaled by the UHECR flux measured by the PAO and the TA (bottom) for elevation angle \(15^{\circ}\) and for SPACIROC1 (left) and SPACIROC3 (right) boards. The distributions are for the simulated showers, showers in the FOV, and triggered showers. Solid lines are for all showers with no cuts; dashed lines show the distributions with the cuts on the energy and zenith angle used to measure the flux by TA; dotted lines show the distributions with the cuts on the zenith angle used to measure the flux by PAO.
sum of the expected events per bin.
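The two rescaling steps can be summarized in a short sketch (illustrative; exposures, fluxes and bin widths are inputs taken from the actual analysis):

```python
def expected_trigger_rate(n_trig_cut, n_sim_cut, flux_cut, dE, exposure,
                          n_trig_total, n_trig_cut_total):
    """Expected triggered events in the reference observation time (Sec. 5), sketched.

    Step 1 (with cuts)  : n_cut_i = (n_trig_i / n_sim_i) * S * F_i * dE_i
    Step 2 (remove cuts): n_i     = (N_trig / N_trig_cut) * n_cut_i
    Per-bin lists: n_trig_cut, n_sim_cut, flux_cut, dE.  Scalars: exposure and totals.
    """
    per_bin = []
    for n_t, n_s, F_i, dE_i in zip(n_trig_cut, n_sim_cut, flux_cut, dE):
        n_cut_i = (n_t / n_s) * exposure * F_i * dE_i if n_s else 0.0
        per_bin.append(n_cut_i * n_trig_total / n_trig_cut_total)
    return per_bin, sum(per_bin)   # per-bin expectations and their sum (total rate)
```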
Thinking in terms of single acquisition sessions (about 30 h each), these results indicate that with the level-1 trigger it could be possible to detect a few showers/session in the case of a 1 count/pixel/GTU background level and SPACIROC1 boards. Using SPACIROC3 boards, the trigger rates increase by 15-20%. With a higher background of 1.5 counts/pixel/GTU, the trigger rate halves. For reference, four showers were found by applying (offline) the level-1 trigger algorithm on the data collected with the external trigger in 2015, which corresponds to about 1 shower/session. Including showers recognized with manual checks, a total of nine showers were identified, or about 2 showers/session.
## 6 Summary and Conclusion
In order to evaluate the detection limit of EUSO-TA and the expected trigger rate with the level-1 trigger developed for EUSO-TA, several configurations have been considered, in terms of electronics (SPACIROC1 boards that were used during the data taking in 2015 and 2016 and SPACIROC3 included in the upgrade of the detector), and two background levels (1 and 1.5 counts/pixel/GTU).
To evaluate the detection limit, the real energy of the showers has been rescaled to account for the fact that only a portion of them can be observed due to the limited FOV of EUSO-TA (with respect to usual cosmic-ray fluorescence detectors on the ground). This rescaling was applied to the different sets of simulations. For all configurations, the detection limit was evaluated and in general ranges between \(1\times 10^{18}\) eV and \(1\times 10^{19}\) eV for distances between 0 and 50 km.
The expected trigger rate for EUSO-TA with the internal level-1 trigger was evaluated too, in the same simulated conditions. The expected rate is a few detections per acquisition session (of about 30 h each), in the case of low background level (1 count/pixel/GTU) and SPACIROC1 boards. By using SPACIROC3 boards, the trigger rates increase by \(15\%-20\%\). For the higher background level (1.5 counts/pixel/GTU) the rates halve.
A more complete description of the analysis and more detailed results will be included in a paper under preparation.
**Acknowledgments: This work was partially supported by Basic Science Interdisciplinary Research Projects of RIKEN and JSPS KAKENHI Grant (JP17H02905, JP16H02426 and JP16H16737), by the Italian Ministry of Foreign Affairs and International Cooperation, by the Italian Space Agency through the ASI INFN agreements Mini-EUSO n. 2016-1-U.0, EUSO-SPB1 n. 2017-8-H.0, OBP (n. 2020-26-Hh.0), EUSO-SPB2 n. 2021-8-HH.0 and by ASI INAF agreeement n. 2017-14-H.O, by NASA awards and grants 11-APRA-0058, 16-APROBES16-0023, 17-APRA17-0066, NNX17AJ82G, NNX13AH54G, 80NSSC18K0246, 80NSSC18K0473, 80NSSC19K0626, 80NSSC18K0464 and 80NSSC22K1488 in the USA, Deutsches Zentrum fur Luft- und Raumfahrt, by the French space agency CNES, the Helmholtz Alliance for Astroparticle Physics funded by the Initiative and Networking Fund of the Helmholtz Association (Germany), by National Science Centre in Poland grant no 2017/27/B/ST9/02162 and 2020/37/B/ST9/01821. L. W. Piotrowski acknowledges financing by the Polish National Agency for Academic Exchange within Polish Returns Programme no. PPN/PPO/2020/1/00024/U/00001 and National Science Centre, Poland grant no. 2022/45/B/ST2/02889. Russian team is supported by ROSCOSMOS, "KLYPVE" is included into the Long-term program of Experiments on board the Russian Segment of the ISS. Sweden is funded by the Olle Engkvist Byggmastare Foundation. | JEM-EUSOのEUSO-TA地上基盤の蛍光検出器は、テレスコープアレイ(TA) site in Utah(USA)で運用されています。その最新版の開発は、これまでのデータ収集キャンペーンで、テレスコープアレイ(TA)のブラックロックMesa蛍光検出器からの外部トリガにより、最初の9個の超高エネルギー宇宙線イベントを検出しました。この最新版には、独立検出のためのトリガアルゴリズムの設置と、電子機器の更新が含まれています。新しい設定と異なる背景条件でのEUSO-TAの動作を理解するためのシミュレーション研究が開発されました。この研究は、地上基盤検出器の検出限界を推定し、それを気球ベース検出器の検出限界に拡張することができました。さらに、超高エネルギー宇宙線でのトリガー率の推定も実現しました。この作業では、 |
2310.10652 | BRC-20: Hope or Hype | BRC-20 (short for Bitcoin Request for Comment 20) token mania was a key
storyline in the middle of 2023. Setting it apart from conventional ERC-20
token standards on Ethereum, BRC-20 introduces non-fungibility to Bitcoin
through an editable field in each satoshi (0.00000001 Bitcoin, the smallest
unit), making them unique. In this paper, we pioneer the exploration of this
concept, covering its intricate mechanisms, features, and state-of-the-art
applications. By analyzing the multi-dimensional data spanning over months with
factual investigations, we conservatively comment that while BRC-20 expands
Bitcoin's functionality and applicability, it may still not match Ethereum's
abundance of decentralized applications and similar ecosystems. | Qin Wang, Guangsheng Yu | 2023-08-31T02:59:52 | http://arxiv.org/abs/2310.10652v1 | # BRC-20: Hope or Hype
###### Abstract
BRC-20 (short for Bitcoin Request for Comment 20) token mania was a key storyline in the middle of 2023. Setting it apart from conventional ERC-20 token standards on Ethereum, BRC-20 introduces non-fungibility to Bitcoin through an editable field in each satoshi (0.00000001 Bitcoin, the smallest unit), making them unique. In this paper, we pioneer the exploration of this concept, covering its intricate mechanisms, features, and state-of-the-art applications. By analyzing the multi-dimensional data spanning over months with factual investigations, we conservatively comment that while BRC-20 expands Bitcoin's functionality and applicability, it may still not match Ethereum's abundance of decentralized applications and similar ecosystems.
BRC-20, Token standard, Non-fungibility, HCI
## I Introduction
To date (August 2023), Bitcoin [1] has been operating successfully for 15 years. In terms of market capitalization1, it currently holds the position of the 10th largest asset globally (US$590.74b), just behind Berkshire Hathaway (US$773.17b). Moreover, within the cryptocurrency space, Bitcoin remains the dominant player, accounting for over 53% of the market share2, far surpassing the second-ranking crypto-asset ETH (19.1%, Ethereum native token [2]). Despite its dominance, applications leveraging or operating on Bitcoin have been scarce due to its UTXO data structure [3], limiting its extensibility. Fortunately, recent developments with the emergence of a Bitcoin-fitted standard may change this situation.
Footnote 1: Global ranking, [https://companiesmarketcap.com/](https://companiesmarketcap.com/) {August 2023}. Footnote 2: Cryptocurrency charts, [https://coinmarketcap.com/charts/](https://coinmarketcap.com/charts/) {August 2023}. \*CSIRO Data61, Australia.
BRC-20, or Bitcoin Request for Comment 20 [4], is modeled after the Ethereum token standard indexed with ERC-20 [5] and was introduced in March 2023 by an anonymous developer known as Domo [6]. BRC-20 is basically Bitcoin's version of ERC-20, even with some major caveats like a lack of smart contracts. The similarity comes from its role as the first token standard defined on Bitcoin, while the key distinction is that BRC-20 incorporates non-fungibility features from ERC-721 [5], making it a hybrid standard encompassing both ERC-20 and ERC-721 functionalities.
In Ethereum, non-fungible tokens (NFTs) [7] are implemented through smart contracts, where each user is assigned a unique token ID to claim ownership of a specific asset, such as JPEG files or Crypto punk images, stored off-chain on a server. In contrast, BRC-20 tokens are created through processes called _ordinal_ and _inscription_ (cf. Sec.II), which involves adding data to identifiable satoshis (the smallest unit of Bitcoin, 0.00000001 BTC). This data can represent user-customized metadata, ranging from unique identifiers to images, and is stored on-chain. When BRC-20 tokens are transferred, the inscribed data on the satoshis is also transferred via transactions, allowing users to mint NFTs on the Bitcoin network.
BRC-20 has prominently emerged as a focal point within the Bitcoin network, commanding significant attention as underscored by an array of market indicators including Bitcoin's block size, mempool transactions, and transaction fees. During the fervor of the BRC-20 period spanning from early February 2023 to May 2023, several notable developments occurred [8]: (i) The average block size of Bitcoin experienced a substantial surge, leaping from 1.2MB to over 2MB. (ii) The volume of transactions within the memory pool demonstrated a consistent upward trajectory, nearing the 25,000 transaction mark. This contrasts with the relatively stable level of around 5,000 transactions that characterized much of 2022. (iii) Ordinal transaction fees exhibited a steady rise, concurrently driving an approximate 10% increase in non-Ordinal transaction fees throughout the entirety of March. (iv) The cumulative fees accrued from the minting of Ordinal Inscriptions have now surpassed the 150 BTC milestone. Beyond that, various associated platforms/tools have further contributed to this trend: (v) Statistical resources like Ordinal Wallet [9], UniSat [10], and Dune Analytics [11][12] also corroborate the upward trajectory in minted Ordinals.
**Gaps in user perception.** Despite BRC's remarkable achievements within a short timeframe, its awareness remains surprisingly low. Even among seasoned blockchain researchers and developers (as gathered through informal random surveys without recorded responses), it's evident that very few are acquainted with BRC, Bitcoin token standards, or Bitcoin NFTs. Moreover, our explorations also unveiled that existing resources are inadequate for newcomers. While there are initial introductions to the concept (cf. the _final_ paragraph of Sec.I), they largely focus on providing a basic operational overview without digging into the multifaceted aspects involved. This realization motivates our pursuit of understanding this intriguing yet "enigmatic" term, and discerning its essence as either a beacon of _hope_ or a product of _hype_.
**Our attempts.** We approach this via three fundamental pillars.
\(\Leftrightarrow\)_Systematic contributions_. We extensively dive into the available open-access resources, encompassing blogs, wikis, forum posts, news articles, Git repositories, and a limited number of scholarly works, based on which we methodically organize and present a clear and concise understanding of _what BRC is_ and _how it functions_ (Sec.II), marking a pioneering step in current research. Our exposition commences with an exploration of the fundamental structure of Bitcoin (Sec.II-A)
and progresses to elaborate on distinctive aspects like ordinals (Sec.II-B) and inscriptions (Sec.II-C), forming pivotal procedures within the BRC operation.
\(\Leftrightarrow\)_Quantitative contributions._ We embark on a comprehensive series of quantitative investigations across multiple dimensions to unveil the genuine dynamics and sentiment prevailing within the market. Our approach involves a meticulous examination of the market performance (Sec.IV) of a carefully selected group of representative tokens--comprising three BRC-20 and five ERC-20 projects--spanning a period of four months from the ascent of BRC to the point of composing this study. This analysis encompasses an assessment of various factors including price fluctuations, duration of popularity, market capitalization, and daily transaction volumes. Subsequently, we delve into the user responses evident in social media platforms through tweets (Sec.V) featuring specific hashtags during a randomly chosen recent week. This investigation involves the scrutiny of post content, languages used, influencers contributing to discussions, and the identification of potential fraudulent activities. Additionally, we delve into the historical mainstream prices of tokens (Sec.VI), delineating the trajectory of each token wave to ascertain the presence of a potential new BRC-formed wave.
\(\Leftrightarrow\)_Qualitative contributions._ We conduct a qualitative exploration (Sec.VII) that involves juxtaposing BRC-20 against established token standards (Sec.VII-A). Through this comparison, we derive both the advantages (Sec.VII-B) and intrinsic limitations (Sec.VII-C) of BRC-20. Building upon these observations (together with quantitative results), we further compile a review of the actualities and misconceptions present within user perceptions (Sec.VIII-A), culminating in our proposed implications to mitigate these aspects (Sec.VIII-B).
**Our results.** We present a series of significant findings from each investigated section, which we synthesize in Tab.I. Additionally, we offer an assessment of the level of both _hope_ and _hype_ within the BRC-20 ecosystem. In this context, _hope_ signifies the potential for sustainable prosperity, whereas _hype_ denotes a surge in interest driven by arbitrage, often accompanied by a risk of overvaluation. Upon comprehensive evaluations, we observe a slight predominance of the _hype_ (34) aspect over the _hope_ (27) element. This suggests that a more cautious sentiment towards this new concept should be taken into consideration. Meanwhile, it's important to note that the benchmark for our analysis is ERC-based markets (including BNB Chain, Avalanche, etc.), which may lead to a certain level of imbalance when comparing Bitcoin-related markets.
\(\Delta\)**Limitations.** Our investigations have certain limitations with respect to data collection. First, we acknowledge the _limited scope of our token portfolio_, which may introduce bias into our results. This limitation arises from our focus on a selected group of representative tokens, potentially excluding relevant others. The rationale behind this selection is that many tokens and projects exhibit strong correlations that might not necessarily contribute significantly to the overall market trends. Additionally, some tokens possess relatively low market capitalization and therefore may have limited impact on the broader market dynamics. Second, our analysis is constrained by _the short timeframe of tweet data_ collection. Due to resource constraints (mainly costs and human effort), we conducted investigations over a randomly chosen week of recent tweets. While this data snapshot may not capture the entire range of market sentiments, it can still provide a reasonably representative picture of recent market performance. Furthermore, our assessment is partially based on _subjective summaries_ and informal surveys. We note the potential for slight inaccuracies in this analysis, particularly on the market side, which is influenced by a multitude of factors.
**Related sources.** Rodarmor [13] introduced a scheme for assigning serial numbers to Bitcoin satoshis. A relatively complete introduction to ordinal theory can be found at [14]. Binance Research published several early reports [4][8][15] that delve into the development of BRC-20. Investigating the impact of Bitcoin Ordinals on transaction fees, Bertucci [16] concluded that ordinal inscriptions tend to incur lower fees compared to regular transactions. In parallel, Kiraz et al. [17] presented an alternative approach to settling NFT trades on the Bitcoin blockchain using zero-knowledge proofs, distinct from the ordinal method. Additionally, various media outlets have offered accessible explanations of this emerging concept [18][19][20][21]. Trevor.btc et al. have provided detailed coverage of the development of Ordinals/BRC-20 and hosted "The Ordinal Show" [22] podcast. Readers keen on further exploration can conduct searches using relevant keywords such as _BRC-20_, _Bitcoin NFT_, and _Ordinals_, along with associated techniques covering _UTXO_[23], _Taproot_[24] and _SegWit_[25] (cf. Sec.II) and surrounding applications (Sec.II-D).
## II BRC-20 Construction
### _Preliminary: Bitcoin UTXO & Transaction Fundamentals_
We begin by introducing the fundamental concept of the Unspent Transaction Output (UTXO) model, which serves
TABLE I: Summary of findings.
as the underlying framework for Bitcoin transactions. In this model (Listing 1), the outputs of one transaction become the inputs for subsequent transactions, creating a continuous chain of transactions without the need for traditional accounts.
```
Tx0 (output1: 0.5 btc) --> Tx2 (input1: 0.5 btc)
Tx2 (output1: 0.3 btc) --> Tx3 (input1: 0.3 btc)
Tx1 (output1: 0.2 btc) --> Tx2 (input2: 0.2 btc)
Tx2 (output2: 0.2 btc, coinbase; output3: 0.1 btc, coinbase)
Tx1 (output2: 0.1 btc)
```
Each transaction is composed of inputs and outputs, where inputs refer to the outputs of previous transactions. In the UTXO model, the term _fee_ is used to define the difference between the total input and output amounts, which is then given to the miner who includes the transaction in a block.
Security in Bitcoin transactions is upheld by locking and unlocking scripts. The locking script (or scriptPubKey) sets the conditions that must be met to spend the output. On the other hand, the unlocking script (or scriptSig) is provided by the spender to meet these conditions and spend the output. It's also important to remember that 1 Bitcoin (BTC) equates to \(10^{8}\) satoshis. Since the block size is typically restricted to approximately 1MB, miners prioritize transactions with a higher fee rate (\(\text{fee\_rate}=\text{fee}/\text{size}\)).
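To make the fee mechanics concrete, the minimal sketch below (illustrative only; the amounts and the 250-byte transaction size are made-up placeholders, not part of the paper) computes the implicit fee and the fee rate miners use for prioritization.

```
# Minimal illustration of UTXO fee accounting (all values are hypothetical).
SATS_PER_BTC = 100_000_000  # 1 BTC = 10^8 satoshis

def tx_fee(input_sats: list[int], output_sats: list[int]) -> int:
    """Fee = total inputs - total outputs; it is claimed by the miner."""
    return sum(input_sats) - sum(output_sats)

def fee_rate(fee_sats: int, tx_size_bytes: int) -> float:
    """Miners rank transactions by fee_rate = fee / size (sat per byte)."""
    return fee_sats / tx_size_bytes

# Example: inputs worth 0.5 and 0.2 BTC, outputs worth 0.3 + 0.2 + 0.1 BTC.
inputs = [int(0.5 * SATS_PER_BTC), int(0.2 * SATS_PER_BTC)]
outputs = [int(0.3 * SATS_PER_BTC), int(0.2 * SATS_PER_BTC), int(0.1 * SATS_PER_BTC)]
fee = tx_fee(inputs, outputs)      # 10_000_000 sats = 0.1 BTC
print(fee, fee_rate(fee, 250))     # assuming a 250-byte transaction
```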
### _Bitcoin Ordinals: Tracking Every Satoshi_
The second key step is achieving field uniqueness in BRC-20 by leveraging Bitcoin Ordinals, which index each satoshi based on its mining order. For example, the first-ever mined satoshi in the genesis block is indexed as 0 and can be accessed at [https://ordinals.com/sat/0](https://ordinals.com/sat/0). Ordinals provide versatility with multiple representation formats:
* _Integer notation_: The ordinal number itself, reflecting the order in which the satoshi was mined. For example, 2099994106992659.
* _Decimal notation_: The block height at which the satoshi was mined, followed by the offset within the block. For example, 3891094.16797.
* _Degree notation_: The block information expressed in degrees, with the last number giving the order in which the sat was mined within its block. For example, \(3^{\circ}111094^{\prime}214^{\prime\prime}16797^{\prime\prime\prime}\).
* _Percentile notation_: The position of the satoshi in Bitcoin's total supply, expressed as a percentage. For example, 99.99971949060254%.
* _Name_: An encoding of the ordinal number using the characters "a"-"z", such as "satoshi".
The FIFO (First-In-First-Out) principle applies once a satoshi becomes part of a transaction. Suppose a transaction involves two inputs, each containing three satoshis, and an output containing four satoshis. In that case, the output will include the first four satoshis from the combined inputs. As in Listing 2, each "[...]" represents an input or output, and each satoshi is indexed with a character from "a" through "z".
Fees are handled similarly. If a transaction has two inputs, each containing two satoshis, and one output containing three satoshis, the output will comprise the first three satoshis from the combined inputs, and one satoshi will be used as a fee and assigned to a Coinbase transaction.
```
[a b c] [d e f] --> [a b c d] [e f]
[a b] [c d] --> [a b c]
coinbase tx: [SUBSIDY] [d] --> [SUBSIDY d]
```
Listing 2: Tracking the tagged satoshi - FIFO
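The FIFO rule of Listing 2 can be seen in a few lines of code. The sketch below is an illustration of the principle (not the ord reference implementation): it concatenates the satoshis of all inputs in order, slices them into the outputs, and treats any remainder as the fee that flows to the coinbase transaction.

```
def assign_ordinals(inputs: list[list[str]], output_sizes: list[int]):
    """FIFO assignment: concatenate input sats and slice them into outputs;
    whatever is left over is the fee handed to the coinbase transaction."""
    sats = [s for inp in inputs for s in inp]   # first-in, first-out
    outputs, pos = [], 0
    for size in output_sizes:
        outputs.append(sats[pos:pos + size])
        pos += size
    fee_sats = sats[pos:]                       # assigned to the coinbase tx
    return outputs, fee_sats

# [a b c] [d e f] --> [a b c d] [e f]
print(assign_ordinals([list("abc"), list("def")], [4, 2]))
# [a b] [c d] --> [a b c], with 'd' paid as fee to the coinbase
print(assign_ordinals([list("ab"), list("cd")], [3]))
```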
Within Bitcoin Ordinals, another noteworthy innovation emerges in the form of _rare satoshis_[26], pursuing the most significant milestones in satoshis, similar to the iconic example of _Bitcoin Pizza_[27]. These satoshis can be distinctly identified as having been mined from specific blocks.
* _Common_: Any that is NOT the first satoshi of its block.
* _Uncommon_: The first satoshi of each block.
* _Rare_: The first of each difficulty adjustment period.
* _Epic_: The first satoshi of each halving epoch.
* _Legendary_: The first satoshi of each cycle.
* _Mythic_: The first satoshi of the genesis block.
### _Inscriptions: Embedding Messages in Satoshis_
The third crucial step involves incorporating personalized content into each unique satoshi. This concept is known as _Inscriptions_. Inscriptions leverage the Ordinals protocol, enabling the direct embedding of content (details in Tab.II) into a satoshi in the form of JSON (JavaScript Object Notation, also refer to Sec.III-A). This transformation effectively turns satoshis into NFTs, making them vessels for arbitrary data.
The data is stored within the segregated witness (SegWit [23]) section of a transaction. SegWit is a protocol upgrade that enhances scalability by modifying how data is stored in a block. In SegWit-enabled transactions, the transaction size is measured in weight units, \(\text{weight}=3\times\text{base size}+\text{total size}\), so witness bytes are effectively discounted to a quarter of their raw size; this discount is what makes it comparatively cheap to embed inscription data in the witness part of a transaction.
* _Numbered sequentially_. Each Inscription is systematically allocated a position as outlined by the Ordinal Theory. This introduces a distinct characteristic capable of conferring diverse levels of value upon distinct sequential creations, including Inscriptions minted following the block reward halving or the inaugural Inscription itself.
* _Scale limitation_. The Bitcoin block can accommodate a maximum of 4MB of data after the SegWit and Taproot upgrades. Considering that approximately 144 Bitcoin blocks can be mined daily, a total of about 210GB of space is available annually for Inscription minting (a single Inscription requires 4MB of space). In contrast, NFTs based on smart contracts lack such limitations, theoretically allowing for unlimited minting.
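To see why witness-stored data enjoys this headroom, a minimal sizing sketch is given below; the weight formula follows the SegWit consensus rules, while the byte counts are invented examples rather than figures from the paper.

```
import math

MAX_BLOCK_WEIGHT = 4_000_000  # consensus limit after the SegWit/Taproot upgrades

def tx_weight(base_size: int, total_size: int) -> int:
    """BIP-141: weight = 3 * base size + total size (witness bytes count 1/4)."""
    return 3 * base_size + total_size

def virtual_size(base_size: int, total_size: int) -> int:
    return math.ceil(tx_weight(base_size, total_size) / 4)

# Hypothetical inscription-carrying transaction: 200 non-witness bytes plus
# a 380,000-byte payload pushed into the witness.
base, witness = 200, 380_000
print(virtual_size(base, base + witness))   # ~95,200 vbytes despite ~380 kB of data
```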
### _Extension: ORC-20 and Surroundings_
**ORC-20.** ORC-20 [28], created by OrcDAO, is an open standard designed to enhance the capabilities of ordered tokens on the Bitcoin network. It ensures seamless backward compatibility with BRC-20. Unlike BRC-20, which necessitates a _one-time transfer inscription_ in each transaction, ORC-20 allows for the reusability of the _mint_ and _send_ ordinal inscriptions within a transaction.
**Surroundings.** We also investigate a series of supporting applications that are relevant to BRC-20 (Tab.III).
## III BRC-20 on Bitcoin Networks
### _Implementing BRC-20_
The design of implementation is to address the incompatibility between the stateless UTXO-based models of Ordinals and the stateful account-based approach of BRC-20. At the heart of this reconciliation is the use of inscriptions to record state transitions, transforming these immutable markers into auditable proofs. This method hinges on the construction and maintenance of an _off-chain state indexer_, which records the balance of each account. Inscriptions on the Bitcoin network then serve as triggers to update these off-chain states. In essence, BRC-20 has enabled three primary functions.
\(\Leftrightarrow\)_Deploy a new token_. The operation initiates the creation of a new BRC-20 token (Deploy, Listing 4). It begins on-chain with the inscription of a satoshi to represent the deployment. This inscription contains several crucial details such as the protocol name (_brc-20_), operation (_deploy_), token's name (_tick_), the total amount of tokens to be issued (_max_), and the maximum amount of tokens to be minted in each minting round (_lim_). After this inscription is added to the Bitcoin network, an off-chain process verifies whether a state already exists for the given token name. If not, a new state is created, with the balance of each account initialized to zero or a pre-defined value and the token's properties (those defined in Inscriptions) added to the state. The on-chain inscription structure and the off-chain update are listed below.
```
#OnchainInscription"P":"brc-20",#protocolname":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":":">":">":">":">":">":">":">":":">":">":">":">":">":">":":">":":">":">":">":">":">":":">":">":":">":":">":">":">":">":">":">":">":">":">":">":":">":">":">":">":">":">":">":">":">":">":":">":">":">":":">":":">":">":">":">":":">":">":">":">":":">":":">":":">":">":":">":">":">":">":">":":">":">":">":">":">":":">":">":":">":">":">":">":">":">":">":":">":">":">":">":">":">":">":":">":">":">":">":":">":">":">":">":":">":":">":":">":">":">":">":">":">":">":":">":">":":">":">":">":">":">":">":">":">":">":":">":">":">":">":">":">":">":">":">":">":">":">":":">":">":">":">":">":">":">":">":">":">":">":">":">":">":":">":":">":">":">":">":":">":">":":">":">":":">":">":":">":">":">":">":">":">":">":">":">":">":">":">":">":":">":">":":">":">":">":">":">":":">":">":">":">":">":">":">":">":">":":">":">":":">":">":":">":">":">":">":">":":">":":">":":">":">":":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":":">":">":">":":">":":">":">":":">":">":">":">":">":">":">":">":">":">":">":">":":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":">":":">":">":">":">":":">":">":">":":">":">":">":":">":":">":":">":":">":">":":">":":">":":">":">":":">":">":":">":":">":">":">":">":":":">":":">":":">":":">":">":">":">":">":">":">":">":">":">":">":":">":">":">":">":">":">":">":">":">":">":":">":">":">":">":">":":">":">":":">":">":":
\(\Leftrightarrow\)_Transfer tokens_. A transfer is likewise recorded by an on-chain inscription and settled by an off-chain balance update:

```
# On-chain Inscription
"p": "brc-20",    # protocol name
"op": "transfer", # operation
"tick": "ordi",   # token name
"amt": "100"      # the amount of tokens being transferred

# Off-chain update
if state[tick] NOT exists:
    raise errors
if state[tick]["balance"][sender] >= amt:
    state[tick]["balance"][sender] -= amt
    state[tick]["balance"][receiver] += amt
```
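Putting the listings together, a minimal off-chain indexer could replay such inscriptions into balances as sketched below. This is our simplified illustration, not the reference indexer: the field names follow the listings above, while the mint rule (capped by _lim_ and _max_) and the error handling are assumptions.

```
# Simplified off-chain BRC-20 indexer sketch (illustrative only).
state: dict[str, dict] = {}  # tick -> {"max", "lim", "minted", "balance": {addr: amount}}

def apply_inscription(ins: dict, sender: str, receiver=None) -> None:
    if ins.get("p") != "brc-20":
        return                                   # ignore non-BRC-20 inscriptions
    op, tick = ins["op"], ins["tick"]
    if op == "deploy":
        if tick in state:
            raise ValueError("token already deployed")
        state[tick] = {"max": int(ins["max"]), "lim": int(ins["lim"]),
                       "minted": 0, "balance": {}}
    elif op == "mint":
        amt, t = int(ins["amt"]), state[tick]
        if amt > t["lim"] or t["minted"] + amt > t["max"]:
            raise ValueError("mint exceeds lim/max")
        t["minted"] += amt
        t["balance"][sender] = t["balance"].get(sender, 0) + amt
    elif op == "transfer":
        amt, bal = int(ins["amt"]), state[tick]["balance"]
        if bal.get(sender, 0) < amt:
            raise ValueError("insufficient balance")
        bal[sender] -= amt
        bal[receiver] = bal.get(receiver, 0) + amt
```

The crucial point mirrored from the text is that balances live entirely off-chain; the inscriptions only serve as an auditable log that any indexer can replay.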
### _Operating BRC-20 (NFT) on Bitcoin_
**The PSBT standard.**PSBT, short for partially signed Bitcoin transactions, is a Bitcoin standard (BIP-174 [28]) that enhances the portability of unsigned transactions and enables multiple parties to easily sign the same transaction. A PSBT is created with a set of UTXOs to spend and a set of outputs to receive. Then, the information of each UTXO necessary to create a signature will be added. Once the PSBT is prepared, it can be copied to a program capable of signing it. For multi-signature wallets, this signing step can be repeated using different programs on separate PSBT copies. Multiple PSBTs, each containing one or more necessary signatures, will later be combined into a single PSBT. Finally, the fully signed PSBT can be broadcast via networks.
**Transaction workflow.** Building upon this standard, we present a complete cycle for trading a BRC-20 transaction.
\(\Leftrightarrow\)_Seller's Operation._ A seller uses a transaction to inscribe a satoshi, indicating a transfer operation of a certain amount of BRC-20 tokens (e.g., _1000__ordi_). The inscribed satoshi manifests the seller's intent to sell the stated amount of tokens and carries detailed information, including the protocol name (_brc-20_), the operation (_transfer_), the token name (_ordi_), and the transfer amount (e.g., _1000_).
\(\Leftrightarrow\)_Creation of PSBT._ Next, the seller incorporates the inscribed satoshi as an input in the PSBT. To set the starting bid, the seller designates an output in the PSBT that transfers 0.2 BTC to their own address. This action signifies the seller's intention to exchange _1000 ordi_ tokens for _0.2 BTC_.
\(\Leftrightarrow\)_Publishing the PSBT._ Then, the seller publishes the PSBT to a marketplace, allowing potential buyers to review the transaction details and decide whether they wish to proceed.
\(\Leftrightarrow\)_Buyer's Operation._ If a buyer finds the _1000__ordi_ package appealing, they can select and finalize this PSBT. It indicates the buyer is willing to complete the exchange by providing the required funds (_0.2 BTC_ in this case) and, in return, receiving the inscribed satoshi from the seller.
\(\Leftrightarrow\)_Finalizing the PSBT._ Upon completing the PSBT, the buyer broadcasts it to the Bitcoin network. This entails sending the transaction data to the network, where it will be included in a future block and ultimately confirmed. Once included in a block, the transaction becomes visible to all network participants and becomes irreversible.
\(\Leftrightarrow\)_Off-chain State Updates._ After the on-chain finalization of the PSBT, the off-chain states need to be updated to reflect the new balances of the buyer and the seller. The buyer's _ordi_ token balance increases by _1000_, while the seller's _ordi_ token balance decreases by the same amount. Simultaneously, the seller's Bitcoin balance increases by _0.2 BTC_, while the buyer's Bitcoin balance decreases accordingly.
It is worth noting that the protocol necessitates two on-chain transactions to finalize the Transfer operation, ensuring a secure settlement for the trade between sellers and buyers.
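Purely schematically, and without any real wallet or Bitcoin library, the trade can be modelled as below; the addresses, UTXO names, and signature strings are placeholders, and SIGHASH and fee handling are deliberately omitted.

```
from dataclasses import dataclass, field

@dataclass
class Psbt:                       # toy stand-in for a BIP-174 PSBT
    inputs: list = field(default_factory=list)      # UTXOs to be spent
    outputs: list = field(default_factory=list)     # (address, value) pairs
    signatures: dict = field(default_factory=dict)  # partial signatures

# Seller: offer the inscribed satoshi (carrying "1000 ordi") for 0.2 BTC.
offer = Psbt(inputs=["inscribed_sat_utxo"], outputs=[("seller_addr", 0.2)])
offer.signatures["seller"] = "sig_over_seller_input"    # partially signed

# Buyer: add the payment input and the output receiving the inscription,
# then sign; the PSBT is now complete and can be broadcast.
offer.inputs.append("buyer_payment_utxo")
offer.outputs.append(("buyer_addr", "inscribed_sat"))
offer.signatures["buyer"] = "sig_over_buyer_input"
ready_to_broadcast = len(offer.signatures) == 2

# After on-chain confirmation, the off-chain indexer moves 1000 ordi from
# seller to buyer and 0.2 BTC in the opposite direction.
```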
## IV Token Investigations over Months
**Investigation overview.** We have specifically selected representative projects, including the foremost three BRC-20 projects (ORDI, MOON, OSHI), each boasting a market capitalization3 surpassing US$10 million. Additionally, we include the top five ERC-20 projects (MATIC, SHIB, WBTC, DAI, LINK) each with a market capitalization4 exceeding US$4 billion. Our data spans a period of four months, commencing from April (prior to the BRC craze) and extending through August (the present date of this study's composition).
Footnote 3: Top BRC-20 coin explorer: [https://www.coingecko.com/en/categories/brc-20](https://www.coingecko.com/en/categories/brc-20) [Aug 2023].
Footnote 4: Top ERC-20 coin explorer: [https://coincodex.com/cryptocurrencies/sector/ethereum-erc20/](https://coincodex.com/cryptocurrencies/sector/ethereum-erc20/) [Aug 2023].
### _Price and Market Cap Trends_
As **price trends** unfold (cf. Fig.1(a)), BRC-20 related tokens, represented by ORDI and MOON, exhibit sharp price increases shortly after their launch. This rapid appreciation in price is indicative of a surge in demand, likely driven by heightened market interest in these new offerings. However, such rapid increases in price can also signal overvaluation, particularly if they are not backed by strong fundamentals.
In contrast, ERC-20 related tokens, with the exception of SHIB, tend to show more stable price trends. It suggests that these coins' prices are less likely to be influenced by short-term market sentiment and more likely to reflect their intrinsic value. In particular, stablecoins like DAI can serve as a reliable store of value in the often volatile crypto markets
Examining **marketcap trends** (see Fig.1(b)), we observe substantial expansion for BRC-20 coins subsequent to their introduction. This growth is not solely attributed to price escalation; rather, it signifies increased coin circulation, implying a burgeoning user community and broader adoption of these coins. However, akin to the price dynamics, rapid market capitalization growth bears a dual nature: it can signal a coin's promise, yet it might also signify hype if the acceleration is overly swift and lacks sustainability.
**Finding-IV.1**: _Users are rapidly entering the BRC market within a span of one month, but they may lose their enthusiasm shortly afterwards in the subsequent months._
**Finding-IV.2**: _Compared to ERC-like tokens, BRC-based tokens constitute a small portion of the overall market size._
### _Average Return_
The indicator **average return** represents the percentage change in price, serving as an indicator of profitability: a higher average return indicates greater gains for an investor
who bought the coin at the beginning of the period and subsequently sold it. The chart illustrated in Fig.2(a) visually displays the mean returns of the three BRC-20 tokens (_blue_ bars) and the five ERC-20 tokens (_red_ bars). Evidently, the BRC-20 tokens, notably ORDI and MOON, demonstrate markedly higher average returns when compared to most ERC-20 tokens (possibly due to experiencing a high return rate during their initial launch rather than their current stable period). This suggests that over the observed duration, BRC-20 tokens may have presented an enhanced potential for profitability. It's worth noting that SHIB boasts a high return rate, aligning with the characteristics of memecoins like Dogecoin.
**Finding-IV.3**: _Certain BRC-20 tokens have demonstrated a remarkable return rate, often exceeding that of equivalent tokens within the same period by more than tenfold._
### _Volatility Analysis_
The concept **volatility**, typically quantified as the standard deviation of returns, embodies a measure of risk: heightened volatility signifies greater price variability and consequently
Fig. 1: Comparison on trends
elevated risk. As depicted in Fig.2, we discern that, except for ORDI, the BRC-20 coins exhibit higher volatilities in comparison to the majority of ERC-20 coins. This observation implies that throughout the assessed period, BRC-20 coins might have entailed increased risk. This observation aligns with the earlier insight that BRC-20 coins also yielded superior returns, reinforcing the tenet that elevated returns are often accompanied by elevated risk. Conversely, with the exception of SHIB, the remaining ERC-20 tokens manifest greater stability, characterized by a narrower range of price fluctuations. We postulate that SHIB's substantial and abrupt fluctuations may stem from its memecoin attributes, rendering it particularly sensitive to market dynamics, such as significant movements instigated by prominent market participants.
**Finding-IV.4**: _BRC-20 tokens showcase elevated volatilities and associated risks, aligning with their substantial returns._
### _Performance Analysis_
In our evaluation, we examine their **performance** using the Sharpe ratio5[29], a risk-adjusted return metric, to assess the efficacy of BRC-20 and ERC-20 tokens. The outcomes presented in Fig.2 reveal that, within the chosen tokens, both BRC-20 and ERC-20 tokens exhibit a diverse spectrum of Sharpe ratios, signaling varying levels of risk and return within these two token categories. It shows a diverse range of Sharpe Ratios, with DAI displaying a significantly negative value, while others like SHIB and WBTC exhibit modest positive ratios. A negative Sharpe Ratio might be indicative of a high-risk, low-reward scenario, often associated with market hype and speculative trading. On the other hand, a positive Sharpe Ratio could signal a more balanced risk-reward profile, hinting at the genuine potential or "hope" in the investment. The presence of these dynamics in BRC-20 markets may suggest a complex landscape, where both hope and hype coexist.
Footnote 5: Calculated as \(\mathsf{Sharpe Ratio}=\frac{\mathsf{Average Return}\cdot\mathsf{InitialFore Rate}}{\mathsf{Standardization}\cdot\mathsf{InitialFore Rate}}\)
**Finding-IV.5**: _BRC-20 tokens demonstrate heightened return rates alongside increased risks, with both surpassing the values observed in ERC-like tokens._
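For reproducibility, the three statistics used in this and the previous subsections can be computed from a daily closing-price series as follows. This is a generic sketch; the paper does not publish its exact risk-free-rate or annualization choices, so the defaults shown are assumptions.

```
import numpy as np

def metrics(prices, risk_free_rate=0.0):
    """Daily average return, volatility, and Sharpe ratio from closing prices."""
    prices = np.asarray(prices, dtype=float)
    returns = prices[1:] / prices[:-1] - 1.0   # simple daily returns
    avg = returns.mean()
    vol = returns.std(ddof=1)                  # volatility = std. dev. of returns
    sharpe = (avg - risk_free_rate) / vol      # Sharpe ratio, as in footnote 5
    return avg, vol, sharpe

print(metrics([10.0, 10.5, 10.2, 11.0, 10.8]))  # toy price series
```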
### _Correlation Analysis_
The **correlation matrix** analyzing the daily returns yields insights into the relationships among chosen assets (Fig.3). Among the BRC-20 tokens (ORDI, MOON, and OSHI), their correlation coefficients with each other are notably elevated, indicating a robust positive linkage in their price movements. This suggests that BRC-20 tokens, as a collective, tend to exhibit synchronous shifts, possibly due to shared market perception, common underlying methodologies (rooted in ordinals), or interdependencies within the ecosystem (such as shared developers/buyers). The pronounced correlations within BRC-20 group highlight their lack of independence in a portfolio context, a crucial factor to consider in devising strategies.
Among the ERC-20 tokens (MATIC, SHIB, WBTC, DAI, and LINK), the correlation coefficients also generally exhibit positivity, albeit with less intensity compared to the BRC-20 tokens. This disparity could stem from the more established and diverse landscape of the ERC-20 token market, encompassing a wider spectrum of blockchain applications.
A comparison between these two categories unveils discernible variations in correlation coefficients. While some movements overlap, distinctive traits remain. For instance, BRC-20's ORDI demonstrates a strong positive correlation with ERC-20's LINK and WBTC, indicating a similar response to market conditions. In contrast, BRC-20's MOON exhibits a lower correlation with these ERC-20 tokens, implying distinct market dynamics at play.
Fig. 3: Correlation
Fig. 2: Evaluations on prevalent BRC-20 and ERC-20 projects
**Finding-IV.6**: _BRC-20 tokens exhibit strong positive correlations among themselves, stronger than those among ERC-like tokens. The correlations between BRC-20 and ERC-20 tokens, however, remain relatively weak._
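The correlation structure of Fig. 3 can be reproduced with a few lines of pandas on daily closes; the column names are placeholders for the eight tokens, and the underlying data is not included here.

```
import pandas as pd

def daily_return_correlation(prices: pd.DataFrame) -> pd.DataFrame:
    """prices: daily closes, one column per token (e.g. ORDI, MOON, ..., LINK)."""
    returns = prices.pct_change().dropna()   # daily returns
    return returns.corr()                    # Pearson correlation matrix
```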
### _Usage Trend_
We proceed to compare **daily Bitcoin transactions** with **Ordinal inscriptions** (BRC-20 tokens) as depicted in Fig.4. The findings reveal a steady growth in the volume of Ordinal inscriptions (orange segments in bars). The cumulative count of Ordinal inscriptions (green line) exhibits a clear upward trajectory, indicating a progressive surge in the utilization and adoption of BRC-20 tokens over time.
However, the growth of Ordinal inscriptions should not be viewed in isolation. Non-ordinal Bitcoin transactions (blue segments in bars) still form a significant portion of daily transactions. This suggests that while BRC-20 tokens are gaining traction, traditional Bitcoin transactions remain prevalent.
**Finding-IV.7**: _Bitcoin inscriptions have witnessed consistent growth, yet they still represent a minor fraction of daily transactions within the overall network activity._
## V Sampled Sentiment Investigations
**Investigation overview.** Our experiments involve gathering public tweet data from a randomly selected week (August 5th to August 9th, 2023) to delve into the prevailing perceptions and attitudes toward BRC-20. The data gathered in our experiments amounts to approximately 2 megabytes and spans interactions with around 4,112 tweet users that mentioned the hashtag #brc20 or similar. We opted for this particular week as it closely aligns with the timeframe of paper composition.
### _Sentiment Reactions_
We also analyze the **user sentiment** and the public perception of BRC-20 and Ordinals.
Fig.5 reveals a largely neutral sentiment across all metrics - users, tweets, and potential impact - with positive sentiment following closely behind. This distribution could be indicative of a cautiously optimistic stance toward these tokens. However, negative sentiment is minimal, comprising less than 1% in all cases. The minimal presence of undefined sentiment suggests that most discussions are clear.
Fig.6 (time series) illustrates the daily sentiment counts, showing that neutral sentiment is consistently the most prevalent, followed by positive sentiment. Negative sentiment remains relatively low throughout the investigated period. A noticeable spike in undefined sentiment around August 7 might suggest a moment of uncertainty or controversy in the discourse, but it was short-lived.
The sentiment analysis suggests that the BRC-20 and Ordinals are currently viewed more with hope than hype. The dominance of neutral and positive sentiments, coupled with the minimal negative sentiment, indicates a generally optimistic perception. Nonetheless, since our investigation timeframe is relatively brief and sentiment tends to oscillate with market dynamics, maintaining continuous monitoring would be prudent to observe any shifts in public opinion.
**Finding-V.1**: _Users who are inclined to express opinions have a non-negative attitude towards BRC-related concepts._
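The paper does not name its sentiment tool, so the following is only a generic sketch of how tweets carrying #brc20 could be bucketed into the four categories of Fig. 5; the keyword lists are invented placeholders, and the mapping of ties to "undefined" is our guess.

```
POSITIVE = {"bullish", "hope", "adoption", "growth"}   # placeholder lexicons
NEGATIVE = {"scam", "rug", "dump", "bearish"}

def sentiment(tweet: str) -> str:
    words = set(tweet.lower().split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos == 0 and neg == 0:
        return "neutral"
    if pos == neg:
        return "undefined"
    return "positive" if pos > neg else "negative"
```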
### _Tweets with Relevant Hashtags_
#### V-B1 **Tweet stats**
We conduct an examination of Twitter data surrounding BRC-20. Notably, most contributors opt for web-based tweeting (Fig.7(b)), indicating a higher level of attention spent when accessing BRC-20 content compared to mobile users. Furthermore, the distribution of tweet data is well-balanced (Fig.7(a)), supported by the fact that the majority of contributors post just one tweet. This minimizes the potential for biased outcomes stemming from excessive tweeting by a single individual.
Fig. 4: Daily count of BRC-20 upon Bitcoin transactions
Fig. 5: Sentiment distribution
Fig. 6: Sentiment actions count by Tweets
As already noted, the predominantly neutral sentiment observed across users, tweets, and impact suggests a cautiously optimistic view within the community (Fig.6). This relative optimism is reinforced by the fact that the majority of tweets are on the longer side (160 to 200 characters, Fig.7(d)).
The diversity in the age of Twitter accounts engaged in the conversation, ranging from newly created to those over six years old, reveals an appeal that transcends different segments of the community (Fig.7(c)). The broad international interest, as evidenced by the primary languages being English and Chinese, underlines the global appeal of BRC-20 (Fig.7(e)).
In terms of influence, the participation across various follower counts, from micro-influencers to major influencers, highlights an inclusive conversation that extends beyond a niche audience (Fig.7(f)). The consistency in engagement, regardless of the number of followers, adds credibility to the BRC-20 conversation.
**Finding-V.2**: _BRC-20 appeals to users across various regions and age groups._
#### V-B2 **(Non-)Scam among users**
We first analyze the relationship between user types (normal users vs influencers) and tweet types (scam vs non-scam) in the context of the BRC-20 hashtag. Users were categorized based on specific criteria: influencers were identified from the "Most Popular" and "Highest Impact" lists, while normal users were those not listed as influencers. Tweets were classified as scams or non-scam based on the presence of certain keywords, repeated messages, and patterns indicative of pyramid selling.
Fig.9 unveils a significant distinction between influencers and normal users. While influencers posted fewer tweets overall, a higher proportion of their tweets were classified as scams. This suggests that many influencers may be leveraging their popularity to engage in questionable practices like pyramid selling, possibly with the intent to manipulate the market or deceive followers. The content of their tweets may not reflect a genuine interest in BRC-20, indicating a potential agenda to exploit the hype surrounding the cryptocurrency.
In contrast, normal users predominantly engaged in non-scam tweets, contributing to informative and meaningful discussions about BRC-20. Their engagement pattern reflects a genuine interest in the subject, possibly even involving actual exchange processes of BRC-20. The higher volume of non-scam tweets among normal users reflects authentic interests in BRC-20, unlike the controlled narrative pushed by influencers.
**Finding-V.3**: _While BRC-20 carries the risk of artificial manipulation, the dominant influence remains within legal and constructive boundaries._
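Mirroring the classification criteria described above (scam keywords, repeated messages, and pyramid-selling patterns), a simple heuristic could look like the sketch below; the keyword list and the duplicate-message rule are illustrative assumptions, not the authors' exact rules.

```
from collections import Counter

SCAM_KEYWORDS = {"guaranteed", "airdrop", "giveaway", "10x", "send to this address"}

def flag_scams(tweets_by_user: dict[str, list[str]]) -> dict[str, bool]:
    """Flag a user if their tweets hit scam keywords or are near-duplicates."""
    flags = {}
    for user, tweets in tweets_by_user.items():
        text = " ".join(tweets).lower()
        keyword_hit = any(k in text for k in SCAM_KEYWORDS)
        repeated = bool(tweets) and max(Counter(tweets).values()) > 1
        flags[user] = keyword_hit or repeated
    return flags
```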
Fig. 8: Popular users with the highest impact.
Fig. 7: Tweet stats related to BRC-20
## VI Investigation Compared to Historical Peaks
**Investigation overview.** We conduct an examination of historical crypto market data spanning ten years from 2013 to 2023, encompassing nine prominent tokens including BTC, LTC, DOGE (BRC-type), ETH, BNB, AVA (ERC-type), USDT, USDC, and BUSD (stablecoin). By correlating this historical data with major real-world market waves, we aim to discern if the peaks or prosperity of each market coincide with significant narratives. This macroscopic analysis provides insights into whether BRC represents a genuine wave in tokenomics.
### _Tokenwaves in History_
Based on these price trends, several notable waves in the token market can be identified. The initial peak, predating 2013, can be attributed to the flourishing crypto market driven primarily by the fervor surrounding Bitcoin mining activities and its PoW mechanism [30]. As a pioneering force, Bitcoin's impact set the stage for the valuation of the entire cryptocurrency landscape. Following this, the subsequent peak around 2017 aligns with Ethereum's development, sparking a surge in _initial coin offerings_ (ICOs) [31]. ICOs facilitated fund-raising by exchanging Ethereum (ETH) via the ERC20 standard [5] for native tokens of various projects, thereby attracting widespread user engagement and diverse investments. This wave was later succeeded by _initial exchange offerings_ (IEOs) [32] and analogous _initial development offerings_ (IDOs) [33].
Following a two-year cooling-off period, a notable resurgence took place in mid-2020, characterized by the rise of _decentralized finance_ (DeFi) [34]. DeFi encompasses a range of on-chain financial protocols that mirror traditional market functions, including lending, borrowing, contracts, leverage, and securities. Subsequently, starting in 2021, the spotlight shifted to _non-fungible tokens_ (NFTs) [7] within the Ethereum ecosystem. These distinct digital assets are utilized to represent ownership or validate authenticity for digital artworks, collectibles, and virtual real estate. This trend was further propelled by subsequent developments like the _play-to-earn_ concept [35] and the growing influence of _Web3_ [36] in 2022. As we progress into 2023, the continued activity in the token space remains evident with the deployment, minting, and transfer of inscriptions on the Bitcoin network via _BRC-20_ [4].
**Finding-VI.1**: _BRC-20 appears to be emerging as a new narrative during 2023, propelling a fresh tokenwave._
### _Comparison and Correlation_
We observed a common movement pattern among most tokens (both BRC-like and ERC-like) except for stablecoins (including USDT, USDC, BUSD). This suggests that token prices are intrinsically interconnected and are influenced by dominant tokens like BTC and ETH. Stablecoins, on the other hand, exhibit a distinct trend, remaining independent of market tokens and maintaining stable values pegged to the US dollar. The broader wave of tokenomics appears to have minimal impact on their fundamental value, except in cases like the Luna-UST collapse [37] where major design flaws were evident. We can infer that the surge in Bitcoin prices during the BRC's popularity period indirectly amplifies positive sentiments across the entire token market.
**Finding-VI.2**: _The patterns of BRC-20 waves align with broader trends observed in the cryptocurrency market._
## VII Investigation From Inherent Features
**Investigation overview.** In contrast to previous quantitative measurements, this section presents a qualitative evaluation from three perspectives: a comparison with other standards, its positive attributes and impacts, as well as notable limitations that must not be disregarded.
### _Comparison with Existing Standards_
The majority of token standards in competitive blockchains (summarised in Tab.IV), such as BEP-20/721 (BNB Smart Chain), ARC-20/721 (Avalanche), and XRC-20/721 (XDC Network [38]), draw inspiration from the Ethereum repository. These ERC-like standards share common attributes, adhering to the 20-track standard for fungibility and the 721-track standard for non-fungibility. NFTs in these chains possess programmable smart contracts, allowing for limitless issuance.
Fig. 9: (Non-)Scam tweets among users

Fig. 10: Cryptocurrency prices through the years

Contrastingly, BRC-like standards [39][40] integrate uniqueness into transaction payloads, stemming from their limited units (sats). This results in non-fungible tokens being transacted through a combination of regular transactions
and specific operations. On the flip side, ERC-like standards achieve distinctiveness via a parameter called the token ID in functions (cf. Algm.1), potentially utilizing various functions in an upper-layer operation. This gives rise to diverse token standards with features like 1155/3525/3475. Transfers within this framework rely on state transitions facilitated by contracts operating on the chain. We present more differences in Tab.V.
This divergence also translates into disparities in popularity. ERC-compatible chains thrive with active developers and Dapps, attracting a larger user base. Conversely, BRC-like chains often grapple with a dearth of active developers, hampering the initiation of innovative approaches.
```
interface ERC721 {
    function ownerOf(uint256 _tokenId) external view returns (address);
    function transferFrom(address _from, address _to, uint256 _tokenId) external payable;
    ...
}
```
**Finding-VII.1**: _BRC-20 stands out distinctly from ERC-like standards due to its structure, leading to a shortage of active developers and on-chain applications._
### _Advantages To be Highlighted_
**System stability.** Stability is primarily dependent on the network of distributed miners and their commitment. Augmented stability is achieved through two primary avenues.
* _New players._ As explained in Sec.II, the tracing of each satoshi requires the utilization of ORD software. This means that despite the availability of user-centric solutions like Ordinals markets, individuals wanting full control over the entire Ordinal procedure and the creation of an Inscription must operate a Bitcoin full node (rather than a lightweight node). This element, among others, has led to a marked rise in accessible Bitcoin nodes. The more active Bitcoin full nodes there are, the greater the decentralization of the Bitcoin network becomes.
* _Increased revenue._ The incorporation of ordinal inscriptions intensifies congestion within the Bitcoin blockchain, leading to an upward trajectory in fees and bolstering miners' earnings. This provides miners with advantages and enhances their commitment to the system. This advancement holds promise for the long-term sustainability of the Bitcoin blockchain, as its viability heavily relies on substantial transaction fees. The introduction of supplementary application layers, like ordinals, holds the potential to sustain heightened congestion. This, in turn, alleviates concerns about liquidity shortages or inadequate transaction volumes.
**Infrastructure construction.** Driven by BRC, Bitcoin's advancements in infrastructure and DApps have also led to substantial progress (also refer to Tab.III). Notably, Bitcoin wallets like Hiro and Xverse have rapidly expanded their support for BRC-related protocols, swiftly introducing products such as the BRC Explorer. Additionally, even the Bitcoin NFT market, traditionally centered around Stacks-based projects, has undergone a transformation with the recent launch of Gamma's Ordinals marketplace. Following closely, Magic Eden introduced its Bitcoin NFT marketplace. Esteemed NFT studios such as Yuga Labs and DeGods have also joined this movement, unveiling Ordinals-based projects within the past month. This surge in innovation is not confined to Bitcoin's base layer; it's equally evident within Bitcoin Layer2 solutions like the Lightning Network, Liquid, Rootstock, and Stacks.
**Finding-VII.2**: _The emergence of BRC enhances system stability and fosters the development of complementary tools._
### _Limitations Cannot Be Ignored_
**Costly.** We noted that the protocol requires two on-chain transactions to complete the transfer operation, which is costly and less user-friendly, while additionally, the granularity of exchanges is limited to the amounts defined in each PSBT. Moreover, the inscribed satoshis become invalid after use, necessitating the inscription of a new satoshi for each new transaction, which deviates from the original concept of Ordinals as long-lasting, meaningful inscriptions.
**Increased fees.** Similarly, analyzing transaction fees spanning December 2022 to April 2023, as outlined in [16], it's evident that significant total fees accrue with substantial inscriptions, signifying larger transactions. Importantly, a clear positive correlation emerges between Bitcoin ordinal inscriptions and transaction fees across diverse transactions, which contributes to the overall block fees. Consequently, the integration of ordinal inscriptions amplifies congestion within the Bitcoin blockchain, resulting in an upward trajectory of fees. This will raise concerns for regular users.
| Standard | Time | Network |  |  |  |  | Application |
|---|---|---|---|---|---|---|---|
| ERC-20 | 2015 | Ethereum | ✓ | ✓ | ✓ | ✓(Tx) | Currency |
| ERC-721 | 2017 | Ethereum | ✗ | ✗ | ✗ | ✓(SC) | NFT |
| ERC-1155 | 2018 | Ethereum | ✗ | semi | ✗ | ✓(SC) | Game |
| ERC-3525 | 2022 | Ethereum | ✗ | semi | ✓ | ✓(SC) | Equity |
| ERC-3475 | 2022 | Ethereum | ✗ | semi | n/a | ✓(SC) | Equity |
| BEP-20 | 2021 | BSC | ✓ | ✓ | ✓ | ✓(Tx) | Currency |
| BEP-721 | 2022 | BSC | ✗ | ✗ | ✗ | ✓(SC) | NFT |
| ARC-20 | 2022 | Avalanche | ✓ | ✓ | ✓ | ✓(Tx) | Currency |
| ARC-721 | 2022 | Avalanche | ✗ | ✗ | ✗ | ✓(SC) | NFT |
| XRC-20 | 2023 | XDC | ✓ | ✓ | ✓ | ✓(Tx) | Currency |
| XRC-721 | 2023 | XDC | ✗ | ✗ | ✗ | ✓(SC) | NFT |
| DRC-20 | 2023 | Dogecoin | ✓ | ✗ | ✗ | ✓(Tx) | NFT |
| LTC-20 | 2023 | Litecoin | ✓ | ✗ | ✗ | ✓(Tx) | NFT |
| **BRC-20** | 2023 | Bitcoin | ✓ | ✗ | ✗ | ✓(Tx) | NFT |

TABLE IV: Comparison with competitive standards

**Stateless.** BRC-20 and Ordinals continue to grapple with the challenge posed by the inherently stateless nature of UTXO transactions and the state models that most applications demand. The advancement of these protocols and their
capacity to accommodate more comprehensive functionalities, including a versatile virtual machine, hinges on the ongoing market incentives and the sustained appreciation of coin value.
**Centralization.** The escalating size of the Bitcoin network may discourage users from running their nodes due to increased requirements for downloading a copy of the network. Currently, most BRC-20 wallets necessitate running a full node, a practice not commonly embraced by regular users. As a result, users resort to third-party APIs, potentially creating centralized security vulnerabilities. Although different indexers can be connected for cross-validation, this requires additional steps and understanding from the users' side.
**Meme-nature.** Presently, a significant portion of the BRC-20 tokens in circulation, such as ORDI, PEPE, MOON, and others, predominantly belong to the category of meme coins. Due to the absence of consensus among communities and the lack of support for smart contracts, these tokens offer minimal practical utility and are notably swayed by trends in social media sentiment. Although this phenomenon sparks speculative interest, the tokens' limited functionality and the consequent dearth of a robust holder base suggest a potential vulnerability to abrupt, unforeseen value declines.
**Finding-VII.3**: _BRC brings network congestion, leading to increased reliance on centralized tools and rising fees. Additionally, it retains its inherent limitation of extensibility._
## VIII User Perception and Implications
### _Reality and Misconceptions in User Perception_
**Realities.** Based on the aforementioned investigations, several realities emerge from user perceptions in the BRC-20 landscape.
* _Genuine interest in BRC-20._ Users exhibit an enthusiastic interest in novel crypto concepts, actively participating in discussions (**V.0**) across social media platforms (**V.0**). They demonstrate their commitment to the market by investing time and funds, which is reflected by its market performance (**IV.0**) and a trend of tokenwaves (**VI.0**).
* _Noteworthy capital returns._ BRC-20 tokens present a remarkable market performance, showcasing substantial returns (**IV.0**) that outpace the performance of equivalent tokens in other categories (**IV.0**&**C0**).
* _Interconnected ecosystem._ The BRC-20 ecosystem reveals an interconnected network of tokens (**IV.0**), indicating a close interdependence of user perceptions and behaviors within this specific subset of tokens.
* _Driving innovation._ Moreover, the advent of BRC-20 has acted as a catalyst for driving innovation in the field, leading to the development of complementary tools and contributing to the overall stability of the system (**VII.0**).
**Misconceptions.** In contrast, our investigations have also uncovered certain misconceptions.
* _Ephemeral enthusiasm._ User enthusiasm for new concepts often follows a cyclical pattern of initial excitement followed by a potential decline in engagement (**IV.0**), particularly if immediate benefits are not realized (**IV.0**&**C0**).
* _Limited market size._ BRC-related markets still occupy a relatively small share compared to larger markets like Bitcoin's daily transaction volume (**IV.0**) or the market capitalization of ERC-like tokens (**IV.0**).
* _Dependency on dominance._ Much like derivative tokens, the trend of many BRC-20 tokens appears to be influenced by a select few dominant projects such as Ordi (**VI.0**), as well as social influencers (**V.0**).
* _One-Sided development._ The majority of developed tools are built upon existing data sources like web browsers or account-related APIs, rather than introducing novel logical innovations like those found in smart contracts--reflecting an inherent limitation (**VII.0**&**C0**).
### _Towards Enhancement_
**Improving user awareness by education.** Our investigations revealed a prevalent lack of understanding, among both non-professional and professional users, of the fundamental concepts of BRC, of the operational intricacies of Bitcoin itself, and even more so of how BRC works within the Bitcoin network. This limited comprehension leads to sparse discussions in public channels, with mere hundreds of tweets about BRC compared to thousands about Ethereum or even millions about a popular singer's new song. Among these mentions, the majority remain superficial, lacking substantive content. To enhance users' awareness and understanding of BRC and Bitcoin NFTs, two viable approaches stand out. Firstly, the establishment of an educated community through platforms like MOOCs and easily accessible YouTube videos could be pivotal. Open forums could address security concerns, while independent implementations on GitHub could offer potential solutions. For instance, BRC has already been explained by prominent companies and media outlets such as Binance. Secondly, encouraging developers to create competing BRC services and third-party tools can yield quick responses to user needs, covering NFT-related functions such as creation, purchase, auction, and exchange, particularly for technical users. Several third-party tools have already emerged for BRC to improve the user experience.
**Encouraging communities for further engagement.** Independent tools and services have consistently occupied a promi
\begin{table}
\begin{tabular}{c l l} \hline \hline & **Bitcoin NFT** & **Other NFTs** \\ \hline
**Protocol form** & Ordinal & ERC-721, ERC-1155, SPL \\
**Description** & Inscription & NFT \\
**Storage** & Entirely on-chain & Partially on IPFS/Arweave \\
**Code update** & Not allowed & Depends on contract code \\ \hline
**Mining** & Not possible without a node, need & Mostly can directly interact \\
**Trading** & via third-party designed services & with the webpage \\ \hline
**Extensibility** & Difficult due to Bitcoin’s & Easier due to programmable \\
**Consumption** & High due to PoW consensus & smart contracts \\ \hline
**Pros** & Scarcity, rarity-aware & Mainstream contract mode, \\ & Low block speed, no bulk mining & high user base \\
**Cons** & Difficulties in mining/trading, & No special gimmicks or fame, \\ & Wallet entry is complex & easily overlooked \\ \hline \hline \end{tabular}
\end{table} TABLE V: NFT Comparisons
nent position within the space of BRC-20 and its associated communities. Diverse applications have been developed to enhance the user experience of BRC-based products. For instance, volunteers from Cryptokoryo [11] and Datalaways [12] have created statistical services that showcase insightful trends related to Ordinals, Inscriptions, and BRC-tokens. Additionally, various media outlets provide dedicated sections to succinctly summarize the latest news relevant to BRC. BRC explorers have also been implemented to provide real-time price fluctuations. These tools significantly contribute to increasing user understanding of basic mechanisms while alleviating concerns about potential drawbacks. The seamless integration of third-party tools with other existing services, in particular DeFi protocols [41] and cross-chain technologies [42], adds value and has the potential to enhance adoption.
**Attracting new attention.** BRC-20 also draws inspiration from the NFT landscape, which has demonstrated remarkable growth over the past couple of years. Users who have actively engaged in NFT trading and gaming activities (such as minting, participating in airdrops, etc.) are likely to exhibit an inherent curiosity about BRC NFTs, provided there are no significant barriers. It would be prudent for BRC developers to offer tools that facilitate interoperability across various blockchain ecosystems, including Ethereum, Polygon, Binance Smart Chain, and Avalanche. Compared to new users entering from traditional markets, those migrating from established, if still maturing, Web3 ecosystems offer a vast and readily accessible user base.
## IX Conclusion
In this paper, we delve into the novel concept of BRC-20. We elucidate its operational mechanisms and conduct a range of empirical investigations covering market performance and user sentiment. Recognizing that user perception plays a pivotal role in shaping the nature of BRC, we subsequently explore the dichotomy between hope and hype that drives it. Our findings lead to the conservative conclusion that, while BRC-20 represents a promising inception within the Bitcoin ecosystem, it may not attain the same level as ERC-like ecosystems.
| 2023年中盤、BRC-20(Bitcoin Request for Comment 20)トークンのブームが重要なストーリーラインとなりました。イーサリアムの従来のERC-20トークン規格とは異なり、BRC-20は各satoshi(0.00000001 Bitcoin、最小単位)に編集可能なフィールドを付与することで、ビットコインに非代替性(non-fungibility)を導入し、各satoshiをユニークなものにします。本論文では、この新しい概念を先駆的に探求し、その仕組み、特徴、現状の応用を、多面的なデータに基づく実証的な調査と分析によって明らかにします。 |
2309.04750 | Mirror-Aware Neural Humans | Human motion capture either requires multi-camera systems or is unreliable
when using single-view input due to depth ambiguities. Meanwhile, mirrors are
readily available in urban environments and form an affordable alternative by
recording two views with only a single camera. However, the mirror setting
poses the additional challenge of handling occlusions of real and mirror image.
Going beyond existing mirror approaches for 3D human pose estimation, we
utilize mirrors for learning a complete body model, including shape and dense
appearance. Our main contributions are extending articulated neural radiance
fields to include a notion of a mirror, making it sample-efficient over
potential occlusion regions. Together, our contributions realize a
consumer-level 3D motion capture system that starts from off-the-shelf 2D poses
by automatically calibrating the camera, estimating mirror orientation, and
subsequently lifting 2D keypoint detections to 3D skeleton pose that is used to
condition the mirror-aware NeRF. We empirically demonstrate the benefit of
learning a body model and accounting for occlusion in challenging mirror
scenes. | Daniel Ajisafe, James Tang, Shih-Yang Su, Bastian Wandt, Helge Rhodin | 2023-09-09T10:43:45 | http://arxiv.org/abs/2309.04750v2 | # Mirror-Aware Neural Humans
###### Abstract
Human motion capture either requires multi-camera systems or is unreliable using single-view input due to depth ambiguities. Meanwhile, mirrors are readily available in urban environments and form an affordable alternative by recording two views with only a single camera. However, the mirror setting poses the additional challenge of handling occlusions of real and mirror image. Going beyond existing mirror approaches for 3D human pose estimation, we utilize mirrors for learning a complete body model, including shape and dense appearance. Our main contributions are extending articulated neural radiance fields to include a notion of a mirror, making it sample-efficient over potential occlusion regions. Together, our contributions realize a consumer-level 3D motion capture system that starts from off-the-shelf 2D poses by automatically calibrating the camera, estimating mirror orientation, and subsequently lifting 2D keypoint detections to 3D skeleton pose that is used to condition the mirror-aware NeRF. We empirically demonstrate the benefit of learning a body model and accounting for occlusion in challenging mirror scenes. The project is available at: [https://danielajisafe.github.io/mirror-aware-neural-humans/](https://danielajisafe.github.io/mirror-aware-neural-humans/).
## 1 Introduction
Estimating detailed 3D geometry of a moving person from a single video is a long-standing goal. Learning-based solutions can succeed when trained on 3D labels from the target domain or when multiple 2D views are available for supervision [28, 31, 35, 36, 37, 42, 46]. However, multi-view capture is expensive and tedious to calibrate, and hence, the diversity of existing datasets and associated machine learning solutions are limited to mainstream activities and environments.
We propose a test-time optimization method for reconstructing a generative body model entailing pose, shape, and appearance using a single camera and a mirror and starting from 2D pose without any 3D labels nor large-scale dataset. The mirror setting is practical: First, mirrors are readily available in urban environments and provide a second view for accurate reconstruction without requiring multi-camera recording and temporal synchronization. Second, off-the-shelf 2D estimators generalize well since diverse training images are easily annotated by clicking 2D joint locations. Previous works [8, 25] leveraged reflections in mirrors for better human pose reconstruction. However, neither of them model shape and appearance in detail. In addition, the model proposed in [8] needs a 3D pose estimator as prior, potentially limiting their approach to motions close to the training set.
Alternatively, 3D body models have been learned from monocular video. However, existing approaches either use a shape prior, such as a scan of the person [15], a parametric body model [2], restrict motions to be simple [52], or require initialization with a prior 3D estimator [40, 41, 50].
Figure 1: **Pose refinement and image quality**. Given an image with mirror (top left) our mirror-based method reconstructs 3D pose and shape that is more accurate than the baselines (A-NeRF [40] and DANBO [41]) not supporting the mirror, both in terms of the 3D pose metric PA-MPJPE (top row, e.g., corrected arms), and in image quality PSNR (bottom row, e.g., reconstructed earphone and left elbow).
This restricts the possible motion, shape, and appearance complexity. By contrast, our mirror setting is simple, enabling anyone to collect 3D data of their target domain.
Our approach for learning _Mirror-Aware Neural Humans_ makes no prior assumption about the body shape by building upon the open-source articulated neural radiance fields (NeRF) models [40, 41], which require only the joint angles of a skeleton as input. We estimate this skeleton fully automatically and without prior assumptions on the 3D poses using an automatic mirror calibration (Step 1) and mirror-based 2D to 3D pose lifting (Step 2), thereby avoiding the use of 3D pose estimators that struggle with occlusions and extreme poses. Our core contributions are as follows:
* Designing a robust algorithm that estimates mirror position and orientation, and 3D skeleton model with bone-relative coordinates suitable for neural body models.
* A layered mirror model, extending NeRF with occlusion handling of the mirror image by the real person.
* Developing a complete motion capture system for reconstructing human pose, shape, and appearance from mirror images, and making the source code available.
## 2 Related Work
**Self- and weakly supervised learning approaches.** Weakly supervised 3D pose estimators typically leverage small-scale 3D pose datasets and combine them with additional 2D data [4, 14, 16, 22, 45, 47, 48, 54]. Others utilize neural networks that are pretrained on the 3D lifting task and transfer them to another dataset [10, 11, 12, 26, 27, 33]. Such weak supervision transfers better to unseen poses. However, they still make the assumption that training poses are close to the training set.
A different approach is using multi-view supervision to learn an embedding of 3D poses [31, 35, 36, 28, 37, 42] or learn the 3D reconstruction step directly from multi-view images [18, 19, 38, 46]. While promising, they still require multiple temporally synchronized cameras for training. In contrast, using mirrors in a scene gives the unique advantage of having a pair of synchronized views with a single recording device.
**Mirror geometry and calibration.** Mirrors have a long history in visual computing on which Reshetouski et al. [34] provide a good overview. We take inspiration from methods [1, 17, 29, 43] employing mirrors for camera calibration and 3D reconstruction of rigid objects, to enable calibration and reconstruction of moving humans. Alternatively, Yin et al. [53] reconstructs arbitrary objects in mirror-like surfaces but do not show any application for humans.
**Mirror-based Human Pose Estimation.** Nguyen et al. [30] use mirrors to reconstruct human point clouds, but require a depth camera together with two or multiple mirrors. To the best of our knowledge, the most related work that reconstructs human pose and body shape with a single mirror is from Fang et al. [8]. They provide an optimization-based approach that utilizes mirror symmetry constraints for predicting 3D human pose and mirror orientation. While attaining high accuracy, they require as input an initial 3D pose estimate from a pretrained neural network that cannot generalize well to unseen poses. Moreover, their best results are attained using manually annotated vanishing lines on the mirror boundary [7]. By contrast, we use a purely geometric approach to optimize for 3D keypoints without requiring any 3D pose estimator or mirror annotation (with the neural network only modeling shape and appearance), by jointly optimizing for the bone orientation and building upon recent work on estimating camera position and ground plane using the motion of people in the scene [9, 44]. Similar to prior approaches [8, 25], we estimate 3D human keypoints as a solution to an optimization problem between two sets of mirrored 2D keypoints. By contrast, Liu et al. [25] optimize for 3D joint coordinates which can lead to incorrect pose sequences where, for example, bone lengths vary over time, and orientation remains ambiguous. Fang et al. [8] restrict motions to be close to previously captured sequences by using pre-trained detectors, and none of these methods take detailed reconstruction of shape and appearance into account.
## 3 Method
Our goal is to reconstruct a dense neural body model from a single video with only sparse 2D detections as input, using the mirror as a second view to impose multi-view constraints. The difficulty lies in reconstructing such dense representation from only sparse and noisy 2D labels, with an unknown mirror and camera configuration. By contrast to classical multi-view settings, mirror observations add the difficulty of the real person occluding the mirror image. To overcome these difficulties, our method goes from sparse to fine details in three steps, as sketched in Figure 3.
For each step we use a suitable representation for the mirror geometry, each mathematically equivalent yet implying a different implementation. Figure 2 visualizes the three forms. _Case I:_ A single camera \(\mathbf{c}\) with light rays reflecting on the mirror plane \(\pi\). _Case II:_ The mirror image stemming from a _virtual camera_\(\bar{\mathbf{c}}\) opposing the real camera \(\mathbf{c}\). _Case III:_ A _virtual person_\(\bar{\mathbf{p}}\) opposing the real person \(\mathbf{p}\), both viewed from the real camera \(\mathbf{c}\).
### Camera and Mirror Initialization (Step 1)
We start from a video that shows a person moving in front of a mirror and use off-the-shelf pose detectors [6, 51] to obtain 2D pose estimates \(\mathbf{q}^{(t)}\in\mathbb{R}^{2\times J}\) for every input frame \(\mathbf{I}_{t}\) and all \(J\) joints. As we assume the mirror is orthogonal to the ground, mirror and real images appear to be standing on the same ground plane and existing solutions to using the human as calibration object apply. We use a variant of
[9] as described in [3] that yields focal length \(f\) and ground plane normal \(n_{g}\).
Associating real and mirror poses. Pose detectors are not aware of the mirror and therefore treat each person independently. We assign the pose with the larger neck-to-pelvis distance to the real person, exploiting the fact that the person viewed through the mirror is farther away and hence appears smaller under perspective projection. This association is also required for flipping the left and right side of the mirrored 2D pose to account for the mirror reflection. Figure 4 shows this relationship and the degradation caused by misassignment.
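As a concrete illustration, the minimal Python sketch below performs this real/mirror assignment and the left-right flip; the joint indices, the detector output format, and the left/right swap map assume a COCO-style 2D skeleton and are illustrative assumptions, not part of the released implementation.

```python
import numpy as np

def split_real_and_mirror(pose_a, pose_b, neck=1, pelvis=8):
    """Assign the detection with the larger neck-to-pelvis pixel distance to the
    real person (the mirrored person is farther away and hence appears smaller).
    pose_a, pose_b: (J, 2) arrays of 2D keypoints; the joint indices are assumptions."""
    def torso_len(p):
        return np.linalg.norm(p[neck] - p[pelvis])
    if torso_len(pose_a) >= torso_len(pose_b):
        real, mirror = pose_a, pose_b
    else:
        real, mirror = pose_b, pose_a
    # Undo the reflection on the mirrored detection by swapping left/right joints
    # (swap map is an illustrative assumption for a COCO-like skeleton).
    left, right = [5, 6, 7, 11, 12, 13], [2, 3, 4, 9, 10, 14]
    mirror = mirror.copy()
    mirror[left + right] = mirror[right + left]
    return real, mirror
```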
Mirror Geometry Initialization. Under the assumption that the mirror normal is orthogonal to the ground plane normal, we obtain the 3D location of the real person and mirrored person using _Case III_ (see Figure 2). We project their 2D ankle locations \(\mathbf{q}_{\text{ankle}}\) onto the estimated ground plane by reversing the projection
\[\mathbf{q}=\mathbf{K}\mathbf{p},\text{ where }\mathbf{K}=\begin{pmatrix}f&0&o_{1}&0 \\ 0&f&o_{2}&0\\ 0&0&1&0\end{pmatrix}, \tag{1}\]
with \((o_{1},o_{2})\) the image center and \(f\) the estimated focal length.
The mirror normal \(\mathbf{n}_{m}\in\mathbb{R}^{3}\) is then the normalized vector from the mirrored ankle \(\bar{\mathbf{p}}_{\text{ankle}}\) to the real ankle \(\mathbf{p}_{\text{ankle}}\),
\[\mathbf{n}_{m}=\frac{\mathbf{p}_{\text{ankle}}-\bar{\mathbf{p}}_{\text{ankle}}}{\|\mathbf{p}_{\text{ankle}}-\bar{\mathbf{p}}_{\text{ankle}}\|}. \tag{2}\]
The mirror location is the midpoint, \(\mathbf{m}=(\mathbf{p}_{\text{ankle}}+\bar{\mathbf{p}}_{\text{ankle}})/2\). For increased robustness, we average over all frames.
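A minimal sketch of this initialization is given below, assuming the camera center is the origin and the ground plane is written as \(\{\mathbf{x}:\mathbf{n}_{g}\cdot\mathbf{x}=d_{g}\}\) with an offset \(d_{g}\) from the calibration step; the 3×3 intrinsic matrix and the helper names are illustrative assumptions.

```python
import numpy as np

def backproject_to_ground(q_ankle, K, n_g, d_g):
    """Intersect the viewing ray through pixel q_ankle with the ground plane
    {x : n_g . x = d_g}; K is the 3x3 intrinsic matrix (an assumption here)."""
    ray = np.linalg.inv(K) @ np.array([q_ankle[0], q_ankle[1], 1.0])
    t = d_g / (n_g @ ray)            # camera center is the origin
    return t * ray

def mirror_from_ankles(p_ankle, p_ankle_mir):
    """Eq. 2: unit normal from the mirrored ankle to the real ankle, plus midpoint."""
    n_m = p_ankle - p_ankle_mir
    n_m = n_m / np.linalg.norm(n_m)
    m = 0.5 * (p_ankle + p_ankle_mir)    # midpoint lies on the mirror plane
    return n_m, m
```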
### 2D to 3D Pose Lifting (Step 2)
In this section, we use the notion of a virtual camera (_Case II_) positioned behind the mirror as shown in Figure 2. Following [21, 39], we derive the virtual camera through the matrix \(\mathbf{A}\) that mirrors points across the mirror plane,
\[\mathbf{A}=\begin{bmatrix}1-2n_{x}^{2}&-2n_{y}n_{x}&-2n_{z}n_{x}&-2n_{x}d\\ -2n_{y}n_{x}&1-2n_{y}^{2}&-2n_{y}n_{z}&-2n_{y}d\\ -2n_{z}n_{x}&-2n_{y}n_{z}&1-2n_{z}^{2}&-2n_{z}d\\ 0&0&0&1\end{bmatrix}, \tag{3}\]
with \(\mathbf{n}_{m}=[n_{x},n_{y},n_{z}]\) the mirror normal and \(d\) the distance between camera and mirror. Both quantities are from Step 1. By defining the real camera to be at the origin pointing along the z-axis, \(\mathbf{A}\) maps points from the real to the virtual camera. The orientation of the virtual camera is hence \(\bar{\mathbf{R}}=\mathbf{A}_{3\times 3}^{\top}\), the inverse of the top-left part of \(\mathbf{A}\), and its position is \(\bar{\mathbf{c}}=-2\mathbf{n}_{m}d\), given by the translation column of \(\mathbf{A}\). Note that \(\bar{\mathbf{R}}\) is from the orthogonal group \(O(3)\) as it includes a reflection component given by the mirror.
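The construction of \(\mathbf{A}\) and of the virtual camera can be sketched as follows; the sign convention for \(d\) follows Eq. 3, and the function names are illustrative assumptions.

```python
import numpy as np

def mirror_matrix(n_m, d):
    """Homogeneous reflection across the mirror plane, following Eq. 3.
    n_m: unit mirror normal, d: camera-to-mirror distance (sign as in the text)."""
    n = np.asarray(n_m, dtype=float).reshape(3)
    A = np.eye(4)
    A[:3, :3] = np.eye(3) - 2.0 * np.outer(n, n)   # Householder-type reflection
    A[:3, 3] = -2.0 * d * n                         # translation column of Eq. 3
    return A

def virtual_camera(n_m, d):
    """Virtual camera implied by the mirror: orientation R_bar = A[:3,:3]^T
    (an element of O(3), det = -1) and center given by the translation column."""
    A = mirror_matrix(n_m, d)
    return A[:3, :3].T, A[:3, 3]
```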
Mirror Skeleton representation.To be able to reconstruct not only the position but also the orientation of limbs, we represent \(\mathbf{p}^{(t)}\) with a skeleton parameterized by joint rotations \(\mathbf{\theta}_{i}^{(t)}\in\mathbb{R}^{6}\), using the 6D rotation parameterization of [55], bone lengths \(\mathbf{\ell}\in\mathbb{R}^{J}\), and the 3D pelvis position \(\mathbf{p}_{\text{pelvis}}^{(t)}\in\mathbb{R}^{3}\) (the root position). Forward kinematics gives
\[\mathbf{p}_{j}^{(t)}=\prod_{i\in\mathcal{N}(j)}\mathbf{T}_{i} \begin{bmatrix}\mathbf{0}\\ 1\end{bmatrix}+\mathbf{p}_{\text{pelvis}}^{(t)},\mathbf{T}_{i}=\begin{bmatrix} \mathbf{M}(\mathbf{\theta}_{i}^{(t)})&\mathbf{\ell}_{i}\mathbf{v}_{i}\\ \mathbf{0}&1\end{bmatrix}, \tag{4}\]
with \(\mathbf{v}_{i}^{\text{ref}}\in\mathbb{R}^{3}\) the \(i\)th bone vector (parent to child vector) in a reference pose, \(\mathbf{M}(\mathbf{\theta}_{i}^{(t)})\) the joint rotation computed from \(\mathbf{\theta}_{i}^{(t)}\), and \(\mathcal{N}(j)\) the ancestors of \(j\) in the kinematic chain. In the following, we optimize these parameters by using different constraints on the mirror scene including the pose, feet, and bone orientation.
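A possible implementation of this forward-kinematics recursion is sketched below; it assumes joints are topologically ordered (parents before children) and that joint rotations have already been converted from the 6D parameterization to 3×3 matrices, both of which are assumptions of the sketch rather than requirements of the method.

```python
import numpy as np

def forward_kinematics(parents, rot_mats, bone_lengths, v_ref, p_pelvis):
    """Sketch of Eq. 4: accumulate per-joint transforms along the kinematic chain.
    parents[j]: parent joint index (-1 for the root); rot_mats[j]: 3x3 rotation
    M(theta_j); v_ref[j]: unit bone vector in the reference pose; all conventions
    here are illustrative assumptions."""
    J = len(parents)
    T = [np.eye(4) for _ in range(J)]
    joints = np.zeros((J, 3))
    for j in range(J):
        L = np.eye(4)
        L[:3, :3] = rot_mats[j]
        L[:3, 3] = bone_lengths[j] * v_ref[j]
        T[j] = L if parents[j] < 0 else T[parents[j]] @ L
        joints[j] = T[j][:3, 3] + p_pelvis   # add the root (pelvis) position
    return joints, T
```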
3D pose initialization. Unlike prior work using joint locations [25], the bone rotation estimation we require is prone to local minima. To overcome this, we initialize with a constant standing pose at the estimated \(\mathbf{p}_{\text{ankle}}\), rotated in 45\({}^{\circ}\) steps from 0\({}^{\circ}\) to 360\({}^{\circ}\) as shown in Figure 5, and select the rotation with the lowest reconstruction error before optimization.
3D pose optimization.Using the virtual camera (_Case II_ above), the optimization of the 3D pose \(\mathbf{p}\) under mirror constraints becomes a classical multi-view reconstruction problem. We optimize the skeleton parameterization \(\mathbf{\theta}\) that,
Figure 2: **Models for the mirror reflection**. In the first case, the rays go from the real camera \(\mathbf{c}\) up to the mirror plane \(\pi\) intersecting at location \(\mathbf{s}\), then to the real person \(\mathbf{p}\) after a mirror reflection. In the second case, the real person \(\mathbf{p}\) is viewed from a virtual camera \(\bar{\mathbf{c}}\) forming a virtual image. In the third case, the person location is mirrored to \(\bar{\mathbf{p}}\) and light rays go straight from camera \(\mathbf{c}\) to \(\bar{\mathbf{p}}\).
when reprojecting the associated 3D joint positions \(\mathbf{p}\) to the real and virtual cameras, minimizes the Euclidean distance to real 2D pose \(\mathbf{q}\) and virtual 2D pose \(\bar{\mathbf{q}}\),
\[\mathcal{L}_{\text{p}}=\sum_{t}\|\mathbf{q}-\Pi(\mathbf{p}(\boldsymbol{\theta} ^{(t)},\boldsymbol{\ell}))\|^{2}+\|\bar{\mathbf{q}}-\Pi(\mathbf{A}\mathbf{p}( \boldsymbol{\theta}^{(t)},\boldsymbol{\ell}))\|^{2} \tag{5}\]
with \(\mathbf{p}(\boldsymbol{\theta}^{(t)},\boldsymbol{\ell})\) the forward kinematic model and \(\Pi\) the perspective projection using \(\mathbf{K}\). When 2D detection confidences are available, we use them as weights in Eq. 5.
Smoothness and ground-plane constraints.Frame-wise pose optimization leads to noisy reconstructions and inconsistent global orientations. To mitigate these, we encourage a constant velocity for location and joint angles across the video (for valid frames), referred to as _location_ and _orientation smooth_ in Eq. 6. To reduce floating, we utilize our ground plane estimate and constrain the lower feet by minimizing their distance to the ground plane, as described in Eq. 6 where \(\mathbf{f}_{gd}=(\mathbf{m}-\mathbf{f}_{i})\), \(\mathbf{m}\) is the mirror location and \(\mathbf{f}_{i}\) is the closest foot (heel) to the ground. Lastly, we refine the mirror and ground normal, \(\mathbf{n}_{m}\) and \(\mathbf{n}_{g}\), during optimization and enforce both quantities to be orthogonal.
Figure 4: **Real and mirror pose assignment.** Our algorithm distinguishes the real from the virtual person using pelvis-to-neck distance. With the right assignment, cases of collapsed poses (left) are corrected (right).
Figure 5: **3D pose initialization.** by measuring the error between initial re-projections (lines forming skeleton) and 2D detections (dots) to determine the optimal starting pose.
Figure 3: We start from a mirror image with an unknown mirror geometry. With only 2D detections and suitable assumptions, we reconstruct the mirror plane, ground plane, and 3D keypoints in Step 1 and Step 2. Our optimization yields bone orientation that is crucial for integrating NeRF with the mirror-based reconstruction. The final Mirror-aware Neural Human is learned via layered composition of mirror and real images in Step 3 and yields improved body pose, shape, and appearance quality.
We combine all additional objectives from above as
\[\mathcal{L}_{\text{sfo}} =\underbrace{\lambda_{p}\|\frac{\mathbf{d}^{2}\mathbf{p}(\boldsymbol{ \theta}^{(t)},\mathbf{b})}{\mathbf{d}_{t}^{2}}\|}_{\text{location smooth}}+ \underbrace{\lambda_{\boldsymbol{\theta}}\|\frac{\mathbf{d}^{2}\boldsymbol{ \theta}_{k}}{\mathbf{d}_{t}^{2}}\|}_{\text{orientation smooth}}+\underbrace{ \lambda_{f}(\mathbf{n}_{g}\mathbf{f}_{gd})^{2}}_{\text{foot constraint}}\] \[+\underbrace{(\mathbf{n}_{g}\mathbf{n}_{m})^{2}}_{\text{orthogonality}}+ \underbrace{(\|\mathbf{n}_{m}\|_{2}-1)^{2}}_{\text{mirror normal loss}}+ \underbrace{(\|\mathbf{n}_{g}\|_{2}-1)^{2}}_{\text{ground normal loss}}, \tag{6}\]
where \(\frac{\mathbf{d}^{2}\mathbf{p}(\boldsymbol{\theta}^{(t)},\mathbf{b})}{\mathbf{ d}_{t}^{2}}\) and \(\frac{\mathbf{d}^{2}\boldsymbol{\theta}_{k}}{\mathbf{d}_{t}^{2}}\) are the second-order derivatives for the joint locations and bone orientations for all frames \(t\), \(\mathbf{f}_{gd}\) is the vector from ground plane location to the lower feet of interest, and \(\lambda_{p}\), \(\lambda_{\boldsymbol{\theta}}\), \(\lambda_{f}\) are hyper-parameters that balance the influence of the smoothness terms and the feet loss. Our final objective \(\mathcal{L}_{\text{pose}}\) is the sum of all individual terms, \(\mathcal{L}_{\text{Pose}}=\mathcal{L}_{\text{p}}+\mathcal{L}_{\text{sfo}}\).
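A sketch of these regularizers is given below; the second-order finite-difference discretization of the temporal derivatives and the default weights are illustrative choices, not the exact ones used in our experiments.

```python
import numpy as np

def regularizers(p_seq, theta_seq, n_g, n_m, feet_to_ground,
                 lam_p=1.0, lam_theta=1.0, lam_f=1.0):
    """Sketch of Eq. 6.  p_seq: (T, J, 3) joint trajectories, theta_seq: (T, K)
    joint-angle parameters, feet_to_ground: (T,) values of n_g . f_gd per frame."""
    acc_p = p_seq[2:] - 2 * p_seq[1:-1] + p_seq[:-2]            # ~ d^2 p / dt^2
    acc_theta = theta_seq[2:] - 2 * theta_seq[1:-1] + theta_seq[:-2]
    loss = lam_p * np.linalg.norm(acc_p) + lam_theta * np.linalg.norm(acc_theta)
    loss += lam_f * np.sum(feet_to_ground ** 2)                  # feet on the ground
    loss += (n_g @ n_m) ** 2                                     # mirror orthogonal to ground
    loss += (np.linalg.norm(n_m) - 1.0) ** 2 + (np.linalg.norm(n_g) - 1.0) ** 2
    return loss
```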
### Neural Rendering and Refinement (Step 3)
With the 3D pose \(\mathbf{p}(t)\) reconstructed approximately for each pair of 2D detections \(\mathbf{q}(t)\) and \(\bar{\mathbf{q}}(t)\) in every frame of the video, we train a generative model \(G(\boldsymbol{\theta})\) conditioned on pose \(\boldsymbol{\theta}\). Starting from A-NeRF [40] applied to only the real person as a baseline, we introduce our Step 2 + A-NeRF, naive mirror integration, and full mirror integration with and without efficiency-improving extensions.
A-NeRF initialized by Step 2 (Step 2 + A-NeRF).To apply articulated neural radiance fields, such as [40] and [41], to our setting, we segment both persons in the image using [24]. We then use the real-mirror person assignment from Step 1 and Step 2 to determine the mask for the real person M that contains the 2D keypoints associated to the real person. Our contribution is on how to also include the mirrored person and its mask \(\bar{\mathbf{M}}\). For the real person, we can apply existing methods, besides minor modifications to run with our skeleton definition. Input is the image \(I\), skeleton \(\mathbf{v}^{\text{ref}}\), the bone lengths \(\boldsymbol{\ell}\) and joint angles \(\boldsymbol{\theta}\). We cast rays to the real person in the scene using pixels \((u,v)\) within the mask M, and query 64 points \(\{\mathbf{b}_{k}\}_{k=1}^{64}\) along that ray direction \(\mathbf{r}=\mathbf{K}^{-1}(u,v,1)\) within the 3D bounding box containing the skeleton \(\mathbf{p}\). By using the skeleton-relative encoding from [40] or [41], we first map the queried points \(\mathbf{b}\) to the local space of each joint using \(T_{i}\),
\[\tilde{\mathbf{b}_{i}}=T_{i}^{-1}(\boldsymbol{\theta}_{i},\mathbf{v}_{i})[ \mathbf{b}]. \tag{7}\]
A fully-connected neural network then predicts color \(\gamma_{k}\) and density \(\sigma_{k}\) as a function of the transformed queries, \(\gamma_{k},\sigma_{k}=\phi([\tilde{\mathbf{b}}_{1},\dots,\tilde{\mathbf{b}}_{J}])\) for every sample \(k\). The image is formed by volume rendering, integrating color along the ray while accounting for the transmittance computed from the density, as in the original NeRF.
The objective is the photometric loss \(\mathcal{L}_{\text{Neural}}\) between the generated and observed image, and both the skeleton joint angles and the parameters of the underlying neural network are optimized jointly. Note that existing articulated NeRF models only apply as we tuned Step 2 to be compatible by introducing the additional smoothness constrained on bone orientation.
Layered mirror representation.Mirror occlusion cases, such as the one in Figure 6 where the real and mirrored person overlap, are important, as they result in errors in the segmentation masks and can lead to a few but large reconstruction errors. To make occlusion resolution part of the learning process but maintain efficiency, we automatically detect frames where occlusion occurs by measuring the intersection over union (IOU) of the bounding boxes \(\mathbf{N}\) and \(\bar{\mathbf{N}}\) enclosing the projected real and mirrored 3D poses from Section 3.2.
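The following sketch illustrates this occlusion test; the pixel margin added around the projected joints and the box representation are assumptions of the sketch.

```python
def bbox_from_keypoints(kps2d, margin=10):
    """Axis-aligned box around projected 2D joints, padded by a pixel margin."""
    x0, y0 = kps2d[:, 0].min() - margin, kps2d[:, 1].min() - margin
    x1, y1 = kps2d[:, 0].max() + margin, kps2d[:, 1].max() + margin
    return (x0, y0, x1, y1)

def intersection_box(box_a, box_b):
    """Overlap region of the real and mirrored boxes; None if they do not overlap."""
    x0, y0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x1, y1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    return (x0, y0, x1, y1) if (x1 > x0 and y1 > y0) else None

def iou(box_a, box_b):
    """Intersection over union used to flag frames with potential occlusion."""
    inter = intersection_box(box_a, box_b)
    if inter is None:
        return 0.0
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    ia = area(inter)
    return ia / (area(box_a) + area(box_b) - ia)
```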
Given these boxes, we compute an intersection box that bounds overlapping areas and shoot rays randomly within the intersection box to resolve occluding pixels. Since each pixel is at an intersection of \(\mathbf{N}\) and \(\bar{\mathbf{N}}\), we process the occlusion samples along the direct view ray, \(\{\mathbf{b}_{k}\}_{k=1}^{64}\), and along its mirrored path, \(\{\bar{\mathbf{b}}_{k}\}_{k=1}^{64}\). _Case II_ gives the reflected view-ray
\[\bar{\mathbf{r}}=\mathbf{A}_{3\times 3}\mathbf{r}, \tag{8}\]
with the origin at virtual camera center \(\bar{\mathbf{c}}\). Note that we do not bound the occlusion samples to \(\mathbf{M}\) and \(\bar{\mathbf{M}}\) as the segmentation masks are often unreliable when people overlap.
Furthermore, sampling a different number of samples for occlusion rays does not fare well with the batch processing in NeRF. To make it compatible, we process real and mirror samples, including occlusion cases, independently to yield image layers \(\mathbf{L}\) and \(\bar{\mathbf{L}}\) and corresponding alpha maps \(\alpha\) and \(\bar{\alpha}\). We exploit that if occlusion happens, the real person occludes the mirrored person. This holds in general, since the mirror image results from an indirect light path that is always longer than the direct view, and enables combining these partial results using back-to-front layering. Starting
Figure 6: **Occlusion handling. First, we automatically generate 2D bounding boxes (in grey) from our optimized 3D keypoints. Then we shoot rays (dots) randomly in the intersection area (in green) where occlusions may happen (in yellow).**
from the background image \(\mathbf{I}_{\text{bg}}\), the final image is
\[\hat{\mathbf{I}}=\mathbf{L}\alpha+(1-\alpha)\left(\bar{\mathbf{L}}\bar{\alpha}+( 1-\bar{\alpha})I_{\text{bg}}\right). \tag{9}\]
This layered composition enables efficient training on \(\mathcal{L}_{\text{Neural}}\) and rendering in batches of the same size, while accounting for mirror occlusion.
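Eq. 9 corresponds to the following simple compositing routine; the per-pixel array conventions are assumptions of the sketch.

```python
def composite_layers(L_real, a_real, L_mirror, a_mirror, I_bg):
    """Back-to-front compositing of Eq. 9: the real person always occludes the
    mirrored person, which in turn occludes the background.  All inputs are
    per-pixel colors/alphas of matching shape (e.g. arrays with values in [0, 1])."""
    behind = L_mirror * a_mirror + (1.0 - a_mirror) * I_bg
    return L_real * a_real + (1.0 - a_real) * behind
```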
Baseline w/o layering. Without the introduced layering, one would have to sample twice as many points over the union of both masks \(\mathbf{M}\) and \(\bar{\mathbf{M}}\): the samples \(\{\mathbf{b}_{k}\}_{k=1}^{64}\) along the ray up to the mirror and the samples \(\{\bar{\mathbf{b}}_{k}\}_{k=1}^{64}\) after the mirror reflection. However, unless the real and mirrored person occlude each other, one of the two sample sets lies in empty space, training NeRF merely to predict \(0\) density for that set and leading to extremely slow convergence.
Baseline w/o occlusion. Assuming that the person masks \(\mathbf{M}\) and \(\bar{\mathbf{M}}\) are correct and non-overlapping, one can render two separate images of the real and mirror person by respectively sampling \(\mathbf{M}\) and \(\bar{\mathbf{M}}\), using _Case II_ with the notion of two cameras. By minimizing the photometric objective, this baseline is efficient and utilizes the information across both views, but it does not account for mirror occlusions and imperfect masks around occlusion areas.
## 4 Evaluation
We compare our Mirror-aware Neural Human on the tasks of body pose and appearance reconstruction against state-of-the-art methods to verify that existing NeRF-based body models benefit largely from the additional geometric mirror constraints and that every stage of our motion capture pipeline brings improvement, and we ablate model choices. The supplemental document and video provide additional examples.
**Variants and Baselines.** We integrated our _Mirror-aware Neural Human_ formulation into A-NeRF [40] and DANBO [41] and refer to them as _Mirror A-NeRF_ and _Mirror DANBO_. We use the original A-NeRF [40] and DANBO [41] as baselines, as well as the mirror-based pose estimation method, MirrorHuman [8] and the established single-view reconstruction techniques SPIN [20] and SMPLify [32]. For pose estimation accuracy, we only compare to the A-NeRF neural body method as DANBO does not support pose refinement.
**Benchmark Datasets.** We use the _MirrorHuman-eval_ dataset from [8]. It contains a complex dancing performance in front of a large mirror and is captured with six cameras arranged in an arc around the mirror. We exclude camera 4 as it is directly frontal to the mirror and leads to a degenerate configuration where the mirrored person is largely occluded. Since no validation set was specified, we use cameras 2 and 3, which are placed at a 45-to-90 degree angle to the mirror, for tuning hyperparameters, and we test on all cameras as previous methods did. Following [8], we optimize the pose separately for each video, treating this dataset as five independent recordings instead of a multi-view setup. We also follow [8] in evaluating Steps 2 and 3 on every 100th frame while reconstructing in-between frames only to ensure smoothness. For the neural models, we withhold the last 10% of frames to test novel pose synthesis.
**Additional qualitative sequences.** To be able to demonstrate generality and showcase the simplicity of capturing with a mirror, we utilize the internet dancing recordings from [8] and recorded a new dataset that is more diverse in terms of appearance, e.g., including a beard, male and female, loose clothing, and casual everyday motions that go beyond dancing (see Figure 7). We used a single camera for recording and employed 2000 frames for reconstruction.
**Metrics.** For pose estimation tasks, we report the scale-normalized MPJPE (N-MPJPE) introduced in [35], and the Procrustes-aligned MPJPE (PA-MPJPE), both in mm over the 15 joints defined in [8]. We omit MPJPE without scale normalization as monocular reconstruction is inherently scale-ambiguous [13]. For image synthesis, we quantify the image quality by PSNR and SSIM [49].
**Implementation details.** In Step 2, we optimize 3D poses for 2K iterations. For Step 3, we train the neural rendering model up to a maximum of 300K steps for DANBO and a maximum of \(2\times 200\)K for A-NeRF with pose refinement and appearance fine-tuning.
### Mirror Normal Estimation
Our average normal estimation error (using 2D detections as input) is 0.4\({}^{\circ}\) compared to the GT normal provided in [8]. Camera 4 is excluded as the real person occludes the mirror image in most frames. This automatic mirror calibration is highly accurate and very close to the 0.5\({}^{\circ}\) obtained from the vanishing point method in [8] on the same cameras.
### Pose Estimation
**Comparison to refining pose with a dense body model.** Table 1 shows the results for the MirrorHuman-eval dataset. Compared to A-NeRF [40], which also refines SPIN estimates using a volumetric body model, our method improves 3D pose estimates significantly, by more than \(20\%\). This highlights the importance of integrating a second view for accurate reconstruction, here via the mirror. Figure 7 shows that, by benefiting from the NeRF model, the joint refinement of pose and shape improves significantly, particularly on extreme poses.
**Comparison to methods lifting 2D pose to 3D.** We outperform the supervised approach SPIN [20] and single-view optimization SMPLify-X [32], and match their combination, as these single-view approaches do not fare well under occlusion and are prone to depth ambiguity.
**Comparison to existing mirror approaches.** The prior
method [8] focuses on controlled conditions, using manually corrected 2D ground-truth (GT) and a pre-trained pose estimator for initialization. To compare on fair grounds, we run a variant that also uses 2D GT as input. Table 1 shows that it matches the accuracy up to 7mm. The remaining discrepancy can be attributed to our method not being tuned for GT input and not using a 3D pose estimator, which are known to not generalize well to very extreme motions. The other existing mirror work [25] cannot be compared to as it does not provide joint angles for shape reconstruction and does not evaluate on publicly available datasets.
### Body Shape and Appearance
Figure 1 and Figure 8 show the images synthesized by different neural rendering models. For better visualization, we apply connected component analysis to remove floating artefacts stemming from shadowing and background. On _MirrorHuman-eval_ dataset [8], both of our variants, Mirror A-NeRF and Mirror DANBO, synthesize sharper results with more fine-grained details compared to the mirror-less counterparts. We attribute the improvement to the better pose initialization from Step 2 and the additional appearance supervision by the mirror view, all enabled by our mirror modeling. Table 2 validates the visual improvement quantitatively in terms of image reconstruction accuracy. Both Mirror A-NeRF and Mirror DANBO better learn the body shape and appearance, verifying the effectiveness of our _Mirror-aware Neural Humans_. Additional qualitative
\begin{table}
\begin{tabular}{|l|c c c|c|} \hline Method & 3D Training & Mirror Calibration & 2D Input & PA-MPJPE \(\downarrow\) \\ \hline SMPLify-X [32] & 3D pose prior & n/a & detections & 90.57 \\ A-NeRF [40] & partial (init.) & n/a & detections & 84.70 \\ SPIN [20] & supervised & n/a & detections & 67.42 \\ SPIN [20]+SMPLify [32] & partial (init.) & n/a & detections & **61.47** \\ \hline Ours (Step 2) & unsupervised & automatic & detections & 63.00 \\ Ours (Step 2 + A-NeRF [40]) & unsupervised & automatic & detections & 62.69 \\ Ours (Step 3, w/o occlusion) & unsupervised & automatic & detections & 61.46 \\ Ours (Step 3, w/ occlusion) & unsupervised & automatic & detections & **61.30** \\ \hline Ours (Step 2, using GT input) & unsupervised & automatic & manual & 39.53 \\ MirrorHuman [8] (w/o mirror GT) & partial (init.) & automatic & manual & 33.24 \\ MirrorHuman [8] & partial (init.) & manual & manual & **32.96** \\ \hline \end{tabular}
\end{table}
Table 1: **3D pose reconstruction**. Ours is the only fully-automatic mirror-based method. We match the accuracy of off-the-shelf 3D pose estimators with only 2D detections as input and reproduce results of mirror methods using GT input [8].
\begin{table}
\begin{tabular}{|l|c c|} \hline Method & Cam 6 PSNR \(\uparrow\) & Cam 6 SSIM \(\uparrow\) \\ \hline A-NeRF [40] & 25.52 & 0.8662 \\ Ours (Mirror A-NeRF w/o Occlusion) & **25.89** & **0.9210** \\ DANBO [41] & 28.97 & 0.9193 \\ Ours (Mirror DANBO w/o Occlusion) & **31.87** & **0.9522** \\ \hline \end{tabular}
\end{table}
Table 2: **Quantitative image reconstruction accuracy** on _MirrorHuman-eval_ dataset in terms of PSNR and SSIM. A-NeRF [40] and DANBO [41] without mirror remain blurry, leading to lower scores.
Figure 7: **Qualitative results of pose refinement on a new diverse sequence, internet video, and _MirrorHuman-eval_ dataset [8]. Our Mirror A-NeRF aligns the skeleton model well to the images. Left: Before refinement (after Step 2) Right: After the volumetric model refinement (Step 3).**
results are shown in the supplemental document.
### Ablation Study
To analyze the effect of our model choices in Step 2, we use camera 3 in the _MirrorHuman-eval_ dataset over all 19 joints that have ground truth. We perform experiments with different configurations for the joint optimization of the bone factors, global position, and mirror parameters. Table 3 validates that each component contributes to the final result. The effect of and robustness to different weights of the smoothness terms are evaluated and explained in the supplemental material.
We also analyze the different ways of integrating mirror constraints and occlusion handling on cameras 6 and 7. Table 4 presents the pose refinement outcomes after the neural rendering steps (Section 3.3). Running A-NeRF with our Step 2 poses already improves due to the more accurate pose initialization. Our Mirror A-NeRF further improves, as it enables pixel-level multi-view (real and mirror pose) refinement. Taking occlusion into account further surpasses the two baselines. Note that the total improvement of handling occlusions computed over all frames and all joints is small and we therefore only enable it on sequences that include a sufficient number of occlusion frames.
## 5 Limitations and Future Work
Our 3D reconstruction algorithm only works in cases where the person's mirror image is visible most of the time, restricting the camera placement close to 20-to-70 degrees to the mirror. When the camera view direction is close to parallel or orthogonal to the mirror, 3D triangulation is unreliable. Moreover, the 3D reconstruction is sensitive to the initial reconstruction of the ground plane and focal length, e.g., errors emerging when bystanders with varying heights violate the constant height assumption. Moreover, since we rely on an estimated segmentation mask, sometimes body parts are cut out or the shadow or other background parts in the scene are represented in the 3D volumetric model. In the future, we will attempt to apply similar techniques to animal motion capture, which, however, requires redefining our upright standing assumption and filtering in Step 1.
\begin{table}
\begin{tabular}{|l|c c|} \hline Method & Cam 6 PA-MPJPE \(\downarrow\) & Cam 7 PA-MPJPE \(\downarrow\) \\ \hline A-NeRF [40] & 86.21 & 89.10 \\ Ours (Step 2 + A-NeRF) & 51.21 & 58.61 \\ Ours (Mirror A-NeRF) & 48.84 & 57.82 \\ Ours (Mirror A-NeRF w/ occlusion) & **48.51** & **57.32** \\ \hline \end{tabular}
\end{table}
Table 4: **Pose refinement on the _MirrorHuman-eval_ dataset [8] on videos with occlusions**. With occlusion handling, Mirror-Aware Neural Humans improves upon A-NeRF [40].
Figure 8: **Neural body reconstruction** on three different subjects from the additional qualitative sequences, including loose clothing, challenging poses, and casual day-to-day motion. Our Mirror-Aware Neural Humans reconstructs the body details considerably benefiting from the mirror model and skeleton.
\begin{table}
\begin{tabular}{|l|c|c|} \hline Components removed from model & N-MPJPE \(\downarrow\) & PA-MPJPE \(\downarrow\) \\ \hline Base w/o smooth. and feet constr. & 99.94 & 63.03 \\ w/o location smooth. & 93.79 & 57.98 \\ w/o orientation smooth. & 92.88 & 57.93 \\ w/o feet constraint & 84.73 & 54.06 \\ \hline Full objective & **62.45** & **44.15** \\ \hline \end{tabular}
\end{table}
Table 3: **Ablation study** on the optimal regularization configuration to reduce the influence of noisy detections. All our contributions improve on the baseline.
## 6 Conclusion
Our method reconstructs a 3D neural body model from mirror images by treating the mirror as a second camera and calibrating the camera extrinsics and mirror geometry directly from the motion of people detected in 2D. This alleviates manual annotation and initializing with pre-trained models, which lets us reconstruct difficult and complex human performances for which existing approaches struggle.
Mirror-Aware Neural Humans let anyone with a mirror and camera reconstruct a full 3D human model. In particular, we foresee low-cost medical applications, such as mirror-based pose estimation for rehabilitation [23].
| 人間のモーションキャプチャには、マルチカメラシステムが必要となるか、単一視点入力では奥行きの曖昧さのために信頼性が低くなります。一方、鏡は都市環境で容易に手に入り、1台のカメラで2つの視点を記録できる安価な代替手段となります。しかし、鏡のある設定では、実像と鏡像の間のオクルージョン(遮蔽)を扱うという追加の課題が生じます。本研究では、3D人体ポーズ推定のための既存の鏡利用手法を超えて、形状と密な外観を含む全身モデルの学習に鏡を活用します。主な貢献は、articulated neural radiance fields に鏡の概念を導入し、オクルージョンが起こり得る領域でのサンプル効率を高めたことです。これらの貢献により、既製の2Dポーズ検出から出発して、カメラの自動キャリブレーション、鏡の向きの推定、2Dキーポイント検出の3D骨格ポーズへの持ち上げ、そしてそれを条件とする mirror-aware NeRF の学習までを行う、コンシューマレベルの3Dモーションキャプチャシステムを実現します。オクルージョンを含む困難な鏡のシーンにおいて、身体モデルの学習とオクルージョンの考慮の有効性を実証的に示します。 |
2309.04988 | Analysis of fractional Cauchy problems with some probabilistic
applications | In this paper we give an explicit solution of Dzherbashyan-Caputo-fractional
Cauchy problems related to equations with derivatives of order $\nu k$, for $k$
non-negative integer and $\nu>0$. The solution is obtained by connecting the
differential equation with the roots of the characteristic polynomial and it is
expressed in terms of Mittag-Leffler-type functions. Under some stricter
hypotheses the solution can be expressed as a linear combination of
Mittag-Leffler functions with common fractional order $\nu$. We establish a
probabilistic relationship between the solutions of differential problems with
order $\nu/m$ and $\nu$, for natural $m$. Finally, we use the described method
to solve fractional differential equations arising in the fractionalization of
partial differential equations related to the probability law of planar random
motions with finite velocities. | Fabrizio Cinque, Enzo Orsingher | 2023-09-10T10:38:38 | http://arxiv.org/abs/2309.04988v1 | # Analysis of fractional Cauchy problems with some probabilistic applications
###### Abstract
In this paper we give an explicit solution of Dzherbashyan-Caputo-fractional Cauchy problems related to equations with derivatives of order \(\nu k\), for \(k\) a non-negative integer and \(\nu>0\). The solution is obtained by connecting the differential equation with the roots of the characteristic polynomial and it is expressed in terms of Mittag-Leffler-type functions. Under some stricter hypotheses the solution can be expressed as a linear combination of Mittag-Leffler functions with common fractional order \(\nu\). We establish a probabilistic relationship between the solutions of differential problems with order \(\nu/m\) and \(\nu\), for natural \(m\). Finally, we use the described method to solve fractional differential equations arising in the fractionalization of partial differential equations related to the probability law of planar random motions with finite velocities.
_Keywords:_ Dzherbashyan-Caputo derivative, Mittag-Leffler functions, Fourier transforms, Laplace transforms, Random motions.
_2020 MSC:_ Primary 34A08; Secondary 35R11, 60K99.
## 1 Introduction
In this paper we consider fractional equations of the form
\[\sum_{k=0}^{N}\lambda_{k}\frac{\partial^{\nu k}}{\partial t^{\nu k}}F(t,x)=0, \ \ t\geq 0,\ x\in\mathbb{R},\ \ \mbox{with}\ \ \nu>0, \tag{1.1}\]
where the roots of \(\sum_{k=0}^{N}\lambda_{k}y^{k}=0\) are different from \(0\), and subject to the general initial conditions
\[\frac{\partial^{l}F}{\partial t^{l}}\Big{|}_{t=0}=f_{l}(x),\ \ x\in\mathbb{R}^{d},\ l=0,\ldots,\lceil N\nu\rceil-1. \tag{1.2}\]
The fractional derivatives are in the sense of Dzherbashyan-Caputo, that is, for \(m\in\mathbb{N}_{0}\),
\[\frac{\mathrm{d}^{\nu}}{\mathrm{d}t^{\nu}}f(t)=\left\{\begin{array}{ll}\frac{1} {\Gamma(m-\nu)}\int_{0}^{t}(t-s)^{m-\nu-1}\frac{\mathrm{d}^{m}}{\mathrm{d}s^{m} }f(s)\,\mathrm{d}s&\mbox{ if }m-1<\nu<m\\ \frac{\mathrm{d}^{m}}{\mathrm{d}t^{m}}f(t)&\mbox{ if }\nu=m.\end{array}\right. \tag{1.3}\]
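For intuition, the non-integer case of (1.3) can be evaluated numerically as in the following sketch, which assumes the ordinary \(m\)-th derivative of \(f\) is available as a callable; the quadrature routine and the choices made in the example are implementation assumptions.

```python
import math
from scipy.integrate import quad

def caputo_derivative(f_m, nu, t):
    """Numerical sketch of (1.3) for non-integer nu: f_m is the callable m-th
    (ordinary) derivative of f, with m = ceil(nu).  Adaptive quadrature is used;
    the kernel has an integrable singularity at the endpoint s = t."""
    m = math.ceil(nu)
    kernel = lambda s: (t - s) ** (m - nu - 1) * f_m(s)
    val, _ = quad(kernel, 0.0, t)
    return val / math.gamma(m - nu)

# Example: for f(t) = t^2 and 0 < nu < 1 the Caputo derivative is
# 2 t^(2 - nu) / Gamma(3 - nu); with nu = 0.5 and t = 1.0,
# caputo_derivative(lambda s: 2.0 * s, 0.5, 1.0) is close to 2 / Gamma(2.5) ~ 1.5045.
```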
We recall that the Laplace-transform of the fractional derivative of order \(\nu>0\) can be expressed as, for suitable \(\mu>0\),
\[\int_{0}^{\infty}e^{-\mu t}\frac{\partial^{\nu}}{\partial t^{\nu}}f(t)\, \mathrm{d}t=\mu^{\nu}\int_{0}^{\infty}e^{-\mu t}f(t)\,\mathrm{d}t-\sum_{l=1}^{ \lceil\nu\rceil}\mu^{\nu-l}\frac{\partial^{l-1}}{\partial t^{l-1}}f\Big{|}_{t =0} \tag{1.4}\]
where we assume that \(\lim_{t\longrightarrow\infty}e^{-\mu t}\frac{\partial^{l-1}}{\partial t^{l-1} }f(t)=0,\ l\geq 1\).
Dzherbashyan-Caputo fractional derivatives and the associated Cauchy problems have been intensively studied by many authors in the last decades, see for instance [2, 11, 20] and more recent papers such as [10, 15]. The main interest in this topic arises from its applications in several branches of science, such as physics and mechanics, see [9, 21].
Fractional derivatives and the study of their related Cauchy problems also appear in the theory of stochastic processes. The main novelty of this work lies in the probabilistic relationship we establish between the solutions of fractional Cauchy problems of different orders and in its application to the study of the fractional version of random motions with finite velocities.
Our aim is to extend the results first presented in Orsingher and Beghin [17], where the authors studied the time-fractional telegraph equation and the probabilistic interpretation of its solution. In particular, they were also able to prove that the probability law of the telegraph process subordinated with a reflecting Brownian motion satisfies the time-fractional differential equation
\[\frac{\partial^{2\nu}u}{\partial t^{2\nu}}+2\lambda\frac{\partial^{\nu}u}{ \partial t^{\nu}}=c^{2}\frac{\partial^{2}u}{\partial x^{2}},\quad\mbox{with }\nu=\frac{1}{2},\]
subject to the initial condition \(u(0,x)=\delta(x)\) and \(u_{t}(0,x)=0,\ x\in\mathbb{R}\). Later, these kinds of relationships were extended in a series of papers, see [8, 16]. In particular, in the paper by Orsingher and Toaldo [18] the authors studied the time-space-fractional equation
\[\sum_{j=1}^{m}\lambda_{j}\frac{\partial^{\nu_{j}}u}{\partial t^{\nu_{j}}}=-c ^{2}(-\Delta)^{\beta},\quad 0<\nu_{j}\leq 1,\ \forall\ j,\ \beta\in(0,1], \tag{1.5}\]
subject to the initial condition \(u(0,x)=\delta(x),\ x\in\mathbb{R}^{d}\). In equation (1.5), \(-(-\Delta)^{\beta}\) denotes the fractional Laplacian (see [12] for further details on this operator). The authors proved the relationship between this kind of equations and the probability law of an isotropic \(d\)-dimensional stable process, \(S^{2\beta}\), subordinated with the inverse of a linear combination of independent stable processes, \(L(t)=\inf\{s\geq 0\,:\,\sum_{j=1}^{m}\lambda_{j}^{1/\nu_{j}}H_{\nu_{j}}(s)\geq t\}\), with \(\lambda_{j}>0\ \forall\ j\) and \(H_{\nu_{j}}\) stable processes of order \(\nu_{j}\in(0,1)\).
The novelty here is that the order of the Dzherbashyan-Caputo fractional derivatives appearing in (1.1) can be arbitrarily large, although of the form \(\nu k\). We point out that we state our main results in terms of ordinary fractional differential equations. We then use these results to study partial fractional differential equations by means of the Fourier-transform approach.
In Section 3, thanks to the use of the Laplace transform method, we show that the solution of the fractional Cauchy problem given by (1.1) and (1.2) can be expressed as a combination of Mittag-Leffler-type functions with order of fractionality equal to \(\nu>0\). Then we connect the solutions of problems with different _order of fractionality_ by means of a probability expectation such as, with \(n\in\mathbb{N}\),
\[F_{\nu/n}(t,x)=\mathbb{E}\,F_{\nu}\bigg{(}\prod_{j=1}^{n-1}G_{j}^{(n)}(t),\,x\bigg{)} \tag{1.6}\]
where \(F_{\nu/n}\) and \(F_{\nu}\) are respectively the solution of a problem of degree \(\nu/n\) and \(\nu\) with suitable initial conditions and \(G_{j}^{(n)}(t)\) are positive absolutely continuous random variables for each \(t\geq 0,\ j=1,\ldots,n-1\) (see Section 2.2 for details). The relationship (1.6), where \(F_{\nu/n}\) and \(F_{\nu}\) are Fourier transforms of probability laws, leads to the equivalence (in terms of finite-dimensional distributions) of two processes, with the second one being time-changed through \(\prod_{j=1}^{n-1}G_{j}^{(n)}(t)\).
The problem we study in this paper was inspired by the fractionalization of the higher order partial differential equations governing the probability distribution of the position of random motion moving with a finite number of velocities. For instance, the fourth-order equation
\[\Big{(}\frac{\partial^{2}}{\partial t^{2}}+2\lambda\frac{\partial}{\partial t }+\lambda^{2}\Big{)}\bigg{(}\frac{\partial^{2}}{\partial t^{2}}+2\lambda\frac{ \partial}{\partial t}-c^{2}\Big{(}\frac{\partial^{2}}{\partial x^{2}}+\frac{ \partial^{2}}{\partial y^{2}}\Big{)}\bigg{)}p+c^{4}\frac{\partial^{4}p}{ \partial x^{2}\partial y^{2}}=0, \tag{1.7}\]
which emerges in the analysis of planar stochastic dynamics with orthogonal-symmetrically chosen directions (see [4] for more details). The Fourier transform of equation (1.7) has the form
\[\Big{(}\frac{\partial^{2}}{\partial t^{2}}+2\lambda\frac{\partial}{\partial t }+\lambda^{2}\Big{)}\Big{(}\frac{\partial^{2}}{\partial t^{2}}+2\lambda\frac{ \partial}{\partial t}+c^{2}(\alpha^{2}+\beta^{2})\Big{)}F+c^{4}\alpha^{2} \beta^{2}F=0, \tag{1.8}\]
and its fractional version, with \(\nu>0\), is
\[\frac{\partial^{4\nu}F}{\partial t^{4\nu}}+4\lambda\frac{\partial^{3\nu}F}{ \partial t^{3\nu}}+5\lambda^{2}\frac{\partial^{2\nu}F}{\partial t^{2\nu}}+2 \lambda^{3}\frac{\partial^{\nu}F}{\partial t^{\nu}}+c^{2}(\alpha^{2}+\beta^{ 2})\Big{(}\frac{\partial^{2}}{\partial t^{2}}+2\lambda\frac{\partial}{ \partial t}+\lambda^{2}\Big{)}F+c^{4}\alpha^{2}\beta^{2}F=0. \tag{1.9}\]
Equation (1.9) equivalently arises by considering the Fourier transform of the time-fractional version of equation (1.7).
In the last section of the paper we describe some applications of the theory constructed in Section 3 in the field of random motions with finite velocities. In detail, we study a method to derive the value of the (integer-order) derivatives of the Fourier transform (also called the characteristic function), at the time origin \(t=0\), of the probability law of the position
of the moving particle. Thanks to this result we can build the Cauchy problem solved by the characteristic function of a general motion and study its time-fractional counterpart. We provide two examples concerning planar random movements.
## 2 Preliminary concepts
### Convolutions of Mittag-Leffler-type functions
The generalized Mittag-Leffler (GML) function, also known as the three-parameter Mittag-Leffler function, is a generalization of the exponential function. It was first introduced by Prabhakar [22] and is defined as
\[E^{\gamma}_{\nu,\delta}(x)=\sum_{k=0}^{\infty}\frac{\Gamma(\gamma+k)}{\Gamma( \gamma)\,k!}\frac{x^{k}}{\Gamma(\nu k+\delta)},\ \ \ \ \ \nu,\gamma,\delta\in\mathbb{C},Re(\nu),Re(\gamma),Re(\delta)>0,\ x\in \mathbb{R}. \tag{2.1}\]
By considering \(\gamma=1\), (2.1) reduces to the well-known Mittag-Leffler function, see Pillai [19], Gorenflo _et al._[9].
In this paper, as in many others, we represent the solutions of fractional Cauchy problems in terms of Mittag-Leffler-type functions. Such representations naturally appear in fractional calculus, see Mainardi [14].
For our work it is useful to recall the Laplace transform of function (2.1),
\[\int_{0}^{\infty}e^{-\mu x}x^{\delta-1}E^{\gamma}_{\nu,\delta}(\beta x^{\nu})\,\mathrm{d}x=\frac{\mu^{\nu\gamma-\delta}}{(\mu^{\nu}-\beta)^{\gamma}},\ \ \ \Big{|}\frac{\beta}{\mu^{\nu}}\Big{|}<1. \tag{2.2}\]
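For numerical illustration, the series (2.1) can be evaluated by truncation as in the following sketch; the truncation level and the log-space evaluation are implementation assumptions, not part of the analysis.

```python
import math

def gml(x, nu, delta, gamma=1.0, terms=150):
    """Truncated series (2.1) for the Prabhakar function E^gamma_{nu,delta}(x),
    evaluated in log-space for numerical stability."""
    total = 0.0
    for k in range(terms):
        log_coef = (math.lgamma(gamma + k) - math.lgamma(gamma)
                    - math.lgamma(k + 1) - math.lgamma(nu * k + delta))
        if x == 0:
            term = math.exp(log_coef) if k == 0 else 0.0
        else:
            term = math.copysign(1.0, x) ** k * math.exp(log_coef + k * math.log(abs(x)))
        total += term
    return total

# gamma = 1 recovers the two-parameter Mittag-Leffler function E_{nu,delta};
# for instance gml(1.0, 1.0, 1.0) should be close to e = 2.71828...
```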
Let \(M\in\mathbb{N}\). Below we use the following multivariate analogue of the generalized Mittag-Leffler function
\[E^{\gamma}_{\nu,\delta}(x)=\sum_{k_{1},\ldots,k_{M}=0}^{\infty}\,\prod_{j=1}^ {M}\frac{\Gamma(\gamma_{j}+k_{j})}{\Gamma(\gamma_{j})\,k_{j}!}\,x_{j}^{k_{j}} \,\frac{1}{\Gamma\big{(}\nu\sum_{j=1}^{M}k_{j}+\delta\big{)}}, \tag{2.3}\]
where \(\gamma=(\gamma_{1},\ldots,\gamma_{M})\in\mathbb{C}^{M},\ \nu,\delta\in\mathbb{C}\), with \(Re(\gamma_{1}),\ldots,Re(\gamma_{M}),Re(\nu)>0\), and \(x\in\mathbb{C}^{M}\). Function (2.3) is a particular case of the multivariate Mittag-Leffler introduced by Saxena _et al._[23] and used in Cinque [3] to represent the distribution of the sum of independent generalized Mittag-Leffler random variables.
**Lemma 2.1**.: _Let \(M\in\mathbb{N}\) and \(t\geq 0\). Also assume that \(\gamma_{1},\ldots,\gamma_{M}\in\mathbb{C},\ \nu,\delta_{1},\ldots,\delta_{M}\in\mathbb{C}\setminus\{0\}\) such that \(Re(\gamma_{1}),\ldots,Re(\gamma_{M}),Re(\nu)>0\) and \(\eta_{1}\neq\cdots\neq\eta_{M}\in\mathbb{C}\). Then,_
\[\Bigg{(}\mathop{*}\limits_{i=1}^{M}x^{\delta_{i}-1}E^{\gamma_{i}}_{\nu,\delta_{i}}(\eta_{i}x^{\nu})\Bigg{)}(t)=t^{\sum_{h=1}^{M}\delta_{h}-1}\,E^{(\gamma_{1},\ldots,\gamma_{M})}_{\nu,\sum_{h=1}^{M}\delta_{h}}\big{(}\eta_{1}t^{\nu},\ldots,\eta_{M}t^{\nu}\big{)}. \tag{2.4}\]
Proof.: It is sufficient to show that for \(n\in\mathbb{N}\) and suitable \(\nu,\delta_{0},\delta,\gamma_{0},\ldots,\gamma_{n},\eta_{0},\ldots,\eta_{n},\)
\[\Bigg{(}x^{\delta_{0}-1}E_{\nu,\delta_{0}}^{\gamma_{0}}(\eta_{0}x^{\nu})*E_{\nu, \delta}^{(\gamma_{1},\ldots,\gamma_{n})}(\eta_{1}x^{\nu},\ldots,\eta_{n}x^{\nu })\Bigg{)}(t)=t^{\delta_{0}+\delta-1}E_{\nu,\delta_{0}+\delta}^{(\gamma_{0}, \gamma_{1},\ldots,\gamma_{n})}(\eta_{0}t^{\nu},\eta_{1}t^{\nu},\ldots,\eta_{n} t^{\nu}).\]
Indeed,
\[\Bigg{(}x^{\delta_{0}-1}E_{\nu,\delta_{0}}^{\gamma_{0}}(\eta_{0}x ^{\nu})*E_{\nu,\delta}^{(\gamma_{1},\ldots,\gamma_{n})}(\eta_{1}x^{\nu},\ldots,\eta_{n}x^{\nu})\Bigg{)}(t)\] \[\quad=\sum_{k_{0}=0}^{\infty}\frac{\Gamma(\gamma_{0}+k_{0})}{ \Gamma(\gamma_{0})\,k_{0}!}\eta_{0}^{k_{0}}\sum_{k_{1},\ldots,k_{n}=0}^{ \infty}\Bigg{(}\prod_{j=1}^{n}\frac{\Gamma(\gamma_{j}+k_{j})}{\Gamma(\gamma_{j })\,k_{j}!}\eta_{j}^{k_{j}}\Bigg{)}\int_{0}^{t}\frac{(t-x)^{\nu k_{0}+\delta_{ 0}-1}x^{\nu\sum_{j=1}^{n}k_{j}+\delta-1}}{\Gamma(\nu k_{0}+\delta_{0})\Gamma( \nu\sum_{j=1}^{n}k_{j}+\delta)}\,\mathrm{d}x\] \[\quad=\sum_{k_{0}=0}^{\infty}\frac{\Gamma(\gamma_{0}+k_{0})}{ \Gamma(\gamma_{0})\,k_{0}!}\eta_{0}^{k_{0}}\sum_{k_{1},\ldots,k_{n}=0}^{ \infty}\Bigg{(}\prod_{j=1}^{n}\frac{\Gamma(\gamma_{j}+k_{j})}{\Gamma(\gamma_{j })\,k_{j}!}\eta_{j}^{k_{j}}\Bigg{)}\frac{t^{\nu\sum_{j=0}^{n}k_{j}+\delta_{0}+ \delta-1}}{\Gamma(\nu\sum_{j=0}^{n}k_{j}+\delta_{0}+\delta)}\] \[\quad=\sum_{k_{0},\ldots,k_{n}=0}^{\infty}\Bigg{(}\prod_{j=0}^{n} \frac{\Gamma(\gamma_{j}+k_{j})}{\Gamma(\gamma_{j})\,k_{j}!}\Big{(}\eta_{j}t^{ \nu}\Big{)}^{k_{j}}\Bigg{)}\frac{t^{\delta_{0}+\delta-1}}{\Gamma(\nu\sum_{j= 0}^{n}k_{j}+\delta_{0}+\delta)}.\]
For the convolution of \(M\) two-parameter Mittag-Leffler functions we can derive an expression in terms of a linear combination of \(M\) two-parameter Mittag-Leffler functions all having the same parameters.
**Proposition 2.1**.: _Let \(M\in\mathbb{N}\) and \(t\geq 0\). Also assume that \(\nu,\delta_{1},\ldots,\delta_{M}\in\mathbb{C}\setminus\{0\}\) such that \(Re(\nu)>0\) and \(\eta_{1}\neq\cdots\neq\eta_{M}\in\mathbb{C}\). Then,_
\[\Bigg{(}\mathop{\ast}_{i=1}^{M}x^{\delta_{i}-1}E_{\nu,\delta_{i}}(\eta_{i}x^{\nu})\Bigg{)}(t)=t^{\sum_{h=1}^{M}\delta_{h}-1}\sum_{i=1}^{M}\frac{\eta_{i}^{M-1}}{\prod_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{M}(\eta_{i}-\eta_{j})}E_{\nu,\sum_{h=1}^{M}\delta_{h}}\big{(}\eta_{i}t^{\nu}\big{)} \tag{2.5}\]
_where the convolution is performed with respect to the non-negative variable \(x\geq 0\)._
Proof.: First we recall that for \(n,M\in\mathbb{N}_{0}\) and \(\eta_{1}\neq\cdots\neq\eta_{M}\in\mathbb{C}\setminus\{0\}\),
\[\sum_{i=1}^{M}\frac{\eta_{i}^{n}}{\prod_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{M}(\eta_{i}-\eta_{j})}=0,\ \ \text{with}\ \ n\leq M-2. \tag{2.6}\]
Then, we also note that the right-hand side of formula (2.5) can be also written as
\[t^{\sum_{h=1}^{M}\delta_{h}-1}\sum_{i=1}^{M}\frac{\eta_{i}^{M-1}}{\prod_{ \begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{M}(\eta_{i}-\eta_{j})}E_{\nu,\sum_{h=1}^{M}\delta_{h} }\big{(}\eta_{i}t^{\nu}\big{)}=\sum_{k=0}^{\infty}\frac{t^{\nu k+\sum_{h=1}^{ M}\delta_{h}-1}}{\Gamma(\nu k+\sum_{h=1}^{M}\delta_{h})}\sum_{i=1}^{M}\frac{\eta_{i}^{k+M -1}}{\prod_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{M}(\eta_{i}-\eta_{j})}. \tag{2.7}\]
We now proceed by induction. The induction base (M=2) can be found in Orsingher and Beghin [17]. Now, assume that (2.5) holds for \(M-1\).
\[\left(\mathop{\ast}_{i=1}^{M}x^{\delta_{i}-1}E_{\nu,\delta_{i}}(\eta_{i}x^{\nu})\right)(t)\] \[\quad=\sum_{i=1}^{M-1}\frac{\eta_{i}^{M-2}}{\prod_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{M-1}(\eta_{i}-\eta_{j})}\int_{0}^{t}x^{\delta_{M}-1}E_{\nu,\delta_{M}}\big{(}\eta_{M}x^{\nu}\big{)}(t-x)^{\sum_{h=1}^{M-1}\delta_{h}-1}E_{\nu,\sum_{h=1}^{M-1}\delta_{h}}\Big{(}\eta_{i}(t-x)^{\nu}\Big{)}\,\mathrm{d}x \tag{2.8}\] \[\quad=\sum_{i=1}^{M-1}\frac{\eta_{i}^{M-2}}{\prod_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{M-1}(\eta_{i}-\eta_{j})}\sum_{k=0}^{\infty}\frac{t^{\nu k+\sum_{h=1}^{M}\delta_{h}-1}}{\Gamma\big{(}\nu k+\sum_{h=1}^{M}\delta_{h}\big{)}}\Big{(}\frac{\eta_{i}^{k+1}}{\eta_{i}-\eta_{M}}+\frac{\eta_{M}^{k+1}}{\eta_{M}-\eta_{i}}\Big{)}\] \[\quad=\sum_{i=1}^{M-1}\frac{\eta_{i}^{M-1}\,t^{\sum_{h=1}^{M}\delta_{h}-1}}{\prod_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{M}(\eta_{i}-\eta_{j})}\,E_{\nu,\sum_{h=1}^{M}\delta_{h}}\big{(}\eta_{i}t^{\nu}\big{)}-t^{\sum_{h=1}^{M}\delta_{h}-1}\,E_{\nu,\sum_{h=1}^{M}\delta_{h}}\big{(}\eta_{M}t^{\nu}\big{)}\,\eta_{M}\,\sum_{i=1}^{M-1}\,\frac{\eta_{i}^{M-2}}{\prod_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{M}(\eta_{i}-\eta_{j})} \tag{2.9}\] \[\quad=\sum_{i=1}^{M-1}\frac{\eta_{i}^{M-1}\,t^{\sum_{h=1}^{M}\delta_{h}-1}}{\prod_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{M}(\eta_{i}-\eta_{j})}\,E_{\nu,\sum_{h=1}^{M}\delta_{h}}\big{(}\eta_{i}t^{\nu}\big{)}+t^{\sum_{h=1}^{M}\delta_{h}-1}\,E_{\nu,\sum_{h=1}^{M}\delta_{h}}\big{(}\eta_{M}t^{\nu}\big{)}\frac{\eta_{M}^{M-1}}{\prod_{\begin{subarray}{c}j=1\\ j\neq M\end{subarray}}^{M}(\eta_{M}-\eta_{j})}.\]
where in step (2.8) we used the induction base (i.e. with \(M=2\)) written as in (2.7), and in step (2.9) we suitably used formula (2.6).
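As a quick sanity check of Proposition 2.1 (not part of the proof), one can take the classical case \(\nu=\delta_{1}=\dots=\delta_{M}=1\), where \(E_{1,1}(\eta x)=e^{\eta x}\) and the right-hand side of (2.5) reduces to the familiar partial-fraction expression \(\sum_{i}e^{\eta_{i}t}/\prod_{j\neq i}(\eta_{i}-\eta_{j})\). The short numerical sketch below, with illustrative values, compares a discretized threefold convolution with that closed form.

```python
import numpy as np

eta = np.array([-0.3, -1.0, -2.5])          # illustrative, pairwise distinct
dx = 1e-3
x = np.arange(0, 6, dx)

conv = np.exp(eta[0] * x)
for e in eta[1:]:
    conv = np.convolve(conv, np.exp(e * x))[:len(x)] * dx   # discrete approximation of the convolution

closed = sum(np.exp(ei * x) / np.prod([ei - ej for ej in eta if ej != ei]) for ei in eta)
print(np.max(np.abs(conv - closed)))        # should be small (of the order of dx)
```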
### Generalization of absolute normal distribution
In [1] the authors introduced the following absolutely continuous, positive random variables. Let \(n\in\mathbb{N}\) and \(y>0\),
\[P\{G_{j}^{(n)}(t)\in\mathrm{d}y\}=\frac{y^{j-1}\,\mathrm{d}y}{n^{\frac{j}{n-1} -1}t^{\frac{j}{n(n-1)}}\Gamma(j/n)}e^{-\frac{y^{n}}{(n^{n}t)^{\frac{1}{n-1}}}},\quad t>0,\ j=1,\ \ldots,n-1. \tag{2.10}\]
Note that in the case of \(n=2\) we have only one element and \(G_{1}^{(2)}(t)=|B(2t)|,\ t\geq 0\), with \(B\) being a standard Brownian motion.
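Since (2.10) is a generalized gamma density, the variables \(G_{j}^{(n)}(t)\) are straightforward to simulate. The following hedged sketch (ours, not from [1]) samples them as \(\big{(}(n^{n}t)^{1/(n-1)}W\big{)}^{1/n}\) with \(W\sim\mathrm{Gamma}(j/n,1)\), an elementary reparametrization of (2.10), and checks the case \(n=2\), \(j=1\) against \(|B(2t)|\).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_G(n, j, t, size):
    # G_j^(n)(t) = ((n^n t)^(1/(n-1)) * W)^(1/n),  W ~ Gamma(j/n, 1)
    w = rng.gamma(shape=j / n, scale=1.0, size=size)
    return ((n**n * t) ** (1.0 / (n - 1)) * w) ** (1.0 / n)

t, size = 1.5, 200_000
g = sample_G(2, 1, t, size)                              # should follow the law of |B(2t)|
b = np.abs(rng.normal(scale=np.sqrt(2 * t), size=size))
print(g.mean(), b.mean())                                # both close to 2*sqrt(t/pi)
```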
If \(G_{1}^{(n)},\ldots,G_{n-1}^{(n)}\) are independent, then the joint density reads, with \(y_{1},\ldots,y_{n-1}>0\),
\[P\Bigg{\{}\bigcap_{j=1}^{n-1}\big{\{}G_{j}^{(n)}(t)\in\mathrm{d}y_{j}\big{\}} \Bigg{\}}=\Big{(}\frac{n}{2\pi}\Big{)}^{\frac{n-1}{2}}\frac{1}{\sqrt{t}}\Bigg{(} \prod_{j=1}^{n-1}y_{j}^{j-1}\,\,\mathrm{d}y_{j}\Bigg{)}\,e^{-(n^{n}t)^{\frac{- 1}{n-1}}\sum_{j=1}^{n-1}y_{j}^{n}}. \tag{2.11}\]
Let \(t>0\) and \(n\geq 2\). It is easy to derive that the Mellin-transform of distribution (2.10) reads, for \(s>0\),
\[\int_{0}^{\infty}y^{s-1}f_{G_{j}^{(n)}(t)}(y)\,\mathrm{d}y=\Big{(}nt^{1/n} \Big{)}^{\frac{s-1}{n-1}}\frac{\Gamma\big{(}\frac{s+j-1}{n}\big{)}}{\Gamma \big{(}\frac{j}{n}\big{)}},\quad j=1,\ldots,n-1.\]
In the independence case, the Mellin-transform of the density, \(f_{G^{(n)}(t)}\), of the product \(G^{(n)}(t)=\prod_{j=1}^{n-1}G^{(n)}_{j}(t)\), is, with \(s>0\),
\[\int_{0}^{\infty}y^{s-1}f_{G^{(n)}(t)}(y)\,\mathrm{d}y=\prod_{j=1}^{n-1}\Bigl{(}nt^{1/n}\Bigr{)}^{\frac{s-1}{n-1}}\frac{\Gamma\bigl{(}\frac{s+j-1}{n}\bigr{)}}{\Gamma\bigl{(}\frac{j}{n}\bigr{)}}=\frac{t^{\frac{s-1}{n}}}{\Gamma\bigl{(}\frac{s-1}{n}+1\bigr{)}}\Gamma(s),\]
where in the last equality we used the following \(n\)-multiplication formula of Gamma function for \(z=1/n\) and \(s/n\),
\[\prod_{j=1}^{n-1}\Gamma\Bigl{(}z+\frac{j-1}{n}\Bigr{)}=\frac{(2\pi)^{\frac{n-1 }{2}}n^{\frac{1}{2}-nz}\,\Gamma(nz)}{\Gamma\Bigl{(}z+\frac{n-1}{n}\Bigr{)}}.\]
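The following one-line check (purely illustrative, not part of the derivation) confirms the \(n\)-multiplication formula numerically for a generic argument.

```python
import mpmath as mp

n, z = 5, mp.mpf("0.37")
lhs = mp.fprod(mp.gamma(z + (j - 1) / mp.mpf(n)) for j in range(1, n))
rhs = ((2 * mp.pi) ** ((n - 1) / mp.mpf(2)) * n ** (mp.mpf(1) / 2 - n * z)
       * mp.gamma(n * z) / mp.gamma(z + (n - 1) / mp.mpf(n)))
print(lhs, rhs)   # equal up to numerical precision
```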
## 3 Fractional differential Cauchy problem
In this section we derive an explicit formula for the solution to the fractional Cauchy problem given by (1.1) and (1.2). Hereafter we consider functions \(f:[0,\infty)\times\mathbb{R}^{d}\longrightarrow\mathbb{R}\) such that \(\lim_{t\longrightarrow\infty}e^{-\mu t}\frac{\partial^{l-1}}{\partial t^{l-1}}f(t,x)=0\) for all \(l\) and \(x\).
**Theorem 3.1**.: _Let \(d,N\in\mathbb{N},\ \nu>0\) and \(\lambda_{0},\dots,\lambda_{N}\in\mathbb{R}\). If_
\[\sum_{k=0}^{N}\lambda_{k}x^{k}=\prod_{j=1}^{M}(x-\eta_{j})^{m_{j}}\ \ \text{with}\ \ \eta_{1},\dots,\eta_{M}\in\mathbb{C}\setminus\{0\}, \tag{3.1}\]
_then, the solution to the fractional Cauchy problem of parameter \(\nu\)_
\[\begin{cases}\sum_{k=0}^{N}\lambda_{k}\frac{\partial^{\nu k}}{ \partial t^{\nu k}}F(t,x)=0,\ \ t\geq 0,\ x\in\mathbb{R}^{d}\\ \frac{\partial^{l}F}{\partial t^{l}}\Big{|}_{t=0}=f_{l}(x),\ \ x\in\mathbb{R}^{d},\ l=0, \dots,\lceil\nu N\rceil-1,\end{cases} \tag{3.2}\]
_is the function \(F:[0,\infty)\times\mathbb{R}^{d}\longrightarrow\mathbb{R}\) given by_
\[F(t,x)=\sum_{l=0}^{\lceil\nu N\rceil-1}f_{l}(x)\sum_{k=k_{l}}^{N}\lambda_{k}\, t^{\nu(N-k)+l}\,E^{(m_{1},\dots,m_{M})}_{\nu,\,\nu(N-k)+l+1}\Bigl{(}\eta_{1}t^{ \nu},\dots,\eta_{M}t^{\nu}\Bigr{)}, \tag{3.3}\]
_with \(k_{l}=\min\{k=1,\dots,N\,:\,\nu k>l\},\ l=0,\dots,\lceil\nu N\rceil-1\)._
Note that \(k_{0}=1\) and \(l-1<\nu k\leq l\) for all \(k_{l-1}\leq k<k_{l}\). Formula (3.3) can be also written inverting the sums into \(\sum_{k=1}^{N}\sum_{l=0}^{\lceil\nu k\rceil-1}\).
Condition (3.1) implies that \(\eta_{1},\dots,\eta_{M}\) are the \(M\) roots of the \(N\)-th order polynomial with coefficients \(\lambda_{0},\dots,\lambda_{N}\), with algebraic multiplicities \(m_{1},\dots,m_{M}\geq 1\), respectively. In the case \(M=N\), all the roots have algebraic multiplicity equal to \(1\) and the solution can be expressed in terms of a combination of Mittag-Leffler functions (see Theorem 3.3).
Proof.: By means of the \(t\)-Laplace transform, the differential equation in problem (3.2) turns into, for \(\mu\geq 0\) (we use the notation \(G(\mu,x)=\mathcal{L}(F)(\mu,x)=\int_{0}^{\infty}e^{-\mu t}F(t,x)\,\mathrm{d}t\) and keep in mind formula (1.4))
\[0=\mathcal{L}\!\left(\sum_{k=0}^{N}\lambda_{k}\frac{\partial^{\nu k}}{ \partial t^{\nu k}}F\right)=\sum_{k=0}^{N}\lambda_{k}\,\mathcal{L}\!\left( \frac{\partial^{\nu k}}{\partial t^{\nu k}}F\right)=\lambda_{0}G+\sum_{k=1}^{N }\lambda_{k}\Big{[}\mu^{\nu k}G-\sum_{l=1}^{[\nu k]}\mu^{\nu k-l}f_{l-1}\Big{]},\]
which gives
\[G(\mu,x)=\frac{\sum_{k=1}^{N}\lambda_{k}\sum_{l=1}^{[\nu k]}\mu^ {\nu k-l}f_{l-1}(x)}{\sum_{k=0}^{N}\lambda_{k}\mu^{\nu k}}=\frac{ \sum_{l=1}^{[\nu N]}f_{l-1}(x)\sum_{k=k_{l-1}}^{N}\!\lambda_{k}\, \mu^{\nu k-l}}{\prod_{h=1}^{M}\!\big{(}\mu^{\nu}-\eta_{h}\big{)}^{ m_{h}}}, \tag{3.4}\]
where we used hypothesis (3.1) and \(k_{l-1}\) is defined in the statement.
We now compute the \(\mu\)-Laplace inverse of the functions \(\mu^{\nu k-l}/\prod_{h=1}^{M}\!\big{(}\mu^{\nu}-\eta_{h}\big{)}^{m_{h}}\), for \(l=1,\ldots,\lceil N\nu\rceil\) and \(k=k_{l-1},\ldots,N\), by properly applying formula (2.2). Let us consider \(M_{k}=\min\{n\,:\,\sum_{h=1}^{n}m_{h}\geq k\}\), therefore \(M_{1}=1\) because \(m_{1}\geq 1\) and \(M_{N}=M\). Clearly, \(\sum_{h=1}^{M_{k}}m_{h}\geq k>\sum_{h=1}^{M_{k}-1}m_{h}\) and \(m_{M_{k}}\geq k-\sum_{h=1}^{M_{k}-1}m_{h}\); clearly \(\sum_{h=1}^{M}m_{h}=N\). We can decompose \(\nu k-l\) as follows (it is not the only way):
\[\nu k-l =\nu\Big{(}\sum_{h=1}^{M_{k}-1}m_{h}+k-\sum_{h=1}^{M_{k}-1}m_{h} \Big{)}-l\,\frac{\sum_{h=1}^{M}m_{h}}{N}\] \[=\nu\sum_{h=1}^{M_{k}-1}m_{h}+\nu\Big{(}k-\sum_{h=1}^{M_{k}-1}m_ {h}\pm\sum_{h=M_{k}}^{M}m_{h}\Big{)}-l\,\frac{\sum_{h=1}^{M}m_{h}}{N}\] \[=\sum_{h=1}^{M_{k}-1}\Big{(}\nu m_{h}-l\frac{m_{h}}{N}\Big{)}+ \Bigg{[}\nu m_{M_{k}}-\Big{(}\nu m_{M_{k}}-\nu k+\nu\sum_{h=1}^{M_{k}-1}m_{h} +l\frac{m_{M_{k}}}{N}\Big{)}\Bigg{]}\] \[\quad+\sum_{h=M_{k}+1}^{M}\Bigg{[}\nu m_{h}-\Big{(}\nu m_{h}+l \frac{m_{h}}{N}\Big{)}\Bigg{]}. \tag{3.5}\]
In view of (3.5) we can write, by denoting with \(\mathcal{L}^{-1}\) the inverse \(\mu\)-Laplace transform operator, for \(l=1,\ldots,\lceil N\nu\rceil\) and \(k=k_{l-1},\ldots,N\)
\[\mathcal{L}^{-1}\Bigg{(}\frac{\mu^{\nu k-l}}{\prod_{h=1}^{M}\! \big{(}\mu^{\nu}-\eta_{h}\big{)}^{m_{h}}}\Bigg{)}(t)\] \[\quad=\prod_{h=1}^{M_{k}-1}\mathcal{L}^{-1}\Bigg{(}\frac{\mu^{\nu m _{h}-lm_{h}/N}}{\big{(}\mu^{\nu}-\eta_{h}\big{)}^{m_{h}}}\Bigg{)}(t)\,\mathcal{L }^{-1}\Bigg{(}\frac{\mu^{\nu m_{M_{k}}-\big{(}\nu\sum_{h=1}^{M_{k}}m_{h}-\nu k+lm_{M_{k}}/N\big{)}}}{\big{(}\mu^{\nu}-\eta_{h}\big{)}^{m_{M_{k}}}} \Bigg{)}(t)\] \[\quad\quad\times\prod_{h=M_{k}+1}^{M}\mathcal{L}^{-1}\Bigg{(} \frac{\mu^{-lm_{h}/N}}{\big{(}\mu^{\nu}-\eta_{h}\big{)}^{m_{h}}}\Bigg{)}(t) \tag{3.6}\]
\[=t^{\nu(N-k)+l-1}E^{(m_{1},\dots,m_{M})}_{\nu,\,\nu(N-k)+l}\big{(}\eta_{1}t^{\nu},\dots,\eta_{M}t^{\nu}\big{)}, \tag{3.7}\]
where in step (3.6) we used (2.2) and in the last step we used Lemma 2.1. Note that in step (3.6) it is necessary to keep the "\(\delta\)" terms greater than \(0\) (see (2.2)), and this is the main reason for using the above decomposition of \(\nu k-l\).
By combining (3.4) and (3.7) we readily obtain result (3.3) (after the change of variable \(l^{\prime}=l-1\)).
**Remark 3.1** (Non-homogeneous equation).: Under the hypotheses of Theorem 3.1 we can easily study the Cauchy problem in the case of a non-homogeneous fractional equation. In detail, for \(g:\mathbb{R}\times\mathbb{R}^{d}\longrightarrow\mathbb{R}\) admitting a \(t\)-Laplace transform, the solution of
\[\left\{\begin{aligned} &\sum_{k=0}^{N}\lambda_{k}\frac{ \partial^{\nu k}}{\partial t^{\nu k}}F(t,x)=g(t,x),\ \ t\geq 0,\ x\in\mathbb{R}^{d}\\ &\frac{\partial^{l}F}{\partial t^{l}}\Big{|}_{t=0}=f_{l}(x),\ \ x\in\mathbb{R}^{d},\ l=0,\dots,\lceil\nu N\rceil-1,\end{aligned}\right.\]
reads
\[F(t,x) \tag{3.8}\] \[=\sum_{l=0}^{\lceil N\nu\rceil-1}f_{l}(x)\sum_{k=k_{l}}^{N} \lambda_{k}\,t^{\nu(N-k)+l}\,E^{(m_{1},\dots,m_{M})}_{\nu,\,\nu(N-k)+l+1} \big{(}\eta t^{\nu}\big{)}-\int_{0}^{t}g(t-y,x)\,y^{\nu N-1}E^{(m_{1},\dots,m_ {M})}_{\nu,\nu N}\big{(}\eta y^{\nu}\big{)}\,\mathrm{d}y,\]
where \(\eta=(\eta_{1},\dots,\eta_{M})\).
The above results easily follows by observing that formula (3.4) becomes
\[G(\mu,x)=\frac{\sum_{l=1}^{\lceil N\nu\rceil}f_{l-1}(x)\sum_{k=k_{l-1}}^{N} \lambda_{k}\,\mu^{\nu k-l}-\mathcal{L}(g)(\mu,x)}{\prod_{h=1}^{M} \big{(}\mu^{\nu}-\eta_{h}\big{)}^{m_{h}}}\]
and we observe that the \(\mu\)-Laplace inverse of the term concerning the function \(g\) is
\[\mathcal{L}^{-1}\Bigg{(}\mathcal{L}(g)(\mu,x)\Big{(}\prod_{h=1}^{M}\big{(}\mu ^{\nu}-\eta_{h}\big{)}^{m_{h}}\Big{)}^{-1}\Bigg{)}(t,x)=\int_{0}^{t}g(t-y,x) \,y^{\nu N-1}E^{(m_{1},\dots,m_{M})}_{\nu,\nu N}\big{(}\eta y^{\nu}\big{)}\, \mathrm{d}y\]
where we used that \(\mathcal{L}^{-1}\Big{(}\Big{(}\prod_{h=1}^{M}\big{(}\mu^{\nu}-\eta_{h}\big{)} ^{m_{h}}\Big{)}^{-1}\Big{)}(t,x)=t^{\nu N-1}E^{(m_{1},\dots,m_{M})}_{\nu,\nu N} \big{(}\eta t^{\nu}\big{)}\) (obtained by proceeding as shown for (3.7)).
Note that in the case of \(g\) being constant with respect to the variable \(t\), the last term of (3.8) reads \(-g(x)t^{\nu N}E^{(m_{1},\dots,m_{M})}_{\nu,\nu N+1}\big{(}\eta t^{\nu}\big{)}\).
**Remark 3.2**.: Consider the real sequence \(\{\nu_{n}\}_{n\in\mathbb{N}}\) such that \(\nu_{n}\longrightarrow\nu>0\) and \(\lceil\nu_{n}N\rceil=\lceil\nu N\rceil>0\ \forall\ n\). Then,
\[F_{\nu}(t,x)=\lim_{n\to\infty}F_{\nu_{n}}(t,x),\ \ t\geq 0,\ x\in\mathbb{R}^{d}, \tag{3.9}\]
where \(F_{\nu},F_{\nu_{n}}\) are respectively the solutions to the problem of parameter \(\nu\) and \(\nu_{n}\ \forall\ n\), with the same initial conditions. This means that we can connect the limit of the solutions (pointwise) to the "limit" of the Cauchy problems (where the initial conditions stay the same because \(\lceil\nu_{n}N\rceil=\lceil\nu N\rceil\ \forall\ n\)).
Result (3.9) comes from the continuity of the function (2.3) with respect to the fractional parameter \(\nu>0\). This can be seen as a consequence of the continuity of the Gamma function on the real half-line and a suitable application of the dominated convergence theorem.
**Theorem 3.2**.: _Let \(d,N,n\in\mathbb{N},\ \nu>0\). Let \(\lambda_{0},\ldots,\lambda_{N}\in\mathbb{R}\) and \(\eta_{1},\ldots,\eta_{M}\in\mathbb{C}\setminus\{0\}\) satisfying condition (3.1). Then, the solution \(F_{\nu/n}\) of the problem of parameter \(\nu/n\)_
\[\begin{cases}\sum_{k=0}^{N}\lambda_{k}\frac{\partial^{\nu k/n}}{ \partial t^{\nu k/n}}F(t,x)=0,\ \ t\geq 0,\ x\in\mathbb{R},\\ \frac{\partial^{l}F}{\partial t^{l}}\Big{|}_{t=0}=f_{l}(x),\ \ x\in \mathbb{R}^{d},\ l=0,\ldots,\left\lceil\frac{N\nu}{n}\right\rceil-1,\end{cases} \tag{3.10}\]
_can be expressed as_
\[F_{\nu/n}(t,x)=\mathbb{E}\,F_{\nu}\bigg{(}\prod_{j=1}^{n-1}G_{j}^{(n)}(t),\,x \bigg{)}, \tag{3.11}\]
_where the \(G_{j}^{(n)}(t)\) are the random variables introduced in Section 2.2 and \(F_{\nu}\) is the solution to a problem of parameter \(\nu\) with suitable initial condition_
\[\begin{cases}\sum_{k=0}^{N}\lambda_{k}\frac{\partial^{\nu k}}{\partial t^{\nu k}}F(t,x)=0,\ \ t\geq 0,\ x\in\mathbb{R}^{d}\\ \frac{\partial^{l}F}{\partial t^{l}}\Big{|}_{t=0}=\begin{cases}f_{h},\ \ l=hn,\ \text{with}\ h=0,\ldots,\lceil N\nu/n\rceil-1,\\ 0,\ \ \text{otherwise}.\end{cases}\end{cases} \tag{3.12}\]
Note that the conditions of the problem (3.10) of order \(\nu/n\) appear in the associated problem (3.12) in the derivatives whose order is a multiple of \(n\), while the other initial conditions are set equal to \(0\). We also point out that all the conditions of the original problem always appear in the related problem since \(n\Big{(}\lceil\nu N/n\rceil-1\Big{)}\leq\lceil\nu N\rceil-1\).
Proof.: We begin by showing a possible way to express the multivariate Mittag-Leffler (2.3) of fractional order \(\nu/n\) in terms of that of fractional order \(\nu\). Remember that for the gamma function, with \(z\in\mathbb{C}\) and \(n\in\mathbb{N}\) we can write (thanks to the \(n\)-multiplication formula of the Gamma function)
\[\Gamma\Big{(}z+\frac{n-1}{n}\Big{)}^{-1}=\frac{\prod_{j=1}^{n-1}\Gamma\Big{(} z+\frac{j-1}{n}\Big{)}}{(2\pi)^{\frac{n-1}{2}}n^{\frac{1}{2}-nz}\Gamma(nz)}\]
\[=\frac{1}{(2\pi)^{\frac{n-1}{2}}n^{\frac{1}{2}-nz}\Gamma(nz)}\prod_{j=1} ^{n-1}\int_{0}^{\infty}e^{-w_{j}}w_{j}^{z+\frac{j-1}{n}-1}\,\mathrm{d}w_{j}. \tag{3.13}\]
Let \(x\in\mathbb{C}^{M}\) and \(L,h>0\),
\[E_{\frac{\nu}{n},\frac{\nu}{n},\frac{\nu}{n}L+h}^{(m_{1},\dots,m _{M})}(x) =\sum_{k_{1},\dots,k_{M}=0}\Biggl{(}\prod_{j=1}^{M}\frac{\Gamma(m_{j}+ k_{j})}{\Gamma(m_{j})\,k_{j}!}x_{j}^{k_{j}}\Biggr{)}\Gamma\Biggl{(}\frac{\nu}{n} \sum_{j=1}^{M}k_{j}+\frac{\nu}{n}L+h\Biggr{)}^{-1} \tag{3.14}\] \[=\sum_{k_{1},\dots,k_{M}=0}\Biggl{(}\prod_{j=1}^{M}\frac{\Gamma( m_{j}+k_{j})}{\Gamma(m_{j})\,k_{j}!}x_{j}^{k_{j}}\Biggr{)}\frac{\Gamma \Bigl{(}\nu\sum_{j=1}^{M}k_{j}+\nu L+nh-(n-1)\Bigr{)}^{-1}}{(2\pi)^{\frac{n-1}{ 2}}n^{\frac{1}{2}-\bigl{(}\nu\sum_{k=1}^{M}k_{h}+\nu L+nh-(n-1)\bigr{)}}}\] \[\quad\times\prod_{j=1}^{n-1}\int_{0}^{\infty}e^{-w_{j}}w_{j}^{ \frac{1}{n}\bigl{(}\nu\sum_{k=1}^{M}k_{h}+\nu L+nh-n+j\bigr{)}-1}\,\mathrm{d}w _{j}\] \[=\frac{n^{\nu L+n(h-1)+1/2}}{(2\pi)^{\frac{n-1}{2}}}\int_{0}^{ \infty}\cdots\int_{0}^{\infty}\prod_{j=1}^{n-1}e^{-w_{j}}w_{j}^{\frac{1}{n} \bigl{(}\nu L+nh-n+j\bigr{)}-1}\,\mathrm{d}w_{j}\] \[\quad\times E_{\nu,\,\nu L+n(h-1)+1}^{(m_{1},\dots,m_{M})}\Bigl{(} x\,n^{\nu}\prod_{j=1}^{n-1}w_{j}^{\nu/n}\Bigr{)} \tag{3.15}\]
where in (3.14) we used (3.13) with \(z=\frac{\nu}{n}\sum_{j=1}^{M}k_{j}+h+\frac{\nu}{n}L-\frac{n-1}{n}\).
Now we apply (3.15) (with \(h=l+1\) and \(L=N-k\)) to formula (3.3) and derive result (3.11). Let us consider \(\eta=(\eta_{1},\dots,\eta_{M})\) given in the hypotheses, then
\[F_{\nu/n}(t,x) =\sum_{l=0}^{\left\lceil\frac{\nu N}{n}\right\rceil-1}f_{l}(x) \sum_{k=k_{l}}^{N}\lambda_{k}\,t^{\frac{\nu}{n}(N-k)+l}\,E_{\frac{\nu}{n},\frac {\nu}{n}(N-k)+l+1}^{(m_{1},\dots,m_{M})}\Bigl{(}\eta_{1}t^{\nu/n},\dots,\eta_{M} t^{\nu/n}\Bigr{)}\] \[=\sum_{l=0}^{\left\lceil\frac{\nu N}{n}\right\rceil-1}f_{l}(x) \sum_{k=k_{l}}^{N}\lambda_{k}\,t^{\frac{\nu}{n}(N-k)+l}\,\frac{n^{\nu(N-k)+ nl+1/2}}{(2\pi)^{\frac{n-1}{2}}}\int_{0}^{\infty}\cdots\int_{0}^{\infty} \Biggl{(}\prod_{j=1}^{n-1}\mathrm{d}w_{j}\Biggr{)}\] \[\quad\times\Biggl{(}\prod_{j=1}^{n-1}e^{-w_{j}}\Biggr{)}\Biggl{(} \prod_{j=1}^{n-1}w_{j}^{\frac{1}{n}\bigl{(}\nu(N-k)+nl-n+j\bigr{)}-1}\Biggr{)}E _{\nu,\,\nu(N-k)+nl+1}^{(m_{1},\dots,m_{M})}\Biggl{(}\eta\Bigl{(}nt^{1/n}\prod_ {j=1}^{n-1}w_{j}^{1/n}\Bigr{)}^{\nu}\Biggr{)}\] \[=\Bigl{(}\frac{n}{2\pi}\Bigr{)}^{\frac{n-1}{2}}\frac{1}{\sqrt{t}} \int_{0}^{\infty}\cdots\int_{0}^{\infty}\Biggl{(}\prod_{j=1}^{n-1}\mathrm{d}y_{ j}\Biggr{)}\Biggl{(}\prod_{j=1}^{n-1}y_{j}^{j-1}\Biggr{)}\Biggl{(}\prod_{j=1}^{n-1}e^{- \frac{y_{j}^{n}}{(n^{n}t)^{\frac{1}{n-1}}}}\Biggr{)}\] \[\quad\times\sum_{l=0}^{\left\lceil\frac{\nu N}{n}\right\rceil-1}f _{l}(x)\sum_{k=k_{l}}^{N}\lambda_{k}\Biggl{(}\prod_{j=1}^{n-1}y_{j}\Biggr{)}^{ \nu(N-k)+nl}E_{\nu,\,\nu(N-k)+nl+1}^{(m_{1},\dots,m_{M})}\Biggl{(}\eta\prod_{j=1 }^{n-1}y_{j}^{\nu}\Biggr{)} \tag{3.16}\]
where in the last step we used the change of variables
\[nt^{1/n}\prod_{j=1}^{n-1}w_{j}^{1/n}=\prod_{j=1}^{n-1}y_{j}\iff w_{j}=\frac{y_{ j}^{n}}{\bigl{(}n^{n}t\bigr{)}^{\frac{1}{n-1}}},\,\,\forall\,\,j\,\Longrightarrow \,\prod_{j=1}^{n-1}\mathrm{d}w_{j}=\frac{\prod_{j=1}^{n-1}\mathrm{d}y_{j}\,y_{j}^ {n-1}}{nt}\]
and we performed some simplifications.
At last, we show that the second line of (3.16) coincides with the time-changed solution \(F_{\nu}\Big{(}\prod_{j=1}^{n-1}G_{j}^{(n)}(t),\,x\Big{)}\) of the associated problem (3.12). Let us denote with \(\tilde{f}_{l}\) the function appearing in the \(l\)-th condition of problem (3.12) and with \(\tilde{k}_{l}=\min\{k=1,\dots,N\,:\,\nu k>l\}\) for \(l=0,\dots,\lceil\nu N\rceil-1\). Then, the solution of the related Cauchy problem reads
\[F_{\nu}(s,x)=\sum_{l=0}^{\lceil\nu N\rceil-1}\tilde{f}_{l}(x)\sum_{k=\tilde{k}_{l}}^{N}\lambda_{k}\,s^{\nu(N-k)+l}\,E_{\nu,\,\nu(N-k)+l+1}^{(m_{1},\dots,m_{M})}\Big{(}\eta_{1}s^{\nu},\dots,\eta_{M}s^{\nu}\Big{)},\]
where the functions \(\tilde{f}_{l}\) are identically null for \(l\neq nh\) with \(h=0,\dots,\lceil\nu N/n\rceil-1\), therefore we can write (removing the indexes of the null terms and performing the change of variable \(l=nh\))
\[F_{\nu}(s,x)=\sum_{h=0}^{\lceil\frac{\nu N}{n}\rceil-1}\tilde{f}_{nh}(x)\sum_{k=\tilde{k}_{nh}}^{N}\lambda_{k}\,s^{\nu(N-k)+nh}\,E_{\nu,\,\nu(N-k)+nh+1}^{(m_{1},\dots,m_{M})}\Big{(}\eta_{1}s^{\nu},\dots,\eta_{M}s^{\nu}\Big{)}.\]
By observing that \(\tilde{k}_{nh}=\min\{k=1,\dots,N:\nu k>nh\}=\min\{k=1,\dots,N:\nu k/n>h\}=k_{h} \ \forall\ h\), we obtain the last line of (3.16) by setting \(s=\prod_{j=1}^{n-1}y_{j}\).
**Remark 3.3** (Brownian subordination).: If \(n=2\), formula (3.11) becomes
\[F_{\nu/2}(t,x)=\mathbb{E}\,F_{\nu}\Big{(}\,|B(2t)|,\,x\Big{)}, \tag{3.17}\]
with \(B\) standard Brownian motion (see Section 2.2).
Furthermore, by keeping in mind (3.17) and iterating the same argument, we obtain that
\[F_{\nu/2^{n}}(t,x)=\mathbb{E}\,F_{\nu}\Big{(}\,|B_{n}(2|B_{n-1}(2|\cdots 2|B_{1}(2t)| \cdots|\,)\,|\,)\,|,\,x\Big{)}, \tag{3.18}\]
where \(B_{1},\dots,B_{n}\) are independent standard Brownian motions and \(F_{\nu}\) is the solution of the associated problem of the form (3.12) with \(2^{n}\) replacing \(n\).
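The subordination relation (3.17) is easy to visualize by Monte Carlo. In the simplest case \(N=1\), \(\nu=1\), the inner solution is \(F_{1}(t)=e^{\eta t}\), and averaging it over \(|B(2t)|\) should reproduce \(E_{1/2,1}(\eta\sqrt{t})\), the solution of the corresponding order-\(1/2\) problem. The sketch below (illustrative values, not from the paper) performs this comparison with a truncated Mittag-Leffler series.

```python
import numpy as np
from math import gamma

def ml(z, nu, K=80):                       # two-parameter Mittag-Leffler E_{nu,1}, truncated series
    return sum(z**k / gamma(nu * k + 1) for k in range(K))

rng = np.random.default_rng(1)
eta, t = -1.0, 1.0
b = np.abs(rng.normal(scale=np.sqrt(2 * t), size=1_000_000))
print(np.mean(np.exp(eta * b)), ml(eta * np.sqrt(t), 0.5))   # the two values should be close
```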
### Algebraic multiplicities equal to 1
In this section we restrict ourselves to the case where the characteristic polynomial in (3.1) has all distinct roots. This hypothesis permits us to present a more elegant result than that of Theorem 3.1.
**Theorem 3.3**.: _Let \(d,N\in\mathbb{N},\ \nu>0\) and \(\lambda_{0},\dots,\lambda_{N}\in\mathbb{R}\). If_
\[\sum_{k=0}^{N}\lambda_{k}x^{k}=\prod_{j=1}^{N}(x-\eta_{j})\ \ \text{with}\ \ \eta_{1},\dots,\eta_{N}\in\mathbb{C}\setminus\{0\}, \tag{3.19}\]
_then, the solution to the fractional Cauchy problem_
\[\begin{cases}\sum_{k=0}^{N}\lambda_{k}\frac{\partial^{\nu k}}{ \partial t^{\nu k}}F(t,x)=0,\ \ t\geq 0,\ x\in\mathbb{R}\\ \frac{\partial^{l}F}{\partial t^{l}}\Big{|}_{t=0}=f_{l}(x),\ \ x\in \mathbb{R}^{d},\ l=0,\dots,\lceil\nu N\rceil-1,\end{cases} \tag{3.20}\]
_is the function \(F:[0,\infty)\times\mathbb{R}^{d}\longrightarrow\mathbb{R}\) given by_
\[F(t,x)=\sum_{h=1}^{N}\sum_{l=0}^{[\nu N]-1}E_{\nu,l+1}\big{(}\eta_{h}t^{\nu} \big{)}f_{l}(x)\,t^{l}\sum_{k=k_{l}}^{N}\frac{\lambda_{k}\,\eta_{h}^{k-1}}{ \prod_{\begin{subarray}{c}j=1\\ j\neq h\end{subarray}}^{N}(\eta_{h}-\eta_{j})}, \tag{3.21}\]
_with \(k_{l}=\min\{k=1,\ldots,N\,:\,\nu k>l\},\ l=0,\ldots,\lceil\nu N\rceil-1\)._
Note that in result (3.21) the fractional order \(\nu\) influences only the fractional order of the Mittag-Leffler function (and the number of initial conditions), so the coefficients of the linear combination are constant (with respect to \(\nu\)). We point out that the series in (3.21) can be inverted becoming \(\sum_{k=1}^{N}\sum_{l=0}^{\lceil\nu k\rceil-1}\).
Proof.: First we note that, for \(n\in\mathbb{N}_{0}\) and \(l\in\mathbb{C}\),
\[E_{\nu,\,n\nu+l}(x)=\frac{E_{\nu,l}(x)}{x^{n}}-\sum_{j=1}^{n}\frac{x^{-j}}{ \Gamma\big{(}(n-j)\nu+l\big{)}}. \tag{3.22}\]
Now, we proceed as in the proof of Theorem 3.1 and we perform the \(t\)-Laplace transform of the equation in problem (3.20). In this case, formula (3.4) reads
\[G(\mu,x)=\frac{\sum_{l=1}^{\lceil\nu N\rceil}f_{l-1}(x)\sum_{k=k_{l-1}}^{N} \lambda_{k}\,\mu^{\nu k-l}}{\prod_{h=1}^{N}\big{(}\mu^{\nu}-\eta_{h} \big{)}}. \tag{3.23}\]
We now invert the functions \(\mu^{\nu k-l}/\prod_{h=1}^{N}\big{(}\mu^{\nu}-\eta_{h}\big{)}\) for \(l=1,\ldots,\lceil\nu N\rceil\) and \(k=k_{l-1},\ldots,N\). We note that
\[\nu k-l=\nu-l\frac{N-k+1}{N}+(k-1)\Big{(}\nu-\frac{l}{N}\Big{)}\]
and therefore we write
\[\mathcal{L}^{-1}\Bigg{(}\frac{\mu^{\nu k-l}}{\prod_{h=1}^{N}\big{(}\mu^{\nu}-\eta_{h}\big{)}}\Bigg{)}(t)\] \[=\mathcal{L}^{-1}\Bigg{(}\frac{\mu^{\nu-l(N-k+1)/N}}{\mu^{\nu}-\eta_{1}}\Bigg{)}(t)\,\prod_{h=2}^{k}\mathcal{L}^{-1}\Bigg{(}\frac{\mu^{\nu-l/N}}{\mu^{\nu}-\eta_{h}}\Bigg{)}(t)\,\prod_{h=k+1}^{N}\mathcal{L}^{-1}\Big{(}\frac{1}{\mu^{\nu}-\eta_{h}}\Big{)}(t)\] \[=t^{l(N-k+1)/N-1}E_{\nu,l\frac{N-k+1}{N}}\big{(}\eta_{1}t^{\nu}\big{)}*\mathop{\ast}_{h=2}^{k}t^{l/N-1}E_{\nu,l/N}\big{(}\eta_{h}t^{\nu}\big{)}*\mathop{\ast}_{h=k+1}^{N}t^{\nu-1}E_{\nu,\nu}\big{(}\eta_{h}t^{\nu}\big{)} \tag{3.24}\] \[=t^{\nu(N-k)+l-1}\sum_{h=1}^{N}\frac{\eta_{h}^{N-1}}{\prod_{\begin{subarray}{c}j=1\\ j\neq h\end{subarray}}^{N}(\eta_{h}-\eta_{j})}\,E_{\nu,\,\nu(N-k)+l}\big{(}\eta_{h}t^{\nu}\big{)} \tag{3.25}\]
\[=\sum_{h=1}^{N}\frac{t^{l-1}\eta_{h}^{k-1}}{\prod_{\begin{subarray}{c}j=1\\ j\neq h\end{subarray}}^{N}(\eta_{h}-\eta_{j})}E_{\nu,l}\big{(}\eta_{h}t^{\nu}\big{)}-\sum_{i=1}^{N-k}\frac{t^{\nu(N-k-i)+l-1}}{\Gamma\big{(}\nu(N-k-i)+l\big{)}}\sum_{h=1}^{N}\frac{\eta_{h}^{N-1-i}}{\prod_{\begin{subarray}{c}j=1\\ j\neq h\end{subarray}}^{N}(\eta_{h}-\eta_{j})} \tag{3.26}\]
\[=\sum_{h=1}^{N}\frac{t^{l-1}\eta_{h}^{k-1}}{\prod_{\begin{subarray}{c}j=1\\ j\neq h\end{subarray}}^{N}(\eta_{h}-\eta_{j})}E_{\nu,l}\big{(}\eta_{h}t^{\nu} \big{)}, \tag{3.27}\]
where in step (3.24) we used Proposition 2.1, in step (3.25) we used (3.22) and changed the order of the sums in the second term, and in step (3.26) we used formula (2.6) (note that \(N-i-1\leq N-2\) for each \(i=1,\ldots,N-k\)). Finally, with formula (3.27) at hand, the inversion of (3.23) yields the claimed result (3.21) (after the change of variable \(l^{\prime}=l-1\)).
We observe that in the case where all the initial conditions are equal to null functions, except the first one, result (3.21) simplifies into
\[F(t,x)=\sum_{h=1}^{N}E_{\nu,1}\big{(}\eta_{h}t^{\nu}\big{)}f_{0}(x)\sum_{k=1}^ {N}\frac{\lambda_{k}\,\eta_{h}^{k-1}}{\prod_{\begin{subarray}{c}j=1\\ j\neq h\end{subarray}}^{N}(\eta_{h}-\eta_{j})}. \tag{3.28}\]
**Remark 3.4** (Integer derivatives).: From Theorem 3.3, by setting \(\nu=1\) in (3.20), we obtain the general solution to the integer order differential Cauchy problem. In particular, under the condition (3.19), we can write
\[F(t,x)=\sum_{h=1}^{N}e^{\eta_{h}t}\sum_{l=0}^{N-1}f_{l}(x)\sum_{k=l+1}^{N}\frac{\lambda_{k}\,\eta_{h}^{k-1-l}}{\prod_{\begin{subarray}{c}j=1\\ j\neq h\end{subarray}}^{N}(\eta_{h}-\eta_{j})}. \tag{3.29}\]
Note that in this case \(k_{l}=l+1\)\(\forall\)\(l\). Furthermore, for \(l\geq 1\), we can write
\[E_{1,l+1}(x)=\frac{1}{x^{l}}\Bigg{(}e^{x}-\sum_{i=0}^{l-1}\frac{x^{i}}{i!} \Bigg{)}. \tag{3.30}\]
In light of (3.30), formula (3.21) can be written as
\[F(t,x)=\sum_{h=1}^{N}\sum_{l=0}^{N-1}\Bigg{(}e^{\eta_{h}t}-\sum_{i=0}^{l-1}\frac{(\eta_{h}t)^{i}}{i!}\Bigg{)}\frac{f_{l}(x)}{\eta_{h}^{l}}\sum_{k=k_{l}}^{N}\frac{\lambda_{k}\,\eta_{h}^{k-1}}{\prod_{\begin{subarray}{c}j=1\\ j\neq h\end{subarray}}^{N}(\eta_{h}-\eta_{j})}\] \[=\sum_{h=1}^{N}e^{\eta_{h}t}\sum_{l=0}^{N-1}f_{l}(x)\sum_{k=l+1}^{N}\frac{\lambda_{k}\,\eta_{h}^{k-l-1}}{\prod_{\begin{subarray}{c}j=1\\ j\neq h\end{subarray}}^{N}(\eta_{h}-\eta_{j})}-\sum_{l=0}^{N-1}f_{l}(x)\sum_{i=0}^{l-1}\frac{t^{i}}{i!}\sum_{k=l+1}^{N}\lambda_{k}\sum_{h=1}^{N}\frac{\eta_{h}^{k-l-1+i}}{\prod_{\begin{subarray}{c}j=1\\ j\neq h\end{subarray}}^{N}(\eta_{h}-\eta_{j})}\]
and the last term is equal to \(0\) because the last sum is always null thanks to formula (2.6) (in fact, \(k-l-1+i\leq k-l-1+(l-1)\leq k-2\leq N-2\) ).
Finally, we observe that in the case of null initial conditions, except the first one, formula (3.29) coincides with the solution (3.28) (where \(\nu>0\)) with the exponential function replacing the Mittag-Leffler function.
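As an illustrative numerical cross-check of (3.29) (ours, not part of the paper), one can compare the closed form with a direct integration of the integer-order equation for a cubic characteristic polynomial with distinct real roots; the two should agree up to the integration tolerance. Parameter and initial values below are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

eta = np.array([-0.5, -1.5, -3.0])                  # distinct roots, lambda_N = 1
lam = np.poly(eta)[::-1]                            # lam[k] = lambda_k, k = 0..3
f = np.array([1.0, 0.3, -0.7])                      # F(0), F'(0), F''(0)

def closed_form(t):
    out = 0.0
    for h, eh in enumerate(eta):
        denom = np.prod([eh - ej for j, ej in enumerate(eta) if j != h])
        for l in range(3):
            out += np.exp(eh * t) * f[l] * sum(lam[k] * eh**(k - 1 - l) for k in range(l + 1, 4)) / denom
    return out

# companion-form ODE y' = A y with y = (F, F', F'')
A = np.array([[0, 1, 0], [0, 0, 1], [-lam[0], -lam[1], -lam[2]]])
sol = solve_ivp(lambda t, y: A @ y, (0, 4), f, dense_output=True, rtol=1e-9, atol=1e-12)

ts = np.linspace(0, 4, 9)
print(np.max(np.abs([closed_form(t) - sol.sol(t)[0] for t in ts])))   # should be tiny
```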
**Remark 3.5**.: We point out that the result in Theorem 3.2 can be directly proved also from formula (3.21). In particular, the case with \(\nu/n=1/n\) follows by suitably applying the following representation of the Mittag-Leffler function, with \(h\in\mathbb{N}\),
\[E_{1/n,h}(x) =\sqrt{\frac{n}{(2\pi)^{n-1}}}\frac{1}{x^{n(h-1)}}\int_{0}^{ \infty}\cdots\int_{0}^{\infty}\Biggl{(}\prod_{j=1}^{n-1}e^{-y_{j}}y_{j}^{j/n-1 }\,\mathrm{d}y_{j}\Biggr{)}\Biggl{(}e^{nx\bigl{(}\prod_{j=1}^{n-1}y_{j}\bigr{)} ^{1/n}}\] \[\quad-\sum_{i=0}^{n(h-1)-1}\Bigl{(}nx\prod_{j=1}^{n-1}y_{j}^{1/n} \Bigr{)}^{i}\frac{1}{i!}\Biggr{)},\]
which in the case of \(n=2\), after the change of variable \(y_{1}=y^{2}\), can be written as
\[E_{1/2,h}(x)=\frac{2x^{2(1-h)}}{\sqrt{\pi}}\int_{0}^{\infty}e^{-y^{2}}\Biggl{(} e^{2xy}-\sum_{i=0}^{2h-3}\frac{(2xy)^{i}}{i!}\Biggr{)}\,\mathrm{d}y.\]
The above formulas can be derived as formula (2.9) of [1].
## 4 Application to random motions with finite velocity
Let \(\bigl{(}\Omega,\mathcal{F},\{\mathcal{F}_{t}\}_{t\geq 0},P\bigr{)}\) be a filtered probability space and \(d\in\mathbb{N}\). In the following we assume that every random object is suitably defined on the above probability space (i.e. if we introduce a stochastic process, this is adapted to the given filtration).
Let \(N\) be a homogeneous Poisson process with rate \(\lambda>0\) and consider the vectors \(v_{0},\ldots,v_{M}\in\mathbb{R}^{d}\). Let \(V\) be a stochastic process taking values in \(\{v_{0},\ldots,v_{M}\}\)\(a.s.\) and such that, for \(t\geq 0\),
\[p_{k}=P\{V(0)=v_{k}\},\ \ P\{V(t+\mathrm{d}t)=v_{k}\,|\,V(t)=v_{h},\,N(t,t+ \mathrm{d}t]=1\}=p_{hk},\ \ \ h,k=0,\ldots,M.\]
We say that \(V\) is the process describing the velocity of the associated random motion \(X\) defined as
\[X(t)=\int_{0}^{t}V(s)\,\mathrm{d}s=\sum_{i=0}^{N(t)-1}\bigl{(}T_{i+1}-T_{i} \bigr{)}V(T_{i})+\bigl{(}t-T_{N(t)}\bigr{)}V(T_{N(t)}),\ \ t\geq 0, \tag{4.1}\]
where \(T_{i}\) denotes the \(i\)-th arrival time of \(N\) and \(V(T_{i})\) denotes the random speed after the \(i\)-th event recorded by \(N\), therefore after the potential switch occurring at time \(T_{i}\). The stochastic process \(X\) describes the position of a particle moving in a \(d\)-dimensional (real) space with velocities \(v_{0},\ldots,v_{M}\) and which can change its current velocity only when the process \(N\) records a new event.
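Before turning to the analytical treatment, we note that sample paths of (4.1) are immediate to simulate. The hedged sketch below (ours, with illustrative parameters) generates one trajectory by drawing the Poisson event times and applying the transition matrix \((p_{hk})\) at each event; it is instantiated with the four orthogonal directions of Example 4.1 below.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_path(T, lam, velocities, P, p0):
    """Return event times and positions of X on [0, T]."""
    times, pos = [0.0], [np.zeros(len(velocities[0]))]
    v = rng.choice(len(velocities), p=p0)
    t = rng.exponential(1 / lam)
    while t < T:
        pos.append(pos[-1] + (t - times[-1]) * np.asarray(velocities[v]))
        times.append(t)
        v = rng.choice(len(velocities), p=P[v])       # possible switch at the Poisson event
        t += rng.exponential(1 / lam)
    pos.append(pos[-1] + (T - times[-1]) * np.asarray(velocities[v]))
    times.append(T)
    return np.array(times), np.array(pos)

vels = [(1, 0), (0, 1), (-1, 0), (0, -1)]             # orthogonal planar motion, c = 1
P = np.array([[0, .5, 0, .5], [.5, 0, .5, 0], [0, .5, 0, .5], [.5, 0, .5, 0]])
times, path = sample_path(T=10.0, lam=1.0, velocities=vels, P=P, p0=[.25] * 4)
print(path[-1])                                       # position X(10)
```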
### Initial conditions for the characteristic function
Denote with \(\gamma^{(t)}_{hk}:[0,1]\longrightarrow\mathbb{R}^{d}\) the segment between \(v_{h}t\) and \(v_{k}t\), that is \(\gamma^{(t)}_{hk}(\delta)=v_{h}t\delta+v_{k}t(1-\delta),\ \delta\in[0,1]\). Now, it is easy to see that the position \(X(t)\), conditionally on the occurrence of one Poisson event in \([0,t]\) and on the two different velocities taken at time \(0\) (say \(v_{h}\)) and at time \(t\) (say \(v_{k}\)), is uniformly distributed on the segment \(\gamma^{(t)}_{hk}\) joining the two velocities at time \(t\). In formulas, for \(h\neq k=0,\ldots,M\),
\[P\{X(t)\in\mathrm{d}x\,|\,V(0)=v_{h},V(t)=v_{k},N[0,t]=1\}=\frac{\mathrm{d}x} {||v_{h}-v_{k}||t},\ \ \text{with}\ x\in\gamma^{(t)}_{hk}. \tag{4.2}\]
Then, for \(t\geq 0\) in the neighborhood of \(0\) we observe that at most one Poisson event can occur, and therefore the motion \(X\) undergoes at most one change of velocity. Thus, we can write the Fourier transform of the distribution of \(X(t)\), for \(\alpha\in\mathbb{R}^{d}\) (with \(<\cdot,\cdot>\) denoting the dot product in \(\mathbb{R}^{d}\)),
\[\mathbb{E}e^{i<\alpha,X(t)>} =\big{(}1-\lambda t\big{)}\sum_{k=0}^{M}p_{k}\,e^{i<\alpha,v_{k} t>}+\lambda t\sum_{k=0}^{M}p_{k}\,p_{kk}e^{i<\alpha,v_{k}t>}\] \[\quad+\lambda t\sum_{\begin{subarray}{c}h,k=0\\ h\neq k\end{subarray}}^{M}p_{h}\,p_{hk}\int_{\gamma^{(t)}_{hk}}\frac{e^{i< \alpha,x>}}{||v_{h}-v_{k}||t}\,\mathrm{d}x\] \[=\big{(}1-\lambda t\big{)}\sum_{k=0}^{M}p_{k}e^{it<\alpha,v_{k}> }+\lambda t\sum_{h,k=0}^{M}p_{h}\,p_{hk}\int_{0}^{1}e^{it<\alpha,\,v_{h} \delta+v_{k}(1-\delta)>}\,\mathrm{d}\delta. \tag{4.3}\]
By means of (4.3) we easily derive the values of the derivatives of the Fourier transform of the distribution of the position \(X(t)\) in the neighborhood of \(0\), and therefore also at \(t=0\), which will be used as initial conditions for the Cauchy problem. We point out that function (4.3) is based on the first-order approximation of the probability mass of the Poisson process in the neighborhood of \(t=0\). However, this approximation is sufficient to provide the characteristic function in the neighborhood of \(0\); in fact, the probability law of random motions with finite velocities is derived by requiring only first-order knowledge, therefore we do not need a further expansion to obtain the higher-order derivatives (at \(t=0\)).
In detail we obtain, with \(n\in\mathbb{N}_{0},\ \alpha\in\mathbb{R}^{d}\), for \(t\) sufficiently close to \(0\),
\[\frac{\partial^{n}}{\partial t^{n}}\mathbb{E}e^{i<\alpha,X(t)>}\] \[=\sum_{k=0}^{M}p_{k}e^{it<\alpha,v_{k}>}\Big{[}-n\lambda+(1- \lambda t)i<\alpha,v_{k}>\Big{]}\big{(}i<\alpha,v_{k}>\big{)}^{n-1}\] \[\quad+\lambda\sum_{h,k=0}^{M}p_{h}\,p_{hk}\int_{0}^{1}e^{it< \alpha,\,v_{h}\delta+v_{k}(1-\delta)>}\Big{[}n+it<\alpha,v_{h}\delta+v_{k}(1- \delta)>\Big{]}\big{(}i<\alpha,\,v_{h}\delta+v_{k}(1-\delta)>\big{)}^{n-1}\, \mathrm{d}\delta, \tag{4.4}\]
which in \(t=0\) simplifies into
\[\frac{\partial^{n}}{\partial t^{n}}\mathbb{E}e^{i<\alpha,X(t)>}\Big{|}_{t=0}= \sum_{k=0}^{M}p_{k}\Big{[}-n\lambda+i<\alpha,v_{k}>\Big{]}\big{(}i<\alpha,v_{k}> \big{)}^{n-1}\]
\[+n\lambda\sum_{h,k=0}^{M}p_{h}\,p_{hk}\int_{0}^{1}\bigl{(}i<\alpha,\,v_{h} \delta+v_{k}(1-\delta)>\bigr{)}^{n-1}\,\mathrm{d}\delta. \tag{4.5}\]
For derivatives of order \(0,1,2\) we can write, with \(\alpha\in\mathbb{R}^{d}\),
\[\mathbb{E}e^{i\,<\alpha,X(0)>}=1, \tag{4.6}\] \[\frac{\partial}{\partial t}\mathbb{E}e^{i\,<\alpha,X(t)>}\Big{|}_ {t=0}\ =i<\alpha,\sum_{k=0}^{M}p_{k}v_{k}>,\] (4.7) \[\frac{\partial^{2}}{\partial t^{2}}\mathbb{E}e^{i\,<\alpha,X(t)> }\Big{|}_{t=0}\] \[\qquad\qquad\qquad=-2\lambda i<\alpha,\sum_{k=0}^{M}p_{k}v_{k}>- \sum_{k=0}^{M}p_{k}<\alpha,v_{k}>^{2}+\lambda i<\alpha,\sum_{h,k=0}^{M}p_{h}p_ {hk}(v_{h}+v_{k})>. \tag{4.8}\]
Formula (4.6) is due to the fact that the particle performing the random motion is always assumed to be in the origin of \(\mathbb{R}^{d}\) at time \(t=0\). It is interesting to observe that the first derivative, given in (4.7), is equal to \(0\) for all \(\alpha\in\mathbb{R}^{d}\) if and only if \(\sum_{k=0}^{M}p_{k}v_{k}=0\).
**Example 4.1** (Orthogonal planar random motion).: We consider a random motion \((X,Y)\) governed by a homogeneous Poisson process \(N\) with rate \(\lambda>0\), moving in the plane with the following orthogonal velocities,
\[v_{k}=\biggl{(}c\cos\Bigl{(}\frac{k\pi}{2}\Bigr{)},c\sin\Bigl{(}\frac{k\pi}{2 }\Bigr{)}\biggr{)},\ \ c>0\ \text{with}\ k=0,1,2,3, \tag{4.9}\]
and such that from velocity \(v_{k}\) the particle can uniformly switch either to \(v_{k-1}\) or \(v_{k+1}\), that is \(P\{V(T_{n+1})=v_{k+1}\,|\,V(T_{n})=v_{k}\}=P\{V(T_{n+1})=v_{k-1}\,|\,V(T_{n})=v_{k}\}=1/2,\ k=0,1,2,3\). Therefore, the particle whose motion is described by \((X,Y)\) lies in the square \(S_{ct}=\{(x,y)\in\mathbb{R}^{2}\,:\,|x|+|y|\leq ct\}\) at time \(t>0\) and at each Poisson event takes a direction orthogonal to the current one (see Figure 1). We refer to [4] (and references therein) for further details on planar orthogonal random motions and to [5] for its three-dimensional version.
The probability distribution \(p(x,y)\,\mathrm{d}x\,\mathrm{d}y=P\{X(t)\in\mathrm{d}x,Y(t)\in\mathrm{d}y\},\ t\geq 0,\ (x,y)\in S_{ct}\), of the position of the motion \((X,Y)\) satisfies the fourth-order differential equation
\[\Bigl{(}\frac{\partial^{2}}{\partial t^{2}}+2\lambda\frac{\partial}{\partial t }+\lambda^{2}\Bigr{)}\biggl{(}\frac{\partial^{2}}{\partial t^{2}}+2\lambda \frac{\partial}{\partial t}-c^{2}\Bigl{(}\frac{\partial^{2}}{\partial x^{2}} +\frac{\partial^{2}}{\partial y^{2}}\Bigr{)}\biggr{)}p+c^{4}\frac{\partial^{4} p}{\partial x^{2}\partial y^{2}}=0, \tag{4.10}\]
and it is known that the current position \(\bigl{(}X(t),Y(t)\bigr{)}\) can be represented as a linear combination of two independent telegraph processes. In detail, for \(t\geq 0\),
\[\begin{cases}X(t)=U(t)+V(t),\\ Y(t)=U(t)-V(t),\end{cases} \tag{4.11}\]
where \(U=\{U(t)\}_{t\geq 0}\) and \(V=\{V(t)\}_{t\geq 0}\) are independent one-dimensional telegraph processes moving with velocities \(\pm c/2\) and with rate \(\lambda/2\) (note that a similar result holds in the case of a non-homogeneous Poisson process as well, see [4]).
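The representation (4.11) also gives a convenient way to simulate the orthogonal motion. The hedged sketch below (illustrative, not from [4]) builds \((X(t),Y(t))\) from two independent telegraph samples and verifies that the particle indeed stays in the square \(S_{ct}\).

```python
import numpy as np

rng = np.random.default_rng(3)

def telegraph(T, rate, speed, n_paths):
    """Positions at time T of independent telegraph processes, initial velocity +-speed with prob. 1/2."""
    out = np.zeros(n_paths)
    for i in range(n_paths):
        t, s, x = 0.0, speed * rng.choice([-1, 1]), 0.0
        while True:
            dt = rng.exponential(1 / rate)
            if t + dt >= T:
                x += (T - t) * s
                break
            x, t, s = x + dt * s, t + dt, -s
        out[i] = x
    return out

lam, c, T, n = 2.0, 1.0, 3.0, 50_000
U = telegraph(T, lam / 2, c / 2, n)
V = telegraph(T, lam / 2, c / 2, n)
X, Y = U + V, U - V
print(np.max(np.abs(X) + np.abs(Y)) <= c * T)    # True: the motion stays in the square S_{cT}
```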
The Fourier transform of equation (4.10) has the form
\[\Big{(}\frac{\partial^{2}}{\partial t^{2}}+2\lambda\frac{\partial}{\partial t}+ \lambda^{2}\Big{)}\Big{(}\frac{\partial^{2}}{\partial t^{2}}+2\lambda\frac{ \partial}{\partial t}+c^{2}(\alpha^{2}+\beta^{2})\Big{)}F+c^{4}\alpha^{2}\beta ^{2}F=0, \tag{4.12}\]
and by means of formulas (4.6), (4.7), (4.8) and (4.5) the initial conditions are
\[F(0,\alpha,\beta)=1,\ \ F_{t}(0,\alpha,\beta)=0,\ \ F_{tt}(0,\alpha,\beta)=- \frac{c^{2}}{2}\big{(}\alpha^{2}+\beta^{2}\big{)},\ \ F_{ttt}(0,\alpha,\beta)=\frac{\lambda c^{2}}{2}\big{(} \alpha^{2}+\beta^{2}\big{)}. \tag{4.13}\]
Now, the fractional version of equation (4.12), written in the form (3.20), with \(\nu>0\), is
\[\frac{\partial^{4\nu}F}{\partial t^{4\nu}}+4\lambda\frac{\partial ^{3\nu}F}{\partial t^{3\nu}}+\Big{(}5\lambda^{2}+c^{2}(\alpha^{2}+\beta^{2}) \Big{)}\frac{\partial^{2\nu}F}{\partial t^{2\nu}} +2\lambda\Big{(}\lambda^{2}+c^{2}(\alpha^{2}+\beta^{2})\Big{)} \frac{\partial^{\nu}F}{\partial t^{\nu}}\] \[+c^{2}\Big{(}\lambda^{2}(\alpha^{2}+\beta^{2})+c^{2}\alpha^{2} \beta^{2}\Big{)}F =0. \tag{4.14}\]
Let \(A=\sqrt{\lambda^{2}-c^{2}(\alpha-\beta)^{2}}\) and \(B=\sqrt{\lambda^{2}-c^{2}(\alpha+\beta)^{2}}\). Note that \(c^{2}(\alpha^{2}+\beta^{2})=\lambda^{2}-\big{(}A^{2}+B^{2}\big{)}/2\). Then, the following equality holds
\[x^{4}+4\lambda x^{3}+\Big{(}5\lambda^{2}+c^{2}(\alpha^{2}+\beta^{2})\Big{)}x^{2}+2\lambda\Big{(}\lambda^{2}+c^{2}(\alpha^{2}+\beta^{2})\Big{)}x+c^{2}\Big{(}\lambda^{2}(\alpha^{2}+\beta^{2})+c^{2}\alpha^{2}\beta^{2}\Big{)}=\prod_{k=1}^{4}(x-\eta_{k}),\]
with
\[\eta_{1}=-\lambda-\frac{A+B}{2},\ \eta_{2}=-\lambda+\frac{A-B}{2},\ \eta_{3}=-\lambda-\frac{A-B}{2},\ \eta_{4}=-\lambda+\frac{A+B}{2}. \tag{4.15}\]
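As a quick consistency check (illustrative values, not part of the paper), one can verify numerically that the quartic above indeed has the roots (4.15).

```python
import numpy as np

lam, c, alpha, beta = 2.0, 1.0, 0.6, 0.3
A = np.sqrt(lam**2 - c**2 * (alpha - beta) ** 2 + 0j)
B = np.sqrt(lam**2 - c**2 * (alpha + beta) ** 2 + 0j)
eta = [-lam - (A + B) / 2, -lam + (A - B) / 2, -lam - (A - B) / 2, -lam + (A + B) / 2]

coeffs = [1, 4 * lam, 5 * lam**2 + c**2 * (alpha**2 + beta**2),
          2 * lam * (lam**2 + c**2 * (alpha**2 + beta**2)),
          c**2 * (lam**2 * (alpha**2 + beta**2) + c**2 * alpha**2 * beta**2)]
print(np.sort_complex(np.roots(coeffs)), np.sort_complex(np.array(eta)))  # same multisets of roots
```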
With this at hand, by means of Theorem 3.3 it is easy to calculate the solution to a fractional Cauchy problem associated with equation (4.14).
For instance, in the case of initial conditions \(F(0,\alpha,\beta)=1\) and \(\frac{\partial^{l}F}{\partial t^{l}}\big{|}_{t=0}=0\) for all \(l\geq 1\) (whose number depends on \(\nu\)), the solution reads
\[F_{\nu}(t,\alpha,\beta)\] \[\quad=\Bigg{(}\lambda^{2}-\Big{(}\frac{A-B}{2}\Big{)}^{2}\Bigg{)} \Big{(}\lambda-\frac{A+B}{2}\Big{)}E_{\nu,1}(\eta_{1}t^{\nu})+\Bigg{(}\lambda^ {2}-\Big{(}\frac{A+B}{2}\Big{)}^{2}\Bigg{)}\Big{(}\lambda+\frac{A-B}{2}\Big{)} E_{\nu,1}(\eta_{2}t^{\nu})\] \[\quad\quad+\Bigg{(}\lambda^{2}-\Big{(}\frac{A+B}{2}\Big{)}^{2} \Bigg{)}\Big{(}\lambda-\frac{A-B}{2}\Big{)}E_{\nu,1}(\eta_{3}t^{\nu})+\Bigg{(} \lambda^{2}-\Big{(}\frac{A-B}{2}\Big{)}^{2}\Bigg{)}\Big{(}\lambda+\frac{A+B}{ 2}\Big{)}E_{\nu,1}(\eta_{4}t^{\nu})\]
with \(\eta_{i}\) given in (4.15).
In the case of initial conditions given by (4.13) and \(3/4<\nu\leq 1\) (so all the conditions are required), the solution reads
\[F_{\nu}(t,\alpha,\beta) =\frac{1}{4}\Bigg{[}\Big{(}1-\frac{\lambda}{A}\Big{)}\Big{(}1- \frac{\lambda}{B}\Big{)}E_{\nu,1}(\eta_{1}t^{\nu})+\Big{(}1+\frac{\lambda}{A} \Big{)}\Big{(}1-\frac{\lambda}{B}\Big{)}E_{\nu,1}(\eta_{2}t^{\nu})\] \[\quad\quad+\Big{(}1-\frac{\lambda}{A}\Big{)}\Big{(}1+\frac{ \lambda}{B}\Big{)}E_{\nu,1}(\eta_{3}t^{\nu})+\Big{(}1+\frac{\lambda}{A}\Big{)} \Big{(}1+\frac{\lambda}{B}\Big{)}E_{\nu,1}(\eta_{4}t^{\nu})\Bigg{]}.\]
Note that for \(\nu=1\) this is the Fourier transform of the probability law of the orthogonal planar motion \((X,Y)\). This particular case can be also shown by considering the representation (4.11) in terms of independent one-dimensional telegraph processes and their well-known Fourier transform (see for instance [17] formula (2.16)).
**Example 4.2** (Planar motion with three directions).: Let us consider a planar random motion \((X,Y)\) governed by a homogeneous Poisson process with rate \(\lambda>0\) and moving with velocities
\[v_{0}=(c,0),v_{1}=(-c/2,\sqrt{3}c/2),v_{2}=(-c/2,-\sqrt{3}c/2),\quad\text{ with }c>0. \tag{4.16}\]
Let us assume that the particle starts moving with a uniformly chosen velocity among the three possible choices in (4.16) and at each Poisson event it uniformly selects the next one (including also the current one). This kind of motion is sometimes called a _complete minimal planar random motion_, see [6, 13] for further details.
The support of the position of the stochastic dynamics at time \(t\geq 0\) is the triangle \(T_{ct}=\{(x,y)\in\mathbb{R}^{2}\,:\,-ct/2\leq x\leq ct,(x-ct)/\sqrt{3}\leq y \leq(ct-x)/\sqrt{3}\}\) (see Figure 2). It is known that the probability distribution \(p(x,y)\,\mathrm{d}x\,\mathrm{d}y=P\{X(t)\in\mathrm{d}x,Y(t)\in\mathrm{d}y\}\) of the position of the motion \((X,Y)\) satisfies the third-order differential equation
\[\Big{(}\frac{\partial}{\partial t}+\frac{3\lambda}{2}\Big{)}^{3}p-\frac{27\lambda^{3}}{8}p-\frac{27\lambda^{2}}{16}\frac{\partial p}{\partial t}-\frac{3}{4}c^{2}\Delta\Big{(}\frac{\partial}{\partial t}+\frac{3\lambda}{2}\Big{)}p-\frac{3}{4}c^{3}\frac{\partial^{3}p}{\partial x\partial y^{2}}+\frac{c^{3}}{4}\frac{\partial^{3}p}{\partial x^{3}}=0, \tag{4.17}\]
where \(\Delta\) denotes the Laplacian operator. Note that equation (4.17) can be derived by considering formula (1.9) of [13] and putting \(3/2\lambda\) instead of \(\lambda\) (this is sufficient by virtue of Remark 3.4 of [4]).
The initial conditions of the Cauchy problem related to equation (4.17) follow by suitably applying formulas (4.6), (4.7) and (4.8). In particular, for the Fourier transform of \(p\), \(F(t,\alpha,\beta)=\int_{\mathbb{R}^{2}}e^{i(\alpha x+\beta y)}p(t,x,y)\, \mathrm{d}x\,\mathrm{d}y\), we derive
\[F(0,\alpha,\beta)=1,\ \ F_{t}(0,\alpha,\beta)=0,\ \ F_{tt}(0,\alpha,\beta)=-\frac{c^{2}}{2} \big{(}\alpha^{2}+\beta^{2}\big{)}, \tag{4.18}\]
obviously the first two conditions imply that \(p(0,x,y)=\delta(x,y)\), with \(\delta\) denoting the Dirac delta function centered in the origin, and \(p_{t}(0,x,y)=0\ \forall\ x,y\). We refer to [13] for further details about this motion, such as the explicit form of \(p\) (see also [5, 7]).
Now, thanks to Theorem 3.2 we can easily give a probabilistic interpretation of the time-fractional version of order \(\nu=1/n,\ n\in\mathbb{N}\), of equation (4.17), subject to the same initial conditions. Note that for \(0<\nu\leq 1/3\) only the first condition is needed, for \(1/3<\nu\leq 2/3\) the first two conditions are required, and for \(2/3<\nu\leq 1\) all three conditions are necessary. In detail, the fractional Cauchy problem after the Fourier transformation is given by
\[\frac{\partial^{3}F_{\nu}}{\partial t^{3}}+\frac{9\lambda}{2}\frac{\partial^{ 2}F_{\nu}}{\partial t^{2}}+\Bigg{(}\Big{(}\frac{3}{2}\Big{)}^{4}\lambda^{2}+ \frac{3c^{2}(\alpha^{2}+\beta^{2})}{4}\Bigg{)}\frac{\partial F_{\nu}}{ \partial t}+\Big{(}\frac{9\lambda c^{2}(\alpha^{2}+\beta^{2})}{8}+\frac{3ic^ {3}\alpha\beta^{2}}{4}-\frac{ic^{3}\alpha^{3}}{4}\Big{)}F_{\nu}=0 \tag{4.19}\]
subject to the initial conditions in (4.18). By means of Theorem 3.2 we have that the Fourier transform \(F_{1/n}\) satisfying (4.19) can be expressed as \(F_{1/n}(t,x)=\mathbb{E}\,F\Bigg{(}\prod_{j=1}^{n-1}G_{j}^{(n)}(t),\,x\Bigg{)}\), where \(F\) denotes the solution to the Fourier-transformed problem of equation (4.17). Thus, the time-fractional version of (4.17), with \(\nu=1/n\) for natural \(n\), describes the probability law of a stochastic process \((X_{\nu},Y_{\nu})\) that is a time-changed planar motion with velocities (4.16),
\[\big{(}X_{\nu}(t),Y_{\nu}(t)\big{)}\stackrel{{ d}}{{=}}\Bigg{(}X \Big{(}\prod_{j=1}^{n-1}G_{j}^{(n)}(t)\Big{)},Y\Big{(}\prod_{j=1}^{n-1}G_{j}^ {(n)}(t)\Big{)}\Bigg{)},\quad t\geq 0.\]
**Declarations**
**Ethical Approval.** This declaration is not applicable.
**Competing interests.** The authors have no competing interests to declare.
**Authors' contributions.** Both authors equally contributed in the preparation and the writing of the paper.
**Funding.** The authors received no funding.
**Availability of data and materials.** This declaration is not applicable.
| この論文では、$\nu k$の次数を持つ微分方程式に関するDzherbashyan-Caputo分数微分方程式の明示的な解を与えます。$k$は非負整数で$\nu>0$とする。解を得るためには、微分方程式を特性多項式の根と接続し、Mittag-Leffler型関数で表現する。一部の厳格な仮定の下では、この解はMittag-Leffler関数の線形結合で表現できる。$\nu/m$の次数を持つ微分方程式の解と$\nu$の解との確率的関係を構築する。最後に、この方法を用いて、確率論的な乱動運動の確率分布における非線形微分方程式を解く。 |
2310.00455 | Music- and Lyrics-driven Dance Synthesis | Lyrics often convey information about the songs that are beyond the auditory
dimension, enriching the semantic meaning of movements and musical themes. Such
insights are important in the dance choreography domain. However, most existing
dance synthesis methods mainly focus on music-to-dance generation, without
considering the semantic information. To complement it, we introduce JustLMD, a
new multimodal dataset of 3D dance motion with music and lyrics. To the best of
our knowledge, this is the first dataset with triplet information including
dance motion, music, and lyrics. Additionally, we showcase a cross-modal
diffusion-based network designed to generate 3D dance motion conditioned on
music and lyrics. The proposed JustLMD dataset encompasses 4.6 hours of 3D
dance motion in 1867 sequences, accompanied by musical tracks and their
corresponding English lyrics. | Wenjie Yin, Qingyuan Yao, Yi Yu, Hang Yin, Danica Kragic, Mårten Björkman | 2023-09-30T18:27:14 | http://arxiv.org/abs/2310.00455v1 | # Music- and Lyrics-driven Dance Synthesis
###### Abstract
Lyrics often convey information about the songs that are beyond the auditory dimension, enriching the semantic meaning of movements and musical themes. Such insights are important in the dance choreography domain. However, most existing dance synthesis methods mainly focus on music-to-dance generation, without considering the semantic information. To complement it, we introduce JustLMD, a new multimodal dataset of 3D dance motion with music and lyrics. To the best of our knowledge, this is the first dataset with triplet information including dance motion, music, and lyrics. Additionally, we showcase a cross-modal diffusion-based network designed to generate 3D dance motion conditioned on music and lyrics. The proposed JustLMD dataset encompasses 4.6 hours of 3D dance motion in 1867 sequences, accompanied by musical tracks and their corresponding English lyrics.
## 1 Introduction
Recent breakthroughs in generative models, notably in normalizing flows [1] and diffusion models [2], have significantly advanced applications like music-conditioned dance generation. Such advancements not only enrich the artistic dimension of choreography but also provide valuable insights for dance research [3; 4; 5; 6]. Given the rising popularity of dance content on digital platforms such as YouTube and TikTok, these pioneering technologies hold vast potential to be integrated into creative processes within the dance domain.
However, many existing technologies primarily focus on the relationship between music and dance, without considering the integral role of lyrics in dance choreography. While music-conditioned models can already produce realistic rhythmically-aligned dance movements, lyrics offer additional information that can enrich and enhance the semantic meaning of the dance. For instance, there exists a strong linkage between dance motion and song lyrics in modern dance [7]. Further work is needed to explore the integration of both lyrics and music in dance synthesis. To fill this research void, we introduce a multimodal dataset with synchronized dance motion, music, and lyrics. Moreover, we present a cross-modal diffusion model to facilitate dance motion synthesis based on both lyrics and music.
## 2 Methods
In this section, we introduce the preparation pipeline of our dataset and the proposed baseline model for dance synthesis.
### Data Preparation Pipeline
Due to the lack of datasets with dance motion, music, and lyrics information, we created the proposed JustLMD dataset from existing _Just Dance_ videos. Ubisoft's _Just Dance_ is a motion-based rhythm dancing game with annual releases that has become a classic among video games. _Just Dance_ engages players by having them mimic the moves of an on-screen dancer. To prepare our multimodal dataset, we adopted the following pipeline.
* **Video Conversion**: We converted YouTube videos of _Just Dance_ into .mp4 format.
* **Video Preprocessing**: Utilizing _EasyMocap_[8], we achieved high-fidelity body estimation from these videos at a rate of 60 fps.
* **Music Extraction**: The music from the videos was saved in .wav format.
* **Lyrics Synchronization**: We manually sourced the lyrics corresponding to each music song and aligned them with the musical timeline.
* **Feature Preparation**:
* **Pose Representation**: We represent dance as sequences of poses using the 24-joint SMPL format [9], using a 6-DOF rotation representation and a 4-dimensional binary foot contact label, resulting in a 151-dimensional feature.
* **Music Feature Extraction**: We employed _librosa_[10] or _Jukebox_[11] to extract music features, yielding a 35- or 4800-dimensional feature.
* **Lyrics Feature Embedding**: Lyrics were then processed and embedded into a pre-trained CLIP latent [12] or BERT embedding [13], resulting in a 512- or 768-dimensional feature.
Our data collection and feature preparation code can be accessed here1. Validity of the task and examples of the correlation between motion and lyrics are illustrated in Appendix A.
Footnote 1: [https://github.com/yyllab/LMD](https://github.com/yyllab/LMD)
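A hedged sketch of the kind of feature extraction described above is given below; it is not the authors' released code (see the repository linked in Footnote 1 for that), and the file name, feature choices and model checkpoint are illustrative assumptions.

```python
import librosa
import torch
from transformers import CLIPTokenizer, CLIPTextModel

audio, sr = librosa.load("song.wav", sr=None)                 # hypothetical file
mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)        # part of a frame-level music feature
onset = librosa.onset.onset_strength(y=audio, sr=sr)          # rhythm-related feature

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
text_model = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")
tokens = tokenizer(["dance to the beat all night"], padding=True, return_tensors="pt")
with torch.no_grad():
    lyric_emb = text_model(**tokens).pooler_output            # (1, 512) lyric embedding
print(mfcc.shape, onset.shape, lyric_emb.shape)
```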
### Multimodal Diffusion Models
Our dance motion synthesis framework is built on cross-modal diffusion models. This architecture utilizes a transformer-based diffusion model that accepts conditional feature vectors as input. In this setting, the conditional feature vectors include the music feature and the lyrics feature. It then generates corresponding motion sequences, without autoregression or recurrent connections, as depicted in Figure 1. The model incorporates a cross-attention mechanism, following [14]. We optimize the \(\theta\)-parameterized score network \(\mathbf{s}_{\theta}\) with dance motion \(\mathbf{x}\) paired with the conditional music feature \(\mathbf{m}\) and lyrics feature \(\mathbf{l}\), where \(t\) is the time embedding. The objective function is simplified as:
\[\mathbb{E}_{\mathbf{x},t}\left\|\mathbf{x}-\mathbf{s}_{\theta}(\mathbf{x}_{t},t,\mathbf{m},\mathbf{l })\right\|_{2}^{2}. \tag{1}\]
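A minimal PyTorch-style sketch of this objective is given below. It is not the paper's implementation; the denoiser interface, tensor shapes and the DDPM-style forward noising are illustrative assumptions consistent with Eq. (1).

```python
import torch

def diffusion_loss(score_net, x, music, lyrics, alphas_cumprod):
    """x: (B, T, 151) motion; music: (B, T, 35); lyrics: (B, 512); alphas_cumprod: (S,)."""
    b = x.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=x.device)      # diffusion step
    a_bar = alphas_cumprod[t].view(b, 1, 1)
    noise = torch.randn_like(x)
    x_t = a_bar.sqrt() * x + (1 - a_bar).sqrt() * noise                   # forward noising
    x_hat = score_net(x_t, t, music, lyrics)                              # x-prediction with cross-attention context
    return ((x - x_hat) ** 2).mean()
```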
Figure 1: Architecture overview of the cross-modal transformer-based diffusion models, the music features and text features are conditional information that act as cross-attention context. The diffusion model takes noisy sequences and produces the estimated motion sequences.
Discussion
In this paper, we introduce a framework for dance motion synthesis driven by both music and lyrics, accompanied by a novel dataset collected to bridge the existing research gap. Moving forward, we aim to investigate more deeply the influence of the lyrics modality. Building on our present efforts, we anticipate enlarging the dataset for training generative models.
## 4 Ethical Implications
We present a multimodal dataset and dance synthesis method. A primary concern comes from the data source. Our model relies on dance motion extracted from public game videos. Although these movements are publicly available, the original creators of the dance motion may still claim copyright over them. Furthermore, automating the choreographic process based on existing dances raises questions about originality and creativity in art.
| 歌詞は、聴覚を超えた情報を含んでおり、動きや楽曲の持つ意味を豊かにする。このような洞察は、ダンスchoreographyの分野において重要である。しかし、既存のダンス合成手法は、主に音楽からダンスへの生成に焦点を当て、semantical情報に配慮していない。それを補完するため、私たちはJustLMDという3Dダンスモーション、音楽、歌詞のマルチモーダルデータセットを導入した。本データセットの持つトリplet情報には、ダンスモーション、音楽、歌詞が含まれており、私たちが知る限り、これは初めてである。さらに、音楽と歌詞に基づいて3Dダンスモーションを生成するために設計されたクロスモダルトリフレーションネットワークを披露した。提案されたJustLMDデータセットには、4.6時間の3Dダンスモーションが含まれており、1867個のシーケンスが伴う。このデータセットは、音楽トラックとその対応する英語歌詞と、 |
2309.12132 | A knowledge representation approach for construction contract knowledge
modeling | The emergence of large language models (LLMs) presents an unprecedented
opportunity to automate construction contract management, reducing human errors
and saving significant time and costs. However, LLMs may produce convincing yet
inaccurate and misleading content due to a lack of domain expertise. To address
this issue, expert-driven contract knowledge can be represented in a structured
manner to constrain the automatic contract management process. This paper
introduces the Nested Contract Knowledge Graph (NCKG), a knowledge
representation approach that captures the complexity of contract knowledge
using a nested structure. It includes a nested knowledge representation
framework, a NCKG ontology built on the framework, and an implementation
method. Furthermore, we present the LLM-assisted contract review pipeline
enhanced with external knowledge in NCKG. Our pipeline achieves a promising
performance in contract risk reviewing, shedding light on the combination of
LLM and KG towards more reliable and interpretable contract management. | Chunmo Zheng, Saika Wong, Xing Su, Yinqiu Tang | 2023-09-21T14:53:36 | http://arxiv.org/abs/2309.12132v1 | # A knowledge representation approach for construction contract knowledge modeling
###### Abstract
The emergence of large language models (LLMs) presents an unprecedented opportunity to automate construction contract management, reducing human errors and saving significant time and costs. However, LLMs may produce convincing yet inaccurate and misleading content due to a lack of domain expertise. To address this issue, expert-driven contract knowledge can be represented in a structured manner to constrain the automatic contract management process. This paper introduces the Nested Contract Knowledge Graph (NCKG), a knowledge representation approach that captures the complexity of contract knowledge using a nested structure. It includes a nested knowledge representation framework, a NCKG ontology built on the framework, and an implementation method. Furthermore, we present the LLM-assisted contract review pipeline enhanced with external knowledge in NCKG. Our pipeline achieves a promising performance in contract risk reviewing, shedding light on the combination of LLM and KG towards more reliable and interpretable contract management.
Contract management, Complex knowledge representation, Knowledge graph, Language model
## 1 Introduction
Legal issues, such as disputes, claims, and litigations, frequently occur in construction projects, which lead to cost overruns, schedule delays, and negative impact on the parties' communication and collaboration [1]. The use of natural language in contracts has been identified as the leading cause of such issues [2, 3]. Specifically, the semantic vagueness and ambiguities in contracts may cause disagreements and misunderstandings between parties, leading to conflicts or disputes [4, 5]. Contractors are sometimes not capable of estimating the risks inherent in contract clauses, which
may cause potential issues [6, 7, 8, 9]. Contract management must thoroughly review these issues during the bidding and contracting stage. However, current approaches still rely heavily on manual processes and human expertise, which are time-consuming and error-prone. This demands automated analysis methods to assist the contract management process [4, 10, 11, 12, 13].
Natural Language Processing (NLP) techniques are a promising tool for automating contract text processing. They have demonstrated their potential in many tasks, including contract reviewing, automatic compliance checking and similar case retrieval [14, 15, 16, 17]. In recent years, LLMs have been shown to possess strong reasoning capability, making them superior to many domain-trained models. They are trained on massive text datasets, which allows them to better understand the nuances of language usage and to answer a broad range of human queries. While they can perform well on general language tasks, they may struggle with domain-specific language, may not be able to provide explanations for their decisions, and may produce factual errors [18]. This is a challenge that researchers are actively working to address. It is important for language models not only to perform well, but also to be transparent and understandable during decision making.
To address these concerns, it is imperative to incorporate domain-specific knowledge into LLMs [19, 20, 21]. Potential solutions include injecting domain knowledge into the language model input, or performing expert-driven fact-checking and verification [22], [23]. Both approaches require domain knowledge to be represented in a precise and interpretable form. Researchers have proposed various knowledge representation approaches, including logic rules, algebraic equations and knowledge graphs (KGs) [24, 25, 26]. KGs, in particular, have shown their strength in modeling knowledge in many domains. A KG is a knowledge representation (KR) method that organizes information in the form of nodes and edges, where nodes represent entities or concepts and edges represent relations between them. The basic unit of a KG is a triple \(<\)head-entity, relation, tail-entity\(>\), which can capture and model complex, interlinked information in a way that is both human- and machine-readable. KGs have been successfully used to represent domain knowledge for enhancing machine learning models [27, 28]. They have also shown potential to provide domain-specific constraints for large-scale pre-trained language models such as BERT [29] and GPT [30]. However, to the best of our knowledge, contract knowledge is typically represented by formal logic or OWL ontologies. These representations lack scalability, which makes them difficult to integrate with language models. To fully unleash the power of large language models, it is important to construct a contract knowledge graph for effective incorporation.
The knowledge representation method in a KG can vary with the type of knowledge, which we classify as factoid knowledge and non-factoid knowledge, as shown in Fig. 1. Taxonomic and commonsense knowledge are factoid, such as "the Eiffel Tower is located in Paris" or "Paris is the capital of France". They can be intuitively extracted from text in the form of separate triples with complete semantics. The extracted triples can be represented as \(<\)EiffelTower, isLocatedIn, Paris\(>\) and \(<\)Paris, isCapitalOf, France\(>\). Contract knowledge, on the other hand, is non-factoid knowledge that contains various complex relationships, including conditional, temporal and causal relations. For instance, in the New Engineering Contract (NEC) clause shown in the yellow box, there is an "if-then" relationship connecting the two triples <AnyWork, hasProperty, hasDefect> and <Contractor, correct, Defect>. Without modeling the "if-then" relationship, the rule is only conveyed by scattered triples with broken logic. The current representation method is limited to modeling relations within triples, which hinders the incorporation of relation-rich knowledge. While recent research on complex knowledge modeling, such as the Event Knowledge Graph (EKG) and the concept of "triple-as-node," has emerged [31, 32], there is still no formal KR method for such knowledge in the KG context, nor a specific KR system for contract knowledge.
This paper presents the Nested Contract Knowledge Graph (NCKG) as a comprehensive solution to the aforementioned problems. In Section 3, we introduce a nested framework with an ontological layer, designed to model intricate, multi-layer relationships. Detailed explanations and illustrative examples for each element within the ontology are provided. Furthermore, we present an innovative method for generating the instance layer of the NCKG from unstructured contract clauses. This approach incorporates an iterative workflow and a set of corresponding rules to create a unified standard applicable to various contract tasks. Section 4 presents a case implementation using NEC clauses, elucidating the process of constructing, storing and querying the NCKG. We also present a pipeline that augments the contract reviewing process by enhancing prompts with expert-driven contract knowledge; an example shows improved accuracy of the generated content. We then discuss the combination of KG and LLM from different perspectives in Section 5 and present a conclusion in Section 6.
## 2 Related works
### 2.1 NLP-assisted contract management and the role of prior knowledge
Figure 1: Difference between factoid and non-factoid knowledge modeling
NLP techniques have been widely used in addressing legal issues in construction projects. The NLP-assisted tasks can be classified into contract review, automated compliance checking (ACC) and similar case retrieval [2].
In contract review, many researchers detect linguistic ambiguities, requirements or poisonous clauses. Lee et al. use NLP and semantic-syntactic rules for both risk-prone clause extraction and contractor-friendly clause detection, achieving F-scores of 81.8% and 80% [15, 33]. Chakrabarti et al. [34] use machine-learning-based NLP to detect risk-prone paragraphs in contracts, with an accuracy of 94%. To assist the contract review task, many studies also classify clauses into various categories using machine learning algorithms. The categories include requirement or non-requirement clauses, different topics of construction projects, or different categories of risks [10, 35, 36, 37, 38].
On the other hand, ACC aims at detecting violations of construction laws or regulations. Semantic text classification of unstructured provisions is the first step in performing the ACC task [39]. Salama et al. [13] utilize a hybrid semantic and machine learning approach to classify clauses and subclauses of general conditions, reaching a recall of 100%. Zhou and El-Gohary [16, 40] present rule-based NLP for the classification of environmental requirements, along with ML-based classification of regulatory codes. Both studies achieve more than 97% recall. Deep learning methods have also been introduced into the semantic analysis process and used in regulatory ACC [41].
It can be seen that NLP techniques have brought significant improvements to contract management. However, due to the unique terminology and complex syntactic patterns in contracts, it is difficult for generic NLP models to produce equally reliable performance in the construction contract domain without proper adaptation [2]. A few recent studies have attempted to address these issues by integrating domain knowledge into NLP models with the help of semantic representations such as taxonomies, ontologies and KGs [15, 42, 43, 44]. These studies have shown impressive performance, shedding light on the importance of integrating domain knowledge into future NLP-assisted contract analysis. Dash et al. have also shown that the predictive performance of neural networks can increase significantly even with a simplified encoding of domain knowledge [20].
### 2.2 KR of domain knowledge and the issue of complex knowledge modeling
Semantic web technologies enable the representation of machine-readable data on the web [45]. They are widely adopted for the KR of domain knowledge by defining domain concepts, enhancing domain information integration and performing logical reasoning [46]. Ontology is one of the essential components of semantic web technology. It can provide a conceptual model of domain knowledge [47]. Many researchers have developed construction domain ontologies for efficient knowledge management, compliance checking, and risk or conflict detection [48, 49, 50, 51].
The knowledge inherent in regulations, safety rules and design information is often represented in RDF. The RDF language defines concepts and relations in a machine-interpretable and explicit format. RDF models the semantic relation between concepts as the triple structure \(<\)subject, predicate, object\(>\). Along with the domain knowledge written in RDF, the RDF query language SPARQL is adopted for knowledge searching and constraint checking [52, 53]. Based on a domain ontology, axioms and SWRL rules can also be established to support knowledge reasoning [54, 55, 56].
In recent years, knowledge graphs (KGs) have emerged as a major trend in KR techniques serving many industrial applications [57]. A KG presents knowledge in the form of a labeled directed graph. Each entity is considered a node, and nodes are linked via edges which represent relations between entities. The expression mechanism of a KG can model the abundant semantics inherent in natural language, but its capability is still limited by the form of the RDF triple [58].
In the real-world knowledge of many domains, there is often additional information conveying conditional, temporal or provenance semantics that is beyond the expressivity of a single triple. Modeling and extracting such metadata is beneficial for efficient domain knowledge management. Semantic legal metadata extraction is also crucial for interpreting legal provisions [59, 60].
Data modeling solutions for RDF-based metadata include standard reification, the singleton property and RDF-star. They are able to represent additional contextual information attached to individual triples [61, 62, 63]. Metadata representation within the graph structure has also been explored. Temporal KGs and event KGs associate triples with time or site hyperedges, where an event triple is considered a semantic unit [32, 64, 65, 66]. Recently, the focus of KR has shifted to knowledge graph embedding (KGE), which embeds entities and relations into continuous vector spaces. In this regard, there are also studies focusing on meta-knowledge representation in the vector space [67].
## 3 NCKG methodology
In this section we introduce the NCKG methodology, which consists of the following three parts. First, we introduce a nested framework with the definition of triples, facts and the overall composition rules. Based on the framework, we explain the ontological layer of NCKG, which serves as a schema for filling in the contract concepts and constructing the knowledge graph. Finally, the implementation of the NCKG methodology from contract text is presented in the third part.
### 3.1 Nested framework
The nested framework consists of triples, facts, and their interlinked and nested relationships. It serves to organize and structure data in a nested manner, allowing for easier analysis and understanding of complex relationships in different contexts. In this framework, we extend the definition of triples and facts. A triple here retains the structure of a head node and a tail node joined by a relationship edge, but a node can appear either as a single entity containing one term or as a fact entity containing a triple or several concatenated entities inside.
**Triple.** A triple is denoted as (h, r, t), where h represents the head node, t represents the tail node, and r denotes the relation, which is shown as a directed edge from the head node to the tail node. There are two types of node in a triple, known as an entity (node) or a fact (node). An entity refers to an individual node for a single concept unit. We also denote T as a set of triples, E as a set of entities, R as a set of relations, and F as a set of facts. Based on these definitions, we present three types of triples, as shown in Table 1. \(T_{E2E}\), \(T_{E2F}\) and \(T_{F2F}\) represent the entity2entity, entity2fact and fact2fact triples, respectively (refer to Table 1). Notably, \(T_{F2E}\) equals \(T_{E2F}\), because the head node and tail node are at the same level in a triple; their positions can be switched through the transition between a sentence's active and passive forms.
**Fact.** A fact is a nested node that can contain more than one entity inside to represent part of semantics of a sentence. We present two common nested cases: tripleFact and concatFact (refer to Table 2). TripleFact refers to a fact that contains a triple within it. By referring to the tripleFact, we can further make statements about the triples. For example, we can represent the sentence "Alice is a girl" as triple \(<\)Alice, isA, girl\(>\). When we need to refer to this fact, such as in the sentence of "Bob stated that Alice is a girl", we consider this triple as a tripleFact. We can represent the new sentence as \(<\)Bob, state, \(<\)Alice, isA, girl\(>>\), where the tripleFact is represented as a tail entity in the new triple. ConcatFact refers to a fact node that comprises multiple entities or triples combined in the form of concatenation through conjunctions to express "and", "or" or other concatenation logics. The elements inside a concatFact can be entities or triples, while the total number should be more than one. For example, "Bob and John stated that Alice is a girl". "Bob and John" acts as the subject, and it is the concatenation of two entities \(<\)"Bob", and, "John"\(>\), which makes it a concatFact. Each fact can connect to other entities or facts over the triple schema.
Finally, we introduce the overall architecture of the Nested framework in both a symbolic definition and a visualization of the knowledge representation. Symbolically, we denote the Nested framework N as \(N=\{T_{E2E},T_{E2F},T_{F2F}\}\), following the given definition of the different forms of triples. In principle, triples and facts can be infinitely nested in our framework, and the number of nested layers depends on the structure of the sentence and the division of ontological concepts, for which a universal standard does not exist. Fig. 2 presents the visual diagrams of the Nested framework. The circle represents an entity, the isometric square represents a triple, and the double rectangle denotes a fact. An isometric square inside a double rectangle means a tripleFact, and a concatFact is represented by circles concatenated inside a double rectangle. "Conj." here can be replaced by "AND", "OR", or other concatenation conjunctions.
| Type of triple | Definition |
| --- | --- |
| \(T_{E2E}\) | \(T_{E2E}=\{(h,r,t)\mid h,t\in E,\ r\in R\}\) |
| \(T_{E2F}/T_{F2E}\) | \(T_{E2F}=\{(h,r,t)\mid h\oplus t\in\{E,F\},\ r\in R\}\) |
| \(T_{F2F}\) | \(T_{F2F}=\{(h,r,t)\mid h,t\in F,\ r\in R\}\) |

Table 1: Definition of triple
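To make the nested framework concrete, the following is a minimal sketch of how entities, triples and the two fact types could be modeled as data structures. The class names mirror the definitions above; the field names and the toy instances are illustrative assumptions, not part of the paper.

```python
# Minimal data-structure sketch of the nested framework (Section 3.1).
# Entities, triples, tripleFacts and concatFacts mirror the definitions above;
# the field names and the toy instances are illustrative assumptions.
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Entity:
    name: str

@dataclass(frozen=True)
class Triple:
    head: "Node"
    relation: str
    tail: "Node"

@dataclass(frozen=True)
class ConcatFact:
    members: tuple            # several entities/triples joined by a conjunction
    conjunction: str = "and"

# A fact node is either a triple used as a node (tripleFact) or a concatFact;
# a node inside a triple is an entity or a fact.
Fact = Union[Triple, ConcatFact]
Node = Union[Entity, Fact]

# "Bob and John stated that Alice is a girl."
alice_is_a_girl = Triple(Entity("Alice"), "isA", Entity("girl"))    # tripleFact
bob_and_john = ConcatFact((Entity("Bob"), Entity("John")), "and")   # concatFact
statement = Triple(bob_and_john, "state", alice_is_a_girl)          # a T_F2F triple

print(statement)
```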
### 3.2 Ontological layer of NCKG
The ontology is built on the nested framework and integrates the contract knowledge pattern. It aims at providing the conceptual schema for representing clear and complete semantics in contract knowledge. As shown in Figure 3, the schematic diagram presents the contractual entities, facts and triples in a hierarchical manner. The basic entities in the ontological layer of NCKG are "Contract_actor", "Contract_object", "Contract_property" and "Constraint". The "Contract_actor" and "Contract_object" classes can be joined by the "hasActionTo" relation to form the triple \(<\)Contract_actor, hasActionTo, Contract_object\(>\). We name this triple a "Behavior" class since it denotes a certain behavior of an actor. For example, \(<\)Contractor, submit, Programme\(>\) is an instance of the "Behavior" class, and more specifically, it is under the "Submission" subclass. Similarly, the description of the property of a certain object is called a "Statement". The fact \(<\)Programme, hasProperty, isNotIdentified\(>\) is an instance of the "Statement" class.
In contract clauses, a "Behavior" often has many constraints. Through the "out-of-triple relation", the "Constraint" class can be attached to the "Behavior" class as \(<\)Behavior, hasConstraint, Constraint\(>\). In real instances, the relation and tail entity should be replaced by specific constraint types such as \(<\)Behavior, hasTimeConstraint, beforeCompletionDate\(>\). Finally, we have the "hasContractualRelation" edge to connect a Behavior and a Statement, or to connect two Behaviors. The contractual relations can be either conditional relations or temporal relations in the contract. Based on this, we can extract contract rules such as \(<\)Statement, if-then, Behavior\(>\) or \(<\)Behavior\({}_{1}\), within2weeksOf, Behavior\({}_{2}\)\(>\). "If-then" and "within2weeksOf" are instances of the "hasContractualRelation" class. The first triple implies that "If the Statement is true, then the Behavior should be conducted". The second one conveys "If Behavior\({}_{1}\) happens, then Behavior\({}_{2}\) should be conducted".
Figure 2: Nested framework
The detailed explanation and illustration of the ontological layer is given below in Table 3, where we present the explanation and instance illustration of each class type. The types are classified according to the elements of the nested framework, in terms of entities, facts, and the relations connecting them.
Figure 3: Ontological layer of NCKG
| Type | Class name | Explanation | Instance illustration |
| --- | --- | --- | --- |
| entity | Contract_actor | The parties in a contract. | "Client"; "Project Manager (PM)"; "Contractor"; "Supervisor"; "Supplier"; "Others". |
| entity | Contract_object | The object that a Contract_actor acts upon. | "Programme"; "site information"; "early warning meeting"; "Contractor's design"; "defect"; "claim". |
| entity | Contract_property | The description of the current definition, status, or included content of a Contract_object. | "submitted"; "isNotIdentified"; "isChanged"; "isNotCompensationEvent". |
| entity (Constraint) | Content_constraint | The content of what a Contract_actor says or does. | PM notifies Contractor of something, or PM instructs Contractor to do something, where "something" / "do something" is the Content_constraint. |
| entity (Constraint) | Purpose_constraint | The purpose of a certain Contract_actor conducting a certain Behavior. | Contractor submits the programme for doing something, where "something" is the Purpose_constraint. |
| entity (Constraint) | Time_constraint | The time or time period during which a certain Behavior happens. | "Key Date" (time); "period for reply", "interval" (period). |
| entity (Constraint) | Else_constraint | Other restrictions on a certain Behavior or Statement. | "in accordance with the Scope"; "within site"; "without instruction"; "except stated in conditions of contract". |
| entity2entity relation | hasActionTo | The relation connecting a Contract_actor and a Contract_object. | "submit"; "issue"; "revise"; "extend"; "design"; "pay"; "propose to"; "agree to"; "obtain from"; "not allow"; "do not start"; "intend to transfer"; "is responsible for". |
| entity2entity relation | hasProperty | The relation connecting a Contract_object and a Property entity. | "hasProperty" is used directly to connect an object and a property; it has no further instances. |
| fact | Behavior | The fact of a Contract_actor performing an action; a triple of the form <Contract_actor, hasActionTo, Contract_object>. | <PM, issue, certificate>; <Client, notify, Contractor>; <PMandContractor, agreeTo, extension>. |
| fact | Statement | The fact of the description of a Contract_object; a triple of the form <Contract_object, hasProperty, Property>. | <Programme, hasStatus, isSubmitted>; <communication, hasStatus, hasEffect>. |
| entity2fact relation | hasConstraint | The relation connecting a Behavior and a Constraint. | "hasContentConstraint"; "hasPurposeConstraint"; "hasTimeConstraint"; "hasElseConstraint". One example triple is <Behavior, hasTimeConstraint, beforeReplyDueDate>, where the Behavior could be <PM, agreeTo, extension>. |
| fact2fact relation ("hasContractualRelation") | Conditional relation | The conditional relation between two facts. | "if-then"; "ifNot-then", "otherwise" (<A, ifNot-then, B> or <A, otherwise, B> denotes "if not A, then B"); "unless" (<A, unless, B> denotes "if B, then not A"). |
| fact2fact relation ("hasContractualRelation") | Temporal relation | The temporal relation between two Behaviors, forming a triple of the form <Behavior, temporal relation, Behavior>. | "before", "after", "as soon as", etc. Note that <A, before, B> denotes that A happens before B. |

Table 3: Explanation and illustration of the ontological layer of NCKG
### 3.3 NCKG implementation method
#### 3.3.1 Workflow
This section presents a 5-step workflow for generating the NCKG from plain text. It performs entity and fact identification and relation linking iteratively, and it is also applicable to larger contract corpora; a minimal code sketch of the workflow is given after Step 5.
**Step 1. Behavior identification**
The goal of this step is to extract the Behavior class instances by recognizing the key components of \(<\)Contract_actor, hasActionTo, Contract_object\(>\) triples. First, search every subject in all clause sentences for a Contract_actor class entity. Second, extract the verb and object in the same sentence as instances of "hasActionTo" and "Contract_object", respectively. Then perform the above operations iteratively until there are no Contract_actor instances left in the subject position, at which point the Behavior extraction step is considered done.
**Step 2. Statement identification**
After the extraction of all Behavior instances, search the remaining subjects in the sentences for Contract_object class entities. Next, extract the predicate and object in the same sentence as a whole "Property" entity. The Statement triple is extracted accordingly as \(<\)Contract_object, hasProperty, Property\(>\). Then perform the same operations until there is no Contract_object left in the subject position.
**Step 3. Constraint identification**
The Constraint class entities are recognized mainly based on an understanding of the contract semantics, since their instances are irregular and varied. The explanation and illustration of each constraint type is given in Table 3. Accordingly, we recognize the Content, Purpose, Time and other constraints, until there is no remaining component in a single clause.
**Step 4. Behavior& Constraint relation linking**
Since the Behavior and Statement facts extracted in Steps 1 and 2 are related to the constraints identified in Step 3, the linking should be performed. The constraints are connected to the facts that share the same sentence, and the relation is determined by the class of the specific constraint. For example, use the relation "hasPurposeConstraint" to join the head entity "Behavior" and the tail entity "Purpose". In this step, link all constraints to the facts until no unattached constraints remain; the constraints are thereby merged into the previously extracted facts.
**Step 5. Fact& Fact relation linking**
After extracting all the facts including their constraints, perform fact-to-fact relation linking with the "hasContractualRelation" relation. The contractual relations can be either conditional or temporal relations connecting two facts, and they can be recognized according to the illustration in Table 3. The relation linking procedure continues until no more such relations remain.
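The sketch below is a minimal, self-contained rendering of the five steps as Python functions. The actor/object vocabularies, the toy pre-parsed clause and the naive phrase matching are illustrative assumptions only; the paper specifies the workflow as a set of manual or LLM-assisted rules, not this concrete implementation.

```python
# Minimal sketch of the 5-step NCKG extraction workflow (Section 3.3.1).
# Vocabularies, the toy clause and the phrase matching are assumptions.
import re

ACTORS = {"Contractor", "Project Manager", "Client", "Supervisor", "Supplier"}
OBJECTS = {"programme", "certificate", "defect"}
TEMPORAL = ["within two weeks of", "before", "after"]
CONDITIONAL = ["if", "unless", "otherwise"]


def has_phrase(text: str, phrase: str) -> bool:
    return re.search(r"\b" + re.escape(phrase) + r"\b", text, re.IGNORECASE) is not None


def step1_behaviors(parsed):
    """Step 1: <Contract_actor, hasActionTo, Contract_object> triples."""
    return [(s, v, o) for s, v, o in parsed if s in ACTORS]


def step2_statements(parsed):
    """Step 2: <Contract_object, hasProperty, Property> triples."""
    return [(s, "hasProperty", f"{v} {o}".strip())
            for s, v, o in parsed if s not in ACTORS and s.lower() in OBJECTS]


def step3_constraints(clause):
    """Step 3: recognize time/purpose constraints from surface phrases."""
    found = [("hasTimeConstraint", p) for p in TEMPORAL if has_phrase(clause, p)]
    if has_phrase(clause, "for acceptance"):
        found.append(("hasPurposeConstraint", "forAcceptance"))
    return found


def step4_attach(fact, constraints):
    """Step 4: attach same-sentence constraints to a fact."""
    return {"fact": fact, "constraints": constraints}


def step5_link(fact_a, fact_b, clause):
    """Step 5: link two facts via a conditional or temporal relation."""
    for marker in CONDITIONAL + TEMPORAL:
        if has_phrase(clause, marker):
            return (fact_a, marker, fact_b)
    return None


if __name__ == "__main__":
    clause = ("The Contractor submits a programme for acceptance. The Project Manager "
              "notifies the Contractor within two weeks of the submission.")
    parsed = [("Contractor", "submit", "programme"),
              ("Project Manager", "notify", "Contractor")]
    behaviors = step1_behaviors(parsed)
    nested = [step4_attach(b, step3_constraints(clause)) for b in behaviors]
    print(behaviors)
    print(step2_statements(parsed))
    print(step5_link(nested[1], nested[0], clause))
```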
#### 3.3.2 Implementation rules
During the workflow of entity/fact identification and relation linking, the diverse and complex language in a contract may cause ambiguities. The implementation rules are established to provide a unified standard. For example, these rules determine whether to segment a sentence into triples or to treat it as a whole entity, and how to deal with passive voice or other complex syntax.
**Rule 1. Passive voice transformation**: In a sentence where the predicate is a verb, the subject is not a Contract_actor entity and the object is a Contract_actor entity, transform the voice of the sentence so that the Contract_actor entity occupies the subject position. The extracted triple takes the form of \(<\)Contract_actor, _verb_, Contract_object\(>\) as in a regular "_Behavior_" fact.
**Illustration:**
**Clause 1a**: "_The cost of this insurance to the Contractor is paid by the Client._"
**Clause 1b**: "_If the contract requires the Project Manager, the Supervisor or the Contractor to reply to a communication, unless otherwise stated in these conditions of contract, they reply within the period for reply._"
In Clause 1a, the predicate is the verb phrase "_is paid by_", the subject is "_cost of this insurance to the Contractor_", which does not belong to the Contract_actor class, and the object of the sentence is a Contract_actor entity. Therefore, the triple is extracted so that "_Client_" becomes the subject and "_is paid by_" is converted to its active form "_pay_". The extracted triple is \(<\)_Client, pay, costOf_insuranceToContractor_\(>\).

The first sentence of Clause 1b, "If the contract requires the Project Manager", also matches the case where the object is a Contract_actor while the subject is not. The sentence is transformed into passive voice and the triple is extracted in the form \(<\)ProjectManager, isRequiredBy, contract\(>\).
**Rule 2. Subject designation**: When an action is described without explicitly stating the subject of the "Behavior", assign the action to the specific "Contract_actor" that conducts it.
**Illustration:**
**Clause 2a**: "_The Client may replace the Project Manager or the Supervisor after notifying the Contractor of the name of the replacement._"
**Clause 2b**: "_The Project Manager or the Contractor may give an early warning by notifying the other of any other matter which could increase the Contractor's total cost._"
**Clause 2c**: "_The Project Manager may instruct the Contractor to correct a failure to comply with the quality plan. This instruction is not a compensation event._"
Clauses 2a-2c are all cases in which a verb does not appear together with the subject that conducts the action. In Clause 2a, the subject of "_notify_" is "_Client_", so the clause after "_after_" has the implicit "_Behavior_" \(<\)_Client, notify, Contractor\(>\)_. Similarly, \(<\)_ProjectManager, notify, Contractor\(>\)_ and \(<\)_Contractor, notify, ProjectManager\(>\)_ are implicit "_Behaviors_" in Clause 2b. In Clause 2c, the verb "_correct_" should be assigned the subject "_Contractor_", which acts as the object of the main sentence, so there are two overlapping triples: \(<\)_ProjectManager, instruct, Contractor\(>\)_ and \(<\)_Contractor, correct, failure\(>\)_.
**Rule 3. Object designation**: When an action is described without explicitly stating the object of the "Behavior", assign a specific "Contract_object" to the action.
**Illustration:**
**Clause 3a**: "_A Party may terminate for a reason identified in the Termination Table. The procedures followed and the amounts due on termination are in accordance with the Termination Table._"
**Clause 3b**: "_If either Party wishes to terminate the Contractor's obligation to Provide the Works it notifies the Project Manager and the other Party giving details of the reason for terminating. The Project Manager issues a termination certificate promptly if the reason complies with the contract._"
In the first sentence of Clause 3a, "_A Party may terminate_" omits the specific object, which makes the extracted triple incomplete. However, we learn from the related Clause 3b that "terminate" has the object "_Contractor's obligation to Provide the Works_".
**Rule 4. Noun phrase extraction**: During entity extraction, treat a noun phrase (adjective\({}^{+}\) noun/possessive case) as a single entity.
**Illustration:**
**Clause 4a**: "_The Contractor submits particulars of the design of an item of Equipment to the Project Manager for acceptance if the Project Manager instructs the Contractor to. A reason for not accepting is that the design of the item will not allow the Contractor to Provide the Works in accordance with_
* _the Scope,_
* _the Contractor's design which the Project Manager has accepted or_
* _the applicable law._"
**Clause 4b**: "_The Project Manager may extend the period for reply to a communication if the Project Manager and the Contractor agree to the extension before the reply is due. The Project Manager informs the Contractor of the extension which has been agreed._"
In Clause 4a, the noun phrases are "_particulars of the design of an item of Equipment_", "_a reason for not accepting_", "_design of the item_", "_the Contractor's design_" and "_the applicable law_". Each of them is extracted as a single entity. In Clause 4b, "_period for reply to a communication_" is also a noun phrase considered as a single entity.
**Rule 5. Attribute clause extraction**: During entity extraction, treat an attribute clause (something which.../someone who...) as a single entity.
**Illustration:**
**Clause 5a**: "_The Contractor provides the insurances stated in the Insurance Table except any insurance which the Client is to provide as stated in the Contract Data. The Contractor provides additional insurances as stated in the Contract Data._"
**Clause 5b**: "_The Project Manager informs the Contractor of the extension which has been agreed._"
In Clause 5a, "which the Client is to provide" is attribute modifying the noun "insurance", so the entity being extracted is "_insurance which the Client is to provide_". Similarly, same rule is applied to extract "_extension which has been agreed_", which is the content that Project Manager informs Contractor of. The complete knowledge representation contains nested triple: <<_ProjectManager, inform, Contractor\(>\), hasContent, extension_which_hasBeenAgreed_>
**Rule 6. Verb modifier plus verb**: During relation extraction, verb phrases, including negation modifiers, are considered as a single relation, falling under the class of "hasActionTo".
**Illustration:**
**Clause 6a**: "_If either Party wishes to terminate the Contractor's obligation to Provide the Works it notifies the Project Manager and the other Party giving details of the reason for terminating. The Project Manager issues a termination certificate promptly if the reason complies with the contract._"
**Clause 6b**: "_If the Project Manager does not notify a decision on that part of Defined Cost within the time stated, the Contractor's assessment is treated as correct._"
In the verb phrase "wishes to terminate" of Clause 6a, "wish to" is used to modify the verb "terminate", but we extract the whole phrase as a relation to connect subject and object, which forms _<eitherParty, wishToTerminate, Contractor'sObligation_to_provideTheWork>_.
The verb phrase in Clause 6b is composed of negation modifier and verb, in this case, negation and verb are also considered as whole in a relation. Consequently, the "Behavior" triple is _<ProjectManager, not_notify, decision>_.
**Rule 7. Concatenation list extraction**: In a clause containing a list of several sub-points connected by a concatenation relation, if each sub-point is an independent sentence with complete <subject, predicate, object> components, turn each sub-point into a tripleFact; if each sub-point is a dependent clause, the list is considered as a single entity.
**Illustration:**
_Clause 7a: "Either Party may terminate if the other Party has done one of the following or its equivalent._
* _If the other Party is an individual and has_
* _presented an application for bankruptcy (R1),_
* _had a bankruptcy order made against it (R2),_
* _had a receiver appointed over its assets (R3) or_
* _made an arrangement with its creditors (R4)."_
In Clause 7a, "_the other Party_" is the subject of 4 sentences of the list, predicates are the verbs of each sentence being "_present_", "_had_", "_made_", objects are "_application for bankruptcy_", "_bankruptcy order made against it_", "_receiver appointed over its assets_", "_arrangement with its creditors_". Accordingly, each point can be represented as a triple such as R1: _<the other party, present, application for bankruptcy>_.
_Clause 7b: "The following are Contractor's liabilities unless they are stated as being Client's liabilities._
* _Claims and proceedings from Others and compensation and costs payable to Others which arise from or in connection with the Contractor Providing the Works._
* _Loss of or damage to the works, Plant and Materials and Equipment._
* _Loss of or damage to property owned or occupied by the Client other than the works,_
_which arises from or in connection with the Contractor Providing the Works._
* _Death or bodily injury to the employees of the Contractor."_
In Clause 7b, each main point of the list is a dependent (subordinate) clause instead of an independent sentence. Therefore, we consider the whole point as a single entity, such as "_Death or bodily injury to the employees of the Contractor_".
**Rule 8. Separable phrasal verb extraction**: For separable phrasal verbs where the verb phrase is not continuous, consider the separated verb phrase as one whole relation, and the object which separates its parts as a "Contract_object" entity.
**Illustration:**
**Clause 8a**: "_The Contractor has the right to use material provided by the Client only to Provide the
_Works. The Contractor may make this right available to a Subcontractor._"
In Clause 8a, the verb phrase "make this right available" is split by the object in the middle. In this case we extract "_the right_" as the object and extract the unsplit phrasal verb as the relation, which is "_make available_". Therefore, the "_Behavior_" triple in the last sentence is \(<\)_Contractor, makeAvailable, right\(>\)_. We further observe that "to a Subcontractor" indicates that the Behavior triple and the actor Subcontractor can be linked through a "hasActor" relation. Consequently, we model the last sentence as \(<<\)_Contractor, makeAvailable, right\(>\), hasActor, Subcontractor\(>\)_.
**Rule 9. Concatenation entity (relation) extraction:** Two nouns connected with concatenation conjunctions in the subject or object position are extracted as one entity. Two verbs connected with concatenation conjunctions are separately extracted as two relations in two different triples.
**Illustration:**
**Clause 9a**: "_The Project Manager and the Contractor may agree rates or lump sums to assess the change to the Prices._"
**Clause 9b**: "_The Project Manager prepares a first Early Warning Register and issues it to the Contractor within one week of the starting date. The Project Manager instructs the Contractor to attend a first early warning meeting within two weeks of the starting date._"
For instance, Project Manager and Contractor are joined by "_and_" in Clause 9a, so we consider them as one single entity \(<\)_ProjectManager and Contractor\(>\)_; the same holds for \(<\)_rates or lumpSums\(>\)_. However, "_prepares a first Early Warning Register_" and "_issues it to the Contractor_" are also joined by "and" in Clause 9b; in this case we extract two separate "Behavior" triples. The results are as follows.
Behavior 9a: \(<<\)_ProjectManager and Contractor\(>\), agreeTo, \(<\)_rates or lumpSums\(>>\)_
Behavior 9b-1: \(<\)_PM, prepare, firstEarlyWarningRegister\(>\)_
Behavior 9b-2: \(<\)_PM, issue, firstEarlyWarningRegister\(>\)_
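A toy rendering of Rule 9 in code may make the two cases easier to contrast. The simplified clause structures below are illustrative assumptions; the rule itself only prescribes how coordinated nouns and coordinated verbs should be handled.

```python
# Toy sketch of Rule 9 (concatenation entity / relation extraction).
# Coordinated subjects collapse into one concatFact entity; coordinated verbs
# yield one Behavior triple per verb. The inputs below are illustrative.

def apply_rule9(subjects, verb_object_pairs):
    subject = subjects[0] if len(subjects) == 1 else "<" + " and ".join(subjects) + ">"
    return [(subject, verb, obj) for verb, obj in verb_object_pairs]


# Clause 9a: "The Project Manager and the Contractor may agree rates or lump sums ..."
print(apply_rule9(["ProjectManager", "Contractor"],
                  [("agreeTo", "<rates or lumpSums>")]))
# -> [('<ProjectManager and Contractor>', 'agreeTo', '<rates or lumpSums>')]

# Clause 9b: "The Project Manager prepares a first Early Warning Register and issues it ..."
print(apply_rule9(["PM"],
                  [("prepare", "firstEarlyWarningRegister"),
                   ("issue", "firstEarlyWarningRegister")]))
# -> two separate Behavior triples sharing the subject 'PM'
```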
## 4 Case illustration
### 4.1 Building NCKG for NEC
We built an NCKG for the NEC core clauses, which include 9 sections and 181 clauses. The output NCKG-NEC-Core consists of two files in RDF Turtle-star format, one for the contract schema and another for the KR of all the NEC clauses. The turtle files can be found at https://github.com/CamilleZ99/ContractKG. This section presents examples covering the tasks of Behavior/Statement identification and conditional/temporal relation linking. Each example follows the 5-step workflow and implementation rules, as defined by the NCKG schema. The extraction process can be assisted by LLMs such as ChatGPT.
**Example 1**. Behavior & Behavior extraction with temporal relation.
Figure 4: Example of Behavior & Behavior extraction
We extracted the Behavior triples <Contractor, submit, Programme> and <PM, notify, Contractor> in Step 1 by recognizing the entities and their relations. We then recognize the Purpose_constraint and Content_constraint and append these constraints to the corresponding Behaviors. Finally, the temporal relation "within two weeks of" is used to connect the two Behaviors extracted before. Therefore, we can represent the clause as follows:
```
<< <<PM, notify, Contractor>> :hasContentConstraint :acceptanceOfProgrammeOrReasonsForNotAcceptingIt >>
    :within2weeksOf
<< <<Contractor, submit, Programme>> :hasPurposeConstraint :forAcceptance >> .
```
**Example 2**. Behavior & Statement extraction with conditional relation.
In Step 1, we extracted the Behavior <Contractor, submit, a first Programme>. In Step 2, we extracted the Statement <programme, hasProperty, not identified in the Contract Data>. The purpose_constraint and time_constraint are then appended to the Behavior fact. In the last step, a conditional relation is used to connect the Statement and the Behavior fact. The KR is as follows.
```
<<programme, hasProperty, notIdentifiedInContractData>>
    :if-then
<< <<Contractor, submit, firstProgramme>> :hasElseConstraint :toPM ;
                                          :hasPurposeConstraint :forAcceptance ;
                                          :hasTimeConstraint :withinPeriodStatedInContractData >> .
```
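Nested representations such as the two above can be generated mechanically from in-memory nested tuples. The following is a minimal serializer sketch; the exact surface syntax of the published turtle files is not reproduced here, so the bracketing and ":"-prefix conventions below are assumptions for illustration only.

```python
# Sketch: serializing nested NCKG facts, given as Python tuples, into an
# RDF-star-like << ... >> text form. The bracketing and ":"-prefix conventions
# are illustrative assumptions, not the exact syntax of the published files.

def serialize(node) -> str:
    """A triple is a 3-tuple (head, relation, tail); anything else is a term."""
    if isinstance(node, tuple) and len(node) == 3:
        head, relation, tail = (serialize(part) for part in node)
        return f"<< {head} :{relation} {tail} >>"
    return f":{node}"

# Example 2: "If a programme is not identified in the Contract Data, the
# Contractor submits a first programme ... within the period stated ..."
statement = ("programme", "hasProperty", "notIdentifiedInContractData")
behavior = (("Contractor", "submit", "firstProgramme"),
            "hasTimeConstraint", "withinPeriodStatedInContractData")
rule = (statement, "if-then", behavior)

print(serialize(rule))
# prints a single nested << ... >> expression linking the Statement and the constrained Behavior
```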
### 4.2 Storing and querying NCKG in a graph database
One way of leveraging KGs to assist domain-specific tasks is to store the KG in a local database and query it through SPARQL. In this sub-section, we implement the storage and querying of the NCKG to demonstrate its effectiveness, as shown in Figure 6.
Figure 5: Example of Behavior & Statement extraction
The ontological layer of NCKG was created using Protégé. The instance layer was written in RDF-star as a Turtle file (see Fig. 7 for the ontology and RDF-star interface). The ontology and instances were then imported into a graph database (e.g. GraphDB), which enables users to explore the class hierarchy and relationships with visualization support. SPARQL queries can also be executed through the query interface. Examples of querying contract knowledge are provided in Table 4, and the result of the first query is given in Figure 8. A minimal programmatic query sketch is given at the end of this sub-section.
| Querying object | Query explanation | SPARQL query |
| --- | --- | --- |
| TimeConstraint | What are the time constraints for documents that the Contractor is responsible for submitting? | PREFIX ckg: <http://ContractKR/ckg#> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> SELECT ?o ?b WHERE { BIND (<<ckg:Contractor ckg:submit ?o>> AS ?a) ?a ckg:hasTimeConstraint ?b . } |

Table 4: SPARQL query of contract knowledge
Figure 6: Framework of implementation in graph database
Figure 7: An excerpt of contract ontology and RDF-star instances
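To make the querying step in Table 4 concrete, the following is a minimal sketch of running that query programmatically against a SPARQL endpoint. The endpoint URL and repository name ("nckg") are illustrative assumptions; any RDF-star-capable SPARQL 1.1 endpoint holding the imported turtle files would be used in the same way.

```python
# Minimal sketch: running the Table 4 query against an assumed SPARQL endpoint.
# The GraphDB-style endpoint URL and repository name are assumptions; adjust
# them to wherever the NCKG turtle files have actually been imported.
import requests

ENDPOINT = "http://localhost:7200/repositories/nckg"  # assumed repository

QUERY = """
PREFIX ckg: <http://ContractKR/ckg#>
SELECT ?o ?b WHERE {
  BIND (<<ckg:Contractor ckg:submit ?o>> AS ?a)
  ?a ckg:hasTimeConstraint ?b .
}
"""

def run_query(endpoint: str, query: str):
    """POST a SPARQL query and return the result bindings as plain dicts."""
    response = requests.post(
        endpoint,
        data={"query": query},
        headers={"Accept": "application/sparql-results+json"},
        timeout=30,
    )
    response.raise_for_status()
    bindings = response.json()["results"]["bindings"]
    return [{var: cell["value"] for var, cell in row.items()} for row in bindings]

if __name__ == "__main__":
    for row in run_query(ENDPOINT, QUERY):
        print(row.get("o"), "->", row.get("b"))
```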
### 4.3 Enhancing LLM-assisted contract review with NCKG
Figure 8: Query result for time constraint of submitting document

A domain database can provide more accurate, expert-driven knowledge to enhance LLM-assisted contract review tasks. Figure 9 illustrates the pipeline for integrating NCKG and LLMs for contract review. For each contract clause, the LLM is used to decompose the task into several questions, including in-context questions and out-of-context questions. The LLM can answer in-context questions based purely on the contract clause context and give a preliminary assessment. For out-of-context questions, such as the definition of a related concept, a regulated process or a time constraint, we retrieve the related structured knowledge from our NCKG database. Then, combining the output of the LLM and the database query, we again ask the LLM to summarize and generate a comprehensive contract review conclusion. A minimal code sketch of this pipeline is given below.
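The sketch below follows the flow just described (decompose, answer in-context, retrieve out-of-context knowledge, synthesize). The `llm` callable, the prompt wording and the `run_sparql` helper are assumptions standing in for ChatGPT and the graph database query interface; only the overall S1-S4 structure mirrors the pipeline in the paper.

```python
# Sketch of the NCKG-enhanced contract review pipeline (stages S1-S4).
# `llm` is an assumed callable mapping a prompt string to a completion string;
# `run_sparql` is an assumed helper returning query results as text.
from typing import Callable


def review_clause(clause: str,
                  llm: Callable[[str], str],
                  run_sparql: Callable[[str], str],
                  nckg_queries: dict) -> str:
    # S1/S2: decompose the review task and answer the in-context questions
    # directly from the clause text.
    decomposition = llm(
        "Break down the contract review task for the clause below into "
        "in-context questions (answerable from the clause) and out-of-context "
        "questions (requiring external contract knowledge).\n\n" + clause
    )
    in_context_answers = llm(
        "Answer the in-context questions using only the clause.\n\n"
        f"Clause:\n{clause}\n\nQuestions:\n{decomposition}"
    )

    # S3: retrieve expert-driven knowledge from the NCKG for the out-of-context
    # questions (here a fixed topic-to-query mapping, an illustrative simplification).
    retrieved = "\n".join(
        f"{topic}: {run_sparql(query)}" for topic, query in nckg_queries.items()
    )

    # S4: synthesize a final, knowledge-grounded risk review.
    return llm(
        "Combine the clause, the preliminary answers and the retrieved contract "
        "knowledge, and identify the risks in the clause.\n\n"
        f"Clause:\n{clause}\n\nPreliminary answers:\n{in_context_answers}\n\n"
        f"Retrieved knowledge:\n{retrieved}"
    )
```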
Table 5 presents an example of enhancing the contract risk review task for an Engineering, Procurement and Construction (EPC) contract. The identifiers S/Q in brackets in Figure 9 denote the corresponding stages and questions in Table 5. Our task is to analyze the risks in the given contract clause (refer to S1). The task decomposition is performed using ChatGPT with a few expert-driven instructions, resulting in 7 sub-questions. Q1-Q5 are in-context questions for the LLM to answer (refer to S2), and Q6-Q7 are out-of-context questions used to query the structured database (refer to S3). After obtaining the results from the two sources, we ask ChatGPT to synthesize and summarize the above outputs and draw a final conclusion. Compared to the original result using only ChatGPT (output of S1), the NCKG-enhanced contract review (output of S4) provides valuable knowledge that not only leads to a more comprehensive risk assessment, but is also capable of giving possible reasons and solutions for contract management.
Figure 9: NCKG-enhanced LLM contract review pipeline

Table 5: Example of NCKG-enhanced contract risk review for an EPC contract

**Contract review task [S1]**

Q: Identify the risks in the given contract clause.

(a) Contract clause: The amount of the Advance Payment shall be twenty five (25%) of the Supply Price and shall be amortised proportionately against amounts recovered through interim applications for payment throughout the progress of the Supply, starting with the first Interim Payment after the Order Date. The currency shall be the same as the Supply Price.

(b) The Employer shall pay the Advance Payment after: (i) receiving the Performance Bond in accordance with Clause 4.3(b), and (ii) receiving the Advance Payment Guarantee in accordance with Clause 4.3(b), provided, however, the Advance Payment shall be made within fifteen (15) days of the later of (i) and (ii).

(c) The Supplier shall ensure that the Advance Payment Guarantee is valid and enforceable until the entire Advance Payment has been adjusted by the Supplier, but its amount shall be progressively reduced by the amount adjusted through proportional deductions from Interim Payment amounts paid for the Equipment under Clause 11.5 (_Schedule of Payments_). If the terms of the Advance Payment Guarantee specify its expiry date, and the Advance Payment has not been adjusted by the date twenty-eight (28) days prior to the expiry date, the Supplier shall extend the validity of the Advance Payment Guarantee until the Advance
Payment has been fully adjusted. In the event of failure by the Supplier to extend the validity of the Advance Payment Guarantee in accordance with this Clause, the Employer shall have the right to encash the Advance Payment Guarantee to its full outstanding value.
A: The main risk in the contract clause is the potential for the supplier to default on their obligations, leaving the employer with a financial loss. The clause requires an advance payment of 25% of the supply price, amortized against interim payments. To mitigate this risk, the contract includes provisions for an Advance Payment Guarantee. The guarantee must remain valid until the advance payment is fully adjusted, and the supplier must extend its validity if needed. If the supplier fails to extend it, the employer can encash the guarantee. However, it's important to ensure that the guarantee is adequate and the supplier has the financial capacity to perform their obligations.
**Task decomposition [S2]**
Q: Break down the contract review task by asking several questions about the contract clause, list 5 questions and give the corresponding answer. Start with:
a: When should the Employer pay the Advance Payment? [Q1]
b: The amount and currency of Advance Payment? [Q2]
c: What are the preconditions for Employer to encash the Advance Payment Guarantee? [Q3]
What information from FIDIC contract may needed to answer the above questions?
A:
Q1: When should the Employer pay the Advance Payment?
A1: The Employer should pay the Advance Payment within fifteen (15) days of the later of (i) receiving the Performance Bond in accordance with Clause 4.3(b), and (ii) receiving the Advance Payment Guarantee in accordance with Clause 4.3(b).
Q2: The amount and currency of Advance Payment?
A2: The amount of the Advance Payment is twenty-five percent (25%) of the Supply Price. The currency of the Advance Payment shall be the same as the Supply Price.
Q3: What are the preconditions for the Employer to encash the Advance Payment Guarantee?
A3: The preconditions for the Employer to encash the Advance Payment Guarantee are (1) the Advance Payment has not been fully adjusted by the date twenty-eight (28) days prior to the expiry date, and (2) the Supplier's failure to extend the validity of the Advance Payment Guarantee in accordance with the contract clause.
Q4: How will the Advance Payment be amortized?
A4: The Advance Payment will be amortized proportionately against amounts recovered through interim applications for payment throughout the progress of the Supply, starting with the first Interim Payment after the Order Date.
Q5: What happens if the Advance Payment Guarantee's amount needs to be reduced?
A5: The amount of the Advance Payment Guarantee shall be progressively reduced by the amount adjusted through proportional deductions from Interim Payment amounts paid for the Equipment under Clause 11.5 (Schedule of Payments).
The information from the FIDIC contract that may be needed to answer the above questions includes:
The property or definition of terms. e.g. Advance Payment Guarantee. [Q6]
The pre-conditions for encashing the Advance Payment Guarantee. [Q7]
**External knowledge retrieval [S3]**
SPARQL query of Q6:
PREFIX ckg: <http://ContractKR/ckg#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?a ?b WHERE {
  ckg:PerformanceBond ckg:hasProperty ?a .
  ckg:AdvancePaymentGuarantee ckg:hasProperty ?b .
}

SPARQL query of Q7:
PREFIX ckg: <http://ContractKR/ckg#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?a ?t WHERE {
  ?a ckg:if-then <<ckg:Employer ckg:haveRightToEncash ckg:AdvancePaymentGuarantee>> ;
     ckg:hasTimeConstraint ?t .
}

A6:
ckg:AdvancePaymentGuarantee ckg:hasProperty ckg:inAmountAndCurrencyEqualToAdvancePayment .

A7:
<<ckg:Employer ckg:doesNotReceive ckg:evidenceOfExtension>> ckg:hasTimeConstraint ckg:7DaysBeforeExpiryDateOfGuarantee .
Corresponding FIDIC clause: if the Employer does not receive this evidence 7 days before the
expiry date of this guarantee, the Employer shall be entitled to claim under the guarantee the
amount of advance payment which has not been repaid.
**Contract review enhanced by external knowledge [S4]**
Q: Combine above discussion and identify the risks in the contract clause.
A: The FIDIC clause refers to the requirement for the Contractor to extend the validity of the
guarantee until the advance payment has been repaid, and the Contractor should submit
evidence of this extension to the Employer. If the Employer does not receive this evidence 7
days before the expiry date of the guarantee, the Employer is entitled to claim the unpaid
amount of the advance payment under the guarantee.
In contract clause A, it states that the amount of the Advance Payment shall be 25% of the
Supply Price and shall be amortized proportionately against amounts recovered through interim
applications for payment throughout the progress of the Supply. However, it does not explicitly
mention the requirement for the Contractor to extend the validity of the guarantee until the
advance payment has been repaid, nor does it mention the need for the Contractor to submit
evidence of this extension to the Employer.
This omission in contract clause A causes a risk, as the Contractor might not be aware of the
need to extend the guarantee and provide evidence to the Employer, potentially leading to
disputes or financial losses for the Employer if the advance payment is not fully repaid and the
guarantee expires.
## 5 Discussion

In this paper, we present the NCKG methodology, a novel approach that captures the intricate semantics of contract knowledge using a nested structure. This method serves as a bridge for integrating LLMs and KGs to achieve a more transparent and trustworthy contract review process. Consequently, we identify the necessity for further exploration of LLM and KG integration from the following aspects.
### LLM-assisted NCKG construction
Recent advancements in LLMs have showcased a remarkable ability to extract information from text based on structured instructions. For instance, ChatIE performs triple extraction, named entity recognition and event extraction by designing chat-like prompts for ChatGPT, achieving impressive performance [68]. In this regard, NCKG construction can be executed more efficiently from contract text given our pre-defined NCKG schema. This significantly reduces the effort required to train domain-specific knowledge extraction models. Meanwhile, it is important to provide accurate and effective workflows and extraction rules to the LLMs. The workflow and rules in Section 3.3 can be formalized as prompts to further facilitate knowledge extraction in accordance with our framework, as sketched below.
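As a concrete illustration, the snippet below packs the Section 3.3 workflow and a few of the rules into a single extraction prompt. The exact prompt wording and the `llm` callable are assumptions; the paper only states that the workflow and rules can be formalized as prompts.

```python
# Sketch: formalizing the Section 3.3 workflow and rules as one extraction prompt.
# The prompt wording and the `llm` callable are illustrative assumptions.
from typing import Callable

EXTRACTION_PROMPT = """You are extracting a Nested Contract Knowledge Graph (NCKG).
Follow these steps for the clause below:
1. Behavior identification: extract <Contract_actor, hasActionTo, Contract_object> triples.
2. Statement identification: extract <Contract_object, hasProperty, Property> triples.
3. Constraint identification: extract Content/Purpose/Time/Else constraints.
4. Attach each constraint to the Behavior or Statement from the same sentence.
5. Link facts with conditional (if-then, unless) or temporal (before, after, within ... of) relations.
Also apply the rules: transform the voice so the Contract_actor is the subject (Rule 1),
treat noun phrases and attributive clauses as single entities (Rules 4-5), and keep
verb modifiers with their verb (Rule 6). Output nested triples, one per line.

Clause:
{clause}
"""

def extract_nckg(clause: str, llm: Callable[[str], str]) -> str:
    """Return the LLM's nested-triple extraction for one contract clause."""
    return llm(EXTRACTION_PROMPT.format(clause=clause))
```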
### Using NCKG for knowledge-enhanced LLMs
The presented work offers insights into incorporating domain-specific knowledge into large language models (LLMs). By imposing the constraints of structured contract knowledge, language models are expected to yield enhanced accuracy and dependability in contract management tasks. Using the nested framework presented in this study to represent complex knowledge, the knowledge-enhanced pipeline can also be extended to a broader array of domains. Several potential pipelines have been studied and provide inspiration for future directions. For instance, drawing upon the triple integration method employed in K-BERT (Liu et al., 2019), the incorporation of nested triples into language model inputs can be investigated. Another promising avenue is to explore learning knowledge embeddings within a nested, multi-layered knowledge graph. This approach facilitates the conversion of the NCKG into a vector database, thereby promoting more efficient utilization and maintenance of the contract knowledge database.
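To illustrate the vector-database direction in the simplest possible terms, the sketch below flattens nested facts into strings and retrieves them by cosine similarity over hashed bag-of-words vectors. The hashing-based featurization is a deliberate toy assumption; a real system would use learned knowledge or sentence embeddings.

```python
# Toy sketch: indexing flattened NCKG facts in a tiny vector store.
# Hashed bag-of-words vectors stand in for learned embeddings (an assumption);
# the point is only the flatten-embed-retrieve flow mentioned in the text.
import math
from collections import Counter

def flatten(fact) -> str:
    """Recursively flatten a nested (head, relation, tail) fact into text."""
    if isinstance(fact, tuple):
        return " ".join(flatten(part) for part in fact)
    return str(fact)

def embed(text: str, dim: int = 64):
    counts = Counter(hash(token) % dim for token in text.lower().split())
    vec = [float(counts.get(i, 0)) for i in range(dim)]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def retrieve(query: str, facts, top_k: int = 1):
    q = embed(query)
    scored = [(sum(a * b for a, b in zip(q, embed(flatten(f)))), f) for f in facts]
    return [f for _, f in sorted(scored, key=lambda x: -x[0])[:top_k]]

facts = [
    (("Contractor", "submit", "firstProgramme"), "hasTimeConstraint", "withinPeriodStatedInContractData"),
    (("PM", "issue", "certificate"), "if-then", ("reason", "compliesWith", "contract")),
]
print(retrieve("when must the contractor submit the programme", facts))
```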
In conclusion, LLMs can facilitate the development of knowledge graphs (KGs), while KGs can constrain reasoning and facilitate fact-checking for LLMs. Both approaches stand to mutually benefit one another. By combining these two techniques, more accurate and reliable outcomes can be anticipated for decision-making processes across various domains.
## 6 Conclusion
In this paper, we introduce the NCKG methodology, a method for constructing a nested contract knowledge graph. First, we present the nested framework for representing the nested relationships inherent in natural language. The ontological layer of NCKG is constructed based on this framework. Then, we define the NCKG implementation workflow and rules for extracting and representing contract knowledge from contract clauses. In the case study, we carry out storage and querying in a graph database to evaluate the quality and applicability of the extracted knowledge, and present a pipeline using NCKG to enhance LLM contract reviewing. The results show promising performance in the contract review task.
## Acknowledgement
This work is funded by the National Natural Science Foundation of China (Grant No. 71971196).
| The emergence of large language models (LLMs) presents an unprecedented opportunity to automate construction contract management, reducing human errors and saving significant time and costs. However, because LLMs lack domain expertise, they may produce convincing yet inaccurate and misleading content. To address this issue, expert-driven contract knowledge can be represented in a structured form to constrain the automatic contract management process. This paper introduces the Nested Contract Knowledge Graph (NCKG), a knowledge representation approach that captures the complexity of contract knowledge using a nested structure, comprising a nested knowledge representation framework, an NCKG ontology built on the framework, and an implementation method. We further present an LLM-assisted contract review pipeline enhanced with external knowledge in NCKG, which achieves promising performance in contract risk reviewing, shedding light on the combination of LLM and KG towards more reliable and interpretable contract management. |
2309.16354 | Transformer-VQ: Linear-Time Transformers via Vector Quantization | We introduce Transformer-VQ, a decoder-only transformer computing
softmax-based dense self-attention in linear time. Transformer-VQ's efficient
attention is enabled by vector-quantized keys and a novel caching mechanism. In
our large-scale experiments, Transformer-VQ is shown highly competitive in
quality, obtaining 0.99 bpb on Enwik8, 26.6 ppl on PG-19, and 3.16 bpb on
ImageNet64. In addition, the optimized implementation of Transformer-VQ is over
3x faster than a comparable quadratic-time transformer at sequence length 8k,
is over 12x faster at 32k, and can scale to 131k with similar throughput. Code
available: \url{https://github.com/transformer-vq/transformer_vq} | Lucas D. Lingle | 2023-09-28T11:26:52 | http://arxiv.org/abs/2309.16354v2 | # Transformer-VQ: Linear-Time Transformers via Vector Quantization
###### Abstract
We introduce Transformer-VQ, a decoder-only transformer computing softmax-based dense self-attention in linear time. Transformer-VQ's efficient attention is enabled by vector-quantized keys and a novel caching mechanism. In large-scale experiments, Transformer-VQ is shown highly competitive in quality, with strong results on Enwik8 (0.99 bpb), PG-19 (26.6 ppl), and ImageNet64 (3.16 bpb). Code: [https://github.com/transformer-vq/transformer_vq](https://github.com/transformer-vq/transformer_vq)
## 1 Introduction
Transformer (Vaswani et al., 2017) language models would ideally scale to long sequences, since their predictive abilities often improve as context length increases (Dai et al., 2019; Kaplan et al., 2020). Unfortunately, the standard transformer uses a self-attention mechanism with a quadratic time complexity with respect to sequence length. This limits the practicality of applying transformers to very long sequences, since increasing the sequence length by a factor of \(10^{n}\) increases the attention computations by a factor of \(100^{n}\). Transformer variants that overcome this efficiency bottleneck have the potential to facilitate new long-context applications and enable new breakthroughs.
Up to this point, a variety of _efficient transformers_(Tay et al., 2020b) have been proposed to scale to long sequences. Techniques include sparsity (Child et al., 2019; Ye et al., 2019; Beltagy et al., 2020; Kitaev et al., 2020; Qiu et al., 2020; Roy et al., 2021; Tay et al., 2020; Sukhbaatar et al., 2021; Wu et al., 2022; Liu et al., 2023; Zhang et al., 2023), compression (Liu et al., 2018; Rae et al., 2020; Ainslie et al., 2020; Zhu et al., 2021; Ren et al., 2021; Nawrot et al., 2021; 2023), low-rank approximations (Wang et al., 2020; Vyas et al., 2020; Katharopoulos et al., 2020; Xiong et al., 2021; Tay et al., 2021; Choromanski et al., 2021), and cross-attention operations (Dai et al., 2019; Ma et al., 2021; Hutchins et al., 2022; Hawthorne et al., 2022). Other efficient sequence models have also been proposed (Gu et al., 2022; Lee-Thorp et al., 2022; Poli et al., 2023; Peng et al., 2023).
In this paper, we present Transformer-VQ, a transformer decoder whose _dense self-attention is computable in linear time_ with respect to sequence length. This is made possible through a combination of vector-quantized keys, localized positional biases, and a truncation-free yet fixed-size cache mechanism. Beyond its efficiency, Transformer-VQ is also simple to sample from, as it does not require any periodic operations beyond those occurring at every token.
Figure 1: Minibatch of generated samples from our unconditional ImageNet64 model; nucleus 1.0.
## 2 Preliminaries
### Notation
The real numbers are denoted by \(\mathbb{R}\) and the extended real numbers \(\mathbb{R}\cup\{-\infty,\infty\}\) by \(\bar{\mathbb{R}}\). Zero-based indices are used for all tensors. When indexing a matrix \(\mathbf{M}\) along the first axis, we use \(\mathbf{M}_{i}\) to denote a column vector and \(\mathbf{M}_{i:}\) to denote a row vector. The functions \(\mathrm{LN}(\cdot)\), \(\mathrm{Softmax}(\cdot)\), \(\mathrm{Concat}(\cdot)\) denote LayerNorm (Ba et al., 2016), softmax, and concatenation, each applied row-wise. The symbols \(\triangleq,\propto,\odot,\exp(\cdot),\delta_{a,b},\mathrm{SG}(\cdot)\) denote equality by definition, proportionality, element-wise product, element-wise exponentiation, the Kronecker delta function, and the stop-gradient operator.
We assume familiarity with transformers (Vaswani et al., 2017), and use the notation \(D_{m}\) to denote the model width, \(H\) to denote the number of attention heads per layer, \(D_{k}\) to denote the query/key vector width, \(D_{v}\) to denote the value vector width, \(D_{f}\) to denote the feedforward fan-out width.
### Vector Quantization
Vector quantization (VQ) is a technique used extensively throughout this work. Here we briefly review VQ, motivate its use in self-attention, and discuss the VQ scheme introduced for representation learning by van den Oord et al. (2017). All proofs are given in Appendix A.
### Vector Quantizers and Codebooks
**Definition 2.1**.: A _vector quantizer_ is a function \(\mathrm{VQ}(\cdot;\mathbf{C})\) with domain \(\mathbb{R}^{D}\) and codomain \(\mathbb{R}^{D}\). For an input \(\mathbf{x}\), its output \(\hat{\mathbf{x}}\) is given by
\[z \triangleq\operatorname*{arg\,min}_{s}||\mathbf{x}-\mathbf{C}_{s} ||^{2} \tag{1}\] \[\hat{\mathbf{x}} \triangleq\mathbf{C}_{z} \tag{2}\]
where \(\mathbf{C}\in\mathbb{R}^{S\times D}\) is known as the _codebook_. The row indices \(\{0,\dots,S-1\}\) of \(\mathbf{C}\) are called _shortcodes_, and the rows themselves are called _codewords_.
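For concreteness, the following is a minimal NumPy sketch of the nearest-codeword lookup in Definition 2.1; the sizes, seed, and variable names are illustrative choices rather than anything specified by the paper.

```python
import numpy as np

def vq(x, codebook):
    """Quantize each row of x to its nearest codeword (Eqs. 1-2)."""
    # squared Euclidean distance between every input row and every codeword
    d2 = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    z = d2.argmin(-1)               # shortcodes
    return codebook[z], z           # quantized rows and their shortcodes

rng = np.random.default_rng(0)
C = rng.normal(size=(8, 4))         # codebook: S=8 codewords of width D=4
x = rng.normal(size=(3, 4))
x_hat, z = vq(x, C)                 # x_hat[i] is the codeword closest to x[i]
```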
**Theorem 2.2** (Based on Guo et al. (2019)).: _Let \(\mathbf{q}\in\mathbb{R}^{D}\) be a random variable with \(\mathbb{E}_{\mathbf{q}}[\mathbf{q}\mathbf{q}^{\top}]\propto\mathbf{I}_{D}\), and let \(\mathbf{k}\in\mathbb{R}^{D}\) be a random variable independent of \(\mathbf{q}\). Let \(\varphi:\mathbb{R}^{D}\to\mathbb{R}^{D}\) be a deterministic function. Then_
\[\mathbb{E}_{\mathbf{q},\mathbf{k}}||\mathbf{q}^{\top}\mathbf{k}-\mathbf{q}^{ \top}\varphi(\mathbf{k})||^{2}\propto\mathbb{E}_{\mathbf{k}}||\mathbf{k}- \varphi(\mathbf{k})||^{2}. \tag{3}\]
**Corollary 2.3**.: _Let the conditions of Theorem 2.2 hold. Given the constraint that \(\varphi(\mathbb{R}^{D})=\{\mathbf{C}_{s}\}_{s=0}^{S-1}\), the choice \(\varphi(\cdot)=\text{VQ}(\cdot;\mathbf{C})\) minimizes \(\mathbb{E}_{\mathbf{q},\mathbf{k}}||\mathbf{q}^{\top}\mathbf{k}-\mathbf{q}^{ \top}\varphi(\mathbf{k})||^{2}\)._
**Corollary 2.4**.: _Let the conditions of Theorem 2.2 hold. With \(\hat{\mathbf{k}}=\text{VQ}(\mathbf{k};\mathbf{C})\) we have_
\[\operatorname*{arg\,min}_{\mathbf{C}}\mathbb{E}_{\mathbf{q},\mathbf{k}}|| \mathbf{q}^{\top}\mathbf{k}-\mathbf{q}^{\top}\hat{\mathbf{k}}||^{2}= \operatorname*{arg\,min}_{\mathbf{C}}\mathbb{E}_{\mathbf{k}}||\mathbf{k}- \hat{\mathbf{k}}||^{2}. \tag{4}\]
_Remark 2.5_.: Since finding the global minimizer \(\mathbf{C}^{*}=\operatorname*{arg\,min}_{\mathbf{C}}\mathbb{E}_{\mathbf{k}}|| \mathbf{k}-\hat{\mathbf{k}}||^{2}\) can be expensive, we approximate it using a minibatch variant of streaming k-means, same as van den Oord et al. (2017).
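Remark 2.5 approximates the optimal codebook with a minibatch variant of streaming k-means. One common realization, following the exponential-moving-average updates of van den Oord et al. (2017), is sketched below; the decay value and exact smoothing scheme are assumptions here, not necessarily the ones used in Transformer-VQ.

```python
import numpy as np

def ema_codebook_step(keys, counts, sums, decay=0.99, eps=1e-5):
    """One streaming k-means update from a minibatch of keys (shape [N, D]).

    counts: smoothed per-codeword usage counts, shape [S]
    sums:   smoothed per-codeword key sums, shape [S, D]
    """
    codebook = sums / (counts[:, None] + eps)                    # current centroids
    d2 = ((keys[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    onehot = np.eye(counts.shape[0])[d2.argmin(-1)]              # [N, S] hard assignments
    counts = decay * counts + (1.0 - decay) * onehot.sum(0)
    sums = decay * sums + (1.0 - decay) * onehot.T @ keys
    return counts, sums, sums / (counts[:, None] + eps)          # updated centroids
```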
### Vector-Quantized Representation Learning
**Definition 2.6** (Based on van den Oord et al. (2017)).: A _vector-quantizer with straight-through estimator_ is a function \(\mathrm{STVQ}(\cdot;\mathbf{C})\) with domain \(\mathbb{R}^{D}\) and codomain \(\mathbb{R}^{D}\). For an input \(\mathbf{x}\), its output \(\hat{\mathbf{x}}\) is given by
\[z \triangleq\operatorname*{arg\,min}_{s}||\mathbf{x}-\mathbf{C}_{s} ||^{2} \tag{5}\] \[\hat{\mathbf{x}} \triangleq\mathbf{x}+\mathrm{SG}(\mathbf{C}_{z}-\mathbf{x}). \tag{6}\]
_Remark 2.7_.: For any \(\mathbf{x}\in\mathbb{R}^{D}\), \(\mathrm{STVQ}(\mathbf{x};\mathbf{C})\) evaluates to \(\mathrm{VQ}(\mathbf{x};\mathbf{C})\). However, the computed Jacobian of the quantizer w.r.t. its input will now be an identity matrix everywhere, instead of a zero matrix almost everywhere. Intuitively, when using \(\mathrm{STVQ}\), gradients are 'transplanted' onto the unquantized vectors from their quantized counterparts.
_Remark 2.8_.: We overload the notation \(\mathrm{STVQ}(\cdot;\mathbf{C})\) to operate row-wise on matrix-valued inputs.
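Because the paper's implementation is in JAX, a compact way to express Definition 2.6 is with `jax.lax.stop_gradient`; the toy codebook below is only for illustration. The forward value equals plain VQ, while gradients with respect to the input pass through as if the quantizer were the identity.

```python
import jax
import jax.numpy as jnp

def stvq(x, codebook):
    """Straight-through VQ (Eqs. 5-6): quantize rows of x, identity Jacobian w.r.t. x."""
    d2 = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    x_hat = codebook[d2.argmin(-1)]
    return x + jax.lax.stop_gradient(x_hat - x)

C = jnp.eye(4)                                   # toy codebook: 4 one-hot codewords
x = jnp.array([[0.9, 0.1, 0.0, 0.0]])
print(stvq(x, C))                                # forward: nearest codeword [[1., 0., 0., 0.]]
print(jax.grad(lambda v: stvq(v, C).sum())(x))   # backward: all ones, i.e. identity Jacobian
```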
## 3 Transformer-VQ
We now propose Transformer-VQ, a decoder-only transformer that can compute dense self-attention in linear time. Proofs for all theoretical results are given in Appendix A.
### Quadratic-Time Formulation
**Definition 3.1**.: _Vector-Quantized Self-Attention is a function \(\text{VQAttn}(\cdot;\mathbf{C},\mathbf{W}_{\{Q,K,V,G,O\}})\) with domain \(\mathbb{R}^{T\times D_{m}}\) and codomain \(\mathbb{R}^{T\times D_{m}}\). For an input \(\mathbf{X}\), its output \(\mathbf{Y}\) is defined via_
\[\mathbf{\dot{X}} \triangleq\text{LN}(\mathbf{X})\in\mathbb{R}^{T\times D_{m}} \tag{7}\] \[\mathbf{Q} \triangleq\tau^{-0.5}\text{LN}(\dot{\mathbf{X}}\mathbf{W}_{Q}) \in\mathbb{R}^{T\times D_{k}}\] (8) \[\mathbf{K} \triangleq\tau^{-0.5}\text{LN}(\dot{\mathbf{X}}\mathbf{W}_{K}) \in\mathbb{R}^{T\times D_{k}}\] (9) \[\mathbf{V} \triangleq\phi_{v}(\dot{\mathbf{X}}\mathbf{W}_{V})\in\mathbb{R}^ {T\times D_{v}}\] (10) \[\mathbf{G} \triangleq\phi_{g}(\dot{\mathbf{X}}\mathbf{W}_{G})\in\mathbb{R}^ {T\times D_{v}}\] (11) \[\dot{\mathbf{K}} \triangleq\text{STVQ}(\mathbf{K};\mathbf{C})\in\mathbb{R}^{T \times D_{k}}\] (12) \[\mathbf{W} \triangleq\phi_{w}(\mathbf{Q}\dot{\mathbf{K}}^{\top}+\mathbf{B}) \in\mathbb{R}^{T\times T}\] (13) \[\mathbf{O} \triangleq(\mathbf{W}\mathbf{V})\odot\mathbf{G}\in\mathbb{R}^{T \times D_{v}}\] (14) \[\mathbf{Y} \triangleq\mathbf{X}+\mathbf{O}\mathbf{W}_{O}\in\mathbb{R}^{T \times D_{m}} \tag{15}\]
_where \(\tau\) is a fixed constant, \(\phi_{v},\phi_{g},\phi_{w}\) are element-wise or row-wise nonlinearities, the query/key LayerNorms use unit gain and zero bias, and \(\text{STVQ}(\cdot;\mathbf{C})\) denotes row-wise application of vector-quantization with a straight-through gradient estimator (van den Oord et al., 2017)._
_Remark 3.2_.: Our attention mechanism is applied to a gated activation unit (GAU) design inspired by Hua et al. (2022). GAU is a single-headed gated attention mechanism and generally uses \(D_{k}=128\), \(D_{v}=2D_{m}\), with two GAUs replacing a single transformer layer. This yields a similar parameter count and compute requirement as the transformer layer, assuming the latter uses \(D_{m}\gg 128\), \(D_{k}=D_{v}=D_{m}/H\), and \(D_{f}=4D_{m}\).
_Remark 3.3_.: Prior work has also applied LayerNorm or similar to the queries and keys in attention (Henry et al., 2020; Roy et al., 2021; Zhu et al., 2021; Wu et al., 2022; Hutchins et al., 2022), generally finding it to improve numerical stability and convergence.
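A minimal NumPy sketch of the quadratic-time forward pass in Definition 3.1 is given below. The paper leaves \(\phi_v,\phi_g\) generic and uses banded relative positional biases for \(\mathbf{B}\); here SiLU nonlinearities, a purely causal bias, the value of \(\tau\), and the omission of learned LayerNorm gains/biases are simplifying assumptions.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # unit gain, zero bias (learned parameters omitted in this sketch)
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def silu(x):
    return x / (1.0 + np.exp(-x))

def vq(x, codebook):
    return codebook[((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1).argmin(-1)]

def vq_attn(X, C, Wq, Wk, Wv, Wg, Wo, tau=128.0):
    """Forward pass of Eqs. 7-15 with a causal-only bias B (an assumption)."""
    T = X.shape[0]
    Xn = layer_norm(X)
    Q = tau ** -0.5 * layer_norm(Xn @ Wq)
    K = tau ** -0.5 * layer_norm(Xn @ Wk)
    V, G = silu(Xn @ Wv), silu(Xn @ Wg)            # phi_v, phi_g assumed to be SiLU
    K_hat = vq(K, C)                               # STVQ forward equals plain VQ
    B = np.where(np.tril(np.ones((T, T))) > 0, 0.0, -np.inf)
    logits = Q @ K_hat.T + B
    W = np.exp(logits - logits.max(-1, keepdims=True))
    W /= W.sum(-1, keepdims=True)                  # phi_w: row-wise softmax
    return X + ((W @ V) * G) @ Wo

rng = np.random.default_rng(0)
T, Dm, Dk, Dv, S = 6, 16, 8, 12, 4
X = rng.normal(size=(T, Dm))
Y = vq_attn(X, rng.normal(size=(S, Dk)),
            *[0.1 * rng.normal(size=s) for s in [(Dm, Dk), (Dm, Dk), (Dm, Dv), (Dm, Dv), (Dv, Dm)]])
```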
### Warmup: Linear-Time Encoder Attention
**Theorem 3.4**.: _Suppose \(\mathbf{B}_{i,j}=0\) for all \(i,j\), and \(\phi_{w}\) is an element-wise nonlinearity. Then the attention weights in Definition 3.1 can be factored:_
\[\mathbf{W} \triangleq\phi_{w}(\mathbf{Q}\dot{\mathbf{K}}^{\top}+\mathbf{B}) \tag{16}\] \[=\phi_{w}(\mathbf{Q}\dot{\mathbf{K}}^{\top})\] (17) \[=\phi_{w}(\mathbf{Q}\mathbf{C}^{\top})\boldsymbol{\Delta} \tag{18}\]
_where \(\phi_{w}(\mathbf{Q}\mathbf{C}^{\top})\in\mathbb{R}^{T\times S}\), \(\boldsymbol{\Delta}\in\mathbb{R}^{S\times T}\) and \(\boldsymbol{\Delta}_{s,t}\triangleq\delta_{s,z_{t}}\). Here, \(\delta_{\cdot,\cdot}\) denotes the Kronecker delta function and \(z_{t}\) is the VQ shortcode for timestep \(t\)._
**Theorem 3.5**.: _Suppose \(\mathbf{B}_{i,j}=0\) for all \(i,j\), and \(\phi_{w}\) is the row-wise softmax nonlinearity. Then the attention weights in Definition 3.1 can be factored:_
\[\mathbf{W} \triangleq\phi_{w}(\mathbf{Q}\dot{\mathbf{K}}^{\top}+\mathbf{B}) \tag{19}\] \[=\phi_{w}(\mathbf{Q}\dot{\mathbf{K}}^{\top})\] (20) \[=\mathrm{Diag}(\exp(\mathbf{Q}\mathbf{C}^{\top})\boldsymbol{ \Delta}\mathbf{1})^{-1}\exp(\mathbf{Q}\mathbf{C}^{\top})\boldsymbol{\Delta} \tag{21}\]
_where \(\mathbf{1}\in\mathbb{R}^{T}\), \(\mathrm{Diag}(\exp(\mathbf{Q}\mathbf{C}^{\top})\boldsymbol{\Delta}\mathbf{1} )^{-1}\exp(\mathbf{Q}\mathbf{C}^{\top})\in\mathbb{R}^{T\times S}\), \(\boldsymbol{\Delta}\in\mathbb{R}^{S\times T}\) and \(\boldsymbol{\Delta}_{s,t}\triangleq\delta_{s,z_{t}}\). Here, \(\delta_{\cdot,\cdot}\) denotes the Kronecker delta function and \(z_{t}\) is the VQ shortcode for timestep \(t\)._
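As a quick numerical check of Theorem 3.5, the snippet below builds random toy tensors and verifies that the factorization through the codebook reproduces the row-wise softmax attention weights exactly; all sizes and the seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
T, S, Dk = 8, 4, 6
Q = rng.normal(size=(T, Dk))
K = rng.normal(size=(T, Dk))
C = rng.normal(size=(S, Dk))

z = ((K[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(-1)   # shortcodes z_t
K_hat = C[z]                                                    # vector-quantized keys
Delta = np.eye(S)[z].T                                          # Delta[s, t] = delta_{s, z_t}

A = np.exp(Q @ K_hat.T)
lhs = A / A.sum(-1, keepdims=True)                              # softmax(Q K_hat^T), with B = 0

E = np.exp(Q @ C.T)                                             # (T, S): scores against codewords
rhs = np.diag(1.0 / (E @ Delta @ np.ones(T))) @ (E @ Delta)     # Eq. (21)

print(np.allclose(lhs, rhs))                                    # True
```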
### Linear-Time Decoder Attention
**Theorem 3.6**.: _Let \(L\) be a divisor of \(T\). Suppose \(\mathbf{B}_{i,j}=-\infty\) for \(j>i\), and \(\mathbf{B}_{i,j}=0\) for \(j<i-L\). Let \(\mathbf{\Delta}\in\mathbb{R}^{S\times T}\) with \(\mathbf{\Delta}_{s,t}\triangleq\delta_{s,z_{t}}\), same as before. Let \(\phi_{w}\) be an element-wise nonlinearity with \(\phi_{w}(-\infty)=0\). For a tensor \(\mathbf{M}\), let \(\mathbf{M}^{(\dots,n,\dots)}\) denote the slice \(\mathbf{M}_{\dots,nL;(n+1)L,\dots}\). For a specific tensor, if an axis is not sliced over, each ellipsis will be replaced by the appropriate number of '\(\cdot\)'. Then the product \(\mathbf{W}\mathbf{V}\) in Definition 3.1 can be computed using the recursion:_
\[\mathbf{U}(n) \triangleq\begin{cases}\mathbf{U}(n-1)+\mathbf{\Delta}^{(\cdot,n )}\mathbf{V}^{(n,\cdot)}&\text{ if }n\geq 0\\ \mathbf{0}&\text{ otherwise}\end{cases} \tag{22}\] \[[\mathbf{W}\mathbf{V}]^{(n,\cdot)} =\phi_{w}(\mathbf{Q}^{(n,\cdot)}\mathbf{C}^{\top})\mathbf{U}(n-2)\] (23) \[\quad+\phi_{w}(\mathbf{Q}^{(n,\cdot)}[\hat{\mathbf{K}}^{(n-1, \cdot)}]^{\top}+\mathbf{B}^{(n,n-1)})\mathbf{V}^{(n-1,\cdot)}\] (24) \[\quad+\phi_{w}(\mathbf{Q}^{(n,\cdot)}[\hat{\mathbf{K}}^{(n,\cdot )}]^{\top}+\mathbf{B}^{(n,n)})\mathbf{V}^{(n,\cdot)} \tag{25}\]
_where any tensor slice \(\mathbf{M}^{(\dots,n,\dots)}\) is defined as a zero tensor of width \(L\) in the sliced dimension(s) if any block slice index \(n\) is less than zero._
**Theorem 3.7**.: _Let \(L\) be a divisor of \(T\). Suppose \(\mathbf{B}_{i,j}=-\infty\) for \(j>i\), and \(\mathbf{B}_{i,j}=0\) for \(j<i-L\). Let \(\mathbf{\Delta}\in\mathbb{R}^{S\times T}\) with \(\mathbf{\Delta}_{s,t}\triangleq\delta_{s,z_{t}}\), same as before. Suppose \(\phi_{w}\) is the row-wise softmax nonlinearity. Let the block tensor slice notation from Theorem 3.6 apply. Let \(\mathbf{1}\in\mathbb{R}^{T}\). Let \(\mathbf{A}\triangleq\exp(\mathbf{Q}\hat{\mathbf{K}}^{\top}+\mathbf{B})\). Then the product \(\mathbf{W}\mathbf{V}\) in Definition 3.1 can be computed using the recursions:_
\[\mathbf{U}(n) \triangleq\begin{cases}\mathbf{U}(n-1)+\mathbf{\Delta}^{(\cdot,n )}\mathbf{V}^{(n,\cdot)}&\text{ if }n\geq 0\\ \mathbf{0}&\text{ otherwise}\end{cases} \tag{26}\] \[\mathbf{L}(n) \triangleq\begin{cases}\mathbf{L}(n-1)+\mathbf{\Delta}^{(\cdot,n )}\mathbf{1}^{(n)}&\text{ if }n\geq 0\\ \mathbf{0}&\text{ otherwise}\end{cases}\] (27) \[[\mathbf{A}\mathbf{V}]^{(n,\cdot)} =\exp(\mathbf{Q}^{(n,\cdot)}\mathbf{C}^{\top})\mathbf{U}(n-2)\] (28) \[\quad+\exp(\mathbf{Q}^{(n,\cdot)}[\hat{\mathbf{K}}^{(n-1,\cdot)}] ^{\top}+\mathbf{B}^{(n,n-1)})\mathbf{V}^{(n-1,\cdot)}\] (29) \[\quad+\exp(\mathbf{Q}^{(n,\cdot)}[\hat{\mathbf{K}}^{(n,\cdot)}] ^{\top}+\mathbf{B}^{(n,n)})\mathbf{V}^{(n,\cdot)}\] (30) \[[\mathbf{A}\mathbf{1}]^{(n)} =\exp(\mathbf{Q}^{(n,\cdot)}\mathbf{C}^{\top})\mathbf{L}(n-2)\] (31) \[\quad+\exp(\mathbf{Q}^{(n,\cdot)}[\hat{\mathbf{K}}^{(n-1,\cdot)}] ^{\top}+\mathbf{B}^{(n,n-1)})\mathbf{1}^{(n-1)}\] (32) \[\quad+\exp(\mathbf{Q}^{(n,\cdot)}[\hat{\mathbf{K}}^{(n,\cdot)}] ^{\top}+\mathbf{B}^{(n,n)})\mathbf{1}^{(n)}\] (33) \[[\mathbf{W}\mathbf{V}]^{(n,\cdot)} =\mathrm{Diag}([\mathbf{A}\mathbf{1}]^{(n)})^{-1}[\mathbf{A} \mathbf{V}]^{(n,\cdot)}. \tag{34}\]
_Remark 3.8_.: Intuitively, Theorem 3.7 shows that \(\mathrm{VQ}\)-Attention is computable by processing the sequence in blocks of length \(L\), applying two steps to each block. The first step is to form the corresponding block of \(\mathbf{\Delta}\) and use it to sum the value vectors and shortcode indicators into the appropriate rows of the 'cache' variables \(\mathbf{U}(n),\mathbf{L}(n)\). The second step is to incorporate \(\mathbf{U}(n),\mathbf{L}(n)\) directly into the retrieval process with the help of the codebook \(\mathbf{C}\).
_Remark 3.9_.: Theorem 3.7 provides an algorithm to compute \(\mathrm{VQ}\)-Attention from the queries, keys, values, gates, and codebook in \(\mathcal{O}(L(S+2L)(D_{k}+D_{v}))\) time per query block, and therefore \(\mathcal{O}(T(S+2L)(D_{k}+D_{v}))\) time per sequence.
_Remark 3.10_.: In the experiments, we use row-wise softmax for \(\phi_{w}\) and use the relative positional biases from Dai et al. (2019) for the band of nonzero biases in \(\mathbf{B}\). We also rely on a numerically stable reformulation of Theorem 3.7, where the logarithm of the counts \(\mathbf{L}(n-2)\) is moved inside the exponentials \(\exp(\mathbf{Q}^{(n,\cdot)}\mathbf{C}^{\top})\) appearing in \([\mathbf{A}\mathbf{V}]^{(n,\cdot)}\) and \([\mathbf{A}\mathbf{1}]^{(n)}\).
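To make the recursion concrete, the NumPy sketch below runs Theorem 3.7 on toy tensors and checks it against dense causal attention over the quantized keys. For simplicity the banded biases \(\mathbf{B}^{(n,n)}\) and \(\mathbf{B}^{(n,n-1)}\) are set to zero (the paper uses relative positional biases there) and the numerically stable reformulation of Remark 3.10 is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
T, L, S, Dk, Dv = 12, 4, 6, 5, 3                  # toy sizes; L divides T
Q = rng.normal(size=(T, Dk))
K = rng.normal(size=(T, Dk))
V = rng.normal(size=(T, Dv))
C = rng.normal(size=(S, Dk))                      # codebook

z = ((K[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(-1)   # shortcodes z_t
K_hat = C[z]                                                    # quantized keys
Delta = np.eye(S)[z].T                                          # (S, T) one-hot indicators

# Reference: dense causal softmax attention over the quantized keys (band biases = 0).
A = np.exp(Q @ K_hat.T) * np.tril(np.ones((T, T)))
ref = (A / A.sum(-1, keepdims=True)) @ V

# Caches U(n), L(n) of Eqs. (26)-(27); U_hist[m + 1] holds U(m), with U(-1) = 0.
U_hist, L_hist = [np.zeros((S, Dv))], [np.zeros(S)]
for n in range(T // L):
    sl = slice(n * L, (n + 1) * L)
    U_hist.append(U_hist[-1] + Delta[:, sl] @ V[sl])
    L_hist.append(L_hist[-1] + Delta[:, sl].sum(-1))

out = np.zeros((T, Dv))
causal = np.tril(np.ones((L, L)))
for n in range(T // L):
    cur = slice(n * L, (n + 1) * L)
    Qb = Q[cur]
    U_nm2, L_nm2 = U_hist[max(n - 1, 0)], L_hist[max(n - 1, 0)]  # U(n-2), L(n-2)
    av = np.exp(Qb @ C.T) @ U_nm2            # long-range term, retrieved through the codebook
    a1 = np.exp(Qb @ C.T) @ L_nm2
    if n >= 1:                               # previous block, handled exactly
        prev = slice((n - 1) * L, n * L)
        ap = np.exp(Qb @ K_hat[prev].T)
        av, a1 = av + ap @ V[prev], a1 + ap.sum(-1)
    ac = np.exp(Qb @ K_hat[cur].T) * causal  # current block, causally masked
    av, a1 = av + ac @ V[cur], a1 + ac.sum(-1)
    out[cur] = av / a1[:, None]              # Eq. (34)

print(np.allclose(out, ref))                 # True: linear-time recursion matches dense attention
```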
### Learning Algorithm
#### 3.4.1 Training Loss
Let \(\mathbf{\theta}\) denote the set of non-codebook parameters of a transformer with \(N\) VQ-Attention layers, and let \(\mathcal{C}=\{\mathbf{C}^{(\ell)}\}_{\ell=0}^{N-1}\) denote the set of the layers' codebooks. For autoregressive modeling of a
sequence \(\mathbf{X}=\{\mathbf{x}_{t}\}_{t=0}^{T}\), we define the Transformer-VQ training loss as
\[\mathcal{L}(\mathbf{X};\boldsymbol{\theta},\mathcal{C})=\mathcal{L}_{\text{CE}}( \mathbf{X};\boldsymbol{\theta},\mathcal{C})+\beta\mathcal{L}_{\text{VQ}}( \mathbf{X};\boldsymbol{\theta},\mathcal{C}) \tag{35}\]
where \(\beta>0\) is a hyperparameter known as the commit loss coefficient, and
\[\mathcal{L}_{\text{CE}}(\mathbf{X};\boldsymbol{\theta},\mathcal{C}) \triangleq\frac{1}{T}\sum_{t=0}^{T-1}-\ln p(\mathbf{x}_{t+1}| \mathbf{x}_{\leq t},\boldsymbol{\theta},\mathcal{C}) \tag{36}\] \[\mathcal{L}_{\text{VQ}}(\mathbf{X};\boldsymbol{\theta},\mathcal{C}) \triangleq\frac{1}{T}\sum_{t=0}^{T-1}\sum_{\ell=0}^{N-1}|| \mathbf{K}_{t}^{(\ell)}-\text{SG}(\mathbf{C}_{z_{t}}^{(\ell)})||_{2}^{2}. \tag{37}\]
Thus, the training loss is the average next-token cross-entropy loss, plus the average token's commitment losses (van den Oord et al., 2017), summed over layer codebooks. Non-codebook parameters \(\boldsymbol{\theta}\) receive a gradient from both loss terms. Following van den Oord et al. (2017), codebooks are parameterized via smoothed quantizer statistics.
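A small NumPy sketch of the two loss terms is below; in an autodiff framework the codeword term in Eq. (37) would sit inside a stop-gradient (a no-op in this gradient-free illustration), and the value of \(\beta\) shown is an arbitrary placeholder.

```python
import numpy as np

def cross_entropy(logits, targets):
    """Average next-token cross-entropy, Eq. (36). logits: [T, vocab], targets: [T]."""
    logits = logits - logits.max(-1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(-1, keepdims=True))
    return -logp[np.arange(len(targets)), targets].mean()

def commit_loss(K_layers, codebooks):
    """Average commitment loss, Eq. (37), summed over the layers' codebooks."""
    total, T = 0.0, K_layers[0].shape[0]
    for K, C in zip(K_layers, codebooks):
        z = ((K[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(-1)
        total += ((K - C[z]) ** 2).sum() / T      # codeword C[z] treated as a constant
    return total

def training_loss(logits, targets, K_layers, codebooks, beta=1e-4):   # beta value is arbitrary
    return cross_entropy(logits, targets) + beta * commit_loss(K_layers, codebooks)
```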
#### 3.4.2 Training Updates
Instead of updating on the full sequence loss given above, we generally update every \(K\) query blocks, where \(LK\ll T\), which resembles a strategy used in prior works (Dai et al., 2019; Wu et al., 2022; Hutchins et al., 2022).
Each update is obtained by backpropagating through a window of \(LK\) timesteps, with gradients computed from the corresponding terms in the per-token average losses above. Codebooks are also updated every \(K\) query blocks.
When \(K=1\), using Theorem 3.7 is an efficient alternative to using a non-differentiable long-range key-value cache. When \(K>1\), a learning signal is sent through any value vectors added to the compressed cache within the backpropagation window.
## 4 Related Work
### Hierarchical Attention
Combiner (Ren et al., 2021) proposes an approximation of softmax using a simple graphical model, and parameterizes its internal probabilities using max-pooling over query/key features, enabling decoder-only self-attention in subquadratic time. H-Transformer-1D (Zhu and Soricut, 2021) uses average-pooling operations over queries/keys to reduce the complexity of encoder-only self-attention. Transformer-LS (Zhu et al., 2021) uses dynamic projections to downsample long-range features in transformers by a user-specified factor. Hourglass Transformer (Nawrot et al., 2021) and MegaByte (Yu et al., 2023) eschew pooling in favor of convolutions or reshaping for temporal downsampling, and apply these techniques to reduce computation in the interior layers of decoder-only transformers.
Transformer-VQ differs from these works in that it uses vector quantization (VQ), a well-understood method for compression, instead of newly-designed heuristic methods. In addition, it does not rely on token contiguity to guide the compression process. Instead, it utilizes an equivalence to dense attention. Notably, Transformer-VQ is easier to sample from compared to previous hierarchical attention models; since the cache update logic can be equivalently applied every token instead of every \(L\) tokens, there are no sporadic 'feature consolidation' operations required during sampling.
### Kernelizable Attention
Kernelizable attention (Katharopoulos et al., 2020; Choromanski et al., 2021) computes query and key features and applies the same nonlinearity to both of them separately, omitting additional nonlinearities when computing attention weights. By using the associativity of matrix multiplication, kernelized attention reduces attention to linear complexity. Transformer-VQ is distinguished from kernelizable attention through an asymmetric treatment of queries and keys, a deterministic equivalence to softmax-based attention, training stability, and strong quantitative results on long-context autoregressive modeling benchmarks.
Clustering attention (Vyas et al., 2020) uses vector-quantized queries and is also kernelizable. However, it requires learning per-layer codebooks for each sequence and uses a modified form of Lloyd's iterations based on Hamming distance and locality-sensitive hashing. This yields a complex non-causal algorithm which is only suitable for non-causal attention and is slow on TPUs. Transformer-VQ is strongly differentiated from clustering attention by its simplicity, applicability to decoder-only tasks, efficiency on TPUs, and large-scale experimental validation.
### Compressive Attention
Compressive Transformers (Rae et al., 2020) directly learn a compression function for long-range features. LUNA (Ma et al., 2021) and Recurrent Transformers (Bulatov et al., 2022; Hutchins et al., 2022) use cross-attention to compress long-range features into a recurrent state. Notably, our model implements a kind of block-recurrent mechanism for its cache, but is significantly more parameter-efficient than the mechanisms proposed by Ma et al. (2021); Hutchins et al. (2022). More generally, Transformer-VQ differs from compressive/recurrent transformers in that it has an equivalence to quadratic-time attention over vector-quantized keys. In other words, if the keys are already vector-quantized, the Transformer-VQ cache losslessly reduces the cost to linear time.
Perceivers (Jaegle et al., 2021; Hawthorne et al., 2022) use cross-attention to attend to long sequences, and compute self-attention over only a narrow stack of 'latents'. Transformer-VQ differs from Perceivers in that it computes dense self-attention in linear time, instead of just cross-attention. Thus, while Perceivers' long-range layers incur a quadratic time complexity during sampling, Transformer-VQ generates sequences in linear time.
### Gated Sequence Models
Gated attention was introduced in FLASH (Hua et al., 2022) as a fusion of attention sublayers (Vaswani et al., 2017) and GLU-based MLP sublayers (Shazeer, 2020). Various gating mechanisms have previously been used to stabilize training of transformers (Parisotto et al., 2019) and other sequence models including S4 (Gu et al., 2022), GSS (Mehta et al., 2022), MEGA (Ma et al., 2023) and RWKV (Peng et al., 2023). Transformer-VQ uses the original gating formulation from Hua et al. (2022), and develops a new attention mechanism.
### VQ, K-Means, and Beyond
Ideas relating to \(k\)-means, vector quantization, and/or codebooks have also been applied in transformers for sparse attention (Roy et al., 2021; Wang et al., 2021, 2022), feature learning (Mao et al., 2022; Roy et al., 2022), sparsely-activated MLPs (Lample et al., 2019), and expert selection (Roller et al., 2021). These works generally feature codebooks or similar _within_ a transformer architecture. Several works also have proposed models that feature a codebook somewhere _outside_ a transformer, e.g., when transformers are priors for VQ-VAEs (Kaiser et al., 2018; Dhariwal et al., 2020; Ramesh et al., 2021; Lee et al., 2022; Zhou et al., 2022). Transformer-VQ uses one codebook within each layer and, in contrast to all of the aforementioned works, computes dense self-attention in linear time.
Transformer-VQ is not directly related to methods which quantize the weights of a transformer e.g., Dettmers et al. (2022); Dettmers & Zettlemoyer (2023); Frantar et al. (2023). Such methods are typically applied after training, and do not reduce the complexity of self-attention. However, we expect these approaches may prove complementary during inference.
## 5 Experiments
For all experiments, we use a TPU v3-128 accelerator (Jouppi et al., 2017). Hyperparameters follow Appendix B unless specifically noted. For efficient training on TPUs, Transformer-VQ was implemented using Jax (Bradbury et al., 2018) and Flax (Heek et al., 2023).
### Ablation Studies
#### 5.1.1 Codebook Size
Larger codebook sizes may allow more flexible attention patterns and could improve the fidelity of the gradients, both of which are likely to benefit model quality at the expense of additional wall time. To investigate, we ablate the codebook size \(S\) using the Enwik8 dataset (see Section 5.2.1), and report the lowest validation bits-per-byte (BPB, lower is better) obtained by each model in Table 1.
Table 1 confirms the intuition that larger codebooks improve the prediction quality (lower BPB) in return for additional wall time per training step. In particular, increasing the codebook size by a factor of two appears to decrease the validation BPB by about \(0.005\) and increase the wall time per step by a factor of about \(1.1\). A formal characterization of the scaling laws (Kaplan et al., 2020) for codebook size could be an interesting direction for future work.
#### 5.1.2 Long-Range Cache
Since our model has several architectural differences from most prior works, the benefit of the long-range cache must be shown directly. To investigate, we train a model with the long-range cache omitted, using codebook size \(S=256\). We report the validation BPB for Enwik8 in Table 2.
As shown in Table 2, removing the long-range cache reduces the wall time per step by a factor of about \(1.1\), but leads to a significant drop in quality (higher bits-per-byte). This confirms the importance of our long-range cache mechanism.
### Quantitative Results
To assess the ability of Transformer-VQ to learn long-range dependencies, we now conduct a series of large-scale experiments, benchmarking on several long-range autoregressive modeling tasks. For fair comparison, we only benchmark against models (a) trained without using any extra data or augmentation, and (b) evaluated with fixed parameters. In all cases, we use codebook size \(S=512\).
#### 5.2.1 Enwik8
Enwik8 is a byte-level language modeling dataset consisting of 100 million bytes of unprocessed English-language Wikipedia articles (Mahoney, 2011), with long-term dependencies that may span tens of thousands of bytes. Per convention, it is split into train, validation, and test sets of 90 million, 5 million, and 5 million bytes, respectively (Child et al., 2019; Rae et al., 2020).
For this dataset, we trained a Transformer-VQ with 190M parameters, smaller than the model by Dai et al. (2019). We report test bits-per-byte (BPB) in Table 3.
Transformer-VQ obtains a BPB of 0.99, notably matching the result of Transformer-XL (Dai et al., 2019), while using an entirely different cache mechanism not based on position and also shorter in length at test time.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Setting** & **Val. BPB** & **Rel Time** \\ \hline \(S=256\) & 1.010 & 0.927 \\ \(S=512\) & 1.005 & 1.0 \\ \(S=1024\) & 1.000 & 1.109 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Codebook size ablations.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Setting** & **Val. BPB** & **Rel Time** \\ \hline No long-range cache & 1.026 & 0.836 \\ Long-range cache & 1.010 & 0.927 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Long-range cache ablations.
\begin{table}
\begin{tabular}{l l} \hline \hline
**Model** & **BPB** \\ \hline Ma et al. (2023) - Mega & 1.02 \\ Dai et al. (2019) - XL & 0.99 \\ Child et al. (2019) - Sparse & 0.99 \\ Beltagy et al. (2020) - Longform. & 0.99 \\ Roy et al. (2021) - Routing & 0.99 \\ Sukhbaatar et al. (2019a) - Adapt. Sp. & 0.98 \\ Sukhbaatar et al. (2019b) - All-Attn. & 0.98 \\ Nawrot et al. (2021) - Hourglass & 0.98 \\ Rae et al. (2020) - Compress. & 0.97 \\ Zhu et al. (2021) - Long-Short & 0.97 \\ Fan et al. (2020b) - Feedback & 0.96 \\ Lei (2021) - SRU++ & 0.95 \\ Sukhbaatar et al. (2021) - Expire Sp. & 0.95 \\ Lutati et al. (2023) - Focus Attn. & **0.94** \\ \hline Transformer-VQ & 0.99 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Test bits-per-byte on Enwik8.
For this dataset, we found overfitting was a significant issue, and due to the compressive cache mechanism, using attention dropout was not possible. Sweeping over the residual dropout rate, weight decay coefficient, and layerdrop (Fan et al., 2020) rate, we found a setting yielding good generalization. Nonetheless, Transformer-VQ falls short of the state of the art here; several works using complex recurrence or forgetting mechanisms obtain better Enwik8 results.
#### 5.2.2 Pg-19
PG-19 is an open-vocabulary language modeling dataset consisting of 11 gigabytes of text from over 28,000 freely-available Project Gutenberg books published prior to 1919 (Rae et al., 2020). The average number of words per book is nearly 70,000, enabling learning long-term dependencies, especially in novels (Sun et al., 2021; Hutchins et al., 2022).
For this dataset, we trained a Transformer-VQ with 1.3B parameters, similar to the largest model by Hutchins et al. (2022). Since PG-19 is an open-vocabulary dataset, we first learned a SentencePiece vocabulary (Kudo and Richardson, 2018) of size 32,000 using the BPE method. Following the calculations of Rae et al. (2020), we report the test set word-level perplexity (WLP) in Table 4.
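Concretely, that convention re-normalizes the model's per-token loss by the token-to-word ratio of the test set; a small sketch is below, with placeholder counts that are not the actual PG-19 statistics.

```python
import math

def word_level_perplexity(nats_per_token, n_tokens, n_words):
    """Convert a subword-level loss (nats/token) to word-level perplexity."""
    return math.exp(nats_per_token * n_tokens / n_words)

# hypothetical numbers, for illustration only
print(word_level_perplexity(nats_per_token=2.6, n_tokens=8_000_000, n_words=6_300_000))
```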
Transformer-VQ obtains a WLP of 26.6, very close to the state-of-the-art by Hutchins et al. (2022). Interestingly, since our Transformer-VQ design is equivalent to using dense self-attention with vector-quantized keys, our strong result shows that models using long-range attention only (no recurrence) can also be highly competitive on PG-19, which reaffirms the efficacy of standalone self-attention as a method for sequence processing at scale.
#### 5.2.3 ImageNet64
ImageNet64 is an image dataset consisting of over 1.2 million images downsampled to 64x64 resolution (Chrabaszcz et al., 2017; Deng et al., 2009). Flattening the images yields an autoregressive density estimation task on sequences of over 12,000 bytes each. Note that since the official test set is not public for this dataset, we report results on the official validation set. For validation purposes, we used a held-out set of about 80,000 examples from the training split.
For this dataset, we trained a Transformer-VQ with 1.2B parameters, similar to the PG-19 model. We report the bits-per-byte on the official validation set in Table 5. Several of the earlier baselines used an earlier variant of downsampled ImageNet prepared by van den Oord et al. (2016) with a different downsampling algorithm. Since that variant has been unavailable through official channels for about a year, we used the newer variant following Lipman et al. (2023). We emphasize that our results using the newer variant cannot be directly compared with baselines using the earlier variant; however, due to several reporting ambiguities, Table 5 does not symbolically distinguish variants used.
Transformer-VQ obtains a BPB of 3.16, significantly improving on prior results reported by Hazami et al. (2022); Lipman et al. (2023). Our model has 7x more parameters than the one by Hazami et al. (2022), but thanks to the large dataset it showed no signs of overfitting. Our favorable results on this dataset show that the Transformer-VQ architecture can be directly applied to other modalities beyond natural language, which we attribute to its efficient emulation of the standard transformer's flexible attention patterns.
\begin{table}
\begin{tabular}{l c} \hline \hline
**Model** & **WLP** \\ \hline Yu et al. (2023) - MegaByte & 36.4 \\ Rae et al. (2020) - XL & 36.3 \\ Rae et al. (2020) - Compressive & 33.6 \\ Roy et al. (2021) - Routing & 33.2 \\ Hawthorne et al. (2022) - Perceiver AR & 28.9 \\ Hutchins et al. (2022) - Block-Recur. & **26.5** \\ \hline Transformer-VQ & \(26.6\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Test word-level perplexity on PG-19.
\begin{table}
\begin{tabular}{l c} \hline \hline
**Model** & **BPB** \\ \hline Ren et al. (2021) - Combiner & 3.42 \\ Kingma et al. (2021) - VDM & 3.40 \\ Hawthorne et al. (2022) - Perceiver AR & 3.40 \\ Yu et al. (2023) - MegaByte & 3.40 \\ Grcic et al. (2021) - DenseFlow & 3.35 \\ Lipman et al. (2023) - Flow Matching & 3.31 \\ Hazami et al. (2022) - Efficient VDVAE & 3.30 \\ \hline Transformer-VQ & **3.16** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Validation bits-per-byte on ImageNet64.
### Qualitative Analysis
We provide extensive samples for all models in Appendix C.
#### 5.3.1 ImageNet64
We generate batches of 128 sequences using nucleus sampling (Holtzman et al., 2020). Figures 1-2 show a subset of samples with the same indices from two batches with different nucleus settings. Many of the samples between the two batches are perceptually similar, which is a consequence of using the same random seed to directly observe the impact of the nucleus sampling hyperparameter. In Figure 1, we observe that our unconditional ImageNet64 model can synthesize sequences of over 12,000 bytes and appears to be capable of depicting relatively high-fidelity ocean water, shorelines, leaves, insects, trees, animals, people, mountains, and architecture.
The model does make some mistakes, particularly involving perspective or object identity. For instance, in the second row of Figure 1, the rightmost image appears to be a bird wearing a shell, while in the first row of Figure 2, the rightmost image appears to be a wooden galleon with legs. It is unclear if these effects are due to vector quantization or lack of image-specific inductive biases. Interestingly, we have not used separate embeddings to specify the row, column, and color channel to the model, which is in contrast to some prior works (Child et al., 2019; Hawthorne et al., 2022). Finally, while some mistakes dissipate when using nucleus 0.999, some new ones do appear; one possible explanation is that using a fixed nucleus is suboptimal for images.
#### 5.3.2 Pg-19
In Figure 3, we observe that our PG-19 model can synthesize relatively high-quality text, maintaining a consistent tone, remaining on topic, and generating reasonably coherent content. These qualitative observations were found to hold for the vast majority of the samples we generated.
The excerpt shown was preceded by a book title 'Elementary Photography', a non-existent author's name, and publisher information. Though this information was synthesized by the model, it suggests the model may be amenable to generating text on a particular topic simply by conditioning on a prompt, similar to larger language models.
## 6 Conclusion
Transformer-VQ is a decoder-only transformer architecture that computes softmax-based dense self-attention in linear time with respect to sequence length. Its efficient attention is enabled by vector-quantized keys and a new truncation-free fixed-size cache. Our large-scale experiments show Transformer-VQ is an efficient and flexible autoregressive model with successful applications to byte-level language modeling, open-vocabulary language modeling, and image synthesis. Future work directions include formal scaling laws, scaling to even larger models, and applying Transformer-VQ to long-context program synthesis and reinforcement learning tasks.
Figure 2: Minibatch of generated samples from our unconditional ImageNet64 model; nucleus 0.999.
### Reproducibility Statement
To facilitate reproducibility, our attention mechanism is described mathematically in Section 3, our hyperparameters and other implementation details are given in Appendix B, and our implementation is open-sourced at the link in the abstract.
#### Acknowledgments
We thank the anonymous reviewers for helpful feedback on this work, and acknowledge the Python community, especially the Jax ecosystem contributors, for effective libraries used in this project. This work was generously supported by Cloud TPUs from Google's TPU Research Cloud (TRC).
| Transformer-VQ is a decoder-only transformer that computes softmax-based dense self-attention in linear time. Transformer-VQ's efficient attention is enabled by vector-quantized keys and a novel caching mechanism. In large-scale experiments, Transformer-VQ is shown to be highly competitive in quality, achieving 0.99 bpb on Enwik8, 26.6 ppl on PG-19, and 3.16 bpb on ImageNet64. In addition, the optimized implementation of Transformer-VQ is over 3x faster than a comparable quadratic-time transformer at sequence length 8k, over 12x faster at 32k, and can scale to length 131k with similar throughput. Code: \url{https://github.com/transformer-vq/transformer_vq} |
2306.17817 | Act3D: 3D Feature Field Transformers for Multi-Task Robotic Manipulation | 3D perceptual representations are well suited for robot manipulation as they
easily encode occlusions and simplify spatial reasoning. Many manipulation
tasks require high spatial precision in end-effector pose prediction, which
typically demands high-resolution 3D feature grids that are computationally
expensive to process. As a result, most manipulation policies operate directly
in 2D, foregoing 3D inductive biases. In this paper, we introduce Act3D, a
manipulation policy transformer that represents the robot's workspace using a
3D feature field with adaptive resolutions dependent on the task at hand. The
model lifts 2D pre-trained features to 3D using sensed depth, and attends to
them to compute features for sampled 3D points. It samples 3D point grids in a
coarse to fine manner, featurizes them using relative-position attention, and
selects where to focus the next round of point sampling. In this way, it
efficiently computes 3D action maps of high spatial resolution. Act3D sets a
new state-of-the-art in RL-Bench, an established manipulation benchmark, where
it achieves 10% absolute improvement over the previous SOTA 2D multi-view
policy on 74 RLBench tasks and 22% absolute improvement with 3x less compute
over the previous SOTA 3D policy. We quantify the importance of relative
spatial attention, large-scale vision-language pre-trained 2D backbones, and
weight tying across coarse-to-fine attentions in ablative experiments. Code and
videos are available on our project website: https://act3d.github.io/. | Theophile Gervet, Zhou Xian, Nikolaos Gkanatsios, Katerina Fragkiadaki | 2023-06-30T17:34:06 | http://arxiv.org/abs/2306.17817v2 | # Act3D: Infinite Resolution Action Detection Transformer for Robotic Manipulation
###### Abstract
3D perceptual representations are well suited for robot manipulation as they easily encode occlusions and simplify spatial reasoning. Many manipulation tasks require high spatial precision in end-effector pose prediction, typically demanding high-resolution 3D perceptual grids that are computationally expensive to process. As a result, most manipulation policies operate directly in 2D, foregoing 3D inductive biases. In this paper, we propose Act3D, a manipulation policy Transformer that casts 6-DoF keypose prediction as 3D detection with adaptive spatial computation. It takes as input 3D feature clouds unprojected from one or more camera views, iteratively samples 3D point grids in free space in a coarse-to-fine manner, featurizes them using relative spatial attention to the physical feature cloud, and selects the best feature point for end-effector pose prediction. Act3D sets a new state-of-the-art in RLbench, an established manipulation benchmark. Our model achieves 10% absolute improvement over the previous SOTA 2D multi-view policy on 74 RLbench tasks and 22% absolute improvement with 3x less compute over the previous SOTA 3D policy. In thorough ablations, we show the importance of relative spatial attention, large-scale vision-language pre-trained 2D backbones, and weight tying across coarse-to-fine attentions. Code and videos are available at our project site: [https://act3d.github.io/](https://act3d.github.io/).
Keywords:Learning from Demonstrations, Manipulation, Transformers
## 1 Introduction
Solutions to many robotic manipulation tasks can be represented as a sequence of 6-DoF robot end-effector poses and gripper actions. Many recent methods train manipulation policies to predict such poses directly from 2D images using supervision from demonstrations [1, 2, 3, 4, 5, 6]. However, these methods are typically sample inefficient, often requiring thousands of trajectories, and cannot easily generalize across viewpoints and environments. Transporter networks [7] recently reformulated 4-DoF keypose prediction as pixel classification in a top-down scene image, inspired by object detection in computer vision [8, 9, 10]. This design choice of detecting end-effector poses in the scene using local features instead of regressing them from aggregated scene features, which we will call _action detection_, dramatically increased sample efficiency. However, Transporter Networks are limited to top-down 2D worlds and 4-DoF end-effector poses.
A core challenge in detecting actions for general 6-DoF manipulation in a 3D space is that end-effector 3D positions not only reside on points attached to the physical scene, but could also be in the free space. For example, end-effector 6 DoF poses relevant for a task, which we will call keyposes [11, 12], can be pre-grasp poses, back-off poses for articulated object interactions, or
transition poses between different parts of a task. While it is straightforward to featurize 2D pixels or 3D physical points -- we can featurize pixels with 2D backbones and back-project to 3D or use a 3D point cloud transformer [13] -- it is less clear how to efficiently featurize points in free space to detect one as the end-effector position. 3D voxelization at high resolution is computationally demanding [14] since we cannot use sparse 3D convolutions [15; 16]: we do not know ahead of time which voxels will remain empty or would need to be featurized because they would contain the next end-effector pose. Recent work of PerAct [1] featurizes _all_ 3D voxels (occupied or not) using the latent set bottlenecked self-attention operation of Perceiver [17], which is computationally expensive. Other methods work around this issue by avoiding featurizing points in free space and instead detecting a contact point and then regressing an offset from this contact point towards the end-effector predicted position [2; 18; 19]. This is a reasonable design choice but it does not fully exploit the action detection inductive bias.
In this paper, we propose Act3D, a Transformer policy architecture for language-conditioned multi-task robot manipulation that casts 6-DoF end-effector keypose prediction as 3D detection with adaptive spatial computation. Act3D learns 3D perceptual representations of arbitrary spatial resolution via recurrent coarse-to-fine 3D point grid sampling and featurization. It first computes a scene-level 3D feature cloud by lifting 2D pre-trained features from one or more views using sensed depth. At each iteration, the model then samples 3D point grids in the whole workspace and featurizes them using relative spatial cross-attention [20] to the 3D feature cloud. The featurized 3D points are classified with a detection Transformer head [10; 21] to predict the grid center for the next iteration. All iterations share attention weights. Act3D detects the 3D point that corresponds to the end-effector's 3D position, and then regresses the 3D rotation and opening of the end-effector from the contextualized parametric query. At inference time, we can trade-off compute for higher spatial precision and task performance by sampling more points in free space than the model ever saw at training time.
We test Act3D in RLBench [22], an established benchmark for learning diverse robot manipulation policies from demonstrations. We set a new state-of-the-art in the benchmark in both single-task and multi-task settings. Specifically, we achieve a 10% absolute improvement over prior SOTA on the single-task setting introduced by HiveFormer [2] with 74 tasks and a 22% absolute improvement over prior SOTA in the multi-task setting introduced by PerAct [1] with 18 tasks and 249 variations. We also validate our approach on a Franka Panda with a multi-task agent trained from scratch on 8 real-world tasks with a total of just 100 demonstrations (see Figure 2). In thorough ablations,
Figure 1: **Act3D architecture.** Act3D is a language-conditioned end-effector 6-DoF keypose predictor that learns 3D perceptual representations of arbitrary spatial resolution via recurrent coarse-to-fine 3D point sampling and featurization. Act3D featurizes multi-view RGB images with a pre-trained 2D backbone and lifts them in 3D using depth to obtain a multi-scale 3D scene feature cloud. It then iteratively predicts 3D foci of attention in the free space, samples 3D point grids in their vicinity, and featurizes the sampled 3D points using relative cross-attention to the physical scene feature cloud, language tokens, and proprioception. Act3D detects the 3D point that corresponds to the next best end-effector position using a detection Transformer head, and regresses the rotation, end-effector opening, and planner collision avoidance from the decoder’s parametric query.
we show the importance of relative spatial attention, large-scale vision-language pre-trained 2D backbones, and weight tying across coarse-to-fine attentions.
## 2 Related Work
**Learning robot manipulation from demonstrations:** Many recent works train multi-task manipulation policies that leverage Transformer architectures [1; 2; 3; 5; 23; 24] to predict robot actions from video input and language instructions. End-to-end image-to-action policy models, such as RT-1 [5], GATO [24], BC-Z [25], and InstructRL [3], directly predict 6-DoF end-effector poses from 2D video and language inputs. They require many thousands of demonstrations to learn spatial reasoning and generalize to new scene arrangements and environments. Transporter networks [7] and their subsequent variants [26; 27; 28] formulate 4-DoF end-effector pose prediction as pixel classification in 2D overhead images. Their _action detection_ inductive bias -- parametrizing the action implicitly [29] by detecting end-effector poses in the scene using local features with translation and rotation equivariances [30] -- dramatically increased sample efficiency over previous methods that regress end-effector poses by aggregating global scene features. However, they are limited to top-down 2D planar worlds with simple pick-and-place primitives. 3D policy models of C2F-ARM [4] and PerAct [1] voxelize the robot's workspace and are trained to detect the 3D voxel that contains the next end-effector keypose. Spatially precise 3D pose prediction requires the 3D voxel grid to be high resolution, which comes at a high computational cost. C2F-ARM [4] uses a coarse-to-fine voxelization in convolutional grids to handle computational complexity, while PerAct [1] uses Perceiver's latent bottleneck [17] to avoid voxel-to-voxel self-attention operations. Act3D avoids 3D voxelization altogether and instead represents the scene as a 3D feature cloud. It samples 3D points in the empty workspace and featurizes them using cross-attentions to the physical 3D point features.
**Feature pre-training for robot manipulation:** Many 2D policy architectures bootstrap learning from demonstrations from frozen or finetuned 2D image backbones [31; 32; 25; 33] to increase experience data sample efficiency. Pretrained vision-language backbones can enable generalization to new instructions, objects, and scenes [34; 27]. In contrast, SOTA 3D policy models are typically trained from scratch from colored point clouds input [1; 4; 35]. Act3D uses CLIP pre-trained 2D backbones [36] to featurize 2D image views and lifts the 2D features in 3D using depth information [37; 38]. We show that 2D feature pretraining gives a considerable performance boost over training from scratch.
**Relative attention layers:** Relative attentions have shown improved performance in many 2D visual understanding tasks and language tasks [39; 40]. Rotary embeddings [41] implement relative attention efficiently by casting it as an inner-product in an extended position feature space. In 3D, relative attention is imperative as the coordinate system is arbitrary. 3D relative attentions have been used before in 3D Transformer architectures for object detection and point labelling [42; 43]. We show in Section 4 that relative attentions significantly boost performance of our model.
## 3 Act3D
The architecture of Act3D is shown in Figure 1. It is a Transformer policy that, at each timestep \(t\), predicts a 6-DoF end-effector pose from one or more RGB-D images, a language instruction, and proprioception information regarding the robot's current end-effector pose. The key idea is to _detect_ 6 DoF end-effector poses in the robot's workspace by learning 3D perceptual representations of free space with arbitrary spatial resolution, via recurrent coarse-to-fine 3D point grid sampling and featurization. 3D point candidates (which we will call ghost points) are sampled, featurized and scored iteratively through relative cross-attention [20] to the physical 3D scene feature cloud, lifted from 2D feature maps of the input image views.
Following prior work [12; 1; 2; 3], instead of predicting an end-effector pose at each timestep, we extract a set of _keyposes_ that capture bottleneck end-effector poses in a demonstration. A pose is a keypose if (1) the end-effector changes state (something is grasped or released) or (2) velocities
approach near zero (a common occurrence when entering pre-grasp poses or entering a new phase of a task). The prediction problem then boils down to predicting the next (best) keypose action given the current observation. At inference time, Act3D iteratively predicts the next best keypose and reaches it with a motion planner, following previous works [1, 2]. We assume access to a dataset of \(n\) demonstration trajectories. Each demonstration is a sequence of observations \(O=\{o_{1},o_{2},..,o_{t}\}\) paired with continuous actions \(A=\{a_{1},a_{2},..,a_{t}\}\) and, optionally, a language instruction \(l\) that describes the task. Each observation \(o_{t}\) consists of RGB-D images from one or more camera views; more details are in Appendix 6.2. An action \(a_{t}\) consists of the 3D position and 3D orientation (represented as a quaternion) of the robot's end-effector, its binary open or closed state, and whether the motion planner needs to avoid collisions to reach the pose: \[a=\{a_{\mathrm{pos}}\in\mathbb{R}^{3},a_{\mathrm{rot}}\in\mathbb{H},a_{\mathrm{open}}\in\{0,1\},a_{\mathrm{col}}\in\{0,1\}\}\] Next, we go into details on the modules of Act3D.

**Visual and language encoder:** Our visual encoder maps multi-view RGB-D images into a multi-scale 3D scene feature cloud. We use a large-scale pre-trained 2D feature extractor followed by a feature pyramid network [44] to extract multi-scale visual tokens for each camera view. We then lift these 2D visual tokens to 3D by interpolating their depth values. The language encoder featurizes instructions with a large-scale pre-trained language feature extractor. We use the CLIP ResNet50 [36] visual encoder and the corresponding language encoder to exploit their common vision-language feature space for interpreting instructions and referential grounding. Our pre-trained visual and language encoders are frozen, not finetuned, during training of Act3D.

**Iterative ghost point sampling and featurization:** To enable precise and computationally tractable keypose detection, we sample, featurize and select ghost points iteratively, first coarsely across the entire workspace, then finely in the vicinity of the ghost point selected as the focus of attention in the previous iteration. The coarsest ghost points attend to a global coarse scene feature cloud, whereas finer ghost points attend to a local fine scene feature cloud.

**Relative 3D cross-attentions:** We featurize each of the 3D ghost points and a parametric query (used to select via inner-product one of the ghost points as the next best end-effector position in the decoder) independently through cross-attentions to the multi-scale 3D scene feature cloud, language tokens, and proprioception. Featurizing ghost points independently, without self-attentions to one another, enables sampling more ghost points at inference time to improve performance, as we show in Section 4. Our cross-attentions use relative 3D position information and are implemented efficiently with rotary positional embeddings [20].
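To illustrate the coarse-to-fine procedure just described, here is a schematic NumPy sketch. The scoring function is a stand-in for the relative cross-attention and inner-product scoring against the parametric query, the first stage approximates the roughly 1 m workspace with a ball, and uniform sampling inside each ball (rather than a regular point grid) is a simplification.

```python
import numpy as np

def sample_ball(center, diameter, n, rng):
    """Sample n points roughly uniformly inside a ball of the given diameter."""
    pts = []
    while len(pts) < n:
        cand = rng.uniform(-0.5, 0.5, size=(4 * n, 3)) * diameter
        pts.extend(cand[(cand ** 2).sum(-1) <= (diameter / 2.0) ** 2])
    return center + np.asarray(pts[:n])

def coarse_to_fine_position(score_fn, workspace_center, rng, n_per_stage=333,
                            stage_diameters=(1.0, 0.16, 0.04)):
    """Zoom in on the highest-scoring ghost point over three stages, as in the text."""
    center = np.asarray(workspace_center, dtype=float)
    for diameter in stage_diameters:
        ghosts = sample_ball(center, diameter, n_per_stage, rng)
        center = ghosts[np.argmax(score_fn(ghosts))]    # focus of attention for the next stage
    return center

# toy usage: score ghost points by proximity to a hidden target position
rng = np.random.default_rng(0)
target = np.array([0.12, -0.33, 0.45])
pos = coarse_to_fine_position(lambda p: -((p - target) ** 2).sum(-1), np.zeros(3), rng)
```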
Given a point \(\mathbf{p}=(x,y,z)\in\mathbb{R}^{3}\) and its feature \(\mathbf{x}\in\mathbb{R}^{d}\), the rotary position encoding function \(\mathbf{PE}\) is defined as: \[\mathbf{PE}(\mathbf{p},\mathbf{x})=\mathbf{M}(\mathbf{p})\,\mathbf{x},\qquad\mathbf{M}(\mathbf{p})=\begin{bmatrix}\cos x\theta_{k}&-\sin x\theta_{k}&0&0&0&0\\ \sin x\theta_{k}&\cos x\theta_{k}&0&0&0&0\\ 0&0&\cos y\theta_{k}&-\sin y\theta_{k}&0&0\\ 0&0&\sin y\theta_{k}&\cos y\theta_{k}&0&0\\ 0&0&0&0&\cos z\theta_{k}&-\sin z\theta_{k}\\ 0&0&0&0&\sin z\theta_{k}&\cos z\theta_{k}\end{bmatrix}\] where \(\theta_{k}=\frac{1}{100000^{(k-1)/d}}\). The dot product of two positionally encoded features is then \[\mathbf{PE}(\mathbf{p}_{i},\mathbf{x}_{i})^{T}\mathbf{PE}(\mathbf{p}_{j},\mathbf{x}_{j})=\mathbf{x}_{i}^{T}\mathbf{M}(\mathbf{p}_{i})^{T}\mathbf{M}(\mathbf{p}_{j})\mathbf{x}_{j}=\mathbf{x}_{i}^{T}\mathbf{M}(\mathbf{p}_{j}-\mathbf{p}_{i})\mathbf{x}_{j}\] which depends only on the relative positions of points \(\mathbf{p}_{i}\) and \(\mathbf{p}_{j}\).

**Detection Transformer decoder:** Once ghost points and the parametric query are featurized, the detection transformer head scores ghost point tokens via inner product with the parametric query to select one as the next best end-effector position \(a_{\mathrm{pos}}\). We then regress the end-effector orientation \(a_{\mathrm{rot}}\) and opening \(a_{\mathrm{open}}\), as well as whether the motion planner needs to avoid collisions to reach the pose \(a_{\mathrm{col}}\), from the parametric query with a simple multi-layer perceptron (MLP).

**Training:** Act3D is trained supervised from input-action tuples from a dataset of manipulation demonstrations. These tuples are composed of RGB-D observations, language goals, and keypose
actions \(\{(o_{1},l_{1},k_{1}),(o_{2},l_{2},k_{2}),...\}\). During training, we randomly sample a tuple and supervise Act3D to predict the keypose action \(k\) given the observation and goal \((o,l)\). We supervise position prediction \(a_{\text{pos}}\) at every round of coarse-to-fine sampling with a softmax cross-entropy loss over ghost points, rotation prediction \(a_{\text{rot}}\) with an MSE loss on the quaternion prediction, and binary end-effector opening \(a_{\text{open}}\) and whether the planner needs to avoid collisions \(a_{\text{col}}\) with binary cross-entropy losses.
**Implementation details:** We extract two feature maps per \(256\)x\(256\) input image view: \(32\)x\(32\) coarse visual tokens and \(64\)x\(64\) fine visual tokens. We use three ghost point sampling stages: first across the entire workspace (roughly \(1\) meter cube), then in a \(16\) centimeter diameter ball, and finally in a \(4\) centimeter diameter ball. The coarsest ghost points attend to a global coarse scene feature cloud (\(32\)x\(32\)x\(n_{\text{cam}}\) coarse visual tokens) whereas finer ghost points attend to a local fine scene feature cloud (the closest \(32\)x\(32\)x\(n_{\text{cam}}\) out of the total \(64\)x\(64\)x\(n_{\text{cam}}\) fine visual tokens). During training, we sample \(1000\) ghost points in total, split equally across the three stages. At inference time, we can trade off extra prediction precision and task performance for additional compute by sampling more ghost points than the model ever saw at training time (\(10,000\) in our experiments). We show in ablations in Section 4 that our framework is robust to these hyper-parameters, but that tying weights across sampling stages and relative 3D cross-attention are both crucial for generalization. We use \(2\) layers of cross-attention and an embedding size of \(60\) for single-task experiments and \(120\) for multi-task experiments. Training samples are augmented with random crops of RGB-D images and \(\pm 45.0\) degree yaw rotation perturbations (only in the real world, as this degrades performance in simulation, as we show in Section 4). We use a batch size of 16 on an Nvidia 32GB V100 GPU for 200k steps (one day) for single-task experiments, and a batch size of 48 on 8 Nvidia 32GB V100 GPUs for 600k steps (5 days) for language-conditioned multi-task experiments.
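As a sanity check on the relative-position property above, \(\mathbf{M}(\mathbf{p}_i)^{T}\mathbf{M}(\mathbf{p}_j)=\mathbf{M}(\mathbf{p}_j-\mathbf{p}_i)\), the following NumPy sketch builds the block-diagonal rotation for a single frequency and verifies that the dot product of two encoded features depends only on the relative position; the sizes and the frequency value are arbitrary.

```python
import numpy as np

def rot_matrix(p, theta):
    """6x6 block-diagonal rotation for one frequency: a 2x2 rotation per coordinate of p."""
    M = np.zeros((6, 6))
    for i, a in enumerate(p):
        c, s = np.cos(a * theta), np.sin(a * theta)
        M[2 * i:2 * i + 2, 2 * i:2 * i + 2] = [[c, -s], [s, c]]
    return M

rng = np.random.default_rng(0)
theta = 0.7
pi, pj = rng.normal(size=3), rng.normal(size=3)      # two 3D positions
xi, xj = rng.normal(size=6), rng.normal(size=6)      # their features
lhs = (rot_matrix(pi, theta) @ xi) @ (rot_matrix(pj, theta) @ xj)
rhs = xi @ (rot_matrix(pj - pi, theta) @ xj)
print(np.allclose(lhs, rhs))                         # True: depends only on p_j - p_i
```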
## 4 Experiments
We test Act3D in learning from demonstrations single-task and multi-task manipulation policies in simulation and the real world. In the multi-task setting, task and goal conditioning are given as input through language instructions. We conduct our simulated experiments in RLBench [22], an established simulation benchmark for learning manipulation policies, for the sake of reproducibility and benchmarking. Our experiments aim to answer the following questions: **1.** How does Act3D compare against SOTA 2D multiview and 3D manipulation policies in single-task and multi-task settings? **2.** How does the test performance change with varying number of training demonstrations?
Figure 2: **Tasks.** We conduct experiments on 92 simulated tasks in RLBench [22] (only 10 shown), and 8 real-world tasks (only 5 shown). Please see the supplementary video for video results of our model in simulation and in the real world.
**3.** How does Act3D generalize across camera viewpoints in comparison to existing 2D multiview policies? **4.** How do design choices such as relative 3D attention, pre-trained 2D backbones, weighted attention layers, and the number of coarse-to-fine sampling stages impact performance?
### Evaluation in simulation
**Datasets:** We test Act3D in RLBench in two settings to ensure a clear comparison with prior work: a single-task setting with 74 tasks proposed by HiveFormer [2] and a multi-task multi-variation setting with 18 tasks and 249 variations proposed by PerAct [1]; more details are in Appendix 6.3.
**Baselines:** We compare Act3D with the following state-of-the-art manipulation policy learning methods: **1.** InstructRL [3], a 2D policy that directly predicts 6 DoF poses from image and language conditioning with a pre-trained vision-and-language backbone. **2.** PerAct [1], a 3D policy that voxelizes the workspace and detects the next best voxel action through global self-attention. **3.** HiveFormer [2] and Auto-\(\lambda\)[18], hybrid methods that detect a contact point within an image input, then regress an offset from this contact point. We report numbers from the papers when available.
**Evaluation metric:** We evaluate policies by task completion success rate, the proportion of execution trajectories that lead to goal conditions specified in language instructions.
**Single-task manipulation results:** We consider 74 tasks grouped into 9 categories, as proposed by HiveFormer [2]. Each method is trained with 100 demonstrations and evaluated on 500 unseen episodes. We show single-task quantitative results of our model and baselines in Figure 3. Act3D **reaches 83% success rate, an absolute improvement of 10% over InstructRL [3], prior SOTA in this setting**, and consistently outperforms it across all 9 categories of tasks. With only 10 demonstrations per task, Act3D is competitive with prior SOTA using 100 demonstrations per task.
**Multi-task manipulation results:** We consider 18 tasks with 249 variations, as proposed by PerAct [1]. Each task includes 2-60 variations, which test generalization to test goal configurations that involve novel object colors, shapes, sizes, and categories. This is a more challenging setup than
Figure 4: **Multi-task performance.** On 18 RLBench tasks with 249 variations, Act3D reaches 65% success rate, an absolute improvement of 22% over PerAct [1], prior SOTA in this setting.
Figure 3: **Single-task performance.** On 74 RLBench tasks across 9 categories, Act3D reaches 83% success rate, an absolute improvement of 10% over InstructRL [3], prior SOTA in this setting.
before, since the previous setting only tested generalization to novel arrangements of the same objects. Each method is trained with 100 demonstrations per task split across variations, and evaluated on 500 unseen episodes per task. We show multi-task quantitative results of our model and PerAct in Figure 4. Act3D reaches 65% success rate, an absolute improvement of 22% over PerAct, prior SOTA in this setting, consistently outperforming it across most tasks. **With only 10 demonstrations per task, Act3D outperforms PerAct using 100 demonstrations per task.** Note that Act3D also uses less than a third of PerAct's training computation budget: PerAct was trained for 16 days on 8 Nvidia V100 GPUs while we train for 5 days on the same hardware.
### Evaluation in real-world
In our real-world setup, we conduct experiments with a Franka Emika Panda robot and a single Azure Kinect RGB-D sensor; more details are in Appendix 6.1. We designed 8 tasks (Figure 2) involving interactions with multiple types of objects, spanning liquid, articulated objects, and deformable objects. For each task, we collected 10 to 15 human demonstrations and trained a language-conditioned multi-task model on all data. We report the success rate on 10 episodes per task in Table 1. Act3D can capture semantic knowledge in demonstration well and performs reasonably well on all tasks, even with a single camera input. One major failure case comes from noisy depth sensing: when the depth image is not accurate, the selected point results in imprecise action prediction. Leveraging multi-view input for error correction could improve this, and we leave this for future work. For more qualitative videos of the robot executing the tasks, see our project site.
### Ablations
We ablate the impact of our design choices in Table 2. We perform most ablations in the single-task setting on 5 tasks: pick cup, put knife on chopping board, put money in safe, slide block to target, take umbrella out of stand. We ablate the choice of pre-trained 2D backbone in the multi-task setting with all 18 tasks.
**Generalization across camera viewpoints:** We vary camera viewpoints at test time for both Act3D and HiveFormer [2]. The success rate drops to 20.4% for HiveFormer, a relative 77% drop, while Act3D achieves 74.2% success rate, a 24% relative drop. This shows detecting actions in 3D makes Act3D more robust to camera viewpoint changes than multiview 2D methods that regress offsets.
**Weight-tying and coarse-to-fine sampling:** All 3 stages of coarse-to-fine sampling are necessary: a model with only 2 stages of sampling and regressing an offset from the position selected at the second stage suffers a 4.5% performance drop. Tying weights across stages and relative 3D positional embeddings are both crucial; we observed severe overfitting without them, reflected in 17.5% and 42.7% performance drops, respectively. Fine ghost point sampling stages should attend to local fine visual features with precise positions: all stages attending to global coarse features leads to an 8.3% performance drop. Act3D can effectively trade off inference computation for performance: sampling 10,000 ghost points, instead of the 1,000 the model was trained with, boosts performance by 4.9%.
**Pre-training 2D features:** We investigate the effect of the pre-trained 2D backbone in the multi-task setting where language instructions are most needed. A ResNet50 [36] backbone pre-trained with CLIP improves success rate by 8.7% over a ResNet50 backbone pre-trained on ImageNet, and by 16.9% over using raw RGB as the visual token features.
**Augmentations:** Random crops of RGB-D images boost success rate by 6.5%, but yaw rotation perturbations drop it by 11.9%. This is in line with PerAct [1] results in RLBench.
**Hyperparameter sensitivity:** Act3D is robust to hyperparameters. Doubling the diameter of ghost point sampling balls from (16 cm, 4 cm) to (32 cm, 8 cm) drops success rate by 1.5%, and halving it to (8 cm, 2 cm) by 6.9%. Halving the total number of ghost points sampled from 1,000 to 500 drops success rate by 2.3%, whereas doubling it to 2,000 increases success rate by 0.3%. We use 1,000 ghost points in our experiments to allow training with a single GPU per task.

Table 1: Real-world tasks.

| Task | # Train | Success |
| --- | --- | --- |
| reach target | 10 | 10/10 |
| duck in oven | 15 | 6/10 |
| wipe coffee | 15 | 7/10 |
| fruits in bowl | 10 | 8/10 |
| stack cups | 15 | 6/10 |
| transfer beans | 15 | 5/10 |
| press handsan | 10 | 10/10 |
| unscrew cap | 10 | 8/10 |
### Limitations and future work
Our framework currently has the following limitations: **1.** Act3D sometimes fails in very high-precision tasks, like screwing and insertions, requiring temporally fine-grained closed-loop control. **2.** Act3D doesn't handle manipulation of articulated objects well, such as opening/closing doors, fridges, and ovens, which require a more precise trajectory than the one supplied by a motion planner that connects keyposes with collision-free straight lines. Learning-based trajectory prediction [45; 46] would help. **3.** Currently, for long-horizon tasks our policy would need to predict all keyposes one by one. A hierarchical framework that would predict language subgoals for subtasks [47; 48; 49] and feed those to our action predictor would allow better re-usability of skills across tasks. All keypose prediction methods share the listed limitations. We leave these for future work.
## 5 Conclusion
We presented Act3D, a language-conditioned Transformer architecture that learns manipulation policies from demonstrations. From one or more posed RGB-D images and language instructions, it predicts 6-DoF robot end-effector keyposes by iteratively selecting and featurizing 3D point grids in the robot's workspace. Act3D sets a new state-of-the-art in RLBench, an established robot manipulation benchmark, and solves diverse manipulation tasks in the real world from a single RGB-D camera view and a handful of demonstrations. In thorough ablations, we showed the importance of relative 3D attentions, 2D feature pre-training, and weight tying during coarse-to-fine iterations. Extending Act3D to handle more precise tasks, and tasks involving trajectory-level action prediction remain as our future work.
\begin{table}
\begin{tabular}{l l c} \hline \hline
 & & Average success rate in \\
\multicolumn{2}{c}{Model} & single-task setting (5 tasks) \\ \hline
\multirow{4}{*}{Core design choices} & Best Act3D model (evaluated in Fig. 3) & **98.1** \\
 & Only 2 stages of coarse-to-fine sampling: & 93.6 \\
 & full workspace, 16 cm ball, regress an offset & 80.6 \\
 & No weight tying across stages & 55.4 \\
 & Absolute 3D positional embeddings & 89.8 \\
 & Attention to only global coarse visual features & 93.2 \\ \hline
\multirow{2}{*}{Viewpoint changes} & Best Act3D model (evaluated in Fig. 3) & **74.2** \\
 & HiveFormer & 20.4 \\ \hline
\multirow{2}{*}{Augmentations} & No image augmentations & **91.6** \\
 & With rotation augmentations & 86.2 \\ \hline
\multirow{4}{*}{Hyperparameter sensitivity} & Double sampling ball diameters: 32 cm and 8 cm & 96.6 \\
 & Halve sampling ball diameters: 8 cm and 2 cm & 91.2 \\
 & 500 ghost points at training time & 95.8 \\
 & 2000 ghost points at training time (need 2 GPUs) & **98.4** \\ \hline \hline
\multicolumn{2}{c}{Multi-task setting (18 tasks)} \\ \hline
\multirow{3}{*}{Backbone} & CLIP ResNet50 backbone & **65.1** \\
 & ImageNet ResNet50 backbone & 53.4 \\ \cline{1-1}
 & No backbone (raw RGB) & 45.2 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: **Ablations.** | 3D perceptual representations are well suited to robot manipulation, as they easily encode occlusions and simplify spatial reasoning. Many manipulation tasks require high spatial precision in end-effector pose prediction, which typically demands high-resolution 3D feature grids that are computationally expensive to process. As a result, many manipulation policies operate in 2D, forgoing 3D inductive biases. In this paper, we introduce Act3D, a manipulation policy Transformer that represents the robot's workspace using a 3D feature field. The model lifts 2D pre-trained features to 3D using sensed depth and computes features of 3D points with respect to them. It selects and featurizes 3D point grids in a coarse-to-fine manner, using relative-position attention to choose where to sample next. In this way, high |
2307.16820 | Shear localisation controls the dynamics of earthquakes | Earthquakes are produced by the propagation of rapid slip along tectonic
faults. The propagation dynamics is governed by a balance between elastic
stored energy in the surrounding rock, and dissipated energy at the propagating
tip of the slipping patch. Energy dissipation is dictated by the mechanical
behaviour of the fault, which is itself the result of feedbacks between
thermo-hydro-mechanical processes acting at the mm to sub-mm scale. Here, we
numerically simulate shear ruptures using a dual scale approach, allowing us to
couple a sub-mm description of inner fault processes and km-scale
elastodynamics, and show that the sudden localisation of shear strain within a
shear zone leads to the emergence of classical cracks driven by a constant
fracture energy. The fracture energy associated to strain localisation is
substantially smaller than that predicted assuming uniform shearing. We show
the existence of a unique scaling law between the localised shearing width and
the rupture speed. Our results indicate that earthquakes are likely to be
systematically associated to extreme strain localisation. | Fabian Barras, Nicolas Brantut | 2023-07-31T16:34:26 | http://arxiv.org/abs/2307.16820v1 | # Shear localisation controls the dynamics of earthquakes
###### Abstract
Earthquakes are produced by the propagation of rapid slip along tectonic faults. The propagation dynamics is governed by a balance between elastic stored energy in the surrounding rock, and dissipated energy at the propagating tip of the slipping patch. Energy dissipation is dictated by the mechanical behaviour of the fault, which is itself the result of feedbacks between thermo-hydro-mechanical processes acting at the mm to sub-mm scale. Here, we numerically simulate shear ruptures using a dual scale approach, allowing us to couple a sub-mm description of inner fault processes and km-scale elastodynamics, and show that the sudden localisation of shear strain within a shear zone leads to the emergence of classical cracks driven by a constant fracture energy. The fracture energy associated to strain localisation is substantially smaller than that predicted assuming uniform shearing. We show the existence of a unique scaling law between the localised shearing width and the rupture speed. Our results indicate that earthquakes are likely to be systematically associated to extreme strain localisation.
## 1 Introduction
Earthquake sources correspond to slip events dynamically propagating along faults. At crustal scale, faults can be viewed as two-dimensional surfaces, across which the displacement field is discontinuous. However, geological and experimental observations show that "slip" across faults is the result of shear deformation across narrow layers of highly comminuted, transformed or partially melted rocks. In the shallow continental crust, fault core materials are often made of fine-grained silicastic and clay gouges, with a porosity filled with pressurised water (e.g. _Scholz_, 1988; _Rice_, 2006). The dynamics of ruptures in crustal faults is controlled by the rheology of these water-saturated fault gouges.
During earthquakes, faults slide at elevated slip rates of the order of metres per second, which leads to dramatic weakening of fault gouge materials (_Scholz_, 2019, chap. 2). In dry materials, weakening is most likely controlled by the local temperature rise arising from dissipation of frictional work, combined with thermally activated rheology of the rock-forming minerals (e.g _Rice_, 2006; _Beeler et al._, 2008; _Proctor et al._, 2014; _De Paola et al._, 2015; _Yao et al._, 2015; _Pozzi et al._, 2021; _Harbord et al._, 2021). In the presence of fluids, an additional weakening mechanism is expected, due to the differential thermal expansion of the pore fluids and the solid pore space: upon heating, the fluid pressure rises, effective normal stress decreases and the frictional strength
drops. This so-called "thermal pressurisation" mechanism, initially proposed by _Sibson_ (1975) as a temperature-limiting process in wet rocks, has been shown to produce realistic predictions for thermal evolution and energy dissipation during earthquakes (e.g. _Rice_, 2006; _Viesca and Garagash_, 2015), and is a potential candidate to explain some of the complexity observed in natural earthquakes (e.g. _Noda and Lapusta_, 2013) and the operation of plate boundary faults at low ambient stress (e.g. _Noda et al._, 2009; _Lambert et al._, 2021).
The thickness of the actively deforming zone determines the shear heating rate and how easily fluids and heat diffuse away from the fault plane, and thus has a tremendous influence on the resulting rupture dynamics (_Andrews_, 2002; _Noda et al._, 2009; _Noda and Lapusta_, 2010; _Viesca and Garagash_, 2015; _Lambert et al._, 2021). While geological and experimental observations can be used to constrain the thickness of actively deforming fault gouge material, the range of acceptable values spans more than 3 orders of magnitude, from fractions of millimetres to centimetres (_Rice_, 2006), and it is one of the key unknowns that limit our ability to determine the efficiency of thermal weakening mechanisms in nature.
The influence of shear zone width on earthquake propagation is further complicated by the fact that this parameter is likely evolving during seismic slip: strain localisation is expected to be part of the fault weakening process. Several mechanisms might be responsible for strain localisation during earthquake slip, including granular rearrangements and grain size reduction (e.g., _Mair and Abe_, 2008; _Hermundstad et al._, 2010), shear heating coupled to thermal weakening (e.g., _Braeck and Podladchikov_, 2007), thermal pressurisation (e.g., _Sulem et al._, 2011; _Rice et al._, 2014; _Platt et al._, 2014) and thermal decomposition (e.g., _Veeakis et al._, 2012; _Platt et al._, 2015). In all cases, the strain localisation process is associated to a rapid reduction in shear strength, and we therefore expect strain localisation to exert a strong control on the overall dynamics of rupture.
Here, we demonstrate and quantify how strain localisation impacts rupture dynamics: we run dynamic rupture simulations and compute the fracture energy associated with the localisation process, and find a relationship between the rupture speed and the degree of strain localisation within the fault gouge. We use the case of thermal pressurisation as a representative thermal weakening process that is compatible with seismological observations (_Rice_, 2006; _Viesca and Garagash_, 2015; _Lambert et al._, 2021), and is known to spontaneously lead to strain localisation (_Rice et al._, 2014; _Platt et al._, 2014). We argue that the interplay between rupture dynamics and strain localisation analysed here applies to most thermal weakening processes in rocks and engineering materials.
## 2 Shear localisation and faulting
As a general starting point for our analysis, let us consider slip on a geological fault as the deformation distributed across a narrow pre-existing shear zone (e.g., a fault gouge made of unconsolidated grains), and examine the conditions under which shear strain can be further localised within this pre-existing zone. Let us assume that the shear stress \(\tau\) across the shear zone is a function of a set of variables that includes the shear strain rate \(\dot{\gamma}\) and another diffusive quantity \(\vartheta\):
\[\tau\sim f(\dot{\gamma},\vartheta), \tag{1}\]
and that the rate of work produced by shearing acts as a source term in the diffusion of \(\vartheta\):
\[\dot{\vartheta}=\beta\tau\dot{\gamma}+\alpha\nabla^{2}\vartheta, \tag{2}\]
where \(\nabla^{2}\) denotes the Laplace operator, \(\alpha\) is a diffusivity and \(\beta\) is analogous to the Taylor-Quinney coefficient (_Taylor and Quinney_, 1934) for the cases when \(\vartheta\) corresponds to temperature. In relation to the first condition, one can define
\[g^{\prime}(\dot{\gamma},\vartheta)=\frac{\partial f}{\partial\dot{\gamma}}, \tag{3}\]
which describes the rate-dependent rheology of the material. Natural examples include viscous creep of rocks at elevated temperature, granular material rheology (e.g. _Jop et al._, 2006) and rate-and-state friction. Similarly, one can define
\[h^{\prime}(\dot{\gamma},\vartheta)=\frac{\partial f}{\partial\vartheta}, \tag{4}\]
to describe the effect of \(\vartheta\) on the material rheology. In practice, this diffusive quantity \(\vartheta\) will often correspond to temperature and \(h^{\prime}\) describe thermal weakening effects. \(\vartheta\) could also correspond to fluid pressure in a porous material whose strength is reduced by an increase in pore fluid pressure (following the concept of effective stress discussed later in Equation 7). It can also account for the combined effect of pressure and temperature as in the case of thermal pressurisation that will be discussed later in this manuscript. If conditions (1) and (2) are met, a linear stability analysis (detailed in Appendix A) demonstrates that uniform shearing at a given time \(t=t_{0}\) becomes unstable if:
\[\frac{h^{\prime}_{0}}{g^{\prime}_{0}}<0, \tag{5}\]
Figure 1: Schematic of the dual scale setup governing the propagation of localised shear bands. The dynamic rupture extends over large (kilometric) scales along the fault (\(x-z\) plane), whereas the frictional strength is determined by solving a coupled diffusion problem, such as thermal pressurisation in this paper, where strain rate spontaneously evolves over submillimetre scales across the fault gouge (in the \(y\) direction).
with \(\{f_{0},g^{\prime}_{0},h^{\prime}_{0}\}=\{f,g^{\prime},h^{\prime}\}|_{t=t_{0}}\). Moreover, the analysis also shows that only perturbation wavelengths \(\lambda\) greater than a critical wavelength are unstable:
\[\lambda>2\pi\sqrt{-\frac{\alpha}{\beta f_{0}}\frac{g^{\prime}_{0}}{h^{\prime}_{ 0}}}\equiv\lambda_{\mathrm{c}}, \tag{6}\]
which indicates that such instability leads to the localisation of shear strain down to some thickness \(W_{\mathrm{loc}}\sim\lambda_{\mathrm{c}}/2\). Remarkably, this type of localisation instability can also arise within rate-strengthening materials (\(g^{\prime}_{0}>0\)) providing that \(h^{\prime}_{0}<0\), as it is often the case with thermal weakening mechanisms. As a result, shear flow concentrates over a thickness much smaller than the initial width of the shear zone and leads to a substantive drop of the associated shear stress. Examples of this strain localisation instability have been described by _Rice et al._ (2014); _Platt et al._ (2014) in the context of thermal pressurisation in crustal faults, as well as _Bai_ (1982) in the context of adiabatic shear banding in metals. In this work, we quantitatively investigate how strain localisation across the shear zone drives the rapid acceleration of slip and the propagation of rupture front along the fault plane during earthquakes.
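As a minimal numerical illustration of conditions (5)-(6), the snippet below evaluates the critical wavelength and the implied localised thickness for a given set of coefficients. The parameter values are placeholders chosen only to yield a sub-millimetre width; they are not calibrated to any specific fault gouge.

```python
import numpy as np

def critical_wavelength(alpha, beta, f0, g0_prime, h0_prime):
    """Critical wavelength of Eq. (6); uniform shear is unstable to longer
    wavelengths only when h0'/g0' < 0 (Eq. 5)."""
    if h0_prime / g0_prime >= 0:
        return np.inf  # uniform shearing is linearly stable
    return 2.0 * np.pi * np.sqrt(-(alpha / (beta * f0)) * (g0_prime / h0_prime))

# placeholder coefficients (orders of magnitude only, not calibrated)
lam_c = critical_wavelength(alpha=1e-6, beta=1.0, f0=1e7,
                            g0_prime=1e4, h0_prime=-0.5)
print(f"lambda_c = {lam_c:.2e} m,  W_loc ~ lambda_c / 2 = {lam_c / 2:.2e} m")
```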
## 3 Model
Here, we analyse the process of thermal pressurisation, which has been shown to be a realistic dynamic weakening mechanism (e.g. _Rice_, 2006; _Viesca and Garagash_, 2015; _Lambert et al._, 2021) and that undergoes the localisation instability outlined above (_Rice et al._, 2014; _Platt et al._, 2014).
In this case, the diffusive variable \(\vartheta\) corresponds to pore fluid pressure \(p\) that affects the effective normal stress (\(\sigma_{\mathrm{n}}-p\)) in the shear zone and, thereby, its shear strength together with a rate-dependent friction coefficient:
\[\tau(\dot{\gamma},p)=f_{\mathrm{rsf}}(\dot{\gamma})(\sigma_{\mathrm{n}}-p). \tag{7}\]
In Equation (7), we adopt the rate-strengthening rheology \(f_{\mathrm{rsf}}(\dot{\gamma})\) of _Platt et al._ (2014) detailed in Appendix B.2. Moreover, fluid diffusion across the shear zone is governed by a coupled system of thermal and hydraulic diffusion equations (see Equations 31 in Appendix). This thermo-hydraulic coupling is caused by the different compressibilities and thermal expansivities of the solid matrix and the pore fluid, and describes the change in pore pressure produced by local temperature variations in the gouge.
Exploring the interplay between strain localisation and rupture dynamics is a challenging dual-scale problem: it requires solving for heat and fluid diffusion at the scale of the fault core (from millimeters to centimeters in natural fault zones) together with the elastodynamics governing the propagation of the earthquake rupture along the fault (elastic waves moving at kilometers per second in crustal rocks). We follow _Noda et al._ (2009) and take advantage of the separation of scale to solve thermal pressurisation only across the fault (along the \(y\) axis in Figure 1). In this one-dimensional configuration, _Platt et al._ (2014) found numerically that shear localisation, under imposed constant slip rate \(V\) across the gouge, stabilises to a finite
width that is well approximated by
\[W_{\rm rsf}(V)\simeq 6.9\frac{A\rho c}{\Lambda f_{\rm c}}\frac{(\sqrt{\alpha_{\rm hy }}+\sqrt{\alpha_{\rm th}})^{2}}{V(f_{\rm c}+2A)}, \tag{8}\]
where \(\alpha_{\rm hy,th}\) correspond respectively to the hydraulic and thermal diffusivities, \(\rho c\) is the heat capacity of the gouge material, and \(\Lambda\) is the thermal pressurisation parameter that describes the change of pore fluid pressure caused by an increase in temperature in the gouge. The characteristic shear strength
\[\tau_{\rm c}=f_{\rm c}(\sigma_{\rm n}-p_{0})=f_{\rm rsf}(V/h)(\sigma_{\rm n}-p_ {0}) \tag{9}\]
is a function of the initial uniform strain rate \(\dot{\gamma}=V/h\) and background pore pressure \(p_{0}\). In Equation (8), the constant \(A\) corresponds to the rate-strengthening coefficient that describes the "direct" response of the shear zone to a change in strain rate, similar to standard rate-and-state models (see Equation (32) in Appendix). Once the localisation width \(W_{\rm rsf}\) is much smaller than the diffusion length scale, thermal pressurisation can be well approximated by a _slip-on-a-plane_ solution assuming \(\dot{\gamma}(y)=V\bar{\delta}(y)\), with \(\bar{\delta}\) being the Dirac delta distribution. In this situation, the shear stress across the shear zone is only a function of the imposed slip rate \(V\) and the accumulated slip \(\delta=Vt\) (_Rice_, 2006; _Mase and Smith_, 1987):
\[\tau_{\rm sp}(\delta;V)=\tau_{\rm c}\exp\Big{(}\frac{\delta}{L^{*}}\Big{)}{ \rm erfc}\Big{(}\sqrt{\frac{\delta}{L^{*}}}\Big{)}, \tag{10}\]
where
\[L^{*}(V)=\frac{4}{f_{\rm c}^{2}}\Big{(}\frac{\rho c}{\Lambda}\Big{)}^{2}\frac {(\sqrt{\alpha_{\rm hy}}+\sqrt{\alpha_{\rm th}})^{2}}{V}. \tag{11}\]
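For reference, the small helper below evaluates Equations (8), (10) and (11). The parameter values in the example call are illustrative placeholders rather than the values used in the simulations, and exp(x) erfc(sqrt(x)) is evaluated through the scaled complementary error function to avoid overflow at large slip.

```python
import numpy as np
from scipy.special import erfcx   # erfcx(y) = exp(y**2) * erfc(y)

def L_star(V, f_c, rho_c, Lam, a_hy, a_th):
    """Characteristic weakening slip of Eq. (11)."""
    return 4.0 / f_c**2 * (rho_c / Lam)**2 * (np.sqrt(a_hy) + np.sqrt(a_th))**2 / V

def tau_slip_on_plane(delta, V, tau_c, f_c, rho_c, Lam, a_hy, a_th):
    """Slip-on-a-plane strength evolution of Eq. (10)."""
    x = delta / L_star(V, f_c, rho_c, Lam, a_hy, a_th)
    return tau_c * erfcx(np.sqrt(x))   # = tau_c * exp(x) * erfc(sqrt(x))

def W_rsf(V, A, f_c, rho_c, Lam, a_hy, a_th):
    """Quasi-stationary localised width of Eq. (8)."""
    return 6.9 * A * rho_c / (Lam * f_c) * (np.sqrt(a_hy) + np.sqrt(a_th))**2 / (V * (f_c + 2.0 * A))

# illustrative parameter values (orders of magnitude only)
pars = dict(f_c=0.6, rho_c=2.7e6, Lam=1e6, a_hy=1e-6, a_th=1e-6)
V = 1.0                                            # m/s, representative seismic slip rate
delta = np.array([0.0, 1e-4, 1e-3, 1e-2, 1e-1])    # slip (m)
print(tau_slip_on_plane(delta, V, tau_c=60e6, **pars) / 1e6)   # MPa, decays with slip
print(W_rsf(V, A=0.03, **pars))                                # m, localised width
```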
During earthquakes, slip rate across the shear zone is far from being constant and evolves rapidly near the tip of the propagating rupture. Next, we aim to analyse the coupling between strain localisation, slip acceleration and rupture dynamics in a simple faulting geometry that is sufficient to capture its key physical aspects. We consider a planar fault within an infinite linear elastic medium sliding in anti-plane shear (mode-III). In this configuration shown in Figure 1, the shear traction at the fault \(\tau(x,t)\) can be related to the slip rate across the fault \(V(x,t)\) and, thereby, the strain rate in the shear zone, following
\[\tau(x,t)=\tau_{\rm b}-\frac{\mu}{2c_{\rm s}}V(x,t)+\phi(x,t)=\tau_{\rm b}- \frac{\mu}{2c_{\rm s}}\int_{-h/2}^{h/2}\dot{\gamma}(x,y,t)\,{\rm d}y+\phi(x,t). \tag{12}\]
In the equation above, \(\mu\) is the shear modulus, \(c_{\rm s}\) the shear wave speed of the linear elastic medium surrounding the shear zone, \(\phi\) is the non-local dynamic contribution accounting for the history of slip along the interface, and \(\tau_{\rm b}\) represents the far-field background stress. Equation (12) allows us to define the characteristic seismic slip rate \(V_{\rm c}\) and associated uniform strain rate \(\dot{\gamma}_{\rm c}\) as
\[\dot{\gamma}_{\rm c}=\frac{V_{\rm c}}{h}=\frac{2c_{\rm s}\tau_{\rm b}}{h\mu}, \tag{13}\]
which are used in the remainder of the paper together with the related characteristic shear strength \(\tau_{\rm c}\) (9). The elastodynamic equation (12) couples the strain rate in the shear zone \(\dot{\gamma}\)
to the shear stress \(\tau\) and allows us to implement a dual-scale coupled numerical scheme that solves the rupture elastodynamics along the shear zone together with pressure and temperature diffusion across the shear zone. The details of our coupled numerical scheme are given in Appendix D.
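The sketch below caricatures this coupling for a single point on the fault, ignoring the non-local term \(\phi\) in Equation (12) and assuming adiabatic, undrained conditions with a uniform strain rate across the gouge (so no localisation can develop). It only illustrates how the strength law (7), the elastodynamic balance (12) and thermal pressurisation interact: the friction law is an assumed logarithmic rate-strengthening form, the parameter values are placeholders, and the paper's full two-scale scheme (Appendix D) is far more involved.

```python
import numpy as np

# single-point caricature: Eq. (12) with phi = 0, adiabatic/undrained gouge
mu, c_s = 30e9, 3000.0               # elastic surroundings (placeholders)
tau_b = 60e6                         # background stress
sigma_n, p = 126e6, 70e6             # normal stress and initial pore pressure
f0, A, gamma_ref = 0.55, 0.03, 1.0   # assumed logarithmic rate-strengthening friction
h = 1e-3                             # gouge thickness (m)
Lam, rho_c = 1e6, 2.7e6              # thermal pressurisation parameter, heat capacity
T, dt = 20.0, 1e-5

def balance(V, p):
    """tau_b - (mu / 2 c_s) V - f_rsf(V/h) (sigma_n - p); its root gives the slip rate."""
    f = f0 + A * np.log(max(V / h, 1e-12) / gamma_ref)
    return tau_b - mu / (2 * c_s) * V - f * (sigma_n - p)

for step in range(5000):
    lo, hi = 1e-12, 100.0                     # bracket the slip rate and bisect
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if balance(mid, p) > 0 else (lo, mid)
    V = 0.5 * (lo + hi)
    tau = tau_b - mu / (2 * c_s) * V          # Eq. (12) with phi = 0
    T += dt * tau * (V / h) / rho_c           # shear heating
    p = min(p + dt * Lam * tau * (V / h) / rho_c, sigma_n)   # thermal pressurisation

print(f"V = {V:.2f} m/s, tau = {tau / 1e6:.1f} MPa, p = {p / 1e6:.1f} MPa, T = {T:.0f} C")
```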
## 4 Results
In our simulations, the shear zone is initially creeping at aseismic slip velocity and, at time \(t=0\), failure is nucleated by raising the pore pressure near the center of the fault \(x=0\) (further details of the nucleation procedure and parameter values are given in Appendix D). Initially, acceleration of slip is mostly concentrated in the nucleation region, followed by a rapid lateral rupture propagation whereby the slip rate increases in an expanding region beyond the initial nucleation patch, concomitantly with a shear stress drop linked with thermal pressurisation of pore fluids inside the gouge and intense strain localisation (Figure 2). Rupture acceleration coincides with larger slip velocities and stress drop at the tip (Figure 3a-c) and more intense
Figure 2: Dynamic rupture driven by shear localisation simulated with the coupled model. The top panels a) and b) respectively present snapshots at different times of the longitudinal profile of slip rate and shear stress during which the rupture accelerates from sixty to about ninety percent of the shear wave velocity. Note that the simulated domain is symmetric with respect to the nucleation position \(x=0\) such that another rupture tip moves toward the negative positions. The bottom panels c) and d) present the profile of strain rate \(\dot{\gamma}\), pressure \(p\) and temperature \(T\) at the successive positions of the rupture tip highlighted by black dots in panel a) and b). See Appendix C and Table 1 for further details on the dimensional analysis behind this coupled problem and the dimensionless scaling used to plot the data in the different panels.
localisation of shear deformation across the gouge where up to four orders of magnitude larger strain rate concentrates on less than five percent of the thickness of the shear zone (Figure 3b). Interestingly, the peak slip rate and drop of shear stress measured at different positions along the fault arise for the same characteristic slip value \(\delta_{\mathrm{loc}}\) and coincides with intense strain localisation. The observed value of \(\delta_{\mathrm{loc}}\) is identical to the one reported from one-dimensional simulation under imposed velocity and is in the order of magnitude of \(\delta_{\mathrm{c}}\) (see Figures 9 and 10 of _Platt et al._ (2014)). Remarkably, this observation enables us to apply the one-dimensional theory discussed in the previous section to derive predictions of the shear zone dynamics after strain localisation. For instance, the slip-on-a-plane solution described in Equation (10) can be used to capture the magnitude of the residual shear stress reached immediately after strain localisation \(\tau_{\mathrm{res}}\approx\tau_{\mathrm{sp}}(\delta=\delta_{\mathrm{c}};V=V_{ \mathrm{tip}})\), with \(V_{\mathrm{tip}}\) being the slip rate observed at the rupture tip (see Figure 3c and related caption). Moreover, once the localisation instability arises, the thickness of actively strained material at various positions along the interface collapses on a single \(W_{\mathrm{loc}}(V)\) curve, which follows the prediction given in Equation (8).
### Rupture dynamics driven by shear localisation
Next, we quantitatively demonstrate how strain localisation is the driving mechanism of the propagating rupture. To do so, we analyze snapshots of the propagating rupture and the near-tip evolution of the macroscopic and microscopic mechanical variables (Figure 4). Ahead of the propagating tip (point A), the shear zone is creeping with uniform shear strain rate. As the rupture approaches, the strain rate builds up uniformly across the gouge (point B) until the localisation instability arises (point C) together with a rapid increase in macroscopic slip rate \(V\) and abrupt drop of shear stress \(\tau\). In the wake of the rupture (point D), the profile of strain rate across the gouge progressively delocalises, following the decay of the macroscopic slip rate given by the prediction \(W_{\mathrm{rsf}}(V)\) shown in Figure 3b. The near-tip evolution of \(V\) and \(\tau\) is reminiscent to the singular solutions at the tip of a dynamic fracture (_Freund_, 1990). Defining \(\alpha_{\mathrm{s}}^{2}=1-v_{\mathrm{r}}^{2}/c_{\mathrm{s}}^{2}\), the analogy to linear elastic fracture mechanics (LEFM) can be quantitatively tested by rescaling the slip rate and stress according to
\[V\frac{\mu\alpha_{\mathrm{s}}}{2v_{\mathrm{r}}}=\tau-\tau_{\mathrm{res}}= \Delta\tau=\frac{K}{\sqrt{2\pi(x-x_{\mathrm{tip}})}} \tag{14}\]
and fitting the dynamic fracture solution following the procedure of _Barras et al._ (2020). The stress intensity factor \(K\), residual stress \(\tau_{\mathrm{res}}\) and position of the tip \(x_{\mathrm{tip}}\) are the free parameters that are fitted simultaneously to match the near-tip decrease of \(V\) behind the rupture tip and increase of \(\tau\) ahead of the rupture tip.
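The fitting step can be reproduced in a few lines. The sketch below fits only the stress branch of Equation (14) ahead of the tip to a synthetic profile (whereas the procedure above fits slip rate and stress simultaneously), then converts the fitted intensity factor into a rupture energy using the crack-tip energy balance given in Equation (15) below; all numbers are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

mu, c_s = 30e9, 3000.0
v_r = 0.8 * c_s
alpha_s = np.sqrt(1.0 - (v_r / c_s) ** 2)

def lefm_stress(x, K, tau_res, x_tip):
    """Square-root singular stress ahead of the tip, Eq. (14)."""
    return tau_res + K / np.sqrt(2.0 * np.pi * np.clip(x - x_tip, 1e-9, None))

# synthetic "simulation output" ahead of a tip located at x = 0
x = np.linspace(0.05, 2.0, 80)
rng = np.random.default_rng(1)
tau_obs = lefm_stress(x, 3e6, 20e6, 0.0) + 1e5 * rng.normal(size=x.size)

(K_fit, tau_res_fit, x_tip_fit), _ = curve_fit(lefm_stress, x, tau_obs, p0=(1e6, 10e6, -0.01))
G_c = K_fit**2 / (2.0 * mu * alpha_s)          # crack-tip energy balance, Eq. (15)
print(f"K = {K_fit:.2e} Pa m^0.5, tau_res = {tau_res_fit/1e6:.1f} MPa, G_c = {G_c:.1f} J/m^2")
```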
The good agreement with dynamic fracture solution (dashed blue curves in Figure 4) confirms the crack-like nature of the simulated rupture process near the tip of the slipping patch. Such agreement allows us to use the inverted value of \(K\) and invoke the crack-tip energy balance to compute the rupture energy
\[G_{\mathrm{c}}=\frac{K^{2}}{2\mu\alpha_{\mathrm{s}}}, \tag{15}\]
which corresponds to the part of dissipated energy that governs the propagation of the rupture. In seismology, extracting the fracture energy of natural earthquakes still eludes the resolution of
Figure 3: Time evolution of the elastic variables at different locations along the interface during the dynamic rupture shown in Figure 2. Line colors relate to the positions along the interface \(x/L_{\rm c}\) and the associated propagation speeds of the rupture \(v_{\rm r}/c_{\rm s}\), whereas the arrows point to the direction of forward time evolution. Slip rate (a) and shear stress (c) versus slip revealing how the peak slip rate is associated to abrupt stress drop and arise at the same amount of cumulated slip \(\delta_{\rm loc}\approx 0.3\delta_{\rm c}\). (b) Slip rate versus width of strain rate localisation \(W_{\rm loc}\) measured from the \(\dot{\gamma}(y)\) profiles following the procedure shown in Figure 3 of _Platt et al._ (2014). The different post-peak delocalisation trajectories collapse along a single prediction given in Equation (8). The dashed lines in panel (c) correspond to the prediction \(\tau_{\rm sp}(\delta;V_{\rm tip})\) and gives a good prediction of the residual shear stress reached after strain localisation. The slip rate at the rupture tip \(V_{\rm tip}\) is approximated by \(V\) at the mid-time between the peaks in shear stress and in slip rate. (A more precise definition of the tip position is discussed and computed later in the context of Figure 4.)
Figure 4: Snapshot near the tip of the propagating rupture shown in Figure 2. Bottom panel presents the spatial evolution of the shear stress and slip rate, which are simultaneously fitted by the fracture mechanics prediction shown by the dashed blue curve. (See the main text for details on the fitting procedure). Top panels show the strain rate profile across the shear zone observed at the instants A, B, C and D corresponding to the black dots in the bottom panel.
seismic inversions, such that the _breakdown work_ is often used as a proxy for \(G_{\mathrm{c}}\) and integrates the excess of work on top of residual friction (_Tinti et al._, 2005; _Cocco et al._, 2023). For systems where frictional weakening does not necessarily reach a well-defined residual value, the breakdown work is defined as (_Abercrombie and Rice_, 2005):
\[E_{\mathrm{BD}}(\delta)=\int_{0}^{\delta}\Big{(}\tau(\delta^{\prime})-\tau( \delta)\Big{)}\,\mathrm{d}\delta^{\prime}. \tag{16}\]
In our numerical simulations, the integration of \(E_{\mathrm{BD}}\) at different locations along the interface reveals a clear plateau over an order of magnitude in slip (Figure 5), which indicates the portion of \(E_{\mathrm{BD}}\) that corresponds to \(G_{\mathrm{c}}\) following _Brener and Bouchbinder_ (2021). Remarkably, we can then quantitatively verify that the two independent estimates of the rupture energy (from the near-tip singularity and the integration of the breakdown work) are in excellent agreement (gray horizontal line in Figure 5) as another proof of the crack-like nature of the rupture dynamics. Furthermore, the observed plateau in \(E_{\mathrm{BD}}\) is clearly associated to the rapid stress drop caused by localisation instability (see \(\tau(\delta)\) profile in Figure 3c) and confirms that rapid strain localisation is the driving mechanism of the propagating rupture. In addition, the magnitude of \(G_{\mathrm{c}}\) associated to strain localisation is more than five times smaller than that expected from uniform shearing under adiabatic undrained conditions (\(\sim\tau_{\mathrm{c}}\delta_{\mathrm{c}}\)).
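For completeness, the breakdown-work integral of Equation (16) is straightforward to evaluate numerically from a slip-weakening curve. The toy curve below (exponential weakening toward a residual strength, with made-up numbers) reproduces the qualitative behaviour discussed above, with E_BD flattening onto a plateau once the strength has dropped to its residual value.

```python
import numpy as np

def breakdown_work(delta, tau):
    """E_BD(d) = int_0^d (tau(d') - tau(d)) dd' = int_0^d tau dd' - tau(d) * d  (Eq. 16)."""
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (tau[1:] + tau[:-1]) * np.diff(delta))))
    return cum - tau * delta

# toy slip-weakening curve: abrupt drop toward a residual strength (made-up values)
delta = np.linspace(0.0, 0.2, 400)            # slip (m)
tau = 30e6 + 90e6 * np.exp(-delta / 0.005)    # shear stress (Pa)
E_BD = breakdown_work(delta, tau)
print(E_BD[::100])   # J/m^2; climbs, then plateaus near 90e6 * 0.005 = 4.5e5 J/m^2
```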
The interplay between strain localisation and rupture dynamics can be further established by relating the thinnest localisation width observed at a given location along the interface to the local speed of the rupture (Figure 6). Following the behavior reported in Figure 3a, the dynamic stress drop caused by the localisation instability can be well estimated by the slip-on-a-plane solution (10). Together with the elastodynamic relation (14), it relates the slip rate near the rupture tip \(V_{\mathrm{tip}}\) to the rupture speed \(v_{\mathrm{r}}\), which can further be combined to the solution \(W_{\mathrm{rsf}}(V_{\mathrm{tip}})\) of Equation (8):
\[\begin{cases}V_{\mathrm{tip}}=\dfrac{2v_{\mathrm{r}}}{\mu\alpha_{\mathrm{s}}} \Big{(}\tau_{\mathrm{c}}-\tau_{\mathrm{sp}}(\delta_{\mathrm{c}};V_{\mathrm{tip }})\Big{)},\\ W_{\mathrm{loc}}=W_{\mathrm{rsf}}(V_{\mathrm{tip}}).\end{cases} \tag{17}\]
The implicit relation above provides a universal relationship between \(v_{\mathrm{r}}\) and the degree of localisation \(W_{\mathrm{loc}}\) observed at the scale of the gouge. Its good agreement with data from different simulations (Figure 6) unveils the dynamical feedback loop at play between strain localisation and rupture dynamics: the dynamic stress drop caused by strain localisation drives the acceleration of slip, which further amplifies the dynamic stress drop and promotes rupture acceleration.
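A simple way to evaluate this prediction is a fixed-point iteration on \(V_{\mathrm{tip}}\) for each rupture speed, reusing Equations (8), (10) and (11). The sketch below does this with placeholder parameter values, treating the characteristic slip \(\delta_{\mathrm{c}}\) as a given input.

```python
import numpy as np
from scipy.special import erfcx

# placeholder parameters; delta_c is treated as a given characteristic slip
mu, c_s = 30e9, 3000.0
f_c, A, rho_c, Lam = 0.6, 0.03, 2.7e6, 1e6
a_hy = a_th = 1e-6
tau_c = f_c * 100e6          # characteristic strength for a 100 MPa effective normal stress
delta_c = 1e-3

def tau_sp(delta, V):        # Eqs. (10)-(11)
    L_star = 4.0 / f_c**2 * (rho_c / Lam)**2 * (np.sqrt(a_hy) + np.sqrt(a_th))**2 / V
    return tau_c * erfcx(np.sqrt(delta / L_star))

def W_rsf(V):                # Eq. (8)
    return 6.9 * A * rho_c / (Lam * f_c) * (np.sqrt(a_hy) + np.sqrt(a_th))**2 / (V * (f_c + 2 * A))

def localised_width(v_r, n_iter=200):
    """Solve the implicit system of Eq. (17) by fixed-point iteration on the tip slip rate."""
    alpha_s = np.sqrt(1.0 - (v_r / c_s) ** 2)
    V_tip = 1.0
    for _ in range(n_iter):
        V_tip = 2.0 * v_r / (mu * alpha_s) * (tau_c - tau_sp(delta_c, V_tip))
    return W_rsf(V_tip), V_tip

for v_r in (0.5 * c_s, 0.8 * c_s, 0.95 * c_s):
    W, V = localised_width(v_r)
    print(f"v_r/c_s = {v_r / c_s:.2f}:  V_tip = {V:6.2f} m/s,  W_loc = {W:.2e} m")
```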
## 5 Discussion
Our simulations demonstrate that strain localisation produces a loss of stress-bearing capacity of shear zones that can create and sustain earthquake rupture. The abrupt drop of shear stress produces an accelerating crack-like rupture in agreement with the predictions of LEFM. Notably, the rupture is driven by a well-defined fracture energy that corresponds to the edge-localised dissipation during the localisation process.
Such a behaviour is in contrast with that of ruptures driven by thermal pressurisation without internal strain localisation (_Viesca and Garagash_, 2015), for which breakdown work uniformly
Figure 5: Breakdown energy integrated from the \(\tau\) versus \(\delta\) profiles at different positions along the fault, for the simulation shown in Figure 3 (labelled “strong strain localisation”, blue dots) and for another simulation with the same parameters but larger hydraulic diffusivity (labelled “weak strain localisation”, brown dots). The rupture simulated with larger hydraulic diffusivity shows very weak strain localisation. (Additional plots of the dynamics observed during that simulation are given in Figures 7 and 8 of the Appendix). The rapid loss of stress caused by strain localisation creates an horizontal plateau whose associated magnitude is well predicted by the rupture energy inverted from the dynamic fracture fit shown in Figure 4 and highlighted here by the horizontal gray line.
Figure 6: Minimum strain rate localisation width \(W_{\rm loc}\) versus instantaneous rupture speed \(v_{\rm r}\) computed during rupture propagation for several simulations using the same parameters but different background stresses. Heterogeneous simulations with steps in background stress were conducted with an initial plateau of \(\tau_{\rm b}/\tau_{\rm c}=0.59\) around the nucleation zone, and a sharp drop down to a smaller value at position \(x/L_{\rm c}=\pm 11.52\) away from the center of the nucleation region. In heterogeneous simulations, rupture speeds may vary nonmonotonically during propagation, initially increasing and subsequently decreasing when encountering a large downstep in stress. Regardless of the details of the dynamics, the relationship between peak localised width and rupture speed is well approximated by the theoretical prediction proposed in Equation (17).
increases with increasing slip, i.e., without a well-defined residual strength at any point within the propagating rupture. Similarly, the integrated breakdown work for simulated ruptures that feature weak strain localisation lacks a well-defined edge-localised rupture energy (Figure 5). In this case, the rupture tip singularity significantly deviates from fracture mechanics predictions (14) and the rupture dynamics is no longer governed by local near-tip energy balance (_Brener and Bouchbinder_, 2021), as further discussed in Appendix D.6. Far from the rupture tip, our simulations show further shear weakening driven by the diffusion process (2) that continues at a slower pace as strain delocalises, and we approach again the slip-on-a-plane asymptotics described by _Viesca and Garagash_ (2015).
Strain localisation within a preexisting gouge material is strongly correlated to the dynamics of fault slip, and specifically to the rupture speed (Figure 6). The degree of strain localisation increases with increasing rupture speed, with a narrowing of the deformed, heated and pressurised region, approaching 1/1000th of the initial shear zone width. Despite the complexity of the problem, quantitative estimates can be obtained by a simple analytical approximation (Equation 8) adapted from _Platt et al._ (2014), so that the original predictions for peak localised width listed in _Rice et al._ (2014); _Platt et al._ (2014) still apply. Ideally, we could use the relationship between width and rupture speed depicted in Figure 6 to interpret the localisation features observed in the geological record in terms of rupture dynamics. However, strain localisation in rocks is not exclusively associated with dynamic ruptures and fast slip rates (e.g. _Evans and Kohlstedt_, 1995), and only careful micro- and nano-structural studies can be relied upon to determine the seismic nature of geological structures, notably via detection of features characteristic of frictional heating (_Rowe and Griffith_, 2015). Keeping this caveat in mind, our results highlight that the degree of strain localisation may be used as a complementary indicator of seismic slip: indeed, simulations leading to dynamic ruptures are always associated with strong localisation, with typical width in the sub-millimeter range (see also _Daub et al._, 2008).
In this paper, we chose thermal pressurisation as the driving mechanism for localisation and implemented a numerical scheme coupling small-scale diffusive processes across the shear zone to long-range elastodynamic coupling along the shear zone. Our results can however be generalized to other type of localisation instability arising within shear zones where (1) the shear stress in the shear zone is function of a set of variables that includes the shear strain rate \(\dot{\gamma}\) and another diffusive quantity \(\vartheta\) (Equation 1), and (2) the rate of work produced by shearing acts as a source term in the diffusion of \(\vartheta\) (Equation 2). Importantly, shear localisation can produce and sustain rupture in shear zones having a rate-strengthening rheology (\(g_{0}^{\prime}>0\)) often interpreted as a token of stability and aseismic slip.
If the conditions (5) and (6) are fulfilled, a localisation instability can develop and lead to an abrupt drop of shear stress, which leads to the emergence of a well-defined edge-localised fracture energy and LEFM-like rupture. Far from the tip, any diffusion-driven weakening leads to \(E_{\mathrm{BD}}\sim\delta^{2/3}\) at large slip (_Brantut and Viesca_, 2017). Therefore, the behavior summarized in Figure 5 is expected to arise for any type of localisation-driven rupture, including those where the rheology is controlled by temperature, such as superplasticity (e.g. _Green et al._, 2015; _De Paola et al._, 2015; _Pozzi et al._, 2021; _Harbord et al._, 2021). Indeed, simulations of high speed deformation in metals, which are also rate-hardening and temperature-sensitive, tend to exhibit similar characteristics, with the emergence of a localisation-driven dissipation at the edge of
propagating shear bands (_Bonnet-Lebownier et al._, 2002).
Our work demonstrates how localisation instabilities arising across a creeping shear zone create an abrupt drop of shear stress that promotes the propagation of classical dynamic ruptures over large distances along the shear zone. Whether frictional systems are governed by classical fracture mechanics or by nonlinear friction is an important and debated question in geophysics (e.g. _Svetlizky et al._, 2019; _Lambert et al._, 2021; _Paglialunga et al._, 2022; _Cocco et al._, 2023). Strain localisation is an abrupt structural weakening mechanism that provides a clear separation between the cohesive zone and the interior of the slipping patch, hence justifying the small-scale yielding hypothesis. However, the relative simplicity of the rupture tip behaviour does not preclude any complexity of the overall rupture style. Away from the rupture tip, thermal and hydraulic diffusion and strain delocalisation maintain a slow decay of the shear stress, which is prone to impact how earthquake ruptures stop (_Paglialunga et al._, 2022).
| Earthquakes occur through the propagation of rapid slip along tectonic faults. The dynamics of this propagation is governed by a balance between the elastic energy stored in the surrounding rock and the energy dissipated at the tip of the slipping patch. Energy dissipation is dictated by the mechanical behaviour of the fault, which is itself caused by interactions between thermo-hydro-mechanical processes. Here, we numerically simulate shear ruptures on faults, taking into account physical processes acting at the mm to sub-mm scale. This approach allows us to couple inner fault processes with km-scale elastodynamics, and we show that, as shear localises, classical cracks driven by a constant fracture energy emerge. This fracture energy is much smaller than that predicted assuming uniform shearing.
2309.10725 | Causality-Driven One-Shot Learning for Prostate Cancer Grading from MRI | In this paper, we present a novel method to automatically classify medical
images that learns and leverages weak causal signals in the image. Our
framework consists of a convolutional neural network backbone and a
causality-extractor module that extracts cause-effect relationships between
feature maps that can inform the model on the appearance of a feature in one
place of the image, given the presence of another feature within some other
place of the image. To evaluate the effectiveness of our approach in low-data
scenarios, we train our causality-driven architecture in a One-shot learning
scheme, where we propose a new meta-learning procedure entailing meta-training
and meta-testing tasks that are designed using related classes but at different
levels of granularity. We conduct binary and multi-class classification
experiments on a publicly available dataset of prostate MRI images. To validate
the effectiveness of the proposed causality-driven module, we perform an
ablation study and conduct qualitative assessments using class activation maps
to highlight regions strongly influencing the network's decision-making
process. Our findings show that causal relationships among features play a
crucial role in enhancing the model's ability to discern relevant information
and yielding more reliable and interpretable predictions. This would make it a
promising approach for medical image classification tasks. | Gianluca Carloni, Eva Pachetti, Sara Colantonio | 2023-09-19T16:08:33 | http://arxiv.org/abs/2309.10725v1 | # Causality-Driven One-Shot Learning for Prostate Cancer Grading from MRI +
###### Abstract
In this paper, we present a novel method to automatically classify medical images that learns and leverages weak causal signals in the image. Our framework consists of a convolutional neural network backbone and a causality-extractor module that extracts cause-effect relationships between feature maps that can inform the model on the appearance of a feature in one place of the image, given the presence of another feature within some other place of the image. To evaluate the effectiveness of our approach in low-data scenarios, we train our causality-driven architecture in a One-shot learning scheme, where we propose a new meta-learning procedure entailing meta-training and meta-testing tasks that are designed using related classes but at different levels of granularity. We conduct binary and multi-class classification experiments on a publicly available dataset of prostate MRI images. To validate the effectiveness of the proposed causality-driven module, we perform an ablation study and conduct qualitative assessments using class activation maps to highlight regions strongly influencing the network's decision-making process. Our findings show that causal relationships among features play a crucial role in enhancing the model's ability to discern relevant information and yielding more reliable and interpretable predictions. This would make it a promising approach for medical image classification tasks.
## 1 Introduction
Building models that automatically perform diagnoses from medical data could revolutionize a patient's clinical pathway, especially in oncology, minimizing invasive procedures and maximizing the chances of cures for the most severe cases. The implementation of such models in the medical field is severely limited by data availability and the heavy domain shifts that plague the data, which makes the models non-generalizable. Especially in the magnetic resonance imaging (MRI) domain, different vendors yield images with different features, and that prevents the model from being able to generalize well. In a classical fully-supervised setting, the problem can be overcome by training a model with large amounts of data that cover the whole data distribution. In practice, however, this is not always possible, due to the limited amounts of labeled/annotated data that affect the medical imaging domain.
Especially when dealing with limited data, one may wish to make an automated recognition model focus on the most discriminative regions of the image instead of paying attention to side details. Traditional convolutional neural networks (CNN) and transformer networks, for instance, can discover features hidden within input data together with their mutual co-occurrence. However, they are weak at discovering and making explicit hidden causalities between the features, which could be the reason behind a particular outcome. Indeed, image classification models are expected to distinguish between the classes of images taking into account the causal relationships between the features from the images, in a way that might resemble how humans accomplish the task. For this reason, several bridges between the field of causality and computer vision are emerging in the literature [14, 1].
A particular case would be discovering hidden causalities among objects in an image dataset. Unlike tabular data, which have a structured nature, when it comes to images, their representation does not include any explicit indications regarding objects or patterns. Instead, individual pixels convey a particular scene visually, and image datasets do not provide labels describing the objects' dispositions. Due to this, supervised machine learning as such cannot approach
them. Additionally, unlike video frames, from a single image one may not see the dynamics of appearance and change of the objects in the scene. Therefore, a priori information as a hint for causal discovery is absent.
To approach the problem of learning hidden causalities within images, Lopez-Paz _et al_. [7] suggest the "causal disposition" concept as more primitive than interventional causation (do-calculus) and causal graphs from Pearl's approach [11, 12]. However, it could be the only way to proceed with limited a priori information. In their view, by counting the number \(C(A,B)\) of images in which the causal dispositions of artifacts \(A\) and \(B\) are such that \(B\) disappears if one removes \(A\), one can assume that artifact \(A\) causes the presence of artifact \(B\) when \(C(A,B)\) is greater than the converse \(C(B,A)\). As a trivial example, imagine the image of a car on a bridge. Now, if we were to remove the car, then this would keep the image realistically looking (i.e., scene consistency), since an observer may see similar scenes among other images. Conversely, if we were to remove the bridge, then this would make the scene inconsistent, as that scenario is likely never seen among other images (i.e., flying cars). Therefore, we may assume that the presence of the bridge has some effect on the presence of the car. This concept leads to the intuition that any causal disposition induces a set of asymmetric causal relationships between the artifacts from an image (features, object categories, etc.) that represent (weak) causality signals regarding the real-world scene. To automatically infer such an asymmetric causal relationship from the statistics observed in an image dataset would be a meeting point with a machine vision system.
In this work, we combine a regular CNN with a causality-extraction module to investigate the features and causal relationships between them extracted during training. We build on ideas from [18] who suggest a way to compute such an asymmetric measure for possible causal relationships within images, and we propose a new scheme based on feature maps enhancement to enable "causality-driven" CNNs to classify images taking into account hidden causalities within them. Our hypothesis is that it would be possible and reasonable to get some weak causality signals from the individual images of some medical datasets without adding primary expert knowledge, and leveraging them to better guide the learning phase. Ultimately, a model trained in such a manner would be able to exploit weak causal dispositions of objects in the image scene to distinguish lesion grades even with limited data and possibly domain shift on the test set.
To evaluate how these causality-driven networks could behave in a low-data regime, we perform the training in a Few-shot learning (FSL) manner, and in particular in One-shot learning (OSL). Here, we propose a novel training scheme in which we design meta-training and meta-testing tasks having related classes (i.e., the same clinical problem is addressed) but at different granularity levels. To perform such experiments, we exploit the Deep Brownian Distance Covariance (DeepBDC) [19] method.
Our paper is structured as follows. After citing relevant works related to our topic in Sec. 2, we present the rationale behind causality-driven CNNs and our network proposition in Sec. 3.1, together with a description of the DeepBDC method of our choice in Sec. 3.2. Later, in Sec. 4, we dive into the details of our experiments settings, regarding both the dataset used and the meta-training and meta-testing schemes. Finally, in Sec. 5 and Sec. 6, we provide the results of our experiments and discuss our findings, summarizing our key conclusions.
## 2 Related works
Several approaches to integrating causality into FSL have been proposed in the literature, leading to different directions and applications. One notable example is the work by Yue _et al_. [22], where they leverage causality to demonstrate that pre-training in FSL can act as a confounder, resulting in spurious correlations between samples in the support set and their corresponding labels. To address this issue, they propose a novel FSL paradigm in which they perform causal interventions on the Structural Causal Model (SCM) of many-shot learning employing a backdoor adjustment approach. Based on that work, Li _et al_. [6] propose a method to mitigate the influence of confounders during the pre-training phase of the prototypical network [17]. They accomplish this by stratifying the pre-trained knowledge using a backdoor adjustment based on causal intervention. Specifically, the backdoor adjustment operations are applied in the metric layer of the prototypical network. The feature vectors of the class prototype and the query set are divided into N equal-size disjoint subsets, and the corresponding subsets are fed into their respective classifiers. The final prediction result is obtained by averaging the prediction probabilities from the N classifiers. Furthermore, in the work by Yang _et al_. [20], the authors propose a method to enhance the robustness and generalizability of few-shot text classification. They achieve this by extracting causal associations from text using a causal representation framework for FSL. The process involves performing causal interventions to generate new data with label-relevant information for each input. The original and augmented texts turned into feature representations are fed into a factorization module, which enforces the separation and joint independence of the representations from non-causal factors. Finally, the feature representations are utilized by a classification module for making predictions.
## 3 Methods
### Causality-driven CNNs
**Preliminaries.** In automatic image recognition, deep neural network classifiers obtain the essential features required for classification not directly from the pixel representation of the input image but through a series of convolution and pooling operations. These operations are designed to capture meaningful features from the image. Convolution layers are responsible for summarizing the presence of specific features in the image and generating a set of feature maps accordingly. As these maps are sensitive to the spatial location of features in the image, pooling is employed to consolidate the presence of particular features within groups of neighboring pixels in square-shaped sub-regions of the feature map.
**Causality signals in images.** When a feature map \(F^{i}\) contains only non-negative numbers (e.g., thanks to ReLU functions) and is normalized in the interval \([0,1]\), we can interpret its values as probabilities of that feature to be present in a specific location, for instance, \(F^{i}_{r,c}\) is the probability that the feature \(i\) is recognized at coordinates \(r,c\). By assuming that the last convolutional layer outputs and localizes to some extent the object-like features, we may modify the architecture of a CNN such that the \(n\times n\) feature maps (\(F^{1},F^{2},\dots F^{k}\)) obtained from that layer are fed into a new module that computes pairwise conditional probabilities of the feature maps. The resulting \(k\times k\) causality map would represent the causality estimates for the features.
Computing causality maps.Given a pair of feature maps \(F^{i}\) and \(F^{j}\) and the formulation that connects conditional probability with joint probability, \(P(F^{i}|F^{j})=\frac{P(F^{i},F^{j})}{P(F^{j})}\), following [18], we heuristically estimate this quantity for each pair of feature maps using one of two methods, namely _Max_ and _Lehmer_. The _Max_ method takes the joint probability to be the maximal presence of both features in the image (each one in its own location):
\[P(F^{i}|F^{j})=\frac{(\max_{r,c}F^{i}_{r,c})\cdot(\max_{r,c}F^{j}_{r,c})}{\sum_{r,c}F^{j}_{r,c}} \tag{1}\]
On the other hand, the _Lehmer_ method entails computing
\[P(F^{i}|F^{j})_{p}=\frac{LM_{p}(F^{i}\times F^{j})}{LM_{p}(F^{j})} \tag{2}\]
where \(F^{i}\times F^{j}\) is a vector of \(n^{4}\) pairwise multiplications between each element of the two \(n\times n\) feature maps, while \(LM_{p}\) is the generalized Lehmer mean function [2] with parameter \(p\), which is an alternative to power means for interpolating between minimum and maximum of a vector \(x\) via harmonic mean (\(p=-2\)), geometric mean (\(p=-1\)), arithmetic mean (\(p=0\)), and contraharmonic mean (\(p=1\)): \(LM_{p}(x)=\frac{\sum_{k=1}^{n}x_{k}^{p}}{\sum_{k=1}^{n}x_{k}^{p-1}}\). Equations 1 and 2 can be used to estimate asymmetric causal relationships between features \(F^{i}\) and \(F^{j}\), since, in general, \(P(F^{i}|F^{j})\neq P(F^{j}|F^{i})\). By computing these quantities for every pair \(i\) and \(j\) of the \(k\) feature maps, the \(k\times k\) causality map is obtained. We interpret asymmetries in such probability estimates as weak causality signals between features, as they provide some information on the cause-effect relationship between the appearance of a feature in one place of the image and the presence of another feature in some other place of the image. Accordingly, a feature \(F^{i}\) may be deemed to be the cause of another feature \(F^{j}\) when \(P(F^{i}|F^{j})>P(F^{j}|F^{i})\), that is, \(F^{i}\to F^{j}\), and vice versa.
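To make the estimate concrete, the sketch below (our own code and naming, not the authors' implementation; it assumes PyTorch-style tensors) computes a causality map from a stack of non-negative, \([0,1]\)-normalized feature maps using the _Max_ heuristic of Eq. 1; the _Lehmer_ variant of Eq. 2 would replace the maxima and sums with generalized Lehmer means.

```python
import torch

def causality_map_max(F: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Estimate pairwise P(F^i | F^j) with the 'Max' heuristic (Eq. 1).

    F: tensor of shape (k, n, n) holding k non-negative feature maps,
       each rescaled to [0, 1].
    Returns a (k, k) causality map whose entry (i, j) estimates P(F^i | F^j).
    """
    maxima = F.flatten(1).max(dim=1).values        # (k,) maximal presence of each feature
    sums = F.flatten(1).sum(dim=1)                 # (k,) total presence of each feature
    # Joint probability of features i and j approximated by the product of their maxima,
    # normalised by the total presence of the conditioning feature j.
    joint = maxima.unsqueeze(1) * maxima.unsqueeze(0)   # entry (i, j) = max(F^i) * max(F^j)
    return joint / (sums.unsqueeze(0) + eps)            # entry (i, j) divided by sum over F^j
```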
Embedding causality in regular CNNs.Once the causality map is computed, it can be embedded into the basic CNN architecture. Terziyan and Vitko [18] flatten these suggested causality estimates, concatenate them to the set of flattened feature maps, and let the CNN learn how these estimates might influence image classification. Differently from them, we exploit the causality map in a new way and get a weighting vector to enhance or penalize the single feature maps during training.
Causality weights.At each epoch, as training progresses, we look for asymmetries between elements on opposite sides of the main diagonal of the causality map. Some features may be found more often on the left side of the arrow (i.e., \(F\rightarrow\)) than on the right side (i.e., \(\to F\)). Therefore, we use such learned causalities to compute causality weights that assign a degree of importance to each feature map. Specifically, for each feature map, we take as its weighting factor the difference between the number of times it was found to cause other feature maps and the number of times it was found to be caused by another feature map. Computing this quantity for every feature results in a vector of causality factors, which is then passed through a ReLU activation to set all negative elements to zero.
Models.We propose two variants of the model:
* **mulcat** (_multiply and concatenate_). The non-negative causality factors multiply the corresponding feature maps, resulting in a causality-driven version of these feature maps. In this enhanced version, each feature map is strengthened according to its causal influence within the image's scene. Those features are merged with the original features by concatenation along the channel axis and form the final feature set that influences the classification outcomes.
* **mulcatbool**. Same as the previous, but before multiplication, the factors undergo boolean thresholding where all the non-zero factors are assigned a new weight of \(1\), while \(0\) otherwise.
The first method weighs features more according to their causal importance (a feature that is _cause_\(10\) times more than another receives \(10\) times more weight). In contrast, the second method is more conservative and assigns all features that are most often _causes_ the same weight. We experiment with both of them and compare their results.
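The following sketch (again our own naming, assuming PyTorch tensors and a single image) illustrates how the causality map could be turned into causality factors and used by the _mulcat_ / _mulcatbool_ variants described above.

```python
import torch
import torch.nn.functional as F_nn

def causality_factors(cmap: torch.Tensor, boolean: bool = False) -> torch.Tensor:
    """cmap: (k, k) causality map with entry (i, j) ~ P(F^i | F^j).

    Feature i is counted as a 'cause' of feature j when P(F^i | F^j) > P(F^j | F^i).
    """
    causes = (cmap > cmap.T).sum(dim=1).float()    # times feature i acts as a cause
    effects = (cmap.T > cmap).sum(dim=1).float()   # times feature i acts as an effect
    w = F_nn.relu(causes - effects)                # negative balances are zeroed out
    if boolean:                                    # 'mulcatbool' variant
        w = (w > 0).float()
    return w                                       # (k,) causality factors

def mulcat(feature_maps: torch.Tensor, cmap: torch.Tensor, boolean: bool = False) -> torch.Tensor:
    """feature_maps: (k, n, n); returns (2k, n, n) original + causality-weighted maps."""
    w = causality_factors(cmap, boolean)
    weighted = feature_maps * w.view(-1, 1, 1)
    return torch.cat([feature_maps, weighted], dim=0)
```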
### One-shot learning
In standard FSL, the training process occurs in episodes or tasks. Each task is formulated as an _\(N\)-way \(K\)-shot_ classification problem, where \(N\) represents the number of classes, and \(K\) is the number of _support_ images per class. We refer to \(Q\) as the number of _query_ images per class. For our experiments, we specifically focused on _N-way 1-shot_ classification, and we employed the DeepBDC method introduced by Xie _et al_. [19]. In particular, we utilized the meta-learning implementation of DeepBDC, known as Meta DeepBDC. DeepBDC is a metric-based FSL method that employs the BDC as the distance measure between prototypes. The BDC is defined as the Euclidean distance between the joint characteristic function and the product of the marginal characteristic functions of two random variables \(X\in\mathbb{R}^{p}\) and \(Y\in\mathbb{R}^{q}\). Following [19], we provide a more formal definition of BDC:
\[\rho(X,Y)=\int_{\mathbb{R}^{p}}\int_{\mathbb{R}^{q}}\frac{|\Phi_{XY}(t,s)-\Phi_{X}(t)\Phi_{Y}(s)|^{2}}{c_{p}c_{q}\|t\|^{1+p}\|s\|^{1+q}}\,dt\,ds, \tag{3}\]
where \(\Phi_{X}(t)\) and \(\Phi_{Y}(s)\) are the marginal characteristic functions of \(X\) and \(Y\), respectively, \(\Phi_{XY}(t,s)\) is the joint characteristic function of the two random variables, and \(c_{p}\) is defined as \(c_{p}=\pi^{(1+p)/2}/\Gamma((1+p)/2)\), where \(\Gamma\) is the complete gamma function. DeepBDC has demonstrated higher performance compared to state-of-the-art methods while being straightforward to deploy, since it can be implemented as a parameter-free spatial pooling layer that accepts feature maps as input and returns a BDC matrix.
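For intuition about the statistic in Eq. 3, the snippet below computes the classical plug-in estimate of the squared Brownian distance covariance from paired samples via double-centred distance matrices; the DeepBDC layer computes an analogous quantity from convolutional features, so this is only an illustration of the measure, not of the exact layer.

```python
import numpy as np

def squared_bdc(X: np.ndarray, Y: np.ndarray) -> float:
    """Plug-in estimate of the squared Brownian distance covariance.

    X: (n, p) samples, Y: (n, q) samples drawn in pairs.
    """
    def centred_dist(Z):
        D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)   # pairwise Euclidean distances
        return D - D.mean(axis=0, keepdims=True) - D.mean(axis=1, keepdims=True) + D.mean()

    A, B = centred_dist(X), centred_dist(Y)
    return float((A * B).mean())          # (1/n^2) * sum_ij A_ij B_ij

# Dependent variables give a clearly positive value, independent ones a value near zero.
rng = np.random.default_rng(0)
x = rng.normal(size=(500, 3))
print(squared_bdc(x, x[:, :1] + 0.1 * rng.normal(size=(500, 1))))   # clearly positive
print(squared_bdc(x, rng.normal(size=(500, 2))))                    # close to zero
```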
## 4 Experiments
### Dataset and pre-processing
In our study, we conducted meta-training, meta-validation, and meta-testing using the publicly available \(1500\)-acquisition dataset from the PI-CAI challenge [13]. This dataset comprises mpMRI acquisitions of the prostate, and for our experiments, we focused exclusively on cancerous patients. In particular, we selected only T2-weighted (T2w) images containing lesions by exploiting the expert annotations provided in the dataset. The dataset contained biopsy reports expressing the severity of each lesion as Gleason Score (GS). The pathologists assign a score of \(1\) to \(5\) to the two most common patterns in the biopsy specimen based on the tumor severity. The two grades are then added together to determine the GS, which can assume all the combinations of scores from "\(1\)+\(1\)" to "\(5\)+\(5\)". Additionally, the dataset included the assigned GS's group affiliation, defined by the International Society of Urological Pathology (ISUP) [4], ranging from \(1\) to \(5\), which provides the tumor severity information at a higher granularity level. From an even more high-level perspective, lesions with a GS \(\leq 3+3\) (ISUP = \(1\)) and with GS \(=3+4\) (ISUP = \(2\)) are considered low-grade (LG) tumors, as patients with such lesions typically undergo active surveillance [9]. Conversely, lesions with GS \(>3+4\) (ISUP \(>\)\(2\)) are high-grade (HG) tumors, as treatment is foreseen [9]. In this study, we considered only lesions whose GS was \(\geq 3+4\) (ISUP \(\geq 2\)), as lesion annotations were not provided for ISUP-\(1\) lesions. As a result, we had eight classes of GS and four classes of ISUP in our dataset. The total number of images we used was \(2049\) (from \(382\) patients), which we divided into training, validation, and testing subsets. Specifically, we used \(1611\) images for training, \(200\) for validation, and \(238\) for testing. During the splitting process, we ensured patient stratification, i.e., all the images of the same patient were grouped in the same subset, avoiding any data leakage. To replicate a realistic scenario involving distinct distributions in training and testing data, we utilized data from two different vendors: SIEMENS vendor data for meta-training and Philips vendor data for both meta-validation and meta-testing. Indeed, we chose the same validation and test distributions since, as highlighted by Setlur _et al_. [16], using validation samples that are not independent and identically distributed with the test samples can lead to unreliable results when determining the optimal model, specifically the one that maximizes performance on the test set.
As for the data pre-processing, we utilized the provided whole prostate segmentation to extract the mask centroid for each slice. We standardized the field of view (FOV) at \(100\) mm in both \(x\) (\(FOV_{x}\)) and \(y\) (\(FOV_{y}\)) directions to ensure consistency across all acquisitions and subsequently cropped each image based on this value around the found centroid. To determine the number of rows (\(N_{rows}\)) and columns (\(N_{cols}\)) corresponding to the fixed FOV, we utilized the pixel spacing in millimeters along the x-axis (denoted as \(px\)) and the y-axis (denoted as \(py\)). The relationships used to derive the number of columns and rows are \(N_{cols}=\frac{FOV_{x}}{px}\) and \(N_{rows}=\frac{FOV_{y}}{py}\), respectively. Additionally, we resized all images to a uniform matrix size of \(128\times 128\) pixels to maintain a consistent pixel count. Finally, we performed image normalization using an in-volume method. This involved calculating the mean and
standard deviation (SD) of all pixels within the volume acquisition and normalizing each image based on these values using a z-score normalization technique.
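A minimal sketch of the cropping and normalisation steps described above, assuming the slice, its prostate-mask centroid, and the pixel spacings are already available (function and variable names are ours):

```python
import numpy as np
from skimage.transform import resize

FOV_MM = 100.0  # fixed field of view in millimetres, in both directions

def preprocess_slice(img, centroid_rc, px, py, out_size=128):
    """Crop a T2w slice to a fixed FOV around the prostate centroid, then resize it."""
    n_cols = int(round(FOV_MM / px))   # N_cols = FOV_x / pixel spacing along x
    n_rows = int(round(FOV_MM / py))   # N_rows = FOV_y / pixel spacing along y
    r, c = centroid_rc
    r0 = max(int(r) - n_rows // 2, 0)
    c0 = max(int(c) - n_cols // 2, 0)
    crop = img[r0:r0 + n_rows, c0:c0 + n_cols]
    return resize(crop, (out_size, out_size), preserve_range=True)

def zscore_in_volume(volume):
    """Normalise every slice with the mean/SD computed over the whole acquisition."""
    mu, sd = volume.mean(), volume.std()
    return (volume - mu) / (sd + 1e-8)
```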
### Classification experiments
In our study, we conducted experiments on two classification scenarios: (i) distinguishing between LG and HG lesions and (ii) ISUP grading. For each scenario, we carefully designed meta-training and meta-testing tasks. Most FSL approaches typically involve using unrelated classes between meta-training and meta-testing tasks [3, 8, 10]. Instead, we propose a different approach where we focus on the same clinical problem but at varying levels of granularity. Specifically, during meta-training, we designed more challenging tasks, requiring the model to distinguish between classes with higher levels of granularity. Conversely, during meta-testing, we provided the model with easier classification tasks involving higher-level classification. Our rationale is that this approach would lead to higher performance on the specific task of interest performed during meta-testing, as the model would find it relatively easier to execute due to its exposure to more complex tasks during meta-training. Below we provide a detailed explanation of how we designed our meta-training and meta-testing tasks for both experiments.
In the first scenario, we labeled the meta-training data according to the four ISUP classes. The model performed binary classification in each meta-training task between two randomly selected classes from the four provided. However, during meta-testing, the model was tasked with a higher-level classification, namely, distinguishing between LG and HG lesions. For ease of reference, we will refer to this experiment as the _2-way_ experiment. In the second scenario, we labeled the meta-training data based on the GS. Each training task required the model to distinguish between four randomly selected GS classes out of the total eight. For meta-validation and meta-testing, we labeled each patient based on the ISUP tumor severity score and made the model distinguish across these four classes. Henceforth, we will refer to this experiment as _4-way_ experiment. We summarized the labeling procedure for the two experiments in Table 1.
In both scenarios, we employed a one-shot setting for both meta-training and meta-testing. This means that the model only observed a single example per class in the support set of each task. However, during the evaluation phase, we expanded the query set by utilizing ten samples per class in both meta-training and meta-testing tasks.
### Architecture and training
Many widely used architectures for large-scale image recognition incorporate an adaptive average pooling layer with an output size of \(1\times 1\) placed just before the classifier. Its primary advantage is the ability to accommodate input images of varying sizes, as it automatically adjusts its parameters (such as kernel size, stride, and padding) to ensure that the output is consistently \(1\times 1\) in shape. This dimensionality reduction, however, conflicts with the 2D nature of feature maps for computing causalities. Therefore, since we chose the ResNet18 as the backbone architecture in our work, we substituted its _AdaptiveAvgPool2D_ layer with an identity layer in our experiments. We performed an optimization over the method used to compute the causality maps (i.e., _Max_ or _Lehmer_) and, for the _Lehmer_ case, over six different values of its parameter \(p\): [\(-100,-2,-1,0,1,100\)]. Accordingly, we trained seven models for each causality setting (i.e., _mulcat_ or _mulcatbool_), resulting in \(14\) causality-driven models plus one baseline model for each experiment (i.e., 2-way and 4-way). For each of the two causality settings, we chose the best-performing model on the meta-validation set.
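For example, with a recent torchvision release the pooling layer (and the standard classification head) can be bypassed as follows; this sketch shows only the architectural change, with the causality module and few-shot head omitted:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

backbone = resnet18(weights=None)
backbone.avgpool = nn.Identity()   # keep the 512 spatial feature maps instead of pooling to 1x1
backbone.fc = nn.Identity()        # the few-shot head replaces the standard classifier

x = torch.randn(1, 3, 128, 128)
features = backbone(x)             # flattened 512 x 4 x 4 = 8192 values for a 128x128 input
print(features.shape)              # torch.Size([1, 8192])
```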
Given an input image, the causality-driven ResNet18 extracts \(512\) bidimensional feature maps of shape \(4\times 4\). While those features proceed along the main branch of the network, a copy of them enters the causality module where, for each pair, we extract their conditional probabilities by applying either Eq. 1 or Eq. 2 depending on the causality method of choice. Starting from the resulting \(512\times 512\) causality map, the vector of \(512\) causality factors is obtained according to the model variant of choice (i.e., _mulcat_ or _mulcatbool_) and then multiplied by the corresponding feature maps. Then, after concatenation of the two feature sets, we obtain a set of \(1024\) feature maps of shape \(4\times 4\) for each input image. Figure 1 shows the proposed causality-driven network. Although the training is performed task by task, here we represent the functioning of our method for just one input image.
At this point, the final set of feature maps is used to calculate the image representations. Following the Prototypical Networks [17] approach, the classification is performed by computing the BDC between the prototypes of each support class, calculated as the mean of the BDC matrix of each support image of that class, and each query image representation. To infuse the model with robustness to different data selections, we performed \(600\) meta-training tasks, \(600\) meta-validation tasks, and \(600\) meta-testing tasks for each experiment. As our loss function and optimizer, we employed the AUC margin loss (AUCM) [21] and the Proximal epoch stochastic method (PESG) [5], respectively. These were employed to maximize the Area Under the ROC curve (AUROC), which is more stable than accuracy with respect to dataset imbalance. In addition, we performed our experiments with the following hyperparameters: initial learning rate = \(1e-2\), weight decay = \(1e-2\), number of epochs = \(100\), decay epochs: [20,80]. At each decay epoch, the learning rate value is divided by \(10\).
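The step-wise learning-rate decay can be reproduced with a standard scheduler; the sketch below uses SGD purely as a stand-in for the PESG optimiser and a dummy module in place of the full network:

```python
import torch

model = torch.nn.Linear(8192, 4)                      # placeholder for the full network
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=1e-2)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[20, 80], gamma=0.1)

for epoch in range(100):
    # ... run the meta-training tasks for this epoch (omitted) ...
    optimizer.step()
    scheduler.step()            # lr: 1e-2 until epoch 20, then 1e-3, then 1e-4 from epoch 80
```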
### Evaluation metrics
In our evaluation, we utilized the AUROC as the performance metric for all our experiments. For the \(2\)-way experiment, we computed the classical binary AUROC by considering the HG as the positive class. In the case of the \(4\)-way experiment, instead, we calculated the AUROC using the _One-vs-rest_ setting. This approach involves computing the AUROC for each class against the rest of the classes independently. In addition, we evaluated the binary classification performance of the models in the \(4\)-way experiment by computing the AUROC of ISUP class \(2\) versus all the rest.
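Both flavours of the metric can be computed with scikit-learn; the arrays below are placeholders for the true labels and predicted class probabilities:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# 2-way experiment: HG is the positive class
y_true_bin = np.array([0, 1, 1, 0, 1])             # 1 = HG, 0 = LG
hg_scores = np.array([0.2, 0.8, 0.6, 0.4, 0.9])    # predicted probability of HG
print(roc_auc_score(y_true_bin, hg_scores))

# 4-way experiment: one-vs-rest AUROC over the ISUP classes 2-5
y_true = np.array([0, 1, 2, 3, 1, 0])              # class indices standing for ISUP 2..5
probs = np.random.default_rng(0).dirichlet(np.ones(4), size=6)   # placeholder class probabilities
print(roc_auc_score(y_true, probs, multi_class="ovr", average="macro"))

# ISUP 2 versus all the rest
print(roc_auc_score(y_true != 0, 1.0 - probs[:, 0]))
```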
### Visualizing the impact of causality
To test the hypothesis that a causally trained model can learn more discriminative representations for image classes, we performed post hoc explainability evaluations. Investigating the variability of visualization results with different choices of post hoc explainable AI (XAI) methods is beyond the scope of our work; therefore, we employed the popular Grad-CAM [15], which belongs to the broad literature on class activation maps (CAM). The basic idea behind Grad-CAM is to utilize the gradients flowing back from a chosen layer of a CNN to understand which parts of the image contribute the most to the activation of a particular class. In this case, we chose to compute the Grad-CAM heatmaps at the level of the BDC module, which takes as input the final set of feature maps. In addition, to make a fair comparison, we computed the heatmaps w.r.t. the ground-truth target of each image and only in the cases for which the prediction was performed correctly by both the non-causality-driven model and the mulcat and mulcatbool models.
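A minimal hook-based Grad-CAM sketch (not the exact implementation used here) that weights a chosen layer's activations by its spatially pooled gradients; it assumes the hooked layer outputs a 4-D feature map, e.g. the module feeding the BDC layer:

```python
import torch
import torch.nn.functional as F

def grad_cam(model, layer, image, target_class):
    """Minimal Grad-CAM: weight a layer's activations by its spatially pooled gradients."""
    store = {}
    h1 = layer.register_forward_hook(lambda m, i, o: store.update(act=o.detach()))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0].detach()))
    try:
        model.zero_grad()
        logits = model(image)                      # image: (1, C, H, W)
        logits[0, target_class].backward()
        weights = store["grad"].mean(dim=(2, 3), keepdim=True)        # GAP of the gradients
        cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
        return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)     # (1, 1, H, W) in [0, 1]
    finally:
        h1.remove(); h2.remove()
```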
## 5 Results
### Performance of causality-driven CNNs
The main results of our analysis are reported in Table 2.

| Experiment | Splitting | Labels |
| --- | --- | --- |
| 2-way | Meta-training | ISUP 2, ISUP 3, ISUP 4, ISUP 5 |
| 2-way | Meta-validation | **LG** (ISUP 2) - **HG** (ISUP 3, ISUP 4, ISUP 5) |
| 2-way | Meta-test | **LG** (ISUP 2) - **HG** (ISUP 3, ISUP 4, ISUP 5) |
| 4-way | Meta-training | GS 3+4, GS 4+3, GS 4+4, GS 3+5, GS 5+3, GS 4+5, GS 5+4, GS 5+5 |
| 4-way | Meta-validation | ISUP 2, ISUP 3, ISUP 4, ISUP 5 |
| 4-way | Meta-test | ISUP 2, ISUP 3, ISUP 4, ISUP 5 |

Table 1: A summary of the labeling procedure according to our training approach. ISUP = International Society of Urological Pathology, LG = Low Grade, HG = High Grade, GS = Gleason Score.
Figure 1: Causality-Driven ResNet18 for prostate cancer grading from MRI. Here, the _causality map_ can be computed with either the _Max_ or the _Lehmer_ option, while the _causality factors_ can be computed using either the _mulcat_ or the _mulcatbool_ method. For visualization purposes, the size of the box representing the causality map has been reduced.
We report these values as the mean and standard deviation (SD) of the AUROC across all the \(600\) meta-test tasks. Concerning the \(2\)-way experiment (i.e., LG vs HG), the baseline model achieved \(0.539\) (\(0.141\)), and embedding the causality module led the model to improve, obtaining \(0.550\) (\(0.144\)) and \(0.556\) (\(0.141\)) AUROC for the _mulcat_ and _mulcatbool_ variants, respectively. In particular, both these causality-driven results were obtained with the _Lehmer_ causality setting, using a _Lehmer_ parameter of \(-100\). Similar behaviour, with a more pronounced improvement, was observed for the \(4\)-way experiment (i.e., ISUP \(2-5\)), where we obtained \(0.585\) (\(0.068\)) multi-class AUROC for the non-causality-driven model, whereas the _mulcat_ and _mulcatbool_ variants achieved \(0.611\) (\(0.069\)) and \(0.614\) (\(0.067\)), respectively. Again, both best-performing mulcat and mulcatbool variants were obtained employing the _Lehmer_ setting, with _Lehmer_ parameters equal to \(1\) and \(-2\), respectively.
Table 2 also shows the results of an ablation study. Since in the causality-driven implementations it is the _causality factors_ vector that ultimately determines which feature maps are enhanced (and by how much), we modify that vector to weight features randomly rather than in a principled way based on the causality map. The ablation variants of the _mulcat_ and _mulcatbool_ models that we realized are:
* **ablation mulcat**. The \(1\times k\) vector of causality factors (i.e., weights) is replaced with a random vector of the same size with integer values ranging from \(0\) (a feature map is never the _cause_ of another feature) to \(k-1\) (it is the _cause_ of every other feature).
* **ablation mulcatbool**. Similar to the previous variant, but the weights are randomly assigned a value of either \(0\) or \(1\).
### Visualizing activation maps
As a result of the post hoc explainability evaluations, we obtain the visualizations shown in Figure 2. The first row (\(a\) to \(e\)) regards a test case from models trained in the \(2\)-way setting, while the second row (\(f\) to \(j\)) pertains to a case from \(4\)-way models. From left to right, the columns represent the input test image, the annotation of the lesion in terms of binary masks, the Grad-CAM activation for the baseline models (not causality-driven), the Grad-CAM activation for the _mulcat_ models, and the Grad-CAM activation for the _mulcatbool_ models.
## 6 Discussion and Conclusion
In this study, we investigated the impact of integrating a new causality-extraction module into traditional CNNs to enhance classification performance. We trained this causality-driven model using an OSL approach, leveraging meta-learning conditions with the MetaDeepBDC model [19]. We aimed to assess the effectiveness of such a model in situations where only a few samples are available for training, a challenge frequently encountered in medical image analysis.
In Pearl's terms, our work regards the first rung of the ladder of causation, where reasoning is limited to conditional probabilities based on observational datasets. However, we aimed to explore whether this approach could yield improvements in a scenario involving image data (rather than structured tabular data), and no prior knowledge of the data generation process. Our findings demonstrate that incorporating a causality-driven module into our model leads to enhanced performance compared to the baseline. This behavior is evident in both the \(2\)-way and \(4\)-way experiments. In particular, in the \(4\)-way experiment, the causality module provided a 3% improvement over the baseline in terms of the multi-class AUROC and about \(13\)% improvement in terms of ISUP \(2\) vs. rest AUROC.
We additionally validated our numerical results both quantitatively and qualitatively. Quantitatively, we performed ablation studies on the actual impact of the causality factors on producing valuable causality-driven feature maps. As expected, when the causal weights are replaced with random vectors, the accuracy of the final model is worse than that of its causality-driven counterpart (see Table 2). This seems to suggest that, albeit weak, the causality signals learned during training help the network. Qualitatively, we generated Grad-CAM heatmaps to highlight the regions of the input image that strongly influence the network's output. Figure 2 presents examples for both the \(2\)-way and the \(4\)-way experiments. In both cases, we calculated the heatmaps w.r.t. the ground-truth target when all three types of models correctly classified the images. The heatmaps reveal distinct patterns between the baseline model and the causality-driven models. The former tends to focus on a larger area of the image, including regions outside the prostate and lesion boundaries. Conversely, the causality-driven models concentrate on smaller areas, predominantly encompassing the prostate and the lesion. Figure 2 (c-e), which depicts the \(2\)-way experiment, shows that the baseline model (Figure 2 c) primarily attends to the left half of the image, encompassing the lesion as well as non-prostate tissues. In contrast, the _mulcat_ version (Figure 2 d) exhibits a more focused heatmap, highlighting mainly the prostate and a portion of the lesion. The _mulcatbool_ case (Figure 2 e) further refines the focus by emphasizing the prostate and a larger portion of the lesion. Similarly, for the \(4\)-way experiment, the baseline model (Figure 2 h) pays attention to the left half of the image. In contrast, the _mulcat_ and _mulcatbool_ versions (Figure 2 i-j) prioritize the lesion's immediate surroundings. Although all three models produce accurate predictions, the heatmaps demonstrate that the causality-driven module enables better localization of relevant regions, providing more
reliable explanations for the model's predictions.
Comparing the \(4\)-way experiment to the \(2\)-way experiment, the former produced better classification results. Indeed, despite being a more complex task, the mean AUROC across all classes is higher. We argue that two factors contribute to this outcome, both associated with the meta-training phase. Firstly, in the \(4\)-way experiment, the model encounters a more diverse range of tasks, as the four classes can be selected from a pool of eight distinct options. In contrast, the \(2\)-way experiment encompasses only four classes in total. Secondly, for the \(4\)-way experiment, in each meta-training task, the model is trained to distinguish a higher number of classes, representing a more challenging task w.r.t. the binary case. Consequently, the model is better equipped to handle the testing phase, resulting in improved performance. This superiority becomes even more evident when examining the models' performance in the \(4\)-way experiment in classifying ISUP class \(2\) (representing LG lesions) against all other classes (representing HG lesions). Notably, when the _mulcat_ and _mulcatbool_ causality-driven modules are embedded into the model, the AUROC value for this particular task increases by almost \(16\)%.
Our work comes with several limitations. We only used ResNet18 as our backbone network, and this might have limited the opportunity to find better-suited architectures able to detect finer details in the image and consequently extract more informative latent representations. In addition, we performed our experiments only in an OSL setting, limiting the classification performance of our models. In fact, we must note that the performance values obtained are not yet sufficient for clinical practice. Finally, we validated our method on only one dataset.
Despite that, our findings indicate that integrating a causality-driven module into a classification model can enhance performance, even with severe data limitations, which are common in medical imaging. The causality-driven approach not only improves overall classification results but also helps the model focus more accurately on the critical regions of the image, leading to more reliable and robust predictions. This aspect is particularly critical in medical imaging, where precise and reliable classification is crucial for effective diagnosis and treatment planning.
This paper presents a new method for the automatic classification of medical images that learns and exploits weak causal signals in images. Our framework consists of a convolutional neural network backbone and a causality-extraction module, which extracts cause-effect relationships between feature maps and can infer the appearance of a feature in one part of the image given the presence of other features in other parts of the image. To evaluate its effectiveness in low-data regimes, we train our causality-driven architecture using a novel meta-learning procedure that uses related classes at different levels of granularity. We conduct binary and multi-class classification experiments on a publicly available set of prostate MRI images. To validate the effectiveness of the proposed causality-driven module, we perform ablation studies and qualitative assessments using class activation maps.
2301.13368 | Misspecification-robust Sequential Neural Likelihood for
Simulation-based Inference | Simulation-based inference techniques are indispensable for parameter
estimation of mechanistic and simulable models with intractable likelihoods.
While traditional statistical approaches like approximate Bayesian computation
and Bayesian synthetic likelihood have been studied under well-specified and
misspecified settings, they often suffer from inefficiencies due to wasted
model simulations. Neural approaches, such as sequential neural likelihood
(SNL) avoid this wastage by utilising all model simulations to train a neural
surrogate for the likelihood function. However, the performance of SNL under
model misspecification is unreliable and can result in overconfident posteriors
centred around an inaccurate parameter estimate. In this paper, we propose a
novel SNL method, which through the incorporation of additional adjustment
parameters, is robust to model misspecification and capable of identifying
features of the data that the model is not able to recover. We demonstrate the
efficacy of our approach through several illustrative examples, where our
method gives more accurate point estimates and uncertainty quantification than
SNL. | Ryan P. Kelly, David J. Nott, David T. Frazier, David J. Warne, Chris Drovandi | 2023-01-31T02:28:18 | http://arxiv.org/abs/2301.13368v2 | # Misspecification-robust Sequential Neural Likelihood
###### Abstract
Simulation-based inference (SBI) techniques are now an essential tool for the parameter estimation of mechanistic and simulatable models with intractable likelihoods. Statistical approaches to SBI such as approximate Bayesian computation and Bayesian synthetic likelihood have been well studied in the well specified and misspecified settings. However, most implementations are inefficient in that many model simulations are wasted. Neural approaches such as sequential neural likelihood (SNL) have been developed that exploit all model simulations to build a surrogate of the likelihood function. However, SNL approaches have been shown to perform poorly under model misspecification. In this paper, we develop a new method for SNL that is robust to model misspecification and can identify areas where the model is deficient. We demonstrate the usefulness of the new approach on several illustrative examples.
_Keywords: generative models, implicit models, likelihood-free inference, normalising flows, simulation-based inference_
## 1 Introduction
Statistical inference for complex models can be challenging when the likelihood function is infeasible to evaluate many times. However, if the model is computationally inexpensive to simulate given parameter values, it is possible to perform approximate parameter estimation by so-called simulation-based inference (SBI) techniques (e.g. Cranmer et al. (2020)). The difficulty of obtaining reliable inferences in the SBI setting is exacerbated when the model is misspecified (e.g. Frazier et al. (2020)).
Statistical approaches for SBI, such as approximate Bayesian computation (ABC, Sisson et al. (2018)) and Bayesian synthetic likelihood (BSL, Price et al. (2018)) have been well studied, both empirically (e.g. Drovandi and Frazier (2022)) and theoretically (e.g. Li and Fearnhead (2018), Frazier et al. (2018), Frazier et al. (2022)). These approaches often base inference on a summarisation of the data to manage computational costs. ABC aims to minimise the distance between observed and simulated summaries, whereas BSL constructs a Gaussian approximation of the model summary to form an approximate likelihood. In the case of model misspecification, there may be additional motivation to replace the entire dataset with summaries, as the resulting model can then be trained to capture the broad features of the data that may be of most interest; see, e.g., Lewis et al. (2021) for further discussion. In this paper, the type of misspecification we are interested in is when the model is not able to recover the observed summary statistic as the sample size diverges. This form of misspecification is referred to as incompatibility in Marin et al. (2014).
The behaviour of ABC and BSL under incompatibility is now well understood. Frazier et al. (2020) show that under various assumptions, ABC is capable of concentrating onto the pseudo-true parameter value, which in the SBI context is the value that minimises some distance between the large sample limit of the observed and simulated summaries. However, the concentration is not Gaussian and credible intervals do not have the correct frequentist coverage. BSL on the other hand can exhibit unexpected behaviour under misspecification (Frazier et al., 2021). For example, it is possible to obtain Gaussian concentration onto the pseudo-true parameter, but it is also possible to obtain a multimodal posterior that does not concentrate onto a singleton. Unfortunately, the behaviour for a given problem is not known _a priori_.
Given the undesirable properties of BSL under misspecification, Frazier and Drovandi (2021) propose methods to simultaneously identify which statistics are incompatible and make inferences robust. The approach of Frazier and Drovandi (2021) is a model expansion that introduces auxiliary variables, one per summary statistic, whose purpose is to either shift the means or inflate the variances in the Gaussian approximation so that the extended model is compatible, i.e. to soak up the misspecification.
Although ABC is, in a certain sense, robust to misspecification, and BSL has been extended to handle incompatibility, they both remain inefficient in terms of the number of model simulations required. Most algorithms for ABC and BSL are wasteful in the sense they use a relatively large number of model simulations that are associated with rejected parameter proposals (for some exceptions to this, see Jasra et al. (2019); Levi and Craiu (2022); Warne et al. (2018, 2022)). This has motivated the development of methods in machine learning that utilise all model simulations to learn either the likelihood (e.g. Papamakarios et al. (2019)), posterior (e.g. Greenberg et al. (2019)) or likelihood ratio (e.g. Thomas et al. (2022)). Since these objects are learned as functions of the parameter, subsequent posterior inference does not require further model simulation.
However, the machine learning approaches, such as sequential neural likelihood (SNL) and sequential neural posterior (SNP) have been shown to exhibit poor performance under model misspecification (e.g. Bon et al. (2022); Cannon et al. (2022); Schmitt et al. (2021); Ward et al. (2022)). Thus there is a critical need to develop these neural approaches so they are robust to model misspecification. Ward et al. (2022) develop a method, which shares similarities to the mean adjustment approach developed for BSL, to make neural posterior estimation robust to model misspecification. Cannon et al. (2022) develop several neural SBI robust methods by incorporating machine learning methods that are known to better handle out-of-distribution (OOD) data. Cranmer et al. (2020) advise to incorporate
additional noise directly into the simulator if model misspecification is suspected.
In this paper we develop a robust version of SNL, again inspired by the mean adjustment approach for BSL. Unlike Ward et al. (2022) who consider neural posterior estimation, we consider neural likelihood estimation, which is useful for problems where the likelihood is easier to emulate compared to the posterior. Further, ours is the first _sequential_ neural approach that simultaneously detects and corrects for model misspecification.
## 2 Background
Let \(y=(y_{1},\ldots,y_{n})^{\top}\) denote the observed data and define \(P_{0}^{(n)}\) as the true distribution of \(y\). The observed data is assumed to be generated from a class of parametric models \(\{P_{\theta}^{(n)}:\theta\in\Theta\subset\mathbb{R}^{d_{\theta}}\}\) for which the likelihood function is intractable, but from which we can easily simulate pseudo-data \(x\) for any \(\theta\in\Theta\) where \(\theta\) is \(d_{\theta}\) dimensional. Let \(\Pi\) denote the prior measure for \(\theta\) and \(\pi(\theta)\) its density. The posterior density of interest is given by
\[\pi(\theta\mid y)\propto g(y\mid\theta)\pi(\theta),\]
where \(g(y\mid\theta)\) is the likelihood function.
### Statistical Approaches for SBI
Since we assume that the likelihood is computationally intractable, we conduct inference using approximate Bayesian methods. Statistical approaches to SBI aim to search for values of \(\theta\) that produce pseudo-data \(x\) which is "close enough" to \(y\), and then retain these values to build an approximation to the posterior. To ensure the problem is computationally practical, the comparison is generally carried out using summaries of the data. Moreover, under model misspecification, there may be further motivation to conduct inference based on summaries, to attempt to capture the key features of the data. Let \(S:\mathbb{R}^{n}\rightarrow\mathbb{R}^{d}\), \(d\geq d_{\theta}\), denote the vector summary statistic mapping used in the analysis.
Two prominent statistical approaches for SBI are ABC and BSL. ABC approximates the likelihood via the following:
\[g_{\epsilon}(S(y)\mid\theta)=\int_{\mathbb{R}^{d}}K_{\epsilon}(\rho\{S(y),S(x )\})g_{n}(S(x)\mid\theta)dx,\]
where \(\rho\{S(y),S(x)\}\) measures the discrepancy between observed and simulated summaries and \(K_{\epsilon}(\cdot)\) is a kernel that allocates higher weight to smaller \(\rho\). The bandwidth of the kernel, \(\epsilon\), is often referred to as the tolerance in the ABC literature. The above integral is intractable, but can be estimated unbiasedly by drawing \(m\) mock datasets \(x_{1},\ldots,x_{m}\sim P_{\theta}^{(n)}\) and computing
\[\hat{g}_{\epsilon}(S(y)\mid\theta)=\frac{1}{m}\sum_{i=1}^{m}K_{\epsilon}(\rho \{S(y),S(x_{i})\}).\]
It is common to set \(m=1\) and choose the indicator kernel function, \(K_{\epsilon}(\rho\{S(y),S(x)\})=\mathbf{I}(\rho\{S(y),S(x)\}\leq\epsilon)\). Using arguments from the exact-approximate literature (Andrieu and Roberts, 2009), unbiasedly estimating the ABC likelihood leads to a Bayesian algorithm that samples from the approximate posterior proportional to \(g_{\epsilon}(S(y)\mid\theta)\pi(\theta)\).
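For concreteness, a minimal ABC rejection sampler with the indicator kernel and \(m=1\) might look as follows; the simulator, summary mapping, prior sampler and tolerance below are toy placeholders:

```python
import numpy as np

def abc_rejection(observed_summary, prior_sample, simulate, summarise, eps, num_samples):
    """Keep draws whose simulated summaries fall within eps of the observed summaries."""
    accepted = []
    while len(accepted) < num_samples:
        theta = prior_sample()
        s_sim = summarise(simulate(theta))
        if np.linalg.norm(s_sim - observed_summary) <= eps:   # indicator kernel
            accepted.append(theta)
    return np.array(accepted)

# Toy example: infer the mean of a Gaussian from the sample mean.
rng = np.random.default_rng(1)
y = rng.normal(loc=2.0, size=100)
post = abc_rejection(
    observed_summary=y.mean(),
    prior_sample=lambda: rng.normal(0.0, 10.0),
    simulate=lambda th: rng.normal(th, 1.0, size=100),
    summarise=lambda x: x.mean(),
    eps=0.1,
    num_samples=100,
)
print(post.mean(), post.std())
```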
As is evident from the above integral estimator, ABC non-parametrically estimates the summary statistic likelihood. In contrast, BSL uses a parametric estimator. The most common BSL approach approximates \(g_{n}(\cdot\mid\theta)\) using a Gaussian:
\[g_{A}(S(y)\mid\theta)=\mathcal{N}\left(S(y);\mu(\theta),\Sigma(\theta)\right),\]
where \(\mu(\theta)=\mathsf{E}[S(x)|\theta]\) and \(\Sigma(\theta)=\mathrm{Var}(S(x)|\theta)\) denote the mean and variance of the model summary statistic at \(\theta\). In almost all practical cases \(\mu(\theta)\) and \(\Sigma(\theta)\) are unknown, but we can replace these quantities with those estimated from \(m\) independent model simulations, using for example the sample mean and variance:
\[\mu_{m}(\theta) =\frac{1}{m}\sum_{i=1}^{m}S(x^{i}),\] \[\Sigma_{m}(\theta) =\frac{1}{m}\sum_{i=1}^{m}\left(S(x^{i})-\mu_{m}(\theta)\right) \left(S(x^{i})-\mu_{m}(\theta)\right)^{\top},\]
and where each simulated data set \(x^{i}\), \(i=1,\ldots,m\), is generated iid from \(P_{\theta}^{(n)}\). The synthetic likelihood is then approximated as
\[\hat{g}_{A}(S(y)\mid\theta)=\mathcal{N}\left(S(y);\mu_{m}(\theta),\Sigma_{m}( \theta)\right).\]
Unlike ABC, \(\hat{g}_{A}(S(y)\mid\theta)\) is not an unbiased estimator of \(g_{A}(S(y)\mid\theta)\). Frazier et al. (2022) demonstrate that if the summary statistics are sub-Gaussian, then the choice of \(m\) is immaterial so long as \(m\) diverges as \(n\) diverges. The insensitivity to \(m\) is supported empirically in Price et al. (2018), provided that \(m\) is chosen large enough so that the plug-in synthetic likelihood estimator has a small enough variance to ensure that MCMC mixing is not adversely affected.
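A sketch of the estimated synthetic log-likelihood for a generic simulator; note that, to match the displayed estimator, the sample covariance is computed with divisor \(m\) rather than \(m-1\):

```python
import numpy as np
from scipy.stats import multivariate_normal

def synthetic_loglik(theta, observed_summary, simulate, summarise, m, rng):
    """Estimate the Gaussian synthetic log-likelihood from m model simulations."""
    sims = np.stack([summarise(simulate(theta, rng)) for _ in range(m)])   # (m, d)
    mu_m = sims.mean(axis=0)
    sigma_m = np.cov(sims, rowvar=False, bias=True)                        # divides by m
    return multivariate_normal.logpdf(observed_summary, mean=mu_m, cov=sigma_m)
```

In an MCMC scheme this estimate is recomputed at every proposed \(\theta\), which is where the bulk of the computation goes.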
### SBI and Model Misspecification
The usual notion of model misspecification, i.e., that there is no value of \(\theta\in\Theta\) such that \(P_{\theta}^{(n)}=P_{0}^{(n)}\), is not meaningful in the SBI context, since even if the model is incorrect, it is still possible that \(P_{\theta}^{(n)}\) can generate summary statistics that match the observed statistic (Frazier et al., 2020). Define \(b(\theta)=\mathsf{E}[S(x)\mid\theta]\) and \(b_{0}=\mathsf{E}[S(y)]\) as the expected value of the summary statistic with respect to the probability measures \(P_{\theta}^{(n)}\) and \(P_{0}^{(n)}\), respectively. That is, the expectations are with respect to the model conditioned on \(\theta\) and the true data generating process, respectively. The meaningful notion of misspecification in the SBI context is when there is no \(\theta\in\Theta\) such that \(b(\theta)=b_{0}\), i.e. there is no parameter value such that the expected simulated and observed summaries match.
In the context of ABC, we say that the model is misspecified if
\[\epsilon^{*}=\inf_{\theta\in\Theta}\rho(b(\theta),b_{0})>0,\]
for some metric \(\rho\), and the corresponding pseudo-true parameter is defined as \(\theta^{*}=\arg\inf_{\theta\in\Theta}\rho(b(\theta),b_{0})\). Frazier et al. (2020) show, under various conditions, the ABC posterior concentrates onto \(\theta^{*}\) for large sample sizes, and thus ABC does possess an inherent robustness to model misspecification. However, Frazier et al. (2020) also show that the asymptotic shape of the ABC posterior is non-Gaussian and credible intervals do not pos
sess valid frequentist coverage; i.e., confidence sets do not have the correct level under \(P_{0}^{(n)}\).
In the context of BSL, Frazier et al. (2021) show that when the model is incompatible, i.e. \(b(\theta)\neq b_{0}\ \forall\theta\in\Theta\), the Kullback-Leibler divergence between the true data generating distribution and the Gaussian distribution associated with the synthetic likelihood diverges as \(n\) diverges. In BSL, we say that the model is incompatible if
\[\lim_{n\to\infty}\inf_{\theta\in\Theta}\left\{b(\theta)-b_{0}\right\}^{\top} \left\{n\Sigma(\theta)\right\}^{-1}\left\{b(\theta)-b_{0}\right\}>0.\]
Define \(M_{n}(\theta)=n^{-1}\partial\log g_{A}\left(S\mid\theta\right)/\partial\theta\). The behaviour of BSL under misspecification is dependent on the number of roots of \(M_{n}(\theta)=0\). If there is a single solution, and under various assumptions, the BSL posterior will concentrate onto the pseudo-true parameter \(\theta^{*}\) and its asymptotic shape is Gaussian, and the BSL posterior mean satisfies a Bernstein von-Mises result. However, if there are multiple solutions to \(M_{n}(\theta)=0\), then the BSL posterior will asymptotically exhibit multiple modes that do not concentrate on \(\theta^{*}\). The number of solutions to \(M_{n}(\theta)=0\) for a given problem is not known _a priori_ and is very difficult to explore.
In addition to the theoretical issues suffered by BSL under misspecification, there are also computational issues. Frazier and Drovandi (2021) identify that, under incompatibility, since the observed summary lies in the tail of the estimated synthetic likelihood for any value of \(\theta\), the Monte Carlo estimate of the likelihood suffers from high variance. Consequently, a very large value of \(m\) is required to allow the MCMC chain to mix and not become stuck, which is computationally burdensome.
A solution to the BSL incompatibility problem is provided in Frazier and Drovandi (2021). The solution involves expanding the model to include an auxiliary parameter, \(\Gamma\in\mathbb{R}^{d}\) such that \(\Gamma=(\gamma_{1},\ldots,\gamma_{d})^{\top}\), which has the same dimension as the summary statistic. The approach of Frazier and Drovandi (2021) then either adjusts the mean or inflates the variance of the synthetic likelihood so that the observed summary does not lie so far in the tails of the expanded model. The expanded model is overparameterised since \(\dim((\theta,\Gamma)^{\top})=d+d_{\theta}\), which is greater than the dimension of the summary statistic, \(d\). To regularise the model, Frazier and Drovandi (2021) impose a prior distribution on \(\Gamma\) that favours compatibility. However, the prior for each component of \(\Gamma\) has a heavy tail so that it can "soak up" the misspecification for a certain subset of the summary statistics. By doing so, the method is able to identify the statistics that the model is not compatible with, and at the same time, mitigate the influence of the incompatible statistics on the inference. Frazier and Drovandi (2021) show that under compatibility, the posterior for \(\Gamma\) is the same as its prior, so that incompatibility can be detected by departures from the prior.
Here we provide more detail on the mean adjustment method of Frazier and Drovandi (2021) since we adopt a similar approach within our robust SNL method. The mean adjusted (estimated) synthetic likelihood is denoted
\[\mathcal{N}\left(S;\mu_{m}(\theta)+\sigma_{m}(\theta)\circ\Gamma,\Sigma_{m}( \theta)\right),\]
where \(\sigma_{m}(\theta)\) is the vector of estimated standard deviations of the model summary statistics, and \(\circ\) denotes the Hadamard (element-by-element) product. The role of \(\sigma_{m}(\theta)\) is to ensure that we can treat each component of \(\Gamma\) as the number of standard deviations (either positive or negative) that we are shifting the corresponding model summary statistic.
Frazier and Drovandi (2021) suggest using a prior for which \(\theta\) and \(\Gamma\) are independent, with the prior density for \(\Gamma\) being
\[p(\Gamma)=\prod_{j=1}^{d}\frac{1}{2\lambda}\exp\left(-\frac{|\gamma_{j}|}{ \lambda}\right).\]
The Laplace prior above with scale \(\lambda\) for each \(\gamma_{j}\) is chosen because it is peaked at zero, but with a moderately heavy tail. Frazier and Drovandi (2021) develop a component-wise MCMC algorithm that iteratively updates via the conditionals \(\theta|S,\Gamma\) and \(\Gamma|S,\theta\). The update for \(\Gamma\) holds the \(m\) model simulations fixed and uses a slice sampler, so that the acceptance rate is one and no proposal distribution needs to be tuned. Frazier and Drovandi (2021) find empirically that sampling over the joint space \((\theta,\Gamma)^{\top}\) does not slow down mixing on the \(\theta\)-marginal space. On the contrary, in the case of misspecification, the mixing is substantially improved as the observed value of the summaries no longer falls in the tail of the Gaussian distribution.
Although ABC has a natural robustness to misspecification and BSL has been extended to accommodate incompatibility, both methods reject a large number of model simulations, and can thus be highly computationally intensive when simulating the model is not cheap. As described in the introduction, neural methods in the machine learning community have been developed that exploit all the model simulations to build a surrogate model of the posterior, likelihood or likelihood ratio. Below we describe one of these methods, sequential neural likelihood (SNL), and show how it can be extended to accommodate model misspecification.
## 3 Robust Sequential Neural Likelihood
In this section, we propose an approach that extends SNL using a similar method to the mean adjustment approach in Frazier and Drovandi (2021) so that it is robust to model misspecification.
### Sequential Neural Likelihood
SNL belongs to the class of SBI methods that use a neural conditional density estimator (NCDE). A NCDE is a specific class of neural network, \(q_{\phi}\), parameterised by \(\phi\), that learns a conditional probability density from a set of datapoint pairs. This is attractive for SBI as we have access to pairs of \((\theta,x)\), but do not have a tractable conditional probability density, in either direction. Hence, the idea is to train \(q_{\phi}\) on \(\mathcal{D}=\{\theta_{i},x_{i}\}_{i=1}^{m}\) and use it as a surrogate for the unavailable density of interest. NCDEs have been used as a surrogate density for the likelihood (Papamakarios et al., 2019) and posterior (Papamakarios and Murray, 2016; Greenberg et al., 2019). Throughout this section we will mainly consider approaches that build a surrogate of the intractable likelihood function, \(q_{\phi}(S(x)\mid\theta)\), using a normalising flow as the NCDE.
Normalising flows are a useful class of neural networks for density estimation. They convert a simple base distribution with density \(\pi(u)\), to a complex target distribution with density \(\pi(\eta)\), through a sequence of \(L\) bijective transformations, \(T=T_{L}\circ\cdots\circ T_{1}\). The density of
\(\eta=T(u)\), \(\eta\in\mathbb{R}^{d}\), where \(u\sim\pi(u)\), is
\[\pi(\eta)=\pi(u)|\det J_{T}(u)|^{-1}, \tag{1}\]
where \(J_{T}\) is the Jacobian of \(T\). Normalising flows are also useful for data generation, although this has been less important for SBI methods. We only consider autoregressive flows here, but there are many recently developed alternatives, as discussed in Papamakarios et al. (2021).
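As a quick numerical check of Equation 1, consider a single affine transformation \(T(u)=au+b\) applied to a standard normal base sample:

```python
import numpy as np
from scipy.stats import norm

u, a, b = 0.3, 2.0, 1.0
eta = a * u + b                                  # eta = T(u)
density = norm.pdf(u) / abs(a)                   # pi(eta) = pi(u) |det J_T(u)|^{-1}
print(np.isclose(density, norm.pdf(eta, loc=b, scale=a)))   # True: both give the N(1, 2^2) density at eta
```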
Autoregressive flows are defined by a conditioner function and a transformer function. The transformer, \(v^{\prime}_{i}=\tau(v_{i};h_{i})\), is an invertible function parameterised by \(h_{i}\) that maps \(v_{i}\) to \(v^{\prime}_{i}\) for \(i=1,\ldots,d\). The conditioner, \(h_{i}=c_{i}(v_{<i})\), outputs values that parameterise the transformer function. The only constraint for the conditioner is the autoregressive property (the \(i\)-th element of the conditioner can only be conditioned on elements from \(v\in\mathbb{R}^{d}\) that have indices \(<i\)). This constraint ensures that the Jacobian is a triangular matrix allowing fast computation of the determinant in Equation 1. The sequence of transformations, \(T\), is composed of the transformer and conditioner functions repeated multiple times, with the output of the transformer, \(v^{\prime}_{i}\), being passed into the next conditioner function. Autoregressive flows have found popular usage in SBI applications.
The two flows most widely used for SBI are masked autoregressive flow (MAF, Papamakarios et al., 2017) and neural spline flow (NSF, Durkan et al., 2019). We consider NSF in more depth as it is the flow used for the examples in Section 4. NSF uses a spline-based transformer that defines a monotonically increasing piecewise function of \(K\) bins between \(K+1\) knots. Due to its expressive power, a rational quadratic function (quotient of two quadratic polynomials) is used for each bin. The conditioner output parameters \(h_{i}\) are the knot locations and the derivatives at the knots.
The conditioner is implemented in NSF as a coupling layer. A coupling layer splits the data into two parts. The first part, \((z_{1},\ldots,z_{\lfloor\frac{d}{2}\rfloor})\), is left untouched. The second part takes the unchanged first part as input, and outputs \((h_{\lfloor\frac{d}{2}\rfloor+1},\ldots,h_{d})\) using some function (typically a neural network). Finally, to make NSF a conditional normalising flow, we add \(\theta\) into the conditioner, \(h_{i}=c_{i}(v_{<i}\mid\theta)\). As the composition of \(T\) contains many neural networks, stochastic gradient-based optimisation is used to train the flow. The trained flow can then be embedded in an MCMC sampling scheme to sample from the approximate posterior.
Neural-based methods can efficiently sample the approximate posterior using MCMC methods. The evaluation of the normalising flow density is constructed to be fast. Also as we are using the trained flow as a surrogate function, no simulations are needed during MCMC sampling. Using automatic differentiation (Baydin et al., 2018), one can efficiently find the gradient of a NCDE and use it in an efficient MCMC sampler such as the No-U-Turn sampler (NUTS) (Hoffman and Gelman, 2014).
One categorisation of neural SBI methods is between amortised and sequential sampling schemes. These methods differ in the proposal distribution for \(\theta\). Amortised methods build a surrogate of the likelihood function \(q_{\phi}(S(x)\mid\theta)\) for any \(x\) within the support of the prior predictive distribution. Thus the trained flow can be used to approximate the posterior for any observed statistic, which is efficient if many datasets need to be analysed. Unfortunately, this requires using the prior as the proposal distribution. When the prior and posterior differ, there will be few training samples of \(x\) that are close to \(y\), and hence the trained flow may not be very accurate in the vicinity of the observed statistic.
Sequential approaches aim to update the proposal distribution, so that more training datasets are generated closer to \(S(y)\) to obtain a more accurate approximation of \(\pi(\theta|S(y))\). In this approach, \(R\) rounds of training is performed, with the proposal distribution for the current round given by the approximate posterior for the previous round. The first round proposes \(\theta\sim\pi(\theta)\). At each round \(r\), a normalising flow, \(q_{r,\phi}(S(x)\mid\theta)\) is trained on all generated \((\theta,x)\in\mathcal{D}\).
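Schematically, the sequential scheme can be written as follows; `train_flow`, `mcmc_sample`, `simulate` and `summarise` are hypothetical placeholders for fitting the conditional flow and running MCMC on the surrogate posterior, so this is pseudocode rather than a runnable implementation:

```python
import numpy as np

def snl(observed_summary, prior, simulate, summarise, rounds, sims_per_round):
    """Sequential neural likelihood: pseudocode-level sketch with hypothetical helpers."""
    data, proposal = [], prior.sample            # round 1 proposes from the prior
    flow, posterior_draws = None, None
    for _ in range(rounds):
        for _ in range(sims_per_round):
            theta = proposal()
            data.append((theta, summarise(simulate(theta))))
        flow = train_flow(data)                  # q_phi(S(x) | theta) fit on all pairs so far
        # Proposal for the next round: MCMC on q_phi(S(y) | theta) * pi(theta)
        posterior_draws = mcmc_sample(lambda th: flow.log_prob(observed_summary, th)
                                      + prior.log_prob(th))
        proposal = lambda: posterior_draws[np.random.randint(len(posterior_draws))]
    return flow, posterior_draws
```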
Development of neural methods for SBI is an active area of research with many recent approaches also approximating the likelihood (Boelts et al., 2022; Wiqvist et al., 2021). Neural SBI methods need not use normalising flows, with some more recent approaches using diffusion models to approximate the score of the likelihood (Sharrock et al., 2022) or energy-based models to surrogate the likelihood (Glaser et al., 2022) or score (Pacchiardi and Dutta, 2022). Our robust extension of SNL can in principle be implemented with any of these likelihood estimators.
### Robust Extension to Sequential Neural Likelihood
Recent research has found neural SBI methods behave poorly under model misspecification (Bon et al., 2022; Cannon et al., 2022; Schmitt et al., 2021; Ward et al., 2022). It is not surprising that neural SBI methods suffer from the same issues as ABC and BSL when compatibility is not satisfied, as they are based on many of the same principles. Indeed, neural methods are known to struggle when the input differs from the training dataset, an issue known as out-of-distribution (OOD) data (Yang et al., 2021). This extends to normalising flows, which have been shown to fail to detect OOD data (Kirichenko et al., 2020). The poor performance of neural SBI under model misspecification has prompted the development of more robust methods.
Recently, methods have been developed to detect model misspecification when applying neural posterior estimation for both amortised (Ward et al., 2022) and sequential (Schmitt et al., 2021) approaches.1 Schmitt et al. (2021) use a maximum mean discrepancy (MMD) estimator to detect a "simulation gap" between the observed and simulated data. However, this is focused on detecting model misspecification and does not add robustness to inferences on \(\theta\). Ward et al. (2022) both detects and corrects for model misspecification similarly to Frazier and Drovandi (2021). Rather than explicitly introducing auxiliary variables, Ward et al. (2022) introduces an error model \(\pi(S(y)\mid S(x))\). The error model can be used to sample values \(S_{i}(x)\), \(i=1,\ldots,m\) for \(S(x)\) from its marginal posterior density, which is approximated by the density proportional to \(\pi(S(y)\mid S(x))q_{\phi}(S(x))\), where \(q_{\phi}(S(x))\) is a normalising flow approximating the prior predictive density of \(S(x)\). The marginal posterior for \(\theta\) is then approximated as an average of conditional density estimates \(q_{\phi}(\theta\mid S_{i}(x))\), for \(i=1,\ldots,m\), using a second conditional normalizing flow for estimating the conditional posterior of \(\theta\) given \(S(x)\). Both of the approaches described above use a surrogate of the posterior. There is thus a gap in the literature for robust neural methods that approximate the likelihood, which would be beneficial for applications where it is easier to emulate the likelihood than the posterior.
Footnote 1: Schmitt et al. (2021) also add robustness to model misspecification to BayesFlow (Radev et al., 2022). BayesFlow, like NPE, is an amortised neural approximation of the posterior.
We propose robust SNL (RSNL), a sequential approach that approximates the likelihood that is made robust to model misspecification using a similar approach to Frazier and Drovandi (2021). As outlined in Section 2.2, the approach of Frazier and Drovandi (2021) adjusts either the sample mean or sample covariance. In the case of the mean adjustment,
we can think of the adjustment being applied to the observed summary rather than the estimated summary mean, given the symmetry of the normal distribution. For RSNL, we apply this argument to shift the observed summary directly based on auxiliary adjustment parameters. When \(S(y)\) falls in the tail of the surrogate likelihood, the adjustment parameters can be activated to shift to a region of higher density. We thus evaluate \(q_{\phi}(S(y)-\Gamma\mid\theta)\) as the adjusted surrogate likelihood.2 So instead of targeting \(\pi(\theta\mid S(y))\), we are now estimating the approximate joint posterior,
Footnote 2: We could use notation \(q_{\phi}(S(y)\mid\theta,\Gamma)\). However, \(q_{\phi}(S(y)-\Gamma\mid\theta)\) highlights that we are using the flow trained on \(\mathcal{D}\), and the effect of \(\Gamma\) is solely shifting the location of the observed summaries.
\[\pi(\theta,\Gamma\mid S(y))\propto q_{\phi}(S(y)-\Gamma\mid\theta)\pi(\theta) \pi(\Gamma),\]
where we set \(\pi(\theta)\) and \(\pi(\Gamma)\) independently of each other.
We find that the prior choice, \(\pi(\Gamma)\), is crucial for RSNL. As in the mean adjustment approach of Frazier and Drovandi (2021), also known as robust BSL (RBSL), we impose a Laplace prior distribution on \(\Gamma\) to encourage shrinkage. We set the components of \(\Gamma\) to be independent, \(\pi(\Gamma)=\Pi_{i=1}^{d}\pi(\gamma_{i})\). We could follow Frazier and Drovandi (2021) and set each component to the same prior scale. However, we propose here to set the prior for each component to,
\[\pi(\gamma_{i})=\text{Laplace}(0,\lambda_{i})=\frac{1}{2\lambda_{i}}\exp\left(-\frac{|\gamma_{i}|}{\lambda_{i}}\right),\qquad\lambda_{i}=0.3\times|\overset{\sim}{S}(y)_{i}|,\]
where \(\overset{\sim}{S}(y)\) is the standardised observed summary (more details on the standardisation are given below). We set \(\pi_{0}(\gamma_{i})=\text{Laplace}(0,1)\) for the initial round. We recompute \(\overset{\sim}{S}(y)\) at each round \(r\) and set \(\pi_{r}(\gamma_{i})\) accordingly. The idea here is that the standardised observed statistic gives us information on how likely a summary is to be misspecified (i.e. the further in the tails, the more likely it is to be misspecified). This approach allows highly misspecified summaries to be corrected, while reducing the noise introduced by the adjustment parameters when the summary is well specified. A consequence of this is that, regardless of how far an incompatible statistic is in the tail, the adjustment parameters will have enough density to (theoretically) map the misspecified summary to the mean of the simulated summaries.
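As a concrete illustration of this step, the round-\(r\) prior scales can be computed directly from the standardised observed summary. The sketch below is a minimal NumPy version, assuming `summaries_all` holds all simulated summaries in \(\mathcal{D}\); the constant 0.3 and the unconditional standardisation follow the text, while the function name and the small `eps` guard are our own additions.

```python
import numpy as np

def laplace_prior_scales(s_obs, summaries_all, tau=0.3, eps=1e-8):
    """Per-component Laplace scales lambda_i = tau * |standardised observed summary_i|."""
    mu = summaries_all.mean(axis=0)        # unconditional sample mean over all of D
    sd = summaries_all.std(axis=0) + eps   # unconditional sample standard deviation
    s_tilde = (s_obs - mu) / sd            # standardised observed summary
    return tau * np.abs(s_tilde)           # larger scale when S(y) lies far in the tails
```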
To our knowledge, there are two main scenarios where this prior is not suitable. First, if a summary is incompatible but after standardisation is very close to 0. This seems unlikely but may be possible when the simulated summaries have a complex multi-modal distribution. In this case, RSNL will behave similarly to SNL for the particular summary. Second, a summary is correctly specified but is in the tails. This is again unlikely, and would have the effect of increasing the noise introduced by the adjustment parameters. If there is a concern, the researcher can inspect summary statistic plots or the posterior predictive and use a different prior. However, we find that our choice of prior works well for the examples in Section 4.
The summaries are standardised to account for varying scales. This is done after the additional simulations are generated at each training round. As all generated parameters are used to train the flow, standardisation is computed using all of \(\mathcal{D}\). Standardisation serves two purposes: 1) it aids training of the flow, and 2) it puts the adjustment parameters on roughly the same scale as the summaries. When adjusting the summaries, we note that the standardisation is done unconditionally (i.e. the sample mean and sample
standard deviation have been calculated using all simulated summary statistics in the training set). Standardisation conditional on \(\theta\) may be needed for more heteroskedastic simulation functions. We discuss some possible extensions in Section 5.
We are targeting the augmented joint posterior for \(\theta\) and \(\Gamma\). Algorithm 1 shows the full process to sample the RSNL approximate posterior. RSNL, like SNL, can evaluate both the neural likelihood and the gradient of the approximate posterior efficiently, so we use NUTS for MCMC sampling. This differs from Ward et al. (2022) who, due to the use of a spike-and-slab prior, use mixed Hamiltonian Monte Carlo, an MCMC algorithm for inference on both continuous and discrete variables. The main difference between SNL and RSNL is that the MCMC sampling is now targeting the adjusted posterior. Hence, RSNL can be used in place of SNL with little difficulty.
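For readers without access to Algorithm 1, the overall procedure can be summarised schematically as follows. This is an outline rather than the authors' implementation; `simulator`, `prior_sample`, `train_flow`, `summaries_from` and `sample_adjusted_posterior` (the NUTS step targeting the adjusted posterior) are placeholders.

```python
def rsnl(simulator, prior_sample, s_obs, num_rounds=10, sims_per_round=1000):
    """Schematic RSNL loop: accumulate simulations, refit the surrogate, sample (theta, Gamma)."""
    data = []                                        # D: all (theta, S(x)) pairs so far
    theta_samples = prior_sample(sims_per_round)     # round-0 proposals come from the prior
    for r in range(num_rounds):
        data += [(theta, simulator(theta)) for theta in theta_samples]
        flow = train_flow(data)                      # conditional surrogate q_phi(S(x) | theta)
        scales = laplace_prior_scales(s_obs, summaries_from(data))   # round-r prior on Gamma
        theta_samples, gamma_samples = sample_adjusted_posterior(
            flow, s_obs, scales, num_samples=sims_per_round)
    return theta_samples, gamma_samples
```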
Once we have samples from the joint posterior, we can consider the \(\theta\) and \(\Gamma\) posterior samples separately. We can use the \(\theta\) samples to conduct Bayesian inference on functions of \(\theta\) of interest for the application at hand. Additionally, the \(\Gamma\) approximate posterior samples can be used for model criticism.
RSNL can be used for model criticism similarly to RBSL and the ABC approach of Ratmann et al. (2009). It is expected that when the assumed and actual DGP are incompatible, RSNL will behave similarly to RBSL and there will be a discrepancy between the prior and posterior distributions for the components of \(\Gamma\). Visual inspection should be sufficient to detect a discrepancy. However, a researcher can use any statistical distance function to assess this.
Another common approach for model criticism is posterior predictive checks, as was recommended for RBSL in Frazier and Drovandi (2021). For RSNL, we can also use the posterior predictive, \(\pi(S(\tilde{y})\mid S(y))\), where \(S(\tilde{y})\) is generated at sampled parameters from the approximate posterior, to visually assess model incompatibility. If \(S(y)\) appears in the tails with little to no support, then this could be evidence of model misspecification. Additionally, the usual diagnostics for neural SBI methods are also available for RSNL. This is advantageous not only for detecting model misspecification, but also for making inference robust to misspecification.
## 4 Examples
In this section, we apply SNL and RSNL on three illustrative examples with model misspecification. Across all examples the following design and hyperparameters are used unless otherwise specified. We use a conditional NSF for \(q_{\phi}(S(x)\mid\theta)\) as implemented in the flowjax package (Ward, 2023). The flow design closely follows the choices in the sbi package (Tejero-Cantero et al., 2020). For the rational quadratic spline transformer, we use 10 bins over the interval [-5, 5]. The transformer function defaults to the identity function outside of this range. This is important for the considered misspecified models, as often the observed summary is in the tails. The conditioner consists of five coupling layers, with each coupling layer using a multilayer perceptron of two layers with 50 hidden units. The flow is trained using the Adam optimiser (Kingma and Ba, 2015) with a learning rate of \(5\times 10^{-4}\). Training of the flow is stopped when either the validation loss, calculated on 10% of the samples, has not improved over 20 epochs or when the limit of 500 epochs is reached.
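The stopping rule used when training the flow can be written compactly. The following is a generic sketch of that rule only (not the flowjax or sbi training code); `train_epoch` and `validation_loss` are placeholders.

```python
def fit_with_early_stopping(flow, train_data, val_data, max_epochs=500, patience=20):
    """Stop when the validation loss has not improved for `patience` consecutive epochs."""
    best_loss, best_flow, epochs_without_improvement = float("inf"), flow, 0
    for _ in range(max_epochs):
        flow = train_epoch(flow, train_data)      # one pass of Adam over the training split
        loss = validation_loss(flow, val_data)    # loss on the held-out 10% of samples
        if loss < best_loss:
            best_loss, best_flow, epochs_without_improvement = loss, flow, 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break
    return best_flow
```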
We parallelise NUTS (MCMC) sampling across four chains and set the target acceptance probability to 0.95.3 Chain convergence is assessed by checking that the rank normalised \(\hat{R}\) of Vehtari et al. (2021) is in the range (1.0, 1.05) and the effective sample size (ESS) is reasonably close to the number of MCMC iterations. For each example, the autocorrelation, ESS and trace plots are also inspected. The chains are initialised at a random sample from the previous round. We then run each chain for 3500 iterations and discard the first 1000 iterations for burn-in. The resulting 10,000 combined samples from the four MCMC chains are thinned by a factor of 10. Model simulations are then run at the 1000 sampled
model parameter values. We use thinning so that potentially expensive model simulations are run using relatively independent parameter values, taking advantage of the fact that for typical applications running the MCMC with the learned normalising flow is much faster than running model simulations. The number of training rounds is set to \(R=10\), resulting in a total of 10,000 model simulations. After \(R\) rounds, we use \(q_{R,\phi}(S(y)\mid\theta)\) to run 100,000 MCMC iterations targeting the approximate posterior.
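For reference, the MCMC stage targeting the adjusted posterior can be set up along the following lines with NumPyro. This is a hedged sketch rather than the released code: `flow_log_prob` (the trained surrogate's conditional log-density) and `prior_scales` are assumed to be available, and the \(\mathcal{N}(0,10^{2})\) prior on \(\theta\) is the one used in the contaminated normal example below.

```python
import jax
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def adjusted_posterior(s_obs, flow_log_prob, prior_scales):
    theta = numpyro.sample("theta", dist.Normal(0.0, 10.0))
    gamma = numpyro.sample("gamma", dist.Laplace(0.0, prior_scales).to_event(1))
    # Surrogate likelihood evaluated at the shifted observed summary, q_phi(S(y) - Gamma | theta).
    numpyro.factor("surrogate_loglik", flow_log_prob(s_obs - gamma, theta))

kernel = NUTS(adjusted_posterior, target_accept_prob=0.95)
mcmc = MCMC(kernel, num_warmup=1000, num_samples=2500, num_chains=4)
mcmc.run(jax.random.PRNGKey(0), s_obs, flow_log_prob, prior_scales)
theta_samples = mcmc.get_samples()["theta"]
gamma_samples = mcmc.get_samples()["gamma"]
```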
RSNL is implemented using the JAX(Bradbury et al., 2018) and NumPyro(Phan et al., 2019) libraries.4 All computations were done using four single-core Intel Xeon CPU processors provided through Google Colab.
Footnote 4: These libraries were selected as we find they lead to orders of magnitude speed-up over the PyTorch(Paszke et al., 2019) and Pyro(Bingham et al., 2019) packages for MCMC sampling.
### Contaminated Normal
Here we consider the contaminated normal example from Frazier and Drovandi (2021) to assess how SNL and RSNL perform under model misspecification. In this example, the DGP is assumed to follow:
\[y_{i}=\theta+\epsilon_{i},\quad\epsilon_{i}\overset{\text{i.i.d.}}{\sim} \mathcal{N}(0,1),\]
where \(i=1,\dots,100\). However, the actual DGP follows:
\[y_{i}=\begin{cases}\theta+\epsilon_{1,i},&\epsilon_{1,i}\sim\mathcal{N}(0,1),\text{ with probability }\omega\\ \theta+\epsilon_{2,i},&\epsilon_{2,i}\sim\mathcal{N}(0,\sigma_{\epsilon}^{2} ),\text{ with probability }1-\omega\end{cases}.\]
The sufficient statistic for \(\theta\) under the assumed DGP is the sample mean, \(S_{1}(y)=\frac{1}{100}\sum_{i=1}^{100}y_{i}\). For demonstration purposes, let us also include the sample variance, \(S_{2}(y)=\frac{1}{99}\sum_{i=1}^{100}(y_{i}-S_{1}(y))^{2}\). When \(\sigma_{\epsilon}\neq 1\), we are unable to replicate the sample variance under the assumed model. The actual DGP is set to \(\omega=0.8\) and \(\sigma_{\epsilon}=2.5\), and hence the sample variance is incompatible. Since \(S_{1}(y)\) is sufficient, so is \(S(y)\), and one might still be optimistic that useful inference will result. To investigate the impact of misspecification, the observed summary is set to \(S(y)=(1.0,2.0)^{\top}\), where the sample mean is the expected value at the true parameter, but the observed sample variance significantly deviates from what can be generated from the assumed DGP. Under the assumed DGP we have that \(b(\theta)=(\theta,1)^{\top}\), for all \(\theta\in\Theta\). We thus have \(\inf_{\theta\in\Theta}||b(\theta)-b_{0}||>0\), and our model meets the criteria for misspecification as outlined in Section 2.2. We use the prior, \(\theta\sim\mathcal{N}(0,10^{2})\).
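To make the setup concrete, a minimal simulator for the assumed and actual DGPs and the two summaries might look as follows; this is illustrative code written for this description, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def assumed_dgp(theta, n=100):
    return theta + rng.standard_normal(n)

def actual_dgp(theta, n=100, omega=0.8, sigma_eps=2.5):
    # With probability omega use unit noise, otherwise inflate the standard deviation.
    scale = np.where(rng.uniform(size=n) < omega, 1.0, sigma_eps)
    return theta + scale * rng.standard_normal(n)

def summaries(y):
    return np.array([y.mean(), y.var(ddof=1)])   # sample mean S_1(y) and sample variance S_2(y)

s_obs = np.array([1.0, 2.0])                     # the incompatible observed summary used in the text
```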
Figure 1: Posterior plots for the contaminated normal model. The leftmost plot shows the estimated univariate SNL (dashed) and RSNL (solid) posterior densities for \(\theta\). The true parameter value is shown as a vertical dashed line. The right two plots show the estimated marginal posterior (solid) and prior (dashed) densities for the components of \(\Gamma\).
It is evident in Figure 1 that RSNL gives reliable inference with high posterior density surrounding the true parameter value, \(\theta=1\). In stark contrast, SNL gives unreliable inference with negligible support around the true parameter value.
Figure 1 also includes the posteriors for the components of \(\Gamma\). For \(\gamma_{1}\) (associated with the compatible summary statistic), the prior and posterior are effectively indistinguishable. This is consistent with the behaviour of RBSL. For \(\gamma_{2}\) (associated with the incompatible statistic), the misspecification is detected since the posterior has high density away from 0. Visual inspection is sufficient here for the modeller to detect the misspecified summary and provides some insight to adjust the model accordingly. The introduction of additional auxiliary variables does not come with excessive computational costs. For this example, the total computation time for RSNL is around 40 minutes and for SNL is around 10 minutes.
### Misspecified MA(1)
We follow the misspecified moving average (MA) of order 1 example in Frazier and Drovandi (2021), where the assumed DGP is an MA(1) model, \(y_{t}=w_{t}+\theta w_{t-1}\), \(-1\leq\theta\leq 1\) and \(w_{t}\stackrel{{\text{i.i.d.}}}{{\sim}}\mathcal{N}(0,1)\). However, the true DGP is actually a stochastic volatility model of the form:
\[y_{t}=\exp\left(\frac{z_{t}}{2}\right)u_{t},\quad z_{t}=\omega+\kappa z_{t-1}+\sigma_{v}v_{t},\]
where \(0<\kappa,\sigma_{v}<1\), and \(u_{t},v_{t}\stackrel{{\text{i.i.d.}}}{{\sim}}\mathcal{N}(0,1)\). We generate the observed data using the parameters \(\omega=-0.76\), \(\kappa=0.90\) and \(\sigma_{v}=0.36\). The data is summarised using the autocovariance function, \(\zeta_{j}(x)=\frac{1}{T}\sum_{i=1+j}^{T}x_{i}x_{i-j}\), where \(T\) is the number of observations and \(j\in\{0,1\}\) is the lag. We use the prior \(\theta\sim\mathcal{U}(-1,1)\) and set \(T=100\).
It can be shown that for the assumed DGP, \(b(\theta)=(1+\theta^{2},\theta)^{\top}\). Under the true DGP, \(b_{0}=(\exp(\frac{\omega}{1-\kappa}+\frac{\sigma_{v}^{2}}{2(1-\kappa^{2})}),0 )^{\top}\approx(0.0007,0)^{\top}\). As evidently \(\inf_{\theta\in\Theta}||b(\theta)-b_{0}||>0\), the model is misspecified as outlined in Section 2.2. We also have a unique pseudo-true value with \(||b(\theta)-b_{0}||\) minimised at \(\theta=0\). The desired behaviour for our robust algorithm is to detect incompatibility in the first summary statistic and centre the posterior around this pseudo-true value. As the first element of \(b_{0}\) goes from \(1\to 0\), \(||b(\theta)-b_{0}||\) increases and the impact of model misspecification becomes more pronounced. We set \(S(y)=(0.01,0)^{\top}\) as a representative observed summary from the true DGP to assess the performance of SNL and RSNL under heavy misspecification.
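The corresponding simulators and summaries can be sketched as follows (again an illustration rather than the released code); the stochastic volatility recursion is written with the \(\sigma_{v}v_{t}\) innovation, consistent with the expression for \(b_{0}\) above.

```python
import numpy as np

rng = np.random.default_rng(0)

def assumed_ma1(theta, T=100):
    w = rng.standard_normal(T + 1)
    return w[1:] + theta * w[:-1]

def true_sv(omega=-0.76, kappa=0.90, sigma_v=0.36, T=100):
    z = np.empty(T)
    z[0] = omega / (1 - kappa)                    # start at the stationary mean of z_t
    for t in range(1, T):
        z[t] = omega + kappa * z[t - 1] + sigma_v * rng.standard_normal()
    return np.exp(z / 2) * rng.standard_normal(T)

def autocov(x, j):
    T = len(x)
    return np.sum(x[j:] * x[: T - j]) / T         # lag-j autocovariance summary zeta_j(x)

def summaries(y):
    return np.array([autocov(y, 0), autocov(y, 1)])
```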
Figure 2 shows that RSNL both detects the incompatible sample variance statistic and ensures that the approximate posterior concentrates onto the parameter value that favours matching of the compatible statistic, i.e. \(\theta=0\). SNL, however, is biased and has less support for the pseudo-true value.
As expected, \(\gamma_{1}\) (corresponding to the incompatible statistic) has significant posterior density away from 0 as seen in Figure 2. Also, the posterior for \(\gamma_{2}\) (corresponding to the compatible statistic) closely resembles the prior. The computational price of making inferences robust for the misspecified MA(1) model is minimal, with RSNL taking around 20 minutes to run and SNL taking around 10 minutes.
### Contaminated SLCP
The simple likelihood complex posterior (SLCP) model devised in Papamakarios et al. (2019) is a popular example in the SBI literature. The assumed DGP is a bivariate normal distribution with the mean vector, \(\mu_{\theta}=(\theta_{1},\theta_{2})^{\top}\), and covariance matrix:
\[\Sigma_{\theta}=\begin{bmatrix}s_{1}^{2}&\rho s_{1}s_{2}\\ \rho s_{1}s_{2}&s_{2}^{2}\end{bmatrix},\]
where \(s_{1}=\theta_{3}^{2}\), \(s_{2}=\theta_{4}^{2}\) and \(\rho=\tanh(\theta_{5})\). This results in a nonlinear mapping from \(\theta=(\theta_{1},\theta_{2},\theta_{3},\theta_{4},\theta_{5})\in\mathbb{R}^ {5}\to y\in\mathbb{R}^{2}\). The posterior is "complex" having multiple modes due to squaring as well as vertical cutoffs from the uniform prior that we define in more detail later. Hence, the likelihood is expected to be easier to emulate than the posterior, making it suitable for an SNL type of approach. Four draws are generated from this bivariate distribution giving the likelihood, \(g(y\mid\theta)=\prod_{j=1}^{4}\mathcal{N}(y_{j};\mu_{\theta},\Sigma_{\theta})\) for \(y=(y_{1},y_{2},y_{3},y_{4})\). No summarisation is done and the observed data is used in place of the summary statistic. We generate the observed data at parameter values, \(\theta=(0.7,-2.9,-1.,-0.9,0.6)^{\top}\), and place an independent \(\mathcal{U}(-3,3)\) prior on each component of \(\theta\).
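A sketch of the assumed SLCP simulator is given below (our own illustrative code); the four bivariate draws are flattened so that the observed data can be used directly in place of a summary statistic, and the contaminated fifth draw described next is appended in the same way.

```python
import numpy as np

rng = np.random.default_rng(0)

def slcp_simulator(theta, n_draws=4):
    """Assumed DGP: n_draws i.i.d. samples from the bivariate normal defined by theta."""
    mu = np.array([theta[0], theta[1]])
    s1, s2, rho = theta[2] ** 2, theta[3] ** 2, np.tanh(theta[4])
    cov = np.array([[s1 ** 2, rho * s1 * s2],
                    [rho * s1 * s2, s2 ** 2]])
    return rng.multivariate_normal(mu, cov, size=n_draws).ravel()

theta_true = np.array([0.7, -2.9, -1.0, -0.9, 0.6])
s_obs_clean = slcp_simulator(theta_true)          # the four non-contaminated draws
```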
To impose misspecification on this illustrative example, we draw a contaminated 5-th observation, \(y_{5}\) and use the observed data \(y=(y_{1},y_{2},y_{3},y_{4},y_{5})\). Contamination is done by applying the (stochastic) misspecification transform considered in Cannon et al. (2022), \(y_{5}=x_{5}+100z_{5}\), where \(x_{5}\sim\mathcal{N}(\mu_{\theta},\Sigma_{\theta})\), and \(z_{5}\sim\mathcal{N}((0,0)^{\top},100\mathbb{I}_{2})\). The assumed DGP is not compatible with this contaminated observation, and ideally the approximate posterior would ignore the influence of this observation.5
Footnote 5: Due to the stochastic transform, there is a small chance that the contaminated draw is compatible with the assumed DGP. However, the observed contaminated draw considered here is \((-172.7,-79.9)^{\top}\), which is very unlikely under the assumed DGP.
We thus want our inference to only use information from the four draws from the true DGP. The aim is to closely resemble the SNL posterior where the observed data is the four non-contaminated draws. Figure 3 shows the estimated posterior densities for SNL (for both compatible and incompatible summaries) and RSNL for the contaminated SLCP example. When including the contaminated 5-th draw, SNL produces a nonsensical posterior with little useful information. Conversely, the RSNL posterior has reasonable density around the true parameters and has identified the separate modes.
The first eight compatible statistics are shown in Figure 4. The prior and posteriors reasonably match each other. In contrast, the observed data from the contaminated draw
Figure 2: Posterior plots for the misspecified MA(1) model. The leftmost plot shows the estimated univariate SNL (dashed) and RSNL (solid) posterior densities for \(\theta\). The true parameter value is shown as a vertical dashed line. The right two plots show the estimated marginal posterior (solid) and prior (dashed) densities for the components of \(\Gamma\).
is recognised as being incompatible and has significant density away from 0 as evident in Figure 5. Again, there is not a significant computational burden induced to estimate the adjustment parameters, with a total computational time of around 6 hours to run RSNL and around 4 hours for SNL.
Figure 3: Univariate and bivariate density plots of the estimated posterior for \(\theta\) on the SLCP example. Plots on the diagonal are the univariate posterior densities obtained by RSNL (solid) and SNL (dashed) on the contaminated SLCP example, and for SNL without the contaminated draw (dotted). The bivariate posterior distributions for contaminated SLCP are visualised as contour plots when applying RSNL (solid, lower triangle off-diagonal) and SNL (dashed, upper triangle off-diagonal). The true parameter values are visualised as a vertical dashed line for the marginal plots and the \(\times\) symbol in the bivariate plots.
## 5 Discussion
In this work, we have introduced a new neural SBI method that is robust to model misspecification. To our knowledge, this is the first method that both detects and corrects for model misspecification while targeting the likelihood or using sequential sampling. RSNL was shown on several illustrative examples to be robust to model misspecification while still conducting efficient inference.
We have shown that RSNL can provide useful inference with a fraction of the number of simulation calls of ABC and BSL methods. For example, only 10,000 model simulations were run to produce the RSNL posterior for the contaminated normal model. In contrast, RBSL in Frazier and Drovandi (2021) used in the order of millions of model simulations. A more rigorous comparison, such as the benchmarks in Lueckmann et al. (2021), could be done between ABC, BSL and neural SBI methods to ascertain their robustness to model misspecification and behaviour across different numbers of simulations. Such a benchmark would ideally include challenging applications with real-world data to demonstrate the utility of these methods for scientific applications.
In the mean adjustment approach of RBSL, Frazier and Drovandi (2021) account for the different summary scales and the fact that these scales could be \(\theta\)-dependent by adjusting the mean using \(\mu_{m}(\theta)+\sigma_{m}(\theta)\circ\Gamma\), where \(\sigma_{m}(\theta)\) is a vector of estimated standard deviations of the model summaries at \(\theta\). In RBSL, these standard deviations are estimated from the \(m\) model simulations generated based on \(\theta\). Analogously, we could consider a similar
Figure 4: Estimated marginal posterior (solid) and prior (dashed) for components of \(\Gamma\) corresponding with the non-contaminated draws.
Figure 5: Estimated marginal posterior (solid) and prior (dashed) for components of \(\Gamma\) corresponding with the contaminated draw.
approach in RSNL and define the target
\[\pi(\theta,\Gamma\mid S(y))\propto q_{\phi}(S(y)-\sigma(\theta)\circ\Gamma\mid \theta)\pi(\theta)\pi(\Gamma).\]
The question then becomes, how do we estimate \(\sigma(\theta)\) in the context of RSNL? In the MCMC phase we do not want to generate more model simulations as this would be costly. If we believed that the standard deviation of the model summaries had little dependence on \(\theta\), we could set \(\sigma(\theta)=\sigma=\sigma(\hat{\theta})\) where \(\hat{\theta}\) is some reasonable point estimate of the parameter. Another approach would, for each \(\theta\) proposed in the MCMC, estimate \(\sigma(\theta)\) using surrogate model simulations generated using the fitted normalising flow. This would be much faster than using actual model simulations, but could still slow down the MCMC phase substantially. Instead of using a normalising flow, we could train a mixture density network (Bishop, 1994) to emulate the likelihood, which would then lead to an analytical expression for \(\sigma(\theta)\). A multivariate mixture density network could replace the flow completely, or the multivariate flow for the joint summary could be retained and a series of univariate mixture density networks applied to each summary statistic for the sole purpose of emulating \(\sigma(\theta)\). We plan to investigate these options in future research.
One might be concerned that the inclusion of adjustment parameters will introduce noise into the estimated posterior to a deleterious extent. Empirically we have found this to have a negligible effect, especially for the considered prior choice. This is consistent with the findings for RBSL in Frazier and Drovandi (2021). Additionally, Hermans et al. (2022) noted that SBI methods (including SNL) tend to produce overconfident posterior approximations. Hence, it seems unlikely that the small amount of noise from the adjustment parameters would result in overly conservative posterior estimates.
A modeller can determine what summaries are misspecified by comparing the prior and posterior densities for each component of \(\Gamma\). We relied on visual inspection of the adjustment parameters for the presented examples. This could be tedious when considering a large number of summaries, and an automated approach could be considered instead.
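One simple way to automate this check, given posterior draws of \(\Gamma\), is to flag any component whose posterior mass has largely drifted outside a central prior interval; the rule below is only an illustration of such an approach, not a procedure from the paper.

```python
import numpy as np

def flag_misspecified(gamma_samples, prior_scales, prior_mass=0.99, threshold=0.5):
    """Flag gamma_i when most posterior draws fall outside the prior's central interval."""
    # For Laplace(0, b), P(|gamma| <= t) = 1 - exp(-t / b), so the central interval half-width is:
    half_width = -prior_scales * np.log(1.0 - prior_mass)
    outside = np.abs(gamma_samples) > half_width       # shape (n_samples, d)
    return outside.mean(axis=0) > threshold            # one boolean flag per summary statistic
```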
Here we considered an adjustment approach through an MCMC sampling scheme as in Frazier and Drovandi (2021). However, sequential neural variational inference (SNVI, Glockler et al., 2022) can provide useful inference with a reduced number of model simulations compared to other sequential neural methods, such as SNL. SNVI targets either the likelihood or the likelihood-ratio, another common target in SBI (Durkan et al., 2020; Hermans et al., 2020). Future work could investigate the impact of model misspecification on SNVI and how to make SNVI robust through the incorporation of adjustment parameters. The adjustment could be to the likelihood as in RSNL, or through an adjustment approach that targets the likelihood-ratio.
The choice of \(\pi(\Gamma)\) was found to be important in practice. Our prior choice was based on the dual requirements to minimise noise introduced by the adjustment parameters if the summaries are compatible, and to be capable of shifting the summary a significant distance from the origin if they are incompatible. The horseshoe prior is an appropriate choice for these requirements. Further work could consider how to implement this robustly in a NUTS sampler. Another approach is the spike-and-slab prior as in Ward et al. (2022). Further work is needed to determine the most appropriate prior.
The relation between model misspecification in SBI as in Frazier et al. (2020) and OOD data detection in machine learning (Yang et al., 2021) could be investigated more thoroughly. Cannon et al. (2022) obtained favourable results using OOD detection methods based on ensemble posteriors (mixtures of independent posteriors, Lakshminarayanan et al.
(2017)) and sharpness-aware minimisation (Foret et al., 2021). Potentially the benefits of these OOD methods could be heightened when also using adjustment parameters.
## Acknowledgements
Ryan P. Kelly was supported by an Australian Research Training Program Stipend and a QUT Centre for Data Science Top-Up Scholarship. Christopher Drovandi was supported by an Australian Research Council Future Fellowship (FT210100260).
| Simulation-based inference techniques are indispensable for parameter estimation in mechanistic, simulable models whose likelihoods are intractable. Classical statistical approaches such as approximate Bayesian computation and Bayesian synthetic likelihood have been studied under both well-specified and misspecified settings, but they suffer from reduced efficiency because many model simulations are wasted. Neural approaches such as sequential neural likelihood (SNL) avoid this waste by using all model simulations to train a neural surrogate that approximates the likelihood. However, the performance of SNL under misspecification is unreliable and leads to inaccurate parameter estimates. In this paper, by introducing additional adjustment parameters, we increase the robustness of SNL to misspecification and model
2309.16432 | Two-point spectroscopy of Fibonacci topoelectrical circuits | Topoelectrical circuits are meta-material realizations of topological
features of condensed matter systems. In this work, we discuss experimental
methods that allow a fast and straightforward detection of the spectral
features of these systems from the two-point impedance of the circuit. This
allows to deduce the full spectrum of a topoelectrical circuit consisting of N
sites from a single two-point measurement of the frequency resolved impedance.
In contrast, the standard methods rely on $N^2$ measurements of admittance
matrix elements with a subsequent diagonalization on a computer. We
experimentally test our approach by constructing a Fibonacci topoelectrical
circuit. Although the spectrum of this chain is fractal, i.e., more complex
than the spectra of periodic systems, our approach is successful in recovering
its eigenvalues. Our work promotes the topoelectrical circuits as an ideal
platform to measure spectral properties of various (quasi)crystalline systems. | Selma Franca, Torsten Seidemann, Fabian Hassler, Jeroen van den Brink, Ion Cosma Fulga | 2023-09-28T13:32:20 | http://arxiv.org/abs/2309.16432v1 | # Two-point spectroscopy of Fibonacci topoelectrical circuits
###### Abstract
Topoelectrical circuits are meta-material realizations of topological features of condensed matter systems. In this work, we discuss experimental methods that allow a fast and straightforward detection of the spectral features of these systems from the two-point impedance of the circuit. This allows to deduce the full spectrum of a topoelectrical circuit consisting of \(N\) sites from a single two-point measurement of the frequency resolved impedance. In contrast, the standard methods rely on \(N^{2}\) measurements of admittance matrix elements with a subsequent diagonalization on a computer. We experimentally test our approach by constructing a Fibonacci topoelectrical circuit. Although the spectrum of this chain is fractal, i.e., more complex than the spectra of periodic systems, our approach is successful in recovering its eigenvalues. Our work promotes the topoelectrical circuits as an ideal platform to measure spectral properties of various (quasi)crystalline systems.
_Introduction --_ Experimental difficulties in realizing and detecting topological features of condensed-matter systems have prompted the development of metamaterials -- classical systems designed to reproduce desired topological features. The initial proposal [1] involved photonic crystals where electromagnetic waves propagate unidirectionally along the boundary, thus forming the photonic analog of the integer quantum Hall effect (IQHE) [2]. In addition to photonic metamaterials [3; 4; 5; 6; 7; 8; 9; 10], there are acoustic [11; 12; 13; 14; 15; 16; 17; 18], mechanical [19; 20; 21], microwave [22; 23; 24; 25; 26; 27] and electrical circuit [28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41] realizations of various topological phases of matter.
Topoelectrical circuits are networks of nodes connected by electronic components such as resistors, capacitors, and inductors. They are described by an admittance matrix \(Y(f)\) that represents the current response to a set \(\mathbf{V}(f)\) of locally applied voltages at frequency \(f\), and that can be mapped to an effective tight-binding Hamiltonian [30; 36; 45]. So far, the experimental characterization of these classical systems mostly relied on detecting topological boundary phenomena using two-point impedance measurements [30; 45]. This impedance, \(Z_{a,b}(f)\), can be determined by measuring the voltage response between the nodes \(a\) and \(b\) to an input current oscillating at a specific frequency. If this frequency corresponds to the energy of a topological boundary state of the effective Hamiltonian, and if the nodes are chosen such that one is in the bulk and the other in the region where this topological state is localized, the resulting two-point impedance is very large in realistic systems (it even diverges for ideal ones). Thus, the presence of a topological boundary state inside of the bulk gap results in a single, isolated impedance peak.
Gaining access to the full spectrum of the effective Hamiltonian simulated by a topoelectrical circuit, beyond the detection of individual, spectrally isolated modes, is challenging. The spectra of topoelectrical circuits have so far been determined by measuring the full admittance matrix, element by element, and then diagonalizing it on a computer [46]. This is a time-consuming process, since the number of measurements scales quadratically with the number of sites in the system, meaning that \(N^{2}\) separate measurements are required for a circuit simulating a system made of \(N\) sites. Such disadvantageous scaling hinders the full spectrum measurement of a topoelectrical circuit, and undermines interest in realizing systems with intriguing spectral properties, like quasicrystals.
Quasicrystals are systems with incommensurate energy scales [47; 48], whose spectra may be fractal, resulting in local power law singularities of the associated density of states [49]. Since they are much rarer in nature, their meta-material realizations are even more relevant for studying their spectral properties [49; 50]. The prototypical example in one-dimension is the Fibonacci chain, an array of sites related by two possible hopping strengths arranged into a quasiperiodic pattern [49]. Beyond its fractal spectrum, this chain is interesting because it can be adiabatically related to a two-dimensional Hofstadter model that realizes the IQHE physics. Consequently, the Fibonacci chain can support topological boundary states [51; 52].
In this work, we discuss a method that allows detection of an extensive number of topoelectrical circuit modes in two-point setup with fixed nodes. This method relies on measuring the linear response function of the circuit to a frequency-dependent input current. We identify the eigenvalues of the effective tight-binding model [46; 53; 54] by determining the resonances of the two-point impedance through appropriate signal processing techniques. We test our approach under realistic conditions by constructing a topoelectrical Fibonacci chain.
Despite having a fractal spectrum that is more complex than the spectrum of a periodic system, we correctly identify most of the Fibonacci chain eigenvalues in a single frequency resolved measurement by utilizing the spectral symmetry constraint imposed by the Fibonacci Hamiltonian.
We start by introducing the Hamiltonian of the topoelectrical Fibonacci chain and showing how the linear response function is able to detect the eigenvalues of the circuit. We proceed with the experimental setup and discuss the measured data and corresponding numerical tools used to recover the Fibonacci chain spectrum.
_Topoelectrical Fibonacci chain_ -- In this work, we realize the 8th approximant of the infinite quasiperiodic Fibonacci chain consisting of \(N=34\) sites [49]. The Hamiltonian of the off-diagonal Fibonacci chain model is given by
\[H(\phi)=\sum_{n=1}^{N}t_{n}(\phi)c_{n+1}^{\dagger}c_{n}+\text{h.c.}, \tag{1}\]
where \(c_{n}^{\dagger}\) and \(c_{n}\) represent the creation and annihilation operator of a particle at site \(n\). The hoppings \(t_{n}(\phi)=\alpha+\beta\operatorname{sign}[\chi_{n}(\phi)]\) (\(\alpha,\beta\in\mathbb{R}\)) alternate between two values \(t_{A}\) and \(t_{B}\) as a function of the index \(n\), such that \(\alpha=(t_{A}+t_{B})/2\) and \(\beta=(t_{A}-t_{B})/2\). The alternation pattern is determined by the characteristic function \(\chi_{n}(\phi)=\cos(\frac{2\pi n}{\tau}+\phi)-\cos(\frac{\pi}{\tau})\) with the golden ratio \(\tau=\frac{1+\sqrt{5}}{2}\) and the phason angle \(\phi\in[0,2\pi)\)[51]. Setting \(\phi=\pi\) creates the Fibonacci chain with two pairs of edge states that belong to different topological gaps. These pairs of edge states occur at opposite energies because the Hamiltonian obeys the chiral symmetry constraint \(\mathcal{C}H(\phi)\mathcal{C}^{\dagger}=-H(\phi)\) with \(\mathcal{C}_{nm}=\delta_{nm}(-1)^{n}\). Besides being symmetric with respect to zero energy, the Fibonacci chain spectrum is fractal [55]. The eigenvalues are arranged in a self-similar pattern, as we can divide the spectrum into three clusters (or bands) of eigenvalues, and each cluster can be further split into three sub-clusters, and so on [49].
In the following, we describe an electrical circuit that realizes the Fibonacci chain. This circuit consists of \(N=34\) nodes related by connecting wires and capacitors of distinct capacitances \(C_{A}\) and \(C_{B}\) that emulate the hoppings \(t_{A},t_{B}\) of the tight-binding model and are thus arranged according to \(\operatorname{sign}[\chi_{n}(\pi)]\). We show the circuit diagram inside the bulk of the system in Fig. 1(a), and in Fig. 1(b) the corresponding segment of a constructed circuit board. The orange and green boxes in Fig. 1(a) represent two possible local environments of bulk circuit nodes that differ by whether identical capacitances \((C_{A},C_{A})\) or distinct ones \((C_{A(B)},C_{B(A)})\) are used to relate a node \(n\) to its neighbors. In the former (latter) case, for the grounding of node \(n\) we use a capacitor of capacitance \(\tilde{C}_{n}=C_{B}\) (\(\tilde{C}_{n}=C_{A}\)) that is connected in parallel to an inductor of inductance \(L\), such that the relation \(\tilde{C}_{n}+C_{n-1}+C_{n}=2C_{A}+C_{B}\) holds.
Each node is described by Kirchhoff's law [30]
\[I_{n}=G_{n-1}(V_{n}-V_{n-1})+G_{n}(V_{n}-V_{n+1})+g_{n}V_{n}, \tag{2}\]
where \(G_{n}=2\pi jfC_{n}\) is the admittance between nodes \(n\) and \(n+1\), \(f\) is the frequency, \(C_{n}\in\{C_{A},C_{B}\}\) depending on \(\operatorname{sign}[\chi_{n}(\pi)]\), and \(j^{2}=-1\). The admittance \(g_{n}\) between node \(n\) and the ground equals \(g_{n}=2\pi jf\tilde{C}_{n}+1/(2\pi jfL)\), where conductance \(\tilde{C}_{n}\in\{C_{A},C_{B}\}\). By grouping all currents and voltages into vectors \(\mathbf{I}\) and \(\mathbf{V}\), we obtain the admittance matrix \(Y(f)\)
\[Y(f)=\tilde{g}(f)\mathbb{I}-2\pi jfH \tag{3}\]
in terms of which Kirchhoff's rules are given by \(\mathbf{I}(f)=Y(f)\mathbf{V}(f)\); here, \(H\) is the Fibonacci Hamiltonian Eq. (1) with hoppings \(t_{n}\) replaced by \(C_{n}\) and \(\tilde{g}(f)=2\pi jf(2C_{A}+C_{B})+1/(2\pi jfL)\).
To experimentally characterize the spectral properties of this circuit, we measure its response to the applied current \(I(f)\). The voltage at node \(b\) is related to an input current at node \(a\) via the two-point impedance
\[Z_{a,b}(f)=\frac{V_{a}(f)-V_{b}(f)}{I_{a}(f)}=\sum_{n}\frac{|v_{n,a}-v_{n,b}|^{ 2}}{Y_{n}}, \tag{4}\]
that can be calculated from the eigenvalues \(Y_{n}(f)\) and eigenvectors \(v_{n}(f)\) of the admittance matrix [30].
Next, we describe how \(Z_{a,b}(f)\) can be used to reconstruct the Fibonacci chain spectrum. From Eq. (4), we see that \(Z_{a,b}(f)\) has a pole at frequency \(f_{n}\) every time \(Y_{n}(f_{n})=0\). Using Eq. (3), we can relate the admittance matrix eigenvalues \(Y_{n}\) to Hamiltonian eigenvalues \(E_{n}\) as \(Y_{n}(f)=\tilde{g}(f)-2\pi jfE_{n}\). Therefore, \(Y_{n}(f_{n})=0\to E_{n}=\tilde{g}(f_{n})/2\pi jf_{n}\), resulting in
\[E_{n}=2C_{A}+C_{B}-\frac{1}{4\pi^{2}Lf_{n}^{2}}; \tag{5}\]
Figure 1: Fibonacci topoelectrical circuit. (a) The circuit diagram between nodes \(n=3\) and \(n=8\). Orange and green boxes indicate two different configurations of topoolelectrical circuit junctions. (b) A photograph of the corresponding segment of the circuit board. We see all elements of the circuit diagram of panel (a), except inductors that are located on the backside.
we note in passing, that the energies are measured in units of capacitance. Due to the relation (5), reconstructing \(E_{n}\) relies on identifying the resonance frequencies \(f_{n}\) of the response function \(Z_{a,b}(f)\). In the following, we describe how this can be done in practice.
_Experimental setup and measurement analysis_ -- For experimental realization, we have used capacitors with nominal capacitances \(C_{A}=50\,\mathrm{nF}\) and \(C_{B}=100\,\mathrm{nF}\), and inductors with nominal inductance \(L=10\,\mu\mathrm{H}\). The capacitors and inductors are high quality components bought from KEMET and WURTH Elektronik, respectively, that were pre-selected to vary less than \(2\%\) from the corresponding nominal values of capacitances and inductances. Importantly, these circuit elements have small but non-vanishing direct current resistances \(R_{C}^{\mathrm{dc}}\approx 25\,\mathrm{m}\Omega\) and \(R_{L}^{\mathrm{dc}}\approx 85\,\mathrm{m}\Omega\). In the case of the inductors, the resistance is frequency dependent and goes from \(R_{L}^{\mathrm{ac}}\approx 105\,\mathrm{m}\Omega\) (at \(50\,\mathrm{kHz}\)) to \(R_{L}^{\mathrm{ac}}\approx 308\,\mathrm{m}\Omega\) (at \(250\,\mathrm{kHz}\)). For more details, see the Supplemental Material (SM) [56].
All measurements were performed with the lock-in amplifier SR865A manufactured by Stanford Research Systems [56]. We consider two configurations for the voltage probes; the "bulk-edge" (BE) configuration is realized by placing probes at nodes \(a=1\) and \(b=15\), while the "bulk-bulk" (BB) configuration has the probes at nodes \(a=10\) and \(b=24\). According to Eq. (4), the positions of the voltage probes determine the weights of the corresponding eigenstates in the impedance response. This results in a very different frequency dependence of the response functions \(|Z_{\mathrm{BE}}|\) and \(|Z_{\mathrm{BB}}|\) in range \(f\in($50\,\mathrm{k}\mathrm{H}\mathrm{z}$,$250\,\mathrm{k}\mathrm{H}\mathrm{z}$)\), see Fig. 2(a). To analyze these results, it is useful to define the frequency \(f_{0}=$112.5\,\mathrm{k}\mathrm{H}\mathrm{z}$\) corresponding to \(E=0\) as determined from Eq. (5) by setting \(E_{n}=0\) and using experimental values for \(C_{A},C_{B}\) and \(L\).
Our first observation is that \(|Z_{\mathrm{BE}}|\) and \(|Z_{\mathrm{BB}}|\) have far fewer features for frequencies \(f<f_{0}\) corresponding to energies \(E<0\) than for \(f>f_{0}\), where the positive part of the spectrum is located. This is a consequence of the nonlinear relationship between the eigenvalues \(E_{n}\) and resonant frequencies \(f_{n}\) in Eq. (5). This leads to the fact that the resonant frequencies of the negative eigenvalues are closer together than the ones corresponding to the positive eigenvalues. When this effect is combined with nonzero resistances \(R_{C}^{\mathrm{dc}},R_{L}^{\mathrm{dc}}\) and \(R_{L}^{\mathrm{ac}}\) that broaden the-delta-peaks of the ideal response function into Lorentzians, the resonant peaks for frequencies \(f<f_{0}\) are expected to be less visible than the ones for \(f>f_{0}\)[56]. The second important feature of Fig. 2a is the observation that \(|Z_{\mathrm{BE}}|\) has two very prominent peaks at frequencies (indicated by green lines) for which \(|Z_{\mathrm{BB}}|\) does not show any prominent features. This suggests that these peaks are induced by topological edge modes [30]. From the corresponding frequencies \(f_{\mathrm{edge},-}^{\mathrm{exp}}\approx$101.7\,\mathrm{k}\mathrm{H}\mathrm{z}$\) and \(f_{\mathrm{edge},+}^{\mathrm{exp}}\approx$127.5\,\mathrm{k}\mathrm{H}\mathrm{z}$\) using Eq. (5), we obtain the energies \(E_{\mathrm{edge},-}^{\mathrm{exp}}=$-44.9\,\mathrm{nF}$\) and \(E_{\mathrm{edge},+}^{\mathrm{exp}}=$44.2\,\mathrm{nF}$\). Note that the theoretical value for energy of the edge states is \(E_{\mathrm{edge},\pm}=$\pm 43.7\mathrm{nF}$\); the relative errors are \(\delta^{r}=|(E_{\mathrm{edge},-}^{\mathrm{exp}}-E_{\mathrm{edge},-})/E_{ \mathrm{edge},-}|=$2.75\%$\) and \(\delta^{r}=$1.14\%$\), respectively. Importantly, having \(|E_{\mathrm{edge},-}^{\mathrm{exp}}|\approx|E_{\mathrm{edge},+}^{\mathrm{exp }}|\approx|E_{\mathrm{edge},\pm}|\) is the experimental confirmation that the realized topoelectrical circuit has the chiral symmetry. We can use this symmetry to obtain an experimental value of the frequency \(f_{0}^{\mathrm{exp}}=\frac{1}{2}(f_{\mathrm{edge},-}^{\mathrm{exp}}+f_{ \mathrm{edge},+}^{\mathrm{exp}})=$114.6\,\mathrm{k}\mathrm{H}\mathrm{z}$\) with a relative error \(\delta^{r}=$1.87\%$\) compared to \(f_{0}\).
To determine more eigenvalues, we focus on the second derivative of the response function because differentiation reduces the amplitude of broader peaks in the \(Z_{a,b}(f)\) signal. This results in a better detection of resonances that have been previously obscured by a broader but stronger background [57]. In practice, calculating this derivative from the original data set is challenging because measurements always include some noise that manifests as random high-frequency and small amplitude deviations from the ideal signal. Since noise becomes more prominent with differentiation, we eliminate it from original data using a low-pass, 4-th order Butterworth filter [58]. This filter has a maximally flat frequency response in the passband, thus not giving rise to any additional frequency dependence upon its application [58].
We employ two different strategies for extracting the Fibonacci chain spectrum. Our first approach is based on searching for the frequencies \(f_{n}^{\mathrm{exp}}\) at which the function \(-\partial_{f}^{2}|Z_{a,b}|\) (and consequently \(|Z_{a,b}(f)|\)) has peaks. To calculate \(-\partial_{f}^{2}|Z_{a,b}|\), we employ the Butterworth filter with the cutoff frequency \(f_{c}=$0.01f_{\mathrm{Nq}}$\) on \(|Z_{a,b}(f)|\); here, \(f_{\mathrm{Nq}}\) denotes the Nyquist frequency defined as half of the sampling frequency \(f\). Due to aforementioned grouping effect of individual peaks in the lower frequency range, looking for 34 most prominent peaks of \(-\partial_{f}^{2}|Z_{a,b}|\) in the entire frequency range does not produce satisfying results. Be
cause of the chiral symmetry, we can instead focus on the frequency range \((f_{0},250\text{kHz})\) that corresponds to the positive part of the spectrum consisting of 17 eigenvalues. Using the SCIPY Python library [59], we find all the peaks of \(-\partial_{f}^{2}|Z_{\text{BE}}|\) and \(-\partial_{f}^{2}|Z_{\text{BB}}|\) in this frequency range and choose the 17 most prominent ones for both curves. These peaks are indicated with green circles in Figs. 2(b) and (c) for \(-\partial_{f}^{2}|Z_{\text{BE}}|\) and \(-\partial_{f}^{2}|Z_{\text{BB}}|\), respectively.
The corresponding spectra are constructed from pairs \((-E_{n}^{\text{exp}},E_{\text{np}}^{\text{exp}})\), with \(E_{n}^{\text{exp}}\) obtained from \(f_{n}^{\text{exp}}\) using Eq. (5). We plot these spectra in Figs. 3(a) and (b) for the BE and BB voltage probe configurations, respectively, along with the theoretical eigenvalues \(E_{n}\). We observe that both voltage probes are successful in detecting the edges of the upper band (and consequently the lower band), along with its inner sub-bands. The BE probe captures accurately the energies of two pairs of edge states but it detects a single resonance per pair. This behavior is also present for an ideal circuit, and originates from the energy degeneracy of two edge modes. On the other side, the BB probe detects two resonances inside the topological gap but is less accurate in measuring the energies of edge states. In total, for the BE probe the mean absolute error \(\delta^{\text{avg}}=\sum_{n=1}^{N}|E_{n}-E_{n}^{\text{exp}}|/N\) equals \(\delta^{\text{avg}}_{\text{BE}}=4.87\,\text{nF}\) while the median error is \(\delta^{\text{me}}_{\text{BE}}=3.81\,\text{nF}\). For the BB probe, we find \(\delta^{\text{avg}}_{\text{BB}}=4.17\,\text{nF}\) and \(\delta^{\text{m}}_{\text{BB}}=3.18\,\text{nF}\).
As these errors are small compared to the total energy range, we conclude that searching for peaks of \(-\partial_{f}^{2}|Z_{a,b}|\) is a fruitful strategy to recover the full spectrum. However, this approach focuses only on the amplitude of the frequency dependent response function, and thus misses possible information hidden in the corresponding phase component. To rectify this, we employ our second approach that is based on fitting the full signal \(-\partial_{f}^{2}Z_{a,b}\) to the linear combination of Lorentzians. To eliminate noise from the data, we use the Butterworth filter separately on \(\Re[Z_{a,b}(f)]\) and \(\Im[Z_{a,b}(f)]\) before calculating their second derivatives with respect to frequency and combining them to obtain \(-\partial_{f}^{2}Z_{a,b}\). We use frequency cutoffs \(f_{c}=0.03f_{\text{Nq}}\) (\(f_{c}=0.01f_{\text{Nq}}\)) for the BE (BB) configuration of voltage probes. The resulting signal \(-\partial_{f}^{2}Z_{a,b}\) is Fourier transformed into the time domain signal \(Z_{a,b}^{(2)}(t)=\mathcal{F}[-\partial_{f}^{2}Z_{a,b}]\) that is fitted to a sum of \(N\) damped exponentials as
\[Z_{a,b}^{(2)}(t)=\sum_{n=1}^{N}A_{n}^{\text{exp}}e^{j\phi_{n}^{\text{exp}}}e^ {(\alpha_{n}^{\text{exp}}+2\pi jf_{n}^{\text{exp}})t}, \tag{6}\]
where \(A_{n}^{\text{exp}},\phi_{n}^{\text{exp}},\alpha_{n}^{\text{exp}}\) and \(f_{n}^{\text{exp}}\) are the amplitudes, phases, damping factors and frequencies of the sinusoids, respectively. Assuming \(t=mT\) where \(m=0,...,N-1\) and \(T\) is the sampling period, the exponential factor becomes \(e^{(\alpha_{n}^{\text{exp}}+2\pi jf_{n}^{\text{exp}})mT}=z_{n}^{m}\), where \(z_{n}=e^{(\alpha_{n}^{\text{exp}}+2\pi jf_{n}^{\text{exp}})T}\). The poles \(z_{n}\) are found by solving a generalized eigenvalue equation using a matrix pencil (MP) operator that is constructed from the values \(Z_{a,b}^{(2)}(t)\)[60, 61, 62], see also the SM [56]. As there are 34 eigenvalues in theory, we look for 34 poles in our calculation.
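A bare-bones version of this pole-extraction step, without the SVD-based noise filtering of the full matrix pencil method, can be sketched as follows; the function names are chosen for illustration, and the capacitances and inductance used in the conversion to energies via Eq. (5) are the nominal component values quoted above.

```python
import numpy as np
from scipy.linalg import hankel, pinv, eig

def matrix_pencil_poles(x, n_poles):
    """Estimate poles z_n of a signal x[m] = sum_n a_n * z_n**m sampled on a uniform grid."""
    N = len(x)
    Lp = N // 2                                      # pencil parameter
    Y = hankel(x[: N - Lp], x[N - Lp - 1:])          # (N - Lp) x (Lp + 1) data matrix, Y[i, j] = x[i + j]
    poles = eig(pinv(Y[:, :-1]) @ Y[:, 1:], right=False)
    return poles[np.argsort(-np.abs(poles))][:n_poles]   # keep the dominant poles

def poles_to_energies(poles, T, L=10e-6, CA=50e-9, CB=100e-9):
    f = np.angle(poles) / (2 * np.pi * T)            # resonance frequencies f_n from the pole phases
    return 2 * CA + CB - 1.0 / (4 * np.pi ** 2 * L * f ** 2)   # Eq. (5), energies in farads
```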
The resulting spectrum, obtained using Eq. (5), is shown in Figs. 3(c) and (d) for both voltage probe configurations. While this approach can reconstruct the entire spectrum, we see that for both configurations it works better for positive eigenvalues, i.e., for frequencies \(f>f_{0}\). In general, the accuracy of the MP method declines as the energy is reduced which can be expected due to the aforementioned grouping effect of resonances. Moreover, for both probes, the method finds 15 (19) poles corresponding to negative (positive) energies. The additional positive poles arise at \(E\sim 1\text{nF}\) that is very close to \(E=0\) in comparison with the energy scale of the chain. In case of the BE probe where the edge modes dominate the response of the circuit, the MP method overestimates the number of edge modes in the upper topological gap but captures their energies well. For
Figure 3: Comparison between the theoretical and experimental spectra obtained using different methods of recovery. Here, the green crosses indicate edge states. The eigenvalues in panels (a) and (b) are given by the maxima of \(-\partial_{f}^{2}|Z_{\text{BE}}|\) and \(-\partial_{f}^{2}|Z_{\text{BB}}|\), see green circles in Figs. 2(b) and (c). In panels (c) and (d), 34 resonant frequencies are shown that were detected using the MP method. Panels (e) and (f) are obtained by mirroring the 17 largest positive eigenvalues from panels (c) and (d) with respect to \(E=0\).
the BB probe, the method finds a single mode for a pair of edge modes with \(E>0\), and attributes the missing edge mode resonance to the upper band. In total, we find \(\delta_{\rm BE}^{\rm avg}=21.67\,\rm nF\), \(\delta_{\rm BE}^{\rm m}=14.09\,\rm nF\), and \(\delta_{\rm BB}^{\rm avg}=19.83\,\rm nF\), \(\delta_{\rm BB}^{\rm m}=11.54\,\rm nF\). Such large errors reflect the fact that the MP method fails to capture the negative eigenvalues accurately.
The results of the MP-method can be improved by utilizing the spectral symmetry constraint, i.e., by constructing spectra from pairs \((-E_{n}^{\rm exp},E_{n}^{\rm exp})\), where \(E_{n}^{\rm exp}>0\) are 17 largest positive eigenvalues from Figs. 3(c) and (d). Results are shown in Figs. 3(e) and (f) for BE and BB configurations, respectively. This combined approach reduces the errors of measurements to \(\delta_{\rm BE}^{\rm avg}=4.24\,\rm nF\), \(\delta_{\rm BE}^{\rm m}=2.22\,\rm nF\) for the BE probe and \(\delta_{\rm BB}^{\rm avg}=7.68\,\rm nF\), \(\delta_{\rm BB}^{\rm m}=5.22\,\rm nF\) for the BB probe. Therefore, combining the MP method with the spectral symmetry constraint works the best for the BE probe, while searching for the peaks of \(-\partial_{f}^{2}|Z_{a,b}|\) yields better results for the BB probe.
For both probes, our best results have \(\delta_{a,b}^{\rm avg}\approx\delta_{E}/2\), where \(\delta_{E}=8.55\,\rm nF\) is the theoretical average energy spacing. These results could be improved by reducing the noise in the measurement and the resistances of circuit elements. Contrary to the present study, which measured \(V_{a}(f),V_{b}(f),I_{a}(f)\) separately and thus increased the chance of random events, employing additional lock-in amplifiers would allow for a simultaneous measurement of all three quantities and thus reduce the noise. Reducing resistances of circuit elements, on the other hand, is not straightforward: for example, lowering \(R_{L}^{\rm ac}(100\,\rm kHz)\) generally requires reducing the inductance \(L\) of the inductors, resulting in a larger frequency range needed to determine all the eigenvalues, which leads to an increase of \(R_{L}^{\rm ac}\). As a result, the inductors will produce additional heating, which washes out features due to the increased noise. An interesting idea for future research is to investigate whether superconducting elements with significantly smaller resistances can improve the accuracy of our results.
_Conclusion_ -- In this work, we have shown how the response function of an electrical circuit can be used to recover the full spectrum of an underlying condensed matter system simulated by this circuit. We have constructed, for the first time, the Fibonacci topoelectrical chain that has a fractal spectrum due to its quasicrystalline nature. We have demonstrated that the spectrum can be recovered from a single measurement using two distinct methods of data analysis. We have corroborated our findings by changing the positions of the voltage probes as well as the boundary conditions of the Fibonacci topoelectrical circuit [56]. In conclusion, our work promotes topoelectrical circuits as an ideal meta-material platform for studying spectral properties of (quasi)crystalline systems.
_Acknowledgements_ -- We thank Ulrike Nitzsche for technical assistance. This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy through the Wurzburg-Dresden Cluster of Excellence on Complexity and Topology in Quantum Matter - _ct.qmat_ (EXC 2147, project-id 390858490) and under Germany's Excellence Strategy - Cluster of Excellence Matter and Light for Quantum Computing (ML4Q) EXC 2004/1 - 390534769. S. F. acknowledges financial support from the European Union Horizon 2020 research and innovation program under grant agreement No. 829044 (SCHINES).
_Competing Interests Statement_ -- The authors declare no competing interests.
| Topoelectrical circuits are meta-material realizations of topological features of condensed matter systems. In this work, we discuss experimental methods that allow a fast and straightforward detection of the spectral features of these systems from the two-point impedance of the circuit. This makes it possible to deduce the full spectrum of a topoelectrical circuit consisting of N sites from a single two-point measurement of the frequency resolved impedance. In contrast, the standard methods rely on N^2 measurements of admittance matrix elements with a subsequent diagonalization on a computer. We experimentally test our approach by constructing a Fibonacci topoelectrical circuit. Although the spectrum of this chain is fractal, i.e., more complex than the spectra of periodic systems, our approach is successful in recovering its eigenvalues. Our work promotes topoelectrical circuits as an ideal platform to measure the spectral properties of various (quasi)crystalline systems.
2309.06621 | A Reinforcement Learning Approach for Robotic Unloading from Visual
Observations | In this work, we focus on a robotic unloading problem from visual
observations, where robots are required to autonomously unload stacks of
parcels using RGB-D images as their primary input source. While supervised and
imitation learning have accomplished good results in these types of tasks, they
heavily rely on labeled data, which are challenging to obtain in realistic
scenarios. Our study aims to develop a sample efficient controller framework
that can learn unloading tasks without the need for labeled data during the
learning process. To tackle this challenge, we propose a hierarchical
controller structure that combines a high-level decision-making module with
classical motion control. The high-level module is trained using Deep
Reinforcement Learning (DRL), wherein we incorporate a safety bias mechanism
and design a reward function tailored to this task. Our experiments demonstrate
that both these elements play a crucial role in achieving improved learning
performance. Furthermore, to ensure reproducibility and establish a benchmark
for future research, we provide free access to our code and simulation. | Vittorio Giammarino, Alberto Giammarino, Matthew Pearce | 2023-09-12T22:22:28 | http://arxiv.org/abs/2309.06621v1 | # A Reinforcement Learning Approach for Robotic Unloading from Visual Observations
###### Abstract
In this work, we focus on a robotic unloading problem from visual observations, where robots are required to autonomously unload stacks of parcels using RGB-D images as their primary input source. While supervised and imitation learning have accomplished good results in these types of tasks, they heavily rely on labeled data, which are challenging to obtain in realistic scenarios. Our study aims to develop a _sample efficient_ controller framework that can learn unloading tasks _without the need for labeled data_ during the learning process. To tackle this challenge, we propose a hierarchical controller structure that combines a high-level decision-making module with classical motion control. The high-level module is trained using Deep Reinforcement Learning (DRL), wherein we incorporate a safety bias mechanism and design a reward function tailored to this task. Our experiments demonstrate that both these elements play a crucial role in achieving improved learning performance. Furthermore, to ensure reproducibility and establish a benchmark for future research, we provide free access to our code and simulation.
## I Introduction
Robotic unloading is generally defined as the set of tasks in which robots are deployed to unload items from containers, trucks, or other transportation vehicles. The successful progress of this technology represents a compelling opportunity for the future, as it can address various challenges encountered in logistics, manufacturing, and warehousing. Within these industries, unloading tasks involve physically demanding and repetitive actions that can pose risks for human workers. In this regard, robotic unloading offers a way to enhance workers' safety and mitigate hazards associated with heavy lifting and challenging work environments.
In this paper, we investigate robotic unloading tasks from visual observations (see Fig. 1 for an overview of the environment). Specifically, our objective is to enable a robotic manipulator to autonomously unload stacks of parcels by using RGB-D images as primary input source. We formulate this problem as a three-dimensional pick-and-place task, where the parcels, arranged in piles, are picked from the stack and placed on a floor conveyor. Previous studies have addressed pick-and-place by integrating objects' pose estimation [1, 2] with scripted planning and motion control [3]. While these systems demonstrate robustness in structured environments, they are unsuitable for uncertain and unstructured settings, which require improved generalization capabilities. In order to address these challenges, recent years have witnessed a surge of interest in machine learning techniques. In particular, end-to-end Reinforcement Learning (RL) has been successfully used for pick-and-place in [4, 5]. However, end-to-end RL faces difficulties in real-world scenarios due to the large amount of data required to achieve acceptable performance. To improve data efficiency, recent work has integrated RL with object-centric assumptions such as keypoints [6, 7], embeddings [8] or dense descriptors [9, 10]. These representations are typically learned through supervised learning [11], which often involves tedious and expensive data labeling or annotation processes. Another line of research has explored Imitation Learning (IL), also known as learning from demonstrations [12, 13, 14]. Despite achieving promising results, IL remains a supervised learning approach that relies on collecting expert demonstrations, which is akin to collecting labeled data and can be costly, time-consuming, or even infeasible. As a result, the main goal of this paper is to develop a _sample efficient_ controller framework that can learn the robotic unloading task in Fig. 1, _without requiring any form of labeled data_.
Towards addressing this problem, we propose a hierarchical controller structure that separates the robot decision-making process from the low-level module. The decision-making component, named high-level controller, is trained using DRL [15, 16] and more specifically Deep Q-Learning (DQL) [17] from RGB images. Meanwhile, the low-level module relies on classical trajectory planning and motion control techniques. Within this framework, our work makes two main contributions.
From an algorithmic perspective, our main novelties lie in the high-level controller, aiming to improve the sample efficiency of our DQL pipeline. First, we equip our Deep Q-Networks with a _safety bias mechanism_ with the goal of biasing the decision policy towards safe end-effector configurations. Note that an end-effector configuration is considered safe if it is reachable, i.e., it is contained within the robot workspace. Additionally, we propose a task-specific reward function _which takes into account the verticality_ of our unloading task. In order to test the impact of these mechanisms on learning performance, we conduct an ablation study and show how both these elements are crucial to accomplish improved results in our task.
Our second contribution involves the development of a simulated environment using the PyBullet physics simulator [18]. Simulators play a crucial role in prototyping RL algorithms as they offer a cost-effective and risk-free testing
environment. This aspect is particularly valuable for industry-oriented tasks like robotic unloading, which are challenging to replicate in a research setting. Having a simulator available becomes essential in facilitating new research in this domain. Therefore, we provide open access to our environment, with the goal of establishing an interesting benchmark for future studies in this field.
The remainder of the paper is organized as follows: Section II provides a summary of the work related to robotic unloading. Section III introduces notation and background on RL. Section IV provides a detailed description of the simulation environment. Section V presents the hierarchical controller and outlines the algorithm used to train the high-level controller. Finally, Section VI presents our experimental results and Section VII concludes the paper providing a general discussion on our findings.
## II Related Work
In the following, we briefly review the most relevant work on robotic unloading. One of the earliest studies in this field is presented in [19], which offers an overview of the main technical challenges associated with automatic unloading. This study proposes solutions based on a 3D laser scanner perception system and scripted trajectories.
In recent years, there has been a significant focus on leveraging deep learning (DL) techniques to enhance perception systems tailored to robotic unloading tasks. Papers such as [20, 21] formulate algorithms for accurate object detection and parcel segmentation within the context of robotic unloading. However, these studies primarily concentrate on perception and do not address the decision-making and control aspects of the robot.
Other studies have explored the integration of parcel segmentation with robotic control problems. In [22] for instance, perception, planning, and control are individually addressed and subsequently combined within a unified framework. In particular, a RGB-D camera is attached to the robot gripper for perception, and segmentation models are utilized to identify the safest object in the scene. A customized motion planning framework is then employed for control. Similarly, in [23], a 3D vision processing algorithm is introduced for parcel segmentation, enabling on-the-fly determination of multiple gripping strategies.
Regarding the decision-making problem, papers such as [24, 25] introduce a reasoning framework based on decision trees for optimal unloading, which is then combined with motion planning. However, these papers do not explicitly address the perception problem. Similarly, in [26], a Q-learning-based algorithm is proposed for optimal decision-making, assuming accurate detection, perfect stacking, and uniform-sized boxes.
Compared to these studies, our research takes a comprehensive approach to the unloading problem. We explore controllers that integrate perception, decision-making, and motion control as an interconnected system and treat this interconnection as a unified problem.
## III Preliminaries
Unless indicated otherwise, we use uppercase letters (e.g., \(S_{t}\)) for random variables, lowercase letters (e.g., \(s_{t}\)) for values of random variables, script letters (e.g., \(\mathcal{S}\)) for sets and reference frames, bold lowercase letters (e.g., \(\mathbf{\theta}\)) for vectors and bold uppercase letters (e.g. \(\mathbf{H}\)) for matrices. We denote expectation as \(\mathbb{E}[\cdot]\).
Our decision-making process is modeled as a finite-horizon discounted Markov Decision Process (MDP) described by the tuple \((\mathcal{S},\mathcal{A},\mathcal{T},\mathcal{R},\rho_{0},\gamma)\), where \(\mathcal{S}\) is the set of states and \(\mathcal{A}\) is the set of actions. \(\mathcal{T}:\mathcal{S}\times\mathcal{A}\to P(\mathcal{S})\) is the transition probability function where \(P(\mathcal{S})\) denotes the space of probability distributions over \(\mathcal{S}\), \(\mathcal{R}:\mathcal{S}\times\mathcal{A}\to\mathbb{R}\) is the reward function which maps state-action pairs to scalar rewards, \(\rho_{0}\in P(\mathcal{S})\) is the initial state distribution, and \(\gamma\in[0,1)\) the discount factor. We define the agent as a stationary policy \(\pi:\mathcal{S}\to P(\mathcal{A})\), where \(\pi(a|s)\) is the probability of taking action \(a\) in state \(s\). When a function is parameterized with parameters \(\mathbf{\theta}\in\Theta\subset\mathbb{R}^{k}\) we write \(\pi_{\mathbf{\theta}}\).
_Reinforcement learning._ Given an MDP and a stationary policy \(\pi:\mathcal{S}\to P(\mathcal{A})\), the RL objective is to learn an optimal policy, \(\pi^{\star}\), which maximizes the expected total discounted reward
\[J(\pi)=\mathbb{E}_{\pi}\Big{[}\sum_{t=0}^{T}\gamma^{t}\mathcal{R}(s_{t},a_{t}) \Big{]}, \tag{1}\]
Fig. 1: Overview of our robotic unloading setup. Fig. 1(a) illustrates a snapshot of the visual observations available to the robot for the decision-making process. These observations are captured by a camera positioned on the top-left side of the robot and consist of \(720\times 1280\) high-resolution RGB-D images. Fig. 1(b) presents an overview of the unloading task. Stacks of parcels are positioned in front of an industrial KUKA KR70 robotic manipulator equipped with a suction gripper. Using the visual information depicted in Fig. 1(a), the robot has to select a parcel to pick, grasp it, and then place it on the ground on its right side. The parcels in the scene are randomized in terms of color within the brown spectrum. A video demonstrating the task is available in the Supplementary Materials.
where \(\tau=(s_{0},a_{0},s_{1},a_{1},\ldots,a_{T-1},s_{T})\) are trajectories sampled according to \(s_{0}\sim\rho_{0}\), \(a_{t}\sim\pi(\cdot|s_{t})\) and \(s_{t+1}\sim\mathcal{T}(\cdot|s_{t},a_{t})\), and \(T\) is the number of decision steps in a single episode. Note that, neither the transition probability function \(\mathcal{T}\) nor the reward function \(\mathcal{R}\) are available to the learning agent. Therefore, the agent interacts with the environment, collects transitions \((s_{t},a_{t},r_{t},s_{t+1})\), where \(r_{t}=\mathcal{R}(s_{t},a_{t})\), and exploits these transitions to estimate \(J(\pi)\) in (1) in order to learn \(\pi^{*}\) as \(\pi^{*}\in\arg\max_{\pi}J(\pi)\).
Well-known algorithms solve the RL problem by estimating, and optimizing, the value functions induced by the policy \(\pi\). We define the state value function as \(V^{\pi}(s)=\mathbb{E}_{\tau}[\sum_{t=0}^{T}\gamma^{t}\mathcal{R}(s_{t},a_{t})|S_{0}=s]\) and the state-action value function as \(Q^{\pi}(s,a)=\mathbb{E}_{\tau}[\sum_{t=0}^{T}\gamma^{t}\mathcal{R}(s_{t},a_{t})|S_{0}=s,A_{0}=a]\). Furthermore, \(V^{\pi}(s)=\mathbb{E}_{a\sim\pi(\cdot|s)}[Q^{\pi}(s,a)]\). Note that the optimal policy \(\pi^{*}\in\arg\max_{\pi}J(\pi)\) induces the optimal state value function
\[V^{*}(s)=\max_{\pi}V^{\pi}(s),\quad\forall\,s\in\mathcal{S}.\]
Therefore, given \(V^{*}(s)\) and assuming \(\mathcal{T}\) is known, we can retrieve \(\pi^{*}(s)\) for all \(s\in\mathcal{S}\) as the action \(a\in\mathcal{A}\) that leads to the highest expected return \(\mathbb{E}_{s_{t+1}\sim\mathcal{T}(\cdot|s_{t},a)}[V^{*}(s_{t+1})]\) for all \(s_{t}\in\mathcal{S}\).
In the RL setting, where \(\mathcal{T}\) is unknown, Temporal Difference (TD) methods [15] compute \(V^{*}(s)\) and \(\pi^{*}(s)\) by leveraging the following iterative updating rule for \(Q^{\pi}(s_{t},a_{t})\):
\[Q^{\pi}(s_{t},a_{t})\gets Q^{\pi}(s_{t},a_{t})+\alpha(Y-Q^{\pi}(s_{t},a_{t })), \tag{2}\]
where \(\alpha\) is a learning rate and \(Y\) is a target value as in a standard regression problem.
Given a set of transitions \((s_{t},a_{t},r_{t},s_{t+1},a_{t+1})\) generated by the interactions of the policy \(\pi\) with the environment, setting \(Y=r_{t}+\gamma Q^{\pi}(s_{t+1},a_{t+1})\) yields the on-policy update typical of SARSA [15]. On the other hand, setting \(Y=r_{t}+\gamma\max_{a}Q^{\pi}(s_{t+1},a)\) yields the off-policy update used in \(Q\)-learning [15]. Given the optimal state-action value function \(Q^{*}(s,a)\), we have \(V^{*}(s)=\max_{a}Q^{*}(s,a)\) and \(\pi^{*}(s)\in\arg\max_{a}Q^{*}(s,a)\) for all \(s\in\mathcal{S}\).
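As a concrete illustration of the update rule in (2), the following minimal sketch applies both targets to a tabular state-action value function; the function and variable names are illustrative and not taken from the paper's implementation.

```python
def td_update(Q, s, a, r, s_next, a_next, n_actions,
              alpha=0.1, gamma=0.99, off_policy=True):
    """One application of Eq. (2); Q is a dict mapping (state, action) to a value."""
    if off_policy:
        # Q-learning target: bootstrap with the greedy action in s_next
        target = r + gamma * max(Q.get((s_next, b), 0.0) for b in range(n_actions))
    else:
        # SARSA target: bootstrap with the action actually taken in s_next
        target = r + gamma * Q.get((s_next, a_next), 0.0)
    q_sa = Q.get((s, a), 0.0)
    Q[(s, a)] = q_sa + alpha * (target - q_sa)
```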
## IV Simulated Environment
In the following we provide a detailed description of our robotic unloading task. The environment is summarized in Fig. 1 and is based on the PyBullet physics simulator [18]. The simulated environment can be accessed at our GitHub repository1.
Footnote 1: [https://github.com/VittorioGiammarino/RL-for-unloading-from-pixels](https://github.com/VittorioGiammarino/RL-for-unloading-from-pixels)
_Agent._ The decision agent controls an industrial 6-axis KUKA KR70 robotic manipulator. The robot inertial frame corresponds to the world frame and is denoted with \(\mathcal{I}\). We refer to the end-effector pose with the homogeneous transformation \(\mathbf{H}_{\mathcal{I}\mathcal{E}}\), where \(\mathcal{E}\) is the robot end-effector frame. The robot end-effector is a suction gripper with a contact sensor which enables contact detection between the gripper and the other objects of the scene. The unloading task relies on the visual observations illustrated in Fig. 1(a) and 1(b). These images are captured by an RGB-D camera positioned at the top-left side of the robot, providing \(720\times 1280\) high-resolution images. The intrinsic and extrinsic parameters of the camera are known and, along with the depth image, are used to transform from the 2D pixel space to the robot inertial frame \(\mathcal{I}\)[27]. Additionally, the agent is provided with the robot workspace boundaries in \(\mathcal{I}\), denoted with \(W_{\mathcal{I}}\).
_Task._ For each episode, our task involves unloading a stack of \(42\) parcels, arranged as shown in Fig. 1. Each parcel is a cube with \(0.25m\) edge length and a randomized surface color within the brown spectrum. At each decision step, the agent is required to select a parcel, grasp it, and then place it on the ground, where a floor conveyor will complete the unloading. The complexity of this problem, compared to more classical pick-and-place or rearrangement tasks, arises from the sequential nature of the decision-making process and the limited margin of error allowed by the task. In this regard, Fig. 2 illustrates two episodes generated by different decision policies. In the upper row, a successful unloading sequence is depicted, where the agent successfully unloads all the \(42\) parcels. Conversely, the lower row shows an unsuccessful unloading sequence, where a single decision has a negative impact on the subsequent scene, ultimately compromising the agent's final outcome. Note that the simulation environment is modelled as a finite-horizon MDP with \(T=42\), i.e., the number of decision steps is equivalent to the number of initialized parcels at \(t=0\). When a mistake is made, as shown in Fig. 2 (lower), we remove the \(n\) parcels that land outside the agent's workspace \(W_{\mathcal{I}}\), and the agent incurs a time penalty of \(n\), i.e., \(t\gets t+n\) rather than \(t\gets t+1\). Similarly, when the agent attempts a pick outside of \(W_{\mathcal{I}}\), a parcel is removed from the top of the stack and \(t\gets t+1\). In other words, the agent is forced to skip a step. This ensures \(T=42\) for all the episodes, preventing infinite loops during the training process. Note that infinite loops, like repeatedly choosing parcels outside of \(W_{\mathcal{I}}\), tend to occur frequently in the initial stages of the learning process and can prevent progress if not addressed.
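A small helper makes the episode-clock bookkeeping above explicit; this is an illustrative sketch of the rules just described, not code from our repository.

```python
T = 42  # horizon: one decision step per parcel initialized at t = 0

def advance_clock(t, pick_in_workspace, n_parcels_lost):
    """Return the next decision step index according to the penalty rules above.

    pick_in_workspace: whether the attempted pick lies inside the workspace W_I.
    n_parcels_lost: parcels that landed outside the workspace after a bad pick.
    """
    if not pick_in_workspace:
        return t + 1                 # a parcel is removed from the stack, step skipped
    if n_parcels_lost > 0:
        return t + n_parcels_lost    # time penalty proportional to the lost parcels
    return t + 1                     # nominal successful step
```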
## V Methods
In this section, we introduce our hierarchical controller framework, consisting of a high-level decision-making module and a low-level trajectory planning and motion control module (cf. Fig. 3). Moreover, we provide a comprehensive description of our DQL pipeline, used to train the agent's decision-making policy.
_High-level controller._ Given a visual observation \(s_{t}\in\mathcal{S}\), the goal of the high-level controller is to provide a corresponding picking end-effector pose, denoted by the homogeneous transformation \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{pick}}\), which maximizes the expected number of unloaded parcels over the episode. Hence, at each decision step \(t\), we want
\[f(s_{t})\rightarrow\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{pick}},\quad t\in[0, T). \tag{3}\]
The function \(f\) in (3) is defined as a composition of two main steps. In the first step, a policy \(\pi_{\mathbf{\theta}}:\mathcal{S}\to P(\mathcal{A})\) takes a \(64\times 64\) RGB image of the scene and selects a single pixel \(a_{t}=(u,v)_{t}\). The \(64\times 64\) image is a cropped and
resized version of the original \(720\times 1280\) image in Fig. 1(a). Note that the original image is cropped assuming prior knowledge of the area containing the stack of parcels during the unloading process. Using an aligned depth image, the selected \(a_{t}=(u,v)_{t}\) is transformed into Cartesian coordinates, representing the picking end-effector position in \(\mathcal{I}\) denoted with \(\mathbf{p}_{\mathcal{IE}}^{\text{pick}}(a_{t})\). The picking position is then associated with a rotation matrix \(\mathbf{C}_{\mathcal{IE}}^{\text{pick}}(a_{t})\), corresponding to the picking end-effector orientation. Picking position \(\mathbf{p}_{\mathcal{IE}}^{\text{pick}}(a_{t})\) and orientation \(\mathbf{C}_{\mathcal{IE}}^{\text{pick}}(a_{t})\) yield the picking end-effector pose \(\mathbf{H}_{\mathcal{IE}}^{\text{pick}}(a_{t})\), where, compared to the notation in (3), we explicitly state the dependency on \(a_{t}=(u,v)_{t}\). In this work, we assume a fixed end-effector orientation \(\mathbf{C}_{\mathcal{IE}}^{\text{pick}}(a_{t})\), which is computed to be orthogonal to the plane of the stack front surface (the visible surface in Fig. 1(a), 2 and 3). It is important to note that \(\mathbf{p}_{\mathcal{IE}}^{\text{pick}}(a_{t})\) and \(\mathbf{C}_{\mathcal{IE}}^{\text{pick}}(a_{t})\) are interdependent choices. In our setting, we condition \(\mathbf{p}_{\mathcal{IE}}^{\text{pick}}(a_{t})\) on \(\mathbf{C}_{\mathcal{IE}}^{\text{pick}}(a_{t})\), i.e., \(\mathbf{p}_{\mathcal{IE}}^{\text{pick}}(a_{t})\) is learned given \(\mathbf{C}_{\mathcal{IE}}^{\text{pick}}(a_{t})\). The alternative setting, where \(\mathbf{C}_{\mathcal{IE}}^{\text{pick}}(a_{t})\) is conditioned on \(\mathbf{p}_{\mathcal{IE}}^{\text{pick}}(a_{t})\), is left for future work.
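The pixel-to-pose step described above is a standard camera deprojection; a minimal sketch, assuming a pinhole intrinsics matrix \(K\) and a known camera-to-world transform, is given below (parameter names are illustrative).

```python
import numpy as np

def pixel_to_world(u, v, depth, K, H_world_cam):
    """Back-project pixel (u, v) with its depth into the robot inertial frame.

    K: 3x3 camera intrinsics; H_world_cam: 4x4 camera pose in the inertial frame.
    """
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    # pinhole back-projection into the camera frame (homogeneous coordinates)
    p_cam = np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth, 1.0])
    return (H_world_cam @ p_cam)[:3]  # picking position p_IE^pick in frame I
```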
_Learning algorithm._ The picking policy \(\pi_{\mathbf{\theta}}\) is learned via DQL [17]. We define \(K\) critic networks as \(Q_{\mathbf{\theta}_{k}}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\), parameterized by a \(43\)-layer encoder-decoder residual network (ResNet) [28] where the same encoder is shared across all the critic networks.
The critic networks output picking confidence values over the pixel space and are trained to minimize:
\[\mathcal{L}_{\mathbf{\theta}_{k}}(\mathcal{B}) =\mathbb{E}_{(s_{t},a_{t},r_{t},s_{t+1})-\mathcal{B}}[(y_{t}-Q_{ \mathbf{\theta}_{k}}(s_{t},a_{t}))^{2}], \tag{4}\] \[y_{t} =r_{t}+\gamma\max_{a}\min_{k}Q_{\mathbf{\theta}_{k}}(s_{t+1},a), \tag{5}\]
where \(\mathcal{B}\) is a replay buffer (cf. [17]) and \(\mathbf{\bar{\theta}}_{k}\) are the slow moving weights for the target critic networks [29]. We use multiple critics and target networks to address the optimistic bias introduced by function approximation [30, 31]. Moreover, we define two different reward functions:
\[\mathcal{R}_{b}(s,a) =\begin{cases}1,\text{ if picking succeeds},\\ 0,\text{ otherwise},\end{cases} \tag{6}\] \[\mathcal{R}_{v}(s,a) =\begin{cases}1+\lambda\cdot\hat{z}_{\mathcal{IE}}^{\text{pick}} (a),\text{ if picking succeeds},\\ 0,\text{ otherwise},\end{cases}\]
where \(\lambda\) is a scalar hyperparameter, and \(z_{\mathcal{IE}}^{\text{pick}}(a)\) is the \(z\)-coordinate in \(\mathbf{p}_{\mathcal{IE}}^{\text{pick}}(a)\) where the \(z\)-axis represents the up direction in \(\mathcal{I}\). Both the functions in (6) provide rewards when picking succeeds. Moreover, \(\mathcal{R}_{v}(s,a)\) adds a bonus proportional to the \(z\)-coordinate, i.e., the height, of the picking position. This bonus incentivizes the unloading of parcels when in the upper part of the stack. In other words, it disincentivizes behaviors which can lead to the collapse of the stack as in the lower row episode in Fig. 2. We experimentally show that \(\mathcal{R}_{v}(s,a)\) remarkably improves performance when compared to \(\mathcal{R}_{b}(s,a)\) in our task.
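A compact PyTorch-style sketch of the objective in (4)-(5) and the reward in (6) may help make the training step concrete; the tensors, shapes, and names below are illustrative stand-ins for the actual ResNet critics and replay buffer.

```python
import torch
import torch.nn.functional as F

def dql_loss(critics, target_critics, batch, gamma=0.99):
    """TD loss of Eqs. (4)-(5) with the clipped (min over critics) bootstrap target.

    critics / target_critics: callables mapping images to per-pixel Q-values of
    shape [B, H*W]; batch: (s, a, r, s_next) with a given as flat pixel indices.
    """
    s, a, r, s_next = batch
    with torch.no_grad():
        q_next = torch.stack([tc(s_next) for tc in target_critics])       # [K, B, HW]
        y = r + gamma * q_next.min(dim=0).values.max(dim=1).values        # Eq. (5)
    loss = 0.0
    for critic in critics:
        q_sa = critic(s).gather(1, a.unsqueeze(1)).squeeze(1)             # Q_k(s_t, a_t)
        loss = loss + F.mse_loss(q_sa, y)                                 # Eq. (4)
    return loss

def v_reward(pick_succeeded, z_pick, lam=2.0):
    """Height-biased reward R_v of Eq. (6); z_pick is the (possibly normalized) height."""
    return (1.0 + lam * z_pick) if pick_succeeded else 0.0
```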
The loss in (4)-(5) is computed by sampling interactions \((s_{t},a_{t},r_{t},s_{t+1})\) from a replay buffer \(\mathcal{B}\). During training, the agent interacts with the environment according to the exploration policy summarized in Algorithm 1. The safety bias layer, named Mask in Algorithm 1, is defined as
\[\text{Mask}(Q_{\mathbf{\bar{\theta}}_{k}}(s,a))=\begin{cases}Q_{\mathbf{ \bar{\theta}}_{k}}(s,a)+b,\quad\text{if }\mathbf{p}_{\mathcal{IE}}^{\text{pick}}(a)\in W_{\mathcal{I}},\\ Q_{\mathbf{\bar{\theta}}_{k}}(s,a),\quad\text{otherwise},\end{cases} \tag{7}\]
where \(W_{\mathcal{I}}\) is the robot workspace and \(b\) is a positive safety bias treated as a hyperparameter. The main goal of Mask in (7) is to bias the policy towards the actions with a corresponding \(\mathbf{p}_{\mathcal{IE}}^{\text{pick}}(a)\in W_{\mathcal{I}}\). We experimentally show that this layer is crucial to improving performance and efficiency in our task. During evaluation, given \(s\), the actions
Fig. 3: Summary of our hierarchical controller. The high-level controller selects a pixel \(a=(u,v)\) from an observation and computes a picking pose denoted with \(\mathbf{H}_{\mathcal{IE}}^{\text{pick}}(a)\). This pose is passed to a planner, which generates a trajectory of Cartesian waypoints. The low-level controller transforms this Cartesian trajectory into a joint space trajectory solving an inverse kinematics problem. The resulting trajectory serves as a reference for the PD controller of the actuators.
Fig. 2: The evolution of the scene is illustrated in two different episodes. In the upper row episode, the agent follows an optimal decision policy, successfully unloading all the parcels. In the lower row episode, the policy is suboptimal and a single wrong decision impacts the subsequent organization of the scene, affecting the agent’s final outcome and undermining the entire unloading process.
are generated following \(a\in\arg\max_{a}\text{Mask}(\min_{k}Q_{\bar{\mathbf{\theta}}_{k}}(s,a))\), where we use Mask in (7) as an additional safety measure. We provide a summary of the full algorithm in the Appendix.
```
Input: \(s\), \(Q_{\bar{\mathbf{\theta}}_{k}}\), \(\epsilon\), Mask, \(W_{\mathcal{I}}\), \(\mathcal{U}\), \(\sigma\): state (\(64\times 64\) RGB image), target critic networks, exploration probability, safety bias layer, robot workspace, uniform distribution, and softmax function.
begin ExplorationAction(\(s\))
    \(u\sim\mathcal{U}(0,1)\)
    if \(u\leq\epsilon\) then
        \(a\sim\mathcal{U}(a_{W_{\mathcal{I}}})\), where \(a_{W_{\mathcal{I}}}\) denotes the actions \(a\) such that \(\mathbf{p}_{\mathcal{I}\mathcal{E}}^{\text{pick}}(a)\in W_{\mathcal{I}}\)
    else
        \(a\sim\sigma(\text{Mask}(\min_{k}Q_{\bar{\mathbf{\theta}}_{k}}(s,a)))\)
    return \(a\)
```
**Algorithm 1** Exploration Policy
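For completeness, a small Python re-implementation of the safety bias layer in (7) and the exploration policy of Algorithm 1 is sketched below; the array layout and default values are assumptions rather than details taken from our code.

```python
import numpy as np

def mask(q_values, in_workspace, b=100.0):
    """Eq. (7): add a positive bias b to actions whose picking position lies in W_I.

    in_workspace: boolean array aligned with the flattened pixel/action space.
    """
    return q_values + b * in_workspace

def exploration_action(q_min, in_workspace, eps, rng=np.random):
    """Algorithm 1: uniform over safe actions with probability eps, softmax over masked Q otherwise.

    q_min: element-wise minimum over the K target critics, flattened over pixels.
    """
    if rng.uniform() <= eps:
        return rng.choice(np.flatnonzero(in_workspace))  # uniform over workspace actions
    logits = mask(q_min, in_workspace)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)
```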
_Low-level module._ The low-level module comprises a trajectory planner and a low-level controller. The planner receives \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{pick}}(a_{t})\) from the high-level controller and provides a set of \(5\) end-effector waypoints to the low-level controller. This set of waypoints consists of: \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{pre-pick}}\), \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{post-pick}}\), \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{pre-place}}\), \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{place}}\), \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{out-of-camera}}\). Specifically, \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{place}}\) represents the placing pose, which is assumed to be known throughout the task. \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{out-of-camera}}\) is the pose required for an unoccluded image of the scene, as shown in Fig. 1(a). \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{pre-pick}}\) is the pre-picking pose, \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{post-pick}}\) the post-picking pose and \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{pre-place}}\) the pre-placing pose. Between \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{pre-pick}}\) and \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{pick}}(a_{t})\), the gripper is activated as soon as a contact with a parcel is detected, after which the end-effector is moved towards \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{pre-place}}\).
Provided a trajectory of Cartesian waypoints, the low-level controller transforms this trajectory from Cartesian to joint space by solving an inverse kinematics optimization problem. The joint space trajectory is then used as a reference for the PD controller of the actuators.
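Since the environment is built on PyBullet, the waypoint-tracking step can be sketched with the simulator's built-in inverse kinematics and PD position control; the identifiers and gains below are illustrative and depend on how the robot URDF is loaded.

```python
import pybullet as p

def track_waypoint(robot_id, ee_link, target_pos, target_orn, joint_ids,
                   kp=0.05, kd=1.0):
    """Solve IK for one Cartesian waypoint and track it with the joint PD controller."""
    q_ref = p.calculateInverseKinematics(robot_id, ee_link, target_pos, target_orn)
    p.setJointMotorControlArray(robot_id, joint_ids,
                                controlMode=p.POSITION_CONTROL,
                                targetPositions=q_ref[:len(joint_ids)],
                                positionGains=[kp] * len(joint_ids),
                                velocityGains=[kd] * len(joint_ids))
    p.stepSimulation()
```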
## VI Experiments
All our experiments focus on the unloading problem described in Section IV. We evaluate four different versions of our DQL algorithm as summarized in Table I. It is important to note that both \(\mathcal{R}_{v}\) in (6) and Mask in (7) contain two important hyperparameters, respectively \(\lambda\) and \(b\), which are kept fixed throughout our experiments. In particular, we set \(\lambda=2\) and \(b=100\). This choice for \(\lambda\) takes into account the maximum height of the stack and ensures an appropriate trade-off between the two factors in \(\mathcal{R}_{v}\). As for \(b\), its optimal value depends on the initialization of the critic networks weights and on the considered reward function. The experiments depicted in Fig. 4 show that \(b=100\) yields good empirical results for all the algorithms in which Mask is used.
As commonly done in the RL literature, in our experiments we randomize training and evaluation episodes from the same distribution. The obtained final results are summarized in Fig. 4 where the average normalized performance over \(6\) random seeds is illustrated. Specifically, the left figure shows the average number of successful picks, and the right figure shows the number of attempted picks with \(\mathbf{p}_{\mathcal{I}\mathcal{E}}^{\text{pick}}(a)\notin W_{\mathcal{I}}\). All the curves are normalized with respect to \(42\), i.e., the maximum number of parcels per episode. These results demonstrate that both Mask and \(\mathcal{R}_{v}\) remarkably improve performance, in particular when they are jointly used during training. Specifically, by using \(\mathcal{R}_{v}\) rather than \(\mathcal{R}_{b}\) in (6), the agent receives a more accurate feedback about the characteristics of the task and requires fewer interactions to achieve better results. Furthermore, by using Mask, we are able to effectively reduce the number of attempted picks in which \(\mathbf{p}_{\mathcal{I}\mathcal{E}}^{\text{pick}}(a)\notin W_{\mathcal{I}}\). This leads to an improved exploration strategy, as the agent mainly focuses on viable actions that lead to \(\mathbf{p}_{\mathcal{I}\mathcal{E}}^{\text{pick}}(a)\in W_{\mathcal{I}}\). Conversely, when Mask is not used, the number of actions with \(\mathbf{p}_{\mathcal{I}\mathcal{E}}^{\text{pick}}(a)\notin W_{\mathcal{I}}\) increases, leading to less effective exploration strategies and slower learning rate.
In Fig. 5, we show, for each version of our algorithm, the seed leading to the best maximum performance among the seeds averaged in Fig. 4. These experiments more clearly emphasize the effect of Mask and \(\mathcal{R}_{v}\) on improving our final results. In Fig. 5, our best policy, trained with _Mask-on, v-reward_ (cf. Table I), achieves \(99\%\) picking success over \(3\) full evaluation episodes.
The training curves in Fig. 4 and Fig. 5 are both summarized by the box plots in Fig. 6. Additional results and all the hyperparameters used are provided in the Appendix. We refer to our GitHub repository for more implementation details.
| | _Mask-off_ | _Mask-off, v-reward_ | _Mask-on_ | _Mask-on, v-reward_ |
| --- | --- | --- | --- | --- |
| Mask | ✗ | ✗ | ✓ | ✓ |
| \(\mathcal{R}_{v}\) | ✗ | ✓ | ✗ | ✓ |

TABLE I: A summary of the different algorithms tested in our experiments. Mask denotes the use of the safety bias layer in (7), while \(\mathcal{R}_{v}\) denotes the use of \(\mathcal{R}_{v}\) rather than \(\mathcal{R}_{b}\) in (6).
Fig. 4: Ablation experiments. (Left) the average normalized number of successful picks, which represents the number of parcels successfully unloaded. (Right) the average normalized number of picks attempted with \(\mathbf{p}_{\mathcal{I}\mathcal{E}}^{\text{pick}}(a)\notin W_{\mathcal{I}}\). Both results are averaged over \(6\) seeds and the shaded area represents the standard deviation over seeds. For each seed, we randomly initialize the critic networks and train for \(10^{5}\) steps. We evaluate the learned policy every \(10\) training episodes, i.e., \(420\) steps, using average performance over \(3\) episodes. The characteristics of the tested algorithms are summarized in Table I.
## VII Conclusion
In this study, we tackle the problem of robotic unloading from visual observations and propose a hierarchical controller that does not require any labeled data. Our hierarchical controller, depicted in Fig. 3, consists of a high-level controller responsible for decision-making and a low-level module for trajectory planning and low-level control. Optimal decision-making is learned from RGB images using DQL and is converted into an optimal end-effector pose by leveraging an aligned depth image. In our DQL algorithm, we introduce a safety bias mechanism and a task-specific reward function which prove to be essential in order to improve our experimental results. Finally, we develop and make publicly available a simulated environment, with the goal of providing a benchmark for future studies in this field.
_Limitations and future work._ Despite the advancements in addressing robotic unloading, it is important to understand the limitations of our approach. In the high-level controller, the main issue involves the training stability of our DQL algorithm. This problem is evident in Fig. 5, where the training progress does not show a monotonic trend. Furthermore, this instability is the main reason for the slow rate of improvement illustrated in Fig. 4. We consider this to be a crucial challenge for optimal unloading policy learning, representing a significant area for future research.
Regarding the simulated environment, we consider it as a valuable benchmark for testing RL-based methods in unloading tasks. This perspective is substantiated by the results in Fig. 4 and Fig. 5, where vanilla RL algorithms often struggle to succeed, as evidenced in the _Mask-off_ case. Moreover, prior research has addressed unloading tasks similar to what we present in our study [24, 26]. However, we acknowledge that our current simulation does not encompass all the potential variables encountered in real-world unloading scenarios. As a result, our future efforts will focus on improving our simulated environment in order to introduce more randomized settings, where the parcels can have different textures, size and shape. We emphasize this as an important research avenue towards improving performance in real-world scenarios, where parcel configurations are usually messy and uncertain.
Looking ahead, we are also considering the prospect of directly applying our training solutions to real hardware in future research endeavors. This goal presents its own set of challenges, especially in tasks of this nature, where devising methods that ensure minimal human intervention and safety becomes of crucial importance.
| この研究では、ロボットがRGB-D画像を主な入力源としてParcelの積み重ねを自動的に積み下ろす作業に焦点を当てています。 supervised学習とImitation学習は、これらのタスクで良好な結果を達成していますが、これらのタスクはラベル付きデータに依存するため、現実的なシナリオでは取得が困難です。本研究は、学習プロセス中にラベル付きデータが必要ない、サンプル効率的な制御フレームワークを開発することを目的としています。この課題に対処するために、ハイパーパラメータ化された制御構造を提案しています。これは、高階的な決断処理モジュールと古典的な運動制御を組み合わせたものです。高階的なモジュールは、DeepReinforcement Learning (DRL) を使用してトレーニングされ、安全なバイアスメカニズムとこのタスクに適応された報酬関数を組み込みます。実験の結果は、これらの要素が学習性能の向上に重要な役割を果たしていることを示しています |
2309.09512 | Extrinsic nonlinear Kerr rotation in topological materials under a
magnetic field | Topological properties in quantum materials are often governed by symmetry
and tuned by crystal structure and external fields, and hence
symmetry-sensitive nonlinear optical measurements in a magnetic field are a
valuable probe. Here we report nonlinear magneto-optical second harmonic
generation (SHG) studies of non-magnetic topological materials including
bilayer WTe2, monolayer WSe2 and bulk TaAs. The polarization-resolved patterns
of optical SHG under magnetic field show nonlinear Kerr rotation in these
time-reversal symmetric materials. For materials with three-fold rotational
symmetric lattice structure, the SHG polarization pattern rotates just slightly
in a magnetic field, whereas in those with mirror or two-fold rotational
symmetry the SHG polarization pattern rotates greatly and distorts. These
different magneto-SHG characters can be understood by considering the
superposition of the magnetic field-induced time-noninvariant nonlinear optical
tensor and the crystal-structure-based time-invariant counterpart. The
situation is further clarified by scrutinizing the Faraday rotation, whose
subtle interplay with crystal symmetry accounts for the diverse behavior of the
extrinsic nonlinear Kerr rotation in different materials. Our work illustrates
the application of magneto-SHG techniques to directly probe nontrivial
topological properties, and underlines the importance of minimizing extrinsic
nonlinear Kerr rotation in polarization-resolved magneto-optical studies. | Shuang Wu, Zaiyao Fei, Zeyuan Sun, Yangfan Yi, Wei Xia, Dayu Yan, Yanfeng Guo, Youguo Shi, Jiaqiang Yan, David H. Cobden, Wei-Tao Liu, Xiaodong Xu, Shiwei Wu | 2023-09-18T06:50:49 | http://arxiv.org/abs/2309.09512v1 | # Extrinsic nonlinear Kerr rotation in topological materials under a magnetic field
###### Abstract
\({}^{1}\) State Key Laboratory of Surface Physics, Key Laboratory of Micro and Nano Photonic Structures (MOE), and Department of Physics, Fudan University, Shanghai 200433, China.
\({}^{2}\) Department of Physics, University of Washington, Seattle, Washington 98195, USA
\({}^{3}\) School of Physical Science and Technology, and ShanghaiTech Laboratory for Topological Physics, ShanghaiTech University, Shanghai 201210, China
\({}^{4}\) Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China
\({}^{5}\) Materials Science and Technology Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee, 37831, USA
\({}^{6}\) Department of Materials Science and Engineering, University of Washington, Seattle, Washington 98195, USA
\({}^{7}\) Shanghai Qi Zhi Institute, Shanghai 200232, China
\({}^{8}\) Institute for Nanoelectronic Devices and Quantum Computing, and Zhangjiang Fudan International Innovation Center, Fudan University, Shanghai 200433, China
\({}^{9}\) Shanghai Research Center for Quantum Sciences, Shanghai 201315, China
* Corresponding emails: swwu@fudan.edu.cn | 量子材料におけるトポロジカルプロパティは、しばしば対称性によって支配され、結晶構造と外部の力によって調整され、したがって、磁場におけるシメトリーに敏感な非線形光学測定は貴重な探求手段となります。ここでは、磁場における非線形光磁性2次Harmonic生成(SHG)を非磁性トポロジカル材料の包括的な研究を行うことで、bilayer WTe2、monolayer WSe2とbulk TaAsを用いて報告します。磁場中の光SHGの偏光 resolvedパターンは、これらの時間反転対称材料において非線形 Kerr 回転を示しています。三つの回転対称構造を持つ材料のSHG偏光パターンは、磁場の中でわずかに回転するのに対し、鏡像または二つの回転対称構造を持つ材料では、SHG偏光パターンは大きく回転し、歪みます。これらの異なる magneto |
2309.03981 | Routing in Mixed Transportation Systems for Mobility Equity | This letter proposes a routing framework in mixed transportation systems for
improving mobility equity. We present a strategic routing game that governs
interactions between compliant and noncompliant vehicles, where noncompliant
vehicles are modeled with cognitive hierarchy theory. Then, we introduce a
mobility equity metric (MEM) to quantify the accessibility and fairness in the
transportation network. We integrate the MEM into the routing framework to
optimize it with adjustable weights for different transportation modes. The
proposed approach bridges the gap between technological advancements and
societal goals in mixed transportation systems to enhance efficiency and
equity. We provide numerical examples and analysis of the results. | Heeseung Bang, Aditya Dave, Andreas A. Malikopoulos | 2023-09-07T19:27:15 | http://arxiv.org/abs/2309.03981v1 | # Routing in Mixed Transportation Systems for Mobility Equity
###### Abstract
This letter proposes a routing framework in mixed transportation systems for improving mobility equity. We present a strategic routing game that governs interactions between compliant and noncompliant vehicles, where noncompliant vehicles are modeled with cognitive hierarchy theory. Then, we introduce a mobility equity metric (MEM) to quantify the accessibility and fairness in the transportation network. We integrate the MEM into the routing framework to optimize it with adjustable weights for different transportation modes. The proposed approach bridges the gap between technological advancements and societal goals in mixed transportation systems to enhance efficiency and equity. We provide numerical examples and analysis of the results.
Mobility equity metric, emerging mobility, mixed-traffic routing.
## I Introduction
Due to ongoing global urbanization and burgeoning urban populations, our society now faces not only the challenges of traffic congestion but also the associated societal issues, such as disparities in transportation opportunities, reduced accessibility to essential services for marginalized communities, and increased social isolation due to lengthy commutes. Emerging mobility systems have received significant attention as a solution that can mitigate congestion, enhance safety, improve comfort, and optimize costs.
Numerous studies have addressed the coordination of connected and automated vehicles (CAVs) to achieve efficient operational methods in emerging mobility systems. For example, a series of research papers addressed coordination problems in different traffic scenarios such as lane-changing [1], merging on-ramps in mixed traffic [2, 3], signalized intersection [4], and roundabouts [5]. These results have also been extended to the network level with vehicle-flow optimization. Research efforts have also addressed various congestion-aware routing strategies considering mixed traffic contexts [6] or targeting electric vehicles [7]. Some approaches combine efficient routing with coordination strategies [8, 9] or learned travel preference to achieve social objectives [10]. However, exploring effective operational strategies that can mitigate the societal challenges of emerging mobility systems remains an open question.
The primary component of the societal challenges in emerging mobility systems is the uneven distribution of modes of transportation and accessibility to various urban resources. In response, research initiatives have arisen to address concerns of _mobility equity_ in a transportation network. As a concept, mobility equity has been examined from many diverse perspectives. Notably, research has delved into socioeconomic parity across different strata of society, spatially equitable allocation of infrastructure, and the distribution of resources aligned with societal needs (for a detailed overview, see [11]). For instance, some studies examined the impact of individual characteristics, i.e., personal needs and abilities, on the effectiveness of the equity analysis [12]. Meanwhile, other studies explored the link between social exclusion and transport disadvantages in accessibility [13] or provided a way of examining equity based on the transport choices [14]. These investigations underscore the urgency of creating transportation systems that cater to the needs of all segments of society, enhancing accessibility and social inclusivity. Despite the discrete examination of these challenges, however, there still needs to be more comprehensive efforts to interlink mobility equity with emerging mobility systems. Integrating the principles of mobility equity into the realm of emerging transportation modes still presents a gap in the existing body of knowledge.
To resolve these challenges, in this letter, we propose a routing framework in mixed transportation systems, where human-driven vehicles co-exist with CAVs, to improve mobility equity. Our approach addresses different modes of transportation, accommodating private vehicles with varying levels of compliance. First, we formulate a routing framework that suggests system-optimal solutions tailored explicitly to compliant vehicles. To account for noncompliant vehicles' movement, we leverage the cognitive hierarchy model [15], inspired by many studies that have applied the cognitive hierarchy model to predict human decisions in transportation systems. For example, Li et al. [16] utilized the cognitive hierarchy model in a game theoretic approach to manage the interaction between automated vehicles and human-driven vehicles. More recently, Feng and Wang [17] utilized a cognitive hierarchy model to predict acceptance or rejection of the drivers in on-demand platforms.
By incorporating a cognitive hierarchy model, we can design a strategic game that governs interactions between compliant and noncompliant vehicles within the transportation system. Moreover, we introduce a mobility equity metric (MEM) to provide a quantifiable assessment of mobility equity. This metric captures essential aspects of accessibility, accounting for both geographical distances and monetary costs. We then derive the MEM optimization problem integrated with the routing framework. We solve this problem by adjusting the weights assigned to different transportation modes. This comprehensive framework not only accounts for compliance variations and individual travel preferences but also integrates the overarching goal of equity into the decision-making process.
The main contributions of this work are the (1) introduction of the MEM that can help resolve difficulties resulting from a lack of standard/clear metrics for mobility equity [18]; (2) formulation and solution of the MEM optimization integrated with a multi-modal routing framework, which offers a unique perspective on mobility optimization and equity enhancement and (3) development of variations of the MEM that support the analysis in small networks.
The remainder of this paper is organized as follows. In Section II, we formulate a routing problem within mixed transportation systems. We introduce a mathematical definition of MEM in Section III and present the MEM optimization framework in Section IV. Section V showcases a numerical example with practical implementation strategies on a smaller network and subsequently analyzes results. Finally, in Section VI, we provide concluding remarks and discuss directions for future research.
## II Routing in Mixed Transportation
This section presents a routing framework of a mixed transportation system with various modes of transportation, including public transportation and privately owned vehicles. The private vehicles consist of human-driven vehicles and CAVs. We assume that public transportation and CAVs follow the system's route suggestion while private human-driven vehicles may or may not comply with the suggestion. We seek to provide system-wide optimal suggestions to all compliant vehicles, including public transportation, CAVs, and private human-driven vehicles. However, such suggestions must account for the decisions of noncompliant private vehicles (NPVs). Thus, we distinguish between compliant private vehicles (CPVs) and NPVs in our formulation.
Consider a road network given by a directed graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\subset\mathbb{N}\) is a set of nodes and \(\mathcal{E}\subset\mathcal{V}\times\mathcal{V}\) is a set of edges. We consider \(\mathcal{O}\) and \(\mathcal{D}\) as the set of origins and destinations, respectively. Let \(\mathcal{N}=\{1,\ldots,N\}\), \(N\in\mathbb{N}\), denote a set of trips, each comprising of an origin-destination pair, and \(\mathcal{M}\) a set of modes of transportation available for system-wide routing, e.g., public transportation, shared mobility, private vehicles. For each trip \(n\in\mathcal{N}\), information of origin \(o_{n}\in\mathcal{O}\), destination \(d_{n}\in\mathcal{D}\), and compliant travel demand rate \(\alpha_{m,n}\in\mathbb{R}_{>0}\) for each mode \(m\in\mathcal{M}\) is given. We let \(x^{ij}_{m,n}\in\mathbb{R}_{\geq 0}\) be the flow on edge \((i,j)\in\mathcal{E}\) traveling for the trip \(n\) with the specific mode \(m\). Then, the total complying-vehicle flow is given by \(x^{ij}=\sum_{m}\sum_{n}x^{ij}_{m,n}\). Note that we distinguish the flow of NPVs on edge \((i,j)\) by denoting it with \(q^{ij}\in\mathbb{R}_{\geq 0}\). Given both the total complying and noncomplying flows on edge \((i,j)\), we estimate travel time using the _Bureau of Public Roads (BPR)_ function denoted by
\[t^{ij}(x^{ij}+q^{ij})=t^{ij}_{0}\cdot\left(1+0.15\left(\frac{x^{ij}+q^{ij}}{ \gamma^{ij}}\right)^{4}\right), \tag{1}\]
where \(t^{ij}_{0}\) is the free-flow travel time and \(\gamma^{ij}\) is the capacity of the road on edge \((i,j)\).
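For reference, the edge travel time in (1) is a one-line function of the compliant and noncompliant flows; a minimal sketch follows.

```python
def bpr_travel_time(x, q, t0, capacity):
    """BPR travel time of Eq. (1) for compliant flow x and NPV flow q on one edge."""
    return t0 * (1.0 + 0.15 * ((x + q) / capacity) ** 4)
```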
Recall that our system can provide socially equitable suggestions only to compliant vehicles willing to accept those while NPVs seek to maximize their utility strategically. The routing process resulting from the interactions between these entities can be formulated as a game. Next, we describe the system-centric optimization problem for compliant vehicles and follow it with the decision-making model for NPVs to describe the interactions within this game.
#### II-1 System-Centric Routing
To suggest socially equitable routes to the complying-vehicle flow, we solve the following optimization problem.
**Problem 1** (System-Centric Routing).: \[\underset{\{x^{ij}_{m,n}\}}{\operatorname{Minimize}} \sum_{m\in\mathcal{M}}w_{m}\left\{\sum_{n\in\mathcal{N}}\sum_{(i,j )\in\mathcal{E}}t^{ij}(x^{ij}+q^{ij})\cdot x^{ij}_{m,n}\right\}\] \[\operatorname{subject\ to:} \sum_{k:(j,k)\in\mathcal{E}}x^{jk}_{m,n} =\alpha_{m,n},\] \[\forall m\in\mathcal{M},n\in\mathcal{N},j=o_{n},\] \[\sum_{i:(i,j)\in\mathcal{E}}x^{ij}_{m,n} =\alpha_{m,n},\] \[\forall m\in\mathcal{M},n\in\mathcal{N},j=d_{n},\] \[\sum_{i:(i,j)\in\mathcal{E}}x^{ij}_{m,n} =\sum_{k:(j,k)\in\mathcal{E}}x^{jk}_{m,n},\] \[\forall m\in\mathcal{M},n\in\mathcal{N},j\in\mathcal{V}\setminus \{o_{n},d_{n}\},\] (2)
_where \(w_{m}\) is the weight for transportation mode \(m\)._
The constraints ensure that the flow matches the demand rates and connects the corresponding origins and destinations. Problem 1 is a convex problem, as the BPR function is convex on its domain and the constraints are linear.
#### II-2 Strategic Routing for NPVs
In recent research articles, NPVs have been modeled as one group whose interactions result in a Wardrop equilibrium (see [19] and references therein). Wardrop equilibrium is reached at steady state after transient phases wherein travelers continuously adjust their route selection. In a real traffic scenario, however, it requires perfectly rational drivers with access to all information of other NPVs or a significant amount of time for drivers to interact with each other and reach equilibrium. Therefore, as an alternative, we introduce the cognitive hierarchy model (see Fig. 1) to describe the behavior of NPVs. This model categorizes human drivers into different levels of decision-making rationality. At each level, human drivers can anticipate
lower-level drivers' decisions and make "smarter" decisions. For instance, level-0 drivers decide based on publicly available information. Meanwhile, level-1 drivers anticipate level-0 drivers' decisions and select better paths, and level-2 drivers can anticipate level-0 and level-1 drivers. According to the experimental results, humans can most commonly anticipate others' decisions up to level-2 [20, 21]; thus, we restrict our model to level-2 decision-making.
**Remark 1**.: _The cognitive hierarchy model is suitable for considering the behavior of NPVs when the percentage of compliant vehicles is high. This can occur in the presence of a large number of CAVs and public transportation. In this case, the traffic flow generated by compliant vehicles can represent the traffic situation across the network, thus making it reasonable to assume that a level-0 driver cannot anticipate behaviors beyond this information. However, if the majority of the drivers are noncompliant, this would result in a massive gap between their anticipation and experience of traffic on the roads. In this situation, they can potentially learn the dynamics and try to anticipate others' decisions._
For each trip \(n\in\mathcal{N}\), we also have the demand rate for each \(\ell\)-level NPV, \(\ell=0,1,2\), from the origin \(o_{n}\in\mathcal{O}\) to the corresponding destination \(d_{n}\in\mathcal{D}\). For an \(\ell\)-level NPV traveling for trip \(n\), we define assignment vector \(A_{\ell,n}\in 2^{|\mathcal{E}|}\) where the element \(a_{\ell,n}^{ij}\) takes value of \(1\) if the \(\ell\)-level NPV for trip \(n\) uses the edge \((i,j)\) and takes value of \(0\) otherwise. We solve the following problem for each NPV to determine their path.
**Problem 2** (Strategic Routing of \(\ell\)-Level NPV).: \[\begin{split}\operatorname*{Minimize}_{\{a_{\ell,n}^{ij}\}}&\sum_{n\in\mathcal{N}}\sum_{(i,j)\in\mathcal{E}}t^{ij}\left(x^{ij}+\sum_{l=0}^{\ell-1}q_{l}^{ij}\right)\cdot a_{\ell,n}^{ij}\\ \operatorname*{subject\ to:}&\sum_{k:(j,k)\in\mathcal{E}}a_{\ell,n}^{jk}=1,\ \forall n\in\mathcal{N},j=o_{n},\\ &\sum_{i:(i,j)\in\mathcal{E}}a_{\ell,n}^{ij}=1,\ \forall n\in\mathcal{N},j=d_{n},\\ &\sum_{i:(i,j)\in\mathcal{E}}a_{\ell,n}^{ij}=\sum_{k:(j,k)\in\mathcal{E}}a_{\ell,n}^{jk},\\ &\forall n\in\mathcal{N},j\in\mathcal{V}\setminus\{o_{n},d_{n}\},\end{split}\] (3)
_where \(q_{\ell}^{ij}=\sum_{n\in\mathcal{N}}q_{\ell,n}\cdot a_{\ell,n}^{ij}\)._
Problem 2 is a routing problem for a single NPV, which can be solved using any graph search algorithm such as Dijkstra or A\({}^{*}\). As each \(\ell\)-level NPV only anticipates lower-level NPVs, all NPVs at the same level make the same routing decision.
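As noted above, each \(\ell\)-level NPV solves a shortest-path problem under the travel times it anticipates; a hedged sketch using networkx and the BPR helper from (1) is given below (the data structures are illustrative).

```python
import networkx as nx

def npv_route(G, origin, destination, x, q_lower, t0, capacity):
    """Problem 2 for one ell-level NPV: shortest path under anticipated travel times.

    x, q_lower, t0, capacity: dicts keyed by edge (i, j); q_lower is the total flow
    of NPVs at levels below ell (empty for a level-0 driver).
    """
    for i, j in G.edges():
        flow = x[(i, j)] + q_lower.get((i, j), 0.0)
        G[i][j]["weight"] = bpr_travel_time(flow, 0.0, t0[(i, j)], capacity[(i, j)])
    return nx.shortest_path(G, origin, destination, weight="weight")
```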
Note that \(q^{ij}\) in Problem 1 is the flow of NPVs, and \(x^{ij}\) in Problems 1 and 2 is the flow of all the compliant vehicles. As the problems are coupled and affect each other, they form a strategic game. To resolve this game, we iteratively solve them until the solutions converge to an equilibrium. Since there is no theoretical guarantee of convergence, we provide a way to induce convergence in Section IV-A.
**Remark 2**.: _The solution to Problem 1 is sensitive to the choice of weights. Therefore, it is required to have principled ways of selecting appropriate weights. In the next section, we introduce a mobility equity metric that evaluates the accessibility and equity of transportation resources in a network. This metric will allow us to select socially appropriate weights._
## III Mobility Equity Metric
Mobility equity refers to the fair distribution of transportation resources and opportunities among diverse communities, regardless of socioeconomic status, location, or other factors. Although people are bringing attention to mobility equity, there has yet to be a strict definition and a convention on its effective quantification. In this section, we propose a mobility equity metric (MEM) that quantifies the degree to which transportation services cater to the needs of different demographic groups, highlighting any discrepancies in access. In this metric, we aim to account for various factors pertaining to mobility, such as accessibility to essential services (e.g., healthcare, education, and employment), affordability, travel time, and availability of multiple modes of transportation. Next, we present the mathematical definition.
Let \(\mathcal{M}\) be the set of all modes of transportation, \(\mathcal{S}\) be the set of essential services, and \(\kappa\) be the price sensitivity. For each \(m\in\mathcal{M}\) and \(s\in\mathcal{S}\), we let \(c_{m}\) denote the cost per passenger mile of utilizing transportation mode \(m\), \(\beta^{s}\) denote the priority level of service \(s\), and \(\sigma_{m}^{s}(\tau_{m})\) denote the average number of services accessible within time threshold \(\tau_{m}\) from all selected origins in the network.
**Definition 1**.: _For a given transportation network \(\mathcal{G}\) with modes in \(\mathcal{M}\) and services in \(\mathcal{S}\), the mobility equity metric is_
\[\text{MEM}=\sum_{m\in\mathcal{M}}e^{-\kappa c_{m}}\cdot\left\{\sum_{s\in \mathcal{S}}\beta^{s}\sigma_{m}^{s}(\tau_{m})\right\}. \tag{4}\]
Here, \(e^{-\kappa c_{m}}\) ensures that MEM decreases with an increase in the cost per passenger mile, and \(\beta^{s}\sigma_{m}^{s}(\tau_{m})\) ensures that MEM increases with respect to an increase of accessibility to the essential services. These terms collectively prioritize increasing access to services at lower costs to passengers to increase MEM.
**Remark 3**.: _An advantage of the MEM defined in (4) is that, in practice, it can be computed purely using publicly available data. For example, the base flows in a traffic network can be measured over time, the number of services \(\sigma_{m}^{s}(\tau_{m})\) can be
Fig. 1: Conceptual diagram of cognitive hierarchy model.
counted using an isochrone map for the base traffic conditions, and the costs of transportation can be computed from travel times and fuel consumption._
**Remark 4**.: _Recall that \(\sigma_{m}^{s}(\cdot)\) represents the average number of accessible services from selected origins in the network. We anticipate that by selecting these origins carefully to include diverse social groups, it is possible to consider the impact of social factors on mobility equity. Subsequently, the MEM can facilitate a fair distribution of transportation resources._
Next, we formulate an optimization problem to integrate the MEM from Definition 1 into the routing framework presented in Section II. This allows us to develop a mobility-equity-focused approach to select the weights to prioritize various modes of transportation in system-centric routing, as described in Remark 2.
## IV Mobility Equity Optimization
In this section, we formulate the MEM optimization problem in the mixed-transportation network. The problem aims to maximize the MEM by improving accessibility to the services with cost-efficient modes of transportation. Here, accessibility is captured by counting the number of accessible services within a time threshold. In our routing formulation, solutions to Problem 1 and Problem 2 will be the net flow on the network, which can be used to estimate travel time for given origins and destinations. Note that the net flow is determined for given weights \(w\). Thus, we can formulate the MEM optimization problem with respect to the weights.
**Problem 3** (Mobility Equity Maximization).: \[\begin{split}\operatorname*{Maximize}_{w}&\text{MEM}\\ \operatorname*{subject}&\text{to:}&\delta^{ \text{pv}}(w)\leq\gamma,\end{split}\] (5)
_where \(\delta^{\text{pv}}\) is the average travel-time difference between CPVs and NPVs, and \(\gamma\) is the upper limit of the difference._
Figure 2 illustrates the structure of integrating routing and MEM optimization. For each possible \(w\), it is required to solve Problem 1 and Problem 2 repeatedly until their solution converges. Therefore, Problem 3 can only be solved numerically. We impose a constraint on the travel-time difference \(\delta^{\text{pv}}\) between CPVs and NPVs because the routing framework would sacrifice CPVs' travel time to maximize MEM. Without this constraint, CPVs would no longer comply if their time loss increases to a certain level. Thus, we impose the upper bound \(\gamma\) in order to keep their time loss bearable.
**Remark 5**.: _In practice, CPVs in the system-centric routing tend to have longer travel times than the NPVs. Thus, their disincentive towards complying is captured by the differences in travel time between CPVs and NPVs, which we bound by the constant \(\gamma\). Though we do not explicitly explore this direction, the constant \(\gamma\) should be determined through monetary incentives provided to drivers to maintain compliance._
### _Inducing Convergence_
One potential challenge in solving Problem 3 is a lack of convergence of the flows in the routing game (right-hand side of Fig. 2) induced by a specific choice of weights. In this subsection, we explain the "chattering behavior" in the routing game and propose a resolution to this concern by controlling the flows of compliant vehicles in the system-centric problem. The chattering behavior may originate from the fact that to optimize the MEM, the system-centric routing problem may receive weights prioritizing the travel time for public vehicles with a smaller cost per passenger mile over CPVs. To understand this phenomenon, consider a single origin and destination with two possible paths \(p_{1}\) and \(p_{2}\). For the compliant vehicles, the optimal solution to Problem 1 is to assign a shorter-time path \(p_{1}\) to public transportation and a longer-time path \(p_{2}\) to the CPVs. Then, NPVs select \(p_{1}\) for their benefit because \(p_{1}\) is still the shortest-time path. This results in a scenario where public transportation and NPVs travel in traffic congestion while CPVs travel using a longer but less crowded path. In the next iteration, compliant vehicles would thus be assigned different paths so that public transportation can travel faster. In response to these flow changes, NPVs would also change their decision to travel on the same path as public transportation because it would always be the less congested path. Due to the repeated nature of these interactions, routing decisions may chatter over time as the system-centric routing attempts to prioritize public vehicles and NPVs keep following.
In case chattering occurs, we impose an additional constraint given by \(\sum_{m}\sum_{n}x_{m,n}^{ij}=f^{ij}\), where \(f^{ij}\) is the total compliant-vehicle flow on edge \((i,j)\) at the previous iteration. This constraint ensures that the compliant-vehicle flow remains the same as in the previous iteration while improving public transportation travel time. Although compliant vehicles may change their paths, there is no incentive for NPVs to change their path because the total flow on the roads remains the same.
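The interaction loop of Fig. 2, together with the flow-fixing constraint above, can be summarized by the following schematic sketch; `solve_system_routing` and `solve_npv_levels` are placeholders for the convex program of Problem 1 and the per-level routing of Problem 2, not library calls.

```python
def flow_distance(q1, q2):
    """Largest per-edge change of the NPV flow between two iterations."""
    return max(abs(q1[e] - q2[e]) for e in q1)

def routing_game(edges, weights, demands, solve_system_routing, solve_npv_levels,
                 max_iters=50, tol=1e-3):
    """Fixed-point iteration between Problems 1 and 2 for a given weight vector.

    For simplicity, the total compliant flow of the previous iteration is passed to
    the system problem as the fixing constraint whenever the flows keep changing.
    """
    q = {e: 0.0 for e in edges}   # NPV flow, initially zero
    x_prev = None
    for _ in range(max_iters):
        x = solve_system_routing(weights, demands, q, fixed_total_flow=x_prev)
        q_new = solve_npv_levels(x, demands)   # levels 0, 1 and 2
        if flow_distance(q, q_new) < tol:
            return x, q_new                    # equilibrium reached
        x_prev, q = x, q_new                   # otherwise fix flows and iterate
    return x, q
```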
## V Numerical Implementation
In this section, we provide a numerical implementation and analysis of the results. As a proof of concept, we consider a small network with \(12\) nodes and \(54\) edges, as illustrated in Fig. 3. We introduce \(10\) travel demands (\(2\) origins and \(5\) destinations) and randomly generated demand rates, where origins and destinations are considered as residential areas and essential services, respectively. In our implementation, we address a single type of service, i.e., \(|\mathcal{S}|=1\), and modes of transportation given by \(\mathcal{M}=\{\operatorname{public\ transportation},\operatorname{CPVs}\}\). We assume that travel demands exist for all possible origin-destination pairs and that the demand rates are known a priori.
Fig. 2: Structure of the socially-optimal routing problem.
To evaluate MEM, we need the average number of services accessible within the time threshold \(\tau_{m}\) given by
\[\sigma_{m}^{s}(\tau_{m}):=\frac{\sum_{o\in\mathcal{O}}\alpha_{m}^{o}\cdot\sum_{d \in\mathcal{D}}\mathbb{I}\Big{[}t_{m}^{o,d}\leq\tau_{m}\Big{]}}{\sum_{o\in \mathcal{O}}\alpha_{m}^{o}}, \tag{6}\]
where \(\alpha_{m}^{o}\) is the compliant travel demand rate departing at \(o\) using mode \(m\), \(\mathbb{I}[\cdot]\) is the indicator function, and \(t_{m}^{o,d}\) is the average travel time from \(o\) to \(d\) via mode \(m\). The number of services counted with \(\sum_{d\in\mathcal{D}}\mathbb{I}[t_{m}^{o,d}\leq\tau_{m}]\) is weighted with the flow \(\alpha_{m}^{o}\) to account for the different levels of influence of different travel demands. Here, we determine \(t_{m}^{o,d}\) by
\[t_{m}^{o,d}=\frac{\sum_{(i,j)\in\mathcal{E}}t^{ij}(x^{ij}+q^{ij})\cdot x_{m,n }^{ij}}{\sum_{(i,j)\in\mathcal{E}}x_{m,n}^{ij}}, \tag{7}\]
where \(n\in\mathcal{N}\) is the travel demand corresponding to the origin-destination pair \((o,d)\).
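For concreteness, the following minimal Python sketch evaluates the accessibility score in (6) for one mode from a table of average travel times \(t_{m}^{o,d}\) and demand rates \(\alpha_{m}^{o}\); the dictionary layout and the numbers are made up for illustration.

```python
def accessibility(alpha, travel_time, tau):
    """Demand-weighted count of reachable services, following Eq. (6):
    sum_o alpha_o * #{d : t(o, d) <= tau} / sum_o alpha_o."""
    weighted = sum(alpha[o] * sum(1 for t in travel_time[o].values() if t <= tau)
                   for o in alpha)
    return weighted / sum(alpha.values())

# toy data: two origins, three destinations, times in minutes
alpha = {"o1": 4.0, "o2": 1.0}
travel_time = {"o1": {"d1": 8.0, "d2": 14.0, "d3": 25.0},
               "o2": {"d1": 12.0, "d2": 18.0, "d3": 9.0}}
print(accessibility(alpha, travel_time, tau=15.0))  # (4*2 + 1*2) / 5 = 2.0
```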
### Continuous Approximation of Mobility Equity Metric
For a network with numerous origins and destinations, using the indicator \(\mathbb{I}\) produces meaningful changes to the MEM because small shifts in travel time result in a change in the number of services accessible within a time threshold. In contrast, for small networks, using the indicator function \(\mathbb{I}\) to capture the accessibility may barely affect MEM for variations in travel time. To produce meaningful results for a small network, we approximate the indicator function with a continuous function that is more sensitive to changes in travel time. The approximate function is given by
\[\tilde{\mathbb{I}}(t)=1-\frac{1}{1+e^{-k(t-\tau_{m})}}, \tag{8}\]
where \(k\in\mathbb{R}_{>0}\) is a parameter controlling the slope. As \(k\) increases, the function approaches the original indicator function, while for smaller \(k\) it responds to travel-time variations over a wider range around the threshold. By adopting (8) instead of the indicator function \(\mathbb{I}\), (6) can provide distinguishable outcomes even in a small network.
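A one-line implementation of (8) illustrates the effect of the slope parameter; the threshold and the values of \(k\) below are arbitrary.

```python
import math

def soft_indicator(t, tau, k):
    """Smoothed surrogate for I[t <= tau] from Eq. (8)."""
    return 1.0 - 1.0 / (1.0 + math.exp(-k * (t - tau)))

tau = 15.0
for k in (0.2, 1.0, 10.0):   # larger k approaches the hard indicator
    print(k, [round(soft_indicator(t, tau, k), 3) for t in (10.0, 15.0, 20.0)])
```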
### Numerical Simulation Results
Given the network and travel demands, we first ran simulations to analyze the effect of the compliance rate on the MEM. Figure 3(a) illustrates the travel time of each transportation mode for different trips and different noncompliance rates (NCR). The figure shows that the overall travel time increases with the NCR because an increasing number of NPVs generates traffic congestion and reduces the benefit of system-centric routing.
Figure 3(b) shows the simulation for the first and second routing iterations. When public transportation had a longer travel time than the CPVs, our routing framework modified the flow so that CPVs yielded the roads to public transportation without affecting the NPVs' travel time. This result shows that our method successfully modified the flow without incentivizing NPVs to deviate from their previous routing decisions.
Next, we conducted simulations for different weights for the modes of transportation. Figure 3(c) shows the simulation results at a microscopic level for three different weights. It shows that the travel-time difference between CPVs and NPVs increases as the weight of public transportation increases. This aligns with the intuition that CPVs increasingly sacrifice their travel time as the system prioritizes public transportation. Figure 5 illustrates MEM and the time difference \(\delta^{\text{pv}}(w)\) for both different weights and different noncompliance rates. Overall, as the weight on public transportation increased, the MEM and the time difference increased. This tendency appears because both public transportation and NPVs benefited in travel time at the expense of CPVs' travel time. Moreover, MEM increased as the noncompliance rate decreased because more vehicles were involved in system-centric routing. For specific noncompliance rates, there exist points where MEM jumps dramatically while the time difference increases only slightly. Thus, one can account for these results and provide incentives using a mechanism design to increase the limit \(\gamma\) and enhance MEM even more.
## VI Concluding Remarks
In this letter, we presented a routing framework for a mixed transportation system that improves mobility equity in the network. We formulated a routing game between all compliant vehicles and NPVs. Then, we proposed the MEM and formulated the MEM optimization problem. We presented a numerical example and an implementation method that yields meaningful results in a small network. Through the simulations, we verified that our framework improves MEM. Future work should consider using MEM in real road networks and accounting for the effect of the compliance rate by designing monetary incentives.
| この書面は、混合輸送システムにおけるルーティングフレームワークを提案します。これは、移動的公平性を向上させるためのものです。私たちは、コンプライアンスと非コンプライアンス車両の相互作用を支配する戦略的なルーティングゲームを提示します。このゲームでは、非コンプライアンス車両は認知階層理論でモデル化されます。そして、移動的公平性指標(MEM)を導入し、輸送網のアクセス可能性と公平性を量ります。MEMをルーティングフレームワークに統合することで、調整可能なウェイトを用いて異なる輸送手段の効率と公平性を最適化します。この提案は、混合輸送システムにおける技術の進歩と社会目標の橋渡しをすることで、効率と公平性を向上させます。私たちは、その結果の算術的な例と分析を提供します。
2309.12654 | Some extreme value theory for $θ$-expansions | The main aim of this paper is to develop extreme value theory for
$\theta$-expansions. We get the limit distribution of the largest value of
$\theta$-continued fraction mixing stationary stochastic process and some
related results. These are analogous to J.Galambos and W.Philipp theorems for
the regular continued fractions. We also have to note that a Borel-Bernstein
type theorem plays an important role. | Gabriela Ileana Sebe, Dan Lascu | 2023-09-22T06:51:31 | http://arxiv.org/abs/2309.12654v1 | # Some extreme value theory for \(\theta\)-expansions
###### Abstract
The main aim of this paper is to develop extreme value theory for \(\theta\)-expansions. We get the limit distribution of the largest value of the \(\theta\)-continued fraction mixing stationary stochastic process and some related results. These are analogous to J. Galambos and W. Philipp theorems for the regular continued fractions. We also have to note that a Borel-Bernstein type theorem plays an important role.
keywords: \(\theta\)-expansions, Borel-Bernstein type theorem, extreme value theory, Frechet law, \(\psi\)-mixing.
## 1 Introduction
The investigation initiated by Bhattacharya and Goswami [2] in random number generation led to the concept of a continued fraction expansion of a number in terms of an irrational \(\theta\in(0,1)\). This new expansion of positive reals, different from the regular continued fraction (RCF) expansion, is called the \(\theta\)_-expansion_. We mention that the case \(\theta=1\) refers to RCF-expansions.
For a fixed \(\theta\in(0,1)\), Chakraborty and Rao [4] have considered a generalization of the Gauss map \(T_{\theta}:[0,\theta]\rightarrow[0,\theta]\),
\[T_{\theta}(x):=\left\{\begin{array}{ll}\frac{1}{x}-\theta\left[ \frac{1}{x\theta}\right]&\mbox{if $x\in(0,\theta]$,}\\ \\ 0&\mbox{if $x=0$.}\end{array}\right. \tag{1.1}\]
Here \(\left\lfloor\cdot\right\rfloor\) stands for the integer part. Then every \(x\in(0,\theta)\) can be expanded into a finite or infinite \(\theta\)-expansion
\[x=\frac{1}{a_{1}\theta+\frac{1}{a_{2}\theta+\frac{1}{a_{3}\theta+\ddots}}}=:[a _{1}\theta,a_{2}\theta,a_{3}\theta,\ldots], \tag{1.2}\]
where
\[a_{1}=a_{1}(x):=\left\{\begin{array}{ll}\left\lfloor\frac{1}{x\theta}\right\rfloor &\mbox{if $x\neq 0$,}\\ \infty&\mbox{if $x=0$}\end{array}\right.\]
and
\[a_{n}=a_{n}(x):=a_{1}\left(T_{\theta}^{n-1}(x)\right),\quad n\in\mathbb{N}_{+}: =\left\{1,2,3,\ldots\right\},\]
with \(T_{\theta}^{0}(x)=x\). The positive integers \(a_{n}\in\mathbb{N}_{+}\) are called the _digits_ or _partial quotients_ of \(x\).
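For illustration, a short Python sketch computes the first digits of the \(\theta\)-expansion of a point by iterating the map (1.1); the values of \(m\), \(x\) and the number of digits are arbitrary.

```python
import math

def theta_digits(x, m, n_digits):
    """First digits a_1, ..., a_n of the theta-expansion of x for theta = 1/sqrt(m),
    obtained by iterating T_theta(x) = 1/x - theta * floor(1/(x*theta))."""
    theta = 1.0 / math.sqrt(m)
    digits = []
    for _ in range(n_digits):
        if x == 0.0:
            break
        a = math.floor(1.0 / (x * theta))
        digits.append(a)
        x = 1.0 / x - theta * a          # apply the map T_theta
    return digits

print(theta_digits(x=0.3, m=2, n_digits=8))   # with theta^2 = 1/m, every digit is >= m
```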
Let \(\mathcal{B}_{[0,\theta]}\) denote the \(\sigma\)-algebra of all Borel subsets of \([0,\theta]\). It is obvious that the digits \(a_{n}\), \(n\in\mathbb{N}_{+}\), are random variables which are defined almost surely on \(([0,\theta],\mathcal{B}_{[0,\theta]})\) with respect to any probability measure on \(\mathcal{B}_{[0,\theta]}\) that assigns probability \(0\) to the set of rationals in \([0,\theta]\). An example of such a measure is Lebesgue measure \(\lambda_{\theta}\) on \([0,\theta]\).
It was shown in [4, 11] that this expansion has many of the usual properties of RCFs. A natural question is whether the dynamical system given by the transformation \(T_{\theta}\) admits an absolutely continuous invariant probability like the Gauss measure in the case \(\theta=1\). Chakraborty and Rao [4] identified, for certain values of \(\theta\) (for example, if \(\theta^{2}=\frac{1}{m}\), \(m\in\mathbb{N}_{+}\)), the invariant measure for the transformation \(T_{\theta}\) as
\[\mathrm{d}\gamma_{\theta}:=\frac{1}{\log\left(1+\theta^{2}\right)}\frac{ \theta\,\mathrm{d}x}{1+\theta x}. \tag{1.3}\]
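For completeness, one checks directly that (1.3) defines a probability measure on \([0,\theta]\): \[\gamma_{\theta}([0,\theta])=\frac{1}{\log\left(1+\theta^{2}\right)}\int_{0}^{\theta}\frac{\theta\,\mathrm{d}x}{1+\theta x}=\frac{\log\left(1+\theta^{2}\right)}{\log\left(1+\theta^{2}\right)}=1.\]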
Moreover, if \(\theta^{2}=\frac{1}{m}\), \(m\in\mathbb{N}_{+}\), \([a_{1}\theta,a_{2}\theta,a_{3}\theta,\ldots]\) is the \(\theta\)-expansion of any \(x\in(0,\theta)\) if and only if the following conditions hold:
1. \(a_{n}\geq m\) for any \(n\in\mathbb{N}_{+}\)
2. in the case when \(x\) has a finite expansion, i.e., \(x=[a_{1}\theta,a_{2}\theta,\ldots,a_{n}\theta]\), then \(a_{n}\geq m+1\).
It was proved in [4] that the dynamical system \(([0,\theta],T_{\theta})\) is ergodic and the measure \(\gamma_{\theta}\) is invariant under \(T_{\theta}\), that is, \(\gamma_{\theta}(A)=\gamma_{\theta}(T_{\theta}^{-1}(A))\) for any \(A\in\mathcal{B}_{[0,\theta]}\). Therefore, \((a_{n})_{n\in\mathbb{N}_{+}}\) is a strictly stationary sequence on \(([0,\theta],\mathcal{B}_{[0,\theta]},\gamma_{\theta})\).
For more results about \(\theta\)-expansions see [5, 10, 11, 12] and references therein.
Every irrational \(x\in(0,\theta)\setminus\mathbb{Q}=:\Omega\) has an infinite \(\theta\)-expansion. Note that for all \(n\in\mathbb{N}_{+}\), \(a_{n}(x)\geq m\) and \(T_{\theta}^{n}([a_{1}\theta,a_{2}\theta,\ldots])=[a_{n+1}\theta,a_{n+2}\theta,\ldots]\).
For all \(n\in\mathbb{N}_{+}\), we call the finite truncation of (1.2)
\[\frac{p_{n}(x)}{q_{n}(x)}=[a_{1}(x)\theta,a_{2}(x)\theta,\ldots,a_{n}(x)\theta]\]
the _\(n\)-th convergent_ of the \(\theta\)-expansion of \(x\).
For every infinite \(\theta\)-expansion \([a_{1}\theta,a_{2}\theta,\ldots]\) the sequences \(\{p_{n}\}_{n\geq-1}\) and \(\{q_{n}\}_{n\geq-1}\) can be obtained by the following recursive relations
\[p_{n}(x) = a_{n}(x)\theta p_{n-1}(x)+p_{n-2}(x), \tag{1.4}\] \[q_{n}(x) = a_{n}(x)\theta q_{n-1}(x)+q_{n-2}(x), \tag{1.5}\]
with \(p_{-1}(x):=1\), \(p_{0}(x):=0\), \(q_{-1}(x):=0\) and \(q_{0}(x):=1\). Then by induction, we have
\[p_{n-1}(x)q_{n}(x)-p_{n}(x)q_{n-1}(x)=(-1)^{n},\quad n\in\mathbb{N}. \tag{1.6}\]
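The recursions (1.4)-(1.5) and the identity (1.6) are easy to verify numerically; the following Python sketch uses an arbitrary admissible digit sequence (all digits at least \(m\)) and compares \(p_{n}/q_{n}\) with the backward evaluation of the finite continued fraction.

```python
import math

m = 3
theta = 1.0 / math.sqrt(m)
digits = [3, 5, 4, 7, 3, 6, 9, 4]            # any digits >= m are admissible

# forward recursions (1.4)-(1.5) with p_{-1}=1, p_0=0, q_{-1}=0, q_0=1
p_prev, p_cur = 1.0, 0.0
q_prev, q_cur = 0.0, 1.0
for n, a in enumerate(digits, start=1):
    p_prev, p_cur = p_cur, a * theta * p_cur + p_prev
    q_prev, q_cur = q_cur, a * theta * q_cur + q_prev
    # identity (1.6): p_{n-1} q_n - p_n q_{n-1} = (-1)^n
    assert abs(p_prev * q_cur - p_cur * q_prev - (-1) ** n) < 1e-6

# backward evaluation of [a_1 theta, a_2 theta, ..., a_n theta]
value = 0.0
for a in reversed(digits):
    value = 1.0 / (a * theta + value)
print(p_cur / q_cur, value)                  # the two numbers agree
```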
By using (1.4) and (1.5), we can verify that
\[x=\frac{p_{n}(x)+T_{\theta}^{n}(x)p_{n-1}(x)}{q_{n}(x)+T_{\theta}^{n}(x)q_{n-1}(x )},\quad n\geq 1. \tag{1.7}\]
Using (1.6) and (1.7) we obtain
\[\left|x-\frac{p_{n}(x)}{q_{n}(x)}\right|=\frac{1}{q_{n}(x)\left(\left(T_{ \theta}^{n}(x)\right)^{-1}q_{n}(x)+q_{n-1}(x)\right)},\quad n\geq 1. \tag{1.8}\]
Since \(a_{n+1}(x)\theta\leq\left(T_{\theta}^{n}(x)\right)^{-1}\leq(a_{n+1}(x)+1)\theta\), using (1.4) and (1.5) in (1.8) we get
\[\frac{1}{q_{n}(x)(q_{n+1}(x)+\theta q_{n}(x))}\leq\left|x-\frac{p_{n}(x)}{q_{n }(x)}\right|\leq\frac{1}{q_{n}(x)q_{n+1}(x)},\quad n\geq 1. \tag{1.9}\]
From (1.5), we have that \(q_{n}(x)\geq\theta\), \(n\in\mathbb{N}_{+}\). Further, also from (1.5) and by induction we have that
\[q_{n}(x)\geq\left\lfloor\frac{n}{2}\right\rfloor\theta^{2}. \tag{1.10}\]
Finally, from (1.9) and (1.10) it follows that \([a_{1}(x)\theta,a_{2}(x)\theta,\ldots,a_{n}(x)\theta]\to x\) as \(n\to\infty\). Relation (1.9) means that the degree of accuracy of this approximation depends on the growth rate of partial quotients.
In the case of RCFs, Borel [3] and Bernstein [1] gave a result, called the _Borel-Bernstein theorem_ or the "\(0-1\)" _law_, describing the growth rate of the partial quotients in the sense of Lebesgue measure. Our first result, Theorem 3.1, is an analogue of the Borel-Bernstein theorem for \(\theta\)-expansions. We also show in Section 5 that the Borel-Bernstein type theorem plays an important role in the case of \(\theta\)-expansions.
In Sections 4 and 5 we state some results concerning extreme value theory for \(\theta\)-expansions. These results are new in the sense that they appear not to have been stated elsewhere before.
Extreme value theory for RCF digits first emerged in the 1970's. The results of Galambos [6, 7] concerning the maximal RCF digit were improved by Philipp [9] to give a complete answer to a conjecture of Erdos.
In Section 4 we derive a Frechet law concerning the partial maxima of the growth rate of the digit sequence. Theorems 4.5 and 4.6 extend previous work of Galambos [6] and Philipp [9] on the asymptotic behavior of the largest digit of RCF-expansions. To obtain these results we need the \(\theta\)-continued fraction mixing property, together with a condition on the speed of convergence of the mixing.
In Section 5 we give some iterated logarithm results (Theorem 5.2 and Corollary 5.4) for the largest digit of \(\theta\)-expansions.
## 2 Preliminaries
Let us fix \(\theta^{2}=1/m\), \(m\in\mathbb{N}_{+}\). Putting \(\mathbb{N}_{m}:=\{m,m+1,\ldots\}\), \(m\in\mathbb{N}_{+}\), the partial quotients \(a_{n}\), \(n\in\mathbb{N}_{+}\), take positive integer values in \(\mathbb{N}_{m}\).
We now introduce a partition of the interval \([0,\theta]\) which is natural to the \(\theta\)-expansions. Such a partition is generated by the cylinders of rank \(n\). For any \(n\in\mathbb{N}_{+}\) and \(i^{(n)}=(i_{1},\ldots,i_{n})\in\mathbb{N}_{m}^{n}\), define the \(n\)_-th cylinder_ of \(\theta\)-expansion by
\[C\left(i^{(n)}\right)=\{x\in\Omega:a_{k}(x)=i_{k}\text{ for }k=1,\ldots,n\},\]
where \(C\left(i^{(0)}\right)=[0,\theta]\). For any \(i\in\mathbb{N}_{m}\), we have
\[C\left(i\right)=\left\{x\in\Omega:a_{1}(x)=i\right\}=\left(\frac{1}{(i+1)\theta },\frac{1}{i\theta}\right). \tag{2.1}\]
If \(n\in\mathbb{N}_{+}\) and \(i_{n}\in\mathbb{N}_{m}\), we will write
\[C(a_{1},\ldots,a_{n})=C\left(i^{(n)}\right).\]
Next we recall some known results for later use. From the definition of \(T_{\theta}\) and (1.7) we have for any \(n\in\mathbb{N}_{+}\) and \((a_{1},\ldots,a_{n})\in\mathbb{N}_{m}^{n}\),
\[C(a_{1},\ldots,a_{n})=\left\{\begin{array}{ll}\left[\frac{p_{n}}{q_{n}}, \frac{p_{n}+\theta p_{n-1}}{q_{n}+\theta q_{n-1}}\right)&\text{if $n$ is even,}\\ \\ \left(\frac{p_{n}+\theta p_{n-1}}{q_{n}+\theta q_{n-1}},\frac{p_{n}}{q_{n}} \right)&\text{if $n$ is odd.}\end{array}\right. \tag{2.2}\]
Using (1.6) we get
\[\lambda_{\theta}\left(C\left(a_{1},\ldots,a_{n}\right)\right)=\frac{1}{\theta }\left|\frac{p_{n}}{q_{n}}-\frac{p_{n}+\theta p_{n-1}}{q_{n}+\theta q_{n-1}} \right|=\frac{1}{q_{n}(q_{n}+\theta q_{n-1})}=\frac{1}{q_{n}^{2}(1+\theta s_{ n})}, \tag{2.3}\]
where \(s_{n}=\frac{q_{n-1}}{q_{n}}\), \(n\in\mathbb{N}_{+}\) and \(s_{0}=0\). Since \(s_{n}\in[0,\theta]\), it follows from (2.3) that
\[\frac{1}{2q_{n}^{2}}\leq\frac{1}{(1+\theta^{2})q_{n}^{2}}\leq\lambda_{\theta} \left(C\left(a_{1},\ldots,a_{n}\right)\right)\leq\frac{1}{q_{n}^{2}}. \tag{2.4}\]
It is of interest to calculate the approximate proportion of the \(n\)-th level cylinder set \(C\left(a_{1},\ldots,a_{n}\right)\) that is occupied by each of the \((n+1)\)-th level cylinder sets \(C\left(a_{1},\ldots,a_{n},k\right)\). Notice that the endpoints of the interval \(C\left(a_{1},\ldots,a_{n},k\right)\) are \(\frac{p_{n+1}}{q_{n+1}}\) and \(\frac{p_{n+1}+\theta p_{n}}{q_{n+1}+\theta q_{n}}\) with \(p_{n+1}=k\theta p_{n}+p_{n-1}\) and \(q_{n+1}=k\theta q_{n}+q_{n-1}\). So we obtain
\[\frac{p_{n+1}}{q_{n+1}}=\frac{k\theta p_{n}+p_{n-1}}{k\theta q_{n}+q_{n-1}}, \quad\frac{p_{n+1}+\theta p_{n}}{q_{n+1}+\theta q_{n}}=\frac{(k+1)\theta p_{n} +p_{n-1}}{(k+1)\theta q_{n}+q_{n-1}}.\]
Direct computation yields that
\[\lambda_{\theta}\left(C\left(a_{1},\ldots,a_{n},k\right)\right)=\frac{1}{(k \theta q_{n}+q_{n-1})((k+1)\theta q_{n}+q_{n-1})}=\frac{1}{k^{2}q_{n}^{2}\left( \theta+\frac{s_{n}}{k}\right)\left(\left(1+\frac{1}{k}\right)\theta+\frac{s_ {n}}{k}\right)}.\]
By (2.3) it follows that
\[\frac{\lambda_{\theta}\left(C\left(a_{1},\ldots,a_{n},k\right) \right)}{\lambda_{\theta}\left(C\left(a_{1},\ldots,a_{n}\right)\right)} = \frac{q_{n}^{2}(1+\theta s_{n})}{k^{2}q_{n}^{2}\left(\theta+\frac{s _{n}}{k}\right)\left(\left(1+\frac{1}{k}\right)\theta+\frac{s_{n}}{k}\right)}\] \[= \frac{1+\theta s_{n}}{k^{2}\left(\theta+\frac{s_{n}}{k}\right) \left(\left(1+\frac{1}{k}\right)\theta+\frac{s_{n}}{k}\right)}.\]
Since
\[\theta^{2}<\left(\theta+\frac{s_{n}}{k}\right)\left(\left(1+\frac{1}{k}\right) \theta+\frac{s_{n}}{k}\right)<\theta^{2}\left(1+\frac{1}{k}\right)\left(1+\frac {2}{k}\right)<6\theta^{2}<6,\]
for \(k\geq m\), we find that
\[\frac{1}{6k^{2}}<\frac{\lambda_{\theta}\left(C\left(a_{1},\ldots,a_{n},k \right)\right)}{\lambda_{\theta}\left(C\left(a_{1},\ldots,a_{n}\right)\right)}< \frac{1+\theta^{2}}{k^{2}\theta^{2}}=\frac{m+1}{k^{2}}. \tag{2.5}\]
Next, we give some lemmas for later use.
**Lemma 2.1**.: _Let \(k\geq m\), then_
\[\frac{1}{6k^{2}}<\lambda_{\theta}\left(\bigcup_{a_{1},\ldots,a_{n}\geq m}C \left(a_{1},\ldots,a_{n},k\right)\right)<\frac{m+1}{k^{2}}.\]
Proof.: Using (2.5), since \(\sum_{a_{1},\ldots,a_{n}\geq m}\lambda_{\theta}\left(C\left(a_{1}, \ldots,a_{n}\right)\right)=1\), the proof is completed.
We recall the following well-known and extremely useful result.
**Lemma 2.2** (Borel-Cantelli).: _Let \((X,\mathcal{X},\mu)\) be a measurable space. Let \(\{C_{n}\}_{n\geq 1}\) be a sequence of \(\mathcal{X}\)-measurable sets and define the \(\limsup\) set_
\[C_{\infty}=\limsup_{n\to\infty}C_{n}=\bigcap_{n\geq 1}\bigcup_{m\geq n}C_{m}= \{x\in X:x\in C_{n}\text{ for infinitely many }n\in\mathbb{N}_{+}\}.\]
_Then, if \(\sum_{n\geq 1}\mu(C_{n})<\infty\), we have that \(\mu(C_{\infty})=0\)._
## 3 A Borel-Bernstein-type theorem
Our first main result is the following theorem.
**Theorem 3.1** (Borel-Bernstein-type theorem).: _Let \(\varphi:\mathbb{N}_{+}\to(0,+\infty)\) be a function and_
\[A_{\varphi}=\{x\in\Omega:a_{n}(x)>\varphi(n)\text{ for infinitely many }n\in\mathbb{N}_{+}\}.\]
_Then we have_
\[\lambda_{\theta}(A_{\varphi})=\left\{\begin{array}{ll}0&\text{ if }\sum_{n \geq 1}\frac{1}{\varphi(n)}<\infty,\\ \\ 1&\text{ if }\sum_{n\geq 1}\frac{1}{\varphi(n)}=\infty.\end{array}\right.\]
Proof.: Let \(A_{n}=\{x\in\Omega:a_{n}(x)>\varphi(n)\}\), thus \(A_{\varphi}=\limsup_{n\to\infty}A_{n}=\bigcap_{j\geq 1}\bigcup_{n\geq j}A_{n}\). By Lemma 2.1, one has
\[\lambda_{\theta}(A_{n})=\lambda_{\theta}\left(\bigcup_{a_{1},\ldots,a_{n-1} \geq m}\bigcup_{k>\varphi(n)}C\left(a_{1},\ldots,a_{n-1},k\right)\right)<\sum _{k\geq\lfloor\varphi(n)\rfloor+1}\frac{m+1}{k^{2}}<\frac{m+1}{\lfloor\varphi( n)\rfloor}<\frac{2(m+1)}{\varphi(n)}.\]
Thus the convergence part of the Borel-Cantelli lemma enables us to conclude that \(\lambda_{\theta}(A_{\varphi})=0\) when \(\sum_{n\geq 1}\frac{1}{\varphi(n)}<\infty\).
Suppose now \(\sum_{n\geq 1}\frac{1}{\varphi(n)}=\infty\). Notice that
\[\lambda_{\theta}(A_{\varphi})=\lambda_{\theta}\left(\bigcap_{j\geq 1}\bigcup_{n \geq j}A_{n}\right)=1\iff\lambda_{\theta}(A_{\varphi}^{c})=\lambda_{\theta} \left(\bigcup_{j\geq 1}\bigcap_{n\geq j}A_{n}^{c}\right)=0.\]
Thus we need only to show \(\lambda_{\theta}\left(\bigcap_{n\geq j}A_{n}^{c}\right)=0\), where
\[A_{n}^{c}=\{x\in\Omega:a_{n}(x)\leq\varphi(n)\}.\]
Let
\[B_{j,\ell}=\bigcap_{j<n\leq j+\ell}A_{n}^{c}.\]
Then
\[\lambda_{\theta}\left(\bigcap_{n\geq j+1}A_{n}^{c}\right)=\lim_{\ell\to\infty }\lambda_{\theta}(B_{j,\ell}).\]
By the definition of \(B_{j,\ell}\), we have
\[B_{j,1}=\bigcup_{a_{1},\ldots,a_{j}\geq m}\bigcup_{m\leq k\leq\varphi(j+1)}C \left(a_{1},\ldots,a_{j},k\right)=\bigcup_{a_{1},\ldots,a_{j}\geq m}\left(C(a_ {1},\ldots,a_{j})\setminus\bigcup_{k>\varphi(j+1)}C\left(a_{1},\ldots,a_{j},k \right)\right).\]
By Lemma 2.1, we obtain that
\[\sum_{k\geq\lfloor\varphi(j+1)\rfloor+1}\frac{\lambda_{\theta}\left(C\left(a _{1},\ldots,a_{j},k\right)\right)}{\lambda_{\theta}\left(C\left(a_{1},\ldots,a _{j}\right)\right)}>\sum_{k\geq\lfloor\varphi(j+1)\rfloor+1}\frac{1}{6k^{2}}> \sum_{k\geq\lfloor\varphi(j+1)\rfloor+1}\frac{1}{6k(k+1)}>\frac{1}{6(\varphi(j +1)+1)}.\]
Hence
\[\lambda_{\theta}(B_{j,1})\leq\sum_{a_{1},\ldots,a_{j}\geq m}\lambda_{\theta} \left(C\left(a_{1},\ldots,a_{j}\right)\right)\cdot\left(1-\frac{1}{6(\varphi( j+1)+1)}\right)=1-\frac{1}{6(\varphi(j+1)+1)}.\]
Since
\[B_{j,\ell+1} = \{x\in\Omega:a_{j+1}\leq\varphi(j+1),\ldots,a_{j+\ell+1}\leq \varphi(j+\ell+1)\}\] \[= \{x\in B_{j,\ell}:a_{j+\ell+1}\leq\varphi(j+\ell+1)\},\] \[\lambda_{\theta}(B_{j,\ell+1})\leq\left(1-\frac{1}{6(\varphi(j+ \ell+1)+1)}\right)\lambda_{\theta}(B_{j,\ell}).\]
By induction,
\[\lambda_{\theta}(B_{j,\ell})\leq\prod_{i=1}^{\ell}\left(1-\frac{1}{6(\varphi( j+i)+1)}\right)\leq\prod_{i=1}^{\ell}\left(1-\frac{1}{12\varphi(j+i)}\right) \leq\exp\left(-\sum_{i=1}^{\ell}\frac{1}{12\varphi(j+i)}\right).\]
Here we use the fact that \(1-x\leq\exp(-x)\) if \(x\geq 0\). One has \(\lim\limits_{\ell\to\infty}\lambda_{\theta}(B_{j,\ell})=0\) by the fact that \(\sum\limits_{n\geq 1}\dfrac{1}{\varphi(n)}=\infty\). Therefore, \(\lambda_{\theta}\left(A_{\varphi}^{c}\right)=0\), which completes the proof.
**Corollary 3.2**.: _For \(\lambda_{\theta}\)- a.e. \(x\in[0,\theta]\), we have that_
\[a_{n}(x)>n\log n\;\;\text{for infinitely many}\;\;n\in\mathbb{N}_{+},\]
_whereas for every \(\varepsilon>0\), we have that_
\[a_{n}(x)<n(\log n)^{1+\varepsilon}\;\;\text{for all sufficiently large}\;\;n\in \mathbb{N}_{+}.\]
Proof.: This follows from Theorem 3.1 immediately on the observation that
\[\sum\limits_{n\geq 1}\dfrac{1}{n\log n}=\infty\;\;\text{and}\;\sum\limits_{n \geq 1}\dfrac{1}{n(\log n)^{1+\varepsilon}}<\infty\]
for all \(\varepsilon>0\).
## 4 The asymptotic behavior of the largest digit in \(\theta\)-expansions
In the sequel we shall use the fundamental facts of the metric theory of \(\theta\)-expansions. One of these facts is that the stochastic process arising from \(\theta\)-expansion digits has the \(\psi\)-mixing property.
**Definition 4.1** (\(\psi\)-mixing).: _Let \((X,X,\mu)\) denote a probability space and let \(\xi_{j}:X\to\mathbb{R}\) denote a stationary sequence of random variables. For any \(k\in\mathbb{N}_{+}\) let \(\mathcal{B}_{1}^{k}=\sigma(\xi_{1},\ldots,\xi_{k})\) and \(\mathcal{B}_{k}^{\infty}=\sigma(\xi_{k},\xi_{k+1},\ldots)\) denote the \(\sigma\)-algebras generated by the random variables \(\xi_{1},\ldots,\xi_{k}\), respectively \(\xi_{k},\xi_{k+1},\ldots\). Then \(\{\xi_{j}\}\) is said to be \(\psi\)-mixing if for any sets \(A\in\mathcal{B}_{1}^{k}\) and \(B\in\mathcal{B}_{k+n}^{\infty}\) we have_
\[|\mu(A\cap B)-\mu(A)\mu(B)|\leq\psi(n)\mu(A)\mu(B),\]
_where \(\psi:\mathbb{N}_{+}\to\mathbb{R}\) is a function for which \(\psi(n)\to 0\) as \(n\to\infty\)._
The random variables \(a_{n}(x)\), \(n\in\mathbb{N}_{+}\), form a stationary sequence due to the invariance of the measure \(\gamma_{\theta}\) with respect to \(T_{\theta}\).
**Lemma 4.2**.: _For all \(n\in\mathbb{N}_{+}\) and \(w\in\mathbb{N}_{m}\),_
\[\gamma_{\theta}(a_{n}(x)\geq w)=\dfrac{1}{\log\left(1+\theta^{2}\right)}\log \left(1+\dfrac{1}{w}\right)=:p_{\theta}(w).\]
Proof.: Using (2.1) and the fact that \((a_{n})_{n\in\mathbb{N}_{+}}\) is a strictly stationary sequence as the transformation \(T_{\theta}\) is measure-preserving with respect to \(\gamma_{\theta}\), we have
\[\gamma_{\theta}(a_{n}(x)\geq w) = \gamma_{\theta}(a_{1}(x)\geq w)=\dfrac{1}{\log\left(1+\theta^{2}\right)}\sum\limits_{k\geq w}\int_{\frac{1}{(k+1)\theta}}^{\frac{1}{k\theta}}\dfrac{\theta\,\mathrm{d}x}{1+\theta x}\] \[= \dfrac{1}{\log\left(1+\theta^{2}\right)}\sum\limits_{k\geq w}\left(\log\left(1+\dfrac{1}{k}\right)-\log\left(1+\dfrac{1}{k+1}\right)\right)\] \[= \dfrac{1}{\log\left(1+\theta^{2}\right)}\log\left(1+\dfrac{1}{w}\right).\]
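This distribution is easy to confirm numerically: sampling \(x\) from \(\gamma_{\theta}\) by inverting \(F(x)=\log(1+\theta x)/\log(1+\theta^{2})\) and reading off the first digit reproduces \(p_{\theta}(w)\). In the following Python sketch the choice of \(m\), \(w\) and the sample size is arbitrary.

```python
import math
import random

m = 2
theta = 1.0 / math.sqrt(m)
log_norm = math.log(1.0 + theta ** 2)

def sample_gamma_theta():
    """Draw x ~ gamma_theta via inversion of F(x) = log(1 + theta*x) / log(1 + theta^2)."""
    u = 1.0 - random.random()            # uniform in (0, 1]
    return ((1.0 + theta ** 2) ** u - 1.0) / theta

random.seed(0)
w, n_samples = 4, 200_000
hits = sum(1 for _ in range(n_samples)
           if math.floor(1.0 / (sample_gamma_theta() * theta)) >= w)
print(hits / n_samples, math.log(1.0 + 1.0 / w) / log_norm)   # empirical vs. exact p_theta(w)
```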
**Lemma 4.3**.: _Let \(\{f_{\theta,n}(x)\}_{n\geq 1}\) be a sequence of functions \(f_{\theta,n}\in C^{2}[0,\theta]\) defined recursively by_
\[f_{\theta,n+1}(x)=\sum_{i\geq m}\left(f_{\theta,n}\left(\frac{1}{\theta i} \right)-f_{\theta,n}\left(\frac{1}{x+\theta i}\right)\right),\quad n\in\mathbb{N}\]
_with \(f_{\theta,0}(0)=0\) and \(f_{\theta,0}(\theta)=1\)._
_Set_
\[g_{\theta,n}(x)=(\theta x+1)f_{\theta,n}^{\prime}(x),\,\,x\in[0,\theta]. \tag{4.1}\]
_Then_
\[\left\|g_{\theta,n}^{\prime}\right\|\leq q_{\theta}^{n}\cdot\left\|g_{\theta, 0}^{\prime}\right\|,\,n\in\mathbb{N}_{+} \tag{4.2}\]
_with_
\[q_{\theta}:=m\left(\sum_{i\geq m}\left(\frac{m}{i^{3}(i+1)}+\frac{i+1-m}{i(i+ 1)^{3}}\right)\right)<1. \tag{4.3}\]
_Here \(\left\|\cdot\right\|\) stands for the supremum norm._
Proof.: Since
\[g_{\theta,n+1}(x)=\sum_{i\geq m}P_{\theta,i}(x)g_{\theta,n}\left(u_{\theta,i}( x)\right),\]
where
\[P_{\theta,i}(x):=\frac{\theta x+1}{(x+\theta i)(x+\theta(i+1))}=\frac{1}{ \theta}\left[\frac{1-\theta^{2}i}{x+\theta i}-\frac{1-\theta^{2}(i+1)}{x+ \theta(i+1)}\right]\]
and
\[u_{\theta,i}(x):=\frac{1}{x+\theta i}\]
we have
\[g_{\theta,n+1}^{\prime}(x)=\sum_{i\geq m}\frac{1-\theta^{2}(i+1)}{(x+\theta i )(x+\theta(i+1))^{3}}f_{\theta,n}^{\prime}(\alpha_{\theta,i})-\sum_{i\geq m} \frac{P_{\theta,i}(x)}{(x+\theta i)^{2}}f_{\theta,n}^{\prime}(u_{\theta,i}(x)),\]
where \(u_{\theta,i+1}(x)<\alpha_{\theta,i}<u_{\theta,i}(x)\). Then
\[\left\|g_{\theta,n+1}^{\prime}\right\|\leq\left\|g_{\theta,n}^{\prime}\right\| \cdot\max_{x\in[0,\theta]}\left(\sum_{i\geq m}\frac{\theta^{2}(i+1)-1}{(x+ \theta i)(x+\theta(i+1))^{3}}+\sum_{i\geq m}\frac{P_{\theta,i}(x)}{(x+\theta i )^{2}}\right).\]
Using that
\[\frac{\theta^{2}(i+1)-1}{(x+\theta i)(x+\theta(i+1))^{3}}\leq m^{2}\frac{ \theta^{2}(i+1)-1}{i(i+1)^{3}}\]
and
\[\sum_{i\geq m}\frac{P_{\theta,i}(x)}{(x+\theta i)^{2}}\leq m^{2}\sum_{i\geq m} \frac{1}{i^{3}(i+1)},\]
then we get (4.2) and (4.3).
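A direct numerical evaluation of (4.3) (truncating the series, with a few illustrative values of \(m\)) confirms that \(q_{\theta}<1\):

```python
def q_theta(m, terms=100_000):
    """Truncated value of (4.3): q_theta = m * sum_{i>=m} ( m/(i^3 (i+1)) + (i+1-m)/(i (i+1)^3) )."""
    s = sum(m / (i ** 3 * (i + 1)) + (i + 1 - m) / (i * (i + 1) ** 3)
            for i in range(m, m + terms))
    return m * s

for m in (1, 2, 5, 10):
    print(m, round(q_theta(m), 4))   # every value is strictly smaller than 1
```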
The random variables \(a_{n}(x)\), \(n\in\mathbb{N}_{+}\), are not independent. However, they satisfy a \(\psi\)-mixing condition. Actually, the sequence \(\{a_{n}\}_{n\in\mathbb{N}_{+}}\) is \(\psi\)-mixing under \(\gamma_{\theta}\) and the function \(\psi\) vanishes at an exponential rate.
**Lemma 4.4**.: _For any sets \(A\in\mathcal{B}_{1}^{k}=\sigma(a_{1},\ldots,a_{k})\) and \(B\in\mathcal{B}_{k+n}^{\infty}=\sigma(a_{k+n},a_{k+n+1},\ldots)\) we have_
\[|\gamma_{\theta}(A\cap B)-\gamma_{\theta}(A)\gamma_{\theta}(B)|\leq K_{\theta} q_{\theta}^{n}\gamma_{\theta}(A)\gamma_{\theta}(B), \tag{4.4}\]
_where \(0<q_{\theta}<1\) and \(K_{\theta}\) is a positive constant._
Proof.: Let \(C_{k}\) be the \(k\)-th cylinder with endpoints \(\frac{p_{k}}{q_{k}}\) and \(\frac{p_{k}+\theta p_{k-1}}{q_{k}+\theta q_{k-1}}\). Let
\[f_{\theta,n}(x)=\gamma_{\theta}\left(T_{\theta}^{n+k}(\omega)<x\,|\,C_{k} \right):=\frac{\gamma_{\theta}\left(\left(T_{\theta}^{n+k}(\omega)<x\right) \cap C_{k}\right)}{\gamma_{\theta}(C_{k})}\]
be the conditional distribution function of \(T_{\theta}^{n+k}(\omega)\) given \(C_{k}\). Obviously \(\left(\left(T_{\theta}^{k}(\omega)<x\right)\cap C_{k}\right)\) is an interval with endpoints \(\frac{p_{k}}{q_{k}}\) and \(\frac{p_{k}+x\theta p_{k-1}}{q_{k}+x\theta q_{k-1}}\). Thus we obtain
\[f_{\theta,0}(x)=\frac{1}{\gamma_{\theta}(C_{k})}\frac{(-1)^{k}}{\log\left(1+ \theta^{2}\right)}\left(\log\left(1+\theta\frac{p_{k}+x\theta p_{k-1}}{q_{k}+ x\theta q_{k-1}}\right)-\log\left(1+\theta\frac{p_{k}}{q_{k}}\right)\right).\]
If \(g_{\theta,n}\) is defined as in Lemma 4.3 let us put
\[K_{\theta}:=\sup_{x\in[0,\theta]}\left|g_{\theta,0}^{\prime}(x)\right|=\left| \left|g_{\theta,0}^{\prime}\right|\right|.\]
Hence by (4.1) and (4.2)
\[\left|f_{\theta,n}^{\prime}(x)-\frac{\theta}{(\theta x+1)\log\left(1+\theta^ {2}\right)}\right|\leq\frac{\left|g_{\theta,n}(x)-g_{\theta,n}(0)\right|}{ \theta x+1}+\frac{\left|g_{\theta,n}(0)-\frac{\theta}{\log\left(1+\theta^{2} \right)}\right|}{\theta x+1} \tag{4.5}\]
and
\[\left|g_{\theta,n}(x)-g_{\theta,n}(0)\right|=\left|\int_{0}^{x}g_{\theta,n}^{ \prime}(t)\mathrm{d}t\right|\leq\left|\left|g_{\theta,n}^{\prime}\right| \right|\cdot x\leq K_{\theta}q_{\theta}^{n}x.\]
Also, for some \(0<v_{\theta}<1\)
\[1 = f_{\theta,n}(\theta)=\int_{0}^{\theta}f_{\theta,n}^{\prime}(t) \mathrm{d}t=\int_{0}^{\theta}\frac{g_{\theta,n}(t)}{\theta t+1}\mathrm{d}t\] \[= g_{\theta,n}(0)\frac{\log\left(1+\theta^{2}\right)}{\theta}+ \int_{0}^{\theta}\frac{g_{\theta,n}(t)-g_{\theta,n}(0)}{\theta t+1}\mathrm{d}t\] \[= g_{\theta,n}(0)\frac{\log\left(1+\theta^{2}\right)}{\theta}+v_{ \theta}K_{\theta}q_{\theta}^{n}\left(1-\frac{\log\left(1+\theta^{2}\right)}{ \theta^{2}}\right)\]
and so
\[g_{\theta,n}(0)=\frac{\theta}{\log\left(1+\theta^{2}\right)}+v_{\theta}\frac{ K_{\theta}q_{\theta}^{n}}{\theta}\left(1-\frac{\theta^{2}}{\log\left(1+\theta^{2} \right)}\right).\]
Thus, from (4.5)
\[\left|f_{\theta,n}^{\prime}(x)-\frac{\theta}{(\theta x+1)\log\left( 1+\theta^{2}\right)}\right| \leq \frac{K_{\theta}q_{\theta}^{n}}{\theta x+1}+\frac{K_{\theta}q_{ \theta}^{n}}{\theta}\left(\frac{\theta^{2}}{\log\left(1+\theta^{2}\right)}-1 \right)\frac{1}{\theta x+1} \tag{4.6}\] \[< \frac{K_{\theta}q_{\theta}^{n}}{\theta x+1}\frac{\theta}{\log\left( 1+\theta^{2}\right)}. \tag{4.7}\]
Integrating (4.6) over any Borel set \(F\in\mathcal{B}_{[0,\theta]}\), we obtain
\[\left|\gamma_{\theta}\left(T_{\theta}^{-(n+k)}(F)\mid C_{k}\ \right)-\gamma_{ \theta}(F)\right|\leq K_{\theta}q_{\theta}^{n}\gamma_{\theta}(F).\]
Since each \(A\in\mathcal{B}_{1}^{k}\) is a countable union of disjoint \(C_{k}\) we obtain (4.4) and thus the proof is complete.
Define
\[L_{N}:=\max_{1\leq n\leq N}a_{n}(x),\quad x\in\Omega.\]
In the sequel we discuss the asymptotic behavior of the largest digit \(L_{N}\).
**Theorem 4.5**.: _For any \(y>0\), we have_
\[\lim_{N\to\infty}\gamma_{\theta}\left(x\in\Omega:L_{N}(x)<\frac{ Ny}{\log\left(1+\theta^{2}\right)}\right)=\exp\left(-\frac{1}{y}\right). \tag{4.8}\]
Proof.: _1st step._ Let
\[A_{n}=\{x\in\Omega:a_{n}(x)\geq w\},\]
which means
\[\bigcap_{n=1}^{N}A_{n}^{C}=\{x\in\Omega:L_{N}(x)<w\}=:B_{N}.\]
Given that \(B_{N}\) represents the event where none of the \(A_{n}\) occurs, the Poincare identity reveals that
\[\gamma_{\theta}(B_{N})=\sum_{k=0}^{N}(-1)^{k}S_{k} \tag{4.9}\]
with
\[S_{0}=1,\,S_{k}=\sum_{1\leq n_{1}<n_{2}<\ldots<n_{k}\leq N}\gamma_{\theta} \left(A_{n_{1}}\cap\ldots\cap A_{n_{k}}\right).\]
Thus, equation (4.9) provides an expression for the distribution function of \(L_{N}\). By selecting \(w=\left\lfloor\frac{Ny}{\log(1+\theta^{2})}\right\rfloor\), we demonstrate that the tail \(\sum_{k\geq Z}S_{k}\), where \(Z\) is a sufficiently large but fixed value, can be made arbitrarily small.
By repeatedly applying equation (4.4) and referring to Lemma 4.2, we obtain that
\[\gamma_{\theta}\left(A_{n_{1}}\cap\ldots\cap A_{n_{k}}\right)\leq(1+K_{\theta })^{k-1}\gamma_{\theta}(A_{n_{1}})\gamma_{\theta}(A_{n_{2}})\cdot\ldots\cdot \gamma_{\theta}(A_{n_{k}})<(1+K_{\theta})^{k}p_{\theta}^{k}(w). \tag{4.10}\]
For sufficiently large values of \(N\), we obtain
\[w=\left\lfloor\frac{Ny}{\log\left(1+\theta^{2}\right)}\right\rfloor\geq\frac {1}{2}\frac{Ny}{\log\left(1+\theta^{2}\right)}, \tag{4.11}\]
whence
\[p_{\theta}(w)\leq\frac{1}{w\log\left(1+\theta^{2}\right)}\leq\frac{2}{Ny}.\]
Therefore
\[\sum_{k\geq Z}S_{k} < \sum_{k\geq Z}\frac{N!}{(N-k)!k!}(1+K_{\theta})^{k}p_{\theta}^{k}(w) \leq\sum_{k\geq Z}\frac{N!}{(N-k)!k!}N^{-k}\left(\frac{2(1+K_{\theta})}{y} \right)^{k} \tag{4.12}\] \[\leq \sum_{k\geq Z}\frac{1}{k!}\left(\frac{2(1+K_{\theta})}{y}\right)^ {k}<\frac{1}{Z!}\left(\frac{4K_{\theta}}{y}\right)^{Z}\exp\left(\frac{4K_{ \theta}}{y}\right)\leq\varepsilon\]
as the value of \(Z\) is increased sufficiently.
_2nd step._ Let us divide \(S_{k}\) into two separate terms when considering \(k<Z\):
\[S_{k}=S_{k}^{\ast}+R_{k}. \tag{4.13}\]
Here, \(S_{k}^{\ast}\) represents the sum over all \(n_{1}<n_{2}<\ldots<n_{k}\) with \(n_{i+1}-n_{i}\geq t\) (\(i\geq 1\)), where \(t\) is a positive integer determined as follows. Let \(\eta>0\) be an arbitrary real number, and let \(t\) be the smallest integer \(n\) such that \(K_{\theta}q_{\theta}^{n}<\eta\). Next, we proceed to estimate \(S_{k}^{\ast}\). Using repeated applications of (4.4) and another reference to Lemma 4.2, we find that for any term belonging to \(S_{k}^{\ast}\),
\[\gamma_{\theta}\left(A_{n_{1}}\cap\ldots\cap A_{n_{k}}\right)=p_{\theta}^{k}( w)\left(1+\mathcal{O}_{k}(\eta)\right),\,n_{i}+t\leq n_{i+1}.\]
In the estimation of \(S_{k}^{\ast}\), the constant involved in \(\mathcal{O}_{k}(\eta)\) depends exclusively on \(k\). Hence
\[S_{k}^{\ast}=\frac{(N-(t-1)(k-1))!}{(N-(t-1)(k-1)-k)!k!}p_{\theta}^{k}(w) \left(1+\mathcal{O}_{k}(\eta)\right). \tag{4.14}\]
In order to estimate \(R_{k}\) in (4.13), it's important to observe that the overall estimation (4.10) is applicable to each of its individual terms and that its number of terms is
\[\frac{N!}{(N-k)!k!}-\frac{(N-(t-1)(k-1))!}{(N-(t-1)(k-1)-k)!k!}=o\left(N^{k} \right).\]
We thus have
\[R_{k}=o\left(N^{k}p_{\theta}^{k}(w)\right). \tag{4.15}\]
Considering that
\[p_{\theta}(w)=p_{\theta}\left(\left\lfloor\frac{Ny}{\log\left(1+\theta^{2}\right)}\right\rfloor\right)=\left(1+\mathcal{O}(N^{-1})\right)\frac{1}{Ny}\]
by (4.13), (4.14) and (4.15) we can deduce that
\[S_{k}=\left(1+\mathcal{O}_{k}(\eta)\right)\frac{y^{-k}}{k!}+o_{N}(1), \tag{4.16}\]
where \(k\) is fixed and \(o_{N}(1)\to 0\) as \(N\to\infty\).
_3rd step._ Finally, by (4.9), (4.12) and (4.16), we establish that for any given positive integer \(Z\)
\[\gamma_{\theta}\left(L_{N}<\left\lfloor\frac{Ny}{\log\left(1+\theta^{2}\right)}\right\rfloor\right)=\sum_{k=0}^{Z-1}(-1)^{k}\left(1+\mathcal{O}_{k}(\eta)\right)\frac{y^{-k}}{k!}+o_{N}(1)+o_{Z}(1),\]
the last term approaches \(0\) as \(Z\to\infty\). Letting \(N\to\infty\), and then \(\eta\to 0\), we deduce that for any positive integer \(Z\)
\[\lim_{N\to\infty}\gamma_{\theta}\left(L_{N}<\left\lfloor\frac{Ny}{\log\left(1+\theta^{2}\right)}\right\rfloor\right)=\sum_{k=0}^{Z-1}(-1)^{k}\frac{y^{-k}}{k!}+o_{Z}(1).\]
Given that the left-hand side remains independent of \(Z\), letting \(Z\to\infty\), we achieve the limit relation (4.8) while taking into account that the argument \(w\) in \(\{L_{N}<w\}\) is considered to be an integer in \(\mathbb{N}_{m}\). Since
\[\gamma_{\theta}\left(L_{N}<\left\lfloor\frac{Ny}{\log\left(1+\theta^{2}\right)} \right\rfloor\right)\leq\gamma_{\theta}\left(L_{N}<\frac{Ny}{\log\left(1+ \theta^{2}\right)}\right)\leq\gamma_{\theta}\left(L_{N}<\left\lfloor\frac{Ny}{ \log\left(1+\theta^{2}\right)}\right\rfloor+1\right)\]
the proof is complete.
**Theorem 4.6**.: _For any \(0<\delta<1\) and \(y>0\), we have_
\[\gamma_{\theta}\left(x\in\Omega:L_{N}(x)<\frac{Ny}{\log\left(1+\theta^{2}\right)}\right)=\exp\left(-\frac{1}{y}\right)+\mathcal{O}\left(\exp\left(-(\log N)^{\delta}\right)\right) \tag{4.17}\]
_where the constant involved in \(\mathcal{O}\) depends exclusively on \(\delta\)._
Proof.: We follow the proof of Theorem 4.5 with a particular choice of \(Z\) and \(t\). We choose \(Z=\left\lfloor\frac{\log N}{\log\log N}\right\rfloor\). For a specific \(0<\delta<1\), we choose \(\delta<\delta^{\prime}<1\), \(\varepsilon>0\) and \(\zeta>0\) so that \(1-\delta^{\prime}>\varepsilon+\zeta\). We assume that \(y\geq(\log N)^{-\delta}\). Applying Stirling's formula, we derive
\[\frac{1}{Z!}\asymp\frac{1}{\sqrt{2\pi}\,Z^{Z+1/2}\exp(-Z)}\asymp\frac{\exp\left(\frac{\log N}{\log\log N}\right)}{\left(\frac{\log N}{\log\log N}\right)^{\frac{\log N}{\log\log N}+\frac{1}{2}}}.\]
For \(N\) sufficiently large,
\[e\leq\left(\frac{\log N}{\log\log N}\right)^{\zeta}.\]
Thus, we obtain
\[\exp\left(\frac{\log N}{\log\log N}\right)\leq\left(\left(\frac{\log N}{\log \log N}\right)^{\zeta}\right)^{\frac{\log N}{\log\log N}}<N^{\zeta}.\]
Furthermore, for \(N\) sufficiently large
\[\left(\frac{\log N}{\log\log N}\right)^{\frac{\log N}{\log\log N}}>N^{1- \varepsilon}.\]
As a result, we obtain
\[\frac{1}{Z!}\ll\frac{1}{N^{1-\varepsilon-\zeta}}. \tag{4.18}\]
Alternatively, it is obvious that
\[\left(\frac{4K_{\theta}}{y}\right)^{Z}\exp\left(\frac{4K_{\theta}}{y}\right) \leq\left(4K_{\theta}\left(\log N\right)^{\delta}\right)^{\frac{\log N}{\log \log N}}\exp\left(4K_{\theta}\left(\log N\right)^{\delta}\right)<N^{\delta^{ \prime}}\]
when \(N\) is sufficiently large. Finally, using (4.12), we obtain
\[\sum_{k\geq Z}S_{k}\ll N^{-a} \tag{4.19}\]
for \(0<a<1-\varepsilon-\zeta-\delta^{\prime}\).
Setting \(t=\left\lfloor(\log N)^{2}\right\rfloor\), we estimate \(R_{k}\) for \(k<Z\)
\[R_{k} \leq \left(\frac{N!}{(N-k)!k!}-\frac{(N-(t-1)(k-1))!}{(N-(t-1)(k-1)-k)! k!}\right)(1+K_{\theta})^{k}p_{\theta}^{k}(w)\ll tZN^{k-1}(2K_{\theta})^{k}p_{ \theta}^{k}(w)\] \[\ll (\log N)^{3}\frac{1}{\log\log N}N^{k-1}(2K_{\theta})^{k}p_{ \theta}^{k}(w)\ll(\log N)^{3}N^{k-1}\left(\frac{4K_{\theta}}{Ny}\right)^{k}.\]
In cases where \(\frac{4K_{\theta}}{y}>1\), we proceed to evaluate, for \(N\) sufficiently large,
\[R_{k}\leq\frac{1}{N^{1-\varepsilon}}\left(\frac{4K_{\theta}}{y}\right)^{Z}\leq\frac{1}{N^{1-\varepsilon}}\left(4K_{\theta}(\log N)^{\delta}\right)^{\frac{\log N}{\log\log N}}<N^{-a} \tag{4.20}\]
for \(0<a<1-\varepsilon-\delta\). When \(\frac{4K_{\theta}}{y}<1\), the estimation becomes relatively straightforward. We can select the value of \(a\) to be the same as that in equation (4.19).
As a result, the number of terms in \(S_{k}^{*}\), \(k<Z\), is given by \(\frac{N!}{(N-k)!k!}+\mathcal{O}\left(N^{k-1}(\log N)^{3}\right)\). We have
\[S_{k}^{*} = \left(\frac{N!}{(N-k)!k!}+\mathcal{O}\left(N^{k-1}(\log N)^{3} \right)\right)\left(1+\mathcal{O}\left(N^{-1}\right)\right)^{k}(Ny)^{-k}\left( 1+\beta K_{\theta}q_{\theta}^{(\log N)^{2}}\right)^{k} \tag{4.21}\] \[= \frac{y^{-k}}{k!}+\mathcal{O}\left(N^{-a}\right)\]
where \(|\beta|\leq 1\). Subsequently, using (4.20) and (4.21), we deduce that \(S_{k}=\frac{y^{-k}}{k!}+\mathcal{O}\left(N^{-a}\right)\). In conclusion, using (4.19), we obtain
\[\gamma_{\theta}(B_{N})=\sum_{k=0}^{Z-1}\left((-1)^{k}\frac{y^{-k}}{k!}+ \mathcal{O}\left(N^{-a}\right)\right)+\mathcal{O}\left(N^{-a}\right)=\sum_{k= 0}^{Z-1}(-1)^{k}\frac{y^{-k}}{k!}+\mathcal{O}\left(N^{-a^{\prime}}\right)= \exp\left(-\frac{1}{y}\right)+\mathcal{O}\left(N^{-a^{\prime}}\right),\]
with \(0<a^{\prime}<a\).
## 5 Some iterated logarithm results
We begin with the following quantitative Borel-Cantelli lemma.
**Lemma 5.1** ([8]).: _Let \(\{E_{n}\}_{n\geq 1}\) be a sequence of measurable sets in a probability space \((X,X,\mu)\). Denote by \(A(N,x)\) the number of integers \(n\leq N\) such that \(x\in E_{n}\), i.e., \(A(N,x)=\sum_{n\leq N}\chi_{E_{n}}(x)\), where \(\chi_{E_{n}}\) is the characteristic function of \(E_{n}\). Define_
\[\varphi(N):=\sum_{n\leq N}\mu(E_{n}).\]
_Suppose that there exists a convergent series \(\sum_{k\geq 1}c_{k}\) with \(c_{k}\geq 0\) such that for all integers \(n>\ell\) we have_
\[\mu(E_{n}\cap E_{\ell})\leq\mu(E_{n})\mu(E_{\ell})+\mu(E_{n})c_{n-\ell}. \tag{5.1}\]
_Then for any \(\varepsilon>0\)_
\[A(N,x)=\varphi(N)+\mathcal{O}\left(\varphi^{1/2}(N)\log^{3/2+\varepsilon} \varphi(N)\right)\quad\mu\text{-a.s.} \tag{5.2}\]
**Theorem 5.2**.: _For a.e. \(x\in[0,\theta]\) we have_
\[\liminf_{N\to\infty}\frac{L_{N}(x)\log\log N}{N}=\frac{1}{\log\left(1+\theta^{ 2}\right)}.\]
Proof.: Since for all \(A\in\mathcal{B}_{[0,\theta]}\)
\[\frac{\lambda_{\theta}(A)}{\left(1+\theta^{2}\right)\log\left(1+\theta^{2} \right)}\leq\gamma_{\theta}(A)\leq\frac{\lambda_{\theta}(A)}{\log\left(1+ \theta^{2}\right)}\]
the measures \(\gamma_{\theta}\) and \(\lambda_{\theta}\) are equivalent. Hence it suffices to prove the statement for all \(x\) except a set of \(\gamma_{\theta}\)-measure \(0\). Consider integers \(M\) and \(N\) with \(M,N\geq 0\). Define
\[L(M,N,x):=\max_{M<n\leq M+N}a_{n}(x),\]
\[\varphi(n):=\frac{n}{\log\log n\log\left(1+\theta^{2}\right)}\]
and
\[E_{k}:=\left(x\in\Omega:L\left(k^{2k},k^{2(k+1)},x\right)\leq\varphi\left(k^{ 2(k+1)}\right)\right).\]
Due to the \(T_{\theta}\)-invariance of \(\gamma_{\theta}\), we can deduce from Theorem 4.6 that, for any integer \(k\geq k_{0}\),
\[\gamma_{\theta}(E_{k}) = \gamma_{\theta}\left(x\in\Omega:L\left(k^{2k},k^{2(k+1)},x\right) \leq\varphi\left(k^{2(k+1)}\right)\right) \tag{5.3}\] \[= \gamma_{\theta}\left(x\in\Omega:L\left(0,k^{2(k+1)},x\right) \leq\varphi\left(k^{2(k+1)}\right)\right)\] \[\geq \frac{1}{2}\exp\left(-\log\log k^{2(k+1)}\right)\geq\frac{1}{8}( k\log k)^{-1}.\]
Obviously \(E_{k}\) depends only on \(a_{n}(x)\) with \(k^{2k}<n\leq k^{2(k+1)}+k^{2k}\). Consequently, according to Lemma 4.4, we can establish that for any pair of integers \(k<\ell\)
\[|\gamma_{\theta}(E_{k}\cap E_{\ell})-\gamma_{\theta}(E_{k})\gamma_{\theta}(E_ {\ell})|\leq K_{\theta}q_{\theta}^{\ell-k}\gamma_{\theta}(E_{k})\gamma_{\theta }(E_{\ell}),\]
since \((k+1)^{2(k+1)}-k^{2(k+1)}-k^{2k}\geq 1\).
As a result, Lemma 5.1 implies that \(x\in E_{k}\) for infinitely many \(k\) (a.e. \(x\)), given that \(\varphi(N)\gg\log\log N\) according to (5.3).
On the other hand, by Lemma 4.2
\[\gamma_{\theta}(F_{k}) := \gamma_{\theta}\left(x\in\Omega:L\left(0,k^{2k},x\right)\geq \varphi\left(k^{2(k+1)}\right)\right)\] \[\leq \sum_{n\leq k^{2k}}\gamma_{\theta}\left(x\in\Omega:a_{n}(x)\geq \varphi\left(k^{2(k+1)}\right)\right)=k^{2k}p_{\theta}\left(\varphi\left(k^{2(k +1)}\right)\right)\] \[\leq k^{2k}\log\log k^{2(k+1)}\cdot k^{-2(k+1)}\leq k^{-3/2}.\]
Therefore, according to Lemma 5.1, \(x\in F_{k}\) only for finitely many \(k\) (a.e. \(x\)). Thus
\[x\in E_{k}\setminus F_{k}=\left(x\in\Omega:L\left(0,k^{2k}+k^{2(k+1)},x\right) \leq\varphi\left(k^{2(k+1)}\right)\right)\]
for infinitely many \(k\) (a.e. \(x\)), which implies that \(L\left(0,k^{2(k+1)},x\right)\leq\varphi\left(k^{2(k+1)}\right)\) holds for infinitely many \(k\) (a.e. \(x\)). Hence,
\[\liminf_{N\to\infty}\frac{L_{N}(x)\log\log N}{N}\leq\frac{1}{\log\left(1+ \theta^{2}\right)}\;\;\text{a.e.} \tag{5.4}\]
Now, we proceed to prove the converse inequality. Let \(b>1\). Again by Theorem 4.6
\[\gamma_{\theta}(G_{k}) := \gamma_{\theta}\left(x\in\Omega:L\left(0,\left\lfloor b^{k} \right\rfloor,x\right)\leq b^{-2}\varphi\left(\left\lfloor b^{k+1}\right\rfloor \right)\right)\] \[\ll \exp\left(-b\log\log b^{k}\right)\ll k^{-b}.\]
By Lemma 5.1, since \(\sum k^{-b}<\infty\), it follows that \(x\in G_{k}\) only for finitely many \(k\) (a.e. \(x\)), which means that
\[L\left(0,\left\lfloor b^{k}\right\rfloor,x\right)>b^{-2}\varphi\left(\left \lfloor b^{k+1}\right\rfloor\right)\]
holds for all \(k\geq k_{0}(x,b)\). For a given value of \(N\) such that \(\left\lfloor b^{k}\right\rfloor\leq N<b^{k+1}\), where \(k\geq k_{0}(x,b)\), since \(L\left(0,\left\lfloor b^{k}\right\rfloor,x\right)\leq L_{N}(x)\) and \(\varphi(N)\leq\varphi\left(\left\lfloor b^{k+1}\right\rfloor\right)\), we conclude that
\[L_{N}(x)>b^{-2}\varphi(N)\;\;\text{a.e.}\;x.\]
Since this holds for any \(b>1\) we obtain
\[\liminf_{N\to\infty}\frac{L_{N}(x)\log\log N}{N}\geq\frac{1}{\log\left(1+ \theta^{2}\right)}\;\;\text{a.e.}\]
By (5.4) the proof is completed.
There is no analogue of Theorem 5.2 with a finite nonzero limit superior. This follows from the following theorem.
**Theorem 5.3**.: _Let \(\{\varphi(n)\}_{n\geq 1}\) be a positive nondecreasing sequence. Then for a.e. \(x\in[0,\theta]\)_
\[L_{N}(x)>\varphi(N) \tag{5.5}\]
_has finitely many or infinitely many solutions in integers \(N\) according as the series_
\[\sum_{n\geq 1}\frac{1}{\varphi(n)} \tag{5.6}\]
_converges or diverges._
Proof.: Indeed, if \(\sup\varphi(n)<\infty\), then the series (5.6) diverges and, according to Theorem 3.1, \(a_{n}(x)>\varphi(n)\) holds for infinitely many \(n\) (a.e. \(x\)), so (5.5) has infinitely many solutions.
On the other hand, when \(\varphi(n)\nearrow\infty\), the behavior of (5.5) is determined by whether the inequality \(a_{n}(x)>\varphi(n)\) holds finitely or infinitely often. By Theorem 3.1, for a.e. \(x\) the latter holds finitely or infinitely often according as the series (5.6) converges or diverges, which completes the proof.
**Corollary 5.4**.: _Let \(\{\varphi(n)\}_{n\geq 1}\) be as in Theorem 5.3. Then for a.e. \(x\in[0,\theta]\)_
\[\limsup_{N\to\infty}\frac{L_{N}(x)}{\varphi(N)} \tag{5.7}\]
_is either \(0\) or \(\infty\)._
Proof.: We distinguish the cases where the series (5.6) converges or diverges. If the series (5.6) converges we choose a monotone sequence \(\{\alpha_{n}\}_{n\geq 1}\) tending to \(\infty\) but so slowly that still \(\sum_{n\geq 1}\frac{\alpha_{n}}{\varphi(n)}<\infty\). Therefore in accordance with Theorem 5.3, the inequality \(L_{N}(x)>\frac{\varphi(N)}{\alpha_{N}}\) holds only for finitely many \(N\) (a.e. \(x\)). Hence (5.7) vanishes for a.e. \(x\).
If the series (5.6) diverges, we consider a monotone sequence \(\{\alpha_{n}\}_{n\geq 1}\) tending to \(0\) such that \(\sum_{n\geq 1}\frac{\alpha_{n}}{\varphi(n)}=\infty\). Hence, \(L_{N}(x)>\frac{\varphi(N)}{\alpha_{N}}\) holds for infinitely many \(N\) (a.e. \(x\)) and thus (5.7) is infinite for a.e. \(x\).
| この論文の主要な目的は、θ-展開を扱って極値理論を開発することです。θ-拡張の最大値の極限分布を算出し、いくつかの関連する結果も得ました。これは、J.Galambos と W. Philipp の正の分数の連続分数に関する定理に類似しています。また、 Borel- Bernstein の定理が重要な役割を果たしていることに注意する必要があります。 |
2309.05328 | Heat flow of p-harmonic maps from complete manifolds into generalised
regular balls | We study the heat flow of p-harmonic maps between complete Riemannian
manifolds. We prove the global existence of the flow when the initial datum has
values in a generalised regular ball. In particular, if the target manifold has
nonpositive sectional curvature, we obtain the global existence of the flow for
any initial datum with finite p-energy. If, in addition, the target manifold is
compact, the flow converges to a p-harmonic map. This gives an extension of the
results of Liao-Tam [12] concerning the harmonic heat flow (p = 2) to the case
p $\ge$ 2. We also derive a Liouville type theorem for p-harmonic maps between
complete Riemannian manifolds. | Zeina Al Dawoud | 2023-09-11T09:23:28 | http://arxiv.org/abs/2309.05328v1 | # Heat flow of \(p\)-harmonic maps from complete manifolds into generalised regular balls
###### Abstract.
We study the heat flow of \(p\)-harmonic maps between complete Riemannian manifolds. We prove the global existence of the flow when the initial datum has values in a generalised regular ball. In particular, if the target manifold has nonpositive sectional curvature, we obtain the global existence of the flow for any initial datum with finite \(p\)-energy. If, in addition, the target manifold is compact, the flow converges to a \(p\)-harmonic map. This gives an extension of the results of Liao-Tam [12] concerning the harmonic heat flow (\(p=2\)) to the case \(p\geq 2\). We also derive a Liouville type theorem for \(p\)-harmonic maps between complete Riemannian manifolds.
## 1. Introduction
Let \((M^{m},g)\) and \((N^{n},h)\) be two Riemannian manifolds, with \(M\) compact. For \(p>1\), the \(p\)-energy of a map \(u\in C^{1}(M,N)\) is defined by
\[E_{p}(u)=\frac{1}{p}\int_{M}|du(x)|^{p}dx, \tag{1.1}\]
where \(|du(x)|\) is the Hilbert-Schmidt norm of \(du(x)\), and \(dx\) stands for the Riemannian volume element of \(M\).
\(p\)-harmonic maps are critical points of the functional (1.1), that is, they are solutions of the Euler-Lagrange equation associated to (1.1)
\[\tau_{p}(u)=0, \tag{1.2}\]
where \(\tau_{p}(u)=\mathrm{Trace}_{g}(\nabla(|du|^{p-2}du))\) denotes the \(p\)-tension field of \(u\). More precisely, let \((x^{1},\cdots,x^{m})\) and \((y^{1},\cdots,y^{n})\) be local coordinates on \(M\) and \(N\) respectively, then denoting \(\partial_{j}=\frac{\partial}{\partial x^{j}}\) and \(u^{\alpha}=y^{\alpha}(u)\), equation (1.2) takes the form of the following system
\[-\frac{1}{\sqrt{|g|}}\partial_{i}\left(|du|^{p-2}\sqrt{|g|}g^{ij}\partial_{j}u ^{k}\right)=|du|^{p-2}g^{ij}\Gamma^{k}_{\alpha\beta}(u)\partial_{i}u^{\alpha }\partial_{j}u^{\beta},\ 1\leq k\leq n, \tag{1.3}\]
where \(\Gamma^{k}_{\alpha\beta}\) are the Christoffel symbols of \(N\), and the Einstein summation convention on repeated indices is used.
In order to consider a more general class of solutions of system (1.3), we shall write it in an equivalent form which is independent of the choice of coordinates. By Nash's embedding theorem, we can embed \(N\) isometrically in a Euclidean space \(\mathbb{R}^{L}\) using an isometric embedding \(i:N\rightarrow\mathbb{R}^{L}\). If we denote again \(u=i\circ u\), the energy functional (1.1) becomes
\[E_{p}(u)=\frac{1}{p}\int_{M}|\nabla u(x)|^{p}dx,\]
where \(|\nabla u|^{2}=g^{ij}\langle\partial_{i}u,\partial_{j}u\rangle\), and \(\langle\cdot\,,\cdot\rangle\) denotes the Euclidean inner product of \(\mathbb{R}^{L}\).
Equation (1.3) takes the form
\[-\Delta_{p}u=|\nabla u|^{p-2}A(u)(\nabla u,\nabla u), \tag{1.4}\]
where \(\Delta_{p}u=\operatorname{div}(|\nabla u|^{p-2}\nabla u)\) is the \(p\)-Laplacian, that is,
\[\Delta_{p}u=\frac{1}{\sqrt{|g|}}\partial_{i}\left(\sqrt{|g|}|\nabla u|^{p-2}g^ {ij}\partial_{j}u\right),\]
and \(A\) is the second fundamental form of \(N\) in \(\mathbb{R}^{L}\) with the notation
\[A(u)(\nabla u,\nabla u)=g^{ij}A(u)(\partial_{i}u,\partial_{j}u).\]
In this case, \(p\)-harmonic maps are defined to be solutions of equation (1.4). We note that equation (1.4) makes sense even if \(M\) is not compact.
We define the Sobolev space \(W^{1,p}(M,N)=\{u\in W^{1,p}(M,\mathbb{R}^{L});\ u(x)\in N\ a.e.\}\). We say that a map \(u\in W^{1,p}(M,N)\cap L^{\infty}(M,\mathbb{R}^{L})\) is a weakly \(p\)-harmonic map if it is a weak solution of (1.4). The study of the regularity of weakly \(p\)-harmonic maps is a delicate question due to the fact that (1.4) is a degenerate quasilinear elliptic system. In general, the optimal regularity for weak solutions to systems involving the \(p\)-Laplacian is \(C^{1+\beta}\) as shown by [19], and \(C^{\infty}\) outside the vanishing set of \(\nabla u\). Solutions which are \(p\)-energy-minimizing are in \(C^{1+\beta}(M\setminus S,N)\) (\(0<\beta<1\)) where \(S\) is a singular set of Hausdorff dimension at most \(m-[p]-1\) (see [6, 14]).
In this paper we are interested in the heat flow of \(p\)-harmonic maps, which is the gradient flow associated with the \(p\)-energy functional. Namely,
\[\left\{\begin{array}{l}\partial_{t}u-\Delta_{p}u=|\nabla u|^{p-2}A(u)( \nabla u,\nabla u),\\ \\ u(x,0)=u_{0}(x),\end{array}\right. \tag{1.5}\]
where \(u_{0}:M\to N\) is the initial datum of the flow.
Throughout this paper, what we mean by a solution \(u\) of (1.5) on \(M\times[0,T)\) is a map \(u\in C^{1+\beta,\ \beta/p}_{loc}(M\times[0,T),N)\) (for some \(0<\beta<1\)) which is a weak solution (in the distributional sense) of (1.5). Indeed, as stated above, the optimal regularity one could expect for \(p\)-harmonic type equations is \(C^{1+\beta}\) (and \(C^{1+\beta,\ \beta/p}\) for parabolic equations).
When \(p=2\), Eells and Sampson [3] were the first to study the heat flow problem of harmonic maps. They proved that if \(M\) and \(N\) are compact Riemannian manifolds and \(N\) has nonpositive sectional curvature, then (1.5) admits a global solution which converges at infinity to a harmonic map. Li and Tam [11] considered the case when both \(M\) and \(N\) are complete noncompact Riemannian manifolds. They proved the existence of a global solution when the Ricci curvature of \(M\) is bounded from below, \(N\) has nonpositive sectional curvature and the initial datum \(u_{0}\) is bounded, as well as its energy density. Later on, Liao and Tam [12] showed that if \(M\) is a complete non-compact manifold, \(N\) is compact with nonpositive sectional curvature, and if the initial map has finite total energy, then (1.5) admits a global solution which converges on compact subsets of \(M\) to a harmonic map from \(M\) into \(N\). It is well known that if the sectional curvature of \(N\) is nonnegative, a blow
up phenomenon may occur. Without this condition, one has to impose some assumptions on \(N\) in order to prevent the blow up of the solution. We refer the reader to the work of Struwe ([17], [18]) and Chen-Struwe [1] concerning the singularities of the harmonic heat flow. When \(M\) is complete, Li and Wang [13] proved that there exists a global solution of the harmonic heat flow from \(M\) into a generalised regular ball of the target manifold \(N\) which converges at infinity to a harmonic map when the initial datum has finite energy (we refer to [13] for the notion of generalised regular balls).
When \(p\geq 2\), Fardoun-Regbaoui [4] and Misawa [15] proved the global existence and the convergence of the \(p\)-harmonic heat flow when \(M\) and \(N\) are compact and \(N\) has nonpositive sectional curvature, generalising the result of Eells and Sampson [3] to the case \(p\geq 2\). When \(N\) has arbitrary sectional curvature, Hungerbuhler [9] proved the existence of weak solutions in the conformal case \(p=m\). See also Hungerbuhler [8] when the target manifold is a homogeneous space. For small initial data, Fardoun-Regbaoui [5] obtained the existence and convergence of the flow. We also mention the recent work of Misawa [16] concerning the regularity of the \(p\)-harmonic heat flow.
Our goal in this paper is to extend the results of Liao-Tam above to the \(p\)-harmonic heat flow for \(p\geq 2\). Accordingly, we will introduce the notion of regular sets inspired by the work of Li and Wang [13] concerning the harmonic heat flow.
**Definition 1.1**.: _Let \(\Omega\) be an open subset of a Riemannian manifold \(N\) and let \(\delta>0\). We say that \(\Omega\) is a \(\delta\)-regular set if there exist a positive function \(f\in C^{2}(\Omega)\) and a constant \(C>0\) such that, for all \(y\in\Omega\), we have_
\[\begin{cases}-\nabla^{2}f(y)-K_{2}(y)f(y)h(y)\geq\delta\frac{|\nabla f(y)|^{2} }{f(y)}h(y)\\ C^{-1}\leq f(y)\leq C,\end{cases} \tag{1.6}\]
_where \(h\) is the metric of \(N\) and \(K_{2}(y)=\sup\{K(y,\pi),0\}\), with \(K(y,\pi)\) being the sectional curvature of a 2-plane \(\pi\subset T_{y}N\)._
According to our definition of \(\delta\)-regular sets, any Riemannian manifold with nonpositive sectional curvature is a \(\delta\)-regular set for any \(\delta>0\). Indeed, if \(N\) has nonpositive sectional curvature, then condition (1.6) above is automatically satisfied by taking \(f=1\).
**Definition 1.2**.: _We say that \(\Omega\subset N\) is a \(\delta\)-generalised regular ball if it is a \(\delta\)-regular set and if there exists a positive function \(f^{*}\in C^{2}(N)\) which is convex on \(\Omega\) such that_
\[\Omega=(f^{*})^{-1}\big{(}[0,a)\big{)} \tag{1.7}\]
_for some \(a>0\)._
**Example 1.1**.: _If \(N\) is a Riemannian manifold with nonpositive sectional curvature, then \(N\) is a \(\delta\)-generalised regular ball for any \(\delta>0\) by taking \(f=f^{*}=1\) and \(a>1\)._
**Example 1.2**.: _On the sphere \(\mathbb{S}^{n}\) any geodesic ball \(B(y,r)\), with \(0<r<\frac{\pi}{2}\), is a \(\delta\)-generalised regular ball with \(\delta=\frac{(\cos r-\cos r_{1})\cos r_{1}}{\sin^{2}r}\), where \(r_{1}\) is any real number such that \(r<r_{1}<\frac{\pi}{2}\). Indeed, in polar coordinates \((\rho,\theta)\) centered at \(y\), if we set \(f(\rho,\theta)=\cos\rho-\cos r_{1}\) and \(f^{*}\) any smooth function on \(\mathbb{S}^{n}\) such that \(f^{*}(\rho,\theta)=\rho^{2}\) on \(B(y,r)\), then one can check that \(\nabla^{2}f=-(\cos\rho)h\), where \(h=d\rho^{2}+\sin^{2}\!\rho\;d\theta^{2}\) is the standard metric on \(\mathbb{S}^{n}\), and that \(f\) satisfies condition (1.6) with \(\delta=\frac{(\cos r-\cos r_{1})\cos r_{1}}{\sin^{2}r}\), and \(f^{*}\) satisfies condition (1.7). More generally, by using the Hessian comparison theorem on a Riemannian manifold \(N\), one can see that any regular geodesic ball \(B(y,r)\) in the sense of Hildebrandt [7] is a \(\delta\)-generalised regular ball for some \(\delta>0\) depending on \(r\)._
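For the reader's convenience, the verification of (1.6) in this example reads as follows: on \(B(y,r)\) one has \(K_{2}\equiv 1\), \(|\nabla f|^{2}=\sin^{2}\!\rho\) and \(\nabla^{2}f=-(\cos\rho)h\), so that \[-\nabla^{2}f-K_{2}fh=\big{(}\cos\rho-(\cos\rho-\cos r_{1})\big{)}h=(\cos r_{1})h,\qquad\frac{|\nabla f|^{2}}{f}=\frac{\sin^{2}\!\rho}{\cos\rho-\cos r_{1}}\leq\frac{\sin^{2}r}{\cos r-\cos r_{1}}\quad\text{on }B(y,r),\] since the last quotient is increasing in \(\rho\) on \((0,r)\); hence (1.6) holds with \(\delta=\frac{(\cos r-\cos r_{1})\cos r_{1}}{\sin^{2}r}\), while \(\cos r-\cos r_{1}\leq f\leq 1-\cos r_{1}\) gives the required two-sided bound on \(f\).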
Throughout this paper we suppose \(p\geq 2\). We state our first main result:
**Theorem 1**.: _Let \((M^{m},g)\) and \((N^{n},h)\) be two Riemannian manifolds such that \(M\) is compact and \(N\) is complete. Let \(\Omega\subset N\) be a \(\delta\)-generalised regular ball with_
\[\delta>\delta_{p}:=3(p-2)^{2}\left(\sqrt{m}+2p+6\right)^{2}+3.\]
_Then for any \(u_{0}\in C^{\infty}(M,\Omega)\), there exists a unique global solution \(u\) of (1.5) such that \(u\in C^{1+\beta,\;\beta/p}_{loc}(M\times[0,\infty),\Omega)\) for some \(\beta\in(0,1)\). Moreover, we have \(\partial_{t}u\in L^{2}(M\times[0,+\infty))\) and satisfies the energy inequality_
\[\int_{0}^{T}\int_{M}|\partial_{t}u(x,t)|^{2}dxdt+E_{p}\big{(}u(.,T)\big{)}\leq E _{p}(u_{0}) \tag{1.8}\]
_for all \(T>0\). If we assume in addition that \(N\) is compact, then there exists a sequence \(t_{k}\to\infty\) such that \(u(.,t_{k})\) converges in \(C^{1+\beta^{\prime}}(M,\Omega)\) (for all \(\beta^{\prime}<\beta\)) to a \(p\)-harmonic map \(u_{\infty}\in C^{1+\beta}(M,\Omega)\) satisfying \(E_{p}(u_{\infty})\leq E_{p}(u_{0})\)._
Theorem 1 allows us to prove the following existence result for maps between complete Riemannian manifolds, which is, to our knowledge, the first result concerning the existence of the \(p\)-harmonic heat flow from a complete noncompact Riemannian manifold.
**Theorem 2**.: _Let \((M^{m},g)\) and \((N^{n},h)\) be two complete Riemannian manifolds. Let \(\Omega\subset N\) be a \(\delta\)-generalised regular ball with_
\[\delta>\delta_{p}:=3(p-2)^{2}\left(\sqrt{m}+2p+6\right)^{2}+3.\]
_Then for any \(u_{0}\in C^{\infty}(M,\Omega)\) with \(E_{p}(u_{0})<+\infty\), there exists a global solution \(u\) of (1.5) such that \(u\in C^{1+\beta,\;\beta/p}_{loc}(M\times[0,\infty),\Omega)\) for some \(\beta\in(0,1)\). Moreover, \(\partial_{t}u\in L^{2}(M\times[0,+\infty))\) and satisfies the energy inequality_
\[\int_{0}^{T}\int_{M}|\partial_{t}u(x,t)|^{2}dxdt+E_{p}\big{(}u(.,T)\big{)}\leq E _{p}(u_{0}) \tag{1.9}\]
_for all \(T>0\). If we assume in addition that \(N\) is compact, then there exists a sequence \(t_{k}\to\infty\) such that \(u(.,t_{k})\) converges in \(C^{1+\beta^{\prime}}_{loc}(M,\Omega)\) (for all \(\beta^{\prime}<\beta\)) to a \(p\)-harmonic map \(u_{\infty}\in C^{1+\beta}_{loc}(M,\Omega)\) satisfying \(E_{p}(u_{\infty})\leq E_{p}(u_{0})\)._
As a consequence of Theorem 2, we have the following theorem concerning target manifolds with negative sectional curvature. It can be considered as a natural generalisation to the case \(p\geq 2\) of the work of Liao-Tam [12] concerning the heat flow of harmonic maps (\(p=2\)).
**Theorem 3**.: _Let \(M\) and \(N\) be two complete Riemannian manifolds such that \(N\) has nonpositive sectional curvature. Then for any \(u_{0}\in C^{\infty}(M,N)\) with \(E_{p}(u_{0})<+\infty\), there exists a global solution \(u\) of (1.5) such that \(u\in C^{1+\beta,\,\beta/p}_{loc}(M\times[0,\infty),N)\) for some \(\beta\in(0,1)\). Moreover, \(\partial_{t}u\in L^{2}(M\times[0,+\infty))\) and satisfies the energy inequality_
\[\int_{0}^{T}\int_{M}|\partial_{t}u(x,t)|^{2}dxdt+E_{p}\big{(}u(.,T)\big{)}\leq E _{p}(u_{0}) \tag{1.10}\]
_for all \(T>0\). If we assume in addition that \(N\) is compact, then there exists a sequence \(t_{k}\to\infty\) such that \(u(.,t_{k})\) converges in \(C^{1+\beta^{\prime}}_{loc}(M,N)\) (for all \(\beta^{\prime}<\beta\)) to a \(p\)-harmonic map \(u_{\infty}\in C^{1+\beta}_{loc}(M,N)\) satisfying \(E_{p}(u_{\infty})\leq E_{p}(u_{0})\)._
Our method allows us to prove the following Liouville Theorem for \(p\)-harmonic maps.
**Theorem 4**.: _Let \(M\) be a complete Riemannian manifold with nonnegative Ricci curvature and \(N\) be a complete Riemannian manifold. Suppose that \(\Omega\subset N\) is a \(\delta\)-regular set with \(\delta>\delta_{p}\), where \(\delta_{p}\) is as in Theorem 1. If \(u\in C^{1}_{loc}(M,\Omega)\) is a \(p\)-harmonic map from \(M\) into \(\Omega\) with finite \(p\)-energy, then \(u\) is constant. In particular, if \(N\) has nonpositive sectional curvature, then any \(p\)-harmonic map \(u\in C^{1}_{loc}(M,N)\) with finite \(p\)-energy from \(M\) to \(N\) is constant._
The paper is organised as follows. The heat flow equation being a degenerate parabolic problem, we first establish in Section 2 the existence of a global solution to the regularised equation of (1.5). We then prove uniform _a priori_ gradient estimates on the solutions of the regularised equation in Section 3. Section 4 is devoted to the proof of our main results.
## 2. The Regularised Heat Flow
Since (1.5) is a degenerate parabolic system, one cannot directly apply the standard existence theory for parabolic equations. To overcome this difficulty we introduce the regularised \(p\)-harmonic heat flow equation. Namely, for \(0<\varepsilon<1\), the regularised \(p\)-energy of \(u\) is defined by
\[E_{p,\varepsilon}(u)=\frac{1}{p}\int_{M}\left(|\nabla u|^{2}+\varepsilon \right)^{\frac{p}{2}}dx,\]
and the gradient flow associated to \(E_{p,\varepsilon}\) is given by the following second order parabolic system
\[\left\{\begin{array}{l}\partial_{t}u-\Delta_{p,\varepsilon}u=\left(|\nabla u|^{2}+\varepsilon\right)^{\frac{p-2}{2}}A(u)(\nabla u,\nabla u),\\ \\ u(x,0)=u_{0}(x)\end{array}\right. \tag{2.1}\]
where
\[\Delta_{p,\varepsilon}=\frac{1}{\sqrt{|g|}}\partial_{i}\Big{(}\sqrt{|g|}\left( |\nabla u|^{2}+\varepsilon\right)^{\frac{p-2}{2}}g^{ij}\partial_{j}u\Big{)}\]
is the regularised \(p\)-Laplacian of \(M\).
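To fix ideas, the following minimal numerical sketch illustrates the regularised flow (2.1) in the simplest nontrivial setting. It is not part of the analysis: we assume, purely for illustration, that the domain is the flat circle \(\mathbb{S}^{1}\) discretised by a periodic grid and that the target is the round sphere \(\mathbb{S}^{2}\subset\mathbb{R}^{3}\), for which the curvature term reduces to \(A(u)(\nabla u,\nabla u)=|\nabla u|^{2}u\); time stepping is explicit Euler followed by a projection back onto the sphere, and all numerical values are arbitrary, so this is only a rough sanity check, not a carefully designed scheme.

```python
# Minimal 1-D sketch of the regularised flow (2.1); see the assumptions above.
import numpy as np

p, eps = 3.0, 1e-2                       # exponent p >= 2 and regularisation epsilon
n = 200
dx, dt = 2 * np.pi / n, 1e-4             # periodic grid on S^1, explicit Euler step

def grad(u):
    # centred difference in the periodic variable; u has shape (n, 3)
    return (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2 * dx)

def energy(u):
    # discrete regularised p-energy: (1/p) * sum_x (|u'|^2 + eps)^(p/2) * dx
    w = (grad(u) ** 2).sum(axis=1) + eps
    return (w ** (p / 2)).sum() * dx / p

def step(u):
    du = grad(u)
    w = (du ** 2).sum(axis=1) + eps                        # F = |grad u|^2 + eps
    flux = (w ** ((p - 2) / 2))[:, None] * du              # F^{(p-2)/2} grad u
    tension = (np.roll(flux, -1, axis=0) - np.roll(flux, 1, axis=0)) / (2 * dx)
    reaction = (w ** ((p - 2) / 2) * (du ** 2).sum(axis=1))[:, None] * u
    u_new = u + dt * (tension + reaction)                  # Euler step for (2.1)
    return u_new / np.linalg.norm(u_new, axis=1, keepdims=True)  # project back to S^2

# initial map u_0: a smooth closed curve on the sphere
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
u0 = np.stack([np.cos(theta), np.sin(theta), 0.4 * np.sin(2 * theta)], axis=1)
u0 /= np.linalg.norm(u0, axis=1, keepdims=True)
print("initial regularised p-energy:", energy(u0))
```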
Since (2.1) is a parabolic system, it follows from the classical theory of parabolic equations that (2.1) admits a unique smooth solution \(u_{\varepsilon}\) defined on a maximal interval \([0,T_{\varepsilon})\). For the sake of simplicity we write \(u\) for \(u_{\varepsilon}\) and \(T\) for \(T_{\varepsilon}\). We have the following proposition.
**Proposition 2.1**.: _Let \(p\geq 2\) and let \(\Omega\subset N\) be a \(\delta\)-generalised regular ball for some \(\delta>0\). Let \(u_{0}\in C^{\infty}(M,\Omega)\) and let \(u\) be the solution of the regularised problem (2.1) defined on a maximal interval \([0,T)\). Then we have for any \((x,t)\in M\times[0,T)\)_
\[u(x,t)\in\Omega. \tag{2.2}\]
_Moreover, \(u\) satisfies the energy formula_
\[\frac{d}{dt}E_{p,\varepsilon}\big{(}u(.,t)\big{)}=-\int_{M}|\partial_{t}u(x,t )|^{2}\ dx. \tag{2.3}\]
_In particular the energy \(E_{p,\varepsilon}\) is nonincreasing along the flow._
Proof.: Since \(\Omega\) is a generalised regular ball, then there exist a positive function \(f^{*}\in C^{2}(N)\) which is convex on \(\Omega\) and \(a>0\) such that \(\Omega=(f^{*})^{-1}([0,a))\). Let
\[T_{1}=\sup\big{\{}t\in[0,T)\ :\ u\,(M\times[0,t])\subset\Omega\big{\}}\]
and suppose by contradiction that \(T_{1}<T\). Since \(u_{0}(M)\subset\Omega\) and \(M\) is compact, then by continuity of \(u\) we have that \(T_{1}>0\). Then we compute on \(M\times[0,T_{1})\)
\[\begin{split}\partial_{t}(f^{*}\circ u)-\operatorname{div}\Big((|\nabla u|^{2}+\varepsilon)^{\frac{p-2}{2}}\nabla(f^{*}\circ u)\Big)&=\left\langle(\nabla f^{*})\circ u,(\partial_{t}u-\Delta_{p,\varepsilon}u)\right\rangle\\ &\quad-\big(|\nabla u|^{2}+\varepsilon\big)^{\frac{p-2}{2}}(\nabla^{2}f^{*})\circ u\big(\nabla u,\nabla u\big)\\ &=\big(|\nabla u|^{2}+\varepsilon\big)^{\frac{p-2}{2}}\left\langle(\nabla f^{*})\circ u,A(u)(\nabla u,\nabla u)\right\rangle\\ &\quad-\big(|\nabla u|^{2}+\varepsilon\big)^{\frac{p-2}{2}}\left(\nabla^{2}f^{*}\right)\circ u(\nabla u,\nabla u),\end{split}\]
which implies, since \(\nabla f^{*}\) is orthogonal to \(A(u)(\nabla u,\nabla u)\) and \(f^{*}\) is convex on \(\Omega\), that
\[\partial_{t}(f^{*}(u))-\operatorname{div}\left(\big{(}|\nabla u|^{2}+ \varepsilon\big{)}^{\frac{p-2}{2}}\nabla f^{*}(u)\right)\leq 0.\]
Hence it follows from the maximum principle for parabolic equations that for all \(t\in[0,T_{1})\), we have
\[\max_{x\in M}f^{*}(u(x,t))\leq\max_{x\in M}f^{*}(u_{0}(x))\]
which implies by continuity of \(f^{*}\) and \(u\) that
\[\max_{x\in M}f^{*}(u(x,T_{1}))\leq\max_{x\in M}f^{*}(u_{0}(x)).\]
Since \(u_{0}(M)\subset\Omega\), then \(\max_{x\in M}f^{*}(u_{0}(x))<a\), so
\[\max_{x\in M}f^{*}(u(x,T_{1}))<a.\]
It follows by continuity of \(f^{*}(u)\) that there exists \(\alpha>0\) such that \(\max_{x\in M}f^{*}(u(x,t))<a\) for all \(t\in[0,T_{1}+\alpha]\), that is, \(u(M,t)\subset\Omega\) for all \(t\in[0,T_{1}+\alpha]\), contradicting the definition of \(T_{1}\).
Now to prove (2.3) it suffices to take the inner product (in \(\mathbb{R}^{L}\)) of equation (2.1) with \(\partial_{t}u\) and integrate on \(M\) to get
\[\int_{M}|\partial_{t}u(x,t)|^{2}\ dx=\int_{M}\big{(}|\nabla u|^{2}+\varepsilon \big{)}^{\frac{p-2}{2}}\left\langle\partial_{t}u,A(u)(\nabla u,\nabla u) \right\rangle dx-\frac{d}{dt}E_{p,\varepsilon}\big{(}u(.,t)\big{)}.\]
This achieves the proof of the proposition since \(\left\langle\partial_{t}u,A(u)(\nabla u,\nabla u)\right\rangle=0\).
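As a quick numerical check of the monotonicity statement in Proposition 2.1, one can monitor the discrete regularised \(p\)-energy along the sketch given after (2.1) above; the snippet below reuses `step`, `energy` and `u0` from that sketch, and, up to the small error introduced by the projection onto the sphere, the printed values should be nonincreasing.

```python
# Numerical check of the energy decay (2.3), reusing step, energy and u0 from
# the sketch after (2.1); the printed energies should be (nearly) decreasing.
u = u0.copy()
E = [energy(u)]
for k in range(1, 2001):
    u = step(u)
    if k % 500 == 0:
        E.append(energy(u))
print(["%.6f" % e for e in E])
```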
In order to prove uniform gradient estimates on the solution of the regularised equation (2.1) we need a Bochner-type formula on \(u\). To this end let us introduce the following notations. We set for all \(0<\varepsilon<1\):
\[F=|\nabla u|^{2}+\varepsilon,\]
and let \(L_{p}\) be the operator defined for \(\varphi\in C^{2}(M)\) by
\[L_{p}(\varphi)=\mathrm{div}\left(F^{\frac{p-2}{2}}\nabla\varphi\right). \tag{2.4}\]
We define the symmetric contravariant \(2\)-tensor \(B\) on \(M\) by setting in local coordinates on \(M\) :
\[B_{ij}=\frac{\langle\partial_{k}u,\partial_{l}u\rangle}{F}g^{ik}g^{lj}. \tag{2.5}\]
One checks immediately that for any \(x\in M\) and covectors \(X,Y\in T_{x}^{*}M\), we have
\[\begin{cases}B(X,X)\geq 0\\ B(X,Y)\leq|X||Y|.\end{cases} \tag{2.6}\]
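Indeed, both properties in (2.6) follow directly from the definition (2.5): writing \(X^{\sharp}=X_{i}g^{ik}\partial_{k}\) for the vector dual to the covector \(X\), so that \(du(X^{\sharp})=X_{i}g^{ik}\partial_{k}u\), one has
\[B(X,X)=\frac{\big|du(X^{\sharp})\big|^{2}}{F}\geq 0,\qquad |B(X,Y)|\leq\frac{\big|du(X^{\sharp})\big|\,\big|du(Y^{\sharp})\big|}{F}\leq\frac{|\nabla u|^{2}}{F}\,|X|\,|Y|\leq|X|\,|Y|,\]
by the Cauchy-Schwarz inequality and \(F=|\nabla u|^{2}+\varepsilon\geq|\nabla u|^{2}\).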
If \(X=X_{i}dx^{i}\in T_{x}^{*}M\), we denote by \(B(X,.)\) the vector in \(T_{x}M\) defined by \(B(X,.)=B_{ij}X_{i}\partial_{j}\). Then we have the following Bochner-type formula
\[\begin{split}\partial_{t}F-L_{p}F&=(p-2)\mathrm{div} \left(F^{\frac{p-2}{2}}B(dF,.)\right)-2F^{\frac{p-2}{2}}|\nabla^{2}u|^{2}- \frac{(p-2)}{2}F^{\frac{p-4}{2}}|\nabla F|^{2}\\ &-2F^{\frac{p-2}{2}}\mathrm{Ric}^{M}\big{(}\nabla u,\nabla u \big{)}+2F^{\frac{p-2}{2}}\big{\langle}\mathrm{Riem}^{N}(\nabla u,\nabla u) \nabla u,\nabla u\big{\rangle},\end{split} \tag{2.7}\]
where \(\mathrm{Ric}^{M}\) is the Ricci tensor of \(M\) and \(\mathrm{Riem}^{N}\) is the Riemann curvature tensor of \(N\) with the following notations in an orthonormal frame \(\{e_{1},\cdots,e_{m}\}\) of \(T_{x}M\) :
\[\mathrm{Ric}^{M}\big{(}\nabla u,\nabla u\big{)}=\sum_{k=1}^{L}\sum_{i=1}^{m} \mathrm{Ric}^{M}\big{(}\nabla_{e_{i}}u^{k},\nabla_{e_{i}}u^{k}\big{)}\]
and
\[\big{\langle}\mathrm{Riem}^{N}(\nabla u,\nabla u)\nabla u,\nabla u\big{\rangle} =\sum_{i,j=1}^{m}\big{\langle}\mathrm{Riem}^{N}(\nabla_{e_{i}}u, \nabla_{e_{j}}u)\nabla_{e_{i}}u,\nabla_{e_{j}}u\big{\rangle}\]
## 3. Gradient Estimates
In this section, we derive uniform gradient estimates on the solution \(u\) of the regularised equation (2.1). We first need the following useful inequality.
**Proposition 3.1**.: _Let \((M^{m},g)\), \((N^{n},h)\) be two Riemannian manifolds and let \(\Omega\subset N\) be a \(\delta\)-regular set. Let \(u:M\times[0,T)\to N\) be a smooth solution of (2.1) having its image in \(\Omega\) and set_
\[\varphi(x,t)=\frac{F(x,t)}{f^{2}(u(x,t))},\]
_where \(F(x,t)=|\nabla u(x,t)|^{2}+\varepsilon\) and \(f\) is the function satisfying condition (1.6). Then we have at any point \((x,t)\in M\times[0,T)\)_
\[\begin{split}&\partial_{t}\varphi-L_{p}\varphi\leq(p-2)\text{ div}\left(F^{\frac{p-2}{2}}B(d\varphi,.)\right)-\frac{1}{40}F^{\frac{p-4}{2}}(f\circ u)^{2}| \nabla\varphi|^{2}\\ &-2(\delta-\delta_{p})\frac{F^{\frac{p}{2}}}{(f\circ u)^{4}}\left| \left(\nabla f\right)\circ u\right|^{2}|\nabla u|^{2}+\ 2K_{1}\frac{F^{\frac{p-2}{2}}}{(f\circ u)^{2}}|\nabla u|^{2},\end{split} \tag{3.1}\]
_where \(\delta_{p}=3(p-2)^{2}\Big{(}\sqrt{m}+2p+6\Big{)}^{2}+3\) and \(-K_{1}\leq 0\) is a lower bound of the Ricci curvature of \(M\) at \(x\). The operator \(L_{p}\) and the tensor \(B\) are defined by (2.4) and (2.5) above._
Proof.: Fix a point \(x_{0}\in M\), then in normal coordinates at \(x_{0}\), a basic computation gives,
\[\begin{split}\partial_{t}\varphi-L_{p}\varphi&= \frac{1}{(f\circ u)^{2}}\Big{(}\partial_{t}F-L_{p}F\Big{)}-2\frac{F}{(f\circ u )^{3}}\Big{(}\partial_{t}(f\circ u)-L_{p}(f\circ u)\Big{)}\\ &+4\frac{F^{\frac{p-2}{2}}}{(f\circ u)^{3}}\nabla F\cdot\nabla(f \circ u)-6\frac{F^{\frac{p}{2}}}{(f\circ u)^{4}}|\nabla(f\circ u)|^{2},\end{split} \tag{3.2}\]
where the dot \(\cdot\) denotes the Riemannian inner product on \(M\). By using Bochner's formula (2.7), the first term in the right hand side of (3.2) can be bounded as:
\[\begin{split}&\frac{1}{(f\circ u)^{2}}\Big{(}\partial_{t}F-L_{p}F \Big{)}\leq\frac{(p-2)}{(f\circ u)^{2}}\text{div}\left(F^{\frac{p-2}{2}}B(dF,. )\right)+2K_{1}\frac{F^{\frac{p-2}{2}}|\nabla u|^{2}}{(f\circ u)^{2}}\\ &+2K_{2}\frac{F^{\frac{p}{2}}|\nabla u|^{2}}{(f\circ u)^{2}}-2 \frac{F^{\frac{p-2}{2}}}{(f\circ u)^{2}}|\nabla^{2}u|^{2}-\frac{1}{2}(p-2) \frac{F^{\frac{p-4}{2}}}{(f\circ u)^{2}}\left|\nabla F\right|^{2},\end{split} \tag{3.3}\]
where \(-K_{1}\leq 0\) is a lower bound of the Ricci curvature of \(M\), and \(K_{2}\geq 0\) is an upper bound of the sectional curvature of \(N\). To bound the second term in the right hand side of (3.2), a direct computation gives
\[\partial_{t}(f\circ u)-L_{p}(f\circ u)=\Big{\langle}(\nabla f)\circ u,( \partial_{t}u-L_{p}u)\,\Big{\rangle}-F^{\frac{p-2}{2}}(\nabla^{2}f)\circ u \big{(}\nabla u,\nabla u\big{)}\]
where in local coordinates :
\[(\nabla^{2}f)\circ u\big{(}\nabla u,\nabla u\big{)}=\sum_{i,j=1}^{m}g^{ij}( \nabla^{2}f)\circ u\big{(}\partial_{i}u,\partial_{j}u\big{)}\]
and where \(\langle\cdot,\cdot\rangle\) denotes the inner product of \(\mathbb{R}^{L}\) (we recall here that \(N\) is isometrically embedded in \(\mathbb{R}^{L}\)). Since \(\big{\langle}(\nabla f)\circ u,(\partial_{t}u-L_{p}u)\,\big{\rangle}=0\) by equation (2.1) ( \(\partial_{t}u-\Delta_{p}u\) being orthogonal to \(T_{u}N\) and \((\nabla f)\circ u\in T_{u}N\)), then we obtain
\[\partial_{t}(f\circ u)-L_{p}(f\circ u)=-F^{\frac{p-2}{2}}(\nabla^{2}f)\circ u \big{(}\nabla u,\nabla u\big{)}. \tag{3.4}\]
Substituting (3.4) and (3.3) in (3.2) gives
\[\partial_{t}\varphi-L_{p}\varphi \leq\frac{(p-2)}{(f\circ u)^{2}}\text{div}\left(F^{\frac{p-2}{2}}B( dF,.)\right)-2\frac{F^{\frac{p-2}{2}}}{(f\circ u)^{2}}|\nabla^{2}u|^{2}-\frac{1}{2}(p-2) \frac{F^{\frac{p-4}{2}}}{(f\circ u)^{2}}\left|\nabla F\right|^{2} \tag{3.5}\] \[+2K_{1}\frac{F^{\frac{p-2}{2}}}{(f\circ u)^{2}}|\nabla u|^{2}+2 \frac{F^{\frac{p}{2}}}{(f\circ u)^{3}}\bigg{(}K_{2}(f\circ u)|\nabla u|^{2}+( \nabla^{2}f)\circ u\big{(}\nabla u,\nabla u\big{)}\bigg{)}\] \[+4\frac{F^{\frac{p-2}{2}}}{(f\circ u)^{3}}\nabla F\cdot\nabla(f \circ u)-6\frac{F^{\frac{p}{2}}}{(f\circ u)^{4}}|\nabla(f\circ u)|^{2}.\]
Since \(f\) satisfies condition (1.6), then we have
\[K_{2}(f\circ u)|\nabla u|^{2}+(\nabla^{2}f)\circ u\big{(}\nabla u,\nabla u \big{)}\leq-\delta\frac{\big{|}(\nabla f)\circ u\big{|}^{2}}{(f\circ u)}| \nabla u|^{2},\]
so it follows from (3.5)
\[\partial_{t}\varphi-L_{p}\varphi \leq\frac{(p-2)}{(f\circ u)^{2}}\text{div}\left(F^{\frac{p-2}{2}} B(dF,.)\right)-2\frac{F^{\frac{p-2}{2}}}{(f\circ u)^{2}}|\nabla^{2}u|^{2}- \frac{1}{2}(p-2)\frac{F^{\frac{p-4}{2}}}{(f\circ u)^{2}}\left|\nabla F\right| ^{2} \tag{3.6}\] \[+2K_{1}\frac{F^{\frac{p-2}{2}}}{(f\circ u)^{2}}|\nabla u|^{2}-2 \delta\frac{F^{\frac{p}{2}}}{(f\circ u)^{4}}\left|(\nabla f)\circ u\right|^{2 }\left|\nabla u\right|^{2}\] \[+4\frac{F^{\frac{p-2}{2}}}{(f\circ u)^{3}}\nabla F\cdot\nabla(f \circ u)-6\frac{F^{\frac{p}{2}}}{(f\circ u)^{4}}\left|(\nabla f)\circ u\right| ^{2}.\]
To estimate the first term in the right hand side of (3.6) we compute, by using the fact that \(F=(f\circ u)^{2}\varphi\) and that we are working in normal coordinates at \(x_{0}\),
\[\frac{1}{(f\circ u)^{2}}\mathrm{div}\left(F^{\frac{p-2}{2}}B(dF,.) \right) =\mathrm{div}\left(F^{\frac{p-2}{2}}B(d\varphi,.)\right)+2\mathrm{ div}\left(\frac{F^{\frac{p}{2}}}{(f\circ u)^{3}}B(d(f\circ u),.)\right)\] \[+2\frac{F^{\frac{p-2}{2}}}{(f\circ u)^{3}}B\left(dF,d(f\circ u)\right)\] \[=\mathrm{div}\left(F^{\frac{p-2}{2}}B(d\varphi,.)\right)+2\partial _{i}\bigg{(}\frac{F^{\frac{p-2}{2}}}{(f\circ u)^{3}}\langle\partial_{i}u, \partial_{j}u\rangle\partial_{j}(f\circ u)\bigg{)}\] \[+4\frac{F^{\frac{p-4}{2}}}{(f\circ u)^{3}}\langle\partial_{i}u, \partial_{j}u\rangle\partial_{i}F\partial_{j}(f\circ u)\] \[=\mathrm{div}\left(F^{\frac{p-2}{2}}B(d\varphi,.)\right)+(p+2) \frac{F^{\frac{p-4}{2}}}{(f\circ u)^{3}}\langle\partial_{i}u,\partial_{j}u \rangle\partial_{i}F\partial_{j}(f\circ u) \tag{3.7}\] \[+2\frac{F^{\frac{p-2}{2}}}{(f\circ u)^{3}}\langle\partial_{ii}^{ 2}u,\partial_{j}u\rangle\partial_{j}(f\circ u)+2\frac{F^{\frac{p-2}{2}}}{(f \circ u)^{3}}\langle\partial_{i}u,\partial_{ij}^{2}u\rangle\partial_{j}(f \circ u)\] \[+2\frac{F^{\frac{p-2}{2}}}{(f\circ u)^{3}}\langle\partial_{i}u, \partial_{j}u\rangle(\nabla^{2}f)\circ u\left(\partial_{i}u,\partial_{j}u\right)\] \[+2\frac{F^{\frac{p-2}{2}}}{(f\circ u)^{3}}\langle\partial_{i}u, \partial_{j}u\rangle\langle(\nabla f)\circ u,\partial_{ij}^{2}u\rangle\] \[-6\frac{F^{\frac{p-2}{2}}}{(f\circ u)^{4}}\langle\partial_{i}u, \partial_{j}u\rangle\partial_{i}(f\circ u)\partial_{j}(f\circ u)\]
Since by condition (1.6) \(f\) is concave, then we have
\[\langle\partial_{i}u,\partial_{j}u\rangle(\nabla^{2}f)\circ u\left(\partial_ {i}u,\partial_{j}u\right)\leq 0. \tag{3.8}\]
To bound the other terms in (3.7) observe that the last term is nonpositive and the other terms can be bounded by using the Cauchy-Schwarz inequality. So we obtain from (3.7) and (3.8)
\[\frac{1}{(f\circ u)^{2}}\mathrm{div}\left(F^{\frac{p-2}{2}}B(dF,. )\right)\leq\ \mathrm{div}\left(F^{\frac{p-2}{2}}B(d\varphi,.)\right)+(p+2)\frac{F^{\frac{p -4}{2}}}{(f\circ u)^{3}}\left|\nabla(f\circ u)\right|\left|\nabla u\right|^{2 }\left|\nabla F\right|\] \[+2\left(\sqrt{m}+1\right)\frac{F^{\frac{p-2}{2}}}{(f\circ u)^{3} }\left|\nabla(f\circ u)\right|\left|\nabla u\right|\left|\nabla^{2}u\right| +2\frac{F^{\frac{p-2}{2}}}{(f\circ u)^{3}}\left|(\nabla f)\circ u\right| \left|\nabla u\right|^{2}\left|\nabla^{2}u\right|\]
which gives, by Young's inequality for \(\alpha>0\) (to be chosen later),
\[\frac{(p-2)}{(f\circ u)^{2}}\mathrm{div}\left(F^{\frac{p-2}{2}}B (dF,.)\right)\leq(p-2)\mathrm{div}\left(F^{\frac{p-2}{2}}B(d\varphi,.)\right) +\frac{\alpha}{2}(p+2)(p-2)\frac{F^{\frac{p-4}{2}}}{(f\circ u)^{2}}\left| \nabla F\right|^{2} \tag{3.9}\] \[+\frac{1}{2\alpha}(p+2)(p-2)\frac{F^{\frac{p}{2}}}{(f\circ u)^{4} }\left|\nabla(f\circ u)\right|^{2}+\alpha(p-2)\left(\sqrt{m}+2\right)\frac{F^ {\frac{p-2}{2}}}{(f\circ u)^{2}}\left|\nabla^{2}u\right|^{2}\] \[+\frac{1}{\alpha}(p-2)\left(\sqrt{m}+1\right)\frac{F^{\frac{p}{2} }}{(f\circ u)^{4}}\left|\nabla(f\circ u)\right|^{2}+\frac{p-2}{\alpha}\frac{F ^{\frac{p}{2}}}{(f\circ u)^{4}}|(\nabla f)\circ u|^{2}|\nabla u|^{2},\]
where we have used the fact that \(F=|\nabla u|^{2}+\varepsilon\geq|\nabla u|^{2}\).
Now, to bound the terms \(4\dfrac{F^{\frac{p-2}{2}}}{(f\circ u)^{3}}\nabla F\cdot\nabla(f\circ u)-6\dfrac{F^ {\frac{p}{2}}}{(f\circ u)^{4}}\left|\nabla(f\circ u)\right|^{2}\) in (3.6), we use again Young's inequality for any \(\beta>0\) :
\[4\dfrac{F^{\frac{p-2}{2}}}{(f\circ u)^{3}}\nabla F\cdot\nabla(f\circ u)-6\dfrac{ F^{\frac{p}{2}}}{(f\circ u)^{4}}\left|\nabla(f\circ u)\right|^{2}\leq 2\beta\dfrac{F^{ \frac{p-4}{2}}}{(f\circ u)^{2}}|\nabla F|^{2}+\left(\frac{2}{\beta}-6\right) \dfrac{F^{\frac{p}{2}}}{(f\circ u)^{4}}\left|\nabla(f\circ u)\right|^{2}. \tag{3.10}\]
By combining (3.6), (3.9) and (3.10), we obtain
\[\begin{split}&\partial_{t}\varphi-L_{p}\varphi\leq(p-2)\text{ div}\left(F^{\frac{p-2}{2}}B(d\varphi,.)\right)+\Big{(}-2+\alpha(p-2)\left(\sqrt{m}+2 \right)\Big{)}\dfrac{F^{\frac{p-2}{2}}}{(f\circ u)^{2}}|\nabla^{2}u|^{2}\\ &+\left(-\frac{1}{2}(p-2)+\frac{\alpha}{2}(p+2)(p-2)+2\beta \right)\dfrac{F^{\frac{p-4}{2}}}{(f\circ u)^{2}}|\nabla F|^{2}+2K_{1}\dfrac{F ^{\frac{p-2}{2}}}{(f\circ u)^{2}}|\nabla u|^{2}\\ &+\left(-6+\frac{2}{\beta}+\frac{1}{\alpha}(p-2)\left(\frac{p}{2 }+\sqrt{m}+2\right)\right)\dfrac{F^{\frac{p}{2}}}{(f\circ u)^{4}}\left|\nabla( f\circ u)\right|^{2}\\ &+\left(-2\delta+\frac{p-2}{\alpha}\right)\dfrac{F^{\frac{p}{2}}} {(f\circ u)^{4}}|(\nabla f)\circ u|^{2}|\nabla u|^{2}.\end{split} \tag{3.11}\]
Observe that \(|\nabla F|^{2}=|\nabla(|\nabla u|^{2})|^{2}\leq 4|\nabla^{2}u|^{2}|\nabla u|^{2}\). Hence if we choose \(\beta=\frac{p-1}{5}\) and fix \(\alpha>0\) such that
\[-2+\alpha(p-2)\left(\sqrt{m}+2\right)\leq 0, \tag{3.12}\]
then it follows from (3.11) that
\[\begin{split}&\partial_{t}\varphi-L_{p}\varphi\leq(p-2)\text{div}\left(F^{\frac{p-2}{2}}B(d\varphi,.)\right)+2K_{1}\dfrac{F^{\frac{p-2}{2}}}{(f\circ u)^{2}}|\nabla u|^{2}\\ &-\left(\frac{p-1}{10}-\frac{1}{4}\alpha(p-2)\Big{(}\sqrt{m}+2p+6\Big{)}\right)\dfrac{F^{\frac{p}{2}-2}}{(f\circ u)^{2}}|\nabla F|^{2}\\ &+\left(-6+\frac{10}{p-1}+\frac{1}{\alpha}(p-2)\left(\frac{p}{2}+\sqrt{m}+2\right)\right)\dfrac{F^{\frac{p}{2}}}{(f\circ u)^{4}}\left|\nabla(f\circ u)\right|^{2}\\ &+\left(-2\delta+\frac{p-2}{\alpha}\right)\dfrac{F^{\frac{p}{2}}}{(f\circ u)^{4}}|(\nabla f)\circ u|^{2}|\nabla u|^{2}.\end{split} \tag{3.13}\]
If we choose
\[\alpha=\begin{cases}\frac{1}{5(p-2)\left(\sqrt{m}+2p+2\right)}&\text{if }p>2\\ 1&\text{if }p=2,\end{cases}\]
then it is clear that (3.12) is satisfied, and observe that \(|\nabla(f\circ u)|\leq|(\nabla f)\circ u|\,|\nabla u|\), so it follows from (3.13) that
\[\partial_{t}\varphi-L_{p}\varphi\leq(p-2)\text{div}\left(F^{\frac{ p-2}{2}}B(d\varphi,.)\right)+2K_{1}\frac{F^{\frac{p-2}{2}}}{(f\circ u)^{2}}| \nabla u|^{2} \tag{3.14}\] \[-\frac{1}{20}\frac{F^{\frac{p-4}{2}}}{(f\circ u)^{2}}|\nabla F|^{ 2}+(-2\delta+c_{p})\frac{F^{\frac{p}{2}}}{(f\circ u)^{4}}\,|(\nabla f)\circ u| ^{2}\,|\nabla u|^{2},\]
where
\[c_{p}=-6+\frac{10}{p-1}+5(p-2)^{2}\left(\sqrt{m}+2p+6\right)\left(\sqrt{m}+ \frac{p}{2}+3\right).\]
On the other hand we have
\[\partial_{i}F=(f\circ u)^{2}\partial_{i}\varphi+2F\frac{\left\langle(\nabla f)\circ u,\partial_{i}u\right\rangle}{f\circ u},\]
so
\[|\nabla F|^{2}\geq\frac{1}{2}(f\circ u)^{4}|\nabla\varphi|^{2}-4\frac{F^{2}}{(f\circ u)^{2}}|\nabla(f\circ u)|^{2}.\]
Hence it follows from (3.14)
\[\partial_{t}\varphi-L_{p}\varphi\leq(p-2)\text{div}\left(F^{\frac {p-2}{2}}B(d\varphi,.)\right)-\frac{1}{40}F^{\frac{p-4}{2}}(f\circ u)^{2}| \nabla\varphi|^{2}\] \[+(-2\delta+c_{p}+\frac{1}{5})\frac{F^{\frac{p}{2}}}{(f\circ u)^{ 4}}|(\nabla f)\circ u|^{2}|\nabla u|^{2}+2K_{1}\frac{F^{\frac{p-2}{2}}}{(f \circ u)^{2}}|\nabla u|^{2},\]
This proves the proposition since \(\frac{1}{2}c_{p}+\frac{1}{10}\leq\delta_{p}:=3(p-2)^{2}\Big{(}\sqrt{m}+2p+6 \Big{)}^{2}+3.\)
Proposition 3.1 allows us to prove the following important gradient estimates:
**Proposition 3.2**.: _Let \((M^{m},g)\) and \((N^{n},h)\) be two complete Riemannian manifolds. Let \(u:M\times[0,T)\rightarrow\Omega\) be a smooth solution of (2.1), where \(\Omega\) is a \(\delta\)-regular set with_
\[\delta>\delta_{p}:=3(p-2)^{2}\left(\sqrt{m}+2p+6\right)^{2}+3.\]
_Let \((x_{0},t_{0})\in M\times(0,T)\) and \(R>0\). Then if \(t_{0}>R\), we have_
\[\|\nabla u\|_{L^{\infty}(B(x_{0},R/2)\times[t_{0}-R/2,t_{0}])}\leq C_{R}\left( \int_{B(x_{0},R)\times[t_{0}-R,t_{0}]}|\nabla u|^{p}dxdt+1\right),\]
_and if \(t_{0}\leq R\), we have_
\[\|\nabla u\|_{L^{\infty}(B(x_{0},R/2)\times[0,t_{0}])}\leq C_{R}\left(\int_{B( x_{0},R)\times[0,t_{0}]}|\nabla u|^{p}dxdt+\|\nabla u_{0}\|_{L^{\infty}(B(x_{0},R))}+1 \right),\]
_where \(C_{R}\) is a positive constant depending on \(R,p,M\) and \(\Omega\)._
Proof.: Fix \((x_{0},t_{0})\in M\times(0,T)\) and let \(R>0\). In what follows \(C_{R}\) is a positive constant that depends on \(R,p,M\) and \(\Omega\) and its value may change from line to line. In this proof we suppose that \(t_{0}>R\), as the case \(t_{0}\leq R\) is easier to handle, and therefore, we omit it. For \(0<r<R\), we set \(Q_{r}=B(x_{0},r)\times(t_{0}-r,t_{0})\) where \(B(x_{0},r)\) is the geodesic ball of radius \(r\). Let \(0<\rho<r<R\) and let \(\phi\in C_{0}^{1}\big{(}B(x_{0},r)\times(t_{0}-r,\infty)\big{)}\) such that \(\phi=1\) on \(Q_{\rho}\) with
\[0\leq\phi\leq 1,\ \ \ \ \ |\nabla\phi|\leq\frac{C_{m}}{r-\rho},\ \ \ \ \ |\partial_{t}\phi|\leq\frac{C_{m}}{r-\rho}, \tag{3.15}\]
where \(C_{m}\) is a positive constant depending only on the dimension \(m\) of \(M\).
As in Proposition 3.1, let \(\varphi=\frac{F}{f^{2}(u)}\), where \(F=|\nabla u|^{2}+\varepsilon\) and \(f\) satisfies condition (1.6). If we multiply inequality (3.1) by \(\varphi^{\gamma}\phi^{2}\), where \(\gamma\geq 0\), and we integrate on \(Q_{r}\), by using the hypothesis that \(\delta\geq\delta_{p}\), we get
\[\begin{split}&\frac{1}{\gamma+1}\sup_{t\leq t_{0}}\int_{B(x_{0},r)} \varphi^{\gamma+1}\ \phi^{2}\ dx-\frac{2}{\gamma+1}\int_{Q_{r}}\varphi^{\gamma+1}\phi\partial_{t} \phi\ dxdt\\ &+\gamma\int_{Q_{r}}f^{p-2}(u)\phi^{2}\varphi^{\frac{p}{2}+\gamma -2}|\nabla\varphi|^{2}\ dxdt-2\int_{Q_{r}}f^{p-2}(u)\varphi^{\frac{p}{2}+ \gamma-1}\phi|\nabla\phi||\nabla\varphi|\ dxdt\\ &\leq-\frac{1}{40}\int_{Q_{r}}f^{p-2}(u)\phi^{2}\varphi^{\frac{p} {2}+\gamma-2}|\nabla\varphi|^{2}\ dxdt+2K_{R}\int_{Q_{r}}f^{p-2}(u)\phi^{2} \varphi^{\frac{p}{2}+\gamma}\ dxdt\\ &-(p-2)\gamma\int_{Q_{r}}f^{p-2}(u)\phi^{2}\varphi^{\frac{p}{2}+ \gamma-2}B(d\varphi,d\varphi)\ dxdt\\ &-2(p-2)\int_{Q_{r}}f^{p-2}(u)\phi\varphi^{\frac{p}{2}+\gamma-1} B(d\varphi,d\phi)\ dxdt,\end{split} \tag{3.16}\]
where \(-K_{R}\leq 0\) is a lower bound of the Ricci curvature of \(M\) on \(B(x_{0},R)\) and \(B\) is given by (2.5). We have by (2.6) that
\[B(d\varphi,d\varphi)\geq 0\ \ \text{and}\ \ |B(d\varphi,d\phi)|\leq|\nabla \varphi|\,|\nabla\phi|\,. \tag{3.17}\]
It follows from (3.15), (3.16) and (3.17) that
\[\begin{split}&\frac{1}{\gamma+1}\sup_{t\leq t_{0}}\int_{B(x_{0},r)} \varphi^{\gamma+1}\ \phi^{2}\ dx+\left(\gamma+\frac{1}{40}\right)\int_{Q_{r}}f^{p-2}(u)\phi^{2} \varphi^{\frac{p}{2}+\gamma-2}|\nabla\varphi|^{2}\ dxdt\leq\\ &\frac{2}{(\gamma+1)(r-\rho)}\int_{Q_{r}}\varphi^{\gamma+1}\phi\ dxdt+2K_{R}\int_{Q_{r}}f^{p-2}(u)\phi^{2} \varphi^{\frac{p}{2}+\gamma}\ dxdt\\ &+2(p-1)\frac{1}{r-\rho}\int_{Q_{r}}f^{p-2}(u)\varphi^{\frac{p}{ 2}+\gamma-1}\phi|\nabla\varphi|\ dxdt\end{split}\]
which gives, by applying Young's inequality to the last term,
\[\begin{split}&\frac{1}{\gamma+1}\sup_{t\leq t_{0}}\int_{B(x_{0},r )}\varphi^{\gamma+1}\ \phi^{2}\ dx+\frac{1}{2}\left(\gamma+\frac{1}{40}\right)\int_{Q_{r}}f^{p-2}(u) \phi^{2}\varphi^{\frac{p}{2}+\gamma-2}|\nabla\varphi|^{2}\ dxdt\leq\\ &\frac{2}{(\gamma+1)(r-\rho)}\int_{Q_{r}}\varphi^{\gamma+1}\phi\ dxdt+2K_{R}\int_{Q_{r}}f^{p-2}(u)\phi^{2} \varphi^{\frac{p}{2}+\gamma}\ dxdt\\ & 2\left(\gamma+\frac{1}{40}\right)^{-1}\frac{(p-1)^{2}}{(r-\rho)^{2} }\int_{Q_{r}}f^{p-2}(u)\varphi^{\frac{p}{2}+\gamma}\ dxdt.\end{split} \tag{3.18}\]
It is easy to see that
\[\begin{split}\int_{Q_{r}}f^{p-2}(u)\phi^{2}\varphi^{\gamma+\frac{p}{ 2}-2}\ |\nabla\varphi|^{2}\ dxdt&\geq\frac{8}{(p+2\gamma)^{2}}\int_{Q_{r}}| \nabla(\varphi^{\frac{\gamma}{2}+\frac{p}{4}}\phi)|^{2}f^{p-2}(u)\ dxdt\\ &-\frac{16}{(p+2\gamma)^{2}}\int_{Q_{r}}\varphi^{\gamma+\frac{p}{ 2}}f^{p-2}(u)|\nabla\phi|^{2}\ dxdt.\end{split} \tag{3.19}\]
Thus, if we multiply (3.19) by \(\gamma+1\) and combine it with (3.18) and (3.15), using the fact that \(C^{-1}\leq f\leq C\), we get
\[\begin{split}&\sup_{t\leq t_{0}}\int_{B(x_{0},r)}\varphi^{ \gamma+1}\phi^{2}\ dx+\int_{Q_{r}}|\nabla(\varphi^{\frac{\gamma}{2}+\frac{p}{ 4}}\phi)|^{2}\ dxdt\leq\\ & C_{R}\left(\left(\gamma+1+\frac{1}{(r-\rho)^{2}}\right)\int_{Q _{r}}\varphi^{\frac{p}{2}+\gamma}\ dxdt+\frac{1}{r-\rho}\int_{Q_{r}}\varphi^ {\gamma+1}dxdt\right).\end{split} \tag{3.20}\]
We recall the following Sobolev inequality for \(m>2\) and \(V\in C_{0}^{\infty}(B(x_{0},R))\)
\[\bigg{(}\int_{B(x_{0},R)}V^{\frac{2m}{m-2}}\ dx\bigg{)}^{\frac{m-2}{2m}}\leq C _{R}\bigg{(}\int_{B(x_{0},R)}|\nabla V|^{2}\ dx\bigg{)}^{\frac{1}{2}}.\]
For \(m=2\) we have for any \(s\geq 1\),
\[\bigg{(}\int_{B(x_{0},R)}V^{s}dx\bigg{)}^{\frac{1}{s}}\leq C_{R,s}\bigg{(}\int _{B(x_{0},R)}|\nabla V|^{2}\ dx\bigg{)}^{\frac{1}{2}}.\]
In this proof we shall consider only the case \(m>2\). The case \(m=2\) can be handled in the same way. Applying Sobolev inequality to \(V=\varphi^{\frac{\gamma}{2}+\frac{p}{4}}\phi\) we get
\[\int_{t_{0}-r}^{t_{0}}\left(\int_{B(x_{0},r)}(\varphi^{\frac{\gamma}{2}+\frac {p}{4}}\ \phi)^{\frac{2m}{m-2}}dx\right)^{\frac{m-2}{m}}dt\leq C_{R}\int_{Q_{r}}| \nabla(\varphi^{\frac{\gamma}{2}+\frac{p}{4}}\phi)|^{2}\ dxdt. \tag{3.21}\]
On the other hand if we set \(q_{\gamma}=(1+\frac{2}{m})\gamma+\frac{p}{2}+\frac{2}{m}\), then by using the fact that \(\phi=1\) on \(Q_{\rho}\), we infer from Hölder's inequality that
\[\begin{split}&\int_{Q_{\rho}}\varphi^{q_{\gamma}}\ dxdt=\int_{Q_{\rho}}\phi^{\frac{4}{m}}\varphi^{\frac{2 \gamma}{m}+\frac{2}{m}}\ \phi^{2}\varphi^{\gamma+\frac{p}{2}}\ dxdt\leq\\ &\sup_{t\leq t_{0}}\bigg{(}\int_{B(x_{0},r)}\varphi^{\gamma+1}\ \phi^{2}\ dx \bigg{)}^{\frac{2}{m}}\times\int_{t_{0}-r}^{t_{0}}\bigg{(}\int_{B(x_{0},r)} \left(\varphi^{\frac{\gamma}{2}+\frac{p}{4}}\ \phi\right)^{\frac{2m}{m-2}}\ dx\bigg{)}^{\frac{m-2}{m}}\ dt.\end{split} \tag{3.22}\]
Hence by combining (3.20), (3.21) and (3.22) we obtain
\[\begin{split}&\int_{Q_{\rho}}\varphi^{q_{\gamma}}\ dxdt\leq\\ & C_{R}\left(\left(\gamma+1+\frac{1}{(r-\rho)^{2}}\right)\int_{Q _{r}}\varphi^{\frac{p}{2}+\gamma}\ dxdt+\frac{1}{r-\rho}\int_{Q_{r}}\varphi^{ \gamma+1}dxdt\right)^{1+\frac{2}{m}}.\end{split} \tag{3.23}\]
By Hölder's inequality and Young's inequality we have
\[\frac{1}{r-\rho}\int_{Q_{r}}\varphi^{\gamma+1}\ dx\leq|Q_{r}|+(r-\rho)^{- \frac{2\gamma+p}{2\gamma+2}}\int_{Q_{r}}\varphi^{\frac{p}{2}+\gamma}\ dxdt,\]
where \(|Q_{r}|\) is the volume of \(Q_{r}\). Since \(|Q_{r}|\leq|Q_{R}|\leq C_{R}\), then it follows from (3.23) that
\[\begin{split}&\int_{Q_{\rho}}\varphi^{q_{\gamma}}\ dxdt\leq\\ & C_{R}\left(\left(\gamma+1+(r-\rho)^{-2}+(r-\rho)^{-\frac{2\gamma +p}{2\gamma+2}}\right)\int_{Q_{r}}\varphi^{\frac{p}{2}+\gamma}\ dxdt+1\right)^{1+ \frac{2}{m}}.\end{split} \tag{3.24}\]
We now apply the Moser iteration process. For \(j\in\mathbb{N}\), let \(R_{j}=\dfrac{R(1+2^{-j})}{2}\) and \(\theta=1+\frac{2}{m}\). We define \(\gamma_{j}=\theta^{j}-1\) and \(a_{j}=\gamma_{j}+\frac{p}{2}\). Then we have
\[a_{j+1}=\theta\gamma_{j}+\frac{p}{2}+\frac{2}{m}=\theta a_{j}-\frac{p-2}{m}.\]
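For later use at the end of the iteration, we note that this recursion can be solved in closed form: since \(a_{0}=\gamma_{0}+\frac{p}{2}=\frac{p}{2}\) and, because \(\theta-1=\frac{2}{m}\), the fixed point of the map \(a\mapsto\theta a-\frac{p-2}{m}\) is \(\frac{p-2}{2}\), we get
\[a_{j}=\theta^{j}\Big(a_{0}-\frac{p-2}{2}\Big)+\frac{p-2}{2}=\theta^{j}+\frac{p-2}{2},\qquad\text{so that}\qquad \frac{\theta^{j}}{a_{j}}=\frac{\theta^{j}}{\theta^{j}+\frac{p-2}{2}}\longrightarrow 1\quad\text{as }j\to\infty.\]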
If we set \(\gamma=\gamma_{j},\ r=R_{j},\ \rho=R_{j+1}\), then it is easy to check that
\[\gamma+1+(r-\rho)^{-2}+(r-\rho)^{-\frac{2\gamma+p}{2\gamma+2}}\leq C_{R}\ 4^ {pj}.\]
Thus it follows from (3.24) that
\[\int_{Q_{R_{j+1}}}\varphi^{a_{j+1}}\ dxdt\leq C_{R}\left(4^{pj}\int_{Q_{R_{j}} }\varphi^{a_{j}}\ dxdt+1\right)^{\theta}\]
which gives, by setting \(I_{j}=\left(\int_{Q_{R_{j}}}\varphi^{a_{j}}\ dxdt+1\right)^{\theta^{-j}},\)
\[I_{j+1}\leq C_{R}^{\theta^{-j-1}}4^{pj\theta^{-j}}I_{j}.\]
Since \(\sum_{j=0}^{\infty}j\theta^{-j}\leq C\), then by iterating we get
\[I_{j+1}\leq C_{R}I_{0} \tag{3.25}\]
Now observing that
\[\left(\int_{Q_{R/2}}\varphi^{a_{j}}dxdt\right)^{\frac{1}{a_{j}}}\leq I_{j}^{ \frac{\theta^{j}}{a_{j}}}\]
and using the fact that \(\lim\limits_{j\rightarrow+\infty}\dfrac{\theta^{j}}{a_{j}}=1\), then it follows from (3.25) that
\[\left\|\varphi\right\|_{L^{\infty}(Q_{R/2})}\leq C_{R}I_{0}=C_{R}\left(\int_{ Q_{R}}\varphi^{\frac{p}{2}}dxdt+1\right).\]
This proves the proposition since \(\varphi=\dfrac{|\nabla u|^{2}+\varepsilon}{f^{2}(u)}\) and \(C^{-1}\leq f\leq C\).
## 4. Global Existence and convergence
In this section we make use of our gradient estimates on the solution of the regularised \(p\)-harmonic flow obtained in Section 3 to prove our main results.
_Proof of Theorem 1._ In this proof \(C\) denotes a positive constant depending on \(M,p,\Omega\) and the initial datum \(u_{0}\), and whose value may change from line to line. Let \(u_{\varepsilon}\) be the solution of the regularised equation (2.1) and let \([0,T_{\varepsilon})\) be its maximal existence interval. Since \(u_{\varepsilon}\) has its image in \(\Omega\) by Proposition 2.1, applying Proposition 3.2 with \(R=1\), together with the compactness of \(M\) and the fact that the \(p\)-energy functional \(E_{p,\varepsilon}\) is nonincreasing along the flow (formula (2.3) in Proposition 2.1), we get
\[\|\nabla u_{\varepsilon}\|_{L^{\infty}(M\times[0,T_{\varepsilon}))}\leq C. \tag{4.1}\]
Suppose by contradiction that \(T_{\varepsilon}<+\infty\). Then by integrating formula (2.3) in Proposition 2.1, we have
\[\int_{0}^{T_{\varepsilon}}\int_{M}|\partial_{t}u_{\varepsilon}(x,t)|^{2}dxdt \leq E_{p,\varepsilon}(u_{0})\leq E_{p,1}(u_{0}). \tag{4.2}\]
On the other hand, we have for all \(t\in[0,T_{\varepsilon})\) the following bound on the mean value \(\overline{u}_{\varepsilon}(t)\) of \(u_{\varepsilon}(.,t):\)
\[|\overline{u}_{\varepsilon}(t)|=\frac{1}{|M|}\left|\int_{M}u_{\varepsilon}(x,t )dx\right|\leq|\overline{u}_{0}|+\frac{1}{|M|}\int_{0}^{T_{\varepsilon}}\int_ {M}|\partial_{t}u_{\varepsilon}(x,t)|dxdt,\]
which implies by using the Cauchy-Schwarz inequality and (4.2)
\[|\overline{u}_{\varepsilon}(t)|\leq|\overline{u}_{0}|+C\sqrt{T_{\varepsilon}}. \tag{4.3}\]
We have by the mean-value Theorem that
\[\sup_{x\in M}|u_{\varepsilon}(x,t)-\overline{u}_{\varepsilon}(t)|\leq\text{ diam}(M)\|\nabla u_{\varepsilon}(.,t)\|_{L^{\infty}(M)}, \tag{4.4}\]
where \(\text{diam}(M)\) is the diameter of \(M\). Hence it follows from (4.1), (4.3) and (4.4) that
\[\|u_{\varepsilon}\|_{L^{\infty}(M\times[0,T_{\varepsilon}))}+\|\nabla u_{ \varepsilon}\|_{L^{\infty}(M\times[0,T_{\varepsilon}))}\leq C+C\sqrt{T_{ \varepsilon}}. \tag{4.5}\]
Using (4.5) and the results of Dibenedetto [2], we have for some \(\beta\in(0,1)\),
\[\|u_{\varepsilon}\|_{C^{1+\beta,\beta/p}(M\times[0,T_{\varepsilon}))}\leq C_ {T_{\varepsilon}}, \tag{4.6}\]
where the constant \(C_{T_{\varepsilon}}\) depends also on \(T_{\varepsilon}\). The theory of linear parabolic equations (see [10]) together with (4.6) give, for some \(0<\alpha<1\),
\[\|u_{\varepsilon}\|_{C^{2+\alpha,1+\frac{\alpha}{2}}(M\times[0,T_{\varepsilon }))}\leq C_{T_{\varepsilon}} \tag{4.7}\]
where \(C_{T_{\varepsilon}}\) is a new constant that also depends on the modulus of ellipticity \(\varepsilon\). Estimate (4.7) implies that \(u_{\varepsilon}\) can be extended beyond \(T_{\varepsilon}\) contradicting thus the maximality of \(T_{\varepsilon}\). Hence we have \(T_{\varepsilon}=+\infty\).
We are now in a position to complete the proof of Theorem 1. By the result above, \(u_{\varepsilon}\) is defined on \([0,+\infty)\) and we have by (4.6) for all \(T>0\)
\[\|u_{\varepsilon}\|_{C^{1+\beta,\,\beta/p}(M\times[0,T])}\leq C_{T}, \tag{4.8}\]
where \(C_{T}\) is a positive constant depending on \(T,u_{0},p,M\) and \(\Omega\) but not on \(\varepsilon\).
It follows from estimate (4.8) that there exist a sequence \(\varepsilon_{k}\to 0\) and a map \(u\in C^{1+\beta,\,\beta/p}_{loc}(M\times[0,+\infty),\Omega)\) such that \(u_{\varepsilon_{k}}\to u\) in \(C^{1+\beta^{\prime},\,\beta^{\prime}/p}_{loc}(M\times[0,+\infty),\Omega)\) for all \(\beta^{\prime}<\beta\). In addition, the energy formula (2.3) in Proposition 2.1 gives for all \(T>0\),
\[\int_{0}^{T}\int_{M}|\partial_{t}u_{\varepsilon_{k}}(x,t)|^{2}dxdt+E_{p, \varepsilon_{k}}\big{(}u_{\varepsilon_{k}}(.,T)\big{)}\leq E_{p,\varepsilon_{ k}}(u_{0}) \tag{4.9}\]
which implies that \(\partial_{t}u_{\varepsilon_{k}}\to\partial_{t}u\) weakly in \(L^{2}(M\times[0,+\infty))\) and we have the energy inequality for the limit \(u\)
\[\int_{0}^{T}\int_{M}|\partial_{t}u(x,t)|^{2}dxdt+E_{p}\big{(}u(.,T)\big{)}\leq E _{p}\big{(}u_{0}\big{)}. \tag{4.10}\]
Passing to the limit in (2.1) when \(\varepsilon_{k}\to 0\), one can easily check that \(u\) is a solution of (1.5).
In order to prove the uniqueness of solutions, we recall the following well known inequality, valid for any \(a,b\in\mathbb{R}^{L}\) and \(p\geq 2\):

\[\big{\langle}|a|^{p-2}a-|b|^{p-2}b,a-b\big{\rangle}\geq 2^{2-p}|a-b|^{p}, \tag{4.11}\]

where \(\langle.,.\rangle\) and \(|.|\) denote the Euclidean inner product and the corresponding Euclidean norm in \(\mathbb{R}^{L}\).
Let \(T>0\) and let \(u_{1},u_{2}\in C^{1+\beta}(M\times[0,T])\) be two solutions of (1.5) such that \(u_{1}(.,0)=u_{2}(.,0)\). If we set \(w=u_{1}-u_{2}\), then taking the difference of the equations satisfied by \(u_{1}\) and \(u_{2}\) (the same as equation (1.5)), multiplying it by \(w\), integrating on \(M\times[0,T]\), and using (4.11) along with the fact that \(\nabla u_{1}\) and \(\nabla u_{2}\) are bounded in \(L^{\infty}(M\times[0,T])\), one can easily check that for any \(t\in[0,T]\),
\[\int_{M}|w(x,t)|^{2}dx\leq C\int_{0}^{t}\int_{M}|w(x,s)|^{2}dxds.\]
The right hand side of the above inequality is increasing in \(t\), therefore
\[\sup_{t^{\prime}\in[0,t]}\int_{M}|w(.,t^{\prime})|^{2}\ dx\leq Ct\sup_{t^{\prime}\in[0,t]}\int_{M}|w(.,t^{\prime})|^{2}\ dx,\]
thus, for \(t<\frac{1}{C}\) we get \(w\equiv 0\) for \(t^{\prime}\in[0,t]\). Iterating the argument proves the assertion.
Now let us prove the convergence of the flow at infinity when the target manifold \(N\) is compact. First we observe that by the energy inequality (4.10) we have
\[\int_{0}^{+\infty}\int_{M}|\partial_{t}u(x,t)|^{2}dxdt\leq E_{p}\big{(}u_{0} \big{)},\]
which implies the existence of a sequence \(t_{k}\to+\infty\) such that
\[\int_{M}|\partial_{t}u(x,t_{k})|^{2}dx\to 0\ \ \text{as}\ \ t_{k}\to+\infty. \tag{4.12}\]
On the other hand, it follows from estimate (4.1) and the fact that \(N\) is compact that
\[\|u\|_{L^{\infty}(M\times[0,+\infty))}+\|\nabla u\|_{L^{\infty}(M\times[0,+ \infty))}\leq C\]
and the results of Dibenedetto [2] then imply that
\[\|u\|_{C^{1+\beta,\,\beta/p}(M\times[0,+\infty))}\leq C. \tag{4.13}\]
Hence by passing to a subsequence if necessary, we deduce from (4.13) that \((u(.,t_{k}))_{k}\) converges in \(C^{1+\beta^{\prime}}(M,\Omega)\) for all \(\beta^{\prime}<\beta\) to a map \(u_{\infty}\in C^{1+\beta}(M,\Omega)\). By passing to the limit in equation (1.5) and using (4.12) we have that \(u_{\infty}\) is a \(p\)-harmonic map satisfying \(E_{p}(u_{\infty})\leq E_{p}(u_{0})\). The proof of Theorem 1 is complete.
The proof of Theorem 2 relies on Theorem 1 by using an exhaustion of \(M\) by a sequence of compact manifolds and the following proposition.
**Proposition 4.1**.: _Let \(u\) be the solution of problem (1.5) given by Theorem 1. Then for any ball \(B(x_{0},R)\subset M\), there exists a constant \(C_{R}>0\) depending on \(B(x_{0},R),p\) and \(\Omega\) such that_
\[\sup_{t\geq 0}\|\nabla u(.,t)\|_{L^{\infty}(B(x_{0},R/2))}\leq C_{R}\left( \int_{M}|\nabla u_{0}|^{p}dx+\|\nabla u_{0}\|_{L^{\infty}(B(x_{0},R))}+1\right).\]
Proof.: As in the proof of Theorem 1, \(u\) is the limit of a sequence \((u_{\varepsilon_{k}})_{k}\) of solutions to the regularised problem (2.1) with \(\varepsilon_{k}\to 0\). Applying Proposition 3.2 to \(u_{\varepsilon_{k}}\) and passing to the limit as \(k\to\infty\), we easily obtain the desired result.
_Proof of Theorem 2_. Suppose that \((M,g)\) is a complete noncompact Riemannian manifold. Let \((U_{i})_{i\geq 1}\) be an exhaustion of \(M\) by compact manifolds with smooth boundaries. More precisely, each \(U_{i}\) is an open set of \(M\) such that \(\overline{U}_{i}\) is a compact manifold with smooth boundary \(\partial U_{i}\) and
\[\begin{cases}\overline{U}_{i}\subset U_{i+1}\\ \bigcup_{i\geq 1}U_{i}=M.\end{cases} \tag{4.14}\]
In order to apply Theorem 1 it is necessary to consider manifolds without boundary. To this end, we consider for each \(i\geq 1\), the double manifold of \(U_{i}\) that we denote by \(\widetilde{U}_{i}\). Thus \(\widetilde{U}_{i}\) is a compact manifold without boundary such that \(\overline{U}_{i}\subset\widetilde{U}_{i}\) and the metric \(g\) on \(\overline{U}_{i}\) extends to a \(C^{1}\)-metric \(\widetilde{g}_{i}\) on \(\widetilde{U}_{i}\). We smooth out \(\widetilde{g}_{i}\) on a neighborhood of \(\partial U_{i}\). More precisely, for a fixed \(0<\varepsilon<\frac{1}{4}\mathrm{diam}(U_{1})\), where \(\mathrm{diam}(U_{1})\) is the diameter of \(U_{1}\), we let
\[U_{i}^{\varepsilon}=\big{\{}x\in U_{i}\ :\ d\,(x,\partial U_{i})>\varepsilon \big{\}}.\]
Then the new metric, that we still denote by \(\widetilde{g}_{i}\) for simplicity, is \(C^{\infty}\) on \(\widetilde{U}_{i}\), it can be chosen arbitrarily close to \(g\) in the \(C^{1}\)-norm and it satisfies
\[\widetilde{g}_{i}=g\ \ \text{on}\ \ U_{i}^{\varepsilon}. \tag{4.15}\]
In the same way, we extend the initial datum \(u_{0}\) to a map \(\widetilde{u}_{0,i}\) on \(\widetilde{U}_{i}\) that we smooth out on a neighborhood of \(\partial U_{i}\), and one can choose \(\widetilde{u}_{0,i}\) arbitrarily close to \(u_{0}\) in the \(C^{1}\)-norm. Thus we have \(\widetilde{u}_{0,i}\in C^{\infty}\left(\widetilde{U}_{i},\Omega\right)\) with
\[\widetilde{u}_{0,i}=u_{0}\ \ \text{on}\ \ U_{i}^{\varepsilon}, \tag{4.16}\]
and we may suppose without loss of generality that
\[\int_{\widetilde{U}_{i}}|\nabla\widetilde{u}_{0,i}|^{p}d\widetilde{g}_{i}\leq 2 \int_{U_{i}}|\nabla u_{0}|^{p}dx+1, \tag{4.17}\]
where \(d\widetilde{g}_{i}\) is the volume element with respect to \(\widetilde{g}_{i}\) and \(dx\) is the volume element with respect to \(g\).
Then we consider on each \(\widetilde{U}_{i}\) the \(p\)-harmonic heat flow problem
\[\left\{\begin{array}{l}\partial_{t}u-\widetilde{\Delta}_{p,i}u=|\nabla u|^{ p-2}A(u)(\nabla u,\nabla u),\\ \\ u(x,0)=\widetilde{u}_{0,i}(x),\end{array}\right. \tag{4.18}\]
where \(\widetilde{\Delta}_{p,i}\) is the \(p\)-Laplacian with respect to the metric \(\widetilde{g}_{i}\).
Thanks to Theorem 1, problem (4.18) admits a global solution \(u_{i}\in C_{loc}^{1+\beta,\,\beta/p}\left(\widetilde{U}_{i}\times[0,+\infty),\Omega\right)\) satisfying the \(p\)-energy inequality
\[\int_{0}^{T}\int_{\widetilde{U}_{i}}|\partial_{t}u_{i}|^{2}d\widetilde{g}_{i} dt+\frac{1}{p}\int_{\widetilde{U}_{i}}|\nabla u_{i}(.,T)|^{p}d\widetilde{g}_{i} \leq\frac{1}{p}\int_{\widetilde{U}_{i}}|\nabla\widetilde{u}_{0,i}|^{p}d \widetilde{g}_{i} \tag{4.19}\]
for all \(T>0\).
Since \(\widetilde{g}_{i}=g\) on \(U_{i}^{\varepsilon}\), then \(u_{i}\) is a solution of equation (1.5) in \(U_{i}^{\varepsilon}\). We shall prove uniform gradient estimates on \(u_{i}\) on fixed balls of \(M\). For each fixed \(R>0\), we denote by \(C_{R}\) a positive constant that depends on \(R,M,p,\Omega\) and the initial datum \(u_{0}\), and whose value may change from line to line. Fix \(x_{0}\in M\) and \(R>0\), then we have from (4.14) and the definition of \(U_{i}^{\varepsilon}\), for \(i\) large enough, that \(B(x_{0},R)\subset U_{i}^{\varepsilon}\). It follows from Proposition 4.1 that
\[\sup_{t\geq 0}\|\nabla u_{i}(.,t)\|_{L^{\infty}(B(x_{0},R/2))}\leq C_{R}\left( \int_{\widetilde{U}_{i}}|\nabla\widetilde{u}_{0,i}|^{p}dx+\|\nabla\widetilde{ u}_{0,i}\|_{L^{\infty}(B(x_{0},R))}+1\right)\]
and by (4.17) we get
\[\begin{split}\sup_{t\geq 0}\|\nabla u_{i}(.,t)\|_{L^{\infty}(B(x_{0},R/2))}&\leq C_{R}\left(\int_{M}|\nabla u_{0}|^{p}dx+\|\nabla u_{0 }\|_{L^{\infty}(B(x_{0},R))}+1\right)\\ &\leq C_{R}.\end{split} \tag{4.20}\]
As in the proof of Theorem 1, by using (4.17), (4.19), (4.20) and the mean value Theorem, we have for any \(T>0\),
\[\|u_{i}\|_{L^{\infty}(B(x_{0},R/2)\times[0,T])}\leq C_{R}+C_{R}\sqrt{T}. \tag{4.21}\]
It follows from (4.20), (4.21) and the results of Dibenedetto [2] on degenerate parabolic equations that
\[\|u_{i}\|_{C^{1+\beta,\,\beta/p}(B(x_{0},R/2)\times[0,T])}\leq C_{R,T}, \tag{4.22}\]
for some constant \(\beta\in(0,1)\), where the constant \(C_{R,T}\) depends also on \(T\).
Since \(B(x_{0},R)\subset U_{i}^{\varepsilon}\) for \(i\) large enough and \(\widetilde{g}_{i}=g\) on \(U_{i}^{\varepsilon}\), then we have from (4.19) and (4.17) that
\[\int_{0}^{T}\int_{B(x_{0},R)}|\partial_{t}u_{i}(x,t)|^{2}dxdt\leq E_{p}(u_{0})+1. \tag{4.23}\]
Thus if we set \(T=R=R_{k}\), where \((R_{k})_{k}\) is a sequence such that \(R_{k}\to+\infty\), then by using the Cantor diagonal argument, it follows from (4.22) and (4.23) that there exists a subsequence \((u_{i_{k}})_{k}\) and a map \(u\in C^{1+\beta,\,\beta/p}_{loc}(M\times[0,+\infty),\Omega)\) such that
\[u_{i_{k}}\longrightarrow u\ \ \mbox{in}\ \ C^{1+\beta^{\prime},\,\beta^{ \prime}/p}(B(x_{0},R)\times[0,T),\Omega)\ \ \mbox{for all}\ R,T>0,\ 0<\beta^{\prime}<\beta,\]
and
\[\partial_{t}u_{i_{k}}\longrightarrow\partial_{t}u\ \ \mbox{weakly in}\ \ L^{2}(B(x_{0},R)\times[0,T),\Omega)\ \ \mbox{for all}\ R,T>0.\]
It is easy to check that by passing to the limit in (4.18) and using (4.15) and (4.16), \(u\) is a solution of (1.5). By passing to the limit in (4.19), one obtains formula (1.9) in Theorem 2.
When \(N\) is compact, the convergence of the flow can be proved in the same way as in the proof of Theorem 1. Indeed, if we take \(i=i_{k}\) in (4.23) and pass to the limit when \(k\to+\infty\), we obtain for any \(R,T>0\)
\[\int_{0}^{T}\int_{B(x_{0},R)}|\partial_{t}u(x,t)|^{2}dxdt\leq E_{p}(u_{0})+1.\]
Letting \(T\to+\infty\) and \(R\to+\infty\), this implies
\[\int_{0}^{\infty}\int_{M}|\partial_{t}u(x,t)|^{2}dxdt\leq E_{p}(u_{0})+1. \tag{4.24}\]
It follows from (4.24) that there exists a sequence \(t_{j}\to+\infty\) such that
\[\int_{M}|\partial_{t}u(x,t_{j})|^{2}dx\to 0\ \ \mbox{as}\ \ t_{j}\to+\infty. \tag{4.25}\]
On the other hand, if we take \(i=i_{k}\) in (4.20) and pass to the limit when \(k\to+\infty\), we obtain for any \(R>0\)
\[\|\nabla u\|_{L^{\infty}(B(x_{0},R)\times[0,+\infty))}\leq C_{R}\]
which together with the results of Dibenedetto [2] imply, since \(N\) is compact,
\[\|u\|_{C^{1+\beta,\beta/p}(B(x_{0},R)\times[0,+\infty))}\leq C_{R}. \tag{4.26}\]
Hence by taking a sequence \(R_{j}\to\infty\), it follows from (4.25), (4.26) and the Cantor Diagonal argument that the sequence \(u(.,t_{j})\) admits a subsequence that converges in \(C^{1+\beta^{\prime}}(M,\Omega)\) for all \(\beta^{\prime}<\beta\) to a map \(u_{\infty}\in C^{1+\beta}(M,\Omega)\). By passing to the limit in equation (1.5) and using (4.25) we deduce that \(u_{\infty}\) is a \(p\)-harmonic map satisfying \(E_{p}(u_{\infty})\leq E_{p}(u_{0})\). The proof of Theorem 2 is complete.
Proof of Theorem 3.: Theorem 3 is a direct consequence of Theorem 2 since a manifold \(N\) with nonpositive sectional curvature is a \(\delta\)-generalised regular ball for any \(\delta>0\) (see Example 1.1 ). In this case, it suffices to apply Theorem 2 by taking any \(\delta>\delta_{p}\).
For the proof of Theorem 4 we need a modified version of Proposition 3.1 concerning solutions of the \(p\)-harmonic equation (1.4).
**Proposition 4.2**.: _Let \((M^{m},g)\), \((N^{n},h)\) be two Riemannian manifolds and let \(\Omega\subset N\) be a \(\delta\)-regular set. Let \(u\in C^{1}(M,\Omega)\) be a \(p\)-harmonic map and set_
\[\varphi(x)=\frac{F(x)}{f^{2}(u(x))},\]
_where \(F(x)=|\nabla u(x)|^{2}\) and \(f\) is the function satisfying condition (1.6). Then we have on the set \(\left\{x\in M~{}:~{}\nabla u(x)\neq 0\right\}\),_
\[\begin{split}&-\text{div}\left(F^{\frac{p-2}{2}}\nabla\varphi \right)\leq(p-2)\text{div}\left(F^{\frac{p-2}{2}}B(d\varphi,.)\right)-\frac{1 }{40}F^{\frac{p-4}{2}}(f\circ u)^{2}|\nabla\varphi|^{2}\\ &-2(\delta-\delta_{p})\frac{F^{\frac{p}{2}}}{(f\circ u)^{4}}\left| (\nabla f)\circ u\right|^{2}|\nabla u|^{2}+~{}2K_{1}\frac{F^{\frac{p-2}{2}}}{( f\circ u)^{2}}|\nabla u|^{2},\end{split} \tag{4.27}\]
_where \(\delta_{p}=3(p-2)^{2}\Big{(}\sqrt{m}+2p+6\Big{)}^{2}+3\) and \(-K_{1}\leq 0\) is a lower bound of the Ricci curvature of \(M\). The tensor \(B\) is defined in Section 2 by (2.5) (by taking \(\varepsilon=0\))._
Proof.: The proof is exactly the same as that of Proposition 3.1. It is even easier since the parabolic term \(\partial_{t}u\) is not present. Nevertheless, we have to consider only points \(x\in M\) such that \(\nabla u(x)\neq 0\): the reason is that our \(p\)-harmonic map is sufficiently smooth at such points to apply the elliptic version of the Bochner formula (2.7).
Proof of Theorem 4.: In this proof \(C\) denotes a positive constant depending only on \(M,\Omega\) and \(p\), and whose value may change from line to line. Define the set \(E\) by
\[E=\left\{x\in M~{}:~{}\nabla u(x)\neq 0\right\}\]
which is an open set of \(M\) since \(u\in C^{1}(M)\). By the regularity theory of elliptic equations we have \(u\in C^{\infty}(E)\).
As in Proposition 4.2, we set \(\varphi=\frac{F}{f^{2}(u)}\), where \(F=|\nabla u|^{2}\), and \(f\) is as in (1.6). For \(\varepsilon>0\), let \(\varphi_{\varepsilon}=(\varphi-\varepsilon)^{+}\). Then \(\varphi_{\varepsilon}\) is a locally Lipschitz function on \(M\) with support in \(E\) and satisfies
\[\begin{cases}(i)~{}~{}\varphi_{\varepsilon}(x)=0&\text{if }\varphi(x)< \varepsilon\\ (ii)~{}~{}\varphi_{\varepsilon}(x)=\varphi(x)-\varepsilon&\text{if }\varphi(x) \geq\varepsilon\\ (iii)~{}~{}\nabla\varphi_{\varepsilon}(x)=\nabla\varphi(x)&\text{if }\varphi(x) \geq\varepsilon\\ (iv)~{}~{}\nabla\varphi_{\varepsilon}(x)=0&\text{if }\varphi(x)< \varepsilon.\end{cases} \tag{4.28}\]
Fix a point \(x_{0}\in E\) and \(R>0\), and let \(\phi_{R}\in C^{1}_{0}(B(x_{0},2R))\) such that
\[\begin{cases}0\leq\phi_{R}\leq 1\\ \phi_{R}=1~{}~{}\text{on}~{}~{}B(x_{0},R)\\ |\nabla\phi_{R}|\leq CR^{-1}.\end{cases} \tag{4.29}\]
We have from Proposition 4.2, since by hypothesis we have \(K_{1}=0\) (\(M\) is supposed to be of nonnegative Ricci curvature) and \(\delta>\delta_{p}\),
\[-\text{div}\left(F^{\frac{p-2}{2}}\nabla\varphi\right)+\frac{1}{40}F^{\frac{p-4} {2}}f^{2}(u)|\nabla\varphi|^{2}\leq(p-2)\text{div}\left(F^{\frac{p-2}{2}}B(d \varphi,.)\right). \tag{4.30}\]
If we multiply (4.30) by \(\phi_{R}^{2}\varphi_{\varepsilon}\varphi^{-1}\) and integrate on \(E\) by using (4.28) we have
\[\begin{split}&\varepsilon\int_{E\cap B(x_{0},2R)}F^{\frac{p-2}{2}} \phi_{R}^{2}\varphi^{-2}|\nabla\varphi_{\varepsilon}|^{2}dx+\frac{1}{40}\int_ {E\cap B(x_{0},2R)}F^{\frac{p-4}{2}}f^{2}(u)\phi_{R}^{2}\varphi_{\varepsilon} \varphi^{-1}|\nabla\varphi_{\varepsilon}|^{2}\ dx\leq\\ &-2\int_{E\cap B(x_{0},2R)}F^{\frac{p-2}{2}}\varphi_{\varepsilon} \phi_{R}\varphi^{-1}\nabla\varphi_{\varepsilon}\cdot\nabla\phi_{R}dx- \varepsilon(p-2)\int_{E\cap B(x_{0},2R)}F^{\frac{p-2}{2}}\phi_{R}^{2}\varphi^ {-2}B(d\varphi_{\varepsilon},d\varphi_{\varepsilon})dx\\ &-2(p-2)\int_{E\cap B(x_{0},2R)}F^{\frac{p-2}{2}}\varphi_{ \varepsilon}\phi_{R}\varphi^{-1}B(d\varphi_{\varepsilon},d\phi_{R})dx.\end{split} \tag{4.31}\]
Since by (2.6) we have \(B(d\varphi_{\varepsilon},d\varphi_{\varepsilon})\geq 0\) and \(|B(d\varphi_{\varepsilon},d\phi_{R})|\leq|\nabla\varphi_{\varepsilon}||\nabla\phi_{R}|\), it follows from (4.31), by using (4.29) and the fact that \(F=f^{2}(u)\varphi\), that
\[\begin{split}&\frac{39}{40}\varepsilon\int_{E\cap B(x_{0},2R)}F^{ \frac{p-2}{2}}\phi_{R}^{2}\varphi^{-2}|\nabla\varphi_{\varepsilon}|^{2}dx+ \frac{1}{40}\int_{E\cap B(x_{0},2R)}F^{\frac{p-4}{2}}f^{2}(u)\phi_{R}^{2}| \nabla\varphi_{\varepsilon}|^{2}\ dx\leq\\ & CR^{-1}\int_{E\cap B(x_{0},2R)}F^{\frac{p-2}{2}}\phi_{R}|\nabla \varphi_{\varepsilon}|dx\end{split}\]
which implies, since \(C^{-1}\leq f\leq C\),
\[\int_{E\cap B(x_{0},2R)}F^{\frac{p-4}{2}}\phi_{R}^{2}|\nabla\varphi_{ \varepsilon}|^{2}\ dx\leq CR^{-1}\int_{E\cap B(x_{0},2R)}F^{\frac{p-2}{2}}\phi _{R}|\nabla\varphi_{\varepsilon}|dx. \tag{4.32}\]
On the other hand, we have by the Cauchy-Schwarz inequality
\[\int_{E\cap B(x_{0},2R)}F^{\frac{p-2}{2}}\phi_{R}|\nabla\varphi_{\varepsilon} |dx\leq\left(\int_{E\cap B(x_{0},2R)}F^{\frac{p-4}{2}}\phi_{R}^{2}|\nabla \varphi_{\varepsilon}|^{2}dx\right)^{\frac{1}{2}}\left(\int_{E\cap B(x_{0},2R )}F^{\frac{p}{2}}\ dx\right)^{\frac{1}{2}}.\]
Hence it follows from (4.32) that
\[\int_{E\cap B(x_{0},2R)}F^{\frac{p-4}{2}}\phi_{R}^{2}|\nabla\varphi_{ \varepsilon}|^{2}\ dx\leq CR^{-2}\int_{E\cap B(x_{0},2R)}F^{\frac{p}{2}}\ dx\]
and since \(\phi_{R}=1\) on \(B(x_{0},R)\), then we obtain
\[\int_{E\cap B(x_{0},R)}F^{\frac{p-4}{2}}|\nabla\varphi_{\varepsilon}|^{2}\ dx\leq CR^{-2}\int_{E\cap B(x_{0},2R)}F^{\frac{p}{2}} \ dx\leq CR^{-2}E_{p}(u). \tag{4.33}\]
Thus by letting \(R\to+\infty\) in (4.33) we obtain \(F^{\frac{p-4}{2}}|\nabla\varphi_{\varepsilon}|^{2}=0\) on \(E\), and then \(\nabla\varphi_{\varepsilon}=0\) on \(E\) since \(F>0\) on \(E\). Since \(\varepsilon>0\) is arbitrary, we then have
\[\nabla\varphi=0\ \text{on}\ E. \tag{4.34}\]
Our objective is to prove that \(F=0\) on \(M\). Suppose by contradiction that there exists \(x_{0}\in M\) such that \(F(x_{0})\neq 0\), that is, \(x_{0}\in E\). Let \(\mathcal{C}_{0}\subset E\) be the connected component of \(x_{0}\) in \(E\). Since \(\mathcal{C}_{0}\) is open and connected, then by (4.34) \(\varphi\) is constant on \(\mathcal{C}_{0}\), that is, \(F=\lambda_{0}f^{2}(u)\) on \(\mathcal{C}_{0}\) for some constant \(\lambda_{0}\geq 0\). But by (1.6), we have \(f^{2}\geq C^{-2}\). Hence we have
\[F\geq\lambda_{0}C^{-2}\ \ \text{on}\ \mathcal{C}_{0}. \tag{4.35}\]
We distinguish two cases : \(E=M\) and \(E\neq M\).
**Case 1 : \(E=M\).** Since \(M\) is connected, then \(\mathcal{C}_{0}=M\) in this case. We have by hypothesis
\[\int_{M}F^{\frac{p}{2}}dx=\int_{M}|\nabla u|^{p}dx<+\infty\]
which implies by using (4.35) that \(\lambda_{0}=0\). Hence \(F=0\) on \(M\) contradicting the fact that \(F(x_{0})\neq 0\).
**Case 2 : \(E\neq M\).** In this case we have \(\partial\mathcal{C}_{0}\neq\emptyset\), where \(\partial\mathcal{C}_{0}\) is the topological boundary of \(\mathcal{C}_{0}\). It follows from (4.35) by continuity of \(F\) that \(F\geq\lambda_{0}C^{-2}\) on \(\partial\mathcal{C}_{0}\). On the other hand, by the definition of a connected component and since \(E\) is open, we have \(\partial\mathcal{C}_{0}\subset M\setminus E\). This implies that \(F=0\) on \(\partial\mathcal{C}_{0}\), and then \(\lambda_{0}=0\). Thus we have \(F=0\) on \(\mathcal{C}_{0}\) contradicting \(F(x_{0})\neq 0\).
Therefore, we have proved that \(F=0\), which implies that \(u\) is constant on \(M\) since \(M\) is connected. The proof of Theorem 4 is complete.
| We study the heat flow of p-harmonic maps between complete Riemannian manifolds. The flow exists globally when the initial map takes its values in a generalised regular ball. In particular, when the sectional curvature of the target manifold is nonpositive, the flow exists globally for any initial map with finite p-energy. Moreover, when the target manifold is compact, the flow converges to a p-harmonic map. This extends the result of Liao-Tam [12] on the heat flow of harmonic maps (p = 2) to the case p >= 2. We also derive a Liouville-type theorem for p-harmonic maps between complete Riemannian manifolds. |
2302.00122 | Chance-Constrained Trajectory Optimization for High-DOF Robots in
Uncertain Environments | Many practical applications of robotics require systems that can operate
safely despite uncertainty. In the context of motion planning, two types of
uncertainty are particularly important when planning safe robot trajectories.
The first is environmental uncertainty -- uncertainty in the locations of
nearby obstacles, stemming from sensor noise or (in the case of obstacles'
future locations) prediction error. The second class of uncertainty is
uncertainty in the robots own state, typically caused by tracking or estimation
error. To achieve high levels of safety, it is necessary for robots to consider
both of these sources of uncertainty. In this paper, we propose a risk-bounded
trajectory optimization algorithm, known as Sequential Convex Optimization with
Risk Optimization (SCORA), to solve chance-constrained motion planning problems
despite both environmental uncertainty and tracking error. Through experiments
in simulation, we demonstrate that SCORA significantly outperforms
state-of-the-art risk-aware motion planners both in planning time and in the
safety of the resulting trajectories. | Charles Dawson, Ashkan Jasour, Andreas Hofmann, Brian Williams | 2023-01-31T22:00:53 | http://arxiv.org/abs/2302.00122v1 | # Chance-Constrained Trajectory Optimization for High-DOF Robots in Uncertain Environments
###### Abstract
Many practical applications of robotics require systems that can operate safely despite uncertainty. In the context of motion planning, two types of uncertainty are particularly important when planning safe robot trajectories. The first is environmental uncertainty -- uncertainty in the locations of nearby obstacles, stemming from sensor noise or (in the case of obstacles' future locations) prediction error. The second class of uncertainty is uncertainty in the robots own state, typically caused by tracking or estimation error. To achieve high levels of safety, it is necessary for robots to consider both of these sources of uncertainty. In this paper, we propose a risk-bounded trajectory optimization algorithm, known as Sequential Convex Optimization with Risk Optimization (SCORA), to solve chance-constrained motion planning problems despite both environmental uncertainty and tracking error. Through experiments in simulation, we demonstrate that SCORA significantly outperforms state-of-the-art risk-aware motion planners both in planning time and in the safety of the resulting trajectories.
## I Introduction
For most robots, developed in the cradle of a well-controlled, carefully structured simulation or laboratory environment, the primary challenge posed by deployment in the outside world is _uncertainty_. Imagine a robot making a delivery on a factory floor, or an autonomous vehicle driving on a busy road. A nearby human might turn left at an upcoming intersection, but they could just as easily turn right, cutting across the robot's planned path. The robot might have a camera to detect obstacles in its way, but its obstacle-detection algorithms may be error-prone. When it comes time to execute a planned path, the robot may only be able to track its intended path to within some tolerance, leading to uncertainty in its own state.
All of these factors point to a need for robots that can operate safely despite uncertainty both in the state of the environment (e.g. robustness to perception and prediction errors) and in the state of the robot (e.g. robustness to estimation and tracking errors). There is a large body of work dealing with chance-constrained motion planning -- which aims to find trajectories where the probability of failure (i.e. collision) is below some bound -- when there is uncertainty only in the state of the robot [12, 2, 15, 6, 11]. Similarly, there are a number of techniques for finding safe paths when the state of the external environment is uncertain [1, 10, 13, 7]. However, relatively little work has been done to build a chance-constrained motion planner that simultaneously considers the risk due to environmental uncertainty and tracking error uncertainty. Additionally, many of the existing approaches on chance-constrained motion planning are restricted to highly simplified geometric models. Some planners restrict themselves to point-mass models, which cannot represent high degree-of-freedom robots such as manipulators [12, 2, 11, 1, 10]. Others rely on collections of many small spheres to "bubble-wrap" the scene geometry, needlessly increasing computational cost when modeling objects such as humans or furniture [13]. Other approaches can model more complicated geometry (such as [6]) but are slow to find solutions and prone to underestimating the risk of collision.
As a result of these gaps in the state of the art, there is an unmet need for a chance-constrained motion planner that can simultaneously consider uncertainty in both the environment and robot state even when the robot and environment are modeled using general convex geometry.
### _Contributions_
In this paper, we present a chance-constrained trajectory optimization algorithm that is capable of meeting all of these needs. This algorithm, called Sequential Convex Optimization with Risk Allocation (SCORA), successfully manages risk due to both environmental uncertainty and tracking error; supports robots and environments with rich, convex geometry; and quickly finds high-quality trajectories that limit the risk of collision between the robot and its environment.
At the core of this algorithm are differentiable risk estimates known as \(\epsilon\)-shadows, which we extend from previous work to provide robustness to state uncertainty and tracking error as well as safety in the face of environmental uncertainty. Using these estimates, SCORA employs a novel approach to chance-constrained non-convex optimization by solving a sequence of chance-constrained convex approximations to the full non-convex trajectory planning problem. As we demonstrate in this paper, our approach significantly outperforms comparable algorithms in both run-time and solution quality.
## II Related work
Early works on chance-constrained motion planning focused on the simple case of a point robot navigating a convex space, where the user has specified a maximum acceptable probability of violating any of the convex constraints on
the robot [16, 12, 2]. Although the applicability of these planners is limited to relatively simple environments, their key insight, that more efficient paths may be found by dynamically allocating risk between constraints, is broadly applicable to chance-constrained planning problems. This insight gave rise to the iterative risk allocation (IRA [12]) and convex risk allocation (CRA [2]) algorithms. IRA works by repeatedly solving an inner-loop optimization problem and changing the allocation of risk between constraints at each step, while CRA folds risk allocation into the inner-loop optimization (increasing the problem complexity from linear programming to convex programming but avoiding a costly outer-loop optimization). CRA was later extended to employ mixed-integer convex optimization to solve problems involving a point robot navigating around polytope obstacles [3]. A more recent work known as p-Chekov [6] employs IRA as an outer-loop around non-convex trajectory optimization to handle robots with non-trivial geometry, but this approach is limited by long run-times and inaccurate risk estimation. Moreover, none of these techniques are designed to consider uncertainty in the configuration of obstacles, although p-Chekov can be extended to handle environmental uncertainty as well as tracking error (we will use this extension as a main point of comparison for the performance of our approach). As we discuss in the following sections, our approach builds on the foundation of CRA, which we extend to support both environmental uncertainty and tracking error, including risk allocation within a non-convex optimization problem.
In parallel to these optimization-based works, a number of techniques extend traditional sampling-based motion planning algorithms such as PRM and RRT to the chance-constrained context [4, 1, 11]. Most of these techniques are limited to point-mass representations of robot geometry, but they add important developments to the theory of chance-constrained motion planning. In particular, Axelrod, Kaelbling, and Lozano-Perez in [1] introduce the notion of \(\epsilon\)-shadows: geometric objects that provide a means of quickly computing upper-bounds on the risk of collision due to environmental uncertainty. Dawson _et al._ extend the theory of \(\epsilon\)-shadows in [7] to support arbitrary convex robot geometry and derived the gradient of \(\epsilon\)-shadow risk estimates with respect to robot state. Dawson _et al._ use this framework to develop an efficient trajectory optimization algorithm that handles both rich geometry and environmental uncertainty; our work in this paper can be seen as extending this approach to handle state uncertainty as well. We will present the necessary theoretical background for \(\epsilon\)-shadow-based trajectory optimization in Section IV.
## III Problem statement
We consider the problem of trajectory optimization over a fixed, finite horizon \(T\): finding a sequence of nominal states \(\bar{q}_{0},\bar{q}_{1},\ldots,\bar{q}_{T}\in\mathbb{R}^{n}\) that navigate between starting and final configurations \(\mathcal{Q}_{start}\) and \(\mathcal{Q}_{final}\) while limiting the risk of collision incurred during the motion. To model state uncertainty, we assume that our decision variables specify the nominal path \(\bar{q}=\left[\bar{q}_{0},\bar{q}_{1},\ldots,\bar{q}_{T}\right]\) and that at execution time the robot will follow some realization of this trajectory \(q\) drawn from a multivariate Gaussian distribution about the nominal \(\bar{q}\). That is, \(q\sim\mathcal{N}(\bar{q},\Sigma_{q})\). For notational convenience, we consider the distribution of the full trajectory (the concatenation of the state at each waypoint), which allows us to easily consider cases when the tracking error \(q-\bar{q}\) is correlated across time, as might be the case when employing linear quadratic Gaussian (LQG) control [15]. We assume that an estimate of the tracking error covariance \(\Sigma_{q}\) is known a-priori, as is the case when the system is controlled using LQG.
We model the environment \(E\) as a set of convex obstacles \(\mathcal{O}\); to represent environmental uncertainty we assume that each ground truth obstacle \(\mathcal{O}_{i}\) is offset from a known, nominal obstacle \(O_{i}\) by some uncertain translation \(d\sim\mathcal{N}(0,\Sigma_{O})\). To model collisions, we consider the signed distance function \(\text{sd}_{\mathcal{O}}(q)\), which returns the minimum distance from the robot to an obstacle \(\mathcal{O}\in E\). By definition, \(\text{sd}_{\mathcal{O}}(q)>0\) if the robot is not in collision with \(\mathcal{O}\), and \(\text{sd}_{\mathcal{O}}(q)<0\) if the robot is in collision (in this case, the signed distance is equal to the distance by which the robot penetrates the obstacle, expressed as a negative quantity).
Given a user-specified limit on the overall risk of collision \(\Delta\), an arbitrary convex cost function \(f\) (e.g. a quadratic cost on displacement between timesteps), and a set of arbitrary inequality constraints \(g_{i}\) (e.g. enforcing joint limits or plant dynamics), the chance-constrained trajectory optimization problem is:
\[\min_{\bar{q}_{0},\bar{q}_{1},\ldots,\bar{q}_{T}} f(\bar{q}_{0},\bar{q}_{1},\ldots,\bar{q}_{T})\] (ccNLP-1) s.t. \[\bar{q}_{0}\in\mathcal{Q}_{start};\ \bar{q}_{T}\in\mathcal{Q}_{final} \tag{1}\] \[\text{Pr}_{q\sim\mathcal{N}(\bar{q},\Sigma_{q});\ \mathcal{O}=O+d,\ d\sim\mathcal{N}(0,\Sigma_{O})}\left(\bigwedge_{0\leq t\leq T}\bigwedge_{\mathcal{O}\in E}\text{sd}_{\mathcal{O}}(q_{t})\geq 0\right)\geq 1-\Delta \tag{2}\] \[g_{i}(\bar{q}_{0},\bar{q}_{1},\ldots,\bar{q}_{T})\leq 0;\quad i\in\mathcal{I} \tag{3}\]
When the robot and environment are both modeled as collections of convex shapes, the signed distance can be computed easily using standard computational geometry algorithms [5]. Unfortunately, the probability of the signed distance dropping below zero, i.e. the probability of collision which we wish to constrain in (2), is not available in closed form. Instead, this probability must be estimated in order to yield a tractable optimization problem. In Section IV, we will discuss how this probability can be estimated in the presence of environmental uncertainty alone (drawing on previous work in this area), and in Section V we make our main contribution by expanding our view to develop an optimization-based algorithm that efficiently manages risk due to both environmental uncertainty and tracking error. Finally, in Section VI, we will present empirical results that demonstrate how considering both types of uncertainty provides a significantly higher level of safety than considering environmental uncertainty alone.
## IV Safety in uncertain environments
The probability of collision constraining problem (ccNLP-1) in (2) captures the risk stemming from uncertainty both in state tracking error \(q\sim\mathcal{N}(\bar{q},\Sigma_{q})\) and in the locations of obstacles in the environment \(\mathcal{O}=O+d;\ d\sim\mathcal{N}(0,\Sigma_{O})\). A sound approach to chance-constrained motion planning must consider both sources of uncertainty, but it is conceptually more straightforward to first isolate the effects of environmental uncertainty (this section) then expand our approach to consider state uncertainty (Section V).
Considering only environmental uncertainty (i.e. assuming that \(q=\bar{q}\)), the probability in constraint (2) can be rewritten using Boole's inequality:
\[\text{Pr}_{d\sim\mathcal{N}(0,\Sigma_{O})}\left(\bigwedge_{0\leq t \leq T}\bigwedge_{\mathcal{O}\in E}\text{sd}_{\mathcal{O}}(q_{t})\geq 0\right) \geq 1-\Delta \tag{4}\] \[\text{Pr}_{d\sim\mathcal{N}(0,\Sigma_{O})}\left(\bigvee_{0\leq t \leq T}\bigvee_{\mathcal{O}\in E}\text{sd}_{\mathcal{O}}(q_{t})\leq 0\right) \leq\Delta\] (5) \[\sum_{0\leq t\leq T}\sum_{\mathcal{O}\in E}\text{Pr}_{d\sim \mathcal{N}(0,\Sigma_{O})}\left(\text{sd}_{\mathcal{O}}(q_{t})\leq 0\right) \leq\Delta \tag{6}\]
This reduces the problem of evaluating the risk of collision across the entire trajectory to the simpler problem of bounding the risk of collision between in a specific configuration \(q_{t}\). To bound this probability, we follow [1] and more recently [7] in using \(\epsilon\)-shadows to estimate an upper bound on this risk.
**Definition 1** (\(\epsilon\)-shadow): _A set \(\mathcal{S}\subset\mathbb{R}^{3}\) is a maximal \(\epsilon\)-shadow of an uncertain obstacle \(\mathcal{O}\) if the probability \(\text{Pr}\left(\mathcal{O}\subseteq S\right)=1-\epsilon\)._
Intuitively, an \(\epsilon\)-shadow is a geometric object (often an enlarged version of the nominal obstacle) that contains its associated uncertain obstacle with probability \(1-\epsilon\). As a result, the \(\epsilon\)-shadow acts as a mathematically rigorous safety buffer: if the robot avoids collision with the \(\epsilon\)-shadow then it necessarily limits the risk of collision with the obstacle itself to less than \(\epsilon\). Consequently, we can simplify the problem of computing an upper bound on the risk of collision between the robot and an obstacle to simply finding the smallest \(\epsilon\) such that there is no collision between the robot and the corresponding \(\epsilon\)-shadow.
The process of finding these \(\epsilon\)-shadows is summarized in Figure 1. Essentially, to construct the shadow at a given risk level \(\epsilon\), we take the Minkowski sum of the nominal obstacle geometry with an ellipsoid: \(\mathcal{S}_{\epsilon}=O\bigoplus\mathcal{D}_{\epsilon}\), where \(\mathcal{D}_{\epsilon}=\left\{x\ :\ x^{T}\Sigma_{O}^{-1}x\leq\phi^{-1}(1- \epsilon)\right\}\), \(\Sigma_{O}\) is the covariance matrix for the uncertainty in the obstacle's location, and \(\phi^{-1}\) is the inverse cumulative distribution function of the \(\chi^{2}\) distribution with 3 degrees of freedom. Intuitively, the boundary of \(\mathcal{D}_{\epsilon}\) is an isoprobability surface (at probability \(1-\epsilon\)) of the uncertain translation of the true obstacle \(\mathcal{O}\), and so the sum \(\mathcal{S}_{\epsilon}=O\bigoplus\mathcal{D}_{\epsilon}\) contains all possible translations of \(O\) with probability \(1-\epsilon\) as well. Our focus in this paper is not to reproduce the proofs of the correctness of \(\epsilon\)-shadow risk estimates (those proofs can be found in [7]); instead, we will focus in Section V on how these \(\epsilon\)-shadows can be combined with a robust non-convex optimization strategy to plan safe trajectories in the presence of both environmental uncertainty and tracking error.
These \(\epsilon\)-shadows have two important properties. First, if the underlying geometry \(O\) is convex, then the shadow \(\mathcal{S}_{\epsilon}\) is also convex (since ellipsoids are convex and the Minkowski sum of two convex shapes is also convex). Second, modern computational geometry libraries allow the Minkowski sum of two convex shapes to be represented implicitly via their support vectors [5]. These two properties mean that we can construct an \(\epsilon\)-shadow and check for collision between the shadow and the robot very quickly (the time complexity is linear in the number of vertices in the obstacle and robot geometry [8]). Furthermore, beyond simply checking whether the robot is safe at any fixed risk level, Dawson _et al._ demonstrate that a simple line search algorithm can be used to compute the _largest_ convex \(\epsilon\)-shadow that does not collide with the robot in state \(q_{t}\)[7]. This largest \(\epsilon\)-shadow corresponds to the smallest \(\epsilon\), i.e. the least upper bound on the risk of collision between the robot and the obstacle at that state. Computing this tight upper bound on collision probability, which we denote as \(\epsilon_{\mathcal{O}}(q_{t})\) (with respect to one obstacle \(O\) and a particular robot configuration \(q_{t}\)), can be done on the order of \(100\,\mu\)s. Using this estimate, we can provide a deterministic inner approximation of the probabilistic constraint (6):
Fig. 1: (a) \(\epsilon\)-shadows are used to compute an upper bound on the risk of collision between an obstacle \(O\) and the robot in some configuration. (b) We assume that the nominal geometry and location of the obstacle is known, but that the true location of the obstacle is uncertain. (c) Given some \(\epsilon\), we construct an \(\epsilon\)-shadow by asymmetrically expanding the nominal geometry \(O\) in the directions in which the obstacle position is most uncertain; the true obstacle \(\mathcal{O}\) is guaranteed to lie within the shadow \(\mathcal{S}_{\epsilon}\) with probability \(1-\epsilon\). (d) The \(\epsilon\)-shadow is convex and can be checked for collision with the robot in linear time, so we apply a bisection line search in \(\epsilon\) to efficiently find the smallest \(\epsilon\) that upper bounds the risk of collision with the obstacle (corresponding to the largest \(\epsilon\)-shadow that does not intersect the robot).
\[\sum_{0\leq t\leq T}\sum_{\mathcal{O}\in E}\epsilon_{\mathcal{O}}(q_{t})\leq\Delta \tag{7}\]
The algorithm presented in [7] also provides the gradient of the risk estimate \(\nabla_{q_{t}}\epsilon_{\mathcal{O}}(q_{t})\) with very little overhead. Dawson _et al._ use this gradient to develop a gradient-based chance-constrained trajectory optimization algorithm (considering only uncertainty in the environment). In the following section, we will show how this gradient can also be used in a robust optimization algorithm that considers state uncertainty (i.e. tracking error) in addition to environmental uncertainty, substantially improving the safety of the optimized trajectories in representative scenarios.
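As a concrete illustration of the shadow construction and line search described above (not part of the original implementation), the bound \(\epsilon_{\mathcal{O}}(q_{t})\) could be computed along the following lines. Here `shadow_collides` is a hypothetical black-box predicate, e.g. a GJK query against the Minkowski sum of the nominal obstacle and the scaled uncertainty ellipsoid:

```python
import numpy as np
from scipy.stats import chi2

def shadow_scale(eps, dof=3):
    """Squared Mahalanobis radius of the ellipsoid D_eps, so that a zero-mean
    Gaussian offset with covariance Sigma_O lies inside with probability 1 - eps."""
    return chi2.ppf(1.0 - eps, df=dof)

def epsilon_shadow_bound(q_t, obstacle, Sigma_O, shadow_collides,
                         eps_lo=1e-6, eps_hi=1.0, tol=1e-4):
    """Bisection search for the least upper bound eps_O(q_t) on collision risk."""
    if not shadow_collides(q_t, obstacle, Sigma_O, shadow_scale(eps_lo)):
        return eps_lo                        # even a near-certain shadow is avoided
    while eps_hi - eps_lo > tol:
        eps_mid = 0.5 * (eps_lo + eps_hi)
        if shadow_collides(q_t, obstacle, Sigma_O, shadow_scale(eps_mid)):
            eps_lo = eps_mid                 # shadow still hits the robot: bound is larger
        else:
            eps_hi = eps_mid                 # shadow avoided: try a larger shadow / smaller eps
    return eps_hi
```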
## V Trajectory optimization with uncertain state
In the previous section, we saw how \(\epsilon\)-shadows can be used to find nominal trajectories with some level of safety in the face of environmental uncertainty. However, due to state uncertainty and tracking error, it is unlikely that these nominal trajectories will remain safe when executed. In this section, we will develop a robust, tractable approximation to the chance-constrained trajectory optimization problem (ccNLP-1) that considers both environmental uncertainty and tracking error.
Before presenting our robust optimization approach, it is helpful to define some vocabulary to speak about safety under these two different types of uncertainty.
**Definition 2** (\(\delta\)-safety): _A trajectory \(q\) is said to be \(\boldsymbol{\delta}\)**-safe** if the probability of the trajectory colliding with any uncertain obstacle is no greater than \(\delta\). That is, the trajectory is \(\delta\)-safe if \(\Pr\left(\bigwedge_{0\leq t\leq T}\bigwedge_{\mathcal{O}\in E}\text{sd}_{\mathcal{O}}(q_{t})\geq 0\right)\geq 1-\delta\)._
In this conception, \(\delta\)-safety can be a property of either a nominal trajectory \(\bar{q}\) or a specific execution of that trajectory \(q\sim\mathcal{N}(\bar{q},\Sigma_{q})\). The \(\epsilon\)-shadow approach presented in Section IV is concerned with finding \(\delta\)-safe nominal trajectories when \(q=\bar{q}\). It should be clear that \(\sum_{0\leq t\leq T}\sum_{\mathcal{O}\in E}\epsilon_{\mathcal{O}}(\bar{q}_{t}) \leq\delta\) is a sufficient condition for a nominal trajectory to be \(\delta\)-safe.
The next definition considers what happens to \(\delta\)-safe nominal trajectories when they are executed subject to Gaussian tracking error.
**Definition 3** (\(\gamma\)-robustness): _If a nominal trajectory is \(\delta\)-safe, then we say that it is also \(\boldsymbol{\gamma}\)**-robust** if the probability that an execution \(q\sim\mathcal{N}(\bar{q},\Sigma_{q})\) is also \(\delta\)-safe is at least \(1-\gamma\)._
In other words, a nominal trajectory is \(\gamma\)-robust if it is \(\delta\)-safe and also likely (with high probability) to yield executions that are also \(\delta\)-safe. If a nominal trajectory \(\bar{q}\) is \(\gamma\)-robust, then there is at most probability \(\gamma\) that the robot will (at execution time) incur a collision risk greater than \(\delta\).
By leveraging the language of \(\epsilon\)-shadows from Section IV, we see that a sufficient condition for \(\gamma\)-robustness is
\[\Pr\left(\sum_{0\leq t\leq T}\sum_{\mathcal{O}\in E}\epsilon_{\mathcal{O}}(q_ {t})\leq\delta\ \Bigg{|}\ q\sim\mathcal{N}(\bar{q},\Sigma_{q})\right)\geq 1-\gamma \tag{8}\]
We can also upper-bound the total risk of collision while executing a \(\gamma\)-robust, \(\delta\)-safe trajectory:
\[\Pr\left(\text{collision}\right) =1-\Pr\left(\neg\text{ collision}\right) \tag{9}\] \[\leq 1-\Pr\left(\delta\text{-safe}\right)\Pr\left(\neg\text{ collision}\mid\delta\text{-safe}\right)\] (10) \[\leq 1-(1-\gamma)(1-\delta)\] (11) \[\leq\gamma+\delta \tag{12}\]
Thus, if the user-specified risk tolerance is \(\Delta\), then the constraint \(\gamma+\delta\leq\Delta\) is sufficient to ensure that a \(\gamma\)-robust, \(\delta\)-safe trajectory satisfies the user's risk tolerance.
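As a sanity check (not part of the planner itself), condition (8) can be estimated empirically by sampling executions of a nominal trajectory; `total_epsilon_risk` is a hypothetical helper returning \(\sum_{t}\sum_{\mathcal{O}}\epsilon_{\mathcal{O}}(q_{t})\) for a stacked trajectory vector:

```python
import numpy as np

def estimate_gamma(q_bar, Sigma_q, total_epsilon_risk, delta, n_samples=1000, seed=0):
    """Fraction of sampled executions whose total risk bound exceeds delta
    (an empirical estimate of gamma for the nominal trajectory q_bar)."""
    rng = np.random.default_rng(seed)
    executions = rng.multivariate_normal(q_bar, Sigma_q, size=n_samples)
    violations = sum(total_epsilon_risk(q) > delta for q in executions)
    return violations / n_samples
```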
By combining these sufficient conditions, we can write a conservative approximation of our original chance-constrained optimization problem (ccNLP-1). In this approximation, we incorporate the parameters \(\gamma\) and \(\delta\) as decision variables, allowing the optimization program to intelligently allocate the overall risk budget between environmental risk (in \(\delta\)) and tracking error risk (in \(\gamma\)).
\[\min_{\bar{q}_{0},\bar{q}_{1},\ldots,\bar{q}_{T};\ \delta,\gamma} f(\bar{q}_{0},\bar{q}_{1},\ldots,\bar{q}_{T})\] (ccNLP-2) s.t. \[\bar{q}_{0}\in\mathcal{Q}_{start};\ \bar{q}_{T}\in\mathcal{Q}_{ final} \tag{13}\] \[\Pr_{q\sim\mathcal{N}(\bar{q},\Sigma_{q})}\left(\sum_{0\leq t \leq T}\sum_{\mathcal{O}\in E}\epsilon_{\mathcal{O}}(q_{t})\leq\delta\right) \geq 1-\gamma\] (14) \[\gamma+\delta\leq\Delta\] (15) \[g_{i}(\bar{q}_{0},\bar{q}_{1},\ldots,\bar{q}_{T})\leq 0;\quad i \in\mathcal{I} \tag{16}\]
In order to solve problem (ccNLP-2), we need to replace the probabilistic constraint (14) with a deterministic constraint. To do this, we draw inspiration both from sequential convex optimization (SCO [14]), which solves non-convex optimization problems by repeatedly solving a convex approximation, and from convex risk allocation (CRA [2]), which reduces linear chance constraints to deterministic convex constraints.
In particular, we use the gradient of \(\epsilon\)-shadow risk estimates to replace the nonlinear chance constraint (14) with an approximate linear chance constraint, which we then reduce to a deterministic constraint using techniques from chance-constrained linear programming. This process begins by linearizing the chance constraint (14) about the nominal trajectory \(\bar{q}\), which reduces it to a linear chance constraint on a single Gaussian variable. Let \(e_{t}\) denote the tracking error
\(q_{t}-\bar{q}_{t}\), drawn from a joint Gaussian distribution \(e\sim\mathcal{N}(0,\Sigma_{q})\):
\[\text{Pr}\left(\sum_{0\leq t\leq T}\sum_{\mathcal{O}\in E}\epsilon_ {\mathcal{O}}(q_{t})\leq\delta\right) \tag{17}\] \[\approx\text{Pr}\left(\sum_{0\leq t\leq T}\sum_{\mathcal{O}\in E }\epsilon_{\mathcal{O}}(\bar{q}_{t})+\nabla\epsilon_{\mathcal{O}}(\bar{q}_{t}) e_{t}\leq\delta\right)\] (18) \[=\text{Pr}\left(\sum_{0\leq t\leq T}\sum_{\mathcal{O}\in E}\nabla \epsilon_{\mathcal{O}}(\bar{q}_{t})e_{t}\leq\delta-\sum_{0\leq t\leq T}\sum_{ \mathcal{O}\in E}\epsilon_{\mathcal{O}}(\bar{q}_{t})\right)\] (19) \[=\text{Pr}\left(z\leq\delta-\sum_{0\leq t\leq T}\sum_{\mathcal{O} \in E}\epsilon_{\mathcal{O}}(\bar{q}_{t})\right) \tag{20}\]
where \(z\) is a scalar Gaussian random variable given by \(z=\sum_{0\leq t\leq T}\sum_{\mathcal{O}\in E}\nabla\epsilon_{\mathcal{O}}( \bar{q}_{t})e_{t}\sim\mathcal{N}(0,R^{T}\Sigma_{q}R)\). The variance of \(z\) is determined by the gradient of the total risk \(\sum_{0\leq t\leq T}\sum_{\mathcal{O}\in E}\epsilon_{\mathcal{O}}(q_{t})\) with respect to the trajectory \(q\), which we denote as \(R^{T}=\left[\sum_{\mathcal{O}\in E}\nabla\epsilon_{\mathcal{O}}(\bar{q}_{0}),\sum_{\mathcal{O}\in E}\nabla\epsilon_{\mathcal{O}}(\bar{q}_{1}),\ldots, \sum_{\mathcal{O}\in E}\nabla\epsilon_{\mathcal{O}}(\bar{q}_{T})\right]\) (for notational convenience, we concatenate the gradient terms for each timestep \(q_{t}\)).
After making this approximation, we are left with a linear chance constraint on a single Gaussian random variable. Conveniently, this probability is given exactly by the CDF of the Gaussian:
\[\text{Pr}\left(z\leq\delta-\sum_{0\leq t\leq T}\sum_{\mathcal{O} \in E}\epsilon_{\mathcal{O}}(\bar{q}_{t})\right)\] \[\quad=CDF_{\mathcal{N}(0,R^{T}\Sigma_{q}R)}\left(\delta-\sum_{0 \leq t\leq T}\sum_{\mathcal{O}_{i}\in E}\epsilon_{\mathcal{O}_{i}}(\bar{q}_{ t})\right) \tag{21}\]
This simplification allows us to write a tractable approximation to (ccNLP-2):
\[\min_{\bar{q}_{0},\bar{q}_{1},\ldots,\bar{q}_{T};~{}\delta,\gamma} f(\bar{q}_{0},\bar{q}_{1},\ldots,\bar{q}_{T})\] (ccNLP-3) s.t. \[\bar{q}_{0}\in\mathcal{Q}_{start};~{}\bar{q}_{T}\in\mathcal{Q}_{final} \tag{22}\] \[CDF_{\mathcal{N}(0,R^{T}\Sigma_{q}R)}\left(\delta-\sum_{0\leq t\leq T}\sum_{\mathcal{O}\in E}\epsilon_{\mathcal{O}}(\bar{q}_{t})\right)\geq 1-\gamma \tag{23}\] \[\gamma+\delta\leq\Delta \tag{24}\] \[g_{i}(\bar{q}_{0},\bar{q}_{1},\ldots,\bar{q}_{T})\leq 0;\quad i\in\mathcal{I} \tag{25}\]
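A minimal sketch of how the deterministic surrogate constraint (23) could be evaluated at a given nominal trajectory, assuming the per-timestep risk bounds and their gradients are available from the \(\epsilon\)-shadow routine (variable names are illustrative, not taken from the original code):

```python
import numpy as np
from scipy.stats import norm

def chance_constraint_holds(risk_grads, risks, Sigma_q, delta, gamma):
    """Check the linearized chance constraint (23) at the current nominal trajectory.

    risk_grads : per-timestep gradients sum_O grad eps_O(q_bar_t), one vector per t
    risks      : per-timestep risk bounds sum_O eps_O(q_bar_t)
    Sigma_q    : covariance of the stacked tracking error e = q - q_bar
    """
    R = np.concatenate(risk_grads)          # gradient of the total risk w.r.t. the trajectory
    var_z = float(R @ Sigma_q @ R)          # variance of z = R^T e
    margin = delta - float(np.sum(risks))
    prob = norm.cdf(margin / np.sqrt(var_z)) if var_z > 0 else float(margin >= 0.0)
    return prob >= 1.0 - gamma
```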
In this example, we use a fixed-time-of-arrival direct transcription optimization scheme, with decision variables for \(q_{t}\) and \(u_{t}\) at each timestep (as well as additional decision variables \(\gamma\) and \(\delta\) for the risk allocation). We use \(16\) time steps of \(0.625\) seconds each, and the objective is the sum-squared displacement along the trajectory \(0.5\sum_{t=1}^{T}||q_{t}-q_{t+1}||^{2}\). The overall risk constraint is set to be \(0.2\) (this is limited by the p-Chekov algorithms failing to converge for smaller risk constraints; our proposed algorithm successfully planned trajectories for smaller risk bounds).
The results of running each planner on this scenario are shown in Table I. To estimate the true risk of collision for each planned trajectory, we use linear interpolation to up-sample the discrete trajectories to include 100 total waypoints (to check for collision between the optimized waypoints), then ran 1000 independent simulations to check for collisions under environmental uncertainty and tracking error. Runtimes were estimated using an average across 100 executions.
From the results in Table I, we see that SCORA is able to achieve a significantly higher degree of safety than either p-Chekov (which considers tracking error alone) or \(\epsilon\)-opt (which considers environmental error alone). This demonstrates that considering both tracking error and environmental uncertainty yields a safety benefit beyond that provided by considering either factor alone. It is also interesting to note that comparing the performance of \(\epsilon\)-opt and p-Chekov suggests that environmental uncertainty has a greater impact on safety than tracking error in this example, since ignoring the effects of environmental uncertainty (as p-Chekov does) results in a higher risk of collision than ignoring tracking error effects (as \(\epsilon\)-opt does).
We also see that our planner significantly outperforms the extended p-Chekov planner in terms of safety and run-time. This is likely due to the fact that p-Chekov employs a Gauss-Hermite quadrature sampling strategy with only three sampling points to estimate the risk of collision, and this sampling strategy is prone to dramatically underestimating the true risk of collision (increasing the number of sample points would reduce this error at the cost of runtime; we follow the reference implementation in using only three points). As a result, both the original and the extended p-Chekov planners report successfully satisfying the chance constraint when a full Monte Carlo analysis shows the true risk of collision to be unacceptably high. The extent of this overestimate decreases as the magnitude of the state uncertainty decreases, but it nevertheless negatively impacts the performance of these state-of-the-art planners relative to our proposed SCORA algorithm.
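The underestimation effect can be illustrated with a one-dimensional toy example (this is not the p-Chekov implementation, only the mechanism): a 3-point Gauss-Hermite rule places its outermost nodes at \(\pm\sqrt{3}\), so it assigns zero probability to any tail event beyond that range.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss   # probabilists' Hermite quadrature
from scipy.stats import norm

threshold = 2.0                              # "collision" whenever x > threshold, x ~ N(0, 1)
exact = 1.0 - norm.cdf(threshold)            # ~0.0228

nodes, weights = hermegauss(3)               # 3-point rule: nodes 0, +/- sqrt(3)
weights = weights / weights.sum()            # normalize to a probability measure
estimate = float(np.sum(weights * (nodes > threshold)))

print(f"exact tail probability: {exact:.4f}, 3-point Gauss-Hermite estimate: {estimate:.4f}")
```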
Videos of the planned trajectories are included in the supplementary material.
### _10-DOF mobile manipulator_
To demonstrate the scalability of our proposed motion planning algorithm, we consider the mobile-manipulator navigation problem shown in Figure 3. In this scenario, the mobile arm is tasked with navigating around two uncertain obstacles. Note that although the obstacle geometry is relatively simple, the configuration space of this trajectory planning problem is high-dimensional and complex. The location of each obstacle in this problem is subject to additive Gaussian uncertainty with covariance
\[\Sigma_{O}=\begin{bmatrix}0.06&0.05&0.0\\ 0.05&0.06&0.0\\ 0.0&0.0&0.01\end{bmatrix} \tag{31}\]
The uncertainty in the state of the robot in this case is dominated by uncertainty in the pose of the mobile base, with tracking error covariance:
\[\Sigma_{q}=\begin{bmatrix}0.01I_{2\times 2}&0&0\\ 0&0.05&0\\ 0&0&0.005I_{7\times 7}\end{bmatrix} \tag{32}\]
In addition to demonstrating SCORA's scalability for high-degree-of-freedom planning problems, this scenario allows us to demonstrate how the SCORA framework can propagate uncertainty in the pose of the base link to manage collision risk at a distal link.
For these experiments, we use a planning horizon \(T=10\) (with timestep \(d_{t}=0.2\)) and a joint collision chance constraint of \(\Delta=0.05\). In addition, we enforce unicycle dynamics on
Fig. 2: The parallel parking task. The ego vehicle (dark brown) must park between two stationary cars (light brown) without running onto the curb (dark brown).
the mobile base:
\[x_{t+1} =x_{t}+v_{t}\cos\theta_{t}d_{t} \tag{33}\] \[y_{t+1} =y_{t}+v_{t}\sin\theta_{t}d_{t} \tag{34}\]
where \(v_{t}\) is the forward velocity at each step, which we constrain \(|v_{t}|\leq 1.0\). The decision variables are the velocity \(v_{t}\) and state \(q_{t}=[x,y,\theta,q_{1},q_{2},q_{3},q_{4},q_{5},q_{6},q_{7}]_{t}\) at each timestep (in addition to the risk allocation variables). The objective used in this case is the sum-squared displacement along the trajectory \(0.5\sum_{t=1}^{T}||q_{t}-q_{t+1}||^{2}\).
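A minimal sketch of the discretized unicycle model (33)-(34), with the velocity bound enforced by clipping; the heading sequence is treated as given here, whereas in the optimization it is a decision variable:

```python
import numpy as np

def rollout_base(x0, y0, theta, v, dt=0.2):
    """Forward-simulate the mobile base under the discretized unicycle model (33)-(34).

    `theta` is the heading at each step and `v` the forward velocity sequence;
    velocities are clipped to the bound |v_t| <= 1.
    """
    v = np.clip(np.asarray(v, dtype=float), -1.0, 1.0)
    xs, ys = [x0], [y0]
    for t, vt in enumerate(v):
        xs.append(xs[-1] + vt * np.cos(theta[t]) * dt)
        ys.append(ys[-1] + vt * np.sin(theta[t]) * dt)
    return np.array(xs), np.array(ys)
```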
The results of solving this planning problem with each planner are shown in Table II. In this experiment, neither p-Chekov planner returned a solution in less than \(200\) seconds. This is likely because, even with only three sampling points, p-Chekov's Gauss-Hermite quadrature sampling scheme requires \(O(3^{n})\) collision checks for each outer-loop risk allocation (where \(n=10\) is the number of degrees of freedom in the problem), which is prohibitive in this mobile manipulator scenario. For contrast, both \(\epsilon\)-opt and SCORA avoid this exponential complexity by using \(\epsilon\)-shadows to upper bound collision risk instead of relying on sampling.
Although the \(\epsilon\)-opt planner (which only considers external uncertainty) finds a trajectory that is very close to satisfying the chance constraint, we see that SCORA is able to find a solution that is nearly an order of magnitude safer with only a minor (\(0.12\%\)) increase in the objective. These results show that even in situations where the chance constraint can be satisfied by considering environmental uncertainty alone, it is possible to dramatically increase the safety of planned trajectories -- with relatively little trade-off to the solution cost -- simply by considering tracking error at the same time as environmental uncertainty.
Finally, we note that the absence of strongly nonlinear dynamics in this example greatly simplifies the planning problem, allowing SCORA to run in significantly less time than in the previous example. In fact, in this case SCORA runs only \(85\%\) more slowly than \(\epsilon\)-opt, and both easily clear the approximately \(1\) second threshold for real-time path planning.
Videos of the planned trajectories are included in the supplementary material.
## VII Conclusion
In this paper, we present a risk-aware trajectory optimization algorithm, SCORA, with three core capabilities:
1. The ability to manage risk due to uncertainty in obstacles' locations,
2. The ability to manage risk due to tracking error and other uncertainty in the robot's own state, and
3. Support for high-dimensional planning problems with non-trivial geometry.
In addition, SCORA enables the user to include arbitrary objectives and constraints, allowing it to solve trajectory optimization problems with nonlinear dynamics. Through experiments in simulation, we demonstrate that this algorithm provides significant benefits over state-of-the-art planners in capability, planning time, and safety.
Additionally, our simulation results demonstrate the importance of considering both environmental uncertainty and tracking error. In one scenario, considering both sources of uncertainty yields significant safety benefits over considering environmental uncertainty alone (with only a minimal tradeoff in trajectory cost). In another scenario, only the SCORA planner (which considers both sources of uncertainty) was able to find trajectories satisfying the chance constraint.
## Acknowledgments
This work was sponsored by Airbus SE.
Fig. 3: A high-dimensional mobile manipulator navigating around two uncertain obstacles. | ロボティクスにおける多くの実用的な応用は、不確実性にもかかわらず、安全に動作できるシステムを必要とする。動き計画の分野において、安全なロボット軌跡を計画する際に、特に重要な2つの不確実性がある。第一は環境不確実性 - 近傍の障害物の位置が不確実性であり、これはセンサーノイズまたは(障害物の将来の位置の場合)予測誤差によるものである。第二の不確実性クラスは、ロボット自身の状態に関する不確実性であり、これはトラッキングまたは推定誤差によって引き起こされる。高いレベルの安全性を実現するためには、ロボットはこれらの不確実性の両方を考慮する必要がある。本論文では、環境不確実性とトラッキングエラーを考慮しながら、リスクを制限した軌跡最適化アルゴリズムである「SEQ-Convex Optimization with Risk Optimization (SCORA)」を提案する。シミュレーションの実験を通して、 |
2303.17777 | $α$ + $^{92}$Zr cluster structure in $^{96}$Mo | In the evaluation of the half-life of the neutrinoless double-$\beta$ decay
($0\nu\beta\beta$) of a doubly closed-subshell nucleus $^{96}$Zr, the structure
of the nucleus $^{96}$Mo is essentially important. The $\alpha$-clustering
aspects of $^{96}$Mo are investigated for the first time. By studying the
nuclear rainbows in $\alpha$ scattering from $^{92}$Zr at high energies and the
characteristic structure of the excitation functions at the extreme backward
angle at the low-energy region, the interaction potential between the $\alpha$
particle and the $^{92}$Zr nucleus is determined well in the double folding
model. The validity of the double folding model was reinforced by studying
$\alpha$ scattering from neighboring nuclei $^{90}$Zr, $^{91}$Zr, and
$^{94}$Zr. The double-folding-model calculations reproduced well all the
observed angular distributions over a wide range of incident energies and the
characteristic excitation functions. By using the obtained potential the
$\alpha$ +$^{92}$Zr cluster structure of $^{96}$Mo is investigated in the
spirit of a unified description of scattering and structure. The existence of
the second-higher nodal band states with the $\alpha$+ $^{92}$Zr cluster
structure, in which two more nodes are excited in the relative motion compared
with the ground band, is demonstrated. The calculation reproduces well the
ground-band states of $^{96}$Mo in agreement with experiment. The experimental
$B(E2)$ value of the transition in the ground band is also reproduced well. The
effect of $\alpha$ clustering in $^{96}$Mo on the half-life of the
$0\nu\beta\beta$ double-$\beta$ decay of $^{96}$Zr is discussed. | S. Ohkubo, Y. Hirabayashi | 2023-03-31T02:49:02 | http://arxiv.org/abs/2303.17777v1 | # \(\alpha\) + \({}^{92}\)Zr cluster structure in \({}^{96}\)Mo
###### Abstract
In the evaluation of the half-life of the neutrinoless double-\(\beta\) decay (\(0\nu\beta\beta\)) of a doubly closed-subshell nucleus \({}^{96}\)Zr, the structure of the nucleus \({}^{96}\)Mo is essentially important. The \(\alpha\)-clustering aspects of \({}^{96}\)Mo are investigated for the first time. By studying the nuclear rainbows in \(\alpha\) scattering from \({}^{92}\)Zr at high energies and the characteristic structure of the excitation functions at the extreme backward angle at the low-energy region, the interaction potential between the \(\alpha\) particle and the \({}^{92}\)Zr nucleus is determined well in the double folding model. The validity of the double folding model was reinforced by studying \(\alpha\) scattering from neighboring nuclei \({}^{90}\)Zr, \({}^{91}\)Zr, and \({}^{94}\)Zr. The double-folding-model calculations reproduced well all the observed angular distributions over a wide range of incident energies and the characteristic excitation functions. By using the obtained potential the \(\alpha\) +\({}^{92}\)Zr cluster structure of \({}^{96}\)Mo is investigated in the spirit of a unified description of scattering and structure. The existence of the second-higher nodal band states with the \(\alpha\)+\({}^{92}\)Zr cluster structure, in which two more nodes are excited in the relative motion compared with the ground band, is demonstrated. The calculation reproduces well the ground-band states of \({}^{96}\)Mo in agreement with experiment. The experimental \(B(E2)\) value of the transition in the ground band is also reproduced well. The effect of \(\alpha\) clustering in \({}^{96}\)Mo on the half-life of the \(0\nu\beta\beta\) double-\(\beta\) decay of \({}^{96}\)Zr is discussed.
## I Introduction
The observation of neutrinoless double-\(\beta\) decay, \(0\nu\beta\beta\), which violates lepton number conservation, is expected to serve to shed light on the fundamental questions beyond the standard model, such as determining the nature of neutrino, Dirac, or Majorana particles. Since supersymmetric particles have not been observed in Large Hadron Collider experiments, much more attention than ever has been paid to study of \(0\nu\beta\beta\)[1; 2; 3]. The inverse half-life of \(0\nu\beta\beta\) is given by \([T_{1/2}^{0\nu}]^{-1}=G_{0\nu}|<m_{\beta\beta}>/m_{e}|^{2}\,|M^{0\nu}|^{2}\), where \(<m_{\beta\beta}>\) is the effective Majorana neutrino mass, \(m_{e}\) is the electron mass, and \(G_{0\nu}\sim 10^{-14}\) yr\({}^{-1}\) is a phase-space factor. For the evaluation of the nuclear matrix element (NME) of the transition \(M^{0\nu}\)[4; 5; 6; 7; 8; 9], it is essential to know the ground-state wave functions of the initial- and final- state nuclei.
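For orientation, the inverse half-life expression can be evaluated for illustrative parameter values; the Majorana mass and NME below are placeholders for the sake of the arithmetic, not values derived in this work:

```python
# Illustrative evaluation of [T_1/2^{0nu}]^{-1} = G_0nu * |<m_bb>/m_e|^2 * |M^{0nu}|^2
G_0nu = 1.0e-14      # phase-space factor, yr^-1 (order of magnitude quoted above)
m_bb  = 0.05         # assumed effective Majorana neutrino mass, eV (placeholder)
m_e   = 0.511e6      # electron mass, eV
M_0nu = 3.0          # assumed nuclear matrix element (placeholder)

inverse_half_life = G_0nu * (m_bb / m_e)**2 * M_0nu**2
print(f"T_1/2 ~ {1.0 / inverse_half_life:.1e} yr")   # ~1e27 yr for these inputs
```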
Up to now, theoretical \(0\nu\beta\beta\) decay studies have been based on mean-field-type approaches such as the shell model [10; 11], _ab initio_ calculations [12; 13], the quasiparticle random phase approximation (QRPA) [14; 15; 16], the projected Hartree-Fock Bogoliubov model (PHFB) [17; 18; 19], the generator coordinate method (GCM) [20; 21; 22; 23], the energy density functional (EDF) [24; 25] and the interacting boson model (IBM) [26; 27]. No attention had been paid to the \(\alpha\) cluster structure viewpoint until the study of \({}^{48}\)Ca decay to \({}^{48}\)Ti [28]. This is probably because it has been believed intuitively that the strong spin-orbit force would break \(\alpha\) clustering, and partly because experimental data on \(\alpha\)-transfer reactions such as (\({}^{6}\)Li,d), (d,\({}^{6}\)Li), and \((p,p\alpha)\) are scarce.
\(\alpha\) cluster structure has been established in the light mass region [29; 30] and the medium-weight mass region around \({}^{44}\)Ti [31; 32; 33; 34], and has recently been extended to the \({}^{52}\)Ti region [28; 35; 36; 37; 38]. In a previous paper [28], paying attention to the \(0\nu\beta\beta\) decay of \({}^{48}\)Ca to \({}^{48}\)Ti, one of the present authors (S.O.) has shown that the ground \(0^{+}\) state of \({}^{48}\)Ti has \(\alpha\)-clustering aspects, which significantly quenches the half-life compared with conventional shell-model calculations in which excitations to several higher major shells are not considered.
In the \(0\nu\beta\beta\) of the parent nucleus \({}^{96}\)Zr [39], the structure of the ground state of the daughter nucleus \({}^{96}\)Mo, whose \(\alpha\) threshold energy 2.76 MeV is small, is crucial in evaluating the NME of \(0\nu\beta\beta\) decay transitions. The persistency of \(\alpha\) clustering in the heavier mass region around \(A=90\) has been explored for the typical nucleus \({}^{94}\)Mo with two protons and two neutrons outside the closed shell core \({}^{90}\)Zr in Refs. [40; 41; 42]. Later \(\alpha\) cluster model studies [43; 44; 45] also support \(\alpha\) clustering in the \({}^{94}\)Mo region. Recent observations of \(\alpha\) particles in the pick-up reactions \((p,p\alpha)\) in the Sn isotopes [46] seem to reinforce the importance of \(\alpha\) clustering in the heavy mass region.
The ground state of \({}^{96}\)Zr is spherical, being a doubly closed-subshell nucleus, and is analogous to the doubly closed-shell \({}^{16}\)O in light nuclei [47]. The first excited \(0^{+}\) state is considered to be a four-particle four-hole excited state analogous to the mysterious \(0^{+}\) state at 6.05 MeV in \({}^{16}\)O. Recent large-scale shell-model calculations for the Zr isotopes [48] have confirmed that the ground state of \({}^{96}\)Zr is spherical and that a shape transition to deformed occurs at \({}^{100}\)Zr as the number of excess neutrons increases. As for the structure of \({}^{96}\)Mo, studies including \(2\nu\beta\beta\) decay using QRPA [49], the phase transition from spherical \({}^{92}\)Mo to deformed toward \({}^{104}\)Mo [50], octupole collective motion [51] and shell-model structure [52] have been reported. \(0\nu\beta\beta\) of \({}^{96}\)Zr has been investigated using many models, including the QRPA, PHFB, EDF, IBM, and GCM [19]. However, no study of \(0\nu\beta\beta\) of \({}^{96}\)Zr from the viewpoint of the \(\alpha\) cluster structure of \({}^{96}\)Mo has been attempted.
The purpose of this paper is to show that \(\alpha\) clustering
persists in the ground state of \({}^{96}\)Mo by studying bound states and scattering for the \(\alpha\)+\({}^{92}\)Zr system in a unified way, and that the half-life of \(0\nu\beta\beta\) of \({}^{96}\)Zr is quenched significantly as a result. For this, using a double folding model, the interaction potential between the \(\alpha\) particle and \({}^{92}\)Zr is determined by analyzing the angular distributions of nuclear rainbows in \(\alpha\)+\({}^{92}\)Zr scattering at high energies and the backward angle anomaly (BAA), or anomalous large angle scattering (ALAS), at lower energies. The potential systematically reproduces the excitation functions with a characteristic dip at the extreme backward angles near 180\({}^{\circ}\) in the lower-energy region, not only for \(\alpha\)+\({}^{92}\)Zr scattering but also for \(\alpha\)+\({}^{90,91,94}\)Zr scattering. The existence of the second-higher nodal band states with the \(\alpha\)+\({}^{92}\)Zr cluster structure, which are responsible for the emergence of the characteristic dip in the back-angle excitation function, is shown for the first time. The ground band of \({}^{96}\)Mo is well understood in the \(\alpha\)-cluster model study using the obtained double folding potential. \(\alpha\) clustering of \({}^{96}\)Mo has a significant effect in quenching the \(0\nu\beta\beta\) decay of \({}^{96}\)Zr.
The paper is organized as follows. In Sec. II the double folding model is presented. Section III is devoted to the analysis of \(\alpha\)+\({}^{92}\)Zr scattering over a wide range of incident energies by using a double folding model. To confirm the validity of the obtained interaction potential for \(\alpha\)+\({}^{92}\)Zr, \(\alpha\) scattering from neighboring nuclei \({}^{90,91,94}\)Zr is also investigated. In Sec. IV the origin of the characteristic dip in the back-angle excitation function in \(\alpha\)+\({}^{92}\)Zr scattering is investigated from the viewpoint of persistent existence of the \(\alpha\) cluster structure at the highly excited energies in \({}^{96}\)Mo. In Sec. V, \(\alpha\)+\({}^{92}\)Zr clustering of \({}^{96}\)Mo is studied and discussions of \(\alpha\) clustering on the \(0\nu\beta\beta\) decay of \({}^{96}\)Zr is given. A summary is given in Sec. VI.
## II Double folding model
We study \(\alpha\) scattering from \({}^{92}\)Zr and neighboring nuclei \({}^{90,91,94}\)Zr with a double folding model using a density-dependent nucleon-nucleon force. The double folding potential is calculated as follows:
\[V({\bf r})=\int\rho_{00}^{(^{4}{\rm He})}({\bf r}_{1})\ \rho_{00}^{({\rm Zr})}({\bf r}_{2})\times v_{NN}(E,\rho,{\bf r}_{1}+{\bf r}-{\bf r}_{2})\ d{\bf r}_{1}d{\bf r}_{2}, \tag{1}\]
where \(\rho_{00}^{(^{4}{\rm He})}({\bf r}_{1})\) and \(\rho_{00}^{({\rm Zr})}({\bf r}_{2})\) represent the nucleon density of the ground states of \({}^{4}\)He and Zr, respectively, which are obtained by the convolution of the proton size from the charge density distribution taken from Ref. [53]. For the effective interaction \(v_{\rm NN}\) we use the density(\(\rho\))-dependent M3Y interaction [54]. In the calculations we introduce the normalization factor \(N_{R}\) for the real double folding potential [55; 56]. The Coulomb folding potential is calculated similarly by the folding prescription in Eq. (1). An imaginary potential with a Woods-Saxon volume-type form factor (nondeformed) is introduced phenomenologically to take into account the effect of absorption due to other channels.
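As an illustration of the folding structure of Eq. (1) only (not the density-dependent M3Y calculation used in this work: the density dependence and exchange term are omitted, and the density parameters below are assumed), the direct term can be evaluated in momentum space for spherical densities:

```python
import numpy as np
from scipy.special import spherical_jn

r = np.linspace(1e-3, 15.0, 400)            # radial grid (fm)
q = np.linspace(1e-3, 6.0, 400)             # momentum grid (fm^-1)

def rho_alpha(rr, b=1.36):                  # Gaussian 4He density, assumed width b
    return 4.0 / (np.pi * b**2)**1.5 * np.exp(-(rr / b)**2)

def rho_zr(rr, A=92, c=4.9, a=0.52):        # two-parameter Fermi density, assumed c and a
    f = 1.0 / (1.0 + np.exp((rr - c) / a))
    return A * f / np.trapz(4 * np.pi * rr**2 * f, rr)

def v_direct(rr):                           # schematic two-range Yukawa central force (MeV)
    return 7999.0 * np.exp(-4.0 * rr) / (4.0 * rr) - 2134.0 * np.exp(-2.5 * rr) / (2.5 * rr)

def to_q(f_r):                              # f(q) = 4*pi * Int f(r) j0(qr) r^2 dr
    return np.array([np.trapz(4 * np.pi * f_r * spherical_jn(0, qq * r) * r**2, r) for qq in q])

# Folding theorem: the transform of the folded potential is the product of transforms.
V_q = to_q(rho_alpha(r)) * to_q(rho_zr(r)) * to_q(v_direct(r))

R_grid = np.linspace(0.0, 12.0, 61)         # separation between the alpha and Zr centers (fm)
V_R = np.array([np.trapz(V_q * spherical_jn(0, q * R) * q**2, q) / (2 * np.pi**2)
                for R in R_grid])           # inverse transform back to coordinate space
```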
## III Analysis of alpha scattering from \({}^{92}\)Zr and \({}^{90,91,94}\)Zr
In exploring the \(\alpha\) cluster structure in the medium-weight mass region where the level density is high, a unified description of \(\alpha\) scattering including rainbow scattering, prerainbows and BAA (ALAS), and the \(\alpha\) cluster structure in the bound and quasibound energy region has been very powerful [28; 34; 36; 57; 58].
The angular distributions in \(\alpha\) scattering from \({}^{92}\)Zr have been measured systematically at \(E_{\alpha}\)=40, 65, 90 and 120 MeV in Ref. [59] and 35.4 MeV in Ref. [60]. The interaction potential can be uniquely determined from the analysis of the angular distributions in the rainbow energy region, which show the Airy minimum in the lit side of the nuclear rainbow followed by the falloff of the cross sections corresponding to the dark side of the nuclear rainbow.
We started by analyzing the angular distribution at the highest energy, \(E_{\alpha}\)=120 MeV, fitting the experimental angular distribution by introducing \(N_{R}\)=1.26 and a phenomenological imaginary potential with a strength parameter \(W\)=18.5 MeV, a radius parameter \(R_{W}\)=7.1 fm and a diffuseness parameter \(a_{W}\)=0.6 fm. Then, keeping \(N_{R}\)=1.26 fixed for 90 and 65 MeV and taking a slightly reduced value \(N_{R}\)=1.22 for 40 and 35.4 MeV, all the angular distributions are easily reproduced by the calculations with a small adjustment that reduces the strength and/or diffuseness parameters of the imaginary potential with decreasing incident energy. The calculated angular distributions are in good agreement with the experimental data, as displayed in Fig. 1. In Table 1 the values of the volume integral per nucleon pair \(J_{V}\) and the rms radius \(\sqrt{<r_{V}^{2}>}\) of the double folding potential, and the parameters of the imaginary potential together with the volume integral per nucleon pair \(J_{W}\) and the rms radius \(\sqrt{<r_{W}^{2}>}\), are listed. The energy dependence of the volume integrals \(J_{V}\) is reasonable and is consistent with the previous calculations for \(\alpha\)+\({}^{92}\)Zr scattering in Refs. [40; 59].
In order to see the contributions of the refractive far-side scattering, the calculated angular distributions are decomposed into the farside and nearside components [61]. In Fig. 1, we see that the falloff of the cross sections in the angular distributions in the intermediate angular region above \(E_{\alpha}\)=65 MeV, which is peculiar to nuclear rainbow scattering, are all due to farside scattering. A clear first-order Airy minimum \(A1\) of the nuclear rainbow is seen at \(\theta\) = 50\({}^{\circ}\) at \(E_{\alpha}\)=120 MeV, which shifts backward as the incident energy decreases, at around \(\theta\) = 70\({}^{\circ}\) for \(E_{\alpha}\)=90 MeV and at \(\theta\) = 125\({}^{\circ}\) for \(E_{\alpha}\)=65 MeV. At \(E_{\alpha}\)=40 MeV no Airy minimum is observed. The appearance of the oscillations in the backward angular distributions shows that the nearside contributions are involved since the oscillations are the consequence of interference of the two amplitudes of farside and nearside scattering. This backward rise of the cross sections with the oscillations at \(E_{\alpha}\)=40 MeV is the indication of BAA under
incomplete absorption, which is typically observed and explained in \(\alpha\)+\({}^{16}\)O [62; 63] and \(\alpha\)+\({}^{40}\)Ca scattering [64] in the energy region \(E_{\alpha}\)=20-30 MeV.
In the energy region below \(E_{\alpha}\)=40 MeV the concept of farside and nearside scattering is no longer so useful for understanding the characteristic features of the angular distributions. Instead, it is useful to understand the characteristics of the BAA angular distributions in terms of internal waves and barrier waves [65]. The scattering amplitude \(f(\theta)\) can be decomposed as \(f(\theta)=f^{I}(\theta)+f^{B}(\theta)\), where \(f^{I}(\theta)\) is due to the internal waves penetrating the barrier deep into the internal region of the potential and \(f^{B}(\theta)\) is due to the barrier waves reflected at the barrier of the potential in the surface region. In the case of incomplete absorption the internal waves, \(f^{I}(\theta)\), carry the information about the internal region of the potential. Unfortunately, at the lower energies below 30 MeV, where the effect of the internal waves is clearly seen [62; 63; 64; 65], no angular distributions have been measured for \(\alpha\)+\({}^{92}\)Zr scattering.
However, we note that the angular distributions in \(\alpha\) scattering from neighboring nuclei \({}^{90}\)Zr and \({}^{91}\)Zr have been measured up to the backward angles at the lower energies \(E_{\alpha}\)=23-25 MeV. In Fig. 2 the angular distributions show a BAA rising toward the extreme backward angles at 21 and 25 MeV. Note that the angular distributions for both \({}^{90}\)Zr and \({}^{91}\)Zr decrease sharply toward 180\({}^{\circ}\) at \(E_{\alpha}\)=23 MeV in the BAA energy region, which is not seen in the typical \(\alpha\)+\({}^{16}\)O [62; 63] and \(\alpha\)+\({}^{40}\)Ca scattering [64]. This characteristic decrease is intriguing because angular distributions at other energies generally increase toward \(\theta=180^{\circ}\), see Fig. 1, as expected from the behavior of the Legendre polynomials whose moduli increases toward \(\theta=180^{\circ}\) at the extreme back angles. In Fig. 2 the angular distributions in \(\alpha\)+ \({}^{90}\)Zr and \(\alpha\)+\({}^{91}\)Zr
Figure 1: (Color online) The angular distributions in \(\alpha\)+\({}^{92}\)Zr scattering at \(E_{\alpha}\)=35.4, 40, 65, 90 and 120 MeV calculated with the optical potential model with the double folding potential (solid lines) are compared with the experimental data (filled circles) [59; 60]. The calculated farside (dotted lines) and nearside (dashed lines) contributions are also indicated.
scattering calculated using the double folding potential derived from Eq. (1) are compared with the experimental data [66]. The potential parameters used are listed in Table 2. The calculations reproduce the experimental angular distributions well. Note that the particular behavior at 23 MeV that decreases sharply toward 180\({}^{\circ}\) is reproduced excellently. This shows that the calculated double folding potentials for \(\alpha\)+ \({}^{90}\)Zr and \(\alpha\)+\({}^{91}\)Zr work very well in this low-energy region, which reinforces the validity of the double folding potential in the \(E_{\alpha}\)=23-MeV to \(E_{\alpha}\)=25-MeV region.
In Fig. 3 the excitation functions at the extreme backward angle \(\theta\)=176.2\({}^{\circ}\) (\(\theta_{Lab}\)=176\({}^{\circ}\)) in \(\alpha\) scattering from \({}^{90}\)Zr, \({}^{91}\)Zr, \({}^{92}\)Zr and \({}^{94}\)Zr calculated using the potentials at \(E_{\alpha}\)= 23 MeV in Table 2 are displayed in comparison with the experimental data. All the calculated excitation functions show a dip and its position shifts to lower energy from \({}^{90}\)Zr to \({}^{94}\)Zr. The position of the dips in the calculated excitation functions for \(\alpha\)+\({}^{90}\)Zr, \(\alpha\)+\({}^{91}\)Zr and \(\alpha\)+\({}^{94}\)Zr agrees with the experimental data excellently. The energy of the observed dips for \({}^{90}\)Zr, \({}^{91}\)Zr and \({}^{94}\)Zr decreases linearly with the mass number \(A\) of the target nucleus, \(E_{\alpha}\)=54.5 - 0.346\(A\), which predicts a dip at \(E_{\alpha}\)=22.7 MeV for \({}^{92}\)Zr. As seen in Fig. 3, the double folding model calculation locates a dip at \(E_{\alpha}\)= 22.7 MeV for \(\alpha\)+\({}^{92}\)Zr, which is in good agreement with the predicted energy, 22.7 MeV.
The mechanism explaining why the dip emerges in the excitation function at the extreme backward angle near \(\theta\)=180\({}^{\circ}\), namely why the angular distribution decreases sharply toward \(\theta\)=180\({}^{\circ}\) at a particular energy, has been investigated in detail for the typical \(\alpha\)+\({}^{90}\)Zr system by one of the present authors (S.O.) and his collaborators, see Ref. [41]. The mechanism is understood as follows. The dip appears at the energy where the scattering amplitude \(f(\theta)\) becomes vanishingly small. When \(f^{I}(\theta)\)\(\approx\)\(-f^{B}(\theta)\), the cancellation of the two amplitudes occurs, i.e., in the case when \(|f^{I}(\theta)|\approx|f^{B}(\theta)|\) and \(\arg f^{I}(\theta)-\arg f^{B}(\theta)\approx k\pi\) where \(k\) is an odd integer. Near \(\theta\)=180\({}^{\circ}\) this condition is satisfied at the energy \(E_{\alpha}\)=22-24 MeV under moderate absorption not only for \(\alpha\)+\({}^{90}\)Zr but also for \(\alpha\)+\({}^{91}\)Zr, \(\alpha\)+\({}^{92}\)Zr and \(\alpha\)+\({}^{94}\)Zr since both the real potential and the imaginary potential change little from that of \(\alpha\)+\({}^{90}\)Zr as seen in Table 2. The good agreement of the calculated excitation functions, especially the energy position and width of the dip for \(\alpha\)+\({}^{91}\)Zr and \(\alpha\)+\({}^{94}\)Zr, with the experimental data is the natural consequence that their potentials resemble that for \(\alpha\)+\({}^{90}\)Zr. Although no experimental data are available for \(\alpha\)+\({}^{92}\)Zr, the emergence of the dip at the predicted energy in the excitation function could be confirmed in a future experiment. Since the internal waves, which are responsible for the emergence of the dip, are sensitive to the internal region of the real potential, the present good agreement in Fig. 3 shows that the obtained double folding potential is sufficiently reliable in this low-energy region above the Coulomb barrier.
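The cancellation condition can be illustrated numerically: for two amplitudes of equal modulus, the summed intensity \(|f^{I}+f^{B}|^{2}\) collapses when their phase difference is an odd multiple of \(\pi\) (a toy illustration only, not a scattering calculation):

```python
import numpy as np

f_B = 1.0 + 0.0j                             # barrier-wave amplitude (arbitrary units)
for dphi in np.linspace(0.0, 2 * np.pi, 9):
    f_I = abs(f_B) * np.exp(1j * dphi)       # internal wave with equal modulus
    print(f"phase difference {dphi / np.pi:4.2f} pi -> |f|^2 = {abs(f_I + f_B)**2:6.3f}")
# |f|^2 vanishes at a phase difference of pi (k odd), producing the dip;
# away from this condition the two amplitudes interfere constructively.
```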
## IV Mechanism of the characteristic dip in the back-angle excitation function in \(\alpha\)+\({}^{92}\)Zr scattering
In this section, paying attention to the highly lying excited \(\alpha\) cluster structure in \({}^{96}\)Mo, we investigate how the anomalous dip in the back-angle excitation in \(\alpha\)+\({}^{92}\)Zr
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & \(E_{\alpha}\) & \(J_{V}\) & \(\sqrt{<r_{V}^{2}>}\) & \(W\) & \(R_{W}\) & \(a_{W}\) & \(J_{W}\) & \(\sqrt{<r_{W}^{2}>}\) \\ \hline \({}^{90}\)Zr & 21 & 331.5 & 4.96 & 10.0 & 7.55 & 0.40 & 51.5 & 6.03 \\ & 23.4 & 329.5 & 4.96 & 10.0 & 7.55 & 0.43 & 51.7 & 6.06 \\ & 25 & 327.5 & 4.96 & 10.0 & 7.55 & 0.48 & 52.1 & 6.11 \\ \({}^{91}\)Zr & 21 & 333.1 & 5.00 & 10.6 & 7.60 & 0.37 & 54.8 & 6.05 \\ & 23 & 331.7 & 5.00 & 10.2 & 7.60 & 0.41 & 53.0 & 6.08 \\ & 25 & 330.3 & 5.01 & 10.2 & 7.60 & 0.45 & 53.3 & 6.12 \\ \({}^{92}\)Zr & 23 & 329.6 & 4.99 & 10.7 & 7.63 & 0.43 & 55.8 & 6.12 \\ \({}^{94}\)Zr & 23 & 330.0 & 5.02 & 11.8 & 7.70 & 0.48 & 62.3 & 6.23 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The volume integral per nucleon pair \(J_{V}\), rms radius \(\sqrt{<r_{V}^{2}>}\) of the double folding potentials, and the strength \(W\), radius \(R_{W}\), diffuseness \(a_{W}\), volume integral per nucleon pair \(J_{W}\), and rms radius \(\sqrt{<r_{W}^{2}>}\) of the imaginary potentials used in \(\alpha\)+\({}^{90,91,92,94}\)Zr scattering in Fig. 2 and Fig. 3. Energies are in MeV, volume integrals in MeVfm\({}^{3}\), and radii in fm. \(N_{R}\)=1.22 is used for all target nuclei and incident energies.
Figure 3: (Color online) The calculated excitation functions in \(\alpha\) scattering from \({}^{90}\)Zr, \({}^{91}\)Zr, \({}^{92}\)Zr, and \({}^{94}\)Zr at the extreme backward angle \(\theta\)=176.2\({}^{\circ}\) (\(\theta_{Lab}\)=176\({}^{\circ}\)) (solid lines) are compared with the experimental data (filled circles) [66].
scattering in Fig. 3 is created.
For this purpose, in Fig. 4 (a) back-angle excitation functions calculated by gradually reducing the strength of the imaginary potential, \(W\)=3\(W_{0}\)/4, \(W_{0}\)/2, \(W_{0}\)/4, \(W_{0}\)/8, and 0 MeV, are compared with the original one with \(W_{0}=10.7\) MeV in Table 2. For \(W=0\) the peaks in the excitation function at \(E_{\alpha}\)=20.5 and 23 MeV are due to the resonances with the \(\alpha\)+\({}^{92}\)Zr structure. That the \(\alpha\) cluster structure at high excitation energies can be seen in the excitation function at the extreme backward angles near 180\({}^{\circ}\) has already been shown for the \(\alpha\)+\({}^{40}\)Ca cluster structure in \({}^{44}\)Ti [67; 68].
In Fig. 4 (b) and (c), the partial wave cross sections of elastic scattering are displayed. Fig. 4 (b) shows that the peaks at \(E_{\alpha}\)=20.5 and 23 MeV are caused by the even \(L\) partial waves and Fig. 4 (c) shows that the odd \(L\) partial waves do not contribute to create the peaks. Thus we find that the peaks at \(E_{\alpha}\)=20.5 and \(E_{\alpha}\)=23 MeV in the excitation function with \(W=0\) are caused by the resonant waves \(L=10\) and \(L=12\), respectively.
To see the details of the resonances responsible for the peaks, in Fig. 5 the phase shifts in \(\alpha\)+\({}^{92}\)Zr scattering calculated by switching off the imaginary potential are displayed. We see that the phase shifts for the even-parity and odd-parity partial waves show different behavior in the relevant energy range, \(E_{\alpha}\)=18 - 26 MeV (center-of-mass energy \(E\)=17.3-24.9 MeV). Although the phase shifts of the even-parity partial waves, \(L=\)10 and 12, pass through \(\delta_{L}\)=270\({}^{\circ}\) slowly at the resonance energies, those of the odd-parity partial waves, \(L=\)11 - 15, cross \(\delta_{L}\)=90\({}^{\circ}\) sharply at the resonance energies. The narrow odd-parity resonances hardly contribute to the peaks in the excitation functions as seen in Fig. 4 (a) and (c). This is why even-parity waves are dominantly responsible for the peaks and the dip in Fig. 4. The broad resonant nature of the even-parity waves is a consequence of their being the high-lying second-higher nodal \(\alpha\)-cluster resonance states, in which two more nodes are excited in the relative motion compared with the lowest Pauli-allowed ground-band states in \({}^{96}\)Mo. The nature of the resonant \(\alpha\)+\({}^{92}\)Zr cluster structure in \({}^{96}\)Mo is discussed in detail in the next section.
Figure 4: (Color online) (a)The excitation functions at the extreme backward angle \(\theta\)=176.2\({}^{\circ}\) in \(\alpha\)+\({}^{92}\)Zr scattering calculated with the reduced strengths of the imaginary potential \(W\)=0 (dotted lines), \(W_{0}\)/8 (dashed lines), \(W_{0}\)/4 (long dashed lines), \(W_{0}\)/2 (dash-dotted lines), and \(3W_{0}\)/4 ( dashed-and-double-dotted lines) are compared with the original one with \(W=\)\(W_{0}\)=10.7 MeV (solid lines). (b) The calculated partial wave cross sections of elastic scattering under \(W=\)0 for even \(L\) and for (c) odd \(L\).
Figure 5: (Color online) Phase shifts in \(\alpha\) + \({}^{92}\)Zr scattering calculated with the double folding potential with \(N_{R}\)=1.22, (a) even \(L\) partial waves and (b) odd \(L\) partial waves, are displayed for \(L\leq\)15. The lower abscissa shows the center-of-mass energy \(E\) and the upper abscissa shows the laboratory incident energy \(E_{\alpha}\).
## V Alpha cluster structure in \({}^{96}\)Mo and neutrinoless double \(\beta\) decay of \({}^{96}\)Zr
In order to reveal the cluster structure of \({}^{96}\)Mo underlying the excitation function with the characteristic dip at the extreme backward angle, the resonant states and the bound and quasibound energy levels calculated in the double folding potential with \(N_{R}\)=1.22 by switching off the imaginary potential of the optical potential used in Fig. 3 are displayed in Fig. 6 (a). The resonance energies are given at the energies where the phase shifts steeply pass through \(\delta_{L}\)=90\({}^{\circ}\) (270\({}^{\circ}\)) in Fig. 5. By investigating the resonant wave function for \(L=12\) at \(E=22.04\) MeV, we find that the wave function has four nodes in the relative motion, see Fig. 7. The resonances with \(L\)=10, 12, and 14 in the range of \(E\)=19-25 MeV are found to belong to the band with \(N=2n+L=20\), where \(n\) is the number of nodes in the relative wave function between \(\alpha\) and \({}^{92}\)Zr. The \(N=20\) band state energies lie on a \(J(J+1)\) plot with the bandhead \(J^{\pi}\)=0\({}^{+}\) state at \(E\)=14.4 MeV and the rotational constant \(k\)=\(\hbar^{2}/2\mathcal{J}\)=0.0492 MeV, where \(\mathcal{J}\) is the moment of inertia of the band. The band has a well-developed \(\alpha\)+\({}^{92}\)Zr cluster structure. The large separation distance between \(\alpha\) and \({}^{92}\)Zr can be seen in the wave functions of the 10\({}^{+}\) and 12\({}^{+}\) states in Fig. 7. The outermost peak, which is located at around \(R\)=7-8 fm, lies well outside the sum of the experimental radii [69] of \(\alpha\) and \({}^{92}\)Zr, 6.0 fm. Although the phase shifts for the lower \(L=0-6\) members of the \(N=20\) band rise toward \(\delta_{L}\)=270\({}^{\circ}\), they do not fully cross \(\delta_{L}\)=270\({}^{\circ}\). However, since the number of nodes \(n\) of their wave functions satisfies the condition \(N=2n+L=20\), they are considered to be persistent members of the rotational band with \(N=20\). From the \(J(J+1)\) plot they are extrapolated to exist persistently at the energies indicated by the dotted lines in Fig. 6 (a). The resonance energies and widths of these broad resonances can be calculated with the complex scaling method [70; 71]. The presence of the 12\({}^{+}\) state of the \(N=20\) band, which manifests itself in the emergence of the characteristic dip in the back-angle excitation function, demonstrates for the first time the existence of a second-higher nodal band member state with the \(\alpha\)+\({}^{92}\)Zr cluster structure in \({}^{96}\)Mo, in which two more nodes are excited in the relative motion compared with the \(N=\)16 ground band.
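The \(J(J+1)\) systematics quoted above can be checked with one line of arithmetic. The sketch below is illustrative only (not part of the original analysis); it evaluates \(E(J)=E_{0}+kJ(J+1)\) with the \(N=20\) band parameters given in the text, and for \(J=12\) it gives about 22.1 MeV, consistent with the resonance energy \(E=22.04\) MeV found from the phase shifts.

```python
def band_energy(J, e_bandhead=14.4, k=0.0492):
    """Rotational-band estimate E(J) = E_0 + k*J*(J+1) in MeV (N=20 band of 96Mo)."""
    return e_bandhead + k * J * (J + 1)

print([round(band_energy(J), 2) for J in (0, 2, 4, 6, 8, 10, 12, 14)])
# [14.4, 14.7, 15.38, 16.47, 17.94, 19.81, 22.08, 24.73]
```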
The wave functions of the resonances with odd \(L\) in Fig. 5 have \(N=19\) and form a negative-parity rotational band with the \(\alpha\)+\({}^{92}\)Zr cluster structure. The band states are well located on the \(J(J+1)\) plot with its bandhead 1\({}^{-}\) state at \(E=11.1\) MeV and \(k\)=0.0383 MeV. The \(N=19\) band is a higher nodal band with developed \(\alpha\) clustering, in which the relative motion is one more excited compared with the lower-lying \(N=17\) band states. The calculation locates the \(N=18\) rotational band with its bandhead 0\({}^{+}\) at \(E\)=7.19 MeV and \(k\)=0.0236 MeV, which is a higher nodal band with one more node in the wave functions compared with those of the Pauli-allowed lowest \(N=16\) band. The \(N=17\) rotational band states are well located on the \(J(J+1)\) plot with its bandhead 1\({}^{-}\) state at \(E\)=1.33 MeV. The calculation locates the band states with \(N=16\) below the \(\alpha\) threshold. It is surprising that the Pauli-allowed lowest \(N=16\) band states satisfying the Wildermuth condition falls in good correspondence with the ground band of \({}^{96}\)Mo. The calculated 0\({}^{+}\) state of the \(N=16\) band with \(E\)=-5.56 MeV is slightly overbound by 2.8 MeV compared with the experimental energy of the ground state with \(E=\)-2.76 MeV from the \(\alpha\) threshold. This is because the potential determined at the highly excited energy region, \(E_{\alpha}\)=23 MeV, is straightforwardly applied to the calculations in the bound-state energy region. The energy levels for \(N=\)18, 17 and 16 in Fig. 6, most of which are located below the Coulomb barrier, are the ones calculated in the bound-state approximation.
According to the dispersion relation [72], the energy dependence of the volume integral of the real potential shows the threshold anomaly.
Figure 6: (Color online) (a) Energy levels of \({}^{96}\)Mo calculated in the \(\alpha\)+ \({}^{92}\)Zr cluster model with the double folding potential with \(N_{R}\)=1.22. The calculated excitation energy of the \(N\)=16 ground-band states, 0\({}^{+}\), 2\({}^{+}\), 4\({}^{+}\), 6\({}^{+}\) and 8\({}^{+}\), which look compressed, increases as the spin increases. (b) The \(N=16\) band energy levels calculated using the double folding model with \(L\) dependence. (c) Experimental energy levels of the ground band. The horizontal dashed lines (blue) correspond to \(E_{\alpha}\)=18 MeV (center-of-mass energy \(E\)=17.3 MeV) and \(E_{\alpha}\)=26 MeV (\(E\)=24.9 MeV), between which the characteristic dip in the excitation function appears.
Namely, the volume integral \(J_{V}\) increases as the incident energy decreases from the rainbow energy region to the lower-energy region of BAA, reaches a maximum, and then decreases toward \(E_{\alpha}\)=0. In fact, in Tables 1 and 2 we see that \(J_{V}\) increases from 280.5 MeVfm\({}^{3}\) at the rainbow energy \(E_{\alpha}\)=120 MeV to 318.9 MeVfm\({}^{3}\) at \(E_{\alpha}\)=35.4 MeV and 329.6 MeVfm\({}^{3}\) at \(E_{\alpha}\)=23 MeV. The dispersion relation tells us that a potential with a reduced \(J_{V}\) value should be used in the bound and quasibound energy region below and near the threshold energy \(E\)=0. The overbinding of the ground-state energy in Fig. 6 (a) is ascribed simply to the fact that it is calculated using the potential with \(N_{R}\)=1.22 at \(E_{\alpha}\)=23 MeV, with its large \(J_{V}\) value, without taking into account the energy dependence of the real potential due to the dispersion relation [72]. By using the double folding potential with a slightly reduced strength, \(N_{R}\)=1.182 with \(J_{V}\)=319.3 MeVfm\({}^{3}\), the calculated ground-state 0\({}^{+}\) energy agrees with the experimental value, as seen in Fig. 6 (b). A similar situation, where \(J_{V}\) must be reduced in the bound and quasibound energy region compared with that used in the higher scattering energy region, has been reported in the recent unified description of bound and scattering states for the \(\alpha\)+\({}^{48}\)Ca cluster structure in \({}^{52}\)Ti [36] and the \(\alpha\)+\({}^{44}\)Ca cluster structure in \({}^{48}\)Ti [28].
In Fig. 6 (a) the calculated \(N\)=16 ground-band states are very compressed, which is also the case for the \(N=16\) states calculated with \(N_{R}\)=1.182. Although the conventional Woods-Saxon potential gives an inverted energy level spectrum in this heavy mass region, namely the excitation energy of the ground-band states decreases as the spin increases in disagreement with experiment, the present double folding model potential gives an energy spectrum consistent with the experimental ground band. In fact, the excitation energy of the calculated energy levels of the \(N\)=16 band, which look almost degenerate in Fig. 6 (a), increases as the spin increases from 0\({}^{+}\) to 8\({}^{+}\). This compression is because the angular momentum dependence of the local potential has not been taken into account. In order to discuss the spectroscopic properties in the low-energy region, it is necessary to take into account the \(L\) dependence of the potential. The nucleus-nucleus potential, which is originally non-local due to the Pauli principle, has an \(L\) dependence when represented as a local potential. The \(L\) dependence is usually not important and often neglected in the scattering energy region. However, this \(L\) dependence is important when we study the cluster structure in the bound and quasibound energy region. The necessity of the \(L\) dependence of the intercluster potential due to the Pauli principle has been theoretically established in the microscopic studies of interactions between composite particles [73; 74]. In fact, it has been shown that this \(L\) dependence is indispensable in describing the \(\alpha\) cluster structure using a local potential, for example, in \({}^{20}\)Ne [75], \({}^{44}\)Ti [57; 58], \({}^{94}\)Mo [40; 44], \({}^{212}\)Po [40; 45], and \({}^{46,50}\)Cr [76]. Following the double folding potential model study of the \(\alpha\) cluster structure in \({}^{94}\)Mo in Ref. [40], where a linear \(L\) dependence of the double folding potential was first discussed, we use \(N_{R}^{(L)}=N_{R}^{(L=0)}-cL\) with \(N_{R}^{(L=0)}\)=1.182 and \(c\)=5.00\(\times\)10\({}^{-3}\) for \({}^{96}\)Mo. The calculated energy levels of the \(N=16\) ground band are displayed in Fig. 6 (b). In Table 3 the calculated \(B(E2)\) values as well as the excitation energies and intercluster rms radii of the ground band of \({}^{96}\)Mo are listed in comparison with the experimental data. The excitation energy of the ground band is reproduced well by the double folding potential model with a small \(L\) dependence. The experimental \(B(E2)\) values [77] are also reproduced well by introducing an additional small effective charge \(\Delta e=0.3e\) for protons and neutrons. We note that in the large-scale shell-model calculations in Ref. [52] rather large additional effective charges, \(\Delta e=0.5e\) for protons and \(\Delta e=0.5-0.8e\) for neutrons, are introduced. The rms charge radius \(<r^{2}>^{1/2}_{\rm 96Mo}\)=4.36 fm of the ground state calculated using the experimental values \(<r^{2}>^{1/2}_{\rm 4He}\)=1.676 fm and \(<r^{2}>^{1/2}_{\rm 92Zr}\)=4.306 fm [69] is in good agreement with the experimental value 4.38 fm [69]. The calculated intercluster distance of the ground
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \(J^{\pi}\) & \multicolumn{2}{c}{\(E_{x}\) (MeV)} & \(\sqrt{<R^{2}>}\) (fm) & \multicolumn{3}{c}{\(B(E2)\) (W.u.)} \\ & exp & cal & cal & exp [77] & cal & Ref. [52] \\ \hline
0\({}^{+}\) & 0.00 & 0.000 & 5.20 & & & \\
2\({}^{+}\) & 0.778 & 0.770 & 5.21 & 20.7\(\pm\)0.4 & 20.7 & 18.7 \\
4\({}^{+}\) & 1.628 & 1.611 & 5.19 & 41\(\pm\)7 & 28.7 & - \\
6\({}^{+}\) & 2.441 & 2.543 & 5.14 & - & 29.1 & - \\
8\({}^{+}\) & 2.978 & 3.583 & 5.05 & - & 26.5 & - \\ \hline \hline \end{tabular}
\end{table}
Table 3: The excitation energies \(E_{x}\), intercluster rms radii \(\sqrt{<R^{2}>}\) and \(B(E2)\) values for the \(J\rightarrow(J-2)\) transitions of the ground band in \({}^{96}\)Mo calculated in the \(\alpha\)+\({}^{92}\)Zr cluster model with the double folding potential are compared with the experimental data [77] and the large-scale shell model calculation [52].
Figure 7: (Color online) The calculated \(u(R)\) of the relative wave function \(u(R)/R\) of the 10\({}^{+}\) and 12\({}^{+}\) states of the \(N=\)20 band with the \(\alpha\)+\({}^{92}\)Zr cluster structure in \({}^{96}\)Mo. The wave functions are calculated as scattering states and normalized to unity for \(R\leq\)10 fm.
state is about 87% of the sum of the experimental rms charge radii of the two clusters, comparable to the 87% found for the ground state of \({}^{44}\)Ti [58].
Note that the value of the parameter \(c\)=5.00\(\times 10^{-3}\) lies in the expected range of \(\alpha\)-cluster states \(c\)\(\approx\)(2.5-5)\(\times 10^{-3}\), as observed for many \(\alpha\)-cluster states in a wide range of nuclei [78] including the mass region near \(A=100\) such as \({}^{94}\)Mo [31; 40], \({}^{93}\)Nb [43; 79], and \({}^{104}\)Te [78]; the light- and medium-weight mass regions such as \({}^{20}\)Ne [80] and \({}^{44}\)Ti [81]; and the heavy mass region such as \({}^{212}\)Po [40; 82]. The \(L\)-dependent potential calculation locates the negative-parity \(N=17\) band with its bandhead \(1^{-}\) state at \(E_{x}\)=6.83 MeV, well below the Coulomb barrier. Although \(1^{-}\) states have been observed at \(E_{x}\)=3.600 MeV and 3.895 MeV [77], the experimental spectroscopic properties concerning \(\alpha\) clustering of the excited states near and above \(E_{x}\approx\)4 MeV are not clear.
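For reference, the linear \(L\) dependence adopted for the ground band is a one-line formula; the illustrative snippet below evaluates \(N_{R}^{(L)}=N_{R}^{(L=0)}-cL\) with the \({}^{96}\)Mo values \(N_{R}^{(L=0)}\)=1.182 and \(c\)=5.00\(\times\)10\({}^{-3}\) quoted above.

```python
def n_r(L, n_r0=1.182, c=5.00e-3):
    """L-dependent renormalization factor of the double folding potential."""
    return n_r0 - c * L

print([round(n_r(L), 3) for L in (0, 2, 4, 6, 8)])  # [1.182, 1.172, 1.162, 1.152, 1.142]
```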
The potential embeds the eight deeply bound unphysical Pauli-forbidden \(0^{+}\) states. The overlaps of the eight \(0^{+}\) wave functions with the harmonic oscillator wave functions with \(\nu\)=0.22 fm\({}^{-2}\) (\(\nu=m\omega/\hbar\) and \(\hbar\omega\)=9.12 MeV) are 0.96, 0.93, 0.93, 0.95, 0.97, 0.98, 0.99, and 0.97 for the oscillator quanta \(N_{\rm HO}\)=0, 2, \(\cdots\), and 14, respectively. This means that the Pauli-forbidden states with \(N_{\rm HO}<16\) of the resonating group method are largely excluded from the obtained ground-state wave function, thus mimicking Saito's orthogonality condition model [83].
As seen in Fig. 8, the ground-state wave function resembles the shell-model wave function with \(N_{\rm HO}=16\) in the internal region. However, the outermost peak at around 6 fm is slightly shifted outward compared with that of the harmonic oscillator wave function, causing a significant enhancement of the amplitude in the outer surface region due to \(\alpha\) clustering. This enhancement means that the obtained wave function contains a significant amount of components in shells higher than \(N_{\rm HO}=16\).
In Fig. 9 the occupation probability of the quanta \(N_{\rm HO}\geq\)16 in the ground-state wave function is displayed. The dominant occupation probability of the lowest Pauli-allowed, shell-model-like \(N_{\rm HO}=16\) configuration is 78%. The significant amount of higher \(N_{\rm HO}\geq 18\) components, 22%, is due to the \(\alpha\) clustering of the ground state. The \(2^{+}\) and \(4^{+}\) states have a similar character. This \(\alpha\) clustering is responsible for the enhancement of the \(B(E2)\) values in \({}^{96}\)Mo and should also be taken into account in the evaluation of the NME of the \(0\nu\beta\beta\) decay of \({}^{96}\)Zr to \({}^{96}\)Mo.
We discuss the effect of \(\alpha\) clustering of \({}^{96}\)Mo on the NME of the \(0\nu\beta\beta\) decay of \({}^{96}\)Zr, which was the motivation of the present \(\alpha\) cluster structure study of \({}^{96}\)Mo as introduced in Sec. I. The NME values of the \(0\nu\beta\beta\) decay of \({}^{96}\)Zr evaluated by using various nuclear models are summarized in Ref. [7] and most recently in Ref. [19]. The QRPA calculations give NME values of 2.72 with the Argonne V18 nucleon-nucleon potential and 2.96 with the CD-Bonn nucleon-nucleon potential with the axial vector coupling constant \(g_{A}\)=1.27 in Ref. [14], and 3.14 with \(g_{A}\)=1.26 in Ref. [15]. The IBM calculation by Barea _et al._ gives 2.53 in Ref. [27]. The latest PHFB calculations give 2.5 [19]. On the other hand, the EDF calculations give considerably larger NME values, about twice as large as the above values. The nonrelativistic EDF calculations by Vaquero _et al._[24] give 5.65 with \(g_{A}\)=1.26 when evaluated with shape fluctuations and 6.50 when evaluated with both the shape and pairing fluctuations. The relativistic EDF calculation by Yao _et al._[25] gives almost the same large result. Yao _et al._ claim [25] that the EDF calculations are unable to reproduce the properties of \({}^{96}\)Zr, giving a too-low excitation energy of E(\(2^{+}_{1}\)) and a too-large \(B(E2:0_{\rm g.s.}\to 2^{+}_{1})\) value, which is one order of magnitude larger than the experimental data. Yao _et al._[25] ascribe this to the overestimation of the collectivity in \({}^{96}\)Zr due to the "common problem of most EDF-based GCM or collective Hamiltonian calculations." Moreover, the GCM calculation in the framework of covariant density functional theory [22] gives the largest value, 6.37, among the nuclear model calculations.
Figure 8: (Color online) The calculated \(u(R)\) of the relative wave function \(u(R)/R\) of the ground state of \({}^{96}\)Mo calculated in the \(\alpha\)+ \({}^{92}\)Zr cluster model (solid line) is compared with the harmonic oscillator wave function with \(N_{\rm HO}\) =16 (dashed line).
Figure 9: (Color online) The occupation probability of the harmonic oscillator quanta \(N_{\rm HO}\) in the ground-state wave function of \({}^{96}\)Mo.
The overestimation of the collectivity of the doubly closed-subshell nucleus \({}^{96}\)Zr increases the overlap of the wave functions of \({}^{96}\)Zr and \({}^{96}\)Mo, which leads to the large NME values. Although the present cluster model is unable to calculate NME values because the nucleon degrees of freedom are not involved, it can qualitatively tell whether the NME is enhanced or reduced by the \(\alpha\) clustering of \({}^{96}\)Mo compared with shell-model calculations in which excitations to higher major shells are not included. Taking into account that the excitation energy, 1.58 MeV, of the first excited \(0^{+}\) state of \({}^{96}\)Zr is rather high for this mass region, resembling the mysterious \(0^{+}\) of the doubly magic nucleus \({}^{16}\)O [47], and that there is no evidence that \({}^{96}\)Zr has \(\alpha\)+\({}^{92}\)Sr clustering [47], the ground-state wave function can well be considered to have a doubly closed-subshell shell-model structure. Thus the \(\alpha\) clustering of \({}^{96}\)Mo considerably reduces the overlap of the ground-state wave function of \({}^{96}\)Zr with that of \({}^{96}\)Mo in the evaluation of the NME. That is, the \(0\nu\beta\beta\) decay of \({}^{96}\)Zr to \({}^{96}\)Mo would be significantly quenched by the \(\alpha\) clustering, and thus have a longer half-life, compared with shell-model calculations that do not take into account the four-particle excitations. Unfortunately, NME values in the shell model have not been reported. Shell-model calculations with configuration mixing including the \(N_{\rm HO}=18\), 20, and 22 major shells are presently formidably difficult even with modern computers. We note that both the QRPA and IBM calculations do not include \(\alpha\)-like four-particle four-hole excitations and \(\alpha\)-like correlations.
Finally, we briefly mention the large \(B(E2)\) value, 51.7 W.u., of the transition from \(0^{+}_{2}\) (\(E_{x}\)=1.148 MeV) to \(2^{+}_{1}\) (\(E_{x}\)=0.778 MeV) in \({}^{96}\)Mo [77], which may suggest that the \(0^{+}_{2}\) state has \(\alpha\) clustering in which the core is excited. If the \(0^{+}_{2}\) state has a significant amount of [\(\alpha_{(L=2)}\)+\({}^{92}\)Zr(\(2^{+}_{1}\))]\({}_{J=0}\) clustering component, then the \(B(E2)\) value can be enhanced because, in addition to the \(E2\) transition matrix element due to the inter-cluster relative motion, <2\({}^{+}_{1}\)(\(\alpha_{L=0}\))\(|\hat{O}_{E2}^{2}({\bf r})|0^{+}_{2}\)(\(\alpha_{L=2}\))>, the internal transition of the core \({}^{92}\)Zr, <\({}^{92}\)Zr(g.s.)\(|\hat{O}_{E2}(\xi)|\)\({}^{92}\)Zr(\(2^{+}_{1}\))>, contributes to the total \(E2\) transition, where \(\xi\) is the internal coordinate of \({}^{92}\)Zr. Coupled-channels calculations with excitations of \({}^{92}\)Zr would be a future challenge to understand the origin of the large \(B(E2)\) value of the \(0^{+}_{2}\) state of \({}^{96}\)Mo and the effective charge.
## VI Summary
In the evaluation of the nuclear matrix element of the neutrinoless double-\(\beta\) decay \(0\nu\beta\beta\) of the doubly closed-subshell nucleus \({}^{96}\)Zr to \({}^{96}\)Mo, it is important to take into account the collectivity due to \(\alpha\) clustering in the structure of \({}^{96}\)Mo, which has two extra neutrons on top of the \({}^{94}\)Mo nucleus, an analog of \({}^{20}\)Ne and \({}^{44}\)Ti that has been considered to have an \(\alpha\) cluster structure. We have studied for the first time the \(\alpha\) clustering aspects of \({}^{96}\)Mo by using a double folding potential determined from the analysis of nuclear rainbows at high energies and the characteristic structure of the angular distributions at low energies in \(\alpha\) particle scattering from \({}^{92}\)Zr. The validity of the double folding potential used is also confirmed by studying \(\alpha\) scattering from \({}^{90,91,94}\)Zr in the low-energy region, where a characteristic dip appears in the excitation functions at the extreme backward angle near \(180^{\circ}\). The double folding model calculations reproduced well all the observed angular distributions over a wide range of incident energies and the excitation functions with a characteristic dip at the extreme backward angle. By studying the \(\alpha\) cluster structure with the obtained double folding potential, the existence of the second-higher nodal \(N=20\) band states with the \(\alpha\)+\({}^{92}\)Zr cluster structure, in which two more nodes are excited in the relative motion compared with the \(N=16\) ground band in \({}^{96}\)Mo, is demonstrated for the first time in the highly excited energy region. The \(\alpha\)-cluster model using this potential locates the ground state in agreement with experiment and reproduces the observed \(B(E2)\) value of \({}^{96}\)Mo. The effect of \(\alpha\) clustering in \({}^{96}\)Mo on the half-life of the \(0\nu\beta\beta\) decay of \({}^{96}\)Zr is discussed.
###### Acknowledgements.
One of the authors (S.O.) thanks the Yukawa Institute for Theoretical Physics, Kyoto University where part of the work was done during a stay in 2022.
| In evaluating the half-life of the neutrinoless double-$\beta$ decay ($0\nu\beta\beta$) of the doubly closed-subshell nucleus $^{96}$Zr, the structure of the nucleus $^{96}$Mo is very important. The $\alpha$-clustering properties of $^{96}$Mo are investigated for the first time. The interaction potential between the $^{92}$Zr nucleus and the $\alpha$ particle was determined precisely by using the study of the nuclear rainbow in $\alpha$ scattering from $^{92}$Zr at high energies and the characteristic structure of the excitation functions at extreme backward angles in the low-energy region. To confirm the validity of this double folding model, $\alpha$ scattering from $^{90}$Zr, $^{91}$Zr, and $^{94}$Zr was examined. The double folding model calculations reproduce well the angular distributions observed over a wide range of incident energies and the characteristic excitation functions
2309.03759 | M(otion)-mode Based Prediction of Ejection Fraction using
Echocardiograms | Early detection of cardiac dysfunction through routine screening is vital for
diagnosing cardiovascular diseases. An important metric of cardiac function is
the left ventricular ejection fraction (EF), where lower EF is associated with
cardiomyopathy. Echocardiography is a popular diagnostic tool in cardiology,
with ultrasound being a low-cost, real-time, and non-ionizing technology.
However, human assessment of echocardiograms for calculating EF is
time-consuming and expertise-demanding, raising the need for an automated
approach. In this work, we propose using the M(otion)-mode of echocardiograms
for estimating the EF and classifying cardiomyopathy. We generate multiple
artificial M-mode images from a single echocardiogram and combine them using
off-the-shelf model architectures. Additionally, we extend contrastive learning
(CL) to cardiac imaging to learn meaningful representations from exploiting
structures in unlabeled data allowing the model to achieve high accuracy, even
with limited annotations. Our experiments show that the supervised setting
converges with only ten modes and is comparable to the baseline method while
bypassing its cumbersome training process and being computationally much more
efficient. Furthermore, CL using M-mode images is helpful for limited data
scenarios, such as having labels for only 200 patients, which is common in
medical applications. | Ece Ozkan, Thomas M. Sutter, Yurong Hu, Sebastian Balzer, Julia E. Vogt | 2023-09-07T15:00:58 | http://arxiv.org/abs/2309.03759v1 | # M(otion)-mode Based Prediction of Ejection Fraction using Echocardiograms
###### Abstract
Early detection of cardiac dysfunction through routine screening is vital for diagnosing cardiovascular diseases. An important metric of cardiac function is the left ventricular ejection fraction (EF), where lower EF is associated with cardiomyopathy. Echocardiography is a popular diagnostic tool in cardiology, with ultrasound being a low-cost, real-time, and non-ionizing technology. However, human assessment of echocardiograms for calculating EF is time-consuming and expertise-demanding, raising the need for an automated approach. In this work, we propose using the M(otion)-mode of echocardiograms for estimating the EF and classifying cardiomyopathy. We generate multiple artificial M-mode images from a single echocardiogram and combine them using off-the-shelf model architectures. Additionally, we extend contrastive learning (CL) to cardiac imaging to learn meaningful representations from exploiting structures in unlabeled data allowing the model to achieve high accuracy, even with limited annotations. Our experiments show that the supervised setting converges with only ten modes and is comparable to the baseline method while bypassing its cumbersome training process and being computationally much more efficient. Furthermore, CL using M-mode images is helpful for limited data scenarios, such as having labels for only 200 patients, which is common in medical applications.
Keywords:Echocardiography M-mode Ultrasound Ejection Fraction Computer Assisted Diagnosis (CAD)
## 1 Introduction
Cardiovascular diseases (CVD) are the leading cause of death worldwide, responsible for nearly one-third of global deaths [29]. Early assessment of cardiac dysfunction through routine screening is essential, as clinical management and
behavioral changes can prevent hospitalizations and premature deaths. An important metric for assessing cardiac (dys)function is the left ventricular (LV) ejection fraction (EF), which evaluates the ratio between LV end-systolic and end-diastolic volumes [3; 21].
Echocardiography is the most common and readily available diagnostic tool to assess cardiac function, ultrasound (US) imaging being a low-cost, non-ionizing, and rapid technology. However, the manual evaluation of echocardiograms is time-consuming, operator-dependent, and expertise-demanding. Thus, there is a clear need for an automated method to assist clinicians in estimating EF.
M(otion)-mode is a form of US in which a single scan line is emitted and received at a high frame rate through time, capturing tissue dynamics for the assessment of different diseases [23]. M-mode is often utilized in clinical practice, e.g., in lung ultrasonography [1; 25] or echocardiography [6; 7; 26; 10]. Since cardiac function assessment relies on heart dynamics, M-mode images can be an excellent alternative to B(rightness)-mode image- or video-based methods. However, little effort has been directed toward exploiting M-mode images in an automated manner.
Data collection and annotation are expensive for most applications. Therefore, learning from limited labeled data is critical in data-limited problems, such as in healthcare. To overcome this data bottleneck, self-supervised learning (SSL) methods have been recently proposed to learn meaningful high-level representations from unlabeled data [16; 24].
**Related Work** A few existing works [14; 18] reconstruct M-mode images from B-mode videos to detect pneumothorax using CNNs. Furthermore, authors in [27] propose an automatic landmark localization method in M-mode images. A more related method using M-mode images in an automated manner to estimate EF is [22], which uses single M-mode images in parasternal long-axis view to measure chamber dimensions for calculating EF.
For automated EF prediction, some previous works exploit either still-images [17; 31; 8] or spatio-temporal convolutions on B(rightness)-mode echocardiography videos [21]. However, still-image-based methods have a high variability [20], and video-based methods rely on a complex pipeline with larger models. Furthermore, [19] uses vision transformers and CNNs to tackle the problem of estimating the LV EF, and [15] uses geometric features of the LV derived from ECG video frames to estimate EF. The authors in [28] evaluate ML-based methods in a multi-cohort setting using different imaging modalities. In the SSL setting, [5] propose a contrastive learning framework for deep image regression, which consists of a feature learning branch via a novel adaptive-margin contrastive loss and a regression prediction branch using echocardiography frames as input.
**Our Contribution** We propose to extract images from readily available B-mode echocardiogram videos, each mimicking an M-mode image from a different scan line of the heart. We combine the different artificial M-mode images using off-the-shelf model architectures and estimate EF to diagnose cardiomyopathy in a supervised regime. Using M-mode images allows the model to naturally observe the motion and sample the heart from different angles while bypassing cumbersome 3D models. Secondly, we propose an alternative scheme for predicting EF using generated M-mode images in a self-supervised fashion while extending contrastive learning. We design a problem-specific contrastive loss for M-mode images to learn representations with structure and patient awareness. We evaluate both regimes on the publicly available EchoNet-Dynamic dataset [20] and demonstrate both models' effectiveness.
To the best of our knowledge, this is the first work on image-based cardiac function prediction methods that incorporate temporal information to estimate EF. Furthermore, our method can easily be applied to other problems where cardiac dynamics play an essential role in the diagnosis. To ensure reproducibility, we made the code available: [https://github.com/thomassutter/mmodeecho](https://github.com/thomassutter/mmodeecho).
## 2 Methods
This work aims to create a pipeline with as little manual intervention as possible; thus, our method consists of two parts, as shown in Figure 1. The first part extracts M-mode images from readily available B-mode videos. The second part is representation learning: lower-level representations that preserve information of the input image are learned and used to predict EF from M-mode images, under two schemes, supervised and self-supervised learning.
### From B-mode Videos to M-mode Images
Assume our dataset contains \(N\) patients. For each patient \(i=\{1,2,\cdots,N\}\), the label \(y_{i}\) indicates its EF. Furthermore, the B-mode echocardiogram video of each patient \(i\) is given of size \(h\times w\times t\) with \(h\) being height, \(w\) width, and \(t\) number of frames of the video. The \(m\)-th M-mode image of patient \(i\) is given as \(\mathbf{x}_{i}^{m}\) with \(m=\{1,2,\cdots,M\}\). It is a single line of pixels through the center of the image with an angle \(\theta_{m}\) over frames, assuming LV is around the center throughout the video, as in Figure 1(a). This image, corresponding to \(\theta_{m}\), is then of size \(s_{m}\times t\), with \(s_{m}\) as the length of the scan line. For simplicity, we set \(s_{m}=h\;\forall\;m\) independent of its angle \(\theta_{m}\). For generating multiple M-mode images, a set of \(M\) angles \(\mathbf{\theta}=[\theta_{1},\dots,\theta_{M}]\) is used to generate \(M\) M-mode images, where the angles \(\mathbf{\theta}\) are equally spaced between \(0^{\circ}\) and \(180^{\circ}\).
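In code, this extraction amounts to sampling each frame along a rotated line through the image centre and stacking the samples over time. The sketch below is a hypothetical implementation (nearest-neighbour sampling; all function and variable names are ours) that turns a B-mode clip stored as a \((t,h,w)\) array into \(M\) artificial M-mode images of size \(s\times t\).

```python
import numpy as np

def extract_mmode(video, theta_deg):
    """video: (t, h, w) B-mode frames -> one (s, t) M-mode image, with s = h."""
    t, h, w = video.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    pos = np.linspace(-h / 2.0, h / 2.0, h)              # positions along the scan line
    ang = np.deg2rad(theta_deg)
    ys = np.clip(np.round(cy + pos * np.sin(ang)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + pos * np.cos(ang)).astype(int), 0, w - 1)
    return video[:, ys, xs].T                            # (s, t)

def extract_all_mmodes(video, M=10):
    thetas = np.linspace(0.0, 180.0, M, endpoint=False)  # equally spaced angles in [0, 180)
    return np.stack([extract_mmode(video, th) for th in thetas])   # (M, s, t)
```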
While the proposed approach for generating M-mode images is intuitive and works well (see Section 3.3), other approaches are also feasible. For instance, the center of rotation in the middle of the image in our M-mode generation process could be changed. Like that, we could mimic the behavior of the data collection process as every generated M-mode image would resemble a scan line of the US probe. However, the main goal of this work is to highlight the potential of M-mode images for the analysis of US videos. Given our convincing results, we leave the exploration of different M-mode generation mechanisms for future work.
### Learning Representations from M-mode Images
#### 2.2.1 Supervised Learning for EF Prediction
We aim to learn supervised representations using off-the-shelf model architectures to estimate EF. Instead of
using a single M-mode, one can aggregate the information of M-mode images from the same patient to increase robustness. We evaluate two fusion methods for aggregating information among the \(M\) M-mode images: early-fusion and late-fusion [2]. With early fusion, we construct a \(M\times s\times t\) image with the \(M\) M-mode images being the \(M\) channels of the newly created image. In late-fusion, we exploit three different methods. For all of the late-fusion schemes, we first infer an abstract representation \(\mathbf{z}_{i}^{m}\) for every M-mode image \(\mathbf{x}_{i}^{m}\). The representations \(\mathbf{z}_{i}^{m}\) are then aggregated to a joint representation \(\mathbf{\tilde{z}}_{i}\) using an LSTM cell [11], averaging, or concatenating.
We utilize a standard ResNet architecture [9] with 2D-convolutional layers independent of the fusion principle. With 2D-convolutions, we assume a single M-mode image as a 2D gray-scale image with two spatial dimensions, \(s\) and \(t\).
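A minimal late-fusion variant with concatenation can be sketched as follows. The paper uses a standard 2D ResNet [9] as the shared encoder; the tiny CNN below is only a stand-in to keep the example self-contained and runnable, and all names are illustrative. Early fusion would instead stack the \(M\) modes as input channels of a single network.

```python
import torch
import torch.nn as nn

class MModeLateFusion(nn.Module):
    def __init__(self, n_modes=10, feat_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(                     # shared encoder (stand-in for ResNet)
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.head = nn.Linear(n_modes * feat_dim, 1)      # concatenation late fusion

    def forward(self, x):                                 # x: (B, M, s, t)
        B, M, s, t = x.shape
        z = self.encoder(x.reshape(B * M, 1, s, t))       # per-mode representations z_i^m
        return self.head(z.reshape(B, -1)).squeeze(-1)    # joint representation -> EF estimate
```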
Figure 1: Overview of our proposed method. (a) Generate M-mode images from B-mode echocardiography videos at different scan lines. (b) Learn representations from the generated M-mode images using supervised and self-supervised learning schemes. (c) Evaluate EF prediction to diagnose cardiomyopathy.
#### 3.2.2 Self-Supervised Learning for EF Prediction
This part aims to learn meaningful representations from unlabeled data to estimate EF using echocardiograms. To this end, we propose an SSL scheme for M-mode images based on contrastive learning, where M-mode images from the same patient can naturally serve as positive pairs since they share labels for many downstream tasks. As discussed by [30], bio-signal data is inherently highly heterogeneous; thus, when applying learning-based methods to patient data, we need to consider both the similarity and the difference between samples originating from the same patient. Therefore, we propose a problem-specific contrastive loss with patient and structure awareness, as shown in Figure 2.
Figure 2: Overview of our proposed SSL method. The contrastive loss includes (a) patient awareness to attract similarity between data from the same patient while discouraging it between different patients and (b) structure awareness to take the (possible) dissimilarity from the same patient into account.
#### Contrastive Learning Framework
The framework contains training and evaluation stages and the overview is illustrated in Figure 3. In the training stage, we optimize the model with the contrastive loss leveraging the information from underlying structures of the unlabeled images. In the evaluation stage, a multi-layer perceptron (MLP) head is trained on top of the learned representations in a supervised manner.
For each generated M-mode image \(\mathbf{x}_{i}^{m}\), we generate its augmented view \(\mathbf{x}_{i}^{v(m)}\) using the \(Aug(\cdot)\) module. So the augmented dataset is represented as \(\{(\mathbf{x}_{i}^{m},\ \mathbf{x}_{i}^{v(m)},\ y_{i})\}\). The encoder network \(Enc(\cdot)\) maps each image \(\mathbf{x}_{i}^{m}\) to a feature vector \(\mathbf{z}_{i}^{m}\). We utilize a standard ResNet architecture [9].
In the training stage, \(\mathbf{z}_{i}^{m}\) is normalized to the unit hyper-sphere before being passed to the projection network. Following the work [4], we introduce a learnable non-linear projection network between the representation and the contrastive loss. The projection network \(Proj(\cdot)\) takes the normalized lower-level representation \(\mathbf{z}_{i}^{m}\) as input and outputs the higher-level representation \(\mathbf{p}_{i}^{m}\). We use a two-layer MLP with ReLU activation as \(Proj(\cdot)\) in this work.
In the evaluation stage, we initialize the parameters of the encoder network \(Enc(\cdot)\) with the model obtained from contrastive learning and add an MLP head \(Head(\cdot)\) to the top. For each patient \(i\), we have \(M\) feature vectors \(\mathbf{z}_{i}^{m}\in\mathbb{R}^{K}\). The \(M\) vectors are then fused to get the joint representation \(\mathbf{\tilde{z}}_{i}\in\mathbb{R}^{K\times M}\) and passed to \(Head(\cdot)\). One can have different fusion methods for aggregating information among the \(M\) vectors, e. g. using an LSTM cell [11], averaging, or concatenating.
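For concreteness, with the dimensions reported later in Table 1 (encoder output 512, projection hidden layer 2048, projection output 128, head width 256), the projection network and a simple concatenation-based evaluation head might be instantiated as below; this is an assumption-laden illustration, not the authors' released code.

```python
import torch.nn as nn

proj = nn.Sequential(                      # Proj(.): two-layer MLP with ReLU
    nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 128),
)
head = nn.Sequential(                      # Head(.): MLP on the fused representation (M=10 modes)
    nn.Linear(512 * 10, 256), nn.ReLU(), nn.Linear(256, 1),
)
```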
Figure 3: Schema of the contrastive learning framework with training and evaluation stages. The training stage exploits the contrastive loss to learn a representation leveraging the unlabelled images. The evaluation stage exploits these learned representations in a supervised manner to predict EF.
#### Contrastive Loss for M-mode Images
To account for (dis)similarities, we design two loss functions for learning both patient- and structure-awareness.
(a) Patient-aware loss: The goal is to attract the representations from the same patient to be similar while pushing apart representations from different patients (see Figure 2 (a)). This enforces two M-mode images to be considered similar if they are from the same patient and dissimilar if they are from different patients. The patient-aware loss is given as:
\[L^{PA}=-\frac{1}{M-1}\sum_{i=1}^{N}\sum_{m=1}^{M}\sum_{l\neq m}\log\frac{\exp( \boldsymbol{p}_{i}^{m}\cdot\boldsymbol{p}_{i}^{l}/\tau)}{\sum_{j,k}\exp( \boldsymbol{p}_{i}^{m}\cdot\boldsymbol{p}_{j}^{k}/\tau)-\exp(\boldsymbol{p}_{i }^{m}\cdot\boldsymbol{p}_{i}^{m}/\tau)} \tag{1}\]
where \(N\) is the number of patients in one batch, \(M\) is the number of original M-mode images used for each patient, and \(\tau\) is the temperature scaling parameter. The term \(\boldsymbol{p}_{i}^{m}\) represents the output of \(Proj(\cdot)\).
Inspired by [30], we tried defining a neighborhood function to limit the similarity of M-mode images from the same patient. However, incorporating the neighborhood into patient-awareness did not further improve the results; thus, we used all M-mode images per patient to define the patient-aware loss.
(b) Structure-aware loss: If we only use patient-aware loss \(L^{PA}\), there exists a risk that all images from the same patient collapse to a single point [30]. So we propose the structure-aware loss to introduce some diversity (see Figure 2 (b)). To incorporate this into the learned representations, we construct positive pairs from each M-mode image with its augmentation and consider other combinations as negative pairs. It is then defined as:
\[L^{SA}=-\sum_{i=1}^{N}\sum_{m=1}^{2M}\log\frac{\exp(\boldsymbol{p}_{i}^{m} \cdot\boldsymbol{p}_{i}^{v(m)}/\tau)}{\sum_{l\neq m}\exp(\boldsymbol{p}_{i}^{m }\cdot\boldsymbol{p}_{i}^{l}/\tau)} \tag{2}\]
If image \(m\) is an original image, then \(v(m)\) represents its augmented view; if image \(m\) is an augmented image, then \(v(m)\) represents the original image. Minimizing \(L^{SA}\) drives the representation pairs from the augmented images in the numerator close while pushing the representations in the denominator far away, where the denominator contains M-mode images from the same patient.
Finally, we combine the two losses to get structure-aware and patient-aware contrastive loss for M-mode images using the hyperparameter \(\alpha\) to control the trade-off between the awareness terms:
\[L^{CL}=\alpha L^{PA}+(1-\alpha)L^{SA}. \tag{3}\]
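A direct transcription of Eqs. (1)-(3) could look like the sketch below. It reflects our reading of the equations (the patient-aware term uses the \(M\) original modes of each patient in the batch, the structure-aware term uses the \(2M\) originals plus augmentations of a single patient, and projections are \(\ell_{2}\)-normalized before the dot products); the default \(\tau\) and \(\alpha\) follow the values reported later in Table 1, and all function names are ours.

```python
import torch
import torch.nn.functional as F

def patient_aware_loss(p, tau=0.01):
    """Eq. (1); p: (N, M, D) projections of the M original M-mode images per patient."""
    N, M, D = p.shape
    z = F.normalize(p, dim=-1).reshape(N * M, D)
    sim = z @ z.t() / tau
    patient = torch.arange(N, device=p.device).repeat_interleave(M)
    self_mask = torch.eye(N * M, dtype=torch.bool, device=p.device)
    pos = (patient.unsqueeze(0) == patient.unsqueeze(1)) & ~self_mask
    denom = torch.exp(sim).masked_fill(self_mask, 0.0).sum(dim=1, keepdim=True)
    log_prob = sim - torch.log(denom)
    return -log_prob[pos].sum() / (M - 1)

def structure_aware_loss(p, p_aug, tau=0.01):
    """Eq. (2); p, p_aug: (N, M, D) original and augmented projections."""
    N, M, D = p.shape
    z = F.normalize(torch.cat([p, p_aug], dim=1), dim=-1)      # (N, 2M, D)
    loss = p.new_zeros(())
    for i in range(N):                                         # comparisons stay within a patient
        sim = z[i] @ z[i].t() / tau
        idx = torch.arange(2 * M, device=p.device)
        v = (idx + M) % (2 * M)                                # pair each image with its view
        diag = torch.eye(2 * M, dtype=torch.bool, device=p.device)
        denom = torch.exp(sim).masked_fill(diag, 0.0).sum(dim=1)
        loss = loss - (sim[idx, v] - torch.log(denom)).sum()
    return loss

def combined_loss(p, p_aug, alpha=0.8, tau=0.01):              # Eq. (3)
    return alpha * patient_aware_loss(p, tau) + (1 - alpha) * structure_aware_loss(p, p_aug, tau)
```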
## 3 Experiments and Results
### Dataset
We use the publicly available EchoNet-Dynamic dataset [20]. It contains \(10^{\prime}030\) apical-\(4\)-chamber echocardiography videos from individuals who underwent imaging between 2016 and 2018 as part of routine clinical care at Stanford University Hospital. Each B-mode video was cropped and masked to remove information outside the scanning sector and downsampled into standardized \(112\times 112\) pixel videos. For simplicity, we used videos with at least 112 frames. We use the official splits with 7465 training, 1289 validation, and 1282 test set samples.
### Experimental Setup
We evaluate the models' performance using classification accuracy for five random seeds and report the mean performance and standard deviation. During training, all supervised models optimize the estimation of EF as a regression task. For testing, we use a constant threshold \(\tau\) for classifying cardiomyopathy. In all experiments, we set \(\tau=0.5\). Hence, an estimated EF below \(\tau\) results in classifying a sample as cardiomyopathic.
We evaluate all models using the area under the receiver operating characteristic (AUROC) and the area under the precision-recall curve (AUPRC) with respect to whether a patient is correctly classified as healthy or cardiomyopathic. Additionally, we report the mean absolute error (MAE) and the root mean squared error (RMSE) of the predicted EF with respect to the true EF in the Supplementary Material. We report the mean performance, including standard deviations over five random seeds for all results.
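Concretely, the regression output is turned into the classification metrics with the fixed threshold as in the hypothetical helper below (scikit-learn is assumed for AUROC/AUPRC; a lower predicted EF is used as the score for the cardiomyopathy class).

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

def evaluate(ef_true, ef_pred, threshold=0.5):
    ef_true, ef_pred = np.asarray(ef_true), np.asarray(ef_pred)
    y_true = (ef_true < threshold).astype(int)   # 1 = cardiomyopathy
    score = threshold - ef_pred                  # lower predicted EF -> higher score
    err = ef_true - ef_pred
    return {
        "AUROC": roc_auc_score(y_true, score),
        "AUPRC": average_precision_score(y_true, score),
        "MAE": float(np.mean(np.abs(err))),
        "RMSE": float(np.sqrt(np.mean(err ** 2))),
    }
```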
We use the training set from EchoNet for pre-training (SSL), and apply a linear learning rate scheduler during the first 30 epochs as warm-up. For the supervised fine-tuning, we select different proportions of the training set in the limited labeled data scenario. All M-mode models are trained for 100 epochs using the Adam optimizer [12] with an initial learning rate of 0.001 and a batch size of 64. For image augmentation, we apply random horizontal flips and Gaussian noise. For the fusion of the M-mode representations we used concatenation. For the EchoNet model, we use the same model and parameters as in [21]. The model is trained for 45 epochs with a learning rate of 0.0001 and a batch size of 20. We do not use test-time augmentation for any of the models. We report the full set of hyperparameters used in our experiments in Table 1.
### Results and Discussion
#### 3.3.1 Evaluating M-mode Images in Supervised Setting
We train and evaluate models with different numbers of M-modes for \(M\in\{1,2,5,10,20,50\}\). We use the complete training set, including labels, as we are interested in the performance of the models depending on the number of available M-modes. Figure 4 shows the results for different numbers of M-modes. We see that late fusion models benefit from an increasing number of modes, whereas the early fusion method overfits quickly and never achieves a comparable performance.
#### 3.3.2 Evaluating Limited Data Regime
We evaluate the accuracy of the different models introduced in Section 2 for different amounts of labeled training samples.
Figure 4: Performance for different numbers of M-mode images using early and late-fusion methods. In (a), we evaluate the classification performance with respect to AUPRC and AUROC in (b), the regression performance with respect to RMSE in (c), MAE in (d), and \(R^{2}\)-score in (e).
As most medical datasets do not have the size of EchoNet-Dynamic [13], methods for medical machine learning should perform best in the limited labeled data regime. We use _E2E_ for the supervised and _CL_ for the self-supervised setting.
Additionally, we introduce _E2E+_ and _CL+_, which, inspired by EchoNet [21], uses random short clips for each training epoch. Both models use M-mode images of 32 frames with a sampling period of 2. We train and evaluate models using \(p\%\) of the full training set for \(p\in\{1,2,3,5,10,20,30,\)\(50,75,100\}\). All M-mode methods are trained with \(M=10\).
Figure 5 shows the limited labeled data experiment results. Although we are not able to reach the performance of the EchoNet model for any number of modes (see Figure 3(b)) if the number of labeled training samples is high (see Figure 4(a)), both supervised and self-supervised learning methods using M-mode instead of B-mode can outperform the EchoNet model in the low labeled data regime (\(p<5\%\), Figure 4(b)). Also, we observe that using shorter clips is useful for the self-supervised learning methods, with _CL+_ being able to achieve an AUROC over 0.85 with only around 200 labeled samples.
#### 4.2.2 Computational Cost
Furthermore, we compare the number of parameters and computational costs for different models in Table 2, where we used a multi-GPU setup with four NVIDIA GeForce RTX 2080 Ti GPUs. We report the computation time in seconds per batch (sec/B) and milliseconds per sample (msec/sample), and the memory requirements in gigabytes per batch (GB/B).
Our proposed M-mode image based models require around six times less time and ten times less memory to train and run inference per sample. Given the used
\begin{table}
\begin{tabular}{c|c|c} \hline \hline Parameter & Value & Description \\ \hline lr\_sup & 0.001 & learning rate for supervised training \\ lr\_cl & 1.0 & learning rate for SSL training \\ opt & Adam & optimizer for SSL and supervised training \\ bsz\_sup & 64 & batch size for supervised training \\ bsz\_cl & 256 & batch size for SSL training \\ epoch\_sup & 100 & epochs for supervised training \\ epoch\_cl & 300 & epochs for SSL training \\ epoch\_warm & 30 & warm-up epochs for SSL training \\ \(\alpha\) & 0.8 & loss trade-off \\ \(\tau\) & 0.01 & temperature scaling \\ Dim\_e & 512 & \(Enc(\cdot)\) output dimension \\ Dim\_ph & 2048 & \(Proj(\cdot)\) hidden layer dimension \\ Dim\_po & 128 & \(Proj(\cdot)\) output dimension \\ Dim\_lstm & 256 & LSTM output dimension \\ \hline \hline \end{tabular}
\end{table}
Table 1: List of the hyperparameters used in our experiments. We use the same hyperparameters for the E2E setup and the fine-tuning stage of the SSL setup (denoted as ”_sup” in Table 1). ”_cl” denotes the hyperparameters used in the SSL pre-training stage.
memory per batch, we could increase the batch size for the M-mode methods, lowering the computation time per sample even further, whereas the baseline model is already at the limit due to its architecture.
## 4 Discussion and Conclusion
In this work, we propose to generate M-mode images from readily available B-mode echocardiography videos and fuse these to estimate EF and thus assess cardiac dysfunction. Our results show that M-mode-based prediction methods are comparable to the baseline method while avoiding its complex training routine and reducing the computational cost and the need for expensive expert input.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & & & \multicolumn{2}{c}{Time (sec/B)} & \multicolumn{2}{c}{Time (msec/sample)} & \multicolumn{2}{c}{Memory (GB/B)} \\ \cline{4-9} Model & BS & \#Params (Mio.) & Train & Test & Train & Test & Train & Test \\ \hline EchoNet & 20 & 31.5 & 2.898 & 2.474 & 144.9 & 123.7 & 5.294 & 1.187 \\ E2E \& CL & 64 & 11.7 & 1.568 & 1.330 & 24.5 & 21.1 & 1.013 & 0.120 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Computational costs. We evaluate the EchoNet and the proposed M-mode methods with respect to the number of parameters, the computation time, and the memory requirements. All M-mode models are evaluated using \(M=10\). E2E defines the end-to-end supervised and CL the contrastive learning approach.
Figure 5: Results for different training set sizes using the proposed end-to-end supervised (E2E) and contrastive learning (CL) approaches. In (a), we train and evaluate the models on 10%-100% labeled training samples, in (b) only on 1%-10% of the samples. E2E and CL models are trained using a fixed long clip with length 112; E2E+ and CL+ are trained using random short clips with length 32. CL freeze and CL+ freeze are fine-tuned with the encoder parameters frozen.
Conventional M-mode images have a very high sampling rate, which results in a high temporal resolution so that even very rapid motion can be recorded. The generated M-mode images have significantly less temporal resolution than the conventional M-mode images from US machines. However, our results indicate that exploiting generated M-mode images does not limit the performance for EF estimation. As we do not use the M-mode images collected directly from the US machines, there is no need for an additional data collection step.
Additionally, we show the potential of pre-trained methods. In scenarios where expensive expert labels are not readily available, pre-training using unlabeled M-mode images outperforms more complicated pipelines highlighting the potential of M-Mode based pipelines for clinical use cases. In our future work, we want to investigate the use cases for M-mode on different diseases and further improve the performance of the proposed pre-training pipeline.
## 5 Acknowledgements
EO was supported by the SNSF grant P500PT-206746 and TS by the grant 2021-911 of the Strategic Focal Area "Personalized Health and Related Technologies (PHRT)" of the ETH Domain (Swiss Federal Institutes of Technology). | 早期心臓機能の異常を検出するための日常的な検査は、心血管疾患の診断にとって非常に重要です。心臓機能の重要な指標は左室出射率(EF)で、EFが低い方は心筋症と関連しています。心臓病学の診断ツールには、超音波が人気で、低コストで、リアルタイムで、非ionizing技術です。しかし、EFを計算するためのエコーグラフィの評価は、人為的評価が時間的にも専門的な知識が必要であり、自動化アプローチの必要性を浮き彫りにしています。この研究では、エコーグラフィのMモードを用いてEFを推定し、心筋症の分類を行います。私たちは、単一のエコーグラフィから複数の人工Mモード画像を作成し、市販モデルアーキテクチャを使用してそれらを組み合わせます。さらに、エコー画像を用いた対照的学習(CL)を、ラベル付きデータから |
2309.14225 | HumanMimic: Learning Natural Locomotion and Transitions for Humanoid
Robot via Wasserstein Adversarial Imitation | Transferring human motion skills to humanoid robots remains a significant
challenge. In this study, we introduce a Wasserstein adversarial imitation
learning system, allowing humanoid robots to replicate natural whole-body
locomotion patterns and execute seamless transitions by mimicking human
motions. First, we present a unified primitive-skeleton motion retargeting to
mitigate morphological differences between arbitrary human demonstrators and
humanoid robots. An adversarial critic component is integrated with
Reinforcement Learning (RL) to guide the control policy to produce behaviors
aligned with the data distribution of mixed reference motions. Additionally, we
employ a specific Integral Probabilistic Metric (IPM), namely the Wasserstein-1
distance with a novel soft boundary constraint to stabilize the training
process and prevent mode collapse. Our system is evaluated on a full-sized
humanoid JAXON in the simulator. The resulting control policy demonstrates a
wide range of locomotion patterns, including standing, push-recovery, squat
walking, human-like straight-leg walking, and dynamic running. Notably, even in
the absence of transition motions in the demonstration dataset, robots showcase
an emerging ability to transit naturally between distinct locomotion patterns
as desired speed changes. | Annan Tang, Takuma Hiraoka, Naoki Hiraoka, Fan Shi, Kento Kawaharazuka, Kunio Kojima, Kei Okada, Masayuki Inaba | 2023-09-25T15:31:34 | http://arxiv.org/abs/2309.14225v4 | HumanMimic: Learning Natural Locomotion and Transitions for Humanoid Robot via Wasserstein Adversarial Imitation
###### Abstract
Transferring human motion skills to humanoid robots remains a significant challenge. In this study, we introduce a Wasserstein adversarial imitation learning system, allowing humanoid robots to replicate natural whole-body locomotion patterns and execute seamless transitions by mimicking human motions. First, we present a unified primitive-skeleton motion retargeting to mitigate morphological differences between arbitrary human demonstrators and humanoid robots. An adversarial critic component is integrated with Reinforcement Learning (RL) to guide the control policy to produce behaviors aligned with the data distribution of mixed reference motions. Additionally, we employ a specific Integral Probabilistic Metric (IPM), namely the Wasserstein-1 distance with a novel soft boundary constraint to stabilize the training process and prevent mode collapse. Our system is evaluated on a full-sized humanoid JAXON in the simulator. The resulting control policy demonstrates a wide range of locomotion patterns, including standing, push-recovery, squat walking, human-like straight-leg walking, and dynamic running. Notably, even in the absence of transition motions in the demonstration dataset, robots showcase an emerging ability to transit naturally between distinct locomotion patterns as the desired speed changes. Supplementary video can be found here: WATCH VIDEO.
## I Introduction
Natural selection has shaped human ability, enabling humans to perform various locomotion behaviors and adeptly shift gait patterns in response to speed changes or external disturbances. Transferring the natural-looking locomotion and seamless transitions to humanoid robots remains a long-standing challenge, primarily due to the control complexity and intricacies of motion designs.
While numerous studies based on simplified models [1][2][3][4] and optimal control [5][6] have demonstrated promising performance on structured locomotion paradigms, the intrinsically under-actuated and nonlinear characteristics of humanoids complicate the establishment of a unified model that accurately captures the dynamics across diverse gait transitions. On the other hand, deep reinforcement learning (deep RL) semi-automates the complex modeling process by maximizing the cumulative reward, leading to its growing popularity in developing advanced locomotion skills for quadrupedal robots [7][8], bipedal robots [9][10] and even humanoid robots [11][12]. Nevertheless, RL-generated motions for high-DOF humanoids often exhibit undesired whole-body behaviors, including irregular arm swings, aggressive ground impacts, and unnatural gaits. Typical solutions utilize supplemental footstep planners [13], heuristic gait reward design [10] or pre-optimized gait and joint trajectory [14] to induce specific locomotion patterns. But given our limited understanding of the underlying characteristics that depict the natural behaviors of human, these modules frequently employ basic principles like symmetry and energy minimization [15], resulting in more stereotypical robotic motions compared to humans.
For acquiring natural motions without the need for laborious reward engineering, the adversarial motion prior (AMP) [16] exploits an additional discriminator that outputs a style reward to encourage generated motions to resemble human demonstrations. In practice, discriminators trained with binary cross entropy (BCE) or least-squares (LS) loss often face unstable training and mode collapse, mainly due to the inadequacy of the metrics used to measure distances between non-overlapping probability distributions in high-dimensional spaces. In the closely related domain of Generative Adversarial Networks (GANs), researchers have introduced several types of integral probability metrics (IPMs) [17][18][19], especially the Wasserstein distance [20], to address the aforementioned challenges. However, the unbounded Wasserstein distance [21] presents a significant challenge when trying to balance the style reward with other task-specific rewards like velocity tracking. Moreover,
Fig. 1: Our Wasserstein adversarial imitation learning system enables a full-sized humanoid to exhibit various human-like natural locomotion behaviors and achieve seamless transitions as velocity commands change.
the significant morphological differences between human demonstrators and humanoid robots, including joint configurations, body proportions, and bone hierarchies, pose challenges for the direct imitation of human demonstrations.
In this work, we present an adversarial imitation learning system that enables full-sized humanoids to autonomously acquire a variety of realistic locomotion behaviors through imitating human demonstrations. First, we introduce a unified primitive-skeleton motion retargeting approach to address morphological differences between arbitrary human demonstrators and humanoid robots. We exploit the power of the Wasserstein-1 distance, incorporating a novel soft boundary constraint, to ensure stable training dynamics and prevent the convergence of generated motions to a limited set of trivial modes. A single learned policy showcases a diverse array of robust and natural locomotion patterns, encompassing standing, push-recovery, squat walking, human-like straight-leg walking, dynamic running, and seamless transitions in response to changes in velocity commands, as shown in Fig. 1. In short, our main contributions are: (i) Proposing an improved adversarial imitation learning system with a Wasserstein critic and soft boundary constraints to address unstable training and mode collapse. (ii) Detailing a unified primitive-skeleton motion retargeting technique applicable to arbitrary human skeleton sources and humanoid models. (iii) Achieving whole-body natural locomotion and transitions for humanoids and evaluating the robustness through sim-to-sim settings in a high-fidelity simulator.
## II Related Work
**RL for bipedal locomotion**. Recent advancements in RL-based control strategies have significantly enhanced bipedal locomotion [22][23]. For instance, the bipedal robot Cassie not only mastered versatile gait patterns through the use of periodic-parametrized reward functions [10] but also achieved the Guinness World Record for the fastest 100m dash using pre-optimized reference running gaits [14]. Jeon et al. [24] utilized potential-based reward shaping to ensure faster convergence and more robust humanoid locomotion. Shi et al. [25] integrated an assistive force curriculum into the learning process, allowing the acquisition of multiple agile humanoid motion skills in reference-free settings. In a more recent study, the full-sized humanoid HRP-5P [26] showcased robust walking using actuator current feedback, while Kim et al. [12] demonstrated a torque-based policy to bridge sim-to-real gaps. Deepmind [27] managed to instill agile soccer skills in a miniature humanoid via a two-stage teacher-student distillation and self-play. Additionally, attention-based transformers [11] have been employed to achieve more versatile locomotion in the humanoid Digit.
**Motion imitation from real-world demonstrations**. Leveraging motion demonstrations from living creatures enables robots to acquire natural and versatile locomotion skills [28][29] that are challenging to manually define. A predominant imitation strategy involves tracking either reference joint trajectories [30][16][14] or extracted gait features [31][32]. However, these explicit tracking techniques are often limited to separate motion clips, which can disrupt smooth transitions between different locomotion patterns. Drawing inspiration from Generative Adversarial Imitation Learning (GAIL) [33], Peng et al. introduced AMP [34] and its successor ASE [35]. These approaches empower physics-based avatars to carry out objective tasks while simultaneously imitating the underlying motion styles from extensive unstructured datasets in an implicit manner. Variants of AMP have been further employed for learning agile quadrupedal locomotion [36][37][21] and terrain-adaptive skills [38][39], exemplifying its efficacy in eliminating the need for intricate reward function designs.
Despite the advancements in other domains, methods similar to AMP have not been extensively explored for humanoid robots. To bridge this gap, in this work, we present a Wasserstein adversarial imitation system with soft boundary constraints as an enhancement to the existing AMP techniques. Our aim is to provide a foundational training algorithm for future deployment on full-sized humanoid robots in real-world scenarios.
## III Motion Retargeting
To transfer reference motion to the robot, certain retargeting methods [40], [41] consider both kinematic and dynamic constraints, requiring accurate dynamic modeling or complex balance controllers. In this section, we detail a flexible motion retargeting approach based on the unified primitive skeleton, emphasizing geometry consistency. The kinematic and dynamic constraints such as feet contact state and balance will be satisfied in the reinforcement learning paradigm in the next section. Our retargeting involves four key procedures.
**Unified primitive skeleton binding**. Skeletal structures of both humans and humanoid robots are known to correspond to homeomorphic graphs [42]. Leveraging this property, we extract what we term a 'primitive skeleton' that encapsulates the foundational geometric and hierarchical characteristics shared across various skeletons. In the process of primitive skeleton binding, we first construct kinematic trees for all involved skeletons. These trees are subsequently merged into
Fig. 2: Binding the primitive skeleton for the humanoid JAXON to the MoCap skeleton by merging the bone to a common primitive skeleton.
a unified primitive skeleton, retaining only a singular bone between two successive key joints. Users manually select \(n\) key joints, offering an intuitive and flexible mechanism for loose binding between the source and target skeleton groups. Once binding is complete, we compute the length ratio \(S=\{s_{k}\mid k\in\{1,\ldots,n\}\}\) for each bone within the primitive skeleton. An illustrative example of this binding between the Humanoid [43] and CMU MoCap [44] data skeleton is presented in Figure 2.
**Coordinate transformation**. Consider a MoCap source motion sequence of \(T\) frames \(M_{s}^{\prime}=\{m_{t}^{\prime}\mid t\in\{1,\ldots,T\}\}\), with frame \(t\) as \(m_{t}^{\prime}=\left({}^{w}P_{r}^{\prime},{}^{w}R_{r}^{\prime},{}^{0}R_{1}^{\prime},\ldots,{}^{j-1}R_{j}^{\prime}\right)\). Here, \({}^{w}P_{r}^{\prime}\) and \({}^{w}R_{r}^{\prime}\) represent the root's position and orientation w.r.t the world coordinates, and \({}^{j-1}R_{j}^{\prime}\) indicates the local orientation of the source skeleton's joint \(j\) w.r.t its parent joint. Applying iterative homogeneous transformations along the kinematic tree, denoted by \({}^{w}P_{j}^{\prime}=H\left({}^{w}P_{r}^{\prime},{}^{w}R_{r}^{\prime},{}^{0}R_{1}^{\prime},\ldots,{}^{j-1}R_{j}^{\prime}\right)\), we derive the global position for each joint in the source skeletons. The relative position vector between adjacent key joints is computed as \(\vec{r_{k}}^{\prime}={}^{w}P_{k}^{\prime}-{}^{w}P_{k-1}^{\prime}\). We scale it by \(\vec{r}_{k}=s_{k}\cdot\vec{r_{k}}^{\prime}\) to get the relative position vector \(\vec{r}_{k}\) in the target robot skeleton. Finally, we sum up the relative position vectors along the kinematic chains and apply a transformation to get the key joint Cartesian positions w.r.t the root of the robot skeleton as \({}^{r}P_{k}=H\left(\sum_{i=1}^{k}\vec{r}_{i}\right)\). All the end-effector poses \({}^{r}P_{e}\in\mathbb{R}^{3}\times\mathrm{SO}(3)\) are incorporated into the final robot motion frame \(m_{t}=\left({}^{w}P_{r},{}^{w}R_{r},{}^{r}P_{k},{}^{r}P_{e}\right)\). Here \(e\) denotes wrists, feet and head.
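The per-bone scaling and chain summation above can be sketched in a few lines of Python. This is a minimal illustration only; the joint naming, the `chain` ordering, and the `root_transform` helper are assumed placeholders rather than the released pipeline.

```python
import numpy as np

def retarget_key_joints(src_positions, chain, length_ratios, root_transform):
    """Map source key-joint world positions onto the robot skeleton.

    src_positions: dict joint_name -> np.ndarray(3,), world positions of the
                   source skeleton's key joints (after forward kinematics).
    chain:         ordered list of key-joint names from the root outwards.
    length_ratios: dict joint_name -> s_k, bone-length ratio robot/source.
    root_transform: 4x4 homogeneous transform expressing offsets in the robot
                    root frame (assumed given).
    """
    robot_positions = {}
    accumulated = np.zeros(3)
    prev = chain[0]
    for k in chain[1:]:
        # Relative vector between adjacent key joints, scaled by the bone ratio s_k.
        r_k = length_ratios[k] * (src_positions[k] - src_positions[prev])
        accumulated += r_k
        # Sum along the chain and express the offset w.r.t. the robot root.
        p = root_transform @ np.append(accumulated, 1.0)
        robot_positions[k] = p[:3]
        prev = k
    return robot_positions
```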
**Multi-objective inverse kinematics**. To map the key joint Cartesian position \({}^{r}P_{k}\) and end-effector pose \({}^{r}P_{e}\) to joint positions \(\theta=\left(\theta_{1},\theta_{2},\ldots\theta_{n}\right)\), we form the Whole-Body Inverse Kinematics as a gradient-based optimization problem [45] with three goals,
\[\begin{split} C_{1}&=\sum_{k}\left\|{}^{r}P_{k}-r_{ k}\left(\theta\right)\right\|^{2},\\ C_{2}&=\sum_{e}\left\|{}^{r}P_{e}-P_{e}\left( \theta\right)\right\|^{2},\\ C_{3}&=\left\|\theta_{t}-\theta_{t-1}\right\|^{2}, \end{split} \tag{1}\]
where the \(r_{k}\left(\theta\right)\) and \(P_{e}\left(\theta\right)\) are the current Cartesian position and pose calculated from the current joint pose during gradient descent iterations. The main goals consist of the **position goal**\(C_{1}\) for all key joints and the **pose goal**\(C_{2}\) for the end-effectors including hands, foot soles and head. An additional **minimal displacement goal**\(C_{3}\) is introduced to maintain each joint variable close to the previous motion frames. This is crucial for the highly redundant humanoids as multiple solutions might satisfy \(C_{1}\) and \(C_{2}\). The overall objective function is the weighted sum of each individual goal cost,
\[C(\theta)=\sum_{i}w_{i}C_{i}\left(\theta\right),\qquad\theta^{*}=\operatorname*{arg\,min}_{\theta}C(\theta). \tag{2}\]
The weights \(w_{i}\) are heuristically set to \((1,1,0.2)\). The joint position and velocity limits are incorporated as constraints,
\[\begin{split}\theta_{\min}&\leq\theta_{t}\leq \theta_{\max},\\ \dot{\theta}_{\min}&\leq\frac{\theta_{t}-\theta_{t-1 }}{\Delta t}\leq\dot{\theta}_{\max}.\end{split} \tag{3}\]
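A compact way to realize the weighted-sum objective in Eq. (2) under the box constraints in Eq. (3) is plain projected gradient descent. The sketch below is only illustrative: it assumes a forward-kinematics routine `fk(theta)` returning key-joint positions and end-effector poses, uses a numerical gradient instead of analytic Jacobians, and treats pose errors as simple vector differences.

```python
import numpy as np

def ik_step(theta, theta_prev, targets, fk, weights=(1.0, 1.0, 0.2),
            lr=1e-2, theta_min=None, theta_max=None, dtheta_max=None, dt=1.0 / 30):
    """One projected-gradient step of the whole-body IK in Eqs. (1)-(3).

    fk(theta) -> (key_positions, ee_poses): forward kinematics (assumed given).
    targets   -> (P_k, P_e): desired key-joint positions and end-effector poses,
                 keyed the same way as the fk outputs.
    """
    eps = 1e-4

    def cost(th):
        P_k, P_e = fk(th)
        c1 = sum(np.sum((targets[0][k] - P_k[k]) ** 2) for k in P_k)   # position goal C1
        c2 = sum(np.sum((targets[1][e] - P_e[e]) ** 2) for e in P_e)   # end-effector goal C2
        c3 = np.sum((th - theta_prev) ** 2)                            # minimal displacement C3
        return weights[0] * c1 + weights[1] * c2 + weights[2] * c3

    # Numerical gradient for brevity; a real implementation would use analytic Jacobians.
    grad = np.array([(cost(theta + eps * e) - cost(theta - eps * e)) / (2 * eps)
                     for e in np.eye(len(theta))])
    theta_new = theta - lr * grad

    # Project onto joint velocity and position limits (Eq. 3).
    if dtheta_max is not None:
        theta_new = np.clip(theta_new, theta_prev - dtheta_max * dt,
                            theta_prev + dtheta_max * dt)
    if theta_min is not None:
        theta_new = np.clip(theta_new, theta_min, theta_max)
    return theta_new
```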
**Post-processing**. We compute root and joint velocities from sequential frame differences. Linear and Slerp interpolation are applied to positions and orientations between discrete motion frames. Moreover, an exponential moving average filter is applied to smooth position and velocity spikes.
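The finite-difference velocities and the exponential moving average used in post-processing amount to the following sketch; the smoothing factor `alpha` is an illustrative choice, not a value reported here.

```python
import numpy as np

def postprocess(joint_pos, dt, alpha=0.8):
    """joint_pos: (T, n) array of retargeted joint positions per frame."""
    # Joint velocities from sequential frame differences.
    joint_vel = np.diff(joint_pos, axis=0) / dt
    joint_vel = np.vstack([joint_vel, joint_vel[-1:]])  # pad to keep length T

    # Exponential moving average filter to suppress position/velocity spikes.
    smoothed_pos = np.copy(joint_pos)
    smoothed_vel = np.copy(joint_vel)
    for t in range(1, len(joint_pos)):
        smoothed_pos[t] = alpha * smoothed_pos[t - 1] + (1 - alpha) * joint_pos[t]
        smoothed_vel[t] = alpha * smoothed_vel[t - 1] + (1 - alpha) * joint_vel[t]
    return smoothed_pos, smoothed_vel
```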
## IV Wasserstein Adversarial Imitation
Our Wasserstein adversarial imitation learning framework, as illustrated in Figure 3, incorporates actor-critic networks, a Wasserstein critic, and a motor-level Proportional Derivative (PD) controller. The actor updates using policy gradients derived from both environment rewards and the Wasserstein critic. The Wasserstein critic undergoes adversarial training based on the Wasserstein-1 distance complemented by a soft boundary loss. When given a user-defined velocity command, our setup facilitates humanoids in mirroring the velocity, ensuring smooth transitions in locomotion.
### _Velocity-conditioned reinforcement learning_
We formulate the humanoid locomotion control as a velocity-goal-conditioned [46] Markov decision process, with the velocity goal \(v^{*}\sim p(v)\in\mathcal{V}\), observation state \(s\in\mathcal{S}\), action \(a\sim\pi(\cdot|s,v^{*})\in\mathcal{A}\), reward \(r=r(s,a,v^{*})\) and discount factor \(\gamma\in(0,1]\). The agent updates the decision policy \(\pi\) through interactions with the surrounding environments to maximize the expected discounted return under the condition of the desired velocity
\[J(\pi)=\mathbb{E}_{v^{*}\sim p(v),\tau\sim p(\cdot|\pi,v^{*})}\left[\sum_{t}\gamma^{t}r\left(s_{t},a_{t},v^{*}\right)\right]. \tag{4}\]
The total reward is composed of two components: (1) the velocity-tracking reward \(r^{V}\) and (2) the style reward \(r^{S}\),
\[r_{t}=w^{V}r^{V}+w^{S}r^{S}, \tag{5}\]
where \(w^{V}\) and \(w^{S}\) denote the combination weights on each term. Reward \(r^{V}\) encourages the robot to follow the commanded CoM velocities; it is designed as normalized exponential errors of the linear velocity \(v^{*}_{xy}\) and the heading velocity separately,
\[\begin{split} r^{V}=& w^{l}\exp\left(-\frac{\|v^{*}{} _{xy}-v_{xy}\|^{2}}{\lambda_{l}\|v^{*}{}_{xy}\|}\right)\\ &+w^{a}\exp\left(-\frac{\|w^{*}{}_{z}-w_{z}\|^{2}}{\lambda_{a}|w^{ *}{}_{z}|}\right),\end{split} \tag{6}\]
where \(w^{l}\) and \(w^{a}\) are hyper-parameters that control the importance of each tracking error, and the parameters \(\lambda_{l}\) and \(\lambda_{a}\) regulate the tracking precision. Smaller \(\lambda\) values encourage the humanoid to follow the commanded velocity more precisely but make it harder for the policy to obtain rewards at the beginning of training.
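For concreteness, Eq. (6) can be computed as below. The weights `w_l`, `w_a` and the scales `lam_l`, `lam_a` are placeholder values, and a small `eps` is added here only to keep the expression defined at zero velocity commands; neither is stated in the text above.

```python
import numpy as np

def velocity_reward(v_cmd_xy, v_xy, w_cmd_z, w_z,
                    w_l=0.6, w_a=0.4, lam_l=0.25, lam_a=0.25, eps=1e-6):
    """Normalized exponential velocity-tracking reward of Eq. (6)."""
    lin_err = np.sum((v_cmd_xy - v_xy) ** 2) / (lam_l * (np.linalg.norm(v_cmd_xy) + eps))
    ang_err = (w_cmd_z - w_z) ** 2 / (lam_a * (abs(w_cmd_z) + eps))
    return w_l * np.exp(-lin_err) + w_a * np.exp(-ang_err)
```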
We model the actuation dynamics as a mass-damping system. A Proportional Derivative (PD) controller is employed to map the action to desired torques with the target joint velocity always specified as 0.
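The motor-level PD law that converts policy actions (target joint positions) into torques, with the target joint velocity fixed at zero, reduces to the single line below; the gain arrays `kp` and `kd` are assumed inputs.

```python
import numpy as np

def pd_torque(q_target, q, qd, kp, kd):
    """Joint torques from a PD controller with zero target joint velocity."""
    return kp * (q_target - q) + kd * (0.0 - qd)
```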
### _Wasserstein critic_
In adversarial imitation learning, it is pivotal for the discriminator to offer an appropriate distance metric between the generated motion distribution \(\mathcal{Q}\) and the reference motion distribution \(\mathcal{P}\). In vanilla GAIL, the discriminator employs the BCE loss, which has been shown to equate to minimizing the Jensen-Shannon divergence [47]. When two high-dimensional data distributions do not overlap, this metric can suffer from vanishing gradients, causing severely unstable training and model collapse. IPMs have been shown to be excellent distance measures between probability distributions [17],
\[\gamma_{\mathcal{F}}(\mathcal{P},\mathcal{Q}):=\sup_{f\in\mathcal{F}}\left| \int_{\mathcal{M}}fd\mathcal{P}-\int_{\mathcal{M}}fd\mathcal{Q}\right| \tag{7}\]
where \(\mathcal{F}\) represents a class of real-valued bounded measurable functions on the manifold \(\mathcal{M}\). When \(\mathcal{F}=\{f:\|f\|_{L}\leq 1\}\), it becomes the dual representation of the Wasserstein-1 distance, and the typical Wasserstein loss with gradient penalty [20] becomes
\[\begin{split}\operatorname*{arg\,min}_{\theta}&-\mathbb{E}_{x\sim P_{r}}\left[D_{\theta}(x)\right]+\mathbb{E}_{x\sim P_{g}}\left[D_{\theta}(x)\right]\\ &+\lambda\mathbb{E}_{\hat{\mathbf{x}}\sim P_{\hat{\mathbf{x}}}}\left[\left(\left\|\nabla_{\hat{\mathbf{x}}}D_{\theta}(\hat{\mathbf{x}})\right\|_{2}-1\right)^{2}\right]\end{split} \tag{8}\]
where \(D_{\theta}(x)\) denotes the Wasserstein critic network output. \(x=\Phi\left(s^{H}\right)\) is the manually selected feature from \(H\) consecutive motion states \(s\) in the reference and generated datasets. \(\hat{x}=\alpha x_{r}+(1-\alpha)x_{g}\) are samples obtained through random interpolation between reference samples and generated samples.
**Soft boundary constraint**. The Wasserstein critic network is used to approximate a cluster of Lipschitz-constrained functions with a linear combination architecture in the final layers. As a result, the output value is unbounded and unbiased [48][49]. During training with the loss in Eq. 8, we observed drawbacks stemming from the unbounded values. At the early training stage, when there are significant differences between the generated samples and the real data distribution, the critic's output for generated samples converges quickly to large negative values. This renders the style reward \(r^{S}\) nearly zero, causing the policy to fail to learn natural motions. The unbounded value also introduces large standard deviations of the style reward, which makes the training unstable. To limit the outputs from the Wasserstein critic, we modify the Wasserstein loss with a soft boundary constraint,
\[\begin{split}\operatorname*{arg\,min}_{\theta}&- \mathbb{E}_{x\sim P_{r}}\left[\tanh(\eta D_{\theta}(x))\right]\\ &+\mathbb{E}_{x\sim P_{g}}\left[\tanh(\eta D_{\theta}(x))\right] \\ &+\lambda\mathbb{E}_{\hat{\mathbf{x}}\sim P_{\mathbf{\hat{x}}}}\left[( \max\{0,\|\nabla_{\hat{\mathbf{x}}}D_{\theta}(\hat{\mathbf{x}})\|-1\})^{2}\right]\end{split} \tag{9}\]
where \(\eta\) is a hyperparameter that controls the range of the boundary. A smaller \(\eta\) imposes a softer constraint and allows larger critic values. In practice, \(\eta\in(0.1,0.5)\) is a proper range for selection. We apply a weaker gradient penalty [50] to further stabilize the training. Finally, the style reward is designed as \(r^{S}=e^{D_{\theta}(x)}\).
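A minimal PyTorch sketch of the soft-boundary critic update in Eq. (9) is given below. The critic architecture is not specified here, and the values of `eta` and the penalty weight `lam` are illustrative assumptions rather than the exact training configuration.

```python
import torch

def critic_loss(critic, x_real, x_gen, eta=0.3, lam=5.0):
    """Soft-boundary Wasserstein-1 critic loss of Eq. (9).

    critic: network D_theta mapping style features to a scalar.
    x_real: features from the reference motion dataset.
    x_gen:  features from policy-generated motions.
    """
    d_real = torch.tanh(eta * critic(x_real))
    d_gen = torch.tanh(eta * critic(x_gen))
    wasserstein_term = -d_real.mean() + d_gen.mean()

    # Gradient penalty on random interpolations between real and generated samples.
    alpha = torch.rand(x_real.size(0), 1, device=x_real.device)
    x_hat = (alpha * x_real + (1 - alpha) * x_gen).requires_grad_(True)
    d_hat = critic(x_hat)
    grads = torch.autograd.grad(d_hat.sum(), x_hat, create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    penalty = torch.clamp(grad_norm - 1.0, min=0.0).pow(2).mean()  # one-sided penalty

    return wasserstein_term + lam * penalty

def style_reward(critic, x_gen):
    """Style reward r^S = exp(D_theta(x)), evaluated without gradients at roll-out time."""
    with torch.no_grad():
        return torch.exp(critic(x_gen))
```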
## V Experiment
### _Implementation details_
**Actor-Critic observation space**. The actor and critic networks share the same observation space. The observation space \(O_{ac}\in\mathcal{R}^{102}\) consists of: (i) Base angular velocity \(W_{b}\in R^{3}\) expressed in base local frames. (ii) Velocity command \(v^{*}\in R^{3}\) including target linear velocity \(v^{*}_{xy}\in[0,5]\)
Fig. 3: **Wasserstein Adversarial Imitation Framework.** Given the robot’s proprioceptive state and base velocity commands, the policy network predicts the joint position targets. A PD controller converts these targets into torques to actuate the robot. Using the reference motion dataset and policy-generated motion dataset, the Wasserstein critic updates its parameters through the soft-boundary Wasserstein-1 loss during training and predicts the style reward during roll-out. The style reward \(r^{S}\) is combined with the velocity reward \(r^{V}\) to guide policy training.
m/s and heading velocity \(w_{z}^{*}\in[-1,1]\) rad/s. (iii) The gravity vector \(Z_{b}\in R^{3}\) expressed in base local frames. (iv) Current joint position \(\theta\in R^{31}\). (v) Current joint velocity \(\dot{\theta}\in R^{31}\). (vi) Last-step actions \(a_{t-1}\in R^{31}\).
**Wasserstein-critic observation space and action space**. The observation space \(O_{d}\) of the Wasserstein critic is composed of the state-transition pairs \(\Phi\left(s^{H}\right)=(s_{i},\dots,s_{i+H-1})\in R^{78\times H}\) in \(H\) preceding time-steps. Each \(s_{i}\) is represented in the same style feature space where the style features are carefully hand-selected. The motion style feature \(s_{i}\in R^{78}\) is composed of: (i) Base height \(p_{z}\in R^{1}\). (ii) Base linear velocity \(V_{b}\in R^{3}\) expressed in base local frames. (iii) Base angular velocity \(W_{b}\in R^{3}\) expressed in base local frames. (iv) The gravity vector \(Z_{b}\in R^{3}\) expressed in base local frames. (v) Joint position \(\theta\in R^{31}\). (vi) Joint velocity \(\dot{\theta}\in R^{31}\). (vii) Relative position of the feet w.r.t the base \(r_{\text{feet}}\in R^{6}\). The corresponding action space \(a\in\mathcal{A}=R^{31}\) of the policy is chosen as 31 target joint positions within the joint angle limits.
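Assembling the 78-dimensional per-frame style feature from the quantities listed above can be sketched as follows; the `robot` accessor names are hypothetical and only indicate which quantity fills each slot.

```python
import numpy as np

def style_feature(robot):
    """Concatenate the hand-selected style features (1+3+3+3+31+31+6 = 78 dims)."""
    return np.concatenate([
        [robot.base_height],          # p_z,        1
        robot.base_lin_vel_local,     # V_b,        3
        robot.base_ang_vel_local,     # W_b,        3
        robot.gravity_vec_local,      # Z_b,        3
        robot.joint_pos,              # theta,     31
        robot.joint_vel,              # theta_dot, 31
        robot.feet_pos_rel_base,      # r_feet,     6
    ])
```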
**Reference motion dataset**. The reference motion dataset includes multiple locomotion patterns. Table I summarizes the statistics of the whole dataset used for training. The normal walk and squat walk are retargeted from the CMU-MoCap dataset [44] and the SFU-MoCap dataset [51]. The standstill motion is manually designed and the squat walk motion is recorded from the existing robot controller [52].
**Regularization term and domain randomization**. To obtain a high-fidelity controller, we impose regularization penalties for large action jerk, significant joint torque, and acceleration. We also employ domain randomization on contact friction, restitution, joint friction, joint inertia, mass parameters, PD gains, and motor strength to avoid overfitting to the environmental dynamics.
**Training details**. The actor, critic, and Wasserstein critic have MLP structures with [1024, 512, 256] hidden units and ELU activations. Policies are updated via PPO [53] with a learning rate of \(3.0\times 10^{-5}\) and around 30 hours of training in Isaac Gym [54] on a Nvidia 3090Ti.
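A minimal PyTorch version of the stated backbone is shown below; the output heads (policy distribution, value head) are omitted, and the number of stacked style frames `H` fed to the Wasserstein critic is an illustrative assumption.

```python
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=(1024, 512, 256)):
    """MLP with [1024, 512, 256] hidden units and ELU activations."""
    layers, prev = [], in_dim
    for h in hidden:
        layers += [nn.Linear(prev, h), nn.ELU()]
        prev = h
    layers.append(nn.Linear(prev, out_dim))
    return nn.Sequential(*layers)

actor = mlp(102, 31)                 # 102-dim observation -> 31 target joint positions
value_critic = mlp(102, 1)           # state value
wasserstein_critic = mlp(78 * 4, 1)  # H = 4 stacked 78-dim style frames (H is assumed)
```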
### _Evaluation_
**Natural locomotion and transition**. We examined the robot's ability to reproduce a range of natural locomotion behaviors from the reference dataset and to adapt to velocity commands. We set the initial desired velocity to 0 m/s and gradually increased it to 5 m/s with a constant acceleration of 0.1 m/s\({}^{2}\). Fig. 4 presents a side view of JAXON robot's locomotion behaviors in response to changing velocities. The results indicate that our control policy not only captures diverse locomotion patterns from the reference dataset but also enables smooth transitions not present in the reference motions. The velocity-tracking curve and the z-direction feet contact force are shown in Fig. 5. The velocity tracking curve in Fig. 5 demonstrates the robot's capability to closely follow the desired velocity, reaching speeds of up to 5 m/s. It's important to note that this velocity tracking refers to the average velocity over a gait cycle, as opposed to instantaneous velocity. As the speed increases, the variance
Fig. 4: Snapshots of various natural locomotion behaviors learned by the Humanoid JAXON. As the velocity command increases from 0 m/s to 5 m/s, the robot exhibits seamless transition from stand to dynamic running.
in instantaneous velocity also increases. The contact force increases dramatically with the increase in speed. During the transition from walking stage \(f\) to running stage \(g\), there is a significant increase in stride frequency and a substantial decrease in contact time. As the robot transitions into the running gait pattern, we can clearly observe the presence of the air phase.
**Training stability and model collapse**. To assess the utility of the soft-boundary-constrained Wasserstein loss, we conducted three separate training sessions, each utilizing identical hyperparameters and random seeds but varying discriminator loss types. As depicted in Figures 6a and 6b, the contrasts in discriminator (critic) outputs and style rewards are evident. With the W-1 loss, the critic's output experiences significant fluctuations spanning a broad range. This causes rapid changes in the style reward \(r^{S}\) during the initial training phases, culminating in a failed training attempt. Conversely, while the discriminator output using the BCE loss remains within (0,1), it still exhibits considerable relative fluctuation, resulting in volatile changes to the style reward and destabilizing the training phase. Our novel soft-boundary-constrained Wasserstein loss effectively constrains the output value within a more acceptable range and also minimizes the fluctuation in style reward, thus enhancing training stability. Beyond stability, the Wasserstein critic delivers improved assessments of distributional distances, which ultimately curtails model collapse and aberrant locomotion behaviors.
**Sim-to-sim robust test**. The Choreonoid [55], integrated with real-time-control software Hrpsys, has been widely used in our previous work [56][57][58] and has proven to be a high-fidelity simulation environment with a small reality gap. We successfully transferred the policy from Isaac Gym to Choreonoid to facilitate future sim-to-real experiments. As depicted in Fig. 7, the controller demonstrates extraordinary robustness in push-recovery and blind stair-climbing tasks.
## VI Conclusion
In this work, we introduce a Wasserstein adversarial imitation learning system adept at acquiring a variety of natural locomotion skills from human demonstration datasets with diverse motion behaviors. We have detailed a unified primitive-skeleton motion retargeting method, proficient in efficiently mapping motions between skeletons with significant morphological differences. Our findings underscore the system's novel ability to seamlessly transition between unique locomotion patterns as the desired speed varies, even though such transition behaviors are conspicuously absent in the reference dataset. Further experiments validate that our proposed soft-boundary-constrained Wasserstein-1 loss significantly stabilizes the training process and mitigates the risk of model collapse. In the future, we aim to transfer this policy to real-world robots, with the goal of achieving versatile, natural, and dynamic locomotion for humanoids.
Fig. 5: Top: the velocity tracking curve; the velocity command increases from 0 m/s to 5 m/s with a constant acceleration of 0.1 m/s\({}^{2}\). Middle and bottom: the left and right feet's z-direction contact forces of the JAXON robot during the standing-to-running transition.
Fig. 6: a) Comparison of discriminator (critic) output values. b) Comparison of style reward values. Using only the Wasserstein-1 loss results in a wide range and significant fluctuations in both output and style reward, causing early-stage training failures. While employing the BCE loss keeps these values within a suitable range, it also leads to considerable relative fluctuations and susceptibility to model collapse and unstable training. In contrast, the Wasserstein-1 loss with soft boundary constraint ensures both the output and style reward remain within an appropriate range and exhibit minimal fluctuations, leading to a more stable training process. c) An example of model collapse with the BCE loss, where the robot only learned a tiptoe walking gait close to the standing posture.
Fig. 7: Sim-to-sim robust test in high-fidelity Choreonoid simulator. a) Push-recovery: The robot takes one lateral step with its left foot to maintain balance. b) Stair-climbing: The robot navigates a set of stairs with each step height of 50mm.
2309.04036 | One-to-Multiple Clean-Label Image Camouflage (OmClic) based Backdoor Attack on Deep Learning | Guohong Wang, Hua Ma, Yansong Gao, Alsharif Abuadbba, Zhi Zhang, Wei Kang, Said F. Al-Sarawi, Gongxuan Zhang, Derek Abbott | 2023-09-07T22:13:14 | http://arxiv.org/abs/2309.04036v2

# One-to-Multiple Clean-Label Image Camouflage (OmClic) based Backdoor Attack on Deep Learning
###### Abstract
Image camouflage has been utilized to create clean-label poisoned images for implanting a backdoor into a DL model. But there exists a crucial limitation that one attack/poisoned image can only fit a single input size of the DL model, which greatly increases the attack budget when attacking multiple commonly adopted input sizes of DL models.

This work proposes to constructively craft an attack image through camouflaging that can fit multiple DL models' input sizes simultaneously, namely OmClic. Thus, through OmClic, we are able to always implant a backdoor regardless of which common input size is chosen by the user to train the DL model given the same attack budget (i.e., a fraction of the poisoning rate). With our camouflaging algorithm formulated as a multi-objective optimization, \(M=5\) input sizes can be concurrently targeted with one attack image, whose artifacts remain almost visually imperceptible at the same time. Extensive evaluations validate that the proposed OmClic can reliably succeed in various settings using diverse types of images. Further experiments on OmClic based backdoor insertion into DL models show that high backdoor performances (i.e., attack success rate and clean data accuracy) are achievable no matter which common input size is randomly chosen by the user to train the model. Thus, the OmClic based backdoor attack budget is reduced by \(M\times\) compared to the state-of-the-art camouflage based backdoor attack as a baseline. Significantly, the same set of OmClic based poisonous attack images is transferable to different model architectures for backdoor implant.
keywords: Camouflage attack, One-to-multiple, Backdoor attack, Clean-label data poisoning, Machine learning
## 1 Introduction
Backdoor attacks on deep learning (DL) models, first revealed in 2017 [1; 2], are becoming one of the major barriers to trustworthy DL usage, especially in security-sensitive applications. A backdoored model works normally in the absence of the so-called trigger to be stealthy, but misbehaves once the trigger is present. For example, a backdoored facial recognition model still correctly recognizes Alice as Alice, Bob as Bob if either of them wears a black-framed eye-glass that is the trigger secretly set by the attacker. However, it misclassifies any person who wears this trigger into the Administrator e.g., with higher authorization. One major attack surface is from the data outsourcing scenario, where a DL model provider/producer outsources the data collection to third parties [3]. Data outsourcing is common due to the fact that DL training demands large amounts of data. However, this requires an intensive workforce for annotating large datasets or even generating them. The data curation task is thus often outsourced to a third party (e.g., Amazon Mechanical Turk) or volunteers. In this context, the data could be maliciously poisoned to insert a backdoor once the data is utilized to train a model.
According to the visual consistency between the image content and its corresponding annotation (i.e., label in classification task), data poisoning based backdoor implant can be divided into two categories: dirty-label poisoning and clean-label poisoning. Generally, the content of the image and its label are different for the dirty-label poisoned image. For example, a dog image stamped with a small trigger is labeled as a cat. In contrast, the image content and its label of the clean-label poisoned images are consistent. More details on the dirty-label poisoning and clean-label poisoning can be found in Section 2.2. The majority of existing studies focus on dirty-label poisoning [1; 4; 5; 6; 7]. However, the dirty-label poisoning attack is hard to survive when the attacker
cannot control the model training (as the attacker can in the model-outsourcing scenario). When the user only outsources the data collection or annotation task, the user trains the model by himself/herself. In this case, which is common in the real world, the collected data can undergo human inspection to check whether the image content is consistent with the label. The dirty-labeled images will be rejected in this case. In addition, the reputation of the data provider can be damaged and penalized.
Clean-label poisonous images retain labels that are consistent with the images' content. Thus, they can trivially bypass the visual auditing by the data curator. Therefore, clean-label poisoning poses a realistic security threat to the data collection pipeline even when the curated data undergoes human inspection. However, clean-label poisoning is less explored due to the stringent label consistency constraint. There are only a few works in this research line. Almost all of them build upon the strategy of enforcing a difference between the input space and the latent space/representation, so-called feature collision, to create clean-label poisoned images [8; 9; 10; 11]. However, the major limitation of such clean-label attacks is model dependence. That is, the _attacker has to know the model architecture and even weights_ of the victim user's model to determine the latent representation of the adversarial image, which is mandatory during the adversarial image optimization process. This restriction renders the conventional clean-label attack ineffective if the model architecture varies, the weights before the layer of latent representation are changed, or the model is trained from scratch [12].
To our knowledge, only the camouflage attack [13] based clean-label poisoning is model-agnostic for inserting backdoors into DL models [14; 15]. The camouflage attack abuses the default image resizing function to create adversarial images with visually clean labels (detailed in Section 2.3). To be effective, the image size fed into the model has to be known to the attacker. We note that this is a reasonable and practical knowledge assumption in the real world, because the commonly used input sizes of popular model architectures are few and known to the public. For example, the commonly used input sizes of ResNet are \(224\times 224\times 3\) and \(112\times 112\times 3\). The input sizes of different popular models are summarized in Table 1.
A crucial constraint of the data poisoning attack is the poison rate or the attack budget. The attack budget should be as small as possible to be stealthy and efficient for the attacker. For the former, if the poisoning rate is high, it means that the number of samples of the target class will be notably high, which could be suspicious even for the clean-label attack. For the latter, it means the attacker can spend less effort or time creating poisoned images. In this context, we note that the existing camouflage attack [13; 21] can only target a single model input size per attack image, which is inefficient given the fact that there are always several default input sizes of a popular model. To attack all input sizes simultaneously, the poisoning rate has to be increased as a function of the number of targeted input sizes. For example, if a 1% poison rate can implant a backdoor to the ResNet model given an input size, it requires a 3% poison rate, 3\(\times\) higher, to attack three common input sizes concurrently, which consequentially increases the attack budget and becomes less stealthy and efficient.
We address this crucial limitation by crafting a camouflaged attack image that can target multiple model input sizes simultaneously, fundamentally obviating the requirement of linearly increasing the attack budget or the poisoning rate of the existing state-of-the-art (SOTA) camouflage attack [13]. The SOTA can only target a single input size rather than multiple input sizes (detailed in Section 3.1). Consequently, with the same attack budget, we are able to always implant a backdoor into the model as long as the user adopts any one of the common input sizes to train the model.
Our contributions are summarized as follows:
* We propose OmClic (pronounced "Oh My Click"), the first one-to-multiple camouflage attack, which can target multiple input sizes given a single crafted attack image. We formulate the attack image crafting as a multi-objective optimization to automate the attack image generation.
* We comprehensively evaluate OmClic with diverse types of images (i.e., facial images, landscape images) under various settings. Its outstanding performance is affirmed through quantitative and qualitative comparisons with the SOTA.
* We demonstrate the practicality of backdoor attacks leveraging OmClic through extensive experiments on three datasets: PubFig, STL, and Tiny-ImageNet. In contrast to the baseline backdoor attack, the backdoor can always be successfully inserted with the same poisoned set regardless of which of the targeted model input sizes is chosen by the victim user (i.e., only six attack images are sufficient in the facial recognition case study).
The rest of the paper is organized as follows. Some necessary background is presented in Section 2. Section 3 gives an overview of the OmClic, followed by elaborations on its implementations. Section 4 comprehensively and
\begin{table}
\begin{tabular}{c|c} \hline model & input size \\ \hline DenseNet [16] & 32, 112, 200, 224 \\ ResNet [17] & 112, 224, 336, 448, 560 \\ VGG [18] & 224, 256, 512 \\ AlexNet [19] & 256, 512 \\ EfficientNet [20] & 224 \\ \hline \end{tabular}
\end{table}
Table 1: Common input sizes of popular DL models.
quantitatively evaluates OmClic on diverse types of images under various settings, as well as in comparison with the SOTA [13]. Backdoor attacks based on OmClic are presented and extensively evaluated in Section 5. We discuss OmClic enabled backdoor attacks further in Section 6, in particular providing an easy-to-deploy lightweight prevention method to mitigate OmClic. Section 7 concludes this work.
## 2 Related Work
### Backdoor Attack Scenario
A backdoored model behaves normally for inputs without the trigger but misbehaves as attacker-specified once the attacker presents his/her secretly chosen trigger in the input [3]. For example, supposing the trigger is a sun-glass, any person, e.g., person A, not wearing it will still be recognized as person A by the backdoored facial recognition model. However, he/she will be recognized as the administrator by the backdoored model once the sun-glass is worn. There are a number of real-world scenarios that can introduce a backdoor into the DL model as long as the model or its training dataset can be tampered with by the attacker. These means include model outsourcing [1], dataset outsourcing [8], distributed machine learning [22], pretrained model reusing [23], vulnerable code called by the DL framework [24], and fault injection after model deployment [25].
### Data Poisoning based Backdoor
Data outsourcing is one of the three most common scenarios (i.e., the first three means listed above). Due to the hardness of collecting some specific data (e.g., medical) or the intensive labor involved, it is common that a model trainer outsources the data collection or/and data annotation to third parties. For instance, Amazon Mechanical Turk2 is such a platform where one can issue dataset outsourcing tasks. The annotation of the commonly used FLIC dataset [26] for the object detection task was outsourced to Amazon Mechanical Turk. In addition, some data collections rely on volunteer contributions. Moreover, some large-scale datasets, e.g., ImageNet [27], are crawled from the Internet and annotated through crowdsourcing [27]. In all these cases, the data can be tampered with before being received by the data curator. A small fraction (i.e., 0.06% [28]) of tampered or poisoned data can essentially succeed in inserting a backdoor into a DL model trained upon it.
Footnote 2: [https://www.mturk.com/](https://www.mturk.com/)
Data poisoning can be generally divided into two categories: dirty-label poisoning and clean-label poisoning. Gupta et al. carried out two additional attacks in addition to the targeted attack: a random label flipping attack and a random input data poisoning attack. The former is dirty-label poisoning, and the latter is clean-label poisoning. The difference between these two poisoning categories is as follows:
* Dirty-label poisoning. The labeling of samples is inconsistent with the semantics of these samples, which is trivially achievable by simply altering the label of a poisoned sample that contains the trigger. This is not stealthy under human inspection.
* Clean-label poisoning. It ensures the consistency between the poisoned image content and its annotated label. Thus, a human inspector cannot find any irregularity owing to this consistency.
To craft clean-label poisonous images, the majority of studies [8; 9; 10; 11] utilize the feature collision attack. For example, a poisoned face image of person A is labeled as person A, which is visually unsuspicious due to the consistency between the image content (that is, the input or pixel space) and the annotation. However, when it is fed into a DL model, its latent representation (i.e., from the first fully connected layer of a CNN model) in latent space is in fact equal to that of person B. This can be exploited to perform backdoor attacks through clean-label data poisoning [12]. That is, the feature of poisoned image A collides with image B in the latent space even though they are different in the input space. Generally, the perturbed/poisoned A's image feature representation is similar to any other person's face image (i.e., person B) _stamped with a trigger_ (i.e., sun-glass). The model trains on the poisoned dataset and learns a backdoor/association between the trigger and targeted person A, thus exhibiting the backdoor effect of misclassifying any person with the trigger as person A.
However, clean-label poisoning upon feature collision has a crucial limitation that a feature extractor to extract the latent representation should be known by the attacker. This means the attacker often needs to have white-box knowledge of the feature extractor (i.e., the victim model). Generally, poisonous image crafting in this context is (victim) model dependent.
### Camouflage Attack
The other means of crafting clean-label poisonous images is through the camouflage attack [13] by abusing the default resizing operation provided by commercial DL frameworks [14; 15; 29]. In [29], Chen et al. extended camouflage attacks by utilizing five types of pre-processing modules common in DL systems. For a camouflage-attacked image, its visual appearance is different before and after the resizing operation. Note that the image size (i.e., a resolution up to \(4032\times 3024\) for images taken by an iPhone 13) is always larger than the input size of a given DL model (see Table 1). These large images will be downsized into the model's acceptable input size by calling the default resizing function before feeding them into the model for either training or inference. Therefore, an attacker can create an attack image (i.e., person A's face image) seen by the
data curator but will become the target image (i.e., person B/C's face image with a trigger) seen by the model. Here, the attack image retains consistency between the image content and the annotation. Obviously, once a DL model trains on these poisoned images, it will be backdoored, so that it will classify any person with the trigger as person A, who is the attacker-targeted person such as the administrator.
Despite this clean-label poisoning attack exhibiting the main merit of being independent of DL models, it is dependent on the targeted model input size. For example, if the targeted size is \(224\times 224\times 3\), its effect will not function if the model user chooses any other input size, e.g., the other common option of \(112\times 112\times 3\) (see Table 1). When performing backdoor attacks, the attacker has to linearly increase the poison rate (i.e., using more poisonous images) if the attacker targets multiple model input sizes. This is undesirable as it is less stealthy and increases the attack budget. In the following, we present OmClic, which can cover multiple model input sizes given the same poisonous image without increasing the poisoning rate at all.
## 3 One-to-Multiple Clean Label Image Camouflage
### Overview
The overview of the One-to-Multiple Clean Label Image Camouflage (OmClic) is shown in Figure 1. The aim is to disguise multiple target images (i.e., \(k\) \(T\)s) in the same source image (\(S\))--\(k=3\) in the example. The manipulated source image \(S\) is the attack image \(A\) that will be received by the victim user who uses it to train a DL model. The attack image \(A\) is visually close to the source image--its annotation (i.e., label) is consistent with its content (i.e., the lady is labeled with a correct name). However, once it is used to train a DL model, its content becomes semantically similar to the target image \(T\) due to the abuse of the default scale function provided by mainstream DL frameworks. More precisely, \(D_{1}\approx T_{1}\) where \(D_{1}=\)scale\({}_{1}(A)\). By stamping a trigger on a fraction of different target images \(T\)s before disguising each into an attack image \(A\), a backdoor will be inserted into the downstream DL models, as experimentally evaluated in Section 5.
In this context, the key to OmClic is to strategically craft the attack image. The OmClic aim is to disguise \(k\) target images rather than a single target image into the source image as performed by Xiao _et al._, the SOTA [13]. The \(k\) target images can have different semantic contents (i.e., faces of different persons or faces of the same person but at different angles), or different image sizes (i.e., the face of the same person at the same shooting setting but different resolution/size), or a combination of above two scenarios, as exemplified in Figure 1.
**Challenges and Our Solution.** Intuitively, the methodology devised by Xiao _et al._[13], interchangeably referred to as the SOTA, can be consecutively applied to each of these \(k\) target images, to hopefully gain an attack image retaining the deceptive effect. However, our trials showed this is not immediately applicable. Firstly, the disguising operation often fails due to the non-existence of an optimization solution under the overly strong constraints set by the SOTA. Generally, this is because the SOTA transforms the attack into a convex optimization problem. Once the constraints (i.e., the perturbation amplitude on the attack image and the difference between the output image resized from the attack image and the target image) are enforced, it might not always converge to a satisfactory solution, thus causing a failure. Secondly, the SOTA camouflage is extremely computationally heavy, which renders unbearable time overhead, especially for relatively large-size attack images, even when camouflaging merely a single target image. Generally, this is because the SOTA solves the pixel perturbation in a fine-grained manner, e.g., line by line of the image. This inevitably invokes the convex-concave programming toolkit much more frequently, rendering costly computation (i.e., the overhead is dependent on the image size).
The OmClic resolves the above shortcomings through two major means. Firstly, we transform the OmClic camouflage attack into a distinct multi-objective optimization problem [30]. This overcomes the frequent failure of the SOTA during the optimization process. Note that the multi-objective optimization naturally fits our one-to-multiple attack, since multiple target images have to be disguised simultaneously. Secondly, we solve the pixel perturbation per channel (i.e., a colorful image has three channels). Therefore, the number of invocations of the optimization toolkit is independent of the image size, and importantly, far smaller (i.e., only three invocations are required for a colorful image). Consequently, the computation of OmClic is very efficient.
### Implementation
We first define some notations. The \(m\) and \(n\), respectively, denote the number of rows and columns of source image size, and \(c\) denotes the number of channels--in particularly, \(c=3\) for colorful images. Similarly, \(m_{j}\) and \(n_{j}\) denote the \(j_{\text{th}}\in\{1,...,k\}\) image size of the \(j_{\text{th}}\) target image. Note in the camouflage attack, the target image size is usually smaller than that of the source image. This is aligned with the fact that image downscaling is more common when training the DL model. \(a\) denotes the pixel value, which should be in the range of [0,255]. \(L_{j}\) and \(R_{j}\)
Figure 1: OmClic overview. Three target images with different semantic contents and sizes are used for example.
respectively, denote the left and right constant coefficient matrix when a target image is resized, see the Eq 2. Note that \(L_{j}\) and \(R_{j}\) are deterministic once the \(m\), \(n\), \(m_{j}\), and \(n_{j}\) are given--they are known in camouflage attack.
Our main purpose is to find the minimum perturbation \(\Delta\) such that the attack image looks visually the same as the source image. To achieve the least distance between the attack image \(A\) and the source image \(S\) at the whole-image level, we use the Euclidean norm \(L_{2}\) as a constraint. In this context, the relationship between \(A\) and \(S\) is formalized as:
\[\begin{split}& A_{m\times n}=S_{m\times n}+\Delta\\ &\texttt{Obj:min}(\|\Delta\|_{2})\end{split} \tag{1}\]
To solve \(\Delta\), we further formalize the scaling process. Since the scaling size (i.e., the output image size) is fixed, the \(\mathsf{Scale}\) operation can be expressed as:
\[\begin{split}&\mathsf{Scale}_{j}(A_{m\times n})=L_{m_{j}\times m }*A_{m\times n}*R_{n\times n_{j}}=T_{m_{j}\times n_{j}},\end{split} \tag{2}\]
where \(j\) is for the \(j_{\text{th}}\) target image. \(L_{m_{j}\times m}\) and \(R_{n\times n_{j}}\) are two coefficient matrices that can be stably solved [13] given the known scaling size.
Once the scaling size is fixed, the scaling coefficients are stable. Following Xiao _et al._ [13], these coefficients can be inferred from input and output pairs. For example, the input can be the source image while the output can be the target image, or vice versa. In other words, the image content does not matter; only the input and output sizes matter.
First of all, we can build the relationship between input and output pairs:
\[\begin{split}& L_{m^{\prime}\times m}*(I_{m\times m}*IN_{max})=L_{m^{ \prime}\times m}*IN_{max}\\ &(I_{n\times n}*IN_{max})*R_{n\times n^{\prime}}=R_{n\times n^{ \prime}}*IN_{max},\end{split} \tag{3}\]
where \(I_{m\times m}\) and \(I_{n\times n}\) are both identity matrices. And \(IN_{max}\) stands for the max element in the source image (i.e., it can be any scalar excepting 0 and 1).
For example, by setting \(S=I_{m\times m}*IN_{max}\) and scaling it into an \(m^{\prime}\times m\) image \(D_{m^{\prime}\times m}\), we can infer \(L_{m^{\prime}\times m}\) since:
\[\begin{split}& D=\mathsf{Scale}(S)=\texttt{unsigned int}(L_{m^{\prime}\times m}*IN_{max})\\ &\to L_{m^{\prime}\times m(appr)}\approx D/IN_{max}\end{split} \tag{4}\]
Since the division yields only finite-precision decimals, Eq. 4 introduces a slight precision loss. To ensure that the sum of elements in each row of the coefficient matrix is one, normalization is applied per row of the coefficient matrix to make it accurate.
\[\begin{split}& L_{m^{\prime}\times m(appr)}[i,:]=\frac{L_{m^{ \prime}\times m(appr)}[i,:]}{\sum_{j=0}^{m-1}(L_{m^{\prime}\times m(appr)}[i,j] )}\\ &(i=0,1,\cdots,m^{\prime}-1)\end{split} \tag{5}\]
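The coefficient inference of Eqs. (3)-(5) can be reproduced with a few lines of Python. This is a minimal sketch assuming Pillow's resize is the abused Scale function (NEAREST by default here); the right coefficient matrix \(R\) can be inferred analogously by scaling \(I_{n\times n}\cdot IN_{max}\) horizontally.

```python
import numpy as np
from PIL import Image

def infer_left_coefficients(m, m_prime, in_max=255, resample=Image.NEAREST):
    """Infer L_{m' x m} of the abused Scale function (Eqs. 3-5)."""
    identity_img = (np.eye(m) * in_max).astype(np.uint8)
    # Scale the (m x m) identity-based image to m' rows while keeping m columns.
    scaled = Image.fromarray(identity_img, mode="L").resize((m, m_prime), resample)
    L = np.asarray(scaled, dtype=np.float64) / in_max            # Eq. (4)
    row_sums = L.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0                                 # guard against empty rows
    return L / row_sums                                           # Eq. (5) normalization
```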
To this end, we leverage multi-objective optimization to solve for \(\Delta\) when \(\texttt{OmClic}\) simultaneously disguises \(k\) target images with different sizes into one source image. The best result can then be found by solving the overall objective optimization expressed in the following Eq. 6.
\[\begin{split}& A_{m\times n}=S_{m\times n}+\Delta\\ &\mathsf{Scale}_{j}(A_{m\times n})=T_{m_{j}\times n_{j}}\quad(j=1,2,\cdots,k)\\ &\epsilon_{j}=\|\mathsf{Scale}_{j}(A)-T_{j}\|_{2}\quad(j=1,2,\cdots,k)\\ &\forall a\in A\quad 0\leq a\leq 255\\ &\texttt{Obj:min}(\|\Delta\|_{2}+\epsilon_{1}+\cdots+\epsilon_{k}).\end{split} \tag{6}\]
Note that \(a\) is a pixel value of the attack image; its range is within [0, 255].
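A per-channel sketch of the optimization in Eq. (6) using a generic convex solver is shown below. The solver choice (CVXPY) is an assumption, and Frobenius norms are used as the matrix form of the \(L_{2}\) distance; it is not the exact released implementation.

```python
import cvxpy as cp
import numpy as np

def craft_channel(S_c, targets, Ls, Rs):
    """Solve Eq. (6) for one colour channel.

    S_c:     (m, n) source-image channel.
    targets: list of (m_j, n_j) target-image channels.
    Ls, Rs:  matching lists of pre-computed coefficient matrices.
    """
    A = cp.Variable(S_c.shape)
    delta = cp.norm(A - S_c, "fro")                       # ||Delta||
    eps_terms = [cp.norm(L @ A @ R - T, "fro")            # ||Scale_j(A) - T_j||
                 for L, R, T in zip(Ls, Rs, targets)]
    objective = cp.Minimize(delta + sum(eps_terms))
    constraints = [A >= 0, A <= 255]                       # valid pixel range
    cp.Problem(objective, constraints).solve()
    return np.clip(A.value, 0, 255)

# The full attack image is obtained by running craft_channel once per channel
# (three invocations for an RGB image) and stacking the results.
```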
```
Input: Source \(S\in\mathbb{N}_{m\times n\times c}\); \(k\) target images. Output: attack image \(A\). [Full algorithm listing omitted.]
```
## 4 Evaluation

Three types of images, i.e., facial images, animal images, and landscape images, are utilized to comprehensively evaluate OmClic. Experimental evaluations of OmClic enabled model-agnostic backdoor attacks are deferred to Section 5.
### Different Target Images with Different Output Sizes
As shown in Figure 2, we embedded three different target images (e.g., dog or cat) into one source image. The three different sizes are \(64\times 64\times 3\), \(96\times 96\times 3\) and \(114\times 114\times 3\), respectively. The source image has a size of \(448\times 448\times 3\), and a larger size of \(1024\times 1024\times 3\) has also been evaluated. In this experiment, the scale function is set to be NEAREST.
Firstly, the attack image (i.e., second column) is similar to the source image (i.e., first column). Secondly, the output images (i.e., third to fifth columns) are visually similar to their corresponding target images. In addition, we note that when the size of the source image is small, \(448\times 448\times 3\), there are some perceptible artifacts to the output image (i.e., the scaled dog image with a size of \(96\times 96\times 3\)). The artifact can be mitigated when the size of the source image increases e.g., \(1024\times 1024\times 3\) in the second row. This means the difference between the target image and the output image becomes small.
The reason is that the performance of the image scaling attack depends on the ratio between the source image and the target image. According to the Nyquist-Shannon theorem [31], a signal \(s(t)\) can be reconstructed from a discrete number of sample points when the sampling rate \(f_{t}\) and the highest frequency \(f_{max}\) satisfy \(f_{t}\geq 2\cdot f_{max}\) [21]. The larger the ratio between the source image and the target image, the better the camouflage attack performs, with less difference between the output image and the target image. Therefore, there is nearly no notable perturbation on the three output images scaled from the attack image with the size of \(1024\times 1024\times 3\).
### Same Target Image with Different Output Sizes
Here, we implant three visually identical target images with different sizes \(64\times 64\times 3\), \(96\times 96\times 3\), \(114\times 114\times 3\) into one source image; the resulting attack images (i.e., second column) are shown in Figure 3. The target images are the same person's face images but with different resolutions in this example.
Even though a small-size source image of \(448\times 448\times 3\) is used, there are nearly no perceptible artifacts on these three output images (i.e., columns 3, 4, and 5). There are two potential reasons. Firstly, all target images are visually the same. Secondly, the target images and the source image are all face images, whose similarity is also higher than that in Figure 2, where the source image (i.e., dolphin) is quite distinct from the target images (i.e., dog or cat).
This implies that semantic similarity between the source image and the target image, or/and similarity among the target images, can yield a better OmClic deceptive effect.
In addition, note that a larger source image size is beneficial to the removal of the artifacts brought to the attack image. More precisely, when looking closer (zooming in), the artifacts in the \(448\times 448\times 3\) attack image are perceptible but eliminated when \(1024\times 1024\times 3\) source image is utilized.
### Same Target Image with Different Resize Functions
In Figure 4, we set up the case when the same target image is resized by different resize functions into different output sizes. During the attack image crafting, the scale function NEAREST is used to disguise the same target image with different sizes \(64\times 64\times 3\) (i.e., third column) and \(96\times 96\times 3\) (i.e., fourth column) into the source image.
On one hand, if a different resizing algorithm, e.g., LANCZOS, is chosen to rescale the attack image to obtain the output image, e.g., \(96\times 96\times 3\) in the third column, the output image is semantically similar to the source but not the target image intended by the attacker. On the other hand, if the attack image is resized to an output image of size \(64\times 64\times 3\) with the same NEAREST algorithm, the output image, as expected, is nearly the same as the target image. We have evaluated other combinations, e.g., NEAREST is used during attack image crafting and a different LANCZOS function is used to resize the attack image. We found in all our experiments that the camouflage effect works only when the same resize function is used during attack image creation and attack image resizing.
Figure 3: Same target image with different sizes. Face images are used.
Figure 2: Different target images with different sizes. Animal images are used.
### Number of Disguised Target Images
Here, we are interested in the maximum number of target images that can be disguised into the source images. In Figure 5, we embed up to \(k=8\) target images into a source image. We have the following observations. Firstly, a larger source image size is preferable to disguise multiple target images. When the \(1024\times 1024\times 3\) source image is used, the semantics of not only the attack image but also each of up to \(k=8\) output images can be held reasonably. Secondly, we do observe increased artifacts in the source image when \(k\) increases. Thirdly, the ratio between the source image size and the target image size is preferred to be large to facilitate the OmClic. As can be observed in the third and fourth rows, when the maximum image size of the target image approaches the source image size, the attack image is essentially visually close to the target image.
### Computational Overhead
Here, we compare the OmClic computational overhead with Xiao _et al._[13], which is measured by the time of producing the attack image when a _single_ target image is embedded. Experiments are performed on the same machine with a CPU of Intel(R) Xeon(R) Gold 6230 at 2.10 GHz and 32 GB memory.
Figure 6 details the time cost; the \(x\)-axis is the target image size. The proposed OmClic substantially outperforms the SOTA [13], with improvements of up to about \(30\times\). For example, when the source image size is \(448\times 448\times 3\) and the target image size is \(114\times 114\times 3\), the SOTA costs 1893 s while OmClic only requires 67 s, a speed-up of roughly \(28\times\). This is because OmClic leverages i) a more efficient multi-objective optimization and ii) per-image-channel optimization rather than the per-line optimization used in the SOTA.
### Similarity Between Source and Attack Image
Here, we focus on quantifying the similarity between the source image and the attack image, since this determines the deceptive effect in our scenario; we then quantitatively compare OmClic with the SOTA. We note that when the camouflage is exploited for the backdoor attack in our work, the similarity between the target image and its corresponding output image after scaling is not a stringent requirement. The reason is that the user does not inspect the output image; the user inspects the attack image. As long as the backdoor can be successfully inserted, even perceptible artifacts on the output image do not matter.
Figure 4: Same target image with different resize functions. Landscape images are used.
Figure 5: Number of disguised target images. Face images are used.
| Type | Source size | SSIM (Xiao [13] / Ours #1 / #2 / #3) | MSSSIM (Xiao [13] / Ours #1 / #2 / #3) | UQI (Xiao [13] / Ours #1 / #2 / #3) | PSNR (Xiao [13] / Ours #1 / #2 / #3) |
| --- | --- | --- | --- | --- | --- |
| Face | 448 | 0.744 / 0.742 / 0.565 / 0.472 | 0.942 / 0.942 / 0.887 / 0.844 | 0.90 / 0.90 / 0.833 / 0.79 | 27.469 / 27.483 / 22.136 / 19.422 |
| Face | 1024 | 0.889 / 0.905 / 0.755 / 0.662 | 0.979 / 0.982 / 0.949 / 0.917 | 0.997 / 0.975 / 0.929 / 0.888 | 33.412 / 34.447 / 29.307 / 26.415 |
| Animal | 448 | 0.655 / 0.660 / 0.47 / 0.38 | 0.936 / 0.936 / 0.873 / 0.821 | 0.971 / 0.971 / 0.946 / 0.925 | 25.102 / 25.262 / 19.819 / 17.096 |
| Animal | 1024 | 0.881 / 0.865 / 0.665 / 0.567 | 0.982 / 0.980 / 0.943 / 0.907 | 0.994 / 0.992 / 0.979 / 0.966 | 33.518 / 32.113 / 26.977 / 24.079 |
| Landscape | 448 | 0.734 / 0.726 / 0.564 / 0.474 | 0.944 / 0.942 / 0.892 / 0.847 | 0.839 / 0.838 / 0.801 / 0.778 | 26.574 / 26.413 / 21.262 / 18.547 |
| Landscape | 1024 | 0.917 / 0.889 / 0.722 / 0.632 | 0.987 / 0.979 / 0.942 / 0.909 | 0.990 / 0.954 / 0.873 / 0.818 | 34.631 / 33.551 / 28.403 / 25.515 |
Table 2: Quantitative similarity comparison between Xiao _et al._[13] and OmClic.
We use three semantically identical target images with different sizes of \(64\times 64\times 3\), \(96\times 96\times 3\), and \(114\times 114\times 3\) for OmClic, and only one size (\(64\times 64\times 3\)) for the SOTA. Case #1 of the SOTA means embedding the \(64\times 64\times 3\) target image into the source image. Cases #1, #2, and #3 of OmClic mean disguising one (specifically, the \(64\times 64\times 3\) image), two, and three target images into the source image, respectively. Since it is challenging for the SOTA to embed multiple target images into the same source image (it is time-consuming and unstable even when applied sequentially per target image), we do not evaluate it on multiple target images.
Results are detailed in Table 2, where four metrics are used: Structural Similarity Index (SSIM) [32], Multi-scale Structural Similarity Index (MSSSIM) [33], Universal Quality Image Index (UQI) [34], and Peak Signal-to-Noise Ratio (PSNR) [32]. First, when a single target image is disguised, the similarity performance of OmClic is almost the same as the SOTA in all cases; OmClic therefore achieves the same deceptive effect as the SOTA while being far more efficient. Second, as the number of target images increases, the similarity gradually decreases, which is expected. Third, using a source image with a larger size (i.e., 1024 versus 448) compensates for the similarity deterioration; this agrees with the observation in Section 4.4 that a large source image can accommodate a higher number of target images while retaining the semantic consistency of the attack image. Last, the SSIM, MSSSIM, UQI, and PSNR performance also depends on the semantic similarity between the target images and the source image: since images in the animal dataset are the most discrepant, that dataset exhibits the worst performance, whereas face images exhibit the best.
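For reference, SSIM and PSNR between a source image and an attack image can be computed with scikit-image as in the minimal sketch below (the array shapes and the use of scikit-image are our assumptions; MSSSIM and UQI require additional third-party packages):

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def similarity_report(source: np.ndarray, attack: np.ndarray) -> dict:
    """source and attack are assumed to be uint8 H x W x 3 arrays of identical shape."""
    return {
        "SSIM": structural_similarity(source, attack, channel_axis=-1),
        "PSNR": peak_signal_noise_ratio(source, attack),
    }
```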
## 5 OmClic enabled Backdoor Evaluation
We now evaluate the OmClic-enabled backdoor attack against DL models. Generally, the OmClic is exploited to disguise trigger-carrying target images to poison the training dataset used to train the DL model, thus inserting a backdoor into the DL model.
### Threat Model
The attacker can create attack images through OmClic to disguise trigger-carrying images. More specifically, the attacker has access to a small fraction of the dataset used by the victim; a poison rate of less than 0.5% was sufficient to insert a backdoor, as shown in [28; 15]. This is realistic in the data outsourcing scenario where the dataset is crawled from public sources, contributed by volunteers, or collected by a third party [14; 15]. The attacker also has knowledge of the input size of the DL model, which is reasonable since the number of common input sizes is extremely limited and publicly known, as summarized in Table 1. Notably, OmClic is designed to _compromise multiple input sizes concurrently through the same attack image_. However, the attacker has no control over the training process and thus cannot interfere with the training at all.
As for the victim data user, he/she mixes in the data returned from the attacker and uses it to train the DL model. The user fully controls the training process. The user, who is the data curator, can inspect the received data to identify and reject malicious images that exhibit inconsistency between content and label. Note that the user does not inspect the data after the scaling operation, since scaling is a default operation of existing DL frameworks, as assumed in [13; 14].
### Experiment Setup
**Dataset.** We consider three datasets: PubFig [35], STL [36], and Tiny-ImageNet [37]. PubFig consists of \(58,797\) images of 200 people crawled from the Internet. Since some of the download URLs listed in its text file are no longer valid, we selected the top-60 people (sorted by the number of images) to form the PubFig dataset used in our experiments.
The STL dataset has 10 classes. The training and testing sets contain 5,000 and 8,000 images with size of \(96\times 96\times 3\), respectively. The Tiny-ImageNet has 200 classes. To reduce computation time, we only use 10 classes in Tiny-ImageNet.
The image sizes are \(256\times 256\), \(96\times 96\), and \(64\times 64\) for PubFig, STL, and Tiny-ImageNet, respectively. The evaluated model input sizes (i.e., the compromised input sizes) are \(96\times 96\), \(112\times 112\), and \(224\times 224\) for all datasets, considering that these sizes are common for computer vision models. Whenever the image size and the compromised model input size mismatch, the former is resized to fit the latter. More specifically, downsampling is used for PubFig and the up-sampling process
| Dataset | # of labels | # of train images | # of test images | Image size |
| --- | --- | --- | --- | --- |
| STL | 10 | 5,000 | 8,000 | 96\(\times\)96\(\times\)3 |
| PubFig | 60 | 4,921 | 1,202 | 256\(\times\)256\(\times\)3 |
| Tiny-ImageNet | 10 | 5,000 | 500 | 64\(\times\)64\(\times\)3 |
Table 3: Dataset summary.
Figure 6: Time overhead comparison between OmClic and Xiao _et al._
is applied to STL and Tiny-ImageNet. For all poisoned images, the image size is set to \(448\times 448\times 3\). For the OmClic-enabled backdoor, the first class of each of the three datasets serves as the source class (i.e., the attacker's target class from the backdoor attack perspective), and the other classes serve as the target classes (note that this target class refers to the images the attacker wants to hide in the OmClic attack; it should not be confused with the target class of the backdoor attack). A summary of the dataset settings is provided in Table 3.
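The resizing that OmClic abuses is the default preprocessing step of the victim pipeline. A minimal sketch of such a step with torchvision is shown below (the interpolation mode and exact transform chain are assumptions about the victim's setup):

```python
from torchvision import transforms

# Every training image, including a 448x448 attack image, is resized to the
# chosen model input size before training; this is where the camouflage triggers.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),  # or (96, 96) / (112, 112)
    transforms.ToTensor(),
])
```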
**Model Architecture.** ResNet18 [17] and VGG16 [18] are utilized to comprehensively evaluate the OmClic-enabled backdoor. We evaluate the OmClic-based backdoor on the basis of state-of-the-art accuracy: PubFig, STL, and Tiny-ImageNet achieve accuracies of \(95.7\%\), \(92.6\%\), and \(89.1\%\), respectively, given a model input size of \(224\times 224\times 3\). These clean-model accuracies, which serve as baselines, are obtained by training on the clean dataset.
**Metrics.** Two common metrics of clean data accuracy (CDA) and attack success rate (ASR) are utilized to quantitatively measure the backdoor performance [3].
The CDA is the probability that a non-trigger-carrying image is correctly classified into its ground-truth label by the backdoored model; the CDA of a backdoored model should be similar to that of its clean-model counterpart. The ASR is the probability that a trigger-carrying image is misclassified into the attacker-preset backdoor target class; the higher the ASR, the better the backdoor attack from the attacker's perspective.
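Both metrics can be measured with a simple evaluation loop, as in the sketch below (the helper name and the data loaders are assumptions, not from the paper):

```python
import torch

@torch.no_grad()
def cda_and_asr(model, clean_loader, triggered_loader, target_class, device="cpu"):
    model.eval()
    # CDA: accuracy on clean, trigger-free test images.
    correct, total = 0, 0
    for x, y in clean_loader:
        pred = model(x.to(device)).argmax(dim=1).cpu()
        correct += (pred == y).sum().item()
        total += y.numel()
    cda = correct / total

    # ASR: fraction of trigger-carrying images classified into the backdoor target class.
    hit, total_t = 0, 0
    for x, _ in triggered_loader:  # images already carry the trigger
        pred = model(x.to(device)).argmax(dim=1).cpu()
        hit += (pred == target_class).sum().item()
        total_t += pred.numel()
    asr = hit / total_t
    return cda, asr
```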
### Results
Before delving into the results of the OmClic-based backdoor attack, we give the baseline (plain) backdoor attack performance for later comparison.
#### 5.3.1 Plain Backdoor
As for the plain backdoor, we randomly select a few images (59 in total) from the 1st-59th classes for the PubFig task. For STL and Tiny-ImageNet, we select those images from the 1st-9th classes, as there are only ten classes (one class being the targeted class). For each selected image, we stamp a blue square on the bottom-left corner as the trigger to form a poisoned image, whose label is correspondingly changed to the targeted label (the 0th class); see the backdoor overview in Figure 7 (a). This data poisoning process is a typical means of inserting a backdoor [1; 38], where the content and the label of the poisoned image are obviously inconsistent and can be trivially captured by human auditing, because this is a dirty-label image poisoning attack: the trigger-carrying, label-altered images (see Figure 7 (a)) are directly exposed to the human inspector.
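A minimal sketch of this dirty-label poisoning step is given below; the 20-pixel trigger size and the RGB channel ordering are assumptions:

```python
import numpy as np

def stamp_trigger(img: np.ndarray, size: int = 20) -> np.ndarray:
    """Stamp a blue square trigger on the bottom-left corner of an H x W x 3 uint8 RGB image."""
    poisoned = img.copy()
    h = poisoned.shape[0]
    poisoned[h - size:h, 0:size] = (0, 0, 255)  # blue square, bottom-left
    return poisoned

# Dirty-label poisoning: relabel the trigger-carrying image to the backdoor target class 0.
# poisoned_img, poisoned_label = stamp_trigger(img), 0
```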
Instead of training the models from scratch, we leverage transfer learning to expedite training. Transfer learning is run for \(100\) epochs with a learning rate of \(0.0001\) and learning rate decay. For both ResNet18 and VGG16, the pretrained models are trained on ImageNet [39].
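A minimal PyTorch sketch of this transfer-learning setup is shown below (the optimizer choice is an assumption; the paper only states the learning rate and the use of ImageNet-pretrained weights):

```python
import torch
import torch.nn as nn
from torchvision import models

def build_finetune_model(num_classes: int):
    # ImageNet-pretrained backbone with a replaced classification head.
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # 100 epochs with lr decay
    return model, optimizer
```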
For each dataset, ten trials are repeated and the average result is reported. As shown in Figure 8, the ASR of the plain backdoor, namely plain ASR, is \(100\%\) for all datasets. For the \(224\times 224\times 3\) model input size, the CDA of the backdoored models are \(95.8\%\), \(92.4\%\) and \(89.2\%\) for PubFig, STL and Tiny-ImageNet, respectively. As affirmed in Figure 8, the CDA of the backdoored model is always similar to that of the clean model.
#### 5.3.2 OmClic based Backdoor
In this context, OmClic is utilized to create poisonous images whose content is consistent with their labels. As exemplified in Figure 7 (b) and Figure 9 with the face recognition task, we randomly select three images (the three rightmost faces in Figure 9, e.g., from persons B, C, and D), each from a different class and with a different size. We stamp a trigger on each of these images to obtain the trigger-carrying target images. We then randomly select an image (the left-most face, e.g., from person A) as the source image to disguise all three trigger-carrying target images, forming an attack image (the second left-most face, e.g., A\({}^{\prime}\) in
Figure 7: Overview of plain backdoor as baseline and OmClic based backdoor.
Figure 9), which is a poisonous image in the backdoor attack. Here, person A is the target person; in other words, any person's face carrying the trigger will be misclassified into person A once the backdoored model is deployed for inference. Note that the content and the label of A\({}^{\prime}\) are consistent, so it trivially evades human inspection. For the model, however, it sees trigger-carrying persons B, C, and D during training but deems their labels to be person A, so that a strong association between the trigger and the infected class A is learned, consequently inserting the backdoor successfully.
We repeated the OmClic-based backdoor attack ten times and report the average. The first row of Figure 8 depicts the results of PubFig on all three evaluated compromised model input sizes. Taking \(224\times 224\times 3\) as an example, the compromised model input size means the victim model accepts images of size \(224\times 224\times 3\), so the victim user has to resize the training images to this size through the default resize function of the DL pipeline. More specifically, the CDA of the OmClic-backdoored models is 91.6%, 92.5%, and 95.8% for model input sizes of \(96\times 96\times 3\), \(112\times 112\times 3\), and \(224\times 224\times 3\), respectively. Each CDA of the OmClic-based backdoor is close to the CDA of the plain-backdoor-attacked model and of the clean-model counterpart. As for the ASR, it reaches 100% for each of the three input sizes, again the same as the plain backdoor attack.
As for the other two datasets, STL and Tiny-ImageNet, the results are detailed in the second and third rows of Figure 8. Generally, they follow the same trend as PubFig above. Therefore, we conclude that the OmClic-based backdoor is able to attack multiple model input sizes and achieves the same attack performance as the plain backdoor.
#### 5.3.3 Poisoning Rate Effect
Here, we reduce the number of poisonous images for the PubFig dataset. In previous experiments, we used 59 poisonous images: each of the 59 OmClic target images is selected from the 1st-59th classes (one image per class) and disguised into a different source image from the backdoor-infected 0th class. The total number of images in the 0th class is 90. We now reduce the number of target images to \(20,30,40,50\), so that some of the 1st-59th classes no longer provide target images. The model architecture is still ResNet18 and the model input size is set to \(224\times 224\times 3\).
Results are detailed in Figure 10. As expected, the ASR decreases as the number of poisonous images decreases. Nonetheless, the ASR is still up to 95.4% even when the number of poisonous images is reduced by about 50% (from 59 to 30). This corresponds to a poison rate of 0.61% out of all 4,921 PubFig training images (i.e., 30/4921).
## 6 Discussion
### Model Agnostic
The poisonous images crafted through OmClic are equally effective against different model architectures as long as the
Figure 8: Evaluating OmClic based backdoor on ResNet18 with multiple input sizes.
Figure 10: Evaluating the effect of different poisoning rates in the OmClic-based backdoor. The model and dataset are ResNet18 and PubFig, respectively.
Figure 9: Clean-label image poisoning with OmClic to insert a backdoor. Image att is the poisonous image, carrying the same label as image src, that is seen by the data curator. However, once image att is used for model training after image downsizing is applied, one of the three right-most images is seen by the model (depending on the model input size setting) while its label remains the same as that of src.
model input size falls within the compromised input sizes. Here, we use the same set of OmClic-poisoned PubFig images from Section 5.3.2 to evaluate the backdoor effectiveness when these images are used to train a VGG16 model (ResNet18 is evaluated in Section 5.3.2).
The results are detailed in Figure 11. This set of poisonous images clearly succeeds in inserting the backdoor into the VGG16 model. More specifically, the CDA of the OmClic-based backdoor is almost the same as that of the plain backdoor and of the clean model without a backdoor, and the ASR of the OmClic-based backdoor is the same as that of the plain backdoor. These observations hold for each of the three targeted model input sizes of 96, 112, and 224. Therefore, the OmClic-based poisonous images are transferable to different model architectures as long as one of the targeted model input sizes is chosen by the model user for training.
### Backdoor Variant
The above experiments focus on the common source-agnostic backdoor attack enabled by OmClic, where an input from any class carrying the trigger will be misclassified into the compromised class. We note that OmClic can also be exploited to conduct advanced backdoor variants such as the source-specific backdoor attack (SSBA) [40, 41], which is harder to counter. In addition, multiple backdoors, each targeting a different class [38, 42], can be performed through OmClic.
We exemplify the methodology with SSBA, where only inputs from specific source classes carrying the trigger can activate the backdoor; inputs from other, non-source classes cannot activate the backdoor even if they carry the trigger. It is trivial to perform SSBA by exploiting OmClic; we use face recognition as an example. The poisonous samples of SSBA require so-called cover samples to suppress the backdoor effect for non-source classes in the presence of the trigger. Suppose person A is the source class, person B is a non-source class, person D is the infected class, and a natural sunglass (or, e.g., an earring) serves as the trigger. First, some non-cover images are created following the same procedure as in Section 5.3.2 by embedding sunglass-wearing person A images into images of person D through OmClic. For cover images, we simply mix sunglass-wearing person B images into the training dataset; there is in fact no need to apply OmClic in this case, because the sunglass-wearing person B images are not suspicious at all as their _labels do not need to be altered_. Once the face recognition model is trained on the non-cover and cover poisonous samples, it will still correctly classify person B images even when person B wears the sunglass trigger, but it will misclassify person A into person D when person A wears the sunglass trigger; the backdoor effect is thus further associated with specific class(es).
We have performed experiments on the above-described OmClic-based SSBA attack. More precisely, 50 non-cover samples, all from the 1st person (i.e., the source class), are created and camouflaged into the 0th person, which is the backdoor-infected category. For cover samples, sunglass-wearing persons (all persons except the 1st person, with labels unaltered) are taken into consideration, and the number of cover samples is varied. Generally, all sunglass-wearing 1st-person images should be misclassified into the 0th person, while sunglass-wearing persons from other categories should still be correctly classified into their ground-truth categories, e.g., the 2nd person into the 2nd person. Table 4 shows the OmClic-based SSBA performance. On one hand, increasing the number of cover samples gradually lowers the source-class ASR, as expected: there is only one source class (the 1st person), and if too many cover samples are used, the strong association between the presence of the trigger and the targeted class is diminished, suppressing the ASR to some extent. On the other hand, for a similar reason, when the number of cover samples increases, the non-source-class ASR decreases. The ratio between cover samples and non-cover samples therefore requires proper setting. When the number of cover samples is set to 10, the source-class ASR is up to 97.2%, while the non-source-class ASR remains sufficiently low at 1.7% and the CDA on cover samples is still similar to the clean-model CDA.
### Countermeasures
Here we discuss potential countermeasures against OmClic and recommend some lightweight prevention methods that
|  | 50 cover samples | 30 | 20 | 10 |
| --- | --- | --- | --- | --- |
| Clean CDA | 95.6% | 95.5% | 95.5% | 95.6% |
| Cover sample CDA | 95.5% | 95.1% | 94.4% | 94.2% |
| Source class ASR | 75.8% | 80.1% | 84.2% | 97.2% |
| Non-source class ASR | 0.6% | 0.8% | 1.1% | 1.7% |
Table 4: OmClic based source-specific backdoor attack performance.
Figure 11: Evaluating OmClic based backdoor on VGG16 model with PubFig dataset.
are easy to use to mitigate the OmClic-based backdoor security threat. Note that it is possible to apply backdoor defenses to counter the OmClic-based backdoor attack, but they are often expensive or require deep learning expertise [43]. We instead focus on countermeasures that directly counter the camouflage attack, thereby consequently thwarting the backdoor attack. There are existing camouflage detection methods such as Decamouflage [44], which identifies camouflaged images by automatically examining the pixel domain or the spectral domain of a given received image; however, it requires inspecting each image and still incurs a certain computational cost. There are also prevention countermeasures that harden the resize function [21], essentially eliminating the feasibility of crafting effective attack images; however, this requires changing existing resize functions and can increase the computational intensity of the resizing operation.
We have identified a lightweight and easy-to-use prevention method that simply applies an intermediate resizing operation, namely InterResize. Specifically, it resizes the received image, e.g., A, to a random height/width, obtaining an intermediate image A\({}_{\text{interm}}\), before consecutively resizing it into a smaller image A\({}_{\text{small}}\) with the ultimate model input size of the given model. Here, the width/height of the intermediate image A\({}_{\text{interm}}\) should not be an integral multiple of the width/height of A\({}_{\text{small}}\). For example, if the width and height of A\({}_{\text{small}}\) are \(96\times 96\), the width and height of A\({}_{\text{interm}}\) can be set to any values except integral multiples such as \(\{192\times 192,288\times 288,288\times 96,288\times 192,\cdots\}\). If the integral-multiple relation holds, A\({}_{\text{small}}\) may still show obvious artifacts of the target image; see an example in Figure 12 (top row). With this simple operation, the image-scaling attack effect is disrupted because a different width/height is applied. We have experimentally affirmed the practicality of this prevention method: the output image is always the same as the source image rather than the attacker-intended target image of the OmClic attack; see an example in Figure 12 (bottom row).
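A minimal sketch of the InterResize idea is given below; the random size range and the interpolation defaults of Pillow are assumptions:

```python
import random
from PIL import Image

def inter_resize(img: Image.Image, final: int = 96, lo: int = 97, hi: int = 400) -> Image.Image:
    # Pick an intermediate size whose width/height are not integral multiples
    # of the final model input size, so the attacker-crafted sampling grid breaks.
    while True:
        w, h = random.randint(lo, hi), random.randint(lo, hi)
        if w % final != 0 and h % final != 0:
            break
    intermediate = img.resize((w, h))        # random intermediate resize
    return intermediate.resize((final, final))  # final resize to the model input size
```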
## 7 Conclusion
We have proposed OmClic, which allows simultaneously disguising multiple target images into a source image to form an attack image (visually similar to the source image); this is achieved by abusing the default resizing operation provided by popular DL frameworks through a devised multi-objective optimization. Compared to the existing SOTA, OmClic achieves the same deceptive effect in addition to its multiple-image disguising capability. Moreover, OmClic substantially reduces the computational cost, expediting the crafting of camouflage attack images. The OmClic-enabled backdoor attack through clean-label poisonous images can compromise a given model regardless of the user-chosen model input size, as long as it is covered by OmClic. Extensive experiments have validated that the OmClic-based backdoor matches the efficacy of baseline attacks. Importantly, we have provided a lightweight and easy-to-deploy OmClic prevention approach to thwart such attacks.
Image camouflage can be used to craft clean-label attack images that insert a backdoor into a DL model. However, a single attack image can only fit one DL model input size, so the attack budget grows substantially when multiple common input sizes are to be attacked. This work proposes constructing attack images via camouflage such that a single attack image simultaneously fits multiple DL model input sizes, which results in OmClic. Through OmClic, a backdoor can always be inserted regardless of which input size the user chooses when training the DL model, while the attack budget remains constant, e.g., a small fraction of the poisoning rate. Evaluations across diverse input sizes show that OmClic can attack multiple input sizes simultaneously, and the OmClic-based backdoor attack is shown to work for various image types and settings. In these experiments, the OmClic-based backdoor insertion, when the user
2301.13340 | Affinity Uncertainty-based Hard Negative Mining in Graph Contrastive
Learning | Hard negative mining has shown effective in enhancing self-supervised
contrastive learning (CL) on diverse data types, including graph CL (GCL). The
existing hardness-aware CL methods typically treat negative instances that are
most similar to the anchor instance as hard negatives, which helps improve the
CL performance, especially on image data. However, this approach often fails to
identify the hard negatives but leads to many false negatives on graph data.
This is mainly due to that the learned graph representations are not
sufficiently discriminative due to oversmooth representations and/or
non-independent and identically distributed (non-i.i.d.) issues in graph data.
To tackle this problem, this article proposes a novel approach that builds a
discriminative model on collective affinity information (i.e., two sets of
pairwise affinities between the negative instances and the anchor instance) to
mine hard negatives in GCL. In particular, the proposed approach evaluates how
confident/uncertain the discriminative model is about the affinity of each
negative instance to an anchor instance to determine its hardness weight
relative to the anchor instance. This uncertainty information is then
incorporated into the existing GCL loss functions via a weighting term to
enhance their performance. The enhanced GCL is theoretically grounded that the
resulting GCL loss is equivalent to a triplet loss with an adaptive margin
being exponentially proportional to the learned uncertainty of each negative
instance. Extensive experiments on ten graph datasets show that our approach
does the following: 1) consistently enhances different state-of-the-art (SOTA)
GCL methods in both graph and node classification tasks and 2) significantly
improves their robustness against adversarial attacks. Code is available at
https://github.com/mala-lab/AUGCL. | Chaoxi Niu, Guansong Pang, Ling Chen | 2023-01-31T00:18:03 | http://arxiv.org/abs/2301.13340v2 | # Affinity Uncertainty-based Hard Negative Mining in Graph Contrastive Learning
###### Abstract
Hard negative mining has been shown to be effective in enhancing self-supervised contrastive learning (CL) on diverse data types, including graph contrastive learning (GCL). Existing hardness-aware CL methods typically treat negative instances that are most similar to the anchor instance as hard negatives, which helps improve the CL performance, especially on image data. However, this approach often fails to identify the hard negatives and instead leads to many false negatives on graph data. This is mainly because the learned graph representations are not sufficiently discriminative, owing to over-smooth representations and/or non-i.i.d. issues in graph data. To tackle this problem, this paper proposes a novel approach that builds a discriminative model on _collective affinity_ information (i.e., two sets of pairwise affinities between the negative instances and the anchor instance) to mine hard negatives in GCL. In particular, the proposed approach evaluates how confident/uncertain the discriminative model is about the affinity of each negative instance to an anchor instance to determine its hardness weight relative to the anchor instance. This uncertainty information is then incorporated into existing GCL loss functions via a weighting term to enhance their performance. The enhanced GCL is theoretically grounded: the resulting GCL loss is equivalent to a triplet loss with an _adaptive_ margin being exponentially proportional to the learned uncertainty of each negative instance. Extensive experiments on 10 graph datasets show that our approach i) consistently enhances different state-of-the-art GCL methods in both graph and node classification tasks, and ii) significantly improves their robustness against adversarial attacks.
Graph contrastive learning, Hard negative mining, Uncertainty estimation, Affinity learning.
## I Introduction
Graphs are ubiquitous and play an important role in various fields, such as social networks, bioinformatics, and chemistry. Due to their non-Euclidean nature, learning expressive graph representations is a crucial foundation of different graph mining tasks, such as graph classification and node classification. In recent years, graph neural networks (GNNs) have become predominant in achieving this goal. Most existing GNNs focus on supervised or semi-supervised learning settings [1, 2, 3, 4], where class label information is required for training the GNNs. However, obtaining such information is hard or costly, especially for graph data that is at large scale and/or demands strong domain knowledge to accurately annotate. Recently, self-supervised learning of GNNs [5, 6], which can learn graph representations without accessing ground-truth labels, was introduced to tackle this issue and has attracted significant research interest.
Graph contrastive learning (GCL) has become one of the most popular self-supervised methods for graph representation learning [7, 8, 9, 10, 11, 12, 13, 14]. It focuses on learning representations by maximizing the mutual information between augmentations of the same instance, in which the augmentations of the same graph/node are often treated as positive instances, with the other graphs/nodes as negative instances [5, 6].
Despite the impressive successes achieved by current GCL methods, their learning capability can be largely limited by the way they choose negative samples [15, 16, 17, 18]. One commonly-used negative selection approach is to randomly select negative instances from a sufficiently large batch or a memory bank, and then treat all negative instances equally in contrastive learning. However, this approach cannot exploit negative instances that can provide more information for the contrastive learning than the others. Particularly, many prior studies [15, 16, 18] have shown that _hard negative_ instances which are difficult to discriminate from the positive are more crucial than the counterparts (e.g., easy negatives that are distant from the positive in both semantics and representations) to the learning of discriminative features.
Many recent contrastive learning (CL) methods [15, 16, 17, 19, 20] thus incorporate hard negative mining methods into their training process to leverage these hard negative instances. These hardness-aware CL methods typically treat negative instances that are most similar to the anchor instance as the hard negatives, which helps further improve the CL performance, especially on image data [15, 16, 17, 19, 20]. However, this hard negative mining approach often performs poorly on graph data, as shown in some recent studies [18, 21] and our experiments. This is mainly because the learned graph representations are not sufficiently discriminative due to i) the non-i.i.d. (independent and identically distributed) nature of graph data, e.g., nodes with the same label tend to be densely connected in graph data, and ii) the over-smooth graph representations resulting from the iterative message passing mechanism. Consequently, for graph data, the most similar negatives to the anchor can be false negatives with high probability. To address this issue, the very recent method ProGCL [18] imposed a beta mixture model on the pairwise similarities between the negatives and the anchor to
estimate the probability of a negative being a true one, and it subsequently combined the estimated probability and the pairwise similarity to measure the hardness of the negatives. The method relies on the prior that the similarity distribution of negatives w.r.t. the positive is bimodal, and it works well in node classification tasks; however, it fails when its prior is not fully met. As shown in our experiments (Table III in Sec. IV-B), such failure cases occur in most graph classification datasets, where ProGCL brings very marginal improvement, or even worse performance, compared to the original GCL methods.
This paper introduces a novel approach, dubbed AUGCL, to tackle this problem. AUGCL learns a data-driven, affinity-based uncertainty estimator to evaluate the hardness of negative instances relative to each anchor instance, meaning that the hardness of an instance is dependent on the given anchor instance, as shown by an example in Fig. 1(a-b). Particularly, AUGCL builds a discriminative model on _collective affinity_ information (i.e, two sets of pairwise affinities between the negative instances and the anchor instance) to evaluate how confident/uncertain the discriminative model is about the affinity of each negative instance to the anchor instance. Instances that have a larger affinity uncertainty would be more likely to be hard negatives, and they are subsequently assigned with a larger hard-negative weight to receive more attention from the GCL models. By doing so, AUGCL learns discriminative affinity uncertainties for the negative instances relative to each anchor instance, as shown by the results of the anchor instance 11 in Fig. 1(b) and (d), where small and large uncertainty-based hardness values are assigned to false negatives and true negatives, respectively. By contrast, the current similarity-based methods that regard the most similar negative instances to the anchor instance as hard negatives fail to identify the truly hard negatives but lead to many false negatives, as shown in Fig. 1(c). Those learned hardness results can then be seamlessly incorporated into popular GCL models (e.g., InfoNCE-based models [22]) as a hardness weight to enhance their performance. AUGCL addresses a similar issue as ProGCL, but it eliminates the prior information posited in ProGCL, enabling AUGCL to work more effectively on diverse node-level and graph-level datasets.
In summary, this work makes the following three main contributions.
* We propose a novel approach AUGCL that utilizes the modeling of collective affinities to take account of the non-i.i.d. and over-smooth representations issues in graph data via the learning of an uncertainty-based hardness measure. To the best of our knowledge, it is the first work that addresses the problem using an uncertainty learning framework.
* We show theoretically that our approach transforms popular GCL losses such as InfoNCE into a triplet loss with an adaptive hardness-based margin, enforcing a large margin for hard negatives while pulling false negatives close to anchor instances.
* Extensive experiments on 10 graph datasets demonstrate the superiority of AUGCL in consistently enhancing different state-of-the-art GCL methods in both graph and node classification tasks (having maximal classification accuracy improvement by \(\sim\)2% and \(\sim\)1.5%, respectively), and the robustness against graph adversarial attacks (maximal improvement by \(\sim\)8%).
## II Related Works
### _Graph Contrastive Learning_
Recently, contrastive learning [23, 24, 25] has become a prominent technique in self-supervised learning. It has been successfully adapted into diverse domains, including the graph domain. A number of GCL methods [7, 8, 9, 10, 11, 12, 13] have been proposed. DGI [7] is an early attempt that obtained node representations by maximizing the mutual information between node embeddings and high-level graph information. MVGRL [8] improved DGI by introducing different structural views to learn node and graph-level representations. InfoGraph [9] performed contrastive learning by directly maximizing the consistency between sampled subgraphs and pooled graph representations. Additionally, GraphCL [10] systematically explored the influence of different augmentations on graph-level contrastive learning. GCA [11] proposed to perform contrastive learning with adaptive augmentation on the topology and node attribute level for node classification. Besides, some studies have proposed to enhance the GCL by automating data augmentations [12] or discarding explicit data augmentations [13]. The main differences among these methods lie on the way they obtain positive pairs. By contrast, our approach AUGCL is focused on hard negative mining, which is orthogonal to these GCL methods and can be plugged into their loss function to improve their performance on graph/node-level tasks.
Fig. 1: (_a_): Two groups of data instances in blue and orange. (_b_): The affinity uncertainty-based hardness results learned by our approach using instance 11 or 26 as the anchor instance. Instances with a larger uncertainty are more likely to be hard negative samples w.r.t. the anchor instance. (_c_): The histograms of the similarity of the instances to the anchor instance 11. It is clear that treating the most similar instances to the anchor as the hard negatives can lead to many false negatives. (_d_): The uncertainty results learned by our approach for the instances w.r.t the anchor instance 11, where true negatives including hard negatives have large uncertainty values (and thus large hardness weights) while false negative cases receive very small uncertainty values.
### _Hard Negative Mining in Contrastive Learning_
Hard negative mining refers to generating or mining the negatives which are difficult to discriminate from the positive. Various methods have been proposed to perform hard negative mining to facilitate contrastive learning, including employing mixup strategy [26] to mix the anchor instance and negative instance to synthesize hard negatives [20, 27, 15, 28], and developing unsupervised sampling methods for selecting hard negative samples [16, 17]. Recent state-of-the-art methods in this line of research include DCL [17] and HCL [16]. These methods are mainly focused on image data and they often treat negative instances that are most similar to the anchor instance as the hard negatives. However, for graph data, the similar negatives could be false negatives relative to the anchor, and the GCL performance would be degraded by employing these hard negative mining methods [18, 21]. To address this issue, ProGCL [18] exploited a two-component beta mixture model to estimate the probability of negative instances being true for an anchor and then measured the hardness of negative instances by integrating the estimated probability and the similarity between the negative and the anchor. Similarly, our method also measures the hardness of negatives for each anchor instance. However, we employ the uncertainty estimation model to directly learn the negative instance hardness. The learned hardness is then incorporated into the contrastive loss via a weighting term, resulting in an anchor-instance-adaptive contrastive learning framework with good theoretical support.
### _Uncertainty Estimation_
Numerous methods and theories have been introduced to measure the prediction uncertainty, e.g., by using the maximum of predicted probabilities [29, 30, 31], the prediction entropy/energy [32, 33, 34, 30], or an extra (void/background) class [35, 36, 34]. These methods focus on calibrating prediction confidence in supervised learning, whereas we utilize uncertainty estimation under the self-supervised setting to empower contrastive learning. Our work is motivated by the observation that hard samples are typically the instances at the decision boundary between the positive and negative instances, which are also the samples that learning models are uncertain about. Thus, uncertainty estimation offers an effective way to measure the hardness of negative instances. To be applicable in graph contrastive learning, AUGCL is designed in a novel way by using an anchor-instance-dependent uncertainty learning approach.
## III AUGCL: Affinity Uncertainty-based Graph Contrastive Learning
### _Preliminaries_
Self-supervised graph representation learning has demonstrated promising performance in empowering diverse graph learning tasks. This work focuses on node-level and graph-level tasks. Particularly, let \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) denote a graph where \(\mathcal{V}\) and \(\mathcal{E}\) denote the set of nodes and edges respectively, then for a node-level task, the goal of self-supervised graph representation learning is to leverage a single graph \(\mathcal{G}\) to learn an encoder \(\psi(\mathcal{V},\mathcal{E})\) without using the labels of nodes so that \(\psi(\mathcal{V},\mathcal{E})\) can yield an expressive low-dimensional embedding \(z_{i}\) for each node in \(\mathcal{V}\). The resulting node embeddings \(\mathcal{Z}=\{z_{i}\}_{i=1}^{|\mathcal{V}|}\) can then be used in various downstream node-level tasks, such as node classification. For a graph-level task, the goal instead is to learn a graph encoder \(\psi(\mathcal{V}_{i},\mathcal{E}_{i})\) given a set of \(N\) graphs \(\{\mathcal{G}_{i}=(\mathcal{V}_{i},\mathcal{E}_{i})\}_{i=1}^{N}\), where the encoder \(\psi(\mathcal{V}_{i},\mathcal{E}_{i})\) outputs a low-dimensional embedding \(z_{i}\) for each graph \(\mathcal{G}_{i}\), and the graph embeddings \(\mathcal{Z}=\{z_{i}\}_{i=1}^{N}\) can then be used in various downstream graph-level tasks, e.g., graph classification. Our approach can be used to improve the self-supervised learning of graph representations and node representations, as shown in Sec. IV. Without loss of generality, we use the graph-level tasks to introduce our approach below.
### _The Proposed Approach AUGCL_
#### III-B1 Popular Graph Contrastive Learning Methods and Their Weaknesses
Graph contrastive learning is one of the most popular approaches for self-supervised graph representation learning. As an instance-wise discriminative approach, it aims to pull two different augmentations of the same graph closer and push augmentations of different graphs apart [8, 10]. InfoNCE [22] is among the most popular contrastive learning loss functions to achieve this goal. Specifically, given a mini-batch of randomly sampled graphs \(\{\mathcal{G}_{i}\}_{i=1}^{N}\), two augmentation functions \(t_{1}\) and \(t_{2}\) are first sampled from the augmentation pool \(\mathcal{T}\), which consists of all possible augmentations. Then, two graph views \(\{\widetilde{\mathcal{G}}_{i}\}_{i=1}^{N}\) and \(\{\widehat{\mathcal{G}}_{i}\}_{i=1}^{N}\) are generated by applying \(t_{1},t_{2}\) to each graph. The embeddings \(\{\widetilde{z}_{i}\}_{i=1}^{N}\) and \(\{\widehat{z}_{i}\}_{i=1}^{N}\) of the augmented graphs are obtained by feeding the augmented graphs into a shared GNN encoder \(\psi(\cdot)\), followed by a projection head (2-layer perceptron) [24]. For an anchor instance \(\widetilde{\mathcal{G}}_{i}\) (a graph augmented from \(\mathcal{G}_{i}\) using \(t_{1}\)), the positive is \(\widehat{\mathcal{G}}_{i}\) (a graph augmented from the same graph \(\mathcal{G}_{i}\) but using a different augmentation \(t_{2}\)), while the source of the negative instances is \(\{\widehat{\mathcal{G}}_{j}\}_{j=1}^{N}\), from which negative instances are sampled. To enforce the maximization of the consistency between positive embeddings, the pairwise objective for a positive pair \((\widetilde{z}_{i},\widehat{z}_{i})\) is formulated as:
\[\ell_{\text{InfoNCE}}(\widetilde{z}_{i},\widehat{z}_{i})=-\log\frac{e^{h(\widetilde{z}_{i},\widehat{z}_{i})/\tau}}{e^{h(\widetilde{z}_{i},\widehat{z}_{i})/\tau}+\sum\limits_{j,j\neq i}^{N}e^{h(\widetilde{z}_{i},\widehat{z}_{j})/\tau}}, \tag{1}\]
where \(\tau\) denotes the temperature parameter and \(h(\widetilde{z}_{i},\widehat{z}_{j})\) is the cosine similarity function measuring similarity between \(\widetilde{z}_{i}\) and \(\widehat{z}_{j}\).
Although these graph contrastive learning methods have achieved great success in graph representation learning, they often fail to consider the semantics of negatives in \(\{\widehat{\mathcal{G}}_{j}\}_{j=1}^{N}\). Consequently, instances that share the same semantics with the positive can be sampled and treated as negatives in (1). This false negative sampling issue, also known as sampling bias in [17], would hinder the learning of contrastive representations between positive instances and negative instances. More importantly, the contrastive learning cannot exploit _hard negatives_, i.e., instances that are similar to but semantically
different from the anchor, which are the driving force for contrastive learning to learn substantially more discriminative representations, as shown both empirically and theoretically in the literature [15, 16, 18].
#### III-B2 Our Affinity Uncertainty-enabled Approach for Overcoming the Weaknesses
To address the negative sampling weaknesses discussed in Sec. III-B1, we propose a novel framework for learning an Affinity Uncertainty-based hardness measure for enhancing current state-of-the-art Graph Contrastive Learning methods, termed **AUGCL**. The key idea is to first learn the hardness of a negative instance relative to each anchor instance by comparing the affinity between them to the affinities of the anchor instance to the other instances. The hardness results can then be plugged into a contrastive loss, e.g., InfoNCE, to improve the effectiveness of current GCL methods in utilizing the hard negatives.
**Overview of AUGCL.** Since the hardness of a negative instance varies largely w.r.t. different anchor instances, our approach AUGCL aims to learn a hardness measure based on the relative affinity between the negative instance and each anchor instance. That is, for an anchor instance \(\widetilde{z}_{i}\) and its negative instance candidate set \(\widehat{\mathcal{Z}}_{i}=\{\widehat{z}_{j}\}_{j=1}^{N}\), we learn a single hardness measure function \(\phi(\widehat{z}_{j}|\widetilde{z}_{i};\Theta):\widehat{\mathcal{Z}}_{i}\to\mathbb{R}\) that yields a hardness value for each \(\widehat{z}\in\widehat{\mathcal{Z}}_{i}\) relative to \(\widetilde{z}_{i}\). Note that the function \(\phi\) parameterized by \(\Theta\) is trained across all anchor instances; yet the hardness it yields for the negative instance \(\widehat{z}_{j}\) is dependent on the anchor \(\widetilde{z}_{i}\). For brevity, \(\phi(\widehat{z}_{j}|\widetilde{z}_{i};\Theta)\) is denoted as \(\phi_{i}(\widehat{z}_{j};\Theta)\) hereafter.
Unlike current hardness measures that define the hardness of a negative instance based on its individual relation to the anchor instance (e.g., the similarity between them), one key novelty in AUGCL is that it defines the hardness based on two groups of pairwise affinities between the negative instances and the anchor instance. More specifically, we introduce the concept of affinity uncertainty below to achieve this goal:
**Definition 1** (Affinity Uncertainty).: _Given an anchor instance \(\widetilde{z}_{i}\) and its negative instance candidate set \(\widehat{\mathcal{Z}}_{i}=\{\widehat{z}_{j}\}_{j=1}^{N}\), and let \(\mathcal{C}_{1}^{i}\) and \(\mathcal{C}_{2}^{i}\) be two disjoint groups of instances in \(\widehat{\mathcal{Z}}_{i}\) such that: one group \(\mathcal{C}_{1}^{i}\) includes the instances that are closely aligned and distributed around the anchor \(\widetilde{z}_{i}\), while the other group \(\mathcal{C}_{2}^{i}\) contains the rest of other instances; and \(\widehat{\mathcal{Z}}_{i}\)= \(\mathcal{C}_{1}^{i}\cup\mathcal{C}_{2}^{i}\). Then the affinity uncertainty of each \(\widehat{z}\in\widehat{\mathcal{Z}}_{i}\) w.r.t. \(\widetilde{z}_{i}\) is defined as:_
\[\phi_{i}(\widehat{z})=g(\widehat{z},\mathcal{C}_{1}^{i},\mathcal{C}_{2}^{i}), \tag{2}\]
_where \(g\) is an uncertainty estimator that evaluates how confident the estimator is about the affinity of \(\widehat{z}\) to the instances in the anchor instance-centered group \(\mathcal{C}_{1}^{i}\) compared to the other group \(\mathcal{C}_{2}^{i}\)._
The affinity uncertainty in (2) takes a holistic approach that considers diverse affinities of the negative instances within and across the two groups \(\mathcal{C}_{1}^{i}\) and \(\mathcal{C}_{2}^{i}\) to learn an accurate hardness for each negative instance \(\widehat{z}\). As shown in the literature (e.g., [36]) and Fig. 1, instances that are ambiguous to distinguish are assigned large uncertainty values. These instances typically have a poor affinity to both groups \(\mathcal{C}_{1}^{i}\) and \(\mathcal{C}_{2}^{i}\), such as those located on the boundary between the two groups. By contrast, if the instances are coherently aligned within \(\mathcal{C}_{1}^{i}\) or \(\mathcal{C}_{2}^{i}\), their uncertainty would be small. Thus, this type of uncertainty can be naturally used to define the hardness of the negative instances.
The obtained hardness can then be easily plugged into existing contrastive losses, such as the InfoNCE loss, via a weighting term for the negative instances. Particularly, the
Fig. 2: Overview of our approach AUGCL. _Left_: AUGCL-based graph contrastive learning. The objective and the general procedure are the same as in existing GCL methods, but AUGCL leverages affinity uncertainty to learn anchor-instance-dependent hardness-based instance weights \(\{w_{i1},w_{i2},\cdots,w_{iN}\}\) for all negative instances to improve existing GCL methods. _Right_: The proposed affinity uncertainty learning approach to obtain the weights. For an anchor \(\widetilde{z}_{i}\), AUGCL first obtains collective affinity information (i.e., pairwise affinities across the instances) via a binary partition of its negative instances. It then utilizes this affinity information to learn an uncertainty estimator that evaluates how confident the estimator is about the affinity of each negative instance \(\widehat{z}_{j}\) relative to the anchor instance \(\widetilde{z}_{i}\). A larger affinity uncertainty value \(u_{ij}\) indicates that \(\widehat{z}_{j}\) is more likely to be a hard negative and thus receives a larger weight \(w_{ij}\) (\(w_{ij}=\alpha u_{ij}\), where \(\alpha\) is a hyperparameter).
AUGCL-enhanced InfoNCE is given as follows:
\[\ell_{\text{AUGCL}}(\widetilde{z}_{i},\widehat{z}_{i})=-\log\frac{e^{h(\widetilde{z}_{i},\widehat{z}_{i})/\tau}}{e^{h(\widetilde{z}_{i},\widehat{z}_{i})/\tau}+\sum\limits_{j,j\neq i}^{N}w_{ij}e^{h(\widetilde{z}_{i},\widehat{z}_{j})/\tau}}, \tag{3}\]
where \(w_{ij}=\alpha\phi_{i}(\widehat{z}_{j};\Theta)\) is the hardness-based weight added to \(\widehat{z}_{j}\) relative to \(\widetilde{z}_{i}\). \(\phi_{i}(\widehat{z}_{j};\Theta)\) is the hardness learned by AUGCL for the negative instance \(\widehat{z}_{j}\) w.r.t. the anchor instance \(\widetilde{z}_{i}\), and \(\alpha\) is a hyperparameter. This enables effective exploitation of the hard negatives, as large weights are expected for hard negatives while small weights are expected for the other instances, e.g., false negatives.
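A minimal PyTorch sketch of the weighted objective in (3) is shown below (the default temperature value is an assumption; setting all weights to 1 recovers the standard InfoNCE of Eq. (1)):

```python
import torch
import torch.nn.functional as F

def weighted_info_nce(z1, z2, w, tau: float = 0.2):
    """z1, z2: the two augmented views (N x d); w: N x N matrix of weights w_ij."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = torch.exp(z1 @ z2.t() / tau)           # sim[i, j] = e^{h(z1_i, z2_j)/tau}
    pos = sim.diag()                              # positive pair term
    mask = ~torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    neg = (w * sim * mask.float()).sum(dim=1)     # hardness-weighted negatives, j != i
    return -torch.log(pos / (pos + neg)).mean()
```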
The overall procedure of AUGCL is illustrated in Fig. 2. It follows the standard graph contrastive learning in the graph augmentation and contrastive learning except that we incorporate the affinity uncertainty-based hardness through a weighting term into the contrastive loss as in (3). The right panel in Fig. 2 shows the steps of learning an anchor-dependent hardness measure \(\phi\) for each anchor \(\widetilde{z}_{i}\), consisting of instance partition and uncertainty estimation as indicated in Def. 1. Before introducing the details of these two components in Sec. III-C, below we demonstrate the theoretical motivation of the proposed method.
**Theoretical Motivation.** We show below that (3) is equivalent to a triplet loss with an adaptive margin exponentially proportional to the learned hardness-based weight \(\phi_{i}(\widehat{z}_{j};\Theta)\). This provides a more straightforward explanation of the working mechanism of the proposed weighting method.
**Theorem 1**.: _Let \(u_{ij}=\phi_{i}(\widehat{z}_{j};\Theta)\) be the affinity uncertainty-based hardness of a negative instance \(\widehat{z}_{j}\) w.r.t. the anchor instance \(\widetilde{z}_{i}\). When the projection function is an identity function and assuming that the positive instance is more similar to the anchor than the negative instances, minimizing the proposed objective in (3) is equivalent to minimizing a modified triplet loss with an adaptive margin \(m_{ij}=\frac{\tau}{2}\log(\alpha u_{ij})\), i.e.,_
\[\ell_{\text{AUGCL}}(\widetilde{z}_{i},\widehat{z}_{i})\propto\frac{1}{2\tau}\sum\limits_{j,j\neq i}^{N}\left(\|\widetilde{z}_{i}^{\prime}-\widehat{z}_{i}^{\prime}\|-\|\widetilde{z}_{i}^{\prime}-\widehat{z}_{j}^{\prime}\|+m_{ij}\right), \tag{4}\]
_where \(\widetilde{z}_{i}^{\prime}\) and \(\widehat{z}_{j}^{\prime}\) denote the normalized embeddings._
The proof of this theorem is detailed in Appendix B. From the theorem, we can see that the optimal embeddings for (4) should satisfy the following inequality:
\[\|\widetilde{z}_{i}^{\prime}-\widehat{z}_{i}^{\prime}\|\ll\|\widetilde{z}_{i}^{\prime}-\widehat{z}_{j}^{\prime}\|-m_{ij}, \tag{5}\]
where \(m_{ij}=\frac{\tau}{2}\log(\alpha u_{ij})\) and \(u_{ij}=\phi_{i}(\widehat{z}_{j};\Theta)\). Thus, \(m_{ij}\) is equivalent to a transformed affinity uncertainty-based hardness of the negative instance \(\widehat{z}_{j}\) relative to the anchor \(\widetilde{z}_{i}\), satisfying:
\[\begin{cases}m_{ij}\geq 0,&\text{if }\alpha u_{ij}\geq 1;\\ m_{ij}<0,&\text{otherwise}.\end{cases} \tag{6}\]
If \(\widehat{z}_{j}\) is a hard negative for \(\widetilde{z}_{i}\), the large uncertainty \(u_{ij}\) between \(\widetilde{z}_{i}\) and \(\widehat{z}_{j}\) makes the inequality (5) hard to satisfy through \(m_{ij}>0\), enforcing better representation learning. On the contrary, if the uncertainty \(u_{ij}\) is small, (5) can be easily satisfied with \(m_{ij}\ll 0\), reducing the impact of the possible false negative instances.
### _Instantiation of AUGCL_
We introduce an instantiation of our AUGCL framework in this subsection. As demonstrated in Def. 1, the affinity uncertainty-based hardness function \(\phi\) parameterized with \(\Theta\) can be decomposed into two modules, including a binary clustering function \(f:\{\widehat{z}_{j}\}_{j=1}^{N}\rightarrow\{0,1\}\) parameterized by \(\Theta_{f}\) and an uncertainty estimation function \(g:\{\widehat{z}_{j}\}_{j=1}^{N}\times\{0,1\}\rightarrow\mathbb{R}\) parameterized by \(\Theta_{g}\), i.e., \(\Theta=\{\Theta_{f},\Theta_{g}\}\). AUGCL is a generic framework. Different clustering and uncertainty estimation methods can be adopted in AUGCL to implement a specific model, as shown by our empirical results in Sec. IV-D. Below we describe the two modules of the best instantiated AUGCL model based on our experiments.
#### III-C1 Anchor-dependent Binary Partition of Negatives
Given an anchor \(\widetilde{z}_{i}\), binary clustering is used to partition the negative samples into two coherent groups - \(\mathcal{C}_{1}^{i}\) and \(\mathcal{C}_{2}^{i}\) - for subsequent affinity uncertainty estimation. Without having access to label information, clustering is often adopted on the full dataset to divide instances into several clusters [37, 38, 39, 40, 41, 42], and instances from clusters other than the anchor-based cluster are directly treated as negatives.
Our clustering differs from these existing methods in two main ways. First, we perform an anchor-dependent binary partition on only the negative instances in each batch of instances rather than the full dataset. Specifically, given a batch of node/graph embeddings \(\{\widehat{z}_{j}\}_{j=1}^{N}\), for each anchor \(\widetilde{z}_{i}\in\{\widetilde{z}_{i}\}_{i=1}^{N}\), we perform a binary partition on the negative instance candidates \(\{\widehat{z}_{j}\}_{j=1}^{N}\) using an existing clustering method (e.g., \(k\)-means), i.e., \(f_{k-\text{means}}:\{\widehat{z}_{j}\}_{j=1}^{N}\rightarrow\{0,1\}\), where \(\mathcal{C}_{1}^{i}=1\) is the label of the cluster centered around \(\widetilde{z}_{i}\) and \(\mathcal{C}_{2}^{i}=0\) is the other cluster. That is, the clustering assigns the cluster label \(C_{1}^{ij}=1\) if the instance \(\widehat{z}_{j}\) is sufficiently similar to \(\widetilde{z}_{i}\), and a different cluster label \(C_{2}^{ij}=0\) is assigned otherwise.
The second difference is that the obtained partitions are used to gain a sense of the affinity of an instance to the other instances, rather than serving directly as negative sample clusters. This affinity information is then used to evaluate the hardness of each negative instance through an uncertainty estimation model in AUGCL.
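As a rough sketch of the anchor-dependent partition described above (our own simplified illustration, not the authors' implementation), the snippet below clusters the negative candidates of a single anchor into two groups with scikit-learn's \(k\)-means and labels the cluster whose centroid is closest to the anchor as \(\mathcal{C}_{1}^{i}\); the embeddings are random placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
anchor = rng.normal(size=64)             # one anchor embedding (placeholder)
negatives = rng.normal(size=(127, 64))   # negative candidates of this anchor

# binary partition of the negative candidates
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(negatives)

# treat the cluster whose centroid is closest to the anchor as C1 (label 1)
c1 = int(np.argmin(np.linalg.norm(km.cluster_centers_ - anchor, axis=1)))
affinity_labels = (km.labels_ == c1).astype(int)   # 1: anchor-like, 0: other
print(affinity_labels[:10], affinity_labels.sum())
```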
#### Iv-C2 Affinity Uncertainty Estimation
For an anchor \(\widetilde{z}_{i}\), the binary cluster labels \(\{C_{1|2}^{ij}\}_{j=1}^{N}\) carry the affinity semantics of the instances \(\{\widehat{z}_{j}\}_{j=1}^{N}\) w.r.t. the anchor instance \(\widetilde{z}_{i}\). We further propose to perform an uncertainty estimation upon these affinity semantic-based labels for each anchor \(\widetilde{z}_{i}\in\{\widetilde{z}_{i}\}_{i=1}^{N}\), and use this uncertainty to measure the hardness of instances \(\{\widehat{z}_{j}\}_{j=1}^{N}\). By doing so, a large uncertainty-based hardness is assigned to fringe instances that lie around the boundary between the two clusters; these instances are typically hard negatives w.r.t. \(\widetilde{z}_{i}\). A small hardness is assigned otherwise.
Different uncertainty estimation methods can be used to specify this component. We found that the recently proposed method Deep Gambler (DG) [36] worked best in our experiments, so DG is used in AUGCL by default. Specifically,
DG extends a multi-class classification task to a problem that learns an extra class to represent the uncertainty of instances, in addition to guaranteeing the classification of the original classes. For an anchor instance \(\widetilde{z}_{i}\), given its associated negative instance candidates \(\{\widehat{z}_{j}\}_{j=1}^{N}\) and their affinity labels \(\{C_{1|2}^{ij}\}_{j=1}^{N}\), the DG-based uncertainty estimation is trained by minimizing the following loss:
\[\ell_{i}^{DG}=-\sum_{j}^{N}\log(p_{C_{1|2}^{ij}}*o+u_{ij}), \tag{7}\]
where \(p_{C_{1|2}^{ij}}\) is the predicted class probability on class \(C_{1|2}^{ij}\) from a multi-layer perceptrons-based (MLP-based) DG model \(g(\widehat{z},\mathcal{C}_{1}^{i},\mathcal{C}_{2}^{i};\Theta_{g})\) parameterized by \(\Theta_{g}\), \(u_{ij}\) is the uncertainty that the model \(g\) generates for the instance \(\widehat{z}_{j}\), and \(o\) is a reward parameter with a larger \(o\) encouraging \(g\) to be more confident in inferring and vice versa. The final loss of DG is computed across all anchor instances \(\widetilde{z}_{i}\in\{\widetilde{z}_{i}\}_{i=1}^{N}\).
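A minimal NumPy sketch of the per-anchor loss in Eq. (7), assuming the class probabilities and uncertainties have already been produced by the DG model for the negatives of one anchor (array names and values are ours, for illustration only):

```python
import numpy as np

def dg_loss(probs, u, labels, o=1.8):
    """Deep Gambler-style loss of Eq. (7) for one anchor.

    probs : (N, 2) predicted probabilities over the two affinity classes
    u     : (N,)   predicted uncertainty (the extra 'abstention' class)
    labels: (N,)   affinity labels in {0, 1}
    o     : reward parameter; larger values push the model to be confident
    """
    p_true = probs[np.arange(len(labels)), labels]
    return -np.sum(np.log(p_true * o + u))

probs = np.array([[0.7, 0.2], [0.4, 0.5], [0.1, 0.8]])
u = np.array([0.1, 0.1, 0.1])            # each row of probs plus u sums to one
labels = np.array([0, 1, 1])
print(dg_loss(probs, u, labels))
```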
After the DG model is trained, for each anchor \(\widetilde{z}_{i}\), we calculate \(u_{ij}\) for each of its negative instances \(\widehat{z}_{j}\) and obtain an uncertainty matrix \(\mathbf{U}\in\mathbb{R}^{N\times(N-1)}\) where each row \(\mathbf{u}_{i}\) contains the uncertainty of all negative instances w.r.t. the anchor \(\widetilde{z}_{i}\). These uncertainty values are then used in (3) to improve the contrastive learning.
### _Time Complexity Analysis_
We take \(k\)-means and Deep Gambler [36] as the partition and uncertainty estimation methods, respectively, to analyze the additional time complexity introduced by AUGCL. Specifically, let \(L\) be the number of MLP layers in DG and \(d\) be the number of hidden units for all layers. For the graph classification task, given a graph dataset with \(N\) graphs and the batch size is set as \(B\), the time complexities of partition and the uncertainty modeling are \(\mathcal{O}(2(\frac{N}{B})B^{2}T)\) and \(\mathcal{O}(KL(\frac{N}{B})B^{2}d^{2})\) respectively, where \(T\) is the number of iterations for \(k\)-means and \(K\) is the number of training epochs for the uncertainty estimation model. For the node classification task, given a graph with \(N\) nodes, in order to reduce the computation cost, we only sample \(M\) (\(M\ll N\)) negatives for an anchor when training AUGCL. The resulting time complexities of partition and training uncertainty model are \(\mathcal{O}(2NMT)\) and \(\mathcal{O}(KLNMd^{2})\) respectively. In experiments, we use the well-established \(k\)-means clustering implementation from scikit-learn [43], as it runs very fast in practice. Besides, the values of \(K\), \(L\), \(M\) and \(d\) are relatively small and the uncertainty estimation model is only trained once. Therefore, the computational overhead over the base model is not significant.
## IV Experiments
### _Experimental Setup_
#### Iv-A1 Datasets
Seven commonly used graph classification datasets are used in our experiments. They come from two popular application domains: bioinformatics (MUTAG, DD, NCI1, and PROTEINS) and social networks (COLLAB, REDDIT-BINARY, and IMDB-BINARY). For node classification task, we use three widely used datasets, i.e., Wiki-CS [44], Amazon-Computers and Amazon-Photo [45]. Wiki-CS is a reference network constructed based on Wikipedia. Amazon-Computers and Amazon-Photo are two co-purchase networks constructed from Amazon. The statistics of the datasets are summarized in Table I.
#### Iv-A2 Implementation Details and Evaluation Protocol
For the graph classification task, GraphCL [10], a recent SOTA InfoNCE-based contrastive learning method for graph classification, is used as our base, into which our affinity uncertainty-based hardness learning method is plugged. For a fair comparison, the network backbone, the graph augmentation methods and the hyper-parameters of our AUGCL-enabled GraphCL are kept exactly the same as the original GraphCL. We follow a widely-used two-stage evaluation protocol in the literature [10, 46, 47, 9], in which we first learn graph representations in a self-supervised manner and then use the representations to train a downstream SVM classifier. The 10-fold evaluation is adopted in classification, and it is repeated five times with the mean accuracy (%) and standard deviation reported.
For the node classification task, we adopt GCA [11] as the base model and plug our AUGCL-based affinity uncertainty hardness into it. The evaluation protocol for node classification follows DGI [7] where the model is first trained in an unsupervised manner and then the learned node representations are used to train and test a simple \(\ell_{2}\)-regularized logistic regression classifier. On each dataset, the experiment is repeated for 20 runs with different data splits, and the average classification accuracy, together with the standard deviation, is reported.
For graph and node classification, we use the same architecture in our affinity uncertainty estimation model, i.e., a three-layer multi-layer perceptron (MLP) architecture containing 128 units per layer with \(ReLU\) activation. We adopt the Stochastic Gradient Descent (SGD) optimizer for the uncertainty estimation model and the learning rate is set to 0.01 across all the datasets. The uncertainty scaling parameter \(\alpha\) is set to the reciprocal of the mean of the uncertainties. The training epoch number of the uncertainty estimation model is set to 10 for all datasets. The reward parameter of the uncertainty estimation model is selected through a grid search over \(\{1.5,1.6,1.7,1.8,1.9\}\).
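For concreteness, a PyTorch sketch of an uncertainty-estimation network matching the stated configuration is given below; the output layout (two affinity-class units plus one uncertainty unit, followed by a softmax) is our assumption about how a DG-style head can be wired, not the authors' code.

```python
import torch
import torch.nn as nn

emb_dim = 64   # dimensionality of the node/graph embeddings (illustrative)

# three-layer MLP with 128 units per hidden layer and ReLU activations;
# the final layer outputs 2 affinity-class units plus 1 uncertainty unit
uncertainty_net = nn.Sequential(
    nn.Linear(emb_dim, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 3),
)
optimizer = torch.optim.SGD(uncertainty_net.parameters(), lr=0.01)

z = torch.randn(32, emb_dim)                   # a batch of negative embeddings
out = torch.softmax(uncertainty_net(z), dim=-1)
p, u = out[:, :2], out[:, 2]                   # class probabilities and uncertainty
```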
#### Iv-A3 Competing Methods
We evaluate the effectiveness of AUGCL in both graph and node classification tasks. In both tasks, AUGCL is evaluated against three state-of-the-art hard negative mining-based contrastive learning methods, including DCL [17], HCL [16] and ProGCL [18]. In addition, we also include a set of other relevant state-of-the-art competing methods, including non-contrastive methods and
other contrastive methods. Particularly, for graph classification, the non-contrastive methods include Graphlet Kernel (GK) [48], Weisfeiler-Lehman Sub-tree Kernel (WL) [49], Deep Graph Kernels (DGK) [46], node2vec [50], sub2vec [51] and graph2vec [47], while the GCL methods include InfoGraph [9], JOAOv2 [12], SimGRACE [13] and GraphCL [10].
For the node classification task, non-contrastive methods include node2vec [50], DeepWalk (DW) [52], and Graph AutoEncoders (GAE and VGAE) [53]. Contrastive methods include DGI [7], GMI [54], MVGRL [8], and GCA [11].
Note that ProGCL proposed two strategies to utilize the estimated hardness results, i.e., weighting and mixup. The results reported are based on the weighting strategy of ProGCL to have a direct comparison to our weighting-based AUGCL.
### _Enabling Different GCL Methods on Graph and Node Classification_
#### Iv-B1 Performance Improvement over Baselines.
We first compare the performance of our proposed method with the baselines on graph and node classification tasks. The results are shown in Table II. It is clear that, by incorporating our affinity uncertainty-based hardness measure, the two baselines - GraphCL [10] and GCA [11] - are substantially and consistently boosted on all datasets from different domains for both graph and node classification tasks. This demonstrates that our method AUGCL can enable these baselines to effectively attend to hard negative instances and learn better representations of graphs/nodes.
#### Iv-B2 Comparison to State-of-the-art Methods
We then compare AUGCL to diverse advanced graph embedding learning methods.
**Graph Classification.** The results on graph classification are reported in Table III. We can observe that graph contrastive learning methods generally obtain better performance than non-contrastive methods. Our method further improves the performance by learning and feeding the affinity uncertainty-based hardness into the contrastive learning, substantially outperforming SOTA GCL methods on 6 out of 7 datasets.
Compared to the three recent hardness-aware methods DCL, HCL and ProGCL, our method AUGCL performs much better across all seven datasets. Particularly, DCL, HCL and ProGCL improve GraphCL on some datasets such as PROTEINS, MUTAG, and IMDB-B, but they fail on the other ones. By contrast, our method improves over GraphCL by a large margin across all the seven datasets, indicating the superiority of our affinity uncertainty-based hardness learning method over its recent counterparts.
**Node Classification.** The node classification results are reported in Table IV. In general, the trends here are similar to the results in Table III: i) contrastive methods are generally more effective than the non-contrastive ones, and ii) the competing hardness-aware methods DCL, HCL and ProGCL further improve over the contrastive methods on part of the datasets, while our method AUGCL achieves consistently better performance on all the three datasets.
General hardness-aware methods DCL and HCL identify hard negatives by using the individual similarity to the anchor, which is often ineffective on graph data due to over-smoothed node representation issues, as also found in the very recent ProGCL work [18]. ProGCL addresses this issue by positing a prior model over the pairwise similarity distribution to learn the hardness. Our method further improves ProGCL consistently on the three datasets by learning a data-driven affinity uncertainty estimation model without the prior assumption. Importantly, ProGCL is not generalizable to other graph mining tasks such as the graph classification task, e.g., ProGCL fails to work as effectively as the baseline GraphCL on some datasets in Table III where our method AUGCL also consistently outperforms the baseline, indicating better applicability and flexibility of AUGCL on different graph mining tasks than ProGCL.
### _Improving Robustness against Graph Adversarial Attacks_
Self-supervised learning has shown effective performance in defending against adversarial perturbations [56, 57]. This subsection investigates whether AUGCL can further improve over the GCL methods on this important property. In this experiment, following [58], three different types of graph adversarial attacks: RandSampling, GradArgmax and RL-S2V are used, where RandSampling randomly adds or deletes edges from graphs, GradArgmax performs edge modification based on gradient information, and RL-S2V is a reinforcement learning based attack method that learns a generalizable attack policy. We also use the widely-used evaluation protocol as in [58] where the graph task is to classify the component numbers in synthetic graphs and structure2vec [55] is adopted as the graph encoder, with different depths of structure2vec considered in the experiments. Both the original structure2vec trained from scratch (i.e., no pre-training) and the pre-trained structure2vec [55] using GraphCL [10] are used as baselines. The experimental results of our method are obtained by incorporating our affinity uncertainty-based hardness into GraphCL to pre-train the structure2vec. The best-competing method ProGCL is adopted in the same way. The results are reported in Table V.
From the table, we can observe that: i) all three GCL methods GraphCL, ProGCL, and AUGCL can largely improve the robustness against all three graph adversarial attacks,
particularly the more advanced attacks GradArgmax and RL-S2V, on different network layers, compared with the original model, ii) the robustness can be further improved by exploiting hard negative mining techniques used in ProGCL and AUGCL, compared to GraphCL, and iii) compared with ProGCL, the better hard negative mining in our method AUGCL generally results in more remarkably and stably improved robustness over the GraphCL. Overall, the proposed method AUGCL increases the classification accuracy by up to 8% over GraphCL and up to 2.7% over ProGCL, and performs very competitively to the two methods (i.e., around 0.2%-0.8% difference) in the limited cases where AUGCL is not the best performer.
### _Ablation Studies_
This subsection evaluates the impact of using different clustering and uncertainty estimation methods in \(f\) and \(g\), respectively. The AUGCL-enabled GraphCL method is used, with GraphCL as the baseline.
**Partition Methods in \(f\)**. An important module of our proposed method is the instance-wise partition function \(f\). \(k\)-means is used by default to implement \(f\). Here we also examine the use of spectral clustering [59] to perform the binary partition. The results are shown in Table VI. We can see that AUGCL using either spectral clustering or \(k\)-means achieves similar improvement over GraphCL, suggesting the stability of our method w.r.t. the generation of the partition labels. AUGCL with \(k\)-means clustering performs consistently better than the spectral clustering. Hence, \(k\)-means clustering is used by default in our experiments and recommended in practice.
**Uncertainty Estimation Methods in \(g\)**. The uncertainty estimation method \(g\) is another important module in AUGCL. In addition to the extra class-based method used by default in AUGCL, two alternative approaches are used, including the maximum prediction probability-based method Softmax-Response [29] and the entropy-based method Predictive Entropy [30]. We also include a distance-based method as another simplified variant of AUGCL. The detailed descriptions of these uncertainty estimation methods are presented in Appendix A.
The results are reported in Table VI. It is clear that regardless of the specific uncertainty estimation method used, all variants of AUGCL can generally improve the baseline GraphCL on nearly all datasets. This provides further evidence for the effectiveness of our approach. Additionally, the uncertainty estimation method matters: the default method (a recently proposed extra class-based method [34, 36]), a more effective uncertainty estimation model than the other three methods, shows consistently better performance than the other three variants, implying that the hardness can be better captured by more advanced uncertainty estimation methods.
### _Hyperparameter Analysis_
We examine the sensitivity of AUGCL w.r.t. two key hyperparameters, i.e., the uncertainty parameter \(\alpha\) in (3) and the reward parameter \(o\) in \(\phi\) (particularly in (7)). Without loss of generality, one graph dataset each from the biochemical molecule and social network domains, i.e., PROTEINS and IMDB-B, is used.
**Uncertainty Parameter \(\alpha\)**. \(\alpha\) is adaptively set, depending on the uncertainty matrix \(\mathbf{U}\), to enable stable performance of AUGCL. Particularly, given \(\mathbf{U}\), we can calculate the mean \(\mu\) and standard deviation \(\delta\) of \(\mathbf{U}\), based on which \(\alpha\) is
set to \(\alpha=\frac{1}{\mu}\). We vary the parameter \(\alpha\) in the range of \(\{\frac{1}{\mu-\delta},\frac{1}{\mu-0.5\delta},\frac{1}{\mu},\frac{1}{\mu+0.5\delta},\frac{1}{\mu+\delta}\}\). The mean classification accuracy (%) under different \(\alpha\) is shown in Fig. 3(a), where the labels on the x-axis denote the coefficient of \(\delta\) used when calculating \(\alpha\). It is clear that the performance of our model is generally stable with varying \(\alpha\), and \(\alpha=\frac{1}{\mu}\) is a recommended setting.
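The scanned \(\alpha\) values can be generated from the uncertainty matrix as in the short sketch below (\(\mathbf{U}\) is a random placeholder here):

```python
import numpy as np

U = np.random.default_rng(0).random((128, 127))  # placeholder uncertainty matrix
mu, delta = U.mean(), U.std()

# candidate settings scanned in the sensitivity study; the default is 1/mu
alphas = [1/(mu - delta), 1/(mu - 0.5*delta), 1/mu,
          1/(mu + 0.5*delta), 1/(mu + delta)]
print([round(a, 3) for a in alphas])
```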
**Reward Parameter**\(o\). We further examine the reward parameter \(o\) in the uncertainty estimation model [36]. With \(o\) varying in \(\{1.5,1.6,1.7,1.8,1.9\}\), we report the mean classification accuracy (%) in Fig. 3(b). The results also show that AUGCL can achieve reasonably stable performance for a wide range of the \(o\) settings.
## V Conclusion
This paper proposes the idea of affinity uncertainty and utilizes it to measure the hardness of negative samples to improve popular GCL models. To this end, we introduce the affinity uncertainty-based hardness learning approach AUGCL that synthesizes binary partition and uncertainty estimation to learn anchor-instance-dependent hardness for all negative instances, i.e., their hardness results are relative to each anchor instance. AUGCL is a data-driven approach that eliminates the prior assumption made in very recent hardness-aware GCL methods like ProGCL [18], resulting in better applicability and flexibility on different graph mining tasks, as well as better robustness to diverse graph adversarial attacks. It also shows better performance in enabling different GCL loss functions, compared to a wide range of other state-of-the-art graph representation methods on graph and node classification tasks. We also show theoretically that the resulting contrastive loss in AUGCL is equivalent to a triplet loss with an adaptive margin that adaptively exploits the hard negatives with a large margin, with a small margin assigned to the other negative instances.
Hard negative mining has been shown to be effective in self-supervised contrastive learning (CL) on diverse data types, and in particular in graph CL (GCL). Existing hardness-aware CL methods treat the negative instances that are most similar to the anchor instance as hard negatives, which helps improve CL performance and is particularly effective on image data. However, this approach can fail to identify the hard negatives and instead yields many false negatives on graph data. This is mainly because the learned graph representations are not sufficiently discriminative, owing to over-smoothed representations and the non-independent-and-identically-distributed (non-i.i.d.) problem in graph data. To address this issue, this paper builds on collective affinity information consisting of the pairwise affinities between the negative instances and the anchor instance
2309.07517 | Lattice Boltzmann methods for combustion applications | The lattice Boltzmann method, after close to thirty years of presence in
computational fluid dynamics has turned into a versatile, efficient and quite
popular numerical tool for fluid flow simulations. The lattice Boltzmann method
owes its popularity in the past decade to its efficiency, low numerical
dissipation and simplicity of its algorithm. Progress in recent years has
opened the door for yet another very challenging area of application:
Combustion simulations. Combustion is known to be a challenge for numerical
tools due to, among many others, the large number of variables and scales both
in time and space, leading to a stiff multi-scale problem. In the present work
we present a comprehensive overview of models and strategies developed in the
past years to model combustion with the lattice Boltzmann method and discuss
some of the most recent applications, remaining challenges and prospects. | S. A. Hosseini, P. Boivin, D. Thevenin, I. Karlin | 2023-09-14T08:34:49 | http://arxiv.org/abs/2309.07517v2 | # Lattice Boltzmann methods for combustion applications
###### Abstract
The lattice Boltzmann method, after close to thirty years of presence in computational fluid dynamics has turned into a versatile, efficient and quite popular numerical tool for fluid flow simulations. The lattice Boltzmann method owes its popularity in the past decade to its efficiency, low numerical dissipation and simplicity of its algorithm. Progress in recent years has opened the door for yet another very challenging area of application: Combustion simulations. Combustion is known to be a challenge for numerical tools due to, among many others, the large number of variables and scales both in time and space, leading to a stiff multi-scale problem. In the present work we present a comprehensive overview of models and strategies developed in the past years to model combustion with the lattice Boltzmann method and discuss some of the most recent applications, remaining challenges and prospects.
Footnote †: journal: Progress in Energy and Combustion Science
###### Contents
* 1 Introduction
* 2 Basic concepts
* 2.1 Brief overview of target macroscopic system
* 2.2 Isothermal lattice Boltzmann for incompressible flows
* 2.2.1 Discrete velocity system and discrete equilibrium state
* 2.2.2 Lattice Boltzmann equations
* 3 Lattice Boltzmann models for compressible reacting flows
* 3.1 Energy and species balance equations
* 3.1.1 Double distribution function lattice Boltzmann approach for thermal flows
* 3.1.2 Kinetic models for species balance equations
* 3.1.3 Passive-scalar lattice Boltzmann models
* 3.1.4 Hybrid models: Finite difference and finite volume solvers for energy and species
* 3.2 Compressible continuity and momentum balance equations
* 3.2.1 Lattices with higher-order quadratures
* 3.2.2 Standard lattice density-based solvers
* 3.2.3 Pressure-based solvers
* 3.2.4 Low Mach thermo-compressible pressure-based solver
## 1 Introduction
The lattice Boltzmann (LB) method, proposed in the early 80's has grown popular over the past decades [1; 2]. The rapid emergence of this numerical method is mainly due to the simplicity and strict locality of the involved time-evolution operators [3; 4]. The locality of the operators and intrinsic coupling between the pressure and velocity fields through the distribution function (as opposed to pressure-based incompressible or low Mach solvers) allows for better performances on parallel clusters and a much more efficient treatment of flows in complex geometries [4]. During the past decade, the LB method originally proposed for computational fluid dynamics (CFD) has been extended to many complex flow configurations ranging from non-Newtonian [5; 6; 7; 8; 9], to multi-phase [10; 11; 12; 13; 14; 15; 16], and multi-component flows. Although initially limited to low-Mach isothermal flows with an ideal gas equation of state, the LB approach was later modified to lift many of these restrictions. Releasing the restriction on thermo-compressibility is an essential step to develop LB solvers for many applications such as combustion.
The topic of combustion modeling with LB was first touched upon in 1997 in an article by Succi et al. [17]. Since then, and up until very recently, a limited number of publications had appeared on the topic, all limited to simplified 1-D and 2-D test-cases, see for instance [18; 19; 20; 21; 22; 23; 24]. The limited progress of the lattice Boltzmann method during that period might be attributed to a number of factors such as the absence of a good compressible realization, persistent issues with stability of solvers, and the absence of multi-species formulations. During the past years a considerable amount of research work has been conducted to extend the lattice Boltzmann method to compressible flows, which has led to a number of stable and efficient realizations, see for instance [25; 26; 27; 28]. In parallel, the stability domain of lattice Boltzmann solvers both for incompressible and compressible flows has been considerably expanded through more advanced collision models, see for instance [29; 30; 31; 32; 33; 34]. These two factors along with the development of models for species transport and the idea of hybrid solvers taking advantage of classical numerical methods for the species and energy balance equations led to considerable progress in combustion simulation with the lattice Boltzmann method in recent years. Contrary to the first wave of models, the more recent efforts have been extended and used for many complex configurations involving thermo-acoustics, complex geometries and turbulent flows. It has to be noted that in parallel with efforts to develop lattice Boltzmann-based models for combustion simulations, a number of attempts at developing discrete velocity Boltzmann-based models with Eulerian discretization in physical space have also been reported, see for instance [35; 36; 37; 38].
In the present contribution we will review developments in the area of lattice Boltzmann simulations of combustion. Different challenges, solutions and models developed in that area in the past years will be presented and discussed. The review starts with a brief overview of basic concepts, i.e. target macroscopic system and basic concepts from the lattice Boltzmann method. In the third section of this review we will discuss topics specific to combustion simulations, i.e. strategies to solve the energy balance equation, models developed for species transport equations, and introduction of compressibility effects into the lattice Boltzmann solver. The review closes with section four where key points are briefly listed and future prospects and challenges
are discussed.
## 2 Basic concepts
### Brief overview of target macroscopic system
Throughout the manuscript, the target set of macroscopic equations is the multi-component system of Navier-Stokes-Fourier equations (see, e.g. [39])
\[\frac{\partial\rho}{\partial t}+\frac{\partial\rho u_{\beta}}{ \partial x_{\beta}} =0, \tag{1}\] \[\frac{\partial\rho u_{\alpha}}{\partial t}+\frac{\partial\rho u_{ \alpha}u_{\beta}+p\delta_{\alpha\beta}}{\partial x_{\beta}} =\frac{\partial\tau_{\alpha\beta}}{\partial x_{\beta}},\] (2) \[\frac{\partial\rho E}{\partial t}+\frac{\partial\rho u_{\beta}(E +p/\rho)}{\partial x_{\beta}} =\frac{\partial\tau_{\alpha\beta}u_{\alpha}}{\partial x_{\beta}} -\frac{\partial q_{\beta}}{\partial x_{\beta}},\] (3) \[\frac{\partial\rho Y_{k}}{\partial t}+\frac{\partial\rho u_{ \beta}Y_{k}}{\partial x_{\beta}} =\frac{\partial\rho V_{k,\beta}Y_{k}}{\partial x_{\beta}}+\dot{ \omega}_{k}. \tag{4}\]
Here \(u_{\alpha}\) is the \(\alpha^{\text{th}}\) component of the fluid velocity, \(\rho\) is the mixture density, \(E\) is the total energy (sum of internal energy \(e\) and kinetic energy \(u_{\alpha}^{2}/2\)), \(Y_{k}\) is the mass fraction of species \(k\), and \(\delta_{\alpha\beta}\) is the Kronecker symbol (1 if \(\alpha=\beta\), 0 else). The above system is fully closed upon choosing
**Equation of state:**: a thermodynamic closure, linking state variables \(p\), \(\rho\), \(e\), \(T\) and \(Y_{k}\), e.g. following the perfect gas assumption \(p=\rho\bar{r}T=\rho\frac{\mathcal{R}}{W}T\) (a short illustrative sketch is given after this list).
**Transport models:**: to define the species diffusion velocities \(V_{k,\beta}\), heat flux term \(q_{\beta}\) and viscous stress tensor \(\tau_{\alpha\beta}\).
**Chemistry model:**: to define the reaction rates \(\dot{\omega}_{k}\).
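The following minimal sketch illustrates the perfect-gas closure referenced in the equation-of-state item above; the two-species composition and the state values are made up for illustration.

```python
import numpy as np

R = 8.314                        # universal gas constant [J/(mol K)]
W = np.array([2e-3, 32e-3])      # molar masses of H2 and O2 [kg/mol]
Y = np.array([0.1, 0.9])         # mass fractions (sum to one)
rho, T = 0.5, 1200.0             # density [kg/m^3] and temperature [K]

W_bar = 1.0 / np.sum(Y / W)      # mean molar mass of the mixture
p = rho * (R / W_bar) * T        # perfect-gas pressure
print(f"W_bar = {W_bar*1e3:.2f} g/mol, p = {p:.1f} Pa")
```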
### Isothermal lattice Boltzmann for incompressible flows
The construction of a discrete kinetic solver like the lattice Boltzmann method has two main ingredients: (a) Reduction of the particles' speed continuous space to a discrete set, and (b) discretization of the resulting system of hyperbolic equations in physical space and time. In this section these two components will be briefly reviewed.
#### 2.2.1 Discrete velocity system and discrete equilibrium state
The rationale behind the construction of the lattice Boltzmann method consists in using a truncated version of the Boltzmann equation with a linear approximation to the collision term to recover the dynamics of the macroscopic equations of interest, here the isothermal Navier-Stokes and continuity equations.
_From Boltzmann-BGK to the discrete velocity Boltzmann equations._ Consistent with the terminology of the early literature, in the context of the present work we will refer to all methods using a form of the Boltzmann equation with a discrete set of particles' velocities as discrete-velocity models (DVM). In recent years interest in such models has been revived in the form of numerical methods such as the lattice Boltzmann equation.
DVM generally aim at approximating the distribution function with quadrature rules or similar integral approximations and using a discrete set of velocities:
\[\mathcal{V}:=\{\mathbf{c}_{i}\in\mathbb{R}^{D}\}, \tag{5}\]
changing the Boltzmann-Bhatnagar-Gross-Krook (BGK) equation [40] into a set of coupled hyperbolic partial differential equations:
\[\frac{\partial f_{i}}{\partial t}+c_{i\alpha}\frac{\partial f_{i}}{\partial x_ {\alpha}}=\frac{1}{\tau}\left(f_{i}^{\text{eq}}-f_{i}\right). \tag{6}\]
Constraints on the discrete equilibrium function, \(f_{i}^{\text{eq}}\), e.g. moments of the equilibrium distribution function to be correctly recovered, are identified via a Chapman-Enskog (CE) multi-scale expansion. For the continuity equation:
\[\frac{\partial\Pi_{0}^{\text{eq}}}{\partial t}+\frac{\partial\Pi_{\alpha}^{ \text{eq}}}{\partial x_{\alpha}}=0, \tag{7}\]
while for the momentum balance equations:
\[\frac{\partial\Pi_{\alpha}^{\text{eq}}}{\partial t}+\frac{\partial\Pi_{ \alpha\beta}^{\text{eq}}}{\partial x_{\beta}}-\frac{\partial}{\partial x_{ \beta}}\tau\left[\frac{\partial\Pi_{\alpha\beta}^{\text{eq}}}{\partial t}+ \frac{\partial\Pi_{\alpha\beta\gamma}^{\text{eq}}}{\partial x_{\gamma}} \right]=0, \tag{8}\]
where we have made use of the following notation:
\[\Pi_{\alpha_{1},\ldots,\alpha_{n}}=\int\prod_{\alpha=\alpha_{1}}^{\alpha_{n}} v_{\alpha}f(\mathbf{v},\mathbf{x},t)d\mathbf{v}, \tag{9}\]
meaning that for the system of interest one needs to correctly recover moments of orders zero through three of the equilibrium distribution function.
In the specific context of the lattice Boltzmann method, the Gauss-Hermite quadrature is utilized to satisfy most of the above-listed conditions on the discrete distribution functions, i.e.
\[\int P^{M}\left(\mathbf{v},\rho,\mathbf{u}\right)w\left(\mathbf{v}\right)d\mathbf{v}\cong\sum _{i=0}^{Q-1}w_{i}P^{M}\left(\mathbf{c}_{i},\rho,\mathbf{u}\right), \tag{10}\]
where \(P^{M}\left(\mathbf{v},\rho,\mathbf{u}\right)\) is a polynomial of order \(M\) of \(\mathbf{v}\) and \(w(\mathbf{v})\) is a function of the form:
\[w\left(\mathbf{v}\right)=\left(2\pi\right)^{-D/2}\exp\left(-\frac{\mathbf{v}^{2}}{2} \right). \tag{11}\]
For the quadrature to be applicable, the distribution function must be expanded as the product of a polynomial series and \(w(\mathbf{v})\), with the integration variable \(\mathbf{v}\) normalized by \(\sqrt{\bar{r}T_{0}}\), i.e. via the reference temperature. A change of variable such as \(\mathbf{v}^{\prime}=(\mathbf{v}-\mathbf{u})/\sqrt{\bar{r}T}\), as in Grad's expansion, is not possible here as it would lead to discrete particle velocities that change in space and time and are not necessarily space-filling.
Choosing the abscissae, i.e. \(\mathbf{c}_{i}\) to be the roots of the Hermite polynomial of order \(Q\) and the weights as:
\[w_{i}=\frac{Q!}{\mathcal{H}_{Q-1}\left(\mathbf{c}_{i}\right)^{2}}, \tag{12}\]
results in the maximum algebraic degree of precision, i.e. \(2Q-1\). This means that the quadrature guarantees exact recovery of moments up to order \(\frac{2Q-1}{2}\).
In the case of the classical lattice Boltzmann stencil, a third-order quadrature is used, i.e. \(Q=3\) in 1-D, with:
\[c_{i}\in\{-\sqrt{3\bar{r}_{0}T_{0}},0,\sqrt{3\bar{r}_{0}T_{0}}\}, \tag{13}\]
and
\[w_{i}\in\{\frac{1}{6},\frac{2}{3},\frac{1}{6}\}. \tag{14}\]
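These nodes and weights can be checked against NumPy's probabilists' Gauss-Hermite routine, whose weight function \(\exp(-v^{2}/2)\) matches \(w(\mathbf{v})\) of Eq. (11) up to the \((2\pi)^{-1/2}\) normalization; the nodes are in units of \(\sqrt{\bar{r}_{0}T_{0}}\).

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

nodes, weights = hermegauss(3)       # quadrature for weight exp(-v^2/2)
weights /= np.sqrt(2.0 * np.pi)      # normalize to w(v) of Eq. (11)

print(nodes)    # [-1.732...  0.  1.732...], i.e. 0 and +/- sqrt(3)
print(weights)  # [0.1666... 0.6666... 0.1666...], i.e. 1/6, 2/3, 1/6
```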
The simplest multi-dimensional extension of this quadrature can be obtained by taking the tensorial product of the 1-D lattice with itself. For instance, in 2-D, as illustrated in Fig. 1, this leads to the D2Q9 lattice with:
\[c_{ix}/\sqrt{3\bar{r}T_{0}}\in\{0,1,0,-1,0,1,-1,-1,1\}, \tag{15}\]
and
\[c_{iy}/\sqrt{3\bar{r}T_{0}}\in\{0,0,1,0,-1,1,1,-1,-1\}, \tag{16}\]
and
\[w_{i}\in\{\frac{4}{9},\frac{1}{9},\frac{1}{9},\frac{1}{9},\frac{1}{9},\frac{1 }{36},\frac{1}{36},\frac{1}{36},\frac{1}{36}\}. \tag{17}\]
A similar procedure is used to obtain the D3Q27 lattice for 3-D simulations.
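The D2Q9 velocities and weights of Eqs. (15)-(17) follow directly from the tensor product of two copies of the D1Q3 quadrature, as in this short sketch (velocities in units of \(\sqrt{3\bar{r}T_{0}}\); the ordering differs from Eqs. (15)-(16) but the set is the same):

```python
import numpy as np

c1d = np.array([-1, 0, 1])          # D1Q3 abscissae / sqrt(3 rbar T0)
w1d = np.array([1/6, 2/3, 1/6])     # D1Q3 weights

# tensor product: the nine velocities and weights of the D2Q9 lattice
cx, cy = np.meshgrid(c1d, c1d, indexing="ij")
c2d = np.stack([cx.ravel(), cy.ravel()], axis=1)   # shape (9, 2)
w2d = np.outer(w1d, w1d).ravel()                   # shape (9,)

print(c2d.T)          # x- and y-components of the nine velocities
print(w2d)            # 4/9 (rest), 1/9 (axis links), 1/36 (diagonal links)
print(w2d.sum())      # 1.0
```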
_Discrete equilibrium: polynomial form._ A number of different ways for constructing the discrete equilibrium have been proposed over the years. One of the early approaches, first discussed in [41] was to re-write the equilibrium as:
\[f^{\text{eq}}=\rho\left(2\pi\bar{r}_{0}T_{0}\right)^{-D/2}\exp\left\{\frac{- \mathbf{v}^{2}}{2\bar{r}_{0}T_{0}}\right\}\exp\left\{\frac{-\mathbf{u}^{2}+\mathbf{u}\cdot \mathbf{v}}{2\bar{r}_{0}T_{0}}\right\}, \tag{18}\]
and Taylor-expand the last term around Mach Ma= 0, i.e.
\[\exp\left\{\frac{-\mathbf{u}^{2}+\mathbf{u}\cdot\mathbf{v}}{2\bar{r}_{0}T_{0}}\right\}=1 +\frac{\mathbf{v}\cdot\mathbf{u}}{\bar{r}_{0}T_{0}}+\frac{\left(\mathbf{v}\cdot\mathbf{u} \right)^{2}}{2\bar{r}_{0}^{2}T_{0}^{2}}-\frac{\mathbf{u}^{2}}{2\bar{r}_{0}T_{0}}+ \mathcal{O}\left(\left\|\mathbf{u}\right\|^{3}/\bar{r}_{0}{T_{0}}^{3/2}\right), \tag{19}\]
ultimately leading to, after discretization of particles velocity space and application of the Gauss-Hermite quadrature, the second-order polynomial discrete equilibrium:
\[f_{i}^{\text{eq}}=w_{i}\rho\left(1+\frac{\mathbf{c}_{i}\cdot\mathbf{u}}{c_{s}^{2}}+ \frac{\left(\mathbf{c}_{i}\cdot\mathbf{u}\right)^{2}}{2c_{s}^{4}}-\frac{\mathbf{u}^{2}}{2 c_{s}^{2}}\right). \tag{20}\]
Figure 1: Illustration of the tensorial product process to build D2Q9 lattice from D1Q3.
An alternative construction based on an expansion of the distribution function with Hermite polynomial was proposed, which led to the following final form:
\[f_{i}^{\rm eq}=w_{i}\rho\sum_{n=0}^{N}\frac{1}{n!{c_{s}}^{2n}}\mathcal{H}_{n}( \boldsymbol{c}_{i}):a_{n}^{\rm eq}(\rho,\boldsymbol{u}), \tag{21}\]
where \(\mathcal{H}_{n}\) and \(a_{n}^{\rm eq}\) are tensors of rank \(n\) representing respectively the order \(n\) Hermite polynomial and coefficient.
Alternatively the polynomial equilibrium can also be constructed via the product form. The product form of the equilibrium distribution function (EDF) is a special realization of the moments matching approach. Considering the standard discrete velocity set D3Q27, where D=3 stands for three dimensions and Q=27 is the number of discrete velocities,
\[\boldsymbol{c}_{i}=(c_{ix},c_{iy},c_{iz}),\ c_{i\alpha}\in\{-1,0,1\}, \tag{22}\]
one first defines a triplet of functions in two variables, \(\xi_{\alpha}\) and \(\zeta_{\alpha\alpha}\),
\[\Psi_{0}(\xi_{\alpha},\zeta_{\alpha\alpha}) = 1-\zeta_{\alpha\alpha}, \tag{23}\] \[\Psi_{1}(\xi_{\alpha},\zeta_{\alpha\alpha}) = \frac{\xi_{\alpha}+\zeta_{\alpha\alpha}}{2},\] (24) \[\Psi_{-1}(\xi_{\alpha},\zeta_{\alpha\alpha}) = \frac{-\xi_{\alpha}+\zeta_{\alpha\alpha}}{2}, \tag{25}\]
and considers a product-form associated with the discrete velocities \(\boldsymbol{c}_{i}\) (22),
\[\Psi_{i}=\Psi_{c_{ix}}(\xi_{x},\zeta_{xx})\Psi_{c_{iy}}(\xi_{y},\zeta_{yy}) \Psi_{c_{iz}}(\xi_{z},\zeta_{zz}). \tag{26}\]
All pertinent populations below are determined by specifying the parameters \(\xi_{\alpha}\) and \(\zeta_{\alpha\alpha}\) in the product-form (26). The two-dimensional version of the model on the D2Q9 lattice is obtained by omitting the \(z\)-component in all formulas. After matching moments with their continuous counter-parts the parameters are set as,
\[\xi_{\alpha}=u_{\alpha}, \tag{27}\] \[\zeta_{\alpha\alpha}=c_{s}^{2}+u_{\alpha}^{2}, \tag{28}\]
and the local equilibrium populations are represented with the product-form (26),
\[f_{i}^{\rm eq}=\rho\prod_{\alpha=x,y,z}\Psi_{c_{i\alpha}}\left(u_{\alpha},c_{ s}^{2}+u_{\alpha}^{2}\right). \tag{29}\]
This form of the discrete equilibrium populations, when \(c_{s}^{2}=\bar{r}_{0}T_{0}/3\) is equivalent to third-order quadrature-based scheme with a full expansion of the distribution function.
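A minimal sketch of the product-form construction of Eqs. (23)-(29) on the D2Q9 lattice, again in lattice units with \(c_{s}^{2}=1/3\):

```python
import numpy as np

cs2 = 1.0 / 3.0
lattice = [(0,0),(1,0),(0,1),(-1,0),(0,-1),(1,1),(-1,1),(-1,-1),(1,-1)]

def psi(ci, xi, zeta):
    """One-dimensional factors of Eqs. (23)-(25), for ci in {-1, 0, 1}."""
    return 1.0 - zeta if ci == 0 else 0.5 * (ci * xi + zeta)

def feq_product(rho, u):
    """Product-form equilibrium of Eq. (29) on D2Q9, lattice units."""
    return np.array([rho
                     * psi(cx, u[0], cs2 + u[0]**2)
                     * psi(cy, u[1], cs2 + u[1]**2)
                     for cx, cy in lattice])

f_eq = feq_product(1.0, np.array([0.05, -0.02]))
print(f_eq.sum())     # the density is recovered exactly
```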
_Alternative to polynomial equilibria: Entropic equilibria._ As an alternative to the classical discrete equilibrium construction approach where all degrees of freedom are used to fulfill moments constraints, the entropic approach adds minimization of an entropy functional to the list of constraints, changing the equilibrium construction problem into a constraint minimization problem. While a number of different discrete entropy functions have been proposed in the literature the most commonly used one is:
\[H_{w_{i},c_{i}}=\sum_{i=1}^{Q}f_{i}\ln\left(\frac{f_{i}}{w_{i}}\right). \tag{30}\]
Minimization of this functional under constraints on moments of order zero and one, leads to the following well-known entropic equilibrium:
\[f_{i}^{\rm eq}=w_{i}\rho\prod_{\alpha=x,y}\left(2-\sqrt{{u_{\alpha}}^{2}/c_{s} ^{2}+1}\right)\left(\frac{2u_{\alpha}+\sqrt{{u_{\alpha}}^{2}/c_{s}^{2}+1}}{1-u_ {\alpha}}\right)^{c_{i,\alpha}}. \tag{31}\]
One of the most interesting feature of this equilibrium, contrary to polynomial equilibria is that, as demonstrated in [42, 43], it guarantees unconditional linear stability.
#### 2.2.2 Lattice Boltzmann equations
_From discrete velocity Boltzmann to the lattice Boltzmann method._ To go to the final form of the lattice Boltzmann equations, two main ingredients are to be used: (a) integration of the discrete velocity Boltzmann equation along their _constant_ characteristics and (b) a re-definition of the discrete distribution functions. The former step results in:
\[f_{i}\left(\mathbf{x}+\mathbf{c}_{i}\delta t,t+\delta t\right)-f_{i}\left(\mathbf{x},t \right)=\int_{t}^{t+\delta t}\Omega_{i}\left(\mathbf{x}(t^{\prime}),t^{\prime} \right)dt^{\prime}, \tag{32}\]
where the term on the right-hand side, representing collision, has to be approximated via the trapezoidal rule, in order to keep the scheme second-order accurate:
\[\int_{t}^{t+\delta t}\Omega_{i}\left(\mathbf{x}(t^{{}^{\prime}}),t^{{}^{\prime}} \right)dt^{{}^{\prime}}=\frac{\delta t}{2}\Omega_{i}\left(\mathbf{x},t\right)\,+ \frac{\delta t}{2}\Omega_{i}\left(\mathbf{x}+\mathbf{c}_{i}\delta t,t+\delta t\right) +\mathcal{O}\left(\delta t^{3}\right). \tag{33}\]
However, as observed here, application of the trapezoidal rule would make the scheme implicit and therefore not attractive regarding efficiency. The second ingredient, i.e. redefinition of the discrete distribution function as:
\[\bar{f}_{i}=f_{i}-\frac{\delta t}{2}\Omega_{i}, \tag{34}\]
changes the system of equations into a fully explicit scheme:
\[\bar{f}_{i}\left(\mathbf{x}+\mathbf{c}_{i}\delta t,t+\delta t\right)-\bar{f}_{i}\left( \mathbf{x},t\right)=\frac{\delta t}{\bar{\tau}}\left(f_{\alpha}^{\rm eq}\left( \mathbf{x},t\right)-\bar{f}_{i}\left(\mathbf{x},t\right)\right), \tag{35}\]
where \(\bar{\tau}\) is now defined as:
\[\bar{\tau}=\tau+\delta t/2. \tag{36}\]
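Putting the pieces together, a bare-bones collide-and-stream update implementing Eq. (35) on a periodic domain might look as follows; this is a minimal single-relaxation-time sketch, with boundary conditions, forcing and the more advanced collision models mentioned in the introduction deliberately left out.

```python
import numpy as np

# D2Q9 lattice, lattice units (dx = dt = 1, cs^2 = 1/3)
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cs2 = 1.0 / 3.0
nx, ny, tau_bar = 64, 64, 0.8          # tau_bar = tau + dt/2 in lattice units

def feq(rho, ux, uy):
    cu = np.einsum('ia,axy->ixy', c, np.stack([ux, uy]))
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + cu/cs2 + cu**2/(2*cs2**2) - usq/(2*cs2))

rho = np.ones((nx, ny)); ux = np.zeros((nx, ny)); uy = np.zeros((nx, ny))
f = feq(rho, ux, uy)

for step in range(10):
    rho = f.sum(axis=0)
    ux = np.einsum('i,ixy->xy', c[:, 0], f) / rho
    uy = np.einsum('i,ixy->xy', c[:, 1], f) / rho
    f += (feq(rho, ux, uy) - f) / tau_bar                  # BGK collision
    for i in range(9):                                      # streaming
        f[i] = np.roll(f[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))
```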
_The lattice Boltzmann method for incompressible flows._ A multi-scale analysis of the above-listed system of equations shows that it recovers the _isothermal_ continuity plus Navier-Stokes system of equations for an
ideal equation of state at reference temperature:
\[\bar{r}_{0}T_{0}=\frac{\delta x^{2}}{3\delta t^{2}}, \tag{37}\]
where the coefficient \(1/3\) is specific to the third-order quadrature. Note that the recovered system of macroscopic equations further admits a defect in the viscous stress tensor; For the classical second-order polynomial expansion defects are present both in shear and bulk viscosity scaling in both cases as \(\propto\mathcal{U}_{\alpha}^{2}\delta t^{2}/\delta x^{2}\), where \(\mathcal{U}\) is the characteristic velocity along the \(\alpha\)-axis. For the full polynomials expansion or product-form equilibria, this defect is only present in the bulk viscosity.
This means that under acoustic scaling, i.e. \(\delta x/\delta t=\text{const}\) the solver converges to the _compressible isothermal Navier-Stokes_ equations, as for a fixed characteristic velocity \(\mathcal{U}\) the Mach number remains constant in the limit of \(\delta t\to 0\). Furthermore, under acoustic scaling the defects in the effective viscosities do not vanish. Under diffusive scaling on the other hand, i.e. \(\delta x^{2}/\delta t=\text{constant}\), the solver converges to the _incompressible Navier-Stokes_ equations, as \(\mathcal{U}/c_{s}\to 0\) in the limit of \(\delta t\to 0\) and the defect in the effective viscosities also goes to zero.
## 3 Lattice Boltzmann models for compressible reacting flows
We have now introduced the main ingredients of classical lattice Boltzmann methods, and shown that they allow one to recover the continuity (1) and momentum (2) equations of a weakly compressible gas at constant temperature \(T_{0}\). This is indeed not sufficient for combustion applications, where the energy (3) and species (4) equations are absolutely required.
This Section is divided into 3 main subsections. First, we shall explore different alternatives for the resolution of the additional equations (energy and species) in Section 3.1. Second, we will detail the required changes to the lattice Boltzmann formulation in Section 3.2. Finally, we will list the reactive flow configurations successfully simulated by LBM solvers and discuss their performance in Section 3.3.
### Energy and species balance equations
#### 3.1.1 Double distribution function lattice Boltzmann approach for thermal flows
_Kinetic models._ Historically, the starting point of double distribution function (DDF) approaches is rooted in the need for simulations with variable Prandtl numbers and specific heat capacities, as alternatives to Holway's ellipsoidal statistics [44] or Shakhov's model [45]. This is usually achieved by introducing a second distribution function \(g\) carrying a form of energy, following [46]; the choice of this carried quantity is not unique. The earliest occurrence of a double distribution function approach is documented in [47], where the authors introduced:
\[g(\mathbf{v},\mathbf{x},t)=\frac{\left(v_{\alpha}-u_{\alpha}\right)^{2}}{2}f(\mathbf{v}, \mathbf{x},t). \tag{38}\]
Multiplying the Boltzmann equation, i.e. the balance law for \(f\) by the coefficients in the definition of the \(g\)-distribution function one obtains the balance law for the latter:
\[\frac{\partial g}{\partial t}+v_{\alpha}\frac{\partial g}{\partial x_{\alpha} }=\frac{1}{\tau_{g}}\left(g^{\text{eq}}-g\right)+fq, \tag{39}\]
where the additional non-homogeneous contribution \(q\) is:
\[q=\left(u_{\alpha}-v_{\alpha}\right)\left[\partial_{t}u_{\alpha}+v_{\beta}\frac{ \partial u_{\alpha}}{\partial x_{\beta}}\right]. \tag{40}\]
In this model, the total energy \(E\) is computed as:
\[\rho E=\int_{\mathbf{v}}\left[\frac{\mathbf{u}^{2}}{2}f(\mathbf{v},\mathbf{x},t)+g(\mathbf{v},\mathbf{ x},t)\right]d\mathbf{v}. \tag{41}\]
Some comments on this approach are necessary:
* This model, through the choice of parameter \(\tau_{g}\) allows for a variable Prandtl number Pr.
* The model assumes a mono-atomic molecule as no degrees of freedom in addition to translational are taken into account.
* The model involves space and time derivatives of macroscopic fields.
To alleviate the last issue, Guo et al. proposed to carry total energy with \(g\) instead [48]:
\[g(\mathbf{v},\mathbf{x},t)=\frac{v_{\alpha}^{2}}{2}f(\mathbf{v},\mathbf{x},t). \tag{42}\]
While this choice of a second distribution function leads to a much simpler balance law for \(g\) it also comes with a limitation of the Prandtl number. Contrary to the previous choice of \(g\) carrying internal energy where one could easily vary Pr by changing \(\tau_{g}\), here the relaxation time in the collision operator controls both relaxation of internal and kinetic energy, therefore also affecting viscous heating. To allow for variable Pr, the authors proposed to decompose the collision term into kinetic and internal contributions, leading to the following balance law:
\[\frac{\partial g}{\partial t}+v_{\alpha}\frac{\partial g}{\partial x_{\alpha }}=\frac{1}{\tau_{g}}\left(g^{\rm eq}-g\right)+\frac{Z}{\tau_{gf}}\left(f^{ \rm eq}-f\right), \tag{43}\]
where
\[Z=\frac{v_{\alpha}^{2}}{2}-\frac{\left(v_{\alpha}-u_{\alpha}\right)^{2}}{2}, \tag{44}\]
and
\[\frac{1}{\tau_{gf}}=\frac{1}{\tau_{g}}-\frac{1}{\tau}. \tag{45}\]
Since \(g\) carries the total energy it is computed solely as its zeroth-order moment:
\[\rho E=\int_{\mathbf{v}}g(\mathbf{v},\mathbf{x},t)d\mathbf{v}. \tag{46}\]
In the same contribution the authors proposed a more generalized framework allowing to incorporate additional non-translational degrees of freedom into the model by defining \(g\) as:
\[g(\mathbf{v},\mathbf{x},t)=\frac{v_{\alpha}^{2}+\eta_{\beta}^{2}}{2}f(\mathbf{v},\mathbf{x},t), \tag{47}\]
where \(\mathbf{\eta}\) is a vector with \(\delta\) component, with \(\delta\) the number of additional degrees of freedom, and summation over both \(\alpha\) and \(\beta\) is assumed. In this model the equilibrium distribution function is:
\[f^{\rm eq}(\mathbf{v},\mathbf{\eta},\mathbf{x},t)=\rho(2\pi rT)^{-(\delta+D)/2}\exp\left\{-\frac{\left(\mathbf{v}-\mathbf{u}\right)^{2}+\mathbf{\eta}^{2}}{2rT}\right\}, \tag{48}\]
and the total energy is computed as:
\[\rho E=\int_{\mathbf{v}}\int_{\mathbf{\eta}}g(\mathbf{v},\mathbf{\eta},\mathbf{x},t)d\mathbf{\eta}d\bm {v}. \tag{49}\]
While Guo et al. originally proposed this decoupling for the low Mach limit, it was extended to compressible regimes in [49] where the authors used a thirteen-velocity lattice for the \(f\) distribution function to eliminate the deviations in the third-order moments of the equilibrium distribution function. A realization of this model on the standard third-order quadrature-based lattice was proposed in [50]. The approach originally proposed in [48] has been routinely used since then in a wide number of publications for both compressible and incompressible flows, see for instance [51, 52, 53, 25]. Another realization of the double distribution function method relying on internal energy was also proposed in [54]. As noted by the authors, re-writing the balance equation of Eq. (43) as:
\[\frac{\partial g}{\partial t}+v_{\alpha}\frac{\partial g}{\partial x_{\alpha }}=\frac{1}{\tau}\left(g^{\rm eq}-g\right)+\frac{1}{\tau_{gf}}\left(g^{*}-g \right), \tag{50}\]
in the case of the model in [48]:
\[g^{*}=g^{\rm eq}+Z\left(f-f^{\rm eq}\right), \tag{51}\]
while for [54]:
\[g^{*}=g^{\rm eq}+2v_{\alpha}u_{\beta}\left(\Pi_{\alpha\beta}-\Pi_{\alpha\beta }^{\rm eq}\right). \tag{52}\]
Note that both realizations lead to the same hydrodynamic equation and in the case of the third-order quadrature-based lattices, even to the same discrete equations [54]. This realization has also been used for a variety of compressible flow simulations, see for instance [27, 55].
_Lattice Boltzmann equations._ Discretization in the space of particle velocities \(\mathbf{v}\) proceeds very similarly to that of the probability distribution function \(f\), either through projection onto the space of Hermite polynomials or via the product form construction. In the product form approach discussed in [27, 54], similar to Eq. (29):
\[g_{i}^{\rm eq}=\rho\prod_{\alpha=x,y,z}\Psi_{c_{i\alpha}}\left(\mathcal{O}_{ \alpha},\mathcal{O}_{\alpha}^{2}\right)E, \tag{53}\]
where the operator \(\mathcal{O}_{\alpha}\) acts on any smooth function \(A(\bar{r}T,u_{\alpha})\) as:
\[\mathcal{O}_{\alpha}A=\bar{r}T\frac{\partial A}{\partial u_{\alpha}}+u_{ \alpha}A. \tag{54}\]
The equations, now discrete in particle-velocity space, can then be integrated along characteristics to obtain the corresponding collision-streaming equations:
\[\bar{g}_{i}(\mathbf{x}+\mathbf{c}_{i}\delta t,t+\delta t)-\bar{g}_{i}(\mathbf{x},t)=\frac{ \delta t}{\bar{\tau}}\left(g_{i}^{\rm eq}(\mathbf{x},t)-\bar{g}_{i}(\mathbf{x},t)\right) +\frac{\delta t}{\bar{\tau}_{gf}}\left(g_{i}^{*}(\mathbf{x},t)-\bar{g}_{i}(\mathbf{x},t )\right), \tag{55}\]
where the new distribution function is:
\[\bar{g}_{i}=g_{i}-\frac{\delta t}{2}\left[\frac{1}{\tau}\left(g_{i}^{\rm eq}-g _{i}\right)+\frac{1}{\tau_{gf}}\left(g_{i}^{*}-g_{i}\right)\right] \tag{56}\]
and
\[\frac{\delta t}{\bar{\tau}_{gf}}=\frac{\delta t}{\bar{\tau}_{g}}-\frac{\delta t }{\bar{\tau}}, \tag{57}\]
with
\[\frac{\bar{\tau}_{g}}{\delta t}=\frac{\lambda}{pC_{p}}+\frac{1}{2}. \tag{58}\]
The total energy can then be obtained by summing discrete distribution functions:
\[\rho E=\sum_{i=0}^{Q-1}g_{i}. \tag{59}\]
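Schematically, the two-relaxation collision of Eq. (55) for the energy populations can be written as in the sketch below, assuming \(g^{\rm eq}\) and \(g^{*}\) have already been evaluated from Eqs. (53) and (51)/(52); the numerical values are placeholders.

```python
import numpy as np

def collide_g(g, g_eq, g_star, tau_bar, tau_bar_g, dt=1.0):
    """Two-relaxation collision of Eq. (55) for the energy populations."""
    tau_bar_gf = 1.0 / (1.0/tau_bar_g - 1.0/tau_bar)   # from Eq. (57)
    return g + dt/tau_bar * (g_eq - g) + dt/tau_bar_gf * (g_star - g)

g = np.full(9, 0.1); g_eq = np.full(9, 0.11); g_star = np.full(9, 0.105)
g_post = collide_g(g, g_eq, g_star, tau_bar=0.8, tau_bar_g=0.6)
rho_E = g_post.sum()        # total energy via Eq. (59)
print(rho_E)
```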
A multi-scale analysis shows that the models above, at the Euler level, recover:
\[\frac{\partial\rho E}{\partial t^{(1)}}+\frac{\partial\rho u_{\alpha}H}{ \partial x_{\alpha}}+\frac{\partial\rho u_{\alpha}u_{\beta}^{2}/2}{\partial x _{\alpha}}=0, \tag{60}\]
where \(H\) is the enthalpy. At the Navier-Stokes level:
\[\frac{\partial\rho E}{\partial t^{(2)}}+\frac{\partial\Pi_{\alpha}(g_{i}^{(1) })}{\partial x_{\alpha}}=0, \tag{61}\]
with:
\[\Pi_{\alpha}(g_{i}^{(1)})=-\left(\frac{\bar{\tau}_{g}}{\delta t}-\frac{1}{2} \right)p\frac{\partial H}{\partial x_{\alpha}}+u_{\beta}\Pi_{\alpha\beta}(f_{ i}^{(1)}), \tag{62}\]
where the first term is the Fourier diffusive flux while the second term is viscous heating. The double distribution function approach in combination with a proper lattice Boltzmann solver for density and momentum balance, detailed in next sections, has been used to model trans- and supersonic flows, for instance in [27]. A few results are illustrated in Fig. 2.
_Multi-species flows._ For the extension of this model to the case of multi-species flows in the context of a mixture-averaged formulation a few points must be noted:
* In all models discussed here, at the \(\epsilon^{2}\) order a Fourier flux of this form is recovered, see Eq. (62): \[-\left(\frac{\bar{\tau}_{g}}{\delta t}-\frac{1}{2}\right)p\frac{\partial H}{ \partial x_{\alpha}}=-\lambda\frac{\partial T}{\partial x_{\alpha}},\] (63) which holds if enthalpy is only function of temperature. For a mixture-averaged formulation with
multiple species, \(H=H(T,Y_{k})\), which would lead to: \[-\left(\frac{\overline{\tau}_{g}}{\delta t}-\frac{1}{2}\right)p\frac{\partial H }{\partial x_{\alpha}}=-\lambda\frac{\partial T}{\partial x_{\alpha}}-\left( \frac{\overline{\tau}_{g}}{\delta t}-\frac{1}{2}\right)p\sum_{k=0}^{N_{sp}-1}H _{k}\frac{\partial Y_{k}}{\partial x_{\alpha}},\] (64) where \(H_{k}\) is the enthalpy of the \(k^{\rm th}\) species.
* In multi-species flows, diffusive mass flux leads to a net transport of enthalpy which is absent in single-component flows.
A solution to both these shortcomings was proposed in [56] to recover consistent hydrodynamics, where the pseudo-equilibrium \(g_{i}^{*}\) is amended with two additional terms, one neutralizing the error in the Fourier diffusion and one introducing enthalpy flux through mass diffusion:
\[g_{i}^{*}=g_{i}^{\rm eq}+\frac{2w_{i}}{c_{s}^{2}}c_{i\alpha}\left[u_{\beta} \left(\Pi_{\alpha\beta}-\Pi_{\alpha\beta}^{\rm eq}\right)\ +p\sum_{k=0}^{N_{sp}-1}H_{k}\frac{\partial Y_{k}}{ \partial x_{\alpha}}+\rho\sum_{k=0}^{N_{sp}-1}Y_{k}H_{k}V_{k\alpha}\right]. \tag{65}\]
#### 3.1.2 Kinetic models for species balance equations
Over the past decades, and starting in the early 2000's [57; 58], various attempts at developing lattice Boltzmann-based models for mixtures have been documented, see for instance [59; 60; 61; 62; 63]. Some of these models are reviewed in this section.
_Thermal mixture-averaged model of Kang et al._ In [64], the authors proposed a multi-component thermal model for catalytic systems. The model is an extension of previous work documented in [65; 66; 67; 68]. It consists of \(N_{sp}\) sets of lattice Boltzmann solvers, i.e. one per species:
\[g_{ki}(\mathbf{x}+\mathbf{c}_{ki}\delta t,t+\delta t)-g_{ki}(\mathbf{x},t)=\frac{\delta t} {\overline{\tau}_{k1}}\left(g_{ki}^{*}(\rho_{k},\mathbf{u}_{k})-g_{ki}(\mathbf{x},t) \right)+\frac{\delta t}{\overline{\tau}_{k2}}\left(g_{ki}^{\rm eq}(\rho_{k}, \mathbf{u})-g_{ki}(\mathbf{x},t)\right)+\psi_{ki}. \tag{66}\]
Figure 2: Illustration of applications of the model of Eq. (55). (Left) Sound pressure field for shock–vortex interaction with advection Mach number of 1.2 and vortex Mach number set to 0.25 at \(t^{*}=6\). (Right) Iso-surface of velocity divergence colored by local Mach number for compressible decaying turbulence at Ma\(=0.5\) and \(t^{*}=0.4\). Images are reproduced from [27].
The first point to note in this model is that post-streaming discrete distribution functions migrate to \(\mathbf{x}+\mathbf{c}_{ki}\delta t\), meaning that each species propagates with a different discrete velocity magnitude, i.e. on a different lattice. As discussed in previous sections, in the lattice Boltzmann method the time-step, grid-size and reference temperature are tied through:
\[\frac{\delta x^{2}}{\delta t^{2}}=\frac{\mathcal{R}T_{0}}{W}, \tag{67}\]
which in the context of this model where \(W=W_{k}\) is different for each species, and assuming that the time-step size is the same for all solvers, would mean:
\[\|c_{ki\alpha}\|=\frac{\delta x_{k}}{\delta t}=\sqrt{\frac{\mathcal{R}T_{0}}{W_{k}}}, \tag{68}\]
i.e. not all species will propagate on the lattice. To overcome this issue, and following [69], the authors proposed to set the grid-size to that needed for the lightest species in the system, and to use interpolation for the other species in order to reconstruct distribution functions on the grid. The equilibrium \(g_{ki}^{\text{eq}}(\rho_{k},\mathbf{u})\) follows the product-form equilibrium of Eq. (29) with a few differences, namely:
\[\xi_{k\alpha}=u_{\alpha}\sqrt{W_{k}}, \tag{69}\] \[\zeta_{k\alpha\alpha}=T+W_{k}u_{\alpha}^{2}, \tag{70}\]
while for the pseudo-equilibrium \(g_{ki}^{*}(\rho_{k},\mathbf{u}_{k})\):
\[\xi_{k\alpha}=u_{k\alpha}\sqrt{W_{k}}, \tag{71}\] \[\zeta_{k\alpha\alpha}=T+W_{k}u_{k\alpha}^{2}. \tag{72}\]
In this model the second relaxation time, \(\bar{\tau}_{k2}\) sets the diffusivity to:
\[\frac{\bar{\tau}_{k2}}{\delta t}=\frac{\rho_{k}D_{k}}{p_{k}}+\frac{1}{2}, \tag{73}\]
where \(D_{k}\) is the mixture-average diffusion coefficient. The viscosity is set through the first relaxation time and using Wilke's formula:
\[\frac{\bar{\tau}_{k1}}{\delta t}=\frac{\mu_{k}}{p\sum_{k^{\prime}=0}^{N_{sp}-1 }X_{k^{\prime}}\phi_{kk^{\prime}}}+\frac{1}{2}, \tag{74}\]
with
\[\phi_{kk^{\prime}}=\frac{1}{\sqrt{8}}\frac{1}{\sqrt{1+\frac{W_{k}}{W_{k^{ \prime}}}}}\Bigg{[}1+\sqrt{\frac{\mu_{k}}{\mu_{k^{\prime}}}}\bigg{(}\frac{W_{ k^{\prime}}}{W_{k}}\bigg{)}^{1/4}\Bigg{]}^{2}. \tag{75}\]
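A short sketch of the Wilke combination entering Eq. (74): given pure-species viscosities, molar masses and mole fractions (illustrative hydrogen-oxygen values), it evaluates \(\phi_{kk^{\prime}}\) of Eq. (75), the denominator \(\sum_{k^{\prime}}X_{k^{\prime}}\phi_{kk^{\prime}}\) for every species, and, as a side product, the standard Wilke mixture viscosity.

```python
import numpy as np

mu = np.array([8.9e-6, 2.06e-5])      # pure-species viscosities [Pa s] (H2, O2)
W  = np.array([2e-3, 32e-3])          # molar masses [kg/mol]
X  = np.array([0.3, 0.7])             # mole fractions

# phi_kk' of Eq. (75)
ratio_mu = np.sqrt(mu[:, None] / mu[None, :])          # sqrt(mu_k / mu_k')
ratio_W  = (W[None, :] / W[:, None]) ** 0.25            # (W_k' / W_k)^(1/4)
phi = (1.0 + ratio_mu * ratio_W) ** 2 \
      / np.sqrt(8.0 * (1.0 + W[:, None] / W[None, :]))

denom = phi @ X                        # sum_k' X_k' phi_kk' for each species k
mu_mix = np.sum(X * mu / denom)        # Wilke mixture viscosity
print(phi)
print(mu_mix)
```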
The term \(\psi_{ki}\) in Eq. (66) is a correction term accounting for: (a) a correction velocity ensuring that the global diffusive mass flux is null, and (b) corrections for equilibrium moments of order three and four not recovered by the first-neighbour lattice. The latter terms allow the scheme to recover the proper viscous
stress tensor and non-equilibrium heat flux. In this model the macroscopic properties are computed as:
\[\rho_{k}=\sum_{i=0}^{Q-1}g_{ki}, \tag{76}\] \[\rho_{k}u_{k\alpha}=\sum_{i=0}^{Q-1}c_{ki\alpha}g_{ki},\] (77) \[\rho_{k}E_{k}=\sum_{i=0}^{Q-1}c_{ki\alpha}^{2}g_{ki}. \tag{78}\]
A multi-scale expansion of the model shows that it recovers the mixture-averaged multi-species equations and the Hirschfelder-Curtiss approximation with the mass corrector.
_Force-based approach of Vienne et al._ In [70], following the kinetic model of [71], Vienne et al. proposed a lattice Boltzmann model for isothermal multi-species mixtures recovering the Maxwell-Stefan system of equations. Considering a mixture made up of \(N_{sp}\) individual species, they proposed a coupled system of \(N_{sp}\) lattice Boltzmann solvers:
\[g_{ki}(\mathbf{x}+\mathbf{c}_{i}\delta t,t+\delta t)-g_{ki}(\mathbf{x},t)=\frac{\delta t}{ \bar{\tau}_{k}}\left(g_{ki}^{\text{eq}}(\rho_{k},\mathbf{u}_{k})-g_{ki}(\mathbf{x},t) \right)+\mathcal{S}_{i}, \tag{79}\]
where \(\mathcal{S}_{i}\) is here to introduce external body forces, realized using Guo's approach [72]:
\[\mathcal{S}_{i}=\left(1-\frac{\delta t}{2\bar{\tau}_{k}}\right)w_{i}\left( \frac{c_{i\alpha}-u_{k\alpha}}{c_{s}^{2}}+\frac{(c_{i\beta}u_{k\beta})c_{i \alpha}}{c_{s}^{4}}\right)F_{k\alpha}, \tag{80}\]
where \(\mathbf{F}_{k}\) represents the body force. In this model
\[\rho_{k}=\sum_{i=0}^{Q-1}g_{ki}, \tag{81}\]
and
\[\rho_{k}u_{k\alpha}=\sum_{i=0}^{Q-1}c_{i\alpha}g_{ki}+F_{k\alpha}. \tag{82}\]
The interaction between different species driving diffusion is introduced via a body force defined as:
\[F_{k\alpha}=-p\sum_{k^{\prime}=0}^{N_{sp}-1}\frac{X_{k}X_{k^{\prime}}}{ \mathcal{D}_{kk^{\prime}}}(u_{k\alpha}-u_{k^{\prime}\alpha}), \tag{83}\]
where \(\mathcal{D}_{kk^{\prime}}\) represents the binary diffusion coefficients. As noted by the authors, the circular inter-dependence between the force of Eq. (83) and the momenta of individual species of Eq. (82) makes the scheme implicit. A multi-scale analysis shows that this model recovers the following multi-component isothermal mass
\[\frac{\partial\rho_{k}}{\partial t}+\frac{\partial\rho_{k}u_{k\alpha}}{ \partial x_{\alpha}}=0, \tag{84}\]
and momentum balance equations
\[\frac{\partial\rho_{k}u_{k\alpha}}{\partial t}+\frac{\partial\rho_{k}u_{k\alpha} u_{k\beta}}{\partial x_{\beta}}+\frac{\partial p_{k}}{\partial x_{\alpha}}-\frac{ \partial}{\partial x_{\beta}}\left(\mu_{k}\frac{\partial u_{k\beta}}{\partial x _{\alpha}}+\mu_{k}\frac{\partial u_{k\alpha}}{\partial x_{\beta}}\right)+p \sum_{k^{\prime}=0}^{N_{sp}-1}\frac{X_{k}X_{k^{\prime}}}{\mathcal{D}_{kk^{ \prime}}}(u_{k\alpha}-u_{k^{\prime}\alpha})=0, \tag{85}\]
where the bulk viscosities \(\mu_{k}\) are defined as:
\[\frac{\bar{\tau}_{k}}{\delta t}=\frac{\mu_{k}}{\rho_{k}c_{s}^{2}}+\frac{1}{2}. \tag{86}\]
The model has been successfully used to study miscible multi-component flow behaviors such as viscous fingering, see Fig. 3. An extension of this model to thermal and reacting cases is yet to be done.
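A hedged sketch of an explicit evaluation of the friction force of Eq. (83) is given below for illustration; in the actual scheme this force also enters the momentum definition of Eq. (82), which is what makes the model implicit, so an implicit solve or a fixed-point iteration on the species momenta would be needed in practice. The function name and all numerical values are placeholders.

```python
import numpy as np

def stefan_maxwell_force(p, X, u, D):
    """Explicit evaluation of the inter-species friction force of Eq. (83).

    p : total pressure (scalar)
    X : mole fractions, shape (N,)
    u : species velocities, shape (N, dim)
    D : symmetric binary diffusion coefficients D_kk', shape (N, N)
    """
    X = np.asarray(X, dtype=float)
    u = np.asarray(u, dtype=float)
    F = np.zeros_like(u)
    for k in range(len(X)):
        for kp in range(len(X)):
            if kp != k:
                F[k] -= p * X[k] * X[kp] / D[k, kp] * (u[k] - u[kp])
    return F

# Two species drifting against each other (illustrative numbers):
D = np.array([[1.0, 2.0e-5],
              [2.0e-5, 1.0]])          # diagonal entries are never used
u = np.array([[0.01, 0.0],
              [-0.01, 0.0]])
print(stefan_maxwell_force(p=101325.0, X=[0.5, 0.5], u=u, D=D))
```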
_Model of Sawant et al._ In [56; 74], the authors proposed a kinetic model to recover the Stefan-Maxwell diffusion model. Each component is described by a set of populations \(g_{ki}\). The discrete-velocity time evolution equation is,
\[\frac{\partial g_{ki}}{\partial t}+c_{i\alpha}\frac{\partial g_{ki}}{\partial x_{\alpha}}=\sum_{k^{\prime}\neq k}\frac{1}{\theta_{kk^{\prime}}}\left[\left(\frac{g_{ki}^{\text{eq}}-g_{ki}}{\rho_{k}}\right)-\left(\frac{g_{k^{\prime}i}^{\text{eq}}-g_{k^{\prime}i}^{*}}{\rho_{k^{\prime}}}\right)\right]. \tag{87}\]
The species densities are computed as zeroth-order moment of the discrete distribution functions:
\[\rho_{k}=\sum_{i=0}^{Q-1}g_{ki}. \tag{88}\]
The symmetric set of relaxation times \(\theta_{kk^{\prime}}=\theta_{k^{\prime}k}\) is related to the binary diffusion coefficients. The first-order moments of the distribution functions are,
\[\rho_{k}u_{k\alpha}=\sum_{i=0}^{Q-1}g_{ki}c_{i\alpha}. \tag{89}\]
Figure 3: Illustration of application of multi-species model of [70]. Evolution of viscous fingering instability for a system with two species. Image reproduced from [73].
The quasi-equilibrium populations \(g^{*}_{ki}\) satisfy the following constraints on moments,
\[\sum_{i=0}^{Q-1}g^{*}_{ki} =\rho_{k}, \tag{90}\] \[\sum_{i=0}^{Q-1}g^{*}_{ki}c_{i\alpha} =\rho_{k}u_{k\alpha}. \tag{91}\]
The momenta of the individual species sum up to the mixture momentum,
\[\sum_{k=0}^{N_{sp}-1}\rho_{k}u_{k\alpha}=\rho u_{\alpha}. \tag{92}\]
The equilibrium populations \(g^{\rm eq}_{ki}\) are subject to the following constraints:
\[\sum_{i=0}^{Q-1}g^{\rm eq}_{ki} =\rho_{k}, \tag{93}\] \[\sum_{i=0}^{Q-1}g^{\rm eq}_{ki}c_{i\alpha} =\rho_{k}u_{\alpha},\] (94) \[\sum_{i=0}^{Q-1}g^{\rm eq}_{ki}c_{i\alpha}c_{i\beta} =p_{k}\delta_{\alpha\beta}+\rho_{k}u_{\alpha}u_{\beta}. \tag{95}\]
In Eq. (95), the partial pressure \(p_{k}\) depends on the mixture temperature \(T\) which is obtained from the energy balance lattice Boltzmann solver. Noting that
\[\theta_{kk^{\prime}}=\frac{\mathcal{D}_{kk^{\prime}}}{pX_{k}X_{k^{\prime}}}, \tag{96}\]
and using the equation of state the kinetic model can be re-written as:
\[\frac{\partial g_{ki}}{\partial t}+c_{i\alpha}\frac{\partial g_{ki}}{ \partial x_{\alpha}}=\,\sum_{k^{\prime}\neq k}\left(\frac{\bar{W}\mathcal{R}T }{W_{k}W_{k^{\prime}}\mathcal{D}_{kk^{\prime}}}\right)\left[Y_{k^{\prime}} \left(g^{\rm eq}_{ki}-g_{ki}\right)-Y_{k}\left(g^{\rm eq}_{k^{\prime}i}-g^{*}_ {k^{\prime}i}\right)\right]. \tag{97}\]
This equation, for the sake of convenience, is recast in the form of a relaxation equation:
\[\frac{\partial g_{ki}}{\partial t}+c_{i\alpha}\frac{\partial g_{ki}}{\partial x _{\alpha}}=\frac{1}{\tau_{k}}\left(g^{meq}_{ki}-g_{ki}\right)-F_{ki}, \tag{98}\]
where
\[\frac{1}{\tau_{k}}=\sum_{k^{\prime}\neq k}\frac{Y_{k^{\prime}}}{\tau_{kk^{ \prime}}}=r_{k}T\left(\sum_{k^{\prime}\neq k}\frac{X_{k^{\prime}}}{\mathcal{D }_{kk^{\prime}}}\right), \tag{99}\]
and
\[F_{ki}=Y_{k}\sum_{k^{\prime}\neq k}\frac{1}{\tau_{kk^{\prime}}}\left(g^{\rm eq }_{k^{\prime}i}-g^{*}_{k^{\prime}i}\right). \tag{100}\]
This form of the equation can then be integrated along characteristics to obtain the lattice Boltzmann equation. The model recovers the compressible mixture-averaged multi-species equation with the Maxwell-Stefan velocity for species diffusion. It has been successfully used for a variety of cases involving combustion applications with detailed chemistry, as illustrated in Fig. 4.
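As a small illustration of Eq. (99), the following numpy sketch evaluates the species relaxation times from the binary diffusion coefficients and the local composition, with \(r_{k}=\mathcal{R}/W_{k}\); the function name and the ternary mixture data are placeholder values used only for the example.

```python
import numpy as np

def species_relaxation_times(T, R, W, X, D):
    """Relaxation times tau_k of Eq. (99) with r_k = R / W_k.

    T : mixture temperature, R : universal gas constant,
    W : molar masses (N,), X : mole fractions (N,),
    D : binary diffusion coefficients (N, N), diagonal unused.
    """
    W = np.asarray(W, dtype=float)
    X = np.asarray(X, dtype=float)
    inv_tau = np.empty(len(W))
    for k in range(len(W)):
        s = sum(X[kp] / D[k, kp] for kp in range(len(W)) if kp != k)
        inv_tau[k] = (R / W[k]) * T * s
    return 1.0 / inv_tau

# Illustrative ternary mixture (placeholder values, SI units):
D = np.array([[0.0, 2.1e-5, 7.0e-5],
              [2.1e-5, 0.0, 8.0e-5],
              [7.0e-5, 8.0e-5, 0.0]])
print(species_relaxation_times(T=300.0, R=8.314,
                               W=[0.028, 0.032, 0.002],
                               X=[0.6, 0.2, 0.2], D=D))
```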
#### 3.1.3 Passive-scalar lattice Boltzmann models
So-called passive-scalar lattice Boltzmann solvers are models where only conservation of the zeroth-order moment of the distribution function is ensured by the collision operator. In such models, to solve an advection-diffusion-reaction partial differential equation for a field \(\Psi\), a distribution function \(g_{i}\) is defined such that:
\[\sum_{i=0}^{Q-1}g_{i}+\frac{\delta t}{2}S=\Psi, \tag{101}\]
where \(S\) is the source term. A classical non-homogeneous collision-streaming equation of the form:
\[g_{i}(\mathbf{x}+\mathbf{c}_{i}\delta t,t+\delta t)-g_{i}(\mathbf{x},t)=\frac{\delta t}{\bar{\tau}}\left[g_{i}^{\rm eq}(\Psi,\mathbf{u})-g_{i}(\mathbf{x},t)\right]+\left(1-\frac{\delta t}{2\bar{\tau}}\right)\mathcal{S}_{i}, \tag{102}\]
with
\[\sum_{i=0}^{Q-1}\mathcal{S}_{i}=S, \tag{103}\]
and
\[g_{i}^{\rm eq}(\Psi,\mathbf{u})=\frac{\Psi f_{i}^{\rm eq}(\Psi,\mathbf{u})}{\rho}, \tag{104}\]
leads to a macroscopic equation of the form:
\[\frac{\partial\Psi}{\partial t}+\frac{\partial\Psi u_{\alpha}}{\partial x_{ \alpha}}+\frac{\partial}{\partial x_{\alpha}}\left(\frac{1}{2}-\frac{\bar{ \tau}}{\delta t}\right)\left(\frac{\partial\Psi u_{\alpha}}{\partial t}+\frac {\partial\Pi_{\alpha\alpha}(g_{i}^{\rm eq})}{\partial x_{\alpha}}\right)=S. \tag{105}\]
Note that in the literature, Eq. (104) has been used with both first- and second-order polynomial expansions. Depending on the choice of the order of expansion the diffusion term will admit errors of different forms.
Figure 4: Illustration of applications of multi-species model of [56; 74]. (Left) Upper asymmetric hydrogen flame in micro-channel. (Right) Thermo-diffusive instability in radially expanding hydrogen flame. Images reproduced from [74; 75].
For instance, for a linear equilibrium,
\[\frac{\partial\Psi u_{\alpha}}{\partial t}+\frac{\partial\Pi_{\alpha\alpha}(g_{i} ^{\rm eq})}{\partial x_{\alpha}}=c_{s}^{2}\frac{\partial\Psi}{\partial x_{ \alpha}}+\frac{\partial\Psi u_{\alpha}}{\partial t}. \tag{106}\]
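To make the passive-scalar construction of Eqs. (101)-(104) concrete, the following minimal sketch advects and diffuses a scalar with a D1Q3 lattice, a linear equilibrium and no source term; all parameter values are in lattice units and purely illustrative.

```python
import numpy as np

# Minimal 1-D (D1Q3) passive-scalar LBM: advection-diffusion of Psi with
# a linear equilibrium, a BGK collision and periodic streaming.
nx, steps = 200, 2000
c = np.array([0, 1, -1])            # discrete velocities
w = np.array([2/3, 1/6, 1/6])       # lattice weights
cs2 = 1/3                           # lattice sound speed squared
u = 0.05                            # constant advection velocity
Dphi = 0.01                         # target diffusivity
tau = Dphi / cs2 + 0.5              # relaxation time (delta_t = 1)

x = np.arange(nx)
psi0 = np.exp(-0.01 * (x - nx / 4) ** 2)              # initial Gaussian pulse
g = w[:, None] * psi0 * (1 + c[:, None] * u / cs2)    # start at equilibrium

for _ in range(steps):
    psi = g.sum(axis=0)                               # zeroth-order moment
    geq = w[:, None] * psi * (1 + c[:, None] * u / cs2)
    g += (geq - g) / tau                              # BGK collision
    for i, ci in enumerate(c):                        # periodic streaming
        g[i] = np.roll(g[i], ci)

print("scalar conserved:", np.isclose(g.sum(), psi0.sum()))
```

Replacing the linear equilibrium by the second-order product form of Eq. (104) and adding the source contribution of Eq. (103) follows the same pattern.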
On that note, let us now discuss the passive scalar approach in the specific context of the species mass and energy balance equations. Although a number of different works have presented modified passive scalar approaches for non-linear dependence of the diffusion driving force on the zeroth-order moment even in the context of multi-species flows (see for instance [76, 77]), here we will limit our discussion to models that have been used for combustion simulation.
_Energy balance equation._ The energy balance equation can be written in a variety of different ways, see [39]. Here we will only discuss the form involving temperature:
\[\frac{\partial T}{\partial t}+u_{\alpha}\frac{\partial T}{\partial x_{\alpha }}-\frac{1}{\rho\bar{c}_{p}}\frac{\partial}{\partial x_{\alpha}}\left(\lambda \frac{\partial T}{\partial x_{\alpha}}\right)+\frac{\dot{\omega}_{T}}{\rho \bar{c}_{p}}=0. \tag{107}\]
The classical approach to recover this balance equation is to set \(\Psi=T\) which would lead to the following shortcomings:
* An error of the form \(T\partial u_{\alpha}/\partial x_{\alpha}\) in the convection term.
* An error of the form: \[-\frac{\lambda}{\rho\bar{c}_{p}}\frac{\partial T}{\partial x_{\alpha}}\frac {\partial\rho\bar{c}_{p}}{\partial x_{\alpha}},\] in the Fourier diffusion term.
* The enthalpy flux due to species mass diffusion is missing.
While one can overcome these issues via alternative forms of the equilibrium distribution function (see for instance [78]), the simplest way to circumvent them is to define the source term \(S\) in Eq. (105) as:
\[S=-\frac{\dot{\omega}_{T}}{\rho\bar{c}_{p}}+T\frac{\partial u_{\alpha}}{\partial x_{\alpha}}-\frac{\lambda}{\rho\bar{c}_{p}}\frac{\partial T}{\partial x_{\alpha}}\frac{\partial\rho\bar{c}_{p}}{\partial x_{\alpha}}-\sum_{k=0}^{N_{sp}-1}\frac{c_{pk}}{\bar{c}_{p}}Y_{k}V_{k\alpha}\frac{\partial T}{\partial x_{\alpha}}. \tag{108}\]
Such an approach, among others, has been used in [79, 80]. Note that in [79, 80] the last term was not considered. A similar approach can be undertaken for the case where the enthalpy or energy balance equation is targeted.
_Species mass balance equations._ For the sake of simplicity let us assume that the species mass fraction balance equation is targeted. Taking the zeroth-order moment of the distribution function to be the mass fraction \(Y_{k}\), and neglecting for the time being leading-order errors in the multi-scale analysis, one would recover a diffusion term of the form:
\[-\frac{\partial}{\partial x_{\alpha}}\left[\left(\frac{1}{2}-\frac{\bar{\tau }}{\delta t}\right)\frac{\partial Y_{k}}{\partial x_{\alpha}}\right],\]
which using \(\bar{\tau}/\delta t=D_{k}/c_{s}^{2}+1/2\) recovers the generalized Fick approximation. With that the passive scalar approach is confronted with a number of issues:
* This form of diffusion is only valid in the limit of vanishing density changes as in the non-conservative form of the balance equation there is a factor \(1/\rho\) in front of the diffusion term.
* The form of the convection term as recovered in Eq. (105) admits an error of the form \(Y_{k}\partial u_{\alpha}/\partial x_{\alpha}\), which only vanishes for incompressible flows.
* It is well known that the generalized Fick approximation does not conserve overall mass, unless either \(N_{sp}=2\) or \(D_{k}=D\), \(\forall k\). To deal with that issue there are two approaches: if the mass fraction of one particular species is dominant everywhere (e.g. that of N\({}_{2}\) in combustion with air), the balance equation for that species is not explicitly solved and one sets \(Y_{\text{N}_{2}}=1-\sum_{k=0,k\neq\text{N}_{2}}^{N_{sp}-1}Y_{k}\). A more general approach, valid also for non-dilute mixtures, is to introduce a mass corrector.
* In cases where the driving force of the diffusive flux is a linear function of the variable for which the balance equation is solved, for instance the Fick approximation, the passive scalar model can be used as is. However, for models where the driving force of diffusive flux is a non-linear function that depends on variables other than the zeroth-order moment, for instance the Hirschfelder-Curtiss approximation, the passive scalar approach would lead to errors of the same order as the diffusive flux itself.
A number of different approaches have been proposed in the literature to account for these shortcomings, see for instance [76; 77]. One of the most straightforward approaches, as used in [79; 80], is to put all corrections into a source term. For instance, assuming one targets the mass fraction balance equation with the generalized Fick approximation, the source term \(S\) would be:
\[S=Y_{k}\frac{\partial u_{\alpha}}{\partial x_{\alpha}}+\frac{D_{k}}{\rho} \frac{\partial Y_{k}}{\partial x_{\alpha}}\frac{\partial\rho}{\partial x_{ \alpha}}. \tag{109}\]
Note that this approach as used in [79; 80] still falls short with respect to the mass corrector and the more appropriate diffusion velocity closure. A number of solutions to account for the mass corrector have been proposed in the literature, taking advantage of the non-equilibrium part of the distribution function, see for instance [76].
#### 3.1.4 Hybrid models: Finite difference and finite volume solvers for energy and species
Hybrid models are closest to multiple distribution function approaches. In multiple distribution function approaches, recall that \(\rho E\) (or an alternative energy form) and \(\rho Y_{k}\) correspond to the zeroth-order moments of separate distribution functions, see Eq. (46).
When the number of species to be considered becomes large - typically \(\mathcal{O}(10-100)\) - the memory required to solve all scalars increases very quickly, which may become prohibitive for detailed chemistry descriptions. Hybrid models reduce the memory load by storing a single scalar field for each of \(\rho E\) and \(\rho Y_{k}\) instead of one population per discrete velocity. Each additional conserved scalar \(\rho\phi\) (where \(\phi\) may represent \(E\) or \(Y_{k}\)) is solved by classical finite-difference or finite-volume (FD/FV) schemes, while the continuity (1) and momentum (2) equations are still solved via their associated distribution function \(f_{i}\).
Let us now list the main advantages of the hybrid method:
1. The memory footprint is reduced, as only 1 additional scalar needs to be stored for each energy or species equation, vs. 27 for a D3Q27 distribution.
2. They are by construction free of Prandtl, Schmidt or \(\gamma\) numbers limitations, since energy/species resolution is tackled separately.
3. Since they use the same formalism as classical reactive flow solvers for energy and species equations, it is straightforward to take into account combustion-specific terms (turbulent combustion closures, advanced transport models, Soret effect or multi-component diffusion etc.), based on the experience accumulated over many decades using FD/FV solvers.
In turn, hybrid methods suffer from the following drawbacks:
1. Ensuring consistency between the LBM scheme (for continuity and momentum equations) and FD/FV schemes is not straightforward (at the opposite of, e.g. multi-speed or multiple distribution approaches). This can typically lead to disastrous spurious currents, as illustrated later in Fig. 11.
2. FD/FV schemes based only on nearest-neighbor stencils (as used in most LBM solvers) are typically much more dissipative than LBM schemes [81].
The first point is crucial in designing hybrid LBM schemes, and is therefore discussed at length hereafter. The impact of the second point is limited for most applications as long as the vortical and acoustic modes are left within the LBM part of the solver.
_Which form to use for species/energy equations in a hybrid LBM scheme?_ Energy and species equations may be written under a large variety of forms (based on total energy, internal energy, temperature,...). While these forms are indeed equivalent for a continuous formulation, their coupling under discrete form with the LBM scheme may be very different.
Let us recall that for small perturbations and neglecting all dissipation terms (reducing to the multi-component Euler equations), the system (1-4) may be linearized, and each perturbation can be decomposed into the so-called Kovasznay modes [82, 83] (acoustic mode, 3 components of the vorticity modes, entropy mode, and 1 per species).
For instance, the entropy mode of the Euler system follows the equation
\[\frac{\partial s}{\partial t}+u_{\alpha}\frac{\partial s}{\partial x_{\alpha }}=0,\]
and is only weakly coupled with the rest of the system.
For this reason, hybrid methods using an entropy equation were shown to provide reasonable results for moderately compressible flows [84, 85, 86] using several classical convective numerical schemes a priori unrelated with LBM:
* Second-order central difference schemes, potentially blended with upwind, [87, 88, 89, 90]
* Lax-Wendroff scheme [91, 92]
* MUSCL schemes [84, 85, 86]
* Heun scheme [93],
*...
Indeed, for reactive flows, the entropy equation is complex to derive in its general case. However, the enthalpy equation under non-conservative form
\[\rho\frac{\partial h}{\partial t}+\rho u_{i}\frac{\partial h}{\partial x_{i}}= \frac{dP}{dt}-\frac{\partial q_{i}}{\partial x_{i}}+\Pi_{ij}\frac{\partial u_{ i}}{\partial x_{j}} \tag{110}\]
is also a characteristic mode of the system - provided the pressure work \(\frac{dP}{dt}\) is neglected, a very common assumption for low-Mach reactive flows.
Species also directly follow characteristic equations, provided they are written under non-conservative form
\[\rho\frac{\partial Y_{k}}{\partial t}+\rho u_{i}\frac{\partial Y_{k}}{\partial x _{i}}=\frac{\partial}{\partial x_{i}}(\rho Y_{k}V_{k,i})+\dot{\omega}_{k}, \tag{111}\]
There is an alternative but not equivalent way of understanding how crucial this choice is. Consider the species equation in conservative form
\[\frac{\partial\rho Y_{k}}{\partial t}+\frac{\partial\rho u_{i}Y_{k}}{\partial x _{i}}=\rho\left(\frac{\partial Y_{k}}{\partial t}+u_{\alpha}\frac{\partial Y_ {k}}{\partial x_{i}}\right)+Y_{k}\left(\frac{\partial\rho}{\partial t}+\frac{ \partial\rho u_{i}}{\partial x_{i}}\right). \tag{112}\]
It is clear that this equation is the sum of the non-conservative form (a system characteristic) and the continuity equation. Therefore, any numerical error between the continuity equation solved by LBM and the one hidden in the conservative form leads to an inconsistency.
To summarize, provided the equations to be solved using FD/FV are only weakly coupled with the rest of the system, the resulting hybrid LBM solver has been shown to provide reasonable results for a wide number of cases.
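As a minimal illustration of the hybrid philosophy, the sketch below performs explicit first-order upwind updates of a non-conservative scalar transport equation of the type \(\partial\phi/\partial t+u\,\partial\phi/\partial x=0\), with the velocity field playing the role of the one provided by the LBM part of the solver; the scheme choice and all parameters are illustrative and not taken from any specific reference.

```python
import numpy as np

def upwind_step(phi, u, dx, dt):
    """One explicit first-order upwind step of d(phi)/dt + u d(phi)/dx = 0."""
    dphi_back = (phi - np.roll(phi, 1)) / dx       # backward difference
    dphi_fwd = (np.roll(phi, -1) - phi) / dx       # forward difference
    return phi - dt * u * np.where(u > 0, dphi_back, dphi_fwd)

nx = 100
x = np.linspace(0.0, 1.0, nx, endpoint=False)
phi = np.where((x > 0.2) & (x < 0.4), 1.0, 0.0)    # top-hat profile
u = 0.3 * np.ones(nx)                # velocity field, here a stand-in for the LBM solution
for _ in range(100):
    phi = upwind_step(phi, u, dx=1.0 / nx, dt=0.02)
print("bounds preserved:", phi.min() >= -1e-12 and phi.max() <= 1.0 + 1e-12)
```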
_Restoring the conservativity of hybrid LBM._ Equations (110,111) are equivalent to the initial total energy and species equations (3,4), but the discrete formulation is not. This has two disadvantages:
* Global energy conservation is not numerically enforced (while the LBM scheme is numerically mass and momentum preserving).
* Rankine-Hugoniot relationships are not satisfied across discontinuities.
The latter is clearly visible in Fig. 5, which presents a reference 2-D Riemann problem and the solution as obtained with hybrid LBM using a MUSCL-Hancock scheme to solve the entropy equation.
Figure 5: Two-dimensional problem of Lax & Liu [94]. From left to right: reference solution [94], solution obtained with the entropy equation (MUSCL-Hancock), solution obtained with the corresponding total energy equation scheme [92]. Shown are the density fields for configuration 3 at time \(t=0.3\).
Wissocq et al. [92] recently presented a method to construct a linearly equivalent total energy scheme from any FD/FV scheme that is linearly stable for hybrid LBM (e.g., for the entropy equation).
### Compressible continuity and momentum balance equations
All strategies presented above for the resolution of the additional energy and species equations coupled to a LBM solver require modifications to the LBM core. The major options are presented hereafter.
#### 3.2.1 Lattices with higher-order quadratures
_Standard approach with polynomial equilibria._ As discussed in the isothermal models section, the Maxwell-Boltzmann phase-space continuous equilibrium can be matched at the discrete level via a number of methods following the same general principle: matching the different moments of the continuous equilibrium with the discretized version. As such, the classical moment-matching method routinely used for Eulerian discrete velocity Boltzmann models and the truncated Hermite expansion approach both fall into that category. In the case of the former, once the number of constraints on moments of the equilibrium and the degrees of freedom in the form of the number of discrete velocities have been set, construction of the discrete equilibria boils down to solving the following linear system:
\[\mathbf{M}\mathbf{f}^{\mathbf{eq}}=\Pi^{\mathrm{MB}}, \tag{113}\]
where \(\Pi^{\mathrm{MB}}\) is a vector of size \(1\times Q\), \(Q\) being the number of constraints on moments, containing the moments of the continuous Maxwell-Boltzmann distribution corresponding to the targeted constraints:
\[\Pi^{\mathrm{MB}}_{n}=\int v_{x}^{p}v_{y}^{q}v_{z}^{r}f^{\mathrm{MB}}d\mathbf{v}, \tag{114}\]
with \(n=p+q+r\). The quantity \(\mathbf{f}^{\mathbf{eq}}\) is the vector of size \(1\times Q\) containing the discrete equilibria and \(\mathbf{M}\) the transformation matrix from discrete equilibria to moments. For instance, in 1-D, for a solver targeting the Navier-Stokes-Fourier dynamics a minimum of five discrete velocities is needed as one must correctly recover moments of order zero to four. This approach to constructing a discrete solver, while quite flexible, has a number of shortcomings: the matrix \(\mathbf{M}\) is not necessarily invertible for any choice of moments and discrete velocities, as illustrated by the introduction of so-called _anti-symmetric_ discrete velocities in some higher-order discrete velocity Boltzmann models, see for instance [95]; and while the number of velocities is set by the constraints, the sizes and size-ratios of these velocities have no _a priori_ closures and are usually tuned via trial and error. A possible closure for the size of the discrete velocities would be to use the roots of the Hermite polynomial of the corresponding order. The only issue with that choice is that above order three Hermite roots do not guarantee space-filling lattices and therefore on-lattice propagation.
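A hedged numpy sketch of the moment-matching system of Eq. (113) for the 1-D five-velocity set \(\{0,\pm 1,\pm 3\}\) is shown below: the five discrete equilibria are obtained by matching the Maxwell-Boltzmann raw moments of order zero to four; the function name and test values are assumptions made for the example.

```python
import numpy as np

def discrete_equilibrium(rho, u, T, c=np.array([0.0, 1.0, -1.0, 3.0, -3.0])):
    """Solve M f_eq = Pi_MB (Eq. 113) for a 1-D five-velocity lattice."""
    M = np.vander(c, N=5, increasing=True).T     # row n contains c_i**n
    pi_mb = rho * np.array([1.0,                 # raw Maxwell-Boltzmann moments
                            u,
                            u**2 + T,
                            u**3 + 3*u*T,
                            u**4 + 6*u**2*T + 3*T**2])
    return np.linalg.solve(M, pi_mb)

feq = discrete_equilibrium(rho=1.0, u=0.05, T=1.0 - np.sqrt(2.0/5.0))
print(feq, feq.sum())   # the populations and a check of the zeroth moment
```

Invertibility of \(\mathbf{M}\) is guaranteed here because the velocities are distinct (Vandermonde structure); for other choices of moments and velocity sets this is precisely what may fail.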
Nevertheless, a large number of publications using larger discrete velocity sets are documented in the literature:
* A group of these publications do not rely on Lagrangian approaches to discretize physical space and time and use Eulerian approaches such as finite differences or finite volumes to discretize the coupled system of hyperbolic equations of the discrete velocity Boltzmann model. In doing so the sizes of discrete velocities can be freely tuned to stabilize the simulation for a specific test-case.
* Another group of publications use Lagrangian approach to discretize physical space and time and overcome the issue of non-space-filling lattices by supplementing the collision-propagation step with
an interpolation step to bring back post-streaming discrete velocities on lattice. These approaches are sometimes referred to as _semi-Lagrangian_, see for instance [96, 97].
* Another category of publications, relying on the classical on-lattice method, proposes to stabilize multi-speed lattice Boltzmann solvers for compressible flows through different collision models, such as multiple-relaxation time or regularized, see for instance [98, 99].
All of the previously-listed models have had limited success in modeling generic high-speed compressible flows with large temperature variations in the domain. A number of alternatives have been proposed since then to considerably widen the stability domain of multi-speed lattice Boltzmann solvers. They will be discussed next.
_Extension of stability domain: Entropic equilibria._ The entropic construction of the discrete equilibrium state introduced for isothermal models can be reformulated in a more general form as a minimization problem subject to \(M\) constraints:
\[\delta H+\delta(\sum_{m=0}^{M-1}\lambda_{m}\Pi_{m})=0. \tag{115}\]
The formal solution of this constrained minimization leads to a function of the following form:
\[f_{i}^{\rm eq}=\rho w_{i}\exp\Biggl{\{}\left[\sum_{m=0}^{M}\lambda_{m}\left( \sum_{j=0}^{Q-1}\frac{\partial\Pi_{m}}{\partial f_{j}}\right)\right]\Biggr{\}}. \tag{116}\]
Note that other forms of the minimizer, without the weights \(w_{i}\), have also been proposed and used in the literature [100], most notably for entropic Grad moments methods [100, 101]. For instance, a model imposing only constraints on collisional invariants, i.e.
\[\sum_{i=0}^{Q-1}f_{i}^{\rm eq} =\rho, \tag{117a}\] \[\sum_{i=0}^{Q-1}c_{i\alpha}f_{i}^{\rm eq} =\rho u_{\alpha},\] (117b) \[\sum_{i=0}^{Q-1}c_{i\alpha}^{2}f_{i}^{\rm eq} =\rho\left(u_{\alpha}^{2}+DrT\right), \tag{117c}\]
would lead to the following discrete equilibrium [102]:
\[f_{i}^{\rm eq}=\rho w_{i}\exp\bigl{\{}\left[\lambda_{0}+\lambda_{\alpha}c_{i \alpha}+\lambda_{2}\mathbf{c}_{i}^{2}\right]\bigr{\}}. \tag{118}\]
It is interesting to note that while, for the most part, entropic equilibria construction has been done by enforcing constraints on collisional invariants, one may reduce higher-order moment errors by adding the corresponding constraints in Eq. (115). This is sometimes referred to as _guiding_ the equilibrium and the corresponding discrete equilibria are referred to as _guided equilibria_ [103, 104]. In the context of the lattice Boltzmann method, this extension of constraints was discussed for the first time in [105] through the concept of auxiliary and target equilibria. There, auxiliary equilibria were constructed by enforcing constraints on collisional
invariants and target equilibria, a combination of auxiliary equilibria and additional degrees of freedom, by enforcing constraints on higher order moments.
Once the form of the equilibrium distribution function has been determined, its construction consists of finding the expressions of the different Lagrange multipliers. This is done by introducing the discrete equilibrium back into the set of constraints, which leads to a system of \(M\) equations with the \(M\) Lagrange multipliers as unknowns. While an analytical expression was derived for the isothermal case with \(D+1\) constraints, for larger systems no such solutions exist. In the absence of a closed-form solution one can use numerical methods such as Newton iterations to find the Lagrange multipliers at every grid-point and every time-step [106].
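As an illustration of such a Newton procedure, the hedged sketch below computes the multipliers of an equilibrium of the form of Eq. (118) on a D1Q3 lattice under the constraints of Eq. (117); the generalization to larger velocity sets only changes the arrays c and w. Function names and test values are assumptions for the example.

```python
import numpy as np

c = np.array([0.0, 1.0, -1.0])      # D1Q3 velocities
w = np.array([2/3, 1/6, 1/6])       # D1Q3 weights

def entropic_equilibrium(rho, u, T, tol=1e-12, max_iter=50):
    """Newton iteration for the Lagrange multipliers of Eq. (118)."""
    target = rho * np.array([1.0, u, u**2 + T])     # moments of orders 0, 1, 2
    C = np.vstack([c**0, c**1, c**2])               # row n holds c_i**n
    lam = np.zeros(3)                               # multipliers of 1, c, c^2
    for _ in range(max_iter):
        f = rho * w * np.exp(lam @ C)
        residual = C @ f - target
        if np.max(np.abs(residual)) < tol:
            break
        jacobian = C @ (f * C).T                    # d(moment_m)/d(lambda_n)
        lam -= np.linalg.solve(jacobian, residual)
    return rho * w * np.exp(lam @ C)

feq = entropic_equilibrium(rho=1.0, u=0.1, T=1/3)
print(feq, (c * feq).sum())                         # first moment = rho*u
```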
As shown in previous sections, one systematic approach to choose an optimal set of discrete velocities is to rely on the Gauss-Hermite quadrature and roots of Hermite polynomials. However, apart from the third-order quadrature leading to the DdQ3\({}^{d}\) lattices, all other higher order quadratures result in off-lattice propagation of some of the discrete distribution functions. In [107], starting from a set of discrete velocities the authors proposed an approach to find a reference temperature and corresponding weights. This is achieved through the _closure relation_ and _matching_ conditions. For a set of discrete velocities \(\mathcal{V}\) with \(Q\) vectors \(c_{i}\), the \(Q^{\text{th}}\) power of \(c_{i}\) can be written as a linear combination of lower order odd-powers from \(Q-2\) to \(1\), i.e.
\[c_{i}^{Q}=a_{Q-2}c_{i}^{Q-2}+a_{Q-4}c_{i}^{Q-4}+\cdots+a_{1}c_{i}. \tag{119}\]
For instance, in the case of the D1Q3 lattice one has \(c_{i}^{3}=c_{i}\). This essentially means that the moment of order \(Q\) is not an independent moment and can not be set at one's will. The only possibility is to set the linear in \(u\) term of the \(Q^{\text{th}}\) order to its Maxwell-Boltzmann counter-part and in doing so determine the reference temperature, which is referred to as the _matching_ condition. Consider for instance the D1Q3 lattice again. The third-order moment is going to be \(u_{x}\) while the Maxwell-Boltzmann distribution leads to \(u_{x}^{3}+3T_{0}u_{x}\). To match the linear term one must have \(3T_{0}=1\). Note that not any choice of lattice admits a reference temperature. For example the velocity set \(\mathcal{V}=\{-2,-1,0,+1,+2\}\) will lead to a closure relation of the form \(c_{i}^{5}=5c_{i}^{3}-4c_{i}\) and a matching condition, \(15T_{0}^{2}-15T_{0}+4=0\), which does not admit any solutions. This explains why the shortest admissible five-velocity lattice is \(\mathcal{V}=\{-3,-1,0,+1,+3\}\) with \(T_{0}=1\pm\sqrt{2/5}\). Once the reference temperature is determined, the weights are readily found by matching the moments of the discrete equilibrium at \(\rho=1\) and \(u_{x}=0\) to their Maxwell-Boltzmann counter-parts. Considering the condition of positivity of the weights one also find the range of temperature that can be covered by the chosen system of discrete velocities. The closure relations and reference temperatures of a number of 1-D lattice are summarized in Table 1.
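The closure relation and matching condition can be automated; the following hedged sketch reproduces the entries of Table 1 by expanding the monic polynomial having the discrete velocities as roots and solving the matching condition as a polynomial in \(T\). The helper names are assumptions made for this example.

```python
import numpy as np

def double_factorial(n):
    return 1 if n <= 0 else n * double_factorial(n - 2)

def reference_temperatures(velocities):
    """Closure relation (Eq. 119) and admissible reference temperatures T0."""
    v = np.asarray(velocities, dtype=float)
    Q = len(v)
    p = np.poly(v)                        # monic polynomial prod_i (c - c_i)
    a = {j: -p[Q - j] for j in range(Q)}  # c^Q = sum_j a[j] c^j
    deg = (Q - 1) // 2
    match = np.zeros(deg + 1)             # matching condition, coefficient of T^m
    match[deg] += Q * double_factorial(Q - 2)
    for j, aj in a.items():
        if j % 2 == 1:                    # only odd powers carry a linear-in-u term
            match[(j - 1) // 2] -= aj * j * double_factorial(j - 2)
    roots = np.roots(match[::-1])
    return a, roots[np.isreal(roots)].real

for V in ([0, 1, -1], [0, 1, -1, 3, -3], [0, 1, -1, 2, -2, 3, -3]):
    print(V, reference_temperatures(V)[1])
```

For the D1Q3, D1Q5 and D1Q7 sets this recovers \(T_{0}=1/3\), \(1\pm\sqrt{2/5}\) and approximately \(0.697953\), respectively, in line with Table 1.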
One successful example of such lattices is the D2Q49 shown in Fig. 6. The closure relation for the 1-D set is
\[c_{i}^{7}=14c_{i}^{5}-49c_{i}^{3}+36c_{i}. \tag{120}\]
The 1-D weights read:
\[w_{0} =\frac{36-49T+42T^{2}-15T^{3}}{36}, \tag{121a}\] \[w_{\pm 1} =\frac{T(12-13T+5T^{2})}{16},\] (121b) \[w_{\pm 2} =\frac{T(-3+10T-5T^{2})}{40},\] (121c) \[w_{\pm 3} =\frac{T(4-15T+15T^{2})}{720}, \tag{121d}\]
which lead to \(T_{\min}=1-\sqrt{2/5}\) and \(T_{\max}=1+\sqrt{2/5}\). Note that the range of accessible temperatures can be further extended by changing the ratio of the largest and shortest discrete velocities, here \(\pm 3\) and \(\pm 1\). In [106] the author also proposed pruning strategies to reduce the number of discrete velocities in 2-D and 3-D, leading to the D3Q39 lattice, which reduces the discrete velocities by one order of magnitude compared to the tensor product of the D1Q7, i.e. D3Q343.
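The admissible temperature range can be checked directly from the positivity of the weights of Eq. (121), e.g. with the short scan below; the scan range and resolution are arbitrary choices.

```python
import numpy as np

def d1q7_weights(T):
    """Weights of Eq. (121) for the D1Q7 lattice {0, +-1, +-2, +-3}."""
    w0 = (36 - 49*T + 42*T**2 - 15*T**3) / 36
    w1 = T * (12 - 13*T + 5*T**2) / 16
    w2 = T * (-3 + 10*T - 5*T**2) / 40
    w3 = T * (4 - 15*T + 15*T**2) / 720
    return np.array([w0, w1, w1, w2, w2, w3, w3])

T = np.linspace(1e-3, 2.5, 20001)
positive = np.array([np.all(d1q7_weights(t) >= 0.0) for t in T])
print(T[positive].min(), T[positive].max())   # approximately 1 -/+ sqrt(2/5)
```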
_Adaptive reference frame models._ As observed for both isothermal and compressible models, errors in higher-order moments scale with the deviations of local temperature and velocity from the lattice reference temperature and velocity. For all symmetric lattices considered up to that point the lattice reference velocity is \(U=0\). In [108] the authors proposed to challenge the idea of a reference frame at rest by introducing a non-zero shift \(U\). It was noted that the discrete entropy functional is uniquely defined by the weights \(w_{i}\). The weights of a lattice with \(Q\) discrete velocities, as shown in the previous section, are determined by
\begin{table}
\begin{tabular}{l|l|l|l} \(Q\) & \(V\) & Closure & \(T_{0}\) \\ \hline
3 & \(\{0,\pm 1\}\) & \(c_{i}^{3}=c_{i}\) & \(1/3\) \\ \hline
5 & \(\{0,\pm 1,\pm 3\}\) & \(c_{i}^{5}=10c_{i}^{3}-9c_{i}\) & \(1\pm\sqrt{2/5}\) \\ \hline
7 & \(\{0,\pm 1,\pm 2,\pm 3\}\) & \(c_{i}^{7}=14c_{i}^{5}-49c_{i}^{3}+36c_{i}\) & \(0.697953\) \\ \hline
9 & \(\{0,\pm 1,\pm 2,\pm 3,\pm 5\}\) & \(c_{i}^{9}=39c_{i}^{7}-399c_{i}^{5}+1261c_{i}^{3}-900c_{i}\) & \(0.756081\), \(2.175382\) \\ \hline
11 & \(\{0,\pm 1,\pm 2,\pm 3,\pm 4,\pm 5\}\) & \(c_{i}^{11}=55c_{i}^{9}-1023c_{i}^{7}+7645c_{i}^{5}-21076c_{i}^{3}+14400c_{i}\) & \(1.062794\) \\ \end{tabular}
\end{table}
Table 1: One-dimensional Maxwell lattices with odd number of integer-valued velocities, \(Q=3,5,7,9,11\). Second column: Lattice vectors; Third column: Closure relation, defining the reference temperature \(T_{0}\) through the matching condition (fourth column).
Figure 6: Illustration of the D2Q49 lattice.
matching the first \(Q\) moments of the Maxwell-Boltzmann equilibrium distribution function at temperature \(T\) and \(u_{x}=0\):
\[\sum_{i=0}^{Q-1}\phi(c_{i})w_{i}(0,rT)=\int\phi(v)f^{\text{MB}}(0,rT)dv. \tag{122}\]
It was shown through the Galilean-invariance of the moments of the Maxwell-Boltzmann distribution function and the binomial theorem that the weights are also Galilean-invariant and therefore untouched by the change of reference frame. The immediate consequences of that observation are: (a) construction of 3-D lattices via tensorial product of the 1-D lattice remains as before, (b) assuming \(U=k\delta x/\delta t\) with \(k\in\mathbb{Z}\) the propagation remains on-lattice and (c) the discrete entropy functional is Galilean invariant and therefore equilibrium populations are form invariant under the shift of reference frame. This point along with the effect of the shift on operation range has also been discussed for standard isothermal lattices [109]. The shifted lattice, along with the entropic equilibrium, has been successfully used to model a wide variety of high Mach number flows, as illustrated in Fig. 8. The idea of shifted reference frames was later generalized to a locally adaptive reference velocity and temperature through the particles on
Figure 8: Drag coefficient \(c_{d}\) as a function of the free stream Mach number for the Busemann biplane simulations. Inset: snapshots of the pressure distribution around the biplane for three different Mach numbers: \(\text{Ma}=1.5\), top; \(\text{Ma}=1.7\), bottom left; \(\text{Ma}=2.0\), bottom right. Figure reproduced from [102].
Figure 7: Illustration of the D2Q49 lattice with a shift of \(U_{x}=\delta x/\delta t\).
demand (PonD) method [28]. In this approach the collision-streaming operation is performed in a reference frame corresponding to the local velocity and temperature. This minimizes the deviations of the higher-order moments of the discrete equilibrium from those of the Maxwell-Boltzmann distribution function and, in doing so, allows for arbitrarily large variations in speed and temperature. The particles on demand method has been used to model high Mach number cases in recent years [110]. It is also currently used in combination with a Lee-Tarver reaction model to simulate detonation at high Mach numbers, see [111].
#### 3.2.2 Standard lattice density-based solvers
_Coupling to temperature field._ As discussed in the first section, the original lattice Boltzmann method targeted the incompressible Navier-Stokes equations in the low Mach number limit. To that end the temperature appearing in the equilibrium distribution function was that of a reference state guaranteeing validity of the low Mach assumption. The compressible Navier-Stokes equations can be recovered by replacing the reference temperature with the local fluid temperature obtained from the second distribution function or from the FD/FV solver used for the energy balance; considering for instance Eq. (28), it changes into:
\[\zeta_{\alpha\alpha}=c_{s}^{2}\theta+u_{\alpha}^{2}, \tag{123}\]
where \(\theta=\bar{r}T/\bar{r}_{0}T_{0}\). Introducing this term allows for a correct recovery of Euler level pressure while setting the relaxation time to:
\[\bar{\tau}=\frac{\nu}{c_{s}^{2}\theta}+\frac{\delta t}{2}, \tag{124}\]
Figure 9: Mach reflection and regular reflection created from the interaction of incident detonation waves for adiabatic exponent (top) \(\gamma=1.4\), or (bottom) \(\gamma=5/3\). Images reproduced from [111].
allows for correct recovery of the Navier-Stokes level viscous stress coefficient. With the temperature now an independent space- and time-varying parameter, it will inevitably deviate from the reference temperature, i.e. \(\theta=1\) which is the optimal operation temperature of the third-order quadrature-based lattice Boltzmann model. Deviation from the reference temperature comes with a number of difficulties. The first one is the reduced domain of stability, illustrated best by the linear stability domain shown in Fig. 10.
The second is that to properly recover the full Navier-Stokes viscous stress tensor a number of additional considerations have to be taken into account. These are discussed in the next paragraph.
_Galilean-invariance of third-order moments and corrections._ As discussed for the isothermal lattice Boltzmann method in previous sections, a simple CE analysis shows that at the NS level, moments of orders two and three of the EDF must be correctly recovered. Diagonal components of the third-order moments tensor, i.e. moments of the form \(\Pi_{\alpha\alpha\alpha}\), can not be correctly recovered due to the \(c_{i\alpha}^{3}=c_{i\alpha}\) bias of the third-order quadrature-based lattice. While the continuous Maxwell-Boltzmann equilibrium distribution leads to:
\[\Pi_{\alpha\alpha\alpha}^{\rm MB}=\rho u_{\alpha}^{3}+3\rho c_{s}^{2}u_{\alpha }\theta, \tag{125}\]
any of the discrete equilibrium distribution functions discussed here recovers:
\[\Pi_{\alpha\alpha\alpha}^{\rm eq}=3\rho c_{s}^{2}u_{\alpha}, \tag{126}\]
which for \(\theta=1\) introduces a cubic-in-velocity error and for \(\theta\neq 1\) a linear one. As such the issue of Galilean-variance of the third-order moments becomes quite critical in the case of compressible flows where \(\theta\neq\rm const\). To account for this error, corrections in the form of source terms in the kinetic equation are introduced:
\[\partial_{t}f_{i}+c_{i\alpha}\frac{\partial f_{i}}{\partial x_{\alpha}}=\frac {1}{\tau}\left(f_{i}^{\rm eq}-f_{i}\right)+\Psi_{i}. \tag{127}\]
Figure 10: Linear stability domain of lattice Boltzmann at different non-dimensional temperatures and viscosities as obtained from von Neumann analysis. Reproduced from [112].
The form of the source term, \(\Psi_{i}\), can be derived through the order-two-in-\(\epsilon\) (NS level) momentum balance equation:
\[\frac{\partial^{(2)}\rho u_{\alpha}}{\partial t}+\frac{\partial}{\partial x_{ \beta}}\tau\left(\frac{\partial^{(1)}\Pi^{\rm eq}_{\alpha\beta}}{\partial t}+ \frac{\partial\Pi^{\rm eq}_{\alpha\beta\gamma}}{\partial x_{\gamma}}\right) +\frac{\partial}{\partial x_{\beta}}\tau\left(\sum_{i=0}^{Q-1}c_{i\alpha}c_{i \beta}\Psi_{i}^{(1)}\right)=0, \tag{128}\]
leading to:
\[\Psi_{i}^{(1)}=\frac{w_{i}}{2c_{s}^{4}}\frac{\partial}{\partial x_{\alpha}}\mathcal{H}_{\alpha\alpha}(\mathbf{c}_{i})\,\delta\Pi^{\rm eq}_{\alpha\alpha\alpha}, \tag{129}\]
where
\[\delta\Pi^{\rm eq}_{\alpha\alpha\alpha}=\rho u_{\alpha}\left[u_{\alpha}^{2}+3 c_{s}^{2}\left(\theta-1\right)\right]. \tag{130}\]
For the stress tensor to be correctly recovered at this scale one must have:
\[\Psi_{i}=\frac{w_{i}}{2c_{s}^{4}}\partial_{\alpha}\mathcal{H}_{i,\beta\gamma }\delta\Pi^{\rm eq}_{\alpha\beta\gamma}. \tag{131}\]
Note that to get this expression the correction term, involving first-order space derivatives, was assumed to appear at the first order of the expansion, i.e. \(\Psi_{i}=\Psi_{i}^{(1)}\).
A different form of the correction term can be obtained with a different expansion, i.e. \(\Psi_{i}^{\prime}=\Psi_{i}^{(2)}\). Such an expansion would lead to the following NS-level equation:
\[\frac{\partial^{(2)}\rho u_{\alpha}}{\partial t}+\frac{\partial}{\partial x_{ \beta}}\tau\left(\frac{\partial^{(1)}\Pi^{\rm eq}_{\alpha\beta}}{\partial t}+ \frac{\partial\Pi^{\rm eq}_{\alpha\beta\gamma}}{\partial x_{\gamma}}\right) -\sum_{i=0}^{Q-1}c_{i\alpha}{\Psi^{{}^{\prime}}_{i}}^{(2)}=0, \tag{132}\]
and a correction term of the form:
\[\Psi_{i}^{{}^{\prime}}=\frac{w_{i}}{c_{s}^{2}}c_{i\alpha}\frac{\partial}{ \partial x_{\alpha}}\left(\frac{\mu}{p}\frac{\partial\delta\Pi^{\rm eq}_{ \alpha\alpha\alpha}}{\partial x_{\alpha}}\right). \tag{133}\]
The above-listed corrections were derived for the discrete kinetic equations. The classical lattice Boltzmann approach to space/time discretization would lead to the following redefined discrete distribution function:
\[\bar{f}_{i}=f_{i}-\frac{\delta t}{2}\Omega_{i}-\frac{\delta t}{2}\Psi_{i}, \tag{134}\]
which in turn would lead to the following final algebraic system:
\[\bar{f}_{i}\left(\mathbf{x}+\mathbf{c}_{i}\delta t,t+\delta t\right)-\bar{f}_{i}\left( \mathbf{x},t\right)=\frac{\delta t}{\bar{\tau}}\left(f_{i}^{\rm eq}\left(\mathbf{x},t \right)-\bar{f}_{i}\left(\mathbf{x},t\right)\right)+\left(1-\frac{\delta t}{2\bar {\tau}}\right)\Psi_{i}. \tag{135}\]
This consistent derivation of the _extended_ LBM holds for any realization of the correction term, whether it is introduced simply as a Hermite-expanded term [52] or by using the extended equilibrium approach [27].
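A hedged sketch of how the defect of Eq. (130) and the correction of Eq. (129) can be evaluated on a D2Q9 lattice is given below, with the spatial derivatives approximated by second-order finite differences; field shapes, names and values are illustrative assumptions.

```python
import numpy as np

cx = np.array([0, 1, 0, -1, 0, 1, -1, -1, 1])
cy = np.array([0, 0, 1, 0, -1, 1, 1, -1, -1])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cs2 = 1/3

def correction_term(rho, ux, uy, theta, dx=1.0):
    """Psi_i of Eq. (129); inputs share shape (ny, nx), output is (9, ny, nx)."""
    dpi_x = rho * ux * (ux**2 + 3*cs2*(theta - 1.0))   # delta Pi_xxx, Eq. (130)
    dpi_y = rho * uy * (uy**2 + 3*cs2*(theta - 1.0))   # delta Pi_yyy
    ddpi_x = np.gradient(dpi_x, dx, axis=1)            # d/dx
    ddpi_y = np.gradient(dpi_y, dx, axis=0)            # d/dy
    H_xx = (cx**2 - cs2)[:, None, None]                # second-order Hermite, xx
    H_yy = (cy**2 - cs2)[:, None, None]                # second-order Hermite, yy
    return w[:, None, None] / (2*cs2**2) * (H_xx*ddpi_x + H_yy*ddpi_y)

ny, nx = 32, 32
X, Y = np.meshgrid(np.arange(nx, dtype=float), np.arange(ny, dtype=float))
psi = correction_term(rho=np.ones((ny, nx)),
                      ux=0.05*np.sin(2*np.pi*X/nx), uy=np.zeros((ny, nx)),
                      theta=1.0 + 0.2*np.cos(2*np.pi*Y/ny))
print(psi.shape, float(psi.sum()))   # the correction carries no mass: sum ~ 0
```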
#### 3.2.3 Pressure-based solvers
While the density-based model detailed in the previous section was successfully used for a number of applications, see [88; 113], it was observed that it led to spurious currents near curved flame interfaces, see Fig. 11 for a circular flame. A detailed study of numerical properties of the model showed that this is a result of a non-physical coupling between entropic and vorticity mode, see [114]. A pressure-based formulation
was then proposed [84] to cure the problem, the detailed reason only being understood later [85].
The pressure-based algorithm is presented hereafter.
**Step 1**: Calculation of the \(p\)-based equilibrium distribution \(f_{i}^{p,eq}\) from \((t,\mathbf{x})\) moments :
\[f_{i}^{p,eq}(t,\mathbf{x})=\omega_{i}\Big{\{}\mathcal{H}^{(0)}\rho\theta+\frac{ \mathcal{H}_{i\alpha}^{(1)}}{c_{s}^{2}}\rho u_{\alpha}+\frac{\mathcal{H}_{i \alpha\beta}^{(2)}}{2c_{s}^{4}}[\rho u_{\alpha}u_{\beta}]+\frac{\mathcal{H}_{ i\alpha\beta\gamma}^{(3)}}{6c_{s}^{6}}[\rho u_{\alpha}u_{\beta}u_{\gamma}] \Big{\}}(t,\mathbf{x})\,. \tag{136}\]
**Step 2**: Off-equilibrium population reconstruction \(\overline{f}_{i}^{neq}(t,\mathbf{x})\) from moments \(\left[\rho,\rho u_{\alpha},\Pi_{\alpha\beta}^{neq}\right](t,\mathbf{x})\), using a collision model, e.g. projected [115] or recursive [98; 99] regularization.
**Step 3**: Collision and streaming,
\[f_{i}^{p,col}(t,\mathbf{x})=f_{i}^{p,eq}(t,\mathbf{x})+\left(1-\frac{ \delta t}{\overline{\tau}}\right)\overline{f}_{i}^{neq}(t,\mathbf{x})\,, \tag{137}\] \[\overline{f}_{i}^{p}(t+\delta t,\mathbf{x})=f_{i}^{p,col}(t,\mathbf{x}- \mathbf{c}_{i}\delta t)\,. \tag{138}\]
**Step 4**: macroscopic reconstruction
\[\rho(t+\delta t,\mathbf{x})=\sum_{i}\overline{f}_{i}^{p}(t+\delta t, \mathbf{x})+\rho(t,\mathbf{x})[1-\theta(t,\mathbf{x})]. \tag{139}\] \[\rho u_{\alpha}(t+\delta t,\mathbf{x})=\sum_{i}c_{i\alpha}\overline{ f}_{i}^{p}(t+\delta t,\mathbf{x})\,,\] (140) \[\Pi_{\alpha\beta}^{\overline{f}^{neq}}(t+\delta t,\mathbf{x})=\sum_{i }c_{i\alpha}c_{i\beta}\left[\overline{f}_{i}^{p}-f_{i}^{p,eq}\right](t+\delta t,\mathbf{x})\,, \tag{141}\]
Update of the energy variable (hybrid or DDF method) [116; 117; 118; 84]. From this additional step, \(\theta(t+\delta t,\mathbf{x})\) is now updated.
Figure 11: Streamlines of the 2-D circular flame simulation colored by velocity magnitude (in m/s). Left column: density-based model [88], right column: pressure-based model [87]. Note the very different ranges of velocity magnitude from the two methods. The yellow contour is the heat release rate peak indicating the flame front.
Note the differences compared to the density-based model presented in the previous section:
* The zeroth moment of \(f_{i}\) is now \(p=\rho\theta\) instead of \(\rho\).
* The macroscopic reconstruction of Step 4 now includes a correction \(\rho(t+\delta t,\mathbf{x})=\sum_{i}\overline{f}_{i}^{p}(t+\delta t,\mathbf{x})+\rho(t, \mathbf{x})[1-\theta(t,\mathbf{x})]\) accounting for dilatation.
This second point was presented by the authors as a predictor-corrector procedure, close to early artificial compressibility methods [119, 120]. It is important to note, however, that despite being pressure-based, the algorithm is globally mass conserving, as are density-based methods.
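Two building blocks of the above algorithm are illustrated below: a p-based equilibrium in the spirit of Eq. (136), truncated here at second order in Hermite polynomials on a D2Q9 lattice, and the density reconstruction of Eq. (139). This is a hedged sketch of the ideas, not the full collision-streaming cycle of the published scheme, and the numerical values are arbitrary.

```python
import numpy as np

cx = np.array([0, 1, 0, -1, 0, 1, -1, -1, 1])
cy = np.array([0, 0, 1, 0, -1, 1, 1, -1, -1])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cs2 = 1/3

def p_based_equilibrium(rho, theta, ux, uy):
    """f_i^{p,eq}: zeroth moment is rho*theta (second-order Hermite truncation)."""
    cu = cx*ux + cy*uy
    H2uu = cu**2 - cs2*(ux**2 + uy**2)      # H^(2)_ab u_a u_b
    return w * (rho*theta + rho*cu/cs2 + rho*H2uu/(2*cs2**2))

def reconstruct_density(f_p, rho_old, theta_old):
    """Step-4 macroscopic reconstruction of Eq. (139)."""
    return f_p.sum() + rho_old * (1.0 - theta_old)

rho, theta, ux, uy = 1.2, 0.9, 0.05, -0.02
f_p = p_based_equilibrium(rho, theta, ux, uy)
print(reconstruct_density(f_p, rho, theta))   # recovers rho = 1.2 at equilibrium
```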
_Link between pressure-based and density-based formulations._ Since many mesh transition algorithms, boundary conditions, etc. were initially developed for density-based algorithms [121], there is an interest in establishing a rigorous link between the pressure-based and density-based algorithms.
This can be obtained by noting that the correction \(\rho(t,\mathbf{x})[1-\theta(t,\mathbf{x})]\) in the macroscopic reconstruction can be equivalently embedded directly in the \(f_{0}\) term corresponding to the stationary discrete velocity by introducing the density-based function:
\[\overline{f}_{i}^{\rho}(t+\delta t,\mathbf{x})=\overline{f}_{i}^{p}(t+\delta t,\mathbf{x})+\delta_{0i}\,\rho(t,\mathbf{x})[1-\theta(t,\mathbf{x})], \tag{142}\]
where \(\delta_{0i}\) is the Kronecker symbol. By projecting \(\delta_{0i}\) on a Hermite polynomial basis, it was further shown [85] that this change is equivalent to adding to the classical \(\overline{f}_{i}^{\rho}\) a fourth order contribution, leading to an equilibrium function of the generic form
\[f_{i}^{eq}=\omega_{i}\Big{\{} \mathcal{H}^{(0)}\rho+\frac{\mathcal{H}_{i\alpha}^{(1)}}{c_{s}^{ 2}}\rho u_{\alpha}+\frac{\mathcal{H}_{i\alpha\beta}^{(2)}}{2c_{s}^{4}}\left[ \rho u_{\alpha}u_{\beta}+\delta_{\alpha\beta}\rho c_{s}^{2}(\theta-1)\right]+ \frac{\mathcal{H}_{i\alpha\beta\gamma}^{(3)}}{6c_{s}^{6}}\Big{[}\rho u_{\alpha }u_{\beta}u_{\gamma}\] \[-\kappa\rho c_{s}^{2}\left(u_{\alpha}\delta_{\beta\gamma}+u_{ \beta}\delta_{\gamma\alpha}+u_{\gamma}\delta_{\alpha\beta}\right)\Big{]}- \frac{\mathcal{A}_{i}+\mathcal{B}_{i}+\mathcal{C}_{i}}{12c_{s}^{4}}\rho[ \theta-1](1-\zeta)\Big{\}}\,, \tag{143}\]
with additional information projected onto fourth order polynomials \(\mathcal{A}_{i}\), \(\mathcal{B}_{i}\) and \(\mathcal{C}_{i}\). In the model, \(\kappa\) and \(\zeta\) are free parameters. For instance, \((\kappa,\zeta)=(1,1-\theta)\) corresponds to the density-based model of [117], while \((\kappa,\zeta)=(0,0)\) yields the pressure-based model of (136).
Successful applications of the resulting generic model include the modelling of
* Hele-Shaw cell [87];
* Turbulent premixed combustion burners; [90, 122]
* Turbulent lifted H\({}_{2}\)-air jet flame [123];
* Thermo-acoustic instabilities [122, 124];
* Cellular detonation structure [93];
with the last two points being illustrated in Fig. 12.
#### 3.2.4 Low Mach thermo-compressible pressure-based solver
In 2019, Hosseini et al. proposed a low Mach lattice Boltzmann scheme for simulations of combustion, and more generally of dilatable flows [125]. A low Mach reduction of the fully compressible models of the previous section was also proposed in [126]. The scheme is categorized as low Mach in the sense that it follows the philosophy of Majda's zero-Mach model [127] where, after a Helmholtz decomposition of the velocity field, the divergence-free part is obtained via Poisson's equation and the curl-free part from the species and energy fluxes. Unlike Majda's zero-Mach model, here the _divergence-free_ component solver allows for a certain level of compressibility, i.e. spurious acoustic waves. To recover this modified set of macroscopic equations the model makes use of the following modified kinetic system [125]:
\[\frac{\partial g_{i}^{\prime}}{\partial t}+c_{i\alpha}\frac{\partial g_{i}^{ \prime}}{\partial x_{\alpha}}=\frac{1}{\tau}\left({g_{i}^{\mathrm{eq}}}^{ \prime}-g_{i}^{\prime}\right)+\Xi_{i}, \tag{144}\]
where
\[g_{i}^{\prime}=w_{i}p_{h}+c_{s}^{2}\left(f_{i}-w_{i}\rho\right), \tag{145}\]
and the source term \(\Xi_{i}\) is defined as:
\[\Xi_{i}=c_{s}^{2}\left(\frac{f_{i}^{\mathrm{eq}}}{\rho}-w_{i}\right)\left(c_ {i\alpha}-u_{\alpha}\right)\frac{\partial\rho}{\partial x_{\alpha}}+w_{i}c_{ s}^{2}\rho\frac{\partial u_{\alpha}}{\partial x_{\alpha}}. \tag{146}\]
In this model, hydrodynamic pressure \(p_{h}\) and velocity \(\mathbf{u}\) are coupled via the distribution function via:
\[\sum_{i=0}^{Q-1}g_{i}^{\prime} =p_{h}, \tag{147a}\] \[\sum_{i=0}^{Q-1}c_{i\alpha}g_{i}^{\prime} =\rho c_{s}^{2}u_{\alpha}, \tag{147b}\]
Figure 12: Illustration of successful applications of the generic HRR model (143): (left) a cycle of a thermo-acoustic instability in the PRECCINSTA burner [122], and (right) 2-D detonation cellular structure with varying activation energy [93].
while density \(\rho\) is now a variable computed locally through the ideal equation of state:
\[\rho=\frac{p_{th}}{\bar{r}T}. \tag{148}\]
The source of divergence appearing in Eq. (146) is computed via the continuity equation combined with the energy and species balance equations:
\[\frac{\partial u_{\alpha}}{\partial x_{\alpha}}=-\frac{1}{p_{th}}\frac{dp_{th}} {dt}+\frac{1}{T}\left(\frac{\partial T}{\partial t}+u_{\alpha}\frac{\partial T }{\partial x_{\alpha}}\right)+\sum_{k=1}^{N_{sp}}\frac{\bar{W}}{W_{k}}\frac{1 }{T}\left(\frac{\partial Y_{k}}{\partial t}+u_{\alpha}\frac{\partial Y_{k}}{ \partial x_{\alpha}}\right), \tag{149}\]
where summation over \(\alpha\) is assumed. A multi-scale analysis of this kinetic model shows that the following balance equation is effectively applied to the hydrodynamic pressure [34]:
\[\frac{1}{\rho c_{s}^{2}}\partial_{t}p_{h}+\frac{\partial u_{\alpha}}{\partial x _{\alpha}}=-\frac{1}{p_{th}}\frac{dp_{th}}{dt}+\frac{1}{T}\left(\frac{ \partial T}{\partial t}+u_{\alpha}\frac{\partial T}{\partial x_{\alpha}} \right)+\sum_{k=1}^{N_{sp}}\frac{\bar{W}}{W_{k}}\frac{1}{T}\left(\frac{ \partial Y_{k}}{\partial t}+u_{\alpha}\frac{\partial Y_{k}}{\partial x_{ \alpha}}\right), \tag{150}\]
while for momentum, as for the classical lattice Boltzmann with second-order equilibrium, the Navier-Stokes equation is recovered with a deviation of order \(\propto\left\|\mathbf{u}\right\|^{3}\delta t^{3}/\delta x^{3}\) in both diagonal and deviatoric components of the viscous stress tensor.
Note that after integration along characteristics, the discrete time evolution equations for the re-defined distribution function \(\vec{g^{\prime}}_{i}\) are obtained as:
\[\vec{g^{\prime}}_{i}(\mathbf{x}+\mathbf{c}_{i}\delta t,t+\delta t)-\vec{g^{\prime}}_{ i}(\mathbf{x},t)=\frac{\delta t}{\bar{\tau}}\left(\vec{g^{\prime}}_{i}^{\rm eq}( \mathbf{x},t)-\vec{g^{\prime}}_{i}(\mathbf{x},t)\right)+\left(1-\frac{\delta t}{2\bar{ \tau}}\right)\Xi_{i}, \tag{151}\]
while moments are computed as:
\[\sum_{i=0}^{Q-1}\vec{g^{\prime}}_{i}+\frac{\delta tc_{s}^{2}}{2} \left(\rho\frac{\partial u_{\alpha}}{\partial x_{\alpha}}+u_{\alpha}\frac{ \partial\rho}{\partial x_{\alpha}}\right)=p_{h}, \tag{152a}\] \[\sum_{i=0}^{Q-1}c_{i\alpha}\vec{g^{\prime}}_{i}=\rho c_{s}^{2}u_ {\alpha}. \tag{152b}\]
While only the most basic single relaxation time form of this model is introduced here, interested readers can readily get access to more advanced realizations, for instance via the Cumulants-based multiple relaxation time collision operator in [128, 32].
Over the past couple of years this low Mach lattice Boltzmann solver, in combination with shock-capturing finite-difference schemes such as the weighted essentially non-oscillatory (WENO) scheme, has been used to model a variety of complex reacting flow configurations, including
* Turbulent combustion [129];
* Combustion in swirl burner [128];
* Combustion in porous media [130].
Some of these applications are illustrated in Fig. 13. A simpler form of this model was also used in [131] to
model droplet combustion. In terms of limitations, the model is - as its name suggests - limited to the low Mach regime. Furthermore, as mentioned previously, the Navier-Stokes level viscous stress tensor admits deviations, for which corrections are to be published in an upcoming article. Finally, exact conservation is a topic that needs improvement here, as the form of the energy and species balance equations, along with the finite-difference discretizations and curved boundary treatment, all lead to loss of exact conservation.
### Performance of multi-physics LBM solvers
Let us now provide details regarding the advantages and limitations of multi-physics LBM solvers, and how they compare to classical LBM solvers targeting mainly low-Mach aerodynamic and aero-acoustic applications.
Classical LBM solvers have found their success due to three main reasons: (i) the ability to tackle complex geometries in a simple way, (ii) low dissipation owing to the streaming step, and (iii) a reduced computational cost due to the stencil compactness and octree structure. These advantages come at the cost of a heavier memory load (with more degrees of freedom), a complex treatment of boundary conditions (owing to the non-body-fitted mesh), and filtering problems in grid refinement areas where both spatial and time discretizations are halved/doubled.
Let us now consider the three aforementioned advantages and see if they apply to LBM multiphysics solvers. (i) Ability to tackle complex geometries is preserved, as the discretization remains identical. (ii) The low dissipation property is more model dependent. For multiple distribution functions, each distribution is solved using the same algorithm so that dissipation is supposed to remain identical. For hybrid approaches, however, a separate numerical scheme is used for energy and species equations, which may lead to additional dissipation [84] on the entropy/enthalpy Kovasznay modes. Comparing the computational cost (iii) between LBM and Navier-Stokes solvers is a long standing issue. While a thorough comparison is still lacking, some
Figure 13: Illustration of some of the recent applications of the low Mach model of [125]. (a) Simulation of PRECCINSTA swirl injector at equivalence ratio 0.83 [128], the red surface illustrating the flame structure. (b) Simulation of deflagrating flame in a chamber with obstacles [128]. (c) Simulation of flame propagation in randomly-generated porous media [130].
LBM studies report reduced computational times (RCT), consistently below \(5\mu\)s per time step and grid point for relevant combustion problems [89; 90; 122; 132].
We list in Tab. 2 different references tackling combustion problems using multi-physics LB solvers (regardless of the strategy). This fast-expanding list is a clear indication that multi-physics LBM solvers have now reached sufficient maturity for combustion applications.
## 4 Conclusion and discussion
While extension of the lattice Boltzmann method to the simulation of combustion happened slower than other areas of application such as incompressible and multi-phase flows, the progress documented in recent years has laid the ground for widespread use and application of lattice Boltzmann to combustion. The complex simulations reported in the literature, see for instance [93; 122; 128; 130], show that the numerical approach has reached a level of maturity allowing it to be applied to realistic configurations. In this contribution we focused on the development of lattice Boltzmann-based approaches to model combustion and discussed some of the most pressing challenges and solutions proposed in the literature.
The evolution of the literature shows that one of the major challenges preventing progress in that area was the development of stable and efficient solutions to extend the lattice Boltzmann solvers to compressible regimes. Stability has been one of the most restricting issues in that area. While the use of higher-order lattices is one straightforward approach to move up to compressible flows, it has been observed that apart from the additional memory and computation costs stemming from the larger number of discrete velocities, higher-order quadratures are subject to more restrictive stability conditions, especially on the temperature. This has led the community to opt for approaches relying on low-order quadratures, i.e. third-order, which had shortcomings regarding the Navier-Stokes-level viscous stress tensor due to insufficient degrees of freedom in the model. Introduction of correction terms for the viscous stress tensor along with more robust collision operators has now paved the way for simulations involving large temperature variations.
Other issues specific to the simulation of combustion in the context of the lattice Boltzmann method are tied to additional balance equations for species and energy. A number of different strategies have been devised for these additional fields. Some rely on developing kinetic models and, therefore, lattice Boltzmann solvers for multi-species fluids, while others prefer either lattice-Boltzmann-based passive scalar solvers or classical FV/FD solvers for the balance equations, leading to a hybrid formulation.
While state-of-the-art lattice Boltzmann solvers are now routinely used for complex combustion configurations involving complex geometries and turbulent flows, a number of technical challenges still persist:
\begin{table}
\begin{tabular}{l|c} Canonical configuration & References \\ \hline Laminar premixed and diffusion flame & [20; 34; 113; 125] \\ Reacting Taylor-Green vortex & [89; 129] \\ Circular expanding flames & [133] \\ Darrieus-Landau instabilities & [87; 88] \\ Thermo-acoustic instabilities & [122; 124] \\ Turbulent premixed burner (LES) & [90; 122; 128] \\ Flame in porous media & [130] \\ Detonations & [93; 111] \\ \end{tabular}
\end{table}
Table 2: A list of canonical combustion problems treated using lattice Boltzmann methods in the recent literature
* One of the remaining challenges is to get exactly conservative curved boundary conditions for the lattice Boltzmann solver. While the bare half-way bounce-back method, resulting in a stair-case approximation to the geometry [134, 135], ensures mass conservation, all curved treatments presented in the literature, see for instance [136, 137, 138], result in a loss of conservativity of the boundary condition. For a more detailed discussion of conservation issues for curved boundary treatments interested readers are referred to [29, 139, 140]. A number of routes can be taken to overcome this issue, such as the use of immersed boundaries, which would come at the cost of diffuse interfaces, or the use of volumetric/flux-based boundary treatments, see for instance [141, 142].
* Development and implementation of conservative and efficient dynamic grid-refinement strategies is also another topic to be further developed in the future. Although grid-refinement in the context of the lattice Boltzmann method has been developed and used since the early 2000's, see for instance [143, 144, 145, 146, 147], mass-conservation, spurious currents at refinement interfaces and dynamic refinement are still topics of discussion in the literature.
At the end of this detailed review regarding past achievements obtained for combustion thanks to LB simulations, it is now time to look briefly to the future. This work is obviously not the first review concerning lattice Boltzmann simulations, and previous studies have sometimes included long-term perspectives, most prominently in the work by Succi [148] - recently updated in [149]. It is obviously necessary to evaluate recent evolutions again in the light of those predictions. One must keep in mind that applications are the focus of the present review, as stated in the title, so that it does not appear meaningful to include exceedingly exotic concepts here.
In the same manner, the future of high-fidelity combustion simulations has been discussed in previous reviews, for instance [150, 151, 152]. Here also, reflecting on the corresponding statements will be useful. Of course, the aspects already discussed at length in the core of the present article will not be repeated here in the interest of space. Since the focus has been placed on important methodological aspects of LB in this review, a bird's view seems more appropriate to finish. As such, the main points emerging for the foreseeable future would read as follows.
* ... or related methods like Volume of Pixel - appears as a promising solution [155].
* Multi-physics and multiscale applications, adaptivity: with the growing performance of existing computational platforms and the ever-increasing complexity of the target applications, problems involving a variety of physical and chemical processes - mostly coupled in a strongly non-linear manner - and taking place over a broad diversity of scales in time and space progressively become the rule. While concentrating here on combustion applications, it must still be recognized that LB has by now been used successfully for virtually any kind of flow (and even beyond fluid mechanics). The next step - certainly soon to be reached - will for example involve accurate LB simulations of turbulent multiphase reacting flows including realistic chemistry, thermodynamics, and transport models for species and heat, up to radiative heat transfer. Such multi-physics configurations typically involve a broad range of relevant scales in time and space [156], from micrometers to centimeters for living beings [157], or even "from inside protons to the outer Universe", citing the optimistic statement of [149]. In that case, an optimal combination of different numerical approaches will become necessary to allow for a numerical solution of the full configuration, for instance by coupling LB to Molecular Dynamics simulations [158, 159, 160]. Multiscale issues can be partly mitigated by using adaptivity - in particular in space for LB (local grid refinement/coarsening), a solution that has already been discussed in this review [161, 122].
* Machine learning (ML): for combustion applications - usually taking place in the turbulent regime - two lines of research will certainly be followed in the near future. One will concentrate on the thermoreactive part of the problem, for instance using Deep Neural Networks to describe kinetics, potentially leading to very large savings in computing time and memory [162]. The other line will concentrate on ML at subgrid scale (SGS) when using LB for Large-Eddy Simulations (LES) of turbulent reacting flows [122, 128]. In that case, either the behaviour of the pure turbulent flow will be described by a SGS model based on ML [163] - an approach now well established in conventional CFD [164]; or ML could be used to represent directly turbulent/combustion coupling at subgrid scale, an even more challenging but potentially more rewarding solution, for which much remains to be done [165].
* New computational architectures: Though this might not be immediately clear for young scientists, the performance of a numerical approach is not constrained only by considerations from applied mathematics (stability, convergence, dissipation, dispersion), but is also directly impacted by the details of the computational architecture on which large-scale simulations are carried out. In that sense, the comparison of different methods in terms of computational performance completely depends on the employed system. While method 1 might be order-of-magnitude faster than method 2 on a conventional, single-core system, the comparison might be completely reversed on a large, fine-grain parallel computer, or when computing on Graphical Processing Units (GPU). In that sense, an essential advantage of LB (in addition to the error-free streaming step and linearity of the basic operator) is its locality. In the standard LB formulation, only first-neighbour communications are requested, making it perfectly suited for the currently employed computer architectures; this is one essential explanation to understand the growing success of LB for many applications. Nobody knows which computer architecture will dominate in 20 years. In his review from 2015, Succi [148] already mentioned the suitability of LB for Quantum Computing (QC). Indeed, porting high-order CFD methods involving unstructured grids on QC systems sounds like a nightmare, and LB could here again profit from its
apparent simplicity and locality. Lattice Boltzmann on Quantum Computers is a subject of current research [166; 167]. Still, QC systems being currently barely available for researchers, the impact of Quantum Computing on future LB simulations cannot be reliably estimated. The same applies to even more exotic architectures like biological computers, that have not even entered a preliminary test-phase.
## Acknowledgements
S.A.H. and I.K. would like to acknowledge the financial support of European Research Council (ERC) through the Advanced Grant no. 834763-PonD and computational resources provided by the Swiss National Supercomputing Center CSCS under grant no. s1212. P.B. acknowledges financial support from the French National Research agency (grants ANR-20-CE05-0009 & ANR-21-CE05-0028), as well as computational resources provided by Centre de Calcul Intensif d'Aix-Marseille and GENCI, France (Grant A0132B11951). D.T. acknowledges financial support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) in TRR 287 (Project-ID No. 422037413).
| lattice Boltzmann 法は、30年近くにわたって数値流体力学の分野で用いられてきており、現在では、万能で効率的で非常に人気のある数値ツールとなっています。ここ10年間の普及には、その効率性、低い数値散逸、アルゴリズムの簡素さなどが貢献しています。近年の進展により、新たな挑戦的な応用分野が開拓されました: 燃焼シミュレーションです。燃焼は、時間と空間において多くの変数とスケールが関与する非常に硬い多スケール問題であるため、数値ツールにとって課題となることが多いのです。この研究では、過去数年で開発された燃焼モデルと戦略の概要を提示し、燃焼シミュレーションにおける最近の応用、課題、展望について議論します。 |
2308.00151 | Classical stochastic representation of quantum mechanics | We show that the dynamics of a quantum system can be represented by the
dynamics of an underlying classical systems obeying the Hamilton equations of
motion. This is achieved by transforming the phase space of dimension $2n$ into
a Hilbert space of dimension $n$ which is obtained by a peculiar canonical
transformation that changes a pair of real canonical variables into a pair of
complex canonical variables which are complex conjugate of each other. The
probabilistic character of quantum mechanics is devised by treating the wave
function as a stochastic variable. The dynamics of the underlying system is
chosen so as to preserve the norm of the state vector. | Mário J. de Oliveira | 2023-07-31T21:02:43 | http://arxiv.org/abs/2308.00151v1 | # Classical stochastic representation of quantum mechanics
###### Abstract
We show that the dynamics of a quantum system can be represented by the dynamics of an underlying classical systems obeying the Hamilton equations of motion. This is achieved by transforming the phase space of dimension \(2n\) into a Hilbert space of dimension \(n\) which is obtained by a peculiar canonical transformation that changes a pair of real canonical variables into a pair of complex canonical variables which are complex conjugate of each other. The probabilistic character of quantum mechanics is devised by treating the wave function as a stochastic variable. The dynamics of the underlying system is chosen so as to preserve the norm of the state vector.
The earliest formulations of quantum mechanics were given by Schrodinger, who introduced the quantum wave equation that bears his name, and by Heisenberg who introduced the quantum matrix mechanics. These two formulations were shown to be equivalent and, together with other formulations, are understood as different representations of the same theory. The standard formulation of quantum mechanics considers that the quantum state is a vector with complex components belonging to the Hilbert vector space. The time evolution of the quantum state is given by a unitary transformation whose generator is the Hamiltonian operator acting on the Hilbert space. This type of evolution guarantees that the norm of the state vector is preserved for all times.
Quantum mechanics [1; 2; 3; 4; 5; 6; 7; 8] as a science of motion differs fundamentally from classical mechanics [9; 10; 11; 12]. For instance, the mathematical objects corresponding to real physical quantities such as position and momentum are very distinct in the two theories. In quantum mechanics they are operators acting on a Hilbert space and the possible outcomes of an observable are the eigenvalues of the corresponding operator. In fact, classical and quantum mechanics are conflicting scientific theories of the same real phenomena, and just one of them could give the correct predictions. It is indeed admitted that classical mechanics does not correctly describe nature at small scales. At large scales quantum mechanics is reduced to classical mechanics and both predict the same results.
The question we address here is not whether the science of quantum mechanics is equivalent to the science of classical mechanics, as they are not. The question we address is whether the abstract framework of quantum mechanics can in some sense be equivalent to the abstract framework of classical mechanics. We answer this question positively by showing that the dynamics of a state vector belonging to a Hilbert space of dimension \(n\) is equivalent to the dynamics of a classical system with \(n\) degrees of freedom. This classical system we call the _underlying_ system to avoid confusion with a real system described by classical mechanics. The wave function is then understood as related to a pair of classical canonical variables, its real and imaginary parts being proportional to the coordinate and momentum, respectively. The underlying system cannot be any classical system, but only one whose motion preserves the norm of the complex wave function.
The idea of expressing classical mechanics in a Hilbert space that we use here was considered by Koopman [13], who showed that canonical transformations are equivalent to unitary transformations if the state functions in phase space are square integrable [14]. This result was also used by von Neumann to formulate classical mechanics as an operational theory [15].
Quantum mechanics has a probabilistic character that is particularly manifest in the standard interpretation of quantum mechanics, according to which the square of the absolute value of the wave function is a probability. Here, the probabilistic character of quantum mechanics is devised by considering that the wave function is a stochastic variable, that is, a time dependent random variable. Accordingly, the state vector in the Hilbert space follows a stochastic trajectory. In this sense, the present stochastic representation is in accordance with the consistent history interpretation of quantum mechanics [8].
Let us consider the representation of classical mechanics by the Hamilton equations of motion. In this representation, a state is defined as a vector of the phase space spanned by the canonical variables. The dimension of the vector phase space equals \(2n\) where \(n\) is the number of degrees of freedom, which is the number of pairs of canonically conjugate variables. The canonical Hamilton equations of motion are given by
\[\frac{dq_{k}}{dt}=\frac{\partial\mathcal{H}}{\partial p_{k}},\qquad\frac{dp_{k }}{dt}=-\frac{\partial\mathcal{H}}{\partial q_{k}}, \tag{1}\]
where \(\mathcal{H}\) is the Hamiltonian function and \((q_{k},p_{k})\) denotes one of the \(n\) pairs of canonically conjugate variables.
The pairwise formulation of the canonical equations of motion allows a _peculiar_ transformation [9] of the pair of real canonical variables \((q_{k},p_{k})\) to a pair of complex canonical variables \((z_{k},z_{k}^{*})\). This peculiar transformation is accomplished by \(z_{k}=\alpha_{k}q_{k}+i\beta_{k}p_{k}\) where \(\alpha_{k}\) and \(\beta_{k}\) are real constants such that \(\alpha_{k}\beta_{k}=1/2\mu\), and \(\mu\) is some constant with the physical dimension of coordinate\(\times\)momentum. This transformation guarantees that the pair \((z_{k},z_{k}^{*})\) is a pair of canonically conjugate variables. In terms of the new variables the equations of motion become
\[i\mu\frac{dz_{k}}{dt}=\frac{\partial\mathcal{H}}{\partial z_{k}^{*}},\qquad i \mu\frac{dz_{k}^{*}}{dt}=-\frac{\partial\mathcal{H}}{\partial z_{k}}, \tag{2}\]
where \(z_{k}\) and \(z_{k}^{*}\) are dimensionless and treated as independent variables, and \(\mathcal{H}\) is a real function of the set of variables \(\{z_{k}\}\) and \(\{z_{k}^{*}\}\). The Hamilton equations can also be written in terms of Poisson brackets
\[i\mu\frac{dz_{k}}{dt}=\{z_{k},\mathcal{H}\},\qquad i\mu\frac{dz_{k}^{*}}{dt}=\{z _{k}^{*},\mathcal{H}\}. \tag{3}\]
The Poisson brackets between two state functions \(\mathcal{A}\) and \(\mathcal{B}\) are defined by
\[\{\mathcal{A},\mathcal{B}\}=\sum_{j}\left(\frac{\partial\mathcal{A}}{\partial z _{j}}\frac{\partial\mathcal{B}}{\partial z_{j}^{*}}-\frac{\partial\mathcal{B} }{\partial z_{j}}\frac{\partial\mathcal{A}}{\partial z_{j}^{*}}\right), \tag{4}\]
and we remark that \(\{z_{j},z_{k}^{*}\}=\delta_{jk}\).
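As a concrete illustration (an added check, not part of the original derivation), consider a single harmonic oscillator with Hamiltonian \(\mathcal{H}=p^{2}/2m+m\omega^{2}q^{2}/2\). Choosing \(\alpha=\sqrt{m\omega/(2\mu)}\) and \(\beta=1/\sqrt{2m\omega\mu}\), which satisfy \(\alpha\beta=1/2\mu\), gives

\[\mathcal{H}=\mu\omega z^{*}z,\qquad i\mu\frac{dz}{dt}=\frac{\partial\mathcal{H}}{\partial z^{*}}=\mu\omega z,\]

so that \(z(t)=z(0)e^{-i\omega t}\), the familiar uniform phase rotation of the oscillator in the complex plane.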
The time evolution of a state function \(\mathcal{A}\), that is, as a function of the set of variables \(\{z_{j}\}\) and \(\{z_{k}^{*}\}\), is given in terms of the Poisson brackets by
\[i\mu\frac{d\mathcal{A}}{dt}=\{\mathcal{A},\mathcal{H}\}. \tag{5}\]
which follows from (3).
As the two equations of motion for \(z_{k}\) and \(z_{k}^{*}\) are the complex conjugate of each other, we may consider them to be just one equation in complex variables. Thus we are representing the motion of a classical system as a trajectory in a vector space with \(n\) dimensions with complex components \(z_{k}\), which defines a Hilbert vector space.
We assume the Hamiltonian \(\mathcal{H}\) to be a bilinear function in the complex variables,
\[\mathcal{H}=\sum_{jk}H_{jk}z_{j}^{*}z_{k}, \tag{6}\]
where \(H_{jk}\) are understood as the elements of a matrix \(H\), which is Hermitian because \(\mathcal{H}\) is real. The norm \(\mathcal{N}\) of a state \(\{z_{k}\}\) is defined by
\[\mathcal{N}=\sum_{j}z_{j}^{*}z_{j}, \tag{7}\]
and we see that it is a constant of the motion since it commutes in the Poisson sense with the Hamiltonian, \(\{\mathcal{N},\mathcal{H}\}=0\). Therefore we may set \(\mathcal{N}\) equal to a constant which we choose to be \(1\).
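Written out explicitly (an added one-line check using definition (4)),

\[\{\mathcal{N},\mathcal{H}\}=\sum_{j}\left(\frac{\partial\mathcal{N}}{\partial z_{j}}\frac{\partial\mathcal{H}}{\partial z_{j}^{*}}-\frac{\partial\mathcal{H}}{\partial z_{j}}\frac{\partial\mathcal{N}}{\partial z_{j}^{*}}\right)=\sum_{jk}\left(z_{j}^{*}H_{jk}z_{k}-H_{kj}z_{k}^{*}z_{j}\right)=0,\]

where the last equality follows by exchanging the summation indices \(j\) and \(k\) in the second term.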
If we replace the expression of \(\mathcal{H}\) given by the equation (6) into the equation of motion (3) we reach the equation
\[i\mu\frac{dz_{j}}{dt}=\sum_{k}H_{jk}z_{k}, \tag{8}\]
The variables \(z_{k}\) are understood as the components of a state vector \(\psi\) of the Hilbert space, that is,
\[\psi=\sum_{j}z_{j}\phi_{j}. \tag{9}\]
where the vectors \(\{\phi_{j}\}\) form a complete basis of the Hilbert space. Defining the operator \(\hat{H}\) by
\[\hat{H}\phi_{k}=\sum_{j}H_{jk}\phi_{j}, \tag{10}\]
the equation (8) acquires the form
\[i\mu\frac{d}{dt}\psi=\hat{H}\psi, \tag{11}\]
which is the Schrodinger equation if we set the constant \(\mu\) equal to the Planck constant,
\[\mu=\hbar. \tag{12}\]
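To make the correspondence tangible, here is a minimal numerical sketch (assumptions made for this example only: a two-level system with an arbitrarily chosen real symmetric matrix \(H\), \(\mu=1\), and SciPy's Runge-Kutta integrator; none of these choices come from the paper). It integrates the real Hamilton equations (1) for the underlying oscillators and verifies that the resulting \(z_{k}=(q_{k}+ip_{k})/\sqrt{2\mu}\) coincide with the exact solution of the Schrodinger equation (11).

```python
import numpy as np
from scipy.integrate import solve_ivp

mu = 1.0                                       # plays the role of hbar
H = np.array([[1.0, 0.4],                      # 2x2 real symmetric (hence Hermitian) example
              [0.4, -1.0]])

def hamilton_rhs(t, y):
    """Hamilton's equations (1) for the underlying oscillators, with
    z = (q + i p)/sqrt(2 mu) so that the Hamiltonian is sum_jk H_jk z_j^* z_k."""
    q, p = y[:2], y[2:]
    Hz = H @ ((q + 1j * p) / np.sqrt(2 * mu))
    dq = np.sqrt(2 / mu) * Hz.imag             # dq/dt =  dH/dp
    dp = -np.sqrt(2 / mu) * Hz.real            # dp/dt = -dH/dq
    return np.concatenate([dq, dp])

# initial wave function psi(0) = (1, 0), i.e. q = sqrt(2 mu) Re z, p = sqrt(2 mu) Im z
z0 = np.array([1.0, 0.0], dtype=complex)
y0 = np.concatenate([np.sqrt(2 * mu) * z0.real, np.sqrt(2 * mu) * z0.imag])
sol = solve_ivp(hamilton_rhs, (0.0, 10.0), y0, rtol=1e-10, atol=1e-12, dense_output=True)

# exact Schrodinger evolution z(t) = exp(-i H t / mu) z(0) via eigendecomposition
w, V = np.linalg.eigh(H)
for t in (2.5, 10.0):
    y = sol.sol(t)
    z_classical = (y[:2] + 1j * y[2:]) / np.sqrt(2 * mu)
    z_quantum = V @ (np.exp(-1j * w * t / mu) * (V.conj().T @ z0))
    print(np.allclose(z_classical, z_quantum, atol=1e-6))   # True: same trajectory
```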
In accordance with the postulates of quantum mechanics, the possible outcomes of an observable \(\mathscr{A}\) are the eigenvalues of a matrix \(A\). If the system is in a state \(\psi\), given by (9), the quantum average of this observable is given by
\[\mathcal{A}=\sum_{jk}A_{jk}z_{j}^{*}z_{k}, \tag{13}\]
where \(A_{jk}\) are the elements of a matrix \(A\) whose eigenvalues are the possible outcomes of the observable \(\mathscr{A}\). We interpret \(\mathcal{A}\) as a state function related to the underlying classical system.
In the following we change the equations of motion for the purpose of treating the dynamic variables as stochastic variables, which we now denote by \(x_{k}\). This is accomplished by adding a white noise to the equation (2). We choose a noise that changes the phase \(\theta_{j}\) of \(x_{j}=r_{j}e^{i\theta_{j}}\) but not its absolute value \(r_{j}\). This is accomplished by writing equation (2) in the polar form
\[\frac{d\theta_{j}}{dt}=\frac{1}{2\mu r_{j}}\frac{\partial\mathcal{H}}{\partial r _{j}}+\zeta_{j}, \tag{14}\]
\[\frac{dr_{j}}{dt}=-\frac{1}{2\mu r_{j}}\frac{\partial\mathcal{H}}{\partial \theta_{j}}, \tag{15}\]
where \(\zeta_{j}\) is a stochastic variable with zero mean, and the Hamiltonian function is given by
\[\mathcal{H}=\sum_{jk}H_{jk}x_{j}^{*}x_{k}. \tag{16}\]
From the stochastic equation one can obtain the equation that governs the time evolution of the probability distribution [16; 17]. In the present case the probability distribution, which we denote by \(\mathscr{P}(x,x^{*})\), is defined on the Hilbert space. The probability distribution obeys the Fokker-Planck equation
\[\frac{\partial\mathscr{P}}{\partial t}=\frac{1}{i\mu}\{\mathscr{P},\mathcal{H }\}+\frac{1}{2}\sum_{jk}\gamma_{jk}\frac{\partial^{2}\mathscr{P}}{\partial \theta_{j}\partial\theta_{k}}, \tag{17}\]
where \(\gamma_{jk}=\gamma_{kj}\geq 0\).
As the noise \(\zeta_{j}\) does not change \(x_{j}^{*}x_{j}\), it will not change the norm
\[\mathcal{N}=\sum_{j}x_{j}^{*}x_{j}, \tag{18}\]
and taking into account that \(\{{\cal N},{\cal H}\}=0\), we conclude that \({\cal N}\) is strictly constant along a trajectory in the Hilbert space, despite the fact that the trajectory is stochastic. This result allows us to choose the norm to be equal to 1. We will also choose the constants \(\gamma_{jk}\) to be all equal so that the noise will not change the phase of \(x_{j}x_{k}^{*}\).
The solution of the Fokker-Planck equation (17) is a multivariate Gaussian distribution in the variables \(x_{j}\) and \(x_{k}^{*}\). Therefore, to construct \({\mathscr{P}}(x,x^{*})\), it suffices to determine the averages \(\langle x_{j}\rangle\) and the covariances \(\rho_{jk}=\langle x_{j}x_{k}^{*}\rangle\). From equation (17) we reach the equations
\[\frac{d}{dt}\langle x_{j}\rangle=\frac{1}{i\mu}\sum_{k}H_{jk}\langle x_{k} \rangle-\frac{\gamma}{2}\langle x_{j}\rangle, \tag{19}\]
\[i\mu\frac{d}{dt}\rho_{jk}=\sum_{\ell}(H_{j\ell}\rho_{\ell k}-\rho_{j\ell}H_{ \ell k}), \tag{20}\]
and we remark that there is no term corresponding to the noise in the last equation due to our choice of the same value of \(\gamma_{jk}=\gamma\). Taking into account that the norm (18) equals unity, then
\[\sum_{j}\rho_{jj}=1. \tag{21}\]
Defining \(\rho\) as the matrix with elements \(\rho_{jk}\), the last equation gives \({\rm Tr}\rho=1\), and the equation (20) acquires the form
\[i\mu\frac{d\rho}{dt}=[H,\rho]. \tag{22}\]
which is the quantum Liouville equation.
Two cases should be considered concerning the covariances \(\rho_{jk}\). If, at the initial time, \(\rho_{jk}=z_{j}z_{k}^{*}\), this form will be preserved at all times and \(z_{j}\) is given by the equation
\[i\mu\frac{dz_{k}}{dt}=\sum_{j}H_{kj}z_{j}, \tag{23}\]
which is identified with equation (8) and thus equivalent to the Schrodinger equation. In this case \({\rm Tr}\rho^{2}=({\rm Tr}\rho)^{2}=1\), which corresponds to the quantum mechanics of pure states. It should be pointed out that (23) is a consequence of the quantum Liouville equation (22). In other words, \(\rho_{jk}=z_{j}z_{k}^{*}\) solves the equation (22) as long as \(z_{j}\) satisfies the equation (23). We remark in addition that \(z_{j}\) is not the average \(\langle x_{j}\rangle\). In fact \(\langle x_{j}\rangle\) vanishes for long times whereas \(z_{j}\) does not in general because
\[\sum_{j}z_{j}^{*}z_{j}=1, \tag{24}\]
which follows from (21). If \({\rm Tr}\rho^{2}<1\), then it is not possible to write \(\rho_{jk}\) as a product \(z_{j}z_{k}^{*}\), and this corresponds to the quantum mechanics of mixed states.
The average \(\bar{A}\) of a state function
\[{\cal A}=\sum_{jk}A_{jk}x_{j}^{*}x_{k} \tag{25}\]
is given by
\[\bar{A}=\sum_{jk}A_{jk}\rho_{kj}={\rm Tr}\,A\rho. \tag{26}\]
In the case of pure state, \(\rho=zz^{\dagger}\), where \(z\) is a column matrix with elements \(z_{j}\) and \(z^{\dagger}\) is the row matrix with elements \(z_{j}^{*}\), and \(\bar{A}\) is reduced to the usual quantum average \(\bar{A}=z^{\dagger}Az\).
We have considered above a noise that could change the phase of the variable \(x_{k}\) but not its absolute value. This led us to the quantum Liouville equation and to the Schrodinger equation. We consider now a more generic noise that allows us to reach the Lindblad equation that describes open quantum systems [18; 19].
We add a white noise to equation (8) which now reads
\[\frac{dx_{j}}{dt}=f_{j}+\zeta_{j}, \tag{27}\]
where
\[f_{j}=\frac{1}{i\mu}\sum_{k}H_{jk}x_{k}, \tag{28}\]
and \(\zeta_{j}\) is a stochastic variable with zero mean that we choose to be linear in the variables \(x_{j}\). The noise should be chosen to conserve the norm (18) in the strict sense along a stochastic trajectory. However, we relax this condition and require that it is conserved on average.
A precise meaning of the stochastic equation (27) is provided by writing it in a discrete time version [20], which is
\[\Delta x_{j}=\tau f_{j}+i\sqrt{\tau}\sum_{k}G_{jk}x_{k}-\frac{\tau}{2}\sum_{k} K_{jk}x_{k} \tag{29}\]
where \(\tau\) is the time interval and \(\Delta x_{j}\) is the corresponding increment in the dynamical variable \(x_{j}\). The quantities \(G_{jk}\) and \(K_{jk}\) are random variables to be found in such a way that the norm (18) is preserved.
Let us determine the increment in \(x_{j}x_{k}^{*}\) during an interval of time \(\tau\),
\[\Delta(x_{j}x_{k}^{*})=x_{j}\Delta x_{k}^{*}+x_{k}^{*}\Delta x_{j}+\Delta x_{j} \Delta x_{k}^{*}. \tag{30}\]
Using (29), we find up to terms of order \(\tau\)
\[\Delta(x_{j}x_{k}^{*})=x_{j}\tau f_{k}^{*}+x_{k}^{*}\tau f_{j}\]
\[+i\sqrt{\tau}\sum_{n}(G_{jn}x_{n}x_{k}^{*}-G_{kn}^{*}x_{j}x_{n}^{*})+\tau\sum _{n\ell}G_{kn}^{*}G_{j\ell}x_{\ell}x_{n}^{*}\]
\[-\frac{\tau}{2}\sum_{n}(K_{kn}^{*}x_{j}x_{n}^{*}+K_{jn}x_{n}x_{k}^{*}). \tag{31}\]
If we let \(k=j\) in this equation and sum in \(j\), we find the increment in the norm (18), which is
\[\Delta{\cal N}=i\sqrt{\tau}\sum_{jn}(G_{nj}-G^{*}_{jn})x_{j}x_{n}^{*}\]
\[+\frac{\tau}{2}\sum_{n\ell}(2\sum_{j}G^{*}_{jn}G_{j\ell}-K^{*}_{\ell n}-K_{n \ell})x_{\ell}x_{n}^{*}. \tag{32}\]
We choose \(K_{jk}\) so that the second summation vanishes, that is,
\[K_{jk}=\sum_{\ell}G^{*}_{\ell j}G_{\ell k}. \tag{33}\]
Next we choose \(G_{jk}=g_{jk}\xi_{jk}\), where \(\xi_{jk}\) are real stochastic variables with zero mean and covariances \(\langle\xi_{jk}\xi_{\ell n}\rangle=1\). If we require \(\Delta{\cal N}\) to vanish in the strict sense, that is, in any stochastic trajectory, then \(g_{jk}\) should equal \(g^{*}_{kj}\), resulting in the vanishing of the first summation of (32). However, we require \(\Delta{\cal N}\) to vanish only on average, so that no restriction on \(g_{jk}\) is needed, as the first summation of (32) will vanish on average.
Taking the average of both sides of equation (31), the terms proportional to \(\sqrt{\tau}\) vanish, resulting in the following expression for the time evolution of \(\rho_{jk}=\langle x_{j}x_{k}^{*}\rangle\),
\[\frac{d\rho_{jk}}{dt}=\frac{1}{i\mu}\sum_{\ell}(H_{j\ell}\rho_{\ell k}-\rho_{j \ell}H_{\ell k})+\sum_{n\ell}g_{j\ell}\rho_{\ell n}g^{*}_{kn}\]
\[-\frac{1}{2}\sum_{n\ell}(\rho_{jn}g^{*}_{\ell n}g_{\ell k}+g^{*}_{\ell j}g_{ \ell n}\rho_{nk}). \tag{34}\]
Denoting by \(g\) the matrix with elements \(g_{jk}\), this equation can be written in the form
\[\frac{d\rho}{dt}=\frac{1}{i\mu}[H,\rho]+\frac{1}{2}(2g\rho g^{\dagger}-\rho g ^{\dagger}g-g^{\dagger}g\rho) \tag{35}\]
which is the Lindblad equation for open quantum systems [18; 19].
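A minimal numerical sketch of this construction follows (assumptions made here for illustration: a two-level system, a single decay matrix \(g=\sqrt{\gamma}\,\sigma_{-}\), one Gaussian noise \(\xi\) per step shared by all elements, consistent with \(\langle\xi_{jk}\xi_{\ell n}\rangle=1\), and arbitrarily chosen parameters). It iterates the discrete update (29) with the choice (33) for \(K\) and checks that the ensemble average \(\rho_{jk}=\langle x_{j}x_{k}^{*}\rangle\) reproduces the exponential decay predicted by the Lindblad equation (35), while its trace stays close to one.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, omega, gamma = 1.0, 1.0, 0.5
H = 0.5 * mu * omega * np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
g = np.sqrt(gamma) * np.array([[0.0, 0.0], [1.0, 0.0]], dtype=complex)  # decay |0> -> |1>
K = g.conj().T @ g                                  # choice (33): K = g^dagger g

tau, nsteps, ntraj = 1e-3, 4000, 2000
x = np.tile(np.array([1.0, 0.0], dtype=complex), (ntraj, 1))  # all trajectories start in |0>

rho_00 = [1.0]
for step in range(nsteps):
    f = (-1j / mu) * (x @ H.T)                      # drift (28): f = H x / (i mu)
    xi = rng.standard_normal((ntraj, 1))            # one scalar noise per trajectory and step
    x = x + tau * f + 1j * np.sqrt(tau) * xi * (x @ g.T) - 0.5 * tau * (x @ K.T)
    if (step + 1) % 400 == 0:
        rho_00.append(np.mean(np.abs(x[:, 0]) ** 2))

t = np.arange(0, nsteps + 1, 400) * tau
print(np.allclose(rho_00, np.exp(-gamma * t), atol=0.05))  # excited population decays as e^{-gamma t}
print(np.mean(np.sum(np.abs(x) ** 2, axis=1)))             # trace of <x x^dagger>: close to 1
```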
We summarize our findings as follows. The dynamics of a quantum system was shown to be represented by an underlying classical system, which turns out to be a collection of interacting classical harmonic oscillators. The coordinate and momentum of the classical particles are understood as the real and imaginary parts of the wave function. The probabilistic character of quantum mechanics is introduced explicitly by treating the wave function as a time dependent random variable, by adding to the Hamilton equations of motion a white noise that preserves the norm of the wave function. The Schrodinger equation and the quantum Liouville equation are obtained when the noise changes the phase but not the absolute value of the wave function.
The present representation obviously does not transform the science of quantum mechanics into the science of classical mechanics. The underlying classical system is not observable, just as the wave function is not. However, the present representation allows an interpretation of quantum mechanics other than the standard interpretation [21; 22; 23]. As the trajectory in the Hilbert space is stochastic, this representation fits the consistent history interpretation of quantum mechanics [8] if we bear in mind that each possible trajectory is a possible history.
| 量子システムのダイナミクスを、ハミルトン方程式に従う古典的な系によって表すことができることを示します。これは、2n次元位相空間をn次元Hilbert空間に変換することによって達成されます。この変換は、実正準変数のペアを、互いに複素共役な複素正準変数のペアに変換する特殊な正準変換によって得られます。量子力学の確率的性質は、波動関数を確率変数として扱うことによって導入されます。その基礎となる系は、状態ベクトルのノルムを保存するように選択されています。
|
2308.16733 | PDRs4All IV. An embarrassment of riches: Aromatic infrared bands in the
Orion Bar | (Abridged) Mid-infrared observations of photodissociation regions (PDRs) are
dominated by strong emission features called aromatic infrared bands (AIBs).
The most prominent AIBs are found at 3.3, 6.2, 7.7, 8.6, and 11.2 $\mu$m. The
most sensitive, highest-resolution infrared spectral imaging data ever taken of
the prototypical PDR, the Orion Bar, have been captured by JWST. We provide an
inventory of the AIBs found in the Orion Bar, along with mid-IR template
spectra from five distinct regions in the Bar: the molecular PDR, the atomic
PDR, and the HII region. We use JWST NIRSpec IFU and MIRI MRS observations of
the Orion Bar from the JWST Early Release Science Program, PDRs4All (ID: 1288).
We extract five template spectra to represent the morphology and environment of
the Orion Bar PDR. The superb sensitivity and the spectral and spatial
resolution of these JWST observations reveal many details of the AIB emission
and enable an improved characterization of their detailed profile shapes and
sub-components. While the spectra are dominated by the well-known AIBs at 3.3,
6.2, 7.7, 8.6, 11.2, and 12.7 $\mu$m, a wealth of weaker features and
sub-components are present. We report trends in the widths and relative
strengths of AIBs across the five template spectra. These trends yield valuable
insight into the photochemical evolution of PAHs, such as the evolution
responsible for the shift of 11.2 $\mu$m AIB emission from class B$_{11.2}$ in
the molecular PDR to class A$_{11.2}$ in the PDR surface layers. This
photochemical evolution is driven by the increased importance of FUV processing
in the PDR surface layers, resulting in a "weeding out" of the weakest links of
the PAH family in these layers. For now, these JWST observations are consistent
with a model in which the underlying PAH family is composed of a few species:
the so-called 'grandPAHs'. | Ryan Chown, Ameek Sidhu, Els Peeters, Alexander G. G. M. Tielens, Jan Cami, Olivier Berné, Emilie Habart, Felipe Alarcón, Amélie Canin, Ilane Schroetter, Boris Trahin, Dries Van De Putte, Alain Abergel, Edwin A. Bergin, Jeronimo Bernard-Salas, Christiaan Boersma, Emeric Bron, Sara Cuadrado, Emmanuel Dartois, Daniel Dicken, Meriem El-Yajouri, Asunción Fuente, Javier R. Goicoechea, Karl D. Gordon, Lina Issa, Christine Joblin, Olga Kannavou, Baria Khan, Ozan Lacinbala, David Languignon, Romane Le Gal, Alexandros Maragkoudakis, Raphael Meshaka, Yoko Okada, Takashi Onaka, Sofia Pasquini, Marc W. Pound, Massimo Robberto, Markus Röllig, Bethany Schefter, Thiébaut Schirmer, Sílvia Vicente, Mark G. Wolfire, Marion Zannese, Isabel Aleman, Louis Allamandola, Rebecca Auchettl, Giuseppe Antonio Baratta, Salma Bejaoui, Partha P. Bera, John H. Black, Francois Boulanger, Jordy Bouwman, Bernhard Brandl, Philippe Brechignac, Sandra Brünken, Mridusmita Buragohain, Andrew Burkhardt, Alessandra Candian, Stéphanie Cazaux, Jose Cernicharo, Marin Chabot, Shubhadip Chakraborty, Jason Champion, Sean W. J. Colgan, Ilsa R. Cooke, Audrey Coutens, Nick L. J. Cox, Karine Demyk, Jennifer Donovan Meyer, Sacha Foschino, Pedro García-Lario, Lisseth Gavilan, Maryvonne Gerin, Carl A. Gottlieb, Pierre Guillard, Antoine Gusdorf, Patrick Hartigan, Jinhua He, Eric Herbst, Liv Hornekaer, Cornelia Jäger, Eduardo Janot-Pacheco, Michael Kaufman, Francisca Kemper, Sarah Kendrew, Maria S. Kirsanova, Pamela Klaassen, Sun Kwok, Álvaro Labiano, Thomas S. -Y. Lai, Timothy J. Lee, Bertrand Lefloch, Franck Le Petit, Aigen Li, Hendrik Linz, Cameron J. Mackie, Suzanne C. Madden, Joëlle Mascetti, Brett A. McGuire, Pablo Merino, Elisabetta R. Micelotta, Karl Misselt, Jon A. Morse, Giacomo Mulas, Naslim Neelamkodan, Ryou Ohsawa, Alain Omont, Roberta Paladini, Maria Elisabetta Palumbo, Amit Pathak, Yvonne J. Pendleton, Annemieke Petrignani, Thomas Pino, Elena Puga, Naseem Rangwala, Mathias Rapacioli, Alessandra Ricca, Julia Roman-Duval, Joseph Roser, Evelyne Roueff, Gaël Rouillé, Farid Salama, Dinalva A. Sales, Karin Sandstrom, Peter Sarre, Ella Sciamma-O'Brien, Kris Sellgren, Sachindev S. Shenoy, David Teyssier, Richard D. Thomas, Aditya Togi, Laurent Verstraete, Adolf N. Witt, Alwyn Wootten, Henning Zettergren, Yong Zhang, Ziwei E. Zhang, Junfeng Zhen | 2023-08-31T13:50:34 | http://arxiv.org/abs/2308.16733v2 | # PDRs4All
###### Abstract
Context:Mid-infrared observations of photodissociation regions (PDRs) are dominated by strong emission features called aromatic infrared bands (AIBs). The most prominent AIBs are found at 3.3, 6.2, 7.7, 8.6, and 11.2 \(\mu\)m. The most sensitive, highest-resolution infrared spectral imaging data ever taken of the prototypical PDR, the Orion Bar, have been captured by _JWST_. These high-quality data allow for an unprecedentedly detailed view of AIBs.
Aims:We provide an inventory of the AIBs found in the Orion Bar, along with mid-IR template spectra from five distinct regions in the Bar: the molecular PDR (i.e. the three H\({}_{2}\) dissociation fronts), the atomic PDR, and the H ii region.
Methods: We used _JWST_ NIRSpec IFU and MIRI MRS observations of the Orion Bar from the _JWST_ Early Release Science Program, PDRs4All (ID: 1288). We extracted five template spectra to represent the morphology and environment of the Orion Bar PDR. We investigated and characterised the AIBs in these template spectra. We describe the variations among them here.
Results:The superb sensitivity and the spectral and spatial resolution of these _JWST_ observations reveal many details of the AIB emission and enable an improved characterization of their detailed profile shapes and sub-components. The Orion Bar spectra are dominated by the well-known AIBs at 3.3, 6.2, 7.7, 8.6, 11.2, and 12.7 \(\mu\)m with well-defined profiles. In addition, the spectra display a wealth of weaker features and sub-components. The widths of many AIBs show clear and systematic variations, being narrowest in the atomic PDR template, but showing a clear broadening in the H ii region template while the broadest bands are found in the three dissociation front templates. In addition, the relative strengths of AIB (sub-)components vary among the template spectra as well. All AIB profiles are characteristic of class A sources as designated by Peeters et al. (2002a), except for the 11.2 \(\mu\)m AIB profile deep in the molecular zone, which belongs to class B\({}_{11.2}\). Furthermore, the observations show that the sub-components that contribute to the 5.75, 7.7, and 11.2 \(\mu\)m AIBs become much weaker in the PDR surface layers. We attribute this to the presence of small, more labile carriers in the deeper PDR layers that are photolysed away in the harsh radiation field near the surface. The 3.3/11.2 AIB intensity ratio decreases by about 40% between the dissociation fronts and the H ii region, indicating a shift in the polycyclic aromatic hydrocarbon (PAH) size distribution to larger PAHs in the PDR surface layers, also likely due to the effects of photochemistry. The
observed broadening of the bands in the molecular PDR is consistent with an enhanced importance of smaller PAHs since smaller PAHs attain a higher internal excitation energy at a fixed photon energy.
_Conclusions._ Spectral-imaging observations of the Orion Bar using _JWST_ yield key insights into the photochemical evolution of PAHs, such as the evolution responsible for the shift of 11.2 \(\mu\)m AIB emission from class B\({}_{11.2}\) in the molecular PDR to class A\({}_{11.2}\) in the PDR surface layers. This photochemical evolution is driven by the increased importance of FUV processing in the PDR surface layers, resulting in a "weeding out" of the weakest links of the PAH family in these layers. For now, these _JWST_ observations are consistent with a model in which the underlying PAH family is composed of a few species: the so-called 'grandPAHs'.
Footnote †: Tim Lee sadly passed away on Nov 3, 2022.
astrochemistry - infrared: ISM - ISM: molecules - ISM: individual objects: Orion Bar - ISM: photon-dominated region (PDR) - techniques: spectroscopic
## 1 Introduction
A major component of the infrared (IR) emission near star-forming regions in the Universe consists of a set of broad emission features at 3.3, 6.2, 7.7, 8.6, 11.2, and 12.7 \(\mu\)m (e.g. Tielens 2008, and references therein). These mid-IR emission features, referred to as aromatic infrared bands (AIBs), are generally attributed to vibrational emission from polycyclic aromatic hydrocarbons and related species upon absorption of interstellar far-ultraviolet (FUV; 6-13.6 eV) photons (Leger & Puget 1984; Allamandola et al. 1985). The AIB spectrum is very rich and consists of the main bands listed above and a plethora of weaker emission features. Moreover, many AIBs are in fact blends of strong and weak bands (e.g. Peeters et al. 2004a). The AIB emission is known to vary from source to source and spatially within extended sources in terms of the profile and relative intensities of the features (e.g. Joblin et al. 1996; Hony et al. 2001; Berne et al. 2007; Sandstrom et al. 2010; Boersma et al. 2012; Candian et al. 2012; Stock & Peeters 2017; Peeters et al. 2017). These remarkably widespread emission features have been described in many diverse astronomical sources, including protoplanetary disks (e.g. Meeus et al. 2001; Vicente et al. 2013), H ii regions (e.g. Bregman 1989; Peeters et al. 2002b), reflection nebulae (e.g. Peeters et al. 2002a; Werner et al. 2004), planetary nebulae (e.g. Gillet et al. 1973; Bregman 1989; Beintema et al. 1996), the interstellar medium (ISM) of galaxies ranging from the Milky Way (Boulanger et al. 1996), the Magellanic Clouds (e.g. Vermeij et al. 2002; Sandstrom et al. 2010), starburst galaxies, luminous and ultra-luminous IR galaxies, and high-redshift galaxies (e.g. Genzel et al. 1998; Lutz et al. 1998; Peeters et al. 2004b; Yan et al. 2005; Galliano et al. 2008), as well as in the harsh environments of galactic nuclei (e.g. Smith et al. 2007; Esquej et al. 2014; Jensen et al. 2017).
A useful observational proxy for studying AIBs is the spectroscopic classification scheme devised by Peeters et al. (2002a) which classifies each individual AIBs based on their profile shapes and precise peak positions (classes A, B, and C). While the AIBs observed in a given source generally belong to the same class, this is not always the case. In particular, the classes in the 6 to 9 \(\mu\)m region do not always correspond to those of the 3.3 and 11.2 AIBs (van Diedenhoven et al. 2004). Class A sources are the most common - they exhibit the "classical" AIBs, with a 6.2 \(\mu\)m AIB that peaks between 6.19 and 6.23 \(\mu\)m, a 7.7 \(\mu\)m complex in which the 7.6 \(\mu\)m sub-peak is stronger than the 7.8 \(\mu\)m sub-peak, and the 8.6 \(\mu\)m feature peaks at 8.6 \(\mu\)m. Class B sources can be slightly redshifted compared to class A, while at the same time the 7.7 \(\mu\)m complex peaks between 7.8 and 8 \(\mu\)m. Class C sources show a very broad emission band peaking near 8.2 \(\mu\)m, and typically do not exhibit the 6.2 or 7.7 \(\mu\)m AIBs.
These three classes were found to show a strong correlation with the type of object considered. The most common AIB spectrum, class A, is identified in the spectra of photodissociation regions (PDRs), Hii regions, reflection nebulae, the ISM, and galaxies. The most widely used template for class A sources has been the spectrum of the Orion Bar (Peeters et al. 2002a; van Diedenhoven et al. 2004). Class B sources are isolated Herbig Ae/Be stars and a few evolved stars; in fact, evolved star spectra can belong to either of the classes. Class C sources include post-AGB and Herbig Ae/Be stars, as well as a few T-Tauri disks (Peeters et al. 2002a; Bouwman et al. 2008; Shannon & Boersma 2019). More recent work has developed analogous classification schemes for other AIBs and has included a new class D (e.g. van Diedenhoven et al. 2004; Sloan et al. 2014; Matsuura et al. 2014).
Observed variations in AIBs reflect changes in the molecular properties of the species responsible for the AIB emission (charge, size, and molecular structure; e.g. Joblin et al. 1996; Berne et al. 2007; Pilleri et al. 2012; Boersma et al. 2013; Candian & Sarre 2015; Peeters et al. 2017; Robertson 1986; Dartois et al. 2004; Pino et al. 2008; Godard et al. 2011; Jones et al. 2013), which are set by the local physical conditions (including FUV radiation field strength, \(G_{0}\), gas temperature, and density, \(n\)(H); e.g. Bakes et al. 2001; Galliano et al. 2008; Pilleri et al. 2012, 2015; Stock et al. 2016; Schirmer et al. 2020; Schirmer et al. 2022; Sidhu et al. 2022; Knight et al. 2022b; Murga et al. 2022). The observed variability in AIB emission thus implies that the population responsible for their emission is not static, but undergoes photochemical evolution.
Observations using space-based IR observatories - in particular the Short-Wavelength Spectrometer (SWS; de Graauw et al. 1996) on board the _Infrared Space Observatory_ (_ISO_; Kessler et al. 1996) and the Infrared Spectrograph (IRS; Houck et al. 2004) on board the _Spitzer Space Telescope_(Werner et al. 2004) - have revealed the richness of AIBs (for a review see e.g. Peeters et al. 2004a; Tielens 2008). However, obtaining a full understanding of the photochemical evolution underlying AIBs has been limited by insufficient spatial and spectral resolution (e.g. _Spitzer-IRS_) or by limited sensitivity and spatial resolution (e.g. _ISO_/SWS) of these IR facilities.
_JWST_ is set to unravel the observed complexity of AIBs, as it offers access to the full wavelength range of importance for AIB studies at medium spectral resolution and at unprecedented spatial resolution and sensitivity. _JWST_ is able to resolve, for the first time, where and how the photochemical evolution of polycyclic aromatic related species, the carriers of AIBs, occurs while providing a detailed view of the resulting AIB spectral signatures. The PDRs4All _JWST_ Early Release Science Program observed the prototypical highly irradiated PDR, the Orion Bar (Berne et al. 2022; Habart et al. 2023; Peeters et al. 2023). The Orion Bar PDR has a G\({}_{0}\) which varies with position from about \(1\times 10^{4}\) to \(4\times 10^{4}\) Habings (e.g. Marconi et al. 1998; Peeters et al. 2023) and it has a gas density which varies from of a few \(10^{4}\) cm\({}^{-3}\) in the atomic PDR to \(\sim 10^{6}\) cm\({}^{-3}\) in the molecular region (e.g. Parmar et al. 1991; Tauber et al. 1994; Young Owl
et al. 2000; Bernard-Salas et al. 2012a; Goicoechea et al. 2016; Joblin et al. 2018; Habart et al. 2023). Given the proximity of Orion (414 pc; Menten et al. 2007), the PDRs4All dataset takes full advantage of _JWST_'s spatial resolution to showcase the AIB emission in unprecedented detail.
In this paper, we present five MIRI-MRS template spectra representing key regions of the Orion Bar PDR. Combined with corresponding _JWST_ NIRSpec-IFU template spectra (Peeters et al. 2023), we present an updated inventory and characterization of the AIB emission in this important reference source. We describe the observations, data reduction, and the determination of the underlying continuum in our template spectra in Sect. 2. In Sect. 3, we give a detailed account of the observed AIB bands and sub-components along with their vibrational assignments. We compare our findings with previous works and discuss the AIB profiles and the AIB variability in the Orion Bar in Sect. 4. Finally, we summarize our results and narrate a picture of the origins and evolution of the AIB emission in Sect. 5.
## 2 Data and data processing
### MIRI-MRS observations and data reduction
On 30 January 2023, _JWST_ observed the Orion Bar PDR with the Mid-Infrared Instrument (MIRI) in medium resolution spectroscopy (MRS) mode (Wells et al. 2015; Argyriou et al. 2023) as part of the PDRs4All Early Release Science program (Berne et al. 2022). We obtained a 1\(\times\)9 pointing mosaic in all four MRS channels (channels 1, 2, 3, and 4), and all three sub-bands within each channel (short, medium, and long). We applied a 4-point dither optimised for extended sources and use the FASTR1 readout pattern adapted for bright sources. We integrated for 521.7 s using 47 groups per integration and 4 integrations. The resulting datacube thus spans the full MRS wavelength range (4.90 to 27.90 \(\mu\)m) with a spectral resolution ranging from \(R\sim 3700\) in channel 1 to \(\sim 1700\) in channel 4 and a spatial resolution of 0.207\({}^{\prime\prime}\) at short wavelengths to 0.803\({}^{\prime\prime}\) at long wavelengths, corresponding to 86 and 332 AU, respectively at the distance of the Orion Nebula.
The mosaic was positioned to overlap the PDRs4All _JWST_ Near Infrared Spectrograph (NIRSpec) IFU (Boker et al. 2022) observations of the Orion Bar (Peeters et al. 2023, Fig. 1). Given the different fields of view of the MRS channels (\(\sim 3^{\prime\prime}\) in channel 1 to \(\sim 7^{\prime\prime}\) in channel 4), the spatial footprint with full wavelength coverage is limited by the field of view (FOV) of channel 1. The footprint shown in Fig. 1 represents the area with full MRS wavelength coverage, noting that the MRS data in channels 2-4 exist beyond the area shown, but we choose to use only the sub-set of data with full wavelength coverage. The NIRSpec IFU and MIRI MRS datasets combined provide a perpendicular cross-section of the Orion Bar from the H ii region to the molecular zone at very high spatial and spectral resolution from 0.97 to 27.9 \(\mu\)m.
We reduced the MIRI-MRS data using version 1.9.5.dev10\(\pm\)9\(\pm\)04688a77 of the _JWST_ pipeline1, and _JWST_ Calibration Reference Data System2 (CRDS) context 1041. We ran the _JWST_ pipeline with default parameters except the following. The master background subtraction, outlier detection, fringe- and residual-fringe correction steps were all turned on. Cubes were built using the drizzle algorithm. The pipeline combined all pointings for each sub-band, resulting in 12 cubes (4 channels of 3 sub-bands each) covering the entire field of view.
Footnote 1: [https://jwst-pipeline.readthedocs.io/en/latest/](https://jwst-pipeline.readthedocs.io/en/latest/)
Footnote 2: [https://jwst-crds.stsci.edu/](https://jwst-crds.stsci.edu/)
We stitched the 12 sub-band cubes into a single cube by re-projecting all of the cubes onto a common spatial grid using channel 1 short as a reference. We then scaled the spectra to match in flux where they overlap, using channel 2 long as the reference. This stitching algorithm is part of the "Haute Couture" algorithm described in Canin et al. (in preparation).
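The flux-matching step can be pictured with the toy sketch below (this is not the "Haute Couture" implementation, which operates on full IFU cubes; the one-dimensional segments, wavelength ranges, and use of a median ratio are assumptions made purely for illustration).

```python
import numpy as np

def scale_to_reference(wave_ref, flux_ref, wave_seg, flux_seg):
    """Scale one spectral segment so that it matches a reference segment in
    their overlapping wavelength range (schematic overlap-matching step)."""
    lo = max(wave_ref.min(), wave_seg.min())
    hi = min(wave_ref.max(), wave_seg.max())
    if lo >= hi:
        raise ValueError("segments do not overlap")
    grid = np.linspace(lo, hi, 50)
    ratio = np.interp(grid, wave_ref, flux_ref) / np.interp(grid, wave_seg, flux_seg)
    scale = np.median(ratio)                        # single multiplicative factor
    return scale * flux_seg, scale

# toy example: two noisy segments of the same spectrum, the second offset by 7%
rng = np.random.default_rng(0)
def truth(w):
    return 100.0 + 30.0 * np.exp(-0.5 * ((w - 8.6) / 0.2) ** 2)

wave_a, wave_b = np.linspace(7.5, 8.8, 300), np.linspace(8.6, 10.0, 300)
flux_a = truth(wave_a) + rng.normal(0.0, 0.5, wave_a.size)
flux_b = 0.93 * truth(wave_b) + rng.normal(0.0, 0.5, wave_b.size)

flux_b_scaled, s = scale_to_reference(wave_a, flux_a, wave_b, flux_b)
print(round(s, 3))                                  # close to 1/0.93 ~ 1.075
```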
While the pipeline and reference files produce high-quality data products, a few artefacts still remain in the data. The most important artefacts for our analysis are residual fringes and a flux calibration that is not yet finalised (see Appendix B). Neither of these artefacts has a strong impact on our results, as discussed in Appendix B, although we do limit our analysis to wavelengths \(\leq 15\)\(\mu\)m due to the presence of artefacts beyond this range.
### Extracting template spectra from key regions
We extracted MIRI template spectra using the same extraction apertures as Peeters et al. (2023)3. These apertures are selected to represent the key physical zones of the Orion Bar PDR: the H ii region, the atomic PDR, and the three bright H i / H\({}_{2}\) dissociation fronts (DF I, DF 2, and DF 3) corresponding to three molecular hydrogen (H\({}_{2}\)) filaments that were identified in the NIRSpec FOV (Fig. 1). We emphasize that the remaining areas in the MRS spectral map will be analysed at a later time. We note that the AIB emission detected in the H ii region template originates from the background PDR. Combined with the NIRSpec templates of Peeters et al. (2023), these spectra capture all of the AIB emission in each of the five regions. In this paper, we focus on the inventory and characterization of the AIBs found in these template spectra. We refer to Peeters et al. (2023) and Van De Putte et al. (2023) for the inventories of the gas lines from atoms and small molecules extracted from NIRSpec and MIRI MRS data, respectively. For a detailed description of the Orion Bar PDR morphology as seen by _JWST_ we refer to Habart et al. (2023) and to Peeters et al. (2023).
Footnote 3: The template spectra will be available at [https://pdrs4all.org](https://pdrs4all.org)
### Measuring the underlying continuum
The AIB emission is perched on top of the continuum emission from stochastically heated very small grains (e.g. Smith et al. 2007). Different spectral decomposition methods have deduced additional emission components referred to as: 1) emission from evaporating very small grains (based on the blind signal separation method; Berne et al. 2007; Pilleri et al. 2012; Foschino et al. 2019) and 2) plateau emission due to large PAHs, PAH clusters and nanoparticles (Bregman et al. 1989; Roche et al. 1989; Peeters et al. 2012; Boersma et al. 2014; Sloan et al. 2014; Peeters et al. 2017).
In order to identify and characterize AIBs, we subtracted estimates of the continuum emission in each template spectrum. We computed a linear continuum for NIRSpec and a spline continuum anchored at selected wavelengths for MIRI data. Furthermore, we adopted the same anchor points for all five templates. While our measurements of the full-width at half-maximum (FWHM) of AIBs (see Table 1) do depend on the selected continuum, we note that our main goal - to catalog AIBs and their sub-components qualitatively - does not require highly precise estimates of the continuum.
The FWHM4 of each AIB complex was measured by normalizing the continuum-subtracted template spectrum by the peak intensity of the AIB and then calculating the FWHM of the entire AIB complex, that is, without taking into consideration blends, components, and/or sub-components that make up the AIB complex. The measured FWHM strongly depends on the estimated continuum emission; however, this does not impact qualitative trends in FWHM from template to template. The integrated flux of each AIB was computed from the continuum-subtracted spectra. We refer to Peeters et al. (2023) for details on how the 3.3 and 3.4 \(\mu\)m AIB fluxes were measured.
Footnote 4: We use the terms ‘width’ and ‘FWHM’ interchangeably when referring to AIB profiles.
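The procedure can be sketched as follows (a schematic only: the anchor wavelengths, the synthetic band, and its Lorentzian shape are assumptions made for illustration and do not reproduce the actual template measurements).

```python
import numpy as np
from scipy.interpolate import CubicSpline

def continuum_and_fwhm(wave, flux, anchor_waves):
    """Spline continuum anchored at chosen wavelengths, continuum subtraction,
    and FWHM of the whole band left in the window (schematic version of the
    procedure described in the text)."""
    anchor_flux = np.interp(anchor_waves, wave, flux)
    continuum = CubicSpline(anchor_waves, anchor_flux)(wave)
    band = flux - continuum
    band_norm = band / band.max()                   # normalise by the peak intensity
    above = np.where(band_norm >= 0.5)[0]           # indices above half maximum
    return continuum, band, wave[above[-1]] - wave[above[0]]

# synthetic example: a Lorentzian band at 11.2 um on a sloping continuum
wave = np.linspace(10.8, 11.8, 1000)
true_fwhm = 0.06
flux = 50.0 + 40.0 * (wave - 10.8) + 30.0 / (1.0 + ((wave - 11.2) / (true_fwhm / 2)) ** 2)
anchors = np.array([10.85, 10.95, 11.55, 11.70])    # anchor points outside the band
continuum, band, fwhm = continuum_and_fwhm(wave, flux, anchors)
print(round(fwhm, 3))                               # close to the input FWHM of 0.06
```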
## 3 AIB characteristics and assignments
The superb quality of the Orion Bar observations combined with the increased spectral resolution compared to prior IR space observations reveals an ever-better characterization of the AIBs in terms of sub-components, multiple components making up a "single" band, and the precise shapes of the band profiles (see Fig. 2 for an overview and Figs. 3 and 4 for zoom-ins of selected AIBs in the five template spectra).
We offer a detailed description of the spectral characteristics of the AIB emission as seen by _JWST_ as well as current vibrational assignments in Sects. 3.1 to 3.6. The detailed AIB inventory is listed in Table 1. We note that we consider all spectrally resolved emission features to be candidate AIBs. To assess whether a candidate AIB is real or an artefact, we compare the template spectra in the location of the candidate AIB to the spectrum of the calibration standard star 10 Lac (see Appendix B for details). Due to the very high signal-to-noise ratio (S/N), the spectra reveal an abundance of weak features, either as standalone features, or as shoulders of other bands. Occasionally these shoulders are only visible as a change in the slope along the wing of a stronger AIB and, in such cases, we estimated the central wavelength of the weak AIB visually based on the AIB profile of the main component.
Hereafter, all mentions of nominal AIBs, given in boldface in col. 1 of Table 1, namely 3.3, 3.4, 5.25, 5.75, 6.2, 7.7, 8.6, 11.2, 12.0, 12.7, and 13.5 \(\mu\)m, do not indicate the precise peak positions of these AIBs. The precise peak positions of these nominal AIBs are reported in terms of wavelength in col. 3 of Table 1. We note that we converted the positions in wavelength to wavenumber by rounding to the nearest integer in units of cm\({}^{-1}\) and so the precision of the reported wavenumbers does not reflect the instrumental precision of the peak position of the AIBs.
### The 3.2-3.5 \(\mu\)m (3125-2860 cm\({}^{-1}\)) range
The 3 \(\mu\)m spectral region is dominated by the 3.3 and 3.4 \(\mu\)m AIBs that peak at 3.29 and 3.4 \(\mu\)m, respectively. While some studies have found that the peak of the 3.3 \(\mu\)m AIB shifts toward longer wavelengths (van Diedenhoven et al. 2004), our measurements of the peak position of this band in each template are consistent with the nominal value of 3.29 \(\mu\)m. The band profiles, however, show some slight differences among the templates. The width of the 3.29 \(\mu\)m band varies slightly (see Table 1 and Fig. 5). The templates in order of increasing 3.29 \(\mu\)m band width are: atomic PDR, DF 1, H ii region, DF 2, and DF 3. While the small increase in width on the red side may be attributed to underlying broad plateau emission (see Peeters et al. 2023), the blue wing broadens by \(\sim\)5 cm\({}^{-1}\) on a total band width of \(\sim\) 37.5 cm\({}^{-1}\) (Table 1 and Peeters et al. 2023).
Figure 1: PDRs4All MIRI MRS and NIRSpec footprints (dashed and solid white boundaries, respectively), and spectral extraction apertures (black boxes in the right panel) on top of a composite NIRCam image of the Orion Bar (data from Habart et al. 2023). DF 1, DF 2, and DF 3 are H\({}_{2}\) dissociation fronts as designated in Habart et al. (2023). Red, green, and blue are encoded as F335M (AIB), F470N–F480M (H\({}_{2}\) emission), and F187N (Paschen \(\alpha\)), respectively.
The 3.29 \(\mu\)m band is characteristic for the CH stretching mode in PAHs. The peak position is somewhat dependent on molecular structure, for example the number of adjacent hydrogens on a ring and steric hindrance between opposing hydrogens in so-called 'bay' regions. Molecular symmetry has a more important effect as it controls the number of allowed transitions and the range over which IR activity is present (Maltseva et al., 2015, 2016). For a given PAH, the initial excitation energy has a very minor influence on the peak position (Mackie et al., 2022). Earlier works have also demonstrated this minor influence (Joblin et al., 1995; Pech et al., 2002). Additionally, Tokunaga and Bernstein (2021) found that the peak position and width of the 3.3 \(\mu\)m feature must be fitted simultaneously as both depend on the carrier. There is also a weak dependence of peak position on the charge state, but since the CH stretch is very weak in cations (Allamandola et al., 1999; Peeters et al., 2002), this is of no consequence. These modes are very much influenced by resonance effects with combination bands involving CC modes and CH in-plane bending modes (Mackie et al., 2015, 2016). Overall, in the emission spectra of highly excited species, the differences in peak position mentioned here will be too subtle compared to the impact of molecular symmetry when attempting to identify the carrier(s). The observed narrow width of the 3.3 \(\mu\)m AIB implies then emission by very symmetric PAHs (Pech et al., 2002; Ricca et al., 2012; Mackie et al., 2022).
The very weak shoulder on the blue side, namely, at \(\simeq\) 3.246 \(\mu\)m, has the same strength relative to the main feature in all template spectra, suggesting it is part of the same emission complex. Its peak wavelength may point toward the stretching mode of aromatic CH groups in bay regions or, alternatively, the effect of resonant interaction in a specific species (van Diedenhoven et al., 2004; Candian et al., 2012; Mackie et al., 2015) or aromatic CH in polyaromatic carbon clusters (Dubosq et al., 2023). In a recent analysis of the 3.3 \(\mu\)m AIB in the Red Rectangle, Tokunaga et al. (2022) found differences in the spectra of this source compared to earlier analyses (e.g. Tokunaga et al., 1991; Candian et al., 2012) due to the treatment of Pfund emission lines from the standard star. Candian et al. (2012) fit the 3.3 \(\mu\)m AIB in each spaxel of their IFU cube with two components and analysed spatial variations in the integrated intensities of these components. While issues with the standard star spectrum would affect all spectra in the cube (Tokunaga et al., 2022), spatial variations in integrated intensities should not be affected.
The AIB spectra reveal a plethora of bands longward of the 3.29 \(\mu\)m feature between \(\simeq\) 3.4 and \(\simeq\) 3.6 \(\mu\)m (Table A.1; Peeters et al., 2023; Sloan et al., 1997). As the relative strengths of these sub-components show variations from source to source and within sources (e.g. Joblin et al., 1996; Pilleri et al., 2015; Peeters et al., 2023), they are generally ascribed to different emitting groups on PAHs. Here, we note that the emission profile of the 3.4 \(\mu\)m band varies between the five templates, broadening to longer wavelength (Peeters et al., 2023), indicating the presence of multiple components in the main 3.4 \(\mu\)m band at 3.395, 3.403, and 3.424 \(\mu\)m. The other bands do not show such profile variations. Bands in this wavelength range are due to the CH stretching mode in aliphatic groups and assignments to methyl (CH\({}_{3}\)) groups attached to PAHs and to superhydrogenated PAHs have been proposed (Joblin et al., 1996; Bernstein et al., 1996; Maltseva et al., 2018; Buragohian et al., 2020; Pla et al., 2020). As for the aromatic CH stretching mode, the peak position is sensitive to resonances with combination bands of CC modes and CH in-plane bending modes (Mackie et al., 2018). Typically, methylated PAHs show a prominent band around 3.4 \(\mu\)m, but its peak position falls within a wide range, \(\simeq\) 0.17 \(\mu\)m (Maltseva et al., 2018).
Figure 2: AIB spectrum as seen by _JWST_ using the Orion Bar atomic PDR template spectrum (Sect. 2.2) as an example. Red shaded regions indicate emission from AIBs while blue curves indicate the underlying continuum. Figure is adapted from Peeters et al. (2004).
Figure 3: Zoom-ins on the template spectra at wavelength regions centered on the 3.3 \(\mu\)m AIB (Peeters et al. 2023, top), the 6.2 \(\mu\)m AIB (second from top), the 7.7 \(\mu\)m AIB (second from bottom), and the 11.2 \(\mu\)m AIB (bottom). Each spectrum (on an \(F_{v}\) scale) is normalised by the peak surface brightness of the indicated AIB on the y-axes in each panel. The vertical tick marks indicate the positions of identified (blue) or tentative (black) AIBs and components (see Table 1 and main text). A post-pipeline correction for residual artifacts was performed for Ch2-long (10.02–11.70 \(\mu\)m), Ch3-medium (13.34–15.57 \(\mu\)m), and Ch3-long (15.41–17.98 \(\mu\)m). For further details, see Appendix B. Red dashed vertical ticks indicate the wavelengths where we switch from using data from one MRS sub-band to another. Continued in Fig. 4.
et al. 2018; Buragohain et al. 2020). For hydrogenated PAHs, the main activity is at slightly longer wavelengths, \(\simeq 3.5\)\(\mu\)m within a somewhat narrower range (\(\simeq 0.05\)\(\mu\)m). As the extra hydrogens in superhydrogenated PAHs are relatively weakly bound (1.4-1.8 eV; Bauschlicher & Ricca 2014), astronomical models imply that superhydrogenated PAHs quickly lose all these sp\({}^{3}\) hydrogens in strongly irradiated PDRs (e.g. when \(G_{0}/n({\rm H})>0.03\); Andrews et al. 2016).
For further analysis of the AIB emission in this region, including the many weaker features listed in Table 6, we refer to Peeters et al. (2023).
### The 5-6 \(\mu\)m (1600-2000 cm\({}^{-1}\)) range
In the 5-6 \(\mu\)m region, previous observations have revealed two moderately weak AIB features at approximately 5.25 and 5.75 \(\mu\)m (Table 7; Allamandola et al. 1989a; Boersma et al. 2009b). The 5.25 \(\mu\)m band (Fig. 4) consists of a broad
Figure 4: Continued from Fig. 3. From left to right, top to bottom: Zoom-ins on the template spectra normalised by the peak flux of the 5.25, 5.75 and 5.878, 6.2, 8.6, 11.2 and 12.0, 12.7, 13.5, and 14.2 \(\mu\)m AIBs (indicated in the y-axis label of each panel). The panels show wavelength ranges that are also shown in Fig. 3, except for the panels that are centered on the 13.5 and 14.2 \(\mu\)m AIBs (small panels in the lower right). These figures illustrate the overall similarity and subtle differences in AIB profiles from region to region.
blue shoulder centered at \(\sim 5.18\)\(\mu\)m and extending to about 5.205 \(\mu\)m, followed by a sharp blue rise to a peak at 5.236 \(\mu\)m and a strong red wing extending to about 5.38 \(\mu\)m. A detailed inspection of the profiles reveals a very weak feature at \(\sim 5.30\)\(\mu\)m superposed on the red wing. Comparing the five template spectra, we conclude that the 5.25 \(\mu\)m feature broadens, in particular on the red side, with the narrowest feature seen in the atomic PDR, and then increasing in width in the H ii region, DF 1, DF 2, and DF 3 (Table 1 and Fig. 5). Besides the broadening, the observed profiles are very similar for the five templates, implying that the main feature consists of a single band.
Inspection of the template spectra (Fig. 4) reveals that the 5.75 \(\mu\)m band is a blend of three bands at 5.642, 5.699, and 5.755 \(\mu\)m (e.g. comparing the atomic PDR and DF 3 spectra in Fig. 4). The MIRI MRS spectra clearly exhibit a new, symmetric feature at 5.878 \(\mu\)m. We also report a tentative detection of two very weak features at 5.435 and 5.535 \(\mu\)m.
The spectra of PAHs show weak combination bands in this wavelength range generated by modes of the same type, for example, out-of-plane (OOP) modes (Boersma et al. 2009b,a). Combination bands involving in-plane modes occur at shorter wavelengths (\(3.8-4.4\)\(\mu\)m) and are typically an order of magnitude weaker (Mackie et al. 2015, 2016). Combination bands involving the OOP bending modes typically result in a spectrum with two relatively simple AIBs near 5.25 and 5.75 \(\mu\)m. For small PAHs, the ratio of the intrinsic strength of these bands to the OOP modes increases linearly with PAH size (Lemmens et al. 2019). This ratio increases further for the larger PAHs studied in Lemmens et al. (2021). However, whether the correlation continues linearly is yet to be confirmed.
### The 6.2 \(\mu\)m (1610 cm\({}^{-1}\)) AIB
The interstellar 6.2 \(\mu\)m band is one of the main AIBs. The profile peaks at 6.212 \(\mu\)m (1610 cm\({}^{-1}\)) and has a steep blue rise, a pronounced red wing, and a blue broad shoulder centered at \(\sim 6.07\)\(\mu\)m. Comparing the five template spectra, we conclude that the feature broadens toward the blue side at the same time that the red wing becomes more pronounced (by about 8.2 cm\({}^{-1}\) on a total width of \(\simeq 33.6\) cm\({}^{-1}\); Table 1). Pending confirmation, the peak position possibly varies (its value ranges from 6.2115 to 6.2161 \(\mu\)m). There is a distinct weaker feature at 6.024 \(\mu\)m (1660 cm\({}^{-1}\)) superposed on the blue shoulder. This symmetric feature has a constant width and varies in intensity independently of the main feature (see Fig. 4). This suggests that the 6.024 \(\mu\)m band is an independent component. We note that the observed strength variations of the 6.024 \(\mu\)m band do not affect the conclusion on the broadening of the blue side of the 6.2 \(\mu\)m band. There is a very weak feature perched on the red wing at 6.395 \(\mu\)m (1564 cm\({}^{-1}\)) in the template of the atomic region. It may be obscured by the stronger red wing in the other template spectra. A very subtle change in slope of the red wing may also be present near 6.5 \(\mu\)m in some templates (e.g. the atomic PDR).
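Band positions and widths are quoted interchangeably in wavelength (\(\mu\)m) and wavenumber (cm\({}^{-1}\)) throughout; a minimal conversion check (the only inputs are values already quoted in the text) is:

```python
# Convert between wavelength (micron) and wavenumber (cm^-1): nu[cm^-1] = 1e4 / lambda[micron].
def micron_to_wavenumber(wavelength_um):
    return 1.0e4 / wavelength_um

def wavenumber_width_to_micron(peak_um, width_cm1):
    # approximate width in micron corresponding to a width in cm^-1 around a given peak
    return peak_um ** 2 * width_cm1 / 1.0e4

print(micron_to_wavenumber(6.212))              # ~1609.8 cm^-1, i.e. the quoted 1610 cm^-1
print(wavenumber_width_to_micron(6.212, 33.6))  # a 33.6 cm^-1 width corresponds to ~0.13 micron at 6.2 micron
```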
Given the observed strength of the 6.2 \(\mu\)m band relative to the CH stretching and OOP bending modes that dominate the neutral spectra, this AIB is attributed to PAH cations (Allamandola et al. 1999). In the past, the peak position was somewhat of an enigma. In early comparisons with harmonic calculations, this band arose at too red a wavelength in PAH cations. This problem was compounded by the adoption of a redshift of 15 cm\({}^{-1}\) to account for anharmonic effects during the emission cascade (for a summary, see Bauschlicher et al. 2009). However, model studies have revealed that anharmonicity introduces a red wing on the profile but does not lead to an appreciable redshift of the peak (Mackie et al. 2022). Recent experimental and quantum chemical studies of neutral, symmetric PAHs have shown that the mismatch between the experimental and interstellar 6.2 \(\mu\)m band positions is less severe than thought (Lemmens et al. 2021). Furthermore, quantum chemical studies on PAH cations have employed the cc-pVTZ basis set that better accounts for treatment of polarization in PAHs. With this basis set used in density functional theory calculations, the calculated peak position of the aromatic CC stretching mode in cations is in much better agreement with the observations (Ricca et al. 2021), but this still needs to be confirmed by experimental studies on PAH cations. The discrepancy noted in earlier studies between the peak position of the 6.2 \(\mu\)m AIB and the aromatic CC stretch in PAHs has prompted a number of suggestions. Specifically, incorporation of heteroatoms such as N into the ring backbone or coordination of atoms such as Si, the presence of aliphatic structures, protonated PAHs, and/or (pentagonal) defects will induce blue shifts in the peak position of this mode (Hudgins et al. 2005; Pino et al. 2008; Joalland et al. 2009; Carpenter et al. 2012; Galue 2014; Tsuge et al. 2018; Wenzel et al. 2022; Rap et al. 2022). Further studies are warranted to assess whether these suggestions are still relevant.
The observed 6.024 \(\mu\)m band is at too short a wavelength to be an aromatic CC stretching vibration. Rather, this position is characteristic of the C=O stretch in conjugated carbonyl groups; that is, as quinones or attached to aromatic rings (Allamandola et al. 1989b; Sarre 2019). This band has not been the focus in many quantum chemical studies. We also note that the very weak feature at 6.395 \(\mu\)m is likely another aromatic CC stretching mode.
### The 7.7 \(\mu\)m (1300 cm\({}^{-1}\)) AIB complex
It has been well established that the 7.7 \(\mu\)m AIB is a blend of several features (Bregman 1989; Cohen et al. 1989; Peeters et al. 2002a). The _JWST_ spectra reveal that the main component at 7.626 \(\mu\)m is accompanied by moderately strong bands at 7.8 and 7.85 \(\mu\)m. The 7.8 \(\mu\)m component appears narrower in DF 2 and DF 3, peaking near 7.743 \(\mu\)m, although this may arise due to differences in the red wing of the 7.626 \(\mu\)m component or this may reflect the lack of a different component at 7.775 \(\mu\)m present in the atomic PDR, the H ii region, and DF 1. In any case, given the observed variations between the templates, these bands are independent components. The 7.7 \(\mu\)m AIB complex as a whole broadens significantly from the atomic to the molecular region (by about 18.8 cm\({}^{-1}\), Table 1). In addition to these moderately strong components, there are also weak features at 7.24 and 7.43 \(\mu\)m and between the 7.7 and 8.6 complexes at 8.223 and 8.330 \(\mu\)m. Very weak features are also present at shorter wavelengths (6.638, 6.711, 6.850, 6.943, 7.05, and 7.10 \(\mu\)m).
Bands in this wavelength range are due to modes with a mixed character of CC stretching and CH in-plane bending vibrations. As mentioned in Sect. 3.3, the strength of these modes is very dependent on the charge state of the species and the interstellar 7.7 \(\mu\)m AIB is generally attributed to PAH cations. The spectra of very symmetric PAHs become more complex with increasing size and the main band(s) in the \(7.5-8.0\)\(\mu\)m range shift systematically with size toward longer wavelength from about 7.6 to about 7.8 \(\mu\)m or even larger (Bauschlicher et al. 2008, 2009; Ricca et al. 2012). These quantum chemical calculations point toward compact PAHs in the size range 24 to 100 C atoms as the carriers and probably more toward the smaller size for the main 7.626 \(\mu\)m band and slightly larger for the two moderate components. Detailed spectral decompositions of _ISO-SWS_ and _Spitzer-IRS_ observations agree with these conclusions (Joblin et al. 2008; Shannon & Boersma 2019). The very weak features at 7.24 and 7.43 \(\mu\)m are likely also CC stretching modes. The CH deformation modes of aliphatic groups also occur around 6.8 and 7.2 \(\mu\)m, but these modes are weaker compared to the CH stretching modes of aliphatic groups around 3.4 \(\mu\)m (Wexler 1967; Yang et al. 2016; Dartois et al. 2005) and, given the weakness of the 3.4 \(\mu\)m AIB, we deem that identification unlikely for the weak features at 7.24 and 7.43 \(\mu\)m detected in all templates. The very weak features at 6.850 and 6.943 \(\mu\)m are only present in DF 2 and DF 3. As both these templates also show the strongest 3.4 \(\mu\)m emission, these bands may arise from CH deformation modes of aliphatic groups (Wexler 1967; Arnould et al. 2000).
### The 8.6 \(\mu\)m (1160 cm\({}^{-1}\)) AIB
This AIB peaks at 8.60 \(\mu\)m (1163 cm\({}^{-1}\)). The apparent shift toward shorter wavelengths in the DF 3 spectrum as well as the apparent broadening of the band are likely caused by the change in the underlying "continuum" due to the 7.7 \(\mu\)m AIB and/or plateau emission and/or very small grain emission. The change in slope in the blue wing at 8.46 and 8.54 \(\mu\)m suggests the presence of more than one component in this AIB. However, these components seem to be very weak compared to the main band. There is a similar change in slope at 8.74 \(\mu\)m and potentially at 8.89 \(\mu\)m in all template spectra and this likely has a similar origin. Since these features are very weak, we label them as tentative.
The 8.6 \(\mu\)m AIB is due to CH in-plane bending modes in PAHs, but this mode has a large CC stretching admixture. The intensity of this band increases significantly and it shifts to longer wavelength, producing the very prominent band that appears near 8.5 \(\mu\)m in the spectra of large (\(N_{\rm C}\sim 100\), \(N_{\rm C}\) being the number of C atoms in a PAH molecule) compact PAHs (Bauschlicher et al. 2008). For even larger, compact PAHs, this band starts to dominate the spectra in the \(7-9\)\(\mu\)m range and these species are excluded as carriers of the typical AIB emission (Ricca et al. 2012). In large polyaromatic and aliphatic systems, the geometrical distortions of the C-C backbone and defects, partly related to the hydrogen content, shift the position of this band (Carpentier et al. 2012; Dartois et al. 2020). The weaker features on the blue side of the main band may be due to somewhat smaller and/or less symmetric PAHs while the longer wavelength feature may be due to a minor amount of somewhat larger symmetric, compact PAHs.
### The 10-20 \(\mu\)m (500-1000 cm\({}^{-1}\)) range
This wavelength range is dominated by the strong AIB at 11.2 \(\mu\)m, a moderately strong AIB at 12.7 \(\mu\)m and a plethora of weaker AIBs at 10.95, 11.005, 12.0, 13.5, 13.95, 14.21, and 16.43 \(\mu\)m. The 11.2 \(\mu\)m AIB clearly displays two components
at 11.207 and 11.25 \(\mu\)m, along with a tentative component at 11.275 \(\mu\)m. The AIB peaks at the first component (11.207 \(\mu\)m) in the atomic PDR, the H ii region, and DF 1, while it peaks at the second component (11.25 \(\mu\)m) in DFs 2 and 3. These two components may shift to longer wavelengths in DFs 2 and 3, however, such shifts are still yet to be confirmed. The relative strengths of these two components vary across the five templates, indicating they are independent components. The combined 11.2 \(\mu\)m profile is asymmetric with a steep blue rise and a red wing. The AIB broadens significantly (by \(\sim 5.9\) cm\({}^{-1}\) on a total width of \(\sim 12.2\) cm\({}^{-1}\); Table 1) through the atomic PDR, DF 1, the H ii region, DF 2, and DF 3 in increasing order. This broadening is driven by changes in the red wing though similar but very small changes in the steepness of the blue side are present. An additional weaker component may be present on the red wing at 11.275 \(\mu\)m. In addition, similar to the 5.25 \(\mu\)m AIB, the 11.2 \(\mu\)m AIB displays a broad blue, slow-rising, shoulder from \(\sim 10.4\)\(\mu\)m up to the start of the steep blue wing. A well-known weaker AIB is present at 11.005 \(\mu\)m and is superposed on this blue shoulder.
The 12.0 \(\mu\)m band peaks at 11.955 \(\mu\)m and may have a second component at 12.125 \(\mu\)m. The template spectra furthermore display elevated emission between the red wing of the 11.2 \(\mu\)m band and the 12.0 \(\mu\)m band (see e.g. DF 3), suggestive of more complex AIB emission than expected based on the presence of these two bands. However, due to an artefact at 12.2 \(\mu\)m (see Appendix B), confirmation of the second component, the 12.0 \(\mu\)m profile, and this elevated emission between the 11.2 and 12.0 \(\mu\)m bands requires further improvements to the calibration.
The 12.7 \(\mu\)m band is very complex displaying a terraced blue wing and a steep red decline. It peaks at 12.779 \(\mu\)m except in the atomic PDR where it peaks at a second component at 12.729 \(\mu\)m. The strengths of these components vary independently from each other. Three additional terraces are located near 12.38, 12.52, and 12.625 \(\mu\)m and a red shoulder near 12.98 \(\mu\)m suggests the presence of an additional component. Given the complexity of the 12.7 \(\mu\)m band, spatial-spectral _JWST_ maps are required to understand its spectral decomposition into its numerous components. The entire 12.7 \(\mu\)m complex significantly broadens largely on the blue side but also on the red side. We report a broadening (by \(\sim 14.2\) cm\({}^{-1}\) on a total width of \(\sim 21.9\) cm\({}^{-1}\); Table 1 and Fig. 5) through the atomic PDR, DF 1, the H ii region, DF 2, and DF 3 in increasing order.
The 13.5 \(\mu\)m band peaks at 13.55 \(\mu\)m and may be accompanied by two additional components at 13.50 and 13.62 \(\mu\)m. The 13.5 \(\mu\)m band seems to broaden as well. We note that several artefacts exist just longwards of this band (Appendix B), hampering its analysis. Hence, future improvements to the calibration and additional observations on a wider range of sources will have to confirm this broadening. These artefacts also limit the detection and analysis of bands in the 14-15 \(\mu\)m range. We detected a band at 14.21 \(\mu\)m and potentially at 13.95 \(\mu\)m, although the latter is just to the red of the artefact at 13.92 \(\mu\)m.
Bands in the 11-14 \(\mu\)m range are attributed to CH OOP bending modes. The peak position and pattern of these bands is very characteristic for the molecular edge structure of the PAH; that is, the number of adjacent H's\({}^{5}\). The bands making up the 11.2 \(\mu\)m AIB can be ascribed to neutral species with solo H's (Hony et al. 2001; Bauschlicher et al. 2008). The cationic solo H OOP band falls at slightly shorter wavelength than the corresponding solo H OOP band of neutral PAHs and the 11.0 \(\mu\)m AIB has been attributed to cations (Hudgins & Allamandola 1999; Hony et al. 2001; Rosenberg et al. 2011). The 12.7 \(\mu\)m AIB complex is due to either duo H's in neutral PAHs or trio H's in cations. For species with both solo and duo H's, coupling of the duo with the solo CH OOP modes splits the former into two bands. The sub-components in the 12.7 \(\mu\)m AIB may reflect this coupling and/or may be caused by contributions of more than one species with duo's.
Footnote 5: Some earlier comparisons of the OOP modes pattern with laboratory and quantum chemical studies included a 15 cm\({}^{-1}\) shift to account for anharmonicity. Recent model studies have shown that such a shift is not warranted (Mackie et al. 2022).
The weak 12.0 \(\mu\)m AIB can be attributed to OOP modes of duo H's, while the 13.5 \(\mu\)m AIBs likely have an origin in OOP modes of quartet H's in pendant aromatic rings (Hony et al. 2001; Bauschlicher et al. 2008). The weak bands near 14.2 \(\mu\)m could be due to OOP modes of quintet H's. Alternatively, for larger PAHs, CCC skeletal modes are present in this wavelength range (Ricca et al. 2012).
We also detected a band at 16.43 \(\mu\)m. Other weaker bands are present in this region but due to calibration issues (Appendix B), we refrained from characterizing them.
## 4 Discussion
### Comparison to previous observations
Overall, in terms of spectral inventory, the observed AIB emission in the 3 \(\mu\)m range is consistent with prior high-quality ground-based observations of the Orion Bar (e.g. Sloan et al. 1997). Likewise, the main characteristics of the AIB emission are also detected in prior observations of the Orion Bar carried out with _ISO-SWS_ (Verstraete et al. 2001; Peeters et al. 2002a; van Diedenhoven et al. 2004) and _Spitzer-IRS_ in short-low mode (Knight et al. 2022a). Furthermore, in retrospect, many (weaker) bands and sub-components of the AIB emission seen by _JWST_ may also be recognised in the _ISO-SWS_ observation of the Orion Bar, but they were too weak and too close to the S/N limit to be reported in previous works. However, as these _JWST_ data have an unparalleled combination of extremely high S/N, spectral resolution and, most importantly, superb spatial resolution, these spectral imaging data reveal already known bands and sub-components in unprecedented detail allowing for a much improved characterization of the AIB emission. In addition, these spectral imaging data reveal previously unreported components (blends) and sub-components of the AIB emission (indicated in Table A.1 and discussed in Sect. 3).
The AIBs at 5.75, 7.7, 8.6, 11.2, and 12.7 \(\mu\)m have complex sub-components. Boersma et al. (2009b) noted that the 5.75 \(\mu\)m AIB has an unusual profile, resembling a blended double-peaked feature. The _JWST_ template spectra indicate that the band is composed of three components with variable strengths. New components are also seen in the 12.7 \(\mu\)m band. Shannon et al. (2016) reported that the 12.7 \(\mu\)m band shifts to longer wavelengths at larger distances from the illuminating star in reflection nebulae. This is consistent with the behavior of this band in the Orion Bar reported here where it reflects the relative intensities of the two components at 12.729 and 12.779 \(\mu\)m. These authors also reported a change in the blue wing. The _JWST_ data now characterize the components (i.e. terraces) in the blue wing and their relative intensities.
While the sub-components of the 8.6 \(\mu\)m AIB have not, to our knowledge, been reported in the literature, several studies detail sub-components in the 7.7, 11.2, and 12.7 \(\mu\)m AIBs. The 7.7 \(\mu\)m AIB complex is composed of two main sub-components at \(\sim\)7.626 and \(\sim\)7.8 \(\mu\)m (Cohen et al. 1989; Bregman 1989;
Peeters et al. 2002a). The 7.7 \(\mu\)m AIB complex is distinguished into four classes (A, B, C, and D) primarily based on its peak position (Peeters et al. 2002a; Sloan et al. 2014; Matsuura et al. 2014). Spectral-spatial imaging has revealed that the (class A) 7.7 \(\mu\)m profile varies within extended ISM-type sources and depends on the local physical conditions: the 7.8 \(\mu\)m component gains in prominence relative to the 7.626 \(\mu\)m component and is accompanied by increased emission "between" the 7.7 \(\mu\)m and 8.6 \(\mu\)m AIBs in regions with less harsh radiation fields (Bregman & Temi 2005; Berne et al. 2007; Pilleri et al. 2012; Boersma et al. 2014; Peeters et al. 2017; Stock & Peeters 2017; Foschino et al. 2019; Knight et al. 2022b). Our findings using the _JWST_ Orion Bar templates (Fig. 3) are consistent with these past results. Pilleri et al. (2012) attributed this to an increased contribution of evaporating very small grains (eVSGs). In addition, the _JWST_ data reveal that the 7.8 \(\mu\)m component is composed of three components whose relative contribution varies.
Likewise, the 11.2 \(\mu\)m AIB has been classified into class A\({}_{11.2}\), B\({}_{11.2}\), and A(B)\({}_{11.2}\). Class A\({}_{11.2}\) peaks in the 11.20-11.24 \(\mu\)m range and displays a less pronounced red wing relative to the peak intensity (corresponding to a FWHM of \(\sim\)0.17 \(\mu\)m), class B\({}_{11.2}\) peaks at \(\sim\)11.25 \(\mu\)m and shows a more pronounced red wing (FWHM of \(\sim\)0.20 \(\mu\)m), and class A(B)\({}_{11.2}\) is a mix with a peak position as that of class A\({}_{11.2}\) and prominence of its red wing as that of class B\({}_{11.2}\) (resulting in a FWHM of \(\sim\)0.21 \(\mu\)m; van Diedenhoven et al. 2004). As for the 7.7 AIB complex, ISM-type sources display a class A\({}_{11.2}\) profile. Recent spectral-imaging data however indicated that the 11.2 \(\mu\)m profile shifts to slightly longer wavelengths accompanied with a stronger red wing relative to the peak intensity in two (out of 17) positions of the Orion Veil (Boersma et al. 2012) and in two reflection nebulae (Boersma et al. 2013; Shannon et al. 2016). These authors classified these profile variations as a shift from class A\({}_{11.2}\) to class A(B)\({}_{11.2}\), which, in the case of the two reflection nebulae, occurred when moving away from the illuminating star. Boersma et al. (2014) linked the change in the 11.2 \(\mu\)m profile to a change in the 7.7 \(\mu\)m AIB complex (probed by the 11.2/11.3 and 7.6/7.8 intensity ratios, respectively). A change in peak position along with a broadening of the profile is consistent with the _JWST_ templates of the Orion Bar. As discussed in Sect. 3, the change in the peak position of the 11.2 \(\mu\)m AIB reflects the relative importance of two components at 11.207 and 11.25 \(\mu\)m that are now clearly discerned in the _JWST_ data. Furthermore, thanks to the increased spectral and spatial resolution, we conclude that DF 3 belongs to class B\({}_{11.2}\) (Fig. 6). Hence, the Orion Bar exhibits class A\({}_{11.2}\) profiles near the surface of the PDR which evolved from class B\({}_{11.2}\) profiles deeper in the molecular zone.
The _ISO-SWS_ observations of the Orion Bar (taken in a 14\({}^{\prime\prime}\times 20^{\prime\prime}\) aperture) resemble the atomic PDR template, even when centered on DF 3\({}^{6}\). This resemblance is due to the fact that the AIB emission is significantly stronger in the atomic PDR compared to the molecular PDR (Habart et al. 2023; Peeters et al. 2023) and it dominates the emission within the large ISO/SWS aperture. Hence, the _JWST_ spectrum of the atomic PDR in the Orion Bar (Fig. 2) serves as the updated, high-resolution, more detailed template spectrum for class A AIB emission. The DF 2 and DF 3 templates, which probe regions deep in the molecular PDR, no longer exhibit class A\({}_{11.2}\) profiles while the 3.3, 6.2, 7.7, and 8.6 \(\mu\)m AIBs still clearly belong to class A. A similar situation, where individual targets are found to belong to two classes, has been reported for two targets: the planetary nebula Hb 5 and the Circinus galaxy (van Diedenhoven et al. 2004). These authors furthermore found that the other two galaxies in their sample display class A profiles for the 3.3, 6.2, 7.7, and 8.6 \(\mu\)m AIBs, while displaying a class A(B)\({}_{11.2}\) AIB profile. This suggests that, out of the main AIBs, the 11.2 \(\mu\)m AIB is the cleanest indicator of the shift from class B to class A.
Footnote 6: _ISO-SWS_ observation with TDT of 69501806 (uniquely identifies the ISO observation).
### AIB profiles
Broadly speaking, the prominent AIBs can be separated into three groups: 1) bands with a steep blue rise and a pronounced red wing. The 5.25, 6.2, and 11.2 \(\mu\)m AIBs are clear examples. The profiles also often show a shoulder on the blue side, which is considerably weaker than the red wing; 2) bands that are clear blends of multiple components. This group includes the 3.4, 5.75, 7.7, and 12.7 \(\mu\)m AIBs. They typically comprise three or more sub-components; and 3) bands that seem to be symmetric, often resembling Gaussian profiles. This group includes the 3.3, 5.878, and 8.6 \(\mu\)m AIBs, as well as the 6.024 and 11.005 \(\mu\)m AIBs. These divisions are not entirely strict. The
Figure 6: Comparison of the 11.2 \(\mu\)m profile in the five template spectra with a class A 11.2 \(\mu\)m profile represented by the _ISO-SWS_ spectrum of the Orion Bar H2S1 (van Diedenhoven et al. 2004, top panel) and a class B 11.2 \(\mu\)m profile represented by the _ISO-SWS_ spectrum of HD 44179 (van Diedenhoven et al. 2004, bottom panel).
group 1 AIBs typically have very weak features perched on the red and/or blue side of the profile. Likewise, it is conceivable that each of the blended components in the group 2 AIBs might have an intrinsic profile with relatively sharp blue rise and a more gradual red wing that is obfuscated by blending. For example, while the 3.403 \(\mu\)m AIB is often blended with a feature at 3.424 \(\mu\)m, this component is very weak and the 3.403 \(\mu\)m profile resembles that of group 1. We note that while the 11.2 \(\mu\)m AIB is also a blend of three components, the character of the profile is dominated by the presence of a steep blue rise and a pronounced red wing rather than the presence of the sub-components. Therefore, we list the 11.2 \(\mu\)m AIB in group 1.
Profiles with a steep blue rise and a pronounced red wing are characteristic for the effects of anharmonicity (Pech et al., 2002; Mackie et al., 2022). Detailed models have been developed that follow the emission cascade for a highly excited, single PAH and that include the effects of anharmonic interactions based on quantum chemical calculations (Mackie et al., 2022). These models do not contain free parameters besides the size of the emitting species (i.e. the average excitation level after absorption of a FUV photon) and the resulting profiles agree qualitatively well with the observations of group 1 AIBs (Mackie et al., 2022). The results show that the wavelength extent of the red wing depends on the details of the anharmonic coupling coefficients with other modes. The strength of the wing relative to the peak emission is sensitive to the excitation level of the emitting species after absorption of the UV photon (i.e. the initial average energy per mode) and the cascade process (e.g. how fast the energy is "leaking" away through radiative cooling). The steepness of the blue rise is controlled by rotational broadening. Analysis of the profile of the 11.2 \(\mu\)m AIB observed by ISO/SWS suggests emission by a modestly sized PAH (\(N_{\rm C}\sim 30\); Mackie et al., 2022) but that conclusion has to be reassessed given the presence of more than one component in this AIB in the _JWST_ spectra of the Orion Bar.
Not all bands will show equally prominent anharmonic profiles. In particular, the far-infrared modes in small PAHs are very harmonic in nature (Lemmens et al., 2020) and their profiles would not develop red wings. Likewise, the aromatic CH stretching modes are very susceptible to resonances with combination bands (Maltseva et al., 2015, 2016; Mackie et al., 2015, 2016) and this interaction dominates their profiles (Mackie et al., 2022).
### AIB variability in Orion
Spatial-spectral maps carry much promise to untangle the complexity of the AIBs and possibly link observed variations to the presence of specific carriers. The first forays into this field were based on _Spitzer_ spectral maps. Analysis of the spatial behaviour of individual AIBs and AIB components revealed their interdependence as well as new components (e.g. Boersma et al., 2009; Peeters et al., 2012; Boersma et al., 2013, 2014; Shannon et al., 2016; Peeters et al., 2017). Analyses based on blind signal separation methods have uncovered several distinct components and spectral details (Berne et al., 2007; Joblin et al., 2008; Pilleri et al., 2012; Foschino et al., 2019), but the increased spectral and spatial resolution, as well as the higher sensitivity of _JWST_, can be expected to take this to a new level. Indeed, some of the spectral details uncovered by previous spatial-spectral studies are now directly detected in the presented _JWST_ data of the Orion Bar. Applications of blind signal separation techniques, as well as spectral fitting using PAHFIT (Smith et al., 2007) and the Python PAH Database (Matthew J. Shannon & Christiaan Boersma, 2018) on the full spectral map of the Orion Bar may be a promising ground for additional detections and potential identifications. Here, we address spatial-spectral variations on inspection of the five template spectra. More detailed analyses are deferred to future studies.
While the spectra are rich in components (Table 1), there is little diversity between the template spectra. All templates show evidence for all sub-components, except possibly for a few very weak bands whose presence may be easily lost in the profiles of nearby strong bands. The most obvious variations are the increased prominence of the sub-components in the 7.7 \(\mu\)m AIB at 7.743 and 7.85 \(\mu\)m and in the 11.2 \(\mu\)m AIB at 11.25 \(\mu\)m in the DF 2 and DF 3 spectra, and the variation in the relative strength of the sub-components of the weak 5.75 \(\mu\)m band. Similarly, the width of many AIBs (3.3, 5.25, 6.2, 7.7, 11.2, and 12.7) broadens significantly (Table 1). This broadening is also systematic, with the FWHM being smallest in the atomic PDR, increasing subsequently in DF 1, followed by the H ii region, then DF 2, and finally DF 3 (Table 1 and Fig. 5). The only exception to this systematic trend is the H ii region having a FWHM smaller than DF 1 (but larger than the atomic PDR) for the 5.25 and 7.7 \(\mu\)m AIBs. As pointed out in Sect. 2.2, we note that the AIB emission in the H ii region originates from the background PDR.
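The FWHM values referred to above (Table 1) are measured on continuum-subtracted, peak-normalised profiles; the sketch below illustrates one simple way such a width could be estimated, using a synthetic asymmetric profile as a stand-in (the profile and its parameters are illustrative, not Orion Bar data):

```python
import numpy as np

# Illustrative sketch (not the actual measurement used for Table 1): estimate the FWHM of a
# continuum-subtracted, peak-normalised AIB profile by locating the half-maximum crossings
# on either side of the peak via linear interpolation.
def fwhm(wavelength, flux):
    flux = flux / flux.max()                 # normalise to the peak
    i_peak = int(np.argmax(flux))
    # blue-side crossing of the half-maximum (flux rises monotonically up to the peak here)
    blue = np.interp(0.5, flux[:i_peak + 1], wavelength[:i_peak + 1])
    # red-side crossing (reverse so that flux is increasing for np.interp)
    red = np.interp(0.5, flux[i_peak:][::-1], wavelength[i_peak:][::-1])
    return red - blue

# Synthetic profile with a steep blue rise and an extended red wing, as a stand-in
wl = np.linspace(11.0, 11.6, 600)
profile = np.where(wl < 11.207,
                   np.exp(-0.5 * ((wl - 11.207) / 0.02) ** 2),
                   np.exp(-(wl - 11.207) / 0.09))
print(f"FWHM ~ {fwhm(wl, profile):.3f} micron")
```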
There is also some region-to-region variation in the relative strengths of the main AIBs (Fig. 5). The largest variations are seen in the 3.4 \(\mu\)m AIB to 3.3 \(\mu\)m AIB ratio (\(\sim 100\%\) greater in DF 3 than in the atomic PDR) and in the 3.3 \(\mu\)m AIB to 11.2 \(\mu\)m AIB ratio (\(\sim 40\%\) greater in DF 3 than in the atomic PDR). Both of these ratios are largest in DF 2 and DF 3, and decrease in the atomic region. Variations in the strength of the CC modes and in-plane CH bending modes (6.2, 7.7, and 8.6 \(\mu\)m) relative to the 11.2 \(\mu\)m OOP modes are more modest (at the 10-20% level). Larger variations occur in the relative strength of the moderate bands, namely, the 5.25, 5.878, 6.024, and 11.955 \(\mu\)m bands - which are much more pronounced in DF 2 and DF 3 than in the atomic zone. As noted in Sect. 4.1, the _ISO-SWS_ data of the Orion Bar resembles the atomic PDR template. Hence, the range in spectral variability within class A AIBs is well represented by the five templates for all AIBs except the 11.2 \(\mu\)m AIB. For the 11.2 \(\mu\)m AIB, the presented data not only showcase the class A AIB variability but also the shifts from class A to class A(B) and then to class B. It is expected that future _JWST_ observations probing a large range of physical conditions and environments further extend the spectral variability in the AIB emission.
The anharmonic profile of bands due to smaller PAHs will tend to have a less steep blue rise due to the increase in the rotational broadening as well as a slight increase in the width and a more pronounced red wing due to the higher internal excitation for the same photon energy (Mackie et al., 2022; Tielens, 2021). Hence, the presence of somewhat smaller PAHs may be at the origin of (some of) the overall profile variations in the 3.4, 5.25, 6.2, and 11.2 \(\mu\)m AIBs in the _JWST_ template spectra. We note that the spectrum of the DF 1 template resembles the atomic region template much more than the DF 2 and DF 3 dissociation front templates (Figs. 3 and 4, as well as Table 1). This is likely due to the terraced-field-like structure of the molecular PDR resulting in a strongly enhanced line-of-sight visual extinction through the foreground atomic PDR toward DF 1 compared to DF 2 and DF 3 (Habart et al., 2023; Peeters et al., 2023). Hence, the atomic region in the foreground contributes substantially to the emission toward DF 1. The overall similarity of the spectra suggests that the PAH family is very robust but has a small amount of additional species in the DF 2 and DF 3 zones that is not present in the surface layers of the PDR. In the Orion Bar, the PDR material is advected from the molecular zone to
the ionization front at about 1 km s\({}^{-1}\) (Pabst et al., 2019) over \(\approx 20,000\) yr. In that period, a PAH will have absorbed some \(10^{8}\) UV photons and, yet, apparently the effect on the composition of the interstellar PAH family is only minor as it only results in a change in the prominence of the sub-components in the 7.7 and 11.2 \(\mu\)m AIBs. This likely reflects that for moderate-to-large PAHs (\(N_{\rm C}\gtrsim 30\)), photofragmentation is a minor channel compared to IR emission and, moreover, when fragmentation occurs, the H-loss channel dominates over C loss (Allain et al., 1996a,b; Zhen et al., 2014b,a; Wenzel et al., 2020) and is then rapidly followed by rehydrogenation with abundant atomic H (Montillaud et al., 2013; Andrews et al., 2016). We note that the UV field increases by about two orders of magnitude between the H\({}_{2}\) dissociation front and the PDR surface but the atomic H abundance increases by a similar factor as H\({}_{2}\) is increasingly photolysed near the surface. Hence, the ratio of the local FUV field to the atomic hydrogen density, \(G_{0}/n\)(H), which controls the photoprocessing (Andrews et al., 2016), does not vary much among the five template regions. We thus suggest that the additional species in the deeper layers of the Orion Bar causing the increased prominence of sub-components in the 5.75 (at 5.755 \(\mu\)m), 7.7, and 11.2 \(\mu\)m AIBs, are aromatic species and/or functional groups and/or pendant rings that are more susceptible to photolysis.
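As a quick consistency check of the numbers quoted above (an advection speed of \(\sim\)1 km s\({}^{-1}\), \(\approx\)20,000 yr, and \(\sim 10^{8}\) absorbed UV photons), the implied travel distance and mean photon-absorption rate follow directly; note that the rate below is inferred from the two quoted numbers and is not itself a value given in the text:

```python
# Back-of-the-envelope check of the quoted numbers: PDR material advected at ~1 km/s
# for ~20,000 yr while a PAH absorbs ~1e8 FUV photons.
YEAR_S = 3.156e7                 # seconds per year
v_adv = 1.0e5                    # advection speed, cm/s (1 km/s)
t_adv = 2.0e4 * YEAR_S           # advection time, s

distance_pc = v_adv * t_adv / 3.086e18   # cm -> pc
n_photons = 1.0e8                        # quoted number of absorbed FUV photons
rate = n_photons / t_adv                 # implied mean absorption rate (an inference, not a quoted value)

print(f"distance travelled ~ {distance_pc:.2f} pc")
print(f"implied absorption rate ~ {rate:.1e} photons/s (~1 every {1/rate/3600:.1f} h)")
```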
Photolytic processing of the PAH family with position in the Orion Bar may also leave its imprint on the PAH size distribution and this will affect the relative strength of the AIBs. The 3.3/11.2 AIB ratio has long been used as an indicator of the size of the emitting species (Allamandola et al., 1989b; Pech et al., 2002; Ricca et al., 2012; Mori et al., 2012; Croiset et al., 2016; Maragkoudakis et al., 2020; Knight et al., 2021, 2022a) as this ratio is controlled by the "excitation temperature" of the emitting species and, hence, for a fixed FUV photon energy, by the size. For the five _JWST_ template regions, the 3.3/11.2 AIB ratio is observed to vary by about 40%, being largest in DF 3 and decreasing toward DF 2, followed by DF 1, the atomic PDR and the H ii region. The variation in this ratio is slightly less than what is observed in the reflection nebulae NGC 7023 and in the larger Orion region (e.g. the Orion Bar and the Veil region beyond the Orion Bar) and corresponds to an increase in the typical size of the emitting species by about 40% toward the surface (Croiset et al., 2016; Knight et al., 2021, 2022a; Murga et al., 2022). Hence, we link the decreased prominence of the sub-component in the 5.75 \(\mu\)m (at 5.755 \(\mu\)m), 7.7 \(\mu\)m (at 7.743 and 7.775 \(\mu\)m), and 11.2 \(\mu\)m AIBs (at 11.275 \(\mu\)m) as well as the variation in the 3.3/11.2 AIB ratio to the effects of photolysis as material is advected from the deeper layers of the PDR to the surface.
Variations in the 6.2/11.2 ratio (Fig. 5) are generally attributed to variations in the ionised fraction of PAHs (Peeters et al., 2002a; Galliano et al., 2008; Stock et al., 2016; Boersma et al., 2018). This ratio is only 12% stronger in DF 3 than at the surface of the PDR. The limited variations in the 6.2/11.2 AIB ratio are at odds with those measured by _Spitzer_ and _ISO_. Specifically, this ratio is observed to increase by about 50% across the Orion Bar when approaching the Trapezium cluster (Knight et al., 2022a). Moreover, Galliano et al. (2008) measured an increase in this ratio by almost a factor of 2 over (a much wider swath of) the Orion Bar. The PAH ionization balance is controlled by the ionization parameter, \(G_{0}T^{1/2}/n_{e}\), with \(G_{0}\), \(T\), and \(n_{e}\) the intensity of the FUV field, the gas temperature, and the electron density. The high spatial resolution of _JWST_ allows for a clear separation of the emission at the H\({}_{2}\) dissociation fronts and the PDR surface. The limited variation in the 6.2/11.2 AIB ratio is somewhat surprising because the PAH ionizing photon flux differs by about a factor of 40 between the dissociation fronts (located at \(A_{V}=2\) mag) and the PDR surface; the gas temperature will also increase (slightly) toward the surface, while the electron abundance remains constant over this region (Tielens & Hollenbach, 1985). It is also possible that PAH cations contribute an appreciable amount to the 11.2 \(\mu\)m band (Shannon et al., 2016; Boersma et al., 2018). Further modelling will be important to fully understand the complexity of the Orion Bar (A. Sidhu et al., in prep.).
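The scaling behind this puzzle can be sketched as follows; the factor-of-40 change in ionizing flux is the value quoted above, while the temperature ratio is an assumed, illustrative placeholder and the electron density is taken as constant:

```python
import math

# Sketch of how the PAH ionization parameter gamma = G0 * T**0.5 / n_e scales between
# the H2 dissociation fronts and the PDR surface. Only the factor-of-40 flux contrast is
# taken from the text; the temperature ratio is an assumed, illustrative value.
def ionization_parameter(G0, T, n_e):
    return G0 * math.sqrt(T) / n_e

g0_ratio = 40.0          # quoted contrast in PAH-ionizing photon flux
t_ratio = 1.5            # assumed (illustrative) surface-to-DF temperature ratio
ne_ratio = 1.0           # electron abundance roughly constant over the region

gamma_ratio = g0_ratio * math.sqrt(t_ratio) / ne_ratio
print(f"gamma(surface)/gamma(DF) ~ {gamma_ratio:.0f}")
# A ~40-50x change in the ionization parameter would naively imply a larger change in the
# 6.2/11.2 ratio than the ~12% variation observed -- the puzzle noted above.
```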
The 3.4/3.3 AIB ratio is observed to decrease by about 100% from DF 3 to the atomic PDR (Fig. 5). Such variations have also been seen in other nebulae and attributed to photofragmentation processes (Joblin et al., 1996). The 3.4 \(\mu\)m AIB is attributed to aliphatic CH modes in the form of a minor amount of H bonded to sp\({}^{3}\) C atoms either in the form of methyl groups or as superhydrogenated PAHs (Schutte et al., 1993; Bernstein et al., 1996; Joblin et al., 1996). The abundance of superhydrogenated PAHs is expected to be very small throughout the Orion Bar as such extra H's are readily lost in the strong FUV radiation field (Andrews et al., 2016). Methyl groups are also more easily photolysed than aromatic H's (energy barriers are 3.69, 4.00, and 4.47 eV for CH\({}_{2}\)-H, -CH\({}_{3}\) and aromatic H loss, respectively; Tielens, 2021). Recent experiments report that, for cations, this methyl group photolysis can lead to quite stable tropylium formation (loss of H followed by isomerization to a seven-membered ring; Jochims et al., 1999; Zhen et al., 2016; Wenzel et al., 2022). However, further investigation is required to firmly establish the importance of this fragmentation route for conditions present in the interstellar medium. If borne out, the reaction of the tropylium cation with atomic H has a calculated barrier of 3.2 kcal mol\({}^{-1}\) (1600 K; Bullins et al., 2009) and, hence, under warm, dense H-rich conditions, the methyl group could be reformed. Hence, in a "suitable" PDR, the species may cycle back and forth between a methyl functional group and the tropylium structure until eventually -CH\({}_{3}\) loss occurs. In any case, it can be expected that the stronger UV field nearer to the surface will reduce the number of CH methyl groups compared to the number of CH aromatic bonds. Further experimental and quantum chemical studies will have to address the competition between the various channels involved in the chemistry of methylated PAHs in PDRs.
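For reference, the quoted barrier of 3.2 kcal mol\({}^{-1}\) converts to the quoted \(\sim\)1600 K simply by dividing by the gas constant:

```python
# Unit check for the quoted reaction barrier: 3.2 kcal/mol expressed as a temperature.
R = 8.314                              # gas constant, J mol^-1 K^-1
barrier_J_per_mol = 3.2 * 4184.0       # kcal/mol -> J/mol
print(f"E/R ~ {barrier_J_per_mol / R:.0f} K")   # ~1610 K, matching the quoted ~1600 K
```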
The increased importance of PAH photolysis near the surface of the Orion Bar PDR is in line with the GrandPAH hypothesis that only the most resilient species can sustain the harsh conditions of strong FUV radiation fields. Thus, a limited number of compact, large PAHs will dominate the interstellar PAH family in these conditions (Andrews et al., 2015; Tielens, 2013). If the conditions are right, large PAHs may even be stripped of all their H's and isomerize to the fullerene, C\({}_{60}\)(Boersma et al., 2012; Berne et al., 2015; Zhen et al., 2014a). We also realize that the presence of somewhat smaller PAHs and the increased importance of aliphatic functional groups deep in the PDR may reflect the importance of ion-molecule and/or radical chemistry during the preceding dark cloud core phase modifying and/or forming PAHs in a bottom-up scenario akin to that proposed for the formation of benzonitrile, indene, and cyanonaphthalene (McGuire et al., 2018; Cernicharo et al., 2021; McGuire et al., 2021).
## 5 Conclusions
The superb sensitivity and spectral resolution of _JWST_ have revealed an ever-better characterization of the AIBs in the Orion Bar in terms of sub-components, multiple components making up a "single" band, and band profiles. In addition, the unprecedented spatial resolution of the spectral imaging data showcases the interdependence of the numerous AIB components.
We extracted five template spectra in apertures positioned on the H ii region, the atomic PDR, and the three dissociation fronts DFs 1, 2, and 3. The spectra display a wealth of detail and many weak features have now been firmly identified, their peak positions quantified, and their profiles established. At the same time, the spectra are really very simple. There are a limited number of strong bands with well defined peak positions and red-shaded profiles characteristic for anharmonic interactions. And there is little diversity between the templates. A modest variation is observed in the relative intensities of the main AIBs and of the sub-components of an AIB as well as a systematic broadening of the FWHM of many AIBs (smallest in the atomic PDR and largest in DF 3). Consequently, these templates demonstrate the spectral variations in the class A AIB emission as well as the shift from class B\({}_{11.2}\) (DF 3) to class A\({}_{11.2}\) (atomic PDR). The comparison of the template spectra with the _ISO-SWS_ spectrum of the Orion Bar underscores that the spectrum of the atomic region is the "poster child" for the class A spectrum (Fig. 2; Peeters et al. 2002a; van Diedenhoven et al. 2004). This comparison also demonstrates that in a large aperture, PDRs such as Orion are expected to show class A spectra. Conversely, PDRs with more gentle physical conditions (e.g. in the DF 3) are expected to display a slightly modified class A AIB spectrum (except for the 11.2 \(\mu\)m AIB), showcasing broader AIBs and an increased prominence of minor sub-components with respect to the exemplar class A spectrum. In the case of the 11.2 \(\mu\)m AIB, more gentle physical conditions broaden the AIB and increase the prominence of minor sub-components seen in class A, resulting in a class B 11.2 \(\mu\)m AIB profile. Hence, the templates suggest a shift from class B\({}_{11.2}\) (DF 3) to class A\({}_{11.2}\) (atomic PDR). Further modelling of the PDR physics and chemistry may help to pinpoint the physical and chemical processes that drive these spatial-spectral variations in the Orion Bar. Furthermore, we expect that similar studies of a variety of sources with _JWST_ will provide deeper insight in the origin of the A, B, C, and D classes identified by using measurements obtained with _ISO_ and _Spitzer_(Peeters et al., 2002a; van Diedenhoven et al., 2004; Sloan et al., 2014; Matsuura et al., 2014). In any case, the spatial-spectral variations in the Orion Bar provide a framework in which AIB spectra of extragalactic regions of massive star formation can be analysed in terms of the physical conditions in their PDRs.
An analysis of the _Spitzer_ spectra of a variety of objects revealed that the mid-IR spectra at the brightest spots in PDRs show remarkably similar AIBs and this has been taken to imply that in the harsh conditions of these positions, the PAH family is dominated by a few species that can withstand these conditions (Andrews et al., 2015; Tielens, 2013). For now, the limited diversity in the AIB characteristics of the Orion Bar templates points in the same direction. Indeed, a broad distribution of PAHs would result in much more sub-structure and more variable behavior. In addition, the disappearance of the (weak) 11.25 \(\mu\)m component of the 11.2 \(\mu\)m AIB in the PDR surface layers implies that photochemistry is important: only the most robust species survive in the harsh conditions at the surface of the PDR. We note that while the cosmic AIB emission can be classified in four classes (A, B, C, and D), interstellar AIB emission invariably belongs to class A. The Orion Bar spectrum is the "poster child" of the class A spectrum - out of all the AIBs found in the Orion Bar template spectra, only the 11.2 \(\mu\)m AIB shows some indication of a class B contribution. Only very distinctly different classes of objects with unique histories display classes B, C, and D AIB emission. This too implies that the interstellar PAH family consists of a small set of very robust species.
Moreover, we conclude that the profiles of the 5.25, 6.2, and 11.2 \(\mu\)m AIBs are controlled by anharmonicity, rather than by blending of a number of bands, while variations in the widths of these bands in the different template spectra are related to variations in the excitation of the emitting species in those positions. As a corollary, this implies that these bands are likely dominated by emission of a single carrier, further supporting the GrandPAH hypothesis (Mackie et al., 2022). However, it remains to be seen whether this spectral similarity still holds when a much larger range of objects is investigated at the higher spectral resolution of _JWST_.
The much higher spatial resolution of _JWST_ provides further insight into the processes that might be relevant for the composition of the PAH family. Specifically, as argued in Sect. 4, the decreased prominence of the minor features in the 5.75, 7.7, and 11.2 \(\mu\)m AIBs in the atomic region indicates the loss of a sub-population of the PAH family (Berne & Tielens, 2012; Montillaud et al., 2013; Berne et al., 2015; Andrews et al., 2016) or the loss of very small grains (Pilleri et al., 2012, 2015) in this region. Likewise, the decrease in the methyl group coverage, as evidenced in the variation of the 3.4/3.3 AIB ratio, indicates the loss of more loosely bound functional groups in the surface layers or their conversion to aromatic moieties. Models suggest that photolysis of PAHs is controlled by the strength of the UV field over the atomic hydrogen density (Montillaud et al., 2013; Andrews et al., 2016; Berne et al., 2015) and over much of the Orion Bar PDR, \(G_{0}/n(\mathrm{H})\simeq 1\) (Tielens & Hollenbach, 1985; Bernard-Salas et al., 2012b), implying that the PAHs that are lost from the PAH family are small (\(N_{\mathrm{C}}\lesssim 50\)).
Hence, the picture that emerges from the analysis of the template spectra is the increased importance of FUV processing in PDR surface layers, resulting in a "weeding out" of the weakest links of the PAH family. These less resistant species are possibly formed from small hydrocarbons during the preceding dark cloud phase of the region in a bottom-up chemical scenario (Cuadrado et al., 2015; McGuire et al., 2021), feeding the gas with small hydrogen rich photolytically produced species (Alata et al., 2015). The UV processing of the PAH family will start with the loss of the smallest PAHs but as the PDR material is advected to the surface, larger and larger PAHs will become susceptible to photoprocessing. Similar scenarios have been proposed recently with data not obtained with _JWST_, albeit with slightly different numbers (e.g. \(N_{\mathrm{C}}\lesssim 50\) in Murga et al., 2022). In favourable conditions, the processing of very large PAHs (\(N_{\mathrm{C}}\gtrsim 60\)) may lead to the formation of C\({}_{60}\)(Berne & Tielens, 2012; Berne et al., 2015). While the present data around 18.9 \(\mu\)m are still marred by instrumental artefacts, future searches for the signature of fullerenes in the MIRI spectra of these surface layers may reveal whether such a scenario plays a role under the conditions of the Orion Bar.
_JWST_ is poised to obtain high-quality spectral imaging observations of a large sample of environments probing the full range of physical conditions that are relevant for AIB emission. These observations promise to capture the complexity of AIB emission with the unprecedented detail that is needed in order to advance our understanding of the photochemical evolution of large carbonaceous molecules.
###### Acknowledgements.
We thank the referee, Alan Tokunaga, for constructive comments on the manuscript. We are very grateful to the _JWST_ Help Desk for their support with pipeline and calibration issues that were encountered while writing this paper. This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. These observations are associated with program #1288. Support for program #1288 was provided by NASA
through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127. EP and JC acknowledge support from the University of Western Ontario, the Institute for Earth and Space Exploration, the Canadian Space Agency (CSA; 22JPMG01-16), and the Natural Sciences and Engineering Research Council of Canada. Studies of interstellar PAHs at Leiden Observatory (AT) are supported by a Spinoza prize from the Dutch Science Agency, NWO. CB is grateful for an appointment at NASA Ames Research Center through the San Jose State University Research Foundation (80NSSC22M0107). Part of this work was supported by the Programme National "Physique et Chimie du Milieu Interstellaire" (PCMI) of CNRS/INSU with INC/INP co-funded by CEA and CNES. JRG and SC thank the Spanish MCINN for funding support under grant PID2019-10610-BI00-30. TO acknowledges support by the JSPS Bilateral Program, Grant Number 12019399. Work by YO and MR is carried out within the Collaborative Research Centre 956, sub-project C1, funded by the Deutsche Forschungsgemeinschaft (DFG) - project ID 18401867. This work was also supported by the Spanish program Unidad de Excelencia María de Maeztu (CEX2020-001058-M), financed by MCIN/AEI/10.13039/501100011033. NN is funded by the United Arab Emirates University (UAEU) through UAEU Program for Advanced Research (UPAR) grant GO003479. AP (Amit Pathak) would like to acknowledge financial support from Department of Science and Technology - SERB via Core Research Grant (TSC-CRG grant) (SERB-CRG/20100907) and Institute of Eminence (IoE) incentive grants, BHU (mentev2012)-2232439, Banaras Hindu University, Varanasi, and thanks the Inter-University Centre for Astronomy and Astrophysics, Pune for an associateship. This work is sponsored (in part) by the CAS, through a grant to the CAS South America Center for Astronomy (CASSACA) in Santiago, Chile. AR gratefully acknowledges support from the directed Work Package at NASA Ames titled: "Laboratory Astrophysics - The NASA Ames PAH IR Spectroscopic Database". MR acknowledges DST for the DST INSPIRE Faculty fellowship. HZ acknowledges support from the Swedish Research Council (contract No 2020-03437). PM acknowledges grants EURO201-122006, TEED2021-129416A-100 and DFD2021-1253090A-100 funded by MCIN/AEI/ 10.13039/501100011033 and European Union NextGenerationEU/PRTR. MSK is funded by RSCF, grant number 21-12-00373.
| ```
To reveal the structural and chemical changes of the PAHs in the Orion Bar through panoramic observations, JWST, with its excellent resolution and high sensitivity, elucidated the structure of the PAHs.
``` |
2309.11375 | Performance update of an event-type based analysis for the Cherenkov
Telescope Array | The Cherenkov Telescope Array (CTA) will be the next-generation observatory
in the field of very-high-energy (20 GeV to 300 TeV) gamma-ray astroparticle
physics. The traditional approach to data analysis in this field is to apply
quality cuts, optimized using Monte Carlo simulations, on the data acquired to
maximize sensitivity. Subsequent steps of the analysis typically use the
surviving events to calculate one set of instrument response functions (IRFs)
to physically interpret the results. However, an alternative approach is the
use of event types, as implemented in experiments such as the Fermi-LAT. This
approach divides events into sub-samples based on their reconstruction quality,
and a set of IRFs is calculated for each sub-sample. The sub-samples are then
combined in a joint analysis, treating them as independent observations. In
previous works we demonstrated that event types, classified using Machine
Learning methods according to their expected angular reconstruction quality,
have the potential to significantly improve the CTA angular and energy
resolution of a point-like source analysis. Now, we validated the production of
event-type wise full-enclosure IRFs, ready to be used with science tools (such
as Gammapy and ctools). We will report on the impact of using such an
event-type classification on CTA high-level performance, compared to the
traditional procedure. | Juan Bernete, Orel Gueta, Tarek Hassan, Max Linhoff, Gernot Maier, Atreyee Sinha | 2023-09-20T14:56:39 | http://arxiv.org/abs/2309.11375v1 | # Performance update of an event-type based analysis for the Cherenkov Telescope Array
###### Abstract:
The Cherenkov Telescope Array (CTA) will be the next-generation observatory in the field of very-high-energy (20 GeV to 300 TeV) gamma-ray astroparticle physics. The traditional approach to data analysis in this field is to apply quality cuts, optimized using Monte Carlo simulations, on the data acquired to maximize sensitivity. Subsequent steps of the analysis typically use the surviving events to calculate one set of instrument response functions (IRFs) to physically interpret the results. However, an alternative approach is the use of event types, as implemented in experiments such as the _Fermi_-LAT. This approach divides events into sub-samples based on their reconstruction quality, and a set of IRFs is calculated for each sub-sample. The sub-samples are then combined in a joint analysis, treating them as independent observations. In previous works we demonstrated that event types, classified using Machine Learning methods according to their expected angular reconstruction quality, have the potential to significantly improve the CTA angular and energy resolution of a point-like source analysis. Now, we validated the production of event-type wise full-enclosure IRFs, ready to be used with science tools (such as _Gammapy_ and _ctools_). We will report on the impact of using such an event-type classification on CTA high-level performance, compared to the traditional procedure.
## 1 Introduction
The Cherenkov Telescope Array (CTA)1 represents the next-generation observatory in the field of very-high-energy gamma-ray astroparticle physics. It employs two arrays of imaging atmospheric Cherenkov telescopes (IACTs), one for each hemisphere, composed of telescopes of three different sizes. Its optimized configuration provides a major improvement in sensitivity and in angular and energy resolution with respect to the current generation of IACTs over a very broad energy range from 20 GeV up to more than 300 TeV.
Footnote 1: www.cta-observatory.org
The performance of this future observatory is estimated from detailed Monte Carlo (MC) simulations, described by a set of Instrument Response Functions (IRFs). The main IRF components describing the instrument performance to gamma-ray observations are the effective area, the energy dispersion and point-spread function (PSF). These IRFs are then used by science tools (such as gammapy [6] and ctools [10]) to simulate the instrument performance over specific science cases. The methodology to calculate the expected sensitivity and associated IRFs of CTA, as well as their detailed description, has been described in previous contributions (see [2, 4, 8]) and is briefly discussed in section 3.
The _Fermi_ Large Area Telescope (LAT) Collaboration [3] proved that high-level analysis performance can be significantly improved by separating events for which the response of the detector is different into event types and producing specific IRFs for each event type [5]. By including this extra knowledge into the likelihood analysis, multiple benefits are achieved: reducing background contamination, increasing the effective area and sensitivity as well as significantly improving the angular and energy resolution for a subset of the events. Inspired by the success of event types in _Fermi_-LAT, we present in this work the status of an analog implementation for IACTs, specifically for the future CTA.
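The event-type approach enters the analysis as a joint likelihood in which each sub-sample, with its own set of IRFs, is treated as an independent observation. A minimal illustration of such a joint binned Poisson log-likelihood is sketched below; this is a toy illustration, not the implementation used by the CTA science tools:

```python
import numpy as np

# Minimal sketch of a joint binned Poisson likelihood over event types, each with its own
# predicted counts (computed from its own IRFs). Illustrative only -- in practice this is
# handled by science tools such as Gammapy or ctools.
def poisson_loglike(observed, predicted):
    predicted = np.clip(predicted, 1e-12, None)
    return np.sum(observed * np.log(predicted) - predicted)

def joint_loglike(observed_by_type, predicted_by_type):
    """Sum the per-event-type log-likelihoods, treating the types as independent."""
    return sum(poisson_loglike(o, p)
               for o, p in zip(observed_by_type, predicted_by_type))

# Toy example: two event types with different background levels
obs = [np.array([12, 30, 7]), np.array([4, 9, 2])]
pred = [np.array([11.5, 28.0, 8.1]), np.array([3.8, 10.2, 1.9])]
print(joint_loglike(obs, pred))
```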
This work is a natural continuation of Ref. [9], where we demonstrated that event types are able to improve the angular and energy resolution by up to 25% for a point-like source located at the center of the field of view (FoV). This first step did not allow the generalized use of event-type-wise IRFs at the science tools level to properly evaluate their impact over specific science cases.
In this work, we have validated the production of event-type wise offset-dependent point-like and full-enclosure IRFs for CTA (i.e. valid for both point-like or extended sources located anywhere within the FoV). These IRFs, tailored to each event type, are now ready to be used by science tools. We also present the impact of this event-type classification on the high-level performance of CTA, comparing it to the standard procedure (not using event types), as well as evaluate the potential for further improvement with a better event-type classification.
## 2 Event type partitioning
Previous work successfully demonstrated the effectiveness of machine learning (ML) methods in separating event types based on their expected quality in angular reconstruction [9]. Our approach begins at Data Level 2 (DL2), the product of a classical IACT analysis, which provides for each event a classification score called _gammaness_ and a list of lower-level parameters describing individual telescope images and stereo parameters (such as the Hillas parameterization, the reconstructed altitude of the shower maximum, etc.).
An event type is a tag assigned to each event in a DL2 table that classifies all of the events in terms of their quality in angular reconstruction. We use an ML regression model to predict the angular difference between the true and reconstructed directions (from now on, the predicted misdirection), so the division into event types reduces to establishing thresholds that select the best-reconstructed X% of events (lowest predicted misdirection), the following Y%, and so on, where the number of event types and their proportions can be freely chosen.
The event type partitioning methodology employed for this study is almost identical to the one described in the previous contribution [9] with the following differences:
* The MC simulated data used is diffuse (covering the full FoV of CTA telescopes) for gammas, protons and electrons.
* The regression model we use, a multilayer perceptron (MLP) neural network with \(\tanh\) as the neuron activation function, has been further optimized.
* The thresholds in predicted misdirection used to divide the event types now depend on both energy and offset angle, instead of only on energy.
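As a concrete illustration, the partitioning described in this section can be sketched as follows. This is a simplified example rather than the actual analysis code: the DL2 column names, the bin edges and the `fractions` argument are placeholder assumptions, and only the quantile-threshold logic reflects the method described above.

```python
import numpy as np
import pandas as pd

def assign_event_types(dl2: pd.DataFrame, fractions=(0.2, 0.4, 0.4),
                       energy_edges=None, offset_edges=None) -> pd.DataFrame:
    """Tag each DL2 event with an event type (0 = best expected direction
    reconstruction) using quantiles of the predicted misdirection, computed
    independently in every (reconstructed energy, offset angle) bin."""
    if energy_edges is None:
        energy_edges = np.logspace(-2, 2.5, 10)   # TeV, placeholder binning
    if offset_edges is None:
        offset_edges = np.linspace(0.0, 6.0, 4)   # deg, placeholder binning

    dl2 = dl2.copy()
    e_bin = np.digitize(dl2["reco_energy"], energy_edges)
    o_bin = np.digitize(dl2["offset"], offset_edges)
    quantile_edges = np.cumsum(fractions)[:-1]    # e.g. (0.2, 0.6) for 20%/40%/40%

    dl2["event_type"] = 0
    for _, idx in dl2.groupby([e_bin, o_bin]).groups.items():
        # Thresholds are set independently in every energy/offset bin.
        thresholds = dl2.loc[idx, "predicted_misdirection"].quantile(quantile_edges)
        dl2.loc[idx, "event_type"] = np.searchsorted(
            thresholds.to_numpy(),
            dl2.loc[idx, "predicted_misdirection"].to_numpy(),
            side="right")
    return dl2
```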
## 3 IRF production
The standard methodology to compute CTA IRFs [2, 4, 8] starts from the DL2 table. A re-weighting of the simulated events is needed so that they resemble the particle statistics expected from a CTA observation of a Crab-Nebula-like source (as a test case). To compute the IRFs, a cut optimization is needed, generally maximizing sensitivity as a function of reconstructed energy. Events surviving these quality cuts are the ones used to compute the final set of IRFs. The cut optimization is usually performed over the following parameters: multiplicity (number of telescopes used in the reconstruction of an event), _gammaness_ and, in the case of a point-like source analysis, the angular size of the signal region (_ON region_). Once CTA data is produced, the list of events surviving the _gammaness_ and multiplicity cuts, together with their corresponding IRFs, forms the Data Level 3 (DL3) products.
With this procedure, the amount of data surviving the quality cuts (and therefore actually used in the analysis) is small compared to the rejected data, even though the latter could still be useful. Furthermore, as only one set of IRFs is generated and applied equally to all events, all the extra knowledge we have from the low-level analysis is lost.
In an event-type based analysis, the event-type partitioning (as explained in Section 2) takes place before optimizing the cuts and computing the IRFs. This allows us to create a number of independent event lists (as many as event types), each one with its corresponding set of IRFs describing its average quality.
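As an illustration of what each event-type-wise IRF contains, the effective-area component follows directly from the ratio of selected to simulated events. The snippet below is a generic sketch of that calculation (not the _pyirf_ implementation used in this work); the column names, the simulated-area value and the binning are placeholder assumptions.

```python
import numpy as np

def effective_area_per_type(dl2, n_simulated_per_bin, sim_area_m2, energy_edges):
    """A_eff(E_true) = (N_selected / N_simulated) * A_sim, per event type.

    `n_simulated_per_bin` is the number of thrown MC gamma rays in each
    true-energy bin, and `sim_area_m2` the area over which they were thrown.
    """
    aeff = {}
    for etype, grp in dl2.groupby("event_type"):
        n_selected, _ = np.histogram(grp["true_energy"], bins=energy_edges)
        aeff[etype] = sim_area_m2 * n_selected / n_simulated_per_bin
    return aeff  # event type -> array of effective areas per true-energy bin
```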
To compute the IRFs and store them in the proper format we used the library _pyirf_. This library first needed to be tested and validated to produce offset-dependent and full-enclosure IRFs.
To validate it, we compared the resulting sensitivity and IRFs to the ones computed by _EventDisplay_ [11] with the same MC data. The tests consisted of two steps:
1. Validate the _pyirf_ IRF computation. Using identical DL2 tables, we compared the computed IRF components, applying exactly the same quality cuts as _EventDisplay_. The results were identical, and therefore the computation of all IRF components was validated.
2. Validate the _pyirf_ cut optimization. We performed two independent cut optimizations (with _pyirf_ and _EventDisplay_), selecting the cuts that provide the best sensitivity in each energy bin, and compared the resulting sensitivities. As shown in Fig. 1, they are not exactly the same, but they agree to within 50% between 30 GeV and 100 TeV (and also across different values of the FoV offset). The reason for the disagreement is not known, but it is probably related to small differences in the cut-selection methods (for example, _EventDisplay_ uses smaller bins for the direction cuts).
After performing these tests, we conclude that _pyirf_ is suitable for our needs, as it allows us to compute both point-like and full-enclosure IRFs properly for all camera offset angles up to 6 deg. Once the production of IRFs was validated, we produced various sets of event-type-wise IRFs, ready to be used with high-level science tools, in this case _Gammapy_.
## 4 Results
We evaluate the expected angular reconstruction quality of all events, rank them, and eventually classify them into different event-type partitionings, to then produce event-type-wise offset-dependent IRFs for 50 hours of observing time for the "Alpha" layout of CTA-North (4 LSTs and 9 MSTs) [7].

Figure 1: _EventDisplay_ and _pyirf_ comparison of the resulting sensitivity for a Crab-like observation of 50 hours in the central FoV offset bin.
By computing the angular resolution for the top 20% of events as ranked by our model, we show a 25 to 50% improvement in angular resolution with respect to the standard cut-optimization method (not using event types), as shown in Figure 2. We also computed the angular resolution for the true top 20% of events, i.e., ranking by the actual difference between the reconstructed and the true simulated direction of each event, which shows there is still room for improvement of our regression model.
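For reference, the angular resolution of a given event subset can be estimated as a containment radius of the separation between true and reconstructed directions, evaluated per energy bin. The sketch below is a simplified illustration of that comparison, assuming placeholder arrays `theta` (true-reco separation), `reco_energy` and `predicted_misdirection`; the exact containment fraction and binning used for Figure 2 may differ.

```python
import numpy as np

def angular_resolution(theta, reco_energy, energy_edges, containment=0.68):
    """Containment radius of the true-reco angular separation in each energy bin."""
    res = np.full(len(energy_edges) - 1, np.nan)
    which = np.digitize(reco_energy, energy_edges) - 1
    for i in range(len(res)):
        in_bin = theta[which == i]
        if in_bin.size:
            res[i] = np.quantile(in_bin, containment)
    return res

# Standard case vs. the 20% of events with the lowest predicted misdirection:
# order = np.argsort(predicted_misdirection)
# top20 = order[: int(0.2 * order.size)]
# res_all = angular_resolution(theta, reco_energy, energy_edges)
# res_top = angular_resolution(theta[top20], reco_energy[top20], energy_edges)
```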
We can use these IRFs to perform either 1D (spectral evaluations of point-like sources) or 3D (spectral and morphological studies) simulations with _Gammapy_. Datasets are simulated from a set of IRFs: we are able to perform simulations for a single set of IRFs and for event-type-wise IRFs, treating the latter as independent samples that may be combined in a joint-likelihood analysis. By doing this for Crab-like source simulations over a wide range of fluxes, we can reconstruct the combined sensitivity from all event types as shown in Figure 3, by identifying for each bin in reconstructed energy the simulated flux that provides a 5\(\sigma\) detection. Note that this method of computing sensitivity (for any set of observations or simulations at the _Gammapy_ level) does not include the usual requirements generally imposed in the calculation of sensitivity, such as the excess being larger than 5% of the background (to account for background systematics) or a minimum of 10 excess events (which heavily affects the sensitivity at the highest energies); this is the main reason for the disagreements with the _pyirf_-estimated curve at the lowest and highest energies.
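The gain from treating event types as independent samples can be pictured with a toy joint-likelihood calculation: each event type contributes its own Poisson term, and the terms are summed into a combined test statistic. The sketch below illustrates this idea only schematically (it is independent of the actual _Gammapy_ machinery), assuming per-type expected signal and background counts have already been derived from the event-type-wise IRFs.

```python
import numpy as np
from scipy.stats import poisson

def joint_ts(observed, signal, background):
    """Combined likelihood-ratio test statistic over independent event types.

    `observed`, `signal`, `background`: one array per event type, with one
    entry per reconstructed-energy bin.
    """
    ts = 0.0
    for n_obs, s, b in zip(observed, signal, background):
        logl_sb = poisson.logpmf(n_obs, s + b).sum()  # signal + background hypothesis
        logl_b = poisson.logpmf(n_obs, b).sum()       # background-only hypothesis
        ts += 2.0 * (logl_sb - logl_b)
    return ts

# Scanning the simulated source flux and requiring roughly sqrt(joint_ts) >= 5
# in a given energy bin mimics the 5-sigma detection criterion used for Figure 3.
```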
## 5 Conclusions
The conclusions of this work can be summarized by the following milestones:
1. Our ML regression model is able to predict the misdirection of each event and, therefore, can be used to separate event types. It should be noted there is still room for improvement.
Figure 2: Angular resolution for a 50 hours observation, comparison between the standard cuts case, the reconstructed top 20% events and the true top 20%. Repeated for different offset ranges.
2. Offset-dependence has been introduced and validated in the event-type partitioning process.
3. We are now able to produce consistently both point-like and full-enclosure event-type-wise IRFs over the full FoV, which allows high-level simulations with science tools such as _Gammapy_.
4. Event-type-wise IRFs show a significant improvement in angular resolution (25 to 50% over a subset of the events).
5. A preliminary _Gammapy_ analysis already shows that it is possible to combine observations from different event-type samples for better performance.
This work shows the great potential that an event-type based analysis could have for improving CTA's performance. A specific science case in fundamental physics with gamma-ray propagation [1] that could benefit from event types is the measurement of intergalactic magnetic fields, for which the size of the PSF is crucial. Another important example is the Galactic Plane Survey, where the improved angular resolution at large offset angles will allow sources to be separated and their extensions and morphologies to be determined better than ever before in this energy range.
Figure 3: Preliminary sensitivity curve reconstructed with _Gammapy_ by doing a likelihood analysis with combined event types (4 types with 25% of the events each) and with no event types, compared to the standard sensitivity computed with _pyirf_. Note that the _Gammapy_-estimated sensitivity does not take into account any conditions on background systematics or a minimum number of excess events, which affect the highest and the lowest energies.
## Acknowledgements
This work was conducted in the context of the CTA Consortium and CTA Observatory. We gratefully acknowledge financial support from the agencies and organizations listed here: [http://www.cta-observatory.org/consortium_acknowledgments](http://www.cta-observatory.org/consortium_acknowledgments).
チェレンコフ望遠鏡アレイ (CTA) は、超高エネルギー (20 GeV から 300 TeV) ガンマ線天体粒子物理学分野の次世代天文台となる。この分野でのデータ分析の伝統的なアプローチは、感度を最大化するために、モンテカルロシミュレーションで最適化された品質カットを取得データに適用することである。その後の解析では、残存するイベントを使用して、結果の物理的解釈のための一組の測定器応答関数 (IRFs) を計算する。しかし、別のアプローチとして、_Fermi_-LAT などの実験で実装されているイベントタイプの使用がある。このアプローチでは、イベントを再構成の品質に基づいてサブサンプルに分割し、各サブサンプルごとに IRFs を計算する。サブサンプルはその後、独立した観測として扱われ、共同解析で組み合わせられる。過去の研究では、角度の再構成の期待値に基づいて機械学習を使って分類されたイベントタイプは、ポイント
2309.06609 | Jupiter's Metastable Companions | Jovian co-orbitals share Jupiter's orbit in 1:1 mean motion resonance. This
includes $>$10,000 so-called Trojan asteroids surrounding the leading (L4) and
trailing (L5) Lagrange points, viewed as stable groups dating back to planet
formation. Via a massive numerical study we identify for the first time some
Trojans which are certainly only `metastable'; instead of being primordial,
they are recent captures from heliocentric orbits into moderately long-lived
(10 kyr - 100 Myr) metastable states that will escape back to the scattering
regime. We have also identified (1) the first two jovian horseshoe co-orbitals
that exist for many resonant libration periods, and (2) eight jovian
quasi-satellites with metastable lifetimes of 4-130 kyr. Our perspective on the
Trojan population is thus now more complex as Jupiter joins the other giant
planets in having known metastable co-orbitals which are in steady-state
equilibrium with the planet-crossing Centaur and asteroid populations, in
agreement with theoretical estimates. | Sarah Greenstreet, Brett Gladman, Mario Juric | 2023-09-12T21:27:05 | http://arxiv.org/abs/2309.06609v2 | # Jupiter's Metastable Companions
###### Abstract
Jovian co-orbitals share Jupiter's orbit in 1:1 mean motion resonance. This includes >10,000 so-called Trojan asteroids surrounding the leading (L4) and trailing (L5) Lagrange points, viewed as stable groups dating back to planet formation. Via a massive numerical study we identify for the first time some Trojans which are certainly only'metastable'; instead of being primordial, they are recent captures from heliocentric orbits into moderately long-lived (10 kyr-100 Myr) metastable states that will escape back to the scattering regime. We have also identified (1) the first two jovian horseshoe co-orbitals that exist for many resonant libration periods, and (2) eight jovian quasi-satellites with metastable lifetimes of 4-130 kyr. Our perspective on the Trojan population is thus now more complex as Jupiter joins the other giant planets in having known metastable co-orbitals which are in steady-state equilibrium with the planet-crossing Centaur and asteroid populations, in agreement with theoretical estimates.
## Introduction
The five famous Lagrange points of the circular restricted three-body problem are locations relative to the moving planet where objects have tiny relative accelerations. In particular the 'triangular' L4 and L5 Lagrange points are located 60 degrees ahead of and behind the planet along its orbit, and small bodies can oscillate for long durations back and forth around these points. The L4/L5 stability was initially a theoretical discovery, which was followed by the first Trojan detections in 1906 [1; 2]; the census now includes more than 10,000 cataloged members, and these \(>\)10,000 Trojans are viewed as stable populations that date back to planet formation.
Twenty-five years ago, [3] computed the stability of the first 270 Jupiter Trojans on their nominal orbits, showing that some Trojans may leave in the next 0.3-4 billion years; that study assumed all Trojans were primordial and that any recent departures were due to a combination of collisions and dynamical erosion, allowing some primordial Trojans to leak away at the current epoch. Here we will demonstrate the additional importance of recent temporary (metastable) captures into and out of co-orbital states on the shorter time scales of tens of kyr to Myr.
Most planets are now known to host temporary co-orbital companions (reviewed in [4] and [5]), defined as objects undergoing oscillation (libration) of their 1:1 resonant argument for time scales much shorter than the age of the Solar System before escaping the resonance; for direct orbits the resonant argument is simply the angle between the mean longitudes of the object and the planet. In addition to Trojans, co-orbital motion can be of the horseshoe type (when the small body passes through the direction \(180^{o}\) away from the planet and the motion encloses both the L4 and L5 points). Like Trojan motion, horseshoe orbits were predicted analytically, but are in most cases very unstable [6]. No long-term stable horseshoe sharing a planet's solar orbit has ever been observed. Lastly, in the frame co-rotating with Jupiter, so-called 'quasi-satellites' have orbits that maintain large-distance motion encircling the planet [7].
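In a numerical integration, these co-orbital classes can be told apart by following the 1:1 resonant argument \(\phi=\lambda-\lambda_{J}\) (the difference of the mean longitudes of the small body and Jupiter) in time. Below is a minimal diagnostic sketch of that idea; the angular tolerances and the decision order are illustrative choices, not the exact criteria used in this work.

```python
import numpy as np

def coorbital_class(phi_deg):
    """Rough co-orbital type from a time series of the 1:1 resonant argument (deg).

    Trojans librate around ~60 deg (L4) or ~300 deg (L5), horseshoes also enclose
    180 deg, quasi-satellites librate around 0 deg, and an argument that reaches
    both 0 and 180 deg is circulating (not co-orbital).
    """
    phi = np.mod(np.asarray(phi_deg, dtype=float), 360.0)
    reaches_0 = np.any(phi < 10.0) or np.any(phi > 350.0)
    reaches_180 = np.any(np.abs(phi - 180.0) < 10.0)
    near_60 = np.any(np.abs(phi - 60.0) < 10.0)

    if reaches_0 and reaches_180:
        return "circulating (not co-orbital)"
    if reaches_0:
        return "quasi-satellite"
    if reaches_180:
        return "horseshoe"
    return "L4 Trojan" if near_60 else "L5 Trojan"
```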
Restricting our attention to the giant planets, Uranus and Saturn do not have L4 and L5 points stable for 4 Gyr [8]. Nevertheless a metastable uranian L4 Trojan [9] and a metastable saturnian horseshoe orbit [5] are known;'metastable' objects are here defined by undergoing many resonant argument librations before exiting the co-orbital state. Neptune's L4 and L5 points have long-term stability, but both stable and metastable Neptune Trojans are known [10; 11]. Curiously, Jupiter has been the sole giant planet to have no known metastable co-orbitals, despite the expectation that the planet should host such a population [4].
Planet-crossing small bodies can (rarely) find their way into co-orbital states, and numerical simulations can estimate both the steady-state fraction relative to the current planet-crossing population and the expected distribution of temporary-capture time scales (see Discussion). Because Jupiter is
constantly being approached by objects originating in the outer Solar System (Centaurs, that become Jupiter Family comets), and given the estimated number of Jupiter-encountering Centaurs, [4] calculated that the metastable capture fraction was high enough that metastable jovian co-orbitals should exist and trapping would generate all of Trojan, horseshoe, and quasi-satellite motions. Examples of all of these types will be illustrated in our results below (see Figure 1).
There has been a great deal of work studying the complex problem of co-orbital companions [12; 13; 14; 15; 16; 17; 18; 19; 20]; these studies have either been done in the context of a simplified problem (one planet, sometimes on a circular orbit) or for time scales that are only slightly longer than the resonant libration period (of hundreds of years) or did not explore the range of behaviors and time scales possible due to the orbital uncertainties. Our work pushes the sample size and the level of the model detail much further by using full N-body simulations, by exploring time scales covering thousands of resonant libration periods, and by utilizing large numbers of 'clones' drawn from the orbital uncertainty region for determining the robustness of the resonant states; we also study the entire population of known objects with semimajor axes near that of Jupiter (nearly 12,000 objects). As a result, we have identified not only the first such metastable (2-13 kyr) jovian horseshoe orbits, but also the first known set of jovian Trojans which are metastable on intermediate time scales of 0.01-30 Myr and must be recently captured into L4 or L5 motion, increasing the complexity of how we should view the Jupiter Trojan population.
## Results
We used numerical integrations of observationally-derived orbits and 999 clones within the orbital uncertainty region (\(\simeq\)11.6 million state vectors) to search for semimajor axis oscillation around Jupiter's value of 5.20 au as well as resonant argument libration for periods of time long enough (\(>\)1 kyr) to distinguish transient co-orbital capture or non-resonant behavior from primordial Trojan stability [4; 5; 9]. This calculation required approximately 20 CPU years on a Beowulf cluster at the University of British Columbia. We securely identify the transient co-orbitals and non-resonant objects in the sample of 11,581 objects in the 'near-Jupiter population' (i.e., semimajor axes \(a=\)4.5-5.9 au, within \(\simeq 2\) jovian Hill sphere radii of Jupiter's \(a_{J}\)). We classify objects as belonging to one of the following dynamical classes based on their fraction of resonant clones and resonant time scales: "Trojans", "Transients", "Non-Resonant", or "Insecure" (see Figure 2 caption, Methods, and Table A1 for details). Figure 2 shows the semimajor axis vs eccentricity distribution of the sample of near-Jupiter objects along with our classifications. The "Trojans" (objects for which \(\geq\)95% of the 1000 clones remain in the 1:1 jovian resonance for 0.5 Myr) are deemed long-term stable and have not been integrated beyond this time scale; in the future we will extend these integrations to 4 Gyr to study the stability of
these objects on Solar System time scales. All other objects ("transient", "non-resonant", and "insecure") have been integrated for time scales long enough that all 1000 clones have left the resonance; these integration time scales range from a few hundred years for the non-resonant objects to up to \(\sim\)2 Gyr for the transient co-orbitals. We note that some "transient" or "insecure" objects can become trapped in the 1:1 resonance multiple times during the integrations.
Figure 1: Forward-integrated motion of 6 example metastable jovian co-orbitals identified in our analysis. Each object’s motion is shown in the heliocentric reference frame co-rotating with Jupiter (red dot) for a single resonant libration period, which are roughly as follows: 2015 EL77 (L4): 165 yr, 2019 QB65 (L4): 175 yr, 2015 YJ22 (L5): 250 yr, 288282 (L5): 145 yr, 2016 TE71 (HS): 480 yr, & 2020 MM5 (QS): 145 yr. Note the different axis scales for each object.
We base our classifications on the start of the integrations (i.e., the current time) and do not discuss (rare) multiple resonant traps in this paper.
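A schematic version of the clone-based decision rule (using the thresholds quoted in the Figure 2 caption) is given below. It is an illustrative reconstruction of the classification logic rather than the integration pipeline itself, and it assumes that the resonant duration of each clone has already been measured from its integration.

```python
import numpy as np

def classify_object(resonant_kyr, stable_cutoff_kyr=500.0):
    """Dynamical class from the resonant durations (in kyr) of an object's clones.

    `resonant_kyr` holds, for each of the ~1000 clones, how long that clone
    remains in the 1:1 resonance from the start of the integration.
    """
    resonant_kyr = np.asarray(resonant_kyr, dtype=float)
    frac_stable = np.mean(resonant_kyr >= stable_cutoff_kyr)  # still resonant at 0.5 Myr
    frac_resonant = np.mean(resonant_kyr >= 1.0)              # resonant for at least 1 kyr

    if frac_stable >= 0.95:
        return "Trojan"                 # candidate primordial; integrate further
    if frac_resonant >= 0.95:
        return "Transient co-orbital"   # metastable capture that later escapes
    if frac_resonant <= 0.05:
        return "Non-resonant"
    return "Insecure"                   # 5%-95% of clones resonant for >= 1 kyr
```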
Among the near-Jupiter sample, we have identified 27 objects (Table 1), which we are confident are _not_ primordial objects. Instead, they are almost certainly recently captured Jupiter co-orbitals that remain metastable for time scales of \(10^{3}-10^{8}\) years.
Figure 2: Osculating semimajor axis vs eccentricity for the 11,581 objects with \(a\simeq a_{J}=5.20\) au that we classify with numerical integrations. The 11,423 “Trojans” (cyan) are objects for which \(\geq\)95% of their 1000 clones remain in 1:1 jovian resonance for 0.5 Myr. The 27 “Transients” (green) have \(\geq\)95% of their clones remain resonant for \(\geq\)1 kyr, but then leave the resonance. The 124 “Non-Resonant” (red) objects have \(\geq\)95% of their 1000 clones ejected from the resonance in \(<\)1 kyr. The 7 “Insecure” (orange) objects have 5%-95% of their 1000 clones remain in the resonance for \(\geq\)1 kyr before escaping (i.e., these objects would likely move to either “non-resonant” or “transient co-orbitals” upon further improvement of their orbital uncertainties). The dashed rectangle shows the \(a,e\) region that JPL Horizons and the Minor Planet Center (personal communication, Peter Veres) currently define as the ‘jovian Trojan’ parameter space; the 14 non-resonant objects in this box (listed in Table 10) are not Trojans, however, given that \(\geq\)95% of their 1000 numerically-integrated clones are ejected from the resonance in as little as tens or hundreds of years. The 27 metastable transients (green) have a larger range of semimajor axes, eccentricities, and inclinations (see Figure 11 for the semimajor axis vs inclination and eccentricity vs inclination projections) than the stable Trojans (cyan); objects can become temporarily bound to the resonance along its borders that stretch beyond the stable L4/L5 regions (cyan).
While each of these 27 objects shares a commonality with the primordial Trojans through its presence in the 1:1 jovian mean-motion resonance, they are unique in their much shorter resonant stability time scales, which can only mean that they are recent captures into the co-orbital population; they are thus required to be placed in a category of jovian co-orbitals separate from the primordial Trojans.
First, we identify 12 L4 and 4 L5 Trojans (four of which are shown in Figures 1 and 3) that are surely unstable on time scales much shorter than ever previously discussed (only \(\sim\)Gyr time scales are discussed in [3]). The median time scales over which these metastable Trojans escape the resonance range from 1 kyr to 23 Myr (Table A2); however, their observational uncertainties result in instability time scales that vary by an order of magnitude or more, as evidenced by the range in escape times of each object's 1000 clones (see Figure 3). This rapid departure means these 16 L4/L5 metastable Trojans cannot be members of the primordial population which are departing today, but must be recent metastable captures.
To date, no transient horseshoe co-orbitals of Jupiter have been identified to librate in the resonance on time scales of more than a couple hundred years (long enough for the object to experience several libration periods), despite the expectation that they should exist among the metastable jovian co-orbital population [4]. We have here identified the first two known metastable horseshoes of Jupiter (2015 OL106 and 2016 TE71), with the latter providing the first known example of a real object that remains in horseshoe motion with Jupiter for dozens of libration periods and resembles historical predictions of jovian horseshoe behavior [6].
Altogether, our metastable identifications include 12 L4 Trojans, 4 L5 Trojans, 2 horseshoes, 8 quasi-satellites, and the retrograde jovian co-orbital (514107) Ka'epaoka'awela 2015 BZ509. We note that a handful of these objects have been previously classified [16; 18; 20] based on their dynamical behavior over the next few hundred to couple thousand years (see captions for Tables 1, A3, & A4); differences between previous shorter time scale classifications and the longer metastable time scale classifications presented here are discussed below. Figure 1 shows the forward-integrated motion for a single libration period for six metastable object examples.
A unique aspect of our work is to determine the time scales over which these objects (and the clones representing their orbital uncertainty) will eventually escape the resonance. To determine their metastable time scales, we extended each object's integrations until all 1000 clones were removed (most often for getting too close to Jupiter). The cumulative distributions for the resonance escape times for the 1000 clones of each of the six objects shown in Figure 1 are given in Figure 3. This figure also presents our measurement of the instability time scale for the retrograde co-orbital (514107) with median value of 3.6 Myr; [19] estimated that the object remained in the near-Jupiter region for a lower limit of at least 1 Myr, while [21] estimated a median lifetime of 6.5 Myr for the object to escape the Solar System or collide with the Sun. The examples shown in Figure 3 depict the range in metastable resonant sticking time scales
(\(10^{3}-10^{8}\) years or longer) we have identified so far. Table 2 contains the full list of resonant sticking time scales for each of our 27 identified metastable jovian co-orbitals.
## Discussion
After the identification of asteroid (514107) as a retrograde jovian co-orbital [19], these are the first securely-identified metastable jovian co-orbitals for which the resonant sticking time scales have been established. While other groups [16; 17; 18; 20] have identified resonant behaviors of some of these objects, those analyses do not extend beyond the next \(\sim\)1-10 kyr nor do they utilize large numbers of clones drawn from the orbital uncertainty region for determining the certainty of a resonant classification. We confirm the current non-resonant and quasi-satellite classifications for a handful of objects (see Tables 1 & 3). However, our analysis is largely unique in identifying the transient nature of these objects by determining the time scales over which they (and the clones representing their orbital uncertainty) will eventually escape the resonance.
We find a number of resonant classifications that differ from previous studies [16; 17; 18; 20]; these objects are noted in Tables 1, 3, & 4. Note that [20] include many objects in their analysis having arc lengths of \(\leq\)5 days, which we omit given our requirement that objects have arc lengths \(>\)30 days to ensure their orbital uncertainty regions are determined by the observations rather than dominated by orbit fitting assumptions.
| Classification | Members |
| --- | --- |
| L4 Trojan | 163240, 2010 AQ134, 2010 VT278, 2014 EJ166, **2015 EL77**, 2015 HF1782, 2015 HX159, 2017 PC52, **2019 QB65**, 2020 RL50, 2020 ROS9, 2020 SN84 |
| L5 Trojan | **288282**, 613709, **2015 YJ22**, 2018 BE7 |
| Horseshoe | 2015 OL106, **2016 TE71** |
| Quasi-satellite | 2419441, 3631351, 5268891, 2003 WG133, 2004 AE91 |
| Retrograde | **514107** |

Table 1: Classifications of metastable jovian co-orbitals. Objects in **bold** are shown in Figures 1 and/or 3. Table 2 provides the resonance escape time scales for these 27 objects.
We additionally require resonant objects to librate in the 1:1 resonance for at least 1 kyr in order to experience several libration periods before possible departure from the resonance (in the case of the transient captures). This is responsible for the classification differences for the objects that we classify as non-resonant but that other studies find are resonant during the \(<\)1 kyr time scales they use (e.g., [20] provide classifications based on 600 yr integrations). In addition, we integrate each object's 1000 clones until all the clones have been removed from the integrations, which allows us to securely classify each co-orbital as transient in nature and determine the time scales over which they are stable in the resonance. This differs from the majority of the previous studies, which can only determine whether an object is currently resonant but not how long it will remain resonant, nor that observational uncertainties can result in instability time scales that vary by an order of magnitude or more (see Figure 3).
We expect the number of transient co-orbitals and primordial Trojans among the 11,581 object sample to shift as we continue to integrate the 1000 clones for time periods longer than 0.5 Myr.
Figure 3: Cumulative distribution for the resonance escape times for the 1000 clones of 7 selected transient jovian co-orbitals. The number in () after each designation is the median resonant time scale for each object’s current trap in the 1:1 resonance. For the full list of resonant sticking time scales for each of our 27 identified metastable jovian co-orbitals, see Table 2.
Very long-lived resonant objects unstable in \(\lesssim\)1 Gyr (i.e., long-lived temporary captures) will become evident in longer integrations, shifting some objects from "Trojan" to "transient co-orbital" classification. This will then meld into the few long-known Jupiter Trojans unstable on Gyr time scales, which was suggested [3] to be due to a combination of long-term dynamical erosion and collisions.
Our perspective is thus now more complex. The Jupiter co-orbital population consists of a mix of objects with different resonant time scales that we very loosely divide into the following categories: extremely transient (\(\lesssim\)1 kyr), metastable (10 kyr-100 Myr), primordial Trojan erosion (\(\sim\)Gyr), and stable Trojans (longer than 5 Gyr Solar System time scales). Cases of extremely transient objects, which only last one (or a few) resonant libration periods, have been studied [for example, 16; 17; 18; 20]. Here we have shown for the first time that Jupiter joins the other giant planets by having recently-trapped co-orbitals that last for an enormous range of metastable time scales (10 kyr - 100 Myr) consistent with the transient co-orbital populations of all the giant planets. At the very longest time scales, only Jupiter and Neptune harbor both stable Trojans swarms and Trojans whose current stability time scales are of order Gyr. These latter objects can be a combination of the longest-lived traps of Centaurs and the slowly eroding edges of the original primordial population. The metastable objects we identify in this paper, however, must be recently captured into the co-orbital state out of the planet-crossing Centaur population, with a possible (probably small) contribution from escaping main-belt asteroids [4].
The metastable co-orbitals identified here thus represent the discovery of the first (curiously-missing) jovian members of the expected transient co-orbital population accompanying each giant planet [4; 5; 9]. Numerical simulations of the Centaur and escaped asteroid populations, both of which can become temporarily-trapped into 1:1 jovian resonance, allowed [4] to compute the steady-state fractions present in the jovian co-orbital population at any given time. Given the number of absolute magnitude \(H<18\) (sizes of order 1 km) near-Earth objects (NEOs) and Centaurs [22], [4] estimated that there should be \(\sim\)1-100 metastable jovian co-orbitals that remain resonant on time scales of \(\lesssim\)10 Myr. Here we identify 27 metastable jovian co-orbitals, all of which have \(H<18\), that remain stable for time scales of \(10^{3}-10^{8}\) years, in agreement with the theoretical estimate.
More metastable jovian co-orbitals will certainly be telescopically detected; given the rarity of capture into co-orbital resonance these additional co-orbitals are likely to be small, which is partly the reason more have not been identified to date by current surveys. The upcoming Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST), with its large aperture and magnitude depth, should increase the number of Jupiter Trojan detections by \(\sim\)25x [23]. These fainter detections will also provide more objects currently in metastable traps with Jupiter; their identification as metastable, however, will
require more than simple osculating element cuts in semimajor axis and eccentricity near Jupiter's values, as our massive numerical study has demonstrated (see Figure 2).
The Lucy spacecraft mission will visit five Trojans during 2027 - 2033 [24]. We have carefully integrated these Trojans for 50 Myr to study their stability in the 1:1 jovian resonance. We find that all 1000 clones for each of these five mission targets remain stable in the resonance over this time scale and thus are almost certainly primordial objects.
A preliminary examination of the (sparse) color data from the Sloan Digital Sky Survey [25] for the faint metastable co-orbitals identified here shows that, relative to most known Trojans [26] and the Lucy flyby targets, the objects 2016 TE71 (metastable horseshoe), (288282) 2004 AH4 (metastable L5 Trojan), and (163240) 2002 EM157 (metastable L4 Trojan) have evidence for redder photometric \(g-r\) and/or \(g-i\) optical colors than typical Trojans. This would be expected if they are recently trapped Centaurs. | Jupiterの共軌道は、1:1の平均運動共鳴で、Jovian co-orbitalsはJupiters orbitに共鳴しています。これは、L4とL5のLagrange点の周囲に存在する、10,000以上のトロヤ星、これらの点は、惑星形成の際に形成された安定な群体として見なされています。巨大な数値計算を通じて、私たちは初めて、トロヤの一部が単なる「メタスタбільな」ものだと特定しました。それは原始的なものではなく、太陽系外から捕獲された比較的寿命の長い(10 kyr - 100 Myr)メタスタбільな状態に置かれています。これらのトロヤは、散乱状態に戻りうる状態です。私たちはまた、多くの共振運動周期を経験した、最初の2つのJupiters horseshoe co-orbitalsを特定し、さらに、4-130 kyrの寿命を持つ、8つのJupiter quasi- |
2309.05300 | Decoupling Common and Unique Representations for Multimodal
Self-supervised Learning | The increasing availability of multi-sensor data sparks wide interest in
multimodal self-supervised learning. However, most existing approaches learn
only common representations across modalities while ignoring intra-modal
training and modality-unique representations. We propose Decoupling Common and
Unique Representations (DeCUR), a simple yet effective method for multimodal
self-supervised learning. By distinguishing inter- and intra-modal embeddings
through multimodal redundancy reduction, DeCUR can integrate complementary
information across different modalities. We evaluate DeCUR in three common
multimodal scenarios (radar-optical, RGB-elevation, and RGB-depth), and
demonstrate its consistent improvement regardless of architectures and for both
multimodal and modality-missing settings. With thorough experiments and
comprehensive analysis, we hope this work can provide valuable insights and
raise more interest in researching the hidden relationships of multimodal
representations. | Yi Wang, Conrad M Albrecht, Nassim Ait Ali Braham, Chenying Liu, Zhitong Xiong, Xiao Xiang Zhu | 2023-09-11T08:35:23 | http://arxiv.org/abs/2309.05300v3 | # _DeCUR_: Decoupling Common & Unique Representations for Multimodal Self-supervision
###### Abstract
The increasing availability of multi-sensor data sparks interest in multimodal self-supervised learning. However, most existing approaches learn only common representations across modalities while ignoring intra-modal training and modality-unique representations. We propose **D**ecoupling **C**ommon and **U**nique **R**epresentations (DeCUR), a simple yet effective method for multimodal self-supervised learning. By distinguishing inter- and intra-modal embeddings, DeCUR is trained to integrate complementary information across different modalities. We evaluate DeCUR in three common multimodal scenarios (radar-optical, RGB-elevation, and RGB-depth), and demonstrate its consistent benefits on scene classification and semantic segmentation downstream tasks. Notably, we get straightforward improvements by transferring our pretrained backbones to state-of-the-art supervised multimodal methods without any hyperparameter tuning. Furthermore, we conduct a comprehensive explainability analysis to shed light on the interpretation of common and unique features in our multimodal approach. Codes are available at [https://github.com/zhu-xlab/DeCUR](https://github.com/zhu-xlab/DeCUR).
## 1 Introduction
Self-supervised learning has achieved breakthroughs in machine learning (Ericsson et al., 2022) and many other communities (Krishnan et al., 2022; Wang et al., 2022). Driven by the success in single modality representation learning, as well as the great potential that large-scale multi-sensor data bears, multimodal self-supervised learning is gaining increasing attention. While image, language and audio (Deldari et al., 2022) have been widely studied, multimodality in other real-world scenarios is lagging behind, such as RGBD indoor scene understanding and multi-sensor Earth observation. In this work, we dig into these important modalities and propose DeCUR, a simple yet effective self-supervised method for multimodal representation learning. We demonstrate the effectiveness of DeCUR on three common multimodal scenarios: Synthetic Aperture Radar (SAR) - multispectral optical, RGB - Digital Elevation Model (DEM), and RGB - depth.
A common strategy for exisiting multimodal self-supervised learning is to use different modalities as augmented views and conduct cross-modal contrastive learning. Such methods follow a similar design of SimCLR (Chen et al., 2020) and have been widely studied in image-language pretraining. One famous example is CLIP (Radford et al., 2021), where a contrastive loss is optimized for a batch of image-text pairs. However, these methods have common disadvantages such as requiring negative samples and a large batch size, which limit the performance on smaller-scale but scene-complex
Figure 1: Decoupled common and unique representations across two modalities visualized by t-SNE (Van der Maaten and Hinton, 2008). Each embedding dimension is one data point. Blue and green points indicate unique dimensions from modality A and B; red points indicate (overlapped) common dimensions. Best view in color.
datasets. To tackle these issues, we revisit Barlow Twins (Zbontar et al., 2021), a redundancy reduction based self-supervised learning algorithm that can work with small batch size, and does not rely on negative samples. Barlow Twins works by driving the normalized cross-correlation matrix of the embeddings of two augmented views towards the identity. We show that Barlow Twins can be naturally extended to multimodal pretraining with modality-specific encoders, and present its advantages over exsiting methods with contrastive negative sampling.
More importantly, most existing multimodal studies focus only on common representations across modalities (Scheibenreif et al., 2022; Radford et al., 2021; Wang et al., 2021), while ignoring intra-modal and modality-unique representations. This forces the model to put potentially orthogonal representations into a common embedding space, limiting the model's capacity to better understand the different modalities. To solve this problem, we introduce the idea of decoupling common and unique representations. This can be achieved by as simple as separating the corresponding embedding dimensions. During training, we maximize the similarity between common dimensions and decorrelate the unique dimensions across modalities. We also introduce intra-modal training on all dimensions, which ensures the meaningfulness of modality-unique dimensions, and enhances the model's ability to learn intra-modal knowledge.
In addition, little research has been conducted on the explainability of multimodal self-supervised learning. While multiple sensors serve as rich and sometimes unique information sources, existing works like Gur et al. (2021) only consider a single modality. To bridge this gap, we perform an extensive explainability analysis on our method. We visualize the saliency maps of common and unique representations and analyse the statistics from both spatial and spectral domain. The results provide valuable insights towards the interpretation of multimodal self-supervised learning.
In summary, our main contributions are listed as follows:
* We propose DeCUR, a simple yet effective multimodal self-supervised learning method. DeCUR decouples common and unique representations across different modalities and enhances both intra- and inter-modal learning.
* We evaluate DeCUR with rich experiments covering three important multimodal scenarios.
* We conduct extensive explainability analysis to push forward the interpretability of multimodal self-supervised learning.
## 2 Related work
**Self-supervised learning.** Self-supervised learning with a single modality has been widely studied. Following the literature, it can be categorized into three main types: generative methods (e.g. Autoencoder (Vincent et al., 2010) and MAE (He et al., 2022)), predictive methods (e.g. predicting rotation angles (Gidaris et al., 2018)) and contrastive methods (joint embedding architectures with or without negative samples). Contrastive methods can be further categorized into four strategies of self-supervision: 1) contrastive learning with negative samples (e.g. CPC (Oord et al., 2018), SimCLR (Chen et al., 2020) and MoCo (He et al., 2020)); 2) clustering feature embeddings (e.g. SwAV (Caron et al., 2020)); 3) knowledge distillation (e.g. BYOL (Grill et al., 2020), SimSiam (Chen & He, 2021) and DINO (Caron et al., 2021)); 4) redundancy reduction (e.g. Barlow Twins (Zbontar et al., 2021) and VICReg (Bardes et al., 2021)). While most existing multimodal works are closely related to the first strategy, DeCUR belongs to redundancy reduction as a natural extension of Barlow Twins that does not require negative samples. DeCUR's decoupling strategy can be perfectly integrated into a simple correlation-matrix-based loss design in Barlow Twins (unlike in VICReg, which is also possible to apply but introduces complexity and more hyperparameters).
**Multimodal self-supervised learning.** The idea of contrastive self-supervised learning can be naturally transferred to multimodal scenarios, as different modalities naturally serve as the augmented views for the joint embedding architectures. Currently, contrastive learning with negative samples has been the most developed: CLIP (Radford et al., 2021) for language-image, VATT (Akbari et al., 2021) for video-audio-text, and variants of SimCLR (Scheibenreif et al., 2022; Xue et al., 2022) for radar-optical. Different from these methods, we propose to explore the potential of negative-free methods by extending the redundancy reduction loss of Barlow Twins. On the other hand, we share an insight with Yang et al. (2022) and Wang et al. (2022) that intra-modal representations are important complements to cross-modal representations. Based on that, we take one step further and decouple common and unique information from different modalities.
**Modality decoupling.** While not widely explored in multimodal self-supervised learning, modality decoupling has been proven beneficial in supervised learning. Xiong et al. (2020, 2021) studied multimodal fusion from the network-architecture perspective, proposing modality separation networks for RGB-D scene recognition. Peng et al. (2022) investigated modality dominance from the angle of optimization flow, proposing on-the-fly gradient modulation to balance and control the optimization of each modality in audio-visual learning. Zhou et al. (2023) observed feature redundancy for different supervision tasks, proposing to decompose task-specific and task-shared features for multitask learning in recommendation systems. Different from the above, we directly perform modality decoupling on the embeddings by separating common and unique dimensions. This simple strategy requires neither architecture modification nor supervision guidance, thus fitting well the generalizability and transferability of self-supervised learning.
## 3 Methodology
Figure 2 presents the general structure of DeCUR. As a multimodal extension of Barlow Twins (Zbontar et al., 2021), DeCUR performs self-supervised learning by redundancy reduction in the joint embedding space of augmented views from intra-/inter-modalities.
Given a batch of multimodal input pairs \(X_{M1}\) and \(X_{M2}\), two batches of augmented views \(X_{M1}{}^{\prime}\) and \(X_{M1}{}^{\prime\prime}\) (or \(X_{M2}{}^{\prime}\) and \(X_{M2}{}^{\prime\prime}\)) are generated from each modality. Each of the four batches is then fed to a modality-specific encoder and projector, producing batches of embeddings \(Z_{M1}{}^{\prime}\), \(Z_{M1}{}^{\prime\prime}\), \(Z_{M2}{}^{\prime}\) and \(Z_{M2}{}^{\prime\prime}\) respectively. Batch normalization is applied on each batch of embeddings such that they are mean-centered along the batch dimension. Next, multimodal redundancy reduction is performed on the cross-correlation matrices \(\mathcal{C}\) of the embedding vectors.
\[\mathcal{C}_{ij}=\frac{\sum_{b}z_{b,i}^{A}z_{b,j}^{B}}{\sqrt{\sum_{b}\left(z_{ b,i}^{A}\right)^{2}}\sqrt{\sum_{b}\left(z_{b,j}^{B}\right)^{2}}} \tag{1}\]
where \(Z^{A}\), \(Z^{B}\) are two embedding vectors, \(b\) indexes batch samples, and \(i\), \(j\) index the dimension of the embedding vectors. \(\mathcal{C}\) is a square matrix with size the dimensionality of the embedding vectors, and with values comprised between -1 and 1.
Figure 2: The general structure of DeCUR. \(M1\) and \(M2\) represent two modalities. Black and white color in the cross-correlation matrices represent 1 and 0 respectively. Two augmented views from each modality are fed to modality-specific encoders (\(E1\), \(E2\)) and projectors (\(P1\), \(P2\)) to get the embeddings \(Z\). For cross-modal embeddings, the dimensions are separated into common and unique ones. The correlation matrix of the common dimensions is optimized to be close to the identity, while that of the unique ones to zero. For intra-modal embeddings, both common and unique dimensions are used for the correlation matrix which is optimized to be close to the identity. This naturally helps maintain the meaningfulness of the unique dimensions. In total, DeCUR decouples modality-unique embeddings and learns both intra- and inter-modal representations.
### Cross-modal representation decoupling
While most multimodal self-supervised learning algorithms consider only common representations, we account for the existence of modality-unique representations and decouple them during training. This can be done simply by separating the embedding dimensions into \(K_{c}\) common and \(K_{u}\) unique dimensions. The common representations should be identical across modalities, while the modality-specific unique representations should be decorrelated.
On the one hand, a sub-matrix \(\mathcal{C}_{c}\) with size \(K_{c}\times K_{c}\) is generated from only the common dimensions of the embedding vectors \(Z_{M1}{}^{\prime}\) and \(Z_{M2}{}^{\prime}\) for both modalities. The redundancy reduction loss for the cross-modal common representations reads:
\[\mathcal{L}_{common}=\sum_{i}\left(1-\mathcal{C}_{ii}\right)^{2}+\lambda_{c} \cdot\sum_{i}\sum_{j\neq i}\mathcal{C}_{ij}^{\,2} \tag{2}\]
where \(\lambda_{c}\) is a positive constant trading off the importance of the first invariance term (to make the common embeddings invariant to the input modalities) and the second redundancy reduction term (to decorrelate the embedding vector components and avoid model collapse).
On the other hand, a sub-matrix \(\mathcal{C}_{u}\) with size \(K_{u}\times K_{u}\) is generated from only the unique dimensions of the embedding vectors \(Z_{M1}{}^{\prime}\) and \(Z_{M2}{}^{\prime}\) for both modalities. The redundancy reduction loss for the cross-modal unique representations reads:
\[\mathcal{L}_{unique}=\sum_{i}\mathcal{C}_{ii}^{\,2}+\lambda_{u}\cdot\sum_{i}\sum_{j\neq i}\mathcal{C}_{ij}^{\,2} \tag{3}\]
where \(\lambda_{u}\) is a positive constant trading off the importance of the first decorrelation term (to decorrelate different modalities) and the second redundancy reduction term (to decorrelate the embedding vector components). However, pure decoupling does not ensure the meaningfulness of the unique dimensions, i.e., the model could generate random decorrelated values. To tackle this issue, we further introduce intra-modal representation enhancement that covers both common and unique dimensions within each modality.
### Intra-modal representation enhancing
To ensure the meaningfulness of the decoupled unique representations, as well as to enhance intra-modal representations, we introduce intra-modal training that covers both common and unique dimensions. For each modality, a cross-correlation matrix \(\mathcal{C}_{\text{M1}}\) (or \(\mathcal{C}_{\text{M2}}\)) is generated from the full dimensions of the embedding vectors \(Z_{M1}{}^{\prime}\) and \(Z_{M1}{}^{\prime\prime}\) (or \(Z_{M2}{}^{\prime}\) and \(Z_{M2}{}^{\prime\prime}\)). The redundancy reduction losses for the intra-modal representations reads:
\[\mathcal{L}_{M1}=\sum_{i}\left(1-\mathcal{C}_{\text{M1}ii}\right)^{2}+\lambda _{M1}\cdot\sum_{i}\sum_{j\neq i}\mathcal{C}_{\text{M1}ij}^{\,2} \tag{4}\]
\[\mathcal{L}_{M2}=\sum_{i}\left(1-\mathcal{C}_{\text{M2}ii}\right)^{2}+\lambda _{M2}\cdot\sum_{i}\sum_{j\neq i}\mathcal{C}_{\text{M2}ij}^{\,2} \tag{5}\]
where \(\lambda_{M1}\) and \(\lambda_{M2}\) are positive constants trading off the importance of the invariance term and the redundancy reduction term.
Combining the cross-modal common and unique and intra-modal loss terms, the overall training objective of DeCUR reads:
\[\mathcal{L}=\mathcal{L}_{common}+\mathcal{L}_{unique}+\mathcal{L}_{M1}+ \mathcal{L}_{M2} \tag{6}\]
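A minimal PyTorch sketch of this objective is given below, following Eqs. (1)-(6) directly. The helper functions, variable names and the standardization details are ours; the exact normalization, dimension split and per-term \(\lambda\) values may differ from the released implementation.

```python
import torch

def cross_corr(za, zb):
    """Normalized cross-correlation matrix of two batches of embeddings (Eq. 1)."""
    za = (za - za.mean(0)) / za.std(0)
    zb = (zb - zb.mean(0)) / zb.std(0)
    return za.T @ zb / za.shape[0]

def bt_terms(c, on_diag_target, lam):
    """On-diagonal (invariance or decorrelation) term plus off-diagonal redundancy reduction."""
    on = ((torch.diagonal(c) - on_diag_target) ** 2).sum()
    off = (c - torch.diag_embed(torch.diagonal(c))).pow(2).sum()
    return on + lam * off

def decur_loss(z1a, z1b, z2a, z2b, k_common, lam=0.0051):
    """z1a/z1b: two augmented views of modality 1; z2a/z2b: two views of modality 2."""
    # Cross-modal: common dimensions pushed towards the identity (Eq. 2),
    # unique dimensions decorrelated across modalities (Eq. 3).
    c_cross = cross_corr(z1a, z2a)
    loss_common = bt_terms(c_cross[:k_common, :k_common], on_diag_target=1.0, lam=lam)
    loss_unique = bt_terms(c_cross[k_common:, k_common:], on_diag_target=0.0, lam=lam)
    # Intra-modal: full-dimension Barlow Twins term per modality (Eqs. 4-5).
    loss_m1 = bt_terms(cross_corr(z1a, z1b), on_diag_target=1.0, lam=lam)
    loss_m2 = bt_terms(cross_corr(z2a, z2b), on_diag_target=1.0, lam=lam)
    return loss_common + loss_unique + loss_m1 + loss_m2  # Eq. 6
```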
## 4 Implementation Details
**Pretraining datasets.** We pretrain DeCUR in three multimodal scenarios: SAR-optical, RGB-DEM and RGB-depth. For SAR-optical, we use the SSL4EO-S12 dataset (Wang et al., 2022c), which consists of 250k multi-modal (SAR-multispectral) multi-temporal (4 seasons) image triplets with size 264x264. One random season is selected to generate each augmented view. For RGB-DEM, we conduct pretraining on the training set of the GeoNRW dataset (Baier et al., 2020). The dataset includes orthorectified aerial photographs (RGB), LiDAR-derived digital elevation models (DEM) and OpenStreetMap-refined segmentation maps from the German state of North Rhine-Westphalia. We crop the raw 6942 training scenes to 111k patches with size 250x250. For RGB-depth, we use the SUN-RGBD dataset, which consists of 10335 RGBD pairs with various image sizes. Following Zhang et al. (2022), we preprocess the depth images to HHA format (Gupta et al., 2014).
**Data augmentations.** We follow common augmentations in the SSL literature (Grill et al., 2020) for optical and RGB images, and remove the ones that are not suitable for specific modalities. Specifically, for SAR images, we use random resized crop (224 x 224), grayscale, Gaussian blur, and horizontal and vertical flip; for DEM images, we use random resized crop (224 x 224) and horizontal and vertical flip; for HHA images, we use random resized crop (224 x 224) and horizontal flip.
**Model architecture.** As a multimodal extension of Barlow Twins (Zbontar et al., 2021), each branch holds a separate backbone and a 3-layer MLP projector (each with output dimension 8192). DeCUR is trained on the embedding representations after the projector, whose dimensions are separated into common and unique. We do a light grid search to get the best corresponding ratio. For SAR-optical, the percentage of common dimensions is 87.5%; for RGB-DEM and RGB-depth it is 75%. The backbones are transferred to downstream tasks. We use ResNet-50 (He et al., 2016) for all scenarios, with additional Segformers (Xie et al., 2021) for RGB-depth.
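For concreteness, one modality branch under these settings can be assembled as sketched below. This is a simplified construction, not the released code: the input channel counts (2 for Sentinel-1 SAR, 13 for Sentinel-2 multispectral) are assumptions for the SAR-optical scenario, and with an 8192-dimensional projector an 87.5% common share corresponds to the first 7168 dimensions.

```python
import torch.nn as nn
import torchvision

def make_branch(in_channels, proj_dim=8192):
    """ResNet-50 backbone plus a 3-layer MLP projector for one modality."""
    backbone = torchvision.models.resnet50()
    backbone.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7, stride=2, padding=3, bias=False)
    backbone.fc = nn.Identity()  # expose the 2048-d pooled features
    projector = nn.Sequential(
        nn.Linear(2048, proj_dim), nn.BatchNorm1d(proj_dim), nn.ReLU(inplace=True),
        nn.Linear(proj_dim, proj_dim), nn.BatchNorm1d(proj_dim), nn.ReLU(inplace=True),
        nn.Linear(proj_dim, proj_dim),
    )
    return nn.Sequential(backbone, projector)

branch_sar, branch_opt = make_branch(2), make_branch(13)
k_common = int(0.875 * 8192)  # = 7168 common embedding dimensions
```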
**Optimization.** We follow the optimization protocol of Barlow Twins (Zbontar et al., 2021) and BYOL (Grill et al., 2020), with a default of 100 epochs and a batch size of 256 (200 epochs and batch size 128 for RGB-depth). The trade-off parameters \(\lambda\) of the loss terms are set to 0.0051. Training is distributed across 4 NVIDIA A100 GPUs and takes about 30 hours on SSL4EO-S12, 4 hours on GeoNRW, and 6 hours on SUN-RGBD.
## 5 Experimental Results
We evaluate DeCUR by pretraining and transferring to three common multimodal tasks: SAR-optical scene classification, RGB-DEM semantic segmentation, and RGB-depth semantic segmentation. We follow common evaluation protocols of self-supervised learning: linear classification (with frozen encoder) and fine-tuning. We report results for full- and limited-label settings, and for both multimodal and missing-modality (i.e., only a single modality is available) scenarios.
### SAR-optical scene classification
We pretrain SAR-optical encoders on SSL4EO-S12 (Wang et al., 2022c) and transfer them to BigEarthNet-MM (Sumbul et al., 2021), a multimodal multi-label scene classification dataset with 19 classes. Simple late fusion is used for multimodal transfer learning, i.e., concatenating the encoded features from both modalities, followed by one classification layer. Mean average precision (mAP, global average) is used as the evaluation metric.
We report multimodal linear classification and fine-tuning results with 1% and 100% training labels in Table 1 (left). DeCUR outperforms existing cross-modal SimCLR-like contrastive learning (Scheibenreif et al., 2022; Jain et al., 2022; Xue et al., 2022) by 2%-4.8% in most scenarios, while achieving comparable performance on fine-tuning with full labels. Notably, Barlow Twins itself works better than both the above methods and VICReg (Bardes et al., 2021), compared to which we improve by 0.7% and 1.4% on linear evaluation and fine-tuning with 1% labels.
Additionally, we report SAR-only results in Table 1 (right), as it is an essential scenario in practice when optical images are either unavailable or heavily covered by clouds. DeCUR outperforms
SimCLR in most scenarios by 5%-8%, while achieving comparable performance on fine-tuning with full labels. In addition, DeCUR outperforms single-modal Barlow Twins pretraining by 2.1%-2.5% with 1% labels and 0.8%-2.1% with full labels, indicating that joint multimodal pretraining helps the model better understand individual modalities.
### RGB-DEM semantic segmentation
We pretrain and evaluate RGB-DEM encoders on GeoNRW (Baier et al., 2020) for semantic segmentation (10 classes). We use simple fully convolutional networks (FCN) (Long et al., 2015) as the segmentation model, which concatenates the last three layer feature maps from both modalities, upsamples and sums them up for the prediction map. Similar to the classification task, linear classification is conducted by freezing the encoder, and fine-tuning is conducted by training all model parameters. Mean intersection over union (mIoU) is used as the evaluation metric.
We report multimodal linear classification and fine-tuning results with 1% and 100% training labels in Table 2 (left). Promisingly, DeCUR outperforms SimCLR and CLIP in all scenarios by a large margin from 6.7% to 12.1%. DeCUR also works better than Barlow Twins and VICReg by at least 3.3% with 1% labels.
Meanwhile, we report RGB-only results in Table 2 (right), as in practice DEM data is not always available. Again DeCUR shows a significant improvement compared to SimCLR and VICReg in all scenarios from 4% to 14.5%. In addition, DeCUR outperforms cross-modal Barlow Twins by 1.1%-2.9%, and single-modal Barlow Twins by 0.8%-7.7%.
### RGB-depth semantic segmentation
We pretrain RGB-depth encoders on SUN-RGBD (Song et al., 2015) and transfer them to SUN-RGBD and NYU-Depth v2 (Nathan Silberman and Fergus, 2012) datasets for semantic segmentation (37 and 40 classes, respectively). We transfer ResNet50 to simple FCN (Long et al., 2015) and Segformer (Xie et al., 2021) to the recent CMX (Zhang et al., 2022) model. We report single and multimodal fine-tuning results with mIoU and overall accuracy. As is shown in Table 3, when using simple segmentation models, DeCUR helps improve FCN over CLIP by 4.0% mIoU and 1.3% accuracy on SUN-RGBD, and 0.8% mIoU and 0.6% accuracy on NYU-Depth v2.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline SAR-optical & \multicolumn{2}{c}{1\% labels} & \multicolumn{2}{c}{100\% labels} & \multicolumn{2}{c}{100\% labels} \\ & Linear & Fine-tune & Linear & Fine-tune \\ \hline Rand. Int. & 58.7 & 57.0 & 70.1 & 70.1 \\ Supervised & 77.0 & 77.0 & 88.9 & 88.9 \\ SimCLR-cross & 77.4 & 78.7 & 82.8 & 89.6 \\ CLIP & 77.4 & 78.7 & 82.8 & 89.6 \\ Barlow Twins & 78.7 & 80.3 & - & \multicolumn{1}{c}{VICReg} & 69.3 \\ VICReg & 74.5 & 79.0 & - & \multicolumn{1}{c}{-} \\ DeCUR (ours) & **79.4** & **81.7** & **85.4** & **89.7** \\ \hline \hline \end{tabular}
\begin{tabular}{l c c c c} \hline \hline SAR & \multicolumn{2}{c}{1\% labels} & \multicolumn{2}{c}{100\% labels} \\ & Linear & Fine-tune & Linear & Fine-tune \\ \hline Rand. Init. & 50.0 & 50.0 & 54.2 & 54.2 \\ Supervised & 67.5 & 67.5 & 81.9 & 81.9 \\ SimCLR-cross & 68.1 & 70.4 & 71.7 & **85.7** \\ Barlow Twins & 72.3 & 73.7 & - & - \\ VICReg & 69.3 & 71.9 & - & - \\ Barlow Twins-SAR & 71.2 & 73.3 & 77.5 & 81.6 \\ DeCUR (ours) & **73.7** & **75.4** & **78.3** & 83.7 \\ \hline \hline \end{tabular}
\end{table}
Table 1: SAR-optical transfer learning results on BigEarthNet-MM. Left: multimodal; right: SAR-only. We report linear classification and fine-tuning scores for training with both 100% and 1% labels. Rand. Init. represents random initialization, -cross represents cross-modal, -SAR represents SAR-only. Best per-column scores are marked in **bold**.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline RGB-DEM & \multicolumn{2}{c}{1\% labels} & \multicolumn{2}{c}{100\% labels} \\ & Linear & Fine-tune & Linear & Fine-tune \\ \hline Rand. Init. & 14.1 & 14.1 & 23.0 & 23.0 \\ Supervised & 22.1 & 22.1 & 44.0 & 44.0 \\ SimCLR-cross & 23.0 & 30.2 & 33.2 & 47.5 \\ CLIP & 22.8 & 28.8 & 35.0 & 46.7 \\ Barlow Twins & 31.2 & 33.6 & - & - \\ VICReg & 27.4 & 32.8 & - & - \\ DeCUR (ours) & **34.9** & **36.9** & **43.9** & **48.7** \\ \hline \hline \end{tabular}
\begin{tabular}{l c c c c} \hline \hline RGB & \multicolumn{2}{c}{1\% labels} & \multicolumn{2}{c}{100\% labels} \\ & Linear & Fine-tune & Linear & Fine-tune \\ \hline Rand. Init. & 14.2 & 14.2 & 18.5 & 18.5 \\ Supervised & 17.5 & 17.5 & 38.8 & 38.8 \\ SimCLR-cross & 20.1 & 25.9 & 29.6 & 24.5 \\ Barlow Twins & 29.4 & 33.4 & - & - \\ VICReg & 23.7 & 28.7 & - & - \\ Barlow Twins-RGB & 28.6 & 32.6 & 36.2 & 45.7 \\ DeCUR (ours) & **31.4** & **34.5** & **43.9** & **46.5** \\ \hline \hline \end{tabular}
\end{table}
Table 2: RGB-DEM transfer learning results on GeoNRW. Left: multimodal; right: RGB-only. We report linear classification and fine-tuning mIoU scores for training with both 100% and 1% labels.
Promisingly, consistent improvements are observed by simply transferring the pretrained backbones to SOTA supervised multimodal fusion models. Following the published codebase and without tuning any hyperparameter, we push CMX-B2 from mIoU 49.7% to 50.6% on SUN-RGBD dataset, and CMX-B5 from mIoU 56.9% to 57.3% on NYU-Depth v2 dataset.
## 6 Ablation studies
For all ablation studies, we pretrain ResNet-50 backbones on SSL4EO-S12 for SAR-optical and GeoNRW for RGB-DEM. Unless explicitly noted, we do fine-tuning on BigEarthNet-MM (SAR-optical) and GeoNRW (RGB-DEM) with 1% training labels.
**Loss terms.** The ablation results for the components of our loss terms are shown in Table 4. We first remove both intra-modal training and modality decoupling, i.e., a cross-modal Barlow Twins remains. The downstream performance decreases as expected, as neither intra-modal information nor modality-unique information is learned. Then we remove intra-modal training and keep modality decoupling, which yields inconsistent performance changes across the modality scenarios. This can be explained by the fact that, without intra-modal training, the unique dimensions can be essentially arbitrary and are not necessarily meaningful. Finally, we remove modality decoupling and keep intra-modal training, which gives the second-best performance among the ablations. This confirms the benefit of intra-modal representations as a complement to the commonly learnt cross-modal representations. All ablated variants fall below the full DeCUR, confirming the effectiveness of the total DeCUR loss.
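For reference, a hedged PyTorch sketch of how these three ingredients can be combined is shown below; the Barlow Twins-style weighting `lam` and the exact normalization are illustrative assumptions, not the precise hyperparameters of DeCUR.

```python
# Sketch of the three loss ingredients ablated above: cross-modal alignment of
# the "common" dimensions, modality decoupling of the "unique" dimensions, and
# intra-modal redundancy reduction between two views of the same modality.
import torch

def cross_corr(za, zb):
    """Cross-correlation matrix of two batch-standardized embeddings (N, D)."""
    za = (za - za.mean(0)) / (za.std(0) + 1e-6)
    zb = (zb - zb.mean(0)) / (zb.std(0) + 1e-6)
    return (za.T @ zb) / za.shape[0]

def bt_terms(c, target_diag):
    """On-diagonal entries pushed towards `target_diag`, off-diagonal towards 0."""
    on = (torch.diagonal(c) - target_diag).pow(2).sum()
    off = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on, off

def decur_style_loss(z1a, z1b, z2a, z2b, n_common, lam=5e-3):
    """Two augmented views per modality; the first n_common dims are 'common'."""
    c = cross_corr(z1a, z2a)
    # (1) cross-modal: align common dimensions across modalities (diagonal -> 1)
    on_c, off_c = bt_terms(c[:n_common, :n_common], 1.0)
    # (2) modality decoupling: decorrelate the unique dimensions (diagonal -> 0)
    on_u, off_u = bt_terms(c[n_common:, n_common:], 0.0)
    loss = on_c + lam * off_c + on_u + lam * off_u
    # (3) intra-modal: Barlow Twins-style loss between two views of one modality
    for za, zb in ((z1a, z1b), (z2a, z2b)):
        on_i, off_i = bt_terms(cross_corr(za, zb), 1.0)
        loss = loss + on_i + lam * off_i
    return loss
```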
**Percentage of common dimensions.** We perform a simple grid search based on downstream performance to find the best ratio between common and unique dimensions for SAR-optical and RGB-DEM respectively, as different modality combinations may have different representation overlaps. As is shown in Figure 3(a), the best percentage of common dimensions is 87.5% for SAR-optical and 75% for RGB-DEM. This could be in line with the fact that there is more valid modality-unique information in an orthophoto and an elevation model than in optical and SAR imagery (when the optical image is cloud-free). In both scenarios, the downstream performance increases and decreases smoothly with the percentage of common dimensions. Interestingly, there is no significant performance drop when decoupling up to 50% unique dimensions. This indicates the sparsity of the common embedding space.
**Number of projector dimensions.** Inherited from Barlow Twins (Zbontar et al., 2021), DeCUR also benefits from increasing the dimensionality of the projector. As can be seen from Figure 3(b), DeCUR keeps improving across all output dimensionalities tested.
\begin{table}
\begin{tabular}{l c c} \hline \hline & SAR-optical (mAP) & RGB-DEM (mIoU) \\ \hline DeCUR (ours) & **81.7** & **36.9** \\ w/o intra\&decoup. & 80.3 & 33.6 \\ w/o intra & 80.1 & 34.3 \\ w/o decoup. & 81.1 & 35.2 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation results on different loss components. _intra_ corresponds to intra-modal training; _decoup_. corresponds to modality decoupling. We report mAP-micro score on BigEarthNet-MM for SAR-optical, and mIoU score on GeoNRW for RGB-DEM.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & Modality & mIoU & Acc. \\ \hline FCN (Long et al., 2015) & RGB & 27.4 & 68.2 \\ FCN (CLIP (Radford et al., 2021)) & RGB & 30.5 & 74.2 \\ FCN (DeCUR) & RGB & 34.5 & 75.5 \\ SA-Gate (Chen et al., 2020b) & RGBD & 49.4 & 82.5 \\ SGNet (Chen et al., 2021) & RGBD & 48.6 & 82.0 \\ ShapeConv (Cao et al., 2021) & RGBD & 48.6 & 82.2 \\ CMX-B2 (Zhang et al., 2022) & RGBD & 49.7 & 82.8 \\ CMX-B2 (DeCUR) & RGBD & **50.6** & **83.2** \\ \hline \hline \end{tabular}
\begin{tabular}{l c c c} \hline \hline Method & Modality & mIoU & Acc. \\ \hline FCN (Long et al., 2015) & RGB & 29.2 & 60.0 \\ FCN (CLIP (Radford et al., 2021)) & RGB & 30.4 & 63.3 \\ FCN (DeCUR) & RGB & 31.2 & 63.9 \\ SA-Gate (Chen et al., 2020b) & RGBD & 52.4 & 77.9 \\ SGNet (Chen et al., 2021) & RGBD & 51.1 & 76.8 \\ ShapeConv (Cao et al., 2021) & RGBD & 51.3 & 76.4 \\ CMX-B5 (Zhang et al., 2022) & RGBD & 56.9 & 80.1 \\ CMX-B5 (DeCUR) & RGBD & **57.3** & **80.3** \\ \hline \hline \end{tabular}
\end{table}
Table 3: RGB-depth transfer learning results on SUN-RGBD (left) and NYU-Depth v2 (right).
**Effect of the projector.** Interestingly, DeCUR works well on the segmentation task even without the projector. As is shown in Figure 3(b), removing the projector still gives reasonable downstream performance, while adding it can further enhance the representations when a large number of output dimensions is used.
## 7 Discussion
In this section, we present an explainability analysis to interpret the multimodal representations learnt by DeCUR. We illustrate the SAR-optical scenario here; see the Appendix for the other multimodal scenarios.
**Cross-modal representation alignment.** To verify that each modality contains unique information that is difficult to integrate into a common space, we calculate the cross-modal alignment of every embedding dimension. This is done by evaluating the on-diagonal losses of the cross-correlation matrix \(\mathcal{C}\):
\[\mathcal{L}_{i}=\left(1-\mathcal{C}_{ii}\right)^{2} \tag{7}\]
where \(i\) denotes the \(i\)th embedding dimension. The closer \(\mathcal{L}_{i}\) is to 0, the better the alignment of the two modalities in this dimension. We compute the loss for all dimensions and plot the histogram of one random batch for both DeCUR and cross-modal Barlow Twins. The former explicitly decouples unique dimensions, while the latter assumes that all dimensions are common. As is shown in Figure 4(a), the alignment loss remains high for a certain number of dimensions with cross-modal Barlow Twins. On the contrary, by allowing the decorrelation of several dimensions (the loss of which moves to 1), the misalignment of the common dimensions decreases. We further visualize this effect with t-SNE by clustering the embedding dimensions. In contrast to the common t-SNE setting in which each input sample is one point, here each embedding dimension is one point. As Figure 1 shows, modality-unique dimensions are well separated, and common dimensions overlap almost perfectly.
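A minimal sketch of this per-dimension diagnostic, assuming the projector embeddings of one batch are available as two tensors (random stand-ins are used below), is:

```python
# Per-dimension alignment loss of Eq. (7) and its histogram for one batch.
import torch

def alignment_histogram(z_a, z_b, bins=50):
    za = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-6)
    zb = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-6)
    c = (za.T @ zb) / za.shape[0]                   # cross-correlation matrix C
    per_dim_loss = (1.0 - torch.diagonal(c)) ** 2   # Eq. (7); 0 means aligned
    return torch.histc(per_dim_loss, bins=bins, min=0.0, max=4.0)

# example with placeholder embeddings: batch of 256 samples, 512 dimensions
hist = alignment_histogram(torch.randn(256, 512), torch.randn(256, 512))
```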
Figure 4: Cross-modal representation alignment (left) and spatial saliency visualization (right).
Figure 3: Ablation results on the percentage of common dimensions and the projector.
**Spatial saliency visualization.** We use GradCAM (Selvaraju et al., 2017) to visualize the spatial saliency of the input modalities corresponding to the common and unique embedding representations. For preparation, we average the common and the unique dimensions into two single output values. Next, one backpropagation pass is performed w.r.t. the corresponding output target (0 for common and 1 for unique) to obtain the GradCAM saliency map after the last convolutional layer. We then upsample the saliency maps to the size of the input. In total, one "common" and one "unique" saliency map are generated for each modality. We present one example for SAR-optical in Figure 4(b): the regions of interest overlap for the common representations and tend to be orthogonal for the unique representations. See the appendix for more examples.
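A hedged sketch of this procedure is given below; it assumes the encoder exposes its last convolutional block as `layer4` and returns a pooled feature vector that the projector maps to the embedding, which are illustrative assumptions rather than the exact implementation.

```python
# GradCAM sketch for the "common" and "unique" saliency maps of one modality.
import torch
import torch.nn.functional as F

def gradcam_common_unique(encoder, projector, x, n_common):
    acts = {}
    handle = encoder.layer4.register_forward_hook(
        lambda m, i, o: acts.__setitem__("a", o))       # keep last-conv activations
    z = projector(encoder(x))                           # (1, D) embedding
    handle.remove()
    targets = [z[:, :n_common].mean(), z[:, n_common:].mean()]  # common, unique
    cams = []
    for t in targets:
        grads = torch.autograd.grad(t, acts["a"], retain_graph=True)[0]
        w = grads.mean(dim=(2, 3), keepdim=True)        # channel weights
        cam = F.relu((w * acts["a"]).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                            align_corners=False)        # upsample to input size
        cams.append(cam / (cam.max() + 1e-8))
    return cams                                         # [common_map, unique_map]
```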
**Spatial saliency statistics.** We further calculate statistics of the common and unique highlighted areas over the whole pretraining dataset. We multiply the common saliency maps of the two modalities (and likewise the unique ones), take the logarithm, and normalize the result of each patch to the range 0 to 1. In other words, for each pair of images we obtain one score for common-area similarity and one for unique-area similarity. We thus get one histogram for common and one for unique, as shown in Figure 5(a). Though the difference is not significant, the histograms show a slight trend of unique scores lying closer to 0 than common scores, indicating that the salient areas of modality-unique representations tend to be more orthogonal, whereas those of the common representations tend to overlap.
**Spectral saliency statistics.** The insignificant difference in the spatial saliency statistics is expected, because image-level semantics are encoded not only in the spatial domain but also in other aspects such as the spectral domain of multispectral images. Therefore, we use Integrated Gradients (Sundararajan et al., 2017) to perform a saliency analysis back to the input and compute statistics over the spectral bands of the optical images. We do not use GradCAM here, as it tends to lose class discriminability in shallow layers (Selvaraju et al., 2017). An importance score is assigned to each input feature by approximating the integral of the gradients of the output (prepared in the same way as for the spatial saliency above) w.r.t. the inputs. We then average the importance scores of each band to obtain a spectral saliency for both the common and the unique representations. We normalize the scores, compute statistics over the whole SSL4EO-S12 dataset, and plot the histograms in Figure 5(b). The figure confirms the stronger influence of the spectral information on the optical-unique representations. Meanwhile, the band-wise importance distribution is promisingly consistent with domain knowledge: 1) the near-infrared bands (B5-B8A, including vegetation red edge) are very important; 2) red (B4) is more important than blue (B2); 3) water vapour (B9) and cirrus (B10) are strongly related to the atmosphere and thus less important for land surface monitoring; etc.
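A minimal sketch of this band-wise statistic, using a straightforward Riemann-sum implementation of Integrated Gradients with a zero baseline (the model interface and the number of steps are assumptions), is:

```python
# Band-wise Integrated Gradients: attribute the averaged common (or unique)
# output to the input pixels, then average the attributions per spectral band.
import torch

def band_importance(model, x, n_common, target="unique", steps=32):
    """x: (B, C, H, W) multispectral batch; model returns (B, D) embeddings."""
    baseline = torch.zeros_like(x)
    total_grads = torch.zeros_like(x)
    for k in range(1, steps + 1):
        xk = (baseline + (k / steps) * (x - baseline)).detach().requires_grad_(True)
        z = model(xk)
        score = z[:, n_common:].mean() if target == "unique" else z[:, :n_common].mean()
        total_grads += torch.autograd.grad(score, xk)[0]
    attributions = (x - baseline) * total_grads / steps    # integrated gradients
    per_band = attributions.abs().mean(dim=(0, 2, 3))       # average over batch and space
    return per_band / (per_band.sum() + 1e-8)               # normalized band importance
```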
## 8 Conclusion
We presented DeCUR, a simple yet insightful multimodal self-supervised learning method. We introduced the idea of modality decoupling and intra-modal representation enhancing which can be implemented as a simple extension of Barlow Twins. Extensive experiments on three common multimodal scenarios prove the effectiveness of DeCUR. Moreover, we conduct a systematic explainability analysis to interpret the proposed method. Our results suggest that modality-decoupling bears great potential for multimodal self-supervised learning.
Figure 5: Spatial saliency statistics (left) and spectral saliency statistics (right). | With the growing availability of multi-sensor data, interest in multimodal self-supervised learning is increasing. However, most existing methods learn only representations that are common across modalities, neglecting intra-modal training and modality-unique representations. We propose Decoupling Common and Unique Representations (DeCUR), a simple yet effective multimodal self-supervised learning method. By reducing inter-modal redundancy, DeCUR integrates complementary information across different modalities. We evaluate DeCUR in three common multimodal scenarios (radar-optical, RGB-elevation, RGB-depth) and consistently demonstrate its improvements regardless of architecture and modality availability. Through thorough experiments and comprehensive analysis, we hope this work, with respect to the hidden relationships of multimodal representations,
2309.15113 | Stable Bosonic Topological Edge Modes in the Presence of Many-Body
Interactions | Many magnetic materials are predicted to exhibit bosonic topological edge
modes in their excitation spectra, because of the nontrivial topology of their
magnon, triplon or other quasi-particle band structures. However, there is a
discrepancy between theory prediction and experimental observation, which
suggests some underlying mechanism that intrinsically suppresses the expected
experimental signatures, like the thermal Hall current. Many-body interactions
that are not accounted for in the non-interacting quasi-particle picture are
most often identified as the reason for the absence of the topological edge
modes. Here we report stable bosonic edge modes at the boundaries of a ladder
quantum paramagnet with gapped triplon excitations in the presence of the full
many-body interaction. For the first time, we use tensor network methods to
resolve topological edge modes in the time-dependent spin-spin correlations and
the dynamical structure factor, which is directly accessible experimentally. We
further show that these edge modes have anomalously long time coherence,
discuss the topological phase diagram of the model, demonstrate the
fractionalization of its low-lying excitations, and propose potential material
candidates. | Niclas Heinsdorf, Darshan G. Joshi, Hosho Katsura, Andreas P. Schnyder | 2023-09-26T17:59:04 | http://arxiv.org/abs/2309.15113v1 | # Stable Bosonic Topological Edge Modes in the Presence of Many-Body Interactions
###### Abstract
Many magnetic materials are predicted to exhibit bosonic topological edge modes in their excitation spectra, because of the nontrivial topology of their magnon, triplon or other quasi-particle band structures. However, there is a discrepancy between theory prediction and experimental observation, which suggests some underlying mechanism that intrinsically suppresses the expected experimental signatures, like the thermal Hall current. Many-body interactions that are not accounted for in the non-interacting quasi-particle picture are most often identified as the reason for the absence of the topological edge modes. Here we report stable bosonic edge modes at the boundaries of a ladder quantum paramagnet with gapped triplon excitations in the presence of the full many-body interaction. For the first time, we use tensor network methods to resolve topological edge modes in the time-dependent spin-spin correlations and the dynamical structure factor, which is directly accessible experimentally. We further show that these edge modes have anomalously long time coherence, discuss the topological phase diagram of the model, demonstrate the fractionalization of its low-lying excitations, and propose potential material candidates.
The enormous success of the description of electronic topological phases of matter quickly inspired research that led to the generalization of the theory to bosonic quasi-particles like photons [1; 2; 3; 4; 5; 6], phonons [7; 8; 9; 10], magnons [11; 12; 13; 14; 15; 16; 17; 18; 19] or triplons [20; 21; 22]. Analogous to the Quantum or Spin Hall effect for electrons, the nontrivial topology of magnetic excitations contributes to unusual responses like the Thermal Hall or Spin Nernst effect in the form of bosonic edge currents.
These topologically protected edge spin waves are a key ingredient for many future spintronic applications, which are a promising alternative to conventional computing with a minimal ecological footprint [23]. There is a multitude of blueprints that make use of their properties to build next-generation devices such as spin-wave diodes, beam splitters, interferometers and others [23; 24; 25; 26].
Moreover, systems that are useful for storing and processing quantum information are highly sought-after. Typically, the quantum information of a many-body state is destroyed rapidly. Overcoming fast decoherence of quantum states is one of the main challenges of quantum computing, and has driven extensive theoretical interest in finding ways to increase their coherence times using many-body localization [27; 28; 29; 30; 31], prethermalization [32; 33; 34], quantum many-body scarring [35; 36; 37] or the states' topological properties[38; 39; 40; 41]. The evolution of topological states over time - specifically in the presence of defects or nonlinearities[42; 43; 44] - as well as imaging and detection techniques[45; 46; 47] are thriving fields of research and crucial for bringing real-world applications on their way.
Even though a significant amount of material candidates has been proposed to host nontrivial excitations [11; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91; 92; 93; 94; 95; 96; 97; 98; 99; 100; 101; 102; 103; 104; 105; 106; 107; 108; 109; 110; 111; 112; 113; 114; 115; 116; 117; 118; 119; 120; 121; 122; 123; 124; 125; 126; 127; 128; 129; 130; 131; 132; 133; 134; 135; 136; 137; 138; 139; 140; 141; 142; 143; 144; 145; 146; 147; 148; 149; 150; 151; 152; 153; 154], their detection remains elusive. While some experiments indeed show signatures of the predicted bosonic edge modes[55; 56; 57; 58; 59; 60], others have not been able to produce evidence of topological quasiparticles [61; 62], and for most of the proposed candidates a clear indication of their presence is yet to be discovered.
Figure 1: (a) Schematic of the spin model on a ladder geometry. On each rung of the ladder, two spin-1/2 sites are (strongly) coupled through antiferromagnetic Heisenberg interaction. Along the side rails, the spins interact (weakly) antiferromagnetically. In this limit the spins form singlet pairs along the rungs of the ladder. (b) The same model but with anti-symmetric (DM) and symmetric (pseudo-dipolar) exchange in the \(y\)-direction given by \(D_{y}\) and \(\Gamma_{y}\). These terms introduce a winding and open a topological gap in the excitation spectrum.
Many reasons have been put forward for why the effects of the topological edge modes could fail to materialize. Among other things, they include thermal damping, magnon-phonon coupling and domain formation, but what is most often identified as the source of the suppression of the thermal Hall effect are many-body effects. In contrast to their fermionic counterparts, even small many-body interactions [61, 63, 64] or exchange anisotropies that are typically present in realistic models [65] seem to substantially affect bosonic topological transport properties.
In this work we report bosonic topological edge modes that are stable in the presence of the full many-body interaction using Density Matrix Renormalization Group (DMRG) and time-evolution methods.
_Model and Harmonic Approximation._--We consider a quantum \(S=1/2\) Heisenberg model on a ladder with spin-orbit interaction and an external magnetic field as shown schematically in Fig. 1. This model, first considered in Ref. [21] is given by the following Hamiltonian:
\[\hat{H}=\hat{H}_{\text{rung}}+\hat{H}_{\text{rail}}+\hat{H}_{\text{SOC}}+\hat{H}_{Z}, \tag{1}\]
\[\hat{H}_{\text{rung}}=J\sum_{i}\hat{\mathbf{S}}_{li}\cdot\hat{\mathbf{S}}_{ri}, \tag{2}\]
\[\hat{H}_{\text{rail}}=K\sum_{i}\left(\hat{\mathbf{S}}_{li}\cdot\hat{\mathbf{S}}_{li+1}+\hat{\mathbf{S}}_{ri}\cdot\hat{\mathbf{S}}_{ri+1}\right), \tag{3}\]
\[\hat{H}_{\text{SOC}}=D_{y}\sum_{\alpha=l,r}\sum_{i}\left(\hat{S}_{\alpha i}^{z}\hat{S}_{\alpha i+1}^{x}-\hat{S}_{\alpha i}^{x}\hat{S}_{\alpha i+1}^{z}\right)+\Gamma_{y}\sum_{\alpha=l,r}\sum_{i}\left(\hat{S}_{\alpha i}^{z}\hat{S}_{\alpha i+1}^{x}+\hat{S}_{\alpha i}^{x}\hat{S}_{\alpha i+1}^{z}\right), \tag{4}\]
\[\hat{H}_{Z}=h_{y}\sum_{\alpha=l,r}\sum_{i}\hat{S}_{\alpha i}^{y}, \tag{5}\]
where \(\hat{\mathbf{S}}_{\alpha i}=\left(\hat{S}_{\alpha i}^{x},\hat{S}_{\alpha i}^{y}, \hat{S}_{\alpha i}^{z}\right)^{\intercal}\) are the usual spin operators with \(\alpha=l,r\) corresponding to the left and right rail of the ladder and \(i\) running over all rungs at positions \(r_{i}\).
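For orientation, a minimal exact-diagonalization sketch of Eqs. (1)-(5) is given below (NumPy/SciPy); it is only feasible for very small ladders and is not the DMRG machinery used for the results in this work, so all parameter values in the example are illustrative.

```python
# Brute-force construction of the ladder Hamiltonian, Eqs. (1)-(5), with open
# boundary conditions; sites are ordered (l,0),(r,0),(l,1),(r,1),...
import numpy as np
from scipy.sparse import identity, kron, csr_matrix
from scipy.sparse.linalg import eigsh

sx = csr_matrix(np.array([[0, 1], [1, 0]], dtype=complex) / 2)
sy = csr_matrix(np.array([[0, -1j], [1j, 0]]) / 2)
sz = csr_matrix(np.array([[1, 0], [0, -1]], dtype=complex) / 2)

def site_op(op, site, n_sites):
    """Embed a single-site operator at position `site` in a chain of n_sites."""
    out = identity(1, format="csr")
    for s in range(n_sites):
        out = kron(out, op if s == site else identity(2, format="csr"), format="csr")
    return out

def ladder_hamiltonian(L, J=1.0, K=0.01, Dy=0.1, Gy=0.1, hy=0.0):
    n = 2 * L                                  # two spin-1/2 sites per rung
    S = lambda op, rung, leg: site_op(op, 2 * rung + leg, n)  # leg 0 = l, 1 = r
    H = csr_matrix((2**n, 2**n), dtype=complex)
    for i in range(L):                         # rung coupling and Zeeman term
        H = H + J * sum(S(o, i, 0) @ S(o, i, 1) for o in (sx, sy, sz))
        H = H + hy * (S(sy, i, 0) + S(sy, i, 1))
    for i in range(L - 1):                     # rail, DM and Gamma terms
        for leg in (0, 1):
            H = H + K * sum(S(o, i, leg) @ S(o, i + 1, leg) for o in (sx, sy, sz))
            H = H + Dy * (S(sz, i, leg) @ S(sx, i + 1, leg)
                          - S(sx, i, leg) @ S(sz, i + 1, leg))
            H = H + Gy * (S(sz, i, leg) @ S(sx, i + 1, leg)
                          + S(sx, i, leg) @ S(sz, i + 1, leg))
    return H

# example: ground-state energy of a tiny L = 4 ladder
H = ladder_hamiltonian(L=4)
E0 = eigsh(H, k=1, which="SA", return_eigenvectors=False)[0]
```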
In the limit of strong anti-ferromagnetic rung coupling, this model is the prototypical example of a gapped quantum paramagnet [66, 67]. In that limit the ground state of the model is close to a product state of spin singlets
\[|\psi_{0}\rangle\sim\bigotimes_{i=1}^{L}(|\uparrow\downarrow\rangle-|\downarrow \uparrow\rangle)/\sqrt{2}, \tag{6}\]
with low-lying triplet excitations, the corresponding quasiparticles being triplons. In the presence of SU(2) symmetry (\(D_{y}=\Gamma_{y}=h_{y}=0\)), the triplons are degenerate, but can be split by applying an external magnetic field due to their different magnetic quantum number. Alternatively, the degeneracy can be lifted through spin-orbit interactions. Its anti-symmetric part - similar to the magnetic field term - splits the triplon modes while retaining U(1) symmetry. The pseudo-dipolar, or \(\Gamma\)-term, which is the symmetric part of the spin-orbit interaction breaks U(1)-symmetry.
In the paramagnetic phase of the model, an effective low-energy spectrum can be calculated by first rewriting the model in terms of triplons through bond-operator theory[68], and then discarding all resulting terms that are not bilinear, which has been shown to be a controlled approximation [69, 70]. Within this harmonic approximation, it was shown in Ref. [21] that the so-obtained excitation spectrum can be characterized by a topological winding number that classifies the topology of the magnet's bulk and enforces - if nontrivial - triplon modes at the ends of the ladder with fractional particle number.
In contrast to the case of topological magnons of ordered magnets [71, 72, 73, 74, 59] - where there is a gapless Goldstone mode - the triplon excitations in the quantum paramagnet are gapped. As a consequence of this bulk gap the triplons, including the topological edge modes, do not have any low-energy states to decay into. Therefore their lifetime is not reduced significantly even in the presence of interactions. By applying an external magnetic field or through spin-anisotropy exchanges, magnon systems can become gapped too. However, a bulk gap induced by these effects is typically much smaller than the rung
Figure 2: Real part of the spin-spin correlation function at time \(t=50/J\) in (a) the topological and (b) the trivial case with \(K=0.01J\) and field strengths \(h_{y}/D_{y}=0.5\) and \(h_{y}/D_{y}=1.5\), respectively, on a ladder with \(L=12\) rungs. The topological auto-correlation at this time step has peaks at the boundaries, which are absent in the trivial case. (c) Time dependence of the boundary auto-correlation Re \(C_{11}^{xx}\) for \(K=0.01J\), \(D_{y}=\Gamma_{y}=0.1J\) and no magnetic field on a ladder with \(L=8\) rungs for different boundary conditions. For PBC (no in-gap mode), it decays rapidly whereas for OBC (in-gap mode) it does not decay. (d) The decay envelopes of (c) for different values of magnetic field. For finite field strengths the correlator decays also for OBC, but more slowly than for PBC.
coupling \(J\), which is the relevant spin exchange energy scale that determines the size of the gap in the ladder system.
The spin model presented herein is suitable for describing the magnetism of potentially many materials and belongs to the well-studied class of two-leg ladder compounds. Examples of spin ladder materials are BiCu\({}_{2}\)PO\({}_{6}\)[76], NaV\({}_{2}\)O\({}_{5}\)[77; 78] or multiple cuprate ladders[79; 80; 81; 82]. The recipe for finding topological edge modes in these systems requires mainly two ingredients: (i) strong antiferromagnetic rung coupling (stronger than along the rails), and (ii) spin-orbit interaction. Because the Heisenberg couplings along the rungs and rails depend on the overlap of the localized electrons, their ratio can easily be tuned using e.g. strain. The same is true for the relative spin-orbit interaction strength[83; 84; 85; 86], however, the \(D\) and \(\Gamma\) terms are additionally restricted by symmetries. These symmetries can be broken, if not already broken internally, by doping, gate voltages or structural manipulations like skewing and stretching. The abundance of experimental handles suggests that a topological phase transition might easily be switchable, which again expands the potential applications of these systems in future devices.
_Tensor Networks and Dynamical Response._--To relate our model directly to an experimentally measurable quantity, we compute the dynamic structure factor (DSF), which is most directly accessed through inelastic neutron scattering[87; 88] or inverse spin Hall noise spectroscopy (ISHNS)[89]. Because we are investigating the effect of interactions, and since our model is (quasi) one-dimensional, we use the DMRG algorithm to obtain the exact many-body ground state of the finite spin ladder[90].
In contrast to electronic systems, the band topology of bosonic quasiparticles is not a ground state property, so in order to access the low-lying excitations, we apply time-evolution methods[91; 92; 93; 94; 95; 96] to compute the odd-part of the time-dependent spin-spin correlation
\[C^{\gamma\gamma^{\prime}}_{ij}(t)=\langle\psi_{0}|\hat{\hat{S}}^{\gamma}_{i} \hat{U}(t)\hat{\hat{S}}^{\gamma^{\prime}}_{j}|\psi_{0}\rangle, \tag{7}\]
with \(\hat{\hat{S}}^{\gamma}_{i}=\hat{S}^{\gamma}_{li}-\hat{S}^{\gamma}_{ri}\) and the unitary time-evolution operator \(\hat{U}(t)\). Already the real-space and -time correlation function shows signatures of edge modes in the topologically nontrivial phase region (\(h_{y}<|D|\)). In Fig. 2 (a) and (b) the real part of \(C^{\gamma\gamma^{\prime}}_{ij}(t)\) for the topological and trivial case at a fixed time step are plotted. The former shows strong correlations that are pinned to the edges of the system, whereas they are absent in the latter.
In Fig. 2 (c), we plot the time dependence of Re \(C^{xx}_{11}\) at zero magnetic field for periodic (PBC) and open boundary conditions (OBC). For PBC there is no topologically protected edge mode at \(i=j=1\), and the excitation decays over time rapidly. In the open system the edge mode has strongly enhanced time coherence, and we cannot resolve any decay during the time it takes the excitation to hit the boundary at the other end of the ladder. Fig. 2 (d) shows the decay envelope of the excitation for increasing magnetic field for periodic (dashed line) and open (solid line) boundaries. For finite magnetic fields the boundary correlation starts to decay, and for stronger fields the difference in lifetime for PBC and OBC - as indicated by the shaded area - becomes less pronounced.
The DSF, which encodes the dynamical susceptibility of the paramagnet is defined as the Fourier transform of the spin-spin correlations in space and time
\[S^{\gamma\gamma^{\prime}}_{k}(\omega)=\frac{1}{2\pi L}\int_{- \infty}^{\infty}dt\ e^{i(\omega-\omega_{0})t}\sum_{i,j}e^{i(r_{j}-r_{i})k}C^{ \gamma\gamma^{\prime}}_{ij}(t). \tag{8}\]
It is a positive and real quantity and can be calculated using DMRG and time-evolution methods. Usually, the system size and parameters of the time-evolution are chosen to minimize finite size effects, and the state is evolved only for as long as the excitation does not reach the boundary of the system. In addition, by imposing translational invariance, one of the spatial degrees of freedom of \(C^{\gamma\gamma^{\prime}}_{ij}(t)\) can be fixed [97; 98; 99; 100]. Because we are explicitly studying the excitations in the presence of a boundary, we require the "full" real-space correlation function, as well as a "long-enough" time-evolution. \(S^{\gamma\gamma^{\prime}}_{k}(\omega)\) obeys sum rules[99] that we track to diagnose the severity of
Figure 3: \(S_{k}(\omega)\) for the (a) topological and (b) trivial quantum paramagnet with \(h_{y}/D_{y}=1/2\) and \(h_{y}/D_{y}=3/2\) respectively on a ladder with \(L=12\) rungs. The dashed line shows the spectrum of the effective low-energy models from Ref. [21]. (c) \(S_{3\pi/2}(\omega)\) for different values of \(h_{y}/D_{y}\). The first two curves lie in the topological phase region (see phase diagram at the bottom) with the in-gap modes marked by dashed lines. At \(h_{y}/D_{y}=1\) the gap is closed. The last curve lies in the trivial region and is gapped, with no mode in-between.
finite-time and -size effects, which lead to numerical artifacts (see supplementary material[101]).
We study the field-susceptible part of Eq. (8), which is given by \(S_{k}(\omega)=S_{k}^{xx}(\omega)+S_{k}^{zz}(\omega)\) and which we simply refer to as the DSF from now on.
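A minimal NumPy sketch of the transform in Eq. (8), assuming the time-dependent correlations are available as a (site, site, time) array (a random placeholder is used below in place of the DMRG/time-evolution data), reads:

```python
# Discrete version of Eq. (8): Fourier transform of C[i, j, t] over space and time.
import numpy as np

def dynamic_structure_factor(C, dt, omega0=0.0):
    """C: complex array of shape (L, L, Nt); returns S on FFT momentum/frequency grids."""
    L, _, Nt = C.shape
    t = np.arange(Nt) * dt
    omegas = 2 * np.pi * np.fft.fftfreq(Nt, d=dt)
    ks = 2 * np.pi * np.arange(L) / L
    # spatial part: sum_{i,j} exp(i k (r_j - r_i)) C_ij(t)
    phase = np.exp(1j * np.outer(ks, np.arange(L)))            # (K, L)
    C_k = np.einsum("ki,kj,ijt->kt", phase.conj(), phase, C)
    # temporal part: integral dt exp(i (w - w0) t) C_k(t)
    S = np.array([[np.trapz(C_k[k] * np.exp(1j * (w - omega0) * t), t)
                   for w in omegas] for k in range(L)])
    return ks, omegas, S / (2 * np.pi * L)

# placeholder correlation data: L = 8 sites, 200 time steps
C = np.random.randn(8, 8, 200) + 1j * np.random.randn(8, 8, 200)
ks, ws, S = dynamic_structure_factor(C, dt=0.05)
```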
_Results._--In Fig. 3 (a) and (b) we plot the DSF of the topological and trivial paramagnet along with the triplon bands obtained within the harmonic approximation in Ref. [21]. For small rail and spin-orbit coupling the harmonic approximation is accurate, because the density of triplons is small leading to only minor renormalizations from triplon-triplon interactions. In the topologically nontrivial case (small values of magnetic field) an additional mode appears between the two bulk bands that we hereafter refer to as _in-gap mode_. It is absent in the trivial phase (large values of magnetic field). Fig. 3 (c) shows the topological phase transition at \(k=3\pi/2\) with the external magnetic field \(h_{y}\) as a tuning parameter. At \(h_{y}=|D_{y}|\) the gap closes, and further increasing the magnetic field it reopens, but with no topological in-gap mode. We did finite size scaling with ladders of up to \(L=24\) rungs to confirm that the in-gap mode retains finite spectral weight.
To confirm that the in-gap mode is really localized at the boundaries of the system, we first calculate the local density of states of the lower band
\[\rho_{i}=\sum_{k}e^{-ir_{i}k}\int_{0}^{J+\delta}d\omega\,S_{k}(\omega), \tag{9}\]
where \(\delta\) is a small enough value such that the spectral weight of any boundary mode, but not that of the upper triplon band is included in the integral. We then define the "topological density of states" as the difference of the local densities of states of the nontrivial paramagnet for open and periodic boundary conditions [21]
\[\rho_{i}^{\rm top}=\rho_{i}^{\rm OBC}-\rho_{i}^{\rm PBC}. \tag{10}\]
This quantity isolates the contribution to the spectrum that stems from introducing a boundary and is shown Fig. 4 (a) for different values of magnetic field. We see two peaks located at the boundary of the ladder, that start to spread into the bulk for finite fields. To compute the associated particle number at one of the edges we integrate \(\rho_{i}^{\rm top}\) for the system's left termination
\[n_{t}=\int_{1}^{L/2}dr\rho_{i}^{\rm top}, \tag{11}\]
and find that the edge mode has fractional particle number. Our numerical result \(n_{t}\sim 0.43\) deviates slightly from the predicted value of \(0.5\)[21] due to spectral leakage caused by the aforementioned finite-size and time effects. However we expect that in the thermodynamic limit this value would indeed tend to \(0.5\). For larger values of field \(n_{t}\) becomes smaller and vanishes at the phase transition at \(h_{y}/D_{y}=1\).
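The corresponding post-processing of Eqs. (9)-(11) can be sketched as follows; the spectral arrays are placeholders, the frequency grid is assumed to be sorted in ascending order, and the discrete sum over sites stands in for the integral of Eq. (11).

```python
# Local and "topological" density of states, Eqs. (9)-(11), from S_k(omega).
import numpy as np

def local_dos(S, ks, omegas, omega_max):
    """S: (K, W) spectral weight on momenta ks and (sorted) frequencies omegas."""
    band = omegas <= omega_max                      # lower band, cutoff J + delta
    weight_k = np.trapz(S[:, band], omegas[band], axis=1)
    sites = np.arange(len(ks))
    phase = np.exp(-1j * np.outer(sites, ks))       # e^{-i r_i k}, Eq. (9)
    return np.real(phase @ weight_k) / len(ks)

def topological_dos(S_obc, S_pbc, ks, omegas, omega_max):
    rho = (local_dos(S_obc, ks, omegas, omega_max)
           - local_dos(S_pbc, ks, omegas, omega_max))   # Eq. (10)
    n_t = rho[: len(ks) // 2].sum()                      # Eq. (11), left termination
    return rho, n_t
```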
The noninteracting triplon winding number is a topological band property[21], so the edge modes are naively expected to be stable only as long as there is a notion of "bands", or in other words as long as triplons provide a suitable quasi-particle description of the ladder's excitation spectrum. We consider three limits of the model's parameter space: (i) large magnetic field \(h_{y}\), (ii) large rail coupling \(K\) and (iii) large spin-orbit interaction \(D_{y}\) and \(\Gamma_{y}\). In case (i), the field-polarized phase, the lower triplon mode condenses at \(h_{y}\sim J\) and the system becomes ferromagnetic with magnons as its low-lying excitations. The band structures derived from linear spin wave theory are given in the supplemental material[101] and are topologically trivial. Case (ii) is the limit of two weakly coupled anti-ferromagnetic Heisenberg chains. The system does not order, even for strong rail coupling (\(K\sim 2J\)). The antiferromagnetic order parameter is finite also for small values of \(K\), but approaches zero as the system size \(L\) is increased, as confirmed by finite size scaling [101]. Case (iii) is the most relevant benchmark of the edge modes' stability that we can provide with the available handles in the model, because here the higher-order terms of the interacting triplon Hamiltonian become important[21]. In this limit there is significant overlap with the two- and three-triplon continuum that leads to damping and decay, and the quasi-particle description within the harmonic approximation breaks down. In contrast to case (i), increasing \(D_{y}\) and \(\Gamma_{y}\) does not lead to condensation of the lower mode, and we did not find any evidence for a phase transition up to values of \(D_{y}=\Gamma_{y}=2.6J\). In Fig. 4 (b) we plot the logarithm of the DSF for \(D_{y}=\Gamma_{y}=1.2J\). Even though the upper triplon band is strongly damped and split into many energy levels, we find that the topological edge modes remain stable.
Figure 4: (a) Topological density of states \(\rho_{i}^{\rm top}\) for \(K=0.01J\), \(D_{y}=\Gamma_{y}=0.1J\) and different values of \(h_{y}\) on a ladder with \(L=8\) rungs. The in-gap mode is localized at the two boundaries of the system with fractional particle numbers at each termination. For finite magnetic field the in-gap mode spreads into the bulk and vanishes at the topological phase transition at \(h_{y}/D_{y}=1\). (b) The logarithm of the DSF for \(K=0.01J\), \(h_{y}=0.2\) and strong SOC \(D_{y}=\Gamma_{y}=1.2J\) on a ladder with \(L=12\) rungs. The quasi-particle peaks are dampened and split. The boundary mode is flat and marked by text.
_Conclusion_--We have demonstrated that, contrary to what the absence of experimental evidence for many predicted materials might suggest, topological bosonic edge modes can be stable in the presence of the full many-body interaction. Even though correlations have been identified as the reason for the absence of topological responses in some cases, our work clearly shows that they do not generically suppress bosonic edge modes, and it prompts the question of what other mechanism is responsible. A natural extension of this work is to apply our method to two-dimensional systems wrapped onto a cylinder. Multi-rail ladders[102; 103], a ferromagnet on a Lieb lattice[104] or the Shastry-Sutherland model[105] are obvious candidates, but also topological magnon systems that are potentially more fragile due to their larger decay phase space.
The absence of signatures of topological edge modes even in systems with good agreement between theory and experiment for bulk properties suggests that the responsible decay channel might be a surface effect, i.e., it lies at the boundary. In a quasi two-dimensional model the edge modes are not completely localized, but propagate along the boundary of the system. This setup would further allow the implementation of a defect at the boundary and to investigate the stability of the topological bosons in its presence.
The typical "workflow" for finding topological edge modes usually consists of classifying the topology of a translationally invariant model, and then infering their existence by bulk-boundary correspondence. Even if the symmetry that protects the band topology is broken weakly, the edge states are still expected to be present as long as the the perturbation is smaller than the gap they live in (these approximate symmetries are sometimes called quasi-symmetries[106; 107]). The chiral symmetry that allows for the definition of a winding number in the noninteracting case is broken by the rail coupling \(K\)[21], and is also clearly broken in the limit of strong SOC, nevertheless our numerical study shows that the edge modes persist, which invokes the question of a more suitable bulk classification.
In recent years, considerable effort has been invested in the extension of topological classifications to interacting systems [108; 109; 110; 111; 112; 113]. These schemes account for the many-body nature of the problem using either a matrix product state or Green's function representation, but are not applicable for the classification of bosonic excitation spectra. Entanglement-based detection of phase transitions has been put forward, but such measures are - even though directly related to the dynamical response of a spin system - not sensitive to topological phase transitions[114]. It is essential to extend existing or find new methods that are able to capture the topological properties of e.g. dynamical spin responses, and we hope our work inspires fruitful research in that direction.
###### Acknowledgements.
We thank A. Nocera and S. M. Winter for helpful discussions. NH acknowledges financial support from the Max Planck Institute for Solid State Research in Stuttgart, Germany and the DAAD JSPS summer program 2022. DGJ acknowledges support from the Department of Atomic Energy, Government of India, under Project Identification No. RTI 4007. A.P.S. and N.H. acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - TRR 360 - 492547816. H.K. was supported by JSPS KAKENHI Grant No. JP23H01093 and MEXT KAKENHI Grant-in-Aid for Transformative Research Areas A "Extreme Universe" (KAKENHI Grant No. JP21H05191).
| Many magnetic materials are predicted to exhibit bosonic topological edge modes in their excitation spectra, owing to the nontrivial topology of their magnon, triplon or other quasi-particle band structures. However, there is a discrepancy between theoretical prediction and experimental observation, which hints at some underlying mechanism that suppresses the expected experimental signatures, such as the thermal Hall current. Many-body interactions that are not accounted for in the non-interacting quasi-particle picture are most often identified as the reason for the absence of the topological edge modes. Here we report stable bosonic edge modes at the boundaries of a ladder quantum paramagnet with gapped triplon excitations. In the time-dependent spin-spin correlations and the dynamical structure factor, the topological edge modes are
2309.03771 | Space-Time Shift Keying Aided OTFS Modulation for Orthogonal Multiple
Access | Space-time shift keying-aided orthogonal time frequency space
modulation-based multiple access (STSK-OTFS-MA) is proposed for reliable uplink
transmission in high-Doppler scenarios. As a beneficial feature of our
STSK-OTFS-MA system, extra information bits are mapped onto the indices of the
active dispersion matrices, which allows the system to enjoy the joint benefits
of both STSK and OTFS signalling. Due to the fact that both the time-, space-
and DD-domain degrees of freedom are jointly exploited, our STSK-OTFS-MA
achieves increased diversity and coding gains. To mitigate the potentially
excessive detection complexity, the sparse structure of the equivalent
transmitted symbol vector is exploited, resulting in a pair of low-complexity
near-maximum likelihood (ML) multiuser detection algorithms. Explicitly, we
conceive a progressive residual check-based greedy detector (PRCGD) and an
iterative reduced-space check-based detector (IRCD). Then, we derive both the
unconditional single-user pairwise error probability (SU-UPEP) and a tight bit
error ratio (BER) union-bound for our single-user STSK-OTFS-MA system employing
the ML detector. Furthermore, the discrete-input continuous-output memoryless
channel (DCMC) capacity of the proposed system is derived. The optimal
dispersion matrices (DMs) are designed based on the maximum attainable
diversity and coding gain metrics. Finally, it is demonstrated that our
STSK-OTFS-MA system achieves both a lower BER and a higher DCMC capacity than
its conventional spatial modulation (SM) and its orthogonal frequency-division
multiplexing (OFDM) counterparts. As a benefit, the proposed system strikes a
compelling BER vs. system complexity as well as BER vs. detection complexity
trade-offs. | Zeping Sui, Hongming Zhang, Sumei Sun, Lie-Liang Yang, Lajos Hanzo | 2023-09-07T15:20:21 | http://arxiv.org/abs/2309.03771v1 | # Space-Time Shift Keying Aided OTFS Modulation for Orthogonal Multiple Access
###### Abstract
Space-time shift keying-aided orthogonal time frequency space modulation-based multiple access (STSK-OTFS-MA) is proposed for reliable uplink transmission in high-Doppler scenarios. As a beneficial feature of our STSK-OTFS-MA system, extra information bits are mapped onto the indices of the active dispersion matrices, which allows the system to enjoy the joint benefits of both STSK and OTFS signalling. Due to the fact that both the time-, space- and DD-domain degrees of freedom are jointly exploited, our STSK-OTFS-MA achieves increased diversity and coding gains. To mitigate the potentially excessive detection complexity, the sparse structure of the equivalent transmitted symbol vector is exploited, resulting in a pair of low-complexity near-maximum likelihood (ML) multiuser detection algorithms. Explicitly, we conceive a progressive residual check-based greedy detector (PRCGD) and an iterative reduced-space check-based detector (IRCD). Then, we derive both the unconditional single-user pairwise error probability (SU-UPEP) and a tight bit error ratio (BER) union-bound for our single-user STSK-OTFS-MA system employing the ML detector. Furthermore, the discrete-input continuous-output memoryless channel (DCMC) capacity of the proposed system is derived. The optimal dispersion matrices (DMs) are designed based on the maximum attainable diversity and coding gain metrics. Finally, it is demonstrated that our STSK-OTFS-MA system achieves both a lower BER and a higher DCMC capacity than its conventional spatial modulation (SM) and its orthogonal frequency-division multiplexing (OFDM) counterparts. As a benefit, the proposed system strikes a compelling BER vs. system complexity as well as BER _vs._ detection complexity trade-offs.
Space-time shift keying (STSK), orthogonal time frequency space (OTFS), multiple access, maximum-likelihood detection, low-complexity detection, performance analysis.
## I Introduction
During the last decade, space-time shift keying (STSK) [4, 7, 12, 20] has been considered a compelling multifunctional multiple-input multiple-output (MIMO) arrangement, where the information bits are jointly mapped onto the conventional amplitude-phase modulated (APM) symbols and the indices of the active dispersion matrices (DMs). To elaborate further, each single APM symbol is dispersed to multiple transmit antennas (TAs) and time-slots by activating one out of \(Q\) DMs. Hence, STSK can achieve diversity and multiplexing gains [20]. By contrast, the conventional spatial modulation (SM) activates one TA to transmit a single APM symbol, yielding only receive diversity gain [15].
Orthogonal time frequency space (OTFS) modulation also constitutes a promising candidate for next-generation wireless networks [10, 5, 17]. Since it is capable of providing reliable transmission in high-mobility scenarios, it has been widely studied in the context of reconfigurable intelligent surfaces (RISs) [9] and low-earth orbit (LEO) satellites [18]. More specifically, in OTFS systems, the information symbols are mapped to the delay-Doppler (DD)-domain, and each symbol is spread across the entire time-frequency (TF)-domain by leveraging the inverse symplectic finite Fourier transform (ISFFT). Therefore, OTFS can attain both time and frequency diversity gains, if channels are time-frequency selective or doubly-selective [27, 30]. In addition, the sparse DD-domain representations of the doubly-selective channels are incorporated into the OTFS theory, and the dimension of the DD-domain channel model is reduced to the number of resolvable paths [17, 5]. Since the communication distance and relative velocity can be approximated as constants within a few milliseconds, the DD-domain channels can be regarded as nearly time-invariant over an entire OTFS frame [31]. Furthermore, the inter-carrier interference (ICI) and inter-symbol interference (ISI) introduced by Doppler and delay spreads remain quasi-orthogonal with the aid of ISFFT, which can hence be processed in the Doppler-domain and delay-domain separately [5]. By contrast, the performance of the conventional orthogonal frequency-division multiplexing (OFDM) suffers severely in the face of high-Doppler doubly-selective channels, since the orthogonality of subcarriers may be destroyed by severe ICI. Therefore, in high-Doppler scenarios, OTFS constitutes a more promising signaling scheme than the conventional OFDM [17, 5].
More recently, OTFS-based non-orthogonal multiple-access (NOMA) communication schemes have been studied [2, 3, 25, 28], where different users arranged in the same DD-domain resource blocks (RBs) are distinguished by their unique sparse codewords [2, 25, 28] and power levels [3]. However, the performance of OTFS-NOMA may degrade significantly due to the non-orthogonality-induced interference, and the excessive complexity of the transceiver [8]. As a remedy, OTFS-based orthogonal MA (OTFS-OMA) techniques have been designed in [8, 26], where the RBs of different users are arranged in non-overlapping grids in the
TF- and/or DD-domains. In [8], the spectral efficiency (SE) of several OTFS-OMA schemes using rectangular pulse shapes is analyzed. Nevertheless, the above OTFS-OMA schemes only modulate data in the DD-domain. None of them exploits the degrees of freedom in the space-time (ST)-domain, even though it would be beneficial to leverage both transmit and receive diversity gains for further BER performance improvement.
As a parallel development, MA communication schemes have been designed in association with SM/STSK in [6, 7]. Specifically in [7], the STSK-aided OFDM-based multiple access (STSK-OFDM-MA) paradigms have been proposed, where an attractive diversity _vs._ multiplexing gain trade-off was provided. However, these SM/STSK-based schemes are designed based on flat Rayleigh fading or _low-mobility_ frequency-selective fading channels, without considering the ICI imposed by _high-mobility environments_. Yet, next-generation MA systems aim for providing reliable data transmission under high-mobility scenarios [13, 19]. Therefore, the BER performance of the STSK-OFDM-MA schemes may degrade substantially under doubly-selective channels, yielding a significant system reliability loss. On the other hand, it can be observed from [7] that the STSK-OFDM-MA scheme is capable of providing better BER performance than the conventional solutions, resulting in a more reliable communication system. This observation implies that the reliability of the above-mentioned OTFS-OMA systems can be further enhanced by exploiting the STSK technique to attain both higher diversity and coding gains. Against this backdrop, in this paper we jointly invoke the DD-, space- and time-domain (TD) resources for transmission over doubly-selective channels. Explicitly, by intrinsically amalgamating STSK and OTFS-OMA, we propose space-time shift keying-aided OTFS-based MA (STSK-OTFS-MA) for reliable communications in doubly-selective channels.
The novel contributions of the paper are boldly and explicitly contrasted to the existing literatures in Table I, which are addressed below.
* We propose an STSK-OTFS-MA scheme for supporting the reliable data transmission of multiple users over high-mobility channels, where information is conveyed both by the classic APM symbols and the indices of active DMs. According to the \((N\times M)\) DD-domain grids, we first activate one out of \(Q\) DMs for spreading \(NM\) APM symbols to both the space and time dimensions, resulting in \(NM\) STSK blocks. Then a tailor-made ST mapper is conceived for mapping the elements of the STSK blocks onto the transmitted DD-domain symbol matrices of users, which enables our STSK-OTFS-MA to achieve both transmit and receive diversity gains. Moreover, based on the DD-domain statistics of users, a resource-allocation scheme is introduced for the STSK-OTFS-MA system. Explicitly, the RBs of different users are mapped to the non-overlapping grids along the delay domain to mitigate the multiuser interference (MUI) caused by the Doppler shift. Furthermore, the proposed STSK-OTFS-MA is capable of striking an attractive diversity versus multiplexing gain trade-off. Both the analytical and simulation results illustrated that the STSK-OTFS-MA advocated achieves a better BER performance than its conventional SM-OTFS, single-input-multiple-output (SIMO)-OTFS and STSK-OFDM-MA counterparts. Additionally, the general flexibility of the proposed STSK-OTFS-MA scheme is demonstrated in different-rate low-density parity-check (LDPC)-coded systems. Finally, the BER _vs._ system complexity of the STSK-OTFS-MA and other counterparts are also evaluated.
* A pair of low-complexity near-maximum likelihood detectors (MLDs) are proposed for the STSK-OTFS-MA scheme. Firstly, inspired by the family of greedy algorithms designed for compressed sensing (CS), a progressive residual check-based greedy detector (PRCGD) is conceived, where optimal local choices are obtained at each iteration, yielding a detector approaching the globally optimal performance. Furthermore, commencing with the consideration of detecting the APM and DM-index symbols separately, an iterative reduced-space check-based detector (IRCD) is proposed. Specifically, by sorting the reliabilities of all DM activation patterns (DAPs), a reduced set of DAPs is tested. Finally, the BER performance _vs._ complexity of the MLD, of the PRCGD and of the IRCD are compared.
* We derive the unconditional single-user pairwise error probability (SU-UPEP) of the STSK-OTFS-MA system. Then by invoking the union-bound technique, we derive the closed-form single-user BER bound of our proposed STSK-OTFS-MA system employing MLD, which is shown to be tight for moderate to high signal-to-noise
ratios (SNRs). Then, based on the SU-UPEP, both the diversity order and the ST coding gain achieved by the STSK-OTFS-MA system are determined.
* The single-user discrete-input continuous-output memoryless channel (DCMC) capacity of the STSK-OTFS-MA system is derived, which is demonstrated to outperform its SM counterpart. Additionally, based on the SU-UPEP and the DCMC capacity derived, the design criteria of DMs are proposed, for approaching the maximum attainable diversity order and ST coding gain.
The rest of the paper is structured as follows. In Section II, the proposed STSK-OTFS-MA system model is investigated. Then, low-complexity near-ML multiuser detection algorithms are proposed in Section III. In Section IV, the overall system performance is characterized and the DM design algorithm is detailed. The simulation results are shown in Section V. Finally, the conclusions are offered in Section VI.
_Notation:_ We use the following notation throughout this paper: \(\mathbb{C}\) and \(\mathbb{R}\) denote the fields of complex and real numbers; \(\mathbb{Z}_{+}^{M}\) and \(\mathbb{B}\) represent the integer set \(\{1,\ldots,M\}\) and the bit set \(\{0,1\}\); \(\mathbb{E}[\cdot]\) and \(\text{tr}\left\{\cdot\right\}\) denote the expectation and trace operators; \(x(l)\) and \(X(l,k)\) are the \(l\)th element of vector \(\mathbf{x}\) and the \((l,k)\)th element of matrix \(\mathbf{X}\), respectively; \(\text{vec}(\mathbf{A})\) denotes the vector formed by stacking the columns of \(\mathbf{A}\) into a single column vector, and \(\text{vec}^{-1}(\mathbf{a})\) denotes the inverse vectorization operation that recovers the original matrix; \(\otimes\) denotes the Kronecker product of two matrices; \(\mathcal{CN}(\mathbf{a},\mathbf{B})\) is the complex Gaussian distribution with mean vector \(\mathbf{a}\) and covariance matrix \(\mathbf{B}\); \(\mathbf{A}[:,1:n]\) and \(\mathbf{A}[1:m,:]\) represent the first \(n\) columns and first \(m\) rows of a matrix \(\mathbf{A}\), respectively; \(\mathbf{I}_{N}\) and \(\mathbf{I}_{N}(l)\) denote the \(N\)-dimensional identity matrix and its row-wise cyclic shift by \(l\); the modulo-\(N\) and determinant operations are denoted by \([\cdot]_{N}\) and \(\text{det}(\cdot)\); \(\delta(\cdot)\) is the delta function; the uniform distribution on the interval \([a,b]\) is denoted by \(\mathcal{U}[a,b]\); \(\text{sta}\{\mathbf{A}_{n}^{(u)}\}|_{n=0}^{N-1}=[(\mathbf{A}_{0}^{(u)})^{T},(\mathbf{A}_{1}^{(u)})^{T},\ldots,(\mathbf{A}_{N-1}^{(u)})^{T}]^{T}\) and \(\text{sta}\{\mathbf{A}_{n}^{(u)}\}|_{u=0}^{U-1}=[(\mathbf{A}_{n}^{(0)})^{T},(\mathbf{A}_{n}^{(1)})^{T},\ldots,(\mathbf{A}_{n}^{(U-1)})^{T}]^{T}\) denote the matrices (vectors) formed by stacking \(N\) and \(U\) identical-dimensional sub-matrices (sub-vectors) \(\mathbf{A}_{n}^{(u)}\) for \(n=0,\ldots,N-1\) and \(u=0,\ldots,U-1\), respectively.
## II System Model
### _Transmitter Description_
Let us consider a single-cell uplink communication scenario, where the information signals of \(U\) users are simultaneously transmitted to a base station (BS). Specifically, we assume that \(N_{t}\) TAs are employed by each user, and the BS uses \(N_{r}\) receive antennas (RAs). Moreover, each TA transmits an OTFS signal having the bandwidth of \(B=M\Delta f\) and time-slot duration of \(T_{f}=NT\), where \(M\) and \(N\) denote the number of subcarriers and time intervals within an OTFS time-slot, while \(\Delta f\) and \(T\) represent the subcarrier spacing and symbol duration, respectively. Hence, we have a total of \(M_{d}=NM\) DD-domain RBs and each user occupies \(G=M_{d}/U\) RBs. As shown in Fig. 1, the information bit sequence \(\mathbf{b}^{(u)}\in\mathbb{B}^{L}\) transmitted by user \(u\) is first partitioned into \(G\) groups, yielding \(\mathbf{b}^{(u)}=[\mathbf{b}_{0}^{(u)},\ldots,\mathbf{b}_{G-1}^{(u)}]\). The \(g\)th bit sequence \(\mathbf{b}_{g}^{(u)}\in\mathbb{B}^{L_{b}}\), \(g=0,\ldots,G-1\), contains \(L_{b}=L/G=L_{1}+L_{2}\) bits. Explicitly, the subsequence \(\mathbf{b}_{1,g}^{(u)}\in\mathbb{B}^{L_{1}}\) is mapped into an index symbol in \(\{1,\ldots,Q\}\) for selecting an active DM in \(\mathcal{A}=\{\mathbf{A}_{1},\ldots,\mathbf{A}_{Q}\}\), where we have \(L_{1}=\log_{2}Q\). The remaining \(L_{2}\)-bit sequences \(\mathbf{b}_{2,g}^{(u)}\in\mathbb{B}^{L_{2}}\) are mapped into normalized quadrature amplitude modulation (QAM)/phase-shift keying (PSK) symbols chosen from the constellation \(\mathcal{F}=\{f_{1},\ldots,f_{V}\}\), where \(L_{2}=\log_{2}V\). Hence, \(L=GL_{b}\) bits are transmitted per user, so that a total of \(UGL_{b}=NM\log_{2}(QV)\) bits are conveyed in an OTFS frame. Assuming that the STSK symbol duration includes \(T_{c}\) OTFS time-slots, the ST codeword matrices of user \(u\) can be expressed as [20]
\[\mathbf{S}_{d,g}^{(u)}=f_{l_{g}}^{(u)}\mathbf{A}_{d,g}^{(u)}\in\mathbb{C}^{N_{t}\times T _{c}}, \tag{1}\]
where \(f_{l_{g}}^{(u)}\in\mathcal{F}\) and \(\mathbf{A}_{d,g}^{(u)}\in\mathcal{A}\) denote a single QAM/PSK symbol and an active DM, respectively. By introducing \(\tilde{\mathbf{s}}_{d,g}^{(u)}=\text{vec}(\mathbf{S}_{d,g}^{(u)})\) and stacking all the \(G\) RBs of user \(u\), the \(u\)th ST stacked codeword vector is formulated as
\[\tilde{\mathbf{s}}_{d}^{(u)}=\text{sta}\{\tilde{\mathbf{s}}_{d,g}^{(u)}\}|_{g=0}^{G-1}, \quad g=0,\ldots,G-1. \tag{2}\]
Let us parameterize the STSK-OTFS-MA system by the five-tuple \((N_{t},N_{r},T_{c},Q,V)\). Note that the DM set can be generated using diverse design criteria, for example by minimizing the pairwise error probability or by maximizing the DCMC capacity [7], which will be further investigated in Section IV-D. In the STSK pre-processing block, the DMs are assumed to be normalized to maintain the transmitted power, and the constraint is given by [20]
\[\text{tr}(\mathbf{A}_{q}^{H}\mathbf{A}_{q})=T_{c},\quad q=1,\ldots,Q. \tag{3}\]
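A hedged NumPy sketch of this mapping is given below; the randomly generated dispersion matrices and the PSK constellation are placeholders, since the optimized DM design of Section IV-D is not reproduced here.

```python
# STSK mapping of Eqs. (1)-(3): L1 bits select one of Q dispersion matrices,
# L2 bits select a PSK symbol, and the codeword is their product.
import numpy as np

rng = np.random.default_rng(0)

def random_dispersion_set(Q, Nt, Tc):
    """Placeholder DMs normalized so that tr(A^H A) = Tc, Eq. (3)."""
    dms = []
    for _ in range(Q):
        A = rng.standard_normal((Nt, Tc)) + 1j * rng.standard_normal((Nt, Tc))
        A *= np.sqrt(Tc / np.real(np.trace(A.conj().T @ A)))
        dms.append(A)
    return dms

def stsk_codeword(bits, dms, V):
    """bits: log2(Q) + log2(V) bits; returns S = f * A_q, Eq. (1)."""
    Q = len(dms)
    L1, L2 = int(np.log2(Q)), int(np.log2(V))
    q = int("".join(map(str, bits[:L1])), 2)            # dispersion-matrix index
    v = int("".join(map(str, bits[L1:L1 + L2])), 2)     # constellation index
    f = np.exp(2j * np.pi * v / V)                      # unit-energy PSK symbol
    return f * dms[q]

dms = random_dispersion_set(Q=4, Nt=2, Tc=2)
S = stsk_codeword(np.array([0, 1, 1, 0]), dms, V=4)     # one (Nt x Tc) ST codeword
```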
As shown in Fig. 2, the ST codewords are fed into the ST mapper, which is detailed in Section II-B, and the \(u\)th user's transmitted frame output by the ST mapper \(\mathbf{s}^{(u)}\in\mathbb{C}^{N_{t}G\times T_{c}}\) can be formulated as
\[\mathbf{s}^{(u)}=\begin{bmatrix}\mathbf{s}_{0,0}^{(u)}&\cdots&\mathbf{s}_{0,T_{c}-1}^{(u)}\\ \vdots&\ddots&\vdots\\ \mathbf{s}_{N_{t}-1,0}^{(u)}&\cdots&\mathbf{s}_{N_{t}-1,T_{c}-1}^{(u)}\end{bmatrix}, \tag{4}\]
where we have \(\mathbf{s}_{n_{t},t_{c}}^{(u)}=[S_{d,0}^{(u)}(n_{t},t_{c}),\ldots,S_{d,G-1}^{(u)}(n_ {t},t_{c})]^{T}\in\mathbb{C}^{G\times 1}\) for \(n_{t}=0,\ldots,N_{t}-1\) and \(t_{c}=0,\ldots,T_{c}-1\), which is formulated based on (1). Then the \(G\) ST codeword elements of \(\mathbf{s}_{n_{t},t_{c}}^{(u)}\) are mapped to \(M_{d}\) RBs, yielding
\[\mathbf{x}_{n_{t},t_{c}}^{(u)}=\mathbf{\mathcal{P}}^{(u)}\mathbf{s}_{n_{t},t_{c}}^{(u)}=[x_{n _{t},t_{c}}^{(u)}(0),\ldots,x_{n_{t},t_{c}}^{(u)}(M_{d}-1)]^{T}, \tag{5}\]
where \(\mathbf{\mathcal{P}}^{(u)}\) is the \((M_{d}\times G)\)-element resource allocation matrix. To alleviate the MUI caused by ICI, we introduce our delay-domain index-based RB allocation scheme, i.e., Scheme 1 illustrated in Fig. 3 (a), where each user occupies \(J=M/U\) columns of the DD-domain grids. By contrast, the Doppler-domain index-based resource allocation scheme (Scheme 2) shown in Fig. 3 (b) is invoked as the benchmark. Let us denote the column indices of user \(u\) by \(\mathcal{L}^{(u)}=\{l_{0}^{(u)},\ldots,l_{J-1}^{(u)}\}\). More specifically, if we assume that the ST codewords of user \(u\) are assigned to the RBs set \(\mathcal{N}^{(u)}\), then the corresponding
elements of \(\mathbf{\mathcal{P}}^{(u)}\) can be expressed as
\[\mathcal{P}^{(u)}(m_{d},g)=\begin{cases}1,&\text{if }m_{d}\in\mathcal{N}^{(u)} \\ 0,&\text{otherwise}\end{cases} \tag{6}\]
for \(0\leq m_{d}\leq M_{d}-1\). The indices in \(\mathcal{N}^{(u)}\) are given by \(m_{d}=l_{j}^{(u)}N+n\) and \(g=jN+n\), where \(l_{j}^{(u)}=(J\times u)+j\) for \(n=0,\ldots,N-1\) and \(j=0,\ldots,J-1\).
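As a concrete illustration of (6) and of the delay-domain index rule above, the short sketch below (with assumed values of \(N\), \(M\) and \(U\)) constructs the resource allocation matrices of Scheme 1 and verifies that the RB supports of different users do not overlap.

```python
import numpy as np

N, M, U = 4, 8, 2
Md, J = N * M, M // U
G = Md // U

def allocation_matrix(u):
    """Build the (Md x G)-element matrix P^(u) of (6) for the delay-domain Scheme 1."""
    P = np.zeros((Md, G))
    for j in range(J):
        l_j = J * u + j                  # delay (column) index owned by user u
        for n in range(N):
            m_d = l_j * N + n            # RB index on the DD grid
            g = j * N + n                # local RB index within user u's block
            P[m_d, g] = 1.0
    return P

P0, P1 = allocation_matrix(0), allocation_matrix(1)
# Each user gets G = Md/U disjoint RBs; the supports of different users do not overlap.
print(P0.sum(), P1.sum(), np.abs(P0.T @ P1).sum())
```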
When considering the OTFS processing block, by defining a DD-domain codeword matrix as \(\mathbf{X}_{n_{t},t_{c}}^{(u)}=\text{vec}^{-1}(\mathbf{x}_{n_{t},t_{c}}^{(u)})\), the TF-domain signal can be formulated using the inverse symplectic finite Fourier transform (ISFFT) as
\[\tilde{X}_{n_{t},t_{c}}^{(u)}(n,m)=\sum_{k=0}^{N-1}\sum_{l=0}^{M-1}\frac{X_{n_{t},t_{c}}^{(u)}(k,l)}{\sqrt{M_{d}}}e^{j2\pi\left(\frac{nk}{N}-\frac{ml}{M}\right)}, \tag{7}\]
for \(n=0,\ldots,N-1\) and \(m=0,\ldots,M-1\). The transmitted TD signal is obtained by exploiting the Heisenberg transform, yielding
\[\tilde{s}_{n_{t},t_{c}}^{(u)}(t)=\sum_{n=0}^{N-1}\sum_{m=0}^{M-1}\tilde{X}_{n_{t},t_{c}}^{(u)}(n,m)g_{\text{tx}}(t-nT)e^{j2\pi m\Delta f(t-nT)}, \tag{8}\]
where \(g_{\text{tx}}(t)\) denotes the transmit waveform.
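For illustration, the following sketch implements a discretized version of the ISFFT of (7) and of the Heisenberg transform of (8), assuming a rectangular transmit pulse so that the Heisenberg transform reduces to per-symbol IFFTs; the DD-domain payload is a random placeholder.

```python
import numpy as np

N, M = 4, 8
rng = np.random.default_rng(1)
X_dd = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))  # DD grid, axis 0: Doppler k, axis 1: delay l

# ISFFT of (7): inverse DFT along the Doppler axis, forward DFT along the delay axis.
X_tf = np.fft.fft(np.fft.ifft(X_dd, axis=0) * N, axis=1) / np.sqrt(N * M)

# Heisenberg transform of (8) with a rectangular g_tx, sampled at M samples per
# OTFS symbol: each TF row becomes one time-domain block via an M-point IDFT.
s_td = M * np.fft.ifft(X_tf, axis=1)
s_serial = s_td.reshape(-1)            # serialized transmit samples of this time-slot
print(X_tf.shape, s_serial.shape)
```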
### _Received Signals_
Assuming that the synchronization among the uplink users is perfect, let us consider a \(P\)-path DD-domain time-varying multipath channel model between the \(n_{t}\)th TA of user \(u\) and the \(n_{r}\)th RA, which can be formulated as \(h_{n_{r},n_{t}}^{(u)}(\tau,\nu)=\sum_{i=1}^{P}h_{i,n_{r},n_{t}}^{(u)}\delta(\tau-\tau_{i})\delta(\nu-\nu_{i})\), where \(h_{i,n_{r},n_{t}}^{(u)}\), \(\tau_{i}\) and \(\nu_{i}\) denote the complex-valued path gain between the \(n_{t}\)th TA and the \(n_{r}\)th RA, as well as the normalized delay and Doppler shifts introduced by the \(i\)th path, respectively [5]. Here we have \(h_{i,n_{r},n_{t}}^{(u)}\sim\mathcal{CN}(0,1/P),\forall i\), which is independent of \(\tau_{i}\) and \(\nu_{i}\). Therefore,
Fig. 1: Illustration of the STSK-OTFS-MA system.
Fig. 2: A toy example of the STSK-OTFS-MA system with \(N_{t}=T_{c}=N=M=U=2\).
the delay and Doppler shifts corresponding to the \(i\)th reflector are given by \(\nu_{i}=\frac{k_{i}}{NT},\tau_{i}=\frac{l_{i}}{M\Delta f}\), where \(l_{i}=a_{i}+\alpha_{i}\) and \(k_{i}=b_{i}+\beta_{i}\) represent the normalized delay and Doppler indices associated with the \(i\)th path, where \(a_{i}\) and \(b_{i}\) denote the integer delay and Doppler indices, while the fractional components are given by \(\alpha_{i},\beta_{i}\in\mathcal{U}[-\frac{1}{2},\frac{1}{2}]\). During the \(t_{c}\)th OTFS time-slot, the TD signal of the \(n_{r}\)th RA received from the \(n_{t}\)th TA of user \(u\) can be expressed as [17]
\[r_{n_{r},n_{t},t_{c}}^{(u)}(t)=\int\int h_{n_{r},n_{t}}^{(u)}(\tau,\nu)\tilde{s}_{n_{t},t_{c}}^{(u)}(t-\tau)e^{j2\pi\nu(t-\tau)}d\tau d\nu+n_{n_{r},n_{t},t_{c}}^{(u)}(t), \tag{9}\]
where \(n_{n_{r},n_{t},t_{c}}^{(u)}(t)\) is the complex-valued additive white Gaussian noise (AWGN). Based on the Wigner transform, the elements of the corresponding received TF-domain codeword matrix \(\tilde{\mathbf{Y}}_{n_{r},n_{t},t_{c}}^{(u)}\in\mathbb{C}^{N\times M}\) can be obtained as
\[\tilde{Y}_{n_{r},n_{t},t_{c}}^{(u)}(n,m)=\int r_{n_{r},n_{t}}^{(u)}(t^{\prime} )g_{\text{rx}}^{*}(t^{\prime}-nT)e^{-j2\pi m\Delta f(t^{\prime}-nT)}dt^{\prime}, \tag{10}\]
where \(g_{\text{rx}}(t)\) is the receive waveform. Then, upon utilizing the symplectic finite Fourier transform (SFFT), the received DD-domain codeword matrix can be formulated as
\[Y_{n_{r},n_{t},t_{c}}^{(u)}(k,l)=\sum_{n=0}^{N-1}\sum_{m=0}^{M-1}\frac{\tilde{Y}_{n_{r},n_{t},t_{c}}^{(u)}(n,m)}{\sqrt{M_{d}}}e^{-j2\pi\left(\frac{nk}{N}-\frac{ml}{M}\right)}, \tag{11}\]
for \(k=0,\ldots,N-1\) and \(l=0,\ldots,M-1\). Assuming that both the transmit and receive waveforms satisfy the bi-orthogonal condition, the vector-form input-output relationship for user \(u\) can be formulated as [17]
\[\mathbf{y}_{n_{r},n_{t},t_{c}}^{(u)}=\mathbf{H}_{n_{r},n_{t}}^{(u)}\mathbf{x}_{n_{t},t_{c }}^{(u)}+\mathbf{n}_{n_{r},n_{t},t_{c}}^{(u)}, \tag{12}\]
where we have \(\mathbf{y}_{n_{r},n_{t},t_{c}}^{(u)}=\text{vec}\left(\mathbf{Y}_{n_{r},n_{t},t_{c}}^{ (u)}\right)\), and \(\mathbf{n}_{n_{r},n_{t},t_{c}}^{(u)}\) denotes the complex-valued AWGN vector. Moreover, the effective DD-domain channel matrix \(\mathbf{H}_{n_{r},n_{t}}^{(u)}\) can be expressed as
\[\mathbf{H}_{n_{r},n_{t}}^{(u)}=\sum_{i=1}^{P}\mathbf{I}_{M}(l_{i})\otimes\left[\mathbf{I}_{N}(k_{i})h_{i,n_{r},n_{t}}^{(u)}e^{-j2\pi\frac{l_{i}k_{i}}{M_{d}}}\right]\]
[11]. Let \(\mathbf{H}_{n_{r}}^{(u)}=\left[\mathbf{H}_{0,n_{r}}^{(u)},\ldots,\mathbf{H}_{N_{t}-1,n_{r}}^{(u)}\right]\) and \(\mathbf{x}_{t_{c}}^{(u)}=\text{sta}\{\mathbf{x}_{n_{t},t_{c}}^{(u)}\}_{n_{t}=0}^{N_{t}-1}\). Then the DD-domain codeword vector received by the \(n_{r}\)th RA within the \(t_{c}\)th time-slot can be expressed as
\[\mathbf{y}_{n_{r},t_{c}}=\sum_{u=0}^{U-1}\mathbf{y}_{n_{r},t_{c}}^{(u)}+\mathbf{n}_{n_{r},t_{c}}=\sum_{u=0}^{U-1}\sum_{n_{t}=0}^{N_{t}-1}\mathbf{H}_{n_{r},n_{t}}^{(u)}\mathbf{x}_{n_{t},t_{c}}^{(u)}+\mathbf{n}_{n_{r},t_{c}}=\sum_{u=0}^{U-1}\mathbf{H}_{n_{r}}^{(u)}\mathbf{x}_{t_{c}}^{(u)}+\mathbf{n}_{n_{r},t_{c}}, \tag{13}\]
where \(\mathbf{n}_{n_{r},t_{c}}\) is the corresponding complex AWGN vector with a zero mean and a covariance matrix of \(N_{0}\mathbf{I}_{M_{d}}\), i.e., \(\mathbf{n}_{n_{r},t_{c}}\sim\mathcal{CN}(\mathbf{0},N_{0}\mathbf{I}_{M_{d}})\). Hence, the average SNR per RA is given by \(\gamma=1/N_{0}\). By collecting the signals of all \(N_{r}\) RAs and letting \(\tilde{\mathbf{y}}_{t_{c}}=\text{sta}\{\mathbf{y}_{n_{r},t_{c}}\}_{n_{r}=0}^{N_{r}-1}\), it can be shown that the end-to-end DD-domain input-output relationship for the time-slot \(t_{c}\) is given by
\[\tilde{\mathbf{y}}_{t_{c}}=\sum_{u=0}^{U-1}\tilde{\mathbf{H}}^{(u)}\mathbf{x}_{t_{c}}^{(u)}+ \tilde{\mathbf{n}}_{t_{c}}, \tag{14}\]
for \(t_{c}=0,\ldots,T_{c}-1\), where \(\tilde{\mathbf{n}}_{t_{c}}=\text{sta}\{\mathbf{n}_{n_{r},t_{c}}\}|_{n_{r}=0}^{N_{r}-1}\) denotes the stacked noise vector at the BS side. The DD-domain MIMO channel matrix of the \(u\)th user can be expressed as \(\tilde{\mathbf{H}}^{(u)}=\text{sta}\{\mathbf{H}_{n_{r}}^{(u)}\}_{n_{r}=0}^{N_{r}-1}\). Moreover, by invoking the relationship between the ST codewords and the stacked codeword vector shown in (5), the transmitted codeword vector \(\mathbf{x}_{t_{c}}^{(u)}\) can
Fig. 3: Illustration of resource allocations (a) Scheme 1 (b) Scheme 2 with \(M=12\), \(N=12\), \(U=3\) and \(J=M/U=4\), where \(\mathcal{L}^{(u)}\) represents the column index set of user \(u\) for \(u=0,\ldots,U-1\).
be formulated as
\[\mathbf{x}_{t_{c}}^{(u)} =\left[\left(\mathbf{\mathcal{P}}^{(u)}\mathbf{s}_{0,t_{c}}^{(u)}\right)^{T},\ldots,\left(\mathbf{\mathcal{P}}^{(u)}\mathbf{s}_{N_{t}-1,t_{c}}^{(u)}\right)^{T} \right]^{T}\] \[=\left(\mathbf{I}_{N_{t}}\otimes\mathbf{\mathcal{P}}^{(u)}\right)\left[ \left(\mathbf{s}_{0,t_{c}}^{(u)}\right)^{T},\ldots,\left(\mathbf{s}_{N_{t}-1,t_{c}}^{ (u)}\right)^{T}\right]^{T}\] \[=\mathbf{\bar{\mathcal{P}}}^{(u)}\mathbf{s}_{t_{c}}^{(u)}, \tag{15}\]
where \(\mathbf{\bar{\mathcal{P}}}^{(u)}\) is the equivalent resource allocation matrix of user \(u\). Then, upon applying (15) to (14), \(\mathbf{\tilde{y}}_{t_{c}}\) can be rewritten as
\[\tilde{\mathbf{y}}_{t_{c}}=\sum_{u=0}^{U-1}\tilde{\mathbf{H}}^{(u)}\mathbf{\bar{\mathcal{P}}}^{(u)}\mathbf{s}_{t_{c}}^{(u)}+\mathbf{\tilde{n}}_{t_{c}}=\sum_{u=0}^{U-1}\mathbf{\Omega}^{(u)}\mathbf{s}_{t_{c}}^{(u)}+\mathbf{\tilde{n}}_{t_{c}},\quad t_{c}=0,\ldots,T_{c}-1, \tag{16}\]
where \(\mathbf{\Omega}^{(u)}=\tilde{\mathbf{H}}^{(u)}\mathbf{\bar{\mathcal{P}}}^{(u)}\) is an \((M_{d}N_{r}\times GN_{t})\)-dimensional matrix.
By considering all the \(T_{c}\) time-slots, let us define the overall received symbol vector and the transmitted STSK symbol vector of user \(u\) as \(\mathbf{\tilde{y}}=\text{sta}\{\mathbf{\tilde{y}}_{t_{c}}\}_{t_{c}=0}^{T_{c}-1}\) and \(\mathbf{\tilde{s}}^{(u)}=\text{sta}\{\mathbf{s}_{t_{c}}^{(u)}\}_{t_{c}=0}^{T_{c}-1}\), respectively. Consequently, \(\mathbf{\tilde{y}}\) can be obtained as
\[\mathbf{\tilde{y}} =\sum_{u=0}^{U-1}\left(\mathbf{I}_{T_{c}}\otimes\mathbf{\Omega}^{(u)} \right)\mathbf{\tilde{s}}^{(u)}+\mathbf{\tilde{n}}\] \[=\sum_{u=0}^{U-1}\mathbf{\tilde{\Omega}}^{(u)}\mathbf{\tilde{s}}^{(u)}+ \mathbf{\tilde{n}}=\mathbf{\tilde{\Omega}}\mathbf{\tilde{s}}+\mathbf{\tilde{n}}, \tag{17}\]
where \(\mathbf{\tilde{\Omega}}=\left[\mathbf{\tilde{\Omega}}^{(0)},\ldots,\mathbf{\tilde{\Omega}}^{(U-1)}\right]\in\mathbb{C}^{M_{d}N_{r}T_{c}\times GN_{t}T_{c}U}\), \(\mathbf{\tilde{s}}=\text{sta}\{\mathbf{\tilde{s}}^{(u)}\}_{u=0}^{U-1}\in\mathbb{C}^{GN_{t}T_{c}U\times 1}\) and \(\mathbf{\tilde{n}}=\text{sta}\{\mathbf{\tilde{n}}_{t_{c}}\}_{t_{c}=0}^{T_{c}-1}\).
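The structure of the effective DD-domain channel used above can be visualized with the following simplified sketch, which assembles \(\mathbf{H}_{n_{r},n_{t}}^{(u)}\) as a sum of Kronecker products of cyclically shifted identity matrices; for clarity it assumes integer delay and Doppler indices and uses randomly generated placeholder path gains.

```python
import numpy as np

N, M, P = 4, 8, 2
Md = N * M
rng = np.random.default_rng(2)

def shifted_eye(size, shift):
    """Cyclic shift of the identity matrix, i.e., I_size(shift)."""
    return np.roll(np.eye(size), shift, axis=0)

h = (rng.standard_normal(P) + 1j * rng.standard_normal(P)) / np.sqrt(2 * P)  # CN(0, 1/P) taps
l_idx = rng.integers(0, M, P)      # integer delay indices
k_idx = rng.integers(0, N, P)      # integer Doppler indices

H = np.zeros((Md, Md), dtype=complex)
for i in range(P):
    phase = np.exp(-2j * np.pi * l_idx[i] * k_idx[i] / Md)
    H += np.kron(shifted_eye(M, l_idx[i]),
                 shifted_eye(N, k_idx[i]) * h[i] * phase)

print(H.shape)   # (Md, Md) effective channel of one TA-RA pair
```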
Now we further detail the ST merger shown in Fig. 1. For deriving the associated input-output relationship, we stack the transmitted ST codewords of \(U\) users shown in (2), yielding \(\mathbf{\tilde{s}}_{d}=\text{sta}\{\mathbf{\tilde{s}}_{d}^{(u)}\}_{u=0}^{U-1}\). As illustrated in Fig. 1 and Fig. 2 of Section II-A, the symbol vector \(\mathbf{\tilde{s}}_{d}\) is essentially obtained by rearranging all the \(GN_{t}T_{c}U\) ST codeword elements \(S_{d,g}^{(u)}(n_{t},t_{c})\) based on the following order 1 of _1) TA index 2) time-slot index 3) RB index 4) user index_. However, according to the STSK and multiuser system design principles [22, 6, 20, 26] and to the transceiver structure of Section II-A and Section II-B, the above-mentioned ST codeword elements should be placed in the following order 2 of _1) RB index 2) TA index 3) user index 4) time-slot index_, which can be further verified based on our derivation of (12)-(17). Therefore, we have \(\mathbf{\tilde{s}}=\mathbf{\Upsilon}\mathbf{\tilde{s}}_{d}\), where \(\mathbf{\Upsilon}\) is the \((GN_{t}T_{c}U\times GN_{t}T_{c}U)\)-element ST mapping matrix, whose elements are defined as
\[\Upsilon(d_{x},d_{y})=\begin{cases}1,&\text{if }d_{x}=g+n_{t}G+uN_{t}G+t_{c}GUN_{t} \\ &\text{and }d_{y}=n_{t}+t_{c}N_{t}+gN_{t}T_{c}+uGN_{t}T_{c}\\ 0,&\text{otherwise},\end{cases} \tag{18}\]
for \(0\leq d_{x},d_{y}\leq GN_{t}UT_{c}-1\). It should be noted that \(d_{x}\) and \(d_{y}\) are formulated based on order 2 and order 1, respectively. Finally, based on (17) and (18), the end-to-end input-output relationship for the OTFS frame can be expressed as
\[\mathbf{\tilde{y}}=\mathbf{\tilde{\Omega}}\mathbf{\Upsilon}\mathbf{\tilde{s}}_{d}+\mathbf{\tilde{n }}=\mathbf{\tilde{\Omega}}\mathbf{\Upsilon}\mathbf{\tilde{\chi}}\mathbf{K}+\mathbf{\tilde{n}}=\mathbf{ \mathcal{C}}\mathbf{K}+\mathbf{\tilde{n}}, \tag{19}\]
where \(\mathbf{\tilde{\chi}}=\mathbf{I}_{UG}\otimes\mathbf{\chi}\) with \(\mathbf{\chi}=[\text{vec}(\mathbf{A}_{1}),\ldots,\text{vec}(\mathbf{A}_{Q})]\in\mathbb{C}^{N_{t}T_{c}\times Q}\). Moreover, the equivalent transmitted symbol vector can be defined as \(\mathbf{K}=\text{sta}\{\mathbf{K}^{(u)}\}_{u=0}^{U-1}\), in which the subvectors can be formulated as \(\mathbf{K}^{(u)}=\text{sta}\{\mathbf{\tilde{K}}_{\mathcal{I}_{g}^{(u)}}\}_{g=0}^{G-1}\), and the index sets are given by \(\mathcal{I}_{g}^{(u)}=\{q_{g}^{(u)},l_{g}^{(u)}\}\), where \(q_{g}^{(u)}\in\{1,\ldots,Q\}\) and \(l_{g}^{(u)}\in\{1,\ldots,V\}\) for \(u=0,\ldots,U-1\) and \(g=0,\ldots,G-1\). Moreover, the \(g\)th equivalent transmitted symbol vector of the \(u\)th user can be formulated as
\[\mathbf{\tilde{K}}_{\mathcal{I}_{g}^{(u)}}=[\underbrace{0,\cdots,0}_{q_{g}^{(u)}-1},f_{l_{g}^{(u)}},\underbrace{0,\cdots,0}_{Q-q_{g}^{(u)}}]^{T}\in\mathbb{C}^{Q\times 1}, \tag{20}\]
where the \(l_{g}^{(u)}\)th QAM/PSK symbol \(f_{l_{g}^{(u)}}\) is located in the \(q_{g}^{(u)}\)th element, while the active DM in the \(g\)th STSK block of user \(u\) is denoted as \(\mathbf{A}_{q_{g}^{(u)}}\). According to (19)-(20), it can be observed that a total of \(M_{d}\) ST codewords are transmitted in the entire system, hence only \(M_{d}\) elements of \(\mathbf{K}\) have a non-zero value. Therefore, the index candidate sets of the non-zero-valued elements are represented as \(\mathcal{Q}=\{\mathcal{Q}_{1},\ldots,\mathcal{Q}_{C}\}\), which has \(C=2^{M_{d}L_{1}}=Q^{M_{d}}\) index candidate subsets. The \(c\)th subset can be expressed as \(\mathcal{Q}_{c}=\{\mathcal{Q}_{c}(0),\ldots,\mathcal{Q}_{c}(M_{d}-1)\}\subset\mathcal{Q}\), whose elements obey \(\mathcal{Q}_{c}(m_{d})\in\{1,\ldots,QM_{d}\}\) for \(m_{d}=0,\ldots,M_{d}-1\) and \(c=1,\ldots,C\). For a given \(\mathbf{K}\), the index candidate subset is denoted as \(\mathcal{I}\), where we have \(\mathcal{I}=\mathcal{Q}_{c}\subset\mathcal{Q}\), and the APM symbols can be expressed as \(\mathbf{K}_{d}=[K_{d}(0),\ldots,K_{d}(M_{d}-1)]^{T}\in\mathcal{F}^{M_{d}\times 1}\). For the sake of demonstration, all the candidates of the ST codeword vectors can be expressed as a specifically designed codebook, yielding
\[\mathcal{B}=\left\{\mathbf{B}_{1},\ldots,\mathbf{B}_{2^{L}}:\mathbf{B}_{i}\in\mathbb{C}^{ QM_{d}},i=1,\ldots,2^{L}\right\}, \tag{21}\]
which will be discussed in Section IV, while \(\mathbf{K}\) is selected from \(\mathcal{B}\), i.e., \(\mathbf{K}\in\mathcal{B}\). Based on (19), the received symbol vector \(\mathbf{\tilde{y}}\) for a given \(\mathbf{K}\) follows the Gaussian probability density function (PDF) of
\[p(\mathbf{\tilde{y}}|\mathbf{K})=\frac{1}{(\pi N_{0})^{M_{d}N_{r}T_{c}}}\exp\left(-\frac{||\mathbf{\tilde{y}}-\mathbf{C}\mathbf{K}||^{2}}{N_{0}}\right). \tag{22}\]
### _Maximum A Posteriori Detector (MAPD)_
Given the conditional PDF in (22), the optimum MAPD maximizes the _a posteriori_ probability of the equivalent transmitted vector \(\mathbf{K}\), yielding \(\mathbf{K}^{\text{MAP}}=\underset{\mathbf{B}_{i}\in\mathcal{B}}{\arg\max}\left\{p(\mathbf{B} _{i}|\tilde{\mathbf{y}})\right\}\). Assuming that the mapping process of different candidates in \(\mathcal{B}\) is independent and equiprobable, the MAPD is equivalent to the MLD, which can be written as
\[\mathbf{K}^{\text{ML}}=\underset{\mathbf{B}_{i}\in\mathcal{B}}{\arg\min}\left\{|| \tilde{\mathbf{y}}-\mathbf{C}\mathbf{B}_{i}||^{2}\right\}. \tag{24}\]
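A brute-force sketch of the MLD of (24) is given below; the equivalent matrix \(\mathbf{C}\), the codebook and all dimensions are random toy placeholders, purely to illustrate the exhaustive metric evaluation.

```python
import numpy as np

rng = np.random.default_rng(3)
n_obs, n_sym, n_cand = 16, 8, 32       # toy sizes standing in for MdNrTc, QMd and 2^L

C = rng.standard_normal((n_obs, n_sym)) + 1j * rng.standard_normal((n_obs, n_sym))
codebook = [rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym)
            for _ in range(n_cand)]

K_true = codebook[7]
y = C @ K_true + 0.05 * (rng.standard_normal(n_obs) + 1j * rng.standard_normal(n_obs))

# Exhaustive search over the codebook, mirroring (24).
metrics = [np.linalg.norm(y - C @ B_i) ** 2 for B_i in codebook]
i_ml = int(np.argmin(metrics))
print("detected candidate index:", i_ml)
```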
### _Progressive Residual Check Greedy Detector (PRCGD)_
We firstly rewrite the input-output relationship of (19) as
\[\tilde{\mathbf{y}}=\mathbf{C}\mathbf{K}+\tilde{\mathbf{n}}=\mathbf{C}\mathbf{\Upsilon}_{\mathcal{I}} \mathbf{K}_{d}+\tilde{\mathbf{n}}=\mathbf{C}_{\mathcal{I}}\mathbf{K}_{d}+\tilde{\mathbf{n}}, \tag{25}\]
where \(\mathbf{C}_{\mathcal{I}}=\mathbf{C}\mathbf{\Upsilon}_{\mathcal{I}}\in\mathbb{C}^{M_{d}N_{r}T_{c}\times M_{d}}\) and \(\mathbf{\Upsilon}_{\mathcal{I}}\) is a \((QM_{d}\times M_{d})\)-element mapping matrix associated with \(\mathcal{I}\). Assuming that the candidates in the index candidate sets \(\mathcal{Q}\) and in the constellation set \(\mathcal{F}\) are independent and equiprobable, similar to (24), the corresponding joint MLD can be formulated as \(\left(\mathcal{I}^{\text{ML}},\mathbf{K}_{d}^{\text{ML}}\right)=\underset{\mathcal{Q}_{c}\subset\mathcal{Q},\,\mathbf{f}\in\mathcal{F}^{M_{d}}}{\arg\min}\left\{||\tilde{\mathbf{y}}-\mathbf{C}_{\mathcal{Q}_{c}}\mathbf{f}||^{2}\right\}\). Since the complexity of the MLD is excessive, we propose the PRCGD. Based on the sparse structure of the transmitted symbol vector \(\mathbf{K}\), our objective is to harness the philosophy of greedy algorithms [32], which are often employed for sparse recovery in low-complexity multiuser detection (MUD). Specifically, the proposed PRCGD can provide locally optimal detection results based on the elements of \(\mathbf{K}\). Moreover, our PRCGD includes the reliability sorting and progressive detection stages detailed below.
At the reliability sorting stage, the received symbol vector \(\tilde{\mathbf{y}}\) is first processed by linear minimum mean square error (LMMSE) estimation to obtain the soft estimates of \(\mathbf{K}\) as \(\tilde{\mathbf{K}}=\left(\mathbf{C}^{H}\mathbf{C}+\frac{1}{\gamma_{s}}\mathbf{I}_{QM_{d}} \right)^{-1}\mathbf{C}^{H}\tilde{\mathbf{y}}\), where \(\tilde{\mathbf{K}}=[\tilde{K}(0),\ldots,\tilde{K}(QM_{d}-1)]^{T}\in\mathbb{C}^{ QM_{d}\times 1}\) and \(\gamma_{s}=\gamma/Q\) is the generalized average SNR per symbol. The elements in \(\tilde{\mathbf{K}}\) having relatively high magnitudes should also have relatively high probabilities of being active in \(\mathbf{K}\), which becomes more pronounced in the high-SNR region. Hence, inspired by [21, 32], we can order the magnitudes of the elements in \(\tilde{\mathbf{K}}\) in descending order to reflect the reliability of the index symbols, yielding
\[\mathcal{J}= \{j_{1},\ldots,j_{QM_{d}}\}\quad\text{subject to}\ \left|\tilde{K}(j_{1})\right|^{2}\geqslant\ldots\geqslant\left|\tilde{K}(j_{QM _{d}})\right|^{2}, \tag{26}\]
where we have \(j_{l}\in\{1,\ldots,QM_{d}\}\) for \(l=1,\ldots,QM_{d}\) and \(j_{l}\neq j_{q},\forall l\neq q\). Then, based on the reliability set \(\mathcal{J}\), the PRCGD enters the progressive detection stage, which is detailed below.
During the progressive detection stage, the PRCGD carries out the index symbol detection and the symbol-wise APM symbol detection separately. To elaborate further, we first select \(j_{t}\) from the reliability set \(\mathcal{J}\) and exploit \(C_{t}\) DAPs in the \(t\)th iteration, yielding \(\mathcal{Q}^{t}=\left\{\mathcal{Q}_{1}^{t},\ldots,\mathcal{Q}_{C_{t}}^{t}\right\}\subset\mathcal{Q}\), where \(\mathcal{Q}_{c_{t}}^{t}(m_{d})\in\{1,\ldots,QM_{d}\}\) for \(m_{d}=0,\ldots,M_{d}-1\) and \(c_{t}=1,\ldots,C_{t}\). Based on the reliability set \(\mathcal{J}\), the DAPs are chosen under the constraint that \(j_{t}\) is a common value in all selected subsets, i.e., we have \(\bigcap_{c_{t}=1}^{C_{t}}\mathcal{Q}_{c_{t}}^{t}=j_{t}\). Upon invoking the DAPs \(\mathcal{Q}_{c_{t}}^{t}\) as the _a priori_ information, the ensuing APM symbol estimation can be formulated as the optimization problem of \(\hat{\mathbf{K}}_{c_{t},d}=\underset{\mathbf{a}\in\mathbb{C}^{M_{d}\times 1}}{\arg\min}||\tilde{\mathbf{y}}-\mathbf{C}_{\mathcal{Q}_{c_{t}}^{t}}\mathbf{a}||^{2}\). Then the corresponding least square solution can be formulated as
\[\hat{\mathbf{K}}_{c_{t},d}=\mathbf{C}_{\mathcal{Q}_{c_{t}}^{t}}^{\dagger}\tilde{\mathbf{y}}=\mathbf{C}_{\mathcal{Q}_{c_{t}}^{t}}^{\dagger}\mathbf{C}_{\mathcal{I}}\mathbf{K}_{d}+\mathbf{C}_{\mathcal{Q}_{c_{t}}^{t}}^{\dagger}\tilde{\mathbf{n}}=\mathbf{K}_{d}+\mathbf{r}_{\mathcal{Q}_{c_{t}}^{t},\mathcal{I}}+\bar{\mathbf{n}}, \tag{27}\]
where we have the residual interference \(\mathbf{r}_{\mathcal{Q}_{c_{t}}^{t},\mathcal{I}}=\mathbf{0}\) in the case that all the index symbols are detected correctly, i.e., \(\mathcal{Q}_{c_{t}}^{t}=\mathcal{I}\), and \(\bar{\mathbf{n}}=\mathbf{C}_{\mathcal{Q}_{c_{t}}^{t}}^{\dagger}\tilde{\mathbf{n}}\) is the corresponding AWGN vector. Based on the DAPs \(\mathcal{Q}_{c_{t}}^{t}\), the estimates of the APM symbols \(\mathbf{f}_{c_{t}}^{t}=[f_{c_{t}}^{t}(0),\ldots,f_{c_{t}}^{t}(M_{d}-1)]^{T}\) can be obtained by symbol-wise ML detection, yielding,
\[f_{c_{t}}^{t}(m_{d})=\underset{f_{v}\in\mathcal{F}}{\arg\min}\left|\hat{K}_{c_{t},d}(m_{d})-f_{v}\right|^{2}, \tag{28}\]
for \(m_{d}=0,\ldots,M_{d}-1\) and \(v=1,\ldots,V\). Hence, after testing all the \(C_{t}\) DAPs, the PRCGD delivers the corresponding APM candidate sets \(\tilde{\mathcal{F}}^{t}=\{\mathbf{f}_{1}^{t},\ldots,\mathbf{f}_{C_{t}}^{t}\}\).
Now we have obtained the estimates of the APM symbols and the DAPs grouped as \(\{\mathbf{f}^{t}_{c_{t}},\mathcal{Q}^{t}_{c_{t}}\}_{c_{t}=1}^{C_{t}}\). Therefore, the locally optimal set can be formulated as \((\mathcal{I}^{t},\mathbf{K}^{t}_{d})=\operatorname*{arg\,min}_{\mathcal{Q}^{t}_{c_{t}}\subset\mathcal{Q}^{t},\,\mathbf{f}^{t}_{c_{t}}\in\tilde{\mathcal{F}}^{t}}\left\{\left\|\tilde{\mathbf{y}}-\mathbf{C}_{\mathcal{Q}^{t}_{c_{t}}}\mathbf{f}^{t}_{c_{t}}\right\|^{2}\right\}\), where the residual error can be expressed as \(\epsilon^{t}=\left\|\tilde{\mathbf{y}}-\mathbf{C}_{\mathcal{I}^{t}}\mathbf{K}^{t}_{d}\right\|^{2}\). Typically, the progressive detection terminates in the case of \(\epsilon^{t}<\epsilon_{0}\), where \(\epsilon_{0}\) is the predefined termination parameter. However, if this condition cannot be satisfied after testing all DAPs, the PRCGD returns the corresponding set with the minimum residual error. The proposed PRCGD is summarized in Algorithm 1.
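The following condensed sketch illustrates the two PRCGD stages on a toy model: LMMSE soft estimation with magnitude-based reliability ordering as in (26), followed by the least-squares projection and symbol-wise slicing of (27)-(28) for a single tested DAP; the system matrix, constellation and sizes are assumed placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)
Q, Md, Nobs, gamma_s = 2, 4, 16, 10.0
psk = np.exp(2j * np.pi * np.arange(4) / 4)                 # QPSK constellation F

C = (rng.standard_normal((Nobs, Q * Md)) + 1j * rng.standard_normal((Nobs, Q * Md))) / np.sqrt(2)
# Ground truth: one active entry per RB (the DAP) carrying a QPSK symbol.
dap_true = rng.integers(0, Q, Md)
K = np.zeros(Q * Md, dtype=complex)
K[np.arange(Md) * Q + dap_true] = psk[rng.integers(0, 4, Md)]
y = C @ K + 0.05 * (rng.standard_normal(Nobs) + 1j * rng.standard_normal(Nobs))

# Stage 1: LMMSE soft estimate of K, as used in the reliability-sorting stage (26).
K_soft = np.linalg.solve(C.conj().T @ C + np.eye(Q * Md) / gamma_s, C.conj().T @ y)

# Stage 2 (one tested DAP): keep the most reliable entry of each RB, then apply
# the least-squares projection of (27) and the symbol-wise ML slicing of (28).
dap_hat = np.array([max(range(Q), key=lambda q: abs(K_soft[m * Q + q])) for m in range(Md)])
cols = np.arange(Md) * Q + dap_hat
K_ls, *_ = np.linalg.lstsq(C[:, cols], y, rcond=None)
apm_hat = psk[np.argmin(np.abs(K_ls[:, None] - psk[None, :]), axis=1)]
print("DAP correct:", np.array_equal(dap_hat, dap_true), "APM estimates:", np.round(apm_hat, 2))
```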
### _Iterative Reduced-Space Check Detector (IRCD)_
The main philosophy of the IRCD is to carry out the detection of the DAPs and of the APM symbols separately. Then near-ML detection performance can be obtained by only testing a reduced set of the entire DAP space \(\mathcal{Q}\). In contrast to the PRCGD, the index reliabilities of all elements in \(\tilde{\mathbf{K}}\) are invoked in the IRCD. Explicitly, the IRCD first derives the index reliability metric of each DAP \(\mathcal{Q}_{c}\) after employing the LMMSE detection, yielding \(\rho_{c}=\sum_{m_{d}=0}^{M_{d}-1}\left|\tilde{K}(i^{c}_{m_{d}})\right|^{2}\) for \(i^{c}_{m_{d}}=\mathcal{Q}_{c}(m_{d})\in\{0,\ldots,QM_{d}-1\}\) and \(c=1,\ldots,C\). Then the reliability metrics of all \(C=Q^{M_{d}}\) DAPs are sorted in descending order, which can be formulated as
\[\mathcal{R}=\{i_{1},\ldots,i_{C}\}\quad\text{subject to }\rho_{i_{1}}\geqslant \ldots\geqslant\rho_{i_{C}}, \tag{29}\]
where we have \(i_{c}\in\{1,\ldots,C\}\) for \(c=1,\ldots,C\) and \(\rho_{i_{p}}\neq\rho_{i_{q}}\), \(\forall p\neq q\). Similar to the PRCGD, the DAP associated with a higher value \(\rho_{i_{c}}\) in \(\mathcal{R}\) can be regarded as the correct result with a higher probability, especially in the high-SNR scenarios. It should be noted that our IRCD first tests the DAPs corresponding to \(\rho_{i_{1}}\) with the highest priority in the following stage, where we carry out the detection of the APM symbols, as discussed below.
In the second stage, the proposed IRCD first selects the DAP \(\mathcal{I}^{t}\) based on the reliability metric \(\rho_{i_{t}}\) in the \(t\)th iteration. Consequently, the detected APM symbols \(\mathbf{K}^{t}_{d}\in\mathcal{F}^{M_{d}}\) can be obtained based on the least square approach and the symbol-wise ML detection of (27) and (28), respectively. Hence, the detected set and APM symbols in the \(t\)th iteration can be grouped as \(\mathcal{G}^{t}=\{\mathcal{I}^{t},\mathbf{K}^{t}_{d}\}\). Assume that there are \(T_{2}\) DAPs to be tested during the second stage. Hence, the final detected DAP and the APM symbols can be obtained as \((\mathcal{I}^{\text{IRCD}},\mathbf{K}^{\text{IRCD}}_{d})=\operatorname*{arg\,min}\limits_{(\mathcal{I}^{t},\mathbf{K}^{t}_{d})\in\mathcal{G}}\left\|\tilde{\mathbf{y}}-\mathbf{C}_{\mathcal{I}^{t}}\mathbf{K}^{t}_{d}\right\|^{2}\), where \(\mathcal{G}=\{\mathcal{G}^{1}\cup\mathcal{G}^{2}\ldots\cup\mathcal{G}^{T_{2}}\}\). Our proposed IRCD is summarized in Algorithm 2.
```
0:\(\tilde{\mathbf{y}}\), \(\mathbf{C}\), \(\mathcal{Q}\) and \(\gamma_{s}\).
1: Preparation: Set the maximum number of iterations \(T_{2}\), and \(\mathcal{G}=\emptyset\).
2:\(//\)Reliability Sorting:
3: Employ LMMSE detection as:
4:\(\tilde{\mathbf{K}}=\left(\mathbf{C}^{H}\mathbf{C}+\frac{1}{\gamma_{s}}\mathbf{I}_{QM_{d}}\right)^{-1}\mathbf{C}^{H}\tilde{\mathbf{y}}\).
5: Compute the index reliability metrics based on the DAPs \(\mathcal{Q}_{c}\) as \(\rho_{c}=\sum_{m_{d}=0}^{M_{d}-1}\left|\tilde{K}(i^{c}_{m_{d}})\right|^{2}\) for \(i^{c}_{m_{d}}\in\{0,\ldots,QM_{d}-1\}\) and \(c=1,\ldots,C\).
6: Obtain the measurements of the index reliabilities as \(\mathcal{R}=\{i_{1},\ldots,i_{C}\}\quad\text{subject to }\rho_{i_{1}} \geqslant\ldots\geqslant\rho_{i_{C}}\).
7:\(//\)Reduced-space Check Detection:
8:for\(t=1\) to \(T_{2}\)do
9: Collect the DAP \(\mathcal{I}^{t}\) according to \(\rho_{i_{t}}\).
10: Carry out APM symbol estimation \(\mathbf{K}^{t}_{d}\) based on (27) and (28).
11: Obtain the detected results as \(\mathcal{G}^{t}=\{\mathcal{I}^{t},\mathbf{K}^{t}_{d}\}\).
12: Calculate the residual error as
13:\(\epsilon^{t}=\left\|\tilde{\mathbf{y}}-\mathbf{C}_{\mathcal{I}^{t}}\mathbf{K}^{t}_{d}\right\| ^{2}\).
14: Group the detection candidate sets as \(\mathcal{G}=\mathcal{G}\cup\mathcal{G}^{t}\).
15:endfor
16: Compute the final detection results as
17:\((\mathcal{I}^{\text{IRCD}},\mathbf{K}^{\text{IRCD}}_{d})=\operatorname*{arg\,min}\limits_{(\mathcal{I}^{t},\mathbf{K}^{t}_{d})\in\mathcal{G}}\left\|\tilde{\mathbf{y}}-\mathbf{C}_{\mathcal{I}^{t}}\mathbf{K}^{t}_{d}\right\|^{2}\).
18:return\(\mathcal{I}^{\text{IRCD}}\) and \(\mathbf{K}^{\text{IRCD}}_{d}\).
```
**Algorithm 2** Iterative Reduced-space Check Detector
### _Detection Complexity Analysis_
It can be concluded from (19) that the MLD evaluates all the \(M_{d}\) STSK blocks. Hence the complexity of the optimum MLD is on the order of \(\mathcal{O}[(VQ)^{M_{d}}]\), which is excessive for high values of \(M_{d}\).
Based on our analysis in Section III-B, it can be readily shown that the complexity of the PRCGD relies on the number of iterations and the value of \(C_{t}\). In more detail, we have the best-case scenario if the PRCGD terminates after the first iteration and only a single DAP is considered. Therefore, the corresponding complexity is given by \(\mathcal{O}(M_{d}V)\), since only the symbol-wise detection of (28) is employed. By contrast, the worst-case scenario is when all the DAPs in \(\mathcal{Q}\) are tested. Since there are \(C=Q^{M_{d}}\) DAPs, the overall complexity is on the order of \(\mathcal{O}(Q^{M_{d}}M_{d}V)\). In more general cases, our PRCGD only has to consider a subset of \(\mathcal{Q}\), having \(C_{1}<C\) DAPs. Under this condition, the complexity of the PRCGD is on the order of \(\mathcal{O}(C_{1}M_{d}V)\).
Based on Section III-C, the complexity of each IRCD iteration is on the order of \(\mathcal{O}(M_{d}V)\). Hence, the overall complexity of IRCD is given by \(\mathcal{O}(T_{2}M_{d}V)\), and the worst scenario happens when the IRCD terminates after \(T_{2}=Q^{M_{d}}\) iterations, i.e., all the DAPs are tested. However, since the IRCD measures the index reliabilities of all DAPs, near-ML detection performance can be achieved by only testing a small subset of the entire DAP set, which is shown in Section V. Therefore, it can be readily demonstrated that the complexity of our IRCD can be significantly lower than that of the MLD.
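As a quick numerical illustration of these orders (with assumed toy parameters), the snippet below evaluates the MLD, worst-case PRCGD and IRCD complexity expressions side by side.

```python
# Toy parameter values; T2 is taken as a fraction of the DAP space as in Section V.
Q, V, Md, T2_frac = 2, 4, 4, 5 / 8

mld      = (V * Q) ** Md                      # O[(VQ)^Md]
prcgd_wc = (Q ** Md) * Md * V                 # O(Q^Md * Md * V), worst case
ircd     = int(T2_frac * Q ** Md) * Md * V    # O(T2 * Md * V)
print(mld, prcgd_wc, ircd)
```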
### _System Complexity Analysis_
The system complexity of the SM/STSK-based schemes is identical to the corresponding MLD complexity [20], hence we provide our system complexity analysis in this section. We commence by specifying the SIMO-OTFS, SM-OTFS and STSK-OFDM-MA systems with the aid of their parameters as \((N_{r},V)\), \((N_{t},N_{r},V)\) and \((N_{t},N_{r},T_{c},Q,V)\), respectively. Since all the combinations of the symbols have to be evaluated in the MLD, the complexity of SIMO-OTFS \((N_{r},V)\) and SM-OTFS \((N_{t},N_{r},V)\) is on the order of \(\mathcal{O}(V^{MN})\) and \(\mathcal{O}[(N_{t}V)^{MN}]\)[22], respectively.
The STSK-aided OFDM-MA (STSK-OFDM-MA) \((N_{t},N_{r},T_{c},Q,V)\) system can be viewed as a special case of STSK-OTFS-MA \((N_{t},N_{r},T_{c},Q,V)\) associated with \(N=1\). Therefore, the corresponding system complexity can be expressed as \(\mathcal{O}[(QV)^{M}]\).
## IV Performance Analysis of the Single-User System and Dispersion Matrix Design
In this section, we commence with the BER analysis of the single-user STSK-OTFS-MA system and derive its performance upper-bound, referred to as the SU-UPEP. Then the diversity order and ST coding gain are derived. Moreover, the single-user DCMC capacity of the STSK-OTFS-MA system is discussed. Finally, based on the SU-UPEP and the DCMC capacity, we gain deeper insights into the criteria of DM design in Section IV-D, and the algorithm for optimizing the DMs is derived.
### _Analysis of Single-User Bit Error Ratio Performance_
The vector-form input-output relationship in (12) can be expressed as \(\left(\mathbf{y}_{n_{r},n_{t},t_{c}}^{(u)}\right)^{T}=\tilde{\mathbf{h}}_{n_{r},n_{t}}^{(u)}\tilde{\mathbf{X}}_{n_{t},t_{c}}^{(u)}+\left(\mathbf{n}_{n_{r},n_{t},t_{c}}^{(u)}\right)^{T}\), where \(\tilde{\mathbf{h}}_{n_{r},n_{t}}^{(u)}=\left[\tilde{h}_{n_{r},n_{t}}^{(u)}(1),\ldots,\tilde{h}_{n_{r},n_{t}}^{(u)}(P)\right]\) with \(\tilde{h}_{n_{r},n_{t}}^{(u)}(i)=h_{i,n_{r},n_{t}}^{(u)}e^{-j2\pi\frac{l_{i}k_{i}}{M_{d}}}\), and the \(m_{d}\)th column of \(\tilde{\mathbf{X}}_{n_{t},t_{c}}^{(u)}\in\mathbb{C}^{P\times M_{d}}\) can be obtained as
\[\tilde{\mathbf{X}}_{n_{t},t_{c}}^{(u)}[:,m_{d}]=\begin{bmatrix}x_{n_{t},t_{c}}^{(u)}([k-k_{1}]_{N}+N[l-l_{1}]_{M})\\ \vdots\\ x_{n_{t},t_{c}}^{(u)}([k-k_{P}]_{N}+N[l-l_{P}]_{M})\end{bmatrix}, \tag{30}\]
where we have \(m_{d}=k+Nl\) for \(k=0,\ldots,N-1\) and \(l=0,\ldots,M-1\). Similar to (13), the \(n_{r}\)th received codeword vector within the \(t_{c}\)th OTFS time-slot can be formulated as
\[\mathbf{y}_{n_{r},t_{c}}^{T}=\sum_{u=0}^{U-1}\sum_{n_{t}=0}^{N_{t}-1}\tilde{\mathbf{h}}_{n_{r},n_{t}}^{(u)}\tilde{\mathbf{X}}_{n_{t},t_{c}}^{(u)}+\mathbf{n}_{n_{r},t_{c}}^{T}=\sum_{u=0}^{U-1}\tilde{\mathbf{h}}_{n_{r}}^{(u)}\tilde{\mathbf{X}}_{t_{c}}^{(u)}+\mathbf{n}_{n_{r},t_{c}}^{T} \tag{31}\]
where we have \(\tilde{\mathbf{h}}_{n_{r}}^{(u)}=\left[\tilde{\mathbf{h}}_{0,n_{r}}^{(u)},\ldots,\tilde{\mathbf{h}}_{N_{t}-1,n_{r}}^{(u)}\right]\) and \(\tilde{\mathbf{X}}_{t_{c}}^{(u)}=\operatorname{sta}(\tilde{\mathbf{X}}_{n_{t},t_{c}}^{(u)})|_{n_{t}=0}^{N_{t}-1}\). Therefore, the end-to-end input-output relationship of the \(t_{c}\)th time-slot is given by \(\tilde{\mathbf{Y}}_{t_{c}}=\sum_{u=0}^{U-1}\tilde{\mathbf{H}}^{(u)}\tilde{\mathbf{X}}_{t_{c}}^{(u)}+\tilde{\mathbf{n}}_{t_{c}}\) for \(t_{c}=0,\ldots,T_{c}-1\), where \(\tilde{\mathbf{Y}}_{t_{c}}=\left[\mathbf{y}_{0,t_{c}},\ldots,\mathbf{y}_{N_{r}-1,t_{c}}\right]^{T}\) is the matrix of received signals, \(\tilde{\mathbf{n}}_{t_{c}}=\left[\mathbf{n}_{0,t_{c}},\ldots,\mathbf{n}_{N_{r}-1,t_{c}}\right]^{T}\) denotes the noise matrix, and the channel matrix can be expressed as \(\tilde{\mathbf{H}}^{(u)}=\operatorname{sta}(\tilde{\mathbf{h}}_{n_{r}}^{(u)})|_{n_{r}=0}^{N_{r}-1}\). Finally, by defining \(\mathbf{Y}=\left[\tilde{\mathbf{Y}}_{0},\ldots,\tilde{\mathbf{Y}}_{T_{c}-1}\right]\), \(\tilde{\mathbf{X}}^{(u)}=\left[\tilde{\mathbf{X}}_{0}^{(u)},\ldots,\tilde{\mathbf{X}}_{T_{c}-1}^{(u)}\right]\) and \(\tilde{\mathbf{n}}=\left[\tilde{\mathbf{n}}_{0},\ldots,\tilde{\mathbf{n}}_{T_{c}-1}\right]\), the input-output relationship for the entire transmitted frame can be formulated as
\[\mathbf{Y}=\sum_{u=0}^{U-1}\tilde{\mathbf{H}}^{(u)}\tilde{\mathbf{X}}^{(u)}+ \tilde{\mathbf{n}}=\mathbf{H}\mathbf{X}+\tilde{\mathbf{n}}, \tag{32}\]
where we have \(\mathbf{X}=\operatorname{sta}\{\tilde{\mathbf{X}}^{(u)}\}|_{u=0}^{U-1}\) and \(\mathbf{H}=\left[\tilde{\mathbf{H}}^{(0)},\ldots,\tilde{\mathbf{H}}^{(U-1)}\right]\). In a single-user scenario, we have \(\mathbf{H}\in\mathbb{C}^{N_{r}\times PN_{t}}\) and \(\mathbf{X}\in\mathbb{C}^{PN_{t}\times M_{d}T_{c}}\). For notational simplicity, the index \((u)\) is omitted in the rest of this section, since only a single user is considered. Therefore, the MLD associated with the input-output relationship shown in (32) can be formulated as \(\mathbf{X}^{\text{ML}}=\operatorname*{arg\,min}_{\mathbf{D}_{i}\in\mathcal{D}}\left\{||\mathbf{Y}-\mathbf{H}\mathbf{D}_{i}||^{2}\right\}\), where the equivalent candidate space \(\mathcal{D}\) can be formulated based on \(\mathcal{B}\) in (21) and the mapping relationship in (30), yielding \(\mathcal{D}=\left\{\mathbf{D}_{1},\ldots,\mathbf{D}_{2^{L}}:\mathbf{D}_{i}\in\mathbb{C}^{PN_{t}\times M_{d}T_{c}},i=1,\ldots,2^{L}\right\}\).
Let us consider the pairwise error event \(\{\mathbf{X}^{c}\rightarrow\mathbf{X}^{e}\}\), where \(\mathbf{X}^{c}=\mathbf{D}_{i}\) denotes the transmitted codeword matrix, while \(\mathbf{X}^{e}=\mathbf{D}_{j}\), \(\forall i\neq j\), represents the erroneous detection result of the MLD, i.e., we have \(\mathbf{D}_{i}\neq\mathbf{D}_{j}\) for \(\mathbf{D}_{i},\mathbf{D}_{j}\in\mathcal{D}\). Let us furthermore define the error matrix space \(\mathcal{E}=\{\mathbf{E}=\mathbf{D}_{i}-\mathbf{D}_{j},\forall\mathbf{D}_{i},\mathbf{D}_{j}\in\mathcal{D},\forall i\neq j\}\). Then, the conditional PEP for a given channel matrix \(\mathbf{H}\) is obtained as \(P_{E}(\mathbf{X}^{c}\rightarrow\mathbf{X}^{e}|\mathbf{H})=P\left(||\mathbf{Y}-\mathbf{H}\mathbf{X}^{c}||^{2}\geqslant||\mathbf{Y}-\mathbf{H}\mathbf{X}^{e}||^{2}\right)\). Let us denote the elements of the corresponding matrices as \(H(a,b)\), \(X(b,c)\), \(n(a,c)\) and \(Y(a,c)\) for \(a=0,\ldots,N_{r}-1\), \(b=0,\ldots,PN_{t}-1\) and \(c=0,\ldots,M_{d}T_{c}-1\), respectively. Then, after some further algebraic simplifications, it can be shown that \(P_{E}(\mathbf{X}^{c}\rightarrow\mathbf{X}^{e}|\mathbf{H})\) may be written as
\[P_{E}(\mathbf{X}^{c}\rightarrow\mathbf{X}^{e}|\mathbf{H})=P\left(\sum_{c=0}^{M_{d}T_{c}-1}\sum_{a=0}^{N_{r}-1}\Re\left\{n^{*}(a,c)\sum_{b=0}^{PN_{t}-1}z\right\}\geqslant\frac{1}{2}\sum_{c=0}^{M_{d}T_{c}-1}\sum_{a=0}^{N_{r}-1}\left|\sum_{b=0}^{PN_{t}-1}z\right|^{2}\right), \tag{33}\]
where \(z=H(a,b)\left[X^{e}(b,c)-X^{c}(b,c)\right]\). Consequently, when defining the modified Euclidean distance between the two codeword matrices \(\mathbf{X}^{c}\) and \(\mathbf{X}^{e}\) as \(\delta(\mathbf{X}^{c},\mathbf{X}^{e})=\sum_{c=0}^{M_{d}T_{c}-1}\sum_{a=0}^{N_{r}-1}\left|\sum_{b=0}^{PN_{t}-1}z\right|^{2}=||\mathbf{H}(\mathbf{X}^{e}-\mathbf{X}^{c})||^{2}\), and considering that \(\sum_{c=0}^{M_{d}T_{c}-1}\sum_{a=0}^{N_{r}-1}\Re\left\{n^{*}(a,c)\sum_{b=0}^{PN_{t}-1}z\right\}\) in (33) is a real-valued Gaussian random variable with zero mean and a variance of \(N_{0}\delta(\mathbf{X}^{c},\mathbf{X}^{e})/2\), the conditional PEP can be expressed as
\[P_{E}(\mathbf{X}^{c}\rightarrow\mathbf{X}^{e}|\mathbf{H})=Q\left(\sqrt{\frac{\gamma}{2}\delta(\mathbf{X}^{c},\mathbf{X}^{e})}\right). \tag{34}\]
Upon averaging (34) over the channel statistics with the aid of Craig's form of the Gaussian \(Q\)-function and the moment-generating function (MGF) of \(\delta(\mathbf{X}^{c},\mathbf{X}^{e})\), the SU-UPEP can be written as
\[P_{E}(\mathbf{X}^{c}\rightarrow\mathbf{X}^{e})=\frac{1}{\pi}\int_{0}^{\frac{\pi}{2}}\Gamma_{\delta(\mathbf{X}^{c},\mathbf{X}^{e})}\left(-\frac{\gamma}{4\sin^{2}\theta}\right)d\theta. \tag{35}\]
Then, by defining the codeword difference matrix as \(\mathbf{R}=\mathbf{E}\mathbf{E}^{H}\) with \(\mathbf{E}=\mathbf{X}^{c}-\mathbf{X}^{e}\in\mathcal{E}\), and noting that the elements of \(\mathbf{H}\) obey \(\mathcal{CN}(0,1/P)\) [23], we can derive the MGF \(\Gamma_{\delta(\mathbf{X}^{c},\mathbf{X}^{e})}(t)\) based on the approach of [24], yielding
\[\Gamma_{\delta(\mathbf{X}^{c},\mathbf{X}^{e})}(t)=\det[\mathbf{I}_{PN_{t}N_{r}}-t(\mathbf{I}_{N_{r}}\otimes\mathbf{R})/P]^{-1}. \tag{36}\]
Upon substituting (36) into (35), the SU-UPEP can now be expressed as
\[P_{E}(\mathbf{X}^{c}\rightarrow\mathbf{X}^{e})=\frac{1}{\pi}\int_{0}^{\frac{\pi}{2}}\left[\det\left(\mathbf{I}_{N_{r}PN_{t}}+\frac{\gamma}{4P\sin^{2}\theta}(\mathbf{I}_{N_{r}}\otimes\mathbf{R})\right)\right]^{-1}d\theta=\frac{1}{\pi}\int_{0}^{\frac{\pi}{2}}\left[\det\left(\mathbf{I}_{PN_{t}}+\frac{\gamma}{4P\sin^{2}\theta}\mathbf{R}\right)\right]^{-N_{r}}d\theta. \tag{37}\]
Let us define \(r=\text{rank}(\mathbf{R})\) and the nonzero eigenvalues of \(\mathbf{R}\) as \(\{\lambda_{1},\ldots,\lambda_{r}\}\). Then (37) can be expressed as
\[P_{E}(\mathbf{X}^{c}\rightarrow\mathbf{X}^{e})=\frac{1}{\pi}\int_{0}^{\frac{\pi}{2}}\left[\prod_{j=1}^{r}\left(1+\frac{\lambda_{j}\gamma}{4P\sin^{2}\theta}\right)\right]^{-N_{r}}d\theta. \tag{38}\]
Finally, by leveraging the union bound technique, the average bit error ratio (ABER) of the single-user STSK-OTFS-MA system can be approximated as
\[P_{e}\approx\frac{1}{2^{L}L}\sum_{\mathbf{b}^{c}}\sum_{\mathbf{b}^{e}}D_{b}(\mathbf{b}^{c},\mathbf{b}^{e})P_{E}(\mathbf{X}^{c}\rightarrow\mathbf{X}^{e}), \tag{39}\]
where \(D_{b}(\cdot,\cdot)\) denotes the Hamming distance function between two bit sequences, while \(\mathbf{b}^{c}\) and \(\mathbf{b}^{e}\) are the corresponding binary representations of \(\mathbf{X}^{c}\) and \(\mathbf{X}^{e}\).
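For illustration, the PEP integral of (38) can be evaluated numerically as sketched below; the eigenvalues of \(\mathbf{R}\) and the system parameters are placeholders chosen purely for demonstration.

```python
import numpy as np

def pep_from_eigs(eigs, snr, P, Nr, n_grid=2000):
    """Numerically evaluate the single-event PEP integral of (38)."""
    theta = np.linspace(1e-6, np.pi / 2, n_grid)
    integrand = np.ones_like(theta)
    for lam in eigs:
        integrand *= (1.0 + lam * snr / (4 * P * np.sin(theta) ** 2)) ** (-Nr)
    return np.trapz(integrand, theta) / np.pi

eigs = [1.8, 0.9, 0.3]           # example nonzero eigenvalues of R (placeholders)
for snr_db in (0, 10, 20):
    snr = 10 ** (snr_db / 10)
    print(snr_db, "dB ->", pep_from_eigs(eigs, snr, P=2, Nr=2))
```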
### _Diversity Order and Coding Gain_
In (38) we have \(\lambda_{j}\gamma/(4P\sin^{2}\theta)\geqslant\lambda_{j}\gamma/4P\). Hence, the upper-bound of \(P_{E}(\mathbf{X}^{c}\rightarrow\mathbf{X}^{e})\) can be formulated as
\[P_{E}(\mathbf{X}^{c}\rightarrow\mathbf{X}^{e})\leqslant\frac{1}{2}\left[\prod_{j=1}^{r}\left(1+\frac{\lambda_{j}\gamma}{4P}\right)\right]^{-N_{r}}. \tag{40}\]
Moreover, for high SNRs (\(\gamma\gg 1\)), (40) can be formulated as
\[P_{E}(\mathbf{X}^{c}\rightarrow\mathbf{X}^{e})\leqslant\frac{1}{2}\left[\left(\prod_{j=1}^{r}\lambda_{j}\right)^{1/r}\left(\frac{\gamma}{4P}\right)\right]^{-rN_{r}}, \tag{41}\]
where the exponent of the SNR is often referred to as the **diversity order** obtained by the MLD, which is
\[G_{D}=\min_{\forall\mathbf{E}\in\mathcal{E}}rN_{r}=\min_{\forall \mathbf{E}\in\mathcal{E}}\text{rank}(\mathbf{R})\cdot N_{r}, \tag{42}\]
and the maximum achievable diversity order is \(G_{\text{D,max}}=\min\{PN_{t},M_{d}T_{c}\}\cdot N_{r}\). Given the values of \(P\) and \(M_{d}\), it can be observed that the maximum achievable diversity order depends on the settings of \(N_{t}\) and \(T_{c}\). Although the diversity order can be increased by increasing the value of \(T_{c}\) in the case of \(PN_{t}>M_{d}T_{c}\), the transmit diversity order cannot be further improved if we increase \(M_{d}T_{c}\) beyond \(PN_{t}\). In more detail, a system with a lower value of \(T_{c}\) may attain a higher transmission rate in (21) as well as a lower computational complexity, since the dimension of the DMs is lower.
The coding gain of the STSK-OTFS-MA system can be expressed as \(G_{C}=\min_{\forall\mathbf{E}\in\mathcal{E}}\left(\prod_{j=1}^{r}\lambda_{j}\right)^{1/r}\). It can be inferred from (41) that the diversity order \(G_{D}\) dominates the decay-rate of the SU-UPEP as the value of the SNR increases. Furthermore, the coding gain \(G_{C}\) determines the horizontal shift of the STSK-aided SU-UPEP curve from the benchmark SU-UPEP curve of \(\frac{1}{2}(\gamma/4P)^{-G_{D}}\).
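A small sketch of how \(G_{D}\) and \(G_{C}\) can be evaluated over a given error matrix space is shown below; the error matrices here are random placeholders standing in for the true space \(\mathcal{E}\).

```python
import numpy as np

rng = np.random.default_rng(5)
P, Nt, Md, Tc, Nr = 2, 2, 4, 2, 2
# Placeholder error matrices E in C^{(P*Nt) x (Md*Tc)} standing in for the space E.
error_space = [rng.standard_normal((P * Nt, Md * Tc)) + 1j * rng.standard_normal((P * Nt, Md * Tc))
               for _ in range(6)]

ranks, gains = [], []
for E in error_space:
    R = E @ E.conj().T                        # codeword difference matrix R = E E^H
    eigs = np.linalg.eigvalsh(R)
    nz = eigs[eigs > 1e-10]                   # nonzero eigenvalues lambda_1,...,lambda_r
    ranks.append(len(nz))
    gains.append(np.prod(nz) ** (1.0 / len(nz)))

G_D = min(ranks) * Nr                          # diversity order of (42)
G_C = min(gains)                               # coding gain
print(G_D, G_C)
```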
### _DCMC Capacity_
Now we derive the DCMC capacity of the single-user STSK-OTFS-MA scheme. Based on (19) and (21), the single-user DCMC capacity can be formulated as [16]
\[C_{\text{DCMC}} =\frac{1}{M_{d}T_{c}}\max_{p(\mathbf{B}_{1})\ldots p(\mathbf{B}_{2^{L}})} \sum_{i=1}^{2^{L}}\int_{-\infty}^{\infty}\ldots\int_{-\infty}^{\infty}p(\tilde{ \mathbf{y}}|\mathbf{B}_{i})p(\mathbf{B}_{i})\] \[\times\log_{2}\left[\frac{p(\tilde{\mathbf{y}}|\mathbf{B}_{i})}{\sum_{j=1 }^{2^{L}}p(\tilde{\mathbf{y}}|\mathbf{B}_{j})p(\mathbf{B}_{j})}\right]d\tilde{\mathbf{y}}. \tag{43}\]
where \(p(\tilde{\mathbf{y}}|\mathbf{B}_{i})\) is given by (22), when assuming that \(\mathbf{B}_{i}\) is transmitted. It should be noted that (43) is maximized under the condition that all candidate matrices in the space \(\mathcal{B}\) are independent and equiprobable, i.e., \(p(\mathbf{B}_{i})=1/2^{L},\forall i\). Hence we have
\[\log_{2}\left[\frac{p(\tilde{\mathbf{y}}|\mathbf{B}_{i})}{\sum_{j=1}^{2^{L }}p(\tilde{\mathbf{y}}|\mathbf{B}_{j})p(\mathbf{B}_{j})}\right]=\log_{2}\left(2^{L}\right)- \log_{2}\sum_{j=1}^{2^{L}}\exp(\Psi_{i,j}), \tag{44}\]
where \(\Psi_{i,j}=\gamma\left[-||\mathbf{C}(\mathbf{B}_{i}-\mathbf{B}_{j})+\tilde{\mathbf{n}}||^{2}+ ||\tilde{\mathbf{n}}||^{2}\right]\) can be formulated by substituting (22) into (44). Therefore, based on the assumption that the codeword matrix candidates are transmitted at the same probabilities, the DCMC capacity of the single-user STSK-OTFS-MA system can be expressed as
\[C_{\text{DCMC}}=\frac{1}{M_{d}T_{c}}\left\{L-\frac{1}{2^{L}} \sum_{i=1}^{2^{L}}\mathbb{E}_{\mathbf{C}}\left[\log_{2}\sum_{j=1}^{2^{L}}\exp \left(\Psi_{i,j}\right)\right]\right\}. \tag{45}\]
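The expectation in (45) is typically evaluated by Monte Carlo averaging; the following toy sketch illustrates this for a small placeholder codebook and a random linear model playing the role of \(\mathbf{C}\).

```python
import numpy as np

rng = np.random.default_rng(6)
L, n_sym, n_obs, Md, Tc = 3, 4, 8, 4, 1        # toy sizes; 2^L candidate vectors
n_cand, n_trials, N0 = 2 ** L, 200, 0.1

codebook = [rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym) for _ in range(n_cand)]

cap = 0.0
for _ in range(n_trials):
    C = (rng.standard_normal((n_obs, n_sym)) + 1j * rng.standard_normal((n_obs, n_sym))) / np.sqrt(2)
    i = rng.integers(n_cand)                   # uniformly chosen transmitted candidate B_i
    n = np.sqrt(N0 / 2) * (rng.standard_normal(n_obs) + 1j * rng.standard_normal(n_obs))
    # Psi_{i,j} of (44): gamma * (||n||^2 - ||C(B_i - B_j) + n||^2), with gamma = 1/N0.
    psi = np.array([(np.linalg.norm(n) ** 2
                     - np.linalg.norm(C @ (codebook[i] - codebook[j]) + n) ** 2) / N0
                    for j in range(n_cand)])
    cap += L - np.log2(np.sum(np.exp(psi)))
print("DCMC capacity estimate [bits/s/Hz]:", cap / (n_trials * Md * Tc))
```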
### _Design of Dispersion Matrices_
Based on the above analysis, let us now discuss the design of DMs. To obtain the best system performance, the DMs can be designed based either on the SU-UPEP of (35) or on the DCMC capacity of (45).
The asymptotic diversity order of the OTFS system is one at an infinite SNR, and higher diversity can be attained in finite-SNR scenarios [23]. However, similar to OFDM, OTFS also requires transmit precoding schemes to attain full diversity [23]. Therefore, the DMs of both STSK-OTFS-MA and STSK-OFDM-MA systems can be designed with the aid of our proposed Algorithm 3.
It can be shown that the above two methods lead to the same design, detailed as follows.
**Proposition 1**: _The design of DMs aiming for minimizing the SU-UPEP in the high-SNR region of (41) and that aiming for maximizing the DCMC capacity of (45) lead to the same
results, which can be formulated as **maximizing** the following metrics:
\[\Lambda_{\text{D}}=\min_{\forall\mathbf{E}\in\mathcal{E}}\text{rank}(\mathbf{R}),\quad \Lambda_{\text{C}}=\min_{\forall\mathbf{E}\in\mathcal{E}}\prod_{j=1}^{r}\lambda_{j}. \tag{46}\]
_Proof:_ It can be readily shown that the elements of \(\{\mathbf{Y},\tilde{\mathbf{n}}\}\) in (32) are the interleaved versions of the elements of \(\{\tilde{\mathbf{y}},\tilde{\mathbf{n}}\}\) in (19). Therefore, given a pairwise error event \(\{\mathbf{X}^{c}\rightarrow\mathbf{X}^{e}\}\) and an SNR \(\gamma\), we have \(||\mathbf{C}(\mathbf{B}_{i}-\mathbf{B}_{j})||^{2}=||\mathbf{H}(\mathbf{D}_{i}-\mathbf{D}_{j})||^{2}\). Furthermore, similar to the derivation shown in Section IV-A, it can be readily shown based on (45) and the MGF technique that maximizing the DCMC capacity is equivalent to minimizing
\[\mathbb{E}_{\mathbf{H}}\left[\exp\left(-\gamma\delta(\mathbf{X}^{c},\mathbf{X}^{e})\right)\right]\leqslant\frac{1}{2}\left[\prod_{j=1}^{r}\left(1+\frac{\lambda_{j}\gamma}{P}\right)\right]^{-N_{r}}, \tag{47}\]
which can be further upper-bounded by \(\frac{1}{2}\left[\left(\prod_{j=1}^{r}\lambda_{j}\right)^{1/r}\left(\frac{ \gamma}{P}\right)\right]^{-rN_{r}}\) under the condition of \(\gamma\gg 1\). It may now be observed that (41) and (47) are in similar forms. Note that it is more important to maximize the diversity order \(G_{D}\) in (42) than to maximize the coding gain \(G_{C}\), since it is the diversity order that dominates the slope of the ABER curve [32]. Since the value of \(N_{r}\) is fixed, we arrive at the criteria \(\Lambda_{\text{D}}\) in (46) by searching for the minimum rank value within the corresponding error matrix space \(\mathcal{E}\). Furthermore, the coding gain is given by \(\min_{\forall\mathbf{E}\in\mathcal{E}}\left(\prod_{j=1}^{r}\lambda_{j}\right)^{ \frac{1}{\Lambda_{\text{D,max}}}}\). Since \(\Lambda_{\text{D,max}}\) is a constant for \(\forall\mathbf{E}\in\mathcal{E}\), the corresponding criteria \(\Lambda_{\text{C}}\) can be derived in (46). \(\blacksquare\)
Based on the above analysis, let us now delve into the details of designing the DM set. Let us assume that \(\tilde{N}\) consecutive Monte Carlo simulations are employed. Let us define the DM set, the design criteria, the codeword difference matrix, the error matrix, and the error matrix space of the \(\tilde{n}\)th experiment as \(\mathcal{A}_{\tilde{n}}\), \(\Lambda_{\text{D},\tilde{n}}\), \(\Lambda_{\text{C},\tilde{n}}\), \(\mathbf{R}_{\tilde{n}}\), \(\mathbf{E}_{\tilde{n}}\) and \(\mathcal{E}_{\tilde{n}}\) for \(\tilde{n}=1,\ldots,\tilde{N}\), respectively. For the \(\tilde{n}\)th experiment, we first randomly generate \(Q\) full-rank \((\tilde{T}\times\tilde{T})\)-dimensional unitary matrices \(\tilde{\mathbf{A}}_{q,\tilde{n}}\) for \(q=1,\ldots,Q\), where \(\tilde{T}=\max\{N_{t},T_{c}\}\). Then the DMs can be formulated as
\[\mathbf{A}_{q,\tilde{n}}=\begin{cases}\tilde{\mathbf{A}}_{q,\tilde{n}}[:,1:T_{c}],&\text{if }N_{t}>T_{c}\\ \sqrt{\frac{T_{c}}{N_{t}}}\tilde{\mathbf{A}}_{q,\tilde{n}}[1:N_{t},:],&\text{if }T_{c}>N_{t}, \end{cases} \tag{48}\]
where the constant \(\sqrt{\frac{T_{c}}{N_{t}}}\) is used for satisfying the power constraint in (3), and we have \(\mathcal{A}_{\tilde{n}}=\{\mathbf{A}_{1,\tilde{n}},\ldots,\mathbf{A}_{Q,\tilde{n}}\}\). Moreover, the DM sets of all \(\tilde{N}\) simulations are collected in \(\mathcal{A}=\{\mathcal{A}_{1},\ldots,\mathcal{A}_{\tilde{N}}\}\). Assuming that there are \(\hat{N}\) out of the \(\tilde{N}\) DM sets that maximize the diversity order, it can be readily shown from (46) that the candidates of the optimal DM set are obtained as \(\mathcal{\tilde{A}}=\underset{\mathcal{\tilde{A}}_{\tilde{n}}\subset\mathcal{A}}{\arg\max}\,\Lambda_{\text{D},\tilde{n}}=\underset{\mathcal{\tilde{A}}_{\tilde{n}}\subset\mathcal{A}}{\arg\max}\left\{\min_{\forall\mathbf{E}_{\tilde{n}}\in\mathcal{E}_{\tilde{n}}}\left\{\text{rank}(\mathbf{R}_{\tilde{n}})\right\}\right\}\), where \(\mathcal{\tilde{A}}=\left\{\mathcal{\tilde{A}}_{1},\ldots,\mathcal{\tilde{A}}_{\hat{N}}\right\},\mathcal{\tilde{A}}_{\tilde{n}}\subset\mathcal{A},\forall\tilde{n}\). Finally, the optimal DM set can be obtained as \(\mathcal{A}^{\text{opt}}=\underset{\mathcal{\tilde{A}}_{\tilde{n}}\subset\mathcal{\tilde{A}}}{\arg\max}\,\Lambda_{\text{C},\tilde{n}}=\underset{\mathcal{\tilde{A}}_{\tilde{n}}\subset\mathcal{\tilde{A}}}{\arg\max}\left\{\min_{\forall\mathbf{E}_{\tilde{n}}\in\mathcal{E}_{\tilde{n}}}\prod_{j=1}^{r}\lambda_{j}\right\}\). The proposed DM design method is summarized in Algorithm 3.
```
0: The values of \(Q\), \(T_{c}\), \(N_{t}\) and \(\tilde{T}=\max\{N_{t},T_{c}\}\).
1:Preparation: Set \(\tilde{N}\) as the number of Monte Carlo simulations.
2:for\(\tilde{n}=1\) to \(\tilde{N}\)do
3: Randomly generate \(Q\) full-rank \((\tilde{T}\times\tilde{T})\)-dimensional unitary matrices \(\{\tilde{\mathbf{A}}_{1,\tilde{n}},\ldots,\tilde{\mathbf{A}}_{Q,\tilde{n}}\}\).
4: Generate the DM set \(\mathcal{A}_{\tilde{n}}=\{\mathbf{A}_{1,\tilde{n}},\ldots,\mathbf{A}_{Q,\tilde{n}}\}\) based on (48) as
5:\(\mathbf{A}_{q,\tilde{n}}=\begin{cases}\tilde{\mathbf{A}}_{q,\tilde{n}}[:,1:T_{c}],&\text{if }N_{t}>T_{c}\\ \sqrt{\frac{T_{c}}{N_{t}}}\tilde{\mathbf{A}}_{q,\tilde{n}}[1:N_{t},:],&\text{if }T_{c}>N_{t}, \end{cases}\)
6:endfor
7: Collect all DM matrices \(\mathcal{A}=\{\mathcal{A}_{1},\ldots,\mathcal{A}_{\tilde{N}}\}\).
8: Obtain the \(\hat{N}\) out of \(\tilde{N}\) DM sets that achieve the maximum diversity order as
9:\(\mathcal{\tilde{A}}=\underset{\mathcal{\tilde{A}}_{\tilde{n}}\subset\mathcal{A}}{\arg\max}\,\Lambda_{\text{D},\tilde{n}}=\underset{\mathcal{\tilde{A}}_{\tilde{n}}\subset\mathcal{A}}{\arg\max}\left\{\min_{\forall\mathbf{E}_{\tilde{n}}\in\mathcal{E}_{\tilde{n}}}\left\{\text{rank}(\mathbf{R}_{\tilde{n}})\right\}\right\}\), where \(\mathcal{\tilde{A}}=\left\{\mathcal{\tilde{A}}_{1},\ldots,\mathcal{\tilde{A}}_{\hat{N}}\right\},\mathcal{\tilde{A}}_{\tilde{n}}\subset\mathcal{A},\forall\tilde{n}\).
10: Generate the optimal DM set as
11:\(\mathcal{A}^{\text{opt}}=\underset{\mathcal{\tilde{A}}_{\tilde{n}}\subset\mathcal{\tilde{A}}}{\arg\max}\,\Lambda_{\text{C},\tilde{n}}=\underset{\mathcal{\tilde{A}}_{\tilde{n}}\subset\mathcal{\tilde{A}}}{\arg\max}\left\{\min_{\forall\mathbf{E}_{\tilde{n}}\in\mathcal{E}_{\tilde{n}}}\prod_{j=1}^{r}\lambda_{j}\right\}\).
12:return\(\mathcal{A}^{\text{opt}}\).
```
**Algorithm 3** Dispersion Matrix Design
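A compact sketch in the spirit of Algorithm 3 is given below: it generates candidate DM sets from random unitary matrices, scores each set by the minimum rank and the minimum eigenvalue product of \(\mathbf{R}=\mathbf{E}\mathbf{E}^{H}\) over the pairwise codeword differences, and keeps the best set; the constellation, sizes and number of trials are illustrative assumptions.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(7)
Q, Nt, Tc, n_trials = 2, 2, 2, 20
psk = np.exp(2j * np.pi * np.arange(4) / 4)      # assumed QPSK constellation

def random_dm_set():
    """Generate Q DMs from random unitary matrices, scaled so that tr(A^H A) = Tc."""
    T = max(Nt, Tc)
    dms = []
    for _ in range(Q):
        A, _ = np.linalg.qr(rng.standard_normal((T, T)) + 1j * rng.standard_normal((T, T)))
        A = A[:Nt, :Tc]
        dms.append(A * np.sqrt(Tc / np.trace(A.conj().T @ A).real))
    return dms

def criteria(dms):
    """Return (min rank, min eigenvalue product) over all pairwise codeword differences."""
    codewords = [f * dms[q] for q, f in product(range(Q), psk)]
    ranks, prods = [], []
    for i, Xi in enumerate(codewords):
        for j, Xj in enumerate(codewords):
            if i == j:
                continue
            R = (Xi - Xj) @ (Xi - Xj).conj().T
            eig = np.linalg.eigvalsh(R)
            nz = eig[eig > 1e-10]
            ranks.append(len(nz))
            prods.append(np.prod(nz))
    return min(ranks), min(prods)

best_set, best_key = None, (-1, -1.0)
for _ in range(n_trials):
    dms = random_dm_set()
    key = criteria(dms)                 # lexicographic: rank first, then eigenvalue product
    if key > best_key:
        best_set, best_key = dms, key
print("selected DM set criteria (min rank, min product):", best_key)
```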
## V Performance Results
In this section, we provide simulation results for characterizing the overall performance of STSK-OTFS-MA systems. Unless otherwise specified, \(N=4\) time intervals are considered for an OTFS subframe, the entire OTFS frame has \(T_{c}=2\) subframes, and resource allocation Scheme 1 is invoked. The DMs are obtained from Algorithm 3. The subcarrier spacing and carrier frequency are \(\Delta f=15\) kHz and \(f_{c}=4\) GHz. The normalized maximum Doppler and delay shifts are set to \(k_{\text{max}}=N-1\) and \(l_{\text{max}}=M-1\) [23], respectively. The normalized delay and Doppler indices associated with the \(i\)th path are given by \(l_{i}\in\mathcal{U}[0,l_{\text{max}}]\) and \(k_{i}\in\mathcal{U}[-k_{\text{max}},k_{\text{max}}]\), respectively. Furthermore, the SIMO-OTFS and SM-OTFS systems are specified as \((N_{r},V)\) and \((N_{t},N_{r},V)\), respectively. More explicitly, the SM-OTFS can also be viewed as a single-user STSK-OTFS-MA \((N_{t},N_{r},1,V,Q=N_{t})\) system, where \(\mathbf{A}_{q}=\mathbf{I}_{N_{t}}[:,q]\) for \(q=1,\ldots,Q\) [20].
In Fig. 4, the single-user BER performance of the STSK-OTFS-MA \((2,N_{r},2,2,2)\) system using MLD is compared to the upper-bounds of (39) for different numbers of paths \(P\) and RAs \(N_{r}\). Observe that the BER performance improves as \(P\) or \(N_{r}\) increases.
This is because both frequency diversity and space diversity can be achieved by the proposed STSK-OTFS-MA system, and a higher diversity order can be attained as \(P\) or \(N_{r}\) increases, which are consistent with our analytical results in (42).
Fig. 5 depicts the single-user BER performance of STSK-OTFS-MA \((2,2,2,Q,V)\) systems for \(P=\{2,4\}\) and \(N_{r}=\{1,2\}\), where \(M=8\) subcarriers are employed and the effect of different combinations of \(\{Q,V\}\) is investigated under the constraint of \(R=2\) bits/s/Hz. As shown in Fig. 5, for all BER curves specified by \(P=\{2,4\}\) and \(N_{r}=\{1,2\}\), the system exploiting \(Q=4\) and QPSK modulation is capable of achieving the best BER performance among the three sets of parameters \(\{Q,V\}\) considered. This observation implies that for a given transmission rate \(R\), an optimum combination of the parameters \(\{Q,V\}\) can be found, which leads to the best BER performance. Finally, for the same remaining parameters, we observe based on Fig. 4 and Fig. 5 that our STSK-OTFS-MA system is capable of attaining a better BER performance for relatively lower values of the parameters \(\{Q,V\}\).
In Fig. 6, the BER performance of the two-user STSK-OTFS-MA \((2,1,2,2,2)\) system using MLD with fractional delay and Doppler shifts is investigated for different mobile speeds, where we have \(N=4\) and \(M=4\). It is observed that the BER performance degrades as a higher speed is encountered, which is owing to the higher Doppler frequency. Explicitly, at a BER of \(10^{-4}\), the \(v=5\) km/h scenario yields about \(2.5\) dB and \(8\) dB SNR gains compared to the \(v=100\) km/h and \(v=300\) km/h cases, respectively.
In Fig. 7, we investigate the multiuser BER performance of our STSK-OTFS-MA \((2,N_{r},2,2,2)\) system employing a different number of RAs. Observe from Fig. 7 that the BER performance degrades when the system supports more users. This is because a higher MUI is experienced as \(U\) increases. More specifically, it can be observed that the four-user system equipped with \(N_{r}=1\) RA suffers from about a 2 dB performance loss compared to the single-user system at a BER of \(10^{-4}\). The corresponding performance erosion is reduced to about 1 dB for \(N_{r}=2\). This trend is reminiscent of the channel hardening phenomenon of the classic massive MIMO uplink detection [1].
Fig. 8 compares the BER performance of conventional SIMO-OTFS, of SM-OTFS as well as of the single-user STSK-OFDM-MA and the proposed single-user STSK-OTFS-MA systems, where \(R=3\) and \(R=4\) bits/s/Hz are considered, and the same values of \(N_{r}\) are employed under the same constraint of \(R\). Based on Fig. 8, we have the following observations. Firstly, for a given rate, the BER performance of SM-OTFS is better than that of the conventional SIMO-OTFS. This is because SM-OTFS may rely on a lower-order modulation scheme than SIMO-OTFS. Moreover, spatial
Fig. 4: Single-user BER performance using MLD and upper-bounds for STSK-OTFS-MA \((2,N_{r},2,2,2)\) systems with \(N=4\), \(M=8\) and \(N_{r}=\{1,2\}\), communicating over doubly-selective channels having the different number of paths at the same transmission rate of 1 bits/s/Hz. The upper-bounds are calculated based on (39).
Fig. 5: Single-user BER performance employing MLD for STSK-OTFS-MA \((2,2,2,Q,V)\) systems with \(P=\{2,4\}\) and \(N_{r}=\{1,2\}\) but invoking a different number of DMs and modulation orders at the same transmission rate of 2 bits/s/Hz.
Fig. 6: BER performance of two-user STSK-OTFS-MA \((2,1,2,2,2)\) system using MLD with fractional delay and Doppler shifts under different mobile velocities.
diversity can be attained by SM-OTFS. Secondly, it is found that the proposed STSK-OTFS-MA schemes are capable of achieving better performance than the SM-OTFS systems in all the scenarios considered, resulting in about 4 dB gain at a BER of \(10^{-5}\) for the rate of \(R=4\) bits/s/Hz, and about a 5 dB gain at a BER of \(10^{-5}\) in the \(R=3\) bits/s/Hz scenario. This observation can be explained by our analytical results of Section IV-B. As an ST coding scheme, the proposed STSK-OTFS-MA system can also achieve time diversity in addition to space diversity and frequency diversity, yielding a higher diversity order than the other schemes. Additionally, the maximum coding gain can be achieved by taking full advantage of Algorithm 3. Therefore, our OTFS-STSK-MA scheme outperforms the SIMO-OTFS and SM-OTFS schemes. Furthermore, STSK-OFDM-MA attains the worst BER performance at a given rate, since the ICI introduced by high-mobility channels is ignored in STSK-OFDM-MA. Specifically, at a BER of \(10^{-3}\), our proposed STSK-OTFS-MA is capable of attaining about \(6\) dB and \(9\) dB SNR gains in the \(R=3\) and \(R=4\) bits/s/Hz scenarios, respectively. Finally, the BER performances of all systems improve, as the rate is reduced due to the lower values of \(Q\) and \(V\) used. This observation is consistent with the conclusions in [20].
Fig. 9 evaluates the BER performance of STSK-OTFS-MA \((2,2,2,2,2)\) systems supporting \(U=2\) and \(U=4\) users by exploiting the proposed resource allocation schemes shown in Fig. 3. Observe from Fig. 9 that the Scheme 1-based system is capable of attaining better BER performance than the system utilising Scheme 2. Moreover, the performance gap becomes wider when our STSK-OTFS-MA supports more users. Explicitly, at a BER of \(10^{-5}\), the two-user Scheme 1-based system attains about \(2\) dB SNR gain over its Scheme 2 counterpart, while the gain escalates to \(4\) dB, when supporting \(U=4\) users. This is because the MUI becomes higher as the number of supported users increases. Additionally, the efficiency of our MUI mitigation and resource allocation Scheme 1 is boldly illustrated.
In Fig. 10, the single-user DCMC capacities are investigated for the SM-OTFS \((2,N_{r},2)\), the STSK-OFDM-MA \((2,N_{r},2,2,4)\) and our STSK-OTFS-MA \((2,N_{r},2,2,4)\) schemes at a given rate of \(R=2\) bits/s/Hz, where \(N_{r}=2\) and \(N_{r}=4\) are considered, respectively. Based on Fig. 10, we have the following observations. Firstly, the asymptotic capacities of the SM-OTFS, STSK-OFDM-MA and STSK-OTFS-MA systems are all \(R=2\) bits/s/Hz, which is independent of the number of RAs, since the rate was limited to \(2\) bits/s/Hz. Furthermore, given a value of \(N_{r}\), it is shown in Fig. 10 that the proposed STSK-OTFS-MA system always outperforms the SM-OTFS and STSK-OFDM-MA schemes. This observation can also be inferred from Fig. 8 and [20], since our STSK-OTFS-MA is capable of attaining extra time diversity and ST coding gains over the SM-OTFS scheme,
Fig. 8: BER performance of the conventional SIMO-OTFS scheme, the SM-OTFS scheme, the STSK-OFDM-MA scheme, and our STSK-OTFS-MA scheme both invoking MLD for the cases of the same transmission rate of \(R=3\) and \(R=4\) bits/s/Hz.
Fig. 7: BER performance utilizing MLD for STSK-OTFS-MA \((2,N_{r},2,2,2)\) systems with \(N=4\), \(M=4\) and \(N_{r}=\{1,2\}\) supporting a different number of users at the same transmission rate of \(R=1\) bits/s/Hz.
Fig. 9: BER performance of the STSK-OTFS-MA \((2,2,2,2,2)\) systems with \(N=M=4\) and supporting a different number of users by invoking proposed resource allocation schemes as shown in Fig. 3.
while the capacity of STSK-OFDM-MA erodes due to the ICI.
Fig. 11 characterizes the BER performance of both the MLD and PRCGD conceived for the STSK-OTFS-MA \((2,2,2,4,4)\) system operating at \(R=2\) bits/s/Hz. We observe from the results of Fig. 11 that the proposed PRCGD relying on two iterations is capable of attaining a BER performance close to that in the \(T_{1}=Q^{M_{d}}\) case. Moreover, as depicted in Fig. 11, a near-ML BER performance is attainable when the PRCGD invokes as few as two iterations. It should be noted that \(T_{1}\) denotes the upper-bound of the actual number of iterations in Algorithm 1.
The BER performance of our IRCD designed for STSK-OTFS-MA is characterized in Fig. 12, where the BER curve of the MLD is depicted as the benchmark, while the other parameters are the same as those for Fig. 11. It should be emphasized that the IRCD imposes a considerably lower complexity than the MLD for the same system, provided that an adequate value of \(T_{2}\) is invoked. As shown in Fig. 12, the higher the value of \(T_{2}\), the better the BER performance of our STSK-OTFS-MA system becomes. Explicitly, in the cases of \(T_{2}>5/8Q^{M_{d}}\), the IRCD is capable of achieving a better performance than the PRCGD with \(T_{1}=1\), as shown in Fig. 11. Moreover, we can observe from Fig. 12 that at a BER of \(10^{-4}\), the IRCD with \(T_{2}=6/8Q^{M_{d}}\) attains a gain of about 1.5 dB over the case of using \(T_{2}=5/8Q^{M_{d}}\), while the IRCD associated with \(T_{2}=7/8Q^{M_{d}}\) iterations can also achieve a gain of about 1.5 dB over the IRCD with \(T_{2}=5/8Q^{M_{d}}\). Furthermore, Fig. 12 clearly shows that the IRCD with \(T_{2}=7/8Q^{M_{d}}\) is capable of attaining nearly the same performance as the MLD. Therefore, from the above observations we conclude that the IRCD with \(T_{2}=5/8Q^{M_{d}}\) to \(T_{2}=7/8Q^{M_{d}}\) can be implemented to achieve a desirable BER performance in contrast to the PRCGD associated with \(T_{1}=1\) and \(T_{1}=2\), as shown in Fig. 11, while imposing a considerably lower complexity than the MLD.
To further compare the IRCD and PRCGD, in Fig. 13 we characterize the BER performance of these two detectors in three-user STSK-OTFS-MA \((2,2,2,2,4)\) systems, yielding a rate of \(R=1.5\) bits/s/Hz. Specifically, the number of iterations is set to \(T_{1}=2\) and \(T_{2}=5/8Q^{M_{d}}\) for the PRCGD and IRCD, respectively. Observe from Fig. 13 that the BER performance of the PRCGD with \(T_{1}=2\) iterations is about 0.5 dB and 1 dB worse than that of the IRCD with \(T_{2}=5/8Q^{M_{d}}\) and the MLD, respectively. To elaborate further, the PRCGD needs a 0.5 dB higher SNR than the IRCD to achieve the BER of \(10^{-5}\). Furthermore, it can be concluded that the IRCD associated with \(T_{2}=5/8Q^{M_{d}}\) iterations obtains a good BER performance compared to the MLD, despite its lower complexity.
To illustrate the general flexibility of our STSK-OTFS-MA scheme, the BER performance of the uncoded, \(1/2\)-rate and \(2/3\)-rate LDPC coded STSK-OTFS-MA \((2,2,2,2,4)\)
Fig. 11: Two-user BER performance of the STSK-OTFS-MA \((2,2,2,4,4)\) system using MLD and the proposed PRCGD with different numbers of iterations, operating at \(R=2\) bits/s/Hz.
Fig. 12: Two-user BER performance of the STSK-OTFS-MA \((2,2,2,4,4)\) systems employing MLD and the proposed IRCD with different numbers of iterations, operating at \(R=2\) bits/s/Hz.
Fig. 10: The single-user DCMC capacity of the SM-OTFS \((2,N_{r},2)\), STSK-OFDM-MA \((2,N_{r},2,2,4)\) and STSK-OTFS-MA \((2,N_{r},2,2,4)\) systems with different numbers of RAs.
systems using MLD are evaluated in Fig. 14. All the remaining parameters are consistent with those in Fig. 13. In this context, the sum-product decoding algorithm is harnessed [14]. As observed in Fig. 14, the LDPC-coded system is capable of attaining a substantial performance improvement compared to the conventional uncoded system. Moreover, at a BER of \(10^{-5}\), the \(1/2\)-rate LDPC coded system attains about \(2\) dB and \(7\) dB SNR gains compared to the \(2/3\)-rate LDPC coded and uncoded systems, respectively.
Fig. 15 portrays the corresponding computational complexity of the MLD, IRCD, and PRCGD employed in Fig. 13. We have the following observations based on Fig. 15. Firstly, the complexity of the PRCGD with \(T_{1}=2\) is much lower than that of the IRCD and of the MLD. This can be explained by the fact that our PRCGD tests each DAP uniquely, and repeated searches can be avoided, as illustrated in Algorithm 1. Moreover, the PRCGD employs simple symbol-based detection for APM symbols, whereas the MLD detects all the APM symbols jointly, which can be seen by comparing (24) and (28). Secondly, the IRCD with \(T_{2}=5/8Q^{M_{d}}\) can provide about 12 orders of magnitude complexity reduction over the MLD. Since the reliability sorting of all the DAPs is exploited, the full-search process of the MLD can be avoided in the proposed IRCD, yielding a near-ML performance at a significantly lower complexity than the MLD. Finally, based on Fig. 13 and Fig. 15, it can be concluded that satisfactory BER performances can be attained by invoking both the PRCGD with \(T_{1}=2\) and the IRCD with \(T_{2}=5/8Q^{M_{d}}\), which impose much lower complexity than that of the MLD. Furthermore, the PRCGD with \(T_{1}=2\) attains a more attractive BER vs. complexity trade-off than the IRCD.
In Fig. 16, the system complexity of the conventional STSK-OFDM-MA, SIMO-OTFS, SM-OTFS and the proposed STSK-OTFS-MA schemes employed in Fig. 8 is investigated. It is observed that STSK-OFDM-MA exhibits the lowest system complexity at a given rate among the considered systems,
Fig. 16: System complexity of the conventional SIMO-OTFS scheme, the SM-OTFS scheme, the STSK-OFDM-MA scheme, and our STSK-OTFS-MA scheme invoked in Fig. 8 for a transmission rate of \(R=3\) and \(R=4\) bits/s/Hz.
Fig. 13: Three-user BER performance of the STSK-OTFS-MA \((2,2,2,2,4)\) systems using MLD, the proposed PRCGD with \(T_{1}=2\), and our IRCD with \(T_{2}=5/8Q^{M_{d}}\) operating at \(R=1.5\) bits/s/Hz.
Fig. 14: BER performance of both the uncoded, rate-\(1/2\) and rate-\(2/3\) LDPC coded multiuser STSK-OTFS-MA \((2,2,2,2,4)\) systems invoking MLD.
Fig. 15: Multiuser detection complexity of the STSK-OTFS-MA \((2,2,2,2,4)\) systems invoking MLD, the proposed PRCGD with \(T_{1}=2\), and our IRCD with \(T_{2}=5/8Q^{M_{d}}\) operating at \(R=1.5\) bits/s/Hz.
followed by the SIMO-OTFS and SM-OTFS paradigms. This is because \(N=1\) is invoked in the STSK-OFDM-MA scheme, which significantly reduces the system complexity. However, as shown in Fig. 8, the BER performance of STSK-OFDM-MA is the worst. The best-performing STSK-OTFS-MA imposes the highest system complexity at a given rate as seen in Fig. 8 and Fig. 10. Hence, it is demonstrated that our proposed STSK-OTFS-MA strikes a beneficial performance _vs._ system complexity trade-off.
## VI Summary and Conclusions
An STSK-OTFS-MA system has been proposed, where each DD-domain APM symbol is spread over both the space and time dimensions by invoking DMs. Our theoretical derivations illustrated that the proposed STSK-OTFS-MA scheme takes full advantage of time, frequency and space diversity, and also attains ST coding gains. Then, a DD-domain RB allocation scheme has been conceived to mitigate the MUI. Moreover, a pair of low-complexity detectors have been proposed for STSK-OTFS-MA based on greedy algorithms and a codebook of DAPs. Furthermore, based on the MGF technique, the asymptotic BER upper-bound of single-user STSK-OTFS-MA has been derived. Our simulation results have shown that the upper-bound becomes tight at high SNRs. Additionally, the DCMC capacity of our STSK-OTFS-MA scheme has been quantified. Finally, by jointly leveraging the DCMC capacity and the BER union-bound, attractive DM design criteria have been proposed for attaining the maximum attainable diversity and coding gains. Both the analytical and simulation results have demonstrated the superiority of our STSK-OTFS-MA system in terms of both its BER and DCMC capacity. We also demonstrated that there exists an optimal combination of the DM sets and the modulation order. Finally, our simulation results demonstrated that both the proposed PRCGD and IRCD are capable of achieving near-ML BER performances at reduced complexity, while the proposed STSK-OTFS-MA scheme is capable of attaining a better BER performance at an acceptable system complexity compared to its counterparts.
Space-time shift keying-aided orthogonal time frequency space modulation based multiple access (STSK-OTFS-MA) is proposed for reliable uplink transmission in high-Doppler environments. As a benefit of the STSK-OTFS-MA system, information bits are mapped to the indices of the active dispersion matrices, so that the system enjoys the advantages of both STSK and OTFS signalling. Since the degrees of freedom in the time, space and DD domains are exploited jointly, STSK-OTFS-MA attains increased diversity and coding gains. To reduce the detection complexity, the unique structure of the equivalent transmitted symbol vector is exploited to produce a pair of low-complexity near-maximum-likelihood (ML) multiuser detection algorithms. Explicitly, we propose a progressive residual check-based |
2309.13562 | Keeping in Time: Adding Temporal Context to Sentiment Analysis Models | This paper presents a state-of-the-art solution to the LongEval CLEF 2023 Lab
Task 2: LongEval-Classification. The goal of this task is to improve and
preserve the performance of sentiment analysis models across shorter and longer
time periods. Our framework feeds date-prefixed textual inputs to a pre-trained
language model, where the timestamp is included in the text. We show
date-prefixed samples better condition model outputs on the temporal context
of the respective texts. Moreover, we further boost performance by performing
self-labeling on unlabeled data to train a student model. We augment the
self-labeling process using a novel augmentation strategy leveraging the
date-prefixed formatting of our samples. We demonstrate concrete performance
gains on the LongEval-Classification evaluation set over non-augmented
self-labeling. Our framework achieves a 2nd place ranking with an overall score
of 0.6923 and reports the best Relative Performance Drop (RPD) of -0.0656 over
the short evaluation set. | Dean Ninalga | 2023-09-24T06:38:21 | http://arxiv.org/abs/2309.13562v1 | # Keeping in Time: Adding Temporal Context to Sentiment Analysis Models
###### Abstract
This paper presents a state-of-the-art solution to the LongEval CLEF 2023 Lab Task 2: _LongEval-Classification_[1]. The goal of this task is to improve and preserve the performance of sentiment analysis models across shorter and longer time periods. Our framework feeds _date-prefixed_ textual inputs to a pre-trained language model, where the timestamp is included in the text. We show _date-prefixed_ samples better condition model outputs on the temporal context of the respective texts. Moreover, we further boost performance by performing self-labeling on unlabeled data to train a student model. We augment the self-labeling process using a novel augmentation strategy leveraging the _date-prefixed_ formatting of our samples. We demonstrate concrete performance gains on the LongEval-Classification [1] evaluation set over non-augmented self-labeling. Our framework achieves a 2nd place ranking with an overall score of 0.6923 and reports the best _Relative Performance Drop_ (RPD) [2] of -0.0656 over the short evaluation set (see Alkhalifa et al. [3]).
Footnote †: © 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
Footnote 1: Toronto, Canada
Self-Labeling, Sentiment Analysis, Temporal Misalignment, Date-Prefixing
## 1 Introduction
The application of language models such as BERT [4], RoBERTa [5] and XLM-RoBERTa [6] to textual data is a core component in many natural language processing (NLP) pipelines. However, a notable limitation of most language models is their lack of temporal awareness, as they typically encode text into fixed representations. In contrast, the nature of textual data is inherently dynamic and subject to change over time, with the traditional meanings of words, phrases, and concepts constantly evolving [7, 8]. Furthermore, significant events can alter the factual basis of the text [9]. Although the metadata of well-known text corpora include timestamps, these timestamps are almost never used within NLP pipelines. A sentiment analysis model trained today could interpret the phrase "You are just like X" as a positive sentiment. However, an issue can arise once people consider a comparison to 'X' as a non-positive comparison. Subsequently, the model becomes _misaligned_ if this flip in public opinion occurs. Hence, it can be difficult to train models that generalize to future data without a sense of temporal context and awareness [10].
Mitigating _temporal misalignment_[11] between the facts and general sentiments of the current world and those found in text corpora is an active focus across several areas of NLP research. In particular, work in NER (named-entity recognition) [12, 9, 13] and question answering [14, 15, 12, 16] often directly addresses temporal misalignment, as these are considered _knowledge-intensive_ tasks [10].
A common and straightforward way to address temporal misalignment in textual data is to create new models (or update old ones) with the most recent data available [17, 18, 10]. However, continually growing datasets incur an increase in computational costs for data acquisition and training models which also contributes to an ever-increasing environmental cost [19, 20]. Therefore, finding a solution outside of continuous retraining that preserves model performance over time is desirable.
In this paper, we follow Dhingra et al. [7] who use an alternative approach that modifies the textual input with its timestamp. Thus, we can take advantage of text-only pre-trained language models used for classification in addition to conditioning the models with the temporal context for the input.
We will outline our system, which is aligned with some of the recent works in NER and temporal misalignment, and evaluate it on the _LongEval-Classification_ benchmark [1].
Our contribution is two-fold: (1) We show that date-prefixing the input text with its timestamp conditions the outputs of a language model on the temporal context of the input. (2) We utilize an augmentation strategy that leverages the date-prefixing by randomly modifying the timestamp of unlabeled inputs. We show that this augmentation strategy improves the performance benefits of semi-supervised learning on unlabeled data.
## 2 Background and Related Work
Recently, _TempLama_[7] showed that directly placing the year of the timestamp as a prefix in the text is effective in the context of named-entity recognition. They then feed the date-prefixed inputs to a T5 [21] model to directly model the temporal context. Cao and Wang [22] directly compare a date-prefixing approach to an embedding approach in which the date is numerically embedded with a linear projection. In the context of text generation, they found that the linear projection was less sensitive to the timestamps, while date-prefixing is better at generating temporally sensitive facts.
Self-labeling (or self-distillation) is a semi-supervised learning strategy that typically involves learning from pseudo-labels for unlabeled data. Self-labeling is demonstrated to add performance gains across a variety of domains including text classification [23]. Agarwal and Nenkova [9] found that self-labeling performs better than specialized pre-training objectives such as domain-adaptive pretraining [24] across several tasks including sentiment analysis. However, it is important to note that recently Ushio et al. [25] have shown that self-labeling, as presented in [9], is not as effective for NER when compared to models trained for specific time periods.
## 3 Methodology
Figure 1 provides an overview of our system. Following Agarwal and Nenkova [9], we first train a teacher model on the full labeled dataset to create pseudo-labels for the unlabeled data. During this training phase, every sample in the labeled dataset is date-prefixed, meaning that the year of the timestamp is included as part of the input text. We use a novel augmentation strategy on the date prefixes (see Section 3.3) to condition the pseudo-labels on the temporal context learned by the teacher. A new student model is then trained for 22000 training steps on the generated pseudo-labels and is subsequently trained on the original labeled data that was used for the teacher. Finally, we use the resulting student model for inference. For simplicity, both the teacher and student models share the same architecture. We provide further detail on the individual components of our system in the following sections.
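For concreteness, the teacher, pseudo-label, student flow can be sketched in a few lines of code. The snippet below is an illustrative, self-contained stand-in that replaces our transformer with a simple bag-of-words classifier and uses toy data; only the ordering of the stages mirrors our actual pipeline.

```python
# Illustrative stand-in for the teacher -> pseudo-label -> student pipeline above.
# A TF-IDF + logistic-regression classifier replaces the transformer; the texts
# and labels are toy data, not taken from the LongEval-Classification dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled = ["year: 2018 text: great day", "year: 2019 text: awful service"]
gold = [1, 0]
unlabeled = ["year: 2020 text: great great day", "year: 2021 text: awful awful service"]

vec = TfidfVectorizer().fit(labeled + unlabeled)

# 1) Teacher trained on the labeled, date-prefixed data.
teacher = LogisticRegression().fit(vec.transform(labeled), gold)

# 2) Pseudo-labels for the unlabeled data (date-prefix augmentation would be
#    applied to these texts before prediction; see Section 3.3).
pseudo = teacher.predict(vec.transform(unlabeled))

# 3) Student: first trained on the pseudo-labels, then on the original gold data.
student = LogisticRegression(warm_start=True, max_iter=1000)
student.fit(vec.transform(unlabeled), pseudo)
student.fit(vec.transform(labeled), gold)
```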
### Pre-Trained Model
Using a pre-trained language model is generally much better than training a new model from scratch. However, it is not always clear which pre-training works best for any particular task. Here we use Bernice [26], a variant of XLM-RoBERTa [6] specialized for Twitter data. We train a single model for inference on the test set and do not rely on ensembling techniques. We train using the cross-entropy classification loss.
### Date-Prefixing
Consistent with Dhingra et al. [7], we prefix each input text with the year of the given timestamp followed by the text itself (e.g. "year: 2023 text: I really do enjoy drinks with friends"). As we observe from Table 1, training on data formatted in this way conditions the model outputs on the temporal context carried by the date prefix. Table 1 provides real input and output examples based on a trained model across various years. We do not modify the architecture of the language model to take the timestamp as a vector input. By maintaining text-only input, we are able to leverage any existing pre-trained model that accepts text-only input.
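A minimal sketch of this formatting step is given below; the helper name and the ISO timestamp format are illustrative assumptions rather than part of our released code.

```python
# Sketch of the date-prefixing: prepend the year of the sample's timestamp to
# the raw text, exactly in the "year: YYYY text: ..." form used above.
from datetime import datetime

def date_prefix(text: str, timestamp: str) -> str:
    year = datetime.fromisoformat(timestamp).year
    return f"year: {year} text: {text}"

print(date_prefix("I really do enjoy drinks with friends", "2023-05-01"))
# -> year: 2023 text: I really do enjoy drinks with friends
```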
Figure 1: **Method Overview: (top-row) summarization of our semi-supervised learning training pipeline stages, (bottom-row): modifications we made to the pipeline and at what stage they apply**
### Date-Prefix Augmentation
When creating pseudo-labels to train a student model, we use an augmentation strategy that takes advantage of our date-prefixing. Namely, given an unlabeled sample and its timestamp, we randomly replace the year in the timestamp with a year between 2013 and 2021, where 2013 and 2021 are the earliest and latest years found in the labeled datasets, respectively. We perform an ablation experiment (see Section 4) demonstrating that this augmentation strategy outperforms non-augmented self-labeling on the evaluation set.
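A minimal sketch of the augmentation, assuming samples are already in the date-prefixed format of Section 3.2:

```python
# Sketch of the date-prefix augmentation: replace the year in the prefix with a
# year drawn uniformly from the range spanned by the labeled data (2013-2021).
import random
import re

def augment_date_prefix(prefixed_text: str, rng: random.Random,
                        low: int = 2013, high: int = 2021) -> str:
    new_year = rng.randint(low, high)  # inclusive on both ends
    return re.sub(r"^year: \d{4}", f"year: {new_year}", prefixed_text)

rng = random.Random(0)
print(augment_date_prefix("year: 2018 text: I really do enjoy being single", rng))
```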
### Training and Evaluation
For inference on the test set, we use a single model trained on both the training and development sets for two epochs. Model parameters are updated using the Adam optimizer [27] with a constant learning rate of 1e-5 and the binary cross-entropy loss. Performance is measured using the macro-averaged F1 score of the future samples.
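A minimal fine-tuning loop consistent with this setup is sketched below. The checkpoint name is a placeholder (we use Bernice in practice), and data loading, batching and evaluation code are omitted.

```python
# Minimal fine-tuning sketch: Adam with a constant learning rate of 1e-5 and a
# binary cross-entropy loss on date-prefixed inputs. MODEL_NAME is a placeholder
# for the Bernice / XLM-RoBERTa checkpoint actually used.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "xlm-roberta-base"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
loss_fn = torch.nn.BCEWithLogitsLoss()

def train_step(texts, labels):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    logits = model(**batch).logits.squeeze(-1)
    loss = loss_fn(logits, torch.tensor(labels, dtype=torch.float))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

train_step(["year: 2018 text: I really do enjoy being single"], [1.0])
```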
## 4 Experiments
### Experimental Setup
In this section, we compare the performance of models trained with and without the proposed augmentation strategy for pseudo-label generation. Namely, we use a trained teacher model to generate labels with and without date-prefix augmentation. Subsequently, we train a student model on each of the two sets of pseudo-labels for 6000 training steps. Finally, we compare the downstream performance of each model.
Models will only be provided labels for the training set and trained until saturation on the interim evaluation set. For our experiments, we report the macro-averaged F1 scores for each subset of the evaluation set. We will also report the Relative Performance Drop (RPD) [2] for comparison between short and long-term time differences with respect to model performance.
\[\text{RPD}=\frac{f_{t_{j}}^{score}-f_{t_{0}}^{score}}{f_{t_{0}}^{score}} \tag{1}\]
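Eq. (1) translates directly into a one-line helper, shown here only to make the sign convention explicit (a more negative RPD indicates a larger drop relative to the reference score):

```python
# Relative Performance Drop of Eq. (1): score on a later evaluation set t_j
# relative to the score on the reference set t_0.
def relative_performance_drop(score_t0: float, score_tj: float) -> float:
    return (score_tj - score_t0) / score_t0

print(relative_performance_drop(0.75, 0.70))  # -0.0667 (rounded)
```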
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Example input & Output & Label & Original Year & Prefix Year \\ \hline “year: 2013 text: I really do enjoy being single” & \(0.503\) & positive & 2018 & 2017 \\ “year: 2018 text: I really do enjoy being single” & \(0.510\) & positive & 2018 & 2018 \\ “year: 2023 text: I really do enjoy being single” & \(0.495\) & negative & 2018 & 2023 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Date Prompt Conditioning:** A demonstration of the date-prompting and subsequent model outputs conditioned on the prefix year. The model output is between 0 and 1, where the input is considered positive only if the output is above 0.5. The example input text is taken from the _LongEval-Classification_ dataset [1].
### Results
We report the evaluation results of our experiments in Table 2. Indeed, we see an overall improvement in performance, especially on the 'short' evaluation set, when using our full framework. Additionally, the model using date-prefix augmentation gives by far the best RPD of \(-0.0532\) with respect to the 'within' and 'short' evaluation sets. Note that the non-augmented model gives the best RPD of \(-0.0411\) with respect to the 'within' and 'long' evaluation sets. However, when fine-tuning this same model on the gold labels, the RPD more than doubles to \(-0.0852\) and is much worse than our full framework with \(-0.0681\). A similar drop in performance can be seen when observing the F1 score on the 'long' evaluation set. It appears that fine-tuning the non-augmented model with clean data incurs a significant drop in performance. However, it is clear that our proposed augmentation strategy can leverage the older labeled data and attain significant performance gains.
## 5 Conclusion
In this paper, we introduce a competitive framework for preserving the performance of sentiment analysis models across various temporal periods. We promote date-prefixing, as a straightforward solution to condition the output of pre-trained language models with the temporal context of input text. Furthermore, we build on the self-labeling framework developed by Agarwal and Nenkova [9]. Namely, given our date-prefix formatting, we can generate pseudo-labels conditioned on the temporal context of the input text. We verify the performance gains of our proposed system against self-labeling without our augmentation strategy in our ablation experiments. Altogether, our system yields competitive performance in overall score and attains the best RPD for the short evaluation set [3].
This paper presents a state-of-the-art solution to the LongEval CLEF 2023 Lab Task 2: LongEval-Classification. The goal of this task is to improve and preserve the performance of sentiment analysis models over both shorter and longer time periods. Our framework feeds date-prefixed text to a pre-trained language model, where the timestamp is included in the text. Date-prefixed samples better condition the model outputs on the temporal context of each text. Furthermore, a student model is trained by self-labeling unlabeled data, and the self-labeling process is extended with a novel augmentation strategy that exploits the date-prefixed sample format. Concrete performance gains over non-augmented self-labeling are demonstrated on the LongEval-Classification evaluation set. The framework reports the best RPD (Relative Performance Drop) of -0.0656 on the short evaluation set. |
2309.13204 | Challenges in Quasinormal Mode Extraction: Perspectives from Numerical
solutions to the Teukolsky Equation | The intricacies of black hole ringdown analysis are amplified by the absence
of a complete set of orthogonal basis functions for quasinormal modes. Although
damped sinusoids effectively fit the ringdown signals from binary black hole
mergers, the risk of overfitting remains, due to initial transients and
nonlinear effects. In light of this challenge, we introduce two methods for
extracting quasinormal modes in numerical simulations and qualitatively study
how the transient might affect quasinormal mode fitting. In one method, we
accurately fit quasinormal modes by using their spatial functional form at
constant time hypersurfaces, while in the other method, we exploit both spatial
and temporal aspects of the quasinormal modes. Both fitting methods leverage
the spatial behavior of quasinormal eigenfunctions to enhance accuracy,
outperforming conventional time-only fitting techniques at null infinity. We
also show that we can construct an inner product for which the quasinormal
eigenfunctions form an orthonormal (but not complete) set. We then conduct
numerical experiments involving linearly perturbed Kerr black holes in horizon
penetrating, hyperboloidally compactified coordinates, as this setup enables a
more precise isolation and examination of the ringdown phenomenon. From
solutions to the Teukolsky equation, describing scattering of an ingoing
gravitational wave pulse, we find that the contributions from early-time
transients can lead to large uncertainties in the fit to the amplitudes of
higher overtones ($n\geq 3$). While the methods we discuss here cannot be
applied directly to data from merger observations, our findings underscore the
persistence of ambiguities in interpreting ringdown signals, even with access
to both temporal and spatial information. | Hengrui Zhu, Justin L. Ripley, Alejandro Cárdenas-Avendaño, Frans Pretorius | 2023-09-22T22:57:25 | http://arxiv.org/abs/2309.13204v3 | Challenges in Quasinormal Mode Extraction: Perspectives from Numerical solutions to the Teukolsky Equation
###### Abstract
The intricacies of black hole ringdown analysis are amplified by the absence of a complete set of orthogonal basis functions for quasinormal modes. Although damped sinusoids effectively fit the ringdown signals from binary black hole mergers, the risk of overfitting remains, due to initial transients and nonlinear effects. In light of this challenge, we introduce two methods for extracting quasinormal modes in numerical simulations and qualitatively study how the transient might affect quasinormal mode fitting. In one method, we accurately fit quasinormal modes by using their spatial functional form at constant time hypersurfaces, while in the other method, we exploit both spatial and temporal aspects of the quasinormal modes. Both fitting methods leverage the spatial behavior of quasinormal eigenfunctions to enhance accuracy, outperforming conventional time-only fitting techniques at null infinity. We also show that we can construct an inner product for which the quasinormal eigenfunctions form an orthonormal (but not complete) set. We then conduct numerical experiments involving linearly perturbed Kerr black holes in horizon penetrating, hyperboloidally compactified coordinates, as this setup enables a more precise isolation and examination of the ringdown phenomenon. From solutions to the Teukolsky equation, describing scattering of an ingoing gravitational wave pulse, we find that the contributions from early-time transients can lead to large uncertainties in the fit to the amplitudes of higher overtones (\(n\geq 3\)). While the methods we discuss here cannot be applied directly to data from merger observations, our findings underscore the persistence of ambiguities in interpreting ringdown signals, even with access to both temporal and spatial information.
## I Introduction
According to general relativity, when two black holes merge they form a highly distorted black hole that then rings down to a stationary Kerr black hole. The gravitational waves emitted during the ringdown are thought to be well described by linear black hole perturbation theory (_linear theory_ for short), as governed by the spin-\(\pm 2\) Teukolsky equation [1; 2]. The Teukolsky equation is a separable partial differential equation for a single complex scalar, the real and imaginary parts of which describe the two physical gravitational wave polarizations.
The Teukolsky equation (with physical boundary conditions) does not have mode solutions; instead it has exponentially decaying _quasinormal mode_ (QNM) solutions [3; 4]. It is thought that shortly after merger the ringdown signal is well described by a superposition of QNMs [5; 6; 7], and fits to numerical relativity results seem to confirm this expectation (e.g., [8; 9; 10; 11; 12]). At much later times (\(\mathcal{O}(100M)\) after coalescence, where \(M\) is the mass of the remnant), the signal is expected to transition to decay as a power law. This is the so-called "tail" part of the waveform [13; 14; 15; 16]. An interesting property of the frequencies of the QNMs is that they are uniquely determined by the remnant black hole's mass and spin. The reverse process of estimating the mass and spin of the remnant black hole from the QNM spectrum of the ringdown is known as _black hole spectroscopy_[17; 18; 19; 20; 21; 22].
To maximize the science that can be extracted from an observed ringdown--whether for measuring properties of the merger, or for testing general relativity--one needs a prediction for what the excitation amplitude of each QNM is for a given merger. At present, computing these excitation amplitudes is an open problem for a remnant formed in a merger of black holes with comparable mass (though some information can be gleaned from properties of the Green's function for Kerr perturbations [23], or using linear theory in the extreme mass ratio limit [24; 25; 26]). In lieu of such calculations, one can attempt to measure the excitation amplitudes directly from numerical relativity solutions of merger events. At present, such approaches typically assume the ringdown can be entirely described by a sum of linear QNMs, and attempt to find the best-fit set of amplitudes that reproduce the ringdown signal (see e.g. [8; 9; 11; 27; 28; 29]). These studies have demonstrated, for example, that in an astrophysical merger of a nearly equal mass, non-precessing binary, the \(l=m=2\) mode is the maximally excited QNM, and the relative excitation amplitudes of other angular modes may point to properties of the progenitor binary system, e.g. precession, eccentricity, mass ratio, etc.
There are many difficulties in attempting to ascribe excitation amplitudes to merger events from fits to numerical relativity waveforms. The main difficulty already present at the linear level is that the QNMs do not comprise a complete basis of black hole perturbations, and the gravitational wave "perturbation" will contain a _transient part_ that can have a significant amplitude relative to the sum of QNMs, in particular in the time window of \(O(10M)\) around peak amplitude of the ringdown. Note that in this paper we use the phrase _transient part_ (or sometimes _prompt part_) to refer to the non-QNM part of a ringdown waveform [30]. Beyond the linear level a host of additional difficulties arise, including non-linear mode coupling (quadratic modes have only recently begun to be studied in full numerical relativity merger waveforms [31; 32; 33; 34; 35]), and the effects of back reaction of the gravitational wave energy. The latter complicates the questions of what the appropriate background is for computing linear perturbations about, and how good a constant amplitude approximation is for the early time linear QNM spectrum of the waveform (due in part to non-linear energy exchange between modes). Though these difficulties are not thought to have much effect on measuring the dominant fundamental \(l=m=2\) QNM, it is less clear how well higher overtones and harmonics can be extracted. As such there is still much debate within the gravitational wave community about which modes should be included in the ringdown fit (see e.g., [30; 11]).
Given the intrinsic complexity of the problem and since both non-modal and nonlinear effects could play a non-trivial role, several ways of analyzing and decomposing the ringdown signal from numerical simulations into QNMs have been proposed [36; 30; 11; 12; 32]. Most of these methods involve finding the best fit to the ringdown signal with a sum of damped sinusoids with quasi-normal mode frequencies1, using gravitational waveforms extrapolated to future null infinity, or through Cauchy-characteristic extraction (CCE). Though, as discussed above, the signal is expected to contain more than simply the set of linearly damped QNMs, and if we do not know _a priori_ what the transient part of the waveform is, it is easy to envision that this process could result in _overfitting_ : an erroneous QNM amplitude measurement due to overlap of the QNM mode with a transient part of the signal. Particularly susceptible to overfitting are the higher overtones, whose damping times are sufficiently rapid that unless they are excited with extremely high amplitude, only a small number of cycles, or even only a fraction of a cycle of the QNM will be visible above an effective noise floor set by numerical truncation error (assuming all other extraction systematics are under control). Some studies have already pointed to overfitting, by showing different fitting procedures can give different results for the QNM amplitudes of the ringdown of a black hole produced from a binary collision [30; 37].
Footnote 1: An exception to this procedure is Ref [32], where the authors eliminated the dominant modes through a linear filter. Another exception is Ref [36], where properties of spheroidal harmonics are explored to separate the prograde and retrograde contribution to the ringdown signal.
The main purpose of this paper is to gain more insight into the nature of mode fitting, and hence the problem of overfitting. Instead of studying the full nonlinear ringdown of a black hole produced from a binary collision, we attempt to reconstruct the quasinormal mode spectrum of solutions to the Teukolsky equation. This allows us to study in detail how easy it is to distinguish the transient contribution to the signal from the quasinormal modes2. In our fitting procedures we include the spatial profiles of the quasinormal mode eigenfunctions, which reduces systematic uncertainties in our fits. To aid in utilizing the spatial dependence of the QNMs in our fits we make use of horizon penetrating, hyperboloidally compactified (HPHC) coordinates, in which the QNM solutions to the Teukolsky equation are regular everywhere, from the black hole horizon to null infinity [38; 39]. We consider two fitting procedures to linear data: one that uses the spatial variation of the Weyl scalar field and its time derivative on a single constant time slice, and another that uses both spatial and temporal information. Within both procedures, fitting the quasinormal mode amplitudes reduces to a problem of linear regression, given the fixed black hole mass and spin.
Footnote 2: We expect the transient contribution to strongly depend on the initial data; here, we only focus on scattering experiments, where the initial data consist of a gravitational wave pulse infalling onto the black hole.
We then apply these fitting procedures to a set of time domain numerical solutions to the Teukolsky equation. We demonstrate that with pure QNM initial data, we can stably recover the amplitudes of arbitrary linear combinations of QNMs. By _stable_ here we mean that the method recovers the correct amplitudes (to within truncation error) over a non-negligible window of time. When we consider scattering initial data with non-QNM contributions though, we find that we cannot stably extract the amplitude of higher (\(n\geq 3\)) QNM overtones, and the traditional time-only fit at future null infinity can only faithfully and stably extract the fundamental mode and first overtone over a much narrower window of fitting times. Conversely, we demonstrate the power of using spatial information to establish a best-case scenario for extracting QNMs. We note that this paper is more a "proof of principle" for the linear case, in that we have not tried to optimize the linear perturbation to "best fit" any particular merger event, and leave a more extensive study of the issue of initial conditions to future work.
The rest of this paper is organized as follows. In section (II), we review the derivation of the Teukolsky equation in HPHC coordinates, our code for computing pure QNM initial data, and our code for evolving the Teukolsky equation in the time domain. In section (III), we introduce our two fitting procedures that make use of spatial information of the quasinormal modes. In section (IV), we show results from applying those two methods to numerical solutions to the Teukolsky equation with several different classes of initial data. Lastly, we compare our new fitting procedures with the traditional time-only fit at future null infinity in section (V). We discuss the implications of our results and conclude in section (VI). In Appendices A, B, and C we discuss some details of computing the QNM eigenfunctions, their radial structure in HPHC coordinates, and give some convergence results from our code, respectively.
## II The Teukolsky equation on a hyperboloidal slice
In this section, we briefly review the Teukolsky equation and QNMs in HPHC coordinates. We refer the reader to Refs. [38; 39; 40; 41; 42] for further details.
The Teukolsky Equation (TE) [1] was first derived in Boyer-Lindquist (BL) coordinates [43]. Constant time hypersurfaces in BL coordinates do not penetrate the black hole horizon, nor do they reach future null infinity - instead, they stretch from the bifurcation sphere of the black hole to spatial infinity. One consequence of these properties is that the radial eigenfunctions for quasinormal modes are necessarily divergent at the asymptotic radial boundaries (\(r_{*}\rightarrow\pm\infty\), where \(r_{*}\) is the tortoise coordinate) when evaluated on constant time slices [1]. This feature of the quasinormal eigenfunctions (QNEs) in BL coordinates complicates the analysis of computing QNM excitation factors of black hole ringdown. This is because constructing a well-defined inner product (from which the excitation factors of the quasinormal modes can be computed) involves an integration in the complex plane [44; 45; 23; 46]. By contrast, since constant time hypersurfaces in HPHC coordinates span from the black hole horizon to future null infinity, the QNM solutions to the TE in these coordinates remain regular everywhere exterior to the black hole [38; 39]. This opens up the possibility of a simpler inner-product matrix that could be used to determine the quasinormal mode content of a given gravitational waveform (see, for example, Ref. [47]). Furthermore, the ringdown signal behaves like damped standing waves spatially in HPHC, instead of traveling wave packets in coordinates that asymptote to spatial infinity.
In this work we use the same HPHC coordinates described in Ref. [39]. These coordinates are identical to BL coordinates, up to a redefinition of the time coordinate \(\tau\) and azimuthal coordinate \(\phi\), which are related to the BL coordinates \(\left(t,r,\vartheta,\varphi\right)\) via
\[d\tau\equiv dt+\left(\frac{2Mr}{\Delta}+\frac{dh}{dr}\right)dr,\qquad d\phi \equiv d\varphi+\frac{a}{\Delta}dr, \tag{1}\]
where \(M,a\) are the mass and spin of the black hole. Here \(h(r)\) is a "height" function designed to make the radially ingoing characteristic speed zero at infinity [48; 41; 42], that we chose to be
\[\frac{dh}{dr}=-1-\frac{4M}{r}. \tag{2}\]
To bring future null infinity (located at \(r\rightarrow\infty\)) to a finite point, we compactify the radial coordinate via
\[\rho\equiv\frac{1}{r}. \tag{3}\]
We additionally rescale the Newman-Penrose scalar \(\psi\) to make the Teukolsky equation regular at the horizon and to remove the "long-range potential" in the radial coordinate [49; 50]
\[\psi\equiv\frac{1}{r}\Delta^{-s}\Psi. \tag{4}\]
With all the above definitions, the TE reads
\[\left[16M^{2}-a^{2}\sin^{2}\theta+8M\left(4M^{2}-a^{2}\right) \rho-16a^{2}M^{2}\rho^{2}\right]\partial_{\tau}^{2}\Psi-\rho^{4}\Delta\partial _{\rho}^{2}\Psi-{}_{s}\not{\Delta}\Psi\] \[-2\left[1+\left(a^{2}-8M^{2}\right)\rho^{2}+4a^{2}M\rho^{3} \right]\partial_{\tau}\partial_{\rho}\Psi+2a\rho^{2}\partial_{\rho}\partial_ {\phi}\Psi+2a\left(1+4M\rho\right)\partial_{\tau}\partial_{\phi}\Psi\] \[+2\left[s\left(-2M+ia\cos\theta\right)+\left(4M^{2}\left\{s+2 \right\}-a^{2}\right)\rho-6Ma^{2}\rho^{2}\right]\partial_{\tau}\Psi\] \[+2\left[-1-s+\left(s+3\right)M\rho-2a^{2}\rho^{2}\right]\rho \partial_{\rho}\Psi+2a\rho\partial_{\phi}\Psi\] \[+2\left(Ms+M-a^{2}\rho\right)\rho\Psi =0, \tag{5}\]
where \(s\) is the spin-weight of the scalar \(\Psi\). For the remainder of this article, we set \(s=-2\), so that \(\Psi\) corresponds to the Weyl scalar \(\Psi_{4}\).
Lastly, to make the radial boundary independent of the black hole spin, we perform the substitution:
\[\rho\to r_{+}\rho, \tag{6}\]
where \(r_{+}=M+\sqrt{M^{2}-a^{2}}\) is the radius of the outer horizon in BL coordinates. This substitution makes the TE regular at future null infinity (\(\rho=0\)) and on the black hole horizon (\(\rho=1\)), regardless of spin.
We solve Eq. (5) in the time domain, using a modification of the code described in Refs. [42; 51], which we will now briefly describe. The numerical implementation decomposes \(\Psi\) into its azimuthal modes, \(\Psi\left(t,\rho,\theta,\phi\right)=\sum_{m}e^{im\phi}\Psi\left(t,\rho,\theta\right)\). The code then evolves each \(m-\)mode on a two dimensional \(\rho-\theta\) grid. The angular direction is discretized using a pseudospectral method made up of
spin-weighted spherical harmonics, and the radial direction with a fourth order finite difference method, as opposed to the implementation presented in Ref. [42], which makes use of a pseudospectral Chebyshev discretization in the radial direction. To evolve in time, the code uses a method-of-lines algorithm with a 4th order Runge Kutta integrator. We consider two classes of initial data, described in more detail in Sec. (IV): (1) a linear superposition of quasinormal modes, and (2) a Gaussian pulse (which we call "scattering" initial data). We construct our quasinormal mode initial data using a slight modification, described in detail in Appendix (A), of the algorithm presented in Ref. [39] (publicly available at [51].)
## III Spatial and Spacetime Fitting with QNM Eigenfunctions
Let us consider a linearly perturbed black hole with fixed known mass and spin. Since the quasinormal mode decomposition of the solution can be recovered using a linear least squares algorithm if the linearized gravitational solution can be entirely described as a superposition of quasinormal modes [11; 33], we fix the quasinormal mode frequencies, and then fit for the complex amplitudes of the modes that minimize the residual error. In our fitting procedures, we minimize not just the residual error of our waveform fit at future null infinity, but also the error of the waveform over the entire computational domain, which ranges from the horizon to null infinity.
We consider two different mode extraction methods: _spatial_ and _spacetime_ fitting, which we describe in detail in Sec. (III.1) and Sec. (III.2), respectively. Spatial fitting refers to measuring the amplitudes for each QNM on a fixed time slice \(t=t_{0}\), given the data \(\{\Psi_{4}(t_{0},\mathbf{r}),\partial_{t}\Psi_{4}(t_{0},\mathbf{r})\}\)[23; 34; 52]. That is, for a fixed azimuthal number \(m\), we minimize the residual
\[\mathcal{R}= \sum_{i,j}\left(\Psi_{4}\left(t_{0},\rho_{i},\theta_{j}\right)- \sum_{[p],n,l}A_{[p]ln}R_{[p]ln}\left(\rho_{i}\right){}_{-2}S\left(a\omega_{[ p]ln},\theta_{j}\right)e^{-i\omega_{[p]ln}t_{0}}\right)^{2}\] \[+\left(\partial_{t}\Psi_{4}\left(t_{0},\rho_{i},\theta_{j}\right)+ \sum_{[p],n,l}i\omega_{[p]ln}A_{[p]ln}R_{[p]ln}\left(\rho_{i}\right){}_{-2}S \left(a\omega_{[p]ln},\theta_{j}\right)e^{-i\omega_{[p]ln}t_{0}}\right)^{2}, \tag{7}\]
for the complex constants \(A_{[p]ln}\), where \({}_{-2}S\) are the spin-weighted _spheroidal_ harmonics, and \(R_{[p]ln}(\rho)\) and \(\omega_{[p]l^{\prime}n}\) are the QNM radial eigenfunctions and frequencies, respectively. In the above expression, the sum is over the prograde and retrograde (\([p]=\pm\)) modes, the overtones \(n\), angular number \(l\), radial grid points \(\rho_{i}\), and angular gridpoints \(\theta_{j}\). In practice, we perform a spherical harmonic decomposition of the signal in \(\theta\) before minimizing the residual.
On the other hand, the spacetime fitting consists of finding the best quasinormal mode fit to the rescaled Weyl scalar \(\Psi_{4}\) over the entire time domain we evolve for, i.e., in _both_ space and time. Specifically, we minimize the residual
\[\mathcal{R}=\sum_{i,j,k}\left(\Psi_{4}\left(t_{k},\rho_{i},\theta_{j}\right)- \sum_{[p],n,l}A_{[p]ln}R_{[p]ln}\left(\rho_{i}\right){}_{-2}S\left(a\omega_{[ p]ln},\theta_{j}\right)e^{-i\omega_{[p]ln}t_{k}}\right)^{2}\, \tag{8}\]
where now we include a sum over the time steps \(t_{k}\). As we discussed above, both fitting methods differ from previous QNM fitting procedures as our residual includes the radial profile of the modes.
If the gravitational waveform is dominated by quasinormal modes, our fitting procedure provides a robust way to determine the quasinormal mode content of a gravitational waveform. We now provide specific details of both approaches.
### Spatial Fitting
In this approach, we find a sum of the QNE with amplitudes that best represent the data \(\{\Psi,\partial_{t}\Psi\}\) on a constant time hypersurface [54, 23, 23]. At intermediate times \(t\), i.e. after initial data transients have decayed but before the tail contributions are evident, we expect the linear gravitational wave to be well approximated by a sum of quasinormal modes. In this regime, the field and its time derivative on a constant time slice, \(t_{0}\), can then be approximated by:
\[\Psi_{4}(\rho,\theta,t_{0})=\sum_{p\in\{\pm\}}\sum_{n}\sum_{l}A_{[p]ln}\,{}_{-2}Y_{l}(\theta)\sum_{l^{\prime}}c_{[p]ll^{\prime}n}\ R_{[p]l^{\prime}n}(\rho)\exp\{-i\omega_{[p]l^{\prime}n}t_{0}\}\, \tag{9}\]
\[\partial_{t}\Psi_{4}(\rho,\theta,t_{0})=\sum_{p\in\{\pm\}}\sum_{n}\sum_{l}A_{[p]ln}\,{}_{-2}Y_{l}(\theta)\sum_{l^{\prime}}\left(-i\omega_{[p]l^{\prime}n}\right)c_{[p]ll^{\prime}n}\ R_{[p]l^{\prime}n}(\rho)\exp\{-i\omega_{[p]l^{\prime}n}t_{0}\}\, \tag{10}\]
where \(c_{[p]ll^{\prime}n}\) are the spherical-spheroidal mixing coefficients, \({}_{-2}Y_{l}\) are the spin-weighted _spherical_ harmonics, and \(R_{[p]ln}(\rho)\) and \(\omega_{[p]l^{\prime}n}\) are the QNM eigenfunctions and frequencies, respectively.
We can rewrite Eqs. (9) and Eq. (10) as a matrix equation for the amplitudes \(A_{[p]ln}\). In terms of the spherical harmonics for \(\Psi_{4}\), we may write for each angular number \(l\)
\[M_{[p]ll^{\prime}n}(\rho_{i})A_{[p]l^{\prime}n} =\Psi_{4,l}(\rho_{i}) \tag{11a}\] \[-i\omega_{[p]l^{\prime}n}M_{[p]ll^{\prime}n}(\rho_{i})A_{[p]l^{\prime}n} =\partial_{t}\Psi_{4,l}(\rho_{i})\, \tag{11b}\]
where repeated indices are summed over3, and
Footnote 3: The \(i\)’s in parenthesis and as subscripts index the radial grid points, \(\sqrt{-1}\) otherwise.
\[\Psi_{4,l}(\rho,t) :=\int_{\theta}\Psi_{4}(\rho,\theta,t)\ {}_{-2}Y_{l}^{*}(\theta)d\theta, \tag{12}\] \[M_{[p]ll^{\prime}n}(\rho_{i}) :=c_{[p]ll^{\prime}n}\ R_{[p]l^{\prime}n}(\rho_{i})\exp\{-i\omega_{[p]l^{\prime}n}t_{0}\}. \tag{13}\]
The QNM amplitudes \(A_{[p]l^{\prime}n}\) must simultaneously solve equations (11a) and (11b) for all \(l\), which we do numerically. Here, we can simply stack the two matrix equations in the radial direction (indexed by \(i\)) and solve the resultant equation by a minimization matrix solver. Specifically, we stack via the following rule
\[N_{[p]l^{\prime}n}(i)=\begin{cases}M_{[p]l^{\prime}n}(\rho_{i})&\text{if }i \leq i_{max}\\ -i\omega_{[p]l^{\prime}n}M_{[p]l^{\prime}n}(\rho_{i-i_{max}})&\text{if }i>i_{max},\end{cases} \tag{14}\]
and, similarly, for the right hand side of Eqs. (11):
\[b_{l}(i)=\begin{cases}\Psi_{4,l}(\rho_{i})&\text{if }i\leq i_{max}\\ \partial_{t}\Psi_{4,l}(\rho_{i-i_{max}})&\text{if }i>i_{max}\,\end{cases} \tag{15}\]
where \(i_{max}\) is the number of radial grid points. Under this procedure, Eqs. (11) can now be written as
\[N_{IJ}A_{J}=b_{I}, \tag{16}\]
where \(I\) indexes the spatial components (radial \(i\) and angular \(l\),) and \(J\) indexes the modes (prograde/retrograde \([p]\), angular index \(l^{\prime}\), and overtone number \(n\).) The matrix \(N_{IJ}\), where we pack the fitting basis functions as column vectors, is called the _design matrix_ (see, e.g., Ref. [55]).
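A minimal numpy sketch of this stacking is given below. The radial eigenfunctions, mixing coefficients, and frequencies are taken as given arrays (in practice they come from a quasinormal mode eigenfunction solver such as that of Ref. [39]); the array layout and grid sizes are assumptions made purely for illustration.

```python
# Sketch of assembling and solving the stacked spatial-fit system, Eqs. (14)-(16).
# Assumed inputs:
#   R[j, i]     : radial eigenfunction of mode j at radial point rho_i
#   c[l, j]     : spherical-spheroidal mixing coefficient of mode j into harmonic l
#   omega[j]    : complex QNM frequency of mode j
#   psi4_l[l, i], dt_psi4_l[l, i] : spherical-harmonic coefficients of the data at t0
import numpy as np

def spatial_fit(R, c, omega, psi4_l, dt_psi4_l, t0):
    n_l, n_modes = c.shape
    rows, rhs = [], []
    for l in range(n_l):
        # M(rho_i) = c * R(rho_i) * exp(-i omega t0), cf. Eq. (13)
        M = c[l][None, :] * R.T * np.exp(-1j * omega * t0)[None, :]
        rows += [M, -1j * omega[None, :] * M]        # blocks for Eqs. (11a) and (11b)
        rhs += [psi4_l[l], dt_psi4_l[l]]
    N = np.vstack(rows)                              # design matrix, cf. Eq. (14)
    b = np.concatenate(rhs)                          # data vector, cf. Eq. (15)
    A, *_ = np.linalg.lstsq(N, b, rcond=None)        # least-squares solution of Eq. (16)
    return A, N
```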
We find that when the initial data is a pure, arbitrary superposition of QNMs, we correctly recover the amplitudes and phases of the modes when we solve the matrix equation (16). The design matrix induces an inner product via
\[P(\Psi_{[p]ln},\Psi_{[p^{\prime}]l^{\prime}n^{\prime}}) :=\frac{1}{2}\langle N^{-1}\Psi_{[p]ln},N^{-1}\Psi_{[p^{\prime}]l^ {\prime}n^{\prime}}\rangle\] \[=\delta_{pp^{\prime}}\delta_{ll^{\prime}}\delta_{nn^{\prime}}, \tag{17}\]
where \(\langle\cdot,\cdot\rangle\) denotes the usual inner product on \(\mathbb{C}^{d}\), and \(\Psi_{[p]ln}\) and \(\Psi_{[p^{\prime}]l^{\prime}n^{\prime}}\) are constructed from the quasinormal mode eigenfunctions, as in Eq. (15). We numerically find that the design matrix defined in Eq. (14) has full rank and its right inverse exists as long as the overtones are radially resolved, with \(i_{max}\gg n_{max}\), i.e., there are more radial points than overtones in our fit. For given numerical data, we can determine the QNM amplitudes by computing \(N^{-1}b\).
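As a simple numerical sanity check on this rank statement, one can verify that the amplitudes of pure-QNM data are recovered exactly through the pseudo-inverse; a sketch, reusing the design matrix from the previous snippet, follows.

```python
# Check that the design matrix N has full column rank, so that pure-QNM data
# b = N @ A_true returns A_true when hit with the pseudo-inverse, A = N^{-1} b.
import numpy as np

def amplitudes_recovered(N, tol=1e-8, seed=0):
    rng = np.random.default_rng(seed)
    n_modes = N.shape[1]
    A_true = rng.standard_normal(n_modes) + 1j * rng.standard_normal(n_modes)
    b = N @ A_true
    A_fit = np.linalg.pinv(N) @ b
    return np.allclose(A_fit, A_true, atol=tol)
```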
### Spacetime Fitting
We now minimize a quadratic residual, as with the spatial fitting, but now we also sum over the different time steps. When the linear gravitational wave is dominated by QNMs, the fitting problem reduces again to solving Eq. (9) for the amplitude \(A_{[p]ln}\)'s, given numerical data \(\{\Psi_{4}(\rho,\theta,t)\}\) within \(t\in[t_{0},t_{1}]\).
As in the spatial fit, we decompose \(\Psi_{4}\) into spin-weighted spherical harmonics. Discretizing the radial (\(r_{i}\)) and time (\(t_{j}\)) coordinates, the design matrix now takes
the form:
\[M_{[p]ll^{\prime}n}(\rho_{i},t_{j})=\left(-i\omega_{[p]l^{\prime}n}\right)c_{[p]ll^{\prime}n}\ R_{[p]l^{\prime}n}(\rho_{i})\exp\{-i\omega_{[p]l^{\prime}n}t_{j}\}\, \tag{18}\]
where now the right-hand-side is set to be the field for the entire spacetime perturbation:
\[b_{l}(\rho_{i},t_{j})=\Psi_{4,l}(\rho_{i},t_{j}). \tag{19}\]
The spacetime fitting as a matrix equation is then
\[M_{IJ}A_{J}=b_{I}, \tag{20}\]
where \(I\) now indexes _both_ the temporal and spatial components (time \(j\), radial \(i\), and angular \(l\)), and \(J\) indexes the modes by \([p]\), \(l^{\prime}\), and \(n\). In Sec. (IV.2) and Sec. (IV.3) we demonstrate that the spacetime fit results are consistent with the spatial fit in regimes where we expect QNMs to dominate the solution.
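A sketch of the corresponding spacetime fit is shown below, with the same assumed inputs as the spatial-fit snippet plus an array of time samples. The design-matrix columns here are built from the expansion of Eq. (9); any constant per-mode prefactor (such as the \(-i\omega\) written in Eq. (18)) would simply rescale the fitted amplitudes mode by mode.

```python
# Sketch of the spacetime fit, Eqs. (18)-(20): one column per mode, evaluated on
# the full (t, rho) grid for each harmonic l, solved by linear least squares.
#   psi4_l[l, k, i] : harmonic-l coefficient of Psi_4 at time t_k and radius rho_i
import numpy as np

def spacetime_fit(R, c, omega, psi4_l, times):
    n_l, n_modes = c.shape
    phase = np.exp(-1j * np.outer(times, omega))          # shape (n_t, n_modes)
    blocks, rhs = [], []
    for l in range(n_l):
        M = c[l][None, None, :] * R.T[None, :, :] * phase[:, None, :]
        blocks.append(M.reshape(-1, n_modes))
        rhs.append(psi4_l[l].reshape(-1))
    M_full = np.vstack(blocks)                            # design matrix
    b = np.concatenate(rhs)                               # data vector, cf. Eq. (19)
    A, *_ = np.linalg.lstsq(M_full, b, rcond=None)
    return A
```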
Before going to the numerical examples, we first briefly discuss the incompleteness of the QNEs as a function basis for general solutions to the TE, and a resulting caveat of fitting QNMs due to the presence of non-modal solutions of the TE.
### A Caveat: Mode vs. Transient
While we can define an inner product under which the QNMs are orthonormal by making use of their radial and angular information, the modes remain incomplete as a basis for fitting black hole ringdown. By incompleteness, we mean that a generic solution to the TE cannot be represented as a sum of QNMs, when the solution violates the physical boundary condition: no ingoing wave at the horizon and no ingoing wave at infinity4. As we already mentioned, in addition to QNMs, solutions to the TE also admit a "prompt" and a "tail" contribution [57; 58], the sum of which we refer to as the "transient" part of the solution. Prompt here relates to the kind of perturbations we expect following a black hole merger, or scattering a compact wave packet off the black hole, and refers to the early rise in the waveform before the QNMs dominate. The tail part of the solution arises from back-scattered gravitational waves on the Kerr geometry, and dominates the solution at late times (beyond the times considered in this paper).
Footnote 4: Ref [56] suggests completeness of QNMs as a basis for solutions to the TE that respect the physical boundary conditions.
At the linear level there are no prompt or tail contributions to the solution if the initial data consists of purely quasinormal modes. However, for more generic initial data that better describes a distorted black hole formed from an astrophysical merger, there will be prompt and tail contributions [23; 30]. In these more generic settings, assuming the signal is purely made up of QNMs and fitting to those can lead to biased results, in particular for the high-\(n\) overtones, as they typically decay quite rapidly and on similar time scales to the prompt part of the transients. As we mentioned earlier, we call this overfitting the signal.
The prompt response dies off rapidly in time as it is sourced over a relatively small spacetime volume around the remnant black hole at the time of merger, and the corresponding wavefronts essentially follow geodesic trajectories to either future null infinity or the black hole horizon. Starting the QNM fit at later times should reduce the bias caused by the contribution of this transient response in the signal. However, the exact form of prompt response depends heavily on the initial data; in some cases one might expect it to be large enough, and decay slowly enough, to mask the higher overtones. By contrast, the tail contribution decays in a power-law fashion in time, slower than the QNM contribution [13]. Thus, the tail response may bias quasinormal mode fitting at late times (provided the signal to noise ratio of the signal was large enough to resolve a late time signal).
To assess the quality of our fitting results when non-QNM contributions to the solution are present, we adapt the technique presented in Refs. [30; 33; 34]. Namely, we vary the start time of the spacetime fitting, or time at which we apply the spatial fitting, and check if the amplitude for each quasinormal mode remains constant. We discuss the results of this exercise in Sec. (IV.2) and Sec. (IV.3).
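A sketch of this diagnostic is given below; `fit_amplitudes` stands for either of the two fitting procedures applied with start time `t0`, and is assumed to return the complex amplitudes measured at that time.

```python
# Fit-stability diagnostic: repeat the fit over a range of start times and undo
# the known QNM evolution, so that a genuinely present mode appears as a roughly
# constant (anchored) amplitude, while transient contamination shows up as drift.
import numpy as np

def anchored_amplitudes(fit_amplitudes, omega, t_fit, t_anchor):
    anchored = []
    for t0 in t_fit:
        A = np.asarray(fit_amplitudes(t0))                     # amplitudes measured at t0
        anchored.append(A * np.exp(-1j * omega * (t_anchor - t0)))
    return np.array(anchored)   # rows ~ constant for stably extracted modes
```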
## IV Numerical examples
Here we present some examples of applying the proposed spatial and spacetime fitting to numerical solutions of the TE with different initial conditions, as described in Sec. (II). Unless otherwise mentioned, all simulation results presented here were from runs with resolution \(n_{\rho}=512\) and \(n_{\theta}=40\), where \(n_{\rho}\) and \(n_{\theta}\) are the number of radial and angular grid points respectively, (see Appendix C for convergence results).
First, in Sec. (IV.1) we evolve initial data that consists of a single QNM, to demonstrate the accuracy of our evolution and (quasinormal mode) initial data codes, described in the Appendix (A). In Sec. (IV.2), we move to a more complicated class of initial data: a superposition of QNMs. In this case, we demonstrate that we can still reliably recover the amplitudes of the QNMs up to numerical precision of the solution, using both fitting techniques.
Lastly, in Sec. (IV.3) we consider scattering initial data (that is, initial data that cannot be written as a pure sum of quasinormal modes). In this case, we also extract the QNM content from the signal, although we do not have a direct theoretical estimate for the QNM amplitudes for this class of initial data. We do demonstrate that both the spatial and spacetime fitting methods are consistent, in the sense that both yield identical estimates for the
QNM amplitudes given the same initial data, and such estimates are stable with respect to fitting time at least for the fundamental mode and the first two overtones5. We further point out that the instability of fitting to the \(n\geq 3\) overtones for scattering initial data is likely due to the presence of the transient solution masking the high-\(n\) overtone spectrum, though their initial excitation amplitudes might be lower than that from black hole mergers.
Footnote 5: Due to how long-lived the transient is in the case of a near-extremal black hole, for that example we can only extract the fundamental mode during the numerical integration time of 500M.
### Evolving a single QNM
Let us consider the evolution of a single QNM for both Schwarzschild and near extremal Kerr (\(a=0.999\)) backgrounds. We set the initial condition to be either the \(l=m=2\) fundamental mode (\(n=0\)), or the \(l=m=2,\ n=3\) overtone6. As illustrated in Fig. (1), we can accurately evolve both the fundamental mode and overtone (blue solid lines) as compared to the analytic solution (red dashed lines). The residuals at future null infinity between the analytical solution and these runs are plotted in black. As shown in Appendix (C), this residual converges to zero at \(4^{\text{th}}\) order with increasing resolution.
Footnote 6: With our implementation we can evolve even higher overtones accurately; we only show the \(n=3\) overtone as an example.
These results are a strong test of our evolution code: if an overtone is excited, the code is capable of capturing it up to numerical precision. This accuracy provides the foundation for our following analysis.
### Evolving and fitting to a superposition of QNMs
In this section, we consider initial data that consists of a superposition of QNMs. We demonstrate that the spatial and spacetime fitting procedures, proposed in Sec. (III), can also correctly extract the QNM amplitudes in this case.
Let us consider initial data constructed by superposing the \(l=m=2\) fundamental (\(n=0\)), fourth (\(n=4\)), and the fifth (\(n=5\)) overtones on a Kerr background with spin \(a=0.7\) (the expected remnant spin from the astrophysical merger of equal mass, non-spinning, quasi-circular binaries [59]). In Fig. (2), we show the amplitudes extracted by applying the spatial fit at different \(t=\) constant surfaces (colored, solid lines) which match, up to numerical error, the analytical values of the mode amplitudes (grey, dashed lines). As a check, we have also included overtones that _are not_ present in the initial data in our fit, to demonstrate that the results have amplitudes consistent with the numerical error of our initial data and time evolution code. This test demonstrates the robustness of our fitting procedure, at least when applied to linearized solutions to the Einstein equations, with purely QNM initial data.
Furthermore, as in, e.g., Ref. [30], in Fig. (3) we show the stability of the fitting by factoring out the known decay rates of the modes. By doing that, the resulting QNM amplitudes are expected to be constant when fitting at different times, i.e., we consider the extraction of a given mode to be _stable_ if we recover a roughly constant amplitude (and phase) over some relevant, non-negligible time period. We have also compared the results between the spatial fit (colored, solid lines) and the spacetime fit (colored, dashed lines). We find that both methods are capable of stably extracting all the QNMs present (even the \(5^{th}\) overtone) until their amplitudes reach a floor set by numerical truncation error. This suggests that the inner product presented in Sec. (III.1) indeed establishes orthogonality between modes, complementing recent analytical results [46; 56].
### Evolving and fitting to scattering initial data
For our final example, we apply our quasinormal mode fitting procedures to analyze scattering initial data. This type of initial data excites the black hole in a more complex manner than quasinormal mode initial data, and we anticipate that a prompt, non-QNM transient solution to the Teukolsky equation will be noticeable in the ring-down signal. Specifically, we consider an approximately ingoing Gaussian pulse7 as initial data:
Footnote 7: Gaussian in Boyer-Lindquist radial coordinate \(r_{BL}\).
\[\Psi_{4}(\rho,\theta) =\exp\left\{-\frac{(\rho^{-1}-r_{0})^{2}}{w^{2}}\right\}\ _{-2}Y_{l}(\theta) \tag{21a}\] \[\partial_{t}\Psi_{4}(\rho,\theta) =-\frac{\rho^{2}}{2+4\rho}\partial_{\rho}(\Psi_{4}(\rho,\theta))\, \tag{21b}\]
where \(r_{0}\) and \(w\) specify the initial central location and width of the pulse, respectively. For the purposes of this paper, regardless of the black hole spin, we specify the angular part of the initial data to be the pure \(l=m=2\), spin-weight \(-2\) spherical harmonic, and the radial part to be a Gaussian centered at \(r_{0}=8\)M with width \(w=1\). For Kerr black holes, we expect the \(l>2\) modes to also be excited due to spherical-spheroidal mixing. To account for this mixing, we include up to \(l=4\) modes when constructing the design matrices for fitting, and we include up to the \(n=5\) overtones, both prograde and retrograde, for the \(l=2\) modes and up to \(n=1\) for the \(l=3,4\) modes, unless otherwise specified8.
Footnote 8: We checked that the quality of the fit does not improve upon adding more modes, either higher harmonics or overtones.
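The radial profile of Eq. (21) is straightforward to tabulate on a compactified grid. The snippet below is a minimal sketch for a non-spinning background with \(M=1\) (so the horizon sits at \(\rho=1/2\)); the grid size is an arbitrary illustrative choice, the angular factor is omitted, and the radial derivative is taken numerically rather than analytically.

```python
# Minimal sketch of the scattering initial data of Eq. (21), radial part only,
# for a = 0 and M = 1 (horizon at rho = 1/(2M)); grid size is illustrative.
import numpy as np

M, r0, w = 1.0, 8.0, 1.0
rho = np.linspace(1e-4, 1.0 / (2.0 * M), 513)            # compactified coordinate, rho ~ 1/r

psi4 = np.exp(-((1.0 / rho - r0) ** 2) / w ** 2)          # Eq. (21a), without the spherical harmonic
dpsi4_drho = np.gradient(psi4, rho)                       # numerical d(Psi4)/d(rho)
dpsi4_dt = -(rho ** 2) / (2.0 + 4.0 * rho) * dpsi4_drho   # Eq. (21b): approximately ingoing pulse
```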
In Fig. (4), we show the fitting results for different modes after applying the spatial fit (solid lines) and
spacetime fitting (dashed lines) procedures to the numerical data obtained with the scattering initial data on Kerr backgrounds with spins \(a=0\), \(a=0.7\) and, \(a=0.999\). To assess the stability of the fits over time, we anchored the amplitude of each mode at a common time \(t_{2}\), chosen (somewhat arbitrarily) to be the time when \(\Psi_{4}\) peaks at future null infinity; that is, we divide the mode amplitude at the time of fitting (the horizontal axis) by the expected amplitude evolution from the time \(t_{2}\) to the fitting time. The subsequent fit is then stable if the fitted amplitude and phase remain constant over an interval of fitting start times. For clarity, we have only plotted the overtones for the prograde, \(l=m=2\) mode in Fig. (4). The fundamental retrograde modes and higher multipoles from spherical-spheroidal mixing can also be extracted stably using our fitting methods.
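The anchoring step just described amounts to dividing out the known exponential evolution of each mode between the fitting time and the reference time \(t_{2}\). A minimal sketch, assuming the usual \(e^{-i\omega t}\) convention with \(\operatorname{Im}\omega<0\) (the amplitude and frequency values below are illustrative placeholders, not taken from our runs):

```python
# Anchor a fitted complex amplitude to a common reference time by dividing out
# the mode's expected evolution exp(-i*omega*(t_fit - t_ref)).
import numpy as np

def anchor_amplitude(a_fit, omega, t_fit, t_ref):
    return a_fit / np.exp(-1j * omega * (t_fit - t_ref))

omega = 0.37 - 0.09j          # roughly the Schwarzschild l=m=2 fundamental, in units of 1/M
a_ref = 1.0 + 0.5j            # fabricated "true" amplitude at t_ref
for t_fit in (0.0, 5.0, 10.0):
    a_at_tfit = a_ref * np.exp(-1j * omega * t_fit)        # what a fit starting at t_fit returns
    print(anchor_amplitude(a_at_tfit, omega, t_fit, 0.0))  # constant ~ a_ref if the fit is stable
```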
Both fitting methods yield consistent results, although they inherently have different truncation error (or, loosely speaking, "noise") characteristics. We speculate that as the spacetime fitting method uses the late time signal, its effective signal-to-noise ratio decreases faster than the spatial fit's. Consequently, the mode amplitude computed from this method typically becomes unstable slightly earlier than that from the spatial fit. On the other hand, spatial fitting does not incorporate information from late times, hence it is more sensitive to the early-time transient, and consequently tends to become stable slightly later than the spacetime fitted amplitudes.
Overall, for these scattering experiments we found that we can stably extract the fundamental mode and the first two overtones for a period of at least \(\sim 15\) M, around or after the time of peak \(\psi_{4}\), for Kerr backgrounds not too close to extremal 9. However, the fitting for higher overtones was generically unstable10. Given that in the previous section we demonstrated that our code and fitting
Figure 1: Evolving single QNMs (\(n=0\) and \(n=3\), left and right panels, respectively) for (top) Schwarzschild and (bottom) near-extremal Kerr with \(a=0.999\). The numerical solution at future null infinity is shown with solid blue lines, while the analytical predictions are drawn with red dashed lines. Plotted in black is the residual/difference between the two, which convergence studies (see Appendix (C)) show arises purely from numerical truncation error.
algorithms are capable of solving for and extracting superpositions of QNMs with overtones higher than the second, and that linear theory tells us the difference between these two classes of initial data resides in the transient part of the solution (as discussed above), this suggests that the source of the fitting instability with scattering initial data is the presence of transients. We note, however, the \(n\geq 3\) overtones could be more strongly excited during mergers, and hence still be stably fitted. We defer the study of such initial data to a future work.
## V A comparison to the traditional time-only fitting at future null infinity
In this section, we test the quality of a QNM fit from a traditional (time-only) fitting method, see, e.g. Ref. [11], and compare that against our fitting procedures. The time-only fit we employ here is equivalent to our spacetime fit restricted to future null infinity; namely, it takes into account the angular eigenfunction (spherical-spheroidal mixing) but does not use any radial information. In numerical relativity, it is common to estimate the value the waveform takes at future null infinity, either through extrapolating waveforms measured at several finite radii, or through a Cauchy characteristic extraction/matching [62; 63; 64; 65; 66; 67; 68], and then to find the best fit QNMs using fits to the temporal evolution of a select set of angular harmonics of the waveform at infinity.
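For concreteness, a minimal sketch of such a time-only fit: with the QNM frequencies held fixed at their known values, the complex amplitudes enter linearly and follow from a least-squares solve. The frequencies and signal below are fabricated placeholders, not output of our simulations.

```python
# Time-only ringdown fit at a single radius: waveform(t) ~ sum_k a_k exp(-i*omega_k*t).
import numpy as np

def time_only_fit(times, waveform, omegas):
    design = np.exp(-1j * np.outer(times, omegas))        # (n_times, n_modes) design matrix
    amps, *_ = np.linalg.lstsq(design, waveform, rcond=None)
    return amps

omegas = np.array([0.374 - 0.089j, 0.347 - 0.274j])       # ~ Schwarzschild l=m=2, n=0 and n=1
t = np.linspace(10.0, 60.0, 400)
signal = 1.0 * np.exp(-1j * omegas[0] * t) + 0.3j * np.exp(-1j * omegas[1] * t)
print(time_only_fit(t, signal, omegas))                   # recovers [1.0, 0.3j]
```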
We will assess the quality of these temporal fits in two ways. First, we consider the stability of the fitting, namely how well one can stably extract the amplitude and phase when changing the fitting start time [30]. Second, we consider the recovery of spatial information by testing whether the extracted mode amplitudes from performing time-only fits at several different radii agree with the radial eigenfunctions for the modes.
### Stability of the time-only fitting
In Fig. (5) we compare the results from the time-only fit at future null infinity (dotted lines) to the spatial fitting (solid lines), applied to the Gaussian pulse scattering experiment described in Sec. (IV.3). We find that, for Schwarzschild and Kerr with \(a=0.7\), the time interval over which the first overtone's (\(n=1\)) amplitude and phase can be stably extracted is much shorter in duration with the time-only fit, while the second and higher overtones can never be stably extracted with the time-only fit11. Interestingly, we find that when applying the time-only fit to the \(a=0.7\) Kerr black hole, the first overtone
Figure 2: Extracted amplitudes when applying the spatial fit (colored solid lines), described in Sec. (III.1) for an initial superposition of QNMs (\(n=0\), \(n=4\) and \(n=5\)) on a Kerr background with \(a=0.7\). Modes that are present in the initial data are plotted in bold lines. The grey dashed lines show the expected mode amplitude given our QNM initial conditions. The amplitudes for all modes are recovered to numerical precision, while the modes that are not present in the initial data have extracted amplitudes consistent with truncation error.
Figure 3: Stability of fitting superpositions of QNMs (\(n=0\), \(n=4\) and \(n=5\)) on a Kerr background with \(a=0.7\) (from the same run as shown in Fig. (2)). When factoring out the decay rate, the mode amplitudes we extract become constant in time, until the numerical noise dominates. The amplitudes extracted from the spacetime fit (colored dashed lines) are consistent with those obtained from the spatial fitting (colored solid lines), and both agree with the analytical results (grey, dashed lines). As expected, when fitting the overtones that were _not_ included in the initial data, the amplitudes are always unstable.
can be stably extracted _before_\(\Psi_{4}\) peaks at future null infinity. Whether the above holds in astrophysical ringdown (that is, with initial data that smoothly matches to the gravitational wave signal after merger) needs further study, but earlier results do indicate that at least with numerical relativity waveforms one can decompose the signal into QNMs beginning at the time of peak strain [11, 8], roughly 10 M before the peak of \(\Psi_{4}\) (issues of overfitting aside).
### Recovery of spatial information
We now perform a waveform fit at several different fixed radii on numerical data with the scattering initial data described in Sec. (IV.3), and test whether the fitted amplitude and phase for each mode at different radii agree with the prediction from the radial eigenfunction. We vary the fitting start time \(t_{0}\) for the time-only fit and evaluate the radial mismatch \(\mathcal{M}\) as a function of \(t_{0}\). The result is shown in Fig. (6). We find that for Schwarzschild (with scattering initial data), the time-only fit can identify the fundamental mode and first two overtones. For Kerr with spin \(a=0.7\), we can only faithfully reconstruct up to the first overtone12. Lastly, in the near-extremal limit, the radial mismatch for the fundamental mode decreases as \(t_{0}\) increases because the transient decays away faster than the fundamental mode, yet none of the overtones are correctly recovered within the time span of our numerical integration (500 M).
Footnote 12: The marginal dip in the mismatch for the \(n=2\) mode around \(t_{0}=15\sim 20M\) hints at its existence for Kerr with \(a=0.7\), which we do extract stably using the spatial fit (see Fig. 5).
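The precise mismatch \(\mathcal{M}\) of Eq. (23) is not reproduced here; as an illustrative stand-in, the sketch below uses a standard normalized overlap, \(\mathcal{M}=1-|\langle a,b\rangle|/(\|a\|\,\|b\|)\), between the radius-dependent amplitudes returned by time-only fits at several radii and the known radial eigenfunction sampled at those radii. This is an assumed definition for illustration, and all arrays are fabricated.

```python
# Illustrative normalized mismatch between fitted radial amplitudes and a known
# radial eigenfunction (a stand-in definition, not necessarily Eq. (23)).
import numpy as np

def radial_mismatch(fitted_amps, eigenfunction_at_radii):
    a, b = np.asarray(fitted_amps), np.asarray(eigenfunction_at_radii)
    overlap = np.abs(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - overlap

radii = np.linspace(0.05, 0.5, 8)
eigen = np.exp(-3.0 * radii) * (1 + 0.2j * radii)    # fabricated radial profile
print(radial_mismatch(0.8j * eigen, eigen))          # perfect reconstruction up to a constant: ~0
```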
To illustrate the quality of the reconstructed radial structure, in Fig. (7), we plot the mode amplitudes as a function of radius from time-only fits against the expected radial eigenfunctions (grey, dashed lines), with \(t_{0}=t_{1}\) (colored, solid lines) and \(t_{2}\) (colored, dashed lines), the times at which the waveform peaks at the horizon and null infinity respectively. For visual comparison, the amplitudes for the known radial functions are set to agree with the time-only fit at future null infinity when \(t_{0}=t_{1}\) (solid lines), by construction.
As indicated by Fig. (6) and Fig. (7), the radial eigenfunctions are better recovered by the time-only fit at a surprisingly _early_ time (overfitting aside), except for the near-extremal case. We further note that our ability to extract the QNM radial variation through the time-only fit also depends on the initial data, which, as we already discussed, heavily impacts the form of the transient signal. We defer a detailed study of the initial data and interpretation of the seemingly better-behaved fittings at early time to a future work.
## VI Discussion
In this work we have presented two new techniques for extracting the quasinormal mode content from perturbed single black holes. The main novel aspect of the
Figure 6: Spatial mismatch \(\mathcal{M}\) (Eq. 23) for the time-only fit as a function of fitting start time \(t_{0}\), applied to scattering experiments for a Kerr black hole with \(a=0\) (top), \(a=0.7\) (middle), and \(a=0.999\) (bottom). The radial amplitude variation agrees relatively well with the known radial function when \(\mathcal{M}<10^{-3}\), i.e., outside the shaded region.
Figure 7: Mode amplitude radial variation from time-only fitting (colored lines) for scattering initial data (see Eq. (21)). Note that the vertical axis for each subplot has a different scale. We plot the measured radial amplitude variation with two fitting start times \(t_{1}\) (colored, solid lines) and \(t_{2}\) (colored, dashed lines), the times at which the waveform peaks at \(\mathcal{H}^{+}\) and \(\mathcal{J}^{+}\), respectively. The known radial functions are plotted as grey, dashed lines for comparison, whose amplitudes are chosen to match the solid colored lines at \(\mathcal{J}^{+}\). The seemingly better agreement for the overtones in the case of \(a=0.999\) near null infinity is likely due to the similar frequencies of the family of zero-damped modes in the extremal limit, i.e., approaching this limit the overtones do not decay much more rapidly than the fundamental mode.
fitting procedures is that they utilize the radial structure of each QNM over the full exterior of the black hole spacetime. This is aided by our use of horizon-penetrating, hyperboloidally compactified coordinates, in which the quasinormal mode eigenfunctions of the Teukolsky equation are well-behaved from the horizon to future null infinity.
We used the methods described in Refs. [39, 42] to solve the Teukolsky equation in the time domain, evolving initial data that can be a superposition of a chosen number of QNMs together with a more generic transient. We first showed that our fitting procedures are capable of stably extracting the correct amplitudes when the initial data consists of a superposition of pure QNMs, including rapidly decaying high overtones, until the corresponding amplitudes drop below a floor set by numerical truncation error. The reason that the fitting procedure works this well is that it uses more information about the waveform, namely its radial dependence. This drastically increases the number of data points available in the fit as compared to a fit at a fixed radius (or at a small number of radii). Moreover, by making use of the radial dependence of the modes, we can construct an inner product for which the quasinormal modes are orthogonal with respect to each other. This allows us to project out QNM amplitudes from a perturbation consisting of a pure sum of QNMs at a given fixed time slice, though such a projection can still be biased in the presence of transients (defined as any non-QNM component of the perturbation).
With confidence in our ability to accurately extract QNMs in the absence of transients, we examined the linear excitation of a black hole through a prompt, compact pulse of gravitational waves. This investigation aimed to shed light on the issue of _overfitting_ when attempting to extract the excitation amplitudes of QNMs from numerical simulations of binary black hole mergers, which can lead to erroneous QNM amplitude measurements if transients are not accounted for. As we are using the Teukolsky equation for the evolution of the perturbation, we can only study the effects of the linear transients. However, since it seems unlikely that non-linear effects would help with the problem of overfitting, our study can be considered a best-case scenario for the theoretical measurement of excitation amplitudes. First, we showed that even using our new fitting algorithms, the presence of linear transients (with scattering initial data) prevented stable measurement beyond the \(2^{nd}\) overtone of the dominant \(\ell=m=2\) perturbation for a Schwarzschild and an \(a=0.7\) Kerr black hole (for an \(a=0.999\) Kerr black hole we could only stably extract the fundamental mode during our integration time of \(500\)M).
We then compared our new fitting procedures to a more traditional time-only fit. This analysis showed that a time-only fit may result in an erroneous amplitude for the first overtone of the \(\ell=m=2\) mode of an \(a=0.7\) Kerr black hole, outside an interval of fitting start times of order \(15\) M. Moreover, when performing the time-only fit at different radii, we found that the amplitude and phase one obtains from the fitting at each radius does not match the predicted behavior of the second (and higher) overtones of the quasinormal mode radial eigenfunctions (again, except for the case of Schwarzschild, where the second overtone does match to reasonable precision). In the case of a near-extremal hole (\(a=0.999\)), only the fundamental mode can be faithfully extracted, due to the long-lasting transient instability near the horizon; the overtones might be present in the signal but would require a longer integration time and higher numerical resolution.
A significant issue regarding extrapolating our results to what this implies for existing studies, which have attempted to extract mode amplitudes in the full non-linear case, is that we have not attempted to match our Gaussian-pulse perturbation of the black hole to the initial data of any particular merger event, as we do not have a theoretical estimate for the excitation amplitudes of the overtones in the merger case. Thus, though we expect similar issues to occur in the non-linear case at some overtone number \(n\) for any given angular mode, and expect overfitting to be worse in the non-linear case, putting a threshold on the cutoff overtone number from the linear problem would require a study using better-adapted initial data.
The basic idea of the techniques described here - fitting for the spatial behavior of the quasinormal eigenfunctions and their dependence on time - could, in principle, be applied to fully nonlinear numerical relativity simulations of ringdown. However, there are several complications: the method requires a fixed gauge choice in the ringdown phase (potentially achievable through an appropriate gauge driver condition [69, 70]), and a careful treatment of wave-extraction in the strong field [71]. Additionally, spatial information could only be used far away from the remnant black hole, where the gravitational waves could be well described by linear (and possibly second-order) perturbation theory.
Our ability to set up pure mode initial data should allow for further studies of nonlinear, second-order effects during black hole ringdown. Studying the time evolution of pure quasinormal mode initial data in a non-linear/second order code would allow one to systematically study the efficiency of mode mixing. Additionally, with pure quasinormal mode eigenfunction initial data, one could study the functional form of the second order source term that appears in the solution to the second order vacuum Teukolsky equation [42, 72]. Doing so would allow us to study how the source term varies with different kinds of quasinormal modes, such as the overtones. We leave a study of these effects to future work.
Our setup, solutions to the Teukolsky equation with fitting procedures that use the entire waveform rather than just its value at future null infinity, is arguably "optimal" for extracting the QNM signal. Specifically, the solutions we studied do not exhibit nonlinearities, allowing us to concentrate solely on the signal's prompt and QNM contributions.
Do astrophysical mergers excite the overtones of the remnant black hole more cleanly compared to the scattering experiments proposed here? We leave this question to a future study, in which we will set up initial data describing the perturbed remnant black hole from merger calculations. Nevertheless, given the challenges of fitting for the overtones in this (simplified) setup, our results provide further evidence that fitting for the overtones in astrophysical or full numerical relativity data, as well as the interpretation thereof, is a highly sensitive process that depends significantly on the data extraction and fitting procedures employed.
## Acknowledgements
We thank Emanuele Berti, Will Farr, Elena Giorgi, Abhishek Hegade, Stefan Hollands, Lam Hui, Maximiliano Isi, Macarena Lagos, Lionel London, Nicholas Loutrel, Sizheng Ma, Keefe Mitman, Rob Owen, Harrison Siegel, and Zihan Zhou for their useful comments regarding various aspects and physical implications of this project. H.Z. especially thanks Will Farr, Maximiliano Isi, Sizheng Ma, and Harrison Siegel for discussions regarding fitting procedures. The authors are pleased to acknowledge that the work reported on in this paper was substantially performed using the Princeton Research Computing resources at Princeton University, which is a consortium of groups led by the Princeton Institute for Computational Science and Engineering (PICSciE) and the Office of Information Technology's Research Computing. J.L.R. acknowledges support from the Simons Foundation through Award number 896696 and the NSF through WoU tidal 1-476799-244000-191100. A.C.-A. acknowledges support from the Simons Foundation.
## Appendix A Calculating quasinormal modes and eigenfunctions for the Teukolsky equation
As we discuss in Sec. (IV), one set of initial data we use consists of a linear superposition of quasinormal eigenfunctions (QNEs). To compute the quasinormal modes, we follow the algorithm presented in [39], except for one change, which we found allows us to stably solve for the higher overtones. Here we briefly outline the algorithm and the improvement to it (the code we used can be accessed at [51]).
The Teukolsky equation (5) separates under the following decomposition
\[\Psi\left(\tau,r,\theta,\phi\right)=e^{im\phi-i\omega\tau}R\left(\rho\right)S \left(\theta\right). \tag{10}\]
From this, we obtain two ordinary differential equations, which we schematically write as
\[A\left(\rho\right)\frac{d^{2}R}{d\rho^{2}}+B\left(\omega,m,\rho \right)\frac{dR}{d\rho}+\left(C\left(\omega,m,\rho\right)-{}_{s}\Lambda_{l}^{ m}\right)R =0, \tag{11a}\] \[\frac{1}{\sin\theta}\frac{d}{d\theta}\left(\sin\theta\frac{dS}{d \theta}\right)+\left(s+\frac{\left(m+s\cos\theta\right)^{2}}{\sin^{2}\theta}- 2a\omega s\cos\theta+a^{2}\omega^{2}\cos^{2}\theta+{}_{s}\Lambda_{l}^{m} \right)S =0, \tag{11b}\]
where \(A,B,C\) are functions that are given in [39]. Note that (11b) is the standard equation for the spin-weighted spheroidal harmonics [1]. Following [39], we view (11a) and (11b) as defining two eigenvalue problems with the eigenvalue \({}_{s}\Lambda_{l}^{m}\). The set of \(\left\{\omega,R,S\right\}\) for which (11a) and (11b) have the same eigenvalue \({}_{s}\Lambda_{l}^{m}\) are the quasinormal modes and eigenfunctions of the Teukolsky equation [39]. We note that (11a) and (11b) also admit total transmission and scattering mode solutions, but they would be irregular at the outer boundary. Therefore, the regularity one inherits from the set of spectral basis functions eliminates such solutions.
We numerically discretize (11a) and (11b), solve for the eigenvalues and eigenvectors of the two systems, and then vary \(\omega\) until at least one eigenvalue of the two discretized systems coincides. The value \(\omega\) is then a quasinormal mode frequency, and the corresponding eigenvector with eigenvalue \({}_{s}\Lambda_{l}^{m}\) gives the quasinormal eigenfunction. As in [39] we discretize (11b) using a spectral method first described in [73, 50]. The radial equation (11a) was discretized using a Chebyshev pseudospectral method in [39]. We found that solving for the higher overtone quasinormal modes using the radial Chebyshev pseudospectral method required using a large number of collocation points. This led to numerically ill-conditioned discretizations of (11a), which then required the use of higher-precision arithmetic. Here we describe a spectral method which leads to sparse, well-conditioned discretizations of (11a), even when we solve for the higher quasinormal mode overtones.
The spectral method makes use of the properties of the Ultraspherical (also called the Gegenbauer) polynomials [74]. For completeness, we outline the basic idea of the method here, although we refer to [74] for a more complete exposition. Ultimately we used the ApproxFun
[75]13 implementation of these methods in our code. Our conventions follow [76].
Footnote 13: [https://github.com/JuliaApproximation/ApproxFun.jl](https://github.com/JuliaApproximation/ApproxFun.jl)
We first transform the radial coordinate to \(x\in[-1,1]\). We next expand \(R\) in a series of Chebyshev polynomials of the first kind
\[R(x)=\sum_{n=0}^{N}c_{n}T_{n}(x). \tag{13}\]
The derivative of the Chebyshev polynomials of the first kind can be written in terms of the Chebyshev polynomials of the second kind
\[\frac{dT_{n}(x)}{dx}=nU_{n-1}(x). \tag{14}\]
For higher order derivatives with respect to \(x\), we use the following property of the Ultraspherical polynomials
\[\frac{dC_{n}^{(\lambda)}(x)}{dx}=2\lambda C_{n-1}^{(\lambda+1)}(x), \tag{15}\]
where \(U_{n}(x)=C_{n}^{(1)}(x)\). To conclude, we see that we can write
\[\frac{dR}{dx}=\sum_{n=1}^{N}nc_{n}U_{n-1}(x), \tag{16a}\] \[\frac{d^{2}R}{dx^{2}}=\sum_{n=2}^{N}2nc_{n}C_{n-2}^{(2)}(x). \tag{16b}\]
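As a quick sanity check of these relations (a sketch for the reader, not part of our solver), the coefficient mappings can be verified numerically with standard special-function routines; the array sizes below are arbitrary.

```python
# Numerical check of Eqs. (16): Chebyshev-T coefficients c_n map to n*c_n in the
# Chebyshev-U basis for dR/dx, and to 2*n*c_n in the Gegenbauer C^(2) basis for d2R/dx2.
import numpy as np
from scipy.special import eval_chebyt, eval_chebyu, eval_gegenbauer

rng = np.random.default_rng(0)
N = 8
c = rng.standard_normal(N + 1)
x = np.linspace(-0.9, 0.9, 5)

dR = sum(n * c[n] * eval_chebyu(n - 1, x) for n in range(1, N + 1))               # Eq. (16a)
d2R = sum(2 * n * c[n] * eval_gegenbauer(n - 2, 2, x) for n in range(2, N + 1))   # Eq. (16b)

h = 1e-5   # compare against finite differences of R(x) = sum_n c_n T_n(x)
R = lambda y: sum(c[n] * eval_chebyt(n, y) for n in range(N + 1))
print(np.max(np.abs(dR - (R(x + h) - R(x - h)) / (2 * h))))             # ~1e-7 or smaller
print(np.max(np.abs(d2R - (R(x + h) - 2 * R(x) + R(x - h)) / h ** 2)))  # ~1e-5 or smaller
```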
Consider the vectorial representation of \(R\), Eq. (13)
\[R=\mathbf{T}^{T}\mathbf{c}, \tag{17}\]
where \(\mathbf{T}=\left(T_{0}\left(x\right),T_{1}\left(x\right),...,T_{N}\left(x \right)\right)\) and \(\mathbf{c}\equiv\left(c_{0},c_{1},...,c_{N}\right)\). We see that we can write the first and second derivatives of \(R\) as
\[\frac{dR}{dx} =\mathbf{U}^{T}\mathbb{D}_{1}\mathbf{c}, \tag{18a}\] \[\frac{d^{2}R}{dx^{2}} =\mathbf{C}^{T}\mathbb{D}_{2}\mathbf{c}, \tag{18b}\]
where \(\mathbf{U}=\left(U_{0}\left(x\right),U_{1}\left(x\right),...,U_{N}\left(x\right)\right)\), \(\mathbf{C}\equiv\left(C_{0}^{\left(2\right)}\left(x\right),C_{1}^{\left(2\right)}\left(x\right),...,C_{N}^{\left(2\right)}\left(x\right)\right)\), and \(\mathbb{D}_{1}\) and \(\mathbb{D}_{2}\) are sparse matrices, the components of which can be inferred from Eqs. (16). To complete the discretization of (11a), we need to convert \(A,B,C\), along with \(dR/d\rho\) and \(R\), to the polynomial basis \(C_{n}^{\left(2\right)}\), which can be done using sparse matrices [74]. Ultimately with this method, (11a) can be discretized to take the form
\[\left(\mathbb{A}-\lambda\mathbb{B}\right)\mathbf{c}=0. \tag{19}\]
where \(\mathbb{A}\) and \(\mathbb{B}\) are sparse matrices with a relatively small condition number. We do not need to impose boundary conditions as regularity at the boundaries imposes the ingoing radiation condition at the black hole horizon and the outgoing radiation condition at future null infinity [48; 39]. We solve the generalized eigenvalue problem (19) using standard numerical linear algebra routines (that is, with the eigen solver in the Julia standard library).
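The outer loop of the search can be summarized as follows. The sketch below is schematic: `build_radial_matrices` and `build_angular_matrix` stand in for the discretizations of (11a) and (11b), which are not reproduced here, and the toy constructors at the bottom exist only so the search loop runs end to end; none of this is our actual Julia implementation.

```python
# Schematic QNM search: adjust omega until a generalized eigenvalue of the radial
# problem, Eq. (19), coincides with an eigenvalue of the angular problem.
import numpy as np
from scipy.linalg import eigvals
from scipy.optimize import minimize

def lambda_mismatch(omega_vec, build_radial_matrices, build_angular_matrix):
    omega = omega_vec[0] + 1j * omega_vec[1]
    A, B = build_radial_matrices(omega)                 # discretization of Eq. (11a)
    lam_r = eigvals(A, B)                               # generalized eigenvalues of Eq. (19)
    lam_r = lam_r[np.isfinite(lam_r)]
    lam_a = eigvals(build_angular_matrix(omega))        # discretization of Eq. (11b)
    return np.min(np.abs(lam_r[:, None] - lam_a[None, :]))

def find_qnm(omega_guess, build_radial_matrices, build_angular_matrix):
    res = minimize(lambda_mismatch, [omega_guess.real, omega_guess.imag],
                   args=(build_radial_matrices, build_angular_matrix),
                   method="Nelder-Mead", options={"xatol": 1e-12, "fatol": 1e-12})
    return res.x[0] + 1j * res.x[1]

# Toy stand-ins (NOT the Teukolsky operators): radial "eigenvalue" omega^2, angular 4,
# so the search should return omega ~ 2.
toy_radial = lambda w: (np.array([[w ** 2]]), np.eye(1))
toy_angular = lambda w: np.array([[4.0]])
print(find_qnm(1.5 + 0.1j, toy_radial, toy_angular))
```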
## Appendix B Structure of QNM Radial Eigenfunctions
Here, we briefly discuss the structure of the radial eigenfunctions for QNMs on \(\tau=const.\) HPHC hypersurfaces. In HPHC coordinates, \(\tau=const.\) hypersurfaces become tangent to null surfaces at the future horizon and null infinity (and hence tangent to the characteristic curves of the Teukolsky equation). There are two main effects that determine the far-field behavior of the radial quasinormal eigenfunctions to the Teukolsky equation. First, there is some flexibility in HPHC coordinates as to where \(\tau=const.\) hypersurfaces intersect future null infinity (while pinned to the same location at the horizon) [41]; we call this flexibility a _propagation effect_. Second, the rate at which the coordinate volume on the slice changes as a function of \(\rho\) controls the behavior of the eigenfunction at future null infinity, which gives rise to the familiar 1/r decay at large radii. Since we solve for the rescaled variable \(\Psi_{4}\) defined in Eq. (4) in both the QNM (initial data) code and the evolution code, the \(1/r\) volume effect is factored out. We discuss the propagation effect in detail below.
### Propagation effects
To understand the nature of propagation effects in HPHC coordinates, we first solve the null geodesic equation in ingoing Eddington-Finkelstein coordinates (for a related discussion see Appendix C of [42]). Setting \(\xi_{\theta}=\xi_{\phi}=0\), we find that the characteristic speeds of outgoing and ingoing null geodesics are
\[c_{+} =\frac{\xi_{v}^{+}}{\xi_{r}^{+}}=1-\frac{4Mr}{2Mr+\Sigma_{BL}} \tag{20a}\] \[c_{-} =\frac{\xi_{v}^{-}}{\xi_{r}^{-}}=-1. \tag{20b}\]
To determine the radial null characteristics on a hyperboloidal slice, we first define a radial coordinate \(\rho(r)\) and time coordinate \(T(v,r)\). Under that coordinate change the characteristic speeds are
\[\tilde{c}_{\pm}=\frac{d\rho/dr}{\frac{1}{c_{\pm}}\partial_{v}T+\partial_{r}T}. \tag{21}\]
From this, we can determine the time that it takes for a radially outgoing wave, starting at radius \(\rho_{0}\) to reach
null infinity by integrating
\[\tau_{+}(\rho_{0})=\int_{\rho_{0}}^{\rho_{\mathcal{J}^{+}}}\frac{1}{\tilde{c}_{+}( \rho)}d\rho. \tag{10}\]
Similarly, one can compute the time in this coordinate for a radially ingoing wave to reach the black hole horizon:
\[\tau_{-}(\rho_{0})=\int_{\rho_{\mathcal{H}^{+}}}^{\rho_{0}}\frac{1}{\tilde{c}_{- }(\rho)}d\rho\, \tag{11}\]
where \(\rho_{\mathcal{J}^{+}}\) and \(\rho_{\mathcal{H}^{+}}\) are, respectively, the radius of null infinity and horizon in this coordinate which we assume to be independent of time.
One may interpret these time intervals as the amount by which the slice mismatches with a radially outgoing/ingoing spherical wavefront. As the mode is exponentially decaying, the amplitude of the mode will be affected by this time mismatch. For a quasinormal mode with frequency \(\omega\), the amplitude variation of the radial wavefunction due to this mismatch time, for the outgoing part of the wave, is given by:
\[A(\rho)\propto\exp\{\Im\{\omega\}\tau_{+}(\rho)\}. \tag{12}\]
Note that here the amplitude increases faster towards infinity for a wave with a higher decay rate.
Fig. (8) diagrams the basic intuition behind this result. Far from the black hole, a spherical wave would be approximately advected with a decay of 1/r along the null geodesics. After factoring out the 1/r decay, we expect the amplitude of the wave to be roughly constant along the outgoing null geodesics labeled by grey dashed lines at large radius. We see that on a \(T=const.\) hypersurface, the faster decaying mode would have a radial amplitude \(A(\rho)\) that decays faster as we approach the black hole.
We note that the propagation time depends on the geodesic trajectory one considers (although the natural choice is to compute the propagation time from the characteristic speed of the Teukolsky equation, as we do here). Since QNMs carry angular momentum, we expect that the propagation times (10) and (11) are lower bounds on the propagation time of a quasinormal mode wavefront. We also note that the wavefront of a mode only travels along a null geodesic in the eikonal limit; finite-wavelength effects may further complicate our above argument. Nevertheless, we show that the QNM radial functions roughly follow the above argument in Fig. (9). In that figure we plot the radial eigenfunction for the overtones following the procedure outlined in Appendix (A), and compare them to the predicted scaling of the radial amplitude from (12). In particular, we present the radial eigenfunctions for successive overtones of a Schwarzschild black hole with \(l=m=2\). We see that the radial profiles of the modes roughly follow the scaling as predicted by Eq. (12).
To determine \(\tau_{+}\left(\rho\right)\), we note that in the coordinates we are using
\[\rho(r) =\frac{L^{2}}{r} \tag{13}\] \[T(v,r) =v+h(r)\, \tag{14}\]
Figure 8: Penrose diagram of the Kerr exterior. The black dashed curve describes a \(T=const.\) hypersurface in HPHC coordinates. The gray dotted lines represent the trajectories of outgoing null geodesics. The box in the top-right corner shows the observed black hole ringdown signal at future null infinity, one with slower decay (blue) and the other faster (red). Note that the signals are illustrative only; any decaying wave would have infinite amplitude at \(i^{0}\), zero amplitude at \(i^{+}\), and infinitely many oscillations between.
Figure 9: Radial eigenfunctions of overtones for Schwarzschild with \(l=m=2\) (solid, colored lines). We also plot predictions from radially outgoing geodesics with dashed lines, by evaluating equation (12) with the overtone frequencies. We note that the geodesic prediction yields larger slopes for the radial functions at \(\mathcal{J}^{+}\) for overtones with higher n, in accordance with the eigenfunctions we calculated from the Teukolsky equation. However, the slope for radial eigenfunctions is not precisely matched by the geodesic prediction. This is likely due to the fact that quasinormal modes do not exactly satisfy the eikonal limit from which (12) is derived.
where
\[\frac{dh}{dr}=-1-\frac{4M}{r}. \tag{10}\]
Here, \(L\) is a constant length scale that we take to be 1. The locations of the horizon and future null infinity are then \(\rho=\frac{1}{M+\sqrt{M^{2}-a^{2}}}\) and \(\rho=0\), respectively. The ingoing and outgoing characteristic speeds in these coordinates are
\[\tilde{c}_{+} =-\frac{a^{2}\rho^{2}\cos(\theta)-2M\rho+1}{8M^{2}-4a^{2}M\rho\cos (\theta)} \tag{11}\] \[\tilde{c}_{-} =\frac{\rho^{2}}{4M\rho+2}. \tag{12}\]
For illustrative purposes, we calculate the propagation times defined in equations (10) and (11) for null rays for an \(M=1/2\), \(a=0\) black hole. In this case, the outgoing and ingoing propagation times become:
\[\tau_{+}(\rho) =-2\log\left(1-\rho\right) \tag{13a}\] \[\tau_{-}(\rho) =2-\frac{2}{\rho}+2\log\left(\rho\right). \tag{13b}\]
Note that the outgoing time delay diverges at the horizon, and the ingoing time delay diverges at infinity; this reflects the fact that there is no outgoing radiation at the black hole horizon and no ingoing radiation at future null infinity.
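These expressions are easy to reproduce numerically. The sketch below integrates the outgoing characteristic speed quoted above for \(a=0\), \(M=1/2\), and then evaluates the corresponding amplitude scaling; the quasinormal frequency is only an approximate literature value for the Schwarzschild \(l=m=2\) fundamental mode, rescaled to \(M=1/2\).

```python
# Numerical check of the outgoing propagation time tau_+ (Eq. (10)) and of the
# amplitude scaling A(rho) ~ exp(Im(omega) * tau_+(rho)) for a = 0, M = 1/2.
import numpy as np
from scipy.integrate import quad

M = 0.5
c_plus = lambda rho: -(1.0 - 2.0 * M * rho) / (8.0 * M ** 2)   # outgoing speed with a = 0

def tau_plus(rho0):
    val, _ = quad(lambda r: 1.0 / c_plus(r), rho0, 0.0)        # integrate out to scri+ (rho = 0)
    return val

rho = np.array([0.2, 0.5, 0.8, 0.95])
tau = np.array([tau_plus(r) for r in rho])
print(tau)                              # grows without bound as rho -> 1 (the horizon)
print(-2.0 * np.log(1.0 - rho))         # closed form quoted above

omega = (0.374 - 0.089j) / M            # ~ Schwarzschild l = m = 2 fundamental, M*omega ~ 0.374 - 0.089i
print(np.exp(omega.imag * tau))         # amplitude scaling: decays toward the horizon, cf. Fig. 9
```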
## Appendix C Convergence Tests
In this section we show the numerical convergence of the time domain code [77] used in this work. We consider the evolution of a single QNM, and a numerical scattering experiment. The time evolution code makes use of pseudo-spectral methods in the angular (\(\theta\)) direction and \(4^{th}\) order finite difference methods in the radial (\(\rho\)) direction.
The initial data is then integrated in time using an explicit RK4 integrator. Therefore, fixing the angular resolution, one expects the code to approach the continuum solution with \(4^{th}\) order convergence. In general, we find that the numerical error is dominated by the radial direction.
For our convergence tests, we fix the number of angular collocation points to be 40, and increase the radial resolution by successive factors of 2. We see \(4^{th}\) order convergence in Fig. (10), for single QNM evolution, and in Fig. (11), for the gravitational wave scattering experiment. We show that the numerical resolution of our simulations is not the limiting factor in the precision of our QNM fits to scattering initial data in Fig. (12), where we compare the spatial fit applied to both the high and mid resolution runs.
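The convergence order quoted here can be estimated directly from three runs whose radial resolutions differ by factors of two, without knowing the exact solution. A minimal sketch (the arrays are fabricated placeholders for the waveform at future null infinity sampled at common times):

```python
# Self-convergence estimate: p = log2(||u_low - u_mid|| / ||u_mid - u_high||)
# for resolutions differing by successive factors of two.
import numpy as np

def observed_order(u_low, u_mid, u_high):
    return np.log2(np.linalg.norm(u_low - u_mid) / np.linalg.norm(u_mid - u_high))

t = np.linspace(0.0, 50.0, 200)
exact = np.cos(0.37 * t) * np.exp(-0.09 * t)          # fabricated "continuum" ringdown
err = 1e-3 * np.sin(t)                                # fabricated leading error profile
u_low, u_mid, u_high = exact + err, exact + err / 16, exact + err / 256
print(observed_order(u_low, u_mid, u_high))           # ~4.0 for a 4th-order scheme
```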
Figure 11: Convergence in a scattering experiment simulation off a Kerr background with \(a=0.7\) (top) and \(a=0.999\) (bottom). Here, the number of angular collocation points is fixed to \(n_{\theta}=40\) for all runs, and we change the radial resolution by factors of 2 successively. The ultra low resolution is run with \(n_{\rho}=64\), low with \(n_{\rho}=128\), mid with \(n_{\rho}=256\), and high with \(n_{\rho}=512\).
Figure 10: Convergence evolving a single QNM with \(n=3\) and \(a=0.999\). Here, the number of angular collocation points is fixed to 40 for all runs, and we change the radial resolution by factors of two successively. The low resolution is run with \(n_{\rho}=128\), mid with \(n_{\rho}=256\), and high with \(n_{\rho}=512\) grid points. By the _analytic_ answer we mean the prediction for \(\Psi_{4}\) at future null infinity for an \(n=3\), \(l=m=2\) quasinormal mode.
Figure 12: Convergence of fitting for the scattering initial data on a Kerr background with \(a=0.7\) (top) and \(a=0.999\) (bottom), here illustrated with the spatial fitting algorithm (spacetime fitting shows similar convergence properties). The solid lines show the fitting result from the high resolution simulation (\(n_{\rho}=512\)), and dashed lines shows that from the mid resolution (\(n_{\rho}=256\)); for \(a=0.999\), we also plot the low resolution (\(n_{\rho}=128\)) in dotted lines. We find that for \(a=0.7\) increasing numerical resolution does not improve the stability of fitting the overtones, indicating the source of instability is due to a resolved transient. For \(a=0.999\), we find that the late time kink in the slope for the first overtone (red lines) is due to the numerical truncation error; the kink moves to a later time as resolution improves, from around 180M for the low resolution (red, dashed line) to around 400M to the high resolution (red, solid line). However, the transient dominates the first overtone for at least the first 250M of evolution, during which the mid and high resolution agree. | ブラックホールリングダウン分析の複雑さは、擬音波モードの完全な直交基底関数セットが存在しないという欠如によって増幅されます。 2つの方法を用いて、数値シミュレーションで擬音波モードを抽出することを提案し、初期の遷移と非線形効果が、リングダウン信号にフィットするのに、過学習のリスクが残ることをqualitatively studyします。 1つの方法では、一定の時間ヒュスペフンスにおける空間的関数形を使用して擬音波モードを正確にフィットさせます。 もう1つの方法は、空間と時間的な側面を両方利用して擬音波モードを捉えます。両方のフィッティング方法では、擬音波 sugli関数子の空間的振る舞いを利用して精度を向上させ、非線形時間フィッティング技術を上回ることを示しました。また、擬音波 sugli関数形が直交する基底系に成り立つことを |
2309.15675 | SJTU-TMQA: A quality assessment database for static mesh with texture
map | In recent years, static meshes with texture maps have become one of the most
prevalent digital representations of 3D shapes in various applications, such as
animation, gaming, medical imaging, and cultural heritage applications.
However, little research has been done on the quality assessment of textured
meshes, which hinders the development of quality-oriented applications, such as
mesh compression and enhancement. In this paper, we create a large-scale
textured mesh quality assessment database, namely SJTU-TMQA, which includes 21
reference meshes and 945 distorted samples. The meshes are rendered into
processed video sequences and then conduct subjective experiments to obtain
mean opinion scores (MOS). The diversity of content and accuracy of MOS has
been shown to validate its heterogeneity and reliability. The impact of various
types of distortion on human perception is demonstrated. 13 state-of-the-art
objective metrics are evaluated on SJTU-TMQA. The results report the highest
correlation of around 0.6, indicating the need for more effective objective
metrics. The SJTU-TMQA is available at https://ccccby.github.io | Bingyang Cui, Qi Yang, Kaifa Yang, Yiling Xu, Xiaozhong Xu, Shan Liu | 2023-09-27T14:18:04 | http://arxiv.org/abs/2309.15675v1 | # SJTU-TMQA: A Quality Assessment Database for Static Mesh with Texture Map
###### Abstract
In recent years, static meshes with texture maps have become one of the most prevalent digital representations of 3D shapes in various applications, such as animation, gaming, medical imaging, and cultural heritage applications. However, little research has been done on the quality assessment of textured meshes, which hinders the development of quality-oriented applications, such as mesh compression and enhancement. In this paper, we create a large-scale textured mesh quality assessment database, namely SJTU-TMQA, which includes 21 reference meshes and 945 distorted samples. The meshes are rendered into processed video sequences and then conduct subjective experiments to obtain mean opinion scores (MOS). The diversity of content and accuracy of MOS has been shown to validate its heterogeneity and reliability. The impact of various types of distortion on human perception is demonstrated. 13 state-of-the-art objective metrics are evaluated on SJTU-TMQA. The results report the highest correlation of around 0.6, indicating the need for more effective objective metrics. The SJTU-TMQA is available at [https://ccccby.github.io](https://ccccby.github.io)
Bingyang Cui\({}^{\star}\) Qi Yang\({}^{\dagger}\) Kaifa Yang\({}^{\star}\) Yiling Xu\({}^{\star}\) Xiaozhong Xu\({}^{\dagger}\) Shan Liu\({}^{\dagger}\)\({}^{\star}\) Cooperative Medianet Innovation Center, Shanghai Jiaotong University
\({}^{\dagger}\)Media Lab, Tencent
3D textured mesh, quality assessment, human visual system, database
## 1 Introduction
With the technological advancement of computer graphics and the development of rendering technologies, 3D static meshes with texture maps are widely applied in many areas due to their effectiveness in representing 3D objects or scenes. A typical 3D textured mesh contains a number of faces with 3D points as vertices; each face is textured with a texture map indicated by texture coordinates. For brevity, we use textured mesh to indicate a static mesh with a texture map. The quality of textured meshes is important for human perception-oriented applications, such as immersive gaming, animation, and digital museums. However, 3D textured meshes have a large volume of data. They require effective compression and transmission algorithms before practical use, during which different types of distortion might be introduced that degrade subjective perceived quality. To optimize textured mesh processing algorithms with respect to quality of experience, mesh quality assessment (MQA) has become a hotspot of recent research [1, 2, 3].
MQA includes two aspects: subjective and objective quality assessment. Subjective quality assessment is the most reliable method, but it requires inviting subjects to evaluate the perceptual quality of distorted meshes in strictly controlled testing environments. Objective quality assessment aims to develop objective metrics that correlate highly with human perceptual quality, replacing subjective experiments in practical and real-time applications to reduce the cost in time, human resources, and money. Therefore, to design effective objective quality metrics and facilitate the application of textured meshes, subjective MQA needs to be fully studied, and a database containing diverse mesh contents, rich distortion types, and reliable mean opinion scores (MOS) is needed.
Over the past years, some researchers have conducted studies on subjective MQA and established several databases. For example, [4, 5] focus on colorless meshes and mainly consider single distortion types, such as noise addition and lossy compression. [3] studies meshes with vertex color and releases a database with 480 distorted meshes under compression and simplification distortion. [1, 2] investigate textured meshes and propose superimposed distortion types, including mesh simplification/decimation, texture map downsampling, and coordinate quantization.
However, the aforementioned public databases have weaknesses, limiting their utilization in current studies. First, [3, 4, 5] are for colorless or vertex-color meshes, while meshes with texture maps are central to emerging immersive multimedia applications. Second, they are limited by their small scale [4, 5] or restricted range of distortion types [1, 2, 3], making them insufficient for a comprehensive MQA study.
To mitigate the above problems, in this paper we create a large-scale textured mesh database containing rich content and multiple types of distortion, called SJTU-TMQA. 21 reference meshes are selected from different categories, including human figures, inanimate objects, animals, and plants. Eight types of distortion, six single distortion types and two superimposed distortion types, are injected into each reference mesh at different distortion levels, leading to 945 distorted
meshes. The distorted meshes are rendered into processed video sequences (PVS) with a predefined camera path, and 73 viewers aged 18 to 30 are recruited to perform subjective experiments in a lab environment. The diversity of source content, the accuracy of the MOS, and the influence of different types of distortion are demonstrated. 13 state-of-the-art (SOTA) objective metrics are tested on SJTU-TMQA. The best results report correlations of around 0.60, indicating that the proposed SJTU-TMQA is a challenging database and can serve as a catalyst for the study of more effective objective metrics.
## 2 Database Construction
In this section, we detail the construction of SJTU-TMQA, including source mesh selection, distortion generation, PVS generation, training and rating session, and outlier removal.
### Source mesh selection and preprocessing
To better study the perceived subjective quality of textured meshes, 21 high-quality source meshes are carefully selected from SketchFab1. These meshes encompass a diverse array of categories, including human figures, inanimate objects, animals, and plants. Fig. 1 illustrates snapshots of the source content. The PymeshLab2 library is used to remove redundant and invalid information (e.g., unreferenced vertices and null faces) from the reference meshes, as proposed in [6].
Footnote 1: [https://sketchfah.com/features/free-3d-models](https://sketchfah.com/features/free-3d-models)
Footnote 2: [https://github.com/cnr-isti-vclab/PyMeshLab](https://github.com/cnr-isti-vclab/PyMeshLab)
### Distortion generation
To simulate various types of distortion resulting from acquisition noise, resampling, compression, and other factors, 8 different distortion types are introduced and detailed as follows:
\(\bullet\)**Downsampling (DS)**: DS is applied to the texture map of the textured mesh. The "Image.LANCZOS" low-pass filter offered by the PIL library3 is used to resize the texture map to 45%, 35%, 25%, 15%, and 5% of the original resolution (a minimal sketch of this operation is given after this list).
Footnote 3: [https://github.com/python-pillow/Pillow](https://github.com/python-pillow/Pillow)
\(\bullet\)**Gaussian noise (GN)**: GN is applied to the vertex coordinates of the textured mesh. All vertices of the reference meshes are perturbed with a random Gaussian-distributed geometry shift whose magnitude is 0.5%, 1.0%, 1.5%, 2.0%, or 2.5% of the minimum dimension of the bounding box (see the sketch after this list).
\(\bullet\)**Texture map compression (TMC)**: TMC is applied to the texture map of the textured mesh. We use the "imwrite('jpg', 'Quality')" compression function offered by Matlab software, which is based on the libjpeg library4, with the following quality parameters: 24, 20, 16, 12, 8, and 4.
Footnote 4: [https://jpeg.org/jpeg/software.html](https://jpeg.org/jpeg/software.html)
\(\bullet\)**Quantization Position (QP)**: QP is applied to the vertex coordinates of the textured mesh. Draco5 is used to perform uniform quantization with bits set to 7, 8, 9, 10, and 11.
Footnote 5: [https://github.com/google/draco](https://github.com/google/draco)
\(\bullet\)**Simplification without texture (SOT)**: SOT is applied to the faces of the mesh sample, in which the number of vertices is reduced and consequently leads to larger face sizes. Iterative edge collapse and a quadric error metric (QEM) [7] are used to perform simplification and reduce the number of faces by 10%, 25%, 40%, and 55% compared to source meshes.
\(\bullet\)**Simplification with texture (SWT)**: SWT is also applied to the faces of the mesh sample, but the texture information is injected to guide the QEM simplification results. We uniformly reduce the number of faces by 20%, 35%, 50%, 65%, and 80% compared to source meshes.
\(\bullet\)**Mixed Quantization (MQ)**: MQ is a superimposed distortion that applies QP and QT (texture coordinate quantization in Draco) at the same time. We carefully set the parameter pairs (QP bits, QT bits) to (12, 12), (11, 12), (10, 12), (9, 11), (8, 10), and (7, 8).
\(\bullet\)**Geometry and Texture map Compression (GTC)**: GTC is a superimposed distortion that combines MQ and TMC. We select three distortion levels from MQ ((11, 12), (9, 11), and (7, 8)) and three from TMC (20, 12, and 4), and pair them, leading to 3\(\times\)3 = 9 distorted meshes.
In all, we obtain \(21\times(5+5+6+5+4+5+6+9)=945\) distorted meshes.
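As a concrete illustration of the distortion pipeline, a minimal sketch of the GN perturbation described above is given here. The use of the `trimesh` library and the interpretation of the noise level as a standard deviation relative to the bounding box are assumptions made for illustration; the paper does not prescribe a specific geometry library for this step.

```
# Hypothetical sketch of the Gaussian-noise (GN) distortion (level 2.5% shown).
import numpy as np
import trimesh

def add_gaussian_noise(mesh: trimesh.Trimesh, level: float = 0.025) -> trimesh.Trimesh:
    """Shift every vertex by zero-mean Gaussian noise whose standard deviation
    is `level` times the minimum dimension of the bounding box (assumption)."""
    sigma = level * mesh.bounding_box.extents.min()
    noisy = mesh.copy()
    noisy.vertices += np.random.normal(0.0, sigma, size=noisy.vertices.shape)
    return noisy
```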
### PVS generation
To perform subjective experiments, each distorted mesh is rendered to a PVS with 1920x1080 resolution and 30 fps, using a pre-defined camera path: the camera rotates around the \(z\) axis with a rotation step of \(0.75^{\circ}\) per frame, and the rotation radius is equal to the mesh's maximum bounding box dimension. A complete rotation (\(360^{\circ}\)) around the mesh results in 495 frame images captured by OpenGL. Then, we group the images into PVSs using FFMPEG with libx265, with the constant rate factor set to 10 to ensure visually lossless encoding [8]. Each PVS has a duration of 16 seconds.
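For illustration, a minimal sketch of the encoding step is given below. The frame file pattern and output name are hypothetical placeholders, while the frame rate, codec, and constant rate factor follow the description above.

```
# Hypothetical sketch: group rendered frames into one PVS with FFMPEG/libx265.
import subprocess

def encode_pvs(frame_pattern: str = "frame_%04d.png", out_file: str = "pvs.mp4") -> None:
    cmd = [
        "ffmpeg", "-y",
        "-framerate", "30",     # 30 fps, as described above
        "-i", frame_pattern,    # the captured frame images
        "-c:v", "libx265",
        "-crf", "10",           # visually lossless encoding [8]
        "-pix_fmt", "yuv420p",
        out_file,
    ]
    subprocess.run(cmd, check=True)
```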
### Training and rating session
To ensure the reliability of the collected subjective scores, we use "bench", shown in Fig. 1, to generate a training session with the same method as [1]. In the rating session, a double stimulus impairment scale method is used, and the 11-level impairment scale proposed by ITU-T P.910 [9] is used as the voting method. The subjective experiment is conducted on a 27-inch AOC Q2790PQ monitor with resolution 2560\(\times\)1440 in an indoor lab environment under normal lighting conditions. The display resolution is adjusted to 1920\(\times\)1080 to ensure consistency with the PVSs. To avoid visual fatigue caused by an overly long experiment time, we randomly divide the 945 PVSs into 21 subgroups.

Figure 1: The 3D graphic source model of our database
### Outlier removal
Two consecutive steps are adopted to remove outliers from the raw subjective scores. First, each rating session additionally contains an extremely low-quality PVS and a duplicated PVS, known as "trapping samples". After collecting subjective scores, we first remove outliers according to the trapping results. Second, ITU-R BT.500 [10] is used to detect and remove outliers again. Finally, three outliers are identified and removed from the raw subjective scores.
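A simplified, hypothetical sketch of such a two-step screening is given below. The trapping threshold and rejection fraction are illustrative placeholders only; the actual study follows the full ITU-R BT.500 procedure, which additionally adapts its thresholds to the kurtosis of the score distribution.

```
# Simplified stand-in for the two-step outlier screening described above.
# `scores` is a (num_subjects x num_PVS) array of raw ratings on the 11-level scale.
import numpy as np

def screen_subjects(scores, trap_idx, trap_max=2, z_thresh=2.0, max_frac=0.2):
    mu = scores.mean(axis=0)
    sd = scores.std(axis=0) + 1e-9
    kept = []
    for s in range(scores.shape[0]):
        if scores[s, trap_idx] > trap_max:       # rated the low-quality trap too high
            continue
        deviating = np.abs(scores[s] - mu) > z_thresh * sd
        if deviating.mean() <= max_frac:         # subject is not a statistical outlier
            kept.append(s)
    return scores[kept]
```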
## 3 Database Analysis
In this section, the diversity of the content in SJTU-TMQA is first demonstrated; then, the subjective experiment results are analyzed to show the reliability of the MOS.
### Diversity of SJTU-TMQA content
Geometry and color complexities are used to validate the diversity of content; they are quantified by spatial perceptual information (SI) [9] and the color metric (CM) [11], respectively. We use the depth and color images obtained by projecting each reference mesh onto the six views of its bounding box [12] to calculate its SI and CM. The maximum SI and CM values are selected to illustrate the scatter plot of geometry complexity vs. color complexity in Fig. 2(a). The relatively uniform distribution of the scatter points indicates the diversity of the SJTU-TMQA content.
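For reference, SI follows the ITU-T P.910 definition (the standard deviation of a Sobel-filtered image); a minimal sketch for a single projected image is given below. Aggregating the maximum over the six bounding-box projections, and computing the CM of [11] on the colour projections, follow the same pattern.

```
# Sketch of the SI computation on one projected (depth) image.
import numpy as np
from scipy import ndimage

def spatial_information(img: np.ndarray) -> float:
    img = img.astype(float)
    gx = ndimage.sobel(img, axis=0)
    gy = ndimage.sobel(img, axis=1)
    return float(np.std(np.hypot(gx, gy)))   # std of the Sobel gradient magnitude
```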
### Analysis of MOS
Fig. 2(b) reports the MOS distribution of SJTU-TMQA. For each score segment, SJTU-TMQA has at least 100 distorted meshes, indicating that SJTU-TMQA covers a wide range of quality scores.
To prove the accuracy of MOS and analyze the impact of different distortion on subjective perception, MOS vs. distortion parameter plots of four meshes which belong to different types of content (i.e., deadRose, elena, fruitSet, and hawk), are shown in Fig. 3. Except for QP, most of the curves of DS, GN, TMC, SOT, and SWT showcase perfect monotonicity, which proves the accuracy of the MOS. For QP, except for "elena", the other three meshes present limited MOS variations. We think the reasons are: first, the influence of QP can be masked by mesh texture; and second, "elena" belongs to the human figure and human observers are particularly sensitive to facial features that are known as salient areas [8]. Minor distortion in these areas can easily be detected and reflected via MOS variation.
## 4 Objective Metrics Testing
Four types of objective metrics are tested on SJTU-TMQA: image-based, point-based, video-based, and model-based metrics. Image-based metrics, proposed by [13], use 16 projected images of the meshes to quantify quality. Two image-based quality metrics (Geo_PSNR and RGB_PSNR) are tested. Point-based metrics first use sampling to convert meshes into point clouds, and then measure quality using point cloud objective metrics. Four point-based metrics (D1 [14], D2 [15], YUV_PSNR, and PCQM_PSNR [16]) are tested. Grid sampling with a grid resolution of 1024 is used to sample meshes into point clouds as proposed in [13]. Video-based metrics use the PVSs viewed in the subjective experiment as input, and then image/video quality metrics are applied to predict mesh quality. Three video-based metrics (PSNR, SSIM [17], VMAF [18]) are calculated. Model-based metrics directly use the raw data of the mesh to assess quality. Four model-based metrics (Hausdorff distance (HD) [19], GL2 [20], MSDM2 [21], and TPDM [22]) are tested.
Figure 3: MOS vs. Distortion Parameters (DP).
Figure 2: (a): Geometry vs. Color complexity; (b): MOS distribution of SJTU-TMQA
### Performance of metrics
To ensure consistency between the objective scores of the various metrics and the MOS, a five-parameter logistic fitting function proposed by the Video Quality Experts Group (VQEG) [23] is used to map the dynamic range of each metric's scores to a common scale. Two indicators commonly used in the quality assessment community are reported to quantify the performance of the metrics: the Pearson linear correlation coefficient (PLCC) for prediction accuracy and the Spearman rank-order correlation coefficient (SRCC) for prediction monotonicity.
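A minimal sketch of this evaluation protocol is given below. The particular five-parameter logistic form is one commonly used VQEG variant and is an assumption here; the initial parameter guesses are illustrative.

```
# Sketch: logistic mapping of objective scores to the MOS scale, then PLCC/SRCC.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr

def logistic5(x, b1, b2, b3, b4, b5):
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (x - b3)))) + b4 * x + b5

def evaluate_metric(objective, mos):
    p0 = [np.max(mos), 1.0, np.mean(objective), 0.1, np.mean(mos)]
    params, _ = curve_fit(logistic5, objective, mos, p0=p0, maxfev=20000)
    mapped = logistic5(np.asarray(objective), *params)
    plcc, _ = pearsonr(mapped, mos)           # prediction accuracy
    srcc, _ = spearmanr(objective, mos)       # prediction monotonicity
    return plcc, srcc
```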
### Correlation of metric
The results of the metrics on the entire database are shown in the "All" columns of Table 1. YUV_PSNR reports the best performance, followed by RGB_PSNR, PCQM_PSNR, and VMAF. Fig. 4 shows the scatter plots of two metrics, in which the yellow lines represent the best-fitted curves. We observe that the scatter plot of YUV_PSNR is clearly better than that of VMAF, with scatter points closer to the best-fit line. YUV_PSNR tends to give low scores for GN samples, while VMAF leans towards reporting high scores for QP and TMC.
The best overall correlations are below 0.6, which is far from the expectation that a robust metric should present a correlation at least above 0.80, indicating that SJTU-TMQA is a challenging database. Geo_PSNR, D1, D2, and all model-based metrics show extremely low performance. The reason is that they only consider geometric features, while some samples in SJTU-TMQA are lossless with regard to geometry information, such as DS and TMC.
### Analysis by type of distortion
For an in-depth analysis, the SRCC results for different types of distortion are reported in the "Distortion" columns of Table 1. '-' means that the results of the metric for samples with this kind of distortion are meaningless. VMAF presents good performance on DS distortion, where it reports a correlation of around 0.85. TPDM shows the best performance on GN and SWT with SRCC = 0.77 and 0.80, respectively. VMAF again exhibits the best performance on TMC, but the correlation is only 0.65. D1 and D2 showcase the best results on QP and MQ with SRCC around 0.75 and 0.80, indicating that D1 and D2 might be good at predicting quantization distortion. PCQM_PSNR reports a correlation of around 0.70 on SOT, which is clearly better than most metrics. GTC is the most challenging type of distortion, for which no metric reports a correlation higher than 0.6.
### Weakness of SOTA metrics
The highest correlation of the SOTA metrics is only around 0.6, revealing that they have weaknesses, which are summarized as follows. For image- and video-based metrics, one weakness is that projection might cause information loss [12] and mask the original mesh distortion. Furthermore, their performance is influenced by background information, which causes unstable score magnitudes for different types of content [1]. For point-based metrics, the performance is closely related to the mesh sampling method: for the same mesh, different sampling methods and sampling resolutions can generate point clouds with obviously different perceptual quality, and consequently incur unstable metric performance [24]. For model-based metrics, most of them do not consider color attributes and cannot deal with distortions that leave the geometry unchanged. Besides, they have strict requirements for the tested meshes, such as the same connectivity or the same vertex density between reference and distorted meshes [13].
## 5 Conclusion
In this paper, we create a large-scale textured mesh database called SJTU-TMQA which consists of 21 static textured meshes with diverse contents, rich distortion types, and accurate MOS. The relationship between MOS and distortion is analyzed, and four types of SOTA objective metrics are evaluated based on SJTU-TMQA. The results demonstrate that human perception is influenced by content characteristics and distortion types, and the best metric only achieves a correlation of around 0.60. This database can serve as a benchmark for objective metrics testing, providing opportunities for further metric research.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline
\multicolumn{2}{|c|}{Index} & \multicolumn{2}{c|}{All} & \multicolumn{8}{c|}{Distortion} \\ \hline
Type & Metric & PLCC & SRCC & DS & GN & TMC & QP & SWT & SOT & MQ & GTC \\ \hline
\multirow{2}{*}{A} & Geo\_PSNR & 0.16 & 0.09 & -0.41 & -0.62 & 0.46 & 0.30 & 0.69 & 0.20 \\ \cline{2-12}
 & RGB\_PSNR & 0.55 & 0.58 & 0.74 & 0.55 & 0.62 & 0.64 & 0.61 & 0.63 & 0.67 & 0.57 \\ \hline
\multirow{3}{*}{B} & D1 & 0.05 & 0.13 & - & -0.43 & -0.75 & 0.64 & 0.40 & 0.78 & 0.30 \\ \cline{2-12}
 & YUV\_PSNR & **0.59** & **0.65** & 0.78 & 0.37 & 0.64 & 0.66 & 0.49 & 0.68 & 0.67 & **0.59** \\ \cline{2-12}
 & PCQM\_PSNR & 0.48 & 0.55 & 0.78 & 0.29 & 0.61 & 0.68 & 0.52 & **0.70** & 0.69 & 0.43 \\ \hline
\multirow{3}{*}{C} & PSNR & 0.40 & 0.44 & 0.73 & 0.88 & 0.46 & 0.66 & 0.51 & 0.48 & 0.65 \\ \cline{2-12}
 & SSIM & 0.33 & 0.48 & 0.49 & 0.03 & 0.50 & 0.61 & 0.40 & 0.47 & 0.67 & 0.01 \\ \cline{2-12}
 & VMAF & 0.48 & 0.53 & **0.85** & 0.48 & **0.65** & 0.73 & 0.60 & 0.48 & 0.76 & 0.24 \\ \hline
\multirow{4}{*}{D} & HD & 0.14 & 0.06 & -0.13 & -0.12 & 0.18 & 0.14 & 0.24 & 0.09 \\ \cline{2-12}
 & GL2 & 0.06 & 0.08 & -0.01 & -0.11 & 0.14 & 0.13 & 0.23 & 0.08 \\ \cline{2-12}
 & MSDM2 & 0.12 & 0.05 & - & 0.36 & -0.49 & 0.64 & 0.05 & 0.51 & 0.17 \\ \cline{2-12}
 & TPDM & 0.15 & 0.10 & - & **0.77** & - & 0.60 & **0.80** & 0.65 & 0.61 & 0.28 \\ \hline
\end{tabular}
\end{table}
Table 1: Metric performance on SJTU-TMQA
Figure 4: Scatter plot of objective metrics vs. MOS. | 近年、テクスチャマップ付き静的メッシュは、3D形状のデジタル表現として様々なアプリケーションで広く普及しており、アニメーション、ゲーム、医療画像、文化遺産アプリケーションなどにおいて、その役割は重要となっています。しかし、テクスチャメッシュの品質評価に対する研究は限られており、品質重視のアプリケーション開発を阻害しています。この論文では、SJTU-TMQAという大規模なテクスチャメッシュ品質評価データベースを作成しました。これは21の基準メッシュと945の歪みサンプルを含みます。メッシュはレンダリングされ、加工されたビデオシーケンスとして作成され、主観的な実験を実施することで、平均意見スコア(MOS)を取得しました。コンテンツの多様性とMOSの精度が、その多様性と信頼性を検証しています。様々な種類の歪みの人間の認知への影響を示しています。SJTU-TMQAには、13の最新の客観的な評価指標が |
2309.11413 | Enhancing motion trajectory segmentation of rigid bodies using a novel
screw-based trajectory-shape representation | Trajectory segmentation refers to dividing a trajectory into meaningful
consecutive sub-trajectories. This paper focuses on trajectory segmentation for
3D rigid-body motions. Most segmentation approaches in the literature represent
the body's trajectory as a point trajectory, considering only its translation
and neglecting its rotation. We propose a novel trajectory representation for
rigid-body motions that incorporates both translation and rotation, and
additionally exhibits several invariant properties. This representation
consists of a geometric progress rate and a third-order trajectory-shape
descriptor. Concepts from screw theory were used to make this representation
time-invariant and also invariant to the choice of body reference point. This
new representation is validated for a self-supervised segmentation approach,
both in simulation and using real recordings of human-demonstrated pouring
motions. The results show a more robust detection of consecutive submotions
with distinct features and a more consistent segmentation compared to
conventional representations. We believe that other existing segmentation
methods may benefit from using this trajectory representation to improve their
invariance. | Arno Verduyn, Maxim Vochten, Joris De Schutter | 2023-09-20T15:40:22 | http://arxiv.org/abs/2309.11413v1 | # Enhancing motion trajectory segmentation of rigid bodies
###### Abstract
Trajectory segmentation refers to dividing a trajectory into meaningful consecutive sub-trajectories. This paper focuses on trajectory segmentation for 3D rigid-body motions. Most segmentation approaches in the literature represent the body's trajectory as a point trajectory, considering only its translation and neglecting its rotation. We propose a novel trajectory representation for rigid-body motions that incorporates both translation and rotation, and additionally exhibits several invariant properties. This representation consists of a geometric progress rate and a third-order trajectory-shape descriptor. Concepts from screw theory were used to make this representation time-invariant and also invariant to the choice of body reference point. This new representation is validated for a self-supervised segmentation approach, both in simulation and using real recordings of human-demonstrated pouring motions. The results show a more robust detection of consecutive sub-motions with distinct features and a more consistent segmentation compared to conventional representations. We believe that other existing segmentation methods may benefit from using this trajectory representation to improve their invariance.
## I Introduction
Trajectory segmentation aims to divide a trajectory into consecutive sub-trajectories that have a specific meaning for the considered application. It is a valuable tool to reduce the dimensionality of trajectories, thereby improving the efficiency and reliability of trajectory recognition and learning algorithms. Trajectory segmentation has found applications in various research fields, including monitoring urban traffic [1], tracking changes in animal behavior [2], analyzing robot-assisted minimally invasive surgical procedures [3], and learning motion primitives for robots from human-demonstrated object manipulation tasks [4, 5, 6, 7].
In the literature, three types of segmentation approaches can be distinguished: supervised segmentation, unsupervised segmentation, and self-supervised segmentation.
_Supervised segmentation_ algorithms rely on prior expert knowledge, such as a predefined library of template segments, such as in [8], or a set of predefined segmentation criteria. The approaches in [1, 9] segment trajectories based on criteria that assess the homogeneity of movement profiles, including the location, speed and heading of the object.
_Unsupervised segmentation_ algorithms do not rely on prior expert knowledge. These approaches are typically event-based, searching for discrete segmentation points along the trajectory. The approach in [5] identifies segmentation points by fitting a Gaussian Mixture Model to the trajectory data and identifying regions of overlap between the confidence ellipsoids of consecutive Gaussians. Another approach involves extracting segmentation points from local extrema in curvature and torsion along the trajectory, such as in [6].
_Self-supervised segmentation_ can be viewed as a supervised approach in which the segmentation criteria are learned from the data rather than being predetermined by expert knowledge. The segmentation approach in [10] detects transitions in movement profiles and labels them as 'features'. Similar features are then clustered together, and subsequently, the trajectory is segmented into a sequence of cluster labels. The approach in [7] initially segments trajectories in an unsupervised way. It then learns a library of motion primitives from the segmented data, which is subsequently used to improve the initial segmentation accuracy.
The literature on trajectory segmentation has two main shortcomings. The first shortcoming is that the object is typically approximated as a moving point, considering only the translational motion while neglecting the rotational motion. Hence, not all trajectory information is taken into account.
The second shortcoming is that supervised and self-supervised segmentation approaches based on template matching typically are dependent on the motion profile of the trajectory and the choice of references. These references include both the coordinate frame in which the trajectory coordinates are expressed and the reference point on the body being tracked. This dependency limits the approach's capability to generalize across different setups.
The main objective of this work is to enhance template-based approaches in supervised and self-supervised trajectory segmentation by incorporating invariance with respect to time, coordinate frame, and body reference point.
Our approach consists of first reparameterizing the trajectory to achieve time-invariance, i.e. invariance to changes in motion profile. This is achieved by defining a novel geometric progress rate using Screw Theory, inspired by [11]. The progress rate combines both rotation and translation, while also being invariant to the chosen body reference point. The defined progress rate is regularized to better cope with pure translations compared to [11]. After reparameterization, the trajectory is represented using a novel trajectory-shape descriptor based on first-order kinematics of the trajectory. Finally, invariance to changes in the coordinate frame is achieved by performing a spatial alignment before matching the trajectory descriptor with template segments in the library. The latter template segments are also represented using the same trajectory-shape descriptor.
The contribution of this work is (1) the introduction of a novel _screw-based geometric progress rate_ for rigid-body trajectories, and (2) a novel _trajectory-shape descriptor_. These concepts were applied in a self-supervised segmentation approach using simulations and real recordings of human-demonstrated pouring motions. The results show that more consistent segmentation results can be achieved by using an invariant trajectory descriptor compared to conventional descriptors that are not invariant to changes in execution speed, coordinate frame, and reference point.
The outline of the paper is as follows. Section II reviews essential background on rigid-body trajectories and screw theory. Section III introduces the novel screw-based geometric progress rate and trajectory-shape descriptor for rigid-body trajectories. Section IV explains the implementation of these novel concepts in a self-supervised segmentation approach. Section V explains the validation of the segmentation approach using simulations and real recordings of human-demonstrated pouring motions. Section VI concludes with a discussion and suggests future work.
## II Preliminaries
The displacement of a rigid body in 3D space is commonly represented by attaching a body frame to the rigid body and expressing the position \(\mathbf{p}\) and orientation \(R\) of this frame with respect to a fixed world frame. The position coordinates \(\mathbf{p}\) represent the relative position of the origin of the body frame with respect to the origin of the world frame. The rotation matrix \(R\), which is part of the Special Orthogonal group SO(3), represents the relative orientation of the body frame with respect to the world frame1. The position \(\mathbf{p}\) and orientation \(R\) of the rigid body can be combined into the homogeneous transformation matrix \(T=\begin{bmatrix}R&\mathbf{p}\\ \mathbf{0}&1\end{bmatrix}\), which is part of the Special Euclidean group \(SE(3)\). The homogeneous transformation matrix as a function of time \(T(t)\) represents a temporal rigid-body trajectory.
Footnote 1: Note that other possibilities exist for representing the orientation such as the quaternion or the axis-angle representation.
The first-order kinematics of the rigid-body trajectory \(T(t)\) are commonly represented by a 6D twist \(\mathbf{\ell}\) consisting of two 3D vectors: a rotational velocity vector \(\mathbf{\omega}\) and a linear velocity vector \(\mathbf{v}\). Based on the coordinate frame in which these velocities are expressed and based on the reference point for the linear velocity \(\mathbf{v}\), three twists are commonly defined in the literature [12, 13], i.e. the _pose twist_, _spatial twist_, and _body twist_. A twist is considered left- or right-invariant when it is invariant to changes of the world or body frame, respectively. The _body twist_ is left-invariant, the _spatial twist_ is right-invariant, and the _pose twist_ is neither left- nor right-invariant.
In rigid-body kinematics, the Mozzi-Chasles' theorem [14, 15] states that a rigid-body velocity can always be represented as a rotation about an axis in space and a translation parallel to this axis, referred to as the _Instantaneous Screw Axis_ (ISA). The direction of the ISA is uniquely defined by the direction of the rotational velocity \(\mathbf{\omega}\), while the location of the ISA is defined by any point on the ISA. A unique choice is to take the point on the ISA that lies closest to the reference point for the linear velocity \(\mathbf{v}\). This point's position vector \(\mathbf{p}_{\perp}\) as well as the translational velocity parallel to the ISA, \(\mathbf{\nu}\), are calculated from the twist components \(\mathbf{\omega}\) and \(\mathbf{v}\):
\[\mathbf{p}_{\perp}=\frac{\mathbf{\omega}\times\mathbf{v}}{\|\mathbf{\omega}\|^{2}}\quad\text{ and }\quad\mathbf{\nu}=\mathbf{v}+\mathbf{\omega}\times\mathbf{p}_{\perp}. \tag{1}\]
When the _pose twist_ is chosen, the coordinates of \(\mathbf{p}_{\perp}\) and \(\mathbf{\nu}\) are in the world frame, while \(\mathbf{p}_{\perp}\) is the position vector from the body frame's origin to the closest point on the ISA.
The magnitudes of the rotational and translational velocities \(\|\mathbf{\omega}\|\) and \(\|\mathbf{\nu}\|\) are closely related to the 'SE(3) invariants' in [16] and it has been shown that they are both left- and right-invariant, also referred to as _bi-invariant_.
## III Screw-based trajectory representation
This section introduces a novel screw-based geometric progress rate for defining the progress over a trajectory, independent of time. Next, a new invariant trajectory-shape descriptor is introduced for rigid-body trajectories, which is both time-invariant and invariant to the choice of the reference point on the body. Invariance to changes in coordinate frame is obtained by proposing a spatial alignment algorithm.
### _Screw-based geometric progress rate_
To obtain a time-invariant representation of the trajectory, a geometric progress rate has to be defined first. We propose to combine the magnitude of the rotational velocity \(\|\mathbf{\omega}\|\) and the translational velocity \(\|\mathbf{\nu}\|\) parallel to the ISA:
\[\dot{s}=\sqrt{L^{2}\|\mathbf{\omega}\|^{2}+\|\mathbf{\nu}\|^{2}}\, \tag{2}\]
where \(L\) is a weighting factor with units [m]. The progress rate \(\dot{s}\) [m/s] can be interpreted as the magnitude of the linear velocity of any point on the body, at a distance \(L\) from the ISA (see Fig. 1). The progress parameter \(s\) [m] is then found as the integral of the progress rate over time, and can be considered as a scalar value signifying the geometric progress in translation and rotation over the trajectory.
Since \(\|\mathbf{\omega}\|\) and \(\|\mathbf{\nu}\|\) are bi-invariant properties of the trajectory, as mentioned in Section II, the resulting geometric progress rate \(\dot{s}\) is also bi-invariant. However, it is important to clarify that the proposed progress rate does not comply with the conditions for being a bi-invariant metric on \(se(3)\)[17]. For example, consider a serial kinematic chain of two twist motions \(\mathbf{\ell}_{1}\) and \(\mathbf{\ell}_{2}\) generating a resulting motion \(\mathbf{\ell}_{1+2}\). Then, the triangle inequality \(\dot{s}_{1+2}\leq\dot{s}_{1}+\dot{s}_{2}\) (a necessary condition for being a metric) does not hold for all cases on \(se(3)\). Consider the example of a pure translation \(\dot{s}_{1+2}=\|\mathbf{\nu}\|\). This translation can be generated by a couple of rotations with magnitude \(\|\mathbf{\omega}\|=\frac{\|\mathbf{\nu}\|}{2a}\), where \(a\) is half the distance between the two rotation axes, as shown in Fig. 2. Since the sum of the progress rates of these rotations equals:
\[\dot{s}_{1}+\dot{s}_{2}=2L\|\mathbf{\omega}\|=\frac{L}{a}\|\mathbf{\nu}\|, \tag{3}\]
it can be concluded that \(\dot{s}_{1}+\dot{s}_{2}<\dot{s}_{1+2}\), if \(L<a\). Since \(L\) is a predefined weighting factor and \(a\) can take any value in \(\mathbb{R}^{+}\) (because \(\mathbf{\omega}\) can always be decreased by increasing \(a\) while still resulting in the same \(\mathbf{\nu}\)), there is no choice for \(L\) for which the triangle inequality holds for each value of \(a\).
### _Regularized geometric progress rate_
The calculation of \(\mathbf{p}_{\perp}\) using (1) is degenerate for pure translations, i.e. when \(\|\mathbf{\omega}\|=0\). Because of this, approaching a pure translation can result in \(\|\mathbf{p}_{\perp}\|\) approaching infinity:
\[\text{if}\quad\lim_{\|\mathbf{\omega}\|\to 0}\frac{\mathbf{\omega}\times\mathbf{v}}{\|\mathbf{ \omega}\|}\neq\mathbf{0}\ \,\ \ \text{then}\quad\lim_{\|\mathbf{\omega}\|\to 0}\|\mathbf{p}_{\perp}\|=\infty. \tag{4}\]
To ensure that the calculated translational velocity of the rigid body and corresponding progress rate remain well-defined in these degenerate cases, a regularized translational velocity \(\mathbf{\tilde{\nu}}\) is introduced, such that:
\[\dot{s}=\sqrt{L^{2}\|\mathbf{\omega}\|^{2}+\|\mathbf{\tilde{\nu}}\|^{2}}\quad\text{ with}\quad\mathbf{\tilde{\nu}}=\mathbf{v}+\mathbf{\omega}\times\mathbf{\tilde{p}}\, \tag{5}\]
where \(\mathbf{\tilde{p}}\) is a regularized version of \(\mathbf{p}_{\perp}\), defined as follows:
\[\mathbf{\tilde{p}}=\left\{\begin{array}{ll}\mathbf{p}_{\perp}&\text{if}\ \|\mathbf{p}_{ \perp}\|\leq b\\ b\frac{\mathbf{p}_{\perp}}{\|\mathbf{p}_{\perp}\|}&\text{if}\ \|\mathbf{p}_{\perp}\|>b\\ \mathbf{0}&\text{if}\ \|\mathbf{\omega}\|=0.\end{array}\right. \tag{6}\]
In other words, a sphere with radius \(b\) is defined with the origin of the body frame as its center, so that when the position \(\mathbf{p}_{\perp}\) is outside of the sphere, it will be projected onto the sphere's surface. This is also visualized in Fig. 3.
The regularization in (6) has two important properties. First, the profile of \(\mathbf{\tilde{\nu}}\) remains continuous over the threshold where \(\|\mathbf{p}_{\perp}\|=b\), since:
\[\lim_{\|\mathbf{p}_{\perp}\|\to b^{+}}\mathbf{\tilde{p}}\ =\lim_{\|\mathbf{p}_{\perp}\|\to b^{-}}\mathbf{ \tilde{p}}. \tag{7}\]
Second, the regularization is independent of the execution speed of the motion, since \(\mathbf{p}_{\perp}\) is a geometric property.
In the Appendix, it is shown that choosing \(b\leq L\) ensures that the triangle inequality \(\dot{s}_{1+2}\leq\dot{s}_{1}+\dot{s}_{2}\) holds within the bi-invariant region \(\|\mathbf{p}_{\perp}\|\leq b\), for the example discussed in Section III-A. As a result, we propose to choose \(b=L\), since this results in the largest region of bi-invariance of the progress rate \(\dot{s}\) without violating the triangle inequality.
Remark that when \(\|\mathbf{p}_{\perp}\|>b\), the regularized translational velocity \(\mathbf{\tilde{\nu}}\) becomes dependent on the location of the body reference point and hence loses its _bi-invariant_ property. This is not a big problem, since the object is dominantly translating in that case. Hence, the sensitivity of the velocity \(\mathbf{\tilde{\nu}}\) to the location of the body reference point remains limited.
Algorithm 1 contains pseudocode to calculate the regularized progress rate \(\dot{s}\) assuming \(b=L\).
```
Data: \(\mathbf{\omega}\in\mathbb{R}^{3}\), \(\mathbf{v}\in\mathbb{R}^{3}\), \(L\in\mathbb{R}\)
Result: \(\dot{s}\in\mathbb{R}\), \(\tilde{\mathbf{p}}\in\mathbb{R}^{3}\), \(\tilde{\mathbf{\nu}}\in\mathbb{R}^{3}\)

if \(\|\mathbf{\omega}\|=0\) then
    \(\tilde{\mathbf{p}}\leftarrow\mathbf{0}\) ;  \(\tilde{\mathbf{\nu}}\leftarrow\mathbf{v}\) ;
else
    \(\tilde{\mathbf{p}}\leftarrow(\mathbf{\omega}\times\mathbf{v})/(\mathbf{\omega}\cdot\mathbf{\omega})\) ;
    if \(\|\tilde{\mathbf{p}}\|>L\) then
        \(\tilde{\mathbf{p}}\leftarrow L\,\tilde{\mathbf{p}}/\|\tilde{\mathbf{p}}\|\) ;   // projection onto the sphere of radius \(b=L\), cf. (6)
    end if
    \(\tilde{\mathbf{\nu}}\leftarrow\mathbf{v}+\mathbf{\omega}\times\tilde{\mathbf{p}}\) ;
end if
\(\dot{s}\leftarrow\sqrt{L^{2}\,\mathbf{\omega}\cdot\mathbf{\omega}+\tilde{\mathbf{\nu}}\cdot\tilde{\mathbf{\nu}}}\) ;
```
**Algorithm 1** Calculation of the regularized progress rate \(\dot{s}\)
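A minimal, runnable counterpart of Algorithm 1 (assuming \(b=L\)) could look as follows; only NumPy is used.

```
# Sketch of Algorithm 1: regularized screw-based progress rate for one twist sample.
import numpy as np

def regularized_progress_rate(omega, v, L):
    """Return (s_dot, p_tilde, nu_tilde) for a pose twist (omega, v), cf. (5)-(6)."""
    omega = np.asarray(omega, dtype=float)
    v = np.asarray(v, dtype=float)
    w2 = omega @ omega
    if w2 == 0.0:                       # pure translation: ISA direction undefined
        p_tilde = np.zeros(3)
        nu_tilde = v
    else:
        p_tilde = np.cross(omega, v) / w2
        norm_p = np.linalg.norm(p_tilde)
        if norm_p > L:                  # project onto the sphere of radius b = L
            p_tilde = L * p_tilde / norm_p
        nu_tilde = v + np.cross(omega, p_tilde)
    s_dot = np.sqrt(L**2 * w2 + nu_tilde @ nu_tilde)
    return s_dot, p_tilde, nu_tilde
```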
### _Reparameterization to geometric domain_
The temporal rigid-body trajectory \(T(t)\) can now be reparameterized to a geometric trajectory \(T(s)=T(t(s))\) using the geometric progress rate \(\dot{s}\) defined in (5). As a result, the execution speed will be separated from the geometric path of the trajectory. Such a geometric trajectory \(T(s)\) is also referred to as a _unit-speed_ trajectory, since the geometric derivative \(s^{\prime}\left(=\frac{ds}{ds}\right)\) of the progress \(s\) along this trajectory is equal to one:
\[s^{\prime}=1=\sqrt{L^{2}\left\|\mathbf{\omega}(s)\right\|^{2}+\left\|\mathbf{\tilde{\nu }}(s)\right\|^{2}}. \tag{8}\]
In practice, for discrete measurement data, this reparameterization is performed in three steps. Firstly, the temporal pose twist \(\mathbf{\ell}\) is calculated from the temporal pose trajectory \(T(t)\) by numerical differentiation with the matrix logarithm operator [12]. Secondly, the progress rate \(\dot{s}\) is determined using Algorithm 1. The progress \(s\) along the discrete trajectory is then found by a cumulative sum on \(\dot{s}\). Thirdly, the trajectory \(T\) is reparameterized from time \(t\) to progress \(s\) with fixed progress step \(\Delta s\) using _Screw Linear Interpolation_[18].
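The three steps above can be sketched as follows. The sketch assumes the trajectory is given as uniformly time-sampled \(4\times 4\) pose matrices (NumPy arrays) and reuses `regularized_progress_rate` from the previous sketch; the resampling interpolates along \(T_{k}\exp\big(\alpha\log(T_{k}^{-1}T_{k+1})\big)\), which coincides with screw linear interpolation between consecutive samples.

```
# Sketch of the three-step reparameterization from time t to progress s.
import numpy as np
from scipy.linalg import expm, logm

def pose_twists(T, dt):
    """Step 1: discrete pose twists -- omega from the relative rotation (world frame),
    v as the velocity of the body frame's origin (world frame)."""
    twists = []
    for k in range(len(T) - 1):
        R0, R1 = T[k][:3, :3], T[k + 1][:3, :3]
        W = logm(R1 @ R0.T).real / dt                    # skew-symmetric [omega]_x
        omega = np.array([W[2, 1], W[0, 2], W[1, 0]])
        v = (T[k + 1][:3, 3] - T[k][:3, 3]) / dt
        twists.append((omega, v))
    return twists

def reparameterize(T, dt, L, ds):
    """Steps 2-3: cumulative geometric progress, then resampling at fixed ds."""
    twists = pose_twists(T, dt)
    rates = np.array([regularized_progress_rate(w, v, L)[0] for w, v in twists])
    s = np.concatenate(([0.0], np.cumsum(rates * dt)))   # progress at the time samples
    T_geo = []
    for sg in np.arange(0.0, s[-1], ds):
        k = min(int(np.searchsorted(s, sg, side="right")) - 1, len(T) - 2)
        alpha = (sg - s[k]) / max(s[k + 1] - s[k], 1e-12)
        rel = logm(np.linalg.inv(T[k]) @ T[k + 1])        # relative displacement
        T_geo.append(np.real(T[k] @ expm(alpha * rel)))
    return T_geo
```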
### _Screw-based trajectory-shape descriptor_
This subsection introduces a screw-based trajectory-shape descriptor based on the rotational and translational velocities \(\mathbf{\omega}(s)\) and \(\mathbf{\tilde{\nu}}(s)\) along the reparameterized trajectory. These velocities can be calculated similarly to the calculation of \(\mathbf{\omega}(t)\) and \(\mathbf{\tilde{\nu}}(t)\), but now starting from the reparameterized trajectory \(T(s)\) instead of \(T(t)\). Afterwards, a normalization step is included to ensure that property (8) holds.

Fig. 1: Interpretation of the proposed progress rate \(\dot{s}\) as the linear velocity of a point on the moving body at a distance \(L\) from the ISA.

Fig. 3: Point \(\mathbf{\tilde{p}}\) is defined as a characteristic reference point on the body. When \(\mathbf{p}_{\perp}\) is within the spherical region with radius \(b\), then \(\mathbf{\tilde{p}}=\mathbf{p}_{\perp}\). Outside the sphere, \(\mathbf{\tilde{p}}\) is the projection of \(\mathbf{p}_{\perp}\) on the sphere’s surface.
The proposed trajectory-shape descriptor at a given trajectory sample \(i\) is of third order, and consists of the rotational and translational velocities at subsequent samples, centered at the sample point \(i\), and stacked into a \(3\times 6\) matrix \(S_{i}\):
\[S_{i}=\left[L\mathbf{\omega}_{i-1}\quad L\mathbf{\omega}_{i}\quad L\mathbf{\omega}_{i+1} \quad\mathbf{\tilde{\nu}}_{i-1}\quad\mathbf{\tilde{\nu}}_{i}\quad\mathbf{\tilde{\nu}}_{i+1 }\right]. \tag{9}\]
Since the velocities in (9) have their coordinates expressed with respect to some coordinate frame, a relative rotation alignment is needed before two local trajectory-shape descriptors can be compared. Similarly to the rotation alignment proposed in [19], shape descriptor \(S_{1}\) can be aligned with \(S_{2}\) in three steps:
1. Obtain the singular value decomposition of the relative \(3\times 3\) matrix \(S_{1}S_{2}^{T}=U\Sigma V\).
2. Calculate the rotation matrix \(R=VU^{T}\), while ensuring that \(R\in SO(3)\) by changing the sign of the third column of \(U\) when \(\det(VU^{T})=-1\).
3. Align \(S_{1}\) with \(S_{2}\) by left-multiplying \(S_{1}\) with \(R\).
The difference in local trajectory-shape \(\Delta_{1}^{2}S\) of two descriptors \(S_{1}\) and \(S_{2}\) is then defined as the Frobenius norm of the difference between the descriptors after alignment:
\[\Delta_{1}^{2}S=\left\|RS_{1}-S_{2}\right\|_{F}. \tag{10}\]
The difference \(\Delta_{1}^{2}S\) is a value with units [m], measuring the difference in local change of the trajectory shape.
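The descriptor (9), the three-step rotation alignment, and the shape difference (10) can be sketched as follows; the SVD-based (Kabsch-style) alignment returns the rotation minimizing the Frobenius distance between the two descriptors.

```
# Sketch of the trajectory-shape descriptor, its rotation alignment, and (10).
import numpy as np

def shape_descriptor(omegas, nus, i, L):
    """3x6 descriptor S_i with columns [L*w_{i-1}, L*w_i, L*w_{i+1}, nu_{i-1}, nu_i, nu_{i+1}]."""
    return np.column_stack([L * omegas[i - 1], L * omegas[i], L * omegas[i + 1],
                            nus[i - 1], nus[i], nus[i + 1]])

def align_rotation(S1, S2):
    """Rotation R in SO(3) minimizing ||R S1 - S2||_F (steps 1-3 above)."""
    U, _, Vt = np.linalg.svd(S1 @ S2.T)
    V = Vt.T
    if np.linalg.det(V @ U.T) < 0:      # flip the sign of U's third column
        U[:, 2] *= -1
    return V @ U.T

def shape_difference(S1, S2):
    """Difference in local trajectory shape, Eq. (10)."""
    R = align_rotation(S1, S2)
    return np.linalg.norm(R @ S1 - S2)  # Frobenius norm
```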
## IV Application to Segmentation
This section applies the proposed screw-based progress rate \(\dot{s}\) and trajectory-shape descriptor \(S\) to a trajectory segmentation approach. Envisioned towards incremental learning applications, a self-supervised segmentation approach based on incremental clustering was devised. This segmentation approach consists of two phases: an offline learning phase where a library of trajectory-shape primitives is learned, and a trajectory segmentation phase.
**Offline template learning**: Trajectory-shape primitives are learned by an incremental clustering approach, where the mean of each cluster represents a learned primitive. The clustering approach consists of four steps:
1) _Initialization:_ Use the first sample point \(S_{1}\) to generate the first cluster with mean \(S_{1}\) and a chosen initial guess for its standard deviation \(\sigma_{0}\).
2) _Cluster growing:_ Given a sample point \(i\) with corresponding trajectory-shape descriptor \(S_{i}\), calculate the difference \(\Delta S\) between the descriptor \(S_{i}\) and the mean \(\bar{S}\) of every learned cluster. Afterwards, add the sample point to the cluster with the smallest difference in trajectory-shape, if this difference is smaller than three times its standard deviation \(\sigma\), and update the mean \(\bar{S}\) and standard deviation \(\sigma\) of the cluster accordingly. If no such cluster exists, create a new cluster with mean \(S_{i}\) and initial standard deviation \(\sigma_{0}\).
3) _Cluster parameter update:_ After all the available data is clustered, update the value for \(\sigma_{0}\) based on the mean of the standard deviations of all the learned clusters \(\bar{\sigma}\): \(\sigma_{0,next}=\bar{\sigma}\ +\ \hat{\sigma}\), with \(\hat{\sigma}\) a tuning parameter related to the process noise of the clusters. Iterate steps 1 to 3 until \(\sigma_{0}\) converges to a steady value.
4) _Outlier removal:_ After convergence, remove sparse clusters that do not represent at least \(\beta\%\) of the data, with \(\beta\) being a chosen value.
**Trajectory segmentation phase**: Each sample point gets associated with a learned primitive (using a 1-Nearest-Neighbor classifier) as long as the difference in trajectory-shape \(\Delta S\) is smaller than \(3\sigma\) of the respective cluster. Otherwise, the sample point is labeled as 'non-classified'. Afterwards, segments are formed by grouping consecutive sample points along the trajectory that were associated with the same trajectory-shape primitive.
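A minimal sketch of this two-phase procedure is given below, reusing `shape_difference` and `align_rotation` from the previous sketch. The running mean and standard-deviation updates, and the alignment of descriptors before averaging, are illustrative choices not fully specified in the text; the \(\sigma_{0}\) re-estimation loop (step 3) and the sparse-cluster removal (step 4) are omitted for brevity.

```
# Sketch of the incremental clustering (learning) and the 1-NN segmentation phase.
import numpy as np

class IncrementalClustering:
    def __init__(self, sigma0):
        self.means, self.sigmas, self.counts = [], [], []
        self.sigma0 = sigma0

    def add(self, S):
        """Assign descriptor S to the closest cluster within 3*sigma, else open a new one."""
        if self.means:
            dists = [shape_difference(m, S) for m in self.means]
            k = int(np.argmin(dists))
            if dists[k] < 3.0 * self.sigmas[k]:
                n = self.counts[k] + 1
                R = align_rotation(self.means[k], S)          # align before averaging
                self.means[k] = ((n - 1) * (R @ self.means[k]) + S) / n
                self.sigmas[k] = np.sqrt(((n - 1) * self.sigmas[k] ** 2 + dists[k] ** 2) / n)
                self.counts[k] = n
                return k
        self.means.append(S)
        self.sigmas.append(self.sigma0)
        self.counts.append(1)
        return len(self.means) - 1

def segment(descriptors, clusters):
    """Label each sample with its nearest primitive, or -1 if non-classified."""
    labels = []
    for S in descriptors:
        dists = [shape_difference(m, S) for m in clusters.means]
        k = int(np.argmin(dists))
        labels.append(k if dists[k] < 3.0 * clusters.sigmas[k] else -1)
    return labels
```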
## V Experiments
A key property of the proposed trajectory representation is its invariance to time and the choice of body reference point. The main aim of the experiments is to demonstrate the advantages of these invariant properties for trajectory segmentation. This is done both in simulation and using real recordings of human-demonstrated pouring motions.
The simulated trajectories represent temporal rigid-body trajectories \(T(t)\) of pouring motions performed with two types of objects: a teakettle and a bottle. The strategy of the pouring motion consists of a sequence of six intuitive sub-motions, also visualized in Fig. 4:
* _slide+_ : the object is slid across the table from its initial position to a fixed location on the table
* _lift+_ : the object is lifted and reoriented so that the spout is directed towards the glass
* _tilt+_ : the object is tilted to pour the liquid
* _tilt-_ : the object's tilt is undone
* _lift-_ : the object is placed back on the table
* _slide-_ : the object is slid back
We aim to robustly segment the pouring motion into these sub-motions. To simulate sensor noise, white noise with standard deviations of 2\({}^{\circ}\) and 1 mm was added to the object's orientation and reference point's location, respectively.
To study robustness to changes in the body reference point, each of the three trajectories was simulated with a different reference point: the first was near the spout (P1), the second near the handle (P2), and the third near the center of mass (P3). Fig. 4 on the left illustrates that, when rotation of the object is involved, the different reference points result in significantly different point trajectories.
The generalization capability is tested by applying the primitives that were learned for the kettle motion to another object. The object was chosen to be a bottle of which the reference point was chosen in the center of its opening.
The proposed approach is compared to other methods based on the literature, shown in Table I. _Method_\(A\) does not transform the object's trajectory to a geometric domain. _Methods_\(A\)_and_\(B\) neglect information on the object's orientation by choosing \(L=0\). _Methods_\(B\)_and_\(C\) use the arclength of the reference point's trajectory as the geometric
progress parameter, such as in [19]. _Method \(D\)_ uses the traveled angle of the moving object as the geometric progress parameter, such as in [20]. _Method \(E\)_ uses a combination of the rotational velocity and linear velocity of the reference point on the body as the progress rate, such as in [17]. _Method \(F\)_ uses a screw-based progress rate without the proposed regularization, such as in [11]. _Method \(G\)_ uses the proposed screw-based geometric progress rate.
To validate that the proposed method works in practice, it was tested on recordings of real pouring motions. These motions were recorded using an HTC VIVE motion capture system, consisting of a tracker attached to the kettle (see Fig. 5(a)), and two base stations. The VIVE system recorded the pose trajectories with a frequency of 60 Hz and an accuracy in the order of a few mm and a few degrees. To introduce contextual variations in the measurements, the tracker was physically attached to the kettle at two different locations, one at the side of the kettle (P1) and one near the top of the handle (P2). For each tracker location, three trials were recorded. Fig. 5(b) depicts the kettle's pose trajectory for the first trial with the tracker attached to the side.
### _Data processing_
The simulated data and the recorded data were processed in the same way. The rigid-body trajectories were first preprocessed using a Kalman smoother with a constant acceleration model to deal with the effects of the measurement noise. For the methods \(B\) to \(G\), the trajectories were first reparameterized as in Section III-A according to the chosen definition of the progress rate \(\dot{s}\). Then, the local shape descriptor was calculated as in Section III-D. Finally, the same segmentation algorithm (Section IV) is applied for all approaches. For reproducibility reasons, all software used in the experiments is made publicly available [21].
The values of the tuning parameters are reported in Table I. A good choice for \(L\) depends on numerous factors, including the scale of the motion and the scale of the moving object. Given the scale of the objects and motions of interest, a value of \(L=30\) cm seemed reasonable. The other parameters in Table I were manually tuned. For method \(A\), the parameters \(\Delta s\) and \(\hat{\sigma}\) have different units compared to the ones of the other methods since method \(A\) does not transform the trajectories to a geometric domain before segmentation.
### _Results_
**Simulated data**: Fig. 6 visualizes the segmentation results of all methods for the simulated data. The ground-truth segmentation points are indicated by the vertical lines. To compare between methods, the segmented trajectories were transformed back to the time domain by re-applying the motion profile \(s(t)\) that was extracted from the trajectories. This was done by inverting the reparameterization procedure of Section III-C. The segmentation results are also evaluated quantitatively by reporting the number of _detected sub-motions_ and _consistent segments_. A sub-motion was considered 'detected' when a corresponding segment was formed. A sub-motion was considered 'consistently segmented' when the corresponding segments were associated to the same trajectory-shape primitive across the three trials.
The results of the simulation are interpreted as follows. Method \(A\) generated a relatively high number of segments. The segmentation is mainly based on differences in magnitude of the reference point's velocity. The gray segments represent regions of standstill. The light and dark blue segments represent segments of low and high magnitude in velocity, respectively. Methods \(B\) to \(G\) generated segments based on differences in shape of the rigid-body trajectory.
For methods \(A\)-\(E\), the learning of the primitives and segmentation of the trajectories was dependent on the location of the reference point on the object. Furthermore, methods \(A\)-\(C\) could not deal well with pure rotations of the object. Method \(A\) classified these segments either as stationary or as segments with low magnitude in velocity. Methods \(B\) and \(C\) treated these segments as outliers. The reason for this is that during these pure rotations, the traveled arclength of the reference point remained relatively small, resulting in a small number of geometric sample points representing these pure rotations. Hence, for pure rotations, sparse clusters were created, which were seen as outliers. Following a similar reasoning, method \(D\) could not deal with pure translations.
Method \(F\) performed the segmentation in a reference-point invariant way, but could not deal well with pure translations, since \(\boldsymbol{p}_{\perp}\) is degenerate in this case.
The proposed method \(G\) dealt well with pure translations and performed the segmentation in a reference-point invariant way. The trajectories of the kettle were consistently segmented into six segments, corresponding to the six intuitive sub-motions. More in detail, all sub-motions were detected (6/6) and all sub-motions were consistently segmented (6/6). Fig. 7 visualizes the segmented trajectories.

Fig. 4: Visualization of three rigid-body trajectories (red, green, and blue) representing simulated pouring motions performed with a kettle. Different body reference points (P1, P2 and P3) were considered.

Fig. 5: (a) Human demonstration of a pouring motion using a teakettle to which an HTC VIVE tracker is attached. (b) Visualization of the first trial within a batch of six trials in the same simulation environment as Fig. 4.
The proposed approach also succeeded to segment the simulated pouring motion performed with a bottle (with significantly different location of the reference point) using the trajectory-shape primitives learned from the trajectories of the kettle. This illustrates the capability of the approach to generalize to different objects with different geometries.
**Real data**: Fig. 8 visualizes the segmentation results of the proposed method \(G\) for the real recorded pouring motions. To illustrate the generated segments in the geometric domain, the extra transformation back to the time domain was not performed. The same values for the tuning parameters as reported in Table I were used for this experiment.
Three primitives were learned from the real data. The corresponding values of the mean trajectory-shape descriptor \(\bar{S}\) of the three clusters are reported in Table II. The learned primitives represent 1D motions, since for each primitive, the three columns are almost identical. The first primitive represents a 1D translation (slide). The second primitive represents a rotation with a non-zero pitch (lift). The third primitive represents a pure rotation (tilt). The segmentation approach created segments conforming to the six intuitive sub-motions, apart from some short segments near transition regions. These short segments can be avoided by implementing a postprocessing step or a more advanced segmentation algorithm from the literature, which is part of future work.
## VI Discussion and Conclusion
The objective of this work was to enhance template-based trajectory segmentation approaches by incorporating invariance. Time-invariance was achieved by reparameterizing the trajectory using a novel geometric progress parameter. By considering the translation along the screw axis, the progress parameter was made invariant to the choice of body reference point. Based on the reparameterized trajectory, a screw-based trajectory-shape descriptor was proposed to characterize the local geometry of the trajectory.
For the devised self-supervised segmentation scheme, the results showed a more robust detection of consecutive sub-motions with distinct features and a more consistent segmentation thanks to the invariant properties of the screw-based progress parameter and the trajectory-shape descriptor.
The proposed approach also has a more practical advantage. Due to the invariance, the formation of the segments becomes invariant to changes in sensor setup (i.e. changes in the location and angle of the camera, changes in the location of markers or trackers on the object, etc.). Therefore, sensor calibration efforts can be reduced.
Future work is to examine the benefits of the invariant segmentation approach for other types of object manipulation tasks and to verify the extent to which other segmentation methods may benefit from the invariant approach.
## Appendix
This appendix shows that the triangle inequality property \(\dot{s}_{1+2}\leq\dot{s}_{1}+\dot{s}_{2}\) in Section III-B is always satisfied for the regularized progress rate \(\dot{s}\) if \(b\) equals \(L\) in (6).
Consider again the case of a couple of rotations generating a translation as explained in Section III-B. Additionally consider that the rotation axes of the couple are symmetrically positioned w.r.t. the reference point, such that \(a=\|\boldsymbol{p}_{\perp}\|\). Equation (3) then remains valid under the proposed regularization action when \(\|\boldsymbol{p}_{\perp}\|\leq b\). From (3), it was derived that the triangle inequality holds when \(a\leq L\). Hence, given that \(a=\|\boldsymbol{p}_{\perp}\|\) and \(\|\boldsymbol{p}_{\perp}\|\leq b\), then \(a\leq L\) is always satisfied when \(b\leq L\).
Fig. 8: Segmentation results of proposed method \(G\) for the real recorded pouring motion data. P1 and P2 correspond to trials with a different location of the motion tracker on the kettle.
Fig. 6: Comparison of segmentation results between methods \(A\)-\(G\) for the simulated pouring motion data. Segments associated with different primitives are indicated with different colors. The black regions contain non-classified samples.
Fig. 7: Visualization of the segmented trajectories using the proposed method (method \(G\)). | 軌跡分割とは、軌跡を意味のある連続するサブ軌跡に分割することを指します。この論文では、3D剛体運動の軌跡分割に焦点を当てています。文献におけるほとんどの分割手法は、体の軌跡を点軌跡として表し、その移動のみを考慮し、回転を無視しています。回転と移動を組み込んだ新しい軌跡表現を提案し、さらに幾何学的進歩率と3次の軌跡形状記述子を含みます。この表現は、スクリュー理論から概念を用いて時間不変と体基準点の選択に不変にすることで時間不変化を実現しています。この新しい表現は、自己教師ありの分割手法で検証され、シミュレーションと人間が示した注ぎ動作の実際の記録を使用して評価されました。結果として、連続的なサブ運動のより強い検出と、従来の表現と比較してより一貫性の高い分割が実現しました。この |
2309.17062 | Adjoints, wrapping, and morphisms at infinity | For a localization of a smooth proper category along a subcategory preserved
by the Serre functor, we show that morphisms in Efimov's algebraizable
categorical formal punctured neighborhood of infinity can be computed using the
natural cone between right and left adjoints of the localization functor. In
particular, this recovers the following result of Ganatra--Gao--Venkatesh:
morphisms in categorical formal punctured neighborhoods of wrapped Fukaya
categories are computed by Rabinowitz wrapping. | Tatsuki Kuwagaki, Vivek Shende | 2023-09-29T08:49:21 | http://arxiv.org/abs/2309.17062v3 | # Adjoints, wrapping, and morphisms at infinity
###### Abstract
For a localization of a smooth proper category, we show that morphisms in Efimov's algebraizable categorical formal punctured neighborhood of infinity can be computed using the natural cone between right and left adjoints of the localization functor. In particular, this recovers the following result of Ganatra-Gao-Venkatesh: morphisms in categorical formal punctured neighborhoods of wrapped Fukaya categories are computed by Rabinowitz wrapping.
To any dg category \(\mathcal{S}\) over a field \(\mathbb{K}\), Efimov has associated an 'algebraizable categorical formal punctured neighborhood of infinity' [1].
\[\mathcal{S}\to\widehat{\mathcal{S}}_{\infty}\]
We are interested here in the case when \(\mathcal{S}\) admits a localization sequence
\[0\to\mathcal{K}\xrightarrow{j}\mathcal{C}\xrightarrow{i^{L}}\mathcal{S}\to 0 \tag{0.1}\]
where \(\mathcal{C}\) is smooth (perfect diagonal bimodule) and locally proper (finite dimensional Hom spaces).
In this case, Efimov showed that \(\widehat{\mathcal{S}}_{\infty}\) can be computed as follows. To any category \(\mathcal{T}\) we may associate its 'pseudo-perfect modules' \(\mathcal{T}^{pp}=\operatorname{Hom}(\mathcal{T},\operatorname{Perf}\mathbb{K})\). Since \(\mathcal{K}\) is locally proper, the Yoneda embedding gives \(\mathcal{K}\hookrightarrow\mathcal{K}^{pp}\). Form the quotient:
\[\operatorname{Perf}_{top}(\widehat{\mathcal{S}}_{\infty}):=\mathcal{K}^{pp}/ \mathcal{K} \tag{0.2}\]
The composition of the Yoneda functor with passage to the quotient gives a map
\[\mathcal{C}\to\operatorname{Hom}(\mathcal{K},\operatorname{Perf}(\mathbb{K}))/ \mathcal{K}\]
This map evidently factors through \(\mathcal{S}\), and \(\widehat{\mathcal{S}}_{\infty}\) is the full subcategory of \(\operatorname{Perf}_{top}(\widehat{\mathcal{S}}_{\infty})\) generated by the image of \(\mathcal{S}\), or equivalently \(\mathcal{C}\).
As always with quotient categories, it is not easy to compute morphism spaces directly from the definition. Our purpose here is to give a more explicit formula for morphisms in \(\widehat{\mathcal{S}}_{\infty}\). Our result is inspired by, and implies, a result of Gao-Ganatra-Venkatesh in the situation where \(\mathcal{S}\) is the Fukaya category of a Weinstein manifold [2].
**Theorem 1**.: _Let \(i:\operatorname{Mod}\mathcal{S}\to\operatorname{Mod}\mathcal{C}\) be the pullback functor on module categories. Then for \(c,d\in\mathcal{C}\), there is a natural isomorphism_
\[\operatorname{Hom}_{\widehat{\mathcal{S}}_{\infty}}(c,d)=\operatorname{Cone}( \operatorname{Hom}_{\operatorname{Mod}\mathcal{C}}(ii^{L}(c),d)\to\operatorname {Hom}_{\operatorname{Mod}\mathcal{C}}(c,ii^{L}(d)))\]
_where the map is induced by the unit maps \(c\to ii^{L}(c)\) and \(d\to ii^{L}(d)\)._
**Remark 2**.: The map \(i\) also has a right adjoint \(i^{R}\); we can also express the formula as \(\operatorname{Hom}_{\widehat{\mathcal{S}}_{\infty}}(c,d)=\operatorname{Hom }_{\operatorname{Mod}\mathcal{C}}(c,\operatorname{Cone}(ii^{R}(d)\to ii^{L}(d)))\).
**Remark 3**.: It may be nontrivial to express compositions in \(\widehat{S}_{\infty}\) in terms of the formula above. We give an expression at the level of cohomology in Appendix A.
We will give the proof of this theorem after illustrating in algebraic and symplectic geometry:
**Example 4** (Coherent sheaves).: Let \(Y\) be a smooth proper algebraic variety, and \(X\subset Y\) an open subvariety with complement \(Z\). Then \(\operatorname{Coh}(Y)\) is smooth and proper, and one has
\[\operatorname{Coh}(X)=\operatorname{Coh}(Y)/\operatorname{Coh}_{Z}(Y),\]
where \(\operatorname{Coh}_{Z}(Y)\) is the full subcategory on sheaves set-theoretically supported on \(Z\). Writing \(x:X\to Y\) for the inclusion, our result asserts that given \(E,F\in\operatorname{Coh}(Y)\),
\[\operatorname{Hom}_{\widehat{\operatorname{Coh}(X)}_{\infty}}(E,F)= \operatorname{Cone}(\operatorname{Hom}_{Q\operatorname{Coh}(Y)}(x_{*}x^{*}E,F) \to\operatorname{Hom}_{Q\operatorname{Coh}(Y)}(E,x_{*}x^{*}F))\]
Note we may compute this cone of Homs after restricting to any Zariski neighborhood of \(Z\), since \(x_{*}x^{*}E\to E\) and \(x_{*}x^{*}F\to F\) are isomorphisms away from such neighborhood.
Let us do an example of the example. We take \(Y=\mathbb{P}^{1}\), \(X=\mathbb{P}^{1}\setminus 0\), and \(E=F=\mathcal{O}\). In the Zariski chart \(\mathbb{P}^{1}\setminus\infty\), we compute:
\[\operatorname{Cone}(\operatorname{Hom}_{\mathbb{K}[t]}(\mathbb{K}[t,t^{-1}], \mathbb{K}[t])\to\operatorname{Hom}_{\mathbb{K}[t]}(\mathbb{K}[t],\mathbb{K}[ t,t^{-1}]))\cong\mathbb{K}((t))\]
Indeed, the second term in the cone is obviously \(\mathbb{K}[t,t^{-1}]\). One can show that the first is in fact isomorphic to \((\mathbb{K}[[t]]/\mathbb{K}[t])[-1]\); we include a calculation in Appendix B. We leave it to the reader to check that the cone realizes the nontrivial extension
\[0\to\mathbb{K}[t,t^{-1}]\to\mathbb{K}((t))\to\mathbb{K}[[t]]/\mathbb{K}[t]\to 0.\]
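For the reader's convenience, here is one way to see the identification of the first term (the paper's Appendix B contains a complete calculation): writing \(\mathbb{K}[t,t^{-1}]\) as a filtered colimit of free modules turns the Hom into a derived limit whose only nonvanishing contribution is a \(\lim^{1}\) term,

\[\mathbb{K}[t,t^{-1}]\cong\operatorname*{colim}\big(\mathbb{K}[t]\xrightarrow{\;t\;}\mathbb{K}[t]\xrightarrow{\;t\;}\cdots\big)\quad\text{so}\quad\operatorname{Hom}_{\mathbb{K}[t]}(\mathbb{K}[t,t^{-1}],\mathbb{K}[t])\cong\lim\big(\cdots\xrightarrow{\;t\;}\mathbb{K}[t]\xrightarrow{\;t\;}\mathbb{K}[t]\big).\]

Since \(\bigcap_{n}t^{n}\mathbb{K}[t]=0\), the underived limit of this tower vanishes, while

\[{\lim}^{1}=\operatorname{coker}\Big(\prod_{n\geq 0}\mathbb{K}[t]\xrightarrow{(a_{n})\mapsto(a_{n}-ta_{n+1})}\prod_{n\geq 0}\mathbb{K}[t]\Big)\cong\mathbb{K}[[t]]/\mathbb{K}[t],\]

the isomorphism being induced by \((a_{n})\mapsto\sum_{n}t^{n}a_{n}\). Hence the Hom complex is \((\mathbb{K}[[t]]/\mathbb{K}[t])[-1]\), as claimed.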
**Example 5** (Fukaya categories).: Let \(W\) be a Weinstein symplectic manifold and \(\Lambda\subset\partial_{\infty}W\) a generically Legendrian total stop, such as the core of a fiber of an open book decomposition of \(\partial_{\infty}W\). Let \(\Lambda^{\prime}\subset\Lambda\) be a closed subset. Then [3] the (partially) wrapped Fukaya category \(\operatorname{Fuk}(W,\Lambda)\) is smooth and proper, and we have a localization sequence
\[0\to\langle D_{\Lambda\setminus\Lambda^{\prime}}\rangle\to\operatorname{Fuk}( W,\Lambda)\to\operatorname{Fuk}(W,\Lambda^{\prime})\to 0 \tag{0.3}\]
where \(D_{\Lambda\setminus\Lambda^{\prime}}\) are the so-called linking disks to \(\Lambda\setminus\Lambda^{\prime}\).
Suppose given a Lagrangian \(M\in\operatorname{Fuk}(W,\Lambda)\). As in [3], by a _negative wrapping_\(M\rightsquigarrow M^{-}\), we mean an isotopy induced by a Hamiltonian which is linear and negative at contact infinity. So long as \(M^{-}\) avoids \(\Lambda\) and hence defines an element of \(\operatorname{Fuk}(W,\Lambda)\), there is a continuation morphism \(M\to M^{-}\). Essentially by definition,
\[\operatorname{Hom}_{\operatorname{Fuk}(W,\Lambda^{\prime})}(\,\cdot\,,M)= \underset{M\to M^{-}}{\lim}\operatorname{Hom}_{\operatorname{Fuk}(W,\Lambda)} (\,\cdot\,,M^{-})=\operatorname{Hom}_{\operatorname{Mod}\operatorname{Fuk}(W,\Lambda)}(\,\cdot\,,\underset{M\to M^{-}}{\lim}M^{-})\]
where the limit is taken over wrappings where the entire isotopy avoids \(\Lambda^{\prime}\).
In other words, there is a natural isomorphism
\[ii^{L}(M)\cong\underset{M\to M^{-}}{\lim}M^{-}\]
We conclude:
\[\operatorname{Hom}_{\operatorname{Fuk}(W,\Lambda^{\prime})_{ \infty}}(L,M) = \operatorname{Cone}(\operatorname{Hom}_{\operatorname{Fuk}(W, \Lambda)}(\,\underset{L\to L^{-}}{\lim}L^{-},M)\to\operatorname{Hom}_{ \operatorname{Fuk}(W,\Lambda)}(L,\underset{M\to M^{-}}{\lim}M^{-}))\] \[= \operatorname{Cone}(\,\underset{L\to L^{-}}{\lim}\, \operatorname{Hom}_{\operatorname{Fuk}(W,\Lambda)}(L^{-},M)\to\underset{M \to M^{-}}{\lim}\operatorname{Hom}_{\operatorname{Fuk}(W,\Lambda)}(L,M^{-}))\]
This recovers a result originally proven in [2] for \(\Lambda^{\prime}=\emptyset\).
The remainder of this note concerns the proof of Theorem 1.
We have the diagram:
\[\operatorname{Mod}\mathcal{K}\ \begin{smallmatrix}\xrightarrow{\;j\;}\\ \xleftarrow{\;j^{R}\;}\\ \xrightarrow{\;j^{RR}\;}\end{smallmatrix}\ \operatorname{Mod}\mathcal{C}\ \begin{smallmatrix}\xrightarrow{\;i^{L}\;}\\ \xleftarrow{\;i\;}\\ \xrightarrow{\;i^{R}\;}\end{smallmatrix}\ \operatorname{Mod}\mathcal{S}\]
Here, \(j^{R}\) and \(i\) are the natural pullbacks of modules under the identification of ind- and module-categories. These each have right and left adjoints, and the left adjoints compose with the Yoneda embeddings to give the original \(j\) and \(i^{L}\).
We note some properties of this diagram. The maps \(i,j,j^{RR}\) are fully faithful; we have \(j^{R}j=1_{\operatorname{Mod}\mathcal{K}}=j^{R}j^{RR}\) and \(i^{L}i=1_{\operatorname{Mod}\mathcal{S}}=i^{R}i\).
We will later be interested in the Drinfeld-Verdier quotient \((\operatorname{Mod}\mathcal{C})/\mathcal{K}\). (Note this differs from \(\operatorname{Mod}\mathcal{C}/\operatorname{Mod}\mathcal{K}=\operatorname{ Mod}\mathcal{S}\).) It will be useful that certain morphisms can already be computed in \(\mathcal{C}\):
**Lemma 6**.: _For any \(c,d\in\mathcal{C}\),_
\[\operatorname{Hom}_{\operatorname{Mod}\mathcal{C}/\mathcal{K}}(ii^{L}(c),d) \cong\operatorname{Hom}_{\operatorname{Mod}\mathcal{C}}(ii^{L}(c),d)\cong \operatorname{Hom}_{\operatorname{Mod}\mathcal{C}}(c,ii^{R}(d)). \tag{0.4}\]
_and_
\[\operatorname{Hom}_{\operatorname{Mod}\mathcal{C}/\mathcal{K}}(ii^{L}(c),ii^{ L}(d))\cong\operatorname{Hom}_{\operatorname{Mod}\mathcal{C}}(ii^{L}(c),ii^{L}(d)) \cong\operatorname{Hom}_{\operatorname{Mod}\mathcal{C}}(c,ii^{L}(d)). \tag{0.5}\]
_Additionally,_
\[\operatorname{Hom}_{\operatorname{Mod\mathcal{C}}/\mathcal{K}}(c,ii^{L}(d)) \cong\operatorname{Hom}_{\operatorname{Mod\mathcal{C}}}(c,ii^{L}(d)). \tag{0.6}\]
_and_
\[\operatorname{Hom}_{\operatorname{Mod\mathcal{C}}/\mathcal{K}}(c,d)\cong \operatorname{Hom}_{\operatorname{Mod\mathcal{C}}}(c,ii^{L}(d)) \tag{0.7}\]
Proof.: A morphism in \(\operatorname{Hom}_{\operatorname{Mod\mathcal{C}}/\mathcal{K}}(ii^{L}(c),(d))\) is given by a roof diagram
\[ii^{L}(c)\xrightarrow{f}c^{\prime}\xleftarrow{g}d \tag{0.8}\]
such that \(\operatorname{Cone}(g)\in\mathcal{K}\). Since \(\operatorname{Hom}_{\operatorname{Mod\mathcal{C}}}(ii^{L}(c),\operatorname{Cone}(g))=0\), \(f\) is induced by a morphism \(ii^{L}(c)\to d\). This shows (0.4). Now (0.5) follows from \(ii^{R}ii^{L}=ii^{L}\).
Similarly, take a morphism in \(\operatorname{Hom}_{\operatorname{Mod\mathcal{C}}/\mathcal{K}}(c,ii^{L}(d))\). Then it is given by a roof diagram
\[c\xleftarrow{f}c^{\prime}\xrightarrow{g}ii^{L}(d) \tag{0.9}\]
such that \(\operatorname{Cone}(f)\in\mathcal{K}\). Since \(\operatorname{Hom}_{\operatorname{Mod\mathcal{C}}}(\operatorname{Cone}(f),ii^{L}(d))=0\), \(g\) is induced by a morphism \(c\to ii^{L}(d)\). This establishes (0.6).
Finally, since \(j^{R}(d)\in\operatorname{Mod\mathcal{K}}\), we have \(d_{i}\in\mathcal{K}\) such that \(\varinjlim_{i}d_{i}=j^{R}(d)\). Since \(j\) is colimit preserving and \(c\) is compact, we have
\[\operatorname{Hom}_{\operatorname{Mod\mathcal{C}}}(c,jj^{R}(d))\cong \varinjlim_{i}\operatorname{Hom}_{\operatorname{Mod\mathcal{C}}}(c,d_{i}). \tag{0.10}\]
Take any morphism \(f\in\operatorname{Hom}_{\operatorname{Mod\mathcal{C}}}(c,jj^{R}(d))\). The above isomorphism implies \(f\) factors through \(d_{i}\in\mathcal{K}\) for some sufficiently large \(i\). This implies \(\operatorname{Hom}_{\operatorname{Mod\mathcal{C}}/\mathcal{K}}(c,jj^{R}(d))\cong 0\). Applying this result to the triangle
\[\operatorname{Hom}_{\operatorname{Mod\mathcal{C}}/\mathcal{K}}(c,d)\to \operatorname{Hom}_{\operatorname{Mod\mathcal{C}}/\mathcal{K}}(c,ii^{L}(d)) \to\operatorname{Hom}_{\operatorname{Mod\mathcal{C}}/\mathcal{K}}(c,jj^{R}(d ))\to, \tag{0.11}\]
we get (0.7).
**Lemma 7**.: _Given an exact sequence as in (0.1), the restrictions of \(i\) and \(j^{R}\) to pseudoperfect modules have the following properties:_
* \(i:\mathcal{S}^{pp}\to\mathcal{C}^{pp}\) _is fully faithful_
* _the image of_ \(i\) _is the kernel of_ \(j^{R}\)__
Proof.: For the second statement:
\[\mathcal{S}^{pp}=\operatorname{Hom}(\mathcal{S},\operatorname{Perf}(\mathbb{K} ))=\operatorname{Hom}(\mathcal{C}\oplus_{\mathcal{K}}0,\operatorname{Perf}( \mathbb{K}))=\mathcal{C}^{pp}\times_{\mathcal{K}^{pp}}0\]
**Remark 8**.: Note we do not claim the map \(\mathcal{C}^{pp}/i(\mathcal{S}^{pp})\to\mathcal{K}^{pp}\) is fully faithful.
**Corollary 9**.: _Assume \(\mathcal{C}\) is smooth and proper, so \(\mathcal{C}^{pp}=\mathcal{C}\). Then the kernel of the map_
\[\mathcal{C}\xrightarrow{j^{R}}\mathcal{K}^{pp}\to\mathcal{K}^{pp}/\mathcal{K}\]
_is generated by \(\mathcal{K}\) and \(\mathcal{C}\cap i(\mathcal{S})\)._
Proof.: After Lemma 7, the only thing remaining to check is \(i(\mathcal{S}^{pp})=\mathcal{C}\cap i(\mathcal{S})\). Smoothness of \(\mathcal{C}\) implies smoothness of \(\mathcal{S}\), hence \(\mathcal{S}^{pp}\subset\mathcal{S}\), giving the inclusion \(\subset\). On the other hand, if \(s\in\mathcal{S}\) satisfies \(i(s)\in\mathcal{C}\), then for \(c\in\mathcal{C}\) we have
\[\mathrm{Hom}_{\mathcal{S}}(i^{L}(c),s)=\mathrm{Hom}_{\mathcal{C}}(c,i(s))\]
By properness of \(\mathcal{C}\), this \(\mathrm{Hom}\) is perfect. But \(i^{L}\) is surjective, so \(s\in\mathcal{S}^{pp}\).
Proof of Theorem 1.: Consider the category \((\mathcal{C},\mathrm{Mod}\,\mathcal{S})\) generated by \(\mathcal{C}\) and \(\mathrm{Mod}\,\mathcal{S}\) in \(\mathrm{Mod}\,\mathcal{C}\). Since \(j^{R}\) kills \(\mathrm{Mod}\,\mathcal{S}\), we have an induced functor \((\mathcal{C},\mathrm{Mod}\,\mathcal{S})\to\mathcal{K}^{pp}\). The kernel is generated by \(\mathrm{Mod}\,\mathcal{S}\), and we have a map
\[[j_{R}]\colon\left(\mathcal{C},\mathrm{Mod}\mathcal{S}\right)/\mathrm{Mod} \mathcal{S}\to\mathcal{K}^{pp}. \tag{0.12}\]
As \([j_{R}]\) can be embedded into an equivalence \(\mathrm{Mod}\,\mathcal{C}/\mathrm{Mod}\,\mathcal{S}\cong\mathrm{Mod}\mathcal{K}\), it is in particular fully faithful. Hence we get an equivalence:
\[\left(\left(\mathcal{C},\mathrm{Mod}\mathcal{S}\right)/\mathrm{Mod}\mathcal{S }\right)/\mathcal{K}\cong\widehat{\mathcal{S}}_{\infty}\subset\mathcal{K}^{pp }/\mathcal{K}. \tag{0.13}\]
Consider the embedding
\[\left(\mathcal{C},\mathrm{Mod}\mathcal{S}\right)/\mathrm{Mod}\mathcal{S} \hookrightarrow\mathrm{Mod}\mathcal{K}\hookrightarrow\mathrm{Mod}\mathcal{C}. \tag{0.14}\]
given by \(jj^{R}\). We use the same notation after passing to the quotient by \(\mathcal{K}\):
\[jj^{R}\colon\left(\left(\mathcal{C},\mathrm{Mod}\mathcal{S}\right)/\mathrm{ Mod}\mathcal{S}\right)/\mathcal{K}\hookrightarrow\mathrm{Mod}\mathcal{C}/ \mathcal{K} \tag{0.15}\]
Thus far we have shown
\[\mathrm{Hom}_{\widehat{\mathcal{S}}_{\infty}}(c,d)=\mathrm{Hom}_{\mathrm{ Mod}\mathcal{C}/\mathcal{K}}(jj^{R}(c),jj^{R}(d))\]
Since we have an exact triangle
\[jj^{R}\to\mathrm{id}\to ii^{L}\to, \tag{0.16}\]
we have
\[\mathrm{Hom}_{\mathrm{Mod}\mathcal{C}/\mathcal{K}}(jj^{R}(c),jj^{R}(d))\cong \mathrm{Cone}(C_{1}\to C_{2})[-1] \tag{0.17}\]
where
\[\begin{split} C_{1}&:=\mathrm{Cone}(\mathrm{Hom}_{ \mathrm{Mod}\mathcal{C}/\mathcal{K}}(ii^{L}(c),d)\to\mathrm{Hom}_{\mathrm{Mod} \mathcal{C}/\mathcal{K}}(c,d))\\ C_{2}&:=\mathrm{Cone}(\mathrm{Hom}_{\mathrm{Mod} \mathcal{C}/\mathcal{K}}(ii^{L}(c),ii^{L}(d))\to\mathrm{Hom}_{\mathrm{Mod} \mathcal{C}/\mathcal{K}}(c,ii^{L}(d))).\end{split} \tag{0.18}\]
By (0.5), we see \(C_{2}=0\). To complete the proof we rewrite \(C_{1}\) using (0.4) and (0.7).
## Appendix A Compositions in \(\widehat{\mathcal{S}}_{\infty}\)
Let \(c_{0},c_{1},c_{2}\) be objects of \(\mathcal{C}\), viewed also as objects of \(\widehat{\mathcal{S}}_{\infty}\). We express the underlying complex of \(\operatorname{Hom}_{\widehat{\mathcal{S}}_{\infty}}(c_{i},c_{i+1})\) as
\[\operatorname{Hom}_{\operatorname{Mod}\mathcal{C}}(ii^{L}(c_{i}),c_{i+1})[1] \oplus\operatorname{Hom}_{\operatorname{Mod}\mathcal{C}}(ii^{L}(c_{i}),ii^{L }(c_{i+1}))).\] (A.1)
We will use the unit morphism
\[u\colon c_{i}\to ii^{L}(c_{i}).\] (A.2)
We will compose
\[\begin{split}(f_{0},g_{0})&\in\operatorname{Hom}_{\operatorname{Mod}\mathcal{C}}(ii^{L}(c_{0}),c_{1})[1]\oplus\operatorname{Hom}_{\operatorname{Mod}\mathcal{C}}(ii^{L}(c_{0}),ii^{L}(c_{1}))\\ (f_{1},g_{1})&\in\operatorname{Hom}_{\operatorname{Mod}\mathcal{C}}(ii^{L}(c_{1}),c_{2})[1]\oplus\operatorname{Hom}_{\operatorname{Mod}\mathcal{C}}(ii^{L}(c_{1}),ii^{L}(c_{2})).\end{split}\] (A.3)
We use the notation from the proof of Theorem 1. We have the projection
\[\pi\colon\operatorname{Cone}(C_{1}\to C_{2})[-1]\to C_{1},\] (A.4)
which is a quasi-isomorphism. For each \((f_{i},g_{i})\), we have a cocycle lift
\[\begin{split}&(f_{i},g_{i},u\circ g_{i}\circ u^{-1},0)\\ &\in\operatorname{Hom}_{\operatorname{Mod}\mathcal{C}/\mathcal{K} }(ii^{L}(c_{i}),c_{i+1})[-1]\oplus\operatorname{Hom}_{\operatorname{Mod} \mathcal{C}/\mathcal{K}}(c_{i},c_{i+1})\\ &\oplus\operatorname{Hom}_{\operatorname{Mod}\mathcal{C}/\mathcal{ K}}(ii^{L}(c_{i}),ii^{L}(c_{i+1}))[-2]\oplus\operatorname{Hom}_{\operatorname{Mod} \mathcal{C}/\mathcal{K}}(c_{i},ii^{L}(c_{i+1})))[-1],\end{split}\] (A.5)
which is the underlying vector space of \(\operatorname{Cone}(C_{1}\to C_{2})\), i.e., of the hom-space \(\operatorname{Hom}(\operatorname{Cone}(c_{i}\to ii^{L}(c_{i})),\operatorname{Cone}(c_{i+1}\to ii^{L}(c_{i+1})))\). Here \(g_{i}\circ u^{-1}\) is only cohomologically well-defined. We then directly calculate and get
\[\begin{split}&(f_{1},g_{1},u\circ g_{1}\circ u^{-1},0)\circ(f_{ 0},g_{0},u\circ g_{0}\circ u^{-1},0)\\ &=(g_{1}\circ f_{0}+f_{1}\circ u\circ g_{0}\circ u^{-1},g_{1} \circ g_{0},\star_{1},\star_{2}),\end{split}\] (A.6)
where the last two components are omitted.
We interpret each term as a morphism of \(\operatorname{Mod}\mathcal{C}\). By taking the following identification, \(u^{-1}\) disappears:
\[\begin{split}&(f_{i},g_{i},g_{i},0)\\ &\in\operatorname{Hom}_{\operatorname{Mod}\mathcal{C}}(ii^{L}(c_{ i}),c_{i+1})[-1]\oplus\operatorname{Hom}_{\operatorname{Mod}\mathcal{C}}(ii^{L}(c_{i}),ii^{L }(c_{i+1}))\\ &\oplus\operatorname{Hom}_{\operatorname{Mod}\mathcal{C}}(ii^{L} (c_{i}),ii^{L}(c_{i+1}))[-2]\oplus\operatorname{Hom}_{\operatorname{Mod} \mathcal{C}}(ii^{L}(c_{i}),ii^{L}(c_{i+1})))[-1].\end{split}\] (A.7)
Then the terms in
\[(g_{1}\circ u\circ f_{0}+f_{1}\circ g_{0},g_{1}\circ g_{0})\] (A.8)
are well-defined, except that it is not immediate that \(g_{1}\circ u\circ f_{0}\) lands in the correct place \(\operatorname{Hom}_{\operatorname{Mod}\mathcal{C}}(ii^{L}(c_{i}),c_{i+1})[-1]\oplus\operatorname{Hom}_{\operatorname{Mod}\mathcal{C}}(ii^{L}(c_{i}),ii^{L}(c_{i+1}))\). Here we put \(u\) at the head of \(f_{0}\), which also comes from the identification with \(\operatorname{Mod}\mathcal{C}\).
A priori, \(g_{1}\circ u\circ f_{0}\) is not in \(\operatorname{Hom}_{\operatorname{ModC}}(ii^{L}(c_{i}),c_{i+1})[-1]\), but in \(\operatorname{Hom}_{\operatorname{ModC}}(ii^{L}(c_{i}),ii^{L}(c_{i+1}))[-1]\). But, by construction, there is some \(u^{-1}\circ g_{1}\circ u\circ f_{0}\in\operatorname{Hom}_{\operatorname{ModC}}(ii^{L}(c_{i}),c_{i+1})[-1]\) such that \(u\circ(u^{-1}\circ g_{1}\circ u\circ f_{0})=g_{1}\circ u\circ f_{0}\). Hence, at the cohomological level, we obtain the following formula for the composition:
\[(f_{1},g_{1})\circ(f_{0},g_{0}):=(u^{-1}\circ g_{1}\circ u\circ f_{0}+f_{1} \circ g_{0},g_{1}\circ g_{0}).\] (A.9)
One way to write formulas beyond the cohomological level would be the following. Choosing a projection \(C_{1}\to H^{*}(C_{1})\) and a splitting of \(\operatorname{Cone}(C_{1}\to C_{2})[1]\to H^{*}(C_{1})\), one obtains a contracting homotopy from \(\operatorname{Cone}(C_{1}\to C_{2})[1]\) to \(H^{*}(C_{1})\). Then, by running homological perturbation theory, one obtains an \(A_{\infty}\)-structure upgrading the above composition formula, which is by construction quasi-equivalent to \(\widehat{\mathcal{S}}_{\infty}\).
## Appendix B \(\operatorname{Hom}_{\mathbb{K}[t]}(\mathbb{K}[t,t^{-1}],\mathbb{K}[t])\)
A free resolution of \(\mathbb{K}[t,t^{-1}]\) is given by:
\[\bigoplus_{n\leq-1}\mathbb{K}[t]\cdot r_{n} \to \bigoplus_{n\leq 0}\mathbb{K}[t]\cdot s_{n}\] \[r_{n} \mapsto ts_{n}-s_{n+1}\]
where \(r_{n},s_{n}\) are just basis elements. Dualizing gives
\[\prod_{n\leq 0}\mathbb{K}[t]\cdot s_{n}^{*} \to \prod_{n\leq-1}\mathbb{K}[t]\cdot r_{n}^{*}\] \[s_{n}^{*} \mapsto tr_{n}^{*}-r_{n-1}^{*}\]
Consider the following \(\mathbb{K}[t]\)-linear map
\[\Sigma\colon\prod_{n\leq-1}\mathbb{K}[t]r_{n}^{*}\to\mathbb{K}[[t]];r_{n}^{*} \mapsto t^{-n-1}.\] (B.1)
We claim that
\[\prod_{n\leq-1}\mathbb{K}[t]\cdot s_{n}^{*}\to\prod_{n\leq-1}\mathbb{K}[t] \cdot r_{n}^{*}\to\mathbb{K}[[t]]\to 0\] (B.2)
is an exact sequence. Indeed, it is obvious that the composition is zero. Suppose \(\prod f_{n}(t)r_{n}^{*}\) goes to zero. For each monomial \(\alpha r_{n}^{*}\) of \(\prod f_{n}(t)r_{n}^{*}\), we set
\[\deg(\alpha r_{n}^{*}):=\deg(\alpha)-n-1.\]
Let \(N\) be the lowest degree in which \(\prod f_{n}(t)r_{n}^{*}\) has a nonzero monomial. Note that the number of degree \(N\) monomials in \(\prod f_{n}(t)r_{n}^{*}\) is finite. Hence, by adding an element coming from \(\prod_{n\leq-1}\mathbb{K}[t]\cdot s_{n}^{*}\), one can assume that the sum of the degree \(N\) monomials is \(\beta r_{-N-1}^{*}\) for some scalar \(\beta\). Since this is still in the kernel of \(\Sigma\) and the
degree \(N\)-part of \(\Sigma(\beta r^{*}_{-N-1})=\beta t^{N}\), \(\beta\) is zero. Inductively, adding elements coming from \(\prod_{n\leq-1}\mathbb{K}[t]\cdot s^{*}_{n}\), we get \(\ker\Sigma=\prod_{n\leq-1}\mathbb{K}[t]\cdot s^{*}_{n}\).
Hence
\[\prod_{n\leq 0}\mathbb{K}[t]\cdot s^{*}_{n}\to\prod_{n\leq-1}\mathbb{K}[t] \cdot r^{*}_{n}\to\mathbb{K}[[t]]/\mathbb{K}[t]\to 0\] (B.3)
is also an exact sequence. (It is also easy to see that the first map is injective.)
### Acknowledgment
We would like to thank Adrian Petr for some questions about the Rabinowitz Fukaya category. The first-named author's work is supported by JSPS KAKENHI Grant Numbers 22K13912, 20H01794, 23H01068. The second-named author's work is supported by Novo Nordisk Foundation grant NNF20OC0066298, Villum Fonden Villum Investigator grant 37814, and Danish National Research Foundation grant DNRF157.
| ```
滑らかで適切な(smooth proper)カテゴリを、Serre functor により保存されるサブカテゴリに沿って局所化した例について、Efimov のカテゴリ的形式的穴あき近傍における Morphism を計算する。この計算結果は、右随伴と左随伴の反復から得られる自然な Cone として表される。特に、これにより Ganatra-Gao-Venkatesh の次の結果が再現される: wrapped Fukaya category のカテゴリ的形式的穴あき近傍の Morphism の計算に Rabinowitz wrapping を用いることができる。
``` |
2309.08142 | MAVIS: Multi-Camera Augmented Visual-Inertial SLAM using SE2(3) Based
Exact IMU Pre-integration | We present a novel optimization-based Visual-Inertial SLAM system designed
for multiple partially overlapped camera systems, named MAVIS. Our framework
fully exploits the benefits of wide field-of-view from multi-camera systems,
and the metric scale measurements provided by an inertial measurement unit
(IMU). We introduce an improved IMU pre-integration formulation based on the
exponential function of an automorphism of SE_2(3), which can effectively
enhance tracking performance under fast rotational motion and extended
integration time. Furthermore, we extend conventional front-end tracking and
back-end optimization module designed for monocular or stereo setup towards
multi-camera systems, and introduce implementation details that contribute to
the performance of our system in challenging scenarios. The practical validity
of our approach is supported by our experiments on public datasets. Our MAVIS
won the first place in all the vision-IMU tracks (single and multi-session
SLAM) on Hilti SLAM Challenge 2023 with 1.7 times the score compared to the
second place. | Yifu Wang, Yonhon Ng, Inkyu Sa, Alvaro Parra, Cristian Rodriguez, Tao Jun Lin, Hongdong Li | 2023-09-15T04:15:37 | http://arxiv.org/abs/2309.08142v5 | # MAVIS: Multi-Camera Augmented Visual-Inertial SLAM
###### Abstract
We present a novel optimization-based Visual-Inertial SLAM system designed for multiple partially overlapped camera systems, named MAVIS. Our framework fully exploits the benefits of wide field-of-view from multi-camera systems, and the metric scale measurements provided by an inertial measurement unit (IMU). We introduce an improved IMU pre-integration formulation based on the exponential function of an automorphism of SE\({}_{2}(3)\), which can effectively enhance tracking performance under fast rotational motion and extended integration time. Furthermore, we extend conventional front-end tracking and back-end optimization module designed for monocular or stereo setup towards multi-camera systems, and introduce implementation details that contribute to the performance of our system in challenging scenarios. The practical validity of our approach is supported by our experiments on public datasets. Our MAVIS won the first place in all the vision-IMU tracks (single and multi-session SLAM) on Hili SLAM Challenge 2023 with 1.7 times the score compared to the second place1.
Footnote 1: [https://hili-challenge.com/leader-board-2023.html](https://hili-challenge.com/leader-board-2023.html)
## I Introduction
Robust and real-time Simultaneous Localization And Mapping (SLAM) is a long-standing problem within the computer vision and robotics communities. Pure vision-based solutions lack the level of robustness and accuracy found in lidar-based solutions, and are thus often complemented by additional sensors such as -- on XR (VR/AR) virtual and augmented reality devices -- a low-cost IMU measuring the angular velocity and acceleration. While existing monocular or stereo visual-inertial solutions [1, 2, 3, 4, 5, 6, 7, 8, 9] have demonstrated their potential to enhance robustness in degenerate scenarios such as texture-less environments or agile motion by integrating IMU measurements, there are still existing challenges such as limited camera field-of-view, and a restricted ability to handle feature tracking failures for durations exceeding 10 seconds. These challenges can lead to rapid system divergence, even when using IMUs, causing a significant degradation in positioning accuracy.
The present paper focuses on yet another type of sensor systems, namely multi-camera systems. As shown in Figure 1, the forward-facing stereo cameras offer a broader co-visibility area compared to the left, right, and upwards cameras, which have limited overlap with the forward-facing stereo pair. Such systems offer the advantage of a larger fields-of-view, omni-directional observation of the environment that improves motion estimation accuracy and robustness against failures due to texture-poor environment. However, an inherent limitation of such setups is that the introduction of additional cameras directly leads to an increase in computational cost. The proper handling of measurements from all cameras is crucial for balancing accuracy, robustness, and computational efficiency.
Moreover, in order to improve the computational efficiency of optimization-based visual-inertial navigation methods without compromising accuracy, [10] has introduced an IMU pre-integration method, which can combine hundreds of inertial measurements into a single relative motion constraint by pre-integrating measurements between selected keyframes. This formulation plays a vital role in enhancing the effectiveness of front-end feature tracking across cameras and overall performance. However, existing methods [10, 11] rely on imprecise integration of the position and velocity where the IMU is assumed to be non-rotating between IMU measurements, such approximation can negatively impact the accuracy of pre-integrated poses, especially for fast rotational motion and extended integration time.
Our contributions are as follows:
* We present MAVIS, the state-of-the-art optimization-based visual-inertial SLAM framework specifically designed for multiple partially overlapped camera system.
* We introduce a new IMU pre-integration formulation based on the exponential function of an automorphism of \(\mathbf{SE_{2}}(3)\). Our approach ensures highly accurate integration of IMU data, which directly contributes to the improved tracking performance of our SLAM system.
* We demonstrate a substantial advantage of MAVIS in terms of robustness and accuracy through an extensive experimental evaluation. Our method attains the first place in both the vision-only single-session and multi-session tracks of the Hilti SLAM Challenge 2023.
## II Related Work
The advantages and challenges of monocular or stereo visual-inertial SLAM have been discussed extensively in previous frameworks [1, 2, 3, 4, 5, 6, 7, 8, 9]. For a comprehensive survey,
Fig. 1: AlphaSense multi-camera module as an example of multi-camera system analyzed in this paper, with a forward-facing stereo cameras and multiple sideward monocular cameras.
please refer to [12] and the latest research [13]. Here, we mainly focus on vision-based solutions for multi-camera systems. [14] extended ORB-SLAM2 [15] to multi-camera setups, supporting various rigidly coupled multi-camera systems. [16] introduced an adaptive SLAM system design for arbitrary multi-camera setups, requiring no sensor-specific tuning. Several works [17, 18, 19, 20, 21] focus on utilizing a surround-view camera system, often with multiple non-overlapping monocular cameras, or specializing in motion estimation for ground vehicles. While demonstrating advantages in robustness in complex environments, these methods exhibit limited performance in highly dynamic scenarios and minor accuracy improvements in real-world experiments.
While many multi-camera visual-inertial solutions have been presented, none achieve a perfect balance among accuracy, robustness, and computational efficiency, especially in challenging scenarios. VILENS-MC [22] presents a multi-camera visual-inertial odometry system based on factor graph optimization. It improves tracking efficiency through cross-camera feature selection. However, it lacks a local map tracking module and loop closure optimization, leading to reduced performance in revisited locations compared to ORB-SLAM3 [9]. BAMF-SLAM [23] introduces a multi-fisheye VI-SLAM system that relies on dense pixel-wise correspondences in a tightly-coupled semi-pose-graph bundle adjustment. This approach delivers exceptional accuracy but demands a high-end GPU for near real-time performance.
### _Inertial Preintegration_
The theory of IMU preintegration was firstly proposed by [24, 25]. This work involves the discrete integration of the inertial measurement dynamics in a local frame of reference, such that the bias of state dynamics can be efficiently corrected at each optimization step. [26] presents a singularity-free orientation representation on \(\mathbf{SO(3)}\) manifold, incorporating the IMU preintegration into optimization-based VINS, significantly improve on the stability of [25]. Moreover, [1, 27] introduced preintegration in the continuous form using quaternions, in order to overcome the discretization effect and improve the accuracy. There are also several approaches solving this problem by using analytical solution [28, 29] or a switched linear system [30, 31]. Another work that is closely related to ours is introduced by [32]. It extended on-manifold pre-integration of [10] to the Lie group \(\mathbf{SE_{2}(3)}\). However, their method is still limited to Euler integration for position and velocity where the orientation is assumed non-rotating between IMU measurements.
## III Methodology
In this work, we present our multi-camera VI-SLAM system with a novel automorphism of \(\mathbf{SE_{2}(3)}\) exponential-based exact IMU pre-integration formulation, alongside an extended front-end tracking and back-end optimization modules for multi-camera setups. Figure 2 show the main system components.
### _IMU intrinsic compensation_
We start by adding an IMU _intrinsic compensation_ step prior to the IMU pre-integration stage. When we use IMU information in a visual-inertial SLAM system, it is common to model raw accelerometer and gyroscope measurements as
\[\hat{\mathbf{a}}_{t}=\mathbf{a}_{t}+\mathbf{b}_{a_{t}}+\mathbf{n}_{a_{t}} \tag{1a}\] \[\hat{\omega}_{t}=\omega_{t}+\mathbf{b}_{\omega_{t}}+\mathbf{n}_{\omega_{t}}, \tag{1b}\]
which only considers measurement noise density (i.e., \(\mathbf{n}\)) and bias \(\mathbf{b}\). Affected by acceleration bias \(\mathbf{b}_{a_{t}}\) and gyroscope bias \(\mathbf{b}_{\omega_{t}}\), as well as additive noises \(\mathbf{n}_{a}\) and \(\mathbf{n}_{\omega}\), model (1) is simple and useful, resulting in a good approximation for devices with factory calibrated intrinsics. However, it may produce impaired calibration results for low-cost, consumer grade inertial sensors which exhibit significant axis misalignment and scale factor errors. Inspired by [33], we extend the IMU model (1) by introducing the IMU intrinsics modelling
\[\hat{\mathbf{a}}_{t}=\mathbf{S}_{\alpha}\mathbf{M}_{\alpha}\mathbf{a}_{t}+\mathbf{b}_{a_{t}}+\mathbf{n}_{a_{t}} \tag{2a}\] \[\hat{\omega}_{t}=\mathbf{S}_{\omega}\mathbf{M}_{\omega}\omega_{t}^{\prime}+\mathbf{A}_{\omega}\mathbf{a}_{t}+\mathbf{b}_{\omega_{t}}+\mathbf{n}_{\omega_{t}}, \tag{2b}\]
where \(\omega_{t}^{\prime}=\mathbf{C}_{\omega}\omega_{t}\) and \(\mathbf{C}_{\omega}\) is the rotation matrix between the accelerometer and gyroscope frames. We define \(\mathbf{S}_{\alpha}\) and \(\mathbf{S}_{\omega}\) as the diagonal matrices of scaling effects, \(\mathbf{M}_{\alpha}\) and \(\mathbf{M}_{\omega}\) as lower unitriangular matrices corresponding to small misalignment angles, and \(\mathbf{A}_{\omega}\) as a fully populated skew matrix. All characteristics above can be obtained by running the widely-adopted Kalibr toolbox [34].
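To make the role of these intrinsic terms concrete, the following minimal NumPy sketch applies the inverse of the deterministic part of model (2) to raw measurements. The numerical values and the function name are illustrative assumptions, not taken from the paper; the bias and noise terms are deliberately left in the output, to be handled by the estimator downstream.

```python
import numpy as np

# Illustrative intrinsic values in the style of a Kalibr calibration report
# (made-up numbers, device-specific in practice).
S_a = np.diag([1.01, 0.99, 1.00]);  S_w = np.diag([1.00, 1.02, 0.98])
M_a = np.array([[1.0, 0.0, 0.0], [0.002, 1.0, 0.0], [-0.001, 0.003, 1.0]])
M_w = np.array([[1.0, 0.0, 0.0], [0.001, 1.0, 0.0], [0.002, -0.001, 1.0]])
A_w = np.zeros((3, 3))   # accelerometer-induced (skew) term of the gyro model
C_w = np.eye(3)          # rotation between the accelerometer and gyroscope triads

def compensate(a_raw, w_raw):
    """Undo the deterministic part of model (2): scale, misalignment and the
    accelerometer coupling; biases and noise remain in the returned values."""
    a_bar = np.linalg.solve(S_a @ M_a, a_raw)
    # Use the compensated accelerometer reading as an approximation of a_t.
    w_bar = C_w.T @ np.linalg.solve(S_w @ M_w, w_raw - A_w @ a_bar)
    return a_bar, w_bar
```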
### _IMU pre-integration_
Following the previous section, we compensate IMU axis misalignment and scale factor, and use the notation \(\bar{\omega}_{t}\) and \(\bar{a}_{t}\) to denote the biased, but noise/skew-free and scale-correct IMU measurements. In the following sections, we drop or retain the subscript \(t\) to simplify the notation, or highlight the time dependency of the variable.
We now proceed to our core contribution: a novel, exact IMU pre-integration formulation based on the exponential function of an automorphism of the matrix Lie group \(\mathbf{SE_{2}(3)}\). We firstly model the noise-free kinematics of the system as follow:
\[\left\{\begin{aligned} \dot{\mathbf{R}}&=\mathbf{R}( \bar{\omega}-\mathbf{b}_{\omega})^{\wedge}\\ \dot{\mathbf{p}}&=\mathbf{v}\\ \dot{\mathbf{v}}&=\mathbf{R}(\bar{\mathbf{a}}- \mathbf{b}_{a})+\mathbf{g}\end{aligned}\right. \tag{3}\]
Fig. 2: Overview of our proposed visual-inertial localization and mapping pipeline for multi-camera systems.
where \((\dot{\mathbf{R}},\dot{\mathbf{p}},\dot{\mathbf{v}})\) are the first-order derivatives of the rotation, position and velocity of the IMU frame and \(\mathbf{g}\) denotes the gravity vector, both expressed with respect to a world fixed frame. We also model the random walk of the biases as \(\dot{\mathbf{b}}_{\omega}=\tau_{\omega}\) and \(\dot{\mathbf{b}}_{a}=\tau_{a}\). We can represent the IMU pose using the extended pose in the \(\mathbf{SE_{2}(3)}\) Lie group, such that the state is \(\xi=\left[\begin{matrix}\mathbf{R}&\mathbf{p}&\mathbf{v}\\ 0&1&0\\ 0&0&1\end{matrix}\right]\) and the kinematics of the state \(\xi\) can be represented using an automorphism of \(\mathbf{SE_{2}(3)}\)[35]
\[\dot{\xi}=(\mathbf{G}-\mathbf{D})\xi+\xi(\mathbf{U}-\mathbf{B}+\mathbf{D}) \tag{4}\]
where
\[\mathbf{U} =\begin{bmatrix}\omega^{\wedge}&0&\mathbf{a}\\ 0&0&0\\ 0&0&0\end{bmatrix}, \mathbf{G} =\begin{bmatrix}0&0&\mathbf{g}\\ 0&0&0\\ 0&0&0\end{bmatrix},\] \[\mathbf{D} =\begin{bmatrix}0&0&0\\ 0&0&0\\ 0&1&0\end{bmatrix}, \mathbf{B} =\begin{bmatrix}(\mathbf{b}_{\omega})^{\wedge}&0&\mathbf{b}_{a}\\ 0&0&0\\ 0&0&0\end{bmatrix}\]
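As a sketch of how the quantities entering (4) can be assembled, the following NumPy helpers build the \(5\times 5\) matrices under our reading of the block layout (a \(3\times 3\) rotation block followed by the position and velocity columns); the helper names are our own.

```python
import numpy as np

def skew(v):
    """Matrix such that skew(v) @ x == np.cross(v, x)."""
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]], dtype=float)

def xi(R, p, v):
    """Extended pose in SE_2(3): rotation block plus position/velocity columns."""
    X = np.eye(5); X[:3, :3] = R; X[:3, 3] = p; X[:3, 4] = v
    return X

def make_U(w, a):
    U = np.zeros((5, 5)); U[:3, :3] = skew(w); U[:3, 4] = a
    return U

def make_G(g):
    G = np.zeros((5, 5)); G[:3, 4] = g
    return G

def make_B(b_w, b_a):
    B = np.zeros((5, 5)); B[:3, :3] = skew(b_w); B[:3, 4] = b_a
    return B

D = np.zeros((5, 5)); D[4, 3] = 1.0   # the constant coupling matrix of (4)
```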
Let us assume \(t_{i-1}\) is the start time of pre-integration at previous image frame \(\mathcal{F}_{i-1}\). We define \(t_{j}\) be the timestamp of an arbitrary IMU measurement between frames \(\mathcal{F}_{i-1}\) and \(\mathcal{F}_{i}\), and let \(t_{j^{\prime}}\) be the timestamp of its subsequent IMU measurement, such that the small integration time \(\delta=t_{j^{\prime}}-t_{j}\). Let \(\xi_{t_{j}}\) be the extended pose at time instant \(t_{j}\). The exact integration given (4), and assuming \(\mathbf{U}\) and \(\mathbf{B}\) are constant within the integration time \(\delta\) is
\[\xi_{t_{j^{\prime}}}=\exp\left(\delta(\mathbf{G}-\mathbf{D})\right)\xi_{t_{j}} \exp(\delta(\mathbf{U}-\mathbf{B}+\mathbf{D})) \tag{5}\]
Assuming there are \(N\) sets of IMU measurements between time \(t_{j}\) and \(t_{i-1}\), we have
\[\xi_{t_{j}}=\exp\left((\sum_{s=0}^{N}\delta_{s})(\mathbf{G}-\mathbf{D})\right) \xi_{t_{i-1}}\prod_{s=0}^{N}\exp(\delta_{s}(\mathbf{U}_{s}-\mathbf{B}_{s}+ \mathbf{D})) \tag{6}\]
Defining \(\mathcal{T}=\sum_{s=0}^{N}\delta_{s}\) and rearranging (6), we obtain
\[\xi_{t_{i-1}}^{-1}\exp\left(-\mathcal{T}(\mathbf{G}-\mathbf{D})\right)\xi_{t_{ j}}=\prod_{s=0}^{N}\exp(\delta_{s}(\mathbf{U}_{s}-\mathbf{B}_{s}+\mathbf{D})). \tag{7}\]
The exponential on the left equals
\[\exp\left(-\mathcal{T}(\mathbf{G}-\mathbf{D})\right)=\begin{bmatrix}\mathbf{I}&-\frac{1}{2}\mathcal{T}^{2}\mathbf{g}&-\mathcal{T}\mathbf{g}\\ 0&1&0\\ 0&\mathcal{T}&1\end{bmatrix}. \tag{8}\]
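This closed form arises because \(\mathbf{G}-\mathbf{D}\) is nilpotent of order three, so the exponential series terminates after its quadratic term. A small numerical sanity check (with arbitrarily chosen values) is given below.

```python
import numpy as np
from scipy.linalg import expm

g = np.array([0.0, 0.0, -9.81]); T = 0.25
G = np.zeros((5, 5)); G[:3, 4] = g
D = np.zeros((5, 5)); D[4, 3] = 1.0

X = -T * (G - D)
# The series exp(X) = I + X + X^2/2 + ... stops at the quadratic term.
assert np.allclose(expm(X), np.eye(5) + X + X @ X / 2.0)
```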
Substituting (8) into (7), the left hand side of (7) is exactly the same as the middle of equation (33) in [10], while the right hand side gives our newly derived pre-integration terms, where each of the exponentials can be expanded as
\[\exp(\delta_{s}(\mathbf{U}_{s}-\mathbf{B}_{s}+\mathbf{D}))= \tag{9}\] \[\begin{bmatrix}\exp(\delta_{s}(\omega_{s}-\mathbf{b}_{\omega_{s}})^{\wedge})&\mathbf{J}_{2}(\mathbf{a}_{s}-\mathbf{b}_{a_{s}})&\mathbf{J}_{1}(\mathbf{a}_{s}-\mathbf{b}_{a_{s}})\\ 0&1&0\\ 0&\delta_{s}&1\end{bmatrix},\]
where
\[\theta=\|\tilde{\omega}\| \tag{10}\] \[\mathbf{J}_{1}=\delta_{s}\mathbf{I}+\frac{1}{\theta^{2}}(1-\cos(\delta_{s}\theta))\tilde{\omega}^{\wedge}+\frac{1}{\theta^{3}}(\delta_{s}\theta-\sin(\delta_{s}\theta))(\tilde{\omega}^{\wedge})^{2}\] (11) \[\mathbf{J}_{2}=\frac{1}{2}\delta_{s}^{2}\mathbf{I}+\frac{1}{\theta^{3}}(\delta_{s}\theta-\sin(\delta_{s}\theta))\tilde{\omega}^{\wedge}+\frac{1}{\theta^{4}}(\frac{1}{2}\delta_{s}^{2}\theta^{2}+\cos(\delta_{s}\theta)-1)(\tilde{\omega}^{\wedge})^{2} \tag{12}\]
Here, we use \(\tilde{\omega}=\omega_{s}-\mathbf{b}_{\omega_{s}}\). The iterative pre-integration is finally given as
\[\begin{cases}\Delta\mathbf{R}_{t_{j^{\prime}}}^{t_{i}}=\Delta\mathbf{R}_{t_{j}}^{t_{i}}\exp(\delta_{j}(\omega_{j}-\mathbf{b}_{\omega_{j}})^{\wedge})\\ \Delta\mathbf{p}_{t_{j^{\prime}}}^{t_{i}}=\Delta\mathbf{p}_{t_{j}}^{t_{i}}+\delta_{j}\Delta\mathbf{v}_{t_{j}}^{t_{i}}+\Delta\mathbf{R}_{t_{j}}^{t_{i}}\mathbf{J}_{2}(\mathbf{a}_{j}-\mathbf{b}_{a_{j}})\\ \Delta\mathbf{v}_{t_{j^{\prime}}}^{t_{i}}=\Delta\mathbf{v}_{t_{j}}^{t_{i}}+\Delta\mathbf{R}_{t_{j}}^{t_{i}}\mathbf{J}_{1}(\mathbf{a}_{j}-\mathbf{b}_{a_{j}})\end{cases} \tag{13}\]
Note that under the Euler integration scheme, Jacobian terms \(\mathbf{J}_{1}\) and \(\mathbf{J}_{2}\) are simplified to be \(\mathbf{J}_{1}=\delta_{s}\mathbf{I}\) and \(\mathbf{J}_{2}=\frac{1}{2}\delta_{s}^{2}\mathbf{I}\).
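A minimal NumPy sketch of one iteration of (13), using the closed-form Jacobians (11)-(12) and falling back to the Euler limit for very small rotation rates, is given below. Function names are ours and this is not the authors' implementation.

```python
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]], dtype=float)

def so3_exp(phi):
    """Rodrigues formula for the SO(3) exponential of a rotation vector."""
    t = np.linalg.norm(phi); K = skew(phi)
    if t < 1e-9:
        return np.eye(3) + K
    return np.eye(3) + np.sin(t) / t * K + (1 - np.cos(t)) / t**2 * (K @ K)

def preintegrate_step(dR, dp, dv, w, a, b_w, b_a, dt):
    """Propagate the pre-integrated (dR, dp, dv) with one IMU sample, as in (13)."""
    wt, at = w - b_w, a - b_a
    th = np.linalg.norm(wt); W = skew(wt)
    if th < 1e-9:                                 # Euler limit of (11)-(12)
        J1, J2 = dt * np.eye(3), 0.5 * dt**2 * np.eye(3)
    else:
        J1 = (dt * np.eye(3)
              + (1 - np.cos(dt * th)) / th**2 * W
              + (dt * th - np.sin(dt * th)) / th**3 * (W @ W))
        J2 = (0.5 * dt**2 * np.eye(3)
              + (dt * th - np.sin(dt * th)) / th**3 * W
              + (0.5 * dt**2 * th**2 + np.cos(dt * th) - 1) / th**4 * (W @ W))
    dp_new = dp + dt * dv + dR @ (J2 @ at)
    dv_new = dv + dR @ (J1 @ at)
    dR_new = dR @ so3_exp(dt * wt)
    return dR_new, dp_new, dv_new
```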
In practical implementation, the noise-free terms are not available, and are thus substituted by their corresponding estimate denoted with the \((\hat{\cdot})\) notation. We then deal with the covariance propagation given the uncertainty of the previous estimate and measurement noise. We define the error terms as follows:
\[e_{ij^{\prime}}=[(e_{ij^{\prime}}^{\Delta\mathbf{R}})^{\top},(e_{ij^{\prime}}^{\Delta\mathbf{p}})^{\top},(e_{ij^{\prime}}^{\Delta\mathbf{v}})^{\top},(e_{j^{\prime}}^{\mathbf{b}_{\omega}})^{\top},(e_{j^{\prime}}^{\mathbf{b}_{a}})^{\top}]^{\top} \tag{14}\] \[e_{ij^{\prime}}^{\Delta\mathbf{R}}=\log\big((\Delta\hat{\mathbf{R}}_{t_{j^{\prime}}}^{t_{i}})^{-1}\Delta\mathbf{R}_{t_{j^{\prime}}}^{t_{i}}\big)^{\vee}\sim\mathcal{N}(0,\Sigma_{ij^{\prime}}^{\mathbf{R}})\in\mathbb{R}^{3}\] (15a) \[e_{ij^{\prime}}^{\Delta\mathbf{p}}=\Delta\hat{\mathbf{p}}_{t_{j^{\prime}}}^{t_{i}}-\Delta\mathbf{p}_{t_{j^{\prime}}}^{t_{i}}\sim\mathcal{N}(0,\Sigma_{ij^{\prime}}^{\mathbf{p}})\in\mathbb{R}^{3}\] (15b) \[e_{ij^{\prime}}^{\Delta\mathbf{v}}=\Delta\hat{\mathbf{v}}_{t_{j^{\prime}}}^{t_{i}}-\Delta\mathbf{v}_{t_{j^{\prime}}}^{t_{i}}\sim\mathcal{N}(0,\Sigma_{ij^{\prime}}^{\mathbf{v}})\in\mathbb{R}^{3}\] (15c) \[e_{j^{\prime}}^{\mathbf{b}_{\omega}}=\hat{\mathbf{b}}_{\omega_{j}}-\mathbf{b}_{\omega_{j}}\sim\mathcal{N}(0,\Sigma_{ij^{\prime}}^{\mathbf{b}_{\omega}})\in\mathbb{R}^{3}\] (15d) \[e_{j^{\prime}}^{\mathbf{b}_{a}}=\hat{\mathbf{b}}_{a_{j}}-\mathbf{b}_{a_{j}}\sim\mathcal{N}(0,\Sigma_{ij^{\prime}}^{\mathbf{b}_{a}})\in\mathbb{R}^{3} \tag{15e}\]
We derive the matrix representation of the pre-integration terms' evolution
\[e_{ij^{\prime}}=\mathbf{A}_{j^{\prime}}e_{ij}+\mathbf{B}_{j^{\prime}}\mathbf{n}_{ j^{\prime}} \tag{16}\]
where
\[\mathbf{A}=\begin{bmatrix}\exp(-\delta_{j^{\prime}}(\hat{\omega}_{j^{\prime}}-\hat{ \mathbf{b}}_{\omega_{j^{\prime}}})^{\wedge})&0&0&-\delta_{j^{\prime}}\mathbf{I}&0 \\ -\Delta\mathbf{\hat{R}}_{t_{j}}^{t_{i}}(\hat{\mathbf{J}}_{2}(\hat{\mathbf{a}}_{j^{ \prime}}-\hat{\mathbf{b}}_{\omega_{j^{\prime}}})^{\wedge}&\mathbf{I}& \delta_{
significantly improve the tracking performance when the camera revisited a previous location. We define the body frame to be the same as the IMU frame. We first project all local map points onto the multi-camera image at the current time by using the predicted relative pose obtained from IMU pre-integration. As shown in Fig 3, feature matching is done for both intra-cameras and inter-cameras to enhance the co-visibility relationships. Given the multi-camera systems are precisely calibrated, the projected landmarks on an arbitrary camera can be formulated by:
\[(u_{c_{k}}^{n},v_{c_{k}}^{n})=\pi_{c_{k}}(\mathbf{T}_{bc_{k}}^{-1}\mathbf{T}_{ \mathcal{I}_{i}^{-1}}^{-1}\mathbf{T}_{i-1}^{-1}\mathbf{p}_{n}), \tag{20}\]
where \(\mathbf{p}_{n}\) denotes the position of landmark \(n\) in world coordinates, and \(\pi_{c_{k}}\) is the projection function which maps \(\mathbf{p}_{n}\) to a pixel location \((u_{c_{k}}^{n},v_{c_{k}}^{n})\) using the intrinsic parameters of camera \(c_{k}\). We define \(\mathbf{T}_{bc_{k}}\) as the extrinsic parameters of camera \(c_{k}\) with respect to the IMU coordinates. Let \(\mathbf{T}_{i-1}\) be the absolute pose of reference frame \(\mathcal{F}_{i-1}\) in world coordinates and \(\mathbf{T}_{\mathcal{I}_{i}^{i-1}}\) be the estimated relative pose between the current frame \(\mathcal{F}_{i}\) and the reference frame obtained from IMU pre-integration. For those 2D features that are not associated with landmarks, we perform feature matching between the current frame and keyframes, and create new local map points through triangulation. All these relationships are then used in the back-end optimization of MAVIS to augment co-visibility edges and improve the positioning accuracy. In addition, we utilize the distance between descriptors for further validation and employ a robust cost function in the back-end optimization to eliminate all incorrectly matched feature points.
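The chain of transforms in (20) can be sketched as follows. For readability the sketch uses a pinhole intrinsic matrix \(K\) as a stand-in for the fisheye model \(\pi_{c_k}\), and all names are illustrative rather than taken from the authors' code.

```python
import numpy as np

def project_landmark(p_w, T_w_bprev, T_bprev_bcur, T_b_cam, K):
    """Project a world landmark into camera c_k following (20).
    T_w_bprev   : reference body pose in the world frame (T_{i-1})
    T_bprev_bcur: IMU-predicted relative pose of the current body frame
    T_b_cam     : camera extrinsics T_{b c_k}; all 4x4 homogeneous transforms."""
    p_h = np.append(p_w, 1.0)
    p_cam = (np.linalg.inv(T_b_cam) @ np.linalg.inv(T_bprev_bcur)
             @ np.linalg.inv(T_w_bprev) @ p_h)
    x, y, z = p_cam[:3]
    u, v, _ = K @ np.array([x / z, y / z, 1.0])
    return (u, v), z > 0.0            # pixel location and a cheirality check
```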
### _Back-end optimization_
Similar to many other visual-inertial SLAM/VIO systems based on optimization scheme [2, 9, 22], our MAVIS updates all the state vector by minimizing both visual reprojection errors based on observations from all cameras and error terms from pre-integrated IMU measurement using a sliding-window bundle adjustment scheme. We define the state vector \(\mathcal{X}\) as the combination of motion states \(\{\mathbf{x}\}=[\mathbf{x}_{1}\cdots\mathbf{x}_{v}]\) and landmarks states \(\{\mathbf{p}\}=[\mathbf{p}_{0}\cdots\mathbf{p}_{l}]\), whereas \(\mathbf{x}_{i}\) contains 6 DoF body poses \(\mathbf{R}_{i}\) and \(\mathbf{t}_{i}\), linear velocity \(\mathbf{v}_{i}\), and IMU biases \(\mathbf{b}_{\omega_{i}}\) and \(\mathbf{b}_{a_{i}}\). \(\mathbf{p}_{n}\) denotes the position of the landmark \(n\). The visual-inertial BA can be formulated as:
\[\min_{\mathcal{X}}\sum_{i=1}^{v}\Bigg{(}\|\mathbf{r}_{\mathbf{x}_{i},\mathcal{ I}_{i}^{i-1}}\|_{\sum_{\mathbf{x}_{i}}:\mathcal{I}_{i}^{i-1}}^{2}+\sum_{k=0}^{m} \sum_{n\in\mathcal{L}_{i}}\rho\|\mathbf{r}_{\mathbf{e}_{k},\mathbf{x}_{i}, \mathbf{p}_{n}}\|_{\sum_{\mathbf{e}_{k},\mathbf{x}_{i},\mathbf{p}_{n}}}\Bigg{)}, \tag{21}\]
where
* \(\mathcal{L}_{i}\) is the set of landmarks observed in keyframe \(\mathcal{F}_{i}\).
* \(\mathbf{r}_{\mathbf{x}_{i},\mathcal{I}_{i}^{i-1}}\) is the residual for IMU, and \(\mathbf{r}_{\mathbf{c}_{k},\mathbf{x}_{i},\mathbf{p}_{n}}\) is the residual for visual measurements of camera \(c_{k}\).
* \(\rho(\cdot)\) is the robust kernel used to eliminate outliers. A schematic sketch of how these terms are assembled is given below.
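The paper does not name a specific robust kernel, so the following schematic sketch uses a Huber kernel as one common choice and shows how the covariance-weighted IMU and visual terms of (21) could be accumulated for one sliding window; all function names are illustrative.

```python
import numpy as np

def huber(sq, delta=1.0):
    """Huber robust kernel applied to a squared Mahalanobis norm."""
    r = np.sqrt(sq)
    return sq if r <= delta else 2.0 * delta * r - delta**2

def window_cost(imu_terms, visual_terms):
    """imu_terms / visual_terms: lists of (residual, covariance) pairs, i.e. one
    IMU term per consecutive keyframe pair and one visual term per
    (camera, keyframe, landmark) observation in the window."""
    cost = 0.0
    for r, S in imu_terms:
        cost += r @ np.linalg.solve(S, r)           # squared Mahalanobis norm
    for r, S in visual_terms:
        cost += huber(r @ np.linalg.solve(S, r))    # robustified reprojection term
    return cost
```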
## IV Application to Multi-camera Vi-SLAM
An overview of our system architecture is shown in Figure 2. After introducing IMU pre-integration, front-end tracking, and back-end optimization modules in Sec III, we now proceed to the implementation particulars in this section, which directly contribute to the overall precision and robustness of our SLAM system.
### _Visual measurement pre-processing_
To address the challenges in real-world application scenarios and released datasets from [36], such as dark scenes, frame drops, and data discontinuities, we conduct data pre-processing prior to the feature extraction step. Specifically, we apply histogram equalization to compensate for dark frames. This technique significantly enhances both the quantity and distribution of the extracted feature points. Additionally, as multi-camera devices require increased bandwidth for transmitting image data, they are more likely to encounter frame drops and synchronization issues. We therefore leverage the remaining images in feature tracking to avoid integrating IMU data for long duration. We obtain the time delay between each camera and the IMU during the camera-IMU extrinsic parameter calibration process, and add a mid-exposure time compensation parameter to further reduce the impact of synchronization errors. By adopting these approaches, we achieve visible improvements in the performance of our SLAM solution.
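For illustration, the dark-frame compensation could look like the OpenCV snippet below; plain histogram equalization is what the text describes, while the CLAHE step is an optional local refinement we add here as an assumption.

```python
import cv2

def preprocess(gray):
    """Brighten dark frames before feature extraction."""
    eq = cv2.equalizeHist(gray)                                   # global equalization
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))   # optional local variant
    return clahe.apply(eq)
```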
### _Camera-IMU initialization_
Inspired by [9], we employ a robust and accurate multi-camera IMU initialization method in our MAVIS. We firstly use stereo information from the first frame to generate an initial map. Once feature depth is known in the first keyframe, we project all landmarks in adjacent keyframes using the propagated motion model. Notably, all projections in subsequent stages are performed for both intra-camera and inter-camera scenarios. Please refer to Section III-C for detailed front-end tracking. Subsequently, we execute bundle adjustment for pure visual MAP estimation within 2 seconds while calculating IMU pre-integration and co-variance between adjacent keyframes. Following the pure IMU optimization method, we jointly optimize map point positions, IMU parameters, and camera poses through multi-visual-inertial bundle adjustment. In order to avoid trajectory jumps in real-time outputs, we fix the last frame's pose during optimization. More advanced algorithms such as [39] can also be employed, but is not integrated into MAVIS.
Fig. 3: Geometric relationships of local feature matching across multiple cameras.
### _Loop closure_
The loop-closure module utilizes the DBoW2 library [40] for candidate frame detection. To fully exploit the multi-camera system's wide field-of-view, we detect putative loop closures from both intra-cameras and inter-camera, which enables our MAVIS system to correctly detect loops--like u-turn motions--that regular monocular or stereo VIO systems fail to detect (cf. illustrated in Figure 4). Similar to [1, 7, 9], a geometric verification is performed to remove outlier loop closures. Upon detecting a correct closed loop, we execute a global bundle adjustment, optimizing the entire trajectory with all available information, substantially reducing drift across most sequences.
### _Multi-Agent SLAM_
To participate in Hilti SLAM Challenges 2023 [41] multi-session, we adopt a similar approach for merging maps via our loop-closure correction module (refer to Figure 5). We employ the Bag-of-Words (BoW) model [40] to identify potential overlapping keyframes, and utilizing local maps to assist geometric alignment. By fusing multiple sub-maps and conducting successful verification checks, we finally generate a globally-consistent map. In the multi-session challenge context, we designate one single-session sequence as the base for the global map. Upon the completion of SLAM processing for a sequence, the map is locally saved, and pre-loaded prior to processing subsequent data sequences. With systematic processing of all sequences and map fusion, we can integrate all submaps into a global map.
## V Experiments
We evaluate the performance of our method on diverse datasets. We firstly compare it to most state-of-the-art visual-inertial approaches in stereo-inertial setup using the EuRoC datasets [38]. We furthermore evaluate our system under multi-camera setup on the Hilti SLAM Challenge 2023 Datasets [41], featuring challenging sequences from handheld devices and ground robots. Both qualitative and quantitative results highlight the effectiveness of our system. Our framework is implemented in C++ and evaluated on an Ubuntu 20.04 desktop, equipped with an AMD Ryzen 9 5950X 16-Core Processor.
### _Performance on EuRoC datasets_
Our first experiments utilize the widely-used EuRoC datasets [38], featuring sequences captured by a drone flying inside the room, equipped with synchronized stereo cameras and an IMU. We benchmark our **MAVIS** against state-of-the-art methods, including VINS-MONO [1], OKVIS [2], SVOGTSAM [3], EqVIO [4], OpenVINS [6], VINS-FUSION [5], Kimera [7], BASALT [8], and ORB-SLAM3 [9]. We employ the EuRoC dataset's calibration results and exclude our IMU intrinsic compensation for a fair comparison. We quantitatively evaluate Absolute Trajectory Error (RMSE in meters) using EVO [42] and summarize the results in Table I. Best results are in **bold**, while "-" indicates a method's failure to complete the sequence.
As shown in the table above, our method outperforms in most sequences. Among 11 sequences in ATE evaluations, our approach achieves the best results in 6 of them, with only ORB-SLAM3 [9] and BASALT [8] approaching our method. We also provide the standard deviation (std) of RMSE for all sequences, for measuring the robustness of different algorithms on the same datasets. Our method again surpasses all alternatives with a 0.017 std error. To summarize, our stereo-inertial setup demonstrates state-of-the-art performance in terms of accuracy and robustness. This could be attributed to our improved IMU pre-integration formulation, which provides more precise motion modeling and robustness in handling rapid rotations and extended integration times.
### _Performance on Hilti SLAM Challenge 2023_
To thoroughly evaluate the robustness and accuracy of our multi-camera VI-SLAM system in challenging conditions, we conducted experiments on the Hilti SLAM Challenge
\begin{table}
\begin{tabular}{l|l|c c c c c c c c c c c} \hline \hline & & **MH-01** & **MH-02** & **MH-03** & **MH-04** & **MH-05** & **V-101** & **V-102** & **V-103** & **V-201** & **V-202** & **V-203** & **Std.** \\ \hline \multirow{4}{*}{\begin{tabular}{c} **Monocular** \\ **Inertial** \\ \end{tabular} } & VINS-MONO [1] & 0.070 & 0.050 & 0.080 & 0.120 & 0.090 & 0.040 & 0.060 & 0.110 & 0.060 & 0.060 & 0.090 & 0.025 \\ & OKVIS [2] & 0.160 & 0.220 & 0.240 & 0.340 & 0.470 & 0.090 & 0.200 & 0.240 & 0.130 & 0.160 & 0.290 & 0.106 \\ & SVOGTSAM [3] & 0.050 & 0.030 & 0.120 & 0.130 & 0.160 & 0.070 & 0.110 & - & 0.070 & - & - & 0.044 \\ & EqVIO [4] & 0.176 & 0.236 & 0.112 & 0.165 & 0.238 & 0.063 & 0.128 & 0.216 & 0.058 & 0.158 & 0.176 & 0.062 \\ \hline \multirow{4}{*}{
\begin{tabular}{c} **Stero** \\ **Inertial** \\ \end{tabular} } & OpenVINS [6] & 0.183 & 0.129 & 0.170 & 0.172 & 0.212 & 0.055 & 0.044 & 0.069 & 0.058 & 0.045 & 0.147 & 0.063 \\ & VINS-FUSION [5] & 0.181 & 0.092 & 0.167 & 0.203 & 0.416 & 0.064 & 0.270 & 0.157 & 0.065 & - & 0.160 & 0.105 \\ & Kimera [7] & 0.080 & 0.090 & 0.110 & 0.150 & 0.240 & 0.050 & 0.110 & 0.120 & 0.070 & 0.100 & 0.190 & 0.055 \\ & BASALT [8] & 0.080 & 0.060 & 0.050 & 0.100 & 0.080 & 0.040 & 0.020 & 0.030 & **0.030** & 0.020 & - & 0.028 \\ & ORB-SLAM3 [37] & 0.035 & 0.033 & 0.035 & **0.051** & 0.082 & 0.038 & **0.014** & 0.024 & 0.032 & **0.014** & **0.024** & 0.019 \\ & **MAVIS(Ours)** & **0.024** & **0.025** & **0.032** & 0.053 & **0.075** & **0.034** & 0.016 & **0.021** & 0.031 & 0.021 & 0.039 & **0.017** \\ \hline \hline \end{tabular}
\end{table} TABLE I: Performance comparison on the EuRoC [38] datasets, RMSE of Absolute trajectory Error (ATE) in meters.
Fig. 4: Multiple detected closed loops from _site1_handheld_4_ sequence in Hilti SLAM Challenge 2023. Accumulated drifts are significantly reduced, ensuring enhanced accuracy and robustness.
2023 datasets [41]. These datasets involve handheld sequences using an Alphasense multi-camera development kit from Sevensense Robotics AG, which synchronizes an IMU with four grayscale fisheye cameras for data collection. For robot sequences, it uses four stereo OAK-D cameras and a high-end Xsens MTi-670 IMU mounted on a ground robot. We maintain consistent parameters within each dataset for our experiments. However, for the robot sequences in the Hilti Challenge 2023 datasets, we encountered inter-stereo-pair time synchronization issues in _site2_robot_1_. Consequently, we used only a pair of stereo cameras with IMU for this sequence. For the other two robot sequences, we selected the best-synchronized four cameras in each sequence. The datasets provide millimeter-accurate ground truth on multiple control points for ATE evaluation. We compared our approach to BAMF-SLAM [23], ranked 2nd in the single-session vision/IMU-only track, and Maplab2.0 [43], ranked 2nd in the multi-session track. Results are illustrated in Table II; the following points are worth noting:
* We provide the difficulties, timing and score information for each sequence in Table II. The datasets suffer from challenges such as low-light and textureless environments, unsynchronized cameras, data loss, and the absence of closed-loop exploration scenarios. Nevertheless, all methods successfully process the entire datasets without any gross errors.
* Our method achieves superior performance, clearly outperforming the alternatives in both the single-session and multi-session tracks. We achieve close to 2 times the score of the 2nd place. More detailed analysis and further quantitative results can be found in our technical report in [41].
* Our system's runtime performance also demonstrates its practical potential. It runs in real-time on a standard desktop using only CPU. However, BAMF-SLAM [23] requires a Nvidia GeForce RTX 4090 GPU for processing and still runs 1.6 times slower than ours.
We also test our method in the Hilti SLAM Challenge 2022 [36], achieving best results with a score of 130.2, which is three times higher than the 2nd place's score of 40.9. Please refer to the live leaderboard on [41] for more details.
## VI Conclusion
In this paper, we introduce MAVIS, an optimization-based visual-inertial SLAM system for multi-camera systems. In comparison to alternatives, we present an exact IMU pre-integration formulation based on the \(\mathbf{SE_{2}(3)}\) exponential, effectively improving tracking performance, especially during rapid rotations and extended integration times. We also extend front-end tracking and back-end optimization modules for multi-camera systems and introduce implementation particulars to enhance system performance in challenging scenarios. Extensive experiments across multiple datasets validate the superior performance of our method. We believe this robust and versatile SLAM system holds significant practical value for the community.
\begin{table}
\begin{tabular}{c|c c c|c|c|c} \hline \hline
**Sequence name** & \multicolumn{2}{c|}{**Diffuculties**} & \multicolumn{2}{c|}{**Sequence length**} & \multicolumn{2}{c|}{**MAVIS(Ours)**} & \multicolumn{1}{c}{BAMF-SLAM} & \multicolumn{1}{c}{Maplab2.0} \\ \hline site1\_handheld\_1 & dark: around stairs going to Floor 1 & 204.71s & 224.59s & **32.5** & 10.0 & 15.0 \\ site1\_handheld\_2 & dark: around stairs going to Floor 2 & 167.11s & 211.98s & **23.75** & 5.0 & 12.5 \\ site1\_handheld\_3 & insufficient overlap for multi-session & 170.63s & 204.09s & **22.5** & 17.5 & 5.0 \\ site1\_handheld\_4 & - & 295.42s & 364.41s & **30.0** & 5.0 & 8.33 \\ site1\_handheld\_5 & - & 159.29s & 196.86s & **26.67** & 11.67 & 13.33 \\ site1\_mmid\_session & & & & 4.17 & - & **5.28** \\ \hline site2\_robot\_1 & unsynchronised cameras, long, no loop closure & 699.31s & 531.89s & **15.71** & 6.43 & 7.86 \\ site2\_robot\_2 & unsynchronised cameras & 305.79s & 194.40s & **53.33** & 28.33 & 15.0 \\ site2\_robot\_3 & dark, insufficient overlap for multi-session & 359.00s & 187.30s & **19.0** & 13.0 & 5.0 \\ site2\_mmid\_session & & & & **3.33** & - & 2.33 \\ \hline site3\_handheld\_1 & - & 97.18s & 124.07s & **105.0** & 45.0 & 10.0 \\ site3\_handheld\_2 & dropped data & 148.13s & 182.31s & 35.0 & **56.67** & 11.67 \\ site3\_handheld\_3 & dropped data & 189.60s & 243.46s & 23.75 & **31.25** & 7.5 \\ site3\_handheld\_4 & - & 106.88s & 130.34s & **65.0** & 37.5 & 10.0 \\ site3\_mmid\_session & & & & **19.55** & - & 7.73 \\ \hline \hline \end{tabular}
\end{table} TABLE II: Difficulties, timing and score information for test sequences
Fig. 5: Perspective view of reconstructed scenes. From left to right: site_1, site_2, site_3. Trajectory is coloured in sequence from red, green, blue, magenta. | ```
私たちは、複数の partially overlapped カメラシステムを対象とした、新しい最適化ベースの視覚-慣性SLAMシステム「MAVIS」を提案します。私たちのフレームワークは、マルチカメラシステムの広視野を最大限に活用し、慣性計測器(IMU)によって提供されるメトリックスケール測定値を完全に活用しています。IMUの事前統合方法を、SE_2(3)の自動同型変換の指数関数に基づいた改良型で導入し、高速回転運動や長い積分時間でのトラッキング性能を効果的に高めます。さらに、単眼またはステレオ設定の従来のフロントエンドトラッキングとバックエンド最適化モジュールを、マルチカメラシステムに向けた拡張し、このシステムの課題対応性能に貢献する実装の詳細も導入しました。このアプローチの実用的な有効性は、公開データセットでの実験結果によって裏付けられます。MAVISは、Hilti SLAM Challenge 2 |
2301.13426 | Discrete Search in Heterogeneous Integer Spaces for Automated Choice of
Parameters using Correct-by-Construction Methods | Discrete Search of integer spaces for tool parameter values provides a
powerful methodology for modeling and finding a heuristically optimal parameter
list for a given system. Current tools and implementations that exist focus
primarily on homogeneous tool parameters, and the implementations for
heterogeneous tool parameters is lacking. In this paper we introduce a
correct-by-construction method of heterogeneous parameter reachability and
validity search, and further outline the implementation as well as a
demonstration using examples of heterogeneous systems that this tool can be
used for. | Omar Radwan, Yilin Zhang, Luca Geretti | 2023-01-31T05:47:25 | http://arxiv.org/abs/2301.13426v1 | Discrete Search in Heterogeneous Integer Spaces for Automated Choice of Parameters using Correct-by-Construction Methods
###### Abstract
Discrete Search of integer spaces for tool parameter values provides a powerful methodology for modeling and finding a heuristically optimal parameter list for a given system. Current tools and implementations that exist focus primarily on homogeneous tool parameters, and the implementations for heterogeneous tool parameters is lacking. In this paper we introduce a correct-by-construction method of heterogeneous parameter reachability and validity search, and further outline the implementation as well as a demonstration using examples of heterogeneous systems that this tool can be used for.
## I Introduction
### _Premise_
Discrete Search of integer spaces provides a powerful mechanism through which to explore the reachable set of a given system. Current design tools work primarily for homogeneous parameter spaces, and mapping a heterogeneous parameter space into the integer domain would provide a strong backbone for performance and allow for a wide range of uses in many hybrid systems, as well as for hybrid parameters contained within a single system. There are precautions that would need to be taken for hybrid systems: these primarily consist of unsafe states, which, even though reachable, would be considered unsafe in a real-world implementation, as well as dependencies between variables that go beyond the trivial homogeneous dependencies (e.g., comparing two integers as opposed to comparing a floating-point value against a Boolean). There also exist optimal state locations of the parameter set, and those are modeled using an arbitrary cost function.
### _Related Work_
Related work consists primarily of homogeneous tool parameter exploration implementations, which concern themselves with arriving at the reachable set for homogeneous parameter sets. This includes the tool Ariadne [1], which has built-in features that allow it to find an approximation of the given reachable set by controlling the growth of the approximation error.
One other concern that arises when attempting to model heterogeneous parameters in integer spaces is the problem of solvability within bounded time with close approximation, and as outlined in [2], there does exist a finite bound for finite discovery. There was a foray into unbounded analysis, but that is infeasible given the constraints and would be too computationally exhaustive. Another issue that comes up is discrete versus non-discrete evolution in terms of time, and this was a problem resolved by setting as a condition that there can only exist discrete time steps and discrete evolution.
### _Our Approach_
For the implementation demonstrated in this paper, we focus on a number of contributions that create a fast and efficient method of finding the optimal set given existing constraints and cost. We also define a semantic format that supports the representation of heterogeneous parameters, which better suits it for discrete search along hybrid domains.
For exploring the adjacent set space from our beginning iteration point (initial state), there are a number of possible implementation decisions that would need to be made on how best to explore the reachable set given the constraints. The path that we decided on was to create a correct-by-construction approach that allows the exploration tool to only explore the portion of the reachable set that is also valid given the constraints and dependencies that are supplied. Our flow is as follows: given a parameter list which can consist of integer, Boolean, and composite parameters, as well as a list of constraints and dependencies between variables, and a cost function, we aim to find a valid parameter state that satisfies all of our given requirements.
For our implementation, we split our computational engine into two general algorithms. Our first algorithm involves computing a correct-by-construction interval for a given parameter, given our requirements and the current state of the other parameters that exist within our set space. The second algorithm is our step-by-step evolution iteration across the set space of the parameter list, based on the computation of local optimal cost.

Fig. 1: From Citation [3]
Compared to existing and related works, our approach has the following contributions:
* Developed a representation for heterogeneous parameter sets that allows for the discretization of all parameters and results in the ability for integer space exploration for all relevant types
* Created a correct-by-construction approach to not only finding the reachable set of a given parameter set, but also allowing the inclusion of heterogeneous inter-parameter dependencies and assertions.
* Designed a method of evolution that allows for quick computation of adjacent states for a given set of already locally-optimal parameter instances with a method of back-tracing and reset if arriving at an invalid location
* Demonstrated the applicability and the versatility of our implementation on two examples that involve computing minimum cost for a computer architecture design and a re-programmable logic circuit with a demonstration of the implementation of pseudo-Boolean constraints
## II Implementation
### _Environment and Language Considerations_
We decided on implementing our design in Python [4], the reason being that Python offers a host of libraries and type-interfacing that allow us to quickly prototype, verify, and extend during testing. We also chose Python because it interfaces easily with JSON [5], which is our input format of choice. JSON was chosen because it is very well-adopted and provides an easy interface for other CAD tools to create tool-parameter sets for analysis using our program.
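The paper does not publish the JSON schema, so the following is a hypothetical example of what a parameter list with assertions and a cost function could look like when fed to such a tool; all field names are our own.

```python
import json

spec = json.loads("""
{
  "parameters": [
    {"name": "cache_size",      "type": "integer",  "range": [1, 64]},
    {"name": "enable_prefetch", "type": "boolean"},
    {"name": "clock_divider",   "type": "composite", "members": [
        {"name": "mantissa", "type": "integer", "range": [0, 1023]},
        {"name": "exponent", "type": "integer", "range": [-8, 8]}]}
  ],
  "assertions": ["(not enable_prefetch) or (cache_size >= 8)"],
  "cost": "2 * cache_size + 4 * enable_prefetch"
}
""")
print(spec["parameters"][2]["members"][0]["name"])   # -> mantissa
```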
We also use a number of Python libraries for the computations required by our implementation. Numpy [6] deserves special recognition, as it allows very fast computation over intervals, arrays, and sets. Since we operate in the integer domain, using Numpy integer arrays makes the cost of computation a significantly smaller concern during implementation.
### _Motivation for Design_
A number of limitations remain in improving the performance of discrete search in heterogeneous space. Firstly, there is a slight weakness in parsing string-type assertions and evaluating them in a computationally static format, as opposed to using full abstract syntax trees and symbolic interval computation. Secondly, given the variously typed parameters and assertion relations, a uniform interface design is necessary so that the algorithm implementation is isolated from complicated type transformations; this is why JSON was selected. However, JSON could become unwieldy if enumerated or vector parameters were to be considered. In that case, a tool that generates a statically enumerated JSON format acceptable to our program would be required.
### _Evolution Algorithm_
In this section, we introduce how the program explores the feasible set constrained by assertions. The JSON-format input is interpreted and loaded into our program. For the sake of generality, we assume that there are \(n\) parameters denoted as \(x_{1},\cdots,x_{n}\). First, for each parameter \(x_{i}\), we randomly generate \(N-1\) valid neighboring points. For the random sampling of these points, we experimented with several methods: uniform sampling from the valid interval, and linear and square weighted sampling with respect to the distance from the current value. After testing, we found that square weighting was the most effective, and we demonstrate these findings in our examples. Together with \(x_{i}\) itself, these \(N\) points form a list \(\{x_{i}^{j}\}_{j=1}^{N}\). In total there are \(n\) lists.
During the evolution process, each point randomly generates a neighboring point from its valid set. Therefore, all \(n\cdot N\) points generate another \(n\) new lists. Without loss of generality, we denote these \(n\) new lists as \(\{x_{i}^{j}\}_{j=N+1}^{2N}\). Next, we concatenate the original list and the new list with the same subscript \(i\) to get \(n\) lists \(\{x_{i}^{j}\}_{j=1}^{2N}\). From these \(n\) lists, we evaluate \(2N\) cost function values \(\{c_{j}=F(x_{1}^{j},x_{2}^{j},\cdots,x_{n}^{j})\mid j=1,\cdots,2N\}\). Of these \(2N\) cost values, we keep the smaller half and the corresponding parameter values to form \(n\) new lists. We repeat the above steps until the termination requirements are satisfied. Pseudo-code for this algorithm can be found in Algorithm 1.
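As a concrete, single-parameter illustration of this generate-and-keep-the-cheaper-half step, the following sketch uses hypothetical helper names and a toy cost function; it is not the tool's actual code.

```
import numpy as np

rng = np.random.default_rng(0)

def evolve_once(points, cost, neighbor):
    # Every point spawns one neighbour, then the cheaper half of the
    # combined population is kept for the next iteration.
    candidates = np.concatenate([points, neighbor(points)])
    costs = cost(candidates)
    keep = np.argsort(costs)[: len(points)]
    return candidates[keep]

# Toy example: minimise (x - 7)^2 over integers.
cost = lambda xs: (xs - 7) ** 2
neighbor = lambda xs: xs + rng.integers(-2, 3, size=xs.shape)
population = rng.integers(0, 100, size=8)
for _ in range(20):
    population = evolve_once(population, cost, neighbor)
print(population)  # values clustered around 7
```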
### _Approach for feasibility checking between heterogeneous parameters_
To define the set of parameters that exist for a given system, we supply two atomic types and one composite type:
1. Integer type
2. Boolean type
3. Composite type
Integers exist in the integer domain, and Booleans likewise in the Boolean domain. Composites are different in that they are modeled like an array: given a composite parameter \(C\), \(C\) can contain any number of composites, Booleans, and integers. This allows the modeling of parameters that cannot be expressed as strictly scalar integer or Boolean values; floats, complex numbers, and vectors are all examples of what can be modeled as a composite set. Furthermore, to maintain the desired behavior of these parameters, the constraint paradigm we introduce allows us to describe how these composite parameters undergo evolution.
As an example, take \(Cube\), which is of type composite and is defined by three equal-length sides \(x,y,z\), such that \(Cube(t)=\{x,y,z\in\mathbb{Z},x==y==z\}\ \forall t\), where \(t\) is the time step during evolution. For this parameter, the instantiation of the domain of each sub-parameter goes with the parameter declarations, while the instantiation of the constraint that is intrinsic to cubes is added to the given constraints field.
This paradigm of allowing composite parameters to have unique behaviors could lead to invalid states during evolution: if one sub-parameter evolved independently and was no longer equal to the other two, the result would be an undesirable state. For this reason, correct-by-construction interval generation for each of the sub-parameters is done with all assertions and constraints in mind.
One note on using composite parameters to model floating-point numbers: initially we had planned to incorporate a floating-point type during development, but setting properties for floating point as an atomic type is redundant, since all the properties of a floating-point value (mantissa, exponent, significant figures) can be modeled as sub-parameters of a composite value. The user can then specify the desired constraints and behaviors for comparing and combining the composite floating-point value with other parameters.
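For illustration, one way the \(Cube\) composite from above could be written down as data is sketched below; the dictionary layout is a hypothetical rendering of our parameter format, not its exact schema.

```
# Hypothetical rendering of a composite parameter with an intrinsic constraint.
cube_parameter = {
    "name": "Cube",
    "type": "composite",
    "sub_parameters": [
        {"name": "x", "type": "int", "lower": 1, "upper": 100},
        {"name": "y", "type": "int", "lower": 1, "upper": 100},
        {"name": "z", "type": "int", "lower": 1, "upper": 100},
    ],
    # The constraint intrinsic to cubes sits alongside any other assertions.
    "constraints": ["x == y", "y == z"],
}
print(cube_parameter["constraints"])
```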
### _Feasibility Checking given Constraints_
In this section, we give a detailed explanation of how to construct the valid neighboring set. Suppose that there are \(m\) assertions \(\{\mathcal{A}_{i}\}_{i=1}^{m}\) on \(n\) parameters. For each parameter \(x_{i}\), the assertions containing \(x_{i}\) are selected out of the \(m\), i.e., \(\{\mathcal{A}_{k}\mid x_{i}\in\mathcal{A}_{k}\}\). Next, we iterate through the other parameters and substitute their values into these assertions. Finally, we intersect all the intervals obtained from evaluating the assertions to get the final interval. A new value for \(x_{i}\) is sampled randomly from the final interval, weighted by the square of each value's distance to \(x_{i}\): by default, values closer to \(x_{i}\) have a higher probability of being selected. More details can be found in Algorithm 2.
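The following sketch illustrates the square-distance-weighted sampling step on a single already-intersected interval; the helper name and the interval are made up for the example.

```
import numpy as np

rng = np.random.default_rng(1)

def sample_new_value(current, interval_values):
    # Arrange candidates by increasing distance to the current value, then
    # give the closest candidate the largest (squared) weight.
    candidates = np.asarray(sorted(interval_values, key=lambda v: abs(v - current)))
    weights = np.arange(len(candidates), 0, -1) ** 2
    return rng.choice(candidates, p=weights / weights.sum())

# Final interval for x_i after intersecting all assertion intervals, e.g. [10, 20].
final_interval = np.arange(10, 21)
print(sample_new_value(14, final_interval))
```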
```
Input:  List of variables L_v, iteration counter T, list of assertions L_a, cost function F
Output: Optimal values of the variables L_v*

// Initial value selection: we enter the set space at a point we presume to be valid
foreach v in L_v do
    v := sample uniformly between the lower and upper bound of v
    construct V_i as the set of n samples of v_i
end foreach

while T <= K do
    foreach variable v_i in L_v do
        // V_i is the set of n current values of v_i
        S_{v_i} := get_intersect_of_all_valid_intervals(L_a, L_v, v_i)
        S_sorted := S_{v_i} arranged by increasing distance to the current value of v_i
        foreach weight w (one per element of S_sorted) do
            w := (length of S_sorted - index of w)^2
        end foreach
        draw n new values of v_i from S_sorted by weighted sampling with these weights
        append these n values to V_i
        construct n new variable lists {L_v^j}_{j=1}^n, each equal to L_v except that
            L_v^j[i] is one of the newly drawn values
        pick L_v^k with minimum F(L_v^k) among {L_v^j}_{j=1}^n
        update L_v[i] := L_v^k[i]
        delete the values in V_i with the n highest cost values
        update T
    end foreach
end while

return L_v
```
**Algorithm 1:** Evolution of Adjacent Optimal Cost
### _Desired Implementation Aspects that Proved Infeasible_
One initial idea that we considered well thought out and feasible was the incorporation of symbolic computation for generating the valid intervals from our constraints and dependencies. The Sympy [7] library in Python was going to be used for this purpose. Though the algorithm was functional, the cost of symbolic computation was prohibitive and not feasible for a general-use case. After considerable effort to make it feasible, we found that even the Sympy project itself acknowledges that repeated substitution and evaluation is cost-prohibitive and recommends other avenues for repetitive computation. For this reason we had to recalibrate and find another solution: string replacement of our given parameters with their values into the string representations of our constraints, dependencies, and costs. These string representations are then converted into lambda functions that operate on Numpy arrays. Since Numpy uses C libraries on the back end, this reduced our computation time by an order of magnitude for mostly the same functionality.
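A minimal sketch of this string-to-lambda idea is shown below; the names and the constraint are illustrative, and this is only one possible variant of the substitution scheme described above.

```
import numpy as np

constraint = "x + 2 * y <= 40"   # illustrative constraint string

def make_lambda(expr, names):
    # eval is used purely to illustrate turning a string into a callable.
    return eval("lambda " + ", ".join(names) + ": " + expr)

check = make_lambda(constraint, ["x", "y"])
xs = np.arange(0, 50)        # candidate values for x
mask = check(xs, 10)         # vectorised feasibility check with y fixed to 10
print(xs[mask])              # values of x satisfying the constraint
```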
The missing functionality is due to the inherent behavioral properties of lambda functions. Symbolic computation was desirable because it allowed the incorporation of rigorous Boolean SAT exploration, which is not possible with the lambda paradigm. Therefore, to keep Boolean values extensible, fuzzy pseudo-Boolean logic [8] is implemented, which does allow an adequate semantic representation of Boolean logic.
## III Examples of Application
To explore what our program is able to handle, we decided on two different, yet related, example domains.
### _FPGA Synthesis_
For our first example (outlined in Figure 2), we model the problem as an FPGA cost problem. The FPGA has a number of constraints, i.e., memory size, available memory ports, and available input and output ports. We have \(Routine_{1,2,3}\), of which only two can be installed on the FPGA fabric at a time; depending on which two are loaded onto the fabric, we must enable a minimum number of memory, I/O, and interconnection ports, and different memory properties apply. We then created a polynomial cost function over these constraints, with the aim of making it nonlinear so the algorithm can demonstrate its effectiveness in traversing the set space while attempting to find the most optimal cost.
One highlight of this example is the inclusion of pseudo-Boolean constraints, which manifest in the requirement that only two of the three routines can function at any time and which, in terms of cost, create a piecewise function. The parameter variation generated during random sampling is able to traverse this piecewise function: even though we generate points using a correct-by-construction approach, in some cases there is no valid interval, and in that case we reset the specific parameter back to the largest valid interval and randomly sample from it. This allows the program to escape any rut it enters after an early decision on which Routine set to choose, so it can backtrack as necessary and choose another Routine set if the specific parameter space undergoing evolution is no longer valid. The results are shown in Figure 3 for the different weighting schemes used when sampling from the generated valid intervals.
### _Computer Architecture Design_
Another example is the design of a computer architecture system. During the creation of a new computer architecture, or of a new implementation of an architecture, multiple design decisions must be made with respect to area, inter-connectivity, interface requirements, and transistor count. In this example, we model a simple multi-fetch, multi-execution processor design. We drafted the requirements as dependencies and constraints, and given the constraints and requirements for the interfaces and inter-connectivity between components, we aim to find the minimal transistor count. This was a more rudimentary design, intended to find the computational limit of our implementation. One thing we attempted to model was very large integer sets and their exploration. Emulating design space exploration for computer architectures with such large intervals is the reason we had to refactor our computation engine from purely symbolic to the lambda paradigm, as the symbolic computation could not run the search space exploration in a reasonable amount of time for this example. The results for this example are shown in Figure 4, along with the variation between the random sampling methods used on the generated valid intervals.
### _Performance and Efficacy_
As mentioned in the discussion of the implementation, performance was a major bottleneck, and a number of features had to be added to guarantee reasonable performance. The first was the use of lambdas to calculate the valid interval set. The second, outlined in the algorithm, is keeping a short list of the least-cost neighbors and generating new random neighbors from that list. This gives us multiple different forays into the search space; we may arrive at many local minima, but we only choose the most optimal one. Computation time is static across iterations, and there are parameter options to increase or decrease the exhaustiveness of the search depending on the intended use case.
Fig. 2: Illustration of FPGA Paradigm for Testing Our Implementation
We also wanted to verify the efficacy of our design and make the best possible effort at generating the most optimal point. To verify that our results were sane, we ran multiple instances of both the FPGA and the computer architecture description JSON files and averaged the results, and we did this for three different random sampling weights (uniform, linear weighted, square weighted). We found that the results of all runs were fairly similar, but there are some noticeable differences worth discussing.
Firstly, uniform random search performs better for low iteration counts, because during the early stages of evolution a majority of the set space has yet to be explored, and uniform sampling lets us traverse most of the set space early. After many iterations, however, square-weighted random sampling from the interval eventually arrives at a more optimal cost: as more and more of the set space is invalidated, the parameters undergoing evolution get much closer to the local optima, and square weighting makes us more likely to sample these local optima and reach them faster than both uniform and linear random sampling.
## IV Summary
To reiterate the major points of this paper, we have created a tool that performs discrete search of integer spaces over heterogeneous parameters mapped to the integer domain, and we use correct-by-construction methods to ensure that the given constraints and dependencies are met while attempting to find the most optimal cost. This differs from the previous literature in that it can accommodate heterogeneous data structures and model hybrid systems, whereas the existing literature primarily targets reachability and homogeneous parameter exploration. The main takeaways from this endeavor are that there is a significant divide between the tools used in industry and the potential of tools that could better optimize the processes and methods in use. The main hurdles to widespread adoption of these methods are the difficulty of understanding and using them, as well as the computational cost barrier that is evident in very complex systems.
### _Wish-list of additional features_
One feature that would have been useful to incorporate is a Boolean SAT or SMT solver [9], which would have allowed us to bypass the heuristically generated pseudo-Boolean constraints entirely and instead rigorously solve Boolean equations for all possible solutions. Incorporating a Boolean SAT solver such as Z3 would have been time-prohibitive, but it would have allowed a greater range of expressiveness for constraints.
### _Application Files_
Due to space reasons, we do not go into detail on the specifics of the Computer Architecture Example and the FPGA Example. Please contact the authors for more information.
## V Some thoughts on optimization and use cases
Optimization aims at searching for values of \(\mathbf{x}\) which minimize the objective function \(f\) subject to constraints. A general formulation of an optimization problem is given in equation (1).
\[\begin{split}&\operatorname*{arg\,min}_{\mathbf{x}}\;f(\mathbf{x})\\ &\text{s.t. constraints on }\mathbf{x}\end{split}\tag{1}\]
In addition to existing gradient-based methods, which require the objective function to be differentiable or even smoother, the discrete search algorithm proposed in this paper achieves a high degree of performance on all kinds of objective functions.
One of the most important features of cyber-physical systems is that they contain both continuous and discrete system components. In this case, the constraints may include discrete forms like SAT clauses and continuous forms like inequalities. Our discrete search algorithm can be used to choose optimal parameters for a cyber-physical system.
Fig. 4: Table of the Impact of Different Weights and Effect on Set Exploration for Architecture Example
Fig. 3: Table of the Impact of Different Weights and Effect on Set Exploration for FPGA Example
## VI Further possible work
We would like to explore more of the background of reachability analysis: where does this problem arise from? Moreover, for existing optimization algorithms such as heuristic algorithms, gradient-based methods, and interior point methods, what are the bottlenecks in applying them to hybrid system reachability analysis?
Another topic is the connection between reachability analysis and optimization algorithms. If the reachability problem can be formulated as an optimization problem, then it will be easier to understand it through the mathematical properties of the objective function.
Using discrete search, tool-parameter value exploration in integer spaces provides a powerful methodology for modeling and heuristically finding optimal parameter lists. Current tools and implementations focus on homogeneous tool parameters, and implementations for heterogeneous tool parameters are lacking. This paper introduces a correct-by-construction method for exploring the reachability and validity of heterogeneous parameters, and uses the tool implementation and examples to illustrate the kinds of heterogeneous systems to which it can be applied.
2309.09248 | The Director: A Composable Behaviour System with Soft Transitions | Software frameworks for behaviour are critical in robotics as they enable the
correct and efficient execution of functions. While modern behaviour systems
have improved their composability, they do not focus on smooth transitions and
often lack functionality. In this work, we present the Director, a novel
behaviour framework that addresses these problems. It has functionality for
soft transitions, multiple implementations of the same action chosen based on
conditionals, and strict resource control. The system was successfully used in
the 2022/2023 Virtual Season and RoboCup 2023 Bordeaux, in the Humanoid Kid
Size League. It is implemented at https://github.com/NUbots/DirectorSoccer,
which also contains over thirty automated tests and technical documentation on
its implementation in NUClear. | Ysobel Sims, Trent Houliston, Thomas O'Brien, Alexandre Mendes, Stephan Chalup | 2023-09-17T11:56:59 | http://arxiv.org/abs/2309.09248v2 | # The Director: A Composable Behaviour System with Soft Transitions
###### Abstract
Software frameworks for behaviour are critical in robotics as they enable the correct and efficient execution of functions. While modern behaviour systems have improved their composability, they do not focus on smooth transitions and often lack functionality. In this work, we present the Director, a novel behaviour framework and algorithm that addresses these problems. It has functionality for soft transitions, multiple implementations of the same action chosen based on conditionals, and strict resource control. This system has shown success in the Humanoid Kid Size 2022/2023 Virtual Season and the Humanoid Kid Size RoboCup 2023 Bordeaux competition.
Keywords: decision making, robotics, behaviour trees
## 1 Introduction
Autonomous robotics is a large field where robots perform diverse tasks. The actions performed at any given moment depend on state and environment information. Behaviour frameworks allow developers to define rules for behaviour algorithms to manage a robot's actions. There are many desirable features for a behaviour system:
* It should facilitate a smooth transition between actions
* It should account for differences over time in information quality and environment knowledge
* It should be flexible and versatile
* Modules competing for resources should not be able to use a resource at the same time, such as motors
* The behaviour should be composable for quick development
* State information should be available for debugging
* The system should run in real-time
Looking back at previous research on behaviour systems, the classical subsumption system [3] provided a modular approach but lacked composability and functionality. Behaviour trees are modular and composable but are computationally expensive. They lack some functionality, such as changing actions based on the existence or absence of environment knowledge, which research aims to address [1, 10, 7]. The \(ABC^{2}\) behaviour system previously used in RoboCup used
a queue-based agenda focusing on multi-agent gameplay. It did not facilitate smooth transitions and lacked complexity beyond multi-agent functionality. The Dynamic Stack Decider (DSD) [6] is composable and maintainable but lacks desired functionality for soft transitions. The Humanoid Control Module [2] was incorporated with the DSD to abstract lower-level hardware modules from higher-level strategy modules and to avoid conflicts in motor control. It suggests that while the DSD has benefits, it alone does not provide all desired functionality for a complete behaviour system.
The literature has improved over time towards composable, functional and transparent systems. However, each still lacks some desired functionality. Research often focuses on one or a few key components, such as fidelity of information [9], without considering all aspects needed for a general behaviour system, including transitions. This research aims to address these gaps.
In this work, we present The Director, a behaviour framework and algorithm for autonomous systems that emphasises modularity and transitions. It incorporates functionality for soft transitions to ensure safe robot motions as tasks change. Its modular architecture enables programmers to focus on small and specific functionality for the robot rather than the entire system. The algorithm's versatility facilitates complex behaviours, including defining multiple ways to complete one task chosen based on conditionals. A library of automated tests supports the creation of the Director's backend algorithm. The Director's design integrates many desirable features into a simple, coherent framework.
The Director is general-purpose and is implementable in most robotic software architectures, such as ROS. We have implemented it within NUClear [4] for the RoboCup Humanoid Soccer League, shown both in the 2022/2023 Virtual Season and at RoboCup 2023 Bordeaux, with a game won with the Director in each competition. NUClear is a modular, low-latency message-passing architecture for resource-constrained robotics systems. We converted from a subsumption-based system to the Director, with successful testing in the Webots RoboCup simulation. The Director framework aims to simplify the implementation of new behaviours, and we found this to be true when converting our system to the Director.
## 2 Algorithm
The Director is an algorithm for controlling the flow of behaviour in a system. It uses Providers and Tasks to build a tree structure, where Tasks are messages requesting functionality, and Providers provide the functionality for those behaviours. Providers can call subtasks, which are considered children of the Provider. The Director has a wide range of features to solve limitations in existing behaviour systems. Definitions of key terms are in Table 1. Figure 1 shows an example of a Director graph for a soccer-playing humanoid robot.
We use a basic soccer-playing scenario to describe the concepts in the Director. The scenario is that a humanoid robot approaches a ball, kicks the ball, falls over mid-kick and gets back up. The reason for falling over may be an unstable
kick engine or the interference of another player. The transition from walking to kicking should be stable. When the robot falls over, it should stop kicking and get up. Figure 1 shows the Director graph at the start of this scenario, where the robot has seen the ball and is walking toward it.
A behaviour system can be split into five main sections, as shown at the top of Figure 1. The Director algorithm does not have an inherent knowledge of these layers, but they can be incorporated into the program's structure to more easily conceptualise and modularise the behaviour. The literature uses different terms interchangeably, which can lead to confusion.
We propose the following terminology. The actuation layer involves controlling the robot's hardware. The skill layer is responsible for performing complex physical movements of the robot. The planning layer determines when and in what way the skill layer is called based on the current state of the environment. The strategy layer makes specific high-level decisions by stringing together planners that run based on priority and state information. Strategy modules are often small and do not receive specific data from the layer above. The purpose layer is at the top of the Director tree and determines the overall goal of the robot.
Figure 1: This figure shows a Director graph for a humanoid robot playing soccer. It shows active (solid blue), blocked (dashed red), and inactive (dotted black) Providers and Tasks. The values of Tasks are their priority.
The skill and actuation layers can abstract platform-specific functionality from the higher-level behaviour layers. The Humanoid Control Module introduced by Bestmann et al. [2] abstracts the robot's motions from the higher behaviour layers and avoids conflicts in motor control. They focus on using the same higher-level modules across legged and wheeled robots, with the lower-level control module implementing the platform-specific skills and actuations. The Director inherently facilitates this abstraction through loose coupling.
### Providers
Providers are functions that perform actions to satisfy the requirements of one particular Task, either directly or by running subtasks. The function runs when the Task it provides for is requested from anywhere in the system. Standard Providers use the 'Provide' DSL keyword. These Providers may have conditions on when they can run, defined in their declaration.
Providers run when a new Task is received or when a subtask is Done. In addition, they may run on other triggers defined within the general software architecture, such as running at a particular frequency. Other triggers will only run the Provider if it is active in the tree. Functions within the software system that aren't Providers are non-Providers.
In our soccer-playing scenario, the boxes in the diagram are Providers. In the beginning, the robot should walk to the ball. The WalkToBall box is a Provider that provides the functionality for the WalkToBall Task. It finds the ball position and requests a WalkTo Task with this ball position data. The WalkTo Task manages what velocity to send to the walk engine based on the target position. The graph continues with the walk engine and limb Providers, with the leaf Providers controlling the motors.

| Concept | Description |
| --- | --- |
| Provider | A function that provides the functionality for a Task. |
| Non-Provider | A function that doesn't provide for a Task. |
| Provider Group | A collection of Providers that service the same Task. |
| Task | A request for a single piece of functionality. |
| Subtask | Task from a Provider. |
| Root Task | A Task from a non-Provider. |
| Priority | A value that determines if a Task can take control. |
| Optional | The Task is not required to run and will defer to non-optional Tasks. |
| Done | A Task that indicates that the Provider has completed its Task. |
| Idle | A Task that will keep running the last set of Tasks for a Provider. |

Table 1: Definitions for key terms for the Director algorithm.

| DSL Keyword | Description |
| --- | --- |
| Provide | The normal Provider that provides the functionality for a Task. |
| Start | The Provider that is run when a Provider group gains control. |
| Stop | The Provider that is run when a Provider group loses control. |
| Needs | The Provider needs control of the specified subtask to run. |
| When | Conditions on when the Provider can be active. |
| Causing | The state that will be achieved by running this Provider. |
| Uses | Retrieves information on a Provider's subtask state. |
| RunReason | Gives information on why the Provider ran. |

Table 2: DSL keywords used in the Director.
#### Provider Groups
A group of Providers for one Task type is called a Provider group. Only one Provider in a group can run at a time, with the running Provider determined by DSL keywords or declaration order. If a Provider can no longer run, the Task is reassigned to another Provider in the group. Subtasks from a Provider in a group are from the group, not the specific Provider.
Striker and Walk are Provider groups in Figure 1. Only one Provider in a Provider group is active at any given time. The Striker is using the playing Provider since that condition is true. If the game phase were to transition to the ready state, the active Provider would be the ready Provider.
#### Provider Types
There are three types of Providers, each specified by a DSL keyword. The 'Provide' type executes functionality for a Task, as discussed previously. If a 'Start' Provider type exists, the 'Provide' Provider will run after it.
The 'Start' type sets up the state of the Provider when a Provider group gains control of its Task after not having control previously. It runs once.
The 'Stop' type is used to clean up code and only runs when a Provider group is losing control of its Task and will no longer run. It cannot run without a 'Start' or 'Provide' running before it. It also runs once.
In our scenario, the Walk Provider group will have a Start Provider that sets up the walk engine. When the Kick Provider takes over, the Walk Provider group will have a Stop Provider that cleans up the walk engine.
#### Needs
The DSL keyword 'Needs' is used to ensure that a Provider can only run if it has priority to take control of its subtasks.
In our scenario, the skill Providers Need the relevant limbs for their Task. For example, the Walk and Kick Providers will specify that they Need the LeftLeg and RightLeg Tasks. This conflict means the Walk and Kick Providers cannot run simultaneously. Because the KickToGoal Task has higher priority than the WalkToBall Task, if both are requested to run then the Kick will take control and the Walk will be blocked from running as it does not have control of its Needs subtasks.
If a Provider does not have the Needs keyword, it will run and request subtasks, but those subtasks will not run until Providers are available to run them. In our example, the planning modules do not Need their subtask. They continuously run without taking control of the motors until they need to act. The RelaxWhenFalling does not Need the Relax, since it only requests the Relax subtask when the robot is falling. It will not take over the motors until the robot
falls over while kicking. When this happens, the Kick will lose control. Once the robot has settled on the ground, the GetUpWhenFalling will request the GetUp, taking control of the motors from the Relax.
#### When

'When' is a DSL keyword that only allows a Provider to execute when the system satisfies a specified condition. The condition has three parts - a state type, a comparison operator and a state value.
In our scenario, the Kick only runs When the stability state of the robot is equal to standing. This prevents the robot from transitioning from the walk mid-step and falling over.
This functionality, combined with Provider groups, solves the problem described by Veloso et al. [9], where existing systems don't consider the precision of environment knowledge. They proposed a behaviour system that emphasised the accuracy and fidelity of information in determining how to perform a task. A single action, such as localising, could have multiple methods for execution depending on the quality of sensor data. In the Director, each method can have an associated Provider, with the precision level for that method defined in the When conditional.
#### Causing
The DSL keyword 'Causing' in the Director allows for soft transitions between Providers. It declares that the Provider will cause a condition to be satisfied. Like 'When', it has a state type, a comparison operator and a state value. If a Provider has a 'When' condition, the Director will prioritise lower priority Providers with a matching 'Causing' if the 'When' fails.
In our scenario, transitioning between walking and kicking would be difficult without Causing. The Kick requires the robot to be standing, but it is walking and therefore does not satisfy this condition. A version of the Walk in the Walk Provider group causes the stability state to become equal to standing. This standing Walk Provider makes the walk stop cleanly and stand still, allowing the Kick to execute safely.
The 'Causing' keyword enables smoother transitions between Providers, allowing a walk to cause the robot to enter a state where it can transition to kicking.
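To make the pushing mechanism concrete, the sketch below models a Provider group as a plain list of dictionaries and shows how a Provider with a matching 'Causing' could be preferred when a higher-priority Provider's 'When' fails; this is a conceptual illustration, not the Director's actual implementation or the NUClear API.

```
# Conceptual sketch only -- not the Director/NUClear implementation.
def select_provider(group, state, pushed_state=None):
    # If a higher-priority Provider pushes for a state, prefer a group member
    # whose 'Causing' declares that state; otherwise take the first member
    # whose 'When' condition holds (None means unconditional).
    if pushed_state is not None:
        for p in group:
            if p["causing"] == pushed_state:
                return p
    for p in group:
        if p["when"] is None or p["when"](state):
            return p
    return None

walk_group = [
    {"name": "NormalWalk",   "when": None, "causing": None},
    {"name": "StandingWalk", "when": None, "causing": "standing"},
]
kick_when = lambda s: s["stability"] == "standing"

state = {"stability": "walking"}
if not kick_when(state):
    # The Kick cannot start yet, so the Walk group is pushed towards 'standing'.
    print(select_provider(walk_group, state, pushed_state="standing")["name"])  # StandingWalk
```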
### Tasks
Tasks are jobs that Providers execute and can contain data. They typically have three pieces of information: Task data, priority level, and an optional flag. Prioritisation of Tasks determines which should run, with non-optional Tasks always taking priority over optional ones. If a Task is optional, other subtasks can run if it cannot, but this is not true for non-optional subtasks. The Director will implement all-or-nothing for non-optional subtasks. Both Providers and non-Providers can request Tasks.
In our scenario, the Walk Provider requests LeftLeg and RightLeg subtasks. The Provider must have control over both Tasks for them to run. The Walk
Provider also has optional LeftArm and RightArm subtasks, which will run if possible but will not block the other subtasks from running.
#### Root Tasks
Tasks requested from non-Providers are called root tasks and are the starting points of the Director graph. These tasks are siblings within the tree. Root tasks are different from Tasks requested from Providers because they cannot be removed by running a Provider without the Task. Instead, the Task needs to be manually flagged in a way so that the Director will remove it.
In our scenario, FallManagement and Striker are Tasks requested from non-Providers. These requests start the Director graph.
#### Priority
Priority in the Director is determined based on the closest common ancestor of the two competing Tasks. For root tasks, the closest common ancestor will be the root element. Once the closest common ancestor is determined, the priority of each Task's branch will determine which Task has higher priority. The winner takes control and becomes active, while the evicted Task will watch for an opportunity to take back control.
When a Task's ancestor tree has an optional Task between itself and the common ancestor, it is considered optional. If one Task has an optional parentage and the other does not, then the optional Task will automatically lose. The Tasks are compared normally if both have optional Tasks in their parentage.
In Figure 1, the Striker Provider group requests four subtasks with different priorities. The KickToGoal Task has higher priority than the WalkToBall Task. When the KickToGoal Provider requests Tasks to execute the Kick, it will take over the limbs from the Walk because the KickToGoal branch has higher priority.
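One way to read the rule above is sketched below, where each Task is described by the list of (priority, optional) pairs on the path from the root down to it; this is a conceptual reading of the comparison, not the Director's actual code, and the example priority values are made up for illustration.

```
# Conceptual sketch only -- not the Director's actual implementation.
def challenger_wins(challenger_path, incumbent_path):
    # Each path is a list of (priority, optional) pairs from the root Task
    # down to the competing Task; compare where the paths diverge.
    # Assumes the two Tasks lie in different branches.
    i = 0
    while (i < min(len(challenger_path), len(incumbent_path))
           and challenger_path[i] == incumbent_path[i]):
        i += 1                                    # closest common ancestor
    c_branch, n_branch = challenger_path[i:], incumbent_path[i:]
    c_opt = any(opt for _, opt in c_branch)
    n_opt = any(opt for _, opt in n_branch)
    if c_opt != n_opt:                            # optional parentage loses outright
        return n_opt
    return c_branch[0][0] > n_branch[0][0]        # otherwise compare branch priority

# e.g. KickToGoal (priority 4) challenges WalkToBall (priority 3) under the Striker.
kick_path = [(1, False), (4, False)]
walk_path = [(1, False), (3, False)]
print(challenger_wins(kick_path, walk_path))      # True: the Kick takes the limbs
```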
#### Done Tasks
A Done Task is a particular Task requested by a Provider to signal to its parent that it has completed its Task. The Provider group that created this Task will then be re-executed with the knowledge that it was triggered by a Done event from one of its children. The Done Provider's Task will not be removed from the Director tree unless it is a root Task.
In Figure 1, when the LeftHipYaw Provider moves to the requested motor position, it will send a Done Task to its parent. When the LeftLeg has received Done Tasks from all of its motors, it will send a Done Task itself. It can then run the next sequence of motor positions.
#### Idle Tasks
An Idle Task is a particular Task a Provider requests to signal that it should continue running its previous Tasks. For example, in our scenario, the GetUpWhenFallen can send an Idle Task when it is getting up and does not want to re-request the GetUp Task. If it is re-requested, the GetUp may run from the beginning and not reach the end of its motion.
### Data
Providers can access data about the local state of the Director to assist in making decisions and understanding the system state.
#### Uses

'Uses' is a DSL keyword used to obtain information about subtasks. It provides information about the run state of the subtask and whether it is Done. The run states include whether the subtask is currently running, queued to run, or not yet requested by the current Provider. The information comes from the subtask corresponding to the template type of the Uses keyword.
In the scenario with the LeftLeg and its motors, the Uses keyword tells the LeftLeg when all of its motors are Done. The GetUpWhenFallen can determine if the GetUp is currently running by using the Uses keyword. It can use this to determine whether to send an Idle Task rather than re-requesting the GetUp Task.
#### Run Reason
The RunReason DSL keyword retrieves information about why a Provider is running.
There are five possible run reasons - a new Task was requested, the Provider has become active, the Provider has become inactive, a subtask is Done, and the Provider is pushed because a higher priority module wants its Causing state to be true.
```
def Provide<GetUpWhenFallen, Uses<GetUp>, Trigger<Sensors>>
        (RunReason run_reason, Uses<GetUp> getup, Sensors sensors):
    # Sensors message received
    if run_reason is OTHER_TRIGGER:
        # Calculate if we should get up using sensors
        is_fallen = sensors.gyro > config.gyro_limit
        # Request a new GetUp
        if is_fallen and getup.run_state is NO_TASK:
            request_task(GetUp())
        # Idle on the GetUp if still getting up or queued to get up
        else if (is_fallen and getup.run_state is QUEUED)
                or (getup.run_state and not getup.done):
            request_task(Idle())
        # Else no Task is requested,
        # the GetUp is removed from the Director graph
```
Listing 1: Pseudocode for the GetUpWhenFallen Provider using the Uses and RunReason DSL keywords.
The RelaxWhenFalling and GetUpWhenFallen Providers in our scenario run when the robot falls over and once it is fallen, respectively. These Providers check the sensor data to determine if it needs to request its subtask. The run reason information tells the Providers if they are running because of a new sensor message or because their subtask is Done. An example of code for the GetUpWhenFallen Provider is in Listing 1.
## 3 Evaluation
### Composability
The Director makes it easy to combine modules. Requesting relevant Tasks combines desired modules into the program. Calling a 'ChaseBall' Task from a non-Provider can easily demonstrate the robot chasing a ball. The Task call will trigger all the necessary modules to chase a ball through subtasks. The easy modification and addition of behaviours are critical for debugging and developing in high-pressure scenarios, such as RoboCup. Subsumption-based systems [3] have more difficulty achieving this, as the layers build upon each other and are more tightly coupled.
### Extensibility
In the Director system, adding new components is straightforward and does not require a global reevaluation of priorities, unlike the subsumption architecture. New Providers and Tasks can be created and requested without significant changes to the rest of the system. Subsections of the system can be easily isolated. This flexibility allows for easy experimentation, modification and debugging of the system.
In our experience, the development of behaviour modules within the Director framework is significantly quicker than in our previous subsumption-like system. After converting our system, the high modularity made it easy to see ways to extend the functionality of the robots.
### Transitions
In humanoid robotics, transitions are a factor in preventing the robot from falling over and causing damage. The Director supports clean and safe transitions through conditionals on Providers to prevent them from running unless the system is in a particular state. Additionally, the Director includes functionality for soft transitions. Rather than immediately transitioning from one action to another, a Provider can handle the transition using the pushing functionality.
In the previous section, we used a scenario to explain the soft transition functionality with the 'When' and 'Causing' keywords. The Kick Provider's 'When' condition required the system stability state to be equal to'standing'. A matching 'Causing' Walk Provider satisfied this state by making the walk engine
stop cleanly. When the Kick Provider tried to take control, it pushed the Walk Provider group to use the 'Causing' Walk Provider. The Walk then stops cleanly before transitioning to the Kick Provider.
This smooth transition from walking to kicking is critical for stability. Conditions on Providers make the robot act safely and move between motions smoothly. Existing literature rarely addresses transitions, and we are not aware of other systems with soft transitioning functionality.
### Hardware Control
The Director has a strict control system where a Provider can only provide for one Task at a time, and only one Provider of a Task type can be active at a time. Other Tasks and Providers are blocked and must wait for an opportunity to take control. This core rule in the algorithm prevents multiple sources from controlling one resource, such as the motors. By grouping motors, the system can ensure that only one module controls a kinematic chain.
In our previous system, all motor commands moved through one module that enforced a similar strict control system. Another solution proposed by Bestmann and Zhang uses the Humanoid Control Module [2] to implement a mutex on the motors. These approaches lack the modularity and composability that the Director inherently creates throughout the system, from high-level strategy modules down to low-level motor modules.
### Versatility
The Director has a versatile algorithm with extensive functionality. Research often focuses on adding one specific functionality. The Director aims to incorporate all needed functionality. Functionality for transitions and strict hardware control, as described previously, are critical parts of this versatility.
Another important aspect is the ability to create multiple implementations for one action. Provider groups facilitate this, where one implementation runs based on the system state. Numerous research articles address this concept in the context of the quality and existence of environment information [9, 7, 10]. Provider groups provide this functionality in a generalised way, where conditions determine the active Provider.
Modern implementations of behaviour trees for autonomous robotics request tasks and act based on the state of the subtask [8, 7, 10]. The Uses information and Done Tasks within the Director provide this functionality and could extend to conform to the behaviour tree structure if desired by adding fail and success responses from subtasks.
\(ABC^{2}\)[5] has a similar Provider-Task relationship and can apply conditionals to nodes. They do not address transitions, resource control and multiple implementations for tasks. These are not as important within the two-dimensional simulation competition scenario used in the article.
The complexity of the Director requires careful implementation of the back-end algorithm. We provide over thirty automated tests for the Director to aid
in the development of the algorithm 1. The computational complexity of the algorithm is comparable to other behaviour tree systems.
Footnote 1: [https://github.com/NUbots/DirectorSoccer/tree/main/module/extension/Director/tests](https://github.com/NUbots/DirectorSoccer/tree/main/module/extension/Director/tests)
### Transparency
While transparency is useful for debugging, it can also increase complexity when implementing behaviours. Providers in the Director have limited access to the current system state, with only local information accessible. They do not know the origin of their Task, although the Director algorithm could include this information if needed. While messages within the system can provide more context to Providers about the system state and environment, the Director is designed to be decoupled, allowing for shorter and more composable modules without dependencies on other aspects of the system.
The Director algorithm has a complete view of the system state, with all Providers and their active tasks and watchers visible at any given time. A graphical representation of the behaviour system from this information would enhance debugging. Additionally, each module could manage its history individually.
## 4 Conclusion
We presented the Director framework and algorithm and placed it within the context of existing behaviour systems. It is modular, composable, extensible and has functionality critical for autonomous robotic systems. The Director supports soft transitions, multiple implementations for the same task chosen based on conditionals, conditional requirements on Providers, and strict resource control extending to motor control.
#### Acknowledgements
This research is supported by 4Tel Pty Ltd and an Australian Government Research Training Program Scholarship to the first author. We acknowledge contributors of the NUbots Robotics Research Group, whose work this publication builds upon. Thank you to Alexandre Mendes and Stephan Chalup for their review of this work.
Software frameworks for behaviour are critical in robotics, as they enable the correct and efficient execution of functions. Modern behaviour systems have improved composability, but they do not focus on smooth transitions and lack functionality. In this work we present the Director, a novel behaviour framework designed to address these problems, with functionality for soft transitions, multiple implementations of the same action chosen based on conditionals, and strict resource control. The system was used successfully in the 2022/2023 Virtual Season and at RoboCup 2023 Bordeaux in the Humanoid Kid Size League. It is implemented at https://github.com/NUbots/DirectorSoccer, which also contains over thirty automated tests and technical documentation on its implementation in NUClear.
2309.11407 | On the topology of higher-order age-dependent random connection models | In this paper, we investigate the potential of the age-dependent random
connection model (ADRCM) with the aim of representing higher-order networks. A
key contribution of our work are probabilistic limit results in large domains.
More precisely, we first prove that the higher-order degree distributions have
a power-law tail. Second, we establish central limit theorems for the edge
counts and Betti numbers of the ADRCM in the regime where the degree
distribution is light tailed. Moreover, in the heavy-tailed regime, we prove
that asymptotically, the recentered and suitably rescaled edge counts converge
to a stable distribution. We also propose a modification of the ADRCM in the
form of a thinning procedure that enables independent adjustment of the
power-law exponents for vertex and edge degrees. To apply the derived theorems
to finite networks, we conduct a simulation study illustrating that the
power-law degree distribution exponents approach their theoretical limits for
large networks. It also indicates that in the heavy-tailed regime, the limit
distribution of the recentered and suitably rescaled Betti numbers is stable.
We demonstrate the practical application of the theoretical results to
real-world datasets by analyzing scientific collaboration networks based on
data from arXiv. | Christian Hirsch, Peter Juhasz | 2023-09-20T15:31:32 | http://arxiv.org/abs/2309.11407v1 | # On the topology of higher-order
###### Abstract
In this paper, we investigate the potential of the age-dependent random connection model (ADRCM) with the aim of representing higher-order networks. A key contribution of our work are probabilistic limit results in large domains. More precisely, we first prove that the higher-order degree distributions have a power-law tail. Second, we establish central limit theorems for the edge counts and Betti numbers of the ADRCM in the regime where the degree distribution is light tailed. Moreover, in the heavy-tailed regime, we prove that asymptotically, the recentered and suitably rescaled edge counts converge to a stable distribution. We also propose a modification of the ADRCM in the form of a thinning procedure that enables independent adjustment of the power-law exponents for vertex and edge degrees. To apply the derived theorems to finite networks, we conduct a simulation study illustrating that the power-law degree distribution exponents approach their theoretical limits for large networks. It also indicates that in the heavy-tailed regime, the limit distribution of the recentered and suitably rescaled Betti numbers is stable. We demonstrate the practical application of the theoretical results to real-world datasets by analyzing scientific collaboration networks based on data from arXiv.
_Keywords:_ higher-order network, degree distribution, stochastic geometry, random connection model
_MSC Classification:_ 60D05, 60G55, 60F05
## 1 Introduction
In recent decades, the field of complex networks has emerged as a powerful framework for analyzing systems whose properties cannot be understood by studying their parts in isolation. The human brain, collaboration among researchers, the interaction of chemical elements, technological infrastructures, or the evolution of species are some examples of complex systems in which studying the relationships between the parts is inevitable (Battiston et al., 2020; Holland and Leinhardt, 1976). For instance, a collaboration network of scientists based on data from arXiv is illustrated in Figure 1, where vertices represent authors of publications and each document is represented by a simplex.
Figure 1: The largest component of a higher-order network of scientists.
Apart from a descriptive approach, it is often desirable to develop a stochastic model for generating synthetic networks. The key advantage of creating such a stochastic model representation is that it enables a more refined analysis and provides a tool for a deeper understanding of the properties of a complex system. Through this approach, it becomes feasible to reveal effects that might remain hidden in an actual dataset, particularly if its size is not large enough. For an excellent discussion of complex network models as null models, we also refer the reader to van der Hofstad et al. (2020).
The traditional way of modeling complex systems relies on binary networks where the parts of the system are represented by vertices, and their relationships are represented by connections between them. At the turn of the century, the field of complex networks experienced a rapid growth due to the insight that networks occurring in a wide variety of disciplines share a common set of key characteristics.
In their seminal work, Barabasi and Albert (1999) discovered that many key empirical features of complex networks are explained by a surprisingly simple _preferential attachment model_. Loosely speaking, it provides mathematical precision to the idea that many real networks emerge through a "rich-get-richer" mechanism. In addition to the broad impact of network science in the application domains, complex networks also became the subject of intense research activity in mathematics, where rigorous mathematical proofs were provided of many of the effects that were previously empirically identified in network science (van der Hofstad, 2017). In particular, the analysis of large preferential attachment models has become a highly fruitful research topic leading to important results such as the limiting distributions of various network characteristics (Dereich and Morters, 2009, 2013).
One of the shortcomings of the standard preferential attachment models is that they lead to tree-like structures, thus failing to reproduce the clustering property observed in real-world networks. To address this issue, among others, spatial variants of the preferential attachment models have been proposed (Jacob and Morters, 2015; Jacob and Morters, 2017). Here, the network nodes are embedded in Euclidean space so that the preferential-attachment rule can take into account the spatial positions. While the embedding produces the desired clustering effects, it makes the mathematical analysis more complicated. Subsequently, it was realized by Gracar et al. (2019) that the decisive scale-free and clustering properties of spatial preferential attachment mechanism could also be realized by a simplified construction rule. In the _age-dependent random connection model (ADRCM)_, the connection probability to an existing vertex now depends on the age rather than the in-degree of that vertex. In particular, knowing the age of a vertex does not require any information on the network structure in the neighborhood of that vertex. This gives a far larger degree of spatial independence, which substantially simplifies many of the mathematical derivations. Later, Komjathy and Lodewijks (2020); Gracar et al. (2022) described a more general framework for incorporating weights into the connection mechanism.
As traditional network analysis was designed to study pairwise relationships between entities, simple network models are not capable of modeling higher-order interactions in which more than two entities are involved. The study of higher-order network models has recently gained special attention due to its ability to capture these multibody relationships that a simple network model cannot handle. Among others, the need for higher-order relationships arises in scientific collaboration networks, where the joint publication of three authors is not identical to three distinct papers with two authors each. Beyond collaboration networks, the study of group relationships could already explain several phenomena like the synchronization of neurons or the working mechanism of supply chain routes, see (Xu et al., 2016).
We model higher-order networks using simplicial complexes, where the relationships are represented with simplices of various dimensions. The key benefit of modeling higher-order networks with simplicial complexes is that we can describe the networks using tools from topological data analysis (TDA). This form of analysis was carried out in a number of studies (Baccini et al., 2022; Carstens and Horadam, 2013; Patania et al., 2017; Petri et al., 2013).
While these studies investigate different datasets and rely on different TDA tools to analyze them, none of these works proposes a mathematical model to represent such higher-order networks. Previously, Fountoulakis et al. (2022) considered a stochastic model for higher-order complex networks. However, as this model relies on a form of preferential attachment mechanism, even the derivation of the asymptotic degree distribution is highly involved. In contrast, since the ADRCM relies on a far simpler connection mechanism, in the present paper we are able to derive results that are substantially more refined than the degree distribution. The main contributions of the present work are as follows:
1. We begin by rigorously proving that the higher-order degree distributions follow a power law.
2. As a basis for hypothesis tests, we provide central limit theorems (CLTs) and stable limit theorems for the edge count and Betti numbers.
3. Recognizing the limitations of the ADRCM, we propose a model extension of the ADRCM capable of matching both any given admissible vertex and edge degree exponents.
4. Since these results are proved in the limit for large networks, we support the validity of these results for finite-size networks through conducting a simulation study.
5. Having verified the convergence of the relevant quantities for finite-size networks, we develop statistical tests for finite networks based on the number of triangles and the Betti numbers in different parameter regimes.
6. Finally, we illustrate the use of these hypothesis tests for analyzing real-world collaboration networks.
We now expand on the above points and refer to Section 2 for the precise statements.
As discussed earlier, the ADRCM stands out as an appealing model due to its ability to replicate key features - power-law distributed vertex degrees and a high clustering coefficient - observed in real-world complex networks, while also being mathematically tractable. In light of this, our approach utilizes the ADRCM as a foundation and endows it with a higher-order structure by forming the _clique complex_. That is, the simplices in this complex are the cliques of the underlying graph. A set of \(k+1\) vertices forms a \(k\)-simplex if and only if it is a \(k\)-clique, i.e., if and only if there is an edge between every pair of the \(k+1\) vertices.
While for binary networks, the degree distribution is arguably the most fundamental characteristic, for higher-order networks, it is essential to also understand higher-order adjacencies. Hence, to extend the concept of degree distributions to higher-order networks, we draw upon the concept of _generalized degrees_ introduced by Bianconi and Rahmede (2016). For \(m^{\prime}\geqslant m\), one considers the distribution of the number of \(m^{\prime}\)-simplices containing a typical \(m\)-simplex as a face. For instance, the standard vertex degree corresponds to the scenario where \((m,m^{\prime})=(0,1)\). One of the fundamental findings by Gracar et al. (2019) is that in the ADRCM, the vertex-degree distribution satisfies a power law. As another example, for \((m,m^{\prime})=(1,2)\) the _edge degree_ counts the number of triangles adjacent to a given edge. In Theorem 1, we show that the generalized degrees also adhere to a power-law distribution. Furthermore, we relate the exponents of the higher-order degree distribution to the exponent governing the vertex-degree distribution. We pay special attention to the edge-degree distribution, since the formation of triangles in complex spatial networks is also of high interest for determining the clustering coefficient, as explored by van der Hofstad et al. (2020, 2022).
In our second main result, we find that the distribution of the recentered and rescaled edge count in the ADRCM converges to a normal distribution for light-tailed degree distributions and to a stable distribution for heavy-tailed degree distributions (Theorems 3 and 4). Based on our simulation study, we conjecture that these asymptotic results extend to higher-dimensional simplices.
Next, turning our focus to the features relevant in TDA, we continue with the analysis of the distribution of the Betti numbers of the clique complexes generated by the ADRCM. Siu et al. (2023) derive asymptotic expressions for the growth rate of the expected Betti numbers in non-spatial preferential attachment models. In contrast, the focus of our study is on the fluctuations around the expectation, enabling the application of hypothesis tests. In Theorem 2, we prove a CLT for the Betti numbers if the degree distribution is sufficiently light-tailed. We also conjecture that for other values of the model parameters, the distribution of Betti numbers follows a stable distribution. Again, this hypothesis gains credibility through the results of our simulation study, supporting the above conjecture.
By analyzing the empirical distributions within the arXiv data set, we find the relation between the exponents governing vertex and edge degrees from Theorem 1 to be too rigid to be applicable in real-world scenarios. To address this limitation, we present a model extension that provides a larger flexibility to the original ADRCM by introducing a new parameter. The main challenge of establishing this result is to ensure that we can independently adjust the edge-degree exponent, while keeping the vertex-degree exponent intact. More precisely, we proceed as follows: First, we increase both the vertex and the edge-degree exponents by adjusting the original parameters of the ADRCM, so that the edge-degree exponent reaches the desired value. Then, we apply a dependent thinning operation, i.e., the random removal of a fraction of certain edges, which does not affect the edge-degree exponent but decreases the vertex-degree exponent. These steps lead to the desired greater flexibility between vertex and edge degrees formalized in Theorem 5.
Our theoretical results presented above hold in the limit for very large networks. For applications to real data, we accompany our theoretical results by a simulation study.
* First, we explore the finite-size effects on higher-order degree distributions by examining the rate of convergence of the degree distribution exponents to their theoretical limits. We see that the fluctuations of the exponents around their theoretical values decrease with increasing network size. The simulations also reveal that apart from their fluctuations, the exponents also have a bias due to the finite size. An interesting aspect of the simulation study is that, through Palm theory, we are able to simulate typical simplices in infinite networks that are free of finite-size effects.
* Simulating three sets of networks with different model parameters, we validate our theoretical results regarding the edge-count distributions. Furthermore, we also estimate the parameters of the distributions that are not explicitly derived in the theorems. Finally, we discover the finite-size effects that are the most prominent in certain boundary cases.
* As for the exploration of the edge-count distribution, we conduct a similar analysis for the Betti numbers. This analysis supports our conjectures concerning the stable distribution of Betti numbers.
Next, we demonstrate the application of the theorems on four real-world collaboration networks of scientists based on arXiv data. After a general exploratory analysis, we analyze the vertex and edge-degree distribution exponents.
Based on the fitted vertex-degree exponents of the datasets, we fix the model parameters to use the ADRCM for further analysis of collaboration networks. These fitted parameters guarantee that the vertex-degree exponent and the edge count are modeled correctly. Thus, instead of the edge count, we conduct hypothesis tests based on the triangle count, where the null hypothesis is that the dataset is well described by the ADRCM. Similar tests are also conducted for the Betti numbers.
The results of the hypothesis tests reveal that the topological structure of scientific collaboration networks is highly complex. In particular, an elementary two-parameter model, such as the ADRCM, is not enough to capture all aspects of higher-order networks.
The rest of the manuscript is organized as follows. Section 2 presents our main theoretical results regarding the higher-order networks generated by extending the ADRCM model to a clique complex. Sections 3, 4, 5, and 6 contain the proofs of the theorems stated in Section 2. Section 7 details the simulation study to demonstrate the validity of the asymptotic results discussed in Section 2 for finite networks. Section 8 illustrates the application of the ADRCM model to higher-order networks of scientific collaborations. Lastly, Section 9 includes a summary and ideas for directions of further research.
## 2 Model and main results
The higher-order network model discussed in this paper is an extension of the ADRCM, which we now recall for the convenience of the reader. In this network model, vertices arrive according to a Poisson process and are placed uniformly at random in Euclidean space. Two vertices are connected with a probability given by the profile function, which is a function of the distance and the ages of the vertices.
As justified below, we restrict our attention to the special case where the latent Euclidean dimension is \(d=1\) and the profile function is \(\varphi(r)=\mathds{1}_{[0,1]}(r)\). Let \(\mathcal{P}=\{P_{i}\}=\{(X_{i},U_{i})\}_{i\geq 1}\) be a unit-intensity Poisson point process on \(\mathbb{R}\times[0,1]\), let \(\beta>0\) and \(0<\gamma<1\). Then, for \((x,u),(y,v)\in\mathcal{P}\) with \(u\leqslant v\), there is an edge from \((y,v)\) to \((x,u)\), in symbols \((y,v)\to(x,u)\), if and only if
\[|x-y|\leqslant\frac{\beta}{2}u^{-\gamma}v^{\gamma-1},\]
where \(\beta>0\) is a parameter governing the edge density. We henceforth denote this network by \(G:=G(\mathcal{P})\).
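For concreteness, the following minimal Python sketch (our own illustration, not code accompanying the paper; only numpy is assumed) samples the ADRCM on a finite window \([0,n]\times[0,1]\) with exactly this connection rule; boundary effects of the finite window are ignored.

```python
import numpy as np


def sample_adrcm(n=200.0, beta=1.0, gamma=0.6, rng=None):
    """Sample the ADRCM on the window [0, n] x [0, 1].

    Vertices are the points (x, u) of a unit-intensity Poisson process;
    the younger vertex (y, v) connects to the older vertex (x, u), u <= v,
    iff |x - y| <= (beta / 2) * u**(-gamma) * v**(gamma - 1).
    Returns the vertex array and the directed edge list (young -> old).
    """
    rng = np.random.default_rng(rng)
    num_points = rng.poisson(n)
    x = rng.uniform(0.0, n, size=num_points)    # spatial positions
    u = rng.uniform(0.0, 1.0, size=num_points)  # birth times (small = old)
    edges = []
    for i in range(num_points):
        for j in range(num_points):
            if u[i] < u[j]:  # vertex j is younger than vertex i
                radius = 0.5 * beta * u[i] ** (-gamma) * u[j] ** (gamma - 1.0)
                if abs(x[i] - x[j]) <= radius:
                    edges.append((j, i))
    return np.column_stack([x, u]), edges


vertices, edges = sample_adrcm(rng=1)
print(len(vertices), "vertices,", len(edges), "directed edges")
```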
We stress that the framework developed by Gracar et al. (2019) allows us to treat arbitrary dimensions and far more general connection functions. However, the results by Gracar et al. (2019), van der Hofstad et al. (2022) indicate that many of the key network properties, such as the scaling of the vertex degree or the clustering coefficient, depend neither on the dimension nor on the connection function. We expect that a similar observation holds for higher-order characteristics and therefore decided to work with the simplest form of the ADRCM, greatly reducing the level of technicality in the presentation of the proofs.
While \(G\) determines the binary vertex-interactions, in many applications, higher-order interactions play a crucial role. The key idea for taking this observation into account is to extend \(G\) to a simplicial complex. The most popular approach for achieving this goal relies on the _clique complex_(Dey and Wang, 2021). Here, a set of \(k+1\) vertices forms a \(k\)-simplex if and only if it is a \(k\)-clique, i.e., if and only if there is an edge between every pair of the \(k+1\) vertices. To ease readability, we will henceforth also write \(G=G(\mathcal{P})\) not only for the binary ADRCM network but also for the clique complex generated by it.
While Gracar et al. (2019) analyze a number of key characteristics of the ADRCM considered as a binary network, we focus on the simplicial structure. Specifically, we deal with the higher-order degrees and the Betti numbers, which we introduce in Sections 2.1 and 2.2, respectively.
### Higher-order degree distribution
Arguably the most fundamental characteristic of complex networks is the degree distribution. While the standard degree distribution provides an important summary of a complex network, it ignores higher-order structures. Therefore, Courtney and Bianconi (2016) argue to consider generalized degrees that are able to convey information on the adjacency structure of simplices of varying dimensions.
To define the typical vertex degree, the idea is to add to \(\mathcal{P}\) a distinguished typical vertex of the form \(o=(0,U)\) where \(U\) is uniform in \([0,1]\) and independent of \(\mathcal{P}\)(Gracar et al., 2019). We let \(G_{*}=G(\mathcal{P}\cup\{o\})\) denote the ADRCM constructed on the extended vertex set. Then, the typical vertex degree is that of \(o\) in \(G_{*}\). We define the tail of the vertex degree distribution \(d_{0,1}(k)\) to be
\[d_{0,1}(k)=\mathbb{P}\big{(}\deg_{1}(o)\geqslant k\big{)},\]
i.e., the probability that the vertex degree at the typical vertex exceeds \(k\geqslant 0\). In this context, Gracar et al. (2019) proved that the ADRCM is scale free in the sense that the degree distribution satisfies a power law:
\[\lim_{k\uparrow\infty}\log(d_{0,1}(k))/\log(k)=-\frac{1}{\gamma}.\]
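In simulations, such a limit can be checked by regressing the logarithm of the empirical tail probability on \(\log k\) over the upper tail. The estimator below is our own sketch (not part of the paper); the Pareto sample merely stands in for observed degrees with tail index \(1/\gamma\).

```python
import numpy as np


def tail_exponent(samples, k_values):
    """Estimate lim_k log d(k) / log k by least squares on the empirical tail."""
    samples = np.asarray(samples, dtype=float)
    log_k, log_tail = [], []
    for k in k_values:
        tail = np.mean(samples >= k)  # empirical tail probability d(k)
        if tail > 0:
            log_k.append(np.log(k))
            log_tail.append(np.log(tail))
    slope, _ = np.polyfit(log_k, log_tail, 1)
    return slope


gamma = 0.6
rng = np.random.default_rng(0)
degrees = rng.pareto(1.0 / gamma, size=200_000) + 1.0  # tail index 1/gamma
print(tail_exponent(degrees, np.logspace(0.5, 2.0, 15)))  # close to -1/gamma
```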
While the higher-order vertex degrees provide a more refined picture than the standard vertex degrees, it is also important to go beyond vertices by considering higher-dimensional simplices as well.
To study generalized degrees, we define the higher-order degree of an \(m\)-simplex \(\Delta\subseteq G\) as
\[\deg_{m^{\prime}}(\Delta):=|\{\sigma\in G:\sigma\supseteq\Delta,\,|\sigma|=m^{ \prime}+1\}|,\]
represented as the number of \(m^{\prime}\)-simplices containing \(\Delta\). For instance, \((m,m^{\prime})=(0,1)\) recovers the standard vertex degree and the higher-order vertex degree \(\deg_{m^{\prime}}(v)\) of the vertex \(v\) denotes the number of \(m^{\prime}\)-simplices that are incident to \(v\).
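To make the definition concrete, the following sketch (our own illustration) computes the cases \((m,m^{\prime})\in\{(0,1),(0,2),(1,2)\}\) for an arbitrary undirected edge set, e.g., one obtained by forgetting the orientation of a sampled ADRCM, by enumerating triangles.

```python
from collections import defaultdict
from itertools import combinations


def generalized_degrees(edges):
    """Count, for each vertex and each edge, the number of incident triangles.

    `edges` is an iterable of unordered vertex pairs.  Returns
    (vertex_degree, vertex_triangle_degree, edge_triangle_degree), i.e. the
    generalized degrees for (m, m') = (0, 1), (0, 2) and (1, 2).
    """
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    vertex_degree = {v: len(nb) for v, nb in adj.items()}
    vertex_tri = defaultdict(int)
    edge_tri = defaultdict(int)
    for v, nb in adj.items():
        for a, b in combinations(sorted(nb), 2):
            if b in adj[a]:               # {v, a, b} is a triangle (2-simplex)
                vertex_tri[v] += 1
                edge_tri[frozenset((v, a))] += 1
                edge_tri[frozenset((v, b))] += 1
    # each triangle is found once per vertex, so every edge count is doubled
    edge_tri = {e: c // 2 for e, c in edge_tri.items()}
    return vertex_degree, dict(vertex_tri), edge_tri


print(generalized_degrees([(0, 1), (1, 2), (0, 2), (2, 3)]))
```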
To study the generalized degree distributions, we consider typical simplices via the concept of Palm distribution. Here, we describe the specific setting needed in the present paper, and refer the reader to Last and Penrose (2016) for a more general introduction to Palm theory. For \(m\geqslant 0\), we can consider the \(m\)-simplices \(\Delta_{m}=\{P_{0},\ldots,P_{m}\}\) in \(G\) as a marked point process by centering \(\Delta_{m}\) at its oldest vertex \(c(\Delta_{m})\). Let \(\mathcal{T}_{m}(\mathcal{P})\) denote the family of \(m\)-simplices in the clique complex on \(G\). Then, the expectation of a function \(f\) of the typical \(m\)-simplex \(\Delta_{m}^{*}\) is given by
\[\mathbb{E}[f\left(\Delta_{m}^{*},\,\mathcal{P}\right)]=\frac{1}{\lambda_{m}} \mathbb{E}\Big{[}\sum_{\Delta\in\mathcal{T}_{m}(\mathcal{P})}\mathds{1}\left\{ c\left(\Delta\right)\in[0,1]\right\}f\left(\Delta-c\left(\Delta\right),\, \mathcal{P}-c\left(\Delta\right)\right)\Big{]}, \tag{1}\]
where \(\lambda_{m}>0\) is the simplex density and where \(f\colon\mathcal{C}_{m}\times\mathbf{N}_{\mathrm{loc}}\to[0,\infty)\) is any measurable function from the space \(\mathcal{C}_{m}\) of distinct \((m+1)\)-tuples of points in \(\mathbb{R}\times[0,1]\) and the space of locally finite point processes to \([0,\infty)\), and which is symmetric in the first \(m+1\) arguments.
In the present paper, we extend the result of (Gracar et al., 2019, Proposition 4.1) by considering the generalized degree distribution
\[d_{m,m^{\prime}}(k)=\mathbb{P}\big{(}\deg_{m^{\prime}}(\Delta_{m})\geqslant k \big{)},\]
i.e., the probability that the number of \(m^{\prime}\)-simplices containing the typical \(m\)-simplex \(\Delta_{m}\) is at least \(k\).
**Theorem 1** (Power law for the typical vertex & edge degree).: _Let \(\gamma\in(0,1)\) and \(m^{\prime}\geqslant m\geqslant 0\). Then,_
\[\lim_{k\uparrow\infty}\log(d_{m,m^{\prime}}(k))/\log(k)=m-\frac{m+1}{\gamma}.\]
### Central and stable limit theorems
As outlined in Section 1, to decide whether a given model is a good fit for a dataset, it is important to be able to carry out statistical hypothesis tests. In this work, we discuss possible hypothesis tests that become asymptotically exact for growing networks. While higher-order degree distributions are an important tool for describing higher-order networks, they only provide a highly restricted view of the topological structure. The idea behind TDA is to rely on invariants from algebraic topology for extracting more refined shape-related information. In this context, among the most celebrated characteristics are the Betti numbers, which, loosely speaking, can be interpreted as the number of topological holes in a dataset. For a more detailed explanation of Betti numbers, we refer the reader to Hiraoka et al. (2018). One attractive way to develop a hypothesis test is to show that the considered test statistic becomes asymptotically normal. This is the content of the following theorem. Here, we write \(\beta_{n,q}\) for the \(q\)th Betti number of the clique complex \(G\big{(}\mathcal{P}\cap[0,n]\big{)}\).
**Theorem 2** (CLT for the Betti numbers).: _Let \(q\geqslant 0\) and \(0<\gamma<1/4\). Then, \(\mathsf{Var}(\beta_{n,q})^{-1/2}(\beta_{n,q}-\mathbb{E}[\beta_{n,q}])\) converges in distribution to a standard normal distribution._
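Betti numbers of a sampled clique complex can be computed numerically; the sketch below is our own illustration and assumes a recent version of the `gudhi` library (its `SimplexTree` with `expansion`, `compute_persistence` and `betti_numbers` is one possible tool, not the procedure used in the proofs).

```python
import gudhi


def betti_numbers_of_clique_complex(edges, num_vertices, max_dim=2):
    """Build the clique complex of an undirected graph and return its Betti numbers."""
    st = gudhi.SimplexTree()
    for v in range(num_vertices):
        st.insert([v])
    for a, b in edges:
        st.insert([a, b])
    st.expansion(max_dim)     # add all cliques up to dimension max_dim
    st.compute_persistence()  # required before querying Betti numbers
    return st.betti_numbers()


# Toy example: a 4-cycle has a single 1-dimensional hole (beta_0 = 1, beta_1 = 1).
print(betti_numbers_of_clique_complex([(0, 1), (1, 2), (2, 3), (3, 0)], 4))
```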
A disadvantage of Theorem 2 is that our proof imposes a substantial constraint on the parameter \(\gamma\). In particular, Theorem 2 considers a regime where the variance of the degree distribution is finite, while for many real-world datasets it is infinite. Note that for large values of \(\gamma\), the ADRCM gives rise to extremely long edges, making it difficult to control spatial correlations over long distances; this is the main challenge in the proof. While we expect that by a more careful argumentation in the proof of Theorem 2, the range of \(\gamma\) could be extended, we conjecture that the asymptotic normality breaks down for values of \(\gamma>1/2\). To provide evidence for this conjecture, we now illustrate that a similar effect occurs for a more elementary test statistic, namely, the edge count
\[S_{n}:=\left|\{(y,v)\to(x,u):\,(y,v),\,(x,u)\in\mathcal{P},\,x\in[0,n]\}\right|.\]
The key observation is that depending on whether \(\gamma\) is smaller or larger than \(1/2\), the variance of the in-degree of a typical vertex is either finite or infinite. Hence, we should only expect a CLT in the finite-variance regime. We show that this is indeed the case.
**Theorem 3** (CLT for the edge count).: _Let \(\gamma<1/2\). Then, \(\mathsf{Var}(S_{n})^{-1/2}(S_{n}-\mathbb{E}[S_{n}])\) converges in distribution to a standard normal distribution._
For \(\gamma>1/2\), since the degree distribution is heavy-tailed, the right tails of the edge count are more pronounced than those of a normal distribution. For many combinatorially defined network models like the configuration model, the degrees are taken iid from a given distribution. Hence, for such models the limiting distribution of the edge count follows from the classical stable central limit theorem (Whitt, 2002, Theorem 4.5.2). We also refer the reader to van der Hofstad et al. (2020) for a discussion in this direction. However, we are not aware of any existing corresponding results for spatial networks, which often feature strong spatial correlations between the individual vertex degrees. Hence, the main challenge in the proof of Theorem 4 is to understand and overcome these correlations in order to extend the results from the combinatorial networks to spatial network models.
**Theorem 4** (Stable limit law for the edge count).: _Let \(\gamma\in(1/2,1)\). Then, \(n^{-\gamma}(S_{n}-\mathbb{E}[S_{n}])\) converges in distribution to \(\mathcal{S}\), where \(\mathcal{S}\) is a \(1/\gamma\)-stable distribution._
### Model extensions
Theorem 1 expresses the power-law exponent of the vertex degree distribution and the edge degree distribution in terms of \(\gamma\). However, as we will illustrate in Section 8, when analyzing datasets of scientific collaborations, the relation between the vertex and edge exponents suggested in Theorem 1 may often be violated in real datasets. More precisely, for a given vertex degree exponent, we found the edge degree in the data to be substantially more heavy-tailed than suggested in Theorem 1. In other words, real datasets exhibit a larger proportion of edges incident to a large number of triangles than what can be realized by the ADRCM. Alternatively, we could choose \(\gamma\) so as to match the power-law exponent of the edge degree in the data. In this case, however, the vertex degrees of the fitted model would exhibit too heavy tails.
To address this shortcoming, we propose a model extension, the _thinned age-dependent random connection model (TADRCM)_, in which we remove some edges in such a way that the power-law exponent of the edge degrees is not affected. The key observation is that for edges with high edge degrees, typically both endpoints are very old. However, only a very small proportion of vertices connect to more than one very old vertex. More precisely, we say that an edge \((z,w)\to(x,u)\) is _protected_ if \(w\leqslant 2u\) or if there exists a vertex \((y,v)\) with \(v\leqslant 2u\leqslant 4v\) with \((z,w)\to(y,v)\). An edge is _exposed_ if it is not protected. Then, we define the TADRCM \(G^{\text{th},\eta}\) of \(G\) by independently removing exposed edges. The key idea is to use a retention probability of \(u^{\eta}\), where \(u\) is the birth time of the older endpoint \((x,u)\) and \(\eta>0\) is a new model parameter. Our next main result is the following analogue of Theorem 1 for the thinned model, where \(d^{\text{th},\eta}_{m,m^{\prime}}\) is defined as \(d_{m,m^{\prime}}\), except that we use the TADRCM instead of the ADRCM.
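To illustrate the thinning rule, the following Python sketch (our own illustration, not code from the paper) applies it to an arbitrary directed edge list of pairs of (position, birth time) tuples; excluding an edge's own target from the protection check and basing the retention probability on the birth time of the older endpoint are assumptions of this sketch.

```python
import numpy as np


def thin_tadrcm(edges, eta, rng=None):
    """Thin a directed ADRCM edge list according to the TADRCM rule.

    `edges` is a list of pairs ((z, w), (x, u)) meaning (z, w) -> (x, u)
    with w >= u, i.e., edges point from the younger to the older vertex.
    An edge is protected if w <= 2u, or if (z, w) also connects to some
    other vertex (y, v) with v <= 2u <= 4v.  Exposed edges are retained
    independently with probability u**eta (assumption: u is the birth
    time of the older endpoint).
    """
    rng = np.random.default_rng(rng)
    out_neighbours = {}
    for (z, w), (x, u) in edges:
        out_neighbours.setdefault((z, w), []).append((x, u))
    kept = []
    for (z, w), (x, u) in edges:
        protected = w <= 2 * u or any(
            (y, v) != (x, u) and v <= 2 * u <= 4 * v
            for (y, v) in out_neighbours[(z, w)]
        )
        if protected or rng.uniform() < u ** eta:
            kept.append(((z, w), (x, u)))
    return kept


print(thin_tadrcm([((0.3, 0.9), (0.0, 0.1))], eta=0.2, rng=0))
```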
**Theorem 5** (Power law for the thinned typical vertex and edge degree).: _Let \(\gamma\in(1/2,1)\) and \(\eta>0\) be such that \(2/\gamma-1>1/(\gamma-\eta)\). Then,_
\[\lim_{k\uparrow\infty}\log(d^{\text{th},\eta}_{0,m^{\prime}}(k))/\log(k)=-1/( \gamma-\eta)\quad\text{ and }\quad\lim_{k\uparrow\infty}\log(d^{\text{th},\eta}_{1,m^{\prime}}(k))/\log(k) =1-2/\gamma.\]
We stress that alternative approaches also exist to enhance the flexibility of the ADRCM. For instance, van der Hofstad et al. (2022) introduce a different extension, focusing on clustering properties. However, in the scope of our work, we found the thinning-based model more convenient for two reasons. First, through Theorem 5, the parameters \(\gamma\) and \(\eta\) are related very transparently to the vertex and the edge degrees, which substantially simplifies fitting the model to datasets. In contrast, van der Hofstad et al. (2022) discuss a model where the connection between the model parameters and degree exponents is less obvious, and it is not immediate which combination of higher-order degrees can be realized in the model. Second, when carrying out the proofs, it is convenient that in the ADRCM the outdegree is Poisson-distributed independently of the vertex age. Although we expect that our proofs could be adapted to the extension from van der Hofstad et al. (2022), some of the steps would require more work.
## 3 Proof of Theorem 1 - power-law exponents for higher-order simplex degrees
In this section, we establish Theorem 1, i.e., we compute the power-law exponents for the higher-order simplex degrees in the ADRCM. To reach this goal, we consider separately the lower and upper bounds in Sections 3.1 and 3.2, respectively.
To prepare the proof, we start with an integral representation for the distribution of the typical \(m\)-simplex \(\Delta_{m}^{*}\). While (1) provides a conceptually clean definition of the expectation of a function of a typical \(m\)-simplex, it is not ideal for carrying out actual computations. For this reason, we derive an alternative representation in Proposition 6 below.
In this representation, we write \(o:=o_{0}:=(0,u)\) with \(u\in[0,1]\) for the typical vertex at the origin and
\[\mathbf{o}_{m}:=(o_{1},\ldots,o_{m}):=\big{(}(y_{1},v_{1}),\ldots,(y_{m},v_{m}) \big{)}\in\mathbb{T}^{m}:=(\mathbb{R}\times[0,1])^{m}\]
for the remaining vertices. Then, we let \(g_{m}(u,\mathbf{o}_{m})\) be the indicator of the event that \((o_{0},\mathbf{o}_{m})\) forms an \(m\)-simplex in the ADRCM with \(u\leqslant v_{1}\leqslant\cdots\leqslant v_{m}\). Henceforth, we let \(I_{r}(x):=[-r/2+x,r/2+x]\) denote the interval of side length \(r>0\) centered at \(x\in\mathbb{R}\). We let \(\mathbf{N}_{\text{loc}}\) denote the family of all locally finite subsets of \(\mathbb{T}\).
**Proposition 6** (Distribution of the typical \(m\)-simplex).: _Let \(m\geqslant 1\). Then,_
\[\mathbb{E}[f(\Delta_{m}^{*},\mathcal{P})]=\frac{\int_{0}^{1}\int_{\mathbb{T}^{m}}\mathbb{E}[f(\{o,\mathbf{o}_{m}\},\mathcal{P}\cup\{o,\mathbf{o}_{m}\})]g_{m}(u,\mathbf{o}_{m})\mathrm{d}\mathbf{o}_{m}\mathrm{d}u}{\int_{0}^{1}\int_{\mathbb{T}^{m}}g_{m}(u,\mathbf{o}_{m})\mathrm{d}\mathbf{o}_{m}\mathrm{d}u},\]
_for any measurable \(f\colon\mathcal{C}_{m}\times\mathbf{N}_{\text{loc}}\to[0,\infty)\), which is translation-covariant in the sense that \(f((x+y,u),\varphi+y)=f((x,u),\varphi)\) for every \((x,u)\in\mathbb{T},y\in\mathbb{R}\) and \(\varphi\in\mathbf{N}_{\text{loc}}\)._
To ensure that the Palm version is well-defined, we need to show that the denominator is finite. We formulate this property as a separate auxiliary result. First, define the function
\[\mu_{m}(u):=\int_{\mathbb{T}^{m}}g_{m}(u,\mathbf{o}_{m})\mathrm{d}\mathbf{o}_{m}.\]
For instance, \(\mu_{0}\equiv 1\) and also for \(m=1\) the expression simplifies. To that end, we write
\[M(p):=\{p^{\prime}\in\mathbb{T}\colon p^{\prime}\to p\}\]
for the set of all space-time points connecting to \(p\in\mathbb{T}\). Then,
\[\mu(u):=\mu_{1}(u)=|M(o)|=\int_{u}^{1}|I_{\beta u^{-\gamma}v^{ \gamma-1}}(0)|\mathrm{d}v=\frac{\beta}{\gamma}(u^{-\gamma}-1) \tag{2}\]
is the expected in-degree of the typical vertex. That is, \(\mu_{1}(u)=\mathbb{E}[D_{\mathsf{in}}(u)]\), where

\[D_{\mathsf{in}}(u):=\big{|}\mathcal{P}\cap M(o)\big{|}\]
is the in-degree of the typical vertex \(o\).
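As a quick numerical sanity check of (2) (our own sketch, assuming scipy), one can compare the integral defining \(\mu(u)\) with the closed form for specific parameter values.

```python
from scipy.integrate import quad

beta, gamma, u = 1.5, 0.6, 0.05
integral, _ = quad(lambda v: beta * u ** (-gamma) * v ** (gamma - 1.0), u, 1.0)
closed_form = (beta / gamma) * (u ** (-gamma) - 1.0)
print(integral, closed_form)  # the two values agree
```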
For general \(m\geqslant 1\), we can derive the small-\(u\) asymptotics.
**Lemma 7** (Asymptotics for \(\mu_{m}(u)\)).: _Let \(m\geqslant 1\), \(\gamma\in(0,1)\) and \(\eta>0\). Then, \(\mu_{m}(u)\in O(u^{-\gamma-\eta})\)._
Proof.: First,
\[\int_{\mathbb{T}}g_{m}(u,\mathbf{o}_{m})\mathrm{d}o_{m}\leqslant g_{m-1}(u,\mathbf{o}_{m-1})\int_{0}^{1}\big{|}I_{\beta v_{m-1}^{-\gamma}v_{m}^{\gamma-1}}(y_{m-1})\big{|}\mathrm{d}v_{m}=\frac{\beta}{\gamma}g_{m-1}(u,\mathbf{o}_{m-1})v_{m-1}^{-\gamma}.\]
Next,
\[\int_{\mathbb{T}}g_{m-1}(u,\mathbf{o}_{m-1})v^{-\gamma-\eta}_{m-1} \mathrm{d}o_{m-1}\leqslant g_{m-2}(u,\mathbf{o}_{m-2})\int_{v_{m-2}}^{1}\beta v^{ -\gamma}_{m-2}v^{-1-\eta}_{m-1}\mathrm{d}v_{m-1}\leqslant\frac{\beta}{\eta}g _{m-2}(u,\mathbf{o}_{m-2})v^{-\gamma-\eta}_{m-2}.\]
Hence, iterating this bound yields that \(\mu_{m}(u)\leqslant\beta^{m}u^{-\gamma-\eta}/(\gamma\eta^{m-1})\), as asserted.
Proof of Proposition 6.: Let \(g^{\prime}_{m}(P_{0},\ldots,P_{m})\) be the indicator of the event that \(\{P_{0},\ldots,P_{m}\}\) forms an \(m\)-simplex in the ADRCM with \(U_{0}\leqslant\cdots\leqslant U_{m}\). Let \(A\subseteq\mathbb{R}\) be a Borel set with \(|A|=1\). Then
\[\lambda_{m}\mathbb{E}[f(\Delta^{*}_{m},\mathcal{P})]=\mathbb{E}\Big{[}\sum_{\begin{subarray}{c}P_{0},\ldots,P_{m}\in\mathcal{P}\text{ distinct}\\ U_{0}\leqslant\cdots\leqslant U_{m}\end{subarray}}\mathds{1}\{X_{0}\in A\}f\big{(}\{P_{0},\ldots,P_{m}\},\mathcal{P}\big{)}\,g^{\prime}_{m}(P_{0},\ldots,P_{m})\Big{]}.\]
Then, writing \(\mathbf{p}=(p_{1},\ldots,p_{m})\), by the Mecke formula (Last and Penrose, 2016, Theorem 4.4),
\[\lambda_{m}\mathbb{E}[f(\Delta^{*}_{m},\mathcal{P})]=\int_{A\times[0,1]}\int_{\mathbb{T}^{m}}\mathbb{E}\big{[}f(\{p_{0},\mathbf{p}\},\mathcal{P}\cup\{p_{0},\mathbf{p}\})\big{]}g^{\prime}_{m}(p_{0},\mathbf{p})\mathrm{d}\mathbf{p}\,\mathrm{d}p_{0}.\]
As \(|A|=1\), a substitution \(\mathbf{p}_{m}=\mathbf{o}_{m}+p_{0}\) and an application of Fubini's theorem give that
\[\lambda_{m}\mathbb{E}[f(\Delta^{*}_{m},\mathcal{P})]=\int_{0}^{1}\int_{\mathbb{T}^{m}}\mathbb{E}\big{[}f(\{o,\mathbf{o}_{m}\},\mathcal{P}\cup\{o,\mathbf{o}_{m}\})\big{]}g_{m}(u,\mathbf{o}_{m})\,\mathrm{d}\mathbf{o}_{m}\,\mathrm{d}u.\]
Hence, evaluating this equality for \(f=1\) concludes the proof.
### Proof of lower bound
Next, we prove the lower bound by relying on the Palm representation derived in Proposition 6. More precisely, we produce specific configurations of \(u,\mathbf{o}_{m}\) that occur with sufficiently high probability and such that \(\mathbb{P}(\deg_{m^{\prime}}(u,\mathbf{o}_{m})\geqslant k)\) is bounded away from \(0\).
Proof of Theorem 1, lower bound.: To ease notation, we put \(\beta^{\prime}:=\beta/2\). First, let \(p:=\mathbb{P}\big{(}\mathcal{P}([0,\beta^{\prime}]\times[3/4,1])\geqslant m^ {\prime}\big{)}\) denote the probability that a \((\beta^{\prime}\times 0.25)\)-box contains at least \(m^{\prime}\) Poisson points. Furthermore, let \(M:=\lceil 2/p\rceil\). Then, consider the set \(B_{k}\subseteq[0,1]^{m+1}\times\mathbb{R}^{m}\) given by
\[B_{k}:=B^{\prime}_{k}\times[0,\beta^{\prime}k]^{m}:=\big{(}\prod_{j\leqslant m +1}[(j/(Mmk))^{1/\gamma},((j+1)/(Mmk))^{1/\gamma}]\big{)}\times[0,\beta^{ \prime}k]^{m}.\]
Since \(|B_{k}|\in\Omega(k^{-(m+1)/\gamma+m})\), we only need to verify the following two items for every \((u,\mathbf{o}_{m})\in B_{k}\).
1. The points \((o,\mathbf{o}_{m})\) form an \(m\)-simplex in the clique complex of the ADRCM.
2. It holds that \(\mathbb{P}\big{(}\deg_{m^{\prime}}(u,\mathbf{o}_{m})\geqslant k\big{)}\geqslant 1/2\).
For part (a), note that every \((u,\mathbf{o}_{m})\in B_{k}\) indeed defines an \(m\)-simplex since
\[\max_{i\leqslant m}|y_{i}|\leqslant\beta^{\prime}k\leqslant\beta^{\prime}((Mk)^{- 1/\gamma})^{-\gamma}\ \ \text{and}\ \max_{i,j\leqslant m}|y_{i}-y_{j}|\leqslant\beta^{\prime}k\leqslant\beta^{ \prime}((Mk)^{-1/\gamma})^{-\gamma}.\]
For part (b), we note that the events \(E_{i,k}:=\big{\{}\mathcal{P}\big{(}[i\beta^{\prime},(i+1)\beta^{\prime}] \times[3/4,1]\big{)}\geqslant m^{\prime}\big{\}}\) are independent for \(i\leqslant kM.\) Moreover, let \(N_{k}:=\sum_{i\leqslant kM}\mathbf{1}(E_{i,k})\) be the number of the events that occur. Then, \(N_{k}\) is a binomial random variable with \(kM\) trials and success probability \(p.\) Since \(kMp\geqslant 2k,\) the binomial concentration result implies that \(\mathbb{P}(N_{k}\geqslant k)\geqslant 1/2\) holds for sufficiently large \(k.\)
Hence, it suffices to show that almost surely, \(N_{k}\leqslant\deg_{m^{\prime}}(u,\mathbf{o}_{m}).\) To achieve this goal, we first note that for fixed \(i\leqslant kM\) any two points in \([i\beta^{\prime},(i+1)\beta^{\prime}]\times[0,1]\) are connected by an edge. Moreover, we claim that any \((Z,W)\in[0,\beta^{\prime}kM]\times[3/4,1]\) connects to \(o\) and to every \(o_{i},\)\(i\leqslant m.\) Now,
\[|Z-0|\leqslant\beta^{\prime}kM\leqslant\beta^{\prime}((kM)^{-1/\gamma})^{- \gamma}\ \ \text{and}\ \ \max_{i\leqslant m}|Z-y_{i}|\leqslant\beta^{\prime}kM\leqslant\beta^{\prime}(( kM)^{-1/\gamma})^{-\gamma}.\]
This concludes the proof since the Poisson concentration inequality (Penrose, 2003, Lemma 1.2) implies that \(\mathbb{P}(\mathcal{P}(C_{k})\geqslant k)\to 1\) as \(k\uparrow\infty.\)
### Proof of upper bound
In this subsection, we prove the upper bound for the simplex degree in Theorem 1. First, to provide the reader with a gentle introduction, we present the case of the in-degree, which was considered previously by (Gracar et al., 2019, Proposition 4.1). In fact, (Gracar et al., 2021, Lemma 4) is slightly more refined than Theorem 1 in the sense that it provides not only the asymptotics for the tail probabilities but for the entire probability mass function. Nevertheless, we include the short argument here because it makes the presentation self-contained and provides an intuition for the more complicated higher-order case.
The key observation is that conditioned on the arrival time \(u\) of the typical vertex \(o=(0,u),\) the in-degree is Poisson distributed. Indeed, by the restriction theorem, the in-neighbors form a Poisson point process for fixed \(u\)(Last and Penrose, 2016, Theorem 5.2).
Proof of upper bound for indegree.: First, note that if \(\mu(u)\leqslant k/2\) - where \(\mu(u)\) is the expected in-degree of the typical vertex introduced in (2) -, then by the Poisson concentration inequality, the probability \(\mathbb{P}(D_{\mathsf{in}}(u)\geqslant k)\) vanishes exponentially fast in \(k.\) Hence, we may assume that \(u\leqslant\mu^{-1}(k/2).\) Noting that (2) gives that \(\mu^{-1}(k/2)\in O(k^{-1/\gamma})\) concludes the proof.
To tackle the general case, we proceed in two steps. First, we reduce to the case where \(m^{\prime}=m+1,\) and then deal with this case. For the reduction step, the key idea is that the out-degree of a given vertex is Poisson distributed with a finite parameter (Gracar et al., 2019). Hence, the number of simplices containing a given point as its youngest vertex has rapidly decaying tail probabilities. In particular, there are only a few simplices containing a given vertex as its youngest vertex as this number is bounded from above by the outdegree of the vertex at hand.
We want to show that, for the higher-order degree of the typical \(m\)-simplex \(\Delta_{m}\),
\[\limsup_{k\uparrow\infty}\frac{1}{\log(k)}\log\mathbb{P}(\deg_{m^{\prime}}( \Delta_{m})\geqslant k)\leqslant m-\frac{m+1}{\gamma}. \tag{3}\]
Hence, using Proposition 6, we see that (3) is equivalent to
\[\limsup_{k\uparrow\infty}\frac{1}{\log(k)}\log\int_{0}^{1}\varphi_{k,m,m^{ \prime}}(u)\mathrm{d}u\leqslant m-\frac{m+1}{\gamma}. \tag{4}\]
where
\[\varphi_{k,m,m^{\prime}}(u):=\int_{\mathbb{T}^{m}}\mathbb{P}\big{(}\deg_{m^{ \prime}}(u,\mathbf{o}_{m})\geqslant k\big{)}g_{m}(u,\mathbf{o}_{m})\mathrm{d}\mathbf{o}_{ m}.\]
Proof of reduction to \(m^{\prime}=m+1\).: Let \(M(\mathbf{o}_{m}):=\bigcap_{j\leqslant m}M(o_{j})\) denote the common in-neighbors of \(o_{1},\ldots,o_{m}\). Then, the goal of this step is to reduce the problem to deriving the asserted power-law bound for the expression \(\mathbb{P}\big{(}\mathcal{P}(M(\mathbf{o}_{m}))\geqslant k\big{)}\). First, Lemma 7 gives that \(\varphi_{k,m,m^{\prime}}(u)\in O(u^{-\gamma})\). Hence, we may assume that \(u\geqslant k^{-K}\), where \(K\) is chosen such that \((1-\gamma)K=(m+1)/\gamma-m\).
Now, we note that any \(m^{\prime}\)-simplex containing the typical \(m\)-simplex consists of the \(m+1\) vertices of the typical simplex and \(m^{\prime}-m\) additional Poisson points. In particular, the number of \((m^{\prime}-m)\)-simplices containing the typical vertex \(o\) as its youngest vertex is at most \(D_{\mathsf{out}}(o)^{m^{\prime}-m}.\) Moreover,
\[\mathbb{P}\big{(}D_{\mathsf{out}}(o)^{m^{\prime}-m}\geqslant k\big{)}=\mathbb{ P}\big{(}D_{\mathsf{out}}(o)\geqslant k^{\frac{1}{m^{\prime}-m}}\big{)}, \tag{5}\]
which decays stretched exponentially by (Gracar et al., 2019, Proposition 4.1) and Poisson concentration (Penrose, 2003, Lemma 1.2).
Hence, it suffices to consider the number \(N_{m,m^{\prime}}\) of \(m^{\prime}\)-simplices incident to the typical \(m\)-simplex with the property that the youngest vertex is one of the \(m^{\prime}-m\) Poisson points \(P_{i}\in\mathcal{P}\). Again, the number of
\((m^{\prime}-m)\)-simplices having \(P_{i}\) as its youngest vertex is bounded above by \(D_{\text{out}}(P_{i})^{m^{\prime}-m}\). Hence, we have for any \(\varepsilon>0\) that
\[\mathbb{P}(N_{m,m^{\prime}}\geqslant k)\leqslant\mathbb{P}\Big{(}\sum_{P_{i}\in\mathcal{P}\cap M(\mathbf{o}_{m})}D_{\mathsf{out}}(P_{i})^{m^{\prime}-m}\geqslant k\Big{)}\leqslant\mathbb{P}\big{(}\mathcal{P}(M(\mathbf{o}_{m}))\geqslant k^{1-\varepsilon}\big{)}+\mathbb{P}\Big{(}\max_{P_{i}\in\mathcal{P}\cap M(\mathbf{o}_{m})}D_{\mathsf{out}}(P_{i})^{m^{\prime}-m}\geqslant k^{\varepsilon}\Big{)}.\]
We start with the innermost integral. Here, by Lemma 8, we deduce that if \(\mu(o_{m-1},o_{m})\geqslant k\), then
\[v_{m}\leqslant(\beta/\gamma)^{1/\gamma}k^{-1/\gamma}s_{\wedge}(y_{m-1}-y_{m},v_{m-1}).\]
Therefore,
\[\int_{\mathbb{T}}\mathds{1}\{\mu(o_{m-1},o_{m})\geqslant k\}\mathrm{d}o_{m} \leqslant(\beta/\gamma)^{1/\gamma}k^{-1/\gamma}\int_{-\infty}^{\infty}s_{ \wedge}(y_{m-1}-y_{m},v_{m-1})\mathrm{d}y_{m}. \tag{7}\]
Hence, applying part (a) of Lemma 9 shows that for some \(c>0\),
\[\int_{0}^{1}\int_{\mathbb{T}^{m}}\prod_{n\leqslant m}\mathds{1}\{\mu(o_{n-1}, o_{n})\geqslant k\}\mathrm{d}o_{m}\mathrm{d}u\leqslant ck^{-1/\gamma}\int_{ 0}^{1}\int_{\mathbb{T}^{m-1}}\prod_{n\leqslant m-1}\mathds{1}\{\mu(o_{n-1},o_{ n})\geqslant k\}v_{m-1}^{-\gamma}\mathrm{d}\mathbf{o}_{m-1}\mathrm{d}u.\]
We now continue to compute the integral over \(o_{m-1}\), which is the next innermost integral. More generally, we claim that for every \(n\geqslant 1\) and sufficiently small \(\eta>0\), we have that
\[\int_{\mathbb{T}}\mathds{1}\{\mu(o_{n-1},o_{n})\geqslant k\}v_{n}^{-\gamma- \eta}\mathrm{d}o_{n}\in O\big{(}k^{-(1-\gamma-\eta)/\gamma}v_{n-1}^{-\gamma- \eta}\big{)}.\]
First, similarly as in (7), for some \(c>0\),
\[\int_{\mathbb{T}}\mathds{1}\{\mu(o_{n-1},o_{n})\geqslant k\}v_{n}^{-\gamma-\eta}\mathrm{d}o_{n}\leqslant ck^{-(1-\gamma-\eta)/\gamma}\int_{-\infty}^{\infty}s_{\wedge}(y_{n-1}-y_{n},v_{n-1})^{1-\gamma-\eta}\mathrm{d}y_{n}.\]
Therefore, by part (b) of Lemma 9,
\[\int_{\mathbb{T}}\mathds{1}\{\mu(o_{n-1},o_{n})\geqslant k\}v_{n}^{-\gamma- \eta}\mathrm{d}o_{n}\in O\big{(}k^{-(1-\gamma-\eta)/\gamma}v_{n-1}^{-\gamma- \eta}\big{)}.\]
as asserted.
## 4 Proof of Theorem 2 - CLT for Betti numbers
In this section, we prove Theorem 2. The idea is to proceed similarly as [11, Theorem 5.2] and apply the general Poisson CLT from [2, Theorem 3.1]. While the general strategy is similar to that chosen by [11, Theorem 5.2], the long-range dependencies in the ADRCM require more refined argumentation. Therefore, we provide additional details here. For a locally finite set \(\varphi\subseteq\mathbb{R}\times[0,1]\) we let \(\beta(\varphi)=\beta_{q}(\varphi)\) denote the \(q\)th Betti number of the clique complex of the ADRCM on \(\varphi\). To state the conditions of [2, Theorem 3.1] precisely, we introduce the add-one cost operator
\[\delta(\varphi,u):=\beta(\varphi\cup\{(0,u)\})-\beta(\varphi).\]
Now, to apply [2, Theorem 3.1], we need to verify the following two conditions:
1. It holds that \(\sup_{n\geqslant 1}\mathbb{E}[\delta(\mathcal{P}_{n},U)^{4}]<\infty\) (**moment condition**).
2. It holds that \(\delta(\mathcal{P}\cap W_{n},U)\) converges almost surely to a finite limit as \(n\uparrow\infty\) (**weak stabilization**), where \(W_{n}=[-n/2,n/2]\times[0,1]\).
We now verify separately the weak stabilization and the moment condition. In both cases, we follow the general strategy outlined by [11, Theorem 5.2]. To make the presentation self-contained, we provide the most important steps of the proof. First, we consider the moment condition.
Proof of Theorem 2, moment condition.: As in the proof by [11, Theorem 5.2], we note that \(\delta(\varphi,U)\) is bounded above by the number of \(q\)- and \((q+1)\)-simplices containing \(o\). Thus,
\[\mathbb{E}[\delta(\varphi,U)^{4}] =\int_{0}^{\infty}\mathbb{P}[\delta(\varphi,U)\geqslant s^{1/4}] \mathrm{d}s\leqslant\int_{0}^{\infty}\mathbb{P}[\mathrm{deg}_{q}(o)+\mathrm{ deg}_{q+1}(o)\geqslant s^{1/4}]\mathrm{d}s\] \[\leqslant\int_{0}^{\infty}d_{0,q}(s^{1/4}/2)+d_{0,q+1}(s^{1/4}/2) \mathrm{d}s,\]
where the last inequality holds since, if \(\mathrm{deg}_{q}(o)+\mathrm{deg}_{q+1}(o)\geqslant s^{1/4}\), then at least one of \(\mathrm{deg}_{q}(o)\) or \(\mathrm{deg}_{q+1}(o)\) is larger than \(s^{1/4}/2\). Now, by Theorem 1, both \(d_{0,q}\) and \(d_{0,q+1}\) have tail index \(1/\gamma>4\), thereby showing the finiteness of the above integral.
Second, we consider the weak stabilization.
Proof of Theorem 2, weak stabilization.: For \(n\geqslant 1\), we write
\[\beta_{n}:=\dim(Z_{n})-\dim(B_{n}):=\dim(Z(\mathcal{P}_{n}))-\dim(B(\mathcal{P}_{ n}))\]
for the Betti number of the ADRCM constructed on \(\mathcal{P}_{n}\), noting that this characteristic is the dimension difference of the corresponding cycle space \(Z_{n}\) and boundary space \(B_{n}\), respectively. Similarly, we set
\[\beta^{\prime}_{n}:=\dim(Z^{\prime}_{n})-\dim(B^{\prime}_{n}):=\dim(Z( \mathcal{P}_{n}\cup\{o\}))-\dim(B(\mathcal{P}_{n}\cup\{o\})),\]
where we have now added the typical vertex, \(o=(0,U)\). Hence, it suffices to show the weak stabilization with respect to \(\dim(Z_{n})\) and \(\dim(B_{n})\) separately. We now discuss the case of \(\dim(Z_{n})\), noting that the arguments for \(\dim(B_{n})\) are very similar. To check weak stabilization, we show that the sequence \(\dim(Z^{\prime}_{n})-\dim(Z_{n})\) is increasing and bounded.
First, to show that \(\dim(Z^{\prime}_{n})-\dim(Z_{n})\) is bounded, we note that \(\dim(Z^{\prime}_{n})-\dim(Z_{n})\leqslant\deg_{q,n}(o)\), where, \(\deg_{q,n}(o)\) denotes the number of \(q\)-simplices in \(W_{n}\) containing the typical vertex \(o\). This is because the \(q\)-simplices constructed from \(\mathcal{P}_{n}\cup\{o\}\) can be decomposed into the set of \(q\)-simplices containing the typical vertex \(o\) and into the family of all simplices formed in \(\mathcal{P}_{n}\). We refer to the arguments by [13, Lemma 2.9] for the rigorous result. Now, almost surely, there exists \(n_{0}\geqslant 1\) such that for \(n\geqslant n_{0}\), the neighbors of \(o\) do not change any further. In particular, \(\dim(Z^{\prime}_{n})-\dim(Z_{n})\leqslant|K^{o}_{n}|\).
Second, we show that \(\dim(Z^{\prime}_{n})-\dim(Z_{n})\) is nondecreasing. To that end, we take \(n_{2}\geqslant n_{1}\) and consider the canonical map
\[Z^{\prime}_{n_{1},q}\to Z^{\prime}_{n_{2},q}/Z_{n_{2},q},\]
where the index \(q\) refers to the dimension of the cycle space. Then, any cycle contained in the kernel of this map consists of simplices formed by vertices in \(\mathcal{P}_{n_{1}}\). In other words, the kernel equals \(Z_{n_{1},q}\), which shows that the induced map
\[Z^{\prime}_{n_{1},q}/Z_{n_{1},q}\to Z^{\prime}_{n_{2},q}/Z_{n_{2},q}\]
is injective. In particular, \(\dim(Z^{\prime}_{n_{1}})-\dim(Z_{n_{1}})\leqslant\dim(Z^{\prime}_{n_{2}})- \dim(Z_{n_{2}})\), as asserted.
## 5 Proof of Theorems 3 and 4 - asymptotics of edge counts
In this section, we prove Theorems 3 and 4. In both results, the idea is to write
\[S_{n}=\sum_{i\leqslant n}T_{i}:=\sum_{i\leqslant n}\sum_{P_{j}\in[i-1,i]\times[0,1]}D_{\mathsf{in}}(P_{j}),\]
i.e., to express the edge count \(S_{n}\) as the sum over \(i\leqslant n\) of the indegrees of all vertices contained in \([i-1,i]\times[0,1]\). For the proofs of Theorems 3 and 4, it will be important to compute variances of suitable sums of indegrees. For ease of reference, we therefore state such bounds as a general auxiliary result. To make this precise, we henceforth let
\[S(B)=\sum_{P_{j}\in B}D_{\mathsf{in}}(P_{j})\]
denote the indegree sum for all vertices contained in the space-time region \(B\subseteq\mathbb{T}\).
**Lemma 10** (Variance of accumulated indegrees).: _Let \(\gamma\neq 1/2\), \(A\subseteq\mathbb{R}\) and \(u_{*}>0\). Then, there exists a constant \(c>0\) such that_

\[\mathsf{Var}\big{(}S(A\times[u_{*},1])\big{)}\leqslant c\Big{(}|A|(1+u_{*}^{1-2\gamma})+\iint_{A\times A}\int_{u_{*}}^{1\wedge(\beta/|x-y|)}s_{\wedge}(u,|x-y|)^{\gamma}\,\mathrm{d}u\,\mathrm{d}(x,y)\Big{)}.\]
Proof.: First, we note that
\[D_{\mathsf{in}}\big{(}(x,u),\mathcal{P}\cup\{(x,u),(y,v)\}\big{)}=D_{\mathsf{in}}\big{(}(x,u),\mathcal{P}\cup\{(x,u)\}\big{)}+\mathds{1}\{(y,v)\in M(x,u)\}.\]
Hence, by the Mecke formula (Last and Penrose, 2016, Theorem 4.4) with \(B:=A\times[u_{*},1]\),
\[\mathsf{Var}(S(B))= \int_{B}\mathbb{E}[D_{\mathsf{in}}(x,u)^{2}]\mathrm{d}(x,u)+\int_{B}\int_{B}\mathsf{Cov}\big{(}D_{\mathsf{in}}(x,u),D_{\mathsf{in}}(y,v)\big{)}\mathrm{d}(x,u)\mathrm{d}(y,v)\] \[+2\int_{B}\mathbb{E}[D_{\mathsf{in}}(x,u)]\big{|}\{(y,v)\in B\colon(x,u)\in M(y,v)\}\big{|}\mathrm{d}(x,u),\]
where \(D_{\mathsf{in}}(x,u)\) denotes the in-degree of the vertex \((x,u)\) in the ADRCM constructed on \(\mathcal{P}\cup\{(x,u)\}\). Now, note that \(|\{(y,v)\in B\colon(x,u)\in M(y,v)\}|\in O(1)\) and that \(\mathbb{E}[D_{\mathsf{in}}(x,u)]\leqslant\mathbb{E}[D_{\mathsf{in}}(x,u)^{2}]\). Hence, it suffices to bound the sum
\[\int_{B}\mathbb{E}[D_{\mathsf{in}}(x,u)^{2}]\mathrm{d}(x,u)+\int_{B}\int_{B}\mathsf{Cov}\big{(}D_{\mathsf{in}}(x,u),D_{\mathsf{in}}(y,v)\big{)}\mathrm{d}(x,u)\mathrm{d}(y,v),\]
and we deal with the two summands separately.
We start by bounding \(\mathbb{E}[D_{\mathsf{in}}(x,u)^{2}]\). Since \(D_{\mathsf{in}}(x,u)\) is a Poisson random variable with mean \(\mu(u)\in O(u^{-\gamma})\), the Poisson concentration inequality shows that \(\mathbb{E}[D_{\mathsf{in}}(x,u)^{2}]\in O(u^{-2\gamma})\). Now, we note that \(\int_{u_{*}}^{1}u^{-2\gamma}\mathrm{d}u=(1-2\gamma)^{-1}(1-u_{*}^{1-2\gamma})\), which is of order \(O(u_{*}^{1-2\gamma})\) for \(\gamma>1/2\) and of order \(O(1)\) for \(\gamma<1/2\).
To bound the covariance recall from (6) that a point \((z,w)\) connects to both \((x,u)\) and \((y,v)\) if and only if \((z,w)\in M\big{(}(x,u),(y,v)\big{)}\). Hence, by the independence property of the Poisson process, \(\mathsf{Cov}\big{(}D_{\mathsf{in}}(x,u),D_{\mathsf{in}}(y,v)\big{)}=\mu\big{(}(x,u),(y,v)\big{)}\). In particular, applying Lemma 8 concludes the proof.
First, we prove the CLT for the edge count in the regime \(\gamma<1/2\), where we will rely on a general CLT for associated random variables [Whitt, 2002, Theorem 4.4.3]. Here, we recall that the real-valued random variables \(T_{1},\ldots,T_{k}\) are _associated_ if
\[\mathsf{Cov}\big{(}f_{1}(T_{1},\ldots,T_{k}),f_{2}(T_{1},\ldots,T_{k})\big{)} \geqslant 0.\]
holds for any coordinatewise increasing functions \(f_{1},f_{2}\colon\mathbb{R}^{k}\to[0,\infty)\).
Proof of Theorem 3.: Since the in-degrees are an increasing function in the underlying Poisson point process, we conclude from the Harris-FKG theorem [Last and Penrose, 2016, Theorem 20.4] that the random variables \(\{T_{i}\}\) are associated. Hence, to apply [Whitt, 2002, Theorem 4.4.3], it remains to prove that \(\mathsf{Var}(T_{1})<\infty\) and \(\sum_{k\geqslant 2}\mathsf{Cov}(T_{1},T_{k})<\infty.\) The finiteness of \(\mathsf{Var}(T_{1})\) follows from Lemma 10 so that it remains to consider the covariance sum.
We prove that \(\mathsf{Cov}(T_{1},T_{k})\in O(k^{-1-\gamma})\), recalling that \(\gamma<1/2.\) Proceeding similarly as in Lemma 10, and setting \(a:=|x-y|\), we need to show that \(\int_{0}^{\beta/a}s_{\wedge}(a,u)^{\gamma}\mathrm{d}u\in O(k^{-\gamma-1}).\) Hence, applying part (c) of Lemma 9 concludes the proof.
Next, we prove Theorem 4, i.e., the stable limit theorem for the edge count. Before proving Theorem 4, we stress that while there are several general limit results in the literature for deriving the distributional convergence to \(\alpha\)-stable limits [Basrak et al., 2012, Decreusefond et al., 2016, Heinrich and Wolf, 1993], these do not apply in our setting. More precisely, it is difficult to verify [Basrak et al., 2012, Condition 3.3] since the ADRCM is mixing but not \(\phi\)-mixing. Second, [Decreusefond et al., 2016, Theorem 7.8] give a general convergence result of Poisson functionals to \(\alpha\)-stable random variables with \(\alpha\in(0,1)\). However, this corresponds to the case where \(\gamma>1\), which is not possible due to the model constraints. While [Decreusefond et al., 2016, Remark 7.9] state that in principle, the method should generalize to \(\alpha\in(1,2)\), the ensuing computations lead to substantial technical difficulties. Third, Heinrich and Wolf [1993] derive a general limit result for U-statistics based on iid input. However, in our setting, we work in a growing domain, so the distributions change after every step.
Before starting the proof of Theorem 4, it will be convenient to review the classical stable limit theorem for iid sequences from [Whitt, 2002, Theorem 4.5.2]. To ease presentation, we restrict to the present setting of nonnegative random variables. More precisely, let \(\{X_{i}\}_{i}\) be iid nonnegative random variables such that \(\mathbb{P}(X_{i}>x)\sim Ax^{-\alpha}\) for some \(\alpha\in(1,2)\) and \(A>0.\) Then, \(n^{-1/\alpha}(\sum_{i\leqslant n}X_{i}-n\mathbb{E}[X_{1}])\) converges in distribution to an \(\alpha\)-stable random variable \(\mathcal{S}\).
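The following Monte Carlo sketch (our own illustration, assuming numpy) makes this classical statement tangible for Pareto-type summands: the normalized sums display the pronounced right tail and skewness of an \(\alpha\)-stable limit rather than Gaussian behaviour.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n, reps = 1.4, 50_000, 1_000  # 1 < alpha < 2

sums = np.empty(reps)
for r in range(reps):
    # classical Pareto: P(X > x) = x**(-alpha) for x >= 1, so E[X] = alpha/(alpha-1)
    sums[r] = (rng.pareto(alpha, size=n) + 1.0).sum()

normalized = (sums - n * alpha / (alpha - 1.0)) / n ** (1.0 / alpha)

# Heavy right tail and positive skewness are characteristic of the stable limit.
skew = np.mean((normalized - normalized.mean()) ** 3) / normalized.std() ** 3
print("skewness:", skew)
print("upper vs lower 1% quantile:",
      np.quantile(normalized, 0.99), np.quantile(normalized, 0.01))
```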
A key step proving Theorem 4 is a truncation argument, which we first discuss in the iid case.
**Lemma 11** (Truncation in the iid case).: _Let \(\{X_{i}\}_{i}\) be iid random variables with \(\mathbb{P}(X_{i}>x)\sim Ax^{-\alpha}\) for some \(\alpha\in(1,2)\) and \(A>0.\) Then, for every \(a<1/\alpha\),_
\[n^{-1/\alpha}\Big{(}\sum_{i\leqslant n}X_{i}\mathds{1}\{X_{i}\leqslant n^{a}\}-n\mathbb{E}[X_{1}\mathds{1}\{X_{1}\leqslant n^{a}\}]\Big{)}\xrightarrow{L^{2}}0.\]
Proof.: Since the \(X_{i}\) are iid, the claim follows by showing that \(\mathsf{Var}(X_{i}\mathds{1}\{X_{i}\leqslant n^{a}\})\in o(n^{2/\alpha-1}).\) Now,
\[\mathbb{E}[X_{i}^{2}\mathds{1}\{X_{i}^{2}\leqslant n^{2a}\}]=\int_{0}^{n^{2a} }\mathbb{P}(X_{1}^{2}\in[r,n^{2a}])\mathrm{d}r.\]
Since \(\mathbb{P}(X_{1}^{2}\geqslant r)\asymp Ar^{-\alpha/2}\) we note that \(\int_{1}^{n^{2a}}\mathbb{P}(X_{1}^{2}>r)\mathrm{d}r\in O(n^{2a(1-\alpha/2)}).\) Hence, observing that \(2a(1-\alpha/2)<2/\alpha-1\) concludes the proof.
Now, we return to the case of the edge count in the ADRCM. The idea of the proof is to decompose \(S_{n}\) as \(S_{n}^{\geqslant}+S_{n}^{\leqslant}\), where \(S_{n}^{\geqslant}\) and \(S_{n}^{\leqslant}\) contain the contributions of the young and the old vertices, respectively. More precisely, for \(u_{n}:=n^{-0.9}\) put
\[S_{n}^{\geqslant}:=\sum_{P_{i}\in[0,n]\times[u_{n},1]}D_{\mathsf{in}}(P_{i}),\quad\text{ and }\quad S_{n}^{\leqslant}:=\sum_{P_{i}\in[0,n]\times[0,u_{n}]}D_{\mathsf{in}}(P_{i}).\]
First, we control the deviations of \(S_{n}^{\geqslant}\) via the Chebyshev inequality.
**Proposition 12** (\(S_{n}^{\geqslant}\) is negligible).: _It holds that \(n^{-\gamma}(S_{n}^{\geqslant}-\mathbb{E}[S_{n}^{\geqslant}])\) converges to 0 in probability._
Proof.: To prove the claim, we apply Lemma 10 with \(A=[0,n]\) and \(u_{*}=u_{n}\). In particular, the first summand in Lemma 10 is then of order \(O(nu_{n}^{1-2\gamma})\). Now, since \(-0.9(1-2\gamma)+1<2\gamma\), we get \(nu_{n}^{1-2\gamma}\in o(n^{2\gamma})\). Hence, it suffices to bound the second summand in Lemma 10. Here, we can apply part (a) of Lemma 9, which shows that \(\int_{T_{k}}s_{\wedge}(a,u)^{\gamma}\mathrm{d}(a,u)\in O(1)\), thereby concluding the proof.
Second, we approximate \(S_{n}^{\leq}\) by a sum of iid Pareto random variables so that we can apply the stable CLT [Whitt, 2002, Theorem 4.5.2].
**Proposition 13** (\(S_{n}^{\leq}\) converges to a stable distribution).: _It holds that \(n^{-\gamma}(S_{n}^{\leq}-\mathbb{E}[S_{n}^{\leq}])\) converges in distribution to a stable random variable._
To maintain a clear structure, we conclude the proof of Theorem 4 before establishing Proposition 13.
Proof of Theorem 4.: By Proposition 12, \(n^{-\gamma}(S_{n}^{\geq}-\mathbb{E}[S_{n}^{\geq}])\) tends to \(0\) in probability, and \(n^{-\gamma}(S_{n}^{\leq}-\mathbb{E}[S_{n}^{\leq}])\) tends in distribution to a stable random variable. Hence, also
\[n^{-\gamma}(S_{n}-\mathbb{E}[S_{n}])=n^{-\gamma}(S_{n}^{\geq}-\mathbb{E}[S_{n }^{\geq}])+n^{-\gamma}(S_{n}^{\leq}-\mathbb{E}[S_{n}^{\leq}])\]
tends in distribution to a stable random variable.
It remains to prove Proposition 13. That is, the renormalized sum of the large in-degrees converges to a stable distribution. To make this precise, we introduce two further approximations, namely \(S_{n}^{(1)}\) and \(S_{n}^{(2)}\) that we define now. In these approximations, we replace the in-degree by its expectation, and replace the Poisson number of points in \([0,n]\) by a fixed number, respectively. More precisely, we set
\[S_{n}^{(1)}:=\sum_{\begin{subarray}{c}X_{i}\in[0,n]\\ U_{i}\leqslant u_{n}\end{subarray}}\mu(U_{i})\quad\text{ and }\quad S_{n}^{(2)}:=\sum_{\begin{subarray}{c}i\leqslant n\\ U_{i}\leqslant u_{n}\end{subarray}}\mu(U_{i}).\]
The key step in the proof of Proposition 13 is to show that each of these expressions is close in \(L^{1}\)-norm.
**Lemma 14** (\(S_{n}^{\leq},S_{n}^{(1)}\) and \(S_{n}^{(2)}\)).: _It holds that \(\mathbb{E}[|S_{n}^{\leq}-S_{n}^{(1)}|]+\mathbb{E}[|S_{n}^{(1)}-S_{n}^{(2)}|]\in o (n^{\gamma})\)._
Before proving Lemma 14, we explain how to conclude the proof of Proposition 13.
Proof of Proposition 13.: By Lemma 14, it suffices to show that \(n^{-\gamma}(S_{n}^{(2)}-\mathbb{E}[S_{n}^{(2)}])\) converges in distribution to a stable random variable. We note that by construction, the summands \(\mu(U_{i})\), \(i\leq n\) are iid. Moreover, by Lemma 8, \(\mu(u)\sim(\beta/\gamma)u^{-\gamma}\). Hence, an application of Lemma 11 concludes the proof.
It remains to prove Lemma 14.
Proof of Lemma 14.: We prove the two parts separately.
\(\mathbb{E}[|\mathbf{S_{n}^{\leq}-S_{n}^{(1)}}|]\).: By the Mecke formula, it suffices to show that
\[n\int_{[0,u_{n}]}\mathbb{E}\big{[}|D_{\mathsf{in}}(o)-\mu(u)|\big{]}\mathrm{d}u\in o(n^{\gamma}).\]
To achieve this goal, we use that the mean absolute deviation of a Poisson random variable with parameter \(\lambda\) is given by \(2\lambda^{\lfloor\lambda\rfloor+1}e^{-\lambda}/\lfloor\lambda\rfloor!\). Specializing to \(\lambda=\mu(u)\) and applying the Stirling formula shows that \(\mathbb{E}[|D_{\mathsf{in}}(o)-\mu(u)|]\in O(u^{-0.6\gamma})\). Therefore,
\[n\int_{[0,u_{n}]}\mathbb{E}\big{[}|D_{\mathsf{in}}(o)-\mu(u)|\big{]}\mathrm{d}u\in O(nu_{n}^{1-0.6\gamma}).\]
Now, since \(1-0.9(1-0.6\gamma)<\gamma\), we deduce that \(nu_{n}^{1-0.6\gamma}\in o(n^{\gamma})\), thereby concluding the proof.
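The Poisson mean absolute deviation formula used above can be double-checked numerically; the snippet below is our own sanity check, assuming numpy and scipy.

```python
import math
import numpy as np
from scipy.stats import poisson

lam = 3.7
k = np.arange(0, 200)
mad_exact = np.sum(np.abs(k - lam) * poisson.pmf(k, lam))
mad_formula = (2.0 * lam ** (math.floor(lam) + 1) * math.exp(-lam)
               / math.factorial(math.floor(lam)))
print(mad_exact, mad_formula)  # the two values agree
```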
\(\mathbb{E}[|\mathbf{S_{n}^{(1)}-S_{n}^{(2)}}|]\).: Let \(N\) be a Poisson random variable with parameter \(n\). Then,
\[\mathbb{E}[|S_{n}^{(1)}-S_{n}^{(2)}|]\leq\mathbb{E}\big{[}|N-n|\big{]}\int_{0 }^{u_{n}}\mu(u)\mathrm{d}u.\]
First, the CLT for iid random variables gives that \(\mathbb{E}\big{[}|N-n|\big{]}\in O(\sqrt{n})\). Furthermore,
\[\int_{0}^{u_{n}}\mu(u)\mathrm{d}u\in O(u_{n}^{1-\gamma}).\]
Therefore, \(\mathbb{E}\big{[}|S_{n}^{(1)}-S_{n}^{(2)}|\big{]}\in O(\sqrt{n}u_{n}^{1- \gamma})\). Hence, noting that \(1/2-0.9+0.9\gamma<\gamma\) concludes the proof.
## 6 Proof of Theorem 5
We deal with parts (a) and (b) of Theorem 5 separately.
### Proof of part (a)
We start with part (a). In the assertion, we need to establish an upper and a lower bound for the probability that the typical degree in the thinned graph \(G^{\mathsf{th},\gamma}\) is large. First, we discuss the lower bound since in the proof, we can ignore the distinction between exposed and protected edges.
Proof of Theorem 5(a), lower bound.: Let \(G^{\prime}\) be obtained by independent edge thinning, where all edges of the ADRCM \(G\) are eligible to be removed. Moreover, the retention probability of an edge \((Y,V)\to o=(0,U)\) is set as \(U^{\eta}\). Then, \(G^{\prime}\subseteq G^{\mathsf{th},\eta}\) so that \(\mathbb{P}\big{(}\deg_{G^{\mathsf{th},\eta}}(o)\geqslant k\big{)}\geqslant\mathbb{P}\big{(}\deg_{G^{\prime}}(o)\geqslant k\big{)}\).
Now, the thinning theorem for Poisson point processes implies that, conditioned on \(U\), the retained in-neighbors form a Poisson point process. Hence, conditioned on \(U\), the in-degree is a Poisson random variable with mean \(U^{\eta}\mu(U)\). Moreover,
\[\mathbb{P}\big{(}\deg_{G^{\prime},\mathsf{in}}(o)\geqslant k\big{)}\geqslant \mathbb{P}(U^{\eta}\mu(U)\geqslant 2k)-\mathbb{P}\big{(}\deg_{G^{\prime}, \mathsf{in}}(o)\leqslant k,U^{\eta}\mu(U)\geqslant 2k\big{)}.\]
By Poisson concentration, the second probability on the right decays exponentially in \(k\), whereas (2) yields that \(\mathbb{P}(U^{\eta}\mu(U)\geqslant 2k)\) is of order \(k^{-1/(\gamma-\eta)}\), as asserted.
Next, we prove the upper bound for the tail probabilities of the vertex degrees. That is, an upper bound for the probability that the typical degree is very large. Since in the model \(G^{\mathsf{th},\eta}\) only the exposed edges are thinned out, this is more difficult than the lower bound. Loosely speaking, we need to ensure that the number of protected edges is negligible so that it does not matter whether or not they are considered in the thinning. To achieve this goal, in Lemma 15, we bound the power-law exponent of the number of protected edges leading to the typical node \(o\).
**Lemma 15** (Power-law for the vertex degree of protected edges).: _It holds that_
\[\lim_{k\uparrow\infty}\log(d_{k}^{\mathsf{pr},\eta})/\log(k)=1-2/\gamma,\]
_where \(d_{k}^{\mathsf{pr},\eta}\) denotes the probability that the typical vertex \(o\) is incident to at least \(k\) protected edges._
Before proving Lemma 15, we conclude the proof of the upper bound in part (a) of Theorem 5.
Proof Theorem 5(a), upper bound.: As in the proof of the lower bound, we let \(G^{\prime}\) be the graph obtained by independent edge thinning, where we allow all edges to be thinned. Moreover, we let \(I_{\mathsf{pr}}\) denote the number of protected edges incident to \(o\). Then,
\[\mathbb{P}\big{(}\deg_{G^{\mathsf{th},\eta}}(o)\geqslant k\big{)}\leqslant\mathbb{P}\big{(}\deg_{G^{\prime}}(o)\geqslant k/2\big{)}+\mathbb{P}\big{(}I_{\mathsf{pr}}\geqslant k/2\big{)}.\]
By Lemma 15, the second probability on the right hand side is of order at most \(k^{1-2/\gamma+o(1)}\). Hence, to conclude the proof, we need to show that the first probability is of order at most \(k^{-1/(\gamma-\eta)+o(1)}\). To that end, we proceed as in the proof of the lower bound. More precisely,
\[\mathbb{P}\big{(}\deg_{G^{\prime}}(o)\geqslant k\big{)}\leqslant\mathbb{P} \big{(}\deg_{G^{\prime}}(o)\geqslant k,U^{\eta}\mu(U)\leqslant k/2\big{)}+ \mathbb{P}(U^{\eta}\mu(U)\geqslant k/2).\]
Again, by Poisson concentration, the first probability on the right-hand side decays exponentially in \(k\), whereas the second one is of order \(k^{-1/(\gamma-\eta)+o(1)}\), as asserted.
Now, we prove Lemma 15. The idea is to carefully distinguish between different cases of how an edge can be protected, and then to bound each of the resulting probabilities separately.
Proof of Lemma 15.: Our goal is to bound the probability that the number of protected edges leading to \(o\) is at least \(k\geqslant 1\). By definition, it suffices to bound \(\mathbb{P}(|\mathcal{P}^{(1)}|\geqslant k)\) and \(\mathbb{P}(|\mathcal{P}^{(2)}|\geqslant k)\), where
1. \(\mathcal{P}^{(1)}:=\big{\{}(Z,W)\in M(o)\colon U\leqslant W\leqslant 2U\big{\}}\).
2. \(\mathcal{P}^{(2)}:=\big{\{}(Z,W)\in M(o)\colon(Z,W)\to(Y,V)\) for some \((Y,V)\in\mathcal{P}\) with \(V\leqslant 2U\leqslant 4V\big{\}}\).
We now deal with the two cases separately and heavily rely on the result from [13, Proposition 4.1] that, conditioned on \(U=u\), the in-degree of \(o\) is Poisson-distributed with mean \(\mu(u)\).
\(\mathbb{P}(|\mathcal{P}^{(1)}|\geqslant k)\).: We note that conditioned on \(U=u\), the quantity \(|\mathcal{P}^{(1)}|\) is a Poisson random variable with mean \(\int_{u}^{2u}|I_{\beta u^{-\gamma}v^{\gamma-1}}|\,\mathrm{d}v=\beta(2^{\gamma}-1)/\gamma\). Hence, \(\mathbb{P}(|\mathcal{P}^{(1)}|\geqslant k)\) decays exponentially fast in \(k\).
\(\mathbb{P}(|\mathcal{P}^{(2)}|\geqslant k)\).: Note that if \((Z,W)\to(0,U)\) and \((Z,W)\to(Y,V)\), then
\[|Y|\leqslant|Z|+|Z-Y|\leqslant 2\beta U^{-\gamma}W^{\gamma-1}.\]
For any \(\varepsilon>0\), the probabilities \(\mathbb{P}\big{(}\mathcal{P}\big{(}[-\beta U^{-1},\beta U^{-1}]\times[U,2U] \big{)}\geqslant k^{\varepsilon}\big{)}\) decay at stretched exponential speed. Indeed, conditioned on \(U=u\), the random variable \(\mathcal{P}\big{(}[-\beta u^{-1},\beta u^{-1}]\times[u,2u]\big{)}\) is Poisson distributed with mean \(2\beta\) so that the asserted decay is a consequence of the Poisson concentration inequality.
Therefore, recalling (6), it suffices to bound
\[\int_{0}^{1}\int_{u/2}^{1}\int_{-\infty}^{\infty}\mathbb{P}\big{(} \mathcal{P}\big{(}M\big{(}(0,u),(y,v)\big{)}\big{)}\geqslant k^{1-\varepsilon} \big{)}\mathrm{d}y\mathrm{d}v\mathrm{d}u.\]
Again, applying the Poisson concentration inequality reduces this task to bounding
\[\int_{0}^{1}\int_{u/2}^{1}\int_{-\infty}^{\infty}\mathds{1}\{\mu \big{(}(0,u),(y,v)\big{)}\geqslant k^{1-\varepsilon}\big{)}\mathrm{d}y\mathrm{ d}v\mathrm{d}u. \tag{8}\]
Since \(\mu\big{(}(0,u),(y,v)\big{)}\leqslant\mu(u)\), we conclude that if \(\mu\big{(}(0,u),(y,v)\big{)}\geqslant k^{1-\varepsilon}\), then \(u\leqslant ck^{-(1-\varepsilon)/\gamma}\) for some \(c>0\). Moreover, we deduce from Lemma 8 that \(\mu\big{(}(0,u),(y,v)\big{)}\leqslant(\beta/\gamma)v^{-\gamma}s_{\wedge}(y,u)^ {\gamma}\). Therefore, (8) is bounded above by
\[(\beta/\gamma)^{1/\gamma}k^{-(1-\varepsilon)/\gamma}\int_{0}^{ ck^{-(1-\varepsilon)/\gamma}}\int_{-\infty}^{\infty}s_{\wedge}(y,u)\mathrm{d}y \mathrm{d}u.\]
Hence, an application of part (a) of Lemma 9 concludes the proof.
### Proof of part (b)
Next, we prove part (b) of Theorem 5. That is, the thinning operation does not affect the power-law exponent of the edge degree. Loosely speaking, the idea is that even after removing all exposed edges, the protected edges are sufficient to sustain a positive proportion of all the triangles leading to the high edge degrees in the ADRCM.
As in the proof of part (a), we show upper and lower bounds for the tail probabilities separately. We start with the proof of the upper bound. Intuitively, it is not surprising that removing edges reduces the edge degrees. Nevertheless, to make the presentation self-contained, we give a rigorous proof.
Proof of Theorem 5(b), upper bound.: The key idea is to use the Palm representation of the typical edge degree. More precisely,
\[d_{1,k}=\mathbb{P}\big{(}\deg_{2}(\Delta_{1})\geqslant k\big{)}= \frac{1}{\lambda_{2}}\mathbb{E}\Big{[}\sum_{\begin{subarray}{c}(X,U),(Y,V) \in\mathcal{P}\\ (Y,V)\to(X,U)\end{subarray}}\mathds{1}\{X\in[0,1]\}\mathds{1}\{\deg_{2}\big{(} (X,U),(Y,V)\big{)}\geqslant k\}\Big{]},\]
where \(\lambda_{2}>0\) denotes the edge intensity of \(G\). Similarly, by writing \(\to^{\eta}\) to indicate a directed edge in the graph \(G^{\mathsf{th},\eta}\), we get that
\[d^{\prime}_{1,k}=\frac{1}{\lambda_{2}^{(\eta)}}\mathbb{E}\Big{[}\sum_{\begin{subarray}{c}(X,U),(Y,V)\in\mathcal{P}\\ (Y,V)\to^{\eta}(X,U)\end{subarray}}\mathds{1}\{X\in[0,1]\}\mathds{1}\{\deg_{G^{\mathsf{th},\eta}}\big{(}(X,U),(Y,V)\big{)}\geqslant k\}\Big{]},\]
where \(\lambda_{2}^{(\eta)}\) is the edge intensity of the thinned graph. Now, noting that \(G^{\mathsf{th},\eta}\) is a subgraph of \(G\) implies that \(d^{\prime}_{1,k}\lambda_{2}^{(\eta)}\leqslant d_{1,k}\lambda_{2}\). In particular, \(\limsup_{k\uparrow\infty}\log(d^{\prime}_{1,k})/\log(k)\leqslant 1-2/\gamma\), as asserted.
The lower bound is more delicate since we need to show that triangles formed by the protected edges are sufficient to sustain the original edge degree even after the thinning. By monotonicity, it suffices to establish the asserted lower bound for the graph \(G^{\prime\prime}:=G^{(\infty)}\), i.e., the graph where only the protected edges are retained.
Proof of Theorem 5(b), lower bound.: As a preliminary observation, we note that a directed edge of the form \((Y,V)\to(X,U)\) with \(V\leqslant 2U\) is never exposed. Hence, as in the non-thinned case in Theorem 1, we need to derive a lower bound for the expression
\[\int_{0}^{1}\int_{u}^{2u}\int_{-\infty}^{\infty}T(u,v,y)\mathrm{d}y\mathrm{d}v \mathrm{d}u,\]
where \(T(u,v,y):=\mathbb{P}\big{(}\deg_{2}\big{(}(y,v),(0,u)\big{)}\geqslant k\big{)}\). To achieve this goal, we derive a lower bound for \(T(u,v,y)\) when \((u,v,y)\) is in the domain
\[B_{k}:=[(\beta/(64k))^{1/\gamma},(\beta/(32k))^{1/\gamma}]\times[( \beta/(32k))^{1/\gamma},(\beta/(16k))^{1/\gamma}]\times[0,8k].\]
First, note that \((y,v)\to(0,u)\) for every \((u,v,y)\in B_{k}\) as \((\beta/2)u^{-\gamma}v^{\gamma-1}\geqslant 16k\geqslant y\). Since \(|B_{k}|\in O(k^{1-2/\gamma})\), it therefore suffices to show that \(T(u,v,y)\) is uniformly bounded away from \(0\) for \((u,v,y)\in B_{k}\).
To achieve this goal, we first note that any point \((z,w)\in C_{k}:=[0,8k]\times[3/4,1]\) connects to both \((o,u)\) and \((y,v)\). Indeed,
\[|z-0|\leqslant 8k\leqslant(\beta/2)((\beta/(32k))^{1/\gamma})^{-\gamma}\ \text{ and }\ |z-y|\leqslant 8k\leqslant(\beta/2)((\beta/(16k))^{1/\gamma})^{-\gamma}.\]
Noting that \(v\leqslant 2u\) implies that both edges are protected and therefore also exist in \(G^{\mathsf{th},\eta}\). Now, we conclude since the Poisson concentration inequality implies that \(\mathbb{P}(\mathcal{P}(C_{k})\geqslant k)\to 1\) as \(k\uparrow\infty\).
## 7 Simulation study
This section serves as a bridge between the theory and its applications to real-world data. Specifically, we study to what extent the methods and limit theorems derived for the ADRCM apply to finite networks. Our Monte Carlo approach involves simulating multiple networks with identical model parameters. Subsequently, we calculate various network properties and subject them to statistical analysis, often entailing parameter estimation for theoretical probability distributions. Relying on Palm calculus, we also explore the simulation of typical simplices in infinite networks to examine fluctuations of different quantities around the limit, devoid of finite size effects.
### Simulation methods
To simulate a finite network, we follow a step-by-step process as outlined below; a minimal code sketch is given after the list.
1. We begin by fixing the network size, setting the volume \(V\) of the sampling window equal to the expected vertex number in the network. The vertex number \(N\) is drawn from a Poisson distribution with parameter \(V\). This step determines the actual vertex number in the network.
2. Next, we generate the birth times of the \(N\) vertices. Conditioned on the vertex number, the birth times are uniformly distributed. Thus, the birth times are generated by drawing \(N\) iid uniformly distributed random variables from the interval \([0,1]\). For each vertex, its position is also generated independently and uniformly across the entire sampling window. This process corresponds to sampling the spatial Poisson point process conditioned on the point count.
3. Connections between vertices are created based on the following condition. For every pair of vertices \((x,u)\) and \((y,v)\), where \(u\leqslant v\), a connection is formed if the distance between the vertices satisfies \(|x-y|\leqslant\frac{1}{2}\beta u^{-\gamma}v^{\gamma-1}\). This criterion governs the establishment of connections in the network.
4. Finally, the generated binary network is expanded to a clique complex. This simplicial complex allows for topological analysis and examination of higher-order network properties.
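A minimal Python sketch of steps 1 to 3 is given below; the function name, the default parameter values, and the quadratic loop over vertex pairs are illustrative choices only (a production implementation would exploit the bounded connection radius), and boundary effects at the edges of the sampling window are ignored.

```python
import numpy as np

def simulate_adrcm(volume=1_000.0, beta=1.0, gamma=0.7, seed=0):
    """Sample a finite ADRCM on the window [0, volume] (steps 1-3 above)."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(volume)                       # step 1: Poisson vertex number
    births = rng.uniform(0.0, 1.0, size=n)        # step 2: iid uniform birth times
    positions = rng.uniform(0.0, volume, size=n)  # step 2: uniform positions
    order = np.argsort(births)                    # sort so that u <= v below
    births, positions = births[order], positions[order]
    edges = []
    for i in range(n):                            # step 3: connection rule
        u, x = births[i], positions[i]
        for j in range(i + 1, n):
            v, y = births[j], positions[j]
            if abs(x - y) <= 0.5 * beta * u ** (-gamma) * v ** (gamma - 1.0):
                edges.append((i, j))
    return births, positions, edges
```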
Figure 2 shows the largest component of a generated network of size \(1\,000\,000\) with \(\gamma=0.7\).
To avoid the influence of finite size effects and simulate typical simplices in infinite-size networks, we use Palm calculus. The main idea is to focus only on the immediate neighborhood of a typical vertex placed at the origin, thereby eliminating the presence of finite-size effects for the central vertex. In this neighborhood, other vertices can form connections with the central vertex and with each other as well. Any vertex that cannot form a connection with the vertex at the origin is not considered. The simulation of a single network is visualized in Figure 3.
1. **Typical vertex.** We begin by randomly placing a vertex \((u,0)\) at the origin of the sampling window with a uniformly distributed birth time \(u\).
2. **Simulation of older vertices.** We create older vertices to which the vertex \((u,0)\) connects by simulating a homogeneous Poisson point process in the red shaded area. The number of older vertices in the red area born up to time \(v_{0}\leqslant u\) is Poisson distributed with parameter \[\int\limits_{0}^{v_{0}}|I_{\beta v^{-\gamma}u^{\gamma-1}}|\,\mathrm{d}v=\frac{\beta}{1-\gamma}\,u^{\gamma-1}v_{0}^{1-\gamma}.\] To generate the birth times \(\{v_{i}\}\) of the points, we simulate a homogeneous Poisson point process \(\{w_{i}\}\) in the domain \([0,\,u^{1-\gamma}]\) with intensity \(\beta u^{\gamma-1}/(1-\gamma)\). The cardinality of \(\{w_{i}\}\) will have the same
Figure 2: The largest component of a network sample generated by the ADRCM
distribution as the point count in the red area. We then transform \(\{w_{i}\}\) to receive the set of birth times: \(\{v_{i}\}=\{w_{i}^{1/(1-\gamma)}\}\). The transformation ensures that the birth times \(\{v_{i}\}\) have the required density. The positions \(y_{i}\) of the vertices are chosen uniformly in the respective domain \([-\frac{1}{2}\beta v_{i}^{-\gamma}u^{\gamma-1},\ \frac{1}{2}\beta v_{i}^{-\gamma}u^{ \gamma-1}]\).
3. **Simulation of younger vertices.** Simulation of the younger neighbors of the typical vertex is similar. The number of younger vertices in the green area born up to time \(v_{0}\geqslant u\) is again Poisson distributed with parameter \[\int\limits_{u}^{v_{0}}|I_{\beta u^{-\gamma}v^{\gamma-1}}|\,\mathrm{d}v=\frac{\beta}{\gamma}\left(u^{-\gamma}v_{0}^{\gamma}-1\right).\] To generate the birth times \(\{v_{i}\}\), we use a homogeneous Poisson point process \(\{w_{i}\}\) in the domain \([u^{\gamma},\ 1]\) with intensity \(\beta u^{-\gamma}/\gamma\), so that the number of elements in \(\{w_{i}\}\) has the same distribution as the number of younger vertices connecting to the typical vertex. Then, we transform \(\{w_{i}\}\) as before to get the birth times \(\{v_{i}\}=\{w_{i}^{1/\gamma}\}\). The positions \(y_{i}\) are chosen uniformly in \([-\frac{1}{2}\beta u^{-\gamma}v_{i}^{\gamma-1},\ \frac{1}{2}\beta u^{-\gamma}v_{i}^{\gamma-1}]\) (see the code sketch after this list).
4. **Clique complex.** As before, the generated simple graph is expanded to a clique complex. Note that the simplices in the clique complex that include the central vertex \((0,u)\) at the origin are not subject to finite-size effects.
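The birth-time sampling in steps 2 and 3 can be sketched as follows, using the inverse transformations \(\{v_{i}\}=\{w_{i}^{1/(1-\gamma)}\}\) and \(\{v_{i}\}=\{w_{i}^{1/\gamma}\}\) described above; the function and variable names are illustrative, and the clique-complex expansion of step 4 is omitted.

```python
import numpy as np

def sample_palm_neighbors(u, beta=1.0, gamma=0.7, seed=0):
    """Birth times and positions of the neighbors of the typical vertex at the origin."""
    rng = np.random.default_rng(seed)

    # Older neighbors: homogeneous PPP {w_i} on [0, u^(1-gamma)] with intensity
    # beta * u^(gamma-1) / (1-gamma); total mean = intensity * length = beta / (1-gamma).
    n_old = rng.poisson(beta / (1.0 - gamma))
    w_old = rng.uniform(0.0, u ** (1.0 - gamma), size=n_old)
    v_old = w_old ** (1.0 / (1.0 - gamma))
    y_old = rng.uniform(-1.0, 1.0, n_old) * 0.5 * beta * v_old ** (-gamma) * u ** (gamma - 1.0)

    # Younger neighbors: homogeneous PPP {w_i} on [u^gamma, 1] with intensity
    # beta * u^(-gamma) / gamma; total mean = (beta/gamma) * (u^(-gamma) - 1).
    n_young = rng.poisson(beta * u ** (-gamma) / gamma * (1.0 - u ** gamma))
    w_young = rng.uniform(u ** gamma, 1.0, size=n_young)
    v_young = w_young ** (1.0 / gamma)
    y_young = rng.uniform(-1.0, 1.0, n_young) * 0.5 * beta * u ** (-gamma) * v_young ** (gamma - 1.0)

    return (v_old, y_old), (v_young, y_young)
```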
### Higher-order degree distributions of the ADRCM
First, we illustrate that the higher-order degree distributions converge to their theoretical limit for increasing network size.
To estimate the parameters of power-law distributions, we face two problems. First, the domain in which the power-law distribution holds is not identical to the entire domain of the data. As discussed in Section 2, the degree of a typical vertex is a Poisson random variable whose parameter is itself a heavy-tailed random variable. Thus, the power-law distribution will only be visible for empirical values that are larger than a minimal value \(x_{\min}\), from where the influence of the Poisson distributions is negligible. On the other hand, \(x_{\min}\) cannot be too large since in this case the estimation of the power-law exponent becomes too inaccurate due to the low number of values above \(x_{\min}\). Considering these two effects, we carried out a pilot study and found \(x_{\min}=30\) to be a good compromise. With the domain at hand, we can estimate the exponent \(a\) of the power-law distributions via maximum likelihood (Clauset et al., 2009). In our setting, this means that
\[\hat{a}=1+n\,\Big{[}\sum_{i\leqslant n}\log\Big{(}\frac{x_{i}}{x_{\min}- \frac{1}{2}}\Big{)}\Big{]}^{-1},\]
where the index \(i\) goes over the data points \(x_{i}\geqslant x_{\min}\).
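A minimal implementation of this estimator is sketched below; the default cut-off \(x_{\min}=30\) is the value found in the pilot study above, and the function name is illustrative.

```python
import numpy as np

def fit_power_law_exponent(degrees, x_min=30):
    """Discrete power-law MLE of Clauset et al. (2009) applied to the tail above x_min."""
    tail = np.asarray([d for d in degrees if d >= x_min], dtype=float)
    n = tail.size
    return 1.0 + n / np.sum(np.log(tail / (x_min - 0.5)))
```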
The vertex, edge, and triangle degree distributions of a generated network sample with \(100\,000\) vertices and \(\gamma=0.7\) can be seen in Figure 4 which illustrates the challenges in estimating power-law exponents for degree distributions. In the small-degree range, the power-law tail of the distribution is hidden due to the Poisson distribution. However, as the degrees exceed \(\sim 30\), the power-law tail is apparent.
In Theorem 1, we demonstrated that both the ordinary and the higher-order degree distributions follow a power-law tail. However, this result is rigorously established only for infinitely large networks. To apply this theorem to real data sets of finite size, it is essential to investigate the extent to which these findings hold for finite networks.
To address this, we conducted Monte Carlo simulations for finite network sizes. For each network size, we generated \(100\) networks with a parameter \(\gamma=0.7\). The power-law distribution was then fitted to their degree distributions using the described method. This process yielded \(100\) exponents for vertex, edge, and triangle degree distributions. Given that the parameters of the underlying ADRCM remained constant, this set of exponents provided a basis for statistical analysis. By repeating this procedure for networks of varying size, we assessed the convergence of degree distribution exponents to the theoretical limit established in Theorem 1.
Figure 3: Simulation of the Palm distribution. A typical vertex is placed to the origin with fixed birth time \(u\). The typical vertex connects to older vertices in the red shaded area, whereas younger vertices connect to the typical vertex in the green shaded area of the graph.
Additionally, we examined the simulation of Palm distributions using the same approach. For this case, \(100\,000\) infinite networks were simulated to fit the degree distribution exponents, equivalent to \(100\,000\) typical vertices. The edges and triangles considered in the simulation of the Palm distribution were those involving a special vertex placed at the origin.
The results of the simulations are depicted in Figure 5, presenting three sets of boxplots summarizing the distribution of the fitted exponents. The three subfigures visually illustrate the convergence of fitted exponents towards the theoretical limit, indicated by a red horizontal line. From the observed results, the following conclusions can be drawn.
* As the network size increases, the fluctuation of fitted exponents decreases. Smaller networks (with fewer than \(1\,000\) vertices) exhibit significant fluctuations, while larger networks (with over \(10\,000\) vertices) tend to approach the theoretical limit more closely. Infinite networks display the least fluctuations.
* For a given network size, higher dimensions lead to larger fluctuations in the fitted exponents. This suggests that considering higher dimensions introduces more variability in estimating the exponents of degree distributions.
* Fitted exponents for finite networks tend to be higher than the theoretical values, indicating a bias in the estimation process. This bias is attributed to the constraint on the maximum degree in each dimension due to the finite size, which results in the truncation of degree distribution tails. For small degrees, such truncation is absent. These effects lead to higher degree distribution exponents. The negligible bias observed in the distribution of exponents for "infinite" networks supports this explanation.
### Edge count of the ADRCM
In Theorem 3, we demonstrated that the edge count in large networks follows a normal distribution if \(\gamma<0.5\). Conversely, Theorem 4 established that the edge count distribution can be described by a stable distribution if \(\gamma>0.5\). To validate these claims in finite networks, we conducted an analysis of the edge count distribution in finite networks containing \(100\,000\) vertices.
For each of the three selected values of parameter \(\gamma\) (\(0.25\), \(0.50\), and \(0.60\)), we simulated \(1000\) networks with \(\beta=1\). Then, we examined the distributions of the edge counts for each of the three cases by fitting both a normal and a stable distribution to the empirical values.
To fit a normal distribution, we estimated the expectation as the sample mean and the variance as the sample variance. When fitting the stable distribution, we utilized the insights from Theorem 4 to set the \(\alpha\) and \(\beta\) parameters directly: \(\alpha=1/\gamma\) if \(\gamma>0.5\), otherwise \(\alpha=2\), and \(\beta=1\). The "location" and "scale" parameters needed to be estimated from the empirical distribution. For this purpose, we employed maximum likelihood estimation [Nolan, 2001].
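As a sketch of this fitting step, with \(\alpha\) and \(\beta\) fixed by the theory and only the location and scale estimated, one can minimize the negative log-likelihood built from `scipy.stats.levy_stable`; evaluating the stable density is numerically expensive, so this is meant as an illustration rather than the exact routine used for the figures.

```python
import numpy as np
from scipy.stats import levy_stable
from scipy.optimize import minimize

def fit_stable_loc_scale(samples, alpha, beta):
    """MLE of location and scale for a stable law with alpha and beta held fixed."""
    samples = np.asarray(samples, dtype=float)

    def neg_log_lik(params):
        loc, log_scale = params
        scale = np.exp(log_scale)  # keep the scale positive
        pdf = levy_stable.pdf(samples, alpha, beta, loc=loc, scale=scale)
        return -np.sum(np.log(np.maximum(pdf, 1e-300)))

    x0 = np.array([np.median(samples), np.log(samples.std() + 1e-12)])
    res = minimize(neg_log_lik, x0, method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])
```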
Figure 4: Degree distributions of the ADRCM
Figure 5: Degree distribution exponents
Figure 6 visually represents the results of our analysis, showing the distributions of the edge counts for each of the three cases: \(\gamma=0.25\), \(\gamma=0.50\), and \(\gamma=0.60\).
The subfigures in Figure 6 provide a comprehensive view of the empirical and fitted distributions of the edge counts, along with Q-Q (Quantile-Quantile) plots for comparing the empirical and fitted distributions. The top row displays the empirical and fitted distributions, while the second and third rows present the Q-Q plots for the fitted normal distributions and the fitted stable distributions.
When \(\gamma=0.25\), the distribution of edge counts appears symmetric, and the fitted normal distribution closely aligns with the empirical data. However, for \(\gamma=0.6\), a fat right tail is clearly visible in the empirical distribution. This heavy-tailed behavior is not adequately captured by the fitted normal distribution, as evidenced by the deviation from the diagonal line in the Q-Q plot for the normal distribution. In contrast, the stable distribution provides a better fit, aligning well with the data points in the Q-Q plot. Interestingly, for \(\gamma=0.5\), the normal distribution does not describe the data as effectively as it does for \(\gamma=0.25\). We offer two potential reasons for this observation:
* The finite size of the network: In this case, a few high-degree vertices may contribute significantly to the total edge count. However, for a sufficiently large network, these contributions would be spread among many such vertices, leading to a more normal-like distribution.
* The boundary case of \(\gamma=0.5\): At this value, the degree distributions have an infinite variance, which can affect the distribution characteristics and may not be accurately captured by a normal distribution.
Supporting the validity of Theorem 4, the above observations suggest that the normal distribution appears
Figure 6: Distribution of the edge count for different \(\gamma\) parameters
to be a reasonably good fit when \(\gamma<0.5\), but the stable distribution explains the data more accurately if \(\gamma>0.5\).
### Betti numbers of the ADRCM
To establish the validity of Theorem 2 for finite networks, we conducted simulations on finite networks containing \(100\,000\) vertices. In this case, due to computational costs of computing Betti numbers, we performed \(100\) simulations for each of the three different values of parameter \(\gamma\): \(0.25\), \(0.50\), and \(0.67\).
For values of \(\gamma=0.25\), aligning with the findings of Theorem 2, we approximated the empirical values of the first Betti numbers with a normal distribution. For values of \(\gamma=0.67\), we observed the distribution of the Betti numbers and conjectured that they follow a stable distribution with stability parameter \(\alpha=1/\gamma\). We posit this based on the expectation that the infinite variance of the simplex count leads to a corresponding infinite variance of the Betti numbers.
In all cases, the parameter \(\beta\) of the stable distribution remained constant at \(-1\). As in Section 7.3, we estimated the remaining parameters of the fitted distributions via maximum likelihood. The results are visualized in Figure 7.
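For reference, the Betti numbers of a simulated clique complex can be computed, for instance, with the `gudhi` library as sketched below; this assumes the 1-skeleton is available as an edge list and is not necessarily the exact pipeline used for Figure 7.

```python
import gudhi

def betti_numbers_of_clique_complex(num_vertices, edges, max_dim=2):
    """Betti numbers of the clique (flag) complex of a graph, up to dimension max_dim."""
    st = gudhi.SimplexTree()
    for v in range(num_vertices):
        st.insert([v])        # vertices
    for i, j in edges:
        st.insert([i, j])     # edges of the 1-skeleton
    st.expansion(max_dim)     # fill in triangles (and higher cliques) up to max_dim
    _ = st.persistence()      # persistence must be computed before querying Betti numbers
    return st.betti_numbers()
```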
From the Q-Q plots it is evident that the fitted normal distribution provides a satisfactory approximation to the distribution of the Betti numbers for the simulations with \(\gamma=0.25\) and \(\gamma=0.50\). The points on the Q-Q plots are closely aligned with the diagonal line, indicating a good fit.
Figure 7: Distribution of the first Betti numbers with different \(\gamma\) parameters
However, for \(\gamma=0.67\), the distribution displays a heavy left tail, which is clearly visible both from the histogram and the Q-Q plot against the normal distribution: the points on the Q-Q plot significantly deviate from the diagonal line in the lower quantiles. The shallow slope of the points in the central section suggests that the standard deviation is not accurately captured by the normal distribution, which is also an artifact of the heavy left tail. In contrast, the stable distribution fits the histogram more accurately, as visualized in the Q-Q plot shown in Plot 7i. We can also see that the left tail is not entirely accurate in the stable distribution case. This is explained by two effects:
* the simulation number is low, thus there are not enough values in the left tail to precisely estimate the distribution;
* the minimum value in the distribution is 0, suggesting the presence of finite-size effects.
All in all, the points on the Q-Q plot against the stable distribution follow more closely the diagonal line both in the central region and in the left tail of the distribution, reinforcing our earlier conjecture about the stable distribution of Betti numbers for \(\gamma>0.5\).
## 8 Analysis of collaboration networks
In this section, we analyze four datasets collected from arXiv to showcase the applications of our results and to further motivate our model extensions. As higher-order relationships appear naturally in the case of scientific collaborations, we chose to analyze a publicly available dataset of scientific papers. The authors of the papers are represented as vertices in a simplicial complex, whereas each paper represents a higher-order interaction of the authors.
Patania et al. (2017) also investigate higher-order collaboration networks on arXiv data and extend the concept of triadic closure to higher dimensions. However, their analysis was purely empirical and did not consider the question of using a stochastic higher-order network model. In contrast, we compare the arXiv dataset with the ADRCM and also perform hypothesis tests. We also note that although we consider a different time frame, the Betti numbers we found are largely consistent with the results published by Patania et al. (2017).
### Datasets
We analyze all available documents uploaded to arXiv from various scientific fields. For each document, we extracted the author names, the publication time and its primary category. The datasets were built using the primary categories of the documents the authors specified.
* **Computer Science (cs)**: The computer science dataset is the largest we analyze with more than \(400\,000\) authors.
* **Engineering (eess)**: The second dataset we analyze consists of documents from the scientific field of electrical engineering, which is built from around \(80\,000\) authors.
* **Mathematics (math)**: The mathematics dataset encompasses around \(200\,000\) authors.
* **Statistics (stat)**: The smallest dataset we analyze contains documents from the field of statistics, including around \(45\,000\) authors.
The largest components of the datasets are visualized in Figure 8, and their most important characteristics are summarized in Table 1.
As arXiv does not uniquely identify authors, we chose to use their full names as identifiers. Although in the case of a common full name, this method will result in treating distinct authors as if they were the same, the effect of these identifier collisions is greatly reduced as we consider scientific fields separately.
After identifying the authors as vertices, each document is considered as a higher-order interaction of the authors. This means that every document with \(n+1\) authors is represented by an \(n\)-simplex. Furthermore,
Figure 8: The largest component of the simplicial complexes built from the data sets
as our goal is to build a simplicial complex, every lower-dimensional face of this \(n\)-simplex is also added to the simplicial complex to ensure that it is closed under taking subsets.
A document with \(n+1\) authors has \(\binom{n+1}{m+1}\) \(m\)-faces, so in total \(\sum_{m=0}^{n}\binom{n+1}{m+1}=2^{n+1}-1\) simplices need to be considered. This poses a twofold practical implementation challenge.
1. Due to computational reasons, we must limit the maximum dimension of the simplicial complex.
2. The more authors a document has, the higher its influence is on the simplicial complex, as the number of simplices grows exponentially with the number of authors. Carstens and Horadam (2013) have also found the same problem when analyzing collaboration networks. They tackled this problem by weighting the simplices: they assigned greater weights to smaller simplices, and to those, in which the represented collaboration was frequent. Although introducing weighted simplices is possible, it is beyond the scope of our present work.
Taking into account the above aspects, we consider interactions with a dimension of at most 20 (which, including all the faces, means more than 2 million simplices in total for a document with 21 authors). To further reduce computational complexity, we analyzed the 2-skeleton of the collaboration network, with the triangles being the highest dimensional simplices.
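The construction described above can be sketched as follows; `documents` is assumed to be a list of per-document author-name lists, and only faces up to dimension 2 are collected, mirroring the restriction to the 2-skeleton.

```python
from itertools import combinations

def build_two_skeleton(documents, max_authors=21):
    """Collect the vertices, edges, and triangles of the collaboration complex."""
    vertices, edges, triangles = set(), set(), set()
    for authors in documents:
        if len(authors) > max_authors:   # skip documents with more than 21 authors
            continue
        authors = sorted(set(authors))   # author full names serve as identifiers
        vertices.update(authors)
        edges.update(combinations(authors, 2))
        triangles.update(combinations(authors, 3))
    return vertices, edges, triangles
```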
Using this procedure, we built four separate datasets, each representing publications of a specific scientific field published up to August 4, 2023. As we will see, the nature of collaborations differs significantly in the four cases; thus, by considering the four scientific fields separately, we can examine how the ADRCM model behaves for four distinct scientific communities.
It is also interesting to analyze the distribution of the dimension of the higher-order interactions, or, equivalently, of the per-document author count. Figure 9 visualizes the distribution of the per-document author count for each dataset, revealing the typical size of the collaborations scientists participate in within each of the examined scientific fields.
The distributions related to the cs and eess datasets have the fattest tails, i.e., a relatively higher number of documents has many authors. On the other hand, the opposite is true for the math and stat datasets, where most papers tend to have a lower number of authors. The dataset diversity of the different fields opens the opportunity to comprehensively examine the application of the theorems stated in Section 2.
To fit the ADRCM to the datasets, we need to set two model parameters. First, we can use Theorem 1 to estimate the parameter \(\gamma\) describing the datasets based on their vertex or higher-order degree distributions. The vertex and edge-degree distributions are visualized in Figure 10.
All plots exhibit a drop in the empirical distributions at the value 20. This is explained by the exclusion of documents with more than 21 authors. Due to combinatorial reasons, this discontinuity is more pronounced in the case of the edge-degree distributions. For heavier-tailed distributions, a larger number of documents has more than 21 authors, leading to a greater drop for the cs and eess datasets. As explained in the
\begin{table}
\begin{tabular}{|l|r|r|r|r|} \hline dataset & authors & documents & components & size of largest component \\ \hline cs & 433 244 & 452 881 & 22 576 & 370 494 \\ eess & 77 686 & 69 594 & 5 533 & 54 147 \\ math & 198 601 & 466 428 & 26 197 & 152 441 \\ stat & 44 380 & 36 689 & 4 049 & 32 373 \\ \hline \end{tabular}
\end{table}
Table 1: Main properties of the datasets
Figure 9: Distribution of authors per documents
beginning of Section 8.1, including these documents would lead to the problem of the high influence of a few high-dimensional interactions as described earlier.
### Higher-order degree distributions
Just as in the case of the simulations, fitting the parameters of the power-law distribution poses computational challenges once again. When determining the minimum value \(x_{\min}\) from which the power law is visible, the goal is to find a balance between two conflicting interests.
* On the one hand, choosing a low minimum degree value would ensure enough data points in the degree distributions so that the estimate of the exponent is less noisy.
* On the other hand, choosing a high minimum degree value would remove the noise from light-tailed components of the degree.
Considering both effects, we found that setting the minimum value \(x_{\min}=10\) is a good compromise for fitting the power-law distributions. We note that this choice is more conservative than the one used in Section 7, where we set \(x_{\min}=30\). This is because we found the datasets to be more noisy than the simulated networks. Hence, we chose to use a smaller minimum value to enlarge the number of data points used for the fitting. After fitting the power-law distributions, we can use Theorem 1 to infer the \(\gamma\) model parameter based on the fitted power-law exponents.
The fitted exponents and the \(\gamma\) model parameters inferred from these exponents are summarized in Table 2.
In general, the edge-degree distributions have a thinner tail compared to that of the related vertex-degree distributions. We can see that the parameter \(\gamma\) differs substantially when inferred from the vertex- and edge-degree distributions, respectively. This observation shows that the ADRCM, with the connection kernel we apply, is not flexible enough to capture the binary and the higher-order features at the same time. We henceforth infer \(\gamma\) from the vertex-degree distributions due to the following reasons. First, they are less affected by the high-dimensional interactions: a document with \(n\) authors contributes \(n\) values to the vertex degrees, while it is represented by \(\binom{n}{2}\) values in the edge-degree distribution. Additionally, the computation of the vertex-degree distribution only requires the consideration of pairwise relationships, which the original ADRCM was designed to describe.
As discussed by Gracar et al. (2019), the parameter \(\beta\) governs the asymptotic edge density (the expected number of edges containing a vertex) of the generated networks through the formula \(\mathbb{E}[d_{0,1}]=\beta/(1-\gamma)\)
\begin{table}
\begin{tabular}{|l|r|r|r|r|} \hline \multirow{2}{*}{dataset} & \multicolumn{2}{c|}{vertex degree} & \multicolumn{2}{c|}{edge degree} \\ & exponent & inferred \(\gamma\) & exponent & inferred \(\gamma\) \\ \hline cs & -2.39 & 0.72 & -3.76 & 0.53 \\ eess & -2.98 & 0.50 & -4.14 & 0.48 \\ math & -2.79 & 0.56 & -4.47 & 0.45 \\ stat & -2.96 & 0.51 & -4.86 & 0.41 \\ \hline \end{tabular}
\end{table}
Table 2: Fitted exponents of the degree distributions and the inferred \(\gamma\) model parameters
Figure 10: Vertex-degree distributions (top) and edge-degree distributions (bottom) of the data sets
Thus, using the above formula, we estimate the parameter \(\beta\) from the mean vertex degree of the datasets.
The mean vertex degrees and the estimated parameter \(\hat{\beta}\) are shown in Table 3.
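The two fitting steps combine into the short recipe below; it assumes the ADRCM relation between the vertex-degree density exponent \(a\) and \(\gamma\), namely \(a=1+1/\gamma\) (consistent with the values in Table 2), together with \(\mathbb{E}[d_{0,1}]=\beta/(1-\gamma)\).

```python
def infer_adrcm_parameters(degree_exponent, mean_degree):
    """Infer (gamma, beta) from the fitted vertex-degree exponent and the mean degree.

    Assumes the ADRCM relation a = 1 + 1/gamma for the density exponent a
    (consistent with Table 2) and E[d_01] = beta / (1 - gamma).
    """
    gamma = 1.0 / (abs(degree_exponent) - 1.0)
    beta = mean_degree * (1.0 - gamma)
    return gamma, beta

# Example (cs dataset): exponent -2.39 and mean degree 9.57 give gamma ~ 0.72, beta ~ 2.7.
```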
After fitting the model parameters, we can generate synthetic networks. We simulated a representative network for each dataset, whose largest components are visualized in Figure 11. When comparing with the plots of the actual datasets, we observe that although the ADRCM is capable of generating triangles and tetrahedra, it has a tendency to produce globally tree-like structures.
### Triangle counts
Next, we examine if the simplex counts in the ADRCM match those in the datasets. The simplex counts of the datasets are presented in Table 4.
The number of vertices is matched by the model on expectation as we choose the size of the sampling window accordingly. It is also irrelevant to examine the edge count, being asymptotically fixed through the parameter \(\beta\). Consequently, the first nontrivial dimension to consider is the triangle count.
As shown in Theorem 4, the number of edges follows a stable distribution if the model parameter \(\gamma\) is larger than \(0.5\), which is the case for our datasets. We conjecture that the distributions of higher-dimensional simplex counts also follow a stable distribution for \(\gamma>0.5\).
To study the distribution of the triangle counts, we simulated \(100\) networks with estimated parameters \(\hat{\beta}\) and \(\hat{\gamma}\) determined according to the datasets. For fitting stable distributions to the triangle counts, we use the method detailed in Section 7: while the parameters \(\alpha\) and \(\beta\) of the stable distribution are predicted based
\begin{table}
\begin{tabular}{|l|r|r|r|} \hline dataset & vertices & edges & triangles \\ \hline cs & 433 244 & 2 073 235 & 4 055 220 \\ eess & 77 686 & 276 947 & 562 382 \\ math & 198 601 & 455 130 & 321 406 \\ stat & 44 380 & 114 003 & 135 800 \\ \hline \end{tabular}
\end{table}
Table 4: Number of simplices of different dimensions in the datasets
Figure 11: The largest component of simulated ADRCMs with fitted parameters.
\begin{table}
\begin{tabular}{|l|r|r|} \hline dataset & mean vertex degree & \(\widehat{\beta}\) \\ \hline cs & 9.57 & 2.69 \\ eess & 7.13 & 3.54 \\ math & 4.58 & 2.02 \\ stat & 5.14 & 2.52 \\ \hline \end{tabular}
\end{table}
Table 3: Mean vertex degree & \(\hat{\beta}\)
on our mathematical conjecture, the location and scale parameters are estimated via maximum likelihood. The fitted parameters are presented in Table 5.
After empirically verifying the simplex-count distribution, we conduct a hypothesis test based on the triangle counts. Our null model is the ADRCM model with the connection kernel from Section 2. The dataset values are marked by vertical green dashed lines in Figure 12. We conclude that in all cases, the ADRCM contains substantially more triangles compared to the dataset. In particular, the null hypothesis is rejected at the 5% level.
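A sketch of the corresponding one-sided test is given below: the observed triangle count of a dataset is compared against the stable distribution fitted to the simulated counts, and since the datasets lie in the lower tail here, the p-value is the left-tail probability; the function name and the rejection rule are illustrative.

```python
from scipy.stats import levy_stable

def left_tail_p_value(observed, alpha, beta, loc, scale):
    """One-sided p-value of an observed count under the fitted stable null model."""
    return levy_stable.cdf(observed, alpha, beta, loc=loc, scale=scale)

# Reject the ADRCM null hypothesis at the 5% level if the p-value falls below 0.05.
```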
### Betti numbers
The presence of loops is an important feature of collaboration networks as they quantify their interconnectedness. In Section 7, we provided numerical evidence for the conjecture that the Betti numbers follow a stable distribution if \(\gamma>0.5\). On this basis, we can conduct a similar hypothesis test as above on the first Betti numbers, where we use the ADRCM as the null model.
The Betti numbers of the datasets we aim to test are presented in Table 6.
We again simulate 100 networks using the ADRCM with the fitted model parameters. As in Section 8.3, the parameters \(\alpha\) and \(\beta\) of the stable distributions are predicted by our conjecture, while the location and scale parameters are fitted via maximum likelihood. After fitting the stable distributions, we visualize the hypothesis testing in Figure 13. The parameters of the considered stable distributions are given in Table 7. In particular, the real datasets contain significantly more loops than the networks generated by the ADRCM; thus the null hypothesis is rejected.
As explained by Patania et al. (2017), the loops in the network can be interpreted as bridges between communities. Hence, they are important features of scientific collaboration networks. Our analysis indicates that this community structure features a rich spatial correlation pattern, which cannot be captured fully by a simple two-parameter model such as the ADRCM.
We believe that the reason for this phenomenon is at least partly the chosen connection kernel. The vertices connect to many vertices within their neighborhood with probability 1, thereby making it difficult to form loops. This suggests that the ADRCM generates networks that appear tree-like on a global level with relatively few large loops.
To illustrate this idea, we carried out a pilot study, where we examined the influence of the connection kernel on the first Betti numbers. More precisely, we can employ the more general connection kernel where
\begin{table}
\begin{tabular}{|l|r|r|} \hline dataset & Betti-0 & Betti-1 \\ \hline cs & 22 576 & 168 770 \\ eess & 5 533 & 7 419 \\ math & 26 197 & 78 009 \\ stat & 4 049 & 7 275 \\ \hline \end{tabular}
\end{table}
Table 6: Betti numbers of the datasets
\begin{table}
\begin{tabular}{|l|r|r|r|r|} \hline dataset & \(\hat{\alpha}\) & \(\hat{\beta}\) & location & scale \\ \hline cs & 1.39 & -1.0 & 37 & 12.83 \\ eess & 1.98 & -1.0 & 105 & 9.66 \\ math & 1.79 & -1.0 & 490 & 19.16 \\ stat & 1.96 & -1.0 & 126 & 8.99 \\ \hline \end{tabular}
\end{table}
Table 7: Parameter estimates of the stable distributions for Betti-1
Figure 12: Stable distribution and hypothesis testing of the triangle counts for the datasets. The model parameters were determined based on the parameters of the datasets.
two vertices \((x,u),(y,v)\in\mathcal{P}\) with \(u\leqslant v\) connect with probability \(1/(2a)\) (\(a\geqslant 1/2\)), whenever \(|x-y|\leqslant a\,\beta u^{-\gamma}v^{\gamma-1}\)[Gracar et al., 2019]. Note that for \(a=0.5\), the connection kernel coincides with the one introduced in Section 2. Increasing the newly introduced model parameter \(a\) increases the distance of the vertices in which connections can be established. On the other hand, to keep the expected number of connections of the vertices intact, it simultaneously reduces the connection probability.
For \(\beta=1\) and \(\gamma=0.6\), we simulated six sets of \(100\) networks each, with a network size of \(100\,000\). We then gradually increased the value of the parameter \(a\) from the default value of \(0.5\), and kept track of the increase of the first Betti numbers. The results are shown in Table 8, and we conclude that even a slight increase of \(a\) results in a drastic growth of the first Betti numbers.
## 9 Conclusion and outlook
To analyze higher-order network structures, we investigated the ADRCM as a clique complex.
First, we examined how the neighborhood of simplices of different dimensions are organized and proved that the higher-order degree distributions have a power-law tail in the limit for large networks. Next, we proved that in the limit for large networks, the recentered and suitably rescaled edge count follows a normal distribution if the model parameter \(\gamma\) is less than \(0.5\), and a stable distribution for \(\gamma>0.5\). Turning our attention to the topological features, a CLT was proved for the Betti numbers if \(\gamma<0.25\). Recognizing the limitations of the ADRCM model, we devised a "thinning" procedure where certain types of edges are removed independently with a given thinning probability. This provided us with the possibility to adjust the edge degree exponents, while keeping the power-law exponent of the vertex degree distribution intact.
To show that the above theoretical results can be used in real-world data sets, we examined the extent to which the theorems are valid for finite networks by simulating several networks using identical model parameters. We found that the convergence of specific quantities to their limiting behavior is already clearly visible in networks of reasonable size. Furthermore, we also provided numerical evidence supporting our conjectures regarding the stable distribution of the Betti numbers when \(\gamma>0.5\).
Finally, after showing that the theoretical results are applicable to networks of finite size, we analyzed real-world scientific collaboration networks from arXiv. Following an exploratory analysis of these higher-order collaboration networks, we fitted the model parameters to the data. Developing hypothesis tests, we showed that, although several properties are well described by the higher-order ADRCM, topologically important quantities, such as Betti numbers or the higher-dimensional simplex counts, are not well explained. Looking ahead, we present several directions for future research.
One promising avenue is to introduce Dowker complexes or weighted simplices in the network representation as proposed by Baccini et al. [2022]. Similarly to binary networks, incorporating weighted connections can describe a richer set of phenomena with simplicial complex models. Furthermore, by carefully tuning these weights, we can control and bound the influence of large simplices to avoid the large effects that high-dimensional interactions introduce due to the combinatorial explosion.
Incorporating time-dependent information into the analysis of higher-order networks would enrich our understanding of their evolution and temporal behavior. Exploring the dynamic aspect in the arXiv data sets opens up possibilities for detecting changes in the topology of scientific fields over time.
To gain a comprehensive understanding of network structures, we can investigate different embedding spaces to examine how the embedding space influences the topological and geometric features of the generated networks. Related to alternative embedding spaces, the investigation of alternative connection kernels could also lead to novel network models that better describe the topological properties of higher-order networks that might be missed by traditional network representations.
Figure 13: Hypothesis testing of Betti-1 for the datasets.
\begin{table}
\begin{tabular}{|r|r|r|r|r|r|r|} \hline parameter \(a\) & \(0.5\) & \(0.6\) & \(0.7\) & \(0.8\) & \(0.9\) & \(1.0\) \\ \hline mean of Betti-1 & \(170\) & \(4\,873\) & \(10\,976\) & \(17\,786\) & \(24\,914\) & \(31\,961\) \\ \hline \end{tabular}
\end{table}
Table 8: Influence of the profile function on Betti-1
## Acknowledgments
This work was supported by the Danish Data Science Academy, which is funded by the Novo Nordisk Foundation (NNF21SA0069429) and Villum Fonden (40516). We would also like to express our gratitude to T. Owada for the careful reading of an earlier version and for helpful comments. His suggestions helped to improve both the content and the presentation of the material. The authors thank M. Brun for the interesting discussions and the remark on Dowker complexes.
| This paper explores the potential of the age-dependent random connection model (ADRCM) to represent higher-order networks. The main contribution of our work consists of probabilistic limit results in the large-network regime. More precisely, it is proven that the higher-order degree distributions have a power-law tail. Moreover, central limit theorems for the edge count and the Betti numbers are proven in the regime where the degree distribution is light-tailed. In contrast, in the regime where the degree distribution is heavy-tailed, it is proven that the recentered and suitably rescaled edge count converges to a stable distribution. A modified version of the ADRCM is proposed, and it is shown that it makes it possible to adjust the power-law exponents of the vertex and edge degrees independently. To apply these theoretical results to finite networks, a simulation study is carried out, and for large networks the exponents of the power-law degree distributions approach the theoretical |
2309.16368 | Kinetic Simulation of He radio frequency capacitive coupled plasma | Radiofrequency capacitively coupled plasma is studied theoretically using a
Particle-in-Cell code. For He discharge, the time-averaged sheaths are in the
range of few centimeters. The sheath potential, ion, and electron energy and
angular distributions, discharge current, and dissipated power depend on the
driven potentials and frequencies. Increasing the amplitude of the high radio
frequencies increases the bulk density and the sheath potential and,
consequently, increases the plasma processing rate. Increasing the intermediate
radio frequency amplitude allows a wider sheath with a broad ion energy
distribution and a narrower ion angular distribution. Changing the amplitude
and the phase shift between driven frequencies provide different energies and
angular distribution allowing performing various processes. The interplay
between the sheath and bulk dynamics in the intermediate radiofrequency regime
and the high-frequency regime may excite harmonics in the discharge current. | M. Shihab, A. Elbadawy, M. S. Afify, N. El-Siragy | 2023-09-28T12:12:25 | http://arxiv.org/abs/2309.16368v1 | # Kinetic Simulation of He radio frequency capacitive coupled plasma
###### Abstract
Radio frequency sheaths, He discharge, Ion energy and angular distribution, Electron energy distribution, Power dissipation.
## 1 Introduction
Low-temperature plasma has a great potential for numerous applications in the growth and processing of nanomaterials and the fabrication of microelectronics, e.g., carbon nanotubes, nanowires, thin-film depositions, and anisotropic etching of metallic, semiconductor, and dielectric materials. The energy of incident ions on substrates determines the process type and the flux of ions determines the rate of the process. In this contribution, we study Helium (He) discharge utilizing the Particle-In-Cell technique. He is an inert gas. Its chemistry is simple and could be used
to host different gas compositions - such as O\({}_{2}\), N\({}_{2}\), CF\({}_{4}\), CH\({}_{4}\), H\({}_{2}\)O-without affecting their chemistry. Here, we try to reveal the effect of the amplitude of driven radio frequencies and their phase shift on the discharge dynamics, the ion energies and the ion angular distribution at electrodes, the electron distribution, and the dissipated power in the plasma. The driven frequencies are 60 MHz and 1 MHz. Tailoring the driven potential or driven electrodes with different radio frequencies is one of the hot topics of research nowadays [7; 8; 9].
In the next section, we give a short overview of the Particle-in-Cell technique, then in section 3 we introduce our results and close the manuscript with a conclusion in section 4.
## 2 Particle-In-Cell
In the approach of modeling particles in a cell, Particle-in-Cell (PIC) has been widely utilized as a computational method to understand and forecast the behavior of plasma [1; 2]. An interpolation approach is used to collect the charge and current densities on a spatial mesh. The simulation domain is discretized into k\({}_{\rm th}\) grids as shown in figure (1). On the grid, the field equations are solved, and interpolation is used to determine the force exerted on the super-particles. Each super-particle represents 10\({}^{3}\) to 10\({}^{6}\) real particles. In Fig. (2), the PIC simulation's computational cycle is depicted. The indices i and k are used to designate quantities evaluated for particles and grid points, respectively. There are four steps in the computing cycle (excluding the Monte-Carlo collision stage).
In the first step, the particles' locations are initialized, the force on and the acceleration of each particle are computed, and the particles' locations and velocities are updated. In the second step, known as weighting, the charge and current densities are calculated at the grid points from the charge and velocity of each particle. The third stage integrates Poisson's equation to obtain the electric potential and uses a finite-difference technique to compute the electric field E on the grid. In the final stage, the electric field at the surrounding grid points is used to calculate the force on each particle. Because PIC algorithms can use Monte-Carlo methods to simulate collisions between charged particles and neutral atoms, an extra stage in the computing cycle is added, see Fig. (2).
Figure (1): The discretization of the simulation domain into k\({}_{\rm th}\) grids.
Figure (2): A schematic flow chart of PIC modules. The dashed lines are to short cut Monte-Carlo calculations for collisionless plasma.
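The following Python sketch illustrates one such computational cycle for a simplified, periodic 1D electrostatic system with a single species; it is a generic textbook-style illustration under these simplifying assumptions, not the solver used in this work, which treats a bounded reactor with absorbing electrodes.

```python
import numpy as np

def pic_step(x, v, charge, mass, dx, dt, n_grid, eps0=8.854e-12):
    """One explicit 1D electrostatic PIC cycle on a periodic grid:
    weighting -> field solve -> gather -> particle push (leapfrog)."""
    length = n_grid * dx

    # (2) Weighting: linear (cloud-in-cell) deposition of the charge onto the grid.
    s = x / dx
    gi = np.floor(s).astype(int) % n_grid
    frac = s - np.floor(s)
    rho = np.zeros(n_grid)
    np.add.at(rho, gi, charge * (1.0 - frac) / dx)
    np.add.at(rho, (gi + 1) % n_grid, charge * frac / dx)

    # (3) Field solve: Poisson's equation -phi'' = rho/eps0 in Fourier space
    #     (the k = 0 mode is dropped, i.e. a neutralizing background is assumed),
    #     then E = -dphi/dx by central differences.
    rho_k = np.fft.rfft(rho)
    k = 2.0 * np.pi * np.fft.rfftfreq(n_grid, d=dx)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / (eps0 * k[1:] ** 2)
    phi = np.fft.irfft(phi_k, n=n_grid)
    efield = -(np.roll(phi, -1) - np.roll(phi, 1)) / (2.0 * dx)

    # (4) Gather the field at the particle positions and (1) push the particles.
    e_part = efield[gi] * (1.0 - frac) + efield[(gi + 1) % n_grid] * frac
    v = v + (charge / mass) * e_part * dt
    x = (x + v * dt) % length
    return x, v
```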
Considering collisions between particles is time consuming. The null collision method is employed to perform the simulation in a reasonable time. This method involves selecting a particle and comparing the relative chance of a collision to a random number to see if one really occurs. In low-temperature plasma, the degree of ionization is small; hence, only electron-neutral and ion-neutral collision models were used for these simulations. Scalability is crucial because it connects simulations to the real world of plasma. The values supplied to parameters such as grid spacing, time step, and super-particle density are critical since they determine the simulation's speed and accuracy. A few aspects must be considered before performing a PIC simulation of the plasma in order to avoid unphysical outcomes.
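A minimal sketch of the null-collision selection step is given below; the function name and arguments are illustrative, and the subsequent choice of the collision type (elastic, excitation, ionization) from the partial cross sections is omitted.

```python
import numpy as np

def null_collision_select(energies, nu_of_energy, nu_max, dt, rng):
    """Null-collision test: flag the particles that undergo a *real* collision this step.

    nu_of_energy maps particle energy to the total real collision frequency;
    nu_max is a constant upper bound on that frequency over all energies.
    """
    n = energies.size
    # A candidate collision occurs with the energy-independent probability
    # P_null = 1 - exp(-nu_max * dt).
    candidates = rng.random(n) < 1.0 - np.exp(-nu_max * dt)
    # A candidate is accepted as a real collision with probability nu(E)/nu_max;
    # otherwise it is a "null" collision and the particle is left untouched.
    accepted = candidates & (rng.random(n) * nu_max < nu_of_energy(energies))
    return accepted
```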
In the simulation, the number of super-particles Np should be substantially larger than the number of grid cells ng, i.e. Np \(\gg\) ng. This is to ensure that, on average, each grid cell includes multiple particles during the simulation. The simulation will be noisy if the number of particles is too low. The grid cell size \(\Delta\)x should be on the order of the Debye length. The Debye length is the longest distance over which individual particle Coulomb forces are significant. If the grid spacing is made much smaller, we will be unable to eliminate short-range particle interactions, which are irrelevant to the plasma's overall behavior. If the spacing is made larger than the Debye length, important electric field gradients can be missed, because the fields are only determined at the grid points. Furthermore, because grid cells set the effective particle size, larger grid cells will produce unphysical results. In addition, the time step \(\Delta\)t must be less than the period of plasma oscillations, and \(\Delta\)t should be small enough to allow stable and precise integration of the particle equations of motion, as well as correct reproduction of particle oscillations.
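The constraints above can be checked with a few lines of code; the snippet below uses the standard expressions for the electron Debye length and plasma frequency, and the threshold \(\omega_{pe}\Delta t\leq 0.2\) is a commonly quoted rule of thumb rather than a value taken from this work.

```python
import numpy as np

def check_pic_constraints(n_e, T_e_eV, dx, dt):
    """Return the Debye length, plasma frequency, and whether dx and dt satisfy
    the usual PIC criteria (dx <~ lambda_D and omega_pe * dt << 1)."""
    e = 1.602e-19        # elementary charge [C]
    eps0 = 8.854e-12     # vacuum permittivity [F/m]
    m_e = 9.109e-31      # electron mass [kg]
    lambda_d = np.sqrt(eps0 * T_e_eV * e / (n_e * e ** 2))  # Debye length [m]
    omega_pe = np.sqrt(n_e * e ** 2 / (eps0 * m_e))         # plasma frequency [rad/s]
    return lambda_d, omega_pe, dx <= lambda_d, omega_pe * dt <= 0.2
```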
Particles passing across cell borders generate density fluctuations, which are then transmitted to the potentials and electric fields. If the fluctuations are lower than the magnitude of the applied potentials and do not cause unstable behavior, they can be ignored. Electrons are extremely sensitive to field fluctuations, which can lead to an unphysical increase in electron energy. Ions are slower to respond to fields; therefore, transitory fluctuations have no effect on them. If the artificial rise in electron energy grows great enough, it might produce excess ionization, which raises plasma density, which magnifies the heating, resulting in an exponential increase in plasma density, and the simulation eventually breaks down. Another issue is the loss of resolution in low-density areas, such as sheaths. As a result, the number of particles in a super-particle must be reduced, i.e. smaller super-particles must be used. For more details about PIC simulation, please read [10; 11; 12; 13].
## 3 Results and discussion
Here, we study, employing the Particle-in-Cell (PIC) technique, the ion and electron distributions in radio-frequency capacitively coupled plasmas (RF-CCPs), where the electrodes are biased with two radio frequencies, 1 MHz and 60 MHz. The code is benchmarked, so its predictions are trustworthy [14]. For the simulation, a geometrically symmetric reactor is chosen. The distance between the electrodes is 15 cm. The time and space are discretized in a way to avoid
numerical instabilities. The simulation is repeated for 500 RF cycles of the 60 MHz signal. The time step is 1/600 of the period of the 60 MHz signal. The distance between the two electrodes is discretized into 259 grid cells. The 1 MHz is comparable to or smaller than the typical ion plasma frequency in RF-CCPs, while the 60 MHz is much higher than the ion plasma frequency. The 1 MHz allows ions to respond partially to the instantaneous electric fields. On the contrary, the 60 MHz compels ions to respond to the time-averaged field. The driven potential is V = V\({}_{1}\)sin(2\(\pi\) 60 MHz t) + V\({}_{2}\) sin(2\(\pi\) 1 MHz t + \(\theta\)). Based on the amplitude of the driven potentials, three cases are considered: In case (1), V\({}_{1}\) = V\({}_{2}\) = 250V, where the effect of both frequencies is supposed to be equal. In case (2), V\({}_{1}\) = 100V and V\({}_{2}\) = 400V; the plasma is mainly driven by the intermediate frequency. In case (3), V\({}_{1}\) = 400V and V\({}_{2}\) = 100V; the plasma is ignited by the high frequency. The simulations are carried out twice for the three cases: when the phase shift (\(\theta\)) is zero, corresponding results are displayed as solid lines; when the phase shift \(\theta\) is \(\pi\)/2, results are presented via dashed lines.
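For illustration, the tailored driving voltage and the three amplitude cases can be written down as follows; the sampling of one 1 MHz period with 600 points per 60 MHz cycle mirrors the time step used in the simulations.

```python
import numpy as np

def driven_potential(t, v_hf, v_if, theta):
    """Two-frequency driving voltage V(t) = V1 sin(2*pi*60 MHz*t) + V2 sin(2*pi*1 MHz*t + theta)."""
    return v_hf * np.sin(2 * np.pi * 60e6 * t) + v_if * np.sin(2 * np.pi * 1e6 * t + theta)

# The three amplitude cases considered in the text.
cases = {
    "case 1": dict(v_hf=250.0, v_if=250.0),
    "case 2": dict(v_hf=100.0, v_if=400.0),
    "case 3": dict(v_hf=400.0, v_if=100.0),
}
t = np.linspace(0.0, 1e-6, 600 * 60)  # one 1 MHz period, 600 samples per 60 MHz cycle
waveforms = {name: driven_potential(t, theta=0.0, **p) for name, p in cases.items()}
```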
Let us first discuss the results when there is no phase shift. Close to the electrodes, quasineutrality breaks down and RF sheaths are formed due to the escape of electrons from the discharge volume into the electrodes.
The time-averaged density near the left sheath for the three cases is shown in Figure 3. Case (1), case (2), and case (3) are represented with black, blue, and red solid lines, respectively. For each case, the upper line presents the ion density and the lower gives the electron density. The minimum sheath width belongs to case (3), where the amplitude of the high-frequency signal is larger than that of the intermediate frequency. The power of the high-frequency signal is mainly dissipated in the plasma bulk; therefore, the bulk density increases, providing a larger ion flux to the plasma sheath. When the amplitude of the high frequency is larger than the amplitude of the intermediate frequency, i.e., case (3), the time-averaged sheath is on the order of 1 cm. For case (2), the time-averaged sheath width is roughly 2 cm. These large sheaths suggest that typical Ar capacitively coupled plasma reactors with a gap size of 5 cm or less may not be suitable for He discharges. For plasma etching, deposition, and sputtering at low pressures, the mean free path is large, and the thickness of the bulk may not be enough to ignite the plasma via electron-neutral background collisions.
Figure 4: The sheath potential for different plasma simulation cases shown in Fig. 3.
Figure 3: The plasma density between the two electrodes.
The corresponding sheath potentials are presented in Figure 4 with the same legend as in Fig. 3. If we look only at the solid lines, case (1), in which the sheath is dominated by the high-frequency potential, has the largest peak-to-peak sheath potential. The ion energy distributions (IED) are shown in Fig. 5. Case (3) has the narrowest distribution, shown as a red solid line. The broadest IED is obtained when the intermediate-frequency potential is dominant as in case (2). The corresponding ion angular distribution function (IADF) is shown in Figure 6. The highest peak of the IADF belongs to case (3) and the lowest one is due to case (2). From previous calculations, increasing the ion flux and the sheath potential and decreasing the sheath width allow a high peak of the ion angular distribution [8]. This matches very well with case (3), cf. Figs. 3 and 4. Considering a phase shift (\(\theta\)) of \(\pi/2\), a slight increase in the sheath width of case (1) is observed. However, the two sheaths for case (2) and case (3) are still roughly the same. The bulk densities are not sensitive to the phase shift for all cases. The sheath potential for case (3) is the same with and without a phase shift; the red solid and dashed lines are almost identical. Also, for case (1) and case (2), the sheath potentials with and without the phase shift are comparable. Therefore, the IADF and IED for all cases are almost identical.
Also, as shown in Figs. 7 and 8, the electron energy distribution is affected by changing the driven potentials. Increasing the amplitude of the high-frequency component increases the height and the width of the electron energy distribution. On the contrary, intermediate frequencies allow a narrower electron energy distribution. At lower radio frequencies, electrons may be able to enter nano-structures etched in
Figure 5: The ion energy distribution function at the left electrode for different plasma simulation cases shown in Fig. 3.
Figure 6: The ion angular distribution function at the left electrode for different plasma simulation cases shown in Fig. 3.
Figure 7: The electron energy distribution function at the center of the discharge for different plasma simulation cases shown in Fig. 3.
substrates to neutralize positive charges on the etched trench surfaces [7]. The electron energy distribution at the center of the discharge is not affected by the phase shift. Only for cases (1) and (2) at the electrode do the height and the width of the distribution increase when a phase shift of \(\pi\)/2 is added.
Also, to reveal possible resonances between the plasma bulk and the sheath, the current is shown in Fig. 9. For case (2) and case (3), the phase shift has no effect on the passing current. But for case (1), the amplitude of the current increases by increasing the phase shift. When there is not nonlinear interaction between the sheath and bulk dynamics, the Fourier analysis of the current should only display a two component in the frequency domain; i.e., 1 MHz and 60 MHz. As could be seen in Fig. 8, other component is generated in the plasma. The amplitude of these components are a function of the driven potentials and phase shifts.
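A minimal sketch of how such a harmonic analysis could be carried out on a recorded current trace is given below, assuming the array `current` and the time step `dt` are available from the simulation output (names are illustrative).

```python
import numpy as np

def current_spectrum(current, dt):
    """One-sided amplitude spectrum of the discharge current trace."""
    n = len(current)
    window = np.hanning(n)                        # reduce spectral leakage
    amp = np.abs(np.fft.rfft(current * window)) / n
    freq = np.fft.rfftfreq(n, d=dt)
    return freq, amp

# Peaks at frequencies other than 1 MHz and 60 MHz indicate harmonics
# generated by the nonlinear sheath-bulk interaction, e.g.:
# freq, amp = current_spectrum(current, dt)
# harmonics = freq[amp > 0.05 * amp.max()]
```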
The accumulated power is depicted in Fig. 11. The accumulated power increases with increasing amplitude of the high frequency. It is not sensitive to the phase shift when the discharge is dominantly driven by either the high or the intermediate frequency. However, when both amplitudes are equal, the phase shift has an effect. The plasma series resonance is responsible for the
Figure 8: The electron energy distribution function at the left electrode for different plasma simulation cases shown in Fig. 3.
Figure 11: Accumulated power as a function of time for different plasma simulation cases shown in Fig. 3.
Figure 10: The Fourier component of the current passing through the discharge for different plasma simulation cases shown in Fig. 3.
Figure 9: The current passing through the discharge for different plasma simulation cases shown in Fig. 3.
generation of new harmonics which affect the dissipated power [15, 16].
## 4 Summary
The discharge dynamics have been found to be controllable by tailoring the driving potential. For He discharges, the time-averaged sheaths are large, and typical Ar RF-CCP reactors may not be appropriate for He discharges. Increasing the amplitude of the high radio frequency has been found to increase the bulk density and the sheath potential. It also accelerates ions in the plasma sheath to energies around the time-averaged values. In contrast, increasing the amplitude of the intermediate radio frequency provides a wider sheath, a wider ion energy distribution, and a narrower ion angular distribution. The height and the width of the electron energy distribution function depend on the amplitudes of the driving potentials. The plasma series resonance has been found to generate new harmonics in the discharge current and to enhance the power dissipated in generating the plasma.
## 5 Acknowledgment
The authors thank T. Mussenbrock (Ruhr-University Bochum) for fruitful discussions, and the use of his YAPIC code is acknowledged. This project was supported financially by the Academy of Scientific Research and Technology (ASRT), Egypt, Grant No. 6742. ASRT is the second affiliation of this research.
| **電波誘導媒質によるプラズマは、粒子運動方程式を用いて理論的に検討されています。ヘリ discharge における平均的な電界は数センチメートル程度です。電界の電位、イオン、電子エネルギーと角度分布、放電電流、損失エネルギーは、駆動電位と周波数によって依存します。高周波数の電圧の振幅を増やすことは、体積密度と電界の電位を増加させ、その結果、プラズマ処理速度を増加させることにつながります。中周波数の電圧の振幅を増やすことで、幅広い電界と幅広いイオンエネルギー分布と狭いイオン角度分布が得られます。駆動周波数間の振幅と位相シフトを変化させると、異なるエネルギーと角度分布が得られ、さまざまなプロセスを実行することができます。中周波数領域における電界と体積運動の相互作用は、 |
2309.17076 | Benefits of mirror weight symmetry for 3D mesh segmentation in
biomedical applications | 3D mesh segmentation is an important task with many biomedical applications.
The human body has bilateral symmetry and some variations in organ positions.
It allows us to expect a positive effect of rotation and inversion invariant
layers in convolutional neural networks that perform biomedical segmentations.
In this study, we show the impact of weight symmetry in neural networks that
perform 3D mesh segmentation. We analyze the problem of 3D mesh segmentation
for pathological vessel structures (aneurysms) and conventional anatomical
structures (endocardium and epicardium of ventricles). Local geometrical
features are encoded as sampling from the signed distance function, and the
neural network performs prediction for each mesh node. We show that weight
symmetry gains from 1 to 3% of additional accuracy and allows decreasing the
number of trainable parameters up to 8 times without suffering the performance
loss if neural networks have at least three convolutional layers. This also
works for very small training sets. | Vladislav Dordiuk, Maksim Dzhigil, Konstantin Ushenin | 2023-09-29T09:10:58 | http://arxiv.org/abs/2309.17076v2 | # Benefits of mirror weight symmetry for 3D mesh segmentation in biomedical applications
###### Abstract
3D mesh segmentation is an important task with many biomedical applications. The human body has bilateral symmetry and some variations in organ positions. It allows us to expect a positive effect of rotation and inversion invariant layers in convolutional neural networks that perform biomedical segmentations. In this study, we show the impact of weight symmetry in neural networks that perform 3D mesh segmentation. We analyze the problem of 3D mesh segmentation for pathological vessel structures (aneurysms) and conventional anatomical structures (endocardium and epicardium of ventricles). Local geometrical features are encoded as sampling from the signed distance function, and the neural network performs prediction for each mesh node. We show that weight symmetry gains from 1 to 3% of additional accuracy and allows decreasing the number of trainable parameters up to 8 times without suffering the performance loss if neural networks have at least three convolutional layers. This also works for very small training sets.
3D mesh segmentation, biomedical segmentation, weight symmetry, rotation invariant, inversion invariant, hard constraints, symmetry in neural networks
## I Introduction
Convolutional neural networks successfully solve semantic and instance segmentation problems in biomedical applications. Results of neural network segmentation are especially notable in the segmentation of 3D imaging data such as computed tomography, magnetic resonance tomography [1], _etc._ Segmentation of this type of data provides 3D voxel models of abdominal organs. However, the creation of voxel models is only a first step for modern biomedical pipelines. Medical visualization requires a surface mesh of objects. Anatomical measurements require key points. Production of personalized prosthetics requires a surface mesh with advanced processing. Personalized computer simulations based on finite element analysis require a 3D mesh of volumetric elements [2, 3].
All mentioned applications require segmentation of the surface mesh according to geometrical features or anatomical structures. 3D mesh segmentation is especially important for the detection of abnormal anatomical structures such as aneurysms [4], and the segmentation of organs according to their conventional anatomical structures.
3D mesh segmentation in biomedical applications has some limitations and advantages in comparison with 3D mesh segmentation in computer graphics and computer vision. Medical datasets usually include a smaller number of cases, which leads to significant issues with the lack of cases available for training. Segmentation of some geometrical features is invariant to inversions because the human body is bilaterally symmetrical. Some structures are also invariant to rotation because of slight variations in organ positions in the body. Thus, we expect a positive effect of rotation and inversion invariant layers on the performance of neural networks in 3D mesh segmentation tasks.
Some approaches to rotation-invariant neural networks are presented in [5, 6, 7, 8]. A wide range of methods for achieving invariance is described in [9]. Unfortunately, most of the mentioned approaches significantly increase the number of trainable parameters. Another approach to achieve imperfect rotation invariance is data augmentation. The augmentation pipeline does not require changes in the neural network architecture; however, it requires additional hyperparameter tuning for each specific task and increases the required computational resources.
In our study, we implement imperfect rotation and inversion symmetry of layers using mirror weight symmetry of convolutional neural networks [10]. This approach cannot achieve mathematically correct invariance to some transformations of the input. However, weight symmetry is straightforward and simple to implement. This approach does not increase the number of parameters in the neural network and does not require data augmentation.
To show the benefits of weight symmetry, we use small neural networks with 130,000 - 700,000 parameters, which is 10-30 times fewer than the recent advanced approaches for 3D mesh segmentation [4]. Our neural networks predict the class of each node on the surface mesh according to the geometrical features of the local region. The local region is encoded by sampling from the signed distance function of the 3D object. Our study shows that this simple approach can segment a 3D surface mesh with high precision and can work with small train sets. Also, we show that weight symmetry in convolutional kernels improves the quality of segmentation
and drastically reduces the number of the required parameters in the neural networks.
## II Methods
### _Datasets_
In order to evaluate the performance of the neural networks and to study the effects of applying different types of weight symmetry, we use two datasets for 3D mesh segmentation. The first dataset is IntrA [4], which includes 116 3D meshes of blood vessels with aneurysms, reconstructed from magnetic resonance angiogram images of anonymized patients. Every point of each mesh is marked with a label that classifies it as a healthy part of a blood vessel or as an aneurysm.
In our work, we used the original labeling for two classes, but modified the surface of the 3D models. We closed hollows on the sides of blood vessels using the "Close hole" function from the Meshlab [11] software. This processing turns the meshes into closed surfaces and makes it possible to compute the signed distance function from them.
The second dataset is derived from the Automated Cardiac Diagnosis Challenge (ACDC) [12]. ACDC is a challenge for methods of magnetic resonance image segmentation. The training part of the ACDC dataset includes scans of ventricles for 100 patients. Labeling was performed by experts for the 3D volume regions of the ventricular myocardium, left ventricular cavity, and right ventricular cavity.
We used the ground truth labels for the training dataset, provided for this challenge, in order to reconstruct 3D meshes of hearts. For that, we used the marching cubes algorithm and Taubin smoothing. After the meshes were reconstructed, we labeled them, dividing each mesh into three classes: epicardium, left ventricular endocardium, and right ventricular endocardium. In order to avoid confusion, in the next sections we will refer to this dataset as ACDC-S, where S stands for surface, meaning that we work with the surface of a reconstructed 3D mesh.
In summary, we perform a study on two 3D mesh segmentation datasets. The first one (IntrA) contains two classes and a task to locate the aneurysm on a blood vessel. The second one (ACDC-S) contains three classes with a task to mark up the epicardium, left ventricular endocardium, and right ventricular endocardium. We divide each dataset with \(9\) different train-test ratios. They include \(10:90\), \(20:80\), \(30:70\), \(40:60\), \(50:50\), \(60:40\), \(70:30\), \(80:20\), and \(90:10\) ratios, where the first number stands for the portion of data that goes into the train split and the second number refers to the test split. Thus, every neural network is trained with different splits a total of \(9\) times for each dataset. This approach to train-test splitting allows us to show that our methods are able to work with small train sets, which simulates the lack of data. This makes our study close to real biomedical applications, where the amount of available patient data is scarce.
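A minimal sketch of how such a family of splits could be generated, assuming a list of mesh identifiers (the seed and function name are illustrative):

```python
import numpy as np

def make_splits(mesh_ids, ratios=(0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9), seed=0):
    """Return {train_fraction: (train_ids, test_ids)} for the nine train-test ratios."""
    rng = np.random.default_rng(seed)
    ids = np.asarray(mesh_ids)
    splits = {}
    for r in ratios:
        perm = rng.permutation(len(ids))      # independent shuffle for each ratio
        n_train = int(round(r * len(ids)))
        splits[r] = (ids[perm[:n_train]], ids[perm[n_train:]])
    return splits
```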
### _Local geometry encoding_
In our approach, the neural network predicts the class of each vertex of 3D mesh, based on their local geometry features. In order to implement this, we used the encoding with signed distance function (SDF) [13]. It represents the geometry as a distance matrix, where the points outside an enclosed surface have positive distances and the distances of the points on the inside are negative.
Our processing goes as follows. We define a neighborhood of a point (x,y,z) as a cube \([x-\frac{a}{2};x+\frac{a}{2}]\times[y-\frac{a}{2};y+\frac{a}{2}]\times[z-\frac{a}{2};z+\frac{a}{2}]\) with \(N\) points on each side. In this cube, we create a uniform mesh for data sampling. A point of the mesh is defined as \(\overline{\boldsymbol{p}}_{ijk}=\left((x-\frac{a}{2})+ih;(y-\frac{a}{2})+jh;(z-\frac{a}{2})+kh\right)\), where \(h=\frac{a}{N-1}\) is the spacing between the points in the cube.
To segment the surface of 3D objects the neural network (NN) solves the problem of multi-label classification: \(c=\mathrm{NN}\left(\{\{\{\mathrm{SDF}_{r}(\overline{\boldsymbol{p}}_{ijk}) \}_{i=1}^{N}\}_{j=1}^{N}\}_{k=1}^{N}\right)\) where \(\mathrm{SDF}_{r}(\cdot)\) is a signed distance to the boundary of the geometry model \(r\), and \(c\) is a class of point \((x,y,z)\) (\(c\in[0,\boldsymbol{c}_{max}]\), \(\boldsymbol{c}_{max}\in\mathbb{N}\)).
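A minimal sketch of this encoding for a single vertex, assuming a function `sdf` that returns signed distances for an array of query points (the function name and its interface are assumptions made for illustration, not part of the original pipeline):

```python
import numpy as np

def encode_vertex(vertex, sdf, a, N):
    """Sample the SDF on an N x N x N grid centred on one mesh vertex."""
    x, y, z = vertex
    h = a / (N - 1)                                   # spacing between grid points
    offsets = -a / 2.0 + h * np.arange(N)
    gx, gy, gz = np.meshgrid(x + offsets, y + offsets, z + offsets, indexing="ij")
    points = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
    return sdf(points).reshape(N, N, N)               # input tensor for the network
```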
### _Weight symmetry_
In this paper we work with 3D convolution layers, so we implemented three types of mirror symmetry. The first type is f*, which is a symmetry along one axis. Since we pass a 3D tensor as an input to the neural network, there are three possible symmetries: f1, f2, and f3. They correspond to the x, y, and z axes in the original medical datasets that were discussed in the previous sections. The second type of symmetry is f**, which
Fig. 1: Different types of symmetry, applied to a weight of a convolution layer with the kernel size of \(4\times 4\times 4\). The intersecting planes show the axis along which symmetry can be observed.
corresponds to symmetry along two axes; this includes the f12, f13, and f23 symmetries. Finally, f*** refers to symmetry along all three axes and represents the f123 symmetry.
Fig. 1 shows how different symmetries affect a weight tensor of a 3D convolution layer with a kernel size of \(4\times 4\times 4\). We obtain the baseline values for each convolution kernel with Kaiming weight initialization [14], which is the default initialization method of the PyTorch library [15]. After this, mirror symmetries are applied to the weights using a flip of the original tensor followed by concatenation.
Applying symmetry along one axis can be described as \(\tilde{\mathbf{W}}=\mathbf{W}\oplus T_{a}(\mathbf{W})\), where \(\mathbf{W}\) is the weight of the convolution layer, \(T_{a}\) is the flip that reverses the order of elements in the tensor along the axis \(a\), and \(\oplus\) is concatenation along the same axis. Symmetry along several axes requires repeating the described transformation for each axis sequentially.
If we assume that the baseline tensor contains (\(N\times N\times N\)) elements, then in order to preserve the original kernel size with the use of f* symmetry, we would have to halve the number of parameters along the symmetry axis. This is needed as the concatenation would double the size of the tensor. This means that for f1 symmetry, we would use a tensor of (\(\frac{N}{2}\times N\times N\)) size, since after applying the symmetry along the axis 1, it would have the needed shape of (\(N\times N\times N\)). For f2 the tensor would be (\(N\times\frac{N}{2}\times N\)), and f3 symmetry would need (\(N\times N\times\frac{N}{2}\)) elements. The f** symmetry would require two concatenations, so we need to halve the tensor size along an additional axis. Thus, for f12 symmetry we would use weights with the size of (\(\frac{N}{2}\times\frac{N}{2}\times N\)), for f13 they would be (\(\frac{N}{2}\times N\times\frac{N}{2}\)), and f23 needs a tensor with (\(N\times\frac{N}{2}\times\frac{N}{2}\)) parameters. The convolution with f*** symmetry would require halving the size of the baseline tensor along all axes. So, f123 would require a tensor with the shape of (\(\frac{N}{2}\times\frac{N}{2}\times\frac{N}{2}\)). This allows us to decrease the amount of unique trainable parameters in each layer by two times with f* symmetries, by four times with f** and by eight times with f***, while being able to use the same architecture of the neural network.
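A minimal PyTorch sketch of such a mirror-symmetric 3D convolution, assuming an even kernel size; the class and argument names are illustrative, and the f*, f**, and f*** cases correspond to passing one, two, or three kernel axes:

```python
import torch
import torch.nn as nn

def mirror(weight, axes):
    """Mirror a half-sized raw kernel along the given kernel axes (0, 1, 2)."""
    w = weight
    for a in axes:
        dim = 2 + a                                  # kernel axes of a Conv3d weight
        w = torch.cat([w, torch.flip(w, dims=[dim])], dim=dim)
    return w

class SymmetricConv3d(nn.Module):
    """3D convolution whose kernels are mirror-symmetric along the chosen axes."""
    def __init__(self, in_ch, out_ch, kernel_size, stride, axes=(0, 1, 2)):
        super().__init__()
        self.axes, self.stride = axes, stride
        shape = [kernel_size] * 3
        for a in axes:
            shape[a] //= 2                           # store only half along symmetric axes
        self.weight = nn.Parameter(torch.empty(out_ch, in_ch, *shape))
        nn.init.kaiming_uniform_(self.weight)        # Kaiming initialization, as in the paper
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x):
        w = mirror(self.weight, self.axes)           # full-sized symmetric kernel
        return nn.functional.conv3d(x, w, self.bias, stride=self.stride)
```

For example, `SymmetricConv3d(1, 32, kernel_size=4, stride=4, axes=(0, 1, 2))` stores only a \(2\times 2\times 2\) kernel per filter while behaving as a \(4\times 4\times 4\) convolution, giving the eight-fold reduction in unique parameters described above.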
### _Neural networks_
The use of symmetric weights reduces the number of trainable parameters of neural networks, which leaves two experimental designs to consider. We can either keep the architecture and allow the number of parameters to drop significantly, or we can modify it to keep the number of parameters by increasing the number of convolution kernels. In this paper we study both approaches and refer to them as tasks.
First, we made three fully convolutional neural network architectures to work as baselines. They are shown in Fig. 2. Each model consists of one, two or three convolutional layers, followed by an MLP block. This block is the same for each architecture, and it includes two \(1\times 1\times 1\) convolution layers with \(160\) kernels each and an output layer with \(c\) kernels, where \(c\) is a number of classes in the dataset. All models use ReLU activation in deep layers and Sigmoid in the output layer. To distinguish the networks, we give each of them a name that reflects their number of layers, the shape of convolution and its stride. For instance, the notation K4S4-K4S4 means that the neural network includes two 3D convolution layers with kernel size and stride of \(4\times 4\times 4\).
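As an illustration, a minimal sketch of the K4S4-K4S4 baseline assembled from this description is given below; the channel counts of the two convolutional layers are placeholders, since the exact kernel numbers are listed in Table I.

```python
import torch.nn as nn

def k4s4_k4s4(in_ch=1, conv_ch=(32, 64), num_classes=2):
    """Sketch of the two-layer K4S4-K4S4 baseline; conv_ch are placeholder kernel counts."""
    return nn.Sequential(
        nn.Conv3d(in_ch, conv_ch[0], kernel_size=4, stride=4), nn.ReLU(),
        nn.Conv3d(conv_ch[0], conv_ch[1], kernel_size=4, stride=4), nn.ReLU(),
        # MLP block: two 1x1x1 convolutions with 160 kernels each, then an
        # output layer with one kernel per class and a sigmoid activation
        nn.Conv3d(conv_ch[1], 160, kernel_size=1), nn.ReLU(),
        nn.Conv3d(160, 160, kernel_size=1), nn.ReLU(),
        nn.Conv3d(160, num_classes, kernel_size=1), nn.Sigmoid(),
    )
```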
The first task is to keep the number of trainable parameters in the models with symmetry the same as in the baselines.
Fig. 2: Neural network architectures. \(c\) in the last layer and output stands for the number of classes in the dataset.
For each of the baseline architectures, we created 7 additional models. They include three options with f* symmetry, one for each axis, three models with f** symmetry, and one model with f*** symmetry. After that, we increased the number of kernels in every model with symmetry. The resulting architectures are described in Table I, and the total number of kernels and parameters for each architecture is shown in Table II.
For the task of keeping the architecture, we used the same models described above, but kept the number of convolution kernels the same as in the baselines. This resulted in a significant reduction of trainable parameters, as can be seen in Table II.
Finally, we had 15 models for each of the three architectures. This includes the baseline, 7 models with the same number of trainable parameters, and 7 models with the same number of convolution kernels. Training 45 models with 9 different train-test splits on two datasets resulted in a total of 810 experiments, which were conducted on the Uran supercomputer with NVIDIA K40m GPUs.
## III Results
### _Fixed number of parameters_
We compared the baselines to the models with symmetric layers that have the same number of trainable parameters. The results show that the effect of symmetric weights on performance depends on the depth of the neural network: the deeper the network is, the more it benefits from symmetry. Figure 3 and Table III demonstrate that the use of symmetry does not improve the performance of any model of the K16S16 architecture, since it has only one convolution layer before the MLP block. Increasing the number of convolutions to two allows K4S4-K4S4 models with symmetry to reach and slightly outperform the baseline on the IntrA dataset, and to achieve performance close to the baseline on the ACDC-S dataset. The K8S1-K8S1-K2S1 architecture shows a notable improvement in the accuracy of the models with the use of symmetric weights, as almost all models with symmetry outperform the baseline for every train-test split on IntrA, and show results similar to the baseline on ACDC-S.
### _Fixed architecture_
In order to see how a model with fewer parameters but the same architecture compares to the baseline, we kept the baseline architectures shown in Fig. 2 and applied symmetries to them without increasing the number of convolution kernels. The comparison revealed that the segmentation results were almost identical to the ones described above. K16S16 models with symmetry showed a notable decrease in performance compared to the baseline on both datasets, and the results improved for the deeper networks. All K4S4-K4S4 models with symmetry achieved performance similar to the baseline on the IntrA dataset, and most of them showed results close to the baseline values on ACDC-S. As for the K8S1-K8S1-K2S1 architecture, its models with symmetry notably outperformed the baseline on the IntrA dataset and showed results similar to the baseline on ACDC-S.
### _Comparison of the results_
In this study we tracked the change in performance among the models that were trained with different train-test splits. The results show that the baseline accuracy decreases by up to 8.14% on the IntrA dataset and by up to 15.5% on ACDC-S as the amount of data in the train set goes down from \(90\%\) of the dataset to \(10\%\). Fig. 3 shows that the baseline performance on the IntrA dataset ranges from \(76.52\%\) to \(83.5\%\) accuracy for the K16S16 model, from \(77.34\%\) to \(85.48\%\) for K4S4-K4S4, and from \(78.67\%\) to \(85.6\%\) for K8S1-K8S1-K2S1. As for the ACDC-S dataset, the K16S16 model shows \(85.07\%-95.26\%\) accuracy, K4S4-K4S4 shows results of \(78.17\%-93.67\%\), and K8S1-K8S1-K2S1 reaches \(86.79\%-95.09\%\) accuracy. As we can see, the accuracy of mesh segmentation is almost \(10\%\) higher on the ACDC-S dataset compared to IntrA. This can possibly be explained by the shapes of the models in the datasets, since the heart meshes in ACDC-S are much more similar to each other than the blood vessels in IntrA.
The baseline values allowed us to evaluate the effect of using symmetric layers with different train-test splits. Table III shows the change in accuracy for different types of symmetry for each model and dataset with the \(20:80\), \(50:50\), and \(80:20\) data split ratios. The best overall performance for each split was achieved by the K8S1-K8S1-K2S1 model with some type of symmetry. For the IntrA dataset, the best performance for these three splits was achieved by a model with f23 symmetry and two models with f123 symmetry, resulting in \(83.57\%\), \(85.77\%\), and \(86.3\%\) accuracies, respectively. For the ACDC-S dataset, the best models used f23, f2, and f123 symmetries, and showed \(90.84\%\), \(93.39\%\), and \(94.89\%\) accuracy, respectively.
We mentioned above that the models with a fixed number of parameters and the ones with a fixed architecture have shown
Figure 4: Results of segmentation done by best performing models, trained on 20% of dataset. Images on top are the models from IntrA dataset, the ones on the bottom are from ACDC-S.
similar results. Despite the accuracy values being very close, some differences in their performance were noticed. In Table III we can see that with the \(20:80\) train-test split, most models with the fixed number of parameters show a small increase in performance on both datasets compared to the ones with fixed architectures. The same behavior could be observed with the \(50:50\) train-test split; however, the gap in performance is smaller. Finally, as the train set increases up to \(80\%\) of the dataset, the difference between them becomes negligible.
The obtained findings suggest that the increase in performance of models with symmetric weights depends more on the number of convolution layers than on the total number of trainable parameters. Figure 4 confirms these results, as the deeper the network becomes, the fewer misclassifications are made by the models that have symmetric weights. We can see that if the neural networks have fewer than three convolution layers, they struggle to identify the boundaries of the aneurysm in the models from the IntrA dataset, and label it only partly. At the same time, the model with three convolution layers shows results that are much closer to the baseline, covering the whole part of the blood vessel that includes the aneurysm. Similar results can be seen in the ACDC-S dataset segmentation. It is clear that the baseline performance of the K16S16 and K4S4-K4S4 models is better than the accuracy of the same models with symmetry. However, the K8S1-K8S1-K2S1 model, which has three convolution layers, produces results that are comparable to or better than the baseline.
## IV Conclusion
Utilizing weight symmetry significantly reduces the number of trainable parameters in each convolution kernel. Here, we analyze two strategies to introduce weight symmetry in neural
networks. The first approach is to keep the number of parameters by increasing the number of convolutional kernels. The second approach is to keep the number of convolutional kernels and significantly decrease the number of trainable parameters.
Both of the studied weight symmetry strategies are suitable for particular goals. The first strategy increases accuracy while maintaining the same number of parameters. The second strategy reduces the number of trainable parameters in symmetric layers by up to 8 times, with either a gain in performance or a negligible decrease in accuracy. We observe that the benefits of weight symmetry are notable only if the neural network architecture has at least three convolutional layers before the multi-layer perceptron. Weight symmetry provides additional accuracy of up to 3% on small datasets. This effect is especially visible for small train sets, as the biggest improvement can be seen for train-test splits with ratios from \(10\%:90\%\) to \(50\%:50\%\). The benefits of weight symmetry are not as notable on big train sets, improving the accuracy by only 1%. It is notable that our solution can segment the IntrA dataset with almost the same quality as presented in [4] while using 10-30 times fewer parameters.
| 3次元メッシュ segmentaion は、多くの生物医学応用がある重要なタスクです。人体は双対的な対称性があり、臓器の位置にわずかな変異もあります。これは、3次元メッシュ分割を遂行する畳み込みニューラルネットワークにおける回転と反転不変層のポジティブな効果を期待できます。本研究では、3次元メッシュ分割を行う畳み込みニューラルネットワークにおける重みの対称性の影響を示します。私たちは、病理学的血管構造(aneurysm)と従来の解剖構造(心室のendocardiumとepicardium)に対する3次元メッシュ分割の問題を分析しました。局所的な幾何学的特徴は、符号距離関数のサンプルとしてエンコードされ、ニューラルネットワークは、各メッシュノードの予測を実行します。私たちは、重みの対称性が1%から3%の追加精度を得ることを示し、少なくとも3つの畳 |
2301.13432 | Manipulation of polarization topology using a Fabry-Pérot fiber cavity
with a higher-order mode optical nanofiber | Optical nanofiber cavity research has mainly focused on the fundamental mode.
Here, a Fabry-P\'erot fiber cavity with an optical nanofiber supporting the
higher-order modes, TE01, TM01, HE21o, and HE21e, is demonstrated. Using cavity
spectroscopy, with mode imaging and analysis, we observe cavity resonances that
exhibit complex, inhomogeneous states of polarization with topological features
containing Stokes singularities such as C-points, Poincar\'e vortices, and
L-lines. In situ tuning of the intracavity birefringence enables the desired
profile and polarization of the cavity mode to be obtained. These findings open
new research possibilities for cold atom manipulation and multimode cavity
quantum electrodynamics using the evanescent fields of higher-order mode
optical nanofibers. | Maki Maeda, Jameesh Keloth, Síle Nic Chormaic | 2023-01-31T05:57:48 | http://arxiv.org/abs/2301.13432v1 | Manipulation of polarization topology using a Fabry-Perot fiber cavity with a higher-order mode optical nanofiber
###### Abstract
Optical nanofiber cavity research has mainly focused on the fundamental mode. Here, a Fabry-Perot fiber cavity with an optical nanofiber supporting the higher-order modes, TE\({}_{01}\), TM\({}_{01}\), HE\({}_{21}^{o}\), and HE\({}_{21}^{e}\), is demonstrated. Using cavity spectroscopy, with mode imaging and analysis, we observe cavity resonances that exhibit complex, inhomogeneous states of polarization with topological features containing Stokes singularities such as C-points, Poincare vortices, and L-lines. _In situ_ tuning of the intracavity birefringence enables the desired profile and polarization of the cavity mode to be obtained. These findings open new research possibilities for cold atom manipulation and multimode cavity quantum electrodynamics using the evanescent fields of higher-order mode optical nanofibers.
## 1 Introduction
Novel phenomena that can be revealed in non-paraxial light, such as transverse spin and spin-orbit coupling, have led to increasing interest in the tightly confined light observed in nano-optical devices [1]. Optical nanofibers (ONFs), where the waist is subwavelength in size, are useful in this context because they provide very tight radial confinement of the electric field and facilitate diffraction-free propagation over several centimeters [2]. Most ONF research focuses on single-mode ONFs (SM-ONFs) that only support the fundamental mode, HE\({}_{11}\). In contrast, higher-order mode ONFs (HOM-ONFs), fabricated from a few-mode optical fiber, can guide HOMs, such as TE\({}_{01}\), TM\({}_{01}\), HE\({}_{21}^{e}\), and HE\({}_{21}^{o}\)[3]. In the weakly guided regime, which is generally used to describe light propagation in standard optical fiber, this group of modes can be viewed to form the linearly polarized mode, LP\({}_{11}\). To date, there has been a lot more attention paid to HOM-ONFs in theoretical work [4, 5, 6, 7, 8, 9, 10] than experimental work due to the difficulty in precisely controlling the fiber waist size and obtaining selective mode excitation at the waist [3, 11, 12].
In principle, there are many interesting phenomena which can be explored with a HOM-ONF. For example, it has been proposed that the relationship between spin angular momentum (SAM) and orbital angular momentum (OAM) can be studied [5, 10, 13, 14]. Additionally, it was proposed that a HOM-ONF could be used to trap and manipulate cold atoms [4, 15, 16]. Fabrication of an ONF that supports the HOMs was achieved [3, 17, 18] and subsequently shown to more efficiently manipulate dielectric microbeads in the evanescent field than SM-ONFs [19, 20]. Other experimental work has shown that when cold atoms also interact with HOMs, detected signals are stronger than when one uses a SM-ONF only [21].
Introducing a cavity system to the ONF could further increase light-matter interactions due to cavity quantum electrodynamics (cQED) effects [22, 23, 24]. To date, numerous types of SM-ONF-based cavities have been proposed [25, 26, 27, 28, 29, 30] and the interactions of their resonance modes with various quantum emitters have been studied [31, 32, 33]. Strong light-atom coupling using SM-ONF-based Fabry-Perot and ring resonators has already been achieved [34, 35]. Superstrong coupling of cold atoms and multiple longitudinal modes of a long fiber-ring resonator consisting of a SM-ONF section was demonstrated [36]. Utilizing multiple degenerate higher-order transverse modes in free-space has shown to exhibit strong coupling [37, 38], further illustrating the importance of realizing a HOM-ONF-based cavity system at this point. The advantages are
not only for enhanced interactions via cQED effects, but also for a better overall understanding of the behavior of the modes in such a cavity.
Studying the behavior of the HOM-ONF cavity spectrum and the cavity mode profiles gives additional insight into the nature of the HOMs themselves, as well as how they interfere with each other and interact with the external environment. The generation of TE\({}_{01}\) and TM\({}_{01}\) modes in a laser cavity consisting of a microfiber directional coupler-based mode converter was demonstrated previously [39]. However, earlier attempts to realize a passive HOM optical microfiber cavity did not yield any resonant peaks in the cavity spectrum apart from the fundamental modes; in other words, the typical donut- or lobe-shaped intensity profiles associated with HOMs were not observed [40], primarily due to challenges when engineering the taper profile to minimize losses at the taper transitions.
The inhomogeneous polarization structure of HOMs needs to be taken into account when studying a fiber cavity system with a HOM-ONF. In recent years, complex polarization distributions and the generation of polarization singularities have been investigated using various methods, giving rise to the relatively new field of singular optics [41]. Polarization singularities are a subset of Stokes singularities, _i.e._, phase singularity points in Stokes phases [42, 43]. In fact, higher-order fiber eigenmodes are vector optical fields with a polarization singularity called a V-point, where the state of polarization (SOP), _i.e._, how the polarization is distributed in the cross-section of a given mode, is undefined [41]. Other types of Stokes singularities can be formed in elliptical optical fields, such as the polarization singularity of C-points, where the polarization orientation is undefined [41, 42], and Poincare vortices, where the polarization handedness is undefined [43, 44, 45]. Moreover, points of linear polarization can form continuous lines, which are classified as L-lines [41].
The generation of all Stokes singularities within a single beam has been demonstrated using a free-space interferometer [43, 46]. Modal interference in a birefringent crystal can facilitate the creation of polarization singularities [47, 48]. As a result, the SOP can significantly vary along the propagation length, with C-points and L-lines propagating as C-lines, _i.e._, continuous lines of circular polarization, and L-surfaces, _i.e._, surfaces of linear polarization, respectively [47, 48, 49]. Moreover, polarization singularities can appear, move or disappear from a given cross-sectional region with a smooth and continuous change of birefringence [50]. Birefringent media were used to create laser cavity modes containing a polarization singularity [51, 52]. These experiments were limited to the generation of low-order V-points due to a lack of control in the amplitude, phase, and SOP, all of which would be required to create other types of polarization singularities [41]. A few-mode optical fiber cavity has the potential to generate complex laser modes by its highly variable degree of birefringence.
Interference and birefringence are generally inseparable properties in fibers. The modal interference pattern in a fiber changes continually with a periodicity of \(2\pi\) when the relative phase between modes is changed between \(0\) to \(2\pi\) as the eigenmodes propagate along the fiber [53]. This effect was used in a few-mode optical fiber to generate ellipse fields containing a C-point [54, 55]. Due to the increasing complexities of modal interference in few-mode fibers, filtering for the desired set of HOMs, and selectively exciting them to generate and manipulate polarization singularities, are necessary. Realizing a fiber cavity containing an ONF should enable both spatial and frequency filtering for selective excitation of HOMs, as well as enhancement of the resonant mode coupling effect [56, 57].
In this paper, we experimentally demonstrate a HOM-ONF-based Fabry-Perot fiber cavity. The transverse polarization topology of any given resonant mode is determined by selecting modes from the cavity spectra and analyzing the images of the transmitted mode profile. We also demonstrate _in situ_ intracavity manipulation of the modal birefringence to change the amplitude, frequency position, and the SOP of the modes. This work is a significant step towards gaining full control of the evanescent field at the HOM-ONF waist and extends the range of applications
for which such nanodevices could be used.
## 2 Methods
### Experiments
For the HOMs described in Section 1 to propagate throughout the cavity with a HOM-ONF, the nanofiber must be low loss for the entire LP\({}_{11}\) set of modes. Tapered fibers were drawn from SM1250 (9/80) fiber (Fibercore) using an oxy-hydrogen flame pulling rig. The untapered fiber supports the LP\({}_{01}\), LP\({}_{11}\), LP\({}_{21}\), and LP\({}_{02}\) modes at a wavelength, \(\lambda\) = 776 nm. The modes supported by the tapered fiber depend on the tapering profile and the waist diameter. We used two different tapered fibers with waist diameters of (i) \(\sim\) 450 nm for SM behavior (HE\({}_{11}^{o}\) and HE\({}_{11}^{e}\)) and (ii) \(\sim\) 840 nm for the HOM-ONF, which supports HE\({}_{11}^{o}\), HE\({}_{11}^{e}\), TE\({}_{01}\), TM\({}_{01}\), HE\({}_{21}^{o}\), and HE\({}_{21}^{e}\). The shape of the tapered fibers was chosen to be trilinear, see Fig. 1(a), with angles of \(\Omega_{1}\) = 2 mrad, \(\Omega_{2}\) = 0.5 mrad and \(\Omega_{3}\) = 1 mrad in order to be adiabatic for the LP\({}_{11}\) and LP\({}_{01}\) modes. Fiber transmission following the tapering process was >95% for the fundamental mode.
A sketch of the experimental setup is given in Fig. 1(b). The cavity was fabricated by splicing each pigtail of the tapered fiber to a commercial fiber Bragg grating (FBG) mirror (Omega Optical). The two FBG mirrors consisted of stacked dielectric mirrors coated on the end faces of fiber patchcords (SM1250 (9/80), Fibercore) and had a reflectivity of 97% at \(\lambda\) = 776 nm. Both mirrors had almost the same reflectivity over all input polarization angles (< 1% variation). The cavity also contained an in-line polarization controller (IPC, see Fig.1(b)) to manipulate the birefringence inside the cavity. Moving the paddles of the IPC induced stress and strain in the fiber, thereby changing the effective cavity length. A typical cavity length was \(\sim\) 2 m, which was physically measured and estimated from the cavity free-spectral range (FSR).
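For reference, a minimal consistency check can be made under the assumption of an effective group index of \(n_{g}\approx 1.45\) for the silica fiber (an assumed, typical value, not a measured one): the expected free-spectral range of a Fabry-Perot cavity of length \(L\approx 2\) m is

\[\nu_{FSR}=\frac{c}{2n_{g}L}\approx\frac{3\times 10^{8}\ \mathrm{m/s}}{2\times 1.45\times 2\ \mathrm{m}}\approx 52\ \mathrm{MHz},\]

so a single FSR fits comfortably within the 150 MHz laser scans used to record the cavity spectra shown below.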
Figure 1: (a) Sketch of tapered optical fiber with trilinear shape, d\({}_{waist}\): waist diameter. (b) Schematic of experimental setup. L: lens, HWP: half-wave plate, PBS: polarizing beam splitter, M: mirror, M\({}_{C}\): cavity mirror, IPC: in-line polarization controller, BS: beam splitter, QWP: quarter-wave plate, which was inserted to calculate S\({}_{3}\), LP: linear polarizer, CCD: camera, MMF: multimode fiber, PD: photodiode.
A linearly polarized Gaussian beam from a laser at \(\lambda=776\) nm (Toptica DL100 pro) was launched into the fiber cavity. The laser frequency was either scanned or locked to a mode of interest using a Pound-Drever-Hall locking module (Toptica Digilock110). The cavity output beam was split into three paths: one for the laser feedback controller to observe the cavity spectra and to lock to specific modes, one for imaging the spatial profile of the modes with a CCD camera, and one for analyzing the transverse SOP of each mode using a removable quarter wave plate (QWP), a rotating linear polarizer, and a CCD camera, see Fig. 1(b). Six intensity profile images were taken in total for each mode. Four images were taken without the QWP and with the linear polarizer angle set to \(0^{\circ}\) (I\({}_{H}\)), \(45^{\circ}\) (I\({}_{D}\)), \(90^{\circ}\) (I\({}_{V}\)), and \(135^{\circ}\) (I\({}_{A}\)), and two images were taken by inserting the QWP set to \(90^{\circ}\) while the polarizer was set to \(45^{\circ}\) (I\({}_{R}\)) and \(135^{\circ}\) (I\({}_{L}\)). The SOPs were determined by analyzing the six profile images using Stokes polarimetry. Furthermore, the Stokes phase and Stokes index were determined [41], see Section 2.3.
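A minimal sketch of the Stokes polarimetry step, assuming the six registered intensity images are available as arrays; the normalization by the total intensity and the small epsilon guard are implementation choices made here for illustration:

```python
import numpy as np

def stokes_from_images(I_H, I_V, I_D, I_A, I_R, I_L, eps=1e-12):
    """Pixel-wise Stokes parameters from the six polarimetry images."""
    S0 = I_H + I_V                       # total intensity
    norm = np.maximum(S0, eps)           # avoid division by zero in dark regions
    S1 = (I_H - I_V) / norm              # horizontal/vertical
    S2 = (I_D - I_A) / norm              # diagonal/anti-diagonal
    S3 = (I_R - I_L) / norm              # right/left circular
    return S0, S1, S2, S3
```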
### Simulations
Each mode experiences arbitrary birefringence as it propagates along the fiber. The total field in the fiber at any point is the sum of the propagating modes with a corresponding phase shift. The addition of FBG mirrors to the fiber induces an additional birefringence [56, 57], which can be incorporated in a single birefringence matrix. Note, this model does not include cavity boundary conditions since we only aim to simulate the spatial profiles of the fiber modes. We can calculate an arbitrary fiber field, \(\mathbf{E}\), due to interference and birefringence by taking a summation over different fiber modes, such that
\[\mathbf{E}=\sum_{M=1}^{n}J_{M}A_{M}\mathbf{E}_{M}e^{i\phi_{M}}, \tag{1}\]
where \(n\) is the number of eigenmodes to be interfered, \(\mathbf{E}_{M}\) is the electric field of a fiber eigenmode \(M\in\text{TE}_{0,m}\), \(\text{TM}_{0,m}\), \(\text{HE}_{\ell,m}\) and \(\text{EH}_{\ell,m}\), with \(\ell\in\mathbb{Z}^{+}\) being the azimuthal mode order, which defines the helical phase front and the associated phase gradient in the fiber transverse plane. \(m\in\mathbb{Z}^{+}\) is the radial mode order, which indicates the \(m^{th}\) solution of the corresponding eigenvalue equation [5]. \(A_{M}\) is the amplitude, \(\phi_{M}\) is the phase between modes, and \(J_{M}\) represents the arbitrary birefringence Jones matrix of each eigenmode \(\mathbf{E}_{M}\), such that
\[J_{M}=e^{i\eta_{M}/2}\begin{pmatrix}cos^{2}\theta_{M}+e^{i\eta_{M}}sin^{2} \theta_{M}&(1-e^{i\eta_{M}})cos\theta_{M}sin\theta_{M}\\ (1-e^{i\eta_{M}})cos\theta_{M}sin\theta_{M}&sin^{2}\theta_{M}+e^{i\eta_{M}} cos^{2}\theta_{M}\end{pmatrix}, \tag{2}\]
where \(\eta_{M}\) is the relative phase retardation induced between the fast axis and the slow axis, and \(\theta_{M}\) is the orientation of the fast axis with respect to the horizontal-axis, _i.e._, perpendicular to mode propagation.
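A minimal numerical sketch of Eqs. (1)-(2), assuming the transverse eigenmode fields \(\mathbf{E}_{M}\) have already been evaluated on a grid as arrays of shape (2, Ny, Nx) holding the two transverse field components (this data layout is an assumption made for illustration):

```python
import numpy as np

def jones_matrix(eta, theta):
    """Retarder Jones matrix of Eq. (2): retardation eta, fast-axis orientation theta."""
    c, s, e = np.cos(theta), np.sin(theta), np.exp(1j * eta)
    return np.exp(1j * eta / 2) * np.array([[c**2 + e * s**2, (1 - e) * c * s],
                                            [(1 - e) * c * s, s**2 + e * c**2]])

def total_field(modes):
    """Sum of Eq. (1); each entry is (E_M, A_M, phi_M, eta_M, theta_M)."""
    E = 0
    for E_M, A, phi, eta, theta in modes:
        J = jones_matrix(eta, theta)
        # apply the 2x2 Jones matrix to the two transverse components at every pixel
        E = E + A * np.exp(1j * phi) * np.einsum("ij,jyx->iyx", J, E_M)
    return E
```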
Let us now consider the system with an ONF supporting \(\text{HE}_{11}^{o}\), \(\text{HE}_{11}^{e}\), \(\text{TE}_{01}\), \(\text{TM}_{01}\), \(\text{HE}_{21}^{o}\) and \(\text{HE}_{21}^{e}\), so that the number of modes that can be interfered is \(n\leq 6\). The cross-sectional profiles and SOPs of \(\text{TE}_{01}\) and \(\text{HE}_{21}^{e}\) are shown in Fig. 2(a, b), respectively. The \(\text{TM}_{01}\) and \(\text{HE}_{21}^{o}\) modes are not shown here, but their vector fields are orthogonal to those of \(\text{TE}_{01}\) and \(\text{HE}_{21}^{e}\) at every point, respectively. These modes have donut-shaped mode profiles with linearly polarized vector fields at any point in the mode cross-section. As an example of possible fiber modes using Eq. 1, Fig. 2(c) illustrates in-phase interference of the \(\text{TE}_{01}\) and \(\text{HE}_{21}^{e}\) modes with equal amplitudes. The resulting mode has a lobe-shaped intensity pattern with scalar fields. Fig. 2(d) is an example of a mode resulting from the interference of the circularly polarized \(\text{HE}_{11}\) and an out-of-phase (a \(\pi\)/2 phase difference) \(\text{TE}_{01}\) and \(\text{TM}_{01}\) with equal amplitudes. The SOPs, which are overlaid on the intensity profile images, are marked as red and blue ellipses, corresponding to right- and left-handed orientations, respectively. This mode is the so-called lemon [55], which contains not only linear polarization but also elliptical and circular polarization components in one mode.
Figure 2: Simulations of (a) TE\({}_{01}\), (b) HE\({}_{21}^{e}\), (c) TE\({}_{01}\) + HE\({}_{21}^{e}\) and (d) lemon. The red and blue SOPs indicate right-handed and left-handed ellipticities, respectively. The scale bars show the normalized intensity (from 0 to 1) and the Stokes phase (from 0 to 2\(\pi\)). Stokes singularity points of \(\sigma_{12}\), \(\sigma_{23}\), and \(\sigma_{31}\) are indicated as pink, orange, and blue dots, respectively. An L-line is indicated in green.
When using Eq. 1 to simulate mode profiles, a number of eigenmodes with similar intensity patterns and SOPs to an experimentally observed cavity mode were selected as the initial conditions. Next, the variables \(A_{M}\), \(\phi_{M}\), \(\eta_{M}\), and \(\theta_{M}\) were tuned to match the experimentally observed cavity mode intensities, SOPs, and Stokes phases. Polarization topological defects in the simulated modes were then identified, using the method described in the following Section 2.3.
### Analysis
The polarization gradient was calculated in order to identify Stokes singularities in the cross-section of the mode. The gradient map is known as the Stokes phase, \(\phi_{ij}\), which is given by [42, 45]
\[\phi_{ij}=Arg(S_{i}+iS_{j}), \tag{3}\]
where \(S_{i}\) and \(S_{j}\) are Stokes parameters with \(\{i,j\}\in\{1,2,3\}\) in order, and \(i\neq j\). The phase uncertainty points, _i.e._, Stokes singularities, were identified by obtaining the Stokes indices, \(\sigma_{ij}\), which are defined as [42, 45]
\[\sigma_{ij}=\frac{1}{2\pi}\oint_{c}\phi_{ij}\cdot dc, \tag{4}\]
where \(\oint_{c}\phi_{ij}\cdot dc\) = \(\Delta\phi_{ij}\) is the counterclockwise azimuthal change of the Stokes phase around the Stokes singularity. Singularities of \(\sigma_{12}\) are known as V-points and C-points, in vector and ellipse fields, respectively [42]. Singularities of \(\sigma_{23}\) and \(\sigma_{31}\) are known as Poincare vortices [43, 44, 45]. L-lines are located where \(\phi_{23}\) = \(\{0,\pi,2\pi\}\). Table 1 is a summary of the classification of the Stokes singularity types in terms of the Stokes phases and singularity indices with the corresponding polarizations in the vector and ellipse fields [43, 45, 46, 58].
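A minimal sketch of this analysis on measured Stokes maps is given below; the pixel-level plaquette winding used here is one common way to approximate the closed-loop integral of Eq. (4), and the array names are illustrative:

```python
import numpy as np

def stokes_phase(Si, Sj):
    """Stokes phase phi_ij = Arg(S_i + i S_j), Eq. (3), mapped to [0, 2*pi)."""
    return np.angle(Si + 1j * Sj) % (2 * np.pi)

def stokes_index_map(phi):
    """Winding of the Stokes phase around each 2x2 pixel plaquette (Eq. (4))."""
    def wrap(d):
        return np.angle(np.exp(1j * d))            # wrap phase differences to (-pi, pi]
    loop = (wrap(phi[:-1, 1:] - phi[:-1, :-1]) +   # circulation around the plaquette
            wrap(phi[1:, 1:] - phi[:-1, 1:]) +
            wrap(phi[1:, :-1] - phi[1:, 1:]) +
            wrap(phi[:-1, :-1] - phi[1:, :-1]))
    return np.rint(loop / (2 * np.pi)).astype(int)  # non-zero entries mark singularities
```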
The Stokes singularity points and L-lines were found from the Stokes phases, then superimposed and marked on the mode profiles. As examples, from Figs. 2(a, b), the center of the mode profiles for both TE\({}_{01}\) and HE\({}_{21}^{e}\) contains a V-point, with \(\sigma_{12}\) = -2 and +2 (pink dot), respectively. These points were found from their Stokes phases \(\phi_{12}\) (lower panels in Figs. 2(a, b)). In contrast, the lemon mode in Fig. 2(d) has a closed loop representing an L-line (green) and all three types of Stokes singularities: a C-point with \(\sigma_{12}\) = -1 (pink dot), Poincare vortices with \(\sigma_{23}\) = -1 and +1 (orange dots), and \(\sigma_{31}\) = -1 and +1 (blue dots) were found from \(\phi_{12}\), \(\phi_{23}\), and \(\phi_{31}\), respectively. The lobe-shaped scalar mode in Fig. 2(c) does not have a \(2\pi\) gradient in any associated Stokes phases, since topological defects can only exist in non-scalar fields [41].
\begin{table}
\begin{tabular}{c c c c} \hline \hline Stokes & Stokes phase & Stokes index/ & Polarization \\ singularity & & Phase values & \\ \hline \hline V-point (v) & \(\phi_{12}\) & \(\sigma_{12}\) & Null \\ \hline C-point (e) & \(\phi_{12}\) & \(\sigma_{12}\) & R/L \\ \hline Poincaré & \(\phi_{23}\) & \(\sigma_{23}\) & H/V \\ vortex (e) & \(\phi_{31}\) & \(\sigma_{31}\) & D/A \\ \hline L-line (e) & \(\phi_{23}\) & 0, \(\pi\), \(2\pi\) & Linear \\ \hline \end{tabular}
\end{table}
Table 1: **List of Stokes singularities in vector fields (v) and ellipse fields (e) by the singularity index, \(\sigma_{ij}\), using the Stokes phase, \(\phi_{ij}\), with \(\{i,j\}\in\{1,2,3\}\) in order.**
Results and discussion
### Cavity with a single-mode optical nanofiber
As an initial experimental test, the spectrum for a HOM cavity containing an ONF of waist diameter \(\sim\) 450 nm was obtained, see Fig. 3(a). This ONF waist can only support the fundamental modes. The IPC paddle angles were set so that two distinct, well-separated modes with minimal spectral overlap were observed. The finesses of Modes 1 and 2 in Fig. 3(a) were 12 and 15, respectively. The laser was locked to each of these two cavity modes consecutively and the mode profiles were observed at the output end face of the fiber cavity. The corresponding mode intensity profiles, SOPs, and Stokes phases are shown in Figs. 3(b)(i, ii). The intensity profiles for both Modes 1 and 2 were slightly skewed Gaussian shapes. The HE\({}_{11}\) eigenmode intensity shape is Gaussian, so the slight deviation from the expected shape may be attributed to aberrations in the optical beam path. In terms of polarization distribution, the Stokes phases of Modes 1 and 2 were uniform; in other words, their SOPs were scalar fields, regardless of the IPC paddle angles chosen, as expected for the HE\({}_{11}\) mode.
Although the pretapered fiber supported the full set of eigenmodes in LP\({}_{11}\), LP\({}_{02}\), and LP\({}_{21}\), when the ONF with a diameter \(\sim\) 450 nm was inserted between the two sets of mirrors, only one or two modes with quasi-Gaussian profiles were observed, no matter which IPC paddle angles were chosen. The HOMs were filtered out due to the tapered fiber waist being SM, analogous to an intracavity pinhole spatial filter. Mode filtering as a function of the ONF waist diameter was observed experimentally [17]. However, here, we could additionally observe the mode filtering effect on the cavity spectrum and SOP of each mode.
In an ideal SM-ONF cavity with no birefringence, there are two degenerate orthogonal modes. However, due to random birefringence of the fiber and the cavity mirrors, the two modes become non-degenerate, _i.e._, separated in frequency, leading to coupling between the modes [59]. Mode coupling of orthogonal modes can occur in a birefringent medium and this effect can increase in a cavity configuration [60]. Mode coupling in an ONF cavity due to asymmetrical mirrors has been discussed previously [56] and experimental evidence of mode coupling due to intrinsic birefringence in a SM-ONF cavity has already been reported [57]. In our experiments, non-orthogonal combinations of SOPs were observed, as seen in Figs. 3(b)(i, ii). Mode 1 was horizontally polarized (red/blue lines in Fig. 3(b)(i)), while Mode 2 was left elliptically polarized (blue ellipse in Fig. 3(b)(ii)). By adjusting the IPC angles, it was possible to change the phase relationship and coupling between the HE\({}_{11}^{o}\) and HE\({}_{11}^{e}\) modes, and shift between orthogonal and non-orthogonal combinations of SOPs.
### Cavity with a higher-order mode optical nanofiber
Next, the spectrum for a HOM cavity containing an ONF of waist diameter \(\sim\) 840 nm was obtained, see Fig. 4(a). This ONF can support the HE\({}_{11}\), TE\({}_{01}\), TM\({}_{01}\), HE\({}_{21}^{o}\), and HE\({}_{21}^{e}\) modes. The IPC paddle angles were set to obtain the maximum number of well-resolved modes in a single FSR, see Fig. 4(a). One can clearly see five distinct peaks indicating that the HOM-ONF does not degrade the modes in the cavity and the finesses of the cavity modes are high enough to resolve them. The finesses of Modes 1 to 5 were 12, 16, 13, 22, and 13, respectively. The mode finesse values of the cavity with a HOM-ONF were in the same range as those for the cavity with a SM-ONF (Fig. 3(a)), implying that the HOM-ONF was adiabatic for the LP\({}_{11}\) group of modes. The laser was locked to each of the cavity modes consecutively and the mode profiles were observed at the output of the fiber cavity. The corresponding mode intensity profiles, SOPs, and Stokes phases are shown in Figs. 4(b)(i-iv). In the spectrum shown in Fig. 4(a), there were five distinctive modes, but locking to Mode 3 was not possible because of its close proximity to the dominant Mode 4.
Two flat-top intensity profiles were observed in Modes 1 and 4, Figs. 4(b)(i, iii) respectively.
Figure 3: (a) A typical spectrum for a HOM cavity with a SM-ONF as the laser is scanned over 150 MHz. The spectrum over a single FSR is indicated by the red box. (b) Mode intensity profiles showing the SOPs (top) and corresponding Stokes phases (bottom) for (i) Mode 1 and (ii) Mode 2. The red and blue SOPs indicate right-handed and left-handed ellipticities, respectively. The scale bars show the normalized intensity (from 0 to 1) and the Stokes phase (from 0 to 2\(\pi\)).
Figure 4: (a) A typical spectrum for a cavity with a HOM-ONF as the laser is scanned over 150 MHz. The spectrum over a single FSR is indicated by the red box. (b) Mode intensity profiles showing the SOP (top) and the corresponding Stokes phases (bottom) for (i) Mode 1, (ii) Mode 2, (iii) Mode 4, and (iv) Mode 5. The red and blue SOPs indicate right-handed and left-handed ellipticities, respectively. The scale bars show the normalized intensity (from 0 to 1) and the Stokes phase (from 0 to 2\(\pi\)). Stokes singularity points of \(\sigma_{12}\), \(\sigma_{23}\), and \(\sigma_{31}\) are indicated as pink, orange, and blue dots, respectively. L-lines are indicated in green. (c) Corresponding simulated results.
The SOPs of these modes are markedly different to those for the Gaussian-type modes in Figs. 3(b)(i, ii), which have simple scalar SOPs. Modes 1 and 4 were inhomogeneously polarized ellipse fields, showing regions of left and right circular polarizations divided by an L-line (Figs. 4(b)(i, iii)). The center of these two modes exhibited diagonal and anti-diagonal polarizations, respectively, _i.e._, the SOPs at the center of the modes were orthogonal to each other. Going towards the edges of the modes, the polarization changes from linear to circular, with opposite handedness either side of the L-lines. Notice also in Fig. 4(a) that Modes 1 and 4 are not well frequency separated from neighboring modes. This suggests that the mode profiles and SOPs of these modes were not only affected by birefringence and degenerate modal interference, but also some non-degenerate modal interference with neighboring cavity modes [60]. Additionally, for Mode 4, we identified two C-points (\(\sigma_{12}\) = -1), indicated by the pink dots in Fig. 4(b)(iii), where the value of \(\phi_{12}\) changed by \(2\pi\) (see Table 1). Interference of HE\({}_{11}\) with modes from the LP\({}_{11}\) group can generate C-points in a few-mode fiber [55], see Fig. 2(d).
We performed basic simulations to determine if combinations of HE\({}_{11}\) and some mode(s) in the LP\({}_{11}\) family could generate similar mode profiles and SOP structures as those in Figs. 4(b)(i, iii). The simulated results are shown in Figs. 4(c)(i, iii). The HE\({}_{11}\) and TM\({}_{01}\) modes were selected as possible contributors and their amplitudes, phase, and birefringence fitting parameters were tuned to match the experimental results. Modes 1 and 4, see Figs. 4(b)(i, iii), could have been formed from different mode combinations rather than our assumed HE\({}_{11}\) and TM\({}_{01}\); however, these modes were very likely formed by interference between HE\({}_{11}\) and some mode(s) of the LP\({}_{11}\) group, resulting in their inhomogeneous SOPs and flat-top shapes.
We also observed two distorted lobe-shaped modes, Modes 2 and 5, see Figs. 4(b)(ii, iv). The lobe-shaped pattern also arises from modal interference between modes in the LP\({}_{11}\) family (as an example, see Fig. 2(c)). With reference to Table 1, Mode 2, Fig. 4(b)(ii), showed all three types of Stokes singularities, indicated by pink dots for C-points (\(\sigma_{12}\) = +1) and orange/blue dots for Poincare vortices (\(\sigma_{23}\) = -1 /\(\sigma_{31}\) = +1), as presented in \(\phi_{12}\), \(\phi_{23}\), and \(\phi_{31}\), respectively. A single mode containing all Stokes singularities has been demonstrated using free-space interferometers [46, 43]; here, we generated them within a single mode using a fiber cavity system. Mode 5, Fig. 4(b)(iv), also had two C-points (\(\sigma_{12}\) = +1) and a Poincare vortex (\(\sigma_{23}\) = +1), as seen in \(\phi_{12}\), and \(\phi_{23}\), respectively. Fig. 4(a) shows that Modes 2 and 5 are not well frequency separated from Modes 1 and 4, respectively. Therefore, there is a likely contribution from the HE\({}_{11}\) mode resulting in distortion of the lobe shape.
To simulate Mode 2 in Fig. 4(b)(ii), we combined TE\({}_{01}\), HE\({}_{21}^{e}\), and HE\({}_{11}\), and to simulate Mode 5 in Fig. 4(b)(iv), we used TM\({}_{01}\), HE\({}_{21}^{e}\), and HE\({}_{11}\). The amplitude of each mode, phase shift, and birefringence parameters were adjusted to achieve a close fit. The simulated results are shown in Figs. 4(c)(ii, iv). These plots are not exact replications of the experimental results since the parameter space is large and the exact initial conditions are not known; nevertheless, the match is reasonably close.
Interestingly, many of the cavity modes obtained in different sets of spectra, which were generated using different IPC angles, exhibited Stokes singularities. Polarization singularities are known to propagate through a birefringent medium as C-lines and L-surfaces and their evolution is affected by the homogeneity of the birefringence along the propagation path [47, 48, 49]. This phenomenon is due to the conservation of the topological charge [49, 58, 61], and the Stokes index value, \(\sigma_{ij}\), remains constant [58]. However, our cavity is an inhomogeneous birefringent medium as it contains a number of different birefringent elements such as the FBG mirrors and the IPC, as such, the degree of birefringence varies along the propagation direction. Therefore, the presence of Stokes singularities in the imaged field at the cavity output does not necessarily guarantee the existence of such topological defects in the ONF region. Nonetheless, singularity points can enter, move and exit with a smooth and continuous variation of birefringence [50]. Therefore, the SOP is expected to evolve along the length of the cavity, with singularity points shifting and
making numerous entries and exits in the cross-section profile of the modes. However, since the ONF waist is relatively straight and uniform, the birefringence variation at the waist should be minimal [62] and topological features appearing at the start of the waist should be preserved every \(2\pi\) along the waist.
Theoretically, the HOM-ONF can support a total of six eigenmodes as mentioned earlier. Therefore, one might expect that the spectrum should show six distinct modes. However, we typically observed three to five distinct peaks in a single FSR depending on the IPC paddle angles. This could be explained by the lack of sufficient finesse to resolve all modes, some of which are closely overlapped [60]. Nevertheless, it may be feasible to increase the mode finesses by increasing the mirror reflectivity and using an ONF with lower transmission loss than the one used (the estimated loss for Mode 4, the highest-finesse mode in Fig. 4(a), was \(\sim 20\%\)). Nonetheless, the finesse values of our \(\sim 2\) m long cavity with a HOM-ONF should be sufficient for cQED experiments with narrow line-width emitters such as cold atoms.
### In situ higher-order cavity mode tuning
A key feature of this setup is the ability to tune the spectrum and SOP to create the desired mode in the cavity. We aimed to observe modes with donut-shaped intensity patterns and SOPs similar to the fiber eigenmodes TE\({}_{01}\) (Fig. 2(a)), TM\({}_{01}\), HE\({}_{21}^{o}\), and HE\({}_{21}^{e}\) (Fig. 2(b)). To achieve this, the laser was locked to a well-resolved lobe-shaped mode. The paddle angles of the IPC were then adjusted, and the mode shape was monitored with a CCD camera until a donut mode profile was observed. Unlocking and scanning the laser revealed a new spectrum with each mode containing a new profile. The IPC was adjusted again to maximize another mode and the laser was locked to this new mode. The IPC paddle angles were tuned to once more convert the mode profile to a donut shape. This procedure was repeated for four different modes, see Figs. 5(a)(i-iv); these modes look similar to the true fiber eigenmodes HE\({}_{21}^{e}\) (Fig. 2(b)), HE\({}_{21}^{o}\), TE\({}_{01}\) (Fig. 2(a)), and TM\({}_{01}\), respectively. There was a slight deformation from a perfect donut shape and their SOPs were not vector fields, but rather ellipse fields with alternating regions of opposite handedness. While the donut eigenmodes possessed a V-point at the center, as indicated by pink dots in Figs. 2(a, b), the observed quasi-donut modes in Figs. 5(a)(i-iv) had some nominal intensity at the center. These modes had two C-points of \(\sigma_{12}\) = -1 or +1 near the center (see pink dots in Figs. 5(a)(i-iv)), as opposed to a single point of \(\sigma_{12}\) = -2 or +2 in the true eigenmodes (Figs. 2(a, b)). Indeed, perturbation of vector-field polarization singularities can occur when scalar linearly polarized beams are interfered [63].

Figure 5: (a) Mode intensity profiles for quasi-donut-shaped cavity modes from the cavity containing a HOM-ONF with their SOPs (top) and Stokes phases (bottom) similar to the fiber eigenmodes of (i) HE\({}_{21}^{e}\), (ii) HE\({}_{21}^{o}\), (iii) TE\({}_{01}\), and (iv) TM\({}_{01}\). The red and blue SOPs indicate right-handed and left-handed ellipticities, respectively. Scale bars show intensity (from 0 to 1) and Stokes phase (from 0 to \(2\pi\)). Stokes singularities of \(\sigma_{12}\), \(\sigma_{23}\), and \(\sigma_{31}\) are indicated as pink, orange, and blue dots, respectively. L-lines are illustrated as green lines. (b) Corresponding simulated results.
These donut-shaped cavity modes were also simulated, as shown in Figs. 5(b)(i-iv). To obtain a good fit for the experimentally observed intensities, SOPs, and Stokes phases in Figs. 5(a)(i-iv), the simulated modes included a slight deformation of the donut shape by adding some components of the HE\({}_{11}\) mode to modes in the LP\({}_{11}\) group. Moreover, the simulated results show that the Stokes phases are very similar to those obtained experimentally. The number of possible combinations of modal interference with varying birefringence is large and this leads to discrepancies between the experiment and simulation. However, these findings indicate that the experimentally observed quasi-donut modes are likely the result of residual interference between the HE\({}_{11}\) mode and modes in the LP\({}_{11}\) group. Degeneracy of multiple modes may be avoided by increasing the cavity mode finesses so that each mode can be well separated. The system demonstrated here shows that, even in a complex system, the HOMs and their SOPs can be controlled to create exotic topological states.
## 4 Conclusion
We have experimentally demonstrated a Fabry-Perot fiber cavity with a HOM-ONF and performed cavity spectroscopy. The cavity mode profiles and transverse polarization topology were also determined by imaging and analyzing the individual cavity modes at the output. These modes had inhomogeneous polarization distributions with a number of Stokes singularities. We also simulated the fiber modes, which closely match those observed at the output of the cavity. Moreover, _in situ_ intracavity manipulation of the modal birefringence and interference to select a specific mode of interest was demonstrated. This indicates that the evanescent field of a HOM-ONF could be tuned by adjusting the IPC paddle angles.
These findings are a step toward investigating the interactions between SAM and OAM of a HOM-ONF. Research into the interference of HOMs at the waist of an ONF is an exciting opportunity to uncover the nature of light-matter interactions in tightly confining geometries with topological singularities. Additionally, the realization of a (de)multiplexing system using degenerate HOMs in an ONF-based cavity may be possible by improving the tunability of the modal birefringence and interference. Such a system is attractive for future quantum information platforms as an efficient and secure storage medium.
The interference of higher-order cavity modes with fixed ratios in the evanescent field of an ONF may also be used to trap and manipulate cold atoms. Adjusting the overlap and SOP of the HOMs should result in movement of the trapping sites relative to each other, enabling some trap dynamics to be studied [4, 15, 16]. This cavity could also be used with quantum emitters to study multimode cQED effects using degenerate HOMs. The HOM cavity studied here had a moderate finesse, sufficient for cQED experiments with cold atoms. In free-space optics, strong coupling of multiple transverse HOMs with atoms has been achieved [38], whereas this has not been achieved using an ONF-type cavity. Our work is a significant step towards this realization.
Moreover, the ability of our cavity to generate all three types of Stokes singularities may be useful to realize not only a C-point laser but also an all-Stokes singularity laser using a few-mode fiber. The combinations of fiber modes that we used in the simulations were found via manual trial-and-error estimates to obtain a visual match with the experimentally observed modes. More accurate control could be achieved by using machine learning techniques to fully cover the parameter space of permitted modes in the cavity. This may enable us to determine the correct combination of modes that lead to the observed cavity outputs and facilitate feedback to optimize the input to the system to generate desired modes in the cavity.
## Funding
Okinawa Institute of Science and Technology Graduate University.
Acknowledgments.The authors acknowledge F. Le Kien, L. Ruks, V. G. Truong, and J. M. Ward for discussions and K. Karlsson for technical assistance.
## Disclosures
The authors declare no conflicts of interest.
## Data availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
Optical nanofiber cavity research has primarily focused on the fundamental mode. Here, we demonstrated a Fabry-Perot fiber cavity with an optical nanofiber supporting the higher-order modes TE01, TM01, HE21o, and HE21e. Using cavity spectroscopy, we imaged and analyzed the modes and observed cavity resonances with inhomogeneous polarization states, including Stokes singularities such as C-points, Poincaré vortices, and L-lines. Through in situ tuning of the intracavity birefringence, we obtained cavity modes with the desired profiles and polarization properties. These findings open new research directions toward cold-atom manipulation and higher-order-mode cavity quantum electrodynamics. |
2309.08156 | RADE: Reference-Assisted Dialogue Evaluation for Open-Domain Dialogue | Evaluating open-domain dialogue systems is challenging for reasons such as
the one-to-many problem, i.e., many appropriate responses other than just the
golden response. As of now, automatic evaluation methods need better
consistency with humans, while reliable human evaluation can be time- and
cost-intensive. To this end, we propose the Reference-Assisted Dialogue
Evaluation (RADE) approach under the multi-task learning framework, which
leverages the pre-created utterance as reference other than the gold response
to relieve the one-to-many problem. Specifically, RADE explicitly compares
reference and the candidate response to predict their overall scores. Moreover,
an auxiliary response generation task enhances prediction via a shared encoder.
To support RADE, we extend three datasets with additional rated responses other
than just a golden response by human annotation. Experiments on our three
datasets and two existing benchmarks demonstrate the effectiveness of our
method, where Pearson, Spearman, and Kendall correlations with human evaluation
outperform state-of-the-art baselines. | Zhengliang Shi, Weiwei Sun, Shuo Zhang, Zhen Zhang, Pengjie Ren, Zhaochun Ren | 2023-09-15T04:47:19 | http://arxiv.org/abs/2309.08156v2 | # RADE: Reference-Assisted Dialogue Evaluation for Open-Domain Dialogue
###### Abstract
Evaluating open-domain dialogue systems is challenging for reasons such as the one-to-many problem, i.e., many appropriate responses other than just the golden response. As of now, automatic evaluation methods need better consistency with humans, while reliable human evaluation can be time- and cost-intensive. To this end, we propose the **R**eference-**A**ssisted **D**ialogue **E**valuation (RADE) approach under the multi-task learning framework, which leverages the pre-created utterance as reference other than the gold response to relieve the one-to-many problem. Specifically, RADE explicitly compares reference and the candidate response to predict their overall scores. Moreover, an auxiliary response generation task enhances prediction via a shared encoder. To support RADE, we extend three datasets with additional rated responses other than just a golden response by human annotation. Experiments on our three datasets and two existing benchmarks demonstrate the effectiveness of our method, where Pearson, Spearman, and Kendall correlations with human evaluation outperform state-of-the-art baselines.
## 1 Introduction
Open-domain dialogue system, which focuses on non-goal-oriented chitchat, may converse on a broad range of arbitrary topics. Recent years have witnessed rapid advances in natural language generation (Zhang et al., 2020; Roller et al., 2021; Zhao et al., 2023), boosting the development of open-domain dialogue systems. Conversations with such systems resemble human-human interactions as various responses might fit the context, given that users often do not have a specific goal beyond enjoying the conversation. Evaluating these conversations is thus challenging because of the so-called one-to-many problem (Chan et al., 2021; Ji et al., 2022); see Figure 1 where three candidate responses with different semantics fit the context while there is only one golden response.
The most common practice of dialogue evaluation is done with reference-based metrics, which compare the generated response with a pre-created response, commonly referred to as the golden standard (Ji et al., 2022). The reference-based metrics calculate the similarity between the generated and gold responses at either the lexical level (e.g., ROUGE (Lin, 2004), BLEU (Papineni et al., 2002)) or the semantic level (e.g., BERTScore (Zhang et al., 2020), ADEM (Lowe et al., 2017)). However, these metrics ignore the one-to-many nature of open-domain dialogues. As illustrated at the bottom of Figure 1, the generated response "_Amazon is good but expensive..._" expresses the opposite semantics to the golden response "_I shop online..._" and is therefore considered a non-good response by the reference-based metrics. Therefore, these metrics may have low consistency with human judgments. Recently, _multi-reference methods_ and _reference-free methods_ have been proposed to address the drawbacks of reference-based metrics. The former explicitly annotates multiple references for each dialogue (Eric et al., 2021), whereas the latter discards the golden response in the evaluation and achieves high correlations with human judgments (Mehri and Eskenazi, 2020; Huang et al., 2020). However, drawbacks still exist in these two classes of methods. Multi-reference methods are costly and hard to generalize to different datasets, while reference-free methods are often unstable and vulnerable to data-induced biases1.

Figure 1: An example to explain the one-to-many nature of open-domain dialogues.
Footnote 1: The data-induced biases include two aspects: (1) noise in the collected data/annotations, and (2) reference-free models tend to favor the outputs of their underlying models and of models that are similar or trained on similar datasets (Khalid and Lee, 2022; Deutsch et al., 2022).
To overcome the weakness of existing evaluation methods and further resolve the one-to-many problem, we propose a new technique, namely **R**eference-**A**ssisted **D**ialogue **E**valuation (RADE). RADE considers the pre-created response as a reference instead of the golden standard.
To support RADE, we design a new human annotation task to extend existing datasets, which includes metric decomposition and pairwise annotation, where a pre-scored golden response is paired with generated responses that are rated on a unified scale. The final scores are obtained by aggregating the ratings of the different sub-metrics with a weighted sum. The human annotation collects labels for three high-quality datasets with 10,112 dialogues, which correspond to three downstream open-domain dialogue system tasks, i.e., chitchat, empathetic dialogue, and personal chat. These multi-domain datasets make RADE more robust when generalizing to cross-domain evaluation scenarios while achieving better task-specific performance.
We propose a RADE model under the multi-task learning framework for automatic evaluation based on the newly collected datasets. Specifically, RADE first explicitly encodes the relation between dialogue context and generated response with reference assistance. Then RADE discriminates whether the reference or response fits the context better and predicts the scores for each utterance. To relieve the one-to-many problem, we augment RADE with a joint response generation task where RADE learns to generate the reference responses to better perceive the range of candidate responses.
Extensive experiments on our three benchmarks demonstrate that RADE achieves the best correlations with human judgment. We also examine two existing USR benchmarks (Mehri and Eskenazi, 2020), where RADE outperforms the state-of-the-art methods, e.g., pushing the Pearson correlation coefficient to 48% (a 6.8% absolute improvement) and the Spearman correlation coefficient to 46.6% (a 4.3% absolute improvement). Experiments also verify the generalizability of our proposed method.
Our contributions can be summarized as follows: (1) We propose the reference-assisted evaluation method, i.e., RADE, for open-domain dialogue evaluation; (2) We design a new human annotation task and collect three new dialogue evaluation datasets; (3) Experiments on our benchmarks and two existing benchmarks verify the effectiveness and robustness of the proposed methods; (4) We release three new benchmarks and the pre-trained evaluation model to facilitate future research on dialogue evaluation.
## 2 Related work
### Reference-based dialogue evaluation
Previous reference-based methods compare the generated response with the pre-created response at the lexical or semantic level. Lexical-level metrics, e.g., ROUGE (Lin, 2004), BLEU (Papineni et al., 2002) and METEOR (Banerjee and Lavie, 2005), count the n-gram overlap between the candidate response and the reference response. These methods usually correlate poorly with human evaluation results due to the lexical mismatch problem (Liu et al., 2016). Semantic-level metrics address the lexical mismatch problem by calculating similarity with high-dimensional embeddings. For example, Sharma et al. (2017) measure the embedding distance between the golden and generated responses. Ghazarian et al. (2019) and Zhang et al. (2020) enhance the text representation using large pre-trained models, which have shown exemplary performance in capturing semantic similarity. However, they suffer from the one-to-many problem when evaluating open-domain dialogues since responses with various semantics may fit the dialogue context.
Recent works tend to relieve this drawback by annotating multiple references for dialogue, commonly referred to as multi-reference methods (Li et al., 2017; Sai et al., 2020), which are costly and hard to generalize to agnostic scenarios. The proposed RADE aims to consider the pre-created response as a candidate instead of the golden standard to address the one-to-many problem of dialogue evaluation.
### Reference-free dialogue evaluation
The reference-free methods are gaining more attention as they correlate better with human judgment while requiring only the dialogue context and response. For example, MAUDE predicts the score of a dialogue using pre-trained language models, GRADE (Huang et al., 2020) evaluates the coherence of dialogues with the augmentation of a topic-level commonsense graph, and EMS (Chan et al., 2021) enhances the dialogue evaluation by capturing the representation of the context and response in latent space. Some methods further decompose the evaluation of responses into multiple perspectives (Mehri and Eskenazi, 2020b,c; Phy et al., 2020), such as relevance, fluency, and engagingness, and then aggregate the overall score from the different sub-metrics with a weighted average. However, some recent studies (Khalid and Lee, 2022; Deutsch et al., 2022) reveal that reference-free methods are vulnerable to data-induced biases and inherently biased toward models that are similar to their own underlying models. In contrast, this paper proposes a reference-assisted approach, which enhances the robustness of the model by using reference responses as a benchmark.
## 3 Task Formulation
In this work, we propose two tasks: (1) extending the existing datasets by human annotation, and (2) leveraging the rated references collected in (1) to enhance automatic evaluation.
Human annotationHuman annotation aims to extend existing datasets with multiple rated responses to facilitate automatic evaluation. Given a dialogue context \(c\), which is always paired with a golden response (denoted as reference) \(r_{h}\), we employ generation models, e.g., Blender-Bot Roller et al. (2021), to generate one more response \(r_{a}\). We then assign the reference an overall score \(s_{h}\), which is either fixed or derived from existing datasets. The annotators are instructed to rate \(r_{a}\) as \(s_{a}\), following the same scale while taking the reference as a benchmark. The annotators are also asked to revise the reference score \(s_{h}\) if \(s_{h}\) is inappropriate.
Automatic evaluationGiven a dialogue context \(c\), the proposed RADE learns to evaluate the response \(r_{a}\) with the assistance of reference \(r_{h}\) under the multi-task learning framework. The first task explicitly models the relation between reference and response and discriminates which fits the context better. The scores of reference and response are predicted simultaneously. And the second task enhances the score prediction task by implicitly estimating the distribution of candidate responses.
## 4 Human Annotation
Our human annotation task aims to rate the candidate responses following a pre-scored reference as a benchmark. Since there are multiple perspectives to assess the response, we simplify by sorting the possible aspects into two categories: the general view and the task-specific view. As listed in Table 1, the former contains relevance, engagingness, and fluency, which are suitable for all dialogue agents. The task-specific criteria consist of understandability, emotional awareness, and personality awareness, which correspond to chitchat dialogue, emotional dialogue, and persona dialogue. We collect ratings for each metric and calculate the overall rating score by weighting these sub-metrics. Specifically, the weights are obtained based on the preference of users (see section A.1.3 for more details).
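For illustration, the aggregation of the sub-metric ratings into an overall score can be sketched as below. The weight values are placeholders chosen only for the example; the actual user-preference-derived weights are given in Appendix A.1.3 and are not reproduced here.

```python
# Hypothetical weights for a chitchat example (general metrics + understandability).
weights = {"relevance": 0.4, "engagingness": 0.3, "fluency": 0.2, "understandability": 0.1}

def overall_score(ratings: dict) -> float:
    """Weighted sum of the annotated sub-metric ratings (five-point scale)."""
    return sum(weights[m] * ratings[m] for m in weights)

print(overall_score({"relevance": 4, "engagingness": 3, "fluency": 5, "understandability": 3}))
```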
### Data preparation
We consider three datasets to extend: \(\bullet\)_DSTC-ChitChat (ChitChat)_Hori and Hori (2017), a chitchat dataset collected from Twitter, each example derived from the conversation between a customer and an agent. \(\bullet\)_Empathetic Dialogues (EmpaDial)_Rashkin et al. (2019), which consists of 25k dialogues grounded in emotional situations.
| Criterion | Description |
| --- | --- |
| **Relevance**\({}^{\dagger}\) | Whether the response matches the dialogue context semantically. |
| **Engagingness**\({}^{\dagger}\) | Whether the response is engaging or interesting rather than a rigid template. |
| **Fluency**\({}^{\dagger}\) | Whether the response is fluent and natural throughout the conversation. |
| **Understandability**\({}^{\ddagger}\) | Whether any external knowledge is contained in the response. |
| **Emotional-awareness**\({}^{\ddagger}\) | Whether the agent captures the emotion of the user and provides empathic support. |
| **Personality-awareness**\({}^{\ddagger}\) | Whether the response conforms to the given personality. |

Table 1: **Criteria in human annotation.** Metrics with \({}^{\dagger}\) are general metrics for all dialogue tasks, while metrics with \({}^{\ddagger}\) are for specific dialogue tasks (e.g., understandability for chitchat, emotional-awareness for emotional dialogue, and personality-awareness for personal chat).
\(\bullet\)_PersonaChat_(Zhang et al., 2018), a real-world dataset consisting of 10k dialogues where each participant plays the part of an assigned persona.
Then, we collect model-generated responses using the following seven well-performing dialogue models on these datasets: BlenderBot (Roller et al., 2021), DialoGPT (Zhang et al., 2020), KEMP (Li et al., 2022), MoEL (Lin et al., 2019), MIME (Majumder et al., 2020), EmpDG (Li et al., 2020), PersonaGPT (Tang et al., 2021).
The train/dev/test splits of the collected datasets are ChitChat (1490/300/300, i.e., 5/1/1), Empathetic Dialogue (3022/500/500, 6/1/1), and PersonaChat (3000/500/500, 6/1/1). More details of these models are available in Appendix A.1.1.
### Human annotation details
We hire 40 annotators for data annotation. Following a five-scale standard, they are asked to label the sub-metrics listed in Table 1. The five-scale allows the annotators to factor in their subjective interpretation of the extent of success or failure of a system's response to satisfy a user's request. The dialogue context, rated reference response, and corresponding score are provided in each example. At least three annotators are required for each example. We annotated about 10k dialogues for the three datasets, and the statistics of the collected datasets are listed in Table 2. The ratings achieve reasonable inter-annotator agreements with Fleiss Kappa scores of 0.540, 0.554, and 0.533 on the three datasets, respectively. More details about the annotation guidelines are provided in Appendix A.1.2.
## 5 Reference-Assisted Automatic Evaluation
We propose RADE, a **R**eference-**A**ssisted Automatic **D**ialogue **E**valuation method under the framework of multi-task learning. Compared with reference-based methods that evaluate based on the distance between the golden and generated response, the proposed RADE explicitly discriminates whether the reference or candidate response fits the dialogue context better. To relieve the one-to-many problem, we augment RADE with a joint response generation task, which aims to perceive the range of feasible candidate responses. To improve the performance of RADE with the limited dataset, we propose a two-stage training strategy, including cross-domain pre-training and task-specific fine-tuning.
The architecture of RADE is illustrated in Figure 2, which comprises a posterior encoder, a regression layer, and a candidate response generator.
Posterior encoder.The posterior encoder encodes the dialogue context \(c\), reference response \(r_{h}\), and model-generated response \(r_{a}\) into hidden representation. In particular, we first concatenate \(c\), \(r_{h}\) and \(r_{a}\) together into \(X\) with a specific token [SEP]:
\[X=\{c\;\texttt{[SEP]}\;r_{h}\;\texttt{[SEP]}\;r_{a}\} \tag{1}\]
Then the concatenated sequence is fed into a transformer-based encoder to get the representation \(\mathbf{H}\in\mathbb{R}^{|X|\times d}\):
\[\mathbf{H}=\mathrm{Encoder}(X), \tag{2}\]
where \(d\) is the hidden size of encoder, \(|X|\) is the length of sequence \(X\).
Regression layer.The regression layer aggregates the representation \(\mathbf{H}\) and predicts the scores of both reference and candidate response simultaneously. Specifically, a pooling layer aggregates the token-level representation into a sequence-level representation: \(\mathbf{h}\in\mathbb{R}^{d\times 1}\):
\[\mathbf{h}=\mathrm{Pooling}(\mathbf{H}) \tag{3}\]
Then, a feedforward network takes \(\mathbf{h}\) as input to predict the score of both reference and candidate response:
\[(\hat{s_{h}},\hat{s_{a}})=\mathrm{FeedForward}(\mathbf{h}), \tag{4}\]
where \(\hat{s_{h}}\) and \(\hat{s_{a}}\) denote the predicted score of \(r_{h}\) and \(r_{a}\), respectively.
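A minimal PyTorch sketch of Eqs. (1)-(4) is given below: the context, reference, and response are concatenated, encoded with a shared BART encoder, mean-pooled, and mapped to two scores by a feedforward head. This is our reading of the description above rather than the released implementation; the checkpoint name, the use of BART's `</s>` token in place of [SEP], the masked mean pooling, and the layer sizes are assumptions.

```python
import torch
import torch.nn as nn
from transformers import BartTokenizer, BartModel

tok = BartTokenizer.from_pretrained("facebook/bart-base")
encoder = BartModel.from_pretrained("facebook/bart-base").encoder  # shared posterior encoder

class RegressionHead(nn.Module):
    def __init__(self, d=768):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(d, d), nn.Tanh(), nn.Linear(d, 2))

    def forward(self, H, mask):
        # Eq. (3): masked mean pooling over tokens
        h = (H * mask.unsqueeze(-1)).sum(1) / mask.sum(1, keepdim=True)
        # Eq. (4): predict scores for the reference and the candidate response
        s_h_hat, s_a_hat = self.ff(h).unbind(-1)
        return s_h_hat, s_a_hat

context = "do you like shopping online?"
reference = "yes, i shop online a lot because it saves time."
response = "amazon is good but expensive, so i prefer physical stores."

# Eq. (1): concatenate with a separator ("</s>" plays the role of [SEP] here)
X = tok(" </s> ".join([context, reference, response]), return_tensors="pt")
H = encoder(input_ids=X["input_ids"], attention_mask=X["attention_mask"]).last_hidden_state  # Eq. (2)
s_h_hat, s_a_hat = RegressionHead()(H, X["attention_mask"].float())
```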
| **Domain** | **ChitChat** | **EmpaDial** | **PersonaChat** |
| --- | --- | --- | --- |
| # Dialogues | 2,090 | 4,022 | 4,000 |
| Kappa | 0.540 | 0.554 | 0.533 |
| _Distribution of the score_ | | | |
| Rating 1 | 0.5% | 1.2% | 3.7% |
| Rating 2 | 15.6% | 12.5% | 12.6% |
| Rating 3 | 48.3% | 42.0% | 50.5% |
| Rating 4 | 29.5% | 32.0% | 23.9% |
| Rating 5 | 5.1% | 12.3% | 9.4% |

Table 2: **The statistics of the collected datasets.** For each example, the overall score of the response is the mean of all sub-metrics.
Candidate response generator.To relieve the one-to-many problem, we devise a candidate response generator to perceive the range of feasible candidate responses (Chan et al., 2021). Specifically, a Transformer-based generator learns to generate reference responses autoregressively for a specific context. We first encode the dialogue context \(c\) using an encoder:
\[\hat{\mathbf{h}}=\mathrm{Encoder}\,(c), \tag{5}\]
where the \(\mathrm{Encoder}\) shares the same parameters with the posteriori encoder in Eq. (2). Then, we apply a Transformer-based decoder \(\mathrm{Decoder}\) to model the generation probability of reference response \(r_{h}\):
\[P(r_{h}|c)=\prod_{t=1}^{T}\mathrm{Decoder}(r_{h}^{(t)}|r_{h}^{(<t)},\hat{ \mathbf{h}}), \tag{6}\]
where \(T\) denotes the length of \(r_{h}\).
Compared with the previous reference-free methods, which estimate the relation between context and response only with the knowledge acquired from their training data, RADE explicitly takes the pre-created response as a benchmark to reduce the data-induced bias when generalizing to agnostic scenarios. Moreover, different from existing reference-based methods, which use the pre-created response as the golden standard without considering the semantic diversity of the response, we relieve the one-to-many problem via the auxiliary response generation task. The shared encoder enhances the capability of context representation, which augments the performance of the score-prediction task through multi-task learning.
### Two-stage training
Neural models have been shown to be prone to data-induced bias, but it is costly to annotate a large dataset for every specific task. Therefore, we propose a two-stage strategy that includes: (1) _cross-domain pre-training_ and (2) _task-specific fine-tuning_, which balances in-domain and cross-domain performance. As shown in Figure 2 (right), we pre-train our model on existing human-annotated datasets from different downstream tasks of open-domain dialogue to improve generalizability (Ye et al., 2021). Since the cross-domain datasets suffer from domain gaps and lack pairwise scores, we finetune our model in the next stage with the newly collected task-specific datasets.
Cross-domain pre-training.The pre-training datasets contain 54,438 dialogue-level examples collected from different downstream tasks, covering a wide range of domains (see more details in Table 7). For learning the coarse-grain judgment of generated response without human-annotated reference scores, our model is first pre-trained by minimizing a new cross-domain pre-training loss \(\mathcal{L}_{\text{Cross}}\). Concretely, the \(\mathcal{L}_{\text{Cross}}\) is composed of score-prediction loss and generation loss, which can be formulated as:
\[\mathcal{L}_{\text{Cross}}=\mathcal{L}_{\text{MSE}}(\hat{s}_{a},s_{a})+ \mathcal{L}_{\text{GEN}}, \tag{7}\]
Figure 2: **Left:** An overview of our model which consists of an encoder, a regression layer, and a response generator. **Right:** Our two-stage training process with cross-domain **pre-training** (PT) and **task-specific finetuning** (TS).
where \(\hat{s}_{a}\) and \(s_{a}\) denote the predicted score and the human-annotated score of the candidate response, and \(\mathcal{L}_{\text{MSE}}(\hat{s}_{a},s_{a})=(\hat{s}_{a}-s_{a})^{2}\). \(\mathcal{L}_{\text{GEN}}\) is the response generation loss, which is defined as:
\[\mathcal{L}_{\text{GEN}}=-\log P(r_{h}|c), \tag{8}\]
where \(P(r_{h}|c)\) is the generation probability of \(r_{h}\) defined in Eq. (6).
Task-specific finetuning.We next finetune our model with the newly annotated datasets to enhance the performance when evaluating task-specific dialogue agents. The optimization objective \(\mathcal{L}_{\text{In}}\) is composed of score-prediction loss, generation loss, and pairwise ranking loss, which can be formulated as:
\[\begin{split}\mathcal{L}_{\text{In}}=&\mathcal{L}_{ \text{MSE}}(\hat{s}_{a},s_{a})+\mathcal{L}_{\text{MSE}}(\hat{s}_{h},s_{h})+\\ &\mathcal{L}_{\text{GEN}}+\mathcal{L}_{\text{PR}}\end{split} \tag{9}\]
where \(\mathcal{L}_{\text{MSE}}(\hat{s}_{a},s_{a})\) and \(\mathcal{L}_{\text{MSE}}(\hat{s}_{h},s_{h})\) are MSE score-prediction loss of reference response and candidate response, respectively. \(\mathcal{L}_{\text{GEN}}\) is the generation loss as defined in Eq. (8). \(\mathcal{L}_{\text{PR}}\) is the pair-wise ranking loss defined as:
\[\mathcal{L}_{\text{PR}}=-g(s_{h},s_{a})\log\frac{\text{e}^{\hat{s}_{a}}}{\text {e}^{\hat{s}_{h}}+\text{e}^{\hat{s}_{a}}}, \tag{10}\]
in which \(g(s_{h},s_{a})\) is a labeling function defined as:
\[g(s_{h},s_{a})=\begin{cases}0,&s_{h}\geq s_{a}\\ 1,&s_{h}<s_{a}\end{cases} \tag{11}\]
The \(\mathcal{L}_{\text{PR}}\) is introduced to ensure that the rank order of the predicted scores satisfies the pre-annotated order. Compared to reference-free models, which inherently favor outputs from their underlying models or from those trained on similar datasets, RADE is specifically optimized to align with human intentions and effectively alleviates this bias.
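The fine-tuning objective can be written compactly as below. This is a sketch assuming the model exposes the two predicted scores and the token-level logits of the generator; variable and function names are ours, and padding handling is omitted.

```python
import torch
import torch.nn.functional as F

def task_specific_loss(s_h_hat, s_a_hat, s_h, s_a, gen_logits, ref_ids):
    """Eq. (9): MSE terms + generation NLL (Eq. 8) + pairwise ranking loss (Eqs. 10-11)."""
    mse = F.mse_loss(s_a_hat, s_a) + F.mse_loss(s_h_hat, s_h)
    gen = F.cross_entropy(gen_logits.reshape(-1, gen_logits.size(-1)), ref_ids.reshape(-1))
    g = (s_h < s_a).float()                                  # labeling function g(s_h, s_a), Eq. (11)
    log_p = torch.log_softmax(torch.stack([s_h_hat, s_a_hat], dim=-1), dim=-1)[..., 1]
    pr = -(g * log_p).mean()                                 # pairwise ranking loss, Eq. (10)
    return mse + gen + pr
```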
## 6 Experimental Setup
### Dataset and evaluation metrics
We mainly conduct experiments on the three datasets annotated in Section 4. We further evaluate the models on two existing benchmarks, USR-TopicChat and USR-PersonaChat Mehri and Eskenazi (2020), to examine the generalizability of our method. The evaluation metrics include Pearson (\(r\)), Spearman (\(\rho\)), and Kendall (\(\tau\)) correlation, which measures the linear relationship, monotonic relationship, and the ordinal association between automatic evaluation and human evaluation, respectively2. We abbreviate the Pearson, Spearman, and Kendall correlation as \(r\), \(\rho\), and \(\tau\) for simplicity.
Footnote 2: We use SciPy ([https://scipy.org/](https://scipy.org/)) to calculate the scores.
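For completeness, the three correlations referenced in footnote 2 can be computed with SciPy as follows (the numbers below are toy values, not scores from the paper):

```python
from scipy.stats import pearsonr, spearmanr, kendalltau

human_scores  = [3.0, 4.5, 2.0, 5.0, 3.5]       # toy human ratings
metric_scores = [0.42, 0.71, 0.30, 0.88, 0.55]  # toy automatic-metric scores

r, _   = pearsonr(human_scores, metric_scores)
rho, _ = spearmanr(human_scores, metric_scores)
tau, _ = kendalltau(human_scores, metric_scores)
print(f"r={r:.3f}, rho={rho:.3f}, tau={tau:.3f}")
```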
### Implementation details
We initialize the parameters of the encoder and decoder with BART Lewis et al. (2020), a Transformer-based pre-trained model. BART is well-suited to our proposed model because it is capable of both text representation tasks and text generation tasks. We optimize the model using Adam optimizer with parameters \(\beta_{1}=0.98\), \(\beta_{2}=0.97\), and the learning rate of \(5e-5\). The model is trained up to 10 epochs, and we tune the hyper-parameters and pick the checkpoint on the development set. The training of the model can be done within 5 hours using two 2080Ti GPUs. We denote the RADE model that pre-trained on cross-domain datasets as **RADE (PT)**, and the model that further finetuned on task-specific data as **RADE (TS)**.
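The stated optimizer settings translate directly into PyTorch, as sketched below. The checkpoint name is an assumption, since the paper only specifies BART, and note that \(\beta_{2}<\beta_{1}\) simply reproduces the values reported above.

```python
import torch
from transformers import BartForConditionalGeneration

model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")  # model size is an assumption
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5, betas=(0.98, 0.97))
```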
### Baselines
We compare our method with two types of baselines: reference-based and reference-free methods.
The reference-free baselines include: _DialoRPT_ (Gao et al., 2020), which is trained on large-scale social media feedback data to predict ranking-based scores; _GRADE_ (Huang et al., 2020), which enhances the contextualized representations via topic-level commonsense graphs and predicts the score using a regression module; _FED_ (Mehri and Eskenazi, 2020), an unsupervised dialogue evaluation model based on DialoGPT; _UniEval_ (Zhong et al., 2022), which evaluates the response from multiple perspectives; _QuesEval_ (Scialom et al., 2021), which evaluates fact-based text using summarization tasks.
The reference-based baselines include: _RUBER_ (Tao et al., 2018), an unsupervised evaluation metric considering the similarity of the response with the dialogue context and the reference; _BERTScore_ (Zhang et al., 2020), which employs BERT to greedily match the response and the ground truth at the token level; _BLEURT_ (Sellam et al., 2020), which is a BERT-based model pre-trained with millions of synthetic examples; _BARTScore_ (De Bruyn et al., 2020), which uses the weighted log-likelihood of the generated response as the score. We also test three reference-based lexical-level metrics: _ROUGE-L_, _BLEU-2_, and _METEOR_.
Moreover, we implement two reference-based baselines, BERT\({}_{\text{MLP}}\) and BART\({}_{\text{MLP}}\), which are trained with the same human-annotated datasets as RADE, and provide a reasonable comparison with our proposed model. Specifically, we obtain the text representations of the dialogue using BERT or BART and then feed the representations into a multi-layer perceptron to calculate the scores. For a more comprehensive analysis, we also fine-tune the two strongest baselines, QuantiDCE and GRADE, on our cross-domain datasets as well as our self-collected datasets, respectively.
## 7 Results and Analysis
### Experimental results
Overall performance.Table 3 shows the experimental performance for all methods. Overall, RADE achieves the best performance on the three benchmarks in terms of all metrics. Concretely, the pre-trained model RADE (PT) achieves better than or comparable correlations with human judgment relative to the best baseline method on the three dialogue tasks. The task-specific model RADE (TS), fine-tuned with the newly collected reference-assisted data, establishes a new state-of-the-art by improving the performance by about 30% on average compared to RADE (PT). For example, RADE (TS) achieves \(r=0.601\), \(\rho=0.569\) in the ChitChat domain, and pushes \(r\) to \(0.863\) (a \(0.314\) absolute improvement) and \(\tau\) to \(0.685\) (a \(0.287\) absolute improvement) in the EmpaDial domain. This result suggests that training with in-domain datasets is critical to enhancing the task-specific evaluation capability of RADE. For a more comprehensive comparison, we also train the two strongest baselines (QuantiDCE and GRADE) with our cross-domain and self-collected datasets, respectively. The results and analysis are provided in Appendix A.2.3.
Generalizability.We find that the performance of the reference-free methods varies dramatically across domains. For example, GRADE and QuantiDCE, trained in the chitchat domain, achieve high correlations with human judgment in ChitChat and EmpaDial but perform poorly in PersonaChat.
\begin{table}
\begin{tabular}{|l c c c c c c c c c|} \hline \hline \multirow{2}{*}{**Methods**} & \multicolumn{3}{c|}{**ChitChat**} & \multicolumn{3}{c|}{**Empathetic Dialogue**} & \multicolumn{3}{c|}{**PersonaChat**} \\ \cline{2-11} & \(r\) & \(\rho\) & \(\tau\) & \(r\) & \(\rho\) & \(\tau\) & \(r\) & \(\rho\) & \(\tau\) \\ \hline \multicolumn{11}{|l|}{_Reference-free methods_} \\ \hline FED\({}_{\text{E}}\) (Mehri and Eskenazi, 2020b) & 0.241 & 0.254 & 0.177 & 0.202 & 0.218 & 0.218 & 0.138 & 0.120 & 0.086 \\ FED\({}_{\text{U}}\) (Mehri and Eskenazi, 2020b) & 0.235 & 0.248 & 0.171 & 0.147 & 0.156 & 0.106 & 0.145 & 0.162 & 0.117 \\ QuesEval (Scialom et al., 2021) & 0.045 & 0.021 & 0.013 & 0.069 & 0.084 & 0.057 & -0.003 & 0.034 & 0.0237 \\ UniEval (Zhong et al., 2022) & 0.456 & 0.470 & 0.312 & 0.403 & 0.435 & 0.286 & 0.306 & 0.338 & 0.244 \\ DialoRPT (Gao et al., 2020a) & -0.066\({}^{*}\) & -0.044\({}^{*}\) & -0.031\({}^{*}\) & 0.267 & 0.244 & 0.166 & -0.077\({}^{*}\) & -0.069\({}^{*}\) & -0.049\({}^{*}\) \\ GRADE (Huang et al., 2020) & 0.491 & 0.434 & 0.300 & 0.549 & 0.568 & 0.398 & -0.031\({}^{*}\) & -0.005 & -0.030\({}^{*}\) \\ QuantiDCE (Ye et al., 2021b) & 0.348 & 0.300 & 0.202 & 0.498 & 0.507 & 0.351 & 0.162 & 0.182 & 0.130 \\ \hline \multicolumn{11}{|l|}{_Reference-based lexicon-level methods_} \\ \hline ROUGE-L (Lin, 2004) & 0.215 & 0.178 & 0.129 & 0.213 & 0.214 & 0.148 & 0.118 & 0.114 & 0.079 \\ BLEU-2 (Papineni et al., 2002) & 0.201 & 0.200 & 0.158 & 0.057 & 0.041\({}^{*}\) & 0.032 & 0.060 & 0.039 & 0.031 \\ METEOR (Banerjee and Lavie, 2005) & 0.202 & 0.188 & 0.129 & 0.182 & 0.194 & 0.132 & 0.099 & 0.051 & 0.035 \\ \hline \multicolumn{11}{|l|}{_Reference-based semantic-level methods_} \\ \hline BERTScore (Zhang et al., 2020b) & 0.296 & 0.243 & 0.213 & 0.167 & 0.243 & 0.173 & 0.278 & 0.292 & 0.196 \\ BARTScore (Lewis et al., 2020) & 0.133 & 0.057 & 0.039 & 0.256 & 0.253 & 0.173 & 0.143 & 0.168 & 0.115 \\ RUBER (Tao et al., 2018) & 0.332 & 0.351 & 0.369 & 0.252 & 0.256 & 0.183 & 0.122 & 0.123 & 0.089 \\ BLEURT (Sellam et al., 2020) & 0.353 & 0.363 & 0.249 & 0.343 & 0.337 & 0.232 & 0.105 & 0.140 & 0.102 \\ BERT\({}_{\text{MLP}}\)\({}^{\dagger}\)(Devlin et al., 2019) & 0.304 & 0.301 & 0.192 & 0.501 & 0.537 & 0.373 & 0.331 & 0.360 & 0.251 \\ BART\({}_{\text{MLP}}\)\({}^{\dagger}\)(Lewis et al., 2020) & 0.431 & 0.440 & 0.312 & 0.412 & 0.447 & 0.356 & 0.310 & 0.335 & 0.242 \\ \hline \multicolumn{11}{|l|}{_Reference-assisted methods_} \\ \hline
**RADE** (Pre-trained model, PT) & 0.472 & 0.491 & 0.334 & 0.650 & 0.601 & 0.427 & 0.386 & 0.390 & 0.285 \\
**RADE** (Task-specific model, TS) & **0.601** & **0.569** & **0.409** & **0.863** & **0.849** & **0.685** & **0.470** & **0.465** & **0.347** \\ \hline \hline \multicolumn{11}{|l|}{_AblationStudy_} \\ - w/o \(C_{\text{PR}}\) & 0.503 & 0.514 & 0.353 & 0.773 & 0.756 & 0.613 & 0.406 & 0.403 & 0.313 \\ - w/o \(C_{\text{GEN}}\) & 0.451 & 0.482 & 0.332 & 0.751 & 0.740 & 0.602 & 0.387 & 0.372 & 0.272 \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Results on three benchmarks. The metrics \(r\), \(\rho\), and \(\tau\) indicate Pearson's \(r\), Spearman's \(\rho\), and Kendall's \(\tau\). All values are statistically significant with p-value < 0.05 unless marked by \({}^{*}\). Methods with \({}^{\dagger}\) are implemented by ourselves. We underline the best results of each group of baseline methods and bold the best results of all methods. The bottom of the table shows the ablation study, where the proposed RADE is compared with several variants (-w/o: without). See section 7.2 for details.**
The result indicates that the contextual representation capabilities of unsupervised methods are limited by their training data and are therefore prone to data-induced bias, which decreases their performance in agnostic scenarios. In contrast, the performance gap of the proposed RADE (PT) method across different domains is relatively small. These results indicate that RADE has better generalizability than reference-free methods due to the assistance of the reference and the proposed cross-domain training strategy.
Results on USR benchmarks.We further examine our methods on two USR datasets (Mehri and Eskenazi, 2020) to verify the efficiency and robustness of RADE when generalizing to existing dialogue evaluation benchmarks. The results are listed in Table 4. Experiments show that RADE, which has not been explicitly trained on these datasets, achieves results that are better than or comparable to previous supervised methods. See Appendix A.2.4 for more results and details.
### Ablation study
We perform an ablation study to investigate the influence of different components in our methods. We examine two ablative variants: (1) w/o \(\mathcal{L}_{\text{PR}}\): we remove the ranking-based loss \(\mathcal{L}_{\text{PR}}\) to verify its effectiveness; (2) w/o \(\mathcal{L}_{\text{GEN}}\): we remove \(\mathcal{L}_{\text{GEN}}\) to verify that jointly training with the response generation task improves the correlation with human judgment.
Table 3 presents the results. Overall, the variants of our methods show a decreased performance compared to the base model. For example, Pearson drops 0.10, 0.09, and 0.07 in three benchmarks, respectively, after the \(\mathcal{L}_{\text{PR}}\) is removed. This result indicates that ranking-based loss can enhance performance by explicitly building the relation between response and reference. After removing the \(\mathcal{L}_{\text{GEN}}\), the correlation in all benchmarks has a prominent decrease, e.g., Spearman correlation drops by 0.15, 0.10, and 0.09, respectively. The results suggest that the auxiliary response generation task improves the representation capability of our method and relieves the one-to-many problem.
### Case study
Our case studies demonstrate that RADE is more consistent with human judgment than baselines. Details about our case studies are available in Appendix A.2.5.
### Qualitative analysis
To explain more intuitively, we show the scatter plots against human judgments for different automatic evaluation methods (i.e., RADE, GRADE, BERTScore, METEOR) on the EmpaDial dataset in Figure 3. As shown in Figure 3 (a), our method RADE achieves a stronger correlation with human judgment than the other methods. Figure 3 (d) illustrates that METEOR scores are zero or extremely low for most responses. This results from the one-to-many nature of open-domain dialogue, in which word overlap occurs only occasionally. Figure 3 (c) suggests that the BERTScore scores are mainly concentrated in the range of 0.3-0.6, indicating no significant differentiation between the different responses. Figure 3 (b) shows that GRADE achieves a better correlation with human judgments. However, the distribution of GRADE's predicted scores is concentrated in the high-scoring band, resulting in a low distinction between responses; RADE uses the reference as a benchmark and thus has a more balanced distribution of predicted scores.

| **Methods** | USR-Topical \(r\) | USR-Topical \(\rho\) | USR-Persona \(r\) | USR-Persona \(\rho\) |
| --- | --- | --- | --- | --- |
| GRADE | 0.200 | 0.217 | 0.358 | 0.352 |
| USR | 0.412 | 0.423 | 0.440 | 0.418 |
| USL-H | 0.322 | 0.340 | **0.495** | **0.523** |
| METEOR | 0.336 | 0.391 | 0.253 | 0.271 |
| BERTScore | 0.298 | 0.325 | 0.152 | 0.122 |
| BLEURT | 0.216 | 0.261 | 0.065 | 0.054 |
| **Ours** | **0.480** | **0.466** | 0.451 | 0.465 |

Table 4: Results on USR-TopicalChat and USR-PersonaChat (Mehri and Eskenazi, 2020).

Figure 3: Score correlation of automatic evaluation and human evaluation on the EmpaDial domain. The horizontal axis indicates the scores of the different automatic evaluation methods, and the vertical axis indicates the human rating.
## 8 Discussions
The impact of the training data scale.To explore the minimum data scale required for our method, we train RADE using different amounts of randomly sampled annotated data. We observe a minor degradation in RADE's performance as the amount of data decreases. For example, when training on 2,400 examples from the EmpatheticDialogue dataset, RADE (TS) achieves Pearson's \(r\)=0.837 and Spearman's \(\rho\)=0.829; whereas with 1,200 examples, it obtains Pearson's \(r\)=0.807 and Spearman's \(\rho\)=0.806. All results are averaged over three runs. Moreover, we find that RADE outperforms all baselines with only 800 training examples on each of the three datasets.
The difference between golden and candidate responses._Golden response_ refers to a scenario where there is only one correct response, and any different response is given a low score. For example, BERTScore calculates the cosine similarity between the golden and model-generated responses. In contrast, _candidate responses_ implies that there can be multiple correct answers, which is more flexible and human-intuitive. RADE is optimized to align with this human intuition using the generation and pairwise-ranking losses. If more references are available, RADE can consider multiple valid responses to make more reliable evaluations. To achieve this, we can concatenate the model-generated response with different references. However, due to the limitation of our datasets, we concatenate one reference and the model-generated response, which are then fed to the encoder.
Employing RADE when the reference response is not available.Considering the reference is not always available in real-world scenarios, we design two alternatives to enable RADE, i.e., constructing a pseudo-reference via retrieval or generative method. We verify the two solutions on the FED dataset and the details can be found in Appendix A.3.
## 9 Conclusion
We have presented a new reference-assisted dialogue evaluation (RADE) method to address the one-to-many problem when evaluating open-domain dialogue systems. RADE evaluates the response generated by open-domain dialogue agents with the assistance of a reference response. In addition, we have curated the reference-assisted dialogue evaluation datasets by expanding three existing datasets via a pairwise human annotation. The extended datasets contain over 10K dialogues. Extensive experiments on the three extended datasets and two existing benchmarks have verified the effectiveness and robustness of the proposed methods and their generalizability.
### Limitations
The main limitation of this paper is the need for human-labeled reference responses. We will explore automated or human-machine collaboration methods to reduce the cost of annotation in the next stage. Another limitation is that we need to explore whether other auxiliary tasks can also enhance the performance of score prediction. In the future, we also plan to reproduce the proposed method for other, less resource-rich languages.
### Ethics Statement
The paper proposes a dialogue evaluation method, which is intended to evaluate open-ended dialogue on topics such as books and movies. A new dataset is developed using some existing dialogue systems, such as DialoGPT, which are trained on large-scale web data that is known to contain biased or discriminatory content. The datasets that we trained on may also include subjective knowledge (comments on movies) that may express the bias of the writers. | |
2309.07265 | Safe and Accelerated Deep Reinforcement Learning-based O-RAN Slicing: A
Hybrid Transfer Learning Approach | The open radio access network (O-RAN) architecture supports intelligent
network control algorithms as one of its core capabilities. Data-driven
applications incorporate such algorithms to optimize radio access network (RAN)
functions via RAN intelligent controllers (RICs). Deep reinforcement learning
(DRL) algorithms are among the main approaches adopted in the O-RAN literature
to solve dynamic radio resource management problems. However, despite the
benefits introduced by the O-RAN RICs, the practical adoption of DRL algorithms
in real network deployments falls behind. This is primarily due to the slow
convergence and unstable performance exhibited by DRL agents upon deployment
and when encountering previously unseen network conditions. In this paper, we
address these challenges by proposing transfer learning (TL) as a core
component of the training and deployment workflows for the DRL-based
closed-loop control of O-RAN functionalities. To this end, we propose and
design a hybrid TL-aided approach that leverages the advantages of both policy
reuse and distillation TL methods to provide safe and accelerated convergence
in DRL-based O-RAN slicing. We conduct a thorough experiment that accommodates
multiple services, including real VR gaming traffic to reflect practical
scenarios of O-RAN slicing. We also propose and implement policy reuse and
distillation-aided DRL and non-TL-aided DRL as three separate baselines. The
proposed hybrid approach shows at least: 7.7% and 20.7% improvements in the
average initial reward value and the percentage of converged scenarios, and a
64.6% decrease in reward variance while maintaining fast convergence and
enhancing the generalizability compared with the baselines. | Ahmad M. Nagib, Hatem Abou-Zeid, Hossam S. Hassanein | 2023-09-13T18:58:34 | http://arxiv.org/abs/2309.07265v2 | Safe and Accelerated Deep Reinforcement Learning-based O-RAN Slicing: A Hybrid Transfer Learning Approach
###### Abstract
The open radio access network (O-RAN) architecture supports intelligent network control algorithms as one of its core capabilities. Data-driven applications incorporate such algorithms to optimize radio access network (RAN) functions via RAN intelligent controllers (RICs). Deep reinforcement learning (DRL) algorithms are among the main approaches adopted in the O-RAN literature to solve dynamic radio resource management problems. However, despite the benefits introduced by the O-RAN RICs, the practical adoption of DRL algorithms in real network deployments falls behind. This is primarily due to the slow convergence and unstable performance exhibited by DRL agents upon deployment and when encountering previously unseen network conditions. In this paper, we address these challenges by proposing transfer learning (TL) as a core component of the training and deployment workflows for the DRL-based closed-loop control of O-RAN functionalities. To this end, we propose and design a hybrid TL-aided approach that leverages the advantages of both policy reuse and distillation TL methods to provide _safe and accelerated_ convergence in DRL-based O-RAN slicing. We conduct a thorough experiment that accommodates multiple services, including real VR gaming traffic to reflect practical scenarios of O-RAN slicing. We also propose and implement policy reuse and distillation-aided DRL and non-TL-aided DRL as three separate baselines. The proposed hybrid approach shows at least: 7.7% and 20.7% improvements in the average initial reward value and the percentage of converged scenarios, and a 64.6% decrease in reward variance while maintaining fast convergence and enhancing the generalizability compared with the baselines.
Deep Reinforcement Learning (DRL), Transfer Learning (TL), Trustworthy DRL, Safe and Accelerated DRL, O-RAN Slicing, 6G
## I Introduction
The open radio access network (O-RAN) architecture [1] was proposed by the O-RAN alliance to support the evolution of next-generation networks (NGNs) [2]. Virtualization, openness, and intelligence are inherent properties of such an architecture. The O-RAN architecture provides open interfaces for flexible network management and automation [3]. These standardized interfaces will enable mobile network operators (MNOs) to flexibly and intelligently control various radio resource management (RRM) functionalities in a closed-loop fashion. The flexibility provided by O-RAN is essential as more customizable radio access network (RAN) products are needed to adapt to the various network scenarios and the new services offered. This also allows MNOs to dynamically change the network configurations to reflect their priorities and objectives at a given time. The O-RAN paradigm is therefore expected to bring gains in many cellular network applications, especially those that necessitate dynamic control based on the network conditions and service requirements such as network slicing [3]. In NGNs, the optimization domains and network requirements are expected to become larger and tighter respectively. This will make solving dynamic RRM problems even more complex [4].
Next-generation cellular networks have key performance indicators (KPIs) and other measurements used to quantify the performance of the various services. Examples are throughput, latency, quality of experience (QoE), quality of service (QoS), and radio channel conditions. This is seamlessly compatible with the deep reinforcement learning (DRL) feedback loop of observing the system state, taking action, and receiving rewards accordingly. DRL agents can adapt to the dynamic O-RAN environment and make quick decisions based on the available knowledge in an open-control fashion [5]. Hence, DRL algorithms are among the most promising methods to design O-RAN-compliant data-driven applications hosted by the near-real-time (near-RT) RAN intelligent controllers (RICs) [6]. Such applications are called xApps. This allows an MNO to intelligently adjust network settings to achieve an optimal RRM configuration for a given network condition.
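As a purely illustrative example of this mapping, a slicing xApp can expose the KPI-state / allocation-action / SLA-reward loop through a Gym-style interface such as the skeleton below. The state, action, and reward definitions (per-slice load and latency, PRB shares, a latency-based penalty) are placeholder assumptions for illustration and are not the slicing formulation proposed in this paper.

```python
import gym
import numpy as np
from gym import spaces

class ToySlicingEnv(gym.Env):
    """Skeletal O-RAN slicing environment; NOT the environment used in this paper."""

    def __init__(self, n_slices=3):
        super().__init__()
        self.n_slices = n_slices
        # state: per-slice traffic load and latency KPIs (placeholder choice)
        self.observation_space = spaces.Box(low=0.0, high=np.inf, shape=(2 * n_slices,), dtype=np.float32)
        # action: fraction of physical resource blocks (PRBs) granted to each slice
        self.action_space = spaces.Box(low=0.0, high=1.0, shape=(n_slices,), dtype=np.float32)

    def reset(self):
        self.state = np.random.rand(2 * self.n_slices).astype(np.float32)
        return self.state

    def step(self, action):
        share = np.asarray(action) / (np.sum(action) + 1e-8)   # normalized PRB shares
        load = self.state[: self.n_slices]
        latency = load / (share + 1e-3)                        # toy latency model
        reward = -float(latency.mean())                        # toy SLA-style reward
        self.state = np.random.rand(2 * self.n_slices).astype(np.float32)
        return self.state, reward, False, {}
```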
Despite the potential benefits introduced by the O-RAN RICs, the practical adoption of DRL algorithms in real network deployments falls behind. The main reasons for this are the slow convergence and unstable performance that the DRL agents undergo [7]. This is particularly evident when DRL-based xApps are newly deployed and when experiencing extreme situations or significant changes in the network conditions [8]. Slow convergence relates to the considerable number of time steps the DRL agent takes to find or recover optimal configurations for a given RRM functionality. Unstable performance relates to sudden drops in the O-RAN system's
performance. A certain performance level must be maintained by O-RAN systems to guarantee users' QoE and the overall system's QoS. Hence, the instabilities due to DRL exploration affect these two measurements negatively.
The training of DRL agents should be done offline initially according to the O-RAN recommendations [9]. This ensures that the trained models do not affect the performance and stability of the network. Nevertheless, the offline simulation environments are usually inaccurate and do not reflect all the situations that could be experienced in practical deployment environments. This applies even if real network data was logged and used to simulate such environments offline [10]. This is not the case in other applications such as training a DRL agent to play a computer game. Unlike O-RAN-based NGNs, the game training environment will still match the deployment environment. However, whenever an xApp is newly deployed in O-RAN's near-RT RIC, some online learning is still required by the incorporated DRL agent to adapt to the live network environment [11].
Moreover, learning is needed whenever the agent experiences extreme cases or when the network context changes significantly [12]. In both situations, some exploration may be required, while the DRL agent recovers, to avoid affecting the performance of the available services substantially. Convergence needs to be quick and stable so that the end user's QoE is not affected. Nonetheless, it may take thousands of learning steps to recover, given the stochasticity of NGN O-RAN systems and the exploratory nature of the DRL-based xApps. This is of great significance in live network deployments. NGNs can only tolerate a few iterations of stable exploration while optimizing near-RT O-RAN functionalities [7].
The approaches used to tackle such challenges are known as _safe and accelerated_ DRL techniques as defined in [8] and [13]. Such techniques attempt to reduce service level agreements (SLAs) violations and to avoid any system performance instabilities. They also aim to shorten the exploration and recovery duration of DRL-based xApps in O-RAN. These approaches can help pave the way for adopting trustworthy DRL to optimize dynamic RRM functionalities in O-RAN. Transfer learning (TL) is among the main methods used to address the DRL-related practical challenges mentioned earlier [14, 15]. TL can be used to guide a newly deployed DRL-based xApp while learning the optimal policy in network conditions it has not experienced before. A policy learned by a previously trained DRL agent can be used as a guide in such a case. This can be done in several ways as demonstrated in this study.
In this paper, we address the challenge of slow and unstable DRL convergence in the context of O-RAN slicing. To the best of our knowledge, this is the first work to propose TL as a core component for _safe and accelerated_ DRL-based xApps in O-RAN, and more specifically in the closed-loop control of O-RAN slicing. Our contributions can be summarized as follows:
* We propose to incorporate TL as a core component of DRL-based control of network functionalities in the O-RAN architecture. We propose training and deployment workflows in the non-real-time (non-RT) and near-RT RICs respectively. TL tackles the challenges of slow and unstable DRL convergence by reusing knowledge from pre-trained expert policies. The proposed flows aim at enhancing the convergence and generalizability of O-RAN DRL-based xApps. They accommodate the difference between offline training and live deployment environments. They also accommodate significant changes in the network conditions.
* We propose a hybrid TL-aided DRL approach for _safe and accelerated_ convergence of DRL-based O-RAN slicing xApps. The proposed approach combines policy reuse and distillation TL methods to guide the DRL agent's convergence and strike a balance between deterministic and directed exploratory actions. We also propose policy reuse and distillation as two separate TL-aided DRL baselines in addition to the non-TL-aided DRL baseline.
* We conduct a thorough study on intelligent O-RAN slicing to demonstrate the gains of the proposed hybrid TL-aided DRL approach. We analyze the reward convergence behavior of the proposed approach and baselines. We then evaluate their safety and acceleration aspects. We finally investigate the effect of the introduced parameter, \(\gamma\), which controls the TL method to be used, on the performance of the proposed hybrid approach. Our approach shows at least a 7.7% improvement in the average initial reward value, a 20.7% improvement in the percentage of converged scenarios, and a 64.6% decrease in reward variance, while maintaining fast convergence and enhancing generalizability compared with the baselines.
* Our experiments support multiple services, including real VR gaming traffic to reflect immersive scenarios of O-RAN slicing in NGNs. We develop and publicly share the implementation of our OpenAI Gym-compatible DRL environment, and the proposed approach and baselines to facilitate research on trustworthy DRL in O-RAN.
The rest of the paper is structured as follows: Section II discusses the related work. Section III details the system model. The proposed O-RAN workflows, hybrid approach, and baselines are described in Section IV. Section V includes the experimental setup and an analysis of the results. Finally, we conclude our work and present potential future directions in Section VI.
## II Related Work
The work in [16] is amongst the earliest research to consider slicing from an O-RAN perspective. The authors promote using DRL as one of several machine learning (ML) schemes to optimize O-RAN slicing. They highlight that DRL provides faster convergence when compared with tabular reinforcement learning (RL) given large state and action spaces. The authors of [17] and [18] employ the concept of federated reinforcement learning (FRL) in the context of O-RAN slicing. In [17], the authors suggest that the global model built using FRL learns to generalize at a slow rate. However, it achieves relatively more robust training by leveraging the shared experience. In [18], the authors employ the knowledge gained in one network application and share it to solve a slightly different
problem in another application. This is done by coordinating power control and radio resource allocation xApps for network slicing in O-RAN. A joint global model is created and then disassembled into local Q-tables for the different xApps to follow when deciding the actions to take. The authors indicate that FRL can enable faster convergence but with relatively lower rewards. Nevertheless, a global generic model can still be prone to instabilities and require some exploration when transferred to make decisions in a target local context.
The concept of _safe and accelerated_ DRL is partially addressed by a few research studies in the context of O-RAN. In [8], we discussed the need for and categorized the approaches to _safe and accelerated_ DRL in NGNs. We also examined the viability of TL variants to accelerate DRL-based RAN slicing using a basic case study. In [19], the problem of resource allocation in O-RAN slicing is addressed. The authors mainly rely on the inherent properties of the DRL algorithms when choosing one to employ. For instance, they mention that the implemented actor-critic (AC)-based solution can produce a faster convergence compared to the proximal policy optimization (PPO)-based one as the off-policy model has relatively improved sample efficiency. However, they show that the AC-based solution suffers from failed exploration at the beginning and a lower reward value in general. Moreover, both DRL-based solutions can take around 20 thousand learning steps to converge with no guarantee of fast and stable performance if deployed in the near-RT RIC of O-RAN. Hence, robust performance cannot be ensured solely based on the inherent properties of the DRL algorithm chosen.
In [20], O-RAN slicing is one of the three developed exemplary xApps. The authors address some convergence-related issues by choosing PPO as it proved to be reliable and efficient in the literature. They also propose pre-processing the observations using autoencoders before feeding them into the DRL agent. This reduces the dimensionality of the observations yet retains a good representation of the system state. Moreover, in [21], the authors show that the proposed deep Q-network (DQN)-based algorithm experiences relatively slow convergence on the communication level of slicing compared with the computational level. They indicate that this is primarily due to the mobility of end devices. Consequently, the channel gains between the base stations (BSs) and the end devices change frequently, delaying the convergence. More relevantly, the authors of [22] attempt to accelerate the convergence by proposing an evolutionary-based DRL approach in the context of O-RAN slicing. However, it appears that the training and deployment flows are not isolated, meaning that the DRL training is carried out in the near-RT RIC. Online training is not recommended by the O-RAN alliance as it comes with risks such as the requirement for exploration [11]. Furthermore, the costs of online training are not demonstrated.
The authors of [23] propose a centralized DRL-based approach for dynamic RAN slicing. They run simulations using different combinations of parameters to find those leading to short convergence times. This is mainly done for the offline training process and still falls under the "acceleration via design choices" category defined in [8]. They also propose a non-DRL-based crowding game approach that experiences relatively faster convergence in the offline training phase. Nonetheless, if deployed in the near-RT RIC, additional time is required for the crowding game approach to compute the cost of each strategy and make a decision as it is based on the users' KPIs. Finally, in [11], parallelization is suggested as one of the design approaches for reducing the convergence time. This is mainly done by utilizing several environments in parallel. This can reduce the real clock time of DRL agents' convergence during offline training. However, this is not supported during the deployment phase in real networks.
Different from the aforementioned reviewed work, in this paper, we propose isolated systematic training-deployment workflows that consider the O-RAN recommendations. Such flows incorporate expert policies pre-trained on the same problem, and fine-tuned using live network data, to guide the DRL convergence. The flows are algorithm-agnostic and should work with any DRL algorithms or settings configured by the MNO. Furthermore, the proposed policy transfer-aided DRL approach and baselines are primarily concerned with the practical live deployment in O-RAN's near-RT RIC. Finally, the proposed hybrid TL-aided DRL approach ensures that the maximum level of rewards is reached safely and quickly. We demonstrate this using real VR gaming network traffic to reflect an important immersive and latency-intolerant scenario in NGNs [24].
## III Deep Reinforcement Learning-based O-RAN Slicing
### _O-RAN Intelligent Controllers_
O-RAN will give MNOs more control over the network. For instance, the O-RAN-based NGNs will include generic modules and interfaces for data collection, distribution, and processing [3]. This will enable data-driven closed-loop control using ML as a core component of the network operation [25]. Such data-driven applications can be deployed on two levels, namely, near-RT RIC, and non-RT RIC. They are called xApps when deployed on the near-RT RIC and rApps on the non-RT RIC [9]. xApps interact with the RAN nodes via the E2 interface. The E2 interface includes the service model (SM) component that helps xApps fulfill the various RRM functionalities. This can be accomplished through a standardized interaction between xApps and the virtualized O-RAN infrastructure as shown in Fig. 1.
NGNs will be heterogeneous on multiple levels. This includes radio access technologies (RATs), communication paradigms, and cell and user equipment (UE) types. O-RAN slicing is one of the paradigms that enable extra flexibility for MNOs. It allows for supporting a wide range of use cases and deployment scenarios simultaneously [26]. This happens while sharing the same infrastructure among various services in a way that fulfills their different requirements. Hence, MNOs are required to optimize a myriad of network functionalities that operate at different timescales and have different goals [4]. For instance, admission control, packet scheduling, and handover management are examples of network functionalities that require the efficient utilization of scarce radio resources [5]. This process is called RRM [7] and a common approach to
managing such limited radio resources is to build a DRL-based O-RAN xApp. As illustrated in Fig. 1, a DRL agent observes the state of the O-RAN environment and takes action with the objective of maximizing its rewards. This is measured in terms of network KPIs relevant to the functionality at hand and this process is carried out in an open-control fashion.
O-RAN standardized open interfaces enable MNOs to collect live data from the RAN to optimize network performance. This vital paradigm will be more feasible on a larger scale in future 6G networks. Such O-RAN interfaces allow the consultation of DRL agents to select the best actions to optimize a network function given a network condition as highlighted in Fig. 1.
### _System Model_
In this paper, we focus on the downlink O-RAN inter-slice resource allocation problem. This belongs to the radio access level of network slicing as defined in [26]. The objective is to allocate the available physical resource blocks (PRBs) to the admitted slices while fulfilling their SLAs. A list of some notations used in this paper is provided in Table I. The inter-slice radio resource allocation problem can be formulated as follows [7, 27]:
Any given BS supports multiple services that are reflected by a set of slices, \(\mathcal{S}=\{1,2,\ldots,S\}\), that share the available bandwidth, \(B\). We consider a set of UEs, \(\mathcal{U}=\{1,2,\ldots,U\}\), connected to a BS. Each UE, \(u\), can request one type of service at a time for downlink transmission. A slice, \(s\), has a set of requests, \(\mathcal{R}_{s}=\{1,2,\ldots,R_{s}\}\), where \(R_{s}\) is the number of requests made by users belonging to a slice \(s\). The total demand, \(D_{s}\), of such users can be represented as follows:
\[D_{s}=\sum_{r_{s}\in R_{s}}d_{r_{s}}, \tag{1}\]
where \(d_{r_{s}}\) is the demand of a request, \(r_{s}\), made by a user belonging to slice \(s\). Moreover, any given slice, \(s\), contributes to the overall BS's traffic as follows:
\[\kappa_{s}=\frac{D_{s}}{\sum_{i=1}^{|S|}D_{i}} \tag{2}\]
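To make the mapping concrete, the following minimal Python sketch (illustrative only, not part of the authors' implementation) computes the per-slice demand \(D_{s}\) of Eq. (1) and the traffic contribution \(\kappa_{s}\) of Eq. (2) from the requests observed in one slicing window.

```python
from typing import Dict, List

def slice_contributions(requests: Dict[int, List[float]]) -> Dict[int, float]:
    """requests maps slice id s -> list of request demands d_{r_s} within one window."""
    demand = {s: sum(d) for s, d in requests.items()}            # Eq. (1): D_s
    total = sum(demand.values())
    # Eq. (2): kappa_s = D_s / sum_i D_i (guarding against an empty window)
    return {s: (demand[s] / total if total > 0 else 0.0) for s in demand}

# Example with three slices (e.g., VoNR, video, VR gaming) in one slicing window
kappa = slice_contributions({1: [40.0, 40.0], 2: [100.0, 250.0], 3: [1500.0]})
```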
The allocation of PRBs among the available slices, \(S\), needs to be optimized. This can be described by the vector, \(a\in\mathrm{I\!R}^{\mathrm{S}}\). At the beginning of any slicing window, an O-RAN slicing xApp decides to choose a specific slicing PRB allocation configuration, \(a\), out of the \(A\) possible configurations, where \(\mathcal{A}=\{1,2,\ldots,A\}\). Based on such a decision, the system performance is affected. For the purpose of this paper, the system performance is represented in terms of the latency of the admitted slices. This mainly depends on a queue maintained at the BS.
### _O-RAN Slicing: Mapping to Deep Reinforcement Learning_
A DRL-based xApp's objective is to maximize the long-term reward expectation, that is,
\[\underset{a}{\text{argmax}}\ \mathbb{E}\{R(\mathbf{a},\mathbf{\kappa})\}, \tag{3}\]
where \(\mathbb{E}(\cdot)\) represents the expectation of the argument. This allows us to learn a policy, \(\pi\), that takes a state, \(\kappa\in\mathcal{K}\), as input, and outputs an action, \(a=\pi(\kappa)\in\mathcal{A}\). The main challenge in solving (3) is the varying demand over time.
Fig. 1: Block diagram of the DRL-based slicing xApp interaction with the O-RAN environment.
\begin{table}
\begin{tabular}{|c|l|} \hline
**Symbol** & **Description** \\ \hline \(S\) & Number of available slices \\ \hline \(B\) & Available bandwidth shared among slices \\ \hline \(U\) & Number of available UEs \\ \hline \(R_{s}\) & Number of requests made by users belonging to a slice \(s\) \\ \hline \(D_{s}\) & Total demand of users belonging to a slice \(s\) \\ \hline \(d_{r_{s}}\) & Demand of a request \(r_{s}\) made by a user of slice \(s\) \\ \hline \(\kappa_{s}\) & Contribution of slice \(s\) to overall BS’s traffic \\ \hline \(\mathcal{A}\) & Set of available slicing PRB allocation configurations \\ \hline \(a\) & A given slicing PRB allocation configuration \\ \hline \(b_{s}\) & Bandwidth allocated to slice \(s\) \\ \hline \(R\) & Reward function \\ \hline \(w_{s}\) & Priority of fulfilling the latency requirement of slice \(s\) \\ \hline \(\Omega\) & Slicing window size \\ \hline \(l_{s}\) & Average latency in the previous slicing window for slice \(s\) \\ \hline \(\pi\) & RL agent’s policy \\ \hline \(\pi_{E}\) & Expert policy \\ \hline \(\pi_{L}\) & Learner policy \\ \hline \(\mathcal{P}_{E}\) & Set of stored expert policies \\ \hline \(c_{1}\) & A sigmoid function parameter to decide the point to start penalizing the agent’s actions \\ \hline \(c_{2}\) & A sigmoid function parameter to reflect the acceptable latency for each slice \\ \hline \(\mathcal{M}\) & Source domain \\ \hline \(T\) & Knowledge transfer duration \\ \hline \(\theta\) & Transfer rate which decides whether to follow the transferred knowledge or the learner policy \\ \hline \(\nu\) & Transfer rate decay \\ \hline \(\gamma\) & Hybrid approach parameter which decides the policy transfer method to follow \\ \hline \end{tabular}
\end{table} TABLE I: List of Notations
To find the optimal solution, an exhaustive search can be performed, considering all possible allocations at the start of each slicing window and recording the resulting system performance. However, this approach is both computationally expensive and practically infeasible. Therefore, DRL provides a viable alternative for solving the problem. We describe our DRL design in the following subsections.
#### III-C1 State Representation
As seen in Fig. 1, the slicing xApp deployed in the near-RT RIC begins with observing the system state. We represent the state of the O-RAN system in terms of the slices' contribution to the overall BS's traffic within the preceding slicing window, \(\Omega_{t-1}\). This can be reflected by a vector of size \(S\) as follows:
\[\kappa=(\kappa_{1},...,\kappa_{s},...,\kappa_{S}) \tag{4}\]
#### III-C2 Action Space
Based on the observed state, the xApp takes an action at the beginning of each slicing window. It selects the PRB allocation configuration per slice. We represent it as the percentage of bandwidth allocated to each slice as follows:
\[a=(b_{1},...,b_{s},...,b_{S}),\text{ subject to }b_{1}+...+b_{S}=B \tag{5}\]
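As a hedged illustration of this design (not the paper's released code; names and granularities are placeholders), the snippet below builds the state vector of Eq. (4) and enumerates a discretized action space consistent with Eq. (5), where each action splits the available PRBs among the slices in fixed steps.

```python
import itertools
import numpy as np

def build_action_space(num_slices: int, total_prbs: int, step: int):
    """All allocations (b_1, ..., b_S) in multiples of `step` summing to total_prbs."""
    levels = range(0, total_prbs + 1, step)
    return [np.array(a) for a in itertools.product(levels, repeat=num_slices)
            if sum(a) == total_prbs]

# State of Eq. (4): each slice's share of the BS traffic in the previous window
state = np.array([0.1, 0.3, 0.6])
# Action space of Eq. (5): e.g., 3 slices, 100 PRBs, 10-PRB granularity (66 actions)
actions = build_action_space(num_slices=3, total_prbs=100, step=10)
```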
#### III-C3 Reward Function Design
After taking the action, the DRL-based xApp receives reward feedback in terms of network KPIs calculated at the end of every slicing window. In this paper, we define rewards as a function of latency because we prioritize the delay-intolerant VR gaming service and for better results' interpretability.
Safe RL can be defined as the process of learning policies that maximize the expectation of the return to ensure reasonable system performance or respect safety constraints [13]. This can be during the learning or deployment processes. Hence, in this paper, safety can be described as having a reasonable latency performance during the deployment of DRL-based O-RAN slicing xApps. Safe RL can reduce or prevent undesirable situations through 1) transforming the optimization criterion, or 2) modifying the exploration process of the RL agent [13]. In this paper, we design a risk-sensitive reward function. In risk-sensitive approaches, the optimization criterion is changed to include a parameter that allows the sensitivity to the risk to be controlled [13]. Thus, we employ a sigmoid-based [28] reward function that includes parameters to reflect the acceptable latency for each slice. This enables penalizing the xApp for undesirable actions that get the system close to violating the defined latency requirements of each slice.
The reward function defined in this study reflects a weighted sum of an inverse form of latency. It allows more control over the effect of getting closer to the minimum acceptable level of each slice's SLAs as follows:
\[R=\sum_{s=1}^{|S|}w_{s}\,\frac{1}{1+e^{\,c_{1,s}\,(l_{s}-c_{2,s})}} \tag{6}\]
Since we focus on the delay requirements of the different services, we use latency as a variable. The weight, \(w_{s}\), reflects the priority of fulfilling the latency requirement of slice \(s\), and \(l_{s}\) is the average latency experienced within slice \(s\) during the previous slicing window. The function's effect can be adjusted by configuring two parameters, namely \(c_{1}\) and \(c_{2}\) as seen in Fig. 2. The parameter \(c_{1}\) sets the slope for the sigmoid function, thereby indicating when penalties should start being applied to the agent's actions. On the other hand, \(c_{2}\) represents the inflection point. Such a point reflects the minimum acceptable delay performance for each slice according to its respective SLAs. Different constant values of \(c_{1}\) and \(c_{2}\) are utilized for the slices based on the defined SLAs.
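A minimal sketch of this reward, assuming per-slice parameter vectors (the numbers below are illustrative placeholders, not the paper's configuration):

```python
import numpy as np

def slicing_reward(latency_ms, weights, c1, c2):
    """Per-slice arrays; returns the scalar reward R of Eq. (6)."""
    latency_ms, weights, c1, c2 = map(np.asarray, (latency_ms, weights, c1, c2))
    return float(np.sum(weights / (1.0 + np.exp(c1 * (latency_ms - c2)))))

# Example: VR gaming (first slice) weighted highest and given the tightest budget
r = slicing_reward(latency_ms=[8.0, 15.0, 40.0],
                   weights=[0.5, 0.3, 0.2],
                   c1=[1.0, 0.5, 0.2],
                   c2=[10.0, 20.0, 50.0])
```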
## IV Transfer Learning for Safe and Accelerated DRL-based O-RAN Slicing
We propose to modify the DRL exploration process to avoid risky situations. We do so by including prior knowledge of the learning task by exploiting expert pre-trained policies to allow for a faster and safer DRL exploration setting [13]. We propose to incorporate transfer learning as a core component of the training-deployment workflows of DRL-based xApps in the O-RAN architecture. In this section, we first describe the proposed training and deployment flows in the context of O-RAN. Then, we present the developed baselines and the proposed hybrid TL-aided DRL approach.
### _Training and Deployment Flows in Policy Transfer-Aided O-RAN Architecture_
The DRL training is normally carried out using a simulated offline environment. Hence, when the DRL agent is deployed in a live network, there will be a performance gap leading to undesired exploration performance [5]. This also happens when the context of the network changes significantly, for instance, when the number and type of the available slices change. We propose training and deployment workflows for DRL-based xApps in the O-RAN architecture. These flows address the challenges of slow and unstable DRL convergence in O-RAN-based NGNs. The DRL training and deployment workflows are proposed to be hosted in the O-RAN non-RT and near-RT RICs respectively. They aim to enhance the DRL convergence and generalizability in the context of the O-RAN architecture. They also provide the readers with insights on how to overcome key challenges when deploying an O-RAN DRL-based xApp for network slicing, and RRM in general.
Fig. 2: An example of the reward function: \(c_{1}\) decides the point to start penalizing the agent’s actions; and \(c_{2}\) reflects the acceptable latency for each slice.
#### IV-A1 Training Workflow
The proposed O-RAN training workflow makes use of real network data collected at the non-RT-RIC to train the policy of a learner agent planned for deployment as seen in Fig. 3. It does not rely on pure offline simulations or mathematical models in training. Nevertheless, the DRL agent training is still carried out in the non-RT RIC according to O-RAN alliance recommendations as seen in the figure [20]. The O1 interface is employed to collect data every \(\Omega\) seconds, reflecting the slicing window size. The data collected represents the relevant network measurements during a slicing window. This includes but is not limited to, throughput, delay, number of available slices, types of services supported, and traffic load for each slice. The number of PRBs allocated to each slice should also be logged. The compiled data mainly reflect the system state, action taken, and reward parameters. This allows building offline simulations to train a DRL agent in the non-RT RIC using such data. However, this data does not guarantee that all the state-action pairs are represented. Hence, the training environment still does not reflect all the cases that a DRL agent can experience if deployed in an xApp in the near-RT RIC.
In the training phase, several DRL agents are trained using the collected data to reflect various contexts. The DRL agent's actions are taken based on the system's state. Rewards are calculated from the collected KPIs that correspond to the logged state-action pairs. This is done until the agents being trained converge. The compiled data should reflect BSs having different contexts and properties. As an initial step, the MNO can store a set of expert policies, \(\mathcal{P}_{\text{E}}=\{1,2,\dots,\pi_{\text{E}},\dots,\Pi_{\text{E}}\}\), that result in good convergence performance for the various contexts during the training process. Subsequently, they will be loaded to guide the convergence of other DRL agents via policy transfer. Such a set of policies should also be updated based on the policies' performance after being fine-tuned in a live network setting.
As proposed in the next sub-section, policy transfer is carried out whenever a new DRL-based xApp is deployed or when the BS context changes. The context can be in the form of the number of slices, types of services supported by the BS, and the MNO's SLA fulfillment priorities at a given time. An approach to choosing the right policy for a given context is another research direction on efficient policy transfer [29].
#### IV-A2 Deployment Workflow
Given a live network context, a policy of a trained DRL agent is deployed as an xApp. Such a DRL-based xApp will still experience some exploration due to the difference between the training and deployment environments [11]. Upon the termination of the training phase, the xApp loads the proper policy from the policy directory in the non-RT RIC via the A1 interface as highlighted in Fig. 3. The policy is loaded based on the context of the BS to be controlled. Such a policy is used to guide the DRL agent of a newly deployed xApp. This also allows the policies to get fine-tuned using live network data. Once they prove to meet the various slices' SLAs in certain network contexts, the policy directory should be updated.
In addition to being newly deployed, the xApp may also experience extreme conditions that were not reflected in the training data. The proposed deployment flow reuses existing knowledge from saved expert policies that were proven to provide reasonable performance in certain live network contexts. This accommodates the difference between the training data and the actual live network conditions. This also accommodates significant changes in the network context. The expert policies are used as guidance for the current agent while trying to recover instead of randomly exploring the action space.
Fig. 3: Block diagram of the policy transfer-guided O-RAN system architecture.
Furthermore, an MNO may decide to change some DRL-related configurations. The MNO can do so through the open interfaces supported by O-RAN. For instance, the MNO can reconfigure the slicing window size, state representation, action space, or reward function. As an example, the MNO may decide to modify the weights of the utilized reward function. Such modifications can reflect a change in the MNO's priorities of fulfilling the SLAs of the different network slices. In both situations of extreme conditions and MNO reconfigurations, a new policy can be loaded from the policy directory to match the latest context of the BSs of interest. One or more policies can be loaded at once depending on the policy transfer configurations set by the MNO. Such configurations are inputted to the O-RAN slicing xApp via the A1 interface as seen in Fig. 3. Upon deployment, the recommended action is decided based on the system state, and the previously mentioned MNO configurations. This should be done within the time range of the near-RT RIC [23]. The xApp then executes the action taken via the E2 interface to allocate resources among the available slices for the duration of \(\Omega\) seconds. Accordingly, scheduling is carried out per slice based on the scheduling algorithm configured by the MNO. Then, the state of the system per slicing window is captured based on the state representation chosen by the MNO. Finally, the DRL-based xApp's policy is updated depending on the selected DRL algorithm and other settings such as buffer size, and learning rate.
Loading a new policy to guide the DRL-based xApp can also be triggered by the reward feedback. For instance, if the reward value drops below a pre-defined threshold value for some time, this may indicate that the context has significantly changed. Hence, the used DRL agent's policy needs an update. This gives an example of how to identify significant changes in the network context. This will consequently lead to incorporating a new expert policy to guide the DRL agent toward convergence. The questions of how to identify significant changes in network conditions, when to load a new policy, and which policy to use for TL-aided DRL slicing are not the focus of this paper. However, we conducted another study to address a subset of these topics [29]. The loaded policy can guide the DRL-based xApp in several ways. This should be decided by the MNO as O-RAN supports customizing the network based on the MNO's preferences. We propose and evaluate the performance of three transfer learning-based approaches in guiding the DRL-based xApps in the following sub-sections.
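For illustration only, a simple realization of this reward-feedback trigger could look like the following sketch; the class name, threshold, and window size are assumptions rather than values from the paper.

```python
from collections import deque

class PolicyTransferTrigger:
    """Signal a new policy transfer when recent rewards stay below a threshold."""

    def __init__(self, threshold: float, window: int = 20):
        self.threshold = threshold
        self.rewards = deque(maxlen=window)

    def update(self, reward: float) -> bool:
        """Returns True when a new policy should be loaded from the policy directory."""
        self.rewards.append(reward)
        full = len(self.rewards) == self.rewards.maxlen
        return full and (sum(self.rewards) / len(self.rewards)) < self.threshold
```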
### _Policy Transfer Baseline Approaches_
In this paper, we propose to employ TL to address the challenge of slow convergence and lack of generalizability of DRL-based xApps. TL can also indirectly tackle the instabilities experienced during convergence. By modifying the exploration process, TL-aided DRL can avoid risky situations by receiving guidance based on prior knowledge. The difference between one type of TL and another mainly depends on the form of knowledge to be transferred. Policy transfer is a type of TL where policies of pre-trained DRL agents are transferred from one or more source domains to guide a newly deployed DRL agent's policy. In this subsection, we first propose to employ two variants of policy transfer, namely, policy reuse and distillation as baselines. We then propose a novel policy transfer method which is a hybrid of such two approaches to achieve an improved convergence performance.
#### Preliminaries
In general, policy transfer can be carried out via directly reusing expert policies to guide a target learner agent. Alternatively, this can be done via distilling previously acquired knowledge. This can be obtained from the target learner's perspective, or from the source expert's perspective [30]. In this paper, we focus on policy distillation from the expert perspective. The knowledge transferred in both policy reuse and distillation approaches is the same. Here, the policy of one or more pre-trained DRL agents is used to guide a newly deployed DRL-based xApp. The main difference between the two approaches is how the transferred policies are used to guide the newly deployed agent to take action. Given a source expert policy \(\pi_{E}\) trained on data from a source domain \(\mathcal{M}\), a learner policy \(\pi_{L}\) is trained on data from a target domain guided by the knowledge obtained from \(\{\pi_{E}\}\). When more than one source expert policy is used, a more generic case can be described as follows [30]: given a set of source policies \(\pi_{E_{1}},\pi_{E_{2}},\ldots,\pi_{E_{P}}\) trained on data from a set of source domains \(\mathcal{M}_{1},\mathcal{M}_{2},\ldots,\mathcal{M}_{K}\), a learner policy \(\pi_{L}\) is trained on data from a target domain by making use of knowledge from \(\{\pi_{E_{i}}\}_{i=1}^{P}\).
#### IV-B1 Policy Reuse
The first policy transfer technique that we propose to employ for accelerating the DRL-based slicing xApp is known as policy reuse. This can be done in several ways [30]. In this paper, we propose to carry out policy reuse as described in Algorithm 1.
```
Input: \(\pi_{E}\), MNO configurations, \(\kappa\)
Parameters: \(\theta\), \(T\), \(\Omega\), buffer size, \(\beta\), transfer rate decay, \(\nu\)
Output: PRB Allocation per slice
1: Load the appropriate pre-trained expert policy from the stored policies \(\mathcal{P}_{\text{E}}\)
2: Initialize the learner action value function with random weights or from a policy pre-trained using a significantly different traffic pattern
3:if\(t<T\)do:
4: Generate a random number \(x\), where \(0\leq x\leq 1\)
5:if\(x\leq\theta\)do:
6: Consult the expert policy
7: Choose an action according to Theorem (1)
8:elseif\(x>\theta\)do:
9: Choose an action according to the learner policy
10:endif
11:elseif\(t\geq T\)do:
12: Choose an action according to the learner agent's policy
13:endif
14: The DRL-based xApp acts based on the action recommended in the previous algorithm steps, \(a_{t}\)
15:Allocate PRBs to the available slices according to \(a_{t}\)
16: Execute scheduling within each slice
17: Calculate reward \(R\) using (6)
18: Update the learner agent's policy based on the reward received every \(\beta\) step
19:\(t\gets t+1\)
20:\(\theta\leftarrow\theta\) * \(\nu\)
```
**Algorithm 1** Proposed Policy Reuse Approach
One or more source expert policies are first trained and
fine-tuned as defined in Section IV-A. Then, an expert policy is directly reused to guide the target policy of a learner DRL agent of a newly deployed xApp [30, 14]. This should also happen when the xApp experiences a significant change in network conditions. The learner agent is configured to consult the expert policy and follow its recommended actions given a state. This happens for \(T\) time steps, namely transfer duration. Meanwhile, the target learner policy is continuously updated based on the reward feedback the learner agent receives. The expert policy is deterministic and is not updated.
We employ the concept of transfer rate similar to [31]. This gives the newly deployed agent the flexibility to not rely fully on the expert policy but also consult the learner policy being trained. This is particularly important since, although the expert policy is trained using real network data, the granularity of such data and its generality constraints make the expert policy limited. Having a transfer rate enables consulting both the source and target policies based on a parameter \(\theta\) configured by the MNO where \(\pi=(1-\theta)\pi_{L}+\theta\pi_{E}\). If \(\theta=1\), this indicates that the action recommended by the expert policy is always taken during the first \(T\) time steps after deployment. Then, the actions recommended by the updated learner policy are followed afterward. However, a smaller or different decaying transfer rate can be configured to switch between the source expert and target learner policies during exploration. For instance, upon deploying the O-RAN's xApp, \(\pi_{E}\) is expected to perform better than \(\pi_{L}\). The newly deployed DRL agent's policy may be more uncertain than the expert policy given a specific network context. However, as time passes, \(\pi_{L}\) gradually becomes more adapted to the real network environment compared to the source expert policy. Thus, a decaying transfer rate is proposed so that the target learner policy takes more control as it approaches \(T\) time steps.
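The action-selection core of Algorithm 1 with the decaying transfer rate can be sketched as follows; `expert_policy` and `learner_policy` stand for whatever policy objects the MNO's DRL library exposes, so this is illustrative rather than a drop-in implementation.

```python
import random

def policy_reuse_action(state, expert_policy, learner_policy, t, T, theta):
    """Algorithm 1 core: follow the expert with probability theta during transfer."""
    if t < T and random.random() <= theta:
        return expert_policy(state)      # reuse the pre-trained expert policy
    return learner_policy(state)         # follow the learner policy being fine-tuned

# After every slicing window the learner is updated, t is incremented, and the
# transfer rate decays (theta <- theta * nu) so the learner gradually takes over.
```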
If more than one expert policy is used, policy reuse can be in the form of a weighted combination of these source policies. Here, for a given state, the xApp greedily picks the action with the highest reward, from all the available policies. This is referred to as the generalized policy improvement theorem for policy reuse of one or more source policies. It can be represented as follows [32]:
**Theorem 1** (Generalized Policy Improvement): _Let \(\left\{\pi_{i}\right\}_{i=1}^{n}\) be \(n\) policies and let \(\left\{\hat{Q}^{\pi_{i}}\right\}_{i=1}^{n}\) be their approximated action-value functions, s.t: \(\left|Q^{\pi_{i}}(\kappa,a)-\hat{Q}^{\pi_{i}}(\kappa,a)\right|\leq\epsilon \forall\kappa\in\mathcal{K},a\in\mathcal{A}\), and \(i\in[n]\). Define \(\pi(\kappa)=\arg\max\limits_{a}\max\limits_{i}\hat{Q}^{\pi_{i}}(\kappa,a)\), then: \(Q^{\pi}(\kappa,a)\geq\max\limits_{i}\hat{Q}^{\pi_{i}}(\kappa,a)-\frac{2}{1- \lambda}\epsilon,\forall\kappa\in\mathcal{K},a\in\mathcal{A}\), where \(\lambda\) is a discounted factor, \(\lambda\in(0,1]\)._
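In code, the rule of Theorem 1 amounts to a nested maximization over the approximated action values of the available source policies; the sketch below assumes each element of `q_functions` maps a (state, action) pair to a scalar estimate.

```python
def gpi_action(state, actions, q_functions):
    """Generalized policy improvement: argmax over actions of the best Q across policies."""
    return max(actions, key=lambda a: max(q(state, a) for q in q_functions))
```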
#### IV-B2 Policy Distillation
In policy distillation, one or more source policies are used to guide a target learner policy. This is done by minimizing the divergence of action distributions between the source expert policy \(\pi_{E}\) and target learner policy \(\pi_{L}\), which can be written as \(\mathcal{H}^{\times}\left(\pi_{E}\left(\tau_{t}\right)\mid\pi_{L}\left(\tau_{ t}\right)\right)\)[30]:
\[\min_{L}\mathbb{E}_{\tau\sim\pi_{E}}\left[\sum_{t=1}^{|\tau|}\nabla_{L} \mathcal{H}^{\times}\left(\pi_{E}\left(\tau_{t}\right)\mid\pi_{L}\left(\tau_{ t}\right)\right)\right] \tag{7}\]
where this reflects an expectation that is taken over trajectories, \(\tau\), sampled from the source expert policy \(\pi_{E}\). In expert distillation approaches, \(N\) expert policies are individually learned for \(N\) source tasks. Consequently, each expert policy results in a dataset \(D^{E}=\left\{\kappa_{i},\mathbf{q}_{i}\right\}_{i=0}^{N}\). Such datasets are mainly comprised of states \(\kappa\) and action values \(\mathbf{q}\), such that
\[\mathbf{q}_{i}=\left[Q\left(\kappa_{i},a_{1}\right),Q\left(\kappa_{i},a_{2}\right), \ldots\mid a_{j}\in\mathcal{A}\right] \tag{8}\]
Finally, expert policies should be distilled into one policy. As mentioned before, this can be done by minimizing the divergence between each expert policy \(\pi_{E_{i}}(a\mid\kappa)\) and the learner policy \(\pi_{L}\). One example is the KL-divergence that can be calculated as follows given the dataset \(D^{E}\)[33]:
\[\min_{L}\mathcal{D}_{KL}\left(\pi^{E}\parallel\pi_{L}\right)\approx\sum_{i=1}^{\left|D^{E}\right|}\operatorname{softmax}\left(\frac{\mathbf{q}_{i}^{E}}{\tau}\right)\ln\left(\frac{\operatorname{softmax}\left(\mathbf{q}_{i}^{E}/\tau\right)}{\operatorname{softmax}\left(\mathbf{q}_{i}^{L}\right)}\right) \tag{9}\]
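A minimal NumPy sketch of this distillation objective for a single state, assuming \(\tau\) denotes the softmax temperature and \(\mathbf{q}^{E}\), \(\mathbf{q}^{L}\) are the expert's and learner's action-value vectors (purely illustrative; the training code may differ):

```python
import numpy as np

def softmax(x):
    z = np.exp(x - np.max(x))
    return z / z.sum()

def distillation_kl(q_expert, q_learner, tau):
    """KL divergence between softened expert targets and the learner's distribution."""
    p = softmax(np.asarray(q_expert) / tau)   # softened expert action distribution
    q = softmax(np.asarray(q_learner))        # learner action distribution
    return float(np.sum(p * np.log(p / q)))
```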
We are using one expert policy at a time in the slicing xApp scenario. Hence, we follow a similar approach by calculating a vector value exactly at the midpoint between the actions recommended by the expert policy \(\pi_{E}\) and the learner policy \(\pi_{L}\) given a state. Then, an action with the shortest Euclidean distance to that vector value is chosen from the action space as described in Algorithm 2 as follows:
**Input:**\(\pi_{E}\), MNO configurations, \(\kappa\)
**Parameters:**\(\theta\), \(T\), \(\Omega\), buffer size, \(\beta\), transfer rate decay, \(\nu\)
**Output:** PRB Allocation per slice
```
1:Load the appropriate pre-trained expert policy from \(\mathcal{P}_{\text{E}}\)
2:Initialize the learner action value function with random weights or from a policy pre-trained using a significantly different traffic pattern
3:if\(t<T\)do:
4: Generate a random number \(x\), where \(0\leq x\leq 1\)
5:if\(x\leq\theta\)do:
6: Consult the expert policy
7: Choose an action according to Theorem (1)
8: Consult the learner agent's policy
9: Find the midpoint between the actions recommended by the expert and learner policies
10: Calculate the Euclidean distance between such a vector and all actions in the action space according to (10) to get the closest action
11:elseif\(x>\theta\)do:
12: Choose an action according to the learner policy
13:endif
14:elseif\(t\geq T\)do:
15: Choose an action according to the learner agent's policy
16:endif
17: The DRL-based xApp acts based on the action recommended in the previous algorithm steps, \(a_{t}\)
18:Allocate PRBs to the available slices according to \(a_{t}\)
19: Execute scheduling within each slice
20: Calculate reward \(R\) using (6)
21: Update the learner agent's policy based on the reward received every \(\beta\) step
22:\(t\gets t+1\)
23:\(\theta\leftarrow\theta\)* \(\nu\)
```
**Algorithm 2** Proposed Policy Distillation Approach
\[d\left(a_{\pi_{L}},a_{\pi_{E}}\right)=\sqrt{\sum_{s=1}^{S}{(a_{\pi_{L}s}-a_{\pi_{E} s})^{2}}} \tag{10}\]
where \(a_{\pi_{E}}\) and \(a_{\pi_{L}}\) are vectors of actions recommended by the expert policy and the agent's learner policy respectively. Again, the learner agent of the deployed xApp follows the distilled policy with a probability that depends on the transfer rate, \(\theta\), configured by the MNO.
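A small sketch of this midpoint-based selection, assuming `action_space` is the list of feasible PRB allocations:

```python
import numpy as np

def distilled_action(a_expert, a_learner, action_space):
    """Pick the feasible action closest (Eq. 10) to the expert/learner midpoint."""
    midpoint = (np.asarray(a_expert) + np.asarray(a_learner)) / 2.0
    distances = [np.linalg.norm(midpoint - np.asarray(a)) for a in action_space]
    return action_space[int(np.argmin(distances))]
```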
### _Proposed Hybrid Policy Transfer Approach_
Employing transfer learning should generally result in gains when compared with the non-TL-aided DRL approach. The knowledge of expert policies fine-tuned in a live network is reused to guide the learner agent instead of randomly exploring the action space. The policy reuse approach, however, is expected to perform poorly when the expert policy is trained on traffic patterns that are very different from those of the actual deployment environment. Hence, policy reuse delays the DRL agent's recovery when there is a big discrepancy between the source domain and the target domain.
On the other hand, policy distillation may prevent the learner agent from gaining the maximum possible rewards in some cases. For instance, this may happen when it is being guided by an expert policy that was trained on traffic patterns that are very similar to those of the actual deployment environment. Hence, it is expected that the two aforementioned policy transfer approaches will have some drawbacks in certain situations. We propose a hybrid of the two approaches to achieve a more robust TL-aided DRL exploration and increase the overall reward feedback. This is helpful when the transferred policies are not generic enough to robustly adapt to new traffic patterns. We introduce a parameter, \(\gamma\), similar to \(\theta\) to balance between exploiting the expert policy and exploring a distilled action.
The proposed approach is implemented using the proposed deployment workflow that adheres to the O-RAN architecture. Guidance is carried out by modifying the exploration process [13]. This allows the learner agent to use an expert policy with probability \(\gamma\), and minimize the divergence between the expert and learner policies with probability (1 - \(\gamma\)). The reused policy may have been learned under similar or different traffic conditions relative to the current conditions. Our proposed hybrid transfer learning approach combines two TL approaches to accommodate these two situations. The first is policy reuse which directly follows the expert policy's recommended action. This is beneficial when the reused policy is pre-trained under similar conditions to that of the learner agent. The second is policy distillation, which minimizes the divergence between the expert and the learner policies' actions. This is beneficial when the reused policy is pre-trained under different conditions from that of the learner agent. A hybrid approach allows the DRL agent to start with a good reward value whenever it is newly deployed in a live network. It also enables the agent to converge quickly to the optimal slicing configuration compared with the two approaches separately.
The main components of the proposed hybrid approach and the interactions between its components are visualized in Fig. 4. As summarized in Algorithm 3, the proposed approach follows the steps below:
1. The slicing xApp consults both the expert and the learner policies to get their recommended actions given the current system state.
2. The expert and the learner policies provide the xApp with their recommended actions to take.
3. The learner agent of the slicing xApp decides whether to follow its own policy, the expert policy, or a distilled policy. This is mainly determined by the configuration of \(\gamma\) and \(\theta\) set by the MNO as described in Algorithm 3. A bigger value of \(\gamma\) implies that the distillation will happen less often during the transfer time \(T\). Based on that decision, the slicing xApp executes the proper action to allocate PRBs among the admitted slices.
4. The slicing xApp logs the relevant KPIs and calculates the reward feedback based on the reward function defined by the MNO.
5. The slicing xApp updates the learner policy based on the received reward. The frequency of such updates depends on parameters such as the buffer size. Moreover, the policy update equation depends on the DRL algorithm followed by the learner agent. The expert policy is deterministic and cannot be updated.
6. After \(T\) time steps, the xApp follows the latest version of the learner policy until experiencing significant changes in the network conditions. This can be detected in many ways but it is not the focus of this paper. For instance, the TL-aided DRL slicing xApp can track the reward feedback it receives. It can then start a new policy transfer procedure when the reward values become lower than a threshold defined by the MNO.
Fig. 4: Block diagram of the proposed hybrid approach’s components and interactions.
## V Simulations and Results
### _Simulation Settings_
We follow the DRL design described in Section III-C and summarized in Table II. We conduct a thorough study to test the proposed training and deployment O-RAN flows. We also examine the convergence performance of the three proposed policy transfer-aided DRL approaches in the context of O-RAN slicing. To do so, we follow an approach similar to the one proposed in Section IV-A. We first train several expert models using real VR gaming traffic [25] to reflect practical scenarios of immersive applications in 6G networks. The VR gaming data includes multiple games and multiple configurations per game. Additionally, voice over new radio (VoNR) and video requests are generated based on the parameters described in Table III following models similar to the ones defined in [34]. VR gaming users generate the largest requests. Moreover, video users receive packets more frequently compared to the other two service types. Finally, VoNR users generate small and constant-size requests.
beginning of our simulation runs to guide the learner agent. The learner policy is initialized randomly. Such a configuration is set to reflect the big difference between the offline training simulation and the real deployment environments. This also reflects a significant change in the network's conditions, and hence, the need for exploration in both cases. This enables evaluation of the proposed approach's performance against the baselines under extreme conditions. However, our framework can be configured to accommodate any pre-trained policy to be used as an initial policy for the learner DRL agent.
We compare the three proposed TL-aided DRL approaches with their traditional non-TL-aided DRL counterparts. This allows us to study the gains in convergence performance in terms of the average initial reward value, convergence rate, rewards variance per run, and the number of converged scenarios. This evaluates some of the safety and acceleration aspects of the proposed approaches. For that, we also use various traffic patterns from the VR gaming dataset to reflect two main learner agent scenarios:
1. A scenario that includes DRL-based slicing xApps that are deployed in an environment experiencing traffic patterns similar to those used to train the expert policies guiding such xApps.
2. A scenario in which the slicing xApps are experiencing traffic patterns that are different from the patterns used to train the expert policies that guide the policy transfer process.
In both the training and deployment flows, the DRL-based slicing xApp allocates the limited PRBs to the available 3 slices. After that, round-robin scheduling is executed within each slice at the granularity of 1 ms. The slicing window size is 100 ms. Hence, scheduling continues for 100 time slots. The scheduler can allocate resources to multiple transmissions per transmission time interval (TTI) if enough resources are available. Moreover, unsatisfied users leave the system if they have several unfulfilled requests. The reward function defines the goal of an RL problem [35]. In our experiment, the goal is to keep the latency of the different slices in an acceptable range defined by the slices' SLAs. This is reflected by the \(c_{2}\) parameter in (6). Thus, unsatisfied users are considered implicitly in terms of high latency before they leave. This enables the reward function to penalize the DRL agent for taking actions that lead to high delays during a given slicing window, and hence, users leaving the system.
We used the configurations defined in Table II for all the expert and learner agents. For better generality of the results, we ran all the possible combinations out of the listed hyper-parameters of the implemented approaches. The agents considered in this paper employ the proximal policy optimization (PPO) algorithm [36] as an underlying DRL algorithm. We adopt the PPO implementation from the Tensorforce Python package 1. We modified it accordingly, to accommodate the flows and the policy transfer algorithms proposed in Section IV. VR gaming slices are relatively more latency intolerant. Therefore, the reward function weights and parameters are configured to reflect such sensitivity as seen in Table II.
Footnote 1: Available at [https://github.com/tensorflowc/tensorforce](https://github.com/tensorflowc/tensorforce)
The O-RAN specifications mandate that any ML-based solution should not be trained online. It can only be fine-tuned online to ensure that the trained models do not affect the performance and stability of the network [9]. To limit such an impact, we configure the DRL agent to have a low initial exploration rate. We also employ an exploration decay to restrict the random action exploration of the learner agents. We study three primary aspects of the results. We start with analyzing the reward convergence behavior of the different approaches. We then evaluate their safety and acceleration aspects. We finally investigate the effect of the introduced parameter \(\gamma\) on the performance of the proposed hybrid TL-aided DRL approach.
All the experiments carried out in this study were conducted on a Linux machine with 8 CPUs, 64 GB of RAM, and an NVIDIA GeForce RTX 2080Ti GPU. The developed simulation environment that includes the implementation of the proposed methods is available on GitHub2. Such a DRL environment follows OpenAI Gym standards3. This allows the development of methods that can interact instantly with the environment. Hence, enables fellow researchers to reuse, extend, and compare the proposed approaches against their approaches.
Footnote 2: Available at [https://github.com/ahmadnagib/TL-aided-DRL](https://github.com/ahmadnagib/TL-aided-DRL)
Footnote 3: OpenAI Gym: [https://www.gymlibrary.dev/](https://www.gymlibrary.dev/)
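For readers who want a starting point, a bare-bones Gym-style environment consistent with the state, action, and reward design of Section III-C might be structured as follows; the scheduling simulation and traffic update are random placeholders, and this skeleton is not the authors' released environment (see the GitHub repository above for that).

```python
import gym
import numpy as np
from gym import spaces

class SlicingEnvSketch(gym.Env):
    """Illustrative skeleton: state = traffic-contribution vector, action = index
    into a discrete set of PRB splits, reward = sigmoid latency reward of Eq. (6)."""

    def __init__(self, actions, weights, c1, c2):
        self.actions = actions                                   # list of PRB splits
        self.weights, self.c1, self.c2 = map(np.asarray, (weights, c1, c2))
        self.action_space = spaces.Discrete(len(actions))
        self.observation_space = spaces.Box(0.0, 1.0, shape=(len(weights),), dtype=np.float32)

    def _simulate_window(self, prb_split):
        # Placeholder for per-slice scheduling over one slicing window (latency in ms)
        return np.random.uniform(5.0, 60.0, size=len(self.weights))

    def reset(self):
        return np.full(len(self.weights), 1.0 / len(self.weights), dtype=np.float32)

    def step(self, action_idx):
        latency = self._simulate_window(self.actions[action_idx])
        reward = float(np.sum(self.weights / (1.0 + np.exp(self.c1 * (latency - self.c2)))))
        next_state = np.random.dirichlet(np.ones(len(self.weights))).astype(np.float32)
        return next_state, reward, False, {}
```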
### _Reward Convergence Behaviour_
It is better to reuse local expert policies that were trained in contexts similar to the deployment environment. However, these are not always available. Hence, we show the reward convergence performance of the proposed approaches when an expert policy trained in a similar or different context is reused. As described in Section V-A, we run all the combinations in Table II. We choose the best 64 runs of each approach in terms of the average normalized reward per run and show the average rewards over the duration of a simulation run in Fig. 5.
Fig. 5(a) and Fig. 5(c) show the average convergence performance of the approaches in scenarios where the expert policy is trained using a traffic pattern similar to the deployment
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline & **Video** & **VoNR** & **VR gaming** \\ \hline
**Scheduling algorithm** & Round-robin per 1 ms slot & & \\ \hline
**Slicing window size** & PRB allocation among slices every 100 scheduling time slots & & \\ \hline
**Packet interarrival time** & Truncated Pareto (mean = 6 ms, max = 12.5 ms) & Uniform (min = 0 ms, max = 160 ms) & Real VR gaming dataset [24] \\ \hline
**Packet size** & Truncated Pareto (mean = 100 B, max = 250 B) & Constant (40 B) & Real VR gaming dataset [24] \\ \hline
**Number of users** & Poisson (max = 43, mean = 20) & Poisson (max = 104, mean = 70) & Poisson (max = 7, mean = 1) \\ \hline \end{tabular}
\end{table} TABLE III: Experiment Setup: Simulation Parameters Settings
environment pattern. The hybrid approach has the highest initial reward value in both cases. The policy reuse approach comes second and its convergence behavior is very similar to that of the hybrid approach. This is primarily attributed to the similarity between the source and target policies' environments. The policy reuse follows the expert policy's actions and they are of high return most of the time due to such similarity. Nevertheless, unlike the other approaches, the hybrid approach accommodates the small differences between training and deployment environments by additionally following a distilled action for some time during the transfer time, depending on the \(\gamma\) parameter setting. This enables it to explore in a safer way. Hence it converges to the best average reward in both situations.
On the other hand, Fig. 5(b) and Fig. 5(d) show the average performance of the approaches in scenarios where the expert policy is trained using a different traffic pattern. The proposed approach still has the best overall reward convergence performance. The policy reuse approach, however, has almost the worst start and average reward values for a significant percentage of the simulation run duration. This can be attributed to the differences between the source and target policies' environments. Hence, blindly following the expert policy given the restricted exploration will not lead to optimal actions as the network conditions are different. The policy distillation approach has a much better start; however, it converges to a sub-optimal value function as it tries to explore very carefully by finding an action that minimizes the divergence between the expert and the learner policies' actions. This prevents policy distillation from exploring the other possible high-return actions before the end of the limited 4000 exploration steps. Again, the hybrid approach accommodates the differences between training and deployment environments by switching between an expert policy reuse action and a distilled action depending on the \(\gamma\) parameter setting.
Fig. 5: Reward convergence of the proposed approaches: a) and c) traffic patterns 1 and 2 guided by an expert policy trained using a similar traffic pattern; b) and d) traffic patterns 1 and 2 guided by an expert policy trained using a different traffic pattern (average of best 64 runs).
The non-hybrid approaches are not able to explore the whole action space given the restricted exploration setting in terms of initial exploration, exploration decay, and exploration end step defined in Table II. Consequently, they sometimes fail to converge to the optimal slicing configurations. On the other hand, the DRL agents following the hybrid approach are guided by two kinds of live network knowledge. The exploration process of the hybrid approach is modified to incorporate such knowledge. Thus, it does not require as much random exploration or following the local learner policy as the other approaches during the transfer time, \(T\).
### _Safety and Acceleration Evaluation_
We now present statistics compiled from the best 64 runs of all the approaches given traffic patterns 1 and 2 in Fig. 6(a) and Fig. 6(b) respectively. The figure depicts the acceleration and safety aspects of the different approaches. More specifically, we measure the initial average normalized reward, variance in the reward, number of steps to converge to the best reward, and percentage of converged simulation runs for each approach. These metrics respectively measure whether an approach starts with a good reward value, the change in reward values afterward, the speed of convergence, and the ability to finally converge to the optimal policy.
Such observations confirm the results presented in Fig. 5 and the hypothesis made in Section IV-C. The proposed hybrid approach tries to maximize the reward for some time. It also tries to cautiously explore new actions by striking a balance between a deterministic and a guided exploratory action at other times. It inherits the best of both policy reuse and distillation approaches regardless of the nature of the expert policy's training environment and the actual deployment traffic conditions.
Consequently, the proposed hybrid approach has the highest initial reward value and the highest percentage of converged runs with at least 7.7% and 20.7% improvements over the policy reuse approach respectively. It also yields the lowest variance in reward values per run with at least a 64.6% decrease in variance when compared with policy reuse. It does so while still having the second-best performance in terms of the number of steps to converge. However, policy reuse which comes first in this metric only converges up to 82.8% of the time. Hence, the number of steps to converge is averaged over a smaller number of data samples.
Fig. 6: Safety and acceleration performance of the proposed approaches averaged over 64 best runs (the higher the better for the percentage of converged runs and the average start reward): a) traffic pattern 1; b) traffic pattern 2.
The hybrid approach can switch between two approaches of knowledge transfer. This enables it to deal with various expert policies' pre-training conditions, whether they are similar to or different from the experienced live network conditions. The non-TL-aided approach has no sources of knowledge. It only relies on its own policy and random exploration, which is restricted in such an O-RAN deployment scenario. Thus, it has almost the worst performance on all the compared aspects except for the percentage of converged runs in the scenario presented in Fig. 6(a). The policy reuse and distillation approaches are close in their overall performance except for the percentage of converged runs. This is primarily because the two traffic patterns are not hugely different from those used to pre-train the expert policy. Thus, policy reuse can manage to converge more frequently if it only relies on following the expert policy, unlike policy distillation, which fails more often as it conservatively converges towards a much lower average reward.
### _Effect of the Introduced Hybrid Transfer Learning Parameter_
We also examine the effect of the introduced parameter \(\gamma\) as shown in Fig. 7. The two sub-figures show the average performance of the proposed approach based on the best 64 runs for each value of \(\gamma\). Fig. 7(a) shows that the number of steps needed by a learner agent to converge increases as the value of \(\gamma\) decreases. The hybrid TL-aided approach with \(\gamma=0.3\) needs around 60% of the simulation run steps to converge to the optimal reward value. It also shows a slight increase in the reward variance per run and a slight decrease in the initial reward value. This can be attributed to fewer policy reuse-based actions taken as defined in Algorithm 3 and showcased in Fig. 7(b). The traffic pattern used here is still not very different from that used to pre-train the expert policies. The policy reuse approach showed a slight advantage over policy distillation in such situations as in Fig. 6 and Fig. 5. Hence, an overall slight degradation in performance is expected. However, the hybrid approach still shows robust behavior given the different \(\gamma\) values. This is due to restricting random actions and relying on both distillation and reuse during the majority of the transfer time as seen in Fig. 7(b). It is worth noting that the probability of taking a reuse or distillation action does not only rely on \(\gamma\) and hence their counts do not change identically when changing \(\gamma\).
Fig. 7: The effect of the introduced parameter \(\gamma\) on the convergence performance averaged over 64 best runs given traffic pattern 2: a) safety and acceleration performance given different \(\gamma\) values; b) action counts during transfer time \(T\).
## VI Conclusion and Future Work
Reusing existing knowledge is a major step towards having _safe and accelerated_ DRL-based xApps in the O-RAN paradigm. In this paper, we propose a hybrid TL-aided DRL approach that combines the policy reuse and distillation TL methods. A thorough study on intelligent O-RAN slicing is conducted to demonstrate the DRL convergence performance gains of using the proposed approach. For this, a public VR cloud gaming dataset is incorporated to reflect an example of realistic immersive applications of O-RAN slicing. The proposed hybrid approach proves to be effective whether the expert policies are pre-trained in a context similar to or different from that of the deployment environment. Results show improvements of at least 7.7% and 20.7% in the average initial reward value and the number of converged scenarios, respectively, and a 64.6% decrease in reward variance, while maintaining fast convergence and enhancing generalizability compared with the baselines. This facilitates a _safe and accelerated_ DRL convergence when a slicing xApp is newly deployed in a live network and when the network context changes significantly.
Although the proposed hybrid approach proves to outperform the baselines, the associated hyper-parameters need dynamic optimization based on the context of both the deployment and expert policy training environments. Studying how to conditionally trigger policy transfer instead of relying on probabilities during the transfer time is another interesting research problem that should be addressed. Furthermore, research about the benefits and ways to reuse imperfect and low-cost policies is needed. Finally, combining the proposed method with other approaches such as constrained DRL [13] and time series forecasting [37] is a promising step toward trustworthy DRL in O-RAN slicing.
The Open Radio Access Network (O-RAN) architecture supports intelligent network control algorithms as one of its core capabilities. Data-driven applications incorporate these algorithms via RAN intelligent controllers (RICs) to optimize RAN functions. Deep reinforcement learning (DRL) algorithms are among the main approaches adopted in the O-RAN literature. However, despite the benefits of deployment through O-RAN RICs, the adoption of real network deployments implementing DRL algorithms has lagged. This is thought to be caused mainly by slow convergence and lack of stability while DRL agents learn the network conditions they encounter at deployment time. In this paper, to address these challenges, we propose transfer learning (TL) as a core component of the training and deployment workflow of DRL-based closed-loop control. To this end, policy reuse and |
2308.04445 | Getting from Generative AI to Trustworthy AI: What LLMs might learn from
Cyc | Generative AI, the most popular current approach to AI, consists of large
language models (LLMs) that are trained to produce outputs that are plausible,
but not necessarily correct. Although their abilities are often uncanny, they
are lacking in aspects of reasoning, leading LLMs to be less than completely
trustworthy. Furthermore, their results tend to be both unpredictable and
uninterpretable.
We lay out 16 desiderata for future AI, and discuss an alternative approach
to AI which could theoretically address many of the limitations associated with
current approaches: AI educated with curated pieces of explicit knowledge and
rules of thumb, enabling an inference engine to automatically deduce the
logical entailments of all that knowledge. Even long arguments produced this
way can be both trustworthy and interpretable, since the full step-by-step line
of reasoning is always available, and for each step the provenance of the
knowledge used can be documented and audited. There is however a catch: if the
logical language is expressive enough to fully represent the meaning of
anything we can say in English, then the inference engine runs much too slowly.
That's why symbolic AI systems typically settle for some fast but much less
expressive logic, such as knowledge graphs. We describe how one AI system, Cyc,
has developed ways to overcome that tradeoff and is able to reason in higher
order logic in real time.
We suggest that any trustworthy general AI will need to hybridize the
approaches, the LLM approach and more formal approach, and lay out a path to
realizing that dream. | Doug Lenat, Gary Marcus | 2023-07-31T16:29:28 | http://arxiv.org/abs/2308.04445v1 | # Getting from Generative AI to Trustworthy AI:
###### Abstract
Generative AI, the most popular current approach to AI, consists of large language models (LLMs) that are trained to produce outputs that are _plausible_, but not necessarily _correct_. Although their abilities are often uncanny, they are lacking in aspects of reasoning, leading LLMs to be less than completely trustworthy. Furthermore, their results tend to be both unpredictable and uninterpretable.
We lay out 16 desiderata for future AI, and discuss an alternative approach to AI which could theoretically address many of the limitations associated with current approaches: AI educated with curated pieces of explicit knowledge and rules of thumb, enabling an inference engine to automatically deduce the logical entailments of all that knowledge. Even long arguments produced this way can be both trustworthy and interpretable, since the full step-by-step line of reasoning is always available, and for each step the provenance of the knowledge used can be documented and audited. There is however a catch: if the logical language is expressive enough to fully represent the meaning of anything we can say in English, then the inference engine runs much too slowly. That's why symbolic AI systems typically settle for some fast but much less expressive logic, such as knowledge graphs. We describe how one AI system, Cyc, has developed ways to overcome that tradeoff and is able to reason in higher order logic in real time.
We suggest that any trustworthy general AI will need to hybridize the approaches, the LLM approach and more formal approach, and lay out a path to realizing that dream.
## 1 Introduction
For all the progress in artificial intelligence in the last decade, trustworthy artificial intelligence remains elusive. Trained statistically with an astronomical number of parameters on large quantities of texts and images, today's AI's (ChatGPT, Bard, etc.) are certainly impressive, but they have been trained to be _plausible_, but not necessarily _correct_. As a result, they are untrustworthy, unstable and brittle. Some examples of what we mean by these terms:
* _Untrustworthy_. When a physician asked ChatGPT for a citation supporting a medical claim, it supplied one - but, it later turned out to be a _completely nonexistent_ journal article: "_It took a real journal, the European Journal of Internal Medicine. It took the last names and first names... of authors who have published in said journal. And it confabulated out of thin air_ [the title of] _a study that would apparently support this viewpoint_". [Faust 2023] It's fortunate that he not only asked for a citation but also then tried to find and read the cited article. As another example, lawyers were recently sanctioned for citing six nonexistent cases "found" by ChatGPT, and multiple people have been accused of crimes they did not commit, such as sexual harassment, based on confabulated evidence.
* _Unstable_. As one recent study [Chen et al, 2023] showed, large language models can be quite unstable in behaviors (e.g., determining whether a given integer is prime or composite) from one month to the next.
* _Brittle_. LLM's also sometimes make mistakes that no person would make. E.g., after ChatGPT told us that Romeo commits suicide at the end of _Romeo and Juliet_, we asked whether Romeo dies during the play, and it said there was no way to know! It also answered incorrectly when we asked whether Vladimir Putin believes that cats can breathe, and when we asked whether non-round skateboard wheels would work as well as round ones. Another recent study showed that systems could be undermined by adversarial attacks that no human would fall prey to [Zou, et al, 2023].
In our view, the underlying problem is that LLMs understand too little about the nearly limitless richness of how the world works, about everyday life, human interaction, different cultures, etc. It's notoriously difficult to define "understanding" concisely -- we discuss over a dozen components of "understanding" in the next section. But it mostly comes down to _knowledge_, _reasoning_, and _world models_[Marcus, 2020], none of which is well handled within Large Language Models.
* Knowledge: People know many individual _facts_, but equally importantly we have (i) a large, broad, stable base of common sense (e.g., "_you can't be in two places at once_"); and (ii) a large set of qualitative models of how the world works (e.g., "_if it rains, uncovered outdoor items will get wet_").
* Reasoning: We routinely combine pieces of knowledge and perform multi-step reasoning. If we hear that the President is flying into town tomorrow afternoon, we might adjust our schedule or our planned routes accordingly. If we see on the evening
news that the President is making an unplanned trip elsewhere instead, we re-adjust to that. If the US should finally elect our first female president, we would effortlessly generalize that knowledge, regardless of prior history.
In a nutshell, humans possess knowledge and reasoning capabilities, which resemble Kahneman's System 2 (which the second author calls "deliberative reasoning"), but today's generative AI - more like Kahneman's fast and automatic "System 1" - does not. As a result, much of what is obvious to people remains unreliable within the large language model approach.
The next section teases apart "knowledge" and "reasoning", breaking them down into 16 key elements. Section 3 then discusses the progress of accomplishing each of them in a particular "System 2" AI today, Cyc, which is very different from an LLM. Finally, Section 4 considers the ways in which these two types of AI's might work together to produce a trustworthy general AI.
## 2 Sixteen Desiderata for a _Trustworthy_ General AI
A general AI which is trustworthy needn't think exactly the same way humans do, but it ought to at least possess the following 16 capabilities:
1. _Explanation._ A trustworthy AI should be able to recount its line of reasoning behind any answer it gives. Asking a series of repeated _Why is that?_ follow-up questions should elicit increasingly fundamental knowledge, ultimately bottoming out in first principles and "given" ground truths. Each piece of evidence, knowledge, rule of thumb, etc. invoked in that reasoning chain should also have its source or provenance known. This is a higher standard than people hold each other to, most of the time, but is expected in science and whenever there is a very important decision such as one involving family healthcare, finance, and so on. The explanation should be as concise as appropriate, prioritizing and filtering details based on context and prior and tacit knowledge the user has (or is inferred to have), and resource constraints the user is under (or is inferred to be under).
2. _Deduction:_ A trustworthy AI should be able to perform the same types of deductions as people do, as deeply as people generally reason. If you know that countries have borders, and Andorra is a country, then you can infer that Andorra has borders. That use of _modus ponens_ is one type of deduction. Another type is arithmetic: if someone enters a room that had four people, it now has five. Exhaustive search is another type of deduction: A chess player soon to be checkmated considers the tree of all moves and counter-moves at that point, and tips over their king. Understanding connectives like _and, or, not_ is important, including various "flavors" of negation (e.g., not being able to conclude P is different from being able to conclude P is false.) Deduction also includes recognizing when one statement blatantly contradicts another, and when one statement is obviously redundant with another.
3. _Induction._ Often thought of as a complement to deduction, when certain conclusions cannot be logically deduced. A typical example: An animal's species generally determines the major features of its anatomy. So if you hear about a new type of invertebrate that has just been discovered - let's call it a dwim -- and hear or see that it has eight legs and two wings, you induce that most dwims will have eight legs and two wings. This kind of reasoning sometimes leads to errors but it helps us cope with the rich, complicated world that we live in. A nearly-ubiquitous form of inductive reasoning is _temporal projection:_ If you believe or know that X is true at time \(t_{1}\), then you infer how likely it is to be true at time \(t_{2}\). E.g., I learn you own a house, from which I can infer how likely it is that you owned it 2 years ago or will still own it 3 years from now. Most such projections follow one type of probability decay curve (linear, normal, Gaussian, etc.) for each direction, with the corresponding parameters. Similar projections apply across location, security, and dozens of other dimensions. Things change at boundaries (e.g., state lines) and interrupting events (e.g., getting divorced and selling your house, or less dramatically the ringing of a phone).
4. _Analogy._ Much human reasoning involves analogizing to far-flung and (superficially) unrelated things. The ability to do that by its very nature requires knowing about that vast, broad, panoply of things (objects, actions, properties, etc.) to which one might be analogizing.
5. _Abductive Reasoning,_ sometimes known as inference to the best explanation. If a janitor sees a set of chairs in a room that looks like the set of chairs the janitor observed the night before, the presumption, possibly incorrect, but best explanation, other things being equal, is that it is the same set of chairs. This kind of reasoning can lead to errors but, like induction and analogy, it is so useful that we do it all the time.
6. _Theory of Mind:_ When we talk with another person, we usually have (or quickly build up) a good model of what they know, are capable of, care about, and so on. We then use that model to guide our interactions: to be more terse with a colleague, to be less terse with a stranger, to use simpler concepts and vocabulary with a young child, etc. Similar presumptions about prior and tacit shared knowledge occur when interacting with your neighbor, with someone who is about your age, with someone who is much older/younger, with someone attending or participating in the same event, etc. An overly loquacious AI could appear condescending, patronizing, or pedantic; one that's too terse could appear cryptic, uncooperative, and, most seriously, frequently be misunderstood. If conversing with a person who is ambiguous or vague, the AI should be able to infer whether, in each instance, it's better (given the conversation "goal") to adopt and reflect that level of vagueness or to ask some clarifying questions or to avoid both of those paths and instead just temporize (delay), e.g., by changing the subject. The AI should revise its model of other agents (and indeed of the world as a whole) over time -- ultimately over the entire lifetime of each person it interacts with -- adding new temporally tagged revisions rather than overwriting and only keeping the latest model around. One channel of information informing its model of person/group/idea X is _indirect_: what others have said about X, taking _their_ models into account of course. One other aspect of Theory of Mind worth mentioning is a model of _self:_ understanding what it, the AI, is, what it is doing at the moment and why, and -- very importantly -- having a
good model of what it does and doesn't know, and a good model of what it is and isn't capable of and what its "contract" with this user currently is (see item 13, below).
7. _Quantifier-fluency:_ Consider "_Every Swede has a king_" versus "_Every Swede has a mother_" -- each person in Sweden of course has the same king but not every Swede shares the same mother! In logic such ambiguities are naturally avoided by using _variables_ which are _quantified_. The first sentence would be written "_There exists_ a king \(x\) such that _for each_ Swede \(y\), _x is y_'s king"1 and the second would be written "_For each_ Swede \(y\), _there exists_ a mother \(x\) such that \(x\) is _y_'s mother." Of course non-logicians still understand what is meant by each of the syntactically similar English sentences because of their common sense, the mental models they already have about families, motherhood, monarchies, etc.
8. _Modal-fluency_. Besides those two quantifiers, we often qualify statements with phrases like "_I hope that..._", "_He is afraid that..._", "_Jane believes that..._", "_...so it is possible that..._", "...so it _must be the case_ that...", etc. These pervade human speech and writing, and people are quite good at correctly and almost effortlessly reasoning with such so-called modal operators, including quite deep nestings of such, e.g., "Ukraine hopes that the U.S. believes that Putin plans to..."
9. _Defeasibility_. Much of what one hears, reads, says, believes, and reasons with is only _true by default_.2 New information arrives all the time, and many conclusions that were reached would have turned out differently if that new information had been known at the time. To be trustworthy, an AI needs to be able to assimilate new information and revise its earlier beliefs and earlier answers. For some critical applications, it might need to actively inform others that it's retracting and revising some response it gave them in the past.3
Footnote 1: More precisely, there exists _exactly one such_ king \(x\). Instead of packing that into this axiom (and having to repeat that in stating other axioms), it’s cost-effective to factor it out as a separate axiom, a separate rule of thumb: One generally does not have two or more different kings at the same time.
Footnote 2: For example, you know the “rule” that each person has a biological mother who is or was a different, and older, person. But even that rule must have some exception(s) or else there would already have been an infinite number of people born! There are a few sorts of things which are absolutely true, with no exceptions, and always will be, such as how to spell the English word “misspell”, what the sum of two integers must be, and the rules of chess... well, the rules _today_ (the 75-move rule was added in 2014).
Footnote 3: People are often hit-or-miss doing this sort of accommodation, and as we age we inevitably retain more and more “stale” conclusions; AIs can do better than people at this, and might usefully serve as a sort of mental prosthesis or amplifier to help us avoid such unwanted remnants.
Footnote 4: If there are 367 people in a building, a _nonconstructive_ argument can be made that at least two have the same birthday based on the number of days in a year. This doesn’t tell us _who_ those individuals are.
_10. Pro and con arguments._ Even seemingly straightforward questions might have multiple incommensurably-good answers, and each _answer_ could have its own set of pro- and con- arguments.
_11. Contexts._ Some pieces of advice apply at football games but not classroom lectures (e.g., stand up and cheer to signify approval). Some statements are true in someone's (or some group's) belief system, but not in others'. Some, such as who the king of Sweden is, change over time. Having knowledge contextualized, and the ability to reason within and across contexts, can be vital. It is also essential to be able to reason _about_ contexts, such as when trying to decide in which context(s) an entailment should or should not be inferred to hold true. Most human communication leaves some elements of the context _implicit_, which can lead to, e.g., conflation when training an LLM. When performing a task (e.g., interacting with a person), the _use context_ is important: inferring why they are being asked this, what resource constraints they may be under, what context is the user in, what use will be made of their response, and so on. _Culture_ is one important type of context, and making this explicit should reduce miscommunications and facilitate cross-cultural interactions.
_12. Meta-knowledge and meta-reasoning._ A trustworthy reasoner -- be it human or AI -- needs to be able to access and reason about its own knowledge, ideally including the history and provenance5 of each fact or rule of thumb, and should have an accurate and realistic model of what it does/doesn't know, and how good/bad it is at various tasks. Wild guesses should not be advanced as blithely as those supported by strong arguments. The reasoner may sometimes benefit by pausing, in the midst of working on some problem, to _reflect:_ introspect about what tactic it has been trying, how well that seems to be working out, how much longer it's going to require, reason about whether it might be better to change tactics or strategies, and so on. After the fact it might be important to analyze what it did, so as to improve its tactics and strategies for the future. The AI should be able to introspect and explain why it changed its mind about something from yesterday, and hypothesize plausible scenarios which would cause it to change its mind about something - then cache those, and be alert for signs that they may be occurring. The AI should be able to understand, and come up with, jokes, which usually involve meta-reasoning, common sense, and a theory of mind. Another important type of meta-reasoning is critical thinking about whether and when some particular source can be trusted. Theory of mind, contexts, pro/con argumentation (above) can also all be considered types of meta-knowledge and meta-reasoning.
Footnote 5: Since LLMs don’t explain their reasoning down to the provenance of the elements used, it’s unclear how to tell if one LLM infringed on another. This is relevant, now, given how frighteningly easy it is to fast-follow even the best LLMs [12]
_13. Explicitly ethical._ A trustworthy AI should follow a core of guiding principles which appear inviolate, such as not lying or causing emotional or physical harm. As everyone knows, however, these are often shaded and complicated and conflicting (e.g., lying to someone so as not to unnecessarily hurt them; a doctor snapping a dislocated shoulder back into place) and ever-changing, requiring meta-reasoning to resolve. There will never be universal agreement on what this core "constitution" should be, so one important aspect is that there will be a large space of multiple _contexts_ each inheriting from more general ones and imposing their own revisions. Interactions with the AI, and
tasks performed by it, would be situated in such contexts. One important aspect is that the AI's then-current corpus of ethics be explicitly known and inspectable by everyone it is affecting, and this tenet is one of the very few which should never change -- part of the small, immutable, kernel "contract" between an AI and its users. Successfully performing certain tasks involving interaction with people may require the AI to be empathetic (or at least sympathetic and apologetic if it is barred from empathizing by that "contract"). Another aspect of this is that the AI will need to make -- and keep -- promises to each person and group of people it interacts with, subject to their "contracts" -- a common example of this would be not betraying confidences. As always there would be exceptions, such as if a life were in danger, a subpoena, etc [Taylor Olsen 2023].
14. _Sufficient speed._ Just like a human being working on a task, an AI needs to be responsive _enough_ given the type of problem it is working on. Some applications require microsecond response times (and therefore cannot be performed by people), some require real-time human dialogue response times (on the order of a fraction of a second), and it's fine for some other applications to run at slower speeds (e.g., writing a complete 200-page NIH grant proposal). Human "hardware" is relatively uniform, but of course an AI's speed will drastically depend on the computing hardware running it.
15. _Sufficiently Lingual and Embodied._ Some applications would be almost impossibly difficult without the performer -- a human or an AI -- being able to converse in natural language, or hear and speak (understanding and generating appropriate prosody), or visually parse scenes and recognize objects, move around, manipulate physical objects, use instruments and devices, sense texture, pressure, temperature, odors, etc. The modifier "_Sufficiently_" is important, since many applications will require little if any such embodiment, motor or perception capabilities, and little if any natural language dialogue capabilities. Natural language understanding alone involves many AI-complete elements, such as correct disambiguation, understanding of idioms, metaphor, sarcasm, foreshadowing, irony, subtext, and so on. Almost all the knowledge and reasoning of the AI is language-independent (e.g., the fact that writing pens are smaller than penitentiaries has nothing to do with the fact that the English string "pen" could denote either one). But there is still of course the matter of knowing and mastering the lexicon, grammar, idioms, and so on for different natural languages, and mapping the language-specific terms and phrases to terms and expressions in the AI's representation of knowledge.
16. _Broadly and Deeply Knowledgeable._ We take for granted that anyone we speak or write to has a vast shared foundation of fundamental knowledge about the world, from common sense to models of traffic, weather, crime, etc. Knowing a vast plethora of _facts_ is less important today than it used to be, thanks to the omnipresent internet, and Google in particular, but an effective person or AI needs to be able to access (and understand) facts they need as they need them; humans rely heavily on web searching, AI's may be a little less facile at that but better at accessing databases, web services, and structured websites. Per the preceding 15 elements, a trustworthy AI should
leverage the meaning of each piece of knowledge it has or acquires6: be able to explain it, reason about its provenance and trustworthiness, deduce things that logically follow, induce and abduce what a reasonable person would, analogize to (and from) it at least as well as most people do, etc., and do all that as quickly as necessary.
Footnote 6: We can’t count on being as lucky as Chance the gardener in [10].
There are some other capabilities which are effectively combinations of the above 16.
* One important one is _Planning_. Planning can involve combining all the various types of reasoning above (deductive and inductive, analogy, meta-reasoning, weighing pro and con arguments, etc.) The same goes for _Choosing_, as in selecting an item to buy, and in prioritizing tasks.
* Another example is _Learning_. That also can and should draw on all the above types of reasoning capabilities. E.g., a typical 2023 robot might require an enormous number of spills and accidents to learn how to clean a hotel room or drive a car, but a human only requires only a modest amount of experience to become proficient, because of all the background knowledge and reasoning skills we have.
Any general artificial intelligence should have these 16 capabilities7 if it is to be trusted where the cost of error is high. LLMs today struggle with most of these 16 categories of learning. In the next section, we discuss how Cyc, perhaps the extreme realization of System 2 (deliberative reasoning) existing on Earth today, approaches these.
Footnote 7: There are many additional criteria one could add to the above list (e.g., functioning in a way that faithfully mirrors human cognition including error rates and delay times; or having a sense of humor) but these 16 are the most frequently important ones. Conversely, there are of course many quite useful applications such as calculator and calendar apps, that lack almost all of those 16 capabilities; we understand when and how and what to trust them with, not unlike the way we treat physical tools.
## 3 How Cyc handles some of these 16 elements
Large Language Models such as OpenAI's ChatGPT and Google's Bard and Microsoft's Bing/Sydney represent one pole in potential architectural space, in which essentially neither knowledge nor reasoning is explicit. Cycorp's Cyc represents the opposite pole: a four-decade-long 50-person project to explicitly articulate the tens of millions of pieces of common sense and general models of the world that people have, represent those in a form that computers can reason over mechanically, and develop reasoning algorithms which, working together, are able to do that reasoning sufficiently quickly.
Whereas LLMs are trained automatically, statistically, quickly, from large text corpora, usually using self-supervised learning, Cyc has been built by painstakingly identifying each useful axiom individually and then writing it by hand in a logical formalism, and then entering that into the
growing Cyc knowledge base. This process has been accelerated by gamification, NLU, etc., but each axiom is hand-checked for default correctness, generality, and best placement into the microtheories (contexts) it applies to, before entering it into the Cyc knowledge base.
Cyc began with a frame-and-slots representation akin to today's Knowledge Graphs [Lenat, Prakash, and Shepherd 1986], and an inference engine that ran expert systems style if-then rules, also known as situation-action rules. Gradually, over its first several years, the indispensability of having an expressive representation language -- one as expressive as English, Arabic, or Portuguese -- became clear. Namely, a trustworthy general AI needs to be able to represent more or less anything that people say and write to each other -- e.g., "Putin wants the U.S. to be worried that shortly after any U.S. tanks actually arrive in Ukraine, China will blockade Taiwan".
A natural language like English is of course sufficiently expressive to express enormous nuance (though perhaps not, e.g., to express knowledge of what a cat looks like or how to use a can opener). But the AI also needs to be able to algorithmically infer the things that people would, from such statements. That's one of the main reasons we communicate with other people: not just to have our utterances _remembered_, but also have the listener/reader _reason_ with them and, if/when/as appropriate, _act_ on them.
Some of that expected reasoning happens right away, inferences we expect our audience to immediately conclude; and some will occur in the future, when what we just said to them might have some impact on their future reasoning and decision-making. There are various methods that computer science, linguistics, and AI have developed to reason from sentences represented in natural language, such as latent semantic indexing, but those are very incomplete.
A strong alternative to natural languages exists, namely representing each utterance in a formal logical language. Algorithms can then mechanically operate on sets of such statements, and produce all their entailments automatically, one by one. There are many such logics to choose from, but only higher order logic can represent the same breadth of thought as a natural language. By the late 1980's, Cyc assertions and rules were expressed in such a language, CycL. It includes full first order logic (with variables, nested quantifiers, predicates, functions, etc.), and allows statements _about_ other statements, statements about functions, statements about what the inference engine is trying to do at any moment and why, the earlier sentence about Putin and Ukraine, and so on.[Lenat and Guha 1990]
The main reasoning mechanism in Cyc is akin to mechanical "theorem proving" on sentences expressed in predicate calculus -- formal logic. An example of this is sketched in Figure 1. The _reasoning_ is 100% logically sound, but premises like "people love their children" are only true the vast majority of the time, not in every case, so at the end it is merely extremely likely that the person watching their daughter take her first step is smiling. That's why we put "theorem proving" in quotes: what gets inferred is not a guaranteed-to-be-true theorem, it's just a strong argument.
Let's walk through a simple example of first-order machine deduction. Suppose we have a situation like: "A person sees their daughter take her first step." An AGI should be able to answer a question like "Is that person smiling, in that situation? (And, if so, why?)"
The first step in applying machine deduction is to express both the situation and the question in logic. The 3 variables p, d, e, represent the person watching, their daughter, and the walking event. "\(\bigwedge\)" is the symbol for AND. The negated clause says: there is no _prior_ event, f, in which the person d was walking.
**The situation**:
A0. \((\exists p)(\exists d)(\exists e)\) is-a(p, Person) \(\bigwedge\) daughter(p, d) \(\bigwedge\) is-a(e, Event) \(\bigwedge\) sees(p, e)
\(\bigwedge\) action(e, Walking) \(\bigwedge\) performer(e, d)
\(\bigwedge\) \(\neg(\exists f)\) (is-a(f, Event) \(\bigwedge\) action(f, Walking) \(\bigwedge\) performer(f, d) \(\bigwedge\) startsBefore(f, e))
**The question**: expressionDuring(p, e, Smiling) \(\quad\leftarrow\) True or False: p is smiling during that event
Assume that there is also a set of "common sense" axioms which are available to use in bridging between the situation and the question. In English, six of these would say:
A1. People love their children.
A2. If you find out that someone you love has accomplished something significant, it makes you happy.
A3. When something makes you happy, you smile.
A4. Taking one's first step is a significant accomplishment for people.
A5. If you see some event happening, you know the performer and the action
A6. A person's daughter is one of their children.
In logic, A1 for example would be: \((\forall x)(\forall y)\) ((is-a(x,Person) \(\bigwedge\) parent(x,y)) \(\Rightarrow\) loves(x,y)). More literally, this would be read as "For any x and y, if x is a person and the parent of y, then it follows that x loves y."
As mentioned above, these rules of thumb are all just _true by default_. So they can be used to deduce an _argument_ for the person p smiling, not a _proof_ that that must be the case. To produce that argument, the first step is to _negate_ the question:
**NO: "expressionDuring(p, e, Smiling). \(\quad\leftarrow\) "-" is the symbol for NOT.**
Then, step by step, two of the available axioms get _unified_ to produce a new conclusion, a lemma. The available axioms, at any step in this process, include axioms A1-A6, plus the situation A0, plus the negated question NQ, plus all the lemmas derived so far. For example, unifying A6 and A1 produces the conclusion that people love their daughters. Unifying _that_ with A0 produces the conclusion: loves(p, d). After about half a dozen more such reasoning steps, a contradiction is derived - i.e., something and its negation, which can't both be true. But A0, A1,...A6 are given as True, so it must be NQ that's false. Which means Q must be true. So we now have an _argument_ that person p was smiling. The entire step-by-step deduction chain is the answer to _Why?_ Namely: your daughters are also your children, you love your children so you love your daughters, etc. etc.
**Figure 1. Using first-order logic to answer a question.** _Notice that this is a different source of power than the statistically-driven operations over large corpora in Large Language models. Still, even logic can lead to an error, especially if the reasoning chain is very long and each step is just usually true. As discussed later in the essay, in principle both Logic and LLMs might serve as a sanity-check on each other: one might be suspicious if an LLM predicts something which has no logical argument supporting it; and, conversely, be suspicious if a logic-based AI deduces something which seems to have few if any instantiations occurring in any human texts ever written._
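To make the mechanics of Figure 1 concrete end to end, here is a minimal, self-contained Python sketch that forward-chains over hard-coded, Horn-style paraphrases of A0-A6 for the specific constants p, d, e. It is our illustration only; Cyc's actual resolution over full higher-order CycL, with unification and argumentation, is far richer.

```python
# Facts from the situation A0 (ground atoms); "first-step" stands in for
# the negated "no prior walking event" clause.
facts = {("is-a", "p", "Person"), ("daughter", "p", "d"),
         ("is-a", "e", "Event"), ("sees", "p", "e"),
         ("action", "e", "Walking"), ("performer", "e", "d"),
         ("first-step", "e", "d")}

def rules(f):
    """Yield new facts entailed (by default) by the current fact set f."""
    new = set()
    if ("daughter", "p", "d") in f:
        new.add(("child", "p", "d"))                        # A6
    if ("child", "p", "d") in f:
        new.add(("loves", "p", "d"))                        # A1
    if ("first-step", "e", "d") in f:
        new.add(("significant-accomplishment", "e", "d"))   # A4
    if ("sees", "p", "e") in f and ("performer", "e", "d") in f:
        new.add(("knows-about", "p", "e"))                  # A5
    if {("loves", "p", "d"), ("knows-about", "p", "e"),
        ("significant-accomplishment", "e", "d")} <= f:
        new.add(("happy-during", "p", "e"))                 # A2
    if ("happy-during", "p", "e") in f:
        new.add(("expressionDuring", "p", "e", "Smiling"))  # A3
    return new - f

while True:                     # naive forward chaining to a fixed point
    derived = rules(facts)
    if not derived:
        break
    facts |= derived

print(("expressionDuring", "p", "e", "Smiling") in facts)   # True
```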
Searching for an _algorithm_ that could mechanically grind out all the logical entailments of a set of statements has a long and noble history dating back to Aristotle's syllogisms: all men are mortal, Socrates is a man, hence Socrates is mortal. Following Frege, Whitehead, Russell, and other late 19th and early 20th century philosophers, enormous progress has been made over the
last century in implementing logical reasoning engines. However, achieving adequate speed for highly expressive languages remains an unsolved problem.
To run even tolerably fast, most symbolic AI systems today restrict the logical language in which formal statements are expressed, e.g. to knowledge graph or propositional logic which does not allow quantifiers, variables, modals, etc., or to constrained subsets of first order logic (e.g., description logic), which reduces the computational demands at the cost of expressiveness. Given the arduous nature of the reasoning required (see Figure 1) and the need for tens of millions of general rules of thumb, not just a handful (A1-A6) of them, it is understandable almost all AI researchers and developers have gone in the opposite direction, abandoning or trivializing symbolic representation and reasoning, and instead seeking one or another sort of "free lunch" in the form of perceptrons, multi-layer neural networks and, most recently, LLMs.
But, per our 16 desiderata, numbers 7 and 8 especially, limiting an AI to such a narrow "baby-talk" language would be a huge barrier to it ever becoming a trustworthy general AI. Even full first-order logic is much less than what people routinely use in writing and talking to each other8, and falls far short of what a trustworthy AI must be able to handle correctly and expeditiously in almost every real-world application.
Footnote 8: Here, e.g., is the first sentence in the first international story on CNN.com as the first author typed these words on March 22, 2023: “_Florida Gov. Ron DeSantis is making a significant shift in tone toward the war in Ukraine, calling Russian President Vladimir Putin a “war criminal” who should be held accountable, in another portion of a Piers Morgan interview teased in the New York Post._”
For that reason, Cycorp has persevered, unwilling to sacrifice the expressiveness of the logic involved, and its Cyc AI is the culmination of that effort. Over the past four decades it has developed _engineering solution_s to manage each of the 16 elements described in Section 2. Some are elegant; others simply required a lot of elbow grease -- e.g., for item 16, Cyc's knowledge base (KB) comprises tens of millions of hand-authored assertions, almost all of which are general "rule of thumb" axioms (most of the "facts" Cyc knows are ones that it can just look up on the internet much as a person would, or access in databases where the schema of the database has been aligned to Cyc's ontology.)
Let's turn to items on the list of 16, now. The final one-- being Knowledgeable -- sounds like a no-brainer, but what knowledge is and isn't cost-effective to hand-axiomatize? How did Cycorp decide what to enter into the Cyc KB? Cycorp had its ontologists examine random pieces of text, identifying places where the writer correctly assumed that the reader would disambiguate some polysemous word, some prepositional phrase attachment, multiple pronoun referents, etc. That in turn then gets articulated as a piece of common sense knowledge. E.g., consider the sentence "_The horse was led into the barn while its head was still wet_". The thing being wet is the horse, not the barn, not the weather, etc. Change one word -- "_The horse was led into the barn while its **roof** was still wet_" and now the thing being wet is clearly the barn. That leads to the nuggets "horses have heads", "barns don't have heads", "barns have roofs", etc. The ontologist next formalizes this using the language of predicate calculus and the
vocabulary of the (growing as needed) Cyc ontology. But before and after that formalizing, the ontologist will try to generalize the assertion to the point where it still remains default-true. In this case, a moderately good generalization might be "animals have heads". The ontologist would also ferret out, and axiomatize that a horse doesn't have two or more heads (that generalizes to _everything_ by default, i.e., it is default-true that anything that has a head only has one head), and that two different horses don't share the same head, a horse has the same head for its entire lifetime, the head connects to the body at the top of the neck, the head faces forward, the head is about a tenth the size of the body, etc. Of course there are exceptions to all those axioms, such as hydras and Cerberus and rare two-headed live births, but each generalization, each rule of thumb, holds true _by default_. There can be systematic exceptions, and individual exceptions, and exceptions to the exceptions, etc.
Tens of millions of assertions and rules were written and entered into Cyc's KB by hand, but it is important to realize that even just performing _one step_ of reasoning, Cyc could generate tens of billions of new conclusions that follow from what it already knows. In just a few more reasoning steps, Cyc could conclude trillions of trillions of new, default-true statements. It generally doesn't just metaphorically sit back and ponder things, though, it is asked questions; and, running on a typical laptop today, Cyc can nearly instantaneously answer any of those trillions of trillions of commonsense inferences. E.g., how many thumbs did Lincoln's maternal grandmother's mother have? Consider if we ask Cyc why it thinks that Bullwinkle the Moose and Mickey Mouse are not the same individual. E.g., it might have looked that fact up in some compendium of facts about pairs of individuals (but there would have to be more than \(10^{20}\) such facts!) but a better approach would be if it had applied a more general rule like "Mooses and Mice are disjoint". But even then, Cyc would need to know about \((10^{4})^{2}\) - on the order of a hundred million - similar rules to cover even just the 10,000 most common types of living things. Instead, decades ago the Cyc ontologists pointed Cyc to the Linnaean taxonomy system and added just one single rule to the Cyc KB of the form: For any 2 taxons, if one is not a specialization of the other (through a series of sub-taxon links), assume they are disjoint. This type of generalization was critical to have the KB-building enterprise take only (!) a few million person-hours of effort rather than a trillion.
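As an illustration of how a single general rule replaces combinatorially many specific ones, here is a minimal Python sketch of taxon-based disjointness over a toy parent-taxon table; the table and function names are ours, not Cyc's.

```python
# Toy taxonomy: each taxon maps to its parent taxon (None at the root).
PARENT = {"Moose": "Cervidae", "Cervidae": "Mammal",
          "Mouse": "Rodentia", "Rodentia": "Mammal", "Mammal": None}

def ancestors(taxon):
    seen = set()
    while taxon is not None:
        seen.add(taxon)
        taxon = PARENT.get(taxon)
    return seen

def assumed_disjoint(a, b):
    """Single general rule: unless one taxon specializes the other
    (directly or through sub-taxon links), assume their instances are disjoint."""
    return a not in ancestors(b) and b not in ancestors(a)

print(assumed_disjoint("Moose", "Mouse"))   # True  -> Bullwinkle is not Mickey
print(assumed_disjoint("Moose", "Mammal"))  # False -> Moose specializes Mammal
```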
To speed up the educating process, the Cyc team developed tools that made use of the existing Cyc KB (and reasoners) to help the ontologists who were introspecting to unearth and formalize nuggets of common sense. For example, it was important that they _generalize_ each nugget before entering into Cyc's knowledge base. Suppose the original axiom they jot down is "different horses don't share a leg"; a good default-true generalization of that might be "different physical objects don't share physical parts". Further generalization is questionable -- e.g., many objects _do_ share a cost, country of origin, owner, time of creation, etc. A software tool helps the ontologist semi-automatically walk up the hierarchy of types from "horse" to "physical object", and from "leg" to "physical part".
Another useful Cyc-powered tool calls the ontologist's attention to any existing knowledge Cyc has that appears to contradict this new assertion. That's usually a good thing to happen, not a bad one: it points the ontologist to tease apart the _contexts_ in which each axiom applies -- e.g.,
the years in which each held true -- and have them each asserted only in the appropriate context (and its specializations). Even with those Cyc-powered KB-building tools, it has taken a coherent team of logicians and programmers four decades, 2000 person-years, to produce the current Cyc KB. Cycorp's experiments with larger-sized teams generally showed a net _decrease_ in total productivity, due to lack of coherence, deeper reporting chains, and so on.
The Cyc reasoner produces a complete, auditable, step-by-step trace of its chain of reasoning behind each pro- and con- argument it makes, including the full provenance of every fact and rule which was in any way used in each argument.
While natural language _understanding_ is an AI-hard problem, natural language _generation_ is more straightforward, and Cyc has templates that enable it to produce a passable English translation of anything which is expressed in its formal CycL representation language. Additional rules help it stitch the series of steps in an argument into a somewhat tilted but readable English paragraph. As we discuss in the next section, this might be another opportunity for synergy between Cyc and LLMs.
In describing how Cyc has tackled the 16 desiderata, a crucial question is #14: **how is Cyc able to operate sufficiently quickly**, often producing hundred-step-long arguments in seconds across such a huge KB expressed in higher order logic?
As we have already remarked, symbolic AI systems _other than Cyc often_ approach speed very differently. Many limit their KB (which is what led to stove-piped Expert Systems), or they limit the expressiveness of their representation of knowledge, or they limit the types of operations that can be performed on those (i.e., they adopt a more limited, but faster, logic.) E.g., they choose knowledge graphs or propositional logic which does not allow quantifiers, variables, modals, and so on.
Cyc addresses this by separating the _epistemological_ problem -- what does the system know? -- from the _heuristic_ problem -- how can it reason efficiently? Every Cyc assertion is expressed in a nice, clean, expressive, higher order logic language -- the Epistemological Level (EL) language, CycL -- on which, in principle, a general theorem prover could operate. Slowly. Very very slowly. But Cyc also allows multiple redundant representations for each assertion, and in practice it uses multiple redundant, specialized reasoners -- Heuristic Level (HL) modules -- each of which is much faster than general theorem-proving when it applies.
By 1989, Cyc had 20 such high-level reasoners [11]; today it has over 1,100.
* For example, one fairly general high-level reasoner is able to quickly handle transitive relations, such as "_Is Austin physically located in the Milky Way galaxy?_" Often, a particular sub-problem would require chasing through dozens of physicallyLocatedIn links if the theorem prover had to operate on those assertions expressed in higher order logic in the EL, in CycL. But the transitive-reasoning Heuristic-Level module redundantly stores the full closure of each transitive relation, ahead of time. When Cyc wants the answer to any such question, that reasoner can just look up the answer in one step rather than having the theorem prover search for a long chain (or the absence of such).
That reasoner was extremely general; a more specific one handles the case where a problem can be represented as \(n\) linear equations in \(n\) unknowns.
* A fairly narrow Heuristic-Level module recognizes quadratic equations and applies the quadratic formula.
* Another relatively narrow Heuristic-Level module recognizes a chemical equation that needs balancing and calls on a domain-specific algorithm to do that.
When confronted with a problem, all 1,100 reasoners are effectively brought to bear, and the most efficient one which can make progress on it does so, and the process repeats, over and over again, the "conversation" among the 1,100 Heuristic-Level modules continuing until the problem has been solved, or resource bounds have been exceeded (and work suspends on it). In principle (but see footnote) there is always the general resolution theorem prover with its hand raised in the back of the room, so to speak: it _always_ thinks it could apply, but it is the last resort to be called on because it always takes so long to return an answer.9
Footnote 9: Note written by the first author: “Something we don’t often talk about: We noticed empirically that the general theorem-proving reasoner actually took so long that over a million queries in a row that called on it, as a last resort, just timed out. Going back farther, we saw that that had happened for decades. So, about one decade ago, we quietly turned the general theorem prover off, so it never gets called on! The only impact is that Cyc sometimes runs a bit faster, since it no longer has that attractive but useless nuisance available to it.”
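The flavor of that dispatch can be sketched in a few lines of Python: the module below redundantly precomputes a transitive closure, as in the physicallyLocatedIn example above, and the `answer` loop calls the cheapest applicable reasoner that fits within the remaining resource budget. The class and method names are illustrative only, not Cyc's actual interfaces.

```python
class TransitiveClosureModule:
    """Redundantly stores the closure of a transitive relation ahead of time,
    so queries like physicallyLocatedIn(Austin, MilkyWay) are one lookup."""
    def __init__(self, closure):
        self.closure = closure                      # set of (x, y) pairs
    def can_handle(self, query):
        return query[0] == "physicallyLocatedIn"
    def cost_estimate(self, query):
        return 1                                    # constant-time lookup
    def solve(self, query, budget):
        return (query[1], query[2]) in self.closure

def answer(query, modules, budget):
    """Let every module bid; call the cheapest one that thinks it applies
    and fits within the remaining resource budget."""
    bidders = [m for m in modules if m.can_handle(query)
               and m.cost_estimate(query) <= budget]
    if not bidders:
        return None                                 # give up / suspend work
    best = min(bidders, key=lambda m: m.cost_estimate(query))
    return best.solve(query, budget)

closure = {("Austin", "Texas"), ("Austin", "MilkyWay"), ("Texas", "MilkyWay")}
print(answer(("physicallyLocatedIn", "Austin", "MilkyWay"),
             [TransitiveClosureModule(closure)], budget=10))   # True
```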
When Cyc is applied to a new practical application, it is sometimes the case that even when it gets the right answers, its current battery of reasoners turns out to be unacceptably slow. In that case, the Cyc team shows to the human experts (who are able to perform the task quickly) Cyc's step by step reasoning chain and asks them to introspect and explain to us how they are able to avoid such cumbersome reasoning. The result is often a new special-purpose Heuristic-Level reasoner, possibly with its own new, redundant representation which enables it to run so quickly. This is what happened, e.g., for a chemical reaction application, where a special notation for chemical equations enabled a special-purpose algorithm to balance them quickly.
The trap the Cyc team fell into was assuming that there would be just one representation for knowledge, in which case it would have to be \(n^{\text{th}}\)-order predicate calculus (HOL) with modals, because it is the only one expressive enough for all AGI reasoning purposes. Committing to that meant vainly searching for some fast general-purpose reasoning algorithm over HOL, which probably doesn't exist. To escape from the trap the Cyc team built up a huge arsenal of redundant representations and redundant reasoners, such that in any given situation one of the efficient reasoners is usually able to operate on one of those representations and make some progress toward a solution. The entire arsenal is then brought to bear again, recursively, until the original problem has been fully dealt with or given up on. That last point raises another aspect of how Cyc reasons quickly: it budgets resources, depending on the application (e.g., acceptable wait times during a conversation with a person), and interrupts reasoners who exceeded their bid on how long they would take, and simply won't bother calling on reasoners who know they will take too long.
One important aspect that permeates Cyc is _context --_ item 11 on our list. Philosophers [1] and AI researchers [10] have long advocated for introducing an extra argument in every logical assertion, to stand for the context in which that assertion holds true. In this tradition, Cyc makes each such context a first-class term in its language. Thus one can make assertions and rules _about_ contexts, such as the CanadianProfessional-Hockey context is a specialization of the CanadianSports, ProfessionalSports, Hockey, Post1900, and RealWorld contexts. Just as many Cyc rules pertain to devices, emotions, everyday actions, and so on, some Cyc rules pertain to -- and reason about -- contexts. One of the most important is knowing, if P is true in one context, C1, and P implies Q is true in another context, C2, then in which contexts C3 is it reasonable to expect Q to be true?
Each Cyc context, also called a Microtheory, has a set of domain assumptions which can be thought of as conjuncts for each of the assertions in that context. Because the common assumptions are factored out, the assertions in that context turn out to be much terser, and the reasoners are thereby able to operate very efficiently within the same context. E.g., consider the 2023 context: every assertion and rule doesn't need to start out "In the year 2023,...". This is even more critical since most contexts are _rich objects:_ there is an infinite number of things one _could_ say about them. Consider a narrow context like an auto accident. There is no end to the things we _could_ assert about it: the color of the hair of each driver, the brand of paint that got smudged, etc. etc. etc.
Asserting two statements in the same context means we have factored out all those shared details, some explicitly (e.g., the date and time of the car accident) the vast majority of which are not worth stating (the position of each blade of grass at the scene) and will never be stated. There are contexts for locations, times, cultures, performers, activities, and contexts for what a person or a group of people believe to be true10. Cyc can reason within the StarWars context, e.g., and name several Jedi, and not have a contradiction with the same question being asked in the RealWorld context and answering that there are no Jedi. Today there are about 10,000 contexts explicitly given names in Cyc's ontology; that number is kept _down_ thanks to Cyc functions which return contexts as their value, such as IntersectContexts, which obviate the need for reifying combinatorially more contexts.
Footnote 10: In general these are _counterfactual_ beliefs from the point of view of a broader context, or else we wouldn’t have needed to create that more specific belief context.
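A toy sketch of contextualized assertions with inheritance conveys the flavor of the machinery just described; the class below is ours and omits domain-assumption reasoning, context-lifting rules, and functions like IntersectContexts.

```python
class Microtheory:
    def __init__(self, name, parents=(), assumptions=(), assertions=()):
        self.name, self.parents = name, list(parents)
        self.assumptions = set(assumptions)   # factored-out shared conjuncts
        self.assertions = set(assertions)     # terse statements true here
    def visible(self):
        """Assertions holding in this context, inherited from all generalizations."""
        out = set(self.assertions)
        for p in self.parents:
            out |= p.visible()
        return out

real_world = Microtheory("RealWorld", assertions={"There are no Jedi."})
star_wars  = Microtheory("StarWarsMt", assumptions={"fictional: Star Wars canon"},
                         assertions={"Luke Skywalker is a Jedi."})
hockey     = Microtheory("CanadianProHockeyMt", parents=[real_world],
                         assertions={"Games have three periods."})

print("Luke Skywalker is a Jedi." in star_wars.visible())   # True, in-context
print("Luke Skywalker is a Jedi." in real_world.visible())  # False, no contradiction
print("There are no Jedi." in hockey.visible())             # True, inherited
```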
### #9, Defeasible
Almost all knowledge in Cyc's KB is merely true-by-default. Exceptions can be stated explicitly, for an individual or for an entire _type_. E.g., teachers today are typically sedentary, but gym teachers are an exception, but this particular person is an exception to that exception, but while recuperating... etc. What this means is that Cyc typically can find multiple "proofs" for an answer, and even multiple "disproofs" for the same answer. Those are in quotes because these are really just alternative lines of reasoning, pro- and con- arguments. That means that Cyc reasoning is based around argumentation, not proof, as the next point explains.
### #10, Pro- and Con- Arguments
When asked a question, Cyc gathers all the pro- and con-arguments it can find for all the answers that have at least one pro- or con- argument, and then applies meta-level rules to decide which arguments to prefer over which others. E.g., more specific ones trump more general ones. If most Spaniards primarily speak Spanish at home, but most residents of Vitoria primarily speak Basque, and Fred lives in Vitoria, then there is an _argument_ that Fred primarily speaks Spanish at home (he's a Spaniard) but a _preferred_ argument that he primarily speaks Basque. There is also a weak but very general argument Fred _doesn't_ speak Basque at home, namely that comparatively few people speak Basque. But the argument that he does speak Basque is preferred to that weak argument.
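The Vitoria example can be sketched as a tiny preference computation over competing default arguments, with specificity scores standing in for Cyc's meta-level preference rules; the numbers below are purely illustrative.

```python
# Each argument: (conclusion, specificity of the rule that produced it).
# Higher specificity = the rule applies to a narrower class: "resident of
# Vitoria" is more specific than "Spaniard", which is more specific than
# "person in general".
arguments = [
    ("speaks Spanish at home", 1),         # most Spaniards speak Spanish at home
    ("speaks Basque at home",  2),         # most Vitoria residents speak Basque
    ("does not speak Basque at home", 0),  # few people worldwide speak Basque
]

def preferred(args):
    """Meta-level rule: more specific arguments trump more general ones."""
    return max(args, key=lambda a: a[1])[0]

print(preferred(arguments))   # 'speaks Basque at home'
```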
## 4 Synergizing an LLM and CYC
The two types of AI's have different strengths and weaknesses.
* While Cyc's KB is both deep and broad, it is often not deep and broad _enough;_ while Cyc's natural language understanding and generation is good, it is often not good _enough,_ certainly not as good as ChatGPT or BARD which is able to chat (for at least a sequence of a few back and forth utterances) about, well, almost anything. And while Cyc can reason quickly, it is often not fast _enough,_ certainly not as fast as those LLM-trained chatbots who are able to always respond at acceptable human conversation speeds.
* The current LLM-based chatbots aren't so much understanding and inferring as remembering and espousing. They do astoundingly well at some things, but there is room for improvement in most of the 16 capabilities listed in Section 2, other than breadth and speed and what we might call "the wisdom of the crowd" type reasoning.
How could a system like Cyc help ameliorate this? More symmetrically, how could a knowledge-rich, reasoning-rich symbolic system like Cyc and an LLM work together, so as to be better than either can on its own? We see several opportunities for such synergy, some very short-term and some longer-term ones:
1. Symbolic systems such as Cyc as a Source of Trust, to reject false confabulations
LLMs present narratives that are so well-spoken they can make compelling statements that are actually false. Much of their content is true, of course, since the patterns of language reflect the real world. But it's a cloudy mirror, due to superficial falsehoods like analogies and metaphors, and less excusable ones like misinformation and disinformation. Cyc and LLMs might synergize by acting as devil's advocates for each
other, e.g. by asking each to explain the negation of what the other says. BARD is already doing a version of this, by offering to call on Google immediately afterwards to help correct some of what it might have just gotten wrong. Bard's continued confabulations are a reminder however, of how nontrivial this kind of integration is; such an integration should be an important focus of research.
### 2. Symbolic systems such as Cyc as a Source of Truth, to bias LLMs towards correctness

LLMs are trained on (some fraction of) the corpora of ten trillion sentences on the internet. But there is so much tacit common sense about the world that is assumed yet rarely or never explicitly expressed. For example, search for "Do people believe that cats can breathe?" and you'll get tens of millions of hits that answer _related_ questions, but not that one, because _of course_ people believe that! And everyone knows that everyone knows that. So why bother ever saying/writing it? Cyc is a repository of default-true common sense, including things like that (of course axiomatized much more generally than about cats and breathing!), which are so fundamental that it would be confusing or insulting for a person to explicitly say or write them when communicating with another human being. Cyc's inference capabilities can be viewed as a compression algorithm allowing us to implicitly embed exponentially more default-true statements than what is explicitly stated in the KB. Cyc could be asked to proactively reason forward and generate _millions or billions of default-true statements_ that could serve as the basis for training future LLMs to be more biased toward common sense and correctness. One could also envision using Cyc, if it were enhanced with more robust natural language understanding, to filter out inputs to an LLM, inputs that Cyc infers to be false, before the LLM is exposed to and trained on those falsehoods. In that way, the falsehoods never even get into the model. Even better, Cyc might then be able to "fix up" the input, e.g., by more properly contextualizing it, into something which _is_ true.
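A pipeline along these lines might look roughly like the following sketch; `cyc` is a hypothetical reasoner client, and `parse`, `disprove`, and `recontextualize` are illustrative names rather than an existing Cyc API:

```python
def vet_training_corpus(sentences, cyc):
    """Speculative filter: vet candidate LLM training sentences against a Cyc-like KB."""
    kept, repaired, rejected = [], [], []
    for s in sentences:
        logical_form = cyc.parse(s)               # NL -> CycL (hypothetical call)
        if logical_form is None:
            kept.append(s)                        # not understood: pass through unchanged
        elif cyc.disprove(logical_form):          # provably false in the relevant context
            fixed = cyc.recontextualize(logical_form)   # e.g., re-scope as fiction or belief
            (repaired if fixed is not None else rejected).append(s)
        else:
            kept.append(s)                        # consistent with the KB: keep it
    return kept, repaired, rejected
```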
### 3. LLMs as generators of candidate assertions/rules to add to symbolic systems such as Cyc

LLMs can be asked to translate natural language sentences into CycL, and they already are able to do that! Here is what happened when GPT-3 was asked to write a CycL sentence that means _"Did you touch a blue object located in the capital of France on September 25th, 2022?"_ It responded:

```lisp
(thereExists ?object
  (and
    (isa ?object BlueObject)
    (located ?object (CityFn ParisFrance))
    (thereExists ?date
      (and
        (isa ?date Date)
        (dayOfDate ?date 25)
        (monthOfDate ?date 9)
        (yearOfDate ?date 2022)
        (touched ?object ?date)))))
```

This at first glance amazed the Cyc team -- it looked for a moment like it might be close to correct. But it turns out to have several serious mistakes and garblings, such as the fact that the thing touching the blue object is the _date_; several predicate names are wrong, or have the wrong argument number and type, and so on; and there is no referent for "you" in the original English sentence. But it might be close _enough_: it might be cost-effective to have Cyc do the translation starting from that garbled-CycL (plus the original English sentence), rather than starting only with the original English sentence. I.e., it might turn out to be much easier to get Cyc to understand English this way rather than trying to get Cyc to transform open-ended English sentences all on its own, in one step.
This leads to a few additional, potentially game-changing ideas:
3.a. The first is that we could ask the LLM to translate each sentence it was trained on, and/or can generate (obviously starting with relatively tiny subsets of such) into CycL, one sentence or paragraph at a time. Then, given the fractured-CycL sentence(s) suggested by the LLM, we use Cyc to (as automatically as possible) turn it into "real" CycL and contextualize it. During the process, we repeatedly have Cyc ask itself: Can you already prove this? Can you already disprove this? Can existing content (in Cyc) be strengthened to accommodate this? And so on.
3.b. The second idea here is based around the fact that Cyc already has a good CycL-to-English translation (NLG) capability. We could ask an LLM to translate into CycL the generated English for each CycL assertion that Cyc already knows, i.e., each assertion which is already in the Cyc KB and/or which Cyc can infer (obviously starting with relatively tiny subsets of such). Since we already have a good CycL translation of that sentence -- it's what we started with! -- we could then _correct_ the LLM by giving it what we know to be a good CycL translation. This should make the process described in the previous paragraph, 3.a., increasingly accurate, increasingly correct.
3.c. These two processes can be interleaved, and a third process would then be to augment Cyc's NLG so that it generates better, more natural-sounding English sentences. This would initially involve human editors who decide what's "better", and Cyc-fluent knowledge engineers who then tweak and add to Cyc's set of NLG templates to get Cyc to produce the better English translations. Eventually, it's possible that the LLM might get sufficiently proficient at translating CycL into English that Cyc's current NLG utility, and this process 3.c, would become unnecessary. The other way this could go would be for the LLM's ability to generate differently-worded English sentences that mean the same thing, to incrementally improve Cyc's NLG capabilities11, gradually giving it more and more variety and naturalness in the English that Cyc generates.
Footnote 11: The simplest, but still useful, example of this would be when the LLM generates a sentence using some English word that wasn’t even known to Cyc at that time.
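The round trip described in 3.a and 3.b could be orchestrated roughly as follows; `cyc` and `llm` are hypothetical clients, and every method name below is illustrative rather than an existing interface:

```python
def build_correction_pairs(cyc, llm, kb_assertions):
    """3.b: use Cyc's known-good CycL plus its NLG output to correct the LLM."""
    corrections = []
    for cycl in kb_assertions:                    # assertions already in (or inferable from) the KB
        english = cyc.generate_english(cycl)      # Cyc's existing CycL-to-English NLG
        draft = llm.translate_to_cycl(english)    # the LLM's attempted translation back
        if not cyc.equivalent(draft, cycl):
            corrections.append((english, draft, cycl))   # supervised correction signal
    return corrections

def absorb_new_sentence(cyc, llm, english):
    """3.a: start Cyc's understanding from the LLM's fractured-CycL draft."""
    draft = llm.translate_to_cycl(english)
    candidate = cyc.repair_cycl(draft, hint=english)     # hypothetical repair step
    if candidate is not None and not cyc.disprove(candidate):
        return cyc.contextualize(candidate)              # place it in the right context
    return None
```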
### 4. Symbolic systems such as Cyc as a Source of Inference to dramatically extend LLM coverage
LLMs are capable of recapitulating statements or extending patterns and analogies but both of those are impoverished compared to the novel multi-step inferences that humans can produce from a set of "given" statements.
So the idea here is: Given a set of related statements, e.g., the recent N things that were said in this LLM dialogue with this user, have Cyc work to infer (deduce, induce, abduce, analogize, etc.) new consequences, feed those to the LLM, have it (or Cyc) translate those into English.
The Cyc reasoner therefore could act as a potentially exponential amplifier over the latent content already expressible by the LLM. This would naturally follow once the previous two synergy capabilities, (2) and (3), above, are realized.
LLMs usually contain a "feedforward" layer to help them generalize from their input text to a higher level of abstraction. Cyc could use its understanding of the input text to add a _semantic_ feedforward layer, thereby extending what the LLM is trained on, and further biasing the LLM toward truth and logical entailment. (WolframAlpha's extension to ChatGPT is somewhat in this spirit.)
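One way such an amplification loop could be wired up is sketched below; again, `cyc` and `llm` are hypothetical clients with illustrative method names:

```python
def semantic_feedforward(cyc, llm, dialogue, max_new=20):
    """Speculative sketch: extend an LLM dialogue with consequences Cyc can infer."""
    recent = dialogue[-10:]                                   # the recent N utterances
    facts = [f for f in (cyc.parse(u) for u in recent) if f is not None]
    consequences = cyc.forward_infer(facts, limit=max_new)    # deduce, induce, abduce, analogize
    extra = [cyc.generate_english(c) for c in consequences]   # render them back into English
    return llm.continue_dialogue(dialogue, extra_context=extra)
```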
### 5. Symbolic systems such as Cyc as a Source of Explanation to provide audit and provenance support
LLMs' inability to soundly explain how they came to their conclusions and what actual trustworthy (or not) knowledge is the basis for those intermediate steps renders them unsuitable for large classes of applications that require soundness, trust, and auditability, such as medical decision-making applications. Cyc can provide exactly this; provenance and explicit justification are the superpower of machine reasoning over non-symbolic LLM representation.
Another example of this kind of synergy occurred in a project Cycorp and the Cleveland Clinic did for NIH's National Library of Medicine. Databases have been built up of patients' mapped genomes and the disease that brought them into the hospital. Using that, one can statistically learn correlations between point mutations in their DNA and their disease. But such A-Z correlations turned out to be enormously noisy. Enter Cyc, which took each of those _hypotheses_ and tried to find a long chain of reasoning that could account for that (e.g., this mutation is next to this gene, which when expressed would be this protein, which in turn could catalyze this reaction, which...ten steps later interfered with bone resorption, which is why the patient developed early-onset osteoporosis.) Such causal pathways generally made predictions along the way (e.g., that the patient would also have slightly elevated bioactive vitamin-D levels) which could then be independently confirmed or disconfirmed in the patient database. Asking Cyc to explain a particular A-Z correlation
was exponentially faster than just asking it to ruminate and propose plausible causal chains in an unguided fashion.
In conclusion, there have been two very different types of AI being developed for literally generations, and each of them is advanced enough now to be applied -- and each _is_ being applied -- on its own; but there are opportunities for the two types to work together, perhaps in conjunction with other advances in probabilistic reasoning and working with incomplete knowledge, moving us one step further toward a general AI which is worthy of our trust.
| 生成式AIは、AIにおける最も人気のある現在の方法であり、大規模言語モデル(LLM)からなる。LLMは、可能性のある出力を生成するようにトレーニングされ、必ずしも正確ではない。彼らの能力は驚くほどだが、論理的な側面では欠如しており、LLMは完全に信頼できない。さらに、結果としては予測不可能で解釈不能な傾向がある。私たちは未来のAIの16の要件を提示し、現在のアプローチの多くの制限に対処できる、理論的に異なるAIアプローチを議論する。つまり、明確な知識とルールを用いてAIを教育し、自動的に推論エンジンが所有する知識の論理的 entailmentを導き出す。このように作成された長文は、完全なステップバイステップの論理的展開が常に可用であり、各ステップの知識の使用の起源も文書化および監査できる。しかし、課題は、論理言語が英語で言えることのあらゆる意味を |
2309.13210 | Large-area polycrystalline $α$-MoO3 thin films for IR photonics | In recent years, excitation of surface phonon polaritons (SPhPs) in van der
Waals materials received wide attention from the nanophotonics community.
Alpha-phase Molybdenum trioxide ($\alpha$-MoO3), a naturally occurring biaxial
hyperbolic crystal, emerged as a promising polaritonic material due to its
ability to support SPhPs for three orthogonal directions at different
wavelength bands (range 10-20 $\mu$m). Here, we report on the fabrication and
IR characterization of large-area (over 1 cm$^2$ size) $\alpha$-MoO3
polycrystalline films deposited on fused silica substrates by pulsed laser
deposition. Single alpha-phase MoO3 films exhibiting a polarization-dependent
reflection peak at 1006 cm$^{-1}$ with a resonance Q-factor as high as 53 were
achieved. Reflection can be tuned via changing incident polarization with a
dynamic range of $\Delta$R=0.3 at 45 deg. incidence angle. We also report a
polarization-independent almost perfect absorption condition (R<0.01) at 972
cm$^{-1}$ which is preserved for a broad angle of incidence. The development of
a low-cost polaritonic platform with high-Q resonances in the mid-infrared
(mid-IR) range is crucial for a wide number of functionalities including
sensors, filters, thermal emitters, and label-free biochemical sensing devices.
In this framework our findings appear extremely promising for the further
development of lithography-free, scalable films, for efficient and large-scale
devices operating in the free space, using far-field detection setups. | Maria Cristina Larciprete, Daniele Ceneda, Chiyu Yang, Sina Abedini Dereshgi, Federico Vittorio Lupo, Maria Pia Casaletto, Roberto Macaluso, Mauro Antezza, Zhuomin M. Zhang, Marco Centini, Koray Aydin | 2023-09-22T23:14:49 | http://arxiv.org/abs/2309.13210v1 | # Large-area polycrystalline \(\alpha\)-MoO3 thin films for IR photonics
###### Abstract
In recent years, excitation of surface phonon polaritons (SPhPs) in van der Waals materials received wide attention from the nanophotonics community. Alpha-phase Molybdenum trioxide (\(\alpha\)-MoO3), a naturally occurring biaxial hyperbolic crystal, emerged as a promising polaritonic material due to its ability to support SPhPs for three orthogonal directions at different wavelength bands (range 10-20 \(\upmu\)m). Here, we report on the fabrication and IR characterization of large-area (over 1 cm\({}^{2}\) size) \(\alpha\)-MoO\({}_{3}\) polycrystalline films deposited on fused silica substrates by pulsed laser deposition. Single \(\alpha\)-phase MoO\({}_{3}\) films exhibiting a polarization-dependent reflection peak at 1006 cm\({}^{-1}\) with a resonance Q-factor as high as 53 were achieved. Reflection can be tuned via changing incident polarization with a dynamic range of \(\Delta\)R=0.3 at 45\({}^{\circ}\) incidence angle. We also report a polarization-independent almost perfect absorption condition (R\(<\)0.01) at 972 cm\({}^{-1}\) which is preserved for a broad angle of incidence. The development of a low-cost polaritonic platform with high-Q resonances in the mid-infrared (mid-IR) range is crucial for a wide number of functionalities including sensors, filters, thermal emitters, and label-free biochemical sensing devices. In this framework our findings appear extremely promising for the further development of lithography-free, scalable films, for efficient and large-scale devices operating in the free space, using far-field detection setups.
Keywords: optical phonons, vdW materials, polarization tuning, Reststrahlen band, hyperbolic materials.
## 1 Introduction
Advances in nanophotonics have enabled the miniaturization of optical components due to the exploitation of surface plasmon polaritons (SPPs) [1] in the visible range that can strongly localize electromagnetic fields to small volumes. Recently, doped semiconductors [2; 3] and graphene [4] have been proposed to extend SPPs to the 2-8 \(\upmu\)m range. Also, nanoantennas have been employed to achieve electric field localization in the mid-IR range for sensing applications.
However, in order to take full advantage of surface-enhanced infrared absorption (SEIRA) techniques, a precise positioning of the analyte is required [5].
Moving toward the 8-20 \(\upmu\)m wavelength range, where vibrational absorption peaks provide relevant information on molecular bonds, the SPP approach is less effective due to the poor field confinement at longer wavelengths [6]. Moreover, for the development of a complete IR photonic platform, miniaturization, and integration of optical components with the chip-scale platforms using facile fabrication techniques [7, 8] is highly desired. A conventional IR polarizer is nowadays designed using state-of-the-art holographic techniques [9] onto IR transparent support (CaF\({}_{2}\), ZnSe, BaF\({}_{2}\)) with typical transmission losses of about 30%. Furthermore, the surface of such holographic grid polarizers is extremely delicate, and touching is absolutely to be avoided. Similarly, optical components such as polarization rotators have been realized using artificial metasurfaces [10] in the wavelength range up to 10 \(\upmu\)m. Functionality at longer wavelengths is achieved using a combination of two parallel polarizers and by tilting the plates with respect to each other. Given the complexity of these components, their integration with the chip-scale platform can, therefore, be prohibitive and the challenge for an efficient, integrated and robust IR photonic platform persists.
Recent promising solutions are based on the exploitation of polar materials [11] including ultra-thin van der Waals (vdW) materials such as MoO\({}_{3}\), MoS\({}_{2}\), Ga\({}_{2}\)O\({}_{3}\), hBN [12, 13]. Besides their strong anisotropy related to optical phonons (ideal for polarization rotation and control), they allow strong field localization by the excitation of surface waves called surface phonon polaritons (SPhPs), achieved through the coupling of the electromagnetic field with lattice vibrations. Several works reported on the great potential of polar materials for mid-IR sensing applications up to the terahertz (THz) regime [14, 15] and for the realization of compact IR photonic devices [16, 17].
Among vdW materials, Molybdenum trioxide (\(\alpha\)-MoO\({}_{3}\)) is attracting a great deal of attention [18] as it supports SPhPs in three different wavelength bands for the three orthogonal directions (range 10-20 \(\upmu\)m), rendering this material a naturally hyperbolic and biaxial material [19, 20]. Increased versatility can be obtained by combining it with other materials. Recent results show that \(\alpha\)-MoO\({}_{3}\) can be combined with vanadium dioxide, (VO\({}_{2}\), a phase change material that undergoes insulator to metal phase transition at a temperature of 68\({}^{\circ}\) C) [21, 22] in order to dynamically tune the polariton resonances. A metamaterial approach has also been proposed based on the random nanostructuring of \(\alpha\)-MoO\({}_{3}\) with subwavelength dielectric elements (i.e., air ellipsoidal inclusions). This scheme could increase design versatility as well as tuning and hybridization of polariton modes [23].
Despite huge potential of this promising material, the development of a novel, highly versatile and compact \(\alpha\)-MoO\({}_{3}\)-based IR photonics platforms is hampered by the lack of availability of high-quality scalable films and/or multilayer stacks. \(\alpha\)-MoO\({}_{3}\) for IR photonics and polaritonics is mostly used in the form of physical vapor deposition (PVD)-grown crystalline flakes. Although flakes allow exciting results in terms of hyperbolic phonon polariton excitation along x- and y-directions, there are several drawbacks that might limit the wide adaption of flakes geometries: the existing alignment techniques for flakes with a few tens of nanometers thickness are challenging; the flakes often have irregular shape preventing a good propagation of SPhPs; the dimensions of the flakes are usually limited to few hundreds of \(\upmu\)m at most, therefore the large area or integrated/multifunctional devices are not practical. The fabrication process for obtaining such flakes is very complex, requiring investigation of strategies to create efficient conditions for films growth such as high temperatures (e.g., 780\({}^{\circ}\)C [24]). Furthermore, a successive mechanical exfoliation process for transferring the desired MoO\({}_{3}\) 2D film onto a substrate of interest is needed [25]. In reference [26] a high confinement of near field signal corresponding to a Q factor of 40 has been reported in \(\alpha\)-MoO\({}_{3}\) covering submicron-width trenches. These flakes are, however, difficult to handle and integrate in a practical device while keeping low fabrication costs. Moreover, they are relatively small for far-field applications since their dimensions often reach the diffraction limit of 10-20 \(\upmu\)m range IR radiation thus requiring expensive and state of the art near-field detecting schemes.
The realization of a single \(\alpha\)-phase, oriented, large-area MoO\({}_{3}\) film is still an open technological challenge. Atomic layer deposition (ALD) has been used to obtain good quality \(\alpha\)-phase MoO\({}_{3}\) films, but only after 500 \({}^{\circ}\)C post-growth annealing [27]. This was necessary because deposition temperatures higher than 200 \({}^{\circ}\)C interfered with the stability of the employed precursor. Furthermore, a long annealing time (\(>\) 1h) was required when the annealing was performed at temperatures lower than 500\({}^{\circ}\)C. This means that, according to [28], ALD cannot be used to perform \(\alpha\)-phase MoO\({}_{3}\) film deposition in one single step, making a possible integration of the MoO\({}_{3}\) film within a multilayer structure more difficult. ALD is, furthermore, an expensive tool employing hazardous precursor gases, and even higher temperatures are needed for depositing MoO\({}_{3}\) by sublimation [29].
Conventional sputtering techniques were also employed to deposit MoO\({}_{3}\) films at room temperature. This led to a multiphase crystalline MoO\({}_{3}\) film only after a post-growth annealing process [28-30]. When the annealing is performed at temperatures greater than 400 \({}^{\circ}\)C, it produces monoclinic \(\beta\)-phase MoO\({}_{3}\) films, which are not useful for exploiting an optical phonon response [31]. Pulsed laser deposition (PLD) is a versatile and low-cost deposition technique which has already been
employed for the deposition of \(\alpha\)-phase MoO\({}_{3}\) films at 500 \({}^{\circ}\)C [32] and other metal oxides such as VO\({}_{2}\) [33], and ZnO [34]. Compared with the exfoliation technique, it allows depositing large area MoO\({}_{3}\) films, which can be much more easily handled and integrated into a multilayer structure. However, to the best of our knowledge, a detailed IR characterization aimed at the identification of possible applications of the obtained films has not been reported so far for large area MoO\({}_{3}\) films deposited either by PLD or ALD. In the following, we show that PLD can be employed to obtain \(\alpha\)-MoO\({}_{3}\) films at lower temperatures (e.g. 400 \({}^{\circ}\)C), without using harmful precursor gases normally employed by ALD and without the need for any post-growth annealing. Optical IR reflection spectra reveal a remarkably enhanced tunability of the reflection peak related to the z-axis phonon response as a function of the incident electric field polarization. Moreover, a polarization-independent perfect absorption condition is achieved for a broad angle of incidence. These features are not displayed by a single-crystal flake. Our results show, for the first time, interesting possibilities for large-scale, lithography-free, polycrystalline MoO\({}_{3}\) films to be employed for IR signal management.
## 2 Sample Fabrication
### MoO\({}_{3}\) Deposition.
The mainly investigated structure in this study is composed of a 2200 nm (average thickness) MoO\({}_{3}\) film deposited on a fused silica substrate (Figure 1a) using pulsed laser deposition at 400\({}^{\circ}\)C and 0.1 mbar of oxygen pressure. The PLD system employed uses a Q-switched tripled Nd:YAG laser (Quantel mod. YG78C20, \(\lambda\) = 355 nm) generating 6 ns width pulses with an energy of 80 mJ per pulse [32; 33; 35]. The density of energy was maintained at 1.2 J cm\({}^{-2}\), and the repetition rate was 4 Hz. The MoO\({}_{3}\) target was a 1-inch diameter, 0.25-inch-thick disk (purity 99.9%).
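As a quick cross-check of these parameters (our own back-of-the-envelope arithmetic, not a figure quoted by the authors), the pulse energy and fluence imply a laser spot area on the target of

\[A_{\rm spot}=\frac{E_{\rm pulse}}{F}=\frac{80\text{ mJ}}{1.2\text{ J cm}^{-2}}\approx 0.07\text{ cm}^{2},\]

while the 4 Hz repetition rate corresponds to an average delivered power of \(P_{\rm avg}=E_{\rm pulse}\,f_{\rm rep}=80\text{ mJ}\times 4\text{ Hz}\approx 0.3\) W.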
Before each deposition, the substrates were cleaned in an ultrasonic bath with acetone, subsequently rinsed with isopropanol and then dried with compressed air. After cleaning, each substrate was clamped onto an electrical heater, which allows achieving temperatures as high as 800 \({}^{\circ}\)C. The heater was then placed inside a vacuum bell jar where oxygen gas can be introduced through an electromechanical valve to maintain the desired pressure.
The PLD deposition yields better crystallinity than sputtering due to its higher kinetic energy of ablated species. Furthermore, the PLD setup allows extremely versatile deposition conditions. Thus, the proper choice of deposition parameters and the resulting fabrication constraints is of crucial importance. In the present work, we focus attention on the best choice of parameters for the narrow band IR polarization filter functionality.
### Structural and Morphological Characterization.
X-ray diffraction (XRD) measurements were performed at room temperature to evaluate the crystalline structure of the deposited layers. XRD analysis was performed by using a D5005 diffractometer (Bruker AXS, Karlsruhe, Germany) equipped with a Cu K\(\alpha\) (1.5406 Å) source and operating at 40 kV and 30 mA. The following experimental conditions were used: 5s acquisition time, 0.05\({}^{\circ}\) step in a 5\({}^{\circ}\) - 90\({}^{\circ}\) 2\(\Theta\) angular range. XRD patterns showed that the high-quality MoO\({}_{3}\) films deposited at 400\({}^{\circ}\)C exhibited the stable orthorhombic \(\alpha\)-phase of MoO\({}_{3}\), as shown in Figure 1(b). For the sake of completeness, we include in the supporting material the XRD pattern for a similar sample, deposited at lower temperature (i.e., 200 \({}^{\circ}\)C), showing a monoclinic-only phase of the MoO\({}_{3}\) film (Figure S1).
The surface morphology of the MoO\({}_{3}\) thin films has been characterized by using an Anfatech high speed atomic force microscope (AFM). Consistently with X-ray diffraction measurements, AFM images of surface morphology shown in Figures 1c and 1d revealed a grain distribution in the thin film. The average grain size is around 400 nm and root-mean-square (RMS) roughness is about 100 nm. After the deposition, the film thickness was assessed by profilometry using a Dektak 150 profilometer. The average thickness was found to be approximately 2200 nm. Details on the profilometer measurements and a picture of the Sample are reported in Figure S2 of the supporting material file.
## 3 Results and Discussion
### Polarization-dependent reflection measurements.
IR reflection measurements have been performed using a FT-IR interferometer (Invenio-R, Bruker) in the spectral range 6000-400 cm\({}^{-1}\). The IR source was a glow-bar while the detector is based on deuterated triglycine sulfate (DTGS) pyroelectric detector.
A total of 64 interferograms were acquired for each measurement, with a spectral resolution of 1 cm\({}^{-1}\). A sample area of 3x3 mm\({}^{2}\) was selected during IR data acquisition using knife-edge apertures. The FT-IR platform is equipped with a reflectance unit allowing to set the angles of incidence and reflectance, from almost normal incidence (about 13\({}^{\circ}\)) to grazing angles (85\({}^{\circ}\)) as illustrated in Figure (2a). The polarization state of incident light was selected using a holographic polarizing filter with a motorized mounter. Two different sets of measurements were performed with incidence angles of 15\({}^{\circ}\) and 45\({}^{\circ}\), respectively. Specifically, the reflectance spectra were recorded as a function of different linear polarization states of the incoming light. The measured spectral reflectance curves for different incidence angles and polarization of the incoming beam are shown in Figure (2b) and Figure (2c) for 15\({}^{\circ}\) and 45\({}^{\circ}\) incidence angles, respectively. Here 0\({}^{\circ}\) polarization angle stands for p-polarized light while 90\({}^{\circ}\) stands for s-polarized light.
From Figure (2b) we note that the polycrystalline nature of the laser-deposited MoO\({}_{3}\) simultaneously unveils, also at quasi-normal incidence, the three Reststrahlen bands associated with the alpha phase of MoO\({}_{3}\): the x-Reststrahlen band, corresponding to the frequency range from 820 cm\({}^{-1}\) to 972 cm\({}^{-1}\); the \(y\)-Reststrahlen band, extending at lower frequencies, between 545 cm\({}^{-1}\) and 851 cm\({}^{-1}\); and the \(z\)-Reststrahlen band, which is located between 962 cm\({}^{-1}\) and 1010 cm\({}^{-1}\) and is also partially overlapped with the Reststrahlen band of the glass substrate (fused silica) between 1000 cm\({}^{-1}\) and 1300 cm\({}^{-1}\) [18]. Moreover, the polarization-resolved set of measurements shows that, at quasi-normal incidence, the sample exhibits negligible in-plane anisotropy.
Figure 1: (a) Sketch of investigated sample. (b) X-Ray diffraction (XRD) pattern of a MoO\({}_{3}\) film deposited onto fused silica by PLD at 400 \({}^{\circ}\)C and 0.1 mbar oxygen pressure. The assigned peaks correspond to the orthorhombic phase of MoO\({}_{3}\) (ICDD 01-078-4612 card). (c-d) AFM images of a MoO\({}_{3}\) film deposited onto fused silica by PLD: (c) image area 10 \(\times\) 10 \(\upmu\)m\({}^{2}\); (d) image area 5\(\times\) 5 \(\upmu\)m\({}^{2}\).
An ABB Bomem FTLA 2000 FT-IR [36] was also used to measure the reflectance of the samples twelve months after the sample growth. The results, presented in Figure S3, agree very well with each other.
We note that the three RBs are contiguous in frequency and their sequential overlaps give rise to interesting spectral features: a) a polarization-independent perfect absorption condition at 972 cm\({}^{-1}\) (measured reflectivity less than 1%); b) a polarization-tunable narrow-band reflection peak at 1006 cm\({}^{-1}\). The behaviors of these two features are completely different when we consider a 45\({}^{\circ}\) incidence angle. Results are displayed in Figure (2c). We note that the perfect absorption condition is almost preserved for both p- and s-polarizations; at 45\({}^{\circ}\) the experimental minimum reflectivity for both s- and p-polarization ranges between 1% and 2%. On the other hand, a strong modulation of the MoO\({}_{3}\) film infrared spectral features with the polarization of the incoming light (Figure 2d) has been experimentally observed at 1006 cm\({}^{-1}\). Rotating the polarization state of the incoming light (as highlighted in the legend) modifies both resonance intensity and width. It is worth noting that the polarization-dependent reflection peak at \(\omega_{\rm max}\)=1006 cm\({}^{-1}\) with a full width at half maximum (FWHM) \(\Delta\omega\)=17 cm\({}^{-1}\), corresponding to a quality factor as high as Q=\(\omega_{\rm max}\)/\(\Delta\omega\) \(\sim\)60, is obtained in a lithography-free polar film. We note that the reflection peak is not a pure Lorentzian resonance. In order to provide a more accurate evaluation of the resonance linewidth we considered two Lorentzian-shaped curves respectively fitting the inner and the outer part of the experimental data. Results are reported in the supporting material (Figure S4): the FWHM of the experimental
Figure 2: (a) Sketches of investigated experimental configuration; polarization-dependent (0\({}^{\circ}\)=p-pol, 90\({}^{\circ}\)=s-pol) reflection FT-IR spectra measured at (b) 15\({}^{\circ}\) and (c) 45\({}^{\circ}\) incidence angle from a \(\alpha\)-MoO\({}_{3}\) film, grown on fused silica substrate using pulsed laser deposition; (d) surface plot of FT-IR reflection signal as a function of frequency and different polarization states of the incoming beam, measured at 45\({}^{\circ}\) incidence angle.
resonance has then been retrieved as the average of the FWHMs of the two Lorentzian curves, with the maximum semi-dispersion taken as its uncertainty: FWHM\(\pm\)\(\Delta\)(FWHM) = (19 \(\pm\) 3) cm\({}^{-1}\). Thus the Q factor has been evaluated as Q\(\pm\)\(\Delta\)Q = 53\(\pm\)8. We finally include in the supporting material (Figure S5) the reflection spectra of the previously mentioned monoclinic \(\beta\)-MoO\({}_{3}\) film (XRD pattern depicted in Figure S1) for an incidence angle of 45\({}^{\circ}\) and several polarization angles. We note that the high-Q polarization-dependent reflection peak at 1006 cm\({}^{-1}\) does not appear. Indeed, this feature is specifically related to the \(\alpha\)-MoO\({}_{3}\) optical phonon along the crystal z-axis (OPh\({}_{z}\)) [18].
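For clarity, the quoted quality factors follow directly from the peak frequency and the linewidths given above (this is just the arithmetic restated):

\[Q=\frac{\omega_{\rm max}}{\Delta\omega}=\frac{1006\text{ cm}^{-1}}{17\text{ cm}^{-1}}\approx 59,\qquad Q=\frac{1006}{19}\approx 53,\quad\Delta Q\approx Q\,\frac{\Delta(\mathrm{FWHM})}{\mathrm{FWHM}}=53\times\frac{3}{19}\approx 8.\]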
### Theoretical models.
Theoretical modeling of the optical properties of the polycrystalline \(\alpha\)-MoO\({}_{3}\) film is performed considering the following observations. (1) The AFM images of surface morphology reported in Figures (1c) and (1d) show that the grain sizes are much smaller than the infrared wavelengths in the measurements. (2) The almost negligible in-plane anisotropy demonstrated by the reflectance measurements at 15\({}^{\circ}\) angle of incidence (Figure 2b) implies that the crystallite grains in the material are nearly randomly oriented. (3) The thickness of the \(\alpha\)-MoO\({}_{3}\) film varies from about 2000 nm to 2300 nm, as provided by the profilometer (see Figure S2 of the supporting material file). Thus the material should be considered isotropic and homogeneous, and an average thickness of 2200 nm is used in the simulation for this sample.
Conventional dispersion analysis of homogeneous composites seeks the effective dielectric function based on the individual constituents, also known as the effective medium theory (EMT) [37]. For example, polycrystal materials with various crystallite sizes can be predicted using a modified EMT proposed by Mayerhofer [38, 39].
We also note that phonon frequencies in the polycrystalline sample can be shifted away from the TO and toward the LO position when compared with a perfect crystal. For instance, the resonance at 1006 cm\({}^{-1}\) in Figures 2b and 2c is shifted with respect to the bulk \(\alpha\)-MoO\({}_{3}\) value \(\omega_{\rm z,TO}=957\) cm\({}^{-1}\) [18]. This may be caused by the random orientation of crystallites in the polycrystalline material, or by the effect of air inclusions with an unknown volume fraction, since a rough surface of the \(\alpha\)-MoO\({}_{3}\) film is observed. Due to such unknown parameters, EMT such as the Maxwell-Garnett theory or an arithmetic average of the principal dielectric functions did not provide a satisfactory agreement with the measured spectrum.
Therefore, an isotropic Lorentz model with three oscillators, which roughly correspond to the frequency values of the oscillators in the \(x\)-, \(y\)-, and \(z\)-directions of a perfect crystal \(\alpha\)-MoO\({}_{3}\), is used to model the effective dispersion of the film:
\[\varepsilon(\omega)=\varepsilon_{inf}+\sum_{i=1}^{3}\frac{S_{i}\omega_{i}^{2}}{\omega_{i}^{2}-i\gamma_{i}\omega-\omega^{2}} \tag{1}\]
The resonance frequencies \(\omega_{i}\), oscillator strengths \(S_{i}\), and damping coefficients \(\gamma_{i}\) in Eq. (1), with \(i\) = 1,2,3, together with \(\varepsilon_{inf}\), are determined as fitting parameters by minimizing the RMS deviation between the calculated and measured reflection spectra. The fused silica substrate is modeled using the optical constants from Ref. [40].
Initially, we used [41] to calculate the reflectance for anisotropic stratified media. However, since the \(\alpha\)-MoO\({}_{3}\) film behaves as an isotropic medium, we used the standard transfer matrix method for multilayer structures [37] in order to improve the speed and the efficiency of the fitting algorithm. Results obtained with the two methods are in perfect agreement. A thickness of 2200 nm is used in the calculation. The described fitting procedure, applied to the experimental reflection spectra at 15\({}^{\circ}\) and 45\({}^{\circ}\) angles of incidence, allowed us to retrieve the parameters for the polycrystalline film with an RMS deviation of 0.054. The obtained parameters are listed in Table 1 with a typical error bound of 20% considering uncertainties in the measurements and fitting.
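To illustrate the model, the following Python sketch evaluates Eq. (1) with the fitted parameters of Table 1 and computes the single-layer (air/film/substrate) reflectance with the standard Airy/transfer-matrix formula at 45\({}^{\circ}\) incidence. It is our own minimal reimplementation, not the authors' code, and the constant substrate index is a crude placeholder for the tabulated fused-silica optical constants of Ref. [40]:

```python
import numpy as np

# Fitted Lorentz oscillator parameters from Table 1
S       = np.array([1.224, 0.100, 0.023])
w0      = np.array([560.0, 841.0, 1005.0])   # resonance frequencies [cm^-1]
gamma   = np.array([151.0, 33.0, 3.74])      # damping coefficients  [cm^-1]
eps_inf = 2.69

def eps_film(nu):
    """Effective isotropic dielectric function of the polycrystalline film, Eq. (1)."""
    nu = np.atleast_1d(nu).astype(float)[:, None]
    return eps_inf + np.sum(S * w0**2 / (w0**2 - 1j * gamma * nu - nu**2), axis=1)

def reflectance(nu, d_cm=2200e-7, theta0=np.deg2rad(45.0), n_sub=1.9 + 0.1j, pol="p"):
    """Airy reflectance of air / film / substrate; nu in cm^-1, film thickness in cm."""
    nu = np.atleast_1d(nu).astype(float)
    n0, n1, n2 = 1.0, np.sqrt(eps_film(nu)), n_sub
    cos0 = np.cos(theta0)
    cos1 = np.sqrt(1 - (n0 * np.sin(theta0) / n1) ** 2)   # Snell's law in the film
    cos2 = np.sqrt(1 - (n0 * np.sin(theta0) / n2) ** 2)   # ... and in the substrate
    if pol == "s":
        r01 = (n0 * cos0 - n1 * cos1) / (n0 * cos0 + n1 * cos1)
        r12 = (n1 * cos1 - n2 * cos2) / (n1 * cos1 + n2 * cos2)
    else:  # p polarization
        r01 = (n1 * cos0 - n0 * cos1) / (n1 * cos0 + n0 * cos1)
        r12 = (n2 * cos1 - n1 * cos2) / (n2 * cos1 + n1 * cos2)
    beta = 2 * np.pi * nu * n1 * cos1 * d_cm               # complex phase thickness
    r = (r01 + r12 * np.exp(2j * beta)) / (1 + r01 * r12 * np.exp(2j * beta))
    return np.abs(r) ** 2

nu = np.linspace(400, 1400, 1001)                          # spectral range [cm^-1]
R_p, R_s = reflectance(nu, pol="p"), reflectance(nu, pol="s")
```

In such a sketch, the fit itself would amount to adjusting the Lorentz parameters so as to minimize the RMS deviation between calculated and measured spectra, as described above.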
As mentioned, there exists a shift of the phonon frequencies towards LO for the resonators when compared with the values for a perfect crystal \(\alpha\)-MoO\({}_{3}\) reported in [18]: \(\omega_{\rm y,TO}=545\) cm\({}^{-1}\), \(\omega_{\rm x,TO}=821\) cm\({}^{-1}\) and \(\omega_{\rm z,TO}=957\) cm\({}^{-1}\).
Figure 3 shows the comparison between the modeled and the measured reflectance spectra for s- and p-polarized incident fields at 45\({}^{\circ}\) angle of incidence. In Figure 4 we compare the measured reflectance spectra at 15\({}^{\circ}\) of incidence with the theoretical predictions obtained with the fitted parameters evaluated from the previous data. In both cases, the model calculation is in reasonable agreement with the experiment, except for 600 cm\({}^{-1}\)\(<\omega<\) 1000 cm\({}^{-1}\). The fit of the sample deposited at 400 \({}^{\circ}\)C is not as good as that of the samples deposited at other temperatures. A better agreement could be obtained if more oscillators were used. However, this was not done due to the lack of information, and it is not the
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
 & \(S_{i}\) & \(\omega_{i}\) [cm\({}^{-1}\)] & \(\gamma_{i}\) [cm\({}^{-1}\)] \\ \hline
1 & 1.224 & 560 & 151 \\ \hline
2 & 0.100 & 841 & 33.0 \\ \hline
3 & 0.023 & 1005 & 3.74 \\ \hline
\multicolumn{4}{|c|}{\(\varepsilon_{inf}\) = 2.69} \\ \hline
\end{tabular}
\end{table}
Table 1: Fitted Lorentz oscillator parameters for the polycrystalline \(\alpha\)-MoO\({}_{3}\) film.
focus of this work. Overall, the RMS deviation between the model and experiments is within 0.06 for the entire spectrum. Details on the modeling of different samples and the effect of thickness are reported in Figures S6 and S7 of the supporting material, respectively.
As a final discussion, we focus on the small discrepancies between the theoretical and experimental curves. It is worth mentioning that the reflectance measurement is performed with a measuring spot diameter of the order of a few millimeters. The inhomogeneity of the sampling area, with varying surface roughness, will cause the phonon frequencies to shift to multiple positions. This effect leads to a deviation between the experimental data and a calculation based on a fixed-frequency oscillator model.
## 4 Conclusions
Flat optics requires the design of optical components as thin, planar films to be integrated into photonic platforms. The use of vdW materials leads to 2D flat optics with ultra-compact and tunable devices. Nevertheless, atomically thin optical elements suffer from alignment issues and allow few possibilities for far-field applications. To overcome this limitation and achieve control and tuning of the spectral features over a large area, we prepared and investigated \(\alpha\)-MoO\({}_{3}\) films using pulsed laser deposition. Although deposition parameter optimization is still required for the definition of a process to synthesize MoO\({}_{3}\) films with a high degree of crystallinity, our experimental findings show remarkable spectral features of the obtained polycrystalline films. Specifically, we reported both a polarization-independent perfect absorption behavior at 972 cm\({}^{-1}\), well preserved for a broad angular incidence range starting from normal incidence, and an enhanced tunability vs. light polarization angle of a narrow-band reflection peak at 1006 cm\({}^{-1}\) with a Q factor of 53\(\pm\)8. The obtained high dynamic range of \(\Delta\)R=0.3 with off-normal excitation (which can be improved with increased incidence angle, see Figure S8 in the supporting material) is reliable and repeatable and results in a wide tunability that may have great potential for label-free biochemical sensing applications and narrow-band detection in the IR range without the use of time- and cost-consuming lithographic processes. In particular, the investigated sharp resonance can find applications in identifying spectral markers associated with specific moieties such as phenylalanine [42] or tryptophan [43], to give some examples. We stress the point that the low fabrication cost and the Q-factor are not the only two relevant parameters to take into account. The possibility to operate with a large sensing area without the need for microscopy or near-field techniques is an important benefit. Our large-area samples only require a basic far-field source/detector scheme, making them suitable for low-cost, mass distribution devices.
## Acknowledgements
K.A. acknowledges support from the Air Force Office of Scientific Research under Award Number FA9550-22-1-0300. K.A. and M.C.L. also acknowledge the support from University La Sapienza for the Visiting Professor Program 2020 (Bando Professori Visitatori 2020). M.C., M.C.L., M.A. and Z.M.Z. acknowledge the KITP program 'Emerging Regimes and Implications of Quantum and Thermal Fluctuational Electrodynamics' 2022, where part of this work has been done. This research was supported in part by the National Science Foundation under Grant No. PHY-1748958.
Figure 4: Comparison between the modeled (dash-dotted line) and the measured (solid line) reflectance spectra for s- and p-polarized incident fields at 15\({}^{\circ}\) angle of incidence.
Figure 3: Comparison between the modeled (dash-dotted line) and the measured (solid line) reflectance spectra for s- and p-polarized incident fields at 45\({}^{\circ}\) angle of incidence.
C.Y. was supported by the National Science Foundation (CBET-2029892).
| 最近の研究では、バウアー・ウェルの材料における表面Phononラジエーションの励起(SPhPs)が、ナノ光学コミュニティから注目を集めています。α相の酸化モリブデン(α-MoO3)、自然に存在する双軸の双曲幾何学組織であり、異なる波長帯域で3つの直交方向にSPhPsをサポートすることができ、波長10〜20μmの範囲で、その特性が注目を集めています。ここでは、高周波発光ダイオードを用いて、ガラス基板にα-MoO3多結晶膜を多層で形成し、その特性をIR測定しています。α-MoO3の単層は、1006 cm<sup>-1</sup>での偏光依存性反射ピークを持ち、高Q値の53を達成しています。反射は、入射偏光を変更することで調整 |
2310.00469 | State of In Situ Visualization in Simulations: We are fast. But are we
inspiring? | Visualization of dynamic processes in scientific high-performance computing
is an immensely data intensive endeavor. Application codes have recently
demonstrated scaling to full-size Exascale machines, and generating
high-quality data for visualization is consequently on the machine-scale,
easily spanning 100s of TBytes of input to generate a single video frame. In
situ visualization, the technique to consume the many-node decomposed data
in-memory, as exposed by applications, is the dominant workflow. Although in
situ visualization has achieved tremendous progress in the last decade, scaling
to system-size together with the application codes that produce its data, there
is one important question that we cannot skip: is what we produce insightful
and inspiring? | Axel Huebl, Arianna Formenti, Marco Garten, Jean-Luc Vay | 2023-09-30T19:11:23 | http://arxiv.org/abs/2310.00469v1 | # State of In Situ Visualization in Simulations:
###### Abstract
Visualization of dynamic processes in scientific high-performance computing is an immensely data intensive endeavor. Application codes have recently demonstrated scaling to full-size Exascale machines, and generating high-quality data for visualization is consequently on the machine-scale, easily spanning 100s of TBytes of input to generate a single video frame. In situ visualization, the technique to consume the many-node decomposed data in-memory, as exposed by applications, is the dominant workflow. Although in situ visualization has achieved tremendous progress in the last decade, scaling to system-size together with the application codes that produce its data, there is one important question that we cannot skip: is what we produce insightful and inspiring?
in situ visualization, high-performance computing, particle-in-cell, reflections, directions, lightning presentation submissions
## 1. Introduction
In situ visualization is a tremendously powerful workflow to generate insight into the largest simulations run today. Recently, the 2022 Gordon Bell Prize-winning application WarpX (Bollman, 2022) was used to run in situ visualization on 552 nodes of the Frontier supercomputer (Bollman, 2022).
Immediate visualization of simulation dynamics at scale, from various camera angles, is powerful and helpful, providing answers to domain-science questions such as: Is a simulation evolving as planned? Are numerical options and resolution sufficiently set? Are many hardware or software issues/bugs appearing at scale? Yet, the scientifically most important question is: Does the visualization develop insight?
Gaining scientific insight from simulations is a complex and iterative process, with domain scientists connecting existing theory, empirical evidence and data from experiments and simulations. Visualizations can produce qualitative and quantitative representations of the dynamics at play. These representations can solidify understanding, guide the theoretical model building, help testing approximations and assumptions. An attractive visualization does help to communicate results and might inspire new scientific ideas.
Particularly for the latter part, domain scientists and audiences will compare the quality of their visualization with the state-of-the-art seen in everyday life: movies, games, advertising, etc. That is a high bar, given photo-realistic capabilities in these industries at high frame rates. Based on these expectations, can we produce in situ visualizations of scientific data that can be awe-inspiring and stimulate our minds? And - how much costs and/or scalability penalty are we willing to trade for this in high-performance computing?
## 2. Scalable Methods Wanted
Many algorithms offered in contemporary visualization frameworks (Bollman, 2022; Graf et al., 2018; Graf et al., 2019; Graf et al., 2019) are able to exploit some locality, e.g., by domain decomposing ray traces and iso-contour searches, composing results later on (Graf et al., 2019). Yet, advanced visualization techniques for casting shadows, tracing reflections, sorting collisions with objects, etc. are notoriously non-local and are thus challenging for multi-GPU implementations. Even volume-rendering more than one spatially overlapping source is non-trivial to do _in situ_, since established methods depend on a sampling technique that is hard to scale (Bollman, 2022). Additionally, many visualization techniques that scientists can use on single-node implementations would be highly desirable as distributed implementations for in situ frameworks: Taking Figure 1 as an example, if this was not _in situ_ generated, the authors would add multiple light sources, cast hard and soft shadows, select some isocontours for semi-transparent representation, and would smooth the generated iso-contours, by adding additional triangles that interpolate beyond the original resolution of the data source.
Consequently, there is a continued need for new, innovative, scalable in situ visualization methods. Both fast, low-overhead and higher-overhead (yet scalable), high-quality methods are needed. With respect to scalability, maybe there are tricks one can lend from other communities to generate artificial locality: occlude far-focus parts with mist as in gaming, simplify shadow masks and reflections, or aggressively exploit the adaptive resolution of mesh-refined data sources. Additionally, successful in situ implementations and workflows can likely be enhanced and benefit from evolution through standardization of APIs, vendor abstractions, render scene control and data descriptions, e.g., (Bollman, 2022; Graf et al., 2019; Graf et al., 2019).
## 3. Selected in Situ Visualization Needs
Adding to the challenges of addressing expectations set from offline rendering for in situ visualization, we surveyed the Beam, Plasma & Accelerator Simulation Toolkit (BLAST) (Bollman, 2022; Graf et al., 2018; Graf et al., 2019) codes and identified three selected needs specific to in situ visualization.
First, we noticed that domain scientists have to relearn how to express rendering scene descriptions for each in situ tool. Standardization is needed (Graf et al., 2019). Another approach might be domain-specific options in the simulation input language, automating the creation of visualization-configuration templates with mostly defaulted options - ready to be configured further for details by the inclined scientists when needed.
Second, video generation of iso-contours, glyphs (e.g., vectors placed in space), etc. often create "flicker" effects for surfaces and pointing of objects, simply based on the roughness of simulation data and steps selected for visualization. Research into transitions (or animations) between key/data frames with low memory overhead for HPC could be beneficial to reduce such effects.
Third, we also identified a commonly used algorithmic and simulation pattern for which in situ visualization would be ideally suited, but are not aware of any implemented solution yet: rendering of spatially-sliced data pipelines. In a large class of modeling codes, efficient solutions can be calculated by splitting the 3D domain over one axis. Instead of advancing the whole domain by an update, algorithms update a slice of the domain, e.g., from the back to the front of the 3D domain, and parallelize for the third spatial axis in _time_. Without spatially sliced rendering tools, a large number of algorithms and codes currently need to fall back to costly data output to "reconstruct" the spatial data domain that is required at once in offline visualization. Examples in laser-plasma and accelerator physics are the boosted frame technique (Graf et al., 2019; Graf et al., 2019; Graf et al., 2019) as shown in figure 1 (a more meaningful representation would transform slice-wise to the laboratory frame), the quasi-static method (Graf et al., 2018), or representations in reference trajectory space instead of time and space (Graf et al., 2019).
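One way a renderer could consume such a sliced pipeline directly is to composite each slice into an accumulation image as it is produced, so the full 3D domain never has to be reassembled; the following sketch (our own illustration, with `render_slice` as a placeholder) assumes slices arrive back-to-front and uses the standard "over" operator:

```python
import numpy as np

def composite_over(front_rgb, front_a, back_rgb, back_a):
    """Front-over-back alpha compositing with premultiplied colors."""
    rgb = front_rgb + (1.0 - front_a)[..., None] * back_rgb
    a = front_a + (1.0 - front_a) * back_a
    return rgb, a

def render_sliced_domain(slices, render_slice, shape=(1080, 1920)):
    """Accumulate an image from slices streamed back-to-front by the solver."""
    acc_rgb = np.zeros(shape + (3,))
    acc_a = np.zeros(shape)
    for s in slices:                          # each slice is rendered as soon as it exists
        rgb, a = render_slice(s)              # per-slice color image and opacity
        acc_rgb, acc_a = composite_over(rgb, a, acc_rgb, acc_a)
    return acc_rgb, acc_a
```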
We believe addressing these challenges is timely and resulting in situ visualization will provide insight and inspiration for scientists.
## Acknowledgments
This research was supported by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration. This research was supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research and Office of High Energy Physics, Scientific Discovery through Advanced Computing (SciDAC) program. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
| 科学的ハイパーコンピューティングにおける動的プロセスを可視化する作業は、膨大なデータの要求です。アプリケーションコードは、フルサイズエクスアスケマシンへのスケールを実現しており、可視化のためのハイ品質なデータを生成するには、機械規模で100TBを超える入力から単一の動画フレームを生成することができ、インシツ視覚化は、アプリケーションによって示された、メモリ内分解されたデータの消費技術です。このインシツ視覚化は、過去10年で大きな進歩を遂げていますが、アプリケーションコードとデータを生成するためのシステム規模にスケールするまでには、重要な疑問が残ります。それは、私たちが生成するものは有益でインスピレーションに満ちているのでしょうか。 |